LLM Finder
Match local LLMs to your GPU. Filter by VRAM and use case, then copy the download command.
29 models fit in 48 GB of VRAM for the research use case.
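The filter above boils down to a size comparison: keep every model whose estimated footprint fits the VRAM budget. A minimal sketch in Python, using three entries from the list below (the helper is illustrative, not the site's actual code):

```python
# A few (name, estimated size in GB, pull command) entries from the list.
MODELS = [
    ("Qwen2.5-72B-Instruct", 38.4, "ollama pull qwen2.5:72b"),
    ("Llama-3.3-70B-Instruct", 47.4, "ollama pull llama3.3:70b"),
    ("Qwen2.5-7B-Instruct", 17.9, "ollama pull qwen2.5:7b"),
]

def models_that_fit(vram_gb, models=MODELS):
    """Return (name, pull command) for every model whose estimated
    footprint fits within the given VRAM budget."""
    return [(name, cmd) for name, size_gb, cmd in models if size_gb <= vram_gb]

for name, cmd in models_that_fit(48):
    print(f"{name}: {cmd}")
```

With a 48 GB budget all three pass; drop the budget to 20 GB and only the 7B model remains.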
Data powered by llmfit

Qwen2.5-72B-Instruct
72B • q3_K_M • 38.4 GB • Family: Qwen
Estimated speed: 37-47 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:72b

Llama-3.3-70B-Instruct
70B • q4_K_M • 47.4 GB • Family: Llama
Estimated speed: 17-27 tok/s
Context: 131k
License: Llama 3.3 Community
ollama pull llama3.3:70b

Qwen2.5-Coder-32B-Instruct
32B • q8_0 • 39.6 GB • Family: Qwen
Estimated speed: 35-45 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5-coder:32b

Llama-3.1-70B-Instruct
70B • q4_K_M • 47.4 GB • Family: Llama
Estimated speed: 18-28 tok/s
Context: 131k
License: Llama 3.1 Community
ollama pull llama3.1:70b

DeepSeek-R1-Distill-Llama-70B
70B • q4_K_M • 47.4 GB • Family: DeepSeek
Estimated speed: 18-28 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:70b

Yi-1.5-34B-Chat
34B • q8_0 • 41.9 GB • Family: Yi
Estimated speed: 31-41 tok/s
Context: 33k
License: Apache-2.0
ollama pull yi:34b

Qwen2.5-32B-Instruct
32B • q8_0 • 39.6 GB • Family: Qwen
Estimated speed: 37-47 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:32b

Phi-4-14B-Instruct
14B • fp16 • 33.3 GB • Family: Phi
Estimated speed: 52-62 tok/s
Context: 128k
License: MIT
ollama pull phi4:14b

Phi-3-medium-128k-instruct
14B • fp16 • 33.3 GB • Family: Phi
Estimated speed: 53-63 tok/s
Context: 131k
License: MIT
ollama pull phi3:medium

DeepSeek-R1-Distill-Llama-8B
8B • fp16 • 20.1 GB • Family: DeepSeek
Estimated speed: 86-96 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:8b

Qwen2.5-14B-Instruct
14B • fp16 • 33.3 GB • Family: Qwen
Estimated speed: 54-64 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:14b

Mixtral-8x22B-Instruct
141B MoE • q3_K_M • 47.8 GB • Family: Mistral
Estimated speed: 22-32 tok/s
Context: 66k
License: Apache-2.0
ollama pull mixtral:8x22b

WizardLM-2-8x22B
141B MoE • q3_K_M • 47.8 GB • Family: WizardLM
Estimated speed: 22-32 tok/s
Context: 66k
License: Llama 2 Community
ollama pull wizardlm2:8x22b

Gemma-2-27B-Instruct
27B • q8_0 • 33.7 GB • Family: Gemma
Estimated speed: 54-64 tok/s
Context: 8k
License: Gemma Terms
ollama pull gemma2:27b

DeepSeek-R1-Distill-Qwen-14B
14B • fp16 • 33.3 GB • Family: DeepSeek
Estimated speed: 56-66 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:14b

Ministral-8B-Instruct
8B • fp16 • 20.1 GB • Family: Mistral
Estimated speed: 88-98 tok/s
Context: 128k
License: Mistral Research
ollama pull ministral:8b

DBRX-Instruct
132B MoE • q3_K_M • 44.7 GB • Family: Databricks
Estimated speed: 29-39 tok/s
Context: 33k
License: Databricks Open
ollama pull dbrx

Command-R-7B
7B • fp16 • 17.9 GB • Family: Cohere
Estimated speed: 94-104 tok/s
Context: 128k
License: CC-BY-NC
ollama pull command-r:7b

Mixtral-8x7B-Instruct
46.7B MoE • q5_K_M • 39.9 GB • Family: Mistral
Estimated speed: 41-51 tok/s
Context: 33k
License: Apache-2.0
ollama pull mixtral:8x7b

Mistral-7B-Instruct-v0.3
7B • fp16 • 17.9 GB • Family: Mistral
Estimated speed: 94-104 tok/s
Context: 33k
License: Apache-2.0
ollama pull mistral:7b

Yi-1.5-9B-Chat
9B • fp16 • 22.3 GB • Family: Yi
Estimated speed: 84-94 tok/s
Context: 33k
License: Apache-2.0
ollama pull yi:9b

Qwen2.5-7B-Instruct
7B • fp16 • 17.9 GB • Family: Qwen
Estimated speed: 95-105 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:7b

Llama-3.1-8B-Instruct
8B • fp16 • 20.1 GB • Family: Llama
Estimated speed: 90-100 tok/s
Context: 131k
License: Llama 3.1 Community
ollama pull llama3.1:8b

Gemma-2-9B-Instruct
9B • fp16 • 22.3 GB • Family: Gemma
Estimated speed: 85-95 tok/s
Context: 8k
License: Gemma Terms
ollama pull gemma2:9b

Neural-Chat-7B-v3.3
7B • fp16 • 17.9 GB • Family: Intel
Estimated speed: 96-106 tok/s
Context: 33k
License: Apache-2.0
ollama pull neural-chat

Phi-4-mini-instruct
3.8B • fp16 • 10.9 GB • Family: Phi
Estimated speed: 113-123 tok/s
Context: 128k
License: MIT
ollama pull phi4-mini

Command-R-35B
35B • q8_0 • 43.1 GB • Family: Cohere
Estimated speed: 37-47 tok/s
Context: 128k
License: CC-BY-NC
ollama pull command-r:35b

DeepSeek-R1-Distill-Qwen-7B
7B • fp16 • 17.9 GB • Family: DeepSeek
Estimated speed: 98-108 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:7b

Qwen2.5-3B-Instruct
3B • fp16 • 9.1 GB • Family: Qwen
Estimated speed: 122-132 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:3b
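You can sanity-check the size figures above yourself: weight footprint is roughly parameter count times bits per weight. The bits-per-weight values below are rough community figures for GGUF quants (an assumption, not exact), and the listed sizes also include runtime overhead such as the KV cache, so they run a few GB higher than the weights alone.

```python
# Rough bits-per-weight for common GGUF quant levels (approximate figures).
BITS_PER_WEIGHT = {
    "q3_K_M": 3.9,
    "q4_K_M": 4.85,
    "q5_K_M": 5.7,
    "q8_0": 8.5,
    "fp16": 16.0,
}

def est_weight_gb(params_billion, quant):
    """Approximate weight size in GB: parameters x bits per weight / 8."""
    return params_billion * BITS_PER_WEIGHT[quant] / 8

# A 70B model at q4_K_M needs roughly 42 GB for weights alone,
# before context and runtime overhead are added:
print(round(est_weight_gb(70, "q4_K_M"), 1))
```

Comparing against the list: a 70B q4_K_M entry shows 47.4 GB, consistent with ~42 GB of weights plus a few GB of overhead.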