LLM Finder
Match local LLMs to your GPU. Filter by VRAM + use case, then copy the download command.
21 models fit in 16GB of VRAM for the research use case.
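The fit check behind this list is simple: the quantized GGUF file plus some runtime headroom (KV cache, CUDA/driver overhead) has to fit in your card's VRAM. The sketch below is an illustrative Python version of that filter, not llmfit's actual code; the 1.5GB overhead figure and the small sample of entries (sizes taken from the cards below) are assumptions.

```python
# Illustrative VRAM-fit filter (assumption: ~1.5GB headroom for KV cache and runtime).
MODELS = [
    # (name, ollama tag, quant, GGUF size in GB) -- sizes copied from the cards below
    ("Qwen2.5-Coder-32B-Instruct", "qwen2.5-coder:32b", "q2_K", 13.6),
    ("Phi-4-14B-Instruct", "phi4:14b", "q5_K_M", 12.9),
    ("Qwen2.5-7B-Instruct", "qwen2.5:7b", "q8_0", 10.1),
]

def fits(file_size_gb: float, vram_gb: float, overhead_gb: float = 1.5) -> bool:
    """Rough check: model weights plus cache/runtime overhead must fit in VRAM."""
    return file_size_gb + overhead_gb <= vram_gb

for name, tag, quant, size_gb in MODELS:
    if fits(size_gb, vram_gb=16):
        print(f"{name} ({quant}, {size_gb}GB)  ->  ollama pull {tag}")
```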
Data powered by llmfit

Qwen2.5-Coder-32B-Instruct
32B • q2_K • 13.6GB • Family: Qwen
Estimated speed: 20-30 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5-coder:32b

Command-R-35B
35B • q2_K • 14.8GB • Family: Cohere
Estimated speed: 18-28 tok/s
Context: 128k
License: CC-BY-NC
ollama pull command-r:35b

Qwen2.5-32B-Instruct
32B • q2_K • 13.6GB • Family: Qwen
Estimated speed: 21-31 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:32b

Gemma-2-27B-Instruct
27B • q3_K_M • 15GB • Family: Gemma
Estimated speed: 18-28 tok/s
Context: 8k
License: Gemma Terms
ollama pull gemma2:27b

Yi-1.5-34B-Chat
34B • q2_K • 14.4GB • Family: Yi
Estimated speed: 20-30 tok/s
Context: 33k
License: Apache-2.0
ollama pull yi:34b

DeepSeek-R1-Distill-Qwen-14B
14B • q5_K_M • 12.9GB • Family: DeepSeek
Estimated speed: 24-34 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:14b

Phi-4-14B-Instruct
14B • q5_K_M • 12.9GB • Family: Phi
Estimated speed: 25-35 tok/s
Context: 128k
License: MIT
ollama pull phi4:14b

Phi-3-medium-128k-instruct
14B • q5_K_M • 12.9GB • Family: Phi
Estimated speed: 25-35 tok/s
Context: 131k
License: MIT
ollama pull phi3:medium-128k

DeepSeek-R1-Distill-Llama-8B
8B • q8_0 • 11.2GB • Family: DeepSeek
Estimated speed: 30-40 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:8b

Qwen2.5-14B-Instruct
14B • q5_K_M • 12.9GB • Family: Qwen
Estimated speed: 26-36 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:14b

DeepSeek-R1-Distill-Qwen-7B
7B • q8_0 • 10.1GB • Family: DeepSeek
Estimated speed: 33-43 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:7b

Ministral-8B-Instruct
8B • q8_0 • 11.2GB • Family: Mistral
Estimated speed: 33-43 tok/s
Context: 128k
License: Mistral Research
ollama pull ministral:8b

Gemma-2-9B-Instruct
9B • q8_0 • 12.4GB • Family: Gemma
Estimated speed: 30-40 tok/s
Context: 8k
License: Gemma Terms
ollama pull gemma2:9b

Command-R-7B
7B • q8_0 • 10.1GB • Family: Cohere
Estimated speed: 36-46 tok/s
Context: 128k
License: CC-BY-NC
ollama pull command-r7b

Llama-3.1-8B-Instruct
8B • q8_0 • 11.2GB • Family: Llama
Estimated speed: 33-43 tok/s
Context: 131k
License: Llama 3.1 Community
ollama pull llama3.1:8b

Mistral-7B-Instruct-v0.3
7B • q8_0 • 10.1GB • Family: Mistral
Estimated speed: 36-46 tok/s
Context: 33k
License: Apache-2.0
ollama pull mistral:7b

Qwen2.5-7B-Instruct
7B • q8_0 • 10.1GB • Family: Qwen
Estimated speed: 37-47 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:7b

Yi-1.5-9B-Chat
9B • q8_0 • 12.4GB • Family: Yi
Estimated speed: 31-41 tok/s
Context: 33k
License: Apache-2.0
ollama pull yi:9b

Neural-Chat-7B-v3.3
7B • q8_0 • 10.1GB • Family: Intel
Estimated speed: 38-48 tok/s
Context: 33k
License: Apache-2.0
ollama pull neural-chat

Phi-4-mini-instruct
3.8B • fp16 • 10.9GB • Family: Phi
Estimated speed: 37-47 tok/s
Context: 128k
License: MIT
ollama pull phi4-mini

Qwen2.5-3B-Instruct
3B • fp16 • 9.1GB • Family: Qwen
Estimated speed: 44-54 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:3b
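
Once a model is pulled, you can drive it from code as well as from the terminal. A minimal sketch, assuming the official `ollama` Python client (`pip install ollama`) and a local Ollama server already running; the tag used is one of the entries above, and any other tag in the list works the same way.

```python
import ollama

# Equivalent to `ollama pull qwen2.5:7b` on the command line.
ollama.pull("qwen2.5:7b")

# Send a single chat turn to the locally served model and print its reply.
response = ollama.chat(
    model="qwen2.5:7b",
    messages=[{"role": "user", "content": "In two sentences, when is q8_0 worth the extra VRAM over q5_K_M?"}],
)
print(response["message"]["content"])
```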