LLM Finder
Match local LLMs to your GPU. Filter by VRAM and use case, then copy the download command.
22 models fit in 12GB for coding.
Data powered by llmfit

Phi-4-14B-Instruct
14B • q4_K_M • 10.4GB • Family: Phi
Estimated speed: 21-31 tok/s
Context: 128k
License: MIT
ollama pull phi4:14b

Phi-3-medium-128k-instruct
14B • q4_K_M • 10.4GB • Family: Phi
Estimated speed: 22-32 tok/s
Context: 131k
License: MIT
ollama pull phi3:medium

Mistral-Nemo-12B-Instruct
12B • q5_K_M • 11.2GB • Family: Mistral
Estimated speed: 21-31 tok/s
Context: 128k
License: Apache-2.0
ollama pull mistral-nemo:12b

Qwen2.5-Coder-14B-Instruct
14B • q4_K_M • 10.4GB • Family: Qwen
Estimated speed: 23-33 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5-coder:14b

Qwen2.5-14B-Instruct
14B • q4_K_M • 10.4GB • Family: Qwen
Estimated speed: 23-33 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:14b

Gemma-2-27B-Instruct
27B • q2_K • 11.6GB • Family: Gemma
Estimated speed: 22-32 tok/s
Context: 8k
License: Gemma Terms
ollama pull gemma2:27b

DeepSeek-R1-Distill-Qwen-14B
14B • q4_K_M • 10.4GB • Family: DeepSeek
Estimated speed: 25-35 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:14b

Qwen2.5-Coder-7B-Instruct
7B • q8_0 • 10.1GB • Family: Qwen
Estimated speed: 27-37 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5-coder:7b

Dolphin-2.9.2-Qwen2-7B
7B • q8_0 • 10.1GB • Family: Dolphin
Estimated speed: 27-37 tok/s
Context: 33k
License: Apache-2.0
ollama pull dolphin3:8b

Qwen2.5-7B-Instruct
7B • q8_0 • 10.1GB • Family: Qwen
Estimated speed: 27-37 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:7b

CodeLlama-7B-Instruct
7B • q8_0 • 10.1GB • Family: CodeLlama
Estimated speed: 29-39 tok/s
Context: 16k
License: Llama 2 Community
ollama pull codellama:7b

Phi-4-mini-instruct
3.8B • fp16 • 10.9GB • Family: Phi
Estimated speed: 27-37 tok/s
Context: 128k
License: MIT
ollama pull phi4-mini

DeepSeek-R1-Distill-Qwen-7B
7B • q8_0 • 10.1GB • Family: DeepSeek
Estimated speed: 30-40 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:7b

Phi-3-mini-4k-instruct
3.8B • fp16 • 10.9GB • Family: Phi
Estimated speed: 29-39 tok/s
Context: 4k
License: MIT
ollama pull phi3:mini

DeepSeek-Coder-6.7B-Instruct
6.7B • q8_0 • 9.7GB • Family: DeepSeek
Estimated speed: 33-43 tok/s
Context: 16k
License: DeepSeek License
ollama pull deepseek-coder:6.7b

WizardLM-2-7B
7B • q8_0 • 10.1GB • Family: WizardLM
Estimated speed: 33-43 tok/s
Context: 33k
License: Llama 2 Community
ollama pull wizardlm2:7b

StarCoder2-15B-Instruct
15B • q4_K_M • 11.1GB • Family: StarCoder
Estimated speed: 30-40 tok/s
Context: 16k
License: OpenRAIL-M
ollama pull starcoder2:15b

Qwen2.5-3B-Instruct
3B • fp16 • 9.1GB • Family: Qwen
Estimated speed: 36-46 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:3b

CodeGemma-7B-Instruct
7B • q8_0 • 10.1GB • Family: Gemma
Estimated speed: 33-43 tok/s
Context: 8k
License: Gemma Terms
ollama pull codegemma:7b

CodeLlama-13B-Instruct
13B • q4_K_M • 9.8GB • Family: CodeLlama
Estimated speed: 35-45 tok/s
Context: 16k
License: Llama 2 Community
ollama pull codellama:13b

StarCoder2-7B-Instruct
7B • q8_0 • 10.1GB • Family: StarCoder
Estimated speed: 37-47 tok/s
Context: 16k
License: OpenRAIL-M
ollama pull starcoder2:7b

StarCoder2-3B-Instruct
3B • fp16 • 9.1GB • Family: StarCoder
Estimated speed: 43-53 tok/s
Context: 16k
License: OpenRAIL-M
ollama pull starcoder2:3b
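The size and speed figures in the list come from the tool, but the underlying arithmetic is simple to sketch: weights occupy roughly params × bits-per-weight / 8 bytes, and decode speed is memory-bound, so tokens per second is bounded by memory bandwidth divided by model size. A minimal sketch follows; the bits-per-weight table, the 1.5 GB overhead term, and the 360 GB/s bandwidth default are illustrative assumptions, not llmfit's actual constants.

```python
# Rough VRAM-fit and decode-speed estimates for quantized models.
# The bits-per-weight figures and the overhead constant are assumptions
# chosen for illustration; real quant formats vary by tensor mix.
BITS_PER_WEIGHT = {
    "q2_K": 2.6,
    "q4_K_M": 4.85,
    "q5_K_M": 5.7,
    "q8_0": 8.5,
    "fp16": 16.0,
}

def weights_gb(params_billion: float, quant: str) -> float:
    """Approximate in-VRAM size of the weights alone."""
    return params_billion * BITS_PER_WEIGHT[quant] / 8

def fits(params_billion: float, quant: str,
         vram_gb: float = 12.0, overhead_gb: float = 1.5) -> bool:
    """Does the model fit once KV cache + runtime overhead are budgeted?"""
    return weights_gb(params_billion, quant) + overhead_gb <= vram_gb

def tok_per_s_ceiling(params_billion: float, quant: str,
                      mem_bandwidth_gbps: float = 360.0) -> float:
    """Decode streams every weight once per token, so tok/s is roughly
    bounded by memory bandwidth / model size."""
    return mem_bandwidth_gbps / weights_gb(params_billion, quant)

if __name__ == "__main__":
    for name, params, quant in [("phi4:14b", 14, "q4_K_M"),
                                ("gemma2:27b", 27, "q2_K"),
                                ("qwen2.5-coder:7b", 7, "q8_0")]:
        print(f"{name}: ~{weights_gb(params, quant):.1f} GB weights, "
              f"fits 12GB: {fits(params, quant)}, "
              f"~{tok_per_s_ceiling(params, quant):.0f} tok/s ceiling")
```

This also shows why the list pairs sizes and quants the way it does: 27B only squeezes into 12GB at q2_K, while 7B models have room for q8_0. The bandwidth-based number is a ceiling; real throughput (as in the ranges above) is lower once attention and KV-cache traffic are paid.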