LLM Finder
Match local LLMs to your GPU. Filter by VRAM + use case, then copy the download command.
35 models fit 8GB VRAM (use case: All)
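The "fits your GPU" filter behind this listing can be sketched as a back-of-the-envelope calculation: estimate the quantized file size from parameter count and bits per weight, then compare against a VRAM budget. The bits-per-weight figures below are rough assumptions for illustration (real llama.cpp quants vary, and the KV cache adds context-dependent overhead), so the estimates will not exactly match the sizes listed below.

```python
# Rough VRAM-fit check. The bits-per-weight values are assumed averages,
# not exact llama.cpp figures; treat results as estimates only.

BITS_PER_WEIGHT = {
    "q2_K": 3.35,
    "q3_K_M": 3.9,
    "q4_K_M": 4.85,
    "q5_K_M": 5.7,
    "q8_0": 8.5,
    "fp16": 16.0,
}

def estimated_size_gb(params_billion: float, quant: str) -> float:
    """Approximate quantized model size in GB."""
    bits = BITS_PER_WEIGHT[quant]
    return params_billion * 1e9 * bits / 8 / 1e9

def fits(params_billion: float, quant: str, vram_gb: float,
         overhead_gb: float = 1.0) -> bool:
    """True if the model plus a fixed overhead allowance fits in VRAM."""
    return estimated_size_gb(params_billion, quant) + overhead_gb <= vram_gb

print(round(estimated_size_gb(14, "q2_K"), 1))  # ≈ 5.9 under these assumptions
print(fits(14, "q2_K", 8.0))                    # True
```

This matches the pattern visible in the cards: as VRAM stays fixed at 8GB, larger models appear only at more aggressive quants (14B at q2_K, 7B at q5_K_M, 3B at q8_0).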
Data powered by llmfit

DeepSeek-R1-Distill-Qwen-14B
14B • q2_K • 6.4GB • Family: DeepSeek
Estimated speed: 20-30 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:14b

Phi-4-14B-Instruct
14B • q2_K • 6.4GB • Family: Phi
Estimated speed: 21-31 tok/s
Context: 128k
License: MIT
ollama pull phi4:14b

Qwen2.5-Coder-14B-Instruct
14B • q2_K • 6.4GB • Family: Qwen
Estimated speed: 21-31 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5-coder:14b

Phi-3-medium-128k-instruct
14B • q2_K • 6.4GB • Family: Phi
Estimated speed: 22-32 tok/s
Context: 131k
License: MIT
ollama pull phi3:medium

StarCoder2-15B-Instruct
15B • q2_K • 6.8GB • Family: StarCoder
Estimated speed: 21-31 tok/s
Context: 16k
License: OpenRAIL-M
ollama pull starcoder2:15b

Mistral-Nemo-12B-Instruct
12B • q3_K_M • 7.2GB • Family: Mistral
Estimated speed: 21-31 tok/s
Context: 128k
License: Apache-2.0
ollama pull mistral-nemo:12b

Qwen2.5-14B-Instruct
14B • q2_K • 6.4GB • Family: Qwen
Estimated speed: 23-33 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:14b

DeepSeek-R1-Distill-Llama-8B
8B • q5_K_M • 8GB • Family: DeepSeek
Estimated speed: 21-31 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:8b

DeepSeek-R1-Distill-Qwen-7B
7B • q5_K_M • 7.1GB • Family: DeepSeek
Estimated speed: 22-32 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:7b

Qwen2.5-Coder-7B-Instruct
7B • q5_K_M • 7.1GB • Family: Qwen
Estimated speed: 23-33 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5-coder:7b

CodeLlama-13B-Instruct
13B • q3_K_M • 7.8GB • Family: CodeLlama
Estimated speed: 23-33 tok/s
Context: 16k
License: Llama 2 Community
ollama pull codellama:13b

Gemma-2-9B-Instruct
9B • q4_K_M • 7.1GB • Family: Gemma
Estimated speed: 24-34 tok/s
Context: 8k
License: Gemma Terms
ollama pull gemma2:9b

Ministral-8B-Instruct
8B • q5_K_M • 8GB • Family: Mistral
Estimated speed: 24-34 tok/s
Context: 128k
License: Mistral Research
ollama pull ministral:8b

DeepSeek-Coder-6.7B-Instruct
6.7B • q5_K_M • 6.9GB • Family: DeepSeek
Estimated speed: 24-34 tok/s
Context: 16k
License: DeepSeek License
ollama pull deepseek-coder:6.7b

Command-R-7B
7B • q5_K_M • 7.1GB • Family: Cohere
Estimated speed: 24-34 tok/s
Context: 128k
License: CC-BY-NC
ollama pull command-r:7b

Llama-3.1-8B-Instruct
8B • q5_K_M • 8GB • Family: Llama
Estimated speed: 24-34 tok/s
Context: 131k
License: Llama 3.1 Community
ollama pull llama3.1:8b

Mistral-7B-Instruct-v0.3
7B • q5_K_M • 7.1GB • Family: Mistral
Estimated speed: 24-34 tok/s
Context: 33k
License: Apache-2.0
ollama pull mistral:7b

Dolphin-2.9.2-Qwen2-7B
7B • q5_K_M • 7.1GB • Family: Dolphin
Estimated speed: 24-34 tok/s
Context: 33k
License: Apache-2.0
ollama pull dolphin3:8b

Qwen2.5-7B-Instruct
7B • q5_K_M • 7.1GB • Family: Qwen
Estimated speed: 25-35 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:7b

CodeGemma-7B-Instruct
7B • q5_K_M • 7.1GB • Family: Gemma
Estimated speed: 25-35 tok/s
Context: 8k
License: Gemma Terms
ollama pull codegemma:7b

Yi-1.5-9B-Chat
9B • q4_K_M • 7.1GB • Family: Yi
Estimated speed: 25-35 tok/s
Context: 33k
License: Apache-2.0
ollama pull yi:9b

WizardLM-2-7B
7B • q5_K_M • 7.1GB • Family: WizardLM
Estimated speed: 25-35 tok/s
Context: 33k
License: Llama 2 Community
ollama pull wizardlm2:7b

StarCoder2-7B-Instruct
7B • q5_K_M • 7.1GB • Family: StarCoder
Estimated speed: 25-35 tok/s
Context: 16k
License: OpenRAIL-M
ollama pull starcoder2:7b

Llama-3-8B-Instruct
8B • q5_K_M • 8GB • Family: Llama
Estimated speed: 25-35 tok/s
Context: 8k
License: Llama 3 Community
ollama pull llama3:8b

OpenChat-3.6-8B
8B • q5_K_M • 8GB • Family: OpenChat
Estimated speed: 25-35 tok/s
Context: 8k
License: Apache-2.0
ollama pull openchat:8b

OpenHermes-2.5-Mistral-7B
7B • q5_K_M • 7.1GB • Family: Nous
Estimated speed: 26-36 tok/s
Context: 33k
License: Apache-2.0
ollama pull openhermes

Zephyr-7B-beta
7B • q5_K_M • 7.1GB • Family: Zephyr
Estimated speed: 26-36 tok/s
Context: 33k
License: MIT
ollama pull zephyr:7b

CodeLlama-7B-Instruct
7B • q5_K_M • 7.1GB • Family: CodeLlama
Estimated speed: 26-36 tok/s
Context: 16k
License: Llama 2 Community
ollama pull codellama:7b

Neural-Chat-7B-v3.3
7B • q5_K_M • 7.1GB • Family: Intel
Estimated speed: 26-36 tok/s
Context: 33k
License: Apache-2.0
ollama pull neural-chat

Phi-4-mini-instruct
3.8B • q8_0 • 6.3GB • Family: Phi
Estimated speed: 28-38 tok/s
Context: 128k
License: MIT
ollama pull phi4-mini

Phi-3-mini-4k-instruct
3.8B • q8_0 • 6.3GB • Family: Phi
Estimated speed: 30-40 tok/s
Context: 4k
License: MIT
ollama pull phi3:mini

Llama-3.2-3B-Instruct
3B • q8_0 • 5.3GB • Family: Llama
Estimated speed: 33-43 tok/s
Context: 131k
License: Llama 3.2 Community
ollama pull llama3.2:3b

StarCoder2-3B-Instruct
3B • q8_0 • 5.3GB • Family: StarCoder
Estimated speed: 33-43 tok/s
Context: 16k
License: OpenRAIL-M
ollama pull starcoder2:3b

Qwen2.5-3B-Instruct
3B • q8_0 • 5.3GB • Family: Qwen
Estimated speed: 34-44 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:3b

Gemma-2-2B-Instruct
2B • fp16 • 6.9GB • Family: Gemma
Estimated speed: 31-41 tok/s
Context: 8k
License: Gemma Terms
ollama pull gemma2:2b