LLM Finder
Match local LLMs to your GPU. Filter by VRAM and use case, then copy the download command.
Showing 10 models that fit in 8GB of VRAM for the math use case.
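The VRAM filter comes down to comparing a model's quantized file size, plus some runtime overhead, against the card's memory. A minimal sketch of that estimate, assuming rough bits-per-weight figures for common llama.cpp quantization levels (the exact values vary by model architecture, so treat the table as illustrative):

```python
# Rough VRAM-fit estimate for quantized GGUF models.
# Bits-per-weight values are approximate assumptions, not exact llama.cpp figures.
BITS_PER_WEIGHT = {
    "q2_K": 2.6,
    "q3_K_M": 3.9,
    "q4_K_M": 4.8,
    "q5_K_M": 5.7,
}

def estimated_size_gb(params_billion: float, quant: str) -> float:
    """File size in GB: parameters * bits-per-weight / 8 bits per byte."""
    return params_billion * BITS_PER_WEIGHT[quant] / 8

def fits(params_billion: float, quant: str, vram_gb: float, overhead_gb: float = 1.0) -> bool:
    """True if the weights plus a KV-cache/runtime allowance fit in VRAM."""
    return estimated_size_gb(params_billion, quant) + overhead_gb <= vram_gb

print(fits(7, "q5_K_M", 8))   # 7B at q5_K_M in 8GB -> True
print(fits(14, "q5_K_M", 8))  # 14B at q5_K_M in 8GB -> False
```

This is why the 14B entries below appear only at aggressive q2_K quantization while the 7B and 8B models can afford q5_K_M.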
Data powered by llmfit

Phi-4-14B-Instruct
14B • q2_K • 6.4GB • Family: Phi
Estimated speed: 21-31 tok/s
Context: 128k
License: MIT
ollama pull phi4:14b

Phi-3-medium-128k-instruct
14B • q2_K • 6.4GB • Family: Phi
Estimated speed: 22-32 tok/s
Context: 131k
License: MIT
ollama pull phi3:medium

Qwen2.5-Coder-14B-Instruct
14B • q2_K • 6.4GB • Family: Qwen
Estimated speed: 23-33 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5-coder:14b

DeepSeek-R1-Distill-Llama-8B
8B • q5_K_M • 8GB • Family: DeepSeek
Estimated speed: 21-31 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:8b

Qwen2.5-14B-Instruct
14B • q2_K • 6.4GB • Family: Qwen
Estimated speed: 23-33 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:14b

DeepSeek-R1-Distill-Qwen-14B
14B • q2_K • 6.4GB • Family: DeepSeek
Estimated speed: 25-35 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:14b

Qwen2.5-7B-Instruct
7B • q5_K_M • 7.1GB • Family: Qwen
Estimated speed: 25-35 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:7b

DeepSeek-R1-Distill-Qwen-7B
7B • q5_K_M • 7.1GB • Family: DeepSeek
Estimated speed: 28-38 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:7b

StarCoder2-15B-Instruct
15B • q2_K • 6.8GB • Family: StarCoder
Estimated speed: 31-41 tok/s
Context: 16k
License: OpenRAIL-M
ollama pull starcoder2:15b

CodeLlama-13B-Instruct
13B • q3_K_M • 7.8GB • Family: CodeLlama
Estimated speed: 32-42 tok/s
Context: 16k
License: Llama 2 Community
ollama pull codellama:13b
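The filter-and-copy workflow can be reproduced offline from the table itself: keep the entries whose quantized size fits a VRAM budget and emit the download command for each. The data below is copied from the listing; the selection logic is an assumption about how the page filters (by file size alone, ignoring runtime overhead):

```python
# (name, quantized size in GB, ollama pull tag) — copied from the listing above.
MODELS = [
    ("Phi-4-14B-Instruct", 6.4, "phi4:14b"),
    ("Phi-3-medium-128k-instruct", 6.4, "phi3:medium"),
    ("Qwen2.5-Coder-14B-Instruct", 6.4, "qwen2.5-coder:14b"),
    ("DeepSeek-R1-Distill-Llama-8B", 8.0, "deepseek-r1:8b"),
    ("Qwen2.5-14B-Instruct", 6.4, "qwen2.5:14b"),
    ("DeepSeek-R1-Distill-Qwen-14B", 6.4, "deepseek-r1:14b"),
    ("Qwen2.5-7B-Instruct", 7.1, "qwen2.5:7b"),
    ("DeepSeek-R1-Distill-Qwen-7B", 7.1, "deepseek-r1:7b"),
    ("StarCoder2-15B-Instruct", 6.8, "starcoder2:15b"),
    ("CodeLlama-13B-Instruct", 7.8, "codellama:13b"),
]

def models_that_fit(vram_gb: float):
    """Return (name, command) pairs for models whose file size fits the budget."""
    return [(name, f"ollama pull {tag}") for name, size, tag in MODELS if size <= vram_gb]

for name, cmd in models_that_fit(8.0):
    print(f"{name}: {cmd}")
```

At an 8GB budget all ten entries pass, matching the count in the header; tightening the budget to 7GB would drop the q5_K_M and q3_K_M entries and keep only the six smallest files.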