LLM Finder
Match local LLMs to your GPU. Filter by VRAM + use case, then copy the download command.
21 models fit in 80GB of VRAM for the math use case.
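The filter described above (VRAM budget plus use case) can be sketched in a few lines. This is a minimal illustration using records copied from the listing below; the use-case tags and the filter function are assumptions for illustration, not llmfit's actual code.

```python
# Sketch of a VRAM/use-case filter over the model listing below.
# Model sizes and pull commands come from the page; the use-case
# tags are assigned here for illustration only.
from dataclasses import dataclass


@dataclass
class Model:
    name: str        # model identifier
    size_gb: float   # quantized file size in GB, as listed
    use_cases: set   # rough tags (assumption, not from the page)
    pull_cmd: str    # ollama download command, as listed


MODELS = [
    Model("Qwen2.5-72B-Instruct", 60.4, {"math", "general"}, "ollama pull qwen2.5:72b"),
    Model("Qwen2.5-Coder-32B-Instruct", 72.9, {"code"}, "ollama pull qwen2.5-coder:32b"),
    Model("Phi-4-14B-Instruct", 33.3, {"math", "general"}, "ollama pull phi4:14b"),
    Model("CodeLlama-13B-Instruct", 31.1, {"code"}, "ollama pull codellama:13b"),
]


def fit(models, vram_gb, use_case):
    """Keep models whose quantized weights fit the VRAM budget and match the use case."""
    return [m for m in models if m.size_gb <= vram_gb and use_case in m.use_cases]


for m in fit(MODELS, vram_gb=80, use_case="math"):
    print(m.name, "->", m.pull_cmd)
```

A real tool would also budget for KV cache and runtime buffers, not just the weight file, so the cutoff is usually stricter than a raw size comparison.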
Data powered by llmfit

Qwen2.5-72B-Instruct
72B • q5_K_M • 60.4GB • Family: Qwen
Estimated speed: 61-71 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:72b

Llama-3.3-70B-Instruct
70B • q5_K_M • 58.8GB • Family: Llama
Estimated speed: 65-75 tok/s
Context: 131k
License: Llama 3.3 Community
ollama pull llama3.3:70b

Qwen2.5-Coder-32B-Instruct
32B • fp16 • 72.9GB • Family: Qwen
Estimated speed: 32-42 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5-coder:32b

Llama-3.1-70B-Instruct
70B • q5_K_M • 58.8GB • Family: Llama
Estimated speed: 66-76 tok/s
Context: 131k
License: Llama 3.1 Community
ollama pull llama3.1:70b

DeepSeek-R1-Distill-Llama-70B
70B • q5_K_M • 58.8GB • Family: DeepSeek
Estimated speed: 66-76 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:70b

Yi-1.5-34B-Chat
34B • fp16 • 77.3GB • Family: Yi
Estimated speed: 22-32 tok/s
Context: 33k
License: Apache-2.0
ollama pull yi:34b

Qwen2.5-32B-Instruct
32B • fp16 • 72.9GB • Family: Qwen
Estimated speed: 33-43 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:32b

Phi-4-14B-Instruct
14B • fp16 • 33.3GB • Family: Phi
Estimated speed: 129-139 tok/s
Context: 128k
License: MIT
ollama pull phi4:14b

Phi-3-medium-128k-instruct
14B • fp16 • 33.3GB • Family: Phi
Estimated speed: 130-140 tok/s
Context: 131k
License: MIT
ollama pull phi3:medium

DeepSeek-R1-Distill-Llama-8B
8B • fp16 • 20.1GB • Family: DeepSeek
Estimated speed: 163-173 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:8b

Qwen2.5-Coder-14B-Instruct
14B • fp16 • 33.3GB • Family: Qwen
Estimated speed: 131-141 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5-coder:14b

Qwen2.5-14B-Instruct
14B • fp16 • 33.3GB • Family: Qwen
Estimated speed: 131-141 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:14b

Mixtral-8x22B-Instruct
141B MoE • q5_K_M • 75.2GB • Family: Mistral
Estimated speed: 31-41 tok/s
Context: 66k
License: Apache-2.0
ollama pull mixtral:8x22b

Gemma-2-27B-Instruct
27B • fp16 • 61.9GB • Family: Gemma
Estimated speed: 63-73 tok/s
Context: 8k
License: Gemma Terms
ollama pull gemma2:27b

DeepSeek-R1-Distill-Qwen-14B
14B • fp16 • 33.3GB • Family: DeepSeek
Estimated speed: 133-143 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:14b

DeepSeek-Coder-33B-Instruct
33B • fp16 • 75.1GB • Family: DeepSeek
Estimated speed: 34-44 tok/s
Context: 16k
License: DeepSeek License
ollama pull deepseek-coder:33b

Qwen2.5-7B-Instruct
7B • fp16 • 17.9GB • Family: Qwen
Estimated speed: 172-182 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:7b

CodeLlama-34B-Instruct
34B • fp16 • 77.3GB • Family: CodeLlama
Estimated speed: 32-42 tok/s
Context: 16k
License: Llama 2 Community
ollama pull codellama:34b

DeepSeek-R1-Distill-Qwen-7B
7B • fp16 • 17.9GB • Family: DeepSeek
Estimated speed: 175-185 tok/s
Context: 33k
License: MIT
ollama pull deepseek-r1:7b

StarCoder2-15B-Instruct
15B • fp16 • 35.5GB • Family: StarCoder
Estimated speed: 135-145 tok/s
Context: 16k
License: OpenRAIL-M
ollama pull starcoder2:15b

CodeLlama-13B-Instruct
13B • fp16 • 31.1GB • Family: CodeLlama
Estimated speed: 147-157 tok/s
Context: 16k
License: Llama 2 Community
ollama pull codellama:13b
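A rough rule of thumb for reading the size column: a quantized model's footprint is approximately parameters × bits-per-weight / 8 bytes, plus overhead. The sketch below uses an overhead factor of about 1.2, which is a guess fitted to the figures in this listing (metadata, higher-precision layers, and runtime buffers likely account for it), not a published specification.

```python
def est_size_gb(params_b: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    """Back-of-envelope model footprint in GB.

    params_b: parameter count in billions.
    bits_per_weight: e.g. ~5.5 for q5_K_M, 16 for fp16 (nominal values).
    overhead: fudge factor for metadata and runtime buffers -- an
    assumption fitted to this page's listing, not a spec.
    """
    return params_b * bits_per_weight / 8 * overhead


# 70B at q5_K_M (~5.5 bits nominal): roughly 58 GB, near the listed 58.8GB.
print(est_size_gb(70, 5.5))
# 14B at fp16 (16 bits): roughly 33.6 GB, near the listed 33.3GB.
print(est_size_gb(14, 16))
```

Estimates like this only cover weights; serving a model also needs VRAM for the KV cache, which grows with context length, so a model that "fits" on paper may still not run at full context.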