LLM Finder

Match local LLMs to your GPU. Filter by VRAM budget and use case, then copy the download command.

21 models fit within 80 GB for math

Data powered by llmfit
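The VRAM figure on each card roughly tracks parameter count times bits per weight, plus runtime overhead (KV cache, activations). A minimal weights-only sketch of that arithmetic; the bit-widths and the `weights_gb` helper are illustrative assumptions, not llmfit's actual formula:

```python
# Approximate bits per weight for common formats (q5_K_M is an
# assumed average for llama.cpp K-quants, not an exact spec value).
BITS_PER_WEIGHT = {
    "fp16": 16.0,
    "q8_0": 8.5,
    "q5_K_M": 5.7,
    "q4_K_M": 4.85,
}

def weights_gb(params_billion: float, quant: str) -> float:
    """GB needed just to hold the weights at the given quantization."""
    bits = BITS_PER_WEIGHT[quant]
    # 1B params at 8 bits = 1 GB, so scale by bits/8.
    return params_billion * bits / 8

# 32B at fp16: 64 GB of weights; the card lists 72.9 GB with overhead.
print(round(weights_gb(32, "fp16"), 1))
# 72B at q5_K_M: ~51 GB of weights vs. 60.4 GB listed.
print(round(weights_gb(72, "q5_K_M"), 1))
```

The gap between these weight-only numbers and the listed totals is the context-dependent overhead, which is why a model whose weights fit your card can still run out of memory at long context.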

Qwen2.5-72B-Instruct

72B · q5_K_M · 60.4 GB

Family: Qwen

Estimated speed: 61-71 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math, creative
ollama pull qwen2.5:72b

Llama-3.3-70B-Instruct

70B · q5_K_M · 58.8 GB

Family: Llama

Estimated speed: 65-75 tok/s

Context: 131k

License: Llama 3.3 Community

Use cases: chat, coding, research, creative, math
ollama pull llama3.3:70b

Qwen2.5-Coder-32B-Instruct

32B · fp16 · 72.9 GB

Family: Qwen

Estimated speed: 32-42 tok/s

Context: 33k

License: Apache-2.0

Use cases: coding, chat, math, research
ollama pull qwen2.5-coder:32b

Llama-3.1-70B-Instruct

70B · q5_K_M · 58.8 GB

Family: Llama

Estimated speed: 66-76 tok/s

Context: 131k

License: Llama 3.1 Community

Use cases: chat, research, creative, math
ollama pull llama3.1:70b

DeepSeek-R1-Distill-Llama-70B

70B · q5_K_M · 58.8 GB

Family: DeepSeek

Estimated speed: 66-76 tok/s

Context: 33k

License: MIT

Use cases: math, research, coding, chat
ollama pull deepseek-r1:70b

Yi-1.5-34B-Chat

34B · fp16 · 77.3 GB

Family: Yi

Estimated speed: 22-32 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, research, creative, math
ollama pull yi:34b

Qwen2.5-32B-Instruct

32B · fp16 · 72.9 GB

Family: Qwen

Estimated speed: 33-43 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math, creative
ollama pull qwen2.5:32b

Phi-4-14B-Instruct

14B · fp16 · 33.3 GB

Family: Phi

Estimated speed: 129-139 tok/s

Context: 128k

License: MIT

Use cases: coding, math, research, chat
ollama pull phi4:14b

Phi-3-medium-128k-instruct

14B · fp16 · 33.3 GB

Family: Phi

Estimated speed: 130-140 tok/s

Context: 131k

License: MIT

Use cases: coding, chat, research, math
ollama pull phi3:medium

DeepSeek-R1-Distill-Llama-8B

8B · fp16 · 20.1 GB

Family: DeepSeek

Estimated speed: 163-173 tok/s

Context: 33k

License: MIT

Use cases: math, research, chat
ollama pull deepseek-r1:8b

Qwen2.5-Coder-14B-Instruct

14B · fp16 · 33.3 GB

Family: Qwen

Estimated speed: 131-141 tok/s

Context: 33k

License: Apache-2.0

Use cases: coding, chat, math
ollama pull qwen2.5-coder:14b

Qwen2.5-14B-Instruct

14B · fp16 · 33.3 GB

Family: Qwen

Estimated speed: 131-141 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math
ollama pull qwen2.5:14b

Mixtral-8x22B-Instruct

141B MoE · q5_K_M · 75.2 GB

Family: Mistral

Estimated speed: 31-41 tok/s

Context: 66k

License: Apache-2.0

Use cases: chat, coding, research, creative, math
ollama pull mixtral:8x22b

Gemma-2-27B-Instruct

27B · fp16 · 61.9 GB

Family: Gemma

Estimated speed: 63-73 tok/s

Context: 8k

License: Gemma Terms

Use cases: chat, coding, research, creative, math
ollama pull gemma2:27b

DeepSeek-R1-Distill-Qwen-14B

14B · fp16 · 33.3 GB

Family: DeepSeek

Estimated speed: 133-143 tok/s

Context: 33k

License: MIT

Use cases: math, research, coding
ollama pull deepseek-r1:14b

DeepSeek-Coder-33B-Instruct

33B · fp16 · 75.1 GB

Family: DeepSeek

Estimated speed: 34-44 tok/s

Context: 16k

License: DeepSeek License

Use cases: coding, chat, math
ollama pull deepseek-coder:33b

Qwen2.5-7B-Instruct

7B · fp16 · 17.9 GB

Family: Qwen

Estimated speed: 172-182 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math
ollama pull qwen2.5:7b

CodeLlama-34B-Instruct

34B · fp16 · 77.3 GB

Family: CodeLlama

Estimated speed: 32-42 tok/s

Context: 16k

License: Llama 2 Community

Use cases: coding, math
ollama pull codellama:34b

DeepSeek-R1-Distill-Qwen-7B

7B · fp16 · 17.9 GB

Family: DeepSeek

Estimated speed: 175-185 tok/s

Context: 33k

License: MIT

Use cases: math, research, coding
ollama pull deepseek-r1:7b

StarCoder2-15B-Instruct

15B · fp16 · 35.5 GB

Family: StarCoder

Estimated speed: 135-145 tok/s

Context: 16k

License: OpenRAIL-M

Use cases: coding, math
ollama pull starcoder2:15b

CodeLlama-13B-Instruct

13B · fp16 · 31.1 GB

Family: CodeLlama

Estimated speed: 147-157 tok/s

Context: 16k

License: Llama 2 Community

Use cases: coding, math
ollama pull codellama:13b