LLM Finder

Match local LLMs to your GPU. Filter by VRAM and use case, then copy the download command.

21 models fit in 16GB for the research use case

Data powered by llmfit
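The filter described above (VRAM budget plus use-case tag) can be sketched as a simple predicate over the table's rows. A minimal sketch in Python, using three entries from the list below; the `Model` class and `fits` function are illustrative, not llmfit's actual code, and the 16 GB budget with the "research" tag is just the headline filter:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str         # model name as shown on the card
    vram_gb: float    # estimated VRAM for the listed quantization
    tags: set         # use-case tags
    command: str      # download command

# A few rows from the table (not the full 21-model list).
MODELS = [
    Model("Qwen2.5-Coder-32B-Instruct", 13.6,
          {"coding", "chat", "math", "research"},
          "ollama pull qwen2.5-coder:32b"),
    Model("Gemma-2-27B-Instruct", 15.0,
          {"chat", "coding", "research", "creative", "math"},
          "ollama pull gemma2:27b"),
    Model("Qwen2.5-3B-Instruct", 9.1,
          {"chat", "coding", "research"},
          "ollama pull qwen2.5:3b"),
]

def fits(model, vram_budget_gb, use_case):
    """Keep models whose estimated VRAM fits the budget and that list the use case."""
    return model.vram_gb <= vram_budget_gb and use_case in model.tags

matches = [m for m in MODELS if fits(m, 16.0, "research")]
for m in matches:
    print(m.name, "->", m.command)
```

All three sample rows pass the 16 GB / research filter; tightening the budget (say, to 14 GB) drops Gemma-2-27B-Instruct.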

Qwen2.5-Coder-32B-Instruct

32B · q2_K · 13.6GB VRAM

Family: Qwen

Estimated speed: 20-30 tok/s

Context: 33k

License: Apache-2.0

Tags: coding, chat, math, research
ollama pull qwen2.5-coder:32b

Command-R-35B

35B · q2_K · 14.8GB VRAM

Family: Cohere

Estimated speed: 18-28 tok/s

Context: 128k

License: CC-BY-NC

Tags: research, chat, coding
ollama pull command-r:35b

Qwen2.5-32B-Instruct

32B · q2_K · 13.6GB VRAM

Family: Qwen

Estimated speed: 21-31 tok/s

Context: 33k

License: Apache-2.0

Tags: chat, coding, research, math, creative
ollama pull qwen2.5:32b

Gemma-2-27B-Instruct

27B · q3_K_M · 15GB VRAM

Family: Gemma

Estimated speed: 18-28 tok/s

Context: 8k

License: Gemma Terms

Tags: chat, coding, research, creative, math
ollama pull gemma2:27b

Yi-1.5-34B-Chat

34B · q2_K · 14.4GB VRAM

Family: Yi

Estimated speed: 20-30 tok/s

Context: 33k

License: Apache-2.0

Tags: chat, research, creative, math
ollama pull yi:34b

DeepSeek-R1-Distill-Qwen-14B

14B · q5_K_M · 12.9GB VRAM

Family: DeepSeek

Estimated speed: 24-34 tok/s

Context: 33k

License: MIT

Tags: math, research, coding
ollama pull deepseek-r1:14b

Phi-4-14B-Instruct

14B · q5_K_M · 12.9GB VRAM

Family: Phi

Estimated speed: 25-35 tok/s

Context: 128k

License: MIT

Tags: coding, math, research, chat
ollama pull phi4:14b

Phi-3-medium-128k-instruct

14B · q5_K_M · 12.9GB VRAM

Family: Phi

Estimated speed: 25-35 tok/s

Context: 131k

License: MIT

Tags: coding, chat, research, math
ollama pull phi3:medium

DeepSeek-R1-Distill-Llama-8B

8B · q8_0 · 11.2GB VRAM

Family: DeepSeek

Estimated speed: 30-40 tok/s

Context: 33k

License: MIT

Tags: math, research, chat
ollama pull deepseek-r1:8b

Qwen2.5-14B-Instruct

14B · q5_K_M · 12.9GB VRAM

Family: Qwen

Estimated speed: 26-36 tok/s

Context: 33k

License: Apache-2.0

Tags: chat, coding, research, math
ollama pull qwen2.5:14b

DeepSeek-R1-Distill-Qwen-7B

7B · q8_0 · 10.1GB VRAM

Family: DeepSeek

Estimated speed: 33-43 tok/s

Context: 33k

License: MIT

Tags: math, research, coding
ollama pull deepseek-r1:7b

Ministral-8B-Instruct

8B · q8_0 · 11.2GB VRAM

Family: Mistral

Estimated speed: 33-43 tok/s

Context: 128k

License: Mistral Research

Tags: chat, research
ollama pull ministral:8b

Gemma-2-9B-Instruct

9B · q8_0 · 12.4GB VRAM

Family: Gemma

Estimated speed: 30-40 tok/s

Context: 8k

License: Gemma Terms

Tags: chat, research, creative
ollama pull gemma2:9b

Command-R-7B

7B · q8_0 · 10.1GB VRAM

Family: Cohere

Estimated speed: 36-46 tok/s

Context: 128k

License: CC-BY-NC

Tags: research, chat, creative
ollama pull command-r:7b

Llama-3.1-8B-Instruct

8B · q8_0 · 11.2GB VRAM

Family: Llama

Estimated speed: 33-43 tok/s

Context: 131k

License: Llama 3.1 Community

Tags: chat, research, creative
ollama pull llama3.1:8b

Mistral-7B-Instruct-v0.3

7B · q8_0 · 10.1GB VRAM

Family: Mistral

Estimated speed: 36-46 tok/s

Context: 33k

License: Apache-2.0

Tags: chat, research
ollama pull mistral:7b

Qwen2.5-7B-Instruct

7B · q8_0 · 10.1GB VRAM

Family: Qwen

Estimated speed: 37-47 tok/s

Context: 33k

License: Apache-2.0

Tags: chat, coding, research, math
ollama pull qwen2.5:7b

Yi-1.5-9B-Chat

9B · q8_0 · 12.4GB VRAM

Family: Yi

Estimated speed: 31-41 tok/s

Context: 33k

License: Apache-2.0

Tags: chat, research, creative
ollama pull yi:9b

Neural-Chat-7B-v3.3

7B · q8_0 · 10.1GB VRAM

Family: Intel

Estimated speed: 38-48 tok/s

Context: 33k

License: Apache-2.0

Tags: chat, research
ollama pull neural-chat

Phi-4-mini-instruct

3.8B · fp16 · 10.9GB VRAM

Family: Phi

Estimated speed: 37-47 tok/s

Context: 128k

License: MIT

Tags: chat, coding, research
ollama pull phi4-mini

Qwen2.5-3B-Instruct

3B · fp16 · 9.1GB VRAM

Family: Qwen

Estimated speed: 44-54 tok/s

Context: 33k

License: Apache-2.0

Tags: chat, coding, research
ollama pull qwen2.5:3b
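The VRAM figures in the cards roughly track the weight footprint for the listed quantization (parameters × bits per weight ÷ 8) plus runtime overhead such as the KV cache and inference buffers. A hedged sketch of that arithmetic; the bits-per-weight table and the flat 3 GB overhead are ballpark assumptions chosen to land near the figures above, not llmfit's actual estimator:

```python
# Approximate effective bits per weight for common llama.cpp
# quantizations (ballpark figures, not exact format constants).
BITS_PER_WEIGHT = {
    "q2_K": 2.6,
    "q3_K_M": 3.9,
    "q5_K_M": 5.7,
    "q8_0": 8.5,
    "fp16": 16.0,
}

def estimate_vram_gb(params_billions, quant, overhead_gb=3.0):
    """Weight footprint plus a flat overhead for KV cache and
    runtime buffers (the overhead value is an assumption)."""
    weights_gb = params_billions * BITS_PER_WEIGHT[quant] / 8
    return round(weights_gb + overhead_gb, 1)

# A 32B model at q2_K lands near the ~13.6GB shown above.
print(estimate_vram_gb(32, "q2_K"))
```

Real overhead grows with context length and batch size, which is why the listed figures don't fit a single flat constant exactly.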