LLM Finder

Match local LLMs to your GPU. Filter by VRAM and use case, then copy the download command.

10 models fit in 8GB for math.

Data powered by llmfit
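The fit check behind a list like this can be sketched in a few lines. This is a minimal illustration, not llmfit's actual method: the bits-per-weight figures are approximate averages for llama.cpp k-quants, and `overhead_gb` is an assumed allowance for KV cache and runtime buffers.

```python
# Rough VRAM-fit estimate: quantized weight size plus a fixed overhead allowance.
# Bits-per-weight values are approximate k-quant averages (assumption, not exact).
BPW = {"q2_K": 2.6, "q3_K_M": 3.9, "q4_K_M": 4.8, "q5_K_M": 5.7}

def fits(params_b: float, quant: str, vram_gb: float, overhead_gb: float = 1.5) -> bool:
    """True if the quantized weights plus the overhead allowance fit in VRAM (GiB)."""
    weights_gb = params_b * 1e9 * BPW[quant] / 8 / 1024**3
    return weights_gb + overhead_gb <= vram_gb

# An 8B model at q5_K_M in an 8 GB card:
print(fits(8, "q5_K_M", 8.0))  # → True
```

Real runtimes vary: KV-cache size grows with context length, so a long-context session needs more headroom than this flat allowance suggests.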

Phi-4-14B-Instruct

14B · q2_K · 6.4GB

Family: Phi

Estimated speed: 21-31 tok/s

Context: 128k

License: MIT

Use cases: coding, math, research, chat
ollama pull phi4:14b

Phi-3-medium-128k-instruct

14B · q2_K · 6.4GB

Family: Phi

Estimated speed: 22-32 tok/s

Context: 131k

License: MIT

Use cases: coding, chat, research, math
ollama pull phi3:medium

Qwen2.5-Coder-14B-Instruct

14B · q2_K · 6.4GB

Family: Qwen

Estimated speed: 23-33 tok/s

Context: 33k

License: Apache-2.0

Use cases: coding, chat, math
ollama pull qwen2.5-coder:14b

DeepSeek-R1-Distill-Llama-8B

8B · q5_K_M · 8GB

Family: DeepSeek

Estimated speed: 21-31 tok/s

Context: 33k

License: MIT

Use cases: math, research, chat
ollama pull deepseek-r1:8b

Qwen2.5-14B-Instruct

14B · q2_K · 6.4GB

Family: Qwen

Estimated speed: 23-33 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math
ollama pull qwen2.5:14b

DeepSeek-R1-Distill-Qwen-14B

14B · q2_K · 6.4GB

Family: DeepSeek

Estimated speed: 25-35 tok/s

Context: 33k

License: MIT

Use cases: math, research, coding
ollama pull deepseek-r1:14b

Qwen2.5-7B-Instruct

7B · q5_K_M · 7.1GB

Family: Qwen

Estimated speed: 25-35 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math
ollama pull qwen2.5:7b

DeepSeek-R1-Distill-Qwen-7B

7B · q5_K_M · 7.1GB

Family: DeepSeek

Estimated speed: 28-38 tok/s

Context: 33k

License: MIT

Use cases: math, research, coding
ollama pull deepseek-r1:7b

StarCoder2-15B-Instruct

15B · q2_K · 6.8GB

Family: StarCoder

Estimated speed: 31-41 tok/s

Context: 16k

License: OpenRAIL-M

Use cases: coding, math
ollama pull starcoder2:15b

CodeLlama-13B-Instruct

13B · q3_K_M · 7.8GB

Family: CodeLlama

Estimated speed: 32-42 tok/s

Context: 16k

License: Llama 2 Community

Use cases: coding, math
ollama pull codellama:13b
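The estimated speeds above translate directly into how long you wait for an answer. A quick sketch (the 300-token answer length is an illustrative assumption):

```python
def response_time_s(tokens: int, tok_per_s: float) -> float:
    """Seconds to stream a completion at a given generation speed."""
    return tokens / tok_per_s

# A ~300-token answer at the low end of Phi-4's estimate (21 tok/s):
print(round(response_time_s(300, 21), 1))  # → 14.3
```

So the spread between the slowest and fastest entries here (21 vs. 42 tok/s) roughly halves the wait for the same answer.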