LLM Finder

Match local LLMs to your GPU. Filter by VRAM and use case, then copy the download command.

29 models fit in 24GB of VRAM for coding

Data powered by llmfit
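The sizes listed below follow from the parameter count and the quantization format: weight memory is roughly parameters × bits-per-weight ÷ 8, plus some headroom for the KV cache and runtime buffers. The sketch below shows that arithmetic; the bits-per-weight figures are rough approximations for common GGUF quant types, and the flat overhead is an assumption, so real file sizes will differ from these estimates.

```python
# Rough VRAM estimate for a quantized model. Bits-per-weight values are
# approximations for common GGUF quant formats, not exact figures.
BITS_PER_WEIGHT = {
    "q2_K": 2.6, "q3_K_M": 3.9, "q4_K_M": 4.8,
    "q5_K_M": 5.7, "q8_0": 8.5, "fp16": 16.0,
}

def estimate_vram_gb(params_billion: float, quant: str,
                     overhead_gb: float = 1.5) -> float:
    """Weights (params * bits / 8) plus a flat allowance for KV cache/buffers."""
    weights_gb = params_billion * BITS_PER_WEIGHT[quant] / 8
    return round(weights_gb + overhead_gb, 1)

# e.g. a 32B model at q4_K_M:
print(estimate_vram_gb(32, "q4_K_M"))  # 20.7 (rough; real GGUF files differ)
```

This is only a sanity check for whether a model is in the right ballpark for your card; longer contexts grow the KV cache well beyond the flat overhead used here.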

Qwen2.5-Coder-32B-Instruct

32B q4_K_M • 22.3GB

Family: Qwen

Estimated speed: 19-29 tok/s

Context: 33k

License: Apache-2.0

Use cases: coding, chat, math, research
ollama pull qwen2.5-coder:32b

Nous-Hermes-2-Mixtral-8x7B-DPO

46.7B MoE q2_K • 19.6GB

Family: Nous

Estimated speed: 26-36 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, creative, coding
ollama pull nous-hermes2-mixtral

Qwen2.5-32B-Instruct

32B q4_K_M • 22.3GB

Family: Qwen

Estimated speed: 20-30 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math, creative
ollama pull qwen2.5:32b

Phi-4-14B-Instruct

14B q8_0 • 18.3GB

Family: Phi

Estimated speed: 31-41 tok/s

Context: 128k

License: MIT

Use cases: coding, math, research, chat
ollama pull phi4:14b

Phi-3-medium-128k-instruct

14B q8_0 • 18.3GB

Family: Phi

Estimated speed: 32-42 tok/s

Context: 131k

License: MIT

Use cases: coding, chat, research, math
ollama pull phi3:medium

Mistral-Nemo-12B-Instruct

12B q8_0 • 16GB

Family: Mistral

Estimated speed: 38-48 tok/s

Context: 128k

License: Apache-2.0

Use cases: chat, coding, creative
ollama pull mistral-nemo:12b

Qwen2.5-Coder-14B-Instruct

14B q8_0 • 18.3GB

Family: Qwen

Estimated speed: 32-42 tok/s

Context: 33k

License: Apache-2.0

Use cases: coding, chat, math
ollama pull qwen2.5-coder:14b

Qwen2.5-14B-Instruct

14B q8_0 • 18.3GB

Family: Qwen

Estimated speed: 33-43 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math
ollama pull qwen2.5:14b

Gemma-2-27B-Instruct

27B q5_K_M • 23.5GB

Family: Gemma

Estimated speed: 22-32 tok/s

Context: 8k

License: Gemma Terms

Use cases: chat, coding, research, creative, math
ollama pull gemma2:27b

DeepSeek-R1-Distill-Qwen-14B

14B q8_0 • 18.3GB

Family: DeepSeek

Estimated speed: 34-44 tok/s

Context: 33k

License: MIT

Use cases: math, research, coding
ollama pull deepseek-r1:14b

Mixtral-8x7B-Instruct

46.7B MoE q2_K • 19.6GB

Family: Mistral

Estimated speed: 32-42 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, creative
ollama pull mixtral:8x7b

Qwen2.5-Coder-7B-Instruct

7B fp16 • 17.9GB

Family: Qwen

Estimated speed: 37-47 tok/s

Context: 33k

License: Apache-2.0

Use cases: coding, chat
ollama pull qwen2.5-coder:7b

Dolphin-2.9.2-Qwen2-7B

7B fp16 • 17.9GB

Family: Dolphin

Estimated speed: 37-47 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, creative, coding
ollama pull dolphin3:8b

DeepSeek-Coder-33B-Instruct

33B q4_K_M • 23GB

Family: DeepSeek

Estimated speed: 25-35 tok/s

Context: 16k

License: DeepSeek License

Use cases: coding, chat, math
ollama pull deepseek-coder:33b

Qwen2.5-7B-Instruct

7B fp16 • 17.9GB

Family: Qwen

Estimated speed: 37-47 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math
ollama pull qwen2.5:7b

CodeLlama-7B-Instruct

7B fp16 • 17.9GB

Family: CodeLlama

Estimated speed: 39-49 tok/s

Context: 16k

License: Llama 2 Community

Use cases: coding
ollama pull codellama:7b

Phi-4-mini-instruct

3.8B fp16 • 10.9GB

Family: Phi

Estimated speed: 56-66 tok/s

Context: 128k

License: MIT

Use cases: chat, coding, research
ollama pull phi4-mini

Command-R-35B

35B q3_K_M • 19.2GB

Family: Cohere

Estimated speed: 36-46 tok/s

Context: 128k

License: CC-BY-NC

Use cases: research, chat, coding
ollama pull command-r:35b

CodeLlama-34B-Instruct

34B q4_K_M • 23.6GB

Family: CodeLlama

Estimated speed: 28-38 tok/s

Context: 16k

License: Llama 2 Community

Use cases: coding, math
ollama pull codellama:34b

DeepSeek-R1-Distill-Qwen-7B

7B fp16 • 17.9GB

Family: DeepSeek

Estimated speed: 40-50 tok/s

Context: 33k

License: MIT

Use cases: math, research, coding
ollama pull deepseek-r1:7b

Phi-3-mini-4k-instruct

3.8B fp16 • 10.9GB

Family: Phi

Estimated speed: 57-67 tok/s

Context: 4k

License: MIT

Use cases: chat, coding
ollama pull phi3:mini

DeepSeek-Coder-6.7B-Instruct

6.7B fp16 • 17.2GB

Family: DeepSeek

Estimated speed: 44-54 tok/s

Context: 16k

License: DeepSeek License

Use cases: coding, chat
ollama pull deepseek-coder:6.7b

WizardLM-2-7B

7B fp16 • 17.9GB

Family: WizardLM

Estimated speed: 43-53 tok/s

Context: 33k

License: Llama 2 Community

Use cases: chat, coding
ollama pull wizardlm2:7b

StarCoder2-15B-Instruct

15B q8_0 • 19.5GB

Family: StarCoder

Estimated speed: 39-49 tok/s

Context: 16k

License: OpenRAIL-M

Use cases: coding, math
ollama pull starcoder2:15b

Qwen2.5-3B-Instruct

3B fp16 • 9.1GB

Family: Qwen

Estimated speed: 65-75 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research
ollama pull qwen2.5:3b

CodeGemma-7B-Instruct

7B fp16 • 17.9GB

Family: Gemma

Estimated speed: 43-53 tok/s

Context: 8k

License: Gemma Terms

Use cases: coding, chat
ollama pull codegemma:7b

CodeLlama-13B-Instruct

13B q8_0 • 17.1GB

Family: CodeLlama

Estimated speed: 47-57 tok/s

Context: 16k

License: Llama 2 Community

Use cases: coding, math
ollama pull codellama:13b

StarCoder2-7B-Instruct

7B fp16 • 17.9GB

Family: StarCoder

Estimated speed: 47-57 tok/s

Context: 16k

License: OpenRAIL-M

Use cases: coding
ollama pull starcoder2:7b

StarCoder2-3B-Instruct

3B fp16 • 9.1GB

Family: StarCoder

Estimated speed: 72-82 tok/s

Context: 16k

License: OpenRAIL-M

Use cases: coding
ollama pull starcoder2:3b
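The filter the tool applies can be sketched as a simple size-and-tag check. This is a minimal illustration using a few entries copied from the list above; the data structure and function name are made up for the example, not part of the tool.

```python
# A few (name, size in GB, use-case tags) entries taken from the list above.
MODELS = [
    ("Qwen2.5-Coder-32B-Instruct", 22.3, {"coding", "chat", "math", "research"}),
    ("Phi-4-14B-Instruct", 18.3, {"coding", "math", "research", "chat"}),
    ("Gemma-2-27B-Instruct", 23.5, {"chat", "coding", "research", "creative", "math"}),
    ("StarCoder2-3B-Instruct", 9.1, {"coding"}),
]

def find_models(vram_gb: float, use_case: str) -> list[str]:
    """Keep models whose quantized size fits the VRAM budget and whose
    tags include the requested use case."""
    return [name for name, size, tags in MODELS
            if size <= vram_gb and use_case in tags]

print(find_models(12, "coding"))  # ['StarCoder2-3B-Instruct']
print(find_models(24, "coding"))  # all four entries fit 24GB
```

The listed sizes are file sizes for the quantized weights, so in practice you would want a margin below your card's total VRAM for KV cache and display overhead.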