LLM Finder

Match local LLMs to your GPU. Filter by VRAM and use case, then copy the download command.

51 models fit in 48GB of VRAM across all use cases

Data powered by llmfit
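The VRAM figures in the cards below can be sanity-checked with a back-of-the-envelope formula: weight memory is roughly parameters × bits-per-weight ÷ 8, plus runtime overhead for the KV cache and buffers. The sketch below uses an assumed 20% overhead factor and approximate bits-per-weight values (fp16 = 16, q4_K_M ≈ 4.8); it is an illustrative heuristic, not llmfit's actual estimator, so its numbers land near but not exactly on the figures listed here.

```shell
# Rough VRAM estimate in GB (illustrative heuristic, not llmfit's formula):
# weights ≈ params_B * bits / 8, scaled by ~1.2 for KV cache and buffers.
estimate_vram_gb() {
  # $1 = parameters in billions, $2 = bits per weight (fp16 = 16, q4_K_M ≈ 4.8)
  awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b / 8 * 1.2 }'
}

estimate_vram_gb 8 16    # Llama-3.1-8B at fp16: about 19 GB
estimate_vram_gb 70 4.8  # a 70B model at q4_K_M: about 50 GB
```

By this rough estimate a 70B model quantized to q4_K_M sits right at the edge of a 48GB card, which is why the 70B entries below hover just under the limit, while 7B-9B fp16 models leave plenty of headroom for long contexts.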

Llama-3.3-70B-Instruct

70B • q4_K_M • 47.4GB

Family: Llama

Estimated speed: 15-25 tok/s

Context: 131k

License: Llama 3.3 Community

Use cases: chat, coding, research, creative, math
ollama pull llama3.3:70b

DeepSeek-R1-Distill-Llama-70B

70B • q4_K_M • 47.4GB

Family: DeepSeek

Estimated speed: 15-25 tok/s

Context: 33k

License: MIT

Use cases: math, research, coding, chat
ollama pull deepseek-r1:70b

Mixtral-8x22B-Instruct

141B MoE • q3_K_M • 47.8GB

Family: Mistral

Estimated speed: 15-25 tok/s

Context: 66k

License: Apache-2.0

Use cases: chat, coding, research, creative, math
ollama pull mixtral:8x22b

WizardLM-2-8x22B

141B MoE • q3_K_M • 47.8GB

Family: WizardLM

Estimated speed: 15-25 tok/s

Context: 66k

License: Llama 2 Community

Use cases: chat, coding, research, creative
ollama pull wizardlm2:8x22b

Qwen2.5-72B-Instruct

72B • q3_K_M • 38.4GB

Family: Qwen

Estimated speed: 36-46 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math, creative
ollama pull qwen2.5:72b

Llama-3.1-70B-Instruct

70B • q4_K_M • 47.4GB

Family: Llama

Estimated speed: 16-26 tok/s

Context: 131k

License: Llama 3.1 Community

Use cases: chat, research, creative, math
ollama pull llama3.1:70b

DBRX-Instruct

132B MoE • q3_K_M • 44.7GB

Family: Databricks

Estimated speed: 22-32 tok/s

Context: 33k

License: Databricks Open

Use cases: chat, coding, research
ollama pull dbrx

Qwen2.5-Coder-32B-Instruct

32B • q8_0 • 39.6GB

Family: Qwen

Estimated speed: 34-44 tok/s

Context: 33k

License: Apache-2.0

Use cases: coding, chat, math, research
ollama pull qwen2.5-coder:32b

Mixtral-8x7B-Instruct

46.7B MoE • q5_K_M • 39.9GB

Family: Mistral

Estimated speed: 34-44 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, creative
ollama pull mixtral:8x7b

DeepSeek-Coder-33B-Instruct

33B • q8_0 • 40.7GB

Family: DeepSeek

Estimated speed: 32-42 tok/s

Context: 16k

License: DeepSeek License

Use cases: coding, chat, math
ollama pull deepseek-coder:33b

Command-R-35B

35B • q8_0 • 43.1GB

Family: Cohere

Estimated speed: 27-37 tok/s

Context: 128k

License: CC-BY-NC

Use cases: research, chat, coding
ollama pull command-r:35b

Qwen2.5-32B-Instruct

32B • q8_0 • 39.6GB

Family: Qwen

Estimated speed: 35-45 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math, creative
ollama pull qwen2.5:32b

Nous-Hermes-2-Mixtral-8x7B-DPO

46.7B MoE • q5_K_M • 39.9GB

Family: Nous

Estimated speed: 35-45 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, creative, coding
ollama pull nous-hermes2-mixtral

Gemma-2-27B-Instruct

27B • q8_0 • 33.7GB

Family: Gemma

Estimated speed: 50-60 tok/s

Context: 8k

License: Gemma Terms

Use cases: chat, coding, research, creative, math
ollama pull gemma2:27b

Yi-1.5-34B-Chat

34B • q8_0 • 41.9GB

Family: Yi

Estimated speed: 31-41 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, research, creative, math
ollama pull yi:34b

DeepSeek-R1-Distill-Qwen-14B

14B • fp16 • 33.3GB

Family: DeepSeek

Estimated speed: 52-62 tok/s

Context: 33k

License: MIT

Use cases: math, research, coding
ollama pull deepseek-r1:14b

CodeLlama-34B-Instruct

34B • q8_0 • 41.9GB

Family: CodeLlama

Estimated speed: 31-41 tok/s

Context: 16k

License: Llama 2 Community

Use cases: coding, math
ollama pull codellama:34b

Phi-4-14B-Instruct

14B • fp16 • 33.3GB

Family: Phi

Estimated speed: 52-62 tok/s

Context: 128k

License: MIT

Use cases: coding, math, research, chat
ollama pull phi4:14b

Qwen2.5-Coder-14B-Instruct

14B • fp16 • 33.3GB

Family: Qwen

Estimated speed: 53-63 tok/s

Context: 33k

License: Apache-2.0

Use cases: coding, chat, math
ollama pull qwen2.5-coder:14b

Phi-3-medium-128k-instruct

14B • fp16 • 33.3GB

Family: Phi

Estimated speed: 53-63 tok/s

Context: 131k

License: MIT

Use cases: coding, chat, research, math
ollama pull phi3:medium

Mistral-Nemo-12B-Instruct

12B • fp16 • 28.9GB

Family: Mistral

Estimated speed: 64-74 tok/s

Context: 128k

License: Apache-2.0

Use cases: chat, coding, creative
ollama pull mistral-nemo:12b

StarCoder2-15B-Instruct

15B • fp16 • 35.5GB

Family: StarCoder

Estimated speed: 48-58 tok/s

Context: 16k

License: OpenRAIL-M

Use cases: coding, math
ollama pull starcoder2:15b

DeepSeek-R1-Distill-Llama-8B

8B • fp16 • 20.1GB

Family: DeepSeek

Estimated speed: 86-96 tok/s

Context: 33k

License: MIT

Use cases: math, research, chat
ollama pull deepseek-r1:8b

Qwen2.5-14B-Instruct

14B • fp16 • 33.3GB

Family: Qwen

Estimated speed: 54-64 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math
ollama pull qwen2.5:14b

DeepSeek-R1-Distill-Qwen-7B

7B • fp16 • 17.9GB

Family: DeepSeek

Estimated speed: 91-101 tok/s

Context: 33k

License: MIT

Use cases: math, research, coding
ollama pull deepseek-r1:7b

Qwen2.5-Coder-7B-Instruct

7B • fp16 • 17.9GB

Family: Qwen

Estimated speed: 93-103 tok/s

Context: 33k

License: Apache-2.0

Use cases: coding, chat
ollama pull qwen2.5-coder:7b

CodeLlama-13B-Instruct

13B • fp16 • 31.1GB

Family: CodeLlama

Estimated speed: 61-71 tok/s

Context: 16k

License: Llama 2 Community

Use cases: coding, math
ollama pull codellama:13b

Ministral-8B-Instruct

8B • fp16 • 20.1GB

Family: Mistral

Estimated speed: 88-98 tok/s

Context: 128k

License: Mistral Research

Use cases: chat, research
ollama pull ministral:8b

Gemma-2-9B-Instruct

9B • fp16 • 22.3GB

Family: Gemma

Estimated speed: 83-93 tok/s

Context: 8k

License: Gemma Terms

Use cases: chat, research, creative
ollama pull gemma2:9b

DeepSeek-Coder-6.7B-Instruct

6.7B • fp16 • 17.2GB

Family: DeepSeek

Estimated speed: 96-106 tok/s

Context: 16k

License: DeepSeek License

Use cases: coding, chat
ollama pull deepseek-coder:6.7b

Command-R-7B

7B • fp16 • 17.9GB

Family: Cohere

Estimated speed: 94-104 tok/s

Context: 128k

License: CC-BY-NC

Use cases: research, chat, creative
ollama pull command-r:7b

Llama-3.1-8B-Instruct

8B • fp16 • 20.1GB

Family: Llama

Estimated speed: 89-99 tok/s

Context: 131k

License: Llama 3.1 Community

Use cases: chat, research, creative
ollama pull llama3.1:8b

Mistral-7B-Instruct-v0.3

7B • fp16 • 17.9GB

Family: Mistral

Estimated speed: 94-104 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, research
ollama pull mistral:7b

Dolphin-2.9.2-Qwen2-7B

7B • fp16 • 17.9GB

Family: Dolphin

Estimated speed: 94-104 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, creative, coding
ollama pull dolphin3:8b

Qwen2.5-7B-Instruct

7B • fp16 • 17.9GB

Family: Qwen

Estimated speed: 95-105 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research, math
ollama pull qwen2.5:7b

CodeGemma-7B-Instruct

7B • fp16 • 17.9GB

Family: Gemma

Estimated speed: 95-105 tok/s

Context: 8k

License: Gemma Terms

Use cases: coding, chat
ollama pull codegemma:7b

WizardLM-2-7B

7B • fp16 • 17.9GB

Family: WizardLM

Estimated speed: 95-105 tok/s

Context: 33k

License: Llama 2 Community

Use cases: chat, coding
ollama pull wizardlm2:7b

Yi-1.5-9B-Chat

9B • fp16 • 22.3GB

Family: Yi

Estimated speed: 84-94 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, research, creative
ollama pull yi:9b

StarCoder2-7B-Instruct

7B • fp16 • 17.9GB

Family: StarCoder

Estimated speed: 95-105 tok/s

Context: 16k

License: OpenRAIL-M

Use cases: coding
ollama pull starcoder2:7b

Llama-3-8B-Instruct

8B • fp16 • 20.1GB

Family: Llama

Estimated speed: 90-100 tok/s

Context: 8k

License: Llama 3 Community

Use cases: chat, creative
ollama pull llama3:8b

OpenChat-3.6-8B

8B • fp16 • 20.1GB

Family: OpenChat

Estimated speed: 90-100 tok/s

Context: 8k

License: Apache-2.0

Use cases: chat, creative
ollama pull openchat:8b

OpenHermes-2.5-Mistral-7B

7B • fp16 • 17.9GB

Family: Nous

Estimated speed: 95-105 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, creative
ollama pull openhermes

Zephyr-7B-beta

7B • fp16 • 17.9GB

Family: Zephyr

Estimated speed: 96-106 tok/s

Context: 33k

License: MIT

Use cases: chat, creative
ollama pull zephyr:7b

CodeLlama-7B-Instruct

7B • fp16 • 17.9GB

Family: CodeLlama

Estimated speed: 96-106 tok/s

Context: 16k

License: Llama 2 Community

Use cases: coding
ollama pull codellama:7b

Neural-Chat-7B-v3.3

7B • fp16 • 17.9GB

Family: Intel

Estimated speed: 96-106 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, research
ollama pull neural-chat

Phi-4-mini-instruct

3.8B • fp16 • 10.9GB

Family: Phi

Estimated speed: 113-123 tok/s

Context: 128k

License: MIT

Use cases: chat, coding, research
ollama pull phi4-mini

Phi-3-mini-4k-instruct

3.8B • fp16 • 10.9GB

Family: Phi

Estimated speed: 115-125 tok/s

Context: 4k

License: MIT

Use cases: chat, coding
ollama pull phi3:mini

Llama-3.2-3B-Instruct

3B • fp16 • 9.1GB

Family: Llama

Estimated speed: 120-130 tok/s

Context: 131k

License: Llama 3.2 Community

Use cases: chat, creative
ollama pull llama3.2:3b

StarCoder2-3B-Instruct

3B • fp16 • 9.1GB

Family: StarCoder

Estimated speed: 120-130 tok/s

Context: 16k

License: OpenRAIL-M

Use cases: coding
ollama pull starcoder2:3b

Qwen2.5-3B-Instruct

3B • fp16 • 9.1GB

Family: Qwen

Estimated speed: 121-131 tok/s

Context: 33k

License: Apache-2.0

Use cases: chat, coding, research
ollama pull qwen2.5:3b

Gemma-2-2B-Instruct

2B • fp16 • 6.9GB

Family: Gemma

Estimated speed: 127-137 tok/s

Context: 8k

License: Gemma Terms

Use cases: chat, creative
ollama pull gemma2:2b