LLM Finder
Match local LLMs to your GPU. Filter by VRAM and use case, then copy the download command.

17 models fit in 24GB for creative use.
Data powered by llmfit

Nous-Hermes-2-Mixtral-8x7B-DPO
46.7B MoE, q2_K • 19.6GB • Family: Nous
Estimated speed: 26-36 tok/s
Context: 33k
License: Apache-2.0
ollama pull nous-hermes2-mixtral

Yi-1.5-34B-Chat
34B, q4_K_M • 23.6GB • Family: Yi
Estimated speed: 18-28 tok/s
Context: 33k
License: Apache-2.0
ollama pull yi:34b

Qwen2.5-32B-Instruct
32B, q4_K_M • 22.3GB • Family: Qwen
Estimated speed: 20-30 tok/s
Context: 33k
License: Apache-2.0
ollama pull qwen2.5:32b

Mistral-Nemo-12B-Instruct
12B, q8_0 • 16GB • Family: Mistral
Estimated speed: 38-48 tok/s
Context: 128k
License: Apache-2.0
ollama pull mistral-nemo:12b

Gemma-2-27B-Instruct
27B, q5_K_M • 23.5GB • Family: Gemma
Estimated speed: 22-32 tok/s
Context: 8k
License: Gemma Terms
ollama pull gemma2:27b

Command-R-7B
7B, fp16 • 17.9GB • Family: Cohere
Estimated speed: 36-46 tok/s
Context: 128k
License: CC-BY-NC
ollama pull command-r:7b

Mixtral-8x7B-Instruct
46.7B MoE, q2_K • 19.6GB • Family: Mistral
Estimated speed: 32-42 tok/s
Context: 33k
License: Apache-2.0
ollama pull mixtral:8x7b

Dolphin-2.9.2-Qwen2-7B
7B, fp16 • 17.9GB • Family: Dolphin
Estimated speed: 37-47 tok/s
Context: 33k
License: Apache-2.0
ollama pull dolphin3:8b

Yi-1.5-9B-Chat
9B, fp16 • 22.3GB • Family: Yi
Estimated speed: 26-36 tok/s
Context: 33k
License: Apache-2.0
ollama pull yi:9b

Llama-3-8B-Instruct
8B, fp16 • 20.1GB • Family: Llama
Estimated speed: 32-42 tok/s
Context: 8k
License: Llama 3 Community
ollama pull llama3:8b

OpenHermes-2.5-Mistral-7B
7B, fp16 • 17.9GB • Family: Nous
Estimated speed: 38-48 tok/s
Context: 33k
License: Apache-2.0
ollama pull openhermes

Llama-3.1-8B-Instruct
8B, fp16 • 20.1GB • Family: Llama
Estimated speed: 33-43 tok/s
Context: 131k
License: Llama 3.1 Community
ollama pull llama3.1:8b

Zephyr-7B-beta
7B, fp16 • 17.9GB • Family: Zephyr
Estimated speed: 38-48 tok/s
Context: 33k
License: MIT
ollama pull zephyr:7b

Gemma-2-9B-Instruct
9B, fp16 • 22.3GB • Family: Gemma
Estimated speed: 28-38 tok/s
Context: 8k
License: Gemma Terms
ollama pull gemma2:9b

OpenChat-3.6-8B
8B, fp16 • 20.1GB • Family: OpenChat
Estimated speed: 35-45 tok/s
Context: 8k
License: Apache-2.0
ollama pull openchat:8b

Llama-3.2-3B-Instruct
3B, fp16 • 9.1GB • Family: Llama
Estimated speed: 66-76 tok/s
Context: 131k
License: Llama 3.2 Community
ollama pull llama3.2:3b

Gemma-2-2B-Instruct
2B, fp16 • 6.9GB • Family: Gemma
Estimated speed: 75-85 tok/s
Context: 8k
License: Gemma Terms
ollama pull gemma2:2b
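
The file sizes in the cards above track the usual quantized-weight arithmetic: roughly parameters × bits-per-weight ÷ 8, plus runtime headroom for the KV cache and activations. A minimal fit-check sketch in Python — the bits-per-weight table and the flat 10% overhead are illustrative assumptions, not llmfit's actual estimator:

```python
# Rough single-GPU fit check for a quantized model.
# ASSUMPTIONS: the bits-per-weight values and the flat 10% runtime
# overhead are illustrative, not llmfit's actual formula.

BITS_PER_WEIGHT = {
    "q2_K": 2.6,     # heavily quantized K-quant
    "q4_K_M": 4.85,  # common quality/size sweet spot
    "q5_K_M": 5.7,
    "q8_0": 8.5,
    "fp16": 16.0,    # unquantized half precision
}

def estimated_vram_gb(params_b: float, quant: str, overhead: float = 0.10) -> float:
    """Estimate VRAM in GB for a model with params_b billion parameters."""
    weights_gb = params_b * BITS_PER_WEIGHT[quant] / 8
    return weights_gb * (1 + overhead)  # headroom for KV cache, activations

def fits(params_b: float, quant: str, vram_gb: float = 24.0) -> bool:
    """True if the model's estimated footprint fits the VRAM budget."""
    return estimated_vram_gb(params_b, quant) <= vram_gb
```

Under these assumptions, a 34B model at q4_K_M squeezes into 24GB while a 70B model at the same quant does not, which matches the pattern in the list above.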