A 22B dense model purpose-built for code generation and agentic coding workflows.
The Ministral family (3B, 8B, 14B) is designed for deployment on constrained hardware:
| Model | RAM Needed | Best For |
|---|---|---|
| Ministral 3B | ~2GB (Q4) | Mobile, IoT, Raspberry Pi |
| Ministral 8B | ~5GB (Q4) | Laptops, desktops |
| Ministral 14B | ~8GB (Q4) | Workstations, light servers |
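The RAM figures above follow from simple arithmetic: a 4-bit (Q4) quantization stores roughly half a byte per weight, plus runtime overhead for the KV cache and activations. A rough back-of-envelope sketch (the 0.5 GB flat overhead is an assumption for illustration; real overhead grows with context length, which is why larger models in the table carry a bigger margin):

```python
def q4_ram_gb(params_billion: float, overhead_gb: float = 0.5) -> float:
    """Rough RAM estimate for a Q4-quantized model.

    Q4 stores ~4 bits (0.5 bytes) per weight; overhead_gb is an
    assumed flat allowance for KV cache, activations, and runtime.
    """
    weights_gb = params_billion * 0.5
    return round(weights_gb + overhead_gb, 1)

for n in (3, 8, 14):
    print(f"Ministral {n}B: ~{q4_ram_gb(n)} GB at Q4")
```

This is an estimate, not a measurement; actual usage depends on the quantization format (Q4_K_M vs. Q4_0), context window, and inference runtime.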
Released April 2026, this model unifies instruct, reasoning, and coding in a single multimodal package. It's the "Swiss Army knife" of the Mistral ecosystem — small enough for consumer GPUs but capable enough for production use.
```shell
# Start the Ollama server in the background with GPU access
docker run -d --gpus all -v ollama:/root/.ollama -p 11434:11434 ollama/ollama

# Then run the model inside the container
docker exec -it [container] ollama run ministral:8b
```
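Once the container is up, the server also exposes Ollama's REST API on the published port 11434, so you can call the model programmatically instead of through the interactive shell. A minimal sketch using only the standard library (the model name `ministral:8b` comes from the command above; everything else assumes a default local Ollama install):

```python
import json
import urllib.request

# Default endpoint matching the -p 11434:11434 mapping above
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama to return a single JSON object
    # instead of a stream of partial responses
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "ministral:8b") -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Write a Python function that reverses a string."))
```

The same endpoint works for any model you have pulled; swap the `model` argument to compare outputs across the family.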