Mistral AI positions itself as the European leader in open-weight AI, with nearly all models released under the permissive Apache 2.0 license.
| Model | Params | Architecture | Specialty |
|---|---|---|---|
| Mistral Large 3 | 675B (41B active) | Sparse MoE | Flagship general-purpose, 256K context |
| Codestral 2 | 22B dense | Dense | Code generation & agentic coding |
| Devstral 2 | — | Dense | Frontier agentic dev workflows |
| Pixtral Large | — | VLM | Vision-language, multimodal |
| Mistral Small 4 | ~14B | Hybrid | Unified instruct+reasoning+coding |
| Ministral 3B/8B/14B | 3-14B | Dense | Edge devices, cost-efficient |
| Magistral Small | 24B | Dense | Reasoning-focused (open Apache 2.0) |
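The headline figure for the flagship model — 675B total parameters but only 41B active — is what makes sparse mixture-of-experts attractive: per-token compute scales with the *active* parameters, not the total. A minimal back-of-the-envelope sketch, using the figures from the table above and the standard dense-transformer approximation of roughly 2 FLOPs per parameter per token (the 2x factor is an assumption, not a Mistral-published number):

```python
# Rough per-token compute ratio for a sparse MoE model:
# only the routed "active" parameters participate in each forward pass.
total_params = 675e9   # Mistral Large 3, total parameters (from table)
active_params = 41e9   # parameters activated per token (from table)

flops_dense = 2 * total_params   # hypothetical dense model of the same size
flops_moe = 2 * active_params    # sparse MoE forward pass

ratio = flops_moe / flops_dense
print(f"MoE uses {ratio:.1%} of equivalent-dense compute per token")  # → 6.1%
```

Note that this saving applies to compute only: all 675B parameters must still fit in (possibly sharded) memory, which is why the vLLM invocation below uses tensor parallelism.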
```shell
# Via Ollama
ollama run mistral-large

# Via llama.cpp (GGUF)
llama-server -m mistral-large-3-Q4_K_M.gguf --ctx-size 32768

# Via vLLM (production)
vllm serve mistralai/Mistral-Large-3 --tensor-parallel-size 4
```
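Both `vllm serve` and `llama-server` expose an OpenAI-compatible `/v1/chat/completions` endpoint, so either backend can be queried with a plain HTTP POST. A minimal sketch using only the Python standard library; the host/port (`localhost:8000`, vLLM's default) and the model name are assumptions matching the vLLM command above:

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "mistralai/Mistral-Large-3"):
    """Build an OpenAI-compatible chat-completion POST request.

    Endpoint and model name assume the vLLM invocation shown above,
    listening on its default port 8000.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
        "temperature": 0.2,
    }
    return urllib.request.Request(
        "http://localhost:8000/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it (requires one of the servers above to be running):
#   with urllib.request.urlopen(build_request("Hello")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the wire format is the OpenAI schema, the same request works unchanged against Ollama's or llama.cpp's compatible endpoints by swapping the URL and model name.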