
Codestral & Edge Models

🌬️ The Mistral Ecosystem

Specialized Mistral Models

Codestral 2: The Coding Specialist

A 22B dense model purpose-built for code generation and agentic coding workflows. Key features:

  • Optimized for code completion, refactoring, and multi-file edits
  • Supports tool calling for agentic development
  • Re-licensed to Apache 2.0 (earlier versions had restrictive licenses)
  • Integrated into IDEs: Cursor, Continue.dev, VS Code
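Beyond IDE integrations, Codestral can also be served locally, for example through Ollama's HTTP API. A minimal sketch of assembling a non-streaming completion request — the `codestral` model tag and the default local endpoint are assumptions about your setup, not something this lesson's stack guarantees:

```shell
#!/bin/sh
# Build a JSON body for Ollama's /api/generate endpoint (non-streaming).
# The "codestral" model tag assumes the model has been pulled locally.
build_request() {
  printf '{"model": "codestral", "prompt": "%s", "stream": false}' "$1"
}

# Once Ollama is running on its default port, send the request with curl:
#   curl -s http://localhost:11434/api/generate -d "$(build_request 'def fib(n):')"
build_request 'def fib(n):'
```

Keeping the request body in a small helper like this makes it easy to swap in other local models (e.g. a Ministral tag) without touching the transport code.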

Ministral: Edge AI

The Ministral family (3B, 8B, 14B) is designed for deployment on constrained hardware:

Model           RAM Needed    Best For
Ministral 3B    ~2 GB (Q4)    Mobile, IoT, Raspberry Pi
Ministral 8B    ~5 GB (Q4)    Laptops, desktops
Ministral 14B   ~8 GB (Q4)    Workstations, light servers
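The RAM figures above roughly follow from 4-bit quantization: with quantization scales included, Q4 weights come out to about 0.6 bytes per parameter, plus runtime overhead. A back-of-the-envelope sketch — the 0.6 bytes/param factor is an illustrative assumption, and real usage also depends on context length and KV cache:

```shell
#!/bin/sh
# Rough Q4 weight-memory estimate: params (in billions) * ~0.6 bytes/param.
# This covers weights only; KV cache and runtime buffers add more on top.
estimate_q4_gb() {
  awk -v p="$1" 'BEGIN { printf "%.1f\n", p * 0.6 }'
}

estimate_q4_gb 3    # Ministral 3B  -> ~1.8 GB
estimate_q4_gb 8    # Ministral 8B  -> ~4.8 GB
estimate_q4_gb 14   # Ministral 14B -> ~8.4 GB
```

These land close to the table's figures, which is why halving-then-some of the full-precision size is a useful rule of thumb when sizing edge hardware.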

Mistral Small 4: The Hybrid

Released April 2026, this model unifies instruct, reasoning, and coding in a single multimodal package. It's the "Swiss Army knife" of the Mistral ecosystem — small enough for consumer GPUs but capable enough for production use.

🐳 Container Pattern: For edge deployments, run Ollama in a Docker container:

  docker run -d --gpus all -v ollama:/root/.ollama -p 11434:11434 ollama/ollama

Then pull and run the model inside the container:

  docker exec -it [container] ollama run ministral:8b
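The same pattern can be captured declaratively so the flags above don't live only in shell history. A docker-compose sketch — the GPU reservation block assumes a recent Compose version with the NVIDIA container runtime installed:

```yaml
# compose.yaml — sketch of the Ollama edge-deployment pattern above
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"        # Ollama's default API port
    volumes:
      - ollama:/root/.ollama # persist pulled models across restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all     # equivalent of --gpus all
              capabilities: [gpu]

volumes:
  ollama:
```

Start it with `docker compose up -d`, then exec into the service to `ollama run ministral:8b` as shown above.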
KNOWLEDGE CHECK — QUERY 1 // 2

What was the key licensing change for Codestral 2?

  • Changed to GPL
  • Re-licensed to Apache 2.0 for commercial use
  • Became fully proprietary
  • Added per-token pricing
Codestral & Edge Models | The Mistral Ecosystem — Open Source AI Academy