
When & How to Fine-Tune

🎯 Fine-Tuning & Distillation · 15 min

Customizing Model Behavior

Fine-tuning trains an existing OpenAI model on your own dataset to customize its behavior, tone, format, or domain knowledge. It does NOT add new knowledge — it adjusts HOW the model responds.

Decision Framework

  Try First             Then Try            Last Resort
  Prompt Engineering    RAG (Retrieval)     Fine-Tuning
  90% of use cases      Domain knowledge    Behavior/format changes
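The decision framework can be sketched as a toy helper (a hypothetical function for illustration, not part of any SDK): default to prompt engineering, reach for RAG only when you need to inject knowledge, and fine-tune only when you need to change behavior or format.

```javascript
// Toy sketch of the decision framework above (illustrative only).
// Pick the cheapest approach that fits the requirement.
function chooseApproach({ needsNewKnowledge, needsBehaviorChange }) {
  if (needsNewKnowledge) return "RAG";           // inject domain knowledge
  if (needsBehaviorChange) return "fine-tuning"; // change tone/format
  return "prompt engineering";                   // covers ~90% of cases
}
```

The ordering encodes the "try first / then try / last resort" columns: prompt engineering is the default, and fine-tuning is only reached when the cheaper options cannot solve the problem.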

Fine-Tuning Workflow

  1. Prepare Data: Create a JSONL file of example conversations
  2. Upload: Upload the training file via the Files API
  3. Train: Create a fine-tuning job specifying the base model
  4. Evaluate: Test the fine-tuned model against your eval set
  5. Deploy: Use your custom model ID in API calls
// Training data format (JSONL). Each example is a single JSON
// object on one line; wrapped here for readability:
{"messages": [
  {"role": "system", "content": "You are a concise legal assistant."},
  {"role": "user", "content": "Summarize this contract clause..."},
  {"role": "assistant", "content": "Key terms: ..."}
]}

// Create fine-tuning job:
const job = await openai.fineTuning.jobs.create({
  training_file: "file-abc123",
  model: "gpt-5.4-mini",
  hyperparameters: { n_epochs: 3 }
});
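Step 1 (Prepare Data) can be sketched as a small helper that serializes example conversations into JSONL, one JSON object per line, and rejects unknown roles before you upload. The helper name `toJsonl` is hypothetical, not part of the OpenAI SDK.

```javascript
// Hypothetical helper (illustrative, not from the OpenAI SDK):
// serialize example conversations into JSONL, one JSON object per
// line, validating that every message uses an allowed role.
const ALLOWED_ROLES = new Set(["system", "user", "assistant"]);

function toJsonl(examples) {
  return examples
    .map((messages) => {
      for (const m of messages) {
        if (!ALLOWED_ROLES.has(m.role)) {
          throw new Error(`unknown role: ${m.role}`);
        }
      }
      return JSON.stringify({ messages }); // one training example per line
    })
    .join("\n");
}

// Example: one training example with three messages.
const jsonl = toJsonl([
  [
    { role: "system", content: "You are a concise legal assistant." },
    { role: "user", content: "Summarize this contract clause..." },
    { role: "assistant", content: "Key terms: ..." },
  ],
]);
```

Writing the file this way catches role typos locally instead of waiting for the fine-tuning job to fail after upload.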

Best Practices

  • Start with 50-100 high-quality examples — quality over quantity
  • Always create a validation set (20% of data) to detect overfitting
  • Fine-tune the smallest model that meets your needs (prefer Mini over Pro)
  • Use checkpoints to save intermediate states
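The 20% validation split from the best practices above can be sketched as follows (a hypothetical helper; a seeded shuffle keeps the split reproducible across runs):

```javascript
// Sketch of an 80/20 train/validation split (illustrative helper).
// A deterministic seeded shuffle makes the split repeatable.
function trainValidationSplit(examples, validationFraction = 0.2, seed = 42) {
  // Simple seeded PRNG (mulberry32) so the shuffle is reproducible.
  let s = seed;
  const rand = () => {
    s |= 0;
    s = (s + 0x6d2b79f5) | 0;
    let t = Math.imul(s ^ (s >>> 15), 1 | s);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
  // Fisher-Yates shuffle on a copy, so the input is left untouched.
  const shuffled = [...examples];
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const nVal = Math.floor(shuffled.length * validationFraction);
  return {
    validation: shuffled.slice(0, nVal),
    training: shuffled.slice(nVal),
  };
}
```

Passing the validation set as a separate file to the fine-tuning job lets you compare training and validation loss to detect overfitting.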
🎯 Rule: Fine-tuning is for changing behavior/format, NOT for adding knowledge. Use RAG for knowledge injection.
SYNAPSE VERIFICATION
QUERY 1 // 3
What should you try BEFORE fine-tuning?
  • Nothing — always fine-tune first
  • Prompt engineering and RAG — they solve 90%+ of use cases without fine-tuning costs
  • Switching to a different provider
  • Rebuilding from scratch
When & How to Fine-Tune | Fine-Tuning & Distillation — OpenAI Academy