
Prompt Debugging & Iteration

✍️ Prompt Engineering for Agents · 10 min · 90 BASE XP

When Your Agent Misbehaves

Prompt engineering is 20% writing and 80% debugging. Here's a systematic approach.

The Prompt Debugging Checklist

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| Calls wrong tool | Vague tool descriptions | Add specific use-case guidance |
| Hallucinates arguments | Ambiguous parameter names | Use descriptive names + examples |
| Ignores constraints | Constraints buried in a long prompt | Move constraints to the TOP + bold/caps |
| Loops infinitely | No termination criteria | Add "stop when X" + an iteration cap |
| Generic answers | No domain context | Add a company/domain context section |
| Wrong output format | Format not enforced | Add format examples + "ONLY this format" |

The APE Method

  1. Action: Run agent on 10 test cases, record failures.
  2. Prompt: Modify ONE thing to address the most common failure.
  3. Evaluate: Re-run all 10 cases. Did it improve?
// Systematic Prompt Iteration Log
// v1: Base prompt → 4/10 pass
// v2: Added constraints → 6/10 pass  
// v3: Added few-shot examples → 8/10 pass
// v4: Added negative examples → 9/10 pass
// v5: Added structured CoT → 10/10 pass
🚧 Critical Rule: Change only ONE thing per iteration. If you change 3 things and quality improves, you won't know which change helped.
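The APE loop above can be sketched as a small harness: every prompt version is re-run against all test cases and its pass rate is logged. `run_agent` and the test cases are placeholders you would supply; all names here are illustrative, not a real API.

```python
# Minimal APE harness, assuming a run_agent(prompt, input) callable and
# one pass/fail check per test case.

def evaluate(run_agent, prompt: str, cases: list[dict]) -> float:
    """Run every test case against one prompt version; return the pass rate."""
    passed = 0
    for case in cases:
        output = run_agent(prompt, case["input"])
        if case["check"](output):  # each case supplies its own check
            passed += 1
    return passed / len(cases)

def iterate(run_agent, versions: list[tuple[str, str]], cases: list[dict]):
    """Log (label, score) for each version. Change ONE thing per version."""
    return [(label, evaluate(run_agent, prompt, cases))
            for label, prompt in versions]
```

Plugging in a stub agent that just concatenates prompt and input shows the shape of the log: `iterate(stub, [("v1", ...), ("v2", ...)], cases)` returns one score per version, exactly like the comment log above.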

Version Control Your Prompts

  • Store prompts in Git, not hardcoded in app code.
  • Tag each version with its eval score.
  • A/B test changes with canary rollouts.
  • Maintain a changelog explaining WHY each change was made.
SYNAPSE VERIFICATION
QUERY 1 // 2
If an agent ignores constraints, what's the most likely fix?
Use a bigger model
Move constraints to the TOP of the prompt with emphasis
Remove constraints
Add more tools