
Tool Execution & Errors

🛠️ Tools & Functions · 8 min · 70 BASE XP

Handling Tool Errors Gracefully

When an LLM provides bad arguments or an API call fails, your tool shouldn't crash the server. Instead, it should return a graceful error message to the LLM so the model can diagnose the problem and retry with corrected input.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
import fs from "node:fs/promises";

const server = new McpServer({ name: "file-server", version: "1.0.0" });

server.tool(
  "read_file",
  "Reads a file",
  { path: z.string() },
  async ({ path }) => {
    try {
      const data = await fs.readFile(path, "utf8");
      return { content: [{ type: "text", text: data }] };
    } catch (e) {
      // ✅ Allow the LLM to learn and retry:
      const message = e instanceof Error ? e.message : String(e);
      return {
        isError: true,
        content: [{ type: "text", text: `Error reading file. Did you use the correct path? ${message}` }]
      };
    }
  }
);
💡 Key Insight: The isError: true flag tells the Host application that the tool call failed, so it can surface the failure in its UI while feeding the error text back to the LLM for correction.
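To see why this shape is useful, here is a minimal sketch of how a Host might consume such a result. The `handleToolResult` helper and the retry marker are hypothetical illustrations, not part of the MCP SDK:

```typescript
// Shape of the tool result returned by the handler above (simplified).
type ToolResult = {
  isError?: boolean;
  content: { type: "text"; text: string }[];
};

// Hypothetical Host-side helper: an isError result is still a valid
// protocol response, so instead of throwing, the Host prefixes a marker
// that prompts the model to retry with corrected arguments.
function handleToolResult(result: ToolResult): string {
  const text = result.content.map((c) => c.text).join("\n");
  return result.isError
    ? `TOOL ERROR (retry with corrected input): ${text}`
    : text;
}
```

The key design choice is that both success and failure flow back to the model as ordinary text, keeping the tool-call loop alive instead of aborting it.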
SYNAPSE VERIFICATION
QUERY 1 // 3
What is the best practice for handling errors inside a Tool execution?
Throw a fatal exception and crash.
Return an empty string.
Return the error details with the `isError: true` flag so the LLM can try again.
Exit the process immediately.