When an LLM supplies bad arguments or an API call fails, your tool shouldn't crash the server. Instead, it should return a graceful error message to the LLM so the model can diagnose the problem and try again.
```typescript
import { promises as fs } from "node:fs";
import { z } from "zod";

server.tool(
  "read_file",
  "Reads a file",
  { path: z.string() },
  async ({ path }) => {
    try {
      const data = await fs.readFile(path, "utf8");
      return { content: [{ type: "text", text: data }] };
    } catch (e) {
      // ✅ Allow the LLM to learn and retry:
      const message = e instanceof Error ? e.message : String(e);
      return {
        isError: true,
        content: [{ type: "text", text: `Error reading file. Did you use the correct path? ${message}` }]
      };
    }
  }
);
```
The `isError: true` flag tells the Host application to render the result as an error while still feeding the error text back to the LLM so it can correct course.
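To make that feedback loop concrete, here is a minimal sketch of how a Host might branch on the flag when building the next turn for the model. The `ToolResult` type and `nextPromptChunk` function are simplified illustrations, not the official SDK types:

```typescript
// Hypothetical, simplified shape of a tool call result (not the SDK's type).
type ToolResult = {
  isError?: boolean;
  content: { type: "text"; text: string }[];
};

// Decide what text to feed back to the LLM on the next turn.
// On error, wrap the message with a nudge to adjust arguments and retry.
function nextPromptChunk(result: ToolResult): string {
  const text = result.content.map((c) => c.text).join("\n");
  return result.isError
    ? `Tool call failed: ${text}\nPlease adjust your arguments and retry.`
    : text;
}

// Usage: an error result becomes a retry prompt; a success passes through.
console.log(
  nextPromptChunk({
    isError: true,
    content: [{ type: "text", text: "Error reading file. Did you use the correct path? ENOENT" }],
  })
);
```

The key design point is that the error never escapes as an exception: it travels in-band as content, so the model sees it exactly like any other tool output.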