Practically: the agent reads your docs, README, or API description and decides if it can use your tool to solve the current problem. So the question is really "will an AI understand my tool well enough to use it correctly?"
What helps:
- Clear, literal API documentation (not marketing copy)
- Explicit input/output examples with edge cases
- A `capabilities.md` or similar that describes what the tool does and doesn't do
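To make that concrete, here's a minimal sketch of a tool definition in that literal style. The tool name, fields, and ID format are all hypothetical, not any particular framework's API; the point is that the description states inputs, outputs, and edge cases in plain, checkable terms.

```python
# Hypothetical tool definition -- name, fields, and ID scheme are
# illustrative only. The description is literal: inputs, outputs,
# and edge cases, with no marketing language.
lookup_order = {
    "name": "lookup_order",
    "description": (
        "Look up a single order by its ID and return its status and line items. "
        "Input: order_id, a string like 'ORD-10442' (case-sensitive). "
        "Output: JSON with 'status' (one of 'pending', 'shipped', 'cancelled') "
        "and 'items', a list of {'sku', 'quantity'}. "
        "Edge cases: an unknown order_id returns {'error': 'not_found'}; "
        "a cancelled order still returns its items."
    ),
}
```

The description doubles as the documentation the agent actually reads, so the edge cases live there rather than in a separate page.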
The irony: the skills that make tools understandable to AI (precision, literalness, examples) are the opposite of what makes them legible to humans (narrative, benefits, stories).
Two things that surprised us: (1) being explicit about what the tool doesn't do matters as much as what it does - vague descriptions get hallucinated calls constantly, and (2) inline examples in the description beat external documentation every time. The agent won't browse to your docs page.
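As an illustration of both points, here's the same hypothetical search tool described two ways: a vague one-liner versus a description that states its limits and carries an inline example. Everything here (tool behavior, the `lookup_sku` sibling tool) is made up for the sketch.

```python
# Hypothetical before/after. The vague version invites hallucinated
# calls; the explicit version marks the tool's negative space
# ("Does NOT") and includes an inline input/output example.
vague = "Searches the product database."

explicit = (
    "Search the product catalog by keyword. "
    "Returns at most 20 results, sorted by relevance. "
    "Does NOT: search order history, filter by price, or accept SKUs "
    "(use lookup_sku for SKU lookups). "
    "Example: query='usb-c cable' -> "
    "[{'sku': 'C-113', 'name': 'USB-C Cable 1m'}, ...]"
)
```

Note the example output sits in the description itself, on the assumption the agent won't follow a link to external docs.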
The schema side matters too: clean parameter names, sensible defaults, and a clear split between required and optional parameters. It's basically UX design for machines rather than humans. Different models do have different calling patterns (Claude is more conservative and will ask before guessing; others just fire and hope), so your descriptions need to work for both styles.
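A sketch of what that looks like in a JSON Schema-style parameter block (parameter names and limits are hypothetical): one required parameter, everything else optional with a stated default, and per-parameter descriptions that include an example value.

```python
# Hypothetical parameter schema for the search tool above,
# in JSON Schema style: required vs optional is explicit,
# defaults are stated where the model can see them.
parameters = {
    "type": "object",
    "properties": {
        "query": {
            "type": "string",
            "description": "Keyword search string, e.g. 'usb-c cable'.",
        },
        "limit": {
            "type": "integer",
            "description": "Maximum results to return. Defaults to 10.",
            "default": 10,
            "minimum": 1,
            "maximum": 50,
        },
    },
    "required": ["query"],  # everything not listed here is optional
}
```

Putting the default in the description as well as the `default` field is belt-and-braces: some models read only the prose, a conservative one can skip guessing, and a fire-and-hope one still lands on a sane value.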
I hope it doesn’t stick.