Context
Defining how tools present themselves to agents.
Problem
Descriptions written for humans lack the explicit detail LLMs need. Humans infer; LLMs require explicit guidance.
Solution
Describe tools for the machine:
- Comprehensive: Everything the LLM needs
- Prompt engineering: Guide the LLM into selecting and using the tool correctly
- Prerequisites: "If unknown, call X first"
- Related tools: Link to alternatives
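The checklist above can be carried directly in a machine-readable tool definition. A sketch, assuming an OpenAI-style function-calling schema; the tool and field names are illustrative:

```python
# Hypothetical definition for a get_user tool. The description
# packs in prerequisites and related tools so the LLM can reason
# about selection without any human-style inference.
GET_USER_TOOL = {
    "name": "get_user",
    "description": (
        "Retrieve a user by email or ID. "
        "Prerequisite: if you only have a name, call search_users first. "
        "Related tools: search_users (fuzzy lookup), update_user (modify)."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "identifier": {
                "type": "string",
                "description": "Email (john@example.com) or ID (usr_abc123).",
            }
        },
        "required": ["identifier"],
    },
}
```

Note that the prerequisite and the links to alternatives live in the description itself, where the model sees them at selection time.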
Examples
Python
def get_user(identifier: str) -> User | None:
    """
    Retrieve a user by email or ID.

    Args:
        identifier: Email (john@example.com) or
            ID (usr_abc123). If you only have
            a name, call search_users() first.

    Returns:
        User object or None if not found.

    Related: search_users(), update_user()
    """
Considerations
- Use the tool description to guide the LLM in understanding when to select the tool and how to call it correctly.
- LLMs perform well when selection criteria and reasoning steps are explicit. The tool description must contain all the information the LLM needs to reason about when and how to use the tool.
- Assume your tool will be presented to the LLM for selection alongside other tools, which can cause confusion.
- Do not assume the model can infer anything that is not explicitly stated in the tool description, even if it is obvious to a human.
- Consider the tool description from a prompt engineering standpoint. All standard prompt engineering principles and practices apply here.
- LLMs do not reason like humans. We only know a tool description is good when we evaluate the LLM's performance with prompts that simulate expected real-world usage.
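That evaluation can be as simple as a table of realistic prompts paired with the tool the model should pick. A minimal harness sketch; `call_model` is a hypothetical stand-in for a real LLM call and is stubbed here:

```python
# Tool-selection eval: each case pairs a realistic user prompt
# with the tool we expect the LLM to choose.
EVAL_CASES = [
    ("Look up john@example.com", "get_user"),
    ("Find users named John", "search_users"),
]

def call_model(prompt: str, tools: list[str]) -> str:
    # Stub: a real harness would send the prompt plus every tool
    # description to the LLM and parse out its tool choice.
    return "search_users" if "named" in prompt.lower() else "get_user"

def run_eval(cases, tools=("get_user", "search_users", "update_user")) -> float:
    # Fraction of cases where the chosen tool matches the expectation.
    correct = sum(call_model(p, list(tools)) == expected for p, expected in cases)
    return correct / len(cases)

accuracy = run_eval(EVAL_CASES)
```

Rerunning the same cases after each edit to a description shows whether the change actually improved selection, rather than just reading better to a human.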