
Tool use / Function calling

Also known as: function calling, tool calling, function use

Letting an LLM call external functions — search the web, run code, query a database — by emitting a structured function call the runtime executes for it.

What it means

Tool use (also called function calling) is the mechanism that turns a text-only LLM into something that can act on the world. You describe a set of functions to the model in its prompt: name, what they do, what arguments they take. The model can then choose to emit a structured call like `{"name": "search_web", "arguments": {"query": "..."}}` instead of normal text. Your runtime executes the call, feeds the result back into the model's context, and the model continues.

That tight loop (model proposes a tool call, runtime executes it, result goes back into context, model decides what's next) is the foundation of every AI agent. Without tool use, an LLM can only output text. With it, the same model can read your inbox, post to Slack, query Postgres, run shell commands, or click buttons in a browser.

Every major model exposes this: OpenAI and Google call it "function calling," Anthropic calls it "tool use." The contracts differ but the shape is the same, and MCP standardizes the layer above, so the same tool definition works across providers.

Tool use quality varies a lot. Frontier models in 2026 (Claude 4.x, GPT-5, Gemini 2.5) handle parallel tool calls, multi-step planning, and recovery from tool errors well. Smaller and older models hallucinate arguments, call the wrong tool, or get stuck in loops. Tool design matters as much as model choice: clear names, narrow scopes, and good error messages.
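The loop can be sketched provider-agnostically. This is a minimal illustration, not any vendor's SDK; the names (`run_agent`, `TOOLS`, `model_step`, the message shape) are all assumptions for the sketch:

```python
import json

# Tool registry: name -> callable the runtime is willing to execute.
# (Illustrative only; real SDKs pair this with a JSON schema per tool.)
TOOLS = {
    "search_web": lambda query: f"results for {query!r}",
}

def run_agent(model_step, user_message, max_turns=5):
    """Minimal tool-use loop: the model proposes a structured call,
    the runtime executes it and appends the result to context,
    until the model replies with plain text."""
    context = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        reply = model_step(context)              # model decides what's next
        if reply.get("type") == "tool_call":     # structured call, not text
            result = TOOLS[reply["name"]](**reply["arguments"])
            context.append({"role": "tool",
                            "name": reply["name"],
                            "content": json.dumps(result)})
        else:
            return reply["content"]              # plain text: loop is done
    raise RuntimeError("agent exceeded max_turns without finishing")

# A stub standing in for a real model: call the tool once, then answer.
def fake_model(context):
    if context[-1]["role"] == "user":
        return {"type": "tool_call", "name": "search_web",
                "arguments": {"query": "MCP"}}
    return {"type": "text",
            "content": f"Answer based on {context[-1]['content']}"}
```

With a real provider, `model_step` would be an API call and the tool results would go back as tool-result messages in that provider's format; the control flow stays exactly this shape.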

Example

You give Claude a `get_weather(city)` tool. You ask "should I bring an umbrella to my London meeting?" Claude emits `get_weather("London")`, your code calls the API, returns `"rain"`, and Claude says "Yes, bring one — it's raining."
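Wired up concretely, the example looks roughly like this. The schema shape is a sketch (key names vary by provider; Anthropic uses `input_schema`, OpenAI uses `parameters`) and the weather lookup is hardcoded for illustration:

```python
def get_weather(city: str) -> str:
    # Stand-in for a real weather API call; hardcoded for illustration.
    return "rain" if city == "London" else "clear"

# The tool description the model sees (exact key names vary by provider).
GET_WEATHER_SCHEMA = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# The model emits a structured call; the runtime dispatches it.
call = {"name": "get_weather", "arguments": {"city": "London"}}
result = get_weather(**call["arguments"])
print(result)  # rain
```

The `"rain"` result goes back into the model's context as a tool result, and the model writes its final answer from there.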

Why it matters

Tool use is the single biggest capability unlock since the original ChatGPT. It's what makes agents possible, what makes ChatGPT able to browse and run Python, and what makes MCP useful. If you're building anything beyond a Q&A bot, you're building on top of tool use.
