
11 Mar 2026

“Tool calling (often called function calling) is a technical capability in modern AI systems—specifically Large Language Models (LLMs)—that allows the model to interact with external tools, APIs, or databases to perform tasks beyond its own training data.” – Tool calling

Tool calling, also known as function calling, is a technical capability that enables Large Language Models (LLMs) to intelligently request and utilise external tools, APIs, databases, and services during conversations or processing tasks.1,2 Rather than relying solely on information contained within their training data, LLMs equipped with tool calling can dynamically access real-time information, perform actions, and interact with external systems to provide more accurate, current, and actionable responses.3,4

How Tool Calling Works

The tool calling process follows a structured flow that bridges the gap between language models and external systems:2

  1. A user submits a prompt or query to the LLM that may require external data or functionality
  2. The model analyses the request and determines whether a tool is needed to fulfil it
  3. If necessary, the model outputs structured data specifying which tool to call and what parameters to use
  4. The application executes the requested tool with the provided parameters
  5. The tool returns results to the model
  6. The model incorporates this information into its final response to the user

Critically, the model itself does not execute the functions or interact directly with external systems. Instead, it generates structured parameters for potential function calls, allowing your application to maintain full control over whether to invoke the suggested function or take alternative actions.8
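The six-step flow above can be sketched in code. This is a minimal, framework-agnostic illustration: `call_model` is a hypothetical stub standing in for a real LLM API call (here it always requests the `get_weather` tool), and `get_weather` stands in for a real weather service. The point is the division of labour: the model only emits structured parameters, and the application decides whether and how to execute them.

```python
import json

# Hypothetical stub for an LLM API call. A real implementation would send
# messages and tool definitions to a provider; here the "model" always
# requests the get_weather tool so the flow can be traced end to end.
def call_model(messages, tools):
    return {"tool_call": {"name": "get_weather",
                          "arguments": json.dumps({"location": "London"})}}

def get_weather(location):
    # Stand-in for a real weather API request.
    return {"location": location, "conditions": "cloudy", "temp_c": 11}

# The application, not the model, owns the mapping from tool names to code.
TOOLS = {"get_weather": get_weather}

def run_turn(user_prompt):
    messages = [{"role": "user", "content": user_prompt}]      # step 1: user prompt
    response = call_model(messages, tools=list(TOOLS))         # steps 2-3: model decides
    tool_call = response.get("tool_call")
    if tool_call:                                              # step 4: app executes
        fn = TOOLS[tool_call["name"]]
        args = json.loads(tool_call["arguments"])
        result = fn(**args)
        messages.append({"role": "tool",                       # step 5: result returned
                         "content": json.dumps(result)})
    return messages  # step 6 would be a second call_model() to compose the final answer

messages = run_turn("What's the weather in London?")
```

Note that the application could just as easily refuse the call, rewrite the arguments, or take a different action entirely; the model's output is a request, not a command.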

Defining Tools and Functions

Tools are defined using JSON Schema format, which informs the model about available capabilities.3 Each tool definition requires three essential components:

  • Name: A function identifier using alphanumeric characters, underscores, or dashes (maximum 64 characters)
  • Description: A clear explanation of what the function does, which the model uses to decide when to call it
  • Parameters: A JSON Schema object describing the function’s input arguments and their types

For example, a weather function might be defined with the name get_weather, a description explaining it retrieves current weather conditions, and parameters specifying that it requires a location argument.2
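A hypothetical version of that `get_weather` definition, written out with the three required components, might look like the following. The exact wrapper object varies by provider; the JSON Schema `parameters` shape shown here is the common core.

```python
import json

# Hypothetical tool definition with the three essential components:
# name, description, and a JSON Schema parameters object.
get_weather_tool = {
    "name": "get_weather",  # alphanumerics, underscores, dashes; max 64 chars
    "description": "Retrieve current weather conditions for a given location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City name, e.g. 'Paris'",
            },
        },
        "required": ["location"],
    },
}

print(json.dumps(get_weather_tool, indent=2))
```

The `description` fields matter more than they might appear to: the model relies on them, not on the function body (which it never sees), to decide when the tool applies and how to fill in its arguments.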

Types of Tool Calling

Tool calling implementations vary in complexity depending on application requirements:1

  • Simple: One function triggered by a single user prompt, ideal for basic utilities
  • Multiple: Several functions available, with the model selecting the most appropriate one based on user intent
  • Parallel: The same function called multiple times simultaneously for complex requests
  • Parallel Multiple: Multiple different functions executed in parallel within a single request
  • Multi-Step: Sequential function calling within one conversation turn for data processing workflows
  • Multi-Turn: Conversational context combined with function calling, enabling AI agents to interact with humans in iterative loops
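For the parallel variants above, the application receives several tool-call requests in one model response and can execute them concurrently. A minimal sketch, assuming two hypothetical tools (`get_weather` and `convert_currency` with a fixed demo exchange rate):

```python
import json
from concurrent.futures import ThreadPoolExecutor

def get_weather(location):
    return {"location": location, "temp_c": 11}  # stand-in for a real API

def convert_currency(amount, to):
    # Fixed demo rate; a real tool would query a rates service.
    return {"amount": amount, "currency": to, "converted": amount * 0.79}

TOOLS = {"get_weather": get_weather, "convert_currency": convert_currency}

def execute_parallel(tool_calls):
    """Run several model-requested tool calls concurrently
    ('parallel multiple' in the taxonomy above)."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(TOOLS[c["name"]], **json.loads(c["arguments"]))
                   for c in tool_calls]
        return [f.result() for f in futures]

calls = [
    {"name": "get_weather", "arguments": json.dumps({"location": "Tokyo"})},
    {"name": "convert_currency", "arguments": json.dumps({"amount": 100, "to": "GBP"})},
]
results = execute_parallel(calls)
```

Multi-step calling, by contrast, feeds each result back to the model before the next call, so it cannot be parallelised in this way.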

Primary Use Cases

Tool calling enables two fundamental categories of functionality:4

Fetching Data: Retrieving up-to-date information for model responses, such as current weather conditions, currency conversion rates, or specific data from knowledge bases and APIs. This approach is particularly valuable for Retrieval-Augmented Generation (RAG) systems that require access to external knowledge sources.4

Taking Action: Performing external operations such as submitting forms, updating application state, scheduling appointments, controlling smart home devices, or orchestrating agentic workflows including conversation handoffs.4,5
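In the data-fetching category, a RAG-style lookup tool can be as simple as a keyword search that the model invokes when it lacks the answer. The in-memory knowledge base below is hypothetical; a production system would query a vector store or search index.

```python
# Hypothetical in-memory knowledge base; a real RAG system would use
# a vector store or search index instead.
KNOWLEDGE_BASE = {
    "refund_policy": "Refunds are accepted within 30 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def search_kb(query):
    """Return entries whose key or text matches the query (naive keyword match).
    Exposed to the model as a data-fetching tool."""
    q = query.lower()
    return [text for key, text in KNOWLEDGE_BASE.items()
            if q in key or q in text.lower()]

hits = search_kb("refund")
```

The model would call `search_kb` when a user asks about refunds, receive the matching passage, and ground its answer in that retrieved text rather than in its training data.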

Practical Applications

Tool calling transforms LLMs from passive information providers into active agents capable of real-world interaction. Common implementations include:5

  • Conversational agents that answer questions by accessing current data
  • Voice AI bots that check weather, look up stock prices, or query databases
  • Automated systems that schedule appointments or control connected devices
  • Agentic AI workflows that perform complex multi-step tasks

Key Distinction: Tools vs Functions

Whilst the terms are often used interchangeably, a subtle distinction exists. A function is a specific kind of tool defined by a JSON schema, allowing the model to pass structured data to your application. A tool is the broader concept encompassing any external capability or resource, including functions, custom tools with free-form text inputs and outputs, and built-in tools such as web search, code execution, and Model Context Protocol (MCP) server functionality.2,8

Related Strategy Theorist: Andrew Ng

Andrew Ng (born 1976) is a pioneering computer scientist and AI researcher whose work has profoundly influenced how modern AI systems are designed and deployed, including the development of tool-augmented AI architectures. As a co-founder of Coursera, Chief Scientist at Baidu, and founder of Landing AI, Ng has consistently advocated for practical, production-oriented approaches to artificial intelligence that extend model capabilities beyond their training data.

Ng’s relationship to tool calling stems from his broader philosophy that effective AI systems must be grounded in real-world applications. Rather than viewing LLMs as isolated systems, Ng has championed the integration of language models with external tools, databases, and domain-specific systems, an approach that directly parallels modern tool calling implementations. His work on machine learning systems design emphasises the importance of connecting AI models to actionable data and external services, enabling them to operate effectively in production environments.

In his influential writings and lectures, particularly through his “AI for Everyone” initiative and subsequent work on AI transformation, Ng has stressed that the future of AI lies not in larger models alone, but in intelligent systems that can leverage external resources and tools to solve real problems. This perspective aligns precisely with tool calling’s core principle: extending LLM capabilities by enabling structured interaction with external systems.

Ng’s background includes a PhD in Computer Science from UC Berkeley, where he conducted research in machine learning and robotics. He served as Director of the Stanford Artificial Intelligence Laboratory and has held leadership positions at major technology companies. His contributions to deep learning, transfer learning, and practical AI deployment have shaped industry standards for building intelligent systems that operate beyond their training data, making him a foundational figure in the theoretical and practical development of tool-augmented AI systems like those enabled by tool calling.

 

References

1. https://docs.together.ai/docs/function-calling

2. https://platform.openai.com/docs/guides/function-calling

3. https://docs.fireworks.ai/guides/function-calling

4. https://docs.cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling

5. https://docs.pipecat.ai/guides/learn/function-calling

6. https://budibase.com/blog/ai-agents/tool-calling/

7. https://www.promptingguide.ai/applications/function_calling

8. https://cobusgreyling.substack.com/p/whats-the-difference-between-tools
