ARTICLE · 20 MIN READ · FEBRUARY 10, 2026
Chapter 10: Model Context Protocol (MCP)
Every tool integration so far required custom code. MCP changes that — a universal standard that lets any LLM connect to any tool, database, or service without bespoke glue code for each pair.
The Integration Problem
Protocol: A set of agreed-upon rules that two parties use to communicate. HTTP is a protocol for web browsers and servers. Bluetooth is a protocol for wireless devices. A protocol defines the exact format of messages, the order of communication, and how errors are handled. Without a protocol, two systems might both "speak" but use incompatible formats — like two people trying to have a conversation where one speaks French and the other speaks Japanese.
Client-server architecture: A model where one program (the server) provides a service, and another program (the client) uses that service. A web browser is a client; the website's computer is a server. The client makes requests; the server fulfills them. One server can serve many clients simultaneously.
API (Application Programming Interface): A defined way for two programs to communicate. When you use a weather app, it calls a weather service's API to get forecast data. The API specifies exactly what requests you can make and what format the responses will be in.
JSON-RPC: A protocol for calling functions remotely using JSON (JavaScript Object Notation) as the message format. You send: {"method": "get_weather", "params": {"city": "London"}, "id": 1} and receive back a JSON response. MCP uses JSON-RPC as its underlying communication format.
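As a toy illustration of this request/response shape — a dependency-free sketch of the message format only, not a real JSON-RPC 2.0 or MCP implementation — a dispatcher might look like:

```python
import json

# Toy method registry. The get_weather handler and its response text
# are illustrative assumptions, not a real weather service.
METHODS = {
    "get_weather": lambda params: f"Sunny, 18C in {params['city']}",
}

def handle(request_json: str) -> str:
    """Parse one JSON-RPC request, call the named method, return a JSON response."""
    req = json.loads(request_json)
    fn = METHODS.get(req["method"])
    if fn is None:
        return json.dumps({"error": "method not found", "id": req["id"]})
    return json.dumps({"result": fn(req.get("params", {})), "id": req["id"]})

response = handle('{"method": "get_weather", "params": {"city": "London"}, "id": 1}')
print(response)  # {"result": "Sunny, 18C in London", "id": 1}
```

The `id` field is what lets the caller match responses to requests when several are in flight — the same correlation mechanism MCP relies on.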
STDIO (Standard Input/Output): The basic input and output channels of a Unix/Linux/Mac process. Every program has stdin (where it reads input from), stdout (where it writes output to), and stderr (where it writes errors to). MCP uses STDIO for local communication — the client writes to the server's stdin, the server writes responses to its stdout.
Schema: A formal, machine-readable description of what data should look like. A JSON schema for a weather API tool might say: "The input must have a 'city' field that is a string, and a 'units' field that is either 'celsius' or 'fahrenheit'." The LLM reads this schema to know how to call the tool correctly.
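To make this concrete, here is a toy validator for that hypothetical weather schema. Real MCP clients use full JSON Schema libraries; this sketch checks only the rules described above:

```python
# The schema from the definition above, as a plain dict.
weather_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["city"],
}

def validate(args: dict, schema: dict) -> list:
    """Return a list of validation errors (empty means the input is valid)."""
    errors = []
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    for field, rules in schema["properties"].items():
        if field not in args:
            continue
        if rules["type"] == "string" and not isinstance(args[field], str):
            errors.append(f"{field} must be a string")
        if "enum" in rules and args[field] not in rules["enum"]:
            errors.append(f"{field} must be one of {rules['enum']}")
    return errors

print(validate({"city": "London", "units": "celsius"}, weather_schema))  # []
print(validate({"units": "kelvin"}, weather_schema))  # two errors
```

This is exactly the check an MCP client or server performs before executing a tool call: malformed arguments are rejected before they ever reach the underlying system.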
Interoperability: The ability of different systems to work together seamlessly. USB is an interoperable standard — any USB cable works with any USB port, regardless of who made either. MCP aims to be the "USB of AI integrations."
In Chapter 5, we learned how agents call tools. An agent equipped with a get_weather function can check the weather. An agent equipped with send_email can send emails. This works beautifully — for a small, fixed set of tools that you’ve written yourself.
But consider what happens as the system grows:
- Your company has 40 internal services: CRM, ticketing, HR system, code repository, document store, project management, financial reporting, calendar, messaging…
- Each service has its own API format, authentication mechanism, and data structure
- You have 5 different AI applications that need to access subsets of these services
- Each integration requires custom code: understand the service’s API, write a wrapper function, handle its specific error formats, manage its authentication
That’s potentially 40 × 5 = 200 custom integrations, each requiring code, testing, and maintenance. When a service updates its API, every integration that touches it breaks. When you add a new AI application, it can’t use any existing integrations — they were all written for the old applications. When you swap the underlying LLM (say, from GPT-4 to Gemini), every tool definition needs to be reformatted for the new provider’s requirements.
This is the integration hell that Model Context Protocol (MCP) solves. MCP is an open standard, created by Anthropic in 2024, that defines one way for LLMs to communicate with any external system. Write an MCP server once for your CRM — any compliant LLM application can use it. Write an MCP client once in your AI application — it can connect to any MCP server. The integration graph goes from O(applications × services) custom integrations to O(applications + services) standard implementations.
What MCP Is: The Universal Adapter
The best analogy for MCP is the USB standard. Before USB, every device (keyboard, mouse, printer, camera) had its own connector — PS/2, serial, parallel, proprietary. Adding a new device meant finding the right port, often physically impossible if all ports were occupied. USB created a universal connector: one standard port, one standard protocol, any device from any manufacturer.
MCP is the USB of AI integrations. Instead of every AI application needing custom code to talk to every service, MCP defines:
- A standard way for services to advertise their capabilities (what tools, data, and templates they offer)
- A standard way for AI applications to discover those capabilities at runtime
- A standard way to request actions and receive results
The Three MCP Primitives
MCP defines exactly three types of things that a server can expose to clients. Understanding each one — and the clear distinction between them — is essential to using MCP correctly.
A Tool is a function that the LLM can call to perform an action. Tools have inputs (parameters) and outputs (return values). Critically, tools typically have side effects — they change something in the world: sending an email, updating a database record, creating a file, making a payment, sending a notification.
The LLM decides when to call a tool based on its description (the docstring) and the current context. The MCP server executes the tool and returns the result.
- send_email(to, subject, body) — sends an actual email
- create_ticket(title, priority, description) — creates a support ticket
- query_database(sql) — executes a SQL query
- search_web(query) — performs a live web search

A Resource is a piece of data that the LLM can read — a file, a database record, a document, a configuration. Resources are read-only: the LLM (through MCP) can retrieve them, but not modify them directly (that would require a Tool).
Resources are identified by URIs (Uniform Resource Identifiers — the same concept as URLs in web browsers). The MCP server knows how to fetch and return the content at each URI in a format the LLM can understand.
- file:///reports/q4-2025.md — the text content of a Markdown file
- db://customers/id/12345 — a specific customer record
- git://main/src/api.py — the content of a source code file
- config://app/settings — application configuration data

A Prompt is a pre-built template that guides the LLM in how to interact effectively with a specific resource or tool. Think of it as a "best practices guide" baked into the server — the server knows how to use its own tools best, and encodes that knowledge as prompt templates.
For example, a database MCP server might provide a prompt template that says: "When querying customer data, always filter by status='active' first, then limit results to 50, then sort by last_update. Describe any security-sensitive fields as [REDACTED] in your response." This guides the LLM to interact with the database in a safe, efficient way without the LLM having to figure it out from scratch.
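The three primitives can be pictured as a single manifest. The following dependency-free sketch is illustrative only — the names and layout are assumptions, not the real MCP wire format:

```python
# Toy manifest showing the three primitive kinds side by side.
# All entries are hypothetical examples, not a real server's catalog.
server_manifest = {
    "tools": {
        # Actions with side effects — the LLM calls these.
        "send_email": {"description": "Send an email. Has side effects."},
    },
    "resources": {
        # Read-only data, addressed by URI — no side effects.
        "file:///reports/q4-2025.md": "Q4 revenue grew 12%...",
    },
    "prompts": {
        # Best-practice templates the server ships for its own tools.
        "query_customers_safely": (
            "Filter by status='active' first, limit results to 50, "
            "describe security-sensitive fields as [REDACTED]."
        ),
    },
}

def list_capabilities(manifest: dict) -> dict:
    """What a client learns from the discovery step: names only, by kind."""
    return {kind: sorted(entries) for kind, entries in manifest.items()}

print(list_capabilities(server_manifest))
```

The discovery step returns only names and descriptions; the client fetches resource content or invokes tools in separate, later requests.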
MCP vs. Tool Function Calling: What’s the Difference?
If you already have tool function calling (Chapter 5), why do you need MCP? This is the most important conceptual question to answer clearly.
Tool function calling is a mechanism — a specific way for an LLM to trigger the execution of a pre-defined Python function. You write @tool on a function, bind it to an agent, and the LLM can call it. This works perfectly within one application, with a fixed set of tools, talking to one LLM provider.
MCP is a standard — a universal protocol that defines how LLMs and external systems communicate, regardless of which LLM or which system is involved. It’s not just about calling functions; it’s about a discoverable, provider-agnostic ecosystem where tools can be shared, reused, and composed.
The practical distinction: If you’re building a personal project with 3 fixed tools, function calling is simpler and completely sufficient. If you’re building an enterprise platform where 50 services need to be accessible to 10 different AI applications, MCP is the right architecture — you write 60 implementations instead of 500.
The MCP Architecture: How It Works
MCP defines a precise four-component architecture with a specific five-step interaction flow. Understanding each component and each step is essential to implementing MCP correctly.
The Four Components
Why this four-layer design? Each layer has a clear responsibility:
- The LLM focuses purely on reasoning — it doesn’t need to know how to authenticate with Salesforce or format a Postgres query
- The MCP Client focuses purely on protocol — translating LLM intent to standard requests and routing them to the right server
- The MCP Server focuses purely on the domain — it knows everything about the external system and handles its quirks
- The external system stays unchanged — you wrap it with an MCP server without modifying the underlying service
This is called layered architecture — a fundamental pattern in software engineering where each layer only communicates with the layer directly adjacent to it.
The Five-Step Interaction Flow
1. Discovery — the client asks the server what it offers:
Client → Server: {"jsonrpc": "2.0", "method": "tools/list", "id": 1}
2. Capability response — the server returns its tool manifest:
Server → Client: {"tools": [{"name": "send_email", "description": "...", "inputSchema": {...}}, ...]}
3. Reasoning — the LLM receives the manifest (e.g., the send_email tool with its description and parameters) and reasons: "I need to call send_email with these specific arguments." It formulates a structured request specifying exactly which tool to use and what arguments to pass. This reasoning happens inside the LLM based on the tool descriptions.
4. Tool call — the client sends a standard invocation request:
Client → Server: {"jsonrpc": "2.0", "method": "tools/call", "params": {"name": "send_email", "arguments": {"to": "alice@company.com", "subject": "Meeting tomorrow", "body": "..."}}, "id": 2}
5. Result — the server executes the tool and returns the outcome:
Server → Client: {"result": {"content": [{"type": "text", "text": "Email sent successfully. Message ID: msg_abc123"}]}, "id": 2}

Transport Mechanisms: How the Messages Actually Travel
MCP supports two transport mechanisms. The choice determines where the MCP server runs and how the client connects to it.
The MCP client starts the server as a subprocess (a child process). They communicate by reading and writing to standard input/output streams — the most fundamental Unix communication mechanism. The client writes JSON-RPC messages to the server's stdin; the server writes responses to its stdout.
This is exactly what happens with the filesystem server example: npx @modelcontextprotocol/server-filesystem /path/to/folder starts a Node.js process, and the MCP client communicates with it via STDIO.
Advantages: Zero network overhead. Extremely fast. No authentication needed (same machine, same trust boundary). Data never leaves your machine — ideal for sensitive data.
Disadvantages: Can't be shared across machines or teams. Server lifecycle tied to client lifecycle. Can't scale horizontally.
StdioServerParameters(command='npx', args=['@modelcontextprotocol/server-filesystem', '/path'])

The MCP server runs as a web service. Clients connect to it via HTTP (for sending requests) and Server-Sent Events (SSE, for receiving streaming responses). SSE is a web standard that lets a server push data to clients over a persistent HTTP connection — useful for long-running tool executions where the response might come in chunks.
This is what FastMCP uses when you run mcp_server.run(transport="http", host="127.0.0.1", port=8000). Any HTTP client that speaks MCP can connect to this server.
Advantages: Accessible from any machine on the network. Can be shared across teams and applications. Can be scaled independently. Can have its own authentication (API keys, OAuth).
Disadvantages: Network latency. Requires authentication. Data travels over the network — security considerations for sensitive data.
HttpServerParameters(url="http://localhost:8000")

Important Caveats: MCP Is Not Magic
Before diving into implementation, there are two critical limitations of MCP that the documentation often glosses over. Understanding these prevents costly architectural mistakes.
Caveat 1: MCP Wrapping a Bad API Produces a Bad Agent Integration
MCP is a standardized communication layer — it doesn’t improve the underlying API it wraps. If your ticketing system’s API only lets you retrieve tickets one at a time, an MCP server wrapping that API still only lets the agent retrieve tickets one at a time. The agent will be slow, expensive, and potentially inaccurate at scale.
Before building an MCP server, audit the underlying API for agent-friendliness:
- Does it support filtering? (The agent should be able to say “get me high-priority tickets” rather than “get all tickets, the agent will filter”)
- Does it support pagination with reasonable page sizes?
- Does it support batch operations where sequential access would be too slow?
- Does it return aggregates where individual-record access would require too many calls?
If the answer to any of these is “no,” the right solution is to improve the underlying API before wrapping it with MCP. MCP should be the last mile of standardization, not a band-aid over a poorly designed API.
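To make "agent-friendly" concrete, here is a hedged sketch contrasting the two access patterns, using an in-memory list as a stand-in for a real ticketing API (the field names and values are illustrative assumptions):

```python
from typing import Optional

# Toy ticket store standing in for the real service.
TICKETS = [
    {"id": 1, "priority": "high", "status": "open"},
    {"id": 2, "priority": "low", "status": "open"},
    {"id": 3, "priority": "high", "status": "closed"},
]

def get_ticket(ticket_id: int) -> dict:
    """Agent-hostile: one record per call. Summarizing 200 tickets = 200 calls."""
    return next(t for t in TICKETS if t["id"] == ticket_id)

def list_tickets(priority: Optional[str] = None,
                 status: Optional[str] = None,
                 limit: int = 50) -> list:
    """Agent-friendly: the server filters and paginates, not the LLM."""
    results = [t for t in TICKETS
               if (priority is None or t["priority"] == priority)
               and (status is None or t["status"] == status)]
    return results[:limit]

# One call returns exactly the records the agent asked for.
print(list_tickets(priority="high", status="open"))
```

An MCP server wrapping `list_tickets` gives the agent one cheap, precise call; one wrapping only `get_ticket` forces a slow, expensive loop.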
Caveat 2: MCP Requires Agent-Readable Data Formats
MCP exposes data to LLMs — and LLMs can only read text. If your MCP resource returns a PDF file, the LLM cannot read it. If your resource returns a binary database blob, the LLM cannot process it. MCP itself does not perform format conversion — that’s your server’s responsibility.
The golden rule: Every resource endpoint in your MCP server must return text that a human could read and understand: plain text, Markdown, JSON, CSV, HTML. Never return binary formats (PDF, DOCX, images, compressed files) directly as resource content.
If your underlying system stores PDFs, your MCP server should extract the text content (using a library like PyPDF2 or Marker) and return that to the LLM. If your system stores images, your MCP server should return a text description or metadata. The server is responsible for the translation — the LLM cannot do it.
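A sketch of what that translation layer might look like. The MIME-type routing is illustrative, and the PDF branch is stubbed — a real server would plug in an extraction library there (e.g., pypdf's PdfReader with page.extract_text()):

```python
def to_agent_text(content: bytes, mime_type: str) -> str:
    """Convert raw resource bytes into LLM-readable text, or fail loudly."""
    if mime_type in ("text/plain", "text/markdown", "application/json", "text/csv"):
        return content.decode("utf-8")
    if mime_type == "application/pdf":
        # A real server extracts text here before returning anything.
        # Stubbed in this sketch to keep it dependency-free.
        raise NotImplementedError("plug in a PDF text extractor")
    # Anything else is binary: never hand it to the LLM as-is.
    raise ValueError(f"unreadable format for an LLM: {mime_type}")

print(to_agent_text(b"# Q4 Report\nRevenue grew 12%.", "text/markdown"))
```

Failing loudly on unhandled formats is deliberate: a visible error during development is far better than silently feeding the LLM binary garbage in production.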
Hands-On: Filesystem Agent with ADK + MCP
Let’s build a real MCP integration: an ADK agent that can navigate and read your local file system through an MCP server.
Step 1: Understanding the Components
import os
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters
MCPToolset: This is ADK’s MCP client implementation. It manages the connection to an MCP server, handles the discovery protocol (asking the server what tools it offers), and translates ADK/LLM tool calls into properly formatted MCP requests. You use it as a “tool” in the agent’s tool list — but instead of a single function, it gives the agent access to everything the MCP server exposes.
StdioServerParameters: This tells MCPToolset how to start the MCP server process and communicate with it via STDIO. You provide the command (what to run) and args (the arguments to pass).
Step 2: Building the Path Safely
# Build a reliable absolute path — critical for MCP servers
TARGET_FOLDER_PATH = os.path.join(
os.path.dirname(os.path.abspath(__file__)), # directory containing agent.py
"mcp_managed_files" # folder name
)
# Create the folder if it doesn't exist yet
os.makedirs(TARGET_FOLDER_PATH, exist_ok=True)
Why os.path.abspath(__file__)? The MCP filesystem server requires an absolute path — a full path like /Users/kohsheen/projects/mcp_managed_files, not a relative path like ./mcp_managed_files. __file__ gives you the current script’s path, os.path.abspath converts it to absolute, and os.path.dirname gets just the directory part. The os.path.join then appends mcp_managed_files to create the full target path.
Why exist_ok=True? os.makedirs creates the directory and all missing parent directories. Without exist_ok=True, it would raise FileExistsError if the directory already exists. With it, the call is idempotent — safe to run multiple times.
Step 3: Configuring the Agent with MCPToolset
root_agent = LlmAgent(
model = 'gemini-2.0-flash',
name = 'filesystem_assistant_agent',
instruction = (
'Help the user manage their files. You can list files, read files, and write files. '
f'You are operating in the following directory: {TARGET_FOLDER_PATH}'
),
tools = [
MCPToolset(
connection_params = StdioServerParameters(
command = 'npx', # use npx to run the server package
args = [
'-y', # auto-confirm npm install
'@modelcontextprotocol/server-filesystem', # the MCP server package
TARGET_FOLDER_PATH, # the allowed directory
],
),
# tool_filter=['list_directory', 'read_file'] # Optional: restrict available tools
)
],
)
What does npx do? npx (Node Package Execute) is a tool that runs Node.js packages without installing them globally. When you run npx @modelcontextprotocol/server-filesystem, it downloads the package from npm (the Node.js package registry) if needed, then runs it. This is why you need Node.js installed — many community MCP servers are distributed as npm packages.
What does @modelcontextprotocol/server-filesystem do? This is an official MCP server that exposes filesystem operations as MCP tools. It provides tools like list_directory (list files in a folder), read_file (read a file’s content), write_file (write to a file), create_directory, move_file, etc. By passing TARGET_FOLDER_PATH as the argument, you restrict the server to only operate within that directory — a critical security measure.
What is tool_filter? MCPToolset will expose all tools the server offers unless you restrict it with tool_filter. For a filesystem server, you might only want the agent to be able to read files (not write them) in a particular context. tool_filter=['list_directory', 'read_file'] ensures only those two tools are passed to the LLM.
What is MCPToolset doing internally? When the agent is initialized, MCPToolset starts the subprocess (the npm package), sends the MCP discovery request (tools/list), receives the list of available tools with their schemas, and makes those tools available to the LLM. Every time the LLM wants to use a tool, MCPToolset sends a tools/call request to the subprocess and returns the result.
Step 4: Alternative Transport — Python3 Server
Instead of an npm package, you can run a Python MCP server:
connection_params = StdioServerParameters(
command = 'python3',
args = ['./agent/mcp_server.py'],
env = {
'SERVICE_ACCOUNT_PATH': SERVICE_ACCOUNT_PATH,
'DRIVE_FOLDER_ID': DRIVE_FOLDER_ID,
}
)
Why pass env? The subprocess (the MCP server) runs in its own process with its own environment variables. If your server needs credentials (a Google service account path, an API key, a folder ID), you pass them via the env dictionary. This is the secure way to pass credentials — not hardcoding them in the server’s source code.
Step 5: Alternative Transport — uvx for Python Packages
connection_params = StdioServerParameters(
command = 'uvx',
args = ['mcp-google-sheets@latest'],
env = {
'SERVICE_ACCOUNT_PATH': SERVICE_ACCOUNT_PATH,
'DRIVE_FOLDER_ID': DRIVE_FOLDER_ID,
}
)
What is uvx? uvx is to Python packages what npx is to Node.js packages — it runs Python packages in temporary isolated environments without permanent installation. It uses uv (an extremely fast Python package manager) under the hood. mcp-google-sheets@latest is a PyPI package that exposes Google Sheets operations as MCP tools. The @latest suffix ensures you always get the current version.
Building an MCP Server with FastMCP
FastMCP is a Python framework that dramatically simplifies building MCP servers. While the raw MCP protocol requires significant boilerplate code (handling JSON-RPC framing, schema generation, transport initialization), FastMCP lets you define tools with Python decorators and handles all the protocol complexity automatically.
Why FastMCP Exists
The raw MCP SDK requires you to manually write JSON schemas for each tool, implement the full JSON-RPC server loop, handle transport initialization, and register tools explicitly. FastMCP reduces this to a decorator:
@mcp_server.tool
def my_function(param: str) -> str:
"""Description for the LLM."""
return result
FastMCP reads the type hints (param: str), docstring, and function name to automatically generate the complete JSON schema. What would take 50 lines of boilerplate becomes 8 lines.
Building a Server
from fastmcp import FastMCP
# Initialize the server instance
mcp_server = FastMCP()
What does FastMCP() create? A server object that handles all MCP protocol details: accepting connections, responding to capability discovery requests, routing tool calls to the right Python functions, serializing responses as valid MCP messages.
@mcp_server.tool
def greet(name: str) -> str:
"""
Generates a personalized greeting for a person.
Use this when the user asks to greet someone or say hello to a person.
Args:
name: The name of the person to greet. Should be their first name.
Returns:
A friendly greeting string.
"""
return f"Hello, {name}! Nice to meet you."
What does @mcp_server.tool do? This is a Python decorator that registers the function as an MCP tool. FastMCP: (1) reads the function’s name (greet) and uses it as the tool name, (2) reads the type hint name: str and generates the JSON schema {"type": "object", "properties": {"name": {"type": "string"}}, "required": ["name"]}, (3) reads the docstring and uses it as the tool description that the LLM reads to decide when to call this tool. Every time an MCP client calls the greet tool, FastMCP calls this Python function with the provided arguments and returns the result.
Why is the docstring so important? The docstring is what the LLM reads to decide when to call this tool. Without “Use this when the user asks to greet someone,” the LLM might not know when greet is relevant. A good tool docstring answers: what does this do, when should it be called, what are the exact input requirements, and what does it return.
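The schema generation described in step (2) can be sketched in plain Python. This is a simplified toy — real FastMCP uses Pydantic and supports far more types — but it shows the core idea of deriving a JSON schema from a signature:

```python
import inspect
import typing

# Minimal type map; real frameworks handle lists, optionals, models, etc.
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def schema_from_signature(fn) -> dict:
    """Build a minimal JSON schema from a function's type hints."""
    hints = typing.get_type_hints(fn)
    hints.pop("return", None)  # the schema describes inputs only
    params = inspect.signature(fn).parameters
    return {
        "type": "object",
        "properties": {name: {"type": PY_TO_JSON[tp]} for name, tp in hints.items()},
        "required": [name for name in hints
                     if params[name].default is inspect.Parameter.empty],
    }

def greet(name: str) -> str:
    """Generates a personalized greeting for a person."""
    return f"Hello, {name}!"

print(schema_from_signature(greet))
# {'type': 'object', 'properties': {'name': {'type': 'string'}}, 'required': ['name']}
```

Parameters with defaults are omitted from "required" — which is why giving a parameter a sensible default is also an act of schema design.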
if __name__ == "__main__":
mcp_server.run(
transport = "http",
host = "127.0.0.1", # localhost only — not exposed to network
port = 8000
)
transport="http": Starts the server with HTTP+SSE transport, making it accessible via HTTP. The alternative is transport="stdio" for STDIO-based communication (when the server is started as a subprocess). For development and testing, HTTP is more convenient because you can test it with curl or a web browser.
host="127.0.0.1": Binds to localhost only. The server is only accessible from the same machine. Never use "0.0.0.0" in development — that would expose the server to your entire network.
Consuming the FastMCP Server with an ADK Agent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, HttpServerParameters
FASTMCP_SERVER_URL = "http://localhost:8000"
root_agent = LlmAgent(
model = 'gemini-2.0-flash',
name = 'fastmcp_greeter_agent',
instruction = 'You are a friendly assistant that can greet people by name. Use the "greet" tool.',
tools = [
MCPToolset(
connection_params = HttpServerParameters(
url = FASTMCP_SERVER_URL,
),
tool_filter = ['greet'] # only expose the greet tool to the LLM
)
],
)
HttpServerParameters(url=FASTMCP_SERVER_URL): Instead of starting a subprocess (STDIO), this connects to an already-running server at the specified URL. The MCPToolset sends an HTTP request to discover capabilities, then makes HTTP calls for each tool invocation.
tool_filter=['greet']: Even though the FastMCP server might expose multiple tools in the future, we only want to give this specific agent access to the greet tool. This is the principle of least privilege — give components only the access they actually need, nothing more.
The Complete Picture
User: "Greet John Doe"
↓
LLM (Gemini): reads tool manifest, sees greet tool with description
↓
LLM decides: "I should call greet with name='John Doe'"
↓
MCPToolset (HTTP client): POST http://localhost:8000 → {"method": "tools/call", "params": {"name": "greet", "arguments": {"name": "John Doe"}}}
↓
FastMCP server: routes to greet() function, executes greet("John Doe")
↓
Python function: returns "Hello, John Doe! Nice to meet you."
↓
FastMCP server: serializes as MCP response → {"result": {"content": [{"type": "text", "text": "Hello, John Doe! Nice to meet you."}]}}
↓
MCPToolset: returns result to LLM as new context
↓
LLM: formulates final response to user
↓
User: "Hello, John Doe! Nice to meet you."
Nine Practical Applications of MCP
Database Integration
LLMs query structured data in natural language. The MCP server translates to SQL, runs the query, and returns results as readable text. Example: "Show me all customers who haven't ordered in 6 months" → SQL → formatted table.
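As a minimal sketch of the server-side piece of this pattern — using Python's built-in sqlite3 and toy data as stand-ins, not the actual MCP Toolbox — the tool behind that translation might look like:

```python
import sqlite3

# Toy customer table standing in for a real warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, last_order_days_ago INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("Alice", 200), ("Bob", 30), ("Carol", 400)])

def query_database(sql: str) -> str:
    """Run a query and return rows as plain text the LLM can read.

    A real server would also validate the SQL and enforce read-only access.
    """
    rows = conn.execute(sql).fetchall()
    return "\n".join(", ".join(str(v) for v in row) for row in rows)

# "Customers who haven't ordered in 6 months" → SQL → readable text
print(query_database(
    "SELECT name FROM customers WHERE last_order_days_ago > 180 ORDER BY name"))
# Alice
# Carol
```

Note that the tool returns formatted text, not row tuples — the "return text, always" rule from the caveats section applies here too.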
MCP Toolbox for Databases → BigQuery, Postgres, MySQL

Generative Media Orchestration
Agents coordinate multi-modal content creation: generate an image (Imagen), create a voiceover (Chirp 3 HD), compose background music (Lyria), and compile into a video (Veo) — all through MCP tools.
MCP Tools for Genmedia → Imagen, Veo, Chirp 3 HD, Lyria

External API Interaction
Any external API can be wrapped in an MCP server: weather, stock prices, CRM systems, ticketing systems, payment processors. The agent interacts with all of them through the same standard protocol.
Custom MCP servers wrapping any REST API

Intelligent Document Search
Instead of returning entire documents, an MCP resource server extracts the specific clause, figure, or statement that answers the user's question. The LLM's reasoning capability enables precision retrieval beyond keyword search.
Document store MCP server + LLM reasoning

Complex Workflow Orchestration
Multi-step workflows across multiple systems: retrieve customer data → generate personalized content → draft tailored email → schedule send → log in CRM. Each step is a different MCP server, coordinated by one agent.
Multiple MCP servers + agent orchestration

Legacy System Modernization
Wrap existing legacy systems in MCP servers without rewriting them. The MCP layer makes a 1990s mainframe accessible to a 2025 LLM application. The legacy system runs unchanged; the MCP server translates.
MCP wrapper → any legacy API

IoT Device Control
Natural language control of physical devices: "Turn off the lights in the conference room" → MCP tool call → smart building API → device command. Agents can monitor and control entire IoT ecosystems.
IoT platform MCP server → physical devices

Financial Services Automation
Agents analyze market data, execute trades, generate personalized financial advice, and automate regulatory reporting — all through MCP connections to financial data sources and trading platforms.
Financial data MCP servers + compliance tools

Custom Internal Tool Ecosystems
Organizations build MCP servers for proprietary internal systems (code repos, incident management, deployment pipelines) and make them available to any AI application in the company — once, reused everywhere.
FastMCP → internal systems → company-wide reuse

Common Mistakes When Implementing MCP
Mistake 1: Returning binary formats from resource endpoints. The single most common mistake. Your document store contains PDFs. You create a resource endpoint that returns the raw PDF bytes. The LLM receives binary garbage. Instead: extract text from PDFs (use pymupdf, marker, or pdfplumber), return Markdown or plain text. Always ask: “Can a human read this response in a text editor?” If no, your MCP server needs to convert it.
Mistake 2: Wrapping bad APIs without improving them. You wrap a ticketing API that only supports retrieving one ticket at a time. The agent needs to summarize 200 high-priority tickets. It makes 200 sequential API calls — slow, expensive, often hitting rate limits. Instead: improve the underlying API to support filtering (?status=high_priority&limit=50) before wrapping it. MCP is a presentation layer, not a performance layer.
Mistake 3: No tool_filter — exposing too many tools. A filesystem MCP server exposes list_directory, read_file, write_file, delete_file, create_directory, and move_file. In a read-only agent, you expose all of them. The LLM, trying to be helpful, accidentally deletes a file. Always apply tool_filter to restrict the agent to only the tools it actually needs.
Mistake 4: Using relative paths with STDIO servers. The STDIO server runs as a subprocess with its own working directory — often different from your agent script’s directory. Relative paths like ./data resolve to different locations in the subprocess. Always use absolute paths when configuring STDIO-based MCP servers.
Mistake 5: Starting the FastMCP server on 0.0.0.0 in development. host="0.0.0.0" binds to all network interfaces — your MCP server becomes accessible to every device on your local network, potentially exposing sensitive tools. Always use host="127.0.0.1" for local development. Use proper authentication and access controls for remote deployment.
Mistake 6: Ignoring authentication. A production MCP server without authentication is an unauthenticated API endpoint that any program on the network can call. Implement API key verification or OAuth before deploying any MCP server that handles sensitive data or performs write operations.
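A minimal sketch of API-key verification for an HTTP-transport server. The x-api-key header name and the in-memory key store are illustrative assumptions — production servers should use OAuth or a secrets manager rather than keys in source:

```python
import hmac

# Hypothetical provisioned keys; in production, load from a secret store.
VALID_KEYS = {"srv-key-123"}

def is_authorized(headers: dict) -> bool:
    """Check the presented key with a constant-time comparison."""
    presented = headers.get("x-api-key", "")
    # hmac.compare_digest avoids timing attacks that leak key prefixes.
    return any(hmac.compare_digest(presented, key) for key in VALID_KEYS)

print(is_authorized({"x-api-key": "srv-key-123"}))  # True
print(is_authorized({}))                            # False
```

The check belongs at the transport layer, before any tools/list or tools/call request is dispatched — an unauthenticated client should never even learn what tools exist.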
At a Glance
An open standard that defines how LLMs and external systems communicate — replacing N×M custom integrations with N+M standard implementations. MCP servers expose tools (actions), resources (data), and prompts (templates). MCP clients connect to any server through a standard protocol.
Custom tool integrations are brittle, non-reusable, and LLM-provider-specific. MCP creates an ecosystem where a tool built once works with any compliant LLM. Discovery is dynamic — agents learn what servers can do at runtime without being redeployed when capabilities change.
Use direct function calling for small, fixed tool sets in single applications. Use MCP when tools need to be shared across multiple AI applications, when you want LLM-agnostic integrations, or when you're building an enterprise system that will evolve over time.
Key Takeaways
- MCP is a protocol, not a product. It defines the rules for how LLMs and external systems communicate. Like HTTP enables any browser to talk to any web server, MCP enables any compliant LLM client to talk to any compliant MCP server.
- Three primitives, one standard. Every MCP server exposes some combination of: Tools (actions that change the world), Resources (data that can be read), and Prompts (templates that guide the LLM). Understand which category each capability belongs to before designing your server.
- Dynamic discovery is MCP’s superpower. The LLM doesn’t need to know what tools exist at startup. It queries the MCP server at runtime — “what can you do?” — and the server responds with its current capabilities. Add new tools to the server and all connected agents immediately have access, without redeployment.
- STDIO for local, HTTP+SSE for remote. Local servers (same machine) use STDIO — fast, simple, secure. Remote servers (across a network) use HTTP + Server-Sent Events — accessible, scalable, requires authentication.
- MCPToolset in ADK is your client implementation. It manages the connection lifecycle, handles the discovery protocol, and makes MCP server tools available to the LLM exactly like any other ADK tool. The LLM can’t tell the difference between a locally-defined function and an MCP-connected tool.
- FastMCP removes boilerplate. The decorator @mcp_server.tool converts any Python function into a fully compliant MCP tool — complete with JSON schema generation from type hints, docstring-based descriptions, and automatic protocol handling.
- MCP is only as good as the underlying API. Wrapping a poorly designed API in MCP produces a poorly designed MCP server. Always improve the underlying API (add filtering, batch operations, agent-friendly response formats) before wrapping it. MCP standardizes communication — it doesn’t fix bad design.
- Return text, always. Every resource and tool response must return agent-readable text. Binary formats (PDFs, images, compressed files) must be converted by the MCP server before being returned to the LLM. The server is responsible for the translation — not the LLM, not the client.
Next up — Chapter 11: Goal Setting and Monitoring, where agents don’t just react to inputs but actively pursue defined objectives and measure their own progress.