Adding custom tools to an MCP server in Python

Kashish Hora

Co-founder of MCPcat

The Quick Answer

Add custom tools to your MCP server using the @mcp.tool() decorator from FastMCP:

from fastmcp import FastMCP

mcp = FastMCP("My Server")

@mcp.tool()
def calculate_sum(a: int, b: int) -> int:
    """Adds two numbers together"""
    return a + b

This decorator transforms Python functions into AI-executable tools. Type hints generate the parameter schema, docstrings provide descriptions, and the function name becomes the tool identifier.
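
Once a tool is registered, the server needs to run so a client can call it. A minimal sketch, assuming FastMCP's default stdio transport:

if __name__ == "__main__":
    # Start the server; stdio is the default transport for local clients
    mcp.run()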

Prerequisites

  • Python 3.10 or later installed
  • FastMCP package (pip install fastmcp)
  • Basic understanding of Python decorators and type hints

Installation

Install FastMCP to access the tool decorator functionality:

$ pip install fastmcp

For the standard MCP SDK approach:

$ pip install mcp

Basic Tool Creation

Tools in MCP servers expose Python functions as executable operations for AI systems. The @mcp.tool() decorator handles all the complexity of protocol compliance, schema generation, and validation automatically.

Using the FastMCP Decorator

The simplest way to create a tool involves decorating a standard Python function:

from fastmcp import FastMCP

mcp = FastMCP("Calculator Server")

@mcp.tool()
def multiply(x: float, y: float) -> float:
    """Multiplies two numbers and returns the result"""
    return x * y

This approach automatically extracts several key pieces of information from your function. The function name becomes the tool's identifier in the MCP protocol. The docstring transforms into the tool's description, helping AI systems understand when and how to use it. Most importantly, the type hints generate a JSON Schema that validates inputs and documents the expected data types.
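
You can exercise the tool locally before wiring up a real AI client. A hedged sketch using FastMCP's in-memory Client (assuming the FastMCP 2.x client API):

import asyncio
from fastmcp import Client

async def main():
    # Connect to the server object in memory and call the tool by name
    async with Client(mcp) as client:
        result = await client.call_tool("multiply", {"x": 6.0, "y": 7.0})
        print(result)

asyncio.run(main())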

Type Annotations and Schema Generation

FastMCP relies heavily on Python's type system to create robust tool interfaces. Every parameter must have a type annotation, as these directly translate into the tool's input schema:

@mcp.tool()
def search_products(
    query: str,                    # Required string parameter
    max_results: int = 10,         # Optional with default
    include_sold_out: bool = False # Optional boolean
) -> list[dict]:
    """Searches product catalog with filters"""
    # Implementation here
    return results

The generated schema includes validation rules, default values, and type constraints. This ensures that AI systems send properly formatted requests and understand the expected response structure.
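
If you need tighter constraints than bare types, you can annotate parameters with Pydantic's Field, which FastMCP folds into the generated schema. A sketch of the same tool with bounds added (Pydantic v2 constraint keywords assumed; the _bounded suffix is only to avoid clashing with the earlier definition):

from typing import Annotated
from pydantic import Field

@mcp.tool()
def search_products_bounded(
    query: Annotated[str, Field(min_length=1, description="Search terms")],
    max_results: Annotated[int, Field(ge=1, le=100)] = 10,
    include_sold_out: bool = False
) -> list[dict]:
    """Searches product catalog with validated filters"""
    return []  # Implementation omitted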

Advanced Tool Configuration

While the decorator's defaults work well for most cases, you can customize various aspects of tool behavior through decorator parameters and function design patterns.

Custom Names and Descriptions

Override the automatic naming when you need more control over how tools appear to AI systems:

@mcp.tool(
    name="vector_dot_product",
    description="Calculates the dot product of two 3D vectors"
)
def dot_product(v1: list[float], v2: list[float]) -> float:
    """Internal documentation for developers"""
    return sum(a * b for a, b in zip(v1, v2))

This separation allows you to maintain developer-friendly function names while exposing clearer identifiers to AI systems.

Using Pydantic Models

For complex input structures, Pydantic models provide enhanced validation and documentation:

from pydantic import BaseModel, Field

class EmailRequest(BaseModel):
    recipient: str = Field(description="Email address of recipient")
    subject: str = Field(description="Email subject line")
    body: str = Field(description="Email body content")
    cc: list[str] = Field(default=[], description="CC recipients")

@mcp.tool()
def send_email(request: EmailRequest) -> dict:
    """Sends an email using the configured mail server"""
    # Validate email format, send via SMTP
    return {"status": "sent", "message_id": "..."}

Pydantic models enable nested validation, custom validators, and generate detailed schemas that help AI systems construct valid requests.
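
For example, a custom validator can reject malformed input before the tool body ever runs. A sketch extending the model above with Pydantic v2's field_validator:

from pydantic import BaseModel, Field, field_validator

class EmailRequest(BaseModel):
    recipient: str = Field(description="Email address of recipient")
    subject: str = Field(description="Email subject line")
    body: str = Field(description="Email body content")
    cc: list[str] = Field(default_factory=list, description="CC recipients")

    @field_validator("recipient")
    @classmethod
    def check_recipient(cls, value: str) -> str:
        # Reject obviously malformed addresses before send_email executes
        if "@" not in value:
            raise ValueError("recipient must be a valid email address")
        return value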

Accessing Server Context

Tools often need access to server resources like database connections or API clients. The Context parameter provides this access:

from fastmcp import Context

@mcp.tool()
def query_database(sql: str, context: Context) -> list[dict]:
    """Executes a read-only SQL query"""
    db = context.lifespan_context["database"]
    
    # Validate it's a SELECT query
    if not sql.strip().upper().startswith("SELECT"):
        raise ValueError("Only SELECT queries allowed")
    
    return db.execute(sql).fetchall()

The context object is automatically injected when the tool runs, giving access to resources initialized during server startup.
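
The lifespan context referenced above is populated when the server starts. A minimal sketch of that wiring, assuming FastMCP's lifespan parameter and a hypothetical connect_to_database helper:

from contextlib import asynccontextmanager
from fastmcp import FastMCP

@asynccontextmanager
async def lifespan(server: FastMCP):
    db = await connect_to_database()  # hypothetical async helper
    try:
        # Whatever is yielded here becomes the lifespan context
        yield {"database": db}
    finally:
        await db.close()

mcp = FastMCP("Database Server", lifespan=lifespan)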

Tool Annotations and Metadata

Behavioral annotations communicate important characteristics about your tools to AI systems, helping them make better decisions about when and how to use each tool.

from mcp.types import ToolAnnotations

@mcp.tool(annotations=ToolAnnotations(
    readOnlyHint=True,      # Doesn't modify state
    idempotentHint=True,    # Safe to retry
    openWorldHint=True      # Interacts with external systems
))
def fetch_weather(city: str) -> dict:
    """Retrieves current weather data from external API"""
    # API call implementation
    return weather_data

These hints guide AI behavior:

  • readOnlyHint: Tool only reads data, never modifies it
  • destructiveHint: Tool performs irreversible actions (see the sketch after this list)
  • idempotentHint: Multiple calls produce the same result
  • openWorldHint: Tool accesses external resources
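
The one hint not demonstrated above is destructiveHint. A hedged sketch of how a tool that performs an irreversible action might be annotated:

@mcp.tool(annotations=ToolAnnotations(destructiveHint=True))
def delete_record(record_id: str) -> dict:
    """Permanently deletes a record by ID (hypothetical example)"""
    # A real implementation would call into your data layer here
    return {"status": "deleted", "record_id": record_id}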

Common Issues

Error: Functions with *args or **kwargs not supported

FastMCP cannot generate schemas for variable arguments. Every parameter must be explicitly defined with a type annotation. This ensures AI systems know exactly what data to provide.

# Wrong - cannot generate schema
@mcp.tool()
def bad_function(*args, **kwargs):
    pass

# Correct - explicit parameters
@mcp.tool()
def good_function(name: str, value: int):
    pass

Error: Missing type annotations

All parameters and return values need type hints for schema generation:

# Wrong - missing type hints
@mcp.tool()
def process_data(data):
    return data.upper()

# Correct - fully typed
@mcp.tool()
def process_data(data: str) -> str:
    return data.upper()

Issue: Tool not recognized by AI client

Even when tools register correctly, some clients may not immediately recognize them. Ensure your docstring clearly describes the tool's purpose, as AI systems use this to understand when to invoke each tool. Check that your server is running and the client has successfully connected to it.
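
To confirm the server side is healthy, you can list the registered tools directly, for example with the in-memory Client shown earlier (a sketch; list_tools is assumed from the FastMCP client API):

import asyncio
from fastmcp import Client

async def check_tools():
    async with Client(mcp) as client:
        # Each entry carries the name and description the AI client will see
        for tool in await client.list_tools():
            print(tool.name, "-", tool.description)

asyncio.run(check_tools())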

Examples

Building a File Processing Tool

This example demonstrates a practical tool that processes text files with proper error handling and type safety:

from pathlib import Path
from fastmcp import FastMCP
from mcp.types import ToolAnnotations

mcp = FastMCP("File Processor")

@mcp.tool(annotations=ToolAnnotations(readOnlyHint=True))
def analyze_text_file(
    file_path: str,
    encoding: str = "utf-8"
) -> dict:
    """Analyzes a text file and returns statistics
    
    Reads the specified file and calculates word count,
    line count, and character frequencies.
    """
    path = Path(file_path)
    
    if not path.exists():
        raise FileNotFoundError(f"File not found: {file_path}")
    
    if not path.is_file():
        raise ValueError(f"Path is not a file: {file_path}")
    
    content = path.read_text(encoding=encoding)
    words = content.split()
    lines = content.splitlines()
    
    # Character frequency analysis
    char_freq = {}
    for char in content.lower():
        if char.isalpha():
            char_freq[char] = char_freq.get(char, 0) + 1
    
    return {
        "file_path": str(path.absolute()),
        "word_count": len(words),
        "line_count": len(lines),
        "character_count": len(content),
        "top_characters": sorted(
            char_freq.items(), 
            key=lambda x: x[1], 
            reverse=True
        )[:5]
    }

This tool showcases several best practices. It uses the read-only annotation since it doesn't modify files. The implementation includes comprehensive error handling for missing files and invalid paths. The return value provides structured data that AI systems can easily parse and use in further operations.

Creating an API Integration Tool

Here's how to build a tool that integrates with external APIs while managing authentication and rate limiting:

import httpx
from fastmcp import FastMCP, Context
from mcp.types import ToolAnnotations

mcp = FastMCP("API Gateway")

@mcp.tool(annotations=ToolAnnotations(
    openWorldHint=True,
    idempotentHint=True
))
async def fetch_github_repo(
    owner: str,
    repo: str,
    context: Context,
    include_stats: bool = False
) -> dict:
    """Fetches GitHub repository information
    
    Retrieves repository metadata from GitHub's API,
    optionally including traffic and contributor statistics.
    """
    # Get API token from server context
    token = context.lifespan_context.get("github_token")
    headers = {"Authorization": f"token {token}"} if token else {}
    
    async with httpx.AsyncClient() as client:
        # Fetch basic repo info
        response = await client.get(
            f"https://api.github.com/repos/{owner}/{repo}",
            headers=headers,
            timeout=10.0
        )
        response.raise_for_status()
        repo_data = response.json()
        
        result = {
            "name": repo_data["name"],
            "description": repo_data["description"],
            "stars": repo_data["stargazers_count"],
            "language": repo_data["language"],
            "created_at": repo_data["created_at"]
        }
        
        if include_stats:
            # Fetch additional statistics
            stats_response = await client.get(
                f"https://api.github.com/repos/{owner}/{repo}/stats/contributors",
                headers=headers
            )
            if stats_response.status_code == 200:
                result["top_contributors"] = [
                    {"login": c["author"]["login"], "commits": c["total"]}
                    for c in stats_response.json()[:5]
                ]
        
        return result

This asynchronous tool demonstrates external API integration with proper timeout handling and optional data fetching. The tool uses context to access API credentials securely and includes the openWorldHint annotation to indicate external system interaction. Error handling via raise_for_status() ensures clean failure reporting to the AI system.
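
The github_token read from the lifespan context would be supplied at startup, for example from an environment variable. A hedged sketch mirroring the lifespan pattern shown earlier (GITHUB_TOKEN is an assumed variable name):

import os
from contextlib import asynccontextmanager

@asynccontextmanager
async def lifespan(server):
    # Expose the token, if set, to tools via the lifespan context
    yield {"github_token": os.environ.get("GITHUB_TOKEN")}

mcp = FastMCP("API Gateway", lifespan=lifespan)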