Building an MCP server in Python using FastMCP

Kashish Hora

Co-founder of MCPcat

The Quick Answer

Build an MCP server in Python with FastMCP by installing the framework and decorating functions as tools:

$pip install fastmcp

# server.py
from fastmcp import FastMCP

mcp = FastMCP("My Server")

@mcp.tool
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

if __name__ == "__main__":
    mcp.run()

Run with: fastmcp run server.py

This creates a minimal MCP server that exposes an "add" function to AI assistants. The @mcp.tool decorator handles all protocol details automatically.

Prerequisites

  • Python 3.10 or higher
  • pip or poetry package manager
  • Basic understanding of Python decorators
  • Familiarity with async/await (for async tools)

Installation

FastMCP can be installed using pip or poetry. Both package managers work well for FastMCP projects.

# Using pip
$pip install fastmcp
 
# Using poetry
$poetry add fastmcp
 
# With optional WebSocket support (quotes keep zsh from expanding the brackets)
$pip install "fastmcp[websockets]"
# or
$poetry add "fastmcp[websockets]"

Verify installation:

$fastmcp version

Basic Server Structure

Every FastMCP server starts with creating a server instance and decorating functions. The framework handles all the protocol complexity, letting you focus on your business logic. Here's a minimal server that demonstrates both tools (actions) and resources (data access):

from fastmcp import FastMCP

# Create server instance
mcp = FastMCP("Weather Server")

@mcp.tool
def get_weather(city: str, units: str = "celsius") -> dict:
    """Get current weather for a city"""
    # Your implementation here
    return {"temp": 22, "city": city, "units": units}

@mcp.resource("weather://{city}/forecast")
def forecast(city: str) -> str:
    """Get weather forecast"""
    return f"Sunny in {city} for the next 3 days"

if __name__ == "__main__":
    mcp.run()

Adding Tools

Tools are the primary way MCP servers expose functionality to AI assistants. Each tool is a Python function that can accept parameters and return results. FastMCP uses decorators to automatically handle parameter validation, type conversion, and protocol compliance.

Here's how to create both synchronous and asynchronous tools:

from fastmcp import FastMCP
from typing import List, Optional
import asyncio

mcp = FastMCP("Database Tools")

# Synchronous tool
@mcp.tool
def query_users(limit: int = 10) -> List[dict]:
    """Query users from database"""
    return [{"id": i, "name": f"User {i}"} for i in range(limit)]

# Asynchronous tool
@mcp.tool
async def create_user(name: str, email: str) -> dict:
    """Create a new user"""
    await asyncio.sleep(0.1)  # Simulate DB operation
    return {"id": 123, "name": name, "email": email}

FastMCP automatically handles both sync and async functions. Use async for I/O-bound operations like database queries or API calls to improve server performance.
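The payoff is easy to demonstrate outside of FastMCP: under asyncio, two simulated I/O waits overlap, so the total time is close to the slowest call rather than the sum. A plain-asyncio sketch (no FastMCP required):

```python
import asyncio
import time

async def fake_db_call(delay: float) -> str:
    # Simulate an I/O-bound operation (e.g. a database query)
    await asyncio.sleep(delay)
    return f"done after {delay}s"

async def main() -> float:
    start = time.perf_counter()
    # Both "queries" wait concurrently instead of back-to-back
    await asyncio.gather(fake_db_call(0.1), fake_db_call(0.1))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"elapsed: {elapsed:.2f}s")  # ~0.1s, not 0.2s
```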

Adding Resources

While tools enable actions, resources provide read-only access to data. Resources use URI patterns to create intuitive, REST-like interfaces for accessing information. They're ideal for exposing configuration, state, or any data that AI assistants might need to reference.

The key difference: tools perform actions ("do something"), while resources retrieve information ("get something").

from fastmcp import FastMCP
import json

mcp = FastMCP("Data Server")

# Static resource
@mcp.resource("config://app/settings")
def app_settings() -> str:
    """Application configuration"""
    return json.dumps({
        "version": "1.0.0",
        "debug": False,
        "max_connections": 100
    })

# Dynamic resource with parameters
@mcp.resource("user://{user_id}/profile")
def user_profile(user_id: str) -> dict:
    """Get user profile by ID"""
    return {
        "id": user_id,
        "name": f"User {user_id}",
        "created": "2024-01-01"
    }
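Conceptually, the `{city}` and `{user_id}` placeholders behave like URL route parameters: the incoming URI is matched against the template and the captured segments become function arguments. A rough stdlib sketch of that matching idea (not FastMCP's actual implementation):

```python
import re

def match_uri(template: str, uri: str):
    """Turn 'user://{user_id}/profile' into a regex and extract parameters."""
    parts = re.split(r"(\{\w+\})", template)
    regex = ""
    for part in parts:
        if part.startswith("{") and part.endswith("}"):
            # Placeholder: capture one path segment as a named group
            regex += f"(?P<{part[1:-1]}>[^/]+)"
        else:
            # Literal text: escape any regex metacharacters
            regex += re.escape(part)
    m = re.fullmatch(regex, uri)
    return m.groupdict() if m else None

print(match_uri("user://{user_id}/profile", "user://42/profile"))
# → {'user_id': '42'}
```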

Error Handling

Proper error handling is crucial for production MCP servers. FastMCP provides two main strategies: client-visible errors using ToolError for expected issues, and masked errors for internal failures. This dual approach ensures security while maintaining helpful error messages for AI assistants.

from fastmcp import FastMCP
from fastmcp.exceptions import ToolError

mcp = FastMCP("Secure Server", mask_error_details=True)

@mcp.tool
def divide(a: float, b: float) -> float:
    """Divide two numbers with error handling"""
    if b == 0:
        # Client-visible error
        raise ToolError("Division by zero is not allowed")
    
    try:
        result = a / b
        if result > 1000000:
            # Internal error - details masked from client
            raise ValueError("Result too large for processing")
        return result
    except Exception as e:
        # Generic error for unexpected issues
        raise ToolError("Calculation failed") from e

The mask_error_details=True setting ensures that internal exceptions don't leak sensitive information to clients, while ToolError messages are always passed through for debugging.
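The two-tier behavior can be modeled in a few lines of plain Python; this is a simplified illustration of the idea, not FastMCP's internals:

```python
class ToolError(Exception):
    """Errors whose message is deliberately safe to show to the client."""

def run_tool(func, *args, mask_error_details: bool = True):
    try:
        return func(*args)
    except ToolError as e:
        return f"error: {e}"  # intentional, client-visible message
    except Exception as e:
        if mask_error_details:
            return "error: internal server error"  # details hidden
        return f"error: {e}"

def divide(a, b):
    if b == 0:
        raise ToolError("Division by zero is not allowed")
    return a / b

print(run_tool(divide, 10, 0))  # error: Division by zero is not allowed
print(run_tool(len, 5))         # error: internal server error (TypeError masked)
```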

Using Context

The Context object provides powerful capabilities for logging, LLM sampling, and session management. By adding a Context parameter to any tool function, you gain access to these features without managing the complexity yourself. This is especially useful for tools that need to interact with the AI assistant or maintain conversation state.

from fastmcp import FastMCP, Context
import logging

mcp = FastMCP("Context-Aware Server")

@mcp.tool
async def process_with_context(
    ctx: Context, 
    message: str
) -> str:
    """Process message using context features"""
    # Log to the MCP client (Context logging methods are async)
    await ctx.info(f"Processing message: {message}")
    
    # Ask the client's LLM for a summary via sampling
    response = await ctx.sample(
        f"Summarize this: {message}",
        max_tokens=100,
    )
    
    # Log completion
    await ctx.debug(f"Generated summary: {response.text}")
    
    return response.text

Context enables sophisticated interactions: logging helps with debugging, while LLM sampling allows tools to leverage AI capabilities within their execution.

Configuration

FastMCP servers can be configured for different environments and use cases. The framework provides sensible defaults for development while offering production-ready options for deployment.

from fastmcp import FastMCP

# Production configuration
mcp = FastMCP(
    "Production Server",
    mask_error_details=True,  # Hide internal error details from clients
    dependencies=["requests", "sqlalchemy"]  # Declared for `fastmcp install`
)

# Development configuration
mcp = FastMCP(
    "Dev Server",
    mask_error_details=False,  # Show full errors
    log_level="DEBUG"
)

Choose configuration based on your deployment environment: development servers benefit from full error details and debug logging, while production servers should mask sensitive information and implement rate limiting.
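One common way to switch between the two is an environment variable, so a single entry point serves both environments. A sketch using a hypothetical APP_ENV variable:

```python
import os

def server_settings() -> dict:
    """Pick FastMCP constructor options based on APP_ENV (hypothetical variable)."""
    if os.environ.get("APP_ENV", "development") == "production":
        return {"mask_error_details": True}
    return {"mask_error_details": False}

# mcp = FastMCP("My Server", **server_settings())
print(server_settings())
```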

Running Your Server

FastMCP supports multiple transport protocols. STDIO is the default for local development and Claude Desktop integration, while SSE (Server-Sent Events) enables web-based deployments. Choose based on your deployment target.

# server.py
from fastmcp import FastMCP

mcp = FastMCP("My Server")

# ... tool definitions ...

if __name__ == "__main__":
    # Default: STDIO transport
    mcp.run()
    
    # SSE transport with port
    # mcp.run(transport="sse", port=6278)
Run from the command line:

# Run with CLI (recommended)
$fastmcp run server.py
 
# Run with specific transport
$fastmcp run server.py --transport sse --port 8000
 
# Run with debugging
$fastmcp dev server.py --log-level DEBUG
 
# Run directly with Python
$python server.py

Testing Your Server

FastMCP includes a built-in client for testing your server without connecting to an AI assistant. This enables unit testing, integration testing, and debugging during development.

# test_server.py
import asyncio
from fastmcp import Client
from server import mcp

async def test_tools():
    client = Client(mcp)
    
    async with client:
        # Test tool call
        result = await client.call_tool(
            "add",
            {"a": 5, "b": 3}
        )
        print(f"Addition result: {result}")
        
        # Test resource read
        weather = await client.read_resource(
            "weather://london/forecast"
        )
        print(f"Weather: {weather}")

if __name__ == "__main__":
    asyncio.run(test_tools())

Claude Desktop Integration

Integrating your FastMCP server with Claude Desktop requires adding configuration to specify how Claude should launch and communicate with your server. The configuration supports both development and production setups.

Add to ~/Library/Application Support/Claude/claude_desktop_config.json (macOS; on Windows the file lives at %APPDATA%\Claude\claude_desktop_config.json):

{
  "mcpServers": {
    "my-server": {
      "command": "python",
      "args": [
        "-m",
        "fastmcp",
        "run",
        "/path/to/your/server/server.py"
      ]
    }
  }
}

For a production setup with secrets supplied via environment variables (Claude Desktop communicates over STDIO, so keep the default transport):

{
  "mcpServers": {
    "production-server": {
      "command": "fastmcp",
      "args": [
        "run",
        "/path/to/server.py"
      ],
      "env": {
        "API_KEY": "your-api-key",
        "DATABASE_URL": "postgresql://..."
      }
    }
  }
}

Common Issues

Here are the most common issues developers encounter with FastMCP and their solutions:

Error: SSE connection not established

# Use SSE transport instead of STDIO
if __name__ == "__main__":
    mcp.run(transport="sse", port=6278)

Error: No module named 'mcp'

# Ensure virtual environment is activated
$pip install fastmcp
# Or specify Python path in Claude config

Error: Tool not found

# Ensure tool is decorated and server is running
@mcp.tool  # Don't forget the decorator
def my_tool():
    pass

Error: Invalid type in tool parameters

# Use supported types (Pydantic-compatible)
from typing import Any, Dict, List, Optional, Union

@mcp.tool
def process(
    items: List[str],  # ✓ Supported
    config: Dict[str, Any],  # ✓ Supported
    custom_obj: MyClass  # ✗ Not supported - use dict instead
) -> dict:
    pass

FastMCP supports all Pydantic-compatible types including primitives, collections, dates, UUIDs, and enums. Custom classes must be converted to dictionaries.
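If your data naturally lives in a custom class, a dataclass plus `dataclasses.asdict` is a convenient way to hand back a supported type; this is a general Python pattern, not FastMCP-specific:

```python
from dataclasses import dataclass, asdict

@dataclass
class UserRecord:
    id: int
    name: str

def get_user(user_id: int) -> dict:
    """Return a plain dict that any MCP client can consume."""
    record = UserRecord(id=user_id, name=f"User {user_id}")
    return asdict(record)  # convert the custom class to a dict

print(get_user(7))  # → {'id': 7, 'name': 'User 7'}
```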

Examples

File Operations Server

Many AI assistants need to work with local files. This server provides safe, controlled access to file operations with proper error handling and logging. It's useful for document processing, configuration management, or any workflow involving file manipulation.

from fastmcp import FastMCP, Context
from fastmcp.exceptions import ToolError
from pathlib import Path
import json
from typing import List

mcp = FastMCP("File Manager")

@mcp.tool
async def read_json(ctx: Context, filepath: str) -> dict:
    """Read and parse a JSON file"""
    try:
        with open(filepath, 'r') as f:
            data = json.load(f)
        await ctx.info(f"Successfully read {filepath}")  # Context logging is async
        return data
    except FileNotFoundError:
        raise ToolError(f"File not found: {filepath}")
    except json.JSONDecodeError:
        raise ToolError(f"Invalid JSON in {filepath}")

@mcp.tool
def list_files(
    directory: str = ".",
    pattern: str = "*"
) -> List[str]:
    """List files in directory"""
    path = Path(directory)
    if not path.exists():
        raise ToolError(f"Directory not found: {directory}")
    
    return [str(f) for f in path.glob(pattern)]

This server demonstrates defensive tool design: expected failures such as missing files or malformed JSON are surfaced to the client as clear ToolError messages rather than raw tracebacks.
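For genuinely safe file access you may also want to confine operations to a base directory. One common guard (a sketch with a hypothetical `BASE_DIR`; not part of the server above) resolves the candidate path and rejects anything that escapes the root:

```python
from pathlib import Path

BASE_DIR = Path("/srv/data")  # hypothetical allowed root

def resolve_safe(filepath: str, base: Path = BASE_DIR) -> Path:
    """Resolve a user-supplied path and refuse anything outside `base`."""
    candidate = (base / filepath).resolve()
    if not candidate.is_relative_to(base.resolve()):
        # '../' tricks end up outside the root after resolve()
        raise ValueError(f"Path escapes base directory: {filepath}")
    return candidate

print(resolve_safe("reports/2024.json"))
```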

API Integration Server

Bridging AI assistants with external APIs is a common use case. This server demonstrates async HTTP operations with proper timeout handling and authentication. Use this pattern when you need to give AI assistants access to web services, REST APIs, or external data sources.

from fastmcp import FastMCP
import httpx

mcp = FastMCP("API Gateway")

@mcp.tool
async def fetch_data(
    endpoint: str,
    params: dict | None = None
) -> dict:
    """Fetch data from API endpoint"""
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://api.example.com/{endpoint}",
            params=params,
            timeout=30.0
        )
        response.raise_for_status()
        return response.json()

Async operations are essential for API integrations to avoid blocking the server while waiting for responses. The httpx library provides a modern, async-first approach to HTTP requests.
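For flaky endpoints, wrapping calls in retries with exponential backoff is a common companion pattern. A minimal generic-asyncio sketch (independent of httpx; the `flaky` function merely simulates a transient failure):

```python
import asyncio

async def with_retries(coro_factory, attempts: int = 3, base_delay: float = 0.01):
    """Call coro_factory() up to `attempts` times, doubling the delay each retry."""
    for attempt in range(attempts):
        try:
            return await coro_factory()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: propagate the last error
            await asyncio.sleep(base_delay * 2 ** attempt)

# Demo: fails twice, then succeeds on the third attempt
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("temporary failure")
    return "ok"

print(asyncio.run(with_retries(flaky)))  # → ok
```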

Database Query Server

Database access is a critical capability for AI assistants. This server provides safe, read-only database access with parameterized queries to prevent SQL injection. It's designed for scenarios where AI assistants need to analyze data, generate reports, or answer questions based on structured data.

from fastmcp import FastMCP, Context
from fastmcp.exceptions import ToolError
from typing import Any, Dict, List
import sqlite3

mcp = FastMCP("SQLite Tools")

@mcp.tool
async def query_database(
    ctx: Context,
    query: str,
    params: List[Any] | None = None
) -> List[Dict[str, Any]]:
    """Execute a read-only SQL query safely"""
    await ctx.info(f"Executing query: {query[:50]}...")
    
    # Only allow SELECT queries
    if not query.strip().upper().startswith("SELECT"):
        raise ToolError("Only SELECT queries are allowed")
    
    conn = sqlite3.connect("database.db")
    conn.row_factory = sqlite3.Row
    
    try:
        cursor = conn.execute(query, params or [])
        results = [dict(row) for row in cursor.fetchall()]
        await ctx.info(f"Query returned {len(results)} rows")
        return results
    finally:
        conn.close()

@mcp.resource("schema://tables")
def list_tables() -> List[str]:
    """List all database tables"""
    conn = sqlite3.connect("database.db")
    try:
        cursor = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        )
        return [row[0] for row in cursor.fetchall()]
    finally:
        conn.close()

This pattern provides controlled database access: queries are restricted to SELECT statements, parameters prevent injection attacks, and resources expose metadata without allowing modifications.
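The value of parameterized queries can be shown with the stdlib `sqlite3` module alone: a `?` placeholder binds attacker-controlled input as a literal value, while naive string formatting lets it rewrite the query. A self-contained, in-memory sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection attempt, supplied as a *value*
evil = "alice' OR '1'='1"

# Parameterized: the input is bound as one literal string, so nothing matches
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", [evil]
).fetchall()
print(safe_rows)  # → []

# Naive string formatting: the OR clause becomes part of the SQL
unsafe_rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{evil}'"
).fetchall()
print(unsafe_rows)  # → [(1, 'alice')]

conn.close()
```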