Building an MCP server in Rust

Kashish Hora

Co-founder, MCPcat


The Quick Answer

Create a high-performance MCP server using Rust's official SDK. Set up your project, implement tools with async handlers, and run via stdio transport for Claude Desktop integration.

$cargo new my-mcp-server --bin
$cd my-mcp-server

Add to Cargo.toml:

[dependencies]
rmcp = { git = "https://github.com/modelcontextprotocol/rust-sdk", features = ["server", "transport-io", "macros"] }
tokio = { version = "1.35", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"

Rust's ownership system and async runtime make MCP servers memory-safe and concurrent by design. The macro-based approach simplifies tool implementation while maintaining type safety.

Prerequisites

  • Rust 1.75+ with cargo installed
  • Basic understanding of async/await in Rust
  • Claude Desktop or compatible MCP client
  • Git for accessing the official SDK

Installation

Create your MCP server project and add required dependencies:

$cargo new weather-mcp-server --bin
$cd weather-mcp-server

Update your Cargo.toml with the complete dependency list:

[package]
name = "weather-mcp-server"
version = "0.1.0"
edition = "2021"

[dependencies]
rmcp = { git = "https://github.com/modelcontextprotocol/rust-sdk", features = ["server", "transport-io", "macros"] }
tokio = { version = "1.35", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
anyhow = "1.0"
thiserror = "1.0"
tracing = "0.1"
tracing-subscriber = "0.3"
async-trait = "0.1"
reqwest = { version = "0.11", features = ["json"] }  # For HTTP requests

The official SDK provides macro support for cleaner tool definitions. Alternative crates like mcp-sdk-rs offer different architectural approaches, but rmcp remains the standard choice for most implementations.

Configuration

Configure your MCP server in Claude Desktop's configuration file. The location varies by platform:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

Add your server's binary under the mcpServers key:

{
  "mcpServers": {
    "weather": {
      "command": "/path/to/target/release/weather-mcp-server"
    }
  }
}

Build your server in release mode for optimal performance:

$cargo build --release

The server communicates via stdio by default, making it compatible with Claude Desktop's process management. Environment variables and command-line arguments can extend configuration options.
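
Claude Desktop's server entries also accept "args" and "env" keys, so flags and secrets can live in the config itself rather than a wrapper script. A sketch (the OPENWEATHER_API_KEY variable is the one read by the example server later in this guide; the "args" line is purely illustrative, since the example server does not parse flags):

{
  "mcpServers": {
    "weather": {
      "command": "/path/to/target/release/weather-mcp-server",
      "args": ["--verbose"],
      "env": {
        "OPENWEATHER_API_KEY": "your-api-key"
      }
    }
  }
}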

Usage

Implement your MCP server with the macro-based approach for clean, maintainable code:

use rmcp::{Error, ServerHandler, ServiceExt, model::*, schemars, tool, transport::stdio};
use std::sync::Arc;
use tokio::sync::Mutex;
use serde::{Deserialize, Serialize};

#[derive(Clone)]
pub struct WeatherServer {
    api_key: String,
    cache: Arc<Mutex<std::collections::HashMap<String, WeatherData>>>,
}

#[derive(Serialize, Deserialize, schemars::JsonSchema)]
struct WeatherParams {
    #[schemars(description = "City name to get weather for")]
    city: String,
}

#[derive(Clone, Serialize, Deserialize)]
struct WeatherData {
    temperature: f64,
    description: String,
    humidity: u8,
}

#[tool(tool_box)]
impl ServerHandler for WeatherServer {
    fn get_info(&self) -> ServerInfo {
        ServerInfo {
            protocol_version: ProtocolVersion::V_2024_11_05,
            capabilities: ServerCapabilities::builder()
                .enable_tools()
                .build(),
            server_info: Implementation::from_build_env(),
            instructions: Some("Weather information service".to_string()),
        }
    }
}

With capabilities defined, implement your tools using the #[tool] macro:

#[tool(tool_box)]
impl WeatherServer {
    #[tool(description = "Get current weather for a city")]
    async fn get_weather(
        &self,
        #[tool(aggr)] params: WeatherParams
    ) -> Result<CallToolResult, Error> {
        // Check cache first
        let cache = self.cache.lock().await;
        if let Some(data) = cache.get(&params.city) {
            return Ok(CallToolResult::success(vec![
                Content::text(format!(
                    "Weather in {}: {}°C, {}, {}% humidity",
                    params.city, data.temperature, data.description, data.humidity
                ))
            ]));
        }
        drop(cache);
        
        // Fetch from API
        let url = format!(
            "https://api.openweathermap.org/data/2.5/weather?q={}&appid={}&units=metric",
            params.city, self.api_key
        );
        
        let response = reqwest::get(&url).await
            .map_err(|e| Error::internal_error("api_error", Some(serde_json::json!({"error": e.to_string()}))))?;
            
        let json: serde_json::Value = response.json().await
            .map_err(|e| Error::internal_error("parse_error", Some(serde_json::json!({"error": e.to_string()}))))?;
        
        let weather_data = WeatherData {
            temperature: json["main"]["temp"].as_f64().unwrap_or(0.0),
            description: json["weather"][0]["description"].as_str().unwrap_or("Unknown").to_string(),
            humidity: json["main"]["humidity"].as_u64().unwrap_or(0) as u8,
        };
        
        // Update cache
        self.cache.lock().await.insert(params.city.clone(), weather_data.clone());
        
        Ok(CallToolResult::success(vec![
            Content::text(format!(
                "Weather in {}: {}°C, {}, {}% humidity",
                params.city, weather_data.temperature, weather_data.description, weather_data.humidity
            ))
        ]))
    }
}

Initialize and run your server with proper error handling and logging:

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Initialize tracing to stderr (stdout is reserved for MCP protocol)
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::DEBUG)
        .with_writer(std::io::stderr)
        .with_ansi(false)
        .init();
    
    tracing::info!("Starting Weather MCP Server");
    
    let api_key = std::env::var("OPENWEATHER_API_KEY")
        .expect("OPENWEATHER_API_KEY environment variable required");
    
    let server = WeatherServer {
        api_key,
        cache: Arc::new(Mutex::new(std::collections::HashMap::new())),
    };
    
    // Serve via stdio transport
    let service = server
        .serve(stdio())
        .await?;
    
    tracing::info!("Server running, waiting for requests");
    service.waiting().await?;
    
    Ok(())
}

The server runs as a subprocess, communicating with Claude Desktop through JSON-RPC over stdio. The async runtime handles concurrent requests efficiently while Rust's ownership system prevents data races.
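
To sanity-check the binary outside Claude Desktop, you can pipe a raw initialize request to it; the server should write an initialize response to stdout and exit once stdin closes (request shown on a single line, with a dummy API key just to get past startup):

$echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test","version":"0.1.0"}}}' | OPENWEATHER_API_KEY=dummy ./target/release/weather-mcp-server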

Common Issues

Error: "failed to resolve: use of undeclared crate or module rmcp"

This occurs when the Git dependency fails to fetch. Because rmcp is pulled straight from its repository, Cargo needs Git and network access the first time it resolves the dependency. Ensure both are available, then clean and rebuild:

$cargo clean
$cargo update
$cargo build

Error: "thread 'main' panicked at 'OPENWEATHER_API_KEY environment variable required'"

Environment variables must be set before the server launches, and Claude Desktop does not inherit your shell's environment. Create a wrapper script that sets the variables:

#!/bin/bash
export OPENWEATHER_API_KEY="your-api-key"
exec /path/to/weather-mcp-server

Point the Claude Desktop config at the wrapper instead of the binary:

{
  "mcpServers": {
    "weather": {
      "command": "/path/to/weather-wrapper.sh"
    }
  }
}

This approach ensures your server receives required configuration while keeping secrets out of the config file.

Error: "multiple versions of crate X in dependency tree"

Cargo can end up with multiple versions of a crate when a Git dependency pins different versions of tokio or serde than your project does. Pin a specific commit of the SDK in your Cargo.toml to keep the dependency tree consistent:

rmcp = { git = "https://github.com/modelcontextprotocol/rust-sdk", rev = "abc123", features = ["server", "transport-io", "macros"] }

Check the SDK's Cargo.toml for its dependency versions and align yours accordingly. Running cargo tree helps identify version conflicts.
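
For example, the --duplicates flag limits the output to crates that appear at more than one version, and inverting the tree shows which dependencies pull in each copy:

$cargo tree --duplicates
$cargo tree -i tokio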

Examples

File System Browser

Build an MCP server that provides safe file system access with permission controls. This example demonstrates resource handling and security patterns:

use rmcp::{Error, ServerHandler, ResourceHandler, ServiceExt, model::*, schemars, tool, transport::stdio};
use serde::{Deserialize, Serialize};
use std::path::{Path, PathBuf};
use tokio::fs;

#[derive(Clone)]
pub struct FileSystemServer {
    allowed_paths: Vec<PathBuf>,
}

#[derive(Serialize, Deserialize, schemars::JsonSchema)]
struct ReadFileParams {
    #[schemars(description = "Path to the file to read")]
    path: String,
}

#[tool(tool_box)]
impl ServerHandler for FileSystemServer {
    fn get_info(&self) -> ServerInfo {
        ServerInfo {
            protocol_version: ProtocolVersion::V_2024_11_05,
            capabilities: ServerCapabilities::builder()
                .enable_tools()
                .enable_resources()
                .build(),
            server_info: Implementation::from_build_env(),
            instructions: Some("Secure file system access".to_string()),
        }
    }
}

#[tool(tool_box)]
impl FileSystemServer {
    #[tool(description = "Read contents of a text file")]
    async fn read_file(
        &self,
        #[tool(aggr)] params: ReadFileParams
    ) -> Result<CallToolResult, Error> {
        // Security: canonicalize the path to resolve ".." and symlinks,
        // then validate it against the allow-list (assumed to hold canonical paths)
        let path = fs::canonicalize(Path::new(&params.path)).await
            .map_err(|e| Error::invalid_request(&format!("Cannot access file: {}", e)))?;
        
        let is_allowed = self.allowed_paths.iter().any(|allowed| {
            path.starts_with(allowed)
        });
        
        if !is_allowed {
            return Err(Error::invalid_request(
                "Access denied: Path outside allowed directories"
            ));
        }
        
        // Read file with size limit
        let metadata = fs::metadata(&path).await
            .map_err(|e| Error::invalid_request(&format!("Cannot access file: {}", e)))?;
            
        if metadata.len() > 1_048_576 { // 1MB limit
            return Err(Error::invalid_request("File too large (max 1MB)"));
        }
        
        let content = fs::read_to_string(&path).await
            .map_err(|e| Error::internal_error("read_error", Some(serde_json::json!({"error": e.to_string()}))))?;
        
        Ok(CallToolResult::success(vec![Content::text(content)]))
    }
}

#[async_trait::async_trait]
impl ResourceHandler for FileSystemServer {
    async fn list_resources(&self) -> Result<Vec<Resource>, Error> {
        let mut resources = Vec::new();
        
        for base_path in &self.allowed_paths {
            let mut entries = fs::read_dir(base_path).await
                .map_err(|e| Error::internal_error("list_error", Some(serde_json::json!({"error": e.to_string()}))))?;
            
            while let Some(entry) = entries.next_entry().await
                .map_err(|e| Error::internal_error("entry_error", Some(serde_json::json!({"error": e.to_string()}))))?
            {
                let path = entry.path();
                if path.is_file() {
                    resources.push(Resource {
                        uri: format!("file://{}", path.display()),
                        name: path.file_name().unwrap().to_string_lossy().to_string(),
                        mime_type: Some("text/plain".to_string()),
                        description: None,
                    });
                }
            }
        }
        
        Ok(resources)
    }
    
    async fn read_resource(&self, uri: &str) -> Result<ResourceContents, Error> {
        // Parse the file:// URI and delegate to the same validation and
        // read logic used by read_file
        // ... implementation ...
        todo!("strip the file:// prefix from {uri} and reuse read_file's checks")
    }
}

This server implements both tools and resources, demonstrating MCP's dual-interface pattern. The security checks prevent directory traversal attacks while the size limit prevents memory exhaustion. Production deployments should add rate limiting and audit logging.
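
As a starting point for rate limiting, a minimal sketch (the helper below is an assumption, not part of the SDK) holds a tokio Semaphore and acquires a permit at the top of each tool handler so only a bounded number of reads run concurrently:

use std::sync::Arc;
use tokio::sync::{AcquireError, OwnedSemaphorePermit, Semaphore};

/// Hypothetical helper: store one of these in FileSystemServer and call
/// acquire() at the start of read_file to cap concurrent file reads.
#[derive(Clone)]
struct ReadLimiter {
    permits: Arc<Semaphore>,
}

impl ReadLimiter {
    fn new(max_concurrent: usize) -> Self {
        Self { permits: Arc::new(Semaphore::new(max_concurrent)) }
    }

    /// The permit is released automatically when the returned guard is dropped.
    async fn acquire(&self) -> Result<OwnedSemaphorePermit, AcquireError> {
        self.permits.clone().acquire_owned().await
    }
}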

Database Query Interface

Create an MCP server that safely executes database queries with parameterization and result formatting:

use rmcp::{Error, ServerHandler, ServiceExt, model::*, schemars, tool, transport::stdio};
use serde::{Deserialize, Serialize};
use sqlx::{postgres::PgPoolOptions, Column, Pool, Postgres, Row};

#[derive(Clone)]
pub struct DatabaseServer {
    pool: Pool<Postgres>,
    max_rows: usize,
}

#[derive(Serialize, Deserialize, schemars::JsonSchema)]
struct QueryParams {
    #[schemars(description = "SQL query to execute")]
    query: String,
    #[schemars(description = "Query parameters for prepared statement")]
    params: Option<Vec<serde_json::Value>>,
}

impl DatabaseServer {
    async fn new(database_url: &str) -> Result<Self, Box<dyn std::error::Error>> {
        let pool = PgPoolOptions::new()
            .max_connections(5)
            .connect(database_url)
            .await?;
            
        Ok(Self {
            pool,
            max_rows: 100,
        })
    }
}

#[tool(tool_box)]
impl ServerHandler for DatabaseServer {
    fn get_info(&self) -> ServerInfo {
        ServerInfo {
            protocol_version: ProtocolVersion::V_2024_11_05,
            capabilities: ServerCapabilities::builder()
                .enable_tools()
                .build(),
            server_info: Implementation::from_build_env(),
            instructions: Some("Execute database queries safely".to_string()),
        }
    }
}

#[tool(tool_box)]
impl DatabaseServer {
    #[tool(description = "Execute a SELECT query with optional parameters")]
    async fn query(
        &self,
        #[tool(aggr)] params: QueryParams
    ) -> Result<CallToolResult, Error> {
        // Validate query is SELECT only
        let normalized = params.query.trim().to_lowercase();
        if !normalized.starts_with("select") {
            return Err(Error::invalid_request("Only SELECT queries allowed"));
        }
        
        // Bind parameters before executing so unsupported types fail fast
        let mut query = sqlx::query(&params.query);
        if let Some(bind_params) = params.params {
            for param in bind_params {
                query = match param {
                    serde_json::Value::String(s) => query.bind(s),
                    serde_json::Value::Number(n) => {
                        if let Some(i) = n.as_i64() {
                            query.bind(i)
                        } else {
                            query.bind(n.as_f64().unwrap_or(0.0))
                        }
                    },
                    serde_json::Value::Bool(b) => query.bind(b),
                    _ => return Err(Error::invalid_request("Unsupported parameter type")),
                };
            }
        }
        
        // Execute with timeout
        let timeout = std::time::Duration::from_secs(30);
        let rows = tokio::time::timeout(timeout, query.fetch_all(&self.pool))
            .await
            .map_err(|_| Error::internal_error("timeout", None))?
            .map_err(|e| Error::internal_error("query_error", Some(serde_json::json!({"error": e.to_string()}))))?;
        
        // Format results as table
        let mut result = String::new();
        
        if rows.is_empty() {
            result.push_str("No results found.");
        } else {
            // Get column names from first row
            let columns = rows[0].columns();
            let col_names: Vec<&str> = columns.iter().map(|c| c.name()).collect();
            
            // Build markdown table
            result.push_str(&col_names.join(" | "));
            result.push('\n');
            result.push_str(&vec!["---"; col_names.len()].join(" | "));
            result.push('\n');
            
            // Add rows (with limit)
            for (idx, row) in rows.iter().enumerate() {
                if idx >= self.max_rows {
                    result.push_str(&format!("\n... {} more rows", rows.len() - self.max_rows));
                    break;
                }
                
                let values: Vec<String> = (0..columns.len())
                    .map(|i| {
                        if let Ok(val) = row.try_get::<String, _>(i) {
                            val
                        } else if let Ok(val) = row.try_get::<i64, _>(i) {
                            val.to_string()
                        } else if let Ok(val) = row.try_get::<f64, _>(i) {
                            val.to_string()
                        } else {
                            "NULL".to_string()
                        }
                    })
                    .collect();
                    
                result.push_str(&values.join(" | "));
                result.push('\n');
            }
        }
        
        Ok(CallToolResult::success(vec![Content::text(result)]))
    }
}

This database server prevents SQL injection through parameterized queries while providing formatted results. The timeout prevents long-running queries from blocking the server. Consider adding query complexity analysis and resource usage monitoring for production use.
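
One cheap complexity guard, sketched here with a hypothetical helper, is to reject stacked statements and push the row cap into the query itself so the database enforces it rather than the formatting loop:

/// Hypothetical helper: wrap a validated SELECT so the database caps the
/// result set instead of the server truncating it after the fetch.
fn bound_query(user_query: &str, max_rows: usize) -> Result<String, String> {
    let trimmed = user_query.trim().trim_end_matches(';');
    // Reject stacked statements such as "SELECT 1; DROP TABLE users"
    // (this also rejects legitimate semicolons inside string literals,
    // an acceptable trade-off for a guardrail)
    if trimmed.contains(';') {
        return Err("Multiple statements are not allowed".to_string());
    }
    Ok(format!("SELECT * FROM ({}) AS bounded LIMIT {}", trimmed, max_rows))
}

In the query tool above, this would run right after the SELECT-only check, before parameters are bound.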