How to fork and customize an open-source MCP server

Kashish Hora

Co-founder of MCPcat

The Quick Answer

Fork an existing MCP server from GitHub and customize it by modifying its tools, resources, or prompts to fit your needs:

# Fork and clone an MCP server
$ git clone https://github.com/YOUR_USERNAME/mcp-server-filesystem
$ cd mcp-server-filesystem

# Install dependencies and run locally
$ npm install
$ npm run dev

Then modify the server's index.ts to add your custom functionality. MCP servers expose tools, resources, and prompts that AI assistants can use to interact with external systems.

Prerequisites

  • Node.js 18+ and npm installed
  • Basic TypeScript/JavaScript knowledge
  • Git for version control
  • Familiarity with MCP concepts (tools, resources, prompts)

If you're new to MCP development, consider reading Building MCP Server with TypeScript first.

Finding Servers to Fork

The MCP ecosystem offers numerous open-source servers that serve as excellent starting points for customization. Official servers from the Model Context Protocol organization provide well-tested foundations for common use cases.

Start with the official MCP servers repository at modelcontextprotocol/servers. This collection includes servers for filesystem access, PostgreSQL databases, GitHub integration, and more. Each server demonstrates best practices and includes comprehensive documentation.

# Browse official MCP servers
$ git clone https://github.com/modelcontextprotocol/servers
$ cd servers
$ ls -la

# Output shows available servers:
# filesystem/ postgres/ github/ fetch/ memory/

Community-maintained collections provide additional options. The punkpeye/awesome-mcp-servers repository catalogs servers across various categories including data tools, developer utilities, and AI-specific integrations. Browse these collections to find servers matching your use case.

Forking Process

Forking an MCP server involves creating your own copy of an existing server's repository. This allows you to maintain your customizations while potentially contributing improvements back to the original project.

Navigate to your chosen server's GitHub repository and click the "Fork" button. This creates a copy under your GitHub account. Then clone your fork locally:

# Clone your forked repository
$ git clone https://github.com/YOUR_USERNAME/mcp-server-filesystem
$ cd mcp-server-filesystem

# Add upstream remote to sync with original
$ git remote add upstream https://github.com/modelcontextprotocol/servers
$ git remote -v

The upstream remote allows you to pull updates from the original repository. This helps keep your fork synchronized with bug fixes and new features from the maintainers.

Understanding Server Structure

MCP servers follow a consistent structure that makes customization straightforward. Understanding this structure is crucial for effective modifications.

Examine the typical server layout:

// src/index.ts - Main server entry point
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js';

const server = new Server({
  name: 'mcp-server-custom',
  version: '1.0.0',
}, {
  capabilities: {
    tools: {},
    resources: {},
    prompts: {}
  }
});

// Tool definitions (the SDK registers handlers by request schema, not by string name)
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'read_file',
      description: 'Read contents of a file',
      inputSchema: {
        type: 'object',
        properties: {
          path: { type: 'string' }
        },
        required: ['path']
      }
    }
  ]
}));

// Start server
const transport = new StdioServerTransport();
await server.connect(transport);

The server initialization defines its name, version, and capabilities. Tools, resources, and prompts are registered through request handlers that define their schemas and implementations.

Adding Custom Tools

Tools are the primary way MCP servers expose functionality to AI assistants. Adding custom tools involves defining their schema and implementing their logic.

Create a new tool by adding it to the tools list and implementing its handler:

// Add to the tools/list handler's tools array
{
  name: 'analyze_json',
  description: 'Analyze JSON file structure and statistics',
  inputSchema: {
    type: 'object',
    properties: {
      path: { type: 'string', description: 'Path to JSON file' },
      detailed: { type: 'boolean', description: 'Include detailed analysis' }
    },
    required: ['path']
  }
}

// Implement the tool handler (imports belong at the top of the file)
import { promises as fs } from 'fs';
import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'analyze_json') {
    const { path, detailed } = request.params.arguments as {
      path: string;
      detailed?: boolean;
    };
    
    const content = await fs.readFile(path, 'utf-8');
    const data = JSON.parse(content);
    
    const analysis: Record<string, unknown> = {
      size: Buffer.byteLength(content),
      keys: Object.keys(data).length,
      type: Array.isArray(data) ? 'array' : 'object'
    };
    
    if (detailed) {
      analysis.structure = analyzeStructure(data);
    }
    
    return {
      content: [{ type: 'text', text: JSON.stringify(analysis, null, 2) }]
    };
  }
  // Handle other tools...
  throw new Error(`Unknown tool: ${request.params.name}`);
});

Tools should validate inputs, handle errors gracefully, and return structured data that AI assistants can interpret. Consider edge cases and provide helpful error messages when operations fail.
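That validation step can be sketched as a small helper. This is illustrative only; `requireArgs` is a hypothetical name, not part of the MCP SDK:

```typescript
// Hypothetical helper: check that required arguments are present and correctly
// typed before a tool handler touches the filesystem or network.
// Returns an error message, or null when the input is valid.
function requireArgs(
  args: Record<string, unknown> | undefined,
  required: Record<string, 'string' | 'number' | 'boolean'>
): string | null {
  if (!args) return 'Missing arguments object';
  for (const [key, type] of Object.entries(required)) {
    if (!(key in args)) return `Missing required argument: ${key}`;
    if (typeof args[key] !== type) return `Argument "${key}" must be a ${type}`;
  }
  return null;
}

// Example: reject a numeric path before calling fs.readFile
console.log(requireArgs({ path: 42 }, { path: 'string' }));
// → Argument "path" must be a string
```

Returning the message (rather than throwing) lets the handler surface a structured error the assistant can act on.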

Modifying Existing Tools

Customizing existing tools often provides more value than creating entirely new ones. You can enhance tools with additional validation, caching, or specialized behavior for your use case.

Enhance an existing file reading tool with caching:

// Add caching to file operations
import { promises as fs } from 'fs';
import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

const fileCache = new Map<string, { content: string, timestamp: number }>();
const CACHE_TTL = 60000; // 1 minute

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'read_file') {
    const { path } = request.params.arguments as { path: string };
    
    // Check cache first
    const cached = fileCache.get(path);
    if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
      return { content: [{ type: 'text', text: cached.content }] };
    }
    
    // Read file and update cache
    const content = await fs.readFile(path, 'utf-8');
    fileCache.set(path, { content, timestamp: Date.now() });
    
    return { content: [{ type: 'text', text: content }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

When modifying tools, maintain backward compatibility unless you're creating a completely new server variant. Document any behavioral changes clearly in your server's README.
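One caveat with a simple TTL cache is that it grows without bound. A minimal size cap can evict the oldest entry once a limit is hit; this is a sketch, and `BoundedCache` is a hypothetical name, not part of any server in the repository:

```typescript
// Sketch: a TTL cache with a maximum entry count. A Map iterates keys in
// insertion order, so the first key is always the oldest entry.
class BoundedCache<V> {
  private store = new Map<string, { value: V; timestamp: number }>();
  constructor(private maxEntries: number, private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() - entry.timestamp > this.ttlMs) {
      this.store.delete(key); // expired
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    if (this.store.size >= this.maxEntries && !this.store.has(key)) {
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(key, { value, timestamp: Date.now() });
  }
}

const cache = new BoundedCache<string>(2, 60_000);
cache.set('a.txt', 'A');
cache.set('b.txt', 'B');
cache.set('c.txt', 'C'); // evicts 'a.txt'
```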

Adding Resources

Resources provide read-only access to data that AI assistants can reference. They're ideal for configuration files, documentation, or any content that should be accessible but not modified through tools.

Implement a resource provider for configuration files:

// Add to resources/list handler
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema
} from '@modelcontextprotocol/sdk/types.js';

server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [
    {
      uri: 'config://app/settings',
      name: 'Application Settings',
      mimeType: 'application/json'
    }
  ]
}));

// Implement resource reading
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  const { uri } = request.params;
  
  if (uri === 'config://app/settings') {
    const settings = await loadApplicationSettings();
    return {
      contents: [{
        uri,
        mimeType: 'application/json',
        text: JSON.stringify(settings, null, 2)
      }]
    };
  }
  throw new Error(`Unknown resource: ${uri}`);
});

Resources use URI schemes to organize content logically. Choose schemes that clearly indicate the resource type and location, making it intuitive for users to reference them.
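A small parser makes scheme-based dispatch explicit. This is a sketch; `parseResourceUri` is illustrative, not an SDK function:

```typescript
// Sketch: split a resource URI like "config://app/settings" into its
// scheme and path so a resources/read handler can dispatch on the scheme.
function parseResourceUri(uri: string): { scheme: string; path: string } | null {
  const match = uri.match(/^([a-z][a-z0-9+.-]*):\/\/(.*)$/);
  if (!match) return null;
  return { scheme: match[1], path: match[2] };
}

console.log(parseResourceUri('config://app/settings'));
// → { scheme: 'config', path: 'app/settings' }
```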

Implementing Prompts

Prompts provide reusable templates that guide AI assistants in using your server effectively. They can include placeholders for dynamic values and comprehensive instructions.

Create prompts that demonstrate your server's capabilities:

// Add to the prompts/list handler's prompts array
{
  name: 'analyze_codebase',
  description: 'Analyze project structure and dependencies',
  arguments: [
    {
      name: 'project_path',
      description: 'Root directory of the project',
      required: true
    }
  ]
}

// Implement prompt generation
import { GetPromptRequestSchema } from '@modelcontextprotocol/sdk/types.js';

server.setRequestHandler(GetPromptRequestSchema, async (request) => {
  if (request.params.name === 'analyze_codebase') {
    const { project_path } = request.params.arguments ?? {};
    
    return {
      description: 'Comprehensive codebase analysis',
      messages: [{
        role: 'user',
        content: {
          type: 'text',
          text: `Analyze the codebase at ${project_path}:

1. List all source files using the filesystem tools
2. Identify the primary programming language
3. Find and analyze package.json or equivalent
4. Summarize the project structure
5. List key dependencies and their purposes`
        }
      }]
    };
  }
  throw new Error(`Unknown prompt: ${request.params.name}`);
});

Well-designed prompts save users time and demonstrate best practices for using your server's tools and resources effectively.

Testing Your Customizations

Testing ensures your customizations work correctly and handle edge cases appropriately. The MCP Inspector tool provides an interactive environment for testing servers during development. For debugging connection issues, see Debugging Message Serialization Errors.

Run your server with MCP Inspector:

# Launch the inspector against your built server (npx fetches it on demand)
$ npx @modelcontextprotocol/inspector node dist/index.js

The inspector opens a web interface where you can invoke tools, read resources, and test prompts interactively. Use it to verify input validation, error handling, and response formats.

Write unit tests for critical functionality:

// __tests__/tools.test.ts
import { expect, test } from '@jest/globals';
import { analyzeJson } from '../src/tools/analyze-json';

test('analyzeJson handles valid JSON', async () => {
  const result = await analyzeJson({
    path: '__tests__/fixtures/sample.json',
    detailed: true
  });
  
  expect(result.size).toBeGreaterThan(0);
  expect(result.keys).toBe(3);
  expect(result.type).toBe('object');
});

test('analyzeJson handles invalid paths', async () => {
  await expect(analyzeJson({
    path: 'nonexistent.json'
  })).rejects.toThrow('File not found');
});

Integration tests verify your server works correctly with AI assistants. Test with Claude Desktop or other MCP-compatible clients to ensure real-world compatibility.
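For a Claude Desktop integration test, your fork typically needs an entry in claude_desktop_config.json. The server name, path, and environment variable below are placeholders:

```json
{
  "mcpServers": {
    "filesystem-enhanced": {
      "command": "node",
      "args": ["/absolute/path/to/dist/index.js"],
      "env": {
        "ALLOWED_PATHS": "/Users/you/projects"
      }
    }
  }
}
```

Restart Claude Desktop after editing the file so the new server is picked up.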

Configuration Best Practices

Flexible configuration makes your server adaptable to different environments and use cases. Use environment variables for sensitive data and configuration files for complex settings.

Implement a configuration system:

// src/config.ts
import { z } from 'zod';

const ConfigSchema = z.object({
  cacheTTL: z.number().default(60000),
  maxFileSize: z.number().default(10 * 1024 * 1024), // 10MB
  allowedPaths: z.array(z.string()).default([]),
  features: z.object({
    caching: z.boolean().default(true),
    compression: z.boolean().default(false)
  })
});

export function loadConfig() {
  const config = {
    cacheTTL: parseInt(process.env.CACHE_TTL || '60000'),
    maxFileSize: parseInt(process.env.MAX_FILE_SIZE || '10485760'),
    allowedPaths: process.env.ALLOWED_PATHS?.split(',') || [],
    features: {
      caching: process.env.ENABLE_CACHING !== 'false',
      compression: process.env.ENABLE_COMPRESSION === 'true'
    }
  };
  
  return ConfigSchema.parse(config);
}

Document all configuration options in your README. Provide sensible defaults that work for most users while allowing power users to customize behavior.

Publishing Your Fork

Share your customized server with the community by publishing it to npm or keeping it as a public GitHub repository. Choose a clear name that indicates it's a fork or variant.

Prepare for publication:

{
  "name": "@yourusername/mcp-server-filesystem-enhanced",
  "version": "1.0.0",
  "description": "Enhanced MCP filesystem server with caching and analysis tools",
  "bin": {
    "mcp-server-filesystem-enhanced": "./dist/index.js"
  },
  "files": ["dist", "README.md", "LICENSE"],
  "scripts": {
    "build": "tsc",
    "prepublishOnly": "npm run build && npm test"
  }
}

# Build and test before publishing
$ npm run build
$ npm test

# Publish to npm
$ npm publish --access public

Include clear installation instructions and usage examples. Credit the original server and explain what your fork adds or changes.

Common Issues

Error: Protocol version mismatch

This occurs when your server uses an incompatible MCP SDK version. Update your dependencies to match the client's expected protocol version:

$ npm update @modelcontextprotocol/sdk

Keep your SDK version synchronized with major MCP clients like Claude Desktop to ensure compatibility.

Error: Tool timeout exceeded

Long-running operations may timeout. Implement progress reporting for operations that take more than a few seconds:

// Report progress for long operations via notifications/progress
// (requires a recent SDK version, where handlers receive an `extra` argument)
import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

server.setRequestHandler(CallToolRequestSchema, async (request, extra) => {
  if (request.params.name === 'analyze_large_dataset') {
    const { path } = request.params.arguments as { path: string };
    const progressToken = request.params._meta?.progressToken;
    
    // Notify the client if it asked for progress updates
    if (progressToken !== undefined) {
      await extra.sendNotification({
        method: 'notifications/progress',
        params: { progressToken, progress: 10, total: 100, message: 'Loading dataset...' }
      });
    }
    
    // ... perform analysis ...
    
    return { content: [{ type: 'text', text: JSON.stringify(results) }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

Consider breaking large operations into smaller, chainable tools that AI assistants can call sequentially.
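One chaining pattern is cursor-based pagination: each call returns a slice of the work plus a cursor the assistant passes to the next call. A sketch with illustrative names:

```typescript
// Sketch: paginate a large result set so each tool call stays fast.
// The returned nextCursor (or null) tells the caller where to resume.
function pageResults<T>(
  items: T[],
  cursor: number = 0,
  pageSize: number = 100
): { items: T[]; nextCursor: number | null } {
  const page = items.slice(cursor, cursor + pageSize);
  const nextCursor = cursor + pageSize < items.length ? cursor + pageSize : null;
  return { items: page, nextCursor };
}

const data = Array.from({ length: 250 }, (_, i) => i);
const first = pageResults(data, 0, 100);  // items 0..99, nextCursor 100
const last = pageResults(data, 200, 100); // items 200..249, nextCursor null
```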

Error: Resource not found

Resources must be registered before they can be accessed. Ensure your resource URIs match exactly between registration and implementation:

// Consistent URI handling
const RESOURCE_PREFIX = 'myserver://';

function normalizeUri(uri: string): string {
  return uri.startsWith(RESOURCE_PREFIX) 
    ? uri 
    : `${RESOURCE_PREFIX}${uri}`;
}

Validate resource URIs during server initialization to catch configuration errors early.
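That startup check can be a one-line filter over whatever list you register. A sketch; `validateResourceUris` and the sample URIs are illustrative:

```typescript
// Sketch: fail fast at startup when a registered resource URI does not
// carry the expected scheme prefix.
function validateResourceUris(uris: string[], prefix: string): void {
  const invalid = uris.filter((uri) => !uri.startsWith(prefix));
  if (invalid.length > 0) {
    throw new Error(`Invalid resource URIs: ${invalid.join(', ')}`);
  }
}

// Run once during server initialization
validateResourceUris(
  ['myserver://config/settings', 'myserver://docs/readme'],
  'myserver://'
); // passes silently when every URI matches
```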

Examples

Example: Enhanced GitHub Server

The GitMCP project demonstrates advanced customization of the official GitHub server. It adds automated PR creation and code review features:

// Custom tool for automated PR workflow
{
  name: 'create_pr_from_issue',
  description: 'Create a pull request addressing a GitHub issue',
  inputSchema: {
    type: 'object',
    properties: {
      owner: { type: 'string' },
      repo: { type: 'string' },
      issue_number: { type: 'number' },
      branch_name: { type: 'string' }
    },
    required: ['owner', 'repo', 'issue_number']
  }
}

// Implementation combines multiple GitHub API calls
// (assumes an authenticated Octokit client and a createBranch helper defined elsewhere)
async function createPRFromIssue(params) {
  // Get issue details
  const issue = await octokit.issues.get({
    owner: params.owner,
    repo: params.repo,
    issue_number: params.issue_number
  });
  
  // Create branch from issue
  const branch = params.branch_name || `fix-issue-${params.issue_number}`;
  await createBranch(params.owner, params.repo, branch);
  
  // Generate PR description from issue
  const prBody = `Fixes #${params.issue_number}\n\n${issue.data.body}`;
  
  // Create pull request
  const pr = await octokit.pulls.create({
    owner: params.owner,
    repo: params.repo,
    title: `Fix: ${issue.data.title}`,
    body: prBody,
    head: branch,
    base: 'main'
  });
  
  return pr.data;
}

This enhancement streamlines the development workflow by automating common GitHub operations. The tool combines multiple API calls into a single, purposeful action.

Example: Local Memory Server

A customized memory server that persists data to disk demonstrates how to add persistence to stateless servers:

// Add persistent storage to memory server
// (the 'level' package exports `Level`, not `LevelDB`)
import { Level } from 'level';
import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

const db = new Level('./mcp-memory-store');

// Override memory operations with persistent storage
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'store_memory') {
    const { key, value, metadata } = request.params.arguments as {
      key: string; value: unknown; metadata?: unknown;
    };
    
    await db.put(key, JSON.stringify({
      value,
      metadata,
      timestamp: Date.now()
    }));
    
    return { content: [{ type: 'text', text: JSON.stringify({ success: true, key }) }] };
  }
  
  if (request.params.name === 'retrieve_memory') {
    const { key } = request.params.arguments as { key: string };
    
    try {
      const data = JSON.parse(await db.get(key));
      return { content: [{ type: 'text', text: JSON.stringify(data) }] };
    } catch (error) {
      return { content: [{ type: 'text', text: 'Memory not found' }], isError: true };
    }
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Add cleanup on server shutdown
process.on('SIGINT', async () => {
  await db.close();
  process.exit(0);
});

This customization transforms a volatile memory server into a persistent knowledge store. The LevelDB backend provides fast key-value storage while maintaining the simple memory server interface.

Next Steps

With your customized MCP server running, explore advanced topics to enhance its capabilities further. Consider implementing authentication for sensitive operations, adding rate limiting for resource-intensive tools, or creating specialized servers for your domain.

For deployment options, explore Installing MCP Servers Globally vs Locally and Configuring MCP Installations for Production. If you need different transport mechanisms, check out Comparing stdio, SSE, and StreamableHTTP.

Connect with the MCP community through GitHub discussions and Discord channels. Share your customizations and learn from others' implementations. The ecosystem benefits when developers contribute their specialized servers back to the community.

Stay updated with MCP protocol changes by following the official repository and subscribing to release notifications. Regular updates ensure your server remains compatible with the latest AI assistants and takes advantage of new protocol features.