Validation tests for tool inputs

Kashish Hora

Co-founder of MCPcat

The Quick Answer

Test MCP tool input validation using schema validators like Zod (TypeScript) or Pydantic (Python) combined with comprehensive test suites:

// TypeScript with Zod
import { z } from 'zod';

const schema = z.object({
  query: z.string().min(1),
  maxResults: z.number().int().positive().max(100)
});

// Test validation
describe('tool validation', () => {
  it('rejects invalid inputs', () => {
    expect(() => schema.parse({ query: '', maxResults: -1 }))
      .toThrow();
  });
});

This ensures your MCP tools handle invalid inputs gracefully, preventing crashes and security vulnerabilities. Validation tests verify schema constraints, type safety, and error messages.

Prerequisites

  • MCP server implementation (TypeScript or Python)
  • Testing framework installed (Jest/Vitest for TypeScript, pytest for Python)
  • Validation library (Zod/io-ts for TypeScript, Pydantic for Python)
  • Basic understanding of JSON Schema

Installation

Install the necessary validation and testing libraries for your language:

# TypeScript
$ npm install --save-dev jest @types/jest zod
# or
$ npm install --save-dev vitest zod

# Python (the examples below use Pydantic v1-style validators)
$ pip install pytest "pydantic<2"
# or
$ poetry add --group dev pytest "pydantic@^1.10"

Configuration

Configure your testing framework to run validation tests. For TypeScript projects, add test scripts to your package.json:

{
  "scripts": {
    "test": "jest",
    "test:watch": "jest --watch",
    "test:coverage": "jest --coverage"
  }
}

For Python projects, add a pytest configuration to your pyproject.toml:

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
addopts = "-v --tb=short"

Set up your validation schemas alongside your MCP tool definitions. This co-location makes it easier to maintain consistency between tool interfaces and their validation rules.
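
For example, a single module can export both the Zod schema and the tool definition it backs, so the JSON Schema advertised to clients and the runtime validation rules are edited together. This is a minimal sketch; the module path and tool name are illustrative:

// TypeScript: co-locating a tool definition with its validation schema
// (hypothetical tools/search.ts module)
import { z } from 'zod';

export const SearchToolSchema = z.object({
  query: z.string().min(1).max(1000),
  limit: z.number().int().min(1).max(100).default(10)
});

export type SearchToolInput = z.infer<typeof SearchToolSchema>;

// Tool metadata returned from tools/list; keeping it next to the Zod schema
// makes it obvious when the two sets of constraints drift apart
export const searchToolDefinition = {
  name: 'search',
  description: 'Search indexed documents',
  inputSchema: {
    type: 'object',
    properties: {
      query: { type: 'string', minLength: 1, maxLength: 1000 },
      limit: { type: 'integer', minimum: 1, maximum: 100, default: 10 }
    },
    required: ['query']
  }
};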

Usage

Basic Schema Validation Testing

Implement validation tests that verify your schemas catch invalid inputs correctly. Start with testing individual field constraints:

// TypeScript: Testing field validation
import { z } from 'zod';

const SearchToolSchema = z.object({
  query: z.string().min(1).max(1000),
  filters: z.array(z.string()).optional(),
  limit: z.number().int().min(1).max(100).default(10)
});

describe('SearchTool validation', () => {
  it('validates query length constraints', () => {
    // Too short
    const shortQuery = { query: '' };
    const result = SearchToolSchema.safeParse(shortQuery);
    expect(result.success).toBe(false);
    
    // Too long
    const longQuery = { query: 'a'.repeat(1001) };
    const longResult = SearchToolSchema.safeParse(longQuery);
    expect(longResult.success).toBe(false);
  });
  
  it('validates numeric constraints', () => {
    const invalidLimit = { query: 'test', limit: 150 };
    const result = SearchToolSchema.safeParse(invalidLimit);
    expect(result.success).toBe(false);
    if (!result.success) {
      expect(result.error.issues[0].path).toContain('limit');
    }
  });
});

Python implementations use Pydantic's validation exceptions to test constraints:

# Python: Testing validation with Pydantic
import pytest
from pydantic import BaseModel, Field, ValidationError

class SearchToolParams(BaseModel):
    query: str = Field(..., min_length=1, max_length=1000)
    filters: list[str] = Field(default_factory=list)
    limit: int = Field(default=10, ge=1, le=100)

def test_query_validation():
    # Test empty query
    with pytest.raises(ValidationError) as exc_info:
        SearchToolParams(query="")
    
    errors = exc_info.value.errors()
    assert len(errors) == 1
    assert errors[0]['loc'] == ('query',)
    assert 'at least 1 character' in errors[0]['msg']

def test_limit_constraints():
    # Test limit exceeds maximum
    with pytest.raises(ValidationError) as exc_info:
        SearchToolParams(query="test", limit=150)
    
    errors = exc_info.value.errors()
    assert errors[0]['loc'] == ('limit',)
    assert 'less than or equal to 100' in errors[0]['msg']

Testing Complex Validation Rules

MCP tools often require complex validation involving multiple fields or conditional logic. Test these scenarios thoroughly:

// TypeScript: Complex validation rules
const FileOperationSchema = z.object({
  operation: z.enum(['read', 'write', 'delete']),
  path: z.string(),
  content: z.string().optional(),
  force: z.boolean().default(false)
}).refine(
  (data) => {
    // Content required for write operations
    if (data.operation === 'write') {
      return data.content !== undefined;
    }
    return true;
  },
  { message: "Content is required for write operations" }
);

describe('FileOperation complex validation', () => {
  it('requires content for write operations', () => {
    const writeWithoutContent = {
      operation: 'write' as const,
      path: '/test.txt'
    };
    
    const result = FileOperationSchema.safeParse(writeWithoutContent);
    expect(result.success).toBe(false);
    if (!result.success) {
      expect(result.error.issues[0].message)
        .toBe("Content is required for write operations");
    }
  });
  
  it('allows missing content for read operations', () => {
    const readOp = {
      operation: 'read' as const,
      path: '/test.txt'
    };
    
    const result = FileOperationSchema.safeParse(readOp);
    expect(result.success).toBe(true);
  });
});

Testing Error Message Quality

Validation error messages should be clear and actionable. Test that your validation provides helpful feedback:

# Python: Testing error message clarity
import re
from typing import Optional

import pytest
from pydantic import BaseModel, Field, ValidationError, validator

class DatabaseQueryParams(BaseModel):
    table: str = Field(..., regex=r'^[a-zA-Z_]\w*$')
    columns: list[str] = Field(..., min_items=1)
    where_clause: Optional[str] = Field(None)
    
    @validator('where_clause')
    def validate_where_clause(cls, v):
        if v and 'DROP' in v.upper():
            raise ValueError('Potentially dangerous SQL detected')
        return v

def test_error_messages_are_helpful():
    # Invalid table name: the error should name the field and the pattern rule
    with pytest.raises(ValidationError) as exc_info:
        DatabaseQueryParams(
            table="123-invalid",
            columns=["id"]
        )
    error_msg = str(exc_info.value)
    assert "table" in error_msg
    assert "regex" in error_msg or "pattern" in error_msg
    
    # SQL injection attempt: the custom validator's message should surface
    with pytest.raises(ValidationError) as exc_info:
        DatabaseQueryParams(
            table="users",
            columns=["id"],
            where_clause="id = 1; DROP TABLE users;"
        )
    error_msg = exc_info.value.errors()[0]['msg']
    assert "dangerous SQL" in error_msg

Integration Testing with MCP Servers

Test validation within the context of actual MCP tool execution. This ensures validation integrates properly with your server:

// TypeScript: Integration testing
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
import { z } from 'zod';

const server = new Server(
  { name: 'test-server', version: '1.0.0' },
  { capabilities: { tools: {} } }
);

const CalculatorSchema = z.object({
  operation: z.enum(['add', 'subtract', 'multiply', 'divide']),
  a: z.number(),
  b: z.number()
});

// Extracted handler so tests can call it directly, without a transport
async function handleCalculate(args: unknown) {
  const validation = CalculatorSchema.safeParse(args);

  if (!validation.success) {
    return {
      isError: true,
      content: [{
        type: 'text',
        text: `Validation error: ${validation.error.message}`
      }]
    };
  }

  // Tool implementation
  const { operation, a, b } = validation.data;
  // ... calculation logic
}

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'calculate') {
    return handleCalculate(request.params.arguments);
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

describe('MCP server validation integration', () => {
  it('returns validation errors to the client', async () => {
    const response = await handleCalculate({
      operation: 'invalid',
      a: 'not a number',
      b: 10
    });

    expect(response.isError).toBe(true);
    expect(response.content[0].text).toContain('Validation error');
  });
});

Common Issues

Error: Schema Validation Passes But Runtime Fails

When validation passes but the tool fails at runtime, it often indicates missing validation rules. The root cause is typically a mismatch between what your schema validates and what your implementation actually requires. This commonly happens with optional fields that your code treats as required or with string formats that need specific patterns.

// Fix: Add comprehensive validation
const schema = z.object({
  email: z.string().email(), // Don't just use z.string()
  url: z.string().url(),      // Validate format, not just type
  date: z.string().regex(/^\d{4}-\d{2}-\d{2}$/)
});

To prevent this issue, audit your tool implementation and ensure every assumption is reflected in your validation schema. Use property-based testing to generate edge cases automatically.
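
Property-based testing libraries such as fast-check (TypeScript) or Hypothesis (Python) can generate those edge cases for you. A minimal sketch, assuming fast-check is installed as a dev dependency:

// TypeScript: property-based validation test with fast-check
import fc from 'fast-check';
import { z } from 'zod';

const schema = z.object({
  date: z.string().regex(/^\d{4}-\d{2}-\d{2}$/)
});

it('only accepts strings that match the date format', () => {
  fc.assert(
    fc.property(fc.string(), (candidate) => {
      const result = schema.safeParse({ date: candidate });
      // Whatever the generated input, acceptance must imply the format holds
      if (result.success) {
        expect(result.data.date).toMatch(/^\d{4}-\d{2}-\d{2}$/);
      }
    })
  );
});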

Error: Validation Too Strict, Blocking Valid Inputs

Overly restrictive validation can frustrate users by rejecting legitimate inputs. This typically occurs when validation rules are based on current implementation limitations rather than actual requirements. For example, setting arbitrary length limits or rejecting valid but uncommon input patterns.

# Fix: Balance security with usability
import re
from pydantic import BaseModel, Field, validator

class FlexibleParams(BaseModel):
    # Too strict - blocks valid international names
    # name: str = Field(..., regex=r'^[A-Za-z ]+$')
    
    # Better - allows Unicode while preventing injection
    name: str = Field(..., min_length=1, max_length=200)
    
    @validator('name')
    def sanitize_name(cls, v):
        # Remove potentially dangerous characters
        return re.sub(r'[<>&\"\'`]', '', v)

Review validation rules regularly and base them on actual security requirements rather than assumptions. Consider logging rejected inputs to identify patterns of false positives.
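
One lightweight way to collect that signal is a thin wrapper around safeParse that logs validation failures. The sketch below assumes console logging; substitute whatever logger your server already uses:

// TypeScript: log rejected inputs to spot false positives over time
import { z } from 'zod';

function validateWithLogging<T extends z.ZodTypeAny>(
  schema: T,
  input: unknown,
  toolName: string
) {
  const result = schema.safeParse(input);
  if (!result.success) {
    // Log field paths and messages only -- never raw values, which may be sensitive
    console.warn(`[validation] ${toolName} rejected input`, {
      issues: result.error.issues.map(issue => ({
        field: issue.path.join('.'),
        message: issue.message
      }))
    });
  }
  return result;
}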

Error: Inconsistent Validation Across Languages

When implementing MCP servers in multiple languages, validation inconsistencies can lead to different behavior. The root cause is often different interpretation of validation rules or capabilities between validation libraries. For instance, regex patterns might behave differently between JavaScript and Python.

// TypeScript validation
const schema = z.object({
  // This regex works in JS but may differ in Python
  code: z.string().regex(/^[A-Z]{2,4}\d{4}$/)
});

# Python - ensure consistent behavior
import re

class ConsistentParams(BaseModel):
    # Use explicit flags for consistency
    code: str = Field(..., regex=r'^[A-Z]{2,4}\d{4}$')
    
    @validator('code')
    def validate_code_format(cls, v):
        # Additional validation for consistency
        if not re.match(r'^[A-Z]{2,4}\d{4}$', v, re.ASCII):
            raise ValueError('Invalid code format')
        return v

To prevent inconsistencies, maintain a single source of truth for validation rules (like JSON Schema) and generate language-specific validators from it. Test validation behavior across all implementations.
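
One way to do this is to keep the rules in a JSON Schema file and run each implementation's tests against the same shared fixtures. A minimal TypeScript sketch using Ajv; the schema path and fixture values are illustrative:

// TypeScript: testing against a shared JSON Schema with Ajv
import Ajv from 'ajv';
// Same schema file the Python suite loads
// (JSON imports require "resolveJsonModule": true in tsconfig)
import codeParamSchema from '../schemas/code-param.schema.json';

const ajv = new Ajv();
const validateCodeParam = ajv.compile(codeParamSchema);

describe('shared validation rules', () => {
  it('agrees with the shared fixtures', () => {
    expect(validateCodeParam({ code: 'AB1234' })).toBe(true);
    expect(validateCodeParam({ code: 'ab1234' })).toBe(false);
  });
});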

Examples

Example 1: E-commerce Product Search Validation

This example demonstrates comprehensive validation for a product search tool that handles complex filtering and prevents injection attacks:

// E-commerce search tool with layered validation
import { z } from 'zod';

const PriceRangeSchema = z.object({
  min: z.number().min(0).optional(),
  max: z.number().min(0).optional()
}).refine(
  data => {
    if (data.min !== undefined && data.max !== undefined) {
      return data.min <= data.max;
    }
    return true;
  },
  { message: "Min price must be less than max price" }
);

const ProductSearchSchema = z.object({
  query: z.string().min(2).max(200),
  category: z.string().regex(/^[a-zA-Z0-9-_]+$/).optional(),
  priceRange: PriceRangeSchema.optional(),
  sortBy: z.enum(['price', 'rating', 'newest', 'relevance']).default('relevance'),
  inStock: z.boolean().default(true),
  page: z.number().int().min(1).default(1),
  perPage: z.number().int().min(10).max(50).default(20)
});

// Comprehensive test suite
describe('ProductSearch validation', () => {
  it('validates search query bounds', () => {
    const tooShort = ProductSearchSchema.safeParse({ query: 'a' });
    expect(tooShort.success).toBe(false);
    
    const valid = ProductSearchSchema.safeParse({ query: 'laptop' });
    expect(valid.success).toBe(true);
  });
  
  it('prevents SQL injection in category', () => {
    const sqlInjection = ProductSearchSchema.safeParse({
      query: 'laptop',
      category: "electronics'; DROP TABLE products;--"
    });
    expect(sqlInjection.success).toBe(false);
  });
  
  it('validates price range logic', () => {
    const invalidRange = ProductSearchSchema.safeParse({
      query: 'laptop',
      priceRange: { min: 1000, max: 500 }
    });
    expect(invalidRange.success).toBe(false);
    if (!invalidRange.success) {
      expect(invalidRange.error.issues[0].message)
        .toContain('less than max price');
    }
  });
});

// Integration with MCP server
async function handleProductSearch(args: unknown) {
  const validation = ProductSearchSchema.safeParse(args);
  
  if (!validation.success) {
    // Format errors for user-friendly display
    const errors = validation.error.issues.map(issue => ({
      field: issue.path.join('.'),
      message: issue.message
    }));
    
    return {
      isError: true,
      content: [{
        type: 'text',
        text: `Invalid search parameters:\n${
          errors.map(e => `- ${e.field}: ${e.message}`).join('\n')
        }`
      }]
    };
  }
  
  // Proceed with validated data
  const searchResults = await performSearch(validation.data);
  return formatSearchResults(searchResults);
}

This implementation validates all input constraints while providing clear error messages. The layered approach catches both structural issues (wrong types) and logical issues (invalid price ranges), ensuring robust protection against various input problems.

Example 2: File System Operations with Security Validation

This example shows how to validate file system operations with security considerations, preventing directory traversal and unauthorized access:

# Secure file system operations with comprehensive validation
from pathlib import Path
from typing import Optional, Literal
from pydantic import BaseModel, Field, ValidationError, validator
import os

class FileOperationParams(BaseModel):
    operation: Literal['read', 'write', 'list', 'delete']
    path: str = Field(..., min_length=1, max_length=500)
    content: Optional[str] = Field(None, max_length=1_000_000)  # 1MB limit
    encoding: str = Field('utf-8', regex=r'^[a-zA-Z0-9-]+$')
    create_dirs: bool = Field(False)
    
    @validator('path')
    def validate_safe_path(cls, v):
        # Normalize and validate path
        normalized = os.path.normpath(v)
        
        # Prevent directory traversal
        if '..' in normalized or normalized.startswith('/'):
            raise ValueError('Path cannot contain .. or absolute paths')
        
        # Prevent access to sensitive files
        forbidden_patterns = ['.git', '.env', 'node_modules', '__pycache__']
        if any(pattern in normalized for pattern in forbidden_patterns):
            raise ValueError('Access to sensitive directories denied')
        
        # Ensure path doesn't exceed depth limit
        if len(Path(normalized).parts) > 10:
            raise ValueError('Path depth exceeds maximum allowed')
        
        return normalized
    
    @validator('content')
    def validate_content_required(cls, v, values):
        if 'operation' in values and values['operation'] == 'write':
            if v is None:
                raise ValueError('Content required for write operations')
        return v
    
    @validator('encoding')
    def validate_encoding_supported(cls, v):
        # Verify encoding is supported
        try:
            ''.encode(v)
        except LookupError:
            raise ValueError(f'Unsupported encoding: {v}')
        return v

# Comprehensive test suite
import pytest

class TestFileOperationValidation:
    def test_prevents_directory_traversal(self):
        """Test that directory traversal attempts are blocked"""
        attacks = [
            '../../../etc/passwd',
            'files/../../../secret.txt',
            '/absolute/path/file.txt',
            '..\\..\\windows\\system32'
        ]
        
        for attack_path in attacks:
            with pytest.raises(ValidationError) as exc:
                FileOperationParams(
                    operation='read',
                    path=attack_path
                )
            assert 'Path cannot contain' in str(exc.value)
    
    def test_blocks_sensitive_directories(self):
        """Test that access to sensitive dirs is blocked"""
        sensitive_paths = [
            '.git/config',
            'node_modules/package/index.js',
            '.env.local',
            '__pycache__/module.pyc'
        ]
        
        for path in sensitive_paths:
            with pytest.raises(ValidationError) as exc:
                FileOperationParams(
                    operation='read',
                    path=path
                )
            assert 'sensitive directories' in str(exc.value)
    
    def test_validates_operation_requirements(self):
        """Test operation-specific validation rules"""
        # Write requires content
        with pytest.raises(ValidationError) as exc:
            FileOperationParams(
                operation='write',
                path='output.txt',
                content=None
            )
        assert 'Content required' in str(exc.value)
        
        # Read doesn't require content
        params = FileOperationParams(
            operation='read',
            path='input.txt'
        )
        assert params.content is None
    
    def test_path_depth_limits(self):
        """Test path depth validation"""
        deep_path = '/'.join(['folder'] * 15)
        
        with pytest.raises(ValidationError) as exc:
            FileOperationParams(
                operation='list',
                path=deep_path
            )
        assert 'depth exceeds' in str(exc.value)

# MCP integration example
async def handle_file_operation(args: dict):
    """Handle file operations with security validation"""
    try:
        params = FileOperationParams(**args)
    except ValidationError as e:
        # Return structured error response
        return {
            'isError': True,
            'content': [{
                'type': 'text',
                'text': f'Validation failed: {e}'
            }]
        }
    
    # Additional runtime security checks
    base_path = Path('./sandbox')
    full_path = base_path / params.path
    
    # Ensure resolved path is within sandbox
    try:
        full_path.resolve().relative_to(base_path.resolve())
    except ValueError:
        return {
            'isError': True,
            'content': [{
                'type': 'text',
                'text': 'Security error: Path escapes sandbox'
            }]
        }
    
    # Execute operation with validated parameters
    return await execute_file_operation(params, full_path)

This comprehensive validation approach combines input sanitization, path security, and operation-specific rules. The multi-layered validation ensures that file operations are both functional and secure, preventing common vulnerabilities while maintaining usability.
