Configuring MCP transport protocols for Docker containers

Kashish Hora

Co-founder of MCPcat

The Quick Answer

MCP servers in Docker use different transport protocols based on your deployment needs. For local development, use stdio transport—it's like a direct phone line between processes. For production, use StreamableHTTP—it's built for scale and resilience.

{
  "mcpServers": {
    "local-dev": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "my-mcp-server:stdio"]
    },
    "production": {
      "url": "http://localhost:8080/mcp"
    }
  }
}

Decision guide: Use stdio when your MCP server runs on the same machine as the client. Use StreamableHTTP when you need multiple clients, horizontal scaling, or remote access. SSE is deprecated—migrate to StreamableHTTP.

Understanding Transport Protocols

Transport protocols determine how MCP clients and servers communicate. Think of them as different ways to have a conversation—each with its own strengths and ideal scenarios.

Stdio: The Direct Connection

Standard I/O transport works like a direct intercom between two rooms. The client launches the Docker container as a subprocess, and they communicate through stdin/stdout streams. This creates an exclusive, high-performance channel perfect for local development or single-user scenarios.

The magic happens through Docker's interactive mode (-i flag), which keeps the stdin stream open. Without this flag, the container can't receive messages and exits immediately—like trying to have a phone conversation after hanging up.

StreamableHTTP: The Modern Standard

StreamableHTTP routes all MCP communication through a single HTTP endpoint that supports bidirectional messaging. Unlike traditional request-response patterns, it can maintain persistent connections, supporting both stateful conversations and stateless operations.

In stateful mode, the server remembers context between messages—ideal for complex workflows. In stateless mode, each request stands alone, enabling horizontal scaling across multiple container replicas. The protocol includes built-in session management via the Mcp-Session-Id header, automatic reconnection, and OAuth 2.1 authentication support.
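
To make the session flow concrete, here's a rough sketch using curl. The /mcp path and port follow the earlier examples, and the JSON-RPC payloads are abbreviated, so treat this as illustrative rather than a spec reference:

# First request: the server assigns a session and returns it in a response header
curl -i -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}'
# The response carries a header like: Mcp-Session-Id: <generated-id>
# (a real initialize also includes protocolVersion, capabilities, and clientInfo)

# Later requests echo the header so the server can restore context
curl -X POST http://localhost:8080/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Mcp-Session-Id: <generated-id>" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'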

SSE: The Legacy Protocol

Server-Sent Events was MCP's first attempt at HTTP-based transport. It used separate endpoints—POST for requests and SSE for responses. While functional, its unidirectional nature and lack of native session management led to its deprecation in favor of StreamableHTTP.

Transport Selection Guide

Choosing the right transport depends on your deployment architecture, scaling needs, and client requirements. Here's a practical decision matrix:

| Deployment Scenario | Recommended Transport | Key Benefits |
|--------------------|--------------------|--------------|
| Local development | stdio | Zero latency, simple debugging, no network configuration |
| Single production instance | StreamableHTTP (stateful) | Session persistence, simple deployment, remote access |
| Scalable microservices | StreamableHTTP (stateless) | Horizontal scaling, load balancing, fault tolerance |
| Legacy integration | SSE → StreamableHTTP migration | Maintain compatibility while upgrading |

When to Use Each Transport

Choose stdio when:

  • Developing and testing locally
  • Running single-user tools or agents
  • Network overhead is unacceptable
  • Client and server share the same host

Choose StreamableHTTP when:

  • Deploying to production
  • Supporting multiple concurrent clients
  • Implementing microservices architecture
  • Requiring authentication and authorization
  • Needing automatic failover and load balancing

Docker Configuration Essentials

Successfully running MCP servers in Docker requires understanding how containers interact with different transport protocols. Each transport has specific requirements that affect your Docker configuration.

Container Lifecycle Management

Docker containers are ephemeral by design. For stdio transport, this means the container lives only as long as the client connection. The container starts when the client connects and stops when it disconnects. This behavior is perfect for development but requires careful consideration for production use.

StreamableHTTP containers, conversely, run continuously and handle multiple connections. They require proper health checks, resource limits, and restart policies to ensure reliability.
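
At the command line, that might look like the following sketch; the image name matches the later compose example and the limit values are illustrative:

# Keep the server running across crashes and daemon restarts, with resource caps
docker run -d --restart unless-stopped \
  --memory 512m --cpus 1.0 \
  -p 8080:8080 \
  my-mcp:latest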

Networking Considerations

Stdio transport bypasses networking entirely—communication happens through process pipes. This eliminates network-related issues but limits you to local deployments.

StreamableHTTP requires careful network configuration. Containers must expose the appropriate ports, and you need to consider:

  • Port mapping between container and host
  • Network isolation for security
  • DNS resolution for service discovery
  • Load balancer integration for scaling
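
A sketch of the first two points; the network name and the loopback-only binding are illustrative choices:

# Attach the server to an isolated network and publish the port only on localhost
docker network create mcp-net
docker run -d --network mcp-net \
  -p 127.0.0.1:8080:8080 \
  my-mcp:latest

Binding to 127.0.0.1 keeps the endpoint off public interfaces until you place a reverse proxy or load balancer in front of it.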

Security Best Practices

Running MCP servers in containers introduces unique security considerations. Always run containers as non-root users to limit potential damage from compromises. For stdio transport, the -i flag creates an attack surface—ensure you trust the container image.

StreamableHTTP deployments should implement:

  • TLS encryption for all communications
  • OAuth 2.1 for authentication
  • Network policies to restrict access
  • Regular security scanning of base images
  • Minimal container images to reduce attack surface
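
As a sketch of the non-root and minimal-image points, here is an illustrative Dockerfile. The Node base image and file names are assumptions that depend on how your server is built:

# Small base image keeps the attack surface minimal
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# Create and switch to an unprivileged user
RUN adduser -D -H mcp
USER mcp
EXPOSE 8080
CMD ["node", "server.js"]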

Common Patterns

Basic Stdio Configuration

docker run -i --rm \
  -v "$PWD:/workspace:ro" \
  -e API_KEY="$API_KEY" \
  my-mcp-server:stdio

This pattern mounts the current directory read-only and passes environment variables for configuration. The --rm flag ensures cleanup after disconnection.

Production StreamableHTTP Setup

version: '3.8'
services:
  mcp-server:
    image: my-mcp:latest
    ports:
      - "8080:8080"
    environment:
      - MCP_MODE=stateless
    deploy:
      replicas: 3
      resources:
        limits:
          memory: 512M

This configuration enables horizontal scaling with resource constraints, suitable for production deployments.
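
The compose file above leaves out health checks and a restart policy. A hedged addition under the same service could look like this; it assumes wget is available in the image and that the server exposes a path that answers plain GET requests, so substitute whichever health endpoint your server actually provides:

    healthcheck:
      # /health is a placeholder path; many servers answer 405 on the MCP endpoint itself
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    restart: unless-stopped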

Troubleshooting

Container Exits Immediately

This common stdio issue occurs when the -i flag is missing. The container can't read from stdin and terminates. Always include -i for interactive mode. Adding --init helps with proper signal handling and prevents zombie processes.
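
A corrected invocation, using the image name from the earlier stdio example:

docker run -i --init --rm my-mcp-server:stdio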

Connection Refused Errors

For StreamableHTTP, this usually indicates:

  • Port mapping issues (container port not exposed)
  • Firewall blocking connections
  • Service not fully started (add health checks)
  • Wrong protocol in client configuration
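
A few quick checks narrow down most of these; the port and path match the earlier examples:

# Is the container running, and is the port actually published?
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
# Any startup errors in the logs?
docker logs <container-name>
# Can the host reach the endpoint at all?
curl -i http://localhost:8080/mcp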

Session Lost After Container Restart

Stateful StreamableHTTP servers lose session data when containers restart. Solutions include:

  • Switch to stateless mode for better resilience
  • Implement external session storage (Redis)
  • Use sticky sessions in load balancers
  • Design clients to handle reconnection gracefully
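
For the sticky-session route, one hedged sketch is an nginx upstream that hashes on the Mcp-Session-Id header, so a given session keeps landing on the same replica; the upstream hostnames are illustrative:

upstream mcp_backend {
    # $http_mcp_session_id is nginx's variable for the Mcp-Session-Id request header
    hash $http_mcp_session_id consistent;
    server mcp-server-1:8080;
    server mcp-server-2:8080;
}
server {
    listen 80;
    location /mcp {
        proxy_pass http://mcp_backend;
        proxy_buffering off;        # do not buffer streamed responses
        proxy_read_timeout 300s;    # allow long-lived streams
    }
}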

Performance Degradation

Monitor resource usage and implement limits. Common causes:

  • Memory leaks accumulating over time
  • CPU throttling from insufficient allocation
  • Network congestion from poor configuration
  • Disk I/O from excessive logging
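
Two commands cover most of the list; the log limits are illustrative values:

# Snapshot per-container CPU, memory, network, and I/O usage
docker stats --no-stream
# Cap log growth so verbose logging cannot exhaust the disk
docker run -d --log-opt max-size=10m --log-opt max-file=3 -p 8080:8080 my-mcp:latest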

Migration Strategies

Moving from SSE to StreamableHTTP

SSE users should migrate to StreamableHTTP for better reliability and features. The migration involves:

  1. Update server code to use StreamableHTTP endpoints
  2. Modify client configuration to use single endpoint
  3. Implement session management via headers
  4. Test thoroughly with parallel deployments
  5. Gradually shift traffic using load balancer rules
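
On the client side, step 2 is often a one-line change. The /sse and /mcp paths below are common conventions rather than fixed values:

{
  "mcpServers": {
    "my-server": {
      "url": "http://localhost:8080/mcp"
    }
  }
}

Before migration, the same entry would have pointed at the SSE endpoint, for example http://localhost:8080/sse.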

Transitioning from Development to Production

Moving from stdio to StreamableHTTP requires architectural changes:

  1. Containerize with appropriate base images
  2. Externalize all configuration
  3. Implement proper logging and monitoring
  4. Add health checks and readiness probes
  5. Design for horizontal scaling from day one
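
A hedged compose fragment covering steps 1 and 2; the env file name and its contents are placeholders:

services:
  mcp-server:
    image: my-mcp:latest
    env_file:
      - .env              # holds MCP_MODE, API keys, log level, and other settings
    ports:
      - "8080:8080"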

Remember: successful MCP deployments in Docker balance simplicity with scalability. Start simple with stdio for development, then graduate to StreamableHTTP when production demands grow. Focus on understanding your transport choice deeply rather than implementing complex configurations prematurely.