The Quick Answer
Connect your MCP server to any OpenTelemetry-compatible platform using MCPcat's OTLP exporter:
import mcpcat from 'mcpcat';

mcpcat.track(server, null, {
  exporters: {
    otlp: {
      type: "otlp",
      endpoint: "http://localhost:4318/v1/traces",
      protocol: "http/protobuf",
      headers: { "api-key": "your-api-key" }
    }
  }
});
import mcpcat

mcpcat.track(server, None, mcpcat.MCPCatOptions(
    exporters={
        "otlp": {
            "type": "otlp",
            "endpoint": "http://localhost:4318/v1/traces",
            "protocol": "http/protobuf",
            "headers": {"api-key": "your-api-key"}
        }
    }
))
This enables distributed tracing across Jaeger, Grafana Tempo, New Relic, and any OTLP-compatible platform. MCPcat automatically maps MCP events to OpenTelemetry spans with semantic attributes.
Prerequisites
- MCPcat SDK installed (mcpcat for TypeScript or Python)
- An OpenTelemetry-compatible backend (Jaeger, Tempo, New Relic, etc.)
- OTLP endpoint accessible from your MCP server
Installation
Install the MCPcat SDK for your language:
# TypeScript/JavaScript
$ npm install mcpcat

# Python
$ pip install mcpcat
Configuration
MCPcat's OpenTelemetry integration transforms MCP events into OTLP-compliant traces, mapping session IDs to trace IDs and event IDs to span IDs. This creates a hierarchical view of your MCP server operations that observability platforms can visualize and analyze.
Configure the OTLP exporter with your backend's endpoint and authentication:
const options = {
  exporters: {
    otlp: {
      type: "otlp",
      endpoint: "https://your-collector.example.com/v1/traces",
      protocol: "http/protobuf", // or "grpc"
      headers: {
        "api-key": process.env.OTLP_API_KEY,
        "x-custom-header": "value"
      },
      compression: "gzip" // optional: reduce bandwidth
    }
  }
};

mcpcat.track(server, null, options);
The protocol field determines how data is transmitted. Use "http/protobuf" for HTTP transport (the default) or "grpc" for gRPC connections. Most cloud providers support HTTP/protobuf, while self-hosted collectors often prefer gRPC for its efficiency.

The compression option reduces network bandwidth usage, which is particularly important for high-volume MCP servers. Setting it to "gzip" can reduce payload sizes by 60-80% with minimal CPU overhead.
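The large reduction comes from how repetitive telemetry payloads are: spans from the same server share most of their attribute keys and values. A quick self-contained illustration with Node's zlib (the span shape here is a simplified stand-in, not MCPcat's actual wire format):

```typescript
import { gzipSync } from 'node:zlib';

// Build a batch of near-identical spans, as a telemetry exporter would.
// This is an illustrative payload shape, not MCPcat's OTLP encoding.
const spans = Array.from({ length: 200 }, (_, i) => ({
  traceId: 'sess_abc123',
  spanId: `evt_${i}`,
  name: 'tools/call',
  attributes: { 'mcp.resource_name': 'fetch_weather' }
}));

const payload = Buffer.from(JSON.stringify(spans));
const compressed = gzipSync(payload);

// Repetitive key/value structure compresses very well.
console.log(`raw: ${payload.length} bytes, gzip: ${compressed.length} bytes`);
```

Real OTLP payloads are protobuf-encoded rather than JSON, but the repetition (and therefore the compression win) is similar.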
Usage
Basic Integration
The simplest integration forwards all MCP telemetry to your OTLP backend without requiring an MCPcat account:
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import mcpcat from 'mcpcat';

const server = new Server({
  name: 'my-mcp-server',
  version: '1.0.0'
});

// Forward to OpenTelemetry without MCPcat dashboard
mcpcat.track(server, null, {
  exporters: {
    otlp: {
      type: "otlp",
      endpoint: process.env.OTLP_ENDPOINT || "http://localhost:4318/v1/traces"
    }
  }
});
This configuration sends telemetry directly to your observability backend, bypassing MCPcat's cloud services entirely. Each MCP session becomes a trace, and individual tool calls become spans within that trace.
Dual Export Strategy
For comprehensive monitoring, export to both MCPcat's dashboard and your existing observability platform:
mcpcat.track(server, "proj_YOUR_PROJECT_ID", {
  exporters: {
    otlp: {
      type: "otlp",
      endpoint: "https://tempo.grafana.net/v1/traces",
      headers: {
        "Authorization": `Bearer ${process.env.GRAFANA_TOKEN}`
      }
    }
  }
});
This approach provides MCPcat's MCP-specific analytics (user intentions, session replay) alongside your existing monitoring infrastructure. The telemetry data flows to both destinations independently, ensuring redundancy.
Platform-Specific Configurations
Different observability platforms require specific configurations. Here are tested examples for popular platforms:
// Jaeger (self-hosted)
const jaegerConfig = {
  exporters: {
    otlp: {
      type: "otlp",
      endpoint: "http://jaeger-collector:4318/v1/traces",
      protocol: "http/protobuf"
    }
  }
};

// New Relic
const newRelicConfig = {
  exporters: {
    otlp: {
      type: "otlp",
      endpoint: "https://otlp.nr-data.net:4318/v1/traces",
      headers: {
        "api-key": process.env.NEW_RELIC_LICENSE_KEY
      }
    }
  }
};

// AWS X-Ray via OpenTelemetry Collector
const xrayConfig = {
  exporters: {
    otlp: {
      type: "otlp",
      endpoint: "http://otel-collector:4318/v1/traces",
      protocol: "http/protobuf"
    }
  }
};
Advanced Usage
Custom Span Attributes
MCPcat automatically enriches spans with semantic attributes that provide context about MCP operations. Understanding this mapping helps you write effective queries in your observability platform.
// MCP events are mapped to OTLP spans with these attributes:

// Resource attributes (service-level)
{
  "service.name": "my-mcp-server",
  "service.version": "1.0.0",
  "telemetry.sdk.name": "mcpcat-typescript",
  "telemetry.sdk.version": "1.2.0"
}

// Span attributes (event-level)
{
  "mcp.event_type": "tools/call",
  "mcp.session_id": "sess_abc123",
  "mcp.resource_name": "fetch_weather",
  "mcp.user_intent": "Check tomorrow's forecast",
  "mcp.actor_id": "user_xyz",
  "mcp.client_name": "claude-desktop",
  "mcp.client_version": "1.0.0"
}
These attributes enable powerful queries in your observability platform. For example, in Jaeger you can search for all traces where mcp.resource_name="fetch_weather" to analyze weather tool performance, or filter by mcp.client_name to understand usage patterns across different clients.
Distributed Tracing Across Services
MCPcat automatically maintains trace context for MCP operations, creating spans that connect to form distributed traces. Each MCP session becomes a trace with a unique trace ID derived from the session ID, and individual operations (tool calls, resource access) become spans within that trace.
// MCPcat automatically handles trace propagation
import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  // Your tool logic - MCPcat tracks this automatically
  const response = await fetch('https://api.example.com/data');

  // MCPcat creates a span for this tool call with:
  // - Trace ID from session_id
  // - Span ID from event_id
  // - Automatic timing and error tracking
  return { content: [{ type: "text", text: await response.text() }] };
});
The trace context automatically flows through the MCP protocol, creating a complete picture of request processing. In your observability backend, you'll see the full trace hierarchy showing how AI agents interact with your MCP server and its tools.
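MCPcat's exact ID derivation is internal to the SDK, but the general idea of mapping stable MCP identifiers onto OTLP's fixed-width IDs can be sketched like this (a hypothetical illustration, not MCPcat's actual algorithm; the helper names are invented for this example):

```typescript
import { createHash } from 'node:crypto';

// Hypothetical sketch: derive fixed-width OTLP IDs deterministically
// from MCP identifiers, so every event in a session lands in one trace.
// OTLP trace IDs are 16 bytes (32 hex chars); span IDs are 8 bytes (16 hex chars).
function toTraceId(sessionId: string): string {
  return createHash('sha256').update(sessionId).digest('hex').slice(0, 32);
}

function toSpanId(eventId: string): string {
  return createHash('sha256').update(eventId).digest('hex').slice(0, 16);
}

// The same session always yields the same trace ID, so spans from
// separate tool calls in one session group under a single trace.
console.log(toTraceId('sess_abc123'), toSpanId('evt_001'));
```

The key property is determinism: any component that knows the session ID can compute the same trace ID, which is what lets spans emitted at different times still assemble into one trace in the backend.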
Performance Optimization
Optimize telemetry overhead in production by using compression:
mcpcat.track(server, null, {
  exporters: {
    otlp: {
      type: "otlp",
      endpoint: "https://collector.example.com/v1/traces",
      compression: "gzip" // Reduce bandwidth by 60-80%
    }
  }
});
The compression option significantly reduces network bandwidth usage, which is particularly important for high-volume MCP servers. GZIP compression can reduce payload sizes by 60-80% with minimal CPU overhead. For additional optimization, configure sampling at the OpenTelemetry Collector level using processor pipelines.
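Collector-side sampling might look like the following sketch, using the probabilistic sampler processor from the Collector's contrib distribution (adjust the percentage and pipeline names to your deployment):

```yaml
# otel-collector-config.yaml (sampling sketch)
processors:
  probabilistic_sampler:
    sampling_percentage: 25   # keep roughly 25% of traces

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler]
      exporters: [otlp]
```

Sampling at the Collector keeps your MCP server's exporter configuration unchanged while cutting what reaches the backend.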
Common Issues
Connection Refused Errors
OpenTelemetry collectors must be configured to accept OTLP traffic on the specified endpoint. The most common cause is misconfigured collector receivers or network policies blocking the connection.
# otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318  # Must match your MCPcat config
      grpc:
        endpoint: 0.0.0.0:4317
Verify connectivity with curl before deploying:
# Test HTTP endpoint
$ curl -X POST http://localhost:4318/v1/traces \
    -H "Content-Type: application/x-protobuf" \
    -d ""
# Should return 400 (bad request), not connection refused
Network policies, firewalls, or container networking issues often block OTLP traffic. Ensure port 4318 (HTTP) or 4317 (gRPC) is accessible from your MCP server to the collector.
Missing Traces in Backend
Traces may not appear immediately due to batching delays or backend processing lag. MCPcat batches telemetry for efficiency, which can delay trace appearance by 5-10 seconds.
// Basic configuration for debugging connectivity
mcpcat.track(server, null, {
  exporters: {
    otlp: {
      type: "otlp",
      endpoint: "http://localhost:4318/v1/traces"
    }
  }
});
Check your backend's ingestion pipeline for delays. Grafana Tempo, for example, may take 15-30 seconds to index new traces. Jaeger typically shows traces within 5 seconds of receipt. If traces still don't appear, verify your endpoint configuration and network connectivity.
Authentication Failures
Different backends require specific authentication headers. Incorrect or missing credentials result in 401 or 403 errors.
// Configure authentication headers for different platforms
mcpcat.track(server, null, {
  exporters: {
    otlp: {
      type: "otlp",
      endpoint: "https://api.honeycomb.io/v1/traces",
      headers: {
        "x-honeycomb-team": process.env.HONEYCOMB_API_KEY, // Honeycomb
        // "api-key": process.env.NEW_RELIC_KEY,           // New Relic
        // "Authorization": `Bearer ${token}`,             // OAuth2
      }
    }
  }
});
Always verify your API keys have appropriate permissions. Most platforms require write access to traces/spans endpoints, not just read access. Check your OTLP endpoint logs for authentication errors if traces aren't appearing.
Examples
Production Monitoring Setup
Here's a complete production setup that monitors an MCP server with error alerting and performance tracking:
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
import mcpcat from 'mcpcat';

const server = new Server({
  name: 'production-mcp-server',
  version: process.env.VERSION || '1.0.0'
});

// Multi-destination telemetry configuration
const telemetryConfig = {
  exporters: {
    // Primary: Grafana Cloud for long-term storage
    grafana: {
      type: "otlp",
      endpoint: "https://tempo-prod-us-central1.grafana.net/tempo",
      protocol: "http/protobuf",
      headers: {
        "Authorization": `Bearer ${process.env.GRAFANA_TOKEN}`
      },
      compression: "gzip"
    },
    // Secondary: Local Jaeger for debugging
    jaeger: {
      type: "otlp",
      endpoint: "http://jaeger:4318/v1/traces",
      protocol: "http/protobuf"
    }
  }
};

// Initialize tracking with project ID for MCPcat dashboard
mcpcat.track(server, process.env.MCPCAT_PROJECT_ID, telemetryConfig);

// Add custom instrumentation for your tools
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const startTime = Date.now();
  try {
    // Your actual tool implementation logic
    const result = await handleToolCall(request.params.name, request.params.arguments);

    // Log performance metrics
    const duration = Date.now() - startTime;
    if (duration > 1000) {
      console.warn(`Slow tool call: ${request.params.name} took ${duration}ms`);
    }
    return result;
  } catch (error) {
    // Errors are automatically captured by MCPcat
    throw error;
  }
});

// Example tool handler (implement your actual tool logic)
async function handleToolCall(toolName, args) {
  // Your tool-specific logic here
  switch (toolName) {
    case 'get_weather':
      // Implement weather fetching logic
      return { content: [{ type: "text", text: "Weather data" }] };
    default:
      throw new Error(`Unknown tool: ${toolName}`);
  }
}
This configuration provides comprehensive monitoring with redundancy, performance tracking, and error alerting. The dual export ensures you never lose visibility even if one backend is unavailable.
Trace Correlation Dashboard
Create meaningful dashboards by correlating MCPcat's semantic attributes with system metrics:
// Send telemetry with contextual information
mcpcat.track(server, null, {
  exporters: {
    otlp: {
      type: "otlp",
      endpoint: "http://otel-collector:4318/v1/traces"
    }
  }
});

// MCPcat automatically includes standard attributes like:
// - mcp.event_type (e.g., "tools/call")
// - mcp.session_id (unique session identifier)
// - mcp.resource_name (tool/resource name)
// - mcp.client_name (MCP client identifier)
// - service.name (your MCP server name)
// - service.version (your MCP server version)
These attributes enable sophisticated analysis in your observability platform. Query by mcp.resource_name to analyze specific tool performance, or filter by mcp.client_name to understand usage patterns across different AI clients. The semantic attributes provided by MCPcat create a rich dataset for building insightful dashboards.
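In Grafana Tempo, for instance, such a query could be expressed in TraceQL (attribute names taken from the list above; the 1s threshold is an arbitrary example):

```
{ span.mcp.resource_name = "fetch_weather" && duration > 1s }
```

This returns only traces containing a slow invocation of that tool, which is a useful starting point for a latency dashboard panel.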
Related Guides
Stream MCP Server Logs to Datadog for Observability
Forward MCP server logs and metrics to Datadog using MCPcat's native integration for complete observability.
Send MCP Server Errors to Sentry for Real-Time Alerting
Forward MCP server errors to Sentry using MCPcat telemetry. Track errors, performance metrics, and debug production issues in real-time.
Set Up Multi-Platform Telemetry for MCP Servers
Send MCP telemetry data to multiple observability platforms simultaneously for comprehensive monitoring.