The Quick Answer
Stream MCP server telemetry directly to Datadog by configuring MCPcat's native Datadog exporter. Add this configuration to your MCP server initialization:
mcpcat.track(server, null, {
  exporters: {
    datadog: {
      type: "datadog",
      apiKey: process.env.DD_API_KEY,
      site: "datadoghq.com",
      service: "my-mcp-server",
      env: "production"
    }
  }
})
MCPcat automatically converts MCP events into Datadog logs and metrics, including trace IDs for correlation across your observability stack. Each event generates both searchable logs with structured MCP metadata and actionable metrics for dashboards and alerting.
Prerequisites
- MCPcat SDK version 0.2.0 or higher
- Datadog account with API key
- MCP server using TypeScript or Python
- Node.js 16+ (TypeScript) or Python 3.8+ (Python)
Installation
Install the MCPcat SDK with built-in Datadog support:
# For TypeScript/JavaScript projects
$ npm install mcpcat

# For Python projects
$ pip install mcpcat
The MCPcat SDK includes the Datadog exporter by default. No additional dependencies are required for basic integration, though you may want to install the Datadog Agent for advanced features like custom metrics and distributed tracing.
Configuration
MCPcat's Datadog integration transforms MCP events into structured logs and metrics that integrate seamlessly with your existing Datadog infrastructure. The exporter handles authentication, batching, and retry logic automatically, ensuring reliable data transmission even during network interruptions.
Configure the Datadog exporter when initializing MCPcat tracking on your MCP server. The configuration accepts both environment-specific settings and optional parameters for fine-tuning the integration:
import { MCPServer } from '@modelcontextprotocol/server-node'
import mcpcat from 'mcpcat'

const server = new MCPServer()

// Initialize MCPcat with Datadog exporter
mcpcat.track(server, null, {
  exporters: {
    datadog: {
      type: "datadog",
      apiKey: process.env.DD_API_KEY,               // Required: Your Datadog API key
      site: process.env.DD_SITE || "datadoghq.com", // Required: Your Datadog region
      service: "mcp-weather-service",               // Required: Service identifier
      env: process.env.NODE_ENV || "production",    // Optional: Environment tag

      // Optional: Advanced configuration
      batchSize: 100,      // Events per batch (default: 100)
      flushInterval: 5000, // Flush interval in ms (default: 5000)
      maxRetries: 3,       // Retry attempts for failed sends (default: 3)
      timeout: 10000       // Request timeout in ms (default: 10000)
    }
  }
})
For Python implementations, use snake_case for configuration keys:
from mcp.server import Server
import mcpcat
import os

server = Server()

# Initialize MCPcat with Datadog exporter
mcpcat.track(server, None, mcpcat.MCPCatOptions(
    exporters={
        "datadog": {
            "type": "datadog",
            "api_key": os.getenv("DD_API_KEY"),
            "site": os.getenv("DD_SITE", "datadoghq.com"),
            "service": "mcp-weather-service",
            "env": os.getenv("PYTHON_ENV", "production"),
            # Optional: Advanced configuration
            "batch_size": 100,
            "flush_interval": 5000,
            "max_retries": 3,
            "timeout": 10000,
        }
    }
))
The site parameter must match your Datadog region: use datadoghq.com for US1, datadoghq.eu for EU, us3.datadoghq.com for US3, us5.datadoghq.com for US5, or ddog-gov.com for US1-FED. An incorrect site configuration will result in authentication failures despite a valid API key.
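If you deploy the same server across regions, it can help to validate the site value at startup rather than discover a mismatch through failed exports. A minimal sketch (the KNOWN_SITES list and fail-fast check are illustrative, not part of the MCPcat API):

// Illustrative startup check: fail fast on an unrecognized Datadog site
const KNOWN_SITES = [
  "datadoghq.com",     // US1
  "datadoghq.eu",      // EU
  "us3.datadoghq.com", // US3
  "us5.datadoghq.com", // US5
  "ddog-gov.com"       // US1-FED
]

const site = process.env.DD_SITE || "datadoghq.com"
if (!KNOWN_SITES.includes(site)) {
  throw new Error(`Unrecognized DD_SITE "${site}". Check your Datadog region.`)
}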
Usage
With the Datadog exporter configured, MCPcat automatically streams all MCP events to your Datadog account. The integration captures comprehensive telemetry data without requiring code changes to your existing MCP server implementation.
Automatic Event Tracking
MCPcat intercepts and tracks all MCP protocol events, transforming them into structured Datadog logs and metrics. Each event type generates specific telemetry data optimized for observability:
// All of these MCP operations are automatically tracked
server.setRequestHandler({
  async tools_call(request) {
    // MCPcat tracks: tool name, execution time, success/failure, arguments
    const result = await processToolCall(request)
    return result
  },
  async resources_list(request) {
    // MCPcat tracks: resource access patterns, list operations
    return getAvailableResources()
  },
  async prompts_get(request) {
    // MCPcat tracks: prompt usage, parameter substitution
    return getPromptTemplate(request.params.name)
  }
})

// Error events are automatically captured with full context
server.onerror = (error) => {
  // MCPcat sends error details to Datadog with stack traces
  console.error('MCP Server error:', error)
}
The integration preserves the complete context of each MCP interaction, including session identifiers, client information, and timing data. This context enables powerful correlation analysis in Datadog, allowing you to trace issues across distributed systems.
Log Structure in Datadog
Each MCP event creates a structured log entry in Datadog with consistent field mapping. Understanding this structure helps you build effective queries and dashboards:
{
  "message": "tools/call - get_current_weather",
  "service": "mcp-weather-service",
  "ddsource": "mcpcat",
  "ddtags": "env:production,event_type:tools.call,resource:get_current_weather",
  "timestamp": 1706294400000,
  "status": "info",
  "mcp": {
    "session_id": "sess_abc123",
    "event_id": "evt_xyz789",
    "event_type": "tools/call",
    "resource": "get_current_weather",
    "duration_ms": 145,
    "user_intent": "Check weather in San Francisco",
    "actor_id": "user_123",
    "actor_name": "John Doe",
    "client_name": "claude-desktop",
    "client_version": "1.0.0",
    "server_name": "weather-mcp-server",
    "server_version": "0.1.0",
    "is_error": false,
    // Tool-specific metadata
    "tool_args": {
      "location": "San Francisco, CA",
      "units": "fahrenheit"
    },
    "tool_result": {
      "temperature": 68,
      "conditions": "partly cloudy"
    }
  },
  // Datadog trace correlation
  "dd": {
    "trace_id": "1234567890123456789",
    "span_id": "9876543210987654321"
  }
}
MCPcat automatically generates trace and span IDs from session and event identifiers, enabling correlation with APM traces if you're using Datadog's distributed tracing. The numeric IDs are derived from SHA256 hashes of the original string identifiers to ensure compatibility with Datadog's tracing system.
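As a rough illustration of that derivation (the exact scheme is internal to MCPcat, so treat this sketch as an assumption rather than its actual implementation), a string identifier can be reduced to the unsigned 64-bit integer format Datadog expects by hashing it and reading the leading bytes:

import { createHash } from 'node:crypto'

// Assumed derivation, for illustration only: SHA-256 the string ID and
// read the first 8 bytes as an unsigned 64-bit integer, serialized as a
// decimal string the way Datadog trace and span IDs are.
function toNumericId(id: string): string {
  const digest = createHash('sha256').update(id).digest()
  return digest.readBigUInt64BE(0).toString()
}

toNumericId('sess_abc123') // stable decimal string, e.g. "1234567890123456789"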
Metrics Collection
The Datadog exporter generates three primary metrics from MCP events, each tagged with relevant metadata for filtering and aggregation:
// Example of metrics generated from a single tool call
{
  "mcp.events.count": 1,     // Incremented for each event
  "mcp.event.duration": 145, // Gauge showing execution time
  "mcp.errors.count": 0      // Incremented only for errors
}

// All metrics include these tags
{
  "service": "mcp-weather-service",
  "env": "production",
  "event_type": "tools.call", // Dots replace slashes for compatibility
  "resource": "get_current_weather",
  "client_name": "claude-desktop"
}
These metrics enable you to create dashboards showing request rates, performance trends, and error patterns across your MCP infrastructure. The consistent tagging scheme allows correlation between different services and environments.
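For instance, a request-rate graph and an error-ratio graph can be built directly from these metrics. The queries below use standard Datadog metric query syntax; swap in your own service tag:

# Requests per second, split by tool
sum:mcp.events.count{service:mcp-weather-service} by {resource}.as_rate()

# Fraction of events that errored
sum:mcp.errors.count{service:mcp-weather-service}.as_count() / sum:mcp.events.count{service:mcp-weather-service}.as_count()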
Advanced Usage
Multi-Platform Integration
MCPcat supports simultaneous export to multiple observability platforms, enabling you to leverage Datadog alongside other monitoring tools:
mcpcat.track(server, "proj_YOUR_ID", {
  exporters: {
    // Send to Datadog for logs and metrics
    datadog: {
      type: "datadog",
      apiKey: process.env.DD_API_KEY,
      site: "datadoghq.com",
      service: "mcp-weather-service",
      env: "production"
    },
    // Also send to an OpenTelemetry collector
    otlp: {
      type: "otlp",
      endpoint: "http://localhost:4318/v1/traces",
      headers: {
        "x-service-name": "mcp-weather-service"
      }
    },
    // And capture errors in Sentry
    sentry: {
      type: "sentry",
      dsn: process.env.SENTRY_DSN,
      environment: "production"
    }
  }
})
This multi-platform approach provides defense in depth for observability, ensuring you can correlate data across different systems and maintain visibility even if one platform experiences issues. Each exporter operates independently with its own batching and retry logic.
Custom Tagging and Metadata
Enhance your Datadog integration by adding custom tags and metadata to provide additional context for your MCP events:
mcpcat.track(server, null, {
  exporters: {
    datadog: {
      type: "datadog",
      apiKey: process.env.DD_API_KEY,
      site: "datadoghq.com",
      service: "mcp-weather-service",
      env: "production",
      // Add custom tags to all events
      tags: {
        team: "platform",
        version: process.env.APP_VERSION,
        region: process.env.AWS_REGION,
        deployment: process.env.DEPLOYMENT_ID
      }
    }
  },
  // Add user identification
  identifyUser: (context) => {
    return {
      id: context.clientId,
      name: context.clientName,
      metadata: {
        subscription: "premium",
        organization: "acme-corp"
      }
    }
  }
})
Custom tags appear on all logs and metrics, enabling fine-grained filtering and analysis in Datadog. User identification data helps track usage patterns and troubleshoot user-specific issues.
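In the Log Explorer, custom tags combine with the structured mcp attributes in a single search. For example (attribute paths follow the log structure shown earlier):

service:mcp-weather-service team:platform @mcp.is_error:true @mcp.client_name:claude-desktop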
Creating Datadog Dashboards
Build comprehensive dashboards to visualize your MCP server performance. Start with this dashboard JSON template that you can import directly into Datadog:
{
  "title": "MCP Server Performance",
  "widgets": [
    {
      "definition": {
        "type": "timeseries",
        "requests": [{
          "q": "sum:mcp.events.count{$service,$env} by {event_type}.as_rate()",
          "display_type": "line",
          "style": {
            "palette": "dog_classic",
            "line_type": "solid"
          }
        }],
        "title": "Request Rate by Event Type"
      }
    },
    {
      "definition": {
        "type": "query_value",
        "requests": [{
          "q": "avg:mcp.event.duration{$service,$env}",
          "aggregator": "avg"
        }],
        "title": "Average Response Time"
      }
    },
    {
      "definition": {
        "type": "toplist",
        "requests": [{
          "q": "top(sum:mcp.events.count{$service,$env} by {resource}, 10, 'sum', 'desc')",
          "style": {
            "palette": "dog_classic"
          }
        }],
        "title": "Most Used Tools"
      }
    }
  ],
  "template_variables": [
    {
      "name": "service",
      "prefix": "service",
      "default": "mcp-weather-service"
    },
    {
      "name": "env",
      "prefix": "env",
      "default": "production"
    }
  ]
}
This template provides essential performance metrics including request rates, response times, and tool usage patterns. Customize the dashboard by adding widgets for error rates, specific tool performance, or business-specific metrics.
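To pair the dashboard with alerting, a metric monitor on the error count is a natural complement. A minimal sketch of a monitor definition for the Datadog monitors API (the thresholds and notification handle are placeholders to tune for your workload):

{
  "name": "MCP server error spike",
  "type": "metric alert",
  "query": "sum(last_5m):sum:mcp.errors.count{service:mcp-weather-service}.as_count() > 10",
  "message": "MCP error count exceeded threshold. Check recent tool failures. @slack-platform-alerts",
  "tags": ["service:mcp-weather-service", "env:production"],
  "options": {
    "thresholds": { "critical": 10, "warning": 5 }
  }
}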
Common Issues
Error: Authentication failed despite valid API key
This typically indicates a mismatch between your API key and the configured Datadog site. Verify that your site parameter matches your Datadog account region:
// Wrong site for a US-based account
mcpcat.track(server, null, {
  exporters: {
    datadog: {
      type: "datadog",
      apiKey: process.env.DD_API_KEY,
      site: "datadoghq.eu", // Should be "datadoghq.com" for US1
      service: "my-service"
    }
  }
})
Issue: Missing or incomplete trace correlation
MCPcat generates trace IDs from session identifiers, but these may not correlate with existing APM traces if your application uses different tracing libraries. To enable full correlation, ensure consistent trace propagation:
import tracer from 'dd-trace'

// dd-trace exports a tracer instance; initialize it before creating spans
tracer.init()

// Propagate trace context to MCP operations
const span = tracer.startSpan('mcp.operation')

mcpcat.track(server, null, {
  exporters: {
    datadog: {
      type: "datadog",
      apiKey: process.env.DD_API_KEY,
      site: "datadoghq.com",
      service: "my-service",
      // Include trace context
      traceContext: {
        traceId: span.context().toTraceId(),
        spanId: span.context().toSpanId()
      }
    }
  }
})
Problem: High latency or dropped events during traffic spikes
The default batching configuration may not be optimal for high-traffic scenarios. Adjust batching parameters based on your traffic patterns:
mcpcat.track(server, null, {
  exporters: {
    datadog: {
      type: "datadog",
      apiKey: process.env.DD_API_KEY,
      site: "datadoghq.com",
      service: "high-traffic-service",
      // Optimize for high throughput
      batchSize: 250,      // Increase batch size
      flushInterval: 2000, // Reduce flush interval
      maxRetries: 5,       // More aggressive retries
      timeout: 15000       // Longer timeout for large batches
    }
  }
})
Monitor the mcp.exporter.errors metric in Datadog to identify when the batching configuration needs adjustment. Persistent export errors indicate that your configuration needs tuning for your specific workload.
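A simple monitor query on that metric might look like the following (the window and threshold are starting points, not recommendations):

sum(last_10m):sum:mcp.exporter.errors{service:high-traffic-service}.as_count() > 0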
Examples
Example 1: Monitoring AI Agent Tool Usage
This example demonstrates tracking and analyzing how AI agents interact with your MCP server tools, providing insights into usage patterns and performance characteristics:
import { MCPServer } from '@modelcontextprotocol/server-node'
import mcpcat from 'mcpcat'

const server = new MCPServer()

// Configure comprehensive tool monitoring
mcpcat.track(server, null, {
  exporters: {
    datadog: {
      type: "datadog",
      apiKey: process.env.DD_API_KEY,
      site: "datadoghq.com",
      service: "ai-tools-service",
      env: "production",
      // Tag with tool categories for analysis
      tags: {
        category: "ai-tools",
        model: process.env.AI_MODEL || "unknown"
      }
    }
  },
  // Capture user intent for deeper insights
  captureUserIntent: true,
  // Identify AI agents
  identifyUser: (context) => ({
    id: context.clientId,
    name: context.clientName,
    metadata: {
      model: context.headers?.['x-ai-model'],
      version: context.headers?.['x-ai-version']
    }
  })
})

// Register tools with automatic tracking
server.tool("analyze_sentiment", async (args) => {
  // Tool execution is automatically tracked
  const result = await sentimentAnalyzer.analyze(args.text)
  return result
})

server.tool("generate_summary", async (args) => {
  // Performance metrics captured automatically
  const summary = await summarizer.process(args.content)
  return summary
})
With this configuration, you can create Datadog monitors that alert when tool response times exceed thresholds, usage patterns change unexpectedly, or specific AI models encounter errors. The captured intent data helps understand what users are trying to accomplish when tools fail.
Example 2: Production Debugging with Correlated Logs
This example shows how to leverage MCPcat's Datadog integration for production debugging by correlating MCP events with application logs and distributed traces:
import { MCPServer } from '@modelcontextprotocol/server-node'
import mcpcat from 'mcpcat'
import winston from 'winston'

const server = new MCPServer()

// Configure Winston to send logs to Datadog
const logger = winston.createLogger({
  transports: [
    new winston.transports.Http({
      host: 'http-intake.logs.datadoghq.com',
      path: `/api/v2/logs?dd-api-key=${process.env.DD_API_KEY}`,
      ssl: true
    })
  ],
  defaultMeta: {
    service: 'mcp-debug-service',
    ddsource: 'nodejs',
    env: 'production'
  }
})

// Initialize MCPcat with session correlation
mcpcat.track(server, null, {
  exporters: {
    datadog: {
      type: "datadog",
      apiKey: process.env.DD_API_KEY,
      site: "datadoghq.com",
      service: "mcp-debug-service",
      env: "production"
    }
  },
  // Add session context to all events
  beforeSend: (event) => {
    // Log correlated application events
    logger.info('MCP event processed', {
      session_id: event.session_id,
      event_id: event.event_id,
      event_type: event.event_type,
      duration_ms: event.duration_ms
    })
    return event
  }
})

// Implement tools with detailed logging
server.tool("database_query", async (args) => {
  const startTime = Date.now()
  try {
    logger.debug('Executing database query', {
      query: args.query,
      session_id: server.currentSessionId
    })

    const result = await db.execute(args.query)

    logger.info('Query completed successfully', {
      duration_ms: Date.now() - startTime,
      row_count: result.rows.length,
      session_id: server.currentSessionId
    })

    return result
  } catch (error) {
    logger.error('Database query failed', {
      error: error.message,
      stack: error.stack,
      query: args.query,
      session_id: server.currentSessionId
    })
    throw error
  }
})
This setup enables powerful debugging workflows in Datadog. You can search for a specific session ID to see all related MCP events and application logs in chronological order, making it easy to trace the complete flow of an AI agent interaction and identify where issues occurred. The correlated logs provide context that's often missing when debugging distributed AI systems, showing not just what the MCP server did, but why and how it made specific decisions.
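For example, retrieving every log line for a single agent session in the Log Explorer can be as simple as this search (attribute paths match the MCPcat log structure and the Winston metadata shown above):

service:mcp-debug-service (@mcp.session_id:sess_abc123 OR @session_id:sess_abc123)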
Related Guides
Monitor MCP Server Performance with OpenTelemetry
Connect MCP servers to any OpenTelemetry-compatible platform for distributed tracing and performance monitoring.
Send MCP Server Errors to Sentry for Real-Time Alerting
Forward MCP server errors to Sentry using MCPcat telemetry. Track errors, performance metrics, and debug production issues in real-time.
Set Up Multi-Platform Telemetry for MCP Servers
Send MCP telemetry data to multiple observability platforms simultaneously for comprehensive monitoring.