The Quick Answer
Create a ChatGPT app by building an MCP server that returns structured data and a React component to display it. The MCP server defines tools, and each tool can reference an HTML template for its UI.
Install the TypeScript MCP SDK (plus Express, CORS middleware, and Zod for input schemas) and create a minimal server:
$npm install @modelcontextprotocol/sdk express cors zod
Register a tool with its UI template and handler:
server.registerTool(
  "greet_user",
  {
    description: "Display a personalized greeting",
    inputSchema: { name: z.string() },
    _meta: {
      "openai/outputTemplate": "ui://widget/hello.html",
      "openai/toolInvocation/invoking": "Creating greeting…",
      "openai/toolInvocation/invoked": "Greeting ready."
    }
  },
  async ({ name }) => {
    return {
      content: [{ type: "text", text: `Greeting ${name}` }],
      structuredContent: { message: `Hello, ${name}!` }
    };
  }
);
Create hello.html with a simple React component:
<!DOCTYPE html>
<html>
  <body>
    <div id="root"></div>
    <script type="module">
      import React from 'https://esm.sh/react@18';
      import ReactDOM from 'https://esm.sh/react-dom@18/client';

      const App = () => {
        // window.openai.toolOutput mirrors the tool's structuredContent
        const data = window.openai?.toolOutput;
        return React.createElement('div', null, data?.message || 'Loading...');
      };

      ReactDOM.createRoot(document.getElementById('root')).render(
        React.createElement(App)
      );
    </script>
  </body>
</html>
The toolInvocation strings provide status feedback to users. ChatGPT displays the invoking message while your tool executes, then shows invoked when complete. This gives users immediate feedback without requiring custom loading states in your component.
This architecture separates concerns: the MCP server handles logic and data, while the UI component focuses purely on presentation. By declaring the template in the tool descriptor, ChatGPT knows upfront which UI to render for each tool's output, enabling better optimization and preloading.
Project Setup
Create a new directory for your ChatGPT app and set up the basic structure. Apps SDK projects typically separate the MCP server code from the UI components.
$mkdir hello-chatgpt-app
$cd hello-chatgpt-app
$mkdir components
Initialize your Node.js project and install dependencies:
$npm init -y
$npm install @modelcontextprotocol/sdk express cors zod
$npm install -D typescript @types/node @types/express @types/cors tsx
Create a tsconfig.json for TypeScript configuration:
$npx tsc --init
Update your package.json to add development scripts:
{
  "scripts": {
    "dev": "tsx watch src/http-server.ts",
    "build": "tsc",
    "start": "node dist/http-server.js"
  }
}
The components directory will hold your HTML templates for UI rendering. The MCP server code will live in src/http-server.ts and define the tools that ChatGPT can call. The tsx package allows running TypeScript directly during development without a build step.
This structure follows the Apps SDK pattern where server logic and UI presentation are cleanly separated. The server focuses on data and business logic, while components handle user interaction and display.
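Assuming the commands above, the project layout ends up roughly like this (greeting.html is added in a later step):

```text
hello-chatgpt-app/
├── components/          # HTML templates served to ChatGPT
│   └── greeting.html
├── src/
│   └── http-server.ts   # MCP server plus Express HTTP endpoints
├── package.json
└── tsconfig.json
```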
Understanding the Architecture
Before diving into code, let's understand how ChatGPT apps work. Your application consists of three key pieces:
- MCP Server: Defines tools (actions ChatGPT can perform) and handles their execution
- UI Components: HTML files with React code that render tool results inline in ChatGPT
- HTTP Server: Serves both the MCP protocol endpoint and component files over HTTPS
For browser-based ChatGPT, you'll use Streamable HTTP for the MCP protocol. This modern transport uses a single HTTP endpoint (/mcp) for all communication: requests, responses, and optional streaming.
The data flow: ChatGPT → POST /mcp → MCP Server → Tool Handler → Returns structuredContent → ChatGPT fetches component HTML → Renders with window.openai.toolOutput
This clean separation means your server handles business logic while components focus purely on presentation. The _meta["openai/outputTemplate"] field in your tool descriptor links the two together.
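For illustration, here is roughly what one tools/call exchange over /mcp looks like, sketched as TypeScript object literals. The field values are examples, not a full JSON-RPC transcript, and the result shown is the payload inside the response's result field:

```typescript
// Hypothetical request body ChatGPT POSTs to /mcp (JSON-RPC 2.0 envelope)
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "greet_user",
    arguments: { name: "Alex" },
  },
};

// Sketch of the tool result: `content` is the plain-text fallback,
// `structuredContent` is what the component reads via window.openai.toolOutput.
const result = {
  content: [{ type: "text", text: "Greeting Alex" }],
  structuredContent: {
    greeting: "Hello, Alex!",
    timestamp: new Date().toISOString(),
  },
};
```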
Create the UI Component
UI components in ChatGPT apps are HTML files with embedded React code. They run in a sandboxed iframe and communicate with ChatGPT through the window.openai API.
Create components/greeting.html:
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Greeting Widget</title>
  </head>
  <body>
    <div id="root"></div>
    <script type="module">
      import React from 'https://esm.sh/react@18';
      import ReactDOM from 'https://esm.sh/react-dom@18/client';

      function GreetingCard() {
        // window.openai.toolOutput mirrors the structuredContent the server returned
        const data = window.openai?.toolOutput;
        if (!data) {
          return React.createElement('div', null, 'Loading...');
        }
        return React.createElement('div', null,
          React.createElement('p', null, data.greeting),
          React.createElement('small', null, `Generated: ${new Date(data.timestamp).toLocaleString()}`)
        );
      }

      ReactDOM.createRoot(document.getElementById('root')).render(
        React.createElement(GreetingCard)
      );
    </script>
  </body>
</html>
This component demonstrates the fundamental pattern: read from window.openai.toolOutput to get the data your server returned in structuredContent. The component is self-contained, using ESM.sh to load React directly from a CDN without requiring a build step.
ChatGPT injects the window.openai object before your component loads, providing access to the tool output data. The toolInvocation status strings you defined in _meta handle loading states automatically, so your component can focus on displaying the data. More complex components can use window.openai.callTool() to trigger server actions or window.openai.sendFollowUpMessage() to continue the conversation.
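To keep that interaction logic testable outside the iframe, one option is to pass the window.openai object into your handlers rather than reading the global directly. A minimal sketch, where the OpenAiApi interface is a simplified assumption about the API surface and refreshGreeting is a hypothetical handler:

```typescript
// Simplified subset of the window.openai surface (assumed shape, not the full API)
interface OpenAiApi {
  callTool: (name: string, args: Record<string, unknown>) => Promise<unknown>;
  sendFollowUpMessage: (opts: { prompt: string }) => Promise<void>;
}

// Hypothetical handler: re-run the greeting tool, then nudge the conversation along
async function refreshGreeting(api: OpenAiApi, name: string): Promise<unknown> {
  const result = await api.callTool("greet_user", { name });
  await api.sendFollowUpMessage({ prompt: `Thank the user named ${name}` });
  return result;
}
```

In the component you would call refreshGreeting(window.openai, name) from a click handler; in tests you can pass a stub object instead.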
Create the HTTP Server
For browser-based ChatGPT, you need an HTTP server that handles both the MCP protocol and serves component files. The MCP server will use Streamable HTTP for communication.
Create src/http-server.ts:
import express from "express";
import cors from "cors";
import path from "path";
import { fileURLToPath } from "url";
import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

const app = express();
const PORT = 8000;

// Enable CORS for ChatGPT
app.use(
  cors({
    origin: "https://chatgpt.com",
    credentials: true,
  })
);
app.use(express.json());

// Initialize MCP server (McpServer provides the high-level registerTool API
// and advertises the tools capability automatically)
const mcpServer = new McpServer({
  name: "hello-world-app",
  version: "1.0.0",
});

// Register the greeting tool
mcpServer.registerTool(
  "greet_user",
  {
    description: "Display a personalized greeting",
    inputSchema: {
      name: z.string().describe("User's name"),
    },
    _meta: {
      "openai/outputTemplate": "ui://widget/greeting.html",
      "openai/toolInvocation/invoking": "Creating greeting…",
      "openai/toolInvocation/invoked": "Greeting ready.",
    },
  },
  async ({ name }) => {
    const userName = name || "World";
    return {
      content: [
        {
          type: "text",
          text: `Greeting ${userName}`,
        },
      ],
      structuredContent: {
        greeting: `Hello, ${userName}!`,
        timestamp: new Date().toISOString(),
      },
    };
  }
);
// MCP endpoint using Streamable HTTP transport
app.post("/mcp", async (req, res) => {
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
    enableJsonResponse: true,
  });
  res.on("close", () => {
    transport.close();
  });
  await mcpServer.connect(transport);
  await transport.handleRequest(req, res, req.body);
});

// Serve component files with Skybridge MIME type
app.get("/widget/:componentName", (req, res) => {
  // basename strips any path segments to prevent directory traversal
  const componentPath = path.join(
    __dirname,
    "..",
    "components",
    path.basename(req.params.componentName)
  );
  // Set the correct MIME type for ChatGPT app components
  res.type("text/html+skybridge");
  res.sendFile(componentPath, (err) => {
    if (err) {
      res.status(404).json({ error: "Component not found" });
    }
  });
});

// Health check endpoint
app.get("/health", (req, res) => {
  res.json({ status: "ok" });
});

app.listen(PORT, () => {
  console.log(`MCP server running on http://localhost:${PORT}`);
  console.log(`MCP endpoint: http://localhost:${PORT}/mcp`);
  console.log(`Components endpoint: http://localhost:${PORT}/widget/:name`);
});
Update your package.json scripts:
{
  "scripts": {
    "dev": "tsx watch src/http-server.ts",
    "build": "tsc",
    "start": "node dist/http-server.js"
  }
}
This unified server handles both MCP protocol requests and component serving. ChatGPT will connect to your /mcp endpoint via POST requests to discover and call tools, while component files are served from /widget/:name.
The text/html+skybridge MIME type is required for ChatGPT to properly recognize app UI components. These HTML files are called "Skybridge" templates in OpenAI's documentation. While plain text/html might work during development, the correct MIME type is essential for production and will be validated by tools like fast-agent. Setting res.type("text/html+skybridge") before sending the file ensures proper recognition.
The CORS configuration is critical for security. Only allow https://chatgpt.com to fetch your components and access your MCP endpoint. In production, you'll use HTTPS for your server and may need additional security measures like authentication tokens.
For local development, use ngrok to expose your server:
$ngrok http 8000
Copy the HTTPS URL provided by ngrok - you'll need it to connect ChatGPT to your server.
Connect to ChatGPT
With your server running, connect it to ChatGPT using Developer Mode. This enables ChatGPT to discover your tools and render your components.
Start your development server:
$npm run dev
Expose your local server via ngrok:
$ngrok http 8000
Copy the HTTPS URL (e.g., https://abc123.ngrok.io). Update your tool registration in http-server.ts to use this URL for component templates:
_meta: {
  "openai/outputTemplate": "https://abc123.ngrok.io/widget/greeting.html"
}
Restart your server after updating the URL.
Enable Developer Mode in ChatGPT:
- Open ChatGPT settings (browser version at chatgpt.com)
- Navigate to Settings → Features → Developer mode
- Toggle Developer mode on
Create a new connector:
- In the ChatGPT interface, go to Connectors
- Click Create connector
- Enter your connector details:
- Name: Hello World App
- Connector URL: https://abc123.ngrok.io/mcp (your ngrok URL + /mcp)
- Click Save
Test your app by starting a new conversation and asking: "Use the Hello World App to greet me as Alex"
ChatGPT will show "Creating greeting…" while the tool executes, then "Greeting ready." when complete. It will call your greet_user tool with {"name": "Alex"} and render your greeting component inline showing "Hello, Alex!" with a timestamp.
Common Issues
Understanding why things break helps you debug faster. Here are the most common issues when building your first ChatGPT app and their root causes.
Component Won't Render
Symptom: ChatGPT calls your tool but shows text instead of your UI component.
This usually happens because ChatGPT can't fetch your component file. The component URL in _meta["openai/outputTemplate"] must be publicly accessible and return valid HTML with proper CORS headers and the correct MIME type.
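One way to sanity-check your component endpoint is to fetch it and inspect the Content-Type header. A small helper for that check (the function name is ours, not part of any SDK):

```typescript
// Returns true when a Content-Type header value matches the Skybridge MIME
// type ChatGPT expects for app components (parameters like charset are ignored).
function isSkybridgeContentType(header: string | null): boolean {
  if (!header) return false;
  return header.split(";")[0].trim().toLowerCase() === "text/html+skybridge";
}
```

For example, after fetch-ing your /widget URL, pass res.headers.get("content-type") to this helper; if it returns false, fix the res.type() call on your widget route.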
Tool Not Being Called
Symptom: ChatGPT responds with general knowledge instead of calling your tool.
The model decides whether to use your tool based on the tool's name and description. Generic names like "process" or vague descriptions make it hard for ChatGPT to know when your tool is relevant.
Use descriptive tool names and clear descriptions:
// Specific action with a clear purpose:
{
  name: "greet_user",
  description: "Display a personalized greeting card for the user",
}

// Too generic and too vague:
{
  name: "process",
  description: "Processes input",
}
Prevention: Write tool descriptions as if you're explaining to another developer when this tool should be used. Include keywords that match likely user queries. Use TypeScript's type system to catch schema errors before runtime.
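As a sketch of that last point, keeping one interface as the source of truth for a tool's arguments lets the compiler flag drift between schema and handler:

```typescript
// One interface as the source of truth for the tool's arguments.
interface GreetArgs {
  name: string;
}

// If the handler reads a field the schema never declared (e.g. args.nickname),
// the compiler reports it instead of letting it fail silently at runtime.
function buildGreeting(args: GreetArgs): string {
  return `Hello, ${args.name}!`;
}
```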
Related Guides
OpenAI Apps SDK Python Quickstart
Build your first ChatGPT app in Python with this step-by-step guide covering MCP server setup and inline UI components.
Error handling in custom MCP servers
Implement robust error handling in MCP servers with proper error codes, logging, and recovery strategies.
Building a serverless MCP server
Deploy MCP servers on serverless platforms like AWS Lambda, Vercel, and Cloudflare Workers with StreamableHTTP.