The Quick Answer
Install the Model Context Tool Inspector from the Chrome Web Store, enable the WebMCP testing flag, and click the extension icon to open a side panel that lists all registered tools on the current page. From there you can inspect schemas, execute tools manually, or let Gemini AI call them for you.
- Install the extension from the Chrome Web Store
- Enable chrome://flags/#enable-webmcp-testing and relaunch Chrome
- Navigate to a page with WebMCP tools and click the extension icon
The extension badge shows a count of registered tools on the active tab, so you can tell at a glance whether your tools are being picked up.
Prerequisites
- Chrome 146+ (currently available in the Canary or Dev channel)
- The chrome://flags/#enable-webmcp-testing flag set to Enabled
- A page with WebMCP tools registered via navigator.modelContext.registerTool() or declarative HTML form attributes
If you haven't registered any WebMCP tools yet, start with the register your first WebMCP tool guide to get a working tool on the page before testing.
Installation
From the Chrome Web Store
The simplest path. Visit the Model Context Tool Inspector listing and click Add to Chrome. The extension appears in your toolbar immediately.
From Source
If you want to run the latest development version or inspect the extension code:
```shell
$ git clone https://github.com/beaufortfrancois/model-context-tool-inspector
$ cd model-context-tool-inspector
$ npm install
```
The npm install step bundles the @google/genai SDK (used for Gemini integration) into a single file via esbuild. Once built, load the extension in Chrome:
- Navigate to chrome://extensions/
- Toggle Developer mode (top right)
- Click Load unpacked and select the cloned directory
The extension is part of the broader GoogleChromeLabs/webmcp-tools repository, which also includes an evals CLI and demo applications.
Enabling the WebMCP Testing Flag
Before the inspector can communicate with navigator.modelContextTesting, you need to enable the experimental API:
- Open chrome://flags/#enable-webmcp-testing in Chrome
- Set the flag to Enabled
- Click Relaunch at the bottom of the page
[Screenshot: Chrome flags page with the "WebMCP for testing" flag highlighted and set to Enabled]
This flag activates navigator.modelContextTesting, the testing variant of the WebMCP API. The inspector relies on two methods from this interface — listTools() to discover registered tools and executeTool() to invoke them. The production navigator.modelContext API is used by your application code to register tools; the testing interface is what allows external tools like the inspector to query and call them.
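The split between the two interfaces can be sketched with a stand-in object. This is an illustration only: in Chrome, both APIs live on the real `navigator` behind the testing flag, and the stub below exists purely so the shape of the interaction is visible.

```javascript
// Stand-in for the two interfaces, for illustration only — in Chrome these
// live on the real `navigator` object behind the testing flag.
const registry = new Map();
const navigatorStub = {
  modelContext: {
    // Production API: pages register their tools here.
    registerTool(tool) { registry.set(tool.name, tool); },
  },
  modelContextTesting: {
    // Testing API: external tooling (like the inspector) discovers them here.
    listTools() { return [...registry.values()]; },
  },
};

// A page registers a tool...
navigatorStub.modelContext.registerTool({
  name: "search-flights",
  description: "Search for flights between two airports",
  inputSchema: { type: "object", properties: { origin: { type: "string" } } },
});

// ...and the testing interface can now see it.
console.log(navigatorStub.modelContextTesting.listTools().map(t => t.name));
// → ["search-flights"]
```

The key point: your application code only ever touches navigator.modelContext, while the inspector only ever touches navigator.modelContextTesting.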
Opening the Inspector
Click the extension icon in your Chrome toolbar. This opens the side panel — a persistent sidebar that stays visible as you navigate and interact with the page.
[Screenshot: Chrome browser with the Model Context Tool Inspector side panel open, showing the WebMCP Tools section with a table of registered tools]
The side panel has two collapsible sections:
- WebMCP Tools — lists all tools registered on the current page
- Interact with the Page — manual tool execution and Gemini AI testing
Inspecting Registered Tools
The WebMCP Tools section displays a table of every tool registered on the current page. Each row shows the tool's name, description, and inputSchema (as JSON). The table updates automatically when tools are added or removed — it listens for toolchange events from the navigator.modelContextTesting API, so you don't need to manually refresh.
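The live-update mechanism can be sketched as an event listener. The `toolchange` event name comes from the article; treating navigator.modelContextTesting as an EventTarget, and the `testingStub` stand-in, are assumptions for illustration so the sketch runs outside the browser.

```javascript
// Sketch: keep a UI table in sync by re-reading the tool list on `toolchange`.
// `testingStub` stands in for navigator.modelContextTesting.
const testingStub = new EventTarget();
testingStub.tools = [];
testingStub.listTools = function () { return this.tools; };

let tableRows = [];
testingStub.addEventListener("toolchange", () => {
  // Rebuild the table whenever tools are added or removed.
  tableRows = testingStub.listTools().map(t => `${t.name}: ${t.description}`);
});

// Simulate a page registering a tool later (e.g., after a route change).
testingStub.tools.push({ name: "search-flights", description: "Find flights" });
testingStub.dispatchEvent(new Event("toolchange"));
console.log(tableRows); // → ["search-flights: Find flights"]
```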
[Screenshot: The WebMCP Tools table showing two registered tools with their names, descriptions, and input schemas]
Double-click the table body to toggle between compact and pretty-printed JSON views. This is useful when schemas are large and you need to read nested property definitions.
Two export buttons sit below the table:
- Copy as JSON — copies the full tool list as a JSON array, useful for documentation or pasting into test scripts
- Copy as ScriptToolConfig — exports tools in Google's ScriptToolConfig format, which is handy if you're integrating with Google's AI tools
If you're dynamically registering and unregistering tools based on route changes or user actions, the table reflects these changes in real time as tools appear and disappear.
Executing Tools Manually
The Interact with the Page section includes a manual execution panel at the bottom. Select a tool from the Tool dropdown, edit the Input Arguments JSON, and click Execute Tool.
```json
{
  "origin": "SFO",
  "destination": "JFK",
  "date": "2026-04-15"
}
```

The inspector auto-generates a template for the input arguments based on the tool's JSON Schema. It maps schema types and formats to sensible placeholder values — dates get date strings, emails get email-formatted strings, enums get one of the allowed values, and so on. You'll usually only need to tweak the generated values rather than write JSON from scratch.
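The kind of schema-to-template mapping described above can be sketched as a small recursive function. The function name and the specific placeholder choices here are assumptions for illustration, not the extension's actual code.

```javascript
// Hypothetical sketch: walk a JSON Schema and emit placeholder values
// based on type, format, and enum — similar in spirit to what the
// inspector does when it pre-fills the Input Arguments textarea.
function templateFromSchema(schema) {
  if (schema.enum) return schema.enum[0]; // pick one of the allowed values
  switch (schema.type) {
    case "object": {
      const out = {};
      for (const [key, prop] of Object.entries(schema.properties ?? {})) {
        out[key] = templateFromSchema(prop);
      }
      return out;
    }
    case "array":
      return [templateFromSchema(schema.items ?? { type: "string" })];
    case "number":
    case "integer":
      return 0;
    case "boolean":
      return false;
    default: // strings, including format-specific placeholders
      if (schema.format === "date") return "2026-01-01";
      if (schema.format === "email") return "user@example.com";
      return "string";
  }
}

const flightSchema = {
  type: "object",
  properties: {
    origin: { type: "string" },
    date: { type: "string", format: "date" },
  },
};
console.log(templateFromSchema(flightSchema));
// → { origin: "string", date: "2026-01-01" }
```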
[Screenshot: The manual execution panel showing the Tool dropdown, auto-generated Input Arguments textarea, and the Execute Tool button]
Results appear in the Tool Results area below the button. If the tool throws an error or the input doesn't match the schema, you'll see the error message here. This makes it straightforward to test edge cases — pass invalid types, omit required fields, or hit error paths in your handler to verify your tool fails gracefully.
Behind the scenes, the inspector calls navigator.modelContextTesting.executeTool(name, JSON.stringify(args)), which invokes your tool's execute callback exactly as a real agent would. The only difference is that the call originates from the inspector's content script rather than a browser-native agent.
Testing with Gemini AI
The inspector includes built-in Gemini integration that lets you test your tools with a real LLM. Instead of manually constructing inputs, you describe what you want in natural language and Gemini selects the appropriate tool, fills in the arguments, and calls it.
To set it up:
- Click Set Gemini API key in the side panel
- Enter your Google AI Studio API key
- Type a natural-language prompt in the User Prompt textarea
- Click Send
For example, if your page has a search-flights tool, you could type:
```
Find flights from San Francisco to New York on April 15th
```

Gemini identifies the matching tool, constructs the input arguments from your prompt, calls the tool, and displays both the tool call and the result in the Prompt Results area. The default model is gemini-2.5-flash.
This is particularly valuable for testing how well your tool descriptions and schemas communicate their purpose to an LLM. If Gemini picks the wrong tool or passes incorrect arguments, your descriptions or schema likely need refinement. The Copy trace button exports the full conversation (including all tool calls and responses) as JSON, which is useful for debugging multi-step interactions.
After setting the API key, the extension also auto-suggests a natural-language prompt based on the tools available on the page — a quick way to verify the basics work.
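Conceptually, this integration needs to hand Gemini a description of each tool so the model can pick one. A plausible sketch of that mapping is below: Gemini-style function declarations carry a name, a description, and a JSON Schema for parameters, which line up naturally with a WebMCP tool's fields. The function name here is hypothetical, and this is not the inspector's actual bridging code.

```javascript
// Hypothetical sketch: convert WebMCP tool metadata into the shape of
// function declarations that an LLM function-calling API consumes.
function toFunctionDeclarations(tools) {
  return tools.map(t => ({
    name: t.name,
    description: t.description,
    parameters: t.inputSchema, // JSON Schema in both worlds
  }));
}

const tools = [{
  name: "search-flights",
  description: "Search for flights between two airports",
  inputSchema: { type: "object", properties: { origin: { type: "string" } } },
}];

console.log(toFunctionDeclarations(tools)[0].name); // → "search-flights"
```

This correspondence is also why tool descriptions matter so much: the description string is most of what the model sees when deciding which tool to call.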
Verifying Tools from the Console
You can also test tools directly from the browser console without the extension, using the same navigator.modelContextTesting API:
```js
// List all registered tools
const tools = navigator.modelContextTesting.listTools();
console.table(tools.map(t => ({ name: t.name, description: t.description })));

// Execute a tool
const result = await navigator.modelContextTesting.executeTool(
  "search-flights",
  JSON.stringify({ origin: "SFO", destination: "JFK", date: "2026-04-15" })
);
console.log(JSON.parse(result));
```

Note that executeTool takes and returns JSON strings, not objects. This is a common source of confusion — if your tool returns an object, you need to JSON.parse() the result. The inspector handles this conversion automatically, which is one reason to prefer it over raw console calls for everyday testing.
The console approach is useful for writing quick automated checks. For more structured testing, you can wrap these calls in a test script and run them via the console or a testing framework. For server-side MCP testing, see the MCP Inspector setup guide.
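A quick automated check of the kind described above might look like the sketch below. The `smokeTest` helper and the `testingApi` stand-in are hypothetical; in the browser console you would pass the real navigator.modelContextTesting instead.

```javascript
// Sketch of a console smoke test: verify that every expected tool is
// registered before exercising them further.
function smokeTest(testingApi, expectedNames) {
  const names = testingApi.listTools().map(t => t.name);
  for (const name of expectedNames) {
    if (!names.includes(name)) throw new Error(`missing tool: ${name}`);
  }
  return names.length; // total number of registered tools
}

// Minimal stand-in so the sketch runs outside the browser.
const testingApi = {
  listTools: () => [{ name: "search-flights" }, { name: "book-flight" }],
};

console.log(smokeTest(testingApi, ["search-flights"])); // → 2
```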
Common Issues
Extension badge shows "0" but tools are registered
This usually means the chrome://flags/#enable-webmcp-testing flag is not enabled, or Chrome needs a relaunch after enabling it. The extension relies on navigator.modelContextTesting — without the flag, this API doesn't exist and the inspector can't discover tools. Check the side panel's status area for an error message pointing to the flag.
If the flag is enabled and you still see 0, verify your tools are actually registering by running navigator.modelContextTesting.listTools() in the console. If this returns an empty array, the issue is in your registration code, not the inspector. Common culprits: the registration runs before the DOM is ready, the code is wrapped in a condition that evaluates to false, or an error during registration silently prevents it from completing.
"A tool with this name already exists" during development
Hot module replacement (HMR) re-runs your registration code without fully unloading the page, so the previous tool instance is still registered when the new code tries to register the same name. Use the safe re-registration pattern:
```js
function registerToolSafe(tool) {
  // Unregister any previous instance; ignore the error if none exists
  try { navigator.modelContext.unregisterTool(tool.name); } catch (e) {}
  navigator.modelContext.registerTool(tool);
}
```

React libraries like webmcp-react handle this automatically through cleanup in useEffect return functions. If you're using the raw API, this pattern prevents duplicate registration errors during development.
Tool executes but returns undefined
Your execute callback must explicitly return a value. If it performs work but doesn't have a return statement, the inspector shows undefined as the result. This also affects real agents — they receive nothing useful. Always return a serializable value from your handler, even if it's just { success: true }.
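A sketch of the correct pattern is below. The handler shape (arguments in, serializable value out) follows the article; the specific tool and helper names are hypothetical.

```javascript
// Sketch: an execute handler that always returns a serializable value.
async function executeBookFlight(args) {
  // ...perform the booking side effect here...
  // Without this return statement, the inspector (and any real agent)
  // would receive undefined as the result.
  return { success: true, flightId: args.flightId };
}

executeBookFlight({ flightId: "UA-123" }).then(r => console.log(r.success)); // → true
```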
Example: Testing a Multi-Tool Page
A typical debugging session might involve a page with several tools that interact with each other. Consider a travel booking page with search-flights, get-flight-details, and book-flight tools.
Start by opening the inspector and verifying all three tools appear in the table. Check that each tool's inputSchema contains the properties you expect — missing properties usually mean the schema object was constructed incorrectly in your registration code.
Test the read-only tools first with manual execution:
- Select search-flights from the dropdown
- Fill in origin, destination, and date
- Click Execute Tool and verify the results contain flight data
- Copy a flight ID from the result
- Select get-flight-details, paste the flight ID, and execute
For the state-changing book-flight tool, test the confirmation flow. If your tool uses requestUserInteraction, the browser pauses execution and shows whatever UI your callback renders (a confirm() dialog, a modal, etc.). The inspector waits for the interaction to complete before showing the result. Test both the confirmation and cancellation paths.
Then switch to Gemini mode and try a natural-language prompt like "Search for flights from SFO to JFK on April 15th and show me details for the cheapest one." Watch whether Gemini correctly chains the two tool calls. If it tries to call book-flight without being asked, your tool description may be too broad — tighten it to specify that booking requires explicit user intent.
The Copy trace button captures the entire Gemini interaction, including every tool call and response. Save these traces during development to track how schema and description changes affect agent behavior over time.
Related Guides
Register your first WebMCP tool with navigator.modelContext
How to use navigator.modelContext to register tools that AI agents can call directly from your website.
Setting up MCP Inspector for server testing
Set up MCP Inspector to test and debug your MCP servers with real-time visual interface.
Build declarative WebMCP tools with HTML form attributes
Turn existing HTML forms into MCP tools using declarative attributes — no JavaScript required.