
Run MCP Servers from the Web – Part Four
The AI tooling space is moving fast; every day brings fresh ideas, hacks, and utilities.
Much of this momentum comes from AI itself and from talented people who pair strong vision with equally strong prompting skills.
During my daily trawl for new tools I discovered a Chrome extension called MCP SuperAssistant.
Calling MCP Servers from the Web?
MCP SuperAssistant acts as a bridge between popular AI chat platforms (such as OpenAI ChatGPT and Google AI Studio) and your own MCP servers. The concept is simple yet clever.
Setup is slightly hack-ish: after installing the extension, you launch the companion proxy server, `mcp-superassistant-proxy`, with a config file that lists the MCP servers you want to expose. The proxy is itself an MCP server, forwarding requests from the extension to your MCP tools.
```shell
npx @srbhptl39/mcp-superassistant-proxy@latest --config ./mcpconfig.json
```
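For reference, a minimal `mcpconfig.json` might look like the sketch below. This assumes the proxy accepts the `mcpServers` layout common to most MCP clients; the server name and launch command are illustrative, not taken from the project's docs:

```json
{
  "mcpServers": {
    "vsc-mcp": {
      "command": "npx",
      "args": ["-y", "vsc-mcp"]
    }
  }
}
```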
Once running, the extension connects to the proxy and gains access to every underlying server.
Open a ChatGPT tab (it also works with Google AI Studio, Grok, and others) and you will see an MCP button beside the input box. Selecting it injects a system prompt that lists all available MCP tools and lets you call them directly from the chat UI.
Everything relies on carefully crafted prompts that describe each tool, its parameters, and how the model should "invoke" it (in quotes because you actually have to click the Run button yourself).
Here is the portion of the system prompt that enumerates the tools:
Example of a properly formatted tool call for ChatGPT:
```xml
<function_calls>
<invoke name="tool_name" call_id="1">
<parameter name="param1">value1</parameter>
<parameter name="param2">value2</parameter>
</invoke>
</function_calls>
```
## AVAILABLE TOOLS FOR SUPERASSISTANT
- vsc-mcp.edit_symbol
**Description**: [vsc-mcp] Edits a symbol (function, class, etc.) by name and type in a given file using LSP.
**Parameters**:
- `filePath`: Path to the file containing the symbol (string) (required)
- `name`: Name of the symbol to edit (string) (required)
- `type`: Type of the symbol. Supported types: function, method, class, interface, variable, constant, property, field (string) (required)
- `newContent`: New content for the symbol (string) (required)
- vsc-mcp.write_file
**Description**: [vsc-mcp] Creates a new file or overwrites an existing file with the provided content.
**Parameters**:
- `filePath`: Path to the file to create or overwrite (string) (required)
- `newContent`: Content to write to the file (string) (required)
The extension converts the JSON returned by `tools/list` into human-readable text.
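For context, the raw `tools/list` response being rendered looks roughly like this (abridged; the shape follows the MCP specification, using `vsc-mcp`'s `edit_symbol` as the example entry):

```json
{
  "tools": [
    {
      "name": "edit_symbol",
      "description": "Edits a symbol (function, class, etc.) by name and type in a given file using LSP.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "filePath": { "type": "string" },
          "name": { "type": "string" },
          "type": { "type": "string" },
          "newContent": { "type": "string" }
        },
        "required": ["filePath", "name", "type", "newContent"]
      }
    }
  ]
}
```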
One obvious gap: it does not include example calls for every tool, leaving the model to guess and sometimes hallucinate.
Even with automatic prompt generation, reliability is uneven: malformed JSON, missing or extra fields, and imaginary tool names still slip through.
The Experiment
I was thrilled by the idea!
Until major desktop clients (other than Claude) natively support MCP, this extension seems like the perfect workaround: pair it with the VSC-MCP Docker container and you can edit code from any browser.
Reality set in quickly. I modified the Dockerfile so the container runs `mcp-superassistant-proxy` alongside the pre-installed `repomix` and `vsc-mcp` servers.
Startup is smooth (just install the extension and start the container), but frustrations emerge as soon as you ask AI Studio or ChatGPT to execute tools:
- The extension manipulates the page's HTML. A minor markup change breaks everything.
- Each MCP tool call pauses the chat; you must hit the Run button, wait for completion, then send the result as a new message.
- The model must output/write strict JSON; any formatting glitch triggers an error.
- If a tool returns raw JSON, the model has to prettify it without code execution, which is a bit tough.
- Large system prompts are required. With 20 tools, the explanatory text alone consumes a big chunk of context.
Formatting problems dominate: escaped characters slip into code blocks, or XML tags get mangled, and the call fails. Current models are better at natural language than at precise API scaffolding.
By contrast, Claude Desktop (and also the OpenAI Agents SDK and Google ADK) use three clean layers:
- Discovery layer – the client fetches `tools/list` and `prompts/list`, then caches the JSON schemas.
- Injection layer – immediately before each request, the client copies the cached objects into the `tools` field of the API call.
- Execution layer – when the model returns `tool_use`, the agent looks up the name, calls `tools/call`, and posts the result back so the model can continue.
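The three layers can be sketched in a few lines of Python. Here `mcp_client`, `tools_list`, and `tools_call` are hypothetical stand-ins for whatever MCP client library you use; the shape of the request and response dicts is illustrative:

```python
def discovery(mcp_client):
    # Discovery layer: fetch the tool schemas once and cache them by name.
    return {t["name"]: t for t in mcp_client.tools_list()}

def build_request(user_message, tool_cache):
    # Injection layer: copy the cached schemas into the `tools` field
    # of the outgoing API call.
    return {
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "name": t["name"],
                "description": t["description"],
                "parameters": t["inputSchema"],
            }
            for t in tool_cache.values()
        ],
    }

def execute(model_response, mcp_client):
    # Execution layer: on a `tool_use` response, look up the tool by name,
    # call tools/call, and return the result for the next model turn.
    if model_response.get("type") == "tool_use":
        return mcp_client.tools_call(
            model_response["name"], model_response["input"]
        )
    return model_response.get("content")
```

No clicking Run, no prompt-based tool descriptions: the schemas travel in a dedicated field the API already understands.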
With projects like MCP-Bridge you can translate the Model Context Protocol into the Function-Tool protocol that OpenAI and Google already understand.
That path works, but it plunges you back into metered-API land; budgets are still tight, so API credits remain a problem.
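The translation such a bridge performs is essentially a schema mapping. A minimal sketch: the OpenAI function-tool shape (`{"type": "function", "function": {...}}`) is the real one, while the helper name is mine:

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    # Map an MCP tool entry (name / description / inputSchema)
    # onto the OpenAI function-tool shape. MCP's inputSchema is already
    # JSON Schema, which is exactly what `parameters` expects.
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": tool["inputSchema"],
        },
    }
```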
Takeaway
MCP SuperAssistant is an exciting proof of concept that unlocks browser-based access to your private MCP servers.
Yet the current implementation fights HTML fragility, strict JSON requirements, and large prompt overheads.
For everyday use I still lean on Claude Desktop: $20/month, unlimited MCP server calls.
I'll keep experimenting and watching for the day when all major AI chat platforms speak MCP out of the box.