
Enhancing VSC-MCP's Capabilities with a Headless VS Code - Part Three
The main drawback of current AI-powered IDEs like Cursor, Windsurf, or Copilot is that they are based on prompt engineering and rely on closed-source tools for code editing.
Each tool is proprietary and tailored to a specific product, and each must be explicitly mentioned in the prompt for the AI model to use it effectively.
Take this example from Devin AI: it's a highly customized instruction set designed specifically for their environment:
<go_to_definition path="/absolute/path/to/file.py" line="123" symbol="symbol_name"/>
Description: Use the LSP to find the definition of a symbol in a file. Useful when you are unsure about the implementation of a class, method, or function but need the information to make progress.
Parameters:
- path (required): absolute path to file
- line (required): The line number that the symbol occurs on.
- symbol (required): The name of the symbol to search for. This is usually a method, class, variable, or attribute.
With the introduction of the Model Context Protocol (MCP), we can now provide an AI client with tools that perform a wide range of actions, comparable to or even exceeding those bundled with the above-mentioned IDEs.
Even better, they're open-source and transparent.
What's most valuable is that MCP offers a standardized way for the AI to discover available tools and understand how to use them. Once an MCP server is attached to the client, the protocol provides APIs to list available tools, describe typed parameters, offer example prompts, and even share user-defined scenarios.
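For instance, here's a rough sketch of that discovery step using the official TypeScript MCP SDK (the client name is a placeholder, and the paths match the config shown later in this post):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the MCP server over stdio, just like an AI client would.
const transport = new StdioClientTransport({
  command: "bun",
  args: ["/path/to/the/cloned/vsc-mcp/src/index.ts"],
});

const client = new Client({ name: "demo-client", version: "1.0.0" });
await client.connect(transport);

// tools/list returns every tool with its name, description,
// and a JSON Schema describing its typed parameters.
const { tools } = await client.listTools();
for (const tool of tools) {
  console.log(`${tool.name}: ${tool.description}`);
}
```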
This means our system prompt becomes much more intuitive and lightweight, for example:
You are a code editing assistant: You can fulfill edit requests and chat with the user about code or other questions.
For each user task, you should:
1. Use the `repomix` mcp tool to get the codebase directory and files structure.
2. Use the `vsc-mcp` tools to edit files (read, find, edit, write).
3. After implementing the changes, use the `get_errors` MCP tool to check for errors and fix them.
4. Repeat steps 2 and 3 until no errors are found.
What changed in VSC-MCP?
In the first part, I introduced the VSC-MCP project. In the second part, I demonstrated how to use it. Up to that point, VSC-MCP only supported TypeScript/JavaScript projects, as it relied on a single `typescript-language-server` running under the hood.
In an effort to build a more comprehensive MCP server that supports all VS Code features across multiple languages, I extended VSC-MCP to use a headless version of VS Code, and the results have been really promising.
Now, we launch a headless VS Code instance powered by GitPod’s OpenVSCode Server. This setup brings in the full VS Code extension ecosystem, packages the needed languages and dependencies inside a Docker container, and allows VSC-MCP to communicate using the Language Server Protocol (LSP) with the VSC-MCP extension pre-installed in the container.
This custom VSC-MCP extension serves as middleware: it intercepts MCP tool requests and routes them to the VS Code instance. The instance then uses the appropriate language extension's built-in LSP services and APIs, such as diagnostics, symbol definitions, and more.
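To give a feel for the middleware idea, here's a minimal, hypothetical sketch of an extension that answers diagnostic requests over a socket using native VS Code APIs (the real VSC-MCP extension speaks LSP and handles far more than this):

```typescript
import * as net from "net";
import * as vscode from "vscode";

export function activate(context: vscode.ExtensionContext) {
  // Listen on the port VSC-MCP connects to and answer each request
  // with whatever the active language extensions have reported.
  const server = net.createServer((socket) => {
    socket.on("data", (data) => {
      const { uri } = JSON.parse(data.toString()); // e.g. "file:///workspace/src/main.rs"

      // Delegates to the language extension that owns this file type:
      // rust-analyzer for .rs files, the TypeScript server for .ts, etc.
      const diagnostics = vscode.languages.getDiagnostics(vscode.Uri.parse(uri));

      socket.end(JSON.stringify(diagnostics.map((d) => ({
        message: d.message,
        severity: d.severity,
        range: d.range,
      }))));
    });
  });

  server.listen(5007);
  context.subscriptions.push({ dispose: () => server.close() });
}
```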
How does it work now?
Here's the updated flow behind the scenes for checking errors in a file:
- The AI client (e.g. Claude Desktop) calls the `get_errors` tool using the MCP JSON protocol.
- VSC-MCP connects to the OpenVSCode container via its buddy extension on port 5007.
- It then issues two standard LSP calls: one to open the document (`textDocument/didOpen`) and one to retrieve diagnostics (`textDocument/diagnostic`); both are sketched right after this list.
- The VSC-MCP extension handles both calls using native VS Code APIs. For example, when calling `vscode.languages.getDiagnostics`, it relies on the appropriate language extension based on the file type. So if it's a Rust file, it'll automatically use the Rust Analyzer extension.
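At the wire level, those two calls look roughly like this (the framing helper, file path, and file contents are illustrative; the real requests carry a few more fields):

```typescript
import * as net from "net";

// LSP messages are JSON-RPC bodies prefixed with a Content-Length header.
function lspMessage(msg: object): string {
  const body = JSON.stringify(msg);
  return `Content-Length: ${Buffer.byteLength(body)}\r\n\r\n${body}`;
}

const socket = net.connect(5007, "localhost");

// 1. Open the document so the language extension starts analyzing it.
socket.write(lspMessage({
  jsonrpc: "2.0",
  method: "textDocument/didOpen",
  params: {
    textDocument: {
      uri: "file:///workspace/src/main.rs",
      languageId: "rust",
      version: 1,
      text: 'fn main() { let x: i32 = "oops"; }',
    },
  },
}));

// 2. Pull diagnostics for that document.
socket.write(lspMessage({
  jsonrpc: "2.0",
  id: 1,
  method: "textDocument/diagnostic",
  params: { textDocument: { uri: "file:///workspace/src/main.rs" } },
}));

socket.on("data", (chunk) => console.log(chunk.toString()));
```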
How to run the "new" VSC-MCP
The biggest infrastructure change is the requirement to use the OpenVSCode Server, a headless, browser-based version of the VS Code IDE published by GitPod.
To run it, you'll need Docker. I made minor adjustments to the default `Dockerfile` and added a `docker-compose.yml` file to streamline the setup.
One requirement for now: you must specify the path to the project you want to work with.
PROJECT_PATH=/path/to/your/project docker-compose up
Once the container is running, it will start a VS Code instance with the default extensions installed. You can even open http://localhost:3000 in your browser to open files and watch code edits happen live.
Another key change is the introduction of the `USE_VSCODE_LSP` environment variable in the MCP server config. When set to `true`, VSC-MCP routes requests to the Dockerized VS Code instance rather than spinning up a local language server.
Here’s how to add it to your AI client configuration:
{
  "mcpServers": {
    "vsc-mcp": {
      "command": "bun",
      "args": ["/path/to/the/cloned/vsc-mcp/src/index.ts"],
      "env": {
        "USE_VSCODE_LSP": "true",
        "LOG_DIR": "/path/to/your/vsc-mcp/logs",
        "ALLOWED_DIRECTORIES": "/path/to/your/project"
      }
    }
  }
}
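Conceptually, the flag boils down to a branch like the following (the function and names here are illustrative, not VSC-MCP's actual internals):

```typescript
import { spawn } from "child_process";
import * as net from "net";
import { Duplex } from "stream";

// Either path yields a duplex stream carrying LSP traffic.
function createLspStream(): Duplex {
  if (process.env.USE_VSCODE_LSP === "true") {
    // Talk to the VSC-MCP extension inside the OpenVSCode container.
    return net.connect(5007, "localhost");
  }
  // Otherwise spawn a local language server and bridge its stdio.
  const proc = spawn("typescript-language-server", ["--stdio"]);
  return Duplex.from({ readable: proc.stdout, writable: proc.stdin });
}
```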
The `ALLOWED_DIRECTORIES` setting works just like before: it's a safeguard that restricts the scope of MCP tools to a defined directory. It's an important layer of protection, especially in the off chance that the AI goes rogue and tries to access something like your BTC wallet's private key.
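As a sketch of the kind of guard this implies (VSC-MCP's actual check may differ, and the comma-separated format is an assumption), every tool call resolves its target path and rejects anything outside the allowed roots:

```typescript
import * as path from "path";

// Assumes ALLOWED_DIRECTORIES is a comma-separated list of root directories.
const allowed = (process.env.ALLOWED_DIRECTORIES ?? "")
  .split(",")
  .filter(Boolean)
  .map((dir) => path.resolve(dir));

// Resolve the requested path and make sure it stays inside an allowed root.
function assertAllowed(target: string): string {
  const resolved = path.resolve(target);
  const ok = allowed.some(
    (dir) => resolved === dir || resolved.startsWith(dir + path.sep)
  );
  if (!ok) {
    throw new Error(`Access denied: ${resolved} is outside ALLOWED_DIRECTORIES`);
  }
  return resolved;
}
```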
Conclusion
Yes, there's now one extra step, this Docker thing, but the gains are well worth it.
With this setup, we can now leverage any VS Code API, regardless of the programming language we work with, and create highly capable MCP tools around them.
While we’re still waiting on ChatGPT to natively support MCP servers, this works great with Claude Desktop, letting the AI interact with and modify your codebase without relying on a traditional IDE.