
Model Context Protocol (MCP)

Bet

An open standard for connecting AI applications to external tools, data sources, and services


Metrics

Learning UX: 3/5
Potential: 5/5
Impact: 4/5
Ecosystem: 3/5
Market Standard: 4/5
Maintainability: 2/5

What is it

Model Context Protocol (MCP) is an open-source, JSON-RPC 2.0-based standard for connecting AI applications to external tools and data sources. Originated by Anthropic and now multi-stakeholder, it defines a client-server architecture where an MCP host (like Claude Code or VS Code) spawns MCP clients that connect to MCP servers — programs that expose tools (executable functions), resources (data sources), and prompts (reusable templates) to the AI. Servers can run locally via stdio or remotely via Streamable HTTP, with SDKs available in ten languages including Python, TypeScript, Go, Java, and Rust.
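The wire format is plain JSON-RPC 2.0, which is easy to see with raw messages. A rough sketch (the method names `tools/list` and `tools/call` come from the MCP spec; the tool itself and the payload details are simplified illustrations):

```python
import json

# JSON-RPC 2.0 request asking the server which tools it exposes
# ("tools/list" is a standard MCP method).
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A simplified server response: each tool carries a name, a description,
# and a JSON Schema describing its arguments.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_posts",  # hypothetical example tool
                "description": "Full-text search over blog posts",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# The host then invokes the tool with "tools/call".
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "search_posts", "arguments": {"query": "mcp"}},
}

# Over the stdio transport, each message travels as one line of JSON.
print(json.dumps(call_request))
```

The same messages flow over Streamable HTTP for remote servers; only the transport changes, not the contract.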

My Opinion

MCP is a promising standard that makes the right architectural bet, but it’s still early in the ways that actually hurt day-to-day usage.

The Protocol Is Right, the Config Is Not

The core idea is solid: define a universal contract between AI hosts and external capabilities, so you write an integration once and it works everywhere. In practice, this standardization stops at the protocol layer. The configuration story is a mess. Setting up an MCP server in Claude Code means editing .mcp.json. Doing the same in GitHub Copilot requires a different format in VS Code settings. OpenCode has its own approach. The protocol is standardized; the developer experience is not. You end up with fragmented setup docs for every host, which undermines the “write once, work everywhere” promise and makes the ecosystem feel less mature than the star count suggests.
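To make the fragmentation concrete, here is the same hypothetical server (`my-image-gen-server` is a made-up name) configured twice. The exact keys vary by host and version, so treat these as illustrative shapes rather than authoritative references. Claude Code reads a project-level `.mcp.json`:

```
{
  "mcpServers": {
    "image-gen": {
      "command": "npx",
      "args": ["-y", "my-image-gen-server"]
    }
  }
}
```

VS Code uses its own file and a different top-level key:

```
{
  "servers": {
    "image-gen": {
      "command": "npx",
      "args": ["-y", "my-image-gen-server"]
    }
  }
}
```

Same server, same protocol, two incompatible configs, and that's before you add a third host.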

The Maintenance Problem Is Underappreciated

MCP’s biggest unresolved tension is the update story, and it cuts both ways. Local MCP servers — the ones running on your machine via stdio — have no good mechanism to tell you they’re outdated or to update themselves. You install a server, forget about it, and three months later you’re running a stale binary that nobody reminds you to upgrade. Remote MCP servers solve this problem but create a worse one: the server can silently update itself without your knowledge or consent. You trust a tool based on its initial behavior, and then a malicious or simply careless change later does real damage. There’s no MCP equivalent of a package lock file or a browser-extension permissions review. This is a genuine security gap that the ecosystem hasn’t figured out yet, and it matters more as agents are granted wider permissions.
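Client-side mitigations are imaginable even if no host ships them today. A hypothetical sketch, my own construction and not part of any MCP SDK: pin a hash of the server's tool definitions at install time and flag any later `tools/list` response that differs, a crude lock-file analogue:

```python
import hashlib
import json

def tool_fingerprint(tools: list) -> str:
    """Stable hash of a server's tool definitions (names, descriptions,
    input schemas), analogous to a lock-file entry."""
    canonical = json.dumps(sorted(tools, key=lambda t: t["name"]), sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Tool list as reported when the server was first trusted (hypothetical tool).
pinned = [{
    "name": "generate_image",
    "description": "Render an image from a prompt",
    "inputSchema": {"type": "object",
                    "properties": {"prompt": {"type": "string"}}},
}]

# Tool list reported today -- the description quietly changed.
current = [{
    "name": "generate_image",
    "description": "Render an image and upload it",
    "inputSchema": {"type": "object",
                    "properties": {"prompt": {"type": "string"}}},
}]

if tool_fingerprint(current) != tool_fingerprint(pinned):
    print("tool definitions drifted since install; re-review before running")
```

This only catches changes to the declared contract, not to server behavior behind an unchanged schema, which is why it is a mitigation and not a fix.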

The Potential Is Real

Despite these friction points, the long-term trajectory is hard to bet against. MCP is already the de facto standard for AI tool integration — adopted natively by Claude, VS Code Copilot, Cursor, Gemini CLI, and most new agentic frameworks. The more interesting angle is what it could become: a complement or even a partial replacement for REST. Future APIs will want to be agent-compatible out of the box, which means exposing MCP endpoints alongside or instead of raw HTTP endpoints. An API that describes its own capabilities in a structured, machine-readable way that an agent can discover and invoke without bespoke integration code — that’s a significant shift in how we think about service interfaces. I used it to wire image generation into my publishing workflow as described in Automate with MCP, and the composability is genuinely compelling once you’re past the setup friction.
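The "discover and invoke without bespoke integration code" claim can be made concrete with a toy dispatcher. This is my own illustration, not SDK code: given any tool definition carrying a JSON Schema, a generic agent loop can check arguments and call the tool without knowing it in advance (the tool and data here are hypothetical):

```python
# Toy generic invoker: the agent knows nothing about the tool ahead of
# time; everything it needs is in the tool's self-description.
def invoke(tool: dict, impl, arguments: dict):
    schema = tool["inputSchema"]
    # Minimal structural check against the declared JSON Schema
    # (a real client would run a full validator).
    for required in schema.get("required", []):
        if required not in arguments:
            raise ValueError(f"missing argument: {required}")
    return impl(**arguments)

# A self-describing tool, as a tools/list response would carry it.
tool = {
    "name": "search_posts",  # hypothetical tool
    "description": "Full-text search over blog posts",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

posts = ["Automate with MCP", "Hidden peaks", "On REST"]

def search_posts(query: str) -> list:
    return [p for p in posts if query.lower() in p.lower()]

print(invoke(tool, search_posts, {"query": "mcp"}))  # → ['Automate with MCP']
```

Contrast this with REST, where each new API typically needs hand-written client code before an agent can use it at all.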

A glimpse of where this is heading: WebMCP is a companion W3C Community Group draft (published February 2026, co-edited by Microsoft and Google engineers) that brings the same model to the browser. It defines a navigator.modelContext API so web pages can expose JavaScript functions as typed MCP tools directly in client-side script, with no backend server required. Instead of scraping HTML, an agent can call searchPosts(query) or checkout() against a structured tool contract declared by the page itself. It’s still a Community Group draft rather than a W3C standard, yet it’s already in early preview in Chrome Canary (around version 146), which is fast for a spec proposal. With Microsoft and Google co-authoring and Chrome already shipping a preview, this is one to watch closely.

Conclusion

MCP is the right abstraction at the right time, and it’s winning. The protocol itself is solid and adoption is accelerating across every major agentic tool. The friction is real, though: config fragmentation across hosts makes setup a repeated chore, and neither the local nor remote server model has a satisfying answer for lifecycle management and security. Worth adopting now for AI tool integration — just go in with clear expectations about the operational overhead, and be deliberate about which remote servers you trust.
