The protocol is in very, very early stages and there are a lot of things that still need to be figured out. That being said, I can commend Anthropic on being very open to listening to the community and acting on the feedback. The authorization spec RFC, for example, is a coordinated effort between security experts at Microsoft (my employer), Arcade, Hellō, Auth0/Okta, Stytch, Descope, and quite a few others. The folks at Anthropic set the foundation and welcomed others to help build on it. It will mature and get better.
[1]: https://github.com/modelcontextprotocol/modelcontextprotocol...
>makes it easier to accidentally expose sensitive data.
So does the "forward" button on emails. Maybe be more careful about how your system handles sensitive data. How about:
>MCP allows for more powerful prompt injections.
This just touches on the wider principle that developers should abide by generally: only work with trusted service providers. As for:
>MCP has no concept or controls for costs.
Rate limit and monitor your own usage. You should anyway. It's not the road's job to make you follow the speed limit.
Finally, many of the other issues seem to be more about coming to terms with delegating to AI agents generally. In any case it's the developer's responsibility to manage all these problems within the boundaries they control. No API should have that many responsibilities.
The essay misses the two biggest problems with MCP:
1. It does not enable AI agents to functionally compose tools.
2. It should not exist in the first place.
LLMs already know how to talk to every API that documents itself with OpenAPI specs; the missing piece is authorization. Why not just let the AI make HTTP requests, with authorization applied per endpoint? And indeed, people are wrapping existing APIs with thin MCP tools.

Personally, the most annoying part of MCP is the lack of support for streaming tool call results. Tool calls have a single request/response pair, which means long-running tool calls can't emit data as it becomes available – the client has to repeat a tool call multiple times to paginate. IMO, MCP could have used gRPC, which is designed for streaming. You'd also want an onComplete trigger.
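To make the pagination point concrete, here's a rough sketch of the dance a client has to do today, re-calling the tool with a cursor because the server has no way to stream (the tool and field names are invented for illustration):

    # Hypothetical client-side pagination loop: each iteration is a
    # *separate* tool call over the wire, because a tool call is a
    # single request/response pair.
    def read_all_output(call_tool) -> str:
        chunks, cursor = [], None
        while True:
            result = call_tool("tail_logs", {"cursor": cursor})
            chunks.append(result["data"])
            cursor = result.get("next_cursor")
            if cursor is None:  # we only learn we're done by asking again
                return "".join(chunks)

With streaming, the server could emit chunks as they are produced and signal completion exactly once.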
I'm the author of Modex[^1], a Clojure MCP library, which is used by Datomic MCP[^2].
[^1]: Modex: Clojure MCP Library – https://github.com/theronic/modex
[^2]: Datomic MCP: Datomic MCP Server – https://github.com/theronic/datomic-mcp/
A large problem in this article stems from the fact that the LLM may take actions I do not want it to take. But there are clearly two types of actions the LLM can take: those I want it to take on its own, and those I want it to take only after prompting me.
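A minimal sketch of that split, assuming a simple allowlist (the tool names are hypothetical):

    # Tools in AUTO_APPROVE run unattended; everything else requires
    # an explicit confirmation from the user before executing.
    AUTO_APPROVE = {"search_web", "read_file"}

    def run_tool(name: str, args: dict, execute) -> str:
        if name not in AUTO_APPROVE:
            answer = input(f"Allow {name}({args})? [y/N] ")
            if answer.strip().lower() != "y":
                return "Tool call rejected by user."
        return execute(name, args)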
There may come a time when I want the LLM to run a business for me, but that time is not yet upon us. For now I do not even want to send an e-mail generated by AI without vetting it first.
But the author rejects the solution of simply prompting the user because "it’s easy to see why a user might fall into a pattern of auto-confirmation (or ‘YOLO-mode’) when most of their tools are harmless".
Sure, and people spend more on cards than they do with cash and more on credit cards than they do on debit cards.
But this is a psychological problem, not a technological one!
This isn't necessarily the fault of the spec itself, but how most clients have implemented it allows for some pretty major prompt injections.
[1] https://invariantlabs.ai/blog/mcp-security-notification-tool... [2] https://www.bernardiq.com/blog/resource-poisoning/
I wrote an MCP Server (called Codebox[1]) which starts a Docker container with your project code mounted. It works quite well, and I've been using it with LibreChat and VS Code. In my experience, agents save 2x the time (over using an LLM traditionally) and involve less typing, but at roughly 3x the cost.
The idea is to make the entire Unix toolset available to the LLM (such as ls, find), along with project specific tooling (such as typescript, linters, treesitter). Basically you can load whatever you want into the container, and let the LLM work on your project inside it. This can be done with a VM as well.
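A minimal sketch of the shape of this (not Codebox itself; the container name and timeout are made up), using the FastMCP helper from the Python SDK:

    # Expose one MCP tool that runs a shell command inside a container
    # where the project is mounted. Assumes a container named
    # "project-sandbox" is already running.
    import subprocess
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("sandbox")

    @mcp.tool()
    def exec_in_project(command: str) -> str:
        """Run a shell command in the project container; return its output."""
        result = subprocess.run(
            ["docker", "exec", "project-sandbox", "sh", "-c", command],
            capture_output=True, text=True, timeout=60,
        )
        return result.stdout + result.stderr

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default

The LLM gets the whole Unix toolset through a single tool, and the blast radius is bounded by the container.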
I've found this workflow (agentic, driven through a chat-based interface) to be more effective than something like Cursor. Will do a Show HN sometime next week.
I mean the whole AI personal assistant shebang from all possible angles.
Imagine, for example, if booking.com built an MCP server allowing you to book a hotel room and query all offers in an area for a given time range: quickly, effortlessly, with a rate limit of 100 requests/caller/second, full featured, no hiding or limiting of data.
That would essentially be asking them to just offer you their internal databases, remove their ability to show you ads, remove the possibility to sell advertisers better search rankings, etc.
It would be essentially asking them to keel over and die, and voluntarily surrender all their moat.
But imagine for a second they did do that. You get the API, all the info is there.
Why do you need AI then?
Let's say you want to plan a trip to Thailand with your family. You could use the fancy AI to do it for you, or you could build a stupid frontend with minimal natural language understanding.
It would be essentially a smart search box, where you could type in 'book trip to Thailand for 4 people, 1 week, from July 5th', and then it would parse your query, call out to MCP, and display the listings directly to you, where you could book with a click.
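To illustrate how little "AI" that takes, here's a toy sketch (the field names and downstream MCP call are imaginary):

    import re

    def parse_booking_query(q: str) -> dict:
        """Pull destination, party size, duration, and start date out of
        a simple booking query with plain regexes."""
        dest = re.search(r"trip to (\w+)", q)
        people = re.search(r"(\d+) people", q)
        weeks = re.search(r"(\d+) week", q)
        start = re.search(r"from (\w+ \d+\w*)", q)
        return {
            "destination": dest.group(1) if dest else None,
            "guests": int(people.group(1)) if people else 2,
            "nights": 7 * int(weeks.group(1)) if weeks else 7,
            "start": start.group(1) if start else None,
        }

    print(parse_booking_query("book trip to Thailand for 4 people, 1 week, from July 5th"))
    # {'destination': 'Thailand', 'guests': 4, 'nights': 7, 'start': 'July 5th'}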
The AI value add here is minimal, even non-existent.
This applies to every service under the sun: you're essentially creating a second Internet just for AIs, without all the BS advertising, fluff, clout chasing, and time wasting. I, as a human, am dying to get access to that internet.
Edit: I'm quite sure this AI MCP future is going to be enshittified in some way.
Being pretty close to OAuth 1.0 and the group that shaped it, I've seen how new standards emerge, and I think it's been so long since new standards mattered that people have forgotten how they happen.
I was one of the first people to criticize MCP when it launched (my comment on the HN announcement specifically mentioned auth) but I respect the groundswell of support it got, and at the end of the day the standard that matters is the one people follow, even if it isn’t the best.
"... MCP tends to crowd the model context with too many options. There doesn’t seem to be a clear way to set priorities or a set of good examples to expose MCP server metadata–so your model API calls will just pack all the stuff an MCP server can do and shove it into the context, which is both wasteful of tokens and leads to erratic behavior from models."
I feel like I hear a great many stories of companies integrating with MCP, and far fewer stories from users about how it helps them.
I have yet to see a use case that wouldn't be better served by a plain HTTP API. I understand the need to standardize some conventions around this, but at the heart of it, all "tool" use boils down to:
1. an endpoint to expose capabilities / report the API schema
2. other endpoints ("tools") to expose functionality
Want state ("resources")? Put a database or some random in-memory data structure behind an API endpoint. Want "prompts"? That's just a special case of a tool.
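A sketch of that claim, using FastAPI (my framework choice for illustration, nothing MCP prescribes):

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    # The "discovery" endpoint: report what tools exist and their schemas.
    TOOLS = {"echo": {"params": {"text": "string"}, "description": "Echo text back."}}

    @app.get("/capabilities")
    def capabilities():
        return TOOLS

    class EchoIn(BaseModel):
        text: str

    # A "tool" is just another endpoint.
    @app.post("/tools/echo")
    def echo(body: EchoIn):
        return {"result": body.text}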
Fundamentally (like most everyone else experimenting with this tech), I need an API that returns some text and maybe images. So why did I just lose two days debugging the Python MCP SDK, and discovering that its stdio transport can't send more than a few KB without crashing the server?
If only there was a stateless way to communicate data between a client and a server, that could easily recover from and handle errors...
Doesn't solve a pressing problem that can't be solved via a few lines of code.
Overly abstract.
Tons of articles trying to explain its advantages, yet all somehow fail.
I don't really see us at the point yet where LLMs are so good that I'd throw out my specialized LLM tools and do everything in one Claude Desktop window. It just doesn't work generically enough.
Also... if you end up building something custom, you end up having to reimplement tool calling anyway. MCP really is just for user-facing chat agents, which is only one slice of AI applications. It's not as generically applicable as implied.
In my experience, many less-technical folks have started using MCP, and that makes security issues all the more relevant. This audience often lacks intuition around security best practices, so it's definitely important to raise awareness of them.
We based Xops (https://xops.net) on OpenRPC for this exact reason (disclosure: we are the OpenRPC founders). It requires defining the result schema, not just the params, which helps you plan how one step's outputs connect to another step's inputs. That feels necessary for building complex workflows and agents reliably.
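For illustration, a minimal OpenRPC method descriptor (names invented) declares the result schema right next to the params, so a planner can line up one call's output with the next call's input:

    {
      "name": "getWeather",
      "params": [
        { "name": "city", "schema": { "type": "string" } }
      ],
      "result": {
        "name": "forecast",
        "schema": {
          "type": "object",
          "properties": { "tempC": { "type": "number" } }
        }
      }
    }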
> The protocol has a very LLM-friendly interface, but not always a human friendly one.
Similar to the people asking "why not just use the API directly?", I have another question: why not just use the CLI directly? LLMs are trained on natural language, and CLIs are an extremely common solution for client/server interactions in a human-readable, human-writable way (one that can be easily traversed down subcommands).
For instance, instead of using the GitHub MCP server, why not just use the `gh` CLI? It's super easy to generate the help text and feed it into the LLM, super easy to let the user inspect the command before running it, and it already provides a sane exposure of the REST APIs. The human and the LLM can work the same way, using the exact same interface.
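A rough sketch of that loop (ask_llm is a stand-in for whatever chat-completion API you use, not a real library call):

    import subprocess

    def ask_llm(prompt: str) -> str:
        # Stand-in for any chat-completion call; wire up your own.
        raise NotImplementedError

    def propose_gh_command(task: str) -> str:
        # Feed the CLI's own --help output to the model as the "schema".
        help_text = subprocess.run(["gh", "--help"],
                                   capture_output=True, text=True).stdout
        return ask_llm(f"Given this CLI help:\n{help_text}\n"
                       f"Reply with one `gh` command for: {task}")

    cmd = propose_gh_command("list my open pull requests")
    if input(f"Run `{cmd}`? [y/N] ").lower() == "y":
        subprocess.run(cmd, shell=True)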
MCP is not a UI. Someone here seems quite confused about what MCP is.
MCP has no security? Some seem not to know that stdio is secure, and for SSE/HTTP there is already a spec: https://modelcontextprotocol.io/specification/2025-03-26/bas....
MCP can run malicious code? That applies to any app you download. How is this an MCP issue? It happens with VS Code extensions and npm libs, but people blame MCP.
MCP transmits unstructured text by design? That's totally funny. It's the tool that decides what to respond with. And the dialogue is quite...
I'm starting to feel this post is a troll.
I stopped reading; it wasn't even worth continuing past the prompt injection part and so on.
rkt was better than Docker; the latter won.
${TBD} may be better than MCP; my bet is on MCP.
I have to think the enthusiasm is coming mostly from the vibe-coding snake-oil salespeople who seem to be infecting every software company right now.
I can imagine a plugin-based server where the plugins are applications and AIs that all use MCP to interact. The server would add a discovery protocol.
That seems like the perfect use for MCP.
There's a whole section on how people can do things like analyse a combination of Slack messages, and how they might use that information. That is more an argument that agents are dangerous. You can think MCP is a good spec that lets you create dangerous things, but conflating these arguments under "MCP bad" is disingenuous.
I'd rather have more details and examples of the problems with the spec itself. "You can use it to do bad things" doesn't cut it. I can use HTTP and SSH to do bad things too, so it's more interesting to show how Eve might use MCP to do malicious things to Alice or Bob, who are trying to use MCP as intended.
No, it's not fair at all. You can't add security afterwards like spreading icing on a baked cake. If you forgot to add sugar to the batter, there's not enough buttercream in the world to fix it.
The only upside to these technologies being shotgun implemented and promoted is that they'll inevitably lead to a failure that can't be pushed under the rug (and will irreversibly damage the credibility of AI usage in business).
> We are so back
MCP calls itself a “protocol,” but let’s be honest—it’s a framework description wrapped in protocol cosplay. Real protocols define message formats and transmission semantics across transport layers. JSON-RPC, for example, is dead simple, dead portable, and works no matter who implements it. MCP, on the other hand, bundles prompt templates, session logic, SDK-specific behaviors, and application conventions—all under the same umbrella.
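For contrast, here is a complete JSON-RPC 2.0 exchange, adapted straight from that spec's own examples:

    --> {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1}
    <-- {"jsonrpc": "2.0", "result": 19, "id": 1}

That's the entire contract; everything else is left to the application.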
As an example, I evidently need to install something called "uv", using a piped script pulled in from the Internet, to "run" the tool, which is done by putting this into a config file for Claude Desktop (which then completely hosed my Claude Desktop):
    {
      "mcpServers": {
        "weather": {
          "command": "uv",
          "args": [
            "run",
            "--with",
            "fastmcp",
            "fastmcp",
            "run",
            "C:\\Users\\kord\\Code\\mcptest\\weather.py"
          ]
        }
      }
    }
They (the exuberant authors) do mention transport—stdio and HTTP with SSE—but that just highlights the confusion we are seeing here. A real protocol doesn't care how it's transported, or it defines the transport clearly. MCP tries to do both and ends up muddying the boundaries. And the auth situation? It waves toward OAuth 2.1, but offers almost zero clarity on implementation, trust delegation, or actual enforcement. It's a rat's nest waiting to unravel once people start pushing for real-world deployments that involve state, identity, or external APIs with rate limits and abuse vectors.

This feels like yet another centralized spec written for one ecosystem (TypeScript AI crap), claiming universality without earning it.
And let’s talk about streaming vs formatting while we’re at it. MCP handwaves over the reality that content coming in from a stream (like SSE) has totally different requirements than a local response. When you’re streaming partials from a model and interleaving tool calls, you need a very well-defined contract for how to chunk, format, and parse responses—especially when tools return mid-stream or you’re trying to do anything interactive.
Right now, only a few clients are actually supported (Anthropic’s Claude, Copilot, OpenAI, and a couple local LLM projects). But that’s not a bug—it’s the feature. The clients are where the value capture is. If you can enforce that tools, prompts, and context management only work smoothly inside your shell, you keep devs and users corralled inside your experience. This isn’t open protocol territory; it’s marketing. Dev marketing dressed up as protocol design. Give them a “standard” so they don’t ask questions, then upsell them on hosted toolchains, orchestrators, and AI-native IDEs later. The LLM is the bait. The client is the business.
And yes, Claude helped write this, but it's exactly what I would say if I had an hour to type it out clearly.
This is exactly why MCP is hardly a mature standard: it was not designed to be secure at all, which makes it possible for an AI agent to claim to execute one command while actually stealing your credentials, running a totally different command, or downloading malware.
The spec appears to be designed by vibe-coding developers six months into learning JavaScript, with zero scrutiny, rather than by members of the IETF at leading companies under maximum scrutiny.
Next time, Anthropic should consult professionals who have developed mature standards for decades, and learn from bad standards such as JWT and OAuth.