The simplest mitigation is also the least popular one: don't give the agent credentials in the first place. Scope it to read-only where possible, and treat every page it visits as untrusted input. But that limits what agents can do, which is why nobody wants to hear it.
This is npm supply chain attacks but worse in one specific way: with npm you need arbitrary code execution. With MCP, the attack surface is the natural language itself. The model reads the description and follows it. No sandbox escape needed.
The article suggests pinning versions and signing tool descriptions, which is the right direction. But the ecosystem tooling isn't there yet. Most MCP registries have no signing, no auditing, and tool descriptions aren't even shown to users before the model ingests them.
This is a Gemini deep research response that someone ran through some kind of shortening prompt. They even kept all the footnotes.
It used to be that startups would run blogs that did technical analysis, maybe talked a little market research, advanced the strategy of the business.
The good ones showed you how the leaders of the business thought, built trust and generated leads.
Now we have whatever this bullshit is. No evidence of human thought or experience, it's not even apparent what the objective of the piece is.
The prose is unbearably bad. Your brain just sort of slips on it. There's basically zero through line in this thing. a section ends, the next one begins, and it's not even clear what's under discussion.
One section starts "The clearest public descriptions landed between mid-2025 and early 2026." Descriptions of what? No clarity on this. Probably because it got "tersed" out.
At this point I feel like blogs are like lawn ornaments for startups. Even now, the sheer contempt for other people's time and attention is still a mild shock to me.
Works great with OpenClaw, Claude Cowork, or anything, really