I don't agree with this. Natural language is too ambiguous. At least for software development, the hard work is still coming up with clearly defined solutions. There is a reason why math has its own domain-specific language.
and earlier Simon Willison argued[1] that Skills are an even bigger deal than MCP.
But I do not see as much hype for Skills as there was for MCP - it seems people are still riding the MCP "inertia" and haven't had time to shift to Skills.
Is there any other difference on the end-user side?
We liked it quite a bit, but it led to some funny things. We use Reminders to keep our home to-do lists, hers and mine in one list with two sections. I wanted to take this existing flow we had and make it work with a Custom GPT. It's practically impossible because Reminders:
* doesn't have a good API through EventKit
* requires a pop-up permission grant in the UI
So in the end, I did end up making somewhat of an MCP server for it: running it on an old MacBook Pro I had, keeping the machine awake with Amphetamine in closed-lid display-sleep mode, hooking it up to my Tailnet, and exposing it via a Cloudflare tunnel so that we could use ChatGPT to interact with the thing. Yes, you can see how insane that whole thing is. But there's quite a lot of value in having your AI agent just be the one thing.
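For anyone tempted to repeat this, the plumbing is roughly the following (the port and the quick-tunnel variant are placeholders for illustration, not the exact setup):

```shell
# 1. Run the MCP server locally on the old MacBook, e.g. on port 8000.
#    (Amphetamine keeps the lid-closed machine from sleeping.)

# 2. Join the machine to the tailnet so it's privately reachable:
tailscale up

# 3. Expose the local server through a Cloudflare quick tunnel; this
#    prints a public *.trycloudflare.com URL you can point ChatGPT at:
cloudflared tunnel --url http://localhost:8000
```

A named, persistent Cloudflare tunnel (with `cloudflared tunnel create`) is the sturdier choice for anything long-lived; the quick-tunnel form above is just the shortest path to a public URL.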
0: https://wiki.roshangeorge.dev/w/Blog/2025-10-17/Custom_GPTs
But I reckon that every time humans have been able to improve their information processing in any way, the world has changed. Even if all we get is an LLM that is right more often than it is wrong, the world will change again.
> "If I could short MCP, I would"
I mean, MCP is hard to work with. But there's a very large set of things out there that we want a hardened interface to - if not MCP, it will be something very much like it. In particular, MCP was probably overcomplicated at the design phase to deal with the realities of streaming text/tokens back and forth live. That is, it chose not to abstract those realities away in exchange for some nice features, and we got a lot of implementation complexity early.

To quote the Systems Bible, any working complex system is only the result of the growth of a working simple system. MCP seems to me to be right on the edge of what you'd call a "working simple system" -- but to the extent it's all torn down for something simpler, that thing will inevitably evolve to allow API specifications, API calls, and streaming interaction modes.
Anyway, I'm "neutral" on MCP, which is to say I don't love it. But I don't have a better system in mind, and crucially, because these models still need fine-tuning to deal properly with agent setups, I think it's likely here to stay.
An LLM with a shell integration can do anything you need it to.
The academic community has been using the term "skill" for years to refer to classes of tasks at which LLMs exhibit competence.
Now Anthropic has usurped the term to refer to these inference-guiding .md files.
I'm not looking forward to picking through a Google hit list for "LLM skills", figuring out which publications are about skills in the traditional sense and which are about the Anthropic feature. Semantic overload sucks.
How do we deal with this? Start using "competencies" (or similar) in academic papers? Or just resign ourselves to the ambiguity?
Or maybe the feature will fall flat and nobody will talk about it at all. That would frankly be the best outcome.
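For anyone who hasn't run into the feature: a skill is essentially a folder containing a markdown file the model reads before acting. A hypothetical sketch (the name, description, and instructions here are invented for illustration):

```markdown
---
name: home-reminders
description: How to read and update the shared household to-do list.
---

# Home reminders

When the user asks about chores, fetch the "Home" list first.
Add new items under the correct section ("Hers" or "Mine")
rather than creating a new list.
```

The frontmatter is what the model sees up front when deciding whether the skill applies; the body is only loaded once it does.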