We've been running agent workflows for a while now. The pattern that works: treat agents like junior team members. Clear scope, explicit success criteria, checkpoints to review output. The skills that matter are the same ones that make someone a good manager of people.
pglevy is right that many managers aren't good at this. But that's always been true. The difference now is that the feedback loop is faster. Bad delegation to an agent fails in minutes, not weeks. You learn quickly whether your instructions were clear.
The uncomfortable part: if your value was being the person who could grind through tedious work, that's no longer a moat. Orchestration and judgment are what's left.
In my experience so far, AI prototyping has been a powerful force for breaking analysis paralysis.
In the last 10 years of my career, slow execution at different companies wasn't due to slow code writing. It was due to management excess: trying to drive consensus and de-risk ideas before the developers were even allowed to write the code. Let's circle back and drive consensus in a weekly meeting with the stakeholders to get alignment on the KPIs for the design doc, which goes through the approval and sign-off process first.
Developers would then read the room and realize that perfection was expected from their output too, so development processes grew long and careful to avoid accidents. I landed on a couple of teams where even small changes required meetings to discuss them, multiple rounds of review, and a lot of grandstanding before we were allowed to proceed.
Then AI comes along and makes it cheap to prototype something. If it breaks or it's the wrong thing, nobody feels like they're in trouble because we all agree it was a prototype and the AI wrote it. We can cycle through prototypes faster because it's happening outside of this messy human reputation-review-grandstanding loop that has become the norm.
Instead of months of meetings, we can have an LLM generate a UI and a backend with fake data and say "This is what I want to build, and this is what it will do". It's a hundred times more efficient than trying to describe it to a dozen people in 1-hour timeslots in between all of their other meetings for 12 weeks in a row.
The dark side of the same coin is when teams try to rely on the AI to write the real code, too, and then blame the AI when something goes wrong. You have to draw a very clear line between AI-driven prototyping and developer-driven code that developers must own. I think this article misses the mark on that by framing everything as a decision to DIY or delegate to AI. The real AI-assisted successes I see have developers driving, with AI as an assistant on the side, not the other way around. I can see how an MBA class could come to believe that AI is going to do the developers' jobs, though, since it's easy to look at these rapid LLM prototypes and think that production-ready code is just a few prompts away.
5 years ago: ML autocomplete → you had to learn coding in depth
Last year: AI-generated suggestions → you had to be an expert to ask the right questions
Now: AI-generated code → you should learn how to be a PM
Future: AI-generated companies → you must learn how to be a CEO
Meta-future: AI-generated conglomerates → ?
Recently I realized that instead of just learning technical skills, I need to learn management skills: specifically, project management, time management, writing specifications, setting expectations, writing tests, and in general handling and orchestrating an entire workflow. And I think this will only shift to higher levels of the management hierarchy in the future. For example, in the future we will have AI models that can one-shot an entire platform like Twitter. Then the question is less about how to handle a database and more about how to handle several AI-generated companies!
While we're at the project manager level now, in the future we'll be at the CEO level. It's an interesting thing to think about.
I was convinced my idea was a good one. At least ChatGPT, Gemini, and Claude told me it was. I did so many rounds of each one evaluating the others, trying to poke holes, etc. Reviewing the idea and the "research", the reasoning. Plugging the gaps.
Then I started talking to real people about their problems in this space to see if this was one of them. Nope, not really. It kinda was, but not often enough to pay for a dedicated service, and not enough of a pain to move on from free workarounds.
Beware of AI reviewing AI. Always talk to real people to validate.
Similarly, it’s easy to think that the lowly peons in the engineering world are going to get replaced and we’ll all be doing the job of directors and CEOs in the future, but that doesn’t really make sense to me.
Being able to whip your army of AI employees 3% better than your competitor doesn’t (usually) give any lasting advantage.
What does give an advantage: specialized deep knowledge, building relationships and trust with users and customers, and a good sense of design/UX/etc.
Like maybe that's some of the job of a manager/director/CEO, but it doesn't describe anyone I've worked with.
I like his thinking, but many professional managers are not good at management. So I'm not sure about the assumption that "many people" can easily pick this up.
Fire MBAs and other “management” types. If they’re not technical and you’re building something technical, they need to go. Anyone who says otherwise gets fired too.
Keep the engineers who consistently get Exceeds Expectations. Fire everyone else. No PIP, just go, please.
Keep a few EE product managers. Fire the rest.
Hire a few QAs who can work with AI and work with product to ensure the stuff actually works. You don’t need that many people anymore and a couple of quality people can’t hurt. I don’t trust engineers enough, sorry. You need discerning eyes.
Fire everyone else. Give the best people AI and they will be able to put out more good work. If someone doesn’t get this, fire them too because they’re clearly not EE level.
Scale this to the whole org.
"AI labs"
Can we stop with this misleading language? They're doing product development. It's not a "laboratory" doing scientific research; there's no attempt at the scientific method. It's a software firm, and these are software developers/project managers.
Which brings me to point 2. These guys are selling AI tooling. Obviously there's a huge desire to dogfood the tooling. Plus, by joining the company, you are buying into the hype and the vision. It would be more surprising if they weren't using their own tools the whole time. If you can't even sell to yourself...