Bainbridge by itself is a tough paper to read because it's so dense, but it's only four pages long and worth following along:
https://ckrybus.com/static/papers/Bainbridge_1983_Automatica...
For example, see this statement in the paper: "the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have."
This summarizes the first irony of automation, which is by now familiar to everyone on HN: using AI agents effectively requires an expert programmer, but to build the skills of an expert programmer, you have to do the programming yourself.
It's full of insights like that. Highly recommended!
- When one of the agents does something wrong, a human operator needs to be able to intervene quickly and give the agent expert instructions. But since the experts no longer perform the underlying tasks themselves, they quickly forget parts of their expertise. So the experts need constant retraining, which leaves them little time to oversee the agent's work.
- Experts must become managers of agentic systems, a role they are not familiar with, so they no longer feel at home in their job. This problem is hard for the experts' own managers to spot, since they rarely experience it first hand.
Indeed, the irony is that AI provides efficiency gains which, as they become more widely adopted, become more problematic because they deskill the necessary human in the loop.
I think this all means that automation is not taking away everyone's job: it makes things more complicated, and hence humans can still compete.
“But at what cost?”
We’ve all accepted calculators into our lives as faster and correct when used correctly (minus Intel's Pentium tomfoolery), but we still emphasize the need to know how to do the math in educational settings.
Any adult out of school will confirm that, when confronted with an unfamiliar math problem (or any rusty skill), there is a wait time to revive the ability.
Programming automation having this potential for skill decay AND sitting on the critical path is … worth thinking about.
Yet pilots are constantly trained on real scenarios and are expected to land airplanes manually every month (and to fly takeoffs manually too).
This ensures pilots maintain their skills, while the autopilot helps most of the time.
On top of that, flight controls are often already semi-automatic, i.e. assisted (but not by LLMs!), so it's a complex comparison.
There's been progress since then. Although the details are not widely publicized, enough pilots of the F-22, F-35, or the Gripen have talked about what modern fighter cockpit automation is like. The real job of fighter pilots is to fight and win battles, not drive the airplane. A huge amount of effort has been put into simplifying the airplane driving job so the pilot can focus on killing targets. The general idea today is that the pilot puts the pointy end in the right direction and the control systems take care of the details. An F-22 pilot has been quoted as saying that the F-22 is far less fussy than a Cessna as a flying machine.
For the F-35, which has a VTOL variant (the B) and a carrier-landing variant (the C), much effort was put into making vertical landing and carrier landing easy. Not because pilots can't learn to do them, but because training tended to obsess over those tasks. The hard part of Harrier training (the Harrier being the only previously successful VTOL fighter) was learning to land the unstable beast without crashing. There were still a lot of Harrier crashes.
The hard part of Naval aviator training is landing on a carrier deck. Neither of these tasks has anything to do with the real job of taking a bite out of the enemy, but they consumed most of the training time. So, for the F-35, both of those tasks got enough computer-aided stability to make them much easier.

One of the stranger features of the F-35 is that it has two main controls, called "inceptors", which correspond to throttle and stick. In normal flight, they mostly work like throttle and stick. But in low-speed hover, the "throttle" still controls speed while the "stick" controls attitude, even though in that mode the "stick" is affecting engine speed and the "throttle" is affecting control surfaces. So the pilot doesn't have to manage the strange transitions of a VTOL craft directly.
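The inceptor behavior boils down to holding the pilot's intent (speed vs. attitude) fixed while re-routing it to different actuators per mode. A toy sketch of that idea, and obviously not the real F-35 control laws; every name, range, and number below is invented:

    # Toy sketch of mode-dependent inceptor routing. Not the real
    # F-35 control laws: all names and numbers here are invented.
    from dataclasses import dataclass

    @dataclass
    class Inceptors:
        throttle: float     # 0.0..1.0, the pilot's "speed" intent
        stick_pitch: float  # -1.0..1.0, the pilot's "attitude" intent

    def route(mode: str, inp: Inceptors) -> dict:
        """Pilot intent stays constant; the flight computer changes
        which actuators realize it depending on mode."""
        if mode == "conventional":
            return {"engine_thrust": inp.throttle,  # throttle -> engine
                    "elevator": inp.stick_pitch}    # stick -> surfaces
        if mode == "hover":
            # Routing inverts: attitude intent is realized with thrust,
            # speed intent with nozzle/surface scheduling.
            return {"lift_fan_thrust": 0.5 + 0.5 * inp.stick_pitch,
                    "nozzle_angle": 90.0 * (1.0 - inp.throttle)}
        raise ValueError(f"unknown mode: {mode}")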
All of this refocuses pilot training on using the sensors and weapons to do something to the enemy. Classic training is mostly about the last few minutes of getting home safely.
As AI for programming advances, we should expect to devote more user time to analyzing the tactical problem, rather than driving the bus.
But in my use of AI agents, as a programmer and for other work, I would say that while yes, you do have to look for mistakes and errors, most of my time is still spent programming the AI.
The AI agent has no idea what it must produce, what it's meant to do, when it can alter something existing to enable something new, etc.
And this is true for both functional and non-functional requirements.
This is unlike traditional manufacturing, where you've already built your pipeline for a precise output: your CAD designs are done, you've run your simulations, and everything is already calibrated for what you want.
So most of the work remains that of programming the machine.
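For concreteness, here is roughly what that "programming" looks like in my workflow; the structure and field names below are just my own convention, not any tool's actual API:

    # What "programming the AI" mostly means in practice: spelling out
    # the functional and non-functional requirements the agent cannot
    # know by itself. Structure and names are my own convention.
    task_brief = {
        "goal": "Add CSV export to the reports page",
        "functional": [
            "Export respects the user's currently applied filters",
            "Column order matches the on-screen table",
        ],
        "non_functional": [
            "Stream rows; never load the full result set into memory",
            "No new third-party dependencies",
        ],
        "may_modify": ["reports/views.py", "reports/templates/"],
        "must_not_touch": ["billing/"],  # existing behavior to preserve
    }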
I question this.
Also, by and large the current AI tools are not on the critical path yet, except perhaps for those drones that lock onto targets so they can still eliminate them under signal interference, and even then it's conventional ML. Agents cannot be on that path yet due to predictability challenges.
But what most of them are doing is not becoming more efficient but being seen to be more efficient. The main reason they are so obsessed with AI is that they want to send the signal that they are pursuing efficiency, whether or not they succeed.
I work at a firm that has given AI tooling to non-developer, data-analyst type people who otherwise live & die in Excel. Much of their day job involves reading PDFs. I occasionally use some of the firm's AI tooling for PDF summarizing/parsing/interrogation type tasks and remain consistently underwhelmed.
Take 10 PDFs, each with a simple 30-row table under the same title in each file, and it pukes on 3-4 out of 10 with silent failures: dropped rows, duplicated data, etc. When you point out the missed rows, it goes back and duplicates rows to get to the correct row count.
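The maddening part is that this class of failure is cheap to catch mechanically: count the rows deterministically and diff against what the model returns. A rough sketch, assuming pdfplumber, machine-readable (non-scanned) PDFs, and a hypothetical llm_extract_rows() wrapper around whatever the tooling exposes:

    # Row-count sanity check for LLM table extraction. Assumes
    # pdfplumber and non-scanned PDFs; the file names and
    # llm_extract_rows() are hypothetical placeholders.
    import pdfplumber

    def expected_rows(path: str) -> int:
        """Count table rows straight from the PDF, skipping one
        header row per table."""
        with pdfplumber.open(path) as pdf:
            return sum(max(len(t) - 1, 0)
                       for page in pdf.pages
                       for t in page.extract_tables())

    for path in ["report1.pdf", "report2.pdf"]:  # hypothetical files
        got = llm_extract_rows(path)             # hypothetical LLM call
        want = expected_rows(path)
        # Catches both dropped rows and padding by duplication.
        if len(got) != want or len(set(map(tuple, got))) != len(got):
            print(f"{path}: wanted {want} unique rows, got {len(got)}")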
Using it to interrogate standard company-filing PDFs that it had supposedly been specially trained on, it gave very convincing answers which were wrong, because it had silently truncated its search context to only recent years' financial filings. Nowhere did it surface this limitation to the user. It only became apparent around the 4th or 5th company, when it decided to caveat its answer with its knowledge window. That invalidated the previous answers, since questions such as "when was the first X" or "have they ever reported Y" had been operating on incomplete information.
Most users of these tools are not that technical and will be much more naive, taking the answers as fact without considering the context.
It's useful to think about AI-driven coding assistants in terms of the SAE levels of driving automation.
- Level 0 - totally manual
- Level 1 - a bit of assistance, such as cruise control
- Level 2 - speed and steering control that requires constant supervision by a human driver. This is where most of the commercial systems are now.
- Level 3 - Level 2, but reliable enough that the human driver doesn't need to supervise constantly. Able to bring the vehicle to a safe stop by itself. Mercedes-Benz Drive Pilot is supposedly Level 3. Handoff between computer and human remains a problem. The human is still liable for accidents.
- Level 4 - Full automation, but not under all conditions. Waymo is Level 4. Human just selects the destination.
- Level 5 - Full automation, at least as capable as human drivers under all conditions. Not yet seen.
What we're looking at with the various programming-assistance AI systems is Level 2 or Level 3 competence. These are the most troublesome levels. Who's in charge? Who's to blame?
The need for such programming assistance systems may be transient, as it clearly is in automotive. Eventually, everybody in automotive will get to Level 4 or better, or drop out due to competitive pressure.
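To make the analogy concrete, here is one way to write it down; the coding-side reading of each level is my own interpretation, not SAE's:

    # SAE driving-automation levels with my own reading of the
    # coding-assistant analogue for each. Interpretation, not official.
    from enum import IntEnum

    class SAE(IntEnum):
        MANUAL = 0       # plain editor, no assistance
        ASSIST = 1       # completion and linting, like cruise control
        SUPERVISED = 2   # agent writes code, human reviews every change
        CONDITIONAL = 3  # agent works unattended, escalates when stuck
        HIGH = 4         # agent ships unreviewed within a bounded domain
        FULL = 5         # agent ships anything a human could, anywhere

    def who_is_liable(level: SAE) -> str:
        # Levels 2-3 are the troublesome zone: the system does the
        # work, but the human is still on the hook.
        return "human" if level <= SAE.CONDITIONAL else "vendor/system"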
What's interesting is this mirrors every automation wave. We thought assembly lines would eliminate human work - instead they just changed what work meant. AI's doing the same, just at software speed instead of industrial speed.
Long-term I'm optimistic - automation creates more than it destroys, always has. Short-term though? Messy transition for anyone whose job is 'being the interface layer'.
Will include this thread in my next issue of https://hackernewsai.com/
My AV is reporting issues with this link: 15/12/2025 2:59:56 PM;HTTP filter;file;https://cdn.jsdeliver.net/npm/mathjax@3.2.2/es5/tex-chtml.js... trojan;connection terminated;
However, this took 40 years and actual fatalities. We should keep that in mind when we're pushing the AI acceleration pedal down ever harder.
so much this!
If you're bad at your job, you're automating it at lightning speed.
You need to have good business processes and be good at your job without AI in order to have any chance in hell of being successful with it. The idea that you can just outsource your thinking to the AI and no longer need to actually understand or learn anything is complete delusion.