The right question is how much code a human can push now vs. prior to AI.
Everything we've done in coding has been assisted.
Prior to the current generation of web applications, we had the advent of concepts like Object-Oriented Programming, and before that, even C was a massive step up from assembly and punch cards.
AI has written a lot of code. AI has written very little high-velocity production code by itself (i.e., for people with no coding background).
In Ruby on Rails, fast code generation has been around for over 20 years; look up the concept of Scaffolding: https://www.rubyguides.com/2020/03/rails-scaffolding/
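To give a flavor of what scaffolding does, here's a minimal sketch; the Post model and its title/body fields are hypothetical, just for illustration:

    # scaffold a hypothetical Post resource: model, migration, controller, views, routes
    rails generate scaffold Post title:string body:text
    # apply the generated migration
    rails db:migrate

Two commands and you get working CRUD pages for the resource: the tool generated the code, and a human directed it.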
So to answer your question:

1. AI has pushed a lot of code.
2. AI has pushed almost no code without the oversight of human software engineers.
3. Software engineers are pushing an order of magnitude more code, producing more functional utility, and solving more bugs than ever before.
I don't know what the future holds, but using software to help humans build faster is not a new trend, and I don't think software has the ability to fully replace humans (yet).
This is a naive take. Throughout history, things have been automated with the help of the professions that were being automated away.
Software engineers have been automating away jobs for other people for nearly a century. It would be quite rich if the profession suddenly felt qualms about the process! (To be clear, I think automation is great and should always be pursued. Of course there are real human concerns when change happens quickly, but I am skeptical that smashing the looms is the best response.)
AI is getting better at writing code. However, writing code is just some fraction of the work of many software engineers. AI doesn't work independently; it needs to be guided, and its work needs to be reviewed, tested, etc. There are some domains where it does better and some where it doesn't. There's a range of "AI" work, from auto-complete-style assistance, to help understanding a code base, to writing code from a spec or doing other types of work.
All in all, I would say it's a decent improvement to productivity in many situations. It's really hard to say how much, and it's also not a zero-sum game: as productivity improves, there's more work.
Something to keep in mind: if you look at a modern software project, most of the executing code was likely not written by that project's developers. There's a huge stack of open-source bits executing in almost any new project.
Specifically at OpenAI, you also need to consider what type of software they are likely writing. Some of it may be more or less "vanilla" code, and some is likely very specialized/performance-critical. The vanilla code, like API wrappers or simple front-end pieces, is likely more amenable to being written by AI, whereas the more cutting-edge algorithmic/scheduling/optimization work is almost certainly not done by AI. At least not yet.
As software organizations become larger, there's a lot of overhead and waste. It is possible that AI can enable smaller teams, and that has a multiplicative effect because it lets you reduce that waste/overhead. There are likely also software engineers who will adapt to new workflows and become better, and some who will not.

It's really hard to say where things are going, but overall my sense is that this, like many other innovations, will lead to more software and more jobs, not the other way around. There are many moving pieces here, not just AI itself but geopolitics, macroeconomics, etc.: where those new jobs will get created, what new types of software/technology will emerge, and so on. History seems to show us that we'll adapt, evolve, and grow.
I think the difference between situations where AI-driven development works and where it doesn't will come down largely to the quality of the engineers supervising and prompting the agents that generate the code, and the degree to which they manually evaluate it before moving it forward. I think you'll find that good engineers who understand what they're telling an agent to do are still extremely valuable, and are unlikely to go anywhere in the short to mid term. AI tools are not yet at the point where they are reliable on their own, even for systems they helped build, and it's unclear whether they'll get there any time soon purely through model scaling (though it's possible).
I think you can see the realities of AI tooling in the fact that the major AI companies are hiring lots and lots of engineers, not just for AI-related positions, but for all sorts of general engineering positions. For example, here's a post for a backend engineer at OpenAI: https://openai.com/careers/backend-software-engineer-leverag... - and one from Anthropic: https://job-boards.greenhouse.io/anthropic/jobs/4561280008.
Note that neither of these requires direct experience with AI coding agents, just an interest in the topic! Contrast that with the many companies that now demand engineers explain how they're using AI-driven workflows. When they're serious about getting people to do the work that will make them money, rather than engaging in marketing hype, AI companies are honest: AI agents are tools, just like IDEs, version control systems, etc. It's up to the wise engineer to use them in a valuable way.
Is it possible they're just hiring these folks to improve their models enough to later replace those people? Sure. But I'm not sure when, if ever, that will become viable.
I actually use it from the web app, not the CLI. So far I've run over 100 Codex sessions, a great percentage of which I turned into pull requests.

I kick off Codex for one or more tasks and then review the code later. They run in the background while I do other things. Occasionally I need to re-prompt if I don't like the results.

If I like the code, I create a PR and test it locally. I would say 90% of my PRs are AI-generated (with a human in the loop).

Since using Codex, I very rarely create handwritten PRs.
Ask "how much did you build then?" -> also 100%.
The compiler and I operate on different layers.
There are two strong forces at play: employees generally want to put in the least effort possible and go home at 5, and employers want to save money and pay for fewer employees. AI creates a strong symbiosis here, and both sides are focused on a short-term win.