You sure? That’s basically all that’s being discussed.
There’s nothing in this article I haven’t heard 100 times before. Open any mainstream news article or HN/Reddit thread and you’ll find all of OP’s talking points about water, electricity, job loss, the intrinsic value of art, etc.
Exactly, strongly agree with that. This closed-world assumption never holds: we would only do less work if nothing else changed. But of course everything changes when you lower the price of creating software. Suddenly a lot of companies start considering things that previously would have been too expensive. That still takes skills and expertise they don't have, so they get people involved to do that work. Maybe they'll employ some of those people, but the trend is actually to hire only for the work that's core to your company.
And that's just software creation. Everything else is going to change as well. A lot of the software we use is optimized for humans, including all of our development tools. Replacing all of that with tools better suited to being driven autonomously by AI is an enormous amount of work.
And we have decades' worth of actively used software that currently requires human operators. Some car rental companies still interface with stuff written before I was born, and I'm more than half a century old. Same with banks, airlines, insurers, etc. There's a reason this stuff was never upgraded: doing so is super expensive. Now that just got a bit cheaper, so maybe we'll get around to doing some of it. Along with all the stuff for which the ambition level just went up by 10x. And all the rest.
It’s always this tired argument. “But it’s so much better than six months ago, if you aren’t using it today you are just missing out.”
I’m tired of the hype, boss.
I don't doubt that the leading labs are lighting money on fire. Undoubtedly, it costs crazy amounts of cash to train these models. But hardware development takes time, and it's only been a few years at this point. Even TODAY, one can run Kimi K2.5, a 1T-param open-source model, on two Mac Studios at 24 tokens/sec [1]. Yes, it'll cost you $20k for the specs needed, but that's hobbyist and small-business territory... we're not talking mainframe costs here. Surely the price will come down, and it's hard to imagine the hardware won't get faster and better.
Yes... training the models can realistically only be done on NVIDIA hardware and costs insane amounts of money. But even if we see only moderate improvement going forward, this is still a monumental shift for coding if you compare where we are now to 2022 (or even 2024).
[1] https://x.com/alexocheema/status/2016487974876164562?s=20
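For a rough sense of why the two-Mac-Studio setup is plausible, here's a back-of-envelope sketch in Python. Only the 1T-parameter count, the two-machine setup, and the ~24 tokens/sec figure come from the comment above; the 4-bit quantization, 512 GB of unified memory per machine, ~800 GB/s of memory bandwidth, and ~32B active parameters per token are illustrative assumptions, not confirmed specs.

    # Back-of-envelope memory and throughput estimate (illustrative assumptions,
    # not measurements) for serving a ~1T-parameter mixture-of-experts model on
    # two large-memory desktop machines.

    TOTAL_PARAMS = 1.0e12        # ~1T total parameters (from the comment)
    ACTIVE_PARAMS = 32e9         # assumed active parameters per token (MoE)
    BYTES_PER_PARAM = 0.5        # assumed 4-bit quantization -> 0.5 bytes/weight
    MEM_PER_MACHINE_GB = 512     # assumed unified memory per machine
    BANDWIDTH_GB_S = 800         # assumed memory bandwidth per machine

    # Do the quantized weights fit in the combined memory of two machines?
    weights_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9
    print(f"Quantized weights: ~{weights_gb:.0f} GB; "
          f"fits in 2 x {MEM_PER_MACHINE_GB} GB: {weights_gb <= 2 * MEM_PER_MACHINE_GB}")

    # Decoding is roughly memory-bandwidth bound: each generated token has to read
    # the active weights once, so bandwidth / bytes-per-token gives a loose ceiling.
    bytes_per_token_gb = ACTIVE_PARAMS * BYTES_PER_PARAM / 1e9
    print(f"Bandwidth-only ceiling: ~{BANDWIDTH_GB_S / bytes_per_token_gb:.0f} tokens/sec per machine")

Under those assumptions the weights come to roughly 500 GB, which fits in the combined ~1 TB of unified memory, and the bandwidth-only ceiling is a few dozen tokens/sec, so the quoted ~24 tokens/sec is in a believable range once interconnect and runtime overhead are factored in.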
And those are the EASY things for AI to "put out of work".
HARD systems like government legacy systems are not something you can slap 200 unit tests on and say "agent, rewrite this in Rust". They're hard because of the complex interconnects, the myriad patches and workarounds, and the institutional knowledge codified both in the codebase and outside of it. Bugs that stay bugs because the next 20 versions relied on that bug in some weird way. I started my career in that realm. I call bullshit on AI taking any jobs here if it can't even accelerate the pace of _quality_ releases of operating systems and video games.
I'm not saying this won't happen eventually, but that "eventually" is doing a lot of heavy lifting. I am skeptical of the 6-12 month timelines for broad job displacement that I see mentioned.
AIs (LLMs) are useful in a subtle way, like "Google Search" and _not_ like "the internet". It's a very specific set of text-heavy information domains that are potentially augmented. That's great, but it's not the end of all work, or even of all lucrative technical work.
It's a stellar tool for smart engineers to do _even_ more, and yet the smart ones know you have to babysit and double-check -- so it's not remotely a replacement.
This is without even opening the can of worms that is AI Agency.
My strongly held belief is that anyone who thinks that way also thinks that software engineering is reading tickets, searching for code snippets on Stack Overflow, and copy-pasting code.
Good specifications are always written after a lot of prototypes, experiments, and sample implementations (which may be production-level). Natural-language specifications exist only after the concept has been formalized; before that, you only have dreams and hopes.
I often find that when I come up with the solution, these little autocompletes pretend they knew it all along. Or when I make an observation, they say something like "yes, that's the core insight here".
They're great at boilerplate, and they can immediately spot a bug in 1,000 lines of code. I just wish they'd stop being so pretentious. It's us driving these things; it's our intelligence, intuition, and experience that create the solutions.
As long as code needs to be reviewed to (mostly) maintain a chain of liability, I don't see SWE going anywhere, despite what both the hype bros and the doomers seem intent on posting.
Code review will simply become the bottleneck for SWE: reading and understanding code so that when SHTF, the shareholders know who to hold accountable.
Like, I'm sorry, but if you couldn't see that this tech would be enormously useful to millions, if not billions, of people, you really shouldn't be putting yourself out there opining on anything at all. Same vibes as the guys who said horseless carriages were useless and couldn't possibly do anything better than a horse, which after all has a mind of its own. Just incredibly short-sighted and lacking in curiosity or creativity.
IDK, this sounds a whole lot like paying for snippets.