I'm seeing legitimate 10x gains because I'm not writing code anymore – I'm thinking about code and reading code. The AI facilitates both. For context: I'm maintaining a well-structured enterprise codebase (100k+ lines of Django). The reality is my input is still critically valuable. My insights guide the LLM, and my code review is the guardrail. The AI doesn't replace the engineer; it amplifies the intent.
Using Claude Code Opus 4.5 right now and it's insane. I love it. It's like being a writer after Gutenberg invented the printing press rather than the monk copying books by hand before it.
While there might be open issues with AI, those AI companies are providing *far* more value than zero.
But it's clear the LLMs have some real value. Even if we always need a human in the loop to prevent hallucinations, they can still massively reduce the amount of human labour required for many tasks.
NFTs felt like a con, and in retrospect were a con. LLMs are clearly useful for many things.
You're lumping together two very different groups of people and pointing out that their beliefs are incompatible. Of course they are! The people who think there is a real threat are generally different people from the ones who want to push AI progress as fast as possible. Those who voice both positions generally do so out of a need to compromise, not because many people sincerely hold both views at once.
But they don't. Instead, "AI safety" organizations all appear to exclusively warn of unstoppable, apocalyptic, and unprovable harms that seem tuned exclusively to instill fear.
> This has, of course, not happened.
This is so incredibly shallow. I can't think of a single doomer who ever claimed that AI would destroy us by now. P(doom) is about the likelihood of it destroying us "eventually". And I haven't seen anything in this post, or in any recent developments, to make me reduce my own P(doom), which is not close to zero.
Here are some representative values: https://pauseai.info/pdoom
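To make "eventually" concrete, here's a toy calculation with deliberately made-up numbers (mine, not the linked page's): even a small annual risk compounds into a sizable eventual one, which is why "it hasn't happened yet" tells you little.

```python
# Toy illustration (numbers are made up, not from pauseai.info):
# a small constant annual risk compounds into a large eventual probability.
annual_risk = 0.01  # hypothetical 1% chance of catastrophe per year
years = 50

p_eventual = 1 - (1 - annual_risk) ** years
print(f"P(doom within {years} years) ≈ {p_eventual:.0%}")  # ≈ 39%
```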
What parallel world are they living in? Every single online platform has been flooded with AI-generated content and has had to enact countermeasures, or has gone the other way, embracing it and replacing humans with AI. AI use in scams has also become commonplace.
Everything they warned about with the release of GPT‑2 did in fact happen.
Let’s not forget these innovations come on the heels of COVID. Strong, swift action by government, industry, and individuals against a deadly pathogen is “controversial”. Even if killer AI were here: once bitten, twice shy…
I’m angry about a lot of things right now, but LLM “marketing” (and inadequate reporting which turns to science fiction instead of science) is not one of them. The LLM revolution is getting shoehorned into this Three Card Monte narrative, and I don’t see the utility.
The criticism of LLM promise and danger is part of the zeitgeist. If firms are playing off of anything, I bet it’s that, and not an industry-wide conspiracy to trick the public and customers. Advertising and marketing meets people where they’re at, and “imagines” where they want to go, all wrapped up with the product. It doesn’t make the product frightening. It’s the same for all manner of dangerous technologies: guns, nuclear energy, whatever. The product is the solution to the fear.
> “The LLMs we have today are famously obsequious. The phrase “you’re absolutely right!” may never again be used in earnest.”
Hard NO. I get it, the language patterns of LLMs are creepy, but it’s not bad usage. So, no.
I can handle the cognitive dissonance of computer algorithms spewing out anthropomorphic phrasing without deciding that I, as a human being, can no longer in humility and honesty tell someone else they’re right and I was wrong.
The 'are LLMs intelligent?' discussion should be retired at this point, too. It's academic; the answer doesn't matter for businesses and consumers, only for philosophers (which everyone is, at least a little). 'Are LLMs useful for a great variety of tasks?' gets a resounding 'yes'.
I think that's good, but the whole "AI is literally not doing anything" idea, that it's all just some mass hallucination, has to die. Gamers argue it takes jobs away from artists; programmers, for some reason, feel they have to argue it doesn't actually do anything. Isn't that telling?
Any standard of intelligence devised before LLMs is passed by LLMs relatively easily. They do things that 10 years ago people would have said are impossible for a computer to do.
I can run Claude Code on my laptop with an instruction like "fix the sound card on this laptop" and it will analyze my current settings, determine what might be wrong, devise tests so I can gather information it can't collect itself, run commands to probe the hardware for its capabilities, and finally offer a menu of solutions, give me the commands to implement one, and verify that the fix works. Can you do that?
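As a rough sketch of what that looks like when scripted rather than typed into the interactive UI (assuming the `claude` CLI is installed; I'm using its `-p`/`--print` flag for a one-shot non-interactive prompt, which matches my install, but check `claude --help` on yours):

```python
# Sketch: launch a one-shot Claude Code diagnostic run and capture its output.
# Assumes the `claude` CLI is on PATH and supports `-p` (print/non-interactive
# mode); verify the flag with `claude --help` before relying on it.
import subprocess

prompt = "fix the sound card on this laptop"

result = subprocess.run(
    ["claude", "-p", prompt],  # run a single prompt, no interactive UI
    capture_output=True,
    text=True,
)

print(result.stdout)  # the model's analysis and proposed fixes
if result.returncode != 0:
    print("claude exited with an error:", result.stderr)
```

In an interactive session it asks permission before actually running probe commands, which is exactly the human-in-the-loop guardrail mentioned upthread.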
Hm... is it wrong to think like this?
You have not actually made clear how mechanical calculators were a scam.
Ironically, this article feels like it was written by an LLM. Just a baseless opinion.