I think this next generation is going to come up fundamentally believing that AI is generally a bad thing, and it's going to surprise older people.
[X] Tweets and Instagram comments presented as "what society is thinking"
[X] Ties Luigi Mangione and the California warehouse fire to Gen Z discontent (about AI?).
[X] Statistics used to support the title with little to no regard for continuity: "those respondents who said that AI makes them “nervous” grew from 50% to 52% during the same period" => the percentage was 52% in 2023, 50% in 2024, and 52% in 2025, which seems mostly flat to me; the real jump was 2022-2023, from 39%.
no one is questioning the underlying model mathematics, they are questioning deceptive & reckless stewards.
Meaningful advances in medical diagnosis are not coming from chatbot companies. Some are coming from machine learning methods. Perhaps measuring public sentiment about something so vague is not a very productive way to quantify anything.
That said, I also continue to be frustrated with people using the abstract concept of a new technology as a substitute for the institutions that use that technology to exert power in the world and what they do with that power, which, as many in the comments already point out, is what the vast majority of people are actually mad about, and rightly so.
Maybe I'd be a bit more optimistic if someone could explain a realistic economic scenario for how we're going to transition into our utopian abundant future without a depression or a revolution.
The fundamental alignment issue is aligning the companies themselves with society, not the models with the companies. Widespread unemployment is not aligned with society, but it is aligned with Anthropic and OpenAI if it makes them rich.
Therefore the only “harms” the companies will take seriously are those which also harm the company. For example reputational harms from enabling scams aren’t allowed.
Perhaps all of this isn’t fair, since companies actively subverted safety research for profitability. But then I would go back to my earlier point of over-indexing on unintended behaviors and under-indexing on intended ones.
Imagine choosing to be an expert in something that you think is a coin flip away from making the world worse.
It looks like:
1. They take billions in investment
2. They spend trillions
3. They and their investors profit in the quadrillions from all the "labor saving"
4. ???
5. Everyone's needs are met.
> Note: “AI experts” refer to individuals whose work or research relates to AI. The AI experts surveyed are those who were authors or presenters at an AI-related conference in 2023 or 2024 and live in the U.S. Expert views are only representative of those who responded.
But we have been sold to use these constantly falsified AI summaries as the go-to source of "truth" by all levels of society. We're trading truth for an illusion of short-term gains. This will not have good consequences.
The kids are alright.
Take log review, for example. Whether it's admin or security work, LLMs are incredible at reading awfully formatted logs and even using those to pull meaning from other logs as well. It can turn an hour-long log review into a 10-minute one.
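A minimal sketch of that workflow, with the caveat that the function names and the model call are assumptions, not a real deployment: split the messy raw log on line boundaries into context-sized chunks, then hand each chunk to a model with a summarizing prompt (`call_llm` stands in for whichever API you actually use).

```python
# Hypothetical sketch: chunk a messy log so each piece fits a model's
# context window, then ask the model to summarize each chunk.
# `call_llm(prompt) -> str` is a stand-in, not a real library call.

def chunk_log(raw: str, max_chars: int = 4000) -> list[str]:
    """Split a raw log on line boundaries into context-sized chunks."""
    chunks, current, size = [], [], 0
    for line in raw.splitlines(keepends=True):
        if size + len(line) > max_chars and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks

def review_log(raw: str, call_llm) -> list[str]:
    """Summarize each chunk of a log via the provided model callable."""
    prompt = "Summarize errors and anomalies in this log fragment:\n\n"
    return [call_llm(prompt + chunk) for chunk in chunk_log(raw)]
```

Chunking on line boundaries matters more than it looks: splitting mid-line hands the model half a stack trace and invites confabulated summaries.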
It’s always only ever about how the new model is faster, better, smarter. Or how the tech will be bringing ruin to the job market and someone should probably do something about that some time soon. Zero efforts to create any sort of educational content - how it even works, how to vet its output, how to have an eye for confabulation, how to use it as thinking enhancement rather than replacement, to keep in mind that it’s trained to please and will literally generate anything to cause users to click the thumbs up button. Nope, it’s just “ModelGPClaude can make mistakes! Better be careful!”
And then everyone’s surprised when an utterly improvident handling of 4o kicks off the biggest concentrated wave of AI psychosis seen yet. Because, surprise! When you give people a model that’s trained to anthropomorphize itself, people who have no idea about any of this tech and have no access to education about any of it might believe it’s more than it is! Boy, who’d’ve thunk; isn’t the world complex?!
This was a symptom of this exact same disease. I have far less worry about the tech and far more worry about how the disconnected venture capital caste is inflicting it upon us.
Your Darios and Sams know exactly what they are doing. They know it's going to cause a lot of job displacement, even if the technology isn't perfect. They are trying to get the C-suite elite hyped up about it, and the hyperscalers are along for the ride as well. There's so much money to be made.
They could not care less about what joe schmoe on the street thinks about it.
It's new, people fear it. Sometimes justified, usually not.
People greatly feared the car because of the number of horse-related jobs it would displace.
President Benjamin Harrison and First Lady Caroline Harrison feared electricity so much they refused to operate light switches to avoid being shocked. They had staff turn lights on/off for them.
Looking back at these we might laugh.
We're largely in the same boat now.
It's possible AI will destroy us all, but judging from history, irrational reactions to something new aren't exactly unprecedented.
An alternative possibility is that inequality is about to explode between those who profit from AI/robotic labor and those displaced by it.
It's hard not to see that they have a point. If AI is so great and going to save so much money - how about starting by paying some of that forward? Suddenly when you ask the billionaires or AI tech elite to share any of the wealth they are so confident they will generate, everyone backs away fast and starts to behave like it is all a speculative venture. So which one is it?
I just wish my wife were more serious about camping and learning survival skills. I think shit is going to hit the fan in the next 5-10 years, but she thinks that's crazy. Oh well, maybe I am crazy.
There’s also a lack of willingness to ‘bring along’ the public. It’s just “make the god thing; ask for permission later”.
In 2022 the world was open arms, welcoming AI advancements.
However, since 2022, OpenAI and its original founding researchers had their dramatic falling-out and began publicly saying crazy-person things like "the end is coming."
Why did they insist on force-launching ChatGPT? Google at the time refused to launch its own LLM-based chat (it was Google's own research that gave birth to LLMs) because it knew all of the negative outcomes and unreliability would just make for a poor product experience.
Instead of launching quietly like DALL-E and keeping it fun and experimental, nope, they threw it up online and moved full steam ahead.
"THE END IS COMING" Sam Altman said. "AI WILL TAKE YOUR JOBS WITHIN 5 YEARS" Dario said. "AGI IS ALMOST HERE" Elon Musk said.
The disconnect is because these specific men, making those specific bold crazy person claims, with zealous cult following employees (including many of us here in this forum), kept marching ahead. Not only that, no one asked the rest of the world if they even wanted this technology EVERYWHERE.
This technology could have been so cool if it were given the breathing room to find use cases. Natural-language programming has been attempted for half a century, and it has finally arrived.
Yet it's so tainted by all the crazy-person speak and doomsday messaging, and it was thrown out there in such a haphazard way that it has burned so many bridges, that this technology is truly toxic. The fact that Gen-A and Gen-Z now have to waste brain power speculating whether something is AI generated is such a waste, but here we are. Welcome to the shit storm that was entirely made by those men.
“Is that some nonsense ChatGPT told you?” has turned into an almost cynical mockery in response to someone commenting about an issue.
The hype seems to have run its course. I’m a fan and use it constantly, but it’s also clear there are serious storm clouds and headwinds on the horizon.
If you work at the edge of context you know what I mean. Even within context, if the system was truly intelligent, the way that Euclid was intelligent, why do I need /superpowers and 50 cycles to get a certain implementation right?
Why is the AI not one-shotting obscure but simple business logic cases with optimal code? Whoops pattern never seen before! There is no thought to it, zero. The LLM is just shotgunning token prediction and context management until something sticks. The amount of complexity you get out of language is certainly fascinating and surprising at times but it's not intelligence - maybe part of it?
Sell it as skills or whatever, but all you do every day is fancy ways of context management to guardrail the token predictor algorithm into predicting the tokens that you want.
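That "fancy context management" can be made concrete with a toy sketch (all names here are hypothetical, and the token counting is a crude whitespace approximation, not a real tokenizer): pin the system prompt, then drop the oldest conversation turns until what remains fits a token budget.

```python
# Hypothetical sketch of context management as described above:
# keep the system prompt pinned and evict the oldest turns until
# the conversation fits a token budget. rough_tokens() is a crude
# whitespace approximation, not a real tokenizer.

def rough_tokens(text: str) -> int:
    return len(text.split())

def trim_context(system: str, turns: list[str], budget: int) -> list[str]:
    """Return [system] + the most recent turns that fit the budget."""
    used = rough_tokens(system)
    kept = []
    # Walk backwards so the newest turns survive eviction.
    for turn in reversed(turns):
        cost = rough_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return [system] + list(reversed(kept))
```

Everything beyond this in real tooling is elaboration on the same idea: deciding which tokens the predictor gets to see.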
1. Overhyped. Try writing a blog post that doesn't sound like it. Everyone is sick of reading it now.
2. Affecting the wrong people. It used to be that the rich got richer and the poor got poorer. But now a lot of the middle class will get poorer.
3. Severely damages working hard as a way out. Competition will become brutal if there's almost no barrier to entry. This will drive down profit, affect hiring, and become a conveyor belt of people trying to win the business lottery. This will make moats even more essential.
4. The obvious theft of creative works which destroys dreams and livelihoods.
No wonder the younger generation are against it. Those of us in the middle are still just hoping at least we can get through somehow. At least we have hope.
Of course, if people are fired and only capital owners / AI experts get to earn anything then this is wrong and a revolution is obviously needed and unavoidable.
But for me, the best outcome would be if AI did all the jobs so people could focus on doing what they want, not that we'd go back to the pre-AI era.
Initially however we need to balance between full wealth redistribution and keeping the incentive to develop AI further.
Of course by AI I mean really useful AI, the real part, not the marketing part.
> The United States reported the lowest trust in its own government to regulate AI responsibly of any country surveyed, at 31%.
It seems US citizens are really against the current administration, just using the fact that AI investment is intrinsically connected to it to voice their opposition.
> Country-level expectations follow similar patterns to the earlier sentiment trends. Nigeria, Japan, Mexico, the United Arab Emirates, South Korea, and India all expected AI to create more jobs than it eliminates, with shares above 60%. The United States and Canada sat at the opposite end, where 67% and 68% of respondents expected AI to eliminate jobs and disrupt industries.
Globally, the disconnect is not growing. It's really just a U.S. problem (spilling over to neighbouring Canada too).
So, no Luddites in sight, again. It's just public perception of a polarizing topic being leveraged for ideological reasons, sinking AI sentiment in the US only.
A silicon savior to finally free capital from the dependence on labor with all its pesky demands like sick leave or a living wage.
You can see this in the literal deification going on in VC circles. AGI is the capitalist version of the Second Coming, God coming down to earth to redeem them by finally solving the contradictions in their world view.
Unfortunately for them and fortunately for the rest of us, it's not all they hope it to be.
> The TL;DR is that Google engineering appears to have the same AI adoption footprint as John Deere, the tractor company. Most of the industry has the same internal adoption curve: 20% agentic power users, 20% outright refusers, 60% still using Cursor or equivalent chat tool. It turns out Google has this curve too... [0]
Ummmm... Steve. You'd think Google might be able to figure out a super huge awesome new thing from 1 out of 5 of its employees. Or, given this is a consistent curve across the industry (even at Google)... maybe AI is only about a fifth as cool and helpful as you and the enthusiasts think it is?
[0] https://simonwillison.net/2026/Apr/13/steve-yegge/#atom-ever...
It doesn’t matter if you think it’s all going to work out and AI will bring an unprecedented era of abundance. That is not the current state.
The current state is: Nearly all productivity growth since 1980 has gone to shareholders, not workers: https://www.epi.org/productivity-pay-gap/
Now what do you think happens when we dramatically expand productivity with AI? Well, we’re already seeing unprecedented layoffs in tech. And it’s easy to draw the conclusion that unless something structural changes all of the productivity gains from AI will go to investors not workers. Leaving said workers without access to healthcare or housing.
And of course let’s not forget that the tech elite in question supported Trump in the last election - someone who has done everything in his power to reduce healthcare access among the low income / unemployed population. This isn’t fucking rocket science guys.
We are ever so close to the point where 90% of our AI usage can go through providers of open models, who all compete with each other to drive down prices and prevent rug pulls, leaving Dario and Sam holding empty bags.