by ACCount37
10 subcomments
- It's simple. It's because AI is the scariest technology ever made.
Human intelligence has proven itself capable of doing a lot of scary things. And AI research is keen on building ever more capable and more scalable intelligence.
By now, the list of advantages humans still have over machines is both finite and rapidly diminishing. If that doesn't make you uneasy, you're in denial.
- If I can plausibly say I'm making something super dangerous, the government is likely to want to be the first government to have it. If the check clears before they figure out whether I'm BSing them, it's a win.
by everdrive
4 subcomments
- One thing that strikes me, which I never really see anyone discuss, is that we've been afraid of conscious computers for a _long_ time. Back in the '50s and before, people were quite afraid that we'd build conscious computers. This was long before there was any sense that we could actually accomplish the task. I think that, much as we see faces in the clouds, we imagine a consciousness where none exists (e.g., a rain god rather than a complex system of physics and chemistry).
Even LLMs, which blow past any conventional Turing test, are still not conscious. But they certainly _feel_ conscious. They trigger the same intuitions that we rely on to detect consciousness. You ask yourself, "how would I need to frame this question so that Claude would understand it?" You use the same mental hardware that you'd use for a conscious being.
So you have a historical and persistent fear of consciousness in a powerful entity where no consciousness actually exists, combined with the fact that we have created things which definitely seem conscious (not to mention that consciousness could genuinely be on its way soon).
- I feel like this article is written more for non-techies. A decent number of programmers have touched coding agents and know they "kind of" do the job. It's good enough for some tasks... I cannot be arsed to figure out how to edit a graph in Drupal, so I ask Claude. Claude fixes it, and it's no more broken than it already was. Win-win.
However, that's where I stop my agent usage. I let ~~Claude~~ GLM do the following:
- Fix tedious tasks that cost me more to figure out than I care for
- Research something I'm not familiar with and give me the facts it found, though even then I end up looking at the source myself
- For regulatory capture, of course. They are not fooling me. There may be other motives, and the more doom-inclined crowd can find something in it for themselves as well, but you don't have to dig any deeper if you are looking for an explanation of the perspective of the people actually building it.
The Chinese tech sector popularizing cheap, open-source models sure did a number on that narrative. So did the Llama models a while back.
- I wish we didn't call this AI, as the term is crazily overloaded.
Those are programs. The only difference is how we write them: not with "if"s and "for"s. We take a bunch of bits that do nothing, then organize them so that they output whatever it is we want.
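To make that concrete, here's a toy sketch, assuming nothing beyond plain Python (the task and numbers are made up for illustration):

```python
# Two ways to arrive at a program that doubles a number.

# 1. Written the classic way, with explicit logic:
def double_explicit(x):
    return 2 * x

# 2. "A bunch of bits that do nothing", organized by optimization:
#    start with a useless parameter and nudge it until the outputs
#    match what we want.
w = 0.0                               # the untrained "bits"
examples = [(1, 2), (3, 6), (5, 10)]  # desired input/output pairs
lr = 0.01                             # learning rate
for _ in range(2000):
    for x, y in examples:
        err = w * x - y               # how wrong the current bits are
        w -= lr * err * x             # gradient step on squared error

print(double_explicit(4), w * 4)      # 8 and ~8.0: same behavior,
                                      # but nobody hand-wrote the logic
```

Same input/output behavior, two completely different authorship stories; the second one is the only kind of "writing" anyone does for these models.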
- Indeed. Apart from the obvious prompt-research frauds mentioned in the article, the model learned all its deceptive behaviors from the hundreds of Yudkowsky scenarios that are easily available.
It literally plagiarizes its supposed free will like a good IP laundromat.
- We're hardwired to fear the rustle in the grass, and successful infrastructure gets backgrounded. Pain is a signal. How much time do you spend contemplating your skeleton outside of pain-related skeletal events?
by dclowd9901
0 subcomments
- My favorite part of this article was this bit, and naturally so, since I love the author:
> Where did we come up with this caricature of AI’s obsessive rationality? “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker, “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”
I didn't realize it until I read it here, but yes: my fear isn't really about the machine, it's about the machine that drives the machine. We already have a class of amoral beings that treat the world as expendable and are willing to burn it down for profit. We should focus on getting rid of that problem first.
- Why do we tell ourselves scary stories about anything?
by GolfPopper
2 subcomments
- Why does the uncanny valley[1] exist? (If it truly does.) What in our evolutionary history gave us a reflexive rejection of things that seem human but aren't?
1. https://en.wikipedia.org/wiki/Uncanny_valley
- We tell ourselves scary stories about everything new. Advances in electricity + medicine == FRANKENSTEIN!
by mememememememo
0 subcomments
- I already read and experience scary stories about AI. It is not some hypothetical future thing.
by vdelpuerto
0 subcomments
- The framing of "scary stories" misses something interesting: most of the actual operational fear isn't about consciousness or superintelligence — it's about systems that seem to work until they quietly don't.
by netdevphoenix
0 subcomments
- There is a very interesting book that explores why Western media generally portrays artificial intelligence negatively (Skynet) while Japanese media tends to portray it positively (Astro Boy).
by bharat1010
0 subcomments
- The point about AI companies actively hyping the danger of their own products is something I hadn't really thought about before — it's a strange kind of marketing when you think about it.
- This article would be a lot more digestible if we only had scary stories rather than actual scary data. Not a day goes by without some prompt-injection oopsie, security gotcha, deepfake, or sandbox-escape demonstration, and tbh I'm impressed, but more to the point: I don't merely suspect this is dangerous tech, I'm sure of it.
This is roughly 1995 again, and we're going to find out all over why mixing instructions and data was a spectacularly bad idea. Only now the input stream is human language, which is far more expressive than HTML or SQL ever were. So now everybody is a hacker. At least in that sense it has leveled the playing field, I guess.
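For anyone who wasn't writing code in 1995, a minimal sketch of the original version of this mistake and its eventual fix, using Python's sqlite3 with a made-up table (the analogy being that prompts currently have no equivalent of the parameterized query):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "x' OR '1'='1"  # attacker-controlled "data"

# 1995-style: data spliced into the instruction stream can rewrite
# the query's logic. The injected OR clause matches every row.
rows = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % user_input
).fetchall()
print(rows)  # [('alice',)] -- leaked despite the name not matching

# The fix: keep instructions and data in separate channels.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the input is treated purely as data
```

A prompt has no such second channel: the instructions to the model and the untrusted text it is supposed to merely process travel together in one stream of language.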
by chrisbrandow
0 subcomments
- I don’t think the fact that the robot was instructed to lie to a human and was able to do so successfully makes the story much less scary for most people.
by RivieraKid
0 subcomments
- I 100% agree with this take. I find AI completely non-scary, especially in the sense of it being some kind of conscious entity that will want to take over; I find these people almost delusional. It's a powerful tool, so it can be dangerous if used by people with bad intentions, so there's some real danger here, but my intuition is that it will be fine. The ratio between the power of people with good vs. bad intentions shouldn't change too dramatically.
The only scary part is that it could be bad for my future as a software developer. That said, I think it will be a net benefit for the average worker: the average person will work less and earn more.
by Forgeties79
0 subcomments
- LLM companies’ behavior, AI evangelists, and the investment fervor around it all are telling us the scary stories.
- It does feel like a bizarre moment, where the AI companies are deliberately trying to scare us about their own product in a bid to, I think, show the inevitability of it? Or to sell themselves as the one responsible power to constrain it?
It's very odd. "It's going to take all your jobs" is not a great selling point to the everyday public.
by SpicyLemonZest
0 subcomments
- The actual contents of this article make reasonable arguments I largely agree with. It would be very surprising for LLM-based AI systems to act as monomaniacal goal optimizers, since they're trained on human text and humans are extremely bad at goal-oriented behavior. (My goals for today include a number of work and self-maintenance tasks, and the time I'm spending here writing out an HN comment does not help achieve them at all - I suspect most people reading this comment are in the same boat.)
It's very frustrating that the magazine wrote such a dumb headline which guarantees people won't talk about the issues the article raised. Obviously non-goal-oriented systems can still have important negative effects.
by nalekberov
2 subcomments
- > “The last four years have demonstrated that AI agents can acquire the will to survive and that AIs have already learned how to lie.”
Why Harari feels an obligation to comment on everything is of course beyond me, but describing 'AI' as if it makes independent decisions to lie, forms moral judgements, etc. demonstrates either that he has zero clue how 'AI' training works or that he chooses to mislead his audience.
- Because we don't like uncertainty, and the AI future is uncertain. There are multiple high-probability scenarios.
Because we're seeing how its capabilities increase over time. I find the rate at which I now prefer going to an AI over an UpWork freelancer scary.
Because we, the people, are not in control of it. We're at the whims of whatever it and the tech bros want (technocracy).
by FatherOfCurses
1 subcomment
- We tell ourselves scary stories about AI because humanity is rife with stories where a new idea or technology has had unintended negative consequences. AI bros just care about selling their product to another company and cashing out; they have absolutely no regard for their legacy.
- I think the most insightful bit is buried in the article:
> Perhaps because this is the best advertising money can’t buy. People like Harari and others repeat these accounts like ghost stories around a campfire. The public, awed and afraid, marvels at the capabilities of AI.
And that's mostly it. PR. Publicity. Fear is good publicity if it emphasizes AI's capabilities. And people like Harari (or Gladwell) tell interesting and awe-inspiring stories that do not necessarily have much rigor or fact-checking in them. They simplify for storytelling purposes, which can result in misleading stories.
I am worried about AI, but not about superintelligent AI that will exterminate or enslave us. I'm worried about AI as a tool to concentrate wealth and power in the hands of the current amoral entrepreneurial elite. I'm not sure whether I trust ChatGPT, but I sure as hell do NOT trust Sam Altman et al.
Or, in other words, I subscribe to Ted Chiang's very apt remark about what we really fear:
> “There’s an article I love by [the sci-fi author] Ted Chiang,” Mitchell said, “where he asks: What entity adheres monomaniacally to one single goal that they will pursue at all costs even if doing so uses up all the resources of the world? A big corporation. Their single goal is to increase value for shareholders, and in pursuing that, they can destroy the world. That’s what people are modeling their AI fantasies on.” As Chiang put it in the article in The New Yorker, “Capitalism is the machine that will do whatever it takes to prevent us from turning it off.”
by KaoruAoiShiho
0 subcomments
- TLDR: Writer hasn't heard of agents.