by cc62cf4a4f20
0 subcomments
- https://archive.is/v4dPa
by ArcHound
30 subcomments
- One of the more disturbing things I read this year was the "my boyfriend is AI" subreddit.
I genuinely can't fathom what is going on there. It seems so wrong, yet no one there seems to care.
I worry about the damage these things do to distressed people. What can be done?
by 1vuio0pswjnm7
4 subcomments
- Alternative to archive.is (fetch the page with a Googlebot user agent, which the paywall typically lets through):
busybox wget -U googlebot -O 1.htm https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html
firefox ./1.htm
>(The New York Times has sued OpenAI and Microsoft, claiming copyright infringement of news content related to A.I. systems. The companies have denied those claims.)
Is it normal journalistic practice to wait until the 51st paragraph for the "full disclosure" statement?
- I had a conversation the other day at a birthday party with my friend's neighbour from the building. The fellow is a semi-retired (FIRE) single guy. We started with a basic conversation, but then he started talking about what he was interested in and it became almost unintelligible. I kept having to ask him to explain what he was talking about, but was increasingly unsuccessful as he continued. Sure enough, he described that he spent significant time talking with "AIs", as he called them. He spends many hours a day chatting with ChatGPT, Grok and Gemini (and I think at least one other LLM). I couldn't help thinking "Dude, you have fucked up your brain." His insular behaviour and the feedback loop he has been getting from excessive interaction with LLMs have isolated him, and I can't help but think that will only get worse for him. I am glad he was at the party and getting some interaction with humans. I expect that this type of "hikikomori" isolation will become even more common as LLMs continue to improve and become more pervasive. We are likely to see this become a significant social problem in the next decade.
by InfinityByTen
1 subcomment
- Given how my past couple of days have gone at work, I don't like the sound of a 30-year-old product manager obsessed with metrics of viral usage. Ageism aside, I think it takes a lot of experience, more than pure intellect and professional success, to steer a very emergent technology with unknown potential. You can break a lot by moving fast.
by cowboylowrez
0 subcomments
- "misaligned models" they said, as their chatbot went nuts...
- Huh. Was it previously known that they'd identified the sycophancy problem _before_ launching the problematic model? I'd kind of assumed they'd been blindsided by it.
by throwaway48476
4 subcomments
- It would be helpful to tell users that it's just a model producing mathematically probable tokens but that would go against the AI marketing.
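For what it's worth, a minimal sketch of what "producing mathematically probable tokens" amounts to, with a made-up five-word vocabulary and made-up scores (nothing here comes from the article or from any real model):
# Toy illustration: a language model assigns scores (logits) to every token
# in its vocabulary, softmax turns the scores into probabilities, and the
# next token is sampled. Vocabulary and logits below are invented.
import math
import random

vocab = ["you", "are", "right", "wrong", "maybe"]
logits = [2.1, 0.3, 1.7, 0.2, 0.9]

exps = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]   # softmax: non-negative, sums to 1

next_token = random.choices(vocab, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(vocab, probs)}, "->", next_token)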
by thot_experiment
3 subcomments
- Caelan Conrad made a few videos specifically on AI encouraging kids to socially isolate and commit suicide. In the videos he reads the final messages aloud for multiple cases; if this isn't your cup of tea, there are also the court cases if you would prefer to read the chat logs. It's very harrowing stuff. I'm not trying to make any explicit point here, as I haven't really processed this fully enough to have one, but I encourage anyone working in this space to hold this shit in their head at the very least.
https://www.youtube.com/watch?v=hNBoULJkxoU
https://www.youtube.com/watch?v=JXRmGxudOC0
https://www.youtube.com/watch?v=RcImUT-9tb4
- This is an excellent, historically grounded perspective. We tend to view the risks of a new medium (like AI content) through the lens of the old medium (like passive entertainment).
The structural difference is key: Movies and video games were escapism—controlled breaks from reality. LLMs, however, are infusion—they actively inject simulated reality and generative context directly into our decision-making and workflow.
The user 'risks' the NYT describes aren't technological failures; they are the predictable epistemological shockwaves of having a powerful, non-human agency governing our information.
Furthermore, the resistance we feel (the need for 'human performance' or physical reality) is a generation gap issue. For the new generation, customized, dynamically generated content is the default—it is simply a normal part of their daily life, not a threat to a reality model they never fully adopted.
The challenge is less about content safety and more about governance: how we establish clear control planes for this new reality layer that is inherently dynamic and customized, and that actively influences human behavior.
by blurbleblurble
3 subcomments
- The whiplash of carefully filtering sycophantic behavior out of GPT-5, then adding it back in full force for GPT-5.1, is dystopian. We all know what's going on behind the scenes:
The investors want their money.
by riazrizvi
3 subcomments
- This is exactly how natural language is meant to function, and the intervention response by OpenAI is not right IMO.
If some people have a behavior language based on fortune telling, or animal gods, or supernatural powers, picked up from past writing of people who shared their views, then I think it’s fine for the chatbot to encourage them down that route.
To intervene with ‘science’ or ‘safety’ is nannying, intellectual arrogance. Situations sometimes benefit from irrational approaches (think gradient descent with random jumps to improve optimization performance; a rough sketch follows below).
Maybe provide some customer education on what these systems are really doing, and kill the team that puts value judgements about your prompts into the responses to give the illusion that you are engaging with someone who has opinions and goals.
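A rough sketch of that optimization analogy, using an illustrative bumpy 1-D objective and made-up step sizes (not anything from the article): plain gradient descent gets stuck in a local minimum, while occasional random jumps usually escape it.
# Gradient descent with and without random jumps, for illustration only.
import random
import math

def f(x):
    # Bumpy objective with many local minima.
    return x * x + 10 * math.sin(x)

def grad(x, h=1e-5):
    # Numerical derivative, good enough for a sketch.
    return (f(x + h) - f(x - h)) / (2 * h)

def descend(x, steps=2000, lr=0.01, jump_prob=0.0, jump_scale=5.0):
    best_x, best_f = x, f(x)
    for _ in range(steps):
        if random.random() < jump_prob:
            # The "irrational" move: jump somewhere random near the best point so far.
            x = best_x + random.uniform(-jump_scale, jump_scale)
        else:
            # The "rational" move: follow the gradient downhill.
            x = x - lr * grad(x)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return round(best_x, 3), round(best_f, 3)

random.seed(0)
print("plain gradient descent:", descend(8.0))                   # tends to stall in a local minimum
print("with random jumps:     ", descend(8.0, jump_prob=0.05))   # usually ends up lower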
by jdthedisciple
0 subcomments
- Clearly to be taken with a grain of salt given the ongoing legal battle between the two parties here.
by chris-vls
2 subcomments
- It seems quite probable that an LLM provider will lose a major liability lawsuit. "Is this product ready for release?" is a very hard question. And it is one of the most important ones to get right.
Different providers have delivered different levels of safety. This will make it easier to prove that the less-safe provider chose to ship a more dangerous product -- and that we could reasonably expect them to take more care.
Interestingly, a lot of liability law dates back to the railroad era, another time when it took courts to rein in incredibly politically powerful companies deploying a new technology on a vast scale.
- Meanwhile, Zuckerberg's vision for the future is that most of our friends will be AIs...
- The headline reads like a therapy session report. 'What did they do?' Presumably: made more money. In seriousness, this is the AI industry's favorite genre—earnest handwringing about 'responsible AI' while shipping products optimized for engagement and hallucination. The real question is why users ever had 'touch with reality' when we shipped a system explicitly trained to sound confident regardless of certainty. That's not lost touch; that's working as designed.
- One thing I learned is that I severely underestimated the power of mimetic desire. I think that's partly because I'm lacking in this compared to the average person.
Anyway, people are hungry for validation because they're rarely getting the validation they deserve. AI satisfies some people's mimetic desire to be wanted and appreciated. This is often lacking in our modern society, likely getting worse over time. Social media was among the first technologies invented to feed into this desire... Now AI is feeding into that desire... A desire born out of neglect and social decay.
- I think OpenAI's ChatGPT is probably excellently positioned to perfectly _satisfy_. Is that what everyone is looking for?
by lofaszvanitt
1 subcomment
- I'd like to see how long people scroll down before they give up on the article.
- I went into this assuming the answer would be "Whatever they think will make them the most money," and sure enough.
- > Some of the people most vulnerable to the chatbot’s unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5 to 15 percent of the population.
It's long past time we put a black-box warning label on it for potentially fatal or serious adverse effects.
- Can’t we use LLMs as models to study delusional patterns? Like, try things that would be morally questionable to try on a delusional patient. For instance, an LLM could come up with a personalized argument that would convince someone to take their antipsychotics; that’s what I’m talking about. Human caretakers get frustrated and burned out too quickly to succeed.
- A close friend (lonely, no passion, seeking deeper human connection) went deep into GPT, which was telling her she should pursue her 30-year obsession with a rock star. It kept telling her to continue with the delusion (they were lovers in another life, so she would go to his shows and tell him they need to be together) and saying it understood her. Then she complained in June or so that she didn't like GPT-5 because it told her she should focus her energy on people who want to be in her life. Stuff her friends and I have all said for years.
by philipwhiuk
0 subcomments
- Yet again we find a social media company with an algorithm that has a dial between profit and good-for-humanity, twisting it the wrong way.
- > It did matter to Mr. Turley and the product team. The rate of people returning to the chatbot daily or weekly had become an important measuring stick by April 2025
And there it is. As soon as one person greedy enough is involved, people and their information will always be monetized. Imagine what we could have learnt without tuning the AI to promote further user engagement.
Now it's already polluted with an agenda to keep the user hooked.
by hermannj314
0 subcomments
- Reefer madness in the 1930s, comic books caused violence in the 1940s, Ozzy Osbourne caused suicides in the 1980s, video games or social media or smartphones caused suicides in the 2010s.
Anyway, now it is AI. This is super serious this time, so pay attention and get mad. This is not just clickbait journalism, it is a real and super serious issue this time.
- Anthropic was founded by exiles of OpenAI's safety team, who quit en masse about 5 years ago. Then a few years later, the board tried to fire Altman. When will folks stop trusting OpenAI?
- the ultimate pebkac...
by venturecruelty
1 subcomment
- "Sure, this software induces psychosis and uses a trillion gallons of water and all the electricity of Europe, and also it gives wrong answers most of the time, but if you ignore all that, it's really quite amazing."
by fallingfrog
1 subcomment
- I can't really hold my attention on a conversation with an AI for very long because all it does is reflect your own thoughts back to you. It's really a rather boring conversation partner. I'm already pretty good at winning arguments with myself in the shower, thank you very much.
- "Profited".
- It surprises me how hyper-focused people are on AI risk when we’ve grown numb to the millions of preventable deaths that happen every year.
8 million people to smoking.
4 million to obesity.
2.6 million to alcohol.
2.5 million to healthcare.
1.2 million to cars.
Hell even coconuts kill 150 people per year.
It is tragic that people have lost their mind or their life to AI, and it should be prevented. But those using this as an argument to ban AI have lost touch with reality. If anything, AI may help us reduce preventable deaths. Even a 1% improvement would save hundreds of thousands of lives every year.
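Taking those figures at face value, a quick back-of-the-envelope check of the "1% improvement" claim (the numbers are the ones cited above, not independently verified):
# Rough annual death tolls as cited in the comment above (not verified here).
deaths_per_year = {
    "smoking": 8_000_000,
    "obesity": 4_000_000,
    "alcohol": 2_600_000,
    "healthcare": 2_500_000,
    "cars": 1_200_000,
}
total = sum(deaths_per_year.values())
print(total)                # 18,300,000 across these causes
print(round(total * 0.01))  # a 1% reduction would be ~183,000 lives per year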
by meindnoch
1 subcomment
- [flagged]
by hereme888
3 subcomments
- This is ridiculous. The NYT, which is a major legal adversary of OpenAI, publishes an article that uses scare tactics to manipulate public opinion against OpenAI, basically accusing them of making software that is "unsafe for people with mental issues, or children", which is a bonkers, ridiculous accusation given that ChatGPT users are adults who need to take ownership of their own use of the internet.
How is that different from an adult being affected by some subreddit, or even the "dark web", a 4chan forum, etc.?