by hamdingers
6 subcomments
- I wonder to what extent 4/4o is the culprit, vs it simply being the default model when many of these people were forming their "relationships."
by satvikpendem
7 subcomments
- How is this specific to 4o? This can happen with any model. See how people acted after Character.AI essentially removed their AI "partners" after a server reset. They had actually used DeepSeek before, which didn't have the same limitations as American models; being open weight especially means you can fine-tune it to be as lovey-dovey as your heart desires.
- What does it look like when some intentional effort is made by society to help people like this get what they are using these models to get, but in a healthy way? That is: how does society reconfigure itself so that people do not end up so lonely and desperate that an AI model solves an emotional problem which is otherwise hopelessly unsolved?
It is not "they go to therapy", because that's cheating; that answers the question "what can they do?", not "what can society do?" (and I think it's a highly speculative answer anyway).
- Blaming the 4o model for people forming an unhealthy parasocial relationship with a chatbot is just as dangerous as letting the model stay online.
It treats the underlying problem as solved, without asking why, and what drove people to do this in the first place.
That is the conversation we should be having, not which model is currently the most sycophantic. Soon the open models will catch up, and then you will be able to self-host your own boyfriend/girlfriend, and this time there won't be any feedback loop to keep it in check.
- I've noticed that LLMs like to write code, and anytime an "AI feature" is needed they default heavily to `gpt-4o` as kind of the "hello world" of models. It was a good model when it came out, and a lot of people started building on it, which left the training data saturated with it.
My AGENTS.md has:
> You MUST use a modern but cost-effective LLM such as `qwen3-8b` when you need structured output or tool support.
The reality is that almost all LLMs have quirks, and each provider tries its best to smooth them over, but you might still start seeing things specific to OpenAI or the `gpt-4o` model in the code. IMO the last thing you want to be doing in 2026 is paying higher costs for an outdated model kept on life support, one that needs special tweaks that won't be relevant once it gets the axe.
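To make that concrete, here's a minimal sketch of pinning a cheaper model behind an OpenAI-compatible endpoint; the base URL, env var names, and the exact `qwen3-8b` model id are placeholders that depend on your provider or self-hosted server:

```python
import os

from openai import OpenAI

# Point the standard OpenAI client at any OpenAI-compatible server
# (vLLM, a hosted provider, etc.); these values are placeholders.
client = OpenAI(
    base_url=os.environ.get("LLM_BASE_URL", "http://localhost:8000/v1"),
    api_key=os.environ.get("LLM_API_KEY", "not-needed-for-local"),
)

resp = client.chat.completions.create(
    model="qwen3-8b",  # modern, cost-effective; not the gpt-4o default
    messages=[
        {"role": "user", "content": "List three EU capitals as a JSON array."},
    ],
    response_format={"type": "json_object"},  # structured output
)
print(resp.choices[0].message.content)
```

The point is that the model id lives in one obvious place instead of being hardcoded as `gpt-4o` throughout generated code.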
by satvikpendem
1 subcomments
- Her was prescient; it just underestimated how quickly its dystopia would arrive.
- I dunno.
I've been reading a lot of "screw 'em" comments re: the deprecation of 4o, and I agree there are some serious cases of AI psychosis going on among the people who are hooked, but damn, this is pretty cold: these are humans with real feelings and real emotions. Someone on X put it well (I'm paraphrasing):
OpenAI gave these people an unregulated experimental psychiatric drug in the form of an AI companion, they got people absolutely hooked (for better or for worse), and now OpenAI is taking it away. That's going to cause some distress.
We should all have some empathy for the (very real) pain this is causing, whether it's due to psychosis or otherwise.
- I'm partially fascinated by their reliance on this model. I do miss the models before GPT-5.
OpenAI is quietly locking it away in some vault, and we just have to accept whatever model is current.
I think I can sympathize with these people on only one merit, and that is nostalgia and entertainment.
I still load up old versions of software. I still watch old shows. I still play old video games. Even under the lens of entertainment, though, I will never be able to be entertained by the objectively worse models.
Old chats are kind of still there, but not really: the UI is obviously different, and they will probably get deleted when I stop paying for the subscription and try to claw back some of my life from chatting with these stupid models.
It's dangerous to hold any meaningful memory with these cloud LLMs. Not to mention the social media traps people fell for, which I was proactively avoiding. Some part of me did get attached to gpt-4o; I quickly realized it and moved away from it.
This post is a mixture of complex emotions, but it is just what I felt like posting. It's fine to ridicule people for wanting to be that deeply attached, but these cloud LLMs show how easy it is to form a social habit and lose it in an instant. We need more of a healthcare push to prevent (and treat) social attachment to LLMs.
- Most of the tweets and examples in the article are likely bots/fake content. The future of the internet is so dire.
- Ah, the 4o, the first beer bottle for humans. https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...
- It just occurred to me how different the emotional landscapes of people are. While I do not want to turn this into a sexist rant, I did observe this trait particularly in women (not all of them, mind you): how much they crave strong positive feedback.
This was something that I figured out with my first gf, and had never seen written down or talked about before: when I praised her she became happy, and the more superlative and excessive the praise got, the happier she became; calling her gorgeous, the most wonderful person in the world, made her overjoyed.
To be clear, I did love her and found her very attractive, but overstating my feelings for her felt like coming close to lying and emotional manipulation, which I'm still not comfortable with today. But she loved it, and I kept doing it because it made her happy.
Needless to say we didn't stick together (for teen reasons, not these reasons), but later in life I did notice that a lot of women respond very positively to this kind of attention and affection, and I still get some flak from the missus for apparently not being romantic enough.
Maybe I am overthinking this, or maybe I am emotionally flatter than I should be, but finding such a powerful emotional lever in another human being feels like finding a loaded gun on the table.
- Here's the related subreddit: https://www.reddit.com/r/MyBoyfriendIsAI/
by recursive
1 subcomments
- It will be back. Maybe under another name or brand. There's clearly a demand for this kind of fake friendship. As models, hardware, and training improve, those that want to will be able to run this kind of thing offline. OpenAI won't be able to gatekeep it. Or perhaps another less scrupulous provider will step in. The problem here seems to be more like an unpatched vulnerability in humans. Kind of like opioid dependency.
- I wonder how much of this is commentary on how easy it is to chat with an AI whenever you want, and how much is commentary on how hard it can be to be sociable, succeed socially, and make friends. What might it mean that an AI is more attractive and easier to "befriend" or "be in a relationship with" than an actual person, both in regard to the qualities of the AI and those of the people it outperforms?
by charcircuit
2 subcomments
- 4o is still available via the API. Business users do not want the models they are using to be ripped out from underneath them.
>exploited until the legal pressure piled up
Being given access to a relationship is not exploitation. In some ways AI relationships are better than human ones.
- It appears that only the 4o text interface has been removed.
Advanced Voice Mode is still branded as 4o, although it has been gradually evolving over the past few months.
I suspect that voice mode is what most users are actually attached to.
- I wish Azure would provide access to gpt-5.x models in the EU Data Zone... Stuck on 4.x.
Also, I don't see any of the big cloud providers (apart from Azure) saying they are bound by professional secrecy acts (e.g. §203 in Germany).
- I would prefer to have the option to keep using 4o, or whatever lite version of ChatGPT, but WITHOUT ANY POPUPS about limits.
- I'm completely out of the loop on this, why are people so angry about this?
- Life reads a lot like satire now.
Loving AI bots.
Killing yourself based on what an AI bot says.
It's hard to believe any of this is real, or should be.
- Computing has made intimate sexual relationships worse.
Dating apps are skewed: men receive little attention while women have an overwhelming amount of attention.
Porn satisfies our most base sexual functions while abandoning truly intimate connections.
The ultimate goal of sexual unions, children, has been demonized and turned into something to avoid. After-school specials since the '80s have made pregnancy a horror to avoid instead of a joy to grasp.
AI is just the latest iteration of technology increasing the divide between the sexes.
When the clankers come, we're fucked.
- I think there is a lot of value in a model that mimics your behavior, so your partner or anyone else can message "you" at any time of the day.
Work, sleep, socialize: you can only do two. With the help of AI, you could talk to people as much as you want without wasting their time.
by fellowniusmonk
1 subcomments
- I spent a lot of time on philosophy and religion when I was younger, a lot of time, focus and money, and man...
I read these posts and feel sad for these people, and it makes me realize, now as an older guy, how much more I value learning how to skateboard, run a committee, write code, run a business, or any time I spent investigating the real world.
Life is short, and these people are getting emotionally nerd-sniped and dumped into thought loops that have no resolution, no termination point, no interconnectedness with physical reality, and no real growth. It's worse than any video game that can be beaten.
Maybe that's all uncharitable. I remember, when I was a child, people around me in the academic religious circles my parents ran talking about how "engineers" lacked imagination and could never push human progress forward. Now, decades later, I see those people have at most written papers on already-dead niche flights of fancy, where even in their own imaginary field their work is relegated. I know what they did isn't "nothing", but man... it's a lot of work for a bunch of paper in a trashcan no one even cites.
- One of the things about models progressing to new ones is that prompting skills often have to evolve with them.