I don't quite understand why other people seem to crave that. Every time I read about someone who has gone down a dark road using LLMs I am constantly amazed at how much they "fall" for the LLM, often believing it's sentient. It's just a box of numbers, really cool numbers, with really cool math, that can do really cool things, but still just numbers.
It's not about the big confirmations. Most of us catch that andd are reasonably good at it.
It's the subtle continuous colour the "conversations" have.
It's the Reddit echo chamber problem on steroids.
You have a comforting affirming niche right in your pocket.
Every anxiety, every worry, every uncertain thought.
Vomitted to a faceless (for now)"intelligence" and regurgitated with an air of certainty.
Will people have time to ponder at all going forwards?
Short of clearing context, it is difficult to escape from this situation, and worse, the tendency for the model to put explanatory comments in code and writing means that it often writes code, or presents data, that is correct, but then attaches completely bogus scientific babbling to it, which, if not removed, can infect cleared contexts.
It's literally that easy, something anyone can think of, but people want what they want.
It's really nothing new. It takes significant mental energy (a finite resource) to question what you're being told, and to do your own fact checking. Instead people by default gravitate towards echo chambers where they can feel good about being a part of a group bigger than themselves, and can spend their limited energy towards what really matters in their lives.
Eventually she realized that it’s just a probabilistic machine and stopped using it for “therapy.” It’s just insane to think how many other people might be making decisions about their relationship from an AI.
The study explores outdated models, GPT-4o was notoriously sycophantic and GPT-5 was specifically trained to minimize sycophancy, from GPT-5's announcement:
>We’ve made significant advances in reducing hallucinations, improving instruction following, and minimizing sycophancy
And the whole drama in August 2025 when people complained GPT-5 was "colder" and "lacked personality" (= less sycophantic) compared to GPT-4o
It would be interesting to study evolution of sycophantic tendencies (decrease/increase) in models from version to version, i.e. if companies are actually doing anything about it
related: if you suggest a hypothesis then you'll get biased results (iow, you'll think you're right, but the true information is hidden)
Exploring openclaw though so maybe that change
And, tbh, I often try to remember to do the same.
(comment copied from the sibling thread; maybe they will get merged…)
So these tools can be useful when you know the subject matter. I've done queries and gotten objectively false answers. You really need to verify the information you get back. It's like these LLMs have no concept of true or false. They just say something that statistically looks right after ingesting Reddit. We've already seen cases of where ChatGPT legal briefs filed by actual lawyers include precedents that are completely made up eg [2].
There's a really interesting incentive in all this. People like to be told they're right and generally be gassed up, even when they're completely wrong. So if you just optimize for engagement and continued queries and subscriptions, you're just going to get a bunch of "yes men" AIs.
I still think this technology has so far to go. I'm somewhat reminded of Uber actually. Uber was burning VC cash at a horrific rate and was basically betting the company (initially) on self-driving. Full self-driving is still far away even though there are useful things cars can automate like lane-following on the highway and parking.
I simply can't see how the trillions spent on AI data centers can possibly be recouped.
[1]: https://www.tiktok.com/@huskistaken/video/762093124158341455...
[2]: https://www.theguardian.com/us-news/2025/may/31/utah-lawyer-...
The thing is an approximation function, not intelligent, so it is hard to get a middle ground. Many clankers are amazingly obnoxious after their initial release.
Grok-4.2 and the initial Google clanker were both highly dismissive of users and they have been tuned to fix that.
A combative clanker is almost unusable. Clankers only have one real purpose: Information retrieval and speculation, and for that domain a polite clanker is way better.
Anyone who uses generative, advisory or support features is severely misguided.
I say "I think you are getting me to chase a guess, are you guessing?"
90% of the time it says "Yes, honestly I am. Let me think more carefully."
That was a copypasta from a chat just this morning
https://courts.delaware.gov/Opinions/Download.aspx?id=392880
> Meanwhile, Kim sought ChatGPT’s counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a “Response Strategy to a ‘No-Deal’ Scenario,” which Kim shared with Yoon. The strategy included a “pressure and leverage package” and an “implementation roadmap by scenario.”
It's not news at all for anyone who actually engage with the people.
Anyway no real surprise, we have many examples of people ignoring facts and moving to media that support their views, even when their views are completely wrong. Why should AI be different.
The problem is: flattery is often just like the cake. And the cake is a lie. Translation: people should improve their own intrinsic qualities and abilities. In theory AI can help here (I saw it used by good programmers too) but in practice to me it seems as if there is always a trade-off here. AI also influences how people think, and while some can reason that it can improve some things (it may be true), I would argue that it over-emphasises or even tries to ignore and mitigate negative aspects of AI. Nonetheless a focus on quality would be an objective basis for a discussion, e. g. whether your code improved with help of AI, as opposed to when you did not use AI. You'd still have to show comparable data points, e. g. even without AI, to compare it with yourself being trained by AI, to when you yourself train yourself. Aka like having a mentor - in one case it being AI; in the other case your own strategies to train yourself and improve. I would still reason that people may be better off without AI actually. But one has to improve nonetheless, that's a basic requirement in both situations.
Used to be only the wealthiest students could afford to pay someone else to write their essay homework for them. Now everyone can use ChatGPT.
Used to be you had to be a Trumpian-millionaire/Elonian-billionaire to afford an army of Yes-men to agree with your every idea. Now anyone can have that!