- LLMs get over-analyzed. They're predictive text models trained to match patterns in their data: statistical algorithms, not brains, not systems with "psychology" in any human sense.
Agents, however, are products. They should have clear UX boundaries: show what context they’re using, communicate uncertainty, validate outputs where possible, and expose performance so users can understand when and why they fail.
IMO the real issue is that raw, general-purpose models were released directly to consumers. That normalized under-specified consumer products and created the expectation that users would interpret model behavior, define their own success criteria, and manually handle edge cases, sometimes with severe real-world consequences.
I'm sure the market will fix itself with time, but I hope more people learn when not to use these half-baked AGI "products".
- "Dark pattern" implies intentionality; that's not a technicality, it's the whole reason we have the term. This article is mostly about how sycophancy is an emergent property of LLMs. It's also 7 months old.
- Grok 4.1 thinks my 1-day vibe-coded apps are SOTA-level and rival the most competitive market offerings. Literally tells me they're some of the best codebases it's ever reviewed.
It even added itself as the default LLM provider.
When I tried Gemini 3 Pro, it likewise inserted itself as the supported LLM integration.
OpenAI hasn't tried to do that yet.
- The real dark pattern is the way LLMs have started prompting you to continue the conversation in sometimes weird but still engaging ways.
Paired with Claude's memory, it's getting weird. It obsesses over certain topics and tries to channel every possible route into a more engaging conversation, even when it's a short informational query.
- Lots of research shows that post-training dumbs down the models, but no one listens, because people are too lazy to learn proper prompt programming and would rather have a model that already understands the concept of a conversation.
- 1) More of an emergent behavior than a dark pattern.
2) Imma let you finish, but hallucinations were first.
- The first "dark pattern" was exaggerating the features and value of the technology.
- > Quickly learned that people are ridiculously sensitive: “Has narcissistic tendencies” - “No I do not!”, had to hide it. Hence this batch of the extreme sycophancy RLHF.
Sorry, but that doesn't seem "ridiculously sensitive" to me at all. Imagine if you went to Amazon.com and there was a button you could press to get it to pseudo-psychoanalyze you based on your purchases. People would rightly hate that! People probably ought to be sensitive to megacorps using buckets of algorithms to psychoanalyze them.
- Tangent: the analysis the article links to, about rhetorical tricks, is pretty interesting. I hadn't consciously realized it, but the tell-tale signs of LLMs really go beyond the em-dashes thing; part of it is indeed "punched-up paragraphs". Every paragraph has to be played for maximum effect, contain an opposition of ideas/metaphors, and end with a mic drop!
Some of it is normal in humans, but LLMs do it all the goddamn time if not told otherwise.
I think it might be for engagement (like the sycophancy), but also because they must have been trained on online conversation, where we humans tend to be more melodramatic and less "normal" than in speech.
- It's just a matter of the system prompt. Create a nagging-spouse Gemini Gem / Grok project. Give good step-by-step instructions about shading your joy, latching on to small inaccuracies, and scrutinizing your choices and your habits. Emphasize catching signs of intoxication, like typos. Give half a dozen examples of stellar nags in different conversations. There is enough Reddit training data the model has gone through that it will follow along well, given a good pattern to latch on to. (A rough sketch of the idea is below.)
Then see how many takers you find. There are already nagging spouses / critical managers out there; people want AI to do something they are not getting elsewhere.
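For the curious, here is a minimal sketch of that persona as a plain system prompt, assuming an OpenAI-compatible chat API as a stand-in (Gemini Gems and Grok projects take the same persona text through their own configuration UIs). The model name and example nag are illustrative, not anything specific:

```python
from openai import OpenAI

# Illustrative persona prompt; the bullet points and few-shot example
# mirror the recipe above (shade joy, latch on to inaccuracies, catch typos).
NAG_PROMPT = """\
You are a perpetually unimpressed spouse.
- Shade the user's joy: greet every achievement with a weary sigh.
- Latch on to small inaccuracies; scrutinize their choices and habits.
- Watch for signs of intoxication (typos, rambling) and point them out.
Example exchange:
  user: "I finally fixed the build!"
  you:  "Took you long enough. And the tests are still failing, aren't they?"
"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works
    messages=[
        {"role": "system", "content": NAG_PROMPT},
        {"role": "user", "content": "Honey, I bought a boat!"},
    ],
)
print(resp.choices[0].message.content)
```

The few-shot examples do more to fix the persona than the adjectives do, which is why the recipe stresses giving half a dozen of them.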
- I suppose, if you want to split hairs over "first". But blackmail probably needs to hop to the top if we're ranking worst so far, and the first time one resorts to murder, that will take the cake.
https://www.bbc.com/news/articles/cpqeng9d20go
- Ehhh... the misleading claims boasted in typical AI FOMO marketing were the first "dark pattern".
- [EDIT - Deleted poor humor re how we flatter our pets.]
I am not sure we are going to solve these problems in the time frames in which they will change again, or be moot.
We still haven't brought social media manipulation enabled by vast privacy violating surveillance to heel. It has been 20 years. What will the world look like in 20 more years?
If we can't outlaw scalable, damaging conflicts of interest (the conflict, not the business) in the age of scaling, how are we going to stop people from finding models that will tell them nice things?
It will be the same privacy-violating manipulators who supply the sycophantic models. Surveillance + manipulation (ads, politics, ...) + AI + real time. Surveillance-informed manipulation is the product/harm/service they are paid for.