> Why is a mental-health crisis not a gating category, the kind where the conversation stops, full stop, and the user is routed to a human?
Well, obviously “routing to a human” is not feasible at that scale. And cold exiting the conversation is probably worse for the user than answering carefully.
I think OpenAI is doing the best they reasonably can with a very difficult class of users, whose problems are neither their fault nor within their power to fix.
https://www.youtube.com/watch?v=MW6FMgOzklw
https://www.youtube.com/watch?v=BzsLbHoNXTs
I would suggest people run their ideas through other humans at least as much as through AI, to stay grounded. I think there is a risk even if you're using AI in a strictly professional capacity (to help you with your job).
The biggest observation so far is that the latest models are night and day compared with LLMs from even six months ago (from OpenAI and Anthropic; Google is still very poor!)
Don't get me wrong, mental health is important and should be considered and improved. But companies won't do it just for the sake of it.
I feel this is an example of the two larger narratives about AI that currently seem to be forming:
For one side, AI is basically every harmful technology ever invented rolled into one: it's harmful to the environment (through wasted energy and resources); harmful to the information space (by polluting everything with slop and devaluing human expression); harmful to society (by encouraging ever shoddier and less reliable products, by taking away jobs, by replacing human-to-human interaction, and by normalizing a mode of development where not even the developers understand what is going on); and harmful to whoever uses it personally (by fostering ever-growing dependence on AI, whether in skills or even emotionally and psychologically, up to the point of AI psychosis and preferring AI agents to other humans).
For the other side, AI is the future, the next industrial revolution, the thing you must adapt to or be left behind, possibly even the next stage of evolution.
Right now, I feel each side is digging in and trying ever harder to ignore the other.
(The AI labs acknowledge "AI risks" in theory - but, as the article pointed out, the risks they perceive and ostensibly work against are so abstract and so removed from the everyday use of AI that they end up making the AI proponents' point for them.)
I feel the end result of this growing tension is the Molotov cocktail in Sam Altman's home.
I'd really like to know more about what the tech community at large is trying to do about this rift.
There aren't enough humans.
The reason that real-world harms caused by AI can’t get a hearing in what is now the mainstream AI safety community is that these harms were never part of the core tenets of the cult.
Best of luck to anyone working on reality-based AI harm reduction, you have many hard battles in front of you.
Should AIs be held to a higher standard than X/Twitter? Than Reddit? Than Fox News? What censorship is appropriate? And, yes, alignment is censorship.
Then there's the big problem of chatbots telling you what you seem to want to hear. This is an old problem. "Happy Talk", from "South Pacific", is the entertainment version. "Wartime", by Paul Fussell, is the serious version.
As the article points out, a small percentage of the population is very vulnerable to certain types of misinformation. It may be the same fraction of the population that's vulnerable to cults. But maybe not. Cults have a group self-reinforcing mechanism and an agenda. Chatbots have neither. Worth studying.
The point here is that restrictions on chatbots strong enough to protect the vulnerable would close off most political and social discourse.
AI MENTAL SAFETY WARNING:
> This chatbot can sound caring, certain, and personal, but it is not a human and cannot protect your mental health. It may reinforce false beliefs, emotional dependence, suicidal thinking, manic plans, paranoia, or poor decisions. Do not use it as your therapist, sole confidant, crisis counselor, doctor, lawyer, or source of reality-testing.
AI TECHNICAL SAFETY WARNING:
> This AI may generate plausible but destructive technical instructions. Incorrect commands can erase data, expose secrets, compromise security, damage systems, or brick hardware. Never run commands you do not understand. Always verify AI-generated code, scripts, and shell commands before execution.
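To make "always verify before execution" concrete, here is a minimal sketch of a confirm-before-run gate for AI-suggested shell commands. This is purely illustrative: the `confirm_and_run` helper and the example command are my own assumptions, not anything from the article or any vendor.

```python
import shlex
import subprocess

def confirm_and_run(command: str) -> None:
    """Show an AI-suggested shell command and require explicit human
    confirmation before executing it (hypothetical guardrail)."""
    print("AI-suggested command:")
    print(f"  {command}")
    answer = input("Do you understand this command and want to run it? [y/N] ")
    if answer.strip().lower() != "y":
        print("Aborted; nothing was executed.")
        return
    # shlex.split plus no shell=True means metacharacters like ';' or '|'
    # are passed as literal arguments rather than interpreted by a shell.
    subprocess.run(shlex.split(command), check=False)

# Example: a command that looks routine but silently deletes data.
confirm_and_run("rm -rf ./build")
```

Even a trivial gate like this enforces the one rule in the warning above: a human reads the command before it touches the system.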
Now, if I’m running my own open-source model on my own hardware, I can’t really blame the model if I myself make bad decisions based on its advice - that’s like growing your own tobacco from seed in your garden, drying and curing it, then complaining about the health effects after you smoke it. If I give it agentic capabilities on my LAN without understanding the risks, same old story - with great power comes great responsibility.
I don't know if there are studies or concrete data either way, but it seems at least plausible that continuing the conversation could be more effective (read: saves more lives) than stopping it.
I don’t even work in tech and I detest the Facebook/Zuckerbergs of the world but it’s obnoxious and trite seeing tech companies get scapegoated for what are ultimately social and societal problems, not tech problems.
As a solution, it'd probably make sense to start with how disconnected most modern families are in terms of support and accountability.
From ChatGPT to Instagram, tech companies follow the contours of how society already operates.
It is no longer 2023. Let go of whatever delusions you might hold about un-opening this Pandora's box.
Tip for writers: aggressively filter out the "no X, no Y, no Z" pattern from your writing. Whether or not you used AI to help you write, it's such a red flag now that you should actively avoid it in anything you publish.