- If an app makes a diagnosis or a recommendation based on health data, that's Software as a Medical Device (SaMD) and it opens up a world of liability.
https://www.fda.gov/medical-devices/digital-health-center-ex...
- Not surprised. Another example is Minecraft-related queries. I'm searching with the intention of eventually landing on a certain wiki page at minecraft.wiki, but I started just reading the summaries instead. It combines fan forums discussing desired features/ideas with the actual game bible at minecraft.wiki - so it mixes one source of truth with one source of fantasy. The result is ridiculously inaccurate summaries.
- "Dangerous and Alarming" - it tough; healthcare is needs disruption but unlike many places to target for disruption, the risk is life and death. It strikes me that healthcare is a space to focus on human in the loop applications and massively increasing the productivity of humans, before replacing them...
https://deadstack.net/cluster/google-removes-ai-overviews-fo...
- Good. I typed in a search for some medication I was taking and Google's "AI" summary was bordering on criminal. The WebMD site had the correct info, as did the manufacturer's website. Google hallucinated a bunch of stuff about it, and I knew then that they needed to put a stop to LLMs slopping about anything to do with health or medical info.
- Google is really wrecking its brand with the search AI summaries thing, which is unbelievably bad compared to their Gemini offerings, including the free one. The continued existence of it is baffling.
- > Google … constantly measures and reviews the quality of its summaries across many different categories of information, it added.
Notice how little this sentence says about whether anything is any good.
- "unsafe at any seed" is very cleaver subtitle for this article.
- Tangent, but some people I know have been downloading their genomes from 23andme and asking Gemini via Antigravity to analyze it. "If you don't die of heart disease by 50, you'll probably live to be 100."
I wonder how accurate it is.
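For what it's worth, the raw file people are feeding in is just a flat TSV. A minimal sketch of the first half of that workflow, assuming the standard 23andMe raw-data export format ('#'-prefixed header comments, then tab-separated rsid/chromosome/position/genotype columns); the file name and SNP list are illustrative, not a medical panel:

```python
# Pull a few well-known variants out of a 23andMe raw-data export before
# handing them to an LLM. Assumes the standard export format: '#' header
# comments, then tab-separated rsid / chromosome / position / genotype.
# The SNP list and file name below are illustrative only.

SNPS_OF_INTEREST = {
    "rs429358": "APOE variant (cardiovascular/Alzheimer's risk literature)",
    "rs7412": "APOE variant (read together with rs429358)",
}

def extract_snps(path: str) -> dict[str, str]:
    """Return {rsid: genotype} for the SNPs we care about."""
    found = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#"):
                continue  # skip the export's comment header
            rsid, _chrom, _pos, genotype = line.rstrip("\n").split("\t")
            if rsid in SNPS_OF_INTEREST:
                found[rsid] = genotype
    return found

if __name__ == "__main__":
    genotypes = extract_snps("genome_raw_data.txt")  # hypothetical file name
    for rsid, genotype in genotypes.items():
        print(f"{rsid} ({SNPS_OF_INTEREST[rsid]}): {genotype}")
```

The accuracy question is really about the second half (the LLM's interpretation of those genotypes), which nothing in the file format constrains.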
- But only for some highly specific searches, when what it should be doing is checking if it's any sort of medical query and keeping the hell out of it because it can't guarantee reliability.
It's still baffling to me that the world's biggest search company has gone all-in on putting a known-unreliable summary at the top of its results.
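A toy sketch of what "keeping the hell out of it" could look like mechanically: classify the query first, and only attach a summary on a clear negative. The keyword list and placeholder summarizer are illustrative stand-ins, not Google's actual pipeline:

```python
# Gate the AI overview behind a medical-query check: any hint of a health
# query means no summary at all, rather than a per-query blocklist.
# MEDICAL_MARKERS and the placeholder summarizer are illustrative only.

MEDICAL_MARKERS = {
    "dosage", "dose", "symptom", "side effect", "medication", "treatment",
    "diagnosis", "prescription", "contraindication",
}

def looks_medical(query: str) -> bool:
    q = query.lower()
    return any(marker in q for marker in MEDICAL_MARKERS)

def maybe_summarize(query: str) -> str | None:
    """Return an AI overview, or None to keep the feature out entirely."""
    if looks_medical(query):
        return None  # err on the side of silence for anything health-adjacent
    return f"[AI overview for: {query}]"  # stand-in for the real summarizer

print(maybe_summarize("amoxicillin dosage for adults"))    # -> None
print(maybe_summarize("best hiking trails near seattle"))  # -> a summary
```

A real system would use a trained classifier rather than keywords, but the asymmetry is the point: suppressing a whole category is cheap compared to guaranteeing reliability within it.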
- This incessant, unchecked[1] peddling is what robs "AI" of the good name it could earn for the things it's good at.
But alas, infinite growth or nothing is the name of the game now.
[1] Well, not entirely unchecked, thanks to people investigating.
- ... at the same time, OpenAI launches their ChatGPT Health service: https://openai.com/index/introducing-chatgpt-health/, marketed as "a dedicated experience in ChatGPT designed for health and wellness."
So interesting to see the vastly different approaches to AI safety from all the frontier labs.
- Google for "malay people acne" or other acne-related queries. It will readily spit out the dumbest pseudo science you can find. The AI bot finds a lot of dumb shit on the internet which it serves back to you on the Google page. You can also ask it about the Kangen MLM water scam. Why do athletes drink Kangen water? "Improved Recovery Time" Sure buddy.
Also try "health benefits of circumcision"...
- It took being a meme for a literal year for them to remove this… more responsibility in frontier tech, I’m begging.
- The fact that it reached this point is further evidence that if the AI apocalypse is a possibility, common sense will not save us.
- Ars rips off this original reporting, but makes it worse by leaving out the word "some" from the title.
‘Dangerous and alarming’: Google removes some of its AI summaries after users’ health put at risk:
https://www.theguardian.com/technology/2026/jan/11/google-ai...
- How could they even offer that without a Medical Device license? Where is the FDA when it comes to enforcement?
- The AI summary is total garbage. Probably the most broken feature I've seen released in a while.
- I'm telling you all this as a medical student who has used the latest and greatest models with proper prompting for the past 3 years in school:
There are a ton of misses. Especially on imaging. LLMs are not ready for consumer-facing health information yet. My guess is ~ 3-5 years. Right now, I see systems implementing note writing with LLMs, which is hit or miss (but will rapidly improve). Physicians want 1:1 customization. Have someone sit with them and talk through how they like their notes/set it up so the LLMs produce notes like that. Notes need to be customized at the physician level.
Also, the electronic health records any AI is trained on are loaded to the brim with borderline-fraudulent copy-paste notes. That's going to have to be reconciled. Do we have the LLMs add "Cranial Nerves II-X intact" even though the physician did not actually assess that? The physician would have documented it if they had... no? But then you open the physician up to liability, which is a no-go for adopting software.
Building a SaaS MVP that's 80% of the way there? Sure. But medicine is not an MVP you cram into a pitch deck for a VC. 80% of the way there does not cut it here, especially if we're talking about consumer facing applications. Remember, the average American reads at a 6th grade reading level. Pause and let that sink in. You're probably surrounded by college educated people like yourself. It was a big shock when I started seeing patients, even though I am the first in my family to go to college. Any consumer-facing health AI tool needs to be bulletproof!!
Big Tech will not deliver us a healthcare utopia. Do not buy into their propaganda. They are leveraging post-pandemic increases in mistrust towards physicians as a springboard for half-baked solutions. Want to make $$$ doing the same thing? Do it in a different industry.
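One way to read the customization point concretely: keep the note style in a per-physician profile that gets composed into every prompt, with an explicit rule against documenting unperformed exams. A minimal sketch; the profile fields and prompt wording are hypothetical, not any vendor's API:

```python
# Per-physician note customization: the "style" lives in a profile that is
# prepended to every note-generation prompt, rather than one global template.
# PhysicianNoteProfile and build_note_prompt() are illustrative; the actual
# LLM call is omitted.

from dataclasses import dataclass, field

@dataclass
class PhysicianNoteProfile:
    name: str
    section_order: list[str] = field(
        default_factory=lambda: ["Subjective", "Objective", "Assessment", "Plan"]
    )
    style_rules: list[str] = field(default_factory=list)

def build_note_prompt(profile: PhysicianNoteProfile, transcript: str) -> str:
    rules = "\n".join(f"- {r}" for r in profile.style_rules)
    sections = ", ".join(profile.section_order)
    return (
        f"Draft a clinical note for Dr. {profile.name}.\n"
        f"Use exactly these sections, in order: {sections}.\n"
        f"Style rules:\n{rules}\n"
        f"Document ONLY findings supported by the encounter below; "
        f"never insert unperformed exam boilerplate.\n\n"
        f"Encounter transcript:\n{transcript}"
    )

profile = PhysicianNoteProfile(
    name="Rivera",
    style_rules=["Bullet points in Plan", "No copy-forward exam findings"],
)
print(build_note_prompt(profile, "Patient reports 3 days of cough..."))
```

The "sit with each physician" step above is exactly the process that would populate those profile fields; the hard part is the workflow, not the code.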
- Huh.. so Google doesn't trust its own product.. but OpenAI and Anthropic are happy to lie? lol
- Meanwhile, ChatGPT Health launched just 5 days ago (https://openai.com/index/introducing-chatgpt-health/), because clearly they both don't give a fuck AND they are so desperate for money, any money.
- Claude just added Health Connect integration for Android.
Meanwhile, Copilot launched a full bot for it:
"Dos and don’ts of medical AI
While AI is a useful tool that can help you understand medical information, it’s important to clarify what it’s designed to do (and what it isn’t).
Dos:
Use AI as a reliable guide for finding doctors and understanding care options.
Let AI act as an always available medical assistant that explains information clearly.
Use AI as a transparent, unbiased source of clinically validated health content.
Don’ts:
Don’t use AI for medical diagnosis. If you’re concerned you may have a medical issue, you should seek the help of a medical professional.
Don’t replace your doctor or primary care provider with an “AI doctor”. AI isn’t a doctor. You should always consult a professional before making any medical decisions.
This clarity is what makes Copilot safe"
https://www.microsoft.com/en-us/microsoft-copilot/for-indivi...
- ChatGPT told me I am the healthiest guy in the world, and I believe it