by samglass09
8 subcomments
- Meanwhile they are pushing AI transcription and note-taking solutions hard.
Patients are guilted into allowing the doctors to use it. I have gotten pushback when asking to have it turned off.
The messaging is that it all stays local. In reality it's not; when I last looked, it was running on Azure OpenAI in Australia.
I spoke to a practice nurse a few days ago to discuss this.
She said she didn’t think patients would care if they knew the data would be shipped off-site. She said people’s problems are not that confidential and their health data is probably online anyway, so who cares.
- The union rep gets it - people improvise when you cut their tools and then threaten discipline for improvising.
That memo is how you make staff hide things instead of asking for help.
The scarier part though is that LLM-written clinical notes probably look fine. That's the whole problem. I built a system where one AI was scoring another AI's work, and it kept giving high marks because the output read well. I had to make the scorer blind to the original coaching text before it started catching real issues. Now imagine that "reads well, isn't right" failure mode in clinical documentation.
Nobody's re-reading the phrasing until a patient outcome goes wrong.
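The blinding fix described above can be sketched in a few lines. Everything here (function name, checklist format) is illustrative and assumed, not from the commenter's actual system; the point is only that the judge's prompt withholds the source material it could be biased by, and scores the output against explicit factual claims instead of fluency:

```python
def build_judge_prompt(output_text: str, claims_to_verify: list[str]) -> str:
    """Build a scoring prompt that withholds the original source text.

    A naive judge that also sees the source tends to reward outputs that
    merely "read well"; a blind judge can only check the output against
    a concrete checklist of facts.
    """
    checklist = "\n".join(f"- {c}" for c in claims_to_verify)
    return (
        "Score the note below ONLY against the checklist. "
        "Do not reward style or fluency.\n\n"
        f"Checklist of facts that must appear correctly:\n{checklist}\n\n"
        f"Note to score:\n{output_text}\n"
    )

prompt = build_judge_prompt(
    "Patient reports mild headache for 3 days; no fever.",
    ["headache duration: 3 days", "fever: denied"],
)
assert "Patient reports" in prompt      # the output being judged is included
assert "- fever: denied" in prompt      # verifiable claims are included
```

The same idea applies regardless of which model does the judging: the failure mode is in what the judge is shown, not which judge you pick.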
- I have seen the evolution of these tools and I think they are going to push a fundamental change to medical care. Notes have been getting more and more abused, at least in the US. Big health systems want them for a lot of reasons that have nothing to do with helping a practitioner improve the care of their patient. They want to capture every billable moment of that encounter and potentially prep things like labs, appointments, clinical trial screening, pre-auths, etc. Some of this is good for the patient but a lot isn't. Also, the reality is that many practitioners spend as much or more time on the note than on the patient. That clearly isn't to their benefit. There is a reason they sit there and type constantly while talking to you, and that doesn't stop when you leave the room. The demands on them to document everything so that all the accounting can happen are actually harming healthcare.
I think there is a chance that these systems will lead to a change where the note isn't the fundamental record of the encounter. Instead, different artifacts are created specifically for each entity that needs them: billing gets its view, scheduling gets theirs, and so on. It will, hopefully, give practitioners a chance to get back to focusing on the patient rather than on ensuring their note captures one more billable code. Of course, the negative is also likely to happen here too. As practitioners spend less time on the note, they will likely not get that back as time with individual patients, but will instead spend it seeing more patients. It will also likely lead to higher bills as the health systems start squeezing more out of every encounter. There is no perfect here when profit is the driving motivator, but with this much change happening I can only hope that it shakes up the industry as a whole enough to find a new, better optimum to land in.
- Yeah, no privacy or security there. There are some tools explicitly designed to help healthcare providers produce better notes faster, and a couple of them are AMAZING. I'm an AI-glass-half-empty guy, keenly aware of its shortcomings, and I deploy it thoughtfully; even with my skepticism there are a couple of tools that are just plain great. I think using LLMs to create overviews and summaries is a great use of the tech.
by beatthatflight
2 subcomments
- I know at least one GP who has stopped using Heidi Health for transcription. He has noticed many errors (as have I, in transcriptions from my own medical professionals), far too many to be comfortable. Things might improve, but not yet.
- I'm a female doctor, and for me the hardest thing so far was watching AI shut down a discussion on harassment in healthcare in real time. Women were contributing stories to a website documenting their experiences until one claiming to be written by a male was published. All contributors were fluent, but this one hit different: lower information density (harder to get a read on the age, ethnicity, or social class of the writer) but higher emotional impact. Women usually seemed to have interrogated themselves, identifying choices they had made (joining after-work drinks, accepting a ride, being alone), but this one showed no sign of any attempt to think through how alternative versions of the account would look (e.g., a white woman contributing to a forum on racism to complain about discrimination by non-white men protected by the patriarchy; also, they were on cocaine?). The last-minute drug reference was weird (doesn't that complicate the question of who the victim is, rather than reinforce it? why cocaine, except that "crack" would sit in a field adjacent to both "abuse" and "damage", and "drug" next to "doctor"?). Other men had shared experiences, but not like this. Feed the stories into GPT and ask which is most likely AI, and it "carefully and respectfully" identifies 239 for different reasons (symmetry, etc.) and offers to rewrite it in a more human way. Nobody has contributed since.
https://www.survivinginscrubs.co.uk/your-stories/
Yeah, I'm in an industry that for no good reason still writes instructions in a language that hasn't been spoken since about AD 800. AI scribes might not change how we practice, but they are having real-world effects.
- I meant: "crack" is near "damage" but also near "drug", which is near both "abuse" and "doctor", and AI does love synonyms/homonyms. But if we are talking about racism, we know AI tools also favour medical research by native English speakers: https://www.bmj.com/content/392/bmj-2025-087581/rr-8 - so now a non-white writing "accent" can silence non-white doctors.
As a doctor navigating AI scribes (which I don't use) it feels like we are being distracted with toys while tech companies figure out how to become what pharma was to the last generation. Interested in non-medical perspectives
- I was applying recently to a role that was pretty interesting and so I wrote the email on the train on my way home, didn't have my laptop with me.
In the email I wrote out everything myself, absolutely no use of AI, but after I hit send I realised there was a pretty silly typo, nothing grave but it irked me.
Out of boredom, I decided to see whether my email would be considered AI-written, since it was probably going to go through a million filters these days. I popped it into an online checker (I don't know the quality of these, so who knows), and it told me with 75% certainty it was written by AI.
It was not at all. It was written overly hastily on a phone on public transport.
So I wonder how someone who is grammar-orientated and particular about semantics would prove otherwise.
I can see a company that needs any excuse to let people go saying, "Well, the AI says you used AI to do your work, so we're letting you go."
- FYI, AI adoption in health in NZ is moving forward, for example https://www.rnz.co.nz/news/national/589774/emergency-doctors...
This is just about not using free/public AI tools.
- AI doesn't forget, and soon all New Zealanders will have their health histories internalised by AI so it can individually calculate insurance premiums, without anyone knowing why....
- This is a blatant violation of patient privacy. That the output is often hallucinated doesn't even matter here. If the hospital wants to use LLMs, better deploy them on-premise or a trusted network at least.
by memolife23
0 subcomments
- [dead]
- [flagged]