And what converted me was direct patient response. Across the board patient feedback is extremely positive, with the most common comment being along the lines of "I really felt like the doctor connected with me better and they were more present in the visit."
These AI scribes really DO improve patient care, I've seen it with my own eyes.
Healthcare records are probably the most strongly protected personal information in the world. Remember that most of the data about you is not protected by law. Credit reports, ISP records (including your SS#), your entire email archive, Google Drive, etc could get leaked, and for the most part there's no legal consequence. But if a record of you having the flu in 3rd grade gets leaked by a 3rd party connected to health record keeping, there are real consequences (not only for the leak, but even for not reporting it).
If anything, I want everything I say to be recorded and kept on file for later reference. The danger of a speech-to-text engine transcribing incorrectly is real, but that doesn't mean I don't want the notes there. I just want the audio included with the text. Both will be useful to refer to later on, especially as STT models improve their accuracy (we've seen amazing leaps in accuracy in just one year).
However, we do need to ensure that these records are protected from government over-reach. Currently the government can request your health records, without notifying you, for a slew of reasons. This enables the government to go on a fishing expedition, doing the equivalent of an unreasonable search of private information, and you will have no notification and no way to respond. We must create laws that provide stronger privacy rights for sensitive health information to resist government overreach. Another legal hole is 3rd party apps that collect sensitive health information, but aren't provided by your doctor. Your step-tracking, heart-monitoring app is not protected by HIPAA. Same for employer health records.
However, I do think we are in a situation where everybody knows that healthcare costs need to come down, that doctors and medical professionals are spread too thin, forced to see ever more patients in the same number of hours, and yet every attempt to improve efficiency gets a "no, not that way" response.
nit: that is a real efficiency gain. seeing more patients sounds better on the face of it.
And the privacy/informed-consent concerns here are silly; they apply to any of your charted data. And if you're going to any office that doesn't use the latest technology, your patient information is probably being sent between offices over fax anyway.
1. AI-generated charting. 2. The existence of a reliable record of the visit.
I am skeptical of the first in some cases (e.g., bias), but strongly in favor of the second.
My father is 80 and has Parkinson’s. He routinely leaves appointments unsure of what the doctor said, what changed, or what he is supposed to do next. Even when I attend with him, we sometimes disagree afterward about what exactly was recommended.
This happens with pediatric appointments too. My wife and I occasionally remember instructions differently: medication timing, symptoms to watch for, when to call back, whether something was “normal” or needed follow-up.
That is a care quality problem, not just a convenience problem.
The risks are real: privacy, consent, retention, training use, liability, and automation bias. But those argue for strict controls, not for a blanket refusal. Make it opt-in, give the patient access, prohibit training without explicit consent, keep retention short, and require clear auditability.
I do not want opaque AI quietly rewriting the medical record. But I also do not think “everyone relies on memory after a stressful 12-minute appointment” is some gold standard we should preserve.
It's fascinating how this translates: in the USA, it should mean "more time with patients", but in reality it also means "more patients", and is somehow bad because there is a monetary drive.
That is far from correct, and the main reason I would oppose this is that the AI might incorrectly record something in the transcript that completely derails my diagnosis and treatment.
There's a big difference between:
"I have had nausea for the past three days"
and
"I have not had nausea for the past three days"
And I'm being generous with my example.
This is probably not the reassurance anyone wanted to hear if they were worried about crap transcriptions leading to crap care.
This is my absolute least favorite category of AI innovations: people patting themselves on the back for becoming more efficient in their inefficiency.
My wife is a physician, and when permitted by patients, uses one of these tools. It's been an enormous time-saver for her. She works a 32-hour week, meaning 32 hours of seeing patients. Before these tools, she was regularly spending an extra 8 to 16 hours, i.e., one to two full work days, writing notes and sending messages. That time has been more than cut in half. She would never give up the tool if given the choice.
According to her, it is reasonably accurate, but all notes must be manually reviewed (not just because her organization requires it, but also because if she didn't, its mistakes would be obvious). The biggest issues are with things like names and medications, terms that aren't ordinary English, as well as misheard results of diagnostic tests, numbers, etc.
It's rare for patients to refuse it.
In that case, I’m paying them to engage with and observe me. Not to identify the correct treatment plan based on a variety of different data points (tests, my history, family history, research, etc)
And even in psychotherapy I have no problem with a LLM being used to compile notes after the session. Just don’t want it present in the session and used for analyzing it.
(My therapist asks me almost once a month if I'd mind. I thought it was because my notetaker was entering the Zoom meeting, but last week I called him out because I was almost certain I had disabled it. Curious if he'll ask again.)
Documentation errors have always been an issue. They were when there were paper charts, or human transcriptionists, or when manually typing into the EMR, or when using speech recognition (which is AI/ML!) to do the typing for you.
Not all e-scribes use LLMs, but most of them do rely on ambient audio recordings for speech recognition, which nowadays runs entirely locally. That text then needs to be processed into your clinical documentation, and there are tons of ways to do that (including LLM processing).
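The two-stage flow described above (local speech recognition, then separate processing of the transcript into a note) can be sketched in miniature. This is a purely illustrative toy, not any vendor's implementation: `transcribe_locally` and `summarize_to_note` are hypothetical stand-ins for an on-device ASR model and for whatever post-processing (template rules or an LLM call) produces the clinical documentation.

```python
def transcribe_locally(audio_chunks):
    """Stand-in for on-device speech recognition.

    A real system would run an acoustic + language model over audio;
    here we simply join already-transcribed chunks for illustration.
    """
    return " ".join(audio_chunks)


def summarize_to_note(transcript):
    """Stand-in for transcript -> clinical-note processing.

    Toy rule-based step: keep only sentences mentioning a symptom
    keyword. A production system might use an LLM or templates here.
    """
    keywords = ("nausea", "fever", "pain")
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    findings = [s for s in sentences if any(k in s.lower() for k in keywords)]
    # Retaining the raw transcript alongside the note supports later review,
    # as several commenters here suggest.
    return {"subjective": findings, "raw_transcript": transcript}


if __name__ == "__main__":
    chunks = [
        "I have had nausea for the past three days.",
        "No fever.",
        "Sleeping fine.",
    ]
    note = summarize_to_note(transcribe_locally(chunks))
    print(note["subjective"])
```

The point of separating the stages is that the transcription step can run entirely locally, while the note-generation step is where the interesting (and error-prone) interpretation happens, which is why manual review of the output remains necessary.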
The author has obviously never talked to clinicians or hospital administrators about the challenges of maintaining clinical documentation, and knows little to nothing about the reality of software that runs in clinical contexts.
So that means if I try to make an appt, I'll have an easier time getting one? Sounds good, I guess.
The next year, during my annual checkup, I gave my doctor a load of crap, telling her to record nothing I say unless I explicitly tell her to. She tried to defend the system, but she agreed. I'm still upset that my "file" still mentions alcoholism.
> "Here is a real concern about implementation" → "Therefore you should refuse entirely"
This skips the middle step of "therefore we should implement it well."
I'm not convinced that we should be allowing doctors to record patient visits at this stage, but I'm really not convinced by these points either, which largely don't hold up under closer examination.
A few that stuck out:
"Privacy" - Labs are routinely sent to third-party companies, and we don't do informed consent for that. The third-party argument isn't unique to recording.
"False promise of efficiency" - This doesn't really have anything to do with patients at all. It's a criticism of medical office management, not of physician-patient interactions. Telling patients to refuse a tool because management might exploit the productivity gains is asking patients to fight a labor battle on the provider's behalf.
"Consent can't be revoked mid-visit" - Consent typically can't be revoked in the middle of an appendectomy, or halfway through administering a vaccine either. Practical irrevocability is a normal feature of informed consent, not a special problem unique to recording. Proper consent processes in medical offices are a broader issue than consent about voice recordings specifically. Had the authors made the point that providers are being asked to obtain consent for tools whose technical implementation and privacy risks fall outside the provider's own domain knowledge — that would be a stronger argument. But that isn't quite the point they made, and their current framing doesn't wholly convince.
"To whom it may concern."
[Doctor, standing here as a patient, I have no trust or confidence in the security and integrity of my personal information with regard to AI scribing.
For this reason I will scribe for you, as that is the most accurate account of what I intend to communicate with you.
I will refrain from verbal communication and will provide on-the-spot written communication with respect to healthcare interactions.]
I really don't care if my recording becomes training data.
I would rather be spoken to like I'm not an idiot. Use technical terms please. I want precision.
Calling the US healthcare system underfunded might be the most wild part of the whole thing. We spend 5.3 trillion dollars a year. That's 17% of the entire economy.
In my case it was something decidedly not sensitive, the removal of a benign tumor in a finger, which I have no problem telling the whole world about (I was awake for the surgery and got to watch; it was an incredibly fascinating experience that I want to write more about some day).
But I can imagine it would feel much more invasive if the subject were more sensitive.