Hospitals and providers already use transcription tools. The difference here is that the audio is recorded during the patient conversation itself: the AI processes the audio and creates the medical note in the EMR from keywords in the conversation. The software companies call it ambient listening.
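Roughly, the pipeline looks like this. A minimal sketch using the open-source whisper package; the audio file name, keyword list, and note-building step are made up for illustration (real products use much heavier NLP/LLM summarization, not simple keyword matching):

```python
import whisper

# Transcribe the recorded visit audio (file name is hypothetical)
model = whisper.load_model("base")
result = model.transcribe("visit_audio.wav")
transcript = result["text"]

# Toy "note extraction": scan the transcript for keywords and
# assemble a skeletal note to push into the EMR
KEYWORDS = ["headache", "fever", "ibuprofen", "follow-up"]
findings = [kw for kw in KEYWORDS if kw in transcript.lower()]

note = {
    "chief_complaint": findings[0] if findings else "unknown",
    "mentioned_terms": findings,
    "raw_transcript": transcript,
}
print(note)  # a real product would write this into the EMR instead
```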
To be fair to the article, it doesn't actually focus on medical transcription with Whisper; it's more just general talking shit about it. I don't think many hospitals are using it. None should, of course, but the article is light on specific examples in a medical setting.
“Over 30,000 clinicians and 40 health systems, including the Mankato Clinic in Minnesota and Children’s Hospital Los Angeles, have started using a Whisper-based tool built by Nabla”
“Nabla said the tool has been used to transcribe an estimated 7 million medical visits.”
Yeah ok, that's unfortunate. The key thing there is Whisper-based. There's a lot that can be done on the backend to make it more accurate, but I have no idea if Nabla's SaaS platform does any of that.
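For example, Whisper itself exposes a few knobs a backend could use to bias decoding toward medical vocabulary and to flag low-confidence output instead of silently charting it. A sketch using the open-source whisper package (the parameter names are real; the vocabulary prompt and file name are invented, and the thresholds shown are just Whisper's own defaults):

```python
import whisper

model = whisper.load_model("medium")

# initial_prompt biases the decoder toward domain terms it might
# otherwise mis-hear ("metoprolol" vs. "met her pull", etc.)
result = model.transcribe(
    "visit_audio.wav",
    initial_prompt="Clinical visit. Terms: hypertension, metoprolol, A1c, tachycardia.",
    condition_on_previous_text=False,  # reduces repetition loops and runaway hallucinated text
)

# Surface segments the model itself wasn't confident about
# for human review rather than dropping them into the chart
for seg in result["segments"]:
    if seg["avg_logprob"] < -1.0 or seg["no_speech_prob"] > 0.6:
        print("LOW CONFIDENCE, needs review:", seg["text"])
```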
My understanding is that "Whisper-based" means they took the pretrained Whisper model and fine-tuned it on domain-specific data, which is supposed to help it build a relevant vocabulary for the setting it's used in. This is how most products built on LLMs work; nobody retrains these models from scratch.
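A domain fine-tune of the pretrained checkpoint would look roughly like this. This is a sketch using the Hugging Face transformers API; `medical_dataset` is a placeholder for preprocessed (audio features, transcript labels) pairs, and Nabla hasn't published how their tool is actually built:

```python
from transformers import (
    WhisperProcessor,
    WhisperForConditionalGeneration,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# Start from the pretrained checkpoint, not an untrained skeleton:
# the general acoustic/language knowledge is the whole point
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

args = Seq2SeqTrainingArguments(
    output_dir="whisper-medical",
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    max_steps=4000,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=medical_dataset,  # hypothetical: verified clinical audio/transcript pairs
)
trainer.train()
```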
Spoiler: They are all terrible in the same ways.