AI medical note taking in practice
https://pulitzercenter.org/stories/researchers-say-ai-powered-transcription-tool-used-hospitals-invents-things-no-one-ever
(it is worth asking “is the AI backlash justified?” before vilifying criticism as anti-innovation)
Comments
With clear audio (like a doctor's office), that 187 out of 13,000 works out to an error rate of about 1.4% overall, and those errors may or may not be critical in nature.
So... the AI is likely better. And can be improved.
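For concreteness, the arithmetic behind that figure, plus a hypothetical scale-up (the 500-notes-per-day volume below is an illustrative assumption, not a number from the article):

```python
# Figures as reported in the linked article: 187 hallucinated passages
# found in just over 13,000 short, clear audio snippets.
hallucinations = 187
snippets = 13_000

rate = hallucinations / snippets
print(f"Per-snippet hallucination rate: {rate:.1%}")  # ~1.4%

# Hypothetical scale-up: a clinic transcribing 500 notes per day
# (illustrative volume) would expect roughly this many bad notes daily.
notes_per_day = 500
print(f"Expected hallucinated notes per day: {rate * notes_per_day:.0f}")  # ~7
```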
Those "short clear" audio samples were each *10 seconds* long. That's hardly representative of any actual use case.
https://www.healthcare-brew.com/stories/2024/11/18/openai-transcription-tool-whisper-hallucinations
It’s always about some future version that will be better.
https://bsky.app/profile/abeba.bsky.social/post/3lf3d2fspo22a
Hallucinations cannot be removed from LLMs; they are a feature, not a bug. So when you use one in medicine, legal, tax, etc., you absolutely have to verify the output (see the sketch below).
What, then, is the ROI? The human still needs the expertise, still has to review the document, and so on.
Bonkers.
Then AI has ridiculous bias and distribution-shift issues, where who you evaluate it on changes the accuracy, and so on.
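As a concrete illustration of "you absolutely have to verify the output": a minimal sketch, assuming the open-source openai-whisper package (the tool named in the linked stories) and a hypothetical recording file. The per-segment confidence fields are real; the thresholds mirror Whisper's own defaults but are not clinically validated, and flagging only narrows where a reviewer looks first rather than catching hallucinations outright.

```python
import whisper

# Transcribe with the open-source openai-whisper package.
model = whisper.load_model("base")
result = model.transcribe("visit_recording.wav")  # hypothetical file name

# Each segment carries heuristics that correlate (imperfectly) with
# unreliable output. Thresholds mirror Whisper's own internal defaults
# but are illustrative here, not clinically validated.
for seg in result["segments"]:
    suspicious = (
        seg["avg_logprob"] < -1.0          # low token confidence
        or seg["no_speech_prob"] > 0.6     # model thinks it's silence
        or seg["compression_ratio"] > 2.4  # repetitive / looping text
    )
    tag = "REVIEW" if suspicious else "ok"
    print(f'[{tag:6}] {seg["start"]:6.1f}-{seg["end"]:6.1f}s {seg["text"]}')

# Flagging narrows where a reviewer looks first; for clinical use,
# every segment still needs a human check.
```

And on the bias and distribution-shift point, the standard check is to report accuracy per speaker group instead of one aggregate number. A sketch using the jiwer word-error-rate package, with made-up data:

```python
from collections import defaultdict
from jiwer import wer

# Hypothetical evaluation records: (speaker_group, reference, hypothesis).
samples = [
    ("group_a", "patient reports mild chest pain", "patient reports mild chest pain"),
    ("group_a", "no known drug allergies",         "no known drug allergies"),
    ("group_b", "prescribed ten milligrams daily", "prescribed antibiotics daily"),
    ("group_b", "follow up in two weeks",          "follow up in two weeks"),
]

# One aggregate score hides subgroup gaps; compute WER per group instead.
by_group = defaultdict(lambda: ([], []))
for group, ref, hyp in samples:
    by_group[group][0].append(ref)
    by_group[group][1].append(hyp)

for group, (refs, hyps) in sorted(by_group.items()):
    print(f"{group}: WER = {wer(refs, hyps):.0%}")
```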
If you can't reason with empirical evidence that challenges your "AI backlash is bad" argument, the problem is you.
And even if it were wrong on just 1 out of 100 recordings (but you don't know when or where), it's as good as 0% accuracy, because you still have to verify every single one.
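Rough arithmetic on why an unlocatable 1% error rate behaves like that (the 1-in-100 rate is the commenter's hypothetical):

```python
# If 1 in 100 recordings contains an error you can't localize, the
# chance that a batch of notes is clean drops quickly with volume.
error_rate = 0.01  # hypothetical rate, per the comment above
for n_notes in (10, 30, 100):
    p_any_bad = 1 - (1 - error_rate) ** n_notes
    print(f"{n_notes:3} notes: P(at least one bad) = {p_any_bad:.0%}")

# Since you can't tell which note is bad, all of them must be reviewed
# in full -- the 99% per-note accuracy buys back no review time.
```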