There are now 5 reports like this—#AI performing better than physicians + AI—and we don’t yet have an explanation for why (hybrid was supposed to be best)
Gift link https://nytimes.com/2024/11/17/hea…
Comments
Plus, there is no end of stories of people being sent home from the ER without a diagnosis, only to come back later with life-threatening issues.
Here is a talk I gave about AI in medicine 7 years ago, including the concept of overruling an AI decision support system:
https://youtu.be/LCTW3IoX5jQ?t=810&feature=shared
It would be interesting to have a large-scale study to understand the level of false positives here too.
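For what such a study would actually report, here is a minimal sketch of the confusion-matrix arithmetic behind "level of false positives." The function name and all counts are invented for illustration; a real study would use labeled diagnostic cases.

```python
def diagnostic_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic-accuracy metrics from a confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate: cases caught
        "specificity": tn / (tn + fp),          # true negative rate: healthy cleared
        "false_positive_rate": fp / (fp + tn),  # healthy patients flagged as sick
        "ppv": tp / (tp + fp),                  # P(disease | positive call)
    }

# Hypothetical counts: 80 cases caught, 15 false alarms,
# 890 healthy patients correctly cleared, 15 cases missed.
print(diagnostic_rates(tp=80, fp=15, tn=890, fn=15))
```

Note that false positives and false negatives trade off: a system tuned to never send someone home with a missed diagnosis will flag more healthy patients, which is why both rates need to be measured together.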
It’s straight and to the point. Its prerogative is direct.
I'm not sure why you'd assert that Adam's result is not true. I'm expecting the studies you'll point to are much larger and/or have better methods.
https://www.nytimes.com/2024/11/17/health/chatgpt-ai-doctors-diagnosis.html
Have you *met* doctors? Try telling them the answer to something, and inevitably they will tell you why it's not the answer.
I can very easily imagine ChatGPT suggesting the correct answer and institutionalized oppositional-defiant disorder kicking in *hard*.
esp this bit: “They didn’t listen to A.I. when A.I. told them things they didn’t agree with”
M. Deities
LLMs are great at association but terrible at complex logic. A human expert should be able to beat one reliably if assisted with good info, like relevant statistics for treatment options.
Diagnosing conditions and treating the patient are two different things