“When the AI provided an incorrect result, researchers found inexperienced and moderately experienced radiologists dropped their cancer-detecting accuracy from around 80% to about 22%. Very experienced radiologists’ accuracy dropped from nearly 80% to 45%.”
https://pubs.rsna.org/doi/10.1148/radiol.222176
Comments
My MSc project tried to predict outcomes for a specific cancer, to help doctors choose the best option for each patient, from attempting a cure to improving their last three months of life. 1/2
Still, I spent a year trying various methods without managing to predict anything useful. 2/3
It turned out I could predict, with more than 95% accuracy, when a patient had joined the study.
The data exhibited a major bias, invisible to the naked eye of researchers and doctors. 3/4
And yes, I did not plan the numbering of my messages correctly. 4/4
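The enrolment leak described above can be sketched with synthetic data. Everything here is invented for illustration (record IDs, cohort labels, the toy "model"); the point is only the mechanism: a field that silently encodes *when* a patient joined lets a model score near-perfectly on that, while the clinically useful target stays at chance.

```python
import random

random.seed(0)

# Synthetic patient records: record_id is assigned sequentially as patients
# enrol, so it silently encodes enrolment time (the hidden bias).
records = []
for i in range(200):
    cohort = "early" if i < 100 else "late"    # when the patient joined
    outcome = random.choice(["good", "poor"])  # genuinely unpredictable here
    records.append({"record_id": i, "cohort": cohort, "outcome": outcome})

def predict_cohort(rec):
    # A "model" that has latched onto the leaked signal: low IDs enrolled early.
    return "early" if rec["record_id"] < 100 else "late"

cohort_acc = sum(predict_cohort(r) == r["cohort"] for r in records) / len(records)
# Baseline for the real target: always guess "good" (no signal exists to learn).
outcome_acc = sum(r["outcome"] == "good" for r in records) / len(records)

print(f"cohort accuracy:  {cohort_acc:.0%}")   # near-perfect: the leak, not real insight
print(f"outcome accuracy: {outcome_acc:.0%}")  # roughly chance: nothing real to learn
```

A model trained naively on such data would appear to perform impressively while learning nothing about patient outcomes, which is exactly the kind of bias that researchers and doctors cannot spot by eye.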
The inexperienced, lacking experience, trust the technology blindly;
the experienced sometimes detect a flaw and decide to double-check on their own.
Does that mean we undermine that by introducing complacency?
Aren't we dumbing down human intelligence then? Aren't we already seeing it with UI-centric software that focuses on ease of use instead of augmenting a function?
https://pubs.rsna.org/doi/full/10.1148/radiol.232479
The article extols the virtues of AI in reducing workload.
Instead, they want to use it to increase profits by sacrificing quality.
The main difference is that experienced radiologists are usually much faster.
It’s about how people, even experts, blindly trust machines (which can fail or be inaccurate for various reasons)
IBM tried AI for cancer detection 20 years ago. Made a big splash. Turned out it wasn't any better than doctors.
We should use available tools appropriately, with sensible protocols, to prevent the misery of metastatic cancer. No one said anything about all our hopes. 20 years ago isn’t 2024.
I’ll get on with grieving now.
Does the use of AI slow that down? There's some evidence in other contexts (primary school students) that AI assistance slows down learning.
Thanks for the link.
I can't predict the weather without checking a forecast. Even if the forecast is not always accurate, it's more accurate than I am.
That's not the case with doctors when it comes to detecting cancer.
... and then continuously trained further towards "six sigma."
If the participants had been used to a reliably high AI accuracy of, say, 99.9 percent at the time of the experiment, a certain bias would be understandable.
Note that AI is being used to try to make things cheaper for healthcare systems (e.g. reducing screening from two radiologists to one) rather than to improve results.