AI models in hospitals miss 66% of patient injuries, failing to detect worsening health. A study urges integrating medical expertise, as 65% of US hospitals rely on these systems for patient care. 🩺💻
Direct link to the study: https://www.nature.com/articles/s43856-025-00775-0
Comments
You know how you reduce deaths and errors in a hospital? You hire more staff, and you pay high-quality staff more so you can retain them.
AI has its uses, like detecting tumors on scans, but predicting falls and other bad outcomes? Pure bullshit.
Computers don't replace doctors
Then the only reason we have them in US hospitals is to make someone exceedingly rich without benefiting patients.
That's my short take, because when it comes to healthcare, pretty much EVERYTHING we do in the US is wrong.
https://bsky.app/profile/smcgrath.phd/post/3lk6vckdxtk2s
Case in point: a recent very large RCT found AI-supported mammography increased cancer detection by 29% while cutting radiologists' workload by 44%, without more false positives.
Computers can be helpful
Obviously
But ai is just internet garbage mixed into real science articles
Still need doctors
A lot of people are latching on to the original post because of the negative light it paints AI in, and I didn't want to run through all the various AI flavors (ML, deep learning, fuzzy networks, etc.).
Link to the paper: https://www.thelancet.com/journals/lanonc/article/PIIS1470-2045(23)00298-X/fulltext
An algorithm is no replacement for personalized health services. How dystopian of us.
AI is a rich, diverse field of extreme utility and one of our greatest technological advancements.
We should never have let tech bros anywhere near it.
Afaik the good uses are largely pattern recognition based?
It is not a nothingburger, but it's certainly neither an existential crisis to humanity nor is it going to destroy all jobs everywhere.
No, literally. News articles and sales use the term AI, but from a technical standpoint that is nearly meaningless. LLMs are not CNNs are not 1500 if statements are not genetic algorithms are not are not are not. All have different strengths and weaknesses.
AI in medicine has been around for decades, but it's been limited by what it could accomplish. We are seeing some very exciting advances, but there are very big dangers too.
That's why they prefer it to humans making decisions
I'm not the church; believe whatever you want. There's evidence, and there are industry executives who would prefer that people ignore it.
Our country is sick with greed.
AI has been around for years and has a million different implementations, loved by researchers.
They should always be guided by an actual clinician. Every AI paper specifies this.
The new craze is transformer-based LLM Generative AI, loved by tech bros.
They also tend to ignore warnings from researchers about dangers and limitations of the technology.
The problem isn't the research, it's business fools chasing venture capital at the expense of reason.