Interesting. I'm not a big AI fan, but my doctor isn't up to date on my illness and gaslights me about it. At least ChatGPT would pull from relevant and up-to-date research and treatments.
A recent study found that 11.4% of people underwent surgery over the last year. So if that's where we're drawing the line, then that would leave AI 88.6% of all medical jobs and authority.
If you include procedures such as central lines, intubation, bone marrow biopsies, etc., AI would be a diagnostic tool, not really a replacement for many medical specialties.
But radiology, pathology, and reading EKGs/studies? AI might quickly be able to be more precise.
True. I see a lot of benefit on the assistive-tool side. The breadth of information is better than any human brain's. As a doctor's wife, the only thing I see missing is the fast analytical decision-making. It can't fully replace a human, but it's a really helpful tool. AI is impressive.
Not only that. Given that lower-income patients have less access to diagnosis and care, treatments and surgeries could rise, and insurance companies would benefit. We just need to find the proper use and benefit from an operational-management side.
This was my *primary* hope. The industry is so brutally squeezed that the doctor greets you on their way out the door. They are overworked even with a team of helpers.
Is it safe to say that the barrier is essentially just jobs that require arms? That is a recurring trend in AI. I certainly would not miss going through the work to have something looked at only to find out it's nothing. Maybe diagnoses will be free and performed at home in some cases?
Could be. The issue is in the asking of questions and, far more importantly, the physical exam. Once the data is available, I’m sure AI will (eventually) beat any human.
And while many MDs have poor bedside manner due to a host of causes, a little humanity still goes a long way.
This is exactly what AI does well: take the symptoms and run through an exhaustive list of probabilities. Doctors can't be nearly as exhaustive and end up considering far fewer possibilities.
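To make that concrete, here's a toy sketch of the "exhaustive probabilities list" idea: score every candidate condition against the reported symptoms, naive-Bayes style. Every condition name and number below is invented for illustration, not real clinical data.

```python
# Minimal sketch: rank a differential by scoring EVERY candidate condition
# against the observed symptoms. All priors/likelihoods here are made up.

# Hypothetical priors P(condition) and likelihoods P(symptom | condition).
PRIORS = {"viral URI": 0.60, "strep throat": 0.15, "mononucleosis": 0.05}
LIKELIHOODS = {
    "viral URI":     {"fever": 0.4, "sore throat": 0.5, "fatigue": 0.3},
    "strep throat":  {"fever": 0.7, "sore throat": 0.9, "fatigue": 0.4},
    "mononucleosis": {"fever": 0.6, "sore throat": 0.7, "fatigue": 0.9},
}

def rank_differential(symptoms):
    """Score every condition against every symptom, then normalize."""
    scores = {}
    for condition, prior in PRIORS.items():
        score = prior
        for s in symptoms:
            # Unseen symptoms get a small default rather than zeroing out.
            score *= LIKELIHOODS[condition].get(s, 0.01)
        scores[condition] = score
    total = sum(scores.values())
    return sorted(((c, v / total) for c, v in scores.items()),
                  key=lambda cv: cv[1], reverse=True)

print(rank_differential(["fever", "sore throat", "fatigue"]))
```

Real diagnostic models are far more sophisticated than this, but the point stands: the machine scores every candidate, while a human shortlists from memory.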
Who is liable if the AI misdiagnoses someone? Will patients be able to opt out? Already doctors spend more time at appointments typing on a computer than they do looking at or touching a patient.
I have no doubt this technology will move forward with few limitations, because big money is at stake.
I actually would like AI in addition to the doc in any difficult situation now. It's more likely to have read the latest studies and research. And we effectively use it when we WebMD something so we're pretty comfortable with it.
Doctors will probably be reduced to an interface role between patient and machine. Similar to how field umpires and referees are with video assisted technology.
The real question is what are we gonna call this AI bot that diagnoses you WITHOUT waiting 30 minutes past your appointment time, and doesn't forget to call you to tell you that blood test came back negative? I vote for Dr. Roboto.
A hospital in Round Rock, TX took 2 weeks to diagnose my brother-in-law's Guillain-Barré syndrome. I'd be willing to bet that AI would've diagnosed it immediately.
Points to note: the group was 50 total, roughly 50:50 attending physicians to residents. The docs were given 10 minutes per case and had no prior experience with any of the patients. IMO, in such time- and resource-constrained scenarios, 75% accuracy is pretty damn good.
AI is supposed to assist, not to do the job. I do not want AI reading my medical history without human review. Actually I don't want AI in my healthcare at all.
But Kat -- no one is proposing that use of AI. The idea here is that there are more illnesses/symptoms than a human brain can readily retrieve from memory. This vastly enhances the abilities of human doctors to help patients.
I work in healthcare. I tested an AI product for work and found its ability to summarize is limited and needs human review. AI literally hallucinates and will say things that aren't correct or even within reason. Currently, I am a skeptic.
Skepticism is good. I’m simply saying no one is proposing removing human review. Ideally, human beings refine/correct the models to increase the engine’s speed and accuracy. At best, it’s a tool, one that like any tool is verified by human expertise.
Humans are fallible and, while I hope you are right, I think the generalization will prove to be wrong. People don't understand AI and some tech allows us to be lazy. Remember all the phone numbers you knew by heart before cell phones?
OK, again, your parallel is apt, but I don't think it proves what you're advancing it to prove. Cell phones didn't make us lazy; they freed us from the drudgery of memorizing numbers and increased the number of numbers at our fingertips. Rightly done, that's what AI in medicine will do.
I think no, because AI can be properly biased to recognize those cases. AI has a very long memory to access during inference; human brains get foggy and imprecise after many years. They're not RAM.
Medical knowledge = the sum of many, many medical books, symptoms, cures, effects, patients, relatives, etc. That is exactly a corpus in AI/ML research terms, something a machine can learn and never forget. So it's no wonder machines are already better.
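As a toy illustration of that corpus point (every document and term below is invented for the example): once facts are indexed, retrieval is exact and doesn't fade with time the way human recall does.

```python
# Toy sketch: index a tiny "corpus" of condition descriptions so lookup
# never forgets. Entries are invented for illustration only.
from collections import defaultdict

corpus = [
    ("Guillain-Barré syndrome", "ascending weakness tingling areflexia"),
    ("influenza", "fever cough myalgia fatigue"),
]

index = defaultdict(set)  # term -> conditions whose description mentions it
for condition, text in corpus:
    for term in text.split():
        index[term].add(condition)

# Retrieval stays exact no matter how long ago the "book" was read.
print(index["fatigue"])    # {'influenza'}
print(index["areflexia"])  # {'Guillain-Barré syndrome'}
```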
I have zero worry about AI replacing physicians. AI would have to use facts and evidence-based medicine - which is usually not what people want to hear.
Dr AI: "you have a virus, supportive care"
Patient: "screw this, I want my antibiotic, steroid, and B12 shot right now"
https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2825395
It’s like asking a math book to tell you the answers and concluding the book is good at math.