In my experience, ChatGPT has been more accurate at diagnosis than most doctors
But how could this be? Doesn’t it make stuff up?
Yeah, but most doctors literally do that too. They guess with very limited knowledge across specialties
ChatGPT tends to make better guesses
Comments
That cognitive dissonance is too much for many people to process. They go polarized: it’s either a digital god or a deep evil. I embrace nuance
For some topics well-represented in training data, they can synthesize useful outputs with more accuracy. For poorly represented concepts, they are less likely to be accurate
It’s making probabilistic connections from a massive dataset and spitting them out
Our current system is often that doctors are supposed to remember which of 500 diseases usually presents w/these 12 symptoms.
I never dared use it for medical info (not even this week, when I was really sick and could use some good opinions), but it’s terribly useful for all sorts of things. Yes — even though there’s all sorts of morally questionable aspects to using it.
These AIs are *magically* good at coding, but ask anything about board games and you might as well be asking a particularly sociopathic 6-year-old on the street.
Sure, but the probabilities are of certain words being found near other words.
Don’t confuse that with the probability of a certain set of symptoms/data being a given disease.
Those are completely different things.
For example, as a math teacher, I can ask it a series of questions and it can solve a lot of them, but there are things it gets very wrong.
But my concern is people who don't know any better using it like a search engine, only with even more trust, or implementations that can't at least be double-checked by a human.
I know with a doctor, for me, it's always a balancing act of trying to convey the list of symptoms and how I suspect they all may or may not be related.
Doctors frustratingly make mistakes frequently and miss things. It can take several different doctors to catch something.
ChatGPT remains a useful diagnostic tool if you know how to use it properly
Because of nothing but coincidence, a researcher I worked with knew my pediatrician, who was 98 at the time and remembered me because I was ANA-negative; he was certain I had JRA, but ANA was the only test.
GPT suggested it from my symptoms *third*.
A JAK inhibitor put my lungs into remission at 50.
COINCIDENCE and *my* career as a cancer researcher got me a diagnosis.
And a 97-year-old's memory...
There were 7 people alive with my precise ILD symptoms in the last paper I know of. How dare no one guess a one-in-a-billion disease.