For everyone who thinks we can simply tell doctors *not* to use LLMs for decision-making and that they will listen, just look at these numbers:
"76% of respondents reported using general-purpose LLMs in clinical decision-making"
https://www.fiercehealthcare.com/special-reports/some-doctors-are-using-public-generative-ai-tools-chatgpt-clinical-decisions-it
"76% of respondents reported using general-purpose LLMs in clinical decision-making"
https://www.fiercehealthcare.com/special-reports/some-doctors-are-using-public-generative-ai-tools-chatgpt-clinical-decisions-it
Comments
Perhaps we should begin by allowing and supporting academic staff/teachers to use it, and subsequently study its use.
At present I cannot access several services that my students can.
https://www.grounded.systems/2025/01/dealing-with-ai/
"What is a good starting dose of propofol for anesthesia of a 4 yo?". Seems dangerous.
While "suggest diagnoses for a patient with 4 weeks of cough, stuffy nose and malaise" may help the doc consider some less common disorders.