There are some use cases where AI models do yield better results than human judgement, much of which has been an active area of research since long before ChatGPT was even a thing.
And that's assuming I could trust even those use cases. We live in a sea of bullshit regarding AI, chunks of it generated by AI itself. When people like Sam Altman are the advocates, the advocacy becomes impossible to believe.
I don't quite agree; I see the more scientific uses of these technologies as a pretty separate thing from the whole LLM hype. And I'm pretty sure some countries' medical systems have better checks and balances than, let's say, the USA.
They could be, but that doesn't mean they will be once corporations get their hands on them. I'll grant you the second sentence, though; I do have more faith in other countries.
I think you have the wrong picture of how this technology would work and how it would be used in a medical setting. It won't be used at all if it isn't capable of making predictions that are more reliable than human judgement, and that reliability would have to be empirically proven (which is part of the research).
You have too much confidence in how it would be used, then. The people in control of the purse strings - the ones who would dictate the use of AI - aren't researchers. They also aren't doctors. They will wait for legal clearance and nothing else, *especially* considering all the AI overhype going around.
I'm the anti-est LLM guy around, and what the other person is saying here is correct, though of course one must concede that training-data bias is a risk. A friend of mine is trying to get NHS funding for it; there are many layers of testing and bureaucracy to get through, though.
As I said to the other person, I do concur that it gets a bit easier to accept once we cut the US out of the picture, but I still don't trust any of it right now; there's too little regulation and too much "move fast and break things."
I would want a new doctor.