I… what?
Even in the face of countless gems from the World’s Worst Technology™️, this is outstanding.
Reposted from New Scientist
Many AI models fail to recognise negation words such as “no” and “not”, which means they can’t easily distinguish between medical images labelled as showing a disease and images labelled as not showing it.
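The failure is easy to probe with a CLIP-style vision-language model: score one image against a caption and its negated twin, and the two scores often come out nearly tied. A minimal sketch, not the study’s actual setup; the model name, file name, and captions here are illustrative assumptions:

```python
# Probe whether a CLIP-style model distinguishes a caption from its negation.
# Model choice, image file, and captions are illustrative, not from the study.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("chest_xray.png")  # hypothetical local image
captions = [
    "a chest x-ray showing pneumonia",
    "a chest x-ray showing no pneumonia",  # negated twin
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1 image, 2 captions)
probs = logits.softmax(dim=-1).squeeze()

for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.3f}  {caption}")
# If the model can't read the "no", the two probabilities land near 0.5/0.5
# regardless of what the image actually shows.
```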
Comments
The way we use negations in English can be confusing.
But AI can give answers that contradict themselves in the same paragraph.
Makes me want to say, "Did you read over your answer before you submitted it?" Which I was saying long before all this.
And students will often make mistakes like swapping ‘can’ for ‘can’t’ because they sound so similar.
A machine learning image scanner is an incredibly useful tool, but has almost nothing in common with a chatbot.
'No elephant' and 'elephant' both make us think of elephant.
Then the neocortex sorts it out for the conscious brain.
I've simplified this, but at least we can recognise 'no' with our human brains.
And perhaps it is an example of using the wrong tool for the job?
We are heading into a world of pain, basically because people treat LLMs as a universal panacea rather than as a tool that might have uses in some cases. And even then, the tool user needs training.
Or it might just vanish...
https://pmc.ncbi.nlm.nih.gov/articles/PMC5407813/
It makes sense for an LLM like ChatGPT, though. But why would you try using an LLM for this?
So, using the LLM as a natural-language interface. That's probably one of the rare uses of LLMs that seems potentially useful and powerful, but the tech is not ready for it at all yet. The hype is so absurd.
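One way that interface pattern can work is to confine the LLM to translating the request into a structured query, with ordinary code doing the retrieval, so a negation survives as a checkable field instead of getting lost inside the model. A rough sketch only; the model name, JSON schema, and find_studies helper are all hypothetical:

```python
# Sketch of an LLM as a natural-language front end: the model only translates
# the request into a structured call; deterministic code does the real work.
# The model name, schema, and find_studies helper are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Translate the user's request into JSON with keys "
    '"finding" (string) and "present" (true/false). Output JSON only.'
)

def find_studies(finding: str, present: bool) -> list[str]:
    # Placeholder for a real database query; the LLM never touches the data.
    return [f"study matching finding={finding!r}, present={present}"]

def ask(request: str) -> list[str]:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": request}],
    )
    query = json.loads(reply.choices[0].message.content)
    return find_studies(query["finding"], query["present"])

print(ask("show me scans with no sign of pneumonia"))
# The negation ends up as present=false in the structured query, where it can
# be checked and logged, rather than silently dropped by the model.
```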