This makes sense when you remember that an AI model can't "understand" anything.

It creates the illusion of understanding by drawing on large amounts of training data relevant to the prompt and assembling the response that, based on recognized patterns, seems most likely to make sense.
Reposted from New Scientist
Many AI models fail to recognise negation words such as "no" and "not", which means they can't easily distinguish between medical images labelled as showing a disease and images labelled as not showing the disease.
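
As a rough illustration of the kind of failure described above, here is a minimal sketch that scores an image against a caption and its negated counterpart using a vision-language model. It assumes the Hugging Face transformers library, Pillow, and the public openai/clip-vit-base-patch32 checkpoint; the image path and captions are placeholders, not from the article. The point is that the two probabilities often come out close, i.e. the "not" is largely ignored.

```python
# Sketch: probe whether a vision-language model distinguishes a caption
# from its negated counterpart. Assumes `transformers`, `torch`, and Pillow
# are installed; "chest_xray.png" is a placeholder path.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("chest_xray.png")  # placeholder image
captions = [
    "a chest X-ray showing pneumonia",
    "a chest X-ray not showing pneumonia",
]

# Encode the image and both captions, then compare similarity scores.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)

# If negation were handled well, the two probabilities would differ sharply;
# in practice they tend to be close, which is the failure the article describes.
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.3f}  {caption}")
```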
