Time to tap the sign: At present, there's no known way to stop generative AI from producing output that is false, wrong, mistaken, untrue, erroneous, incorrect, "hallucinatory," etc., etc. You can't fix this by pointing the model at any particular source material. Bummer, I know. But that's the truth.
Reposted from
Yasmin R. Aslam
Over half the answers AI gave couldn’t be trusted.
Not even after the AI was given access to the BBC’s website and prompted to use BBC News articles and sources.
The BBC tested:
- OpenAI’s ChatGPT
- Microsoft’s Copilot
- Google’s Gemini
- Perplexity