Let's please stop calling them "hallucinations" and call them what they really are: errors. Heinous errors that any human would easily notice as wrong, but which LLMs gleefully spit out.
Reposted from Lili Saintcrow
"When humans hallucinate, it’s because something’s misfiring in our brains. AI hallucinations are not a mistake. The system is doing exactly what it was intended to do. That’s why they can’t fix it, because it’s not broken."
