They just completely won with calling LLM errors "hallucinations," didn't they? Even the fiercest AI critics say it. It's an unnecessarily anthropomorphic and helpless way to say "returned false and broken results," as if the bot had an imagination or the ability to tell truth from lies.
Comments
It's just wrong. Bad data output.
That's how I see it, anyway.
But the LLMs themselves are incapable of lying, because they're incapable of doing anything deliberately.
I'm going to be more aggressive about correcting misinformation, but honestly, it's like talking to a brick wall.
a hallucination implies it's crazy
to me "hallucination" is worse, though i guess it does imply that it's capable of thought
I am also constantly reminded of how the villain in Surface Detail is just Elon Musk.
Even the engineers fall for their own propaganda smdh