They just completely won with calling LLM errors "hallucinations", didn't they? Even the fiercest AI critics use the term. It's an unnecessarily anthropomorphic and helpless way to say "returned false and broken results," as if the bot had an imagination or the ability to tell truth from lies.
