Exactly this 👇
LLMs are exactly what the name Large Language Models says: they model language. They construct sentences that sound plausible, even probable. They are not reproducing facts, but rather producing language that reads like facts. What makes them dangerous is that people don’t understand this.
Reposted from Sonja Drimmer
People need to stop referring to AI “hallucinations.” LLMs are maybe-most-likely machines, not tools for accurate summarization. It is HIGHLY LIKELY that I would have written an article titled “Manuscript Mediation & Reproduction of Authority,” but I didn’t. That’s not hallucination; that’s probability.
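To make the “most-likely” point concrete, here is a minimal, purely hypothetical sketch: a toy next-token-style distribution in which a plausible-but-nonexistent title simply carries the highest probability, and generation samples by probability with no fact-checking step. The titles and numbers are invented for illustration and are not how any real model is implemented.

```python
import random

# Toy distribution over possible continuations (all values invented).
# The point: the model only knows which continuation is likely-sounding,
# not which one is true.
next_continuation_probs = {
    "Manuscript Mediation & Reproduction of Authority": 0.41,  # plausible, never written
    "Medieval Manuscripts and the Politics of Reproduction": 0.33,  # also plausible, also invented
    "[no such article exists]": 0.26,  # the true answer, underweighted
}

# Generation = sampling by probability; there is no lookup of reality.
titles = list(next_continuation_probs)
weights = list(next_continuation_probs.values())
print(random.choices(titles, weights=weights, k=1)[0])
```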
