Exactly this 👇
LLMs are exactly what the name says: Large Language Models. They model language. They construct sentences that sound plausible, even probable. They are not reproducing facts, but rather language that reads like facts. What makes them dangerous is that people don't understand that.
Reposted from
Sonja Drimmer
Ppl need to stop referring to AI “hallucinations.” LLMs are maybe-most-likely-machines not tools for accurate summarization. It is HIGHLY LIKELY that I would have written an article titled “Manuscript Mediation & Reproduction of Authority” but I didn’t. That’s not hallucination; that’s probability.
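To make "maybe-most-likely-machine" concrete, here is a toy Python sketch of autoregressive next-step selection. The candidate titles and probabilities below are invented for illustration (a real LLM scores tens of thousands of tokens at every step), but the mechanism is the same: nothing in the computation checks whether the output is true, only how likely the words are.

```python
import random

# Hypothetical next-continuation distribution for the prompt
# 'Sonja Drimmer wrote an article titled ...'
# The numbers are made up; the point is the mechanism.
candidates = {
    '"Manuscript Mediation & Reproduction of Authority"': 0.55,  # sounds right, was never written
    '"Medieval Manuscripts and Political Authority"': 0.35,      # also merely plausible-sounding
    '"Quantum Basket Weaving"': 0.10,                            # implausible language -> low probability
}

titles = list(candidates)
# Sample in proportion to probability: the most language-plausible title
# usually wins, whether or not it exists. Truth is not an input.
print(random.choices(titles, weights=[candidates[t] for t in titles])[0])
```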
Comments
From https://cacm.acm.org/magazines/2024/2/279533-talking-about-large-language-models/fulltext
took a class with Hinton and he legit thought machine learning would make psychology obsolete
Perhaps AI and (some) politicians are on a convergent evolutionary path...
(And for those legit uses, we need a LOT more research to say they're safe.)