Emily Bender does not know whether any of these claims are true. This entire passage is as hallucinatory as anything emitted by an LLM.
Reposted from Nicole Holliday
A major danger of LLMs is that humans are SO predisposed to attribute knowledge to any entity that uses natural language fluently. We cannot imagine that a machine producing natural-seeming speech or text lacks cognition. Brilliantly articulated by @emilymbender.bsky.social et al. (2021).
