Emily Bender does not know whether any of these claims are true. This entire passage is as hallucinatory as anything emitted by an LLM.
Reposted from Nicole Holliday
A major danger of LLMs is that humans are SO predisposed to attribute knowledge to any entity that uses natural language fluently. We cannot imagine that a machine that outputs natural-seeming speech/text doesn't have cognition. Brilliantly articulated by @emilymbender.bsky.social et al. (2021).
Comments
look at my philosophers, dawg, we're never getting a coherent theory of personhood (everyone who could construct one is possessed by the engram of Commander Bruce Maddox and compelled to hack out an exception that lets him dismantle Data)
https://arxiv.org/pdf/2505.06120
That said, there's also a mismatch of training and usage, I think. A PhD on my team thinks that multi-turn conversations are a significant enough departure from the training process to cause these issues.
(interested in who you mean, though!)
(I hate that person's take with a bloodthirsty passion)
I think it's fairly widely held that humans do have such representations, but that's much farther from my field.
You can't have intent if you have no motivation from external factors.
Like, an animal kills because it needs to eat; an LLM writes because it's told to, but there's no active choice involved?
Like free will: as soon as I hear a decent definition, I'll accept it as ontologically meaningful. Not holding my breath.
But there's no ethics without it. Hence, we have to take it as a starting point.
Varela’s description of intentionality in the context of enactivism seems like common sense and fairly uncontroversial to me.
https://iep.utm.edu/enactivism/
Less abstractly: what's the possible intention of an LLM? The intention of the source sentence(s)? The intention of the user, who wrote the prompt? These may be contradictory!