Searle's Chinese Room argument has been around for more than four decades, so this should be obvious 🙃 (maybe another mark in favor of Integrated Information Theory, too?). Ultimately, I'm skeptical that sentience is possible without paraconsistent logic.
Reposted from Nicole Holliday
A major danger of LLMs is that humans are SO predisposed to attribute knowledge to any entity that uses natural language fluently. We can't help but imagine that a machine producing natural-seeming speech or text must have cognition behind it. Brilliantly articulated by @emilymbender.bsky.social et al. (2021).