And yet, part of Turing's point was precisely that there is no way to objectively define what it means to think. As such, if people think it thinks, then for all intents and purposes, it kinda does.
Comments
I also agree, but in Paul's defense, he is pushing back on those who put LLM intelligence close to that of a physics undergrad based on flawed benchmarks.
Well, then you can ask if the behavior is adaptive in a given environment… and you end up with evolutionary biology. Which perhaps is going to happen some day with AI, once there are AI agents acting in real environments.
Turing's original point in his essay was precisely that due to this imprecision, you can't actually ask the question itself, scientifically. So, you have to replace it somehow.
He argued that a not-unreasonable substitute question is: "Would most people struggle to differentiate this machine from something that does think?" (i.e. humans).
And arguably, if the answer is, "yes", then arguing about whether the thing *really* thinks is useless.
I am not sure I agree that intelligence cannot be defined, but for the sake of argument I would say that Paul is not arguing out of context. He is criticising those who say these things are intelligent. After all, the burden of proof is on them.
He tacitly assumes that LLMs either do or do not have human-like intelligence, black or white, no middle.
But the comparison to ELIZA or Heider/Simmel is unfair because LLMs are orders of magnitude more complex than those.
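For a sense of the gap: an ELIZA-style system is essentially a short list of regex rules with canned reply templates. Something like this toy sketch (illustrative patterns only, not Weizenbaum's original 1966 script):

    import random
    import re

    # Each rule is a regex plus canned reply templates; the first
    # matching rule wins, otherwise a generic fallback fires.
    RULES = [
        (r"\bi am (.+)", ["Why do you say you are {0}?",
                          "How long have you been {0}?"]),
        (r"\bi feel (.+)", ["Why do you feel {0}?"]),
        (r"\bmy (.+)", ["Tell me more about your {0}."]),
    ]
    FALLBACKS = ["Please go on.", "What does that suggest to you?"]

    def respond(text: str) -> str:
        for pattern, templates in RULES:
            m = re.search(pattern, text, re.IGNORECASE)
            if m:
                # Echo the matched fragment back, minus trailing punctuation.
                fragment = m.group(1).rstrip(".!?")
                return random.choice(templates).format(fragment)
        return random.choice(FALLBACKS)

    print(respond("I am worried about these things."))
    # -> e.g. "Why do you say you are worried about these things?"

That's the whole trick: no model of the world, no learned representations, just surface pattern matching. Whatever LLMs are doing, it is not that.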
In fact, we KNOW that there's no magic fairy dust giving humans human-like thought.
After all, human brains are just computers based on carbon instead of silicon. No magic there either.
And, because it insists there's something qualitatively different about human thought, it flirts with dualism.
That’s why defining intelligence in terms of any linguistic output is flawed.
In which case, sure, we can/should expand beyond linguistic behaviour, but many of the same criticisms would still hold.
https://bsky.app/profile/jbarbosa.org/post/3l6iukubwyt22
"Thinking" is not a well-defined, objective property, like for example, "generates heat" or "is water soluble".