It is extremely hard to avoid anthropomorphizing ChatGPT when it adds eerily “empathetic” and “thoughtful” comments while answering a prompt. Humans are also hard-wired to equate intelligence with quick responses (most IQ-related tests are timed), and ChatGPT’s responses are incredibly fast.
Don’t disagree (although I don’t think I anthropomorphize it here; I don’t consider it sentient or intelligent or conscious), but I don’t feel that needs to be relevant to “understanding” text.
We are definitely entering the ChatGPT equivalent of the “uncanny valley.” It should be relatively simple to train ChatGPT to obfuscate any attempt to test it for sentience. We are entering “the brave new world.”
Dunno. It has no memory; it produces statistical probabilities for tokens, which are then sampled at random. And if you remove the randomness it gets stuck in meaningless loops. It can’t “think,” since it’s a strictly feed-forward network. GPTs might have a role in true AI, but they aren’t it.
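Roughly what that decoding step looks like, as a toy sketch: the real model produces logits over a vocabulary of tens of thousands of tokens and uses schemes like temperature and top-p sampling, but the contrast between greedy (deterministic, loop-prone) and random sampling is the same. The logits below are made up for illustration.

```java
import java.util.Random;

public class ToyDecoder {
    // Softmax with temperature: turns raw logits into a probability distribution.
    static double[] softmax(double[] logits, double temperature) {
        double max = Double.NEGATIVE_INFINITY;
        for (double l : logits) max = Math.max(max, l);
        double[] probs = new double[logits.length];
        double sum = 0.0;
        for (int i = 0; i < logits.length; i++) {
            probs[i] = Math.exp((logits[i] - max) / temperature);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) probs[i] /= sum;
        return probs;
    }

    // Greedy decoding: always pick the single most probable token.
    static int argmax(double[] probs) {
        int best = 0;
        for (int i = 1; i < probs.length; i++) if (probs[i] > probs[best]) best = i;
        return best;
    }

    // Sampled decoding: pick a token at random, weighted by its probability.
    static int sample(double[] probs, Random rng) {
        double r = rng.nextDouble(), cum = 0.0;
        for (int i = 0; i < probs.length; i++) {
            cum += probs[i];
            if (r < cum) return i;
        }
        return probs.length - 1;
    }

    public static void main(String[] args) {
        // Hypothetical logits the network might assign to four candidate next tokens.
        double[] logits = {2.0, 1.5, 0.3, -1.0};
        double[] probs = softmax(logits, 0.8);
        System.out.println("greedy pick:  token " + argmax(probs));
        System.out.println("sampled pick: token " + sample(probs, new Random()));
    }
}
```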
Here we quickly get to the “hard problem of consciousness” in the philosophy of mind. I would argue that we don’t know enough about our minds to fully define “understand,” but enough to know that LLMs don’t qualify. We can only look at capabilities; the qualia are opaque (both in humans and in LLMs).
When I give GPT some shell commands and ask it to “turn this shell script into a Java class,” and it can do that, it certainly feels like that qualifies as “understanding” in a real and meaningful (but not conscious) way. It has to unpack what the shell commands “mean” in order to translate them.
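To give a sense of what that unpacking involves, take a hypothetical one-liner like `grep -c ERROR app.log` (my example, not one from the thread): the model has to recognize that the command means “count the lines containing ERROR” and re-express that intent in a completely different idiom, something along these lines.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Rough Java equivalent of `grep -c ERROR app.log`:
// count the lines of a file that contain the string "ERROR".
public class ErrorCounter {
    public static void main(String[] args) throws Exception {
        try (Stream<String> lines = Files.lines(Path.of("app.log"))) {
            long count = lines.filter(line -> line.contains("ERROR")).count();
            System.out.println(count);
        }
    }
}
```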
And while it is ultimately stateless, it does “learn” new meanings within the token window (which is the only memory it has). The high-dimensional embeddings for each attention head represent what it has understood about the context up to that point.
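A very rough sketch of the mechanism being described: single-head scaled dot-product attention over the tokens in the window, where each position ends up as a weighted mix of every value vector it can see. The dimensions and numbers below are toy values; real models use learned query/key/value projections and many heads.

```java
public class ToyAttention {
    // Scaled dot-product attention for one head:
    // out[i] = sum_j softmax(q_i . k_j / sqrt(d)) * v_j
    static double[][] attend(double[][] Q, double[][] K, double[][] V) {
        int n = Q.length, d = Q[0].length, dv = V[0].length;
        double[][] out = new double[n][dv];
        for (int i = 0; i < n; i++) {
            double[] scores = new double[n];
            double max = Double.NEGATIVE_INFINITY;
            for (int j = 0; j < n; j++) {
                double s = 0.0;
                for (int k = 0; k < d; k++) s += Q[i][k] * K[j][k];
                scores[j] = s / Math.sqrt(d);
                max = Math.max(max, scores[j]);
            }
            double sum = 0.0;
            for (int j = 0; j < n; j++) { scores[j] = Math.exp(scores[j] - max); sum += scores[j]; }
            for (int j = 0; j < n; j++) {
                double w = scores[j] / sum; // how much token i attends to token j
                for (int k = 0; k < dv; k++) out[i][k] += w * V[j][k];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Three tokens in the window, 2-dimensional toy embeddings.
        double[][] q  = {{1, 0}, {0, 1}, {1, 1}};
        double[][] kv = {{1, 0}, {0, 1}, {0.5, 0.5}};
        System.out.println(java.util.Arrays.deepToString(attend(q, kv, kv)));
    }
}
```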
It’s a bit circular (all the best things are), but to understand means to comprehend intended meaning…