chatgpt is exactly as close to consciousness or experiencing emotions as a calculator, & it will never get any closer
Reposted from
The New York Times
“Can ChatGPT experience joy or suffering? Does Gemini deserve human rights?” our tech columnist asks. “Many A.I. experts I know would say no, not yet, not even close. But I was intrigued.”
Comments
It is a facsimile of introspection, optimised only for delivering an answer acceptable to the person issuing the query, with no regard for the actual truth.
As a side bonus, the first true AI, whenever it emerges, might not arrive already horribly traumatised.
It’s like expecting that if you just condition a dog to respond to a word enough times, it will suddenly learn syntax
>we were apes
We still are
>we learned syntax
We evolved it over hundreds of thousands of years; unlike the dog in your example, we never spontaneously learned it.
Science education defo needs to be improved.
the reason why we can't fly an airplane to mars is somewhat different from why we can't jump to mars; that's why i interpreted it as you saying "we cannot go to mars", sorry
If consciousness is just an emergent property of billions of neurons acting in sync, it’s not impossible that a complex network of transistors could achieve the same. An LLM isn’t conscious, but it might be a piece of the puzzle.
Like, I think it’d be hard to argue that a mite is self-conscious… but it has some necessary building blocks.
[I don't think any LLMs would pass. But I am curious]
In order to make a test to determine the consciousness of an actual sci-fi-like A.I., we need a more complete understanding of how human (and other animal) consciousness works.
They all had a parser: a section of code that broke down the sentences the player typed to interpret the action the player wished to take, then referenced a database to see if the action was possible and what the result would be (a rough sketch follows a couple of replies down).
(cont)
If something behaves as if it understands the rules (i.e. not just "usually correct" but "from all outside observations has internalized the rules"), can you prove otherwise?
(Whether they're themselves *aware* of their own "understanding": of course not, but that's a completely separate issue!)
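For anyone who never saw one of those parsers: a minimal sketch in Python of the verb-noun lookup described above (the vocabulary and responses here are invented for illustration, not from any specific game):

ACTIONS = {
    ("take", "lamp"): "You pick up the lamp.",
    ("open", "door"): "The door creaks open.",
}

def parse(command):
    # Break the typed sentence into words (a toy two-word grammar).
    words = command.lower().split()
    if len(words) != 2:
        return "I don't understand that."
    # Reference the "database" to see if the action is possible.
    return ACTIONS.get(tuple(words), "You can't do that here.")

print(parse("TAKE LAMP"))  # You pick up the lamp.
print(parse("sing"))       # I don't understand that.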
I think this is one of those examples a philosopher would point out where our natural languages are actually insufficient and confuse us with imprecise understandings of what the words we use even mean, bc they are commonly used in quite nebulous, inconsistent ways.
In the same way, chatgpt does not know what it's saying. It knows which continuation of the input text is most probable. That's it.
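To make "most probable continuation" concrete, here is a toy bigram counter in Python standing in for a real LLM (the corpus is made up; real models predict over far larger contexts, but the principle is the same):

import random

corpus = "i am sad . i am sad . i am fine .".split()

# Count which word follows which in the training text.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def next_word(word):
    # Sample in proportion to how often each continuation was seen.
    return random.choice(follows[word])

print(next_word("am"))  # usually "sad": pure statistics, no feeling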
(to be clear, the article itself is grifting for chatgpt, but i think this is a wrong position also)
Also keep in mind we're not *building* neural networks, we're sort of *evolving* them, and consciousness did evolve spontaneously
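A minimal sketch of that "grown, not built" point, assuming a toy one-parameter model trained by gradient descent (the data and learning rate are made up): we specify the selection pressure, never the final weights.

import random

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hidden rule: y = 2x
w = random.random()  # start from a random weight, not a designed one

for step in range(1000):
    x, y = random.choice(data)
    grad = 2 * (w * x - y) * x  # slope of the squared error wrt w
    w -= 0.01 * grad            # nudge the weight toward lower error

print(round(w, 3))  # lands near 2.0; nobody wrote that value in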
but do they really understand what chess is? no, not really, they're just solving the puzzle put in front of them. they don't know why they solve it, they just do (see the sketch after this exchange)
what is chess but this puzzle? what else would they need to understand to understand chess?
neural networks, however, are trained on cultural context, so the allegory only works if you ignore the cultural context
of course a chessbot isn't going to understand culture. that doesn't make neural networks not understand it
You could call it understanding chess in a sense, but not fully.
but then it's a bad comparison to make, chessbots aren't made to do that. and people are trying to make chatbots understand how people act (they don't rn, but)
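To illustrate "just solving the puzzle": a toy exhaustive game-tree search in Python on a Nim position, the same principle a chess engine applies with pruning and heuristics on top (this is illustrative, not any real engine's code):

def best_score(stones):
    # Players alternately take 1-3 stones; taking the last one wins.
    if stones == 0:
        return -1  # previous player took the last stone: we lost
    return max(-best_score(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    # Pick the move whose resulting position is worst for the opponent.
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: -best_score(stones - take))

print(best_move(5))  # 1: leaves 4 stones, a lost position for the opponent

The search plays perfectly without any notion of what the game "means", which is the sense in which a chessbot solves without understanding.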
However, to me, there is a distinct difference to someone who is "acting" friendly vs. someone who is friendly. Someone who is acting sad vs. someone who is sad. I'm not sure what the distinction is called, though?
is someone truly unfriendly if they always act friendly?
again, if by some minuscule odds it somehow does get sentience.
Not before the welfare of various people is taken seriously.
Besides, it's a computer program, not a living and breathing being
But it ain’t there. These are chatbots. The buzz is from techbros hyping up fancier calculators that make things like PRISM a lot easier.
*Open Python IDE*
print("I'm sad")
> I'm sad
"The computer science experts were adamant about this just being a so-called "string", but I was... intrigued"
As long as "consciousness" is an empty, meaningless placeholder you can ignore all those experts and spin whatever fanciful narrative you want
That's why they keep thinking they're "super duper close, no seriously" to true AI
What are we talking about
many so-called "experts" called the idea of chatgpt experiencing emotions as "impossible" and "fucking delusional" but I, a true visionary, had a thought: nuh-uh
Human rights for AI: ✅
Human rights for trans people: ❌
My underlying stance here is that intelligence - human, animal, computer, and alien - should be respected and represented in some way lest we create a new underclass. I don’t know the answers, but that’s my opinion on it.
That's because you're a dipshit
A buffoon
A child to whom jangling keys are a symphony
https://arxiv.org/abs/2308.08708
https://eleosai.org/papers/20241104_Taking_AI_Welfare_Seriously.pdf (PDF warning)
NYT is generally reactionary & attacks Palestinians & trans people, but that doesn't mean every idea printed in the magazine is wrong. These ideas came from scientists.
Most conscious humans struggle to know what a fact is