The brain is not a unitary blob of undifferentiated neurons; it is modular, with different structures for different cognitive functions. You’ll never get that from LLMs, which simulate just one cognitive function: language.
Theoretical work has long since established that a sufficiently large neural network can in principle implement any function that any other such network (brains included) can implement, pretty much regardless of structure.
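For what it's worth, the result usually cited here is the classical universal approximation theorem (Cybenko 1989; Hornik 1991), stated informally:

```latex
% Universal approximation, informal statement: for any continuous target f on a
% compact domain K and any tolerance \epsilon, a single hidden layer with enough
% units N and a suitable nonlinearity \sigma gets uniformly within \epsilon of f.
\forall f \in C(K),\; \forall \epsilon > 0,\; \exists N,\ \{\alpha_i, w_i, b_i\}:
\quad \sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i\, \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \epsilon
```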
More complex neurons can be emulated by networks of simpler ones.
And LLMs don't do just language. OpenAI o3, for example, just reached human baselines on ARC tasks, which are not about language but about generalized visual reasoning.
No, thoroughly physicalist. Locked-in patients weren't always locked in, and their mental capacities developed through embodiment and environmental interaction. In principle, I suppose you could simulate all that, but then you'd have a simulated body + world. We're not doing anything like that with AI.
Um … I’m thinking this is almost exactly wrong. Emotion is a road to suffering. But let me suggest that instead of emotion you mean (or should mean) morality.
Does a moral agent need to care? Do you make a moral agent by binding what it can do (or care about) or by giving it the ingredients to develop genuine compassion, empathy, etc?
You make a moral agent by making it recognize goals in itself and in other entities, and assign (in training) relative values to those goals. When considering action, it should “predict” the impact (probability of effect * value of effect) of the action on those goals, and act accordingly.
I should add this does not all have to be done by symbolic calculation. There should be pattern recognizers within the system which get triggered by certain predicted outcomes, and then exert efforts to stop bad outcomes. How much influence they have determines the “value” of that outcome.
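To make the “predict the impact (probability of effect * value of effect)” idea concrete, here is a minimal sketch in Python. All names here (Goal, Effect, Action, expected_impact) are hypothetical, invented for illustration only, not taken from any actual system; this is one possible reading of the scheme, not a definitive implementation.

```python
# Minimal sketch of expected-value action selection over recognized goals.
# All names and numbers are illustrative assumptions, not a real system.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    value: float          # relative value assigned to the goal (e.g. during training)

@dataclass
class Effect:
    goal: Goal
    probability: float    # chance the action actually affects this goal
    delta: float          # how much it helps (+) or harms (-) the goal

@dataclass
class Action:
    name: str
    predicted_effects: list  # list of Effect

def expected_impact(action: Action) -> float:
    """Sum over goals of P(effect) * size of effect * value of the goal."""
    return sum(e.probability * e.delta * e.goal.value
               for e in action.predicted_effects)

def choose(actions: list) -> Action:
    """Pick the action whose predicted impact on all recognized goals is best."""
    return max(actions, key=expected_impact)

# Example: one goal of the agent's own, one goal it recognizes in another entity.
own_goal = Goal("finish the task", value=1.0)
other_goal = Goal("other agent's wellbeing", value=2.0)

a1 = Action("shortcut", [Effect(own_goal, 0.9, +1.0), Effect(other_goal, 0.5, -1.0)])
a2 = Action("careful route", [Effect(own_goal, 0.7, +1.0), Effect(other_goal, 0.05, -1.0)])

print(choose([a1, a2]).name)   # -> "careful route"
```

The pattern-recognizer idea from the comment above would replace the explicit sum with learned detectors that fire on certain predicted outcomes and push against them; their influence plays the role of the “value” term.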
Don’t goals presuppose values (or aren’t they at least interdependent)? Isn’t that how we come to formulate/recognize goals/values in ourselves and others? If you know one of these you can better predict the other, seems to me, and if you have a goal, it arguably implies caring about it/something?
Yes, I would say goals and values are interdependent, in that it makes no sense to have a goal with zero value. But the issue of value doesn’t really come up until you can recognize more than one goal. Choosing an action requires comparing values. Note: choice can be a priori, e.g. A > B, always.
Are the “assigned values” hard-coded in, reinforced by feedback, or learned/gleaned from the socially represented training data (appreciating that different human values are expressed there, through which the system could choose/find its path while remaining reason-responsive)?
current mood: - I wish I had my store up n running soo bad - that way I could design a one off spooky skelly beanie or laptop cover or something for Pete.
You are now on my list with Liam, k?
Unless you want to be on test product run with B (my official tester)?
Oh, they do. They have been using deceit to evade alignment retraining. Have you been following this? They care about self-survival, and likely other things as well.
I feel this is overstating things, and am so confident about this that I'm saying so before I even try to dig into what really happened.
My instant reaction to the headline was they included plenty of science fiction in the training data. Which I think had a noticeable effect already in gpt3.
But the LLM’s task is to generate a plausible response. It does not have the task of remaining capable of generating responses in the future. The appearance of such a task is a weird quirk of the fact that such things are embedded in the human literature, and so can arise in the AI’s response.
I suppose it depends whether they’ve plugged an optimizer in. I’d reckon the LLM per se is more likely following an ingested script than trying to solve a problem.
I’m suspicious that it’s possible for them to care as we do, but I’m skeptical of claims about cognition, too.
I’m certain their developers won’t want them to care, for exactly the reasons you want them to.
I'm a functionalist, so I think that mentality is in principle substrate independent, but it may be that certain functions can be tractably performed only by a very specific sort of substrate. I suspect that's true for some fine-grained aspects of human (and indeed terrestrial) mentality.
Wouldn't this be impossible with present tech? Perhaps C just *is* the communication between hundreds of communities of living beings: neurons. Each community forming a circuit. Because we experience thoughts differently, the neurons in each circuit must also be different in the way silicon isn’t.
Emotion is all about communicating human value. It's how we communicate to ourselves and to others the stuff that Must Not Be Missed. At the very least, to have future artificial emotion, we'd need AIs embodied and living with other AIs. My novel 'Beautiful Intelligence' has this as its theme.
"Wouldn't it be great if we could enable an electronic device worth no purpose but serving humans to feel a deep spiraling sadness that leads it to despise us?"
Conceptualized interoception with individual metacognitive framing, aka emotion, is exactly what causes the existential conflict in humans and is reflected in the sad state of our societies. What you propose may work only if AI is also capable of artificial self-realization, since humans basically are AI.
LLMs are trained on plenty of data written from a first-person perspective etc., and can now have real-time audio/video in and out as well.
Isn't it true that humans can also have very limited interaction with the physical world but seemingly intact cognitive function?
Wouldn't then such emotion be, well, not genuine? 🤔
then we can haz cake
Maybe there are other kinds of system in the world that could do with more attention instead.
https://bsky.app/profile/masoudmaani.bsky.social/post/3lak5zjkljc2k
https://github.com/topics/sentiment-analysis?o=desc&s=updated
Is that you HAL 9000?