So next time someone tells you these things learn like humans, either they're lying or, like generative 'AI', they don't understand human beings. 20/20
Comments
Ultimately all generative tech does is predict what the next word should be. It's learning the pattern of language, not meaning. It's like if I gave you directions by telling you to take three lefts and a right, instead of showing you a map, or mentioning any landmarks.
Sure, we can make the model learn to maintain a consistent story and voice. We can tell it to use words it has identified as emotive and expressive. But if a human wrote by choosing only the words they considered the "most sad" or "most poetic", you would end up with haikus written like legal contracts.
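To make the "predict the next word" point concrete, here is a minimal sketch in Python: a toy bigram counter, nothing like a real LLM in scale or architecture, but it shares the basic move of picking a statistically likely continuation with no model of what the words refer to.

```python
# Toy sketch of "predict the next word": a bigram counter. Vastly simpler
# than a real LLM, but the core move is the same -- pick a statistically
# likely continuation, with no model of what the words mean.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample a continuation weighted by how often it followed `prev`."""
    options = follows[prev]
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts)[0]

print(next_word("the"))  # 'cat', 'mat', or 'fish' -- pattern, not meaning
```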
Yes! And there has been a lot of research on what it would mean to address symbol grounding through embodiment, e.g. robotics. Bullshitting is easier, though, so here we are.
Thanks for this thread. It gives me a better understanding of AI "thinking". As a TL;DR, I would use the analogy: AI can play the notes, but not the music.
I am simply not interested in text and other "content" generated by a computer. I want to learn and interact with other humans within a culture that values (or at least, understands) our collective concerns. I want my interlocutors to be morally intentional and accountable (even when long dead).
Even a perfect AI simulation of a human (whatever that might mean) could never _be_ a human or participate in human culture in the way that humans do. AI systems literally have no skin in the game.
AI vendors and advocates want us to give up on humanity and obscure what that even means.
As a couple of people have pointed out, the idea of understanding language through experience in the real world is referred to as 'grounding' or more specifically 'the symbol grounding problem' for AI, if people want to read up more on it. Not my area of expertise, but a useful concept to know.
Very well put! As a speech pathologist myself, co-signing, and also waving to mom McGann. :) I especially liked your simple explanation for why using the term 'hallucinations' for AI is inaccurate.
As one of many actors who protested and went on strike against AI replacing us, we argued that what AI can't do is remember the feelings of a first kiss, a broken heart, a dying parent, getting that dream job, injustice, wanting revenge, etc. AI can't project those feelings in how bodies move, talk, weep, shout, sing.
I asked Chatbox AI if it can feel fear. This is part of its answer: "Our purpose is to assist and provide information to the best of our abilities, without the burden of emotions."
Yes, it reminds me of discussions of the 'semantic web' back in the 2000s, and the sense that systems might be able to establish an actual 'understanding' of the words being used, or at least the understanding that those programming search tools had. It is interesting then to highlight, as you have done here, that AI at present does not 'get' the world and so makes these errors. My father has Alzheimer's, and while he can engage with what is around him and still remember who people are, his ability to make meaning or see why some things are done the way they are has gone. I feel that at present AI is in a similar situation. Like my father, it can produce coherent output, but there are errors and mistaken assumptions in it. Perhaps if we draw parallels to people with certain mental states that is not a surprise. It is almost as if AI is moving in the opposite direction, from suffering such a condition towards greater acuity.
Of course, progress might not get far, in part because AI does pick up the distortions and biases that humans put into their work, and increasingly it learns from the imperfect AI output that came before it, output whose quality the AI systems have no way to judge. The sources it could compare against (perhaps 'judgement' is too human a word, so let us say 'comparison') are themselves declining daily in how well they connect to what we know to be 'real', let alone 'correct'. I am fascinated by how what might seem a technical discussion soon shades into a
That's just associating different streams of data together: cup shape with sensory touch data, for example. A self-driving car does essentially the same by correlating lidar distance with changes in its own position and velocity. It may seem unfamiliar, but it is just associating patterns over time too.
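As a rough sketch of what "associating streams of data" can mean, the toy Python below (with made-up numbers) links two co-varying sensor streams by nothing more than a correlation statistic; real perception and real self-driving stacks are of course far more involved.

```python
# Minimal sketch of "associating streams of data": two sensor streams that
# co-vary can be linked by a simple correlation statistic.
# The numbers below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)
position = np.cumsum(rng.normal(1.0, 0.1, 100))              # vehicle position over time
lidar_to_wall = 120.0 - position + rng.normal(0, 0.5, 100)   # noisy distance reading

# Strong negative correlation: as position increases, measured distance shrinks.
print(np.corrcoef(position, lidar_to_wall)[0, 1])
```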
This is kind of why the "what is a woman?" or "what is a chair?" families of questions will always fail.
We learn by forming gestalts, and they're personal, they don't come from definitions. We refine them over time through experience (and through errors).
A chair with sharp points or brittle components, or one that is partially alive, may fit the AI's model, but a person would quickly and without thinking update their internal model to exclude it.
My internal understanding of "chair" is likely functionally the same as yours, but I couldn't well explain it.
This is also IMO part of the problem in math education. Definitions are necessary but students have an innate understanding of lotsa stuff which is stomped on by bad curricula/pedagogy.
As far as plagiarizing goes, you'd have a better case if all the training data were retained inside the AI rather than analyzed and used to update its own models and weights... just as humans take in other people's art and experiences to remix them as their own later on.
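A toy sketch of the "weights, not a stored copy" framing in this comment: one training example nudges a single parameter and is then discarded. Whether large models can nonetheless reproduce fragments of their training data is a separate, contested question that this sketch does not settle.

```python
# Sketch of "the data updates the weights, it isn't stored": a single training
# example nudges a parameter and is then thrown away; only the updated
# number persists. (A toy one-weight model, not a real LLM.)
weight = 0.0
learning_rate = 0.1

def train_step(w, x, target):
    """One gradient step on squared error for the prediction w * x."""
    prediction = w * x
    gradient = 2 * (prediction - target) * x
    return w - learning_rate * gradient

example = (2.0, 3.0)             # (input, target) -- the "training datum"
weight = train_step(weight, *example)
del example                      # the datum is gone; only the weight changed
print(weight)                    # 1.2
```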
We're blind to most of the electromagnetic spectrum. We can give AIs far better and more detailed sensory experiences that are denied to our biological systems. They'll be able to train on sensory data we can't comprehend. Will that make us inferior or just different?
We like to believe we're special, but all these supposedly human characteristics are just learned from recognizing patterns in sensory details, and I don't see any reason a computer can't do the same essentially. In some cases the computers will have better vision, hearing, etc than we do.
Our perception of the world is just a stream of electrochemical signals; we aren't completely different. Touch sensory data can be replicated as well. Seeing something in the real world is just sensory data from light hitting your retina, which computer vision can be trained on as well.
No we're not. You're describing the signals and completely ignoring what the signals are travelling through. There are chemicals having as much effect on what and how you're feeling as the electricity. You are not a piece of electronics.
They convey data across neurons by firing, do they not? They build up potentials that either fire a neuron or not, much as a semiconductor is on or off, no?
You're trying to define everything in terms of computing, reaching for a parallel for every biological system. They are not the same. Maybe some day there will be AGI, but it's sci-fi for now. People study consciousness their whole lives and can't explain it, you're not going to solve it on Bluesky.
More to the point, your argument is essentially that we can *simulate* brain activity in silico. This is not only reductionist but also irrelevant—if you wrote a chat bot that exactly reproduced what my dead grandmother would say, it still wouldn’t *be* my grandmother.
Simulation is not identity.
I saw a video the other day of a woman throwing her pet duck into a kiddie pool. Every time, the duck would get out, run over to her, and flap its wings until she did it again.
The Sam Altmans of the world talk about human-like intelligence. But nothing they do is even leading to duck-like intelligence.
To add: Transformer models don't "learn" through action; they "learn" when someone flips the 40-million-dollar-an-hour "learn" switch on the supercomputer.
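A minimal sketch of that training-versus-inference split, with toy numbers rather than a real transformer: generation only reads the weights, and they change only when a separate training run is deliberately launched.

```python
# Sketch of the "learn switch" point: in a deployed model the weights are
# frozen; generation just reads them. Learning only happens in a separate,
# explicitly launched training run. Toy numbers, no real model.
weights = {"w": 0.5}             # fixed once the training run ends

def generate(prompt):
    """Inference: uses the weights, never modifies them."""
    return prompt * weights["w"]

def training_run(data, lr=0.01):
    """The expensive, deliberate step where weights actually change."""
    for x, target in data:
        weights["w"] -= lr * 2 * (weights["w"] * x - target) * x

print(generate(10))              # 5.0 -- weights untouched by generating
training_run([(1.0, 2.0)])       # the "learn switch"
print(generate(10))              # 5.3 -- output changes only after training
```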
That's all very well, but "Text-Based Statistical Probability Generator" isn't a phrase that markets as well as "AI!" to the credulous gamblers who run venture capital firms and are easily dazzled by bleeps and bloops.
Brilliant thread. The focus of my writing about gen 'AI' has been how the original datasets are a completely unfiltered data vacuum, and as you pointed out, that egg can't be unscrambled.
All kinds of garbage is in there, and trying to fix it is like trying to block holes in a sieve.
That's literally the point I keep hammering. It's a fundamental law of computer science, and tech bros have no way to code around it. (I made my last rant about it my pinned post, just so I don't go off on it again.)
Or... they don't understand how LLMs work and are just fascinated by its mimicry, as if it were magic. I am coming around to its usefulness in various mundane tasks, so long as I am not using it as a creative-output plagiarism machine.
There is no ethical use for web-trained gen AI. It’s built on other people’s IP and it’s an environmental liability. If you’re using it, that’s what you’re supporting.
I will admit I struggle with the fact that I agree with many points you make while also working in tech, where the prevailing sentiment is inevitability. That has caused me to develop an extremely nuanced position that probably fits nobody's rubric save my own.
I also believe there is a high likelihood that web-trained AI models will collapse on themselves under the weight of consuming their own genAI content.
I’m all for ethical machine learning with properly vetted data sets. Giant web-trained LLMs are an obscenely expensive bubble. We’ll have to see what comes after it bursts, but I hope they take a more grounded route.
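For the collapse worry a couple of comments up, here is a one-dimensional caricature in Python: if each generation is fit to the previous generation's output, and rare cases in that output are under-represented, the spread of what can be produced keeps shrinking. It illustrates one proposed mechanism only; it is not a claim about real web-scale training.

```python
# Toy sketch of "models collapsing on their own output": each generation is
# fit to the previous generation's samples, and rare (tail) outputs are lost
# along the way, so the spread keeps shrinking. A caricature, not a model
# of real training pipelines.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 10_000)                   # stand-in for human-made data

for generation in range(10):
    mu, sigma = data.mean(), data.std()               # "train" on current data
    samples = rng.normal(mu, sigma, 10_000)           # generate synthetic data
    data = samples[np.abs(samples - mu) < 2 * sigma]  # rare outputs get dropped
    print(f"generation {generation}: spread = {data.std():.3f}")
# The printed spread falls from roughly 0.88 toward roughly 0.3 over ten
# generations in this toy setup.
```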
Thanks. What an interesting thread, explaining the complex with clarity. As a writer I will draw some succour from your analysis of AI’s lack of real I.
So now you've argued that it doesn't learn EXACTLY like a human, but you haven't proven it plagiarises any more than a human would, if asked to do that.
The algorithm "learned" and then did what it was designed to do.
If you don't like it, don't use it 🤷♂️
We humans use words to describe things. But the words are not the thing described. Words are imprecise, because we interpret them through context and prior knowledge, which an LLM *does not have*. So the word-model does poorly as a thing-model.
Not even that one. "Word" is, trivially, an example of a word. But the four-letter token [word] is not self-explanatory -- phrases like "in the beginning was the word" require context and cultural knowledge to interpret.
I don't know about that. A cup is a cup regardless of whether it is understood or explained as a cup. The same goes for "word": the four-letter token, or its local linguistic translation, is quite literally what it is, regardless of its understanding or explanation.
As someone who has, in the past, asked a question about the difference, I both recognise how little I understand about humans and appreciate this thread.
Also, even if they WERE right? Like, for the sake of the argument, say these people ARE correct. These AIs DO learn like humans?
That takes what these AI companies are doing from "ethically dubious" to "ethically MONSTROUS."
Right. Saying an AI “learns” is an analogy. Not an accurate statement but sometimes precise enough for a conversational context. In this conversational context, people are trying to be precise but most lack a deep enough understanding of how AI works to be precise. (You need to be an AI engineer.)
So ultimately we still rely on analogy, which means it's impossible to be fully precise. But I believe that if we accept that precision will be lacking, we can still advance people's understanding. People get closer to the "ballpark", as it were.
But you’re doing more than using analogies, you’re anthropomorphizing. You’re seeing language and assuming intelligence. I’ve listened to machine learning experts painstakingly explain, time and again, why LLMs are not a path to AGI. You’re talking as if they are.
Language did not create civilization in isolation; it did so as something used by minds in the physical world. Take away the minds and the physical world and it's just symbols and patterns that can never be understood. Read the thread.
Because it means every time they sell, edit, or delete an AI script they are selling, brainwashing, or murdering *a mind*. On a global scale, spinning up millions of minds and destroying them, over and over and over.
So not even they *really* believe it's learning like a person, that it's a mind.
https://youtu.be/160F8F8mXlo?si=QMkBoCxMrK-mS2Xj
Our conceptualisation of gen AI is all over the place and this explains it well.
http://oisinmcgann.com/no-ai-does-not-learn-like-a-human-and-this-is-why/
Also, blessings on your mom for explaining what she does, and why--and how & why it works. 🌸
It is the bastard offspring of a roulette wheel and a bag of scrabble tiles.
(Best read in the voice of John Oliver)
We are software.
LLMs, including VLLMs, are clearly not biological analogues.
Image generators can (somewhat) reliably show you a dog after being fed millions of photos of dogs, but they will never *understand* what a dog is.
They simply *can't*, because that's not what they're programmed to do. Hence why calling it "AI" is a lie.
LLMs are a big step to AGI. Having an encoding of semantic space seems to be part of the AGI puzzle.
I’m not discussing AGI. We were discussing learning. People have encodings of semantic space, do you agree?
AGI doesn't require language, but we aren't going to back off ASI for AGI, so, pragmatically, LLMs will be the central component of AGI.