Gen ‘AI’ brings nothing. It only has what it was fed of other people’s work, which it can regurgitate into different, complex shapes, at the suggestion of its clients. It is digital input shaped by mathematics . . . describing things it has never experienced, never felt, never understood. 19/
Comments
https://youtu.be/160F8F8mXlo?si=QMkBoCxMrK-mS2Xj
AI vendors and advocates want us to give up on humanity and obscure what that even means.
Our conceptualisation of gen AI is all over the place and this explains it well.
Also, blessings on your mom for explaining what she does and why, and how and why it works. 🌸
We learn by forming gestalts, and they're personal; they don't come from definitions. We refine them over time through experience (and through errors).
This is also IMO part of the problem in math education. Definitions are necessary but students have an innate understanding of lotsa stuff which is stomped on by bad curricula/pedagogy.
It is the bastard offspring of a roulette wheel and a bag of Scrabble tiles.
(Best read in the voice of John Oliver)
We are software.
LLMs, including VLLMs, are clearly not biological analogues.
Simulation is not identity.
Image generators can (somewhat) reliably show you a dog after being fed millions of photos of dogs, but they will never *understand* what a dog is.
They simply *can't*, because that's not what they're programmed to do, which is why calling it "AI" is a lie.
The Sam Altmans of the world talk about human-like intelligence. But nothing they do is even leading to duck-like intelligence.
All kinds of garbage are in there, and trying to fix it is like trying to block holes in a sieve.
The algorithms "learned" and then did what they were designed to do.
If you don't like it, don't use it. 🤷‍♂️
We humans use words to describe things. But the words are not the thing described. Words are imprecise, because we interpret them through context and prior knowledge, which an LLM *does not have*. So the word-model does poorly as a thing-model.
That takes what these AI companies are doing from "ethically dubious"
to
"ethically MONSTROUS."
LLMs are a big step to AGI. Having an encoding of semantic space seems to be part of the AGI puzzle.
I’m not discussing AGI. We were discussing learning. People have encodings of semantic space, do you agree?
AGI doesn't require language, but we aren't going to back off ASI for AGI, so, pragmatically, LLMs will be the central component of AGI.
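(For readers unfamiliar with the phrase: here is a minimal sketch of what an "encoding of semantic space" can mean in practice, using hypothetical hand-picked 3-d vectors rather than anything a real model learned. Words map to points in a vector space, and nearby points stand in for related meanings.)

```python
import math

# Hypothetical, hand-picked 3-d embeddings; real models learn
# hundreds of dimensions from corpus statistics.
embeddings = {
    "dog":    [0.90, 0.80, 0.10],
    "puppy":  [0.85, 0.90, 0.15],
    "engine": [0.10, 0.20, 0.95],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Nearby vectors stand in for related meanings:
print(cosine(embeddings["dog"], embeddings["puppy"]))   # high (~0.99)
print(cosine(embeddings["dog"], embeddings["engine"]))  # low  (~0.29)
```

Whether that geometric proximity amounts to anything like understanding is exactly what this thread is arguing about.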
So not even they *really* believe it's learning like a person, that it's a mind.
What genuinely confuses me is that, as a child, I drew a family portrait. People had heads with legs coming directly from the head and arms extending from the sides of the legs. This despite my experience of people having necks and torsos.
Every writer I know encourages reading, especially authors whose experience you don't share, to be able to create richer, deeper narratives yourself.
Is this not the same stage of development as a child or adolescent learning?
Beyond the nature of language and learning, our abilities to 'think' and 'create' are NOT understood by us.
Humans cannot explain how we generate ideas; to think we can re-engineer it is a false hope and, frankly, insulting.
Mrs Dalloway said she'd do the creative work herself.