join me, please, in calling them what they are:
predictive models
ChatGPT is a predictive text model
CoPilot is a predictive code model
Stable Diffusion is a predictive image model
they are not generative.
they predict what something you described might be like.
they do not make that thing
Reposted from Audley
This is, incidentally, why I hate that we're calling LLMs "AI".
They don't know anything. They can't judge anything. They can't remember anything. There's nothing intelligent about what they're doing.
Comments
True AI would be already trying to reach you about your extended warranty by now.
Each word an 'AI' produces is a probability-based guesstimate as to what should appear next, based on what has gone before.
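That "guesstimate based on what has gone before" can be sketched in a few lines. This is a hypothetical toy, not how a real LLM works (real models use neural networks over huge corpora), but the principle is the same: count what tends to follow what, then emit the most probable next word.

```python
# Toy bigram predictor: pick the word most often seen after the previous one.
# The corpus is made-up illustrative data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word, or None if the word was never seen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" more often than "mat" does
```

Nothing here "knows" what a cat is; it only reports which continuation was most frequent in the data it saw.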
An electrical generator generates electricity. Sure, you could argue that it transforms some other form of energy into electricity.
A GPT (generative pretrained transformer) transforms input into output. Doesn't generate anything, despite the name.
The same can be true of asking an LLM to “Write a persuasive argument against bigotry,” I suppose, right?
Generated electricity is, although there's no semantic connection here, generally useful.
GPT output isn't.
I would be fine calling electrical generators something else.
GPTs are not generative.
Generators generate current and transformers convert between current and voltage.
In my example, the “generator” is the extra wording in my prompt: “Generate a persuasive argument against racism.” That input is then also transformed.
Where’s the disconnect there?
If it decides to shoot, does it matter if this decision is predictive or generative?
Anything deeper than that and you'd need a team of psychologists.
I *am* genuinely curious about how you’re going about validating your LLM, and what explainability techniques you’ll use to provide wider confidence in its outputs, but I appreciate you might not want to say on an open channel.
These won’t be going away anytime soon because they CAN work. John Deere ain’t going to just do nothing with all that soil data, etc
Predictive AI uses large data repositories to recognize patterns across time. Predictive AI applications draw inferences and suggest outcomes and future trends.
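That "draw inferences and suggest future trends" step can be illustrated with the simplest possible predictive model: fit a line to past observations and extrapolate one step ahead. The numbers here are made-up stand-ins (think of the soil-yield data mentioned above), and real systems use far richer models, but the shape of the task is the same.

```python
# Ordinary least-squares fit over hypothetical yearly yield data,
# then extrapolate to predict the next year. Pure Python, no libraries.
years = [0, 1, 2, 3]           # time index (made-up data)
yields = [2.0, 2.5, 3.1, 3.4]  # observed values (made-up data)

n = len(years)
mean_x = sum(years) / n
mean_y = sum(yields) / n

# Slope and intercept of the best-fit line.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, yields)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

# Prediction for year 4: the "suggested future trend".
forecast = slope * 4 + intercept
print(round(forecast, 2))
```

The model recognizes a pattern across time and suggests an outcome; whether to call that "intelligence" is exactly what this thread is arguing about.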
but instead we get model collapse https://www.techtarget.com/whatis/feature/Model-collapse-explained-How-synthetic-training-data-breaks-AI
They are no more intelligent than my can opener.
They have no experience, so they cannot know anything.
Roger Penrose has written extensively about this.