Predictive text models do not ever "answer your question"
They predict what an answer to your question would probably look like.
Which is very, very, very different
Reposted from Katie Mack
I don’t think it can be emphasized enough that large language models were never intended to do math or know facts; literally all they do is attempt to sound like the text they’re given, which may or may not include math or facts. They don’t do logic or fact checking — they’re just not built for that
Comments
The wrinkle was that whenever the robot was asked a question about what someone else was thinking, it would just give the answer the person asking wanted to hear.
Okay fine, I'm citing this.
If your question resembles a question that was asked and correctly answered a lot in the training data, odds are good the prediction will look like the correct answer. But if not...
They're good at predicting answers to questions you can easily find correct answers to on the Internet.
There are a lot of questions that fall into that category!
And there are a whole lotta questions that don't.
I've heard it described as code that assembles words to produce something shaped like an answer to a question; and of course, just because something is shaped a certain way doesn't make it that thing. lol
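To make that "shaped like an answer" idea concrete, here's a minimal toy sketch of next-token prediction (the counts are made up, standing in for training-data statistics, and nothing like a real model's scale): pick whatever token usually follows, with no lookup and no fact-checking anywhere in the loop.

```python
import random

# Hypothetical next-token probabilities, a stand-in for training-data statistics.
next_token_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.6, "Texas": 0.4},
    ("of", "France"): {"is": 0.95, "was": 0.05},
    ("France", "is"): {"Paris": 0.8, "lovely": 0.2},
}

def continue_text(tokens, steps=4):
    """Extend the prompt by repeatedly picking a likely next token."""
    for _ in range(steps):
        context = tuple(tokens[-2:])
        if context not in next_token_probs:
            break  # nothing in the "training data" resembles this context
        choices, weights = zip(*next_token_probs[context].items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(continue_text(["the", "capital"]))
# Usually "the capital of France is Paris" -- it *looks* like an answer,
# but nothing here understood the question or checked a fact.
```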
Given how fluid and complex consent can be, I am dubious that there is an automated way to address these issues.
like this is super cool
https://github.com/datamade/parserator
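For anyone curious what parserator actually produces: it's a framework for training probabilistic parsers for messy structured text. A minimal sketch, assuming the usaddress package (an example parser built with parserator) is installed:

```python
# Sketch assuming: pip install usaddress
import usaddress

# parse() tags each token with its most probable role, learned from labeled
# examples -- probabilistic structure extraction, not question answering.
print(usaddress.parse("123 Main St. Suite 100 Chicago, IL"))
# e.g. [('123', 'AddressNumber'), ('Main', 'StreetName'),
#       ('St.', 'StreetNamePostType'), ...]
```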
Them: When do you get off work again?
My phone's suggestions:
"Whenever you want!"
"Hmm, I'm not sure."
I should have screenshotted it at the time b/c those are insane options to suggest.
Now, "We're all unbalanced here," said the MadChatter.
On topic: given the training data deals being minted along with generated posts, how long 'til Ouroboros becomes TheMascot™?
Was it all just BS to woo VCs?
that is not the same thing as understanding the question and answering it
The AI:
1. said answering was impossible
2. told me to ask this other website
3. gave an answer that was different from the other website
just like any scientist, a proper data scientist will prominently feature appropriate caveats so their conclusions aren't misunderstood
They are really good at translating text tho, much better than regular translation software.
But advertising them as such would not be as profitable to the company
1. you showed them your hand
2. asked "how many fingers do you see?"
3. they googled "how many fingers do you see?"
A: It would be a novel & unusual usage. In practice, not every "response" is an "answer".
"Answer" commonly refers to a "response" made in reaction to the linguistic meaning/semantic content of a question.
ChatGPT's statistical text prediction responses don't do this.
if I think it actually understands what I am asking about, considers it, and answers me, I would expect useful information about novel situations
if it's a probably-plausible machine? not so much