Google's response basically being "it's not making things up, it's just incapable of telling facts from made-up bullshit" would have destroyed a product in the days before the economy was propped up by a thin framework of scams lashed together with investment capital
Reposted from
Doctora Malka Older
www.pcmag.com/news/google-...
Comments
At this point the users of these products know what to expect.
But yeah the optics are bad.
I fear the decline in education people experienced during COVID is going to seem quaint compared to this
It's why good UI is rare and great UI is a unicorn (Apple circa the 00s).
Fuckup-ass companies can't stop themselves from lying. They're addicted to it.
#Idiocracy
"That's not gouda in the picture".
(Gouda usually doesn't have bubbles in it. If there are any, they tend to be big and solitary, not lots of small ones.)
It makes stuff up by putting words together. That’s all it does. It’s being advertised as intelligence but no actual brain is involved.
2008 Google could handle that task.
If that's something you have a need for then the concept has promise.
If you're expecting something that is able to think or *be correct*, not so much
NOTHING IS REAL.
A search engine that provides information with all the accuracy of Reddit.
It’s a computer error. It’s philosophically no different than a 404, except that its designers don’t admit it’s an error.
What's happening here, I think, is just they think they've created a Cortana or a HAL 9000, but it's still a few million neurons short of sapience.
For an LLM there is zero difference between something that is true, and something that is not true. There is not one line of code distinguishing the two. One is no more an error than the other is.
But if you put a duck in the car and sellotape its feet to the pedals, it is not making an "error" when it fails to stay in the right lane.
https://link.springer.com/article/10.1007/s10676-024-09775-5
LLMs have repeatedly demonstrated they have no concept of fact to ignore.
Bullshitting isn’t aimlessly making things up, it’s still intentional; the intent is to avoid negative consequences, a thing ChatGPT can’t experience and hasn’t demonstrated understanding of.
(my favorite game is "are you sure?" asked a few different ways; it's like being a villain in an Alfred Hitchcock movie.)
and the investors won’t stand for it so it doesn’t happen
Wanting it to produce factual info is like wanting water that isn't wet
They really are just hitting a button and then writing a press release about how awesome AI is AND how AI can't be held responsible for mistakes they don't check over.
The fact that they were trying to lie about its effectiveness and failed is ironically a matter of human error 🤣
New hotness: The economy is a thin framework of scams lashed together with investment capital.
Don't know how much more of this postmodern hellscape I can take tbh.
or what, they're free to, 0 consequences?