In my younger days I had a roommate who would generate bullshit answers on almost any topic. It infuriated me, as my girlfriend at the time used to believe every word, and would actually argue with me that his bullshit was factual.
What are we missing that could give AI the ability to more effectively kill us in the future? A funny question. Here’s another one: Should we build AI just because we can?
I read the article from CJR, and while it's very interesting and points at a real flaw (AI search works less well than traditional search engines for plain-text queries), it does not demonstrate a true failure to answer accurately. The problem here is a small subset of all behaviors. Misleading headline.
I have gotten scores of false answers on questions ranging from how to change settings on a Linux distro to the demographics of New Zealand. It is *common*, not some obscure thing researchers are using for clickbait.
Also, I use Perplexity a lot. And while it's often incomplete or wrong, I happen to have learned a lot thanks to it. The tool is well thought out, and while not magical, it does some things very well when correctly used. The problem is on the side of the user and in dishonest marketing by the AI companies.
Users do not know what they are using, and I suspect some marketing teams do not know what they are selling. Or maybe they do, but they oversell it or do not present the accurate use cases.
The debate about whether all the fuss over AI is worth it is still open.
I'm not an AI zealot and I don't hold any equity 😁
I'm not saying AI search is accurate; I'm saying the article does not show it's inaccurate in the way the title implies. If we want to take a scientific approach, then yes, the title is formally misleading. Even if it's true, being "right" for the wrong reason doesn't make the case.
A year of my own research leads inescapably to the conclusion that Perplexity flat out steals content and scrapes in direct contravention of robots.txt and ToS. It is also highly likely to produce hallucinated results when clear answers aren't in its cache.
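For context on what "contravention of robots.txt" means in practice: a well-behaved crawler fetches a site's robots.txt and skips any path the file disallows. Here's a minimal sketch using Python's standard library; the rules and URLs are made-up placeholders, not Perplexity's actual behavior or any real site's policy.

```python
# Sketch of a compliant robots.txt check using Python's stdlib.
# The rules below are a hypothetical example, parsed directly
# from strings so no network request is needed.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A compliant crawler consults these rules before fetching a page
# and skips disallowed paths entirely.
print(rp.can_fetch("MyBot", "https://example.com/private/page"))  # False
print(rp.can_fetch("MyBot", "https://example.com/public/page"))   # True
```

Scrapers that ignore this check (or spoof their user agent to dodge bot-specific rules) are what the comment above is objecting to.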
Not surprising at all. Ubiquity of possible solutions seems to be the true focus of AI, and it overrides context and meaningfulness. They'll get it right sooner or later--too much too fast.
Some of the most bullshitting posts are there too. AI cannot gauge sarcasm or vet a source. It just regurgitates.
I think he may have been the basis for AI.