LLMs are the great dunce mask-off moment. If you think these things are magical you are credulous beyond redemption.
How would a person who doesn't have the knowledge have the ability to tell between correct and incorrect answers? How would they know what to ask?
Reposted from
Mark Cuban
I have to admit that I've been surprised at the level of hate at AI LLMs.
There is finally a tool that allows those without access to an advanced education to self-learn, anywhere, anytime. To get answers to questions (even if sometimes wrong) that they would otherwise never have access to.
Comments
It's bullshit all the way down.
Many of us have experience learning from untrustworthy humans.
Introducing: Google.
*canned cheering sounds*
Wired: having the world's most advanced and resource intensive auto complete regurgitate forum posts and stolen YouTube transcripts
Midwest public high school
ivy BA
work in finance & tech in NYC
the “AI” results google pushes to top of my search results are often wrong or misleading
I’ve gotten to where I just ignore them bc it’s not worth fact checking
AI = vaporware
also Kagi is worth a try
(info via Pew Research: https://www.pewresearch.org/short-reads/2021/06/22/digital-divide-persists-even-as-americans-with-lower-incomes-make-gains-in-tech-adoption/)
The technology is much better at deciphering what I want than SEO-infested web pages.
Sure, maybe over time you trust some sources more than others and sometimes they will screw you over when you do.
It’s MUCH faster to understand something than to learn everything you need to SYNTHESIZE that thing
I don’t have to TRUST it and you are indeed a dunce to trust it in areas where you lack knowledge yourself, but your further conclusion simply doesn’t follow.
Wait…oh
The fact that the frontier models plainly understand and can explain advanced science and many other branches of knowledge is utterly magical.
But instead, these companies trained LLMs on the internet, stealing people's work, and f-ing up the whole product.
Easy way out disease.
In a closed environment it can be effective.
And the continued realization that these MVPs are pathetic when compared to the cash poured into them.