this is how you can tell somebody doesn't know what they're doing
LLMs produce unusable, wrong "answers" a significant chunk of the time.
even highly educated people are painfully gullible. all you gotta do is animate the text like it's typing and these suckers slurp it down like mommy's milk
Reposted from Mark Cuban
If you have zero education, but learn how to ask AI models the right questions, in many jobs you will be able to outperform someone with an advanced degree who is unwilling to use Large Language Models.
Just takes a smartphone, curiosity to experiment and a mindset to learn.
Comments
I don't think it's a good learning tool at all, partly because you will never know what it's gotten wrong.
I use an LLM to help me with work but only for structural and grammatical help, not content. It can also be helpful if you feed it something specific to help you analyze it.
I'd think curiosity would have anyone testing how it responds to things they know intimately, and observing its lies.
The correct answer even appears in every Google result about J18.