I am seriously concerned about the growing lack of discernment and critical thinking regarding the large amounts of content generated by ChatGPT, AI image generators, conspiracy and misinformation websites, and AI services offering summaries of books, research, and articles.
Comments
Their response is almost tautological: "well, we just need to use it so we can get used to using it."
I'm no luddite. I just don't use shit that doesn't help me in any way.
I'm proud of learning and understanding what I was assigned. I'm proud of having been able to grok it and interpret it in an original way, in a paper that would usually get a good grade, MYSELF.
I'm proud of having the ABILITY to do so.
I have noticed that at the grad level, LLMs are a tool that students who are extremely insecure about their writing lean on to accomplish...
I asked ChatGPT once to write a bio of me as an experiment. It got every single thing wrong. Why would I think it would do better at something else?
"Dream up some facts about me" gets you exactly what you paid for.
One might as well just write what one wants to write oneself at that point, and skip the bot middleman. It saves time and hassle.
This can be highly efficient for processing books at a time or extracting information from hundreds of emails. If you can tolerate a modest error rate, they're okay.
They aren’t good at “dependably accurate.” They’re only good at “sounds plausible.” Which is how they fool a lot of people into accepting inaccurate results.
But one has to be fully aware that the results are "hand-wavey, kinda sorta." Most people aren't, and it's being sold as a magical answer machine, so this stuff is wildly misused now. 🙃
https://www.opb.org/article/2024/12/09/artificial-intelligence-local-news-oregon-ashland/