posted some of my thoughts on using LLMs for basic tasks on tiktok and i’m getting cooked by people who insist that it is actually good to offload your critical thinking skills to a bullshit machine
Comments
Honestly I truly think this might be one of the leading causes of this mess in society. People's critical thinking skills are shot because God forbid they had to think
At the end of the day, they will lose. These people are just destroying their ability to learn stuff. So I don't care. Get your machine-led ass left behind.
the people who are just begging for early onset dementia bc they aren't using their brains anymore are mad you told them hey maybe don't let that become reality lol
Big fan, but you gave a limited, one-sided view of LLMs. Obviously you shouldn't offload your critical thinking skills, but they can be powerful tools. Check out Andrej Karpathy's video How I Use LLMs.
You’re not getting shit from me.
If people are too stupid to see AI will replace them…then I hope THEY get all they hoped for and leave the rest of us alone!
AI companies would seem to have pretty big incentives to wield pro-AI bot armies across social media and human cheaters have pretty strong incentives to defend their false laziness with the arguments provided by AI companies' wurlitzers.
My last art director used it all the time in lieu of being talented, and churned out some of the most generic game assets ever. We lost a bid for a new game because of it, even though we had a great concept artist on staff who was doing nothing. We’re about to see the enshittification of everything.
This is actually the reason "thinking machines" were banned in the Dune universe.
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
Anyone who thinks it's actually good to offload their critical thinking skills to a bullshit machine doesn't have any critical thinking skills to offload.
I had a friend say that anytime the group sent a message longer than two paragraphs, they just had chatgpt respond. I cut ties after telling him that if he can't be bothered to reply himself, I might as well not deal with him.
People who don't want to use AI for what it's good for (e.g. saving hours per week on corporate documents and spreadsheets) are welcome to compete with those who do, and it'll go about as expected.
I like thinking. The thinking is the fun part of a project. The research and the figuring out how to explain and communicate ideas is where the enjoyment comes from. What are you doing with all that free time you now have that is more meaningful than thinking?
There was nothing more dark to me than the end of the NYMag story where the college dropout was pitching his AI assistant that will tell you what to do on dates. A world where two people use AI to know how to be successful on dates and then get married and...what? Just default to AI forever?
/2 "We consider the greatest end of science is the classification of past data. It is important, but is there no further work to be done? We’re receding and forgetting, don’t you see? Here in the Periphery, they’ve lost nuclear power."
/3 "In Gamma Andromeda, a power plant has undergone meltdown because of poor repairs, and the Chancellor of the Empire complains that nuclear technicians are scarce."
/4 "And the solution? To train new ones? Never! Instead, they’re to restrict nuclear power. Don’t you see? It’s galaxy-wide. It’s a worship of the past. It’s a deterioration—a stagnation!"
If tiktok is giving you a headache, don't even dare look at LinkedIn. Every 2nd post is about how to use LLMs to do everything except personally interact with people.
I specifically don't use it for critical thinking. I use it for stuff like "write me the boilerplate for this app so I have more time to think about the logic because I won't spend a bunch of time typing." It's good at dumb, repetitive stuff.
AI goals: create a machine intelligence equivalent to humans that can serve as uncompensated, immiserated labor to people too lazy to actually do creative or intellectual work.
The implied promise of AI is that you can put the apple back on the tree of knowledge. I think that's why it's so appealing to fascists. They want to be free of the burden of intelligence. All this complicated thinking stuff just makes them anxious.
LLMs are not bad & can be quite useful; the question is what their proper role is and how we shield ourselves from unintended consequences.
Personally, I would like strong science based LLMs to moderate social media, to label misinformation as such & block harmful disinformation. Perfect for this!
How could anyone ever trust an LLM to give them credible health and medical information, when we know how they are populated with (mis)information? I like where your head's at, but the tech is not even close yet
I’ve been involved in health IT for 30 yrs.
Our tech broligarchs are shielded from scrutiny regarding harmful effects of social media on individuals & societal level… now same principle is being applied to LLMs
Will humans lose critical thinking skills if they outsource it to LLMs in a generation?
LLMs are gonna poison themselves on their own output, the same way inbreeding in a small genetic pool gradually produces worse and worse results.
These AI bros can't even answer what will be good about AI for the average professional. When all the work is automated to suboptimal results - how will people participate in the economy?
They have no answer but I'm pretty sure the "problem" they want solved is how to get wages to zero.
People said the same thing during the Industrial Revolution. I am certainly concerned about the potential for technology to lead to greater wealth concentration.
I would suggest the solution is change the tax structure and provide basic income.
I work in IT and I do find it useful if I want to experiment - like for a UI change so I don’t waste a real designer’s time with crazy ideas. But for my core work I avoid it like the plague.
That's also what really gets me in all this. Critical thinking *is* the fun part. It is, alongside acts/process of creation, such a big part of the joy of being alive. People just mass noping on all that is *wild* to me. 🤷
I really don't get it! No wonder everyone is disappointed and feeling a lack of agency all the time. There's no power or pleasure in spontaneously arriving at a beige destination nobody wanted to go to in the first place. Doesn't matter how many crappy cookies are at the end, LAAAAME
LLMs are ingesting anything digital. The plethora of false, naive, inaccurate, dumb stuff available means LLMs are false, naive, inaccurate, and dumb. Garbage in, garbage out.
3) even if i did, i would not ask them to do things like research because part of how i figure out WHAT i think is through the process of reading and sifting through information.
what if I don't want to spend hours researching a topic and just want to get a quick answer to why my tv remote isn't working? Not every question needs an in-depth or well-studied response.
The very worst thing that Google has done, as part of its ongoing enshittification -- perhaps even worse than giving people lackluster search results -- is to put some authoritative-seeming AI answer at the top of the page when someone enters a search request. Its information is often just *wrong.*
But over and beyond that, the way people treat ChatGPT and other LLMs as if they are search engines or research tools just demonstrates the degree to which we are all cooked as a society.
if it takes you hours of research to figure out why your TV remote doesn’t work i think that is evidence you need to spend more time on your thinking skills
if you spent more time troubleshooting your own problems and less time hoping the autocomplete algorithm will regurgitate a helpful reddit post instead of nonsense, it probably wouldn't have taken you hours to think of something as basic as making sure your devices are paired
I don't use LLMs. When I had an issue with my remote recently, and found I'd misplaced the manual, I looked the manual up online. The troubleshooting section was right there. Took about two minutes to find the answer, and another five to find the correct synch code (it's a universal remote).
Was talking to a group of attorneys who insisted using AI was “just like using a paralegal or a junior associate.” When I pointed out that no, it wasn’t, because they weren’t mentoring anybody, they told me they didn’t have time to mentor and that’s not their problem. Vile people.
I worked for a lawyer and she and I experimented one day to see if ChatGPT could summarize a couple of legal documents & judgements.
It did a horrible job. It missed key bits of info and it summarized the same doc 3 different ways, making different mistakes each time.
“iT wOrKs FiNe YoU jUsT nEeD tO cHeCk ItS oUtPuTs.” So I need to already know what I’m asking it to tell me in order to evaluate whether what it’s telling me is the truth? So the best case scenario is it tells me what I already know, and the worst is it tricks me into believing lies? Cool.
it's like a teacher who gave the class a reading assignment to assess, then collected the papers, then taught the same class the next day using the students' work
All the AI people think the final output is the only thing that matters when it is actually all the input (effort, research, reading, thinking, synthesizing, etc) that actually makes you smarter and more knowledgeable.
A colleague shared a script written by AI through prompts. It was pedestrian. But they were impressed. Then there are the religious beliefs popping up around AI.
This is only on audible, unfortunately, but it's worth listening to if you use audible.
I have some links I'll post if I remember. But essentially, people who use AI are showing decline in cognitive ability. Their brains are already rewiring neural pathways to rely on it as opposed to relying on memory.
It looks like this study was one of the ones I think was referenced in the article. I think it was like 3 months ago which is currently the American equivalent to a decade.
Also LLMs are all unconscious psychopaths, not "assistants", and people seem to be willfully ignorant of that fact. Regular folk don't seem intellectually equipped to handle the implications of that, nor interested in it.
I’m a legal assistant, being someone’s outsourced executive function for annoying little tasks is my day job, the lawyers I work for don’t ask me to do legal research for them bc that would be insane! *I* didn’t go to law school! I just transcribe hearings & mail letters & fill in templates.
And yes, I do those things better than a stochastic-parrot plagiarism bot, hence my continued employment. If a bot could transcribe accurately they wouldn’t have me transcribing. However, it can’t.
Interesting. I was a legal assistant/paralegal (lots of arguments back then about the term to use), in the olden days. When Lexis Nexis first came out I ran almost all the research on it for our litigation dept. -only a few associates cared to learn.
SMU law professors did our CLE training in Dallas, TX back then. They taught us how to research & write. The TX bar association was the first to develop a separate division for legal assistants, & I served on the Dallas bar's ethics committee and taught legal writing. We did research. 🤷🏻♀️
Ok. I guess times have changed. Now you have to take some classes to be a paralegal rather than just an assistant. My job is mostly clerical, I do a bit of drafting but usually just inputting info to templates, the lawyer makes edits before filing.
Ok. I started at the beginning in the late 1970s. There weren't any classes so initially the lawyers trained us and then got SMU law school to set up classes for us. I shared a secretary with the lawyers I worked with.
Biggest issue with LLMs in my opinion is how much critical thinking students are offloading. Why actually think of a critique to an argument when ChatGPT will do that for me? Why actually understand how a function works when ChatGPT can just explain it to me?
well... you'll get an explanation, but it won't necessarily be correct, and since the person asking doesn't know the difference, they won't be able to grasp that it's wrong, either.
Right... that's the point of what I said? The part where they will be unable to tell the difference between a correct and an incorrect response, since they lack the critical thinking skills and experience to know the difference?
And it's not just students. We're seeing this at my company with people
who are applying for jobs and using an AI in the background to do the code. They literally are unable to answer questions we ask them about what a particular line or function is doing. In one case, they could not answer why something the AI generated would in fact never execute.
It's sad to see philosophy dying. The mental organization of writing things down and figuring them out* in good faith (checking yourself for fallacies, inconsistencies with your beliefs) is an important beginning step.
*recording&reviewing counts, too, for differently abled ppl
The fact that there are already people (probably a LOT of people) who think LLMs should take the place of interns or assistants is scary.
The human you interviewed/hired to help you has a lot more skill AND vested interest in their job. The other is just a computer program vulnerable to GIGO.
No, don't offload to a machine; that won't get you to consider the things you are researching in a different way. But a human assistant just might, and you can also influence their view, too. That's a strong part of what teamwork, diversity, and inclusion offer all of us who share in them.
Yeah, a human assistant can collaborate on a process in ways a robot can’t. Also there are tasks that are delegatable to assistants and tasks that are not.
that person's retort was silly, but also any intern who got shit wrong as often and as badly as chatgpt, or who engaged in wholesale plagiarism like chatgpt, would be fired I mean *immediately*
"here's that whitepaper you asked me to write. fyi I included a bunch of 'hallucinations' & cited articles that don't exist, plus the prose is dogshit. so you'll want to go over every inch with a fine-toothed comb & probably end up rewriting it or you might lose your job" very helpful, thanks intern
Guy I know saw Will at a Nationals game (long ago, obviously) and said a passerby went up to George and said, "I KNOW YOU! YOU'RE ON TELEVISION! You're…YOU'RE...TOM CLANCY!"
I'd have paid to have been there for that moment, if not for the game.
Am I crazy on this? Because I kinda feel like I'm crazy. Doing research and reading banal texts is kinda fun... I always liked doing research (less than what the research leads up to), but I personally wouldn't want to hand my role in reading and digesting over to a summary machine (that doesn't work)
Great point. I had an assistant for a time and a lot of what I did was developing her to be able to do the work and reach higher levels (expected in academia). When I asked her to do non-routine research tasks, we would then meet and go through the work so I would understand it, too.
Exactly. I always had research assistants (as a professor), but their job was to help me stay on top of the latest findings, which I would then review on my own, and never to "do the research."
Right, it’s one thing to employ someone to regularly scan a research database for new entries with relevant keywords and bring you the filtered results, it’s very much a different thing to employ someone to do all the reading for you and give you a maybe-wrong bullet-point summary.
And even if you did the latter, the intern/human assistant doesn’t drink a whole lake in the process and is at least actually capable of cognition, not just putting words in a statistically-likely order and thus often saying totally wrong things.
4) even if you did ask them for research assistance, you could chat with them as another human being with critical thinking skills, and they would be accountable for mistakes
that point is so dumb bc you are using your critical thinking skills to choose that employee for their capabilities. Capabilities that include not hallucinating information.
Also if you are replacing interns with ChatGPT, you are irreparably destroying the future of your industry. People have a moral duty to help pass on their work to younger people.
One of my hot takes is that automating a bunch of the previously labor-intensive parts of traditional animation with computers in the 90's and 2000's completely wrecked the career/skill path for the industry, and is one reason why 80's and 90's stuff looks so good compared to a lot of modern animation
This is my hot take for a lot more things because people no longer have comparisons to make or the mechanical skills to pass down (or build on). It’s why I think every culture should have a handful of artisan or artisan as art laborers who keep the material knowledge alive.
I’ve never used LLMs, but I wonder if they can be used to compensate for finite memory when dealing with a large body of research—like if you forgot what you already read or can't find it in your notes. Or to allow you to analyze enormous bodies of tedious material by asking it to find patterns?
It's also worth noting the quality of AI responses have degraded in the past 15 months since they started training on user prompts and conversations. Most prompts are mediocre in nature and the neural networks are adjusting to mediocrity. Additionally, poison datasets are further degrading quality.
Nothing exemplifies the phrase "when you have a hammer everything looks like a nail" as well as the AI fad. We tend to do this with all new technologies but it's especially apparent now.
Students using AI to write papers for class is one of the big crises facing higher education. Their goal is to complete the assignment with the least effort, not actually learn the material. And the AI generally creates a poor answer that doesn't actually meet the assignment.
I've tried using our campus-supplied "GenAI" tool for the relatively simple task of converting a transcript of a technical presentation into documentation, and found it more work to clean up the LLM-generated gobbledygook than to just write the documentation from my notes.
This is very similar to the situation found in Asimov’s “Foundation” series and it leads to the collapse of galactic civilization, sentencing billions to centuries of barbarism.
The genie is out of the bottle, and it is going to be used. I'm not in disagreement about the intellectual atrophy that occurs as more of our thought processes are automated for us.
Just as most things will be automated soon. This is just consensus reality. It will be pushed upon the masses...
I don’t believe in the existence of critical thinking as a learned skill, like sewing. If critical thinking = putting what you already know together in new ways, the biggest problem is that college students aren’t learning anything bc of LLMs. They have nothing to think with in the first place.
I don't get Jamelle. It's not an end all, be all. Kenyans are entering the information it spits out at $2/hour. It's absolutely hot garbage and I don't understand why people think it's an authority on anything. I haven't ever used one. I never will.
their brains were eaten by A.I. zombies and replaced with LLM bullshit! If you ignore them they'll get stuck in a logic loop and go away, mumbling about how great being lazy and not thinking for yourself really is.
I find LLMs are good creative partners. I do a lot of AI imaging and have found it comes up with useful analysis and ways to be more efficient. They need to be guided but can offer good suggestions. You must take what they say with a grain of salt if it's information.
One of the reasons 64% of Americans have a reading level less than that of a 6th grader is that they have offloaded most of their ability to think critically, or to read anything that isn’t on Tic Toc or Faceplant. Pathetic!
Good question. I do know that when I finished elementary school in 1966 I had an 11th grade reading and comprehension level, and I was just a middling student. I also know the US led the world in educational standards. Now, among all major industrialized western nations, we are at the bottom.
I feel so freaking blessed not to be a kid during all this mess of LLMs, social media overload, etc., I have a lot of sympathy, I do, but good God the skills I got growing up before dial up are invaluable
They should realize critical thinking skills are what you need when using an LLM, same as when you get advice about home repair from your neighbor over the fence. Does what they said make sense? Is it logically coherent? Do the numbers add up? What does that word really mean?
I mean...many people on TikTok believed that there was an infinite money hack involving writing checks to yourself because someone said that it worked on TikTok...many people have already given up the idea of fact checking what someone tells them.
Yeah, I don’t think I’m an LLM evangelist, but I do think they’re useful. It’s like having a friend whose skillset (and even competency) is unclear to you… which is prob more often true in real life anyway.
It produces answers and ideas, thoughts that can be worth mulling or using at low stakes
I know nothing about VBA, but it’s produced really workable VBA that helps me do annoying, tedious tasks at work. If I had the time and didn’t have an unmedicated ADHD brain, maybe I could take hours and hours to learn it, but I don’t
All of your arguments are identical to an argument that no one should use Google / Wikipedia and should instead ask a human at a library all of their research questions because it will be of higher accuracy and quality (which is true).
I have yet to find an AI bot that will generate an image of a Putin->Trump->Elon->MagaHatter human centipede, so it’s pretty much worthless technology IMO.
ChatGPT is getting visibly less accurate. Told me my short story was ten times longer (50k words) instead of the actual length (5k). LLMs are in danger of becoming stupid on their own data
I'm just amazed by the number of people willing to stand in front of their bosses, or a judge, or anyone else in authority and say with absolute confidence, "Because ChatGPT said so."
The same people who insist on going to the gym 6 times a week have concocted a system where a random number generator composes all their communications, organizes their lives, separates fact from fiction, and makes most of their decisions for them.
I've had to investigate AI for my job as a WordPress developer and probably 60% of the time it gives me commands to try that don't actually exist. And that's as straightforward a task as I can imagine, they're all right there in the WordPress Codex along with actual examples.
To be fair, there have been times when it suggested approaches I hadn't considered that were helpful. Though even then I had to make adjustments to bits that didn't work. I don't know how a novice would do that; if you don't know how to code, how can you identify and fix suggestions that are wrong?
People who don't write software will say "I heard it's good for coding" and it's just... not. A series of characters statistically likely to follow your prompt may be good enough to fool a person chatting with the bot but it's not good enough to fool a compiler.
It's wild to me how little thought people put into what impact this stuff does and will have. I saw a job posting the other day with a disclaimer that they use ai to summarize interviews, evaluate my answers, and judge my facial expressions. It was a big GTFO when I read that part.
And on the basic stuff like you mention, why even bother? Why tell a machine to summarize messages for you? Are we that lazy that taking an extra minute or two to read is too much?
I love technology but the direction it's taken is infuriating.
Anthropic (the company behind the Claude chatbot) explicitly tells job applicants not to use a ChatBot to write their application materials: "We want to understand your personal interest in Anthropic without mediation through an AI system". Not exactly a ringing endorsement of their own product.
If I found out that someone is using LLMs to write why would I value anything they wrote ever again? Machines are for doing things that humans don't need to be involved in doing. Thinking is not one of those things.
This right here! It’s the ultimate creation by committee. Everybody who says they’re sick of Hollywood not having any new ideas, as an example, should hate this concept. And yet…
It's the "80% problem" described by LLM users: it can output 80% of a quality project, and getting it to output the remaining 20% will be impossible. So you end up spending tons of time fixing up the output anyways. This is true for nearly any technical project being done thru LLM.
yep, and it isn't useless either! but it does require the knowledge to know limits and capabilities so you don't get stuck spending all your time fixing the output.
the idea that c suites think they can replace their engineering teams with "vibe coders" is comical, but the allure of cost savings...
It's hard being in a technical role where I basically use it every day for minor tasks or things I don't need to build my understanding towards, then going online and seeing (gestures variously) every possible take on it held simultaneously even if those takes are mutually exclusive
I have a friend at msft who is "building my way out of a job" (he works on VSCode LLM stuff) and when I ask him why they're doing it willingly, he just says their vibe is "if we don't, someone else will"
IMO, using AI is just lazy, and there still is a premium placed on voicing one's own thoughts. Not to mention the real threat posed by the diversion of enormous amounts of energy and water to running the machines that churn out deepfake videos.
Cognitive offloading is a fundamental human trait (counting on fingers, writing in a journal). I'd argue LLMs aren't about offloading, they're about replacing. To the extent the LLM is doing the "thinking," you're not thinking. If they're just gathering info, fine, but they're not reliable yet.
I read too much science fiction back in the 50s and 60s to ever, ever be happy about or trust AI… as soon as you give up thinking for yourself and give it to a machine, the machines own you!
After reading the devastating NYmag article yesterday, I talked to my son, who is a freshman at Colgate. So gratifying that his response to us was "we're not spending 90 grand a year here for me not to learn". At least some of the kids are all right.
They're literally useless at it. I gave OpenAI a really simple task and it totally failed. The attitude you get back from OpenAI is akin to a begging drunk promising they'll never do it again (Hegseth confirmation hearing), and from MetaAI, an aggressive, nasty drunk (Hegseth Easter Egg Roll).
They are a supplemental tool. People that use them as their primary tool are choosing stagnation over growth in whatever skill sets are relevant to the task. Plus, they're going to get a lot of things wrong.
You absolutely need to double check. I haven't used Gemini much and cannot remember if you can track its sources, but that is essential. Oftentimes, certainty is given in an answer when reality is far from certain. Less frequently, the LLM just gets things wrong.
Do you have any concept of what the point of an internship is? When I delegate lower level tasks to an intern, it’s so the intern can learn how to do them! It’s how you get the next generation of workers. chatGPT cannot and will not “learn”!
Yea and I’m pointing out (along with pretty much everyone else in these replies) that your analogy sucks and is bullshit. If you’ve got a different point, make it better. Because clearly this analogy didn’t do it for you.
That’s an interesting analogy. When I’ve worked with interns, they are usually expected to prove their ideas actually work. When an AI can do that, it will be a lot more useful.
Yes, but also the intern is an actual human who presumably benefits in some way (money, experience) from the situation. The AI is nothing. It's anti-life.
No. I agree with you on that point. I’m saying there are other non critical-thinking use cases for AI. Now those may come at an unacceptable cost as well, but that is a separate issue than the one raised by the OP. I’m not defending AI.
That's a good analogy to argue against using AI in education, especially as a student, but sometimes you just need a heavy object lifted. Forklifts are useful tools. That doesn't mean they can't be used improperly.
I have never delegated a task more complex than "implement a single, well-specified function" to an intern, so no, not really.
So far, no LLM I've tested has had a success rate with that task that even comes close to my worst interns on their worst days -- and the interns learn from their mistakes.
What tasks do people delegate to an intern that require critical thinking skills? The barista will make the coffee for them and the dry cleaner already knows how much they charge for three shirts.
I don't think you understand *my* point. I'm making a joke based on what you think interns do, not what they actually do. Interns learn the job and they do so through discussions and real world practice. They aren't just free personal assistants.
No. I’m not. There are basic tasks that can be delegated. The OP appeared to suggest that handing ANYTHING over to ChatGPT was offloading critical thinking. I could be wrong about the latter.
Depends on the task. Massaging something I wrote is sometimes good, if checked. Summarizing is usually inaccurate. LLMs that do citations are an improvement. Code generation is often good.
LLMs are not usable as-is for critical factual tasks. They're like a sloppy, untrustworthy intern with an English degree.
Delegating tasks to an intern and then reviewing them is a human interaction that allows for sharing of knowledge, review, and improvement of knowledge bases on both sides.
Also, separate from your question, interns have motivations to come back with correct answers. ChatGPT has none.
Hi, you're a stranger who wrote something really stupid on the Internet and I'm having a bad day so I'm piling on your bad post with nothing of value to add. We both suck! Maybe we'll do better tomorrow!
Do you not understand the difference between a human being and a plagiarism machine that gives you the answer you are statistically most likely to want?
I’m at least confident you are the author of your insult. It’s a rather easy question to answer. Point is, there are some tasks that can be delegated to an AI application that do not result in the erosion of your critical thinking.
We’ll end up spending more critical thinking time trying to figure out what tasks can safely be delegated to AI than just applying our own critical thinking to them. Man, the first time I realized ChatGPT was hallucinating publications to me, and then lying about the “author” I was done w it.
You think interns hallucinate or lie about aspects of their jobs? Or PAs?
I'm super curious how you got the idea that an intern was synonymous with a PA. Because that's not correct. An intern is an apprentice. Your job is to train them in how to do what you do.
No. I don’t think that. Just saying there are other use cases for AI that don’t require you to relinquish your critical thinking. Using interns as an analogy appears to be somewhat fraught.
The fundamental difference between an intern and ChatGPT is that an intern has the capacity for cognition and ChatGPT does not. It’s a crap analogy. An intern can think critically, if they’re any good. ChatGPT can’t think at all, critically or otherwise.
Well they'd have to know how to do it to even assume in the first place, wouldn't they... and now we're back around to "you need to already know the answer to ask the right questions".
TikTok has a weirdly strong pro-AI stance, but I have seen more anti-AI stuff in just the last day and that might be because you're sharing those videos so please keep doing it.
To be fair, most of my thoughts are bullshit, and automating them really lightens my cognitive load. Lets me focus on the bullshit that really matters, y'know?
Fortunately, there's no reason to listen to those who have voluntarily abandoned critical thinking. Even better, they save you time by declaring it publicly!
Seeing people using LLMs more and more at work. Quality of work has declined as they started using it. Have to rewrite emails or documentation because the inputs people use are off the mark and the LLM has no clue what it's being given due to the nature of our business. Using LLM = more work for everyone
They are so wrong so often with basic facts, as lawyers keep finding out, that they are really only useful to me for helping summarize and suggest edits of my own writings or to find specific sources.
I can’t imagine a critical mass of people disciplined enough to have a bot teach them to think critically; how will the tutee know the questions to ask? “I’ve just read a poem. Please help me understand it better. You are a college English teacher. Please challenge me constructively step by step.”
All I can think of when AI comes up is the water crisis we're already having and how much worse it will become. People will literally die so billionaires can try to avoid paying people wages.
I agree, but the one circle I cannot square is: the worst personal and social outcomes we are seeing all look like an organic arms race between people who need money or access to make money. The incentives are difficult to resist. College applications and schoolwork, online job applications…
I was very lucky to have a professor for my AP physics in high school in the 90s who insisted you never take a chart, data, essay, or story at face value. True observational science requires questioning and doubt. He ended with: never feel bad for questioning; it reveals what is true and what is not.
I don't find any reasonable way possible to connect critical thinking to LLM use in my everyday use of technology. I know there must be *some* people out there using what looks like critical thinking to manipulate AI into creating results they connect to their critical thinking process. Okay. Fair.
BUT- that is the minority of users, imo, and this is a problem because it is not the minority of users who believe they're using critical thinking skills (CTS) when accessing AI technologies. It's not CTS to ask it to skim a document and summarize it for you- it's using a tool to simplify a source.
3. And that's okay if it helps or if it's an accommodation that someone finds useful- I have occasionally used Google's LLM to go over sources for me for this reason when I was especially overwhelmed d/t being auDHD and dyspraxic. If AI didn't have as much environmental impact I likely wouldn't mind
4. nearly as much about it as I do, it would become "only" an issue of ethics, how people learn and them doing themselves a disservice if they use it in school/higher ed, etc. I don't appreciate it because of how easily people see it as an opportunity to exploit- time, their own worth, others' work,
5. Etc. In one of my classes about half the class used AI to respond to a *critical thinking analysis* assignment 🙃 It was ridiculous for them to try it at all, but to try it on a CTS analysis... shameful. Why bother with higher ed? A bad or late answer would have been better than using AI.
I have shared my prof's response on several threads about AI use lately. It's not some innate evil, but users can abuse it easily and in ways that aren't to their advantage in the long run. I do think there's value to learning how to use it well, and it's not the be-all answer some tech cronies wish it were
I'd love to see you chat with your colleague @kevinroose.com about this topic! I lean on your side of the LLM debate but appreciate that he acknowledges the skepticism, and it sounds like you've both been savaged by the weirdest people on opposite sides.
It was great for taking out the cuss words and making me sound more polite than I was feeling in my resignation letter when I left a job I loved because of an asshole boss I could no longer work for. 😳 That's a task I'd recommend it for. 👍
I mean I like it but I ain’t gonna ever stop maintaining a healthy dose of skepticism towards it. It’s a MACHINE, people! Don’t ever forget The Legend of John Henry and stuff like that. Oh wait they probably don’t teach that story in school anymore. (70s child here 😅) https://youtu.be/Y46f2C3epeU?si=PesSEHEkn4zxjdNO
Some of the big boring jobs are just perfect for AI, but some co-workers are happy to shift to a button pushing data mover, instead of a creator, who uses AI to augment, they are letting it do all the work.
When we figure out what is screwed up it'll be too late.
I definitely use AI to generate prompts for socials for my client, but the output ALWAYS needs a human to build out the copy. Ngl, it is a gift on blank page paralysis days to just have a starting point.
Jamelle, they are supposed to be an aid, not a "here, do this for me." I use AI as a research assistant, and the results are just breathtaking. It's like having a dedicated assistant with multiple advanced degrees who can organize ideas for my consumption. AI doesn't think for me; it just does the scut work
Recently watched respected colleagues crow about using AI to write emails, client letters, etc. Not only is the AI writing incredibly poor, you don't have to give corporations a reason to do layoffs, they'll find them on their own 😅
I spent part of last night asking one some important questions and then correcting its answers. What seems strange is how quickly it would agree with me. I asked if it would make the same sort of mistakes in the future if another person asked and it said it wouldn't. I'm going to ask again and see.
It's exhausting to see how quickly people were willing to shut off their brains entirely.
https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
This is the perfect description of what it means to be a Republican.
Welcome our Robot Overlords now!
Somebody ping me when AI dusts/ cleans blinds/ finishes the dozen or so household projects I’ve been meaning to get to.
It's for lazy, stupid people.
https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/
“it’s not trained to generate *accurate* answers, it’s trained to give *plausible* answers. which just makes the inevitable errors harder to catch”
Humans doing a great job analyzing and seeing patterns, critical thinking...
Personally, I would like strong science based LLMs to moderate social media, to label misinformation as such & block harmful disinformation. Perfect for this!
Our tech broligarchs are shielded from scrutiny regarding the harmful effects of social media at the individual & societal level… now the same principle is being applied to LLMs
Will humans lose critical thinking skills if they outsource it to LLMs in a generation?
And their approach is wrong. They're a very expensive carny trick.
LLMs are gonna poison themselves on their own output the same way inbreeding in a small genetic pool gradually produces worse and worse results.
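The degradation described above has been demonstrated for simple distributions: if each "generation" of a model is fit only to samples drawn from the previous generation, the fitted distribution tends to narrow over time. A toy stdlib-only sketch (a seeded Gaussian illustration of the general phenomenon, not a claim about any specific LLM):

```python
import random
import statistics

def one_generation(mean: float, std: float, n: int, rng: random.Random):
    """Draw n samples from N(mean, std), then refit mean/std to that sample."""
    sample = [rng.gauss(mean, std) for _ in range(n)]
    return statistics.fmean(sample), statistics.stdev(sample)

def collapse_demo(generations: int = 50, n: int = 100, seed: int = 0):
    """Repeatedly train each generation on the previous generation's output."""
    rng = random.Random(seed)
    mean, std = 0.0, 1.0  # the original "real data" distribution
    stds = [std]
    for _ in range(generations):
        mean, std = one_generation(mean, std, n, rng)
        stds.append(std)
    # Finite-sample refitting makes the fitted std drift; over many rounds it
    # tends to shrink, i.e. the model loses the diversity of the original data.
    return stds
```

Each round uses only the previous round's output as training data, which is exactly the feedback loop the comment is pointing at.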
They have no answer but I'm pretty sure the "problem" they want solved is how to get wages to zero.
I would suggest the solution is change the tax structure and provide basic income.
What I can tell you is this isn't going away.
2) i don’t have an intern or assistant
3) even if i did, i would not ask them to do things like research because part of how i figure out WHAT i think is through the process of reading and sifting through information.
If you never take the journey, the goal is meaningless.
You have to read it yourself, write it yourself, paint it yourself.
There's no cliffsnotes for life...
ChatGPT is a machine whose sole function is to generate sentences that sound like what it thinks you want to hear.
Not to research or gather information that it determines is true, because it can't make that assessment.
It took you more time to pull up the AI and find the answer than it would have to do the most basic troubleshooting steps.
Your reply just makes you sound dumber.
Answer is yes. Billionaires would love to eliminate all those gross “employees” that they have to “pay”.
It did a horrible job. It missed key bits of info and it summarized the same doc 3 different ways, making different mistakes each time.
Lawyers, please DON'T USE IT.
This is only on audible, unfortunately, but it's worth listening to if you use audible.
https://www.audible.com/pd/Captured-Audiobook/B0DZJ5W4Y7
https://www.nature.com/articles/s41599-023-01787-8
But the biological functions of another human being has other positive outputs as well.
And it's not just students. We're seeing this at my company with people
*recording&reviewing counts, too, for differently abled ppl
The human you interviewed/hired to help you has a lot more skill AND vested interest in their job. The other is just a computer program vulnerable to GIGO.
I'd have paid to have been there for that moment, if not for the game.
I think the core issue is believing that the raw output of an LLM using only its own training data is going to be good.
You need to give the model relevant context to get useful output.
This is very similar to the situation found in Asimov’s “Foundation” series and it leads to the collapse of galactic civilization, sentencing billions to centuries of barbarism.
But I’m sure there’s a downside.
https://thebullshitmachines.com
Just as most things will be automated soon. This is just consensus reality. It will be pushed upon the masses...
It produces answers and ideas, thoughts that can be worth mulling or using at low stakes
https://www.404media.co/email/0cb70eb4-c805-4e4e-9428-7ae90657205c/
If we could scrape together just a few moments that didn’t feel like impending doom, that would be fantastic.
But so is pretending that there is zero utility or future here.
It's almost like... there's some nuance to the issue.
BlueSky, much as I love it, has a very unfortunate mob mentality when it comes to anything about AI.
I love technology but the direction it's taken is infuriating.
my main issue is that it just isn't very good at a lot of what people use it for. so you get 2 outcomes:
1-they produce low quality work
2-they spend as much/more time fixing the output
the idea that c suites think they can replace their engineering teams with "vibe coders" is comical, but the allure of cost savings...
plus it's a drag to write. I hate writing mocks...
because like you said, what’s the harm? and we have a pretty solid manual QA team so we won’t be shipping anything blindly
i have a friend at msft who is "building my way out of a job" (he works on VSCode LLM stuff) and when I ask him why they're doing it willingly, he just says their vibe is "if we don't, someone else will"
dark times ahead, maybe? until then...GLGL
And it's not intelligent. It's just much, much better pattern recognition.
How are people's brains going to be affected by offloading thought? How are *kids'* brains going to be affected?
The "coasters" have always been with us and all evidence suggests that it's long been the surest path to wealth and power.
https://www.tiktok.com/t/ZTjUyFqtb/
And... That's about it.
So far, no LLM I've tested has had a success rate with that task that even comes close to my worst interns on their worst days -- and the interns learn from their mistakes.
There’s your problem right there!
LLMs are not usable as-is for critical factual tasks. A sloppy untrustworthy intern with an English degree.
The era of journalists being paid a ton of money to do nothing is over.
Just false equivalencies all the way down.
Nobody says “Hey, human assistant, tell me what to believe about this important thing I need to do.”
LLMs are great when they are used as a productivity enhancement tool.
Willful ignorance is seldom a selling point.
They created large language models (LLMs) that could mimic interaction & even appear sentient to the slow witted
They renamed LLMs as Artificial Intelligence
Slow witted people have begun trying to fжck the relabeled LLMs
https://bsky.app/profile/milesklee.bsky.social/post/3loemouvhks25