AI skepticism / criticism would hit a lot harder if the field wasn't full of prominent folks repeatedly claiming AI doesn't work / has no value. Hard to engage with something when it can seemingly support such a weird conclusion
Comments
Speaking of category errors, I feel like much of the discourse is folks talking past each other because they can't agree on categories. E.g., one might use AI to mean naked LLMs, minus any structures aimed at the mitigation of named errors, with the mirror being reticence to view the sociotechnical system as a whole.
It's even worse than that. People also ignore that "ChatGPT" or "Gemini" is not only those technical/model components but also the teams of people who make decisions in real time about how they behave. It's impossible to discuss the behavior of these models without considering the humans in the loop of these systems.
I guarantee that for every one of these things there was a meeting, and people at Google or Anthropic or wherever actually decided many of them, either directly or through other design decisions they made. These "AI"s are systems which include both computer and human components.
Yes. Whatever moral decisions these systems appear to make, they are merely the moral decisions of those who developed them, and not the systems themselves.
People with money want to use these systems to replace their workers, and they want their workers to be the ones to set them up. That's a true statement that needs to be part of the conversation if you want to understand the case for or against LLMs.
I would ignore criticisms from some of those prominent individuals. They have almost no value and are mostly performative. There is a whole aspect of cosplaying the old-school, no-BS engineer there (unsurprisingly, they have no real engineering experience; it's just an aesthetic).
However, the blanket AI skepticism aside, it is also important to keep it real. There is a lot of magical thinking around AI by even prominent people (so it goes both ways). Every now and then I see someone talk to Claude for a few weeks and totally become AGI-pilled in the most ridiculous way.
The more mundane explanation, that what they call AGI is actually Ashby-type amplification of human intelligence, would simply be too boring. Much easier to become AGI-pilled.
Exactly. I find it maddening. They sit and talk to Claude or o1, start viewing the discourse from the "skeptics" differently, and then form a completely ridiculous position in opposition to it. Something like: hey, it really got something, these people don't see the truth, I saw it.
They are true believers all right. "The models just want to learn," blah-blah. No, the models don't _want_ anything, _we_ want to learn, and the models are amplifying our ability to do so.
Yes, definitely. But I do see people in that category in droves, mostly from academic circles. A lot of it is about unhappiness with the state of things, which is fair, but it is essentially reactionary and anti-intellectual and comes from a place of reflexive contrarianism (maybe even
main character syndrome). It is not about improving things. I find this type of mimetic "skepticism" almost as harmful as the stochastic parrots cluster one (not talking about the authors, but the larger set of people), which has more of an ideological motivation but is also mimetic and circular.
It all becomes about random issues and about them. Both of these categories of performative skepticism muddy the waters and don't contribute anything concrete, and as I said, I think they are mostly anti-intellectual and fundamentally uncurious.
It's hard to engage when there's also so much noise in the so-called facts. For example, ChatGPT doesn't use up a bottle of water with each answer: a claim that was dubious to start with, unverifiable, and certainly wildly out of date within weeks even if it were briefly true.
And despite some wild, unverifiable claims about energy use (with plenty of evidence suggesting the claims aren't true), there is also an undeniable increase in energy demand at big tech data centers now, so those worried about carbon emissions aren't exactly wrong either!
Not to mention, there's a lot to gain by being "anti AI" (whatever flavor you choose), and so there are plenty of big names using this moment in tech to make sure they get their dues. It's a great time to work at OpenAI and also be an AI critic.
Okay, that's just cheating. "Yes, it doesn't satisfy the definition of AGI we had set out, but according to this other definition I just made up that's easier to satisfy, it does."
Even his current claim that o1 is “better than most humans in most tasks” is pretty wild imo. What are “most tasks” here even? Obviously not any physical tasks because there is no embodiment. Can o1 actually completely replace a human in any job? Can it manage a project from start to finish?
They're also stifling the necessary debate about the risks and impact of AI on society, such as job losses or threats to democracy. You are either for or against AI, with no middle ground.
Well, if these people are commenting publicly they aren't immune from critique. I won't put you on blast; I'm genuinely curious who you have in mind. I pressed Casey Newton on this point yesterday and he only identified a couple of people (non-scholars).
The ones I stumble on most often are people who dislike either current architectures or approaches (Gary Marcus, to a degree Yann LeCun, François Chollet, maybe Rodney Brooks), or who oppose AI from a more political perspective, with Timnit Gebru as a prime example.
Good list! I'll look into Brooks. But none of the names you've listed have claimed AI "doesn't work" or "has no value." (Marcus has predicted the business valuations will drop, for sure.)
Yes, this is an excellent list, with the caveat that the criticism is mostly confined to LLMs. I think Gary Marcus does go this far, saying that they are basically useless, and the others state that they are overhyped distractions from real work.
Yes, those terms are of course exaggerations, but the flavor of critique is that AI is being hyped as offering much more, now or in the future, than its reality warrants.
Then there's a lot of cultural or economically strategic opposition from "the emergent luddite class," as someone amusingly put it: often young people in creative professions suspicious of SV, tech bros, the tech right, and capitalism, pointing to copyright theft, bias, or other kinds of injustice, etc.
Gotcha. I think people have been conflating the "em-lud" class with the AI skeptical intellectual movement, and I think that's the source of some confusion and misunderstanding.
I think I'm comfortable describing a general feeling I get from reading a field, but not comfortable ascribing these views to individuals, whose positions may be more complex than this reduction.
I think most harsh critics don't really care to interact extensively with LLMs or other sophisticated systems (because it would require them to come up with much more nuanced views).
right. i was surprised to find out that GM actually has some very reasonable criticisms. i just started ignoring him bc his entourage spends most of their energy on outrage bait and it all looks so ridiculous
The funniest part of the dismissal to me is how insulting it is to the hundreds of millions of people that choose to use these tools every day of their own accord. “This stuff is useless and you’re an idiot for thinking otherwise”
Trust me, as someone who doesn’t find much value in LLMs I get called obsolete, useless, and other insulting things. Please don’t think only people you dislike are annoying in this discourse. Saying anything can get you a dogpile, no matter what.
I've also been on a kick of trying to debate with these folks, which has mostly fallen on deaf ears (with some exceptions!!), but I need to get myself outta it.
The blackpill for me was realizing that most papers claiming technical (or at least social-scientific) anti-LLM arguments are, as you say, actually thin wrappers around moral and sociological arguments. The slyness of moving between the two tilts me against the value of debating these folks scientifically.
I'll keep repeating that for some people it just doesn't, and that for some of them even the shortcuts offered can be negative.
For the first part I can point you to people who are good at writing, and who cherish the craft of coding (pardon any romanticism), and for them LMs are mostly useless
For the second part I think this paper can give the general gist: https://arxiv.org/abs/2204.09565 (just to be clear, Sian is in general pro-AI; I'm not saying she's a critic, but when I was wondering about the issues with assistive technology she pointed me to it)
I guess that the dual consideration of "for whom in my life - if anybody - does it indeed seem to provide value?" is then natural (if not complete), and perhaps there is quite some variation in answers to this, depending on which circles you run in.
I just find it hard to imagine that they don't have folks in their life for whom it provides value. It seems to provide value to my parents, who are not particularly tech folk!
Does anyone really want to learn obscure nuances of bash, awk/sed, and myriad other tools? I use stackoverflow for this already and LLMs are great at saving a ton of time—even for me, an expert who has spent hundreds of hours reading the manuals for many of these tools
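(To make the kind of task concrete: a hypothetical illustration, not from the thread. The classic awk one-liner here would be `awk '!/^#/ {s += $3} END {print s}' access.log`; below is a Python sketch of the same job, the sort of snippet people now ask an LLM for instead of rereading the manuals. The file name and column choice are made up for illustration.)

```python
# Hypothetical example: sum the third whitespace-separated column of a log
# file, skipping comment lines -- the kind of one-off text munging people
# ask an LLM for rather than relearning awk/sed syntax.
import sys

total = 0.0
with open(sys.argv[1]) as f:           # e.g. python sum_col3.py access.log
    for line in f:
        if line.startswith("#"):       # skip comment lines
            continue
        fields = line.split()
        if len(fields) >= 3:
            total += float(fields[2])  # third column (0-indexed as 2)
print(total)
```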
FWIW, I am more or less in the claimed situation - I'm sure that I know some people who are using these sort of tools for coding and maybe e.g. writing copy, but the extent of the positive impact is hard to gauge / trust. On e.g. the research side, I don't often hear of colleagues benefiting.
I happen to find the broader picture compelling because, for my own reasons, I know a bit more about how these developments are being used in the context of more classically scientific pursuits (e.g. weather prediction), but I'm not imagining that most people are engaging with progress on that front.
I say this because I want to engage with it! I think it's worthwhile to think about strong criticism of what you're doing (and I have a lotttttt of concerns about AI art)
critics aren't very thoughtful and thoughtful people aren't, primarily, critics. anti-ai discourse is very heavily incentivized to be maximally inflammatory clickbait and the people producing it are generally fundamentally uninterested in the actual technology or issues.
fundamentally the sentiment is reactionary, in the sense of being strongly ideologically or emotionally opposed to change. the people promoting it are never going to care if they are proven wrong about anything. they also don't care when policy concessions are made.
There’s a narrative floating around that AI in art is shaking things up like the camera shook up portrait artists, which I don’t see as too concerning. Thoughts?
I think when calling for nuance, aiming it at those with a lot of power and funding _first_ (not saying _just_!) is important. From my perspective, that's definitely the pro-hype gang. Otherwise you risk making yourself into a convenient shield for those on top to deflect criticism.
Oh, hm, I don't agree with this heuristic. I'm not sure why reflexively critiquing power is the right thing. I think the AI hype folks are closer to correct than their detractors. This is also not a blogging platform, so its dominant function isn't really nuance.
I think it has value and is impressive, but the way it's implemented prioritizes profit-driven motives for large corporations. There should've been more effort to educate the general public about these tools (too many people think they're sentient) and to prevent creators from being exploited or overwhelmed.
If we lived in a world where there was decent universal basic income providing safety nets for anyone who might lose their jobs, and a high baseline level of scientific literacy to prevent false beliefs about generative AI being conscious or sentient, I would be more optimistic.
The way that these are being developed is likely to be exploitative. Also, they help experts and generally harm beginners who use them without thoughtfulness in a way that won’t be obvious to their users for years
I agree they’re getting better, but they also need to be used by people who will be able to recognize when the models make mistakes, and that’s not the state of the general public.
We gave many of those people years to refine the arguments. Instead the critiques have largely devolved into main character-ism while the actual challenges of adoption, inclusivity and exploitation remain heavily understudied
Yes, I would not say that the visible bits of AI criticism often touch on these issues, though tbf the very visible parts of any debate are mostly nonsense.
I have two simple rules of thumb when reading critical papers: do the criticisms remain effectively unchanged if you replace AI with something else, like shoes 👠? Do the critiques change if the training and use change? So much AI criticism fails these simple counterfactual tests ➡️🚮
The New York Times isn’t knocking down my door for my “AGI is a category error, but these things are extremely useful” take.
That's an exaggeration, but it points to a gut reaction that people then try to back up with reasons.
Which isn’t too surprising because that’s just how people are.
Unfortunately the dialogue is directed by those on either end of the spectrum (AI is useless vs AGI is already here) without much room for nuance.
https://www.goodreads.com/quotes/51214-my-uncle-ordered-popovers-from-the-restaurant-s-bill-of-fare