i'm really grateful to hear that, thanks - i had been kinda circling this idea in so many drafts over time that i was starting to worry i was overbaking it and it wouldn't make sense to anyone lol
No it's super clear! I like that it's not going down the 'but let's talk about what we mean by "intelligence"' route, which is so overdone as a critique / 'thought-provoker' imo...
Great post. There's a lot of value in more general framings such as "automated decision-making systems" that avoid making things contingent on the inner workings of the system ... but "AI" is indeed a concept in its own right, and one that's inherently political.
Historically we've seen many ideological projects that sought to take away individual authority and autonomy. But so far they all relied principally on other humans to do it. What's different this time is the structure is so inhuman.
yeah, i thought about including that (and i'm probably ultimately leaning towards agreeing with you) but then there are cases like Amazon's "Just Walk Out" or Mechanical Turk, or those Tesla bots. they're "fake AI" but they also help dislodge/displace local autonomy - i didn't want to exclude them
Ok. I guess I could see AI as encompassing two related but different actual things, roughly:
1) AI as a project to actually replace humans with tech
2) AI as a BS marketing term *pretending* to replace humans with tech but actually using tech to replace them with other, more exploited humans
I had sort of been thinking of (1) as 'real' AI (and the existential threat) and (2) as a bunch of BS, but you're correct that there is an anti-human-worker ideology that ties them together, and this is for sure a valuable insight.
i thought about underscoring how the goal is ultimately to shift to tech structures, but the first steps could look like anything (eg crowd workers), but then i kept coming back to the ethos that no implementation is forbidden - expert systems, neural nets, crowd work... "AI" may take any form.
Sorry nevermind, I just realized it's because you pinned it. 😆 I was looking for it among your posts from 9-10h ago. (I opened the piece earlier today then finally got a chance to read and comment.)
OSS engineer here, generally opposed to the OpenAI / "pay for our centralized API" trend. you seem to suggest implementation doesn't matter. "We built a program that can predict things" == AI == "ideological project to shift authority and autonomy away from individuals". is that accurate?
This frame doesn't distinguish the current crop of technologies from algorithms. Then what exactly is the difference between AI and bureaucracies that follow standardized protocols in a leaky and stochastic way?
That frame seems so large to me that I don't know what to do with it.
“project to shift authority and autonomy away from individuals, towards centralized structures of power” describes a bunch of other things too - with vastly different responses.
yep! you should consider its usefulness to the project that you're engaged in, and calibrate or make it more precise in the ways that are useful to you. i don't mean to impose the artifact of my process as a hegemonic idea, but as a demonstration that the process yields useful tools
Agree with @gemmamilne.bsky.social. Also, it appears you’re still (rightly) “shouting at engineers”. Keen to help in that chorus. Otherwise i now find it difficult *not* to see tech enterprises like AI or DAC or AVs as political enterprises that concentrate power in engineering culture
This is good, and could easily be extended back to the whole discourse of automation from the late 18th century on, and of Technologie/ Technology itself.
Good post. "AI" as 100% political helps explain how the focus on AI sidelined the relatively straightforward data bias and algorithmic regulation debates. In our 2023 book we tended to avoid the term precisely b/c it impedes understanding basic processes. (might've been a marketing mistake tho)
First of all, many many thanks for such a clear position. I don’t 100% buy it — I’ve tended to believe AI is an aspect of reflexive modernity — but you’ve given me a nudge to articulate that point. So I will, and it’s on me to defend it.
Your definition also suggests, on its own terms, why AI practitioners' criteria can tend to brush aside the awkward history of AI, e.g. expert systems.
!!! i wanted to talk about lisp and prolog but i decided against it because it felt like i would have to take a hard detour and i was like "i either need to commit really hard to that digression or cut it out entirely"
but i'm really glad an active reading yielded that
„Awkward history of AI (e.g. expert systems)“ – I’m intrigued 😊. I’m not unfamiliar with expert systems, but I couldn’t figure out what @thomasarnold.bsky.social meant by „awkward“.
Maybe another post on lisp and prolog at some point? I’d be happy to read it and learn more about this history.
I use belief-related metaphors in my talks and people get what's important so much more easily. AI is stories. AI is marketing. Then you can be specific about an individual tool and context in examples. Broad definitions of the tech fail to serve the function of defining in a way that's useful.
Every now and then I think about the "did you know that tomatoes are fruit" kind of things. Like, botanical (technical) definitions have their own use, but generally, I'm pretty sure the definition of vegetable vs fruit is more about whether the plant-thing in question is commonly eaten cooked..
You're a powerful writer with a great site (though you need `padding-bottom: 2rem` on the main container 😉), and I always love some #FeministEpistemology -esque politicization of science, but this one stumped me. First off, a popsci polemic -- justified or no -- is an odd place to seek definitions.
IMO the only real technical definition of AI is the classic "program that doesn't work very well yet", which is a tongue-in-cheek way of saying it has no real definition. That said, there's absolutely two main opposing designs in the field: symbolic vs. connectionist, or neat vs. scruffy.
Certainly you were exposed to this in school, so it's weird to see you say CS experts are unjustly hoarding definitional power. Maybe it's complex to learn, but these distinctions are far from matters of intuition or credentials -- they're real and important differences in mechanical design.
Ultimately, and somewhat above all that, I'm just sad to see someone so smart and woke cede AI to the corporations. FOSS is amazing, and I don't at all see in what way either Linux or Llama are "fiefdoms"...
The working class needs all the tools it can get; why preemptively draw lines in the sand?
Either way, use the word how you want -- if you want to use it as a political synonym for 'harmful tech' then power to you. But that's just a language game. Even granting that there's an instrumental benefit there, the scientists will necessarily just swap terms and keep on going. As science does!
that's fair! i would ask whether a university press is strictly "popsci" and i've heard people argue either way, but i think any earnest effort to put forth a way of understanding or thinking about a subject is worth taking seriously (or trying to). /2
i've got tons of youtube videos from "comedian"/"commentary" youtubers who seem to understand AI very clearly & speak clearly to its effects, with a clear-headedness that the CS and engineering experts heading AI institutes would wish for themselves. i wouldn't readily dismiss them for being "layperson" analyses
> One might inspect handguns and stun guns, come to the conclusion that these are two totally different technologies […] and embarrassingly overlook that they’re obviously both tools police use to maim and kill people.
Techno-boosterism seems to occupy the entire overlook niche by design. Shameful.
Ah, your parenthetical makes this same distinction anyway:
> AI (as a political project that happens to be implemented technologically in myriad ways that are inconsequential to identifying the overarching project as “AI”)
In union organizing, it has occasionally come up that generative AI may “still” have utility, and in some instances, may even be an effective tool to fight back against the boss, e.g. to generate artwork for some of the strike games: https://nytimesguild.org/tech/guild-builds/
I’m not sure about that interpretation myself, as the mere usage of generative AI may still harm our wider community of interest (e.g. by harming the environment).
I think your definition clearly situates the technology as part of a wider project that I feel labor organizing must push back against.
AI is not an ideological project. It may be, for a very small bunch of people, but most AI researchers do it because it's fun and interesting. Oh, and it brings fame. As simple as that.
Jon E had some really good writing on this last year that actually widened the scope beyond "ai" and large models alone, to encompass both them and the underlying knowledge graphs and the data collection practices & philosophies that drive it. long read but highly recommend https://jon-e.net/surveillance-graphs/
Sorry, this has absolutely nothing to do with your post, but if you move your CSS declarations from the end of your page into the head at the top, you’ll reduce or stop the flash of unstyled content when your page loads. Hope that’s helpful!
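(For anyone reading along who wants the concrete version of that tip, here is a minimal sketch; the file name `styles.css` and the page structure are hypothetical stand-ins for whatever the site actually uses.)

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <!-- Declaring the stylesheet in <head> lets the browser fetch and apply it
         before the body is rendered, avoiding the flash of unstyled content
         you get when the <link> sits at the end of <body>. -->
    <link rel="stylesheet" href="styles.css">
  </head>
  <body>
    <main>
      <!-- page content -->
    </main>
  </body>
</html>
```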
one thing that might be buried here is that, more and more lately, i'm evaluating the quality of definitions and frameworks based on whether they elicit and sensitize me to some important detail or another. i think that's a more precise, more actionable, way to think about whether they're *useful*
also, for the record, i've got like weeks of drafts and notes thinking about this fucking insurance claims system example. WEEKS of that shit, and then the CEO of that insurance company goes and walks into 3 bullets. pain in the ass.
Comments
"AI is an ideological project to shift authority and autonomy away from *human* individuals, towards centralized *technolgical* structures of power."
It's not clear what is afforded by framing it as political project-first.
There must be a way to distinguish when a thing has transcended being a technology and has become a political project. ↓
I do like inverting Winner’s proposition to state that politics have artifacts.
And that artifacts are a site where multiple political projects interact and are contested.
https://www.haibane.info/2025/01/03/defining-artificial-vs-synthetic-intelligence/
I enjoyed this long view https://www.cambridge.org/core/books/quest-for-artificial-intelligence/32C727961B24223BBB1B3511F44F343E
https://bsky.app/profile/supporthr12.bsky.social/post/3lcqcpohx422g
It's just a question of how much damage it does everywhere it's applied in the meantime.