I confess to finding it a little frustrating when I'm accused of being an "LLM shill" or "breathlessly proselytizing" while my ai+ethics tag has 121 posts and counting: https://simonwillison.net/tags/ai+ethics/
Comments
I have my issues with certain use cases of GenAI, but facets of it do fascinate me. Given how much hype-driven noise there is in this space (just as there was with crypto), I like that I can visit your writing for clear takes and interesting explorations. "Shill" is certainly a mischaracterization.
We talk about how political polarization is a huge problem, but it seems like polarization is accelerating across all topics. Maybe “LLMs can be really useful but you have to be mindful of the risks and limitations” just doesn’t excite people enough.
It's unfortunate. I work with AI and don't want to admit it at times. It's not even generative, just classification. All of our data is collected with data owner consent, etc.
But, I do understand why people are exhausted by the term "AI"
Is the Mastodon hatred mostly an effect of the network being anti-corporate, so it tends to blanket-judge everything and ignore the good stuff? It's a bit of a pity
Except... your "ai+ethics" posts are very... mild at best?
I guess people might complain about you not addressing the more pressing issues, like fake videos used to destabilize elections, as Putin has successfully done in the US / Brazil / EU, etc.?
Really not a good take at all, AI ethics and policy is a broad area and nobody who works in it is obligated to or can reasonably participate in every single issue. Just because elections are your thing doesn’t mean they can or should be everyone’s even if they share your concerns.
Let me make a comparison: what if every news commentator on the Israel/Gaza conflict only commented on some small, inconsequential effect of the conflict (e.g. children losing one year of schooling), while staying totally silent on the deaths, the hunger, the rapes, etc.?
Loooool, I'm trying to show you that the other person's comment is just as disrespectful towards those who will suffer from a dominance of AI in the hands of white bourgeois tech executives
If you don't understand that, you totally missed the point, that's sad
so when you say "yeah, but we cannot all point that out", it feels like you're totally not understanding what those people who feel threatened are actually feeling
perhaps you don't believe that there is such a danger
but they do, and in their world, not addressing big issues is a problem
But you could give the exact same argument you gave in response: "not everyone commenting on the Gaza conflict should take the entirety of the conflict into account"
Avoiding "accelerating tech in the hands of the powerful" is a driving motivation for me - it's why I spend so much effort showing people how to understand and use this stuff, and why I track both openly licensed models that run on personal devices and the falling costs of hosted models
It's also why I've been so celebratory of the increased competition from more vendors - the idea that only a tiny group of organizations (or even just one) could have exclusive control over this kind of technology terrifies me
The problem is not so much the exclusive control; the problem is that if **one bad** actor has access to a very powerful tech, they might do serious damage
There is therefore a civilizational question. Should we continue such developments or not? /1
since AI is only going to increase inequalities, accelerate tech (and therefore consumption, through the rebound effect), and therefore misery for those who don't have access to it, or for the planet (the planet has its own "balance and rhythm", and trying to force an accelerated rhythm on it just /2
I am indeed mainly showing its uses as a tool: that's the niche I've chosen for myself, describing exactly what the current generation of models can and cannot do
I don't ignore the ethics component - I talk about it a lot - but it's not at all my primary focus
(To be fair here, I produce such a high volume of content that it's understandable when people evaluate my work based on a single post without taking the breadth of my coverage into account)
I have to ask, as I've been wondering about it for a while - how do you manage to produce so much content?
Like seriously. I would love to read a post about how you approach and research what’s new, what tool you made and use daily to consume all your feeds and sources. - 1/2
It's that and practice: I've been writing online for 20+ years now, so I can knock out a small piece (a link blog entry) in 5-15 minutes and most of my longer form stuff takes 1-2 hours
My review post of 2024 was an outlier, I didn't measure but I would guess that was at least four hours of writing and another accumulated two hours of jotting down notes for it over the previous few weeks
Oh, I forgot to mention: lower your standards! Waiting until a piece feels as good as you can get it is a recipe for an empty blog and a huge folder full of drafts
I try to hit publish while I am still unhappy with what I've written
It also helps to establish a fast workflow so you can minimise the time from idea to published post. I know it's held me back when I've had to log into a CMS on a desktop machine, been forced to assign a category for every post, etc.
I used to try and wrap every post up with a neat conclusion... my writing productivity went up a whole lot when I gave myself permission to just stop writing when I had run out of things to say!
Check out the HBO South Park documentary if you're bored sometime; it's short and pretty interesting even to non-viewers. It focuses on the era when they were turning out a 30-minute animated show every week, and the psychological toll of that on creators with high standards.
Cue the "I am in this photo and I don't like it" meme 😭
How do you treat posts that you want to polish? Do you ever go back and edit posts, expand with an addendum etc? Or once you hit 'Publish' you treat it as final?
If it helps, and I know it’s cheesy and I’ve said it before, I think of you and Molly White similarly. Both very knowledgeable about what you write about, both putting out consistently the highest quality, and highest integrity, content on that topic on the internet. We’re lucky to have you both. 🙏
Completely agree, I thought of the same comparison! Molly's writings got me through the crypto craze, and now I rely on Simon's work for keeping up with things going on now.
There are so many things to say about the greed and recklessness with which LLMs are shoved into everything, and yet the vast majority of the discourse is embarrassingly poor arguments and purity tests. I appreciate your efforts and posts already, after only 2 days of following.
You summarized it well. The “AI SUCKS!” crowd seems pretty vocal on Bsky. But the level of debate is barely one step above “because someone said it was bad.”
I keep wanting to jump into those discussions and have decided, for my own sanity, not to.
I'm genuinely struggling to find accounts on here that talk about AI in any way that's not pure hatred. I know X is full of insufferable know-nothing shills, but still... a bit weird that a whole technology class is apparently political?
The non-hostile AI conversations on here have started to pick up in the past few weeks but they're still reasonably rare, I'm hoping that changes over time
I'm trying to solve the negativity problem on bsky by training a model to my tastes and only displaying things I should like (which is how I found this post). No Trump or Musk in my feed is a huge win...
It's a risk, but I'm aware of that. I tend to dislike things I agree with when they're too aggressive, and like things I disagree with when they're smart or challenging.
Also, I really need to filter out NBA and NFL (I tend to follow a lot of sports people because of soccer).
It's an interesting idea to be able to ingest someone's feed and have an LLM judge whether they meet some niche category of interest or personality type. Basically automated personals.
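The taste-trained feed idea discussed above could be sketched roughly like this. Everything here is a made-up stand-in (the taste profile, the blocklist, and the `toy_score` keyword rule are all hypothetical); a real version would send each post together with the taste profile to an LLM and parse its verdict instead of matching keywords:

```python
# Minimal sketch: score each post against a taste profile and keep
# only the ones that clear a threshold. toy_score is a deterministic
# stand-in for what would really be an LLM judgment call.

TASTE_PROFILE = "technical AI content; no US politics, no NBA/NFL"

# Hypothetical topics the commenter wants filtered out.
BLOCKLIST = {"trump", "musk", "nba", "nfl"}

def toy_score(post: str) -> float:
    """Stand-in for an LLM judgment: 0.0 if the post mentions a
    blocked topic, else 1.0. A real scorer would prompt a model
    with TASTE_PROFILE and the post text."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return 0.0 if words & BLOCKLIST else 1.0

def filter_feed(posts: list[str], threshold: float = 0.5) -> list[str]:
    """Return only the posts whose score meets the threshold."""
    return [p for p in posts if toy_score(p) >= threshold]
```

The threshold makes the filter tunable: a stricter cutoff trades recall for a quieter feed, which matches the "no Trump or Musk" goal described above.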
I see what you mean. However, there's a bunch of people here who work in AI for good and useful applications! I hope the network continues to grow, I think it will. That way others can also see the good part of AI, 'cause I don't blame them, there's a lot of actual bad stuff
From the horse's mouth, etc. Interestingly enough, Gemini Flash 2.0 produces really long and detailed responses, but it doesn't think you're a shill either.
It’s context collapse. Many folks reading any given post (or even a lot of posts) aren’t reading your work as an integrated whole, with the ethics analysis right alongside the “whoa, this is cool”, but rather as a disconnected set of items—and frankly, a lot of people don’t know how to integrate it.
You cannot caveat every single positive post with “mind: this has serious ethical and legal issues, which you should consider before adopting this”, but a lot of readers would (unfortunately) need that to grasp the bigger picture of what you’re doing with your writing on this stuff as a “project”.
It is hard to strike the balance! And just to be extra clear all of that was offered expressly as “Here’s a charitable take on your work [which I very much appreciate] that might give some insight into why you might come off this way to some of your readers.”
I also think that the existence of strong “camps” in the space around this means people get mentally slotted into “booster/shill” or “doomer/naysayer” by the average reader.
Now, all of that said: I do think your overall tone seems quite LLM-positive. And one major contribution to that is that most of (what I can recall of) your ethics posts are quoting others; you do more extended writing and commentary on usage and capabilities. That impacts how your work *feels*.
Going one step further: a frame like "there are serious ethical and legal issues here; now let me tell you all the cool things I did with them this week" can make the caveats seem insincere. I believe you mean it when you say it, *and* I think there is a real risk of letting the "cool" implicitly win.
It seems that if you're not rabidly anti AI, then you're a tech bro sellout.
AI is very polarising. It's a fundamental threat to many and a very emotive topic. There's little room left for honest conversation about how it's both bad and good, just like its creators.
There are not a lot of independent thinkers who write about LLMs in an original and bias-free way, but you are for sure the first person I think of. The debate is quite polarised, so when you are in the center, both ends will judge you
Degrowth people are neither rational nor capable of nuanced discussion. Their belief is that "AI is bad and can only be used for evil", and no amount of evidence to the contrary will change their minds.
Except... all evidence right now points to the fact that they are right?
Putin has manipulated multiple elections through digital means, and his operations are now increasing in complexity thanks to AI, destabilizing nations and creating proxy wars
The "strategy" of Putin is to fund both a group, and it's opposite, and have them confront to build social unrest
He doesn't have clear ideologies he's trying to put forward, except: "bring civil war everywhere, so that western democracies are infighting and I can do what I want"
Kind of. He is not funding any groups that are pro-science, pro-nuclear-power, pro-AI-research, pro-democracy, etc. He is funding anti-science, authoritarian, populist groups on both the left and right sides of the political spectrum.
I think we are still on the downward slope of the trough of disillusionment for most of 2025.
Plus you are not trying to sell me anything - like all the other newsletters are.
Thanks Simon!
Would that seem fair? /1
Some people really think that such technologies are going to have a massive negative impact (on climate / democracies / wars etc)
And that the impact will mostly be borne by the poorest
P.S. Your example is not even analogous; in this case it's only one person, Simon
Well perhaps, but it would seem dishonest at best
Would you say my ai+ethics coverage is so mild that it doesn't protect me from being labeled as a breathlessly proselytizing LLM shill?
yet as technologists we have to be aware of what we're doing to the world
and right now it feels like working on AI means "accelerating tech in the hands of the powerful"
which basically means: building a jail for the rest of us
I know I could make much more money there, and even have stimulating work there
but I'd be "working on the atom bomb"
that's not satisfying for me in terms of values
they feel that even though you might from time to time show a critical view of AI, you're mainly showing its uses as a tool...
but that leaves a lot to be desired when looking at it from a civilizational perspective (IMHO) /1
Like to find b'skiers who are interested in:
• pluralism;
• political independence;
• nonpartisanship and the like.
Suspect many here may have these interests. Sadly, I'm not nearly geeky enough to try to set up feeds & get traction.
It started feeling pretty repetitive so I eventually stopped hammering that message in every single post
Also it can't be a coincidence that the degrowth people just parrot all the Axis talking points.