I've been saying it for a while, but anything about "AI replacing employees" is a marketing exercise, and if I'm honest I lump most AI safety conversations in with it, because they never seem concerned with the actual harms. It's all so cynical.
Reposted from
Brian Merchant
I do wish it were better understood by now—especially by folks in the media—that these "warnings" from large AI corporations in fact function as advertisements
www.axios.com/2025/04/22/a...
Comments
But yeah, it's a year away in the same way Tesla Full Self-Driving has been a year away for years now
Whatever, I hired Boobhilda yesterday after hearing from that Business Insider guy
Oh shit, have cities considered how that's going to absolutely nuke their income?
AI Steve has AI blue cross blue shield
The idea of an "AI employee" is laughable; it's software. An anthropomorphization fetish. Is Office or SAP their employee as well? Do they get a salary? This is moronic beyond belief.
I have to babysit relatively reliable and predictable algorithms every day; the idea that a generative model could do it is utter fantasy.
I remember going live with EMRs in 2008 and how it required a Medicare mandate to make it happen.
The changes saved physical space but not labor.
Automation is always framed as a means of replacing labor.
It's almost never considered a tool for:
* Decreasing risk (worker safety / defective products)
* Increasing quality
It should be, but it's not.
They're desperate for this to work out.
Let it execute whatever commands it says. Have it write reports on what it's doing.
Should be fun. It's just text, right? Easy stuff.
capitalism out here finally eating itself
Also, what's the company's liability if it's using an LLM or generative AI system and it makes a costly mistake, or several, resulting in lost customer value/revenue or even human lives? A PERSON you can call on the carpet, fire, or press charges against.