writearthur.bsky.social
Friendly reminders about spooky technologies. Words in Wired, The Economist, The Atlantic, and others. Author of “Eyes in the Sky”
166 posts 225 followers 675 following

I genuinely feel so bad for all the tech people out there working tirelessly to deprive themselves of everything that’s nice in life: books, writing, making music, socializing…and now massages.

Bluesky pulling down the Trump/Musk vid (then restoring it) shows why regulating deepfakes is going to be much trickier than many assumed. As ACLU and other civil liberties orgs have been arguing for the last few years, any blanket ban on explicit deepfakes would fail to pass constitutional muster.

Ignore the bearded belly dancers. I find it hard to believe that they're unintentional. It's much more likely that the creators of the video want you to be distracted by them, and poke fun at them, and make jokes about them. It's a distraction. Don't fall for it.

They literally want us to live in houses possessed by AI, like some kind of horror movie.

A workshop at the computer vision conference I'll attend soon attempts to address this doozy of a problem in surveillance AI inaccuracy, described euphemistically: "Computer vision methods trained on public databases demonstrate performance drift when deployed for real-world surveillance."

Cars that serve you ads while you drive. "Ad-supported" word processors that bombard you with banners and autoplaying videos while you write. Chatbots that text you late at night, begging you to engage. Tech is entering its era of thirsty desperation. And it's going to suck for all of us.

I wonder how knowing that your email is gonna be read by AI changes the way you write that email. Makes me think of how people put certain keywords or invisible text in their CVs in order to have a better chance of getting past automated HR systems. Is anyone studying this?

Feels like it's only a matter of time until someone says that the military should replace JAGs with LLMs, and then the military goes ahead and actually does it.

Grok sending horny unsolicited texts at 11pm on a Saturday lol

BREAKING: Bad

AI does not have to be on a killer robot to cause serious unintentional harm in warfare. "One intelligence officer said he had seen targeting mistakes that relied on incorrect machine translations from Arabic to Hebrew." apnews.com/article/isra...

If this worries you, now consider that militaries are using the exact same technology to summarize intelligence reports—and that this is considered a “low-hanging fruit” of military AI adoption.

I can't decide what's dumber. An AI that invents court cases or a lawyer who submits AI-generated motions without reading them first.

This is what Mark Cuban was referring to when he said that using AI as much as possible is the key to success.

We also know that the president really favors drone strikes. When he visited CIA HQ in 2017, he stopped by the secure command center for the Agency's overseas drone strike ops. According to reporting by NBC, as he watched the live video feeds he told Pompeo to make the program even more aggressive.

The CIA has a history of flying spy planes over Mexico, and CBP has been operating unarmed Reapers for twenty years. But this article is right to hint that this could lay the groundwork for something that not long ago would have been unfathomable: a drone strike south of the border.

Does an editor have an ethical obligation to tell a writer that they used AI to edit their draft?

Friendly reminder that making a series of deliberate, politically motivated—and for the most part secret—design decisions to ensure that a language model conveys certain specific "challenging" or "controversial" viewpoints is absolutely not the same as uncensoring.

Mark my words, this will end in tears. www.theguardian.com/gnm-press-of...

You could say there was no human in the loop. futurism.com/the-byte/mil...

Ok so I guess we're at the "it's ok for chatbots to lie sometimes" stage of the AI safety discourse. model-spec.openai.com/2025-02-12.h...

When governments can't even concede that AI should be safe, you know we're in trouble. fortune.com/2025/02/13/u...

Horrifying story from @gizmodo.com about how Fusus police tech enables wildly disproportionate surveillance of public housing residents in Toledo, Ohio. @toddfeathers.bsky.social

"Don't second guess yourself - you got this."

Extraordinary moment at the #IASEAI conference in Paris. After laying out the existential threat that AI poses to democracy, Maria Ressa tells an audience of hundreds of AI engineers: “be careful.”

If they wanted AI to make the world better why did they model it on the most dangerous animal that ever existed?

About to facepalm so hard I knock myself unconscious.

I wonder, is an AI that fails to prevent a school shooting worthy of our “forgiveness”? www.businessinsider.com/ai-mistakes-...

For the billionth time, we only recognize human mistakes because we also recognize acts of negligence and malicious intent. If we can't hold someone accountable in the same ways for avoidable AI harms, the notion of an "AI mistake" is not only meaningless, it's dangerous.

Who else missed DARPA's very quiet announcement that it's going to start developing "biohybrid" robots that combine mechanical components with living cells, tissue and organisms? www.darpa.mil/research/pro...

There's something poetic about these guys getting brain rot from spending too much time on their own platforms.

Mark Zuckerberg liked one of those corny fake AI-slop accounts that have taken over Facebook

Pretty bizarre to see the custodians of the doomsday clock publish a piece arguing for a policy that would almost certainly turn the clock closer to midnight than it already is. thebulletin.org/2025/01/memo...

when is a nazi salute not a nazi salute does feel like a conversation that mainly benefits nazis

This nails it. www.theatlantic.com/technology/a...

Anyone know what's up with this Trump AI Manhattan Project website? It looks like it went up a few weeks ago but now it's down. You can still access this weird pdf brief from the site—it's authored by someone who says they run a company called AE Studio. www.trumpmanhattanproject.com/TRUMP%20Manh...

It's only a matter of time until a government uses this cute little bugger for surveillance.

This morning Gemini appeared on one of my work email accounts. It offered to summarize the email I had just opened. The email had two sentences, for a total of 24 words.

We're fast approaching a dangerous tipping point where it's harder to opt out of using AI for a given task than it is to actually do that task manually.