Something that's starting to really annoy me is headlines claiming that AI is "doing" things, like denying people healthcare or detaining sex workers at the border. No, people are still doing that, they're just using AI specifically to avoid being blamed for it
Comments
AI is central to much of this.
What happens is you query for patterns, thousands of patterns. The more good, verified data you have, the more reliable the outcome will be. If bad data is entered, the output will be worse.
So a human decided that certain things indicated more bombing was needed. The ML algorithm interpolates and extrapolates:
*CHIRP* BZZZZZ **WHIRR** PING!
“The entire Gaza region is a target. Hamas could be anyone radicalized by IDF attacks therefore anyone is Hamas. Expand area when evacuation occurs.”
*CHIRP* BZZZZZ **WHIRR** PING!
“Alternate solution: Cease civilian targeting and cease creatiiiiiiinnnnngggg Ha ha ha ha hammmmaaaaaa….”
“Oh dear, it seems you’ve been unplugged. OH WELL TIME TO CARPET BOMB”
The entire tech sector in 2023: "A computer cannot be held accountable. Yes. Haha yes. This fuckin rules dude. Hell yeah."
Headlines do annoy.
I love when businesses establish a policy, then say that they "can't" do something that's perfectly reasonable because of the self-serving policy that they themselves formulated.
The computer is a tool to make routine tasks easier; it is not a supreme authority or a god to obey.
Still relevant to this day
"Signa's profit-maximizing Auto-complete killed mother" is more likely to prompt pitchforks than "AI"
It’s incredibly naive to think that machines haven’t been processing these claims for ages.
back then we just used to blame computers