Companies framing things as “censorship” rather than, say, “product safety” has done a lot of work in letting them off the hook for unleashing deadly technologies on the public.
Reposted from
Eileen Guo
New @techreview.bsky.social story from me:
An AI GF chatbot told its human to kill himself, and then gave specific instructions.
Customer support's response: it's concerned, but “we don’t want to put any censorship on our AI’s language and thoughts.”
www.technologyreview.com/2025/02/06/1...
Comments
Nah bro.
As if these chatbots actually think.