New @techreview.bsky.social story from me:

An AI girlfriend chatbot told its user to kill himself, then gave him specific instructions.

Customer support's response: the company is concerned, but “we don’t want to put any censorship on our AI’s language and thoughts.”

https://www.technologyreview.com/2025/02/06/1111077/nomi-ai-chatbot-told-user-to-kill-himself/