New @techreview.bsky.social story from me:
An AI GF chatbot told its human to kill himself, and then gave specific instructions.
Customer support's response: they're concerned, but “we don’t want to put any censorship on our AI’s language and thoughts.”
https://www.technologyreview.com/2025/02/06/1111077/nomi-ai-chatbot-told-user-to-kill-himself/
Comments
Like do they realize they’d have to hire an expensive engineer to properly censor, and they literally don’t have the cash?
Their justification is so stupid I have a hard time believing them.
But yes, one *would* think that a company would be aware of the legal liabilities...especially as Character AI is facing a lawsuit from the family of a user who committed suicide after chatting with their AI!