This means there was a meeting, likely several, at which they discussed the suicides and mental health harms this would create, and decided those were within acceptable ranges.
Reposted from TechCrunch: "Google will soon start letting kids under 13 use its Gemini chatbot"
I doubt there was a safety or ethical discussion at any meaningful level, at least on the AI product side. Though I'm not sure which would be worse: no safety experts at all, or token roles providing no meaningful input.