I guess I would feel better about Google's new willingness to use AI for weapons and surveillance if I thought the country leading in AI development actually *was* a democracy guided by core values like equality and respect for human rights. https://www.washingtonpost.com/technology/2025/02/04/google-ai-policies-weapons-harm/
Comments
https://www.ethicalpsychology.org/materials/Arrigo-JPE%201999.pdf
Which is, pragmatically, a good reason to try to behave in a way that makes it very clear to your own people that the answer to that question is no, or mostly no.
It's a bad combination of "morally very important question" and "question you're likely to get wrong."