I think the AI safety community is largely failing this test (with most people failing one but not both bullets). Many people fail the first by arguing for some weaker point, like “AI is becoming increasingly powerful, and powerful things can be dangerous”. And maybe…