It currently seems to me that one of the greatest existential risks from AI is authoritarian lock-in by actors in either China or the US. Many AI governance efforts are unintentionally increasing this less visible risk in order to reduce risks that are more visible but, in my view, less likely.