Here’s a “demo” showing that it’s possible for a large AGI development project to decide that even TRYING to make nice, docile AGIs is ALREADY overkill, because AGIs will just automatically be nice, again for reasons that don’t stand up to scrutiny. https://www.lesswrong.com/posts/ixZLTmFfnKRbaStA5/book-review-a-thousand-brains-by-jeff-hawkins#Does_machine_intelligence_pose_any_risk_for_humanity_ 5/16