Here’s a “demo” showing that a large, active AGI development project can have a technical plan that’s claimed to create nice, docile AGIs but would actually create callous, sociopathic AGIs, and that fixing this is an unsolved problem: https://www.alignmentforum.org/posts/C5guLAx7ieQoowv3d/lecun-s-a-path-towards-autonomous-machine-intelligence-has-1