Training a flexible, general-purpose reasoner that can succeed despite unexpected obstacles seems pretty hard.
Worryingly, training a flexible, general-purpose reasoner that can succeed despite unexpected obstacles *except when those obstacles are humans trying to stop it from succeeding* seems harder still.