I could see some form of this. If you don't optimize something very hard, you don't get its failure cases, or something like that? However, I don't think it's correct to say inner misalignment is "the reason" for this; you can get something similar just by not optimizing very hard, right?
Assuming outer misalignment, x can be seen as safer than y.
That being said, the better the model, the less often this will happen.