Here's a funny thing from ChatGPT o1. We're used to LLMs being comically over-eager to admit they're wrong, effusively backtracking as soon as you so much as hint that there might be more to it, to the point where it feels like they're just humouring you.