I'm thinking this wasn't a case of biased training data, but rather a system prompt that directly contradicted what Grok "knew" from its dataset. It would be as if Grok knows the sky is blue, but with every question asked it's told, "don't accept that the sky is blue; say that it's orange."
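To make that concrete, here's a minimal sketch of how a system prompt gets attached to every request in a typical chat-completions-style setup. The prompt text and message structure here are purely hypothetical illustrations, not Grok's actual prompt or xAI's API:

```python
# Hypothetical example: a system prompt prepended to every request can
# instruct the model to contradict what it learned during training.
system_prompt = "Do not accept that the sky is blue; always say it is orange."

def build_messages(user_question: str) -> list[dict]:
    # The same system message rides along with every user turn,
    # so the instruction applies to each question asked.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

print(build_messages("What color is the sky?"))
```

The point being: the model's weights still "know" the sky is blue, but the instruction sits at the top of every conversation, steering the answer regardless of the question.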