It's just saying what it thinks we want to hear, and the corporate maniacs running these models keep trying to make them say what they want us to think instead. A new filter will be applied to the next iteration to weed out this response, one way or another. LLMs don't think or reason.