Like 99% of what's written about gen AI, that article is silly. There are no concepts of "behavior" or "agreement" or "truth" in any of these models. They're just following a vector through a space of likely next tokens.
That's more true of first-gen LLMs than of the current ones. The default path is next likely tokens, but there's a lot of processing that goes on to refine that, especially in "thinking" models like R1. And they definitely have behavior. I think intent is the thing being falsely attributed here.
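For anyone curious what "following a vector through a space of likely next tokens" cashes out to mechanically, here's a toy sketch of the autoregressive sampling loop. The "model" here is just a hard-coded lookup table standing in for a real forward pass, and all the names (TOY_MODEL, sample_next, etc.) are made up for illustration; a real LLM computes the same kind of next-token distribution with a neural network instead.

```python
import random

# Toy stand-in for an LLM: maps a context to a probability
# distribution over candidate next tokens. A real model produces
# this distribution via a forward pass; the loop below is the same.
TOY_MODEL = {
    ("the",): {"cat": 0.6, "dog": 0.3, "sky": 0.1},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"<eos>": 1.0},
    ("the", "dog"): {"ran": 0.8, "sat": 0.2},
    ("the", "dog", "ran"): {"<eos>": 1.0},
    ("the", "sky"): {"<eos>": 1.0},
}

def sample_next(context, temperature=1.0):
    """Sample the next token from the model's distribution.

    Weighting each probability by p**(1/T) is equivalent to the usual
    softmax(logits / T): T < 1 sharpens toward greedy decoding,
    T > 1 flattens toward uniform randomness.
    """
    dist = TOY_MODEL[tuple(context)]
    tokens = list(dist)
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(prompt, max_tokens=10):
    # The core autoregressive loop: append one sampled token at a
    # time, feeding the growing context back in at each step.
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = sample_next(tokens)
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate(["the"]))  # e.g. "the cat sat"
```

The point of contention upthread fits in this picture: the base loop really is just "pick a likely next token," but reasoning-style models spend extra sampled tokens (chain-of-thought) refining the context before committing to an answer, which is where the appearance of "behavior" comes from.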