True for the first generations of LLMs, which relied on statistical learning, transformers, and self-attention over hundreds of billions of parameters.

The new generation of chain-of-thought reasoning models retrains LLMs with multi-stage reinforcement learning to produce correct responses rather than hallucinate plausible nonsense.
