Yes, good way to put it. Looking at the architecture of an LLM bot, there is a "foundation model" and an "inference engine". The inference engine can be developed to incrementally improve the "reasoning process": break the problem down into steps and write language programs that, in turn, operate on those steps.
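
Roughly what I have in mind, as a minimal Python sketch. `call_foundation_model` is a hypothetical stand-in for whatever model API is in use; the loop around it is the "inference engine" part:

```python
import contextlib
import io

def call_foundation_model(prompt: str) -> str:
    """Hypothetical stand-in for a request to the underlying foundation model."""
    raise NotImplementedError

def solve(problem: str) -> str:
    # 1. Ask the model to break the problem into numbered steps.
    plan = call_foundation_model(f"Break this problem into numbered steps:\n{problem}")
    steps = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2. For each step, ask for a small program, run it, and feed the
    #    printed output back as context for the next step.
    context = ""
    for step in steps:
        code = call_foundation_model(
            f"Context so far:\n{context}\n"
            f"Write Python that performs this step and prints its result:\n{step}"
        )
        buffer = io.StringIO()
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # run the generated program in a fresh namespace
        context += f"\nStep: {step}\nResult: {buffer.getvalue().strip()}"

    # 3. Ask the model to state the final answer from the accumulated results.
    return call_foundation_model(
        f"Given these completed steps:\n{context}\nState the final answer to: {problem}"
    )
```

The point is that the foundation model stays fixed while the orchestration loop (decompose, generate, execute, feed back) can be iterated on independently.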
