Yes, good way to put it. Looking at the architecture of an LLM bot, there is a "foundation model" and an "inference engine". The inference engine can be developed to incrementally improve the "reasoning process": break the problem down into steps and write language programs that in turn operate on those steps.
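To make the split concrete, here is a toy sketch (all names hypothetical, not any real library's API) of what that division of labor could look like: a fixed "foundation model" wrapped by an "inference engine" that owns the step decomposition.

```python
def foundation_model(prompt: str) -> str:
    # Stand-in for a real LLM call; here it just echoes a canned answer.
    return f"answer({prompt})"

def inference_engine(problem: str) -> list[str]:
    # The engine, not the model, owns the reasoning strategy:
    # 1. break the problem into steps, 2. run the model on each step.
    steps = [s.strip() for s in problem.split(";")]
    return [foundation_model(step) for step in steps]

results = inference_engine("parse input; plan route; format output")
# One model call per step, orchestrated outside the model itself.
```

The point of the sketch is that the engine can be improved (better decomposition, tool use, retries) without retraining the underlying model.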
Comments
I am fascinated by the explainability research Anthropic is doing. If I were a philosopher I would focus on that...
and the inference engine is based on matrix algebra that performs a sort of multidimensional relevance weighting ("attention") over the input prompt plus the foundation model's learned weights. (Loosely speaking.)
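The matrix-algebra core is small enough to show directly. This is a minimal NumPy sketch of scaled dot-product attention (the toy data and dimensions are arbitrary, chosen only for illustration):

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: scores measure how relevant each
    # key is to each query; softmax turns scores into weights; the
    # output is a weighted mix of the value vectors.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Three "tokens" with 4-dimensional embeddings (random toy data)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out = attention(X, X, X)  # self-attention: Q = K = V = X
```

Each output row is just a weighted average of the input rows, which is why "relevance weighting" is a fair loose description.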
I think it is pretty easy to understand that a model can generate different outputs based on different generation strategies (https://huggingface.co/docs/transformers/generation_strategies#decoding-strategies)
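A self-contained way to see this, without loading a real model: take one fixed set of next-token logits and decode it three different ways (toy vocabulary and numbers are made up for illustration; the strategy names match the linked Hugging Face docs).

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Toy next-token logits over a 5-word vocabulary
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 1.5, 0.5, 0.2, -1.0])

# Greedy decoding: always pick the argmax -> deterministic output
greedy = vocab[int(np.argmax(logits))]

# Temperature sampling: T > 1 flattens the distribution -> more varied output
rng = np.random.default_rng(42)
probs = softmax(logits / 1.5)
sampled = vocab[rng.choice(len(vocab), p=probs)]

# Top-k sampling: renormalize over only the k most likely tokens
k = 2
top = np.argsort(logits)[-k:]
topk = vocab[top[rng.choice(k, p=softmax(logits[top]))]]
```

Same model, same logits, three different outputs depending only on the decoding strategy.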