The success of LLMs on abstract reasoning tasks (e.g. https://www.nature.com/articles/s41562-023-01659-w) therefore raises the question: do LLMs solve these tasks using structured, human-like reasoning mechanisms, or do they merely mimic such reasoning via other mechanisms (e.g. approximate retrieval)? (3/N)