A paper on the theoretical limitations of LLMs (https://arxiv.org/abs/2412.02975) argues that AGI arising from an LLM is not just a scaling issue.
It sounds like something beyond Transformers is needed for complex compositional problems. How will AI continue to evolve beyond reasoning models and DeepSeek?
Reposted from Quanta Magazine
Researchers are finding that the architecture of LLMs has inherent limitations. But this is not the end of chatbots. @anilananth.bsky.social reports: www.quantamagazine.org/chatbot-soft...