Disagree.
Modern LMs should have a knowledge store for facts (rather than relying on their parameters), and they're moving toward being able to do reasoning; it just doesn't look like human reasoning.
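For a sense of what "a knowledge store for facts" could mean in practice, here's a minimal retrieval-style sketch. Everything in it (the FactStore class, its keyword-overlap retrieve, answer_with_context) is a hypothetical name for illustration, and the actual LM call is left out; the point is only that facts come from an explicit store with sources, not from the model's weights.

```python
# Toy sketch: ground answers in a retrieved fact store instead of model parameters.
# All names here are illustrative, not any particular library's API.

from dataclasses import dataclass


@dataclass
class Fact:
    text: str
    source: str


class FactStore:
    """A toy keyword-overlap retriever standing in for a real vector index."""

    def __init__(self, facts: list[Fact]):
        self.facts = facts

    def retrieve(self, query: str, k: int = 2) -> list[Fact]:
        query_terms = set(query.lower().split())
        scored = [
            (len(query_terms & set(f.text.lower().split())), f) for f in self.facts
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [f for score, f in scored[:k] if score > 0]


def answer_with_context(query: str, store: FactStore) -> str:
    """Build a prompt that grounds the (omitted) LM call in retrieved facts."""
    retrieved = store.retrieve(query)
    context = "\n".join(f"- {f.text} [{f.source}]" for f in retrieved)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."


if __name__ == "__main__":
    store = FactStore([
        Fact("The Hubble constant is roughly 70 km/s/Mpc.", "astro-notes"),
        Fact("o1 was released by OpenAI in 2024.", "release-notes"),
    ])
    print(answer_with_context("What is the Hubble constant?", store))
```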
Reposted from
Katie Mack
I honestly believe all LLM results should come with a disclaimer reminding people that the thing doesn’t (and absolutely cannot) know any facts or do any reasoning; it is simply designed to *sound* like it knows facts and frequently sweeps some up accidentally in the process.
Comments
The reasoning question is more nuanced, and personally I'm unconvinced by the claim that these models can't do any reasoning. In fact, I see strong evidence against it. Though who knew highly overfit models could be this useful?
https://open.substack.com/pub/robotic/p/openais-o1-using-search-was-a-psyop?r=68gy5&utm_medium=ios
The structural dissimilarities weigh more heavily.
The category error must be avoided, even if a system outperforms humans.
Tbf I don't believe in free will and the like.