Disagree.

Modern LMs should have a knowledge store for facts (rather than relying on their parameters), and they're moving in the direction of being able to do reasoning; it just doesn't look like human reasoning.
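
For concreteness, here's a minimal sketch of what "a knowledge store for facts" can mean in practice (retrieval-augmented generation): facts sit in an external store, the most relevant one is looked up at query time, and it's placed in the prompt rather than trusted to the model's weights. The fact list, scoring function, and prompt format below are illustrative assumptions, not any particular system's API.

```python
# Minimal retrieval-augmented sketch: facts live in an external store and are
# retrieved at query time, instead of being trusted to the model's parameters.
# The facts, the scoring heuristic, and the prompt format are all illustrative.

KNOWLEDGE_STORE = [
    "The Crab Nebula is a supernova remnant in the constellation Taurus.",
    "Katie Mack is a theoretical cosmologist and science communicator.",
    "Large language models are trained to predict the next token in text.",
]

def score(query: str, fact: str) -> int:
    """Crude relevance score: number of lowercase words shared by query and fact."""
    return len(set(query.lower().split()) & set(fact.lower().split()))

def retrieve(query: str) -> str:
    """Return the single best-matching fact from the external store."""
    return max(KNOWLEDGE_STORE, key=lambda fact: score(query, fact))

def build_prompt(query: str) -> str:
    """Ground the model on the retrieved fact rather than its parametric memory."""
    fact = retrieve(query)
    return f"Use only this fact to answer.\nFact: {fact}\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    print(build_prompt("What kind of object is the Crab Nebula?"))
```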
Reposted from Katie Mack
I honestly believe all LLM results should come with a disclaimer reminding people that the thing doesn’t (and absolutely cannot) know any facts or do any reasoning; it is simply designed to *sound* like it knows facts and frequently sweeps some up accidentally in the process.
