The other thing people seem not to understand is that hallucinations aren't the only issue with LLMs. They don't have objective knowledge; they predict text based on their training data. And their training data (the internet) is rapidly degrading as LLM-generated content displaces quality research.
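A minimal sketch of that feedback loop, under toy assumptions: the "model" here is just a Gaussian fit to its corpus, and each generation's corpus is sampled entirely from the previous generation's fit. Nothing here is from any real training pipeline; it only illustrates the degradation mechanism. The fitted spread tends to shrink toward zero over generations, a miniature of the "model trained on model output gets dumber" effect.

    import numpy as np

    rng = np.random.default_rng(0)

    # Generation 0: "human" data drawn from the true distribution.
    data = rng.normal(loc=0.0, scale=1.0, size=50)

    for gen in range(301):
        # "Train" a toy model: fit mean and spread to the current corpus.
        mu, sigma = data.mean(), data.std()
        if gen % 50 == 0:
            print(f"gen {gen:3d}: mu={mu:+.3f} sigma={sigma:.3f}")
        # The next corpus is entirely model-generated: sample from the fit.
        data = rng.normal(loc=mu, scale=sigma, size=50)

Running it, sigma drifts toward zero: each refit on finite model-generated samples loses a little of the original variance, and that loss compounds because nothing fresh ever re-enters the corpus.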
Comments
Reading R1's CoT is like reading the neurotic ramblings of a very insecure mind. And it works pretty well!
LLM garbage out: AI begets more content that makes AI dumber.
Expect lots of incompetent new (& old) folks over the next couple of years whose shortcomings will have to be compensated for…