The other thing people seem not to understand is that hallucinations aren't the only issue with LLMs. They don't have objective knowledge; they're predicting text based on their training data. And that training data (the internet) is rapidly degrading as LLM-generated content displaces quality research.