Okay, first: they don't lie, they're predictive text programs. LLMs give factual errors because they're designed to produce the most statistically likely set of words for a query, not the most correct answer. They aren't designed to be the best answer-giver; they give the "most likely" answer based on the data they were trained on.
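Roughly, the selection step works like this (a toy sketch, not any real model's code; the words and probabilities here are made up for illustration):

```python
# Toy illustration: the model picks the statistically most likely next
# word, with no notion of whether that word is factually correct.
next_word_probs = {
    "cheese": 0.40,   # common phrasing in the training data
    "sauce":  0.35,
    "basil":  0.20,
    "glue":   0.05,   # a joke post that also showed up in the data
}

def most_likely(probs):
    # Greedy decoding: return whichever continuation scored highest,
    # regardless of its truth value.
    return max(probs, key=probs.get)

print(most_likely(next_word_probs))  # -> "cheese"
```

If the joke phrasing happens to score highest in some context, the model outputs it just as confidently, which is the whole point: "likely" and "true" are not the same thing.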
I can’t engage on these secondary points if you won’t engage on the primary one
It's also full of jokes, which a predictive text generator can't tell apart from facts, and that's how you get the glue-on-pizza answers.