However, if the question is one whose answer I can't independently check, I can't trust the LLM's output.
If I'm truly lost on a topic, I can _maybe_ ask it to find primary sources and check those out myself, but I can't even trust its summary of a primary source.
Comments
They're not good if you can't check their work quickly.