I wish we could chat. You asked an LLM a question it couldn’t answer. It always gives the most probable answer, and sometimes the most probable answer is wrong. Like any tool, you need to understand its limitations. It’s neither a panacea nor useless. It’s just a tool, mostly not one for you, but not useless.
Comments
"Real humans" display the same behavior, for instance the "publish or perish" in academia. Or Politicians...
I have found that LLMs are good only when the answer is something I can cross-check, e.g. "what is the syntax for doing X in programming language Y" (where X is a SMALL THING); a quick sketch of that kind of check is below.
I can _maybe_ ask it to find primary sources and check those out myself, if I'm truly lost on a topic, but I can't even trust its summary of a primary source.
They're not good if you can't check their work quickly.
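To make "check their work quickly" concrete, here is a minimal sketch of the habit, assuming the LLM claimed that Python 3.9+ can merge two dicts with the | operator (my own made-up example, not a specific model answer): paste the claimed syntax into an interpreter and let it fail loudly if the claim is wrong.

    # Hypothetical claim to verify: "in Python 3.9+, d1 | d2 merges two dicts,
    # with the right-hand dict winning on duplicate keys."
    defaults = {"retries": 3, "timeout": 10}
    overrides = {"timeout": 30}

    merged = defaults | overrides  # run the claimed syntax directly
    assert merged == {"retries": 3, "timeout": 30}  # and check the claimed behavior
    print(merged)

Running that takes seconds, which is exactly the "small thing" case; a summary of a primary source has no equivalent one-line test.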