How do people decide when it's no longer worth trying to correct an LLM into a better response, versus just working out the answer with your own human brain?

How many other folx have spent hours rabbit holing with ChatGPT when they could've looked at a previous example? ✋
