Is it also not okay to mention that you googled something?
Reposted from Kath Barbadoro
i dont want to be a killjoy but we desperately need to make it socially unacceptable to talk about asking chatgpt things in regular conversation
Comments
But that's not the only concern; there are other issues with ChatGPT too.
According to https://www.businessenergyuk.com/knowledge-hub/chatgpt-energy-consumption-visualized/, ChatGPT uses more energy than 117 individual countries do, and about 40 million gallons of water a day (roughly 80 times what one of Google's data centers uses).
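For what it's worth, the "80 times" comparison implies a specific per-data-center figure. A quick sanity check of the quoted numbers, in Python (this only works through the arithmetic of the claim above; the per-data-center value isn't independently sourced):

# Sanity-checking the figures quoted from the businessenergyuk.com article.
# The per-data-center number below is only what the "80 times" comparison
# implies; it is not an independent measurement.
chatgpt_gallons_per_day = 40_000_000
ratio_vs_one_google_data_center = 80
implied_gallons_per_data_center = chatgpt_gallons_per_day / ratio_vs_one_google_data_center
print(f"{implied_gallons_per_data_center:,.0f} gallons/day per Google data center")
# -> 500,000 gallons/day per Google data center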
We have googled things for 25 years and now we have full-on fascism in the White House.
ChatGPT might actually run a web search, or it might not; with other models and plans, who knows? Either way, they will usually answer as though they had searched, just the same.
Some searches return bad results, but with ChatGPT even that layer of ambiguity isn't visible.
Oddly enough, a Google search doesn't claim to have "created" the answer (because it shows multiple results), unless you're explicitly using their genAI answer box, sans udm=14.
ChatGPT writes it out mimicking human speech.
An interface that removes (or dramatically reduces) that multiplicity has no affordances for further inquiry, or at least for looking more closely at what comes out.
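In case the parameter mentioned above isn't familiar: as I understand it, udm=14 is the Google Search URL parameter that switches to the plain "Web" results view without the AI Overview panel. A minimal sketch of building such a URL in Python (the query string is just an example):

from urllib.parse import urlencode

# Build a Google search URL using the "Web" filter (udm=14),
# which shows the plain list of results with no AI Overview panel.
def web_only_search_url(query: str) -> str:
    params = {"q": query, "udm": 14}
    return "https://www.google.com/search?" + urlencode(params)

print(web_only_search_url("who wrote war and peace"))
# -> https://www.google.com/search?q=who+wrote+war+and+peace&udm=14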
Phones down, argue your point using logic, let each person have a say, vote for the most likely answer, and then 'search for it using a popular web browser'.
Loser picks up the bar tab.
ChatGPT is unreliable in an entirely different way, in that the only way to verify what it says is to do exactly what you would have done without it.
AI has no actual intelligence, so while it's great at prompts like "Who wrote War and Peace?", it struggles as you add complexity.
There's also the garbage in, garbage out problem (same with Google).
Treat it like a calculator and it's fine.
It's just the same with Google searches.
Sites are sometimes wrong, but there are clues all around that help you tell when something's crazy.
LLMs *by design* look exactly as trustworthy no matter what.
Occasionally, that's still a useful property to lean on, when you're going to go learn something more fully anyway and need a starter. But in general it's bad and will continue to be bad.
With an LLM, there's no replicability and no sourcing information. The answer adds nothing of meaning to the broader discussion.
ChatGPT only needs to be less wrong than the average person's ability to search to have a net positive effect. It's a low bar.