Because ChatGPT is a model designed to imitate human language, its intent is to produce plausible responses, not to be correct or accurate. For all intents and purposes, saying "according to ChatGPT" in any serious manner is equivalent to "according to my 7yo cousin".
It's still OK to use it as an investigation starter, since it has been trained on a lot of data and there's a decent chance it will surface connections you can investigate further based on your prompts, but it should never be used as a credible source for anything.
Because LLMs often "hallucinate" and spit out incorrect information. Also, ChatGPT is not a source of information and should not be used as such. I'd argue it's worse than citing Wikipedia as a source. People who approach a discussion citing ChatGPT shouldn't be taken seriously, in my opinion.
It's a tool in my research, not my only tool. I always back up any crucial information with other sources. I also use immense amounts of detail in my interactions with GPT. Primarily, though, I use it to help me organize and structure tasks and projects, and to learn general overviews of topics.
I think the only proper way to approach LLMs is to verify everything you get as a response. Otherwise you're trusting a very flawed system to give you correct information, which can have pretty serious consequences depending on what kind of work you're in.
If I provide information and say "according to ChatGPT", that's my acknowledgement that the information could be inaccurate. That's why I took issue with your first post. However, I try to avoid doing this, and additionally I try to only use GPT for broad overviews rather than granular detail.
I also don't use it to glean crucial info that I need for work, mostly just for personal research and to sate my curiosity about various topics and how they connect.
Comments
But so can people? Nobody's memory is perfect. I'd even argue that people make shit up more than ChatGPT does.
Like what you're saying could totally be valid, but it really depends on context.
If you use it for preliminary stuff, *then* go and look up sources yourself, you aren't basing stuff off of ChatGPT, you're basing it off of sources.