Please stop using ChatGPT in general, but also, please, please stop using it and then taking what it says to you as fact. It's usually wrong! It doesn't fact check itself. It just creates approximations of information based on other text! It's basically magnetic poetry that sets the world ablaze.
Comments
https://summit.aps.org/events/APR-H19/6
https://summit.aps.org/events/MAR-L04/3
See presentation: https://temptdestiny.com/pdf/MAR-L04-00-Morales.pdf
See manuscript: https://doi.org/10.3389/frma.2024.1404371
Like literally it is seen as a crutch by a certain slice of folks with masked intellectual disabilities
Idk what to do
1. it’s marketed worse than maybe any product in history
2. the companies behind it don’t understand what they built and can only think of stupid and/or ignorant uses of it
3. it’s not a fact engine
4. there’s no plan in place to give credit to its sources
I can't imagine anyone using a chat bot to give them accurate information, though.
BUT it was amazing at the beginning. Hope it improves but honestly once AI is no longer a corporate buzzword they'll likely delete ChatGPT.
I trust no AI ID. Lol.
#fail
I ended up reading a good book on the subject and nailed it
Every time I've gently pointed this exact shit out to people, they bite my head off.
It's like the reaction of a bratty 10-year-old who's about to hop on a pogo stick with a knife in his teeth. He's 1000% sure he won't get hurt. It'll be epic.
"Reply Hazy, Ask Again Later" >>> all overly confident, often incorrect magnetic poetry
Gave up and never went back.
Appearance: This relic is a large, glowing crystal with a shape that suggests it is somewhat ethereal and majestic. It has an otherworldly glow that conveys its immense power. Its design hints at something both fragile and fundamental to the world.
This is false. It's depicted as a staff.
False again. As for its function: it has a spirit which can create anything, as long as it has plans.
Appearance: This relic is depicted as a book with a glowing cover. The book appears ancient, with a somewhat mystical aura surrounding it. The glow likely signifies the power of unlimited knowledge contained within.
The first relic to be seen, it is a lamp, not a book.
False. It houses a spirit named Jinn, capable of answering 3 questions every 100 years, except about the future. It stops time while in use.
Appearance: The Relic of Destruction is shown as a large, ornate sword with a dark, ominous design. The sword’s blade seems jagged, representing its destructive nature. It emanates a dark energy that symbolizes the chaos and violence it can unleash.
If you are an expert in beekeeping, and get false info, you recognize it.
If you’re a novice baker, you probably wouldn’t realize.
Not as many as you would hope
Agree that people using them as search engines will have trouble.
I'm not.
Which one is performing as expected?
They are not accurate aggregators of current knowledge or understanding.
Not understanding this is a failing of the media to explain their deficiencies.
(And if you felt compelled to correct the typo before reading this far, thank you for not being a ChatGPT user.)
That's very dangerous.
Please, start thinking critically.
:gets punched by random internet ignoramus:
It's a tool. Like any tool, you need to use it as a tool.
Who the hell uses it to fact che—Oh wait.... Sometimes I pretend people are smarter than they actually are.
I get where you’re coming from—ChatGPT can definitely generate confident-sounding nonsense if you’re not careful. But it’s also capable of synthesizing useful information, especially when you’re familiar enough with a subject to spot when it’s off.
The key is knowing when and how to use it. If someone’s copying and pasting its output without vetting, that’s a user problem, not necessarily a flaw in the tool itself.
I’ve stopped using ChatGPT because Sam Altman is a moral coward, Microsoft sucks, and the brains behind ChatGPT have left the company in large part over moral concerns.
I’ve switched to Anthropic for now.
😉
I think of the big magnets that pick up junk.
Summarize: introduce fake information
I'm not saying it can't be useful, but you picked two use cases where it is horrible
For example, in the world of medical and scientific research, AI is proving to be invaluable. In organisations with a lot of unstructured data, it's allowing this data to be usable once again.
I have said before that there are *some* uses for LLMs. But most of what's being peddled is pure crap. And, again, the US government is about to try to use ChatGPT to run itself.
Listen to this track and tell me where it uses AI.
https://soundcloud.com/always-always-always/the-white-whale-fawm-25?in=always-always-always/sets/reports-of-animals&si=abfb539e3d3c4581b4cc441d6f1bae6f&utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing
And you get to my key criticism of the industry by mentioning teachers. You *can* use it...if you have a firm knowledge of the problem domain and can check the outputs. But that's not how it's being sold.
Very useful when you think of it just as a more efficient search engine.
ChatGPT is very useful as long as you don't blindly trust the output.
Which you should never do - no matter the source
This stupid 'AI' hype is getting on my nerves.
I’ve not got confidence they will delete anything if you ask them to.
Had a functional account. Only time I tried deleting was when I could no longer log in.
Rather like the flavour of cardboard or the tact of Alex Jones.
Now we get "Agentization" in place of humans and are forced to accept their info (even when they lie!)
I use multiple LLMs every day in my work. Yes, they do occasionally get things wrong so I use one LLM to check the results of another. It works well.
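For anyone curious, here's roughly what that cross-checking pattern can look like; a minimal sketch using the OpenAI Python SDK, where the model names and the verification prompt are placeholders I picked, not the commenter's actual setup.

```python
# Minimal sketch: ask one model, then have a second model critique the answer.
# Model names and prompts are placeholders; swap in whatever providers/models you use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(model: str, prompt: str) -> str:
    """Send a single-turn prompt to the given model and return its text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


question = "When was the first APS April Meeting held?"
draft = ask("gpt-4o-mini", question)

# The second model acts as the checker: it sees the question and the draft answer
# and is asked to flag anything unsupported or likely wrong.
review = ask(
    "gpt-4o",
    f"Question: {question}\n\nDraft answer: {draft}\n\n"
    "List any claims in the draft that look unsupported or likely incorrect.",
)
print(draft)
print(review)
```

It's not a guarantee of correctness, of course; the checker can be wrong too, which is why the human still has to look at both outputs.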
I feel like the defining aspect here may be what you're using it for. If you're summarizing text, or copywriting, I can see how it would be accurate "enough".
So yes, “good enough” is often ok, at which time an event is escalated to a human for a response.
In the future, AI agents will do the response too.
I think the concern is more regarding people that use LLMs to build their political beliefs, or relying on it to autonomously (without oversight) perform critical functions.
It gives you an example of what an answer to your question may look like.
https://www.nytimes.com/2025/03/25/technology/chatgpt-image-generator.html
People still using it randomly and without need today are maybe what upsets me the most.
Attacking creativity and art for ugly bullshit results, destroying culture and the environment, and endangering society by powering disinformation is nothing but a joke to them.
I gotta see this.
- It's usually wrong!
- It doesn't fact check itself.
- It just creates approximations of information based on other text!
To see people dumping huge amounts of money and resources into some other bad idea generator is personally insulting.
If you're using it to do your work for you instead of as a digital assistant trained with parameters you set, that's user error.
Because the Copilot LLM in the side panel of the Power Automate authoring window is useful for writing JSON, finding details about how certain functions work, and even asking it to figure out what you've done wrong in the script.
Oh, and it can even write simple scripts for you, which is fantastic for total novices in Power Automate.
Countless clients asked if they could keep the fake text!
https://www.oneusefulthing.org/
It's just things/people saying things and you using your best judgment.
Ok, to each their own.
*SUMMARY*
In summary, one should:
1. Use correct prompts
2. Clearly articulate the question
3. Properly verify responses
Can I help with anything else?
(J/k, 😉 GPT sux)
Pocketing that one, thank you very much.
And ya, we're still at the "trust but verify" stage although things are steadily improving day-by-day.
I just put this stupid chat into my ChatGPT and you know what B-word. It told me that you need your Soul Compass Magnetic Resonance Realign with a 9th chakra aura from the second or fourth.. uh.
Ope!
Yea oh OK yeah I see it now.
and even professors are using it to make their tests
I feel physical pain every time I think about it
Wouldn’t you go to a reliable govt source for info this important? ChatGPT is a very useful tool but you have to use it with critical thinking skills—you have to have your head screwed on.
Which is why I’m taking a wait-and-see approach with ChatGPT https://et.al.
Rushing made them field something that won’t rot your brain by doing everything for you, and it can run on an M2 instead of a planet-eating data center
Jury’s out on data sourcing though, brain >> rock
We get it bro, you suck at thinking and want the machine to do it for you. Doesn't mean the machine doesn't also suck at it, while lighting the planet on fire.
It's no less fallible than the smartest person in your office. It's not "usually wrong"-- that's asinine. They're not great writers, but they're excellent writers' assistants. More CE credits than my therapist. Or my last therapist, anyway.
It's a machine designed to give convincing *sounding* answers, without any veracity check on the end product.
That's worse than a machine that often gives wrong answers.
You are literally doing what you just said it does-- and it's ironic, because that's not what it does
Read this, by a guy who I'd bet knows more than either of us put together, and get back to me.
https://theconversation.com/heres-how-researchers-are-helping-ais-get-their-facts-straight-245463#:~:text=It%20bears%20emphasizing%3A%20AI%20chatbots,which%20comes%20from%20the%20internet.
I have been working near daily with AI for two years. Don't expect miracles-- check doubts.
https://www.wheresyoured.at/wheres-the-money/
I'm gonna assume they're an idiot
how can people use that without feeling disgusted and like they're being lied to??
Treat it as a crutch, not something that will run a marathon for you.
You can choose not to use them, but don't make stuff up to justify it. Oh wait...
Your personal experience isn't particularly relevant to me. If nothing else, I can't fact check it.
- write a code snippet for Wordpress
- select Wordpress plugins
- plan travel in China
- design the structure of a book I am writing
- write the preface to another book
- design some business simulation games
- create an artificial radio show
and it was bang on
You can run them locally on your phone as proof.
Look at Pocket Pal AI. You can monitor your battery consumption.
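If you'd rather try the same experiment on a laptop than a phone, here's a rough sketch using llama-cpp-python with a small quantized GGUF model running fully offline; the model path is a placeholder for whatever file you've downloaded, and this isn't how Pocket Pal AI itself is implemented.

```python
# Rough sketch: run a small quantized model fully offline with llama-cpp-python.
# The model file path is a placeholder -- point it at any GGUF model you've downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/small-model-q4.gguf",  # placeholder path, not a real model name
    n_ctx=2048,      # context window size
    verbose=False,
)

output = llm(
    "Explain in one sentence why a language model can sound confident and still be wrong.",
    max_tokens=80,
)
print(output["choices"][0]["text"])
```

A few-billion-parameter quantized model like this runs on ordinary consumer hardware, which is the point the comment is making about per-query energy use being observable on a battery.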