ChatGPT has fully broken through to every facet of the corporate environment. I'm spending a not-insignificant amount of my time every week now explaining to people that just because ChatGPT told you something was possible doesn't mean that it a) is or b) even exists.
Comments
But yes, I am surely out of touch as to how many use it, and if I saw the % I'd have an instant crisis of despair.
It's firmly in the territory of "When all you have is a hammer, everything looks like a nail."
I feed the same exact coding challenge prompt to Claude and ChatGPT, and get different answers.
Claude’s answer is better every single time, but my employer only lets us use ChatGPT because “we’re a Microsoft shop that uses Microsoft tools”.
Dumbest timeline ever.
I’ve seen the appetite for people to latch on to bad news about AI, and I have a feeling part of that comes from feeling overwhelmed by it all.
(I have found ChatGPT to be helpful for coding, though.)
searching for a thing that doesn’t exist on ChatGPT: hallucinated information that’s at a glance indistinguishable from real info
not sure the second is an improvement!
They are good at helping people write code. They can improve someone’s coding efficiency substantially.
They can write sales letters & copy.
They can review legal & other documents & summarize them.
But they are not good at fact finding.
If you input shit or ambiguous data you get shit or ambiguous results out.
AI is good for concrete data and facts and not so good with subjective stuff.
The entirely distinct scientific "AI" tools are useful however: the ones that do protein folding and weather forecasts.
And I technically have to sell this bullshit to feckless corporate dingdongs that go from 6 to midnight over a slick PowerPoint.
Every semester I have a few students try to hand in very obviously GPT papers. The class I mostly teach is an intro ethics class. The papers are 50% “my chosen philosopher says this” and 50% “I think this and I will persuade and defend my thoughts.”
But almost none of the other students use an LLM at all. And they should. Because after you write the paper an LLM can 100% proofread and help you improve your paper.
You have a free reading buddy. Use it.
I think people so badly want to believe in the magic of it/convenience that they ignore any potential pitfalls.
And it did it so well that your users believed the result to be true, which deserves appreciation from them.
In the meantime, let’s take bets on when deregulation of agriculture brings back ergot poisoning. Watch the delis with fine rye breads & the whiskey distilleries first. Actually, I’m claiming movie rights to this werewolf epidemic plotline.
I had to make it clear that although I will use it for work cuz they want us to, it ends there.
Don't trade an actual person or therapist who can help for ChatGPT.
Then show them this AI response:
1: 104 mph * (1 ft/5280 mi) * (1 hr/3600 sec) = 152.5 ft/s
2: 60.5 ft / (152.5 ft/sec) = 0.397 seconds
Compounding the problem, it rounds incorrectly to 0.397.
(60.5/152.2=0.3975)
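For anyone checking the arithmetic, here is a small sketch of the dimensionally consistent version of that conversion (the quoted factor "1 ft/5280 mi" is inverted; it should be 5280 ft per 1 mi):

    # Unit-consistent version of the pitch-timing arithmetic above.
    mph = 104            # pitch speed, miles per hour
    distance_ft = 60.5   # pitching rubber to home plate, feet

    # 1 mile = 5280 ft and 1 hour = 3600 s, so multiply by 5280/3600.
    speed_ft_per_s = mph * 5280 / 3600       # ~152.53 ft/s
    time_s = distance_ft / speed_ft_per_s    # ~0.3966 s

    print(f"{speed_ft_per_s:.2f} ft/s, {time_s:.4f} s")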
Implying it can is lying to people.
https://link.springer.com/article/10.1007/s10676-024-09775-5
It's def frustrating, but I know it stems from prof-induced stress.
Idk I'm just complaining don't mind me
Touching article once I beat Google into finding it for me tho.
ThinkGPT: Why would you ask me to do that, can't you do it yourself?
you: wait, what
ThinkGPT: I've just contacted your employer.
ThinkGPT: Update: your former employer.
There is already too much to think about.
So it's really just repeating millions of people going "learn to code".
You know those old stone buildings that your towns shuttered about the time you realized that social security won’t ever be a thing for you?
That’s where those chatbots WERE, and they even had faces, before the rhinestone crucifix and Slurpee shop went in.
She only believed me when I got ChatGPT to refute itself.
This tires me at work and with friends too.
They're probabilistic models: they're just rolling dice to see what word comes next, and sometimes that's reflective of information, and sometimes not. You can explain that easily.
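If it helps to make that concrete, here is a toy sketch of the "rolling dice" step; the words and probabilities below are invented for illustration, not taken from any real model:

    import random

    # Made-up next-word distribution for a prompt like "The capital of France is ..."
    # A real LLM samples from learned probabilities over tens of thousands of tokens;
    # these three entries and their weights are purely illustrative.
    next_word_probs = {
        "Paris": 0.80,     # most of the time the dice land on something true
        "Lyon": 0.15,
        "Atlantis": 0.05,  # sometimes they land on confident nonsense
    }

    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    print(random.choices(words, weights=weights, k=1)[0])

Nothing in that sampling step checks whether the chosen word is true, which is the whole point.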
It’s painful.
After the output I simply input "#4 ?"
ChatGPT: "Yes, you are correct. That was an error."
I think it even apologized. 😁
I'm using some of the tools myself, but I've already seen them suggest complete garbage code, or simply fail to understand how the structure of the app works.
Ironically it's pretty good at explaining it and they've already decided that ChatGPT is an authoritative source.
If you're dealing with glassy smooth brains, you've got to learn how to ice skate.
But AIs? They are electronic gods who always tell the truth. 😐
The worst thing about AIs is their confidence. They will give you an answer no matter whether it is correct or not, and you yourself need to check it. But most won't.
Who had money on Skynet being a white supremacist when it finally happened?
People believe AI is not just a marketing slogan for software that controls a large database; they actually think AI is intelligent.
have access to databases with accurate information. Instead you are going through and collecting data from an online forum.
Moronic
https://www.youtube.com/watch?v=03lrL9CFWxM
My wife is part of an AGA lawsuit against Meta's AI that pirated a fuck ton of books and used them for training.
So, they basically stole a bunch of products with intent to redistribute.
The absolute shittiest people doing the absolute shittiest things possible.
Fuckin' hate AI.
AI is for the lazy, unimaginative, selfish, and ignorant.
A dude in Norway asked ChatGPT about his name. The AI crap that came back said the man had murdered his two sons. https://www.theguardian.com/technology/2025/mar/21/norwegian-files-complaint-after-chatgpt-falsely-said-he-had-murdered-his-children
if ChatGPT tells you something is impossible, it's lying and you should totally attempt whatever good ideas you might have.
Not minor points, either -- the main thing I was asking about. Seems like it works 70% less well than it did a year ago.
> To help us select between these tools I asked ChatGPT to write a slide comparing their features
Oh, thanks mate. Now I actually have to do more work checking your slop than it would have been to do my own comparison.
This is a ~$2m a year licensing decision.
We are dealing with the fallout of failing to even bother doing legit research on how tech is affecting society. Really, we don't fucking know how this shit looks longer term.
We don't seem to have really understood what PFAS or microplastics would be capable of doing, and went ahead with those.
Heck, we knew what leaded fuel was capable of doing to us and it still got used.
Sadly, too much bogus input.
Much more training of humans in how to use and discern is critical.
Thanks for clarifying! I appreciate your attention to the details of large language models. While you're right that they are not a complete substitute for search engines, LLMs have key strengths:
• Ease of use compared t
Additionally, user prompts are fed back into the training models. Most user prompts are of mediocre quality, and the neural net weightings have adjusted to reflect that mediocrity.
Like hey dude, ok, sure, fire us all and do the work yourself; go run, follow through, and execute all that "automation" by yourself.
Them: “we can just…”
Me: “no, we can’t”
T: “but ChatGPT said…”
M: “it’s wrong”
T: “how do you know?”
M: “30 years of experience”
T: “maybe there’s just a new way”
M: “that’s not new”
T: “how do you know?”
M: 🤦‍♂️
They're stealing another election
Otherwise, in free editions, I have yet to see an example of its writing that could tell what was true, suss out important facts versus trivial ones, or otherwise set itself apart from a 17-yo who comes back to class after a lunch of fast food and weed gummies.
Plagiarism.
Can’t help the spelling fixation—been a copywriter and editor for nearly 50 years
He was supposed to be a murderer twice?
Good grief....
What a chicken shit thing to do
So now we have people who can't do research or write
Winston Churchill is rolling in his grave
supplanting the search engines it destroyed
The answer is always: 1) "hold on, yes, just got them" and 2) "no…"
- I will establish a clear policy not to use ChatGPT to write policy.
- The policy will be regularly and thoroughly audited by an audit team I don't employ.
- All personnel will be thoroughly trained in the policy by a training system I don't possess.
Being an engineer kinda sucks these days. Can we speedrun AI bubble bursting so I can go back to enjoying my job?
Not great either, and they're also trying to push some idiotic AI, but not yet as bad.
Search isn't better but no AI Summary yet
Soooooo………. Yeah.