"we asked ChatGPT-" don't. I literally have never cared about anything less than your weird publicity stunt where you ask an algorithm to give you regurgitated words and act like it's some kind of unknowable truth
Comments
The only "we asked ChatGPT" things I will ever enjoy even remotely are the ones where they ask ChatGPT stuff like "are you an ethical product" or something and it replies like "nah this is actually the worst on so many levels, our creators are monsters"
No. The source is still valid; it's the quote by the LLM that isn't.
Apply to an LLM the same reasoning you would to a blog post by an unidentified random: it may be quoting the lead expert in the field, but you can't trust the blog to use the quote properly, in its context, etc.
Sure. I have also sometimes seen blog posts written by randos that quoted people who, once I looked them up, turned out to be relevant and useful. But "we asked ChatGPT and..." doesn't give these sources any more authority than they'd have if you just stumbled upon them.
I think it's the implicit claim to authority that the OP is objecting to. The assumption that info from an LLM reveals some "truer" truth.
Like, nobody would start an article by saying, "We put a couple of keywords related to our query into Google and this is what the first three results told us."
Just curious: have you ever heard of the Turing test? I don't trust that stuff either, but it is fascinating how well it impersonates intelligence (as far as it goes on this planet.)
Ok. Here's my question. Why the fuck did anyone ask someone with as many emotional problems, hangups, and extreme autism as the Enigma Machine solver how *CONSCIOUSNESS* is defined?
"Gee, make it fake interaction convincingly? It's what I do!"
The problem, of course, is the one we have. LLMs can pass Turing tests all goddamned day, because the test isn't testing for intellect, it's testing for plausible lying. Because we made a lying machine. Specifically to pass a Turing test.
No one asked; it's from the results of his work. Almost nobody working in the realm of creativity is without hang-ups, spectrum disorders, etc. Kind of goes with the territory.
He also had no operating manuals; at the dawn of the information age, there wasn't anyone to write them. Or stuff to write them about. That's what your phone does today: "interact convincingly."
And "ask" is a generous stretching of the term here. Prompted is far more appropriate. You can feed whatever promptings you want to your arbitrary sentence generator, but please spare me from hearing about it. I really don't have any interest.
All AI breaks copyright law by using humans' published intellectual property. It does not ask for permission to use copyrighted material.
Meanwhile, if any of us posts anyone else's copyrighted material, we get blocked and possibly sued for doing so!
Most AI peeps are so weird. I'm entertained by that kind of nonsense sometimes, but only privately, and... why would I share it? Who cares? It's just "lol, machine made a silly."
Like sending someone a jar with my fart in it: "it sounded so funny, dude."
Oh wait I know what you're referring to now lol. I have not watched or paid attention to any of those "they so smart" type of things. I don't get it at all. Not watching/reading/anything like it. Just pointless.
*Markdown in comments*: Only software devs and LLMs use MD.
*Threes*: LLMs always seem to regurgitate in threes.
*Disguise*: They always try to hide it but we know.
Not the tool's fault if people use it wrong 😉
It's like the people who think "driver assist" equals "auto pilot" and then crash because it isn't.
It's there to help us, not do it for us entirely. At least not yet.
"Gee, make it fake interaction convincingly? It's what I do!"
Fuck Alan Turing. He has no idea.
But it's still not actually thinking.
I just like when AI makes a silly stupid.