I don’t know how I didn’t see this before.
The Dunning-Kruger effect is a cognitive bias. The shorthand is that people overestimate their skills/knowledge in a given domain, because they lack the knowledge to assess their own competence.
ChatGPT etc have automated the Dunning-Kruger effect.
Comments
One definition of expert is “somebody who knows more than me about something”, but the traditional seminar definition is “somebody from out of town”.
In any case, if we’re all experts, we all know everything about everything.
Your statement gave me another perspective on something I’ve been thinking about a lot recently. I thought I was thinking about just AI ethics, but it’s bigger than that.
I’ll chew on it some more then come back to this thread. Thanks. 👍
My pinned post is on LLM AI & GIGO ;)
I do think it was a reasonable (mis-)interpretation of your comment, but thanks for clarifying.
One person? It’s them. Two? Probably me and an implication I missed with my phrasing. 🙂
It took me a while of sitting and thinking about the GIGO stuff before I posted anything about it publicly.
What really threw me was reading about how LLM AIs work & knowing how they obtained their datasets.
“No. That can’t be right. They can’t be that fundamentally incompetent.”
Every time I hear some AI bro brushing off “hallucinations”, it fills me with rage.
I'm not entirely clear what point you're trying to make, because it seems to be based on an assumption about something I wasn't saying.
What’s bugged me ever since I encountered LLMs was the idea that if you don’t understand the question you’re asking an “AI”, you can’t validate the answer.
That said, I’m betting other folks have come up with it too. It just fits.
A good doctor (like mine) will admit when she doesn't know something and research it. A bad doctor will make shit up.
A bad lawyer will be hired by the Trump DOJ.
You can’t debug code produced by Copilot if you don’t understand the code.
You certainly can’t validate LLM-generated tariff calculations if you don’t understand economics.
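A hypothetical illustration of that point (mine, not anything from the thread): the kind of confident, plausible-looking Python a code assistant might suggest. The bug is only visible if you already understand the domain, because percentage discounts compound rather than add.

```python
# Assistant-style sketch: looks reasonable, passes a casual glance.
def apply_discounts(price, discounts):
    """Apply a list of percentage discounts to a price."""
    total_pct = sum(discounts)  # bug: percentages don't add linearly
    return price * (1 - total_pct / 100)

# What the code should do: compound each discount in turn.
def apply_discounts_correct(price, discounts):
    result = price
    for pct in discounts:
        result *= (1 - pct / 100)
    return result

print(apply_discounts(100, [10, 10]))          # 80.0 -- plausible, wrong
print(apply_discounts_correct(100, [10, 10]))  # 81.0 -- actually right
```

If you don’t already know that two 10% discounts aren’t a 20% discount, both outputs look fine. That’s the D-K trap in miniature.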
I'm sure someone smarter than me has already written about it.
Effectively, supercharging D-K.
I don’t want a government using an insane methodology for tariffs… oh wait
AIs are not "neutral" and they have no thoughts of their own.
GPTs, Claudes, 4os (idc) DO. NOT. KNOW. There is no concept of "know"! There is only pretending/facade.
I disagree on the pretending/facade, though. That does not exist. There is, quite simply, data. LLMs cannot interpret "good" or "bad" data.
https://bsky.app/profile/grissallia.bsky.social/post/3lbsseeluhm2q
There isn't really much time anyway to hash out extremely minor disagreements over technicalities. The combination of AI with fascism, perhaps especially in the aesthetics/computations (the Studio Ghibli stuff was an exemplification), means we are superduperUnited.
Forge on! 🫡