Hallucination is pretty accurate for the 2015 Deep Dream tool, but I've never seen a machine intentionally lie. I've seen machines make stuff up, but nine times out of ten they'll admit it when you ask.
It's basically doing math to construct a sentence from language, based on an associative token database. This turns out to be very effective at not using logic or reasoning in its answers, because it's trying to math-understand logic and reasoning from language instead of intrinsically using those functions.
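To make that concrete, here is a toy Python sketch of what "doing math to construct a sentence" looks like: a made-up table of follow-on probabilities and a loop that just picks what statistically tends to come next. No real model works from a table this small, but the principle is the same: fluent-looking output with no check anywhere for truth.

```python
import random

# Toy "model": a made-up table of which word tends to follow which.
# Real LLMs learn vastly more associations, but the principle is the same.
bigram_probs = {
    "the": {"cat": 0.5, "moon": 0.3, "rocks": 0.2},
    "cat": {"sat": 0.6, "ate": 0.4},
    "sat": {"quietly": 0.7, "down": 0.3},
    "ate": {"rocks": 0.5, "fish": 0.5},
}

def generate(start: str, steps: int = 3) -> str:
    words = [start]
    for _ in range(steps):
        options = bigram_probs.get(words[-1])
        if not options:
            break
        # Sample the next word in proportion to how often it tends to follow;
        # nothing here evaluates whether the resulting sentence is true.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat ate rocks": fluent-looking, fact-free
```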
Yeah, that's what's happening, but when you ask a chatbot a question you're expecting the correct answer. When it doesn't reliably give one, for the reasons you laid out, the product is malfunctioning.
The issue I have with AI is that it's not based in logic and reasoning. I recently discovered it still can't count letters. We have digital infants with giant databases of information that people are treating like a god.
The AI requires a secondary inference (forward chaining, as I mentioned) to go "I know the token 'straw' has 1 R. I know 'berry' has 2."
This isn't a limitation of AI, but of processing power. There are byte-level tokenizers that can work with individual letters as tokens and don't have these problems.
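A small sketch of the difference (the subword split below is hypothetical, not any particular model's real vocabulary): seen as whole subword tokens, the letters aren't directly visible, while a byte-level view makes counting them trivial.

```python
# Hypothetical subword split; real vocabularies differ, but the idea holds.
subword_tokens = ["straw", "berry"]

# At the token level, neither unit *is* the letter "r".
print(sum(1 for tok in subword_tokens if tok == "r"))  # 0

# A byte-level tokenizer treats every byte (here, every letter) as its own token,
# so "how many Rs?" becomes a simple count.
byte_tokens = list("strawberry".encode("utf-8"))
print(sum(1 for b in byte_tokens if chr(b) == "r"))  # 3
```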
I disagree.
Using "malfunctions" or "glitches" implies that these are extraordinary events and could be fixed to make it "function" or "work" properly.
The "hallucination" is inherited part of LLMs and cannot be fixed, because they are not conscious.
That's why I think describing "hallucinations" as "malfunctions" could have been more powerful. "Aw dude, it glitched again and told you to eat rocks." Idk, maybe it wouldn't have mattered.
I think it's actually a genuinely correct description of what's happening. I'd define bullshitting as saying things that you think would sound right, regardless of any genuine understanding of what you're talking about. That's objectively what LLMs do, even when they're *right*.
Yeah, I've seen a few people reference Harry Frankfurt's book *On Bullshit*. I think there's something to it. The machine can't know what's true or false but will predict an answer that looks factual.
YES, this has bothered me so much lately. "Hallucinations" feels like some soft description LLM supporters *want* us to solely use to make the whole thing feel more agreeable. They don't want words that highlight how truly broken it all is.
I actually don't think "malfunction" or "glitch" is accurate, because the only difference between what's considered a hallucination and what's considered good output is whether it happens to be true.
Yeah, there's definitely a desire for it to be a clever genie that can answer their questions, when it's really a server rack that applies statistics to words.
So in order for it to malfunction, it would also have to be doing what they think it's doing when it functions. What if we just said it's wrong?
What keeps me up at night is that answers themselves aren't as important as how we know them. I can get answers from some bluecheck on X. I can get answers from a peer-reviewed meta-analysis of dozens of clinical studies. They don't have the same value.
An LLM doesn't 'know' how it got the answers it serves you. Even if it gives you a list of citations, it didn't necessarily draw the answer from those sources, much less understand or evaluate them.
What they consider "reason" is merely forward chaining, or improvements to the attention mechanism used to monitor output.
But human attention is far more complex, and we are constantly self-monitoring, so we immediately see our own errors.
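For anyone unfamiliar with the term, forward chaining just means repeatedly applying if-then rules to known facts until nothing new can be derived. A minimal sketch, with facts and a single rule invented purely for illustration:

```python
# Minimal forward chaining: keep firing if-then rules until no new facts appear.
# The facts and the rule here are invented examples for illustration only.
facts = {"'straw' contains 1 r", "'berry' contains 2 r"}
rules = [
    # (set of premises, conclusion)
    ({"'straw' contains 1 r", "'berry' contains 2 r"}, "'strawberry' contains 3 r"),
]

derived_something = True
while derived_something:
    derived_something = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # fire the rule: derive a new fact
            derived_something = True

print("'strawberry' contains 3 r" in facts)  # True
```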
The issue people have with AI is that it is still rudimentary.
If an AI uses two tokens to represent the word strawberry, [straw][berry], and you ask how many Rs are in strawberry, there are 0: neither token is R.
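If you have the tiktoken package installed, you can inspect how one real BPE vocabulary actually splits the word; the exact pieces depend on the encoding you pick, but they are chunks rather than letters.

```python
# Requires `pip install tiktoken`. Shows how one real BPE vocabulary splits
# the word; the exact pieces depend on the chosen encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
token_ids = enc.encode("strawberry")
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]

print(pieces)  # these chunks are what the model "sees", not individual letters
print(sum(p.count("r") for p in pieces))  # 3, but only because we looked inside the chunks
```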
Maybe there's a better term, but I don't know it.
As I see it, it's even worse: the less educated you are, the quicker you are to trust the AI's answers as total truth.
"Lies" and "hallucinations" are accurate terms.
The intent is there. It's not the AI's intent, it's its creators', but it's there.
I think it's worth using it to remind people that AIs aren't neutral or objective; they are manipulated by the corporations that own them.