I think this is the biggest misconception people have about LLMs. They really don't "interpret". They use math on tokenized words to produce something that looks like their training data. It's not "answering"; it's spitting out a likely continuation of similar arrangements of words in its data.
Basically, it doesn't know how to count, but it also doesn't even understand that's what you're asking it to do. (Sorry if that sounds like I was picking on you or being a pedant, but I really think if more people understood this, the AI pushers would have a much harder time!)
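For anyone wondering what "tokenized words" actually look like, here's a minimal Python sketch, assuming you have OpenAI's tiktoken package installed (the exact splits vary by model, and the sentence is just a made-up example):

```python
# Minimal sketch: what an LLM actually "sees" is a list of token IDs, not words.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

sentence = "How many words are in this sentence?"  # hypothetical example
token_ids = enc.encode(sentence)

print(token_ids)                             # integers, not words
print([enc.decode([t]) for t in token_ids])  # pieces roughly like 'How', ' many', ..., '?'
```

The model only ever manipulates those IDs, which is part of why "how many words is that?" isn't something it can answer by looking, the way a word processor can.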
We’ve had word counts in word processors for decades. And those may make mistakes, but I bet basically all of them would have gotten the original sentence's word count right.
Every time I’ve tried one, even if the initial answer is maybe kind of accurate, it quickly becomes less useful. And if the product isn’t useful, it isn’t usable.
The thing that’s broken is capitalism. Most of the shit sorta worked ok 10 years ago (after several decades of pretty shoddy function).
But the tech overlords can’t leave it alone. They must chase “innovation” ie the next big thing. Thus forced upgrades and planned obsolescence and AI and 🐂💩
Yep, I was able to repro it in BALANCED mode. (The You icon colors are different)
My original was in PRECISE mode, which gave the right answer.
So did CREATIVE mode.
Ask it to write a paragraph with X number of words or characters. It can't, even after many tries. It also couldn't write code with arrays because it had to match 35 rows with 36 columns. Ask it to fix that and it would become something like 34 vs. 35. It wasn't like this before; it's fucked up.
It took several tries, but I finally found a way to ask Copilot why it fucked up. It gave a pretty good answer – but an answer that implies that you just can't rely on an LLM to answer even basic questions.
There is no truth or falsity baked anywhere into the thing whatsoever. What it has learned is "grammar and naturalness of language". Which is, don't get me wrong, an enormous technical achievement. But the thing doesn't know anything at all, and especially with mathematical things, it really sucks.
And that's worth getting at, because math's grammar, especially, is very stripped down. It exists to sort out truth from falsity in a lot of ways, so "grammatically correct false things" are easy to pose in formal math. Exactly the task that ChatGPT is bad at.
I think you're both wrong. LLMs are actually good at math: feed them a complicated math problem and they'll spit out an answer. But a lot of LLMs are bad at *counting*, which is the thing you do to get numbers. Counting isn't math; it's a precursor to math.
I'm very much in the "it's a thing with a purpose that can accomplish many things" camp, but yeah, people whose main interest is marketing and sales oversold it, and then the tech cheerleaders got involved.
And yeah, then people (rightly) started dunking on those two groups.
It's also worth contrasting the other technical task that ChatGPT is surprisingly good at: coding assistance. There, creativity is less valuable than consistently following dominant conventions; repeatable code is more valuable than hyper-optimized code. Programmers favor naturalness.
I also tried the same sentence, and got a different answer:
To count the words in a sentence, you can use online tools like WordCounter1 or Sentence Counter2. Simply paste your sentence into the tool, and it will give you the word count. Give it a try! 😊
Does the emoji mean it knows it's getting it wrong and just likes messing with us? And if it knows that, does it know some of us will ask that very question?
Crypto mining forced the search for new cooling technologies, although the data servers that all our info runs through are still water-cooled, and often powered by electricity from fossil fuels.
I bet they thought the "?" was an actual word
But I've counted stuff like "1, 2, 3, 4, 5, 8, 6, 7, 9, 10, 12, 11, 14, 15, 13, 16, 17, 19, 18, 20" myself. It happens to the best of us.
~ Leon Bambrick
LLMs generally just suck beyond being a proof of concept, because they just say words; they don't do any logic processing at all.
I don't use LLMs for anything other than shitposting.
Gemini is decent for looking stuff up because it fact-checks pretty effectively, but that's because it's not really an LLM.
But it’s all sketchy, because it’ll just pull in some Stack Overflow answer with a -100 score and pretend that’s correct.
Can I get some of that cash?
Actually worth asking the bots if they know the programming solution to count words in a string... 🤔
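For comparison, the boring deterministic version is a couple of lines. A minimal Python sketch (the example sentence is hypothetical, not the one from the original post):

```python
def count_words(sentence: str) -> int:
    # Split on runs of whitespace; a trailing "?" stays attached to its word,
    # so punctuation is never counted as a separate word.
    return len(sentence.split())

print(count_words("How many words are in this sentence?"))  # prints 7
```

This is basically what word processors have been doing for decades, which is why they don't get it wrong.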
Literally EVERYTHING IS BROKEN today.
"A packshot for a brand new cola or soda which is not coca-cola, fanta, mountain dew or pepsi."
If it was the AI it could have been just $13
Could the image have been altered?
ALSO LOL!
Kinda AI’s tagline don’tcha think?
I recently lost my day job and any support would be appreciated
13 Billion Dollars.
And it doesn’t know how to count.
https://chatgpt.com/share/395a6e5b-cd50-4765-9dea-4f80d5dceaf7
AI already outsourcing work haha.
I don't.
Time for my Start Menu to concoct me some triple-titty anime catgirls in the style of Anish Kapoor!