If you actually read it rather than sticking your fingers in your ears shouting "AI bad", you would have seen a list of roughly 20 papers that run Tower of Hanoi tests on humans.
Every one of which shows that the results in the "AI can't reason" paper are very similar to humans doing the same tasks.
Comments
"oh yeah, well, uh, a human can't either so there!"
What the fuck is the force of this non sequitur even supposed to be?
Or are you just completing tokens vaguely in the shape of an argument?
But I can't fathom the argument here.
Maybe "AI" usage is like gambling: some people get addicted, _even though they almost always lose?_
Paper claims AI cannot reason.
Paper uses the Tower of Hanoi problem to justify its claim (the classic recursive solution is sketched after this list).
Paper's own results show AI performs at human-equivalent levels.
So either:
A) it's not a valid test
B) both humans and AI reason
C) neither humans nor AI reason
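For context, the puzzle the paper leans on has a completely mechanical, well-known recursive solution. Here's a minimal Python sketch of that classic algorithm (just an illustration of what the test asks a solver to do; the function and variable names are mine, not the paper's harness):

def hanoi(n, source, target, spare, moves):
    # Classic recursion: move n disks from source to target using spare.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear n-1 disks out of the way
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # stack the n-1 disks back on top

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # 2**3 - 1 = 7 moves
print(moves)

The point being: the optimal move sequence is trivially enumerable, so the interesting question is whether a solver (human or model) can execute it reliably as n grows, which is what both the paper and the human studies measure.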
Before we can have meaningful science and conversation about AI's capabilities, we need to define our terms.
"AI" cannot reason, at all. This is demonstrable. But most humans are too bloody stupid to solve simple puzzles. This only proves:
A. People are thick.
B. They're so stupid, they use even stupider bots _that make them dumber._ https://www.brainonllm.com/
But to your comments:
Could you define the difference between a failure to reason and being stupid?
Can stupid people reason? Can a non-reasoning machine be smart?
What test are you using to differentiate between intelligence and reasoning?
"However, most of the people engaged in such matters say that this attitude is based on three things: ignorance, stupidity, & nothing else."
10 PRINT "I AM THE BASILISK WORSHIP ME"
20 GOTO 10
or perhaps your logic is salty rather than basic