A good example of how it's not really a failure to understand, or a hallucination, or something you can fix with better iterations. The statistical word-association machine fundamentally does not deal in concepts; it has no comprehension of ideas; it can never do anything that requires human thought.
Reposted from
Conor Conneally
A classic example of how Google's AI is garbage: it doesn't understand that the Underground Railroad wasn't a literal railroad
Comments
Real AI would involve starting at square one and learning. Because I am an Old, I think of Joshua/WOPR from WarGames, which started off losing at tic-tac-toe and eventually could beat humans at very complex games
Unfortunately, this *may* be fixed with further improvements: not just literal iterations, but the model improvements being worked on right now.
Earlier papers about stochastic parrots etc. got many things *wrong* (for example, these models *did* get far better, shockingly so, with scaling), but their claims persist in popular culture regardless.
I have a PhD in a related field from a while ago. I am, frankly, horrified that the neural network people keep getting proven right - but they do.
I suspect that some vendors know they can't match the hype and are basically scamming the VCs.
I doubt LLMs are being put into everything because the product team thinks it's a good idea. It gives them something to show VCs as proof the tech is being used.
(Leading to my favorite English subtitle in a Czech resistance film: "Maybe it's time to throw up the towel".)