There's no natural law that says tech gets better, either.
The current set of LLMs has peaked. Not enough unpoisoned training data out there.
They can run faster with better algorithms and cheaper with better chips, but they can't remove the confident errors.
Unlike AI, _those_ are here to stay.
Comments
It's saying, very rigorously (and I'm approximating here), that even in the best case of "sure, let's say you have magic flying pigs and all the atoms in the universe", AI can't achieve human-level intelligence.
It's an NP-hard problem, and I think "impossible" will do as a nice colloquial way to describe that, because "very, very, very, very, very, very, very, very, very, very hard" would get tedious to type out repeatedly.
https://bsky.app/profile/homebrewandhacking.bsky.social/post/3lpgwaf7ies2q
If you feel I've misinterpreted what is written there, that might be interesting.
You telling me what you believe it might say, without having read it, is not interesting.
But fusion, for example, has been "50 years away" for about 60 years now.
I guess computer fondlers don't read around outside computers. 🤷
Same with AI. Companies push it but also disclaim any liability, because all that matters is growth (at all costs).
That really should have gotten more attention. If you make a mistake, fair enough. If you "vibe code" a mistake...
It took six weeks for my sarcasm gland to fully heal after it catastrophically overloaded and burnt out 😅
I mean, I think I see a solution to the problem...
The insurance companies are showing them how such things are _really_ done.
https://pivot-to-ai.com/2025/05/12/lloyds-offers-corporate-insurance-against-ai-chatbot-errors-now-try-to-get-a-payout/