Just wanna re-up this article from Nature that I first encountered a year ago, in light of more recent stories about LLM performance degradation.
"AI slop" is a form of pollution, but the good news is that AI itself looks to be most susceptible to poisoning by it.
https://www.nature.com/articles/s41586-024-07566-y
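For intuition, here's a toy sketch of the feedback loop the paper formalizes (my own illustration, loosely modeled on the paper's single-Gaussian analysis, not its actual experiments): each generation fits a distribution to samples drawn from the previous generation's fit, and finite-sample estimation error compounds until the distribution collapses.

```python
# Toy model-collapse sketch: recursively fit a Gaussian to samples
# drawn from the previous generation's fit. Each refit adds estimation
# noise; run long enough, the fitted variance drifts toward zero
# (log sigma is a random walk with negative drift).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0   # generation 0: the "real data" distribution
n = 20                 # training samples available per generation

for gen in range(1, 201):
    samples = rng.normal(mu, sigma, n)         # "train" on model output
    mu, sigma = samples.mean(), samples.std()  # refit, then repeat
    if gen in (1, 50, 100, 150, 200):
        print(f"gen {gen:3d}: mu={mu:+.3f} sigma={sigma:.3f}")
```

Exact numbers vary with the seed, but the tendency is one-way: the tails get clipped first, then everything else.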
"AI slop" is a form of pollution, but the good news is that AI itself looks to be most susceptible to poisoning by it.
https://www.nature.com/articles/s41586-024-07566-y
Comments
To simplify: if we accept the conventional binaries of form/content and interface/implementation, then LLMs sit on the form and interface sides of that divide. That's their marketing genius.
"We found that LRMs have limitations in exact computation," the team concluded in its paper. "They fail to use explicit algorithms."
https://futurism.com/apple-damning-paper-ai-reasoning
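By "explicit algorithms" they mean things like the recursive Tower of Hanoi procedure, one of the puzzle families the Apple paper tests. The whole point of such an algorithm is that it is exactly correct at any scale. Here's the textbook version (my sketch, not the paper's code):

```python
# Explicit algorithm for Tower of Hanoi: provably correct for any n.
# Moves n disks from peg a to peg c, using peg b as the spare.
def hanoi(n, a="A", b="B", c="C", moves=None):
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, a, c, b, moves)  # park the top n-1 disks on the spare
        moves.append((a, c))          # move the largest disk to the target
        hanoi(n - 1, b, a, c, moves)  # stack the n-1 disks back on top
    return moves

print(len(hanoi(10)))  # 2**10 - 1 == 1023 moves, exact at any depth
```

A few lines, correct for every n; the paper's finding is that model accuracy collapses as n grows, even though the procedure itself never changes.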
One example of such a case: "How many protons are there in a hydrogen nucleus?" The answer is 1, and it is correct to as many decimal places as you like.
But, you might object, there are famous examples, DeepMind's AlphaGo Zero for one, that have managed to bootstrap their way to being better at their task than any human has been, or likely ever will be, purely by self-training.