"Plausibility engine" is the pithiest description of AI.
It nails the technical microscale - penalised linear regression is Occam's razor applied to statistical correlation - and the training process as well as the chatbot outcome of an LLM. The Turing test is, after all, a plausibility test. VG.
Reposted from
Joe Slater
By implication, the AI is a plausibility engine, which is terribly dangerous. It's bad enough that AIs produce errors, but surely it's worse if AIs are judged on their ability to persuade people: we swallow enough nonsense as it is; we don't need them to get better at it! 2/2