Yes! Hallucinations and sycophancy are not unintended consequences of AI. They are features, not bugs. If we want to improve things, we need changes in design, governance, and incentives, not end-of-pipe fixes.
Reposted from Sandra Wachter:
"LLMs do not distinguish between fact & fiction. They are not designed to tell the truth. They are designed to persuade, yet they are implemented in sectors where truth & detail matter, e.g. education, science, health, the media, law, & finance." My interview @washingtonpost.com tinyurl.com/2e7253ja
The sooner we understand the limits of LLMs, the sooner we'll learn to deploy them properly.