Gemini, ChatGPT, et al. regularly get things wrong about my biography, which is something I know quite a bit about, so I'm not confident they're correct about things I know less about.
I have a famous ancestor. ChatGPT made up an entire family tree for him. No suggestion at any time that it had any uncertainty about its answers. Until they give these things the binary equivalent of humility, they're a scourge.
To be fair, I've been involved in a handful of stories that made the front pages of broadsheet papers, which taught me that even when they have good intentions, journalists get 40% or more of the facts wrong. The LLMs getting facts wrong just means they're mimicking humans effectively.
Garbage in, garbage out. I have personal knowledge of 4 big stories published by NYT or WaPo. Each of these stories contained SERIOUS errors that twisted the narrative in an important way. And because of where the data comes from, AI's *ceiling* is traditional journalism's floor.
Well, it's only as smart as the people programming the information into it. Sure, I believe this software can be useful in the medical field, but I'm still skeptical about its thinking ability and common sense.
Truthfulness isn't even a trainable criterion (at least for the present technologies), because it requires a real understanding of the topic the model is generating text about.
See also Elon Musk.
It got way more wrong than it ever got right; it was genuinely scary.
When I found out it was eating the planet to spit out this garbage, I had to walk away.