JOURNALIST REFRESHER ABOUT INTERVIEWING ARTIFICIAL INTELLIGENCE.
Hi.
When new LLM bots drop, you can't interview them like people who understand the process that went into creating them.
They may give you an answer, but they don't actually know the answer.
You can't ask "who created you?"
LLM bots: 'Uh, Dr. Rachel who?'
Just a typical day of Language Models serving up a side of 'creative' responses without a clue about their own origin story. 🤷‍♂️
What may be part of that dataset (via @oddletters.bsky.social) is fan fiction someone wrote.
If the answer to "Who created you?" is Dr. Rachel Kim, well...
Maybe an employee by that name exists, but doesn't have a doctorate or work in AI.
Maybe calling her out on social media isn't a great idea.
You can verify whether what they tell you is false or racist or cynical, and that's a story. But you can't take anything they give you as true.
I'm a long-time tech reporter with a master's in journalism.
If your newspaper ever wrote about AI recommending gasoline-flavored spaghetti, that was something I discovered. And there's a lesson in it:
The only places I posted that were Bluesky and Mastodon.
AI goofs can have legs.
Meanwhile, Rachel Kim has to, like, live in the world we make for her.
Because this isn't a one-time thing. This is a problem that's going to keep happening as Meta releases more dumb bots. And those bots will keep not being human sources.
I hate all the “who made you” and “what is your purpose” gotcha posts. They're still inadvertently legitimizing the AI as an information source.
But without calling anyone specific out, that wasn't how it was being presented.