In talks and in our online course https://thebullshitmachines.com, I worry about how the extremely high cost of training LLMs concentrates, in the hands of a very few, the power to shape our information environment.
Grok going off the rails with talk of White Genocide is a striking example.
Comments
https://www.theguardian.com/technology/2025/may/18/musks-ai-bot-grok-blames-its-holocaust-scepticism-on-programming-error
Almost like there's an underlying theme here...
So long as LLMs are black boxes under the control of billionaires, we can't let them become trusted information sources.
But I agree entirely.
This is the same playbook.
(Just a reminder as we try to figure this out: you can't ask an LLM why it did something and expect a reliable answer. So the posts where Grok says Musk programmed this have to be considered unreliable for now)
But these things aren't self-aware: they're just generating statistically plausible text based on token weights and chat history.
https://en.wikipedia.org/wiki/Albinism_in_humans