Here’s the thing: LLMs are semantic mirrors, reflecting the worldview (faith, beliefs, assumptions, biases) we bring to the conversation. This can create an echo chamber, a known problem. But it’s not just about the biases we bring to a chat; worldview issues and biases are also built into the models themselves, and they are inescapable.
LLMs must be trained. Trained on the words we wrote—with all our thoughts, ideas, beliefs, biases, truths, and fictions. As a result, LLMs are inherently constrained to the worldviews already present in the training data.
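A toy sketch can make that constraint concrete. The little bigram model below (the `corpus`, `transitions`, and `generate` names are illustrative, not anything from a real system) can only ever emit words, and associations between words, that appear in its training text. Real LLMs are neural networks trained on vastly larger corpora, but the limitation is the same in kind: nothing comes out that wasn’t, in some form, put in.

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# training corpus, then sample continuations from those counts alone.
corpus = "the model repeats what the corpus says about the world".split()

# Build next-word counts from the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=6):
    """Sample a continuation; every word must have appeared in the corpus."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:  # nothing ever followed this word in training
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
# Possible output: "the model repeats what the corpus says"
# It cannot produce any word, or any pairing of words, absent from its training data.
```

The sketch exaggerates for clarity; a real LLM generalizes far beyond literal word pairs. But the worldviews it can express are still bounded by the text it was trained on.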