richtatum.bsky.social
Technical SEO, AI/LLM automator, prompt whisperer, editor, media guy, photographer, factotum. Noticer of overlooked details. I ♡ story, dataviz, analytics, writing, editing, podcasting. → Available to hire!
70 posts 855 followers 482 following
comment in response to post
This transparency and intentionality would serve everyone, regardless of worldview, by enabling informed choices about the AI systems we create and use, and ensuring they align with our shared values and ethical standards. <end transmission \>
comment in response to post
The work ahead isn't just about recognizing these embedded worldviews, but about actively incorporating ethical principles from faith traditions to guide the development of these powerful tools.
comment in response to post
—whether derived from scientific materialism, religious traditions, or philosophical frameworks—are already deeply woven into these systems.
comment in response to post
Beyond asking what role faith and ethics should play in AI development (an important question, to be sure), we need to acknowledge that foundational assumptions about reality, meaning, and ethics—
comment in response to post
This reality demands a new level of honesty in commercial AI discourse.
comment in response to post
This could help users engage with AI that resonates more closely with their values while still benefiting from the technology. By honoring various faith perspectives, we can ensure that AI serves as a tool for inclusion rather than division.
comment in response to post
I suspect that future development efforts will focus on creating bespoke, niche generative models aligned with local community standards and various worldviews and faith groups.
comment in response to post
Without this transparency, how can we fully understand or responsibly engage with these increasingly influential tools?
comment in response to post
This becomes even more critical given the current lack of transparency in AI development. We have no “ingredients list” for these models—no clear view into what worldviews, biases, or ethical frameworks have shaped their training data or alignment systems.
comment in response to post
From the curation of training data to the design of alignment systems to our daily interactions—our fundamental beliefs about reality and ethics are inevitably present, whether we acknowledge them or not.
comment in response to post
Faith traditions provide rich ethical principles—compassion, justice, respect for human dignity—that can guide AI development. By intentionally integrating these values, we can create AI systems that not only reflect diverse worldviews but also aspire to our highest shared ideals.
comment in response to post
→ So, what does faith have to offer when considering the ethical dimensions of AI?
comment in response to post
These worldviews—which may conflict, overlap, or remain hidden—are inescapably woven into the fabric of AI, whether we’re approaching these tools as atheists, agnostics, or adherents of any faith tradition.
comment in response to post
This dynamic plays out at every level of AI systems: our fundamental beliefs about reality are embedded in the training data (whether we contributed to it or not), encoded into the guardrails (whether we agree with them or not), and present in our every interaction as end users.
comment in response to post
It’s important to be aware of our cognitive biases and be intentional about the worldview we align with. But, admittedly, this is very difficult to do.
comment in response to post
Unfortunately, the only element we can truly know about our AI interactions is what biases, beliefs, and assumptions we ourselves bring to the conversation. And even there, most of us remain largely unaware of our own hidden brains.
comment in response to post
3️⃣ Third: We can’t escape our own biases and worldviews being involved.
comment in response to post
But consumers have a right to know what they’re ingesting. It should be the same for intellectual consumption. There needs to be a useful balance between protecting IP and ensuring transparency.
comment in response to post
I recognize that proprietary intellectual property has commercial value. Companies guard their secret recipes—Coke doesn’t reveal its formula, after all.
comment in response to post
Individual users like you or me might disagree with some of these rules and permissions—if we could know them. But they, too, are opaque and unknowable to us.
comment in response to post
Think about it: every rule or law embeds an ethical or moral view. Rules reflect worldviews. Thus, the guardrails attempting to constrain AI and LLMs reflect the values or ethics of their builders.
comment in response to post
Every AI response is shaped by built-in **guardrails**—the rules and algorithms influencing, modifying, or filtering every output. Unless the model replies with an apologetic refusal to answer, these guardrails are usually invisible and unknowable to us.
comment in response to post
2️⃣ Second: The outputs are generally constrained by ethical, moral, and legal frameworks—but whose?
comment in response to post
(To be fair, some organizations are taking steps toward transparency by publishing model cards that outline aspects of the training data and limitations. That’s a step in the right direction.)
comment in response to post
While this chaotic breadth is essential for LLMs to work, the actual contents of the training corpus are completely opaque to users and beyond our influence. The worldviews are already there, but we can’t know anything about them in advance or influence which ones are present.
comment in response to post
There are also ideas present that would make the saintly cleric, the neighborhood Wiccan, the ascetic monk, and the avowed atheist cheer.
comment in response to post
Here’s the reality: these training datasets necessarily include ideas and language that would be deeply troubling to any given user, regardless of their individual faith or morals.
comment in response to post
LLMs are remarkable tools, but they can only “think” and “reason” within the paradigms present in their training data. And it’s a race to the average.
comment in response to post
1️⃣ The models are trained on all the worldviews! (Not really... but sorta.) LLMs must be trained. Trained on the words we wrote—with all our thoughts, ideas, beliefs, biases, truths, and fictions. As a result, LLMs are inherently constrained to the worldviews already present in the training data.
comment in response to post
Here’s the thing: LLMs are semantic mirrors, reflecting the worldview—faith, beliefs, assumptions, biases—we bring to the conversations. This can create an echo chamber—a known problem. But it’s not just about the biases we bring to a chat; worldview issues and biases are built-in and inescapable.
comment in response to post
Thanks, @yordan-dimitrov.com!
comment in response to post
Thanks, @diwanow.com, I do really need to up my LinkedIn game.
comment in response to post
Today marks day 63 of unemployment. (Or 9 weeks. Or 45 weekdays. Or 0.17 years... But, really, who's counting? 🤓) Some day I’ll write up how I’ve been using LLM tools in my job search, promise! »∵«
comment in response to post
Meanwhile, 🙏🏼 I really appreciate all the support, DMs, connections, well wishes, and prayers so far. The right door will open, I’m sure!
comment in response to post
And now for the shameless plug: ↓ If you're looking for an experienced SEO leader who’s also been diving deep into AI/LLM integration, my resume and background are still available at the Notion site below: richtatum.notion.site/rich-tatum-r...
comment in response to post
The market is super competitive right now (especially with Forbes/CNN SEOs likely joining the hunt soon 🔥!) But I remain optimistic!
comment in response to post
I’ve been really lucky to have gotten several fantastic interviews with some great companies so far. To date I’ve had nine interviews, been rejected 23 times, have six applications getting colder by the minute, and am currently interviewing with three companies.
comment in response to post
The last time I was looking for work in 2021, I was chatting with Bill about it in DMs, and he was always quick to give encouragement!
comment in response to post
‪Bonus: she got selfies with King Ezekiel of #TheWalkingDead fame (Khary Payton)‬
comment in response to post
I would have liked to have met Bill face to face. I thought of him when my son and I drove through Carlsbad last year.
comment in response to post
Sure did!
comment in response to post
See @henshaw.social’s advice: bsky.app/profile/hens...
comment in response to post
I should put you in touch with my next employer — whoever that may be!
comment in response to post
Thanks for adding me, I’m honored! (But now I have to live up to the reputation, damn you!)
comment in response to post
🫡 I may disappoint, but Corporal Nerdery reporting as ordered!