tomcostello.bsky.social
research psychologist. beliefs, AI, computational social science. prof at american university.
82 posts 3,438 followers 195 following

SPSP was lovely; I really got a lot out of seeing so many friends and colleagues. Let’s stay in touch!

Whova is a scourge

New research from Costello, Pennycook, and Rand. Can conversations with AI help reduce belief in conspiracy theories? Quite possibly. What’s the mechanism? Evidence production. Let me explain my little-t theory about how this might work.

Heading to SPSP? The secret is that conferences aren't about the talks; they're about connecting with people. And you might enjoy it more than you think (just ditch the grad school crew and work in pairs instead). My guide to academic networking: open.substack.com/pub/michaeli...

Changing conspiracy theory beliefs is very hard, but a replicated finding shows that a short chat with GPT-4 changes people’s belief in conspiracy theories for the long term. Why? It isn’t rhetorical tricks; it's that the AI provides relevant facts and evidence tailored to each person's specific beliefs.

very cool stuff here from Tom + colleagues, following up their 2024 debunking paper ->

I am among the over 700 political scientists who signed this statement "express[ing] our urgent concern about threats to the basic design of American government and democracy" under the current presidential administration:

Last year, we published a paper showing that AI models can "debunk" conspiracy theories via personalized conversations. That paper raised a major question: WHY are the human<>AI convos so effective? In a new working paper, we have some answers. TLDR: facts osf.io/preprints/ps...

Really excited about this WP we just posted - folks w/ questions or ideas about why we found GPT-4 to be effective at debunking conspiracy theories should check out Tom's thread + the paper. This fig summarizes it for me: it's the facts and evidence that are doing the persuading, not any AI mumbo jumbo

This is the most relevant article to the NIH and research cuts I’ve seen. Imagine if this were today, how many people would be saying “Why are we studying Gila monsters and their impact on diabetes? That’s wasted money!” globalnews.ca/news/9793403...

New open-access paper in Annual Review of Psychology with @mjbsp.bsky.social: “Ideology: Psychological Similarities and Differences Across the Ideological Spectrum Reexamined” www.annualreviews.org/content/jour...

Hi friends, I'll be starting as an Assistant Professor in Psychology at American University this summer! I'm pretty thrilled to be moving back to the DC/Baltimore area, to connect back with so many old friends & colleagues (as well as make new ones, ofc)

Today is the first day @lizsuhay.bsky.social & @mjbsp.bsky.social are co-EiC 🥳 Read the vision statement! 💲 onlinelibrary.wiley.com/doi/10.1111/... 🆓 osf.io/nybxm Submit your papers! onlinelibrary.wiley.com/page/journal...

Looking through my backlog of emails, I found a Christmas morning surprise: this paper has made it to print! I hope it can help streamline some hierarchies. Improving hierarchical models of individual differences: An extension of Goldberg’s bass-ackward method. doi.org/10.1037/met0...

My paper with @mikearcaro.bsky.social exploring the organization of pulvino-cortical connections in newborn human infants is now out in @currentbiology.bsky.social ! www.cell.com/current-biol...

Charlie Warzel and Mike Caulfield explain that the Internet functions as a "justification machine" — providing ample "evidence" for people to use (with the help of political pundits and online influencers) to create and maintain their preferred realities: www.theatlantic.com/technology/a...

Great blog post (by a 15-author team!) on their release of ModernBERT, the continuing relevance of encoder-only models, and how they relate to, say, GPT-4 or Llama. Accessible enough that I might use it as an undergrad reading.