We meta-analyzed 67 papers, totaling 194,438 participants and 303 effect sizes across 40 countries. In sum, these participants rated the accuracy of 2,167 unique news items.
For 298 of the 303 effect sizes (the meta-analytic observations), participants, on average, rated true news as more accurate than false news, and considerably so.
But participants still made classification errors. Ideally, participants would rate all true news as maximally accurate (e.g., 6, "extremely likely" to be true, on a 6-point scale) and all false news as minimally accurate (e.g., 1, "extremely unlikely").
For true news, people's average ratings fall farther from the ideal than for false news. We call this distance between the actual and the ideal rating the "error", and the difference between the true-news and false-news errors the "response bias".
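The error and response-bias definitions above can be sketched in a few lines of code. This is a minimal illustration, assuming a 6-point accuracy scale; the ratings below are made up for the example, not data from the meta-analysis.

```python
def error(ratings, ideal):
    """Mean absolute distance between actual ratings and the ideal rating."""
    return sum(abs(r - ideal) for r in ratings) / len(ratings)

# Hypothetical ratings for a handful of true and false headlines (1-6 scale).
true_news_ratings = [4, 5, 3, 6, 4]    # ideal rating would be 6 ("extremely likely")
false_news_ratings = [2, 1, 2, 1, 3]   # ideal rating would be 1 ("extremely unlikely")

true_error = error(true_news_ratings, ideal=6)    # skepticism toward true news
false_error = error(false_news_ratings, ideal=1)  # gullibility toward false news

# A positive response bias means people are more skeptical of true news
# than they are gullible toward false news.
response_bias = true_error - false_error
```

With these illustrative numbers, the true-news error (1.6) exceeds the false-news error (0.8), yielding a positive response bias of 0.8, the pattern found in most of the effect sizes.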
In 203 of the 303 cases, participants displayed a positive response bias: they were more skeptical of true news than they were gullible towards false news. However, the average effect is relatively small.
We also tested several moderators, among them the political concordance of the news (e.g., pro-Republican news rated by Republicans is coded as concordant).
We find that participants were equally able to discern concordant and discordant news (left side), but they were more skeptical of discordant headlines (right side).