But participants still made some classification errors. Ideally, participants would rate all true news as most true (e.g. (6) “extremely likely” to be true, on a 6-point scale) and all false news as least true (e.g. (1) “extremely unlikely”).
For true news, people's ratings are farther from the ideal than for false news. We call the distance between the actual and the ideal rating the "error", and the difference between the error for true news and the error for false news the "response bias".
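The error and response-bias computation described above can be sketched as follows. This is a minimal illustration, assuming the 6-point scale where 6 ("extremely likely") is ideal for true news and 1 ("extremely unlikely") is ideal for false news; the ratings are made-up values, not the study's data.

```python
def error(ratings, ideal):
    """Mean absolute distance between the actual and the ideal rating."""
    return sum(abs(r - ideal) for r in ratings) / len(ratings)

# Hypothetical ratings on a 6-point truthfulness scale.
true_news_ratings = [4, 5, 3, 6]   # ideal would be all 6s
false_news_ratings = [1, 2, 1, 2]  # ideal would be all 1s

true_error = error(true_news_ratings, ideal=6)    # skepticism toward true news
false_error = error(false_news_ratings, ideal=1)  # gullibility toward false news

# Positive response bias: more skeptical of true news
# than gullible toward false news.
response_bias = true_error - false_error
print(response_bias)
```

Here the error for true news (1.5) exceeds the error for false news (0.5), giving a positive response bias of 1.0.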
In 203 of the 303 cases, participants displayed a positive response bias: they were more skeptical of true news than they were gullible towards false news. However, the average effect was relatively small.
We also tested several moderators, including the political concordance of the news (e.g. pro-Republican news rated by Republicans is coded as concordant).
We find that participants were equally able to discern concordant and discordant news (left side), but they were more skeptical of discordant headlines (right side).
This suggests that interventions aimed at reducing partisan motivated reasoning, or at improving political reasoning in general, should focus more on increasing openness to opposing viewpoints than on increasing skepticism towards concordant viewpoints.
In sum, our findings lend support to crowdsourced fact-checking initiatives, and suggest that, to improve discernment, there may be more room to increase the acceptance of true news than to reduce the acceptance of fact-checked false news.