janpfa.bsky.social
phd student https://janpfander.github.io/
33 posts
103 followers
103 following
Regular Contributor
Active Commenter
comment in response to
post
That's a good point! I think it's plausible that there might be some kind of Hawthorne effect.
But polls show that people don't trust (mainstream) news a whole lot, nor do they seem to consume much of it.
So I think at least some of the skepticism we observe is genuine mistrust/ignorance.
comment in response to
post
I agree, e.g. for deep fakes and misinfo spread by trusted sources (say, government officials 🥲), the vigilance mechanisms people rely on might fail
comment in response to
post
The way I see it is that people use their background knowledge about the world to evaluate how convincing news items are
comment in response to
post
The short (but perhaps not satisfying?) answer is that we took Cohen's d because it's the most common measure for standardized mean differences of continuous variables.
In the appendix, for example, we confirm the results using odds ratios (OR) for a subset of data with binary/collapsed responses.
Does this help?
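For anyone who wants the computation spelled out, here is a minimal sketch of a standard Cohen's d calculation on made-up accuracy ratings (an illustration of the formula only, not the paper's code or data):

    import numpy as np

    def cohens_d(ratings_true, ratings_false):
        # Standardized mean difference between two sets of continuous ratings,
        # using the pooled standard deviation.
        m1, m2 = np.mean(ratings_true), np.mean(ratings_false)
        s1, s2 = np.std(ratings_true, ddof=1), np.std(ratings_false, ddof=1)
        n1, n2 = len(ratings_true), len(ratings_false)
        sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        return (m1 - m2) / sp

    # Hypothetical 6-point accuracy ratings (not real data)
    true_news = [5, 6, 4, 5, 6, 3]
    false_news = [2, 3, 1, 4, 2, 2]
    print(cohens_d(true_news, false_news))  # positive d = true news rated as more accurate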
comment in response to
post
Is your concern regarding the continuous interpretation of the Likert-type accuracy scales?
comment in response to
post
But I agree, if the goal is to generalize to *all of misinformation*, there is probably some selection bias lurking, whether on the part of the researchers or of the fact-checking organizations from which the researchers took their false news stimuli (in most cases).
comment in response to
post
Hard to tell, but researchers had no incentive to do that.
They often tested interventions, so in order to be able to detect a treatment effect, it was in their interest not to have ceiling effects in the control conditions we looked at.
comment in response to
post
Thanks to my amazing co-author @sachaltay.bsky.social !
comment in response to
post
Our results stress the need for researchers to think carefully about what population of news they study.
Automated news samples and larger, more diverse news pools are needed to generalize our findings from “fact-checked false news” to “misinformation”.
comment in response to
post
Important limitation: Most studies used FACT-CHECKED false news. Anecdotally, three US studies included in the meta-analysis that automated their news selection found (i) a positive but lower discernment than our meta-analytic average, and (ii) a negative skepticism (i.e. a gullibility) bias.
comment in response to
post
In sum, our findings lend support to crowdsourced fact-checking initiatives, and suggest that, to improve discernment, there may be more room to increase the acceptance of true news than to reduce the acceptance of fact-checked false news.
comment in response to
post
This suggests that interventions aimed at reducing partisan motivated reasoning, or at improving political reasoning in general, should focus more on increasing openness to opposing viewpoints than on increasing skepticism towards concordant viewpoints.
comment in response to
post
We find that participants were equally able to discern concordant and discordant news (left side), but they were more skeptical of discordant headlines (right side).
comment in response to
post
We also tested several moderators, among which the political concordance of news (e.g. pro-Republican news rated by Republicans is coded as concordant).
comment in response to
post
In 203 of the 303 cases, participants displayed a positive response bias: they were more skeptical of true news than they were gullible towards false news. However, the average effect is relatively small.
comment in response to
post
For true news, people are farther from the ideal rating than for false news. We call this distance from actual to ideal rating the “error”, and the difference between errors the “response bias”.
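To make the definitions concrete, a made-up worked example (illustrative numbers, not results from the meta-analysis), on a 6-point scale where the ideal rating is 6 for true news and 1 for false news:

    # Illustrative numbers only, not data from the meta-analysis
    mean_rating_true = 4.0    # average accuracy rating given to true news
    mean_rating_false = 2.5   # average accuracy rating given to false news

    error_true = 6 - mean_rating_true     # 2.0: distance from the ideal rating for true news
    error_false = mean_rating_false - 1   # 1.5: distance from the ideal rating for false news

    response_bias = error_true - error_false
    print(response_bias)  # 0.5 > 0: more skepticism towards true news than gullibility towards false news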
comment in response to
post
But participants still made some classification errors. Ideally, participants would rate all true news as most true (e.g. (6) “extremely likely” to be true, on a 6-point scale) and all false news as least true (e.g. (1) “extremely unlikely”).
comment in response to
post
For 298 of the 303 effect sizes (the meta-analytic observations), participants, on average, rated true news as more accurate than false news, and considerably so.
comment in response to
post
We meta-analyzed 67 papers, totaling 194'438 participants and 303 effect sizes across 40 countries. In sum, these participants rated the accuracy of 2'167 unique news items.
comment in response to
post
Numerous experiments have asked participants to rate the accuracy of true and false news (without telling them which is which). We meta-analyzed the control groups of these experiments, which typically look like this:
comment in response to
post
You're right, independence of sources is a crucial assumption in our model, and often it might not be warranted. But a real-world case where it (ideally) holds is science: even when scientists don't independently come to the same conclusion, they at least independently verify each other.
comment in response to
post
But scientists agree on things such as the distance between the solar system and the center of the galaxy, or the atomic structure of DNA. This represents an incredible degree of convergence, and thus a reason to believe that scientists are right, and that they are competent.
11/
comment in response to
post
A context where these inferences might be particularly relevant is science. Much of science is counterintuitive, and most people do not have the background knowledge to evaluate most scientific evidence.
10/
comment in response to
post
Why do we think this is exciting? Evaluating the extent to which others agree on something enables us to judge a piece of information and its sources in the absence of any related background knowledge.
9/
comment in response to
post
In simulations, we show that these inferences are quite rational, given that the individuals are not systematically biased.
Participants take this into account: in experimental conditions with a systematic bias, they infer less accuracy and competence from convergence.
8/
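For intuition, a toy sketch of the kind of simulation logic involved (my own simplified version, not the paper's actual code): when estimators are unbiased, groups whose estimates converge more also tend to be closer to the truth; a shared systematic bias breaks that link.

    import numpy as np

    rng = np.random.default_rng(0)
    truth = 100.0

    def simulate_group(noise_sd, bias=0.0, n_players=5):
        # One group of players independently estimating the same quantity
        estimates = truth + bias + rng.normal(0, noise_sd, n_players)
        convergence = -np.std(estimates)              # higher = estimates closer together
        accuracy = -abs(np.mean(estimates) - truth)   # higher = group mean closer to the truth
        return convergence, accuracy

    # Unbiased groups: convergence clearly predicts accuracy
    conv, acc = zip(*[simulate_group(rng.uniform(1, 20)) for _ in range(1000)])
    print(np.corrcoef(conv, acc)[0, 1])      # clearly positive

    # Groups sharing a systematic bias: convergence no longer signals accuracy
    conv_b, acc_b = zip(*[simulate_group(rng.uniform(1, 20), bias=30.0) for _ in range(1000)])
    print(np.corrcoef(conv_b, acc_b)[0, 1])  # near zero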
comment in response to
post
In both scenarios, participants rated more convergent players as more accurate and more competent, on average.
7/
comment in response to
post
In other experiments, games are about choosing one of a few options. Again, some players were more convergent (i.e. the option they voted for was more consensual) than other players.
6/
comment in response to
post
In some experiments, the games are about numeric estimates. Some groups of players had more convergent estimates (i.e. closer to each other) than other groups.
5/
comment in response to
post
In our paper, we show that people tend to draw this kind of inference, and that it is justified under many circumstances.
We ask participants to evaluate the accuracy and competence of players in fictional games. They don't know anything about the game, nor about the players.
4/
comment in response to
post
But what if other scholars had arrived at very similar measurements, independently of Eratosthenes? Wouldn't that make you consider that the estimates might be correct, and that Eratosthenes and his fellow scholars must be quite bright?
3/
comment in response to
post
Imagine that you live in ancient Greece, and a fellow called Eratosthenes claims the circumference of the earth is 40,000 kilometers.
You'd probably (mis)take him for a pretentious loon.
2/