Fact-checking is everywhere, but does it have any impact? This was one of my biggest questions coming out of an American Press Institute (API) conference I attended in Washington this past January.
The growth of fact-checking is indisputable: the number of fact-checking stories from groups like FactCheck.org, the Washington Post's Fact Checker (home of the "Pinocchios"), and PolitiFact increased by more than 50% between 2004 and 2008, and by more than 300% from 2008 to 2012.
But are those stories having any effect? Three new studies just came out, overseen by API and supported by the Hewlett Foundation along with our colleagues at the Democracy Fund and the Rita Allen Foundation. One of these, Estimating Fact-checking's Effects, from Brendan Nyhan of Dartmouth College and Jason Reifler of the University of Exeter, gets precisely at this question of impact. Here are some of the study's most interesting findings:
First, fact-checking's "favorability" ratings differ by political party affiliation. While the views of "low-knowledge" respondents don't differ much depending on whether they are Republicans or Democrats (29% vs. 36% view fact-checking favorably, respectively), party makes a real difference for people with high levels of political knowledge: "just 34% of Republicans … have very favorable views of fact-checkers compared with 59% of high-knowledge Democrats." Nyhan and Reifler hypothesize that this may be due to Republicans' and conservatives' tendency to "hold less favorable opinions of the news media" overall, coupled with a greater likelihood of believing that there is a liberal media bias.
Fact Checking Favorability (Source: Estimating Fact-checking’s Effects)
Then they looked at impact. Here there were (at least) two big questions:
Belief Accuracy: One risk inherent in fact-checking stories is that "exposing people to false claims in an effort to debunk them" can lead to a situation where readers recall the misinformation more clearly than they remember the intended correction. Here the question is: Does exposure to fact-checking content increase "belief accuracy"? Nyhan and Reifler found, through post-exposure surveys, that "the rate of correct answers increased from 12% to 19% among people with low political knowledge," and that the effect was even larger among people with "high political knowledge" (from 22% to 32%).
Motivated Reasoning: Many experimental studies in psychology and political science have found that new factual information doesn't necessarily change erroneous, pre-existing beliefs. It can even backfire for some groups (depending on the issue, context, and messenger, counterfactual information can cause partisans to entrench more deeply in their pre-existing beliefs). Nyhan and Reifler therefore expected partisans to be more likely to learn and recall "belief-consistent" facts.
- True to their hypothesis, the researchers found that corrections of inaccurate statements are more persuasive when the reader and the politician belong to the same political party. "Readers tend to think the opposing party politician's statement was false, even before they read the correction." This suggests that fact-checking may be particularly important during primary contests (though fact-checking is currently more common during general elections).
- Contrary to their expectations, they found that “correct answers increased somewhat more for belief-inconsistent facts (from 9% to 20%) than for belief-consistent facts (from 14% to 22%).”
- “Republican knowledge of belief-inconsistent facts increased by five percentage points and by ten percentage points for belief-consistent ones. The pattern for Democrats is the opposite, however — knowledge increased by 15 percentage points for belief-inconsistent facts compared with eight percentage points for belief-consistent facts.”
It’s worth noting that the public at large is not the only potential audience for fact-checking, nor necessarily even the most important one. In a prior article, Nyhan also explored the impact of increased fact-checking on politicians’ behaviors—that is, testing whether there’s a deterrent effect.
In a 2014 study of 1,200 legislators in nine states, Nyhan and Reifler sent candidates and policymakers reminders about "the risks to their reputation and electoral security if they are caught making questionable statements." The result? A 55% reduction in the likelihood of receiving a negative PolitiFact rating, or of having the accuracy of their statements questioned publicly, compared with legislators who were not sent reminders. That said, state legislators are very seldom fact-checked anyway, and it's not clear whether the same effect would hold for Congress.
Nevertheless, fact-checking seems to me a promising development in the journalistic field. For it to succeed, newsrooms (and, when relevant, funders) still need to wrestle with questions like when and what to fact-check in order to maintain both relevance and bipartisan credibility, how to scale the reach of existing efforts, and whether and how to expand beyond fact-checking politicians and pundits to other purveyors of misinformation. But those are topics for another day. For now, I’ll just say I’m grateful to API and all the researchers we’re supporting—and excited for the next round of research releases!