According to many newsrooms, fact-checking articles are among the most requested and most read content they produce. In recent years, groups like FactCheck.org, FlackCheck, the Washington Post’s Fact Checker (home of the “Pinocchios”), PolitiFact, and PunditFact have emerged to address growing questions of truth in politics.
Fact-checking is done with at least three possible goals in mind, each aimed at a distinct audience:
- For the broader public, to improve understanding of candidates and issues by countering “misinformation.”
- For other journalists, to help shift the culture away from he said/she said coverage towards greater fact-checking.
- For politicians, political pundits, or (more recently) news networks, to hold them accountable and deter erroneous statements.
Last year, the Madison Initiative helped to support a project by the American Press Institute (API) to research practices in this space. In December, I attended an API conference on the topic in Arlington, Virginia. More than 50 people turned out: academics from journalism and communications schools across the country, journalists from a variety of outlets, representatives of political ad firms and consultancies, and of course a small handful of funders like me. The event covered a range of topics, including recent research, journalists’ experiences from the 2014 midterms, and new tools and formats under development. Several highlights emerged from these discussions.
Perhaps the most fascinating research looked at which erroneous but politically consequential beliefs are most commonly held by the public. To take one example, a plurality of Americans believes that China holds more than half of the US debt. How much does it actually hold? Eight percent.
I was also interested to learn that in the 2014 midterms, ads from outside groups (e.g., Super PACs) were more prone to misinformation than those from the candidates themselves, presumably due to a lack of accountability. Not surprisingly, closer races appear to inspire more negative (and more distorted) ads. And fact-checks are increasingly being used as “weapons” by opponents on the campaign trail.
It was likewise impressive to learn that the Washington Post’s Truth Teller has built structured data into its fact-checking, in an effort to address the fact that the same mistruths are often repeated. Its prototype algorithm tests statements against a database of thousands of prior fact-checks, helping reduce the burden on human fact-checkers. Even though this is a relatively young tool, it appears to be surprisingly accurate, identifying erroneous claims 75-80% of the time.
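To make the idea concrete, here is a minimal sketch of what matching a new statement against a database of prior checks might look like. Everything in it is an illustrative assumption on my part: the stored claims, verdicts, similarity measure, and threshold are hypothetical, not the Truth Teller’s actual code or data.

```python
# Hypothetical sketch: match a new statement against prior fact-checks
# using plain string similarity. Not the Truth Teller's real algorithm.
from difflib import SequenceMatcher

# Illustrative store of previously checked claims and their verdicts.
PRIOR_CHECKS = [
    ("China holds more than half of the US debt",
     "False: China holds roughly 8 percent"),
    ("The new bill eliminates funding for veterans' hospitals",
     "Mostly false"),
]

def match_claim(statement, threshold=0.6):
    """Return the best-matching prior check, or None if nothing clears the threshold."""
    best_score, best_match = 0.0, None
    for claim, verdict in PRIOR_CHECKS:
        score = SequenceMatcher(None, statement.lower(), claim.lower()).ratio()
        if score > best_score:
            best_score, best_match = score, (claim, verdict)
    return best_match if best_score >= threshold else None

# Prints the stored claim and verdict if the reworded statement is similar enough.
print(match_claim("China now holds more than half of all US debt"))
```

A production system would of course need far more robust matching than a raw similarity ratio (paraphrase detection, entity resolution, speech-to-text for broadcast), which is part of why a tool like this reduces the burden on human fact-checkers rather than replacing them.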
The day concluded with breakout groups brainstorming opportunities for research, distribution, and new tech tools. Overall, the discussions raised a few questions for me:
How can the fact-checking industry possibly keep up? One North Carolina reporter noted that, after catching a particularly egregious lie, the perpetrator (evidently a campaign manager) joked that “if you didn’t give us a red light, we wouldn’t have been doing our job.” Even assuming that most campaigns are honest, reporters are still clearly outgunned by the campaign industry.
How can fact-checkers manage across distribution channels? Current fact-checking focuses primarily on TV ads, TV shows, and online news content, and is not equipped to cover print, mailers, or radio as robustly. National TV-based campaigns are clearly the easiest to check, but with the rise of TiVo and micro-targeting, future political communications are unlikely to stick to these channels.
Most importantly, what difference does it make? Behavioral scientists point to the difficulty of correcting misperceptions once people have latched onto them; in some cases, attempts at correction appear to backfire, cementing the falsehoods more deeply. When it comes to elections, how often do people actually change their vote based on new information about candidates’ truths or mistruths, rather than just voting the party line? And how often are fact-checkers simply “preaching to the choir,” providing ammunition that further inflames party loyalists about how dishonest the other side is, rather than creating real learning opportunities across party lines?
On the one hand, I would be hard-pressed to just “give up on the importance of facts.” On the other, it remains unclear whether and when facts actually do matter in people’s decision-making, and what might make them matter more. Scholars and fact-checkers alike are now trying to find clearer answers to those questions.