The future of truth: Can philanthropy help mitigate misinformation?
The introduction of terms like “fake news” and “post-truth” in recent years doesn’t bode well for democracy, which depends on citizens either sharing a common understanding of facts or willingly deferring to institutions to parse truth from fiction.
American institutions – government, media and elites – used to reflect a broad consensus around policy-relevant data and information, back when society was cohesive enough for different viewpoints to coexist regardless of party affiliation.
But trust in American institutions is nearing all-time lows. The decline of faith in the fourth estate, the press, has been particularly dramatic and threatens to corrode a key prerequisite of popular government — namely, reasonably informed citizens and officials. In our current era of political polarization and hyper-partisanship, Americans not only hold different opinions – they don’t share a common understanding of the facts.
Can U.S. democracy get beyond “red truth vs. blue truth,” and closer to a shared set of facts? And can philanthropy do anything to help?
Back in 2011, a study from the Pew Research Center found that from 1985 to 2011, the share of respondents who believe “in general, news organizations get the facts straight” fell from 55% to 25%, while the share who agree that stories are often inaccurate rose from 34% to 66%. Yet, at the same time, the survey found that the public trusted information from news organizations more than it trusted other institutions – including state government, federal agencies and business corporations.
By 2015, another Pew study found that only “15% of those who get news from news organizations online find them very accurate.” Democrats are more likely than others to have “a lot” of trust in information from national news organizations; in fact, 27% held that trust, compared with 15% of Republicans and 13% of independents. And while trust is higher among older generations, only 10% of 18- to 29-year-olds and 16% of 30- to 49-year-olds indicate “a lot” of trust in information from national news organizations. The 2016 election cycle did not help.
Just prior to the elections, Gallup found that “Republicans who say they have trust in the media has plummeted to 14% from 32% a year ago. This is easily the lowest confidence among Republicans in 20 years.” By 2017, Pew Research Center found 34% of Democrats said they considered information from national news organizations “very trustworthy” — and a mere 11% of Republicans said that.
Today, traditional mainstream media is but one messenger amidst an increasingly crowded field of voices. The core problem, as we are defining it at the Hewlett Foundation, is a growing lack of belief in objective facts and the idea of truth upon which to base democratic discourse. The rise of new media platforms has exacerbated the situation.
Lack of trust in the media environment has been discussed as a problem of “fake news,” which is technically information that is completely and intentionally fabricated, and can be promulgated by anyone. But the current “information problem” is multi-faceted and reflects a range of concerns beyond blatant “fake news,” including:
Disinformation: Intentionally false or inaccurate information that is spread deliberately, often by state actors, with the aim of undermining public confidence.
Misinformation: Inaccurate information that is promulgated unintentionally. It differs from propaganda in that it always refers to something that is not true, and from disinformation in that it is “intention neutral.”
Propaganda: Information, generally promulgated by state officials, that may or may not be true, but which presents the opposing point of view in an unfavorable light in order to rally public support.
To me, as a program officer considering where the Hewlett Foundation’s charitable funds can have the most impact, “fake news” is, practically speaking, of less concern. Few social media users prefer to read and share completely fictitious information with their networks. And one can assume that the platforms themselves are financially incentivized to solve the problem of wholly fictitious news.
But research has shown that most citizens are psychologically predisposed to want to read news that is biased to reaffirm their preexisting beliefs and tribal identities, creating adverse incentives for commercial technology platforms when dealing with the larger information problem of disinformation, misinformation and propaganda.
“Fake news” per se is less of a concern than the ideological distortion of real news, whether homegrown or promoted by a foreign power. Regulating “fake news” might be difficult, but it’s permissible: We can punish knowingly publishing false information. Twisted interpretations of real news are another matter, because these blend fact and opinion in ways that are impossible to regulate in a nation committed to free speech norms.
The causes of the information problem are many, complex and not clear cut. For example, growing polarization has separated Americans into distinct ideological camps, from which they see opposing parties as a threat. These ideological divisions have been deepened internationally by globalization and the economic challenges that followed, and domestically by geographic sorting among U.S. citizens and by the close competition for control between Democratic and Republican policymakers. These broad factors have in turn coincided with four big changes particularly germane to the information problem:
International influence: The role of international political actors, most notably Russia and China, has evolved over the last decade. Unlike in the past, “Russia’s goal is not to convince people of a truth, but to erode faith in institutions, sow falsehoods, undermine critical thinking, encourage conspiracy theories, promote social discord, and use the freedom of information against liberal democracies.” And, as compared to the “analog information wars of the first Cold War, the Internet and social media provide Russia cheap, efficient and highly effective access to foreign audiences with plausible deniability of their influence” and without having to maintain an in-country presence, with all the associated risks.
Upheaval in the media landscape: The fragmentation of former journalistic monopolies has enabled the rise of cable news, talk radio, and websites with distinct ideological positioning. Simultaneously, audiences are inundated with news from a wide variety of sources, with bloggers and opinion columnists now appearing alongside more traditional news outlets. Perhaps the biggest disruption is the degradation of journalism business models. With more than 40% of journalists laid off over the last decade, local newsrooms are weakened, which further erodes mainstream media’s ability to serve the public and maintain trust.
New technology platforms: Propaganda, misinformation and disinformation are longstanding problems. What is new is the rise of social media and new technology platforms (Facebook, Google, Twitter, Reddit, etc.), the increasing use of bots, trolls, Facebook dark posts, internet-based micro-targeting, and sophisticated search-engine optimization. These technologies have created a system for news distribution that can be readily “gamed” — effectively killing trustworthy information curation.
For example, a 2015 report by the security firm Incapsula found that bots generate around 50% of all web traffic. Another study found that between September 16 and October 21, 2016, bots produced about a fifth of all tweets related to the upcoming election. (It’s worth noting that where polarization is concerned, the social media explanation is not entirely satisfying: researchers have found that the growth in polarization in recent years is “largest for the demographic groups least likely to use the internet and social media.”)
Big data: Social media has also enabled access to “an unprecedented level of granular data about human behavior, both individually and within groups … [that has] given rise to computational social science, a discipline leveraging much of this data to analyze and model large-scale social phenomena.” This has made the so-called “psychometric,” “psychographic” or “psyops” targeting by both foreign governments and groups like Cambridge Analytica possible, as “specific strategies and tactics can be evaluated based on their success or failure in shaping the behavior of a group of users.”
Those seeking to game the system are driven by a range of motivations, including political power, partisanship, prejudice, and profit. There are domestically organized, candidate-affiliated groups like Cambridge Analytica that generally have electoral aims, and international actors like China and Russia seeking political power. There are promulgators of online hate speech who do not always have particular political aspirations. And there are biased online news groups like Breitbart (now opening operations in France and Germany) that appear to have a mix of partisan and profit-oriented goals.
And then there are those who appear to be motivated purely by profit — the much-discussed teenagers in Macedonia, or little-known U.S. groups like American News LLC, which operates profitable sites such as Liberal Society and Conservative 101 on both sides of the political spectrum. “The product they’re pitching is outrage,” as one media observer put it in a BuzzFeed article. Different motivations may necessitate different solutions.
A wide range of potential interventions have been proposed to improve the role of facts in political discourse. None of them alone would solve the problem, but several together might hold promise.
These interventions can be broadly grouped based on where in the system they are focused: on production of politically relevant information, on its distribution, or on its consumption. To date, the most prominent of these solutions are focused on the front end, to improve the quality of journalism produced, or on the back end, via fact-checking and media/internet literacy aimed at news consumers. Less concerted philanthropic focus has been aimed at distribution.
1. Information production
Efforts to improve production are typically about building trust in “traditional” or credible news sources, or deterring production/distribution of biased information.
Improving journalistic quality
Many have argued that the antidote to fake news is more quality journalism. Proposed interventions include:
– Efforts to improve journalism funding, either directly to nonprofit outlets or via advocacy for increased funding for the Corporation for Public Broadcasting.
– Education for journalists struggling with false equivalency (e.g., understanding modern forms of d/misinformation or propaganda, and how to treat calculated political falsehoods versus legitimate alternative viewpoints), or on how to tell more engaging stories, better include minority voices, better cover the interests of the political right, etc.
– Collaboration, such as the formation of coalitions of newsrooms sharing tips on how to jointly create content, improve audience engagement, diversify viewpoints, etc.
– A number of efforts to (either directly or indirectly) improve trust and transparency in journalism, like:
– The Trust Project, which works across traditional newsrooms, exploring how trust in journalism has changed, and looking for best practices and technology tools to “bake the evidence of trustworthy reporting — accuracy, transparency and inclusion – plainly into news practices, tools and platforms.”
– The Solutions Journalism Network, which aims to shift the focus of journalism from documenting problems towards also identifying solutions that empower citizens.
– The Coral Project, a collaboration between Knight and Mozilla to improve journalism’s engagement with audiences.
– Efforts suggested by American Press Institute and others to “embed the editorial process directly into journalistic content” (i.e., identifying and hyperlinking to sources, clearly labeling news versus opinion pieces, noting why stories are newsworthy, etc.)
Some interventions focus on press freedom, i.e., efforts to protect journalists themselves and their access to sources. These include FOIA legal advocacy and litigation; whistleblower protections; and support for media law practices to address restrictions on access to information, data, and sources. They also include litigation strategies, legal defense funds, and insurance to address legal intimidation, subpoenas and lawsuits, as well as training and improved encryption to address hacking, cyberbullying, etc.
And, of course, there’s investigative reporting — support for newsrooms that have been key in revealing the scope and details of the problem – i.e., “covering the ‘information problem’ as a beat.” Consider how Global Voices identified a network of more than 20,000 Russian trolls on Twitter, and BuzzFeed launched a campaign to debunk false news stories as part of the First Draft Coalition.
However, this approach is not enough. There is quality news out there, but it is too often drowned out in a sea of partisan noise, d/misinformation or propaganda. Certainly the existence of quality journalism is a necessary condition, but insufficient to solve the problem. Moreover, the vast majority of philanthropic efforts are already focused here.
Deterring purveyors of fake news, d/misinformation and propaganda
Multiple efforts are working to identify problematic information sites based on their content. Domestically, a professor at Merrimack College compiled a list of hundreds of misleading news sites based on their habit of “using distorted headlines and decontextualized or dubious information.” Abroad, Le Monde’s Décodex offers a growing database of more than 600 news sites that have been identified and tagged as “satire,” “real,” “fake,” etc. Meedan’s Check also offers a collaborative verification platform.
Other efforts seek to identify problematic information based on the way it is disseminated. Rumor Gauge provides automatic detection and verification of rumors on Twitter based on sharing patterns; the Oxford Internet Institute works to detect politically motivated social media bots; and the Observatory on Social Media (OSoMe) at Indiana University offers a suite of tools (including BotOrNot) that let users visualize conversations around Twitter hashtags.
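Purely as an illustration of the kind of behavioral signals such detectors weigh (posting rate, retweet share, narrowness of sources, account age), here is a minimal sketch of a bot-likelihood heuristic. The features, thresholds, and weights are invented for the example; real tools like the OSoMe suite rely on far richer features and trained models, not hand-picked cutoffs.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    """Toy summary of an account's recent posting behavior (hypothetical fields)."""
    posts_per_day: float          # average posting rate
    share_of_retweets: float      # fraction of posts that are retweets/shares (0..1)
    distinct_domains_shared: int  # how many different domains the account links to
    account_age_days: int         # time since the account was created

def bot_likelihood(a: AccountActivity) -> float:
    """Combine a few hand-picked heuristics into a rough 0..1 score.

    These cutoffs are illustrative only; a score near 1.0 would merely flag
    an account for closer (human) review.
    """
    score = 0.0
    if a.posts_per_day > 100:            # an inhumanly high posting rate
        score += 0.4
    if a.share_of_retweets > 0.9:        # almost never posts original content
        score += 0.3
    if a.distinct_domains_shared <= 2:   # narrowly amplifies a handful of sources
        score += 0.2
    if a.account_age_days < 30:          # very recently created
        score += 0.1
    return min(score, 1.0)

if __name__ == "__main__":
    suspicious = AccountActivity(posts_per_day=250, share_of_retweets=0.97,
                                 distinct_domains_shared=1, account_age_days=12)
    print(f"bot likelihood: {bot_likelihood(suspicious):.2f}")  # prints 1.00
```

None of these signals is conclusive on its own, which is why the tools above pair automated scoring with visualization and human judgment.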
And then there are ideas to not just identify but to deter or punish fake news distributors, such as: voluntary candidate agreements to “stick to the facts”; creation of independent watchdogs or rating mechanisms to put pressure on those disseminating “fake news”; tech platforms suspending or banning questionable accounts; efforts to list and boycott advertisers who post on biased news sites in order to remove the profit motive, or to delay revenue realization for unverified news sources (both of which of course would only deter profit-motivated actors); ways to not only detect but also attribute anonymous disinformation campaigns, particularly those led by bots — to “move beyond being able to say that bots are involved in a conversation and towards identifying the people launching them, [which] would make room for legal repercussions for those behind such attacks.”
2. Information distribution
The main information distributors are Facebook, Twitter, Google, Reddit, etc. There has been somewhat less philanthropic focus on the role of technology platforms, in part due to these companies’ limited openness to collaboration or data sharing. And many funders interested in d/misinformation and propaganda have longstanding media/journalism programs, while fewer have technology expertise. Proposed interventions include:
Greater transparency by platforms on “how things work” (e.g., algorithmic formulas underlying Google auto-complete, Facebook newsfeed rankings, etc.); and data-sharing to enable more outside research. The introduction of online reputation systems (potentially including the above-mentioned removal of anonymity for online actors) can help too.
Algorithmic changes, such as: flagging or down-ranking questionable stories; working to “better identify and rank authentic content”; serving up ads from verified professional news organizations that display factual stories on the same topic; and identifying trending stories and slowing their velocity until they get fact-checked (see the sketch after this list).
People-powered changes, such as: testing ways for people to report hoaxes more easily; introducing tools to dispute stories by flagging them (see the Facebook changes above); enabling users to screen content (e.g., Twitter recently expanded its hateful-conduct policy, letting users hide tweets with certain words); awarding ad credits to users who push back against d/misinformation or propaganda (as Google has done for extremist speech via Jigsaw); and enlisting volunteer moderators (a la Reddit and Wikipedia).
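To make the “down-rank questionable stories” and “slow content velocity” ideas above concrete, here is a minimal sketch of a hypothetical feed in which each story carries an engagement score and a count of credibility flags. The data model, penalty factor, and velocity cap are assumptions invented for the example; no platform’s actual ranking code is public, and this is not how any particular platform works.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Story:
    title: str
    engagement: float        # baseline ranking signal (clicks, shares, etc.)
    flag_count: int = 0      # user/fact-checker flags received
    fact_checked: bool = False

def ranked_feed(stories: List[Story],
                flag_penalty: float = 0.5,
                unchecked_velocity_cap: float = 100.0) -> List[Story]:
    """Rank stories by engagement, down-weighting flagged ones and capping
    the reach of fast-moving stories that have not yet been fact-checked.

    The penalty and cap values are arbitrary illustrations.
    """
    def score(s: Story) -> float:
        value = s.engagement
        if not s.fact_checked:
            value = min(value, unchecked_velocity_cap)   # slow it down until checked
        value *= (1.0 - flag_penalty) ** s.flag_count    # each flag halves the score
        return value
    return sorted(stories, key=score, reverse=True)

if __name__ == "__main__":
    feed = [
        Story("Viral unverified claim", engagement=900.0, flag_count=3),
        Story("Fact-checked report", engagement=300.0, fact_checked=True),
        Story("Ordinary local story", engagement=120.0, fact_checked=True),
    ]
    for s in ranked_feed(feed):
        print(s.title)
```

One design choice worth noting: because the flag penalty is multiplicative, a story needs several independent flags before its reach drops sharply, which limits the damage a single bad-faith flag can do.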
3. Information consumption
There are two primary efforts — fact-checking and news literacy — aimed at helping consumers either correct or avoid d/misinformation or propaganda.
Fact-checking
Fact-checking is perhaps the second-most commonly discussed intervention, after efforts to improve journalistic quality and trust in journalism. Ideas include:
Independent fact-checkers: Hundreds of independent fact-checking groups have emerged in recent years, many now coordinated by the International Fact-Checking Network (IFCN) at Poynter. Groups like FactCheck.org, the Fact Checker (home of the “Pinocchios”), and PolitiFact remain the most prominent in the U.S.; Full Fact in the U.K. Efforts like Hoaxy put a slightly different spin on this, serving as a “search engine” for fake news, illustrating how claims spread on Twitter, and also fact-checking them.
Newsrooms: The Washington Post announced a Chrome and Firefox extension that permits users to embed fact checks into tweets. The BBC is extending its fact checking/debunking project—Reality Check—and is working with Facebook. Other groups like First Draft News, founded in 2015 as a coalition of newsrooms and social media platforms, are working to improve practices, standards and technology for debunking problematic information and verifying eyewitness media online.
Technology platforms: Several efforts are working to embed fact-checking directly into the platforms (e.g., via correction bots, France’s CrossCheck, the Engaging News Project, etc.). Facebook recently began outsourcing fact-checking to established organizations (e.g., Snopes, FactCheck.org, ABC News, AP, and PolitiFact), and now relies on its users to flag potentially “fake” news stories in order to trigger the fact-checking process. Google is also now helping draw users’ attention to fact-checking articles relevant to significant news stories.
Government: Others are calling for the government to issue more pre-emptive, public refutations of false claims, for example by creating “official government webpages acting as a U.S. government “Snopes” for disarming falsehoods” (e.g., via the State Department and Department of Homeland Security).
However, several challenges persist:
Distribution: The challenge with independent fact-checking groups is that often the most misinformed audiences are not the ones seeking out corrections. (Hence efforts to try to serve up fact-checks “real-time” to avoid simply “preaching to the choir” of the small group of citizens predisposed to visit fact-checking sites directly).
Motivated reasoning and confirmation bias: Even when fact-checks do reach larger audiences, people often don’t believe the correction. Behavioral scientists are consistent in their view that feelings and ideological affiliations often precede evidence in people’s decision-making processes: evidence is used primarily to support preexisting feelings, and disconfirming evidence is often rejected, resulting in “belief persistence” for many, despite exposure to fact-checks.
Backfiring: There is considerable, if disputed, evidence that fact-checking only works on less ideologically contentious issues, and may have a “backfire” effect in more heated policy debates, causing people to become more deeply entrenched in their preexisting, erroneous beliefs. Views on the “favorability” of fact-checking also differ substantially by political party affiliation, and are higher among Democrats.
Belief echoes: Even when specific inaccuracies are corrected, their effects on attitudes toward the accused candidate or issue can persist long after the factual misperception has been dispelled.
As a seminal study by the RAND Corporation notes regarding fact-checking: “Don’t expect to counter the firehose of falsehood with the squirt gun of truth.”
News literacy
As a potentially longer-term solution, news literacy ideas are also highly popular. They include:
– incorporating news literacy into the K-12 system;
– developing MOOCs or OERs to avoid the need for K-12 dissemination;
– requiring news literacy in the SAT and other standardized tests;
– teaching via museums;
– finding trusted individuals within relevant communities to deliver media education training;
– embedding news literacy tools and training directly into technology platforms;
– embedding content directly into journalism (e.g., building news literacy directly into the news product by calling out “What is new about this story? What is the evidence? Who are the sources? What proof do they offer? What is still missing or unknown?”);
– and working with public broadcasters, mainstream television, and/or film producers to launch PSAs and/or incorporate “softball” media literacy training and social sharing norms directly into entertainment spots.
Here the same challenges around preaching to the choir, motivated reasoning, and confirmation bias apply, as do concerns about the time lag before such education would have real impact.
It is also hard to imagine an effective solution that puts the onus for problem-solving wholly on citizens who are already busy with their day jobs, overwhelmed with conflicting information, and less sophisticated than highly motivated political, partisan or profit-seeking actors. While this is important, it’s unlikely to solve what is in effect a structural, systemic problem.
Other audience-facing interventions
Finally, in addition to fact-checking and news literacy, a handful of other ideas aimed at news readers have been proposed. These include journalistic quality standards to identify and certify quality journalism (e.g., a “Fair Journalism” set of opt-in standards, or an “Information Consumer Reports” seal of approval or ratings for newsrooms). Other ideas include a “universal contextualizer” offering up real-time, relevant comparisons to help frame key information in a given news story (e.g., is $1 million a lot of money? Are 1,000 petition signatures a lot?), and consumer “bubble-busting” tools aiming to expose people to information outside of their comfort zone or help media users understand the context of what they’re looking at. These include WSJ’s Red Feed/Blue Feed, “Outside Your Bubble,” “Escape Your Bubble,” and Vubble (which similarly serves up dis-affirming information, but tailored to audiences’ emotional state).
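As an illustration of the “universal contextualizer” idea, here is a minimal sketch that re-expresses a dollar figure relative to a couple of reference baselines. The baselines are rough, hard-coded values and the framing rules are hypothetical; this is not drawn from any existing tool.

```python
from typing import Dict, List

# Rough, hard-coded reference values (hypothetical; a real tool would curate and source these).
REFERENCE_BASELINES: Dict[str, float] = {
    "annual U.S. federal budget (USD)": 4_000_000_000_000,
    "median U.S. household income (USD/year)": 60_000,
}

def contextualize(amount: float) -> List[str]:
    """Express a dollar figure relative to each reference baseline."""
    comparisons = []
    for label, baseline in REFERENCE_BASELINES.items():
        ratio = amount / baseline
        if ratio >= 1:
            comparisons.append(f"about {ratio:,.1f} times the {label}")
        else:
            comparisons.append(f"about {ratio:.6%} of the {label}")
    return comparisons

if __name__ == "__main__":
    # "Is $1 million a lot of money?" -- it depends entirely on the comparison.
    for line in contextualize(1_000_000):
        print(line)
```

A real tool would also need some way of choosing which comparisons are actually relevant to the story at hand, which is the harder editorial problem.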
Medium-term interventions include tools to help citizens boycott companies that advertise with problematic information creators (e.g., Russia Today) or distributors. Longer-term focus includes news/media literacy projects, and changing social norms around sharing d/misinformation or propaganda.
As the Hewlett Foundation’s Madison Initiative considers how to address the role of misinformation in an era of polarized politics and dysfunctional government, interventions focused on improving the role of technology platforms appear to be particularly promising to us.
Misinformation, disinformation and propaganda aren’t new. But social media and search platforms are. They have increased the speed and scale at which ideas can be shared. They have “democratized” information sharing – cutting out traditional gatekeepers and thus exacerbating media fragmentation and information inundation. Many of these aspects — speed, scale, democratization — are features, not bugs. The elevation of more popular click-bait content, which on the political front tends to be more extreme, is also an intentional (if lamentable) aspect of the design, one that leaves less extreme actors caught in the crossfire and pulled into the fray (to mix idioms).
But the ability to game the system is a bug that clearly needs to be addressed. It’s a particularly serious one as it enables amplification of biased information, giving the erroneous impression of broad-based social endorsement of ideas often previously confined to the fringes.
These issues are unique to the technology platforms. Focusing on them alone won’t “solve” the information problem, any more than the platforms alone created it. There is no likely silver bullet here. But work on this area could help mitigate and contain the damage.
The other four interventions — improving journalistic quality, fact-checking, news literacy, and other audience-facing ideas – are being pursued by many other charitable donors and remain on our radar. But our preliminary sense is that these ideas won’t be enough. High-quality journalism is out there, but it is drowned out by noise and strategic d/misinformation and propaganda. Even if fact-checking were dramatically expanded, many behavioral and motivational challenges would persist. And even if news literacy were rolled out universally (which I think is critical), it is hard to imagine individual citizens — many of whom are only minimally engaged politically or civically — making sufficient sense of a fragmented media landscape that sophisticated and well-funded actors are actively trying to manipulate.
But without more transparency from technology platforms about how their algorithms work, or data sharing about how users are engaging online, it’s difficult to even understand the extent of the information problem, much less to explore what kind of solutions the platforms should be pursuing. We have little access to data beyond what the platforms themselves voluntarily share. If nothing else changes in the near term, at least let there be some data.