Today Social Science One and the Social Science Research Council (SSRC) released the first in a series of requests for proposals for scholars to examine Facebook’s impact on elections and democracy around the world. Facebook will provide scholars with privacy-protected data, and research funding will come from a group of seven ideologically diverse funders, including the Hewlett Foundation’s Madison Initiative.
Questions about the supply, demand, and influence of digital disinformation are being asked everywhere you look, from the United States Congress to the European Union, India, and Brazil. As actors from every sector grapple with how to respond, they are concerned not only with the quality of information shared on digital platforms and the privacy of individual users’ data, but also with whether and how disinformation is accelerating political polarization and undermining democracy. The Hewlett Foundation and other funders, scholars, and digital platforms are taking steps to address this thorny problem.
As these conversations have evolved, a consensus has emerged that we need three things to make progress:
- Define disinformation. There are dozens of convenings about disinformation each month, and at most of them it becomes apparent halfway through that people are talking about slightly different things. Truly made-up “fake news” is not the main problem, and platforms are incentivized to address it. What is less clear is whether and how platforms, whose business models hinge on engagement, are equipped to address the biased, uncivil news that is spread deliberately (disinformation) or inadvertently (misinformation) and is known to elicit heavy engagement. Biased information, uncivil information, information taken out of context, manipulated videos or photographs: these are all slightly different concerns, and each likely requires different solutions. Yet we still lack a common definition of the problem, and we are even farther from defining it at a level of specificity where we could say, for example, that we had “X percent” disinformation on Facebook in 2016 and “Y percent” in 2018.
- Describe the problem. Even as we move toward a common definition, many are jumping to solutions before we have a clear understanding of the challenge. On the supply side of problematic information: how much of it is out there, who produces and who amplifies it, who is most often targeted, what factors determine virality, and how much of it is paid ads versus organic content? How many people consume it? Does exposure to disinformation affect attitudes and behavior, online and off? And what do researchers need to answer these questions? We have supported several efforts to identify and prioritize research needs, including interviews with more than 50 experts in the United States and Europe; a literature review on Social Media, Political Polarization, and Political Disinformation to identify key gaps and areas of consensus; and a convening of top scholars, funders, and technology company representatives to map out a potential research agenda and infrastructure needs for the field.
- Disaggregate solutions. Finally, it is clear that there will be no silver bullet here. Instead, the problem needs to be broken into its various subcomponents: bots, microtargeting, paid political ads, foreign interference, and so on. Each will likely have distinct remedies, none of which will entirely solve the problem, but each might get us 5 to 10 percent closer to a healthy online information environment. The bot problem alone illustrates this complexity well: Twitter can successfully detect bots only around 70 percent of the time, and as companies’ detection abilities improve, bad actors grow more sophisticated every day in an ongoing arms race. An obvious solution would be simply requiring real identities for all accounts, but this works less well in countries under authoritarian regimes, where that very anonymity protects pro-democracy advocates from state backlash. And if we go after “bad” bots, we must also remember that armies of “good” bots (capable of things like shaming online promulgators of hate speech) will get taken down along with them. None of these problems has a simple solution, but by breaking each of them down more thoughtfully, we can begin to get to better answers.
The request for proposals released today is an important first opportunity for independent researchers to access and analyze Facebook data. Because the scholars are committed to sharing their research publicly, their work should help all of us define digital disinformation more precisely, better understand and describe the problem, and start to disaggregate potential responses. We anticipate that in the coming year, our support for the work of SSRC, Social Science One, and other independent academic researchers will shed light on all of these areas. To achieve that, however, scholars need privacy-protected data beyond what Facebook is providing. Google, Twitter, Reddit, Tumblr, and others are critical nodes in our new information ecosystems. Our hope is that these companies will soon join this partnership, or find other ways to open their data to independent, scholarly research. Understanding how they affect democracy, and doing so quickly, will be key.