Kelly Born is a program officer with the foundation’s Madison Initiative, which seeks to strengthen democracy and its institutions – especially Congress – so that they can be more effective in a polarized age. Her grantmaking includes support for nonpartisan organizations focused on media, research and reform related to campaigns and elections, from advocacy groups that push for electoral reform, to researchers who examine the role of money in politics, to nonprofit news organizations. She recently co-authored, with Nell Edgington of Social Velocity, an analysis of philanthropy’s opportunities to help address problems in political discourse related to disinformation, misinformation and propaganda.

Propaganda in politics and elections isn’t a new phenomenon. You’ve spent the last few years immersed in the disinformation problem. What feels different about the current moment?

Two things are really different: technology and political polarization.

Today, anyone can create content, anyone can distribute it, and they can do so anonymously. Once upon a time, citizens could evaluate information based on the credibility of its source; now peer-to-peer sharing has become the norm, and a proxy for relevance and accuracy. Online conversations can be hijacked by bots and trolls working to artificially amplify divisive ideas. Moreover, big-data collection allows for the micro-targeting of political messages, with no broad visibility into how those messages have been tailored for narrow audiences. Arguably, infinite variations of a single message can be tested until each is perfected for a given individual – tailoring that threatens to render public campaign commitments meaningless.

Meanwhile, polarization has become the defining feature of our political landscape, and partisan animosity is at a high point. Years ago, Americans might not have been so easily persuaded by extreme content, and trusted actors might have been able to intervene with “the facts.” But today there is diminishing trust in a range of democratic institutions, including mainstream media, experts and government, so there’s no credible referee in our political system to call out what’s true and what’s not. Whereas fact-checking and expert opinion might still work in less divided countries, in the U.S. context one immediately asks: “Who fact-checks the fact-checkers?”

Adding to these social, technological and political changes are basic underlying incentives: social media platforms profit from “engagement,” and behavioral science confirms that citizens are drawn to more emotional content and prefer information that reaffirms their pre-existing worldviews.

Recently, both the House and Senate held hearings to grill representatives of big social media companies about the role their platforms played in aiding Russian efforts to meddle in the U.S. election. Some have said that our campaign finance laws need to be updated to require more disclosure of online political advertising. What’s your take?

The short answer is yes. America’s campaign finance laws are extremely outdated. The most recent update came 15 years ago – before Facebook, WhatsApp, Twitter and Instagram even existed – and the laws still presume that TV is the primary medium for political communication. But the internet is obviously playing an increasingly important, and unregulated, role. While it’s hard to track, experts estimate that online campaign advertising grew almost 800 percent over the last election cycle, with almost 40 percent of that spending going to social platforms, mostly Facebook.

But political advertisements represent only a tiny slice of the disinformation problem. Legally, “campaign ads” are very narrowly defined, covering only ads that name a specific candidate within 30 to 60 days of an election. Much of the disinformation we are currently wrestling with, be it Russian or homegrown, focuses on wedge issues rather than specific candidates. And much of it doesn’t have to be purchased as an ad at all. Some wonder why anyone seriously interested in influencing American democracy would buy an ad, subject to all of the attendant regulations, when they could simply create their own media content (or company) instead, along with a small army of bots to promote their ideas.

Again, the challenge is worse because it’s not only technology that has changed – society has, too. Now 95 percent of congressional districts are “safe,” and there are few undecided voters left. Ads used to be designed to convert or persuade people to support a specific party or candidate. That’s no longer the case. With most voters already ideologically aligned, ads no longer need to persuade; they need to mobilize the base. That incentivizes a totally different kind of advertising, one focused on inflaming rather than informing people, and it doesn’t require ever naming a candidate for the “advertisers” to achieve their goal. A whole new definition of campaign (and issue) ads is needed, and crafting one has proven thorny for decades given our free-speech values. This makes me wonder whether Americans need to revisit their conception of free speech in the digital age. We’re living in a time when allowing some people “freedom of speech” to promote nonsense can, in effect, drown out credible speech entirely. (And the First Amendment constrains government, not private companies – Facebook could delete fake content if it so chose.)

Unfortunately, technology and political strategies are changing so quickly – and our legislative processes move so slowly – that it’s hard to imagine Congress effectively solving this problem in a durable way. There is an arms race underway between the technology platforms and those seeking to interfere in our information systems. Mitigating this interference will require iterative and nimble approaches unlikely to come from Capitol Hill.

Your analysis suggests that there’s a lot we still don’t know about solving this problem – that we need more information before we can figure out which interventions make sense. Why is that, and what do you see as the key questions that still need to be answered?

It’s hard to imagine coming up with good legislative solutions before we know more about what’s really been going on. The appendix of our latest report, which lays out a research agenda for the field, illustrates the point. We don’t know how significant online echo chambers are compared with those in our offline lives, or how much they matter. We don’t know how much disinformation is actually changing people’s views, ideologies or votes – presumably some of it is just preaching to the choir. We don’t know enough about when fact-checking works and when it just makes things worse. The list goes on.

With the exception of Twitter, academic researchers don’t have access to the data necessary to understand these problems. The platforms do, but they have thus far been unwilling to share it. This is partly due to very legitimate concerns about user privacy, and partly due to the risk that the data will reveal what many already believe – that the platforms are being gamed by foreign actors and homegrown ideologues, and that this is driving political polarization. So in some ways, privacy concerns are a convenient shield. Arguably, thousands of platform employees already have access to our private data. I, for one, would rather know that others – academic researchers without the profit incentives the platforms face – are also looking at the problem. And presumably, if the platforms can find legal and technological means to keep our data safe internally, an academic research center could be established with sufficient safeguards to do the same.