Recently, leading thinkers gathered to discuss what cybersecurity could look like in 2020 and beyond. Among them was Steven Weber, a professor in the Department of Political Science and the School of Information at UC Berkeley, where he serves as faculty director of the Center for Long-Term Cybersecurity (founded in 2015 with a grant from the Hewlett Foundation). Weber has led the development of Cybersecurity Futures 2020, a set of scenarios to serve as a starting point for conversation among academic researchers, industry practitioners, and government policymakers. Following are edited excerpts from an email interview with Weber about his approach to scenario planning.

The five scenarios you developed for what cybersecurity might mean in 2020 — just four years away! — present some fascinating possibilities, from wearable devices that track our emotional state to a “new normal” where everyone just assumes their personal information will be stolen. What do you see as the most important insights arising from the scenarios?

One of the goals of scenario development is to ask and answer the question: What insights about the future are likely to be true in any plausible future? We’ve extracted ten of these insights from our work, but there are two that really stand out.


The first is that cybersecurity is on the verge of becoming the “master problem” of the Internet era — an existential challenge that is quite like climate change in its significance and global consequences. If you think about the resources — economic, political, or technical — that are being mobilized around climate, then you have a sense of what we believe the world will need to do around cybersecurity in the next few years.

That recognition in and of itself will bring major changes in how human beings and digital machines interact. Cybersecurity is about to enter the arena of vast psychosocial impact in a manner we’ve not seen up to now. Corporations and governments will be able to predict individual human behaviors with precision and come to know us deeply — not just what we buy or where we go. They will know us better than we know ourselves when our memories are storable, searchable, shareable, and possibly changeable. These are the kinds of things that go to the essence of what it means to be human, how we interact with each other, what freedom and fairness mean, and ultimately how we assess a feeling we call security.

The second is just how much hinges on the ways in which the political economy of data evolves. We think that security issues around data will become more critical than the security of digital devices or communications networks. When data becomes a more easily exchangeable asset, it also becomes something of measurable value that criminals will want more than ever to acquire and sell.

Governments won’t be able to avoid the challenge of managing markets for data, both licit and illicit. The interactive dance between data and algorithms — where the scarce resource lies at any moment, where differential insights can be created, and where the most dangerous manipulations can occur — will become a critical variable in the shape of the threat landscape.

We also landed on the view that there is no silver bullet and probably never will be. That’s hardly a new argument, but we understand better now why. It’s because the ongoing and ever-increasing demand in the digital realm for features, performance, and extensions of capabilities expands to fill the space of what is technically possible, and then goes beyond it.

This observation is a pretty basic statement about human behavior, but it suggests that the digital realm will evolve very much like other security realms have always evolved in human affairs. Which is to say that bad actors co-evolve with good, and that the meanings and identities of good and bad are never settled. Threats don’t disappear. They change shape.

In setting up the Cyber Initiative at the Hewlett Foundation, one of our key concerns was how to foster a cybersecurity field that offers robust, multidisciplinary solutions to complex policy challenges. Could you talk about how the Center for Long-Term Cybersecurity’s own multidisciplinary approach contributed to the development of the scenarios, and how you see them feeding back into your research?

We started by convening a broad group of scholars, practitioners, and policy people to identify and grapple with what we call “critical uncertainties” — the forces of change that are most important and at the same time most uncertain in the system under study. I’m not a fan of a multidisciplinary approach for its own sake, but I am absolutely convinced that no single discipline can claim to understand what it is that drives a challenge to become so crucial that people label it a security issue. The very concept of cybersecurity still needs work.

If that sounds like an academic abstraction, it shouldn’t. Most people in 2016 wouldn’t think of a credit card breach as impacting their core security interests; it’s more like a tax on your day-to-day life or an annoyance that somebody else has to deal with (and pay for). This matters because the landscape of potential solutions really changes when a problem becomes a security problem in political discourse, public imagination, and C-suite strategy conversations. And that happens at the intersection of politics, technology, economics, psychology, and a lot more. That’s where the demand for multidisciplinary solutions comes from.

There’s another way this could play out, of course. Cybersecurity would rise to the top of everyone’s agenda very quickly if the Internet were to become first and foremost a military realm. That’s possible — we’ve all heard about “cyber Pearl Harbors” and the like. We hope it doesn’t evolve that way. Ironically, the concepts and practices around security would then narrow, and that would not be a good outcome.

Our contribution here is based on the proposition that getting a little bit ahead of emerging challenges, rather than continuing to play catch up, is the best way we can help to prevent outright militarization from becoming the story of the Internet in 2020.

Your kickoff event for the scenarios featured one of the most successful cybersecurity storytellers: Walter Parkes, screenwriter and producer of the movies WarGames and Sneakers. How would you describe the role of narrative in your scenarios and in helping the public understand cybersecurity?

I’ve never accepted the notion that science, theory, and narrative are at cross purposes with one another. Quite the opposite: if you want to bring about change in what people believe and what they do as a result, you have to tell compelling stories that embed science and theory in a narrative. In politics, stories do battle with other stories, and I think a policy-relevant research agenda needs to acknowledge that reality and own it. This doesn’t mean sacrificing theory or academic integrity. The stories that matter are those whose plot can be rigorously defended with logic and evidence.

Walter Parkes taught me the difference between story and plot, and I think it’s a difference that every researcher who wants to change the world should keep in mind. Plot is what research creates — logic, causality, evidence, and reason. Story is what the human mind wants to consume — the narratives that move human beings to act differently tomorrow than they did today. In Parkes’ telling, here’s a plot: A woman loses her baby. Here’s a story: Baby shoes, for sale, never worn.

We’ve taken that message to heart in our scenario work. Each of our scenarios expresses a clear logical plot about how the landscape of cybersecurity could change. Each has explicit theoretical causal claims at its core. And we’ve embedded little fictions that represent snapshots of data from the future — characteristic observable implications of those logical plots. Those newspaper stories and the like aren’t predictions; they are stories that represent the kinds of data points we’d start to see if the causal logics of a particular plot were starting to unfold. Policy-relevant research needs both of these elements to be effective.

Given the siloed nature of the huge sums spent on cybersecurity each year — whether on government priorities or on hardening individual company networks — we believe there’s an important role for philanthropy to play in fostering more cooperation among government, industry, civil society, and academia. Where would you advise us and other funders interested in supporting cybersecurity for 2020 and beyond to invest our resources?

It’s always easy to tell other people where they should spend their resources, isn’t it? But in this case, we’ve put our own time and money where our mouths are, too. I think the key to fostering cooperation is to recognize that the time to play catch-up in the cybersecurity realm is over.

The bad actors are out ahead of the good actors in many if not most parts of the landscape. There isn’t enough money in the world to change that if the good actors stay reactive, generally responding to what the bad actors do. After all, it’s a no-brainer move for the bad actors to do everything they can to impede cooperation among exactly the people and institutions that funders want to bring together. People sometimes talk about a collaborative moonshot approach, but it’s important to remember that the moon was not an intelligent adversary trying to defeat the attempt to land on its surface.

So my advice is to get out ahead of the game and identify areas for cooperation that will start to emerge just over the horizon. I don’t know precisely what the right time frame for that kind of approach is, but my instincts tell me that we’re almost too late, for example, to deal with the first-generation massive deployment of the Internet of Things, and that it might be smarter to move on right now to address what happens after that first generation is brought down by attackers.

I also want to endorse strongly the Hewlett Foundation’s emphasis on addressing the global dimensions of the cybersecurity challenge. The Internet is still a new enough realm of political-economic-security games that the concept of national interest in those games is up for grabs.

It wasn’t that long ago that John Perry Barlow proclaimed the end of nation-state sovereignty on the Internet. Twenty years after he wrote “A Declaration of the Independence of Cyberspace,” the global policy agenda is crowded with issues like data localization and cross-border jurisdiction that put questions of national borders and national power right at the top.

We may need to go back to first principles on some of these questions and seed a global conversation about the basic characteristics of Internet “space” and what it means for national interests. That will frustrate people who see the urgent cybersecurity dilemmas in a global frame, and I understand that frustration. But how else do we get sustained, productive cross-border collaborations that improve the security of this new domain? It’s a long game that I hope the most ambitious philanthropists will see as their unique place to invest.