Navigating the future: AI, cybersecurity, and journalism at Verify 2024

In a panel discussion at Verify 2024, Garrett Graff, Director of Cyber Initiatives at Aspen Digital, talks with Kent Walker and Heather Adkins, two of Google's longest-serving security and policy leaders, about how the cyber threat has changed over the last quarter-century and how AI will change the game in the future.

The complex mix of emerging threats and opportunities posed by AI and the persistent, now multi-decade, presence of hackers inside U.S. companies and critical infrastructure dominated the conversation at this year’s Verify media roundtable.

The three-day event this spring gathered 90 attendees — including journalists, government officials, cybersecurity experts, researchers, and tech executives — to discuss the latest from the cyberpolicy field. The roundtable was the fifth hosted by the Hewlett Foundation, and the third organized in partnership with Aspen Digital, the Aspen Institute program focused on cybersecurity, media, and technology.

Throughout the conference, on-stage conversations ranged from the unique trust-and-safety challenges posed by the explosive growth of generative AI to data privacy to the geopolitical landscape of cyber threats. These conversations brought together government leaders from the departments of Homeland Security, Justice, and Commerce, with executives from OpenAI, Anthropic, Discord, Google, and Microsoft, as well as civil society researchers and thinkers from Stanford, Fordham, New America, and the Brookings Institution, among others.

The opportunities and pitfalls of AI

The increasing capability and availability of AI have created tremendous opportunity — from vaccine development to improving government services — but the rapid pace of development has also raised concerns, from data privacy and the scraping of online information to train large language models (LLMs) to the technology's potential use by terrorists and other rogue actors.

During a fireside chat with WIRED’s Lauren Goode, RAND CEO Jason Matheny — whose work has long focused on the intersection of national security and threats from emerging technology — discussed the challenges posed by recent advances in genetic engineering and biological threats. Technology, he said, now delivers to graduate students capabilities that were only available to the most advanced nation-state bioweapon labs just a decade or two ago.

When he started working in biosecurity in 2002, Matheny explained, “It cost many millions of dollars to construct a polio virus, a very, very small virus. It would have cost close to a billion dollars to synthesize a pox virus — a very large virus. Today, the cost is less than $100,000 — it’s a 10,000-fold decrease over that period.” And while technology has opened up the rapid development of vaccines, research and development systems have not kept pace with potential biological threats. Matheny shared, “the cost of vaccines has actually tripled over that period. The defense-offense asymmetry is moving in the wrong direction.”

While the rapid development of AI has created new capacities, it also presents novel challenges related to privacy, security, and ethics — and LLMs, in particular, raise questions about user consent, since many harvest information and data widely online. In a conversation about protecting privacy while supporting innovation, Jennifer Granick, surveillance and cybersecurity counsel from the ACLU, said, “AI is powerful and gives us more power. The problem is that where there are problems in the system, it exacerbates those problems and introduces new problems. If you have biased information that goes into the training, biased stuff comes out.”

For Irene Solaiman, the head of global policy for Hugging Face, the question of consent felt particularly personal. She explained how her own voice was mimicked and copied using AI, part of a marketing stunt showcasing technology that could let an AI clone speak foreign languages. “A marketing company trained a voice cloning system on my voice — the irony is not lost on me that they used a clip of me speaking about the importance of consent from data subjects and ethical development,” she said. “I don’t wish this on people. It’s a very weird experience, hearing your voice say things that you didn’t say, and you didn’t consent to say.”

Countering cybersecurity threats around the world

Cyber threats and security have evolved greatly in the last two decades. As technology has rapidly developed, there’s been a constant push for the technical and policy infrastructure to grow to meet the challenges.

Speaking from their unique perspectives as two of Google’s original cybersecurity leaders, Global Affairs President Kent Walker and Heather Adkins, vice president of security engineering, discussed how security and cyber threats have evolved for the tech giant. The pair recounted the discovery of and response to the significant 2009 penetration by Chinese military hackers, known as Operation Aurora, which served as a major wake-up call to Google and eventually led it to largely pull out of the Chinese market.

Walker called Operation Aurora, “a real watershed moment in the history of [Google] and how we thought about security.” But beyond Google, Walker noted, “[Aurora] changed the way governments look at the world. The [U.S.] government has invested more deeply in setting up additional agencies at the federal level and internationally that are focused on cybersecurity threats. … We are working constantly with the U.S. government, but also governments around the world, to try and make sure there’s good exchange of information in mutual ways.”

The ongoing challenge posed by Chinese military hackers came into sharp relief in subsequent conversations, as multiple government officials and private sector leaders discussed the penetration of critical infrastructure by the Chinese group known as Volt Typhoon. Department of Homeland Security Secretary Alejandro Mayorkas highlighted the report, released just days earlier, by DHS’s Cyber Safety Review Board on a recent attack against Microsoft systems by a Chinese state-linked group known as Storm-0558 — a reminder of how persistent such intrusions have remained since the original Operation Aurora attack some 15 years ago.

“We look at the PRC [the People’s Republic of China] as an active and aggressive threat actor in the cyber realm,” Mayorkas told attendees. “We see some sectors exploited with the same level of success as in past years, and it causes me to pose the question whether our approach to cybersecurity is working and at the pace that it needs to in terms of strengthening our cybersecurity — and if not, what do we do about it?”

Mayorkas and other speakers, including Jason Tama, the incoming head of Coast Guard Cyber Command, also discussed recent new steps the Biden administration is taking to strengthen maritime port security and, in particular, concerns about sophisticated Chinese-made cargo cranes that predominate at many U.S. ports. “About 80% or so of our ship-to-shore cranes in the United States are manufactured by ZPMC, which are made in China,” Tama said. “Our cyber protection teams have been doing ongoing work to learn what we can about potential vulnerabilities that they may pose — there are no indicators of a Trojan horse-type of scenario — but the marine transportation system is full of infrastructure and equipment, software, hardware.”

In the same session, David Scott, the FBI’s Washington Field Office special agent in charge of the cyber/criminal division, highlighted how the Justice Department was keeping a close eye on the upcoming fall election. “What we have seen is some targeting of secretaries of state websites and voter registration sites,” he said, adding that nation-state actors have been using access and knowledge that builds on COVID pandemic-era state unemployment fraud. “They like to get those personal identifiers, that personal information, off of those sites and use it,” he said. “But we’re obviously watching any allegations of election fraud, voter fraud, voter suppression, threats to election workers.”

The changing landscape of tech coverage

During a panel on new opportunities and business models for journalism, four entrepreneurial newsroom leaders — 404 Media Co-Founder Joseph Cox, URL Media CEO S. Mitra Kalita, Rest of World Editor-in-Chief Anup Kaphle, and The Markup Editor-in-Chief Sisi Wei — shared their own career and startup journeys and how their news organizations are geared to take on the current industry-wide headwinds facing journalism.

As Cox, who founded 404 Media alongside other former colleagues from Vice’s tech vertical Motherboard, said, “It aims to do something, which shouldn’t be novel, but apparently it is — which is you give us money, and then we give you journalism, that is the foundation of it. Majority of the money comes from subscribers. It is very focused on building a sustainable, responsible vehicle for our journalism. We just want to do investigations. We just want to generate impact. We just want people to read that and get something positive out of it. That’s why we made this company — so we can keep doing that, rather than panicking about when the next layoff is going to be.”

For many of them, the opportunity to start a journalism project of their own came with the hope of pushing the boundaries of who the work serves and helps and what topics get covered. As Kalita said, “The framework of advocacy rubs a lot of people the wrong way when you marry it to journalism when it comes to the communities who we’re working with. It doesn’t rub people the wrong way when we’re at the Wall Street Journal, where I worked for years, and you’re helping people save on, say, capital gains tax. I think as institutions we need to examine when something becomes ‘advocacy’ just because of who we’re trying to help. Yes, it becomes advocacy on behalf of communities — which I have no qualms about saying — because I live in the community, our journalists live in the communities that they’re trying to improve.”

Rebooting cybersecurity coverage for a new era

This year’s Verify conference continued the event’s history of supporting journalists in finding and telling the stories that matter in order to inform policy conversations and keep people safer online. The Hewlett Foundation was the original host of the Verify conference and, over the last decade, supported the building of a more robust and capable cyber policy field. The initiative concluded last year, and throughout the event speakers noted that the energy and funding the foundation brought to the then-nascent field spurred the development of the modern cyber policy community — one well positioned to continue thriving and keeping individuals and societies around the world safer online.
