Microsoft President Brad Smith also touched on the complexities of norm-building. He explained why the company is backing the Paris Call for Trust and Security in Cyberspace and, acknowledging the role the private sector plays in cybersecurity, stressed the importance of a multinational, multi-stakeholder approach. Smith also discussed the hard revenue choices technology companies face as emerging technologies evolve. If certain technologies, such as facial recognition, were made available to authoritarian governments, basic human rights could be affected, including “all rights of people to assemble to express their points of view.” He added that “the only way to prevent a race to the bottom [by companies] is to have a regulatory floor.”
Technology companies, policymakers, and the public need to grapple with the tradeoffs among privacy, safety, and security. Stamos, the former Facebook executive, gave a presentation on the challenge platforms face in balancing the different values held by their users and deciding which to optimize for. “You cannot both say that platforms are responsible for the content on it and knowing who their users are, and then also [say that they] need to provide perfect privacy,” Stamos said. “You can’t moderate content unless you see it, and you can’t find bad guys unless you’re collecting data about them.” Europe’s General Data Protection Regulation (GDPR) and WhatsApp’s end-to-end encryption are both privacy-enhancing, but each carries costs in security and safety, Stamos said. Experts at the event noted that society’s competing demands for privacy and transparency are hard to satisfy. Optimizing for the highest levels of privacy, they observed, has made it difficult for independent researchers to obtain the social media data needed to evaluate the impact of social media and digital disinformation on elections.