General Keith Alexander maintained in a 2013 speech that, as director of the National Security Agency (NSA) at the time, he was doing “everything [he] could to protect civil liberties and privacy,” then added a warning: “Everyone also understands that if we give up a capability that is critical to the defense of this nation, people will die.”
Cryptography, security, privacy and surveillance
Cryptography is a set of techniques for encoding a message so that only its intended recipient or recipients can read it. Usually one key is used to encrypt the message and another to decrypt it. These techniques span a wide range of security levels: some can be broken easily – you can manually try all 1,000 combinations of a three-digit padlock – while others are much tougher. Most of the encryption used today relies on computational hardness: even with all of the computing power in the world, trying every possible key to decrypt the message would take too long. This is how computer scientists differentiate good from bad cryptography, and their threshold is high: a technique that a computer can break in less than a couple of decades will typically be deemed bad cryptography.
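The padlock analogy can be made concrete with a toy sketch (this is an illustrative XOR "cipher" invented for the example, not a real algorithm, and all names are hypothetical). With only a 2-byte key there are 65,536 possibilities, so an attacker can simply try them all; real systems use keys of 128 bits or more, which makes the very same attack computationally infeasible.

```python
# Toy illustration (NOT a real cipher): a message "encrypted" by XOR-ing
# it with a repeating 2-byte key. The key space is only 256**2 = 65,536,
# so exhaustive search is trivial -- the attack that a 128-bit key
# (2**128 possibilities) is designed to make computationally infeasible.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key (encrypts and decrypts)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
secret_key = bytes([0x3A, 0xC5])          # a tiny, breakable key
ciphertext = xor_cipher(plaintext, secret_key)

def brute_force(ct: bytes, crib: bytes) -> list[tuple[bytes, bytes]]:
    """Try all 65,536 possible 2-byte keys; keep decryptions containing
    a likely word (the 'crib')."""
    hits = []
    for k0 in range(256):
        for k1 in range(256):
            key = bytes([k0, k1])
            guess = xor_cipher(ct, key)
            if crib in guess:
                hits.append((key, guess))
    return hits

found = brute_force(ciphertext, b"attack")
```

Doubling the key length to just 16 bytes would multiply the search space by 256 to the 14th power, which is the whole point: good cryptography is not unbreakable in principle, only unbreakable in practice.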
The Snowden leaks have marked the return of what have been called the Crypto Wars, the first round of which was fought between the 1950s and the 1990s. After World War II, the United States understood that encryption could severely undermine its intelligence capabilities in the Cold War era, and put in place an export ban on cryptography. This regime allowed intelligence agencies to decide which cryptographic techniques could go public, which could not, and which among those could be exported. Unfortunately for them, American scientists were not the only ones working on cryptography, and once the internet allowed global distribution, the fight became too hard for the United States government to win; it eventually began relaxing the ban on encryption.
Today, the Crypto Wars are back, fought this time between the government’s intelligence agencies and the internet giants. They shed new light on the tension between security and privacy that Gen. Alexander emphasized. Intelligence agencies pit the expectation of privacy, championed by the internet giants on behalf of their users, against the need for national security, which has become a major element of public debate in the United States and around the Western world since 9/11, the Patriot Act, and the fear of terrorism.
The right to privacy is relatively recent in the United States, arguably born in December 1890 with a famous Warren and Brandeis article in the Harvard Law Review. In fact, some say that privacy is a social construct and that the expectation of privacy, of “being let alone” as Warren and Brandeis put it, has emerged in the past century throughout Western societies. This increased privacy, obtained in the past 20 years in part through a democratization of cryptography, pushes us towards lowered security standards — and, according to Gen. Alexander’s view, human lives are in the balance.
The internet giants, trust in which was damaged by the Snowden leaks because they could not adequately protect their users’ data, have consistently argued for stronger cryptography and have started taking steps to prevent any intrusion into their data, whether by governments or by other state and non-state actors. Google has been encrypting communications between its datacenters; WhatsApp has been encrypting communications from end user to end user; and, more recently, Apple announced that it was closing a long-standing flaw in its operating system: iOS 8 encrypts devices by default, and Apple retains no master key.
On the other side, intelligence agencies, engaged in unprecedented data-guzzling in the wake of 9/11 and the Patriot Act, warn of what they call “Going Dark”: the idea that stronger encryption standards will decrease their ability to protect the public from terrorist attacks. Gen. Alexander also argues that if the US is no longer able to collect the “haystack” of data, then finding the “needle” will become impossible. Those intercepts, he maintains, are lawful, and as Orin Kerr puts it: “How is the public interest served by a policy [stronger cryptography] that only thwarts lawful search warrants?”
There does indeed appear to be a trade-off between security and privacy, and the cryptography available to the public is one of the biggest determinants of that trade-off. Cryptography, however, is not the problem here: cheap, better cryptography available to everyone has had, and continues to have, a positive net impact. We should seek a healthy global public debate on this security-privacy trade-off, letting the people decide instead of non-representative bodies.
The positive and negative impacts of cryptography
A seminal 1996 report of the National Research Council, Cryptography’s Role in Securing the Information Society, stated, “On balance, the advantages of more widespread cryptography outweigh the disadvantages.” However, bad cryptography has terrible consequences. Letting intelligence agencies have it their way – introducing vulnerabilities and backdoors into existing, otherwise good cryptographic tools – is a disaster waiting to happen, both in terms of cyber theft and in terms of policing.
Good cryptography not only protects potential terrorists who want to avoid surveillance; it also protects you and me in our day-to-day lives. When we buy something on the internet, when we log into our bank or Google accounts, or, more generally, when we want to prove our identity online, we rely on very good cryptography to make sure that personal information does not get lost in the process. Cryptography creates trust, and trust is the foundation of commercial and non-commercial relationships alike. On a more intellectual level, some have even argued that the constitutional right to free speech includes the right to develop one’s ideas in private, without fear of our thoughts becoming public before we want them to. This “intellectual privacy,” as Neil Richards defined it in 2008, would be a prerequisite for the formation of free speech. Even without going so far as to claim that the right to encrypt is constitutional, it seems straightforward that everyone should have the right to a space where he or she has no fear of being monitored.
Cryptography also protects intellectual property and the government’s confidential communications. Estimates of the losses due to such acts of cyber theft range from $250 billion to an astronomical $1 trillion, though the origin of those figures remains very unclear, as ProPublica explained in 2012. Susan Landau, a mathematician, engineer and privacy expert, explains that in addition to its Signals Intelligence (SIGINT) program, the NSA also has a Communications Security (COMSEC) program, which aims to secure the communications of private companies. The rationale is simple: it would have cost less to secure Sony’s information system than to deal with the consequences of such a massive hack.
Despite such positive endeavors, in recent years intelligence agencies have been advocating for backdoors: “They can promise strong encryption. They just need to figure out how they can provide us plain text,” said the General Counsel of the FBI, Valerie E. Caproni, speaking of the internet giants. This is exactly what a backdoor is: a way for the government to access the plain text of all encrypted data. Whether we call them front doors or secure golden keys does not matter: the bottom line is that there is no difference from a technical point of view. As Julian Sanchez writes, “It is damn near impossible to create a security vulnerability that can only be exploited by the good guys.” To put it differently: good cryptography is good for everyone. Bad cryptography is very bad for you and very good for the bad guys.
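To see why a “golden key” is, technically, just another key, here is a minimal key-escrow sketch (a toy XOR operation stands in for a real cipher, and every name in it is hypothetical). Each message gets a fresh session key, but a copy of that session key is deposited, wrapped under a single escrow key; whoever holds the escrow key, lawful agency or thief who steals it, decrypts every message in exactly the same way.

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy XOR operation standing in for a real cipher -- NOT secure."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ESCROW_KEY = os.urandom(16)  # the single "golden key" the agency would hold

def encrypt_with_escrow(message: bytes) -> tuple[bytes, bytes]:
    """Encrypt under a fresh session key, but also deposit a copy of that
    session key wrapped under the escrow key -- this wrapped copy IS the
    backdoor."""
    session_key = os.urandom(16)
    ciphertext = xor_bytes(message, session_key)
    wrapped_key = xor_bytes(session_key, ESCROW_KEY)
    return ciphertext, wrapped_key

def escrow_decrypt(ciphertext: bytes, wrapped_key: bytes,
                   escrow_key: bytes) -> bytes:
    """Anyone holding escrow_key -- agency or attacker -- recovers the
    plaintext. Nothing in the math distinguishes the two."""
    session_key = xor_bytes(wrapped_key, escrow_key)
    return xor_bytes(ciphertext, session_key)

ct, wk = encrypt_with_escrow(b"meet me at noon")
stolen_key = ESCROW_KEY  # an attacker who exfiltrates the escrow key
```

The sketch makes Sanchez’s point concrete: the decryption routine takes a key, not a motive, so a “good guys only” golden key is a contradiction in terms.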
We also know that cyberspace fundamentally gives the advantage to attackers over those trying to keep systems secure. Defending a system requires the ability to withstand every type of cyber attack, while an attacker only has to find one vulnerability to break in. This is why no one can be 100 percent safe from Advanced Persistent Threats (APTs): capable hackers who really want to get into your system will eventually succeed. As the joke goes, “There are two types of companies: those who know they’ve been hacked, and those who don’t know they’ve been hacked.” Under these conditions, letting anyone retain the master key to a built-in backdoor is nothing short of a disaster waiting to happen: the bad guys will eventually find it and use it against you.
But backdoors are a problem for yet another reason: they clash with the end-to-end argument that is at the very core of the architecture of the internet. The network should be as simple and agnostic as possible regarding the communications it supports; more advanced functionality should be developed at the end nodes (computers, mobiles, wearable devices). This, researchers argue, allows the network “to support new and unanticipated applications,” and the end-to-end argument has ignited unprecedented levels of innovation. The backdoors that intelligence agencies are promoting would apply to our communications system as a whole, not only to the end nodes from which we send our messages. This violates the end-to-end argument and undermines trust in the internet as a communications system. Such backdoors would undermine the generative internet as we know it, reducing every user’s capacity to innovate and to disseminate the products of innovation to billions of people in a secure and sustainable way.
Now if the intelligence agencies have it their way and win the Second Crypto Wars, we are headed towards a post-Golden Age of Surveillance, with unprecedented levels of eavesdropping (your TV, seriously?) and governments keeping records on everyone “just in case.” Beyond the scary resemblance such a situation would bear to Orwell’s 1984 or Huxley’s Brave New World, such easy access to our data records would keep police forces from maintaining the investigative skills that they used to have. As Jonathan Zdziarski bluntly puts it, “I’m all for getting some of the fat [cops] who’ve spent too much time behind a desk back on the treadmill and out in the field.” Indeed, we want the police to have investigative skills so that they can catch the bad guys even when those bad guys stay off the wire entirely.
Balancing security and privacy is difficult, yet we have to understand how to do it
We cannot expect law enforcement and our legal systems to find the right balance between security and privacy on their own. We can be critical of Gen. Alexander’s point of view, but let’s not blame him for doing everything that he can to save human lives. In fact, in his position, the moral thing to do might be to develop more eavesdropping capabilities. As long as politicians fear not doing enough to provide security for their citizens whatever the cost in terms of privacy, the current balance is very unlikely to change. We must start sending the message that we care for our privacy as well as for our security.
The truth is that we, the people, don’t know what we want, because it is difficult to understand the trade-offs. We need to fund research programs investigating what “security” really means for a scientist, for a policymaker and for a social scientist. We need an independent international institution able to assess the security of a given piece of software or system, and to frame the debates on the balance between privacy and security. We need the media to keep us informed. And we need a public debate to answer the following question: what risks, including in terms of human lives, are we willing to incur to keep the civil rights, such as the right to privacy, that we gained in the past century?
Some consider privacy a social construct of the last century. In fact, it is not our privacy but our security that is unprecedented in history: when before have we been able to walk so many roads without fear of being attacked? The past shows us that liberty does indeed cost human lives. The question we have to answer is how much we are willing to risk. That decision has to be made by the public at large, and different countries will yield different answers. Then will come the hard part: implementing different national trade-offs without balkanizing an inherently global internet platform.
Hugo Zylberberg (Twitter: @hugozylb) is a Master in Public Policy candidate at Harvard’s Kennedy School of Government. He is concentrating his studies on the political layer of internet architecture, power in cyberspace, and cybersecurity. Hugo helped found The Future Society to bring awareness of long-term technological problems to his fellow students and equip future policy makers to make better decisions, using technology instead of being used by technologists.
photo credit: flickr