Cracking the Crypto War

upnorth (Moderator, Thread author)

In his home office outside Boston, Ray Ozzie.

Ray Ozzie thinks he has an approach for accessing encrypted devices that attains the impossible: It satisfies both law enforcement and privacy purists.

ON DECEMBER 2, 2015, a man named Syed Rizwan Farook and his wife, Tashfeen Malik, opened fire on employees of the Department of Public Health in San Bernardino, California, killing 14 people and injuring 22 during what was supposed to be a staff meeting and holiday celebration. The shooters were tracked down and killed later in the day, and FBI agents wasted no time trying to understand the motivations of Farook and to get the fullest possible sense of his contacts and his network. But there was a problem: Farook’s iPhone 5c was protected by Apple’s default encryption system. Even when served with a warrant, Apple did not have the ability to extract the information from its own product. The government filed a court order, demanding, essentially, that Apple create a new version of the operating system that would enable it to unlock that single iPhone. Apple defended itself, with CEO Tim Cook framing the request as a threat to individual liberty. “We have a responsibility to help you protect your data and protect your privacy,” he said in a press conference. Then-FBI chief James Comey reportedly warned that Cook’s attitude could cost lives. “I just don’t want to get to a day where people look at us with tears in their eyes and say, ‘My daughter is missing and you have her cell phone—what do you mean you can’t tell me who she was texting before she disappeared?’ ” The controversy over Farook’s iPhone reignited a debate that was known in the 1990s as the Crypto Wars, when the government feared the world was “going dark” and tried—and ultimately failed—to impede the adoption of technologies that could encode people’s information. Only this time, with supercomputers in everybody’s pockets and the endless war on terror, the stakes were higher than ever.

THIS PAST JANUARY, Ray Ozzie took a train from his home in Massachusetts to New York City for a meeting in a conference room of the Data Science Institute at Columbia University. The 14th-floor aerie was ringed by wide windows and looked out on a clear but chilly day. About 15 people sat around the conference table, most of them middle-aged academics—people from the law school, scholars in government policy, and computer scientists, including cryptographers and security specialists—nibbling on a light lunch while waiting for Ozzie’s presentation to begin.
Jeannette Wing—the host of the meeting and a former corporate VP of Microsoft Research who now heads the Data Science Institute—introduced Ozzie to the group. In the invitation to this “private, informal session,” she’d referenced his background, albeit briefly. Ozzie was once chief technical officer at Microsoft as well as its chief software architect, posts he had assumed after leaving IBM, where he’d gone to work after the company had purchased a product he created, Lotus Notes. Packed in that sentence was the stuff of legend: Notes was a groundbreaking product that rocketed businesses into internet-style communications when the internet was barely a thing. The only other person who ever held the chief software architect post at Microsoft was Bill Gates.

He had come to Columbia with a proposal to address the impasse over exceptional access, and the host invited the group to “critique it in a constructive way.” Ozzie, trim and vigorous at 62, acknowledged off the bat that he was dealing with a polarizing issue. The cryptographic and civil liberties community argued that solving the problem was virtually impossible, which “kind of bothers me,” he said. “In engineering if you think hard enough, you can come up with a solution.” He believed he had one.

It works this way: The vendor—say it’s Apple in this case, but it could be Google or any other tech company—starts by generating a pair of complementary keys. One, called the vendor’s “public key,” is stored in every iPhone and iPad. The other vendor key is its “private key.” That one is stored with Apple, protected with the same maniacal care that Apple uses to protect the secret keys that certify its operating system updates. These safety measures typically involve a tamper-proof machine (known as an HSM or hardware security module) that lives in a vault in a specially protected building under biometric lock and smartcard key. That public and private key pair can be used to encrypt and decrypt a secret PIN that each user’s device automatically generates upon activation. Think of it as an extra password to unlock the device. This secret PIN is stored on the device, and it’s protected by encrypting it with the vendor’s public key. Once this is done, no one can decode it and use the PIN to unlock the phone except the vendor, using that highly protected private key.
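
To make those key relationships concrete, here is a minimal sketch of the provisioning step in Python, using the third-party cryptography package. It is only an illustration of the idea as described above; the names (provision_device, vendor_public_key) and parameter choices are assumptions for the sketch, not anything taken from Ozzie's actual design.

```python
import secrets

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# RSA-OAEP padding used for wrapping the PIN in this toy model.
OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# The vendor (say, Apple) generates one long-lived key pair. The private half
# stays in the vault/HSM; the public half ships inside every device.
vendor_private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
vendor_public_key = vendor_private_key.public_key()


def provision_device(vendor_public_key):
    """On activation, the device invents a random secret PIN and keeps only its
    ciphertext, encrypted to the vendor's public key."""
    secret_pin = secrets.token_hex(16)  # the "extra password" for this one device
    wrapped_pin = vendor_public_key.encrypt(secret_pin.encode(), OAEP)
    return secret_pin, wrapped_pin      # wrapped_pin can sit behind the lock screen
```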

So, say the FBI needs the contents of an iPhone. First the Feds have to actually get the device and the proper court authorization to access the information it contains—Ozzie’s system does not allow the authorities to remotely snatch information. With the phone in its possession, they could then access, through the lock screen, the encrypted PIN and send it to Apple. Armed with that information, Apple would send highly trusted employees into the vault where they could use the private key to unlock the PIN. Apple could then send that no-longer-secret PIN back to the government, which can use it to unlock the device. Ozzie designed other features meant to reassure skeptics. Clear, as he calls the system, works on only one device at a time: Obtaining one phone’s PIN would not give the authorities the means to crack anyone else’s phone. Also, when a phone is unlocked with Clear, a special chip inside the phone blows itself up, freezing the contents of the phone thereafter. This prevents any tampering with the contents of the phone. Clear can’t be used for ongoing surveillance, Ozzie told the Columbia group, because once it is employed, the phone would no longer be able to be used.
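
The retrieval side can be sketched the same way, continuing the toy model above. Again, this is purely illustrative: the vault decrypts a single wrapped PIN, and the device object enforces the one-time, self-freezing unlock the article describes.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)


def vault_unwrap_pin(vendor_private_key, wrapped_pin):
    """Runs only inside the vendor's vault/HSM, for one warrant-backed device."""
    return vendor_private_key.decrypt(wrapped_pin, OAEP).decode()


class Device:
    """Toy device: unlocking with the recovered PIN trips an irreversible flag,
    standing in for the chip that 'blows itself up' and freezes the phone."""

    def __init__(self, secret_pin, wrapped_pin):
        self._secret_pin = secret_pin   # never leaves the device in the clear
        self.wrapped_pin = wrapped_pin  # readable from the lock screen
        self.frozen = False

    def exceptional_unlock(self, recovered_pin):
        if self.frozen:
            raise RuntimeError("one-time recovery mode already used")
        if recovered_pin != self._secret_pin:
            raise ValueError("wrong PIN")
        self.frozen = True              # contents readable once; phone unusable afterward
        return "<decrypted contents>"


# Flow: investigators read wrapped_pin from the seized phone, send it to the vendor,
# the vault returns vault_unwrap_pin(...), and that PIN unlocks this one phone only.
```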

At the end of the meeting, Ozzie felt he’d gotten some good feedback. He might not have changed anyone’s position, but he also knew that unlocking minds can be harder than unlocking an encrypted iPhone. Still, he’d taken another baby step in what is now a two-years-and-counting quest. By focusing on the engineering problem, he’d started to change the debate about how best to balance privacy and law enforcement access. “I do not want us to hide behind a technological smoke screen,” he said that day at Columbia. “Let’s debate it. Don’t hide the fact that it might be possible.”


The first, and most famous, exceptional-access scheme was codenamed Nirvana. Its creator was an NSA assistant deputy director named Clinton Brooks, who realized in the late 1980s that newly discovered advances in cryptography could be a disaster for law enforcement and intelligence agencies. After initial despair, Brooks came up with an idea that he envisioned would protect people’s privacy while preserving government’s ability to get vital information. It involved generating a set of encryption keys, unique to each device, that would be held by government in heavily protected escrow. Only with legal warrants could the keys be retrieved and then used to decode encrypted data. Everyone would get what they wanted. Thus … Nirvana.

When the plan surfaced in 1993 as the Clipper Chip, a government-designed encryption chip that would keep a copy of each device's key in escrow, all hell broke loose as technologists and civil libertarians warned of an Orwellian future in which the government possessed a backdoor to all our information. Suddenly the obscure field of cryptography became a hot button. (I still have a T-shirt with the rallying cry “Don’t Give Big Brother a Master Key.”) And very good questions were raised: How could tech companies sell their wares overseas if foreign customers knew the US could get into their stuff? Wouldn’t actual criminals use other alternatives to encrypt data? Would Clipper Chip technology, moving at government speed, hobble the fast-moving tech world? Ultimately, Clipper’s death came not from policy, but science. A young Bell Labs cryptographer named Matt Blaze discovered a fatal vulnerability, undoubtedly an artifact of the system’s rushed implementation. Blaze’s hack made the front page of The New York Times. The fiasco tainted all subsequent attempts at installing government backdoors, and by 1999, most government efforts to regulate cryptography had been abandoned, with barely a murmur from the FBI or the NSA.

For the next dozen or so years, there seemed to be a Pax Cryptographa. You seldom heard the government complain about not having enough access to people’s personal information. But that was in large part because the government already had a frightening abundance of access, a fact made clear in 2013 by Edward Snowden. When the NSA contractor revealed the extent of his employer’s surveillance capabilities, people were shocked at the breadth of its activities. Massive snooping programs were sweeping up our “metadata”—who we talk to, where we go—while court orders allowed investigators to scour what we stored in the cloud. The revelations were also a visceral blow to the leaders of the big tech companies, who discovered that their customers’ data had essentially been plundered at the source. They vowed to protect that data more assiduously, this time regarding the US government as one of their attackers. Their solution: encryption that even the companies themselves could not decode. The best example was the iPhone, which encrypted users’ data by default with iOS 8 in 2014.

Law enforcement officials, most notably Comey of the FBI, grew alarmed that these heightened encryption schemes would create a safe haven for crooks and terrorists. He directed his staff to look at the potential dangers of increasing encryption and began giving speeches that called for that blast from the past, lingering like a nasty chord from ’90s grunge: exceptional access. The response from the cryptographic community was swift and simple: Can’t. Be. Done. In a landmark 2015 paper called “Keys Under Doormats,” a group of 15 cryptographers and computer security experts argued that, while law enforcement has reasons to argue for access to encrypted data, “a careful scientific analysis of the likely impact of such demands must distinguish what might be desirable from what is technically possible.” Their analysis claimed that there was no foreseeable way to do this. If the government tried to implement exceptional access, they wrote, it would “open doors through which criminals and malicious nation-states can attack the very individuals law enforcement seeks to defend.”

The 1990s Crypto Wars were back on, and Ray Ozzie didn’t like what he was hearing. The debate was becoming increasingly politicized. Experts in cryptography, he says, “were starting to pat themselves on the back, taking extreme positions about truisms that weren’t so obvious to me.” He knew that great achievements of cryptography had come from brilliant scientists using encryption protocols to perform a kind of magic: sharing secrets between two people who had never met, or creating digital currency that can’t be duplicated for the purposes of fraud. Could a secure system of exceptional access be so much harder? So Ozzie set out to crack the problem. He had the time to do it. He’d recently sold a company he founded in 2012, Talko, to Microsoft. And he was, to quote a friend, “post-economic,” having made enough money to free him from financial concerns. Working out of his home north of Boston, he began to fool around with some ideas. About two weeks later, he came up with Clear.

The “Keys Under Doormats” gang has raised some good criticisms of Clear, and for the record, they resent Ozzie’s implication that their minds are closed. “The answer is always, show me a proposal that doesn’t harm security,” says Dan Boneh, a celebrated cryptographer who teaches at Stanford. “How do we balance that against the legitimate need of security to unlock phones? I wish I could tell you.” One of the most salient objections goes to the heart of Ozzie’s claim that his system doesn’t really increase risk to a user’s privacy, because manufacturers like Apple already employ intricate protocols to protect the keys that verify their operating system updates. Ozzie’s detractors reject the equivalence. “The exceptional access key is different from the signing key,” says Susan Landau, a computer scientist who was also a coauthor of the “Doormat” paper. “A signing key is used rarely, but the exceptional access key will be used a lot.” The implication is that setting up a system to protect the PINs of billions of phones, and process thousands of requests from law enforcement, will inevitably have huge gaps in security. Ozzie says this really isn’t a problem. Invoking his experience as a top executive at major tech firms, he says that they already have frameworks that can securely handle keys at scale. Apple, for example, uses a key system so that thousands of developers can be verified as genuine—the iOS ecosystem couldn’t work otherwise.

Ozzie has fewer answers to address criticisms about how his system—or any that uses exceptional access—would work internationally. Would every country, even those with authoritarian governments, be able to compel Apple or Google to cough up the key to unlock the contents of any device within its jurisdiction? Ozzie concedes that’s a legitimate concern, and it’s part of the larger ongoing debate about how we regulate the flow of information and intellectual property across borders. He is also the first to point out that he doesn’t have all the answers about exceptional access, and he isn’t trying to create a full legal and technological framework. He is merely trying to prove that something could work. Maybe that’s where Ozzie’s plan plunges into the choppiest waters. Proving something is nigh impossible in the world of crypto and security. Time and again, supposedly impervious systems, created by the most brilliant cryptographers and security specialists, get undermined by clever attackers, and sometimes just idiots who stumble on unforeseen weaknesses. “Security is not perfect,” says Matthew Green, a cryptographer at Johns Hopkins. “We’re really bad at it.”
 

upnorth (Moderator, Thread author)
One thing that comes to mind, and isn't mentioned in the article: if authorities anywhere in the world confiscate a phone, use Clear on it, and the investigation ends with no evidence found and the case dropped, who is going to pay for the destroyed phone? Phones nowadays, especially the high-end ones, are very expensive, and ordinary people can't just snap their fingers and buy a new one the way someone as wealthy as Ray Ozzie can. That's one consequence I hope is considered, but I doubt it will be, because as the article says of Ozzie: "He is merely trying to prove that something could work." Sure, I agree it could work, but there will be consequences, not all of them positive, and this is one. If that is ignored and not even discussed, it says everything about this project IMO, and I strongly hope it's rejected, especially here in the European Union with tools like the GDPR or anything similar.

The technique behind Clear itself I like a lot, and Ray Ozzie is certainly a very intelligent person, but even intelligent people sometimes create monsters with the best of intentions, as history shows us. Transparency is a keyword here IMO: don't get stuck only on the technical details and then leave everyone else to deal with the consequences, as if it isn't the creator's fault how it ends up being used. That excuse is far too common, and IMO it's poor and very naive. Narrow-minded is another word for it.

The article does not fully lay out every pro and con discussed by Ray Ozzie and the other experts, but I hope those are shared with us common people as well. It's a story that IMO is well worth following.
 

upnorth (Moderator, Thread author)
Bruce Schneier's reply:
Last month, Wired published a long article about Ray Ozzie and his supposed new scheme for adding a backdoor in encrypted devices. It's a weird article. It paints Ozzie's proposal as something that "attains the impossible" and "satisfies both law enforcement and privacy purists," when (1) it's barely a proposal, and (2) it's essentially the same key escrow scheme we've been hearing about for decades. Basically, each device has a unique public/private key pair and a secure processor. The public key goes into the processor and the device, and is used to encrypt whatever user key encrypts the data. The private key is stored in a secure database, available to law enforcement on demand. The only other trick is that for law enforcement to use that key, they have to put the device in some sort of irreversible recovery mode, which means it can never be used again. That's basically it.

I have no idea why anyone is talking as if this were anything new. Several cryptographers have already explained why this key escrow scheme is no better than any other key escrow scheme. The short answer is (1) we won't be able to secure that database of backdoor keys, (2) we don't know how to build the secure coprocessor the scheme requires, and (3) it solves none of the policy problems around the whole system. This is the typical mistake non-cryptographers make when they approach this problem: they think that the hard part is the cryptography to create the backdoor. That's actually the easy part. The hard part is ensuring that it's only used by the good guys, and there's nothing in Ozzie's proposal that addresses any of that.

I worry that this kind of thing is damaging in the long run. There should be some rule that any backdoor or key escrow proposal be a fully specified proposal, not just some cryptography and hand-waving notions about how it will be used in practice. And before it is analyzed and debated, it should have to satisfy some sort of basic security analysis. Otherwise, we'll be swatting pseudo-proposals like this one, while those on the other side of this debate become increasingly convinced that it's possible to design one of these things securely.

Already people are using the National Academies report on backdoors for law enforcement as evidence that engineers are developing workable and secure backdoors. Writing in Lawfare, Alan Z. Rozenshtein claims that the report -- and a related New York Times story -- "undermine the argument that secure third-party access systems are so implausible that it's not even worth trying to develop them." Susan Landau effectively corrects this misconception, but the damage is done.

Here's the thing: it's not hard to design and build a backdoor. What's hard is building the systems -- both technical and procedural -- around them.

He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors, we just don't know how to secure them.

Ray Ozzie's Encryption Backdoor - Schneier on Security
 
