Chapter 18. Doing Security Properly

“You are ready, children, for everything that will have to be done. You have not come to your full maturity and power, of course; that stage will come only with time. It is best for you, however, that we leave you now. Your race is potentially vastly stronger and abler than ours. We reached some time ago the highest point attainable to us: we could no longer adapt ourselves to the ever-increasing complexity of life. You, a young new race amply equipped for any emergency within reckonable time, will be able to do so. In capability and in equipment you begin where we leave off.”

Mentor of Arisia in Children of the Lens
—E. E. “DOC” SMITH

18.1 Obsolescence

If you’ve read this far, some of the preceding sections may already be obsolete, and the book as a whole may be moving towards obsolescence. There’s no choice. Not only is high-tech a very dynamic field; the threat model changes, too. Part of the latter is due to technical change—new devices, new services, and so on will continue to appear for the foreseeable future—but part is due to changes in who the attackers are and what they want. It’s hard to imagine a more serious threat than a major government, but what those governments are interested in can and will vary. Nevertheless, the primary purpose of this book is to teach how to think about change; in that sense, its merits should, I hope, outlast the specific facts cited.

All that said, some changes are more likely than others and deserve specific mention. That’s not to say that specific ideas will pan out—if I knew for sure what would be hot in the future, I could make a lot more money consulting for venture capitalists than I can as a professor—but rather that certain broad technical trends seem all but inevitable. Hardware, for example, will almost certainly continue to get smaller and cheaper; even if there are no amazing new gadgets, we almost certainly have a fair number of years before Moore’s Law is repealed. Similarly, it seems all but certain that five or ten years from now we will all be using services that haven’t been invented yet. I’m writing this in 2015, when the iPhone is only eight years old, and Twitter and Facebook are nine and ten, respectively. It is hard to remember when they didn’t exist, but they’re that young. Nevertheless, they’ve changed the face of computing and, with that, the threat model. If you doubt the effect of the latter two, consider how much easier Facebook makes it for an attacker to answer “security” questions. Twitter, of course, has brought down a Congressman [Barrett 2011] and arguably entire regimes [Saletan 2011]. Anything that powerful will attract the interest of governments and their militaries.

Threats change, too, though that evolution is driven as much by economics and politics as by technology. Stuxnet was developed not just because it was possible, but rather because some highly skilled, motivated adversaries wanted to damage an Iranian nuclear centrifuge plant. Similarly, Shamoon, which attacked Saudi oil company computers [Goodin 2012c; Leyden 2012], was quite likely a response to cyberattacks on Iran, rather than the outcome of a new technological development. Commercially motivated attacks are by definition motivated by money—but where the money is to be found is changing. There are already reports of sophisticated hacks to steal Bitcoins [Greenberg 2014; Litke and Stewart 2014].

Prophecy is difficult; we cannot say with any confidence what will happen next. What we can talk about are possible new characteristics that will cause trouble in the future.

18.2 New Devices

Few bets are more certain than that hardware will continue to improve for the next several years. While there may be unpleasant surprises, similar to the heat death of the megahertz race, it seems clear that substantial progress will continue. Perhaps significantly, disk capacity has improved even more than CPU price/performance. There are several conclusions we can draw from this.

First, cheaper and smaller computers are deployable in more places. Furthermore, these CPUs will almost certainly be networked, which poses some obvious security issues. Although some of the risks are low—illicit access to the chip in a toothbrush or bathroom scale raises at most minor privacy concerns—anything that is tied directly or indirectly to an actuator is more worrisome.

Small, specialized computers are also harder to manage directly. They won’t have traditional input or output devices; indeed, even learning the MAC address will often be challenging. Sometimes, of course, these devices are easily firewalled. Digital toothbrushes, for example, will almost certainly have to be placed into a docking station, if only to recharge their batteries; this docking station can manage access control policies. Other devices, though, will need broader connectivity, without an obvious chokepoint, per the discussion of the Internet of Things (Section 17.4).
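As a sketch of what such a chokepoint might enforce, consider a default-deny forwarding policy: the docking station passes a device’s traffic only to an explicit list of approved peers. This is a minimal illustration in Python; the device names and destinations are hypothetical, and a real implementation would sit in the network stack rather than in application code.

```python
# A minimal default-deny forwarding policy, of the sort a docking
# station or home gateway might enforce for an embedded device.
# Device names and destinations are hypothetical.
ALLOWED_PEERS = {
    "toothbrush-01": {"vendor-update.example.com", "home-hub.local"},
}

def may_forward(device: str, destination: str) -> bool:
    # Default deny: unknown devices and unlisted peers are both refused.
    return destination in ALLOWED_PEERS.get(device, set())

assert may_forward("toothbrush-01", "home-hub.local")
assert not may_forward("toothbrush-01", "exfil.example.net")
```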

It would be nice, of course, if the programmers of such devices took proper care for security. This means not just basic access control—the need for that is generally understood by now—but also proper care in programming and some way to tie the widget into a larger policy framework. We can’t even count on encryption being used, let alone used properly [Barcena, Wueest, and Lau 2014]. Experience suggests that this will rarely be the case, which in turn means that security people are not likely to be out of a job any time soon.

There is an important corollary here: we don’t get to design the network protocols used by these new computers. This in turn means that when trying to devise security mechanisms for them, we have to take the protocols as they are, warts and all. That shouldn’t be viewed as a bug; it’s simply the likely reality.

The future, then, will be even more challenging for security. We can expect orders of magnitude more computers to protect; many of these will be more difficult to handle than today’s. The primary challenge will be understanding who is supposed to talk to whom, and how.

18.3 New Threats

Predicting new threats is hard. It’s not so much the concept that’s difficult to imagine as the context. Cyberespionage isn’t new; arguably, it existed more than 25 years ago [Stoll 1988; Stoll 1989]. The modern incarnation, though, became possible because the desired information moved online: you can’t hack into a typewritten page. It was economics that changed the situation: the productivity advantages of creating and storing industrial and defense information on networked computers were overwhelming; to refrain would have made no sense. Nevertheless, the move has had consequences.

It is important to remember that conceptually, most “new” threats aren’t new; rather, they become real or they become real at scale. Software tinkering with bank accounts, for example, was described almost 50 years ago in a science fiction book, albeit via a sentient computer [Heinlein 1966]. Cybersabotage was described by Reed [2004]. Deliberately destructive viruses were imagined by Gerrold [1972] and as a weapon of war in the myth of the so-called “Iraqi printer virus” [G. Smith 2003]. None of these are quite how it’s done today, but the basic concepts are old.

It is clear that we can think of all sorts of futuristic threats, ones that may never come to pass. Consider, for example, self-driving cars [R. Wood 2012]. It’s easy to imagine nightmare scenarios from hacked automobiles, especially when you realize how insecure today’s car networks are [Koscher et al. 2010]. However, attacks (and especially serious attacks) don’t happen simply because they’re possible; rather, they happen because someone somehow gains something from the attack. Recall the definition of a threat cited in Chapter 3: “an adversary that is motivated and capable of exploiting a vulnerability.” You need all three to have a problem: the vulnerability, the capability, and the motivation. Looking at this through a purely technological lens makes you focus on the first two, but the third is equally critical.

Predicting new threats, then, requires three steps. First, be aware of new services and gadgets. (Yes, it is indeed a job requirement that you acquire and play with lots of new toys. You have my permission to tell that to your boss.) Second, follow the security literature (including, of course, blog posts and newsletters) to learn about the new attacks and holes, and how easy or difficult they are to exploit. (The press often overhypes new holes.) Finally, pay attention to the news to see who might benefit from some new attack. Remember to factor in both the skill level required as well as whether the attack makes sense in terms of the possible perpetrator’s goals.

18.4 New Defenses

The ultimate goal of security research, of course, is to find some strong, new defenses, ones that resist attacks old and new. Most likely, some fundamentally new design principle will be needed. As noted many times in this book, most security problems are due to buggy code. It is hard to imagine what a defense against that might look like, given that every other panacea proposed over the last several decades has failed.

Wulf and Jones [2009] noted that the security field has not had any really new ideas in quite a long time. They’re quite right. Most of our systems are based on what I call the “walls and doors” principle: a strong wall between security contexts, and a door, an opening in the wall for selected requests. We’re pretty good (though not perfect) at building walls, that is, at separating contexts. Doors, though, are problematic. They’re not supposed to pass just any request; rather, they should do so only in accordance with a policy. Unfortunately, both specifying and implementing suitable policies are difficult.

Consider a simple web-mediated database search for a person’s name. The security policy sounds simple: accept a name, and nothing else. However (and as memorably explained by xkcd; see Figure 2.1), it seems to be very hard to get that right. Admittedly, handling names properly is hard [McKenzie 2010], but there is no excuse for SQL injection attacks today. Nevertheless, they happen, and with distressing frequency.
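To make the failure concrete, here is a minimal sketch using Python’s standard sqlite3 module (the table and column names are hypothetical). The unsafe version splices the name into the SQL text, so a crafted “name” changes the query itself; the safe version uses the driver’s placeholder mechanism, so the name is passed purely as data.

```python
import sqlite3

def find_person_unsafe(conn: sqlite3.Connection, name: str):
    # DANGEROUS: the name becomes part of the SQL text. A "name" of
    #   ' OR '1'='1
    # turns the query into one that returns every row in the table.
    query = "SELECT * FROM people WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_person_safe(conn: sqlite3.Connection, name: str):
    # Safe: the ? placeholder tells the driver to treat the value
    # purely as data; it is never parsed as SQL.
    return conn.execute(
        "SELECT * FROM people WHERE name = ?", (name,)
    ).fetchall()
```

The point is the one made above about doors: the safe API enforces the policy “accept a name, and nothing else” mechanically, instead of trusting each programmer to get the quoting right.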

It is likely, of course, that at some point technology will render SQL attacks a minor concern. Somewhat greater programmer awareness coupled with new APIs will make it easier to do the right thing than the wrong one. These attacks will thus decrease in importance, just as has happened with buffer overflows, what with the development of improved training and better tools (e.g., Address Space Layout Randomization [ASLR] [Shacham et al. 2004] and stack canaries [Cowan et al. 2003]). What’s next? There have been many different kinds of attacks over the years. What is significant is how many of them are against door wardens, the programs charged with enforcing a safe policy. In the 1990s, mailers were a popular target, because they had to pass information from one security context to another. These days, attacks involving active content, notably JavaScript and Java, are legion. (Microsoft says that about 75% of exploit kits in 2013 targeted Java, and about another 10% went after Flash [Batchelder et al. 2013].)

To complicate things, there are generally no defenses within the walls. This could be considered a matter of definition; alternatively, one could envision an architecture with interior walls, walls of perhaps lesser strength but still with well-guarded doors. This would help with the “brittleness” problem [Bellovin 2006a]: our defenses shatter under attack, so that one security bug can result in the complete penetration of a system and the compromise of everything in it.

How could such a resilient system be built? I outlined one scheme for protecting e-commerce sites in Chapter 11, involving encrypted database records. This isn’t a complete solution—e-commerce sites have many more databases than just those holding customer records, and there are many other types of vulnerable systems—but the approach is illustrative.
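As a rough illustration of the idea (a minimal sketch of my own, not the full Chapter 11 design, and one that assumes the third-party Python cryptography package), keep each record encrypted with a key that is held by a separate front-end component and never stored alongside the data. An attacker who compromises only the database gets ciphertext; to get plaintext, the inner wall, the key holder, must fall, too.

```python
# A minimal sketch of walls within walls, assuming the third-party
# "cryptography" package. The key lives only in the front end; the
# database (here, a dict standing in for the real thing) sees only
# ciphertext.
from cryptography.fernet import Fernet

class RecordVault:
    def __init__(self, key: bytes):
        self._fernet = Fernet(key)  # held by the front end only
        self._db = {}               # stand-in for the real database

    def store(self, customer_id: str, record: bytes) -> None:
        # Only ciphertext ever reaches the database.
        self._db[customer_id] = self._fernet.encrypt(record)

    def fetch(self, customer_id: str) -> bytes:
        return self._fernet.decrypt(self._db[customer_id])

vault = RecordVault(Fernet.generate_key())
vault.store("cust-42", b"name=...; card=...")
print(vault.fetch("cust-42"))
```

Note the design choice: an injection hole in the database layer now yields ciphertext rather than customer data, though the key-holding component becomes the new high-value target.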

There are certainly other possibilities. One might use a cryptographic scheme—fully homomorphic encryption [Gentry 2010]? functional encryption [Boneh, Sahai, and Waters 2012]?—though care must be taken to avoid simply changing the policy question from “What may pass through the door?” to “Who can have access to the keys?” If the underlying logic is the same, the bugs are likely to be the same as well.

It seems clear, though, that relying on walls and doors will not succeed. More precisely, it has not succeeded despite decades of effort; we need a new paradigm. Naturally, I hope that the principles explained in this book will help us adapt to such a paradigm if and when it arises.

18.5 Thinking about Privacy

This book is about computer security, not privacy. Nevertheless, a few comments about privacy are in order.

First, privacy is increasingly important. Governments around the world are imposing more and more stringent requirements. Companies that have failed to meet them, even inadvertently, have been sanctioned.1 Consumers care more and more.

1. See, for example, http://www.ftc.gov/opa/2010/06/twitter.shtm, which describes a settlement between the US Federal Trade Commission and Twitter.

As it turns out, and despite some important differences, many of the design principles are the same. If nothing else, a system that can’t protect data confidentiality can’t protect user privacy; anyone who hacks the system can get at all of the data.

More importantly, just as proper security behavior changes with technology and attackers, so, too, do threats to privacy. Once, fairly simple schemes sufficed to anonymize data; today, deanonymization techniques are sufficiently advanced (see, e.g., [Narayanan and Shmatikov 2008]) that they have drawn legal attention [Ohm 2010]. The advances in technology have been accompanied by changes in the threat model: more people, especially advertisers, are willing to go to great lengths to track consumers [Valentino-DeVries and Singer-Vine 2012].
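A toy example shows why simple schemes fail. Suppose an “anonymized” data set keeps quasi-identifiers such as ZIP code, birth date, and sex; it can then be joined against any public record that carries those fields plus a name. (The data below is fabricated purely for illustration; the deanonymization results cited above use considerably more sophisticated techniques.)

```python
# A toy linkage attack: join an "anonymized" table to a public roll
# on shared quasi-identifiers. All data is fabricated.
anonymized = [
    {"zip": "10027", "dob": "1961-07-28", "sex": "F", "diagnosis": "..."},
]
public_roll = [
    {"name": "Jane Doe", "zip": "10027", "dob": "1961-07-28", "sex": "F"},
]

QUASI_IDS = ("zip", "dob", "sex")

def link(anon, public):
    # Index public records by their quasi-identifier tuple, then
    # look each "anonymous" record up in that index.
    index = {tuple(p[q] for q in QUASI_IDS): p["name"] for p in public}
    for rec in anon:
        key = tuple(rec[q] for q in QUASI_IDS)
        if key in index:
            yield index[key], rec["diagnosis"]

for name, diagnosis in link(anonymized, public_roll):
    print(name, "->", diagnosis)  # the "anonymous" record now has a name
```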

This does not by itself call into question such principles as “privacy by design” [Cavoukian 2009]. It does, however, mean that particular design choices must either be based on firm mathematical foundations or explicitly evaluated against a given state of technology and threat. For example, Chapter 2 of [Cavoukian 2009] advocates “biometric encryption”: converting a biometric to a key to protect personal data. This is fine in principle; however, as [Ballard, Kamara, and Reiter 2008] points out, it’s hard to evaluate the security of proposed constructs; indeed, several proposed schemes have later fallen to attacks. This doesn’t mean that a privacy design based on biometric encryption is a bad idea; it does mean that the privacy guarantees are not absolute and will undoubtedly grow weaker over time.

18.6 Putting It All Together

If there is one single principle underlying this book, it is that security designs can only be evaluated against a particular point in time. Given that, and given the rate of change of technology, it is vital that we learn to ask, “What next?”

This is a practice that has largely been ignored by the security community, with the notable exception of (good) cryptographers. I have yet to see a system analysis that makes explicit what the time- or threat-bound assumptions are. Nevertheless (and as should be very clear by now), these assumptions underlie the conclusions—and some day, the assumptions will be wrong and the security illusory.

One practice that is, fortunately, becoming more common is doing new security reviews for major revisions of a product. That’s good, but all too often these reviews don’t go far enough. Quite naturally, they focus on the new features, the new components, and the new interfaces; rightly so, since anything new might have a wide-open door with a sign saying, “Welcome, Hackers!” But such reviews only rarely go back and look at the previous review, and if they did, they’d likely search in vain for explicit upper bounds on the assessed security. Reviews, like cartons of milk, should come with expiration dates.

Large projects carry their own challenges. A major, enterprise-wide software deployment can take many person-years to design, develop, and deploy; see, for example, [R. Stross 2012] and [Israel 2012] for discussions of two failed megaprojects. It is sobering to compare those timescales with how short a time smart phones have been around. Agility is crucial.

I’ve occasionally muttered about a magic wand that I could wave that would fix all of today’s security problems. It would have to be a very big wand, of course, carrying a very potent spell. The image, though, is wrong. The spells would have to be strengthened and recast continuously, and the wand kept continually in motion. There are and can be no final answers to security because the problem keeps changing. All we can do is to keep studying, keep improving our systems—and keep waving that wand.

The old man leaned forward again. “Go, Tony! I throw the torch to you. Your place is the place I occupied. Lead my people. Fight! Live! Become glorious!”

After Worlds Collide
—PHILIP WYLIE AND EDWIN BALMER
