CHAPTER 7

Security and Risk

The task of security has evolved rapidly in an interconnected age. Where previously police and private forces had to protect physical assets with fences, locks, and other tangible efforts, now both threats and assets can be ephemeral and distributed. Networks of networks introduce redundancy (as in a power grid, where the local generating plant no longer constitutes a single point of failure), but they also introduce unprecedented levels of complexity. That complexity underlies all considerations of security, which has moved from obvious efforts to protect things and people from harm (in the ways just mentioned) to become a maze of cost-benefit-risk considerations. Those calculations are complicated by humans' completely predictable inability to assess risk rationally.


Considered as a sociotechnical system of people and technologies interacting in both directions, the discipline of security must be conducted very differently as compared to local efforts of a constabulary or parking lot guard. Thus, our focus here is on the managerial imperatives rather than on the techniques of perimeter protection, intrusion detection, firewall selection and configuration, password resets, and other activities that often constitute the focus of the discipline. In short, mastering the domains of costs (hard and soft), benefits, and risks requires new skills, new metrics, and new attitudes compared to the practice of physical security conducted in local settings.

Landscape

The Internet is increasingly mobile, connecting billions of devices both stationary and in motion and “users” both animate and electronic, for purposes ranging from deep-space exploration to commercial exploitation of humanity's basest desires. Given such a broad span, it presents ample opportunities for people to find trouble. Put simply, humanity has never attempted to manage anything so big, so rapidly evolving, so distributed, or so complicated. A few numbers only hint at the size of the challenge; the scale is nearly impossible for humans to comprehend, which is one of the key issues in dealing with security and risk:

  • Both Google and Bing estimate 1 trillion Web pages as of mid-2010.
  • Out of a global population of more than 7 billion people, roughly 2 billion people are online.
  • In the United States alone, mobile devices generate 600 billion geotagged messages—each day—accurate to within 10 meters if you're on a Wi-Fi connection.1
  • Half of Facebook's more than 500 million users log in daily; by itself, the site accounts for one-quarter of all U.S. page views and a third of all online ads.
  • Cisco estimates that Internet Protocol (IP) traffic will quadruple between 2009 and 2014.

For anything so sprawling and fast-moving, conventional understandings clearly fail; seeing firewalls as being “like fences,” for example, constitutes a cognitive trap. The scale of bad things occurring in information space is similarly difficult to apprehend:

  • Symantec, a digital security vendor, observed 14.6 trillion spam messages in the third quarter of 2010, which is approximately 91% of all e-mail traffic. Spam increased 100,000% between 1997 and 2004, according to the IEEE.
  • Personal records for 26 million U.S. military veterans were compromised when a single laptop computer went missing in 2006.
  • Heartland Payment Systems, a credit card processor, reported a data breach of roughly 130 million records in 2009.
  • As of late 2011, more than 2,600 reported U.S. data breaches had exposed more than 500 million records, according to privacyrights.org.
  • The Conficker worm alone has infected an estimated 12 million PCs since 2008.
  • As of 2010, one thousand credit card numbers could be bought on underground services for $300, or only 30 cents per card.
  • The Kroxxu botnet infected more than 100,000 Web domains in the year after its 2009 launch, spreading through Web sites rather than attacking personal computers as previous botnets had done.

Information Space Is Neither Average nor Normal

As discussed in more detail in Chapter 6, information spaces present prime examples of fat-tailed distributions: A few population members (Google and Facebook, Harry Potter books, Avatar and Pirates of the Caribbean) are disproportionately huge, while the curve rapidly descends into the famous long tail of onesies and twosies. Thinking of this world in terms of the familiar bell curve is a mistake: The “average” Web site or information good is a contradiction in terms. If a Harry Potter volume sells 5 million copies and a routine academic study of medieval France sells 20, talking about 2.5 million as the average of the two makes no sense whatsoever.

The infrastructure needed to manage an Amazon or a Yahoo! reflects this extremity. Data center buildings run in the hundreds of thousands of square feet and draw power feeds estimated in excess of 100 megawatts. (For comparison, aluminum smelters use between 150 and 450 megawatts.) On the output side of the equation, Google served about 3 billion searches per day in late 2009, according to data compiled by the market research firm comScore; that's roughly 34,000 per second.2

In such a world, threats to information are not random or average. In a power-law scenario, one example (a Harry Potter or Warren Buffett, in wealth) can alter the entire landscape; in a bell curve assumption, however, large sample sizes guarantee curve smoothing: No one instance of human height or focus-group preference can reshape the landscape. In other words, Bill Gates can be 10 billion times richer than a random Kenyan, but nobody can eat 10 billion times more cherry Pop-Tarts than another customer. Nobody can stand even 1 order of magnitude taller than her neighbor.
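
A short simulation makes this asymmetry concrete. The sketch below is purely illustrative: the distributions and parameters are assumptions chosen to mimic a thin-tailed quantity (height) and a fat-tailed one (wealth), not data from this chapter.

import random

random.seed(42)

# Thin-tailed sample: 10,000 adult heights in centimeters.
heights = [random.gauss(170, 10) for _ in range(10_000)]

# Fat-tailed sample: 10,000 net-worth figures from a Pareto distribution.
wealth = [random.paretovariate(1.16) * 10_000 for _ in range(10_000)]

def mean(xs):
    return sum(xs) / len(xs)

# Add one extreme member to each sample and compare the averages.
print(f"height mean: {mean(heights):.1f} cm -> "
      f"{mean(heights + [272]):.1f} cm after adding the tallest person on record")
print(f"wealth mean: ${mean(wealth):,.0f} -> "
      f"${mean(wealth + [5e10]):,.0f} after adding one multibillionaire")

The single outlier barely nudges the height average but multiplies the wealth average many times over, which is exactly why bell curve intuitions mislead in information space.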

This potential for extremity has significant implications for risk management: Very, very bad things can happen in hypernetworked environments. Whether in regard to the spread of rumors or malware, the speed and scale of today's networks drive risk skyward. For example, in 2003 the Slammer worm (technologically simple compared to the current generation of malware) infected 75,000 machines in 10 minutes.
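
The speed follows from simple exponential doubling, as the back-of-the-envelope sketch below shows. The 8.5-second doubling time is the figure often cited for Slammer's early phase, but treat it here as an illustrative assumption; the model also ignores the bandwidth saturation that slowed the real worm after its first minutes.

# Back-of-the-envelope model of exponential worm spread.
DOUBLING_TIME_S = 8.5   # assumed seconds per doubling in the early phase
TARGET_HOSTS = 75_000   # the figure cited for Slammer

infected, elapsed = 1, 0.0
while infected < TARGET_HOSTS:
    infected *= 2           # each infected host finds roughly one more victim
    elapsed += DOUBLING_TIME_S

print(f"{TARGET_HOSTS:,} hosts reached after ~{elapsed/60:.1f} minutes of unchecked doubling")
# About 17 doublings suffice: "minutes, not days" is the right mental model.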

As Nassim Nicholas Taleb noted in The Black Swan, bell curve distributions use averaging across many samples within a finite range to generate certainty.3 In information and risk space, one instance outside the presumed norm (a BP oil spill, a Hurricane Katrina, a Heartland data breach) can alter the entire landscape. Given extreme interconnection, two consequences emerge: (1) The Internet allows enormous populations (sometimes audiences) to be assembled, and (2) changes can spread across populations extremely rapidly. Both of these realities change fundamental facets of security practice as compared to previous eras.

People Systematically Misestimate Risk

Here's a simple experiment. The following list of hazards to Americans' health is in alphabetical order, but ranking it from riskiest to least risky reveals extreme differences in probability: There are no close calls. Even when a group does the ranking, with some averaging of opinion and pooling of knowledge (a lifeguard knows about sharks, while the daughter of a lung cancer survivor may know about that disease), there are invariably big misses: Perception, fear, and reality do not align. Try ordering these from the most deaths per year to the fewest:

  • Airplane accidents
  • Cancer
  • Dog attacks
  • Lightning
  • Motor vehicle accidents
  • Murder
  • Residential fires
  • Sharks

Invariably, individuals' fears, phobias, and recent experiences color perception of something as intrinsically attention-getting as accidental death. Infrequent events are routinely confused with one another, and people also commonly fail to recognize the deadliest items on the list: Number 1 outranks number 2 by more than an order of magnitude (the figures are deaths per year; the short calculation after the list makes the gaps explicit), yet precautions against cancer are far from ubiquitous:

  1. Cancer: 550,000
  2. Motor vehicle accidents: 42,000
  3. Murder: 16,000
  4. Residential fires: 3,500
  5. Airplane accidents: 600
  6. Lightning: 90
  7. Dog attacks: 20
  8. Sharks: <1
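
The promised calculation uses only the figures above; nothing else is assumed (sharks, listed as fewer than one death per year, are set to 1 so the ratios stay finite).

# Rank the chapter's annual U.S. death figures and show the gap to the next item.
deaths_per_year = {
    "Cancer": 550_000,
    "Motor vehicle accidents": 42_000,
    "Murder": 16_000,
    "Residential fires": 3_500,
    "Airplane accidents": 600,
    "Lightning": 90,
    "Dog attacks": 20,
    "Sharks": 1,
}

ranked = sorted(deaths_per_year.items(), key=lambda kv: kv[1], reverse=True)
for (name, n), (_, n_next) in zip(ranked, ranked[1:]):
    print(f"{name:<25}{n:>9,}   {n / n_next:5.1f}x the next hazard down")

Cancer versus motor vehicles alone is roughly a 13-fold gap, and cancer versus sharks spans nearly six orders of magnitude.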

Why does this confusion about danger matter? Security does not simply involve keeping bad people from doing bad things to me or my organization. Instead, particularly in virtual settings involving often-intangible assets, security is a matter of priority setting, risk-reward trade-offs, and other managerial assessments. If people cannot rationally assess even the risk of dying, it takes considerable self-awareness, careful fact finding, and professional judgment to make good decisions on behalf of others about less intuitive risks.

As we can see at any U.S. airport, security decisions typically are made by people away from the front lines—as well they should be, provided the senior decision makers are adequately informed. At the same time, security policies can and often do reflect agendas far removed from actually keeping assets or people safer: The political uses of the Department of Homeland Security's color-coded threat levels in the 2004 election stand as an obvious example. The combination of multiple priorities and human fallibility about risk, however, means that a lot of time, money, and effort can be expended with little measurable impact on security or risk mitigation.

Instead, what security guru Bruce Schneier has called “security theater” often presents visual and dramatic elements that manipulate public perception with little impact on real threats.4 A few examples should suffice:

  • In the 1950s and 1960s, schoolchildren practiced ducking under desks in the event of a nuclear attack.
  • After the 9/11 attacks, National Guardsmen patrolled public places carrying automatic weapons. It was never revealed whether all of the weapons were actually loaded, given the danger posed by a nervous, semitrained civilian with such a powerful weapon in a crowded scenario.
  • Nail clippers were long banned from aircraft even though any of the soda cans routinely emptied during the flight could be turned into something far more lethal with no tools whatsoever.

In short, “security theater” is a predictable outcome of the normal decision-making process, reflecting the political dimension of organizational behavior rather than a sensible response to an actual threat.

Doing It Right

Many people have written extensively and well on the topic of effective security, not least of all Schneier. Three points bear consideration:

  1. Security involves people. People are both irrationally afraid of things that pose little risk (sharks) and casual with devices that can be quite dangerous if used incorrectly: Cars, text messaging, and USB drives each can serve as an example. At the same time, as Cormac Herley of Microsoft Research has shown, users of information systems behave rationally given their personal position on a risk-reward continuum: What systems administrators understand as part of a totality, users often see as hassles or roadblocks to be avoided.5 He notes that password resets, close inspection of Web addresses for phishing threats, and checking digital authentication certificates all take too much time relative to the slim likelihood that the practice will benefit the individual (the sketch at the end of this point restates the argument as simple arithmetic). In short, private incentives must be managed in the pursuit of organizational objectives.

    Effective security thus requires that people be motivated, so behavioral economics, with its emphasis on reward structures and actual actions rather than fictional economic creatures, becomes highly relevant. Logic was not enough to make hospital doctors and other personnel wash their hands, for example, even though the benefits were obvious and dramatic. Similarly, more sophisticated designs for enterprise security will balance rewards and punishments in original and clever ways rather than simply having administrators dictating official procedure and expecting (or demanding) compliance.
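
    As promised, Herley's argument reduces to expected-value arithmetic. The sketch below uses invented numbers purely for illustration; none of the figures come from his paper.

# Compare the annual time cost of following a piece of security advice
# with the annual expected loss it prevents. All inputs are assumptions.
def advice_cost_and_benefit(daily_seconds, annual_attack_prob,
                            loss_if_hit, advice_effectiveness,
                            hourly_wage=25.0):
    time_cost = daily_seconds / 3600 * 365 * hourly_wage
    expected_benefit = annual_attack_prob * loss_if_hit * advice_effectiveness
    return time_cost, expected_benefit

# Example: scrutinizing every URL for phishing signs, ~90 seconds a day,
# against perhaps a 1% annual chance of a $500 loss that the habit halves.
cost, benefit = advice_cost_and_benefit(daily_seconds=90,
                                        annual_attack_prob=0.01,
                                        loss_if_hit=500,
                                        advice_effectiveness=0.5)
print(f"annual cost of following the advice: ${cost:.2f}")
print(f"annual expected loss avoided:        ${benefit:.2f}")
# Roughly $228 of time to avoid about $2.50 of expected loss: skipping the
# advice is the rational individual choice, whatever the organization prefers.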

  2. Security involves systems. Much like usability, to which security is obviously related, security is too seldom seen as a system or, more typically in a connected world, a system of systems. Designing systems (network security) is much more difficult than designing products (firewalls). Policies and procedures span organizational boundaries, become brittle with time, and must interact in the pursuit of various purposes. (The same employee ID that gets you past the security guard announces your name and perhaps other information to potential intruders who take careful notes while you go out for lunch wearing the badge.)

    Getting systems to be usable, evolving, robust against multiple types of threat, and affordable is extremely difficult. Because systems transcend organizations, and because security is effective only when nothing happens, budgeting against risk is difficult. Who pays, who benefits, and who is inconvenienced frequently misalign. Interfaces between systems are particularly hard to get right, not least because organizational authority must be managed across various gaps. Parking lots are problematic for this reason: Building or store security and the door locks on the automobile are both effective, but at the interface, attackers exploit various weaknesses that fall between organizational mandates.

    Designing Usable Systems

    Why is it so hard to get usability right? As Don Norman, one of the heroic figures in modern usability studies, puts it, complex products are not merely things; they provide services: “[A]lthough a camera is thought of as a product, its real value is the service it offers to its owner: Cameras provide memories. Similarly, music players provide a service: the enjoyment of listening.”6 In this light, the product must be considered as part of a system that supports experience, and systems thinking is hard, complicated, and difficult to accomplish in functionally siloed organizations.

    The ubiquitous iPod makes his point perfectly.

    The iPod is a story of systems thinking, so let me repeat the essence for emphasis. It is not about the iPod; it is about the system. Apple was the first company to license music for downloading. It provides a simple, easy to understand pricing scheme. It has a first-class website that is not only easy to use but fun as well. The purchase, downloading the song to the computer and thence to the iPod are all handled well and effortlessly. And the iPod is indeed well designed, well thought out, a pleasure to look at, to touch and hold, and to use. Then there is the Digital Rights Management system, invisible to the user, but that both satisfies legal issues and locks the customer into lifelong servitude to Apple (this part of the system is undergoing debate and change). There is also the huge number of third-party add-ons that help increase the power and pleasure of the unit while bringing a very large, high-margin income to Apple for licensing and royalties. Finally, the “Genius Bar” of experts offering service advice freely to Apple customers who visit the Apple stores transforms the usual unpleasant service experience into a pleasant exploration and learning experience. There are other excellent music players. No one seems to understand the systems thinking that has made Apple so successful.

    One of the designers of the iPod interface, Paul Mercer of Pixo, affirms that systems thinking shaped the design process: “The iPod is very simple-minded, in terms of at least what the device does. It's very smooth in what it does, but the screen is low-resolution, and it really doesn't do much other than let you navigate your music. That tells you two things. It tells you first that the simplification that went into the design was very well thought through, and second that the capability to build it is not commoditized.”7 Thus, more complex management and design vision are prerequisites for user simplification.

    Because it requires systems thinking and complex organizational behavior to achieve, usability is often last on the list of design criteria, behind such considerations as manufacturability or modular assembly, materials costs, packaging, skill levels of the factory employees, and so on. The hall of shame for usability issues is far longer than the list of successes. For every garage door opener, LEGO brick, or Amazon Kindle, there are multiple BMW iDrives, Windows ribbons, European faucets, or inconsistent anesthesia machines: Doctors on a machine from company A turned the upper right knob clockwise to increase the flow rate but had to go counterclockwise on company B's machine in the next operating room over. Fortunately, the industry has standardized the control interface, with a resulting decline in human endangerment.8

  3. Security involves trade-offs. Here we return to the crux of why risk management is too often both ineffective and overly expensive. While many security measures involve technical expertise, sometimes expensive and/or extensive, the managerial process of counterbalancing goals, objectives, resources, costs, and consequences can be mightily complex. The technical skills involved in perimeter protection, fraud detection, or antishoplifting measures can be esoteric, to be sure, but effective security is not a technical battle; it's a management problem. The trade-offs have much less to do with the hardware elements of the relevant systems than with the power relationships and competing agendas of the people involved.

    Bruce Schneier gets the last word here. He proposes a simple five-step rubric for assessing a security solution that can expose some of these agendas to scrutiny and reasoned discussion:

    1. What assets are you trying to protect? This question is often less obvious than it may appear.
    2. What are the risks to those assets?
    3. How well does the security solution mitigate those risks?
    4. What other risks does the security solution cause? Unintended consequences proliferate in these situations: Bank vaults did not need to be blown open when kidnapping the manager's spouse was an option; time locks were the countermeasure to the countermeasure.
    5. What costs and trade-offs does the security solution impose?9

    Everything important is addressed in this process: A $5,000 door lock to protect $200 worth of property would be exposed, as would soft costs, such as inconvenience or false positives. Too often the features and functionality of the door lock or other technology become the focal point rather than being weighed rationally alongside the other four facets of the proposed solution; the sketch below puts rough numbers on that weighing.
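
    A crude version of that weighing can be written down explicitly. The sketch below maps each of Schneier's five questions to a number and compares annualized cost against expected loss avoided; every figure is an assumption chosen to mirror the door-lock example.

# Annualized comparison of a security measure against the loss it prevents,
# structured around Schneier's five questions. All inputs are illustrative.
from dataclasses import dataclass

@dataclass
class SecurityProposal:
    asset_value: float        # Q1: what are we protecting?
    annual_loss_prob: float   # Q2: how likely is the asset to be lost each year?
    risk_reduction: float     # Q3: what fraction of that risk does the measure remove?
    new_risk_cost: float      # Q4: expected annual cost of risks the measure introduces
    annual_cost: float        # Q5: direct and soft costs per year (money, time, hassle)

    def net_benefit(self) -> float:
        avoided_loss = self.asset_value * self.annual_loss_prob * self.risk_reduction
        return avoided_loss - self.new_risk_cost - self.annual_cost

# A $5,000 lock, amortized to $1,000 a year, guarding $200 of property.
lock = SecurityProposal(asset_value=200, annual_loss_prob=0.10,
                        risk_reduction=0.9, new_risk_cost=0.0,
                        annual_cost=1_000)
print(f"net benefit: ${lock.net_benefit():,.2f} per year")  # deeply negative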

Looking Ahead

Unfortunately, few security measures are introduced in this considered fashion, so we continue to live with unnecessary vulnerabilities, excessive expense, and intrusive or obnoxious measures that impose heavy costs on users and bystanders. Given the nature of both today's threats and today's institutions, the situation is unlikely to improve dramatically any time soon.

Notes

1. Jeff Jonas, “Your Movements Speak for Themselves: Space-Time Travel Data Is Analytic Super-Food!” August 16, 2009, http://jeffjonas.typepad.com/jeff_jonas/2009/08/your-movements-speak-for-themselves-spacetime-traveldata-is-analytic-superfood.html.

2. comScore, “comScore Reports Global Search Market Growth of 46 Percent in 2009,” press release, January 22, 2010, www.comscore.com/Press_Events/Press_Releases/2010/1/Global_Search_Market_Grows_46_Percent_in_2009.

3. Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), pp. 229 ff.

4. Bruce Schneier, Beyond Fear: Thinking Sensibly about Security in an Uncertain World (New York: Copernicus, 2003), p. 38.

5. Cormac Herley, “So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users,” New Security Paradigms Workshop, April 20, 2009, http://research.microsoft.com/apps/pubs/?id=80436.

6. Don Norman, “Systems Thinking: A Product Is More than the Product,” Interactions 16, no. 5 (2009), http://jnd.org/dn.mss/systems_thinking_a_product_is_more_than_the_product.html.

7. Mercer quoted in Bill Moggridge, Designing Interactions (Cambridge, MA: MIT Press, 2007), p. 315.

8. See Atul Gawande, Complications: A Surgeon's Notes on an Imperfect Science (New York: Macmillan, 2003).

9. Schneier, Beyond Fear, p. 14.
