The task of security has evolved rapidly in an interconnected age. Where previously police and private forces had to protect physical assets with fences, locks, and other tangible efforts, now both threats and assets can be ephemeral and distributed. Networks of networks introduce redundancy (as in a power grid, where the local generating plant no longer constitutes a single point of failure), but they also introduce unprecedented levels of complexity. That complexity underlies all considerations of security, which has moved from obvious efforts to protect things and people from harm (in the ways just mentioned) to become a maze of cost-benefit-risk considerations. Those calculations are complicated by humans' completely predictable inability to assess risk rationally.
Considered as a sociotechnical system of people and technologies interacting in both directions, the discipline of security must be conducted very differently from the local efforts of a constabulary or parking lot guard. Thus, our focus here is on the managerial imperatives rather than on the techniques of perimeter protection, intrusion detection, firewall selection and configuration, password resets, and other activities that often constitute the focus of the discipline. In short, mastering the domains of costs (hard and soft), benefits, and risks requires new skills, new metrics, and new attitudes compared to the practice of physical security conducted in local settings.
The Internet is increasingly mobile, connecting billions of devices both stationary and in motion and “users” both animate and electronic, for purposes ranging from deep-space exploration to commercial exploitation of humanity's basest desires. Given such a broad span, it presents ample opportunities for people to find trouble. Put simply, humanity has never attempted to manage anything so big, so rapidly evolving, so distributed, or so complicated. A few numbers only hint at the size of the challenge; the scale is nearly impossible for humans to comprehend, which is one of the key issues in dealing with security and risk:
For anything so sprawling and fast-moving, conventional understandings clearly fail; seeing firewalls as being “like fences,” for example, constitutes a cognitive trap. The scale of bad things occurring in information space is similarly difficult to apprehend:
As discussed in more detail in Chapter 6, information spaces present prime examples of fat-tailed distributions: A few population members, for example, are disproportionately huge (Google and Facebook, Harry Potter books, Avatar and Pirates of the Caribbean), while the curve of the distribution rapidly descends into the famous long tail of onesies and twosies. Thinking of this world in terms of the familiar bell curve is impossible: The “average” Web site or information good is a contradiction in terms. If a Harry Potter volume sells 5 million copies and a routine academic study of medieval France sells 20 copies, calling 2.5 million the average of the two makes no sense whatsoever.
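The arithmetic behind this point is easy to demonstrate. The sketch below uses hypothetical sales figures, one blockbuster and a long tail of "onesies and twosies," to show why the mean of a fat-tailed population describes nothing real while the median at least points at a typical member:

```python
# Hypothetical sales figures for a fat-tailed market: one blockbuster
# followed by a long tail. All numbers are illustrative assumptions.
import statistics

sales = [5_000_000, 1_200, 300, 150, 80, 40, 20, 20, 10, 5]

mean = statistics.mean(sales)     # dominated entirely by the blockbuster
median = statistics.median(sales) # a far more representative "typical" title

print(f"mean   = {mean:,.1f}")    # 500,182.5
print(f"median = {median:,.1f}")  # 60.0
```

The mean of roughly 500,000 copies describes neither the blockbuster nor any title in the tail; a bell-curve mindset would treat it as meaningful.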
The infrastructure needed to manage an Amazon or a Yahoo! reflects this extremity. Data center buildings run in the hundreds of thousands of square feet, drawing estimated power feeds in excess of 100 megawatts. (For comparison, aluminum smelters use between 150 and 450 megawatts.) On the output side of the equation, Google served about 3 billion searches per day in late 2009, according to data compiled by market research firm comScore; that's 34,000 per second.2
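The per-second figure follows directly from the daily total; a quick back-of-envelope check:

```python
# Back-of-envelope check of the search figure cited in the text.
searches_per_day = 3_000_000_000
seconds_per_day = 24 * 60 * 60           # 86,400
per_second = searches_per_day / seconds_per_day

print(round(per_second))                 # 34722, consistent with "34,000 per second"
```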
In such a world, threats to information are not random or average. In a power-law scenario, one example (a Harry Potter or Warren Buffett, in wealth) can alter the entire landscape; in a bell curve assumption, however, large sample sizes guarantee curve smoothing: No one instance of human height or focus-group preference can reshape the landscape. In other words, Bill Gates can be 10 billion times richer than a random Kenyan, but nobody can eat 10 billion times more cherry Pop-Tarts than another customer. Nobody can stand even 1 order of magnitude taller than her neighbor.
This potential for extremity has significant implications for risk management: Very, very bad things can happen in hypernetworked environments. Whether in regard to the spread of rumors or malware, the speed and scale of today's networks drive risk skyward. For example, in 2003 the Slammer worm (technologically simple compared to the current generation of malware) infected 75,000 machines in 10 minutes.
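A simple exponential model conveys why such speed is possible. The doubling time below is an assumption drawn from published analyses of the Slammer outbreak, not a figure from this text; the model also ignores saturation of the vulnerable population, which is why the observed infection window (about 10 minutes) is longer than the idealized lower bound the model produces:

```python
import math

# Idealized exponential-spread sketch. The ~8.5-second doubling time is an
# assumption taken from public post-mortems of Slammer; real-world spread
# slows as the vulnerable address space saturates.
doubling_time_s = 8.5
infected_target = 75_000

doublings = math.log2(infected_target)       # ~16.2 doublings from one host
lower_bound_s = doublings * doubling_time_s

print(f"{lower_bound_s / 60:.1f} minutes")   # roughly 2.3 minutes
```

Even with generous allowance for real-world friction, a pathogen that doubles every few seconds leaves defenders essentially no human-speed reaction time.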
As Nassim Nicholas Taleb noted in The Black Swan, bell curve distributions use averaging across many samples within a finite range to generate certainty.3 In information and risk space, one instance outside the presumed norm (a BP oil spill, a Hurricane Katrina, a Heartland data breach) can alter the entire landscape. Given extreme interconnection, two consequences emerge: (1) The Internet allows enormous populations (sometimes audiences) to be assembled, and (2) changes can spread across populations extremely rapidly. Both of these realities change fundamental facets of security practice as compared to previous eras.
Here's a simple experiment. The following list of hazards to Americans' health is alphabetical, but ranking them from riskiest to least risky reveals extreme differences in probability: these are not close calls. Even when asking a group, where there is some averaging of opinion and pooling of knowledge (a lifeguard knows about sharks, while the daughter of a lung cancer survivor may know about that disease), there are invariably big misses: perception, fear, and reality do not align. Try listing these from the most deaths per year to the least:
Invariably, individuals' fears, phobias, and recent experiences color perception of something as intrinsically attention-getting as accidental death. While the rankings of infrequent events are typically confused with one another, it is also common for people not to recognize the deadliest phenomena on the list: note that number 1 outranks number 2 by well over an order of magnitude (the numbers refer to deaths per year), yet precautions against cancer are not ubiquitous:
Why does this confusion about danger matter? Security does not simply involve keeping bad people from doing bad things to me or my organization. Instead, particularly in virtual settings involving often-intangible assets, security is a matter of priority setting, risk–reward trade-offs, and other managerial assessments. If people cannot rationally assess even the risk of dying, it takes considerable self-awareness, careful fact finding, and professional judgment to make good decisions about less intuitive risks on behalf of other people.
As we can see at any U.S. airport, security decisions typically are made by people away from the front lines—as well they should be, provided the senior decision makers are adequately informed. At the same time, security policies can and often do reflect agendas far removed from actually keeping assets or people safer: The political uses of the Department of Homeland Security's color-coded threat levels in the 2004 election stand as an obvious example. The combination of multiple priorities and human logical fallibility relative to risk, however, means that a lot of time, money, and effort can be expended with little measurable impact on security or risk mitigation.
Instead, what security guru Bruce Schneier has called “security theater” often presents visual and dramatic elements that manipulate public perception with little impact on real threats.4 A few examples should suffice:
In short, “security theater” is a predictable outcome of the normal decision-making process, reflecting the political dimension of organizational behavior rather than a sensible response to an actual threat.
Many people have written extensively and well on the topic of effective security, not least of all Schneier. Three points bear consideration:
Effective security thus requires that people be motivated, so behavioral economics, with its emphasis on reward structures and actual actions rather than fictional economic creatures, becomes highly relevant. Logic was not enough to make hospital doctors and other personnel wash their hands, for example, even though the benefits were obvious and dramatic. Similarly, more sophisticated designs for enterprise security will balance rewards and punishments in original and clever ways rather than simply having administrators dictating official procedure and expecting (or demanding) compliance.
Getting systems to be usable, evolving, robust against multiple types of threat, and affordable is extremely difficult. Because systems transcend organizations, and because security is effective only when nothing happens, budgeting against risk is difficult. Who pays, who benefits, and who is inconvenienced frequently misalign. Interfaces between systems are particularly hard to get right, not least because organizational authority must be managed across various gaps. Parking lots are problematic for this reason: Building or store security and the door locks on the automobile are both effective, but at the interface, attackers exploit various weaknesses that fall between organizational mandates.
Why is it so hard to get usability right? As Don Norman, one of the heroic figures in modern usability studies, puts it, complex products are not merely things; they provide services: “[A]lthough a camera is thought of as a product, its real value is the service it offers to its owner: Cameras provide memories. Similarly, music players provide a service: the enjoyment of listening.”6 In this light, the product must be considered as part of a system that supports experience, and systems thinking is hard, complicated, and difficult to accomplish in functionally siloed organizations.
The ubiquitous iPod makes his point perfectly.
The iPod is a story of systems thinking, so let me repeat the essence for emphasis. It is not about the iPod; it is about the system. Apple was the first company to license music for downloading. It provides a simple, easy to understand pricing scheme. It has a first-class website that is not only easy to use but fun as well. The purchase, downloading the song to the computer and thence to the iPod are all handled well and effortlessly. And the iPod is indeed well designed, well thought out, a pleasure to look at, to touch and hold, and to use. Then there is the Digital Rights Management system, invisible to the user, but that both satisfies legal issues and locks the customer into lifelong servitude to Apple (this part of the system is undergoing debate and change). There is also the huge number of third-party add-ons that help increase the power and pleasure of the unit while bringing a very large, high-margin income to Apple for licensing and royalties. Finally, the “Genius Bar” of experts offering service advice freely to Apple customers who visit the Apple stores transforms the usual unpleasant service experience into a pleasant exploration and learning experience. There are other excellent music players. No one seems to understand the systems thinking that has made Apple so successful.
One of the designers of the iPod interface, Paul Mercer of Pixo, affirms that systems thinking shaped the design process: “The iPod is very simple-minded, in terms of at least what the device does. It's very smooth in what it does, but the screen is low-resolution, and it really doesn't do much other than let you navigate your music. That tells you two things. It tells you first that the simplification that went into the design was very well thought through, and second that the capability to build it is not commoditized.”7 Thus, more complex management and design vision are prerequisites for user simplification.
Because it requires systems thinking and complex organizational behavior to achieve, usability is often last on the list of design criteria, behind such considerations as manufacturability or modular assembly, materials costs, packaging, skill levels of the factory employees, and so on. The hall of shame for usability issues is far longer than the list of successes. For every garage door opener, LEGO brick, or Amazon Kindle, there are multiple BMW iDrives, Windows ribbons, European faucets, and inconsistent anesthesia machines: Doctors using a machine from company A turned the upper-right knob clockwise to increase the flow rate but had to turn counterclockwise on company B's machine in the next operating room. Fortunately, the industry has standardized the control interface, with a resulting decline in human endangerment.8
Bruce Schneier gets the last word here. He proposes a simple five-step rubric for assessing a security solution that can expose some of these agendas to scrutiny and reasoned discussion:
Everything important is addressed in this process: A $5,000 door lock to protect $200 worth of property would be exposed, as would soft costs, such as inconvenience or false positives. Too often the features and functionality of the door lock or other technology become the focal point rather than their being weighed in rational fashion alongside the other four facets of the proposed solution.
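The door-lock example can be made concrete with the annualized-loss-expectancy (ALE) convention common in risk management. This is a hedged sketch, not Schneier's own calculation: the incident rate and mitigation factor below are assumptions for illustration, while the dollar figures come from the text.

```python
# Sketch of the cost-benefit arithmetic behind the door-lock example, using
# the annualized-loss-expectancy (ALE) convention from risk management.

def annualized_loss(asset_value, incidents_per_year, mitigation=0.0):
    """Expected yearly loss: value at stake x incident rate x residual risk."""
    return asset_value * incidents_per_year * (1 - mitigation)

asset_value = 200          # the $200 of property from the text
lock_cost = 5_000          # the $5,000 lock from the text
incidents_per_year = 0.5   # assumed: a break-in attempt every two years
mitigation = 0.9           # assumed: the lock stops 90% of attempts

savings = (annualized_loss(asset_value, incidents_per_year)
           - annualized_loss(asset_value, incidents_per_year, mitigation))

print(f"annual savings ${savings:.0f} vs. lock cost ${lock_cost}")  # $90 vs. $5,000
```

Even under generous assumptions, the lock "saves" about $90 a year against a $5,000 price tag; the rubric exposes the mismatch that a features-first evaluation would miss.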
Unfortunately, few security measures are introduced in this considered fashion, so we continue to live with unnecessary vulnerabilities, excessive expense, and intrusive or obnoxious measures that impose heavy costs on users and bystanders. Given the nature of both today's threats and today's institutions, the situation is unlikely to improve dramatically any time soon.
1. Jeff Jonas, “Your Movements Speak for Themselves: Space-Time Travel Data Is Analytic Super-Food!” August 16, 2009, http://jeffjonas.typepad.com/jeff_jonas/2009/08/your-movements-speak-for-themselves-spacetime-traveldata-is-analytic-superfood.html.
2. comScore, “comScore Reports Global Search Market Growth of 46 Percent in 2009,” Press Release, January 22, 2010, www.comscore.com/Press_Events/Press_Releases/2010/1/Global_Search_Market_Grows_46_Percent_in_2009.
3. Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), pp. 229 ff.
4. Bruce Schneier, Beyond Fear: Thinking Sensibly about Security in an Uncertain World (New York: Copernicus, 2003), p. 38.
5. Cormac Herley, “So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users,” New Security Paradigms Workshop, April 20, 2009, http://research.microsoft.com/apps/pubs/?id=80436.
6. Don Norman, “Systems Thinking: A Product Is More than the Product,” Interactions 16, no. 5, http://jnd.org/dn.mss/systems_thinking_a_product_is_more_than_the_product.html.
7. Mercer quoted in Bill Moggridge, Designing Interactions (Cambridge, MA: MIT Press, 2007), p. 315.
8. See Atul Gawande, Complications: A Surgeon's Notes on an Imperfect Science (New York: Macmillan, 2003).
9. Schneier, Beyond Fear, p. 14.