Chapter 5. Amateurs Study Cryptography; Professionals Study Economics

Long before anyone had built a computer, mathematicians were constructing the underlying tools of logic on which all computers are built. The research of mathematicians such as Shannon, Turing, and von Neumann enabled us to understand the limits and possibilities of what computers and the programs running on them might do. Decades of brilliant engineering have allowed us to build increasingly complex systems and make them faster and usually smaller. The underlying mathematics remains unchanged—a testament to the power of good mathematical thinking.

Their origins and very nature make computers mathematical artifacts. Indeed, the deep abstractions of computer science are often indistinguishable from pure mathematics. Mathematicians have defined a subset of their field, applied mathematics, to distinguish it from other areas that have no expected practical application. Applied mathematics would be the black sheep of that scientific family, if not for computer science.

If we consider the challenges that we face within information security to be problems of logic, the answers to those challenges should be found through the application of mathematics. This hypothesis does not ring true with the experiences of most security practitioners. They typically feel that users and other “soft” factors are the reasons why security often fails. It is true that many problems within information security are usefully illuminated by math and logic. Once we have solved those problems, issues emerge from the ways in which computers, societal norms, and people’s behavior intersect. This would not have surprised von Neumann, who helped invent game theory and who offered a framework for how to approach games like the prisoner’s dilemma.

Look back at our Turkish hacker from Chapter 1: his plans make no sense when expressed in mathematical terms. However, they are entirely sensible when you consider that his victims were human beings and thus prone to making mistakes. Those mistakes allowed the gang to manipulate numbers within computers that eventually became little bits of colored paper that they wanted because other members of society would exchange those pieces of paper for goods and services. So, looking at the failings of computer security absent the context of people and society in which those computers operate is a narrow view that is unlikely to bring us effective solutions.

This is problematic, because people who learn about information security in college are taught a narrow set of lessons about what security is. Many academics think it should be treated as a mathematical problem. Historically, computer security or network security classes at universities and colleges have focused on the study of cryptography. They applied cryptography to rather abstract problems in which various parties communicate in the presence of an adversary. Cryptography is certainly a very useful building block, but most problems in computer security do not exist because of a lack of cryptography. A late 1990s analysis of security issues described in Computer Emergency Response Team (CERT) advisories found that 85% could not be fixed through the application of more or better cryptography. Thus, the time that students spend in security classes tends to be spent on indirect, theoretical aspects of problems that just happen to be mathematically interesting. (Some universities are addressing these issues, but absent good data about what goes wrong, it has been challenging to craft new curricula.)

We are by no means arguing against the use of mathematics, but rather against the application of mathematics to security problems to the exclusion of all else. The New School suggests that because computers are inevitably employed within a larger world, information security as a discipline must embrace lessons from a far wider field. Some of those lessons will come from the fields of economics, psychology, and sociology. Accomplishing this will require researchers and practitioners in disparate fields to collaborate and cross-pollinate ideas. Some of this intermingling has been gathering steam for a few years in the area of “the economics of information security.” Other instances of collaboration between disciplines are just getting started, or have not yet begun.

Cross-disciplinary endeavors can be hard for academics, who benefit from having their work published in prestigious journals that, by their very nature, tend to focus on existing lines of research. Those journals might not be amenable to publishing work that suggests that, in fact, we’ve all been looking at problems from a wrong or at least skewed perspective. The editorial boards of academic journals might hold such work to a higher standard, so it may be practically discouraged, if not consciously. We’re happy to give the benefit of the doubt. We believe that most academics and journal editors will welcome the opportunity to advance the field by presenting new perspectives.

This notion that improvement can come from the introduction of new perspectives flows from the approach of observing the world and asking why. We ask why to understand people’s motivations—their conscious and unconscious incentives—and in doing so, learn to craft better approaches to security challenges.

The Economics of Information Security

Ross Anderson is a professor of security engineering at the University of Cambridge Computer Laboratory in England. His areas of research include banking, analysis of cryptographic protocols, security of medical information systems, and public-policy matters, among others. In 2001, he published a paper titled “Why Information Security Is Hard: An Economic Perspective.” That paper is generally considered the first piece of work to explicitly analyze the broad field of information security from the perspective of economics. It describes how many of today’s challenges in information security can be understood using the models and language of microeconomics, such as the theory of incentives, network effects, and liability. Anderson’s central observation was that the motivations of the various parties who interact with a system are often the most significant factor that influences its security. In other words, how people are motivated to behave can be as important as, or often more important than, how the system is designed to behave.

What’s unique about this idea is that it contrasts with the mind-set that information security is primarily a technology problem, and that ultimately the “solution” can be reached by piling on more and more technology. In fact, there is no data to show that businesses that spend more on security products will necessarily experience a corresponding reduction in security incidents. (Ways this might happen include when a company buys products and leaves them on the shelf, or turns them off after too many false alarms.) Multiple factors influence the possibility of a security incident, and the number of security technologies in use within an organization is just one. It may not be a leading indicator.

Security experts often point to user behavior as a factor that leads to security incidents. For example, experts accuse users of selecting poor passwords. The prevailing approach within the security industry is to impose technological solutions that attempt to mandate or constrain users’ behavior. Many organizations spend tremendous amounts of energy trying to get users to pick good passwords. Good passwords are a cornerstone of many authentication schemes, and much security fails if impersonation is easy. Unauthorized people can log in and do things they shouldn’t. Investigators might then focus on the wrong person. If an organization views this as a technical problem, it imposes its policy through tools that implement arcane password rules regarding capitalization, special characters, and the like. The result is that some of the company’s users write down their passwords, and others use the same password everywhere. Still others invest time in inventing password-changing schemes so that they can use roughly the same password with enough changes to satisfy the mechanics of the policy. Of course, writing down passwords violates yet another of the company’s security policies.
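
As a concrete (and hypothetical) illustration, here is a minimal sketch in Python of the kind of complexity policy such tools enforce. The rules and sample passwords are our own assumptions, not taken from any particular product; the point is that the rules are trivial to check mechanically yet easy to satisfy with passwords that are predictable or reused.

```python
import re

def meets_policy(password: str) -> bool:
    """Check a password against a hypothetical corporate complexity policy:
    at least 8 characters, one uppercase letter, one digit, and one
    'special' character. Rules like these are easy to enforce in code
    but say nothing about whether the password is memorable or reused."""
    return (
        len(password) >= 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

# A user's workaround often satisfies the letter of the policy while
# defeating its intent: "Summer2024!" passes, and so does next quarter's
# trivially predictable "Autumn2024!".
print(meets_policy("Summer2024!"))  # True
print(meets_policy("autumn"))       # False
```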

This cycle, long frustrating to technologists, makes perfect sense when viewed through two economic lenses. The first is that the two sets of incentives are not aligned. Users have no particular incentive to use a complex password. Indeed, they would most likely prefer no passwords at all, or to use very simple passwords across all the systems to which they have access. The second is that the small demands of each system with its own complex password policy accumulate on the individuals asked to remember those many different passwords. Therefore, we see people ignore, sidestep, or subvert the policies. There is a saying: “If users are given a choice between security and dancing pigs, they’ll pick dancing pigs every time.” This quip points out that users’ incentives usually are not aligned with the goals of security technologists. At the same time, it exposes the mind-set that users are the enemy, or that they are worthy of condescension. Railing against human nature hasn’t been a winning strategy. We’ll turn to this subject later in this chapter.

When years of earnest exhortation haven’t changed anyone’s behavior, perhaps we need a new approach. The systemic failures we see in computer security require more than simply technical analyses—and these broader analyses are starting to happen. Let us now look at some examples of how economics can provide insight into significant or interesting problems within information security.

Why Do Some Security Technologies Fail?

Consider a street with several stores. The stores have suffered a rash of burglaries, so the merchants decide to hire a security guard. A guard would be too expensive for any single store to hire, but it may make sense if they can share the cost. Once the guard is in place, all the stores benefit, and they hope the number of burglaries will be reduced. But even if some merchants decided not to pay, they would still receive value from the guard. If too many merchants decide to employ that strategy, there won’t be enough funds to hire the security guard in the first place. The only stable state is one in which there is no guard and none of the merchants are happy. This illustrates two things. The first is what economists call “free-riding.” A merchant who does not pay is free-riding on the investment of others. The second is a Nash equilibrium: a state in which no single player can make himself better off by changing only his own behavior.
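
To make the free-riding logic concrete, here is a small Python sketch with invented numbers (the guard’s cost, the value of protection, and the number of merchants are all our assumptions). It simply computes each merchant’s net benefit from paying or not paying, given how many others pay.

```python
# Illustrative payoffs for the shared-guard example, using made-up numbers:
# each merchant values the guard's protection at 100, and the guard costs
# 150, split evenly among whoever agrees to pay.
GUARD_COST = 150
PROTECTION_VALUE = 100

def payoff(pays: bool, num_payers: int) -> float:
    """Net benefit to one merchant, given whether he pays and how many
    merchants pay in total (including himself, if he does)."""
    if num_payers == 0:
        return 0.0                          # no guard gets hired at all
    cost = GUARD_COST / num_payers if pays else 0.0
    return PROTECTION_VALUE - cost          # everyone benefits once the guard exists

# With two other merchants paying, free-riding beats chipping in:
print(payoff(pays=True, num_payers=3))      # 50.0  (pay a share)
print(payoff(pays=False, num_payers=2))     # 100.0 (let the others pay)
# If everyone reasons this way, the number of payers falls to zero, no guard
# is hired, and the street settles into the unhappy stable state described above.
```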

Here is another example of such an equilibrium. In Philadelphia, car insurance is expensive, so some people drive without insurance. If you get into a car accident in Philadelphia, it is more likely, compared to other cities, that the other person involved in the accident will be uninsured. This in turn drives up the cost of car insurance in Philadelphia, causing even more people to drive without insurance. No one individual can do anything helpful to improve the situation.

A final example of such an equilibrium concerns learning from the security mistakes of others. Every organization would like to learn from everyone else’s breaches, and everyone would agree that the ideal situation is for everyone else to fully disclose them; but no one wants to disclose their own, so left alone, no one discloses anything. Fortunately, breach notification laws have forced organizations to act differently. There are good reasons to disclose breaches, and regulation helped break the equilibrium.

The outcome for any participant in these situations depends on the decisions made by the other participants. In other words, the benefits of your investments can depend on the investments that others make, or choose not to make. This same situation exists in the security world and can be applied as a concept for understanding the adoption of security technologies. It can also help us understand how we might influence their adoption.

Many of the network protocols that enable the internet to function are known to have security problems. One example is the Domain Name System (DNS), which is used to convert domain names such as news.bbc.co.uk into numeric addresses and back. DNS has well-known, documented security weaknesses, and a newer, more secure alternative (the DNS Security Extensions, or DNSSEC) exists. But the use of plain DNS remains systemic, and a mass upgrade to the new protocol that has better security seems unlikely to occur, even in the long term.
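
For readers who have not looked under the hood, the sketch below shows how routine an ordinary DNS lookup is; this is standard Python, not anything specific to this book’s sources. The resolver simply trusts whatever answer arrives, which is the kind of weakness a more secure replacement protocol is meant to address.

```python
import socket

# An ordinary, unauthenticated DNS lookup: the name is handed to the local
# resolver, and whatever address comes back is trusted without verification.
address = socket.gethostbyname("news.bbc.co.uk")
print(address)  # an IPv4 address, as supplied by whoever answered the query
```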

If the internet used high-security protocols, we would all benefit. This would require ISPs, governments, and companies with an internet presence to adopt the new protocols. Stuart Schechter has pointed out that for these new protocols, a minimal level of adoption would have to be reached before any organization would see any benefit. As such, there is little to no incentive for any individual organization to upgrade, since the cost to upgrade is greater than the benefit the organization would initially receive. History shows that hoping organizations will act altruistically for the benefit of internet security is a losing strategy. So, what strategies could be used to change this situation?

An authoritarian approach would be to mandate the new security technology. Tell everyone that only the new protocol will be supported at the end of the year, and then flip the switch when the time comes. This approach tends to appeal to certain kinds of security practitioners. But without a “king of the internet” who could perform such an act, it is most likely a pipe dream. A second approach would be to bundle the new security functionality within a product that will be widely consumed, such as a new operating system or piece of network infrastructure, and turn on the functionality by default. This bundling approach is a highly workable strategy, but it depends on vendors understanding and valuing the instrumental role they could play and then acting on that knowledge. Some people think this is fertile ground for industry lobbying of government, to get subsidies for the adoption of the new security technologies through tax breaks. It seems likely that this would inhibit innovation, as the cost of technology becomes distorted through the tax code.

A third approach would be to hope large organizations adopt the new technology for their own internal use. This might make the value high enough for other organizations to also adopt the technology, and then mass adoption would (hopefully) occur. There are working examples of this phenomenon, such as fax machines. Large companies initially purchased fax machines to allow staff at physically distant sites to send documents to each other. This got the ball rolling toward widespread adoption where companies and individuals could communicate not only internally, but also externally. If the internet had only a single computer attached to it, it would be useless. The value of each internet connection goes up as the number of people connected to the internet increases. This is an example of the network effect. The more people who own a technology, the more value there is for everyone else who owns that technology. A weak version of this operates offline. For example, some people buy Hondas because there are lots of Honda repair shops.
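
One common way to make the network effect concrete is the rule of thumb that a network’s value grows with the number of possible connections between its members. The sketch below uses that simplified model (our assumption, not a claim from the sources discussed here) to show how quickly value can grow as users join.

```python
def possible_connections(users: int) -> int:
    """Number of distinct pairs of users who could communicate."""
    return users * (users - 1) // 2

for users in (1, 10, 100, 1000):
    print(users, possible_connections(users))
# 1 0          a network with a single computer is useless
# 10 45
# 100 4950
# 1000 499500  value grows far faster than the number of users
```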

Another example of the network effect is in industries such as online gambling. Fraud affects one fifth to one half of all transactions, so digital cash systems that have many of the important properties of cash would be hugely valuable. If you are not knowledgeable about the intricacies of digital cash systems, suspend disbelief for a moment and accept that mathematics makes it possible to create numbers that work as money while preserving privacy, ensuring security, and being transferable between people. Why is such a system not widely available? It is not the mathematical complexity that is the barrier. The mathematics underlying the frequency-hopping, encrypted radio transmissions that make cell phones work is perhaps more complex. The issue is that deploying such a platform requires software on home computers, software at lots of merchants’ web sites, and software at the banks. It is hard to get so many different parties to commit to the system and embrace it all at once. In the case of electronic cash, questions surrounding patents, law, and regulation also raise the cost for everyone involved.

Our analysis above is built on Schechter’s paper “Bootstrapping the Adoption of Security Protocols.” Related work by Geoffrey Moore argues that markets for products function in stages. Many companies (not just in security) stumble and fall as they attempt to transition their customer base from early adopters of the technology to the mass market. A new product serves some group of early adopters well, and the producer focuses on that group to the exclusion of a broader market that might be reached. The broader-market product is usually simpler and less expensive. For example, the Palm Pilot was not the first handheld computer. It followed the Casio Boss and Apple’s Newton. Technologists who remember either can explain (at length) why each of these “should” have been adopted in the mass market. Unfortunately for them, not all worthy technology gets widely adopted. This applies just as much to security as it does to other fields. The adoption cycle that Moore describes is driven by careful attention to the needs of one group, and ensuring that that group’s needs are fully met. As that market becomes saturated, it can act as an evangelist to other groups.

An example of this type of success in security is a technology called Secure Shell (SSH). The first group that SSH helped was system administrators. Other computer users became aware of the capabilities of SSH because it was being actively used, and features beyond remote administration, such as “tunneling,” were found to be useful enough that those other groups picked up SSH. SSH is now in widespread use, and many other systems rely on SSH for communications security.

We can see that a security technology tends to be adopted in the mainstream when the user directly and immediately perceives the benefit of the technology. Where the adoption of a security technology benefits another party, or where some minimal level of adoption must be reached, we must look at other strategies to propel adoption forward.

Why Does Insecure Software Dominate the Market?

Most software is insecure. Almost all major software packages and platforms have a history of security vulnerabilities. Why do organizations continue to purchase insecure products? Part of the answer is that product features create more easily understood value than security. Even if two pieces of software have roughly the same features, it can be hard to understand which one has better security. There is even legitimate disagreement about what it means to have “good” security, as discussed in Chapter 2. Attempting to measure security requires time and money spent on experts in software security, so the transaction costs involved in evaluating software security are high.

Transaction costs are an important concept in economics. The most obvious transaction costs that people encounter are the “closing costs” incurred when buying a house. These are costs you pay to get the deal done. Sometimes transaction costs are explicit financial costs, but they also include time and effort. For example, some credit card companies make their legal terms and conditions complex partly to drive up the transaction costs of comparison and make switching to another credit card more difficult.

In addition to high transaction costs, there are other reasons why assessing software security is hard. The results of security evaluations are rarely consistent. Two different experts with different areas of expertise will likely identify different issues. Getting organizations to invest in evaluating the security of software they are considering purchasing is also challenging. Free-riding on the analysis of others would be a perfectly reasonable strategy for organizations, except that currently no worthy evaluations are being done upon which to free-ride. Most software security evaluation programs are paid for by the vendor that creates the software being evaluated. To our knowledge, none have flunked an applicant.

Because “security” is so difficult for prospective customers to evaluate, it is rarely prioritized above other factors in their purchasing process. As a result, vendors that develop software rationally choose to invest in other factors that are more visible to prospective customers. Hiring programmers and managers who have experience in building secure software is also expensive. Software vendors can reasonably assume that it will probably be years before their decision not to focus on security will have any impact on sales. Lots can go wrong with a fledgling company before then. In the future, new management might be in place to deal with the effects of the prior lack of investment in security.

These factors combine to result in a “market for lemons.” Because prospective customers cannot readily tell which products have more or less security, no vendor has an incentive to invest in selling a product with high security. No vendor wants its product to be perceived as a lemon, however, so vendors make a concerted effort to transmit “signals” to prospective customers that their products do in fact have good security. The market today employs two general strategies for trying to do this. The first is to make claims about the processes that go into ensuring security qualities and to produce measurable evidence of improvement. That evidence can be hard to obtain or interpret, as described in Chapter 2. The second strategy is to simply make claims about your software. This often backfires, because the claims made can betray a lack of understanding to people who are experienced in evaluating software security. Examples of this second tactic backfiring include advertising security as “unbreakable” or “virus-proof.”

One idea for how to address insecure software is to extend liability to security issues in software. The idea is that liability for a problem should fall where it is cheapest to fix. The thinking goes that imposing liability on software producers will encourage them to invest in software security, and the software that they create will then become better.

This idea has a number of problems, ranging from the practical to the theoretical. On the practical end, how far should the liability extend? If a company ships an open-source package (one that is given away), should the company be liable for the product? Apple ships commercial versions of open-source software packages. This allows Apple to provide its customers with high-quality software at a low cost. If Apple were held liable for the open-source software it ships, it would have to either recreate the software in-house or analyze and perfect it. Building perfect software is impossible. As defects are repaired, there is a risk of new problems being introduced. More worrisome, people don’t agree on what “secure” means or what a company should reasonably do to make a piece of software “secure.” There are many experts with passionately held opinions—a nice scenario for lawyers. Uncertainty around security would cause some projects to be canceled, making for a worse selection of software for consumers and businesses.

Even assuming that “perfectly secure software” could be built, what about user error when the software is used? Perhaps software would include a warning such as “Caution: the software you’re about to enjoy is extremely fragile.” Alternatively, software might come with thousand-page-long manuals that the user must read in order to get the warranty. Giving software creators an incentive to claim that “all problems exist between the chair and keyboard (PEBCAK)” seems like it would create a new set of problems, perhaps worse than the software security problems we face today. Finally, new liability around security could impede breach disclosure (depending on the precise wording of the law). Companies would find reasons to sweep issues under the rug as a way to avoid liability. This would put the brakes on a very important new source of evidence.

Insecure software persists within the marketplace not because companies that purchase software don’t care about security. Companies would consider security part of their purchasing process if it were easier to measure security and if those measurements could be trusted. A market for lemons can be understood as the natural outcome of the transaction costs associated with evaluating software security and a lack of objective data about software security.

Why Can’t We Stop Spam?

We’ll use spam as a final example where economics can help us analyze an intractable security problem. Spam has blossomed into a resilient ecosystem of people with a variety of products to sell (real, forged, or imagined), middlemen who market these products, and infrastructure providers who send the email messages through the defenses that have been built.

Spammers invest in ways to get their email past your defenses. They have a two-pronged strategy. First, they misspell words, use images, and do everything they can to get past the text filters that most people use. If they went to all that effort, but sent their email messages from only a few computers, it would be easy to knock those computers off the internet or blacklist them. Therefore, spammers employ a second technique—sending their spam from many different “zombie” computers. Because spammers control a shifting pool of hundreds, thousands, or even hundreds of thousands of zombie machines, their operations are very hard to shut down. Spammers may live in places where their money goes further than it would in New York City. It may make sense for them to invest months of effort for a few thousand dollars. This “cost advantage” enables attacks that many people would dismiss as not worth the effort.
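
The cat-and-mouse game with text filters is easy to illustrate. The toy filter below is a deliberately naive sketch (the word list and messages are invented); it shows why a fixed keyword list is trivially evaded by the misspellings described above, and why real filters moved to statistical scoring.

```python
# A toy keyword filter of the sort spammers routinely evade.
SUSPECT_WORDS = {"viagra", "lottery", "winner"}

def looks_like_spam(message: str) -> bool:
    """Flag a message if any word matches the fixed blocklist exactly."""
    return any(word in SUSPECT_WORDS for word in message.lower().split())

print(looks_like_spam("You are a lottery winner"))   # True: caught
print(looks_like_spam("You are a l0ttery w1nner"))   # False: a trivial
# misspelling slips straight past the exact-match blocklist
```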

Why is it apparently so easy for spammers and other criminals to compromise so many computers, turn them into zombies, and then use them to send spam? And why does this problem persist?

Some countries have made sending spam illegal. The problem is that in lots of other places, spamming is not illegal. Even where it is illegal, there is competition for police resources. Spam is not as important to the police as a mugging or an assault. Catching one spammer takes much time and effort, and the payoff may seem small, especially if the spammer is in another country. Thus, the risks to a spammer are low, because the deterrent effect of the law is so small.

Economists speak of externalities, in which the costs of a transaction are not carried entirely by the people involved. People who drive sport utility vehicles (SUVs) that emit more fumes than smaller cars don’t experience any more smog than anyone else. SUV owners don’t personally experience the consequences of their actions in a proportional manner. The same problem exists with the security of home PCs. When a PC is turned into a zombie and is used to send spam, the owner of the PC doesn’t directly suffer the consequences.

There is no software marketed as preventing a home PC from being turned into a zombie, because consumers would be unlikely to purchase it. Luckily, most antivirus products provide a reasonable level of defense against a home computer being turned into a zombie. Consumers do pay for antivirus software, because they don’t want to lose their files to a virus. This is bundling, which we discussed earlier in this chapter as a strategy for increasing the adoption of security technologies.

Unfortunately, not every home computer connected to the internet has up-to-date antivirus software installed. The remainder represent many millions of computers—more than enough for spammers to take advantage of. Even a small fraction of all the computers connected to the internet represents a massive number.

Unscrupulous people are making lots of money sending spam. Spammers face hardly any risk of being caught. The combination of these incentives and the externalities, magnified by the size of the internet, means that it is perhaps impossible to stop spam.

We have discussed how ideas from the field of economics can be used to shed light on some specific problems in information security. The same approach can be brought to other challenges, such as the analysis of principal-agent relationships.

Alice wants to sell her car but has very little free time to look for prospective buyers, so she hires Bob to help. But how does she know he won’t sell the car to one of his friends at a discount, and tell her that was the best price he could get? Economists call this the principal-agent problem. Alice is the principal, and Bob is her agent. In this simple example, the obvious solution is to pay Bob half the money he gets above the car’s book value, thereby rewarding him for working hard.
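
The incentive scheme just described is easy to put into numbers. The sketch below uses an invented book value and sale prices (our assumptions, purely for illustration) to show how the split aligns Bob’s payoff with Alice’s interest in a high price.

```python
BOOK_VALUE = 10_000  # an invented figure for the car's book value

def bobs_fee(sale_price: float) -> float:
    """Bob keeps half of whatever the sale fetches above book value."""
    return max(0.0, (sale_price - BOOK_VALUE) / 2)

print(bobs_fee(10_000))  # 0.0: selling cheap to a friend earns Bob nothing
print(bobs_fee(12_000))  # 1000.0: working hard for Alice pays Bob as well
```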

The principal-agent problem has been studied extensively for lessons about paying CEOs. The board of directors at a company wants their CEO to be sufficiently motivated to increase the firm’s value. If they give the CEO lots of stock options, he might take too many risks, aiming for a huge rise in share price. On the other hand, if the CEO has all his money invested in the firm’s shares, he may act too cautiously. (An option allows the CEO to purchase shares at a set price; if the stock trades above that strike price, he can exercise the options at a profit.) Finding the right blend of incentives that motivates the CEO in the ideal manner turns out to be a tricky problem. In a similar fashion, in conversations about hiring security experts, we often hear questions such as “is he a rock star?” Rock stars are people who are so good at what they do, they can choose which projects they want to work on, because they can always find another job elsewhere. Because they are so hirable, and because security experts are scarce, managing the agency problems associated with these individuals can become a challenge.

Addressing principal-agent problems is a key issue in security when considering the work of various groups such as outsourcing partners and auditors. Auditors have an incentive to point out a tremendously long list of problems. (This is commonly called CYA: the auditor points out every possible problem to avoid liability. One of the authors once did a security audit for a civil rights organization. He felt the need to point out that background checks are a very common practice, knowing full well that the organization opposed them.) The auditee would prefer a list of audit findings that allows it to balance risks and the costs to mitigate those risks. Companies that perform outsourced security monitoring might be tempted to save money by performing only superficial levels of monitoring. After all, if they’re performing the monitoring for a customer, how likely will the customer be to notice what has been missed?

Psychology

Psychology is another science we can use to better understand information security challenges. One such challenge is the topic of security patching. Over time, security vulnerabilities are found in pieces of software, and the vendor issues a security patch. As soon as the user of the software has applied the security patch, she should be protected against the security vulnerability. Patches contain changes to a program. They can be read to understand what those changes are. Using the changes as a map, security researchers can analyze the original program to learn about the vulnerability or vulnerabilities that the patch fixes. With this information, they can create exploit code, which takes advantage of the vulnerabilities.

How long exploit code is widely available before a patch is applied is of crucial importance, because it defines how long systems are vulnerable. (We are simplifying here in assuming that no other work-around to the vulnerability exists, which is not always the case.) From a pure security perspective, there is no reason not to apply a security patch, and yet millions of system administrators and individuals regularly choose not to do so. Why is this the case?

An important reason is that applying a security patch could destabilize the system to which it is applied. Because of this, it is common for system administrators to test patches by installing them in an incremental fashion throughout their computing environment. We’ve seen organizations that have used this incremental deployment strategy. Some took a cautious approach, sometimes with up to seven test groups of computers. This meant that security patches could take a hundred days to be deployed to all the computers that needed them throughout the environment. Research has shown that most bad patches are fixed within ten days. Therefore, there’s probably little improvement in patch reliability between eleven days and a hundred days. Also, we’ve rarely seen anything like a rational basis for how many tiers of tests should exist, how many machines should be in each tier, or expected failure rates for testing.
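
A rough calculation makes the point about tiers. The sketch below compares the average number of days machines sit exposed under two hypothetical rollout schedules; all the numbers (tier sizes and dates) are invented for illustration, and the model ignores work-arounds and the operational cost of a bad patch.

```python
def average_exposure_days(rollout: dict) -> float:
    """rollout maps 'day the patch is applied' -> 'number of machines'.
    Machines count as exposed from the day the patch ships until they get it."""
    total_machines = sum(rollout.values())
    return sum(day * count for day, count in rollout.items()) / total_machines

# A cautious seven-tier schedule that takes a hundred days...
seven_tiers = {5: 10, 15: 50, 30: 200, 45: 500, 60: 1000, 80: 2000, 100: 6240}
# ...versus a small canary group followed by everyone else after two weeks,
# by which point most bad patches would already have been fixed by the vendor.
two_tiers = {5: 100, 14: 9900}

print(average_exposure_days(seven_tiers))  # about 87 days of exposure
print(average_exposure_days(two_tiers))    # about 14 days
```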

By understanding that tension exists between the security risk of not installing a patch and the operational risk of installing a patch, we can design a strategy that allows us to balance these incentives in an optimal fashion and time the application of security patches. The key difference is between making decisions based on fear and making decisions based on risk.

Another lesson we can learn from psychology is that psychological effects such as risk compensation, also known as risk homeostasis, can have surprising effects on how human beings interact with security measures. Understanding these effects can help us design systems that provide better security in a more effective manner. Risk compensation can best be explained by walking through some examples.

An antilock braking system (ABS) improves a driver’s ability to maintain control of the car while braking. In Munich, a study was performed on the behavior of taxicab drivers. Two groups of cars were tracked. They were identical, except that one group had standard brakes, and the other had ABS. The taxicab drivers were randomly assigned a car from one of the groups. All accidents involving the two groups were tracked; 747 car accidents were recorded during the three years of the study. The surprising result was that there were more accidents involving the cars that had ABS.

If you go skydiving but don’t deploy a parachute, you’re guaranteed to have a bad day. The major cause of death in the sport of skydiving has been exactly that: failure to deploy a parachute. Observing this, a European company named Airtec designed and built a device called the CYPRES that automatically deploys a parachute at a minimum safe altitude. That technology has been widely adopted by skydivers and has saved many lives. But the number of fatalities per participant in the sport has remained relatively constant, even after the introduction of this important new safety technology. What has happened is that there has been an increase in fatalities from jumpers attempting to perform higher-speed landings—flying their fully deployed canopies into the ground.

One ten-year study of smokers found that those who stopped smoking had fewer instances of lung disease, but their average life span was actually shorter than the group who decided to keep smoking. Another study showed that smokers who were given low-nicotine cigarettes inhaled those cigarettes more deeply and more frequently, thereby sustaining their level of nicotine intake.

As one last example, people who wear a seat belt while driving are more likely to survive an accident. However, in the United Kingdom, the number of deaths in car crashes actually went up after the law was passed that mandated seat belt use.

All these studies reflect the same underlying tendency. After a particular safety measure is introduced, the participants appear to reset the amount of risk to the level at which they were previously content. The participants do not get “safer”; they simply reapportion the risk elsewhere. Why does this happen? Returning to one of the preceding examples, why doesn’t introducing ABS make taxicab drivers safer? The answer, according to risk compensation theory, is that drivers with ABS think they’re safer and therefore perform more aggressive maneuvers. A separate study carried out in Oslo found that people who drove ABS-equipped cars drove much closer to the car in front of them than drivers of cars with standard brakes did. Why doesn’t an automatic parachute-opening device reduce the number of fatalities in skydiving? Skydivers take into account the new safety device and perform riskier jumps.

Attempts to reduce risk are continually frustrated when the subjects deem their level of risk to be satisfactory. The insurance field calls this concept “moral hazard.” When people know that their assets are insured, they often compensate by behaving in a more reckless manner. The insurance companies build their models to anticipate this effect. Banks provide other examples of moral hazard. The “Savings and Loan” debacle of the 1980s involved U.S. banks insured by the government. With that insurance, they invested in riskier and riskier gambits to earn returns to attract savers. The crisis cost U.S. taxpayers roughly $125 billion. Similar risky loans have led to a crisis in “subprime” lending in the U.S. and the U.K., including a bank run against Northern Rock in the U.K.

Can better approaches to security be designed with the knowledge that moral hazard exists? A 1992 law in Australia made bicycle helmets compulsory. As someone using risk compensation theory could have accurately predicted, the number of cycling deaths remained the same after the law went into effect. Compare the approach of the Australian government with the approach taken in the Netherlands. In the Netherlands, relatively few cyclists wear helmets, but their injury and fatality metrics are comparatively very low. This was accomplished in part by creating dedicated cycle lanes that separated road traffic from cyclists. A safe environment was created, with no safety decision put in the hands of the “user.” Nobody likes getting food poisoning, but no one in first-world countries chooses to learn about biology to protect themselves from that risk. People rely on the process of inspections that the health and safety board uses to shut down unhygienic restaurants. In fact, no one should have to know that the health and safety board exists, and that is exactly as it should be. They have succeeded when the public is protected from risks without knowing about either the risks or the protection mechanisms.

A twist on this approach, being used successfully in some European cities, is to remove visible safety measures such as crosswalks, speed bumps, and stop signs. The result seems to be that people drive more safely. It may be hard to adapt this thinking to computer environments, where the dangers are less visible and visceral. These aspects of human psychology may hold substantial lessons for the way in which security measures are designed. Visible security measures appear to have the effect of making people take more risks. There are implications for the ways in which we attempt to deliver better security, from interface design to the creation of security policies and the practices of security education and security awareness training.

Sociology

Security practitioners love to exhort. One of the things they exhort people to do is to lock their screens. Almost no one does it, because people feel that locking their screens sends a message of distrust to their coworkers. That’s an uncomfortable message to send. In one of Microsoft’s security teams, a prank has evolved where it’s acceptable to bend company policies by “borrowing” a computer left unlocked and sending prank email to the team. (There are strong limits on what pranks are acceptable. No one should do this to anyone involved in producing a security update. The mail sent should also clearly say “Fire me!”) The effect is that everyone on the team locks their screens.

This is a clever way to influence people’s behavior. Experts have been trying to get people to lock their screens for a long time. Designing this solution would require an understanding that the issues with locking screens are not simply laziness or externalities. Rather, it’s an issue of how people interact with one another. Sometimes those interactions are economic, but other times, sociology may be a better guide to how people act around each other. How people behave on the security team we used in our example could easily be described in terms familiar to anyone who has studied sociology. There is team formation, the setting and communication of norms, and activities that signal membership in a group. It’s a practical application of sociology to a question of security.

Another area where sociology can help is in understanding risks of monoculture. Diversity is a wellspring of new perspectives. We’ve discussed the idea that people from different fields bring different perspectives to their work. It’s not just different fields that can bring different perspectives. Research regularly finds that the more diverse a group, the better the solutions that emerge. A chapter that proclaims the benefits of incorporating diverse new perspectives and ideas would not be complete without noting that the professional information security community is not very diverse. The conferences we attend on information security topics are a sea of middle-aged white men. By observation, such conferences seem less diverse than the broader computer and IT professions. The same applies to the apparent ethnic origins of computer security practitioners. Since we don’t have numbers, we’ll just say that sitting in front of a screen all day doesn’t turn everyone that pale.

Assuming that our observations are representative, why is computer security such a monoculture? Is it worth trying to change, and, if so, what might we do about it? One of the main “feeder routes” into computer security is IT. In the U.S., IT tends to be full of white men, and this might not appear especially attractive to other groups. This may be an example of a self-perpetuating state, without malice by anyone involved. Like many of the examples in this book, it might be a situation where the majority want things to be different but don’t understand the reasons that perpetuate the status quo.

Another observation is that the subculture within computer security, most notably at hacker conventions, has evolved to a point of remarkable hostility and exclusion. There is often an assumption that women in the technology field must work in sales, marketing, or some other nontechnical role. Not only is this exclusion profoundly wrong, but the homogeneity of orientation and experience robs us of new insights. By embracing the New School principle of opening security problems to examination through the lens of other sciences, it is our hope that the field of information security can become more diverse not just in its perspectives, but also in its makeup.

Externally visible characteristics such as gender and race are the most obvious aspects of how we appear to others. All of us attempt to influence how others see us. Some things are hard to change (gender or race), but others, such as style of dress, are easier.

Some people are quiet sports fans, and others adorn their offices or cars with displays of devotion. Other people display their religious or secular devotion by placing symbols on their cars, such as the symbol of a fish—or a fish with feet. Other signals include clothing or the use of jargon. Men’s clothing ranges from T-shirt to suit and tie. Each is chosen to present a particular image, and some people choose carefully for the crowd they expect to see. As people go from place to place and role to role, the ways in which they present themselves changes.

We might present a different face at home, at work, at sports events, or at church. The hard-driving executive may be a loving and supportive parent. The sports fanatic may spend all day at a football game with his buddies, but might not talk to them the rest of the week. People even use different names, from James at the office to Jim at home. These names may help people separate the ways in which they present themselves. We can’t be “on” all the time, but the distinction is often lost on people designing systems.

Work in computer security surrounding “identity” tends to default to the idea that each person has one identity. (This narrow thinking helps perpetuate the idea of identity theft.) Most of us have multiple overlapping identities. Systems that fail to respect the context of each of these identities are more likely to make their users feel uncomfortable. This in turn makes users more willing to bypass security policies. We’ve been told of people refusing to give their cell phone number for a disaster-recovery plan, convinced that their employer would use the information in situations well short of a disaster. This perceived lack of respect for separation of identity adds real risk to the plan.

Helen Nissenbaum, a professor at New York University, has presented the idea of “contextual integrity” as an explanation for how people respond to privacy issues. The idea is that when the context of a situation is broken, people get upset. Students post pictures and comments on social networking web sites, intending that only their community of friends will see them. They never think about prospective employers viewing their pictures and writings. This is the “context” of the sharing: their friends, not their employers. Maintaining that contextual integrity is an important aspect of privacy.

People tend to respond strongly and emotionally to behavior that is wrong for the situation. For example, members of the Westboro Baptist Church hold demonstrations at the funerals of fallen soldiers. Many people who respect their right to protest feel strongly that funerals are the wrong place for such protests. The idea of contextual integrity has been used to great effect to explain why some privacy issues blow up, others “feel wrong,” and still others are accepted with barely a whimper.

Economics and psychology provide new insight into people’s behavior. Understanding social pressures and context might also help. Opportunities to better understand security by learning from sociology have barely been explored.

In Conclusion

The title of this chapter is not meant to be facetious. If the security world can better understand the nature of its challenges, this will lead to better, more focused solutions. Even if we discover that the causes of the problems actually lie outside our control, we can at least react in other ways that might help us compensate.

Today, we can begin to find answers to many of the challenges we face in an emerging field of study: the economics of information security. For security professionals, reading the output of this new area of research is like switching on a light in a dark room. The findings provide long-needed justification for many widely held beliefs and demolish others. They provide answers to problematic questions, reveal nuances, and create new areas of research. Other sciences will hold similarly valuable insights into the challenges we face. The application of ideas from other fields is a key discipline within the New School. The goal is to transform information security into a multidisciplinary field in which technologists work closely with experts in “soft issues” such as public policy, economics, and sociology.

Lessons from other sciences allow us to observe the world, ask why, and receive an answer.
