Chapter 2. The Security Industry

Imagine that the police have arrested two people, Alice and Bob, for a crime. The police don’t have enough evidence to convict either, so they hope to convince each to testify against the other. If Alice testifies against Bob, but he doesn’t testify against her, he’ll go to jail for ten years, and she’ll go free. If they testify against each other, they’ll both go to jail for five years. If they both remain silent, they’ll each serve six months on a minor charge. The police offer Alice and Bob the same deal, but each must make his or her decision in a lonely cell. Each one’s fate, and the fate of the other, lies in their hands.

Several factors might influence their decisions. For instance, they might be friends. Let’s consider their dilemma from a purely rational point of view. If Bob stays silent, the best move Alice can make is to testify against Bob, because she will walk free. Even if Bob decides to testify against Alice, her best move is still to testify against him, because she would receive a shorter sentence than if she stayed silent. Because Bob is likely to use the same strategy, the end result for both Alice and Bob is worse than if they had cooperated and acted in their collective self-interest. This puzzle is known as the “prisoner’s dilemma.” It is considered a classic in the field of game theory—the study of incentives and decision-making that mixes mathematics with economics.
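The payoff reasoning above can be made concrete with a small sketch. The sentences below are taken directly from the scenario as described; the code simply confirms that testifying is each prisoner's dominant strategy even though mutual silence leaves both better off.

```python
# A minimal sketch of the prisoner's dilemma payoffs described above.
# Sentences are in years; each player independently chooses to stay
# silent or to testify, and a shorter sentence is better.

SENTENCES = {
    # (alice_choice, bob_choice): (alice_years, bob_years)
    ("silent", "silent"):   (0.5, 0.5),   # both serve six months
    ("silent", "testify"):  (10,  0),     # Alice jailed, Bob walks
    ("testify", "silent"):  (0,   10),    # Alice walks, Bob jailed
    ("testify", "testify"): (5,   5),     # both serve five years
}

def best_response(bob_choice):
    """Alice's rational move, given what Bob does."""
    return min(("silent", "testify"),
               key=lambda alice: SENTENCES[(alice, bob_choice)][0])

# Whatever Bob does, testifying minimizes Alice's sentence...
assert best_response("silent") == "testify"
assert best_response("testify") == "testify"

# ...yet mutual testimony (5 years each) is worse for both than
# mutual silence (six months each).
print(SENTENCES[("testify", "testify")], "vs", SENTENCES[("silent", "silent")])
```

By symmetry the same logic applies to Bob, which is why two rational prisoners end up in the worst collective outcome.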

Can the prisoner’s dilemma teach us anything about how we approach problems such as spam, viruses, data breaches, and identity theft? If the prisoner’s dilemma is a good model of the security industry, then yes. (We may be oversimplifying, but it illustrates our point.)

An entire industry is made up of those trying to solve security problems. Most of the participants in the industry are trying to make money by doing the right thing—delivering better security to their customers. In many ways, the industry succeeds at delivering a set of products people want. No one has to write their own firewall anymore. Antivirus products are on the shelf of every computer store, priced at less than a good book on how to write such software. The market for security products is functional, but not optimal. Individual or organizational actions do not always lead to what’s in the best interests of organizations, the general public, or the security field as a whole. Sometimes one person profits at the expense of another. This is particularly true in the area of security technologies, but many other examples exist.

A big part of the problem is in having enough information to make the right decisions. (Is it better for Alice to testify against Bob or to stay silent?) A lack of evidence to support decision-making allows vendors to sell anything, because customers can’t distinguish useful products from useless ones. Salespeople refer to this as “throwing things at the wall to see what sticks.” If we had a perfect market, and if consumers were fully informed and entirely rational, perhaps things would be better. As things stand, buyers of security products don’t have a lot of good information to help them make decisions. This can result in effective security technologies or approaches being sidelined or overlooked in favor of the latest and greatest.

In the spirit of observing the world and asking why, this chapter examines how the various parts of the security industry act. We will structure our analysis by describing what products and services are sold, how they are sold, and how effective they are. We will begin by examining the various groups that make up the security industry in order to understand their motives and how different groups might influence the development of ideas.

Where the Security Industry Comes From

One of the earliest influences on security was the U.S. military, which shaped the development of the computer and information security industries in a number of ways. Military networks have always been targets, and of course the military has systems and information that it wants to keep secret. Information security has been around for as long as information has been recorded, but the computer security industry couldn’t get started until computers existed, and many of the earliest computers and networks were built by the military. This led the U.S. military to fund early and influential research in computer security. In the 1980s, the Department of Defense wrote and published security standards and then required that the IT vendors with which the government did business meet those standards in their products. Because of the size of the deals involved, companies such as IBM, Honeywell, and DEC spent lots of money on security features demanded by their government customers. Most governments and militaries train personnel to operate and manage their information technologies, and many of those individuals eventually transition to the commercial sector, bringing their training, education, and culture with them. Because the military invests such a large amount in information technology, and because so many people flow through its organization, it exerts broad influence.

Contrary to widespread belief, the military does not have access to any special data set that would enable it to make better decisions than, say, a large corporation. It may have data on the use (and abuse) of specialist platforms such as multilevel secure systems, but those findings do not have particular use in a traditional IT setting, where such technologies are rare.

On the other side of the fence (both figuratively and, well, physically) are hackers and crackers. We use the traditional meaning of the word hacker: someone who is adept at pushing a system beyond expected boundaries, and not usually with malicious intent. Hackers influence the security industry in two main ways. The first is their involvement as technologists in start-ups and established security companies, performing research and influencing product development. The second is as hobbyists, creating and releasing security tools, some of which have become widely adopted as alternatives to commercial offerings. Open-source projects founded by hackers have also inspired new lines of business in information security and IT. Crackers are people who break into computers, phone systems, ATMs—anything digital, ideally with a network connection. A broad spectrum of motivations exists, ranging from the desire for teenage bragging rights all the way up to espionage or mass fraud of the type carried out by Ali Y’nin and his accomplices. Web site defacement is often the electronic equivalent of teenage joyriding, but other motivations within this group include the promotion of political points of view, financial gain, or the pursuit of fun without regard for its impact. As we noted in Chapter 1, computers that have been compromised by crackers are often sold as spam relays.

The police are interested in crackers who break the law. The police are less concerned with preventive security measures than with investigative techniques. The science of digital forensics has progressed substantially, but it remains a niche specialty. Even for a large organization, the cost of training an employee in digital forensics is likely to exceed the value of the few occasions on which that employee would have the opportunity to apply his or her new skills, so the investment rarely pays for itself.

Some crimes are unearthed by accountants. Usually the crimes discovered by auditors are financial. It is unclear whether IT auditors are as successful at discovering computer crimes. Considering their visible role within businesses, auditors might be expected to exert a strong influence over the security industry, but in fact they do not. Auditors are concerned primarily with the timely delivery of audit data and its correctness, along with the controls that surround the business processes from which that auditing data originates. They are largely agnostic regarding the means used to deliver that information. It could be a paper process for all they care, as long as the data is readily available and the controls around the process are robust. Most audit work is performed by junior employees, who have relatively little technical skill and are less able to influence the selection of security products in use.

The security problems that exist in the world are a tempting opportunity for entrepreneurs. In general, the entrepreneurial approach brings many benefits. (Successful businesses make money because they offer customers something useful.) Security is a field wide open for entrepreneurship. Unlike, say, the pharmaceutical industry, information security is almost entirely unregulated. This makes it easy for parties to enter the market and provide products and services. To make lots of money in security, an entrepreneur must find a problem that customers can understand and that can be solved with technology. This can channel entrepreneurs and start-ups in security into a narrow market, focused on a small subset of the important problems. It may be that addressing the problems that exist outside that narrow market would provide more value in the long term. (We’ll discuss this at greater length later in this chapter.)

Venture capitalists are in the business of investing in new ventures with potential for enormous returns. Some of the companies funded by venture capitalists become successful public companies. Others are acquired by larger companies, often looking to build more complete systems, and still others simply fail. The start-up business is speculative by definition, and many security start-ups seem to hit a plateau in their growth and then stagnate. There are plenty of reasons for this, but this isn’t a book on start-ups or entrepreneurship. One theory is that such companies can survive for a while by selling their product to all the people they know within the industry, but that isn’t sufficient to build a viable company in the long term. Another theory is that a finite set of large companies will buy new technologies simply to see if they work. Even smaller is the set of companies that will buy new security technologies.

Technological innovation also comes from hackers building tools to solve problems they’re experiencing. Many of these hackers choose to give away the software they create under various open-source licenses. This is an important source of innovation, and many important security products have come from the open-source world. Some are at the center of companies, and others continue to operate as open-source projects. Companies that spring up around open-source projects often make money by selling subscription content, such as intrusion detection rules for the latest attacks, or ongoing support and service.

As a market matures, there’s a tendency toward vendors that offer “complete” systems. Most cars come with a stereo, and the market for replacement stereos is relatively small. However, many products now commonly available from the manufacturer start out as optional add-ons. Navigational systems are an example. Once expensive add-ons, they’re now expensive factory or dealer options. The move toward complete systems tends to be a gradual one as the market figures out what a complete system entails, but it is often hard to compete with a systems vendor. Spelling and grammar checkers were once products separate from word processors, but no one needs to buy a spell checker today. Similarly, few organizations want to buy security as an “add-on”; they increasingly want security capabilities to exist by default. Most people don’t want to invest the time to figure out what the best stereo is; they want to get a reasonably good one for their money. In the same way that it can be hard to compare stereos, it can be hard to compare the security of products. We’ll return to this point later.

Another trend is for large IT vendors to use their market penetration and existing sales channels to shape parts of the security market. For example, when the market for products that protect computer desktops swelled as a result of internet worm infections, Cisco priced its product very inexpensively in comparison to competing vendors. It was a popular product, and Cisco is a major vendor, so why didn’t Cisco charge a premium? One possible reason is that a widespread deployment of a certain technology within a business creates incentives for that business to employ additional technology from the same vendor (for interoperability, licensing, support, and many other reasons of efficiency). Another factor is that all vendors, regardless of their field, want the switching costs of their customers to be as high as possible. They understand that if their product is widely used within an organization, it can then become expensive for that organization to switch to a competing product. The more entrenched a product, the more power a vendor has to set prices for maintenance and licensing.

Industry analysts report on these practices and advise both public and private companies. Analysts are often given in-depth briefings by companies, and the quality of their analysis hinges on the rigor with which they gather and interpret data. (A brilliant analyst without data is a pundit.) A cynical view of the advice that research firms such as Gartner provide to business is that “no one ever gets fired for implementing Gartner recommendations.” It is true that people often fail to hold analysts accountable for the advice they have given, and analysts rarely reveal the data that underlies their predictions.

The final group we will note within the security industry is academia. Researchers in academia investigate security topics, and their research is often a leading indicator of security technology trends. Much academic research in security is years or decades distant from the workplace, but academia is often the only place where a true picture of the strengths and weaknesses of security technologies can be found. This is most true for cryptography, but it’s also true for technologies that are now considered everyday, such as intrusion detection systems. In a paper published in 1999, Stefan Axelsson, a researcher in Sweden, explained that the value of these products hinges not on their ability to detect attacks, but on their ability to suppress false alarms that drive up operational costs. This paper was one of the two most farsighted analyses of intrusion detection technology. Axelsson foresaw problems with these emergent systems long before most businesses had heard of them. (The other farsighted paper was Ptacek and Newsham’s work describing how an attacker can evade the technology.)
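Axelsson's point about false alarms is at heart a statement of the base-rate fallacy, and a quick worked computation shows why. The numbers below are hypothetical, chosen only to illustrate the effect: even a detector that is very accurate in both directions produces mostly false alarms when real attacks are rare.

```python
# A worked illustration of the base-rate argument behind Axelsson's
# analysis. All rates here are illustrative assumptions, not figures
# from the paper: the IDS flags 99% of real attacks, false-alarms on
# only 0.1% of benign events, and 1 in 100,000 events is an attack.

p_attack = 1e-5                # base rate: fraction of events that are attacks
p_alarm_given_attack = 0.99    # detection rate
p_alarm_given_benign = 0.001   # false-alarm rate

# Bayes' theorem: P(attack | alarm)
p_alarm = (p_alarm_given_attack * p_attack
           + p_alarm_given_benign * (1 - p_attack))
p_attack_given_alarm = p_alarm_given_attack * p_attack / p_alarm

# Roughly 0.01: about 99% of alarms are false, despite the
# seemingly excellent detection and false-alarm rates.
print(f"P(attack | alarm) = {p_attack_given_alarm:.3f}")
```

Under these assumptions an analyst must investigate about a hundred alarms to find one real attack, which is why suppressing false alarms, not detecting attacks, dominates the operational cost of these systems.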

Two favorite haunts of academics are conferences and standards bodies. Academics have a hierarchy of workshops, symposia, conferences, and journals in which they share their work and collaborate. These forums and the work they accept play an important role in shaping information security. The perceived importance of new work is influenced by where it is presented. Venture capitalists and the IT industry participate in some of these forums, so what is selected and presented influences what new technologies get attention outside academia. Standards bodies are one of the ways in which security protocols are developed. This work is typically done in an open and non-commercial fashion, but the desires of the commercial security industry can sometimes conflict with those ideals. For example, in 2002, a working group of the Internet Engineering Task Force published the draft description of a communication mechanism called the Intrusion Detection Message Exchange Format that would allow security technologies to talk to general network management systems in a consistent way. This would allow businesses to connect such systems together. However, the business model of companies producing these products is to sell sensors and management consoles as a bundle. Implementing this new mechanism would mean that the customer was no longer locked in to using a suite of components from only one vendor, so none of the vendors implemented it. It could be argued that customers would benefit from being able to combine the output from multiple products to enable more effective security analysis to be performed. But this would not initially benefit each individual vendor, so we see the downward spiral of the prisoner’s dilemma emerge.

Orientations and Framing

Our orientation is influenced by our cultural upbringing, education, training, and experiences. We’re using the term orientation as defined by John Boyd in his Observe, Orient, Decide, Act (OODA) concept, often simplified and called a loop. In this sense, orientation is how people perceive and interact with the world. Orientation in IT is visible in the ongoing “UNIX versus Microsoft” divide. An example in the security realm is the palpable difference in orientation between a security practitioner with a policy and compliance background and someone with a technical or engineering background. To stereotype for a moment, the policy and compliance folks want to manage risks through process and controls, and the technical people want to build and deploy technology. These varying orientations manifest as different outlooks on and approaches to the world. If you want some excitement, gather a group of security professionals and ask them if it’s a good idea to write down all your passwords and carry them in your wallet. There will be wide differences of opinion (and probably fireworks).

What we see as conventional wisdom is to a large degree shaped by our personal and social preferences, since what is convenient to believe is often greatly preferred. The phenomenon of “groupthink” is well known. It’s easy to ignore reality when the people around you are all doing the same thing, and the act of challenging groupthink is usually highly unpopular. People with a shared orientation tend to frame problems in similar ways. The most academically interesting topics within information security for Ph.D. students to study are those that their professors are interested in. A section of the hacker community focuses intensely on vulnerabilities and tends to believe that finding vulnerabilities in software is the most valuable skill for a security professional. Compliance practitioners within corporate security teams view security problems in terms of their possible effect on audit findings. They might dismiss a gaping security hole because they don’t believe it will impact the bottom line.

Orientations often involve a complex bundle of opinions, experiences, and approaches. For example, a section of the hacker community refers to itself as “the underground.” This self-labeled underground incorporates many ideas, including a love of investigation and learning, skepticism toward authority, respect for hard-to-acquire skills or knowledge, and a love of the forbidden. Every year in Las Vegas they hold Defcon, whose web site bills it as “the largest underground hacking convention in the world!” (The idea of a publicized event that is open to all comers being “underground” is an interesting contradiction.) At this convention, hackers get up on stage and explain how to break into things. Some include a caveat such as “Don’t break the law.”

Much of what happens at Defcon is perfectly above board. After all, hordes of law enforcement and military types are in attendance each year, and they don’t go intending to break the law. However, they often misunderstand the motivations of hobbyists and computer scientists who attend Defcon, and culture clashes are common. Some of these clashes have involved the arrest of hackers for things they have presented on stage in front of a large audience. Prolific bank robber Willie Sutton never stood on stage talking about weaknesses in bank security systems, but hackers do so regularly. There are big differences in the orientations through which these groups see the world.

None of the orientations we have described are “wrong.” In fact, they are often predictable, given the communities from which they originate. Unfortunately, making rational decisions in a microcosm does not necessarily satisfy goals that exist outside those relatively narrow views of the world. Those narrow views lead to inefficiencies and parochialism. Those inefficiencies manifest themselves in a variety of ways, but none more so than in the security products and services that are brought to market.

What Does the Security Industry Sell?

The face of information security seen by a Fortune 500 company or by an individual consumer is largely constructed by the marketing departments within the commercial security industry. The security industry advertises on buses, billboards, and taxicabs. It publishes trade magazines, runs conferences and award shows, operates security news and information portals on the web, provides training courses and professional certifications, creates security products, and delivers security services.

There’s an elephant in the room. That elephant is the assumption that the security industry has evolved to solve the problems most in need of solving. For many industries, this is a reasonable assumption. Markets are powerful mechanisms for finding solutions to important problems, and problems that the market doesn’t solve are often not worth solving. But markets also fail in well-understood ways, and many of these failures seem to occur in the market for computer security. Do we focus on and “solve” the easy problems? Are we looking for our lost car keys under the streetlight only because it’s dark everywhere else? Many of the products and services that the commercial security industry sells simply perpetuate an unsatisfactory status quo. They don’t make the problem worse, and they can certainly help, but they often don’t address the root cause of the problem.

A key observation that can be made about the evolution of security technologies is that new security products are often developed to compensate for the unintended side effects of prior security products. This suggests that the first set of products didn’t tackle the problem at a deep enough level. The use of network firewalls to restrict the types of traffic that can flow into and out of private networks has led directly to the current situation in which every application developer simply makes all the traffic flow over port 80. They know that the web port will never be blocked, so they use it by default. Now businesses have to purchase application firewalls in addition to their normal firewalls. This type of incremental decision-making seems to be common in the evolution of security technologies. It is an example of individual participants within the security industry trying to make the right decisions and failing. Businesses and consumers look for products to improve their security, and the market responds. In doing so, the risks are often shuffled around, or attempts to address them are postponed.

As human beings, we are drawn to “new” products because in our minds “new” means “improved.” (“It’s new and improved!”) This results in a fashion show of new security products every year or so, and sometimes, after several iterations, we end up back where we started. For example, vulnerability scanners found so many vulnerabilities within computer networks that companies began employing network-based intrusion detection systems so that they could focus only on attacks. Those technologies generated too many false alarms, so managed security service companies sprang up to allow companies to outsource that monitoring function. Some companies found that those outsourced companies were ineffective in their analysis because of the distance from the networks being monitored, so they took the security monitoring function back inside. To accomplish this, they began employing “vulnerability management” products. The circle is now complete, because this new generation of products is essentially a set of glorified vulnerability scanners, albeit with prettier executive dashboards.

As Kurt Vonnegut said, “So it goes.” Security products often address only the symptoms, not the underlying problems. If the problem reappears in a slightly different way, a new product may well be needed. Lots of organizations have challenges within their environment with vulnerabilities, the uneven use of administrative software, network games, peer-to-peer and instant-message traffic, rogue servers, illegal software, and iPods connected to corporate workstations. Should an organization purchase a separate security product to address each risk? There is certainly at least one product for each. But almost all these risks could be addressed by a small number of fundamental activities: understanding the makeup of your IT environment, how it is configured, and how it changes over time. Today these activities are known as asset management, configuration management, and change management. If these fundamentals are correctly addressed, all these risks can be controlled, along with most predictable future permutations. The 80/20 rule, also known as the Pareto Principle, suggests that performance depends disproportionately on doing a few things really well. Does the current direction trumpeted by the commercial security industry represent the 80 or the 20? A company may well need to buy products to implement those processes, perhaps even specialist security products, but it is best to think through and understand the elemental nature of security problems and attack them at their root.

The majority of new security product development is carried out by security companies, and not by the other groups within the industry that we have described. Companies compete against each other within the market. Indeed, it is the nature of the IT industry to propel products onward and upward to ever-increasing levels of power and functionality. This unrelenting drive for new features can result in product functionality that overshoots the needs of the average customer. The rate of technological advancement within the security industry is ferocious. The result is that the average consumer might use only a small fraction of the functionality that exists in a given security product. It could be argued that whenever a vendor adds more functionality to its product, it increases the level of complexity. Bugs and misconfigurations arise from complexity, and a security bug is simply a bug with security implications. It is easier to make a simple system reliable, and it is also easier to make a simple system secure. As systems offer more powerful functionality, they often become more complex. Security practitioners often say that vendors should remove features. This is a seductive approach, but customers often need more functionality, and different customers often need different functionality. The result is large, complex pieces of security software, many of which have proven to have security vulnerabilities themselves.

Along with products, the information security industry sells services. Security services are no different from other IT services in that the quality of the work being performed varies greatly. Verifying that someone really has a specialist skill set can be expensive and difficult. One response has been the emergence of a market for security certifications, which now are available for many different specialties in the field. Organizations like to hire candidates who are certified because they see this as a diligent hiring practice. Because security has become an important business topic, this has led people to pursue security certifications in the hopes that they will become more marketable. This market for certifications has provided some benefit. Employers can validate that a prospective candidate has a certain baseline of knowledge in security. The downside is that a false economy exists within the job market, because many competent candidates who do not have a particular certification find their résumé hitting a glass ceiling. The result is that companies might not be hiring the best candidates.

When a company hires someone with a security certification, it knows that the candidate has passed a test. But does passing a test mean that the person will do a good job? The best-known security certification today—the Certified Information Systems Security Professional (CISSP)—employs a syllabus that is referred to as “the common body of knowledge.” It amounts to a statement by the certification body of what a security professional should think about. Because of what is left out, it is also an implicit statement about what should not be thought about. With this approach comes a danger that people will feel cozy in their doctrinal view of what a security practitioner should know and perhaps reject the consideration of more novel perspectives. Organizations are unlikely to find a fresh perspective on their security challenges in someone who has just memorized 800 syllabus pages to pass a test. When the innovative British hairdresser Vidal Sassoon moved to New York in 1966, he refused to take the New York hairdressing exam. Local regulators had a formalized, rigid notion of how they thought a hairstylist should work. Sassoon thought it was absurd to have to prove his worth based on techniques and methods that he had learned and then abandoned. While no organization needs every employee to be a “rock star,” few want all their employees to think the same way. Security certifications have a number of problems, and these criticisms can be leveled at certifications in other areas of IT as well. We are raising these issues because professional certifications have become a significant factor in the hiring of security personnel, and their perceived importance seems unlikely to fade in the near future.

In the same vein as professional certifications, “seal programs” are certifications for businesses. Online businesses care about security because some of their customers do. Companies want their prospective customers to trust the security of their web site so that those customers will use it. Some companies pay to enroll in seal programs in which a security “authority” evaluates the security of their web site according to some criteria, and then after some tests are passed, they can display a seal image on their site. The seal is supposed to signal to prospective customers that the site can be trusted. This is all good in theory, but less so in practice. As in the prisoner’s dilemma, one party might choose to betray the other if she considers it to be in her best interest. Sites that are not trustworthy are drawn to such certifications as a way to help them dupe the public into thinking that they are trustworthy. Sites that do have good security do not feel the need to obtain the certification. This leads to “adverse selection,” in which sites that seek and obtain trust certifications are in reality significantly less trustworthy than those without. This is perhaps not true of all seal programs, but a 2006 analysis of TRUSTe, a major seal vendor, showed that its seal participants appeared in a large database of malware distributors far more often than their overall prevalence would lead one to expect.

The products and services that exist within the security marketplace are attempts to fulfill needs both real and imagined. Unfortunately, the security market isn’t perfect. Organizations should keep in mind the subtle forces at work when considering the products and services that are available. We can learn a lot by examining how the security industry operates and what motivates the groups within it. By doing so, we can begin to identify situations in which acting in rational self-interest leads to both market inefficiencies and a failure to address underlying problems.

How Security Is Sold

Fear sells. We will hazard a guess that you do not know anyone who was attacked by the infamous “flesh-eating bacteria.” So why have you heard of flesh-eating bacteria at all? The Centers for Disease Control recorded 600 cases of the bacteria in 1999, with a 25% fatality rate. The risk of being killed by these bacteria, or even coming into contact with them, is therefore infinitesimal. Even so, many newspapers around the world published headlines such as “Killer bug ate my face” and “Flesh-eating bug consumed my mother in 20 minutes.” Why the disconnect with reality? The answer is that fear sells, and the results of fear can manifest in the marketplace with spectacular effect. Sales of duct tape spiked after the Department of Homeland Security announced that all Americans should include duct tape and plastic sheeting in “home disaster kits.”

The security industry sometimes uses fear as a lever, either to sell products or advance an agenda. The worse a situation can be made to appear, the easier it becomes to convince someone to buy a particular product or undertake a particular piece of work. The security industry has, to a fairly significant extent, institutionalized the approach of using fear to sell security. It must be said that many security professionals have also taken up this approach and used it themselves. The industry has created a feeling of being under siege. The problem is that this can lead to decisions being made on the basis of emotional gut reactions—the antithesis of the ideal.

Taking advantage of people’s fears and sense of being overwhelmed by security challenges, products are sometimes marketed as if they are the panacea for all security woes. One advertising campaign for a major security vendor was built around the phrase “security’s silver bullet.” Of course, it is naive to believe that all problems have fast, simple, and purely technological solutions. Advertising such as this does a disservice to the security field because it glosses over complex problems and presents the illusion of a reality in which a panacea exists. It makes you believe you can reach nirvana by using a particular service or installing a particular product.

Another marketing technique sometimes used by security product vendors is “proof by unclaimed reward.” A vendor sets up its product on the internet and then invites attackers to attempt to break into it. When no one does, the company claims that its product “repels all hackers” or is “impossible to hack.” The trick, of course, is that the vendor tries to set up its product or the surrounding environment in a way that makes it very difficult for an attacker to be successful. Also consider that just because the product was connected to the internet for a couple of weeks doesn’t mean that anyone with skill will try to attack it. But sometimes they do. One vendor that ran five “hacker challenges” in the 1990s was hacked on the fifth. As of late 2007, the vendor had not paid the $50,000 “reward” it advertised. We’re willing to bet that the vendor counted on the game being sufficiently rigged that they would never have to pay up.

One marketing tactic is to publicize that a product is used by this company or that government agency. Because they have such a reputation for security, the implication is that the product must be useful. That organization might well use a certain product, but perhaps for reasons that are not obvious to the rest of the world—partnerships, licensing agreements, experimental purposes, etc. The organization might also be augmenting the product with homegrown technologies or using it in a unique way. Lastly, note that just because someone buys a particular product doesn’t mean that he will deploy it. It could be “shelfware,” gathering dust.

A topic closely related to marketing and sales tactics is the relationship that industry magazines have with vendors. These magazines rely almost entirely on advertising from product vendors for their revenue. They offer free subscriptions to “qualified” subscribers, where “qualified” means anyone who can fill out the form. These magazines also perform product reviews. In one issue of one such magazine, the average rating for the twenty security products reviewed was four stars out of five. Perhaps the magazine publishes results only for products that reach a certain rating (although one product scored only two stars in that same issue), or perhaps the reviewers really do love those products. Another view is that because the magazine depends on advertisements placed by vendors, it has an incentive to provide good reviews. Beyond that conflict of interest, a focus on products can create the idea that the problems “fixed” by products are the problems we should focus on. This is the path of Sisyphus, cursed to push a boulder up a hill for eternity.

Several industry conglomerates and organizations have established security checklists and certifications for businesses. The credit card associations have jointly established a set of criteria and checklists known as Payment Card Industry (PCI) standards, by which companies can be certified as having sufficient security to handle credit card payments. In a number of cases, companies that were certified to the PCI standard have suffered spectacular failures in their security. One of the most well known occurred in 2005, when CardSystems Solutions suffered a security breach in which attackers stole 40 million credit card numbers. Prior to the breach, CardSystems had hired a set of outside auditors to verify its security certification. “We followed the Visa rules to the letter, and the people who did the work are longtime security experts,” the leader of the audit team was quoted as saying.

CardSystems had the required security certification, but its security was compromised, so where did things go wrong? Frameworks such as PCI are built around checklists. Checklists compress complex issues into a list of simple questions. Someone using a checklist therefore might believe he has done the right thing, when in fact he has not addressed the underlying problems in depth. For example, it is common to see checklists that require particular cryptographic algorithms, such as triple DES or 256-bit AES. A good encryption algorithm is necessary, but it is not sufficient: proper key management matters just as much. Imagine that every installed instance of a product ships with the same key. Anyone who learns that key can read all the encrypted traffic, no matter how strong the algorithm. The checklist format allows such important issues to be glossed over.
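The key-management point can be made concrete with a short sketch. The cipher below is a deliberately toy construction (XOR against a SHA-256-derived keystream), standing in for whatever algorithm a checklist might mandate; the vendor, product key, and messages are all hypothetical. The weakness it shows is real, though: if every installation ships with the same baked-in key, extracting the key from any one copy exposes every customer’s traffic, regardless of the algorithm’s strength.

```python
import hashlib

# Toy keystream cipher: XOR plaintext with bytes derived from the key.
# This stands in for a "checklist-approved" algorithm; it is NOT secure
# encryption and is for illustration only.
def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse

# A hypothetical vendor ships the SAME key inside every copy of the product.
HARDCODED_KEY = b"factory-default-key"

# Two unrelated customers deploy it and "encrypt" their traffic:
msg_bank = encrypt(HARDCODED_KEY, b"wire $1,000,000 to account 42")
msg_shop = encrypt(HARDCODED_KEY, b"card number 4111 1111 1111 1111")

# An attacker who extracts the key from ANY one installation reads ALL traffic:
print(decrypt(HARDCODED_KEY, msg_bank))
print(decrypt(HARDCODED_KEY, msg_shop))
```

Per-installation keys, and a procedure for rotating them, close this particular hole; a checklist line that asks only “Does the product use AES-256?” never surfaces the question.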

Conventional wisdom presented in short checklists makes security look easy. A checklist implies that there is an authoritative list of the “right” things to do, even if no evidence of that simplicity exists. This in turn contributes to the notion that information security is a more mature discipline than it really is. Checklists, frameworks, and business certifications force companies to at least address the elements of security that each framework demands. (“A rising tide lifts all boats,” perhaps.) On the other hand, all checklists are moot if we can’t validate the value of their items. If we don’t know which initiatives have value, companies are spending money very inefficiently, because insufficient evidence exists to determine the true value of controls.

In a similar fashion, the term “best practice” is a powerful lever for bringing someone around to your point of view. When you hear the phrase, you might assume that an official body vets and evaluates “best practices”: much as the U.S. Department of Agriculture (USDA) promulgates nutrition guidelines, surely some equivalent, a National Institute of Computer Security perhaps, must exist. It does not. “Best practices” are simply activities that are supposed to represent collective wisdom within a field. When a company lacks expertise in a certain problem domain, or lacks the time or inclination to obtain it, it is very common for the company to simply adopt the “best practices” in that area. “Best practices” have proliferated within IT, and especially within the security industry. Many security practitioners perceive the pursuit of “best practices” as the definition of a diligent security strategy. But we have to consider where “best practices” come from: they are dictated by consultants, vendors, and the security industry as a whole. Each of these groups has a vested interest in the security decisions that are made, and anyone can (and does) call their advice a “best practice.” People find it difficult to argue against an authority or a perceived majority.

“Best practices” typically don’t take into account differences between companies or, more generally, between industries. The security decisions at an oil firm are made in a very different context than in a clothing wholesaler, and yet we are told that “best practices” can apply to both. (Perhaps the same policies or actions do make sense for both. Declaring them “best practices” brings us no closer to figuring that out.) There is no good reason for a food packaging company to implement the security measures that a government agency might adopt. But the National Security Agency (NSA) guides for “securing” certain operating systems (which can be more than a thousand pages) are sometimes applied outside the government agencies for which they are intended. In reality, implementing such highly aggressive security measures is likely to lead to significant heartache and costs caused by trying to set the bar too high.

“Best practices” are designed to be vague enough to apply in the general case. Therefore, they are highly unlikely to match the specifics of any particular environment, and so they are inefficient by their very nature. For example, most law firms have not taken many steps to protect their core information asset: their document management system. Why? Perhaps because document management systems are somewhat of a niche technology, so they are not referred to in any generic “best practice” document! For a consulting firm, a “best practice” allows it to create a template for a piece of work so that it can easily be repeated, thereby increasing the firm’s margin on similar engagements. As soon as a template and a script for performing a piece of work exist, an associate can do what would have previously required a senior manager or partner.

When we hear the phrase “best practice,” we consider it a best practice to say: “Why? Prove that this ‘best practice’ really makes sense. Show me the evidence.” Certainly, in some areas, standard ways of working have value. Some “hardening” practices can make an operating system more resistant to attack, such as turning off and removing unneeded functionality. It is also clear that some practices can help find the mathematical weaknesses in a cryptographic algorithm. Backing up data clearly has a lot of value. Software vendors have engineering insight into the systems they have built and can offer you useful advice on how best to operate them. We remain wary of “best practices” that are not grounded in mathematics, engineering, or science, because they are unlikely to ever be proven.

Consumer Reports is a publication that accepts no advertising. This arrangement allows it to focus unreservedly on the interests of its customers. Currently there are no “consumer reports” for security products and services, so it can be difficult to prove a product’s worth. In the absence of empirical ways to do so, the security industry has historically used sales tactics such as appealing to fear. Perhaps this is not so different from other markets. It may also be that vendors truly believe that their product or service is a panacea or the ultimate solution, and that the ends (better security) justify the means. But even if they are right, the net result is not positive. Companies scared by security risks and led astray by vendor marketing are likely to spend very inefficiently. When acting as a consultant, one of the authors has on several occasions been told by companies that they have no time to document their environment, and then shortly thereafter been asked what brand of new security product they should buy. What would ultimately be more beneficial to those companies: understanding their IT environment, or adding the latest security technology to the mix? It could conceivably be the latter, but most companies would be better off having an accurate and up-to-date understanding of what they have built.

In Conclusion

We have reviewed some current thinking in the security field and have seen how current approaches have contributed to the malaise we described in Chapter 1. Almost no one in the security industry acts maliciously, but the decisions that we each make on an individual level don’t always lead to the results that everyone wants. Ideas that are convenient to express can become the conventional wisdom, but even the relatively short analysis within this chapter has revealed large problems with that conventional wisdom. Buying products and “solutions” is easy, but the right thing to do is not always the easiest. The commercial security industry as it currently exists arranges itself to solve the problems that are in front of it and that it believes it can solve, so we shouldn’t let it dictate priorities.

How then can we improve? One way is to look at other fields to see what we can learn. In 2003, Michael Lewis published Moneyball, a book about how the Oakland A’s baseball team uses statistics to decide which players to buy and sell. In the book, Lewis describes how the general manager of the A’s, Billy Beane, determined that the conventional wisdom for evaluating baseball players, such as focusing on batting average, was used simply because the data was easy to acquire. In the early 1970s, the Society for American Baseball Research had begun developing a more detailed form of analysis, now known as sabermetrics, that aims to measure the contribution of every player to the team’s success. By being the first club to use that insight, the A’s became competitive with teams that have far deeper pockets, such as the New York Yankees. Moneyball also describes the clash in orientation between the old and new ways of doing things.

Like Billy Beane and the Oakland A’s, we in the security field must step back and examine the ways in which we make decisions, and see whether we can do better. It won’t be a particular product or service that will drive that change. The answer lies in embracing objective data about successes and failures. Without proper use of objective data to test our ideas, we can’t tell if we are mistaken or misguided in our judgment about what is important. Our particular orientations might lead us to believe that we alone can see the solution, and we might then turn it into a “best practice”! The answer to these challenges lies in gathering and using evidence, to which we now turn.
