Chapter 3. On Evidence

In 1610, Galileo published his observations of Jupiter’s moons. He used his findings to argue in favor of Copernicus’ heliocentric model, in which the sun, not the Earth, is the center of the solar system, contrary to what was widely believed at the time. In 1616, the Catholic Church formally denounced Copernican astronomy, and Cardinal Roberto Bellarmino instructed Galileo not to hold, defend, or teach it. Years later, the Inquisition tried Galileo and confined him to house arrest for the rest of his life. Galileo was punished because his findings undermined Church teachings. The Church was a tremendously powerful institution, with a bully pulpit that extended across Europe. It also had its own courts and police, and the power to excommunicate someone and damn a soul for eternity. In the end, history vindicated Galileo, because it is not power that counts, but reality.

Chapter 1 looked at some problems with information security. The subjects we touched on included organized computer crime, spam, viruses, data breaches, and identity theft. Chapter 2 discussed some structural issues within the security industry that, perversely, help these problems to persist. Some use highly visible failures such as viruses and spam to bolster their claim that information security is in radical decline. Others are equally resolute in their belief that we are either continuing to make steady gains or at least are not falling behind. Extremely skilled and experienced security practitioners can be found in both camps, but ultimately both positions should be tested to see if they can be disproved. If a position can’t be tested, it’s a belief, not science. This chapter examines sources of evidence and the value they might provide. In doing so, we can judge to what degree they help us evaluate claims, large and small, and make sound security decisions.

Someone with no training in physics might think that heavier objects fall faster than lighter ones. But we know through scientific testing that all objects falling toward the Earth accelerate at the same rate (although they may be slowed by air resistance). It was Galileo who posed the hypothesis that everything falls at the same rate. He supposedly performed a test in which he simultaneously dropped two objects weighing different amounts from the Leaning Tower of Pisa.

The scientific orientation has been incredibly effective at increasing our understanding of the world. It includes formulating and testing hypotheses and sharing the methods of testing and results of those tests. A hypothesis is simply a testable suggestion. Tests never actually prove a hypothesis. Good tests fail to disprove the hypothesis being tested and thereby provide evidence in favor of the hypothesis. The difference is both subtle and important. For example, someone might hypothesize that the coelacanth fish is extinct, yet this was disproven in 1938 when a living specimen was caught. It’s impossible to prove a negative, because there’s always the possibility that a counter-example is out there somewhere. We might have other ideas we believe, but evidence to disprove them might be lurking right around the corner. The disproof of a hypothesis may or may not make it worthless. For example, Newton’s laws of motion are still used in civil engineering, even though they are “wrong” and don’t apply at very high speeds or at atomic scales.

The ideal way to test a hypothesis is by experiment. Good experimental design changes one variable at a time and sees what else changes, or pits two hypotheses against each other. In the real world, constraints often make it difficult to create such a controlled experiment. Ethics, cost, and matters of practicality all impose limits. We can’t move planets from their orbits to see if interesting side effects come about. Doctors don’t believe they can really control, or even get accurate reports about, how much exercise people do over long periods of time. These constraints have led to a great deal of insight about how we can study the world without controlled experiments. Researchers have developed methodologies such as the double-blind study, in which neither the patients nor the doctors know who gets a placebo and who gets real medicine. This prevents expectations from having as much impact on the study. More generally, statisticians have developed sophisticated methods for extracting information from large studies, and the advent of cheap computers makes those methods ever more powerful. These methods aren’t used much in security, because we lack data to which we could apply them.

There are fundamental questions we’d like to be able to answer. How effective are we as security practitioners? Are we pulling ahead of the bad guys or fighting a losing battle? If we are not improving, what’s broken? In the absence of good testing and evidence, we can’t answer these questions. Without the answers, talking points that claim either an improvement or a radical decline in security are unsubstantiated at best and self-serving at worst.

The act of seeking evidence is valuable, both when making decisions and when examining past strategy. The “state of the art” in information security is continuously moving forward, with new tools and techniques being developed all the time. But in the real world, a student cannot graduate simply for having perfect attendance. The best intentions are no substitute for a passing score on the final exam. We should look for evidence of our success or failure to see whether we are really making progress or losing ground.

You might think that evidence abounds. It can feel like every technology magazine contains the results of a new survey about some aspect of security. The details of new security vulnerabilities are publicized far and wide. A cornucopia of “factoids” and articles in the security press and on security and IT web sites discuss vulnerabilities, hacking incidents, and so on. But are these data points useful? Do they enable us to make better security decisions, or is their purpose simply to entertain or distract? Do they allow us to formulate and test hypotheses? Do we have context for the data? Do we understand where biases might have crept in (or been introduced)? These questions are tests we can apply to any source of data to gauge its worth. In this chapter, we apply them to many of the sources of information that people rely on to make their security decisions.

It can be tricky to find objective data about information security. For many security initiatives, the more successful they become, the fewer bad things happen. The difficulty then becomes proving a negative: showing that a security incident did not happen as a result of those actions. Was the initiative really a success, or was the outcome just luck? Perhaps an incident would not have occurred even if the security measures were not in place. The absence of something is hard to quantify beyond noting that it did not occur. We often learn how a security measure can fail only after it fails. The majority of stories about information security in the popular press today describe incidents in which security has failed. Success is often silent, invisible, or boring. It doesn’t make for a good story.

Contrast this with how telecommunications companies treat availability. They offer service-level agreements promising 99%, 99.995%, or some other level of availability, and they engineer their systems to meet those targets. Telecommunications managers have learned to measure uptime, and they weigh what each level of improvement costs. Moving from “three nines” (99.9%) to “six nines” (99.9999%) of uptime is usually exceptionally expensive, so people make deliberate choices about the appropriate level of investment in reliability. (To put “six nines of uptime” into perspective, it allows only about 30 seconds of downtime per year.)
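
The arithmetic behind these figures is simple. The short sketch below, a plain illustration rather than anything drawn from a vendor’s tooling, converts an availability percentage into the downtime it permits each year:

```python
# Convert an availability percentage into the downtime it allows per year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds

def downtime_per_year(availability_percent: float) -> float:
    """Return the seconds of downtime per year implied by an availability level."""
    return SECONDS_PER_YEAR * (1 - availability_percent / 100)

for level in (99.0, 99.9, 99.995, 99.9999):
    seconds = downtime_per_year(level)
    print(f"{level}% uptime allows {seconds / 3600:.2f} hours "
          f"({seconds:.0f} seconds) of downtime per year")
```

At 99% availability, more than 87 hours of downtime a year are allowed; at 99.9999%, barely half a minute. Each additional “nine” cuts the allowed downtime by a factor of ten, which is why the later nines are so expensive to achieve.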

For telecommunications companies, data about the uptime of their systems is relatively easy to gather. Measuring security is much trickier. The first hurdle in measuring security is the question of what should be measured. What is “security”? The broadest answer is that security is a concept, much like happiness or intelligence. With intelligence, we’ve distilled some of that concept into properties that can then be measured, such as with an IQ test. There is nothing wrong with taking a high-level concept and deconstructing it into more palatable pieces; in fact, this is a useful scientific technique. We must do likewise with security, because if security can’t be measured, it continues to be impossible to say whether we have more of it today than we did yesterday. Some of that analysis likely involves concepts from disciplines such as psychology, engineering, and operations. Security might be defined in terms of an absence of fear or the ability to resist attacks. It could also be defined as the level of assurance we have that engineering delivers what we hope it will deliver, even in the presence of clever adversaries. Any definition of security we propose is likely to be controversial. Where it enhances our argument, we’ll be precise either in the text or in an endnote. Otherwise, we’ll try to avoid a semantic war over definitions.

Philosophy aside, in the real world it is easy to find out precisely how many people died in car accidents last year. Data from independent tests is available regarding how likely occupants are to survive an accident in any particular make and model of car. But how long can a computer resist an attack? Is security better this year than last? Is there overspending or underspending on security? How does the level of security at any one company compare with its peers? Does the information that is available today allow us to answer these questions in an objective manner? Let’s look at the potential sources of evidence that are available today, and we shall see.

The Trouble with Surveys

One of the most straightforward ways to try to understand a problem is to conduct a survey and see what everyone thinks. A logical place to begin looking for evidence is therefore in security surveys. Lots of people have done many surveys about various aspects of information security, but doing any survey in a scientifically defensible manner is hard. The first challenge is designing a set of questions that don’t betray bias. Writing good survey questions is difficult, because the way in which a question is asked influences the answer. If someone is asked whether he thinks it is acceptable for an endangered species to become extinct, he will probably answer differently than if he is asked if the protection of a certain species of insect is more important than educating children.

The next challenge is finding a suitable set of respondents. For security surveys this can be tricky, because organizations hate talking about security failures. It is next to impossible to force any particular set of companies to reveal their history of security failures. A great many security surveys are simply published, and then anyone is allowed to respond. The difficulty here is that because anyone can answer the survey, the survey sample becomes self-selected. The people who answer the survey are unlikely to be representative of the larger world. This problem of self-selection is well known in surveying.
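
To get a feel for how badly self-selection can distort a result, consider a toy simulation. Every number in it is invented purely for illustration; the point is the mechanism, not the figures.

```python
import random

random.seed(1)

POPULATION = 100_000          # hypothetical organizations
TRUE_INCIDENT_RATE = 0.10     # assume 10% actually suffered an incident (invented)
RESPONSE_IF_HIT = 0.30        # victims are keener to answer the survey (invented)
RESPONSE_IF_NOT = 0.05        # everyone else mostly ignores it (invented)

responses = []
for _ in range(POPULATION):
    had_incident = random.random() < TRUE_INCIDENT_RATE
    response_rate = RESPONSE_IF_HIT if had_incident else RESPONSE_IF_NOT
    if random.random() < response_rate:
        responses.append(had_incident)

observed_rate = sum(responses) / len(responses)
print(f"True incident rate:   {TRUE_INCIDENT_RATE:.0%}")
print(f"Survey-observed rate: {observed_rate:.0%}")
```

Even though only 10% of the hypothetical population suffered an incident, the self-selected sample reports a rate of roughly 40%, simply because the organizations with something to report were more inclined to respond.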

Security brings a unique set of challenges to surveying. Psychologists talk about the “valence effect,” which is people’s tendency to overestimate the likelihood of good things happening rather than bad things. Security professionals who are “in the trenches” will likely experience a reverse valence effect: believing that things are worse than they really are. This is because our recollection of painful events is highly influenced by the degree to which we were personally affected. Cops on the beat become cynical because they see bad things happen every day. Many security surveys today are answered by people who work directly in security, and these are the people fighting viruses and attacks.

One of the best-known surveys in computer security is the annual survey performed by the Computer Security Institute (CSI). In the 2004 edition of the survey, respondents were asked to rate the degree to which they agreed with the statement “My organization invests the appropriate amount in security awareness.” The majority felt that their organization did not invest enough. But most respondents had the word “security” in their job title, so we have to consider the effect that selection bias might have had on this result. If the CFOs of those organizations were asked the same question, would their combined answer have been the same? No security survey that we know of targets only people outside the field, and this invites selection bias. (An anonymous telephone survey of the authors of this book found that 100% of respondents felt this effect was “huge.”)

It is also clear that the organizations at which the respondents to security surveys are employed are not representative of average companies. The 2004 edition of the CSI survey had 486 respondents. For the organizations in which those respondents worked, 66% had more than 500 employees, and 81% had more than 100 employees. More than half of the companies (57%) had $100 million or more in annual revenues. We might reasonably expect that large companies experience more security activity than smaller companies. (Larger companies have more personnel and a larger internet presence. In short, they have more of everything—security activity included.) We also have no way of knowing within the companies surveyed whether wide variations exist in the degree of dependence on IT, corporate culture as it relates to propensity for risk-taking, and opinion as to the need (or not) for security measures.

A further impediment to the accuracy of surveys is that the vocabulary used in the security field is imprecise. Different people interpret terms such as “attack,” “threat,” “risk,” and “intrusion” in subtly different ways. The concepts of a “vulnerability” and a “threat” are often confused, as is the difference between a “threat” and a “risk.” Some businesses might perceive “port scanning” (the electronic equivalent of rattling a doorknob) as a hostile action, and others might consider it relatively harmless. Without a survey methodology that defines such terms, it is impossible to know what the respondents really meant.

Two professors at George Washington University analyzed the fourteen security surveys that were the most widely publicized from 1995 to 2000. They found that those surveys were replete with design errors in the areas of sample selection, the form of questions asked, and underlying methodologies. Unsurprisingly, the survey findings were wildly divergent. Seven of the fourteen surveys asked if the organization for which the respondents worked had a security policy. The results ranged from 19% in one survey to 83.4% in another. Eight surveys asked whether there had been unauthorized access to systems. The results ranged from 4% in one survey to 58% in another.

Today’s security surveys have too many flaws to be useful as sources of evidence. Survey data does have some value, because it engenders the idea that data sharing will improve our collective understanding of security. But data from the security surveys that predominate today merely captures the feelings and perceptions of the respondents. This can be interesting if that is what you have set out to discover, but it doesn’t help you make better decisions. Unfortunately, such surveys are often portrayed as authoritative. Bad data is often used to bolster claims that new security products or services must be purchased, whether by vendors, consultants, or the security staff within companies. The U.S. Government Accountability Office (GAO) has presented flawed data from security surveys to the U.S. Congress, possibly influencing government policy. Similarly, data from surveys has been presented to the House of Lords in the United Kingdom, and probably elsewhere. (Refreshingly, the Lords were skeptical of the value of such surveys.)

Sometimes “statistics” take on a life of their own and become woven into the conventional wisdom. A prime example in information security is the claim that 70% of all incidents are caused by insiders. A well-known technology research and advisory firm has written that “70% of security incidents ... involve insiders,” and the report went on to state that “This finding should surprise no one.” Yet the report based that claim on only two incidents. In our research for this book, we were unable to find any credible evidence in support of the claim that 70% of all incidents are caused by insiders. A commonly quoted source was a survey carried out by the Association of Certified Fraud Examiners, but its focus is fraud and white-collar crime; the document does not even contain the word “security.” Insiders certainly cause security incidents; we have seen and responded to them. But unless we know how many security incidents occur in total, we have no way of figuring out what fraction are caused by insiders. Examining security controls from the perspective of an insider is probably worthwhile. But let us not allow our analysis of the larger problem domain to be swayed by catchy memes or “movie plot” stories. That is the path to misguided, inefficient spending that is ultimately driven by emotion and not evidence.

The Trade Press

Information security is now a large enough field to support several monthly magazines. These include publications targeted at practitioners, managers, and researchers. We refer to these as “the trade press” and exclude hacker magazines and academic journals that publish peer-reviewed articles.

The trade press provides several functions, including the broad dissemination of ideas, giving people a common frame of reference, and advertising new products and services. The trade press can be a good medium for publishing information that is not time-critical, such as stories and experiences. Early stories in the media introduced and popularized new attacks. An example was the “salami attack,” in which a rogue programmer took rounding errors from thousands of accounts and put those fractional amounts into his own account. “Secrets of the Little Blue Box” was a 1971 article in Esquire magazine that explained how to manipulate the phone network. More recently, books such as The Cuckoo’s Egg have told the story of a West German hacker, selling his findings to the KGB, who broke into U.S. defense department networks over the nascent internet, using the computers of a Berkeley research laboratory as a jumping-off point. These stories are great for what they are, but what makes them interesting is that they are unique. Stories typically need to say something new to be considered publication-worthy. This means that the more extravagant a story, the more likely it is to be published. The trade press leans toward stories that have entertainment value rather than instructional value. But entertainment is a poor source of evidence.

Magazines are ill-suited for publishing time-critical data. Some information security magazines dedicate several pages of each issue to presenting data about new security vulnerabilities and the prevalence and quantity of different types of attacks. This information is tracked month to month. One of the major data graphics used by one magazine is a map of the world showing the number of attacks that originated from each continent. A number of magazines track a top ten list of internet attacks.

If the intent of presenting such data is to empower and impel action, we must ask: what action? The types or quantities of attacks that occur on the internet as a whole have no particular significance to any one organization; they are background noise. You might enjoy reading about them, but they shouldn’t influence your day-to-day security decisions. Whether the general trend is up or down might be interesting, but it generally isn’t useful, and by the time a magazine reaches its readers, the information is stale. Perhaps the intent of publishing attack counts is to suggest that security is failing, or that the world is becoming a generally more dangerous place. We might expect, however, that as the internet grows and gains more participants, the number of attackers will also rise, and that those attackers will automate attacks they previously carried out manually, increasing the number of attacks observed. A drop in the number of attacks detected might even mean that things are getting worse: attacks may be becoming stealthier or more successful, and therefore going undetected. No particular conclusion can be drawn from an increase or decrease in the number of attacks.

The internet is a far more suitable medium than books or magazines for delivering time-critical information such as the details of new attacks. Since 2001, the Internet Storm Center has acted as a clearinghouse for data about vulnerabilities and attacks that occur on a wide scale on the internet. The benefit of such sites is that they provide a window into activity that exists outside any one organization’s environment. In theory, this then helps the organization determine whether it is being targeted or whether large swaths of the internet are also being attacked. These information-sharing efforts have value where they are carried out in an open and collaborative fashion.

Vulnerabilities

A vulnerability is a flaw in software that can be exploited. (The term isn’t always defined this way, but it is how we use it in this book.) Vulnerabilities are often discovered by researchers, who may use them, sell them, or disclose them to parties such as the affected vendor or the broader research community. How to maximize the value of vulnerability disclosure while minimizing the harm remains a controversial question. Here, we will focus on vulnerabilities as a possible source of evidence.

The task of researching and finding vulnerabilities in software can be time-consuming and complex, and it can require highly specialized skills. However, the process of capturing and categorizing the quantity and types of vulnerabilities that researchers have found and published has become fairly robust. A number of databases freely available on the internet provide data about many years of vulnerabilities found in major software packages, including operating systems. But even though a large quantity of data about vulnerabilities exists, opinions vary as to whether the data signifies an overall improvement in security or a decline. It is difficult to apply the scientific method, because a number of variables are changing at the same time. Software houses add functionality to attract and retain customers. But as discussed in Chapter 2, the larger a piece of software becomes, the more opportunity there is for that code to have bugs, some of which will probably result in security vulnerabilities. The ability of software houses to find vulnerabilities in their products depends on the quality of their staff and tools. Public vulnerability counts depend on both the number of researchers and their skills. The side with the most people and the best tools will be most successful in the long run.

Software vendors have started trying to differentiate themselves from their competitors on the basis of the security of their products. Oracle has advertised some of its products as “unbreakable,” even going so far as to claim that unauthorized users “can’t break it” and “can’t break in.” Those claims were met at the time with incredulity from security researchers. In fact, an increasing number of vulnerabilities have continued to be found in Oracle products. Oracle has since backed away from the “unbreakable” claim, suggesting that the term actually refers to a “commitment to a secure product lifecycle.”

Today it is not possible to write a computer program of any meaningful size in such a way that it can be proven that it does not have bugs, even when perfection is the goal. The space shuttle has been held up as a paragon of software-development virtue, but even its code has bugs. There have been cases where a vulnerability has been found in a piece of code that was written ten or twenty years earlier and reviewed many times. Tracking the number of vulnerabilities found in a particular IT platform or software product might be more indicative of how aggressively the code is being investigated for vulnerabilities, rather than the base rate of vulnerabilities that exists. Old code can also be found to be vulnerable to new categories of vulnerabilities as they are discovered.
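
Counting published vulnerabilities is mechanically easy, which is part of the temptation. The sketch below tallies records by year for a single product from a local export of one of the public databases; the file name, column names, and product name are hypothetical, since the real databases differ in format.

```python
import csv
from collections import Counter

def count_by_year(path: str, product: str) -> Counter:
    """Tally published vulnerability records by year for one product.

    Assumes a CSV export with columns: id, product, published_year.
    """
    counts = Counter()
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row["product"] == product:
                counts[row["published_year"]] += 1
    return counts

if __name__ == "__main__":
    for year, total in sorted(count_by_year("vulns.csv", "ExampleOS").items()):
        print(year, total)
```

Whatever trend such a tally shows, it conflates the number of flaws present with the amount of scrutiny the code is receiving, so the count alone says little about the underlying base rate of vulnerabilities.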

As an aside, security vulnerabilities have been found in both commercial and open-source products built and sold by most of the major producers of information security software. In 2004, an internet worm used a single vulnerability in a security product line to compromise approximately 12,000 computers. The worm accomplished this feat in about 90 minutes, and it also carried an intentionally destructive payload. This raises an important point: vendors of security products are no different from any other software vendor in terms of the internal and external forces that push on them. They want to sell products, and the sooner the better. Bugs and vulnerabilities are a natural consequence. Marcus Ranum described the situation like this: Imagine that you are the developer of a new piece of software. You can bring your product to market before your potential competitors do, knowing that it is insecure, and be driving a Ferrari in six months. Or you can spend the time to fix the bugs. Which option will you pick?

We feel that the investments software vendors are making in the security of their products will result in more secure products. Even so, it will remain difficult to test this hypothesis in a scientific way. First, as we noted, there is no single definition of security. Second, simply counting vulnerabilities isn’t enough. (A lack of logging functionality may well be a security issue, but it is not a vulnerability.) Vendors are also under no obligation to report the vulnerabilities they find in their own products, or to fix them. Even if they did, they might have sufficient incentive to game the system and release numbers that are not objective. For prospective customers, the easiest external measure is the number of reported vulnerabilities. Another potential measure is the ability of software products to achieve “certification.” Product certification schemes in security suffer from the problem of defining what is to be tested and how, which can turn out to be a can of worms. Government attempts to help, most recently in the form of the “Common Criteria,” have produced excellent cures for insomnia at the cost of many trees. They have had few other benefits.

It will continue to be very difficult to evaluate and compare products on the basis of their security. (Some very interesting properties that we would like to know about, such as the number of vulnerabilities waiting to be discovered, turn out to be very hard to measure.) As a potential source of evidence, data about vulnerabilities is intriguing. However, it is challenging to apply well.

Instrumentation on the Internet

Since the late 1990s, academic and hobbyist security researchers have been deploying security sensors on the internet and sharing their results. Some of these sensors are configured to capture a wide variety of network traffic. Others are narrowly configured to detect only very specific behavior. Sensors that are loosely configured might detect an increase in traffic of a given type, but they likely won’t be able to analyze it in detail because of the amount of data they must process. Sensors that are designed to detect specific types of attacks miss all unknown attacks. This type of instrumentation has more value when the results are shared quickly, because it allows organizations to determine whether the activity they see is happening elsewhere. Monitoring the internet at scale can be quite expensive, but it provides a wealth of interesting data.
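
The core idea of a sensor is simple, even if operating thousands of them is not. The following sketch, deliberately minimal and not how any production sensor network is built, listens on an otherwise unused port and records every connection attempt. On a machine that offers no real service on that port, any connection is unsolicited and therefore interesting.

```python
import socket
from datetime import datetime, timezone

LISTEN_PORT = 2323  # an arbitrary, otherwise unused port chosen for illustration

def run_sensor() -> None:
    """Accept connections on an unused port and log each attempt."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen()
        while True:
            conn, (address, port) = server.accept()
            stamp = datetime.now(timezone.utc).isoformat()
            print(f"{stamp} connection attempt from {address}:{port}")
            conn.close()

if __name__ == "__main__":
    run_sensor()
```

Aggregating logs like these from many vantage points, and sharing the results quickly, is what turns such a trivial idea into a useful window on internet-wide activity.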

An important research effort in this space is the Honeynet Project. This group connects computers to the internet with the expectation (and hope!) that they will be attacked. The Honeynet Project team members then monitor, analyze, and share information about the attacks observed. Their data provides a fascinating look into the mind-set and methods of attackers. Because the computer networks that the Honeynet Project establishes are not “real,” all the activity they see is hostile. This methodology is powerful because it allows attacks to be seen easily. Unfortunately, it also means that the results are less useful for answering broader questions about why information security fails. User mistakes seem to be a major cause of security failures, and that dimension is missing from “honey” experiments.

As a potential source of evidence, instrumentation on the internet provides a useful and focused source of technical data.

Organizations and Companies with Data

A number of organizations sponsor information-sharing efforts. Some organizations also collect, collate, and report on data. In the aftermath of the first internet worm, a Pentagon agency established a Computer Emergency Response Team (CERT) that has been widely imitated around the world. CERT acts as a clearinghouse for information about security vulnerabilities and incidents. Its data set is composed of information that has been voluntarily reported to CERT, and that data is kept secret. (CERT staff have occasionally published their analysis of that data.)

More recently, the United States has introduced Information Sharing and Analysis Centers (ISACs). According to their members, ISAC meetings are a great place to collect business cards but are not particularly useful for anything else. Similar private groups exist, such as the Forum of Incident Response and Security Teams. We could go on, but all such organizations try to address their members’ concerns about sharing data by making new members jump through hoops. Sometimes these hoops are financial, with substantial dues required. In the end, these organizations tend to focus more on secrecy than on useful information sharing. This essential problem has plagued every information security sharing effort of which we’re aware. A sad irony is that some of the most effective information-sharing programs have been set up by the underground to share information about new vulnerabilities. Over the years, members of the underground have set up and operated important information-sharing initiatives, including mailing lists such as Bugtraq and web sites such as PacketStorm and Milw0rm.

Some companies publish reports as a way to market their expertise on a variety of subjects, from the details of attacks that occur to high-level summaries of spending patterns and analysis of new technology trends. It’s only natural that companies that sell security products gravitate toward researching problems that their products are designed to fix. Unfortunately, it’s rare for such reports to contain their underlying data. This may be because the company believes that the data is proprietary and provides it with some type of competitive advantage. Whatever the reason, this makes it hard for the reader of the report to judge the analysis: whether the data was gathered well and analyzed in a reasonable way, or whether the report might suffer from bias. Authors of such reports fight a battle in the reader’s mind to quell the suspicion that the findings just happen to support the need for the product or service that the vendor is selling. If the goal of gathering data is marketing, that objective is likely better served by giving away not only the report, but also the data.

So far we have discussed possible external sources of evidence. Both organizations and individuals have one source of evidence that is readily available: their own experiences. Many organizations remember what went wrong in their past. Sometimes this takes the form of stories, and sometimes it is real incident data and analysis. This is a great source to study, but it is tremendously hard to compare across organizations. As people move from organization to organization, they may bring analysis methods with them, but the data that fed that analysis probably stays behind, and it may be hard to understand what differences exist between the old organization and the new.

The preceding chapter discussed how our personal and group orientations create a lens through which we see the world. We need to work to overcome those lenses and preferences. We need to accept that cognitive dissonance can be caused by our orientations and the facts not lining up the way we’d like. However, it is only natural that many of the security decisions we make are based on our knowledge and understanding of past situations. This process of making and correcting mistakes and of making and testing decisions can be viewed as a feedback loop. Feedback loops are an essential part of how we learn. To tap into those feedback loops of other individuals and groups, and to mitigate the possibly misleading effects of navel-gazing, we need mechanisms for effective information sharing. We have already noted the problems with how today’s “information-sharing” organizations operate. Unfortunately, they have failed.

As a potential source of evidence, organizations and companies with data have the potential to be tremendously useful. The content that flows through these channels may not be objective. It can suffer from many of the problems we have outlined in this chapter. But the idea that more information, more evidence, ultimately leads to better decision-making is persistent.

In Conclusion

The sources of evidence that we have reviewed suffer from a number of problems. Analysis is often presented without the underlying data being made available, preventing effective critique or further analysis. Among the examples we noted were badly constructed surveys, perverse incentives in the trade press, and the difficulty of drawing conclusions from vulnerability counts. The result is that the data that emerges from these sources rarely clears the bar of objectivity.

The search for objective data on information security is at the heart of the philosophy of the New School. Without objective data, we are unable to test our hypotheses. Since there is a drought of objective data today, how can we know that the conventional wisdom is the right thing to do? That is not to say that the conventional wisdom is necessarily incorrect, but as professionals, we find it profoundly unsatisfying to not know the answer either way. Being unable to test hypotheses fundamentally inhibits our ability to improve.

Our feeling is that overall we have made substantial progress in many areas and that we have better and more security than if we did not have security practitioners or a security industry. But having to start such a sentence with “our feeling” hurts. It puts us in the same boat as those who decry progress in security—a flimsy boat on the high seas—because we have little to no objective data to justify our position. The lack of objective data means that there can be little or no substantive argument either pro or con, only one based on circumstantial evidence or subjective judgment.

This state of affairs is unsustainable for the commercial security industry and for security practitioners. When individual companies or the economy as a whole suffer a downturn, the information security group and its initiatives are often terminated. “Risk reduction” is too amorphous a concept to fund when expenses are being cut. For security professionals, self-preservation and rational self-interest should lead us to focus on obtaining objective data about successes and failures and the factors that led to them. The next chapter discusses a new wellspring of objective data, one that comes from an unusual and possibly frightening source. Beliefs and feelings cannot carry us any further. The Catholic Church eventually acknowledged that it had been wrong to condemn Galileo because, in the end, it is reality that counts.
