© 2016 by Intel Corp.

Steve Grobman and Allison Cerra, The Second Economy, DOI 10.1007/978-1-4842-2229-4_10

10. Cybersecurity’s Second Wind

Steve Grobman1 and Allison Cerra2

(1) Santa Clara, California, USA

(2) Plano, Texas, USA

Not everything that can be counted counts, and not everything that counts can be counted.

—Albert Einstein

When New York City Mayor Rudolph Giuliani sought reelection to his post in 1997, he ran on a compelling record of accomplishments—particularly in making his citizens much safer. Facts were on the mayor’s side. The crime rate in the city had reached its lowest level in more than 30 years. With a reputation for tough policing across the crime spectrum, from minor offenses (such as noise ordinance violations, public drinking, and vandalism) to violent felonies, Giuliani had successfully cleaned up a city once notorious as a cesspool of criminal activity. During his term, the number of murders, rapes, and armed robberies dropped a staggering 61 percent, 13 percent, and 47 percent, respectively. 1 While critics were quick to point to macroeconomic conditions, such as changing demographics and a healthier national economy, as tailwinds in his favor, few could discount Giuliani’s remarkable achievements in restoring order to the city. And, as it turned out, few of his citizens did discount him: Giuliani handily won reelection in a veritable landslide, defeating his opponent by a 59-41 margin and becoming the first Republican to win a second term in the heavily Democratic city in more than 50 years. 2

Upon entering office for his first term, Giuliani warned New Yorkers against measuring his performance based solely on statistics. Within his first six months in office, he remarked on the continuing decline in crime rates, a trend that had begun midway through his predecessor’s tenure:

  • I have never been one who strongly relies on statistics as a way of measuring what we're doing. Obviously, you prefer the trend in the direction of declining crime, but I don't want the department to be overly focused on statistics. 3

By the time of his second term, Giuliani was a bit more willing to let statistics tell his story, and for good reason. Statistics had become his friend, not only allowing him to tout his accomplishments but helping him make them in the first place. Upon taking office, he suspected his force was focused on the wrong measurements. Law enforcement had historically measured its success by responsiveness to emergency 911 calls and the number of arrests made in a given day. Giuliani recognized that these lagging indicators could hardly help him identify where crime was most likely to occur. Instead, he began measuring the nature of crimes, including their frequency, locality, and timing, to determine patterns and allocate resources more effectively.

His approach worked. Focusing on the noise of small crimes actually diminished the occurrence of violent felonies. Asking the right questions to derive the correct metrics provided insights on where to deploy officers, rather than relying on reports that merely recorded previous activities. Giuliani successfully challenged conventional notions of big-city crime tactics and won in the court of public opinion with a second term. By virtually every account, New York City was the safest it had been in several decades. And then, far under the radar of Giuliani’s meticulous management and reporting, the deadliest terrorist attack on American soil struck one of the nation’s most crime-vigilant cities on September 11, 2001, revealing communications fissures across its first responder units and proving that even the most well-intentioned focus on metrics can be blinding.

Of course, it would be patently unfair to criticize Giuliani for the 9/11 attack. As previously discussed, the lack of intelligence shared across various governmental units, including with the mayor himself, created a smokescreen for the impending threat. Giuliani’s dashboard of metrics was confined to those immediate and imminent threats for which he and his team had visibility. From that perspective, it is fairer to laud the former mayor for putting in place a system that precipitously reduced city crime through a relentless focus on metrics (such as petty crime incidents) previously dismissed as too noisy and insignificant by prior regimes. Yet, despite making the city safer, Giuliani’s activities were insufficient to prepare for the black-swan attack looming beyond his preparedness for even the worst imaginable felony. Giuliani had no reasonable way of predicting that his nation was on the verge of war with al Qaeda or that his city would take center stage as the target. Certainly, understanding that his city was soon to be Ground Zero for the most perilous strike on US soil would likely have changed the outcome—or would it?

Raising the Noise

Tension in the Middle East has been simmering for at least the past 100 years. A persistent fight for territory divides the region across various factions with a complex history and even more complicated relationships. In 1967, the simmer reached a boiling point during the Six-Day War, during which Israel, fearing imminent invasion by Egypt, launched a preemptive attack to capture the Gaza Strip, the Sinai Peninsula, the West Bank of the Jordan River, and the Golan Heights—some of which were former territories of Egypt and Syria. Realizing their opponents would not take the loss lightly, Israel began securing its position along the Suez Canal, spending $500 million in fortifications along the designated ceasefire line. Israel couldn’t have been more right in predicting the motives of its adversaries—just six years later, the region would once again find itself at war.

When Egyptian President Anwar Sadat succeeded his predecessor in 1970, he was determined to restore pride to his country by reclaiming what was lost in the Six-Day War. Inheriting an economy in desperate need of reform, Sadat was convinced that his people would only tolerate such change if their territories were returned. He found an ally in Hafiz al-Assad, the head of Syria, who saw retaking the Golan Heights his country had lost years earlier as a strictly military option. The imminence of war was hardly a secret—Egypt publicly announced as much in 1972, when Sadat stated his willingness to “sacrifice one million Egyptian soldiers” 4 in his fight against Israel.

The intelligence arm of the Israel Defense Forces , known as Aman , was responsible for estimating the probability of war. In examining their situation, Aman had determined that war was predicated upon two factors. The first involved Egypt and Syria joining forces as it was (correctly) assumed that Syria would not go to war against Israel solo. Second, it had acquired intelligence from a confidential Egyptian source that Egypt would not engage unless first supplied defenses by the Soviets, specifically fighter-bombers to neutralize Israel’s aerial forces and Scud missiles to be used against Israeli infrastructure in deterring a comparable attack against Egypt. By late August 1973, the necessary Scud missiles had only just arrived in Egypt. No fighter jets had yet been supplied. And, it would take Egyptian forces at least four months of training to prepare themselves for battle, according to Aman’s estimates. For these reasons, Aman came to the conclusion that war was not imminent.

At the same time, Egypt did its part to reinforce this thinking in its enemy. The country had expelled almost all of the 20,000 Soviet advisors residing within it the year before, after believing the Soviets were undermining its military efforts by leaking secrets. Aman believed Egypt was in a weakened military state as a result of the Soviet exodus. Egypt furthered the misconception by planting false information of maintenance problems, lack of spare parts, and a shortage of personnel to operate the most advanced equipment. All the while, Sadat blustered war threats at his opponent, seemingly numbing them into believing he was neither serious nor prepared.

Israel had already invested $10 million responding to what it believed to be impending attacks by the Egyptians along the Suez Canal in recent months. As it turned out, the “threats” were nothing more than training exercises by Israel’s opponent. In the week leading up to Yom Kippur, the Egyptians once again staged a week-long training exercise adjacent to the Suez Canal. The Israelis, determined not to make the same mistake a third time, dismissed the effort. They also disregarded movements of Syrian forces along the border, given they believed Syria would not attack without Egypt and Egypt would not attack until the Soviet fighter-bombers had arrived.

In fact, the war had arrived, right under the Israelis’ noses. On the Jewish holiday of Yom Kippur in 1973, Egyptian and Syrian forces crossed the ceasefire lines in the Sinai and Golan Heights. The choice of attacking on Yom Kippur was no accident, as it rendered Israel ill-prepared on a day when the country comes to a virtual standstill in holy observance. Over the next roughly three weeks, Israel fought back, incurring considerable loss of blood and treasure, until an Egyptian-Israeli ceasefire was secured by the United Nations on October 25.

The aftermath of a war the Israelis should have seen coming was significant. While Israel readily defeated Syria once the ceasefire with Egypt was negotiated, taking more land in the Golan Heights as a result, Israeli Prime Minister Golda Meir stepped down just six months later. And, although Egypt technically suffered another military defeat at the hands of Israel, Sadat’s early military successes with his surprise attack on Yom Kippur enhanced the leader’s prestige and paved the way for the economic reforms he sought in his country. In the end, Egypt successfully deadened its opponent’s senses in detecting the imminence of the strike, even though Israel was “measuring” its enemy’s activities leading up to the very day of attack.

The examples of Giuliani’s New York City crime rate in the years leading up to 9/11 and the Israelis’ oversight of the impending attack on Yom Kippur are, in a strange way, opposite sides of the same coin. On one hand, Giuliani was fanatical about measuring leading indicators, down to the most microscopic of petty crimes, as harbingers of pending violent offenses against his citizens. His techniques worked in creating an absolute free fall in violent crime metrics during his tenure. Yet, despite his emphasis on creating a safer environment for New Yorkers, his radar was not tuned to a much more severe national threat and his forces were unprepared for anything on the scale of 9/11.

On the other hand, Israel was all too familiar with the stakes of war. It had received multiple warnings from its enemies to that effect. Israel witnessed Egyptian and Syrian forces coalescing along the border. It had erroneously responded to what were nothing more than false alarms raised by Egyptian forces conducting training exercises and was determined not to be duped again. It even had intelligence from friendly allies in the region, aware of the Egyptian-Syrian plan and unwilling to join the fight, that war was on the immediate horizon. All the real signs that an attack was under way were ignored as noise. If Giuliani’s radar was not tuned to a national threat, Israel was caught flat-footed by not heeding the warning signals clearly visible on its own radar. Both examples reflect the challenges inherent in anticipating a major event as unpredictable as war.

Obeying the Law

Cybersecurity is rich with metrics. Security operations first responders often receive hundreds, if not thousands, of alarms each day indicating where threats may be entering the perimeter. CISOs (chief information security officers) have no shortage of metrics available, though determining which matter most in positioning people, tools, and processes for success is another matter. Executives struggle to understand how cybersecurity fits within a company’s broader growth or innovation agenda, if at all, leaving many to virtually ignore it until their company becomes the next poster child for a breach. Case in point: despite cybersecurity breaches capturing more headlines, only a little more than one-third of corporate boards review their cybersecurity posture on a quarterly basis. 5 Vendors confuse matters further by touting the latest in big data analytics, promising to point the company’s compass to the desired North Star of business-oriented outcomes, including the elusive return on investment of cybersecurity plans. Yet, many cybersecurity professionals remain overwhelmed, not helped, by the deluge of data multiplying each day. And it’s disconcerting to think that even the best-laid plans for measuring the right outputs may fail to reliably predict when or where the next adversarial campaign may strike.

That’s because cybersecurity follows a distinct pattern, one also seen among nations struggling to anticipate major wars while addressing smaller, though more frequent, acts of violence. While major attacks like 9/11 or the Yom Kippur War are difficult, if not impossible, to predict, mathematicians have shown that the occurrence of war can in fact be modeled.

In statistics, a power law describes a relationship between two quantities whereby a relative change in one results in a proportional relative change in the other. Graphically, what results is the familiar principle of the vital few and the trivial many. You’ve undoubtedly seen this before in daily observation. The Pareto Principle, or 80-20 rule, is one example of a power law, where 80 percent of the outcomes are generated by 20 percent of the inputs. As it turns out, a power law can also be seen in the occurrences of terrorist attacks, such as 9/11, or more conventional conflicts, such as the Yom Kippur War.
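
To make the idea concrete, the short Python sketch below (an illustration only, not drawn from any data set cited in this chapter) simulates incident losses from a Pareto distribution, the textbook power law, and checks how much of the total simulated loss the largest 20 percent of incidents account for. The shape parameter of roughly 1.16 is simply the value at which the classic 80-20 split emerges in theory; the exact share will fluctuate from run to run given the heavy tail.

```python
import numpy as np

# Illustrative sketch only: simulate "incident losses" from a Pareto (power law)
# distribution and measure how concentrated the total loss is.
rng = np.random.default_rng(seed=7)

alpha = 1.16  # shape at which, in theory, the top 20% holds about 80% of the total
losses = rng.pareto(alpha, size=100_000) + 1.0  # heavy-tailed losses, minimum value 1

losses.sort()                                    # ascending order
top_fifth = losses[int(0.8 * losses.size):]      # the largest 20% of incidents

share = top_fifth.sum() / losses.sum()
print(f"Largest 20% of simulated incidents account for {share:.0%} of total loss")
```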

Researchers Aaron Clauset and Maxwell Young applied statistics to analyze the set of terrorist attacks worldwide between 1968 and 2004. While terrorist attacks had been believed by some to be outliers in any statistical sense, Clauset and Young instead found that the nature and frequency of events followed a power law distribution with mathematical precision. Further, the researchers found that the slope of the power law curve varied across industrialized and non-industrialized nations. Specifically, while terrorist events occurred much less frequently in industrialized nations, the severity of those attacks (as measured in the number of deaths and injuries) was significantly greater than those in non-industrialized countries. 6
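
How would a researcher confirm that a set of event severities actually follows a power law? One common approach is a maximum-likelihood fit of the exponent. The sketch below is a simplified illustration of that idea on synthetic data, not the authors’ actual methodology or data; rigorous studies also select the cutoff x_min carefully and test goodness of fit.

```python
import numpy as np

def fit_power_law_exponent(values, x_min=1.0):
    """Maximum-likelihood estimate of a continuous power-law exponent.

    For observations x >= x_min assumed to follow p(x) ~ x**(-alpha),
    the estimator is alpha_hat = 1 + n / sum(ln(x_i / x_min)).
    Simplified sketch: real analyses also choose x_min and test fit quality.
    """
    x = np.asarray(values, dtype=float)
    x = x[x >= x_min]
    return 1.0 + x.size / np.log(x / x_min).sum()

# Synthetic "event severities" drawn from a known power law (alpha = 2.5),
# then recovered from the data alone.
rng = np.random.default_rng(seed=11)
true_alpha = 2.5
severities = rng.pareto(true_alpha - 1.0, size=50_000) + 1.0  # classic Pareto, x_min = 1

print(f"Recovered exponent: {fit_power_law_exponent(severities):.2f} (true value {true_alpha})")
```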

Lewis Fry Richardson found that conventional wars followed the same power law distribution. In studying virtually every war from 1815 to 1945, Richardson found a higher frequency of small fights, in which only a few people die, and a lower frequency of major wars that kill many. The power law has been found to apply to the frequency of the most common words in any language, 16 the number of web hits dominated by a relatively few sites, the citations earned by the most popular scientific papers, and, yes, even the nature of cybersecurity attacks.

Researchers Deepak Chandarana and Richard Overill analyzed the degree of financial losses associated with publicly reported cybercrimes from 1997 to 2006. Their analysis revealed a double power curve pertaining to different organizational cohorts. Specifically, crimes targeted at larger organizations with stronger defense mechanisms succeeded less frequently, but with significant financial impact when they did prevail. Attacks carried out against organizations with weaker defenses succeeded with greater frequency, although they yielded smaller financial returns to the adversary. 7

Finally, in examining the nature of cybersecurity incidents for a large organization over six years and across 60,000 events, researchers Kuypers, Maillart, and Pate-Cornell found the power law alive and well in the data set. Specifically, a single rare event accounted for nearly 30 percent of the total hours allocated by the security operations center over a six-month time frame. Over time, the researchers found the number of extreme incidents to be tapering off, hypothesized to be a result of the organization’s improving detection and remediation of attacks. 8

In graphical terms, a typical power law distribution describing the effect of cybersecurity incidents might resemble that shown in Figure 10-1.

Figure 10-1. Power law of threat volume and value in cybersecurity

The left side of the chart is replete with nuisance attacks that trigger volumes of alarms for resource-constrained cybersecurity professionals. Garden-variety viruses and worms fall into this category. While they can cause damage, they are not targeted to any particular organization and therefore carry less financial impact. Once such malware is identified, industry providers record its signature in protection products to prevent further infestation. The challenge with these viruses and worms, particularly the enhanced polymorphic varieties that change their signatures to evade detection, is the sheer number of them that cybersecurity professionals must address on a daily basis. For adversaries, the take on these crimes may not be as significant as more targeted attempts, though the opportunity to launch these attacks indiscriminately and with relatively little effort provides sufficient aggregate incentive to congest the market with scores of generic malware. Included in this category would be the Blaster worm, which infected more than ten million computers in less than a year, causing its victims to suffer repeated crashes. Annoying to disinfect? Yes. Irrevocable damage to any one victim? Not exactly.

The middle portion of the chart represents threats that are less common, although more severe. While these threats are not necessarily targeted at any particular organization, they carry more damaging consequences. In this bucket are generic phishing schemes that may also include a ransomware component. Predicting these attacks is a bit more problematic than in the first category, as unsuspecting employees are typically the weakest line of defense in any cybersecurity posture, and they can also fall victim to a generic phishing scheme if not paying careful attention. This category would contain the CryptoWall ransomware covered in the last chapter—which singlehandedly bilked its victims out of $325 million worldwide. Certainly, victims felt the impact in their pocketbooks, but with tens of thousands of them worldwide, CryptoWall also did not bring any one organization to its knees.

The right end of the chart illustrates the most dangerous attacks, those targeted at a particular organization. These black-swan attempts are intended solely for their victim and are not generally repeatable. Adversaries may spend months or even years plotting how to infiltrate the organization to inflict the greatest harm. In April 2015, federal authorities detected an ongoing remote attack targeting the US Office of Personnel Management (OPM), the human resources department for the civilian agencies of the federal government. Over the weeks and months that followed, it would be revealed that infiltrators absconded with the private records (including social security numbers, birth dates, and fingerprints) of more than 22 million government employees, recruits, and their families, 9 bypassing the organization’s newly minted $218 million upgrade to its cybersecurity defenses in the process. The agency’s director resigned under pressure from Congress three months later. The highly targeted OPM breach cost the organization $140 million in the year it occurred, with ongoing costs estimated at $100 million per year for the next ten years—all to provide victims with identity theft insurance and restoration services. 10 Attacks in this realm are highly personalized and devastating when they strike unsuspecting targets.

Cybersecurity professionals must address the breadth and depth of attacks across the range of threats represented by a statistical power law. Distinguishing the signal from the noise is one challenge when dealing with such volumes. The other, and perhaps more insidious, concern pertains to executives and board officers. It entails properly aligning cybersecurity incentives with intended outcomes. Organizations are overwhelmed by both imperatives, giving adversaries one more advantage in this lopsided battle.

Running the Gauntlet

The 1960s have been described by some as marking the true beginning of computer security. In 1965, one of the first major conferences on the topic convened experts from across the industry. Computers were expensive and slow. Engineers had architected a new means of maximizing scarce processing resources—allowing multiple users to share access to the same mainframe via communications links. The idea greatly improved the efficiency and benefits of mainframes while simultaneously introducing the security concerns inherent in sharing. An employee of one of the sponsors of that 1965 conference, a government contractor, was able to demonstrate the vulnerability of his employer’s time-sharing computer system by undermining several of its system safeguards. Conference attendees demanded more studies be conducted along the same lines of breaking into time-sharing systems to expose weaknesses. It would be one of the first formal calls to initiate this sort of war-game testing in the computer industry.

Shortly thereafter, the use of tiger teams emerged in the industry. These teams employed white hat hackers incentivized to find the vulnerabilities within their own organization’s defenses. Soon, the process was heralded as a best practice in cybersecurity, with organizations pitting cybersecurity team against team—one to launch adversarial attacks and the other to detect and respond. By 2015, 80 percent of organizations reported having a process to test their security controls at least annually. 11 The problem? Half of organizations (50 percent 12 ) report only annual testing intervals—far too infrequent for the velocity of attacks and the increasing dwell times of adversaries. The figure is even more startling given that more than half of organizations were also not confident in their security team’s ability to remediate complex threats, if any at all. 13

Even among organizations that deploy some form of penetration testing, or red-team testing as it is commonly known in industry parlance, there are common pitfalls limiting its effectiveness.

  • Failing to measure what is truly important. When an organization designs a penetration test, it does so by pitting a team mimicking an adversary (the red team) against a team of cybersecurity defenders (the blue team). To be successful, the red team must first understand what is most important to the organization, as adversaries launching targeted attacks will have already done their homework on the matter. While an executive may be quick to mention theft as the most critical outcome to avoid, this may not be true in all cases. For example, a large automobile manufacturer would likely find theft of customer records damaging to its standing in the market; however, an infrastructure attack that shuts down a large fraction of its manufacturing capacity would probably have a far more devastating impact. Knowing where to point the red team to attempt the most damage is essential.

  • Shooting the messenger. The most effective red teams always succeed. You read that right. An organization actually wants to hire the best possible red-team hackers and incentivize them extremely well to relentlessly pursue any vulnerability lurking in the organization’s environment. When red teams bring such weaknesses to the table, they may be admonished by the very leader who hired them in the first place. After all, when that indictment is directed at the same leader who established the organization’s cybersecurity framework, the pill can be hard to swallow. Yet, if red teams are not celebrated for finding vulnerabilities, it is only a matter of time before a far more persistent enemy does so instead.

  • Punishing the innocent. Likewise, organizations must realize that the best red teams always win. Punishing overwhelmed blue teams for failing to secure every defense will only demoralize the very people responsible for protecting the organization’s perimeter daily, and a demoralized defense is just another bullet in the enemy’s gun. Rather than punish the innocent, use the exercise to productively shore up weaknesses.

  • Inflating success. Red-teaming should be an ongoing, persistent exercise. Relegating the effort to an annual event neglects the velocity of change as adversaries continue innovating. Worse still, some organizations inform blue-team members, if not all employees, that such an exercise is under way. This is tantamount to an adversary notifying an organization in advance that it is coming. While some forewarning can and does exist (such as among hacktivists attempting to invoke change on the part of their targets), the far more common tendency is for adversaries to strike like proverbial thieves in the night. Inflating one’s success during an announced penetration test simply creates a false sense of security that is readily welcomed by the predator.

  • Focusing only on cybersecurity professionals. Of course, cybersecurity first responders should be involved in penetration testing. After all, the object of the test is an organization’s cybersecurity posture. But, that posture extends far beyond the technical staff standing watch over a company’s digital assets day and night. It also includes the communications team of the company, which must spring into action the moment a breach is detected. Having a robust communications disaster readiness plan is essential in preparing for the unexpected. Knowing a company’s policy for communicating to key stakeholders—both internal and external—for varying degrees of threat (from web defacement to e-mail exposure to a breach of customer information) is essential for the communications engine to respond in a timely fashion. At the moment of an exploit, the last thing an organization needs is for its management to have a vigorous debate around the table on what should and should not be communicated. Here again, time will not be on the side of the victim, so any exemplary red-teaming exercise will include the communications function in its assessment.

The more organizations begin thinking and behaving like their adversary, the more prepared they will be if and when one of those disastrous and highly unpredictable targeted attacks occurs.

A Second Thought

Cybersecurity is too important to be left to the corridors of information technology (IT). Nearly 80 percent of company board members indicate they are concerned with the topic. 14 This strong majority is good news for white hats in the fight. However, the unfortunate reality is that caring about a topic and aligning the correct incentives toward its acceptable outcome are often not one and the same. Too often, organizations are chasing meaningless alerts in a tsunami of data that overwhelms their ability to properly diagnose an impending serious threat. Worse yet, some organizations are being lulled to sleep by their own metrics, which may indicate cybersecurity “success,” all the while concealing an incapacity to respond to a disastrous black-swan attack. At the same time, cybersecurity professionals are indirectly encouraged to practice behaviors contrary to that which will improve a company’s security posture. Among the more immediate opportunities for cybersecurity professionals and board members concerned with their efforts are the following:

  • Kill invisible incentives misaligned with effective cybersecurity outcomes. It’s the cross that so many white hats bear every day—they toil in virtual anonymity to ensure that nothing happens to their organization. CISOs, loath to be the next day’s headline for seemingly resting on their laurels, often indirectly encourage the adoption of the latest cybersecurity widget designed to cure what ails them. Fast adoption is a good thing, as discussed in Chapter 8. However, tool sprawl is not. Current cybersecurity environments woefully lack an integrated platform through which fast implementation of the latest technologies is possible without simultaneously multiplying operational complexity for already overburdened first responders. As has been the common theme in this book, time is not on the side of CISOs, who have an average tenure of just 17 months. 15 Consider the point: is it sexier to adopt a point release of an already installed technology that may result in a single-digit improvement in current security defenses, or to go for broke installing the latest wonder product before the music stops and your chair is once again up for grabs? Many CISOs have become poster children for the latter, introducing more complexity in their environment as a result, even though an organization’s best interests are likely served by the former.

  • Reward failure. We’ve already discussed this in the context of red-team exercise best practices, but the same holds true for senior leaders within cybersecurity. As discussed in Chapter 8, a cybersecurity defense mechanism loses much of its effectiveness once it has been widely adopted in the market. The reason is that adversaries have more incentive to develop countermeasures against a particular technology as it is used by more targets in their addressable market. By necessity, this means that products previously deployed in an organization’s environment will outlive their usefulness. In some cases, the fall from grace can be fast. Imagine the difficulty of being the CISO who adopted said technology, using the company’s scarce resources to do so, and must now inform her board that the technology has passed its prime. Rather than excoriate her for coming to that conclusion, she should be respected for challenging the status quo and forcing her company to innovate at the pace of her adversaries. There is no question that investments in cybersecurity create challenges for organizations with significant opportunity costs. At the same time, leaving dated technology in one’s cybersecurity environment only serves to muddy the waters with unnecessary complexity, if not impede adoption of newer technology poised to address more sophisticated threats—as long as adopting that newer technology is done through an integrated security approach that unifies workflows and simplifies back-office operations.

  • Change the scorecard. Billy Beane’s legendary success in taking the Oakland Athletics, a mediocre baseball team at the time, to 20 consecutive wins in 2002 has been immortalized in the book and subsequent movie Moneyball. Baseball, like cybersecurity, is dominated by numbers. At the time, the most popular statistics used to measure a player’s worth included stolen bases, runs batted in, and batting average. Beane, dealing with the loss of three of his most valuable players and a payroll that could not compete with much larger franchises, reevaluated the scorecard. Rigorous statistical analysis revealed that on-base percentage (a measure of how often a batter reaches base) and slugging percentage (a measure of a hitter’s power) were better leading indicators of offensive success. Since the data were widely available but not used by competing teams, Beane was able to recruit undervalued players and build a winning franchise—all at a fraction of the cost of teams with much deeper pockets.

    Beane didn’t take advantage of any metrics that weren’t readily available to his competitors. He simply looked at the data differently. Right now, there are countless organizations with cybersecurity scorecards that measure the number of incidents captured and responded to in any given period of time. More often than not, these scorecards are measuring the left-hand side of the power curve discussed earlier—large volumes of alerts that are largely untargeted in nature. Instead, organizations can and should reevaluate incoming threats to determine the probability that any given one is part of a highly targeted campaign. By reorienting toward correlating threats to suspected campaigns (as we discussed the industry should also do with threat intelligence sharing), organizations can begin to stack-rank the severity of threats based on campaign association; a minimal sketch of this kind of ranking follows this list. By relying on existing products to handle the routine threats common on the left-hand side of the power curve, more time can be spent understanding potential threats on the right. Don’t trust your existing products to handle the routine nature of voluminous threats? That’s where red-teaming comes in to cover the gap—perceived or real.

  • Realize that imperfect data may be a threat signal in and of itself. Ask any well-intentioned cybersecurity professional one of his greatest fears, and if you don’t first get the answer “false negatives” (disregarding a real threat), you’ll likely hear about “false positives” (false alarms that are hardly threats at all). The former is an obvious concern. The latter is more insidious in its nature, since a high degree of false positives can impede a first responder’s ability to find the real threats in his environment by consuming his precious time.

    At least, that’s what conventional wisdom would have you believe. In fact, the reality is far more complex. While false positives certainly can and do eat away at a first responder’s attention and time, dismissing them out of hand can be even more dangerous. Consider the Yom Kippur War and Israel’s dismissal of threats that appeared to be false alarms but were in fact signals of its enemy’s impending attack. Adversaries will intentionally bombard their targets with false alarms to raise the noise level and numb their victims’ ability to connect the pattern of false positives to a very real and highly targeted looming threat. Capturing false positives in isolation, apart from examining their correlation to campaigns, not only deadens an organization’s responsiveness but relegates some of its most valuable intelligence to the trash bin.
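
What might that correlation look like in practice? The Python sketch below is a minimal, hypothetical illustration of the campaign-based ranking described above: the campaign names, indicator lists, and alert fields are all invented for the example, and a real implementation would draw its indicators from threat intelligence feeds rather than hard-coded sets. The point is simply that every alert, including low-severity ones a triage queue might dismiss as false positives, contributes to a running campaign-association score that determines what the security operations team examines first.

```python
from collections import defaultdict

# Hypothetical indicator sets for suspected campaigns (IP addresses, domains, hashes).
# In practice these would come from threat intelligence feeds, not hard-coded lists.
CAMPAIGN_INDICATORS = {
    "campaign-alpha": {"198.51.100.7", "evil-updates.example"},
    "campaign-beta": {"203.0.113.42", "invoice-portal.example"},
}

def rank_alerts_by_campaign(alerts):
    """Stack-rank alerts by how strongly they associate with suspected campaigns.

    Every alert is counted, even low-severity ones that triage might dismiss as
    false positives, because repeated weak matches to the same campaign are
    themselves a signal of a targeted effort.
    """
    # First pass: count how many alerts touch each campaign's indicators.
    campaign_hits = defaultdict(int)
    for alert in alerts:
        observed = set(alert.get("indicators", []))
        for campaign, indicators in CAMPAIGN_INDICATORS.items():
            if observed & indicators:
                campaign_hits[campaign] += 1

    # Second pass: each alert inherits the total hit count of every campaign it
    # touches, so "noise" tied to an active campaign bubbles up the queue.
    def score(alert):
        observed = set(alert.get("indicators", []))
        return sum(
            campaign_hits[campaign]
            for campaign, indicators in CAMPAIGN_INDICATORS.items()
            if observed & indicators
        )

    return sorted(alerts, key=score, reverse=True)

# Toy usage: three alerts, two of which quietly share campaign-alpha indicators.
alerts = [
    {"id": 1, "severity": "low", "indicators": ["198.51.100.7"]},
    {"id": 2, "severity": "high", "indicators": ["10.0.0.5"]},  # no campaign match
    {"id": 3, "severity": "low", "indicators": ["evil-updates.example"]},
]

for alert in rank_alerts_by_campaign(alerts):
    print(alert["id"], alert["severity"])
```

In this toy run, the two low-severity alerts that share indicators with the same suspected campaign outrank an unrelated high-severity alert, which is exactly the reorientation the scorecard discussion argues for.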

As with any function, a misguided incentive structure can weaken a company’s cybersecurity posture and produce unintended consequences that favor its adversaries. Reporting without red-team exercising limits an organization’s field of view to only those threats it can immediately see, not potential ones looming in the future, as Giuliani learned when his city, hardened against crime, fell to an enemy of a much deadlier stripe. At the same time, ignoring what are perceived to be meaningless alarms may in fact play into the hands of an enemy bent on deadening its victim’s responses, as Egypt so skillfully did against Israel in the Yom Kippur War . Unfortunately, there is no easy answer, as is often the case when discussing a topic as complex and important as cybersecurity. Yet, there are practices organizations can employ to upend and challenge conventional wisdom and put the value back into cybersecurity metrics and objectives. Organizations that capably distinguish their response plans across high-volume and high-value attacks, rigorously test their own capabilities to expose vulnerabilities, and reward “failure” along the way increase their chances of developing a team of white hats properly refined and motivated for the task and enemy at hand. In so doing, they stand to change the game of cybersecurity in their favor in The Second Economy.

Notes

  1. David Kocieniewski, “The 1997 Elections: Crime; Mayor Gets Credit for Safer City, but Wider Trends Play a Role,” The New York Times, October 28, 1997, www.nytimes.com/1997/10/28/nyregion/1997-elections-crime-mayor-gets-credit-for-safer-city-but-wider-trends-play-role.html , accessed July 30, 2016.

  2. Justin Oppmann, “Giuliani Wins With Ease,” CNN, November 4, 1997, www.cnn.com/ALLPOLITICS/1997/11/04/mayor/ , accessed July 30, 2016.

  3. Kocieniewski, note 1 supra.

  4. “Yom Kippur War,” New World Encyclopedia, www.newworldencyclopedia.org/entry/Yom_Kippur_War , accessed July 30, 2016.

  5. PwC, “Turnaround and transformation in cybersecurity; Key findings from The Global State of Information Security® Survey 2016,” www.pwc.com/gsiss , accessed July 30, 2016.

  6. Aaron Clauset and Maxwell Young, “Scale Invariance in Global Terrorism,” Department of Computer Science, University of New Mexico, February 2, 2008, http://arxiv.org/pdf/physics/0502014.pdf , accessed July 30, 2016.

  7. Deepak Chandarana and Richard Overill, “A Power Law for Cybercrime,” Department of Computer Science, King’s College London, http://i.cs.hku.hk/cisc/news/9Aug2007/PowerLawforCyberCrime(slides).pdf , accessed July 30, 2016.

  8. Marshall A. Kuypers, Thomas Maillart, and Elisabeth Pate-Cornell, “An Empirical Analysis of Cyber Security Incidents at a Large Organization,” Department of Management Science and Engineering, Stanford University, School of Information, UC Berkeley, http://fsi.stanford.edu/sites/default/files/kuypersweis_v7.pdf , accessed July 30, 2016.

  9. Brian Naylor, “OPM: 21.5 Million Social Security Numbers Stolen From Government Computers,” NPR, July 9, 2015, www.npr.org/sections/thetwo-way/2015/07/09/421502905/opm-21-5-million-social-security-numbers-stolen-from-government-computers , accessed July 30, 2016.

  10. “OPM Breaches Still Resonating,” Fedweek, March 16, 2016, www.fedweek.com/fedweek/opm-breaches-still-resonating/ , accessed July 30, 2016.

  11. “State of Cybersecurity: Implications for 2015 An ISACA and RSA Conference Survey,” www.isaca.org/cyber/Documents/State-of-Cybersecurity_Res_Eng_0415.pdf , accessed July 31, 2016.

  12. Ibid.

  13. Ibid.

  14. Ibid.

  15. Scott Hollis, “The Average CISO Tenure is 17 Months—Don’t be a Statistic!,” CIO, September 17, 2015, www.cio.com/article/2984607/security/the-average-ciso-tenure-is-17-months-don-t-be-a-statistic.html , accessed July 31, 2016.

  16. “The Zipf Mystery,” www.youtube.com/watch?v=fCn8zs912OE , accessed July 29, 2016.
