© 2016 by Intel Corp.

Steve Grobman and Allison Cerra, The Second Economy, 10.1007/978-1-4842-2229-4_7

7. Take a Second Look

Steve Grobman1 and Allison Cerra2

(1)Santa Clara, California, USA

(2)Plano, Texas, USA

There is no castle so strong that it cannot be overthrown by money.

Marcus Tullius Cicero

Google the worst monarchs of England, and King John is bound to appear more than once. Described as “nature’s enemy,” 1 a “pillager of his own people,” 2 a raging madman who “emitted foam from his mouth,” 3 and a man with “too many bad qualities” 4 (this last point from someone who actually fought on the king’s side), John was a ruler with few supporters and countless enemies. The contempt was hard-earned over several decades of a failed and corrupt regime—one so reprehensible, it inspired that of the villainous character portrayed in Robin Hood legend.

John’s offenses are almost too numerous to count and too egregious to be believed, if not for historical record. Start with a base of treachery (John attempted to overthrow his own brother from the throne while the latter was away on crusade), mix in some lechery (he was notorious for sexually accosting the wives and daughters of nobles), sprinkle in unspeakable evil (he arranged the murder of his own nephew and chief rival to the throne and cruelly killed his enemies through starvation), and add a strong dose of military incompetence (he earned the disreputable moniker “Softsword” for his ineptitude and cowardice on the battlefield) and you have a recipe for a monarch who stands history’s test as being among the worst. 5

For these grievances and more, John soon found himself embroiled in a bitter civil war against nobles who would have rather risked their lives than see him continue his reign. By that time, his grip over the kingdom was already weakening. When John first took the throne in 1199, he enjoyed expansive territorial rule, including not only England and large parts of Wales and Ireland but also the western half of France. 6 Within five years, due to political incompetence and military ineptitude, he had lost almost all of his French dominion to Philip Augustus. 7 Determined to repair a bruised reputation, John levied punitive taxes on his nobles and countrymen to fund a significant war chest. But, true to form, King “Softsword” failed to recapture the lost territories, retreating from battle when challenged by French forces and ceding once again to Philip Augustus. When the battered monarch returned to England, void of both victory and treasure, a civil war was imminent. After months of fruitless negotiations between John and barons on mutually acceptable reform, it erupted.

In 1215, a rebel army took control of the capital city of London. Soon after, the cities of Lincoln, Northampton, and Exeter also fell. With John losing ground, he disingenuously signed the first version of what would become the historic Magna Carta, a document that established the basic human rights of his subjects. Seemingly unmoved by his empty commitment, the corrupt king was soon back to his old ways, arbitrarily seizing the land of his adversaries. The exasperated rebels were left with no option but to ally with Prince Louis of France, not of English descent himself but a welcome alternative to a tyrannical ruler nonetheless.

Over the next several months, Louis gained momentum, conquering one city after the next, getting ever closer to seizing the throne. By the middle of 1216, there was one notable bastion still under John’s rule that would determine the outcome of the war. Dover, also known as the “key of England,” sat at the nearest sea crossing point to France, making it a strategic asset of military and political import. For Louis’ star to rise, Dover had to fall. But, taking the fortress would require incomparable strategy, precision, and grit. In terms of defenses, Dover stood in a class all its own.

Dover was, in fact, a castle within a castle—the first of its kind in Western Europe. Laying siege to the “keep,” the tower at the heart of the castle where the most valuable treasure and human lives were protected, required an enemy to pass a series of deadly obstacles. The keep itself was a formidable structure: an 83-foot-tall tower, roughly 100 feet square, with walls up to 21 feet thick. 8

Louis meticulously planned his assault, surveying the fortress for several days before charging. He would first have to cut the castle off from outside reinforcements and supplies. That required Louis to position men on the ground and at sea, as Dover was ensconced behind expansive land and an outer moat. His army would then need to make its way past an exterior wall and a fence of stakes, all the while avoiding archers deployed on the concentric rings of inner and outer walls (the former at a higher elevation than the latter), which allowed Dover to double up its aerial firepower. The geometry of the walls themselves presented another challenge for Louis’ men. Most castles of the day had square towers, which introduced structural blind spots that left defending archers vulnerable. Dover had circular towers in addition to square ones, offering its garrison maximum visibility and offensive coverage.

With Louis’ army in place and Dover completely cut off from possible outside reinforcements, the siege was on. By all accounts, Louis put up an impressive fight—first attempting to take the castle with mangonels and perriers (military devices used to catapult stones and other weapons from considerable distances). When that failed, he went underground—literally—deploying miners to excavate their way under the structure’s massive stone towers. Louis’ army did damage to the behemoth, ultimately bringing down one of the towers but, after months of exhausting fighting, the French prince was unable to fully lay claim to Dover. On October 14, 1216, Louis opted for a truce, extinguishing any hope that John would be dethroned by military force. Just a few days later, John was removed from the throne by other means: a deadly bout of dysentery. 9 The throne was passed to John’s untainted nine-year-old son Henry (King Henry III) and England remained under the rule of one of her own. Had Dover fallen, or had Louis’ army withstood battle just a few more days until John’s passing, history may very well have been rewritten.

The castle defenses popular in Dover’s day existed to deter enemies. Moats were common, allowing defenders to shoot enemy encroachers attempting to swim or sail across. Ramparts provided steep walls that required expert scaling and physical strength to breach. Even taller towers and walls further fortified the most impressive fortresses, impenetrable by climbing and requiring weaponry (like mangonels and perriers) to take down by other means. Tunneling underneath a castle was possible, as Louis proved at Dover, but not for the faint of heart, as the defending army would burrow their way to meet their attackers underground to brutally fight in darkness and confinement. Even assuming an enemy was successful in making his way into the castle, murder holes, openings in the ceiling just past the castle’s front gate purposely designed to allow defenders to drench their adversary in boiling liquids, often served as an additional unfortunate surprise.

In short, as the refuges for their kingdoms’ most precious assets, castles were built to withstand attacks at multiple levels. Defenses were designed to adapt to an adversary’s countermeasure. This defense-in-depth strategy, as it would come to be known, became a highly effective military technique. The premise was simple: don’t rely on any one defense for protection; instead, reinforce with multiple layers, each designed to anticipate the enemy’s next move and mitigate risk of any single point of failure. If and when an attack does occur, each successive obstacle in the enemy’s path buys the victim precious time to ready contingency plans.

When threats can be seen, such as a rabid army storming a castle, defense in depth has proven itself to be extremely effective. Further, when threats can be anticipated, such as said army tunneling underground or striking unscalable walls with weapons, defense in depth’s layered approach successfully thwarts an enemy’s advancement. Its success in military strikes can be chalked up to its rootedness in common sense. Why rely on only a single defense mechanism to protect lives and treasure? Indeed, when the enemy is visible and his countermeasures are known, defense in depth is smart practice. But, when adversaries form an entirely new threat vector, a strategy based on flawed assumptions leaves victims unknowingly exposed, subject to an enemy’s counterattack all the while trusting in their insufficient fortifications. As history would reveal, a victim believing his castle to be safe can be undone by his own defenses.

Taking the Keep

When Mohamed Atta checked in for his flight from Portland to Boston on September 11, 2001, his name was flagged in a computerized prescreening system known as CAPPS (Computer Assisted Passenger Prescreening System), indicating he should undergo additional airport security screening measures. CAPPS required any suspect’s checked baggage be held until the subject was confirmed to have boarded the plane. No problem. Atta then proceeded through a standard metal detector, calibrated to detect the metal equivalent of a .22-caliber handgun. He passed. His bags were placed on an X-ray belt for potential weapons, like knives with more than a four-inch blade, or other restricted materials. All clear.

On that fateful morning, Atta and 18 other conspirators, more than half of whom were also flagged by CAPPS, boarded four different flights, passing a series of airport security measures in doing so. They intended to hijack the planes and had successfully evaded every airport screening defense designed to stop them. In metaphorical terms, they had made it past the outer wall of the castle. Next, they had to overcome security on the plane itself. Using the knives and box cutters they had successfully smuggled past airport security defenses, the terrorists forced airline crews into submission. The flight personnel likely surrendered easily, as their training instructed them to succumb to a hijacker’s demands, focusing on landing the plane rather than fighting the enemy. Once the terrorists were in the cockpit, the inner wall of the castle was theirs.

Elsewhere, up to 50,000 office workers began their Tuesday at Manhattan’s iconic World Trade Center (WTC). Some eight years prior, the site had been attacked when a 1,500-pound bomb stashed in a rental van detonated in a parking garage beneath the building. The explosion claimed six lives, caused more than 1,000 injuries, and exposed serious vulnerabilities in the WTC’s emergency preparedness plan. The bombing was likely a distant memory to those employed at the WTC that September morning in 2001. After all, following the 1993 attack, the WTC had fortified its defenses, spending $100 million on physical, structural, and technological improvements to the building and bolstering its evacuation plans. 10 In case of another bomb planted within the building, the WTC was presumably ready. Tragically, the WTC would not get the chance to test its defenses against a similarly placed bomb, as the imminent threat looming that morning would come from a completely different attack vector. At 8:46 AM on 9/11, it came in the form of a hijacked commercial airliner crashing into the upper portion of the North Tower of the WTC, severing seven floors and killing hundreds instantly. At 9:03 AM, the South Tower was struck by a second hijacked airliner. The “keep” was taken.

The flawed assumptions behind a failed defense-in-depth approach are difficult to ignore, if not unfair to criticize. The Federal Aviation Administration anticipated an explosive device would be left aboard a flight—not carried on by an attacker. It certainly did not envisage an outcome where a hijacked plane could be used as a guided missile any more than WTC security professionals fathomed their site to be the target. The crews aboard likely did not put up the same resistance they would have otherwise had they known no safe landing was possible once the hijackers seized the cockpit. One by one, 19 terrorists slipped through traditional, layered security defenses to execute a plot so unimaginable it would result in the deadliest terrorist attack on US soil. Defense in depth couldn’t overcome the overlooking of one critical fact: the terrorist group behind the 9/11 attack, al Qaeda, desired death more than their victims desired life. 11

But, to be fair, defense in depth was never designed to completely eliminate all possible threats. It was intended to thwart an enemy at each turn, buying the victim precious time to contemplate contingency plans. In this way, laying the 9/11 tragedy at the feet of a trampled defense-in-depth system seems grossly unfair. However, consider that this same system created distinct silos that actually impeded the ability of the United States to respond, and you now have a case where defense in depth failed at both protecting against and containing the threat.

Examine the distinct layered defenses involved in 9/11, each of which failed independently and none of which communicated with another to properly identify and remediate the threat. Several of the terrorists were on the CAPPS registry. Yet, airport security personnel were unaware of what intelligence had been collected on each subject to respond more earnestly to the alert. When the terrorists on the first hijacked flight to strike the WTC inadvertently broadcast this message intended for the plane’s passengers to air traffic controllers instead, a known hijacking was in motion:

  • Nobody move. Everything will be okay. If you try to make any moves, you’ll endanger yourself and the airplane. Just stay quiet. 12

Yet, no other flights were notified or grounded, even when it was later detected that the terrorists on the first flight had also revealed they had “some planes” in a transmitted message. After the first hit, occupants in the North Tower were instructed by 911 operators to remain in the building and stay calm. The deputy fire safety director in the South Tower told his counterpart in the North Tower that he would wait to hear from “the boss from the Fire Department or somebody” before ordering an evacuation. 13 No such directive came. Instead, like those in the North Tower, occupants in the South Tower were instructed to remain where they were. Some in the process of evacuating were told to return to their desks. Less than 20 minutes later, the choice to evacuate was no longer an option for the hundreds who died instantly upon the impact of the second airliner. Had a message been relayed to WTC security and 911 phone operators that multiple planes had been hijacked, perhaps the order to evacuate would have followed, saving hundreds, if not thousands, of lives, including that of the South Tower deputy fire safety director himself.

The heroes who immediately sprang into action to rescue the trapped before both towers ultimately fell suffered from siloed conditions as well. Different radio frequencies in use by emergency responders delayed, if not confused, communications. As a result of this breakdown, many heroic first responders perished when the towers gave way.

To be clear, there’s no telling how many other potential 9/11s were avoided prior to the actual tragedy thanks to the layered defenses deployed across the various US governmental agencies. Abandoning a commonsense defensive approach that guards against a single point of failure would be throwing the baby out with the bathwater. However, what is clear is that a defense-in-depth approach is only as effective as its architects are at anticipating new threats and identifying them when they emerge. Radically different or altogether ignored threats can readily penetrate a layered defensive system. And, in the asymmetrical battle between adversary and victim, the former only needs to succeed once while the latter must be right 100 percent of the time. When those points of failure are correctly calculated by adversaries, a disintegrated defense-in-depth approach cannot easily, let alone systematically, pass threat intelligence across its silos to neutralize the attack. Victims are left confused, if not blinded, by the very layered security framework they employed, a point that becomes even clearer when the battle moves to the virtual realm.
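The asymmetry described above, where the attacker needs to succeed only once while the defender must be right every time, can be made concrete with a little probability. The numbers below are purely illustrative (not drawn from the text): even if any single attack attempt has just a 1 percent chance of slipping past every layer, a patient adversary who keeps trying makes an eventual breach all but certain.

```python
# Illustrative sketch of the attacker/defender asymmetry.
# Assumption (hypothetical): each attack attempt independently
# succeeds with probability 1% against the full layered defense.

def breach_probability(per_attempt_success: float, attempts: int) -> float:
    """Probability that at least one of `attempts` independent
    attack attempts gets through: 1 - (chance every attempt fails)."""
    return 1.0 - (1.0 - per_attempt_success) ** attempts

for n in (1, 100, 500):
    print(f"{n:>4} attempts -> {breach_probability(0.01, n):.1%} chance of breach")
```

With a 1 percent per-attempt success rate, roughly 100 attempts already give the adversary better-than-even odds, and 500 attempts make a breach nearly certain, which is why defenders cannot rely on per-attempt odds alone.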

Securing the Virtual Castle

The defense-in-depth approach for cybersecurity was initially recommended by none other than the National Security Agency (NSA). In an NSA whitepaper, the government agency recommended defense in depth as a practical strategy encompassing people, technology, and operations—one that considered adversarial motives associated with various threat actors (including nation-states, hacktivists, and cybercrime) across a holistic security paradigm of protecting, detecting, and reacting to attacks. 14 Similar to the castle strategy employed in medieval times, the cyber defense-in-depth approach advocated by the NSA examined various threat vectors with many layers of protection and detection to present adversaries with unique, multiple obstacles. For example:

  • A passive attack could be first thwarted by link and network layer encryption to secure traffic flows; if adversaries made it past this first line of defense, they would be met with security enabled at the application layer as an additional line of defense.

  • An active attack may be met with firewalls and intrusion detection systems; if those are bypassed, adversaries could face access controls on hosts and servers within the computing environment itself.

  • Insider attacks must first overcome physical and personnel security measures; if successful, these adversaries would then be subjected to technical surveillance countermeasures. 15

This layered approach to cyber defenses served to increase the risk of detection for the adversary, while reducing the chances of successful breach, if not to make it increasingly unaffordable for threat actors to continue their assault. The defense-in-depth approach became the veritable blueprint for cybersecurity strategy. Organizations rushed headlong into deploying multiple defenses to secure every aspect of their virtual castles. Antivirus software protected endpoints; firewalls and intrusion detection systems defended networks; encryption secured sensitive data; multiple factors of authentication provided authorized access; web application firewalls monitored traffic running to and from web applications and servers; and so on, and so on.

The most zealous organizations deployed redundancy in addition to defense in depth. Why rely on one antivirus solution that may not detect all possible malware when multiple installations from multiple providers could potentially catch even more threats? The same could be said for any other security countermeasure running in one’s environment. In an arms race, suppliers typically win. And, cybersecurity professionals definitely faced no shortage of companies ready and willing to sell the next widget to further fortify one’s virtual castle against enterprising adversaries. In 2015 alone, private investors pumped a record $3.3 billion into 229 cybersecurity deals, with each budding company prepared to offer the latest weapon all but guaranteed to further solidify an organization’s cybersecurity posture. 16 Cybersecurity professionals, eager to avoid being the next poster child of attack for want of doing something, responded in kind, deploying multiple technologies from multiple vendors.

If there was no shortage of cybersecurity products and technologies, sadly, the same could not be said for the pipeline of talent entering the cybersecurity labor force. With adversarial attacks showing no sign of abating, there simply aren’t enough good guys in the fight to ward off the next threat. Consider just a few of the more startling statistics to prove the point:

  • Only 26 percent of US organizations say they have capable personnel on staff to address cyber risks associated with implementation of new technologies. 17

  • In 2014, there were nearly 50,000 postings for workers with a Certified Information Systems Security Professional (CISSP ) standing, the primary credential in cybersecurity work, amounting to three-quarters of all individuals in the United States holding the certification, most of whom presumably already had jobs. 18

  • Security managers in North America and EMEA (Europe, the Middle East, and Africa) report significant obstacles in implementing desired security projects due to lack of staff expertise (34.5 percent) and inadequate staffing (26.4 percent). Because of this, less than one-quarter of enterprises have 24/7 monitoring in place using internal resources. 19

  • In 2014, there were 1,000 top-level cybersecurity experts in the United States, versus a need for at least 9,000 and up to 29,000 more. 20

  • In 2015, more than 209,000 cybersecurity positions in the United States went unfilled, with postings up nearly 75 percent over the past five years. 21

  • Demand for cybersecurity professionals is expected to rise to 6 million globally by 2019, with a projected shortfall of 1.5 million going unfilled by that time. 22

Facing an unemployment rate hovering near 0 percent, cybersecurity professionals can literally write their own ticket, forcing resource-constrained organizations into an escalating bidding war to recruit top talent. Those with the title “information security manager” represented the hottest job in information technology (IT) in 2016, boasting the biggest increase in average total compensation in the sector (up 6.4 percent from 2015 to 2016). 23 In the consummate seller’s market, nearly 75 percent of cybersecurity professionals admitted to being approached by a hiring organization or headhunter. 24 And, the top-paying job in cybersecurity in 2015, that of a security software engineer, averaged $233,333 in annual salary, exceeding the $225,000 earned by the average chief security officer (CSO). 25

True, some cybersecurity positions can be filled in short order—in as little as three months, on average, for entry-level positions. But, with an increasingly sophisticated adversary and ever-complex security tools and technologies, the more advanced cybersecurity positions take far longer to fill—more than one-fifth of jobs requiring ten or more years’ experience sit vacant for at least a year. 26 Even when an organization can find the right talent for its cybersecurity concern, it will likely be forced to pay dearly to poach him away from the well-paying job he already enjoys.

Despite reaping the spoils from a hot job market, it’s not all upside for cybersecurity professionals, many of whom are increasingly overwhelmed. Nearly two-thirds of them report being under pressure to take on new tasks or increase productivity, challenges that are not expected to subside anytime soon. 27 More than 60 percent expect their workload and responsibility to increase in the next 12 months. 28 These demands are the result of an innovative adversary market, always favored to make the first move in an attack, and the ever-expanding attack surface through which such advances can be made. No longer protecting a castle that is well understood and walled off from the outside world, enterprises are confounded by an increasingly mobile workforce spurred by limitless connectivity across all types of endpoints (computers, servers, mobile devices, wearables, and more) and public and private clouds. According to analyst firm Gartner, by 2018, 25 percent of corporate data traffic will bypass perimeter security and flow directly from mobile devices to the cloud. 29

And here is where the perfect storm of defense in depth run amok, an anemic cybersecurity labor market, and an increasingly untethered workforce collides to drive the greatest of unintended outcomes. Cybersecurity professionals are working harder to operate the various tools deployed across an organization’s environment to secure multiple domains. Approximately two-thirds of cybersecurity professionals cite “tool sprawl”—the unintended consequence of deploying multiple disintegrated security technologies across one’s environment—as a significant concern impeding the productivity of an already overtaxed labor pool. 30 Disintegrated management and communications platforms are spiraling out of control for many cybersecurity first responders, making it more difficult to detect threats or, at the other end of the spectrum, creating enough confusion to overestimate risk. Both extremes offer disastrous consequences.

Erring on the Side of Caution

The Economic Development Administration (EDA) is an agency within the US government’s Department of Commerce that promotes economic development in regions of the United States suffering from slow growth, high unemployment, and other unenviable economic conditions. The agency had historically suffered from a weak security posture, having hardly deployed aggressive defense-in-depth principles. When a new chief information officer (CIO) took the helm of the agency in 2011, he quickly learned of his organization’s unflattering history of cybersecurity missteps—ranging from incomplete security configurations to untimely patch updates to lacking security assessments and monitoring. Later that year, when the US Department of Homeland Security (DHS) issued a warning to the EDA and its sister government agency, the National Oceanic and Atmospheric Administration (NOAA), that it had detected a malware infestation on their shared network, both organizations sprang into action. One, the NOAA, would completely remediate the threat within six weeks. The other, the EDA, would lose months of employee productivity and millions of taxpayer dollars chasing a threat that never even existed. 31

Most of us likely remember the “telephone” game we played as children. A message was started on one end of a long daisy chain, moving from one person to the next in a whisper. By the end of the chain, the message hardly matched its original version, showing how a simple misinterpretation can reverberate out of control. No truer analogy can be found to explain the EDA debacle. On December 6, 2011, the US Computer Emergency Response Team (US-CERT), part of the DHS, passed a message to both the EDA and NOAA, indicating suspicious activity found on IT systems operating on the Herbert C. Hoover Building (HCHB) network. 32 As both agencies used this network, the warning was intended to ensure their respective systems had not been compromised in any possible attack.

In requesting network logging information to further investigate, the incident handler for the Department of Commerce’s Computer Incident Response Team (DOC CIRT) unknowingly requested the wrong data. The first responder’s error resulted in what was believed to be 146 EDA components at risk within its network boundary 33—a substantial malware infection of the organization’s computer footprint.

The very next day, an HCHB network administrator informed the DOC CIRT incident handler that he had inadvertently pulled the wrong network logging information in his request (essentially retrieving all EDA components residing on the HCHB network). After a second, correct pull was extracted, only two components in the EDA’s footprint were found to be suspicious. DOC CIRT sent a second e-mail notification to the EDA, attempting to clarify the mistake, but the e-mail was vague at best, leaving the EDA to continue in the inaccurate belief that 146 components were potentially at risk. 34

Over the next five weeks, communications between DOC CIRT and the EDA continued, with each organization effectively talking past the other. The EDA was under the mistaken impression that most of its infrastructure had been potentially compromised thanks to DOC CIRT’s original misinformation that was never properly clarified. DOC CIRT assumed the EDA had conducted its own audit finding a much more pervasive outbreak of up to 146 elements—even though the EDA hardly had the resources to complete such an investigation. As the newly appointed CIO of the EDA feared a nation-state attack and further blemish to his agency’s already tainted cybersecurity record, tensions mounted and extreme measures were taken.

When DOC CIRT asked the EDA to invoke normal containment measures, as basic as reimaging the affected components, the EDA refused, citing too many incidents of compromise to contain the threat. Fearing the worst, the EDA asked to be quarantined off from the HCHB network on January 24, 2012. 35 Doing so would leave hundreds of EDA employees without access to e-mail or Internet servers and databases for months. 36 The EDA hired a cybersecurity contractor to actively investigate the threat, activity that commenced on January 30, 2012. 37 Based on a preliminary analysis, the contractor reported he had found indications of extremely persistent malware and suspicious activity on the EDA’s components. Panic erupted into full-blown hysteria.

Two weeks later, the same cybersecurity contractor corrected his own version of events, reporting those incidents of compromise were nothing more than false positives—activity that appeared suspicious but was actually benign instead. To a CIO inheriting an agency with a sketchy cybersecurity record, the reassurance was hardly enough. He wanted a guarantee that all components were infection-free and no malware could persist—a bet any reasoned cybersecurity expert would be loath to make. By April 16, 2012, despite months of searching, EDA’s cybersecurity contractor could find no other extremely persistent malware or incidents of compromise across the agency’s system. The NSA and US-CERT confirmed the same. 38

Convinced that any further forensics investigation would not lead to new results and compelled to clean up what it believed to be a widespread infestation, the EDA took drastic measures on May 15, 2012, and began physically destroying more than $170,000 worth of its IT components, including desktops, printers, TVs, cameras, and even computer mice and keyboards. Had its request for more funding not been refused, the agency would have destroyed up to $3 million more of its infrastructure in an effort to contain an imaginary threat. 39

All told, the EDA spent more than $2.7 million—over half of its annual IT budget—in an unfortunate overreaction to a threat that could easily have been remediated. More than $1.5 million of taxpayer dollars went to paying independent cybersecurity contractors and another $1 million went to standing up a temporary IT infrastructure after asking to be exiled from the HCHB network. And, of course, there was the $170,000 in destroyed equipment that could have soared much higher had cooler heads (and finite budgets) not prevailed. 40

Beyond a folly of poor communication exacerbated by extreme paranoia, the agency’s overreaction could be chalked up to a lack of skill and management tools in diagnosing the problem. Had the EDA simply verified for itself, it would have found that there was no risk of infection from its e-mail server, deployed with the latest version of antivirus software that was performing weekly scans and had found no incident of attack. In the end, there were only six components—a far cry from the wrongly assumed 146—with malware infections, each of which could have been easily remediated using typical containment measures, like reimaging, which would have had a negligible impact to operations and budget. 41

In what would be heralded as an extreme case of overreaction, the EDA faced condemnation of a different sort. The seeming incompetence in wasting millions of dollars of taxpayer money chasing an imaginary adversary had critics excoriating the agency’s ineptitude. Yet, had the danger been real and the agency not reacted, the public outcry would likely have been even greater. Rebecca Blank, appointed as acting commerce secretary in June 2012 and who ordered the investigation, put it simply:

  The EDA did not know what it was facing. Under those circumstances, given the cyber risks [to the government], one has to be cautious. In retrospect, it was not as serious as they originally thought. But it’s a question of which side do you want to err on? 42

Indeed, as another organization with no shortage of cybersecurity defenses would soon find, erring on the other side carries even greater consequences.

Ignoring One’s Defenses

Unlike the EDA, retailer Target intended to be at the top of its class in employing cybersecurity professionals and equipping them with the latest technology. In the world of cybersecurity, retailers find themselves in the crosshairs of innovative adversaries often motivated by profit. Only 5 percent of retailers discover breaches through their own monitoring capabilities. 43 Target aspired to be different, building an information security staff numbering more than 300 by 2013, a tenfold increase in less than ten years.

That same summer, Target spent close to $2 million installing a malware detection tool built on a then-new technology called sandboxing. Because a zero-day threat, by its very nature, has not yet been detected elsewhere and can therefore bypass defenses like antivirus capabilities that look for known malware, sandboxing executes suspicious programs within a confined test environment, quarantined away from the rest of the corporate network. The malware, deceived into believing it is operating within its intended victim’s environment, begins executing. Once the sandbox unpacks the program’s contents and finds it to be malicious, the zero-day threat is thwarted before it can enter the corporate network. Target had a team of security specialists in Bangalore monitoring its network around the clock. If anything suspicious was detected, Bangalore would immediately notify the company’s security operations center in Minneapolis. The company intended to protect its castle against motivated criminals storming the gates.
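The detonation idea behind sandboxing can be illustrated with a toy sketch (none of this reflects Target’s actual tooling): run an untrusted sample in a throwaway directory, observe what it does, and render a verdict. The file-dropping heuristic below is purely illustrative.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def detonate(sample_code: str, timeout: int = 5) -> dict:
    """Run an untrusted snippet in a scratch directory and report what it did."""
    with tempfile.TemporaryDirectory() as scratch:
        before = set(Path(scratch).iterdir())
        try:
            subprocess.run([sys.executable, "-c", sample_code],
                           cwd=scratch, timeout=timeout,
                           capture_output=True, check=False)
        except subprocess.TimeoutExpired:
            return {"verdict": "suspicious", "reason": "timed out"}
        created = [p.name for p in set(Path(scratch).iterdir()) - before]
        # Toy heuristic: a sample that drops executables into its directory is flagged.
        if any(name.endswith((".exe", ".dll", ".bat")) for name in created):
            return {"verdict": "malicious", "dropped": created}
        return {"verdict": "clean", "dropped": created}

benign = "print('hello')"
dropper = "open('payload.exe', 'w').write('MZ...')"
print(detonate(benign)["verdict"])   # clean
print(detonate(dropper)["verdict"])  # malicious
```

A real sandbox instruments the sample far more deeply (system calls, network traffic, memory), but the shape is the same: let the program behave, then judge the behavior rather than the signature.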

Those raiders encroached in September 2013. 44 As mentioned in Chapter 1, cybercriminals performed basic online reconnaissance to locate approved vendors with remote access to Target’s systems. No one knows how many targets were attempted before Fazio Mechanical Services, a heating, air conditioning, and refrigeration company based in Sharpsburg, Pennsylvania, fell prey to the thieves’ phishing scheme. The phishing attempt would likely have been caught had Fazio deployed the latest available corporate antivirus solution. The company instead used free antivirus software intended for consumer use. The enemy was able to infiltrate the outer wall of Target’s castle by stealing the credentials of a compromised vendor, one that would more than likely have had access to Target’s external billing system. By November 12, the attackers had penetrated Target’s inner wall, successfully breaching the company for the first time. 45 From there, adversaries ultimately snaked their way to Target’s valuable point-of-sale (POS ) infrastructure, testing their malware for close to two weeks.

On November 30, multiple alerts were sounded by Target’s defensive software. The software deployed on the company’s endpoints triggered suspicious activity alerts. Perhaps more important, the newly deployed sandboxing software triggered a severe alert the same day. 46 The alerts were passed from Bangalore to the company’s security operations center, business as usual, even as the adversary finished installing the malware on the company’s POS system. The “keep” was in the crosshairs.

On December 2, the attackers installed upgraded versions of their malware to exfiltrate data from Target’s systems during the busiest times of day, apparently to obfuscate the traffic in the company’s normal transactions. 47 Again, another sandboxing critical alert was triggered. 48 Again, Bangalore notified Minneapolis. Again, nothing happened.

It would take nearly another two weeks before Target would identify the threat and remove the malware. By then, more than 40 million credit and debit cards would be stolen along with the personally identifiable information of up to 70 million more customers. In all, the raiders had siphoned off 11 gigabytes of sensitive information from the company’s “keep” in just two weeks. Target’s own defenses triggered alerts that could have stopped the invasion; had the company simply enabled the option to remove detected malware automatically (an available setting declined when the new system was installed), the missed alerts between Bangalore and Minneapolis would have been inconsequential.

Of course, hindsight is always 20/20 and one can easily play the “should have, could have, would have” game after an attack occurs. But, in Target’s case, what is perhaps most interesting is the fact that the company was active in deploying the latest technology to secure its environment. It was also zealous in hiring cybersecurity experts to monitor its network. It had ostensibly taken the preventive measures to secure its castle and then summarily dismissed the very alerts provided by its own technology.

One may criticize the company for not automatically responding to every alarm by removing the suspected malware. But that would ignore the prevalence of false positives in the industry (recall that the independent contractor’s initial assessment of the EDA’s risk included false positives diagnosed only after further investigation). If companies like Target automatically responded to every suspected threat as a DEFCON 1 assault, they would risk interrupting operations and customer service when possibly no threat exists at all.

On the flip side, one could excoriate Target for ignoring the alarms raised by both its endpoint and sandboxing security defenses. But a company with Target’s infrastructure—more than 360,000 employees, roughly 2,000 stores, 37 distribution centers, and a heavily trafficked retail web site 49 —likely sees hundreds of thousands of alerts in any given day. Separating the signal from the noise, determining which threats are most severe and most worthy of further investigation and action, becomes par for the course for an already overtaxed cybersecurity team.
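The triage problem described above, surfacing a handful of genuinely dangerous alerts out of a flood, can be sketched in a few lines of hypothetical scoring logic. The severity numbers, thresholds, and analyst capacity below are invented for illustration; real systems derive scores from detection models and context.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # e.g., "endpoint", "sandbox"
    severity: int     # 0 (noise) .. 100 (critical); scoring model is hypothetical
    description: str

def triage(alerts, analyst_capacity=3, critical_threshold=80):
    """Rank alerts by severity; escalate only the worst few to human analysts,
    leaving the rest for automated handling so responders are not buried in noise."""
    ranked = sorted(alerts, key=lambda a: a.severity, reverse=True)
    escalate = [a for a in ranked if a.severity >= critical_threshold][:analyst_capacity]
    automated = [a for a in ranked if a not in escalate]
    return escalate, automated

alerts = [
    Alert("endpoint", 35, "unsigned binary executed"),
    Alert("sandbox", 95, "POS memory scraper detected"),
    Alert("gateway", 60, "unusual outbound traffic"),
    Alert("endpoint", 90, "credential dumping behavior"),
]
escalated, automated = triage(alerts)
print([a.description for a in escalated])
# ['POS memory scraper detected', 'credential dumping behavior']
```

Even this toy version makes the trade-off visible: raise the threshold and true attacks slip into the automated pile; lower it and analysts drown, which is precisely the bind Target’s operations center faced.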

And so it goes: even companies deploying the latest cybersecurity tools backed by hundreds of their own cybersecurity experts can find themselves tomorrow’s headline, accused of rolling out the proverbial red carpet to marauders seeking their castle’s most valuable keep.

A Second Thought

Defense in depth has proven itself a viable military tactic since medieval times, if not earlier. It’s no wonder the NSA quickly endorsed a similar approach to defend one’s information assets across people, process, and technology and throughout the threat defense life cycle (from protection to detection to reaction). When enemies are seen in a physical world, defense in depth makes perfect sense. But, increasingly, in a virtual world, it faces a new set of challenges:

  • Defense in depth works when all layers operate as one. A siloed approach to a defense-in-depth system compromises a first responder’s ability to react. Not unlike the multiple siloes in operation during the 9/11 attacks, easily transferring threat intelligence between disparate groups is challenging, at best. Automatically remediating a threat without the benefit of further diagnostics can result in a different form of losses (through either lost productivity or a negatively impacted customer experience) if the “threat” is actually a false positive. In either case, an organization requires a team of cybersecurity experts willing and able to separate valuable signals from cacophonous noise and armed with machine learning that quickly elevates the most insidious threats to the top of the list, while automatically remediating threats of lower importance.

  • Defense in depth assumes you have friendly insiders. In the case of insider attacks or even in the more common occurrence of employees falling victim to phishing schemes, organizations must defend against attack vectors coming from within their own walls. Otherwise, multiple defensive layers are automatically bypassed when insiders are themselves the evildoers, if not the entryways for adversaries.

  • Defense in depth is increasingly challenging when the walls of the castle are no more. In an increasingly cloud- and mobile-based world, the castle’s keep is often on the outside of its perimeter. The only things separating an authorized user from his cloud-based application or file are an Internet connection and login credentials. Securing this keep requires more than the most well-intended defense-in-depth approach.

  • Defense in depth can add disastrous complexity. With a cybersecurity talent pool that is insufficient at best, a defense-in-depth approach that selects best-in-breed technology from across the stack serves to create tool sprawl for first responders. Disintegrated security environments are accompanied by fragmented management systems. Already strapped cybersecurity professionals are now in the unenviable place of working harder for their tools, rather than the other way around. Imagine if castle defenders had to constantly learn how to employ each weapon—bows and arrows, catapults, trebuchets, and the like—each time attackers advanced. Now imagine that a castle lacked sufficient defenders to master the intricacy of each of these tools. Finally, visualize a case where the defenders are literally blinded by these same tools, perhaps by each tool being in the way of a clear line of sight to an encroaching army—and you have the metaphorical case facing so many cybersecurity first responders today.

  • Defense in depth can sometimes be more perception than reality. As an example, consider the case where the same technology is deployed across multiple instances of one’s security environment, such as a deterministic signature-based antivirus engine placed on e-mail gateways, e-mail servers, and endpoints. We may assume we have hardened our technology defenses with multiple layers. In actuality, one circumvention of the underlying antivirus engine causes every layer built on it to fall. Bringing this back to our 9/11 analogy, the weapons detection techniques deployed at multiple security checkpoints and airports across the country failed at least 19 times, as the terrorists successfully evaded detection with effectively concealed box cutters.

  • Defense in depth works when all defenders are unified in purpose and collaboration. Cybersecurity professionals find themselves on the same side of the battle with one another, but often with conflicting agendas. As we discussed in Chapter 6, cybersecurity finds its roots in IT, a highly structured discipline within the organization. IT professionals specialize across various aspects of the technology stack—for instance, some look specifically at device considerations, others care for network connectivity, some worry about servers and storage, and still others consider software applications. In many cases, these organizations function more like siloes than an integrated operation, each behaving independently from the other, with unique budgets and objectives. Using the castle metaphor, it would be the equivalent of guards at the gate readying their defenses without concern for how guards in the tower were preparing the same. Even worse, since these organizations may not even consult one another on the tools being used to detect threats in their environment, it is as if guards at the gate and in the tower could not communicate with one another when enemies stormed the castle.
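The “more perception than reality” point above can be made concrete with a toy model: if the e-mail gateway, e-mail server, and endpoint all consult the same deterministic signature engine, a single evasion defeats all three layers at once. The signature names here are invented for illustration.

```python
# Toy model: three "layers" that all share one signature database.
SIGNATURES = {"evil_payload_v1"}

def signature_engine(payload: str) -> bool:
    """Deterministic check: a payload is blocked only if it matches a known signature."""
    return payload in SIGNATURES

# Same engine deployed at every layer -- depth in appearance only.
layers = {
    "email gateway": signature_engine,
    "email server": signature_engine,
    "endpoint": signature_engine,
}

def runs_the_gauntlet(payload: str) -> bool:
    """True if the payload slips past every layer."""
    return all(not check(payload) for check in layers.values())

print(runs_the_gauntlet("evil_payload_v1"))         # False: known sample, blocked at the first layer
print(runs_the_gauntlet("evil_payload_v1_obfusc"))  # True: trivially repacked, sails past all three
```

Genuine depth would mix detection techniques (signatures at one layer, behavioral analysis at another) so that bypassing one engine does not bypass them all.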

Despite these challenges, defense in depth remains a popular strategy employed by many cybersecurity professionals. In a 2016 McAfee research study , nearly half of organizations admit they subscribe to a cybersecurity posture that employs best-in-breed technology across the stack; the other half favor an integrated view of cybersecurity across the threat defense life cycle. Presumably, one of these cohorts is less effective than the other. The question is, which one?

To answer the question, McAfee asked organizations to define their degree of integration based on the percentage of workloads consolidated with one vendor. In interpreting the data, those companies with at least 40 percent of their security workloads with one vendor were classified as having an integrated view to security—presumably advantaged with a common management toolset offset by any disadvantages associated with not having best-in-breed technology available from multiple competing vendors. The company then asked respondents to answer a series of questions related to their security posture.

In the study, those companies with an integrated view to security reported:

  • Being better protected, with 78 percent suffering less than five attacks in the past year compared with 55 percent of those with a best-in-breed security approach;

  • Having faster response times, with 80 percent discovering threats in less than eight hours compared with 54 percent; and

  • Feeling more confident in their security posture, with just 30 percent losing sleep more than once per week due to security concerns versus 57 percent.

These results would indicate greater effectiveness with a more comprehensive view to security, one that is not exacerbated by multiple technologies in the same domain, often offered by different vendors, that only serve to confuse, if not impede, cybersecurity professionals from being effective in their jobs. That said, who wants to consolidate most of their security workloads with one vendor? Doing so entails increasing long-term risk of putting all of one’s eggs in a single provider’s basket. If that provider fails to innovate faster than adversaries, a company’s most integrated approach to security will do little to protect its customers from the next threat vector.

If defense in depth overcomplicates one’s security environment and a less complex integrated view increases potential long-term risk, where does an organization go to secure its castle? We submit the answer lies not in identifying a better product but, rather, a better platform for long-term security sustainability. It’s an approach not embraced by many cybersecurity professionals if for no other reason than misaligned incentive structures that are in direct opposition to a long game of security, a topic covered in Chapter 6. As we continue laying out the case for cybersecurity defense in an increasingly cloud and mobile world, one thing is increasingly clear: defense in depth is a sound approach when one’s castle is bordered and one’s enemy readily identifiable—two presumptions that are less relevant with each passing day in The Second Economy.

Notes

  1. Charlotte Hodgman, “In case you missed it . . . King John and the French invasion of England,” History Extra, October 16, 2015, article first published in BBC History Magazine in 2011, www.historyextra.com/feature/king-john-and-french-invasion-england , accessed June 14, 2016.

  2. Ibid.

  3. Ibid.

  4. Ibid.

  5. Marc Morris, “King John: the most evil monarch in Britain's history,” The Telegraph, June 13, 2015, www.telegraph.co.uk/culture/11671441/King-John-the-most-evil-monarch-in-Britains-history.html , accessed June 14, 2016.

  6. Ibid.

  7. Ibid.

  8. Danelle Au, “Is Defense In Depth Dead?,” RSA Conference, March 12, 2015, www.rsaconference.com/blogs/is-defense-in-depth-dead , accessed June 15, 2016.

  9. Hodgman, note 1 supra.

  10. The 9/11 Commission Report, http://govinfo.library.unt.edu/911/report/911Report.pdf , accessed June 15, 2016.

  11. Ibid.

  12. Ibid.

  13. Ibid.

  14. “Defense in Depth: A practical strategy for achieving Information Assurance in today’s highly networked environments,” National Security Agency, https://citadel-information.com/wp-content/uploads/2010/12/nsa-defense-in-depth.pdf , accessed June 17, 2016.

  15. Ibid.

  16. Reuters, “Cyber Security Startups Face Funding Drought,” Fortune, February 24, 2016, http://fortune.com/2016/02/24/cyber-security-funding-drought/ , accessed June 17, 2016.

  17. PwC, “US cybersecurity: Progress stalled; Key findings from the 2015 US State of Cybercrime Survey,” July 2015, www.pwc.com/us/en/increasing-it-effectiveness/publications/assets/2015-us-cybercrime-survey.pdf , accessed June 17, 2016.

  18. Burning Glass Technologies, “Job Market Intelligence: Cybersecurity Jobs, 2015,” 2015, http://burning-glass.com/wp-content/uploads/Cybersecurity_Jobs_Report_2015.pdf , accessed June 17, 2016.

  19. Steve Morgan, “Cybersecurity job market to suffer severe workforce shortage,” CSO Online, July 28, 2015, www.csoonline.com/article/2953258/it-careers/cybersecurity-job-market-figures-2015-to-2019-indicate-severe-workforce-shortage.html , accessed June 17, 2016.

  20. Martin Libicki, David Senty, and Julia Pollak, “H4CKER5 WANTED: An Examination of the Cybersecurity Labor Market,” Rand Corporation, 2014, www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR430/RAND_RR430.pdf , accessed June 17, 2016.

  21. Ariha Setalvad, “Demand to fill cybersecurity jobs booming,” Peninsula Press, March 31, 2015, http://peninsulapress.com/2015/03/31/cybersecurity-jobs-growth/ , accessed June 17, 2016.

  22. Steve Morgan, “One Million Cybersecurity Job Openings In 2016,” Forbes, January 2, 2016, www.forbes.com/sites/stevemorgan/2016/01/02/one-million-cybersecurity-job-openings-in-2016/#57a37cc37d27 , accessed June 17, 2016.

  23. Amy Bennett, “Survey: With all eyes on security, talent shortage sends salaries sky high,” CSO, March 30, 2016, www.csoonline.com/article/3049374/security/survey-with-all-eyes-on-security-talent-shortage-sends-salaries-sky-high.html#tk.cso_nsdr_intrcpt , accessed June 17, 2016.

  24. Ibid.

  25. “May 2015: Top-Paying Tech Security Jobs,” Dice, http://media.dice.com/report/may-2015-top-paying-tech-security-jobs/ , accessed June 17, 2016.

  26. Bennett, note 23 supra.

  27. Ibid.

  28. Ibid.

  29. Gartner on Twitter, @Gartner_inc, June 24, 2014, #GartnerSEC, tweet pulled on June 17, 2016.

  30. Michael Suby and Frank Dickson, “The 2015 (ISC)2 Global Information Security Workforce Study,” a Frost & Sullivan White Paper, April 26, 2015, www.isc2cares.org/uploadedFiles/wwwisc2caresorg/Content/GISWS/FrostSullivan-(ISC)%C2%B2-Global-Information-Security-Workforce-Study-2015.pdf , accessed June 17, 2016.

  31. US Department of Commerce, Office of Inspector General, Office of Audit and Evaluation, “ECONOMIC DEVELOPMENT ADMINISTRATION Malware Infections on EDA’s Systems Were Overstated and the Disruptionof IT Operations Was Unwarranted,” FINAL REPORT NO. OIG-13-027-A, June 26, 2013, www.oig.doc.gov/OIGPublications/OIG-13-027-A.pdf , accessed June 17, 2016.

  32. Ibid.

  33. Ibid.

  34. Ibid.

  35. Ibid.

  36. Lisa Rein, “At Commerce Dept., false alarm on cyberattack cost almost $3 million,” The Washington Post, July 14, 2013, www.washingtonpost.com/politics/at-commerce-dept-false-alarm-on-cyberattack-cost-almost-3-million/2013/07/13/11b92690-ea41-11e2-aa9f-c03a72e2d342_story.html , accessed June 17, 2016.

  37. US Department of Commerce, note 31 supra.

  38. Ibid.

  39. Ibid.

  40. Ibid.

  41. Ibid.

  42. Rein, note 36 supra.

  43. Michael Riley, Benjamin Elgin, Dune Lawrence, and Carol Matlack, “Missed Alarms and 40 Million Stolen Credit Card Numbers: How Target Blew It,” Bloomberg, March 17, 2014, www.bloomberg.com/news/articles/2014-03-13/target-missed-warnings-in-epic-hack-of-credit-card-data , accessed June 17, 2016.

  44. US Senate Committee on Commerce, Science and Transportation, “A ‘Kill Chain’ Analysis of the 2013 Target Data Breach,” March 26, 2014.

  45. Ibid.

  46. Ibid.

  47. Ibid.

  48. Ibid.

  49. “Did Target’s Security Blow it or Just Get Blown Up with Noisy Alerts?,” March 14, 2014, Damballa blog, www.damballa.com/did-targets-security-blow-it-or-just-get-blown-up-with-noisy-alerts/ , accessed June 17, 2016.

  50. The Spy Factory: Examine the high-tech eavesdropping carried out by the National Security Agency. Aired February 03, 2009 on PBS.

  51. Ibid.

  52. Ibid.

  53. Ibid.
