© 2016 by Intel Corp.

Steve Grobman and Allison Cerra, The Second Economy, 10.1007/978-1-4842-2229-4_6

6. Playing Second Fiddle

Steve Grobman1 and Allison Cerra2

(1)Santa Clara, California, USA

(2)Plano, Texas, USA

I’m attracted to computers, and some people can’t understand that. But some people are attracted to horses and I can’t understand that.

—Dan Nydick, Electrical Engineering major at Carnegie-Mellon, 1983 1

In 2016, CareerCast, an online job search site, evaluated 200 jobs on 11 stress factors and published a report ranking the most and least stressful careers of the year. Those in the highest-stress positions faced punishing deadlines, physical demands, hazardous conditions, and even life-threatening risks to themselves and others. Fittingly, enlisted military personnel, firefighters, airline pilots, and police officers topped the company’s list of the most stressful jobs. 2 And, while the most nerve-racking occupations came as no surprise, the study’s pick for the least stressful job raised at least a few eyebrows. Those enjoying the most enviable combination of low stress and high salary were none other than information security analysts. 3

The finding seemed to defy logic. After all, information security analysts are on the frontline of an escalating battle between malevolent hackers and the organizations in their crosshairs. Perhaps the criteria were partly to blame. While cyberattacks can carry personal risk (think of nation-state attacks against civil infrastructure, for example), information security analysts don’t exactly put their lives on the line at work. Maybe the researchers had an ulterior motive. At the time of the study, CareerCast was flush with more than 7,500 job postings for information security analysts, compared with roughly 1,000 openings for other positions. 4 Or, according to an argument advanced by at least one certified information systems security professional, perhaps CareerCast missed a critical attribute among the 11 tested: appreciation. As Jerod Brennen so eloquently opined via a LinkedIn blog:

  • If you work in a back office or shared services role, it’s likely that you produce something. HR? I'd argue that you produce jobs. You help people get hired. Finance? You produce budgets that pay for all the things. Payroll? You produce paychecks… IT? As unappreciated as you are, the fact remains that you produce systems and applications that end users rely on. But what do information security professionals produce? Nothing. 5

And, it’s this lack of production that leads Brennen to speculate that cybersecurity professionals are, in essence, the unsung heroes of their organizations—quietly toiling away at all hours of the day and night ensuring that nothing ever happens. For the majority of employees around them, this invisibility creates a lack of understanding, and appreciation, for the yeoman’s work that is done inconspicuously in the background. In fact, the information security analyst is least appreciated when he is most seen—either due to a breach that critics will claim foresight could have prevented or when the latest software patch even slightly diminishes the productivity of an instant-gratification workforce. So, while the information security analyst may occupy the most enviable job according to CareerCast’s study, he often garners the least appreciation from his coworkers. And, the “least stressful” job available can quickly turn the other way when something does happen, particularly when the thing one is securing was never built for protection in the first place.

Initial Battle Lines Are Drawn

At an otherwise garden-variety computer conference at the Washington Hilton hotel in October 1972, history was made. A team of researchers demonstrated a fledgling network in the making that would forever redefine communications. The era was mired in Cold War fears and the imminent threat of nuclear holocaust. At the time, expensive and highly controlled telephone networks were the modern-day convenience for communications. In the years preceding the conference, American engineer Paul Baran began contemplating a new network design that could withstand nuclear attack and allow society to reestablish communications and reconstitute itself. At the same time, and working independently, Welsh scientist Donald W. Davies envisioned a more efficient network design for data—one that would not tie up costly telephone lines in use by behemoth computers, particularly during long periods of silence between individual transmissions. The two visionaries ultimately landed in the same place: a network that transmitted packets of data, allowing several users to occupy the same telephone line by efficiently distributing the slices of data across the wire. What would become the ARPANET, the forerunner to today’s Internet that initially connected universities, was born. Its output was on display for the first time at that nondescript computer conference in 1972.

As anyone with experience in conducting live demonstrations of any sort at a major event will tell you, things don’t always go as planned (as an interesting aside, event coordinators rank in the top ten of CareerCast’s most stressful positions, 6 we suspect largely due to these unexpected occurrences). During a demonstration of the ARPANET’s capabilities to a visiting delegation from one of the largest telephone companies at the time, the system abruptly crashed. The man behind the presentation, Robert Metcalfe, a Harvard University doctoral student who would later co-invent Ethernet technology and found networking juggernaut 3Com, was incensed when he noticed the “suits” from the telephone company snickering. He would later recall, “They were happy. They were chuckling. They didn’t realize how threatening it was. . . . [The crash] seemed to confirm that it was a toy.” 7

The enmity between two sides of a technology battle—the “Bellheads,” creators of highly reliable circuit-switched networks, and the “Netheads,” aspirational visionaries of high-bandwidth packet-switched networks—would fester as opposing views dictated radically different architectural principles. Of particular distinction between the two factions: where the intelligence resided in the network design. Bellheads built telephone networks with a highly intelligent core—the switches that dictated all functionality and terminated calls at dumb telephones on the customer’s side of demarcation. Netheads took the opposite approach: their vision was predicated on a dumb core, with intelligence residing at the edges, or within the computers and terminals accessing the network. The dumb core allowed users easy access, albeit at the expense of security.

Netheads had little concern for security, given that the traffic in motion was hardly worth protecting. E-mail dominated the ARPANET and became the first “killer app.” The initial sprouts of what would ultimately become today’s Internet were little more than messages between academic institutions, hardly the bait to attract cybercriminals. And so, security became an afterthought to easy and ubiquitous access, leaving historian Janet Abbate to use an interesting metaphor for what we now have today:

  • We’ve ended up at this place of security through individual vigilance. It’s kind of like safe sex. It’s sort of ‘the Internet is this risky activity, and it’s up to each person to protect themselves from what’s out there.’ . . . There’s this sense that the [Internet] provider’s not going to protect you. The government’s not going to protect you. It’s kind of up to you to protect yourself. 8

To be fair, the original visionaries of the Internet were nothing short of geniuses. They developed the blueprint for a network that would connect more people than the entire worldwide population at the time of their concept in the 1960s. It wasn’t that they entirely dismissed security in their worldview. They just assumed the threats would come in the form of military strikes, never imagining that their invention could be undone by the very people it connected.

It’s easy to forget that standing up a network capable of connecting billions of endpoints required a common protocol for doing so—a nontrivial undertaking in and of itself. That protocol, TCP/IP (Transmission Control Protocol/Internet Protocol), provides users with a common language through which to communicate over the Internet to this day. Initially, the Internet’s founding fathers contemplated embedding encryption directly in the protocol. But doing so would have required extensive computing power, and likely new pieces of expensive hardware, to work properly. This, coupled with uncertainty over how to effectively distribute encryption keys and the US Government’s treatment of cryptography as a potential national security threat (which we covered in Chapter 1), led early creators to sideline security as a challenge simply too big to tackle. As Abbate so masterfully observes, security was of no consequence to architects believing they were building a classroom; in actuality, they were building a bank 9 at a time before “hacking” had even entered the common vernacular.

The Underestimated Troublemakers

Put another password in,

Bomb it out, then try again.

Try to get past logging in,

We're hacking, hacking, hacking.

Try his first wife's maiden name,

This is more than just a game.

It's real fun, but just the same

It's hacking, hacking, hacking.

—Hacker’s anthem by “Cheshire Catalyst” 10

They were described as the modern-day Robin Hoods of an information age. Young, male, wildly talented, and obsessed with learning the deepest intricacies of computers that were increasingly becoming extensions of daily life. The underground hacker culture that emerged in the 1980s was glorified by the media and Hollywood alike, representing these computer geniuses as benign pranksters with nothing better to do with their time and talent than mess with a few computer systems and see what havoc they could wreak. Their motivation was hardly the stuff of serious cybercrime. It was simply the achievement of doing something inconceivable and then sharing the feat with like-minded enthusiasts. Security was no hurdle for many of their ventures, with measures like strong password hygiene and basic firewall protections still years in the future. In actuality, adept hackers of the day longed to find the seemingly impenetrable target. Computer systems with tougher security borders offered more tempting bait with which to test their skills, as bragging rights were greatly diminished if the catch was too easy. Neal Patrick, a 17-year-old hacker at the time and member of an online gang of similarly interested teens who successfully penetrated more than 60 business and government computer systems across North America, put it like this:

  • We were surprised just how easy it was. It was like climbing a mountain: you have the goal of reaching the top or accessing a computer, and once you reach the peak, you make a map of the way and give it to everybody else. 11

Hackers were dismissed as harmless miscreants, incapable of doing any serious damage. Steve Wozniak, co-founder of Apple, commented at the time,

  • There's this myth that kids can crack secure systems, and that's just not true. . . . I hope my kid grows up to be like them. It's the very best way to exercise your curiosity and intelligence—a nondestructive form of social deviancy. 12

Being a hacker was hardly a career or crime; it was better defined as a hobby. The face of the hacker was clearly understood—young males dominated the trade. In fact, Wozniak estimated the life expectancy of one of these virtual nonconformists to be no more than 20 years of age. After that, the obligations of holding down a “real” job and becoming a “responsible” adult were simply too great to ignore and too much to jeopardize for possible imprisonment in the name of virtual bravado. 13

Then, in 1988, the worldview changed. A Harvard graduate and son of one of the US Government’s most respected computer security experts began experimenting with the notion of enslaving an army of drone computers, what would ultimately be referred to as a “botnet” in modern times. Motivated by curiosity, Robert Morris unleashed what would become the first widespread, insidious attack against the public Internet and the roughly 100,000 computers connected to it at the time. 14 His Morris Worm, as it would be called, was designed to replicate itself across the network, jumping from one machine to the next, spreading its contagion. To avoid infecting the same machine multiple times, which would slow the computer’s performance and draw risky attention to Morris’ creation, the hacker designed a mechanism in his code that would detect whether a machine was already infected. If so, a virtual coin of sorts was flipped, with the losing worm self-destructing. However, in what was later assumed to be an attempt to prevent a decoy worm from inserting itself in his code to destroy any existing worms, Morris created an exception to the suicide feature in one of every seven instances—when, instead of self-destructing, the redundant worm would make itself immortal. The move nullified Morris’ original intention of infecting any machine only once. 15 Within days, the Internet was ravaged by a replicating worm with no signs of containment, sapping computers’ resources with multiple spawns of its strain as users frantically attempted to find the antidote.
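The logic of that fateful exception is simple enough to sketch in a few lines. What follows is a simplified illustration in Python, not the worm’s actual source; the function name, return values, and coin-flip shorthand are ours, invented only to make the one-in-seven decision described above concrete.

```python
import random

def reinfection_decision(already_infected: bool) -> str:
    """Illustrative sketch (not Morris' code) of the worm's re-infection logic.

    If no copy is running on the machine, infect it. If a copy is already
    running, flip a virtual coin so that one of the two copies self-destructs.
    The exception: roughly one time in seven, skip the check entirely and
    mark the new copy immortal, defeating the goal of one infection per machine.
    """
    if not already_infected:
        return "infect"

    if random.randint(1, 7) == 1:          # the one-in-seven exception
        return "infect anyway (immortal)"  # ignore the duplicate check

    # the virtual coin flip: the loser self-destructs
    return "self-destruct" if random.random() < 0.5 else "keep running"
```

Even in this toy form, the arithmetic is unforgiving: with the exception firing one time in seven on a heavily probed machine, immortal copies pile up quickly, producing exactly the resource exhaustion early Internet users experienced.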

The media, completely oblivious to what this “Internet” thing was and how it could possibly become infected, vacillated between extreme panic (was this World War III in the making?) and complete ignorance (could this Morris Worm spread to human beings?). But, the roughly 100,000 early Internet adopters knew the implications with which they were dealing. The innocence of the Internet had been shattered. The unintentional by-product of a dumb core was virtual lawlessness. There was no government that could batten down the hatches; no white knight to save the day. At once, reality set in—the high jinks of an elite community of virtual rebels threatened the very existence of a globally connected community.

Ten years later, a group of seven hackers testified before the US Congress to articulate what those early Internet users knew a decade before. In cautionary testimony provided by one of the group’s leaders:

  • If you’re looking for computer security, then the Internet is not the place to be. . . . [It can be taken down] by any of the seven individuals seated before you [with 30 minutes of well-choreographed keystrokes]. 16

It was clear. If the prankster hackers of the 1980s had grown up to be the potentially dangerous online criminals of the 1990s, there had to be an answer. Unfortunately, jurisdictional boundaries are not so clean in a borderless reality, so law enforcement only goes so far. To fight fire with fire would require hackers on the good side of the battle. Every “black hat” attempt would require a “white hat” response. These white hats would share their adversaries’ attraction for speaking a digital language few can understand. Their difference: using their talents to ward off, rather than launch, attacks. As an assistant professor of computer science in the 1980s presciently predicted, “If you can domesticate a hacker, you can get good work out of him.” 17 Unfortunately, these domesticated good guys would find themselves in an unfair fight from the beginning.

The Domesticated Heroes

To understand today’s white hat heroes—the cybersecurity professionals who use their hacking skills for their organization’s good—requires an appreciation of their roots. Cybersecurity degrees are increasingly popular today—no surprise for a profession that currently enjoys a nearly 0 percent unemployment rate, earns attractive money, and (according to CareerCast) suffers the least stress of 200 possible occupations. But, it wasn’t always this way. In the beginning, cybersecurity wasn’t a degree available on college campuses. Aspiring white hat hackers found themselves deployed in traditional information technology (IT) environments with commensurate IT worldviews. And, these surroundings were hardly conducive to fighting an increasingly motivated adversary with intentions to do harm.

The IT profession has certainly seen its fair share of ebbs and flows. First, there was the boom of the computer era, which saw the share of companies’ capital budgets dedicated to information technology explode from less than 5 percent in 1965 to nearly 50 percent by the end of the millennium. 18 IT professionals were in hot demand as companies poured big money into building massive infrastructures to compete in an increasingly digital economy. Yet, as investment flowed, so too did increasing criticism of the profession. A 1987 Harvard Business Review article, aptly titled “Make Information Services Pay Its Way,” captured the frustration of business leaders perplexed and underwhelmed by IT departments that were slow, inefficient, and unresponsive. The author provided a practical prescription for the growing refrain of corporate naysayers increasingly disillusioned by IT’s seemingly empty promises:

  • If IS [Information Services] is to achieve a strategic end, companies must manage it as a productive part of the organization. The best way to do this is to run IS as a business within a business, as a profit center with a flexible budget and a systematic way to price its services. In companies that have already turned their IS systems into profit centers, the results have been impressive: the top management complaints are greatly reduced, and the expected efficiencies have materialized. When organizational shackles are lifted, IS can and does serve a strategic purpose. 19

As their field was booming, IT professionals found themselves both loved and loathed by the very organizations funding their efforts. By 2003, the cacophony of critics had reached deafening volumes, this time with a Harvard Business Review article simply titled “IT Doesn’t Matter.” In it, author Nicholas Carr prosecuted a cogent case encouraging organizations not simply to treat IT as a profit center but to ruthlessly restrict their investments in a function that could rarely yield sustainable competitive advantage. Carr’s article provided compelling proof for his argument, offering several studies corroborating a negative correlation between IT investment and superior financial results. More important, Carr submitted an insightful view into the thinking behind a career that would first be lauded and later panned—specifically, that with the proliferation of technologies like data storage, data processing, and data transport, leaders were wrongly inflating IT’s value as a strategic asset to their firms. Far from being a strategic resource, IT was more like a fungible commodity—Carr contended that sustainable competitive advantage is achieved only through that which is scarce and difficult to duplicate, not ubiquitous and easily copied. He implored leaders of the day to treat IT as the commodity it truly was—and ruthlessly prune investment in it accordingly. He built to a crescendo in his final arguments, punctuating his point with a bold assertion:

  • IT management should, frankly, become boring. The key to success, for the vast majority of companies, is no longer to seek advantage aggressively but to manage costs and risks meticulously. If, like many executives, you’ve begun to take a more defensive posture toward IT in the last two years, spending more frugally and thinking more pragmatically, you’re already on the right course. The challenge will be to maintain that discipline when the business cycle strengthens and the chorus of hype about IT’s strategic value rises anew. 20

Consider the dilemma for IT professionals as a whole, and, more specifically, for white hat hackers who were initially absorbed in this cohort from their early beginnings. They are under pressure to stand up defensible virtual infrastructures connecting their companies to a global marketplace. At the same time, they are misunderstood, if not outright unappreciated, by the business stakeholders they are paid to support. Oh, and by the way, those same business stakeholders not only increasingly wield the power over where IT investments are made but also persist in their demands for more frictionless experiences, bringing with them consumer-adopted attitudes about technology into a workplace where intellectual property must be guarded. To put one final nail in the proverbial coffin, the very thing they are paid to support and/or protect is more and more outside their control. By the turn of the millennium, technology spending outside IT was 20 percent of total technology investment; by 2020, it is expected to be almost 90 percent. 21

This “shadow IT” conundrum is among the largest challenges for white hats. IT has been losing its grip as the owner and operator of its organization’s technology for some time. With the proliferation of technologies increasingly acquired by organizations and employees residing outside IT, each additional component presents another entryway for hackers to penetrate. Even worse, when IT is not the purchaser of said technology, it is difficult to expect these professionals to even know the extent of their organization’s digital footprint. A 2015 Cisco study found the number of unauthorized cloud applications in an enterprise to be 15 times higher than predicted by the chief information officer (CIO). 22 As political finger-pointing between IT departments and business units grows, white hats are often left to pick up the pieces if and when a strike on their infrastructure occurs—whether or not they were aware of the digital estate in the first place. The challenge is only exacerbated with the “Internet of Things,” where previously non-connected devices, such as sensors on factory floors, converge in a complex labyrinth that unites physical and virtual worlds. Each connected thing must now be secured, lest it provide an open door through which the adversary can enter to cripple infrastructures once quarantined from the virtual realm.

White hats must often endure traditional IT environments that carry challenges of their own, even before the unique demands of cybersecurity professionals are thrown into the mix. While black hats enjoyed strength in numbers with like-minded allies, white hats were thrust into an environment of confusion, abiding by a set of IT principles that hardly makes sense against an adversary. Consider just a few of the more interesting conflicts for white hat heroes confined by traditional IT constraints:

  • In cybersecurity, black hats fuel the R&D agenda. In traditional IT, the company or organization itself is responsible for research and development (R&D). The challenge is to optimize R&D against one’s competition, ensuring the right level of investment to yield competitive advantage without wasting resources. While competitors seek to win in the marketplace, black hats have a different objective: to deliberately inflict organizational harm for profit, principle, or province. In this scenario, R&D advancements are initiated by black hats in the fight, as they are the first to launch a strike. White hats must be prepared to respond, not knowing the timing, origin, or nature of the next assault.

  • In cybersecurity, time is a white hat’s enemy. There’s an old adage in software development: “In God we trust, the rest we test.” Software developers in a traditional IT environment are expected to get it right; organizations cannot tolerate software releases that hinder productivity or otherwise create chaos. Understandably, these software engineers take measured steps in testing, and retesting, their creations. Test cycles are limited to off-peak hours so as not to disrupt the business-as-usual motion.

    While this instrumented process works well in an IT environment that can be tightly controlled, it increasingly loses relevance when that same environment is under attack by an adversary. White hat software developers must run testing scenarios without fully understanding what the next attack vector might entail. And, when such an attack is launched, there is hardly time to run, and rerun, meticulous testing scenarios during scheduled off-peak hours to inoculate the threat. While time is on the side of the traditional software developer, it is very much the enemy of a white hat coder.

  • Productivity trumps security. When employees must choose between security and productivity, the latter nearly always wins. In a 2016 study among IT professionals conducted by Barkly, an endpoint security company, respondents lamented the biggest issues around implementing effective security measures in their companies. More than 40 percent cited the risk of effective cybersecurity tools slowing down internal systems, with nearly the same number expressing frustration with the frequency of software updates. 23 When cybersecurity measures (or even traditional IT processes, for that matter) are visible to employees, they are likely to be deemed ineffective. What employee wants to boot up her system each week with time-consuming software updates that protect her from an invisible villain she doesn’t even know, or potentially care, exists? Worse yet, who wants to endure sluggish system performance to support multiple security programs in the environment? Many employees would likely take their chances with a potential attack rather than give up certain productivity, real or perceived.

  • Cybersecurity return on investment (ROI) is seen as fuzzy math. Since at least that 1987 Harvard Business Review article, IT has been under pressure to show its companies the money. When one can test and prove the advantages of a new technological advancement that increases productivity or otherwise offers greater organizational effectiveness, ROI calculations readily follow. But, consider the plight of the white hat. How does one effectively prove the ROI of cybersecurity? As the LinkedIn blogger from our opening story so eloquently argued, cybersecurity professionals get credit when nothing happens. How does one go about proving that an attack was successfully diverted? Sure, there are plenty of indicators that these white hats use to show how many threats were thwarted from entering the environment. But, is there any real proof that the threats in question would have caused irreparable harm? And, how does one translate that value when considering the actual costs of cybersecurity deployment and the opportunity costs of real (or perceived) lost productivity associated with sluggish systems bogged down by software updates? (A rough sketch of why the arithmetic is so slippery follows this list.) Perhaps it comes as no surprise that 44 percent of cybersecurity professionals in a 2016 McAfee study indicated that the ROI of their efforts was ambiguous, at best.

  • Cybersecurity suffers the same back-office origin as traditional IT. Thanks to advocates in Carr’s camp, IT’s strategic value has been questioned, if not outright undermined. To this day, CIOs are often not in strong standing in the executive suite. Another 2016 study found that, while 82 percent of board members were concerned about cybersecurity, only one in seven CIOs reported directly to the CEO and most were left completely off the board. Those sobering realities coexisted in a world where 74 percent of cybersecurity professionals believed a cyberattack would happen within the year. 24 With cybersecurity remaining a part of a historically back-office function, the opportunity to voice security concerns to a justifiably paranoid board of directors is filtered, if not muffled, when a white hat advocate isn’t offered a seat at their table.

  • Traditional IT professionals need only understand the product, not the motivations of its developer. When deploying a new technology, it is critical for IT professionals to understand how it will function in their environment. However, understanding what was in the mind of the developer when he designed the latest off-the-shelf commercial software package is hardly critical. For white hats, it’s an entirely different matter. They must not only understand the nature of threats entering their perimeter but also give attention to the motivations of adversaries creating them in the first place. In fact, the best white hats have the ability to think like a black hat, all while understanding their organization’s strategic imperatives. They intersect what is most valuable for the taking with how an enemy would potentially attack. From there, they attempt to anticipate and prepare for the most likely and dangerous threat vectors. This ongoing psychological computation is par for the course for white hats as they constantly calculate the motivations of their organizations, employees and adversaries—yet another distinction setting them apart from their traditional IT counterparts.
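To see why the ROI arithmetic referenced above is so slippery, consider a back-of-the-envelope sketch. One common framing borrows the annualized loss expectancy (ALE) used in risk management: the “return” on a security control is the loss it is believed to avert, minus its cost, divided by its cost. Every number below is hypothetical, chosen purely for illustration, and the decisive input—the loss that would have occurred without the control—is a counterfactual no one can observe.

```python
def security_roi(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """Naive ROI of a security control using annualized loss expectancy (ALE).

    ale_before / ale_after: estimated yearly loss without / with the control.
    Both are guesses about events that may never happen, which is exactly
    why the resulting ROI figure is so easy to dispute.
    """
    avoided_loss = ale_before - ale_after
    return (avoided_loss - annual_cost) / annual_cost

# Hypothetical numbers: a $200,000-per-year control believed to cut expected
# breach losses from $1,000,000 to $300,000 looks like a clear winner...
print(security_roi(1_000_000, 300_000, 200_000))   # 2.5, i.e., a 250% return

# ...but nudge the unobservable "loss avoided" estimate and the story flips.
print(security_roi(400_000, 300_000, 200_000))     # -0.5, i.e., a 50% loss
```

The formula itself is trivial; the dispute is always over the inputs, which is precisely the white hat’s dilemma.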

With these challenges and more, white hats often feel like aliens in their own world. They are not only unappreciated by business units that struggle to understand complex technology but also forced to play by a set of IT rules that never applied to them in the first place. Making matters worse, the black hats have no such rulebook, so the deck is stacked against the white hat heroes from the start. Cybersecurity professionals find themselves in a heavily regimented IT world, in which they play second fiddle to established principles that simply do not fit their reality. And, while white hats can and do identify with each other, they must also specialize in their skills and perspectives to surround their organization at all points of the threat life cycle, leading to occasional friction among players on the same team:

  • Security architects are responsible for identifying necessary defenses, selecting the products and tools, and interconnecting them in a protective mosaic to secure their environment. These individuals must anticipate possible threat vectors to address the question: How might the enemy infiltrate?

  • Security operators run the daily gauntlet of detecting and responding to threats in the organization. These professionals must answer: How is the enemy infiltrating?

  • Incident responders have the task of executing a remediation plan when an attack does occur. These professionals are tasked with answering: How do I respond now that the enemy has infiltrated?

When tensions run high during a breach, natural organizational inertia drifts toward political infighting. Security operators may be quick to blame architects for failing to procure or integrate sufficient technology in the first place. Incident responders may blame operators for failing to detect the threat in a timelier manner. And, architects could blame both of their cohorts for failing to provide useful ongoing feedback to fortify an organization’s defenses. The blame game knows no end. White hats must resist these political tendencies to undermine one another when a breach occurs, channeling their respective efforts to blame and take out the one truly at fault—their common enemy.

It should come as no surprise that this perfect storm of unfortunate circumstances leaves many cybersecurity professionals with precious little confidence in their ability to fight the good fight. One-third of cybersecurity professionals are not even aware of how many breaches their organization suffered in the past year. 25 The only way they receive credit for a job well done is when nothing happens. So, they focus their energies on deploying ever more sophisticated cybersecurity technologies, hoping the next widget will be the silver bullet in their arsenal to ensure no damage by the enemy is done. Cybersecurity software vendors are often the proverbial arms dealers in the war, capitalizing on the anxiety of white hats to sell their wares. They have a willing customer, despite the seeming lack of ROI proof: More than half of IT executives state they “would still jump at the chance to purchase new, improved security software, and one in four say there is no limit to what they would pay for something more effective and reliable.” 26

A Second Thought

White hats and black hats have a lot in common, despite being on opposite sides of a virtual fight. They have an undeniable passion for technology. They possess a genius for making machines seemingly bow to their wills. And, they are misunderstood by “Everyday Joes” who will never fully grasp the depths of their talents. Yet, each day, they line up against one another—the black hats playing offense, the white hats on the defense, on a virtual battlefield the likes of which the vast majority of people will never comprehend.

Their difference in ideologies is obvious. So, too, is the point that bad guys don’t play fair. They are not governed by organizational rules. They don’t worry about falling out of favor in office politics or missing the next job promotion. They can remain laser-focused on their mission—taking out their target.

White hats also stand vigilant defending their turf. They know their objective is to thwart the next attack. On this, both black hats and white hats have clear purpose, albeit on opposite sides of principle.

However, beyond the first-mover advantage and lack of rulebook enjoyed by black hats, they have one more leg up on their white hat counterparts: clarity of incentives. Whether for profit, principle, or province, black hats know their ultimate prize. They can fairly balance risk and reward when considering their opportunity. Most important, their incentive is linked to their ultimate goal, driving consistent behaviors among the black hat community.

In contrast, white hats don’t simply suffer unclear incentives (due to the challenging IT environment in which they often reside); they are actually misled by the wrong incentives altogether. If your ultimate goal is to ensure nothing happens, then you must prove your worth by doing something. As such, cybersecurity professionals continue plunking down money to install the next cybersecurity godsend, complete with attractive promises and marketing superlatives that all but guarantee the enemy’s next attack will be averted. Boards of directors and CEOs would expect as much. If a white hat isn’t deploying new technology, then he is clearly not shoring up defenses as effectively as he otherwise could. Of course, proving the value of these technologies is elusive at best, but what else is a good guy expected to do?

So, consider the scenario: You’re a cybersecurity professional in an unfair fight. You have a limited budget to protect a network that was never purpose-built for security. You have a growing number of adversaries who seek to do your organization harm. And, you have employees who act as unwitting allies to your adversary in this virtual battle—aiding the enemy by taking increasingly seductive phishing bait or otherwise practicing poor security hygiene. In this unfortunate scenario, activity is perceived to be your friend. And, more than likely, your organization is directly or indirectly incentivizing you in this direction.

To put a finer point on organizational incentives, in general, consider that most reward structures have been found to be ineffective at driving the right employee behaviors. In the late 1980s, when incentive-based reward structures had enamored Corporate America, an industrious researcher investigated 28 previously published studies on the topic to determine if there was a conclusive link between financial incentives and performance. G. Douglas Jenkins, Jr. revealed that 57 percent of the studies indeed found a positive effect on performance. However, all of the performance measures were quantitative in nature, such as producing more of a widget or doing a particular job faster. Only five of the studies looked at the quality of performance. And, none of those studies found any conclusive benefits from incentives. 27

Here is where the wheels simply come off for well-intentioned cybersecurity professionals. The quality of their deployments is actually much more important than the quantity of tools they attempt to ingest into their IT environments. Deploying one security widget after the next in a disintegrated fashion only serves to clutter the perspective through which these professionals must detect attacks. As we will see in the next chapter, the fragmented cybersecurity posture that often results leaves white hats overwhelmed by their own noise—with multiple cybersecurity products triggering alarms (some real, some false) with little to no automation or orchestration to remedy the threats.

Of course, this scenario presumes these white hats even successfully deploy multiple technologies in the first place. Thanks to tricky budget cycles, companies commonly buy software in one cycle and allocate money for deployment and maintenance in the next year. Unfortunately, this strategy is often doomed to fail. In 2014, the average organization spent $115 per user on security-related software, with $33 of it (or 28 percent) underutilized or not used at all. This dreaded “shelfware,” as it is derisively known by industry insiders, is often the result of overloaded cybersecurity professionals, who are simply too busy to implement it properly. 28

And, in perhaps the most unfortunate case of an asymmetric advantage in an unfair fight, black hats need not worry about quality in their efforts. They only need one attack to land. If they can overwhelm their target with multiple assaults, the odds begin to turn in their favor. Quality certainly helps their efforts, but it is not necessary. As many of the latest black-swan attacks show, even fairly crude attempts can yield big gains for the bad guys.
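That tilt in the odds is easy to quantify. If each individual attempt has only a small chance of success, the probability that at least one of many attempts lands grows quickly with volume. The per-attempt probability and attack counts below are invented for illustration and assume independent attempts; they are not drawn from any study cited in this chapter.

```python
def prob_at_least_one_success(p_single: float, attempts: int) -> float:
    """Chance that at least one of `attempts` independent tries succeeds,
    given that each try succeeds with probability p_single."""
    return 1 - (1 - p_single) ** attempts

# A crude campaign with a 1 percent chance per attempt, repeated at scale:
for n in (10, 100, 500):
    print(n, round(prob_at_least_one_success(0.01, n), 2))
# 10 -> 0.1, 100 -> 0.63, 500 -> 0.99
```

The defender, by contrast, has to win every one of those rounds—which is why volume alone, even without quality, works in the attacker’s favor.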

Understanding the plight of white hat heroes, who tirelessly and faithfully defend the borders of our organizations day and night, is one step forward in correcting the imbalances they unfairly face while doing so. Subjugating cybersecurity beneath parochial IT principles only handicaps their efforts further. In many ways, we submit that cybersecurity is the most anti-IT of all the technology functions. It follows a radically different playbook given radically different circumstances. In the most dangerous of scenarios, flawed incentive structures (whether tangible or intangible) lead to devastating consequences, as white hats attempt to prove their value and thwart their enemy through lots of activity. Such behavior often leads to a disintegrated security environment that creates more work for its architects, in the worst of cases leaving scarce organizational resources to die on the vine as “shelfware.”

Coming back to where we started in this chapter, it is very difficult to conceive how cybersecurity professionals enjoy the least stressful of 200 positions, given the set of circumstances they face each day. LinkedIn blogger Jerod Brennen finished his post with the words of the successful InfoSec professional’s mantra: “Do you remember that awful, horrible, expensive incident that NEVER happened? You're welcome.” 29

To the white hats reading this, know that many of us greatly appreciate what you do. You often succeed in spite of scales that are tipped in your enemy’s favor. Yet, balancing the scales goes beyond convincing your organizations that IT tenets are, more often than not, irrelevant in your daily work. It will also require you to take a second look at preconceived notions that have been indoctrinated in cybersecurity since its inception. Just as traditional IT provides a framework that is unsustainable, early and lasting cybersecurity principles also jeopardize the effectiveness of white hats who are already too limited in number and too overwhelmed by the battle, a topic we reserve for the next chapter. Leaving our heroes to play second fiddle to increasingly irrelevant paradigms only serves the enemy in The Second Economy.

Notes

  1. William D. Marbach, Madelyn Resener, John Carey, Richard Sandza, et al., “Beware: Hackers at Play,” Newsweek, September 5, 1983.

  2. Kerri Anne Renzulli, “These Are the 10 Most Stressful Jobs You Can Have,” Time Money, January 7, 2016, http://time.com/money/4167989/most-stressful-jobs/ , accessed June 29, 2016.

  3. Ibid.

  4. Kieren McCarthy, “The least stressful job in the US? Information security analyst, duh,” The Register, June 2, 2016, www.theregister.co.uk/2016/06/02/least_stressful_job_is_infosec_analyst/ , accessed June 29, 2016.

  5. Jerod Brennen, “The Curse of the Information Security Professional,” LinkedIn, January 12, 2016, www.linkedin.com/pulse/curse-information-security-professional-jerod-brennen-cissp , accessed June 29, 2016 (emphasis added).

  6. Renzulli, note 2 supra.

  7. Craig Timberg, “A Flaw in the Design,” The Washington Post, May 30, 2015a, www.washingtonpost.com/sf/business/2015/05/30/net-of-insecurity-part-1/ , accessed June 29, 2016.

  8. Ibid.

  9. Ibid.

  10. www.ocf.berkeley.edu/~mbarrien/jokes/hackanth.txt , accessed June 30, 2016.

  11. Marbach et al., note 1 supra.

  12. Ibid.

  13. Ibid.

  14. Timothy B. Lee, “How a grad student trying to build the first botnet brought the Internet to its knees,” The Washington Post, November 1, 2013, www.washingtonpost.com/news/the-switch/wp/2013/11/01/how-a-grad-student-trying-to-build-the-first-botnet-brought-the-internet-to-its-knees/ , accessed July 1, 2016.

  15. Ibid.

  16. Craig Timberg, “A Disaster Foretold – and Ignored,” The Washington Post, June 22, 2015b, www.washingtonpost.com/sf/business/2015/06/22/net-of-insecurity-part-3/ , accessed July 1, 2016.

  17. Marbach et al., note 1 supra.

  18. Nicholas G. Carr, “IT Doesn’t Matter,” Harvard Business Review, May 2003, https://hbr.org/2003/05/it-doesnt-matter , accessed July 1, 2016.

  19. Brandt Allen, “Make Information Services Pay Its Way,” Harvard Business Review, January 1987, https://hbr.org/1987/01/make-information-services-pay-its-way , accessed July 1, 2016.

  20. Carr, note 18 supra.

  21. “Gartner Says Every Budget is Becoming an IT Budget,” Gartner, October 22, 2012, www.gartner.com/newsroom/id/2208015 , accessed July 1, 2016.

  22. Nick Earle, “Do You Know the Way to Ballylickey? Shadow IT and the CIO Dilemma,” Cisco Blogs, August 6, 2015, http://blogs.cisco.com/cloud/shadow-it-and-the-cio-dilemma , accessed August 12, 2016.

  23. Sarah K. White, “IT leaders pick productivity over security,” CIO, May 2, 2016, www.cio.com/article/3063738/security/it-leaders-pick-productivity-over-security.html , accessed July 1, 2016.

  24. Ibid.

  25. Ibid.

  26. Ibid.

  27. Alfie Kohn, “Why Incentive Plans Cannot Work,” Harvard Business Review, September-October 1993 Issue, https://hbr.org/1993/09/why-incentive-plans-cannot-work , accessed July 1, 2016.

  28. Maria Korolov, “Twenty-eight percent of security spending wasted on shelfware,” CSO, January 27, 2015, www.csoonline.com/article/2876101/metrics-budgets/28-percent-of-security-spending-wasted-on-shelfware.html , accessed July 1, 2016.

  29. Brennen, note 5 supra.

  30. Timberg, note 7 supra.

  31. David D. Clark, “The Design Philosophy of the DARPA Internet Protocols,” Massachusetts Institute of Technology, Laboratory for Computer Science, Cambridge, MA 02139, 1988, paper, http://ccr.sigcomm.org/archive/1995/jan95/ccr-9501-clark.pdf , accessed on July 1, 2016.

  32. Timberg, note 7 supra.

  33. Ibid.
