© 2016 by Intel Corp.

Steve Grobman and Allison Cerra, The Second Economy, 10.1007/978-1-4842-2229-4_1

1. The Second Side of the Story

Steve Grobman1 and Allison Cerra2

(1) Santa Clara, California, USA

(2) Plano, Texas, USA

If privacy is outlawed, only outlaws will have privacy.

—Philip Zimmermann, Creator of Pretty Good Privacy 1

History has a way of repeating itself, even in the unlikeliest of cases. In 1917, a seemingly nondescript proceeding in the state of Oklahoma would find its way into the annals of legal precedent. In Wilcox v. State, the defendant appealed his criminal conviction for assault with a dangerous weapon, specifically a claw hammer. A claw hammer was (and still is) an ordinary household tool, one used to drive and remove nails in construction projects.

In his appeal, Wilcox referred to two earlier cases. In one, Bourbonnais v. State, the defendant had his two-year sentence reduced after the appellate court found it excessive given the evidence, or lack thereof. In that case, Bourbonnais appealed his conviction for assaulting a man with a piece of iron. In reviewing witness testimony, the appellate court found no proof from which one could reasonably infer the object to be a deadly weapon. 2 In the other, Moody v. State, two defendants successfully appealed their convictions after being found guilty of using a wooden plank to strike someone. The appellate court found that the wooden plank could not, by itself, be considered a dangerous weapon without further context:

  • The charging part of the indictment was…too indefinite in that it did not describe the plank and designate in what manner it was used; it being their contention that a plank is not per se a deadly weapon; that, not being per se a deadly weapon, the county attorney was required to plead facts sufficient to show the character of the plank and the manner in which it was used, and that the facts so pleaded must show that the instrument used was of the character set out in the statute and used in such manner as to be reasonably calculated to produce serious bodily injury. 3

With both legal precedents as support, Wilcox successfully had his conviction reversed when the court ruled, “a claw hammer is not per se a dangerous weapon, and especially it is not a dangerous weapon when considered without reference to its use, and a weapon cannot be said as a matter of law to be dangerous without reference to the manner of its use.” 4 With the ruling, the claw hammer entered legal precedent as another tool that, without proper context of the user’s intention, could not be deemed dangerous.

Claw hammers, iron pipes, and wooden planks may require context to classify them as dangerous weapons or otherwise innocuous tools; automatic weapons used in mass shootings, on the other hand, require no such understanding. Nearly a century after the Wilcox case, a husband and wife team opened fire at their place of employment, the Inland Regional Center in San Bernardino, California, killing 14 and wounding 22 in what was, at the time, the deadliest terrorist attack on US soil since 9/11. 5 As is common with such tragedies, the event ignited vigorous debate over the impact of guns on society. Gun advocates point to the history of claw hammers, iron pipes, and wooden planks as legal precedent, suggesting that guns per se do not kill—people do. Opponents claim stricter legal constraints on gun ownership would prevent mass shootings and save countless lives.

While the gun debate is hardly new, the San Bernardino case would also rekindle a more recent dialogue fundamental to technology security—one where context means everything. As is fairly common, the Inland Regional Center had issued a work-related iPhone to one of the perpetrators. The iPhone in question was encrypted, requiring a password to unlock the device. The employer and rightful owner of the device granted permission to the FBI to investigate its contents; however, the only one in possession of the phone’s password was the now deceased assailant. The FBI took matters to Apple, asking the technology company to create a new version of its operating system and security firmware, one that disabled security features, to enable the FBI to use an automated passcode guessing attack to decrypt the device contents. When Apple refused, arguing that such a “backdoor” would ultimately endanger the security of tens of millions of American iPhone customers by also affording hackers a potential doorway through which to enter its installed base of devices, the FBI responded with an unprecedented lawsuit.

The Cryptowars

To fully understand the implications of the Apple suit requires a brief history lesson—one much more recent than the cases of Wilcox, Bourbonnais, and Moody. This time, we only need to travel as far back as 1993, when the Clinton administration introduced the “Clipper Chip.” With the budding promise of Internet and wireless technologies, the US Government, long the bastion of using and cracking secret codes to transmit or intercept sensitive information, found itself outmatched by the torrid pace of technological change. New technologies introduced the need for stronger encryption to secure advanced communications, including e-mail and wireless transmissions. These advancements would benefit from the work of prodigious mathematicians of the 1970s—be it a seminal 1976 paper from researchers Whitfield Diffie and Martin Hellman that showed how ordinary businesses and individuals could securely communicate over modern communications networks using cryptography, or the 1977 system developed by Massachusetts Institute of Technology mathematicians Ronald Rivest, Adi Shamir, and Leonard Adleman, which put Diffie and Hellman’s encryption theory into practice. 6 With encryption securing more communications, the US Government began losing its grip as the singular authority in intercepting and deciphering these secured communications.
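
To make Diffie and Hellman’s breakthrough concrete, the sketch below is a toy key exchange in Python with deliberately small, illustrative parameters (not values any real system would use): two parties each keep a private number, exchange only derived public values, and arrive at the same shared secret without handing a copy to anyone else.

```python
# A toy Diffie-Hellman exchange; real systems use 2,048-bit (or larger)
# moduli or elliptic curves, not these illustrative values.
import secrets

p = 4294967291   # a small public prime (2**32 - 5), for illustration only
g = 5            # public base

# Each party keeps a private exponent and publishes only g**x mod p.
alice_private = secrets.randbelow(p - 2) + 1
bob_private = secrets.randbelow(p - 2) + 1
alice_public = pow(g, alice_private, p)
bob_public = pow(g, bob_private, p)

# Both sides derive the same shared secret without ever transmitting it
# and without depositing a copy with any third party.
alice_secret = pow(bob_public, alice_private, p)
bob_secret = pow(alice_public, bob_private, p)
assert alice_secret == bob_secret
```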

Enter the Clipper Chip—the American government’s answer to balancing the needs of national security with individual privacy. It called for a state-of-the-art microchip (the Clipper Chip) to be installed into consumer hardware: telephones, fax machines, and modems. The Chip would serve two purposes: it would provide the public with strong cryptographic capabilities to secure conversations, but it would also afford the US Government access to these communications when legally warranted. Using a “key escrow” approach, a copy of each chip’s unique encryption key would be stored with the federal government (split between two separate agencies, requiring dual cooperation to break the encryption). With a key to unlock any Clipper Chip, the federal government was offered a pathway into otherwise secured communications. And, while the Clipper Chip was not a mandatory standard, the government planned to seed the market with a massive volume of devices, intending to spur its adoption. 7
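
A minimal sketch of the escrow idea follows in Python; it is not the actual Clipper/Skipjack protocol, merely an illustration of splitting a device key into two shares so that neither escrow agency alone learns anything about the key and both must cooperate to reconstruct it.

```python
import secrets

def split_key(device_key: bytes) -> tuple[bytes, bytes]:
    """Split a device key into two escrow shares (XOR secret sharing).
    Either share alone is indistinguishable from random bytes."""
    share_a = secrets.token_bytes(len(device_key))                # held by escrow agency A
    share_b = bytes(k ^ a for k, a in zip(device_key, share_a))   # held by escrow agency B
    return share_a, share_b

def recover_key(share_a: bytes, share_b: bytes) -> bytes:
    """Both agencies must cooperate: XOR-ing the two shares restores the key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

device_key = secrets.token_bytes(10)   # Skipjack used 80-bit keys
a, b = split_key(device_key)
assert recover_key(a, b) == device_key
```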

To appease public concern over a potential “Big Brother” government eavesdropping on sensitive conversations, a key selling point of the Clipper Chip was its far superior encryption capability compared to other alternatives at the time. Based on an encryption algorithm used by the National Security Agency (NSA) called Skipjack, the Clipper Chip offered what the US Government claimed was the best of both worlds: stronger encryption for the user, with a doorway for government officials to tap Skipjack-encrypted communications. Parading out a panel of academic and industry experts espousing the advantages of the Clipper Chip and its associated Skipjack cryptographic standard, the US Government played offense in attempting to assuage public dissension against the Chip. 8

Alas, public opposition grew. As echoed by cryptography’s fathers, including Diffie, Hellman, and Rivest, the concerns over the government’s proposal centered on three key trepidations. First, the very notion of offering the government potentially unrestricted access to its citizens’ most intimate conversations gave pause to even the most trusting. Second, the reliance on a third-party keyholder negated the earlier work of Diffie and his ilk, which required no third party to secure or decipher messages. Combining the first and second concerns leads to the third: the federal government would be the only keyholder capable of breaking the code. With a growing list of very vocal public dissenters, it may come as no surprise that 80 percent of Americans opposed the Clipper Chip in a 1994 CNN/Time poll, with two-thirds saying that protecting privacy was more important than preserving the ability of police to conduct wiretaps. 9

The US Government remained undeterred in the face of mounting opposition from the unlikeliest of bedfellows, including industry experts, conservative commentators, and one of the first hacktivist groups, known as the Cypherpunks, a self-proclaimed “confederation of computer hackers, hardware engineers and high-tech rabble-rousers.” 10 The final death knell for the Chip came in 1994, when Matt Blaze, a computer scientist at AT&T Bell Laboratories, published a paper exposing a technical weakness in the technology that would allow one to circumvent the law-enforcement surveillance capabilities altogether. 11 With the government’s exclusive access quashed, the Clipper Chip was no more, although the argument juxtaposing personal privacy against national security had only just begun—and would reach a fever pitch in the 2016 Apple case.

In a standoff decades in the making, Apple challenged the US Government’s right to force the company to introduce new software that would weaken the security defenses of its own products. Tim Cook, Apple’s chief executive officer (CEO), pled the company’s case in the court of public opinion with an open letter to customers, in which the arguments of Diffie, Hellman, and their cohort were resurrected:

  • For years, cryptologists and national security experts have been warning against weakening encryption. Doing so would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data. Criminals and bad actors will still encrypt, using tools that are readily available to them. 12

In this brief excerpt, the issue of context with regard to encryption becomes abundantly clear. Like the claw hammer before it, encryption can be used for either good or bad. And weakening the defenses of the innocent only offers adversaries an easier target and a backdoor through which to attack. Interestingly, the Apple case finished almost as quickly as it started, with the US Government unceremoniously and unexpectedly dropping its case against Apple after claiming it had successfully worked with a third party to unlock the phone in question. The government is under no obligation to divulge how the encryption was broken and, as of this writing, has made no offer to share that information with Apple, which could help the company shore up security weaknesses in future product releases. Despite the abrupt closure to the case and what many would consider a victory for Apple, the company quickly pointed out that a more substantive public debate was in order to fully unpack issues whose surface had only been scratched, asserting, “This case raised issues which deserve a national conversation about our civil liberties, and our collective security and privacy. Apple remains committed to participating in that discussion.” 13

Follow the Silk Road

If the cryptowars of the 1990s found a second life in 2016, device passwords would be only one battle. Speaking a cryptographic language of their own, cryptocurrencies present a similar duality where context matters. A cryptocurrency is digital money not issued, backed, or tied to any particular government or nation. 14 Like encryption, cryptocurrencies rely on complex mathematical algorithms to secure and track all transactions and to regulate the supply of virtual money as a guard against inflation. To use cryptocurrencies, one need only access the Internet.
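
As a rough illustration of how such algorithms can track transactions without a central authority (a deliberately simplified hash chain, not Bitcoin’s actual protocol), each block in the sketch below embeds the hash of the block before it, so tampering with any past entry breaks every link that follows.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list[dict], transactions: list[str]) -> None:
    """Link a new block to the chain by embedding the prior block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list[dict]) -> bool:
    """Recompute every link; any edit to history invalidates the chain."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list[dict] = []
append_block(ledger, ["alice pays bob 1"])
append_block(ledger, ["bob pays carol 0.5"])
print(verify(ledger))                                  # True
ledger[0]["transactions"][0] = "alice pays bob 100"    # tamper with history
print(verify(ledger))                                  # False
```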

While many may struggle with exactly how cryptocurrencies like Bitcoin work, those in the know find themselves on opposite ends of a polarizing debate. Advocates point to the freedom such digital money affords those who otherwise lack access to an official bank account. This argument holds sway in developing regions such as Sub-Saharan Africa, where some countries have more adults using a mobile money account than one at a financial institution. 15 Cryptocurrency proponents also cite benefits associated with lower inflation risk (since the supply of currency is limited by mathematical rules), lower transaction fees versus other alternatives (like credit cards), and security of personal data (given the user’s credit card is not involved in transactions). 16

Opponents call into question the financial integrity of cryptocurrencies, pointing to the 2014 bankruptcy of Mt. Gox, then the world’s largest bitcoin exchange, as one cautionary tale. In what would ultimately be chalked up to poor management, Mt. Gox suffered multiple hacker attacks, the most fatal of which dealt a $460 million blow from adversaries who had spent years fleecing the company. 17 Unlike traditional financial markets, there are no regulatory agencies ensuring the protection of investors in cryptocurrency exchanges. And, if illiquidity risks are insufficient to make one think twice, market volatility may do the trick. In one year alone, the value of bitcoins plunged more than 66 percent. 18

But, perhaps the most interesting debate surrounding cryptocurrencies deals less with integrity and more with intention. Cryptocurrencies are not necessarily completely anonymous, though they do afford more privacy protection than traditional bank accounts. As such, they have become the monetary unit of exchange welcomed by bad actors online. In 2013, the FBI busted an illegal online drug exchange known as Silk Road, seizing nearly $4 million in bitcoins. 19 In less than three years from its inception, Silk Road had turned over $1.2 billion in revenue, 20 using cryptocurrency as its medium of exchange and sophisticated anonymity software, Tor, to conceal the identity of its users.

Tor, originally standing for The Onion Router, was developed for the US Navy in an effort to protect government communications. 21 Tor cloaks users from Internet traffic analysis and works by encrypting and randomly bouncing communications through relay networks across the globe, 22 thereby obfuscating the user’s Internet protocol (IP) address and location. With Tor muddying traffic analysis and bitcoins providing monetary freedom, sites like Silk Road operate in the shadows of the “dark web.”
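
A toy sketch of that layering follows in Python; it assumes the third-party cryptography package and pre-shared relay keys (hypothetical names, where Tor itself negotiates keys per hop and carries routing metadata): the sender wraps the message once per relay, and each relay peels exactly one layer, so no single relay sees both the sender and the final plaintext.

```python
# Toy onion layering, not Tor's actual protocol: pre-shared keys stand in for
# Tor's per-hop key negotiation, and routing metadata is omitted.
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

relays = ["relay-1", "relay-2", "relay-3"]                     # hypothetical relay names
relay_keys = {name: Fernet(Fernet.generate_key()) for name in relays}

def build_onion(message: bytes, path: list[str]) -> bytes:
    """Wrap the message innermost-first so the first relay peels the outermost layer."""
    onion = message
    for name in reversed(path):
        onion = relay_keys[name].encrypt(onion)
    return onion

def traverse(onion: bytes, path: list[str]) -> bytes:
    """Each relay, in order, removes exactly one layer of encryption."""
    for name in path:
        onion = relay_keys[name].decrypt(onion)
    return onion

packet = build_onion(b"meet at the usual place", relays)
assert traverse(packet, relays) == b"meet at the usual place"
```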

Yet, while these sophisticated encryption technologies can be and are used for nefarious purposes, they also provide anonymity to freedom fighters and political activists the world over. Again, like the claw hammer or lead pipe, the tool on its own is not dangerous; the intentions of the good or bad actor using the device give context to its benevolent or malevolent outcomes.

And, here is where things get even more interesting for companies defending themselves against the next major hack: threat actors know how to disguise bad intentions as perfectly normal behavior. As proverbial wolves in sheep’s clothing, hackers continue to iterate and improve their tactics against their prey. Again, a brief walk down memory lane paints a picture of an ever-evolving threat landscape, with adversaries launching increasingly sophisticated attacks.

“You Must Be Punished”

In a small Pakistani computer shop in the late 1980s, American tourists were presented with an offer too good to refuse: top-rated commercial software starting at $1.50. 23 Buyers were in for more than a bargain when they inserted the program into their PC’s disk drive. Deposited on the disk was what is widely referred to as the first MS-DOS virus—“Brain.” Brain was the creation of the Pakistani shop’s owners, brothers Basit and Amjad Farooq Alvi. Back in the day, computer hard drives were precious real estate and floppy disks were used to run computer programs. Brain infected the boot sector (the part of the disk necessary for running programs) and would contaminate computers, such that other disks were corrupted upon insertion. 24 The first of what would become known as “stealth viruses,” with redirect capabilities to evade search and recovery efforts, Brain’s bark was worse than its bite. According to one early victim of Brain, “It was only a floppy infector, and it was easy to copy anything off the floppy that you cared about, and just reformat the floppy or throw it away.” 25 Still, those with tenacious spirits in finding Brain’s hiding place within their PC were greeted with an ominous calling card from the soon-to-be-famous hackers, “WELCOME TO THE DUNGEON.” 26

The Alvi brothers claimed to have a noble intention behind the virus. Fed up with individuals who purchased or distributed pirated software, the brothers wanted to send a message, albeit in a strange way, to violators. While Brain essentially only renamed the volume label of the disk, it provided a means for the brothers to keep track of all copies made—after Basit claimed to have been a victim of piracy himself for custom software he developed. 27 In fact, the Alvis offered their personal contact information as part of their “dungeon” calling card to assist hapless victims in disinfecting their PCs (a convenient complement to the Alvis’ line of business).

Interestingly, the brothers were selective about who would be exposed to their self-professed “friendly virus,” 28 having no qualms about selling bootlegged—though clean—software to Pakistani residents. Only foreigners, particularly Americans, were offered the Brain-infected bootlegged versions. The apparent hypocrisy presented no conflict for the Alvis, since Pakistani law did not prohibit piracy. When asked why they infected American buyers, Basit, the younger brother, simply replied, “Because you are pirating. You must be punished.” 29

The Alvis’ Brain contagion spread so quickly that it surprised even them when they were inundated with phone calls from infected users demanding remediation. Their distinction of creating the first PC virus prompted the media to educate the public on a serious concern that could have been ripped from the pages of a sci-fi novel: What would happen if computers around the world were suddenly infected with much more insidious viruses? A 1988 TIME magazine cover story on the topic reported that an estimated 250,000 computers had been afflicted that year alone with similar infections caused by more than 25 identified viruses, with new ones emerging nearly every week. 30

What started as a couple dozen strains of often nuisance-bearing viruses had reached pandemic proportions just a quarter of a century later. In the third quarter of 2015 alone, McAfee Labs reported, more than 3.5 million infected files were exposed to its customers’ networks and an additional 7.4 million potentially unwanted programs attempted installation or launch—each figure representing what was detected per hour via the company’s global threat intelligence capabilities. 31 Total malicious software (malware) in the McAfee Labs “zoo” approached nearly half a billion samples, with nearly 45 million new strains detected in that quarter alone. 32

While many prognosticators in the late 1980s would have anticipated a major global collapse under such punishing volumes of malware, the world, complete with its even greater dependence on computing devices and networks, remains very much intact. Several reasons explain the unforeseen outcome.

First, anti-malware countermeasures were developed by software companies to detect and inoculate threats. Like criminals, early versions of malware carried their own modus operandi, easily observed patterns of behavior that, once identified and catalogued, could be used to spot and prevent suspicious interlopers. Anti-malware software companies provided their own remedy to a growing contagion by essentially recording, storing, and sharing a part of malicious code as a fingerprint of sorts and embedding this intelligence in their wares. Companies with this defensive software installed were inoculated from the known threat once the anti-malware software detected its presence (in essence, matched its fingerprint).
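
A minimal sketch of that fingerprint matching appears below in Python; the byte patterns are stand-ins (one borrows Brain’s famous greeting purely for illustration), whereas a real engine catalogues millions of far more robust signatures and scans far more efficiently.

```python
from pathlib import Path

# Hypothetical signature database: short byte patterns recorded from known
# malware and shared as fingerprints, in the spirit of early anti-malware tools.
SIGNATURES = {
    b"WELCOME TO THE DUNGEON": "Brain.A (illustrative pattern)",
    b"EXAMPLE-BAD-PATTERN": "Example.Trojan.B (illustrative pattern)",
}

def scan(path: Path) -> list[str]:
    """Return the names of every signature whose byte pattern appears in the file."""
    data = path.read_bytes()
    return [name for pattern, name in SIGNATURES.items() if pattern in data]

sample = Path("sample.bin")
sample.write_bytes(b"...boot sector bytes...WELCOME TO THE DUNGEON...")
print(scan(sample) or "clean")   # ['Brain.A (illustrative pattern)']
```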

Second, technical advances beyond anti-malware software have played a critical role. For example, computing architectures have evolved through the years to better separate the operating system from applications to isolate the effects of contamination.

But, perhaps most important, the motives of threat actors themselves have changed. If a cybercriminal’s goal is to abscond with sensitive information that can be exploited in some way, it benefits the adversary to remain undetected as long as possible to attain that outcome. Malware that corrupts computers is certainly destructive to a company; however, it is also readily identifiable once detected. Cybercriminals have become increasingly sophisticated in their assaults, more and more of which are designed to be inconspicuous in nature to facilitate greater damage to their victim and higher financial reward for themselves.

Such conditions have prevented a torrent of malware from upending the global digital economy. That said, remedying the volume of threats with such automated precision is fairly easy, albeit necessary, when the adversary’s modus operandi is so clearly identified and understood. And, while the flood of virulent samples started with one Brain strain created by two brothers bent on “punishing” software pirates, the Alvis themselves would never be penalized for letting the malware genie out of the bottle. The same could not be said for one Donald Gene Burleson.

Enemy Within the Gates

On a September morning in 1985, company officials at USPA & IRA, a licensed life insurance agency and registered securities dealership with headquarters in Fort Worth, Texas, arrived at work to find an unusual occurrence. Approximately 75 percent of the company’s commission records had been deleted. Without these records, the company would be unable to pay commissions to its roughly 450 independent agents. 33 Company investigators discovered that the files had been deleted in the overnight hours, with someone using the system to run a series of programs that resulted in the purge. Peculiarly, the malicious programs in question were developed some three weeks earlier. 34

Three days before the incident, Donald Gene Burleson, a senior systems analyst who also served as the company’s computer security officer, had been terminated after two years of employment. A trail of computer breadcrumbs led company officials to Burleson, who stood accused of developing the pernicious programs with the intention of detonating them upon his voluntary or involuntary termination. 35

In what would be a landmark case where criminal charges for computer hacking were brought against a defendant, Burleson was found guilty of burglary, harmful access to a computer with loss and damages over $2,500, and criminal mischief over $750. He was sentenced to seven years of supervised probation and ordered to pay his former company $11,800 in restitution. 36

It may be surprising that the first prosecuted cybercrime case involved a malicious insider. Sadly, Burleson would not be the last employee to intentionally inflict harm upon his employer. Those who intend to damage their employer, for whatever reason, present a real concern for public and private organizations alike. According to a Ponemon Institute study, 35 percent of companies in 2015 suffered a cyberattack from a malicious insider. 37 While troublemaking employees ranked last among the cyberattack vectors queried (which included malware, web-based attacks, and others), they took top honors as the most costly predators, imposing nearly $145,000 worth of damage with each attack. 38 As a testament to the havoc they can wreak, these insidious insiders also spawned the most complex attacks to remediate, costing their employers an average of nearly two months to resolve. 39

Threats perpetrated by one or more of an organization’s most trusted employees rank among the most difficult to detect and correct. After all, the metaphorical claw hammer wielded by an employee may be used to build up a company or bring it to its knees. Discerning the difference requires context. Unlike malware, which can often be identified by its characteristics, human behavior is far more difficult to model and understand. Perhaps that explains why even external adversaries often rely on a company’s employees to do their bidding.

Gone Phishing

It was the breach that could appropriately take the moniker of The Nightmare Before Christmas, one that would abruptly end the promising company careers of a CEO and a CIO (chief information officer). Between US Thanksgiving and Christmas Day of 2013—the busiest shopping season of the year—Target, an international retailer with more than 340,000 employees worldwide, suffered a cyberattack that would compromise the payment card accounts of some 40 million customers 40 and expose personally identifiable information for up to 70 million more. 41 During the quarter, the retailer saw its profits plummet by nearly half compared to the fourth quarter of the previous year, with full-year profits declining by more than a third. 42 The breach would cost the company nearly $150 million in investigative costs, 43 $116 million in settlement costs with banks and consumers, 44 and invaluable customer trust. Casualties of the attack included CEO and Chairman Gregg Steinhafel and CIO Beth Jacob—both of whom resigned their posts in the months following the hack.

Though the public debacle of the attack made for interesting headline fodder, what many may not realize is how the hackers made their way into Target’s most sensitive systems. While early reports suggested an inside job, the insider was not cut from the same malicious cloth as Burleson. In fact, the presumed insider was not even an employee of the company but a small supplier that provided heating, air conditioning, and refrigeration services to supermarkets. In this capacity, Fazio Mechanical Services of Sharpsburg, Pennsylvania, had remote access to Target’s network for electronic billing, contract submission, and project management purposes. 45

A postmortem analysis of the breach revealed that at least two months before hackers waged their attack on Target, they set their sights on Fazio Mechanical. Using spear phishing techniques, the adversaries successfully baited unwitting Fazio Mechanical employees with infected e-mails, likely carrying malicious PDF or Microsoft Office attachments that launched malware upon opening. Once the malware was deposited, the perpetrators stole Fazio Mechanical’s credentials to gain entry to Target’s network. 46

Interestingly, while industry insiders and government officials debated the sophistication of the malware itself, 47 there was little argument that the adversaries had spent weeks orchestrating their attack and duping unwitting innocent employees to give them entry. To identify Fazio Mechanical as their intended mark—or sucker—in the con required the perpetrators to first do some online research on Target’s suppliers. Luckily for the thieves, such information was readily available from Target itself via its supplier portal, 48 giving the threat actors enough to go on in setting the trap. From there, widely available social networking tools likely provided the adversaries with employee contact information for some working at Fazio Mechanical—requiring only one fish to take the bait.

Social engineering techniques, like phishing, are on the rise, and for good reason: cybercriminals are offered multiple points of potential failure in the fabric of human beings collectively connected to a particular company target. Whether embedding their malware in an e-mail attachment or providing a bogus website address for the same, adversaries are aware that humans are often the weakest link in a company’s defenses. In their semiannual survey, the Anti-Phishing Working Group (APWG) found at least 123,972 unique phishing attacks worldwide in the second half of 2014—the highest on record since the second half of 2009 (in this case, a phishing attack is defined as a phishing site that targets a specific brand or entity). 49 The study revealed that adversaries are relentless in getting their catch: the ten companies most targeted by phishers were attacked constantly, as many as 1,000 times in a month, with this unenviable “most wanted” list sustaining more than three-quarters of all attacks worldwide. 50

Phishing perpetrators are nothing if not tenacious in their efforts, and they have good reason to be. As the old adage goes, “There’s a sucker born every minute,” and employees are unintentional weapons for the other side in a cyberwar most don’t even realize is being waged. Verizon’s 2015 Data Breach Investigations Report shows that more than one in five recipients open phishing messages, with more than one in ten clicking on attachments. Even scarier, half of these unsuspecting victims open the e-mail and click on the phishing link within the first hour. 51 If you’re still not convinced that seconds really do matter in The Second Economy, consider that the median time to first click came in at just 82 seconds across all phishing campaigns. 52 Unfortunately, the same cannot be said for a company’s speedy response, with afflicted organizations clocking in at more than ten hours to remove a phishing threat. 53 And that figure increased by more than one hour in a single year. 54 With an average cost per phishing or social engineering incident of $86,000, 55 organizations lose more than $2 per second once one of their own is hooked.
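
Taking those reported figures at face value, the back-of-the-envelope arithmetic behind the per-second rate (assuming the roughly ten-hour removal window as the exposure period) works out as follows:

\[
\frac{\$86{,}000 \text{ per incident}}{10 \text{ hours} \times 3{,}600 \text{ seconds per hour}} \approx \$2.39 \text{ per second}
\]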

Phishing gives predators quiet access to their target’s most intimate areas, allowing them to stealthily pilfer valuable data, such as consumer credit card information, for eventual sale to other thugs. But why bother extracting data, only to negotiate complex value chains to reap financial reward, when one can simply hold it hostage and ransom it back to the victim himself? Indeed, holding data for ransom offers predators an even faster return on their investment.

A State of Emergency

The year was 1989, and the world was worried about a different viral epidemic than one that could infect a computer—people were struggling to understand the immune disorder known as AIDS (Acquired Immune Deficiency Syndrome). Back before the Internet connected one to just about any known answer in the universe, self-education was less convenient. Accordingly, when a seemingly innocuous disk purporting to contain AIDS education software arrived at 20,000 locations across 90 countries, 56 self-edifying enthusiasts gladly booted up.

The disk made good on its promise. Users were greeted with an interactive survey that attempted to measure their risk of AIDS based on the answers provided. But, like the Brain-infected popular PC programs before it, this disk delivered more than the user wanted. Containing what would ultimately be known as the “AIDS” Trojan, the disk infected the user’s PC with code that would eventually encrypt the machine’s files after a certain number of reboots. 57 At that point, the victim was presented with the ransom: send $189 to cover the “licensing fee” of the software to a PO Box in Panama. Only when payment was received would the user be sent instructions on how to decrypt his or her data. 58

Long before the days of cyber laws, targeted victims of the AIDS Trojan attack—specifically hundreds of medical research institutions—panicked. Some preemptively erased their hard drives in an attempt to protect the sensitive information contained therein. One AIDS organization in Italy reportedly lost ten years’ work. 59

The investigation that ensued eventually led to an evolutionary biologist with a doctorate from Harvard, Joseph L. Popp. Popp was very deliberate in targeting his victims, many of whom had attended the World Health Organization’s (WHO) international AIDS conference in Stockholm the year before. 60 An avid AIDS researcher himself, Popp left pundits puzzled by his motives in pursuing colleagues with similar aspirations of eradicating the disease. Popp’s attorney would later argue that the doctor intended to donate the ransom collected to fund alternative AIDS education research. Others suggested Popp may simply have been seeking revenge, after recently being rejected for employment by the WHO. 61

Whatever the reason, Popp was single-handedly responsible for setting back the global AIDS research agenda all while letting loose the first known case of ransomware. For both distinctions, one would expect some form of consequence for the doctor. In fact, he received no penalty or jail time. After the judge in the case found Popp unfit to stand trial, he was let go without so much as a slap on the wrist. 62

With history once again repeating itself, a health organization would find itself the very public target of ransomware more than 25 years later. On the evening of February 5, 2016, staff at the Hollywood Presbyterian Medical Center (HPMC) in California noticed issues in accessing the hospital’s computer network. 63 The resulting investigation revealed that HPMC was the victim of a ransomware attack, one where the cybercriminals demanded payment of about 9,000 bitcoins or just over $3.6 million. 64 After suffering nearly two weeks of disrupted operations, relegated to relics like fax machines or paper files to conduct normal business, HPMC paid $17,000 in bitcoins to have its computer systems released. When defending the decision, HPMC President and CEO Allen Stefanek stated,

  • The malware locks systems by encrypting files and demanding ransom to obtain the decryption key. The quickest and most efficient way to restore our systems and administrative functions was to pay the ransom and obtain the decryption key. In the best interest of restoring normal operations, we did this. 65

In the weeks following the public settlement, several more US hospitals found their systems hijacked, with one in Kentucky declaring an “internal state of emergency” 66 in response to the attack. The targeting of hospitals is no coincidence. Hackers see them as soft targets, with money in the bank and lives at stake. Seconds, let alone weeks, matter in patient care, and, as in the case of HPMC, the opportunity cost of disrupted operations far exceeded the $17,000 ultimately paid in ransom. The hospital’s decision to surrender payment is validated by Ponemon’s study on the cost of cybercrime, with the study’s authors finding business disruption to be the highest external cost in the case of a breach—accounting for nearly 40 percent of total external costs. 67

As is the case for kidnapping ransoms, estimates of the true cost of ransomware are hard to come by, since victims will usually not divulge how much they paid. It is such an insidious cyberattack vector that the FBI has weighed in on how victims should respond. When addressing an audience of business and technology leaders at the Cyber Security Summit 2015 event in Boston, Joseph Bonavolonta, the Bureau’s Assistant Special Agent in Charge of its CYBER and Counterintelligence Program, offered this piece of advice: “To be honest, we often advise people just to pay the ransom.” 68

A Second Thought

It seems obvious today that a claw hammer or lead pipe is only as dangerous as the user wielding it; yet, in the early 20th century, the obvious required extensive examination to establish legal precedents that sustain to this day. While it may be easy to wrap one’s head around a physical object and its intention as a harmful weapon, the virtual world in which we increasingly dwell presents additional challenges. Cryptography, cryptocurrencies, and even the Tor network itself can be used for good or bad. Adversaries know how to disguise the use of these technologies as harmless behavior, even when malevolent outcomes are the ultimate desire. Cyberattacks are becoming increasingly sophisticated, and companies are left to wonder whether even their own most trusted employees are, wittingly or unwittingly, on the side of the enemy. In The Second Economy, there’s always a second side to the story.

Notes

  1. Philip R. Zimmermann, “Why I Wrote PGP,” June 1991 (updated 1999), www.philzimmermann.com/EN/essays/WhyIWrotePGP.html , accessed April 15, 2016.

  2. Bourbonnais v. State, 1912 OK CR 294, 122 P. 1131, 7 Okla. Cr. 717 (Okla. Ct. Crim. App., Case Number: No. A-865, decided Apr. 18, 1912), http://law.justia.com/cases/oklahoma/court-of-appeals-criminal/1912/16984.html , accessed April 8, 2016.

  3. Moody v State, 11 Okla. Cr. 471, 148 P. 1055 (Okla. Ct. Crim. App., Case Number: No. A-1958, decided Jan. 30, 1915), http://law.justia.com/cases/oklahoma/court-of-appeals-criminal/1915/21778.html , accessed April 8, 2016.

  4. Wilcox v State, 1917 OK CR 137, 166 P. 74, 13 Okla. Cr. 599 (Okla. Ct. Crim. App., Case Number: No. A-3055, decided July 10, 1917), http://law.justia.com/cases/oklahoma/court-of-appeals-criminal/1917/22288.html , accessed April 8, 2016.

  5. Saeed Ahmed and Ralph Ellis, “Mass shooting at Inland Regional Center: What we know,” CNN, December 5, 2015, www.cnn.com/2015/12/03/us/what-we-know-san-bernardino-mass-shooting/index.html , accessed April 11, 2016.

  6. Danielle Kehl, Andi Wilson, and Kevin Bankston, “Doomed to Repeat History? Lessons from the Crypto Wars of the 1990s,” Open Technology Institute Cybersecurity Initiative, June 2015, https://static.newamerica.org/attachments/3407-doomed-to-repeat-history-lessons-from-the-crypto-wars-of-the-1990s/OTI_Crypto_Wars_History.abe6caa19cbc40de842e01c28a028418.pdf , accessed April 12, 2016.

  7. Kehl et al., note 6 supra.

  8. Ibid.

  9. Philip Elmer-DeWitt, “Who Should Keep the Keys?,” Time, June 24, 2001, http://content.time.com/time/magazine/article/0,9171,164002,00.html , accessed April 12, 2016.

  10. Kehl et al., note 6 supra.

  11. Matt Blaze, “Protocol Failure in the Escrowed Encryption Standard,” August 20, 1994, www.crypto.com/papers/eesproto.pdf , accessed April 12, 2016.

  12. Apple, “A Message to Our Customers,” February 16, 2016, www.apple.com/customer-letter/ , accessed April 12, 2016.

  13. Romain Dillet, “Justice Department drops lawsuit against Apple as FBI has now unlocked Farook’s iPhone,” Tech Crunch, March 28, 2016, http://techcrunch.com/2016/03/28/justice-department-drops-lawsuit-against-apple-over-iphone-unlocking-case/ , accessed April 12, 2016.

  14. Kate Cox, “Bitcoin: What The Heck Is It, And How Does It Work?,” Consumerist, March 4, 2014, https://consumerist.com/2014/03/04/bitcoin-what-the-heck-is-it-and-how-does-it-work/ , accessed April 12, 2016.

  15. “Massive Drop in Number of Unbanked, Says New Report,” The World Bank, April 15, 2015, www.worldbank.org/en/news/press-release/2015/04/15/massive-drop-in-number-of-unbanked-says-new-report , accessed April 13, 2016.

  16. “Do Cryptocurrencies Such as Bitcoin Have a Future?,” The Wall Street Journal, March 1, 2015, www.wsj.com/articles/do-cryptocurrencies-such-as-bitcoin-have-a-future-1425269375 , accessed April 12, 2016.

  17. Robert McMillan, “The Inside Story of Mt. Gox, Bitcoin’s $460 Million Disaster,” Wired, March 3, 2014, www.wired.com/2014/03/bitcoin-exchange/ , accessed April 13, 2016.

  18. Market value of a bitcoin was $1,122.58 on November 30, 2013, and $379.31 on November 30, 2014, www.coindesk.com/price/ , accessed April 13, 2016.

  19. Andy Greenberg, “End of the Silk Road: FBI Says It's Busted The Web's Biggest Anonymous Drug Black Market,” Forbes, October 2, 2013, www.forbes.com/sites/andygreenberg/2013/10/02/end-of-the-silk-road-fbi-busts-the-webs-biggest-anonymous-drug-black-market/#7d147d02347d , accessed April 13, 2016.

  20. Ibid.

  21. Techopedia, www.techopedia.com/definition/4141/the-onion-router-tor , accessed April 13, 2016.

  22. Ibid.

  23. Philip Elmer-DeWitt, “Technology: You Must Be Punished,” Time, September 26, 1988b, http://content.time.com/time/subscriber/article/0,33009,968490-1,00.html , accessed April 14, 2016.

  24. “The birth of the first personal computer virus, Brain,” news.com.au, Jan 19, 2011, www.news.com.au/technology/the-birth-of-the-first-personal-computer-virus-brain/story-e6frfro0-1225990906387 , accessed April 14, 2016.

  25. Ibid.

  26. Dewitt, note 23 supra.

  27. News.com.au, note 24 supra.

  28. Jason Kersten, “Going Viral: How Two Pakistani Brothers Created the First PC Virus,” Mental Floss, http://mentalfloss.com/article/12462/going-viral-how-two-pakistani-brothers-created-first-pc-virus , accessed April 14, 2016.

  29. Ibid.

  30. Philip Elmer-DeWitt, “Technology: Invasion of the Data Snatchers,” Time, September 26, 1988a, http://content.time.com/time/subscriber/article/0,33009,968508-3,00.html , accessed April 14, 2016.

  31. Intel Security, “McAfee Labs Threats Report,” November 2015, www.mcafee.com/us/resources/reports/rp-quarterly-threats-nov-2015.pdf , accessed April 14, 2016.

  32. Ibid.

  33. J. Thomas McEwen, Dedicated Computer Crime Units (Washington, DC: US Department of Justice, National Institute of Justice, Office of Justice Programs, 1989).

  34. Ibid.

  35. Ibid.

  36. Ibid.

  37. Ponemon Institute, “2015 Cost of Cyber Crime Study: Global,” October 2015, https://ssl.www8.hp.com/ww/en/secure/pdf/4aa5-5207enw.pdf , accessed April 14, 2016.

  38. Ibid.

  39. Ibid.

  40. Ahiza Garcia, “Target settles for $39 million over data breach,” CNN Money, December 2, 2015, http://money.cnn.com/2015/12/02/news/companies/target-data-breach-settlement/ , accessed April 14, 2016.

  41. Anthony Wing Kosner, “Actually Two Attacks In One, Target Breach Affected 70 to 110 Million Customers,” Forbes, January 17, 2014, www.forbes.com/sites/anthonykosner/2014/01/17/actually-two-attacks-in-one-target-breach-affected-70-to-110-million-customers/#70b3dde6596e , accessed April 14, 2016.

  42. Maggie McGrath, “Target Profit Falls 46% On Credit Card Breach And The Hits Could Keep On Coming,” Forbes, February 26, 2014, www.forbes.com/sites/maggiemcgrath/2014/02/26/target-profit-falls-46-on-credit-card-breach-and-says-the-hits-could-keep-on-coming/#7016cd1c5e8c , accessed April 14, 2016.

  43. Samantha Sharf, “Target Shares Tumble As Retailer Reveals Cost Of Data Breach,” Forbes, August 5, 2014, www.forbes.com/sites/samanthasharf/2014/08/05/target-shares-tumble-as-retailer-reveals-cost-of-data-breach/#6a0e0916450b , accessed April 14, 2016.

  44. Garcia, note 40 supra.

  45. Fazio Mechanical Services, Statement on Target Data Breach, http://faziomechanical.com/Target-Breach-Statement.pdf , accessed April 15, 2016.

  46. US Senate Committee on Commerce, Science and Transportation, Majority Staff Report for Chairman Rockefeller, “A ‘Kill Chain’ Analysis of the 2013 Target Data Breach,” March 26, 2014, www.commerce.senate.gov/public/_cache/files/24d3c229-4f2f-405d-b8db-a3a67f183883/23E30AA955B5C00FE57CFD709621592C.2014-0325-target-kill-chain-analysis.pdf , accessed April 15, 2016.

  47. Ibid.

  48. Brian Krebs, “Email Attack on Vendor Set Up Breach at Target,” KrebsOnSecurity, February 12, 2014, http://krebsonsecurity.com/2014/02/email-attack-on-vendor-set-up-breach-at-target/ , accessed April 15, 2016.

  49. Greg Aaron and Rod Rasmussen, “Global Phishing Survey: Trends and Domain Name Use in 2H2014,” APWG, May 27, 2015, http://internetidentity.com/wp-content/uploads/2015/05/APWG_Global_Phishing_Report_2H_2014.pdf , accessed April 15, 2016.

  50. Ibid.

  51. Verizon, “2015 Data Breach Investigations Report,” accessed April 15, 2016.

  52. Ibid.

  53. Aaron and Rasmussen, note 49 supra.

  54. Ibid.

  55. Ponemon Institute, note 37 supra.

  56. Alina Simone, “Ransomware’s stranger-than-fiction origin story,” Unhackable, March 26, 2015, https://medium.com/un-hackable/the-bizarre-pre-internet-history-of-ransomware-bb480a652b4b#.bzn2h2nb9 , accessed April 15, 2016.

  57. Ibid.

  58. Ibid.

  59. Ibid.

  60. Ibid.

  61. Ibid.

  62. Ibid.

  63. Hollywood Presbyterian Medical Center letter signed by President & CEO Allen Stefanek, dated February 17, 2016, http://hollywoodpresbyterian.com/default/assets/File/20160217%20Memo%20from%20the%20CEO%20v2.pdf , accessed April 15, 2016.

  64. Steve Ragan, “Ransomware takes Hollywood hospital offline, $3.6M demanded by attackers,” CSO, Feb 14, 2016, www.csoonline.com/article/3033160/security/ransomware-takes-hollywood-hospital-offline-36m-demanded-by-attackers.html , accessed April 15, 2016.

  65. Hollywood Presbyterian Medical Center letter, note 63 supra.

  66. Jose Pagliery, “U.S. hospitals are getting hit by hackers,” CNN Money, March 28, 2016, http://money.cnn.com/2016/03/23/technology/hospital-ransomware/ , accessed April 15, 2016.

  67. Ponemon Institute, note 37 supra.

  68. “FBI’s Advice on Ransomware? Just Pay The Ransom.,” The Security Ledger, October 22, 2015, https://securityledger.com/2015/10/fbis-advice-on-cryptolocker-just-pay-the-ransom/ , accessed April 15, 2016.

  69. Kehl et al., note 6 supra.

  70. Jeri Clausing, “Study Puts Price on Encryption Controls,” The New York Times on the Web, April 1, 1998, https://partners.nytimes.com/library/tech/98/04/cyber/articles/01encrypt.html , accessed April 15, 2016.

  71. Testimony of Philip R. Zimmermann to the Subcommittee on Science, Technology and Space of the US Senate Committee on Commerce, Science and Transportation, June 26, 1996, www.philzimmermann.com/EN/testimony/index.html , accessed April 15, 2016.

  72. Steven Levy, “The Encryption Wars: Is Privacy Good or Bad?,” Newsweek, April 23, 1995, www.newsweek.com/encryption-wars-privacy-good-or-bad-181584 , accessed April 15, 2016.

  73. Ibid.
