CHAPTER 34

Conflict

As the basis for daily life, first-world economies, and much of the world's innovation moves into the realm of information and communications, it's inevitable that bad guys and bad actions migrate there as well. The term “information warfare” doesn't really mean anything specific, so it's worth looking at a few of the ways computing and communications are reshaping crime and conflict. These areas might represent business opportunities for some, risks for others, and points of departure: Innovation is occurring on the dark side as well as in the light.


The intersection of technology, economics, politics, and violence that occurred in the summer of 2011 was nothing short of a milestone. When young people used Facebook and Twitter to organize riots in London, the social media tools were frequently blamed for the violence. Given the wide scale of the events of August and the diversity of participants, such an explanation is insufficient. Certainly, communications tools including BlackBerry Messenger were used to coordinate sometimes-professional criminals who were looting from prearranged lists. Other violence was copycat, undoubtedly fueled in part by hot summer temperatures, high unemployment, and political alienation. Once more, the superb effectiveness of mobile and Internet technologies in facilitating group behavior was on display, in the service of various ends: After the damage was done, the most popular Twitter term over the four days of rioting was “riotcleanup.”1

Warfare between Nation-States

As the United States formalizes its military posture relative to electronic intrusions and attacks, the relationship between cyberattacks and physical responses is being weighed carefully. Off the record, one “military official” reserves the right to respond to code with explosives: “If you shut down our power grid, maybe we will put a missile down one of your smokestacks.”2 On the record, the initial formulation of cyberstrategy emphasizes preparedness and effective defense: “By sharing timely indicators about cyber events, threat signatures of malicious code, and information about emerging actors and threats, allies and international partners can increase collective cyber defense.” Other elements of the strategy fall under the category of common sense: “Most vulnerabilities of and malicious acts against DoD [Department of Defense] systems can be addressed through good cyber hygiene,” such as strong passwords, limited use of USB drives in secure facilities, and regular antivirus sweeps and updates.3 The formulation and execution of cyberwarfare strategy are only beginning, and much remains to be determined.

The lack of “fingerprints,” for example, means that the origin of attacks can be difficult to trace. Networks of infected computers around the world can be rented to serve in so-called botnets that launch spam, denial-of-service attacks, or malware such as keystroke loggers or phishing e-mails. A nation-state actor can just as easily employ (or appear to employ) such a resource as could a criminal enterprise. The United States and South Korea were subjected to sophisticated, large-scale attacks in both 2009 and 2011, and while North Korea is an obvious suspect, definitive evidence was not immediately available.4
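
The attribution problem is easier to see with a toy model. The sketch below is a minimal illustration, not a depiction of any real botnet; every address, hostname, and figure in it is invented. It simulates a rented botnet flooding a target: the victim's logs record only the infected relay machines, scattered across many countries, and nothing in them points back to whoever hired the botnet.

```python
import random

# Hypothetical illustration: a rented botnet relays traffic, so the victim's logs
# show only the infected machines' addresses, never the operator's.
# All names and addresses below are invented for the sketch.

OPERATOR = "operator.example"   # whoever rented the botnet; invisible to the victim

# Infected machines scattered across many countries
botnet = [
    {"ip": f"198.51.100.{n}", "country": c}
    for n, c in enumerate(["BR", "US", "DE", "IN", "RU", "NG", "KR", "CN", "FR", "MX"], start=1)
]

def launch_flood(target, requests):
    """Simulate a denial-of-service flood: each request is sent by a random bot."""
    log = []
    for _ in range(requests):
        bot = random.choice(botnet)
        # The victim records the bot's address and country, not the operator's.
        log.append({"target": target, "source_ip": bot["ip"], "source_country": bot["country"]})
    return log

victim_log = launch_flood("www.target.example", 1000)
countries = {entry["source_country"] for entry in victim_log}
print(f"Attack traffic appears to originate from {len(countries)} countries;")
print("nothing in the log points back to", OPERATOR)
```

Real botnets add further layers of indirection, such as rented infrastructure and intermediate command channels, which make the trail colder still.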

The debate over “missiles down smokestacks” is a recent manifestation of a longer-running argument in military circles: What is the relationship between information and action? One answer can be found in a recent theory of battlefield strategy. Conceived by Air Force colonel John Boyd, the inelegantly named (and never really explained*) OODA loop seeks to attack the opponent's decision-making faculty rather than its armament.5 Observation, orientation, decision, and action are the stages of tactical behavior, according to the doctrine, but not in a rote life-cycle sense. According to one interpretation, “Orientation—how you interpret a situation, based on your experience, culture, and heritage—directly guides decisions, but it also shapes observation and action. At the same time, orientation is shaped by new feedback.” For Boyd, effective warriors watch for “mismatches between his original understanding and a changed reality. In those mismatches lie opportunities to seize advantage.”6

Given that reality is ceaselessly changing, continuous adaptation is required; as the German field marshal Helmuth von Moltke is said to have proclaimed in the late nineteenth century, “No battle plan survives contact with the enemy.” In the face of the inevitable chaos, Boyd wrote, “We must continue the whirl of reorientation, mismatches, analyses/synthesis over and over again ad infinitum.” In short, as the opponent seeks to ground himself on something, anything familiar, the aggressor can capitalize on the newness of the actual situation.
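
To make the shape of the loop concrete, here is a deliberately simplified sketch, not a reconstruction of Boyd's briefings: “orientation” is reduced to a set of beliefs, feedback from each pass updates those beliefs, the beliefs guide the next decision, and mismatches between expectation and observation are where advantage lies. The scenario and function names are invented.

```python
# A schematic rendering of the OODA cycle (observe, orient, decide, act), assuming a
# toy model in which "orientation" is a dictionary of beliefs updated by feedback.

def observe(environment):
    """Take in the situation as it currently is."""
    return dict(environment)

def orient(beliefs, observations):
    """Interpret observations against existing beliefs; note mismatches."""
    mismatches = {k: v for k, v in observations.items() if beliefs.get(k) != v}
    beliefs.update(observations)          # feedback reshapes orientation
    beliefs["mismatches"] = mismatches
    return beliefs

def decide(beliefs):
    """Mismatches between expectation and reality are where advantage lies."""
    stale = next(iter(beliefs["mismatches"]), None)
    return f"exploit change in {stale}" if stale else "hold course"

def act(decision, environment):
    """Acting changes the situation, which feeds the next pass of the loop."""
    environment["our_last_action"] = decision
    return environment

environment = {"enemy_position": "ridge", "weather": "clear"}
beliefs = {"enemy_position": "valley"}    # an out-of-date picture of the enemy
for cycle in range(3):
    beliefs = orient(beliefs, observe(environment))
    environment = act(decide(beliefs), environment)
    environment["weather"] = "fog"        # the world keeps changing underneath the plan
    print(f"cycle {cycle}: decided to {decide(beliefs)}")
```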

Boyd's ideas gained traction in peculiar ways, as befits the stubborn iconoclast who generated them. His home service disregarded the OODA loop, whereas the Marine Corps, operating as it does on lean resources and speed rather than mass and scale, seized on the concept. The highly successful design of the attack on Iraq in the early 1990s was classic Boyd: Move fast, disorient the enemy, paralyze their responses. Fifteen Iraqi divisions surrendered to two divisions of U.S. Marines. When asked how this had happened, Brigadier General Richard I. Neal, the U.S. military spokesman, said on national television: “We kind of got inside their decision cycle.”7

A blunter attempt to disrupt an opponent's information environment can be seen in the Chinese launch of an antisatellite missile in 2007. By knocking down one of its own aging weather satellites, which orbited at the same altitude as U.S. intelligence birds, China sent a strong signal. As Foreign Affairs put it:

With the United States now depending so heavily on assets in space for real-time communications, battlefield awareness, weapons targeting, intelligence gathering, and reconnaissance, the Chinese rocket launch may have been an attempt to show Washington how Beijing can overcome its handicap in a relatively simple way.8

As the evolving cyberwarfare doctrine illustrates, the response to such a strike has yet to be determined: If China damages a U.S. military satellite, whether with missiles or with lasers (which cannot down a satellite but can impair the optics of imaging systems), is that an act of war, even though no U.S. territory was violated and no American citizens were harmed? Someday a commander in chief may have to answer that question.

Non-Nation-State Actors

Iraq's open-field battles were the last of their sort for quite a while. Battling loose networks of insurgents in what is called asymmetric warfare has been the primary order of business for more than a decade. In such conflicts, heavy weapons can be a liability, or at least are neutralized insofar as the opponent typically cannot be attacked, bombed, or sunk by conventional means. Information thus plays a central role both in the insurgencies and in nation-states' responses to them. Al Qaeda in Iraq (a spinoff group of the original “brand”), for example, routinely videotapes improvised explosive device (IED) detonations for use as recruiting and motivational tools on various Web sites.9

Other forms of digital warfare are emerging. The lack of traceability of cyberwarfare means that nation-states can get software to do work for which missiles or bombs might be ill-suited politically. Such was apparently the case in 2010, when a software virus called Stuxnet was aimed with extreme precision: It attacked Siemens industrial control devices, specifically those running the centrifuges at Iran's nuclear enrichment facility. With great sophistication, the virus embedded itself in the SCADA (supervisory control and data acquisition) system that controlled the industrial apparatus, then sent false signals to the monitoring system indicating that devices were operating properly. By driving the centrifuges to spin erratically, the virus damaged some of them, slowing Iran's nuclear program. Official responsibility has never been claimed, but several strong clues suggest that the United States and Israel were involved.10 Stuxnet is the first documented attack of its kind on industrial control systems; similar systems control power plants, chemical facilities, and military installations.11
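
A toy example makes the deception pattern clearer. The sketch below is not Stuxnet: the class names and numbers are invented, and real SCADA attacks are vastly more sophisticated. It simply shows the general shape of the trick described above, in which a compromised control layer drives equipment outside its safe range while replaying reassuring readings to the operators.

```python
# Invented illustration of SCADA false reporting: the controller lies to operators
# while damaging the equipment it supervises. Names and values are hypothetical.

NORMAL_RPM = 1_000     # assumed safe operating speed for the sketch
DAMAGE_RPM = 1_400     # assumed speed at which the device is damaged

class Centrifuge:
    def __init__(self):
        self.rpm = NORMAL_RPM
        self.damaged = False

    def set_speed(self, rpm):
        self.rpm = rpm
        if rpm >= DAMAGE_RPM:
            self.damaged = True

class CompromisedController:
    """Sits between operators and the device, lying in both directions."""

    def __init__(self, device):
        self.device = device

    def report_to_operator(self):
        # Replay an "all is well" reading regardless of the device's real state.
        return NORMAL_RPM

    def run_attack_cycle(self):
        # Drive the hardware outside its safe range.
        self.device.set_speed(DAMAGE_RPM)

device = Centrifuge()
controller = CompromisedController(device)
controller.run_attack_cycle()
print("Operator console shows:", controller.report_to_operator(), "rpm")
print("Actual device state:   ", device.rpm, "rpm, damaged =", device.damaged)
```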

Another theme in information-age conflict revolves around secrets: The entire life cycle of intelligence gathering is sensitive, including not just the facts (country X has 100 missiles aimed at country Y) but also who asked, who told, and who took notice. WikiLeaks gained considerable attention in 2010 when it published roughly 480,000 documents related to the U.S. wars in Afghanistan and Iraq, then made another release of 250,000 U.S. State Department diplomatic cables that compromised both sources and diplomats. The Web site's architecture is impossible to conceive of in any age before the current one, in which control of media has widened far beyond a few established outlets. WikiLeaks' founder, Julian Assange, calls it “an uncensorable system for untraceable mass document leaking and public analysis.”12 To remove content from WikiLeaks, an entity would have to “practically dismantle the Internet itself,” in the words of a 2010 analysis in The New Yorker.13 While technology keeps the leaked secrets in the open, human traits (pride, vice, carelessness, and ideological commitment) made them available in the first place.

The mission of this powerful medium is spelled out in noble terms:

WikiLeaks is a non-profit media organization dedicated to bringing important news and information to the public. We provide an innovative, secure, and anonymous way for independent sources around the world to leak information to our journalists. We publish material of ethical, political, and historical significance while keeping the identity of our sources anonymous, thus providing a universal way for the revealing of suppressed and censored injustices.14

Backed by robust, distributed servers and facing an international legal system ill-equipped for this type of approach, WikiLeaks confronts established governments and corporations with a challenge to their basic need for secrets: “Publishing improves transparency,” the Web site asserts,

and this transparency creates a better society for all people. Better scrutiny leads to reduced corruption and stronger democracies in all society's institutions, including government, corporations and other organisations. A healthy, vibrant and inquisitive journalistic media plays a vital role in achieving these goals. We are part of that media.15

The rhetorical sleight of hand here is significant. Aligning WikiLeaks with media, as opposed to treason or espionage, puts secret-holders on the defensive insofar as WikiLeaks has published verifiable material: The site's record of veracity has not been seriously questioned. But it is not entirely clear whether the site's objective is to bring truth to light (i.e., a quasi-journalistic stance) or to cripple the operations of institutions it deems illegitimate. The governance of such institutions is, by Assange's definition, conspiratorial; it is carried out by people in “collaborative secrecy, working to the detriment of a population.”16 Much like Boyd, Assange argued that when an institution's communication connections are disrupted, the information flow among conspirators drops to the point that the conspiracy becomes unsustainable. As The New Yorker analysis summarized, “Leaks were an instrument of information warfare.”17

While the role of founder Julian Assange may change as he sorts out multiple legal problems, the basic model of WikiLeaks will likely persist, even if its current incarnation is shut down by financial, legal, or personality issues. Digital secrets are too easy to find, to move, and to distribute for this genie to be put back into its bottle. WikiLeaks has inspired similar projects, and in 2011 an even more diffuse, distributed effort was directed, in part, against the very concept of security.

Whereas WikiLeaks has a public spokesman, a vetting procedure, and a fundraising component, the loose hacker collectives of 2011 have fanciful names, occasional manifestos, and apparently some skilled technologists. The identities of the people associated with Anonymous, LulzSec, and their imitators are as yet largely unknown, though some arrests have been made in the United States and United Kingdom. The targets range from the silly to the deadly serious: Rupert Murdoch's fictitious death was splashed across the Web site of one of his newspapers, while e-mail addresses and passwords of Arizona public safety personnel were published in response to that state's anti-immigration posture, which the group regards as racist.18

Anonymous gained prominence in 2010 when it attacked PayPal and MasterCard after the payment services cut off donations to WikiLeaks. The group quickly multiplied its efforts. When Sony sought to sue a hacker who made it possible to run Linux on PlayStations (a capability the console originally offered), Anonymous attacked the PlayStation Network, but at an organizational level (such as it is) denied responsibility for the roughly simultaneous breach of some 100 million Sony user accounts, which turned out to have been particularly poorly protected.* Other targets of 2011 included the North Atlantic Treaty Organization (NATO), the Public Broadcasting Service, and the U.S. Senate Web site; affiliated groups claimed to have attacked the government of Brazil. Government sites in Turkey were attacked in response to attempted Internet censorship. Booz Allen Hamilton, an information services provider to the U.S. government, had roughly 90,000 e-mail addresses and encrypted passwords belonging to sensitive military clients exposed. Affiliated vandals posted messages from a Fox News Twitter account falsely stating that President Obama had been shot.

The variety of targets, and the apparently whimsical motivations for their inclusion, aligns with a leaderless collective; the agenda is opportunistic, though the skills involved in the military breaches suggest that some members possess sophisticated knowledge. At the same time, the range of targets and the scale of the breaches suggest a deeper problem: Information security is not being well practiced. While hacking for laughs (the “lulz” of LulzSec) appears apolitical, other actions exhibit some degree of geopolitical awareness: The group claimed to withhold certain NATO documents, along with materials that might have compromised the investigation of the News of the World phone-hacking scandal in England. Arrests may impair the group's efforts, but Anonymous claims to be an idea rather than an organization:

Your threats to arrest us are meaningless to us as you cannot arrest an idea. Any attempt to do so will make your citizens more angry until they will roar in one gigantic choir. It is our mission to help these people and there is nothing—absolutely nothing—you can possibly do to make us stop.19

Emerging Offensive Weapons

Given that an estimated 2,000 U.S. companies (few of which are talking about it very much) have been hit by Web-based attacks sophisticated enough to rule out so-called script kiddies looking mostly for bragging rights, countermeasures are increasing in intensity.20 Google, for example, openly accused China of cyberattacks and changed its operating procedures in response. The security vendor RSA had secret information underpinning its SecurID tokens stolen; 40 million of the key-fob-size devices have been shipped to serve as an additional factor of authentication for secure systems. The defense contractor Lockheed Martin had to shut off its virtual private network until new tokens with uncorrupted “seeds” that generate a fresh six-digit number every 60 seconds could be distributed.21 For a global company with sensitive data, such a scenario was troubling indeed.
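
The mechanics behind such tokens can be sketched in a few lines. RSA's SecurID algorithm is proprietary, so the example below uses the open TOTP construction (an HMAC over a time counter, per RFC 6238) with a made-up seed and a 60-second step; it illustrates the general idea, not RSA's product. The point it makes is why the theft of seed data was so damaging: anyone holding a token's seed can compute exactly the codes the token will display.

```python
import hashlib
import hmac
import struct
import time

SEED = b"hypothetical-secret-seed"   # in practice unique per token and closely guarded
STEP_SECONDS = 60                    # a fresh code every 60 seconds

def token_code(seed, at_time, step=STEP_SECONDS, digits=6):
    """Derive a short-lived numeric code from a seed and the current time window."""
    counter = int(at_time // step)                       # which 60-second window we are in
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

now = time.time()
print("Code this minute:", token_code(SEED, now))
print("Code next minute:", token_code(SEED, now + STEP_SECONDS))
# The server holds the same seed and recomputes the code to check a login attempt;
# a stolen seed therefore lets an attacker do the same arithmetic.
```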

So far, targets have been both civilian (including defense contractors) and military: 24,000 files were stolen in March 2011 from an unnamed contractor by “foreign intruders.”22 The new class of targets is strategic, lying outside the military and diplomatic sphere: power grids, drawbridges, and hospital monitoring and patient-care systems. While defensive measures are essential and sometimes effective, nation-states also seek to arm themselves offensively. As the Stuxnet virus showed, software attacks can be strategically effective while remaining politically palatable, given the considerable legal gray area.

But where does a government buy the ability to shut down Moscow's subways, for example, or to throw a given rogue nation's oil refineries into chaos? A new generation of software company, operating very quietly, is emerging to do for paying governments what skilled coders can be recruited to do in less democratic societies. Such companies as KEYW, Endgame Systems, and HBGary Federal map vulnerabilities and offer software assets for sale: What kind of computers are running where, doing what, relative to a target? How can those computers be protected, compromised, disabled, or simply monitored? For about $6 million, reportedly, a buyer gets what one article called “cyber warfare in a box.”23 Seeing the market, IBM bought Internet Security Systems (ISS) for $1.3 billion in 2006. One of ISS's key differentiators was the X-Force network-vulnerability mapping service; Endgame's founders were X-Force alumni.
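
To give a feel for what “what kind of computers are running where, doing what” means at the most elementary level, here is a minimal inventory sketch. The hostname and port list are placeholders, it should only ever be pointed at machines you administer, and commercial vulnerability-mapping services are vastly more capable; the sketch simply asks which well-known services answer on a host and what, if anything, they announce about themselves.

```python
import socket

HOST = "server.you-administer.example"            # placeholder: a machine you control
COMMON_PORTS = {22: "ssh", 25: "smtp", 80: "http", 443: "https"}

def probe(host, port, timeout=2.0):
    """Return a service banner if the port answers, '(open, silent)' if it accepts
    connections without announcing itself, or None if nothing is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            try:
                banner = sock.recv(128)   # ssh and smtp, for example, volunteer a version string
            except socket.timeout:
                banner = b""
            return banner.decode(errors="replace").strip() or "(open, silent)"
    except OSError:
        return None

if __name__ == "__main__":
    for port, name in COMMON_PORTS.items():
        result = probe(HOST, port)
        if result is not None:
            print(f"{name:5} port {port}: {result}")
```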

Unlike the Cold War's reliance on deterrence (we know what you have and, if you use it, we will pay you back), cyberwar is marked by high degrees of misdirection and deniability. In addition, once a capability is shown, countermeasures can be taken in ways that were impossible with, for instance, nuclear warheads: A cyberweapon's shelf life is far shorter once it has been deployed than while it remains secret. Stuxnet covered its tracks, telling nuclear facility operators that the centrifuges were operating normally when in fact they were spinning fast enough to destroy themselves; now that the technique is known, the virus can no longer be used in a surprise attack.24

Such software could be written in only a handful of countries, but because triggering a virus or an army of dormant, compromised computers is far easier than moving nuclear warheads across the world, cyberwarfare will not follow the pattern of exclusivity that was the hallmark of the nuclear club of known or strongly suspected regimes. To illustrate the complexity of the situation, consider the origin of the price list for vulnerability mapping, rootkits*, and even e-mail and Web addresses. In such a sensitive domain, how did Endgame's wares find their way into unclassified, public sources? An outside party hacked into HBGary Federal in 2011 and leaked the relevant e-mails. Who was that party? Anonymous, the hacker alliance that has no known geographical headquarters, appointed leader, or physical infrastructure: The group's publicity tools include press releases and YouTube videos, which can be close to untraceable, but otherwise it works largely invisibly.

Looking Ahead

Cyberwar, in short, differs from conventional warfare in just about every significant dimension:

  • The attacker can be thousands of miles away from the target, and the attack can be timed to launch weeks or months after the decision is made.
  • The identity and location of the attacker can be virtually impossible to determine.
  • The attacking parties' motivations can be opaque, ad hoc, or highly developed, and different aligned parties will likely be acting in the service of different agendas.
  • Some participants may be unaware of the existence or nature of their participation.
  • The identity of the attacker can be hidden to make it look like the work of someone else or the work of no one in particular.
  • The ultimate target may not be immediately obvious, given the dense interconnection of so many computers.
  • The intent may be to deprive the target's host of a capability, to send a political message, to cause economic disruption, or to cause bodily harm, either specifically (a particular pacemaker or insulin-delivery pump) or generally (a water supply). In short, cyberattacks can be very specifically aimed.
  • Cyberattacks cost far less to mount than conventional attacks.
  • For many reasons, targets are readily available: Few assets are completely secure.

As the number of Internet connections and computerlike devices ascends past the 4 billion mark, thinking about such notions as authentication, perimeter security, and trust will be increasingly problematic. If the risks get severe enough, might the many benefits of interconnection and mobility be overwhelmed by the dangers, or by the precautions25 insisted on by those charged with the protection of critical assets? That is, will the behaviors of the cyberwarriors, both official and unaligned, force substantial changes in the open, relatively low-cost, and heterogeneous environment people have come to expect?

Notes

1. John Burn-Murdoch, Paul Lewis, James Ball, Christine Oliver, Michael Robinson and Garry Blight, “Twitter Traffic During the Riots,” The Guardian, August 24, 2011, www.guardian.co.uk/uk/interactive/2011/aug/24/riots-twitter-traffic-interactive.

2. Siobhan Gorman and Julian E. Barnes, “Cyber Combat Can Count as Act of War,” Wall Street Journal, May 31, 2011, http://professional.wsj.com/article/SB10001424052702304563104576355623135782718.html?mod=googlenews_wsj&mg=reno-wsj.

3. Noah Shachtman, “Pentagon Makes Love, Not Cyber War,” CNN, July 15, 2011, http://edition.cnn.com/2011/TECH/innovation/07/15/pentagon.cyber.war.wired/.

4. Siobhan Gorman and Evan Ramstad, “Cyber Blitz Hits U.S., Korea,” Wall Street Journal, July 9, 2009, http://online.wsj.com/article/SB124701806176209691.html.

5. See Robert Coram, Boyd: The Fighter Pilot Who Changed the Art of War (New York: Back Bay Books, 2004).

6. Keith H. Hammonds, “The Strategy of the Fighter Pilot,” Fast Company, May 31, 2002, www.fastcompany.com/magazine/59/pilot.html?page=0%2C0.

7. Coram, Boyd, p. 425.

8. Bates Gill and Martin Kleiber, “China's Space Odyssey: What the Antisatellite Test Reveals About Decision-Making in Beijing,” Foreign Affairs (May/June 2007), www.foreignaffairs.com/articles/62602/bates-gill-and-martin-kleiber/chinas-space-odyssey-what-the-antisatellite-test-reveals-about-d.

9. James Kennedy Martin, “Dragon's Claws: The Improvised Explosive Device (IED) as a Weapon of Strategic Influence,” Master's thesis, Naval Postgraduate School, March 2009, www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA496990.

10. John Markoff, “Malware Aimed at Iran Hit Five Sites, Report Says,” New York Times, February 11, 2011, www.nytimes.com/2011/02/13/science/13stuxnet.html.

11. Robert McMillan, “Siemens: Stuxnet Worm Hit Industrial Systems,” Computerworld, September 14, 2010, www.computerworld.com/s/article/print/9185419/Siemens_Stuxnet_worm_hit_industrial_systems?taxonomyName=Network+Security&taxonomyId=142.

12. Raffi Khatchadourian, “No Secrets: Julian Assange's Mission for Total Transparency,” The New Yorker, June 7, 2010, www.newyorker.com/reporting/2010/06/07/100607fa_fact_khatchadourian#ixzz1Smlo05Nd.

13. Ibid.

14. http://wikileaks.org/About.html.

15. Ibid.

16. Julian Assange, “Conspiracy as Governance,” December 3, 2006, p. 1, finemrespice.com/files/conspiracies.pdf.

17. Khatchadourian, “No Secrets.”

18. Alexia Tsotsis, “LulzSec Releases Arizona Law Enforcement Data, Claims Retaliation for Immigration Law,” TechCrunch, June 23, 2011, http://techcrunch.com/2011/06/23/lulzsec-releases-arizona-law-enforcement-data-in-retaliation-for-immigration-law/.

19. Miles Doran, “Hacker Says Anonymous Still Downloading NATO Data,” CBS News, July 22, 2011, www.cbsnews.com/8301-503543_162-20081635-503543.html.

20. Michael Riley and Ashlee Vance, “Cyber Weapons: The New Arms Race,” Bloomberg Businessweek, July 20, 2011, www.businessweek.com/printer/magazine/cyber-weapons-the-new-arms-race-07212011.html.

21. Robert McMillan, “After Hack, RSA Offers to Replace SecureID Tokens,” PCWorld, June 6, 2011, www.pcworld.com/businesscenter/article/229553/after_hack_rsa_offers_to_replace_secureid_tokens.html.

22. Chris Lefkow, “24,000 Files Stolen from Defense Contractor: Pentagon,” Physorg.com, July 15, 2011, www.physorg.com/news/2011-07-stolen-defense-contractor-pentagon.html.

23. Riley and Vance, “Cyber Weapons.”

24. Ibid.

25. See, for example, Scott Bradner, “Cyberwar and Cyber-Isolationism,” Network World, July 12, 2011, www.networkworld.com/columnists/2011/071211-bradner.html.

*Boyd's preferred mode of transmitting the idea was an in-person 14-hour presentation of overhead transparencies.

*According to a Purdue University computer scientist testifying before a congressional subcommittee, Sony had no firewall protecting its networks, and its Apache Web server software was an outdated version with known vulnerabilities. See www.eweek.com/c/a/Security/Sony-Networks-Lacked-Firewall-Ran-Obsolete-Software-Testimony-103450/.

*A rootkit is a collection of software tools that gives an individual administrator-level access to a computer (“root” being the name of the all-powerful administrative account on Unix-type systems). Access usually comes either from exploiting a known vulnerability or from intercepting or otherwise compromising a password. After the rootkit is installed, an attacker can mask his or her intrusion and commandeer the basic operations of the computer or network.
