8 Concentration and Terror on the Internet

In the 1990s, following the First Persian Gulf War, the United States engaged in almost daily bombing of targets in Iraq, in response to Iraq’s failure to comply with United Nations Security Council resolutions and its interference with UN Special Commission inspectors. Early in 1998, the buildup of U.S. troops and materiel in friendly spots in the Middle East intensified, in preparation for Operation Desert Fox, a major three-day bombing campaign. In February 1998, the Department of Defense discovered that intruders had broken into numerous secure DOD computers. The intruders had obtained “root access,” which would allow them to steal information, alter information, or damage the DOD networks. Defense officials suspected that it was a case of “information warfare,” with the Iraqi government behind the penetration. The attacks went on for almost a month. Finally, investigators traced the intrusions back to an Internet service provider (ISP) in the Persian Gulf region. President Clinton was briefed, and both cyber countermeasures and “kinetic” (physical) ones were considered. Had the hackers stolen the bombing plans? How secure were our networks?

With the help of Israeli and other foreign law enforcement agencies, the department traced the intrusions to two California teenagers, assisted by an Israeli teenager. Internet signals “hop” all around the world, and in this case, the Persian Gulf ISP was one of the hops between the teenage hackers in California and the Pentagon. (Vadis 2004, 102–3) We did not bomb the Persian Gulf ISP. This gives us an idea of the state of security on the Internet in 1998; it is not much better in 2006. Unauthorized access has been gained to nuclear power plants and other power stations, financial institutions, and intelligence agencies as well as the Defense Department.

INTRODUCTION

Think of a safely designed highway, with exit lanes, wide curves, good lighting, and safe speed limits. The Internet is like that; by itself it is very reliable, like the highway. Now put on the highway cars that can go double the speed limit or roll over easily or explode if hit in the rear. Some are driven by people who are eating, talking on the phone, drunk or doped, or too young to have good judgment. Finally add in faulty regulation of vehicle manufacturers, poor licensing standards for drivers, and few highway patrol officers. This represents the devices that get us on the Internet. The Internet is a marvel; but some of the devices that allow us to use it threaten to bring it down. (The analogy cannot be pushed very far, unfortunately; we will see how the Internet, its access devices, and their interaction are far more complex.)

The Internet has been called the world’s largest network, always on, with millions of transactions every hour. In itself, it is fantastically reliable and quite secure (though that is threatened). But the devices that get us on the Internet are prone to glitches and failures, and do not provide a great deal of security. Unless those devices provide secure transactions, the Internet presents the largest target for fraud and terrorism that we have. A terrorist can exploit faults in the operating system of a computer, in its software, or in the servers it depends on to gain control of a nuclear power plant, a chemical plant, or a city’s water system if these systems are linked to a public network, which unfortunately they sometimes are. The terrorist could read the plans of the Department of Homeland Security and the Department of Defense and alter them, or disable their systems. (Of less concern for this book, criminals can gain access to financial systems, including credit card agencies and banks.) This is partly because the operating systems used by 90 to 95 percent of those on the Internet come from one source, Microsoft, and for years, because of its market dominance, Microsoft had no economic incentive to make its products highly secure or even highly reliable. While we will focus on security, Peter Neumann, the guru of computer fallibility, reminds us that in systems, reliability and security are directly related. Events that can happen accidentally can be caused intentionally, and events caused intentionally can happen accidentally. (Neumann 1995, 126–28) We have to be concerned with both reliability and security, and sometimes they are so interdependent that it is hard to distinguish them.

The lack of security for machines on the Internet is beyond doubt, but as yet we have no public evidence that terrorists have used it to any great effect. (There is also little public evidence that thieves have used it to great effect; such evidence would greatly embarrass business, so it has not been made public. The estimates of yearly losses to business run in the billions.) In contrast to the insecurity of Microsoft’s operating system and software, its unreliability is more a matter of annoyance than of disaster, at least so far. However, a study by a National Academy of Sciences panel (still in draft form as of September 2006) anticipates disastrous consequences of software failures as critical systems become more and more dependent on software. Much of the software in critical systems is Microsoft’s.

First we will examine the operating systems and servers that link to and run the Internet and the World Wide Web. This is where the potential for disaster is the greatest. (The Internet and the World Wide Web are not synonymous: the Internet connects computers through telephone wires, fiber-optic cables, radio signals, and satellites; the Web is a set of servers that are connected to the Internet. The Web stores documents that can be sent to other computers through the Internet upon request. However, we will sometimes treat them as the same since we use the Internet to access the documents on the Web.) Then we will examine the Internet itself, as the world’s largest machine, and discover why the network itself (but not a computer on it) is still inherently very secure, even though what runs on it and holds classified documents and operates parts of our critical infrastructure can be very vulnerable. Then we will examine how commercial interests seek to centralize what is presently a vastly decentralized system with open access. A common carrier such as a highway could be privatized and centralized, creating the vulnerabilities we might expect from concentration.

GETTING ON THE INTERNET

It is useful to initially consider two separate systems: your computer, and the system(s) to which your computer connects in order to send or receive e-mail or view Web pages on the Internet. An operating system such as Windows provides the software that runs the chips, relays, and so on in the computer—that is, the hardware. (The hardware itself is only rarely a security issue.) Microsoft’s Windows and Apple’s Mac OS X are commercial operating systems; you pay for them when you buy the machines or upgrade to a new version. (You can also buy a machine without a Windows or Mac operating system and run Linux or variants of BSD Unix on it for free, but this is rarely done and requires considerable expertise.) In the language of telecommunications, your machine is the client, which is served by the second system (though any machine on the Internet can act as a client, server, both, or neither). The second system provides services to send and receive e-mail and interact with Web pages.

As a client, the operating system on your computer takes your e-mail, for example, and sends it, via a phone line or cable, to your Internet service provider (ISP), such as America Online or Comcast, or your organization’s ISP. The ISP is acting as a server. The software in the ISP routes the message to another e-mail server that the recipient can connect to. Your message is broken up into packets; the packets may follow quite different routes, but each one carries the destination address and a sequence number so that the message can be reassembled at the recipient’s end.
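A minimal sketch of the idea, in Python, may help. It illustrates packetizing and reassembly in general, not the actual Internet protocols; the packet size and field names are invented for the example.

```python
import random

PACKET_SIZE = 32  # bytes of message per packet; an illustrative number, not a real protocol limit

def packetize(message: bytes, destination: str):
    """Cut a message into packets, each stamped with the destination and a sequence number."""
    chunks = [message[i:i + PACKET_SIZE] for i in range(0, len(message), PACKET_SIZE)]
    return [{"dest": destination, "seq": n, "total": len(chunks), "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """Put the packets back in order at the receiving end, however they arrived."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    assert len(ordered) == ordered[0]["total"], "a packet was lost in transit"
    return b"".join(p["data"] for p in ordered)

packets = packetize(b"The quarterly report is attached; please review it.", "mail.example.org")
random.shuffle(packets)  # packets may take different routes and arrive out of order
print(reassemble(packets).decode())
```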

Your operating system, the first of the two systems, is open to penetration by hackers (like the teenagers in California and Israel we met earlier), “crackers” (malicious hackers), agents of foreign governments, competing business firms, thieves, and terrorists. That is why you are urged to set up firewalls, use passwords, avoid clicking on suspicious links in your e-mail or on the Web, and even use encryption. The ISPs are themselves vulnerable as well, and they have their own security protections. As we will see, it is possible to disable an ISP without even penetrating it; one can overload it by sending it millions of messages. But though this second system, the Internet, is the means of access to the first, the operating system, it is the first that presents the greatest reliability and security challenges. Let us examine security first.

INTERNET SECURITY

Threats to the Internet come in various ways with varying levels of disruption. First, the most common and most annoying is malware—the worms, viruses, Trojan horses, and backdoor access programs that can disrupt your machine and even erase the hard drive, where everything on your machine is stored. Once your machine has been compromised, it can be used to send these threats to other machines; your machine becomes a member of a “botnet” (from “robot network”). (For very useful definitions and discussion of the technical terms in any of the chapters in this book, go to www.wikipedia.org, and for this chapter especially, to webopedia.org. These Web sites are two of the noncommercial joys of the World Wide Web.) You are not aware your computer is being used. Companies that place ads on, say, Google’s search site pay Google a few cents every time someone clicks on an ad, and Google gives some of that money to the Web site that carries the ad. A hacker who runs a Web site carrying such ads can use the botnet to generate many bogus clicks and collect a share of the fees the advertisers pay to Google.

Much more important for our purposes, botnets are used in another type of attack, a “distributed denial of service” attack, where specific sites are bombarded with so many messages that they are unable to send or receive messages. Theoretically, the whole Internet could be shut down with such attacks, but aside from the difficulty of pulling this off, why would a hacker want to eliminate his or her toy, or a terrorist his or her principal means of communication? Next, as we shall see, an attacker can take control of machines that are linked to the Internet and, for example, change settings on relays in a power plant. Finally, by gaining control of operating systems, the attacker can take over or modify data within a system. A law enforcement system could be gradually subverted in this way, the intelligence of intelligence agencies corrupted, or financial systems accessed and accounts changed or drained.

Everything connected to the Internet is vulnerable to these threats to varying degrees. We are all familiar with the first type of threat—worms and such—but it is the latter two types of threats that are the most consequential: taking control and modifying data. I will have much to say about the lack of security of Microsoft products, in terms of the company’s failure to make the most widely used operating system and its software invulnerable to worms, viruses, and the like. But when we come to the second vulnerability, taking control of systems and modifying data, the security failure has as much to do with the failure of the industry as a whole to protect itself as it does with the vulnerability of the industry’s leading operating system.

The perpetrators of attacks come in three flavors. First there are the irresponsible hackers who are playing around, trying to penetrate protected systems to show that they can and to make trouble. Because of their ubiquity, or perhaps even more salient, Microsoft’s resented power, machines running Windows are a frequent target. When asked why he robbed banks, Willie Sutton replied, “That’s where the money is.” Windows is where 95 percent of the operating systems are. Some of these attackers are so malicious and do so much damage that they are called crackers. Second, there are criminals who steal money, and, rarely discussed, businesses and governments that require employees to engage in criminal acts of espionage. (We should not be surprised that businesses and governments engage in espionage, and today the most productive venue for that is the Internet.) The third group are the terrorists who seek to disrupt the critical infrastructure of electric power, nuclear power, industrial plants, and the Internet itself in order to kill people or wreak economic harm. This group is presumably very tiny; indeed, we are not certain there are any terrorists attacking the Internet. The size of the second group is also unknown; it is presumed to be small in number but very effective, and the victims are unwilling to have the extent of the criminal activity publicized. The size of the first group seems to be enormous, but in fact it may be a very small number of people working in a system that has a huge amplification potential. If everything is linked, which is the genius of the Internet, everything is potentially vulnerable, and just one hacker’s virus can have tremendous scope.

Regarding the first group, there are hourly instances of damage caused by malicious software such as viruses, worms, and Trojan horses. According to a computer security firm in London, malicious software inflicted as much as $107 billion in global economic damage in 2003. The “Slammer” worm alone was responsible for nearly $30 billion in damages in one week. (Geer et al. 2003) (With figures like these, why does business rely on such a costly mechanism as the Internet? Because the economic savings the Internet provides make these figures trivial!) Regarding the second group, criminals, we have, as noted, no public information on the extent of criminal financial activity. Even more hidden from view are the activities of businesses and governments that steal information and secrets and perhaps disrupt their targets.

As far as one can tell from the public record, the third group, terrorists, has yet to wreak damage through the Internet, though terrorists may be financing themselves through criminal financial activities such as getting into bank accounts and transferring money out. However, there are strong indications that terrorist groups have sought to gain access to consequential military, industrial, and financial sites, much as the teenage hackers described in the opening of this chapter did. According to a Government Accountability Office report of March 2005, security consultants within the electric industry reported that hackers were targeting the U.S. electric power grid and had actually gained access to U.S. utilities’ electronic control systems. But computer security specialists reported that only in a few cases had these intrusions “caused an impact.” We do not know anything about the “few cases,” but it is disturbing that there would be any. What can be done for fun could be done by a strategic adversary. The report said that the constant threat of intrusion has heightened concerns that electric power companies may not have adequately fortified their defenses against a potential catastrophic strike. (GAO 2005)

There is no indication as to the motives behind those reportedly seeking control of utilities’ control systems; they could have been opportunists just demonstrating the vulnerability of the site or malicious hackers making trouble, which is worrisome enough, but they could have been terrorists exploring our vulnerability. The FBI reported in 2005 that there was evidence that terrorists were becoming more sophisticated in their Internet activity, but its illustration only concerned secret communications. For example, they were embedding messages in pictures that can only be read with a key (remember secret writing with lemon juice and then heating the paper to make it visible?). (It has a name: steganography.) But the agency did not see much activity. The FBI’s top cyber-crime official said that terrorists are still unable to mount crippling Internet-based attacks against U.S. targets, and the agency has detected no plans to launch cyber attacks. However, a worm attack, presumably by a malicious hacker rather than a terrorist, almost shut down the FBI’s Internet access. (Staff 2005b)
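The principle of steganography can be shown in a few lines of code. The sketch below is a toy example, not any tool actually attributed to terrorists: it hides a short message in the least-significant bits of what stands in for raw image pixel values. Real tools typically add encryption as well, so a key is needed to read the message even if its presence is suspected.

```python
def hide(pixels: bytearray, message: bytes) -> bytearray:
    """Store each bit of the message in the least-significant bit of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # changing the lowest bit alters the picture imperceptibly
    return out

def reveal(pixels: bytearray, length: int) -> bytes:
    """Read the low bits back out and reassemble the hidden bytes."""
    return bytes(
        sum((pixels[i * 8 + j] & 1) << j for j in range(8))
        for i in range(length)
    )

cover = bytearray(range(256)) * 4   # stands in for the pixel values of an innocuous photograph
secret = b"meet at noon"
stego = hide(cover, secret)
print(reveal(stego, len(secret)))   # b'meet at noon'
```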

Widely used standardized technologies have commonly known vulnerabilities and one can go to Web sites to find programs that will exploit these vulnerabilities. It does not take a sophisticated hacker to launch malware. As a GAO report notes, “effective exploitation tools are widely available and relatively easy to use.” Just type “hacking tools” into a Google search box and you will get millions of “hits,” many of which offer instructions for hacking and directions to vulnerable sites. Hacking tools are especially effective with the new “layered systems” features of the Internet that include peer-to-peer networks and chat groups. Social networks such as MySpace and Friendster, both of which have ten million or more users, can be attacked in still different ways. These layered systems are less secure and thus more promising targets.

Furthermore, it is possible to identify from public sources the most heavily loaded transmission lines and the most critical substations in the power grid. A study notes that a hacker could change the settings on circuit breakers and at the same time raise settings on neighboring lines; the diverted power would overload the lines and cause significant damage to transformers and other critical equipment, which might not be repaired for months. (GAO 2005)

Nuclear power plants are at risk simply from worms and viruses. In January 2003, the Slammer worm, which attacked Microsoft’s SQL Server software, disabled a safety monitoring system for nearly five hours at our old friend, the Ohio Davis-Besse nuclear power plant. The plant’s process computer also failed, and it was down six hours before it was available again. The worm also affected communications on the control networks of at least five other utilities. The utility claimed it was an isolated instance, but the GAO report on the incident was skeptical about that claim. It noted that there is no formalized process for collecting and analyzing information about control-system incidents, so there is no way of knowing how widespread such attacks were, and it called for strict reporting. (GAO 2005) No one has answered the call for strict reporting.

The lack of security can cause sizable economic damage, but it is rarely publicly documented. One documented case, the indictment of a British computer administrator, is revealing not for the economic damage but for the threat to our national security. In November 2002, he was indicted on charges that he accessed and damaged ninety-eight computers in fourteen U.S. states between March 2001 and March 2002, causing some $900,000 in damage. More disturbing was that these networks belonged not just to private companies but to the heavily guarded Department of Defense and the National Aeronautics and Space Administration. The indictment alleges that the attacker was able to gain administrative privileges on military computers, allowing him to copy password files and delete critical system files. The attacks rendered the networks of the Earle Naval Weapons Station in New Jersey and the Military District of Washington inoperable for an unspecified period of time. (GAO 2005)

In 1999, two Chinese military officers published a book promoting the use of unconventional measures, including the propagation of computer viruses, to counter the military power of the United States. In 2001, a loose coalition of Chinese hackers launched a successful denial-of-service attack on the CIA and White House Web sites, in response to the collision of a U.S. surveillance aircraft and a Chinese fighter jet. (Vadis 2004, 102) Actions by nation-states aside (and our Defense Department has the authority to launch cyber attacks on critical infrastructure sites of foreign states), it is not clear what breaking into military and other government establishments would do for terrorists, beyond harassment, unless the establishments are government labs where hazardous materials are present. While it is true that they could disable disaster-response efforts after creating a disaster, it would require a very sophisticated terrorist organization to both create a large disaster and at the same time disable the agencies that are supposed to respond to it. The greatest threats to the Internet appear to be from hackers and crackers, but these are still serious threats. (Poorly designed software programs and applications may also cause Internet problems, but these will not be considered here. They are not strategic, i.e., intentional, threats as crackers and terrorists are, and though they may do damage they have no specific targets.)

It is worth emphasizing the interconnections in modern societies. Not only are all of our critical infrastructures connected to the Internet, but they are interconnected themselves. Thus an operating system failure or intentional penetration can interact in unexpected or even mysterious ways with seemingly unrelated parts of the infrastructure. A bug that causes a computer failure in a hospital pharmacy may result in the failure to deliver critical medicines, patients getting the wrong medications, critical cases turned away to other hospitals, and reduction of surge capacity if there is also an unrelated crisis. Computer failures at airports sometimes ramify through the dense air transport network, creating economic hardships for business and government, as well as raising the threat of aircraft disasters. Errors are everywhere, and their unexpected interactions are facilitated by our interconnectedness, which is magnified by the Internet.

DIVERSIFICATION AND REDUNDANCIES AS PROTECTIONS

In general, there are two ways to ensure the survivability of a threatened system. One is to have independent redundant components, such as a backup generator in a power plant to maintain it in a safe mode in case the plant itself fails to provide power. Backing up your computer files on a component that is independent of the computer, such as a CD-ROM or other storage device, is another example. Independent redundancy is the key to this form of survivability. Unfortunately, we have little of this in the operating systems that connect to the Internet. A virus designed to erase the hard drives on Microsoft operating systems could wipe out the data on all that are not protected. Even firewalls and other standard protections do not appear to be enough. The estimated $30 billion in damages that one worm caused in one week in 2003 affected sophisticated companies with firewalls and other protective devices. If 95 percent of computers running on the Internet have Microsoft operating systems, and this is the system the worm attacks, there is the theoretical potential of taking them all out by wiping the hard drive clean or freezing the computer so that only expensive and sophisticated recovery work would make them operable. It hasn’t come anywhere near that as yet, and the vast majority of these machines are only used for e-mail and Web surfing. Furthermore, attacks on servers are more significant, and while Microsoft does not have a monopoly here, it supplies enough server software to do damage. Non-Microsoft servers can also be harmed by worms attacking Windows machines: worms such as Sober and Slammer can flood servers with so many messages that they go down in a denial-of-service attack. In 2005, the third version of the Sober worm spread so quickly and widely that the FBI was bombarded with 200,000 e-mails a minute over four days, and it almost killed the agency’s system. (Staff 2005b) Adding computers that have the same vulnerability as the existing ones will not provide redundancy.
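The first form of protection, independent redundancy, can be illustrated at the level of a single machine: copy files to a separate storage device and verify the copies, so that a wiped hard drive does not take the only copy with it. This is a minimal sketch; the paths are hypothetical, and the backup target is assumed to be a drive the compromised machine cannot silently erase along with the original.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file so the copy can be verified independently of the original."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            digest.update(block)
    return digest.hexdigest()

def backup(source_dir: Path, backup_dir: Path) -> None:
    """Copy every file to separate storage and confirm that each copy matches its original."""
    for src in source_dir.rglob("*"):
        if src.is_file():
            dst = backup_dir / src.relative_to(source_dir)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            assert sha256(src) == sha256(dst), f"backup of {src} is corrupt"

# Hypothetical paths: documents on the internal drive, copies on a removable drive.
backup(Path("C:/work/documents"), Path("E:/backup/documents"))
```

Such backups limit the damage from a wiped machine, but they do nothing about the cascading failures that hit every machine running the same vulnerable system at once.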

To protect from such cascading failure, we need the second form of protection—risk diversification, that is, diverse types of computers being used that have different operating systems and diverse programs from different sources. One virus or Trojan horse is very unlikely to affect them all. At a computer industry conference in 2003, a group of highly qualified engineers launched a broadside attack on Microsoft’s “monoculture,” in effect, its near-monopoly of operating systems. They drew an analogy to the farmer who plants several varieties of corn, since in any one year there could be a blight that kills one of the varieties but not the others. “This sort of diversification is widely accepted in almost every sector of society from finance to agriculture to telecommunications. In the broadest sense, economic diversification is as much the hallmark of free societies as monopoly is the hallmark of central planning.” (Geer et al. 2003)

One could add that diversification is even more important where there are “strategic adversaries” such as hackers or terrorists. They could select the type of “blight” and the timing and method of delivery. Furthermore, over the Internet, the dispersion would be in microseconds. This makes the presence of a monoculture on the Internet even more serious than such modern monocultures as eBay, Amazon, and Wal-Mart. The possibility of another monoculture on the Internet is presented by the power of Google, but there are some reasonably competitive search engines from other suppliers, and search engines are not vital to our critical infrastructure. For the same reason, the near-monopoly power of Apple in the digital music realm is not critical.
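The arithmetic behind the diversification argument can be made concrete with a toy calculation. Assume, purely for illustration, that an exploit works against exactly one operating system and that machines are spread evenly across the available systems; the share of machines exposed to any single exploit then falls in proportion to the number of systems.

```python
def share_exposed(total_machines: int, num_operating_systems: int) -> float:
    """Fraction of machines hit by one exploit aimed at a single operating system,
    assuming the machines are spread evenly across the available systems."""
    machines_per_system = total_machines / num_operating_systems
    return machines_per_system / total_machines

for n in (1, 2, 6):
    print(f"{n} operating system(s): {share_exposed(330_000_000, n):.0%} of machines exposed")
# 1 operating system(s): 100% of machines exposed
# 2 operating system(s): 50% of machines exposed
# 6 operating system(s): 17% of machines exposed
```

A strategic adversary complicates the picture, of course: the attacker will aim at whichever system offers the biggest return, which is exactly why a 95 percent monoculture is so attractive a target.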

The 2003 conference report created quite a storm. The report took the findings of the government’s antitrust suit against Microsoft, which found extensive predatory behavior that led to Microsoft’s monopoly position, and drew out the implications for security rather than economic competition. But the industry association that provided the venue for the report was not one that Microsoft chose to join; it included competitors of Microsoft. So another industry association, to which Microsoft but not its competitors belonged, attacked the report for “myopically looking to technology” (that is, Microsoft’s technology) as the underlying cause of cyber breaches. It said that the root cause of most security problems is human error. “It’s not this monoculture that is at fault here,” a spokesperson said. “This started with human behavior, and it demands a human-behavior response.” The association suggested that one of the best ways to protect against cyber intrusions is to increase security training for IT workers, that is, a human-behavior response. (Glasner 2003) Microsoft’s products were apparently blameless; it was the users who should be blamed. Blaming the operator rather than the technology or system has a long history in analyses of failures. “Operator error” is the first and most common attribution when nuclear plants such as Three Mile Island or Chernobyl go awry, airplanes crash, and Bhopals happen. A more sophisticated analysis asks, “What in the system made it easy for operators or users to make mistakes?” (Perrow 1999; Reason 1990)

Why is the Microsoft operating system so vulnerable? An accepted principle of system design is to avoid complexity and tight integration of different components or interfaces. Complexity is the enemy of both reliability and security, engineers are fond of saying. The more complex the system, the greater the chance of the unexpected interaction of components, even when they are not faulty in themselves. A truly complex system may be impossible to fully comprehend, and the unexpected interactions may not appear for a very long time, and especially not in the short period of testing before marketing and installation.

Even if every permutation of the system could be anticipated under every environmental condition and usage, there is still the problem of failures. Everything is subject to failure; no computer program, no chip, or bus, or wire, or wire connection will always be perfect. And even if every component performs as desired, there may be unanticipated interactions. This, a result of complexity and tight coupling, is a second source of failure built into the design of the system. We guard against these two sources of failure—inevitable component failures and those associated with system design—with redundancies, firewalls, alarms, bells and whistles, because we know they will occur. But multiple failures, even very small ones that by themselves are inconsequential or are defended against, can interact in just the right, but unexpected, way and bring the system down, whereas the “single-point failure” is guarded against. The interaction can create a failure that no system designer could anticipate or comprehend, and no operator can understand. If the system is also highly integrated (that is, tightly coupled), there is a good chance that either the unexpected interaction of components or the unexpected interaction of failures in the components will produce a cascading failure. (This is the argument of normal-accident theory, previously referred to several times. Interactive complexity, in that theory, means that the interactions are not just linear, like the stations on an assembly line, but complex, with feedback loops and multiple uses. Tight coupling means that processes cannot be stopped or reversed, or substitutes or replacements added; they must proceed as designed. In a system with both characteristics, the accidents caused by interactive complexity and tight coupling are built into the design; while rare, they are “normal.” [Perrow 1999]) It is this possibility that the critics of Microsoft’s “monoculture” raised.

But this just addresses the inevitable failures of interactive complexity and interacting failures. When we have a strategic attack, a deliberate attempt to disrupt the system or gain unauthorized control of it, complexity and coupling again play a role. Let’s say a Microsoft designer adds a new feature, such as a Web browser or a music player, to an operating system such as Windows. Rather than making it an add-on or a plug-in, he integrates it tightly into the operating system, redesigning it to some extent. The designer might claim that the tight integration is needed for efficiency, but the federal court rejected that claim when Microsoft made it. More likely, the integration was designed to achieve market share: the desired feature is difficult or impossible to uninstall, so it becomes, by default, the most likely of its kind to be used. The unit becomes more complex and more tightly integrated. Complex interactions with the potential for failure occur when software is integrated with the operating system rather than offered as an add-on.

Tight integration means that a flaw that a malicious hacker or terrorist might discover and exploit will have much greater consequences than it would have in a loosely coupled or loosely integrated system. It will cause other features with which it is integrated to fail; for example, the device controlling pressures or temperatures in a plant can be rendered inoperative. Or the cascading failures occasioned by tight integration may allow unauthorized control or invasion by malware, as the GAO feared could happen to power plants.

The alternative is to use a modular design. The new feature can be plugged in or not, or a better version of the feature from another vendor can be substituted. (Modular designs are similar to the principles of networks we will examine in the last chapter. Networks of small firms, for example, are instances of the modular designs I am advocating here.) Microsoft integrated its version of an Internet browser (Internet Explorer) into its Windows operating system, claiming that it was tightly integrated and could not be removed, so customers buying Windows had to pay for it and could not ask for another browser instead. Those suing Microsoft for restraint of trade demonstrated that the browser could have been designed so that removal was easy if the customer did not want it. Thus, the suit argued, the tight integration was not necessary but was deliberate. (The suit was successful, but the remedy proposed—breaking up Microsoft—was overturned by an appeals court that required only minor changes in Microsoft’s behavior.)
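A sketch of what the modular alternative looks like in code, under obvious simplifications and with invented names: the core system defines a small interface, and any vendor’s browser that satisfies it can be registered, replaced, or removed without touching the core.

```python
from typing import Optional, Protocol

class Browser(Protocol):
    """The only thing the core needs to know about a browser."""
    def fetch(self, url: str) -> str: ...

class VendorABrowser:
    def fetch(self, url: str) -> str:
        return f"Vendor A rendering of {url}"

class VendorBBrowser:
    def fetch(self, url: str) -> str:
        return f"Vendor B rendering of {url}"

class OperatingSystemCore:
    """The core never hard-wires a particular browser; one is plugged in and can be swapped."""
    def __init__(self) -> None:
        self._browser: Optional[Browser] = None

    def register_browser(self, browser: Browser) -> None:
        self._browser = browser

    def open_page(self, url: str) -> str:
        if self._browser is None:
            raise RuntimeError("no browser installed")  # the core still runs without one
        return self._browser.fetch(url)

core = OperatingSystemCore()
core.register_browser(VendorABrowser())
print(core.open_page("http://example.org"))
core.register_browser(VendorBBrowser())  # substituting a competitor's module touches nothing else
print(core.open_page("http://example.org"))
```

A flaw in either vendor’s module stays inside that module; the core keeps running, and a different module can be substituted, which is precisely the loose coupling that the tightly integrated design gives up.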

An example of Microsoft’s tight integration is SQL Server 2005, a relational database management system (not the same as the physical servers on the Internet). Instead of following the industry practice of selling components for data warehousing and business intelligence separately, Microsoft integrated them into its server. (Microsoft has many competitors in the data-warehousing field.) This resulted in a “radically lower price structure” but, in the eyes of some, created new dangers. A system architect commented as follows:

 

“My concern lies in Microsoft’s tendency to lock in the customer. I know that this is just how business works and Microsoft is not doing anything that other businesses aren’t,” he said. “What I do have a problem with is the direction in which such integration leads: potential difficulty to integrate a third party product into the mix and a software object of such size and complexity that change becomes more difficult and slower.” “It seems that Microsoft stands alone in not understanding that tight integration of more and more features simply provides a greater vector for system failure, which is not easily fixed,” he concluded. (Bekker 2005)

 

His comments are supported by the troubles Microsoft had with an earlier release of its server. One year after instituting its Trustworthy Computing program, it was hit by the Slammer worm, and its advice to “keep your system up-to-date with patches” was not even followed by Microsoft itself: an internal company network was hit, even though a patch had been available for six months. The worm crashed servers, clogged the Internet, and disabled a nuclear power plant’s safety monitoring system for hours, and it took Microsoft two days to get out from under it. The company acknowledged that “the patch management process is too complex.” (Staff 2003) The failure of the server software brought down even more programs, such as a version of the SQL server packaged with Microsoft Visual Studio, and probably the data-warehousing and business intelligence systems that were integrated into SQL Server 2005. The immediate problem was the software, but the tight integration propagated it. (However, the new Vista operating system, expected to be launched in 2007, is built with more small modules, making development and testing easier to manage. Still, it is 40 percent larger than Windows XP, largely because Microsoft appropriately believes it must be compatible with previous releases.) (Lohr and Markoff 2006)

In addition to the problem of unexpected interactions stemming from sheer complexity and tight coupling, errors in design or software are more likely to appear in complex systems than simple ones. System designers or software programmers are less likely to conceive of the unexpected interactions that can occur. Furthermore, complex systems are more difficult to test. It is hard to conceive of all the possible environmental conditions the complex system might meet. The more complex the system—the more different and difficult things it tries to do—the more diverse parts of the organization it is embedded in will be exercised, and these must be tested as well. Individual modules that can be assembled to produce the desired effect are easier to inspect for errors and easier to test. But a modular-based system invites the production of modules by competitors.

Microsoft has favored complex and tightly coupled products, rather than redesigned systems that are simpler and more reliable and secure, for a number of reasons. First, in the initial years of Microsoft’s Windows program, it was used as a stand-alone operating system on company workstations or home computers. But once the Internet took off, everything became interconnected; the unreliability of a bank workstation that had once been isolated from the vast financial markets now spread, as did the consequences of security breaches. Windows-based software is now critical for our financial structure, nuclear plants, industrial control systems, Defense Department data, naval ships, aircraft, and so on. (Almost everything that can kill is run by software, though not necessarily Windows-based: heart pumps, infusion pumps, pacemakers, medication distribution in hospitals; military weapons and fighting platforms; and now the safety of our most widely used weapon, the automobile.)

A second reason Microsoft favors complex and tightly coupled products is that redesigning the system in order to have the new feature without making the system more complex is harder than just changing the design enough to incorporate the new feature. Simple designs for products that do complex things are, perversely, much more difficult to produce than complicated designs. It is necessary to move fast in a rapidly developing marketplace, while basic redesigns that reduce complexity take time. The alternative, providing modular features for plug-ins, is risky. A competitor might provide a superior plug-in, for example, or customers might resist buying the plug-in. Complex, integrated systems promote market control.

But it would be unreasonable to have expected a for-profit company such as Microsoft to have increased its difficulty of design and production, delayed introducing new features when competitors might get ahead of it, and invited competitors to plug into its Windows operating system and sell features Microsoft could otherwise sell or had to provide for free. The firm would lose its market dominance and thus profits. The cost of such tactics—some unreliability and a substantial loss of security for users—has been quite small for the company. Market share rose, as did profits. Even its illegal predatory behavior against competitors cost it only a little in fines and agreements to desist.

The security problem extends to the servers that match addresses to destinations and show the quickest route. Competition in the server field is substantial, and here reliability (and perhaps security, but that is not clear) is a key competitive factor. This competition and emphasis on reliability may have a great deal to do with the reliability of the Internet. The initial servers were built by the government and nonprofit organizations such as universities, and reliability and security were key goals. The introduction of private, for-profit firms into the server market occurred in the late 1990s, and they may have had to put reliability and security as top goals to compete with existing servers.
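The address-matching half of that job is the familiar name lookup, which any machine on the Internet performs constantly. A minimal sketch using Python’s standard library, with an illustrative hostname:

```python
import socket

# Ask the domain name system to match a human-readable name to numeric addresses.
hostname = "www.example.org"  # illustrative; any reachable name will do
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP):
    print(hostname, "->", sockaddr[0])  # an IPv4 or IPv6 address the packets will be sent to
```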

COPING WITH MONOPOLY POWER

Since the beginnings of capitalism in the United States, customers have tried to force vendors to pay for the costs of faulty products, but even in our litigious society, where customers are free to sue, the courts have generally ruled that the buyer must beware. We have seen how little the fines are when chemical companies break laws and have accidents that kill and pollute, and how little it costs the electric power companies (two to three days’ revenue) to avoid maintenance, upgrading, and staffing expenses, while the cost to the rest of society runs five to ten billion dollars in an outage. Unfortunately, the reliability and security failures of Windows affect critical parts of our infrastructure, making us more vulnerable to industrial and terrorist disasters. We are not dealing with a product such as television sets, where product safety and security are barely an issue. Many critical parts of our infrastructure had nowhere else to go to get the immense advantages of computers and the Internet other than to Windows. Windows is not simply a consumer product, used for spreadsheets, e-mail, and Web surfing. It is found in all organizations in our critical infrastructure, and its vulnerabilities are frequently not walled off from the critical functions. It is just too valuable!

Even the U.S. Air Force does not have the resources to build a completely new, bug-free operating system, a word-processing program, graphic interfaces, and so on, so it has to use Microsoft products. But these are not reliable enough for the stringent demands of military aircraft. So the Air Force contract specifies that the software be configured in a more secure state, changing the system settings. It is able to review the highly secret, proprietary source code (but not to change it). (Theoretically, critical military devices are not connected to the public Internet, and thus are not accessible by hackers or terrorists, nor are the SCADA control systems of nuclear, chemical, and industrial plants, but in practice there are a variety of ways in which they inadvertently get connected or have to be connected. Simply accessing the SCADA system from computers in the head office of a power plant that happen to be linked to the World Wide Web will connect them, and we saw at the beginning of this chapter how hackers were able to get into the Defense Department computers and gain root access.)
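What “configuring the software in a more secure state” amounts to can be sketched generically: a required baseline of settings is compared against a machine’s actual configuration, and any deviation is flagged. Every setting name and value below is a hypothetical placeholder, not an actual Air Force (or Microsoft) requirement.

```python
# Hypothetical security baseline; the names and values are placeholders for illustration only.
BASELINE = {
    "autorun_enabled": False,        # removable media must not launch programs automatically
    "guest_account_enabled": False,  # no anonymous logins
    "remote_registry_enabled": False,
    "unsigned_drivers_allowed": False,
    "min_password_length": 12,
}

def audit(actual_settings: dict) -> list:
    """Return the settings that do not meet the baseline."""
    findings = []
    for name, required in BASELINE.items():
        actual = actual_settings.get(name)
        if isinstance(required, bool):
            if actual is not required:
                findings.append(f"{name}: required {required}, found {actual}")
        elif actual is None or actual < required:
            findings.append(f"{name}: required at least {required}, found {actual}")
    return findings

# A machine with hypothetical vendor defaults, before hardening:
shipped = {"autorun_enabled": True, "guest_account_enabled": False,
           "remote_registry_enabled": True, "min_password_length": 8}
for finding in audit(shipped):
    print("FAIL", finding)
```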

The same problem confronted the Department of Homeland Security when it sought to establish a common computing environment for its twenty-two formerly independent agencies. It signed a $90 million enterprise software deal with Microsoft for server and desktop software for approximately 140,000 users. (Ironically, three weeks later a critical security flaw was discovered affecting nearly every version of the Windows operating system, including Windows Server 2003.) A security expert, the former chief of staff of the President’s Critical Infrastructure Protection Board, said, “The real alternative was to go open-source. But for 22 agencies, an overwhelming majority of which use nothing but Microsoft operating systems, to convert to another platform in an efficient and cost-effective manner would have been hard to accomplish. DHS has neither the time, the money, nor the flexibility for that. Now it is held hostage to the imperfections of Microsoft code-writing.” (Verton 2003)

Not only can Microsoft’s products not be reengineered by users to make them more reliable without the proprietary code, but its near-monopoly power prevents alternative systems from appearing or surviving. But suppose users were spread out over a half dozen different operating systems, and a half dozen word-processing programs and other software attachments. A virus could not disable them all, and disabling one of the six is unlikely to bring down the other five. For competitive reasons, the six systems would have to provide interoperability, just as Microsoft had to make its bit-heavy, clunky Word program for word processing somewhat compatible with the superior WordPerfect program of a competitor. A critical facility such as a nuclear power plant could achieve true redundancy by having three different operating systems and sets of programs available in case one was disabled. It is also quite likely that the six manufacturers would compete on the basis of reliability and security, and not just on the variety of applications, and those most needing security would migrate to one of them. All six would, in time, have high and comparable levels of security in order to remain competitive. But with 95 percent of the market and Windows running on 330 million personal computers worldwide, Microsoft has little incentive to emphasize security and reliability.

Though the military is content to receive modified Windows programs to make them safer and more secure, the Commonwealth of Massachusetts has taken a different tack. In 2005, it standardized desktop applications on OpenDocument, an open document format that Microsoft Office does not support. About 50,000 desktop PCs will be required at least to save their documents in the open format, and while they can continue to use their current version of Microsoft Office, they cannot purchase Office 12, which is due to be released in 2006. OpenDocument-based products will cost the state about $5 million, in contrast to the $50 million for Office 12. Microsoft could still compete with others for a program that operates on this open platform, but only if it supported the OpenDocument format, which it has so far refused to do. If other states adopted this tactic, and especially if the enormous U.S. government agencies did so, Microsoft’s monopoly of this biggest part of desktop software would be broken. In Massachusetts, a department might choose to use word-processing products such as OpenOffice and variants of it from companies including Sun Microsystems, IBM, and Novell. (LaMonica 2005) It is an example of how large customers could bring about deconcentration in an industry.

Microsoft is doing more in the security area today than it has in the past, but this has largely been in the form of patches to correct the flaws—and it appears that each patch becomes a target for a hacker, and the vulnerability returns. Programmers argue that making Windows more secure would require a wholesale change that builds in security from the beginning through rigorous testing and verification. But this could have the consequence of making 95 percent of the computers in the world gradually obsolete. Finally, hackers concentrate on Microsoft because it is the dominant system. If there were four, five, or six systems sharing the market, their efforts would have to be spent on four, five, or six systems and thus would be less intense on any one.

Competition among browsers may promote security. Users of Microsoft’s Internet Explorer can be as much as twenty-one times more likely to end up with a spyware-infected PC than people who go online with Mozilla’s Firefox browser, according to a study in early 2006. (Kelzer 2006) There is also a vast difference between those two browsers in the time that they remain unpatched after a vulnerability is found. For 54 percent of 2004, a worm or virus “in the wild” was exploiting one of the unpatched vulnerabilities in Internet Explorer; the comparable figure for Mozilla’s Firefox browser was only 15 percent. (Krebs 2006)

Compared with Microsoft’s products, the other operating systems, such as Linux or Apple’s Macintosh system, appear to be more reliable (though the evidence is mostly anecdotal), but not necessarily more secure. Security is a more recent demand than reliability, and perhaps the other systems, being less widespread, are not used in settings where security is paramount. There is still no great commercial advantage, as distinct from terrorist prevention, to offering secure systems, despite the reportedly enormous costs of thefts, and there may not be until there is a spectacular catastrophe or the government becomes sufficiently fearful of the damage that malicious hackers or terrorists can do.

One thing the government could do would be to make the suppliers of operating systems and software liable for security failures. This would happen if the courts ruled in favor of plaintiffs and if the Justice Department and its head, the attorney general, supported the courts’ findings as they were reviewed by higher courts on up to the Supreme Court. This is not likely to happen. Both the Clinton and the Bush administrations have consistently rejected the notion of regulating vendors or users. The Clinton administration’s information security plan stated that the president and Congress “cannot and should not dictate solutions for private-sector systems.” The Bush administration’s cyber-security plan of 2003 states that “government regulation will not become a primary means of securing cyberspace” and that “the market itself is expected to provide the major impetus to improve cyber security.” (Vadis 2004, 112) We are not likely to get liability for software or operating system failures anytime soon, even if the issue is national security. However, findings of liability are starting to appear in a small way at the state and local level. If this trend grew, it would catch the attention of all producers, but it would also impose very significant costs that would probably be passed through to consumers rather than reducing the large profit margins of producers such as Microsoft.

A draft paper by two members of the AEI-Brookings Joint Center for Regulatory Studies offers some excellent suggestions regarding mandatory reporting of vulnerabilities and liability for them. A recent California law requires mandatory incident reporting, and, modeled on this, they suggest a federal program that would not release information about individual victims but would report only aggregated data. Of course these economists point out the drawbacks, principally that “reporting informs consumers, businesses, and malicious hackers” of the vulnerability. (Hahn and Layne-Farrar 2006, 56) Regulating cyber weapons is feasible from a legal standpoint, since there are already laws banning worms, viruses, cell phone scanners, and software that circumvents copyright protection. (57) Since the federal government is a key buyer of security software, consuming around 42 percent of all software and computing services as measured by revenue, the government could create a market for secure and reliable software if it insisted on it. The Department of Defense already requires all software to be tested for security by the National Security Agency. But when government agencies were rated for their overall information technology security, over half received either a D or an F. The Department of Homeland Security, which has a division devoted to monitoring cyber security, received an F. So did the Justice Department, which is responsible for investigating and prosecuting cyber crime. (60)

Insurance, the authors point out, could bridge federal policy and private-sector initiatives. Automobile drivers are required to obtain insurance before they drive, and the insurance is privately supplied. Cyber insurance is emerging as a viable option, the authors hold, and the insurance companies charge according to the degree of protection the company installs. “This route has the advantage of being market driven, so the price of additional security will be weighed against the benefits that the added security has to offer.” (62) There are two rating firms at present that act as third-party evaluators of a company’s security efforts, but they apparently have little business. This would change if cyber-security policies were mandated, just as driver’s insurance is.

OPEN SOURCE

Is open-source software any more secure than proprietary software? A debate about this has been raging for years. Microsoft argues that its system is just as secure as those that use open-source software, such as Linux and Apache; others dispute that and add that, in any case, open systems fix flaws much more quickly. One security analyst argues that the “tight integration” found in Windows is not found in Linux, and this integration increases the number of security exposures in Windows. Furthermore, open and closed systems handle flaws differently. Most reports of flaws in Windows come from antivirus firms or from hackers. Exploitation of the flaws occurs “in the wild” (on the Net) and then we depend on commercial antivirus updates, followed by an operating system patch that Microsoft sends out. (Bill Gates is noted for blaming the victim, saying that it is the responsibility of users to keep up to date with the patches, rather than saying that it is Microsoft’s responsibility to avoid the necessity of patches.) In contrast, university researchers or developers within the open-source community report security flaws more frequently, before they escape into the wild. These sources often provide immediate means to correct the flaw with an “emergency patch,” and then, to take the case of Linux, a permanent change is approved by the inner circle of Linux code writers at the Open Source Development Labs. Open-source developers argue that their system corrects errors more quickly and can use new and more secure techniques. Because they are performing for their peers (other programmers they are directly or indirectly in contact with), they derive intrinsic satisfaction from these efforts. (Staff 2005g)

Some idea of the growing open-source enterprise can be gleaned from the statement of the OpenSSL Project, one of many such projects. It says that the project is a “collaborative effort to develop a robust, commercial grade, full featured” toolkit implementing various protocols, including a cryptography library, managed by a worldwide community of volunteers who plan, develop, and manage the toolkit and its documentation. The toolkit is licensed so that anyone is free to use it for commercial or noncommercial purposes. The project “is volunteer-driven. We do not have any specific requirements for volunteers other than a strong willingness to really contribute while following the project’s goal.” (OpenSSL 2005)
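The toolkit’s reach can be seen from any ordinary programming environment: Python’s standard ssl module, for example, is built on OpenSSL. A minimal sketch, assuming network access and using an illustrative hostname:

```python
import socket
import ssl

print(ssl.OPENSSL_VERSION)  # the version of the volunteer-maintained toolkit linked into Python

# Open a certificate-verified TLS connection using OpenSSL's protocol implementation.
hostname = "www.example.org"
context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=hostname) as tls:
        print("negotiated protocol:", tls.version())  # e.g. TLSv1.2 or TLSv1.3
        print("cipher suite:", tls.cipher()[0])       # the suite OpenSSL agreed on with the server
```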

The open-source movement has grown steadily over the years, along with systems such as Linux and Unix that utilize it, to the extent that it may be a threat to the business model of Microsoft. Anything that provides an alternative should improve security; users that require security can make a choice among systems on these grounds. (For a fascinating discussion of the operating superiority and security of the “freeware” Linux software versus Microsoft, see Kuwabara 2003.)

Indeed, Microsoft’s dominance of the operating-system market and software products such as Office and its Word program may not last long. People are talking about “thin client” systems where a fairly simple device connects users to the Internet and to a virtual operating system on the Internet itself. For a small fee, users could rent much of what is now a desktop computer from the Internet and rent whatever programs they desire—the module option—which are then installed on the virtual computer. Or, with advertising, there may be no rental fee. These innovations may be shaking up Microsoft; the “Internet services model” is on its way, with start-ups and established companies such as IBM and Google offering free or low-cost programs that do word processing, spreadsheets, and other basic functions of the desktop computer.

After losing top executives to Google and other companies, Microsoft bought a company in 2005 and thus acquired a brilliant programmer, Ray Ozzie, who has a distinct “unbundling” program in mind. The company could then offer online versions of its Office products, supported by advertising or fees.

It may be important that Ozzie, Microsoft’s chief technical officer, is seconding the criticisms I have been reporting and sounding the programmer’s mantra. “Complexity kills,” he wrote in a famous memo. “It sucks the life out of developers, it makes products difficult to plan, build and test, it introduces security challenges, and it causes end-user and administrator frustration.” He even sees “Internet services” as offering a more open, competitive model. Steve Lohr, the New York Times journalist, says that Ozzie “speaks of a thriving ‘ecosystem’ of open competition in which developers and customers have many choices and in which Microsoft’s future is not in crushing rivals but in becoming an attractive choice.” Even open source is embraced. “I consider open-source software to be part of the environment, like the Internet,” the Microsoft chief technical officer said. “It’s not the enemy and it’s not going to go away. It’s great for developers.” (Lohr 2005)

The issue of market domination, and its implications for security from both industrial and terrorist disasters, may not go away even if Microsoft loses its dominance and accepts “many choices,” although that would certainly help. But we should be cautious. A company like Google, which continues to dominate the Web search market, with its extraordinary financial returns, may be able to launch a virtual computer with integrated features that is so successful that it dominates the market the way Microsoft now does. It then might not have to pay attention to security. Before this happens, it would be wise to make security flaws subject to liability suits.

Or, we could break up Microsoft into competing firms. Thomas Penfield Jackson, the judge in the Microsoft antitrust suit, recommended breaking up Microsoft into two companies: a Windows operating-system firm and an applications-software firm. (This was rejected by an appeals court.) But this would still leave Microsoft’s Windows monopoly intact. A group of very prominent economists recommended that Microsoft be split into three Windows operating-system companies, with a fourth as the software-applications firm. This would introduce immediate price competition in operating systems, and the three mini-Windows companies would have very strong incentives to keep their products compatible with one another while competing on reliability and security. Breaking up Microsoft would be very tricky, but decades ago the Bell system was broken up by the government, and the results have been very positive. (Miller 2000)

THE INTERNET

Now we will leave the “cars” and turn to the “highway.” The world’s largest network is unique in organizational terms. It runs virtually without a central authority governing it; it is based on rules, or protocols, that are voluntarily accepted and open to modification; and while initially financed and designed by the U.S. government, it is now a mixture of public and private organizations, including foreign ones, with a minimum of central direction. Our interest in it is twofold. First, it is one example (we shall consider three others in the concluding chapter) of the possibility of a vast, decentralized, efficient, and reliable system that could be a model for some of the other systems in our critical infrastructure. Second, it illustrates the vulnerability of such systems to the economic dynamics of free-market capitalism, which tend to centralize a decentralized system. These dynamics can make it more vulnerable to errors and deliberate attacks.

Before the Internet came along (a primitive version was established in the late 1960s), we relied on “dedicated” communication systems, such as the postal system, the telephone, and a few specialized devices such as stock tickers. These go from point to point. Dedicated systems are cheap and efficient. But the inventors of the Internet, working out of the Defense Department, chose a more complicated, interoperable system. Instead of limiting a computer at the University of Chicago to a single connection via a telephone line to a computer at Los Alamos, the computer could connect to a network that might have several computers on it. Furthermore, the connection could allow a variety of transactions, such as file sharing, e-mail, or one computer taking command of another. Establishing a platform from which more than one program could be launched of course made the system more complex, something we should beware of. But it made the Internet open to innovations and new uses, including Web pages and now video and telephony, which the original inventors did not conceive of. In contrast, AT&T fought vigorously to protect its monopoly over the dedicated telephone system when the Carterfone and other innovative devices were proposed to run on its platform.

Fortunately, AT&T lost that fight. In 1968, in the Carterfone decision, the FCC ruled that the company could not discriminate against innovators that wanted to improve the phone service; in time, this decision caused AT&T’s monopoly position to erode, and we had an explosion of innovations. When the Internet came along, the FCC again ruled that access to the telephone wires and the fiber-optic cables had to be open to all, for a reasonable fee. They were defined as common carriers, like highways or rivers. This led to the creation of thousands of Internet service providers (ISPs). You could choose to sign up with AOL or whomever, or use your university’s or business’s ISP to get on the Net and go anywhere. From the beginning, the Internet designers had developed a system that would be open to innovation, but they needed that ruling declaring open access. (It is now being rescinded, unfortunately, as we shall see.)

This interoperability is possible because all the computers follow agreed-upon rules, or protocols, that specify how data will be transferred. For example, the format of a Web page is specified by the hypertext markup language (HTML); the order in which bits of the message are sent is specified by the hypertext transfer protocol (HTTP); and there are many other protocols, most basically the transmission control protocol and the Internet protocol, together known as TCP/IP.
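
To make the layering concrete, here is a minimal Python sketch (the host example.com is only a placeholder; any Web server would do). It opens an ordinary TCP connection, which the operating system handles according to TCP/IP, and then sends a plain-text HTTP request over that connection; the page that comes back is formatted in HTML. Nothing in it assumes any particular vendor’s hardware, which is the point of shared protocols.

import socket

# Open a TCP connection; TCP/IP handles packaging, ordering, and delivering the bytes.
sock = socket.create_connection(("example.com", 80))

# HTTP is just an agreed-upon text format layered on top of that connection.
request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
sock.sendall(request)

# Read the reply: an HTTP status line and headers, followed by an HTML page.
response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()

print(response.split(b"\r\n", 1)[0])  # e.g., b'HTTP/1.1 200 OK'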

The Internet protocol (IP), being minimal and general-purpose, can work with an enormous diversity of networking technologies. For example, after IP was universally adopted, a new technology came along, the local area network (LAN), which eventually ran even over wireless links; IP worked fine with it. IP is being adapted to accommodate ever newer technologies, such as asynchronous transfer mode. The fact that all of these uses, and more to come, such as telephony, can be carried on one platform has made the growth of the Internet and the World Wide Web possible.

Separate dedicated connections for each use would probably be cheaper and more reliable, just as dedicated machines in a mass-production industrial plant are cheaper and more reliable than multipurpose machines that have to be reset for each operation. But if, as we will see, you can find a way to automate everything running on the platform, and can build in a lot of redundancy to protect against inevitable failures, then the system with expandable platforms and interoperability is far more efficient in the long run. New modes of connection, replacing the slow and limited capacity of telephone lines, can be accommodated, such as fiber-optic cable, or wireless radio waves as in a LAN in your house, company, or even a town. The electric power grid, which goes to every home, may come to be utilized; and links to satellites are appearing. These connecting modes can be provided by for-profit organizations, but they have to abide by the protocols that those government employees developed decades ago, or no one would use them. However, for-profit firms are finding backdoors to this open-systems approach in order to control content and usage and increase the charges for using their “pipes” (fiber-optic cables, telephone lines, etc.).

A LITTLE HISTORY

How did the protocols now in use come about? Initially, several commercial firms were developing and selling protocols, including IBM, Digital, and Xerox. But the government was also developing them, and it had a head start, since the initial Internet was financed by the Defense Department (with the National Science Foundation coming in later). The government protocols were free, and small groups and associations could modify them. The free protocols encouraged wide use, the wide use brought in a range of improvements and modifications, and these won out over the commercial ones. Furthermore, they were made to be compatible with any vendor’s hardware. Interoperability was to be the key. (Gillett and Kapor 1997) A privatized Internet would be unlikely to encourage an IBM customer to use software or hardware from Digital; IBM would insist that its own products be used and none other, just as the Bell system had tried to do and Microsoft after that. The decision to establish rules that would allow anyone’s product to work, provided that it followed the protocols that were offered free, led to what is called an open-standards source of protocols. Open standards means nonproprietary.

While the government’s role in establishing the Internet was crucial, economic competition from private firms became the engine of the Internet after 1995, when the National Science Foundation allowed commercial firms to establish themselves as ISPs, to the outrage of university users who did not think the Net should be open to commercial use. But commercial ISPs brought the cost of access down. For example, junior colleges and other schools could then afford to participate. Soon the prices were low enough for users in their homes. Competition among ISPs has been intense, and the cost of access to the Net has fallen greatly. (Gillett and Kapor 1997)

RELIABILITY OF THE INTERNET

How could such a vast network function with only small, sporadic failures that are more annoying than consequential? First, there are very few human beings involved in managing its operations—far fewer than in the power grid—in part because it is electronic, and electronic devices are more reliable than electromechanical or mechanical devices and require fewer human monitors. Human error accounts for many software failures, but these do not involve the Net. Your browser may crash, or hackers may gain access to your financial data, but the Net itself goes on. Second, there is a great deal of redundancy designed into the Internet. We will examine some of this, since it offers lessons for other systems, some of which are applicable to our critical infrastructure. Third, its structure is remarkably decentralized; it is probably the most decentralized system on earth run by humans. Decentralization is easy because of its electronic character, but mechanical and human systems can also be quite decentralized, as we shall see in the next chapter.

Why is the Internet so robust with regard to errors? Not only is it electronic, but it has a great deal of redundancy built into it. Messages are broken up into “packets,” each of which carries the destination address and information about where it fits among the other packets that make up the message. The Net achieves redundancy through two methods that are quite different from what we normally think of as redundancy. The normal notion of redundancy in a nuclear power plant, for instance, means that there is a backup, such as a diesel generator, that can be brought into play if the normal source of power (the power from the nuclear plant itself) fails. But the Internet has additional sources, called replacement or link redundancy and mirroring. (These are complicated, and are briefly discussed in Appendix A.) Most systems have many redundancies, either designed into them (the preferred form) or added on (a less reliable form, for a number of reasons). But the Net’s redundancy also stems from a peculiar network characteristic: it is a “scale-free” system, a characteristic shared by some other networks.
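
The packet idea can be illustrated with a toy Python sketch; the field names and the destination address below are made up for illustration, not drawn from any real protocol. A message is cut into numbered pieces, the pieces may travel by different paths and arrive in any order, and the numbering lets the receiver put the original back together.

import random

DESTINATION = "203.0.113.7"  # a made-up destination address, for illustration only

def to_packets(message, size=8):
    # Cut the message into fixed-size pieces, each labeled with its place in the sequence.
    total = (len(message) + size - 1) // size
    return [{"dest": DESTINATION, "seq": i, "total": total,
             "payload": message[i * size:(i + 1) * size]}
            for i in range(total)]

def reassemble(packets):
    # The receiver sorts by sequence number and glues the payloads back together.
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets(b"Messages are broken up into packets.")
random.shuffle(packets)  # packets may arrive out of order, over different routes
assert reassemble(packets) == b"Messages are broken up into packets."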

Roughly, this means that nodes (computers and ISPs) can be added to the network without increasing the levels of supervisory control and thus hierarchy. It does not need an additional level of control when another thousand nodes are formed, because there are an enormous number of different paths over which a message can travel. (To return to our highway analogy, if one road is blocked there are other roads you can use. In actual highways, however, there is a hierarchy; some roads are interstate highways, others country lanes. In the Internet, all roads are equally fast if not congested, so there are many more choices. [This is called net neutrality.] In addition, the speed at which the message can travel is almost instantaneous.) A very high number of these paths can be disabled, but the message will still get through.

Unfortunately, physicists observe, there are a very small number of nodes that have millions of links to other nodes, and if these are disabled, the whole network will crash. So while it is safe from random attacks, errors, and electronic failures, which accounts for its reliability, it remains vulnerable to deliberate attack on a few critical nodes. “Such error tolerance and attack vulnerability are generic properties of communication networks.” (Albert, Jeong, and Barabasi 2000) However, the few critical nodes, called hubs, are currently highly defended by the voluntary groups that oversee the Net. As we shall see, this could change.
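
The physicists’ finding can be reproduced in a few lines. The sketch below, which assumes the third-party networkx library and uses a randomly generated scale-free graph rather than real Internet data, compares removing one hundred nodes at random with removing the one hundred best-connected hubs; the random failures do far less damage to the largest connected piece of the network than the targeted attack does.

import random
import networkx as nx

def giant_share(g):
    # Fraction of the surviving nodes that remain in the largest connected piece.
    return len(max(nx.connected_components(g), key=len)) / g.number_of_nodes()

random.seed(1)
g = nx.barabasi_albert_graph(n=2000, m=2, seed=1)  # a few hubs end up with most of the links

# Random failure: one hundred nodes picked at random go down.
g_random = g.copy()
g_random.remove_nodes_from(random.sample(list(g_random.nodes()), 100))

# Deliberate attack: the one hundred best-connected hubs go down.
g_attack = g.copy()
hubs = sorted(g_attack.degree, key=lambda node_degree: node_degree[1], reverse=True)[:100]
g_attack.remove_nodes_from([node for node, _ in hubs])

print("largest connected piece after random failures:", round(giant_share(g_random), 2))
print("largest connected piece after attacking hubs: ", round(giant_share(g_attack), 2))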

I have gone into this arcane and complex structure because it allows me to make a very important point. A scale-free system is able to increase in size, in this case increase the number of nodes or points of connection, without changing its structure. This means that such networks need not be subject to the perils of concentration that have so concerned us in this book. If it were possible to reduce the vast number of ISPs (concentrating the access points of your computer to the Internet), tell you which roads you could use (limiting access to sites), and charge you more for using a faster road, the centralization of the Net would increase, and there would be nodes whose disruption by accident or intent would cause great rather than trivial damage. To see how this works requires a closer examination of the structure of the Internet and the World Wide Web.

The World Wide Web, which works through the physical network of the Internet, is also a scale-free network, in that a few very central Web sites link to many other sites, while the vast majority of sites link to only a few. But for your computer to connect to a site on the Internet (such as www.wikipedia.org), it must know where the computer(s) that make up Wikipedia are located on the Internet. Every computer on the Internet has a number made up of four “octets” (an IP number, such as 69.73.163.167) that routers on the physical network use to move traffic from one computer to another. So in addition to the physical hubs that connect computers on the Internet, there is a group of servers that tell computers how to find each other using the domain name system (DNS). The DNS translates familiar names such as wikipedia.org into numbers that the routers can use to send information between those computers and your own.
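
In Python, that translation is a single request to the resolver built into the operating system; the short sketch below simply asks the domain name system for the address of www.wikipedia.org (the number it returns will differ from the illustrative one above and will vary over time).

import socket

# Ask the domain name system to translate a familiar name into an IP number.
ip = socket.gethostbyname("www.wikipedia.org")
print(ip)             # a dotted address; the exact value varies
print(ip.split("."))  # its four "octets"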

There are essentially only four levels of hierarchy in the domain name system, and thus in the Internet as a whole. At the top level are the root servers. There are thirteen of these for the whole world (although for safety and reliability reasons they are duplicated in different physical locations). We might consider this level one in the four-level informational hierarchy of the Internet. If a computer wants to know how to find www.wikipedia.org, it first goes to one of these servers to ask where to find a server in charge of addresses that end in “.org.” Level two consists of thousands of DNS servers, which are clients of the level-one servers, the root servers. These clients are themselves servers for more clients below them. The thousands of servers in level two are often grouped into “server farms” having thousands of processors each. (The size of the farm, or data center, is limited only by its ability to remove the heat generated by each processor in each server. They are generally large, windowless, air-conditioned buildings in major cities.) These servers may not know the address of www.wikipedia.org, but they do know which server(s) are in charge of the wikipedia.org domain. Level three consists of servers that know the addresses of the computer(s) that make up www.wikipedia.org. These computers are located at the fourth level of the hierarchy. Once your computer knows the address of the Wikipedia servers at level four, it can contact those servers directly. The whole process is shortened by caching: if another computer at your ISP recently asked for the address of www.wikipedia.org, your ISP’s server would already know where to find it without working its way through the entire hierarchy.
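
The four-level walk-down and the caching shortcut can be mimicked with a toy Python lookup table; every server name and the address below are hypothetical stand-ins, not real DNS data.

# Toy stand-ins for the real hierarchy (all names and the address are hypothetical).
ROOT_SERVERS = {"org": "tld-server-for-org"}                                       # level 1: root servers
TLD_SERVERS = {"tld-server-for-org": {"wikipedia.org": "wikipedia-name-server"}}   # level 2: .org servers
AUTHORITATIVE = {"wikipedia-name-server": {"www.wikipedia.org": "203.0.113.7"}}    # level 3: the domain's servers

cache = {}  # what your ISP's resolver remembers from recent lookups

def resolve(name):
    if name in cache:                 # the caching shortcut: skip the whole hierarchy
        return cache[name]
    tld = name.rsplit(".", 1)[-1]     # "org"
    domain = name.split(".", 1)[-1]   # "wikipedia.org"
    tld_server = ROOT_SERVERS[tld]                  # ask a root server who handles .org
    name_server = TLD_SERVERS[tld_server][domain]   # ask the .org server who handles wikipedia.org
    ip = AUTHORITATIVE[name_server][name]           # ask that server for the host's address (level 4)
    cache[name] = ip
    return ip

print(resolve("www.wikipedia.org"))  # walks the hierarchy the first time
print(resolve("www.wikipedia.org"))  # answered from the cache the second time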

There is no central body running this vast system. There are, however, some crucial organizations that “rule the root.” One is something called ICANN, which establishes domain names (such as .com, .edu, or .net), Internet protocol addresses, and protocol port and parameter numbers. ICANN is a nonprofit organization in the United States, and its control is being contested by other nations that want to be able to control their own ISPs (of which more later). Other organizations coordinate the routing of traffic on the Net and set service-quality standards, protocol standards, and unique identifiers; these functions are handled by bodies such as the Internet Engineering Task Force (IETF).

A number of committees deal with proposed changes in standards, and an Internet Architecture Board ratifies them. These committees and boards are open to public participation and review. The governance of the Internet, and through it the Web, is light, nonproprietary, tiny in size, very expert, and largely voluntary. It is a remarkable achievement, considering the vastness of the system and its importance for modern society.

THREATS TO THE INTERNET

The main threat to the Internet is not that it might be shut down, but that its openness, innovativeness, and nonproprietary nature could be compromised. Were it to pass into proprietary hands, the number of ISPs could decline drastically, not only restricting its openness but also creating larger, concentrated targets for thieves, terrorists, and those who are simply malevolent.

An amusing, though trivial, instance of the threat to the openness of the Internet came from U.S. Senator Orrin Hatch (R-UT), who suggested in 2003 “that people who download copyright materials from the Internet should have their computers automatically destroyed,” according to the news story. He was concerned about downloading copyrighted music. But it was pointed out that he was himself using copyrighted software for which he had no license on his official Web site. (Kahney 2003) Less drastic efforts than smoking computers have been utilized to prevent pirating of music and videos, but with limited success. However, pirating of copyrighted materials is a small problem compared to the controversy regarding access to the “pipes,” the fiber-optic cables and the high-speed phone lines that most of us use.

The problem begins with the fiber-optic cables that carry more and more of Internet traffic. They are privately owned and built by cable and other telecommunications firms. From the beginning, the courts ruled they were common carriers, that is, like highways and ship channels that everyone has access to for a reasonable fee. In the late 1990s, firms overbuilt these pipes to such an extent that it depressed the dot-com sector, helping to bring about a serious downturn for Silicon Valley and other telecommunications sites. With so much excess capacity, the cable companies could only charge small fees for leasing space in the pipes, called bandwidth. But as traffic increased with more Internet users and especially with the downloading of the large files required for music and videos, the pipes became crowded and service was supposedly threatened. (Another interpretation is that once people were enticed onto the Net, larger fees could be charged to provide high revenue at little additional expense. There is still a surplus of bandwidth.)

In a crucial decision, the FCC ruled that the pipes were no longer common carriers but proprietary; they would be governed by telecommunications rules, which themselves were being relaxed. Since the pipes are no longer common carriers, bound to admit any user who paid a reasonable fee, the owners could keep some ISPs from using them if they wished, and thus potentially control content by denying use. The owners argued that this was desirable; they could block pornography and spam. But they could also block sites with political content they disapproved of. They could block competitors who offer music, videos, e-mail, shopping, or whatever else competes with their own offerings. Congress revisited the issue in 2006, with firms that are heavy users of the Internet—such as Google, Amazon, and Yahoo!—wanting continued inexpensive access, on one side. On the other side are the network operators, such as Verizon and the cable companies. They want to be able to block sites. They also want to be able to charge higher fees for heavy usage (not an unreasonable demand). The fear is that the carriers will block consumers’ access to popular sites or degrade the delivery of Web pages whose providers don’t pay extra. Google’s home page, for example, “might load at a creep, while a search engine backed by the network company would zip along,” according to a Business Week story in 2005. It stated:

 

[R]ecent court and regulatory rulings have given the carriers more room to discriminate. In June, the U.S. Supreme Court ruled that cable broadband services were almost free of regulation. Two months later, the FCC granted the same liberty to the Bells’ broadband services. The FCC made two newly merged megaphone companies—created from AT&T and SBC and Verizon and MCI—vow to keep their Internet lines open to all for the next two years. But FCC Chairman Kevin J. Martin favors a light regulatory touch until he sees widespread abuse by the networks. (Yang 2005)

 

But the issue of volume pricing is also not settled. It is estimated that file-sharers swapping music and movies account for 60 percent of North American residential broadband use. As the Net predictably slows with congestion, rather than add more fiber-optic cables, the carriers may prefer to charge heavy users more, opening the door to slowing down the pages of competitors. A Korean phone giant found that 5 percent of its users accounted for half of its traffic. (Yang 2005) Just as adding more transmission lines or upgrading them has not been in the interests of the newly consolidated electric power industry, it may not be in the interests of broadband owners to increase supply when they can get higher revenues without the capital costs.

“The end of the Net as we know it” has long been a fear. A variety of nonprofit, commercial, and government organizations governs the Internet, but its astounding growth has made it a prime commercial target. In 2003, VeriSign, the company that operates two of the thirteen root DNS servers and the .com and .net registries (which connect Web browsers with sites fourteen billion times a day!), argued that the infrastructure should be commercialized by getting rid of the nonprofits. The head of VeriSign told reporters: “It’s time for the internet infrastructure to go commercial. . . . it’s time to pull the root servers away from volunteers who run them out of a university or lab. . . . That’s going to be an unpopular decision.” (Murphy 2003) VeriSign did not succeed, but the threat remains.

In 2003, Michael J. Copps, a Federal Communications Commissioner, expressed his fears about the centralization taking place in telecommunications:

 

Once upon a time, cable was going to save us from too much network control of the broadcast media. Today 90 percent of the Top 50 cable channels are controlled by the same corporations that own the TV networks and the huge cable systems. Then we were told not to worry because the Internet would be the ultimate protection. We looked at the top 20 news sites on the Internet. Guess who controls most of them? The same big companies that provide us with our TV and newspaper news. Some protection. (Copps 2003)

 

CONCLUSIONS

The reliability of the devices that access the Internet is in question, as we saw earlier, and much damage could be done to the nation because of their lack of security. Military establishments have been broken into and their passwords taken, making the military vulnerable to foreign countries and terrorist groups; the control systems of electric power plants and transmission facilities have been penetrated (though as yet without any damage that has been publicly admitted); nuclear plants have been disabled for hours; the FBI’s computer system was temporarily rendered almost inoperable; and the amount of criminal activity involving money and industrial secrets is said to be extensive. Most of the vital things in our nation’s critical infrastructure now depend on Internet security, down to the automated hospital systems for matching prescriptions with patients and delivering them. Anything that skimps on reliability or security in this vast system endangers us, and there are two things that allow skimping: monopoly power by a for-profit enterprise, and concentration of the critical nodes of the Net. The first occurs because reliability and security cost money for the monopolist, and that reduces profits. The second, concentrated nodes, occurs because concentration of ISPs translates into market power, and the consequence of this is a larger and more vulnerable target for terrorists to attack, or for industrial accidents and errors to disrupt.

The main problem identified in this chapter is the hegemony of Microsoft operating systems, whose complexity and proprietary safeguards have created products that lack reliability and security. Of course this is an industry problem, and if a firm like Google were to get 95 percent of the market share and did not have to pay attention to reliability or security because of it, we would be in the same position. But since the industry is dominated by a single firm, Microsoft, it makes little sense to criticize the industry and not the firm that is responsible for the shape of the industry.

But the most basic criticism is of a society that allows such market concentration to exist. We need broad antitrust regulations that say it is not in the public interest to have a single firm dominating a market, because that can lead to monopoly profits and, if the market is critical to our infrastructure, to increased vulnerability. Competition in telecommunications will lead to more innovations and to systems that compete on the basis of reliability and security. Our present judicial ideology is unable to address this issue. After a court ruled that Microsoft had clearly engaged in anticompetitive behavior, Microsoft appealed the decision. An appeals court accepted the ruling of anticompetitive behavior but levied penalties that were trivial and, most important, declined to break up Microsoft in such a way as to encourage competitive operating systems. We can only hope that the development of competing virtual operating systems that may be accessed from the Net might make security a prominent competitive feature. Or that more states and even the federal government might do what Massachusetts has done (to save money) by requiring open sourcing of its operating systems and software.

A correspondent at Microsoft puts it well in a personal communication: “[A]s a society we have created a world where companies routinely under-estimate, and under-invest in, risks, and many systemic risks are externalized in such a way that nobody deals with them. It is a society where technical people are unintentionally complicit in this by providing overly-optimistic analyses, blaming the user, and avoiding taking responsibility for dealing with the real limitations of all systems today.”

The reliability of the Internet itself is not in question. It is a model of a highly reliable, secure, efficient, decentralized system of vast size. It is the security of the devices that use the Net that is the problem at present.

However, the governance of the Internet raises two concerns. First, it is the freedom of access and content that has made this such an extraordinary system in a world where freedoms appear to be diminishing. If AOL or Comcast can restrict your access to sites on the Internet that they have a commercial interest in, or an ideological interest in, the present freedom that we enjoy will be constrained.

Second, and more important for the analysis of this book, the reduction in the number of ISPs that are available, and in the number of sites that can be accessed through those ISPs, would constitute a centralization of what is at present a remarkably decentralized system. The security issue enters here. The consolidation of servers will also make them larger targets for terrorists and hackers and more vulnerable to human and electronic failures, and perhaps even to weather or other natural disasters, though that seems remote. Once again we see that concentration threatens the security of our critical infrastructure. The market for security has failed, or security has not been an urgent enough concern. Government must play a role; as the largest purchaser of software, it could insist that reliability be a condition of purchase. The courts could legitimate suits over security flaws, thereby bringing insurance into play and supplying another incentive for secure products.

The few movements in this direction are concerned only with consumer security, not the security of our critical infrastructure. Still, it is encouraging that the Federal Trade Commission brought unfair-trade-practices actions against Microsoft and Eli Lilly, claiming they had misled consumers with their promises of the security and privacy of customer information. California has also required businesses to disclose computer security breaches that allow unauthorized access to personal information. (Vadis 2004, 109–10) Perhaps, as with the initial consumer-oriented efforts to get the attention of the chemical industry, these efforts, now concerned with competitiveness, freedom of choice, and privacy, will expand into areas where there is the potential for large disasters.
