Chapter 5. Design for Deployment

Changing the Internet is hard but not impossible. To change the Internet, it is necessary to consider the deployment strategy as a principal design consideration. A civil engineer does not just design a bridge; he provides a plan for constructing it. An electronic engineer does not just design a gadget; he provides a plan for manufacturing and testing it as well. Constructing the bridge and manufacturing and testing the gadget are central considerations in the original design. If Internet engineering is to advance, the deployment plan must be a central part of our design.

I call this approach design for deployment. It proceeds in five stages:

  • Objectives: Identify the characteristics of the infrastructure you want to establish.

  • Architecture: Consider the possible architectures that realize that infrastructure. Rank according to the feasibility of deployment.

  • Strategy: Identify potential killer applications that might drive deployment. Rank according to the value delivered to early adopters.

  • Design: Select the architecture that appears to have the best chance of success. Complete the detailed design.

  • Evangelize: Convince others to act.

These stages are the infrastructure equivalent of a marketing plan.

The process is recursive: A large infrastructure change must be broken down into manageable pieces and the principles of design for deployment applied to each constituent piece.

In the remainder of this chapter, we will look at the master deployment plan for the Accountable Web. Then in later chapters, we will look at proposals designed to address specific threats such as spam, phishing, botnets, and so on. By itself, each proposal is comparatively modest, but the cumulative effect is significant for both users and criminals.

Objectives

Our objective is to have it all; we want to make the Internet a safer place without giving anything up. We don’t want to give up the ability to send e-mail to anyone we please. We don’t want to give up the ability to read any public Web site we might choose. We don’t want to give up the ability to create new identities.

As we saw in Chapter 1, “Motive,” pseudonymity, the ability to separate the online and the offline world, is an essential safety control in situations such as online dating. Pseudonymity provides real security value, provided that all parties involved are aware that they are dealing with an online identity and that they should not expect to hold its owner accountable offline. Impersonation, the ability to steal the identity and reputation of another, provides nothing positive. You can be any made-up identity you like on the Internet, but only you can be you and only I can be me.

This means that we can’t expect to achieve what we want using only the traditional computer security paradigm of access control where we build a wall around the assets we want to protect and guard them closely. Instead, we must look to combine the access control approach with the accountability approach.

We have already seen how the problems of spam and telephone fraud are caused by the lack of accountability. An accountability gap is the common feature in every one of the forms of Internet abuse we consider in this book.

To establish accountability controls, we need three components: authentication, accreditation, and consequences.

  • Authentication—To hold people accountable for their actions, we need the ability to make an exclusive claim on a particular identity. This identity might or might not correspond to a real-world identity. We do not need to know a real-world identity to stop offensive blog comments. If, on the other hand, someone wants to run an online shop or bank, his real-world identity and reputation matter a great deal. We cannot allow any party to fraudulently pass himself off as a trusted real-world identity.

  • Accreditation—Having authenticated a claim to an unambiguous Internet identity, we can use statements other parties make concerning the party with that identity. For example, we are more likely to accept e-mail from an authenticated sender if a trusted third party states that it is a legitimate business and has not been observed sending spam.

  • Consequences—For accountability to work, there must be consequences for abuse. These consequences can range from the loss of communication privileges (for example, refusing to accept e-mail from a known spammer) to civil and in some cases criminal penalties.

The accountability approach complements and extends rather than replaces traditional access control.

In traditional access control, we authenticate the person making a request; then we ask, “Is he permitted?” In an accountability scheme, we authenticate the person making the request and ask, “Can he be held accountable?”
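To make the contrast concrete, here is a minimal Python sketch of the two questions side by side. The Identity class, the accreditation label, and the access-control list are invented for illustration; they are not part of any deployed protocol.

from dataclasses import dataclass, field

@dataclass
class Identity:
    name: str                 # an exclusive claim on a particular identifier
    authenticated: bool       # has the claimant proven control of that identifier?
    accreditations: set = field(default_factory=set)   # statements by trusted third parties

def is_permitted(identity: Identity, action: str, acl: dict) -> bool:
    """The access control question: is this party permitted to do this?"""
    return identity.authenticated and action in acl.get(identity.name, set())

def is_accountable(identity: Identity, required_accreditation: str) -> bool:
    """The accountability question: can this party be held accountable?

    Authentication plus an accreditation from a trusted third party.
    Consequences (for example, refusing further mail) are applied later
    if the party abuses the privilege."""
    return identity.authenticated and required_accreditation in identity.accreditations

# Hypothetical example: accept mail from a sender a reputation service vouches for.
sender = Identity("shop.example.com", authenticated=True,
                  accreditations={"no-spam-observed"})
print(is_accountable(sender, "no-spam-observed"))   # True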

Architecture

As we have seen, the “end-to-end” principle is one of the core design principles of the Internet. Complexity must be kept out of the Internet core at all costs. Deployment of changes to the core of the Internet is an exceedingly slow process indeed.

The need to keep complexity out of the Internet core reflects another Internet architectural principle: the distinction between a network and an internetwork. A network is an organizational unit under unified administrative control. An internetwork is a network of autonomous networks. By definition, an internetwork does not have a unified administrative control.

A computer network that serves a house, a small business, a university, or even a research network serving an entire country typically has a single authority responsible for administration and planning. When two independent networks are linked, something different emerges: The machines in one network can talk to machines in the other, but there are now two authorities responsible for administering the combined network.

The distinction between a network and an internetwork is critical when we are attempting to make changes. In an internetwork, there is no central governance authority that can make the decision to act. Instead, it is necessary to persuade each of the autonomous member networks to act independently.

The lack of central governance in the Internet creates tension at the diplomatic level as certain governments appear to believe that there must be a control apparatus somewhere if only they can find out where it is.

The Internet emerged as the dominant internetwork precisely because it was designed as an internetwork rather than a network. Joining HEPNET required a major effort on the part of a university department. Machines would have to run particular software and be administered in particular ways. The Internet recognized the member networks as autonomous entities with the right to administer their own internal networks as they saw fit. The only rules in the Internet were that all networks had to support the Domain Name System (DNS) and that all communication between the networks would use the Internet Protocol (IP).

The lower entry cost of joining the Internet made it the natural choice for a computer network seeking limited internetworking capabilities. Over time, the ability of the Internet to glue together unlike networks at low cost made it the eventual winner of the networking standards wars. America Online (AOL) began in 1983 as a bulletin board for Macintosh computer systems but did not connect to the Internet until a decade later. When AOL did finally join the Internet, it did so slowly and gracefully, first giving its subscribers the ability to exchange e-mail with Internet users and then adding features such as Web browsing. Conversion of AOL’s internal network to IP took many years more, and even today AOL still supports some older editions of the AOL client software that are not IP based.

The Internet provides the technical infrastructure that allows machines on different networks to communicate with each other. It does not establish the necessary accountability controls between the networks. Our objective is to establish those controls.

Understanding the internetwork as a network of networks provides us with three places where we might attempt to deploy changes to the Internet infrastructure: within the machines that provide the backbone of the Internet, connecting networks to networks; at the communication endpoints where conversations begin and end; and at the interfaces between the constituent networks and the network of networks. These three locations are known as the core, the end, and the edge respectively (see Figure 5-1):

  • The core—Keeping complexity out of the Internet core is one of the foundational principles of the Internet. Making changes to the Internet core is a slow and difficult process. The routers that serve the Internet backbone are switches designed to move data at exceptionally high speeds. They are not designed to perform other tasks.

    In practice, this means that the only type of infrastructure change that is feasible at the Internet core is one designed to protect the core infrastructure.

  • The end—The Internet has more than a billion users and a billion machines. Infrastructure changes that require updates to the endpoints are possible but require many years to take effect. Getting a feature implemented in a popular application takes a minimum of three years, and it will be at least another three years before a majority of users are using it.

  • The edge—The Internet is a network of networks. At the edge of each network are machines that connect one network to another. When the Internet began, these machines were simple switches. Today the edge is patrolled by mail servers, firewalls, spam filters, VPN boxes, wireless routers, and proxies—machines that are managed by specialist administrators and serve functions that are largely transparent to the average Internet user. It still takes a great deal of persuasion to get a professional technical specialist to change the configuration of a machine under his control, but if the right argument is made, critical mass can be established much more quickly than at the end or the core. Deployment at edge servers managed by the largest ISPs can propagate a protocol change extremely rapidly.

Figure 5-1. End, core, and edge

The most deployable architecture is to put new functionality at the edge where the network connects to the internetwork.
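As a concrete illustration of an edge-deployed control, consider the check a mail server at the network edge can make before accepting a connection: a lookup against a DNS-based blocklist, in which the octets of the connecting address are reversed and queried under the blocklist zone. The zone name and addresses below are placeholders; a real gateway would use the reputation service its operator has chosen.

import socket

def listed_on_blocklist(client_ip: str, zone: str = "dnsbl.example.org") -> bool:
    """Return True if client_ip appears on the DNS-based blocklist `zone`.

    Blocklists publish listings under a reversed-octet name;
    203.0.113.7 is looked up as 7.113.0.203.dnsbl.example.org."""
    query = ".".join(reversed(client_ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)    # any answer means the address is listed
        return True
    except socket.gaierror:            # no such name: the address is not listed
        return False

# Hypothetical use inside an edge mail gateway:
if listed_on_blocklist("203.0.113.7"):
    print("550 rejected: sending address has a poor reputation")
else:
    print("accept the connection")

The point is not the particular lookup but where it runs: one administrator updating one gateway changes the behavior seen by every user behind it.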

Deploying complexity at the edge rather than the end is not really a major departure from the end-to-end principle. When the Internet started, computers were bought one machine at a time. When you only have one computer connected to the Internet, the distinction between the end and the edge disappears.

Naturally, there are some cases where it is impossible to meet our goals through edge deployment alone. Earlier, we identified poor usability as a key defect of existing Internet security schemes. We cannot rectify a defective user experience unless our solution touches the application that the user interacts with.

To address usability issues, our changes must necessarily touch the communication endpoints. But even though we must make some changes here, we are not required to make all our changes at the endpoint. We can begin by deploying complementary infrastructure at the edge, where deployment can take place more quickly, and only move to the end when sufficient edge infrastructure has been established to create critical mass.

This edge-then-end strategy has continuing advantages after the original deployment is achieved. By placing the complexity at the edge rather than the endpoint, we give ourselves the flexibility to address future challenges: a change made at a few key edge points now affects every endpoint behind them without the need for a further endpoint update.

Strategy

Strategy is the process of identifying other people’s interests that are consistent with the outcomes you want to effect. No one individual or even one corporation can change a global infrastructure with a billion users by acting alone. Attempts to do so have invariably been humbling.

The rapid expansion of the Web was not an accident; the Web was designed for deployment from the start. In particular, the technology choices for the Web platform were governed by political expediency and not technical superiority alone. SGML was not chosen as the basis for HTML because it was the best document markup language that could be developed; it was not even the best system available at the time. But SGML brought with it the support of the publishing industry, which had already committed to the format.

The key in developing a strategy is to identify the areas where your interests are essential and those where compromise can be made. Compromise that makes it easier to reach critical mass is beneficial. The temptation to resist is compromise for its own sake: accepting dependencies that make it harder to reach critical mass and the self-sustaining growth phase.

When developing strategy, I consider the two principles that drive technological change: pain and opportunity.

Change in the security field tends to be driven by pain. In particular, it is pain that can be attributed to a specific narrowly defined cause—a pain point.

Every few months, I compile a list of the current pain points that I see as the top issues for Internet security. For the past five years, spam and Internet crime have been at the top of the list. Other important pain points are compliance with audit and regulatory requirements (those four horsemen again) and the rising cost and complexity of administering networks in both the home and the enterprise.

An emerging pain point is an effect given the unlovely term deperimeterization, which we will return to in Chapter 16, “Secure Networks.” Deperimeterization has attracted a lot of attention in Europe, but less in the U.S. Perimeter security is the dominant paradigm of network security today even as it is being undermined in many different ways, by laptops that move in and out of the company every night, flash memory, connections to partner networks, and wireless networking.

Deperimeterization is an important pain point for the enterprise security world because it requires us to rethink our entire approach to networking, not just the security aspect of it. It is a pain point we must be aware of when proposing solutions to Internet crime, because it is likely to drive most of the security challenges facing the enterprise over the next five to ten years.

Although the principal driver for Internet security deployment is pain (that is, the need to develop controls for existing risks that are already being felt), opportunity also plays a role.

The Internet has been a mass medium for a decade. In 1995, the principal means of access was dialup. Today, most home users connect through high-speed broadband capable of delivering video on demand. But even though the potential for delivering high-quality video content to consumers via the Internet is clear, this opportunity cannot be realized without both a security infrastructure and an economic model that allows studios to recoup the cost of films that can run to $100 million or more.

Another important area of opportunity is personal expression. People want to use the Internet to express their own thoughts and ideas through blogs and personal networking sites such as MySpace and LinkedIn. These sites are important and valuable to the users, but the boundaries imposed by the lack of an effective security infrastructure are becoming apparent.

A third area of opportunity is the machine-machine Web. The first generation of the Web provided a natural means for a human to obtain information from a machine. Although humans are the ultimate consumers of information (at least until strong AI is developed), it is not necessarily desirable for a human to mediate every link in a chain.

Today I would arrange a business trip by interacting with a series of Web sites to register for a conference; book a flight, car, and hotel; create driving instructions from the airport to the hotel and from the hotel to the conference venue; and so on. In each case, I am the mediator in the process; I have to take information delivered by one Web site and enter it into another. A much better way to solve this problem is to have a program that can take the information provided by one site and feed it into another. Such programs are becoming known as mashups.

One way I could do this is by writing a program that takes information from each site as if it were a person, taking the HTML markup designed to render the information to the human viewer and extracting the information I need. This process is known as screen scraping. One of the earliest jobs I got in a large company was writing a program that screen scraped information from the corporate mainframe and wrote it out to a spreadsheet. A much better way to achieve the same result would, of course, have been for the programmer maintaining the mainframe to produce a report in the machine-friendly format required directly from the database, but that is not how things were done then.

Today we know better: Making the information available in the most usable form makes the information, and thus the service provided, most valuable to its users. Web Services and the Semantic Web are two technologies that make providing and using machine-machine interfaces easier.
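The difference is easy to see in code. The sketch below extracts a price first by scraping HTML intended for human eyes and then by reading a structured (JSON) feed of the kind a Web Services interface provides; the markup and field names are invented for illustration.

import json
from html.parser import HTMLParser

PAGE = '<html><body><span class="price">$412.00</span></body></html>'  # what a browser receives
FEED = '{"flight": "BOS-SFO", "price_usd": 412.00}'                    # what a program receives

class PriceScraper(HTMLParser):
    """Screen scraping: dig the price out of markup meant for display."""
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.price = None

    def handle_starttag(self, tag, attrs):
        self.in_price = tag == "span" and ("class", "price") in attrs

    def handle_data(self, data):
        if self.in_price:
            self.price = data

scraper = PriceScraper()
scraper.feed(PAGE)
print("scraped:", scraper.price)                      # fragile: breaks when the page layout changes

print("from service:", json.loads(FEED)["price_usd"])  # robust: a machine-machine interface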

Design

For me, design is the fun part of the process. Designing an Internet infrastructure is like solving a huge multidimensional crossword puzzle. Design can be frustrating, but there is nothing like the feeling you get when you find a solution to a puzzle with a large number of constraints.

Design for deployment does not require a major change in the design process; it’s just one more set of constraints: Who are going to be the first people to deploy this infrastructure? What benefit will they hope to realize? What is the best architecture to achieve critical mass? How can we use the infrastructure established by early adopters to meet our original goals? How do we avoid a situation where we arrive at a technological cul-de-sac? What type(s) of network effect do we hope for?

Design is a process of making choices. In some cases, the particular choice matters; in most cases, the decision to make a choice at all is much more important than the actual choice made.

The choice of spelling for the HTTP Referer field mentioned in Chapter 4, “Making Change Happen,” is an example of this kind of choice. The protocol requires each piece of information describing a request to have a label. The label will never be seen by users in normal operation, so the specific choice of label does not matter provided that all the Web servers and all the Web browsers agree to use the same label for the same purpose. From this point of view, the labels “Shrubbery,” “Fred,” or even “Referrer” would have worked just as well. Choosing a label that clearly indicates its function has the advantage of making problems rather easier to diagnose and solve, but this is not an essential criterion.
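A short sketch makes the point: the protocol works as long as the string the client sends is the string the server looks for. The URLs are placeholders, and the receiving side is reduced to a single function for illustration.

import urllib.request

# The client labels the field with the agreed (misspelled) name...
req = urllib.request.Request("http://www.example.com/page",
                             headers={"Referer": "http://www.example.com/index"})

# ...and the receiving code looks for exactly the same label.
def referring_page(request_headers: dict) -> str:
    return request_headers.get("Referer", "(none)")

print(referring_page(dict(req.header_items())))   # http://www.example.com/index

Had the label been “Shrubbery” on both sides, the code would work just as well; only consistency matters.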

Reducing the complexity of a design to the minimum is certainly desirable, but it is far more important to avoid reducing the complexity further than the minimum. Many systems that have grown incomprehensibly complex got that way due to an initial design that neglected important needs and failed to provide an effective extension mechanism to allow them to be added at a later date.

As we have seen in the design of the Internet, more important than the amount of complexity is where the design locates it. A design that attempts to locate complexity within the fabric of the Internet core is almost certainly doomed to failure. A solution that can be deployed at the edge is likely to be deployed more quickly than one that requires changes to be made at the endpoint, but only if the deployment strategy can identify a sufficiently powerful incentive for the maintainers of that infrastructure.

Evangelize

The mere fact that a design will help the intended early adopters is not enough for them to deploy it. The intended early adopters must know about the design before they can even consider it and, having heard about it, they need to be persuaded that deploying this particular change is more important to them than all the other competing calls on their time.

I spend a lot of time talking to customers and to the providers of the programs intended to implement the infrastructure. In addition to this book, I write papers, post on my corporate and personal blogs, speak at conferences, and give frequent press interviews.

The other thing I spend a lot of time on is working in standards organizations. A communications infrastructure is only possible if all the machines in the network understand the same way of talking to each other.

The common understanding of the standards process is that it exists for the purpose of creating a design. I think this view is mistaken. To arrive at the best possible design, the best approach is usually to find no more than five experts and put them in a room together to work.

Design is possible in a group of 10 or even 15 people, but it’s no more productive. If anything, progress tends to be slower. A group of this size is best for gathering data rather than analyzing it. As Fred Brooks observed in The Mythical Man-Month,[1] “Oversimplifying outrageously, we state Brooks’ Law, ‘Adding manpower to a late software project makes it later.’”

It does not take a hundred people to design an infrastructure, but it takes at least that number to get it deployed. If you are lucky, you will emerge from the standards process with a design that is slightly better than the design you started with. The real figure of merit for a standards process is the number of supporters you can collect on the way.

Key Points

  • The Internet has a billion users; changing the Internet is a major challenge.

    • Architects must design for deployment.

  • Our objective is to introduce accountability to the Internet without giving anything up.

  • Architectures where the change is made at the network edge are likely to deploy faster than those that require changes at the end or the core.

  • A design must establish an early-adopter community capable of establishing critical mass.

    • The process is driven by pain and opportunity.

  • Evangelism is essential for people to act.
