Chapter 16. Secure Networks

For the past decade, the world of network security has been dominated by firewalls and the perimeter security model. Perimeter security is a good idea; every major corporate HQ has a staffed reception desk. Sneaky thieves do think twice about walking past a reception desk or a security guard with a stolen computer under their arm. But fences are only as secure as their weakest point. A reception desk is a much less effective deterrent if a thief can enter and leave by the fire escape.

As they have become ubiquitous as the first line of defense, firewalls have become increasingly ineffective as the last line of defense. In an extended enterprise, the idea of defining the perimeter of the corporate network becomes meaningless. Constant mergers, acquisitions, and occasional divestitures mean that the perimeter is always moving, never fixed. Business trends such as outsourcing and close supply chain coupling with customers and suppliers mean that the perimeter is increasingly blurred. And even if the front door is protected with an effective firewall, it is increasingly difficult to ensure that the back doors created by laptops, flash memory drives, and PDAs are closed.

The term deperimeterization has been coined to refer to these changes. As a description of the problem, it is useful and self-explanatory: Perimeter security is eroding, and we need to apply new thinking.

Where I am less happy is with the use of the term to describe an architecture intended to meet the challenge of the failing perimeter security model. Using the same term to refer to both the cause and the solution implies that the solution to these problems is simply to let our defenses down, and that everything will then be fine.

There is an emerging consensus on the three main components of a security architecture to address these challenges: ubiquitous authentication, ubiquitous policy enforcement, and data-level security. What we lack is a concise term that joins these three distinct approaches into a coherent sound-bite phrase.

While considering this problem, I reread Marcus Ranum’s article “The Six Dumbest Ideas in Computer Security.”[1] His first culprit is the concept of default permit: allowing anything that is not explicitly denied. Default permit neatly sums up what is most wrong about the typical enterprise network today: Once past the perimeter, anything goes. The opposite of default permit is default deny, a term that neatly describes the state that we want to achieve.

Default deny infrastructure is a network and application security architecture in which anything that is not explicitly allowed is prohibited.

Designing for Deployment

As always, the devil is in the deployment. Designing the architecture of default deny is the easy part; working out how to persuade the industry to back it is the real challenge. Modern network administration is already too complex, even before any new security considerations are added.

The difficulty of network administration provides our opportunity. Few enterprises are willing to pay for network security, but every enterprise must pay for network administration whether it wants to or not. Fortunately, it is an area where there is no shortage of opportunities for improvement.

When the modern digital computer was introduced, programmers would write programs at the machine level, laboriously converting instruction sequences into number sequences by hand and entering them into the machine using punch cards or paper tape. Over time, programming was made easier with the introduction of assembly language and modern high-level programming languages. Today, the computer serves almost all the needs of almost all the users without the user having to write any program code.

Network administration is sadly stuck somewhere between the machine code and assembly stages. Instead of administering the network, we administer the individual machines connected to the network. As a result, any change that is to be made to the network almost invariably requires changes to be made at multiple points within the network. Inconsistency leads to error, which in turn leads to insecurity and unreliability.

IPv6

Design for deployment tells us to look beyond our immediate needs to others with needs that might align with our own. For much of the Internet standards community, the biggest concern is not security; it is the fact that when we originally built the Internet, we built it just slightly too small for our needs, and we are rapidly running out of room.

The Internet architecture was originally designed to connect a few hundred or a few thousand machines at the major research universities. As such, the decision to reserve 32 bits each for the source and destination address in every IPv4 data packet appeared far-sighted at the time because it would allow for an almost unimaginable 4 billion Internet hosts. Today, with a billion people using the Internet and well over a billion connected Internet hosts, the old IPv4 address space is rapidly becoming exhausted.

IPv6 is designed to solve the IPv4 address space shortage by providing a 128-bit address space with more than 340 billion billion billion billion addresses (340 undecillion).
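
To put the two address spaces side by side, the arithmetic is easy to check; the short Python fragment below, included purely as an illustration, computes the size of each using the standard ipaddress module.

    import ipaddress

    # Total number of addresses in each address space.
    ipv4 = ipaddress.ip_network("0.0.0.0/0").num_addresses   # 2**32
    ipv6 = ipaddress.ip_network("::/0").num_addresses        # 2**128

    print(f"IPv4: {ipv4:,}")   # 4,294,967,296 -- about 4 billion
    print(f"IPv6: {ipv6:,}")   # about 3.4 x 10**38 -- 340 undecillion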

It seems clear that the Internet must “eventually” move from IPv4 to IPv6, but again the devil is in the deployment. I can set up an IPv6 network in my house, but none of the broadband providers in the area will provide me with an IPv6 Internet connection. And even if they did, the IPv6 Internet today is a lonely place.

For those of us who have spent the past 15 years building the Internet and the Web, it is easy to assume that others think as we do and believe that a problem for the Internet as a whole is a problem for the individuals who use the Internet. This is unfortunately not the case; in particular, an issue that is a problem for one Internet user (or would-be user who cannot get an address) is not a problem shared by all Internet users.

Merely running out of IPv4 address space is not going to be enough to force a transition. Today, the IPv4 Internet is the party everyone wants to go to. The party will continue without interruption long after the tickets have run out. Only those who did not get their ticket in time will be disappointed.

In a sense, we have already run out of IPv4 addresses; we would have done so in the mid-1990s had we not been rescued by the Network Address Translation (NAT) scheme described in Chapter 9, “Stopping Botnets.”

If the incentives for IPv6 deployment are currently ambiguous, the costs are not. Managing the transition of a major network from IPv4 to IPv6 with current network management tools is for most network managers simply impossible.

To successfully manage the transition to the next generation IPv6 Internet, we must develop tools that make the transition simple and painless.

Again, the design of high-level programming languages provides a useful model. When I first started programming, I used an 8-bit microcomputer called a Commodore PET. Later I used a 32-bit VAX computer. Later still, I used 64-bit computers. The programs I wrote in machine code for the PET would have to be rewritten from scratch for the VAX. But the programs I wrote on the VAX still run on modern computers because they are written in a high-level language.

We need a high-level language for network administration so that, in the future, the network administrator neither needs to know nor care whether his network is using IPv4 or IPv6. Let the machines take care of managing the transition.

We do not have to deploy IPv6 to deploy the next generation of Internet and network security, but we face the same core problem of how to change existing Internet infrastructure. We need to be strategic and work on finding ways that further both objectives, without creating dependencies that could hinder both.

Default Deny Infrastructure

Network administration costs are already reaching the saturation point in the enterprise. It will not be long before this point is also reached in the home.

The year is 2020, and you wake to find that your robot butler has brought you your usual breakfast of coffee, freshly squeezed orange juice, and toast. Your all-purpose tablet turns on instantly as you pick it up to read the international and national news headlines. Turning to your mail, you discover that you were unsuccessful in your bid for tickets for the Rolling Stones’ latest final farewell tour, but you did get tickets for The Who.

Finally, you turn to the task you were looking forward to least: dealing with the domestic staff. The coffee maker appears to have been infected by a virus and has been hosting a phishing capture site. Meanwhile, the refrigerator and some of the light switches (you don’t know which) have been acting as part of a botnet. One of the televisions is refusing to accept media from the storage server, warning that it might have been compromised. The list continues. As the poison causes you to slowly slip from consciousness, you wish you had read the report on the coffee maker before drinking its coffee.

Unless gadgets become easier to use, consumers are simply going to stop buying them. It is time to remember that the original point was to make life easier and simpler.

Applying the default deny principle in this scenario, we see that the coffee maker does not need the ability to talk to the Internet at all, let alone host a Web server. The security problem becomes much simpler if we only allow the coffee machine to access the local network, and only for the purposes that are strictly necessary for its function, and no others: to determine the current time, to receive a request to make coffee, and to report that the coffee is ready.

Default deny allows us to realize the principle of least risk: Unless we need a feature, turn it off so it can’t hurt us. Firewalls work by applying this principle at the perimeter. In default deny infrastructure, we want to apply the principle ubiquitously and at every level.

Ubiquitous Authentication

The first step toward simplifying administration is to have a simple, secure, and effective means of identifying the devices connecting to the network. In particular, we need a mechanism that allows a network manager to note that a new device is attempting to access the network and to decide, without leaving their desk, whether it should be allowed to join.

The IEEE 802.1x standard provides some but not all of the tools we need. 802.1x defines a secure protocol for performing the actual authentication but is agnostic on the subject of the device credentials.

As a result, 802.1x does not by itself meet the “single desk administration” criterion. The administrator must install a credential before the device can be used. Installing the credential might only take a few minutes, but the device must be on the administrator’s desk or the administrator must attend to the device to do this. The process of unpacking the device, powering it up, shutting it down, and repacking it for shipping to its final destination can easily take an hour or so. Even a few minutes of administration can mean a huge overhead when an enterprise has more than one site.

The simplest means of meeting the “single desk administration” criterion is to apply Public Key Infrastructure (PKI), described in Chapter 11, “Establishing Trust.” Each device would have a public key pair and digital certificate installed during manufacture. The device can then present the digital certificate during 802.1x authentication to securely establish the device model and a unique device identifier.

A scheme of this type has already been deployed in the DOCSIS 2.0[2] and later specifications for cable modems. The costs of authenticating the device during manufacture and meeting the requirement are negligible.

The natural choice for a unique device identifier is the MAC/EUI address. This is a unique, stable, and already widely used network authentication measure. A MAC address is a unique 48-bit number that is assigned to every Ethernet and WiFi device during manufacture. Assignment of MAC addresses (since renamed EUI-48 addresses) is managed by the IEEE. Newer protocols such as Bluetooth use the EUI-64 scheme, which works in the same manner and provides 64 bits of address space.

802.1x was developed because network administrators had already begun to use MAC/EUI addresses as a means of authentication despite the fact that every network administrator knows that a MAC address is not secure.

The transition from using MAC addresses to 802.1x rests on the assumption that network managers will choose security over convenience. A support infrastructure already exists for use of MAC/EUI authentication. Many network devices have their MAC or EUI-64 address printed on their case as a barcode, and distributors often list MAC addresses on shipping notices so the device can be added to the authorization database automatically or with a wave of a barcode scanner.

If anyone is going to choose security over convenience, it is network administrators. But it is much better to avoid forcing them to make the choice. Device certificates installed during manufacture provide the necessary glue to use the MAC/EUI address securely.
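
To make the idea concrete, here is a rough sketch, using the third-party Python cryptography package, of the kind of credential a manufacturer might install. The manufacturer name, model number, and EUI-48 address are invented, and a real device certificate would chain to the manufacturer's CA rather than being self-signed.

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Key pair generated on the production line; the private key never leaves the device.
    key = ec.generate_private_key(ec.SECP256R1())

    # Hypothetical identity: manufacturer, model, and the device's EUI-48 address.
    name = x509.Name([
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Robobrew"),
        x509.NameAttribute(NameOID.COMMON_NAME, "coffee-pot-8080"),
        x509.NameAttribute(NameOID.SERIAL_NUMBER, "00-1A-2B-3C-4D-5E"),
    ])

    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)            # self-signed for the sketch only
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365 * 20))
        .sign(key, hashes.SHA256())
    )

    # This certificate is what the device would present during 802.1x authentication.
    print(cert.subject.rfc4514_string())

With a credential of this kind in place, the authorization database need only record which EUI-48 addresses are allowed on the network; the certificate, rather than the address itself, is what proves the claim.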

Device and Application Description

When a network administrator sitting at his desk notices that a new device is asking to join the network, he needs to know what services it offers and what services it requires to perform its function. For this, we need a device description language.

Like many topics in computer science, the design of device and protocol description languages can easily become an end in itself. For the purposes of security, simpler is usually better.

What we need is a language that allows a device to make statements such as these:

  • I am a coffee pot, model number 8080, manufactured by Robobrew.

  • I respond to the make coffee request.

  • I raise the signal coffee.

  • I provide the following status information: water level, coffee level, pots brewed, time since last brew.

  • To present the current time on my display, I require access to the NTP protocol.

Encoding this type of information in XML requires us to deal with two types of information. The first is information related to the actual function of the machine. The second is information related to the network access required to perform that function.

Describing every aspect of every machine that might be connected to the network is a hard problem, one that will require us to apply semantic Web technologies such as ontologies—shared vocabularies of terms used with a specific meaning.

Describing just the network access requirements is a much simpler problem. We still need to build an ontology of network resources, but the set of concepts we need to describe is small, and for practical purposes, it’s fixed for long periods of time. We know that we will need to express concepts such as “IP address,” “IP port,” “protocol,” and so on. Although it is not a job that will be easy to get right, it is certainly an achievable goal.
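
As a rough illustration, the following sketch uses Python's standard XML library to encode the coffee pot's statements. The element and attribute names are invented for the example rather than drawn from any standardized ontology.

    import xml.etree.ElementTree as ET

    device = ET.Element("device", kind="coffee-pot", model="8080", manufacturer="Robobrew")

    # The first half: what the machine does.
    function = ET.SubElement(device, "function")
    ET.SubElement(function, "accepts", request="make-coffee")
    ET.SubElement(function, "signals", name="coffee")
    for item in ("water-level", "coffee-level", "pots-brewed", "time-since-last-brew"):
        ET.SubElement(function, "status", name=item)

    # The second half: the network access the machine needs, which is the part default deny enforces.
    network = ET.SubElement(device, "network-requirements")
    ET.SubElement(network, "requires", protocol="ntp", scope="local", purpose="display the current time")

    print(ET.tostring(device, encoding="unicode"))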

Service and Policy Discovery

Just as the network needs to know which services the device requires to perform its task, the device needs to know which services the network offers that meet its needs. Here, the secure naming architecture described in Chapter 15, “Secure Names,” is applied.

The choice of the DNS as the service and policy discovery mechanism is important because it allows us to distinguish the logical network from the physical one.

If Alice is connecting her employer-issued laptop to her employer-managed network at work, the logical and physical networks are the same. If Alice is working from home or staying overnight at a hotel, the physical network has changed, but the logical network should not.

If, on the other hand, Alice wants to become master of her own domain, she can buy her own domain name and set her policy exactly the way she wants it, if the provider of her physical network is prepared to allow a machine with her policy configuration to connect. This should not be a problem at home, where Alice is paying her ISP for the network connection, but it might be an issue at work, where her personal policy preferences might not be compatible.
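
A minimal sketch of DNS-based service discovery, assuming the third-party dnspython package and a hypothetical _policy._tcp SRV record published under the domain of the logical network:

    import dns.resolver   # dnspython, assumed to be installed

    def discover(service: str, proto: str, domain: str) -> list[tuple[str, int]]:
        """Find a service for the logical network (its domain), wherever we happen to be."""
        answers = dns.resolver.resolve(f"_{service}._{proto}.{domain}", "SRV")
        return [(r.target.to_text(), r.port)
                for r in sorted(answers, key=lambda r: (r.priority, -r.weight))]

    # Whether Alice is at work, at home, or in a hotel, her policy server is found by name;
    # only the physical network underneath has changed. (alice.example.com is a placeholder.)
    # discover("policy", "tcp", "alice.example.com")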

Ubiquitous Policy Enforcement

The core principle of default deny is to prohibit everything that is not explicitly permitted. Ubiquitous authentication and the network security policy allow us to determine whether an action is permitted. The next question is how and where the policy is to be enforced. Should this be at the network interface, the network hub, the router, or the firewall?

In practical terms: Should the coffee pot be responsible for enforcing network policy, the network hub serving the kitchen, or the cable modem connecting the house to the outside world?

The default deny answer is “All of the above.” We do not want to rely on a single point of failure; the goal is to enforce policy at every point in the network where resources allow. As CPU costs continue to decline, it becomes practical to do this at every level in the network.

What this means is that no device that connects to the network is allowed to make any more use of network resources than its function requires, as decided by the owner of the network. If Alice decides she does not want the coffee pot to access the Network Time Protocol (NTP), she turns it off and the coffee pot does not display the time.

A printer connected to the network is allowed access to the resources appropriate for printers. If it is compromised by a Trojan that attempts to send spam or perform a SYN flood denial-of-service attack, the packets are rejected and the network control center notified that an incident has occurred.
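
A toy model of the decision at the heart of ubiquitous policy enforcement shows how little machinery it needs. The device identifiers and rules below are invented for the sketch; everything not on the allow list is dropped and reported.

    from typing import NamedTuple

    class Rule(NamedTuple):
        device: str        # authenticated device identity (for example, its EUI-48 address)
        protocol: str
        destination: str   # "local" or a DNS name

    # Explicit allow list; anything not listed is denied by default.
    ALLOW = {
        Rule("00-1A-2B-3C-4D-5E", "ntp", "local"),   # coffee pot: clock display
        Rule("00-1F-AA-10-20-30", "ipp", "local"),   # printer: print jobs only
    }

    def permitted(device: str, protocol: str, destination: str) -> bool:
        """Default deny: traffic is forwarded only if an explicit rule allows it."""
        return Rule(device, protocol, destination) in ALLOW

    # The compromised printer trying to send spam is simply dropped (and the attempt reported).
    assert not permitted("00-1F-AA-10-20-30", "smtp", "mail.example.com")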

Traditional firewalls are also a form of policy enforcement mechanism, but the range of enforcement options available at the network perimeter is much narrower than the range available deep within the network, where fine-grained control is possible.

A firewall cannot stop a Trojan infection from spreading after an initial breach has occurred. Default deny cannot prevent the printer from being infected by a Trojan, but it can make it easier to detect the compromise, and it can help prevent the infection from spreading to the coffee pot.

Most network devices will have a need to access the local DNS and NTP services for naming and time information, but a need for access is not the same as a need for unlimited access. If a printer, coffee pot, or, for that matter, any device is making hundreds of NTP requests per second for the current time of day, something odd is happening; possibly the NTP requests are being used as some form of control channel.
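
A sketch of the kind of check an enforcement point might apply: even a permitted service is monitored against a request rate that is plausible for the device's declared function. The thresholds here are invented for the example.

    import time
    from collections import deque

    class RateMonitor:
        """Flag a device whose request rate exceeds what its declared function could plausibly need."""

        def __init__(self, max_per_second: float, window_seconds: float = 10.0):
            self.max_per_second = max_per_second
            self.window = window_seconds
            self.events = deque()

        def record(self, now: float) -> bool:
            """Record one request; return True if the device should be reported as an incident."""
            self.events.append(now)
            while self.events and self.events[0] < now - self.window:
                self.events.popleft()
            return len(self.events) / self.window > self.max_per_second

    # A coffee pot should ask for the time a few times a day, not hundreds of times a second.
    ntp_monitor = RateMonitor(max_per_second=1.0)
    incident = ntp_monitor.record(time.monotonic())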

The Death of Broadcast

Two features of current-day networks that disappear almost entirely in default deny infrastructure are broadcast communication and open listener configuration.

A broadcast message is a message that is sent indiscriminately to every device in a network. Because the whole point of default deny is to suppress indiscriminate communication, there is no choice but for broadcast to go.

We would like to eliminate broadcast in any case because any protocol that relies on indiscriminate communications will only work at the local level and cannot scale. Systems that depend on broadcast messages tend to stop working in unexpected ways when a new network hub or bridge splits what was previously a single physical network span.

Early Ethernet networks consisted of a coaxial cable strung from one machine to the next. Every data packet injected onto the network was seen by every other machine on the same segment. In an open listener configuration, a device snoops on packets addressed to other machines.

Applying the default deny principle, we should only allow a machine to be configured in broadcast or open listener configuration if the network policy recognizes a need to do so. A network manager might have a machine authorized to operate as an open listener for tracing network faults.

Intelligence and Control

Operating a default deny infrastructure over any extended period requires the ability to detect and monitor internal and external threats and to adapt the network security policy accordingly.

No network security scheme will ever be so good that it is possible to ignore internal or external events with impunity. If a device suddenly starts behaving in a way that is inconsistent with policy, the network managers need to know. Equally, if the opposition has launched the next Code Red or Slammer, the managers of even the most secure network need to know what is about to hit them, or already has.

Without intelligence and control, even the best security policy becomes an electronic Maginot Line. We still need the INCH protocol described in Chapter 9.

Data-Level Security

The third leg of the default deny stool is data-level security.

It is often said that the only way to protect yourself absolutely against an Internet attack is to cut the cord connecting you to the Internet. As we saw in Chapter 1, “Motive,” this is not true, because cutting the cord does not prevent the criminals from using the Internet to conspire against you.

It is, however, possible to protect yourself against the risk of information disclosure by not having any information to disclose. If you never ask for your customer’s social security number, it cannot be stolen from your computer systems. You can’t lose what you never had.

Handling sensitive information is sometimes inescapable. Today, credit card transactions require a card number, and it will be some time before this is fixed. But merchants that store every credit card number they see are placing themselves at much greater risk than those that delete the information as soon as they no longer need it.

Applying the least-risk principle to data provides a three-step approach.

  1. Only collect, accept, or otherwise obtain confidential data when there is a business need to do so.

  2. Only continue to store confidential data when there is a business need to do so.

  3. Use strong cryptography and procedural controls to enforce access controls on confidential data.

The key to implementing such an approach is again accountability. Many enterprises have appointed a chief information security officer (CISO). One of the primary roles of a CISO is to take responsibility for information security and, in particular, to ask questions such as, “Why are we collecting and storing all this sensitive data we never use?” An organization where these questions are raised by junior engineers is at a much greater risk than one where they are raised by a senior executive reporting to the CIO or CEO.

Enforcement of such controls in large-scale databases is becoming a generally understood problem with well-established solutions. Many data breaches occur when data is extracted onto a personal system. The extraction itself might be malicious; in some cases, spammers have paid employees for lists of e-mail addresses. In many cases, the original extraction was authorized. The problem is what happens to the data afterward; the auditor’s stolen laptop containing sensitive personnel records has become a cliché.

A good starting point would be to use existing security infrastructures. File encryption has been supported by premium versions of the Windows operating system since 2000 and is available on every other major operating system. An appropriate accountability control might be to bar any firm of auditors from performing security audits for some number of years should it be found after a breach that they failed to employ such measures.
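
As a minimal illustration of the principle (using the third-party Python cryptography package; key handling is reduced to a single line here, where a real deployment would hold the key in an enterprise key-management service):

    from cryptography.fernet import Fernet

    # The extract that would otherwise sit in plaintext on the auditor's laptop.
    records = b"name,salary,ssn\nAlice Example,...,...\n"

    key = Fernet.generate_key()           # in practice, held and audited centrally
    vault = Fernet(key)

    ciphertext = vault.encrypt(records)   # what is actually written to the laptop's disk

    # A stolen disk yields only ciphertext; reading the records requires the key,
    # and releasing the key is something access controls and audit trails can govern.
    assert vault.decrypt(ciphertext) == records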

Secure storage and secure transport only take us part of the way. We still rely on the (all-too-fallible) user remembering that secure transport must be used whenever the personnel spreadsheet is sent by e-mail. Our objective must be to take security to the data.

Content rights management (CRM) systems allow us to do just that. In a typical CRM system, the sensitive data is stored in encrypted form. The decryption key is only released when the CRM system receives sufficient assurance that certain constraints on the use of that data will be observed.

In the e-mail security schemes described in Chapter 13, “Secure Messaging,” Alice can send an encrypted e-mail to Bob, but after Bob decrypts the message, he can use the information in the same way that Alice can. In a CRM system, Alice can send the message to Bob with strings attached. Bob might only be allowed to forward the message to a limited distribution list (for example, other company executives). Printing the document might only be allowed on specific printers or prohibited entirely. Bob might not be able to read the document at all after a certain date.
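
A toy sketch of the key-release decision at the heart of such a system, with an invented policy format:

    import datetime

    # Hypothetical policy attached to an encrypted document.
    POLICY = {
        "allowed_readers": {"bob@example.com", "carol@example.com"},   # company executives only
        "printable": False,
        "expires": datetime.date(2009, 12, 31),
    }

    def release_key(reader: str, today: datetime.date) -> bool:
        """The CRM service hands out the decryption key only if every constraint holds."""
        return reader in POLICY["allowed_readers"] and today <= POLICY["expires"]

    # After the expiry date, Bob cannot read the document at all.
    assert release_key("bob@example.com", datetime.date(2009, 6, 1))
    assert not release_key("bob@example.com", datetime.date(2010, 1, 1))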

CRM is a form of digital rights management (DRM). More precisely, CRM is a euphemism that attempts to distinguish the use of DRM to protect privacy from the more controversial use of DRM for copyright enforcement. As such, the same objections that are made against copyright enforcement schemes tend to be thrown at CRM regardless of validity.

The technical difficulty of enforcing controls depends on the size of the distribution and the incentive to the attacker. Protecting copyright content such as a Hollywood blockbuster is difficult because the film must be playable on any of the billion-plus DVD players sold. A criminal who successfully breaks the protection scheme has a ready market and can make a fortune.

Protecting confidential documents that are distributed to a few hundred senior executives is a much more manageable task. All we need to deter the professional criminal is to ensure that the expected cost exceeds the expected profit. To mount the attack, a criminal must first obtain one of the machines authorized to handle the protected material; to profit, the criminal must then break the protection scheme while the confidential information still has value.

The current generation of CRM systems offers a level of security best described as “advisory.” The CRM system prevents intentional and unintentional abuse by the typical user but is vulnerable to a compromise of the operating system platform and intentional abuse by a sophisticated insider. Future generations of the CRM system based on the trustworthy computing technology described in Chapter 17, “Secure Platforms,” will offer considerably greater resistance to attack.

Network Administration

We already have most of the pieces we need to deploy default deny; all we lack is the necessary glue to join the existing pieces. The most important missing glue is the infrastructure necessary to simplify network administration.

Starting a Network

Setting up a new network or adding a device to an existing network should require as little operator input as possible.

The first thing a customer needs to know when buying a piece of network equipment is whether it supports the default deny standard. An industry conformance brand similar to the “WiFi” brand is required.

Let us imagine that Alice comes home from her shopping trip with a box labeled “network hub” and another box labeled “PC.”

To install her network, she plugs the network hub into whatever socket brings high-speed broadband Internet into her home.

At this point, she is almost done. All she needs to do now is to connect her PC to the network hub and complete the installation. For the time being, let us pass over the additional complexity introduced if the network is wireless and assume that she is using Ethernet.

Alice plugs her new PC into the network hub and turns it on. After the machine has started, the network configuration dialog begins automatically. This requests the following information:

  • Billing information for the Internet service provider (optional)

  • The domain name for the network (optional; defaults to the domain provided by the Internet service provider)

  • The means of authenticating the network administrator

  • A DNS name for the machine currently in use

After the requested information is supplied, the network setup is complete.

What appears to be simple on the surface requires a significant amount of effort and choreography behind the scenes.

When the network hub is plugged in, it looks to see what other devices are available to talk to. At this point, the only device it can talk to is whatever machine is on the other end of the broadband connection, and this particular machine is not going to route packets beyond that point until it knows that it has a paid subscriber. Because the hub is just a box without either a keyboard or a display, it is not able to do anything more at this point.

When the PC connects to the network, it first obtains a network address and domain name from the network hub using the Dynamic Host Configuration Protocol (DHCP). It then registers the services it is able to offer. One of these services is providing a network administration console. The network hub can now request the additional information it needs to complete the setup.

Adding a Device to a Network

The same choreographed dance is repeated each time a new device is added to the network, except that from this point on, each new device that asks to join or rejoin the network will be required to authenticate itself via 802.1x.

When a new device asks to join the network, the request is routed to whichever machines are currently managing requests to join the network. A coffee pot requesting minimal network access might be added automatically; a machine such as a personal computer requesting comprehensive network access might require specific approval. The administrator might also want to qualify the approval, limiting the services to be offered or the network scope that is visible to the device.
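
One way the admission decision might be automated, using the device description introduced earlier; the categories and thresholds below are invented for the sketch.

    # Access requested by the new device, taken from its device description.
    requested = {"ntp:local"}                          # for example, the coffee pot
    # requested = {"dns:any", "http:any", "smtp:any"}  # for example, a personal computer

    MINIMAL = {"ntp:local", "dns:local"}               # services safe to grant automatically

    def admission_decision(requested_access: set) -> str:
        """Auto-admit devices asking for nothing beyond minimal local services;
        queue anything more ambitious for the administrator's explicit approval."""
        if requested_access <= MINIMAL:
            return "admit automatically"
        return "hold for administrator approval, possibly with a reduced grant"

    print(admission_decision(requested))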

We make the choreographed dance work by using the DNS-based service and policy mechanisms described earlier.

Adding Wireless Devices

Dealing with wireless devices is a little more complex, because there is frequently more than one wireless data network in a given location. We want to make sure that we connect to the right one and the wrong people do not.

If the device connecting to the network has a keyboard and a display, we can type in the domain name of the network we want to join. The problem is a little more difficult if the device is a network hub, a printer, or a light switch.

If the device has any sort of input and display capability, we might provide a list of the visible wireless networks and ask the user to select one. This is a somewhat clunky procedure, though.

The approach I prefer would make use of the fact that flash memory devices, in particular USB “thumb” drives, have become extremely cheap, as has the silicon needed to support them. If a device is to have a Bluetooth or WiFi capability, the cost of adding a USB or flash memory interface to the design is negligible. To connect a device to the network, the administrator would load a small configuration file onto a flash drive, plug the drive into the device to be connected, and press the Setup/Reset button.
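
The configuration file itself can be very small. A sketch of what the administrator's tool might write to the flash drive follows; the field names are invented, but the essential contents are the identity of the logical network to join and enough keying material for the device and the network to authenticate each other.

    import json

    # Hypothetical contents of the file copied onto the USB flash drive.
    join_config = {
        "network_domain": "home.example.com",        # the logical network to join
        "wireless_ssid": "example-home",             # the physical radio network to use
        "network_ca_fingerprint": "sha256:0f2a...",  # lets the device verify it joined the right network
        "enrollment_token": "one-time-secret",       # lets the network verify the device was authorized
    }

    with open("join.cfg", "w") as f:
        json.dump(join_config, f, indent=2)

    # The administrator copies join.cfg onto the drive, plugs the drive into the printer or
    # light switch, and presses Setup/Reset; the device reads the file and enrolls itself.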

Coffee Shop Connection

Much of this book has been written at the local Panera. One of the main reasons for drinking coffee at Panera is that they provide free WiFi Internet.

The Panera WiFi service is among the best on offer today, but it still demonstrates the clunkiness that results from the fact that authentication was an afterthought in the WiFi specifications. As a result, the user experience falls far short of what it should be.

Using the Internet service in a hotel, airport, or coffee bar should be as simple and seamless as in the home. And, even though I am not at home, I should still be able to use my home printer, fax machine, and storage system.

DHCP makes no distinction between obtaining Internet service from a trusted local network and obtaining it from an untrusted public network. This is an important limitation. Just because I like the coffee at Panera does not mean that I want the company to take over administration of my computer, nor would Panera want to take the responsibility.

What Panera does care about is that I agree to its terms of service. In particular, I must agree not to hog bandwidth, surf porn sites, or sue the company if the connection goes down.

What I care about is that I am connecting to the real Panera network and not subject to an attack known as the “evil WiFi twin.” The WiFi specification does not provide a means for the access point to authenticate itself to a user connecting to it. Anyone can set up a WiFi network that advertises itself as “Panera.”

Dealing with these concerns is straightforward but time consuming. We need to revise the WiFi protocols without breaking any of the existing systems. EV certificates and Secure Internet Letterhead allow us to know the access point is genuine before we connect.

Securing the Internetwork

The end-to-end principle tells us to avoid all avoidable complexity in the Internet core. Securing the networks where Internet communications originate and terminate meets the bulk of our security requirements for addressing Internet crime.

This leaves the possibility of attacks against the internetwork infrastructure of the Internet. In particular, the Border Gateway Protocol (BGP) used to exchange routing information is vulnerable.

Attacks based on exploiting security weaknesses in BGP are seen today. Fortunately, they are not yet common. The attackers prefer to attack at the weakest point, which is today the two feet between the user and the screen. Social engineering attacks usually trump attacks based on technical sophistication.

Security measures protecting BGP are in place today. My concern is that they rely almost entirely on manual exception processing, and this approach might not scale in the face of a concerted criminal attack, the level of attack we might anticipate if we succeed in shutting down the simpler methods of attack.

BGP Security

Like most descriptions of how the Internet works, this book started with explaining how large messages are chopped up into smaller packets of data, which then pass from one Internet router to another until they finally arrive at their intended destination.

Like most such descriptions, I omitted one of the most important questions to be answered in any such system: How does the router know where to send the packets?

A full technical answer to this question requires at least a book in its own right.[3] A shorter explanation is that the machines that form the Internet backbone are constantly exchanging messages containing route advertisements using BGP.

When an ISP wants to start a network, it applies to its local registration authority to be issued a “block” of IP addresses. A typical allocation for a small network might be 192.168.69.0 through 192.168.69.63.[4]

The ISP connects its router to its upstream Internet provider and sends out a BGP route advertisement for the new network block. This, in effect, says, “I route to 192.168.69.0 through 63 with a distance of 0.” The router for the upstream provider then sends out a message to each of the routers it is connected to saying something like, “I route packets to 192.168.69.0 through 63 with a distance of 10.” Those routers then advertise a distance of 20 and so on.

When a router receives a packet with a destination address of 192.168.69.23, it looks up its table of known routes and chooses the one it considers “best,” taking into account the bandwidth the various links support, how congested they are, and so on.
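
A toy model, ignoring autonomous systems, bandwidth, and routing policy and using only the distance metric from the example above, shows the basic mechanics of learning routes and choosing among them.

    from ipaddress import ip_address, ip_network

    routes = {}   # prefix -> (distance, next hop)

    def advertise(prefix: str, distance: int, next_hop_router: str) -> None:
        """Learn a route, keeping only the shortest distance heard for each prefix."""
        net = ip_network(prefix)
        if net not in routes or distance < routes[net][0]:
            routes[net] = (distance, next_hop_router)

    advertise("192.168.69.0/26", 0, "local")         # the originating ISP's own block
    advertise("192.168.69.0/26", 10, "upstream-A")   # the same block heard via a neighbor

    def next_hop(destination: str) -> str:
        """Pick the most specific matching prefix, then the shortest distance."""
        addr = ip_address(destination)
        candidates = [(net.prefixlen, -dist, hop)
                      for net, (dist, hop) in routes.items() if addr in net]
        return max(candidates, key=lambda c: c[:2])[2]

    print(next_hop("192.168.69.23"))   # -> "local"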

As the Internet grew, the number of IP address blocks became unmanageably large. Instead of advertising a route for each address block directly, groups of address blocks are mapped to an Autonomous System (AS).

Providing BGP security between nearest neighbors is straightforward. Unusually for a routing protocol, BGP is layered on TCP, which means we can secure the communication between nearest neighbors using SSL.

The challenge in BGP security is that each router relies on the information provided by its neighbors, which in turn rely on the information provided by their neighbors, and so on. Securing a communication to the nearest neighbors does not provide accountability to the original source.

The problem is similar to a rumor circulating at a cocktail party. Knowing who told you the rumor is not the same as knowing who started it or why.

Another similarity with the cocktail party rumor problem is that the participants have an interest in concealing their sources. For understandable reasons, the companies that provide the Internet backbone are reluctant to fully describe the internal architecture of their networks to competitors.

So far, proposals to secure BGP have tended toward the access control model of “Decide what is bad; make sure it never happens.” Some parts of the BGP security problem, in particular the mapping of IP address blocks to autonomous systems, are tractable using this approach. But the results are not encouraging when we attempt to rely on this approach alone. In many cases, we can only decide what was bad after observing the consequences.

I believe that the key to deploying BGP security at the internetwork level is to apply an accountability approach. In other words, we accept that there will be problems and develop a mechanism that allows the culprit to be identified.

One development that should help pave the way for an accountability-based security architecture for BGP is the PKI currently being deployed that will allow the holders of IP address allocations to authenticate the routes they advertise.

Key Points

  • Deperimeterization means that the perimeter model is increasingly insufficient.

    • A fence is only as good as its weakest point.

    • USB flash memory and laptops create holes.

    • Outsourcing and supply chain integration make the perimeter ambiguous.

  • Default deny infrastructure responds to the challenge of deperimeterization in three ways:

    • Ubiquitous authentication—every device, user, and application is authenticated before using the network.

    • Ubiquitous policy enforcement—nothing is allowed to happen in the network unless there is a policy rule to say that it is permitted.

    • Data-level security—wherever possible we apply security to the assets we wish to protect, not just the place where they are stored.

  • The devil is in the deployment.

    • Network administration is already too costly.

      • Deployment of IPv6 faces the same challenge.

    • Domain-centric administration makes default deny practical.

      • Device authentication—every device authenticates itself to the network automatically.

      • Device description—every device provides a description of the network services it provides and requires.

      • DNS service advertisement—the DNS is used as the network and internetwork service discovery mechanism.
