CHAPTER 26

GATEWAY SECURITY DEVICES

David Brussin and Justin Opatrny

26.1 INTRODUCTION

26.1.1 Changing Security Landscape

26.1.2 Rise of the Gateway Security Device

26.1.3 Application Firewall: Beyond the Proxy

26.2 HISTORY AND BACKGROUND

26.2.1 Changing Network Models

26.2.2 Firewall Architectures

26.2.3 Firewall Platforms

26.3 NETWORK SECURITY MECHANISMS

26.3.1 Basic Roles

26.3.2 Personal and Desktop Agents

26.3.3 Additional Roles

26.4 DEPLOYMENT

26.4.1 Screened Subnet Firewall Architectures

26.4.2 Gateway Protection Device Positioning

26.4.3 Management and Monitoring Strategies

26.5 NETWORK SECURITY DEVICE EVALUATION

26.5.1 Current Infrastructure Limitations

26.5.2 New Infrastructure Requirements

26.5.3 Performance

26.5.4 Management

26.5.5 Usability

26.5.6 Price

26.5.7 Vendor Considerations

26.5.8 Managed Security Service Providers

26.6 CONCLUDING REMARKS

26.7 FURTHER READING

26.1 INTRODUCTION.

The firewall has come to represent both the concept and the realization of network and Internet security protections. Due to its rapid acceptance and evolution, the firewall has become the most visible security technology throughout the enterprise chain of command. In distinct contrast with virtually any other single piece of technology, there is not likely to be a chief executive officer in this country who cannot say a word or two about how firewalls are used to protect enterprise systems and data.

The firewall, as originally devised, was intended to allow certain explicitly authorized communications between networks while denying all others. This approach centralizes much of the responsibility for the security of a protected network at the firewall component while distributing some responsibility to the components handling the authorized communications with outside networks. The centralized responsibility put a lot of attention on the firewall and provided for its rapid maturation as a secure gatekeeper. The maturity and widespread adoption of firewall products led to attention on the responsibility distributed behind those early firewalls: The allowed paths, frequently involving immature software and complex protocols, became the weakest link in the chain. As a result, firewalls have evolved to include features devoted largely to shoring up the allowed paths that pass through them.

The successful use of these devices in contemporary security architecture continues to depend on understanding their specific capabilities and limitations, on managing the security responsibility that rests on nonsecurity devices within protected networks, and on understanding and planning for the failure conditions of security components.

26.1.1 Changing Security Landscape.

The changes in enterprise networks, applications, and work patterns have significantly altered the typical enterprise Internet architecture from a few years ago.

Although the perimeter is now much less clearly defined, there has been increased centralization of elements that were previously distributed. Also, many of the protocols that were passed without inspection by early firewalls are now subject to control.

26.1.1.1 Borders Dissolving.

The outsourcing of information technology (IT) functions previously contained within the enterprise is one factor redefining the border between internal network and Internet. Hosted applications such as customer relationship management (CRM; e.g., Salesforce.com), e-mail and collaboration (Exchange and SharePoint), storage (Amazon S3), and even custom-built Web applications (Amazon EC2) have created paths across the public Internet for traffic previously contained within enterprise perimeters. Amazon Simple Storage Service (S3) and Elastic Compute Cloud (EC2) are examples of outsourced, Internet-based virtual computing resources.

Enterprise applications are increasingly subject to extension by customers and third parties via mashups (composite applications using various programming tools) and Web service application program interfaces (APIs), and it has become common for employees and customers to use the same systems, with applications handling permissions and security internally.

(For more information on outsourcing and security, see Chapter 68 in this Handbook.)

26.1.1.2 Mobility (Physical and Logical).

Another major factor in the redefinition of the security perimeter is the mobility of employees. Employees today work from a variety of systems and locations: they expect to be able to connect from anywhere, whether within the enterprise, from home, or from an Internet hot spot, and to use kiosks and home systems to access enterprise applications. This mobility, and its impact on the trustworthiness of mobile nodes, not only dissolves the network border but also promotes the class of attack that depends on a compromised client. The challenge is to find a way to control these changes with centralized security devices.

26.1.1.3 Regulatory Compliance.

Recent regulatory changes have made companies responsible for a set of requirements for data protection and preservation. The Gramm-Leach-Bliley Act (GLB) and the Health Insurance Portability and Accountability Act (HIPAA) have specific requirements about the protection of personally identifiable information and personal health information. Communications and other corporate data are required to be archived and preserved, and made searchable for legal discovery, by provisions of the Sarbanes-Oxley Act (SOX), SEC Rule 17a-4, and the Federal Rules of Civil Procedure as of December 1, 2006. SOX also requires companies to control any transaction path leading to financial data.

Given the rapid, widespread deployment required to meet deadlines associated with emerging regulations, centralized control at the network level has been an attractive architecture, compared to server or desktop agent software deployments.

(See Chapter 64 in this Handbook for more details on GLB and SOX; see Chapter 71 for more information about HIPAA.)

26.1.2 Rise of the Gateway Security Device.

The firewall first centralized responsibility for authorization of allowed paths and then quickly expanded its role to include additional protections for those allowed paths. These protections have expanded to such an extent that their complexity frequently outweighs that of core firewall function, and it is now more accurate to refer to these expanded firewalls as gateway security devices (GSDs).

Gateway security device capabilities. The processing capacity available per rack unit has increased consistent with Moore's Law, so that many of the security functions that could not previously be centralized in large organizations are now possible. Functions that once each required a dedicated, point-solution device are now being consolidated onto a single platform. Also, the adoption of centralized enterprise assets, such as role-based access control systems and directory services, has made available significant additional context for security decisions.

Enterprise directory integration. Directory integration, such as with an LDAP infrastructure, enables seamless integration of a range of functions, from per-user or per-group authorization of host or protocol access to user- or group-specific rule sets for other GSD functions, such as Web filtering.
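As a rough illustration of how directory groups might drive per-user rule sets, the sketch below merges the policies of every group a user belongs to. The group names, policy fields, and most-permissive merge strategy are hypothetical assumptions, not taken from any particular product or LDAP schema.

```python
# Hypothetical sketch: resolving a user's directory groups into a GSD
# rule set. All names and policies here are illustrative.

DIRECTORY_GROUPS = {  # stand-in for an LDAP group-membership lookup
    "alice": {"engineering", "vpn-users"},
    "bob": {"finance"},
}

GROUP_POLICIES = {  # per-group rule sets, e.g., Web-filtering categories
    "engineering": {"allow_ssh": True, "blocked_categories": {"gambling"}},
    "finance": {"allow_ssh": False, "blocked_categories": {"gambling", "social"}},
    "vpn-users": {"allow_ssh": True, "blocked_categories": set()},
}

def effective_policy(user):
    """Merge the rule sets of every group the user belongs to.

    SSH is allowed if any group grants it; a content category is blocked
    only if every group blocks it (a most-permissive merge, one of several
    plausible policy choices). Unknown users get a default-deny policy.
    """
    groups = DIRECTORY_GROUPS.get(user, set())
    if not groups:
        return {"allow_ssh": False, "blocked_categories": {"*"}}
    policies = [GROUP_POLICIES[g] for g in groups]
    return {
        "allow_ssh": any(p["allow_ssh"] for p in policies),
        "blocked_categories": set.intersection(
            *(p["blocked_categories"] for p in policies)
        ),
    }
```

A real deployment would replace the two dictionaries with live directory queries; the merge step is where per-group rule sets become a single enforcement decision at the GSD.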

Unified threat management. The term “unified threat management” refers to the addition of perimeter-based implementations of antivirus, antispyware, antispam, and other malware controls, along with intrusion detection and prevention (IDS/IPS) functions and some elements of content control, to the firewall workload. Providing these capabilities at the firewall's location requires significant protocol manipulation, and in many cases these capabilities are achieved through full proxy servers for the supported protocols within the device.

Content control and data leakage prevention (DLP). Control of the content of communications, whether for the purpose of filtering offensive Web sites or preventing the disclosure of proprietary information, is in the early stages of consolidation onto GSDs. Ranging from URL and dictionary-based filters for content filtering to content-indexing search engines for DLP, these controls require deep inspection of content within HTTP, SMTP, instant messaging, and other protocols. Content control also includes the implementation of data-handling controls for regulatory compliance and DLP, such as the requirement to use encryption to protect sensitive data sent by e-mail.
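A minimal sketch of the dictionary- and pattern-based inspection described above follows; the patterns and dictionary terms are illustrative assumptions only, and production DLP engines add validation (checksums, proximity rules, indexed content fingerprints) well beyond bare regular expressions.

```python
import re

# Illustrative detectors only -- real DLP products validate matches
# (e.g., Luhn checks for card numbers) rather than trusting raw regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}
DICTIONARY = {"confidential", "proprietary"}  # dictionary-based filter terms

def inspect_content(text):
    """Return the set of policy labels triggered by a message body."""
    hits = {name for name, rx in PATTERNS.items() if rx.search(text)}
    words = set(re.findall(r"[a-z]+", text.lower()))
    if words & DICTIONARY:
        hits.add("dictionary")
    return hits
```

A GSD applying such checks must first reassemble and decode the carrying protocol (HTTP, SMTP, instant messaging) so that `inspect_content` sees the actual payload, which is why deep inspection is the expensive part of content control.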

Archive and discovery. Regulatory requirements to archive and index corporate data, particularly e-mail communications, have created a product category in messaging security at the perimeter. As with antispam and other functions, this feature will soon be consolidated into GSDs.

26.1.3 Application Firewall: Beyond the Proxy.

The most significant of the allowed paths traversing most firewalls is the Web path, consisting primarily of a combination of HTTP and HTTP over SSL. This path has exploded in complexity with the advent of AJAX and related rich Web application technologies, and with the growth in the number and importance of the internal and external Web applications on which the enterprise depends. Whereas the early firewall guarded against access to misconfiguration and vulnerability in the protected servers, the Web application firewall guards against misconfiguration and vulnerability within the custom Web applications that run over the HTTP allowed path.

26.2 HISTORY AND BACKGROUND.

The responsibilities and capabilities of traditional security components have been dramatically altered by changing computing paradigms, by changing models for business interaction, and by the global change introduced by the emergence of Internet-centric computing. (For a general introduction to data communication concepts and terminology, see Chapter 5 in this Handbook. For a discussion of local area networks, see Chapter 25.)

26.2.1 Changing Network Models.

As connections between systems have changed the way computers are used, security issues have shifted to accommodate the new approaches. The progression from mainframe-centric information processing to increasingly broad networks of systems has changed the idea of the security perimeter. Additionally, as a result of the new connections, the requirements for application security have grown to include new concerns about network and host security.

The mainframe-based information processing approach had well-established security technology and procedures. The transition to client/server computing left these behind to a significant extent, and the continued shift to network-centric computing made clear the need for a new approach.

26.2.1.1 Mainframe.

Early information processing systems centered on a glass house approach to networking. Mainframe systems were often solitary, and when multiple systems were present, their connections typically were limited to a single data center. Wide area networks (WANs) were limited to direct leased-line connections between data centers.

Network and physical perimeters were typically the same, with client access provided primarily by green-screen terminal services. The need for application security was limited by this single allowed path. Network security in this environment was the responsibility of the mainframe services connected to network interfaces. For example, these services had security features that controlled access and frequently were located in separate mainframe regions.

Mainframe operating systems anticipated the evolution of networks and firewalls to some extent. These operating systems typically ran as instances on a lightweight virtual machine, as in the case of IBM MVS/VM, or were otherwise divided into partitions or regions. The interactions between these virtual systems could be regulated, even to the level of implementing mandatory access control (MAC). (See Chapter 9 in this Handbook for more details.)

26.2.1.2 Client/Server.

As mainframe resources were augmented with midrange servers, running operating systems such as UNIX, NetWare, OS/2, and Windows NT, the number and type of network connections increased rapidly. Also, terminal-based clients were replaced in large part by personal computer (PC)–class client systems with network connections.

This shift represents the initial change in the traditional security perimeter. Network connections reached outside of the data center to the individual desktop, and WANs expanded to connect these systems across the enterprise and beyond. Among the advantages of the midrange server approach was the broad suitability of their operating systems to office productivity, back-end computing, transaction processing, database service, and other tasks. Along with these capabilities came complexity, including default configurations designed to enable maximum functionality out of the box.

As data centers, and even wiring closets, became populated with servers, the network connections and associated available network services became increasingly complex. The security perimeter no longer had any distinct boundaries within the enterprise, as clients and servers were connected in increasingly complex ways. Application security was redefined, as application functionality spread across mainframe, server, and PC-class client platforms. The interactions between these platforms within a single application created multiple allowed paths, and the many available network services on a given platform created opportunities to circumvent the intended application flow.

26.2.1.3 Web.

The emergence of the commercial Internet in the 1990s was the start of another major shift in enterprise network models. Rapid development of the HTML/HTTP application path led to Web applications that approximate the capabilities of fat client applications. Emerging technologies such as Asynchronous JavaScript and XML (AJAX), which incorporates asynchronous interaction between Web browser and Web server not associated with a page view, have further increased the complexity of communications over the HTTP allowed path. This path now transports countless customized and ad hoc protocols for interaction between rich AJAX client applications and their Web servers.

Although firewalls have increasingly focused attention on inspecting and controlling HTTP-based communications, the growth in complexity has outpaced the abilities of commercial products, putting pressure on Web architects to understand the capabilities of firewall devices and the security context of rich HTTP applications. (For a discussion of Web security, see Chapters 21, 30, and 31 in this Handbook.)

26.2.2 Firewall Architectures.

As network security mechanisms evolved from functionality added to existing routing devices, to dedicated systems and appliances, the techniques used to implement firewall functionality have evolved as well. Always balancing security requirements against performance and network throughput, vendors have introduced a variety of approaches.

26.2.2.1 Access Control List.

The first firewalls were in fact routers, both dedicated routing appliances and UNIX-based bastion hosts. Such devices, routing appliances with access control lists (ACLs), are still widely used as network security mechanisms. Some routing appliances use the stateful inspection architecture discussed later in this section.

Routers using ACLs make authorization decisions for allowed-path control based strictly on the packet currently being processed by the router. This decision, made without context of previous traffic, is based on such packet data as source and target addresses and ports, and on packet flags such as the synchronize (SYN) flag present in packets attempting to initiate connections.

This focused view of individual packets has resulted in several vulnerabilities. One such vulnerability exploited combinations of the poorly compliant IP implementation in Windows NT and the strictly compliant implementation in the routers. Attackers crafted fragmented IP packets such that the initial SYN packet contained headers conforming to an ACL on the router. The following packet, however, had a fragment offset that placed new header data into the packet upon reassembly in the Windows NT host behind the firewall. Since this second fragment did not contain a SYN flag, it was not blocked at the router. Windows NT patches and router changes that prevented fragments with offsets within the packet header have addressed this vulnerability. The practice of reassembling fragmented packets at the router also became common as a preventive measure against this type of attack.
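The stateless decision process, and the gap that the overlapping-fragment attack exploited, can be sketched as follows; the packet fields and rule format are simplified illustrations, not any vendor's ACL syntax.

```python
# Simplified sketch of stateless ACL evaluation, performed per packet
# with no memory of earlier traffic.

ACL = [("permit", 80)]  # allow inbound HTTP; implicit deny for all else

def acl_decision(packet):
    if packet.get("frag_offset", 0) > 0:
        # Non-initial fragments carry no TCP header to check, so early
        # routers passed them -- the gap the overlapping-fragment
        # attack exploited.
        return "permit"
    for action, port in ACL:
        if packet["dst_port"] == port:
            return action
    return "deny"

assert acl_decision({"dst_port": 80, "syn": True}) == "permit"  # conforming SYN
assert acl_decision({"dst_port": 23, "syn": True}) == "deny"    # blocked service
# A crafted second fragment whose offset rewrites the header on
# reassembly is never compared against the ACL at all:
assert acl_decision({"frag_offset": 1}) == "permit"
```

Reassembling fragments at the router, or rejecting fragments whose offsets fall within the packet header, closes this gap by ensuring the decision is made on the packet the end host will actually see.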

Section 26.4, Deployment, shows how existing routing infrastructure, in combination with dedicated network security mechanisms, can be used to enhance security architectures. (For more information about ACLs, see Chapter 24 in this Handbook.)

26.2.2.2 Packet Filtering.

Packet-filtering firewalls are appliance- or host-based firewalls that use the ACL method just described for allowed-path authorization but add additional firewall capabilities. These systems typically do formal logging, are capable of user-based authorization, and have intrusion detection and alerting capabilities.

Unfortunately, these firewalls also have suffered from weaknesses due to lack of context information, as described. Additionally, host-based packet-filtering firewalls have suffered from various weaknesses in the network stacks of the underlying operating systems.

Very few firewall vendors currently offer traditional packet-filtering firewalls, but many nonsecurity products now have packet-filtering capabilities. For example, various load balancers, Web caches, and switch products now offer packet-filtering firewall capabilities.

Packet-filtering firewalls are ideally suited to load-balanced and highly available environments, as they can load-balance connections among devices on a packet-by-packet basis with no additional overhead and can similarly fail over between devices in the middle of an established connection.

26.2.2.3 Stateful Inspection.

In recognition of the problems with authorizing allowed-path traffic based on the information in a single packet, vendors developed several new technologies. The basic design behind application gateways, discussed in Section 26.2.2.4, was considered by some to be too computationally expensive for real-time processing on firewall devices. A competing technology, stateful inspection, was developed to provide connection context information for allowed-path authorization and still provide for good performance, scalability, load balancing, and fail-over capabilities.

This technique calls for a table of connection information, providing context, to be maintained in memory on the firewall. In order to improve throughput, the information in this table is stored in the form of binary representations of IP packet header information. This information then can be compared to the binary header of an incoming IP packet very efficiently, in many cases using only a few native CPU instructions.
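The state-table idea can be sketched as below, using a Python set of tuples where a real device stores packed binary header data: only the first packet of a connection is checked against the rule base, and subsequent packets in either direction are matched against the table. Field names and the rule-base form are illustrative assumptions.

```python
# Sketch of stateful inspection. A connection table provides context so
# that only connection-opening packets consult the rule base; a real
# firewall compares packed binary headers for speed.

state_table = set()  # (src, sport, dst, dport) of open connections

def allow(packet, rulebase):
    key = (packet["src"], packet["sport"], packet["dst"], packet["dport"])
    reverse = (key[2], key[3], key[0], key[1])
    if key in state_table or reverse in state_table:
        return True                    # context: part of an open connection
    if packet.get("syn") and rulebase(packet):
        state_table.add(key)           # new connection matched a rule
        return True
    return False                       # no state and no matching rule

# Example rule base: permit new connections to port 80 only.
web_only = lambda p: p["dport"] == 80
```

The table lookup is why the technique scales: once an entry exists, the per-packet cost is a single comparison, and synchronizing this table between devices is what enables the clustered fail-over described below.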

This technique, as it inspects only certain portions of the incoming packet, is effective only against known or predicted classes of IP attack. Attacks that use ignored portions of the packet to attack weak IP implementations on back-end hosts still succeed. As a result, this type of firewall, while it may be configured with a least-privilege rule base, is partially comparable to the packet filter in that it does not do least-privilege data inspection.

Some stateful inspection systems have focused so heavily on performance that they offer a fast mode that reduces inspection dramatically once a connection has been opened successfully. This mode, while very efficient, is strongly discouraged by network security experts as sacrificing too much security.

Stateful inspection technology lends itself well to load balancing and highly available solutions, albeit with significant overhead traffic. In order to support load balancing and fail-over between packets and within established connections, the state tables between clustered or paired devices must be synchronized. This operation typically is conducted via in-band network traffic or out-of-band interdevice connectivity, such as through RS-232, a standard that specifies physical connectors, signal voltage characteristics, and an information exchange protocol.

26.2.2.4 Application-Layer Gateway.

A second approach to adding context information to the allowed-path access decision came in the form of application gateway firewalls. These firewalls use protocol-specific proxies on each allowed path port to make access decisions, extract required protocol information, and build internal packets for distribution to the back-end host. Since these firewalls are performing far more complex operations than stateful inspection systems, they have some significant performance challenges to overcome.

Some commentators describe the functional separation of the inspection functions in protocol-specific proxies as analogous to the air gap that physically separates distinct, unconnected networks. The benefit of the so-called air gap approach to packet inspection, access decision, data extraction, and packet assembly workflow is that it is effective at protecting not only against known or predicted classes of attack but also against unknown attacks. Since unused packet elements are discarded, they present no danger to internal systems.
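The extract-and-rebuild workflow can be sketched for a tiny HTTP subset as below. The method and header whitelists are hypothetical policy choices, and a real application gateway handles far more of the protocol; the point is that the outbound request is constructed only from parsed, understood fields, so unused packet elements are discarded rather than forwarded.

```python
# Sketch of the application-gateway "extract and rebuild" workflow for a
# tiny HTTP subset. Malformed framing and unknown headers never reach the
# protected server, because the forwarded request is built from scratch.

ALLOWED_METHODS = {"GET", "HEAD"}
FORWARDED_HEADERS = {"host", "accept"}  # whitelist; all else is discarded

def rebuild_request(raw: bytes) -> bytes:
    head = raw.split(b"\r\n\r\n", 1)[0].decode("ascii", errors="strict")
    request_line, *header_lines = head.split("\r\n")
    method, path, version = request_line.split(" ")
    if method not in ALLOWED_METHODS or not path.startswith("/"):
        raise ValueError("request refused by gateway")
    headers = {}
    for line in header_lines:
        name, _, value = line.partition(":")
        if name.strip().lower() in FORWARDED_HEADERS:
            headers[name.strip().lower()] = value.strip()
    # Rebuild from parsed fields only -- the original bytes are never reused.
    out = f"{method} {path} HTTP/1.1\r\n"
    out += "".join(f"{k}: {v}\r\n" for k, v in sorted(headers.items()))
    return (out + "\r\n").encode("ascii")
```

Because every forwarded byte passes through this parse step, attacks hidden in unexpected headers or framing fail at the gateway even when the specific attack is unknown, which is the benefit the air-gap analogy describes.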

Application gateways do not lend themselves easily to load balancing and high-availability solutions. Load balancing typically is accomplished through affinities, where connections will be balanced at their initiation and not change devices thereafter. Fail-over can be accomplished through operating system–level synchronization and fail-over mechanisms, but typically not without disrupting connections in progress.

26.2.2.5 Multifunction/Hybrid.

Hybrid firewalls have emerged as a compromise between the speed and efficiency of the stateful inspection approach and the increased security of the application gateway approach. In fact, most commercial firewalls available today can be classified as hybrids.

Hybrid firewalls, evolved from stateful inspection systems, typically perform their normal inspection for all but a few protocols. These protocols, such as HTTP, are subject to additional application gateway-style inspection and/or proxy.

Application gateways in the hybrid category have always used the proxy approach for known protocols. These systems now implement stateful inspection for unknown or encrypted protocols and offer a fast mode that performs stateful inspection rather than application gateway functionality on established connections.

26.2.2.6 Host Environment Context.

Host-based security, aside from basic network firewalling, has additional complexity and considerations due to the protection requirements of the additional elements (software, services, etc.) running on the host. The host's ability to better understand its local environment allows for more granular protection and visibility.

Instead of focusing on just allowing or denying port-based access to the network, the environmental context allows host-based security to define which applications and services can access or receive network information or can access or change other services, to execute code in a virtual machine to verify that it behaves as expected, and so on.

Firewalls running on, or in special communication with, the hosts they protect can make use of additional context for making security decisions. Whether through native platform information on host-based firewalls, or data and instructions received via a protocol such as Universal Plug and Play (UPnP), information about, for example, the applications running at the time a security decision must be made can be helpful. If a particular application requires inbound connectivity on various ports, the overall risk profile of the protected network is lower if the ports are accessible only when that application is running. Similarly, information from within complex or encrypted protocol exchanges, such as IP addresses of systems expecting access, may provide context for allowed path restrictions.
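A minimal sketch of such a context-aware decision follows, assuming a hypothetical table mapping ports to the applications that registered them: inbound access is granted only while the registering application is actually running. The port numbers and application names are illustrative.

```python
# Sketch of a context-aware host firewall decision. The port-to-application
# registrations are hypothetical examples; a real agent would learn them
# from the platform, or via a protocol such as UPnP.

REGISTRATIONS = {5060: "softphone", 8080: "dev-webserver"}

def allow_inbound(port, running_processes):
    """Permit inbound traffic on a port only while its app is running."""
    app = REGISTRATIONS.get(port)
    return app is not None and app in running_processes
```

When the application exits, the same ports revert to closed without any rule change, which is how host context lowers the protected network's overall risk profile.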

The host's shift from relying on network security measures to context- and environment-aware security measures provides more robust and flexible security wherever the host goes.

26.2.3 Firewall Platforms.

The changing problem of perimeter security has produced a succession of network security mechanisms designed to restrict allowed paths and inspect network traffic.

26.2.3.1 Routing.

Routers are the heart of TCP/IP networks, directing traffic from segment to segment. Router vendors recognized the need for security controls at the boundaries between different segments and networks, and implemented simple controls that could be activated with negligible performance impact.

ACL. Using explicit allow and deny statements, router access control lists (ACLs) restrict IP traffic based on source and destination addresses and ports. In addition, these controls can limit traffic based on other parameters, such as whether a packet is in response to an established connection or not. Routers inspect each packet in a vacuum, without any context of previous traffic.

Hardware modules. High-end routers today provide a hardware extensibility platform that permits additional modules (often referred to as blades) to be installed. These modules have access to the full-speed backplane of the parent device, and can be logically inserted into the process for handling traffic. Modules are available for major router platforms that provide each of the firewall architecture types.

26.2.3.2 Host Based.

Although routers did an effective job of implementing access control rules, it was clear that they were not suited for more complex requirements. Dedicated server-based firewalls were created to provide for additional capabilities, including protocol traffic inspection, contextual traffic inspection, comprehensive logging and alerting, and air-gap application gateways, which completely rebuild network packets to protect systems on internal networks.

These firewall applications typically are built on top of an existing operating system (OS). Various UNIX variants and Windows platforms are commonly used, often with special hardening or system-monitoring components added to the OS. In some cases, hardening is implemented to the extent that components of the network stack within the underlying OS are completely replaced.

The full OS permits firewalls with significantly greater functionality than is available on routers. Also, development scope and effort can be reduced by using commercial off-the-shelf (COTS) components and scripting languages and building on real-time process scheduling, network input/output (I/O), and file system functionality. Unfortunately, these benefits come at a price: added complexity, unpredictable configurations, third-party component interactions, and uncontrolled changes.

Modern host operating systems frequently include some form of integrated software firewall. In Linux, this firewall has become the foundation of a number of host-based firewall products, and is used for host protection as well as for the protection of additional networks. Windows and Mac OS firewalls are less general in their features, acting more like personal firewalls, and typically are used only to protect the individual host.

26.2.3.3 Appliance.

An extension of the host-based firewall concept, the firewall appliance is an effort to realize the benefits of a full OS upon which to build functionality while providing the controlled operating and maintenance characteristics of routers and network appliances. In taking control of the entire host, vendors can closely control software versions and configuration, and can prevent undesired system changes. In order to do so, vendors must increase their expertise to include appliance hardware and operating systems, and they face additional challenges as hardware vendors, as well as possible licensing fees for included software components.

A form of appliance that does not include vendor-supplied hardware has also become popular. So-called soft appliances typically require customers to acquire hardware fitting a rigid set of specifications and install the soft appliance by booting from a vendor-supplied disk.

26.2.3.4 Personal and Desktop Agent.

As the network perimeter faded, it became increasingly evident that hosts needed to be able to better protect themselves from threats. The most recognizable host protection is antivirus (AV). As security dynamics evolved, the pattern-based protections of AV could not provide adequate protection. The introduction of a software firewall helped reduce the host's footprint on the network. The addition of a Host Intrusion Prevention System (H-IPS) allowed the host greater visibility into finding and stopping different types of malicious packets. (See Section 26.3.2.1 for more details on H-IPS.)

Current-generation agents take advantage of the contextual and environmental awareness of the host's operating capabilities. This allows the agent to integrate deeper into the OS and provide protection where network security products cannot.

To be certain, these agents require as much, if not more, maintenance than their network counterparts. Pattern/definition files must be constantly updated, and client software must be kept current to remove the potential for exploitation. Adding to the support and configuration complexity, the host may be anywhere in the world.

26.2.3.5 Virtual.

Virtual firewalls consist of firewall software running on virtual machines under a hypervisor (such as VMware or Xen), protecting physical and virtual networks. As virtualization of networks and applications replaces physical deployments, network security architectures must be translated as well. The simple mapping of physical architectures into virtual environments is not sufficient, since the scope of compromise and consequences of failure of the hypervisor infrastructure must be taken into account.

A special case of the virtual firewall is the virtual appliance: This platform is an extension of the soft appliance concept wherein the vendor supplies a virtual machine image intended to run on a particular hypervisor on customer-specified hardware.

26.2.3.6 Embedded.

GSD functionality continues to become more extensible and affordable. This functionality can be repackaged as Web server-based plug-ins in order to build customized application firewalls. It can also be scaled down to the level of consumer and small to medium business (SMB) appliances.

Web server plug-ins integrate tightly with the Web server platform. This provides the ability to use predeveloped, downloaded signatures as well as to develop protections specific to custom Web applications and content. Although a plug-in increases the administrative overhead of developing and monitoring another security mechanism, it also provides a level of contextual understanding often unavailable to application gateways.

Consumer and SMB appliances typically provide switching, routing, and wireless connectivity. They give smaller operations more flexibility and the ability to incorporate GSD capabilities, such as a stateful inspection firewall, network intrusion prevention systems (N-IPS), antivirus, and beyond. Without such all-in-one devices, it may be infeasible for smaller organizations to get these robust security protections.

26.3 NETWORK SECURITY MECHANISMS.

Recognition of the value of network security mechanisms has changed the way systems are built and managed, from the largest government network to the individual personal workstation. Although IT managers have increased their expertise and recognized the need for these mechanisms, they often have unrealistic expectations about their capabilities. Network security mechanisms are far from the easy answer to Internet security concerns that some believe them to be. An understanding of the capabilities and roles of these components permits the most effective realization of their benefit, both direct and otherwise, without the undesired consequence of insufficient protection in other areas.

26.3.1 Basic Roles.

Network security devices provide for allowed paths, intrusion detection, and intrusion prevention and response.

26.3.1.1 Allowed Paths.

Although network security devices such as firewalls and proxy servers create a distinct physical perimeter between different networks, they also create a logical perimeter that extends to systems within protected networks. Just as the teller windows in a bank branch office restrict customers' interactions with bank personnel to those that are intended, the allowed path protections afforded by a firewall ensure that outside traffic is able to flow only in expected and intended ways.

The perimeter protection and allowed path control roles of the network security mechanism combine to form a least-privilege gateway that comprises the original firewall function. Network security professionals quickly learned that the dangers of external traffic could easily extend to the defined allowed paths. Significant security responsibility still rested with the destination host within the protected network. The many generations of network security mechanisms that followed focused on returning more of this responsibility back to the firewall or proxy server itself.

26.3.1.1.1 Tunneling.

When a GSD cannot accept the security responsibility for allowed path traffic, often for reasons of protocol support or performance, the traffic is tunneled through to an endpoint in the protected network. Tunneling fragments the responsibility for inspection and policy enforcement, and it changes the scope of compromise associated with a failure, since that failure may now occur within the protected network. Tunneled traffic can flow to and from both endpoints and networks; as discussed later in this chapter, situations involving traffic tunneled to networks have additional complexity.

In order to mitigate the risk associated with fragmented inspection and policy enforcement, considerable care must be taken to understand the full inspection and enforcement workflow within the bypassed GSD. Those workflow elements that cannot be performed on tunneled traffic must still be handled, so responsibility must be assigned to a downstream component. With most endpoint-to-endpoint tunnels, IP-layer policy can still be enforced at the GSD, while application-layer and protocol-aware stateful inspection must be performed downstream. In the case of network-to-network tunnels, however, the GSD does not have useful visibility or control. A downstream device with the necessary protocol support, or a separate GSD controlling the decapsulated traffic between the tunnel endpoint and the target network, must enforce IP-layer policy.
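The endpoint-versus-network distinction above can be summarized as an explicit assignment of responsibility. The following is a minimal sketch, not from the chapter itself; the tunnel-type names and responsibility labels are illustrative assumptions.

```python
# Sketch of assigning inspection responsibility for tunneled traffic,
# following the endpoint-vs-network distinction described above.
# 'gsd' = the gateway the tunnel passes through; 'downstream' = a
# component inside the protected network. Labels are hypothetical.
def enforcement_plan(tunnel_type: str) -> dict:
    """Map a tunnel type to where each policy layer must be enforced."""
    if tunnel_type == "endpoint-to-endpoint":
        # IP-layer policy can still be enforced at the GSD; deeper
        # inspection must move behind the tunnel endpoint.
        return {"ip_layer": "gsd",
                "stateful": "downstream",
                "application": "downstream"}
    if tunnel_type == "network-to-network":
        # The GSD has no useful visibility; everything moves downstream.
        return {"ip_layer": "downstream",
                "stateful": "downstream",
                "application": "downstream"}
    raise ValueError(f"unknown tunnel type: {tunnel_type}")

print(enforcement_plan("endpoint-to-endpoint")["ip_layer"])  # gsd
```

Writing the plan down this way makes it harder to leave a workflow element unassigned when a tunnel is approved.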

26.3.1.1.2 Antispoofing.

Network address spoofing is an ingredient in a variety of attacks aimed at exploiting invalid assumptions about the utility of network addresses for authentication, typically at the application layer. Gateway security devices have an opportunity to inspect traffic with knowledge of the network architecture and thus can recognize and prevent many spoofing attacks.

GSDs with knowledge of which addresses and networks exist on each physical interface are able to prevent nodes on one interface from spoofing addresses that belong on another. This capability is typically used to prevent external nodes from spoofing internal addresses, since internal systems and applications may incorrectly rely on internal source addresses to grant access. Spoofing attacks that involve one external address spoofing another, or one internal address spoofing another, often cannot be detected in this way, even when the spoofed traffic flows through the GSD.

Another form of antispoofing protection is the blocking of traffic claiming to originate in reserved or unallocated address space. Spammers and other attackers are frequently able to publish routes for networks in this address space and use it to dodge filters and other protections. A variety of lists of reserved and unallocated space are available, and some GSDs make automated use of standardized lists.
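The interface-based and bogon-based checks described above can be sketched with the standard `ipaddress` module. The interface map and the (deliberately short) bogon list here are hypothetical placeholders, not a real policy or a complete reserved-space list.

```python
import ipaddress

# Hypothetical map of firewall interfaces to the networks expected behind them.
INTERFACE_NETWORKS = {
    "outside": [],  # no internal prefixes should ever source from here
    "inside": [ipaddress.ip_network("10.0.0.0/8")],
}

# A few reserved/private ("bogon") prefixes; real lists are much longer.
BOGONS = [
    ipaddress.ip_network("0.0.0.0/8"),
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("127.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def spoofed(interface: str, src: str) -> bool:
    """Flag traffic whose source address could not legitimately
    appear on the given interface."""
    addr = ipaddress.ip_address(src)
    if interface == "outside":
        # External traffic claiming an internal or bogon source is spoofed.
        internal = any(addr in net
                       for nets in INTERFACE_NETWORKS.values()
                       for net in nets)
        return internal or any(addr in net for net in BOGONS)
    # Internal traffic must source from a network known on that interface.
    return not any(addr in net for net in INTERFACE_NETWORKS[interface])

print(spoofed("outside", "10.1.2.3"))   # True — internal address from outside
print(spoofed("inside", "10.1.2.3"))    # False — legitimate internal source
```

Note that, as the text points out, this style of check says nothing about one external address spoofing another.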

26.3.1.1.3 Network Address Translation.

Network address translation (NAT) was originally intended to combat the shortage of IPv4 addresses by connecting blocks of privately addressed space using a small number of real IP addresses. Due to a few beneficial side effects, NAT has come to be considered a security tool and a necessary capability of GSDs. These side effects include obfuscation of private network size and topology, a degree of stateful filtering, and endpoint address privacy. Although there is some value to the obfuscation of private networks and endpoint address privacy, the stateful filtering benefit is no substitute for true inspection of allowed path traffic. Use of NAT adds complexity that must be understood in order to express clearly the allowed path policy in the configuration of a GSD.
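The "degree of stateful filtering" side effect mentioned above follows directly from how a NAT translation table works: inbound packets are accepted only if they match a mapping created by earlier outbound traffic. A minimal port-address-translation sketch, with illustrative addresses and a hypothetical port range:

```python
import itertools

class NatTable:
    """Minimal NAT sketch: many private hosts share one public address;
    inbound packets are accepted only when they match an existing
    outbound mapping (the 'stateful filtering' side effect)."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self._ports = itertools.count(40000)   # next public port to hand out
        self.out_map = {}                      # (priv_ip, priv_port) -> pub_port
        self.in_map = {}                       # pub_port -> (priv_ip, priv_port)

    def outbound(self, priv_ip: str, priv_port: int):
        """Translate an outbound connection, creating a mapping if needed."""
        key = (priv_ip, priv_port)
        if key not in self.out_map:
            pub_port = next(self._ports)
            self.out_map[key] = pub_port
            self.in_map[pub_port] = key
        return self.public_ip, self.out_map[key]

    def inbound(self, pub_port: int):
        """Unsolicited inbound traffic has no mapping and is dropped."""
        return self.in_map.get(pub_port)

nat = NatTable("203.0.113.1")
print(nat.outbound("10.0.0.5", 51000))  # ('203.0.113.1', 40000)
print(nat.inbound(40000))               # ('10.0.0.5', 51000)
print(nat.inbound(40001))               # None — no mapping, dropped
```

The sketch also illustrates the complexity the text warns about: the allowed-path policy must be expressed in terms of the translated, not the private, addresses.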

26.3.1.2 Intrusion Detection.

Another primary role of the network security mechanism is that of intrusion detection: sounding an alarm when all is not well with the network perimeter. Depending on how these mechanisms are deployed, alerts may provide extremely valuable information about real problems or a torrent of information about attempted attacks rather than actual intrusions. Tactics for addressing these issues will be discussed in detail later in this chapter.

When network security mechanisms are working properly, intrusion detection information is really threat-level information, useful in maintaining knowledge of the background levels of hostile activity directed at the protected network. Tests have shown that new Internet hosts are probed and attacked within hours of being placed online and are probed almost continuously thereafter.

Some firewalls incorporate pattern-matching features, such as those found in dedicated intrusion-detection systems, in order to detect hostile traffic along allowed paths. Similar in some ways to virus scanning via pattern matching, this method can detect certain known attacks on specific protocols.
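Pattern matching of this kind reduces, at its simplest, to searching allowed-path payloads for known byte sequences. The signature set below is a hypothetical illustration, not a real rule set.

```python
# Hypothetical signature set: byte patterns associated with known attacks
# on specific allowed-path protocols.
SIGNATURES = {
    "http": [b"../../", b"<script>", b"cmd.exe"],
}

def match_signatures(protocol: str, payload: bytes) -> list:
    """Return the known-attack patterns found in an allowed-path payload."""
    return [sig for sig in SIGNATURES.get(protocol, []) if sig in payload]

hits = match_signatures("http", b"GET /../../etc/passwd HTTP/1.0\r\n")
print(hits)  # [b'../../'] — a directory-traversal attempt
```

As with virus scanning, this approach detects only attacks whose patterns are already known, which is why it complements rather than replaces allowed-path restriction.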

Since multiple firewalls and proxy servers often are used in a given architecture, intrusion-detection data also can report actual security failures. When a network security mechanism observes and rejects traffic that architecturally should never have been present, security failure of an upstream device is possible.

See Chapter 27 of this Handbook for more details on intrusion detection and prevention.

26.3.1.3 Intrusion Prevention/Response.

Network administrators, responsible for reacting to the alerts from firewall intrusion-detection components, knew that there had to be a more efficient way to deal with these critical events. Firewall vendors began to integrate various types and levels of intrusion response capability into their products, producing automated responses to intrusion detection alerts.

Connection termination. The simplest of intrusion response capabilities, connection termination involves the firewall terminating a specific allowed path connection from a specific address and port when intrusion-detection components detect traffic that matches known attack patterns. Typically implemented in TCP via an RST, or connection reset command, this functionality also can be implemented on connectionless UDP (User Datagram Protocol) allowed paths through packet dropping.

This intrusion response capability is effective at blocking known attacks on allowed paths but suffers from several drawbacks. Unfortunately, skilled attackers can use this capability to create a denial of service against legitimate clients on specific ports. In many cases the clients rebuild their connections, but a successful attack might deny service completely for some period of time. Also, this technique is not useful for preventing attackers from attempting additional, perhaps unknown, attacks following connection termination of initial attacks.

Dynamic rule modification. The dynamic rule modification technique takes connection termination to the next level, ensuring that attackers are prevented from attempting further attacks from the same address. By dynamically modifying the network security mechanism rule base to block traffic from the offending address, further potential attacks are blocked.

This technique addresses one of the failings of connection termination, namely the exposure to unknown attacks that might follow known attacks, but it is even more exposed to the denial of service issues just discussed. Since dynamic rule modification creates a semipermanent barrier for traffic from a given address, attackers can deny service to broad groups of users for an extended period. Since many enterprises and Internet service providers (ISPs) use only a few proxy server source addresses for all traffic, an attack could quickly deny service for a very large group, such as all AOL users.
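One common way to bound the denial-of-service exposure just described is to make the dynamically added rules expire. The following sketch, with a hypothetical timeout, shows the idea; real products vary widely in how blocks are aged out.

```python
import time

class DynamicBlocklist:
    """Sketch of dynamic rule modification: offending source addresses are
    blocked for a limited time, bounding the denial-of-service exposure
    discussed above. The timeout value is an illustrative assumption."""
    def __init__(self, block_seconds: float = 600.0):
        self.block_seconds = block_seconds
        self._blocked = {}          # address -> expiry timestamp

    def report_attack(self, address: str, now: float = None):
        """Add (or refresh) a block for an offending source address."""
        now = time.time() if now is None else now
        self._blocked[address] = now + self.block_seconds

    def allowed(self, address: str, now: float = None) -> bool:
        """Check an address against the rule base; prune expired blocks."""
        now = time.time() if now is None else now
        expiry = self._blocked.get(address)
        if expiry is not None and now < expiry:
            return False
        self._blocked.pop(address, None)   # expired entries are removed
        return True

bl = DynamicBlocklist(block_seconds=600)
bl.report_attack("198.51.100.7", now=1000.0)
print(bl.allowed("198.51.100.7", now=1100.0))  # False — still blocked
print(bl.allowed("198.51.100.7", now=1700.0))  # True — block expired
```

Even with expiry, a spoofed source address can still deny service to a legitimate client for the block interval, so the trade-off the text describes remains.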

System-level actions. Most network security mechanisms perform internal monitoring of component processes and the underlying operating system. In the event of internal problems or evidence of compromise, or certain external intrusion detection events, system-level action can be initiated.

Actions taken can range from firewall interface deactivation to firewall system shutdown. It is important to test firewall shutdown behavior carefully before using this response, as several firewall products have in the past, upon shutdown, permitted open routing of traffic via the underlying operating system.

Application inspection. Within allowed paths such as HTTP, it is now common for firewalls to look for a variety of exploits and vulnerabilities related to the implementation of Web browsers and applications. Attacks such as cross-site scripting (XSS) and SQL query injection can be detected by Web application firewalls and addressed at an application level in addition to more blunt, conservative network-level responses.

Antimalware. Within allowed paths, given sufficient visibility and context by the gateway security device, techniques such as session hijacking and quarantine can be used to respond to malware and other content-related threats. For example, a firewall might hijack a session between a protected browser and an outside Web server in order to prevent the download of malware-infected files, while continuing to download the file into quarantine.

(See Chapter 27 in this Handbook for more information about intrusion detection and intrusion prevention.)

26.3.2 Personal and Desktop Agents.

Although firewalls and GSDs play a crucial role in protecting the overall network infrastructure, it is neither cost-effective nor architecturally sound to deploy these devices at every point in the network. Individual hosts must have a way to protect themselves from threats independent of network security devices. Unlike the network, a host has a contextual understanding of what the system can be, is, or should be doing. This understanding provides the ability to create granular controls around what can access, or leave, the host. Beyond simple allow and deny functionality, contextual security measures can detect unexpected system configuration changes, such as service changes or an application behaving in an unexpected manner.

26.3.2.1 End Point Protection.

When a mobile end point leaves the internal network, it becomes an extension of the network protection profile. By providing adequate local protections at the network and application levels, the mobile end point helps to mitigate potential issues upon reconnecting to the internal network. It is also crucial to determine the level of protection necessary and how each of these additional levels of security affects system performance, management, and end user impact.

  • Network. Hosts need to be able to determine the types and appropriateness of inbound and outbound traffic. These network restrictions may vary between the internal network and uncontrolled networks, and in certain circumstances all network traffic may be suspect and scrutinized further. For example, while on the internal network, the host might allow inbound ICMP echo requests; on an uncontrolled network, it would not allow any nonestablished packets. Simple host-based network protections are not enough; additional host protection mechanisms, such as intrusion prevention, can detect and stop network- and application-based attacks.
  • Application access. The host's contextual awareness also helps to dictate the ability for applications to send and receive data. The goal is to ensure that only the appropriate applications or services have access to the network. The host protection policy may allow applications to establish outbound connections but never listen or accept nonestablished inbound packets. For example, an HTTP server uses a daemon to listen for connection attempts. The HTTP client will attempt to make a connection to the HTTP server. By using a host protection mechanism, it would be possible to prevent one or both of these actions.
  • Hybrid protections. The host intrusion prevention system (H-IPS) functions similarly to a network intrusion prevention system (N-IPS) by detecting known attack patterns and anomalous behaviors. For example, an ICMP echo request with a payload of 64 bytes of hex 0xAA does not violate any Request for Comments (RFC) specification or seem menacing, but this single packet was the precursor reconnaissance packet used by the Nachi worm. Hybrid host protections build on the host's contextual awareness and provide the ability to monitor other unusual application-level activity, such as changes to binaries, service manipulation, and spawned listeners.
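The Nachi example above amounts to a very specific payload check that an H-IPS can apply to each ICMP packet. A minimal sketch (ICMP type 8 is an echo request; the detection logic is an illustration, not a vendor's implementation):

```python
# Sketch of the hybrid-protection check described above: flag an ICMP echo
# request whose payload is exactly 64 bytes of 0xAA, the reconnaissance
# packet associated with the Nachi worm.
NACHI_PAYLOAD = b"\xaa" * 64

def nachi_precursor(icmp_type: int, payload: bytes) -> bool:
    """Return True if the packet matches the Nachi precursor pattern."""
    return icmp_type == 8 and payload == NACHI_PAYLOAD  # type 8 = echo request

print(nachi_precursor(8, b"\xaa" * 64))   # True — matches the precursor
print(nachi_precursor(8, b"\x00" * 64))   # False — ordinary echo payload
```

The point of the example is that the packet is RFC-legal; only a signature keyed to attacker behavior, not protocol violations, would catch it.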

26.3.3 Additional Roles.

Additional functions carried out by network security devices include encryption, acceleration, content control, and support for IPv6, the new version of the Internet Protocol (IP).

26.3.3.1 Encryption.

Security and practical concerns have prompted the inclusion of encryption technology in network security mechanisms. Valid concerns over centralizing responsibility for security decisions, components, and perimeter protection, together with cost and complexity savings, have resulted in a variety of hardware and software encryption solutions becoming part of firewalls and proxy servers.

(See Chapter 7 in this Handbook for more details on encryption.)

Inspection. Two approaches are commonly used by gateway security devices to inspect encrypted traffic. The first, termination, requires that the encrypted communication have its endpoint at the firewall so that inspection and control of the plaintext communication can occur. The communication may then be re-encrypted for transit to the intended endpoint. The second approach, passive decryption, involves out-of-band decryption of the communication by a firewall, using escrowed keys. The passive decryption approach can be subject to a variety of issues and weaknesses common to intrusion-detection technologies if the decryption and inspection are not synchronous with control of the encrypted channel.

VPN. Virtual private networks (VPNs), which extend the security perimeter of a network to include remote systems as if they were on an internal network, have increased in popularity as a mechanism for allowing remote enterprise access without extensive hardware infrastructure. VPNs over the public Internet are most commonly used in this role.

Savvy network administrators realize that remote VPN clients are very different from true internal hosts and seek a way to mitigate the risk that comes with them. Using the perimeter protection and allowed path-control capabilities of the firewall, it is possible to create a special rule base specifically for VPN clients. Certain GSD vendors also provide the ability to peer inside the encrypted tunnel.

The P in VPN, which stands for private, is implemented through encryption technology. When the firewall is responsible for allowed path control on traffic from remote VPN clients, it must be able to deal with unencrypted traffic. Rather than place additional servers or appliances outside the firewall, where they might be vulnerable to Internet attacks, vendors chose to integrate the encryption technology directly into the firewall.

(See Chapter 32 in this Handbook for more details of VPNs.)

26.3.3.2 Acceleration.

SSL, or Secure Sockets Layer, is the standard encryption protocol for protecting Web-related network traffic. In order to centralize acceleration hardware, enable intrusion detection and allowed-path inspection, reduce Web server load, and simplify secure Web implementations, vendors have integrated support for SSL, frequently using hardware acceleration, into network security mechanisms.

The active termination or passive decryption of SSL traffic at a centralized location requires significant processing power, and acceleration is often necessary to permit these activities without impacting the performance of protected traffic.

26.3.3.3 Content Control.

The inspection of content along various allowed paths is performed most easily at a choke point, where all of the traffic flows through one set of components, resulting in the integration of content inspection functionality in network security mechanisms.

26.3.3.3.1 Content Filtering.

Content filtering is not strictly a security capability. In most cases, this technology permits policy enforcement with respect to the actions of internal rather than external users. Business policy regarding the use of enterprise resources, for example, is often enforced through HTTP content inspection and filtering. HTTP (Hypertext Transfer Protocol) and SMTP (Simple Mail Transfer Protocol) filtering are used to isolate users from undesired materials, such as those the organization might consider offensive.

This technology, which is far from perfect, uses a variety of approaches to filter content. Address-based filtering, which uses IP (Internet Protocol) addresses of destination Web sites, for example, is efficient and easy to implement. This technique requires constant updates to a blocked address list, however, and can easily block sites unintentionally due to virtual hosts sharing IP addresses. Name-based filtering is a slight improvement, using actual domain and resource names, but it still suffers from the list management problem.

In an attempt to address the list management issues, a resource-intensive technique based on real-time content scanning was developed. This technique, which can incorporate anything from keyword scanning to image analysis, also results in significant erroneous filtering.
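The three approaches just described (address-based, name-based, and content-based) can be layered from cheapest to most expensive check. A hedged sketch follows; the blocklists and keyword are illustrative placeholders, not a real filtering policy.

```python
# Sketch of the three filtering approaches described above. The lists and
# keywords are hypothetical placeholders, not a real policy.
BLOCKED_ADDRESSES = {"203.0.113.50"}
BLOCKED_NAMES = {"blocked.example.com"}
BLOCKED_KEYWORDS = [b"forbidden-term"]

def filter_request(dest_ip: str, hostname: str, body: bytes) -> str:
    """Apply checks from cheapest to most expensive, first match wins."""
    if dest_ip in BLOCKED_ADDRESSES:
        return "blocked: address"    # cheap, but over-blocks virtual hosts
    if hostname in BLOCKED_NAMES:
        return "blocked: name"       # still needs constant list maintenance
    if any(kw in body for kw in BLOCKED_KEYWORDS):
        return "blocked: content"    # resource-intensive real-time scan
    return "allowed"

print(filter_request("198.51.100.9", "example.org", b"hello"))  # allowed
print(filter_request("203.0.113.50", "other.example", b""))     # blocked: address
```

The address check's weakness is visible in the sketch: every virtual host sharing a blocked IP address is blocked along with the intended target.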

(See Chapter 31 of this Handbook for more details on content filtering.)

26.3.3.3.2 Antimalware.

Virus scanning within network security mechanisms takes various forms, from SMTP message and attachment scanning to HTTP traffic inspection. Typically based on existing pattern-recognition virus-scanning systems, this integration sometimes loops traffic through dedicated scanning systems rather than performing the work on the firewall or proxy server itself.

26.3.3.3.3 Active Content.

Active code, such as Flash, QuickTime, ActiveX, VBScript, and JavaScript, can pose a security threat to internal systems. Many network security mechanisms have been enhanced to support filtering or scanning of these components on certain allowed paths.

On HTTP connections, for example, active code filtering might simply prevent the transfer of certain types of active code. More sophisticated scanning technology can be used to identify hostile code through pattern recognition or sandbox execution.

SMTP connections have seen significant new attention in this area due to active code weaknesses in popular mail clients. The scanning and filtering of active code in SMTP traffic is now being used to address a new breed of e-mail viruses.

(See Chapter 17 of this Handbook for more details on mobile code.)

Complex protocols. The complexity of communications over encrypted paths, such as HTTP over SSL, increases the risk of tunneling encrypted traffic beyond perimeter checkpoints. Where this tunneling does occur, application firewalls should be considered to manage the allowed path at its endpoint.

Serialized objects and code. One example of the risk of complex allowed paths is evident in the serialization of code in objects, in languages such as Java, when those objects are then communicated across trust boundaries over protocols such as HTTP. If code is serialized in objects on a trusted server, communicated to an untrusted system and then back to the trusted server, and evaluated on the trusted server, there is potential for compromise. Code should never be evaluated on a trusted system if it has not been continuously in the custody of trusted systems.

26.3.3.3.4 Caching.

Proxy server vendors quickly realized that their devices could dramatically reduce Internet bandwidth consumption and improve internal performance by caching frequently requested items on various protocols. HTTP, FTP (File Transfer Protocol), and streaming media caches are common on enterprise networks.

26.3.3.3.5 Policy Enforcement.

As with the inclusion of content filtering in firewall products, the network gateway is a logical place for deployment of other policy enforcement solutions. In order to protect data assets from loss through allowed paths, and to ensure compliance with the various regulations discussed in Section 3 above, firewall products now implement controls such as requiring encryption of certain communications and scanning communications for sensitive information.

26.3.3.4 IPv6.

Internet Protocol Version 6 (IPv6) is the successor to IPv4, the current standard protocol for Internet communication. IPv6 has many features with security implications, but a few key issues should be considered during the transitional period. (For more information about IPv4 and IPv6, see Chapter 5 in this Handbook.)

Support and compatibility. Gateway security devices used to protect IPv6 networks must, at minimum, support inspection and control of a few key protocols: neighbor discovery (ND), router solicitation/advertisement (RS/RA), and multicast listener discovery (MLD).

Stateless address autoconfiguration, which allows IPv6 nodes to automatically address themselves and discover their routers through ND and RS/RA, may break the user/address audit trail in some environments. In these cases, MAC addresses must be used to uniquely identify hardware nodes.

IPv6 resolves the address-space shortage of IPv4; the IPv4 address space is 2^32 (roughly 10^9) addresses, compared with the IPv6 address space of 2^128 (roughly 10^38). A widely circulated illustration of the difference is that if the IPv4 address space were represented as a square about 1 inch on a side, the IPv6 address space would be a square roughly the size of our solar system. Although the increased address space of IPv6 does mitigate some of the risks of port scanning and other topology discovery, the introduction of new resource-specific multicast addresses does allow for some discovery. Multicast- and MLD-aware GSDs must block these multicast addresses at the perimeter to prevent this.
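The arithmetic behind the comparison is easy to check directly:

```python
# Quick arithmetic behind the address-space comparison above.
ipv4 = 2 ** 32
ipv6 = 2 ** 128
print(f"IPv4:  {ipv4:.2e} addresses")   # 4.29e+09
print(f"IPv6:  {ipv6:.2e} addresses")   # 3.40e+38
print(f"ratio: {ipv6 // ipv4:.2e}")     # 7.92e+28 (= 2**96)
```

The ratio of 2^96 is what makes brute-force scanning of an IPv6 subnet impractical, even though multicast-based discovery remains possible.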

The most important transitional consideration is awareness of IPv6 traffic tunneled over IPv4. Tunneled traffic should not be allowed to transit GSDs unless the GSDs are capable of enforcing the same policies on the tunneled IPv6 traffic as would be enforced on IPv4 traffic. Organizations connecting IPv6 networks via the IPv4 Internet should consider the impact of IPv4 to IPv6 gateways and the possibility of spoofing IPv6 addresses by attackers on the IPv4 Internet; the advantages of IPv6 antispoofing features are negated when IPv4-to-IPv6 gateways are in use.

Specific issues. The address space shortage of IPv4 led to the creation of NAT, and NAT is not intended to persist following the transition. Given the current use of NAT as a security tool, there are some specific risks during the transition. As nodes are deployed with both IPv4 and IPv6 network stacks, the IPv6 network may expose unprotected nodes when the protections were previously implemented via a NAT layer. For example, when the blocking of all externally originated connections is implemented via NAT, the presence of an IPv6 stack will allow external nodes to directly connect to internal nodes on all listening ports.

The introduction in IPv6 of true network mobility, which enables nodes to communicate seamlessly using a single IP address while roaming across multiple networks, challenges the traditional perimeter security model. For example, if a user on an internal network accesses an internal system from a laptop at work and then takes the laptop to lunch at a nearby café with Internet access, the TCP session to the internal system could be uninterrupted despite the fact that the laptop has moved from a local, internal network behind a GSD-protected perimeter to a remote, public network outside the perimeter. Although the remote access seems as though it should be handled as VPN traffic would be on the IPv4 network, the dynamics of movement across the perimeter complicate any stateful aspect of inspection. Significant work will have to be done by GSD vendors during the transition to account for mobility without threatening the integrity of stateful inspection capabilities.

26.4 DEPLOYMENT.

The configuration and topology of network defenses must include consideration of firewall architectures, placement, monitoring, and management.

26.4.1 Screened Subnet Firewall Architectures.

The external router is the architecture's first line of defense against attacks from the outside world. The ACLs on this router should mirror the allowed-path configuration of the external firewall interface, in order to provide the front half of a screened subnet on which the firewall will operate. This screened subnet provides several important benefits.

The firewall is able to operate at maximum efficiency, since traffic rejected based on packet-filtering rules normally would never reach the firewall. This permits the firewall to focus, in terms of load, on protocol inspection. The firewall is able to respond immediately to unexpected conditions. If, for example, the firewall inspects a packet that should never have passed the external router's ACLs, the firewall can assume that the router is not behaving normally. The firewall is then free to respond appropriately, with such actions as terminating all connections from a specific host.

26.4.1.1 Service Networks.

The increasing demands for mobility and accessibility place a heavy burden on Internet-facing systems. Externally available functionality is outpacing the usefulness of the traditional DMZ, and as the number of Internet-facing systems grows, so does the administrative overhead of managing external access.

The service network principle breaks the old DMZ conglomerate into external networks that are easier to manage and protect. Instead of lumping systems such as Web, DNS, and e-mail onto a single network, there may be an advantage to splitting them up. Utility servers such as DNS and e-mail could logically share one network. Web servers can demand a great deal of bandwidth, but under this concept the entire Web service network can be protected with a simple inbound access rule. Extranet systems create additional complexity, since they provide the user interface while internal systems provide the relevant content. The service network principle provides more flexibility in allowing external connections to reach the extranet servers while granting those same servers access to internal resources.

26.4.1.2 Redirect Back-End Traffic through the Firewall.

Remote access systems such as VPN connection points provide a unique challenge. Even though the front-end traffic arrives encrypted, there is still a heavy risk in providing a back-end link directly into the internal network. Instead, the encrypted traffic should arrive through one firewall interface, and the internal-bound traffic should use a separate, unencrypted interface routed back through the firewall to the internal network. This type of deployment provides extra internal network protection in the event the VPN device is compromised, and it creates a traffic inspection point that is unimpeded by encryption.

26.4.2 Gateway Protection Device Positioning.

The increased use of encrypted protocols such as SSL and IPSec can create havoc when attempting to provide robust network protections. Although certain GSDs have the ability to terminate encrypted sessions, the increased processing and bandwidth requirements may exceed the limits of the device. Instead, the security architecture should deploy these countermeasures at strategic locations that avoid encrypted traffic, so the GSD can focus on its primary role of detecting and preventing malicious activity.

Inline. During packet analysis—whether for troubleshooting or for intrusion detection—the typical procedure is to configure a span port that replicates all data from one or more switch ports to the monitoring port. This approach allows the flexibility to move passive monitors to different locations without affecting traffic flow, but it has two primary drawbacks. First, there is the potential to overwhelm the monitor's bandwidth: if the monitor connects at 100 Mbps but is monitoring two saturated 100 Mbps ports, there is a high probability that it will miss traffic. Second, passive devices do not normally provide protection capabilities; because a passive device does not sit directly in the traffic path, additional packets may pass before a connection can be closed down.

By placing the GSD inline, there are several specific advantages. This configuration provides a choke point requiring all network traffic to flow through it. If the device handles wire speeds on each interface, then there is little potential to miss network traffic. This method also allows active prevention measures to occur. When a malicious packet enters the GSD, protocol analysis will detect the anomaly and will not allow it to flow out the other interface. Although bandwidth limitations are a typical concern, improperly configured inline devices may also present a denial-of-service condition. With proper infrastructure planning and deployment, it is possible to minimize these risks.

Avoid encrypted traffic. Encryption is the typical blind spot for any GSD: encrypted malicious traffic can come and go at will simply because the GSD cannot evaluate the payload. Since mobile devices frequently venture outside the controlled network, the only logical place to evaluate traffic is on the unencrypted side of the connection. This may be the back side of an SSL terminator (in some cases on the server itself) or the unencrypted side of a VPN connection.

26.4.3 Management and Monitoring Strategies.

Deploying network security devices is never a plug-and-play endeavor. It is essential to take additional steps to define the security requirements for managing and monitoring GSD components. This approach helps to ensure a well-rounded security posture.

26.4.3.1 Monitoring.

Firewalls and GSDs provide complex functionality, and monitoring such systems must go beyond just verifying that the system is available. Monitoring mechanisms should cover the areas of device health, availability, and integrity.

  • Health. Firewalls and GSDs must be solid performers, but these systems require extra administrative attention. Metrics such as processor utilization, available RAM, and number of connections all have an impact on the overall functionality of the system. A centralized management console may provide the ability to monitor these metrics and to issue alerts. If this functionality is unavailable, it may be necessary to use monitoring protocols such as SNMP and/or RMON to gather these statistics. The GSD must tightly restrict the systems able to poll using these methods, because of the inherent insecurities of the monitoring protocols. By observing the trends of these metrics, it may be possible to determine when it is time to increase bandwidth or to purchase systems that are more robust.
  • Availability. When GSDs are unavailable, the functionality of the network can be dramatically reduced. A simple test of system availability is to ping the device using ICMP, confirming that it responds. Depending on the type and functionality of the device, this check may apply to more than one interface. However, this approach can be deceptive; just because the device itself responds does not mean it is properly processing traffic. It is also advisable to send an ICMP echo request or traceroute to a host on the other side of the interface. Make sure that this other device is actually accepting the packets, to ensure valid results. This approach provides a better overall picture of the availability of the GSD.
  • Integrity. The ability to trust network security systems components is paramount. A rootkit compromising a firewall or GSD is not out of the realm of possibility. These systems should have the ability to protect against modification of system components. This could occur by the device ceasing operation or alerting on the change. If this functionality is unavailable, it is possible to write a script that generates cryptographic hashes of the system components and verifies them against those of a known good version.
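The hash-verification script mentioned above can be sketched in a few lines; the manifest of known-good SHA-256 digests and the file paths are assumptions for illustration:

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_integrity(manifest):
    """Compare current file hashes against a known-good manifest
    (a dict mapping path -> expected digest). Returns the paths
    whose contents changed or which no longer exist."""
    failures = []
    for name, known_good in manifest.items():
        p = Path(name)
        if not p.exists() or sha256_of(p) != known_good:
            failures.append(name)
    return failures
```

In practice the manifest itself must be stored off the device (or signed), so that an attacker who modifies a component cannot also update its recorded hash.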

26.4.3.2 Policy.

The GSD policy is the core of providing and protecting allowed paths. These security systems process packets starting at the beginning of the policy and continuing until there is either a match, or the end of the rule base is reached. As discussed later in this chapter, there are situations in which certain rules process before or after the main rule base.

The ability to manage individual policies on the command line is no longer adequate. Centralized management consoles provide intuitive GUIs to configure and easily manage one or more firewall and GSD policies. Certain platforms also provide the ability to manage policies directly from the device.

Firewall-Allowed Paths. Allowed paths identify specific protocols used to implement communication. In a typical Internet environment, business services require allowed paths, such as HTTP, SSL (HTTPS), SMTP, and DNS. These requirements will vary, but in any environment, each allowed path should directly relate to a required external service.

Starting from an implicit or explicit (depending on the platform) deny-all rule, allowed paths will be added as allow rules, such as PERMIT HTTP, with specifics determined by the following sections.
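The first-match processing just described, ending in an implicit deny-all, can be illustrated with a minimal sketch; the rule fields and the sample policy are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    src: str      # source network name, or "any"
    dst: str      # destination network name, or "any"
    service: str  # protocol/port label, e.g. "http"
    action: str   # "permit" or "deny"

def evaluate(rules, src, dst, service):
    """Walk the rule base top-down; the first matching rule wins."""
    for r in rules:
        if (r.src in ("any", src) and
                r.dst in ("any", dst) and
                r.service in ("any", service)):
            return r.action
    return "deny"  # implicit deny-all when nothing matches

# A hypothetical two-rule policy over the implicit deny-all.
policy = [
    Rule("any", "web-dmz", "http", "permit"),
    Rule("internal", "any", "smtp", "permit"),
]
```

Rule order matters: a broad rule placed above a narrower one shadows it, which is one reason policy changes benefit from the secondary validation discussed later.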

The systems or groups of systems that send and receive traffic along a specific allowed path are the endpoints of that communication. Although network addressing does not provide effective authentication of systems or users, restrictive endpoints can make it much more difficult for an attacker to exploit an otherwise straightforward vulnerability. It is also important to identify the endpoints carefully, particularly in cases where these endpoints might reside on internal rather than service networks.

The direction of traffic, indicated by the source of the connection initiation, is useful for the rule definitions for several reasons. First, rules can be written so that only responses to internally originated allowed paths are allowed in from the external network, rather than permitting the protocol bidirectionally. In addition, the firewall may process rules at different times based on design or configuration.

Complexity of GSD policies. Standard firewall rules operate on simple Boolean principles: for example, allow network traffic that is going from host X to Y on port Z. The complexities required of GSDs evaluating network traffic are dramatically higher. For example, this evaluation could combine a Boolean test verifying that an inbound e-mail address is from a trusted source, a check that the message contents are acceptable, and a virus scan of any attachments. Administrators must understand the higher-level protocols to ensure that the GSD policy matches the types of protections expected and required. As the number and complexity of the rules increases, so do the processing requirements for the GSD.

Change management. Whether managing 1 or 100 policies, the key is to have a process to track policy changes. Change management can be cumbersome but has several advantages. First, it provides back-out information if a change were to cause issues. Second, it provides an audit trail of who requested the change and who changed the policy. Last, it provides a method for streamlining change requests by providing a single method for accepting, reviewing, implementing, and validating policy changes.

Secondary validation. Making changes to a policy tends to be a simple process, such as adding HTTPS access to a new extranet server. However, the more complicated the change, the more likely mistakes are to occur. Having another administrator evaluate the proposed change brings a fresh set of eyes that may catch a small discrepancy that could have negative consequences. This may be another step in the change management process.

26.4.3.3 Auditing/Testing.

Is your firewall or GSD working as expected? Would you bet your organization's future on it? Is the firewall truly only letting SMTP to the e-mail service network? Is the GSD catching the most recent virus-infected attachment? These are simple yet powerful reminders that you must know if the network security devices are working as expected. A proper regimen of auditing and testing will help to answer these questions. Management consoles should provide an audit log that tracks who makes changes to any part of the environment.

Penetration testing and vulnerability assessment are effective ways to determine the validity of policy rules. For example, the firewall should not allow an HTTP connection to a system configured for SSL access only. If this happens, was it a failure of the test rule, is there another rule higher in the rule base erroneously allowing this access, or is something unexpected occurring? The tools and processes must be in place to answer these questions before more serious violations occur.
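A test harness for such checks can be reduced to a table of expectations probed with TCP connection attempts; the host names and ports below are placeholders, and probes like these should only be run against systems you are authorized to test:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def validate(expectations):
    """expectations: list of (host, port, should_be_open) tuples.
    Returns the tuples whose observed state differs from policy."""
    return [(h, p, want) for h, p, want in expectations
            if port_open(h, p) != want]

# Hypothetical policy checks: HTTPS allowed, plain HTTP must be blocked.
checks = [("extranet.example.com", 443, True),
          ("extranet.example.com", 80, False)]
```

An empty result from validate() means every probe matched the intended policy; any returned tuple is a rule-base discrepancy to investigate.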

(For more information on vulnerability assessments, see Chapter 46 in this Handbook; for guidance on security audits and inspections, see Chapter 54.)

26.4.3.4 Maintenance.

Patching workstations and updating virus definitions are common practice, but this process is even more crucial for systems protecting the network. It is essential to test the validity and stability of an update to ensure that faulty or rogue changes do not interrupt production operations.

Patching. No system is inherently and infinitely secure, and it may be possible to subvert a security device due to some system vulnerability. It is crucial to monitor firewall and GSD Web sites for the most recent operating system and system component patches. By monitoring information sources such as Bugtraq and other message boards, it may be possible to ascertain timely information. This additional research could result in being able to determine and implement a temporary solution until the vendor can provide a permanent fix.

Vendors can take weeks or months to develop and release a patch. In certain cases, third-party patches will become available for unpatched, actively exploited vulnerabilities. While using these patches is an option, it is inadvisable due to the inability to verify the integrity and safety of the patch code. (For more information on patch management, see Chapter 40 in this Handbook.)

Pattern updates. Digital threats change constantly. GSDs must be as current as possible to try to protect against these threats. Automatic updates provide the smallest delta of exposure for pattern-based signatures. However, blindly trusting these updates in a production environment may have adverse effects. To avoid issues, there should be a procedure that enables new signatures in monitor-only mode to ensure that they do not cause adverse effects, before implementing full protection. If automatic updates are not a viable option, then testing will require a lab environment or the use of noncritical systems to vet the new patterns.

26.4.3.5 Logging and Alerting.

The main purpose of the firewall and GSD is protecting against threats, but it is also necessary to review allowed and denied traffic. In its crudest form, any network security device should provide the ability to log the various functions, from packets allowed and denied, to system changes. In addition, there should be an alerting mechanism that has the ability to contact someone (e.g., by e-mail or text message) if something violates a specified threshold.

Whether logs remain local or on a centralized management system, they take up a large amount of disk space, and if ignored, are worthless to keep. There should be a log review process. Instead of trying to determine the anomalies from an entire log set, it is helpful to remove known traffic so as to help expose the unknown or potentially malicious. For example, if you expect outbound SMTP traffic from a single network, you could filter out those known items and then be able to see more easily other nonauthorized hosts or networks originating SMTP traffic.
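The subtractive review described here, removing known-good traffic to expose the unknown, might look like the following sketch; the one-line log format and the authorized mail network are assumptions:

```python
import ipaddress

# Assumed: only this network is authorized to originate SMTP.
AUTHORIZED_SMTP_NET = ipaddress.ip_network("10.1.20.0/24")

def suspicious_smtp(log_lines):
    """Filter out SMTP entries from the authorized network, leaving
    only unexpected sources for review.

    Each line is assumed to look like: '<src_ip> <dst_ip> <service>'."""
    flagged = []
    for line in log_lines:
        src, _dst, service = line.split()
        if (service == "smtp"
                and ipaddress.ip_address(src) not in AUTHORIZED_SMTP_NET):
            flagged.append(line)
    return flagged
```

Subtracting the expected traffic first means the reviewer reads dozens of lines instead of thousands, and each remaining line is worth investigating.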

Logging is a highly subjective but necessary function. Organizations with a large Web-based footprint will generate a large number of HTTP- (and potentially HTTPS-) related events. Each new connection will create an entry in the log. If the firewall is not doing any payload inspection, the logs may be useful only for determining the IP addresses connecting to a Web site. If there are other metrics tracking this, it may be useful to turn off logging on selected rules to reduce the overabundance of logs that are unnecessary to review. Instead, by having a GSD do protocol inspection and logging, there is the potential of gaining a better understanding of whether packets are expected or malicious. Instead of weeding through many thousands of lines of firewall logs, you now would find immediately actionable information in the GSD log.

Alerting is complementary to and usually depends on the logging mechanism. Once the firewall or GSD generates a log, the administrator can configure alerting options when certain log conditions or thresholds exist. For example, if the administrator wants to know immediately if an internal system process changes state, it may be possible to configure an alerting event that will send an e-mail to notify of the unexpected change. By carefully determining alerting thresholds and notification methods, there is less opportunity to overload the administrator(s) with nuisance alerts.
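A threshold-based alerting pass of the kind described can be sketched as below; the metric names, limits, and notification hook are illustrative (a production system would send e-mail or a text message rather than call a local function):

```python
# Assumed alerting policy: metric name -> maximum acceptable value.
THRESHOLDS = {
    "cpu_percent": 90,
    "concurrent_connections": 50000,
}

def check_metrics(metrics, notify):
    """Call notify(message) once per metric exceeding its threshold.

    metrics: dict of current readings (e.g., polled via SNMP).
    Returns the list of alert messages generated."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            msg = f"ALERT: {name}={value} exceeds threshold {limit}"
            alerts.append(msg)
            notify(msg)
    return alerts
```

Tuning the limits in THRESHOLDS is exactly the "carefully determining alerting thresholds" step: too low and the administrator drowns in nuisance alerts, too high and real events go unnoticed.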

(For more information on logging, see Chapter 53 in this Handbook.)

26.4.3.6 Secure Configurations.

Developing and deploying a firewall or GSD policy is not the only step to protecting the network. Another crucial step is protecting network security devices by creating a secure configuration. In certain cases, the firewall or GSD vendor provides a secured or proprietary version of an operating system. This should not be a green light that the device is secure and ready for production. Rather, this provides even more reason to take the time to evaluate the security posture of the device thoroughly.

Once a secure configuration is set, it can then become the baseline configuration for the remainder of the systems. This provides a standard configuration, helping to ensure that each device functions and is secured in the same manner. There should also be a process to ensure the integrity of the secure configuration as well as to verify and test when valid updates occur.

Default configurations. Default system and policy configurations vary widely. Some policies have an implied deny, while others may allow any. In no case should any network security device deploy with a default configuration. It would also be useful to know if the network security device fails in an open or closed posture. Inline devices may fail open in an attempt to maintain traffic flow. Depending on the network, this may or may not be the optimal response. Although potentially disruptive, a fail-closed posture reduces the likelihood of anomalous or malicious traffic passing undetected.

Implied rules. Implied rules are separate from, but can be a part of, the default configuration. During the processing of the policy, the network security device will process the implied rules prior to packets getting to the policy rules. The implied rules may allow the firewall or GSD to process specific and known administrative traffic without using processing cycles to go through the rule base. It is essential to determine if these rules exist as well as to see if they match the desired security posture. If not, disable or modify the implied rules to fit specific protection needs.

Reduce ancillary exposures. Protecting the network security device is a critical part of the security of the overall infrastructure. The two most common ways to reduce these exposures are through the administrative console and vulnerability assessment. The management or system console can provide information such as default listening ports, services running, and so on. If a service is not critical to the functioning of the firewall, disable it to remove any threat of its becoming a compromise vector. Only specific administrative hosts should have direct access to the system. As with the firewall and GSD policies, it is useful to test that configuration changes are actually making the device more secure. By conducting a vulnerability assessment, you are able to determine if unexpected services are still available or if other vulnerabilities still exist.
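A lightweight follow-up to such a configuration change is to compare the ports actually accepting connections against an approved list; the expected-services set here is hypothetical, and such sweeps should only be run against devices you administer:

```python
import socket

def sweep(host, ports, timeout=0.5):
    """Return the subset of ports accepting TCP connections on host."""
    open_ports = set()
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.add(port)
        except OSError:
            pass
    return open_ports

def unexpected_services(host, ports_to_check, approved):
    """Ports that are open but not in the approved set."""
    return sweep(host, ports_to_check) - set(approved)

# Hypothetical baseline: only the management port should answer.
APPROVED = {4443}
```

A full vulnerability assessment goes much further than a TCP sweep, but even this quick diff catches a service that was supposed to be disabled and quietly came back after an update.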

26.4.3.7 Disaster Recovery.

The impact of a firewall or GSD outage can range from an annoyance to a critical event, depending on implementation, necessity, and disaster recovery planning. Since these systems represent part of the backbone of the security infrastructure, it is a necessity to provide continual, protected network access.

Fail-over/high availability. In situations where reliable access is a necessity, a high-availability (HA) configuration is essential. The HA deployment will typically have an active/standby pair, where the active member is processing live packets and the standby member is ready to take over in the case of a failure of the active member. It is also possible that the standby member is continually synchronizing connection information from the active member. In this scenario, there would be little to no loss of connectivity if the active member fails, because the standby would then become active, already having the connection information to allow continued processing of the current sessions as well as to service new sessions.

Load-balancing. Distributing the load between multiple systems is another way to reduce an availability exposure. If the load equally distributes between two or more systems, the failure of one does not eliminate all access. One caveat is that the load balancer must continue to route each connection to the same system on the back end, to maintain connection information. Otherwise, it may be possible for the load balancer to route traffic to another system that does not know about the established connection. In certain cases, clusters may be able to act as their own load-balancing group.

Backup/restore. Backup and restore functionality is the most rudimentary disaster recovery method. If the system crashes or suffers a compromise, there must be a way to recover the system in an efficient manner. By having a reliable backup process, it becomes easier to restore the device configuration in the event of a failure. Depending on the vendor, backups may consist of something as simple as a text file or something more complex like a GZIP file containing the crucial configuration information. The restore process may involve restoring the operating system and the security device configuration. This may be as simple as uploading the backup file, or it may require an out-of-band connection to do prework before uploading a preserved configuration. This process should include a method to move the backup to another system to eliminate a single point of loss.
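A rudimentary backup step of the sort described, archiving the configuration directory to a timestamped gzip tarball and recording its hash so the copy can be verified after it is moved off-box, might be sketched as follows (the paths are placeholders):

```python
import hashlib
import tarfile
import time
from pathlib import Path

def backup_config(config_dir, dest_dir):
    """Archive config_dir into dest_dir as a timestamped .tar.gz.

    Returns (archive_path, sha256_digest); the digest lets the
    off-box copy be verified against the original."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = Path(dest_dir) / f"gsd-config-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(config_dir, arcname="config")
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    return archive, digest
```

The returned digest should travel with the archive to its off-box destination; comparing hashes after the transfer confirms the copy that will be used for a restore is intact.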

(For more information on backups, incident response, business continuity planning, and disaster recovery, see Chapters 57, 56, 58, and 59 in this Handbook respectively.)

26.5 NETWORK SECURITY DEVICE EVALUATION.

As digital threats continue to evolve, so must network security devices. Internet connectivity costs continue to decline, and the ability to detect complex threats is a necessity. It is a mistake to think the current security systems and infrastructure will be viable for an extended length of time. In essence, the infrastructure is continually under review. One crucial time to revisit these items is when it comes time to replace firewalls and/or GSDs.

Effective security lies at the intersection of protection, functionality, and usability. Every situation will have unique challenges and opportunities to develop a better security solution. The sections that follow provide a framework for evaluating network security devices currently in-service or during a request for information (RFI).

26.5.1 Current Infrastructure Limitations.

It may be technically possible to do a one-for-one replacement of an existing firewall with a GSD, but it is important to review the current infrastructure to ensure maximum effectiveness of the new device(s).

  • Are the current network security devices past useful life?
    • This may include devices that are out of warranty, no longer supported by the vendor, and unable to keep up with current needs.
  • Is the Internet-facing infrastructure limiting GSD deployment options?
    • A DMZ architecture placing servers between the external router and external interface of the GSD would limit the device's visibility into most of the DMZ network traffic. In addition, placing the GSD in the allowed path of all encrypted traffic relegates the GSD to more of a packet filter than an extensible security solution.

26.5.2 New Infrastructure Requirements.

The dynamics of network and security infrastructures shift constantly. There may already be a known trend of rapidly escalating bandwidth usage or an increased reliance on encrypted communications with business partners and consumers. The prospects of adding the capabilities inherent in GSDs have a dramatic influence on these decisions.

  • Are there new demands for more bandwidth, encryption, or traffic inspection?
    • Determining these new demands includes taking into account observed or anticipated growth.
  • Is Internet-access redundancy becoming a necessity?
    • For disaster recovery and availability reasons, it may be important to consider a GSD that has built-in redundancy capabilities such as active/passive fail-over or clustering.

26.5.3 Performance.

Technological obsolescence is a common issue throughout the digital world.

  • Is the processing power of the current device(s) becoming a bottleneck?
    • High processor or memory usage on a security device is a good indicator that the system is at, or reaching, its maximum performance threshold.
  • Does the device have excess capacity for future needs?
    • Without extra processing capacity, a modest increase in network traffic, or enabling a new feature, could cripple the already resource-starved device.

26.5.3.1 Throughput.

The search for more bandwidth seems endless. Gigabit Ethernet continues to become obsolete in the core as 10 Gigabit becomes increasingly affordable and obtainable. The upcoming 802.11n wireless standard will soon be able to provide speeds over 100 Mbps—at least doubling the current wireless bandwidth maximums. Organizations should take a realistic approach before paying for higher-bandwidth solutions. If an organization is small or has little Web or electronic transactional presence, it is unlikely to produce large amounts of Internet traffic. In this case, there is no reason to purchase a device that guarantees multigigabit performance.

  • Does the device meet bandwidth requirements?
    • The GSD should be able to sustain high traffic loads without interruption or service degradation.
  • Does the device have excess capacity for future needs?
    • If the device is modular, it may be possible to increase memory, create new network segments, or increase bandwidth by adding or replacing a hardware module.
  • Is there sufficient processing power to handle temporary or sustained increases in traffic load?
    • Peaks in network traffic are inevitable. Having additional bandwidth capacity for existing GSD network segments will reduce the impact of these situations.

26.5.3.2 Implications of Added Features.

Many vendors tout near wire speeds—pushing into the multiple gigabits per second. These statistics do not mirror reality, as the standard throughput tests use a 64-byte UDP packet. In actuality, packet types and sizes vary constantly, but the vendors are still able to provide high-bandwidth throughput when using basic security and encryption mechanisms.

  • What type of performance impact does activating additional security features create?
    • The greater the number of additional functions and the more detailed the payload inspection, the more impact there will be on the device's processing power and bandwidth.
    • If encryption is added, hardware acceleration, where available, will offset much of this performance impact.

26.5.4 Management.

GSDs are not set-and-forget or plug-and-play devices. These devices can require substantial planning, configuration, monitoring, and maintenance. Having a robust management platform in a distributed GSD environment is essential to the success of the implementation.

  • Does the current environment have the management features needed or required?
    • Self-evaluation of the current management environment is usually fairly easy, as the administrator(s) already have likes, dislikes, nice-to-haves, must-haves, and must-goes. This information will provide additional guidance while working through the criteria listed here.
  • Do you need or want distributed or centralized control of the network security devices?
    • The size and type of GSD deployment is important to consider. It probably does not make sense to use an extensive management platform for a few devices. With larger geographically distributed environments, having centralized control of all devices is necessary for efficient and effective control.
  • Do you need redundant management systems?
    • As with GSD redundancy, it may be necessary to have redundant management systems to ensure administrative control and logging of GSD devices. Be sure to review management-system redundancy options, as certain platforms provide only a reduced set of functionality when using the secondary management system.
  • Do you want to be able to manage the system from a centralized management console or directly from the device?
    • Certain types of GSDs allow system and policy changes to be made on the device. This feature can be useful during testing but may cause confusion or misconfiguration if not properly understood and managed.
  • Do you require encryption to protect management, policy, and logging functions?
    • The need to encrypt sensitive information is second nature to security professionals, but it may be easy to overlook the transmission of GSD management and logging functions. If sent clear-text, it would be possible for someone to monitor these communications and learn a great deal about the network security architecture.
  • Do you need a more detailed reporting mechanism?
    • Most logging systems provide the ability to filter data, but it may be useful to do more in-depth reporting. Some example reports include rule-based hit percentages (i.e., what rules receive the highest percentage of hits) or traffic usage statistics (e.g., percent usage by protocol, network segment, or host).
  • What amount of granularity do you need for management permissions?
    • If there are only one or a few administrators, this may not be as significant. Larger deployments may have distributed administrators, and the GSD management system should have the ability to restrict access to certain tasks, areas, or logs.
  • Do you want drag-and-drop or command-line policy or device management options?
    • Certain GSD brands offer a more traditional command-line interface for configuring devices, which can be useful for quick troubleshooting and scripted changes. A drag-and-drop graphical environment uses the central management system to create, change, and view policies and logs easily.

26.5.4.1 Logging and Alerting.

Although the primary responsibility of the GSD is to stop malicious traffic, logging and alerting play a critical role in the overall security posture. Logging is essential to investigating incidents and viewing trends, and may even be a regulatory requirement. Some devices do text-based logging using mechanisms such as syslog; others may use a proprietary logging format and database. For more information on logging, see Chapter 53 in this Handbook.

  • Do you want a graphical interface to view logs?
    • Collecting, searching, and reviewing logs in a text format does have some advantages, including consolidation and scripting for log reviews. A graphical interface increases the aesthetic appeal of the logs and can ease viewing by grouping or filtering different types of events, as well as adding color and graphics that catch the eye more quickly. These additional features are useful, but graphical interfaces may be slower to load and may restrict the ability to create complex queries.
  • Do you want compressible logs?
    • Even with well-maintained logging standards and grooming processes, many security administrators will attest to the overwhelming volume of logs. Add the additional logs from the remaining GSD functions, and it is possible to create a binary ocean of data. Internal and regulatory requirements may make it necessary to keep this data for long periods of time. Text-based logs lend themselves to high compression ratios and archiving. Some of the proprietary formats may automatically provide some compression but may not be able to match that of nonproprietary formats.
  • How do you want to be able to filter logs?
    • Graphical interfaces provide the ability to filter logs based on one or more criteria but can fall short when attempting complex or recursive queries. Text-based logs require scripts to parse through logs, but they have the advantage of being able to use complex queries, and they have the ability to analyze logs virtually anywhere (instead of requiring a specific application or interface).
  • How do you want to store logs?
    • Some GSD logging options include storage on the local device, on an integrated logging and management platform, or on a separate storage device. Local storage does create the risk of data loss if the device fails. Sending the logs to other systems increases network traffic, but it allows easier management of the logs.
  • What types of alerts do you need?
    • While the logging system provides a location to store and review data, alerting helps to bring information to life without watching logs every minute of the day. It may be possible to create e-mail or SMS alerts if a certain event occurs or goes above or below a configured threshold (e.g., the GSD network throughput is over 85 percent of the maximum possible, or a host is attempting to attack the GSD).
  • How granular do you want alerts to be?
    • Simple alerting may be enough, but it may be possible to develop more detailed alerting conditions (e.g., criteria dependencies) to help minimize erroneous alerts.
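The compressibility point above is easy to demonstrate: repetitive text logs shrink dramatically under gzip. The synthetic log in this sketch is illustrative:

```python
import gzip

def compression_ratio(text):
    """Ratio of original size to gzip-compressed size."""
    raw = text.encode()
    return len(raw) / len(gzip.compress(raw))

# A synthetic firewall log: highly repetitive structure, so it
# compresses well, as structured text logs typically do.
log = "".join(
    f"2009-03-01T12:00:{i % 60:02d} permit tcp 10.0.0.{i % 254} 203.0.113.7 443\n"
    for i in range(10000)
)
```

Ratios of 10:1 or better are common for structured text logs, which is one reason long-retention archives often favor compressed text over proprietary binary formats.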

26.5.4.2 Auditability.

Having an integrated audit trail of changes made to the GSD environment can be invaluable during an incident. These audit records provide an accounting of what changes occurred, when, by whom, and so on.

  • Do you want to be able to audit device changes?
    • Changes to the device itself do not occur nearly as often as changes to policies. Having the ability to collect information about these changes can help uncover system instability, accidental misconfiguration, or malicious intent.
  • Do you want to be able to audit policy changes?
    • GSD policies tend to change rapidly as new access requirements develop, new capabilities or requirements appear, and the ever-changing threat landscape progresses. Policy audit trails help to resolve mistakes and to provide archival information of what changes did occur, and when.
  • How granularly?
    • Data such as date, time, type of change, to what system(s), and from what administrator are all valuable pieces of information. Increasing auditing granularity also increases the overall number of logs and can increase processing overhead.
  • Should the audit logs be independent from the firewall event logs?
    • Having a logging system that automatically separates audit logs from other event logs helps ease viewing and searching. This also creates the ability to provide more granular permissions as to who can see what type of logs.

26.5.4.3 Disaster Recovery.

With GSDs providing consolidated security functions, there is a need to incorporate disaster recovery (DR) options into the overall GSD environment to ensure that the new device(s) do not become a single point of failure for the entire organization.

  • Do you have any specific disaster recovery requirements?
    • As mentioned earlier, different types of redundancy help during DR scenarios. High-availability setups such as active/passive fail-over and clustering will help to increase overall availability.
  • Does the device have simple backup and restore mechanisms?
    • The backup mechanism should provide a simple interface for completing backups and restores, as well as for moving backup files off the local device for storage and archiving.
  • Do you want the ability to do automated restores?
    • Certain GSDs have the ability to do an automated or semiautomated restore using predefined scripts, or through pushable configurations from the management system.
  • Should the system fail open or closed?
    • Systems that fail open will allow all network traffic to flow without being evaluated. Systems that fail closed will stop all network traffic. It is important to determine if the uptime advantage of a fail-open system outweighs the problems of network traffic passing uninspected or of downtime associated with a fail-closed system.
  • Do you need out-of-band access?
    • During a network failure, normal command and control of GSDs can be lost. Having an out-of-band access solution, such as a direct serial connection or a dial-in solution, may provide the ability to monitor and change the system.

26.5.5 Usability.

Having a management environment that has every feature imaginable is worthless if it is too complex and convoluted to operate and maintain.

  • Is the management console intuitive?
    • The management console should have a consistent and simple interface to interact with the devices and policies. The more intuitive the interface, the less time the administrators will need to spend figuring out how to do necessary tasks.
  • Are the primary functions easy to accomplish?
    • Two of the primary administrator functions are managing policy changes and accessing logs. The interface should make it easy to follow and edit policies, as well as to view and filter logs.

Learning curve. By choosing a GSD vendor that has an intuitive interface and functionality, it should be much easier to learn the environment.

  • Is training required to learn the new device(s) and management platform?
    • Even with the best interface and features, it may be necessary for the administrators to get training to use the system more effectively. Check with the vendor to see what training (basic to advanced) is available and at what cost.
  • How does the vendor approach security?
    • Every GSD vendor has a different view and different implementation of security features. Some focus primarily on allowing or denying traffic, while others are more concerned with network traffic flows. Moving from one to the other can be confusing, and could slow the conversion.

Features. Although the GSD platform may be able to provide a single source for network security protections, doing so may not be cost or resource effective; the feature set must fit the organization's security needs and posture.

  • What features do you want the GSD to have?
    • Features may include firewall, IPS, VPN, antivirus, and antispam.
  • Are you looking for a device to be an all-in-one solution?
    • The ability to provide a single-solution security platform has merit, but each additional feature affects performance and availability and, if implemented improperly, could cause problems.

26.5.6 Price.

The cost of a GSD environment varies widely.

  • What will be the true cost of the upgrade or replacement?
    • Areas for consideration include the purchase price of the hardware, the price of requested security features, management infrastructure hardware and software, hardware and software maintenance fees, training costs, direct or third-party support contracts, potential downtime during the conversion, and the learning curve. Features and elements added later will increase the overall cost.

Initial cost. The initial cost revolves around the entry costs of a GSD deployment.

  • How much will the new hardware cost?
    • This includes the price of the GSD and management devices.
  • Is there an appliance?
    • An appliance may be a more cost-effective alternative when the vendor offers multiple deployment options.
  • How much extra will shipping cost?
    • If this is a distributed deployment, there may be extra shipping costs and tariffs associated with getting the equipment to or from international locations.
  • How much will it cost to purchase the features needed?
    • This is highly dependent on the GSD. Certain GSDs come with everything out of the box, while others provide some basic functionality beyond the firewall, VPN, and basic traffic inspection but charge extra to unlock full functionality of existing items or to add new ones.
  • Will we need to invest in training to learn the new environment?
    • Training could include attending one or more off-site courses, possibly requiring additional travel, food, and lodging expenses. In some cases, it may be more cost effective to bring a trainer in-house to provide a more tailored, and less expensive, training program.

Ongoing costs.

  • How much is the yearly hardware and software maintenance?
    • The maintenance contract is usually negotiable regarding the length of the maintenance term and the costs. These are usually based on a percentage of the purchase price.
  • What level of service do you require?
    • GSD vendors normally provide different service levels. Each level corresponds with response times, escalation ability, and access to additional information. The higher the level of service, the more it will cost.
  • Can the provider cover you at each location?
    • Major issues may require a vendor or vendor partner to come on site to complete a repair. If this service is available, specify the total costs, including time and travel expenses, in the maintenance contract.
  • Is there a requirement for an on-site spare or redundant equipment?
    • If the DR plan calls for quick recovery, having spare or redundant equipment on site helps meet this goal. The disadvantage of a spare, rather than a redundant pair, is that it will most likely become an idle asset. The cost of each of these alternatives versus a higher-level service contract must be evaluated.
  • Are you looking for additional consulting or support time?
    • Vendors and vendor partners may provide blocks of time where a dedicated resource can come on site for additional support or planning needs.
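The initial and ongoing cost components above can be combined into a simple multi-year estimate. The sketch below uses entirely hypothetical figures and a simplified model (maintenance as a flat percentage of purchase price); real quotes will differ.

```python
# Hypothetical multi-year cost sketch for a GSD deployment.
# All dollar figures and the 18% maintenance rate are illustrative
# assumptions, not vendor pricing.

def total_cost(hardware, features, training, annual_maint_pct, years):
    """Initial outlay plus yearly maintenance charged as a
    percentage of the purchase price (hardware + licensed features)."""
    purchase = hardware + features
    initial = purchase + training
    ongoing = purchase * annual_maint_pct * years
    return initial + ongoing

# Example: $50,000 hardware, $10,000 in licensed features, $5,000 training,
# 18% yearly maintenance over a three-year horizon.
cost = total_cost(50_000, 10_000, 5_000, 0.18, 3)
print(round(cost))  # 97400
```

Even this crude model shows why the "true cost" question matters: over three years, maintenance alone adds roughly half the purchase price again.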

26.5.7 Vendor Considerations.

Being able to trust a vendor's product to protect the organization is crucial. There must be a level of comfort with every aspect of the product and service before turning over the network security reins. In addition to getting the GSD features needed, the vendor providing the solution should have an adequate security foundation, financial stability, and support resources.

  • Will this vendor meet the organization's current and future needs?
    • This key question focuses on the vendor's ability to provide a quality product, support that product, and grow with your needs. Be sure to evaluate the vendor's product road map to see how the product (hardware, functionality, cost, etc.) is to evolve and when currently unavailable features will be integrated into the product.

Reputation. Infighting and opinions about the best security platforms abound. The selected vendor should be an active member of the security community.

  • Does this vendor have a good reputation in the security community?
    • There is no shortage of product reviews and comparisons to help determine if a product or vendor has a solid reputation for quality and service. These are not the ultimate arbiters of a good product, but they will provide a foundation for additional research.
  • What is the vulnerability history of the current and previous systems?
    • Check vulnerability-monitoring sites such as Bugtraq, SecurityFocus, and Milw0rm to review the vulnerability history of current and previous security products.
  • What is the mean time to correct vulnerabilities?
    • The optimal time to resolve a vulnerability is immediately (or before it even happens). Check how long it takes vendors to respond to and resolve vulnerabilities, including verifying that a fix corrected the problem the first time.
  • Do they already have deployments similar to yours?
    • If the vendor already has customers with an infrastructure and needs similar to yours, this may provide additional ideas and information to make the deployment more successful.
  • Will they provide references?
    • Ask the vendor for customer references that can speak to their experience with the product and service.

Support options.

  • Are there different tiers of initial call support?
    • Determine if all support calls originate through the main support service desk, or if it is possible to have a dedicated resource, possibly one shared among a few other organizations.
  • How experienced are the front-line and higher-tier technicians?
    • Ask the vendor to provide information regarding training and experience of the different levels of support technicians.
  • Do you have the ability to escalate?
    • Determine if it is possible to escalate an issue immediately when the local administrators have already gone through standard troubleshooting steps. In addition, find out if there are any costs associated with the ability to escalate an issue preemptively.
  • Are you willing to work with beta systems?
    • Depending on risk tolerance, being a beta partner with the vendor may provide early access to fixes and enhancements.
  • What are current and previous clients' experiences with the quality and availability of support?
    • Again, ask for references to get real-world perspectives on the vendor's support quality and availability.

26.5.8 Managed Security Service Providers.

Organizations now have the ability to transfer varying levels of internal network security responsibility to a managed security service provider (MSSP). An MSSP can supplement off-shift log review, consolidation, and alerting. There may also be an opportunity to transfer maintenance and change control in order to alleviate resource constraints and knowledge gaps. Choosing such a service warrants heavy investigation; this brief section gives only a few areas of focus to begin the process.

  • What is the MSSP reputation/experience/workload?
    • The MSSP should have trained and experienced personnel to support your GSD infrastructure. Check the number of clients per support resource, and whether other MSSP locations can continue support and operations if one location fails.
  • Is it possible to do this securely?
    • This includes transfer of data to the MSSP as well as secure storage, and access to the collected information.
  • If necessary, how will the MSSP follow change control?
    • Determine the change control process before turning over the reins to the MSSP. This includes determining change reviews, change windows, change approvals, SLAs regarding time to complete, and so on.
  • At what cost?
    • As the number of services and devices increases, so will the cost of doing business with the MSSP. Be sure to investigate the contract to understand all potential fees or additional requirements that may drive up the cost after the signing of the initial contract.

For more about outsourcing security functions, see Chapter 68 in this Handbook.

26.6 CONCLUDING REMARKS.

The requirements for network security are changing constantly. Organizations are under continued pressure to meet customer and business partner demands for ready access to information. Technologies such as mobile devices and extranets continue to blur the previously accepted reality of true network borders. Perimeter security is no longer as easy as installing a firewall or requiring proxy services for outbound connections. Reliance on a specific technology, or security ideology, is insufficient to protect the information and systems integral to the success of an organization. The frequency and ferocity of both external and internal attacks require a more robust and flexible mechanism to combat these increasing threats, and the GSD has advanced to provide the current generation of perimeter protection technologies.

The GSD retains all of the previous firewall-based functionality such as allowed path control, VPN services, and network address translation while providing the flexibility to integrate additional services designed to provide visibility into, and protection from, a multitude of threats. The addition of antimalware capabilities provides a new layer of protection by relieving hosts from the responsibility of being the sole providers of full detection, prevention, and remediation services. The integration of proxy services, application and content control, and intrusion prevention allows GSD deployments to simplify the network security architecture as it opens up new inspection and protection opportunities.

Today's threats provide little insight into how protection measures will need to evolve to meet the next generation of attacks and attackers. To remain a viable security option, the GSD security vendors must remain agile. The integration and implementation of new protection measures will need to be simple and seamless. As processing power continues to increase, so will the capabilities of the GSD to take on greater workloads and complexity. Basic content inspection will give way to a greater understanding of, and protection for, data context, value, and flows. The support and protection of worldwide networks will require providing the full existing set of GSD functionality for IPv6 traffic.

No matter the threat, no matter the network, no single device can provide complete security. Each organization must evaluate all avenues of protection to ensure that the technologies deployed meet the security functionality required.

26.7 FURTHER READING

Amon, C., T. W. Shinder, and A. Carasik-Henmi. Best Damn Firewall Book Period. Syngress, 2003.

Bishop, M. Computer Security: Art and Science. Addison-Wesley, 2003.

Forouzan, B. TCP/IP Protocol Suite, 2nd ed. McGraw-Hill, 2002.

McClure, S. Hacking Exposed: Network Security Secrets & Solutions, 4th ed. McGraw-Hill Osborne Media, 2003.

Wack, J., K. Cutler, and J. Pole. Guidelines on Firewalls and Firewall Policy: Recommendations of the National Institute of Standards and Technology. U.S. Department of Commerce, Special Publication 800-41, 2002; http://csrc.nist.gov/publications/nistpubs/800-41/sp800-41.pdf.
