Infrastructure Security

The higher your structure is to be, the deeper must be its foundation.

—Saint Augustine

In this chapter, you will learn how to

•   Construct networks using different types of network devices

•   Enhance security using security devices

•   Understand virtualization concepts

•   Enhance security using NAC/NAP methodologies

•   Identify the different types of media used to carry network signals

•   Describe the different types of storage media used to store information

•   Use basic terminology associated with network functions related to information security

•   Describe the different types and uses of cloud computing

Infrastructure security begins with the design of the infrastructure itself. The proper use of components improves not only performance but security as well. Network components are not isolated elements; they are an essential aspect of a total computing environment. From the routers, switches, and cables that connect the devices, to the firewalls and gateways that manage communication, to the network design and the protocols that are employed, all these items play essential roles in both performance and security.

Devices

A complete network computer solution in today’s business environment consists of more than just client computers and servers. Devices are needed to connect the clients and servers and to regulate the traffic between them. Devices are also needed to expand this network beyond simple client computers and servers to include yet other devices, such as wireless and handheld systems. Devices come in many forms and with many functions, from hubs and switches, to routers, wireless access points, and special-purpose devices such as virtual private network (VPN) devices. Each device has a specific network function and plays a role in maintaining network infrastructure security.

The Importance of Availability

In Chapter 2, we examined the CIA of security: confidentiality, integrity, and availability. Unfortunately, the availability component is often overlooked, even though availability is what has moved computing into the modern networked framework and plays a significant role in security.

Security failures can occur in two ways. First, a failure can allow unauthorized users access to resources and data they are not authorized to use, thus compromising information security. Second, a failure can prevent a user from accessing resources and data the user is authorized to use. This second failure is often overlooked, but it can be as serious as the first. The primary goal of network infrastructure security is to allow all authorized use and deny all unauthorized use of resources.

Workstations

Most users are familiar with the client computers used in the client/server model, called workstations. The workstation is the machine that sits on the desktop and is used every day for sending and reading e-mail, creating spreadsheets, writing reports in a word processing program, and playing games. If a workstation is connected to a network, it is an important part of the security solution for the network. Many threats to information security can start at a workstation, but much can be done in a few simple steps to provide protection from many of these threats.

Workstations and Servers

Servers and workstations are key nodes on networks. The specifics for securing these devices are covered in Chapter 14.

Servers

Servers are the computers in a network that host applications and data for everyone to share. Servers come in many sizes—from small single-CPU boxes that may be less powerful than a workstation, to multiple-CPU monsters, up to and including mainframes. The operating systems used by servers range from Windows Server, to UNIX, to Multiple Virtual Storage (MVS) and other mainframe operating systems. The OS on a server tends to be more robust than the OS on a workstation system and is designed to service multiple users over a network at the same time. Servers can host a variety of applications, including web servers, databases, e-mail servers, file servers, print servers, and application servers for middleware applications.

Mobile Devices

Mobile devices such as laptops, tablets, and mobile phones are the latest devices to join the corporate network. Mobile devices can create a major security gap, as a user may access separate e-mail accounts—one personal, without antivirus protection, and the other corporate. Mobile devices are covered in detail in Chapter 12.

Device Security, Common Concerns

As more and more interactive devices (that is, devices you can interact with programmatically) are being designed, a new threat source has appeared. To allow remote access and configuration, devices typically ship with a default account and password. These default credentials are well known in the hacker community, so one of the first steps you must take to secure such devices is to change them. Anyone who has purchased a home office router knows the vendor's default configuration settings and can check to see whether another user has changed theirs. If they have not, this is a huge security hole, allowing outsiders to "reconfigure" their network devices.

Default Accounts

Always reconfigure all default accounts on all devices before exposing them to external traffic. This is to prevent others from reconfiguring your devices based on known access settings.
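
Checking for leftover default credentials is easy to automate. The following is a minimal sketch, assuming the device exposes an HTTP administration page protected by basic authentication; the address and credential pairs are illustrative placeholders, not actual vendor defaults.

# default_cred_check.py - minimal sketch of a default-credential audit.
# Assumes the device admin page uses HTTP basic authentication; the address
# and credential pairs below are illustrative, not actual vendor defaults.
import requests

DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def uses_default_creds(url: str) -> bool:
    """Return True if any well-known default credential pair is accepted."""
    for user, password in DEFAULT_CREDS:
        try:
            resp = requests.get(url, auth=(user, password), timeout=5)
        except requests.RequestException:
            return False  # device unreachable; nothing to test
        if resp.status_code == 200:
            print(f"Default credentials still active: {user}/{password}")
            return True
    return False

uses_default_creds("http://192.168.1.1/admin")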

Network-Attached Storage

Because of the speed of today’s Ethernet networks, it is possible to manage data storage across the network. This has led to a type of storage known as network-attached storage (NAS). The combination of inexpensive hard drives, fast networks, and simple application-based servers has made NAS devices in the terabyte range affordable for even home users. Because of the large size of video files, this has become popular for some users as a method of storing TV and video libraries. Because NAS is a network device, it is susceptible to various attacks, including sniffing of credentials and a variety of brute-force attacks to obtain access to the data.

Removable Storage

Because removable devices can move data outside of the corporate-controlled environment, their security needs must be addressed. Removable devices can bring unprotected or corrupted data into the corporate environment. All removable devices should be scanned by antivirus software upon connection to the corporate environment. Corporate policies should address the copying of data to removable devices. Many mobile devices can be connected via USB to a system and used to store data—and in some cases vast quantities of data. This capability can be used to avoid some implementations of data loss prevention mechanisms.

Virtualization

Virtualization technology is used to allow a computer to have more than one OS present and, in many cases, operating at the same time. Virtualization is an abstraction of the OS layer, creating the ability to host multiple OSs on a single piece of hardware. One of the major advantages of virtualization is the separation of the software and the hardware, creating a barrier that can improve many system functions, including security. The underlying hardware is referred to as the host machine, and on it is a host OS. Either the host OS has built-in hypervisor capability or an application is needed to provide the hypervisor function to manage the virtual machines (VMs). The virtual machines are typically referred to as the guest OSs.

A hypervisor is the interface between a virtual machine and the host machine hardware. Hypervisors are the layer that enables virtualization.

Newer OSs are designed to natively incorporate virtualization hooks, enabling virtual machines to be employed with greater ease. There are several common virtualization solutions, including Microsoft Hyper-V, VMware, Oracle VM VirtualBox, Parallels, and Citrix Xen. It is important to distinguish between virtualization and boot loaders that allow different OSs to boot on hardware. Apple’s Boot Camp allows you to boot into Microsoft Windows on Apple hardware. This is different from Parallels, a product with complete virtualization capability for Apple hardware.

Virtualization offers much in terms of host-based management of a system. Snapshots that allow easy rollback to previous states, faster system deployment via preconfigured images, ease of backup, and the ability to test systems are among the many advantages virtualization offers system owners. The separation of the operational software layer from the hardware layer can offer many improvements in the management of systems.

Hypervisor

To enable virtualization, a hypervisor is employed. A hypervisor is a low-level program that allows multiple operating systems to run concurrently on a single host computer. Hypervisors use a thin layer of code to allocate resources in real time. The hypervisor acts as the traffic cop that controls I/O and memory management. Two types of hypervisors exist: Type 1 and Type 2.

Type 1

Type 1 hypervisors run directly on the system hardware. They are referred to as native, bare-metal, or embedded hypervisors in typical vendor literature. Type 1 hypervisors are designed for speed and efficiency, as they do not have to operate through another OS layer. Examples of Type 1 hypervisors include KVM (Kernel-based Virtual Machine, a Linux implementation), Xen (the Citrix Linux implementation), Microsoft Windows Server Hyper-V (a headless version of the Windows OS core), and VMware's vSphere/ESXi platforms. All of these are designed for the high-end server market in enterprises and allow multiple VMs on a single set of server hardware. These platforms come with management toolsets to facilitate VM management in the enterprise.

Type 2

Type 2 hypervisors run on top of a host operating system. In the beginning of the virtualization movement, Type 2 hypervisors were the most popular. Administrators could buy the VM software and install it on a server they already had running. Typical Type 2 hypervisors include Oracle's VirtualBox and VMware Workstation Player. These are designed for limited numbers of VMs, typically in a desktop or small server environment.

Application Cells/Containers

A hypervisor-based virtualization system enables multiple OS instances to coexist on a single hardware platform. Application cells/containers are the same idea, but rather than having multiple independent OSs, a container holds the portions of an OS that it needs separate from the kernel. In essence, multiple containers can share an OS, yet have separate memory, CPU, and storage threads, thus guaranteeing that they will not interact with other containers. This allows multiple instances of an application, or different applications, to share a host OS with virtually no overhead. This also allows portability of the application to a degree separate from the OS stack. There are multiple major container platforms in existence, and the industry has coalesced around a standard, governed by the Open Container Initiative, designed to enable standardization and market stability in the container marketplace.

One can think of containers as the evolution of the VM concept into the application space. A container consists of an entire runtime environment—an application, plus all the dependencies, libraries and other binaries, and configuration files needed to run it, all bundled into one package. This eliminates the differences between the development, test, and production environments, because the environment travels with the container as a standard solution. Because the application platform, including its dependencies, is containerized, any differences in OS distributions, libraries, and underlying infrastructure are abstracted away and rendered moot.

VM Sprawl Avoidance

Sprawl is the uncontrolled spreading of disorganization caused by a lack of an organizational structure when many similar elements require management. Just as you can lose a file or an e-mail and have to go hunt for it, virtual machines can suffer from being misplaced. When you have only a few files, sprawl isn't a problem, but when you have hundreds of files, developed over a long period of time and not necessarily in an organized manner, sprawl does become a problem. The same is happening to virtual machines in the enterprise. In the end, a virtual machine is a file that contains a copy of a working machine's disk and memory structures. If an enterprise has only a couple of virtual machines, keeping track of them is relatively easy, but as the number grows, sprawl can set in.

VM sprawl is a symptom of a disorganized structure. If the servers in a server farm could move between racks at random, there would be an issue finding the correct machine when you needed to physically locate it. The same effect occurs with VM sprawl: as virtual machines are moved around, finding the one you want in a timely manner can be an issue. VM sprawl avoidance is a real requirement and needs to be implemented via policy. You can fight VM sprawl through the use of naming conventions and proper storage architectures, so that the files are in the correct directories, making finding a specific VM easy and efficient. But like any filing system, it is only good if it is followed; therefore, policies and procedures need to ensure that proper VM naming and filing are done on a regular basis.
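
A naming convention is only useful if it is enforced. The short sketch below illustrates the idea; the site-role-number pattern is a hypothetical convention, not a standard.

# vm_naming.py - sketch of enforcing a VM naming convention to fight sprawl.
# The convention (site-role-NNN, for example "dal-web-042") is hypothetical.
import re

VM_NAME_PATTERN = re.compile(r"^(dal|nyc|lon)-(web|db|app)-\d{3}$")

def audit_vm_names(names: list[str]) -> list[str]:
    """Return the VM names that violate the naming convention."""
    return [name for name in names if not VM_NAME_PATTERN.match(name)]

print(audit_vm_names(["dal-web-042", "test_vm_old", "nyc-db-007"]))
# ['test_vm_old']  <- a candidate for renaming or retirement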

Virtual environments have several specific topics that may be asked on the exam. Understand the difference between Type 1 and Type 2 hypervisors, and where you would use each. Understand the differences between VM sprawl and VM escape, and the effects of each. These are all subjects that can be used as questions on the exam, with the other terms serving as distractors.

VM Escape Protection

When multiple VMs are operating on a single hardware platform, one concern is VM escape. This is where software (typically malware) or an attacker escapes from one VM to the underlying OS and then resurfaces in a different VM. When you examine the problem from a logical point of view, you see that both VMs use the same RAM, the same processors, and so on; therefore, the difference is one of timing and specific combinations of elements within the VM environment. The VM system is designed to provide protection, but as with all things of larger scale, the devil is in the details. Large-scale VM environments have specific modules designed to detect escape and provide VM escape protection to other modules.

Snapshots

A snapshot is a point-in-time saving of the state of a virtual machine. Snapshots have great utility because they are like a savepoint for an entire system. Snapshots can be used to roll a system back to a previous point in time, undo operations, or provide a quick means of recovery from a complex, system-altering change that has gone awry. Snapshots act as a form of backup and are typically much faster than normal system backup and recovery operations.

Patch Compatibility

Having an OS operate in a virtual environment does not change the need for security associated with the OS. Patches are still needed and should be applied, independent of the virtualization status. The virtual environment itself should have no effect on the utility of patching, because the patch applies to the guest OS.

Host Availability/Elasticity

When you set up a virtualization environment, protecting the host OS and hypervisor level is critical for system stability. The best practice is to avoid the installation of any applications on the host-level machine. All apps should be housed and run in a virtual environment. This aids in the system stability by providing separation between the application and the host OS. The term elasticity refers to the ability of a system to expand/contract as system requirements dictate. One of the advantages of virtualization is that a virtual machine can be moved to a larger or smaller environment based on need. If a VM needs more processing power, then migrating the VM to a new hardware system with greater CPU capacity allows the system to expand without you having to rebuild it.

Security Control Testing

When applying security controls to a system to manage security operations, you need to test the controls to ensure they are providing the desired results. Putting a system into a VM does not change this requirement. In fact, it may complicate it because of the nature of the relationship between the guest OS and the hypervisor. It is essential to specifically test all security controls inside the virtual environment to ensure their behavior is still effective.

Sandboxing

Sandboxing refers to the quarantining or isolation of a system from its surroundings. Virtualization can be used as a form of sandboxing with respect to an entire system. You can build a VM, test something inside the VM, and, based on the results, make a decision with regard to stability or whatever concern was present.

Networking

Networks are used to connect devices together. Networks are composed of components that perform networking functions to move data between devices. Networks begin with network interface cards, then continue in layers of switches and routers. Specialized networking devices are used for specific purposes, such as security and traffic management.

Network Interface Cards

To connect a server or workstation to a network, a device known as a network interface card (NIC) is used. A NIC is a card with a connector port for a particular type of network connection, either Ethernet or Token Ring. The most common network type in use for LANs is the Ethernet protocol, and the most common connector is the RJ-45 connector.

A NIC is the physical connection between a computer and the network. The purpose of a NIC is to provide lower-level protocol functionality from the OSI (Open System Interconnection) model. Because the NIC defines the type of physical layer connection, different NICs are used for different physical protocols. NICs come in single-port and multiport varieties, and most workstations use only a single-port NIC, as only a single network connection is needed. Figure 10.1 shows a common form of a NIC. For servers, multiport NICs are used to increase the number of network connections, thus increasing the data throughput to and from the network.

Figure 10.1   Linksys network interface card (NIC)

Each NIC port is serialized with a unique code, 48 bits long, referred to as a Media Access Control address (MAC address). These are created by the manufacturer, with 24 bits representing the manufacturer and 24 bits being a serial number, guaranteeing uniqueness. MAC addresses are used in the addressing and delivery of network packets to the correct machine and in a variety of security situations. Unfortunately, these addresses can be changed, or “spoofed,” rather easily. In fact, it is common for personal routers to clone a MAC address to allow users to use multiple devices over a network connection that expects a single MAC.
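
The 24/24-bit split is easy to see programmatically. A minimal sketch (the address used is a made-up example):

# mac_fields.py - split a 48-bit MAC address into its two 24-bit fields.
def mac_fields(mac: str) -> tuple[str, str]:
    """Return (manufacturer OUI, device serial) for a colon-separated MAC."""
    octets = mac.split(":")
    oui = ":".join(octets[:3])     # first 24 bits: assigned to the manufacturer
    serial = ":".join(octets[3:])  # last 24 bits: per-device serial number
    return oui, serial

print(mac_fields("00:1A:2B:3C:4D:5E"))  # ('00:1A:2B', '3C:4D:5E')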

Device/OSI Level Interaction

Different network devices operate using different levels of the OSI networking model to move packets from device to device:

Hub       Physical layer (Layer 1)
Bridge    Data link layer (Layer 2)
Switch    Data link layer (Layer 2); Layer 3 switches add network layer routing
Router    Network layer (Layer 3)

Hubs

A hub is networking equipment that connects devices that are using the same protocol at the physical layer of the OSI model. A hub allows multiple machines in an area to be connected together in a star configuration, with the hub as the center. This configuration can save significant amounts of cable and is an efficient method of configuring an Ethernet backbone. All connections on a hub share a single collision domain, a small cluster in a network where collisions occur. As network traffic increases, it can become limited by collisions. The collision issue has made hubs obsolete in newer, higher-performance networks, with inexpensive switches and switched Ethernet keeping costs low and usable bandwidth high. Hubs also create a security weakness in that all connected devices see all traffic, enabling sniffing and eavesdropping to occur. In today’s networks, hubs have all but disappeared, being replaced by low-cost switches.

Bridges

Bridges are networking equipment that connect devices using the same protocol at the data link layer of the OSI model. A bridge operates at the data link layer, filtering traffic based on MAC addresses. Bridges can reduce collisions by separating pieces of a network into two separate collision domains, but this only cuts the collision problem in half. Although bridges are useful, a better solution is to use switches for network connections.

Switches

A switch forms the basis for connections in most Ethernet-based LANs. Although hubs and bridges still exist, in today's high-performance network environment, switches have replaced both. A switch provides a separate collision domain on each port, so traffic on one port does not contend with traffic on another. When full duplex is employed, collisions are virtually eliminated between the switch and the host on each link.

MAC filtering can be employed on switches, permitting only specified MACs to connect to the switch. This can be bypassed if an attacker can learn an allowed MAC because they can clone the permitted MAC onto their own NIC and spoof the switch. To filter edge connections, IEEE 802.1X is more secure (it’s covered in Chapter 11). This can also be referred to as MAC limiting. Be careful to pay attention to context on the exam, however, because MAC limiting also can refer to preventing flooding attacks on switches by limiting the number of MAC addresses that can be “learned” by a switch.

Switches operate at the data link layer, while routers act at the network layer. For intranets, switches have become what routers are on the Internet—the device of choice for connecting machines. As switches have become the primary network connectivity device, additional functionality has been added to them. A switch is usually a Layer 2 device, but Layer 3 switches incorporate routing functionality.

Hubs have been replaced by switches because switches perform a number of features that hubs cannot perform. For example, the switch improves network performance by filtering traffic. It filters traffic by only sending the data to the port on the switch where the destination system resides. The switch knows what port each system is connected to and sends the data only to that port. The switch also provides security features, such as the option to disable a port so that it cannot be used without authorization. The switch also supports a feature called port security, which allows the administrator to control which systems can send data to each of the ports. The switch uses the MAC address of the systems to incorporate traffic-filtering and port security features, which is why it is considered a Layer 2 device.

Network traffic segregation by switches can also act as a security mechanism, preventing access to some devices from other devices. This can prevent someone from accessing critical data servers from a machine in a public area.

Port address security based on MAC addresses can determine whether a packet is allowed or blocked from a connection. This is the very function that a firewall uses for its determination, and this same functionality is what allows an 802.1X device to act as an “edge device.”

To secure a switch, you should disable all access protocols other than a secure serial line or a secure protocol such as Secure Shell (SSH). Using only secure methods to access a switch will limit the exposure to hackers and malicious users. Maintaining secure network switches is even more important than securing individual boxes, because a switch that has been reprogrammed by a hacker offers a much wider span of control for intercepting data.

One of the security concerns with switches is that, like routers, they are intelligent network devices and are therefore subject to hijacking by hackers. Should a hacker break into a switch and change its parameters, they might be able to eavesdrop on specific or all communications, virtually undetected. Switches are commonly administered using the Simple Network Management Protocol (SNMP) and Telnet protocol, both of which have a serious weakness in that they send passwords across the network in cleartext. A hacker armed with a sniffer that observes maintenance on a switch can capture the administrative password. This allows the hacker to come back to the switch later and configure it as an administrator. An additional problem is that switches are shipped with default passwords, and if these are not changed when the switch is set up, they offer an unlocked door to a hacker.

Switches are also subject to electronic attacks, such as ARP poisoning and MAC flooding. ARP poisoning is where a device spoofs the MAC address of another device, attempting to change the ARP tables through spoofed traffic and the ARP table-update mechanism. MAC flooding is where a switch is bombarded with packets from different MAC addresses, flooding the switch table and forcing the device to respond by opening all ports and acting as a hub. This enables devices on other segments to sniff traffic.
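
The effect of MAC flooding on a switch's forwarding table can be illustrated with a toy model. This sketch uses a tiny table capacity for demonstration; real CAM tables hold thousands of entries, and the addresses are made up.

# cam_flood.py - toy model of a switch CAM table under a MAC flooding attack.
class Switch:
    def __init__(self, capacity: int = 4):    # real tables hold thousands
        self.cam: dict[str, int] = {}         # learned MAC address -> port
        self.capacity = capacity

    def learn(self, mac: str, port: int) -> None:
        if mac in self.cam or len(self.cam) < self.capacity:
            self.cam[mac] = port              # normal address learning

    def forward(self, dst_mac: str) -> str:
        port = self.cam.get(dst_mac)
        if port is None:                      # unknown destination:
            return "flood out all ports (hub behavior, sniffable)"
        return f"forward only to port {port}"

sw = Switch()
for i in range(100):                          # attacker floods bogus MACs
    sw.learn(f"de:ad:be:ef:00:{i:02x}", port=9)
sw.learn("00:1a:2b:3c:4d:5e", port=2)         # victim's MAC no longer fits
print(sw.forward("00:1a:2b:3c:4d:5e"))        # flooded, so traffic leaks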

Loop Protection

Switches operate at Layer 2, where there is no countdown mechanism to kill packets that get caught in loops or on paths that will never resolve. The Layer 2 space acts as a mesh, where the addition of a new device can potentially create loops in the existing device interconnections. To prevent loops, virtually all switches employ the Spanning Tree Protocol (STP), which allows for multiple, redundant paths while breaking loops to ensure a proper broadcast pattern.
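
The core idea behind STP, keeping redundant links available while blocking the ones that would complete a loop, can be sketched with a breadth-first search from a root switch. Real STP elects the root bridge and exchanges BPDUs, all of which this toy model omits.

# spanning_tree.py - toy illustration of loop removal, the idea behind STP.
from collections import deque

links = {("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")}  # A-B-C form a loop

def active_links(root: str) -> set[tuple[str, str]]:
    neighbors: dict[str, list[str]] = {}
    for a, b in links:
        neighbors.setdefault(a, []).append(b)
        neighbors.setdefault(b, []).append(a)
    seen, tree, queue = {root}, set(), deque([root])
    while queue:
        node = queue.popleft()
        for nxt in sorted(neighbors[node]):
            if nxt not in seen:              # first path to a switch wins;
                seen.add(nxt)                # any later path would be a loop
                tree.add((node, nxt))
                queue.append(nxt)
    return tree

print(active_links("A"))  # the B-C link is left out (blocked), breaking the loop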

Routers

A router is a network traffic management device used to connect different network segments together. Routers operate at the network layer (Layer 3) of the OSI model, using the network address (typically an IP address) to route traffic and using routing protocols to determine optimal routing paths across a network. Routers form the backbone of the Internet, moving traffic from network to network, inspecting packets from every communication as they move traffic in optimal paths.

Routers operate by examining each packet, looking at the destination address, and using algorithms and tables to determine where to send the packet next. This process of examining the header to determine the next hop can be done in quick fashion.

Access control lists (ACLs) can require significant effort to establish and maintain. Creating them is a straightforward task, but their judicious use will yield security benefits with a limited amount of maintenance at scale.

Routers use access control lists (ACLs) as a method of deciding whether a packet is allowed to enter the network. With ACLs, it is also possible to examine the source address and determine whether or not to allow a packet to pass. This allows routers equipped with ACLs to drop packets according to rules built into the ACLs. This can be a cumbersome process to set up and maintain, and as the ACL grows in size, routing efficiency can be decreased. It is also possible to configure some routers to act as quasi–application gateways, performing stateful packet inspection and using contents as well as IP addresses to determine whether or not to permit a packet to pass. This can tremendously increase the time for a router to pass traffic and can significantly decrease router throughput. Configuring ACLs and other aspects of setting up routers for this type of use are beyond the scope of this book.

One serious security concern regarding router operation is limiting who has access to the router and control of its internal functions. Like a switch, a router can be accessed using SNMP and Telnet and programmed remotely. Because of the geographic separation of routers, this can become a necessity because many routers in the world of the Internet can be hundreds of miles apart, in separate locked structures. Physical control over a router is absolutely necessary because if any device—be it a server, switch, or router—is physically accessed by a hacker, it should be considered compromised. Therefore, such access must be prevented. As with switches, it is important to ensure that the administrator password is never passed in the clear, that only secure mechanisms are used to access the router, and that all of the default passwords are reset to strong passwords.

As with switches, the most assured point of access for router management control is via the serial control interface port. This allows access to the control aspects of the router without having to deal with traffic-related issues. For internal company networks, where the geographic dispersion of routers may be limited, third-party solutions to allow out-of-band remote management exist. This allows complete control over the router in a secure fashion, even from a remote location, although additional hardware is required.

Routers are available from numerous vendors and come in sizes big and small. A typical small home office router for use with cable modem/DSL service is shown in Figure 10.2. Larger routers can handle traffic of up to tens of gigabits per second per channel, using fiber-optic inputs and moving tens of thousands of concurrent Internet connections across the network. These routers, which can cost hundreds of thousands of dollars, form an essential part of e-commerce infrastructure, enabling large enterprises such as Amazon and eBay to serve many customers concurrently.

Figure 10.2   A small home office router for cable modem/DSL

Testing Network Connectivity

There are a variety of tools that can be used to test and detail the connectivity between systems, including the ICMP-based methods ping and traceroute, the application programs Nmap and SuperScan, and even Wireshark. Explore these commands/methods on your own system to learn the details they can provide.
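
A few lines of Python can perform a similar basic reachability check. This sketch attempts TCP connections to a short list of ports; the host and ports are examples, and it is a simplified stand-in for what a scanner such as Nmap reports.

# tcp_check.py - minimal TCP connectivity check against a few common ports.
import socket

def check_ports(host: str, ports: list[int]) -> None:
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"{host}:{port} open")
        except OSError:
            print(f"{host}:{port} closed or filtered")

check_ports("192.168.1.1", [22, 80, 443])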

A firewall is a network device (hardware, software, or combination of the two) that enforces a security policy. All network traffic passing through the firewall is examined—traffic that does not meet the specified security criteria or violates the firewall policy is blocked.

Firewalls

A firewall is a network device—hardware, software, or a combination thereof—whose purpose is to enforce a security policy across its connections by allowing or denying traffic to pass into or out of the network. A firewall is a lot like a gate guard at a secure facility. The guard examines all the traffic trying to enter the facility—cars with the correct sticker or delivery trucks with the appropriate paperwork are allowed in; everyone else is turned away (see Figure 10.3).

Figure 10.3   How a firewall works

The heart of a firewall is the set of security policies that it enforces. Management determines what is allowed in the form of network traffic between devices, and these policies are used to build rule sets for the firewall devices used to filter network traffic across the network.

Firewall Rules

Firewalls are in reality policy enforcement devices. Each rule in a firewall should have a policy behind it, as this is the only manner of managing firewall rule sets over time. The steps for successful firewall management begin and end with maintaining a policy list by firewall of the traffic restrictions to be imposed. Managing this list via a configuration-management process is important to prevent network instabilities from faulty rule sets or unknown “left-over” rules.

Firewall security policies are a series of rules that defines what traffic is permissible and what traffic is to be blocked or denied. These are not universal rules, and there are many different sets of rules for a single company with multiple connections. A web server connected to the Internet may be configured only to allow traffic on port 80 for HTTP, and have all other ports blocked. An e-mail server may have only necessary ports for e-mail open, with others blocked. A key to security policies for firewalls is the same as has been seen for other security policies—the principle of least access. Only allow the necessary access for a function; block or deny all unneeded functionality. How an organization deploys its firewalls determines what is needed for security policies for each firewall. You may even have a small office/home office (SOHO) firewall at your house, such as the RVS4000 shown in Figure 10.4. This device from Linksys provides both routing and firewall functions.

Figure 10.4   Linksys RVS4000 SOHO firewall

Orphan or left-over rules are rules that were created for a special purpose (testing, emergency, visitor or vendor, and so on) and then forgotten about and not removed after their use ended. These rules can clutter up a firewall and result in unintended challenges to the network security team.

The security topology determines what network devices are employed at what points in a network. At a minimum, the corporate connection to the Internet should pass through a firewall, as shown in Figure 10.5. This firewall should block all network traffic except that specifically authorized by the security policy. This is actually easy to do: blocking communications on a port is simply a matter of telling the firewall to close the port. The issue comes in deciding what services are needed and by whom, and thus which ports should be open and which should be closed. This is what makes a security policy useful but, in some cases, difficult to maintain.

Figure 10.5   Logical depiction of a firewall protecting an organization from the Internet

The perfect firewall policy is one that the end user never sees and one that never allows even a single unauthorized packet to enter the network. As with any other perfect item, it is rare to find the perfect security policy for a firewall.

To develop a complete and comprehensive security policy, it is first necessary to have a complete and comprehensive understanding of your network resources and their uses. Once you know what your network will be used for, you will have an idea of what to permit. Also, once you understand what you need to protect, you will have an idea of what to block. Firewalls are designed to block attacks before they get to a target machine. Common targets are web servers, e-mail servers, DNS servers, FTP services, and databases. Each of these has separate functionality, and each of these has separate vulnerabilities. Once you have decided who should receive what type of traffic and what types should be blocked, you can administer this through the firewall.

Routers help control the flow of traffic into and out of your network. Through the use of ACLs, routers can act as first-level firewalls and can help weed out malicious traffic.

How Do Firewalls Work?

Firewalls enforce the established security policies. They can do this through a variety of mechanisms, including the following:

•   Network Address Translation (NAT) As you may remember from Chapter 9, NAT translates private (nonroutable) IP addresses into public (routable) IP addresses.

•   Basic packet filtering Basic packet filtering looks at each packet entering or leaving the network and then either accepts or rejects the packet based on user-defined rules. Each packet is examined separately.

•   Stateful packet filtering Stateful packet filtering also looks at each packet, but it can examine the packet in its relation to other packets. Stateful firewalls keep track of network connections and can apply slightly different rule sets based on whether or not the packet is part of an established session.

•   Access control lists (ACLs) ACLs are simple rule sets that are applied to port numbers and IP addresses. They can be configured for inbound and outbound traffic and are most commonly used on routers and switches.

•   Application layer proxies An application layer proxy can examine the content of the traffic as well as the ports and IP addresses. For example, an application layer proxy has the ability to look inside a user's web traffic, detect a malicious web site attempting to download malware to the user's system, and block the malware.

NAT is the process of modifying network address information in datagram packet headers while in transit across a traffic-routing device, such as a router or firewall, for the purpose of remapping a given address space into another. See Chapter 9 for a more detailed discussion on NAT.

One of the most basic security functions provided by a firewall is NAT. This service allows you to mask significant amounts of information from outside of the network. This allows an outside entity to communicate with an entity inside the firewall without truly knowing its address.

Basic packet filtering, also known as stateless packet inspection, involves looking at packets, their protocols, and their destinations and checking that information against the security policy. Telnet and FTP connections may be prohibited from being established to a mail or database server, but they may be allowed to the servers that actually provide those services. This is a fairly simple method of filtering based on information in each packet header, such as IP addresses and TCP/UDP ports. It will not detect and catch all undesired packets, but it is fast and efficient.

Firewalls and Access Control Lists

Many firewalls read firewall and ACL rules from top to bottom and apply the rules in sequential order to the packets they are inspecting. Typically they will stop processing rules when they find a rule that matches the packet they are examining. If the first line in your rule set reads “allow all traffic,” then the firewall will pass any network traffic coming into or leaving the firewall—ignoring the rest of your rules below that line. Many firewalls have an implied “deny all” line as part of their rule sets. This means that any traffic that is not specifically allowed by a rule will get blocked by default.
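
This top-to-bottom, first-match behavior with an implicit deny is easy to model. A minimal sketch, with illustrative rules that match on port only:

# rule_eval.py - first-match firewall rule processing with an implicit deny.
RULES = [
    {"action": "allow", "port": 80},     # permit web traffic
    {"action": "allow", "port": 443},    # permit TLS
    {"action": "deny",  "port": 23},     # explicitly block Telnet
]

def evaluate(packet: dict) -> str:
    for rule in RULES:                   # rules are checked top to bottom...
        if rule["port"] == packet["port"]:
            return rule["action"]        # ...and the first match wins
    return "deny"                        # implicit deny: nothing matched

print(evaluate({"port": 443}))   # allow
print(evaluate({"port": 3389}))  # deny (implicit)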

Examining all packets and determining the need for each one and its data requires stateful packet filtering. Advanced firewalls employ stateful packet filtering to prevent several types of undesired communications. Should a packet come from outside the network, in an attempt to pretend that it is a response to a message from inside the network, the firewall will have no record of it being requested and can discard it, blocking access. As many communications will be transferred to high ports (above 1023), stateful monitoring will enable the system to determine which sets of high-port communications are permissible and which should be blocked. The disadvantage to stateful monitoring is that it takes significant resources and processing to do this type of monitoring, and this reduces efficiency and requires more robust and expensive hardware. However, this type of monitoring is essential in today's comprehensive networks, particularly given the variety of remotely accessible services.
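
The connection-tracking idea can be reduced to a toy model: remember the connections initiated from inside, and admit inbound packets only if they match one. Real stateful firewalls also track TCP flags, sequence numbers, and timeouts, all omitted here; the addresses are examples.

# stateful.py - toy stateful filter keyed on a simplified connection 4-tuple.
established: set[tuple] = set()

def outbound(src: str, sport: int, dst: str, dport: int) -> None:
    established.add((dst, dport, src, sport))  # remember the expected reply

def inbound_allowed(src: str, sport: int, dst: str, dport: int) -> bool:
    return (src, sport, dst, dport) in established

outbound("10.0.0.5", 51000, "93.184.216.34", 443)  # inside host -> web server
print(inbound_allowed("93.184.216.34", 443, "10.0.0.5", 51000))  # True: reply
print(inbound_allowed("203.0.113.9", 443, "10.0.0.5", 51000))    # False: unsolicited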

As they are in routers, switches, servers, and other network devices, ACLs are a cornerstone of security in firewalls. Just as you must protect the device from physical access, ACLs do the same task for electronic access. Firewalls can extend the concept of ACLs by enforcing them at a packet level when packet-level stateful filtering is performed. This can add an extra layer of protection, making it more difficult for an outside hacker to breach a firewall.

Many firewalls contain, by default, an implicit deny at the end of every ACL or firewall rule set. This simply means that any traffic not specifically permitted by a previous rule in the rule set is denied.

Some high-security firewalls also employ application layer proxies. As the name implies, packets are not allowed to traverse the firewall, but data instead flows up to an application that in turn decides what to do with it. For example, an SMTP proxy may accept inbound mail from the Internet and forward it to the internal corporate mail server, as depicted in Figure 10.6. While proxies provide a high level of security by making it very difficult for an attacker to manipulate the actual packets arriving at the destination, and while they provide the opportunity for an application to interpret the data prior to forwarding it to the destination, they generally are not capable of the same throughput as stateful packet-inspection firewalls. The tradeoff between security and performance is a common one and must be evaluated with respect to security needs and performance requirements.

Figure 10.6   Firewall with SMTP application layer proxy

Firewall Operations

Application layer firewalls such as proxy servers can analyze information in the header and data portion of the packet, whereas packet-filtering firewalls can analyze only the header of a packet.

Firewalls can also act as network traffic regulators in that they can be configured to mitigate specific types of network-based attacks. In denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks, an attacker can attempt to flood a network with traffic. Firewalls can be tuned to detect these types of attacks and act as flood guards, mitigating the effect on the network.

Firewalls can act as flood guards, detecting and mitigating specific types of DoS/DDoS attacks.

Next-Generation Firewalls

Firewalls operate by inspecting packets and by using rules associated with IP addresses and ports. Next-generation firewalls have significantly more capability and are characterized by these features:

•   Deep packet inspection

•   Move beyond port/protocol inspection and blocking

•   Add application-level inspection

•   Add intrusion prevention

•   Bring intelligence from outside the firewall

Next-generation firewalls are more than just a firewall and IDS coupled together; they offer a deeper look at what the network traffic represents. In a legacy firewall, with port 80 open, all web traffic is allowed to pass. Using a next-generation firewall, traffic over port 80 can be separated by web site, or even activity on a web site (for example, allow Facebook, but not games on Facebook). Because of the deeper packet inspection and the ability to create rules based on content, traffic can be managed based on content, not merely site or URL.

Web Application Firewalls vs. Network Firewalls

Increasingly, the term firewall is getting attached to any device or software package that is used to control the flow of packets or data into or out of an organization. For example, a web application firewall is the term given to any software package, appliance, or filter that applies a rule set to HTTP/HTTPS traffic. Web application firewalls shape web traffic and can be used to filter out SQL injection attacks, malware, cross-site scripting (XSS), and so on. By contrast, a network firewall is a hardware or software package that controls the flow of packets into and out of a network. Web application firewalls operate on traffic at a much higher level than network firewalls, as web application firewalls must be able to decode the web traffic to determine whether or not it is malicious. Network firewalls operate on much simpler aspects of network traffic such as source/destination port and source/destination address.
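
The kind of content inspection a web application firewall performs can be sketched with a few signature rules. Real WAFs normalize encodings and parse requests properly; the patterns below are simplified illustrations that would be trivial to evade in practice.

# waf_sketch.py - simplified illustration of web application firewall checks.
import re

SIGNATURES = {
    "SQL injection": re.compile(r"('|%27)\s*(or|union|--)", re.IGNORECASE),
    "XSS": re.compile(r"<\s*script", re.IGNORECASE),
}

def inspect(request_body: str) -> str:
    for name, pattern in SIGNATURES.items():
        if pattern.search(request_body):
            return f"block ({name} signature matched)"
    return "forward to web server"

print(inspect("user=alice&comment=hello"))  # forward to web server
print(inspect("id=1' OR 1=1 --"))           # block (SQL injection signature matched)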

Concentrators

Network devices called concentrators act as traffic-management devices, managing flows from multiple points into single streams. Concentrators typically act as endpoints for a particular protocol, such as SSL/TLS or VPN. The use of specialized hardware can enable hardware-based encryption and provide a higher level of specific service than a general-purpose server. This provides both architectural and functional efficiencies.

To prevent unauthorized wireless access to the network, configuration of remote access protocols to a wireless access point is common. Forcing authentication and verifying authorization is a seamless method of performing basic network security for connections in this fashion. These access protocols are covered in Chapter 11.

Wireless Devices

Wireless devices bring additional security concerns. There is, by definition, no physical connection to a wireless device; radio waves or infrared carry data, which allows anyone within range access to the data. This means that unless you take specific precautions, you have no control over who can see your data. Placing a wireless device behind a firewall does not do any good, because the firewall stops only physically connected traffic from reaching the device. Outside traffic can come literally from the parking lot directly to the wireless device and into the network.

The point of entry from a wireless device to a wired network is performed at a device called a wireless access point. Wireless access points can support multiple concurrent devices accessing network resources through the network node they create.

Several mechanisms can be used to add wireless functionality to a machine. For PCs, this can be done via an expansion card. For notebooks, a PCMCIA adapter for wireless networks is available from several vendors. For both PCs and notebooks, vendors have introduced USB-based wireless connectors. Some vendors' cards include an extended section that acts as an antenna; not all cards have the same configuration, although they all perform the same function: to enable a wireless network connection. The numerous wireless protocols (802.11a, b, g, i, n, and ac) are covered in Chapter 12. Wireless access points and cards must be matched by protocol for proper operation.

Modems

Modems were once a slow method of remote connection that was used to connect client workstations to remote services over standard telephone lines. Modem is a shortened form of modulator/demodulator, converting analog signals to digital, and vice versa. Connecting a digital computer signal to the analog telephone line required one of these devices. Today, the use of the term has expanded to cover devices connected to special digital telephone lines (DSL modems) and to cable television lines (cable modems). Although these devices are not actually modems in the true sense of the word, the term has stuck through marketing efforts directed at consumers. DSL and cable modems offer broadband high-speed connections and the opportunity for continuous connections to the Internet. Along with these new desirable characteristics come some undesirable ones, however.

Although they both provide the same type of service, cable and DSL modems have some differences. A DSL modem provides a direct connection between a subscriber's computer and an Internet connection at the local telephone company's switching station. This private connection offers a degree of security, as it does not involve others sharing the circuit. Cable modems are set up in shared arrangements that theoretically could allow a neighbor to sniff a user's cable modem traffic.

Cable modems were designed to share a party line in the terminal signal area, and the cable modem standard, Data Over Cable Service Interface Specification (DOCSIS), was designed to accommodate this concept. DOCSIS includes built-in support for security protocols, including authentication and packet filtering. Although this does not guarantee privacy, it prevents ordinary subscribers from seeing others’ traffic without using specialized hardware.

Figure 10.7 shows a modern cable modem. It has an embedded wireless access point, a voice over IP (VoIP) connection, a local router, and a DHCP server. The device is fairly large because it includes a built-in lead-acid battery to provide VoIP service when power is out.

Figure 10.7   Modern cable modem

Both cable and DSL services are designed for a continuous connection, which brings up the question of IP address life for a client. Although some services originally used a static IP arrangement, virtually all have now adopted the Dynamic Host Configuration Protocol (DHCP) to manage their address space. A static IP address has the advantage of remaining the same and enabling convenient DNS connections for outside users. Because cable and DSL services are primarily designed for client services, as opposed to host services, this is not a relevant issue. The security issue with a static IP address is that it is a stationary target for hackers. The move to DHCP has not significantly lessened this threat, however, because the typical IP lease on a cable modem DHCP server is for days. This is still relatively stationary, and some form of firewall protection needs to be employed by the user.

Cable/DSL Security

The modem equipment provided by the subscription service converts the cable or DSL signal into a standard Ethernet signal that can then be connected to a NIC on the client device. This is still just a direct network connection, with no security device separating the two. The most common security device used in cable/DSL connections is a router that acts as a hardware firewall. The firewall/router needs to be installed between the cable/DSL modem and client computers.

Coexisting Communications

Data and voice communications have coexisted in enterprises for decades. The recent interconnection of voice over IP (VoIP) and traditional private branch exchange solutions inside the enterprise increases both functionality and security risks. Specific firewalls to protect against unauthorized traffic over telephony connections are available to counter the increased risk.

Telephony

A private branch exchange (PBX) is an extension of the public telephone network into a business. Although typically considered separate entities from data systems, PBXs are frequently interconnected and have security requirements as part of this interconnection, as well as security requirements of their own. PBXs are computer-based switching equipment designed to connect telephones into the local phone system. Basically digital switching systems, they can be compromised from the outside and used by phone hackers (known as phreakers) to make phone calls at the business’s expense. Although this type of hacking has decreased as the cost of long-distance calling has decreased, it has not gone away, and as several firms learn every year, voicemail boxes and PBXs can be compromised and the long-distance bills can get very high, very fast.

Another problem with PBXs arises when they are interconnected to the data systems, either by corporate connection or by rogue modems in the hands of users. In either case, a path exists for connection to outside data networks and the Internet. Just as a firewall is needed for security on data connections, one is needed for these connections as well. Telecommunications firewalls are a distinct type of firewall designed to protect both the PBX and the data connections. The functionality of a telecommunications firewall is the same as that of a data firewall: it is there to enforce security policies. Telecommunication security policies can be enforced even to cover hours of phone use, to prevent unauthorized long-distance usage through the implementation of access codes and/or restricted service hours.

VPN Concentrator

A virtual private network (VPN) is a construct used to provide a secure communication channel between users across public networks such as the Internet. A VPN concentrator is a special endpoint inside a network designed to accept multiple VPN connections and integrate these independent connections into the network in a scalable fashion. The most common implementation of VPN is via IPsec, a protocol for IP security. IPsec is mandated in IPv6 and is optional in IPv4. IPsec can be implemented in hardware, software, or a combination of both and is used to encrypt all IP traffic. In Chapter 11, a variety of techniques are described that can be employed to instantiate a VPN connection. The use of encryption technologies allows either the data in a packet to be encrypted or the entire packet to be encrypted. If the data is encrypted, the packet header can still be sniffed and observed between source and destination, but the encryption protects the contents of the packet from inspection. If the entire packet is encrypted, it is then placed into another packet and sent via tunnel across the public network. Tunneling can protect even the identity of the communicating parties.
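
The difference between encrypting only the data and encrypting the entire packet can be sketched with an authenticated cipher. This is a conceptual sketch using the Python cryptography package, not the actual IPsec wire format: the transport-style case leaves the original header observable, while the tunnel-style case hides it inside a new outer header.

# vpn_modes.py - conceptual sketch of payload-only vs. whole-packet encryption.
# Requires the 'cryptography' package; this is not the IPsec wire format.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

header = b"src=10.0.0.5;dst=10.9.9.9"   # illustrative packet header
payload = b"account balance: $1,000"

# Transport-style: only the payload is encrypted; the header stays sniffable.
transport = header + b"|" + aead.encrypt(os.urandom(12), payload, None)

# Tunnel-style: header and payload are encrypted together, then wrapped in a
# new outer header, hiding even the original communicating parties.
tunnel = b"src=gw1;dst=gw2|" + aead.encrypt(os.urandom(12), header + b"|" + payload, None)

print(transport[:25])  # original header is still visible in transit
print(tunnel[:16])     # only the gateway-to-gateway outer header is visible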

A VPN concentrator is a hardware device designed to act as a VPN endpoint, managing VPN connections to an enterprise.

Security Devices

A range of devices can be employed at the network layer to instantiate security functionality. Devices can be used for intrusion detection, network access control, and a wide range of other security functions. Each device has a specific network function and plays a role in maintaining network infrastructure security.

Intrusion Detection Systems

Intrusion detection systems (IDSs) are an important element of infrastructure security. IDSs are designed to detect, log, and respond to unauthorized network or host use, both in real time and after the fact. IDSs are available from a wide selection of vendors and are an essential part of a comprehensive network security program. These systems are implemented using software, but in large networks or systems with significant traffic levels, dedicated hardware is typically required as well. IDSs can be divided into two categories: network-based systems and host-based systems.

Intrusion Detection

From a network infrastructure point of view, network-based IDSs can be considered part of infrastructure, whereas host-based IDSs are typically considered part of a comprehensive security program and not necessarily infrastructure. Two primary methods of detection are used: signature-based and anomaly-based. IDSs are covered in detail in Chapter 13.

Network Access Control

Networks comprise connected workstations and servers. Managing security on a network involves managing a wide range of issues related to the various connected hardware devices and the software operating them. Assuming that the network is secure, each additional connection involves risk. Managing the endpoints on a case-by-case basis as they connect is a security methodology known as network access control. Two main competing methodologies exist: Network Access Protection (NAP) is a Microsoft technology for controlling network access of a computer host, and Network Admission Control (NAC) is Cisco's technology for controlling network admission.

Microsoft's NAP system is based on measuring the system health of the connecting machine, including patch levels of the OS, antivirus protection, and system policies. The objective behind NAP is to enforce policy and governance standards on network devices before they are allowed data-level access to a network. NAP was first utilized in Windows XP Service Pack 3, Windows Vista, and Windows Server 2008, and it requires additional infrastructure servers to implement the health checks. The system includes enforcement agents that interrogate clients and verify admission criteria. Admission criteria can include client machine ID, status of updates, and so forth.

Using NAP, network administrators can define granular levels of network access based on multiple criteria, such as who a client is, what groups a client belongs to, and the degree to which that client is compliant with corporate client health requirements. These health requirements include OS updates, antivirus updates, and critical patches. Response options include rejection of the connection request and restriction of admission to a subnet. NAP also provides a mechanism for automatic remediation of client health requirements and restoration of normal access when the client is healthy.
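
The admission decision itself can be modeled as a simple policy function. A minimal sketch; the health attributes, thresholds, and responses are hypothetical illustrations, not Microsoft's actual NAP interfaces.

# nac_sketch.py - illustrative admission health check in the spirit of NAP/NAC.
APPROVED_MACHINES = {"WKS-0142", "WKS-07A9"}    # hypothetical machine IDs

def admission_decision(client: dict) -> str:
    if not client["antivirus_current"]:
        return "restrict to remediation subnet (update antivirus)"
    if client["missing_critical_patches"] > 0:
        return "restrict to remediation subnet (apply patches)"
    if client["machine_id"] not in APPROVED_MACHINES:
        return "reject connection request"
    return "grant full network access"

print(admission_decision({"antivirus_current": True,
                          "missing_critical_patches": 0,
                          "machine_id": "WKS-0142"}))
# grant full network access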

NAC Agents

NAC systems can be employed using agents on a client, and these agents can either persist (permanent) or be renewed (dissolvable) with every connection. The agents perform the health checks and report to the NAC system in the enterprise the condition of the system being connected. It is also possible to perform these same functions with software that resides in the network itself, and these are typically referred to as agentless systems.

Cisco’s NAC system is built around an appliance that enforces policies chosen by the network administrator. A series of third-party solutions can interface with the appliance, allowing the verification of many different options, including client policy settings, software updates, and client security posture. The use of third-party devices and software makes this an extensible system across a wide range of equipment.

Both Cisco NAC and Microsoft NAP are nearing end of life: NAC has been discontinued, and NAP is being phased out as an active product. Both lost ground to the adoption of 802.1X, which, while it can only confirm the identity of a user or machine, is widely used in networks and has been seen as good enough. The concept of automated admission checking based on client device characteristics is here to stay, as it provides timely control in the ever-changing network world of today’s enterprises.

Network Monitoring/Diagnostic

A computer network itself can be considered a large computer system, with performance and operating issues. Just as a computer needs management, monitoring, and fault resolution, so too do networks. SNMP was developed to perform this function across networks. The idea is to enable a central monitoring and control center to maintain, configure, and repair network devices, such as switches and routers, as well as other network services, such as firewalls, IDSs, and remote access servers. SNMP has some security limitations, and many vendors have developed software solutions that sit on top of SNMP to provide better security and better management tool suites.

Images

SNMP, the Simple Network Management Protocol, is a part of the Internet Protocol suite of protocols. It is an open standard, designed for transmission of management functions between devices. Do not confuse this with SMTP, the Simple Mail Transfer Protocol, which is used to transfer mail between machines.
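As a hedged example of the kind of query SNMP supports, the following sketch uses the third-party pysnmp library (the classic synchronous hlapi API) to read a device’s system description. The target address and the “public” community string are placeholders.

    # SNMP GET sketch using the third-party pysnmp library (classic hlapi).
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    errorIndication, errorStatus, errorIndex, varBinds = next(getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),        # SNMPv2c, placeholder string
        UdpTransportTarget(('192.0.2.1', 161)),    # device under management
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

    if errorIndication:
        print(errorIndication)
    else:
        for name, value in varBinds:
            print(name, '=', value)                # the device's description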

The concept of a network operations center (NOC) comes from the old phone company network days, when central monitoring centers supervised the health of the telephone network and provided interfaces for maintenance and management. This same concept works well with computer networks, and companies with midsize and larger networks employ the same philosophy. The NOC allows operators to observe and interact with the network, using the self-reporting and, in some cases, self-healing nature of network devices to ensure efficient network operation. Although generally a boring operation under normal conditions, when things start to go wrong, as in the case of a virus or worm attack, the NOC can become a busy and stressful place, as operators attempt to return the system to full efficiency while not interrupting existing traffic.

Virtual IPs

In a load-balanced environment, the IP addresses for the target servers of a load balancer will not necessarily match the address associated with the router sending the traffic. Load balancers handle this through the concept of virtual IP addresses, or virtual IPs, which allow multiple systems to be presented as a single IP address.

Because networks can be spread out literally around the world, it is not feasible to have a person visit each device for control functions. Software enables controllers at NOCs to measure the actual performance of network devices and make changes to the configuration and operation of devices remotely. The ability to make remote connections with this level of functionality is both a blessing and a security issue. Although this allows for efficient network operations management, it also provides an opportunity for unauthorized entry into a network. For this reason, a variety of security controls are used, from secondary networks to VPNs and advanced authentication methods with respect to network control connections.

Network monitoring is an ongoing concern for any significant network. In addition to monitoring traffic flow and efficiency, monitoring of security-related events is necessary. IDSs act merely as alarms, indicating the possibility of a breach associated with a specific set of activities. These indications still need to be investigated and an appropriate response needs to be initiated by security personnel. Simple items such as port scans may be ignored by policy, but an actual unauthorized entry into a network router, for instance, would require NOC personnel to take specific actions to limit the potential damage to the system. In any significant network, coordinating system changes, dynamic network traffic levels, potential security incidents, and maintenance activities are daunting tasks requiring numerous personnel working together. Software has been developed to help manage the information flow required to support these tasks. Such software can enable remote administration of devices in a standard fashion so that the control systems can be devised in a hardware vendor–neutral configuration.

SNMP is the main standard embraced by vendors to permit interoperability. Although SNMP has received a lot of security-related attention of late due to various security holes in its implementation, it is still an important part of a security solution associated with network infrastructure. Many useful tools have security issues; the key is to understand the limitations and to use the tools within correct boundaries to limit the risk associated with the vulnerabilities. Blind use of any technology will result in increased risk, and SNMP is no exception. Proper planning, setup, and deployment can limit exposure to vulnerabilities. Continuous auditing and maintenance of systems with the latest patches is a necessary part of operations and is essential to maintaining a secure posture.

Scheduling Load Balancing

The scheduling of the next recipient of load-balanced traffic is done either by affinity scheduling or by round robin. Affinity scheduling maintains a connection to a specific resource, while round robin moves to the next available resource. The other issue is redundancy: load balancer pairs can be configured either active-passive or active-active, where the first word indicates the state of the primary system and the second word indicates the state of the redundant system.
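A minimal Python sketch of the two scheduling approaches follows; the server names and client addresses are hypothetical.

    # Sketch of round robin versus affinity ("sticky") scheduling.
    import itertools
    import zlib

    servers = ["srv-a", "srv-b", "srv-c"]
    rotation = itertools.cycle(servers)

    def round_robin(_client_ip: str) -> str:
        # Each new request goes to the next server in the rotation.
        return next(rotation)

    def affinity(client_ip: str) -> str:
        # Hash the client address so the same client always lands on the
        # same server, preserving its session state there.
        return servers[zlib.crc32(client_ip.encode()) % len(servers)]

    for ip in ["10.0.0.1", "10.0.0.2", "10.0.0.1"]:
        print(ip, "round robin:", round_robin(ip), "affinity:", affinity(ip))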

Load Balancers

Certain systems, such as servers, are more critical to business operations and should therefore be the object of fault-tolerance measures. Load balancers are designed to distribute the processing load over two or more systems. They are used to help improve resource utilization and throughput, but they also have the added advantage of increasing the fault tolerance of the overall system since a critical process may be split across several systems. Should any one system fail, the others can pick up the processing it was handling.

Proxies

Proxies serve to manage connections between systems, acting as relays for the traffic. Proxies can function at the circuit level, where they support multiple traffic types, or they can be application-level proxies, which are designed to relay specific application traffic. An HTTP proxy can manage an HTTP conversation as it understands the type and function of the content. Application-specific proxies can serve as security devices if they are programmed with specific rules designed to provide protection against undesired content.

Though not strictly a security tool, a proxy server (or simply proxy) can be used to filter out undesirable traffic and prevent employees from accessing potentially hostile web sites. A proxy server takes requests from a client system and forwards them to the destination server on behalf of the client, as shown in Figure 10.8. Proxy servers can be completely transparent (these are usually called gateways or tunneling proxies), or a proxy server can modify the client request before sending it on, or even serve the client’s request without needing to contact the destination server. Several major categories of proxy servers are in use:

Images

Figure 10.8   HTTP proxy handling client requests and web server responses

Images   Anonymizing proxy An anonymizing proxy is designed to hide information about the requesting system and make a user’s web browsing experience “anonymous.” This type of proxy service is often used by individuals who are concerned about the amount of personal information being transferred across the Internet and the use of tracking cookies and other mechanisms to track browsing activity.

Images   Caching proxy This type of proxy keeps local copies of popular client requests and is often used in large organizations to reduce bandwidth usage and increase performance. When a request is made, the proxy server first checks to see whether it has a current copy of the requested content in the cache; if it does, it services the client request immediately without having to contact the destination server. If the content is old or the caching proxy does not have a copy of the requested content, the request is forwarded to the destination server. (A minimal cache-lookup sketch appears after this list.)

Images   Content-filtering proxy Content-filtering proxies examine each client request and compare it to an established acceptable use policy (AUP). Requests can usually be filtered in a variety of ways, including by the requested URL, destination system, or domain name or by keywords in the content itself. Content-filtering proxies typically support user-level authentication, so access can be controlled and monitored and activity through the proxy can be logged and analyzed. This type of proxy is very popular in schools, corporate environments, and government networks.

Images   Open proxy An open proxy is essentially a proxy that is available to any Internet user and often has some anonymizing capabilities as well. This type of proxy has been the subject of some controversy, with advocates for Internet privacy and freedom on one side of the argument, and law enforcement, corporations, and government entities on the other side. As open proxies are often used to circumvent corporate proxies, many corporations attempt to block the use of open proxies by their employees.

Images   Reverse proxy A reverse proxy is typically installed on the server side of a network connection, often in front of a group of web servers. The reverse proxy intercepts all incoming web requests and can perform a number of functions, including traffic filtering and shaping, SSL decryption, serving of common static content such as graphics, and performing load balancing.

Images   Web proxy A web proxy is solely designed to handle web traffic and is sometimes called a web cache. Most web proxies are essentially specialized caching proxies.
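The cache-lookup logic behind a caching proxy can be sketched in a few lines of Python. This toy version, which assumes the third-party requests library, ignores content freshness entirely; real caching proxies honor HTTP cache-control and expiry headers.

    # Minimal caching-proxy lookup sketch; freshness checks are omitted.
    import requests  # third-party library, assumed installed

    cache = {}

    def proxied_get(url: str) -> bytes:
        if url in cache:
            return cache[url]        # serve the local copy, no upstream trip
        body = requests.get(url, timeout=5).content
        cache[url] = body            # store for subsequent clients
        return body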

Images

A proxy server is a system or application that acts as a go-between for clients’ requests for network services. The client tells the proxy server what it wants and, if the client is authorized to have it, the proxy server connects to the appropriate network service and gets the client what it asked for. Web proxies are the most commonly deployed type of proxy server.

Deploying a proxy solution within a network environment is usually done either by setting up the proxy and requiring all client systems to configure their browsers to use the proxy or by deploying an intercepting proxy that actively intercepts all requests without requiring client-side configuration.
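For the first deployment style, each client is explicitly configured to send its requests through the proxy. As a hedged example, the following Python snippet uses the third-party requests library with a placeholder proxy host.

    # Client explicitly configured to use a proxy; the host name and port
    # are placeholders for whatever the enterprise actually deploys.
    import requests

    proxies = {
        "http":  "http://proxy.example.internal:3128",
        "https": "http://proxy.example.internal:3128",
    }
    resp = requests.get("http://example.com/", proxies=proxies, timeout=5)
    print(resp.status_code)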

From a security perspective, proxies are most useful in their ability to control and filter outbound requests. By limiting the types of content and web sites employees can access from corporate systems, many administrators hope to avoid loss of corporate data, hijacked systems, and infections from malicious web sites. Administrators also use proxies to enforce corporate AUPs and track use of corporate resources. Most proxies can be configured to either allow or require individual user authentication—this gives them the ability to log and control activity based on specific users or groups. For example, an organization might want to allow the human resources group to browse Facebook during business hours but not allow the rest of the organization to do so.

Web Security Gateways

Some security vendors combine proxy functions with content-filtering functions to create a product called a web security gateway. Web security gateways are intended to address the security threats and pitfalls unique to web-based traffic. Web security gateways typically provide the following capabilities:

Images   Real-time malware protection (a.k.a. malware inspection) The ability to scan all outgoing and incoming web traffic to detect and block undesirable traffic such as malware, spyware, adware, malicious scripts, file-based attacks, and so on

Images   Content monitoring The ability to monitor the content of web traffic being examined to ensure that it complies with organizational policies

Images   Productivity monitoring The ability to measure types and quantities of web traffic being generated by specific users, groups of users, or the entire organization

Images   Data protection and compliance Scanning web traffic for sensitive or proprietary information being sent outside of the organization as well as the use of social network sites or inappropriate sites

Internet Content Filters

With the dramatic proliferation of Internet traffic and the push to provide Internet access to every desktop, many corporations have implemented content-filtering systems, called Internet content filters, to protect them from employees’ viewing of inappropriate or illegal content at the workplace and the subsequent complications that occur when such viewing takes place. Internet content filtering is also popular in schools, libraries, homes, government offices, and any other environment where there is a need to limit or restrict access to undesirable content. In addition to filtering undesirable content, such as pornography, some content filters can also filter out malicious activity such as browser hijacking attempts or cross-site scripting (XSS) attacks. In many cases, content filtering is performed with or as a part of a proxy solution, as the content requests can be filtered and serviced by the same device. Content can be filtered in a variety of ways, including via the requested URL, the destination system, the domain name, by keywords in the content itself, and by type of file requested.

Images

The term Internet content filter, or just content filter, is applied to any device, application, or software package that examines network traffic (especially web traffic) for undesirable or restricted content. A content filter could be a software package loaded on a specific PC or a network appliance capable of filtering an entire organization’s web traffic.

Content-filtering systems face many challenges. The ever-changing Internet makes it difficult to maintain lists of undesirable sites (sometimes called black lists); terms used on a medical site can also appear on a pornographic site, making keyword filtering challenging; and determined users are always seeking ways to bypass proxy filters. To help administrators, most commercial content-filtering solutions provide an update service, much like IDS or antivirus products, that updates keywords and undesirable sites automatically.
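A toy Python sketch of these filtering mechanisms follows; the blocked-site and keyword lists are placeholders for the managed, automatically updated lists that commercial products provide.

    # Toy content filter: block by listed site or by keyword in the content.
    BLOCKED_SITES = {"badsite.example", "malware.example"}
    BLOCKED_KEYWORDS = {"forbidden-term"}

    def allow_request(host: str, content: str) -> bool:
        if host in BLOCKED_SITES:
            return False
        return not any(word in content.lower() for word in BLOCKED_KEYWORDS)

    print(allow_request("news.example", "harmless article text"))  # True
    print(allow_request("badsite.example", "anything"))            # False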

Data Loss Prevention

Data loss prevention (DLP) refers to technology employed to detect and prevent transfers of data across an enterprise. Employed at key locations, DLP technology can scan packets for specific data patterns. This technology can be tuned to detect account numbers, secrets, specific markers, or files. When specific data elements are detected, the system can block the transfer. The primary challenge in employing DLP technologies is the placement of the sensor. The DLP sensor needs to be able to observe the data, so if the channel is encrypted, DLP technology can be thwarted.
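A minimal sketch of DLP-style pattern matching follows. The patterns detect strings shaped like a U.S. Social Security number or a 16-digit card number; real DLP products add checksums (for example, the Luhn check), context analysis, and file parsing.

    # DLP-style scan of outbound text for sensitive data patterns.
    import re

    PATTERNS = {
        "ssn":  re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),   # 16 digits, loose format
    }

    def scan(text: str) -> list:
        return [name for name, rx in PATTERNS.items() if rx.search(text)]

    print(scan("invoice 4111 1111 1111 1111 attached"))  # ['card']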

Unified Threat Management

Many security vendors offer “all-in-one security appliances,” which are devices that combine multiple functions into the same hardware appliance. Most commonly these functions are firewall, IDS/IPS, and antivirus, although all-in-one appliances can include VPN capabilities, antispam, malicious web traffic filtering, antispyware, content filtering, traffic shaping, and so on. All-in-one appliances are often sold as being cheaper, easier to manage, and more efficient than having separate solutions that accomplish each of the functions the all-in-one appliance is capable of performing. A common name for these all-in-one appliances is a unified threat management (UTM) appliance. Using a UTM solution simplifies the security activity as a single task, under a common software package for operations. This reduces the learning curve to a single tool rather than a collection of tools. A UTM solution can have better integration and efficiencies in handling network traffic and incidents than a collection of tools connected together.

Figure 10.9 illustrates the advantages of UTM processing. Rather than processing elements in a linear fashion, as shown in Figure 10.9a, the packets are processed in a parallelized fashion, as shown in Figure 10.9b. There is a need to coordinate between the elements, and many modern solutions do this with parallelized hardware.

Images

Figure 10.9   Unified threat management architecture

URL Filtering

URL filters block connections to web sites that are in a prohibited list. The use of a UTM appliance, typically backed by a service to keep the list of prohibited web sites updated, provides an automated means to block access to sites deemed dangerous or inappropriate. Because of the highly volatile nature of web content, automated enterprise-level protection is needed to ensure a reasonable chance of blocking sources of inappropriate content, malware, and other malicious content.

Content Inspection

Instead of just relying on a URL to determine the acceptability of content, UTM appliances can also inspect the actual content being served. Content inspection is used to filter web requests that return content with specific components, such as names of body parts, music or video content, and other content that is inappropriate for the business environment.

Malware Inspection

Malware is another item that can be detected during network transmission, and UTM appliances can be tuned to detect malware. Network-based malware detection has the advantage of having to update only a single system, as opposed to all machines.

Images Media

The base of communications between devices is the physical layer of the OSI model. This is the domain of the actual connection between devices, whether by wire, fiber, or radio frequency waves. The physical layer separates the definitions and protocols required to transmit the signal physically between boxes from higher-level protocols that deal with the details of the data itself. Four common methods are used to connect equipment at the physical layer:

Images   Coaxial cable

Images   Twisted-pair cable

Images   Fiber-optics

Images   Wireless

Coaxial Cable

Coaxial cable is familiar to many households as a method of connecting televisions to VCRs or to satellite or cable services. It is used because of its high bandwidth and shielding capabilities. Compared to standard twisted-pair lines such as telephone lines, coaxial cable (commonly known as coax) is much less prone to outside interference. It is also much more expensive to run, both from a cost-per-foot measure and from a cable-dimension measure. Coax costs much more per foot than standard twisted-pair wires and carries only a single circuit for a large wire diameter.

Images

An original design specification for Ethernet connections, coax was used from machine to machine in early Ethernet implementations. The connectors were easy to use and ensured good connections, and the limited distance of most office LANs did not carry a large cost penalty. Today, almost all of this older Ethernet specification has been replaced by faster, cheaper twisted-pair alternatives, and the only place you’re likely to see coax in a data network is from the cable box to the cable modem.

Because of its physical nature, it is possible to drill a hole through the outer part of a coax cable and connect to the center connector. This is called a “vampire tap” and is an easy method to get access to the signal and data being transmitted.

UTP/STP

Twisted-pair wires have all but completely replaced coaxial cables in Ethernet networks. Twisted-pair wires use the same technology used by the phone company for the movement of electrical signals. Single pairs of twisted wires reduce electrical crosstalk and electromagnetic interference. Multiple groups of twisted pairs can then be bundled together in common groups and easily wired between devices.

Images

Twisted pairs come in two types: shielded and unshielded. Shielded twisted-pair (STP) has a foil shield around the pairs to provide extra shielding from electromagnetic interference. Unshielded twisted-pair (UTP) relies on the twist to eliminate interference. UTP has a cost advantage over STP and is usually sufficient for connections, except in very noisy electrical areas.

Twisted-pair lines are categorized by the level of data transmission they can support. Four categories are currently in use:

Images   Category 3 (Cat 3) Minimum for voice and 10-Mbps Ethernet.

Images   Category 5 (Cat 5/Cat 5e) For 100-Mbps Fast Ethernet; Cat 5e is an enhanced version of the Cat 5 specification to address far-end crosstalk and is suitable for 1000 Mbps.

Images   Category 6 (Cat 6/Cat 6a) For 10-Gigabit Ethernet over short distances; Cat 6a is used for longer, up to 100m, 10-Gbps cables.

Images   Category 7 (Cat 7) For 10-Gigabit Ethernet and higher. Cat 7 has been used for 100-Gbps links up to 15 meters.

A comparison of the different cables is shown next. Note that UTP is unshielded twisted pair, STP is shielded twisted pair, and S/FTP is shielded/foil twisted pair.

Images

Images

Images

The standard method for connecting twisted-pair cables is via an 8-pin connector, called an RJ-45 connector, which looks like a standard phone jack connector but is slightly larger. One nice aspect of twisted-pair cabling is that it’s easy to splice and change connectors. Many a network administrator has made Ethernet cables from stock Cat-5 wire, two connectors, and a crimping tool. This ease of connection is also a security issue; because twisted-pair cables are easy to splice into, rogue connections for sniffing could be made without detection in cable runs. Both coax and fiber are much more difficult to splice because each requires a tap to connect, and taps are easier to detect.

Fiber

Fiber-optic cable uses beams of laser light to connect devices over a thin glass wire. The biggest advantage to fiber is its bandwidth, with transmission capabilities into the terabits per second range. Fiber-optic cable is used to make high-speed connections between servers and is the backbone medium of the Internet and large networks. For all of its speed and bandwidth advantages, fiber has one major drawback—cost.

Images

The cost of using fiber is a two-edged sword. When measured by bandwidth, using fiber is cheaper than using competing wired technologies. The length of runs of fiber can be much longer, and the data capacity of fiber is much higher. But connections to a fiber are difficult and expensive, and splicing fiber is impractical. Making the precise connection on the end of a fiber-optic line is a highly skilled job that is done by specially trained professionals who maintain a level of proficiency. Once the connector is fitted on the end, several forms of connectors and blocks are used, as shown in the preceding images.

Splicing fiber is practically impossible; the solution is to add connectors and connect through a repeater. This adds to the security of fiber in that unauthorized connections are all but impossible to make. The high cost of connections to fiber and the higher cost of fiber per foot also make it less attractive for the final mile in public networks where users are connected to the public switching systems. For this reason, cable companies use coax and DSL providers use twisted-pair to handle the “last mile” scenario.

Unguided Media

Electromagnetic waves have been transmitted to convey signals literally since the inception of radio. Unguided media is a term used to cover all transmission media not guided by wire, fiber, or other constraints; it includes radio frequency, infrared, and microwave methods. All types of unguided media have one attribute in common: because they are unguided, they can travel to many machines simultaneously. Transmission patterns can be modulated by antennas, but the target machine can be one of many in a reception zone. As such, security principles are even more critical, as they must assume that unauthorized users have access to the signal.

Infrared

Infrared (IR) is a band of electromagnetic energy just beyond the red end of the visible color spectrum. IR has been used in remote-control devices for years. IR made its debut in computer networking as a wireless method to connect to printers. Now that wireless keyboards, wireless mice, and mobile devices exchange data via IR, it seems to be everywhere. IR can also be used to connect devices in a network configuration, but it is slow compared to other wireless technologies. IR cannot penetrate walls but instead bounces off them. Nor can it penetrate other solid objects; therefore, if you stack a few items in front of the transceiver, the signal is lost.

RF/Microwave

The use of radio frequency (RF) waves to carry communication signals goes back to the beginning of the 20th century. RF waves are a common method of communicating in a wireless world. They use a variety of frequency bands, each with special characteristics. The term microwave is used to describe a specific portion of the RF spectrum that is used for communication and other tasks, such as cooking.

Wireless Options

There are numerous radio-based alternatives for carrying network traffic. They vary in capacity, distance, and other features. Commonly found examples are Wi-Fi, WiMAX, ZigBee, Bluetooth, 900 MHz, and NFC. Understanding the security requirements associated with each is important and is covered in more detail in Chapter 12.

Point-to-point microwave links have been installed by many network providers to carry communications over long distances and rough terrain. Many different frequencies are used in the microwave bands for many different purposes. Today, home users can use wireless networking throughout their house and enable laptops to surf the Web while they’re moved around the house. Corporate users are experiencing the same phenomenon, with wireless networking enabling corporate users to check e-mail on laptops while riding a shuttle bus on a business campus. These wireless solutions are covered in detail in Chapter 12.

One key feature of microwave communications is that microwave RF energy can penetrate reasonable amounts of building structure. This allows you to connect network devices in separate rooms, and it can remove the constraints on equipment location imposed by fixed wiring. Another key feature is broadcast capability. By its nature, RF energy is unguided and can be received by multiple users simultaneously. Microwaves allow multiple users access in a limited area, and microwave systems are seeing application as the last mile of the Internet in dense metropolitan areas. Point-to-multipoint microwave devices can deliver data communication to all the business users in a downtown metropolitan area through rooftop antennas, reducing the need for expensive building-to-building cables. Just as microwaves carry cell phone and other data communications, the same technologies offer a method to bridge the “last mile” problem.

The “last mile” problem is the connection of individual consumers to a backbone, an expensive proposition because of the sheer number of connections and the unshared lines at this point in a network. Again, cost is an issue, as transceiver equipment is expensive, but in densely populated areas, such as apartments and office buildings in metropolitan areas, the user density can help defray individual costs. Speed on commercial microwave links can exceed 10 Gbps, so speed is not a problem for connecting multiple users or for high-bandwidth applications.

Images Removable Media

One concept common to all computer users is data storage. Sometimes storage occurs on a file server and sometimes it occurs on movable media, which can then be transported between machines. Moving storage media represents a security risk from a couple of angles—the first being the potential loss of control over the data on the moving media. Second is the risk of introducing unwanted items, such as a virus or a worm, when the media is attached back to a network. Both of these issues can be remedied through policies and software. The key is to ensure that the policies are enforced and the software is effective. To describe media-specific issues, media can be divided into three categories: magnetic, optical, and electronic.

Images

Removable and transportable media make the physical security of the data a more difficult task. The only solution to this problem is encryption, which is covered in Chapter 5.

Magnetic Media

Magnetic media stores data through the rearrangement of magnetic particles on a nonmagnetic substrate. Common forms include hard drives, floppy disks, zip disks, and magnetic tape. Although the specific format can differ, the basic concept is the same. All these devices share some common characteristics: each has sensitivity to external magnetic fields. Attach a floppy disk to the refrigerator door with a magnet if you want to test the sensitivity. They are also affected by high temperatures, as in fires, and by exposure to water.

Hard Drives

Hard drives used to require large machines in mainframes. Now they are small enough to attach to mobile devices. The concepts remain the same among all of them: a spinning platter rotates the magnetic media beneath heads that read the patterns in the oxide coating. As drives have gotten smaller and rotation speeds have increased, the capacities have also grown. Today, gigabytes of data can be stored in a device slightly larger than a bottle cap. Portable hard drives in the 1TB to 3TB range are now available and affordable.

One of the security controls available to help protect the confidentiality of the data is full drive encryption built into the drive hardware. Using a key that is controlled through a Trusted Platform Module (TPM) interface, for instance, this technology protects the data if the drive itself is lost or stolen. This may not be important if a thief takes the whole PC, but in larger storage environments, drives are placed in separate boxes and remotely accessed. In the specific case of notebook machines, this layer can be tied to smart card interfaces to provide more security. Because this is built into the controller, encryption protocols such as Advanced Encryption Standard (AES) and Triple Data Encryption Standard (3DES) can be performed at full drive speed.
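Self-encrypting drives perform this work inside the drive controller, but a short software sketch conveys the operation. The following assumes the third-party cryptography library and uses AES in GCM mode; the sector contents are invented.

    # Software sketch of AES encryption; a self-encrypting drive does the
    # equivalent in hardware at full drive speed, with the key TPM-protected.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # held via the TPM in hardware
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                      # must be unique per encryption

    sector = b"contents of one disk sector"
    ciphertext = aesgcm.encrypt(nonce, sector, None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == sector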

Images

Diskettes

Floppy disks were the computer industry’s first attempt at portable magnetic media. The movable medium was placed in a protective sleeve, and the drive remained in the machine. Capacities up to 1.4MB were achieved, but the fragility of the device as the size increased, as well as competing media, has rendered floppies almost obsolete. Diskettes are part of history now.

Tape

Magnetic tape has held a place in computer centers since the beginning of computing. Its primary use has been bulk offline storage and backup. Tape functions well in this role because of its low cost. The disadvantage of tape is its nature as a serial access medium, making it slow to work with for large quantities of data. Several types of magnetic tape are in use today, ranging from quarter inch to digital linear tape (DLT) and digital audio tape (DAT). These cartridges can hold upward of 60GB of compressed data.

Images

Tapes are still a major concern from a security perspective, as they are used to back up many types of computer systems. The physical protection afforded the tapes is of concern, because if a tape is stolen, an unauthorized user could restore the data to a system of their own, because everything needed is stored on the tape. Offsite storage is needed for proper disaster recovery protection, but secure offsite storage and transport is what is really needed. This important issue is frequently overlooked in many facilities. The simple solution to maintain control over the data even when you can’t control the tape is encryption. Backup utilities can secure the backups with encryption, but this option is frequently not used, for a variety of reasons. Regardless of the rationale for not encrypting data, once a tape is lost, not using the encryption option becomes a lamented decision.

Optical Media

Optical media involves the use of a laser to read data stored on a physical device. Instead of having a magnetic head that picks up magnetic marks on a disk, a laser picks up deformities embedded in the media containing the information. As with magnetic media, optical media can be read-write, although the read-only version is still more common.

Images

CD-R/DVD

The compact disc (CD) took the music industry by storm, and then it took the computer industry by storm as well. A standard CD holds more than 640MB of data, in some cases up to 800MB, and a digital video disc (DVD) can hold almost 5GB of data single sided, or 8.5GB dual layer. These devices operate as optical storage, with little marks burned in them to represent 1’s and 0’s on a microscopic scale. The most common type of CD is the read-only version, in which the data is written to the disc once and only read afterward. This has become a popular method for distributing computer software, although higher-capacity DVDs have replaced CDs for program distribution.

A second-generation device, the recordable compact disc (CD-R), allows users to create their own CDs using a burner device in their PC and special software. Users can now back up data, make their own audio CDs, and use CDs as high-capacity storage. Their relatively low cost has made them economical to use. CDs have a thin layer of aluminum inside the plastic, upon which bumps are burned by the laser when recorded. CD-Rs use a reflective layer, such as gold, upon which a dye is placed that changes upon impact by the recording laser. A newer type, CD-RW, has a different dye that allows discs to be erased and reused. The cost of the media increases from CD, to CD-R, to CD-RW.

Blu-ray Discs

The latest version of optical disc is the Blu-ray disc. Using a smaller, violet-blue laser, this system can hold significantly more information than a DVD. Blu-ray discs can hold up to 128GB in four layers. The transfer speed of Blu-ray at more than 48 Mbps is over four times greater than that of DVD systems. Designed for high-definition (HD) video, Blu-ray offers significant storage for data as well.

Backup Lifetimes

A common misconception is that data backed up onto magnetic media will last for long periods of time. Although once touted as lasting decades, modern micro-encoding methods are proving less durable than expected, sometimes with lifetimes less than ten years. A secondary problem is maintaining operating system access via drivers to legacy equipment. As technology moves forward, finding drivers for ten-year-old tape drives for Windows 7 or the latest version of Linux will prove to be a major hurdle.

DVDs now occupy the same role that CDs did in the recent past, except that they hold more than seven times the data of a CD. This makes full-length movie recording possible on a single disc. The increased capacity comes from finer tolerances and the fact that DVDs can hold data on both sides. DVDs come in a wide range of formats, including DVD+R, DVD-R, and dual layer, as well as the HD formats HD-DVD and Blu-ray. This variety is due to competing “standards” and can result in confusion. DVD+R and -R are distinguishable only when recording, and most devices made since 2004 should read both. Dual layers add additional space but require appropriate dual-layer-enabled drives.

Images

Electronic Media

The latest form of removable media is electronic memory. Electronic circuits of static memory, which can retain data even without power, fill a niche where high density and small size are needed. Originally used in audio devices and digital cameras, these electronic media come in a variety of vendor-specific types, such as smart cards, SmartMedia, SD cards, flash cards, memory sticks, and CompactFlash devices. These memory devices range from small card-like devices, such as microSD cards smaller than a dime that hold 2GB, to USB sticks that hold up to 64GB. These devices are becoming ubiquitous, with new PCs and netbooks containing built-in slots to read them like any other storage device.

Although they are used primarily for photos and music, these devices can be used to move any digital information from one machine to another. To a machine equipped with a connector port, these devices look like any other file storage location. They can be connected to a system through a special reader or directly via a USB port. In newer PC systems, a USB boot device has replaced the older floppy drive. These devices are small, can hold a significant amount of data—over 1TB at the time of writing—and are easy to move from machine to machine. Another novel interface is a mouse that has a slot for a memory stick. This dual-purpose device conserves space, conserves USB ports, and is easy to use. The memory stick is placed in the mouse, which can then be used normally. The stick is easily removable and transportable, and the mouse works with or without it; the mouse simply serves as a convenient portal.

Images

The advent of large-capacity USB sticks has enabled users to build entire systems, OSs, and tools onto them to ensure the security and veracity of the OS and tools. With the expanding use of virtualization, a user could carry an entire system on a USB stick and boot it using virtually any hardware. With USB 3.0 and its 5-Gbps (roughly 640-MBps) transfer rate, this is a highly versatile form of memory that enables many new capabilities.

Solid-State Hard Drives

With the rise of solid-state memory technologies comes the solid-state “hard drive.” Solid-state drives (SSDs) are moving into mobile devices, desktops, and even servers. Memory densities are significantly beyond those of rotating drives, there are no moving parts to wear out or fail, and SSDs have vastly superior performance specifications. Figure 10.10 shows a 512GB SSD from a laptop, on a half-height minicard mSATA interface. The only factor that has slowed the spread of this technology is cost, but recent cost reductions have made this form of memory a first choice in many systems.

Images

Figure 10.10   512GB solid-state half-height minicard

Images Security Concerns for Transmission Media

The primary security concern for a system administrator has to be preventing physical access to a server by an unauthorized individual. Such access will almost always spell disaster—with direct access and the correct tools, any system can be infiltrated. One of the administrator’s next major concerns should be preventing unfettered access to a network connection. Access to switches and routers is almost as bad as direct access to a server, and access to network connections ranks third in terms of worst-case scenarios. Preventing such access is costly, but so is replacing a server because of theft.

Images Physical Security Concerns

A balanced approach is the most sensible approach when addressing physical security, and this applies to transmission media as well. Keeping network switch rooms secure and cable runs secure seems obvious, but cases of using janitorial closets for this vital business purpose abound. One of the keys to mounting a successful attack on a network is information. Usernames, passwords, server locations—all of these can be obtained if someone has the ability to observe network traffic in a process called sniffing. A sniffer can record all the network traffic, and this data can be mined for accounts, passwords, and traffic content, all of which can be useful to an unauthorized user. One starting point for many intrusions is the insertion of an unauthorized sniffer into the network, with the fruits of its labors driving the remaining unauthorized activities. Many common scenarios exist when unauthorized entry to a network occurs, including these:

Images   Inserting a node and functionality that is not authorized on the network, such as a sniffer device or unauthorized wireless access point

Images   Modifying firewall security policies

Images   Modifying ACLs for firewalls, switches, or routers

Images   Modifying network devices to echo traffic to an external node

Network devices and transmission media become targets because they are dispersed throughout an organization, and physical security of many dispersed items can be difficult to manage. Although limiting physical access is difficult, it is essential. The least level of skill is still more than sufficient to accomplish unauthorized entry into a network if physical access to the network signals is allowed. This is one factor driving many organizations to use fiber-optics because these cables are much more difficult to tap. Although many tricks can be employed with switches and VLANs to increase security, it is still essential that you prevent unauthorized contact with the network equipment.

Physical Infrastructure Security

The best first effort is to secure the actual network equipment to prevent this type of intrusion. As you should remember from Chapter 8, physical access to network infrastructure opens up a myriad of issues, and most of them can be catastrophic with respect to security. Physically securing access to network components is one of the “must dos” of a comprehensive security effort.

Wireless networks make the intruder’s task even easier, as they take the network to the users, authorized or not. A technique called war-driving involves using a laptop and software to find wireless networks from outside the premises. A typical use of war-driving is to locate a wireless network with poor (or no) security and obtain free Internet access, but other uses can be more devastating. A simple solution is to place a firewall between the wireless access point and the rest of the network and authenticate users before allowing entry. Business users use VPN technology to secure their connection to the Internet and other resources, and home users can do the same thing to prevent neighbors from “sharing” their Internet connections. To ensure that unauthorized traffic does not enter your network through a wireless access point, you must either use a firewall with an authentication system or establish a VPN.

Images Cloud Computing

Cloud computing is a common term used to describe computer services provided over a network. These services include computing power, storage, applications, and other services delivered via the Internet Protocol. One of the characteristics of cloud computing is transparency to the end user, which improves the usability of this form of service provisioning. Cloud computing offers much to the user: improvements in performance, scalability, flexibility, security, and reliability, among other items. These improvements are a direct result of the specific attributes associated with how cloud services are implemented.

Security is a particular challenge when data and computation are handled by a remote party, as in cloud computing. The specific challenge is allowing data outside the enterprise while remaining in control over how it is used; the common answer is encryption. When data is properly encrypted before it leaves the enterprise, external storage can still be performed securely.
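A minimal sketch of this encrypt-before-upload pattern, assuming the third-party cryptography library, follows; the upload call is a hypothetical placeholder for whatever cloud storage API is in use.

    # Encrypt data before it leaves the enterprise; only ciphertext is stored
    # in the cloud, and the key stays in the enterprise key store.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # kept on premises
    f = Fernet(key)

    record = b"customer list - internal only"
    blob = f.encrypt(record)           # only this ciphertext leaves the premises
    # hypothetical_cloud_store.put("bucket/record1", blob)

    print(f.decrypt(blob) == record)   # True for whoever holds the key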

Cloud Types

Depending on the size and particular needs of an organization, there are four basic types of cloud: private, public, hybrid, and community.

Private

If your organization is highly sensitive to sharing resources, you might want to consider the use of a private cloud. Private clouds are essentially reserved resources used only for your organization—your own little cloud within the cloud. This service will be considerably more expensive, but it should also carry less exposure and should enable your organization to better define the security, processing, and handling of data that occurs within your cloud.

Public

The term public cloud refers to when the cloud service is rendered over a system that is open for public use. In most cases, there is little operational difference between public and private cloud architectures, but the security ramifications can be substantial. Although public cloud services will separate users with security restrictions, the depth and level of these restrictions, by definition, will be significantly less in a public cloud.

Hybrid

A hybrid cloud structure is one where elements are combined from private, public, and community cloud structures. When examining a hybrid structure, you need to remain cognizant that operationally these differing environments may not actually be joined, but rather used together. Sensitive information can be stored in the private cloud and issue-related information can be stored in the community cloud, all of which is accessed by an application. This makes the overall system a hybrid cloud system.

Community

A community cloud system is one where several organizations with a common interest share a cloud environment for the specific purposes of the shared endeavor. For example, local public entities and key local firms may share a community cloud dedicated to serving the interests of community initiatives. This can be an attractive cost-sharing mechanism for specific data-sharing initiatives.

Images

Be sure you understand the differences between the cloud computing service models Platform as a Service, Software as a Service, and Infrastructure as a Service.

Cloud Computing Service Models

Clouds can be created by many entities, both internal and external to an organization. Commercial cloud services are already available and offered by a variety of firms, as large as Google and Amazon and as small as local providers. Internal services can replicate the advantages of cloud computing while improving the utility of limited resources. The promise of cloud computing is improved utility and, as such, is marketed under the concepts of Software as a Service, Platform as a Service, and Infrastructure as a Service.

Software as a Service

Software as a Service (SaaS) is the offering of software to end users from within the cloud. Rather than software being installed on client machines, SaaS acts as software on demand, where the software runs from the cloud. This has several advantages, as updates are often seamless to end users and integration between components is enhanced.

Platform as a Service

Platform as a Service (PaaS) is a marketing term used to describe the offering of a computing platform in the cloud. Multiple sets of software, working together to provide services, such as database services, can be delivered via the cloud as a platform.

Infrastructure as a Service

Infrastructure as a Service (IaaS) is a term used to describe cloud-based systems that are delivered as a virtual platform for computing. Rather than firms building data centers, IaaS allows them to contract for utility computing as needed.

Images VDI/VDE

Virtual desktop infrastructure (VDI) and virtual desktop environment (VDE) are terms used to describe the hosting of a desktop environment on a central server. This arrangement has several advantages. From a user perspective, their “machine” and all of its data persist in the server environment, which means a user can move from machine to machine and have a single environment follow them around. And because the end-user devices are just simple doors back to the server instance of the user’s desktop, the computing requirements at the edge are considerably lower and can be met by older machines. Users can employ a wide range of devices, even mobile phones, to access their desktops and get their work done. Security can be a very large advantage of VDI/VDE: because all data, even when being processed, resides on servers inside the enterprise, there is nothing to compromise if a device is lost.

Images On-premises vs. Hosted vs. Cloud

Systems can exist in a wide array of places, from on-premises to hosted to in the cloud. On-premises is just that—the system resides within the local enterprise. Whether a VM, storage, or even a service, if the solution is locally hosted and maintained, it is referred to as “on-premises.” The advantage is one of total control and generally high connectivity. The disadvantage is that it requires local resources and is not as easy to scale. Hosted services refers to having the services housed somewhere else, commonly in a shared environment, where the cost is set based on the amount you use. This has cost advantages, especially when scale is included. After all, does it make sense to have all the local infrastructure, including personnel, if you have a small, informational web site? Of course not; you would have that hosted. Storage works the opposite way with scale: small-scale storage needs are easily met in-house, whereas large-scale storage needs are typically either hosted or in the cloud.

Images Security as a Service

Just as one can get Software as a Service or Infrastructure as a Service, one can contract with a security firm for Security as a Service, which is the outsourcing of security functions to a vendor that has advantages in scale, costs, or speed. Security is a complex, wide-ranging cornucopia of technical specialties all working together to provide appropriate risk reductions in today’s enterprise. This means there are technical people, management, specialized hardware and software, and fairly complex operations, both routine and in response to incidents. Any or all of this can be outsourced to a security vendor, and firms routinely examine vendors for solutions where the business economics make outsourcing attractive.

Images

Several types of items are delivered as a service—software, infrastructure, platforms, cloud access, and security—each with a specific deliverable and value proposition. Be sure to understand the differences and read the question carefully to determine which is the best solution—at times, the differentiating factor may be a single word in the question.

Different security vendors offer different specializations—from network security, to web application security, e-mail security, incident response services, and even infrastructure updates. All of these can be managed by a third party. Depending on architecture, needs, and scale, these third-party vendors can often offer a compelling economic advantage for part of a security solution.

Cloud Access Security Broker

Cloud access security brokers (CASBs) are integrated suites of tools or services offered as Security as a Service, or third-party managed security service providers (MSSPs), focused on cloud security. CASB vendors provide a range of security services designed to protect cloud infrastructure and data. CASBs act as security policy enforcement points between cloud service providers and their customers to enact enterprise security policies as the cloud-based resources are utilized.

Chapter 10 Review

images   Chapter Summary


After reading this chapter and completing the exercises, you should understand the following aspects of networking and secure infrastructures.

Construct networks using different types of network devices

Images   Understand the differences between basic network devices, such as hubs, bridges, switches, and routers.

Images   Understand the security implications of network devices and how to construct a secure network infrastructure.

Enhance security using security devices

Images   Understand the use of firewalls, next-generation firewalls, and intrusion detection systems.

Images   Understand the role of load balancers and proxy servers as part of a secure network solution.

Images   Understand the use of security appliances, such as web security gateways, data loss prevention, and unified threat management.

Understand virtualization concepts

Images   Type 1 hypervisors run directly on system hardware.

Images   Type 2 hypervisors run on top of a host operating system.

Enhance security using NAC/NAP methodologies

Images   The Cisco NAC protocol and the Microsoft NAP protocol provide security functionality when attaching devices to a network.

Images   NAC and NAP play a crucial role in the securing of infrastructure as devices enter and leave the network.

Images   NAC and NAP can be used together to take advantage of the strengths and investments in each technology to form a strong network admission methodology.

Identify the different types of media used to carry network signals

Images   Guided and unguided media can both carry network traffic.

Images   Wired technology, from coax cable through twisted-pair Ethernet, provides a cost-effective means of carrying network traffic.

Images   Fiber technology is used to carry higher bandwidth.

Images   Unguided media, including infrared and RF (including wireless and Bluetooth), provide short-range network connectivity.

Describe the different types of storage media used to store information

Images   There are a wide array of removable media types, from memory sticks to optical discs to portable drives.

Images   Data storage on removable media, because of increased physical access, creates significant security implications.

Use basic terminology associated with network functions related to information security

Images   Understanding and using the correct vocabulary for device names and relationships to networking are important as a security professional.

Images   Security appliances add terminology, including specific items for IDS and firewalls.

Describe the different types and uses of cloud computing

Images   Understand the types of clouds in use.

Images   Understand the use of Software as a Service, Infrastructure as a Service, and Platform as a Service.

images   Key Terms


basic packet filtering

bridge

cloud computing

coaxial cable

collision domain

concentrator

data loss prevention (DLP)

firewall

hypervisor

hub

Infrastructure as a Service (IaaS)

Internet content filters

load balancer

modem

network access control

Network Access Protection (NAP)

Network Admission Control (NAC)

network-attached storage (NAS)

network interface card (NIC)

network operations center (NOC)

next-generation firewall

Platform as a Service (PaaS)

private branch exchange (PBX)

proxy server

router

sandboxing

servers

shielded twisted-pair (STP)

Software as a Service (SaaS)

solid-state drive (SSD)

switch

unified threat management (UTM)

unshielded twisted-pair (UTP)

virtualization

web security gateway

wireless access point

workstation

images   Key Terms Quiz


Use terms from the Key Terms list to complete the sentences that follow. Don’t use the same term more than once. Not all terms will be used.

1.   A(n) _______________ routes packets based on IP addresses.

2.   To offer software to end users from the cloud is a form of _______________.

3.   To connect a computer to a network, you use a(n) _______________.

4.   A(n) _______________ or _______________ distributes traffic based on MAC addresses.

5.   To verify that a computer is properly configured to connect to a network, the network can use _______________.

6.   _______________ is a name for the typical computer a user uses on a network.

7.   A(n) _______________ repeats all data traffic across all connected ports.

8.   Cat 5 is an example of _______________ cable.

9.   Basic packet filtering occurs at the ____________.

10.   A(n) _______________ is an extension of the telephone service into a firm’s telecommunications network.

Images   Multiple-Choice Quiz


1.   Switches operate at which layer of the OSI model?

A.   Physical layer

B.   Network layer

C.   Data link layer

D.   Application layer

2.   UTP cables are terminated for Ethernet using what type of connector?

A.   A BNC plug

B.   An Ethernet connector

C.   A standard phone jack connector

D.   An RJ-45 connector

3.   Coaxial cable carries how many physical channels?

A.   Two

B.   Four

C.   One

D.   None of the above

4.   Network access control is associated with which of the following?

A.   NAP

B.   IPsec

C.   IPv6

D.   NAT

5.   The purpose of twisting the wires in twisted-pair circuits is to:

A.   Increase speed

B.   Increase bandwidth

C.   Reduce crosstalk

D.   Allow easier tracing

6.   Microsoft NAP permits:

A.   Limiting connections to a restricted subnet only

B.   Checking a client OS patch level before a network connection is permitted

C.   Denying a connection based on client policy settings

D.   All of the above

7.   SNMP is a protocol used for which of the following functions?

A.   Secure e-mail

B.   Secure encryption of network packets

C.   Remote access to user workstations

D.   Remote access to network infrastructure

8.   Firewalls can use which of the following in their operation?

A.   Stateful packet inspection

B.   Port blocking to deny specific services

C.   NAT to hide internal IP addresses

D.   All of the above

9.   SMTP is a protocol used for which of the following functions?

A.   E-mail

B.   Secure encryption of network packets

C.   Remote access to user workstations

D.   None of the above

10.   USB-based flash memory is characterized by:

A.   High cost

B.   Low capacity

C.   Slow access

D.   None of the above

Images   Essay Quiz


1.   Compare and contrast routers and switches, describing the advantages and disadvantages of each.

2.   Describe the common threats to the transmission media in a network, by type of transmission media.

Lab Projects

   Lab Project 10.1

Configure two PCs and a small home office–type router to communicate across the network with each other.

   Lab Project 10.2

Demonstrate network connectivity using Windows command-line tools.
