In the Networks and Communication domain, students will learn about the network structure, data transmission methods, transport formats, and the security measures used to maintain integrity, availability, authentication, and confidentiality of the information being transmitted. Concepts for both public and private communication networks will be discussed.
Topics
The security practitioner is expected to participate in the following areas related to network and telecommunications security:
There are many issues that the SSCP will need to address as part of a comprehensive approach to network security. Areas such as the OSI model, network topologies, ports, and protocols all play an important part in network security. As we begin our discussions in these areas, the SSCP should keep in mind the operational concept of defense in depth, which specifies that a secure design will incorporate multiple overlapping layers of protection mechanisms to ensure that the concerns regarding confidentiality, integrity, and availability are addressed.
Network communication is usually described in terms of layers. Several layering models exist; the most commonly used are:
One feature that is common to both models and highly relevant from a security perspective is encapsulation. This means that not only do the different layers operate independently from each other, but they are also isolated on a technical level. Short of technical failures, the contents of any lower or higher layer protocol are inaccessible from any particular layer. This function of the models allows the security architect to ensure that their designs can provide both confidentiality and integrity. It also allows the security practitioner to implement those designs and operate them effectively, knowing that data flowing up and down the model’s layers is being safeguarded.
The seven-layer Open Systems Interconnection (OSI) model was defined in 1984 and published as an international standard, ISO/IEC 7498-1.3 The last revision to this standard was in 1994. Although sometimes considered complex, it has provided a practical and widely accepted way to describe networking. In practice, some layers have proven less crucial to the concept (such as the presentation layer), others (such as the network layer) have required more specific structure, and applications that overlap or transgress layer boundaries exist. See Figure 6-1.4
Physical topologies are defined at this layer. Because the required signals depend on the transmitting media (e.g., required modem signals are not the same as ones for an Ethernet network interface card), the signals are generated at the physical layer. Not all hardware consists of layer 1 devices. Even though many types of hardware, such as cables, connectors, and modems, operate at the physical layer, some operate at different layers. Routers and switches, for example, operate at the network and data-link layers, respectively.
The data link layer prepares the packet that it receives from the network layer to be transmitted as frames on the network. This layer ensures that the information it exchanges with its peers is error free. If the data link layer detects an error in a frame, it will request that its peer resend that frame. The data link layer converts information from the higher layers into bits in the format expected by each networking technology, such as Ethernet, Token Ring, etc. Using hardware addresses, this layer transmits frames only to devices that are physically connected. As an analogy, consider the path between the end nodes on the network as a chain and each device in the path as a link. The data link layer is concerned with sending frames to the next link.
The Institute of Electrical and Electronics Engineers (IEEE) divides the data link layer into two sublayers:
It is important to clearly distinguish between the functions of the network and data link layers. The network layer moves information between two hosts that are not physically connected. On the other hand, the data link layer is concerned with moving data to the next physically connected device. Also, whereas the data link layer relies on hardware addressing, the network layer uses logical addressing that is created when hosts are configured.
Internet Protocol (IP) is part of the TCP/IP suite and is the most important network layer protocol. IP has two functions:
IP is a connectionless protocol that does not guarantee error-free delivery. Layer 3 devices, such as routers, read the destination layer 3 address (e.g., destination IP address) in received packets and use their routing table to determine the next device on the network (the next hop) to send the packet. If the destination address is not on a network that is directly connected to the router, it will send the packet to another router.
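The next-hop lookup described above can be sketched in a few lines of Python using the standard library's ipaddress module. The routing table entries and hop addresses below are invented for illustration; a real router also performs longest-prefix matching, which is what the max() over prefix length models here:

```python
import ipaddress

# A toy routing table: (destination network, next hop). A default route
# (0.0.0.0/0) catches anything not matched by a more specific entry.
ROUTES = [
    (ipaddress.ip_network("10.1.0.0/16"), "10.0.0.2"),
    (ipaddress.ip_network("10.1.5.0/24"), "10.0.0.3"),
    (ipaddress.ip_network("0.0.0.0/0"), "10.0.0.1"),   # default gateway
]

def next_hop(dst):
    """Return the next hop for dst using longest-prefix match,
    as a layer 3 router would."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    # The most specific (longest prefix) matching route wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.5.7"))   # matched by the more specific /24 route
print(next_hop("10.1.9.9"))   # falls back to the /16 route
print(next_hop("8.8.8.8"))    # no specific route: default gateway
```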
Routing tables are built either statically or dynamically. Static routing tables are configured manually and change only when updated. Dynamic routing tables are built automatically when routers periodically share information that reflects their view of the network, which changes as routers go on and offline. This allows the routers to route packets effectively as network conditions, such as traffic congestion, change. Some examples of other protocols that are traditionally considered to work at layer 3 are as follows:
For a listing of protocols associated with layer 3 of the OSI model, see the following:
The transport layer creates an end-to-end transport between peer hosts. User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) are important transport layer protocols in the TCP/IP suite. UDP does not ensure that transmissions are received without errors, and therefore it is classified as a connectionless, unreliable protocol. This does not mean that UDP is poorly designed. Rather, the application will perform the error checking instead of the protocol.
Connection-oriented reliable protocols, such as TCP, ensure integrity by providing error-free transmission. They divide information from multiple applications on the same host into segments to be transmitted on a network. Because it is not guaranteed that the peer transport layer receives segments in the order that they were sent, reliable protocols reassemble received segments into the correct order. When the peer layer receives a segment, it responds with an acknowledgment. If an acknowledgment is not received, the segment is retransmitted. Lastly, reliable protocols ensure that each host does not receive more data than it can process without loss of data.
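The reordering, cumulative acknowledgment, and in-order delivery described above can be sketched as follows. This is a simplified model, not real TCP: sequence numbers are byte offsets, segments are plain tuples, and retransmission and flow control are omitted:

```python
# Sketch: how a reliable transport reassembles out-of-order segments.
# Each segment carries a sequence number (its byte offset) and a payload;
# the receiver delivers bytes in order and acknowledges the next byte expected.

def reassemble(segments):
    """segments: list of (seq, payload) tuples, possibly out of order."""
    buffered = dict(segments)          # seq -> payload, held until deliverable
    data, expected = b"", 0
    while expected in buffered:
        payload = buffered.pop(expected)
        data += payload                # deliver in order
        expected += len(payload)       # a cumulative ack would carry 'expected'
    return data, expected

# Segments arrive out of order; the receiver still delivers them in order.
segs = [(5, b"world"), (0, b"hello"), (10, b"!")]
data, ack = reassemble(segs)
print(data, ack)   # b'helloworld!' 11
```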
TCP data transmissions, connection establishment, and connection termination maintain specific control parameters that govern the entire process. The control bits are listed as follows:
These control bits are used for many purposes; chief among them is the establishment of a guaranteed communication session via a process referred to as the TCP three-way handshake, as described below:
For a listing of protocols associated with layer 4 of the OSI model, see below:
This layer provides a logical, persistent connection between peer hosts. A session is analogous to a conversation that is necessary for applications to exchange information. The session layer is responsible for creating, maintaining, and tearing down the session. Three modes are offered:
For a listing of protocols associated with layer 5 of the OSI model, see below:
The applications that are communicating over a network may represent information differently, such as using incompatible character sets. This layer provides services to ensure that the peer applications use a common format to represent data. For example, if a presentation layer wants to ensure that Unicode-encoded data can be read by an application that understands the ASCII character set only, it could translate the data from Unicode to a standard format. The peer presentation layer could translate the data from the standard format into the ASCII character set.
In many widely used applications and protocols, no distinction is made between the presentation and application layers. For example, Hypertext Transfer Protocol (HTTP), generally regarded as an application layer protocol, has presentation layer aspects such as the ability to identify character encoding for proper conversion, which is then done in the application layer.
This layer is the application’s portal to network-based services, such as determining the identity and availability of remote applications. When an application or the operating system transmits or receives data over a network, it uses the services from this layer. Many well-known protocols, such as Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP), operate at this layer. It is important to remember that the application layer is not the application, especially when an application has the same name as a layer 7 protocol. For example, the FTP command on many operating systems initiates an application called FTP, which eventually uses the FTP protocol to transfer files between hosts. While some protocols are easily ascribed to a certain layer based on their form and function, others are very difficult to place precisely. An example of a protocol that falls into this category would be the Border Gateway Protocol.
Border Gateway Protocol (BGP) was created to replace the Exterior Gateway Protocol (EGP) to allow fully decentralized routing. This allowed the Internet to become a truly decentralized system. BGP performs inter-domain routing in Transmission Control Protocol/Internet Protocol (TCP/IP) networks. BGP is a protocol for exchanging routing information between gateway hosts (each with its own router) in a network of autonomous systems. BGP is often the protocol used between gateway hosts on the Internet. The routing table contains a list of known routers, the addresses they can reach, and a cost metric associated with the path to each router so that the best available route is chosen.
Hosts using BGP communicate using the Transmission Control Protocol (TCP) and send updated router table information only when one host has detected a change. Only the affected part of the routing table is sent. BGP-4, the latest version, lets administrators configure cost metrics based on policy statements.11
Many consider BGP an application that happens to affect the routing table; others consider it a routing protocol rather than an application. BGP creates and uses code attached to sockets. Does that mean that it should be considered an application? In the case of BGP, when viewed in a traffic sniffer, there is a layer 4 header between the IP header and the routing protocol header. Does that mean that we can say that BGP is an application that transports routing information at layer 4?
Perhaps a more appropriate way to classify a protocol is to look at the services it provides. BGP clearly provides services to the network layer, not the traditional transport services, but rather, BGP provides control information about how the network layer operates. This could allow us to move BGP down to the network layer.
This perspective is especially useful for management, control, and supervisory protocols that can be seen as applications of other protocols, and yet providing the necessary control, management, and supervisory information to the managed infrastructure that is on lower layers than the application layer. From this viewpoint, while BGP is truly just an application running over TCP, it is intimately tied into the operation of the network layer because it provides the necessary information about how the network layer should operate. That means that we could say that BGP is implemented as an application layer protocol, but with respect to its function, it is a network layer protocol.
As a security practitioner, you should understand how BGP works in real networks. Following are several links to simulators that can be used to model BGP and demonstrate how it works:
For a listing of protocols associated with layer 7 of the OSI model, see below:
The U.S. Department of Defense developed the TCP/IP model, which is very similar to the OSI model but with fewer layers, as shown in Figure 6-2.
The link layer provides physical communication and frame delivery within a single network. It corresponds to everything required to implement an Ethernet network. It is sometimes described as two layers, a physical layer and a link layer. In terms of the OSI model, it covers layers 1 and 2. The network layer includes everything that is required to move data between networks. It corresponds to the IP protocol, but also to the Internet Control Message Protocol (ICMP) and the Internet Group Management Protocol (IGMP). In terms of the OSI model, it corresponds to layer 3.
The transport layer includes everything required to move data between applications. It corresponds to TCP and UDP. In terms of the OSI model, it corresponds to layer 4. The application layer covers everything specific to a session or application; in other words, everything relating to the data payload. In terms of the OSI model, it corresponds to layers 5 through 7. Owing to its coarse structure, it is not well suited to describe application-level information exchange.
As with the OSI model, data that is transmitted on the network enters the top of the stack, and each of the layers, with the exception of the physical layer, encapsulates information for its peer at the beginning and sometimes the end of the message that it receives from the next highest layer. On the remote host, each layer removes the information that its peer encapsulated before passing the message to the next higher layer. Also, each layer processes messages in a modular fashion, without concern for how the other layers on the same host process the message.
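Encapsulation and its reversal on the remote host can be illustrated with a toy stack. The bracketed header strings below are purely illustrative, not real protocol formats:

```python
# Sketch of layered encapsulation: each layer prepends its own header to
# the payload it receives from above, and the peer strips them in reverse
# order, each layer looking only at its own header.

LAYERS = ["TCP", "IP", "ETH"]   # transport, network, link (applied top-down)

def encapsulate(payload):
    for layer in LAYERS:
        payload = f"[{layer}]".encode() + payload   # prepend this layer's header
    return payload

def decapsulate(frame):
    for layer in reversed(LAYERS):
        header = f"[{layer}]".encode()
        assert frame.startswith(header)   # each layer checks only its own header
        frame = frame[len(header):]       # strip it and pass the rest upward
    return frame

wire = encapsulate(b"GET /")
print(wire)                 # b'[ETH][IP][TCP]GET /'
print(decapsulate(wire))    # b'GET /'
```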
Internet Protocol (IP) is responsible for sending packets from the source to the destination hosts. Because it is an unreliable protocol, it does not guarantee that packets arrive error free or in the correct order. That task is left to protocols on higher layers. IP will subdivide packets into fragments when a packet is too large for a network.
Hosts are distinguished by the IP addresses of their network interfaces. The address is expressed as four octets separated by a dot (.), for example, 216.12.146.140. Each octet may have a value between 0 and 255. However, 0 and 255 are not used for hosts. The latter is used for broadcast addresses, and the former’s meaning depends on the context in which it is used. Each address is subdivided into two parts: the network number and the host. The network number, assigned by an external organization, such as the Internet Corporation for Assigned Names and Numbers (ICANN), represents the organization’s network. The host represents the network interface within the network.
Originally, the part of the address that represented the network number depended on the network’s class. As shown in Table 6-1, a Class A network used the leftmost octet as the network number, Class B used the leftmost two octets, etc.
Table 6-1: Network Classes
Class | Range of First Octet | Number of Octets for Network Number | Number of Hosts in Network |
A | 1–126 | 1 | 16,777,214 |
B | 128–191 | 2 | 65,534 |
C | 192–223 | 3 | 254 |
D | 224–239 | Multicast | |
E | 240–255 | Reserved | |
The part of the address that is not used as the network number is used to specify the host. For example, the address 216.12.146.140 represents a Class C network. Therefore, the network portion of the address is represented by the 216.12.146, and the unique host address within the network block is represented by 140.
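The classful rules from Table 6-1 are easy to express in code; the following is a small illustrative Python function, not part of any real networking library:

```python
def address_class(ip):
    """Classify an IPv4 address by its first octet, per Table 6-1."""
    first = int(ip.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if first == 127:
        return "loopback"          # reserved, see below
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D (multicast)"
    return "E (reserved)"

print(address_class("216.12.146.140"))   # C: network 216.12.146, host 140
print(address_class("10.1.2.3"))         # A
print(address_class("127.0.0.1"))        # loopback
```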
127 is reserved for a computer’s loopback address. Usually the address 127.0.0.1 is used. The loopback address is used to provide a mechanism for self-diagnosis and troubleshooting at the machine level. This mechanism allows a network administrator to treat a local machine as if it were a remote machine and ping the network interface to establish whether or not it is operational.
The explosion of Internet utilization in the 1990s caused a shortage of unallocated IPv4 addresses. To help remedy the problem, Classless Inter-Domain Routing (CIDR) was implemented. CIDR does not require that a new address be allocated based on the number of hosts in a network class. Instead, addresses are allocated in contiguous blocks from the pool of unused addresses.
To ease network administration, networks are typically subdivided into subnets. Because subnets cannot be distinguished with the addressing scheme discussed so far, a separate mechanism, the subnet mask, is used to define the part of the address that is used for the subnet. Bits in the subnet mask are 1 when the corresponding bits in the address are used for the subnet. The remaining bits in the mask are 0. For example, if the leftmost three octets (24 bits) are used to distinguish subnets, the subnet mask is 11111111 11111111 11111111 00000000. A string of 32 1s and 0s is very unwieldy, so the mask is usually converted to decimal notation: 255.255.255.0. Alternatively, the mask is expressed with a slash (/) followed by the number of 1s in the mask. The above mask would be written as /24.
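The conversions among the three notations (binary string, dotted decimal, and /prefix length) can be sketched in Python:

```python
# Converting between the common subnet mask notations.

def prefix_to_mask(prefix):
    """/24 -> '255.255.255.0'"""
    bits = ("1" * prefix).ljust(32, "0")          # e.g. 24 ones then 8 zeros
    octets = [str(int(bits[i:i + 8], 2)) for i in range(0, 32, 8)]
    return ".".join(octets)

def mask_to_prefix(mask):
    """'255.255.255.0' -> 24 (count of 1 bits)"""
    bits = "".join(f"{int(o):08b}" for o in mask.split("."))
    return bits.count("1")

print(prefix_to_mask(24))               # 255.255.255.0
print(mask_to_prefix("255.255.255.0"))  # 24
print(prefix_to_mask(26))               # 255.255.255.192
```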
After the explosion of Internet usage in the mid-1990s, IP began to experience serious growing pains. It was obvious that the phenomenal usage of the Internet was stretching the protocol to its limit. The most obvious problems were a shortage of unallocated IP addresses and serious shortcomings in security. IPv6 is a modernization of IPv4 that includes:
The Transmission Control Protocol (TCP) provides connection-oriented data management and reliable data transfer. TCP and UDP map data connections through the association of port numbers with services provided by the host. TCP and UDP port numbers are managed by the Internet Assigned Numbers Authority (IANA). A total of 65,536 (2^16) ports exist. These are broken into three ranges:
Attacks against TCP include sequence number attacks, session hijacking, and SYN floods. More information about attacks can be found later in this domain.
The User Datagram Protocol (UDP) provides a lightweight service for connectionless data transfer without error detection and correction. For UDP, the same considerations for port numbers as described for TCP in the section on Transmission Control Protocol apply. A number of protocols within the transport layer have been defined on top of UDP, thereby effectively splitting the transport layer into two. Protocols stacked between layers 4 and 5 include the Real-time Transport Protocol (RTP) and RTP Control Protocol (RTCP) as defined in RFC 3550, MBone, a multicasting protocol, Reliable UDP (RUDP), and the Stream Control Transmission Protocol (SCTP) as defined in RFC 2960. As a connectionless protocol, UDP services are easy prey for spoofing attacks.
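UDP's fire-and-forget datagram service is visible with nothing but standard library sockets. This sketch sends one datagram to itself over the loopback interface; note the complete absence of any connection setup:

```python
import socket

# Minimal demonstration of UDP's connectionless datagram service:
# no handshake, no delivery guarantee, just sendto/recvfrom.

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))               # port 0: let the OS pick a free port
addr = recv.getsockname()

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello via UDP", addr)       # fire and forget

data, peer = recv.recvfrom(1024)
print(data)                               # b'hello via UDP'
recv.close()
send.close()
```

Over loopback the datagram arrives reliably; on a real network, nothing in UDP itself would detect its loss.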
The Internet, a global network of independently managed, interconnected networks, has changed life on earth. People from anywhere on the globe can share information almost instantaneously using a variety of standardized tools such as Web technologies or email.
An intranet, on the other hand, is a network of interconnected internal networks within an organization, which allows information to be shared within the organization and sometimes with trusted partners and suppliers. For instance, during a project, staff in a global company can easily access and exchange documents, thereby working together almost as if they were in the same office. As with the Internet, the ease with which information can be shared comes with the responsibility to protect it from harm. Intranets will typically host a wide range of organizational data. For this reason, access to these resources is usually coupled with existing internal authentication services even though they are technically on an internal network, such as a directory service coupled with multi-factor authentication.
An extranet differs from a DMZ (demilitarized zone) in the following way: an extranet is made available only to authenticated connections that have been granted an access account to its resources, whereas a DMZ hosts publicly available resources, such as DNS servers and email servers, that must support unauthenticated connections from just about any source. Because companies often need to share large quantities of information, frequently in an automated fashion, one company will typically grant the other controlled access to an isolated segment of its network, the extranet, to exchange that information.
Granting an external organization access to a network comes with significant risk. Both companies have to be certain that the controls, both technical and nontechnical (e.g., operational and policy), effectively minimize the risk of unauthorized access to information. Where access must be granted to external organizations, additional controls such as deterministic routing can be applied upstream by service providers. This sort of safeguard is relatively simple to employ and has significant advantages because it reduces the ability of malicious entities to target an extranet for compromise leading to internal network penetration.
Companies that access extranets often treat the information within these networks and their servers as “trusted,” confidential, and possessing integrity (uncorrupted and valid). However, these companies do not have control of each other’s security profile. Who knows what kind of trouble a user can get into if he or she accesses supposedly trusted information through an extranet from an organization whose network has been compromised? To mitigate this potential risk, security architects and practitioners need to demand that certain security controls are in place before granting access to an extranet.
System and network administrators are busy people and hardly have the time to assign IP addresses to hosts and track which addresses are allocated. To relieve administrators from the burden of manually assigning addresses, many organizations use the Dynamic Host Configuration Protocol (DHCP) to automatically assign IP addresses to workstations (servers and network devices usually are assigned static addresses).
Dynamically assigning a host’s IP configuration is fairly simple. When a workstation boots, it broadcasts a DHCPDISCOVER request on the local LAN, which could be forwarded by routers. DHCP servers will respond with a DHCPOFFER packet, which contains a proposed configuration, including an IP address. The DHCP client selects a configuration from the received DHCPOFFER packets and replies with a DHCPREQUEST. The DHCP server replies with a DHCPACK (DHCP acknowledgment), and the workstation adapts the configuration. Receiving a DHCP-assigned IP address is referred to as receiving a lease.
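The DISCOVER/OFFER/REQUEST/ACK exchange can be modeled without any network I/O. The toy server below hands out addresses from a pool, as the text describes; the addresses are made up, and real-world details such as lease timers, relay agents, and broadcast transport are omitted:

```python
# Simulation of the four-message DHCP exchange (Discover, Offer, Request, Ack).

class DhcpServer:
    def __init__(self, pool):
        self.pool = list(pool)        # addresses available for lease
        self.leases = {}              # MAC address -> leased IP

    def handle(self, msg, mac, ip=None):
        if msg == "DHCPDISCOVER":
            return ("DHCPOFFER", self.pool[0])        # propose a configuration
        if msg == "DHCPREQUEST" and ip in self.pool:
            self.pool.remove(ip)                      # address now in use
            self.leases[mac] = ip
            return ("DHCPACK", ip)                    # lease confirmed
        return ("DHCPNAK", None)                      # request refused

server = DhcpServer(["192.168.1.100", "192.168.1.101"])
_, offered = server.handle("DHCPDISCOVER", "aa:bb:cc:dd:ee:ff")
reply, leased = server.handle("DHCPREQUEST", "aa:bb:cc:dd:ee:ff", offered)
print(reply, leased)    # DHCPACK 192.168.1.100
```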
A client does not request a new lease every time it boots. Part of the negotiation of IP addresses includes establishing a time interval for which the lease is valid, along with timers that tell the client when it must attempt to renew the lease (in DHCP these are the renewal and rebinding timers, commonly called T1 and T2). As long as the timers have not expired, the client is not required to ask for a new lease. Within the DHCP servers, administrators create address pools from which addresses are dynamically assigned when requested by a client. In addition, they can assign specific hosts static (i.e., permanent) addresses through the use of client reservations.
Because the DHCP server and client do not always authenticate with each other, neither host can be sure that the other is legitimate. For example, in a DHCP network, an attacker can plug his or her workstation into a network jack and receive an IP address, without having to obtain one by guessing or through social engineering. Also, a client cannot be certain that a DHCPOFFER packet is from a DHCP server instead of an intruder masquerading as a server.
To counteract these concerns, in June 2001 the IETF published RFC 3118, which specifies how to implement Authentication for DHCP Messages.12 This standard describes an enhancement that replaces the normal DHCP messages with authenticated ones. Clients and servers check the authentication information and reject messages that come from invalid sources. The technology involves the use of a new DHCP option type, the Authentication option, and operating changes to several of the leasing processes to use this option. Although these vulnerabilities are not trivial, the ease of administration of IP addresses usually makes the risk from the vulnerabilities acceptable, except in very high security environments. Ultimately, the security architect will need to weigh the risks associated with using DHCP without an authentication option and decide how best to proceed.
The Internet Control Message Protocol (ICMP) is used for the exchange of control messages between hosts and gateways and is used for diagnostic tools such as ping and traceroute. ICMP can be leveraged for malicious behavior, including man-in-the-middle and denial-of-service attacks.
Ping is a diagnostic program used to determine if a specified host is on the network and can be reached by the pinging host. It sends an ICMP echo packet to the target host and waits for the target to return an ICMP echo reply. Amazingly, an enormous number of operating systems would crash or become unstable upon receiving an ICMP echo greater than the legal packet limit of 65,535 bytes. Before the ping of death became famous, the source of the attack was difficult to find because many system administrators would ignore a seemingly harmless ping in their logs.
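An ICMP echo request has a simple structure: type 8, code 0, a checksum, an identifier, a sequence number, and a payload. The sketch below builds one by hand using the standard Internet checksum (RFC 1071); actually sending it would require a privileged raw socket, so here the packet is only constructed and verified:

```python
import struct

def inet_checksum(data):
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total & 0xFFFF) + (total >> 16)       # fold carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_request(ident, seq, payload=b"ping"):
    # Checksum is computed over the message with the checksum field zeroed.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = icmp_echo_request(ident=0x1234, seq=1)
# A correctly checksummed packet re-checksums to zero over the whole message.
print(inet_checksum(pkt))   # 0
```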
A router may send an ICMP redirect to a host to tell it to use a different, more effective default route. However, an attacker can send an ICMP redirect to a host telling it to use the attacker’s machine as a default route. The attacker will forward all of the redirected traffic to a router so that the victim will not know that his or her traffic has been intercepted. This is a good example of a man-in-the-middle attack. Some operating systems will crash if they receive a storm of ICMP redirects. The security practitioner should have several tools in his or her toolbox to be able to model and interact with attacks such as the ICMP redirect attack in order to better understand them. One such tool that will be very effective is called Scapy.
Scapy is a powerful interactive packet manipulation program. It is able to forge or decode packets of a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more. It can easily handle most classical tasks like scanning, tracerouting, probing, unit tests, attacks, or network discovery. (It can replace hping, 85% of nmap, arpspoof, arp-sk, arping, tcpdump, tethereal, p0f, etc.) It also performs very well at a lot of other specific tasks that most other tools cannot handle, like sending invalid frames, injecting your own 802.11 frames, and combining techniques (VLAN hopping+ARP cache poisoning, VOIP decoding on WEP encrypted channel, etc.). You can find Scapy here: http://www.secdev.org/projects/scapy/.
Ping scanning is a basic network mapping technique that helps narrow the scope of an attack. An attacker can use one of many tools, such as Very Simple Network Scanner for Windows-based platforms and NMAP for Linux- and Windows-based platforms, to ping all of the addresses in a range.15 If a host replies to a ping, then the attacker knows that a host exists at that address.
Traceroute is a diagnostic tool that displays the path a packet traverses between a source and destination host. Traceroute can be used maliciously to map a victim network and learn about its routing. In addition, there are tools, such as Firewalk, that use techniques similar to those of traceroute to enumerate a firewall rule set.16 The Firewalk tool stopped being actively developed and maintained as of version 5.0 in 2003. The functionality of the Firewalk tool has been subsumed into NMAP as part of the rule set that can be configured for use.17
What the Firewalk host rule tries to do is discover firewall rules using an IP TTL expiration technique known as firewalking. To determine a rule on a given gateway, the scanner sends a probe to a host (the "metric") located behind the gateway, with an IP TTL set one higher than the number of hops to the gateway. If the gateway forwards the probe, then we can expect an ICMP_TIME_EXCEEDED reply from the router one hop beyond the gateway, or from the metric itself if it is directly connected to the gateway. Otherwise, the probe will time out.
Remote procedure calls (RPCs) allow a client to execute procedures on a different host on the network by sending it a set of instructions. Generically, several (mutually incompatible) services in this category exist, such as the Distributed Computing Environment RPC (DCE RPC) and Sun's Open Network Computing RPC (ONC RPC, also referred to as SunRPC or simply RPC). It is important to note that RPC does not in fact provide any services on its own; instead, it provides a brokering service by supplying (basic) authentication and a way to address the actual service. Common Object Request Broker Architecture (CORBA) and Microsoft Distributed Component Object Model (DCOM) can be viewed as RPC-type protocols. Security problems with RPC include its weak authentication mechanism, which can be leveraged for privilege escalation by an attacker.
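A minimal RPC round trip can be demonstrated with Python's standard library XML-RPC, one RPC flavor among the many mentioned above. Note that, as the text warns, this sketch performs no authentication at all:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# The client invokes add() as if it were a local function, while the call
# actually executes in the server process, marshaled over HTTP.

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]                      # OS-assigned port
threading.Thread(target=server.handle_request, daemon=True).start()

client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)         # looks local, runs remotely
print(result)                     # 5
server.server_close()
```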
There are many network topologies and relationships. It is important for the SSCP to understand the merits of each of them. The following sections explore the major topologies and related important information.
A bus topology is a LAN with a central cable (bus) to which all nodes (devices) connect. All nodes transmit directly on the central bus. Each node listens to all of the traffic on the bus and processes only the traffic that is destined for it. This topology relies on the data-link layer to determine when a node can transmit a frame on the bus without colliding with another frame on the bus. A LAN with a bus topology is shown in Figure 6-3.
Advantages of buses include:
Disadvantages of buses include:
A tree topology is similar to a bus. Instead of all of the nodes connecting to a central bus, the devices connect to a branching cable. Like a bus, every node receives all of the transmitted traffic and processes only the traffic that is destined for it. Furthermore, the data-link layer must transmit a frame only when there is not a frame on the wire. A network with a tree topology is shown in Figure 6-4.
Advantages of a tree include:
Disadvantages of a tree include:
A ring is a closed-loop topology. Data is transmitted in one direction only, based on the direction in which the ring was initialized, either clockwise or counterclockwise. Each device receives data from its upstream neighbor only and transmits data to its downstream neighbor only. Typically, rings use coaxial cables or fiber optics. A Token Ring network is shown in Figure 6-5.
Advantages of rings include:
Disadvantages of rings include:
In a mesh network, all nodes are connected to every other node on the network. A full mesh network is usually too expensive because it requires many connections. As an alternative, a partial mesh can be employed in which only selected nodes (typically the most critical) are connected in a mesh, and the remaining nodes are connected to a few devices. As an example, core switches, firewalls, and routers and their hot standbys are often all connected to ensure as much availability as possible. A mesh network is shown in Figure 6-6.
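The cost of a full mesh grows quadratically with the number of nodes, which is why partial meshes are preferred in practice; a quick calculation:

```python
# A full mesh of n nodes needs n*(n-1)/2 links: every node pairs with
# every other node exactly once.

def full_mesh_links(n):
    return n * (n - 1) // 2

for n in (4, 10, 50):
    print(n, "nodes ->", full_mesh_links(n), "links")
# 4 nodes -> 6 links; 10 -> 45; 50 -> 1225
```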
Advantages of a mesh include:
Disadvantages of a mesh include:
All nodes in a star network are connected to a central device, such as a hub, switch, or router. Modern LANs usually employ a star topology. A star network is shown in Figure 6-7.
Advantages of a star include:
Disadvantages of a star include:
There are many points that the security architect and practitioner must consider about transmitting information from sender to receiver. For example, will the information be expressed as an analog or digital wave? How many recipients will there be? If the transmission media will be shared with others, how can one ensure that the signals will not interfere with each other?
Most communication, especially that directly initiated by a user, is from one host to another. For example, when a person uses a browser to send a request to a web server, he or she sends a packet to the web server. A transmission with one receiving host is called a unicast transmission.
A host can send a broadcast to everyone on its network or sub-network. Depending on the network topology, the broadcast could have anywhere from one to tens of thousands of recipients. Like a person standing on a soapbox, this is a noisy method of communication. Typically, only one or two destination hosts are interested in the broadcast; the other recipients waste resources to process the transmission. However, there are productive uses for broadcasts. Consider a router that knows a device’s IP address but must determine the device’s MAC address. The router will broadcast an Address Resolution Protocol (ARP) request asking for the device’s MAC address.
Notice how one broadcast could result in hundreds or even thousands of packets on the network. Intruders often leverage this fact in denial-of-service attacks.
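The scope of a broadcast is bounded by the subnet's broadcast address, which is derived from the network prefix. A quick sketch using Python's standard library (the 192.0.2.0/24 network is a documentation example, not an address from the text):

```python
import ipaddress

# every host on this subnet receives frames sent to the broadcast address
net = ipaddress.ip_network("192.0.2.0/24")
print(net.broadcast_address)   # 192.0.2.255
print(net.num_addresses - 2)   # 254 usable host addresses
```

This is why broadcast storms scale with subnet size: a single datagram to the broadcast address is processed by every one of those hosts.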
Public and private networks are used more often than ever for streaming transmissions, such as movies, videoconferences, and music. Given the intense bandwidth needed to transmit these streams, and that the sender and recipients are not necessarily on the same network, how does one transmit the stream to only the interested hosts? The sender could send a copy of the stream via unicast to each receiver. Unless there is a very small audience, unicast delivery is not practical because multiple simultaneous copies of a large stream could congest the network. Delivery with broadcasts is another possibility, but every host would receive the transmission, even those not interested in the stream.
Multicasting was designed to deliver a stream to only interested hosts. Radio broadcasting is a typical analogy for multicasting. To select a specific radio show, you tune a radio to the broadcasting station. Likewise, to receive a desired multicast, you join the corresponding multicast group.
Multicast agents are used to route multicast traffic over networks and administer multicast groups. Each network and sub-network that supports multicasting must have at least one multicast agent. A host uses Internet Group Management Protocol (IGMP) to tell a local multicast agent that it wants to join a specific multicast group. Multicast agents also route multicasts to local hosts that are members of the multicast’s group and relay multicasts to neighboring agents.
When a host wants to leave a multicast group, it sends an IGMP message to a local multicast agent. Multicasts do not use reliable sessions. Therefore, the multicasts are transmitted as best effort, with no guarantee that datagrams are received. As an example, consider a server multicasting a videoconference to desktops that are members of the same multicast group as the server. The server transmits to a local multicast agent. Next, the multicast agent relays the stream to other agents. All of the multicast agents transmit the stream to local hosts that are members of the same multicast group as the server.
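The join step can be sketched with Python's socket API; the group address and port below are hypothetical examples from the administratively scoped range. Setting `IP_ADD_MEMBERSHIP` is what causes the host's IP stack to emit the IGMP membership report described above:

```python
import socket
import struct

def membership_request(group: str, interface: str = "0.0.0.0") -> bytes:
    # struct ip_mreq: 4-byte group address followed by 4-byte interface address
    return struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton(interface))

def join_multicast_group(group: str, port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # the kernel sends an IGMP membership report for this group
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    membership_request(group))
    return sock

# receiver = join_multicast_group("239.1.2.3", 5000)  # hypothetical group/port
```

Dropping the membership (the IGMP leave message described above) is the symmetric `IP_DROP_MEMBERSHIP` option on the same socket.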
Circuit-switched networks establish a dedicated circuit between endpoints. These circuits consist of dedicated switch connections. Neither endpoint starts communicating until the circuit is completely established. The endpoints have exclusive use of the circuit and its bandwidth. Carriers base the cost of using a circuit-switched network on the duration of the connection, which makes this type of network only cost-effective for a steady communication stream between the endpoints. Examples of circuit-switched networks are the plain old telephone service (POTS), Integrated Services Digital Network (ISDN), and Point-to-Point Protocol (PPP).
Packet-switched networks do not use a dedicated connection between endpoints. Instead, data is divided into packets and transmitted on a shared network. Each packet contains meta-information so that it can be independently routed on the network. Networking devices will attempt to find the best path for each packet to its destination. Because network conditions could change while the partners are communicating, packets could take different paths as they traverse the network and arrive in any order. It is the responsibility of the destination endpoint to ensure that the received packets are in the correct order before sending them up the stack.
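The destination's reordering duty can be sketched in Python; the sequence numbers here are illustrative, not any particular protocol's format:

```python
def reassemble(packets):
    """Reorder independently routed packets by sequence number
    before handing the payload up the stack."""
    return b"".join(data for _, data in sorted(packets))

# packets arrive out of order after taking different paths
arrived = [(2, b"lo, "), (1, b"hel"), (3, b"world")]
print(reassemble(arrived))  # b'hello, world'
```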
Virtual circuits provide a connection between endpoints over high-bandwidth, multiuser cable or fiber that behaves as if the circuit were a dedicated physical circuit. There are two types of virtual circuits, based on when the routes in the circuit are established. In a permanent virtual circuit, the carrier configures the circuit’s routes when the circuit is purchased. Unless the carrier changes the routes to tune the network, responds to an outage, etc., the routes do not change. On the other hand, the routes of a switched virtual circuit are configured dynamically by the routers each time the circuit is used.
As the name implies, Carrier Sense Multiple Access (CSMA) is an access protocol that uses the absence/presence of a signal on the medium that it wants to transmit on as permission to speak. Only one device may transmit at a time; otherwise, the transmitted frames will be unreadable. Because there is not an inherent mechanism that determines which device may transmit, all of the devices must compete for available bandwidth. For this reason, CSMA is referred to as a contention-based protocol. Also, because it is impossible to predict when a device may transmit, CSMA is also nondeterministic.
There are two variations of CSMA based on how collisions are handled. LANs using Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) require devices to announce their intention to transmit by broadcasting a jamming signal. When devices detect the jamming signal, they know not to transmit; otherwise, there will be a collision. After sending the jamming signal, the device waits to ensure that all devices have received that signal, and then it broadcasts the frames on the media. CSMA/CA is used in the IEEE 802.11 wireless standard.
Devices on a LAN using Carrier Sense Multiple Access with Collision Detection (CSMA/CD) listen for a carrier before transmitting data. If another transmission is not detected, the data will be transmitted. It is possible that a station will transmit before another station’s transmission had enough time to propagate. If this happens, two frames will be transmitted simultaneously, and a collision will occur. Instead of all stations simply retransmitting their data, which will likely cause more collisions, each station will wait a randomly generated interval before retransmitting. CSMA/CD is part of the IEEE 802.3 standard.18
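The randomly generated retransmission interval is classically chosen by truncated binary exponential backoff; a sketch (the 51.2 µs slot time is the classic 10 Mbps Ethernet value):

```python
import random

SLOT_TIME = 51.2e-6  # seconds; classic 10 Mbps Ethernet slot time

def backoff_delay(attempt: int) -> float:
    """After the nth collision, wait k slot times, with k drawn
    uniformly from [0, 2**min(attempt, 10) - 1]."""
    k = random.randint(0, 2 ** min(attempt, 10) - 1)
    return k * SLOT_TIME
```

Because each station draws its delay independently, the chance that two colliding stations pick the same slot shrinks as the window doubles after every collision.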
A network that employs polling avoids contention by allowing a device (a slave) to transmit on the network only when it is asked to by a master device. Polling is used mostly in mainframe protocols, such as Synchronous Data Link Control (SDLC). The point coordination function, an optional function of the IEEE 802.11 standard, uses polling as well.
Token passing takes a more orderly approach to media access. With this access method, only one device may transmit on the LAN at a time, thus avoiding retransmissions.
A special frame, known as a token, circulates through the ring. When a device wishes to transmit on the network, it must possess the token. The device replaces the token with a frame containing the message to be transmitted and sends the frame to its neighbor. When each device receives the frame, it relays it to its neighbor if it is not the recipient. The process continues until the recipient possesses the frame. That device will copy the message, modify the frame to signify that the message was received, and transmit the frame on the network.
When the modified frame makes a trip back to the sending device, the sending device knows that the message was received. Token passing is used in Token Ring and FDDI networks. An example of a LAN using token passing can be seen in Figure 6-8.
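The circulation just described can be sketched as a small simulation; the station names and frame structure are hypothetical:

```python
def circulate_frame(stations, sender, receiver, message):
    """Simulate one frame's trip around the ring: each station relays
    it downstream; the receiver copies the message and marks the frame
    read; the sender sees the marked frame when it returns."""
    start = stations.index(sender)
    # downstream order, ending back at the sender
    ring = stations[start + 1:] + stations[:start + 1]
    frame = {"message": message, "read": False}
    copied = None
    for station in ring:
        if station == receiver:
            copied = frame["message"]
            frame["read"] = True
    return copied, frame["read"]

print(circulate_frame(["A", "B", "C", "D"], "A", "C", "hi"))  # ('hi', True)
```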
Ethernet, which is defined in IEEE 802.3, played a major role in the rapid proliferation of LANs in the 1980s. The architecture was flexible and relatively inexpensive, and it was easy to add and remove devices from the LAN. Even today, for the same reasons, Ethernet is the most popular LAN architecture. The physical topologies that are supported by Ethernet are bus, star, and point to point, but the logical topology is the bus.
With the exception of full-duplex Ethernet (which does not have the issues of collisions), the architecture uses CSMA/CD. This protocol allows devices to transmit data with a minimum of overhead (compared to Token Ring), resulting in an efficient use of bandwidth. However, because devices must retransmit when more than one device attempts to send data on the medium, too many retransmissions due to collisions can cause serious throughput degradation.
The Ethernet standard supports coaxial cable, unshielded twisted pair, and fiber optics as transmission media.
Ethernet was originally rated at 10 Mbps, but like 10-megabyte disk drives, users quickly figured out how to use and exceed its capacity and needed faster LANs. To meet the growing demand for more bandwidth, 100 Base-TX (100 Mbps over twisted pair) and 100 Base-FX (100 Mbps over multimode fiber optics) were defined. When the demand grew for even more bandwidth over unshielded twisted pair, 1000 Base-T was defined, and 1000 Base-SX and 1000 Base-LX were defined for fiber optics. These standards support 1,000 Mbps.
Originally designed by IBM, Token Ring was adapted with some modification by the IEEE as IEEE 802.5. Despite the architecture’s name, Token Ring uses a physical star topology. The logical topology, however, is a ring. Each device receives data from its upstream neighbor and transmits to its downstream neighbor. Token Ring uses token passing to mediate which device may transmit. As mentioned in the section on token passing, a special frame, called a token, is passed on the LAN. To transmit, a device must possess the token.
To transmit on the LAN, the device appends data to the token and sends it to its next downstream neighbor. Devices relay frames whenever they are not the intended recipient. When the destination device receives the frame, it copies the data, marks the frame as read, and sends it to its downstream neighbor. When the frame returns to the source device, the source confirms that it has been read and removes the frame from the ring. Token Ring is now considered a “legacy” technology that is rarely seen, and on those rare occasions, it is only because there has been no reason for an organization to upgrade away from it. Token Ring has almost entirely been replaced with Ethernet technology.
Fiber Distributed Data Interface (FDDI) is a token-passing architecture that uses two rings. Because FDDI employs fiber optics, it was designed to be a 100-Mbps network backbone. Only one ring (the primary) is used; the other (secondary) serves as a backup. Traffic in the two rings flows in opposite directions; hence, the rings are referred to as counter-rotating. FDDI is also considered a legacy technology and has been supplanted by more modern transport technologies, initially Asynchronous Transfer Mode (ATM) and more recently Multiprotocol Label Switching (MPLS).
Multiprotocol Label Switching (MPLS) has attained a significant amount of popularity at the core of the carrier networks as of late because it manages to couple the determinism, speed, and QoS controls of established switched technologies like ATM and Frame Relay with the flexibility and robustness of the Internet Protocol world. (MPLS is developed and propagated through the Internet Engineering Task Force [IETF].) Additionally, the once faster and higher bandwidth ATM switches are being outperformed by Internet backbone routers. Equally important, MPLS offers simpler mechanisms for packet-oriented traffic engineering and multi-service functionality with the added benefit of greater scalability.
MPLS is often referred to as IP VPN because of the ability to couple highly deterministic routing with IP services. In effect, this creates a VPN-type service that makes it logically impossible for data from one network to be mixed or routed over to another network without compromising the MPLS routing device itself. MPLS does not include encryption services; therefore, any MPLS service called “IP VPN” does not in fact contain any cryptographic services. The traffic on these links would be visible to the service providers. The following guidelines should be considered by the network and security architects during the negotiation of MPLS bandwidth and associated service level agreements (SLAs) to ensure that services live up to the assurance requirements for the assets relying upon the network:
Local Area Networks (LANs) service a relatively small area, such as a home, office building, or office campus. In general, LANs service the computing needs of their local users. LANs consist of most modern computing devices, such as workstations, servers, and peripherals connected in a star topology or internetworked stars. Ethernet is the most popular LAN architecture because it is inexpensive and very flexible. Most LANs have connectivity to other networks, such as dial-up or dedicated lines to the Internet, access to other LANs via WANs, and so on.
When the SSCP considers the protocols and ports to be used to drive secure communication in the network, they will need to ensure that they are making wise choices. Having an understanding of name resolution through the use of DNS and directory services through the use of LDAP are two examples of areas that the SSCP will want to ensure they focus on.
The Domain Name System (DNS) is a hierarchical distributed naming system for computers, services, or any resource connected to the Internet or a private network.21 DNS associates various pieces of information with domain names assigned to each of the participating entities. It translates domain names to the numerical IP addresses needed for the purpose of locating computer services and devices worldwide. By providing a worldwide, distributed keyword-based redirection service, the Domain Name System is an essential component of the functionality of the Internet. The security practitioner can think of the Domain Name System as the phone book for the Internet, allowing people all over the world to find resources by translating human-friendly computer hostnames into IP addresses. For example, the domain name www.isc2.org translates to the addresses 209.188.91.140, 0:0:0:0:0:ffff:d1bc:5b8c (IPv4-mapped IPv6 address), and 2002:d1bc:5b8c:0:0:0:0:0 (6to4 address). DNS can be quickly updated, allowing a service’s location on the network to change without affecting the end users, who continue to use the same host name. Users take advantage of this when they use meaningful Uniform Resource Locators (URLs) and email addresses without having to know how the computer actually locates the services.
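The relationship between an IPv4 address and its IPv4-mapped (`::ffff:…`) and 6to4 (`2002:…`) IPv6 forms can be verified with Python's `ipaddress` module, using the example address for www.isc2.org:

```python
import ipaddress

v4 = ipaddress.IPv4Address("209.188.91.140")

# IPv4-mapped IPv6 address: ::ffff:a.b.c.d
mapped = ipaddress.IPv6Address("::ffff:209.188.91.140")
print(mapped.ipv4_mapped == v4)      # True

# 6to4 address: the IPv4 address embedded right after the 2002: prefix
six_to_four = ipaddress.IPv6Address("2002:d1bc:5b8c::")
print(six_to_four.sixtofour == v4)   # True
```

The hex groups `d1bc:5b8c` are simply the four octets 209.188.91.140 written two bytes at a time.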
DNS is a globally distributed, scalable, hierarchical, and dynamic database that provides a mapping between hostnames, IP addresses (both IPv4 and IPv6), text records, mail exchange information (MX records), name server information (NS records), and security key information defined in Resource Records (RRs). The information defined in RRs is grouped into zones and maintained locally on a DNS server so it can be retrieved globally through the distributed DNS architecture. DNS can use either the User Datagram Protocol (UDP) or Transmission Control Protocol (TCP), with a destination port of 53. When DNS uses UDP as the transport, the application itself must handle retransmission and sequencing, because UDP provides neither.
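The wire format behind a simple query can be sketched by hand; the transaction ID and hostname are arbitrary examples, and qtype 1 requests an A record:

```python
import struct

def build_dns_query(hostname: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    # 12-byte header: ID, flags (RD bit set), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split(".")) + b"\x00"
    # QTYPE (1 = A) and QCLASS (1 = IN)
    return header + qname + struct.pack(">HH", qtype, 1)

query = build_dns_query("www.isc2.org")
# this datagram would be sent over UDP to the resolver's port 53
```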
DNS is composed of a hierarchical domain name space that contains a tree-like data structure of linked domain names (nodes). Domain name space uses RRs to store information about the domain. The tree-like data structure for the domain name space starts at the root zone “.”, which is the top most level of the DNS hierarchy. Although it is not typically displayed in user applications, the DNS root is represented as a trailing dot in a fully qualified domain name (FQDN). For example, the right-most dot in www.isc2.org. represents the root zone. From the root zone, the DNS hierarchy is then split into sub-domain (branches) zones.
Each domain name is composed of one or more labels. Labels are separated with “.” and may contain a maximum of 63 characters. A FQDN may contain a maximum of 255 characters, including the “.”. Labels are constructed from right to left, where the label at the far right is the top level domain (TLD) for the domain name. Figure 6-9 shows how to identify the TLD for a domain name.
org is the TLD for www.isc2.org as it is the label furthest to the right.
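The label and length rules above are easy to encode; a sketch:

```python
def is_valid_fqdn(name: str) -> bool:
    """Check the limits: each label at most 63 characters, whole
    name at most 255 characters including the dots."""
    if len(name) > 255:
        return False
    labels = name.rstrip(".").split(".")
    return all(0 < len(label) <= 63 for label in labels)

def top_level_domain(name: str) -> str:
    # labels read right to left; the right-most label is the TLD
    return name.rstrip(".").split(".")[-1]

print(top_level_domain("www.isc2.org"))    # org
print(is_valid_fqdn("www.isc2.org"))       # True
print(is_valid_fqdn(("a" * 64) + ".org"))  # False
```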
DNS’s central element is a set of hierarchical name (domain) trees, starting from a so-called top-level domain (TLD). A number of so-called root servers manage the authoritative list of TLD servers. To resolve any domain name, each Domain Name System in the world must hold a list of these root servers.
To understand DNS, security practitioners should be familiar with the following terms:
DNS primarily translates hostnames to IP addresses or IP addresses to hostnames. This translation process is accomplished by a DNS resolver; this could be a client application such as a web browser or an email client, or a DNS application such as BIND, sending a DNS query to a DNS server requesting the information defined in a RR. Some examples of the DNS resolution process are below:
To understand DNS security, security practitioners should be familiar with the following attack types:
ns1.syrianelectronicarmy.com and ns2.syrianarmyelectronicarmy.com.25 Such an attack allows hackers to redirect email and other services provided to clients. The changes created by this attack are globally cached on recursive DNS servers for a full day. Unless they are purged, it can take a full day or longer for the effects to be reversed. Table 6-2 provides a DNS quick reference of ports and definitions.
Table 6-2: DNS Quick Reference
Ports | 53/TCP, 53/UDP
Definition | RFC 882, RFC 1034, RFC 1035
Various extensions to DNS have been proposed, to enhance its functionality and security, for instance, by introducing authentication through the use of DNSSEC, multicasting, or service discovery.26
Lightweight Directory Access Protocol (LDAP) is a client/server-based directory query protocol loosely based upon X.500, commonly used for managing user information. As opposed to DNS, for instance, LDAP is a front end and not used to manage or synchronize data per se. Back ends to LDAP can be directory services, such as NIS (see the section “Network Information Service (NIS), NIS +”), Microsoft’s Active Directory Service, Sun’s iPlanet Directory Server (renamed to Sun Java System Directory Server), and Novell’s eDirectory. LDAP provides only weak authentication based on host name resolution. It would therefore be easy to subvert LDAP security by breaking DNS (see the section “Domain Name System (DNS)”).
Table 6-3 provides an LDAP quick reference of ports and definitions.
Table 6-3: LDAP Quick Reference
Ports | 389/TCP, 389/UDP
Definition | RFC 1777
LDAP communication is transferred in cleartext and therefore is easily intercepted. One way for the security architect to address the issues of weak authentication and cleartext communication would be through the deployment of LDAP over SSL, providing authentication, integrity, and confidentiality.
The Network Basic Input Output System (NetBIOS) application programming interface (API) was developed in 1983 by IBM. NetBIOS was later ported to TCP/IP (NetBIOS over TCP/IP, also known as NetBT). Under TCP/IP, the NetBIOS name service runs on port 137 (UDP and TCP), the datagram service on port 138 (UDP), and the session service on port 139 (TCP). In addition, it uses port 135 for remote procedure calls (see the section “Remote Procedure Calls”).
Table 6-4 provides a NetBIOS quick reference of ports and definitions.
Table 6-4: NetBIOS Quick Reference
Ports | 135/TCP, 135/UDP, 137/UDP, 137/TCP, 138/UDP, 139/TCP
Definition | RFC 1001, RFC 1002
Network Information Service (NIS) and NIS + are directory services developed by Sun Microsystems and are mostly used in UNIX environments. They are commonly used for managing user credentials across a group of machines, for instance, a UNIX workstation cluster or client/server environment, but they can be used for other types of directories as well.
NIS uses a flat namespace in so-called domains. It is based on RPC and manages all entities on a server (NIS server). NIS servers can be set up redundantly through the use of slave servers. NIS is known for a number of security weaknesses. The fact that NIS does not authenticate individual RPC requests can be used to spoof responses to NIS requests from a client. This would, for instance, enable an attacker to inject fake credentials and thereby obtain or escalate privileges on the target machine. Retrieval of directory information is possible if the name of a NIS domain has become known or is guessable, as any of the clients can associate themselves with a NIS domain. A number of guides have been published on how to secure NIS servers. The basic steps that the security architect and practitioner would need to take are the following: Secure the platform a NIS server is running on, isolate the NIS server from traffic outside of a LAN, and configure it so the probability for disclosure of authentication credentials, especially system privileged ones, is limited.
NIS+ uses a hierarchical namespace. It is based on Secure RPC (see the section “Remote Procedure Calls”). Authentication and authorization concepts in NIS+ are more mature; they require authentication for each access of a directory object. However, NIS+ authentication in itself will only be as strong as authentication to one of the clients in a NIS+ environment, as NIS+ is built on a trust relationship between different hosts. The most relevant attacks against a correctly configured NIS+ network come from attacks against its cryptographic security. NIS+ can be run at different security levels; however, most levels available are irrelevant for an operational network.28
Common Internet File System (CIFS)/Server Message Block (SMB) is a file-sharing protocol prevalent on Windows systems. A UNIX/Linux implementation exists in the free Samba project. SMB was originally designed to run on top of the NetBIOS protocol (see the section “Network Basic Input Output System (NetBIOS)”); it can, however, be run directly over TCP/IP.
Table 6-5 provides a CIFS/SMB quick reference of ports and definitions.
Table 6-5: CIFS/SMB Quick Reference
Ports | 445/TCP (see also “Network Basic Input Output System (NetBIOS)”)
Definition | Proprietary
CIFS is capable of supporting user-level and tree/object-level (share-level) security. Authentication can be performed via challenge/response authentication as well as by transmission of credentials in cleartext. This second provision has been added largely for backward compatibility in legacy Windows environments.
The main attacks against CIFS are based upon obtaining credentials, be it by sniffing for cleartext authentication or by cryptographic attacks.
Network File System (NFS) is a client/server file-sharing system common to the UNIX platform. It was originally developed by Sun Microsystems, but implementations exist on all common UNIX platforms, including Linux, as well as Microsoft Windows. NFS has been revised several times, including updates to NFS Versions 2 and 3. NFS version 2 was based on UDP, and version 3 introduced TCP support. Both are implemented on top of RPC (see the section “Remote Procedure Calls”). NFS versions 2 and 3 are stateless protocols, mainly due to performance considerations. As a consequence, the server must manage file locking separately.
Table 6-6 provides an NFS quick reference of ports and definitions.
Table 6-6: NFS Quick Reference
Ports | See the section “Remote Procedure Calls”
Definition | RFC 1094, RFC 1813, RFC 3010
Secure NFS (SNFS) offers secure authentication and encryption using Data Encryption Standard (DES) encryption. In contrast to standard NFS, secure NFS (or rather secure RPC) authenticates each RPC request. This increases latency for each request and introduces a modest performance cost, paid mainly in computing capacity. Secure NFS uses DES-encrypted timestamps as authentication tokens. If server and client do not have access to the same time server, this can lead to short-term interruptions until server and client have resynchronized.
NFS version 4 is a stateful protocol that uses TCP port 2049. UDP support (and dependency) has been discontinued. NFS version 4 supports Kerberos-based authentication and encryption through RPCSEC_GSS and no longer depends on auxiliary protocols, such as the portmapper and mount protocols, with dynamically assigned ports. Consolidating the protocol on a single well-known port enables use of NFS through firewalls. Another approach that the security architect could consider as part of his or her design is to secure NFS, where it must be deployed, by tunneling it through Secure Shell (SSH), which can be integrated with operating system authentication schemes.
Using port 25/TCP, Simple Mail Transfer Protocol (SMTP) is a client/server protocol utilized to route email on the Internet. Information on mail servers for Internet domains is managed through DNS, using mail exchange (MX) records. Although SMTP takes a simple approach to authentication, it is robust in the way it deals with unavailability; an SMTP server will try to deliver email over a configurable period.
From a protocol perspective, SMTP’s main shortcomings are the complete lack of authentication and encryption. Identification is performed by the sender’s email address. A mail server will be able to restrict sending access to certain hosts, which should be on the same network as the mail server, as well as set conditions on the sender’s email address, which should be one of the domains served by this particular mail server. Otherwise, the mail server may be configured as an open relay, although this is not a recommended practice traditionally because it poses a variety of security concerns and may get the server placed on ban lists of anti-spam organizations.
To address the weaknesses identified in SMTP, an enhanced version of the protocol, ESMTP, was defined. ESMTP is modular in that client and server can negotiate the enhancements used. ESMTP does offer authentication, among other things, and allows for different authentication mechanisms, including basic and several secure authentication mechanisms.
A quick summary comparison of SMTP and ESMTP can be seen in Table 6-7.
Table 6-7: Comparison between SMTP and ESMTP
SMTP | ESMTP
Simple Mail Transfer Protocol | Extended Simple Mail Transfer Protocol
First command in session: HELO sayge.com | First command in session: EHLO sayge.com
RFC 821 | RFC 1869
SMTP MAIL FROM and RCPT TO allow a maximum of 512 characters, including <CRLF>. | ESMTP MAIL FROM and RCPT TO allow more than 512 characters.
SMTP alone cannot be extended with new commands. | ESMTP is a framework with enhanced capabilities, allowing it to extend existing SMTP commands.
Before the advent of the World Wide Web and proliferation of Hypertext Transfer Protocol (HTTP), which is built on some of its features, File Transfer Protocol (FTP) was the protocol for publishing or disseminating data over the Internet.
Table 6-8 provides an FTP quick reference of ports and definitions.
Table 6-8: FTP Quick Reference
Ports | 20/TCP (data stream), 21/TCP (control stream)
Definition | RFC 959 |
FTP authenticates users with a username and password sent in cleartext. Although this authentication weakness can be addressed through the use of encryption, this approach imposes additional requirements on the client. These requirements and methods are briefly outlined below:
Trivial File Transfer Protocol (TFTP) is a simplified version of FTP, which is used when authentication is not needed and quality of service is not an issue. TFTP runs on port 69 over UDP. It should therefore only be used in trusted networks with low latency.
Table 6-9 provides a TFTP quick reference of ports and definitions.
Table 6-9: TFTP Quick Reference
Ports | 69/UDP |
Definition | RFC 1350 |
In practice, TFTP is used mostly in LANs for the purpose of pulling packages, for instance, in booting up a diskless client or when using imaging services to deploy client environments.
Hypertext Transfer Protocol (HTTP) is the layer 7 foundation of the World Wide Web (WWW). HTTP, originally conceived as a stateless, stripped-down version of FTP, was developed at the European Organization for Nuclear Research (CERN) to support the exchange of information in Hypertext Markup Language (HTML).
Table 6-10 provides an HTTP quick reference of ports and definitions.
Table 6-10: HTTP Quick Reference
Ports | 80/TCP; other ports are in use, especially for proxy services
Definition | RFC 1945, RFC 2109, RFC 2616
HTTP’s popularity caused the deployment of an unprecedented number of Internet-facing servers; many were deployed with out-of-the-box, vendor-preset configurations. Often these settings were geared toward convenience rather than security. As a result, numerous previously closed applications were suddenly marketed as “Web enabled.” By implication, not much time was spent on developing the Web interface in a secure manner, and authentication was often reduced to a simple browser-based username/password prompt.
Even though HTTP does not natively support quality of service or bidirectional communication, workarounds were quickly developed to deal with Quality of Service (QoS) concerns and bidirectional communication needs. Consequently, HTTP will work from within most networks, shielded or not, and thereby lends itself to tunneling an impressive number of other protocols.
HTTP does not natively support encryption and has a fairly simple authentication mechanism based on domains, which in turn are normally mapped to directories on a web server. Although HTTP authentication is extensible, it is most often used in the classic username/password style.
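The classic username/password style most often means HTTP Basic authentication, which merely base64-encodes the credentials; it is an encoding, not encryption. A sketch (the credentials are the well-known specification example):

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    # base64 of "user:password" is trivially reversible if sent over
    # cleartext HTTP, which is why Basic auth needs TLS underneath
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("Aladdin", "open sesame"))
# {'Authorization': 'Basic QWxhZGluOjpvcGVuIHNlc2FtZQ=='} -- see test below
```

Anyone who captures this header can decode the password directly, which motivates the interception concerns in the next paragraph.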
Because HTTP is transmitting data in cleartext and generates a slew of logging information on web servers and proxy servers, the resulting information can be readily used for illegitimate activities, such as industrial espionage. To address this significant concern, the security practitioner can use any of the commercial and free services available that allow for the anonymization of HTTP requests. These services are mainly geared at the privacy market but have also attracted a criminal element seeking to obfuscate activity. A relatively popular free service is Java Anonymous Proxy, or JAP, also referred to as project AN.ON, or Anonymity.Online. JAP is referred to as JonDo within the commercially available solution JonDonym anonymous proxy server.29
Like open mail relays, open proxy servers allow unrestricted relaying of requests (such as GET) from the Internet. They can therefore be used as stepping stones for launching attacks or simply to obscure the origin of illegitimate requests. More importantly, an open proxy server bears an inherent risk of opening access to protected intranet pages from the Internet. (A misconfigured firewall allowing inbound HTTP requests would need to be present on top of the open proxy to allow this to happen.)
As a general rule, HTTP proxy servers should not allow queries from the Internet. For the security architect, it is a best practice to separate application gateways (sometimes implemented as reverse proxies) from the proxy for Web browsing because both have very different security levels and business importance. (It would be even better to implement the application gateway as an application proxy and not an HTTP proxy, but this is not always possible.)
In many organizations, the HTTP proxy is used as a means to implement content filtering, for instance, by logging or blocking traffic that has been defined as or is assumed to be nonbusiness related for some reason. Although filtering on a proxy server or firewall as part of a layered defense can be quite effective to prevent virus infections (though it should never be the only protection against viruses), it will be only moderately effective in preventing access to unauthorized services (such as certain remote-access services or file sharing), as well as preventing the download of unwanted content.
HTTP tunneling is technically a misuse of the protocol on the part of the designer of such tunneling applications. It has become a popular feature with the rise of the first streaming video and audio applications and has been implemented into many applications that have a market need to bypass user policy restrictions. Usually, HTTP tunneling is applied by encapsulating outgoing traffic from an application in an HTTP request and incoming traffic in a response. This is usually not done to circumvent security but rather to be compatible with existing firewall rules and allow an application to function through a firewall without the need to apply special rules or additional configurations. Many of the most prevalent and successful malicious software packages, including viruses, worms, and especially botnets, will use HTTP as the means to transmit stolen data or control information from infected hosts through firewalls.
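The encapsulation just described can be sketched in a few lines. The host and path below are hypothetical; the point is that an arbitrary application payload rides inside what looks, to a firewall, like an ordinary outbound web request:

```python
# Wrap an arbitrary application payload in an HTTP POST so it can traverse
# a firewall that permits only outbound web traffic (hypothetical endpoint).
payload = b"\x01\x02CONTROL-MSG"  # opaque application/control data

request = (
    b"POST /sync HTTP/1.1\r\n"
    b"Host: updates.example.com\r\n"
    b"Content-Type: application/octet-stream\r\n"
    b"Content-Length: " + str(len(payload)).encode() + b"\r\n\r\n" +
    payload
)
```

The return path works the same way: the tunneled reply simply travels back in the body of the HTTP response.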
Suitable countermeasures that the security practitioner should consider include filtering on a firewall or proxy server and assessing clients for installations of unauthorized software. However, a security professional will have to balance the business value and effectiveness of these countermeasures with the incentive for circumvention that a restriction of popular protocols will create.
Multi-layer protocols have ushered in an era of new vulnerabilities that were once unthinkable. In the past, several “networked” solutions were developed to provide control of and communications with industrial devices. These often proprietary protocols evolved over time and eventually merged with other networking technologies such as Ethernet and Token Ring. Several vendors now use the TCP/IP stack to channel and route their own protocols. These protocols are used to control coils, actuators, and machinery in multiple industries such as energy, manufacturing, construction, fabrication, mining, and farming, to name a few. Insecurities in these systems often have real-world visibility and impact. Given that the life expectancy of many of the devices under control is 20 years or longer, it is easy to see how systems can become outdated. Often, critical infrastructure, such as power grids, is controlled using multi-layer protocols. Table 6-11 from the Idaho National Laboratory illustrates some of the differences and related challenges of control systems vs. standard information technology.30
Table 6-11: Differences and Challenges for Control Systems vs. Information Technology
SECURITY TOPIC | INFORMATION TECHNOLOGY | CONTROL SYSTEMS
Antivirus/Mobile Code | Common; widely used | Uncommon; impossible to deploy effectively
Support Technology Lifetime | 2-3 years; diversified vendors | Up to 20 years; single vendor
Outsourcing | Common; widely used | Operations are often outsourced, but not diversified across providers
Application of Patches | Regular; scheduled | Rare; unscheduled; vendor specific
Change Management | Regular; scheduled | Highly managed and complex
Time-Critical Content | Delays generally accepted | Delays are unacceptable
Availability | Delays generally accepted | 24x7x365 (continuous)
Security Awareness | Moderate in both private and public sector | Poor, except for physical
Security Testing/Audit | Part of a good security program | Occasional testing for outages
Physical Security | Secure (server rooms, etc.) | Remote/unmanned; secure
The term most often associated with multi-layer protocols is supervisory control and data acquisition (SCADA). Another term used in relation to multi-layer protocols is industrial control system, or ICS. In general, SCADA systems are designed to operate with several different communication methods, including modems, WANs, and various networking equipment. The following figure shows a general layout of a SCADA system:31
As Figure 6-10 demonstrates, a great complexity of devices and information exists in SCADA systems. Most SCADA systems minimally contain the following:
Given the unique design of SCADA systems, and the critical infrastructures that they control, it is little wonder they are a new focus of attacks. Security architects and practitioners responsible for implementing or protecting SCADA systems should be aware of the following types of attacks:
In late October of 2013, U.S.-based researchers identified 25 zero-day vulnerabilities in industrial control SCADA software from 20 suppliers that are used to control critical infrastructure systems. Attackers could exploit some of these vulnerabilities to gain control of electrical power and water systems. The vulnerabilities were found in devices that are used for serial and network communications between servers and substations. Serial communication has not been considered as an important or viable attack vector up until now, but breaching a power system through serial communication devices can be easier than attacking through the IP network because it does not require bypassing layers of firewalls. In theory, an intruder could exploit the vulnerabilities simply by breaching the wireless radio network over which the communication passes to the server.
Another issue that the security professional needs to contend with is the inability of antivirus software to address the threats facing SCADA/ICS environments. The Flame virus, for example, evaded detection by 43 different antivirus tools and took more than two years to detect. Instead of continuing to rely on the traditional enterprise tools that may work well for desktops and servers, the security practitioner needs tools in place that allow threats to be identified, responded to, and forensically analyzed in real time within these complicated systems. Achieving this requires continuous monitoring of all log data generated by IT systems, in order to automatically baseline normal, day-to-day activity across systems and immediately identify any anomalous activity.
It was announced in March of 2014 that more than 7,600 different power, chemical, and petrochemical plants may still be vulnerable to a handful of SCADA vulnerabilities. A researcher at Rapid7, the Boston-based firm responsible for the popular pen testing software Metasploit, and an independent security researcher discovered three bugs in Yokogawa Electric’s CENTUM CS3000 R3 product. The Windows-based software is primarily used by infrastructure in power plants, airports, and chemical plants across Europe and Asia. The vulnerabilities are essentially a series of buffer overflows, heap based and stack based, that could open the software up to attack. All of them affect computers where CENTUM CS 3000, software that helps operate and monitor industrial control systems, is installed. With the first, an attacker could send a specially crafted sequence of packets to BKCLogSvr.exe and trigger a heap-based buffer overflow, which in turn could cause a DoS and allow the execution of arbitrary code with system privileges. The second is similar: a special packet sent to BKHOdeq.exe could cause a stack-based buffer overflow, allowing execution of arbitrary code with the privileges of the CENTUM user. Lastly, another stack-based buffer overflow, this one involving the BKBCopyD.exe service, could allow the execution of arbitrary code as well.32
In April of 2014, attackers were able to compromise a utility in the United States through an Internet-connected system that gave the attackers access to the utility’s internal control system network. The utility had remote access enabled on some of its Internet-connected hosts, and the systems were only protected by simple passwords. Officials at the ICS-CERT, an incident response and forensics organization inside the Department of Homeland Security that specializes in ICS and SCADA systems, said that the public utility was compromised “when a sophisticated threat actor gained unauthorized access to its control system network.” The attacker apparently used a simple brute force attack to gain access to the Internet-facing systems at the utility and then compromised the ICS network. “After notification of the incident, ICS-CERT validated that the software used to administer the control system assets was accessible via Internet facing hosts. The systems were configured with a remote access capability, utilizing a simple password mechanism; however, the authentication method was susceptible to compromise via standard brute forcing techniques,” ICS-CERT said in a published report.33
The security of industrial control systems and SCADA systems has become a serious concern in recent years as attackers and researchers have begun to focus their attention on them. Many of these systems, which control mechanical devices, manufacturing equipment, utilities, nuclear plants, and other critical infrastructure, are connected to the Internet, either directly or through networks, and this has drawn the attention of attackers looking to do reconnaissance or cause trouble on these networks. Researchers have been sharply critical of the security in the SCADA and ICS industries, saying it’s “laughable” and has no formal security development lifecycle. The ICS-CERT report states that the systems in the compromised utility probably were the target of a number of attacks: “It was determined that the systems were likely exposed to numerous security threats and previous intrusion activity was also identified.” The investigators were able to identify the issues and found that the attackers likely had not done any damage to the ICS system at the utility.
In the same report, ICS-CERT detailed a separate compromise at an organization that also had a control system connected to the Internet. Attackers were able to compromise the ICS system, which operates an unspecified mechanical device, but did not do any real damage. “The device was directly Internet accessible and was not protected by a firewall or authentication access controls. At the time of compromise, the control system was mechanically disconnected from the device for scheduled maintenance,” the report says. “ICS-CERT provided analytic assistance and determined that the actor had access to the system over an extended period of time and had connected via both HTTP and the SCADA protocol. However, further analysis determined that no attempts were made by the threat actor to manipulate the system or inject unauthorized control actions.”
The security architect and practitioner should familiarize themselves with the latest alerts released by the ICS-CERT. These can be found at https://ics-cert.us-cert.gov/alerts/.
Security professionals should also consider the following list as a starting point for defensive actions that can be used to help secure SCADA/ICS systems:
Modbus and Fieldbus are standard industrial communication protocols designed by separate groups. The focus of their design is not security but uptime and control of devices. Many of these protocols send information in cleartext across transmission media. Additionally, many of these protocols and the devices they support require little or no authentication to execute commands on a device. The security architect and practitioner need to work together to implement strict logical and physical controls so that these protocols are encapsulated and isolated from any public or open network.
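The lack of authentication is visible right on the wire. The sketch below builds a complete, standard Modbus/TCP “Read Holding Registers” (function 0x03) request; on an unprotected network, any host that can reach a device’s TCP port 502 could send such a frame, because the protocol carries no credentials at all:

```python
import struct

def read_holding_registers(txn_id, unit, start, count):
    """Build a Modbus/TCP Read Holding Registers (0x03) request frame."""
    # PDU: function code, starting address, register count (big-endian).
    pdu = struct.pack(">BHH", 0x03, start, count)
    # MBAP header: transaction id, protocol id (always 0), remaining byte
    # count, unit id. Note: no authentication field exists anywhere.
    mbap = struct.pack(">HHHB", txn_id, 0, len(pdu) + 1, unit)
    return mbap + pdu

frame = read_holding_registers(txn_id=1, unit=1, start=0, count=10)
```

Twelve bytes suffice to query a live controller, which is why encapsulating and isolating these protocols is so critical.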
The area of telecommunications technologies is a broad one that encompasses many things that the SSCP will need to know. IP convergence, VoIP, POTS, PBX, cellular, and attacks and countermeasures are all items that are important to understand fully. The SSCP will want to ensure that they have a good, broad understanding of the key enabling technologies in this area, as well as the use of these technologies within the enterprise in a secure manner.
IP convergence can be defined as using the Internet Protocol (IP) to transmit all of the information that transits a network, such as voice, data, music, or video.
The benefits of IP convergence that the security architect and practitioner can bring to the enterprise through the design, deployment, and management of a converged network infrastructure are as follows:
Now that 10 GbE is becoming more widespread, Fibre Channel over Ethernet (FCoE) is the next attempt to converge block storage protocols onto Ethernet. FCoE takes advantage of 10 GbE performance and compatibility with existing Fibre Channel protocols. It relies on an Ethernet infrastructure that uses the IEEE Data Center Bridging (DCB) standards. The DCB standards can apply to any IEEE 802 network, but most often the term DCB refers to enhanced Ethernet.
The DCB standards define four new technologies:34
FCoE is a lightweight encapsulation protocol and lacks the reliable data transport of the TCP layer. Therefore, FCoE must operate on DCB-enabled Ethernet and use lossless traffic classes to prevent Ethernet frame loss under congested network conditions. FCoE on a DCB network mimics the lightweight nature of native FC protocols and media. It does not incorporate TCP or even IP protocols. This means that FCoE is a layer 2 (non-routable) protocol just like FC. FCoE is only for short-haul communication within a data center.
iSCSI, or Internet SCSI (Small Computer System Interface), is an Internet Protocol (IP)-based storage networking standard for linking data storage facilities, developed by the Internet Engineering Task Force (IETF) as RFC 3720.35 By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. Because of the ubiquity of IP networks, iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval.
When an end user or application sends a request, the operating system generates the appropriate SCSI commands and data request, which then go through encapsulation and, if necessary, encryption procedures. A packet header is added before the resulting IP packets are transmitted over an Ethernet connection. When a packet is received, it is decrypted (if it was encrypted before transmission) and disassembled, separating the SCSI commands and request. The SCSI commands are sent on to the SCSI controller and from there to the SCSI storage device. Because iSCSI is bidirectional, the protocol can also be used to return data in response to the original request.
Multi-Protocol Label Switching (MPLS) is best summarized as a Layer 2.5 networking protocol. In a traditional IP network, each router performs an IP lookup (“routing”), determines a next-hop based on its routing table, and forwards the packet to that next-hop. Every router does the same, each making its own independent routing decisions, until the final destination is reached. MPLS does label switching instead. The first device does a routing lookup, just like before, but instead of finding a next-hop, it finds the final destination router. And it finds a pre-determined path from “here” to that final router. The router applies a “label” (or “shim”) based on this information. Future routers use the label to route the traffic without needing to perform any additional IP lookups. At the final destination router, the label is removed, and the packet is delivered via normal IP routing.36
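The shim itself is a compact 32-bit word defined by RFC 3032: a 20-bit label value, 3 bits of traffic class, a bottom-of-stack flag, and an 8-bit TTL. A sketch of packing and unpacking that word:

```python
def pack_mpls(label, tc, s, ttl):
    """Pack an MPLS label stack entry (RFC 3032 layout):
    20-bit label | 3-bit traffic class | 1-bit bottom-of-stack | 8-bit TTL."""
    word = (label << 12) | (tc << 9) | (s << 8) | ttl
    return word.to_bytes(4, "big")

def unpack_mpls(data):
    """Recover (label, tc, s, ttl) from a 4-byte label stack entry."""
    word = int.from_bytes(data, "big")
    return word >> 12, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF

# Label 16 is the first value outside the reserved range 0-15.
entry = pack_mpls(label=16, tc=0, s=1, ttl=64)
```

Because forwarding decisions along the path are made on this label alone, a mislabeled or injected packet bypasses IP-level inspection, which is one reason MPLS still matters to security professionals.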
So why do security professionals still care about MPLS? Three reasons:
Voice over Internet Protocol (VoIP) is a technology that allows you to make voice calls using a broadband Internet connection instead of a regular (analog) phone line. VoIP is simply the transmission of voice traffic over IP-based networks. VoIP is also the foundation for more advanced unified communications applications such as Web and video conferencing. VoIP systems are based on the Session Initiation Protocol (SIP), the recognized standard. Any SIP-compatible device can talk to any other, and any SIP-based IP phone can call another right over the Internet; you do not need any additional equipment or even a phone provider. Just plug your SIP phone into an Internet connection, configure it, and then dial the other person right over the Internet.
In all VoIP systems, your voice is converted into packets of data and then transmitted to the recipient over the Internet and decoded back into your voice at the other end. To make it quicker, these packets are compressed before transmission with certain codecs, almost like zipping a file on the fly. There are many codecs with different ways of achieving compression and managing bitrates; thus each codec has its own bandwidth requirements and provides different voice quality for VoIP calls.
VoIP systems employ session control and signaling protocols to control the signaling, set-up, and tear-down of calls. They transport audio streams over IP networks using special media delivery protocols that encode voice, audio, and video with audio and video codecs and deliver them as streaming media. Various codecs exist that optimize the media stream based on application requirements and network bandwidth; some implementations rely on narrowband and compressed speech, while others support high-fidelity stereo codecs. Some popular codecs include the μ-law and a-law versions of G.711; G.722, a high-fidelity codec marketed as HD Voice by Polycom; iLBC, a popular open source voice codec; G.729, a codec that uses only 8 kbit/s in each direction; and many others.
As its name implies, Session Initiation Protocol (SIP) is designed to manage multimedia connections. SIP is designed to support digest authentication structured by realms, similar to HTTP (basic username/password authentication has been removed from the protocol as of RFC 3261).38 In addition, SIP provides integrity protection through MD5 hash functions. SIP supports a variety of encryption mechanisms, such as TLS. Privacy extensions to SIP, including encryption and caller ID suppression, have been defined in extensions to the original Session Initiation Protocol (RFC 3325).39
A technique called packet loss concealment (PLC) is used in VoIP communications to mask the effect of dropped packets. There are several techniques that may be used by different implementations. Zero substitution is the simplest PLC technique that requires the least computational resources. These simple algorithms generally provide the lowest quality sound when a significant number of packets are discarded. Waveform substitution is used in older protocols, and it works by substituting the lost frames with artificially generated, substitute sound. The simplest form of substitution simply repeats the last received packet. Unfortunately, waveform substitution often results in unnatural, “robotic” sound when a long burst of packets is lost.
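The repeat-last-frame strategy described above can be sketched in a few lines. The frame size is an assumption here (160 samples corresponds to 20 ms of audio at the 8 kHz sampling rate used by G.711):

```python
FRAME = 160  # samples per frame: 20 ms at 8 kHz (G.711-style framing)

def conceal(frames):
    """Waveform-substitution PLC: each lost frame (None) repeats the last
    received frame; silence (zeros) is played until the first frame arrives."""
    out, last = [], [0] * FRAME
    for f in frames:
        last = f if f is not None else last
        out.append(last)
    return out

# Two frames arrive; the one lost in between is concealed with a repeat.
received = [[1] * FRAME, None, [2] * FRAME]
repaired = conceal(received)
```

As the text notes, repeating a frame masks a single lost packet well but produces the unnatural, “robotic” sound when a long burst of packets is lost and the same frame plays over and over.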
Unlike network delay, jitter is caused not by packet delay itself but by variation in packet delays. Because VoIP endpoints try to compensate for jitter by increasing the size of the packet buffer, jitter adds delay to the conversation. If the total delay exceeds 150 ms, callers notice it and often revert to a walkie-talkie style of conversation. In practice, reducing delays on the network helps keep the buffer under 150 ms even when significant variation is present.
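Jitter is commonly estimated with the running formula from RFC 3550 (RTP), which exponentially smooths the absolute change in transit time between consecutive packets. A sketch, with transit times in milliseconds:

```python
def interarrival_jitter(transit_times_ms):
    """RFC 3550 running jitter estimate: J += (|D| - J) / 16, where D is
    the difference in transit time between consecutive packets."""
    j = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        j += (abs(cur - prev) - j) / 16.0
    return j

# A constant delay, even a large one, produces zero jitter; it is the
# variation between packets that fills the endpoint's jitter buffer.
steady = interarrival_jitter([120, 120, 120, 120])
varying = interarrival_jitter([50, 60])
```

This illustrates why a path with a steady 120 ms delay can sound better than one averaging 60 ms with heavy variation.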
Some VoIP systems discard packets received out of order, while other systems discard out-of-order packets if they exceed the size of the internal buffer, which in turn causes jitter. Sequence errors can also cause significant degradation of call quality. Sequence errors may occur because of the way packets are routed. Packets may travel along different paths through different IP networks, causing different delivery times. As a result, lower-numbered packets may arrive at the endpoint later than higher numbered ones.
A codec is software that converts audio signals into digital frames and vice versa. Codecs are characterized by different sampling rates and resolutions. Different codecs employ different compression methods and algorithms, using different bandwidth and computational requirements.
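As an example of the trade-offs involved, G.711’s μ-law variant applies a logarithmic companding curve so that quiet samples receive proportionally more resolution than loud ones. The sketch below implements the continuous form of the curve; the real G.711 codec quantizes a segmented 8-bit approximation of it:

```python
import math

MU = 255  # μ-law parameter used by G.711 in North America and Japan

def mu_law_compress(x):
    """Map a sample in [-1, 1] through the μ-law companding curve
    (continuous form; G.711 uses a piecewise 8-bit approximation)."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

# A quiet sample at 1% of full scale maps to roughly 23% of the output
# range, preserving detail that uniform quantization would lose.
quiet = mu_law_compress(0.01)
```

The decoder applies the inverse curve, expanding the values back to linear amplitude before playback.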
There are two standard phone systems used for telecommunications: POTS and PBX.
Plain old telephone service (POTS) is commonly found in the “last mile” of most residential and business telephone services. The name “Post Office Telephone Service,” once used in some countries, has mostly been retired due to the proliferation of phones in homes and businesses. POTS typically represents a bidirectional analog telephone interface that was designed to carry the sound of the human voice. POTS lacks the mobility of cellular phones and the bandwidth of several competing products; however, it is one of the most reliable systems available, with an uptime close to or exceeding 99.999%. POTS is still often the telecom method of choice when high reliability is required and bandwidth is not. Typical applications include alarm systems and “out of band” command links for routers and other network devices.
A private branch exchange (PBX) is an enterprise class phone system typically used in businesses or large organizations. A PBX often includes an internal switching network and a controller that is attached to telecommunications trunks. Many PBXs had default manufacturer configuration codes, ports, and control interfaces that could be exploited if the security professional did not reconfigure them prior to deployment. A PBX is often targeted by war dialers who can then use the PBX to route long distance calling or eavesdrop on the organization. Analog POTS PBXs have largely been replaced with VoIP based or VoIP enabled PBXs.
A cellular network or mobile network is a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a cell site or base station. In a cellular network, each cell characteristically uses a different set of radio frequencies from its immediate neighboring cells to avoid interference. Joined together, these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission.
DDoS affects many types of systems. Some have used the term TDoS to refer to DDoS or DoS attacks on telecommunications systems (Telecommunications Denial of Service). Typical motives can be anything from revenge, extortion, political/ideological, and distraction from a larger set of financial crimes. The Dirt Jumper bot has been used to create distractions by launching DDoS attacks upon financial institutions and financial infrastructure at the same time that fraud is taking place (with the Zeus Trojan, or other banking malware or other attack technique). See Figure 6-11. Similarly, DDoS aimed at telecommunications is being used to create distractions that allow other crimes to go unnoticed for a longer period.
A successful cyber-attack on a telecommunications operator could disrupt service for thousands of phone customers, sever Internet service for millions of consumers, cripple businesses, and shut down government operations. And there’s reason to worry: Cyber-attacks against critical infrastructure are soaring. For instance, in 2012, the U.S. Computer Emergency Readiness Team (US-CERT), a division of the Department of Homeland Security, processed approximately 190,000 cyber incidents involving U.S. government agencies, critical infrastructure, and the department’s industry partners. This represents a 68% increase over 2011.40
Another issue is the DDoS “attack for hire” networks that exist in the wild today. For example, advertisements like the ones pictured in Figures 6-12 and 6-13 can easily be found on the Internet for traditional DDoS services that also include phone attack services starting at $20 per day.
SIP flooding attacks are another attack vector that the security practitioner needs to be aware of. Often, SIP flooding attacks take place because attackers are running brute-force password-guessing scripts that overwhelm the processing capabilities of the SIP device, but pure flooding attacks on SIP servers also occur. Once the attackers obtain credentials to a VoIP or other PBX system, that system can become a pawn in their money-making scheme to perform DoS, vishing, or other types of attacks. Default credentials are one of the security weaknesses that attackers leverage to gain access to VoIP/PBX systems. Organizations should therefore ensure that their telecommunications system credentials are strong enough to resist brute-force attack and that the ability to reach the telephone system is limited as much as possible, reducing the attack surface and encouraging the attacker to move on to the next victim.
Any system is subject to availability attacks at any point where an application-layer or other processor-intensive operation exists, and the networks that supply these systems are likewise vulnerable through link saturation and state-table exhaustion. Telecommunications systems are no exception to this principle.
The best way to control access to anything is to monitor any and all routes that can be used to access the item in question. Through continuous monitoring of the known access pathways, any attempts to gain access will be observed and can then be managed. The same thought process is applied to network access. The SSCP will want to understand how to ensure that all identified pathways to gain access to a system are monitored and that the appropriate control mechanisms are deployed to secure them. These may include the use of secure routing, DMZs, and hardware such as firewalls.
While it is possible to establish corporate wide area networks (WANs) using the Internet and VPN technology, it is not desirable. Relying on the Internet for connectivity means that there is little ability to control the routes that traffic takes or to remedy performance issues. With deterministic routing, WAN connectivity is supplied over a limited number of pre-determined routes, typically supplied by a large network provider, that are known to be either secure or less susceptible to compromise. Deterministic routing from a large carrier also makes it much easier to address performance issues and to maintain the service levels required by the applications on the WAN. If the WAN is supporting converged applications like voice (VoIP) or video (for security monitoring or video conferencing), then deterministic routing becomes even more essential to the assurance of the network.
Boundary routers primarily advertise routes that external hosts can use to reach internal ones. However, they should also be part of an organization’s security perimeter by filtering external traffic that should never be allowed to enter the internal network. For example, boundary routers may prevent external packets from the Finger service from entering the internal network because that service is used to gather information about hosts.
A key function of boundary routers is the prevention of inbound or outbound IP spoofing attacks. In using a boundary router, spoofed IP addresses would not be routable across the network perimeter. Examples of IP spoofing attacks are:
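A sketch of the ingress half of such filtering (per RFC 2827): a packet arriving from the Internet should never claim a private, loopback, or internal source address. The internal prefixes below are hypothetical stand-ins for an organization’s own address space:

```python
import ipaddress

# Hypothetical internal prefixes; a real boundary router would be
# configured with the organization's actual address blocks.
INTERNAL = [ipaddress.ip_network("10.0.0.0/8"),
            ipaddress.ip_network("192.168.0.0/16")]

def spoofed_inbound(src: str) -> bool:
    """Return True if an Internet-facing interface should drop a packet
    claiming this source address (RFC 2827 ingress filtering)."""
    addr = ipaddress.ip_address(src)
    return (addr.is_private or addr.is_loopback
            or any(addr in net for net in INTERNAL))
```

The mirror-image egress rule drops outbound packets whose source address is not within the organization’s own prefixes, preventing internal hosts from spoofing external addresses.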
The security perimeter is the first line of protection between trusted and untrusted networks. In general, it includes a firewall and router that help filter traffic. Security perimeters may also include proxies and devices, such as an intrusion detection system (IDS), to warn of suspicious traffic. The defensive perimeter extends out from these first protective devices to include proactive defense such as boundary routers, which can provide early warning of upstream attacks and threat activities.
It is important to note that while the security perimeter is the first line of defense, it must not be the only one. If there are not sufficient defenses within the trusted network, then a misconfigured or compromised device could allow an attacker to enter the trusted network.
Segmenting networks into domains of trust is an effective way to help enforce security policies. Controlling which traffic is forwarded between segments will go a long way to protecting an organization’s critical digital assets from malicious and unintentional harm.
A dual-homed host has two network interface cards (NICs), each on a separate network. Provided that the host controls or prevents the forwarding of traffic between NICs, this can be an effective measure to isolate a network.
Bastion hosts serve as a gateway between a trusted and untrusted network that gives limited, authorized access to untrusted hosts. For instance, a bastion host at an Internet gateway could allow external users to transfer files to it via FTP. This permits files to be exchanged with external hosts without granting them access to the internal network in an uncontrolled manner.
If an organization has a network segment that has sensitive data, it can control access to that network segment by requiring that all access must be from the bastion host. In addition to isolating the network segment, users will have to authenticate to the bastion host, which will help audit access to the sensitive network segment. For example, if a firewall limits access to the sensitive network segment, allowing access to the segment from only the bastion host will eliminate the need for allowing many hosts access to that segment. For instance, terminal servers are a form of bastion host, which allow authenticated users deeper into the network.
A bastion host may also include functionality called a “data diode.” In the world of electronics, a diode is a device that only allows current to flow in a single direction. A data diode only allows information to flow in a single direction; for instance, it enforces rules that allow information to be read, but nothing may be written (changed or created or moved).
A bastion host is a specialized computer that is deliberately exposed on a public network. From a secured network perspective, it is the only node exposed to the outside world and is therefore very prone to attack. It is placed outside the firewall in single firewall systems or, if a system has two firewalls, it is often placed between the two firewalls or on the public side of a demilitarized zone (DMZ).
The bastion host processes and filters all incoming traffic and prevents malicious traffic from entering the network, acting much like a gateway. The most common examples of bastion hosts are mail, domain name system, Web, and file transfer protocol (FTP) servers. Firewalls and routers can also become bastion hosts.
The bastion host node is usually a very powerful server with improved security measures and custom software. It often hosts only a single application, both to limit its attack surface and because it must perform that one function well. The software is usually customized, proprietary, and not available to the public. This host is designed to be the strong point in the network that protects the systems behind it; therefore, it often undergoes regular maintenance and audit. Sometimes bastion hosts are used to draw attacks so that the source of the attacks may be traced.
To maintain the security of a bastion host, all unnecessary software, daemons, and users are removed, the operating system is kept current with the latest security updates, and an intrusion detection system is installed.41
A demilitarized zone (DMZ), also known as a screened subnet, allows an organization to give external hosts limited access to public resources, such as a company website, without granting them access to the internal network. See Figure 6-14. Typically, the DMZ is an isolated subnet attached to a firewall (when the firewall has three interfaces—internal, external, and DMZ—this configuration is sometimes called a three-legged firewall). Because external hosts by design have access to the DMZ (albeit controlled by the firewall), organizations should only place in the DMZ hosts and information that are not sensitive.
Networks use a vast array of hardware, including modems, concentrators, front-end processors, multiplexers, hubs, repeaters, bridges, switches, and routers.
Modems (modulator/demodulator) allow users remote access to a network via analog phone lines. Essentially, modems convert digital signals to analog and vice versa. A modem that is connected to the user’s computer converts a digital signal to analog to be transmitted over a phone line. On the receiving end, a modem converts the user’s analog signal to digital and sends it to the connected device, such as a server. Of course, the process is reversed when the server replies. The server’s reply is converted from digital to analog and transmitted over the phone line, and so on.
To mitigate some of the risks that persist from the legacy analog world of voice communications, vendors have developed and brought to market telephony firewalls, which act much like IP firewalls but are designed specifically for analog signals. These firewalls sit at the demarcation point between the public switched telephone network (PSTN) and the internal organizational network, whether it is an IP phone system or an analog phone system. Telephony firewalls monitor both incoming and outgoing analog calls to enforce rule-sets.
Concentrators multiplex connected devices into one signal to be transmitted on a network. For instance, a fiber distributed data interface (FDDI) concentrator multiplexes transmissions from connected devices onto an FDDI ring.
Some hardware architectures employ a hardware front-end processor that sits between the input/output devices and the main computer. By servicing input/output on behalf of the main computer, front-end processors reduce the main computer’s overhead.
A multiplexer combines multiple signals into one signal for transmission. Using a multiplexer is much more efficient than transmitting the same signals separately. Multiplexers are used in devices ranging from simple hubs to very sophisticated dense wavelength-division multiplexers (DWDMs) that combine multiple optical signals on one strand of optical fiber.
Hubs are used to implement a physical star topology. All of the devices in the star connect to the hub. Essentially, hubs retransmit signals from each port to all other ports. Although hubs can be an economical method to connect devices, there are several important disadvantages:
Bridges are layer 2 devices that filter traffic between segments based on media access control (MAC) addresses. In addition, they amplify signals to facilitate physically larger networks. A basic bridge filters out frames that are not destined for another segment. Bridges can connect LANs with unlike media types, such as connecting an unshielded twisted pair (UTP) segment with a segment that uses coaxial cable. Bridges do not reformat frames, such as converting a Token Ring frame to Ethernet, so only identical layer 2 architectures can be connected with a simple bridge (e.g., Ethernet to Ethernet). Network administrators can use encapsulating bridges to connect dissimilar layer 2 architectures, such as Ethernet to Token Ring; these bridges encapsulate incoming frames into frames of the destination’s architecture. Other specialized bridges filter outgoing traffic based on the destination MAC address. Note that bridges do not prevent an intruder from intercepting traffic on the local segment. A common type of bridge for many organizations is a wireless bridge based on one of the IEEE 802.11 standards. While wireless bridges offer compelling efficiencies, they can pose serious security issues by effectively making all traffic crossing the bridge visible to anyone within radio range. Wireless bridges must apply link-layer encryption and any other available native security features, such as access lists, to ensure secure operation.
Switches address the same connectivity problems that hubs do, but with solutions that are more sophisticated and more expensive. Essentially, a basic switch is a multiport device to which LAN hosts connect. Switches forward frames only to the device specified in the frame’s destination MAC address, which greatly reduces unnecessary traffic. Switches can perform more sophisticated functions to increase network bandwidth. Due to the increased processing speed of switches, models exist that can make forwarding decisions based on IP address and prioritization of types of network traffic. Similar to hubs and bridges, switches forward broadcasts.
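The forwarding behavior described above can be sketched as follows. The `LearningSwitch` class and its simplified frame representation are assumptions for illustration, not a real switch implementation:

```python
# Illustrative sketch of a learning switch: it learns which port each source
# MAC lives on, forwards known unicasts to one port, and floods the rest.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # MAC address -> port it was learned on

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: remember which port the source MAC arrived on.
        self.mac_table[src_mac] = in_port
        if dst_mac == "ff:ff:ff:ff:ff:ff" or dst_mac not in self.mac_table:
            # Broadcasts and unknown destinations are flooded out every
            # other port (this is why switches still forward broadcasts).
            return [p for p in self.ports if p != in_port]
        # Known unicast: forward only to the port that owns the destination.
        return [self.mac_table[dst_mac]]


sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa", "bb:bb"))  # unknown destination: flooded to [2, 3, 4]
print(sw.receive(2, "bb:bb", "aa:aa"))  # destination learned on port 1: [1]
```

The second frame illustrates the traffic-reduction claim: once the table is populated, unicast frames reach only the intended port.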
Routers forward packets to other networks. They read the destination layer 3 address (e.g., the destination IP address) in received packets and, based on the router’s view of the network, determine the next device on the network (the next hop) to which to send each packet. If the destination address is not on a network that is directly connected to the router, it will send the packet to another router. Routers can be used to interconnect different technologies. For example, connecting Token Ring and Ethernet networks to the same router would allow IP packets from the Ethernet network to be forwarded to the Token Ring network.
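A router’s next-hop decision amounts to a longest-prefix match against its routing table. A minimal sketch, using Python’s standard `ipaddress` module and an invented routing table:

```python
# Sketch of next-hop selection by longest-prefix match. The routes and
# next-hop addresses below are invented documentation-range examples.
import ipaddress

routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "10.255.0.1",  # broad internal route
    ipaddress.ip_network("10.1.0.0/16"): "10.1.255.1",  # more specific route
    ipaddress.ip_network("0.0.0.0/0"):   "192.0.2.1",   # default route
}

def next_hop(destination: str) -> str:
    dst = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dst in net]
    # The most specific (longest) matching prefix wins.
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]


print(next_hop("10.1.2.3"))  # /16 beats /8 -> 10.1.255.1
print(next_hop("10.9.9.9"))  # only /8 matches -> 10.255.0.1
print(next_hop("8.8.8.8"))   # falls to default -> 192.0.2.1
```

The default route (`0.0.0.0/0`) models the “send it to another router” case: it matches everything but loses to any more specific prefix.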
When the SSCP considers wired transmission media, they should be thinking about items such as Ethernet network cables and fiber optic cables. Both of these cable types are used to carry information and provide data transmission across the wired networks that make up the corporate LAN. In addition, understanding the differences in cable transmission speeds and distance based on the type of cable used is an important area of knowledge for the SSCP to focus on.
Here are some parameters that should be considered when selecting cables:
Pairs of copper wires are twisted together to reduce electromagnetic interference and cross talk. Each wire is insulated with a fire-resistant material, such as Teflon. The twisted pairs are surrounded by an outer jacket that physically protects the wires. The quality of cable, and therefore its appropriate application, is determined by the number of twists per inch, the type of insulation, and conductive material. Cables are assigned into categories to help determine which cables are appropriate for an application or environment. See Table 6-12.
Table 6-12: Cable Categories
Cable Categories | Speed | Application or Environment |
Category 1 | Less than 1 Mbps | Analog voice and basic rate interface (BRI) in Integrated Services Digital Network (ISDN) |
Category 2 | 4 Mbps | 4 Mbps IBM Token Ring LAN |
Category 3 | 10 Mbps | 10BASE-T Ethernet |
Category 4 | 16 Mbps | 16 Mbps Token Ring |
Category 5 | 100 Mbps | 100BASE-TX and Asynchronous Transfer Mode (ATM) |
Category 5e | 1,000 Mbps (1 Gbps) | 1000BASE-T Ethernet |
Category 6 | Up to 10,000 Mbps (10 Gbps) | 10BASE-T, 100BASE-TX, 1000BASE-T, and 10GBASE-T (10GBASE-T over shorter runs, typically up to 55 m) |
Category 6a | Up to 10,000 Mbps (10 Gbps) | 10BASE-T, 100BASE-TX, 1000BASE-T, and 10GBASE-T over a full 100 m |
Unshielded Twisted Pair (UTP) has several drawbacks. Unlike shielded twisted-pair cables, UTP does not have shielding and is therefore susceptible to interference from external electrical sources, which could reduce the integrity of the signal. Also, to intercept transmitted data, an intruder can install a tap on the cable or monitor the radiation from the wire. Thus, UTP may not be a good choice when transmitting very sensitive data or when installed in an environment with much electromagnetic interference (EMI) or radio frequency interference (RFI). Despite its drawbacks, UTP is the most common cable type. UTP is inexpensive, can be easily bent during installation, and, in most cases, the risk from the above drawbacks is not enough to justify more expensive cables.
Shielded twisted pair (STP) is similar to UTP: pairs of insulated twisted copper wires are enclosed in a protective jacket. However, STP adds an electrically grounded shield to protect the signal. The shield surrounds each of the twisted pairs in the cable, surrounds the bundle of twisted pairs, or both, protecting the signal from outside interference. Although the shielding protects the signal, STP has disadvantages compared to UTP: it is more expensive, bulkier, and harder to bend during installation.
Instead of a pair of wires twisted together, coaxial cable (or simply, coax) uses one thick conductor that is surrounded by a grounding braid of wire. A non-conducting layer is placed between the two layers to insulate them. The entire cable is placed within a protective sheath. The conducting wire is much thicker than the twisted pair and therefore can support greater bandwidth and longer cable lengths. The shielding protects coaxial cable from interference such as EMI and RFI. Likewise, the shielding makes it harder for an intruder to monitor the signal with antennae or install a tap. Coaxial cable has some disadvantages: the cable is expensive and difficult to bend during installation. For this reason, coaxial cable is used in specialized applications, such as cable TV.
As an alternative to connecting devices directly, devices are connected to a patch panel. A network administrator can then connect two of these devices by attaching a small cable, called a patch cord, to two jacks in the panel. To change how these devices are connected, network administrators only have to reconnect patch cords. Patch panels and wiring closets must be secured since they offer an excellent place to tap into the network and exfiltrate data. Wiring must be well laid out and neat, and records must be kept in a secure location; otherwise, it is much easier to hide a tap in a mess of wires. Shared wiring closets should be avoided.
Fiber optics use light pulses to transmit information down fiber lines instead of using electronic pulses to transmit information down copper lines. At one end of the system is a transmitter. This is the place of origin for information coming on to fiber-optic lines. The transmitter accepts coded electronic pulse information coming from copper wire. It then processes and translates that information into equivalently coded light pulses. A light-emitting diode (LED) or an injection-laser diode (ILD) can be used for generating the light pulses. Using a lens, the light pulses are funneled into the fiber-optic medium where they travel down the cable.
There are three types of fiber optic cable commonly used: single mode, multimode, and plastic optical fiber (POF). See Table 6-13.
Table 6-13: Fiber Types and Typical Specifications
Core/Cladding | Attenuation | Bandwidth | Applications/Notes |
Multi-mode graded-index | @850/1300 nm | @850/1300 nm | |
50/125 microns | 3/1 dB/km | 500/500 MHz-km | Laser-rated for GbE LANs |
50/125 microns | 3/1 dB/km | 2000/500 MHz-km | Optimized for 850 nm VCSELs |
62.5/125 microns | 3/1 dB/km | 160/500 MHz-km | Most common LAN fiber |
100/140 microns | 3/1 dB/km | 150/300 MHz-km | Obsolete |
Single-mode | @1310/1550 nm | | |
8-9/125 microns | 0.4/0.25 dB/km | ~100 THz | Telco/CATV/long high-speed LANs |
Multi-mode step-index | @850 nm | @850 nm | |
200/240 microns | 4-6 dB/km | 50 MHz-km | Slow LANs & links |
POF (plastic optical fiber) | @650 nm | @650 nm | |
1 mm | ~1 dB/m | ~5 MHz-km | Short links & cars |
Workstations should be hardened, and users should be using limited access accounts whenever possible in accordance with the concept of “least privilege.” Workstations should minimally have:
While workstations are clearly the endpoint most will associate with endpoint attacks, the landscape is changing. Mobile devices such as smartphones, tablets, and personal devices are beginning to make up more and more of the average organization’s endpoints. With this additional diversity of devices, there becomes a requirement for the security architect to also increase the diversity and agility of an organization’s endpoint defenses. For mobile devices such as smartphones and tablets, security practitioners should consider:
In the age of convergence, voice technologies are an important area for the SSCP to consider when controlling network access. If voice technologies are not made part of the plan the SSCP uses to ensure that network communications enforce data confidentiality and integrity, then a defense in depth architecture cannot be fully realized. In that case, the SSCP is not exercising due care and due diligence with regard to the security of the network design and operation.
The Public Switched Telephone Network (PSTN) is a circuit-switched network that was originally designed for analog voice communication. When a person places a call, a dedicated circuit is created between the two phones. Although it appears to the callers that they are using a dedicated line, they are actually communicating through a complex network. As with all circuit-switched technology, the path through the network is established before communication between the two endpoints begins, and barring an unusual event, such as a network failure, the path remains constant during the call. Phones connect to the PSTN with copper wires to a central office (CO), which services an area of about 1 to 10 km. The central offices are connected to a hierarchy of tandem offices (for local calls) and toll offices (for toll calls), with each higher level of the hierarchy covering a larger area. Including the COs, the PSTN has five levels of offices. When both endpoints of a call are connected to the same CO, the traffic is switched within the CO. Otherwise, the call must be switched between a toll center and a tandem office. The greater the distance between the callers, the higher in the hierarchy the call is switched. To accommodate the high volume of traffic, toll centers communicate with each other over fiber-optic cables.
Although modems allowed remote access to networks from almost anywhere, they could be used as a portal into the network by an attacker. Using automated dialing software, the attacker could dial the entire range of phone numbers used by the company to identify modems. If the host to which the modem was attached had a weak password, then the attacker could easily gain access to the network. Worse yet, if voice and data shared the same network, then both voice and data could be compromised.
The best defense against this attack is to not leave unattended modems turned on and to keep an up-to-date inventory of all modems so that none become orphaned and left to operate without the knowledge and oversight of the security professional. All modems should require some form of authentication, at least a single factor, although the industry standard has moved to two-factor authentication for modem connections because of the risks these devices pose. If modems are necessary, then organizations must ensure that the passwords protecting the attached host are strong, preferably with the help of authentication mechanisms such as RADIUS, one-time passwords, etc.
The use of multimedia collaboration technologies is standard practice in the enterprise today. The need to understand and secure these technologies as part of a defense in depth approach to network security is part of the responsibilities that the SSCP has in their role as a security practitioner. Whether it is P2P, remote meeting technology, or instant messaging clients, the unrestricted usage of these applications and technologies can lead to vulnerabilities being exploited by bad actors to the detriment of the organization.
Peer-to-Peer (P2P) applications are often designed to open an uncontrolled channel through network boundaries (normally through tunneling). They therefore provide a way for dangerous content, such as botnets, spyware applications, and viruses, to enter an otherwise protected network. Because P2P networks can be established and managed using a series of multiple, overlapping master and slave nodes, they can be very difficult to fully detect and shut down. If one master node is detected and shut down, the “bot herder” who controls the P2P botnet can promote one of the slave nodes to master and use it as a redundant staging point, allowing botnet operations to continue unimpeded.
Several technologies and services exist that allow organizations and individuals to meet virtually. These applications are typically Web-based, and they install extensions in the browser or client software on the host system. These technologies also typically allow desktop sharing as a feature. This feature not only allows the viewing of a user’s desktop but also control of the system by a remote user.
Some organizations use dedicated equipment such as cameras, monitors, and meeting rooms to host and participate in remote meetings. These devices are often a combination of VoIP and in some cases POTS technology. They are also subject to the same risks including but not limited to:
Instant messaging systems can generally be categorized into three classes: peer-to-peer networks, brokered communication, and server-oriented networks. All of these classes support basic “chat” services on a one-to-one basis and frequently on a many-to-many basis. Most instant messaging applications offer additional services beyond their text messaging capability, for instance, screen sharing, remote control, exchange of files, and voice and video conversation. Some applications even allow command scripting. Instant messaging and chat are increasingly considered significant business applications used for office communications, customer support, and “presence” applications. Instant messaging capabilities are frequently deployed in a bundle with other IP-based services such as VoIP and video conferencing support. It should be noted that many of the risks mentioned here also apply to online games, which today offer instant communication between participants. For instance, multiplayer role-playing games, such as multiuser domains (MUDs), rely heavily on instant messaging that is similar in nature to Internet Relay Chat (IRC), even though it is technically based on a variant of the TELNET protocol.
In order to be able to properly secure network access, it is important to understand the traffic that is moving through the network at all times. The ability to monitor traffic is impacted by the types of traffic being created and used in a network. The SSCP should have an understanding of the protocols, applications, and services that are in use on the networks that they are charged with defending. For example, the use of Jabber should raise a red flag for a security professional, as Jabber may be a potential source of confidentiality concerns if not configured properly. This section will discuss these issues and concerns and help the SSCP to understand what they need to do to address them.
Jabber is an open instant messaging protocol for which a variety of open source clients exist. A number of commercial services based on Jabber exist. Jabber has been formalized as an Internet standard under the name Extensible Messaging and Presence Protocol (XMPP), as defined in RFC 3920 and RFC 3921.43
Jabber is a server-based application. Its servers are designed to interact with other instant messaging applications. As with IRC, anybody can host a Jabber server; the Jabber server network therefore cannot be considered trusted. Although Jabber traffic can be encrypted via TLS, this does not prevent eavesdropping on the part of server operators. However, Jabber does provide an API to encrypt the actual payload data.44 Jabber itself offers a variety of authentication methods, including cleartext and challenge/response authentication. To implement interoperability with other instant messaging systems from the server, however, the server has to cache the user’s credentials for the target network, enabling a number of attacks, mainly on behalf of the server operator but also by anyone able to break into a server.
Internet Relay Chat (IRC) is a client/server-based network. IRC is unencrypted, and therefore an easy target for sniffing attacks. The basic architecture of IRC, founded on trust among servers, enables special forms of denial-of-service attacks. For instance, a malicious user can hijack a channel while a server or group of servers has been disconnected from the rest (net split). IRC is also a common platform for social engineering attacks, aimed at inexperienced or technically unskilled users.
Control of HTTP tunneling can happen on the firewall or the proxy server. It should, however, be considered that in the case of peer-to-peer protocols, this would require a “deny by default” policy, and blocking instant messaging without providing a legitimate alternative is not likely to foster user acceptance and might give users incentive to utilize even more dangerous workarounds. It should be noted that inbound file transfers can also result in circumvention of policy or restrictions in place, in particular for the spreading of viruses. An effective countermeasure can be found in on-access antivirus scanning on the client, which should be enabled anyway.
Remote access technologies are an important area for the SSCP to focus on when building a complete solution for controlling network access. Balancing the needs of users who want to work remotely while retaining access to necessary corporate resources against the organization’s need to ensure the confidentiality, integrity, and availability of those same resources is difficult. Whether remote access technologies can satisfy both audiences hinges on the design, implementation, and operation of the technology solution being used. It is up to the SSCP to develop the knowledge required to ensure that remote access technologies such as VPNs, and tunneling protocols such as L2TP, are used properly to ensure availability of resources while also safeguarding their confidentiality and integrity.
A Virtual Private Network (VPN) is an encrypted tunnel between two hosts that allows them to securely communicate over an untrusted network; e.g., the Internet. Remote users employ VPNs to access their organization’s network, and depending on the VPN’s implementation, they may have most of the same resources available to them as if they were physically at the office. As an alternative to expensive dedicated point-to-point connections, organizations use gateway-to-gateway VPNs to securely transmit information over the Internet between sites or even with business partners.
A tunnel is a communications channel between two networks that is used to transport another network protocol by encapsulating its packets. Protocols such as Point-to-Point Tunneling Protocol (PPTP) and Layer 2 Tunneling Protocol (L2TP) are used to create these tunnels and to allow for the secure transmission of data between two endpoints, whether they are on the same or different networks. Authentication protocols such as Remote Authentication Dial-In User Service (RADIUS) are deployed alongside tunneling protocols to ensure that user authentication is handled properly within these solutions.
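The core idea of tunneling, wrapping one packet whole inside the payload of another, can be sketched as follows. The JSON packet layout is an invented stand-in for real protocol headers, not how PPTP or L2TP actually frame data:

```python
# Conceptual sketch of tunneling by encapsulation: the entire inner packet
# becomes opaque payload of an outer carrier packet, and the tunnel endpoint
# strips the outer header to restore the original. Packet fields are invented.
import json

def encapsulate(inner_packet: dict, tunnel_src: str, tunnel_dst: str) -> dict:
    # Intermediate routers see only the outer src/dst; the inner packet,
    # including its own addresses, rides along untouched as payload.
    return {"src": tunnel_src, "dst": tunnel_dst,
            "payload": json.dumps(inner_packet)}

def decapsulate(outer_packet: dict) -> dict:
    # The far tunnel endpoint recovers the inner packet byte for byte.
    return json.loads(outer_packet["payload"])


inner = {"src": "192.168.1.10", "dst": "192.168.2.20", "data": "hello"}
outer = encapsulate(inner, tunnel_src="203.0.113.1", tunnel_dst="198.51.100.2")
assert decapsulate(outer) == inner  # the original packet emerges unchanged
```

Note that encapsulation alone provides no confidentiality; that is why L2TP, for example, is typically paired with IPSec for encryption.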
Point-to-Point Tunneling Protocol (PPTP) is a VPN protocol that runs over other protocols. PPTP relies on generic routing encapsulation (GRE) to build the tunnel between the endpoints. After the user authenticates, typically with Microsoft Challenge Handshake Authentication Protocol version 2 (MSCHAPv2), a Point-to-Point Protocol (PPP) session creates a tunnel using GRE. A key weakness of PPTP is the fact that it derives its encryption key from the user’s password. This violates the cryptographic principle of randomness and can provide a basis for attacks. Password-based VPN authentication in general violates the recommendation to use two-factor authentication for remote access. The security architect and practitioner both need to consider known weaknesses, such as the issues identified with PPTP, when planning for the deployment and use of remote access technologies.
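The weakness of password-derived keys can be demonstrated directly. This is a hedged sketch: SHA-256 stands in for PPTP’s actual key-derivation steps, and the passwords are invented, but the offline-guessing attack it illustrates applies to any scheme that derives keys deterministically from a password:

```python
# Why password-derived keys are weak (illustration only; SHA-256 is a
# stand-in for PPTP's real derivation, and the password list is invented).
import hashlib

def derive_key(password: str) -> bytes:
    # Deterministic: the same password always yields the same key, so the
    # key has no more entropy than the password that produced it.
    return hashlib.sha256(password.encode()).digest()


victim_key = derive_key("winter2024")

# An attacker who captures material protected by the key can try candidate
# passwords offline until a derived key matches -- no interaction required.
for guess in ["password", "letmein", "winter2024"]:
    if derive_key(guess) == victim_key:
        print("key recovered from password guess:", guess)
```

A randomly generated session key would make this attack impossible, which is exactly the cryptographic principle of randomness that PPTP violates.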
Layer 2 Tunneling Protocol (L2TP) is a hybrid of Cisco’s Layer 2 Forwarding (L2F) and Microsoft’s PPTP. It allows callers over a serial line using PPP to connect over the Internet to a remote network. A dial-up user connects to his ISP’s L2TP access concentrator (LAC) with a PPP connection. The LAC encapsulates the PPP packets into L2TP and forwards them to the remote network’s layer 2 network server (LNS). At this point, the LNS authenticates the dial-up user. If authentication is successful, the dial-up user will have access to the remote network. LAC and LNS may authenticate each other with a shared secret, but as RFC 2661 states, the authentication is effective only while the tunnel between the LAC and LNS is being created.46 L2TP does not provide encryption and relies on other protocols, such as tunnel mode IPSec, for confidentiality.
Remote Authentication Dial-In User Service (RADIUS) is an authentication protocol used mainly in networked environments such as ISPs, and by similar services that require single sign-on for layer 3 network access, where scalable authentication must be combined with an acceptable degree of security. In addition, RADIUS provides support for accounting measurements such as connection time. RADIUS authentication is based on simple username/password credentials, which the client encrypts using a secret shared with the RADIUS server. Overall, RADIUS has the following issues:
Simple Network Management Protocol (SNMP) is designed to manage network infrastructure.
Table 6-14 provides an SNMP quick reference of ports and definitions.
Table 6-14: SNMP Quick Reference 49
Ports | 161/TCP, 161/UDP (agent); 162/TCP, 162/UDP (traps) |
Definition | RFC 1157 |
SNMP architecture consists of a management server (called the manager in SNMP terminology) and a client, called an agent, usually installed on network devices such as routers and switches. SNMP allows the manager to retrieve (“get”) the values of variables from the agent, as well as to set (“set”) variables. Such variables could be routing tables or performance-monitoring information.
Although SNMP has proven to be remarkably robust and scalable, it does have a number of clear weaknesses. Some of them are by design; others are subject to configuration parameters.
Probably the most easily exploited SNMP vulnerability is a brute-force attack on the default or easily guessed SNMP passwords, known as “community strings,” that are often used to manage a remote device. Given the scale of SNMP v1 and v2 deployment, combined with a lack of clear direction from the security professional regarding the risks of using SNMP without additional protection for the community string, this is a realistic scenario and a potentially severe, but easily mitigated, risk.
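The risk can be illustrated with a simulated agent. No real device or SNMP library is involved; the community strings, the in-process “agent,” and its behavior are assumptions invented for demonstration:

```python
# Sketch of why default SNMP community strings are dangerous. The "agent"
# here is simulated in-process; no network traffic or real device is used.
COMMON_DEFAULTS = ["public", "private", "admin", "manager"]

def make_agent(community: str):
    # Simulated SNMPv1/v2c agent: any request carrying the right community
    # string is honored; there is no other authentication at all.
    def handle_get(request_community: str, oid: str):
        if request_community != community:
            return None  # wrong community string: request is dropped
        return {"sysDescr.0": "core-router"}.get(oid)
    return handle_get


agent = make_agent("private")  # operator left a factory default in place

for candidate in COMMON_DEFAULTS:
    if agent(candidate, "sysDescr.0") is not None:
        print("community string guessed:", candidate)
        break
```

Because the entire search space is a handful of well-known defaults, the “brute force” here is trivial, which is why changing community strings (or moving to SNMPv3) is such an easy and effective mitigation.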
Through version 2, SNMP did not provide any meaningful authentication or transmission security. Authentication consists of an identifier called a community string, by which a manager identifies itself to an agent (this string is configured into the agent), sent along with each command. As a result, community strings can easily be intercepted, and commands can be sniffed and potentially forged. Because SNMP version 2 did not support any form of encryption, community strings were passed as cleartext. SNMP version 3 addresses this particular weakness with encryption.50
The services described in this section, TELNET and rlogin, are present in many UNIX environments and, when combined with NFS and NIS, provide the user with seamless remote working capabilities. They do, however, form a risky combination if not configured and managed properly. Because they are built on mutual trust, they can be misused to obtain access and to escalate privileges horizontally and vertically in an attack. Their authentication and transmission capabilities are insecure by design; they have therefore had to be retrofitted or replaced altogether, as TELNET and rlogin have been through the use of SSH.
TELNET is a command line protocol designed to give command line access to another host. Although implementations for Windows exist, TELNET’s original domain was the UNIX server world, and in fact, a TELNET server is standard equipment for any UNIX server. (Whether it should be enabled is another question entirely, but in small LAN environments, TELNET is still widely used.)
In its most generic form, remote log-in (rlogin) is a protocol used for granting remote access to a machine, normally a UNIX server. Similarly, remote shell (rsh) grants direct remote command execution, while rcp copies data from or to a remote machine. If an rlogin daemon (rlogind) is running on a machine, rlogin access can be granted in two ways, through the use of a central configuration file or through a user configuration. By the latter, a user may grant access that was not permitted by the system administrator. The same mechanism applies to rsh and remote copy (rcp), although they are relying on a different daemon (rshd). Authentication can be considered host/IP address based. Although rlogin grants access based on user ID, it is not verified; i.e., the ID a remote client claims to possess is taken for granted if the request comes from a trusted host. The rlogin protocol transmits data without encryption and is hence subject to eavesdropping and interception.
The rlogin protocol is of limited value—its main benefit can be considered its main drawback: remote access without supplying a password. It should only be used in trusted networks, if at all. A more secure replacement is available in the form of SSHv2 for rlogin, rsh, and rcp.
A screen scraper is a program that can extract data from output on a display intended for a human. Screen scrapers are used in a legitimate fashion when older technologies are unable to interface with modern ones. In a nefarious sense, this technology can also be used to capture images from a user’s computer such as PIN pad sequences at a banking website when implemented by a virus or malware.
Virtual terminal service is a tool frequently used for remote access to server resources. Virtual terminal services allow the desktop environment for a server to be exported to a remote workstation. This allows users at the remote workstation to execute desktop commands as though they were sitting at the server terminal interface in person. The advantage of terminal services such as those provided by Citrix, Microsoft, or public domain VNC services is that they allow for complex administrative commands to be executed using the native interface of the server rather than a command-line interface, which might be available through SSHv2 or telnet. Terminal services also allow for the authentication and authorization services integrated into the server to be leveraged for remote users, in addition to all the logging and auditing features of the server as well.
Common issues such as visitor control, physical security, and network control are almost impossible to address with teleworkers. Strong VPN connections between the teleworker and the organization need to be established, and full device encryption should be the norm for protecting sensitive information. If the user works in public places or a home office, the following should also be considered:
Data can be transmitted using analog communication or digital communication.
Analog signals use electronic properties, such as frequency and amplitude, to represent information. Analog recordings are a classic example: A person speaks into a microphone, which converts the vibration from acoustical energy to an electrical equivalent. The louder the person speaks, the greater the electrical signal’s amplitude. Likewise, the higher the pitch of the person’s voice, the higher the frequency of the electrical signal. Analog signals are transmitted on wires, such as twisted pair, or with a wireless device.
Whereas analog communication uses complex waveforms to represent information, digital communication uses two electronic states (on and off). By convention, 1 is assigned to the on state and 0 to off. Electrical signals that consist of these two states can be transmitted over a cable, converted to light and transmitted over fiber optics, and broadcast with a wireless device. In all of the above media, the signal would be a series of one of two states: on and off. It is easier to ensure the integrity of digital communication because the two states of the signal are sufficiently distinct. When a device receives a digital transmission, it can determine which digits are 0s and which are 1s (if it cannot, then the device knows the signal is erroneous or corrupted). On the other hand, the complex waveforms of analog communication make ensuring integrity very difficult.
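The robustness of digital signaling can be illustrated with a short sketch. A receiver only has to decide whether each sample is closer to the "off" level or the "on" level, so moderate noise does not change the decision. The voltage levels and noise amplitude below are illustrative, not drawn from the text.

```python
# Sketch: why digital signals tolerate noise better than analog.
# The receiver compares each noisy sample against a threshold halfway
# between the two signal states; small noise cannot flip the decision.
import random

ON, OFF = 5.0, 0.0                 # illustrative voltage levels
THRESHOLD = (ON + OFF) / 2         # decision point between the two states

def transmit(bits, noise=0.5):
    """Convert bits to voltages and add random line noise."""
    return [(ON if b else OFF) + random.uniform(-noise, noise) for b in bits]

def receive(samples):
    """Recover bits by comparing each sample to the threshold."""
    return [1 if s > THRESHOLD else 0 for s in samples]

original = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = receive(transmit(original))
print(recovered == original)  # True: noise stays well below the threshold gap
```

An analog receiver has no such threshold to fall back on; any noise added to the waveform becomes a permanent part of the recovered signal.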
In order for the SSCP to be able to manage LAN-based security concerns effectively within the enterprise, they must understand the concept of separation between the data and control planes of a network. The control plane is where routing is handled, while the data plane is where commands are executed based on input from the control plane. In addition, the use of technologies such as logical segmentation of the network through the use of one or more VLANs is also important for the SSCP to be familiar with. The implementation of security solutions such as MACsec and secure device management are also important pieces of the LAN-based security puzzle that need to be considered carefully.
The control plane is where forwarding/routing decisions are made. Switches and routers have to figure out where to send frames (L2) and packets (L3). The switches and routers that make up the network operate as discrete components, but because they form a network, they must exchange information such as host reachability and status with their neighbors. This is done in the control plane using protocols and functions such as spanning tree, OSPF, BGP, and QoS enforcement.
The data plane is where the action takes place. It includes things like the forwarding tables, routing tables, ARP tables, queues, tagging and re-tagging, etc. The data plane carries out the commands of the control plane. Figure 6-15 shows the control, data, and management plane as they would appear in a logical design diagram.
For example, in the control plane, you set up IP networking and routing (routing protocols, route preferences, static routes, etc.) and connect hosts and switches/routers together. Each switch/router figures out what is directly connected to it and then tells its neighbors what it can reach and how it can reach it. The switches/routers also learn how to reach hosts and networks not directly attached to them. Once all of the routers/switches have a coherent picture, shared via the control plane, the network is converged.
In the data plane, the routers/switches use what the control plane built to handle incoming and outgoing frames and packets. Some get forwarded to another router, for example. Some may get queued up when a link is congested. Some may get dropped if congestion gets bad enough.
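The split described above can be sketched in a few lines: the control plane's output is a forwarding table, and the data plane consults that table, using a longest-prefix match, for every packet. The prefixes and next-hop names below are illustrative assumptions, not drawn from the text.

```python
# Sketch of the control/data plane split. The control plane (routing
# protocols, static routes) produces the forwarding table; the data
# plane does a longest-prefix-match lookup per packet.
import ipaddress

# Control plane output: prefix -> next hop (illustrative values).
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "rtr-core",
    ipaddress.ip_network("10.1.0.0/16"): "rtr-branch",
    ipaddress.ip_network("0.0.0.0/0"): "rtr-internet",  # default route
}

def forward(dst_ip):
    """Data plane: pick the most specific (longest) matching prefix."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in forwarding_table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(forward("10.1.2.3"))   # rtr-branch (the /16 beats the /8)
print(forward("8.8.8.8"))    # rtr-internet (default route)
```

Note that the lookup itself never consults a routing protocol; that work was already done in the control plane when the table was built.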
In simple terms, a VLAN is a set of workstations within a LAN that can communicate with each other as though they were on a single, isolated LAN. What does it mean to say that they “communicate with each other as though they were on a single, isolated LAN”?
Among other things, it means that:
The basic reason for splitting a network into VLANs is to reduce congestion on a large LAN. There are several advantages to using VLANs:
The act of creating a VLAN on a switch involves defining a set of ports and defining the criteria for VLAN membership for workstations connected to those ports. By far the most common VLAN membership criterion is port-based. With port-based VLANs, the ports of a switch are simply assigned to VLANs, with no extra criteria. All devices connected to a given port automatically become members of the VLAN to which that port was assigned. In effect, this just divides a switch up into a set of independent sub-switches.
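Port-based membership, as described above, can be sketched as a simple port-to-VLAN table: a frame arriving on one port may only be delivered to other ports assigned to the same VLAN. The port numbers and VLAN IDs below are illustrative.

```python
# Sketch of port-based VLAN membership on a switch: each physical port
# is assigned to a VLAN, and frames are only eligible for delivery to
# other ports in the same VLAN (illustrative port/VLAN assignments).
port_vlan = {1: 10, 2: 10, 3: 20, 4: 20, 5: 10}  # port -> VLAN ID

def egress_ports(ingress_port):
    """Ports eligible to receive a frame arriving on ingress_port."""
    vlan = port_vlan[ingress_port]
    return sorted(p for p, v in port_vlan.items()
                  if v == vlan and p != ingress_port)

print(egress_ports(1))  # [2, 5] -- VLAN 10 only; ports 3 and 4 are isolated
print(egress_ports(3))  # [4]    -- VLAN 20 is a separate broadcast domain
```

In effect, the one physical switch behaves as two independent sub-switches, one per VLAN.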
It is important to remember that VLANs do not guarantee a network’s security. At first glance, it may seem that traffic cannot be intercepted because communication within a VLAN is restricted to member devices. However, there are attacks that allow a malicious user to see traffic from other VLANs (so-called VLAN hopping). Therefore, a VLAN can be created so that engineers can efficiently share confidential documents, but the VLAN does not significantly protect the documents from unauthorized access. The following lists the most common attacks that could be launched against VLANs at the data link layer:
While many of these attacks are old, and may not be effective unless certain circumstances or misconfiguration issues are allowed to go unchecked within the network, the security practitioner needs to be aware of these attack vectors and ensure that they understand how they operate and what appropriate countermeasures are available.
Media Access Control Security (MACsec) provides point-to-point security on Ethernet links between directly connected nodes. MACsec identifies and prevents most threats, including denial of service, intrusion, man-in-the-middle, masquerading, passive wiretapping, and playback attacks. MACsec is standardized in IEEE 802.1AE. When combined with other security protocols such as IP Security (IPsec) and Secure Sockets Layer (SSL), MACsec can provide end-to-end network security.
MACsec uses secured point-to-point Ethernet links. Matching security keys are exchanged and verified between the interfaces at each end of the link. Ports, MAC addresses, and other user-configurable parameters are similarly verified. Data integrity checks are used to secure and verify all data that traverses the link. If the data integrity check detects anything irregular about the traffic, the traffic is dropped. MACsec can also be configured to encrypt all data on the Ethernet link to prevent it from being viewed by anyone who might be monitoring traffic on the link.53
MACsec is configured using connectivity associations. You can configure MACsec using static secure association key (SAK) security mode or static connectivity association key (CAK) security mode. Both modes use secure channels that send and receive data on the MACsec-enabled link. When you use SAK security mode, you configure the secure channels, which also transmit the SAKs across the link to enable MACsec. Typically, you configure two secure channels, one for inbound traffic and the other for outbound traffic. When you use CAK security mode, you create and configure the connectivity association. Your secure channels are automatically created and configured in the process of configuring the connectivity association. The secure channels cannot be separately configured by users.54
Configuration Management/Monitoring (CM) is the application of sound program practices to establish and maintain consistency of a product’s or system’s attributes with its requirements and evolving technical baseline over its life. It involves interaction among systems engineering, hardware/software engineering, specialty engineering, logistics, contracting, and production in an integrated product team environment. A configuration management/monitoring process guides the system products, processes, and related documentation, and facilitates the development of open systems. Configuration management/monitoring efforts result in a complete audit trail of plans, decisions, and design modifications.
Automated CM tools can help the security practitioner to:
Secure Shell’s (SSH) services include remote log-on, file transfer, and command execution. It also supports port forwarding, which redirects other protocols through an encrypted SSH tunnel. Many users protect less secure traffic of protocols, such as X Windows and virtual network computing (VNC), by forwarding them through an SSH tunnel. The SSH tunnel protects the integrity of communication, preventing session hijacking and other man-in-the-middle attacks.
There are two incompatible versions of the protocol, SSH-1 and SSH-2, though many servers support both. SSH-2 has improved integrity checks (SSH-1 is vulnerable to an insertion attack due to weak CRC-32 integrity checking) and supports local extensions and additional types of digital certificates such as OpenPGP. SSH was originally designed for UNIX, but there are now implementations for other operating systems, including Windows, Macintosh, and OpenVMS.
According to Nick Sullivan at Cloudflare, “The point of DNSSEC is to provide a way for DNS records to be trusted by whoever receives them. The key innovation of DNSSEC is the use of public key cryptography to ensure that DNS records are authentic. DNSSEC not only allows a DNS server to prove the authenticity of the records it returns. It also allows the assertion of “non-existence of records.”
The DNSSEC trust chain is a sequence of records that identify either a public key or a signature of a set of resource records. The root of this chain of trust is the root key, which is maintained and managed by the operators of the DNS root. DNSSEC is defined by the IETF in RFCs 4033, 4034, and 4035.
There are several important new record types:
A DNSKEY record is a cryptographic public key; DNSKEYs can be classified into two roles, which can be handled by separate keys or a single key.
For a given domain name and question, there are a set of answers. For example, if you ask for the A record for ISC2.org, you get a set of A records as the answer:
ISC2.org. IN A 208.78.71.5
ISC2.org. IN A 208.78.70.5
ISC2.org. IN A 208.78.72.5
ISC2.org. IN A 208.78.73.5
The set of all records of a given type for a domain is called an RRset. A Resource Record SIGnature (RRSIG) is essentially a digital signature for an RRset. Each RRSIG is associated with a DNSKEY. The RRset of DNSKEYs is signed with the key signing key (KSK). All others are signed with the zone signing key (ZSK). Trust is conferred from the DNSKEY to the record through the RRSIG: If you trust a DNSKEY, then you can trust the records that are correctly signed by that key.
However, the domain’s ZSK is signed by itself, making it difficult to trust. The way around this is to walk the domain up to the next/parent zone. To verify that the DNSKEY for ISC2.org is valid, you have to ask the .org authoritative server. This is where the DS record comes into play: It acts as a bridge of trust to the parent level of the DNS.
The DS record is a hash of a DNSKEY. The .org zone stores this record for each zone that has supplied DNSSEC keying information. The DS record is part of an RRset in the zone for .org and therefore has an associated RRSIG. This time, the RRset is signed by the .org ZSK. The .org DNSKEY RRset is signed by the .org KSK.
The ultimate root of trust is the KSK DNSKEY for the DNS root. This key is universally known and published.
Here is the DNSKEY root KSK that was published in August 2010 and will be used until sometime in 2015 or 2016 (encoded in base64):
AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcC jFFVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJR kxoXbfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efu cp2gaDX6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA 6G3LQpzW5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQd XfZ57relSQageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhY B4N7knNnulqQxA+Uk1ihz0=
By following the chain of DNSKEY, DS, and RRSIG records to the root, any record can be trusted.”55
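The chain of trust described in the quotation can be sketched with a toy model. Real DNSSEC uses public-key signatures (RSA or ECDSA) and separate KSK/ZSK roles; in this simplified sketch a keyed hash stands in for an RRSIG and a plain hash stands in for the DS record, purely to show how trust flows from a parent zone's DS record down to a signed RRset. The key material and record content are illustrative.

```python
# Toy sketch of the DNSSEC trust chain: parent DS -> child DNSKEY ->
# RRSIG -> RRset. NOT real cryptography: a keyed SHA-256 hash stands
# in for a public-key signature, and KSK/ZSK roles are collapsed.
import hashlib

def sign(key, data):          # stand-in for producing an RRSIG
    return hashlib.sha256(key + data).hexdigest()

def ds_record(dnskey):        # a DS record is a hash of a DNSKEY
    return hashlib.sha256(dnskey).hexdigest()

# Illustrative key and record bytes (not real keys or zone data).
org_key = b"illustrative-org-zone-key"
rrset = b"ISC2.org. IN A 208.78.71.5"

# The parent (.org) publishes a DS for the child key and the zone
# publishes an RRSIG over the RRset.
published_ds = ds_record(org_key)
rrsig = sign(org_key, rrset)

# A validating resolver checks both links in the chain:
assert ds_record(org_key) == published_ds   # DNSKEY matches the parent's DS
assert sign(org_key, rrset) == rrsig        # RRset signature verifies
print("chain of trust validates")
```

In real DNSSEC this check repeats at every level, terminating at the universally published root KSK rather than at a locally trusted key.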
Key concepts for the security professional to consider regarding network-based security devices include:
It is a common misperception that network security is the endpoint for all other security measures; i.e., that firewalls alone will protect an organization. This is a flawed perception; perimeter defense (defending the edges of the network) is merely part of the overall solution set, or enterprise security architecture, that the security professional needs to ensure is in place in the enterprise. Perimeter defense is part of a wider concept known as defense in depth, which simply holds that security must be a multi-layer effort including the edges but also the hosts, applications, network elements (routers, switches, DHCP, DNS, wireless access points), people, and operational processes. Some proactive techniques can be undertaken independently by organizations with the willingness and resources to do so. Others, such as upstream intercession (assuming an external source of the threat such as DDoS, spam/phishing, or botnet attacks), can be accomplished fairly easily and affordably through cooperation with telecommunications suppliers and Internet service providers (ISPs). Finally, the most effective proactive network defense is the ability to disable attack tools before they can be deployed and applied against you.
In the context of telecommunications and network security, confidentiality is the property of nondisclosure to unauthorized parties. Attacks against confidentiality are by far the most prevalent today because information can be sold or exploited for profit in a huge variety of (mostly criminal) ways. The network, as the carrier of almost all digital information within the enterprise, provides an attractive target to bypass access control measures on the assets using the network and access information while it is in transit on the wire. Among the information that can be acquired is not just the payload information but also credentials, such as passwords. Conversely, an attacker might not even be interested in the information transmitted but simply in the fact that the communication has occurred. An overarching class of attacks carried out against confidentiality is known as “eavesdropping.”
To access information from the network, an attacker must have access to the network itself in the first place. An eavesdropping computer can be a legitimate client on the network or an unauthorized one. It is not necessary for the eavesdropper to become a part of the network (for instance, having an IP address); it is often far more advantageous for an attacker to remain invisible (and un-addressable) on the network. This is particularly easy in wireless LANs, where no physical connection is necessary. Countermeasures to eavesdropping include encryption of network traffic on a network or application level, traffic padding to prevent identification of times when communication happens, rerouting of information to anonymize its origins and potentially split different parts of a message, and mandating trusted routes for data such that information is only traversing trusted network domains.
In the context of telecommunications and network security, integrity is the property associated with protection against corruption or change (intentional or accidental). A network needs to support and protect the integrity of its traffic. In many ways, the provisions taken for protection against interception to protect confidentiality will also protect the integrity of a message. Attacks against integrity are often an interim step to compromising confidentiality or availability, as opposed to being the overall objective of the attack. Although the modification of messages will often happen at the higher network layers (i.e., within applications), networks can be set up to provide robustness or resilience against interception and change of a message (man-in-the-middle attacks) or replay attacks. Ways to accomplish this can be based on encryption or checksums on messages, as well as on access control measures for clients that would prevent an attacker from gaining the necessary access to send a modified message into the network in the first place. Conversely, many protocols, such as SMTP, HTTP, or even DNS, do not provide any degree of authentication. Consequently, it becomes relatively easy for the attacker to inject messages with fake sender information into a network from the outside through an existing gateway. The fact that no application can rely on the security or authenticity of underlying protocols has become a common design factor in networking.
In the context of telecommunications and network security, availability is the property of a network service related to its uptime, speed, and latency. Availability of the service is commonly the most obvious business requirement especially with highly converged networks, where multiple assets (data, voice, physical security) are riding on top of the same network. For this very reason, network availability has also become a prime target for attackers and a key business risk that security professionals need to be prepared to address. While a variety of availability threats and risks are addressed in this domain, an overarching class of attack against availability is known as denial of service.
Attacks on the transport layer of the OSI model (layer 4) seek to manipulate, disclose, or prevent delivery of the payload as a whole. This can, for instance, happen by reading the payload (as would happen in a sniffer attack) or changing it (which could happen in a man-in-the-middle attack). While disruptions of service can be executed at other layers as well, the transport layer has become a common attack ground via ICMP.
Domain names are subject to trademark risks, related to a risk of temporary unavailability or permanent loss of an established domain name. For the business in question, the consequences can be equivalent to the loss of its whole Internet presence in an IT-related disaster. Businesses should therefore put in place contingency plans if they are concerned with trademark disputes of any kind over a domain name used as their main Web and email address. Such contingency plans might include setting up a second domain unrelated to the trademark in question (based, for instance, on the trademark of a parent company) that can be advertised on short notice, if necessary. Cyber-squatting and the illegitimate use of similar domains, containing common misspellings or representing the same second-level domain under a different top-level domain, is occurring more frequently as the range of domains continues to expand. The only way to protect a business from this kind of fraud is the registration of the most prominent adjacent domains or by means of trademark litigation. A residual risk will always remain, relating not only to public misrepresentation but also to potential loss or disclosure of email.
An open mail relay server is an SMTP service that allows inbound SMTP connections for domains it does not serve; i.e., for which it does not possess a DNS MX record. An open mail relay is generally considered a sign of bad system administration. Open mail relays are a principal tool for the distribution of spam because they allow an attacker to hide their identity. A number of blacklists of open mail relay servers exist; a legitimate mail server can refuse email from any host on such a list because it has a high likelihood of being spam. Although using blacklists as one indicator in spam filtering has its merits, it is risky to use them as an exclusive indicator. Generally, they are run by private organizations and individuals according to their own rules; they can change their policies on a whim, they can vanish overnight for any reason, and they can rarely be held accountable for the way they operate their lists.
By far the most common way of suppressing spam is email filtering on an email gateway. A large variety of commercial products exists, based on a variety of algorithms. Filtering based on simple keywords can be regarded as technically obsolete because this method is prone to generating false-positives, and spammers are able to easily work around this type of filter simply by manipulating the content and key words in their messages. More sophisticated filters, based, for instance, upon statistical analysis or analysis of email traffic patterns, have come to market. Filtering can happen on an email server (mail transfer agent [MTA]) or in the client (mail user agent [MUA]). The administrator of a mail server can configure it to limit or slow down an excessive number of connections (tar pit). A mail server can be configured to honor blacklists of spam sources either as a direct blocking list or as one of several indicators for spam.
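The statistical filtering mentioned above can be illustrated with a deliberately crude sketch: each word carries a learned probability of appearing in spam, and the message score is the average over its words. The word probabilities below are illustrative assumptions; a real filter learns them from large corpora of labeled mail and combines them with far more sophisticated statistics.

```python
# Minimal sketch of word-frequency (statistical) spam scoring.
# Probabilities are illustrative, not learned from real mail.
spam_prob = {"winner": 0.95, "free": 0.90, "meeting": 0.05, "invoice": 0.20}

def spam_score(message, default=0.4):
    """Average per-word spam probability; unknown words get a default."""
    words = message.lower().split()
    probs = [spam_prob.get(w, default) for w in words]
    return sum(probs) / len(probs)

print(spam_score("winner free free"))  # high score -> likely spam
print(spam_score("meeting invoice"))   # low score  -> likely legitimate
```

The contrast with keyword blocking is the point: no single word decides the outcome, so a spammer cannot defeat the filter simply by misspelling one trigger word.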
Firewalls and proxies are both considered to be gateway protection devices. They are deployed by the SSCP to help to support a defense in depth architecture design for the enterprise. A firewall is used to examine the flow of traffic entering and exiting a network, and to match that traffic flow against a predetermined set of rules that are used to determine whether that traffic should be allowed or denied. A proxy device is used to manage the exchange of traffic between networks and can also be used to filter traffic, examining it for layer 7 protocols such as HTTP or FTP.
Firewalls are devices that enforce administrative security policies by filtering incoming traffic based on a set of rules. Often firewalls are thought of as protectors of an Internet gateway only. While a firewall should always be placed at Internet gateways, there are also internal network considerations and conditions where a firewall would be employed, such as network zoning. Additionally, firewalls are also threat management appliances with a variety of other security services embedded, such as proxy services and intrusion prevention services, which seek to monitor and alert proactively at the network perimeter.
Firewalls will not be effective right out of the box. Firewall rules must be defined correctly in order to not inadvertently grant unauthorized access. Like all hosts on a network, administrators must install patches to the firewall and disable all unnecessary services. Also, firewalls offer limited protection against vulnerabilities caused by applications flaws in server software on other hosts. For example, a firewall will not prevent an attacker from manipulating a database to disclose confidential information.
Firewalls filter traffic based on a rule set. Each rule instructs the firewall to block or forward a packet based on one or more conditions. For each incoming packet, the firewall will look through its rule set for a rule whose conditions apply to that packet, and block or forward the packet as specified in that rule. Below are two important conditions used to determine if a packet should be filtered.
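The first-match evaluation described above can be sketched as follows: each rule carries match conditions plus an action, the first rule whose conditions all apply decides the packet's fate, and a default action covers everything else. The rules, addresses, and field names are illustrative, not taken from any real firewall product.

```python
# Sketch of first-match firewall rule evaluation with a default-deny
# fallback. Rules and packet fields are illustrative.
RULES = [
    {"proto": "tcp", "dst_port": 79, "action": "block"},   # Finger
    {"proto": "tcp", "dst_port": 443, "action": "forward"},
    {"src_ip": "203.0.113.7", "action": "block"},          # known-bad host
]
DEFAULT_ACTION = "block"  # deny anything not explicitly permitted

def filter_packet(packet):
    """Return the action of the first rule whose conditions all match."""
    for rule in RULES:
        conditions = {k: v for k, v in rule.items() if k != "action"}
        if all(packet.get(k) == v for k, v in conditions.items()):
            return rule["action"]
    return DEFAULT_ACTION

print(filter_packet({"proto": "tcp", "dst_port": 443}))  # forward
print(filter_packet({"proto": "tcp", "dst_port": 79}))   # block
print(filter_packet({"proto": "udp", "dst_port": 53}))   # block (default)
```

Rule order matters in a first-match scheme: placing a broad permit rule above a narrow block rule silently disables the block.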
Firewalls can change the source address of each outgoing (from trusted to untrusted network) packet to a different address. This has several applications, most notably to allow hosts with RFC 1918 addresses access to the Internet by changing their non-routable address to one that is routable on the Internet.56 A non-routable address is one that will not be forwarded by an Internet router, and therefore remote attacks using non-routable internal addresses cannot be launched over the open Internet. Anonymity is another reason to use NAT. Many organizations do not want to advertise their IP addresses to an untrusted host and thus unnecessarily give information about the network. They would rather hide the entire network behind translated addresses. NAT also greatly extends the capabilities of organizations to continue using IPv4 address spaces.
An extension to NAT is to translate all addresses to one routable IP address and translate the source port number in the packet to a unique value. The port translation allows the firewall to keep track of multiple sessions that are using PAT.
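The port translation idea can be sketched as a session table: every internal (address, port) pair is mapped to the firewall's single routable address plus a unique source port, and return traffic is matched back to the right internal host by looking up that port. The addresses and port pool below are illustrative.

```python
# Sketch of port address translation (PAT): many internal sessions
# share one routable address, distinguished by translated source port.
# Addresses and the port pool are illustrative.
import itertools

PUBLIC_IP = "198.51.100.1"
_next_port = itertools.count(40000)   # pool of unique translated ports
nat_table = {}                        # (inside_ip, inside_port) -> public port

def translate_out(inside_ip, inside_port):
    """Rewrite an outgoing packet's source to the shared public address."""
    key = (inside_ip, inside_port)
    if key not in nat_table:
        nat_table[key] = next(_next_port)
    return PUBLIC_IP, nat_table[key]

def translate_in(public_port):
    """Match a reply back to the original internal session, if any."""
    for (ip, port), p in nat_table.items():
        if p == public_port:
            return ip, port
    return None  # no session state: the firewall drops the packet

print(translate_out("192.168.1.10", 51000))  # ('198.51.100.1', 40000)
print(translate_in(40000))                   # ('192.168.1.10', 51000)
```

The table is also why unsolicited inbound traffic fails: with no matching session entry, `translate_in` has nowhere to deliver the packet.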
When a firewall uses static packet filtering, it examines each packet without regard to the packet’s context in a session. Packets are examined against static criteria, for example, blocking all packets with a port number of 79 (Finger). Because of its simplicity, static packet filtering requires very little overhead, but it has a significant disadvantage. Static rules cannot be temporarily changed by the firewall to accommodate legitimate traffic. If a protocol requires a port to be temporarily opened, administrators have to choose between permanently opening the port and disallowing the protocol.
Stateful inspection examines each packet in the context of a session, which allows it to make dynamic adjustments to the rules to accommodate legitimate traffic and block malicious traffic that would appear benign to a static filter. Consider FTP. A user connects to an FTP server on TCP port 21 and then tells the FTP server on which port to transfer files. The port can be any TCP port above 1023. So, if the FTP client tells the server to transfer files on TCP port 1067, the server will attempt to open a connection to the client on that port. A stateful inspection firewall would watch the interaction between the two hosts, and even though the required connection is not permitted in the rule set, it would allow the connection to occur because it is part of FTP.
Static packet filtering, in contrast, would block the FTP server’s attempt to connect to the client on TCP port 1067 unless a static rule was already in place. In fact, because the client could instruct the FTP server to transfer files on any port above 1023, a static rule would have to be in place to permit access to the specified port.
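The FTP scenario above can be sketched as state tracking: when the firewall observes the control channel negotiate a data port, it records a temporary expectation, and only a matching inbound connection is later permitted. The hosts and ports are illustrative.

```python
# Sketch of stateful inspection for FTP: the firewall permits the
# server's inbound data connection only because it watched the control
# channel set it up. Hosts and ports are illustrative.
expected_connections = set()  # (server_ip, client_ip, client_port)

def inspect_control_channel(server_ip, client_ip, data_port):
    """Called when the FTP control session negotiates a data port."""
    expected_connections.add((server_ip, client_ip, data_port))

def allow_inbound(src_ip, dst_ip, dst_port):
    """Permit the connection only if the control channel negotiated it."""
    return (src_ip, dst_ip, dst_port) in expected_connections

inspect_control_channel("203.0.113.5", "192.168.1.20", 1067)
print(allow_inbound("203.0.113.5", "192.168.1.20", 1067))  # True
print(allow_inbound("203.0.113.5", "192.168.1.20", 1070))  # False
```

A static filter has no equivalent of `expected_connections`, which is why it must either permanently open the high ports or break the protocol.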
Following the principle of security in depth, personal firewalls should be installed on workstations, which protect the user from all hosts on the network. It is critical for home users with DSL or cable modem access to the Internet to have a personal firewall installed on every PC, especially if they do not have a firewall protecting their network.
A proxy firewall mediates communications between untrusted endpoints (servers/hosts/clients) and trusted endpoints (servers/hosts/clients). From an internal perspective, a proxy may forward traffic from known, internal client machines to untrusted hosts on the Internet, creating the illusion for the untrusted host that the traffic originated from the proxy firewall, thus hiding the trusted internal client from potential attackers. To the user, it appears that he or she is communicating directly with the untrusted server. Proxy servers are often placed at Internet gateways to hide the internal network behind one IP address and to prevent direct communication between internal and external hosts.
A circuit-level proxy creates a conduit through which a trusted host can communicate with an untrusted one. This type of proxy does not inspect any of the traffic that it forwards, which adds very little overhead to the communication between the user and untrusted server. The lack of application awareness also allows circuit-level proxies to forward any traffic to any TCP and UDP port. The disadvantage is that traffic will not be analyzed for malicious content.
An application-level proxy relays the traffic from a trusted endpoint running a specific application to an untrusted endpoint. The most significant advantage of application-level proxies is that they analyze the traffic that they forward for protocol manipulation and various sorts of common attacks such as buffer overflows. Application-level proxies add overhead to using the application because they scrutinize the traffic that they forward.
Web proxy servers are a very popular example of application-level proxies. Many organizations place one at their Internet gateway and configure their users’ web browsers to use the web proxy whenever they browse an external web server (other controls are implemented to prevent users from bypassing the proxy server). The proxies typically include required user authentication, inspection of URLs to ensure that users do not browse inappropriate sites, logging, and caching of popular webpages. In fact, web proxies for internal users are one of the prime manners in which acceptable usage policies can be enforced because external sites can be blacklisted by administrators and logs of user traffic kept for later analysis if required for evidentiary purposes.
It is important for the SSCP to understand the difference between intrusion detection and intrusion prevention. Intrusion detection technologies are considered “passive,” only recording activity and potentially alerting an administrator that something unusual is occurring. Intrusion prevention technologies are considered “active,” meaning that they will do everything that an intrusion detection device does, but they will also have the capacity to react to the unusual or suspicious behavior in preprogrammed ways. It is important for the SSCP to carefully consider which technology or mix of technologies will be the best for their organization’s security needs.
Port scanning is the act of probing for TCP services on a machine. It is performed by establishing the initial handshake for a connection. Although not in itself an attack, it allows an attacker to test for the presence of potentially vulnerable services on a target system. Port scanning can also be used for fingerprinting an operating system by evaluating its response characteristics, such as timing of a response, and details of the handshake. Protection from port scanning includes restriction of network connections; e.g., by means of a host-based or network-based firewall or by defining a list of valid source addresses on an application level.
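A connect() probe, the simplest form of the scan described above, can be sketched in a few lines. This should only ever be run against hosts you own or are explicitly authorized to test; the example below probes a throwaway listener on the local machine.

```python
# Minimal TCP connect() probe of the kind a port scanner performs.
# Only use against hosts you are authorized to test; this example
# probes a temporary listener on localhost.
import socket

def port_is_open(host, port, timeout=1.0):
    """Return True if a TCP handshake to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means connected

# Stand up a local listener purely for illustration.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # OS assigns a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_is_open("127.0.0.1", port))   # True: something is listening
listener.close()
```

A full scanner simply loops this probe over a port range; stealth variants (FIN, NULL, XMAS) avoid completing, or even starting, the handshake.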
In FIN scanning, a stealth scanning method, a request to close a connection is sent to the target machine. If no application is listening on that port, a TCP RST or an ICMP packet will be sent. This scan commonly works only on UNIX machines; Windows machines deviate from RFC 793 by always responding to a FIN packet with an RST, which renders recognition of open ports impossible and makes them not susceptible to the scan.57 Firewalls that put a system into stealth mode (i.e., suppressing system responses to FIN packets) are available. In NULL scanning, no flags are set on the initiating TCP packet; in XMAS scanning, all TCP flags are set (or “lit,” as in a Christmas tree). Otherwise, these scans work in the same manner as the FIN scan.
To detect and correct loss of data packets, TCP attaches a sequenced number to each data packet that is transmitted. If a transmission is not reported back as successful, a packet will be retransmitted. By eavesdropping on traffic, these sequence numbers can be predicted and fake packets with the correct sequence number can be introduced into the data stream by a third party. This class of attacks can, for instance, be used for session hijacking. Protection mechanisms against TCP sequence number attacks have been proposed based on better randomization of sequence numbers as described in RFC 1948.58
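The RFC 1948 defense mentioned above can be sketched as follows: the initial sequence number (ISN) is derived from the connection 4-tuple plus a secret, added to a clock, so an off-path attacker cannot predict it. The secret value, hash choice, and clock rate below are illustrative assumptions in the spirit of the RFC, not a faithful implementation.

```python
# Sketch of an RFC 1948-style randomized initial sequence number:
# ISN = clock + hash(connection 4-tuple, secret). The secret and
# clock granularity here are illustrative.
import hashlib
import time

SECRET = b"per-boot-random-secret"  # assumption: generated fresh at boot

def initial_sequence_number(src_ip, src_port, dst_ip, dst_port):
    tuple_bytes = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(SECRET + tuple_bytes).digest()
    offset = int.from_bytes(digest[:4], "big")       # per-connection offset
    clock = int(time.time() * 250000) & 0xFFFFFFFF   # RFC 793-style 4 µs tick
    return (clock + offset) & 0xFFFFFFFF             # 32-bit sequence space

isn = initial_sequence_number("192.0.2.1", 51000, "198.51.100.2", 80)
print(f"{isn:#010x}")
```

Without knowledge of the secret, an attacker who observes ISNs for one connection learns nothing useful about the ISN of any other connection.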
Security attacks have been described formally as attack tree models. Attack trees are based upon the goal of the attacker, the risks to the defender, and the vulnerabilities of the defense systems. They are a specialized form of decision tree that can be used to formally evaluate system security. The following methodology describes not the attack tree itself (which is a defender’s view) but the steps that an attacker would undergo to successfully traverse the tree toward his or her target.
Host-based or network-based intrusion detection systems can detect unauthorized changes, which could indicate access from an attacker or backdoors into the system. However, it is important to keep in mind that because an IDS relies on constant external input in the form of attack signature updates to remain effective, these systems are only as "good" as the quality and timeliness of the updates being applied to them. The output from the host-based IDS (such as regular snapshots or file hashes) needs to be stored in such a way that it cannot be overwritten from the source system in order to ensure integrity.
Finally, yet importantly, the attacker may look to remotely maintain control of the system to regain access at a later time or to use it for other purposes, such as sending spam or as a stepping stone for other attacks. To such an end, the attacker could avail himself of prefabricated “rootkits” to sustain and maintain control over time. Such a rootkit will not only allow access but also hide its own existence from traditional cursory inspection methods.
Bots and botnets are responsible for most of the activity leading to unauthorized, remote control of compromised systems today. Machines that have become infected are considered to be bots: essentially, zombies controlled by shadowy entities from the dark places on the Internet. Bots and botnets are the largest source of spam email and can be coordinated by bot herders to inflict highly effective denial-of-service attacks, all without the knowledge of the system owners.
Tools can make a security practitioner's job easier. Regardless of whether they are aids in collecting input for risk analysis or scanners to assess how well a server is configured, tools automate processes, which saves time and reduces error. Do not allow yourself to fall into the trap of reducing network security to collecting and using tools, however.
Intrusion detection systems (IDS) monitor activity and send alerts when they detect suspicious traffic. (See Figure 6-16.) There are two broad classifications of IDS: host-based IDS, which monitor activity on servers and workstations, and network-based IDS, which monitor network activity. Network IDS services are typically stand-alone devices or at least independent blades within network chassis. Network IDS logs would be accessed through a separate management console that will also generate alarms and alerts.
Currently, there are two approaches to the deployment and use of intrusion detection systems. An appliance on the network can monitor traffic for attacks based on a set of signatures (analogous to antivirus software), or the appliance can watch the network’s traffic for a while, learn what traffic patterns are normal, and send an alert when it detects an anomaly. Of course, the IDS can be deployed using a hybrid of the two approaches as well.
Independent of the approach, how an organization uses an IDS determines whether the tool is effective. Despite its name, an IDS should not be relied upon to stop intrusions, because IDS solutions are not designed to take preventative action as part of their response. Instead, it should send an alert when it detects interesting, abnormal traffic that could be a prelude to an attack. For example, someone in the engineering department trying to access payroll information over the network at 3 a.m. is probably very interesting and not normal. Or perhaps a sudden rise in network utilization should be noted.
Security Event Management (SEM)/ Security Incident and Event Management (SIEM) is a solution that involves harvesting logs and event information from a variety of different sources on individual servers or assets, and analyzing it as a consolidated view with sophisticated reporting. Similarly, entire IT infrastructures can have their logs and event information centralized and managed by large-scale SEM/SIEM deployments. SEM/SIEM will not only aggregate logs but will perform analysis and issue alerts (email, pager, audible, etc.) according to suspicious patterns.
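The core correlation step, consolidating events from many sources and alerting on a suspicious pattern, can be sketched as follows. The log line format, source names, and threshold are hypothetical; real SEM/SIEM products normalize events into a common schema before matching.

```python
import re
from collections import Counter

# Hypothetical log format: "... auth failure for <user> from <ip>"
FAILURE = re.compile(r"auth failure .* from (\d+\.\d+\.\d+\.\d+)")

def correlate(logs, threshold=3):
    """Aggregate log lines from multiple sources and flag any source IP
    whose failure count meets the threshold -- the basic SIEM
    correlation that no single server's log could reveal on its own."""
    failures = Counter()
    for source, lines in logs.items():
        for line in lines:
            m = FAILURE.search(line)
            if m:
                failures[m.group(1)] += 1
    return [ip for ip, n in failures.items() if n >= threshold]
```

The value of centralization is visible here: two failures on one server and one on another only cross the threshold once the logs are viewed together.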
Aggregation and consolidation of logs and events will also potentially require additional network resources to transfer log and event data from distinct servers and arrays to a central location. This transfer will also need to occur in as close to real time as possible if the security information is to possess value beyond forensics. SEM/SIEM systems can benefit immensely from Security Intelligence Services (SIS). The output from security appliances is esoteric and lacks the real-world context required for predictive threat assessments; on its own, it falls short of delivering consistently relevant intelligence to businesses operating in a competitive marketplace. SIS uses all-source collection and analysis methods to produce and deliver precise and timely intelligence, guiding not only the business of security but the security of the business. SIS services are built upon accurate security metrics (cyber and physical), market analysis, and technology forecasting, and are correlated to real-world events, giving business decision makers timely and precise information. SIS provides upstream data from proactive cyber defense systems monitoring darkspace and the darkweb.
A network scanner can be used in several ways by the security practitioner:
The following tools are commonly used scanning tools that are worth understanding:
A network tap or span is a device that has the ability to selectively copy all data flowing through a network in real time for analysis and storage. (See Figure 6-17.) Network taps may be deployed for the purposes of network diagnostics and maintenance or for purposes of forensic analysis related to incidents or suspicious events. Network taps will generally be fully configurable and will function at all layers from the physical layer up. In other words, a tap should be capable of copying everything from layer 1 (Ethernet, for instance) upward, including all payload information within the packets. Additionally, a tap can be configured to vacuum up every single packet of data, or perhaps just focus on selected application traffic from selected sources.
The Internet Protocol (IP) uses datagram fragmentation to split up the data being transmitted between two network interfaces so that each piece fits at or under the maximum transmission unit (MTU) value set for the network. When packets are fragmented, they can be used to hide attack data, obfuscating the attack from the monitoring devices designed to protect the network. The following sections discuss the details of these attacks and how they may be addressed.
In this attack, IP packet fragments are constructed so that the target host calculates a negative fragment length when it attempts to reconstruct the packet. If the target host’s IP stack does not ensure that fragment lengths are set within appropriate boundaries, the host could crash or become unstable. This problem is easily fixed with a vendor patch.
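The defect is purely arithmetic, so the fix can be illustrated as the bounds check a hardened IP stack performs before accepting a fragment. This is a simplified sketch (function names are illustrative, and real reassembly also handles overlap trimming); the key point is that the fragment offset field counts 8-byte units and the payload length must come out positive.

```python
def fragment_span(offset_units, total_length, header_length):
    """Return the (start, end) byte positions a fragment claims to occupy
    in the reassembled datagram. IP fragment offsets are in 8-byte units."""
    start = offset_units * 8
    end = start + (total_length - header_length)
    return start, end

def accept_fragment(offset_units, total_length, header_length, max_packet=65535):
    """Reject fragments whose computed payload length is zero or negative
    (the crash condition described above) or which would overflow the
    maximum IP datagram size."""
    start, end = fragment_span(offset_units, total_length, header_length)
    if end <= start:       # non-positive payload length
        return False
    if end > max_packet:   # would overflow the reassembly buffer
        return False
    return True
```

A stack that performs this check simply discards the malformed fragment instead of computing a negative length and corrupting its reassembly state.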
Overlapping fragment attacks are used to subvert packet filters that only inspect the first fragment of a fragmented packet. The technique involves sending a harmless first fragment, which will satisfy the packet filter. Other packets follow that overwrite the first fragment with malicious data, thus resulting in harmful packets bypassing the packet filter and being accepted by the victim host. A solution to this problem is for TCP/IP stacks not to allow fragments to overwrite each other.
Instead of only permitting routers to determine the path a packet takes to its destination, IP allows the sender to explicitly specify the path. An attacker can abuse source routing so that the packet will be forwarded between network interfaces on a multi-homed computer that is configured not to forward packets. This could allow an external attacker access to an internal network. Source routing is specified by the sender of an IP datagram, whereas the routing path would normally be left to the router to decide. The best solution is to disable source routing on hosts and to block source-routed packets.
Both attacks use broadcasts to create denial-of-service attacks. A Smurf attack misuses the ICMP echo request to create denial-of-service attacks. In a Smurf attack, the intruder sends an ICMP echo request with a spoofed source address of the victim. The packet is sent to a network’s broadcast address, which forwards the packet to every host on the network. Because the ICMP packet contains the victim’s host as the source address, the victim will be overwhelmed by the ICMP echo replies, causing a denial-of-service attack.
The Fraggle attack uses UDP instead of ICMP. The attacker sends a UDP packet on port 7 with a spoofed source address of the victim. Like the Smurf attack, the packet is sent to a network’s broadcast address, which will forward the packet to all of the hosts on the network. The victim host will be overwhelmed by the responses from the network.
The first step in setting up an NFS connection will be the publication (exporting) of file system trees from the server. These trees can be arbitrarily chosen by the administrator. Access privileges are granted based upon the client IP address and directory tree. Within the tree, the privileges of the server file system will be mapped to client users.
Several points of risk exist:
From a security perspective, the main shortcoming in Network News Transport Protocol (NNTP) is authentication. One of the earlier solutions users found to this problem was signing messages with Pretty Good Privacy (PGP). However, this did not prevent impersonation or faked identities, as digital signatures were not a requirement and would in any case be unsuitable for addressing the underlying repudiation problem. To make matters worse, NNTP offers a cancellation mechanism to withdraw articles already published. Naturally, the same authentication weakness applies to the control messages used for these cancellations, allowing users with even moderate skills to delete messages at will.
Finger is an identification service that allows a user to obtain information about the last login time of another user and whether he or she is currently logged into a system. The "fingered" user can choose to have the contents of two files in their home directory displayed (the .project and .plan files). For all practical purposes, the Finger protocol has become obsolete. Its use should be restricted to situations where no alternatives are available.
Network Time Protocol (NTP) synchronizes computer clocks in a network. This can be extremely important for operational stability (for instance, under NIS) but also for maintaining consistency and coherence of audit trails, such as in log files. A variant of NTP exists in Simple Network Time Protocol (SNTP), offering a less resource intensive but also less exact form of synchronization. From a security perspective, our main objective with NTP is to prevent an attacker from changing time information on a client or a whole network by manipulating its local time server. NTP can be configured to restrict access based upon IP address. From NTP version 3 onward, cryptographic authentication has become available, based upon symmetric encryption, but is to be replaced by public key cryptography in NTP version 4.
To make a network robust against accidental or deliberate timing inaccuracies, a network should have its own time server and possibly a dedicated, highly accurate clock. As a standard precaution, a network should never depend on one external time server alone, but it should synchronize with several trusted time sources. Thus, manipulation of a single source will have no immediate effect. To detect de-synchronization, one can use standard logging mechanisms with NTP to ensure synchronicity of time stamping.
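An SNTP exchange is a single 48-byte UDP datagram in each direction, which makes the protocol easy to illustrate. The sketch below builds a client request and extracts the server's transmit timestamp, following the RFC 4330 packet layout; it deliberately stops short of sending anything, since (per the discussion above) which servers to trust is a policy decision, and the parsing ignores the sub-second fraction for brevity.

```python
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and UNIX epoch (1970)

def build_sntp_request():
    """48-byte SNTP request: leap indicator 0, version 3, mode 3 (client),
    followed by 47 zero bytes."""
    return bytes([(0 << 6) | (3 << 3) | 3]) + bytes(47)

def parse_transmit_time(response):
    """Extract the seconds part of the server's transmit timestamp
    (network byte order, offset 40) and convert to a UNIX timestamp."""
    ntp_seconds = struct.unpack("!I", response[40:44])[0]
    return ntp_seconds - NTP_EPOCH_OFFSET
```

In practice the request would be sent to UDP port 123 of several trusted servers, with the answers cross-checked against each other, exactly the multi-source precaution described above.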
There are a number of different types of attacks that can be launched against a network, many of which we have already discussed. A special category is the denial of service and distributed denial of service attacks. A denial of service (DoS) attack is carried out when an attacker undertakes a set of activities designed to knock the target offline and deny it access to the network for the duration of the attack. The key is that this attack is carried out by a single attacker. A distributed denial of service (DDoS) attack is designed to achieve the same end result as a DoS attack, but it is perpetrated by multiple attackers (or attacking machines) simultaneously coordinating their efforts.
The easiest attack to carry out against a network, or so it may seem, is to overload it through excessive traffic or traffic that has been "crafted" to confuse the network into shutting down or slowing to the point of uselessness. Countermeasures include, but are not limited to, multiple layers of firewalls; careful filtering on firewalls, routers, and switches; internal network access controls (NAC); redundant (diverse) network connections; load balancing; reserved bandwidth (quality of service, which would at least protect systems not directly targeted); and blocking traffic from an attacker on an upstream router. Bear in mind that malicious agents can and will shift IP addresses or DNS names to sidestep countermeasures and may employ thousands of unique IP addresses during the execution of an attack. Enlisting the help of upstream service providers and carriers is ultimately the most effective countermeasure, especially if the necessary agreements and relationships have been established proactively or as part of agreed service levels.
It is instructive to note that many protocols contain basic protection from message loss that would at least mitigate the effects of denial-of-service attacks. This starts with TCP managing packet loss within certain limits, and it ends with higher level protocols, such as SMTP, that will provide robustness against temporary connection outages (store and forward).60
There are several tactics used to disrupt or deny service to a computer, such as congesting network traffic to a host or attempting to overwhelm a server at the application level. Distributed denial-of-service (DDoS) attacks can be broadly divided into three types:
Security practitioners need to be aware of the impact of DDoS attacks against their networks. Perhaps because they are a convenient attack vector requiring little specialized expertise, DDoS attacks are increasing in scope to include bigger targets and more packets. During Q1 2013, the average DDoS attack bandwidth totaled 48.25 Gbps, a 718% increase over the previous quarter—and the average packet-per-second rate reached 32.4 million. “DDoS challenges have spiked for enterprises in 2013,” noted Lawrence Orans of the research firm Gartner in a recent report. “Gartner estimates that its DDoS inquiry level quadrupled from September 2012 through September 2013. An increase of higher-volume and application-based DDoS attacks on corporate networks will force Chief Information Security Officers (CISOs) and security teams to find new, proactive solutions for reducing downtime.” 62
Some examples of DDoS attacks include the following:
One example involved .cn, China's country code top-level domain. The China Internet Network Information Center (CNNIC), which maintains the registry for .cn, issued an apology and a notice that at 2 and 4 in the morning early Sunday local time, its National Nodes DNS was hit with two large attacks.64 CloudFlare CEO Matthew Prince told the Wall Street Journal that his company saw a 32% drop in traffic for the thousands of Chinese domains on the company's network during the attack period, compared with the same timeframe 24 hours prior.

Countermeasures are similar to those for conventional denial-of-service attacks, but simple IP or port filtering might not work.
A SYN flood attack is a denial-of-service attack against the initial handshake in a TCP connection. Many new connections from faked, random IP addresses are opened in short order, overloading the target’s connection table.
Countermeasures include tuning operating system parameters, such as the size of the backlog table, according to vendor specifications. Another solution, which requires modification to the TCP/IP stack, is SYN cookies, which choose TCP sequence numbers in a way that makes faked packets immediately recognizable.65
Daniel J. Bernstein, the primary inventor of this approach, defines them as “particular choices of initial TCP sequence numbers by TCP servers. The difference between the server’s initial sequence number and the client’s initial sequence number is:
A server that uses SYN cookies does not have to drop connections when its SYN queue fills up. Instead it sends back a SYN+ACK, exactly as if the SYN queue had been larger. When the server receives an ACK, it checks that the secret function works for a recent value of t, and then rebuilds the SYN queue entry from the encoded MSS.”
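A simplified sketch of the idea follows. Bernstein's scheme packs a small time counter, an encoding of the client's MSS, and a keyed hash of the connection into the 32-bit initial sequence number; the secret, MSS table, and hash choice below are illustrative stand-ins, not the exact functions a real kernel uses.

```python
import hashlib

SECRET = b"server secret"            # hypothetical per-boot secret
MSS_TABLE = [536, 1220, 1440, 1460]  # example MSS values; index fits in 3 bits

def _mac24(src, sport, dst, dport, t):
    """Keyed 24-bit hash of the connection 4-tuple and time counter."""
    data = f"{src}:{sport}:{dst}:{dport}:{t}".encode()
    return int.from_bytes(hashlib.sha256(SECRET + data).digest()[:3], "big")

def make_cookie(src, sport, dst, dport, t, mss_index):
    """Encode counter (5 bits), MSS choice (3 bits), and keyed hash
    (24 bits) into the ISN sent in the SYN+ACK -- no per-connection
    state is kept until the final ACK arrives."""
    return ((t % 32) << 27) | (mss_index << 24) | _mac24(src, sport, dst, dport, t)

def check_cookie(cookie, src, sport, dst, dport, now, window=2):
    """Validate the cookie echoed back in the ACK against recent counter
    values; on success, recover the MSS encoded at SYN time."""
    mss_index = (cookie >> 24) & 0x7
    for t in range(now, now - window - 1, -1):
        expected = ((t % 32) << 27) | (mss_index << 24) | _mac24(src, sport, dst, dport, t)
        if expected == cookie:
            return MSS_TABLE[mss_index]
    return None
```

A flood of spoofed SYNs now costs the server nothing but the outgoing SYN+ACKs, because only an ACK carrying a cookie the server itself minted results in a connection entry.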
One of the most successful variants of SYN flooding can be carried out by the botnets discussed earlier. Botnets have the ability to direct potentially thousands of SYN requests to hosts at the same time, overwhelming not only the hosts but also the network connections that they rest upon. Under such circumstances, there are no host-configuration countermeasures available because a host without a network is as good as dead anyway. While SYN flooding might be the mode of attack, it is not being employed in any cunning manner with spoofed IP addresses from possibly a single malicious host; it is being applied as a pure brute-force form of attack.
Countermeasures include protecting the operating system through securing its network stack. This is not normally something the user or owner of a system has any degree of control over; it is a task for the vendor.
Finally, the network needs to be included in a corporation’s disaster recovery and business contingency plans. For local area networks, one may set high recovery objectives and provide appropriate contingency, based upon the fact that any recovery of services is likely to be useless without at least a working local area network (LAN) infrastructure. As wide area networks are usually outsourced, contingency measures might include acquisition of backup lines from a different provider, procurement of telephone or digital subscriber loop (DSL) lines, etc.
Spoofing is defined as acting with the intent of impersonating someone or something, with the goal of attempting to get a target to accept you as the legitimate party, even though you are not. When spoofing is used by a bad actor to attempt to trick a target machine on the corporate network into accepting the traffic being sent to it from the machine that is controlled by the attacker, this can present a challenge to confidentiality, integrity, and availability. The SSCP needs to understand the intricacies of spoofing and the types of attacks that may be used with spoofing to gain access to the network.
Packets are sent with a bogus source address so that the victim will send a response to a different host. Spoofed addresses can be used to abuse the three-way handshake that is required to start a TCP session. Under normal circumstances, a host offers to initiate a session with a remote host by sending a packet with the SYN option. The remote host responds with a packet with the SYN and ACK options. The handshake is completed when the initiating host responds with a packet with the ACK option.
An attacker can launch a denial-of-service attack by sending the initial packet with the SYN option with a source address of a host that does not exist. The victim will respond to the forged source address by sending a packet with the SYN and ACK options, and then wait for the final packet to complete the handshake. Of course, that packet will never arrive because the victim sent the packet to a host that does not exist. If the attacker sends a storm of packets with spoofed addresses, the victim may reach the limit of uncompleted (half-open) three-way handshakes and refuse other legitimate network connections.
The above scenario takes advantage of a protocol flaw. To mitigate the risk of a successful attack, vendors have released patches that reduce the likelihood of the limit of uncompleted handshakes being reached. In addition, security devices, such as firewalls, can block packets that arrive from an external interface with a source address from an internal network.
As SMTP does not possess an adequate authentication mechanism, email spoofing is extremely simple; it can be done with a simple TELNET command to port 25 of a mail server and by issuing a number of SMTP commands. The most effective protection against this is a social one, whereby the recipient confirms or simply ignores implausible email. Email spoofing is frequently used as a means to obfuscate the identity of a sender in spamming, whereby the purported sender of a spam email is in fact another victim of spam whose email address has been harvested by or sold to a spammer.
To resolve a domain name query, such as mapping a web server address to an IP address, the user’s workstation will in turn have to undertake a series of queries through the Domain Name System hierarchy. Such queries can be either recursive (a name server receiving a request will forward it and return the resolution) or iterative (a name server receiving a request will respond with a reference).
An attacker aiming to poison a DNS server's (name server's) cache (information related to previous queries, stored for reuse in future queries for speed and efficiency) by injecting fake records, and thereby falsifying responses to client requests, will first need to send a query to this very name server. The attacker then knows that the name server will shortly send out a query for resolution.
In the first case, the attacker sends a query for a domain whose primary name server he controls. The response to this query will contain additional information that was not originally requested, but which the target server will now cache. In the second case, a different method that also works with iterative queries, the attacker uses IP spoofing to send a response to his own query before the authoritative (correct) name server has a chance to respond.
In both cases, the attacker has used an electronic conversation to inject false information into the name server’s cache. Not only will this name server now use the cached information, but the false information will propagate to other servers, making inquiries to this one. Due to the caching nature of DNS, attacks on DNS servers as well as countermeasures always have certain latency, determined by the configuration of a (domain) zone.
There are two principal vulnerabilities here, both inherent in the design of the DNS protocol: It is possible for a DNS server to respond to a recursive query with information that was not requested, and the DNS server will not authenticate information. Approaches to address or mitigate this threat have only been partly successful.
Later versions of DNS server software are programmed to ignore responses that do not correspond to a query. Authentication has been proposed, but attempts to introduce stronger (or even “any”) authentication into DNS (for instance, through the use of DNSSEC) have not found wide acceptance. Authentication services have been delegated upward to higher protocol layers. Applications in need of guaranteeing authenticity cannot rely on DNS to provide such, but will have to implement a solution themselves.
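The first of these fixes (ignoring responses that do not correspond to an outstanding query, and dropping extra records for names that were not asked about) can be sketched as a toy resolver cache. The structure is deliberately simplified; a real resolver also matches source address and port and randomizes query IDs.

```python
class ResolverCache:
    """Toy cache illustrating the 'ignore unsolicited data' rule: a
    response is processed only if its query ID matches a query we
    actually sent, and records for other names are discarded."""

    def __init__(self):
        self.pending = {}  # query_id -> name we asked about
        self.cache = {}    # name -> ip

    def send_query(self, query_id, name):
        self.pending[query_id] = name

    def receive_response(self, query_id, records):
        asked = self.pending.pop(query_id, None)
        if asked is None:
            return False              # unsolicited response: ignored
        for name, ip in records:
            if name == asked:         # drop extras for names not queried
                self.cache[name] = ip
        return True
```

Both poisoning techniques from the previous section fail against this discipline: the piggybacked extra records are discarded, and a spoofed answer to a query the server never sent is ignored outright.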
The ultimate solution to DNS security issues for many organizations is to establish DNS servers dedicated to their domains and vigorously monitor them. An “internal” DNS server will also be established, which only accepts queries from internal networks and users, and therefore it is considered to be substantially more difficult for outsiders to compromise and use as a staging point for penetrating internal networks.
Technically, the following two techniques are only indirectly related to DNS weaknesses. However, it is worth mentioning them in the context of DNS because they seek to manipulate name resolution in other ways.
Pharming is the manipulation of DNS records; for instance, through the "hosts" file on a workstation. A hosts file (/etc/hosts on many UNIX machines, C:\Windows\System32\drivers\etc\hosts on a Windows machine) is the resource queried first, before a DNS request is issued. It will always contain the mapping of the host name localhost to the IP address 127.0.0.1 (the loopback interface, as defined in RFC 3330) and potentially other hosts.66 A virus or malware may add addresses of antivirus software vendors with invalid IP addresses to the hosts file to prevent download of virus pattern files. Alternately, Internet banking sites might have their IP addresses substituted for rogue, imposter sites, which will attempt to trick the user into providing login information. A further form of DNS pharming is to compromise a DNS server itself and thereby redirect all users of the DNS server to imposter websites, even though their workstations themselves may be free from compromise.
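Because the hosts file is plain text, this form of pharming is easy to audit. A minimal sketch follows; the watchlist of domains is a hypothetical input a practitioner would supply, and a production check would also compare entries against a known-good baseline.

```python
def hosts_entries(text):
    """Parse hosts-file text into (ip, hostname) pairs, skipping
    comments and blank lines."""
    entries = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        parts = line.split()
        for host in parts[1:]:
            entries.append((parts[0], host.lower()))
    return entries

def suspicious(text, watched):
    """Flag any watched hostname that the hosts file remaps away from
    loopback -- the classic pharming symptom described above."""
    return [(ip, h) for ip, h in hosts_entries(text)
            if h in watched and ip not in ("127.0.0.1", "::1")]
```

Run against a file where malware has added an entry for a banking domain, the check surfaces the redirection immediately.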
Social engineering techniques will not try to manipulate a query on a technical level but can trick the user into misinterpreting a DNS address that is displayed to him in a phishing email or in his web browser address bar. One way to achieve this in email or hypertext markup language (HTML) documents is to display a link in text where the actual target address is different from what is displayed. Another way to achieve this is the use of non-ASCII character sets (for instance, Unicode—ISO/IEC 10646:2012—characters) that closely resemble ASCII (i.e., Latin) characters to the user.67 This may become a popular technique with the popularization of internationalized domain names.
Smaller corporate networks often do not split naming zones; i.e., names of hosts that should be accessible only from an intranet are visible from the Internet. Although knowing a server name will not enable anyone to access it, this knowledge can aid and facilitate preparation of a planned attack, as it provides an attacker with valuable information on existing hosts (at least with regard to servers), network structure, and details such as organizational structure or server operating systems (if the OS is part of the host name, etc.).
An organization should therefore operate split DNS zones wherever possible and refrain from using telling naming conventions for their machines. In addition, a domain registrar’s database of administrative and billing domain contacts (whois database) can be an attractive target for information and email harvesting.
Session hijacking is the act of unauthorized insertion of packets into a data stream. It is normally based on sequence number attacks, where sequence numbers are either guessed or intercepted. Different types of session hijacking exist:
Countermeasures against IP spoofing can be executed at layer 3 (see the section "IP Address Spoofing and SYN-ACK Attacks"). As TCP sessions only perform an initial authentication, application layer encryption can be used to protect against man-in-the-middle attacks.
As traditional TCP scans became widely recognized and were blocked, various stealth scanning techniques were developed. In TCP half scanning (also known as TCP SYN scanning), no complete connection is opened; instead, only the initial steps of the handshake are performed. This makes the scan harder to recognize; for instance, it would not show up in application log files. However, it is possible to recognize and block TCP SYN scans with an appropriately equipped firewall.
The security practitioner needs to understand the common wireless technologies, networks, and methods. With that foundation, it is easier to understand common vulnerabilities and countermeasures.
There are several types of wireless technologies. They include the following:
Wireless technologies are used in a variety of wireless networks. Types of wireless networks include the following:
It is important to understand the principles and methodologies of delivering wireless information. These include the following:
Two kinds of spread spectrum are available:
Wireless technologies rely on a variety of protocols and authentication systems that have vulnerabilities that can be exploited. Fortunately, there are wireless security devices and countermeasures that can be used to provide stronger security.
The protocols and authentication methods that wireless technologies employ are intrinsically related to how secure they are. It is important to understand both the strengths and vulnerabilities that come with them.
Open System Authentication (OSA) is the default authentication protocol for the 802.11 standard. It consists of a simple authentication request containing the station ID and an authentication response containing success or failure data. Upon successful authentication, both stations are considered mutually authenticated. It can be used with the Wired Equivalent Privacy (WEP) protocol to provide better communication security; however, it is important to note that the authentication management frames are still sent in cleartext during the authentication process. WEP is used only for encrypting data once the client is authenticated and associated. Any client can send its station ID in an attempt to associate with the AP. In effect, no authentication is actually done.
Shared Key Authentication (SKA) is a standard challenge and response mechanism that makes use of WEP and a shared secret key to provide authentication. Upon encrypting the challenge text with WEP using the shared secret key, the authenticating client will return the encrypted challenge text to the access point for verification. Authentication succeeds if the access point decrypts the same challenge text.
Ad-hoc mode is one of the networking topologies provided in the 802.11 standard. It consists of at least two wireless endpoints where there is no access point involved in their communication. Ad-hoc mode WLANs are normally less expensive to run, as no APs are needed for their communication. However, this topology cannot scale for larger networks and lacks some security features like MAC filtering and access control.
Infrastructure mode is another networking topology in the 802.11 standard. It consists of a number of wireless stations and access points. The access points usually connect to a larger wired network. This network topology can scale to form large networks with arbitrary coverage and complex architectures.
Wired Equivalent Privacy (WEP) Protocol is a basic security feature in the IEEE 802.11 standard, intended to provide confidentiality over a wireless network by encrypting information sent over the network. A key-scheduling flaw has been discovered in WEP, so it is now considered to be insecure because a WEP key can be cracked in a few minutes with the aid of automated tools.
Wi-Fi Protected Access (WPA) provides users with a higher level of assurance that their data will remain protected by using the Temporal Key Integrity Protocol (TKIP) for data encryption. 802.1x authentication has been introduced in this protocol to improve user authentication. Wi-Fi Protected Access 2 (WPA2), based on IEEE 802.11i, is a wireless security protocol that allows only authorized users to access a wireless device, with features supporting stronger cryptography (e.g., Advanced Encryption Standard or AES), stronger authentication control (e.g., Extensible Authentication Protocol or EAP), key management, replay attack protection, and data integrity.
In July 2010, a security vendor claimed to have discovered a vulnerability in the WPA2 protocol, named "Hole 196."68 By exploiting the vulnerability, an internally authenticated Wi-Fi user could decrypt the private data of other users and inject malicious traffic into the wireless network. After a thorough investigation, it turned out that such an attack cannot actually recover, break, or crack any WPA2 encryption keys (AES or TKIP). Instead, attackers could only masquerade as access points and launch a man-in-the-middle attack when clients attached to them. Moreover, in a properly configured environment, such an attack would not succeed in the first place: if the client isolation feature is enabled on all access points, wireless clients are not allowed to talk to each other when they are attached to the same access point. With this simple security configuration setting applied, an attacker is unable to launch a man-in-the-middle attack against other wireless users.
TKIP was initially designed to be used with WPA, while the stronger AES algorithm was designed to be used with WPA2. Some devices may allow WPA to work with AES, while others may allow WPA2 to work with TKIP. In November 2008, a vulnerability in TKIP was uncovered that would allow an attacker to decrypt small packets and inject arbitrary data into a wireless network. Thus, TKIP encryption is no longer considered secure. The security architect should consider using the stronger combination of WPA2 with AES encryption.
The design flaws in the security mechanisms of the 802.11 standard also give rise to a number of potential attacks, both passive and active. These attacks enable intruders to eavesdrop on, or tamper with, wireless transmissions.
Access points emit radio signals in a roughly circular pattern, and the signals almost always extend beyond the physical boundaries of the area they are intended to cover. Signals can be intercepted outside of buildings, or even through the floors of multi-story buildings. As a result, attackers can mount a “parking lot” attack, in which they literally sit in the organization’s parking lot and try to access internal hosts via the wireless network. If the network is compromised, the attacker has achieved a high level of penetration: they are now through the firewall and have the same level of network access as trusted employees within the enterprise. An attacker may also fool legitimate wireless clients into connecting to the attacker’s own network by placing an unauthorized access point with a stronger signal in close proximity to the clients. The aim is to capture end-user passwords or other sensitive data when users attempt to log on to these rogue access points.
Shared key authentication can easily be exploited through a passive attack by eavesdropping on both the challenge and the response exchanged between the access point and the authenticating client. Such an attack is possible because the attacker can capture both the plaintext (the challenge) and the ciphertext (the response). WEP uses the RC4 stream cipher as its encryption algorithm. A stream cipher works by generating a keystream (a sequence of pseudo-random bits) from the shared secret key together with an initialization vector (IV). The keystream is then XORed against the plaintext to produce the ciphertext. An important property of a stream cipher is that if both the plaintext and the ciphertext are known, the keystream can be recovered by simply XORing the two together, in this case the challenge and the response. The attacker can then use the recovered keystream to encrypt any subsequent challenge text generated by the access point, producing a valid authentication response by XORing the two values together. As a result, the attacker can be authenticated to the access point.
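The keystream-recovery step described above can be sketched in a few lines of Python. This is a toy simulation under simplifying assumptions: `os.urandom` stands in for the RC4 keystream (which the attacker never observes directly), and real 802.11 framing, IV handling, and management messages are omitted.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Simulated WEP shared-key handshake. The keystream is secret to the
# attacker; only the challenge and the response are visible on the air.
keystream = os.urandom(128)

challenge = os.urandom(128)            # AP -> client, sent in the clear
response = xor(challenge, keystream)   # client -> AP, observed by the attacker

# Passive attacker: XOR the observed plaintext/ciphertext pair
# to recover the keystream...
recovered = xor(challenge, response)
assert recovered == keystream

# ...then forge a valid response to any later challenge protected
# by the same keystream, without ever knowing the shared key.
new_challenge = os.urandom(128)
forged = xor(new_challenge, recovered)
assert xor(forged, keystream) == new_challenge   # the AP would accept this
```

The two assertions capture the attack: XORing challenge and response yields the keystream, and that keystream is sufficient to answer any later challenge of the same length.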
Access points come with vendor-provided default Service Set Identifiers (SSIDs) programmed into them. If the default SSID is not changed, it is very likely that an attacker will be able to successfully attack the device due to the use of the default configuration. In addition, SSIDs are embedded in management frames that are broadcast in cleartext from the device, unless the access point is configured to disable SSID broadcasting or is using encryption. By analyzing network traffic captured from the air, the attacker may be able to obtain the network SSID and perform further attacks as a result.
Data passing through a wireless LAN with WEP disabled (which is the default setting for most products) is susceptible to eavesdropping and data modification attacks. However, even when WEP is enabled, the confidentiality and integrity of wireless traffic is still at risk because a number of flaws in WEP have been revealed, which seriously undermine its claims to security. In particular, the following attacks on WEP are possible:
The Temporal Key Integrity Protocol (TKIP) attack uses a mechanism similar to the WEP attack, in that it tries to decode data one byte at a time by using multiple replays and observing the response over the air. Using this mechanism, an attacker can decode small packets like ARP frames in about 15 minutes. If Quality of Service (QoS) is enabled in the network, the attacker can further inject up to 15 arbitrary frames for every decrypted packet. Potential attacks include ARP poisoning, DNS manipulation, and denial of service. Although this is not a key recovery attack and it does not lead to compromise of TKIP keys or decryption of all subsequent frames, it is still a serious attack and poses risks to all TKIP implementations on both WPA and WPA2 networks.
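Both WEP bit-flipping and the chopchop-style technique behind the TKIP attack above rest on the same underlying weakness: WEP's CRC-32 integrity check value (ICV) is linear under XOR, so an attacker can modify an encrypted frame and repair its encrypted ICV without knowing the key. The Python toy below simulates this; `os.urandom` stands in for the RC4 keystream, and real WEP framing and IV handling are omitted.

```python
import os
import zlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# --- sender side: simplified WEP-style frame protection ---
plaintext = b"PAY $100 TO ALICE"
icv = zlib.crc32(plaintext).to_bytes(4, "little")   # WEP's CRC-32 integrity value
keystream = os.urandom(len(plaintext) + 4)          # stands in for RC4 output
ciphertext = xor(plaintext + icv, keystream)

# --- attacker side: flip bits without knowing the key ---
mask = xor(b"PAY $100 TO ALICE", b"PAY $900 TO MALRY")  # desired plaintext change
# CRC-32 is affine over XOR: crc(p ^ mask) = crc(p) ^ crc(mask) ^ crc(zeros),
# so the encrypted ICV can be "repaired" with a key-independent delta.
icv_delta = (zlib.crc32(mask) ^ zlib.crc32(bytes(len(mask)))).to_bytes(4, "little")
forged = xor(ciphertext, mask + icv_delta)

# --- receiver side: decrypt and verify the ICV; the forgery passes ---
recovered = xor(forged, keystream)
body, recv_icv = recovered[:-4], recovered[-4:]
assert recv_icv == zlib.crc32(body).to_bytes(4, "little")
print(body)   # prints b'PAY $900 TO MALRY'
```

Because the ICV check succeeds on the tampered frame, CRC-32 provides no real integrity protection; this is why WPA2's AES-based CCMP, which uses a keyed cryptographic integrity check, is the recommended replacement.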
As wireless enterprise networks become more pervasive, increasingly sophisticated attacks are developed to exploit these networks. In response, many organizations consider the deployment of wireless intrusion protection and wireless intrusion detection systems (WIPS/WIDS). These systems can offer sophisticated monitoring and reporting capabilities to identify attacks against wireless infrastructure while stopping multiple classes of attack before they are successful against a network.
When selecting a WIDS vendor, it is important for the security practitioner to first understand the deployment methodologies supported by each system. The available WIDS deployment models include overlay, integrated, and hybrid.
In an overlay monitoring deployment, organizations augment their existing WLAN infrastructure with dedicated wireless sensors or Air Monitors (AMs). The AMs are connected to the network in a manner similar to access points (APs). They can be deployed in ceilings or on walls and supported by power over Ethernet (PoE) injectors in wiring closets. While APs are responsible for providing client connectivity, AMs are primarily passive devices that monitor the air for signs of attack or other undesired wireless activity. In an overlay WIDS system, the WIDS vendor provides a controller in the form of a server or appliance that collects and assesses information from the AMs, which an administrator then monitors. These devices do not otherwise participate with the rest of the wireless network, and are limited to assessing traffic at the physical layer (layer 1) and the data-link layer (layer 2).
In an integrated monitoring deployment, organizations leverage existing access point hardware as dual-purpose AP/AM devices. APs are responsible for providing client connectivity in an infrastructure role, and for analyzing wireless traffic to identify attacks and other undesired activity at the same time. This is often a less-costly approach compared to overlay monitoring because organizations use existing hardware for both monitoring and infrastructure access without the need for additional sensors or an overlay management controller.
A hybrid monitoring approach leverages the strengths of both the overlay and integrated monitoring models. A hybrid approach uses both dual-purpose APs and dedicated AMs for intrusion detection and protection. Organizations can use an existing deployment of APs and augment that protection with dedicated AMs, or they can deploy a dedicated monitoring infrastructure consisting solely of AM devices. In either case, analysis is performed by a centralized controller similar to what is used with an overlay model, rather than the approach used in an integrated WIDS deployment, where processing is handled by distributed access points.
To mitigate attacks on the wireless network, WIDS vendors have augmented the analysis components of their products with reactive components, often known as Wireless Intrusion Prevention Services (WIPS). When the analysis mechanism recognizes an attack, such as an attempt at accelerated WEP key cracking, the wireless device reacts to the event by reporting it to the administrator and by taking steps to prevent the attack from succeeding.
Network and communications security can be a complex set of topics for the SSCP to understand. Describing network-related security issues can involve many topics with many moving parts, and identifying protective measures for telecommunications technologies is challenging as the speed at which technology changes and evolves continues to increase. The SSCP also needs to identify the processes best suited to managing LAN-based security while taking into account the needs of the organization overall, along with the necessary procedures for operating and configuring network-based security devices such as IDS and IPS solutions. The security professional should be able to put all of these issues and concerns into context, understand their main goals, and apply a common-sense approach to typical scenarios. The focus here is to maintain operational resilience and protect valuable operational assets through a combination of people, processes, and technologies, while managing security services as effectively and efficiently as any other set of services in the enterprise.