Domain 6
Networks and Communications Security

In the Networks and Communication domain, students will learn about the network structure, data transmission methods, transport formats, and the security measures used to maintain integrity, availability, authentication, and confidentiality of the information being transmitted. Concepts for both public and private communication networks will be discussed.

Topics

  The following topics are addressed in this chapter:
    • Security issues related to networks
      • OSI and TCP/IP models
      • Network topologies and relationships (e.g., ring, star, bus, mesh, tree)
      • Commonly used ports and protocols
    • Telecommunications technologies
      • Converged communications
      • VoIP
      • POTS, PBX
      • Cellular
      • Attacks and countermeasures
    • Network access
      • Access control and monitoring (e.g., NAC, remediation, quarantine, admission)
      • Access control standards and protocols (e.g., IEEE 802.1X, RADIUS, TACACS)
      • Remote Access operation and configuration (e.g., thin client, SSL VPN, IPSec VPN)
      • Attacks and countermeasures
    • LAN-based security
      • Separation of data plane and control plane
      • Segmentation (e.g., VLAN, ACLs)
      • MACsec (e.g., IEEE 802.1AE)
    • Secure device management
    • Network-based security devices
      • Firewalls and proxies
      • Network intrusion detection/prevention systems
      • Routers and switches
      • Traffic shaping devices (e.g., WAN optimization)
      • Frameworks for data sharing (e.g., Trusted Computing Group’s IF-MAP)
    • Wireless technologies
      • Transmission security (e.g., WPA, WPA2/802.11i, AES, TKIP)
      • Wireless security devices (e.g., integrated/dedicated WIPS, WIDS)
      • Common vulnerabilities and countermeasures (e.g., management protocols)

Objectives

The security practitioner is expected to participate in the following areas related to network and telecommunications security:

  • Describe network related security issues.
  • Identify protective measures for telecommunication technologies.
  • Define processes for controlling network access.
  • Identify processes for managing LAN-based security.
  • Describe procedures for operating and configuring network-based security devices.
  • Define procedures to implement and operate wireless technologies.

Security Issues Related to Networks

There are many issues that the SSCP will need to address as part of a comprehensive approach to network security. Areas such as the OSI model, network topologies, ports, and protocols all play an important part in network security. As we begin our discussions in these areas, the SSCP should keep in mind the operational concept of defense in depth, which specifies that a secure design will incorporate multiple overlapping layers of protection mechanisms to ensure that the concerns regarding confidentiality, integrity, and availability are addressed.

OSI and TCP/IP Models

Network communication is usually described in terms of layers. Several layering models exist; the most commonly used are:

  • The OSI reference model, structured into seven layers (physical layer, data-link layer, network layer, transport layer, session layer, presentation layer, application layer)1
  • The TCP/IP or Department of Defense (DoD) model (not to be confused with the TCP/IP protocols), structured into four layers (link layer, network layer, transport layer, application layer)2

One feature that is common to both models and highly relevant from a security perspective is encapsulation. This means that not only do the different layers operate independently from each other, but they are also isolated on a technical level. Short of technical failures, the contents of any lower or higher layer protocol are inaccessible from any particular layer. This function of the models allows the security architect to ensure that their designs can provide both confidentiality and integrity. It also allows the security practitioner to implement those designs and operate them effectively, knowing that data flowing up and down the model’s layers is being safeguarded.

OSI Model

The seven-layer Open Systems Interconnection (OSI) model was defined in 1984 and published as an international standard, ISO/IEC 7498-1.3 The last revision to this standard was in 1994. Although sometimes considered complex, it has provided a practical and widely accepted way to describe networking. In practice, some layers have proven less crucial to the concept (such as the presentation layer), others (such as the network layer) have required more specific structure, and applications that overlap and cross layer boundaries exist. See Figure 6-1.4

Layer 1: Physical Layer

Physical topologies are defined at this layer. Because the required signals depend on the transmission media (e.g., the signals required by a modem are not the same as those for an Ethernet network interface card), the signals are generated at the physical layer. Not all hardware consists of layer 1 devices. Even though many types of hardware, such as cables, connectors, and modems, operate at the physical layer, some operate at different layers. Routers and switches, for example, operate at the network and data link layers, respectively.


Figure 6-1: The Seven Layer OSI Reference Model

Layer 2: Data Link Layer

The data link layer prepares the packet that it receives from the network layer to be transmitted as frames on the network. This layer ensures that the information that it exchanges with its peers is error free. If the data link layer detects an error in a frame, it will request that its peer resend that frame. The data link layer converts information from the higher layers into bits in the format that is expected for each networking technology, such as Ethernet, Token Ring, etc. Using hardware addresses, this layer transmits frames only to devices that are physically connected. As an analogy, consider the path between the end nodes on the network as a chain and each device in the path as a link. The data link layer is concerned with sending frames to the next link.

The Institute of Electrical and Electronics Engineers (IEEE) data link layer is divided into two sublayers:

  • Logical Link Control (LLC)—Manages connections between two peers. It provides error and flow control and control bit sequencing.
  • Media Access Control (MAC)—Transmits and receives frames between peers. Logical topologies and hardware addresses are defined at this sublayer. An Ethernet’s 48-bit hardware address is often called a MAC address as a reference to the name of the sublayer.
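The 48-bit hardware address mentioned above can be explored with a short sketch. The helper functions below are hypothetical (not part of any standard library) and simply render the six raw bytes in the familiar colon notation and test the multicast bit:

```python
def format_mac(raw: bytes) -> str:
    """Render a 48-bit (6-byte) hardware address in the usual colon notation."""
    if len(raw) != 6:
        raise ValueError("a MAC address is exactly 6 bytes (48 bits)")
    return ":".join(f"{b:02x}" for b in raw)

def is_multicast(raw: bytes) -> bool:
    """The least-significant bit of the first octet marks a multicast frame."""
    return bool(raw[0] & 0x01)

addr = bytes([0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E])
print(format_mac(addr))                           # 00:1a:2b:3c:4d:5e
print(is_multicast(b"\xff\xff\xff\xff\xff\xff"))  # broadcast sets the bit -> True
```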
Layer 3: Network Layer

It is important to clearly distinguish between the functions of the network and data link layers. The network layer moves information between two hosts that are not physically connected. On the other hand, the data link layer is concerned with moving data to the next physically connected device. Also, whereas the data link layer relies on hardware addressing, the network layer uses logical addressing that is created when hosts are configured.

Internet Protocol (IP) is part of the TCP/IP suite and is the most important network layer protocol. IP has two functions:

  • Addressing—IP uses the destination IP address to transmit packets through networks until the packets’ destination is reached.
  • Fragmentation—IP will subdivide a packet if its size is greater than the maximum size allowed on a local network.

IP is a connectionless protocol that does not guarantee error-free delivery. Layer 3 devices, such as routers, read the destination layer 3 address (e.g., destination IP address) in received packets and use their routing table to determine the next device on the network (the next hop) to send the packet. If the destination address is not on a network that is directly connected to the router, it will send the packet to another router.
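The next-hop lookup just described can be sketched with Python’s standard ipaddress module. The routing table, prefixes, and next-hop addresses below are invented for illustration, and real routers use far more efficient lookup structures, but the longest-prefix-match logic is the same:

```python
import ipaddress

# A tiny static routing table: (network, next hop). The default route
# (0.0.0.0/0) plays the role of "send it to another router" described above.
routes = [
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.1"),
    (ipaddress.ip_network("10.1.5.0/24"), "10.1.5.1"),
    (ipaddress.ip_network("0.0.0.0/0"),  "192.0.2.1"),   # default route
]

def next_hop(dest: str) -> str:
    """Pick the most specific (longest-prefix) route that contains dest."""
    addr = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in routes if addr in net]
    best = max(matches, key=lambda m: m[0].prefixlen)
    return best[1]

print(next_hop("10.1.5.77"))   # 10.1.5.1 (the /24 wins over the /16)
print(next_hop("8.8.8.8"))     # 192.0.2.1 (falls through to the default route)
```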

Routing tables are built either statically or dynamically. Static routing tables are configured manually and change only when updated. Dynamic routing tables are built automatically when routers periodically share information that reflects their view of the network, which changes as routers go on- and offline. This allows routers to route packets effectively as network conditions, such as traffic congestion, change. Some examples of other protocols that are traditionally considered to work at layer 3 are as follows:

  1. Routing Information Protocol (RIP) Versions 1 and 2. The RIP v1 standard is defined in RFC 1058.5 Routing Information Protocol (RIP) is a standard for the exchange of routing information among gateways and hosts. RIP is most useful as an “interior gateway protocol.” RIP uses distance vector algorithms to determine the direction and distance to any link in the internetwork. If there are multiple paths to a destination, RIP selects the path with the least number of hops. However, because hop count is the only routing metric used by RIP, it does not necessarily select the fastest path to a destination.
     RIP v1 allows routers to update their routing tables at programmable intervals; the default interval is 30 seconds. The continual sending of routing updates by RIP v1 means that network traffic builds up quickly. To prevent a packet from looping infinitely, RIP allows a maximum hop count of 15. If the destination network is more than 15 routers away, it is considered unreachable and the packet is dropped.
     The RIP v2 standard is defined in RFC 1723 and updated for cryptographic authentication by RFC 4822.6 RIP v2 provides the following advances over RIP v1:
     • Carries a subnet mask.
     • Supports password authentication security.
     • Specifies the next hop address.
     • Does not require that routes be aggregated on the network boundary.
  2. Open Shortest Path First (OSPF) Versions 1 and 2. The OSPF v1 standard is defined in RFC 1131.7 Open Shortest Path First (OSPF) is an interior gateway routing protocol developed for IP networks, based on the shortest path first, or link-state, algorithm. Routers use link-state algorithms to send routing information to all nodes in an internetwork by calculating the shortest path to each node based on a topology of the Internet constructed by each node. Each router sends that portion of the routing table (which keeps track of routes to particular network destinations) that describes the state of its own links, and it also sends the complete routing structure (topology).
     The advantage of shortest path first algorithms is that their use results in smaller, more frequent updates everywhere. They converge quickly, thus preventing such problems as routing loops and count-to-infinity (when routers continuously increment the hop count to a particular network). This makes for a more stable network. The disadvantage of shortest path first algorithms is that they require large amounts of CPU power and memory.
     OSPF v2 is defined in RFC 1583 and updated by RFC 2328.8 It is used to allow routers to dynamically learn routes from other routers and to advertise routes to other routers. Advertisements containing routes are referred to as link state advertisements (LSAs) in OSPF. An OSPF router keeps track of the state of all the various network connections (links) between itself and any network it is trying to send data to; this is the behavior that makes it a link-state routing protocol.
     OSPF supports the use of classless IP address ranges and is very efficient. OSPF uses areas to organize a network into a hierarchical structure; it summarizes route information to reduce the number of advertised routes, thereby reducing network load, and it uses a designated router (elected via a process that is part of OSPF) to reduce the quantity and frequency of link state advertisements.
     OSPF selects the best routes by finding the lowest-cost paths to a destination. All router interfaces (links) are given a cost. The cost of a route is equal to the sum of all the costs configured on all the outbound links between the router and the destination network, plus the cost configured on the interface on which OSPF received the link state advertisement.
  3. Internet Control Message Protocol (ICMP). Internet Control Message Protocol (ICMP) is documented in RFC 792.9 ICMP messages are classified into two main categories:
     • ICMP error messages
     • ICMP query messages
     ICMP’s goals are to provide a means to send error messages for non-transient error conditions and to provide a way to probe the network in order to determine general characteristics about the network. Some of ICMP’s functions are to:
     • Announce network errors—Such as a host or an entire portion of the network being unreachable due to some type of failure. A TCP or UDP packet directed at a port number with no receiver attached is also reported via ICMP.
     • Announce network congestion—When a router begins buffering too many packets, due to an inability to transmit them as fast as they are being received, it will generate ICMP Source Quench messages. Directed at the sender, these messages should cause the rate of packet transmission to be slowed.
     • Assist troubleshooting—ICMP supports an Echo function, which sends a packet on a round trip between two hosts. Ping, a common network management tool, is based on this feature; it transmits a series of packets, measuring average round-trip times and computing loss percentages.
     • Announce timeouts—If an IP packet’s TTL field drops to zero, the router discarding the packet will often generate an ICMP packet announcing this fact. Traceroute is a tool that maps network routes by sending packets with small TTL values and watching the ICMP timeout announcements.
  4. Internet Group Management Protocol (IGMP). Internet Group Management Protocol (IGMP) is used to manage multicast groups, which are sets of hosts anywhere on a network that are interested in a particular multicast. Multicast agents administer multicast groups, and hosts send IGMP messages to local agents to join and leave groups. There are three versions of IGMP:10
     • Version 1—Multicast agents periodically send queries to hosts on their networks to update their databases of multicast group membership. Hosts stagger their replies to prevent a storm of traffic to the agent. When replies no longer come from a group, agents stop forwarding multicasts to that group.
     • Version 2—This version extends the functionality of version 1. It defines two types of queries: a general query to determine the membership of all groups and a group-specific query to determine the membership of a particular group. In addition, a member can notify all multicast routers that it wishes to leave a group.
     • Version 3—This version further enhances IGMP by allowing hosts to specify from which sources they want to receive multicasts.
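As a rough illustration of the link-state approach, the following sketch runs Dijkstra’s shortest path first algorithm over a hypothetical four-router topology with invented link costs. This is a teaching aid for the lowest-cost-path idea, not how any router actually implements OSPF:

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm: the lowest-cost-path computation at the heart of
    link-state protocols such as OSPF."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale heap entry; a cheaper path won
        for neighbor, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical topology: router -> {neighbor: link cost}
topology = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R4": 1},
    "R3": {"R2": 3, "R4": 9},
    "R4": {},
}
print(shortest_paths(topology, "R1"))
# R2 is reached via R3 (cost 5 + 3 = 8), not directly (cost 10);
# R4 is reached via R3 and R2 (cost 8 + 1 = 9).
```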

For a listing of protocols associated with layer 3 of the OSI model, see the following:

  • IPv4/IPv6—Internet Protocol
  • DVMRP—Distance Vector Multicast Routing Protocol
  • ICMP—Internet Control Message Protocol
  • IGMP—Internet Group Management Protocol
  • IPsec—Internet Protocol Security
  • IPX—Internetwork Packet Exchange
  • DDP—Datagram Delivery Protocol
  • SPB—Shortest Path Bridging
Layer 4: Transport Layer

The transport layer creates an end-to-end transport between peer hosts. User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) are important transport layer protocols in the TCP/IP suite. UDP does not ensure that transmissions are received without errors, and therefore it is classified as a connectionless, unreliable protocol. This does not mean that UDP is poorly designed. Rather, the application will perform the error checking instead of the protocol.

Connection-oriented reliable protocols, such as TCP, ensure integrity by providing error-free transmission. They divide information from multiple applications on the same host into segments to be transmitted on a network. Because it is not guaranteed that the peer transport layer receives segments in the order that they were sent, reliable protocols reassemble received segments into the correct order. When the peer layer receives a segment, it responds with an acknowledgment. If an acknowledgment is not received, the segment is retransmitted. Lastly, reliable protocols ensure that each host does not receive more data than it can process without loss of data.

TCP data transmissions, connection establishment, and connection termination maintain specific control parameters that govern the entire process. The control bits are listed as follows:

  • URG—Urgent Pointer field significant
  • ACK—Acknowledgement field significant
  • PSH—Push Function
  • RST—Reset the connection
  • SYN—Synchronize sequence numbers
  • FIN—No more data from sender

These control bits are used for many purposes; chief among them is the establishment of a guaranteed communication session via a process referred to as the TCP three way handshake, as described below:

  1. First, the client sends a SYN segment. This is a request to the server to synchronize sequence numbers. In this segment, the client specifies its initial sequence number (ISN). To initialize a connection, the client and server must synchronize each other’s sequence numbers.
  2. Second, the server sends an ACK and a SYN in order to acknowledge the client’s request for synchronization. At the same time, the server sends its own request to the client for synchronization of its sequence numbers. There is one major difference in this transmission from the first one: the server transmits an acknowledgement number to the client. This number, the client’s ISN incremented by one, is proof to the client that the ACK is specific to the SYN the client initiated.
  3. Third, the client sends an ACK in order to acknowledge the server’s request for synchronization, deriving its acknowledgement number the same way the server did (the server’s ISN plus one). The client’s acknowledgement of the server’s request for synchronization completes the process of establishing a reliable connection.
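The sequence-number bookkeeping of the three steps above can be traced with a toy model. No packets are sent; the function below merely shows that each side acknowledges the other’s initial sequence number plus one (the ISNs are random here, as they effectively are in real TCP stacks):

```python
import random

# A toy model of the three-way handshake's sequence-number bookkeeping.
def three_way_handshake():
    client_isn = random.randrange(2**32)           # step 1: client SYN, seq = client ISN
    server_isn = random.randrange(2**32)
    syn_ack = {"seq": server_isn,                  # step 2: server SYN-ACK,
               "ack": (client_isn + 1) % 2**32}    #   ack = client ISN + 1
    final_ack = {"ack": (server_isn + 1) % 2**32}  # step 3: client ACK, ack = server ISN + 1
    return client_isn, server_isn, syn_ack, final_ack

c, s, syn_ack, ack = three_way_handshake()
assert syn_ack["ack"] == (c + 1) % 2**32   # server acknowledged the client's SYN
assert ack["ack"] == (s + 1) % 2**32       # client acknowledged the server's SYN
```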

For a listing of protocols associated with layer 4 of the OSI model, see below:

  • FCP—Fibre Channel Protocol
  • RDP—Reliable Datagram Protocol
  • SCTP—Stream Control Transmission Protocol
  • SPX—Sequenced Packet Exchange
  • SST—Structured Stream Transport
  • TCP—Transmission Control Protocol
  • UDP—User Datagram Protocol
Layer 5: Session Layer

This layer provides a logical, persistent connection between peer hosts. A session is analogous to a conversation that is necessary for applications to exchange information. The session layer is responsible for creating, maintaining, and tearing down the session. Three modes are offered:

  • Full Duplex—Both hosts can exchange information simultaneously, independent of each other.
  • Half Duplex—Hosts can exchange information but only one host at a time.
  • Simplex—Only one host can send information to its peer. Information travels in one direction only.

For a listing of protocols associated with layer 5 of the OSI model, see below:

  • H.245—Call Control Protocol for Multimedia Communication
  • iSNS—Internet Storage Name Service
  • PAP—Password Authentication Protocol
  • PPTP—Point-to-Point Tunneling Protocol
  • RPC—Remote Procedure Call Protocol
  • RTCP—Real-time Transport Control Protocol
  • SMPP—Short Message Peer-to-Peer
Layer 6: Presentation Layer

The applications that are communicating over a network may represent information differently, such as using incompatible character sets. This layer provides services to ensure that the peer applications use a common format to represent data. For example, if a presentation layer wants to ensure that Unicode-encoded data can be read by an application that understands the ASCII character set only, it could translate the data from Unicode to a standard format. The peer presentation layer could translate the data from the standard format into the ASCII character set.
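A rough Python analogue of this translation, using UTF-8 as an assumed “standard format” on the wire and substituting the characters an ASCII-only receiver cannot represent:

```python
# Presentation-layer-style conversion: the sender holds text in Unicode,
# the assumed wire format is UTF-8, and the receiver only understands ASCII,
# so unmappable characters are substituted with "?".
text = "naïve café"

wire = text.encode("utf-8")        # sender: Unicode -> common wire format
received = wire.decode("utf-8")    # receiver: common wire format -> Unicode
ascii_view = received.encode("ascii", errors="replace").decode("ascii")

print(ascii_view)  # na?ve caf?
```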

In many widely used applications and protocols, no distinction is made between the presentation and application layers. For example, Hypertext Transfer Protocol (HTTP), generally regarded as an application layer protocol, has presentation layer aspects such as the ability to identify character encoding for proper conversion, which is then done in the application layer.

Layer 7: Application Layer

This layer is the application’s portal to network-based services, such as determining the identity and availability of remote applications. When an application or the operating system transmits or receives data over a network, it uses the services from this layer. Many well-known protocols, such as Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP), operate at this layer. It is important to remember that the application layer is not the application, especially when an application has the same name as a layer 7 protocol. For example, the FTP command on many operating systems initiates an application called FTP, which eventually uses the FTP protocol to transfer files between hosts. While some protocols are easily ascribed to a certain layer based on their form and function, others are very difficult to place precisely. An example of a protocol that falls into this category would be the Border Gateway Protocol.

Border Gateway Protocol (BGP)

Border Gateway Protocol (BGP) was created to replace the Exterior Gateway Protocol (EGP) to allow fully decentralized routing. This allowed the Internet to become a truly decentralized system. BGP performs inter-domain routing in Transmission Control Protocol/Internet Protocol (TCP/IP) networks. BGP is a protocol for exchanging routing information between gateway hosts (each with its own router) in a network of autonomous systems. BGP is often the protocol used between gateway hosts on the Internet. The routing table contains a list of known routers, the addresses they can reach, and a cost metric associated with the path to each router so that the best available route is chosen.

Hosts using BGP communicate using the Transmission Control Protocol (TCP) and send updated router table information only when one host has detected a change. Only the affected part of the routing table is sent. BGP-4, the latest version, lets administrators configure cost metrics based on policy statements.11

Many consider BGP an application that happens to affect the routing table. There are also those that would consider BGP a routing protocol as opposed to an application that affects the routing table. BGP creates and uses code attached to sockets. Does that mean that it should be considered an application? In the case of BGP, when viewed in a traffic sniffer, there is a layer 4 header between the IP Header and the Routing Protocol header. Does that mean that we can say that BGP is an application that transports routing information at layer 4?

Perhaps a more appropriate way to classify a protocol is to look at the services it provides. BGP clearly provides services to the network layer, not the traditional transport services, but rather, BGP provides control information about how the network layer operates. This could allow us to move BGP down to the network layer.

This perspective is especially useful for management, control, and supervisory protocols that can be seen as applications of other protocols, and yet providing the necessary control, management, and supervisory information to the managed infrastructure that is on lower layers than the application layer. From this viewpoint, while BGP is truly just an application running over TCP, it is intimately tied into the operation of the network layer because it provides the necessary information about how the network layer should operate. That means that we could say that BGP is implemented as an application layer protocol, but with respect to its function, it is a network layer protocol.

As a security practitioner, you should understand how BGP works in real networks. Following are several links to simulators that can be used to model BGP and demonstrate how it works:

  • BGPlay—An HTML widget that presents a graphical visualization of BGP routes and updates for any real AS on the Internet: https://stat.ripe.net/widget/bgplay
  • SSFnet—The SSFnet network simulator includes a BGP implementation developed by BJ Premore: http://www.ssfnet.org/homePage.html
  • C-BGP—A BGP simulator able to perform large-scale simulations, modelling the ASes of the Internet or ASes as large as a Tier-1: http://c-bgp.sourceforge.net/
  • NetViews—A Java application that monitors and visualizes BGP activity in real time: http://netlab.cs.memphis.edu/projects_netviews.html

For a listing of protocols associated with layer 7 of the OSI model, see below:

  • DHCP—Dynamic Host Configuration Protocol
  • DNS—Domain Name System
  • HTTP—Hypertext Transfer Protocol
  • IMAP—Internet Message Access Protocol
  • LDAP—Lightweight Directory Access Protocol
  • SMTP—Simple Mail Transfer Protocol
  • FTP—File Transfer Protocol

TCP/IP Reference Model

The U.S. Department of Defense developed the TCP/IP model, which is very similar to the OSI model but with fewer layers, as shown in Figure 6-2.

The link layer provides physical communication and routing within a network. It corresponds to everything required to implement an Ethernet. It is sometimes described as two layers, a physical layer and a link layer. In terms of the OSI model, it covers layers 1 and 2. The network layer includes everything that is required to move data between networks. It corresponds to the IP protocol, but also Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP). In terms of the OSI model, it corresponds to layer 3.


Figure 6-2: OSI Model & TCP/IP Model

The transport layer includes everything required to move data between applications. It corresponds to TCP and UDP. In terms of the OSI model, it corresponds to layer 4. The application layer covers everything specific to a session or application; in other words, everything relating to the data payload. In terms of the OSI model, it corresponds to layers 5 through 7. Owing to its coarse structure, it is not well suited to describe application-level information exchange.

As with the OSI model, data that is transmitted on the network enters the top of the stack, and each of the layers, with the exception of the physical layer, encapsulates information for its peer at the beginning and sometimes the end of the message that it receives from the next highest layer. On the remote host, each layer removes the information that its peer encapsulated before passing the message to the next higher layer. Also, each layer processes messages in a modular fashion, without concern for how the other layers on the same host process the message.
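A toy sketch of this encapsulation behavior, with invented header names standing in for the real binary headers, shows each layer prepending its own header on the way down and its peer stripping exactly that header on the way up:

```python
# Toy illustration of encapsulation. Header names are invented; real headers
# are binary structures, and the link layer may also append a trailer.
LAYERS = ["application", "transport", "network", "link"]

def encapsulate(payload: str) -> str:
    """Each lower layer wraps what it receives from the layer above."""
    for layer in LAYERS[1:]:              # the application data is the payload itself
        payload = f"[{layer}-hdr]{payload}"
    return payload

def decapsulate(frame: str) -> str:
    """Each layer on the remote host strips only its peer's header."""
    for layer in reversed(LAYERS[1:]):
        header = f"[{layer}-hdr]"
        assert frame.startswith(header), f"expected {header}"
        frame = frame[len(header):]
    return frame

frame = encapsulate("GET /index.html")
print(frame)               # [link-hdr][network-hdr][transport-hdr]GET /index.html
print(decapsulate(frame))  # GET /index.html
```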

IP Networking

Internet protocol (IP) is responsible for sending packets from the source to the destination hosts. Because it is an unreliable protocol, it does not guarantee that packets arrive error free or in the correct order. That task is left to protocols on higher layers. IP will subdivide packets into fragments when a packet is too large for a network.

Hosts are distinguished by the IP addresses of their network interfaces. The address is expressed as four octets separated by a dot (.), for example, 216.12.146.140. Each octet may have a value between 0 and 255. However, 0 and 255 are not used for hosts. The latter is used for broadcast addresses, and the former’s meaning depends on the context in which it is used. Each address is subdivided into two parts: the network number and the host. The network number, assigned by an external organization, such as the Internet Corporation for Assigned Names and Numbers (ICANN), represents the organization’s network. The host represents the network interface within the network.

Originally, the part of the address that represented the network number depended on the network’s class. As shown in Table 6-1, a Class A network used the leftmost octet as the network number, Class B used the leftmost two octets, etc.

Table 6-1: Network Classes

Class   Range of First Octet   Octets for Network Number   Number of Hosts in Network
A       1–126                  1                           16,777,214
B       128–191                2                           65,534
C       192–223                3                           254
D       224–239                Multicast
E       240–255                Reserved

The part of the address that is not used as the network number is used to specify the host. For example, the address 216.12.146.140 represents a Class C network. Therefore, the network portion of the address is represented by the 216.12.146, and the unique host address within the network block is represented by 140.
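The classful interpretation can be sketched as a small lookup on the first octet, following the ranges in Table 6-1 (the function name is ours, chosen for illustration):

```python
def address_class(addr: str) -> str:
    """Classify an IPv4 address by its first octet, per the historical
    classful scheme (CIDR has since made this largely obsolete)."""
    first = int(addr.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if first == 127:
        return "loopback"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D (multicast)"
    return "E (reserved)"

print(address_class("216.12.146.140"))  # C
print(address_class("10.0.0.1"))        # A
```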

127 is reserved for a computer’s loopback address. Usually the address 127.0.0.1 is used. The loopback address is used to provide a mechanism for self-diagnosis and troubleshooting at the machine level. This mechanism allows a network administrator to treat a local machine as if it were a remote machine and ping the network interface to establish whether or not it is operational.

The explosion of Internet utilization in the 1990s caused a shortage of unallocated IPv4 addresses. To help remedy the problem, Classless Inter-Domain Routing (CIDR) was implemented. CIDR does not require that a new address be allocated based on the number of hosts in a network class. Instead, addresses are allocated in contiguous blocks from the pool of unused addresses.

To ease network administration, networks are typically subdivided into subnets. Because subnets cannot be distinguished with the addressing scheme discussed so far, a separate mechanism, the subnet mask, is used to define the part of the address that is used for the subnet. Bits in the subnet mask are 1 when the corresponding bits in the address are used for the subnet. The remaining bits in the mask are 0. For example, if the leftmost three octets (24 bits) are used to distinguish subnets, the subnet mask is 11111111 11111111 11111111 00000000. A string of 32 1s and 0s is very unwieldy, so the mask is usually converted to decimal notation: 255.255.255.0. Alternatively, the mask is expressed with a slash (/) followed by the number of 1s in the mask. The above mask would be written as /24.
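Python’s standard ipaddress module can check the /24 arithmetic above, using the example network from this section:

```python
import ipaddress

# The /24 example from the text, via the standard-library ipaddress module.
net = ipaddress.ip_network("216.12.146.0/24")

print(net.netmask)        # 255.255.255.0
print(net.prefixlen)      # 24
print(net.num_addresses)  # 256 (254 usable hosts plus network and broadcast)
print(ipaddress.ip_address("216.12.146.140") in net)  # True
print(ipaddress.ip_address("127.0.0.1").is_loopback)  # True
```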

IPv6

After the explosion of Internet usage in the mid-1990s, IP began to experience serious growing pains. It was obvious that the phenomenal usage of the Internet was stretching the protocol to its limit. The most obvious problems were a shortage of unallocated IP addresses and serious shortcomings in security. IPv6 is a modernization of IPv4 that includes:

  • A much larger address field: IPv6 addresses are 128 bits, which supports 2^128 hosts. Suffice it to say that we will not run out of addresses.
  • Improved security: As we will discuss below, IPSec must be implemented in IPv6. This will help ensure the integrity and confidentiality of IP packets and allow communicating partners to authenticate with each other.
  • A more concise IP packet header: Hosts will require less time to process each packet, which will result in increased throughput.
  • Improved quality of service: This will help services obtain an appropriate share of a network’s bandwidth.
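The size of the IPv6 address space can be checked with the standard ipaddress module (2001:db8::/32 is a prefix reserved for documentation examples):

```python
import ipaddress

addr = ipaddress.ip_address("2001:db8::1")

print(addr.version)   # 6
print(addr.exploded)  # 2001:0db8:0000:0000:0000:0000:0000:0001 (full 128 bits)
print(ipaddress.ip_network("::/0").num_addresses == 2**128)  # True
```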

Transmission Control Protocol (TCP)

The Transmission Control Protocol (TCP) provides connection-oriented data management and reliable data transfer. TCP and UDP map data connections through the association of port numbers with services provided by the host. TCP and UDP port numbers are managed by the Internet Assigned Numbers Authority (IANA). A total of 65,536 (2^16) ports exist. These are broken into three ranges:

  • Well-Known Ports—Ports 0 through 1023 are considered to be well known. Ports in this range are assigned by IANA and, on most systems, can only be used by privileged processes and users.
  • Registered Ports—Ports 1024 through 49151 can be registered with IANA by application developers but are not assigned by IANA. A developer may choose a registered port rather than a well-known one because, on most systems, ordinary users lack the privileges needed to run an application on a well-known port.
  • Dynamic or Private Ports—Ports 49152 through 65535 can be freely used by applications; one typical use for these ports is initiation of return connections for requested data or services.
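The three IANA ranges reduce to simple boundary checks, as this short sketch shows (the function name is illustrative, not from any standard library):

```python
def port_range(port: int) -> str:
    """Classify a TCP/UDP port into one of the three IANA ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("port must be 0-65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic/private"

print(port_range(443))    # well-known (HTTPS)
print(port_range(3306))   # registered (MySQL)
print(port_range(50000))  # dynamic/private
```

Knowing these boundaries helps when reading firewall rules or scan results: traffic on a high dynamic port is often the return half of a connection that originated on a well-known port.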

Attacks against TCP include sequence number attacks, session hijacking, and SYN floods. More information about attacks can be found later in this domain.

User Datagram Protocol (UDP)

The User Datagram Protocol (UDP) provides a lightweight service for connectionless data transfer without error detection and correction. For UDP, the same considerations for port numbers as described for TCP in the section on Transmission Control Protocol apply. A number of protocols within the transport layer have been defined on top of UDP, thereby effectively splitting the transport layer into two. Protocols stacked between layers 4 and 5 include the Real-time Transport Protocol (RTP) and RTP Control Protocol (RTCP) as defined in RFC 3550, MBone, a multicasting protocol, Reliable UDP (RUDP), and the Stream Control Transmission Protocol (SCTP), originally defined in RFC 2960 (since obsoleted by RFC 4960). As a connectionless protocol, UDP services are easy prey for spoofing attacks.

Internet—Intranet

The Internet, a global network of independently managed, interconnected networks, has changed life on earth. People from anywhere on the globe can share information almost instantaneously using a variety of standardized tools such as Web technologies or email.

An intranet, on the other hand, is a network of interconnected internal networks within an organization, which allows information to be shared within the organization and sometimes with trusted partners and suppliers. For instance, during a project, staff in a global company can easily access and exchange documents, thereby working together almost as if they were in the same office. As with the Internet, the ease with which information can be shared comes with the responsibility to protect it from harm. Intranets will typically host a wide range of organizational data. For this reason, even though these resources are technically on an internal network, access to them is usually tied to existing internal authentication services, such as a directory service coupled with multi-factor authentication.

Extranet

An extranet differs from a DMZ (demilitarized zone) in the following way: An extranet is made available to authenticated connections that have been granted an access account to the resources in the extranet. Conversely, a DMZ will host publicly available resources that must support unauthenticated connections from just about any source, such as DNS servers and email servers. Because companies often need to share large quantities of information, frequently in an automated fashion, one company will typically grant the other controlled access to an isolated segment of its network to exchange information through the use of an extranet.

Granting an external organization access to a network comes with significant risk. Both companies have to be certain that the controls, both technical and nontechnical (e.g., operational and policy), effectively minimize the risk of unauthorized access to information. Where access must be granted to external organizations, additional controls such as deterministic routing can be applied upstream by service providers. This sort of safeguard is relatively simple to employ and has significant advantages because it reduces the ability of malicious entities to target an extranet for compromise leading to internal network penetration.

Companies that access extranets often treat the information within these networks and their servers as “trusted,” confidential, and possessing integrity (uncorrupted and valid). However, these companies do not have control of each other’s security profile. Who knows what kind of trouble a user can get into if he or she accesses supposedly trusted information through an extranet from an organization whose network has been compromised? To mitigate this potential risk, security architects and practitioners need to demand that certain security controls are in place before granting access to an extranet.

Dynamic Host Configuration Protocol (DHCP)

System and network administrators are busy people and hardly have the time to assign IP addresses to hosts and track which addresses are allocated. To relieve administrators from the burden of manually assigning addresses, many organizations use the Dynamic Host Configuration Protocol (DHCP) to automatically assign IP addresses to workstations (servers and network devices usually are assigned static addresses).

Dynamically assigning a host’s IP configuration is fairly simple. When a workstation boots, it broadcasts a DHCPDISCOVER request on the local LAN, which can be forwarded to other subnets by DHCP relay agents. DHCP servers respond with a DHCPOFFER packet, which contains a proposed configuration, including an IP address. The DHCP client selects a configuration from the received DHCPOFFER packets and replies with a DHCPREQUEST. The DHCP server replies with a DHCPACK (DHCP acknowledgment), and the workstation adopts the configuration. Receiving a DHCP-assigned IP address is referred to as receiving a lease.

A client does not request a new lease every time it boots. Part of the negotiation of IP addresses includes establishing a time interval for which the lease is valid, along with timers that tell the client when it must attempt to renew the lease; these are known as the renewal (T1) and rebinding (T2) timers. As long as the lease has not expired, the client is not required to ask for a new one. Within the DHCP servers, administrators create address pools from which addresses are dynamically assigned when requested by a client. In addition, they can assign specific hosts static (i.e., permanent) addresses through the use of client reservations.
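The four-message exchange and the lease timers can be sketched as a toy model. This does not speak real DHCP; the message names follow RFC 2131, while the address and lease values are illustrative assumptions:

```python
# Toy model of the DHCP DISCOVER/OFFER/REQUEST/ACK exchange.
def dhcp_handshake(offered_ip="192.0.2.50", lease_seconds=86400):
    transcript = []
    transcript.append(("client", "DHCPDISCOVER"))            # broadcast
    transcript.append(("server", "DHCPOFFER", offered_ip))   # proposed config
    transcript.append(("client", "DHCPREQUEST", offered_ip)) # client's choice
    transcript.append(("server", "DHCPACK", offered_ip, lease_seconds))
    # RFC 2131 defaults: renew (T1) at 50% of the lease,
    # rebind (T2) at 87.5% of the lease.
    t1, t2 = lease_seconds * 0.5, lease_seconds * 0.875
    return transcript, t1, t2

messages, t1, t2 = dhcp_handshake()
for m in messages:
    print(m)
print(f"renew after {t1:.0f}s, rebind after {t2:.0f}s")
```

The takeaway is that the client, not the server, drives renewal: it contacts its server unicast at T1 and falls back to broadcasting at T2 if the server does not answer.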

Because the DHCP server and client do not always authenticate with each other, neither host can be sure that the other is legitimate. For example, in a DHCP network, an attacker can plug his or her workstation into a network jack and receive an IP address, without having to obtain one by guessing or through social engineering. Also, a client cannot be certain that a DHCPOFFER packet is from a DHCP server instead of an intruder masquerading as a server.

To counteract these concerns, in June 2001 the IETF published RFC 3118, which specifies how to implement Authentication for DHCP Messages.12 This standard describes an enhancement that replaces the normal DHCP messages with authenticated ones. Clients and servers check the authentication information and reject messages that come from invalid sources. The technology involves the use of a new DHCP option type, the Authentication option, and operating changes to several of the leasing processes to use this option. Although these vulnerabilities are not trivial, the ease of administration of IP addresses usually makes the risk from the vulnerabilities acceptable, except in very high security environments. Ultimately, the security architect will need to weigh the risks associated with using DHCP without an authentication option and decide how best to proceed.

Internet Control Message Protocol (ICMP)

The Internet Control Message Protocol (ICMP) is used for the exchange of control messages between hosts and gateways and is used for diagnostic tools such as ping and traceroute. ICMP can be leveraged for malicious behavior, including man-in-the-middle and denial-of-service attacks.

Ping of Death13

Ping is a diagnostic program used to determine if a specified host is on the network and can be reached by the pinging host. It sends an ICMP echo packet to the target host and waits for the target to return an ICMP echo reply. Amazingly, an enormous number of operating systems would crash or become unstable upon receiving an ICMP echo larger than the maximum legal IP packet size of 65,535 bytes. Before the ping of death became famous, the source of the attack was difficult to find because many system administrators would ignore a seemingly harmless ping in their logs.

ICMP Redirect Attacks14

A router may send an ICMP redirect to a host to tell it to use a different, more effective default route. However, an attacker can send an ICMP redirect to a host telling it to use the attacker’s machine as a default route. The attacker will forward all of the redirected traffic to a router so that the victim will not know that his or her traffic has been intercepted. This is a good example of a man-in-the-middle attack. Some operating systems will crash if they receive a storm of ICMP redirects. The security practitioner should have several tools in his or her toolbox to be able to model and interact with attacks such as the ICMP redirect attack in order to better understand them. One such tool that will be very effective is called Scapy.

Scapy is a powerful interactive packet manipulation program. It is able to forge or decode packets of a wide number of protocols, send them on the wire, capture them, match requests and replies, and much more. It can easily handle most classical tasks like scanning, tracerouting, probing, unit tests, attacks, or network discovery. (It can replace hping, 85% of nmap, arpspoof, arp-sk, arping, tcpdump, tethereal, p0f, etc.) It also performs very well at a lot of other specific tasks that most other tools cannot handle, like sending invalid frames, injecting your own 802.11 frames, and combining techniques (VLAN hopping+ARP cache poisoning, VOIP decoding on WEP encrypted channel, etc.). You can find Scapy here: http://www.secdev.org/projects/scapy/.
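To see what such a packet looks like on the wire, the sketch below builds an ICMP redirect message using only the standard library (Scapy can do the same in one line). The gateway address is a documentation-range example, and actually transmitting such a packet requires raw sockets and authorization; this is a lab-analysis sketch only:

```python
import socket
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard ones'-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_icmp_redirect(gateway: str, payload: bytes = b"") -> bytes:
    """ICMP type 5 (Redirect), code 1 (redirect for host).

    The gateway field is where an attacker would advertise
    its own address as the victim's new next hop.
    """
    gw = socket.inet_aton(gateway)
    header = struct.pack("!BBH4s", 5, 1, 0, gw)   # checksum zeroed
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBH4s", 5, 1, csum, gw) + payload

pkt = build_icmp_redirect("198.51.100.7")
print(pkt.hex())
```

A correct Internet checksum has the property that recomputing it over the finished packet yields zero, which is also how receivers validate the message.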

Ping Scanning

Ping scanning is a basic network mapping technique that helps narrow the scope of an attack. An attacker can use one of many tools, such as Very Simple Network Scanner for Windows-based platforms or NMAP for Linux- and Windows-based platforms, to ping all of the addresses in a range.15 If a host replies to a ping, then the attacker knows that a host exists at that address.
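The core of a ping sweep is just address enumeration plus an echo request per address. The sketch below (function name and network range are illustrative) enumerates the usable hosts in a CIDR block and, optionally, shells out to the system ping command; the flags shown are the Linux ping options:

```python
import ipaddress
import subprocess

def ping_sweep(cidr: str, do_ping: bool = False):
    """Enumerate usable host addresses in a range; optionally ping each.

    With do_ping=False, only the enumeration logic runs, so the
    sketch works without network access.
    """
    alive = []
    for host in ipaddress.ip_network(cidr).hosts():
        if do_ping:
            # -c 1: one echo request; -W 1: one-second timeout (Linux ping)
            up = subprocess.run(
                ["ping", "-c", "1", "-W", "1", str(host)],
                capture_output=True).returncode == 0
            if up:
                alive.append(str(host))
        else:
            alive.append(str(host))
    return alive

# A /29 contains 8 addresses, 6 of them usable hosts.
print(ping_sweep("192.0.2.0/29"))
```

Note that absence of a reply does not prove absence of a host; firewalls commonly drop echo requests, which is why scanners such as NMAP combine ping sweeps with TCP and ARP probes.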

Traceroute Exploitation

Traceroute is a diagnostic tool that displays the path a packet traverses between a source and destination host. Traceroute can be used maliciously to map a victim network and learn about its routing. In addition, there are tools, such as Firewalk, that use techniques similar to those of traceroute to enumerate a firewall rule set.16 The Firewalk tool stopped being actively developed and maintained as of version 5.0 in 2003. The functionality of the Firewalk tool has been subsumed into NMAP as part of the rule set that can be configured for use.17

What the Firewalk host rule tries to do is discover firewall rules using an IP TTL expiration technique known as firewalking. To determine a rule on a given gateway, the scanner sends a probe to a target host (the “metric”) located behind the gateway, with a TTL one higher than that of the gateway. If the probe is forwarded by the gateway, then we can expect to receive an ICMP_TIME_EXCEEDED reply from the gateway’s next-hop router, or eventually from the metric itself if it is directly connected to the gateway. Otherwise, the probe will time out.

Remote Procedure Calls

Remote procedure calls (RPCs) allow code to be executed across hosts: a client sends a set of instructions to an application residing on a different host on the network. Generically, several (mutually incompatible) services in this category exist, such as Distributed Computing Environment RPC (DCE RPC) and Sun’s Open Network Computing RPC (ONC RPC, also referred to as SunRPC or simply RPC). It is important to note that RPC does not in fact provide any services on its own; instead, it provides a brokering service by providing (basic) authentication and a way to address the actual service. Common Object Request Broker Architecture (CORBA) and Microsoft Distributed Component Object Model (DCOM) can be viewed as RPC-type protocols. Security problems with RPC include its weak authentication mechanism, which can be leveraged for privilege escalation by an attacker.

Network Topographies and Relationships

There are many network topographies and relationships. It is important for the SSCP to understand the merits of each of them. The following sections explore all of the major topographies and related important information.

Bus

A bus topology is a LAN with a central cable (bus) to which all nodes (devices) connect. All nodes transmit directly on the central bus. Each node listens to all of the traffic on the bus and processes only the traffic that is destined for it. This topology relies on the data-link layer to determine when a node can transmit a frame on the bus without colliding with another frame on the bus. A LAN with a bus topology is shown in Figure 6-3.

c06f003.tif

Figure 6-3: Network with a bus topology

Advantages of buses include:

  • Adding a node to the bus is easy.
  • A node failure will not likely affect the rest of the network.

Disadvantages of buses include:

  • Because there is only one central bus, a bus failure will leave the entire network inoperable.

Tree

A tree topology is similar to a bus. Instead of all of the nodes connecting to a central bus, the devices connect to a branching cable. Like a bus, every node receives all of the transmitted traffic and processes only the traffic that is destined for it. Furthermore, the data-link layer must transmit a frame only when there is not a frame on the wire. A network with a tree topology is shown in Figure 6-4.

c06f004.tif

Figure 6-4: Network with a tree topology

Advantages of a tree include:

  • Adding a node to the tree is easy.
  • A node failure will not likely affect the rest of the network.

Disadvantages of a tree include:

  • A cable failure could leave the entire network inoperable.

Ring

A ring is a closed-loop topology. Data is transmitted in one direction only, based on the direction in which the ring was initialized to transmit, either clockwise or counter-clockwise. Each device receives data from its upstream neighbor only and transmits data to its downstream neighbor only. Typically, rings use coaxial cables or fiber optics. A Token Ring network is shown in Figure 6-5.

c06f005.tif

Figure 6-5: Network with a ring topology

Advantages of rings include:

  • Because rings use tokens, one can predict the maximum time that a node must wait before it can transmit (i.e., the network is deterministic).
  • Rings can be used as a LAN or network backbone.

Disadvantages of rings include:

  • Simple rings have a single point of failure. If one node fails, the entire ring fails. Some rings, such as fiber distributed data interface (FDDI), use dual rings for failover.

Mesh

In a mesh network, all nodes are connected to every other node on the network. A full mesh network is usually too expensive because it requires many connections. As an alternative, a partial mesh can be employed in which only selected nodes (typically the most critical) are connected in a mesh, and the remaining nodes are connected to a few devices. As an example, core switches, firewalls, and routers and their hot standbys are often all connected to ensure as much availability as possible. A mesh network is shown in Figure 6-6.

c06f006.tif

Figure 6-6: Network with a mesh topology

Advantages of a mesh include:

  • Mesh networks provide a high level of redundancy.

Disadvantages of a mesh include:

  • Mesh networks are very expensive because of the enormous number of cables that are required.

Star

All nodes in a star network are connected to a central device, such as a hub, switch, or router. Modern LANs usually employ a star topology. A star network is shown in Figure 6-7.

c06f007.tif

Figure 6-7: Network with a star topology

Advantages of a star include:

  • Star networks require fewer cables than full or partial mesh.
  • Star networks are easy to deploy, and nodes can be easily added or removed.

Disadvantages of a star include:

  • The central connection device is a single point of failure. If it is not functional, all of the connected nodes lose network connectivity.

There are many points that the security architect and practitioner must consider about transmitting information from sender to receiver. For example, will the information be expressed as an analog or digital wave? How many recipients will be there? If the transmission media will be shared with others, how can one ensure that the signals will not interfere with each other?

Unicast, Multicast, and Broadcast Transmissions

Most communication, especially that directly initiated by a user, is from one host to another. For example, when a person uses a browser to send a request to a web server, he or she sends a packet to the web server. A transmission with one receiving host is called a unicast transmission.

A host can send a broadcast to everyone on its network or sub-network. Depending on the network topology, the broadcast could have anywhere from one to tens of thousands of recipients. Like a person standing on a soapbox, this is a noisy method of communication. Typically, only one or two destination hosts are interested in the broadcast; the other recipients waste resources to process the transmission. However, there are productive uses for broadcasts. Consider a router that knows a device’s IP address but must determine the device’s MAC address. The router will broadcast an Address Resolution Protocol (ARP) request asking for the device’s MAC address.

Notice how one broadcast could result in hundreds or even thousands of packets on the network. Intruders often leverage this fact in denial-of-service attacks.

Public and private networks are used more often than ever for streaming transmissions, such as movies, videoconferences, and music. Given the intense bandwidth needed to transmit these streams, and that the sender and recipients are not necessarily on the same network, how does one transmit the stream to only the interested hosts? The sender could send a copy of the stream via unicast to each receiver. Unless there is a very small audience, unicast delivery is not practical because the multiple simultaneous copies of the large stream on the network at the same time could cause congestion. Delivery with broadcasts is another possibility, but every host would receive the transmission, even if they were not interested in the stream.

Multicasting was designed to deliver a stream to only interested hosts. Radio broadcasting is a typical analogy for multicasting. To select a specific radio show, you tune a radio to the broadcasting station. Likewise, to receive a desired multicast, you join the corresponding multicast group.

Multicast agents are used to route multicast traffic over networks and administer multicast groups. Each network and sub-network that supports multicasting must have at least one multicast agent. A host uses the Internet Group Management Protocol (IGMP) to tell a local multicast agent that it wants to join a specific multicast group. Multicast agents also route multicasts to local hosts that are members of the multicast’s group and relay multicasts to neighboring agents.

When a host wants to leave a multicast group, it sends an IGMP message to a local multicast agent. Multicasts do not use reliable sessions. Therefore, the multicasts are transmitted as best effort, with no guarantee that datagrams are received. As an example, consider a server multicasting a videoconference to desktops that are members of the same multicast group as the server. The server transmits to a local multicast agent. Next, the multicast agent relays the stream to other agents. All of the multicast agents transmit the stream to local hosts that are members of the same multicast group as the server.
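From an application's point of view, joining a multicast group is a socket option; the operating system then emits the IGMP membership report on the program's behalf. The sketch below uses an administratively scoped group address and port chosen for illustration, and hedges the join itself since it can fail on hosts without a multicast-capable interface:

```python
import socket
import struct

GROUP = "239.1.2.3"  # administratively scoped multicast group (assumption)
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq structure: 4-byte group address + 4-byte local interface
# (0.0.0.0 lets the kernel pick the default interface).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
try:
    # The kernel sends the IGMP membership report for us.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print(f"joined {GROUP}; datagrams sent to the group now arrive here")
except OSError as e:
    print(f"join failed (no multicast-capable interface?): {e}")
finally:
    sock.close()
```

Leaving the group is symmetric: the same structure is passed with the IP_DROP_MEMBERSHIP option, which triggers the IGMP leave message described above.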

Circuit-Switched Networks

Circuit-switched networks establish a dedicated circuit between endpoints. These circuits consist of dedicated switch connections. Neither endpoint starts communicating until the circuit is completely established. The endpoints have exclusive use of the circuit and its bandwidth. Carriers base the cost of using a circuit-switched network on the duration of the connection, which makes this type of network cost-effective only for a steady communication stream between the endpoints. Examples include the plain old telephone service (POTS), the Integrated Services Digital Network (ISDN), and dial-up connections using the Point-to-Point Protocol (PPP).

Packet-Switched Networks

Packet-switched networks do not use a dedicated connection between endpoints. Instead, data is divided into packets and transmitted on a shared network. Each packet contains meta-information so that it can be independently routed on the network. Networking devices will attempt to find the best path for each packet to its destination. Because network conditions can change while the partners are communicating, packets may take different paths as they traverse the network and arrive in any order. It is the responsibility of the destination endpoint to ensure that the received packets are in the correct order before sending them up the stack.

Switched Virtual Circuits (SVCs) and Permanent Virtual Circuits (PVCs)

Virtual circuits provide a connection between endpoints over high-bandwidth, multiuser cable or fiber that behaves as if the circuit were a dedicated physical circuit. There are two types of virtual circuits, based on when the routes in the circuit are established. In a permanent virtual circuit, the carrier configures the circuit’s routes when the circuit is purchased. Unless the carrier changes the routes to tune the network, responds to an outage, etc., the routes do not change. On the other hand, the routes of a switched virtual circuit are configured dynamically by the routers each time the circuit is used.

Carrier Sense Multiple Access

As the name implies, Carrier Sense Multiple Access (CSMA) is an access protocol that uses the absence/presence of a signal on the medium that it wants to transmit on as permission to speak. Only one device may transmit at a time; otherwise, the transmitted frames will be unreadable. Because there is not an inherent mechanism that determines which device may transmit, all of the devices must compete for available bandwidth. For this reason, CSMA is referred to as a contention-based protocol. Also, because it is impossible to predict when a device may transmit, CSMA is also nondeterministic.

There are two variations of CSMA, based on how collisions are handled. LANs using Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) try to avoid collisions before they occur: a device listens for an idle medium and then waits an additional, randomly chosen backoff interval before transmitting. Optionally, a device may first announce its intention to transmit with a short request-to-send/clear-to-send (RTS/CTS) exchange, causing other devices to defer for the duration of the transmission. CSMA/CA is used in the IEEE 802.11 wireless standard.

Devices on a LAN using Carrier Sense Multiple Access with Collision Detection (CSMA/CD) listen for a carrier before transmitting data. If another transmission is not detected, the data will be transmitted. It is possible that a station will transmit before another station’s transmission had enough time to propagate. If this happens, two frames will be transmitted simultaneously, and a collision will occur. Instead of all stations simply retransmitting their data, which will likely cause more collisions, each station will wait a randomly generated interval before retransmitting. CSMA/CD is part of the IEEE 802.3 standard.18
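The randomly generated retransmission interval in CSMA/CD follows the truncated binary exponential backoff rule from IEEE 802.3. The sketch below models it; the slot-time constant shown is the classic 10 Mbps Ethernet value, quoted here for illustration:

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds

def backoff_slots(attempt: int) -> int:
    """Truncated binary exponential backoff per IEEE 802.3.

    After the nth collision, wait a random number of slot times
    drawn uniformly from 0 .. 2^min(n, 10) - 1; after 16 failed
    attempts the frame is discarded.
    """
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    return random.randrange(2 ** min(attempt, 10))

for n in (1, 2, 3, 10):
    slots = backoff_slots(n)
    print(f"collision {n}: wait {slots} slots "
          f"({slots * SLOT_TIME_US:.1f} us)")
```

Doubling the window on each collision is what makes a loaded Ethernet degrade gradually instead of collapsing: the busier the segment, the longer stations spread out their retries.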

Polling

A network that employs polling avoids contention by allowing a device (a slave) to transmit on the network only when it is asked to by a master device. Polling is used mostly in mainframe protocols, such as Synchronous Data Link Control (SDLC). The point coordination function, an optional function of the IEEE 802.11 standard, uses polling as well.

Token Passing

Token passing takes a more orderly approach to media access. With this access method, only one device may transmit on the LAN at a time, thus avoiding retransmissions.

A special frame, known as a token, circulates through the ring. When a device wishes to transmit on the network, it must possess the token. The device replaces the token with a frame containing the message to be transmitted and sends the frame to its neighbor. When each device receives the frame, it relays it to its neighbor if it is not the recipient. The process continues until the recipient possesses the frame. That device will copy the message, modify the frame to signify that the message was received, and transmit the frame on the network.

When the modified frame makes a trip back to the sending device, the sending device knows that the message was received. Token passing is used in Token Ring and FDDI networks. An example of a LAN using token passing can be seen in Figure 6-8.

c06f008.tif

Figure 6-8: LAN token passing
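The frame's round trip can be modeled in a few lines. In this toy simulation (station names and structure are illustrative), the frame travels downstream from the sender, the recipient marks it as delivered, and the sender removes it when it returns:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    src: str
    dst: str
    data: str
    delivered: bool = False

def send_on_ring(ring, src, dst, data):
    """Walk a frame around the ring from src until it returns,
    marking it delivered when dst copies it."""
    frame = Frame(src, dst, data)
    i = ring.index(src)
    hops = 0
    while True:
        i = (i + 1) % len(ring)   # pass to downstream neighbor
        hops += 1
        if ring[i] == dst:
            frame.delivered = True  # recipient copies and marks the frame
        if ring[i] == src:
            return frame, hops      # sender removes frame, releasing the token

ring = ["A", "B", "C", "D"]
frame, hops = send_on_ring(ring, "A", "C", "hello")
print(frame.delivered, hops)  # True 4
```

Note that the frame always makes a full circuit regardless of where the recipient sits; this is what gives the sender its delivery confirmation and makes the network deterministic.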

Ethernet (IEEE 802.3)

Ethernet, which is defined in IEEE 802.3, played a major role in the rapid proliferation of LANs in the 1980s. The architecture was flexible and relatively inexpensive, and it was easy to add and remove devices from the LAN. Even today, for the same reasons, Ethernet is the most popular LAN architecture. The physical topologies that are supported by Ethernet are bus, star, and point to point, but the logical topology is the bus.

With the exception of full-duplex Ethernet (which does not have the issues of collisions), the architecture uses CSMA/CD. This protocol allows devices to transmit data with a minimum of overhead (compared to Token Ring), resulting in an efficient use of bandwidth. However, because devices must retransmit when more than one device attempts to send data on the medium, too many retransmissions due to collisions can cause serious throughput degradation.

The Ethernet standard supports coaxial cable, unshielded twisted pair, and fiber optics as transmission media.

Ethernet was originally rated at 10 Mbps, but like 10-megabyte disk drives, users quickly figured out how to use and exceed its capacity and needed faster LANs. To meet the growing demand for more bandwidth, 100 Base-TX (100 Mbps over twisted pair) and 100 Base-FX (100 Mbps over multimode fiber optics) were defined. When the demand grew for even more bandwidth over unshielded twisted pair, 1000 Base-T was defined, and 1000 Base-SX and 1000 Base-LX were defined for fiber optics. These standards support 1,000 Mbps.

Token Ring (IEEE 802.5)

Originally designed by IBM, Token Ring was adapted with some modification by the IEEE as IEEE 802.5. Despite the architecture’s name, Token Ring uses a physical star topology. The logical topology, however, is a ring. Each device receives data from its upstream neighbor and transmits to its downstream neighbor. Token Ring uses token passing to mediate which device may transmit. As mentioned in the section on token passing, a special frame, called a token, is passed on the LAN. To transmit, a device must possess the token.

To transmit on the LAN, the device appends data to the token and sends it to its next downstream neighbor. Devices relay frames whenever they are not the intended recipient. When the destination device receives the frame, it copies the data, marks the frame as read, and sends it to its downstream neighbor. When the frame returns to the source device, the source confirms that it has been read and then removes it from the ring. Token Ring is now considered a “legacy” technology; on the rare occasions it is still seen, it is usually only because the organization has had no reason to upgrade away from it. Token Ring has almost entirely been replaced with Ethernet technology.

Fiber Distributed Data Interface (FDDI)

Fiber Distributed Data Interface (FDDI) is a token-passing architecture that uses two rings. Because FDDI employs fiber optics, it was designed to serve as a 100-Mbps network backbone. Only one ring (the primary) carries traffic; the other (the secondary) is used as a backup. Traffic in the two rings flows in opposite directions, so the rings are referred to as counter-rotating. FDDI is also considered a legacy technology and has been supplanted by more modern transport technologies, initially Asynchronous Transfer Mode (ATM) and more recently Multiprotocol Label Switching (MPLS).

Multiprotocol Label Switching (MPLS)19

Multiprotocol Label Switching (MPLS) has gained significant popularity at the core of carrier networks in recent years because it couples the determinism, speed, and QoS controls of established switched technologies such as ATM and Frame Relay with the flexibility and robustness of the Internet Protocol world. (MPLS is developed and propagated through the Internet Engineering Task Force [IETF].) Additionally, the once faster and higher-bandwidth ATM switches are now being outperformed by Internet backbone routers. Equally important, MPLS offers simpler mechanisms for packet-oriented traffic engineering and multi-service functionality with the added benefit of greater scalability.

MPLS is often referred to as IP VPN because of the ability to couple highly deterministic routing with IP services. In effect, this creates a VPN-type service that makes it logically impossible for data from one network to be mixed or routed over to another network without compromising the MPLS routing device itself. MPLS does not include encryption services; therefore, any MPLS service called “IP VPN” does not in fact contain any cryptographic services. The traffic on these links would be visible to the service providers. The following guidelines should be considered by the network and security architects during the negotiation of MPLS bandwidth and associated service level agreements (SLAs) to ensure that services live up to the assurance requirements for the assets relying upon the network:

  • Site Availability—Make certain MPLS is available for all desired locations; i.e., all the planned remote connections (offices) have MPLS service available in that area.
  • End-to-End Network Availability—Inquire about peering relationships for MPLS for network requirements that cross Tier 1 carrier boundaries.
  • Provisioning—How fast can new links in new sites be provisioned?

Local Area Network (LAN)

Local Area Networks (LANs) service a relatively small area, such as a home, office building, or office campus. In general, LANs service the computing needs of their local users. LANs connect modern computing devices, such as workstations, servers, and peripherals, typically in a star topology or internetworked stars. Ethernet is the most popular LAN architecture because it is inexpensive and very flexible. Most LANs have connectivity to other networks, such as dial-up or dedicated lines to the Internet, access to other LANs via WANs, and so on.

Commonly Used Ports and Protocols

When the SSCP considers the protocols and ports used to drive secure communication in the network, they will need to ensure that they are making wise choices. Name resolution through the use of DNS and directory services through the use of LDAP are two examples of areas on which the SSCP will want to focus.

Domain Name System (DNS) 20

The Domain Name System (DNS) is a hierarchical distributed naming system for computers, services, or any resource connected to the Internet or a private network.21 DNS associates various pieces of information with domain names assigned to each of the participating entities. It translates domain names to the numerical IP addresses needed for the purpose of locating computer services and devices worldwide. By providing a worldwide, distributed keyword-based redirection service, the Domain Name System is an essential component of the functionality of the Internet. The security practitioner can think of the Domain Name System as the phone book for the Internet, allowing people all over the world to find resources by translating human-friendly computer hostnames into IP addresses. For example, the domain name www.isc2.org translates to the addresses 209.188.91.140, 0:0:0:0:0:ffff:d1bc:5b8c (IPv4-mapped IPv6 address), and 2002:d1bc:5b8c:0:0:0:0:0 (6to4 address). DNS can be quickly updated, allowing a service’s location on the network to change without affecting the end users, who continue to use the same host name. Users take advantage of this when they use meaningful Uniform Resource Locators (URLs) and email addresses without having to know how the computer actually locates the services.

DNS is a globally distributed, scalable, hierarchical, and dynamic database that provides a mapping between hostnames, IP addresses (both IPv4 and IPv6), text records, mail exchange information (MX records), name server information (NS records), and security key information defined in Resource Records (RRs). The information defined in RRs is grouped into zones and maintained locally on a DNS server so it can be retrieved globally through the distributed DNS architecture. DNS can use either the User Datagram Protocol (UDP) or the Transmission Control Protocol (TCP), with a destination port of 53. When the DNS protocol uses UDP as the transport, the DNS application itself must be able to deal with retransmission and sequencing, because UDP does not provide them.
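
As an illustration of the wire format DNS carries on port 53, the sketch below encodes a hostname the way it appears in the question section of a DNS message, with each label length-prefixed and the root represented by a terminating zero byte. This is a sketch of the name encoding only, not a complete DNS message:

```python
def encode_qname(fqdn: str) -> bytes:
    """Encode a hostname as a DNS wire-format question name."""
    out = b""
    for label in fqdn.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"  # zero-length label represents the root zone "."

print(encode_qname("www.isc2.org"))
# b'\x03www\x04isc2\x03org\x00'
```

The trailing `\x00` is the on-the-wire form of the root "." discussed below.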

DNS is composed of a hierarchical domain name space that contains a tree-like data structure of linked domain names (nodes). The domain name space uses RRs to store information about the domain. The tree-like data structure for the domain name space starts at the root zone “.”, which is the topmost level of the DNS hierarchy. Although it is not typically displayed in user applications, the DNS root is represented as a trailing dot in a fully qualified domain name (FQDN). For example, the right-most dot in www.isc2.org. represents the root zone. From the root zone, the DNS hierarchy is then split into sub-domain (branch) zones.

Each domain name is composed of one or more labels. Labels are separated by “.” and may contain a maximum of 63 characters. An FQDN may contain a maximum of 255 characters, including the “.” separators. Labels are read from right to left, where the label at the far right is the top-level domain (TLD) for the domain name. Figure 6-9 shows how to identify the TLD for a domain name.

org is the TLD for www.isc2.org as it is the label furthest to the right.


Figure 6-9: The DNS database structure
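
The label and length rules above lend themselves to a compact validator. A minimal sketch (the helper names are illustrative):

```python
def is_valid_fqdn(name: str) -> bool:
    """Check the DNS length rules: at most 255 characters total, 63 per label."""
    if not name or len(name) > 255:
        return False
    labels = name.rstrip(".").split(".")
    return all(0 < len(label) <= 63 for label in labels)

def tld(name: str) -> str:
    """The top-level domain is the right-most label."""
    return name.rstrip(".").split(".")[-1]

print(tld("www.isc2.org"))               # org
print(is_valid_fqdn("a" * 64 + ".org"))  # False: first label exceeds 63
```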

DNS’s central element is a set of hierarchical name (domain) trees, starting from the top-level domains (TLDs). A number of so-called root servers manage the authoritative list of TLD servers. To resolve any domain name, every DNS server in the world must hold a list of these root servers.

To understand DNS, security practitioners should be familiar with the following terms:

  • Resolver—A DNS client that sends DNS messages to obtain information about the requested domain name space.
  • Recursion—The action taken when a DNS server is asked to query on behalf of a DNS resolver.
  • Authoritative Server—A DNS server that responds to query messages with information stored in RRs for a domain name space stored on the server.
  • Recursive Resolver—A DNS server that recursively queries for the information asked in the DNS query.
  • FQDN—A Fully Qualified Domain Name is the absolute name of a device within the distributed DNS database.
  • RR—A Resource Record is a format used in DNS messages that is composed of the following fields: NAME, TYPE, CLASS, TTL, RDLENGTH, and RDATA.
  • Zone—A database that contains information about the domain name space stored on an authoritative server.

DNS primarily translates hostnames to IP addresses or IP addresses to hostnames. This translation is performed by a DNS resolver, which could be a client application, such as a web browser or an email client, or a DNS application, such as BIND, sending a DNS query to a DNS server requesting the information defined in an RR. Some examples of the DNS resolution process follow:

  • If the DNS server is configured only as an authoritative server and it receives a DNS query message asking about information for which it is authoritative, the server inspects locally stored RR information and returns the value of the record in the Answer Section of a DNS response message. If the requested information does not exist, the DNS server responds with an NXDOMAIN (Non-Existent Domain) DNS response message or a DNS Referral Response message.
  • If the DNS server is authoritative, is not configured as a recursive resolver, and receives a DNS query message asking about information for which it is not authoritative, the server issues a DNS response message containing RRs in the Authority Section; the address mapping for the FQDN from that section may be present in the Additional Section. This informs the DNS resolver where to send queries in order to obtain authoritative information for the question in the DNS query. This is also known as a DNS Referral Response message.
  • If the DNS server is not authoritative but is configured as a recursive resolver and receives a DNS query, the server recursively queries the DNS architecture for the authoritative DNS server of the information included in the DNS request. Once the recursive resolver has obtained this information, it provides it to the original DNS resolver in a DNS response message, and the RR will be non-authoritative (since the recursive resolver is not authoritative for the requested information). The recursive resolver may also have the requested information stored in its DNS cache; if so, it responds with that cached RR information.
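
The three behaviors above can be modeled with a toy in-memory resolver. The zone data and server names below are hypothetical, and real resolvers exchange DNS wire-format messages rather than Python calls; the sketch only illustrates answer, referral, and recursion:

```python
# Hypothetical zone data: each "server" is authoritative for one zone.
ZONES = {
    "root":        {"org.": "org-server"},             # referral to the TLD server
    "org-server":  {"isc2.org.": "isc2-server"},       # referral down the tree
    "isc2-server": {"www.isc2.org.": "209.188.91.140"},
}

def query(server: str, name: str):
    """An authoritative server answers from its RRs, refers, or says NXDOMAIN."""
    zone = ZONES[server]
    if name in zone:
        return "ANSWER", zone[name]
    for suffix, next_server in zone.items():
        if name.endswith(suffix):
            return "REFERRAL", next_server
    return "NXDOMAIN", None

def recursive_resolve(name: str):
    """A recursive resolver starts at the root and follows referrals."""
    server = "root"
    while True:
        status, data = query(server, name)
        if status == "ANSWER":
            return data
        if status == "NXDOMAIN":
            return None
        server = data  # follow the referral down the tree

print(recursive_resolve("www.isc2.org."))  # 209.188.91.140
```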

To understand DNS security, security practitioners should be familiar with the following attack types:

  • DNS Denial-of-Service (DoS) attacks—Reflector attacks are examples of exploits of denial-of-service vulnerabilities in default DNS configurations. A reflector attack is launched when an attacker delivers traffic to the victim of their attack by reflecting it off of a third party so that the origin of the attack is concealed from the victim.
  • DNS Distributed Denial-of-Service (DDoS) attacks—Amplification attacks are examples of attacks that combine reflection with amplification to achieve an attack that causes the byte count of traffic received by the victim to be substantially greater than the byte count of traffic sent by the attacker, thus amplifying or multiplying the sending power of the attacker. To perform a DNS amplification attack, an attacker begins by identifying a large set of resolvers that can be used as reflectors. Then, from one or more machines the attacker causes UDP DNS queries to be sent to the reflecting resolvers, with the source IP addresses for the queries set to the address of the target/victim. The reflecting servers process the recursive query and send the response to the IP address from which they believe the queries originated. Because of the spoofed origin, the replies are actually sent to the target. The unrequested DNS responses are discarded when they reach the target machine, but by the time they have done so, they have consumed network resources and a small portion of CPU time on the target machine.
  • Query or Request Redirection—Request redirection occurs when the DNS query is intercepted and modified in transit to the DNS server. If the request is redirected on the path to the caching name server, the interception occurred on the LAN. This is significant because the mitigation technique differs from that for redirection occurring outside of the local network. Query interception can also occur on recursive queries outside of the local network. Query redirection that occurs outside of the local network can be mitigated through the use of Domain Name System Security Extensions (DNSSEC).22 The recursive resolver must turn on the DNSSEC enable flag in the name daemon configuration file. DNSSEC-enabled validation checks work when the zone data is signed; the problem is that not all public zones are signed. In the case of a LAN interception, or an unsigned response, traffic monitoring is needed to address the attack. New addresses should, at a minimum, be compared against blacklists and whois registrations in an attempt to validate them.
  • DNS Cache-Poisoning Attack—Attackers inject malicious DNS data into the recursive DNS servers operated by internet service providers (ISPs). The damage caused by this attack is localized to the users connecting to the compromised servers. One of the most famous cache-poisoning attacks exploited the Kaminsky bug. Discovered by researcher Dan Kaminsky, the bug resulted from the random values for the transaction ID and source port being easily guessed, thereby allowing an attacker to insert a “poisoned” value.23 When the Kaminsky attack was first discovered, it was noted that sites running DNSSEC with DNSSEC validation enabled were immune to the attack. This led to an increase in the number of DNSSEC deployments. The patch for this problem strengthened the randomization of the source port, making the values much harder to guess.
  • Zone Enumeration—Enumeration of zone data occurs when a user invokes DNS diagnostic commands, such as dig or nslookup, against a site in an attempt to gain information about the site’s network architecture. Often this behavior precedes an attempted attack. Mitigating zone-enumeration threats requires the site administrator to determine what DNS information the site wishes to make available. Many sites use “split brain” DNS views by running internal and external DNS servers.
  • Tunnels—Most of the attention paid to DNS security focuses on the DNS query and response transaction. This transaction is a UDP transaction; however, DNS utilizes both UDP and TCP transport mechanisms. DNS TCP transactions are used for secondary zone transfers and for some DNSSEC traffic. Mitigating DNS tunneling traffic relies on a combination of traffic monitoring and server configuration. Zone transfers should only occur between an authoritative server and the secondary server. This should be fairly straightforward, since the secondary server is a known entity and should be listed in the whitelist. The quantity of DNS transactions should also be monitored, since this could be an indication of DNS misuse.
  • DNS Fast Flux—Fast flux represents the ability to quickly move the location of a Web, email, DNS, or any Internet or distributed service from one or more computers connected to the Internet to a different set of computers to delay or evade detection. 24 Defending against fast flux sites requires monitoring and blocking techniques. In some cases, there are known IP address ranges that are associated with fast flux behavior, so these addresses can be blocked. However, the dynamic nature of these sites makes monitoring as important as blocking. A sudden appearance of new destination addresses requires investigation in order to determine if the site is legitimate or a potential fast flux site. Phishing and pharming are big business in cybercrime and rely in part on DNS exploits. Phishing utilizes fast flux behavior when a link to a fast flux address is inserted in a targeted email. Pharming is associated with poisoned DNS cache records or DNS redirection, which occurs when a user enters a legitimate destination address, but the request is redirected to a malicious site.
  • Taking over the Registration of a Domain—Attackers take over the registration of a domain and change the authoritative DNS servers. This was the type of attack used by the Syrian Electronic Army. They gained access to the domain registration accounts operated by Melbourne IT and changed the authoritative DNS servers to ns1.syrianelectronicarmy.com and ns2.syrianelectronicarmy.com.25 Such an attack allows hackers to redirect email and other services provided to clients. The changes created by this attack are globally cached on recursive DNS servers for up to a full day. Unless the cached entries are purged, it can take a full day or longer for the effects to be reversed.
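
Several of the attacks above reduce to simple arithmetic or measurable traffic properties. The sketch below uses assumed, illustrative numbers to quantify an amplification factor, the guessing space added by source-port randomization after the Kaminsky bug, and a Shannon-entropy heuristic sometimes used to flag possible tunneling in query labels:

```python
import math
from collections import Counter

# 1. Amplification: a small spoofed query elicits a much larger response.
#    The sizes here are rough, assumed figures for illustration.
query_size, response_size = 64, 3200            # bytes
amplification = response_size / query_size      # 50x

# 2. Kaminsky: a 16-bit transaction ID alone is guessable; randomizing the
#    16-bit source port as well multiplies the attacker's search space.
txid_space = 2 ** 16
combined_space = txid_space * txid_space        # ~4.3 billion combinations

# 3. Tunneling heuristic: encoded payloads in labels tend to show higher
#    Shannon entropy than human-chosen hostnames (a hint, not proof).
def label_entropy(label: str) -> float:
    counts, n = Counter(label), len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(amplification)                              # 50.0
print(label_entropy("mail"))                      # 2.0
print(label_entropy("nq2w45dfojzxk4tlmrxwi33o"))  # noticeably higher
```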

Table 6-2 provides a DNS quick reference of ports and definitions.

Table 6-2: DNS Quick Reference

Ports: 53/TCP, 53/UDP
Definition: RFC 882, RFC 1034, RFC 1035

Various extensions to DNS have been proposed to enhance its functionality and security, for instance, by introducing authentication through the use of DNSSEC, multicasting, or service discovery.26

Lightweight Directory Access Protocol (LDAP)27

Lightweight Directory Access Protocol (LDAP) is a client/server-based directory query protocol loosely based upon X.500, commonly used for managing user information. As opposed to DNS, for instance, LDAP is a front end and not used to manage or synchronize data per se. Back ends to LDAP can be directory services, such as NIS (see the section “Network Information Service (NIS), NIS +”), Microsoft’s Active Directory Service, Sun’s iPlanet Directory Server (renamed to Sun Java System Directory Server), and Novell’s eDirectory. LDAP provides only weak authentication based on host name resolution. It would therefore be easy to subvert LDAP security by breaking DNS (see the section “Domain Name System (DNS)”).

Table 6-3 provides an LDAP quick reference of ports and definitions.

Table 6-3: LDAP Quick Reference

Ports: 389/TCP, 389/UDP
Definition: RFC 1777

LDAP communication is transferred in cleartext and is therefore easily intercepted. One way for the security architect to address the issues of weak authentication and cleartext communication is to deploy LDAP over SSL (LDAPS, typically on port 636/TCP), providing authentication, integrity, and confidentiality.

Network Basic Input Output System (NetBIOS)

The Network Basic Input Output System (NetBIOS) application programming interface (API) was developed in 1983 by IBM. NetBIOS was later ported to TCP/IP (NetBIOS over TCP/IP, also known as NetBT). Under TCP/IP, the NetBIOS name service runs on port 137, the datagram service on port 138 over UDP, and the session service on port 139 over TCP. In addition, port 135 is used for remote procedure calls (see the section “Remote Procedure Calls”).

Table 6-4 provides a NetBIOS quick reference of ports and definitions.

Table 6-4: NetBIOS Quick Reference

Ports: 135/TCP (RPC), 137/UDP and TCP (name service), 138/UDP (datagram service), 139/TCP (session service)
Definition: RFC 1001, RFC 1002

Network Information Service (NIS), NIS +

Network Information Service (NIS) and NIS + are directory services developed by Sun Microsystems and are mostly used in UNIX environments. They are commonly used for managing user credentials across a group of machines, for instance, a UNIX workstation cluster or client/server environment, but they can be used for other types of directories as well.

NIS

NIS uses a flat namespace in so-called domains. It is based on RPC and manages all entities on a server (NIS server). NIS servers can be set up redundantly through the use of slave servers. NIS is known for a number of security weaknesses. The fact that NIS does not authenticate individual RPC requests can be used to spoof responses to NIS requests from a client. This would, for instance, enable an attacker to inject fake credentials and thereby obtain or escalate privileges on the target machine. Retrieval of directory information is possible if the name of a NIS domain has become known or is guessable, as any of the clients can associate themselves with a NIS domain. A number of guides have been published on how to secure NIS servers. The basic steps that the security architect and practitioner would need to take are the following: Secure the platform a NIS server is running on, isolate the NIS server from traffic outside of a LAN, and configure it so the probability for disclosure of authentication credentials, especially system privileged ones, is limited.

NIS +

NIS+ uses a hierarchical namespace. It is based on Secure RPC (see the section “Remote Procedure Calls”). Authentication and authorization concepts in NIS+ are more mature; they require authentication for each access of a directory object. However, NIS+ authentication in itself will only be as strong as authentication to one of the clients in a NIS+ environment, as NIS+ is built on a trust relationship between different hosts. The most relevant attacks against a correctly configured NIS+ network come from attacks against its cryptographic security. NIS+ can be run at different security levels; however, most levels available are irrelevant for an operational network.28

Common Internet File System (CIFS)/Server Message Block (SMB)

Common Internet File System (CIFS)/Server Message Block (SMB) is a file-sharing protocol prevalent on Windows systems. A UNIX/Linux implementation exists in the free Samba project. SMB was originally designed to run on top of the NetBIOS protocol (see the section “Network Basic Input Output System (NetBIOS)”); it can, however, be run directly over TCP/IP.

Table 6-5 provides a CIFS/SMB quick reference of ports and definitions.

Table 6-5: CIFS/SMB Quick Reference

Ports: 445/TCP; see also “Network Basic Input Output System (NetBIOS)”
Definition: Proprietary

CIFS is capable of supporting user-level and tree/object-level (share-level) security. Authentication can be performed via challenge/response authentication as well as by transmission of credentials in cleartext. This second provision has been retained largely for backward compatibility with legacy Windows environments.

The main attacks against CIFS are based upon obtaining credentials, be it by sniffing for cleartext authentication or by cryptographic attacks.
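
The idea behind challenge/response authentication, and why sniffing it is less damaging than sniffing cleartext credentials, can be sketched generically. This is an illustration of the pattern using HMAC-SHA256, not the actual CIFS/NTLM algorithm:

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    """Server generates a random nonce for each authentication attempt."""
    return os.urandom(16)

def client_response(password: bytes, challenge: bytes) -> bytes:
    """Client proves knowledge of the password without sending it."""
    return hmac.new(password, challenge, hashlib.sha256).digest()

def server_verify(password: bytes, challenge: bytes, response: bytes) -> bool:
    """Server recomputes the expected response and compares in constant time."""
    expected = hmac.new(password, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = client_response(b"s3cret", challenge)
print(server_verify(b"s3cret", challenge, response))  # True
```

Because the nonce changes on every attempt, a captured response cannot simply be replayed; the remaining exposure is an offline cryptographic attack against the captured exchange, which is exactly the attack class noted above.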

Network File System (NFS)

Network File System (NFS) is a client/server file-sharing system common to the UNIX platform. It was originally developed by Sun Microsystems, but implementations exist on all common UNIX platforms, including Linux, as well as Microsoft Windows. NFS has been revised several times, including updates to NFS Versions 2 and 3. NFS version 2 was based on UDP, and version 3 introduced TCP support. Both are implemented on top of RPC (see the section “Remote Procedure Calls”). NFS versions 2 and 3 are stateless protocols, mainly due to performance considerations. As a consequence, the server must manage file locking separately.

Table 6-6 provides an NFS quick reference of ports and definitions.

Table 6-6: NFS Quick Reference

Ports: See the section “Remote Procedure Calls”
Definition: RFC 1094, RFC 1813, RFC 3010

Secure NFS (SNFS) offers secure authentication and encryption using Data Encryption Standard (DES) encryption. In contrast to standard NFS, secure NFS (or rather secure RPC) will authenticate each RPC request. This will increase latency for each request as the authentication is performed and introduces a light performance premium, mainly paid for in terms of computing capacity. Secure NFS uses DES encrypted time stamps as authentication tokens. If server and client do not have access to the same time server, this can lead to short-term interruptions until server and client have resynchronized themselves.
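
The clock-synchronization caveat above can be illustrated with a sketch of timestamp-based request authentication. Here HMAC-SHA256 stands in for the DES-encrypted timestamps of Secure RPC, and the 60-second acceptance window is an assumed value:

```python
import hashlib
import hmac
import struct

WINDOW = 60  # seconds of tolerated clock skew (illustrative)

def make_token(key: bytes, now: float) -> bytes:
    """Authenticate a request by MACing the current timestamp."""
    ts = struct.pack(">Q", int(now))
    return ts + hmac.new(key, ts, hashlib.sha256).digest()

def verify_token(key: bytes, token: bytes, now: float) -> bool:
    """Reject tokens with a bad MAC or a timestamp outside the window."""
    ts, mac = token[:8], token[8:]
    if not hmac.compare_digest(hmac.new(key, ts, hashlib.sha256).digest(), mac):
        return False
    return abs(int(now) - struct.unpack(">Q", ts)[0]) <= WINDOW

key = b"shared-secret"
token = make_token(key, 1_000_000.0)
print(verify_token(key, token, 1_000_030.0))  # True: within the window
print(verify_token(key, token, 1_000_200.0))  # False: clocks too far apart
```

If server and client clocks drift beyond the window, every request fails until they resynchronize, which is the short-term interruption described above.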

NFS version 4 is a stateful protocol that uses TCP port 2049. UDP support (and dependency) has been discontinued. NFS version 4 provides encryption and strong authentication on the basis of Kerberos and no longer depends on ancillary services such as the portmapper; because additional ports are no longer dynamically assigned, NFS can be used through firewalls. Another approach that the security architect could consider as part of the design is to secure NFS, where it must be deployed, by tunneling it through Secure Shell (SSH), which can be integrated with operating system authentication schemes.

Simple Mail Transfer Protocol (SMTP) and Enhanced Simple Mail Transfer Protocol (ESMTP)

Using port 25/TCP, Simple Mail Transfer Protocol (SMTP) is a client/server protocol utilized to route email on the Internet. Information on mail servers for Internet domains is managed through DNS, using mail exchange (MX) records. Although SMTP takes a simple approach to authentication, it is robust in the way it deals with unavailability; an SMTP server will try to deliver email over a configurable period.

From a protocol perspective, SMTP’s main shortcomings are the complete lack of authentication and encryption. Identification is performed by the sender’s email address. A mail server can restrict sending access to certain hosts, which should be on the same network as the mail server, as well as set conditions on the sender’s email address, which should belong to one of the domains served by that particular mail server. Otherwise, the mail server may act as an open relay, which is not a recommended practice because it poses a variety of security concerns and may get the server placed on the ban lists of anti-spam organizations.

To address the weaknesses identified in SMTP, an enhanced version of the protocol, ESMTP, was defined. ESMTP is modular in that client and server can negotiate the enhancements used. Among other things, ESMTP offers authentication and allows for different authentication mechanisms, including basic authentication and several more secure mechanisms.
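
The negotiation works because the server advertises its extensions in the multiline reply to EHLO. The sketch below parses such a reply into a capability set; the host name and capability values are made up for illustration, though the 250-/250 reply framing is standard:

```python
def parse_ehlo_reply(reply: str) -> set[str]:
    """Collect extension keywords from an ESMTP EHLO reply."""
    caps = set()
    for line in reply.splitlines()[1:]:  # first line is the server greeting
        caps.add(line[4:].split()[0])    # strip the "250-" / "250 " prefix
    return caps

reply = (
    "250-mail.sayge.com greets client\r\n"
    "250-SIZE 35882577\r\n"
    "250-AUTH PLAIN LOGIN\r\n"
    "250 STARTTLS"
)
print(parse_ehlo_reply(reply))  # contains SIZE, AUTH, and STARTTLS
```

A client would consult this set before attempting, say, authentication or TLS negotiation, falling back to plain SMTP behavior when an extension is absent.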

A quick summary comparison of SMTP and ESMTP can be seen in Table 6-7.

Table 6-7: Comparison between SMTP and ESMTP

SMTP | ESMTP
Simple Mail Transfer Protocol | Extended Simple Mail Transfer Protocol
First command in session: HELO sayge.com | First command in session: EHLO sayge.com
RFC 821 | RFC 1869
MAIL FROM and RCPT TO allow a length of only 512 characters, including <CRLF> | MAIL FROM and RCPT TO allow lengths greater than 512 characters
SMTP alone cannot be extended with new commands | ESMTP is a framework with enhanced capabilities, allowing it to extend existing SMTP commands

File Transfer Protocol (FTP)

Before the advent of the World Wide Web and proliferation of Hypertext Transfer Protocol (HTTP), which is built on some of its features, File Transfer Protocol (FTP) was the protocol for publishing or disseminating data over the Internet.

Table 6-8 provides an FTP quick reference of ports and definitions.

Table 6-8: FTP Quick Reference

Ports: 20/TCP (data stream), 21/TCP (control stream)
Definition: RFC 959

FTP transmits credentials, commands, and data in cleartext. Although this authentication weakness can be addressed through the use of encryption, this approach carries with it additional requirements imposed on the client. These requirements and methods are briefly outlined below:

  1. Secure FTP with TLS is an extension to the FTP standard that allows clients to request that the FTP session be encrypted. This is done by sending the “AUTH TLS” command. The server has the option of allowing or denying connections that do not request TLS. This protocol extension is defined in the proposed standard RFC 4217.
  2. SFTP, the SSH File Transfer Protocol, is not related to FTP except that it also transfers files and has a similar command set for users. SFTP, or secure FTP, uses SSH to transfer files and, unlike standard FTP, encrypts both commands and data. It is functionally similar to FTP, but because it uses a different protocol, standard FTP clients cannot be used to talk to an SFTP server.
  3. FTP over SSH refers to the practice of tunneling a normal FTP session over an SSH connection. Because FTP uses multiple TCP connections, it is particularly difficult to tunnel over SSH. With many SSH clients, attempting to set up a tunnel for the control channel (the initial client-to-server connection on port 21) will protect only that channel; when data is transferred, the FTP software at either end will set up new TCP connections (data channels), which bypass the SSH connection and thus have no confidentiality or integrity protection.

Trivial File Transfer Protocol (TFTP)

Trivial File Transfer Protocol (TFTP) is a simplified version of FTP, which is used when authentication is not needed and quality of service is not an issue. TFTP runs on port 69 over UDP. It should therefore only be used in trusted networks with low latency.

Table 6-9 provides a TFTP quick reference of ports and definitions.

Table 6-9: TFTP Quick Reference

Ports: 69/UDP
Definition: RFC 1350

In practice, TFTP is used mostly in LANs for the purpose of pulling packages, for instance, in booting up a diskless client or when using imaging services to deploy client environments.

Hypertext Transfer Protocol (HTTP)

Hypertext Transfer Protocol (HTTP) is the layer 7 foundation of the World Wide Web (WWW). HTTP, originally conceived as a stateless, stripped-down version of FTP, was developed at the European Organization for Nuclear Research (CERN) to support the exchange of information in Hypertext Markup Language (HTML).

Table 6-10 provides an HTTP quick reference of ports and definitions.

Table 6-10: HTTP Quick Reference

Ports: 80/TCP; other ports are in use, especially for proxy services
Definition: RFC 1945, RFC 2109, RFC 2616

HTTP’s popularity caused the deployment of an unprecedented number of Internet-facing servers; many were deployed with out-of-the-box, vendor-preset configurations. Often these settings were geared toward convenience rather than security. As a result, numerous previously closed applications were suddenly marketed as “Web enabled.” By implication, not much time was spent on developing the Web interfaces in a secure manner, and authentication was commonly simplified to a browser-based username/password style.

Even though HTTP does not natively support quality of service or bidirectional communication, workarounds were quickly developed to deal with Quality of Service (QoS) concerns and bidirectional communication needs. Consequently, HTTP will work from within most networks, shielded or not, and thereby lends itself to tunneling an impressive number of other protocols.

HTTP does not natively support encryption and has a fairly simple authentication mechanism based on domains, which in turn are normally mapped to directories on a web server. Although HTTP authentication is extensible, it is most often used in the classic username/password style.
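
The classic username/password style referred to above is HTTP Basic authentication, which merely base64-encodes the credentials. This is encoding, not encryption, which is why it should only travel over an encrypted channel:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Build the Authorization header for HTTP Basic authentication."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

print(basic_auth_header("user", "pass"))
# Authorization: Basic dXNlcjpwYXNz

# The "protection" is trivially reversible by anyone who captures it:
print(base64.b64decode("dXNlcjpwYXNz").decode())  # user:pass
```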

HTTP Proxying and Anonymizing Proxies

Because HTTP transmits data in cleartext and generates a slew of logging information on web servers and proxy servers, the resulting information can be readily used for illegitimate activities, such as industrial espionage. To address this significant concern, the security practitioner can use any of the commercial and free services available that anonymize HTTP requests. These services are mainly geared toward the privacy market but have also attracted a criminal element seeking to obfuscate activity. A relatively popular free service is Java Anonymous Proxy (JAP), also referred to as project AN.ON (Anonymity.Online). JAP is referred to as JonDo within the commercially available JonDonym anonymous proxy service.29

Open Proxy Servers

Like open mail relays, open proxy servers allow unrestricted access to GET commands from the Internet. They can therefore be used as stepping stones for launching attacks or simply to obscure the origin of illegitimate requests. More importantly, an open proxy server bears an inherent risk of opening access to protected intranet pages from the Internet. (A misconfigured firewall allowing inbound HTTP requests would need to be present on top of the open proxy to allow this to happen.)

As a general rule, HTTP proxy servers should not allow queries from the Internet. For the security architect, it is a best practice to separate application gateways (sometimes implemented as reverse proxies) from the proxy for Web browsing because both have very different security levels and business importance. (It would be even better to implement the application gateway as an application proxy and not an HTTP proxy, but this is not always possible.)

Content Filtering

In many organizations, the HTTP proxy is used as a means to implement content filtering, for instance, by logging or blocking traffic that has been defined as or is assumed to be nonbusiness related for some reason. Although filtering on a proxy server or firewall as part of a layered defense can be quite effective to prevent virus infections (though it should never be the only protection against viruses), it will be only moderately effective in preventing access to unauthorized services (such as certain remote-access services or file sharing), as well as preventing the download of unwanted content.

HTTP Tunneling

HTTP tunneling is technically a misuse of the protocol on the part of the designer of such tunneling applications. It has become a popular feature with the rise of the first streaming video and audio applications and has been implemented into many applications that have a market need to bypass user policy restrictions. Usually, HTTP tunneling is applied by encapsulating outgoing traffic from an application in an HTTP request and incoming traffic in a response. This is usually not done to circumvent security but rather to be compatible with existing firewall rules and allow an application to function through a firewall without the need to apply special rules or additional configurations. Many of the most prevalent and successful malicious software packages, including viruses, worms, and especially botnets, will use HTTP as the means to transmit stolen data or control information from infected hosts through firewalls.
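
The encapsulation described above can be sketched in a few lines: application bytes go out as the body of an innocuous-looking request and come back in the response. The host, path, and base64 body format below are invented for illustration, not any particular tool’s scheme:

```python
import base64

def wrap_in_http(payload: bytes) -> str:
    """Encapsulate arbitrary application bytes in an HTTP POST request."""
    body = base64.b64encode(payload).decode()
    return (
        "POST /sync HTTP/1.1\r\n"
        "Host: updates.example.com\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )

def unwrap_from_http(request: str) -> bytes:
    """Recover the tunneled bytes from the request body."""
    body = request.split("\r\n\r\n", 1)[1]
    return base64.b64decode(body)

request = wrap_in_http(b"\x01\x02control-message")
print(unwrap_from_http(request))  # b'\x01\x02control-message'
```

To a port-based firewall this is just outbound web traffic, which is why the countermeasures below focus on content inspection rather than port filtering.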

Suitable countermeasures that the security practitioner should consider include filtering on a firewall or proxy server and assessing clients for installations of unauthorized software. However, a security professional will have to balance the business value and effectiveness of these countermeasures with the incentive for circumvention that a restriction of popular protocols will create.

Implications of Multi-Layer Protocols

Multi-layer protocols have ushered in an era of new vulnerabilities that were once unthinkable. In the past, several “networked” solutions were developed to provide control and communications with industrial devices. These often proprietary protocols evolved over time and eventually merged with other networking technologies such as Ethernet and Token Ring. Several vendors now use the TCP/IP stack to channel and route their own protocols. These protocols are used to control coils, actuators, and machinery in multiple industries such as energy, manufacturing, construction, fabrication, mining, and farming to name a few. Insecurities in these systems often have real world visibility and impact. Given the fact that the life expectancy of many of the devices under control is 20 years or longer, it is easy to see how systems can become outdated. Often, critical infrastructure, such as power grids, is controlled using multi-layer protocols. Table 6-11 from the Idaho National Laboratory illustrates some of the differences and related challenges of control systems vs. standard information technology.30

Table 6-11: Differences and Challenges for Control Systems vs. Information Technology

Security Topic | Information Technology | Control Systems
Antivirus/Mobile Code | Common, widely used | Uncommon/impossible to deploy effectively
Support Technology Lifetime | 2-3 years, diversified vendors | Up to 20 years, single vendor
Outsourcing | Common, widely used | Operations are often outsourced, but not diversified across various providers
Application of Patches | Regular, scheduled | Rare, unscheduled, vendor specific
Change Management | Regular, scheduled | Highly managed and complex
Time-Critical Content | Delays generally accepted | Delays are unacceptable
Availability | Delays generally accepted | 24x7x365 (continuous)
Security Awareness | Moderate in both private and public sector | Poor, except for physical
Security Testing/Audit | Part of a good security program | Occasional testing for outages
Physical Security | Secure (server rooms, etc.) | Remote/unmanned; secure
SCADA

The term most often associated with multi-layer protocols is supervisory control and data acquisition (SCADA). Another term used in relation to multi-layer protocols is industrial control system (ICS). In general, SCADA systems are designed to operate with several different communication methods, including modems, WANs, and various networking equipment. Figure 6-10 shows a general layout of a SCADA system:31

As Figure 6-10 demonstrates, a great complexity of devices and information exists in SCADA systems. Most SCADA systems minimally contain the following:


Figure 6-10: Diagram of a Generic SCADA ICS

  • Control Server—A control server hosts the software and often the interfaces used to control actuators, coils, and PLCs through subordinate control modules across the network.
  • Remote Terminal Unit (RTU)—The RTU supports SCADA remote stations, is often equipped with wireless radio interfaces, and is used in situations where land-based communications may not be possible.
  • Human-Machine Interface (HMI)—The HMI is the interface where the humans (operators) can monitor, control, and command the controllers in the system.
  • Programmable Logic Controller (PLC)—The PLC is a small computer that controls relays, switches, coils, counters, and other devices.
  • Intelligent Electronic Devices (IED)—The IED is a sensor that can acquire data and also provide feedback to the process through actuation. These devices allow for automatic control at the local level.
  • Input/Output (IO) Server—The IO server is responsible for collecting process information from components such as IEDs, RTUs, and PLCs. They are often used to interface third-party control components such as custom dashboards with a control server.
  • Data Historian—The data historian is like the security information and event management (SIEM) system for industrial control systems. It is typically a centralized database for logging process information from a variety of devices.

Given the unique design of SCADA systems, and the critical infrastructures that they control, it is little wonder they are a new focus of attacks. Security architects and practitioners responsible for implementing or protecting SCADA systems should be aware of the following types of attacks:

  • Network Perimeter Vulnerabilities
  • Protocol Vulnerabilities throughout the Stack
  • Database Insecurities
  • Session Hijacking and Man-in-the-middle Attacks
  • Operating System and Server Weaknesses
  • Device and Vendor “Backdoors”

In late October of 2013, U.S.-based researchers identified 25 zero-day vulnerabilities in industrial control SCADA software from 20 suppliers that are used to control critical infrastructure systems. Attackers could exploit some of these vulnerabilities to gain control of electrical power and water systems. The vulnerabilities were found in devices that are used for serial and network communications between servers and substations. Serial communication had not previously been considered an important or viable attack vector, but breaching a power system through serial communication devices can be easier than attacking through the IP network because it does not require bypassing layers of firewalls. In theory, an intruder could exploit the vulnerabilities simply by breaching the wireless radio network over which the communication passes to the server.

Another issue that the security professional needs to contend with is the inability of antivirus software to address the threats facing SCADA/ICS environments. The Flame virus, for example, evaded 43 different antivirus tools and took more than two years to detect. Instead of continuing to rely on traditional enterprise tools that may work well for desktops and servers, the security practitioner needs tools that can identify threats, respond, and expedite forensic analysis in real time within these complicated systems. Achieving this requires continuous monitoring of all log data generated by IT systems in order to automatically baseline normal, day-to-day activity across systems and immediately identify any anomalous activity.
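The baselining approach can be sketched with a simple rolling statistic; the window size, deviation threshold, and event counts below are illustrative assumptions rather than a recommended tool:

```python
import statistics

def baseline_anomalies(counts, window=7, z=3.0):
    """Flag event counts that deviate sharply from a rolling baseline."""
    anomalies = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history)
        # Flag the point if it sits more than z standard deviations above the
        # baseline (a small floor keeps a flat history from flagging noise).
        if counts[i] > mean + z * max(stdev, 1.0):
            anomalies.append(i)
    return anomalies

normal = [100, 98, 103, 101, 99, 102, 100]   # hourly log-event counts
observed = normal + [104, 460]               # a sudden burst of events
print(baseline_anomalies(observed))          # [8]: only the burst is flagged
```

Real continuous-monitoring products baseline many signals at once (logins, flows, process starts), but the principle is the same: learn "normal" automatically, then alert on departures from it.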

It was announced in March of 2014 that more than 7,600 different power, chemical, and petrochemical plants may still be vulnerable to a handful of SCADA vulnerabilities. A researcher at Rapid7, the Boston-based firm responsible for the popular pen testing software Metasploit, and an independent security researcher discovered three bugs in Yokogawa Electric’s CENTUM CS3000 R3 product. The Windows-based software is primarily used in power plants, airports, and chemical plants across Europe and Asia. The vulnerabilities are essentially a series of buffer overflows, heap-based and stack-based, that could open the software up to attack. All of them affect computers where CENTUM CS 3000, software that helps operate and monitor industrial control systems, is installed. With the first one, an attacker could send a specially crafted sequence of packets to BKCLogSvr.exe and trigger a heap-based buffer overflow, which in turn could cause a DoS and allow the execution of arbitrary code with system privileges. The second is similar: a special packet sent to BKHOdeq.exe could cause a stack-based buffer overflow, allowing execution of arbitrary code with the privileges of the CENTUM user. Lastly, another stack-based buffer overflow, this one involving the BKBCopyD.exe service, could also allow the execution of arbitrary code.32

In April of 2014, attackers were able to compromise a utility in the United States through an Internet-connected system that gave the attackers access to the utility’s internal control system network. The utility had remote access enabled on some of its Internet-connected hosts, and the systems were only protected by simple passwords. Officials at the ICS-CERT, an incident response and forensics organization inside the Department of Homeland Security that specializes in ICS and SCADA systems, said that the public utility was compromised “when a sophisticated threat actor gained unauthorized access to its control system network.” The attacker apparently used a simple brute force attack to gain access to the Internet-facing systems at the utility and then compromised the ICS network. “After notification of the incident, ICS-CERT validated that the software used to administer the control system assets was accessible via Internet facing hosts. The systems were configured with a remote access capability, utilizing a simple password mechanism; however, the authentication method was susceptible to compromise via standard brute forcing techniques,” ICS-CERT said in a published report.33

The security of industrial control systems and SCADA systems has become a serious concern in recent years as attackers and researchers have begun to focus their attention on them. Many of these systems, which control mechanical devices, manufacturing equipment, utilities, nuclear plants, and other critical infrastructure, are connected to the Internet, either directly or through networks, and this has drawn the attention of attackers looking to do reconnaissance or cause trouble on these networks. Researchers have been sharply critical of the security in the SCADA and ICS industries, saying it’s “laughable” and has no formal security development lifecycle. The ICS-CERT report states that the systems in the compromised utility probably were the target of a number of attacks: “It was determined that the systems were likely exposed to numerous security threats and previous intrusion activity was also identified.” The investigators were able to identify the issues and found that the attackers likely had not done any damage to the ICS system at the utility.

In the same report, ICS-CERT detailed a separate compromise at an organization that also had a control system connected to the Internet. Attackers were able to compromise the ICS system, which operates an unspecified mechanical device, but did not do any real damage. “The device was directly Internet accessible and was not protected by a firewall or authentication access controls. At the time of compromise, the control system was mechanically disconnected from the device for scheduled maintenance,” the report says. “ICS-CERT provided analytic assistance and determined that the actor had access to the system over an extended period of time and had connected via both HTTP and the SCADA protocol. However, further analysis determined that no attempts were made by the threat actor to manipulate the system or inject unauthorized control actions.”

The security architect and practitioner should familiarize themselves with the latest alerts released by the ICS-CERT. These can be found at https://ics-cert.us-cert.gov/alerts/.

Security professionals should also consider the following list as a starting point for defensive actions that can be used to help secure SCADA/ICS systems:

  • Minimize network exposure for all control system devices. In general, locate control system networks and devices behind firewalls and isolate them from the business network.
  • When remote access is required, employ secure methods, such as virtual private networks (VPNs), recognizing that VPNs may have vulnerabilities and should be updated to the most current version available. Also recognize that a VPN is only as secure as the connected devices.
  • Remove, disable, or rename any default system accounts wherever possible.
  • Implement account lockout policies to reduce the risk from brute forcing attempts.
  • Establish and implement policies requiring the use of strong passwords.
  • Monitor the creation of administrator-level accounts by third-party vendors.
  • Apply patches in the ICS environment, when possible, to mitigate known vulnerabilities.
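The account-lockout recommendation above can be modeled simply; the thresholds below (three failures within 60 seconds, a 10-minute lockout) are illustrative, not prescribed values:

```python
import time

class LockoutPolicy:
    """Lock an account after too many failed logins within a time window."""
    def __init__(self, max_failures=5, window=300, lockout=900):
        self.max_failures = max_failures
        self.window = window        # seconds over which failures are counted
        self.lockout = lockout      # seconds an account stays locked
        self.failures = {}          # account -> recent failure timestamps
        self.locked_until = {}      # account -> unlock time

    def is_locked(self, account, now=None):
        now = time.time() if now is None else now
        return self.locked_until.get(account, 0) > now

    def record_failure(self, account, now=None):
        now = time.time() if now is None else now
        recent = [t for t in self.failures.get(account, []) if now - t < self.window]
        recent.append(now)
        self.failures[account] = recent
        if len(recent) >= self.max_failures:
            self.locked_until[account] = now + self.lockout

policy = LockoutPolicy(max_failures=3, window=60, lockout=600)
for t in (0, 5, 10):                        # three rapid failures, brute-force style
    policy.record_failure("operator", now=t)
print(policy.is_locked("operator", now=11))  # True: further guesses are refused
```

A policy like this directly blunts the brute-force technique used in the utility compromise described above, because guesses stop yielding feedback after a handful of attempts.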
Modbus and Fieldbus

Modbus and Fieldbus are standard industrial communication protocols designed by separate groups. The focus of the design around these protocols is not security; rather it is uptime and control of devices. Many of these protocols send information in cleartext across transmission media. Additionally, many of these protocols and the devices they support require little or no authentication to execute commands on a device. The security architect and practitioner need to work together to ensure that strict logical and physical controls are implemented to ensure these protocols are encapsulated and isolated from any public or open network.
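To make the exposure concrete, the sketch below builds a standard Modbus/TCP “Read Holding Registers” request (function code 0x03). The transaction and unit IDs are illustrative; the point is that nothing in the frame authenticates or encrypts the request — anyone who can reach the device can issue it:

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'Read Holding Registers' request (function 0x03).

    Note what the frame contains: no credential, no integrity check, no
    encryption. Possession of network reachability is the only requirement.
    """
    function_code = 0x03
    pdu = struct.pack(">BHH", function_code, start_addr, count)
    # MBAP header: transaction id, protocol id (always 0), length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_holding_registers(1, unit_id=1, start_addr=0, count=2)
print(frame.hex())   # 000100000006010300000002
```

The same structural openness applies to write requests, which is why isolating these protocols behind strict logical and physical controls is essential.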

Telecommunications Technologies

The area of telecommunications technologies is a broad one that encompasses many things that the SSCP will need to know. IP convergence, VoIP, POTS, PBX, cellular, and attacks and countermeasures are all items that are important to understand fully. The SSCP will want to ensure that they have a good, broad understanding of the key enabling technologies in this area, as well as the use of these technologies within the enterprise in a secure manner.

Converged Communications

IP convergence can be defined as using the Internet Protocol (IP) to transmit all of the information that transits a network, such as voice, data, music, or video.

The benefits of IP convergence that the security architect and practitioner can bring to the enterprise through the design, deployment, and management of a converged network infrastructure are as follows:

  • Excellent support for multimedia applications.
  • A converged IP network is a single platform on which interoperable devices can be run in innovative ways.
  • A converged IP network is easier to manage because of the uniform setup in which the system resources operate.
  • An IP convergent network is capable of making use of the developments in class of service differentiation and QoS-based routing.
  • A uniform environment requires fewer components in the network.
  • Device integration has the potential to simplify end-to-end security management and at the same time make it more robust.

Fibre Channel over Ethernet (FCoE)

Now that 10 GbE is becoming more widespread, Fibre Channel over Ethernet (FCoE) is the next attempt to converge block storage protocols onto Ethernet. FCoE takes advantage of 10 GbE performance and compatibility with existing Fibre Channel protocols. It relies on an Ethernet infrastructure that uses the IEEE Data Center Bridging (DCB) standards. The DCB standards can apply to any IEEE 802 network, but most often the term DCB refers to enhanced Ethernet.

The DCB standards define four new technologies:34

  1. Priority-based Flow Control (PFC), 802.1Qbb—Allows the network to pause different traffic classes.
  2. Enhanced Transmission Selection (ETS), 802.1Qaz—Defines the scheduling behavior of multiple traffic classes, including strict priority and minimum guaranteed bandwidth capabilities. This should enable fair sharing of the link, better performance, and metering.
  3. Quantized Congestion Notification (QCN), 802.1Qau—Supports end-to-end flow control in a switched LAN infrastructure and helps eliminate sustained, heavy congestion in an Ethernet fabric. Before the network can use QCN, you must implement QCN in all components in the Converged Enhanced Ethernet (CEE) data path. (This includes components such as Converged Network Adapters [CNAs], switches, and so on.) QCN networks must also use PFC to avoid dropping packets and ensure a lossless environment.
  4. Data Center Bridging Exchange Protocol (DCBX), 802.1Qaz—Supports discovery and configuration of network devices that support PFC, ETS, and QCN.

FCoE is a lightweight encapsulation protocol and lacks the reliable data transport of the TCP layer. Therefore, FCoE must operate on DCB-enabled Ethernet and use lossless traffic classes to prevent Ethernet frame loss under congested network conditions. FCoE on a DCB network mimics the lightweight nature of native FC protocols and media. It does not incorporate TCP or even IP protocols. This means that FCoE is a layer 2 (non-routable) protocol just like FC. FCoE is only for short-haul communication within a data center.

iSCSI

iSCSI is Internet SCSI (Small Computer System Interface), an Internet Protocol (IP)-based storage networking standard for linking data storage facilities, developed by the Internet Engineering Task Force (IETF) as RFC 3720.35 By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. Because of the ubiquity of IP networks, iSCSI can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval.

When an end user or application sends a request, the operating system generates the appropriate SCSI commands and data request, which then go through encapsulation and, if necessary, encryption procedures. A packet header is added before the resulting IP packets are transmitted over an Ethernet connection. When a packet is received, it is decrypted (if it was encrypted before transmission) and disassembled, separating the SCSI commands and request. The SCSI commands are sent on to the SCSI controller and from there to the SCSI storage device. Because iSCSI is bidirectional, the protocol can also be used to return data in response to the original request.
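The encapsulation steps just described can be illustrated with a deliberately simplified sketch; the bracketed labels stand in for real binary headers and are not the actual iSCSI PDU, TCP, or IP formats:

```python
def encapsulate(payload, headers):
    """Toy illustration of the layering described above: each layer prepends
    its own header to the payload handed down from the layer above."""
    frame = payload
    for name in headers:                 # innermost header is applied first
        frame = f"[{name}|{frame}]"
    return frame

# A SCSI READ command travels down the stack before hitting the wire:
wire = encapsulate("SCSI CDB: READ(10)", ["iSCSI PDU", "TCP", "IP", "Ethernet"])
print(wire)   # [Ethernet|[IP|[TCP|[iSCSI PDU|SCSI CDB: READ(10)]]]]
```

At the receiving end the process runs in reverse: each header is stripped in turn until the SCSI command reaches the SCSI controller.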

Multi-Protocol Label Switching (MPLS)

Multi-Protocol Label Switching (MPLS) is best summarized as a Layer 2.5 networking protocol. In a traditional IP network, each router performs an IP lookup (“routing”), determines a next-hop based on its routing table, and forwards the packet to that next-hop. Every router does the same, each making its own independent routing decisions, until the final destination is reached. MPLS does label switching instead. The first device does a routing lookup, just like before, but instead of finding a next-hop, it finds the final destination router. And it finds a pre-determined path from “here” to that final router. The router applies a “label” (or “shim”) based on this information. Future routers use the label to route the traffic without needing to perform any additional IP lookups. At the final destination router, the label is removed, and the packet is delivered via normal IP routing.36
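The label-switching behavior described above can be sketched as a toy forwarding loop; the router names and label values are invented for illustration, and real MPLS networks distribute these tables with protocols such as LDP or RSVP-TE:

```python
def mpls_forward(packet, lfibs):
    """Follow a label-switched path: after ingress, each hop swaps labels
    using its Label Forwarding Information Base (LFIB) with no further IP
    lookup; the final hop pops the label and delivers via IP routing."""
    hops = []
    label = packet["label"]
    node = packet["ingress"]
    while True:
        hops.append(node)
        out_label, next_hop = lfibs[node][label]
        if out_label is None:            # egress: pop the label, deliver
            hops.append(next_hop)
            return hops
        label, node = out_label, next_hop

# Illustrative LFIB per router: in-label -> (out-label, next hop)
lfib = {
    "PE1": {100: (200, "P1")},
    "P1":  {200: (300, "P2")},
    "P2":  {300: (None, "PE2")},         # pop; normal IP routing from here
}
print(mpls_forward({"ingress": "PE1", "label": 100}, lfib))
# ['PE1', 'P1', 'P2', 'PE2']
```

Because the path is pinned by the label tables rather than recomputed hop by hop, the operator controls exactly which links the traffic traverses — the basis of the traffic-engineering benefit listed below.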

So why do security professionals still care about MPLS? Three reasons:

  • Implementing Traffic-Engineering—The ability to control where and how traffic is routed on your network, to manage capacity, prioritize different services, and prevent congestion.
  • Implementing Multi-Service Networks—The ability to deliver data transport services, as well as IP routing services, across the same packet-switched network infrastructure.
  • Improving Network Resiliency—With MPLS Fast Reroute.

VoIP

Voice over Internet Protocol (VoIP) is a technology that allows you to make voice calls using a broadband Internet connection instead of a regular (or analog) phone line. VoIP is simply the transmission of voice traffic over IP-based networks. VoIP is also the foundation for more advanced unified communications applications such as Web and video conferencing. VoIP systems are based on the use of the Session Initiation Protocol (SIP), which is the recognized standard. Any SIP-compatible device can talk to any other. Any SIP-based IP phone can call another right over the Internet; you do not need any additional equipment or even a phone provider. Just plug your SIP phone into an Internet connection, configure it, and then dial the other person right over the Internet.

In all VoIP systems, your voice is converted into packets of data and then transmitted to the recipient over the Internet and decoded back into your voice at the other end. To make it quicker, these packets are compressed before transmission with certain codecs, almost like zipping a file on the fly. There are many codecs with different ways of achieving compression and managing bitrates; thus each codec has its own bandwidth requirements and provides different voice quality for VoIP calls.

VoIP systems employ session control and signaling protocols to manage the signaling, set-up, and tear-down of calls. They transport audio streams over IP networks using special media delivery protocols, encoding the voice and video content with audio and video codecs before streaming it. Various codecs exist that optimize the media stream based on application requirements and network bandwidth; some implementations rely on narrowband and compressed speech, while others support high-fidelity stereo codecs. Popular codecs include the μ-law and a-law versions of G.711; G.722, a high-fidelity codec marketed as HD Voice by Polycom; iLBC, a popular open-source voice codec; G.729, a codec that uses only 8 kbit/s in each direction; and many others.

Session Initiation Protocol (SIP)37

As its name implies, Session Initiation Protocol (SIP) is designed to manage multimedia connections. SIP is designed to support digest authentication structured by realms, similar to HTTP (basic username/password authentication has been removed from the protocol as of RFC 3261).38 In addition, SIP provides integrity protection through MD5 hash functions. SIP supports a variety of encryption mechanisms, such as TLS. Privacy extensions to SIP, including encryption and caller ID suppression, have been defined in extensions to the original Session Initiation Protocol (RFC 3325).39
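The digest mechanism SIP inherits from HTTP can be sketched as follows, assuming hypothetical credentials and nonce, and omitting the optional qop/cnonce fields for brevity:

```python
import hashlib

def sip_digest_response(username, realm, password, method, uri, nonce):
    """Compute the RFC 2617-style digest response used by SIP (RFC 3261):
    the password never crosses the wire, only an MD5 proof of knowledge."""
    md5 = lambda s: hashlib.md5(s.encode()).hexdigest()
    ha1 = md5(f"{username}:{realm}:{password}")   # identity + shared secret
    ha2 = md5(f"{method}:{uri}")                  # binds response to request
    return md5(f"{ha1}:{nonce}:{ha2}")            # server nonce defeats replay

# Hypothetical credentials and server-supplied nonce, for illustration only:
resp = sip_digest_response("alice", "example.com", "secret",
                           "REGISTER", "sip:example.com", "84f1c1d3")
print(len(resp))   # 32: an MD5 digest in lowercase hex
```

Note that MD5 is retained here because the protocol specifies it; digest authentication protects the password in transit but should still be combined with TLS for confidentiality of the signaling itself.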

Packet Loss

A technique called packet loss concealment (PLC) is used in VoIP communications to mask the effect of dropped packets. There are several techniques that may be used by different implementations. Zero substitution is the simplest PLC technique that requires the least computational resources. These simple algorithms generally provide the lowest quality sound when a significant number of packets are discarded. Waveform substitution is used in older protocols, and it works by substituting the lost frames with artificially generated, substitute sound. The simplest form of substitution simply repeats the last received packet. Unfortunately, waveform substitution often results in unnatural, “robotic” sound when a long burst of packets is lost.
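The simplest waveform-substitution technique described above — repeating the last received frame — can be sketched in a few lines (the 160-byte frame size assumes 20 ms of 8 kHz G.711 audio):

```python
def conceal_losses(frames):
    """Naive waveform-substitution PLC: replace each lost frame (None) with
    the most recently received one; lead with silence if loss comes first."""
    silence = b"\x00" * 160          # one 20 ms frame of silence at 8 kHz
    out, last = [], silence
    for frame in frames:
        if frame is None:            # the packet was dropped in transit
            out.append(last)         # repeat the previous frame instead
        else:
            out.append(frame)
            last = frame
    return out

stream = [b"A" * 160, None, b"B" * 160, None, None]
print([f[:1] for f in conceal_losses(stream)])   # [b'A', b'A', b'B', b'B', b'B']
```

The trailing run of repeated frames at the end of the example shows exactly why long loss bursts produce the “robotic” sound mentioned above: the same waveform keeps playing unchanged.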

Jitter

Unlike network delay, jitter does not arise from packet delay itself but from variation in packet delays. As VoIP endpoints try to compensate for jitter by increasing the size of the packet buffer, jitter causes delays in the conversation. If the variation becomes too high and exceeds 150 ms, callers notice the delay and often revert to a walkie-talkie style of conversation. In general, reducing delays on the network helps keep the buffer under 150 ms even when significant variation is present.
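Jitter is commonly estimated the way RTP receivers do (RFC 3550): the variation in packet transit times is smoothed with a 1/16 gain. The timings below are illustrative:

```python
def interarrival_jitter(send_times, recv_times):
    """Running jitter estimate in the style of RTP (RFC 3550): smooth the
    absolute variation in transit time with a 1/16 gain. Times are in ms."""
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s                      # one-way delay of this packet
        if prev_transit is not None:
            d = abs(transit - prev_transit)  # variation vs. previous packet
            jitter += (d - jitter) / 16.0    # exponential smoothing
        prev_transit = transit
    return jitter

# Packets sent every 20 ms; network delay wobbles between 40 and 50 ms
send = [0, 20, 40, 60, 80]
recv = [40, 70, 85, 110, 125]
print(round(interarrival_jitter(send, recv), 2))
```

A perfectly steady network (constant transit time) yields zero jitter regardless of how large the delay itself is, which is why delay and jitter are treated as separate call-quality metrics.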

Sequence Errors

Sequence errors can also cause significant degradation of call quality. They may occur because of the way packets are routed: packets may travel along different paths through different IP networks, causing different delivery times, so lower-numbered packets may arrive at the endpoint later than higher-numbered ones. Some VoIP systems discard packets received out of order, while other systems discard out-of-order packets only when they exceed the size of the internal buffer, which in turn causes jitter.

Codec Quality

A codec is software that converts audio signals into digital frames and vice versa. Codecs are characterized by different sampling rates and resolutions. Different codecs employ different compression methods and algorithms, with different bandwidth and computational requirements.

POTS and PBX

There are two standard phone systems used for telecommunications: POTS and PBX.

POTS

Plain old telephone service (POTS) is commonly found in the “last mile” of most residential and business telephone services. In some countries it was once called “Post Office Telephone Service,” but that name has mostly been retired due to the proliferation of phones in homes and businesses. POTS typically presents a bidirectional analog telephone interface that was designed to carry the sound of the human voice. POTS lacks the mobility of cellular phones and the bandwidth of several competing products; however, it is one of the most reliable systems available, with an uptime close to or exceeding 99.999%. POTS is still often the telecom method of choice when high reliability is required and bandwidth is not. Typical applications include alarm systems and “out of band” command links for routers and other network devices.

PBX

A private branch exchange (PBX) is an enterprise-class phone system typically used in businesses or large organizations. A PBX often includes an internal switching network and a controller that is attached to telecommunications trunks. Many PBXs ship with default manufacturer configuration codes, ports, and control interfaces that can be exploited if the security professional does not reconfigure them prior to deployment. A PBX is often targeted by war dialers, who can then use the PBX to route long-distance calling or eavesdrop on the organization. Analog POTS PBXs have largely been replaced with VoIP-based or VoIP-enabled PBXs.

Cellular

A cellular network or mobile network is a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver known as a cell site or base station. In a cellular network, each cell characteristically uses a different set of radio frequencies from its immediate neighboring cells to avoid interference. Joined together, these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission.

Attacks and Countermeasures

DDoS affects many types of systems. Some have used the term TDoS (Telecommunications Denial of Service) to refer to DDoS or DoS attacks on telecommunications systems. Typical motives range from revenge, extortion, and political or ideological aims to distraction from a larger set of financial crimes. The Dirt Jumper bot has been used to create distractions by launching DDoS attacks against financial institutions and financial infrastructure at the same time that fraud is taking place (with the Zeus Trojan or other banking malware or attack techniques). See Figure 6-11. Similarly, DDoS aimed at telecommunications is being used to create distractions that allow other crimes to go unnoticed for a longer period.


Figure 6-11: Dirt Jumper Bot Malware screen capture

A successful cyber-attack on a telecommunications operator could disrupt service for thousands of phone customers, sever Internet service for millions of consumers, cripple businesses, and shut down government operations. And there’s reason to worry: Cyber-attacks against critical infrastructure are soaring. For instance, in 2012, the U.S. Computer Emergency Readiness Team (US-CERT), a division of the Department of Homeland Security, processed approximately 190,000 cyber incidents involving U.S. government agencies, critical infrastructure, and the department’s industry partners. This represents a 68% increase over 2011.40

Another issue is the DDoS “attack for hire” networks that exist in the wild today. For example, advertisements like the ones pictured in Figures 6-12 and 6-13 can easily be found on the Internet for traditional DDoS services that also include phone attack services starting at $20 per day.


Figure 6-12: A DDoS attack for hire network ad


Figure 6-13: Another DDoS attack for hire network ad

SIP flooding attacks are another attack vector that the security practitioner needs to be aware of. Often, SIP flooding attacks take place because attackers are running brute-force password-guessing scripts that overwhelm the processing capabilities of the SIP device, but pure flooding attacks on SIP servers also occur. Once the attackers obtain credentials for a VoIP or other PBX system, that system can become a pawn in their money-making schemes, used to perform DoS, vishing, or other types of attacks. Default credentials are one of the security weaknesses that attackers leverage to gain access to VoIP/PBX systems. Organizations should therefore ensure that their telecommunications credentials are strong enough to resist brute-force attack, and that the ability to reach the telephone system is limited as much as possible, reducing the attack surface and encouraging the attacker to move on to the next victim.
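A first line of defense against such floods is per-source rate limiting in front of the SIP service. The sketch below uses a sliding window with illustrative thresholds; production deployments tune these values and usually add blocklisting as well:

```python
from collections import deque

class SipRateLimiter:
    """Per-source rate-limiter sketch for a SIP front end: reject traffic
    from any address exceeding max_requests per window (seconds) before it
    reaches the expensive SIP parsing and authentication stages."""
    def __init__(self, max_requests=20, window=1.0):
        self.max_requests = max_requests
        self.window = window
        self.seen = {}                      # source ip -> recent request times

    def allow(self, source_ip, now):
        q = self.seen.setdefault(source_ip, deque())
        while q and now - q[0] >= self.window:
            q.popleft()                     # forget requests outside the window
        if len(q) >= self.max_requests:
            return False                    # flooding: drop the request
        q.append(now)
        return True

limiter = SipRateLimiter(max_requests=3, window=1.0)
results = [limiter.allow("203.0.113.9", t) for t in (0.0, 0.1, 0.2, 0.3, 1.5)]
print(results)   # [True, True, True, False, True]
```

Rejecting floods before they reach the SIP stack matters because the brute-force scripts described above exhaust CPU on parsing and digest computation, not just bandwidth.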

Any system is subject to availability attacks at any point where an application-layer or other processor-intensive operation exists, and the networks that supply these systems can likewise be attacked via link saturation and state-table exhaustion. Telecommunications systems are no exception to this principle.

Control Network Access

The best way to control access to anything is to monitor any and all routes that can be used to access the item in question. Through continuous monitoring of the known access pathways, any attempts to gain access will be observed and can then be managed. The same thought process is applied to network access. The SSCP will want to understand how to ensure that all identified pathways to gain access to a system are monitored and that the appropriate control mechanisms are deployed to secure them. These may include the use of secure routing, DMZs, and hardware such as firewalls.

Secure Routing/Deterministic Routing

While it is possible to establish corporate wide area networks (WANs) using the Internet and VPN technology, it is not desirable. Relying on the Internet for connectivity means there is little ability to control the routes that traffic takes or to remedy performance issues. With deterministic routing, WAN connectivity is supplied over a limited number of pre-determined routes, typically provided by a large network carrier, that are known to be either secure or less susceptible to compromise. Deterministic routing from a large carrier also makes it much easier to address performance issues and to maintain the service levels required by the applications on the WAN. If the WAN is supporting converged applications like voice (VoIP) or video (for security monitoring or video conferencing), then deterministic routing becomes even more essential to the assurance of the network.

Boundary Routers

Boundary routers primarily advertise routes that external hosts can use to reach internal ones. However, they should also be part of an organization’s security perimeter by filtering external traffic that should never be allowed to enter the internal network. For example, boundary routers may prevent external packets from the Finger service from entering the internal network because that service is used to gather information about hosts.

A key function of boundary routers is the prevention of inbound or outbound IP spoofing attacks. With a properly configured boundary router, spoofed IP addresses are not routable across the network perimeter. Examples of IP spoofing attacks are:

  1. Non-Blind Spoofing This type of attack takes place when the attacker is on the same subnet as the victim. The sequence and acknowledgement numbers can be sniffed, eliminating the potential difficulty of calculating them accurately. The biggest threat of spoofing in this instance would be session hijacking. This is accomplished by corrupting the data stream of an established connection, then re-establishing it based on correct sequence and acknowledgement numbers with the attack machine.
  2. Blind Spoofing This is a more sophisticated attack because the sequence and acknowledgement numbers are unattainable. Several packets are sent to the target machine in order to sample sequence numbers. While not the case today, machines in the past used basic techniques for generating sequence numbers. It was relatively easy to discover the exact formula by studying packets and TCP sessions. Today, operating systems implement random sequence number generation, making it difficult to predict sequence numbers accurately. If, however, the sequence number was compromised, data could be sent to the target.
  3. Man-in-the-Middle Attack Both types of spoofing are forms of a common security violation known as a man-in-the-middle (MITM) attack. In these attacks, a malicious party intercepts a legitimate communication between two friendly parties. The malicious host then controls the flow of communication and can eliminate or alter the information sent by one of the original participants without the knowledge of either the original sender or the recipient.
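The anti-spoofing function of a boundary router can be sketched as an ingress check: a packet arriving on the external interface must never claim an internal source address. The prefixes below are illustrative; real deployments follow BCP 38 and the site's own addressing plan:

```python
import ipaddress

# Illustrative internal prefixes; a real filter uses the site's address plan
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

def ingress_permits(source_ip, arrived_on_external):
    """Boundary-router ingress rule sketch: drop any externally arriving
    packet whose source claims to be inside the protected network."""
    src = ipaddress.ip_address(source_ip)
    if arrived_on_external and any(src in net for net in INTERNAL_NETS):
        return False                      # spoofed internal source: drop it
    return True

print(ingress_permits("192.168.1.5", arrived_on_external=True))   # False
print(ingress_permits("198.51.100.7", arrived_on_external=True))  # True
```

The mirror-image egress rule (outbound packets must carry an internal source) stops the organization's own hosts from being used to launch spoofed attacks against others.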

Security Perimeter

The security perimeter is the first line of protection between trusted and untrusted networks. In general, it includes a firewall and router that help filter traffic. Security perimeters may also include proxies and devices, such as an intrusion detection system (IDS), to warn of suspicious traffic. The defensive perimeter extends out from these first protective devices to include proactive defense such as boundary routers, which can provide early warning of upstream attacks and threat activities.

It is important to note that while the security perimeter is the first line of defense, it must not be the only one. If there are not sufficient defenses within the trusted network, then a misconfigured or compromised device could allow an attacker to enter the trusted network.

Network Partitioning

Segmenting networks into domains of trust is an effective way to help enforce security policies. Controlling which traffic is forwarded between segments will go a long way to protecting an organization’s critical digital assets from malicious and unintentional harm.

Dual-Homed Host

A dual-homed host has two network interface cards (NICs), each on a separate network. Provided that the host controls or prevents the forwarding of traffic between NICs, this can be an effective measure to isolate a network.
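The isolation property can be modeled in a few lines; the addresses are illustrative, and on a real Linux host the equivalent control is the net.ipv4.ip_forward sysctl:

```python
class DualHomedHost:
    """Sketch of the isolation property: a dual-homed host joins two
    networks but, with forwarding disabled, refuses to relay traffic
    between its NICs, so each network terminates at the host itself."""
    def __init__(self, forwarding_enabled=False):
        self.nics = {"outside": "203.0.113.10", "inside": "10.0.0.10"}
        self.forwarding_enabled = forwarding_enabled

    def relay(self, from_nic, to_nic, packet):
        if not self.forwarding_enabled:
            return None                  # traffic is not passed through
        return (to_nic, packet)

host = DualHomedHost(forwarding_enabled=False)
print(host.relay("outside", "inside", b"probe"))   # None: networks stay isolated
```

The caveat in the text is exactly this flag: if forwarding is ever enabled, accidentally or by an attacker, the two networks are effectively bridged and the isolation is gone.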

Bastion Host

Bastion hosts serve as a gateway between a trusted and untrusted network that gives limited, authorized access to untrusted hosts. For instance, a bastion host at an Internet gateway could allow external users to transfer files to it via FTP. This permits files to be exchanged with external hosts without granting them access to the internal network in an uncontrolled manner.

If an organization has a network segment containing sensitive data, it can control access to that segment by requiring that all access come through the bastion host. In addition to isolating the network segment, this forces users to authenticate to the bastion host, which helps audit access to the sensitive segment. For example, if a firewall limits access to the sensitive segment, allowing access only from the bastion host eliminates the need to grant many individual hosts access to that segment. Terminal servers are one form of bastion host, allowing authenticated users deeper into the network.

A bastion host may also include functionality called a “data diode.” In the world of electronics, a diode is a device that only allows current to flow in a single direction. A data diode only allows information to flow in a single direction; for instance, it enforces rules that allow information to be read but not written (changed, created, or moved).

A bastion host is a specialized computer that is deliberately exposed on a public network. From a secured network perspective, it is the only node exposed to the outside world and is therefore very prone to attack. It is placed outside the firewall in single firewall systems or, if a system has two firewalls, it is often placed between the two firewalls or on the public side of a demilitarized zone (DMZ).

The bastion host processes and filters all incoming traffic and prevents malicious traffic from entering the network, acting much like a gateway. The most common examples of bastion hosts are mail, domain name system, Web, and file transfer protocol (FTP) servers. Firewalls and routers can also become bastion hosts.

The bastion host node is usually a very powerful server with improved security measures and custom software. It often hosts only a single application because it needs to be very good at what it does. The software is usually customized, proprietary, and not available to the public. This host is designed to be the strong point in the network to protect the system behind it. Therefore, it often undergoes regular maintenance and audit. Sometimes bastion hosts are used to draw attacks so that the source of the attacks may be traced.

To maintain the security of a bastion host, all unnecessary software, daemons, and users are removed. The operating system is continually updated with the latest security updates, and an intrusion detection system is installed.41

Demilitarized Zone (DMZ)

A demilitarized zone (DMZ), also known as a screened subnet, allows an organization to give external hosts limited access to public resources, such as a company website, without granting them access to the internal network. See Figure 6-14. Typically, the DMZ is an isolated subnet attached to a firewall (when the firewall has three interfaces—internal, external, and DMZ—this configuration is sometimes called a three-legged firewall). Because external hosts by design have access to the DMZ (albeit controlled by the firewall), organizations should only place in the DMZ hosts and information that are not sensitive.
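
The filtering logic of a three-legged firewall can be sketched as a first-match rule table. The zones, ports, and rules below are hypothetical examples for illustration, not a recommended policy:

```python
# Hypothetical first-match rule table for a three-legged firewall
# (interfaces: internal, external, dmz). Rules are illustrative only.
RULES = [
    # (source zone, destination zone, dest port or None for any, action)
    ("external", "dmz",      80,   "allow"),  # public web server
    ("external", "dmz",      443,  "allow"),
    ("internal", "dmz",      None, "allow"),  # admins manage DMZ hosts
    ("internal", "external", None, "allow"),  # outbound Internet access
    ("external", "internal", None, "deny"),   # never let outside in
]

def evaluate(src_zone: str, dst_zone: str, dst_port: int) -> str:
    """Return the action of the first matching rule; default deny."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if rule_src == src_zone and rule_dst == dst_zone and \
           rule_port in (None, dst_port):
            return action
    return "deny"  # implicit deny-all if nothing matches

assert evaluate("external", "dmz", 80) == "allow"
assert evaluate("external", "internal", 445) == "deny"
assert evaluate("external", "dmz", 22) == "deny"  # falls to default deny
```

The final implicit deny reflects the design principle above: external hosts reach only what the DMZ deliberately exposes, and nothing on the internal network.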

Figure 6-14: A demilitarized zone (DMZ) allows an organization to give external hosts limited access to public resources, such as a company website, without granting them access to the internal network.

Hardware

Networks use a vast array of hardware, including modems, concatenators, front-end processors, multiplexers, hubs, repeaters, bridges, switches, and routers.

Modems

Modems (modulator/demodulator) allow users remote access to a network via analog phone lines. Essentially, modems convert digital signals to analog and vice versa. A modem that is connected to the user’s computer converts a digital signal to analog to be transmitted over a phone line. On the receiving end, a modem converts the user’s analog signal to digital and sends it to the connected device, such as a server. Of course, the process is reversed when the server replies. The server’s reply is converted from digital to analog and transmitted over the phone line, and so on.
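
The digital-to-analog conversion can be illustrated, very loosely, as mapping bits to tones. The sketch below uses frequencies reminiscent of the Bell 103 standard purely for flavor; a real modem's modulation is far more elaborate:

```python
# Illustrative binary FSK: each bit is sent as one of two tones.
# Frequencies loosely echo the Bell 103 "originate" tones; real modems
# use standardized carriers and far more sophisticated modulation.
FREQ_FOR_BIT = {0: 1070, 1: 1270}   # Hz
BIT_FOR_FREQ = {f: b for b, f in FREQ_FOR_BIT.items()}

def modulate(bits):
    """Digital -> analog: map each bit to a tone frequency."""
    return [FREQ_FOR_BIT[b] for b in bits]

def demodulate(tones):
    """Analog -> digital: recover each bit from its tone."""
    return [BIT_FOR_FREQ[f] for f in tones]

data = [1, 0, 1, 1, 0]
assert demodulate(modulate(data)) == data  # round trip preserves the data
```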

In order to mitigate some of the risks that persist from the legacy analog world of communications, vendors have developed and brought to market telephony firewalls, which act much like IP firewalls but are designed specifically to handle analog signals. These firewalls sit at the demarcation point between the public switched telephone network (PSTN) and the internal organizational network, whether it is an IP phone system or an analog phone system, and monitor both incoming and outgoing analog calls to enforce rule sets.

Concentrators

Concentrators multiplex connected devices into one signal to be transmitted on a network. For instance, a fiber distributed data interface (FDDI) concentrator multiplexes transmissions from connected devices to a FDDI ring.

Front-End Processors

Some hardware architectures employ a hardware front-end processor that sits between the input/output devices and the main computer. By servicing input/output on behalf of the main computer, front-end processors reduce the main computer’s overhead.

Multiplexers

A multiplexer overlays multiple signals into one signal for transmission. Using a multiplexer is much more efficient than transmitting the same signals separately. Multiplexers are used in devices ranging from simple hubs to very sophisticated dense-wave division multiplexers (DWDMs) that combine multiple optical signals on one strand of optical fiber.
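
The interleaving idea behind time-division multiplexing can be sketched in a few lines. The channel contents below are invented placeholders:

```python
# Simplified time-division multiplexing: interleave fixed-size units from
# several inputs into one stream, then split them back out at the far end.
def multiplex(channels):
    """Combine equal-length channels into one interleaved stream."""
    return [unit for group in zip(*channels) for unit in group]

def demultiplex(stream, n):
    """Recover n channels from an interleaved stream."""
    return [stream[i::n] for i in range(n)]

voice = ["v1", "v2", "v3"]
data  = ["d1", "d2", "d3"]
link = multiplex([voice, data])          # one shared transmission medium
assert link == ["v1", "d1", "v2", "d2", "v3", "d3"]
assert demultiplex(link, 2) == [voice, data]
```

Wavelength-division multiplexing applies the same sharing idea in the frequency domain rather than the time domain: each signal rides on its own wavelength of light.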

Hubs and Repeaters

Hubs are used to implement a physical star topology. All of the devices in the star connect to the hub. Essentially, hubs retransmit signals from each port to all other ports. Although hubs can be an economical method to connect devices, there are several important disadvantages:

  • All connected devices will receive each other’s broadcasts, potentially wasting valuable resources processing irrelevant traffic.
  • All devices can read and potentially modify the traffic of other devices.
  • If the hub becomes inoperable, then the connected devices will not have access to the network.

Bridges and Switches

Bridges are layer 2 devices that filter traffic between segments based on media access control (MAC) addresses. In addition, they amplify signals to facilitate physically larger networks. A basic bridge filters out frames that are not destined for another segment. Bridges can connect LANs with unlike media types, such as connecting an unshielded twisted pair (UTP) segment with a segment that uses coaxial cable. However, bridges do not reformat frames, such as converting a Token Ring frame to Ethernet, so only identical layer 2 architectures can be connected with a simple bridge (e.g., Ethernet to Ethernet). Network administrators can use encapsulating bridges to connect dissimilar layer 2 architectures, such as Ethernet to Token Ring; these bridges encapsulate incoming frames into frames of the destination’s architecture. Other specialized bridges filter outgoing traffic based on the destination MAC address. Note that bridges do not prevent an intruder from intercepting traffic on the local segment.

A common type of bridge for many organizations is a wireless bridge based on one of the IEEE 802.11 standards. While wireless bridges offer compelling efficiencies, they can pose devastating security issues by effectively making all traffic crossing the bridge visible to anyone within range of the wireless link. Wireless bridges must therefore apply link-layer encryption and any other available native security features, such as access lists, to ensure secure operation.

Switches address the same hub drawbacks listed earlier in this section, with more sophisticated (and more expensive) solutions. Essentially, a basic switch is a multiport device to which LAN hosts connect. Switches forward frames only to the device specified in the frame’s destination MAC address, which greatly reduces unnecessary traffic. Switches can also perform more sophisticated functions to increase network bandwidth; thanks to their increased processing speed, models exist that can make forwarding decisions based on IP address and prioritize types of network traffic. Similar to hubs and bridges, switches forward broadcasts.
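
The forwarding behavior described above can be modeled as a “learning switch”: the switch records which port each source MAC address arrived on, forwards to the learned port when it can, and floods only when the destination is still unknown. The class and MAC values below are illustrative:

```python
# Minimal model of a learning switch: note which port each source MAC
# arrived on, then forward frames only to the learned port, flooding
# when the destination is still unknown.
class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}            # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port               # learn the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]            # forward to one port
        return [p for p in self.ports if p != in_port]  # flood

sw = LearningSwitch(ports=[1, 2, 3, 4])
assert sw.receive(1, "aa:aa", "bb:bb") == [2, 3, 4]  # unknown dst: flood
assert sw.receive(2, "bb:bb", "aa:aa") == [1]        # dst learned on port 1
assert sw.receive(1, "aa:aa", "bb:bb") == [2]        # now forwarded, not flooded
```

The flooding step is also why switches, like hubs and bridges, still propagate broadcasts.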

Routers

Routers forward packets to other networks. They read the destination layer 3 address (e.g., the destination IP address) in received packets and, based on the router’s view of the network, determine the next device on the network (the next hop) to which the packet should be sent. If the destination address is not on a network directly connected to the router, the router sends the packet to another router. Routers can also be used to interconnect different technologies. For example, connecting Token Ring and Ethernet networks to the same router would allow IP packets from the Ethernet network to be forwarded to the Token Ring network.
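
Next-hop selection is commonly done by longest-prefix match: of all routes whose prefix contains the destination, the most specific one wins. A minimal sketch, with an invented routing table, using Python's standard ipaddress module:

```python
import ipaddress

# Hypothetical routing table: (destination prefix, next hop).
ROUTES = [
    ("10.0.0.0/8",  "10.1.1.1"),
    ("10.2.0.0/16", "10.2.0.1"),   # more specific route wins
    ("0.0.0.0/0",   "192.0.2.1"),  # default route
]

def next_hop(dst_ip: str) -> str:
    """Pick the matching route with the longest prefix (most specific)."""
    dst = ipaddress.ip_address(dst_ip)
    best = max(
        (ipaddress.ip_network(prefix) for prefix, _ in ROUTES
         if dst in ipaddress.ip_network(prefix)),
        key=lambda net: net.prefixlen,
    )
    return dict(ROUTES)[str(best)]

assert next_hop("10.2.3.4") == "10.2.0.1"   # /16 beats /8
assert next_hop("10.9.9.9") == "10.1.1.1"
assert next_hop("8.8.8.8")  == "192.0.2.1"  # falls to the default route
```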

Wired Transmission Media

When the SSCP considers wired transmission media, they should be thinking about items such as Ethernet network cables and fiber optic cables. Both of these cable types are used to carry information and provide data transmission across the wired networks that make up the corporate LAN. In addition, understanding the differences in cable transmission speeds and distance based on the type of cable used is an important area of knowledge for the SSCP to focus on.

Here are some parameters that should be considered when selecting cables:

  • Throughput—The rate that data will be transmitted.
  • Distance between Devices—The degradation or loss of a signal (attenuation) in long runs of cable is a perennial problem, especially if the signal is at a high frequency. Also, the time required for a signal to travel (propagation delay) may be a factor.
  • Data Sensitivity—What is the risk of someone intercepting the data in the cables? Fiber optics, for example, makes data interception more difficult than copper cables.
  • Environment—It is a cable-unfriendly world. Cables may have to be bent during installation, which contributes to degradation of conduction and signal distortion. The amount of electromagnetic interference is also a factor; cables in an industrial environment with a lot of interference may have to be shielded. Similarly, cables running through areas with wide temperature fluctuations, and especially those exposed to ultraviolet light (sunlight), will degrade faster and be subject to degrading signals.
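
The attenuation mentioned under "Distance between Devices" is usually budgeted in decibels, which add linearly along a cable run. A short worked example (the 0.4 dB/km figure is typical of single-mode fiber at 1310 nm):

```python
import math

def db_loss(cable_km: float, loss_db_per_km: float) -> float:
    """Total attenuation over a cable run, in decibels."""
    return cable_km * loss_db_per_km

def power_fraction(loss_db: float) -> float:
    """Fraction of transmitted power that survives a given dB loss."""
    return 10 ** (-loss_db / 10)

# Single-mode fiber at roughly 0.4 dB/km over a 50 km run:
loss = db_loss(50, 0.4)                          # 20 dB total
assert loss == 20.0
assert math.isclose(power_fraction(loss), 0.01)  # only 1% of power remains
```

Whether 1% of the transmitted power is enough depends on receiver sensitivity, which is why each cable category carries a maximum rated distance.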

Twisted Pair42

Pairs of copper wires are twisted together to reduce electromagnetic interference and cross talk. Each wire is insulated with a fire-resistant material, such as Teflon. The twisted pairs are surrounded by an outer jacket that physically protects the wires. The quality of cable, and therefore its appropriate application, is determined by the number of twists per inch, the type of insulation, and conductive material. Cables are assigned into categories to help determine which cables are appropriate for an application or environment. See Table 6-12.

Table 6-12: Cable Categories

| Cable Category | Speed | Application or Environment |
|---|---|---|
| Category 1 | Less than 1 Mbps | Analog voice and basic rate interface (BRI) in Integrated Services Digital Network (ISDN) |
| Category 2 | 4 Mbps | 4 Mbps IBM Token Ring LAN |
| Category 3 | 10 Mbps | 10BASE-T Ethernet |
| Category 4 | 16 Mbps | 16 Mbps Token Ring |
| Category 5 | 100 Mbps | 100BASE-TX and Asynchronous Transfer Mode (ATM) |
| Category 5e | 1,000 Mbps (1 Gbps) | 1000BASE-T Ethernet |
| Category 6 | Up to 10,000 Mbps (10 Gbps) | 10BASE-T, 100BASE-TX, 1000BASE-TX, and 10GBASE-T |
| Category 6a | Up to 10,000 Mbps (10 Gbps) | 10BASE-T, 100BASE-TX, 1000BASE-TX, and 10GBASE-T |

Unshielded Twisted Pair (UTP)

Unshielded Twisted Pair (UTP) has several drawbacks. Unlike shielded twisted-pair cables, UTP does not have shielding and is therefore susceptible to interference from external electrical sources, which could reduce the integrity of the signal. Also, to intercept transmitted data, an intruder can install a tap on the cable or monitor the radiation from the wire. Thus, UTP may not be a good choice when transmitting very sensitive data or when installed in an environment with much electromagnetic interference (EMI) or radio frequency interference (RFI). Despite its drawbacks, UTP is the most common cable type. UTP is inexpensive, can be easily bent during installation, and, in most cases, the risk from the above drawbacks is not enough to justify more expensive cables.

Shielded Twisted Pair (STP)

Shielded twisted pair (STP) is similar to UTP: pairs of insulated twisted copper wires are enclosed in a protective jacket. However, STP adds an electrically grounded shield to protect the signal. The shield surrounds each of the twisted pairs in the cable, surrounds the bundle of twisted pairs, or both, and protects the electronic signals from outside interference. Although the shielding protects the signal, STP has disadvantages compared to UTP: it is more expensive, bulkier, and harder to bend during installation.

Coaxial Cable

Instead of a pair of wires twisted together, coaxial cable (or simply, coax) uses one thick conductor that is surrounded by a grounding braid of wire. A non-conducting layer is placed between the two layers to insulate them. The entire cable is placed within a protective sheath. The conducting wire is much thicker than the twisted pair and therefore can support greater bandwidth and longer cable lengths. The superior insulation protects coaxial cable from electronic interference, such as EMI and RFI. Likewise, the shielding makes it harder for an intruder to monitor the signal with antennae or install a tap. Coaxial cable has some disadvantages. The cable is expensive and is difficult to bend during installation. For this reason, coaxial cable is used in specialized applications, such as cable TV.

Patch Panels

As an alternative to connecting devices directly to one another, devices are wired to a patch panel. A network administrator can then connect two devices by attaching a small cable, called a patch cord, to two jacks in the panel. To change how devices are connected, the administrator only has to move patch cords. Patch panels and wiring closets must be secured, since they offer an excellent place to tap into the network and exfiltrate data. Wiring must be well laid out and neat, and records must be kept in a secure location; otherwise, it is much easier to hide a tap in a mess of wires. Shared wiring closets should be avoided.

Fiber Optic

Fiber optics use light pulses to transmit information down fiber lines instead of using electronic pulses to transmit information down copper lines. At one end of the system is a transmitter. This is the place of origin for information coming on to fiber-optic lines. The transmitter accepts coded electronic pulse information coming from copper wire. It then processes and translates that information into equivalently coded light pulses. A light-emitting diode (LED) or an injection-laser diode (ILD) can be used for generating the light pulses. Using a lens, the light pulses are funneled into the fiber-optic medium where they travel down the cable.

There are three types of fiber optic cable commonly used: single mode, multimode, and plastic optical fiber (POF). See Table 6-13.

Table 6-13: Fiber Types and Typical Specifications

| Fiber Type | Core/Cladding | Attenuation | Bandwidth | Applications/Notes |
|---|---|---|---|---|
| Multimode graded-index (@850/1300 nm) | 50/125 microns | 3/1 dB/km | 500/500 MHz-km | Laser-rated for GbE LANs |
| Multimode graded-index (@850/1300 nm) | 50/125 microns | 3/1 dB/km | 2000/500 MHz-km | Optimized for 850 nm VCSELs |
| Multimode graded-index (@850/1300 nm) | 62.5/125 microns | 3/1 dB/km | 160/500 MHz-km | Most common LAN fiber |
| Multimode graded-index (@850/1300 nm) | 100/140 microns | 3/1 dB/km | 150/300 MHz-km | Obsolete |
| Single-mode (@1310/1550 nm) | 8-9/125 microns | 0.4/0.25 dB/km | Very high (~100 terahertz) | Telco/CATV/long high-speed LANs |
| Multimode step-index (@850 nm) | 200/240 microns | 4-6 dB/km | 50 MHz-km | Slow LANs & links |
| POF, plastic optical fiber (@650 nm) | 1 mm | ~1 dB/m | ~5 MHz-km | Short links & cars |

Endpoint Security

Workstations should be hardened, and users should be using limited access accounts whenever possible in accordance with the concept of “least privilege.” Workstations should minimally have:

  • Up-to-date antivirus and antimalware software
  • A configured and operational host-based firewall
  • A hardened configuration with unneeded services disabled
  • A patched and maintained operating system

While workstations are clearly the endpoint most will associate with endpoint attacks, the landscape is changing. Mobile devices such as smartphones, tablets, and personal devices are beginning to make up more and more of the average organization’s endpoints. With this additional diversity of devices, there becomes a requirement for the security architect to also increase the diversity and agility of an organization’s endpoint defenses. For mobile devices such as smartphones and tablets, security practitioners should consider:

  • Encryption for the whole device, or if not possible, then at least encryption for sensitive information held on the device.
  • Remote management capabilities including:
    • Remote wipe
    • Remote geo-locate
    • Remote update
    • Remote operation
  • User policies and agreements that ensure an organization can manage the device or seize it for legal hold.

Voice Technologies

In the age of convergence, the area of voice technologies is an important one for the SSCP to consider when it comes to a discussion around the concepts of controlling network access. If voice technologies are not made a part of the plan that the SSCP will use to ensure that network communications enforce data confidentiality and integrity, then a defense in depth architecture cannot be fully realized. In that case, the SSCP is not exercising due care and due diligence with regard to the security of the network design and operation.

Modems and Public Switched Telephone Networks (PSTN)

The Public Switched Telephone Network (PSTN) is a circuit-switched network that was originally designed for analog voice communication. When a person places a call, a dedicated circuit is created between the two phones. Although it appears to the callers that they are using a dedicated line, they are actually communicating through a complex network. As with all circuit-switched technology, the path through the network is established before communication between the two endpoints begins, and barring an unusual event, such as a network failure, the path remains constant during the call.

Phones connect to the PSTN with copper wires to a central office (CO), which services an area of about 1 to 10 km. The central offices are connected to a hierarchy of tandem offices (for local calls) and toll offices (toll calls), with each higher level of the hierarchy covering a larger area. Including the COs, the PSTN has five levels of offices. When both endpoints of a call are connected to the same CO, the traffic is switched within the CO. Otherwise, the call must be switched between a toll center and a tandem office. The greater the distance between the calls, the higher in the hierarchy the calls are switched. To accommodate the high volume of traffic, toll centers communicate with each other over fiber-optic cables.

War Dialing

Although modems allowed remote access to networks from almost anywhere, they could also be used by an attacker as a portal into the network. Using automated dialing software, the attacker could dial the entire range of phone numbers used by the company to identify modems. If the host to which the modem was attached had a weak password, then the attacker would easily gain access to the network. Worse yet, if voice and data shared the same network, then both voice and data could be compromised.

The best defense against this attack is to avoid leaving unattended modems turned on and to keep an up-to-date inventory of all modems so that none become orphaned and operate without the knowledge and oversight of the security professional. All modems should require some form of authentication, at least single factor, although the industry standard has moved to two-factor authentication for modem connections due to the risks these devices pose. If modems are necessary, then organizations must ensure that the passwords protecting the attached host are strong, preferably with the help of authentication mechanisms such as RADIUS, one-time passwords, etc.

Multimedia Collaboration

The use of multimedia collaboration technologies is standard practice in the enterprise today. The need to understand and secure these technologies as part of a defense in depth approach to network security is part of the responsibilities that the SSCP has in their role as a security practitioner. Whether it is P2P, remote meeting technology, or instant messaging clients, the unrestricted usage of these applications and technologies can lead to vulnerabilities being exploited by bad actors to the detriment of the organization.

Peer-to-Peer Applications and Protocols

Peer-to-Peer (P2P) applications are often designed to open an uncontrolled channel through network boundaries (normally through tunneling). They therefore provide a way for dangerous content, such as botnets, spyware applications, and viruses, to enter an otherwise protected network. Because P2P networks can be established and managed using a series of multiple, overlapping master and slave nodes, they can be very difficult to fully detect and shut down. If one master node is detected and shut down, the “bot herder” who controls the P2P botnet can promote one of the slave nodes to master and use it as a redundant staging point, allowing botnet operations to continue unimpeded.

Remote Meeting Technology

Several technologies and services exist that allow organizations and individuals to meet virtually. These applications are typically Web-based, and they install extensions in the browser or client software on the host system. These technologies also typically allow desktop sharing as a feature. This feature not only allows the viewing of a user’s desktop but also control of the system by a remote user.

Some organizations use dedicated equipment such as cameras, monitors, and meeting rooms to host and participate in remote meetings. These devices are often a combination of VoIP and in some cases POTS technology. They are also subject to the same risks including but not limited to:

  • War dialing
  • Vendor backdoors
  • Default passwords
  • Vulnerabilities in the underlying operating system or firmware

Instant Messaging

Instant messaging systems can generally be categorized in three classes: peer-to-peer networks, brokered communication, and server-oriented networks. All these classes will support basic “chat” services on a one-to-one basis and frequently on a many-to-many basis. Most instant messaging applications do offer additional services beyond their text messaging capability, for instance, screen sharing, remote control, exchange of files, and voice and video conversation. Some applications even allow command scripting. Instant messaging and chat is increasingly considered a significant business application used for office communications, customer support, and “presence” applications. Instant message capabilities will frequently be deployed with a bundle of other IP-based services such as VoIP and video conferencing support. It should be noted that many of the risks mentioned here apply also to online games, which today offer instant communication between participants. For instance, multiplayer role-playing games, such as multiuser domains (MUDs), rely heavily on instant messaging that is similar in nature to Internet Relay Chat (IRC), even though it is technically based on a variant of the TELNET protocol.

Open Protocols, Applications, and Services

In order to be able to properly secure network access, it is important to understand the traffic that is moving through the network at all times. The ability to monitor traffic is impacted by the types of traffic being created and used in a network. The SSCP should have an understanding of the protocols, applications, and services that are in use on the networks that they are charged with defending. For example, the use of Jabber should raise a red flag for a security professional, as Jabber may be a potential source of confidentiality concerns if not configured properly. This section will discuss these issues and concerns and help the SSCP to understand what they need to do to address them.

Extensible Messaging and Presence Protocol (XMPP) and Jabber

Jabber is an open instant messaging protocol for which a variety of open source clients exist. A number of commercial services based on Jabber exist. Jabber has been formalized as an Internet standard under the name Extensible Messaging and Presence Protocol (XMPP), as defined in RFC 3920 and RFC 3921.43

Jabber is a server-based application. Its servers are designed to interact with other instant messaging applications. As with IRC, anybody can host a Jabber server. The Jabber server network can therefore not be considered trusted. Although Jabber traffic can be encrypted via TLS, this does not prevent eavesdropping on the part of server operators. However, Jabber does provide an API to encrypt the actual payload data.44 Jabber itself offers a variety of authentication methods, including cleartext and challenge/response authentication. To implement interoperability with other instant messaging systems from the server, however, the server will have to cache the user’s credentials for the target network, enabling a number of attacks, mainly on behalf of the server operator but also for anyone able to break into a server.

Internet Relay Chat (IRC)45

Internet Relay Chat (IRC) is a client/server-based network. IRC is unencrypted, and therefore an easy target for sniffing attacks. The basic architecture of IRC, founded on trust among servers, enables special forms of denial-of-service attacks. For instance, a malicious user can hijack a channel while a server or group of servers has been disconnected from the rest (net split). IRC is also a common platform for social engineering attacks, aimed at inexperienced or technically unskilled users.

Tunneling Firewalls and Other Restrictions

Control of HTTP tunneling can happen on the firewall or the proxy server. It should, however, be considered that in the case of peer-to-peer protocols, this would require a “deny by default” policy, and blocking instant messaging without providing a legitimate alternative is not likely to foster user acceptance and might give users incentive to utilize even more dangerous workarounds. It should be noted that inbound file transfers can also result in circumvention of policy or restrictions in place, in particular for the spreading of viruses. An effective countermeasure can be found in on-access antivirus scanning on the client, which should be enabled anyway.

Remote Access

Remote access technologies are an important area for the SSCP to focus on with regards to the ability to build a complete solution that addresses all aspects of controlling network access. Being able to balance the needs of users that will want to be able to work remotely and still maintain access to necessary corporate resources against the organization’s needs to ensure the confidentiality, integrity, and availability of the same resources is difficult. The ability for remote access technologies to address the needs and concerns of both audiences hinges on the design, implementation, and operation of the technology solution being used. It is up to the SSCP to develop the level of knowledge required to ensure that remote access technologies such as VPNs and tunneling protocols such as L2TP are used properly to ensure availability of resources while also safeguarding the confidentiality and integrity of them as well.

Virtual Private Network (VPN)

A Virtual Private Network (VPN) is an encrypted tunnel between two hosts that allows them to securely communicate over an untrusted network; e.g., the Internet. Remote users employ VPNs to access their organization’s network, and depending on the VPN’s implementation, they may have most of the same resources available to them as if they were physically at the office. As an alternative to expensive dedicated point-to-point connections, organizations use gateway-to-gateway VPNs to securely transmit information over the Internet between sites or even with business partners.

Tunneling

A tunnel is a communications channel between two networks that is used to transport another network protocol by encapsulation of its packets. Protocols such as Point-to-Point Tunneling Protocol (PPTP) and Layer 2 Tunneling Protocol (L2TP) are used to create these tunnels and to allow for the secure transmission of data between two endpoints, whether they are on the same or different networks. Authentication protocols such as Remote Authentication Dial In User Service (RADIUS) are deployed alongside tunneling protocols to ensure that user authentication is handled properly within these solutions.
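
Encapsulation itself is conceptually simple: the original packet rides, unchanged, as the payload of an outer packet addressed between the tunnel endpoints. The sketch below uses plain dictionaries with invented field names, not a real protocol layout:

```python
# Conceptual encapsulation: the tunnel endpoint wraps the original packet,
# unchanged, inside an outer header addressed to the far endpoint.
def encapsulate(inner_packet: dict, tunnel_src: str, tunnel_dst: str) -> dict:
    return {"src": tunnel_src, "dst": tunnel_dst, "payload": inner_packet}

def decapsulate(outer_packet: dict) -> dict:
    return outer_packet["payload"]

original = {"src": "192.168.1.5", "dst": "192.168.2.9", "data": "hello"}
tunneled = encapsulate(original, "203.0.113.1", "198.51.100.7")

# Intermediate routers see only the tunnel endpoints, not the inner addresses.
assert tunneled["dst"] == "198.51.100.7"
assert decapsulate(tunneled) == original   # inner packet emerges intact
```

Note that encapsulation alone provides no confidentiality; a secure tunnel additionally encrypts the payload, as IPSec does for L2TP.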

Point-to-Point Tunneling Protocol (PPTP)

Point-to-Point Tunneling Protocol (PPTP) is a VPN protocol that runs over other protocols. PPTP relies on generic routing encapsulation (GRE) to build the tunnel between the endpoints. After the user authenticates, typically with Microsoft Challenge Handshake Authentication Protocol version 2 (MSCHAPv2), a Point-to-Point Protocol (PPP) session creates a tunnel using GRE. A key weakness of PPTP is the fact that it derives its encryption key from the user’s password. This violates the cryptographic principle of randomness and can provide a basis for attacks. Password-based VPN authentication in general violates the recommendation to use two-factor authentication for remote access. The security architect and practitioner both need to consider known weaknesses, such as the issues identified with PPTP, when planning for the deployment and use of remote access technologies.
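
The weakness of password-derived keys can be demonstrated in the abstract. The derivation below is not the actual MS-CHAPv2/PPTP algorithm, only an illustration of the principle: a key that is a deterministic function of the password has no per-session randomness, so an attacker can test candidate passwords offline against captured traffic.

```python
import hashlib

# Illustration only (NOT the real MS-CHAPv2 derivation): when the session
# key is a deterministic function of the password, every session protected
# by that password uses the same key material, and guessing the password
# yields the key.
def key_from_password(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()[:16]

session1 = key_from_password("Winter2024")
session2 = key_from_password("Winter2024")
assert session1 == session2   # no per-session randomness

# An attacker can test candidate passwords offline.
candidates = ["password", "letmein", "Winter2024"]
recovered = [p for p in candidates if key_from_password(p) == session1]
assert recovered == ["Winter2024"]
```

Proper key exchange (e.g., Diffie-Hellman with per-session nonces) avoids this by making the key independent of the password's guessable entropy.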

Layer 2 Tunneling Protocol (L2TP)

Layer 2 Tunneling Protocol (L2TP) is a hybrid of Cisco’s Layer 2 Forwarding (L2F) and Microsoft’s PPTP. It allows callers over a serial line using PPP to connect over the Internet to a remote network. A dial-up user connects to his ISP’s L2TP access concentrator (LAC) with a PPP connection. The LAC encapsulates the PPP packets into L2TP and forwards them to the remote network’s layer 2 network server (LNS). At this point, the LNS authenticates the dial-up user. If authentication is successful, the dial-up user will have access to the remote network. The LAC and LNS may authenticate each other with a shared secret, but as RFC 2661 states, the authentication is effective only while the tunnel between the LAC and LNS is being created.46 L2TP does not provide encryption and relies on other protocols, such as tunnel mode IPSec, for confidentiality.

Remote Authentication Dial-in User Service (RADIUS)47

Remote Authentication Dial-in User Service (RADIUS) is an authentication protocol used mainly in networked environments, such as ISPs, or for similar services requiring single sign-on for layer 3 network access, where scalable authentication must be combined with an acceptable degree of security. In addition, RADIUS provides support for consumption measurement, such as connection time. RADIUS authentication is based on the provision of simple username/password credentials; the password is concealed by the client using a shared secret with the RADIUS server. Overall, RADIUS has the following issues:

  • RADIUS has become the victim of a number of cryptographic attacks and can be successfully attacked with a replay attack.
  • RADIUS suffers from a lack of integrity protection.
  • RADIUS transmits only specific fields using encryption.
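
The third point can be made concrete with the User-Password hiding scheme from RFC 2865: only the password attribute is concealed, by XORing it with an MD5 keystream derived from the shared secret and the per-request Request Authenticator; every other field travels in cleartext. A sketch of the scheme (the secret and password values are invented):

```python
import hashlib
import os

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 section 5.2: pad the password to 16-byte blocks, then XOR
    each block with MD5(secret + previous ciphertext block); the first
    block uses the 16-byte Request Authenticator instead."""
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        mask = hashlib.md5(secret + prev).digest()
        block = bytes(a ^ b for a, b in zip(padded[i:i + 16], mask))
        out += block
        prev = block
    return out

def unhide_password(hidden: bytes, secret: bytes, authenticator: bytes) -> bytes:
    out, prev = b"", authenticator
    for i in range(0, len(hidden), 16):
        mask = hashlib.md5(secret + prev).digest()
        out += bytes(a ^ b for a, b in zip(hidden[i:i + 16], mask))
        prev = hidden[i:i + 16]
    return out.rstrip(b"\x00")

secret = b"radius-shared-secret"
authenticator = os.urandom(16)   # per-request random value
hidden = hide_password(b"hunter2", secret, authenticator)
assert hidden != b"hunter2"
assert unhide_password(hidden, secret, authenticator) == b"hunter2"
```

Because the keystream depends only on MD5 and a static shared secret, a weak secret can be brute-forced offline from captured packets, which is one source of the cryptographic attacks noted above.
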

Simple Network Management Protocol (SNMP)48

Simple Network Management Protocol (SNMP) is designed to manage network infrastructure.

Table 6-14 provides an SNMP quick reference of ports and definitions.

Table 6-14: SNMP Quick Reference 49

Ports        161/TCP, 161/UDP; 162/TCP, 162/UDP
Definition   RFC 1157

SNMP architecture consists of a management server (called the manager in SNMP terminology) and a client, usually installed on network devices such as routers and switches, called an agent. SNMP allows the manager to retrieve (“get”) the values of variables from the agent, as well as to “set” variables. Such variables could be routing tables or performance-monitoring information.

Although SNMP has proven to be remarkably robust and scalable, it does have a number of clear weaknesses. Some of them are by design; others are subject to configuration parameters.

Probably the most easily exploited SNMP vulnerability is a brute force attack on default or easily guessed SNMP passwords, known as “community strings,” that are often used to manage a remote device. Given the scale of SNMP v1 and v2 deployment, combined with a frequent lack of clear direction from the security professional with regard to the risks of using SNMP without additional security enhancements to protect the community string, this is a realistic scenario and a potentially severe, but easily mitigated, risk.

Before version 3, SNMP did not provide any meaningful authentication or transmission security. Authentication consists of an identifier called a community string, by which a manager identifies itself to an agent (this string is configured into the agent), sent along with each command like a password. As a result, community strings can easily be intercepted, and commands can be sniffed and potentially faked. SNMP versions 1 and 2 did not support any form of encryption, so community strings were passed as cleartext. SNMP version 3 addresses this particular weakness with authentication and encryption.50
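
The cleartext community string is trivially readable by anyone who can capture the packet. As a sketch, the minimal BER parser below (assuming short-form lengths, i.e., messages under 128 bytes) pulls the community string straight out of a hand-built SNMPv1 message header:

```python
def snmp_community(packet: bytes) -> str:
    """Extract the cleartext community string from an SNMPv1/v2c message.

    Assumes short-form BER lengths -- enough to show that no
    cryptography protects the 'password' before SNMPv3.
    """
    assert packet[0] == 0x30            # SEQUENCE (whole message)
    assert packet[2] == 0x02            # INTEGER (version)
    ver_len = packet[3]
    idx = 4 + ver_len
    assert packet[idx] == 0x04          # OCTET STRING (community)
    com_len = packet[idx + 1]
    return packet[idx + 2: idx + 2 + com_len].decode()

# A hand-built SNMPv1 GET header with community "public"
# (the PDU that would follow is truncated for brevity):
sample = bytes([0x30, 0x29, 0x02, 0x01, 0x00, 0x04, 0x06]) + b"public" + bytes([0xA0, 0x1C])
print(snmp_community(sample))  # public
```

This is essentially what a sniffer on the same segment does to harvest community strings from live management traffic.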

Remote-Access Services

The services described in this section, TELNET and rlogin, are present in many UNIX environments and, when combined with NFS and NIS, provide the user with seamless remote working capabilities. They form a risky combination, however, if not configured and managed properly. Because they are built on mutual trust, they can be misused to obtain access and to escalate privileges horizontally and vertically in an attack. Their authentication and transmission capabilities are insecure by design; they have therefore had to be retrofitted or replaced altogether, as TELNET and rlogin have been through the use of SSH.

TCP/IP Terminal Emulation Protocol (TELNET)51

TELNET is a command line protocol designed to give command line access to another host. Although implementations for Windows exist, TELNET’s original domain was the UNIX server world, and in fact, a TELNET server is standard equipment for any UNIX server. (Whether it should be enabled is another question entirely, but in small LAN environments, TELNET is still widely used.)

  • TELNET offers little security, and indeed, its use poses serious security risks in untrusted environments.
  • TELNET is limited to username/password authentication.
  • TELNET does not offer encryption.
Remote Log-in (rlogin), Remote Shell (rsh), Remote Copy (rcp)52

In its most generic form, remote log-in (rlogin) is a protocol used for granting remote access to a machine, normally a UNIX server. Similarly, remote shell (rsh) grants direct remote command execution, while remote copy (rcp) copies data from or to a remote machine. If an rlogin daemon (rlogind) is running on a machine, rlogin access can be granted in two ways: through a central configuration file or through a per-user configuration. Through the latter, a user may grant access that the system administrator never intended to permit. The same mechanism applies to rsh and rcp, although they rely on a different daemon (rshd). Authentication is essentially host/IP address based. Although rlogin grants access based on user ID, the ID is not verified; i.e., the ID a remote client claims to possess is taken for granted if the request comes from a trusted host. The rlogin protocol transmits data without encryption and is hence subject to eavesdropping and interception.

The rlogin protocol is of limited value—its main benefit can be considered its main drawback: remote access without supplying a password. It should only be used in trusted networks, if at all. A more secure replacement is available in the form of SSHv2 for rlogin, rsh, and rcp.

Screen Scraper

A screen scraper is a program that can extract data from output on a display intended for a human. Screen scrapers are used in a legitimate fashion when older technologies are unable to interface with modern ones. In a nefarious sense, this technology can also be used to capture images from a user’s computer such as PIN pad sequences at a banking website when implemented by a virus or malware.

Virtual Network Terminal Services

Virtual terminal service is a tool frequently used for remote access to server resources. Virtual terminal services allow the desktop environment of a server to be exported to a remote workstation. This allows users at the remote workstation to execute desktop commands as though they were sitting at the server terminal interface in person. The advantage of terminal services, such as those provided by Citrix, Microsoft, or public domain VNC services, is that they allow complex administrative commands to be executed using the native interface of the server rather than a command-line interface, which might be available through SSHv2 or TELNET. Terminal services also allow the authentication and authorization services integrated into the server to be leveraged for remote users, along with all of the server’s logging and auditing features.

Telecommuting

Common issues such as visitor control, physical security, and network control are almost impossible to address with teleworkers. Strong VPN connections between the teleworker and the organization need to be established, and full device encryption should be the norm for protecting sensitive information. If the user works in public places or a home office, the following should also be considered:

  • Is the user trained to use secure connectivity software and methods such as a VPN?
  • Does the user know which information is sensitive or valuable and why someone might wish to steal or modify it?
  • Is the user’s physical location appropriately secure for the type of work and type of information they are using?
  • Who else has access to the area?
  • While a child may seem trusted, the child’s friends may not be.

Data Communication

Data can be transmitted using analog communication or digital communication.

Analog Communication

Analog signals use electronic properties, such as frequency and amplitude, to represent information. Analog recordings are a classic example: A person speaks into a microphone, which converts the vibration from acoustical energy to an electrical equivalent. The louder the person speaks, the greater the electrical signal’s amplitude. Likewise, the higher the pitch of the person’s voice, the higher the frequency of the electrical signal. Analog signals are transmitted on wires, such as twisted pair, or with a wireless device.

Digital Communication

Whereas analog communication uses complex waveforms to represent information, digital communication uses two electronic states (on and off). By convention, 1 is assigned to the on state and 0 to off. Electrical signals that consist of these two states can be transmitted over a cable, converted to light and transmitted over fiber optics, and broadcast with a wireless device. In all of the above media, the signal would be a series of one of two states: on and off. It is easier to ensure the integrity of digital communication because the two states of the signal are sufficiently distinct. When a device receives a digital transmission, it can determine which digits are 0s and which are 1s (if it cannot, then the device knows the signal is erroneous or corrupted). On the other hand, analog complex waveforms make ensuring integrity very difficult.
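
The integrity advantage of two well-separated states can be sketched with a simple threshold decision over noisy received samples (the voltage values below are purely illustrative):

```python
def recover_bits(samples, threshold=0.5):
    """Decide 0 or 1 for each received sample.

    Because the two digital states are far apart, moderate noise does
    not change the decision -- the core reason digital integrity is
    easier to preserve than analog fidelity.
    """
    return [1 if v >= threshold else 0 for v in samples]

sent = [1, 0, 1, 1, 0, 0, 1, 0]
# Noisy channel: each sample drifts, but stays on its side of the threshold.
received = [0.91, 0.12, 0.78, 0.97, 0.05, 0.22, 0.85, 0.31]
print(recover_bits(received))  # [1, 0, 1, 1, 0, 0, 1, 0] -- recovered exactly
```

The same amount of noise applied to an analog waveform would permanently distort the signal, because every intermediate value is meaningful; with digital transmission, only noise large enough to cross the threshold corrupts a bit.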

LAN-Based Security

In order for the SSCP to manage LAN-based security concerns effectively within the enterprise, they must understand the concept of separation between the data and control planes of a network. The control plane is where routing decisions are made, while the data plane forwards traffic based on input from the control plane. In addition, the SSCP should be familiar with technologies such as logical segmentation of the network through the use of one or more VLANs. The implementation of security solutions such as MACsec and secure device management are also important pieces of the LAN-based security puzzle that need to be considered carefully.

Separation of Data Plane and Control Plane

The control plane is where forwarding/routing decisions are made. Switches and routers have to figure out where to send frames (L2) and packets (L3). The switches and routers that make up the network operate as discrete components, but because they are in a network, they have to exchange information such as host reachability and status with their neighbors. This is done in the control plane using mechanisms such as spanning tree, OSPF, BGP, and QoS enforcement.

The data plane is where the action takes place. It includes things like the forwarding tables, routing tables, ARP tables, queues, tagging and re-tagging, etc. The data plane carries out the commands of the control plane. Figure 6-15 shows the control, data, and management plane as they would appear in a logical design diagram.

c06f015.tif

Figure 6-15: Logical design for control planes

For example, in the control plane, you set up IP networking and routing (routing protocols, route preferences, static routes, etc.) and connect hosts and switches/routers together. Each switch/router figures out what is directly connected to it and then tells its neighbors what it can reach and how it can reach it. The switches/routers also learn how to reach hosts and networks not directly attached to them. Once all of the routers/switches have a coherent picture—shared via the control plane—the network is converged.

In the data plane, the routers/switches use what the control plane built to handle incoming and outgoing frames and packets. Some get forwarded to another router, for example. Some may get queued up during congestion. Some may get dropped if congestion gets bad enough.

Segmentation

In simple terms, a VLAN is a set of workstations within a LAN that can communicate with each other as though they were on a single, isolated LAN. What does it mean to say that they “communicate with each other as though they were on a single, isolated LAN”?

Among other things, it means that:

  • Broadcast packets sent by one of the workstations will reach all the others in the VLAN.
  • Broadcasts sent by one of the workstations in the VLAN will not reach any workstations that are not in the VLAN.
  • Broadcasts sent by workstations that are not in the VLAN will never reach workstations that are in the VLAN.
  • The workstations can all communicate with each other without needing to go through a gateway. For example, IP connections would be established by ARPing for the destination IP and sending packets directly to the destination workstation—there would be no need to send packets to the IP gateway to be forwarded on.
  • The workstations can communicate with each other using non-routable protocols.

The Purpose of VLANs

The basic reason for splitting a network into VLANs is to reduce congestion on a large LAN. There are several advantages to using VLANs:

  • Performance—Removing routers from the equation avoids the bottlenecks that can occur when data rates increase.
  • Flexibility—Users can easily change locations within the VLAN without the restrictions that would otherwise be placed on them by routers in a physical LAN.
  • Virtual workgroups—Workers can be joined together by simply changing the configuration of switches without worrying about all of the hardware connections that would be necessary in a physical system.
  • Partitioning resources—You can place servers or other equipment on separate VLANs and easily control how much access to grant to each user for each VLAN.

Implementing VLANs/Port-Based VLANs

The act of creating a VLAN on a switch involves defining a set of ports and defining the criteria for VLAN membership for workstations connected to those ports. By far the most common VLAN membership criterion is port-based. With port-based VLANs, the ports of a switch are simply assigned to VLANs, with no extra criteria. All devices connected to a given port automatically become members of the VLAN to which that port was assigned. In effect, this just divides a switch up into a set of independent sub-switches.
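
Port-based assignment can be modeled as nothing more than a lookup table. The sketch below (a hypothetical 8-port switch with made-up VLAN assignments) shows how a broadcast frame is flooded only to other ports in the same VLAN, which is exactly the "independent sub-switches" behavior described above:

```python
# Hypothetical port-to-VLAN table for an 8-port switch.
port_vlan = {1: 10, 2: 10, 3: 10, 4: 20, 5: 20, 6: 20, 7: 30, 8: 30}

def broadcast_ports(ingress_port: int):
    """Return the ports a broadcast frame is flooded to.

    A frame entering a VLAN 10 port is flooded only to the other
    VLAN 10 ports; ports in VLANs 20 and 30 never see it.
    """
    vlan = port_vlan[ingress_port]
    return sorted(p for p, v in port_vlan.items() if v == vlan and p != ingress_port)

print(broadcast_ports(1))  # [2, 3] -- VLAN 10 only
print(broadcast_ports(7))  # [8]    -- VLAN 30 only
```

A real switch applies the same logic in hardware forwarding tables; moving a workstation between VLANs is just a change to this table, with no re-cabling.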

It is important to remember that VLANs do not guarantee a network’s security. At first glance, it may seem that traffic cannot be intercepted because communication within a VLAN is restricted to member devices. However, there are attacks that allow a malicious user to see traffic from other VLANs (so-called VLAN hopping). Therefore, a VLAN can be created so that engineers can efficiently share confidential documents, but the VLAN does not significantly protect the documents from unauthorized access. The following lists the most common attacks that could be launched against VLANs at the data link layer:

  • MAC Flooding Attack—This is not properly a network “attack” but more a limitation of the way all switches and bridges work. They possess a finite hardware learning table to store the source addresses of all received packets: When this table becomes full, the traffic that is directed to addresses that cannot be learned anymore will be permanently flooded. Packet flooding however is constrained within the VLAN of origin; therefore, no VLAN hopping is permitted. This behavior can be exploited by a malicious user that wants to turn the switch he or she is connected to into a dumb pseudo-hub and sniff all the flooded traffic. This weakness can then be exploited to perform an actual attack, like the ARP poisoning attack.
  • 802.1Q and Inter-Switch Link Protocol (ISL) Tagging Attack—Tagging attacks are malicious schemes that allow a user on a VLAN to get unauthorized access to another VLAN.
  • Double-Encapsulated 802.1Q/Nested VLAN Attack—While internal to a switch, VLAN numbers and identification are carried in a special extended format that allows the forwarding path to maintain VLAN isolation from end to end without any loss of information. Outside of a switch, however, the tagging rules are dictated by standards such as 802.1Q or Cisco’s ISL. When double-encapsulated 802.1Q packets are injected into the network from a device whose VLAN happens to be the native VLAN of a trunk, the VLAN identification of those packets cannot be preserved from end to end, since the 802.1Q trunk will always modify the packets by stripping their outer tag. After the outer tag is removed, the inner tag becomes the packet’s only VLAN identifier. Therefore, by double-encapsulating packets with two different tags, traffic can be made to hop across VLANs.
  • ARP Attacks—In L2 devices that implement VLANs independently of MAC addresses, changing a device’s identity in an ARP packet does not make it possible to affect the way it communicates with other devices across VLANs. As a matter of fact, any VLAN hopping attempts would be thwarted. On the other hand, within the same VLAN, ARP poisoning or ARP spoofing attacks are a very effective way to fool end stations or routers into learning counterfeited device identities: This can allow a malicious user to pose as an intermediary and perform a man-in-the-middle (MiM) attack. The MiM attack is performed by impersonating another device (for example, the default gateway) in the ARP packets sent to the attacked device; these packets are not verified by the receiver, and therefore they “poison” its ARP table with forged information.
  • Multicast Brute Force Attack—This attack tries to exploit switches’ potential vulnerabilities, or bugs, against a storm of L2 multicast frames. The correct behavior should be to constrain the traffic to its VLAN of origin; the failure behavior would be to leak frames to other VLANs.
  • Spanning-Tree Attack—Another attack that tries to leverage a possible switch weakness is the STP attack. The attack requires sniffing for STP frames on the wire to get the ID of the port STP is transmitting on. Then, the attacker would begin sending out STP Configuration/Topology Change Acknowledgement BPDUs announcing that he was the new root bridge with a much lower priority.
  • Random Frame Stress Attack—This last attack can have many incarnations, but in general it consists of a brute force attack that randomly varies several fields of a packet while keeping only the source and destination addresses constant.
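
The double-encapsulation trick in particular can be illustrated by building the frame bytes directly. The sketch below models a trunk stripping the outer tag for its (assumed) native VLAN 1, leaving the attacker's inner tag for a hypothetical victim VLAN 20 as the frame's only VLAN identifier:

```python
import struct

def vlan_tag(vid: int) -> bytes:
    """802.1Q tag: TPID 0x8100 followed by a TCI (priority 0, given VLAN ID)."""
    return struct.pack("!HH", 0x8100, vid & 0x0FFF)

dst, src = b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01"
# Outer tag: native VLAN 1 (will be stripped); inner tag: victim VLAN 20.
frame = dst + src + vlan_tag(1) + vlan_tag(20) + struct.pack("!H", 0x0800) + b"payload"

def strip_outer_tag(frame: bytes) -> bytes:
    """Model a trunk removing the 4-byte tag for its native VLAN."""
    assert struct.unpack("!H", frame[12:14])[0] == 0x8100
    return frame[:12] + frame[16:]

hopped = strip_outer_tag(frame)
inner_vid = struct.unpack("!H", hopped[14:16])[0] & 0x0FFF
print(inner_vid)  # 20 -- the frame now appears to belong to VLAN 20
```

The common countermeasure is to ensure the native VLAN of every trunk is a dedicated, unused VLAN (and to tag the native VLAN as well where the platform supports it), so an attacker's access port can never share the trunk's native VLAN.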

While many of these attacks are old, and may not be effective unless certain circumstances or misconfiguration issues are allowed to go unchecked within the network, the security practitioner needs to be aware of these attack vectors and ensure that they understand how they operate and what appropriate countermeasures are available.

Media Access Control Security (IEEE 802.1AE)

Media Access Control Security (MACsec) provides point-to-point security on Ethernet links between directly connected nodes. MACsec identifies and prevents most threats, including denial of service, intrusion, man-in-the-middle, masquerading, passive wiretapping, and playback attacks. MACsec is standardized in IEEE 802.1AE. When combined with other security protocols such as IP Security (IPsec) and Secure Sockets Layer (SSL), MACsec can provide end-to-end network security.

How MACsec Works

MACsec uses secured point-to-point Ethernet links. Matching security keys are exchanged and verified between the interfaces at each end of the link. Ports, MAC addresses, and other user-configurable parameters are similarly verified. Data integrity checks are used to secure and verify all data that traverses the link. If the data integrity check detects anything irregular about the traffic, the traffic is dropped. MACsec can also be configured to encrypt all data on the Ethernet link to prevent it from being viewed by anyone who might be monitoring traffic on the link.53

Connectivity Associations and Secure Channels

MACsec is configured using connectivity associations. You can configure MACsec using static secure association key (SAK) security mode or static connectivity association key (CAK) security mode. Both modes use secure channels to send and receive data on the MACsec-enabled link. When you use SAK security mode, you configure the secure channels, which also transmit the SAKs across the link to enable MACsec. Typically, you configure two secure channels: one for inbound traffic and one for outbound traffic. When you use CAK security mode, you create and configure the connectivity association; the secure channels are automatically created and configured as part of configuring the connectivity association and cannot be configured separately by users.54

Secure Device Management

Configuration Management/Monitoring (CM) is the application of sound program practices to establish and maintain consistency of a product’s or system’s attributes with its requirements and evolving technical baseline over its life. It involves interaction among systems engineering, hardware/software engineering, specialty engineering, logistics, contracting, and production in an integrated product team environment. A configuration management/monitoring process guides the system products, processes, and related documentation, and facilitates the development of open systems. Configuration management/monitoring efforts result in a complete audit trail of plans, decisions, and design modifications.

Automated CM tools can help the security practitioner to:

  • Record, control, and correlate configuration items (CIs), configuration units (CUs), and Configuration Components (CCs) within a number of individual baselines across the life cycle.
  • Identify and control baselines.
  • Track, control, manage, and report change requests for the baseline CIs, CCs, and CUs.
  • Track requirements from specification to testing.
  • Identify and control software versions.
  • Track hardware parts.
  • Enable rigorous compliance with a robust CM process.
  • Conduct physical configuration audits (PCAs).
  • Facilitate conduct of functional configuration audits.

Secure Shell (SSH)

Secure Shell’s (SSH) services include remote log-on, file transfer, and command execution. It also supports port forwarding, which redirects other protocols through an encrypted SSH tunnel. Many users protect the traffic of less secure protocols, such as X Windows and virtual network computing (VNC), by forwarding them through an SSH tunnel. The SSH tunnel protects the integrity of communication, preventing session hijacking and other man-in-the-middle attacks.

There are two incompatible versions of the protocol, SSH-1 and SSH-2, though many servers support both. SSH-2 has improved integrity checks (SSH-1 is vulnerable to an insertion attack due to weak CRC-32 integrity checking) and supports local extensions and additional types of digital certificates such as Open PGP. SSH was originally designed for UNIX, but there are now implementations for other operating systems, including Windows, Macintosh, and OpenVMS.

DNSSEC

According to Nick Sullivan at Cloudflare, “The point of DNSSEC is to provide a way for DNS records to be trusted by whoever receives them. The key innovation of DNSSEC is the use of public key cryptography to ensure that DNS records are authentic. DNSSEC not only allows a DNS server to prove the authenticity of the records it returns, it also allows the assertion of ‘non-existence of records.’

The DNSSEC trust chain is a sequence of records that identify either a public key or a signature of a set of resource records. The root of this chain of trust is the root key, which is maintained and managed by the operators of the DNS root. DNSSEC is defined by the IETF in RFCs 4033, 4034, and 4035.

There are several important new record types:

  • DNSKEY—A public key, used to sign a set of resource records (RRset).
  • DS—Delegation signer, a hash of a key.
  • RRSIG—A signature of an RRset that shares name/type/class.

A DNSKEY record is a cryptographic public key. DNSKEYs can be classified into two roles, which can be handled by separate keys or by a single key:

  • KSK (Key Signing Key)—Used to sign DNSKEY records.
  • ZSK (Zone Signing Key)—Used to sign all other records in the domain that it is authoritative for.

For a given domain name and question, there are a set of answers. For example, if you ask for the A record for ISC2.org, you get a set of A records as the answer:

ISC2.org.            IN    A    208.78.71.5
ISC2.org.            IN    A    208.78.70.5
ISC2.org.            IN    A    208.78.72.5
ISC2.org.            IN    A    208.78.73.5

The set of all records of a given type for a domain is called an RRset. A Resource Record SIGnature (RRSIG) is essentially a digital signature for an RRset. Each RRSIG is associated with a DNSKEY. The RRset of DNSKEYs is signed with the key signing key (KSK). All others are signed with the zone signing key (ZSK). Trust is conferred from the DNSKEY to the record through the RRSIG: If you trust a DNSKEY, then you can trust the records that are correctly signed by that key.

However, the domain’s ZSK is signed by itself, making it difficult to trust. The way around this is to walk the domain up to the next/parent zone. To verify that the DNSKEY for ISC2.org is valid, you have to ask the .org authoritative server. This is where the DS record comes into play: It acts as a bridge of trust to the parent level of the DNS.

The DS record is a hash of a DNSKEY. The .org zone stores this record for each child zone that has supplied DNSSEC keying information. The DS record is part of an RRset in the zone for .org and therefore has an associated RRSIG. This time, the RRset is signed by the .org ZSK. The .org DNSKEY RRset is, in turn, signed by the .org KSK.
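
The DS-record bridge can be sketched as a hash comparison. The digest below is a simplified version of the RFC 4034 computation (a hash over the wire-format owner name plus the DNSKEY RDATA), using made-up key bytes rather than real ISC2.org key material:

```python
import hashlib

def ds_digest(owner_wire: bytes, dnskey_rdata: bytes) -> str:
    """Simplified DS computation (after RFC 4034): the parent zone
    publishes a hash over the child's owner name plus its DNSKEY RDATA.
    """
    return hashlib.sha256(owner_wire + dnskey_rdata).hexdigest().upper()

# Hypothetical child zone data: wire-format name "isc2.org." and fake
# DNSKEY RDATA (flags 257 = KSK, protocol 3, algorithm 8, dummy key).
owner = b"\x04isc2\x03org\x00"
dnskey_rdata = bytes.fromhex("0101") + bytes([3, 8]) + b"\x30" * 32

child_claims = ds_digest(owner, dnskey_rdata)        # hash of the key the child serves
parent_publishes = ds_digest(owner, dnskey_rdata)    # DS record held in the parent zone
# The resolver trusts the child's key only if the two digests match.
assert child_claims == parent_publishes
```

A validating resolver performs exactly this comparison at every delegation point while walking from the root down, which is how trust in the root KSK extends to any signed record.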

The ultimate root of trust is the KSK DNSKEY for the DNS root. This key is universally known and published.

Here is the DNSKEY root KSK that was published in August 2010 and will be used until sometime in 2015 or 2016 (encoded in base64):

AwEAAagAIKlVZrpC6Ia7gEzahOR+9W29euxhJhVVLOyQbSEW0O8gcC jFFVQUTf6v58fLjwBd0YI0EzrAcQqBGCzh/RStIoO8g0NfnfL2MTJR kxoXbfDaUeVPQuYEhg37NZWAJQ9VnMVDxP/VHL496M/QZxkjf5/Efu cp2gaDX6RS6CXpoY68LsvPVjR0ZSwzz1apAzvN9dlzEheX7ICJBBtuA 6G3LQpzW5hOA2hzCTMjJPJ8LbqF6dsV6DoBQzgul0sGIcGOYl7OyQd XfZ57relSQageu+ipAdTTJ25AsRTAoub8ONGcLmqrAmRLKBP1dfwhY B4N7knNnulqQxA+Uk1ihz0= 

By following the chain of DNSKEY, DS, and RRSIG records to the root, any record can be trusted.”55

Network-Based Security Devices

Key concepts for the security professional to consider regarding network-based security devices include:

  • Definition of Security Domains—This could be defined by level of risk or by organizational control. A prime example is the tendency of decentralized organizations to manage their IT—and thereby also their network security—locally, and as a result, with different degrees of success.
  • Segregation of Security Domains—Control of traffic flows according to risk/benefit assessment, and taking into account formal models, such as the Bell–La Padula model, the Biba integrity model, or the Clark–Wilson model.
  • Incident Response Capability—Including but not limited to:
    • An inventory of business-critical traffic (this could, for instance, be email or file and print servers, but also DNS and DHCP, telephony traffic—VoIP, building access control traffic, and facilities management traffic. Remember—modern building controls, physical security controls, and process controls are converging onto IP).
    • An inventory of less critical traffic (such as HTTP or FTP).
    • A way to quickly contain breaches (for instance, by shutting off parts of the network or blocking certain types of traffic).
    • A process for managing the reaction.
    • Contingency or network “diversity” in case of overload or failure of the primary network connection; alternate network connections are in place to absorb the load/traffic automatically without loss of services to applications and users.

Network Security Objectives and Attack Modes

It is a common misperception that network security is the endpoint for all other security measures; i.e., that firewalls alone will protect an organization. This is a flawed perception; perimeter defense (defending the edges of the network) is merely part of the overall solution set, or enterprise security architecture, that the security professional needs to ensure is in place in the enterprise. Perimeter defense is part of a wider concept known as defense in depth, which holds that security must be a multi-layer effort encompassing not just the edges but also the hosts, applications, network elements (routers, switches, DHCP, DNS, wireless access points), people, and operational processes. Some proactive techniques can be undertaken independently by organizations with the willingness and resources to do so. Others, such as upstream intercession (assuming an external source of the threat, such as DDoS, spam/phishing, or botnet attacks), can be accomplished fairly easily and affordably through cooperation with telecommunications suppliers and Internet service providers (ISPs). Finally, the most effective proactive network defense is the ability to disable attack tools before they can be deployed and applied against you.

Confidentiality

In the context of telecommunications and network security, confidentiality is the property of nondisclosure to unauthorized parties. Attacks against confidentiality are by far the most prevalent today because information can be sold or exploited for profit in a huge variety of (mostly criminal) ways. The network, as the carrier of almost all digital information within the enterprise, provides an attractive target to bypass access control measures on the assets using the network and access information while it is in transit on the wire. Among the information that can be acquired is not just the payload information but also credentials, such as passwords. Conversely, an attacker might not even be interested in the information transmitted but simply in the fact that the communication has occurred. An overarching class of attacks carried out against confidentiality is known as “eavesdropping.”

Eavesdropping (Sniffing)

To access information from the network, an attacker must have access to the network itself in the first place. An eavesdropping computer can be a legitimate client on the network or an unauthorized one. It is not necessary for the eavesdropper to become a part of the network (for instance, having an IP address); it is often far more advantageous for an attacker to remain invisible (and un-addressable) on the network. This is particularly easy in wireless LANs, where no physical connection is necessary. Countermeasures to eavesdropping include encryption of network traffic on a network or application level, traffic padding to prevent identification of times when communication happens, rerouting of information to anonymize its origins and potentially split different parts of a message, and mandating trusted routes for data such that information is only traversing trusted network domains.

Integrity

In the context of telecommunications and network security, integrity is the property of protection against corruption or change (intentional or accidental). A network needs to support and protect the integrity of its traffic. In many ways, the provisions taken for protection against interception to protect confidentiality will also protect the integrity of a message. Attacks against integrity are often an interim step to compromising confidentiality or availability rather than the overall objective of the attack. Although the modification of messages will often happen at the higher network layers (i.e., within applications), networks can be set up to provide robustness or resilience against interception and change of a message (man-in-the-middle attacks) or replay attacks. Ways to accomplish this can be based on encryption or checksums on messages, as well as on access control measures for clients that would prevent an attacker from gaining the necessary access to send a modified message into the network in the first place. Conversely, many protocols, such as SMTP, HTTP, or even DNS, do not provide any degree of authentication. Consequently, it becomes relatively easy for an attacker to inject messages with fake sender information into a network from the outside through an existing gateway. The fact that no application can rely on the security or authenticity of underlying protocols has become a common design factor in networking.
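
Keyed checksums, i.e., message authentication codes, are the standard way to detect modification in transit. A minimal HMAC-SHA256 sketch (with a hypothetical key and messages):

```python
import hmac
import hashlib

key = b"shared-integrity-key"  # hypothetical key shared by sender and receiver

def tag(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag; sender transmits message + tag."""
    return hmac.new(key, message, hashlib.sha256).digest()

msg = b"transfer $100 to account 42"
mac = tag(msg)

# The receiver recomputes the tag. The unmodified message verifies;
# a tampered message produces a different tag and is rejected.
tampered = b"transfer $999 to account 66"
print(hmac.compare_digest(mac, tag(msg)))       # True
print(hmac.compare_digest(mac, tag(tampered)))  # False
```

Unlike a plain CRC or checksum, an attacker who modifies the message cannot recompute a valid tag without the shared key, so in-flight changes are detected.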

Availability

In the context of telecommunications and network security, availability is the property of a network service related to its uptime, speed, and latency. Availability of the service is commonly the most obvious business requirement especially with highly converged networks, where multiple assets (data, voice, physical security) are riding on top of the same network. For this very reason, network availability has also become a prime target for attackers and a key business risk that security professionals need to be prepared to address. While a variety of availability threats and risks are addressed in this domain, an overarching class of attack against availability is known as denial of service.

Attacks on the transport layer of the OSI model (layer 4) seek to manipulate, disclose, or prevent delivery of the payload as a whole. This can, for instance, happen by reading the payload (as would happen in a sniffer attack) or changing it (which could happen in a man-in-the-middle attack). While disruptions of service can be executed at other layers as well, the transport layer has become a common attack ground (e.g., TCP SYN flooding), as has the adjacent network layer via ICMP.

Domain Litigation

Domain names are subject to trademark risks, related to a risk of temporary unavailability or permanent loss of an established domain name. For the business in question, the consequences can be equivalent to the loss of its whole Internet presence in an IT-related disaster. Businesses should therefore put in place contingency plans if they are concerned with trademark disputes of any kind over a domain name used as their main Web and email address. Such contingency plans might include setting up a second domain unrelated to the trademark in question (based, for instance, on the trademark of a parent company) that can be advertised on short notice, if necessary. Cyber-squatting and the illegitimate use of similar domains, containing common misspellings or representing the same second-level domain under a different top-level domain, are occurring more frequently as the range of domains continues to expand. The main ways to protect a business from this kind of fraud are the registration of the most prominent adjacent domains and trademark litigation. A residual risk will always remain, relating not only to public misrepresentation but also to potential loss or disclosure of email.

Open Mail Relay Servers

An open mail relay server is an SMTP service that accepts and forwards mail for domains it does not serve; i.e., for which it does not possess a DNS MX record. An open mail relay is generally considered a sign of bad system administration. Open mail relays are a principal tool for the distribution of spam because they allow an attacker to hide their identity. A number of blacklists for open mail relay servers exist; a legitimate mail server can refuse all email from a blacklisted host because it has a high likelihood of being spam. Although using blacklists as one indicator in spam filtering has its merits, it is risky to use them as an exclusive indicator. Generally, they are run by private organizations and individuals according to their own rules; they are able to change their policies on a whim, they can vanish overnight for any reason, and they can rarely be held accountable for the way they operate their lists.
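The relay decision itself can be sketched as a simple policy check. The domain list and function names below are hypothetical; real MTAs combine this with authentication, network controls, and much more.

```python
LOCAL_DOMAINS = {"example.com", "mail.example.com"}   # domains this MTA serves

def should_relay(sender_authenticated: bool, rcpt: str) -> bool:
    """Accept mail only for served domains, or from authenticated senders.
    Accepting everything else is exactly what makes a relay 'open'."""
    domain = rcpt.rpartition("@")[2].lower()
    return sender_authenticated or domain in LOCAL_DOMAINS

assert should_relay(False, "alice@example.com")       # inbound to a served domain
assert should_relay(True, "bob@elsewhere.net")        # authenticated user relaying out
assert not should_relay(False, "bob@elsewhere.net")   # open-relay behavior refused
```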

Spam

By far the most common way of suppressing spam is email filtering on an email gateway. A large variety of commercial products exists, based on a variety of algorithms. Filtering based on simple keywords can be regarded as technically obsolete because this method is prone to generating false positives, and spammers can easily work around this type of filter simply by manipulating the content and keywords in their messages. More sophisticated filters, based, for instance, upon statistical analysis or analysis of email traffic patterns, have come to market. Filtering can happen on an email server (mail transfer agent [MTA]) or in the client (mail user agent [MUA]). The administrator of a mail server can configure it to limit or slow down an excessive number of connections (tar pit). A mail server can be configured to honor blacklists of spam sources either as a direct blocking list or as one of several indicators for spam.
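A minimal sketch of score-based content filtering follows. The token weights and threshold are invented for illustration; a statistical filter would learn weights from a training corpus rather than hard-code them.

```python
# Hypothetical token weights; a real filter derives these from labeled mail.
SPAM_WEIGHTS = {"winner": 2.0, "free": 1.5, "click": 1.0, "meeting": -1.5}
THRESHOLD = 2.5

def spam_score(message: str) -> float:
    """Sum the weights of known tokens; unknown tokens contribute nothing."""
    return sum(SPAM_WEIGHTS.get(tok, 0.0) for tok in message.lower().split())

def is_spam(message: str) -> bool:
    return spam_score(message) >= THRESHOLD

assert is_spam("You are a WINNER click for FREE prizes")
assert not is_spam("Agenda for the project meeting tomorrow")
```

Note how trivially a spammer defeats pure keyword matching ("FR EE pr1zes"), which is why statistical and traffic-pattern analysis has displaced it.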

Firewalls and Proxies

Firewalls and proxies are both considered to be gateway protection devices. They are deployed by the SSCP to help support a defense-in-depth architecture design for the enterprise. A firewall is used to examine the flow of traffic entering and exiting a network, and to match that traffic flow against a predetermined set of rules that are used to determine whether that traffic should be allowed or denied. A proxy device is used to manage the exchange of traffic between networks and can also be used to filter traffic, examining it for layer 7 protocols such as HTTP or FTP.

Firewalls

Firewalls are devices that enforce administrative security policies by filtering incoming traffic based on a set of rules. Often firewalls are thought of as protecting only an Internet gateway. While a firewall should always be placed at Internet gateways, there are also internal network considerations and conditions where a firewall would be employed, such as network zoning. Many firewalls are also threat management appliances with a variety of other security services embedded, such as proxy services and intrusion prevention services, which seek to monitor and alert proactively at the network perimeter.

Firewalls will not be effective right out of the box. Firewall rules must be defined correctly in order to not inadvertently grant unauthorized access. As with all hosts on a network, administrators must install patches on the firewall and disable all unnecessary services. Firewalls also offer limited protection against vulnerabilities caused by application flaws in server software on other hosts. For example, a firewall will not prevent an attacker from manipulating a database to disclose confidential information.

Filtering

Firewalls filter traffic based on a rule set. Each rule instructs the firewall to block or forward a packet based on one or more conditions. For each incoming packet, the firewall will look through its rule set for a rule whose conditions apply to that packet, and block or forward the packet as specified in that rule. Below are two important conditions used to determine if a packet should be filtered.

  1. By Address Firewalls will often use the packet’s source or destination address, or both, to determine if the packet should be filtered.
  2. By Service Packets can also be filtered by service. The firewall inspects the service the packet is using (if the packet is TCP or UDP, the service is the destination port number) to determine if the packet should be filtered. For example, firewalls will often have a rule to filter the Finger service to prevent an attacker from using it to gather information about a host. Filtering by address and by service are often combined in rules. If the engineering department wanted to grant anyone on the LAN access to its web server, a rule could be defined to forward packets whose destination address is the web server’s and whose service is HTTP (TCP port 80).
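The two conditions above can be sketched as a first-match rule evaluator. All addresses, ports, and field names here are hypothetical; real firewalls match many more fields (interfaces, protocols, flags).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str                   # "allow" or "deny"
    src: Optional[str] = None     # None acts as a wildcard
    dst: Optional[str] = None
    dport: Optional[int] = None   # destination port, i.e., the service

def filter_packet(rules, packet, default="deny"):
    """First matching rule wins, as in most real firewall rule sets."""
    for r in rules:
        if ((r.src is None or r.src == packet["src"]) and
                (r.dst is None or r.dst == packet["dst"]) and
                (r.dport is None or r.dport == packet["dport"])):
            return r.action
    return default

rules = [
    Rule("deny", dport=79),                      # block Finger everywhere
    Rule("allow", dst="10.0.5.20", dport=80),    # LAN access to the web server
]

assert filter_packet(rules, {"src": "10.0.1.7", "dst": "10.0.5.20", "dport": 80}) == "allow"
assert filter_packet(rules, {"src": "10.0.1.7", "dst": "10.0.5.20", "dport": 79}) == "deny"
assert filter_packet(rules, {"src": "10.0.1.7", "dst": "10.0.5.99", "dport": 22}) == "deny"
```

The implicit default-deny at the end mirrors the usual best practice: anything not explicitly permitted is blocked.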

Network Address Translation (NAT)

Firewalls can change the source address of each outgoing (from trusted to untrusted network) packet to a different address. This has several applications, most notably to allow hosts with RFC 1918 addresses access to the Internet by changing their non-routable address to one that is routable on the Internet.56 A non-routable address is one that will not be forwarded by an Internet router, and therefore remote attacks using non-routable internal addresses cannot be launched over the open Internet. Anonymity is another reason to use NAT. Many organizations do not want to advertise their IP addresses to an untrusted host and thus unnecessarily give information about the network. They would rather hide the entire network behind translated addresses. NAT also greatly extends the capabilities of organizations to continue using IPv4 address spaces.
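The core of source NAT can be sketched with the standard library's `ipaddress` module, which recognizes RFC 1918 ranges. The public address is a documentation value standing in for the firewall's routable IP, and the packet representation is invented for the example.

```python
import ipaddress

PUBLIC_IP = "203.0.113.10"   # documentation address standing in for the routable IP

def is_private(addr: str) -> bool:
    """RFC 1918 and other non-routable addresses report is_private = True."""
    return ipaddress.ip_address(addr).is_private

def translate_source(packet: dict) -> dict:
    """Rewrite a non-routable source address to the firewall's public one."""
    if is_private(packet["src"]):
        return {**packet, "src": PUBLIC_IP}
    return packet

out = translate_source({"src": "192.168.1.15", "dst": "8.8.8.8"})
assert out["src"] == "203.0.113.10"
assert is_private("10.20.30.40") and not is_private("8.8.8.8")
```

A real NAT device also keeps a connection table so that replies to the translated address can be mapped back to the originating internal host.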

Port Address Translation (PAT)

An extension to NAT is to translate all addresses to one routable IP address and translate the source port number in the packet to a unique value. The port translation allows the firewall to keep track of multiple sessions that are using PAT.
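The session tracking that PAT relies on can be sketched as a bidirectional table. The port range and class name are invented; real implementations add timeouts, protocol awareness, and port exhaustion handling.

```python
import itertools

class PatTable:
    """Map (private_ip, private_port) pairs to unique public source ports."""
    def __init__(self, public_ip: str, first_port: int = 30000):
        self.public_ip = public_ip
        self._ports = itertools.count(first_port)
        self._out = {}      # (priv_ip, priv_port) -> public_port
        self._back = {}     # public_port -> (priv_ip, priv_port)

    def outbound(self, priv_ip: str, priv_port: int):
        key = (priv_ip, priv_port)
        if key not in self._out:            # new session: allocate a fresh port
            port = next(self._ports)
            self._out[key] = port
            self._back[port] = key
        return self.public_ip, self._out[key]

    def inbound(self, public_port: int):
        """Replies arriving at a public port map back to the private host."""
        return self._back[public_port]

pat = PatTable("203.0.113.10")
a = pat.outbound("192.168.1.15", 51000)
b = pat.outbound("192.168.1.16", 51000)   # same private port, distinct public port
assert a != b and pat.inbound(b[1]) == ("192.168.1.16", 51000)
```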

Static Packet Filtering

When a firewall uses static packet filtering, it examines each packet without regard to the packet’s context in a session. Packets are examined against static criteria, for example, blocking all packets with a port number of 79 (Finger). Because of its simplicity, static packet filtering requires very little overhead, but it has a significant disadvantage. Static rules cannot be temporarily changed by the firewall to accommodate legitimate traffic. If a protocol requires a port to be temporarily opened, administrators have to choose between permanently opening the port and disallowing the protocol.

Stateful Inspection or Dynamic Packet Filtering

Stateful inspection examines each packet in the context of a session, which allows it to make dynamic adjustments to the rules to accommodate legitimate traffic and block malicious traffic that would appear benign to a static filter. Consider FTP. A user connects to an FTP server on TCP port 21 and then tells the FTP server on which port to transfer files. The port can be any TCP port above 1023. So, if the FTP client tells the server to transfer files on TCP port 1067, the server will attempt to open a connection to the client on that port. A stateful inspection firewall would watch the interaction between the two hosts, and even though the required connection is not permitted in the rule set, it would allow the connection to occur because it is part of FTP.

Static packet filtering, in contrast, would block the FTP server’s attempt to connect to the client on TCP port 1067 unless a static rule was already in place. In fact, because the client could instruct the FTP server to transfer files on any port above 1023, a static rule would have to be in place to permit access to the specified port.
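The state tracking described above can be sketched as a table of temporary permissions derived from the FTP control channel. The control-message format is greatly simplified here (a real FTP PORT command encodes the address and port as six comma-separated numbers), and all names are hypothetical.

```python
class StatefulFilter:
    """Watch FTP control traffic and temporarily permit the negotiated
    data connection that a static rule set would otherwise block."""
    def __init__(self):
        self.dynamic_allows = set()   # (server_ip, client_ip, client_port)

    def observe_control(self, server_ip, client_ip, payload: str):
        # Simplified control message, e.g. "PORT 1067" (real FTP encodes more).
        if payload.startswith("PORT "):
            port = int(payload.split()[1])
            self.dynamic_allows.add((server_ip, client_ip, port))

    def permits(self, src_ip, dst_ip, dst_port) -> bool:
        return (src_ip, dst_ip, dst_port) in self.dynamic_allows

fw = StatefulFilter()
# Before the control channel is observed, the inbound data connection is blocked.
assert not fw.permits("198.51.100.7", "192.168.1.15", 1067)
fw.observe_control("198.51.100.7", "192.168.1.15", "PORT 1067")
# The state table now allows the server's connection back to the client.
assert fw.permits("198.51.100.7", "192.168.1.15", 1067)
```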

Personal Firewalls

Following the principle of defense in depth, personal firewalls should be installed on workstations to protect the user from other hosts on the network. It is critical for home users with DSL or cable modem access to the Internet to have a personal firewall installed on every PC, especially if they do not have a firewall protecting their network.

Proxies

A proxy firewall mediates communications between untrusted endpoints (servers/hosts/clients) and trusted endpoints (servers/hosts/clients). From an internal perspective, a proxy may forward traffic from known, internal client machines to untrusted hosts on the Internet, creating the illusion for the untrusted host that the traffic originated from the proxy firewall, thus hiding the trusted internal client from potential attackers. To the user, it appears that he or she is communicating directly with the untrusted server. Proxy servers are often placed at Internet gateways to hide the internal network behind one IP address and to prevent direct communication between internal and external hosts.

Circuit-Level Proxy

A circuit-level proxy creates a conduit through which a trusted host can communicate with an untrusted one. This type of proxy does not inspect any of the traffic that it forwards, which adds very little overhead to the communication between the user and untrusted server. The lack of application awareness also allows circuit-level proxies to forward any traffic to any TCP and UDP port. The disadvantage is that traffic will not be analyzed for malicious content.

Application-Level Proxy

An application-level proxy relays the traffic from a trusted endpoint running a specific application to an untrusted endpoint. The most significant advantage of application-level proxies is that they analyze the traffic that they forward for protocol manipulation and various sorts of common attacks such as buffer overflows. Application-level proxies add overhead to using the application because they scrutinize the traffic that they forward.

Web proxy servers are a very popular example of application-level proxies. Many organizations place one at their Internet gateway and configure their users’ web browsers to use the web proxy whenever they browse an external web server (other controls are implemented to prevent users from bypassing the proxy server). The proxies typically include required user authentication, inspection of URLs to ensure that users do not browse inappropriate sites, logging, and caching of popular webpages. In fact, web proxies for internal users are one of the prime manners in which acceptable usage policies can be enforced because external sites can be blacklisted by administrators and logs of user traffic kept for later analysis if required for evidentiary purposes.

Network Intrusion Detection/Prevention Systems

It is important for the SSCP to understand the difference between intrusion detection and intrusion prevention. Intrusion detection technologies are considered “passive,” only recording activity and potentially alerting an administrator that something unusual is occurring. Intrusion prevention technologies are considered “active,” meaning that they will do everything that an intrusion detection device does, but they will also have the capacity to react to the unusual or suspicious behavior in preprogrammed ways. It is important for the SSCP to carefully consider which technology or mix of technologies will be the best for their organization’s security needs.

Port Scanning

Port scanning is the act of probing for TCP services on a machine. It is performed by establishing the initial handshake for a connection. Although not in itself an attack, it allows an attacker to test for the presence of potentially vulnerable services on a target system. Port scanning can also be used for fingerprinting an operating system by evaluating its response characteristics, such as timing of a response, and details of the handshake. Protection from port scanning includes restriction of network connections; e.g., by means of a host-based or network-based firewall or by defining a list of valid source addresses on an application level.
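A basic TCP connect scan can be sketched with the standard `socket` module. For a safe, self-contained demonstration, the example probes a throwaway listener on the loopback interface rather than a remote host; never scan systems you are not authorized to test.

```python
import socket

def scan_ports(host: str, ports) -> list:
    """TCP connect scan: a completed three-way handshake marks a port open."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:   # 0 means connect() succeeded
                open_ports.append(port)
    return open_ports

# Demonstrate against a throwaway listener on localhost.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))          # the OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

assert port in scan_ports("127.0.0.1", [port])
listener.close()
```

Stealthier techniques (SYN, FIN, NULL, XMAS scans) avoid completing the handshake but require raw-socket privileges.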

FIN, NULL, and XMAS Scanning

In FIN scanning, a stealth scanning method, a request to close a connection is sent to the target machine. If no application is listening on that port, a TCP RST or an ICMP packet will be sent in response. This attack commonly works only on UNIX machines; Windows machines behave in a slightly different manner, deviating from RFC 793 by always responding to a FIN packet with an RST, which renders recognition of open ports impossible and makes them not susceptible to the scan.57 Firewalls that put a system into stealth mode (i.e., suppressing system responses to FIN packets) are available. In NULL scanning, no flags are set on the initiating TCP packet; in XMAS scanning, all TCP flags are set (or “lit,” as in a Christmas tree). Otherwise, these scans work in the same manner as the FIN scan.
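The flag combinations behind these scans follow directly from the TCP header's flag bits (RFC 793). A small sketch makes them concrete; note that Nmap's XMAS variant lights FIN, PSH, and URG rather than literally all six flags, which some descriptions use.

```python
# TCP flag bits from RFC 793: low byte of the header's flags field.
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

FIN_SCAN  = FIN                 # lone FIN probe
NULL_SCAN = 0x00                # no flags set at all
XMAS_SCAN = FIN | PSH | URG     # the "lit" Christmas tree (Nmap's variant)

def describe(flags: int) -> str:
    names = [n for n, bit in [("FIN", FIN), ("SYN", SYN), ("RST", RST),
                              ("PSH", PSH), ("ACK", ACK), ("URG", URG)]
             if flags & bit]
    return "+".join(names) or "NULL"

assert describe(FIN_SCAN) == "FIN"
assert describe(NULL_SCAN) == "NULL"
assert describe(XMAS_SCAN) == "FIN+PSH+URG"
assert XMAS_SCAN == 0x29
```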

TCP Sequence Number Attacks

To detect and correct the loss of data packets, TCP attaches a sequence number to each data packet that is transmitted. If a transmission is not reported back as successful, the packet will be retransmitted. By eavesdropping on traffic, these sequence numbers can be predicted, and fake packets with the correct sequence number can be introduced into the data stream by a third party. This class of attacks can, for instance, be used for session hijacking. Protection mechanisms against TCP sequence number attacks have been proposed based on better randomization of sequence numbers, as described in RFC 1948.58
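The RFC 1948 idea can be sketched as follows: the initial sequence number combines a clock with a keyed hash of the connection's endpoints, so ISNs are unpredictable across connections. The secret and hash choice here are illustrative (the RFC's examples use MD5; modern stacks use stronger keyed hashes and a CSPRNG-derived secret).

```python
import hashlib

SECRET = b"per-boot secret"   # hypothetical; a real stack draws this from a CSPRNG

def initial_sequence_number(src: str, sport: int, dst: str, dport: int,
                            clock: int) -> int:
    """RFC 1948-style ISN: clock tick plus a keyed hash of the connection
    identifiers, so each connection gets an unrelated, hard-to-guess offset."""
    ident = f"{src}:{sport}>{dst}:{dport}".encode()
    offset = int.from_bytes(hashlib.sha256(SECRET + ident).digest()[:4], "big")
    return (clock + offset) % 2**32

now = 123456789
a = initial_sequence_number("10.0.0.1", 40000, "10.0.0.2", 80, now)
b = initial_sequence_number("10.0.0.1", 40001, "10.0.0.2", 80, now)
assert a != b                 # different connections get unrelated ISNs
assert 0 <= a < 2**32
```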

Methodology of an Attack

Security attacks have been described formally as attack tree models. Attack trees are based upon the goal of the attacker, the risks to the defender, and the vulnerabilities of the defense systems. They are a specialized form of decision tree that can be used to formally evaluate system security. The following methodology describes not the attack tree itself (which is a defender’s view) but the steps that an attacker would undergo to successfully traverse the tree toward his or her target.

  1. Target Acquisition An attack usually starts with intelligence gathering and surveillance to obtain a collection of possible targets, for instance, through evaluating directory services and network scanning. It is therefore important for the security architect and the security practitioner to work together to limit information available on a network and make intelligence gathering as difficult as possible for the attacker. This would include installation of split network security zones (internal nodes are only visible on the inside of a network), network address translation, limiting access to directories of persons and assets, using hidden paths, nonstandard privileged usernames, etc. Importantly, not all of these obscurity measures have an inherent security value. They serve to slow the attacker down but will not in and of themselves provide any protection beyond this point; these measures are referred to as delaying tactics.
  2. Target Analysis In a second step, the identified target is analyzed for security weaknesses that would allow the attacker to obtain access. Depending on the type of attack, the discovery scan has already taken this into account, e.g., by scanning for servers susceptible to a certain kind of buffer overflow attack. Tools available for the target acquisition phase are generally capable of automatically performing a first-target analysis. The most effective protection that the security professional can deploy is to minimize security vulnerabilities, for instance, by applying software patches at the earliest possible opportunity and practicing effective configuration management. In addition, target analysis should be made more difficult for the attacker. For example, system administrators should minimize the system information (e.g., system type, build, and release) that an attacker could glean, making it more difficult to attack the system.
  3. Target Access In the next step, an attacker will obtain some form of access to the system. This can be access as a normal user or as a guest. The attacker could exploit known vulnerabilities using existing or common tools, or bypass technical security controls altogether by using social engineering attacks. To mitigate the risk of unauthorized access, existing user privileges need to be well managed, access profiles need to be up to date, and unused accounts should be blocked or removed. Access should be monitored, and monitoring logs need to be regularly analyzed; however, most malware will come with rootkits ready to subvert basic operating system privilege management.
  4. Target Appropriation As the last level of an attack, the attacker can then escalate his or her privileges on the system to gain system-level access. Again, exploitation of known vulnerabilities through existing or custom tools and techniques is the main technical attack vector; however, other attack vectors, such as social engineering, need to be taken into account.
Countermeasures against privilege escalation, by nature, are similar to the ones for gaining access. However, because an attacker can gain full control of a system through privilege escalation, secondary controls on the system itself (such as detecting unusual activity in log files) are less effective and reliable. Network (router, firewall, and intrusion detection system) logs can therefore prove invaluable to the security practitioner. Logs are so valuable, in fact, that an entire discipline within IT security has developed known as security event management (SEM), or sometimes security incident and event management (SIEM).

To detect the presence of unauthorized changes, which could indicate access from an attacker or backdoors into the system, the use of host-based or network-based intrusion detection systems can provide useful detection services. However, it is important to keep in mind that because an IDS relies on constant external input in the form of attack signature updates to remain effective, these systems are only as “good” as the quality and timeliness of the updates being applied to them. The output from the host-based IDS (such as regular snapshots or file hashes) needs to be stored in such a way that it cannot be overwritten from the source system in order to insure integrity.

Finally, yet importantly, the attacker may look to remotely maintain control of the system to regain access at a later time or to use it for other purposes, such as sending spam or as a stepping stone for other attacks. To such an end, the attacker could avail himself of prefabricated “rootkits” to sustain and maintain control over time. Such a rootkit will not only allow access but also hide its own existence from traditional cursory inspection methods.

Bots and botnets are responsible for most of the activity leading to unauthorized, remote control of compromised systems today. Machines that have become infected are considered to be bots: essentially, zombies controlled by shadowy entities from the dark places on the Internet. Bots and botnets are the largest source of spam email and can be coordinated by bot herders to inflict highly effective denial-of-service attacks, all without the knowledge of the system owners.

Network Security Tools and Tasks

Tools can make a security practitioner’s job easier. Whether they are aids in collecting input for risk analysis or scanners to assess how well a server is configured, tools automate processes, which saves time and reduces error. Do not, however, fall into the trap of reducing network security to merely collecting and using tools.

Intrusion Detection Systems

Intrusion detection systems (IDS) monitor activity and send alerts when they detect suspicious traffic. (See Figure 6-16.) There are two broad classifications of IDS: host-based IDS, which monitor activity on servers and workstations, and network-based IDS, which monitor network activity. Network IDS services are typically stand-alone devices or at least independent blades within network chassis. Network IDS logs would be accessed through a separate management console that will also generate alarms and alerts.

Currently, there are two approaches to the deployment and use of intrusion detection systems. An appliance on the network can monitor traffic for attacks based on a set of signatures (analogous to antivirus software), or the appliance can watch the network’s traffic for a while, learn what traffic patterns are normal, and send an alert when it detects an anomaly. Of course, the IDS can be deployed using a hybrid of the two approaches as well.

c06f016.tif

Figure 6-16: Architecture of an intrusion detection system (IDS)

Independent of the approach, how an organization uses an IDS determines whether the tool is effective. Despite its name, an IDS should not be relied upon to stop intrusions, because IDS solutions are not designed to take preventative actions as part of their response. Instead, it should send an alert when it detects interesting, abnormal traffic that could be a prelude to an attack. For example, someone in the engineering department trying to access payroll information over the network at 3 a.m. is probably very interesting and not normal. Or perhaps a sudden rise in network utilization should be noted.
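The anomaly-based approach can be sketched statistically: learn a baseline, then flag observations far outside normal variation. The sample data and threshold are invented for illustration; production systems model many features, not a single metric.

```python
from statistics import mean, stdev

def is_anomalous(history, observation, threshold=3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    above the learned baseline."""
    mu, sigma = mean(history), stdev(history)
    return observation > mu + threshold * sigma

# Baseline: hypothetical network utilization samples (Mbit/s) from the
# learning period.
baseline = [42, 40, 44, 41, 43, 39, 45, 42, 41, 43]

assert not is_anomalous(baseline, 46)   # within normal variation
assert is_anomalous(baseline, 120)      # sudden spike triggers an alert
```

Signature-based detection, by contrast, would match traffic against a database of known attack patterns, analogous to antivirus scanning.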

Security Event Management (SEM)/Security Incident and Event Management (SIEM)

Security Event Management (SEM)/Security Incident and Event Management (SIEM) is a solution that involves harvesting logs and event information from a variety of different sources on individual servers or assets, and analyzing it as a consolidated view with sophisticated reporting. Similarly, entire IT infrastructures can have their logs and event information centralized and managed by large-scale SEM/SIEM deployments. SEM/SIEM will not only aggregate logs but will perform analysis and issue alerts (email, pager, audible, etc.) according to suspicious patterns.
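The core correlation step can be sketched in a few lines: aggregate events from many hosts, then alert on a pattern no single host's log would reveal. The log lines, regex, and threshold are invented for the example.

```python
from collections import Counter
import re

# Hypothetical consolidated log lines gathered from several hosts.
LOGS = [
    "host1 sshd: Failed password for root from 203.0.113.9",
    "host2 sshd: Failed password for admin from 203.0.113.9",
    "host1 sshd: Accepted password for alice from 10.0.0.5",
    "host3 sshd: Failed password for root from 203.0.113.9",
]

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def correlate(lines, alert_threshold=3):
    """Count failed logins per source across all hosts and flag offenders."""
    failures = Counter(m.group(1) for line in lines
                       if (m := FAILED.search(line)))
    return [src for src, n in failures.items() if n >= alert_threshold]

assert correlate(LOGS) == ["203.0.113.9"]
```

Spread across three hosts, each individual log shows only one failure; only the consolidated view crosses the alert threshold.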

Aggregation and consolidation of logs and events will also potentially require additional network resources to transfer log and event data from distinct servers and arrays to a central location. This transfer will also need to occur in as close to real time as possible if the security information is to possess value beyond forensics. SEM/SIEM systems can benefit immensely from Security Intelligence Services (SIS). The output from security appliances is esoteric and lacks the real-world context required for predictive threat assessments. It falls short of delivering consistently relevant intelligence to businesses operating in a competitive marketplace. SIS uses all-source collection and analysis methods to produce and deliver precise and timely intelligence guiding not only the business of security but the security of the business. SIS services are built upon accurate security metrics (cyber and physical), market analysis, and technology forecasting, and are correlated to real-world events, giving business decision makers time and precision. SIS provides upstream data from proactive cyber defense systems monitoring darkspace and darkweb.

Scanners

A network scanner can be used in several ways by the security practitioner:

  1. Discovery Scanning Discovery scanning can be performed on devices and services on a network, for instance, to establish whether new or unauthorized devices have been connected. Conversely, this type of scan can be used for intelligence gathering on potentially vulnerable services. A discovery scan can be performed with very simple methods, for example, by sending a ping packet (ping scanning) to every address in a subnet. More sophisticated methods will also discover the operating system and services of a responding device.
  2. Compliance Scanning Compliance scanning can be used to test for compliance with a given policy, for instance, to ensure certain configurations (deactivation of services) have been applied. A compliance scan can be performed either from the network or on the device; for instance, as a security health check. If performed on the network, it will usually include testing for open ports and services on the device.
  3. Vulnerability Scanning and Penetration Testing Vulnerability scanning can be used to test for vulnerabilities, for instance, as part of a penetration test but also in preparation for an attack. A vulnerability scan tests for vulnerability conditions generally by looking at responding ports and applications on a given server and determining patch levels. A vulnerability scan will infer a threat based upon what might be available as an avenue of attack. When new vulnerabilities have been published or are exploited, targeted scanner tools often become available from software vendors, antivirus vendors, independent vendors, or the open-source community. Care must be taken when running scans in a corporate environment so that the load does not disrupt operations or cause applications and services to fail.
  4. A penetration test is the follow-on step after a vulnerability scan, where the observed vulnerabilities are actually exploited, or exploitation is attempted. It is often the case that an inferred vulnerability, when tested, is not actually a vulnerability. For instance, a service might be open on a port and appear unpatched, but upon testing it turns out that the security administrator has implemented a secure configuration that mitigates the vulnerabilities. Penetration tests always carry an elevated risk of bringing down the asset against which they are being performed, and for this reason they should never be conducted on operational systems unless the risks associated with the tests have been assessed and accepted by the system’s owner. In addition, a clear waiver from the asset owner should be obtained prior to testing.

The following tools are commonly used scanning tools that are worth understanding:

  • Nessus—A vulnerability scanner
  • Nmap—A discovery scanner that will allow for determining the services running on a machine, as well as other host characteristics, such as a machine’s operating system
Network Taps

A network tap or SPAN (switch port analyzer) port is a mechanism that can selectively copy all data flowing through a network in real time for analysis and storage. (See Figure 6-17.) Network taps may be deployed for the purposes of network diagnostics and maintenance or for purposes of forensic analysis related to incidents or suspicious events. Network taps will generally be fully configurable and will function at all layers from the physical layer up. In other words, a tap should be capable of copying everything from layer 1 (Ethernet, for instance) upward, including all payload information within the packets. Additionally, a tap can be configured to vacuum up every single packet of data, or perhaps just focus on selected application traffic from selected sources.

c06f017.tif

Figure 6-17: A network tap is a device that simply sits on a network in “monitor” or “promiscuous” mode and makes a copy of all the network traffic, possibly right down to the Ethernet frames.

IP Fragmentation Attacks and Crafted Packets

The Internet Protocol (IP) uses datagram fragmentation to split up the pieces of data being transmitted between two network interfaces so that they are sized at or under the maximum transmission unit (MTU) value set for the network. When packets are fragmented, they can be used to hide attack data, obfuscating the attack from the monitoring devices designed to protect the network. The following sections discuss the details of these attacks and how they may be addressed.

Teardrop

In this attack, IP packet fragments are constructed so that the target host calculates a negative fragment length when it attempts to reconstruct the packet. If the target host’s IP stack does not ensure that fragment lengths are set within appropriate boundaries, the host could crash or become unstable. This problem is easily fixed with a vendor patch.
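The vendor fix amounts to bounds-checking fragment parameters before reassembly. A simplified sketch of such a sanity check (field names and the flat offset/length model are simplifications of real IP fragmentation, which works in 8-byte units):

```python
def fragment_ok(offset: int, length: int, total: int) -> bool:
    """Reject fragments whose computed extent would be negative or
    would run outside the declared datagram size."""
    end = offset + length
    return 0 <= offset and 0 < length and end <= total

assert fragment_ok(0, 512, 1500)
assert not fragment_ok(24, -8, 1500)     # negative length: teardrop-style input
assert not fragment_ok(1400, 512, 1500)  # runs past the declared datagram size
```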

Overlapping Fragment Attack

Overlapping fragment attacks are used to subvert packet filters that only inspect the first fragment of a fragmented packet. The technique involves sending a harmless first fragment, which will satisfy the packet filter. Other packets follow that overwrite the first fragment with malicious data, thus resulting in harmful packets bypassing the packet filter and being accepted by the victim host. A solution to this problem is for TCP/IP stacks not to allow fragments to overwrite each other.
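That solution can be sketched as a reassembler that tracks which bytes have already been filled and refuses any fragment that would overwrite them. The byte-granular model here is a simplification of real IP reassembly.

```python
def reassemble(fragments, total_len: int) -> bytes:
    """Rebuild a datagram, refusing any fragment that overlaps bytes
    already written by an earlier fragment."""
    buf = bytearray(total_len)
    filled = [False] * total_len
    for offset, data in fragments:
        span = range(offset, offset + len(data))
        if any(filled[i] for i in span):
            raise ValueError("overlapping fragment rejected")
        for i, byte in zip(span, data):
            buf[i] = byte
            filled[i] = True
    return bytes(buf)

assert reassemble([(0, b"GET "), (4, b"/ok")], 7) == b"GET /ok"
try:
    # Second fragment tries to overwrite bytes 4-6 of the harmless first one.
    reassemble([(0, b"GET /ok"), (4, b"/../etc")], 11)
    raise AssertionError("overlap should have been rejected")
except ValueError:
    pass
```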

Source Routing Exploitation

Instead of only permitting routers to determine the path a packet takes to its destination, IP allows the sender to explicitly specify the path. An attacker can abuse source routing so that the packet will be forwarded between network interfaces on a multi-homed computer that is configured not to forward packets. This could allow an external attacker access to an internal network. Source routing is specified by the sender of an IP datagram, whereas the routing path would normally be left to the router to decide. The best solution is to disable source routing on hosts and to block source-routed packets.

Smurf and Fraggle Attacks

Both attacks use broadcasts to create denial-of-service attacks. A Smurf attack misuses the ICMP echo request to create denial-of-service attacks. In a Smurf attack, the intruder sends an ICMP echo request with a spoofed source address of the victim. The packet is sent to a network’s broadcast address, which forwards the packet to every host on the network. Because the ICMP packet contains the victim’s host as the source address, the victim will be overwhelmed by the ICMP echo replies, causing a denial-of-service attack.

The Fraggle attack uses UDP instead of ICMP. The attacker sends a UDP packet on port 7 with a spoofed source address of the victim. Like the Smurf attack, the packet is sent to a network’s broadcast address, which will forward the packet to all of the hosts on the network. The victim host will be overwhelmed by the responses from the network.

NFS Attacks

The first step in setting up an NFS connection will be the publication (exporting) of file system trees from the server. These trees can be arbitrarily chosen by the administrator. Access privileges are granted based upon the client IP address and directory tree. Within the tree, the privileges of the server file system will be mapped to client users.

Several points of risk exist:

  • Export of parts of the file system that were not intended for publication or with inappropriate privileges. This can be done by accident or through the existence of UNIX file system hard links (which can be generated by the user). This is of particular concern if parts of the server root file system are made accessible. One can easily imagine scenarios where a password file can be accessed and the encrypted passwords contained therein are subsequently broken by an off-the-shelf tool. Regular review of exported file system trees is an appropriate mitigation.
  • Using an unauthorized client. Because NFS identifies the client by its IP address or (indirectly) a host name, it is relatively easy to use a different client than the authorized one, by means of IP spoofing or DNS spoofing. At the very least, resolution of server host names should therefore happen via a file (/etc/hosts on UNIX), not through DNS.
  • Incorrect mapping of user IDs between server and client. Any machine not controlled by the server administrator can be used to propagate an attack, as NFS relies on user IDs as the only form of authorization credential. An attacker, having availed himself of administrative access to a client, could generate arbitrary user IDs to match those on the server. It is paramount that user IDs on server and client are synchronized; e.g., through the use of NIS/NIS+.
  • Sniffing and access request spoofing. Because NFS traffic, by default, is not encrypted, it is possible to intercept it, either by means of network sniffing or by a man-in-the-middle attack. Because NFS does not authenticate each RPC call, it is possible to access files if the appropriate access token (file handle) has been obtained, for instance, through sniffing. NFS itself does not offer appropriate mitigation; however, the use of secure NFS may. 59
  • SetUID files. The directories accessed via NFS are used in the same way local directories are. On UNIX systems, files with the SUID bit can therefore be used for privilege escalation on the client. NFS should therefore be configured in such a way as to not respect SUID bits.

Network News Transport Protocol (NNTP) Security

From a security perspective, the main shortcoming in Network News Transport Protocol (NNTP) is authentication. One of the earlier solutions users found to this problem was signing messages with Pretty Good Privacy (PGP). However, this did not prevent impersonation or faked identities, as digital signatures were not a requirement, and indeed would be unsuitable for the repudiation problem implied. To make matters worse, NNTP offers a cancellation mechanism to withdraw articles already published. Naturally, the same authentication weakness applies to the control messages used for these cancellations, allowing users with even moderate skills to delete messages at will.

Finger User Information Protocol

Finger is an identification service that allows a user to obtain information about another user’s last login time and whether he or she is currently logged into a system. The “fingered” user can choose to have information from two files in their home directory (the .project and .plan files) displayed. For all practical purposes, the Finger protocol has become obsolete. Its use should be restricted to situations where no alternatives are available.

Network Time Protocol (NTP)

Network Time Protocol (NTP) synchronizes computer clocks in a network. This can be extremely important for operational stability (for instance, under NIS) but also for maintaining consistency and coherence of audit trails, such as in log files. A variant of NTP exists in Simple Network Time Protocol (SNTP), offering a less resource intensive but also less exact form of synchronization. From a security perspective, our main objective with NTP is to prevent an attacker from changing time information on a client or a whole network by manipulating its local time server. NTP can be configured to restrict access based upon IP address. From NTP version 3 onward, cryptographic authentication has become available, based upon symmetric encryption, but is to be replaced by public key cryptography in NTP version 4.

To make a network robust against accidental or deliberate timing inaccuracies, a network should have its own time server and possibly a dedicated, highly accurate clock. As a standard precaution, a network should never depend on one external time server alone, but it should synchronize with several trusted time sources. Thus, manipulation of a single source will have no immediate effect. To detect de-synchronization, one can use standard logging mechanisms with NTP to ensure synchronicity of time stamping.
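Comparing several sources to detect a manipulated one is straightforward to automate. A minimal sketch, assuming clock offsets (in seconds) have already been measured against each configured server; the tolerance value is an arbitrary illustration:

```python
import statistics

def suspect_sources(offsets, tolerance_s=0.5):
    """Flag time sources whose reported clock offset (seconds) deviates from
    the median of all configured sources by more than the tolerance: a crude
    way to spot a single manipulated server among several trusted ones."""
    median = statistics.median(offsets.values())
    return [name for name, off in offsets.items()
            if abs(off - median) > tolerance_s]
```

With three or more independent sources, a single falsified server stands out against the median and can be excluded before it skews the local clock.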

DoS/DDoS

There are a number of different types of attacks that can be launched against a network, many of which we have already discussed. A special category is the denial of service and distributed denial of service attacks. Denial of service (DoS) attacks are carried out when an attacker sets out to attack a system through a set of activities designed to culminate in knocking the target offline and denying it ongoing access to the network for the duration of the attack. The key is that this attack is carried out by a single attacker. A distributed denial of service (DDoS) attack is designed to achieve the same end result as a DoS attack, but it is perpetrated by multiple attackers simultaneously coordinating their efforts.

Denial-of-Service Attack (DoS)

The easiest attack to carry out against a network, or so it may seem, is to overload it through excessive traffic or traffic that has been “crafted” to confuse the network into shutting down or slowing to the point of uselessness. Countermeasures include, but are not limited to, multiple layers of firewalls, careful filtering on firewalls, routers and switches, internal network access controls (NAC), redundant (diverse) network connections, load balancing, reserved bandwidth (quality of service, which would at least protect systems not directly targeted), and blocking traffic from an attacker on an upstream router. Bear in mind that malicious agents can and will shift IP address or DNS name to sidestep the attack, as well as employing potentially thousands of unique IP addresses during the execution of an attack. Enlisting the help of upstream service providers and carriers is ultimately the most effective countermeasure, especially if the necessary agreements and relationships have been established proactively or as part of agreed service levels.

It is instructive to note that many protocols contain basic protection from message loss that would at least mitigate the effects of denial-of-service attacks. This starts with TCP managing packet loss within certain limits, and it ends with higher level protocols, such as SMTP, that will provide robustness against temporary connection outages (store and forward).60
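Several of the countermeasures listed above (traffic shaping, reserved bandwidth, quality of service) reduce to rate limiting, classically implemented as a token bucket. A simplified sketch (one token per packet; a real device would meter bytes and support per-class burst sizes):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: tokens accrue at `rate` per second up to
    `capacity`; each packet spends one token, and packets that arrive with
    the bucket empty are dropped."""
    def __init__(self, rate, capacity, now=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic() if now is None else now
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The `capacity` parameter tolerates short legitimate bursts, while a sustained flood quickly empties the bucket and its excess packets are dropped, protecting the systems behind the limiter.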

Distributed Denial of Service (DDoS) Types

There are several tactics used to disrupt or deny service to a computer, such as congesting network traffic to a computer or attempting to overwhelm a server at the application level. Distributed denial-of-service (DDoS) attacks can be broadly divided into three types:

  1. Volume-Based Attacks—Includes UDP floods, ICMP floods, and other spoofed-packet floods. The attack’s goal is to saturate the bandwidth of the attacked site, and magnitude is measured in bits per second (bps).
  2. Protocol Attacks—Includes SYN floods, fragmented packet attacks, Ping of Death, Smurf DDoS, and more. This type of attack consumes actual server resources, or those of intermediate communication equipment, such as firewalls and load balancers, and is measured in packets per second.
  3. Application Layer Attacks—Includes Slowloris61, Zero-day DDoS attacks, DDoS attacks that target Apache, Windows, or OpenBSD vulnerabilities, and more. Composed of seemingly legitimate and innocent requests, these attacks aim to crash the web server, and the magnitude is measured in requests per second.

Security practitioners need to be aware of the impact of DDoS attacks against their networks. Perhaps because they are a convenient attack vector requiring little specialized expertise, DDoS attacks are increasing in scope to include bigger targets and more packets. During Q1 2013, the average DDoS attack bandwidth totaled 48.25 Gbps, a 718% increase over the previous quarter—and the average packet-per-second rate reached 32.4 million. “DDoS challenges have spiked for enterprises in 2013,” noted Lawrence Orans of the research firm Gartner in a recent report. “Gartner estimates that its DDoS inquiry level quadrupled from September 2012 through September 2013. An increase of higher-volume and application-based DDoS attacks on corporate networks will force Chief Information Security Officers (CISOs) and security teams to find new, proactive solutions for reducing downtime.” 62

Some examples of DDoS attacks include the following:

  • The Spamhaus DDoS attack in March of 2013. This was not your typical hacktivist DDoS attack; rather it was a massive, 300 gigabits-per-second traffic attack against volunteer spam filtering organization Spamhaus, which spread to multiple Internet exchanges and ultimately slowed traffic for users mainly in Europe at the end of March 2013. The attackers abused improperly configured or default-state DNS servers, also known as open DNS resolvers, in the attacks. This allowed for a bigger bandwidth attack with fewer machines since DNS servers are large and run on high-speed Internet connections, a recipe that led to the record-breaking DDoS bandwidth levels. Security experts estimate that there are around 21 million of these servers running on the Internet. Properly configured DNS servers only accept traffic from their own IP space or in the case of ISPs, from their customers, whereas “open” DNS resolvers can take requests from anyone on the Internet. This makes it possible to send these servers DNS queries from a spoofed address. The attackers basically sent traffic purportedly from Spamhaus, so when the weak DNS servers returned their DNS resolver responses, they all bombarded Spamhaus.63
  • In April of 2013, Wordpress came under attack, as a DDoS botnet of almost 100,000 machines was used to brute force account passwords.
  • In May of 2013, a large DNS reflection denial of service (DrDoS) attack was targeted against a real-time financial exchange platform. The attack peaked at 167 Gigabits per second (Gbps). In this kind of attack, the attacker makes large numbers of spoofed queries against many public DNS servers. The source IP address is forged to appear as the target of the attack. When a DNS server receives the forged request, it replies, but the reply is directed to the forged source address. This is the reflection component. The target of the attack receives replies from all the DNS servers that are used. This type of attack makes it very difficult to identify the malicious sources. If the queries, which are small packets, generate larger responses, then the attack is said to have an amplifying characteristic.
  • On August 25, 2013, China faced the largest distributed denial-of-service (DDoS) attack in its history, leading to a two-to-four hour shutdown of swaths of IP addresses using .cn, China’s country code top-level domain. The China Internet Network Information Center (CNNIC), which maintains the registry for .cn, issued an apology and a notice that at 2 and 4 in the morning early Sunday local time, its National Nodes DNS was hit with two big attacks.64 CloudFlare CEO Matthew Prince told the Wall Street Journal that his company saw a 32% drop in traffic for the thousands of Chinese domains on the company’s network during the attack period, compared with the same timeframe 24 hours prior.
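The common thread in the reflection incidents above is the amplification factor: the ratio of reflected response bytes to spoofed query bytes. A trivial illustration (the 64-byte query and 3,000-byte response figures below are hypothetical examples, not measurements from these attacks):

```python
def amplification_factor(query_bytes, response_bytes):
    """Bytes delivered to the victim per byte the attacker spends on the
    spoofed query. Factors well above 1 are what make open resolvers such
    attractive reflectors."""
    return response_bytes / query_bytes

# A small "ANY" query drawing a large response multiplies the attacker's
# bandwidth roughly 47-fold (hypothetical figures).
print(amplification_factor(64, 3000))
```

This is why closing open resolvers and deploying anti-spoofing ingress filtering are the two countermeasures most often cited for reflection attacks.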

Countermeasures are similar to those of conventional denial-of-service attacks, but simple IP or port filtering might not work.

SYN Flooding

A SYN flood attack is a denial-of-service attack against the initial handshake in a TCP connection. Many new connections from faked, random IP addresses are opened in short order, overloading the target’s connection table.

Countermeasures include tuning of operating system parameters such as the size of the backlog table according to vendor specifications. Another solution, which requires modification to the TCP/IP stack, is SYN cookies, which alter TCP sequence numbers in a way that makes forged packets immediately recognizable.65

Daniel J. Bernstein, the primary inventor of this approach, defines them as “particular choices of initial TCP sequence numbers by TCP servers. The difference between the server’s initial sequence number and the client’s initial sequence number is:

  • Top 5 bits—t mod 32, where t is a 32 bit time counter that increases every 64 seconds;
  • Next 3 bits—An encoding of an MSS selected by the server in response to the client’s MSS;
  • Bottom 24 bits—A server-selected secret function of the client IP address and port number, and the server IP address and port number.

A server that uses SYN cookies does not have to drop connections when its SYN queue fills up. Instead it sends back a SYN+ACK, exactly as if the SYN queue had been larger. When the server receives an ACK, it checks that the secret function works for a recent value of t, and then rebuilds the SYN queue entry from the encoded MSS.”
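Bernstein’s layout can be sketched directly. The hash function, secret key, and MSS table below are illustrative stand-ins, not the values any real TCP stack uses:

```python
import hashlib
import struct

MSS_TABLE = [536, 1300, 1440, 1460]  # MSS values the 3-bit field can encode (illustrative)

def secret_hash(src_ip, src_port, dst_ip, dst_port, t, key=b"server-secret"):
    # Stand-in for the server-selected secret function over the connection
    # 4-tuple and the time counter t; returns a 24-bit value.
    data = struct.pack("!4sH4sHI", src_ip, src_port, dst_ip, dst_port, t)
    return int.from_bytes(hashlib.sha256(key + data).digest()[:3], "big")

def make_cookie(client_isn, four_tuple, t, mss_index):
    # Server ISN per Bernstein's layout: top 5 bits are t mod 32, next
    # 3 bits an MSS encoding, bottom 24 bits the keyed hash.
    diff = ((t % 32) << 27) | (mss_index << 24) | secret_hash(*four_tuple, t)
    return (client_isn + diff) % 2**32

def check_cookie(server_isn, client_isn, four_tuple, t_now, max_age=2):
    # On the final ACK, rebuild the queue entry: recover the MSS index and
    # verify the hash against recent values of the time counter t.
    diff = (server_isn - client_isn) % 2**32
    mss_index = (diff >> 24) & 0x7
    for t in range(t_now, t_now - max_age - 1, -1):
        expected = ((t % 32) << 27) | (mss_index << 24) | secret_hash(*four_tuple, t)
        if diff == expected:
            return MSS_TABLE[mss_index]  # valid cookie: accept the connection
    return None  # stale or forged ACK: drop it
```

Because the server keeps no per-connection state until a valid ACK arrives, an attacker’s flood of unanswered SYNs costs the server nothing but the outgoing SYN+ACKs.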

One of the most successful variants of SYN flooding can be carried out by the botnets discussed earlier. Botnets have the ability to direct potentially thousands of SYN requests to hosts at the same time, overwhelming not only the hosts but also the network connections that they rest upon. Under such circumstances, there are no host-configuration countermeasures available because a host without a network is as good as dead anyway. While SYN flooding might be the mode of attack, it is not being employed in any cunning manner with spoofed IP addresses from possibly a single malicious host; it is being applied as a pure brute-force form of attack.

Countermeasures include protecting the operating system through securing its network stack. This is not normally something the user or owner of a system has any degree of control over; it is a task for the vendor.

Finally, the network needs to be included in a corporation’s disaster recovery and business contingency plans. For local area networks, one may set high recovery objectives and provide appropriate contingency, based upon the fact that any recovery of services is likely to be useless without at least a working local area network (LAN) infrastructure. As wide area networks are usually outsourced, contingency measures might include acquisition of backup lines from a different provider, procurement of telephone or digital subscriber loop (DSL) lines, etc.

Spoofing

Spoofing is defined as acting with the intent of impersonating someone or something, with the goal of attempting to get a target to accept you as the legitimate party, even though you are not. When spoofing is used by a bad actor to attempt to trick a target machine on the corporate network into accepting the traffic being sent to it from the machine that is controlled by the attacker, this can present a challenge to confidentiality, integrity, and availability. The SSCP needs to understand the intricacies of spoofing and the types of attacks that may be used with spoofing to gain access to the network.

IP Address Spoofing and SYN-ACK attacks

Packets are sent with a bogus source address so that the victim will send a response to a different host. Spoofed addresses can be used to abuse the three-way handshake that is required to start a TCP session. Under normal circumstances, a host offers to initiate a session with a remote host by sending a packet with the SYN option. The remote host responds with a packet with the SYN and ACK options. The handshake is completed when the initiating host responds with a packet with the ACK option.

An attacker can launch a denial-of-service attack by sending the initial packet with the SYN option with a source address of a host that does not exist. The victim will respond to the forged source address by sending a packet with the SYN and ACK options, and then wait for the final packet to complete the handshake. Of course, that packet will never arrive because the victim sent the packet to a host that does not exist. If the attacker sends a storm of packets with spoofed addresses, the victim may reach the limit of uncompleted (half-open) three-way handshakes and refuse other legitimate network connections.

The above scenario takes advantage of a protocol flaw. To mitigate the risk of a successful attack, vendors have released patches that reduce the likelihood of the limit of uncompleted handshakes being reached. In addition, security devices, such as firewalls, can block packets that arrive from an external interface with a source address from an internal network.
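The firewall rule described here is ingress filtering in the spirit of BCP 38. A minimal sketch (the internal address ranges are placeholders for a site’s real allocation):

```python
import ipaddress

# Placeholder internal ranges; a real deployment lists its own allocations.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("192.168.0.0/16")]

def drop_spoofed(src_ip, arrived_on_external):
    """Ingress filtering in the spirit of BCP 38: a packet arriving on the
    external interface must never claim an internal source address."""
    src = ipaddress.ip_address(src_ip)
    return arrived_on_external and any(src in net for net in INTERNAL_NETS)
```

The mirror-image egress rule (internal hosts must not emit packets with external source addresses) keeps a compromised internal host from being used to spoof others.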

Email Spoofing

As SMTP does not possess an adequate authentication mechanism, email spoofing is extremely simple. The most effective protection against this is a social one, whereby the recipient confirms or simply ignores implausible email. Spoofing email sender addresses can be done with a simple TELNET command to port 25 of a mail server and by issuing a number of SMTP commands. Email spoofing is frequently used as a means to obfuscate the identity of a sender in spamming, where the purported sender of a spam email is in fact another victim of spam, whose email address has been harvested by or sold to a spammer.
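The TELNET session amounts to typing a handful of SMTP commands; nothing in the base protocol checks that MAIL FROM names the real sender. A sketch that merely assembles that command sequence (illustrative addresses; no network activity):

```python
def spoofed_smtp_dialogue(fake_sender, recipient, body):
    """The command sequence an attacker could type over a raw TELNET session
    to port 25. Basic SMTP never verifies that MAIL FROM (or the From:
    header) names the real sender. Illustrative only; no network I/O."""
    return [
        "HELO attacker.example",            # any hostname is accepted
        f"MAIL FROM:<{fake_sender}>",       # unverified envelope sender
        f"RCPT TO:<{recipient}>",
        "DATA",
        f"From: {fake_sender}",             # unverified header sender
        "",
        body,
        ".",                                # lone dot ends the message
        "QUIT",
    ]
```

Every line here is accepted at face value by a plain SMTP server, which is precisely the authentication gap the social countermeasure has to compensate for.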

DNS Spoofing

To resolve a domain name query, such as mapping a web server address to an IP address, the user’s workstation will in turn have to undertake a series of queries through the Domain Name System hierarchy. Such queries can be either recursive (a name server receiving a request will forward it and return the resolution) or iterative (a name server receiving a request will respond with a reference).

An attacker aiming to poison a DNS server’s (name server) cache (information related to previous queries, which is stored for reuse in future queries for speed and efficiency) by injecting fake records, and thereby falsifying responses to client requests, will need to send a query to this very name server. The attacker now knows that the name server will shortly send out a query for resolution.

In the first case, the attacker has sent a query for a domain whose primary name server he controls. The response to this query will contain additional information that was not originally requested, but which the target server will now cache. A second, similar method can also be used with iterative queries: using IP spoofing, the attacker sends a response to his own query before the authoritative (correct) name server has a chance to respond.

In both cases, the attacker has used an electronic conversation to inject false information into the name server’s cache. Not only will this name server now use the cached information, but the false information will propagate to other servers, making inquiries to this one. Due to the caching nature of DNS, attacks on DNS servers as well as countermeasures always have certain latency, determined by the configuration of a (domain) zone.

There are two principal vulnerabilities here, both inherent in the design of the DNS protocol: It is possible for a DNS server to respond to a recursive query with information that was not requested, and the DNS server will not authenticate information. Approaches to address or mitigate this threat have only been partly successful.

Later versions of DNS server software are programmed to ignore responses that do not correspond to a query. Authentication has been proposed, but attempts to introduce stronger (or even “any”) authentication into DNS (for instance, through the use of DNSSEC) have not found wide acceptance. Authentication services have been delegated upward to higher protocol layers. Applications in need of guaranteeing authenticity cannot rely on DNS to provide such, but will have to implement a solution themselves.

The ultimate solution to DNS security issues for many organizations is to establish DNS servers dedicated to their domains and vigorously monitor them. An “internal” DNS server will also be established, which only accepts queries from internal networks and users, and therefore it is considered to be substantially more difficult for outsiders to compromise and use as a staging point for penetrating internal networks.

Manipulation of DNS Queries

Technically, the following two techniques are only indirectly related to DNS weaknesses. However, it is worth mentioning them in the context of DNS because they seek to manipulate name resolution in other ways.

Pharming is the manipulation of DNS records; for instance, through the “hosts” file on a workstation. A hosts file (/etc/hosts on many UNIX machines, C:\Windows\System32\drivers\etc\hosts on a Windows machine) is the resource first queried before a DNS request is issued. It will always contain the mapping of the host name localhost to the IP address 127.0.0.1 (loopback interface, as defined in RFC 3330) and potentially other hosts.66 A virus or malware may add addresses of antivirus software vendors with invalid IP addresses to the hosts file to prevent download of virus pattern files. Alternatively, Internet banking sites might have their IP addresses substituted for rogue, impostor sites, which will attempt to trick the user into providing login information. A further form of DNS pharming is to compromise a DNS server itself and thereby redirect all users of the DNS server to impostor websites even though their workstations may be free from compromise.
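Because the hosts file is consulted before DNS, auditing it for unexpected entries is a simple pharming check. A minimal sketch (the watched-domain list and sample content below are hypothetical):

```python
def parse_hosts(text):
    """Parse hosts-file content into a {hostname: ip} mapping, skipping
    blank lines and comments."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            mapping[name] = ip
    return mapping

def suspicious_entries(text, watched_domains):
    """Flag watched hostnames (e.g., banking or AV-update sites) that a
    pharming attack may have redirected via the local hosts file."""
    mapping = parse_hosts(text)
    return {h: ip for h, ip in mapping.items() if h in watched_domains}
```

Any watched name appearing in the hosts file at all is worth investigating, since legitimate resolution for such sites should normally go through DNS.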

Social engineering techniques will not try to manipulate a query on a technical level but can trick the user into misinterpreting a DNS address that is displayed to him in a phishing email or in his web browser address bar. One way to achieve this in email or hypertext markup language (HTML) documents is to display a link in text where the actual target address is different from what is displayed. Another way to achieve this is the use of non-ASCII character sets (for instance, Unicode—ISO/IEC 10646:2012—characters) that closely resemble ASCII (i.e., Latin) characters to the user.67 This may become a popular technique with the popularization of internationalized domain names.

Information Disclosure

Smaller corporate networks do not split naming zones; i.e., names of hosts that are accessible only from an intranet are visible from the Internet. Although knowing a server name will not enable anyone to access it, this knowledge can aid and facilitate preparation of a planned attack as it provides an attacker with valuable information on existing hosts (at least with regard to servers), network structure, and, for instance, details such as organizational structure or server operating systems (if the OS is part of the host name, etc.).

An organization should therefore operate split DNS zones wherever possible and refrain from using revealing naming conventions for their machines. In addition, a domain registrar’s database of administrative and billing domain contacts (whois database) can be an attractive target for information and email harvesting.

Session Hijack

Session hijacking is the act of unauthorized insertion of packets into a data stream. It is normally based on sequence number attacks, where sequence numbers are either guessed or intercepted. Different types of session hijacking exist:

  • IP Spoofing—Based on a TCP sequence number attack, the attacker would insert packets with a faked sender IP address and a guessed sequence number into the stream. The attacker would not be able to see the response to any commands inserted.
  • Man-in-the-Middle Attack—The attacker would sniff or intercept packets, removing legitimate packets from the data stream and replacing them with his own. In fact, both sides of a communication would then communicate with the attacker instead of each other.

Countermeasures against IP spoofing can be executed at layer 3 (see the section “IP Address Spoofing and SYN-ACK Attacks”). As TCP sessions only perform an initial authentication, application layer encryption can be used to protect against man-in-the-middle attacks.

SYN Scanning

As traditional TCP scans became widely recognized and were blocked, various stealth scanning techniques were developed. In TCP half scanning (also known as TCP SYN scanning), no complete connection is opened; instead, only the initial steps of the handshake are performed. This makes the scan harder to recognize; for instance, it would not show up in application log files. However, it is possible to recognize and block TCP SYN scans with an appropriately equipped firewall.
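One way an appropriately equipped device recognizes such scans is by counting, per source, connections that receive a SYN but never the completing ACK. A simplified sketch over observed (source, port, flags) events; the threshold is an arbitrary illustration:

```python
from collections import defaultdict

def half_open_sources(events, threshold=10):
    """Given observed (src_ip, dst_port, tcp_flags) events, count SYNs from
    each source that were never followed by the connection-completing ACK,
    and report sources exceeding the threshold: the signature of a SYN
    (half-open) scan. Simplified; a real sensor would track per-connection
    state and timeouts."""
    pending = defaultdict(int)
    for src, port, flags in events:
        if flags == "SYN":
            pending[src] += 1
        elif flags == "ACK" and pending[src] > 0:
            pending[src] -= 1
    return [src for src, n in pending.items() if n >= threshold]
```

A legitimate client completes its handshakes and nets out to zero; a scanner leaves a trail of half-open attempts across many ports that a firewall or IDS can block.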

Wireless Technologies

The security practitioner needs to understand the common wireless technologies, networks, and methods. With that foundation, it is easier to understand common vulnerabilities and countermeasures.

Wireless Technologies, Networks, and Methodologies

There are several types of wireless technologies. They include the following:

  1. Wi-Fi—Primarily associated with computer networking, Wi-Fi uses the IEEE 802.11 specification to create a wireless local-area network that may be secure, such as an office network, or public, such as a coffee shop. Usually a Wi-Fi network consists of a wired connection to the Internet, leading to a wireless router that transmits and receives data from individual devices, connecting them not only to the outside world but also to each other. Wi-Fi range is generally wide enough for most homes or small offices, and for larger campuses or homes, range extenders may be placed strategically to extend the signal. Over time, the Wi-Fi standard has evolved, with each new version faster than the last. Current devices usually use the 802.11n or 802.11ac versions of the spec, but backwards compatibility ensures that an older laptop can still connect to a new Wi-Fi router. However, to see the fastest speeds, both the computer and the router must use the latest 802.11 version.
  2. Bluetooth—While both Wi-Fi and cellular networks enable connections to anywhere in the world, Bluetooth is much more local, with the stated purpose of “replacing the cables connecting devices,” according to the official Bluetooth website. Bluetooth uses a low-power signal with a maximum range of 50 feet, but with sufficient speed to enable transmission of high-fidelity music and streaming video. As with other wireless technologies, Bluetooth speed increases with each revision of its standard but requires up-to-date equipment at both ends to deliver the highest possible speed. Also, the latest Bluetooth revisions are capable of using maximum power only when it’s required, preserving battery life.
  3. WiMAX—While over-the-air data is fast becoming the realm of cellular providers, dedicated wireless broadband systems also exist, offering fast Web surfing without connecting to cable or DSL. One well-known example of wireless broadband is WiMAX. Although WiMAX can potentially deliver data rates of more than 30 megabits per second, providers offer average data rates of 6 Mbps and often deliver less, making the service significantly slower than hard-wired broadband.

Wireless technologies are used in a variety of wireless networks. Types of wireless networks include the following:

  1. Wireless PAN—Wireless personal area networks (WPANs) interconnect devices within a relatively small area that is generally within a person’s reach. Wi-Fi PANs are becoming commonplace as equipment designers start to integrate Wi-Fi into a variety of consumer electronic devices. Intel “My WiFi” and Windows 7 “virtual Wi-Fi” capabilities have made Wi-Fi PANs simpler and easier to set up and configure.
  2. Wireless LAN—A wireless local area network (WLAN) links two or more devices over a short distance using a wireless distribution method, usually providing a connection through an access point for Internet access. The use of spread-spectrum or OFDM technologies may allow users to move around within a local coverage area and still remain connected to the network. It is often used in cities to connect networks in two or more buildings without installing a wired link.
  3. Wireless Mesh Network—A wireless mesh network is a wireless network made up of radio nodes organized in a mesh topology. Each node forwards messages on behalf of the other nodes. Mesh networks can “self-heal,” automatically re-routing around a node that has lost power.
  4. Wireless MAN—Wireless metropolitan area networks are a type of wireless network that connects several wireless LANs. WiMAX is a type of Wireless MAN and is described by the IEEE 802.16 standard.
  5. Wireless WAN—Wireless wide area networks are wireless networks that typically cover large areas, such as between neighboring towns and cities, or city and suburb. These networks can be used to connect branch offices of a business or as a public Internet access system. The wireless connections between access points are usually point-to-point microwave links using parabolic dishes on the 2.4 GHz band, rather than the omnidirectional antennas used with smaller networks. A typical system contains base station gateways, access points, and wireless bridging relays.
  6. Cellular Network—A cellular network or mobile network is a radio network distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a cell site or base station. In a cellular network, each cell characteristically uses a different set of radio frequencies from all its immediate neighboring cells to avoid any interference. When joined together, these cells provide radio coverage over a wide geographic area. This enables a large number of portable transceivers (e.g., mobile phones, pagers, etc.) to communicate with each other and with fixed transceivers and telephones anywhere in the network, via base stations, even if some of the transceivers are moving through more than one cell during transmission.

It is important to understand the principles and methodologies of delivering wireless information. These include the following:

  1. Spread Spectrum—Spread spectrum is a method commonly used to modulate information into manageable bits that are sent over the air wirelessly. Essentially, spread spectrum refers to the concept of splitting information over a series of radio channels or frequencies. Generally, the number of frequencies is in the range of about 70, and the information is sent over all or most of the frequencies before being demodulated, or combined at the receiving end of the radio system.

Two kinds of spread spectrum are available:

  • Direct-Sequence Spread Spectrum (DSSS)—Direct-sequence spread spectrum is a wireless technology that spreads a transmission over a much larger frequency band, and with corresponding smaller amplitude. By spreading the signal over a wider band, the signal is less susceptible to interference at a specific frequency. In other words, the interference affects a smaller percentage of the signal. During transmission, a pseudorandom noise code (PN code) is modulated with the signal. The sender and receiver’s PN code generators are synchronized, so that when the signal is received, the PN code can be filtered out.
  • Frequency-Hopping Spread Spectrum (FHSS)—This wireless technology spreads its signal over rapidly changing frequencies. Each available frequency band is subdivided into sub-frequencies. Signals rapidly change (hop) among these sub-frequencies in an order that is agreed upon between the sender and receiver. The benefit of FHSS is that the interference at a specific frequency will affect the signal during a short interval. Conversely, FHSS can cause interference with adjacent DSSS systems.
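The hop order agreed between an FHSS sender and receiver can be modeled as a shared pseudorandom permutation of the sub-frequencies. A toy sketch (a real FHSS system derives its sequence from the radio standard, not from a general-purpose PRNG like Python’s random module):

```python
import random

def hop_sequence(channels, seed):
    """Derive a hop order from a shared seed: sender and receiver shuffle
    the same channel list identically, so they change sub-frequencies in
    lockstep. Toy model; real FHSS sequences come from the radio standard,
    not a general-purpose PRNG."""
    order = list(channels)
    random.Random(seed).shuffle(order)
    return order
```

Both ends derive an identical sequence from the shared seed, while an observer without it sees what appears to be an unpredictable hopping pattern across the band.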
  2. Orthogonal Frequency Division Multiplexing (OFDM) A signal is subdivided into sub-frequency bands or tones, and each of these bands is manipulated so that they can be broadcast together without interfering with each other. In an Orthogonal Frequency Division Multiplexing (OFDM) system, each tone is considered to be orthogonal (independent or unrelated) to the adjacent tones and, therefore, does not require a guard band. Because OFDM is made up of many narrowband tones, narrowband interference will degrade only a small portion of the signal and has little or no effect on the remainder of the frequency components.
  3. Vectored Orthogonal Frequency Division Multiplexing (VOFDM) In addition to the standard OFDM principles, the use of spatial diversity can increase the system’s tolerance to noise, interference, and multipath. This is referred to as vectored OFDM, or VOFDM. Spatial diversity is a widely accepted technique for improving performance in multipath environments. Because multipath is a function of the collection of bounced signals, that collection is dependent on the location of the receiver antenna. If two or more antennae are placed in the system, each would have a different set of multipath signals. The effects of each channel would vary from one antenna to the next, so carriers that may be unusable on one antenna may become usable on another.
  4. Frequency Division Multiple Access (FDMA) Frequency division multiple access (FDMA) is used in analog cellular only. It subdivides a frequency band into sub-bands and assigns an analog conversation to each sub-band. FDMA was the original “cellular” phone technology and has been decommissioned in many locations in favor of GSM or CDMA-based technologies.
  5. Time Division Multiple Access (TDMA) Time division multiple access (TDMA) multiplexes several digital calls (voice or data) at each sub-band by devoting a small time slice in a round-robin fashion to each call in the band. Two sub-bands are required for each call: one in each direction between sender and receiver.
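The orthogonality property that lets OFDM pack tones together without guard bands can be checked numerically. The sketch below, using an arbitrary 64-sample symbol period, shows that subcarriers at distinct integer frequencies have (numerically) zero inner product over one symbol:

```python
import cmath

N = 64  # samples per symbol period (illustrative value)

def tone(k):
    """Complex subcarrier at integer frequency k, sampled over one symbol."""
    return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

def inner(a, b):
    """Inner product of two sampled signals."""
    return sum(x * y.conjugate() for x, y in zip(a, b))

# Distinct integer-frequency tones do not interfere with each other...
print(abs(inner(tone(3), tone(7))))   # ~0 (floating-point noise only)
# ...while each tone correlates perfectly with itself.
print(abs(inner(tone(3), tone(3))))   # ~64
```

This zero cross-correlation is what "orthogonal" means in OFDM: a receiver correlating against one tone sees nothing from its neighbors, so adjacent tones need no guard band between them.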

Transmission Security and Common Vulnerabilities and Countermeasures

Wireless technologies rely on a variety of protocols and authentication systems that have vulnerabilities that can be exploited. Fortunately, there are wireless security devices and countermeasures that can be used to provide stronger security.

Wireless Security Issues

The protocols and authentication methods that wireless technologies employ are intrinsically related to how secure they are. It is important to understand both the strengths and vulnerabilities that come with them.

Open System Authentication

Open System Authentication (OSA) is the default authentication protocol for the 802.11 standard. It consists of a simple authentication request containing the station ID and an authentication response containing success or failure data. Upon successful authentication, both stations are considered mutually authenticated. It can be used with the Wired Equivalent Privacy (WEP) protocol to provide better communication security; however, it is important to note that the authentication management frames are still sent in cleartext during the authentication process. WEP is used only for encrypting data once the client is authenticated and associated. Any client can send its station ID in an attempt to associate with the AP. In effect, no authentication is actually done.

Shared Key Authentication

Shared Key Authentication (SKA) is a standard challenge-and-response mechanism that uses WEP and a shared secret key to provide authentication. The authenticating client encrypts the challenge text with WEP using the shared secret key and returns the resulting ciphertext to the access point for verification. Authentication succeeds if the access point decrypts the response and recovers the same challenge text it sent.

Ad-Hoc Mode

Ad-hoc mode is one of the networking topologies provided in the 802.11 standard. It consists of at least two wireless endpoints where there is no access point involved in their communication. Ad-hoc mode WLANs are normally less expensive to run, as no APs are needed for their communication. However, this topology cannot scale for larger networks and lacks some security features like MAC filtering and access control.

Infrastructure Mode

Infrastructure mode is another networking topology in the 802.11 standard. It consists of a number of wireless stations and access points. The access points usually connect to a larger wired network. This network topology can scale to form large networks with arbitrary coverage and complex architectures.

Wired Equivalent Privacy Protocol (WEP)

Wired Equivalent Privacy (WEP) Protocol is a basic security feature in the IEEE 802.11 standard, intended to provide confidentiality over a wireless network by encrypting information sent over the network. A key-scheduling flaw has been discovered in WEP, so it is now considered to be insecure because a WEP key can be cracked in a few minutes with the aid of automated tools.
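A minimal sketch of the construction that makes WEP fragile: the per-packet RC4 key is simply the 24-bit cleartext IV prepended to the shared secret key, which is exactly what the published key-scheduling attacks exploit. The IV, key, and message values below are illustrative only:

```python
def rc4_keystream(key, length):
    """RC4: key-scheduling (KSA) followed by pseudo-random generation (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                          # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(length):                       # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(iv, secret_key, data):
    """WEP builds the per-packet RC4 key as IV || secret key. The 24-bit IV
    travels in cleartext, which (combined with RC4's weak key schedule) is
    what makes automated key-recovery attacks practical."""
    ks = rc4_keystream(list(iv) + list(secret_key), len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

iv = b'\x01\x02\x03'               # transmitted unencrypted with every frame
key = b'secretwepkey1'             # 104-bit shared WEP key (illustrative)
ct = wep_encrypt(iv, key, b'hello')
print(wep_encrypt(iv, key, ct))    # XOR symmetry recovers b'hello'
```

Because only 2^24 IVs exist, busy networks repeat IVs quickly, handing an eavesdropper pairs of frames encrypted under identical keystreams.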

Wi-Fi Protected Access (WPA) and Wi-Fi Protected Access 2 (WPA2)

Wi-Fi Protected Access (WPA) provides users with a higher level of assurance that their data will remain protected by using the Temporal Key Integrity Protocol (TKIP) for data encryption. 802.1x authentication has been introduced in this protocol to improve user authentication. Wi-Fi Protected Access 2 (WPA2), based on IEEE 802.11i, is a wireless security protocol that allows only authorized users to access a wireless device, with features supporting stronger cryptography (e.g., Advanced Encryption Standard or AES), stronger authentication control (e.g., Extensible Authentication Protocol or EAP), key management, replay attack protection, and data integrity.

In July 2010, a security vendor claimed to have discovered a vulnerability in the WPA2 protocol, named “Hole 196.”68 By exploiting the vulnerability, an internally authenticated Wi-Fi user could decrypt the private data of other users and inject malicious traffic into the wireless network. After a thorough investigation, it turned out that such an attack cannot actually recover, break, or crack any WPA2 encryption keys (AES or TKIP). Instead, attackers could only masquerade as access points and launch a man-in-the-middle attack when clients attach to them. Moreover, such an attack cannot succeed in a properly configured environment. If the client isolation feature is enabled on all access points, wireless clients are not allowed to talk to each other while attached to the same access point. With this simple security configuration setting applied, an attacker is unable to launch a man-in-the-middle attack against other wireless users.

TKIP was initially designed to be used with WPA, while the stronger algorithm AES was designed to be used with WPA2. Some devices may allow WPA to work with AES, while some others may allow WPA2 to work with TKIP. In November 2008, a vulnerability in TKIP was uncovered that would allow an attacker to be able to decrypt small packets and inject arbitrary data into a wireless network. Thus, TKIP encryption is no longer considered to be secure. The security architect should consider using the stronger combination of WPA2 with AES encryption.

The design flaws in the security mechanisms of the 802.11 standard also give rise to a number of potential attacks, both passive and active. These attacks enable intruders to eavesdrop on, or tamper with, wireless transmissions.

A Parking Lot Attack

Access points emit radio signals in a circular pattern, and the signals almost always extend beyond the physical boundaries of the area they are intended to cover. Signals can be intercepted outside of buildings, or even through the floors in multi-story buildings. As a result, attackers can implement a “parking lot” attack, where they actually sit in the organization’s parking lot and try to access internal hosts via the wireless network. If a network is compromised, the attacker has achieved a high level of penetration into the network. They are now through the firewall and have the same level of network access as trusted employees within the enterprise. An attacker may also fool legitimate wireless clients into connecting to the attacker’s own network by placing an unauthorized access point with a stronger signal in close proximity to wireless clients. The aim is to capture end-user passwords or other sensitive data when users attempt to log on to these rogue servers.

Shared Key Authentication Flaw

Shared key authentication can easily be exploited through a passive attack by eavesdropping on both the challenge and the response between the access point and the authenticating client. Such an attack is possible because the attacker can capture both the plaintext (the challenge) and the ciphertext (the response). WEP uses the RC4 stream cipher as its encryption algorithm. A stream cipher works by generating a keystream, i.e., a sequence of pseudo-random bits, based on the shared secret key, together with an initialization vector (IV). The keystream is then XORed against the plaintext to produce the ciphertext. An important property of a stream cipher is that if both the plaintext and the ciphertext are known, the keystream can be recovered by simply XORing the plaintext and the ciphertext together, in this case the challenge and the response. The recovered keystream can then be used by the attacker to encrypt any subsequent challenge text generated by the access point to produce a valid authentication response by XORing the two values together. As a result, the attacker can be authenticated to the access point.
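The keystream-recovery attack described above can be demonstrated in a few lines. In this sketch the RC4 keystream is abstracted to random bytes, since the attack works without the attacker ever needing to know how the keystream was generated:

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# --- Eavesdropped exchange (keystream generation abstracted away) ---
keystream = os.urandom(128)              # stands in for RC4(IV || WEP key)
challenge = os.urandom(128)              # AP sends this in cleartext
response = xor(challenge, keystream)     # client encrypts it with WEP

# --- Attacker: recover the keystream from the known pair ---
recovered = xor(challenge, response)     # plaintext XOR ciphertext

# --- Attacker answers any later challenge without knowing the key ---
new_challenge = os.urandom(128)
forged = xor(new_challenge, recovered)
print(forged == xor(new_challenge, keystream))   # True: the AP accepts it
```

The two XOR operations are the entire attack: capturing one legitimate challenge/response pair is sufficient to authenticate indefinitely, as long as the same keystream is reused.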

Service Set Identifier (SSID) Flaw

Access points come with vendor-provided default Service Set Identifiers (SSIDs) programmed into them. If the default SSID is not changed, it is very likely that an attacker will be able to successfully attack the device due to the use of the default configuration. In addition, SSIDs are embedded in management frames that are broadcast in cleartext from the device, unless the access point is configured to disable SSID broadcasting or is using encryption. By analyzing network traffic captured from the air, the attacker may be able to obtain the network SSID and perform further attacks as a result.
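As an illustration of how little effort a passive sniffer needs, the sketch below parses the SSID information element (tag 0) out of a beacon frame body; the frame bytes here are fabricated for the example, not a real capture:

```python
def extract_ssid(frame_body):
    """Walk the tagged parameters of an 802.11 beacon frame body
    (which follow 12 fixed bytes: timestamp, interval, capabilities)
    and return the SSID element (tag 0) if present."""
    i = 12                                    # skip the fixed fields
    while i + 2 <= len(frame_body):
        tag, length = frame_body[i], frame_body[i + 1]
        value = frame_body[i + 2:i + 2 + length]
        if tag == 0:                          # SSID element
            return value.decode(errors="replace") or "<hidden>"
        i += 2 + length
    return None

# Fabricated beacon body: 12 fixed bytes, an SSID element "corp-wifi",
# then a supported-rates element (tag 1).
beacon = bytes(12) + bytes([0, 9]) + b"corp-wifi" + bytes([1, 1, 0x82])
print(extract_ssid(beacon))                   # corp-wifi
```

A zero-length SSID element (the “hidden” case) yields no name here, but in practice the SSID still leaks, because clients transmit it in cleartext probe requests when they reconnect.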

The Vulnerability of Wired Equivalent Privacy Protocol (WEP)

Data passing through a wireless LAN with WEP disabled (which is the default setting for most products) is susceptible to eavesdropping and data modification attacks. However, even when WEP is enabled, the confidentiality and integrity of wireless traffic is still at risk because a number of flaws in WEP have been revealed, which seriously undermine its claims to security. In particular, the following attacks on WEP are possible:

  • Passive attacks to decrypt traffic based on known plaintext and chosen ciphertext attacks
  • Passive attacks to decrypt traffic based on statistical analysis of ciphertexts
  • Active attacks to inject new traffic from unauthorized mobile stations
  • Active attacks to modify data
  • Active attacks to decrypt traffic, based on tricking the access point into redirecting wireless traffic to an attacker’s machine

Attack on Temporal Key Integrity Protocol (TKIP)

The Temporal Key Integrity Protocol (TKIP) attack uses a mechanism similar to the WEP attack, in that it tries to decode data one byte at a time by using multiple replays and observing the response over the air. Using this mechanism, an attacker can decode small packets like ARP frames in about 15 minutes. If Quality of Service (QoS) is enabled in the network, the attacker can further inject up to 15 arbitrary frames for every decrypted packet. Potential attacks include ARP poisoning, DNS manipulation, and denial of service. Although this is not a key recovery attack and it does not lead to compromise of TKIP keys or decryption of all subsequent frames, it is still a serious attack and poses risks to all TKIP implementations on both WPA and WPA2 networks.
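A toy simulation of this byte-at-a-time approach is sketched below. The accept/reject oracle is simulated directly here; in the real attack, that signal comes from observing the access point's MIC-failure and ICV behavior, and every guess costs a replayed frame over the air:

```python
import os

keystream = os.urandom(32)                   # hidden from the attacker
plaintext = b"ARP who-has 192.168.1.1?"      # small, highly predictable frame
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

queries = 0
def oracle(position, guess):
    """Simulated accept/reject signal. In the real TKIP attack this comes
    from the AP's response to a carefully modified, replayed frame."""
    global queries
    queries += 1
    return ciphertext[position] ^ keystream[position] == guess

recovered = bytearray()
for pos in range(len(ciphertext)):           # one byte per round of replays
    for guess in range(256):
        if oracle(pos, guess):
            recovered.append(guess)
            break

print(bytes(recovered))                      # b'ARP who-has 192.168.1.1?'
```

Decoding an n-byte frame needs at most 256·n oracle responses, which is why short, predictable frames such as ARP packets are the preferred targets and why the real attack completes in minutes rather than hours.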

Wireless Security Devices

As wireless enterprise networks become more pervasive, increasingly sophisticated attacks are developed to exploit these networks. In response, many organizations consider the deployment of wireless intrusion prevention and wireless intrusion detection systems (WIPS/WIDS). These systems can offer sophisticated monitoring and reporting capabilities to identify attacks against wireless infrastructure while stopping multiple classes of attack before they are successful against a network.

Deployment Approaches

When selecting a WIDS vendor, it is important for the security practitioner to first understand the deployment methodologies supported by each system. The available WIDS deployment models include overlay, integrated, and hybrid.

Overlay Monitoring

In an overlay monitoring deployment, organizations augment their existing WLAN infrastructure with dedicated wireless sensors or Air Monitors (AMs). The AMs are connected to the network in a manner similar to access points (APs). They can be deployed in ceilings or on walls and supported by power over Ethernet (PoE) injectors in wiring closets. While APs are responsible for providing client connectivity, AMs are primarily passive devices that monitor the air for signs of attack or other undesired wireless activity. In an overlay WIDS system, the WIDS vendor provides a controller in the form of a server or appliance that collects and assesses information from the AMs that is monitored by an administrator. These devices do not otherwise participate with the rest of the wireless network, and are limited to assessing traffic at the physical layer (layer 1) and the data-link layer (layer 2).

Integrated Monitoring

In an integrated monitoring deployment, organizations leverage existing access point hardware as dual-purpose AP/AM devices. APs are responsible for providing client connectivity in an infrastructure role, and for analyzing wireless traffic to identify attacks and other undesired activity at the same time. This is often a less-costly approach compared to overlay monitoring because organizations use existing hardware for both monitoring and infrastructure access without the need for additional sensors or an overlay management controller.

Hybrid Monitoring

A hybrid monitoring approach leverages the strengths of both the overlay and integrated monitoring models. A hybrid approach uses both dual-purpose APs and dedicated AMs for intrusion detection and protection. Organizations can use an existing deployment of APs and augment that protection with dedicated AMs, or they can deploy a dedicated monitoring infrastructure consisting solely of AM devices. In either case, analysis is performed by a centralized controller similar to what is used with an overlay model, rather than the approach used in an integrated WIDS deployment, where processing is handled by distributed access points.

Powerful Attack Response

To mitigate attacks on the wireless network, WIDS vendors have augmented the analysis components of their products with reactive components, often known as Wireless Intrusion Prevention Services (WIPS). When the analysis mechanism recognizes an attack, such as an attempt at accelerated WEP key cracking, the wireless device reacts to the event by reporting it to the administrator and by taking steps to prevent the attack from succeeding.

Summary

Network and communications security can be a complex set of topics for the SSCP to understand. The need to be able to describe network-related security issues can involve multiple topics made up of many moving parts. The ability to identify protective measures for telecommunications technologies can be challenging, as the speed at which technology changes and evolves continues to increase. The SSCP also needs to be able to identify the processes best suited for managing LAN-based security while, at the same time, taking into account the needs of the organization overall and the necessary procedures for operating and configuring network-based security devices such as IDS and IPS solutions. The security professional should be able to put all of these issues and concerns into context, understand their main goals, and apply a common-sense approach to typical scenarios. The focus here is to maintain operational resilience and protect valuable operational assets through a combination of people, processes, and technologies. At the same time, security services must be managed effectively and efficiently, just like any other set of services in the enterprise.

Sample Questions

  1. Which of the following is typically deployed as a screening proxy for web servers?
    1. Intrusion prevention system
    2. Kernel proxies
    3. Packet filters
    4. Reverse proxies
  2. A customer wants to keep cost to a minimum and has only ordered a single static IP address from the ISP. Which of the following must be configured on the router to allow for all the computers to share the same public IP address?
    1. Virtual Private Network (VPN)
    2. Port Address Translation (PAT)
    3. Virtual Local Area Network (VLAN)
    4. Power over Ethernet (PoE)
  3. Sayge installs a new Wireless Access Point (WAP) and users are able to connect to it. However, once connected, users cannot access the Internet. Which of the following is the most likely cause of the problem?
    1. An incorrect subnet mask has been entered in the WAP configuration.
    2. Users have specified the wrong encryption type and packets are being rejected.
    3. The signal strength has been degraded and latency is increasing hop count.
    4. The signal strength has been degraded and packets are being lost.
  4. Which of the following devices should be part of a network’s perimeter defense?
    1. Web server, host-based intrusion detection system (HIDS), and a firewall
    2. DNS server, firewall, and a boundary router
    3. Switch, firewall, and a proxy server
    4. Firewall, proxy server, and a host-based intrusion detection system (HIDS)
  5. A security information and event management (SIEM) service performs which of the following functions? (Choose all that apply.)
    1. Coordinates software for security conferences and seminars
    2. Aggregates logs from security devices and application servers looking for suspicious activity
    3. Gathers firewall logs for archiving
    4. Reviews access control logs on servers and physical entry points to match user system authorization with physical access permissions
  6. A botnet can be characterized as a:
    1. Type of virus
    2. Group of dispersed, compromised machines controlled remotely for illicit reasons
    3. Automatic security alerting tool for corporate networks
    4. Network used solely for internal communications
  7. During a disaster recovery test, several billing representatives need to be temporarily set up to take payments from customers. It has been determined that this will need to occur over a wireless network, with security being enforced where possible. Which of the following configurations should be used in this scenario?
    1. WPA2, SSID disabled, and 802.11a
    2. WEP, SSID disabled, and 802.11g
    3. WEP, SSID enabled, and 802.11b
    4. WPA2, SSID enabled, and 802.11n
  8. A new installation requires a network in a heavy manufacturing area with substantial amounts of electromagnetic radiation and power fluctuations. Which medium is best suited for this environment if little traffic degradation is tolerated?
    1. Shielded twisted pair
    2. Coax
    3. Fiber
    4. Wireless
  9. What is the network ID portion of the IP address 191.154.25.66 if the default subnet mask is used?
    1. 191
    2. 191.154.25
    3. 191.154
  10. Given the address 192.168.10.19/28, which of the following are valid host addresses on this subnet? (Choose two.)
    1. 192.168.10.31
    2. 192.168.10.17
    3. 192.168.10.16
    4. 192.168.10.29
  11. Circuit-switched networks do which of the following tasks?
    1. Divide data into packets and transmit it over a virtual network.
    2. Establish a dedicated circuit between endpoints.
    3. Divide data into packets and transmit it over a shared network.
    4. Establish an on-demand circuit between endpoints.
  12. What is the biggest security issue associated with the use of a multiprotocol label switching (MPLS) network?
    1. Lack of native encryption services
    2. Lack of native authentication services
    3. Support for the Wired Equivalent Privacy (WEP) and Data Encryption Standard (DES) algorithms
    4. The need to establish peering relationships to cross Tier 1 carrier boundaries
  13. The majority of DNS traffic is carried using the User Datagram Protocol (UDP); which types of DNS traffic are carried using the Transmission Control Protocol (TCP)? (Choose all that apply.)
    1. Query traffic
    2. Response traffic
    3. DNSSEC traffic that exceeds the single-packet size maximum
    4. Secondary zone transfers
  14. What is the command that a client would need to issue to initialize an encrypted FTP session using Secure FTP as outlined in RFC 4217?
    1. “ENABLE SSL”
    2. “ENABLE TLS”
    3. “AUTH TLS”
    4. “AUTH SSL”
  15. What is the IEEE designation for Priority-based Flow Control (PFC) as defined in the Data Center Bridging (DCB) Standards?
    1. 802.1Qbz
    2. 802.1Qau
    3. 802.1Qaz
    4. 802.1Qbb
  16. What is the integrity protection hashing function that the Session Initiation Protocol (SIP) uses?
    1. SHA-160
    2. MD4
    3. MD5
    4. SHA-256
  17. Layer 2 Tunneling Protocol (L2TP) is a hybrid of:
    1. Cisco’s Layer 2 Forwarding (L2F) and Microsoft’s Point to Point Tunneling Protocol (PPTP)
    2. Microsoft’s Layer 2 Forwarding (L2F) and Cisco’s Point to Point Tunneling Protocol (PPTP)
    3. Cisco’s Layer 2 Forwarding (L2F) and Point to Point Protocol (PPP)
    4. Microsoft’s Layer 2 Forwarding (L2F) and Point to Point Protocol (PPP)
  18. With regard to LAN-based security, what is the key difference between the control plane and the data plane?
    1. The data plane is where forwarding/routing decisions are made, while the control plane is where commands are implemented.
    2. The control plane is where APIs are used to monitor and oversee, while the data plane is where commands are implemented.
    3. The control plane is where forwarding/routing decisions are made, while the data plane is where commands are implemented.
    4. The data plane is where APIs are used to monitor and oversee, while the control plane is where commands are implemented.
  19. There are several record types associated with the use of DNSSEC. What does the DS record type represent?
    1. A private key
    2. A public key
    3. A hash of a key
    4. A signature of an RRSet
  20. MACsec (IEEE 802.1AE) is used to provide secure communication for all traffic on Ethernet links. How is MACsec configured?
    1. Through key distribution
    2. Using connectivity groups
    3. Using key generation
    4. Using connectivity associations

End Notes
