CHAPTER 4

Network Infrastructure

In this chapter, you will learn about

•   Network types

•   Network optimization

•   Routing and switching

•   Network ports and protocols

Network configuration is an integral piece of cloud computing and is key to its performance. One of the factors an organization must consider is the impact of networking on cloud computing performance and the differences between its current network infrastructure and the infrastructure that would be utilized in a cloud computing environment.

This chapter introduces you to networking components that are used in cloud computing. After reading this chapter, you should understand the different types of networks and how to optimize an organization’s network for cloud computing. You will also learn how network traffic is routed between the various cloud models and how to secure that traffic. And you will find out about the different network protocols used in cloud computing and when to use those protocols. It is important for you to have a thorough understanding of these topics for the exam.

Network Types

A network is defined as a group of interconnected computers and peripherals that are capable of sharing resources, including software, hardware, and files. The purpose of a network is to provide users with access to information that multiple people might need to perform their day-to-day job functions.

There are numerous advantages for an organization to construct a network. It allows users to share files so that multiple users can access them from a single location. An organization can share resources such as printers, fax machines, storage devices, and even scanners, thus reducing the total number of resources they have to purchase and maintain. A network also allows for applications to be shared by multiple users as long as the application is designed for this and the appropriate software licensing is in place.

There are three types of networks: intranet, Internet, and extranet. They all rely on the same Internet protocols but have different levels of access for users inside and outside the organization. This section describes each of these network types and when to use them.

Intranet

An intranet is a private network based on the Internet Protocol (IP) that is configured and controlled by a single organization and is only accessible to users that are internal to that particular organization. An intranet can host multiple private websites and is usually the focal point for internal communication and collaboration.

An intranet allows an organization to share information and websites within the organization and is protected from external access by a firewall or a network gateway. For example, an organization may want to share announcements, the employee handbook, confidential financial information, or organizational procedures with its employees but not with people outside the organization.

An intranet is similar to the Internet, except that an intranet is restricted to specific users. For example, a web page designed for an intranet may have a look and feel similar to any other website on the Internet, the only difference being who is authorized to access the web page. Public web pages that are accessible over the Internet are typically available to everyone. In contrast, an intranet is owned and controlled by the organization, and that organization decides who can access its web pages. Figure 4-1 shows an example of an intranet configuration.

Figure 4-1  An intranet network configuration, where access is private

Internet

The Internet is a global system of interconnected computer networks that use the same Internet protocols (TCP/IP) as an intranet network uses. Unlike an intranet, which is controlled by and serves only one organization, the Internet is not controlled by a single organization and serves billions of users around the world. The Internet is a network of multiple networks relying on network devices and common protocols to transfer data from one intermediate destination (sometimes called a hop) to another until it reaches its final destination.

Aside from a few countries that impose restrictions on what people in their country can view, the Internet is largely unregulated, and anyone can post or read whatever they want on the Internet. The Internet Corporation for Assigned Names and Numbers (ICANN) is a nonprofit organization that was created to coordinate the Internet’s system of unique identifiers, including domain names and IP addresses.

Extranet

An extranet is an extension of an intranet, with the primary difference being that an extranet allows controlled access from outside the organization. An extranet permits access to vendors, partners, suppliers, or other limited third parties. Access is restricted using firewalls, access profiles, and privacy protocols. It allows an organization to share resources with other businesses securely. For example, an organization could use an extranet to sell its products and services online or to share information with business partners.

Both intranets and extranets are owned and supported by a single organization. The way to differentiate between an intranet and an extranet is by who has access to the private network and the geographical reach of that network. Figure 4-2 shows an example configuration of an extranet network.

Figure 4-2  An extranet network configuration, where outside access is limited

EXAM TIP   The difference between an intranet and an extranet is that an intranet is limited to employees, while an extranet is available to a larger group of people, such as vendors, partners, or suppliers.

Network Optimization

Now that you know about the different types of networks, you need to understand the components of those networks and how they can be optimized. In this section, you will learn about the components that make up intranet and extranet networks and how to configure them so that they perform most efficiently.

Network optimization is the process of keeping a network operating at peak efficiency. To keep the network running at peak performance, an administrator must perform a variety of tasks, including updating the firmware and operating system on routers and switches, identifying and resolving data flow bottlenecks, and monitoring network utilization. By keeping the network optimized, a network administrator as well as CSPs can more accurately meet the terms of the organization’s SLA.

Network Scope

The scope of a network defines its boundaries. The largest network on Earth, the Internet, was described earlier, but millions of other networks span organizational or regional boundaries. The terms LAN, MAN, and WAN are used to differentiate these networks.

LAN

A local area network (LAN) is a network topology that spans a relatively small area like an office building. A LAN is a great way for people to share files, devices, pictures, and applications and is primarily Ethernet-based.

There are three common data rates for modern Ethernet networks:

•   Fast Ethernet  Transfers data at a rate of 100 Mbps (megabits per second)

•   Gigabit Ethernet  Transfers data at 1000 Mbps

•   10 Gigabit Ethernet  Transfers data at 10,000 Mbps

MAN

A metropolitan area network (MAN) is similar to a LAN except that a MAN spans a city or a large campus. A MAN usually connects multiple LANs and is used to build networks with high data connection speeds for cities or college campuses. MANs are efficient and fast because they use high-speed data carriers such as fiber optics.

WAN

A wide area network (WAN) is a network that covers a large geographic area and can contain multiple LANs or MANs. WANs are not restricted to a single geographic area; the Internet is the largest example of a WAN. Some corporations use leased lines to create a corporate WAN that spans a large geographic area containing locations in multiple states or even countries. Leased lines are private network circuits that are established through a contract with an ISP. These connect one or more sites together through the ISP’s network.

Network Topologies

How the different nodes or devices in a network are connected and how they communicate is determined by the network’s topology. The network topology is the blueprint of the connections of a computer network and can be either physical or logical. Physical topology refers to the design of the network’s physical components: computers, switches, cable installation, etc. Logical topology can be thought of as a picture of how the data flows within a network.

The primary physical topologies to be considered are bus, star, ring, mesh, and tree. There are various pros and cons of the different network topologies. After evaluating the needs of the organization, you can then choose the most efficient topology for the intended purpose of the network.

Bus

In a bus topology, every node is connected to a central cable, referred to as the bus or backbone. In a bus topology, only one device is allowed to transmit at any given time. Since a bus topology uses a single cable, it is easy to set up and cost-effective.

The bus topology is not recommended for large networks because of the limitations to the number of nodes that can be configured on a single cable. Troubleshooting a bus topology is much more difficult than troubleshooting a star topology because in a bus topology, you have to determine where the cable was broken or removed. In a star topology, the central device offers a simple place to conduct troubleshooting. Figure 4-3 shows an example of a network configured to use a bus topology.

Figure 4-3  Network configuration using a bus topology

Star

In a star topology, each node is connected to a central hub or switch. The nodes communicate by sending data through the central hub. New nodes can easily be added or removed without affecting the rest of the nodes on the network.

The star topology offers improved performance over a bus topology. It is also more resilient because the failure of one node does not affect the rest of the network. Problematic nodes can be easily isolated by unplugging that particular node. If the problem disappears, it can be concluded that it is related to that node, making troubleshooting much more straightforward in a star topology.

The main drawback to the star topology is that if the central hub or switch fails, all the nodes connected to it are disconnected and unable to communicate with the other nodes. This is known as a single point of failure. Figure 4-4 shows an example of a network configured to use a star topology.

Figure 4-4  Network configuration using a star topology

Ring

In a ring topology, each node is connected to another, forming a circle or a ring. Each packet is sent around the ring until it reaches its target destination. The ring topology is hardly used in today’s enterprise environment because all network connectivity is lost if one of the links in the network path is broken. Figure 4-5 shows an example of a network configured to use a ring topology.

Figure 4-5  Network configuration using a ring topology

Mesh

In a true mesh topology, every node is interconnected to every other node in the network, allowing transmissions to be distributed even if one of the connections goes down. A mesh topology, however, is difficult to configure and expensive to implement and is not commonly used. It is the most fault-tolerant of the physical topologies, but it requires the most cable. Since cabling is expensive, the cost must be weighed against the fault tolerance achieved. Figure 4-6 shows an example of a network configured to use a mesh topology.

Figure 4-6  Network configuration using a mesh topology

Most real-world implementations of a mesh network are actually a partial mesh, where additional redundancy is added to the topology without incurring the expense of connecting everything to everything.

Tree

In a tree topology, multiple star networks are connected through a linear bus backbone. As you can see in Figure 4-7, if the backbone cable between the two star networks fails, those two networks would no longer be able to communicate; however, the computers on the same star network would still maintain communication with each other. The tree topology is the most commonly used configuration in today’s enterprise environment.

Figure 4-7  Network configuration using a tree topology

Bandwidth and Latency

Now that you understand the different network topologies that you can configure, you need to know what other factors affect network performance. When moving to the cloud, network performance is crucial to the success of your deployment because the data is stored off-site. Two key measures of network performance are bandwidth and network latency. Bandwidth is the maximum rate at which the network can transfer data, commonly described as the speed of the network. Network latency is the time delay encountered while data is being sent from one point to another on the network.

Latency is generally described as either low or high. A low-latency network connection is one that experiences very small delays while sending and receiving traffic, whereas a high-latency network has long delays while sending and receiving traffic. When it is excessive, network latency can create bottlenecks that prevent data from using the maximum capacity of the network bandwidth, thereby decreasing the effective bandwidth.
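
One simple way to get a feel for latency is to time how long it takes to open a TCP connection to a remote host. The following Python sketch illustrates the idea; the hostname and port are placeholders, and a production tool would use ICMP ping or dedicated monitoring instead.

import socket
import time

def measure_latency(host, port=443, samples=5):
    """Return the average TCP connect time in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; only the handshake time matters here
        total += time.perf_counter() - start
    return (total / samples) * 1000

if __name__ == "__main__":
    print(f"Average latency: {measure_latency('example.com'):.1f} ms")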

Compression

Compression is defined as the reduction in the size of data traveling across the network, which is achieved by converting that data into a format that requires fewer bits for the same transmission. Compression is typically used to minimize the required storage space or to reduce the amount of data transmitted over the network. When using compression to reduce the size of data that is being transferred, a network engineer sees a decrease in transmission times, since there is more bandwidth available for other data to use as it traverses the network. Compression can result in higher processor utilization because a packet must be compressed and decompressed as it traverses the network.

Network compression can automatically compress data before it is sent over the network to improve performance, especially where bandwidth is limited. Maximizing the compression ratio is vital to enhancing application performance on networks with limited bandwidth. Compression can play a key role in cloud computing. As an organization migrates to the cloud network, compression is vital in controlling network latency and maximizing network bandwidth.
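
The following Python sketch illustrates the idea using the standard library's zlib module: a repetitive payload shrinks dramatically before transmission, at the cost of some CPU time on each end. The payload is illustrative only.

import zlib

payload = b"cloud computing " * 1000      # repetitive data compresses well
compressed = zlib.compress(payload, 6)    # higher compression levels cost more CPU

print(f"Original size:   {len(payload)} bytes")
print(f"Compressed size: {len(compressed)} bytes")

# The receiving side reverses the process before using the data.
restored = zlib.decompress(compressed)
assert restored == payload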

EXAM TIP   Compression requires compute power to perform. The higher the compression, the higher the compute cost.

Caching

Caching is the process of storing frequently accessed data in a location closer to the device that is requesting the data. For example, a web cache could store web pages and web content either on the physical machine that is accessing the website or on a storage device like a proxy server. This improves the response time of the web page and reduces the amount of network traffic required to access the website, thus improving network speed and reducing network latency.

EXAM TIP   The most common type of caching occurs with proxy servers.

There are multiple benefits to caching, including the cost savings that come with reducing the bandwidth needed to access information via the Internet and the improved productivity of the end users (because cached information loads significantly faster than noncached information). With your data now being stored in the cloud, it is crucial to understand how caching works and how to maximize caching to improve performance and maximize your network bandwidth.
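
As a simple illustration of the concept, the following Python sketch caches fetched web resources in memory so that repeat requests never touch the network. The URL is a placeholder, and a real web cache or proxy server would also honor expiration and validation headers.

import urllib.request
from functools import lru_cache

@lru_cache(maxsize=128)
def fetch(url):
    """Download a resource once; later calls with the same URL are served from memory."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read()

first = fetch("https://example.com/")    # goes out over the network
second = fetch("https://example.com/")   # answered instantly from the local cache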

Load Balancing

Throughout this section, we have discussed the importance of optimizing network traffic and infrastructure. Data must be routed as efficiently as possible to optimize network traffic. For example, if an organization’s network has five routers and three of them are running at 5 percent utilization while the other two are running at 90 percent, the network is not being used as efficiently as it could be. If the load were balanced so that each router ran at roughly the same utilization (about 39 percent in this example), network performance would improve and network latency would be reduced.

Similarly, websites and cloud servers can only handle so much traffic on their own. E-commerce sites may receive thousands or millions of hits every minute. These systems service such a huge number of requests through load-balanced systems by splitting the traffic between multiple web servers that are part of a single web farm, referenced by a single URL. This increases performance and removes the single point of failure connected with having only one server respond to the requests.

Load balancing is the process of distributing incoming HTTP or application requests evenly across multiple devices or web servers so that no single device is overwhelmed. Load balancing allows for achieving optimal resource utilization and maximizing throughput without overloading a single machine. Load balancing increases reliability by creating redundancy for your application or website by using dedicated hardware or software. Figure 4-8 shows an example of how load balancing works for web servers.

Figure 4-8  An illustration of load balancing
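
The following Python sketch illustrates the simplest load-balancing algorithm, round robin, in which requests are handed to each server in the pool in turn. The server names are placeholders; production load balancers also perform health checks and weigh servers by capacity.

from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)   # endless rotation over the server pool

    def next_server(self):
        return next(self._servers)

balancer = RoundRobinBalancer(["web01", "web02", "web03"])
for request_id in range(6):
    print(f"request {request_id} -> {balancer.next_server()}")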

Network Service Tiers

Network service tiers determine the level of network performance and resiliency provided to the cloud environment. CSPs may have multiple tiers to choose from, each of them offering different options. Some of the options include network performance, reliability, and geographic distribution.

NOTE   There is no standard network service tier package that every vendor uses. You will need to evaluate what each tier offers to choose the one that best meets your needs.

Network Performance

The first way network services tiers differentiate themselves is through network performance. Premium tiers will offer the lowest network latency, higher bandwidth, and more concurrent connections. In contrast, lower tiers will have higher latency, bandwidth caps or throttling, and fewer concurrent connections.

CAUTION   Don’t assume that the premium or highest tier is necessarily the best solution. Tiers give customers the option to only pay for what they need. Business needs should drive the decision on which tier to select.

Reliability

The second way network service tiers are differentiated is through higher availability. Higher tiers will have less downtime in their SLA, and they may offer more advanced routing and DNS capabilities so that fewer packets are lost when systems fail over or when network links go down or routes are updated. Conversely, lower tiers will have more downtime in their SLAs and may lose more packets when such events occur. This does not mean that they will lose data, only that the data will need to be retransmitted, which can mean slower speeds or potential timeouts for customers in such events.

Geographic Distribution

The third way network service tiers differentiate themselves is through geographic distribution. A cloud service can be load-balanced across multiple nodes in different regions or even different countries. A low tier might host the systems on a single instance, while a middle tier may load-balance across multiple servers in a local region, called regional load balancing. A premium tier might load-balance across multiple servers in multiple regions across the globe, called global load balancing.

Exercise 4-1: Configuring Network Service Tiers in Google Cloud

In this exercise, you will learn how to change the network service tier for a Google Cloud project. The process is rather simple, which just shows how powerful the cloud is. If you were to upgrade the service tier of a locally hosted application, you would need to purchase additional bandwidth, upgrade networking hardware, configure replication to multiple sites, and establish partnerships or hosting contracts with each of those sites. With the cloud, a system can be upgraded to a higher service tier with a few clicks.

1.   Log in to your Google Cloud dashboard.

2.   Click the navigation menu (the three lines in the upper left) and scroll down until you find the Networking category, where you will see Network Service Tiers. Please note that Google shortens the name to “network service…” in its list.

3.   Select the project you wish to modify.

4.   Click the Change Tier button.

5.   If you have never changed the tier before, it will be set to the default network tier you have specified. Select the tier you wish to change it to.

6.   Click Change.

Proxy Servers

Proxy servers are used to route traffic through an intermediary. The proxy server can receive requests for data, such as a website request, and then make the request for the user. The web server would interact with the proxy, sending back the response, and then the proxy would forward it on to the end user. Proxy servers hide the client from the server because the server communicates with the proxy instead of the client or user, as shown in Figure 4-9.

Figure 4-9  Proxy server

This first example approached the proxy server from the client perspective. Proxy servers can also be used in reverse. This is known as a reverse proxy. A reverse proxy accepts requests for resources and then forwards the request on to another server. A reverse proxy hides the server from the client because the client or user connects only to the reverse proxy server, believing that the resources reside on the reverse proxy when they actually reside somewhere else, as shown in Figure 4-10.

Figure 4-10  Reverse Proxy server
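
The following Python sketch shows a client directing its web requests through a forward proxy, as described above. The proxy address and URL are placeholders.

import urllib.request

proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
})
opener = urllib.request.build_opener(proxy)
with opener.open("https://example.com/", timeout=10) as response:
    print(response.status)   # the request was made on the client's behalf by the proxy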

Content Delivery Network

A content delivery network (CDN) is a collection of servers that are geographically distributed around the world to provide content to users via local servers. CDNs allow companies to deploy content to one place but have it locally available around the world. Local connections have lower latency than remote ones. For example, if you did not have a CDN and a user in India wanted to access your website that is hosted in the United States, they may have to traverse many network links, otherwise known as hops, in between their computer and the data center hosting your website. However, if you are using a CDN, they will only have a few hops between their computer and the closest CDN server in India, so the website will be much more responsive for them, as shown in Figure 4-11.

Figure 4-11  CDN Delivering Web Content to a User in India

Routing and Switching

We have discussed the different options and configurations that are available for setting up a network. Now let’s explore how to route traffic to and from networks. Knowing how a network operates is the most important piece to understanding routing and switching. In the previous section, you learned that a network operates by connecting computers and devices in a variety of different physical configurations. Routers and switches are the networking devices that enable other devices on the network to connect and communicate with each other and with other networks. They are placed on the same physical network as the other devices.

While routers and switches may give the impression they are somewhat similar, these devices are responsible for very different operations on a network. A switch is used to connect multiple devices to the same network or LAN. For example, a switch connects computers, printers, servers, and a variety of other devices. It allows those devices to share network resources with each other. This makes it possible for users to share resources, saving valuable time and money for the organization.

A router, on the other hand, is used to connect multiple networks together and allows a network to communicate with the outside world. An organization would use a router to connect its network to the Internet, thus allowing its users to share a single Internet connection. A router can analyze the data that is being sent over the network and change how it is packaged so that it can be routed to another network or even over a different type of network.

A router makes routing decisions based on the routing protocol configured on it. Each routing protocol uses a specific method to determine the best path a packet can take to its destination. Some routing protocols include Border Gateway Protocol (BGP), Interior Gateway Routing Protocol (IGRP), Open Shortest Path First (OSPF), and Routing Information Protocol (RIP). BGP uses rule sets; IGRP uses delay, load, and bandwidth; OSPF uses link-state; and RIP uses hop count to make routing decisions.

EXAM TIP   A router connects outside networks to your local network, whereas a switch connects devices on your internal network.

Network Address Translation

Now that you know a router can allow users to share a single IP address when browsing the Internet, you need to understand how that process works. Network address translation, or NAT, allows a router to modify packets so that multiple devices can share a single public IP address. Most organizations require Internet access for their employees but do not have enough valid public IP addresses to allow each individual to have his or her own public address to locate resources outside of the organization’s network. The primary purpose of NAT is to limit the number of public IP addresses an organization needs.

NAT allows outbound Internet access for inside machines and cloud-based virtual machines, which route through the NAT device as their default gateway, but it prevents inbound connections initiated from the Internet from reaching those machines directly.

For example, most organizations use a private IP address range, which allows the devices on the network to communicate with all the other devices on the network and makes it possible for users to share files, printers, and the like. But if those users need to access anything outside the network, they require a public IP address. If Internet queries originate from various internal devices, the organization would need to have a valid public IP address for each device. NAT consolidates the addresses needed for each internal device to a single valid public IP address, allowing all of the organization’s employees to access the Internet with the use of a single public IP address.

To fully understand this concept, you first need to know what makes an IP address private and what makes an IP address public. Any IP address that falls into one of the IP address ranges reserved for private use by the Internet Engineering Task Force (IETF) is considered a private IP address. Table 4-1 lists the different private IP address ranges.

Class     Address Range                    CIDR Notation
Class A   10.0.0.0 to 10.255.255.255       10.0.0.0/8
Class B   172.16.0.0 to 172.31.255.255     172.16.0.0/12
Class C   192.168.0.0 to 192.168.255.255   192.168.0.0/16

Table 4-1  Private IP Addresses

RFC 1918, a standard published by the IETF, defines private network address space that is not used or allowed on the public Internet. These addresses are commonly used in a home or corporate network or LAN when a public IP address, or globally routed address, is not required on each device. Because these address ranges are not made available as public IP addresses and consequently are never explicitly assigned for use to any organization, they receive the designation of “private” IP addresses. IP packets addressed with private IP addresses cannot be transmitted onto the public Internet over the backbone.

There are two reasons for the recent surge in using RFC 1918 addresses: one is that Internet Protocol version 4 (IPv4) address space is rapidly diminishing, and the other is that a significant security enhancement is achieved by providing address translation, whether it is NAT or PAT (described shortly) or a combination of the two. A perpetrator on the Internet cannot directly access a private IP address without the administrator taking significant steps to relax the security. A NAT router is sometimes referred to as a poor man’s firewall. In reality, it is not a firewall at all, but it shields the internal network (individuals using private addresses) from attacks and from what is sometimes referred to as Internet background radiation (IBR).

An organization must have at least one “routable” or public IP address to access resources that are external to its network. This is where NAT comes into play. NAT allows a router to change the private IP address into a public IP address to access resources that are external to it. The NAT router then tracks those IP address changes. When the external information being requested comes back to the router, the router changes the IP address from a public IP address to a private IP address to forward the traffic back to the requesting device. Essentially, NAT allows a single device like a router to act as an agent or a go-between for a private network and the Internet. NAT provides the benefits of saving public IP addresses, higher security, and ease of administration.

In addition to public and private IP addresses, there is automatic private IP addressing (APIPA, sometimes called Autoconfig), which enables a Dynamic Host Configuration Protocol (DHCP) client to receive an IP address even if it cannot communicate with a DHCP server. APIPA addresses are “nonroutable” over the Internet and allocate an IP address in the private range of 169.254.0.1 to 169.254.255.254. APIPA uses Address Resolution Protocol (ARP) to verify that the IP address is unique in the network.

EXAM TIP   You need to quickly identify a private IP address, so it is advantageous to memorize the first octet of the IP ranges (i.e., 10, 172, and 192).

Port Address Translation

Like NAT, port address translation (PAT) allows for mapping of private IP addresses to public IP addresses and mapping multiple devices on a network to a single public IP address. Its goal is the same as that of NAT: to conserve public IP addresses. PAT enables the sharing of a single public IP address between multiple clients trying to access the Internet.

An excellent example of PAT is a home network where multiple devices are trying to access the Internet simultaneously. In this instance, your ISP would assign your home network’s router a single public IP address. On this network, you could have multiple computers or devices trying to access the Internet at the same time using the same router. When device Y connects to the Internet, the router assigns a port number to its translated traffic, so the combination of the public IP address and that port uniquely identifies device Y. If device Z were to connect to the Internet simultaneously, the router would use the same public IP address for device Z but with a different port number. The two devices share the same public IP address to browse the Internet, but the router distributes the requested content to the appropriate device based on the port number the router has assigned to that particular device.
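
The following Python sketch models the translation table a PAT router maintains: each internal (address, port) pair is mapped to the shared public IP address and a unique port, and returning traffic is matched back to the right internal host. The addresses and ports are illustrative only.

PUBLIC_IP = "203.0.113.10"          # the single public address (documentation range)

class PatTable:
    def __init__(self):
        self.mappings = {}          # (private_ip, private_port) -> assigned public port
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        """Map an internal source to the shared public IP and a unique port."""
        key = (private_ip, private_port)
        if key not in self.mappings:
            self.mappings[key] = self.next_port
            self.next_port += 1
        return PUBLIC_IP, self.mappings[key]

    def inbound(self, public_port):
        """Match a returning packet back to the internal host that sent the request."""
        for (ip, port), assigned in self.mappings.items():
            if assigned == public_port:
                return ip, port
        return None

pat = PatTable()
print(pat.outbound("192.168.1.20", 51515))   # device Y
print(pat.outbound("192.168.1.21", 51515))   # device Z, same source port, new public port
print(pat.inbound(40000))                    # returns ('192.168.1.20', 51515)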

EXAM TIP   Basic NAT provides a one-to-one mapping of IP addresses, whereas PAT provides a many-to-one mapping of IP addresses.

Subnetting and Supernetting

Subnetting is the practice of creating subnetworks, or subnets. A subnet is a logical subdivision of an IP network. Using subnets may be useful in large organizations where it is necessary to allocate address space efficiently. They may also be utilized to increase routing efficiency and offer improved controls for network management when different networks require the separation of administrator control for different entities in a large or multitenant environment. Inter-subnet traffic is exchanged by routers, just as it would be exchanged between physical networks.

All computers that belong to a particular subnet are addressed with the use of two separate bit groups in their IP address, with one group designating the subnet and the other group designating the specific host on that subnet. The routing prefix of the address can be expressed in either classful notation or classless inter-domain routing (CIDR) notation. CIDR has become the most popular routing notation method in recent years. This notation is written as the first address of a network, followed by a slash (/), then finishing with the prefix’s bit length. To use a typical example, 192.168.1.0/24 is the network prefix starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. An allocation of 24 bits is equal to the subnet mask for that network, which you may recognize as the familiar 255.255.255.0.
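
Python’s built-in ipaddress module can be used to explore CIDR notation. The short sketch below reproduces the 192.168.1.0/24 example from the text and then divides the network into smaller subnets.

import ipaddress

network = ipaddress.ip_network("192.168.1.0/24")
print(network.netmask)         # 255.255.255.0
print(network.num_addresses)   # 256 addresses in the block

# Subnetting: divide the /24 into four /26 subnets.
for subnet in network.subnets(new_prefix=26):
    print(subnet)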

Whereas subnetting is the practice of dividing one network into multiple networks, supernetting does the exact opposite, combining multiple networks into one larger network. Supernetting is most often utilized to combine multiple class C networks. It was created to solve the problem of routing tables growing too large for administrators to manage by aggregating networks under one routing table entry. It also provided a solution to the problem of class B network address space running out.

In much the same fashion as subnetting, supernetting takes the IP address and breaks it down into a network bit group and a host identifier bit group. It also uses CIDR notation. A supernetted network can be recognized by a network prefix that is shorter than the classful default (for example, /23 or shorter when class C networks are combined), which allows a greater number of hosts on the larger network to be specified in the host bit group.
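
The same module can illustrate supernetting, as in the following sketch, where two adjacent class C networks are aggregated into a single /23 route entry.

import ipaddress

routes = [
    ipaddress.ip_network("192.168.0.0/24"),
    ipaddress.ip_network("192.168.1.0/24"),
]
supernet = list(ipaddress.collapse_addresses(routes))
print(supernet)   # [IPv4Network('192.168.0.0/23')]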

Routing Tables

A routing table is a data table stored on a router that the router uses to determine the best path for the network packets it is responsible for routing. The routing table contains information about the network topology located adjacent to the router, as well as information gathered from neighboring routers. Routers use this information to determine which path to send packets down to efficiently deliver information to its destination.

Routers may know of multiple paths to a destination. The routing table will rank each of these paths in order of efficiency. The method of ordering the paths depends on the routing protocol used. If the most efficient path is unavailable, the router will select the next best path as defined by its routing table.
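
The following Python sketch models how a router consults its routing table: all matching routes are found, and the most specific (longest) prefix wins. The routes and next-hop addresses are illustrative only.

import ipaddress

routing_table = {
    ipaddress.ip_network("0.0.0.0/0"):   "203.0.113.1",   # default route
    ipaddress.ip_network("10.0.0.0/8"):  "10.255.255.1",
    ipaddress.ip_network("10.1.2.0/24"): "10.1.2.254",
}

def next_hop(destination):
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix match
    return routing_table[best]

print(next_hop("10.1.2.77"))   # 10.1.2.254
print(next_hop("8.8.8.8"))     # 203.0.113.1 (falls back to the default route)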

EXAM TIP   Routers can maintain multiple routing tables simultaneously, allowing for identical IP addresses to coexist without conflict through a technology called virtual routing and forwarding (VRF).

Network Segmentation and Micro-segmentation

Network segmentation and micro-segmentation are techniques to divide the network into smaller pieces to isolate or better control traffic and to apply more granular policies to those network segments. Both segmentation and micro-segmentation can be used to reduce the spread of malicious code or make attacks harder, as attackers will need to figure out how to move between network segments before they can attack nodes in another segment.

Segmentation and micro-segmentation methods include traditional VLANs, as well as newer technologies such as VXLAN, NVGRE, STT, and GENEVE. Each of these protocols operates on top of other network protocols such as TCP, UDP, and IP. Therefore, they are often referred to collectively as network overlays. These protocols aim to make the network more scalable and flexible, a requirement when networks span multiple clouds and when clouds host thousands or millions of customers.

Virtual Local Area Network

A virtual local area network, or VLAN, is the concept of partitioning a physical network to create separate, independent broadcast domains that are part of the same physical network. VLANs are similar to physical LANs but add the ability to break up physical networks into logical groupings of networks, all within the same physical network.

VLANs were conceived out of the desire to create logical separation without the need for additional physical hardware (i.e., network cards, wiring, and routers). VLANs can even traverse physical networks, forming a logical network or VLAN even if the devices exist on separate physical networks. With a virtual private network (VPN), which extends a private network over a public network such as the Internet, a VLAN can even traverse the entire Internet. For example, you could implement a VLAN to place only certain end users inside the VLAN to help control broadcast traffic.

VLAN tagging is the process of inserting a 4-byte tag into the Ethernet frame header, directly after the source MAC address (which itself follows the destination address). There are two types of VLAN tagging mechanisms: Inter-Switch Link (ISL), which is proprietary to Cisco equipment, and IEEE 802.1Q, which is supported by everyone, including Cisco, and is usually the VLAN option of choice. Approximately 4,095 different VLAN IDs can be configured on the same physical network segment (depending on what is supported by the switch and router devices) utilizing the IEEE 802.1Q protocol.
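
As a minimal illustration, the following Python sketch assembles the 4-byte 802.1Q tag: a 16-bit tag protocol identifier of 0x8100 followed by 3 priority bits, a drop eligible indicator (DEI) bit, and the 12-bit VLAN ID. The values used are illustrative only.

import struct

def build_8021q_tag(vlan_id, priority=0, dei=0):
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | (dei << 12) | vlan_id   # Tag Control Information
    return struct.pack("!HH", 0x8100, tci)           # TPID followed by TCI

tag = build_8021q_tag(vlan_id=100)
print(tag.hex())   # prints 81000064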

A VLAN is usually associated with an IP subnet, so all the devices in that IP subnet belong to the same VLAN. To configure a VLAN, you must first create a VLAN and then bind the interface and IP address. VLANs must be routed, and there are various methods for assigning VLAN membership to switch ports. Switch ports can be assigned membership to a particular VLAN on a port-by-port basis manually. Switch ports can also be dynamically configured from a VLAN membership policy server that tracks MAC addresses and their associated VLANs, or they can be classified based on their IP address if the packets are untagged or priority tagged.

Broadcasts, by their very nature, are processed and received by each member of the broadcast domain. VLANs can improve network performance by segmenting the network into groups that share broadcast traffic. For example, each floor of a building might have its own subnet. It might make sense to create a VLAN for that subnet to control broadcasts to other floors of the building, thus reducing the need to send broadcasts to unnecessary destinations (in this case, another floor of the building). The general rule for VLANs is to keep the resources that are needed for the VLAN and that are consumed by members of the VLAN on that same VLAN. Latency issues will occur whenever a packet must cross a VLAN, as it must be routed. This situation should be avoided if possible.

A port that carries traffic for a single VLAN is called an access link. When a device connects using an access link, it is unaware of any VLAN membership and behaves as if it were part of an ordinary broadcast domain. All VLAN information is removed by switches from the frame before it gets to the device connected to the access link. No communication or interaction can occur between the access link devices and the devices outside of their designated VLAN. This communication is only made possible when the packet is routed through a router.

A trunk link, also known just as a trunk, is a port that transports packets for any VLAN. These trunk ports are usually found in connections between switches and require the ability to carry packets from all available VLANs because those VLANs span multiple switches. Trunk ports are typically VLAN 0 or VLAN 1, but there is nothing magical about those numbers. It is up to the manufacturer to determine which ID is designated as the trunk port. Specifications are spelled out in the 802.1Q protocol. Like any other blueprint, some manufacturers will make their own interpretation of how trunk ports should be implemented.

For cloud VLANs, it is important to understand another type of VLAN known as a private VLAN or PVLAN. PVLANs contain switch ports that cannot communicate with each other but can access another network. PVLANs restrict traffic through the use of private ports so that they communicate only with a specific uplink trunk port. A good example of the utilization of a PVLAN is in a hotel setting. Each room of the hotel has a port that can access the Internet, but it is not advantageous for the rooms to communicate with each other.

Virtual Extensible LAN

The virtual extensible LAN (VXLAN) is a network overlay protocol used to subdivide a network into many smaller segments. VXLAN tunnels or encapsulates frames across UDP port 4789. The data addressed to another member of a VXLAN is placed inside the UDP packet and routed to its destination, at which point it is de-encapsulated so that the receiving port, known as a VXLAN tunnel endpoint (VTEP), can receive it as if it were sent over a local network. The primary vendors behind VXLAN are Cisco, VMware, Brocade, Citrix, Red Hat, Broadcom, and Arista. VXLAN has a moderate overhead of 50 bytes.

VXLAN is primarily used in cloud environments to segment different tenants. VXLANs were created because VLAN uses a 12-bit VLAN ID that can have a maximum of 4096 network IDs assigned at one time. This is not enough addresses for many large cloud environments. VXLAN uses a 24-bit segment ID that allows for 16 million segments, which provides enough segments for many more customers. Similar to VLANs, switch ports can be members of a VXLAN. VXLAN members can be either virtual or physical switch ports.
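
The difference in scale between the two identifiers is easy to verify, as the following short Python calculation shows.

vlan_ids = 2 ** 12      # 12-bit VLAN ID field
vxlan_ids = 2 ** 24     # 24-bit VXLAN network identifier
print(f"VLAN IDs:  {vlan_ids:,}")    # 4,096
print(f"VXLAN IDs: {vxlan_ids:,}")   # 16,777,216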

Network Virtualization Using Generic Routing Encapsulation

Network Virtualization Using Generic Routing Encapsulation (NVGRE) is another network overlay protocol used to subdivide a network into many smaller segments. As the name implies, NVGRE uses the GRE protocol for tunneling. The primary vendors behind the NVGRE protocol are Microsoft for use with their Hyper-V hypervisors on Azure, Huawei, and HP. NVGRE has a moderate overhead of 42 bytes.

Like VXLAN, NVGRE is also used to segment a much larger network set than the traditional VLAN. Also, just like VXLAN, NVGRE uses a 24-bit Virtual Subnet Identifier (VSID) that uniquely identifies the network segment. This allows for 16 million segments. However, unlike VXLAN, NVGRE headers include an extra 8-bit field called the flow ID, which devices can use to better prioritize traffic. Also, NVGRE frame sizes are larger than VXLAN and STT, so intermediary nodes will need to support this larger 1546-byte frame.

Stateless Transport Tunneling

Stateless Transport Tunneling (STT) is another network overlay protocol used to subdivide a network into many smaller segments. STT can be implemented entirely in software so that it can operate on top of existing network technologies. STT runs over TCP/IP and supports the largest number of networks with its 64-bit network ID. This allows for 18 quintillion networks. This larger ID also increases its overhead. It has the most overhead of each of the network overlay protocols, at 76 bytes. However, STT does support offloading of processing to the NIC to reduce its burden on the systems encapsulating and de-encapsulating its content.

The primary vendors behind STT are VMware and Broadcom. VXLAN, NVGRE, and STT are compared in Table 4-2.

Protocol   Encapsulation     Network Identifier                   Overhead   Primary Vendors
VXLAN      UDP (port 4789)   24-bit segment ID (16 million)       50 bytes   Cisco, VMware, Brocade, Citrix, Red Hat, Broadcom, Arista
NVGRE      GRE               24-bit VSID (16 million)             42 bytes   Microsoft, Huawei, HP
STT        TCP               64-bit network ID (18 quintillion)   76 bytes   VMware, Broadcom

Table 4-2  Network Overlay Protocols Compared

Generic Network Virtualization Encapsulation

Generic Network Virtualization Encapsulation (GENEVE) is the answer to VXLAN, NVGRE, and STT. Each of these protocols has different vendors behind them, but we live in a world that requires increasing interconnectivity and interoperability. GENEVE was created as a multivendor effort to allow for this interoperability. GENEVE is an overlay protocol, but it only specifies the data format. It does not specify a control format like VXLAN, NVGRE, and STT. The goal of the protocol was to allow it to evolve over time and offer maximum flexibility. It operates over TCP/IP. To support each of the aforementioned protocols, GENEVE includes a 64-bit metadata field for network virtualization. This increases the overhead much like STT. GENEVE utilizes Type-Length-Value (TLV) encoding for the metadata so that hardware can skip over parts it does not support without resulting in an error. GENEVE has the support of many vendors and software systems.

Network Ports and Protocols

Now that you understand how to select the physical network configuration and segment and route network traffic, you need to learn about the different ports and protocols that are used in cloud computing. A network port is an application-specific endpoint to a logical connection. It is how a client program finds a specific service on a device. A network protocol, on the other hand, is an understood set of rules agreed upon by two or more parties that determine how network devices exchange information over a network. In this section, we discuss the different protocols used to securely connect a network to the Internet so that it can communicate with the cloud environment.

Hypertext Transfer Protocol and Hypertext Transfer Protocol Secure

Hypertext Transfer Protocol (HTTP) is an application protocol built on TCP to distribute Hypertext Markup Language (HTML) files, text, images, sound, videos, multimedia, and other types of information over the Internet. HTTP typically allows for communication between a web client or web browser and a web server hosting a website. HTTP defines how messages between a web browser and a web server are formatted and transmitted and which actions the web server and browser should take when issued specific commands. HTTP uses port 80 to communicate by default.

Hypertext Transfer Protocol Secure (HTTPS) is an extension of HTTP that provides secure communication over the Internet. HTTPS is not a separate protocol from HTTP; it layers the security capabilities of Secure Sockets Layer (SSL) or Transport Layer Security (TLS) on top of HTTP to provide security to standard HTTP, since HTTP communicates in plaintext. HTTPS uses port 443 by default.

When a web client first accesses a website using HTTPS, the server sends a certificate with its embedded public key to the web client. The client then verifies that the certificate chains up to a certificate authority in its trusted root store during the standard authentication process, confirming the certificate was signed by a trusted certificate authority. The client generates a session key (sometimes called a symmetric key) and encrypts the session key with the server’s public key. The server has the private key, which is the other half of the public-private key pair, and can decrypt the session key, which allows for an efficient and confidential exchange of the session key. No entity other than the server has access to the private key.

Once both the web client and the web server know the session key, the SSL/TLS handshake is complete and the session is encrypted. As part of the protocol, either the client or the server can ask that the key be “rolled” at any time. Rolling the key is merely asking the browser to generate a new 40-, 128-, or 256-bit key or above, forcing a would-be attacker to shoot at a moving target.
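
The following Python sketch makes an HTTPS request using the standard library; certificate validation against the system’s trusted root store happens automatically as part of the TLS handshake described above. The URL is a placeholder.

import urllib.request

with urllib.request.urlopen("https://example.com/", timeout=10) as response:
    print(response.status)               # 200 indicates success
    print(response.getheader("Server"))  # server-reported software, if present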

EXAM TIP   Some organizations may use a proxy for connecting to the Internet. Proxy automatic configuration (PAC) is a system that automatically configures devices to use a proxy server if one is required for a web connection. PAC is also known as proxy auto-config.

File Transfer Protocol and FTP over SSL

Unlike HTTP, which is used to view web pages over the Internet, the File Transfer Protocol (FTP) is used to download and transfer files over the Internet. FTP is a standard network protocol that allows for access to and transfer of files over the Internet using either FTP client software or a command-line interface. An organization hosts files on an FTP server so that people from outside the organization can download those files to their local computers. Figure 4-12 shows an example of a graphical-based FTP client.

Figure 4-12  Screenshot of a graphical-based FTP client

FTP is built on a client-server architecture and provides a data connection between the FTP client and the FTP server. The FTP server is the computer that stores the files and authenticates the FTP client. The FTP server listens on the network for incoming FTP connection requests from FTP clients. The clients, on the other hand, use either the command-line interface or FTP client software to connect to the FTP server.

After the FTP server has authenticated the client, the client can download files, rename files, upload files, and delete files on the FTP server based on the client’s permissions. The FTP client software has an interface that allows you to explore the directory of the FTP server, just like you would use Windows Explorer to explore the content of your local hard drive on a Microsoft Windows–based computer.

Similar to how HTTPS is an extension of HTTP, FTPS is an extension of FTP that allows clients to request that their FTP session be encrypted. FTPS allows for the encrypted and secure transfer of files over FTP using SSL or TLS. There are two different methods for securing client access to the FTP server: implicit and explicit. The implicit mode gives an FTPS-aware client the ability to require a secure connection with an FTPS-aware server without affecting the FTP functionality of non-FTPS-aware clients. With explicit mode, a client must explicitly request a secure connection from the FTPS server. The security and encryption method must then be agreed upon between the FTPS server and the FTPS client. If the client does not request a secure connection, the FTPS server can either allow or refuse the client’s connection to the FTPS server.
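
The following Python sketch shows an explicit FTPS session using the standard library’s ftplib module: the client connects, negotiates TLS for the control channel, and then requests protection for the data channel as well. The host and credentials are placeholders.

from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")
ftps.login("username", "password")   # control channel is upgraded to TLS
ftps.prot_p()                        # also protect the data channel
print(ftps.nlst())                   # list files in the current directory
ftps.quit()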

Secure Shell File Transfer Protocol

Secure Shell File Transfer Protocol (SFTP) is a network protocol designed to provide secure access to files, file transfers, file editing, and file management over the Internet using a Secure Shell (SSH) session. Unlike FTP, SFTP encrypts both the data and the FTP commands, preventing the information from being transmitted in cleartext over the Internet. SFTP differs from FTPS in that SFTP uses SSH to secure the file transfer and FTPS uses SSL or TLS to secure the file transfer.

SFTP clients are functionally similar to FTP clients, except SFTP clients use SSH to access and transfer files over the Internet. An organization cannot use standard FTP client software to access an SFTP server, nor can it use SFTP client software to access FTP servers.

There are a few things to consider when deciding which method should be used to secure FTP servers. SFTP is generally more secure and superior to FTPS. Suppose the organization is going to connect to a Linux or Unix FTP server. In that case, SFTP is the better choice because it is supported by default on these operating systems. If one of the requirements for the FTP server is that it needs to be accessible from personal devices, such as tablets and smartphones, then FTPS would be the better option, since most of these devices natively support FTPS but may not support SFTP.
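
As a point of comparison, the following sketch performs an SFTP download. It assumes the third-party paramiko library, since the Python standard library does not include an SFTP client; the host, credentials, and file paths are placeholders.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only
client.connect("sftp.example.com", username="username", password="password")

sftp = client.open_sftp()
sftp.get("/remote/report.csv", "report.csv")   # download over the SSH session
sftp.close()
client.close()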

EXAM TIP   It is important to understand that FTPS and SFTP are not the same. FTPS uses SSL or TLS and certificates to secure FTP communication, and SFTP uses SSH keys to secure FTP communication.

Domain Name System

The Domain Name System (DNS) distributes the responsibility for both the assignment of domain names and the mapping of those names to IP addresses to the authoritative name servers within each domain. An authoritative name server is responsible for maintaining its specific domain name. It can also be authoritative for subdomains of that primary domain. For example, if you want to go to a particular web page like https://www.cwe.com, all you do is type the web page’s name into your browser, and it displays the web page. For your web browser to show that web page by name, it needs to locate it by IP address. This is where DNS comes into play.

DNS translates Internet domain or hostnames into IP addresses. DNS would automatically convert the name https://www.cwe.com into an IP address for the web server hosting that web page. To store the full name and address information for all the public hosts on the Internet, DNS uses a distributed hierarchical database. DNS databases reside in a hierarchy of database servers where no one DNS server contains all the information. Figure 4-13 shows an example of how a client performs a DNS search.

Figure 4-13  The steps in a DNS search
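
The following Python sketch performs the same kind of name-to-address translation by asking the operating system’s resolver for the records associated with a hostname. The hostname is a placeholder.

import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 443):
    print(family.name, sockaddr[0])   # address family and the resolved IP address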

DNS consists of a tree of domain names. Each branch of the tree has a domain name and contains resource records for that domain. Resource records describe specific information about a particular object. The DNS zone at the top of the tree is called the root zone. Each zone under the root zone has a unique domain name or multiple domain names. The owner of that domain name is considered authoritative for that DNS zone. Figure 4-14 shows the DNS hierarchy and how a URL is resolved to an IP address.

Figure 4-14  Example of a DNS hierarchy

DNS servers manage DNS zones. Servers store records for resources within one or more domains that they are configured and authorized to manage. A host record, or “A” record, is used to store information on a domain or subdomain along with its IP address. A canonical name (CNAME) record is an alias for a host record. For example, a CNAME record testing for the comptia.org domain pointing to www would allow for users to enter the URL www.comptia.org or testing.comptia.org to go to the same site. A mail exchanger (MX) record stores information on the mail server for the domain, if one exists.

DNS is one of the protocols the Internet and cloud services are based on because, without it, users cannot resolve the names for the services they wish to reach. Because of this importance, DNS has also been the target for attacks to disrupt its availability. In addition, DNS has privacy implications, since the resolving servers can view the queries made to it and where those queries came from. Several protocols have been developed to help mitigate these concerns. These include DNS Security (DNSSEC), DNS over HTTPS (DoH), and DNS over TLS (DoT).

DNS Security

DNSSEC was developed to address data integrity issues with DNS. Namely, when a client issues a request for DNS resolution, it has no way of validating that the response it receives is correct. Attackers can spoof the DNS server’s IP address and reply with incorrect IP addresses for common URLs, sending users to websites that may look and feel like the real websites but are designed to steal their information. For example, a DNS server may provide the wrong IP address for ebay.com, which sends the user to an attacker’s page, where attackers harvest user credentials or credit card information and then pass them back to the real ebay.com.

DNSSEC signs DNS records with digital signatures so that user systems can independently validate the authenticity of the data they receive. Each DNS zone has a public and private key. The private key is used to sign the DNS data in the zone so that users can use the public key to verify its authenticity. The keys used by DNSSEC can be trusted because they are signed with the parent zone’s private key, so machines validate the private keys used to sign the DNS data by validating the chain all the way up to the root-level domain, which each of the systems trusts implicitly.

DNS over HTTPS and DNS over TLS

DoH and DoT were developed to address the data confidentiality and privacy issues with DNS. Just like HTTPS encapsulates HTTP in SSL, DoH encapsulates DNS requests in SSL. Some significant vulnerabilities have been discovered in SSL, so it has been superseded by TLS. DoT operates with the same goal as DoH. It takes DNS and encapsulates it in TLS. Both of these protocols encrypt the traffic so that other parties cannot view it as it traverses the Internet. DoH transmits traffic over port 443, while DoT uses port 853.

Dynamic Host Configuration Protocol

DHCP is a network protocol that allows a server to automatically assign IP addresses from a predefined range of numbers, called a scope, to computers on a network. DHCP is responsible for assigning IP addresses to computers, and DNS is responsible for resolving those IP addresses to names. A DHCP server can register and update resource records on a DNS server on behalf of a DHCP client. A DHCP server is used any time an organization does not wish to use static IP addresses (IP addresses that are manually assigned).

DHCP servers maintain a database of available IP addresses and configuration options. The DHCP server leases an IP address to a client based on the network to which that client is connected. The DHCP client is then responsible for renewing its lease or IP addresses before the lease expires. DHCP supports both IPv4 and IPv6. It can also create a static IP address mapping by creating a reservation that assigns a particular IP address to a computer based on that computer’s media access control (MAC) address.

If an organization’s network has only one IP subnet, clients can communicate directly with the DHCP server. If the network has multiple subnets, the company can still use a DHCP server to allocate IP addresses to the network clients. To allow a DHCP client on a subnet that is not directly connected to the DHCP server to communicate with the DHCP server, the organization can configure a DHCP relay agent in the DHCP client’s subnet. A DHCP relay agent is an agent that relays DHCP communication between DHCP clients and DHCP servers on different IP subnets. DNS and DHCP work together to help clients on an organization’s network communicate as efficiently as possible and allow the clients to discover and share resources located on the network.

IP Address Management

IP Address Management (IPAM) is a centralized way to manage DHCP and DNS information for an enterprise. An enterprise may have DHCP systems in various locations and clouds, each of which operates independently, and this can be difficult to manage. IPAM includes tools to discover DHCP and DNS servers and then centrally manage each of the scopes in use and the IP addresses assigned to systems. IPAM stores the information it collects in a database, which makes it easier to administer the network, identify redundancies, and monitor which systems have which IP addresses across the enterprise.
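As a rough, hypothetical illustration of the idea, the sketch below consolidates lease data collected from several DHCP servers into a single IPAM-style inventory and flags overlapping assignments; the server names and addresses are made up:

# Hypothetical sketch: consolidate lease data gathered from several DHCP
# servers (on-premises and cloud) into one searchable IPAM-style inventory.
from collections import defaultdict

# Example data as it might be exported from each DHCP server.
dhcp_servers = {
    "hq-dhcp01":    {"10.1.0.5": "laptop-042", "10.1.0.9": "printer-lobby"},
    "cloud-dhcp01": {"10.8.2.14": "web-vm-3", "10.8.2.15": "web-vm-4"},
}

inventory = defaultdict(list)   # IP address -> [(server, hostname), ...]
for server, leases in dhcp_servers.items():
    for ip, hostname in leases.items():
        inventory[ip].append((server, hostname))

# Multiple entries for the same IP would indicate overlapping scopes.
for ip, owners in sorted(inventory.items()):
    flag = "  <-- conflict" if len(owners) > 1 else ""
    print(ip, owners, flag)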

Simple Mail Transfer Protocol

Documents and videos are not the only pieces of information you might want to share over the Internet. While HTTP and FTP allow you to transfer files, videos, and pictures, the Simple Mail Transfer Protocol (SMTP) is the protocol that enables you to send e-mail over the Internet. SMTP uses port 25 and provides a standard set of commands and response codes that simplify the delivery of e-mail messages between e-mail servers. Almost all e-mail servers that send e-mail over the Internet use SMTP to transfer messages from one server to another. After the e-mail server has received a message, the user can view it using an e-mail client, such as Microsoft Outlook. The e-mail client also uses SMTP to send messages from the client to the e-mail server.
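For example, a minimal Python sketch using the standard library’s smtplib submits a message to an SMTP server over port 25. The server name and addresses here are placeholders, and many providers instead require port 587 with STARTTLS and authentication:

# Sketch: submit an e-mail to an SMTP server on port 25 using Python's
# standard library. mail.example.com and the addresses are placeholders;
# many real servers require port 587 with STARTTLS and authentication.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Test message"
msg.set_content("Sent with SMTP.")

with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)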

Network Time Protocol

Have you ever tried to schedule a meeting with someone who, unbeknownst to you, was in another time zone? You schedule the appointment for 2:30, and then they e-mail you shortly after 1:30, wondering why you aren’t on the call. Similarly, computers need accurate, synchronized time to communicate reliably.

Network protocols rely on accurate time to protect against replay attacks, in which captured authentication data is replayed back to the server. This is why Microsoft’s Active Directory will reject connections if a machine’s system clock is more than five minutes off from the domain controller’s. Similarly, TLS and DNSSEC each rely on accurate time to operate.

Network Time Protocol (NTP) is a protocol operating on UDP port 123 that is used to dynamically set the system clock on a machine by querying an NTP server. In a domain, computers are typically configured to synchronize their system clocks with authoritative time servers to ensure that they are all correct. Similarly, network devices such as routers and firewalls often use NTP to keep their time accurate. Windows machines use the Windows Time service (W32Time), and some Linux distributions use chrony to set the system time from an NTP server.

Public NTP servers can be queried by anyone, and an organization can also set up its own internal private NTP server. Some companies implement a private NTP server that synchronizes with a public NTP server so that each individual NTP request on the internal network does not need to go out to the Internet. Public NTP servers are usually organized into pools, where any one of a group of servers responds to a given request. Some of these pools have servers in various geographic locations to reduce query latency.
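The following Python sketch, which uses only the standard library and queries the public pool.ntp.org servers, sends a minimal SNTP request over UDP port 123 and converts the server’s transmit timestamp to Unix time:

# Sketch: minimal SNTP query over UDP port 123. Sends a 48-byte request
# (LI=0, VN=3, Mode=3 client) and reads the server's transmit timestamp.
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 (NTP epoch) and 1970 (Unix epoch)

packet = b"\x1b" + 47 * b"\0"  # first byte 0x1b = version 3, client mode
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(packet, (NTP_SERVER, 123))
    data, _ = sock.recvfrom(48)

# The transmit timestamp's seconds field is the 11th 32-bit word (index 10).
transmit_seconds = struct.unpack("!12I", data)[10]
unix_time = transmit_seconds - NTP_EPOCH_OFFSET
print("Server time (UTC):", time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(unix_time)))

A production client such as W32Time or chrony does more than this: it measures round-trip delay, polls repeatedly, and slews the clock gradually rather than setting it outright.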

Despite NTP’s usefulness, it is vulnerable to man-in-the-middle (MITM) attacks and has been used in distributed denial of service (DDoS) reflection attacks. In a MITM attack, an attacker intercepts traffic intended for the NTP server and replies with false time information. In a DDoS reflection attack, the attacker submits NTP queries to servers from a spoofed IP address so that the NTP servers flood the victim’s real IP address with NTP responses. The Network Time Security (NTS) protocol was developed to address some of these security concerns.

Network Time Security

NTS expands on NTP to include a key exchange that properly authenticates the client and server. This helps defend against MITM attacks because the client can validate that it is talking to the correct NTP server. NTS uses the TLS protocol for the secure key exchange, and there is an optional client authentication component as well.

Well-Known Ports

Ports are used in a TCP or UDP network to specify the endpoint of a logical connection and to identify which application on a server a client is accessing over the network. Port binding determines where and how a message is transmitted, and link aggregation can be implemented to combine multiple network connections to increase throughput. The Internet Assigned Numbers Authority (IANA) assigns the well-known ports, which range from 0 to 1023, and is responsible for maintaining the official assignments of port numbers for specific purposes. You do not need to know all of the well-known ports for the CompTIA Cloud+ exam, so we will focus only on the ports that are relevant to the exam. Table 4-3 specifies server processes and their communication ports, and a short example of port binding appears after the table.

Images

Table 4-3  Well-Known Server Processes and Communication Ports

Images

EXAM TIP   Make sure you know the ports listed in Table 4-3 and which service uses which port.
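To make port binding concrete, the short Python sketch below binds a listening TCP socket to a specific port so that clients know exactly where to reach the service. Port 8080 is used here because binding to a well-known port below 1024 typically requires administrative privileges:

# Sketch: bind a TCP socket to a specific port so clients know where to
# reach the service. Well-known ports (0-1023) usually require elevated
# privileges, so this example listens on port 8080 instead.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 8080))   # the port is the endpoint clients connect to
    server.listen()
    print("Listening on TCP port 8080...")
    conn, addr = server.accept()     # blocks until a client connects
    with conn:
        conn.sendall(b"hello from port 8080\n")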

Chapter Review

A network’s physical topology is a key factor in its overall performance. This chapter explained the various physical topologies and when to use each of them. It also discussed how traffic is routed across the network, which is key to understanding how to implement cloud computing. Since most cloud resources are accessed over an Internet connection, it is crucial to know how to properly configure a network and how traffic is routed across it.

There are various ways to reduce network latency and improve network response time and performance, including caching, compression, load balancing, and maintaining the physical hardware. These issues are critical for ensuring that an organization meets the terms of its SLA.

Questions

The following questions will help you gauge your understanding of the material in this chapter. Read all the answers carefully because there might be more than one correct answer. Choose the best response(s) for each question.

1.   Which network type is not accessible from outside the organization by default?

A.   Internet

B.   Extranet

C.   Intranet

D.   LAN

2.   Which of the following statements describes the difference between an extranet and an intranet network configuration?

A.   An intranet does not require a firewall.

B.   An extranet requires less administration than an intranet.

C.   An intranet is owned and operated by a single organization.

D.   An extranet allows controlled access from outside the organization.

3.   Which of the following is a network of multiple networks relying on network devices and common protocols to transfer data from one destination to another until it reaches its final destination and is accessible from anywhere?

A.   Intranet

B.   Extranet

C.   Internet

D.   LAN

4.   Which of the following terms defines the amount of data that can be sent across a network at a given time?

A.   Network latency

B.   Bandwidth

C.   Compression

D.   Network load balancing

5.   Which of the following causes network performance to deteriorate and delays network response time?

A.   Network latency

B.   Caching

C.   Network bandwidth

D.   High CPU and memory usage

6.   After taking a new job at the state university, you are asked to recommend a network topology that best fits the large college campus. The network needs to span the entire campus. Which network topology would you recommend?

A.   LAN

B.   WAN

C.   MAN

D.   SAN

7.   You administer a website that receives thousands of hits per second. You notice the web server hosting the website is operating at close to capacity. What solution would you recommend to improve the performance of the website?

A.   Caching

B.   Network load balancing

C.   Compression

D.   Network bandwidth

8.   Which process allows a router to modify packets so that multiple devices can share a single public IP address?

A.   NAT

B.   DNS

C.   VLAN

D.   Subnetting

9.   Which of the following IP addresses is in a private IP range?

A.   12.152.36.9

B.   10.10.10.10

C.   72.64.53.89

D.   173.194.96.3

10.   Which of the following technologies allows you to logically segment a LAN into different broadcast domains?

A.   MAN

B.   WAN

C.   VLAN

D.   SAN

11.   Which of the following protocols and ports is used to secure communication over the Internet?

A.   HTTP over port 80

B.   SMTP over port 25

C.   FTP over port 21

D.   HTTPS over port 443

12.   SFTP uses _________ to secure FTP communication.

A.   Certificates

B.   FTPS

C.   SSH

D.   SMTP

13.   In a network environment _______ is responsible for assigning IP addresses to computers and _______ is responsible for resolving those IP addresses to names.

A.   DNS, DHCP

B.   DHCP, DNS

C.   HTTP, DNS

D.   DHCP, SMTP

14.   Which of these ports is the well-known port for the Telnet service?

A.   25

B.   22

C.   23

D.   443

15.   Which protocol is responsible for transferring e-mail messages from one mail server to another over the Internet?

A.   DNS

B.   HTTPS

C.   FTP

D.   SMTP

Answers

1.   C. An intranet is a private network that is configured and controlled by a single organization and is only accessible by users that are internal to that organization.

2.   D. An extranet is an extension of an intranet, with the primary difference being that an extranet allows controlled access from outside the organization.

3.   C. The Internet is not controlled by a single entity and serves billions of users around the world.

4.   B. Bandwidth is the amount of data that can traverse a network interface over a specific amount of time.

5.   A. Network latency is a time delay that is encountered while data is being sent from one point to another on the network and affects network bandwidth and performance.

6.   C. A metropolitan area network (MAN) can connect multiple LANs and is used to build networks with high data connection speeds for cities or college campuses.

7.   B. Network load balancing is used to increase performance and provide redundancy for websites and applications.

8.   A. NAT allows your router to change your private IP address into a public IP address so that you can access resources that are external to your organization; then the router tracks those IP address changes.

9.   B. 10.0.0.0 to 10.255.255.255 is a private class A address range.

10.   C. A VLAN allows you to configure separate broadcast domains even if the devices are plugged into the same physical switch.

11.   D. HTTPS is an extension of HTTP that provides secure communication over the Internet and uses port 443 by default.

12.   C. SFTP uses SSH to secure FTP communication.

13.   B. DHCP is responsible for assigning IP addresses to computers, and DNS is responsible for resolving those IP addresses to names.

14.   C. Telnet uses port 23 by default for its communication.

15.   D. SMTP is used to transfer e-mail messages from one e-mail server to another over the Internet.
