Data Centers or Computer Rooms

Design considerations for the data centers or computer rooms and their contents include the following:

  • Physical security of the facility

  • Fire protection

  • Adequate ventilation to dissipate the heat generated by the equipment housed there

  • Power protection

  • Disaster planning and recovery in case an unforeseen catastrophe affects these critical locations

All these considerations facilitate the proper functioning and protection of what makes networks so indispensable to begin with: application servers and storage devices that accumulate more and more of the SMB's data. Data centers might house some network gear, like routers and switches, but primarily they house servers, extended storage, and backup equipment.

Network Servers

Early vintage network operating systems (NOSes)—and the servers on which they were installed—supported basic file sharing and print services across the network. As NOSes have continued to evolve, they have incorporated more and more network services functions in addition to providing the required robust platforms for complex vertical applications. The network services include DNS, Dynamic Host Configuration Protocol (DHCP), remote access, routing, security (such as Remote Authentication Dial-In User Service [RADIUS] and Terminal Access Controller Access Control System Plus [TACACS+]), and more. Numerous application and database servers now populate data centers. They use a range of hardware platforms and operating systems (such as Windows, NetWare, and Solaris) to support critical business functions.

However, setting aside the applications for a moment, from the network infrastructure perspective, design considerations relating to servers boil down to the following:

  • Will the servers be single-homed or multihomed? That is, how many NICs will they have?

  • What will be the connection type and speed between the servers and the switches? Will the servers connect to a switch at a predetermined speed? That is, will the connection parameters be hardcoded on both the server and the switch, or will they be negotiable? Is the number of servers at the site sufficient that a policy regarding server-to-switch connectivity might be necessary?

  • What VLAN or multiple VLANs will the servers belong to?

  • Will any of the server connections be a trunk?

  • What are the server availability requirements?

Server Multihoming

Multihoming applies if you need to connect the server to multiple VLANs. Consider a scenario with three VLANs that share no other resources except a single server. There would not be a need for routing between the VLANs if the server were equipped with three NICs, each connecting to one of the VLANs.
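
As a minimal sketch of the switch side of that scenario in Cisco IOS (the interface numbers and VLAN IDs are hypothetical), each server NIC plugs into an access port in a different VLAN:

  ! One access port per server NIC, each in a different VLAN
  interface FastEthernet0/1
   description Server NIC 1
   switchport mode access
   switchport access vlan 10
  interface FastEthernet0/2
   description Server NIC 2
   switchport mode access
   switchport access vlan 20
  interface FastEthernet0/3
   description Server NIC 3
   switchport mode access
   switchport access vlan 30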

Server Connection Speed and Parameters

There are no absolute standards for whether to allow parameter negotiation between a server and a switch. With the spectrum of Ethernet connections now ranging from 10 Mbps to 10 Gbps, and with 100/1000 Mbps in the mainstream, it is advisable for servers to use the highest available speed. Experience dictates that at higher connection speeds (1000 Mbps or more), it is preferable to hardcode the parameters at both ends of the connection rather than allow them to be autonegotiated.
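
For example, the switch side of a hardcoded Gigabit Ethernet server connection might look like the following IOS sketch (the port number is hypothetical, some platforms restrict which values can be forced on a given port type, and the server NIC driver must be set to the same values):

  interface GigabitEthernet0/1
   description Server link - speed and duplex hardcoded at both ends
   speed 1000
   duplex full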

Server VLAN Membership

The purpose of the server determines which VLAN or VLANs it belongs to. If a server connects only to a single VLAN but needs to be accessed from other VLANs as well, remember that routing between the VLANs is required and that server traffic will cross multiple VLANs. In this situation, it might make better sense to use a single VLAN.
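
If the server does remain in a single VLAN, a multilayer switch can provide the required routing through switched virtual interfaces (SVIs), as in this minimal IOS sketch (the VLAN IDs and addresses are hypothetical):

  ip routing
  interface Vlan10
   description Gateway for the server VLAN
   ip address 10.1.10.1 255.255.255.0
  interface Vlan20
   description Gateway for a client VLAN
   ip address 10.1.20.1 255.255.255.0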

Server Trunk Port Usage

Connecting servers to VLAN trunk ports is possible (as mentioned earlier in this chapter), but the server OS in combination with the NIC must support trunking.
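
On the switch side, a trunk port toward such a server might be configured as follows (the port and VLAN numbers are hypothetical; the encapsulation command applies only to platforms that also support ISL):

  interface GigabitEthernet0/2
   description 802.1Q trunk to server
   switchport trunk encapsulation dot1q
   switchport mode trunk
   switchport trunk allowed vlan 10,20,30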

Server Availability Requirements

Network services servers like DNS or DHCP have become an integral component of effective network infrastructure and administration. These servers typically must be even more available than the specific application servers. Although network services functions can reside on the same server as the business applications (and they often do), consider a more robust design in which they are separate. Bringing down an application server for maintenance, an upgrade, or any other reason should not shut down key network services functions. In multilayer topologies, local servers (network services or applications) are typically plugged into the access layer switches. They tend to have lower availability requirements than enterprise servers, which are typically connected at the distribution or core layers. You should also consider having redundant hardware for enterprise servers.

Network Storage

A staggering evolution has taken place in mass storage technologies since the early 1990s. Storage capacities have increased dramatically while physical size and power requirements have gone down. The capacities of storage solutions are now measured in terabytes (TB) as opposed to megabytes (MB), an increase of six orders of magnitude, with gigabyte (GB) capacities sandwiched in between.

Two approaches emerged to address the explosive demands for mass storage in network environments: storage-area networks (SANs) and network-attached storage (NAS). In addition to these two storage technologies, SMBs should not entirely overlook the more traditional direct-attached storage (DAS) approach, in which large-capacity disk drives are installed directly in the NOS servers. Those drives can be in the hundreds of gigabytes, which might be sufficient to meet the needs of enterprises with less demanding storage requirements.

SAN and NAS solutions should be viewed as distinct but complementary tools in the arsenal of a designer confronted with developing a storage solution for an SMB as part of the overall network design. Nothing prevents SAN and NAS from harmonious coexistence on the same internetwork.

Storage-Area Networks

A storage-area network (SAN) can be defined as a dedicated network whose primary purpose is to transfer data between computer systems and/or storage elements. The generic components of a SAN include a communications infrastructure (the network part), a management layer (configuration and control software), storage elements (disks, backup tapes), and computer systems (servers). The storage elements and the servers are interconnected via the network component, or the communications infrastructure, which most frequently, but not always, is implemented with Fibre Channel switches. The products in the Cisco MDS 9000 series family of Multilayer Directors and Fabric Switches are aimed at supporting SAN technology. SAN solutions are available from multiple vendors, including IBM, EMC Corporation, Hewlett-Packard (HP), and Dell.
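
As a rough illustration of the management layer, the following zoning sketch in the style of the MDS 9000 CLI permits a single server host bus adapter (HBA) to reach a single storage port (the zone and zone set names, VSAN number, and world wide names are all hypothetical):

  zone name server1-to-array1 vsan 10
   member pwwn 21:00:00:e0:8b:01:02:03
   member pwwn 50:06:01:60:10:20:30:40
  zoneset name production vsan 10
   member server1-to-array1
  zoneset activate name production vsan 10

Zoning of this kind is the SAN counterpart of VLAN segmentation: only the members of an active zone are permitted to communicate with one another.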

Because they have to do with networks, SANs invite an almost intuitive understanding on the part of network professionals, even if those professionals are not directly involved with storage solutions. Those who have dealt with networks know well that despite the many commonalities, it is hard to find two networks that are the same; similarly, SANs can differ drastically from one another while still fulfilling their primary purpose of providing an enterprise with large amounts of storage.

Many of the design considerations for SANs are generically similar to those for LANs, WANs, and the internetworks that combine them. A few of these considerations follow:

  • The geographical layout of the SAN or the topology— SAN topology affects the network technologies that will be used to interconnect the storage elements and servers, especially if long distances are involved. For example, the MDS 9000 series switches support optical interface modules with shortwave or longwave options that can extend Fibre Channel or Gigabit Ethernet up to 500 meters or 10 kilometers, respectively.

  • Data locality— The location of the data on the SAN and the associated access paths between the storage elements and the servers impact performance. Data access patterns between the servers and the storage elements need to be analyzed, and the relevant servers and storage elements should be interconnected with the maximum possible bandwidth between them to ensure adequate performance.

  • The storage capacity of the SAN— Large storage capacity is the reason that SANs are deployed in the first place. The SAN's capacity should reflect the business requirements that were identified through the design process. SAN total storage can be increased through the addition of new storage elements or the upgrading of existing ones to larger capacities.

  • Business continuance policies— Backup and restore procedures, and requirements for overall data availability, can translate into considerations for designing redundancy. The level of disaster tolerance (complete or partial loss of the SAN) can translate into a consideration for designing data replication.

Network-Attached Storage

NAS and SANs share a common purpose: to accommodate the demanding storage requirements of many SMBs and large enterprises. Whereas SAN is its own network, interconnecting servers and storage elements, NAS is a storage system that attaches to a LAN. Figure 3-2 illustrates the difference between a NAS and a SAN in the context of logical network topology. As a mnemonic to help you closely relate SAN and NAS technologies, remember that when you spell SAN backwards, you get NAS.

Figure 3-2. Differences Between a SAN and a NAS in the Context of Logical Network Topology


The heart of a NAS solution is typically a filer, which is a dedicated high-performance file server with its own optimized operating system that controls multiple disk arrays. The filer is equipped with one or more high-speed (typically Gigabit Ethernet) interfaces to the network. The filer services data I/O requests using standard higher-layer protocols like the Network File System (NFS) in UNIX environments or the Common Internet File System (CIFS) in Windows-based environments. NAS appears to be simpler to deploy than SAN because it is effectively a storage product. In contrast, SAN is considered an architecture. Logically, NAS is positioned between the application server(s) and the file system, whereas SAN can be thought of as being positioned between the file system(s) and the underlying physical storage elements.

From the design perspective, another way to think of a NAS is as a high-performance extension of the application server(s) that unburdens them from bandwidth-intensive I/O tasks, allowing them to concentrate on the processing of data. Designers should also consider SAN and NAS for developing effective network backup solutions. SANs support tape backups. A NAS can also serve as a backup for SAN contents, or vice versa. Multiple NAS filers can be used as backups for one another.

It is common for SAN vendors to offer NAS solutions as well. HP, IBM, and Sun Microsystems all offer NAS products. Cisco Catalyst 4000 and 6500 series switches are well suited for NAS interconnection, and they have proven their reliability in performance tests with NAS products from Network Appliance Corporation.

Power Protection

Reliable power is part of any effective network design. The ability to provide uninterruptible or stable power to critical network components affects the level of network availability, which is always a key design consideration. Proceed with the following steps when designing power protection for a network installation:

1. Identify all network components to be protected.

2. Determine each component's power requirements.

3. Determine the aggregate power requirements and the amount of time that the equipment needs to remain operational.

4. Make a decision regarding the type of power protection equipment to be deployed (such as UPSes, generators, line conditioners, or isolation transformers).

5. Look at data sheets from companies that offer power protection products, and identify devices that match the requirements.

6. Consult as necessary with the network vendors on the most appropriate power protection solution for their products to ensure compatibility between the selected power protection equipment and the network gear to be protected.

Uninterruptible Power Supplies

Uninterruptible power supplies (UPSes) are rated in volt-amperes (VA). The VA rating is derived from the following formula:

VA = operating equipment voltage × number of amps drawn by the equipment

The voltage is typically 110 or 220 volts. Vendors normally identify the amp rating on each piece of equipment. Repeat this calculation for each piece of equipment under consideration.

Alternatively, you can get the watts rating from each piece of equipment. If you follow the watts route, remember to convert watts to VA after you come up with the total number of watts for all of the equipment. Dividing the number of watts by the square root of 2 over 2 (approximately 0.7) yields the number of VA. A UPS's VA rating can likewise be converted to watts if the watts figure is not already specified. Simply put, be consistent and work either with watts or with VA.
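
To make the arithmetic concrete, consider a worked example with hypothetical figures. A server that draws 2.5 amps at 110 volts requires 110 × 2.5 = 275 VA. If another device's nameplate instead lists 350 watts, its equivalent rating is 350 ÷ 0.7 = 500 VA. Summing such figures across all the protected equipment yields the minimum VA rating for the UPS; choosing a unit with some headroom above that total allows for growth.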

Next, you need to determine the UPS operational times under various power load levels. The higher the load (that is, the higher the aggregate VA or watts number of the equipment to be protected), the shorter the duration of UPS operation when the device is disconnected from utility power.

Another design consideration for UPS deployment is whether to deploy a single larger unit or multiple smaller ones. Multiple units offer greater mobility and ease of power protection distribution. However, they will likely end up being more expensive than a single higher-capacity unit and, over time, more difficult to manage.

When deploying UPSes, develop a plan for periodic battery testing and replacement. Doing so enables you to avoid the surprise of watching a UPS fail when you are most in need of it.

Other Power Protection Equipment

Generators, line conditioners, and isolation transformers can complement the deployment of UPSes in the following ways:

  • Generators tend to be crucial in health care (hospitals) or utility services environments, where loss of power and the subsequent loss of network connectivity or access to critical data can be life threatening.

  • Line conditioners maintain stable voltage levels during brownouts (voltage dips) and overvoltages (voltage spikes) but do not continue to supply power during an outage. They tend to be less expensive than UPSes of comparable ratings. Because UPSes offer line conditioning as well, you need to make a pragmatic design decision about whether to deploy a line conditioner or a UPS.

  • Isolation transformers might be most applicable in manufacturing and laboratory environments or, in general, in environments in which heavy electrical machinery needs to coexist and operate side by side with more sensitive networking or other SMB-specific equipment. Isolation transformers eliminate ground loops and noise that can impact network equipment operation.
