Chapter 13

The Wider Data Center Neighborhood

TOPICS COVERED IN THIS CHAPTER:

  • The importance of the data center
  • Data center tiers
  • Racks, cabinets, and rows
  • Power and cooling
  • Structured cabling
  • Access control
  • Fire prevention

image This chapter covers the central role and importance of the data center, as well as the high-level principles of data center design. You'll learn about the tiers of the data center, as defined by the Uptime Institute. You'll also learn about the major components of the data center, including raised floors, racks, power systems, cooling systems, and cabling. This chapter presents in some detail the major components of a structured cabling scheme and outlines best practices that will make day-to-day management of the data center as simple as possible, while still allowing the data center to be flexible enough to cater to tomorrow's requirements. Finally, the chapter concludes by listing some good practices for working in a data center.

Data Center Design

The first thing to know about modern data centers is that they are vital to any organization that values its data. Put simply: if you value your data, you need to value your data center!

The data center is more than just the sum of its component parts. It is rightfully a system in and of itself. Gone are the days of a data center being a grubby spare room used to hide away IT equipment. In the modern world, instead of being merely a technology room, the data center is more of a technology engine—an integrated system of computing, storage, network, and power and cooling, with its own levels of performance, resiliency, and availability. As such, the data center requires a proper design, as well as appropriate levels of management and maintenance. Again, if you care about your data, you need to care about your data centers.

In many ways, the data center is the perfect home for computing, network, and storage devices—a habitat built from the ground up to keep them all in perfect working condition. Data centers provide not only physical security for IT equipment, but also cooling, humidification, grounding, and power feeds, all highly optimized for that equipment. As a human being, you won't want to spend too much time in a tier 3 or tier 4 data center. If you do, you'll end up with sore, dry eyes and probably irritated ears. You'll also find the varying temperatures uncomfortable, as some areas of the data center will be too cold for comfort, whereas others will be too warm. Data centers are concerned with keeping IT equipment, not human beings, in working order.

With all of this in mind, let's take a quick look at the tiers of the data center that we commonly refer to when talking about data center uptime and availability.

Data Center Tiers

The most widely accepted definition of data center tiers is the one from the Uptime Institute (http://uptimeinstitute.com). The Uptime Institute is an independent division of the 451 Group, dedicated to data center research, certification, and education. These tiers are also borrowed and referenced by the Telecommunications Industry Association (TIA), making them pretty much the de facto standard.

The reason for classifying data centers into tiers is to standardize the way we describe and compare data center uptime and availability.

image

The Telecommunications Industry Association (TIA) introduced an important standard in 2005 relating to cabling within a data center. This TIA-942 standard has been instrumental in standardizing and raising the level of cabling within the data center. TIA-942 includes an annex that references the Uptime Institute's data center tier model.

At a high level, the Uptime Institute defines four tiers of a data center:

  • Tier 4
  • Tier 3
  • Tier 2
  • Tier 1

As indicated by this list, tier 4 is the highest tier, and tier 1 is the lowest tier. This means that tier 4 data centers have the highest availability, highest resiliency, and highest performance, whereas tier 1 data centers have the lowest.

On one hand, a tier 4 data center will have redundant everything—power, cooling, network connectivity, the whole works. On the other hand, a tier 1 data center could be little more than your garage with a single power circuit, a mobile air-conditioning unit, and a broadband connection.

image

Not all data centers are audited and certified by the Uptime Institute. However, data centers can still be built and maintained to standards that are mapped to formal Uptime Institute tiers. To formally refer to a data center as, for example, tier 3 or tier 4, the data center should be formally certified.

Let's talk about some of the high-level features and requirements of the data center tiers specified by the Uptime Institute.

Tier 1

Tier 1 is the most basic, and lowest availability, data center. It is kind of like your garage at home. There's no requirement for things such as multiple power feeds or an on-site diesel generator. There's also no requirement for multiple diverse network connections. Basically, there is no requirement for any level of component redundancy. You could almost build one yourself at home! As a result, tier 1 data centers are extremely susceptible to unplanned downtime, so you almost certainly don't want to run your business from one.

Tier 2

Tier 2 takes things up a considerable level from tier 1. Tier 2 data centers have redundant on-site generators and uninterruptible power supplies, as well as some component-level redundancy. However, tier 2 data centers still require downtime for certain planned maintenance activities, such as annual power maintenance and testing.

Tier 3

Tier 3 data centers take the foundation of tier 2 and add N+1 redundant power and cooling paths (for example, all equipment must have multiple power supplies and power feeds). These redundant paths need only operate in active/passive mode, although each path must be capable of handling the entire load on its own. This active/passive mode of operation allows you to perform planned maintenance without incurring downtime, but it does not necessarily protect you in the event of an unplanned issue on an active power path. Tier 3 data centers require on-site generators that are capable of powering the entire data center for extended periods of time.

Tier 4

A tier 4 data center is the highest tier currently defined. A tier 4 data center can ride out unplanned outages of essential services such as power, cooling, and networking without incurring downtime. All of these major systems are implemented as multiple, physically isolated, N+1 redundant paths; the power system, for example, might have two independent paths, each of which is N+1 redundant. Tier 4 data centers also require on-site generators that are capable of powering the entire data center for extended periods of time.

If you can afford it, you probably want to run your business out of a tier 4 data center. If you're partnering with a private cloud supplier, you want to ensure their data centers are tier 3 or tier 4 data centers.

Data Center Overview

Let's now take a look at some of the major components of the data center.

Racks

Almost all IT equipment in data centers is housed in standardized racks referred to as 19-inch racks. Figure 13.1 shows the front view of two side-by-side industry-standard 19-inch racks. One rack is empty, and the other rack is partially filled with various computing, storage, and network hardware.

FIGURE 13.1 19-inch racks

image

These 19-inch racks get their name from the fact that, when viewed from either the front or the back, they are about 19 inches wide. This standard width, coupled with standardized depths, allows for IT equipment such as servers, storage arrays, and network switches to come in standard sizes and fit almost all data center racks. This makes installing equipment in the data center so much easier than it was before manufacturers had standardized on the 19-inch rack.

image

According to the EIA-310D standard, 19 inches is the width of the mounting flange on the equipment. The space between the rails is 17.72 inches (450 mm), and center-hole spacing is 465 mm.

When talking about the height of 19-inch racks, we usually talk in rack units (RUs), with a standard RU being 1.75 inches. More often than not, we shorten the term rack unit to simply unit or U. Hence the popular term, and rack size, of 42 U, meaning 42 rack units in height, or 73.5 inches. We also measure the height of servers, switches, and storage arrays in terms of RU, or U. For example, a single 42 U rack can house up to 21 2-U servers stacked directly on top of each other, as shown in Figure 13.2. It should be noted, however, that although a single rack can physically be fully populated, the data center may not be able to supply enough power and cooling to allow you to fully populate the rack.
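
As a quick sanity check on the arithmetic, the following minimal Python sketch converts rack units to inches and counts how many 2-U servers fit, space-wise, in a 42 U rack:

RU_INCHES = 1.75  # height of one rack unit, per the EIA-310 standard

def rack_height_inches(rack_units: int) -> float:
    """Convert a rack height in rack units (U) to inches."""
    return rack_units * RU_INCHES

def servers_per_rack(rack_units: int, server_units: int) -> int:
    """How many servers of a given height fit in the rack, counting space only."""
    return rack_units // server_units

print(rack_height_inches(42))    # 73.5 inches -- the height of a 42 U rack
print(servers_per_rack(42, 2))   # 21 2-U servers, power and cooling permitting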

The process of installing equipment in a 19-inch rack is referred to as racking, and the process of removing the equipment is known as unracking.

image

Make sure that when you rack and unrack equipment, you use proper lifting techniques. More often than not, there should be two people involved, so you don't injure yourself or cause damage to equipment.

FIGURE 13.2 19-inch rack filled with 21 2-U servers

image

image

Heavy equipment should be racked in the bottom of the rack so the rack doesn't topple over when it is leaned against or when heavy equipment is slid in and out of it. Racks should be bolted together or fitted with special stabilizing feet to provide additional stability and resistance to toppling.

Sadly, storage arrays are one of the worst offenders when it comes to requiring nonstandard specialized racks. It's not uncommon for data centers to have entire rows dedicated to storage equipment, because storage arrays often come in custom racks and can't be installed in standard data center racks. Fortunately, though, most storage vendors are starting to support their equipment when installed in industry-standard 19-inch racks.

U.S. electrical codes require racks, and sometimes the equipment within them, to be grounded to protect people and equipment from electric shock and static. For the same reason, it is highly recommended that you wear an antistatic wristband when working with equipment in the data center, especially if you're taking the lid off equipment.

Flooring

It is common practice in modern data centers to have raised floors. A raised floor is an artificial floor, usually made of 600 mm × 600 mm floor tiles that rest on top of pedestals. The floor tiles are removable, allowing engineers to lift them and gain access to the void between the raised floor and the solid floor beneath. This floor void is commonly used to route network cables and power cables, as well as potentially channel cold air. Figure 13.3 shows an artificially raised floor sitting 2 feet above a concrete floor on supporting pedestals.

FIGURE 13.3 Raised floor and void

image

It is important to know how much weight the raised floor can sustain, so you don't load a rack with more weight than the floor is rated to hold. You should always consult with the experts when it comes to the load-bearing capacity of your raised data center floor. It is common practice to install racks so they span multiple floor tiles. Quite often, each floor tile is capable of taking a certain weight, and standing a rack across four tiles, as shown in Figure 13.4, can sometimes give you the strength and weight-bearing capacity of all four tiles. However, make sure that you seek the advice of a knowledgeable expert before making this assumption.

FIGURE 13.4 Aerial view of data center rack spread over four floor tiles

image

Rows

In most data centers, racks are organized into rows, as shown in the aerial view provided in Figure 13.5.

In this manner, racks can easily be located by giving their row location as well as their rack location. For example, rack 3 in row A is highlighted in Figure 13.6.

FIGURE 13.5 Racks organized in rows

image

FIGURE 13.6 Locating rack 3 in row A

image

Later in the chapter, you will see how rows are important when it comes to power and cooling. For now, it's enough to know that data center design and management are far simpler if you line up racks in neat rows.
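
If you track racks in an inventory system, a row-and-rack grid is trivial to model. The sketch below is purely illustrative; the naming scheme and the idea of keeping a flat list of labels are assumptions, not anything prescribed in this chapter:

import string

def build_rack_grid(rows: int, racks_per_row: int) -> list[str]:
    """Generate rack locations such as 'A1', 'A2', ..., 'B1' for a rectangular layout."""
    row_letters = string.ascii_uppercase[:rows]
    return [f"{row}{rack}" for row in row_letters
            for rack in range(1, racks_per_row + 1)]

grid = build_rack_grid(rows=4, racks_per_row=10)
print("A3" in grid)   # True -- rack 3 in row A, as in Figure 13.6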

Power

Power is one of the most important considerations for modern data centers! If you thought the world's insatiable appetite for more and more storage was off the scale, then demand for power in the data center is right there with it!

Data centers need power and often can't get enough of it. In fact, it's not uncommon for some of the larger data centers to draw more power than an average-sized city. But it's not just the servers, storage, and networking equipment that suck the power. Heating, ventilation, and air conditioning (HVAC) systems consume a lot of power. In some places in the world, HVAC consumes more power than the IT equipment, meaning those data centers consume more power keeping the temperature right than they do keeping equipment powered on.

But it's not just the data center as a whole that struggles with power. Many data centers struggle to provide enough power and cooling to individual racks. Modern servers draw more power and pump out more heat than ever before, and data centers that can provide only in the region of 5–6 kW to each rack just aren't supplying enough power to fill the rack with servers. If your data center can't supply enough power and cooling to fill your racks, those racks may have to operate half empty. Many modern data centers now provide 20 kW or more to each rack, so every inch of rack space can be used!
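
To see why the per-rack power budget matters, here's a hedged, back-of-the-envelope Python sketch. The 500 W per-server draw is an assumed figure for illustration only, not something taken from this chapter:

def servers_by_power(rack_budget_kw: float, server_draw_kw: float) -> int:
    """How many servers the rack's power budget supports, ignoring physical space."""
    return int(rack_budget_kw // server_draw_kw)

def servers_by_space(rack_units: int = 42, server_units: int = 2) -> int:
    """How many servers fit physically in the rack."""
    return rack_units // server_units

SERVER_DRAW_KW = 0.5   # assumed draw per 2-U server

for budget_kw in (5, 6, 20):
    usable = min(servers_by_power(budget_kw, SERVER_DRAW_KW), servers_by_space())
    print(f"{budget_kw} kW per rack -> {usable} of {servers_by_space()} slots usable")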

image

In 2011, Google announced that its data centers in 2010 were continuously consuming 260 megawatts (MW) of power. That's 260,000,000 watts, which equates to more power than that used by Salt Lake City, Utah. Of course, Google is not the only company to have large data centers that consume a lot of power, and Google also claims to do a lot to offset and minimize its impact on the environment.

Because of the power constraints we currently operate under, vendors are placing significant focus on improving the efficiency of power supply units (PSUs) in the back of servers, storage, and networking equipment. For example, if it costs you $500,000 a year to power your IT equipment, a 20 percent improvement in the efficiency of PSUs could significantly reduce your annual electricity bill.
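
As a rough illustration of that claim, assume (purely for the sake of the example) that PSU efficiency rises from 75 percent to 90 percent, which is a 20 percent relative improvement. The sketch below estimates the effect on a $500,000 annual bill:

def annual_saving(bill: float, old_eff: float, new_eff: float) -> float:
    """Estimate the saving when PSU efficiency improves and the IT load stays constant.

    Power drawn from the wall scales with 1 / efficiency, so the new bill
    is roughly bill * old_eff / new_eff.
    """
    return bill * (1 - old_eff / new_eff)

# Assumed efficiencies: 75% today, 90% after the upgrade (a 20 percent improvement).
print(round(annual_saving(500_000, old_eff=0.75, new_eff=0.90)))   # ~83333 dollars a year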

The industry uses the power usage effectiveness (PUE) rating to measure efficiency of a data center. PUE is determined by the following formula:

PUE = total facility power ÷ IT equipment power

A PUE of 2 means that you have a single watt of overhead for every watt you supply to your actual IT equipment (server, storage, network). A PUE of 4 would mean you have 3 watts of overhead for every watt used to power your equipment. So, obviously, a lower PUE rating is better.
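
A minimal worked example of the calculation, using made-up meter readings:

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative readings: 1,200 kW into the facility, 600 kW reaching IT equipment.
print(pue(1200, 600))   # 2.0 -> one watt of overhead for every watt of IT load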

Quite often, data centers have multiple independent power feeds (ideally from separate utility companies if that's possible). This allows them to lose power from one utility company and not have to run on power from the on-site generator.

image

To provide power to the data center when utility power is lost, most data centers have backup power systems based on on-site diesel generators. If utility power is lost, these generators kick in and power the data center until utility power is restored. It is common practice to maintain enough fuel on-site to be able to power the data center at full capacity for a minimum of 48 hours. It's obviously important to perform regular (at least annual) tests of these on-site generators so you know they can be relied on when called into action. As an example of the importance of such generators, when Hurricane Sandy hit in 2012, many data centers in the New York and Newark areas had to rely on generator power for extended periods of time.

Within the data center itself, it's common to provide two independent power feeds to each and every rack. This power is often supplied via independent power distribution units (PDUs) at the end of each row, as shown in Figure 13.7.

FIGURE 13.7 Power distribution to data center rows

image

In Figure 13.7, each row is fed by a power feed from PDU A as well as a feed from PDU B. For each row, PDU A is independent of PDU B, meaning that one PDU can fail, or the power supplied to it can fail, without the other being affected. For example, if PDU B fails for row C, PDU A will not be affected and will continue to provide power to equipment racked in row C. This redundant power configuration allows for computing, storage, and networking equipment to be dual-power fed and survive many power failure scenarios. This also allows you to perform maintenance on one power feed without having to power equipment down or run on power from the on-site generator.

It is also common for all power to be protected by an uninterruptible power supply (UPS) that ensures that power is clean and spike free. UPS systems often provide a few seconds or minutes of power in the event that utility power is briefly interrupted. This means that computing, storage, and networking equipment is unaffected by brief spikes or drops in power.

Power is usually fed to the racks via the floor void, as shown in Figure 13.8.

FIGURE 13.8 Power to cabinets via floor void

image

Power to each rack usually has its own circuit breaker on the PDU at the end of the row, allowing you to isolate power on a rack-by-rack basis.

In data center facilities that do not have raised floors, it is also possible to route power cables overhead, and have industrial and multiphase power plugs and sockets suspended from the ceiling so you can easily connect them to rack-based power strips. As with routing cables in the floor void, routing them overhead keeps them out of the way when performing day-to-day work in the data center.

image

Power is often referred to as one of the mechanical and electrical (M&E) subsystems of the data center.

Cooling

Cooling is big business in data centers, with a lot of power being expended on keeping the data center cool. To reduce the amount of energy required to keep equipment cool, most data centers operate on a form of the hot/cold aisle principle. Most modern IT equipment is built so that fans suck in cold air from the front and expel hot air from the rear. In a hot/cold aisle configuration, racks are installed so that all racks in a row face the same direction, and opposing rows face each other in a back-to-back configuration. This means that equipment in adjacent rows is always facing either back-to-back or front-to-front, as shown in Figure 13.9.

image

As much as cooling is a function of power, it is also a function of equipment placement. One Uptime Institute report suggests that equipment higher in the rack is far more likely to fail than equipment lower in the rack. This is because cool air is warmed as it rises, making the temperature at the top of the rack higher than at the bottom.

FIGURE 13.9 Hot/cold aisle configuration

image

The kind of hot/cold aisle configuration shown in Figure 13.9 helps keep hot and cold air separated. This makes for a far more efficient data center cooling system than if the hot air that was blown out of some equipment was then sucked into the front of other equipment; trying to keep equipment cool by using hot air isn't a good idea. However, even with hot/cold aisle data centers, hot and cold air still manages to mix, and as a result, some data centers decide to enclose either the hot aisle or the cold aisle. For example, in a data center where cold air is kept within an enclosed aisle, hot air from outside cannot get into the enclosed area and raise the temperature. It is also popular to pump cold air around the data center through the floor void, and allow it up into the cold aisle via perforated floor tiles, as shown in Figure 13.10.

FIGURE 13.10 Cold air routed through the floor void

image

Most modern data center racks come with perforated front and rear doors so racks can breathe; air can come in and out of the rack through the tiny perforated holes in the front and rear doors. If your racks don't have perforated front and rear doors, you should seriously consider removing the doors entirely. In fact, removing the front and rear doors is not an uncommon practice in large data centers.

image

Some equipment, especially older network equipment, doesn't conform to front-to-rear airflow. Some equipment sucks in cold air from one side and blows hot air out the other side, and some equipment even takes cold air in from below and expels hot air from the top. If you operate a data center that is built around a hot/cold aisle design, you should try to avoid this kind of nonstandard equipment in the same areas of the data center.

It's not uncommon for data centers to be located in strategic geographic areas that provide opportunities to leverage local environmental conditions in order to make the data center as green and environmentally friendly as possible. Examples include areas with cool air that can be channeled into the data center and used to cool equipment or areas with long sunny days that enable good use of solar power.

Amid all this discussion about racks, rows, power, and airflow, it's easy to forget that the purpose of all this is to create the best possible environment for servers, storage, and networking equipment to operate in, and to do so at the lowest possible cost.

Server Installation in a Data Center Rack

Now that we've talked a lot about racks, power, cooling, and related matters, Exercise 13.1 walks you through the common steps required to install a new server into a data center rack.

EXERCISE 13.1

Racking a New 2U Server

This exercise outlines some of the steps required to install new equipment in a data center rack.

  1. Determine whether the rack where you are installing your new equipment has the capacity for that equipment. Check for the following (a scripted sketch of these capacity checks appears after the exercise):
    • Is there enough physical space in the rack? The rack will need at least two free rack units.
    • Are there enough free sockets in the power strips in the rack to take the power cables from your new server?
    • Is there enough spare power to the rack to handle the power draw of your new server?
    • Is there enough cooling to the rack to handle your new server?
    • Are there appropriate copper and fiber cables to the rack?
    • Will you have to remove floor tiles for any of the network or power cabling work?
    • Can the floor and rack take the weight of adding your new 2U server?
  2. After you have determined that there is enough space, power, cooling and so on, you need to plan the installation of your new server. This planning often includes many of the following considerations:
    • When, and by whom, will your new server be delivered? Most data centers do not allow delivery of equipment without prior notification. You may well need to pre-inform the security office at the data center and provide them a description of the equipment as well as the name of the company delivering the equipment.
    • Who will rack your equipment? You will need two people to physically rack the equipment.
    • When can the equipment be racked? If it will be racked into a rack in the production zone of your data center, the physical racking work may need to be performed on a weekend and may require an authorized change number to be formally agreed upon by a change management board.
    • Do you have rail equipment to allow your server to be physically racked?
    • Are you required to use antistatic wristbands, antistatic mats, or other specific equipment? If so, where are they stored?
    • Are there guidelines or protocols that you are required to follow? Be sure to review them in advance, and follow up with the appropriate parties with any questions you have.
    • What paperwork or forms do you need to fill out in advance, and how far in advance do you need to fill them out?
    • Do you need to engage with the network and storage teams so that your new server is physically and logically configured on the IP and storage networks?
  3. After you have everything planned, the day arrives for you to install your new equipment. Before heading down to the data center, ensure that all your electronic paperwork is in order. Make sure that the security staff at the data center are expecting you and that they will allow you on site. Also make sure your equipment has been delivered and find out where it's currently being stored.
  4. When you arrive and have gained access to the site and the computer room with your new server, follow a procedure similar to the following:
    1. Locate the rack where your server will be racked.
    2. Open the rack and ensure that there are no obstructions to installing your new server. Be careful with power and network cables to existing servers and storage. You definitely don't want to knock any of them out.
    3. Unpack your server. It is standard practice at most data centers for you to unbox new equipment in a build room and not in the computer rooms of the data center.
    4. Install the racking equipment (rails). You may be required to use an antistatic wristband or antistatic mat while doing the work.
    5. Lift your server onto the rails and gently slide it into place, ensuring you don't unplug or damage any other cables while doing so.
    6. Cable up the network and power, including removing any required floor tiles.
    7. Power up your server.
    8. If you had to lift any floor tiles, put them back in place.
    9. Ensure that all other systems in the rack look fine, and close the rack doors.
    10. Test the server, and follow whatever protocols are required to help the network and storage teams configure your server on the IP and storage networks.
    11. Dispose of any packaging that your equipment was delivered in.

That is all there is to it. Of course, every scenario is unique. As you perform these steps, you will probably find that your process is different at several points along the way, but this exercise is a good blueprint for the general steps you follow.
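
The go/no-go questions in step 1 lend themselves to a simple scripted check. The Python sketch below is illustrative only; the rack and server attributes, their names, and the threshold logic are all assumptions rather than anything mandated by the exercise:

def capacity_problems(rack: dict, server: dict) -> list[str]:
    """Return the reasons an install should NOT go ahead; an empty list means proceed."""
    problems = []
    if rack["free_units"] < server["units"]:
        problems.append("not enough free rack units")
    if rack["free_sockets"] < server["power_cables"]:
        problems.append("not enough free sockets in the rack power strips")
    if rack["spare_power_kw"] < server["power_draw_kw"]:
        problems.append("not enough spare power to the rack")
    if rack["spare_cooling_kw"] < server["power_draw_kw"]:
        problems.append("not enough cooling headroom in the rack")
    return problems

rack = {"free_units": 4, "free_sockets": 2, "spare_power_kw": 1.2, "spare_cooling_kw": 1.0}
server = {"units": 2, "power_cables": 2, "power_draw_kw": 0.5}
print(capacity_problems(rack, server) or "capacity checks passed -- plan the install")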

Everything we've discussed so far is about creating an optimal environment for business applications and IT equipment to operate in. Now let's take a look at data center cabling.

Data Center Cabling

If you didn't already know how vital power and HVAC are to the data center, hopefully you do now that you have read the previous sections. Let's move on and discover how equally vital good cabling is.

Following Standard Principles

Good, well-thought-out, and properly installed cabling is an absolute must for any data center. It will make the day-to-day management and the future growth of the data center so much simpler than they would otherwise be. There are several overarching principles to account for when cabling a data center. They include the following:

  • Planning for the future
  • Testing
  • Documenting

When planning for the future, try to use cables that will last for 10 years. This includes the robustness of the cable as well as its capability to transmit the kind of data rates (bandwidth) that are anticipated for the next 10 years. As an example, if you're installing copper twisted-pair cabling, make sure to install the latest category so that it has a fighting chance of being able to transmit 40G Ethernet. The same goes for fiber cables; install the latest optical mode (OM) available, such as laser-optimized OM3 or OM4, which are both capable of 40G and 100G Ethernet (including FCoE). Cutting corners and cutting costs by installing cheap cables might save you some cash today, but in a few years’ time, it will come back to bite you. There aren't many more daunting and costly tasks in data center maintenance than re-cabling.

Planning for the future is also more than just choosing the right kind of cable to use. It involves things like having a large-enough floor void and having enough space for conduits between different floors in the building to allow you to work with cabling today (so you can easily stay within recommended bend radius and so on) as well as add cable capacity in the future.

image Real World Scenario

Cabling Consumes Space

One company didn't think far enough ahead when installing a computer room at its corporate HQ. The company didn't leave enough space to run cables between floors and ended up having to decommission an entire elevator shaft and reuse it as a cabling duct between floors.

Testing is vital with cabling, especially structured cabling. If you'll be doing the testing yourself, do yourself and your organization a favor, and use high-quality cabling and high-grade testing equipment. If a third party will be doing the cable testing for you, make sure you accept only the highest scores as a pass score. You absolutely do not want to be responsible for structured cabling that was done on the cheap and not tested properly. Faulty cables in a structured cabling system cannot usually be replaced; you just have to live with the unusable cable and make sure you don't connect anything through it. For this reason, make sure you apply the carpenter's motto of measure twice, cut once.

image

Copper cabling is susceptible to interference from adjacent cables and equipment, technically known as alien crosstalk (AXT). This can be avoided with good planning and adherence to cabling best practices, such as not mixing CAT 6A cables running 10G Ethernet in the same ducts and pathways as CAT 5, CAT 5e, and CAT 6. High patch-panel densities can also lead to AXT in copper patch panels.

Documenting cabling is also vital. This includes things easily overlooked such as using labels on patch panels, as well as placing labels on the ends of cables. Cabling is something that will get out of control and become an unmanageable mess if you don't document it.

Using Structured Cabling

The most common, and recommended, form of data center cabling is structured cabling. The term structured cabling refers to the fact that it follows a well-defined set of standards. These standards have been established over the years to improve the quality and manageability of cabling within the data center.

image

The opposite of structured cabling is either point-to-point cabling or simple ad hoc cabling, neither of which scales. Point-to-point consists of running cables directly from servers to switches to storage, with the cables usually being routed within and between racks without the use of patch panels. These solutions can work in some small data centers, more often referred to as computer rooms or equipment rooms, where there is no space or requirement for a serious, long-term, scalable cabling solution.

The de facto standard when it comes to data center cabling is TIA-942, produced by the TIA in 2005. As well as recommending things like using the highest-capacity cabling possible (future-proofing), this standard outlines the following major components of a data center that relate to cabling:

  • Entrance room (ER)
  • Main distribution area (MDA)
  • Horizontal distribution area (HDA)
  • Equipment distribution area (EDA)
  • Zone distribution area (ZDA)
  • Vertical cabling/backbone cabling
  • Horizontal cabling

At a very high level, they work together as follows. All external network links arrive into the data center at the entrance room (ER). From here, backbone cabling connects equipment in the ER to the main distribution area (MDA) where your network core resides. More backbone cabling then connects the equipment in the MDA to the horizontal distribution area (HDA), which is where your distribution layer switches—and often core SAN switches—reside. From the HDA, horizontal cabling then runs to the equipment distribution areas (EDAs), where servers, storage arrays, and edge switches reside. This is all shown in Figure 13.11.

FIGURE 13.11 High-level structured cabling

image
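
Another way to internalize the flow shown in Figure 13.11 is to model it as a tiny graph. The sketch below is illustrative only; the area names follow TIA-942, but the dictionary layout and the path-tracing helper are assumptions:

# Upstream links in a structured cabling layout, following the high-level TIA-942 picture:
# EDA -> HDA over horizontal cabling, HDA -> MDA and MDA -> ER over backbone cabling.
UPSTREAM = {
    "EDA": ("HDA", "horizontal cabling"),
    "HDA": ("MDA", "backbone cabling"),
    "MDA": ("ER", "backbone cabling"),
    "ER": (None, None),   # the ER is the demarcation point to external providers
}

def path_to_entrance_room(start: str) -> list[str]:
    """Trace the cabling path from a given area up to the entrance room."""
    hops, area = [], start
    while UPSTREAM[area][0] is not None:
        parent, cable = UPSTREAM[area]
        hops.append(f"{area} -> {parent} ({cable})")
        area = parent
    return hops

print("\n".join(path_to_entrance_room("EDA")))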

But that's just a high-level picture. Let's look a bit closer at each of these components.

Entrance Room

According to TIA-942, each data center has to have at least one ER, but can have more if required. The ER is where all of your external network connections come in—connections from network providers as well as all connections to other campus networks. Basically, any network that wants to come into the data center must come in via the ER. The ER is effectively the buffer between the inner sanctum of the data center and the outside world. In network lingo, we call this the demarcation point—where you draw the line between network equipment from your network providers and your own network equipment. Within the ER, you connect your equipment to the equipment from your network providers, and your network providers should have no network circuits or equipment beyond the ER. Anything beyond the ER should be owned and managed by you.

Having multiple ERs can provide higher availability for external links. For example, if all your external links come into a single ER, and there is a power failure, flood, or other incident in that room, you risk losing all external connections to the data center.

The ER can be inside or outside your computer rooms.

image

Computer rooms are where your MDA, HDA, and EDA (including servers, storage, and network equipment) reside.

Main Distribution Area

The main distribution area (MDA) is inside the computer room and typically houses the core network switches and routers. These are your network switches and routers, owned and managed by you.

The outward-facing side of the MDA is connected to the ER, and the inward-facing side is connected to the HDA. Both of these connections are via backbone cabling, sometimes called vertical cabling. Backbone cabling is often fiber cabling.

Strictly speaking, the internal side of the MDA can connect directly to your EDAs, or it can connect to the EDAs via HDAs. Which of the two designs you choose will be based on scale. Smaller data centers often omit the HDA and cable directly from the MDA to the EDAs. However, larger data centers and data centers that plan to scale usually implement one or more HDAs. Aside from scale, HDAs can make it easier and less risky to make cable-related changes, as they often allow you to reduce the number of cabling changes required within the MDA.

Horizontal Distribution Area

The horizontal distribution area (HDA) is optional. If your data center is large or you plan to scale it to become large, you'll want to implement one or more HDAs. If you expect your data center to always remain small, you can skip the HDA and cable directly from the MDA to your various EDAs.

image

When talking about the size or scale of your data center, it isn't always high port count (a high number of network ports in the data center) that dictates the necessity of implementing HDAs. HDAs can also be useful if the distance from the MDA to your various EDAs exceeds recommended cable lengths. This is because the switching technologies employed in the HDA are active, meaning that they repeat the signal. Repeating a signal allows signals to be transmitted over large distances.

If you implement HDAs in your data center, this is sometimes where your core SAN switches may be placed, as they act as consolidation points between storage arrays and edge switches that the servers connect to. However, it is also common to install SAN switches in the EDAs of your data center. In IP networking, the HDAs usually map to the aggregation layer.

Cabling between the MDA and the HDA is backbone cabling.

image

Very large data centers can also implement zone distribution areas (ZDAs) that sit between the HDA and EDA and act as consolidation points to potentially allow for more flexibility.

Equipment Distribution Area

The equipment distribution area (EDA) is where most of the server and storage action happens, as this is where servers, storage, and top-of-rack (ToR) network switches live.

The cabling between HDAs and EDAs is considered horizontal cabling. Although copper was king in horizontal cabling in the past, it is fast losing popularity to fiber cabling because of fiber's ability to deliver 40G Ethernet today, with a roadmap to 100G Ethernet.

Copper twinax direct-attach cable (DAC) is a popular choice when cabling within the EDA, especially cabling within a single rack. For example, copper twinax is commonly used for cabling between servers and ToR switches, as it is capable of transmitting 10G Ethernet over distances of up to 5 meters. Copper twinax solutions are also more power-efficient than other copper alternatives such as unshielded twisted-pair (UTP) and shielded twisted-pair (STP), due to the fact that they use more power-efficient PHY chips in the server and switch.
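
As a rough illustration of the trade-offs described above, the helper below suggests a cable type for a server-to-switch link. The 5-meter twinax figure comes from the text; the other thresholds and the decision logic itself are simplified assumptions (real designs also weigh cost, optics, and cable management):

def pick_eda_cable(link_length_m: float, speed_gbps: int) -> str:
    """Suggest a cable type for a server-to-ToR link (simplified, illustrative rules)."""
    if speed_gbps <= 10 and link_length_m <= 5:
        return "copper twinax DAC"                       # in-rack links, power-efficient PHYs
    if speed_gbps <= 10 and link_length_m <= 100:
        return "copper twisted pair (CAT 6A or better)"  # longer copper runs
    return "laser-optimized multimode fiber (OM3/OM4)"   # 40G/100G or longer distances

print(pick_eda_cable(3, 10))     # copper twinax DAC
print(pick_eda_cable(30, 40))    # laser-optimized multimode fiber (OM3/OM4)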

Aside from the aforementioned concepts, the TIA also recommends using the highest-capacity cabling media possible—such as laser-optimized OM4 and the highest category specification available—in order to future-proof the data center and mitigate potential future disruption. This is sound advice. Think long and hard about the life expectancy of your cabling.

Working in the Data Center

There are several rules and courtesies that will help you with any work that you carry out in a data center.

Access Control

The levels of access control differ depending on the size and nature of the data center. As a general rule, only authorized personnel should be permitted access to your data centers. Even you, as an IT/storage admin, might not be permitted uncontrolled access to the data center. It is common practice for data centers to have high fences with security gates manned by dedicated security staff. To get through the front gate, you normally need to be preauthorized.

image Real World Scenario

Data Center Security Should Be Taken Seriously

The CIO of a major global financial institution turned up unannounced at one of the company's major data centers. Because his name was not on the list of people authorized to access the data center that particular day, he was denied access.

Once through the security gate, it is normal practice for access to the ER and computer rooms to be secured with electronic access systems such as swipe-card access or biometric access. Further, some data centers secure their servers, network, and storage equipment inside locked cabinets with badge or pin-code access. Opening the cabinets requires preauthorization.

It is also a common practice for the ER to be outside the computer rooms so engineers from your network providers don't need to access your computer rooms.

Fire-Prevention Systems

Obviously, a fire in the data center could be catastrophic to your organization. Data centers need best-of-breed fire-prevention systems that include the following:

  • Fire warning systems
  • Fire suppression systems

When it comes to fire suppression, data centers have systems ranging from small handheld fire extinguishers all the way up to expensive, top-of-the-line, gas-based fire suppression systems. If possible, water-based fire suppression systems should be avoided, as water and computer equipment don't usually mix well! However, gas suppression systems, which control and put out fires by starving them of oxygen, can be expensive.

General Rules of Conduct

Food and drink should be banned in all data centers. Gone are the days of IT guys resting a half-full beverage on top of a server while they work at the console. In fact, in many data centers, if you're caught with food and drink, you will be immediately walked off-site and asked not to come back.

It's also good manners and a best practice to clean up after yourself. That means leaving things in the same or better condition than they were in when you arrived. Don't leave any empty cable bags, spare screws, or anything else behind after you leave. Always remember to put floor tiles back safely and securely! After all, the raised floor is often important in maintaining proper airflow in the data center.

image Real World Scenario

Take Care When Working with Raised Floors

A worker in a small computer room with a raised floor lifted all the tiles in a single row in order to easily lay some cabling in the floor void. What he didn't realize was that the raised floor was substantially weakened by having an entire row of tiles lifted. As he was working with the row of tiles removed, part of the floor collapsed, damaging cables and equipment in the floor void, which required major engineering work to fix and recertify the floor.

Make sure that you carry out only authorized work while in the data center. Don't get overexcited and decide to do other jobs while you're there. For example, don't lift floor tiles without authorization. Most data centers will be equipped with closed-circuit security cameras to monitor what goes in and out of the data center, as well as what happens while people are inside.

There is one final and important point to keep in mind. Don't press the big, red, emergency power-off (EPO) button! Most data centers have a big, red, EPO button that is used to cut power to the data center in the event of an emergency. Although it can be extremely tempting to see what happens if this button is pressed, unless there is a genuine emergency, do not press it!

Summary

In this chapter, we learned about the importance of the data center, and that not all data centers are created equal. We learned about the widely accepted data center tier model maintained by the Uptime Institute, and that tier 4 data centers are the highest tier and provide the best availability and redundancy. We also learned about data center racks, raised flooring, airflow and cooling, and power, as well as the vital nature of cabling and how it influences the reliability of the data center today while future-proofing it for new technologies. We finished the chapter by talking about physical data center security, fire detection and suppression systems, and some general rules of conduct that should be adhered to in all data centers.

Chapter Essentials

Racks Data center racks come in standard sizes, should be grounded, and need to be installed in a planned layout to facilitate efficient airflow and cooling within the data center. Racks that come in nonstandard sizes should be avoided where possible, and all work within a rack should be approved and carried out with the utmost care.

Power Modern data centers consume huge amounts of electricity and often have backup generators on-site that can provide electricity in the event that utility power is lost. Most racks and rows in the data center have N+1 redundant power supplies.

Cooling Modern IT equipment kicks out a lot of heat, and if not kept cool will malfunction. Many data centers operate hot/cold aisles and sometimes even enclose aisles to stop hot and cold air from mixing. Keeping hot and cold air separate can drastically reduce the cost of keeping the data center cool.

Structured Cabling Cabling is a vital component of the data center. Don't cut corners when it comes to structured cabling, and make sure that you plan and deploy a cabling solution that scales well and will suit the needs of your data center 10 years from now.

Emergency Power-Off Button Every data center has at least one really cool-looking emergency power-off (EPO) button. In the event of a real emergency, this button can save lives by cutting all power to the computer room. Use it only in a real emergency. If you use the EPO button inappropriately, you should expect to be sacked.
