Chapter 20. Physical Constraints: Heat, Space, and Power

Chapter Objectives

  • Identify the physical constraints for cluster design

  • Show simple examples of power ratings and calculations

  • Discuss heat generation and removal

  • Provide floor layouts for example clusters

This chapter presents a real-world design example covering space, power, and cooling for a medium-size cluster.

Identifying Physical Constraints for Your Cluster

Any cluster has to have a home, a place to exist during its useful lifetime, and the design process must take the physical constraints of that home into account. It would do no good to build a cluster that cannot function properly in its intended location. This chapter identifies the physical constraints placed on clusters by budget and location, and works through a simple example dealing with heat, space, and power requirements.

The cluster design and physical parameters used here are from a proposed design for use in the facilities at the company where I work. The building space, cooling, and power figures were all taken from the plans for the computer room in our facility. The design parameters are

  • 188,000 BTUs per hour (18 tons at 87% efficiency) of cooling installed, and another 188,000 BTUs per hour available

  • Two 220-V three-phase, 100-Amp circuits for power distribution

  • A lab that is 38 feet by 24 feet, with 874 square feet of space available

The facility was “built out” with cluster equipment in mind, so there was additional cooling available and, we hope, adequate power. We will examine each of the design considerations in the upcoming sections. The front view of the example cluster equipment is shown in Figure 20-1.

Figure 20-1. Cluster hardware used for example calculations

Space, the Initial Frontier

The easiest constraint to satisfy would appear to be the required square footage and placement for the equipment racks in your facility. There either is enough space or there isn't. This is easy, right?

In actuality, there are a number of considerations about the placement of racks, and the larger the cluster, the more important and difficult they become. These details include planning for proper airflow, access to the racks and equipment, and requirements dictated by local building codes, such as maximum height with respect to the ceiling or minimum aisle widths. Before you place rows of racks too close to obstacles, think about access to the front and rear of the racks and the ability to place service equipment, like “crash carts,” close to the system on which you are working.

For a large number of racks, a standard practice is to create “cold aisles” and “hot aisles” to aid delivery of cooling to the equipment. This is done by placing the front of the rack rows (air intake) facing each other with perforated floor tiles in front of the racks. Cold air is forced out from under the raised floor, through the perforated tiles, into the cold aisle and is allowed to flow into the equipment in the racks.

The fans in the equipment racks draw the cool air into the equipment and exhaust it out the back, into the hot aisle. The air rises to the ceiling and is removed by the cooling system. There must be enough space available to place the racks in this arrangement, and proper placement of perforated tiles and racks can be important.

We examine the cooling situation more closely in a later section. It takes specialized expertise to design a large-scale cooling approach properly. Although there are simulation programs that can help with this task, it is better to seek professional help for large, complex cases. We now return to placing the example installation's racks.

The example rack placement for our five-rack cluster is shown in Figure 20-2. In this particular case, the racks will be placed against a wall, with the hot-air exhaust (backs of the rack) toward the wall. Note the clearance for the rear door in the diagram. What is not shown is the requirement for space in front of the rack for stabilization plates, and for the equipment's slide rails to extend out the front of the rack.

Figure 20-2. Example physical rack placement

Although the space considerations are not overly complex for our example installation, be sure not to underestimate the needs of your own. Front and rear clearance for the racks and placement that aids cooling are both important. The power receptacles, of course, also must be within reach of the racks.

The cluster occupies a block of standard floor tiles (each tile is two feet square) five tiles wide by four tiles deep, or 20 tiles, for a total of 80 square feet. This area does not count any spacing between the cluster and adjacent racks, which might require another row of tiles on either side, or another 32 square feet.
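The tile arithmetic is simple enough to capture in a few lines. Here is a minimal Python sketch using the tile counts from this example; adjust the counts for your own layout:

    # Floor-space estimate for the example rack placement.
    # Standard raised-floor tiles are 2 ft x 2 ft, or 4 square feet each.
    TILE_SIDE_FT = 2
    TILE_AREA_SQFT = TILE_SIDE_FT ** 2

    tiles_wide = 5   # one tile per rack across the row of five racks
    tiles_deep = 4   # rack depth plus front and rear clearance

    footprint = tiles_wide * tiles_deep * TILE_AREA_SQFT
    side_rows = 2 * tiles_deep * TILE_AREA_SQFT  # an extra row of tiles on each side

    print(f"Rack footprint:      {footprint} sq ft")                # 80 sq ft
    print(f"With side clearance: {footprint + side_rows} sq ft")    # 112 sq ft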

While we are on the subject of space, let's touch on height for a minute. Any preassembled racks are going to need to fit through doors, hallways, and possibly freight elevators (if the computer room is on the second floor). You do not want to be confronted with disassembling your nicely built cluster racks to make them fit through a constricted space on the way to their final home. (I am reminded of a tape subsystem error from the Apollo Computer DOMAIN/OS software which read, “Unit does not fit through 36-inch hatch,” that was jokingly added to the message catalog in reference to attempting to install a tape unit on board a ship.)

Power-Up Requirements

It is important to ensure that your power infrastructure can support the additional equipment you will be adding. Any calculations you do should be checked by a professional electrician, and any work that requires a licensed professional should be done by one. To do otherwise is risky business.

To determine the total power used by the racks of equipment, you must turn to the manufacturer's specifications. Unfortunately, the information you require may be missing, or the numbers you do find may be contradictory. The best way to estimate the total power requirements for a given system configuration (RAM, disks, interface cards, and so forth) is to measure it. If you don't have the means to do this, you will need to rely on specifications from the manufacturer, which may be for an idle system or for a maximum possible configuration. (The CPU's floating-point unit, when active, generates substantially more heat than integer computation does, so it is easy to underestimate the power and heat requirements.)

System Power Utilization

If you go to Hewlett-Packard's Web site and find the technical specifications for the DL-360 G3 system, you will immediately see a number like 325 W per system. Until you dig a little deeper, you will have no idea what configuration this number represents. Make sure that you understand what the number represents before you use it in your calculations.

Searching most major manufacturers' Web sites will lead you to a power calculation tool that allows picking a system configuration and generating a power figure for it. This is the next-best way to generate a power utilization number without actually measuring it. For our cluster's compute slice configuration, I came up with 445 W. The power utilizations for the other devices are shown in Table 20-1.

Table 20-1. Device Wattage Ratings for Example Cluster Configuration

  Device Description                                     Number in  Watts per  Total
                                                         Cluster    Device     Watts
  -----------------------------------------------------  ---------  ---------  ------
  HP DL-360 G3, Dual 3.2-GHz Pentium Xeon CPU,
    2 GB RAM                                                   128        445  56,960
  HP DL-380 NFS Server, Dual 3.2-GHz Pentium Xeon CPU,
    6 GB RAM, dual power supply                                  1        800     800
  HP StorageWorks 4454R storage enclosure                        3        754   2,262
  HP Procurve 2848 switch                                        8        100     800
  HP Procurve 2650 switch                                        4        100     400
  Cyclades TS2000 serial port switch                             4         37     148
  Integrated keyboard/video/mouse                                1         50      50

These numbers are still estimates, so it is important to design with a safety margin adequate to protect against underestimates. The total wattage for the cluster's active devices is estimated at 62,040 W. To convert this to kilowatts, the unit in which most of the following calculations are expressed, we use the following equation:

  • kW = Watts ÷ 1,000 = 62,040 ÷ 1,000 = 62.04 kW

Our next job is to calculate the available power, given the two 100-Amp, 220-V, three-phase service entrances in our target facility. Some very useful conversion factors may be found on the Web site at http://www.abrconsulting.com/conversions/elec-con.htm. Let's calculate the available kilowatts using the following conversion:

  • kW per circuit = (Volts × Amps × 1.732 × PF) ÷ 1,000 = (220 × 100 × 1.732 × 0.85) ÷ 1,000 = 32.39 kW

  • Total available = 2 circuits × 32.39 kW = 64.78 kW

where PF stands for the power factor, a measure of the efficiency of power conversion by the equipment. This value is very rarely specified by the manufacturer, so the value of 0.85 is the estimate used in the examples on the previously mentioned Web site.
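The arithmetic is easy to fold into a short script. The following Python sketch uses the estimated totals and the same assumed 0.85 power factor to compute the load, the available service, and the margin between them:

    # Compare the cluster's estimated load against the available service.
    SQRT_3 = 1.732   # three-phase factor
    PF = 0.85        # estimated power factor (rarely published; assumed here)

    total_load_watts = 62_040            # estimated total from Table 20-1
    volts, amps, circuits = 220, 100, 2  # two 220-V, 100-A three-phase circuits

    available_watts = circuits * volts * amps * SQRT_3 * PF
    margin = available_watts - total_load_watts

    print(f"Estimated load:  {total_load_watts / 1000:.2f} kW")   # 62.04 kW
    print(f"Available power: {available_watts / 1000:.2f} kW")    # 64.78 kW
    print(f"Margin:          {margin / 1000:.2f} kW")             # ~2.74 kW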

As you can see from the calculations, our estimated load of 62.04 kW is within about 3 kW of the 64.78 kW available from the power distribution. This is possibly too close for comfort. It would be best to consult with the local electrician.

Taking the Heat

Power flows into the computer room in the form of electrical energy and is supplied to the cluster's equipment, which converts it to heat in the process of doing work. To prevent the buildup of heat in the computer room, which can lead to equipment overheating and failure, the cooling system must have the appropriate capacity to remove the excess heat. A simple model of a computer room, with heat, power, and airflow is shown in Figure 20-3.

Figure 20-3. Simple computer room model

The example in Figure 20-3 shows an uninterruptable power supply (UPS) providing power to the cooling system and two racks of equipment. The racks are arranged to pull cold air from the cold aisles through the rack and exhaust it into the hot aisle. The hot air rises to the ceiling, where it is pulled through the cooling unit.

The cool air is pulled through the cooling units and is forced under the raised floor. (Notice that there is a pressure drop across the length of the under-the-floor duct. There is also flow constriction because of cable troughs and PDUs that block airflow under the floor.) The cool air is then fed to perforated tiles that supply air to the racks. The racks are fed power, which gets converted into heat. The equipment is rated in terms of Watts. We can also convert the power usage into British thermal units (or BTUs[1]) with the following equation:

  • BTUs/hour = kW × 3,413 = 62.04 × 3,413 = 211,743 BTUs/hour

The conversion factor, 3,413, is obtained by dividing the number of joules in a kilowatt-hour (3.6 × 10^6 joules) by the number of joules in a BTU (roughly 1,055). Oops, as you can see from the numbers, it looks like we have exceeded the 188,000-BTUs/hour capacity available from our single, existing cooling unit. We need to have the second unit installed to cope with the extra 23,743 BTUs/hour. Good thing we did the calculations before we put in the equipment!
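Here is the same check as a minimal Python sketch, using the 3,413 BTUs/hour-per-kW conversion factor derived above:

    # Convert the cluster's electrical load into a cooling load.
    BTU_PER_HOUR_PER_KW = 3413   # (3.6e6 J/kWh) / (~1,055 J/BTU)

    load_kw = 62.04              # estimated cluster load
    installed_cooling = 188_000  # BTUs/hour from the single existing unit

    heat = load_kw * BTU_PER_HOUR_PER_KW
    print(f"Heat load: {heat:,.0f} BTUs/hour")                      # ~211,743
    print(f"Shortfall: {heat - installed_cooling:,.0f} BTUs/hour")  # ~23,743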

Whenever you see the term tons used in reference to cooling capacity, it refers to a refrigeration ton. A refrigeration ton[2] is defined as

  • 1 Refrigeration Ton = (2,000 lb of ice × 144 BTU/lb) ÷ 24 hours = 12,000 BTUs/hour

Notice that this is expressed in units of energy delivered over time. Also notice that it takes 144 BTUs to melt one pound of ice at 32 degrees Fahrenheit. Our existing 18 tons of capacity yields 216,000 BTUs/hour. Assuming 87% efficiency (based on temperature differences and other factors) yields the documented 188,000 BTUs/hour in the plans for our facility.

Another useful conversion is

  • 1 Refrigeration Ton = 4.72 Horsepower = 3,516 W

This equation allows us to convert between refrigeration tons, Watts, and horsepower, and when coupled with the previous equation, allows us to convert to and from BTUs as well.
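These conversions are easy to fumble by hand, so here is a small Python sketch that encodes them and reproduces the facility's capacity numbers from the definitions above:

    # Refrigeration-ton conversions, reproducing the facility's numbers.
    BTU_PER_HOUR_PER_TON = 12_000   # (2,000 lb x 144 BTU/lb) / 24 hours
    WATTS_PER_TON = 3_516
    EFFICIENCY = 0.87               # from the facility plans

    tons_installed = 18
    raw = tons_installed * BTU_PER_HOUR_PER_TON   # 216,000 BTUs/hour
    effective = raw * EFFICIENCY                  # ~188,000 BTUs/hour

    print(f"Raw capacity:       {raw:,} BTUs/hour")
    print(f"Effective capacity: {effective:,.0f} BTUs/hour")   # 187,920
    print(f"Equivalent power:   {tons_installed * WATTS_PER_TON / 1000:.1f} kW")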

As a check, let's take a look at the manufacturer's specifications for the equipment BTU ratings. Note that the BTU rating for most equipment is really expressed as BTUs per hour, although they are rarely labeled that way. Table 20-2 lists the BTU ratings for our example equipment.

Table 20-2. Device BTU Ratings for Example Cluster Configuration

  Device Description                                     Number in  BTUs per   Total
                                                         Cluster    Device     BTUs
  -----------------------------------------------------  ---------  ---------  -------
  HP DL-360 G3, Dual 3.2-GHz Pentium Xeon CPU,
    2 GB RAM                                                   128      1,519  194,432
  HP DL-380 NFS Server, Dual 3.2-GHz Pentium Xeon CPU,
    6 GB RAM, dual power supply                                  1      1,475    1,475
  HP StorageWorks 4454R storage enclosure                        3      1,232    3,696
  HP Procurve 2848 switch                                        8        341    2,728
  HP Procurve 2650 switch                                        4        341    1,364
  Cyclades TS2000 serial port switch                             4        126      504
  Integrated keyboard/video/mouse                                1        170      170

The total BTUs per hour, based on the manufacturers' information, is 204,369. Notice that this number is close to the one obtained from the wattage ratings, the difference being 7,374 BTUs per hour (about 2.16 kW). This discrepancy is comparable to the margin between the predicted wattage and the available power, so it bears further investigation. Discrepancies of this sort are not unusual when using manufacturers' figures for BTUs and Watts, but the comparison serves as a useful check on our previous calculations.
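As a final check in script form, this Python sketch totals the manufacturer BTU figures from Table 20-2 and compares them with the BTUs derived from the wattage estimate; the device list simply mirrors the table rows:

    # Cross-check: manufacturer BTU ratings vs. BTUs derived from wattage.
    BTU_PER_HOUR_PER_KW = 3413

    # (description, count, BTUs/hour per device) -- mirrors Table 20-2
    devices = [
        ("HP DL-360 G3 compute slice",   128, 1_519),
        ("HP DL-380 NFS server",           1, 1_475),
        ("StorageWorks 4454R enclosure",   3, 1_232),
        ("Procurve 2848 switch",           8,   341),
        ("Procurve 2650 switch",           4,   341),
        ("Cyclades TS2000",                4,   126),
        ("Keyboard/video/mouse",           1,   170),
    ]

    manufacturer_btu = sum(count * btu for _, count, btu in devices)
    wattage_btu = 62.04 * BTU_PER_HOUR_PER_KW

    print(f"Manufacturer total: {manufacturer_btu:,} BTUs/hour")   # 204,369
    print(f"Wattage-derived:    {wattage_btu:,.0f} BTUs/hour")     # ~211,743
    print(f"Discrepancy:        {wattage_btu - manufacturer_btu:,.0f} BTUs/hour")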

Physical Constraints Summary

As a final note, each compute rack weighs 1,479 pounds and the management rack weighs 517 pounds. It is important that your raised floor is strong enough to support the cluster's racks. The weights are also important if you are going to ship the cluster racks.
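For a rough floor-loading estimate, the following Python sketch can help. It assumes the five racks in Figure 20-1 consist of four compute racks plus the single management rack; that split is an assumption for illustration, not a figure from the plans:

    # Rough floor-loading estimate.
    # Assumption: 4 compute racks + 1 management rack (the 5 racks of Figure 20-1).
    compute_racks, compute_lb = 4, 1_479
    mgmt_racks, mgmt_lb = 1, 517

    total_lb = compute_racks * compute_lb + mgmt_racks * mgmt_lb
    footprint_sqft = 80   # the 5-tile by 4-tile block from the space calculation

    print(f"Total weight: {total_lb:,} lb")   # 6,433 lb
    print(f"Average load: {total_lb / footprint_sqft:.0f} lb/sq ft")
    # Point loads under individual rack casters will be much higher than this
    # average; check the rated load of your floor tiles and pedestals.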

Ensuring that your cluster's expected home meets its requirements is important. Designing for the proper power, cooling, and space to support your cluster will not only help during the eventual operation of the cluster, but can help eliminate implementation delays. This chapter covers only the very basic approach to designing your cluster's new home; you should consult with experienced professionals when you have further questions.



[1] A unit of energy, equivalent to the heat required to raise one pound of water by one degree Fahrenheit, from 58.5 degrees F. to 59.5 degrees F. A BTU is equal to 1,054.4 joules. The metric equivalent, the thermodynamic calorie, is the amount of energy required to raise the temperature of one gram of water from 14.5 degrees Celsius to 15.5 degrees Celsius; it is equal to 4.184 joules.

[2] The amount of heat required to melt one ton of ice at 32 degrees Fahrenheit in a 24-hour period. Can you guess that this is a holdover from the days when ice was the only cooling source?
