CHAPTER 6: DATA CENTER TECHNICAL OPPORTUNITIES

Given the state of energy consumption in the data center, it is important to identify the improvement opportunities worth exploring.

Prioritizing where to look

Based on studies by The Green Grid, the top four categories that account for 90% of power consumption are the chiller (33%), IT hardware (30%), the uninterruptible power supply (UPS, 18%) and the computer room air conditioner (CRAC, 9%)12.

Given that data, these are the four general areas to investigate initially. As more data is collected and insight generated, there will be specific improvement opportunities that will need to be reviewed and prioritized accordingly.
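To make the prioritization concrete, the short Python sketch below translates these category percentages into absolute power figures for a hypothetical facility; the 1,000 kW total load is an assumption chosen purely for illustration.

# Minimal sketch: translate The Green Grid's category percentages into
# absolute power figures for a hypothetical facility, to see where the
# largest savings opportunities sit. The 1,000 kW total load is an
# illustrative assumption, not a measurement.

FACILITY_LOAD_KW = 1_000  # hypothetical total data center load

CATEGORY_SHARE = {
    "Chiller": 0.33,
    "IT hardware": 0.30,
    "UPS": 0.18,
    "CRAC": 0.09,
}

for category, share in sorted(CATEGORY_SHARE.items(), key=lambda kv: -kv[1]):
    print(f"{category:<12} {share * FACILITY_LOAD_KW:8.0f} kW ({share:.0%})")

covered = sum(CATEGORY_SHARE.values())
print(f"{'Top four':<12} {covered * FACILITY_LOAD_KW:8.0f} kW ({covered:.0%})")

Even a rough breakdown like this makes clear that a percentage point saved on the chiller or the IT hardware is worth far more, in absolute terms, than the same percentage saved on a smaller category.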

Ghost equipment

Up to 30% of the servers in a data center may be abandoned yet continue to consume power and incur other costs without delivering any value13. This can be true for other equipment in the data center as well.

Depending on the type of data center, a 1U server with a $3,000 purchase price could cost an organization between $1,600 and $2,300 in operating and capital expenses, of which $700-750 comes from electricity14. These numbers further highlight that servers should never be viewed as free and that the initial purchase cost is only a portion of what IT will spend over the lifetime of a server.
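As a rough illustration of that point, the sketch below treats the mid-points of the ranges quoted above as costs on top of the purchase price, following the sentence above; all figures are illustrative.

# Minimal sketch: compare a server's purchase price against the operating
# and capital expenses quoted in the text (mid-points of the ranges are
# used here purely for illustration).

PURCHASE_PRICE = 3_000             # 1U server purchase price (from the text)
OPEX_CAPEX = (1_600 + 2_300) / 2   # mid-point of the $1,600-2,300 range
ELECTRICITY = (700 + 750) / 2      # mid-point of the $700-750 range

total = PURCHASE_PRICE + OPEX_CAPEX
print(f"Purchase price:          ${PURCHASE_PRICE:,.0f}")
print(f"Operating/capital:       ${OPEX_CAPEX:,.0f}")
print(f"  of which electricity:  ${ELECTRICITY:,.0f}")
print(f"Purchase share of total: {PURCHASE_PRICE / total:.0%}")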

Conducting an inventory or using automated monitoring tools to detect ghost systems can yield benefits at a very low cost. For example, by removing this ghost equipment, data center space can be freed up, servers and software licenses reclaimed to support new activity without incremental costs, electrical demand decreased, and operations related costs reduced.
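As a sketch of how monitoring data might be used to flag candidates, the Python below applies simple idle-and-untouched thresholds to an inventory. The data source, field names and thresholds are all illustrative assumptions, and any flagged server should be reviewed with its owner rather than retired automatically.

# Minimal sketch: flag candidate ghost servers from monitoring data.
# The field names and thresholds below are illustrative assumptions;
# real criteria should come from your own monitoring tools and a review
# with the application owners before anything is retired.

from dataclasses import dataclass

@dataclass
class ServerStats:
    name: str
    avg_cpu_pct: float        # average CPU utilization over the review window
    peak_network_kbps: float  # peak network traffic over the review window
    last_login_days: int      # days since an administrator last logged in

def is_ghost_candidate(stats: ServerStats,
                       cpu_threshold=2.0,
                       network_threshold=10.0,
                       login_threshold_days=180) -> bool:
    # A server that is nearly idle, pushes almost no traffic and has not
    # been touched in months is worth investigating, not auto-decommissioning.
    return (stats.avg_cpu_pct < cpu_threshold
            and stats.peak_network_kbps < network_threshold
            and stats.last_login_days > login_threshold_days)

inventory = [
    ServerStats("app-web-01", 35.0, 4_500.0, 3),
    ServerStats("legacy-batch-07", 0.4, 2.1, 400),
]

for server in inventory:
    if is_ghost_candidate(server):
        print(f"Investigate possible ghost server: {server.name}")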

Underutilization

In general, as utilization goes down, efficiency goes down as well. This becomes an issue when IT oversizes systems on an ad hoc basis to prepare for future growth: the systems run at a fraction of capacity and, thus, at a lower efficiency. A server running at 90% of capacity is more efficient than one at 50%. Likewise, a power supply, UPS or CRAC running at 90% will be more efficient. With server and facility utilization at only 6% and 56% respectively, improvements are needed15.

The objective should be to investigate how to consolidate loads and boost the utilization of these various types of devices through proper sizing during procurement, consolidation of devices, virtualization, proper airflow and so forth, depending on the type of device in question.
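A minimal consolidation sketch, assuming a uniform fleet and a freely divisible load, shows how repacking the aggregate work onto hosts run at a higher target utilization shrinks the server count; the fleet size and the 60% target are hypothetical.

# Minimal sketch: estimate how far a fleet could be consolidated if the
# aggregate load were repacked onto hosts run at a higher target
# utilization. Assumes identical servers and a perfectly divisible load;
# the fleet size and targets are illustrative assumptions.

import math

CURRENT_SERVERS = 200
CURRENT_AVG_UTILIZATION = 0.06   # 6%, the figure cited above
TARGET_UTILIZATION = 0.60        # hypothetical post-consolidation goal

aggregate_load = CURRENT_SERVERS * CURRENT_AVG_UTILIZATION  # in server-equivalents
servers_needed = math.ceil(aggregate_load / TARGET_UTILIZATION)

print(f"Aggregate load: {aggregate_load:.1f} server-equivalents")
print(f"Servers needed at {TARGET_UTILIZATION:.0%} utilization: {servers_needed}")
print(f"Potential reduction: {CURRENT_SERVERS - servers_needed} servers")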

Thinking in zones

The industry has migrated away from thinking about power and cooling demands in terms of averages per square foot of the data center. Given today’s high-density, high-performance computing systems, the power consumed and heat generated are extremely concentrated and can vary dramatically from the per-square-foot average an older data center was designed to support.

As a result, existing power trunks and cooling systems designed for average loads per square foot may be unable to meet the needs of a 30,000-watt blade server or a series of them in a relatively small area.

Looking to the future, while 30,000 watts per rack may sound like a lot, there are already discussions of 60,000-70,000 watt rack-level demands. Not only are demands going up, but data centers will assuredly see greater variation in power and cooling demands across specific areas of production and will need to plan accordingly. The more modular and flexible the power and cooling infrastructure, the longer the facility can remain viable.
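The sketch below contrasts a facility’s average design density with the local density of one high-power rack; the 100 W per square foot design figure and the 25 square foot footprint (rack plus its share of aisle space) are assumptions for illustration.

# Minimal sketch: contrast a facility's average design density with the
# local density of a high-power rack. The design figure and the rack
# footprint are illustrative assumptions.

DESIGN_DENSITY_W_PER_SQFT = 100   # hypothetical legacy design average
RACK_POWER_W = 30_000             # rack-level figure cited above
RACK_FOOTPRINT_SQFT = 25          # assumed rack plus aisle allocation

local_density = RACK_POWER_W / RACK_FOOTPRINT_SQFT
print(f"Local density:  {local_density:,.0f} W/sq ft")
print(f"Design average: {DESIGN_DENSITY_W_PER_SQFT:,.0f} W/sq ft")
print(f"Ratio: {local_density / DESIGN_DENSITY_W_PER_SQFT:.0f}x the design average")

A gap of this magnitude is why zone-level, rather than facility-average, planning matters: the power and cooling have to be delivered where the dense racks actually sit.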

Hot-cold aisle approach

One zoning approach that can work in existing and new data centers is to design the rack layout such that there are partitioned dedicated hot and cold aisles wherein cool air is drawn in through the front of the racks and hot air is exhausted out the back via the cooling fans.

By engineering the hot aisle to be as hot as possible, the hot exhaust air can be cooled by economizers that leverage cooler external temperatures for relatively free cooling16.

Moreover, the CRACs are not unnecessarily cooling mixed hot and cold air, so the cooling system runs more efficiently while delivering colder, dedicated air directly to the servers, which leads to the next point.

Air flow

For IT equipment to be cooled effectively by air, the air flow must be optimized (a rough airflow-sizing sketch follows the list below). This means that:

•  Obstructions to air flow, such as abandoned wire and trash, should be removed from under raised floors.

•  Face blanks and cable seals should be used to prevent uncontrolled airflow through and around racks.

•  Partitions should be used to segregate hot and cold aisles.

•  Perforated floor tiles should be located in the correct positions.

•  Return air plenums should be located in the correct locations.

•  CRACs should be located as close to the IT equipment as possible to minimize the lengths of cooling ducts.

•  Raised floors should be at least 24 inches deep17.
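As promised above, here is a rough airflow-sizing sketch using the standard sensible-heat relationship for air (BTU/hr ≈ 1.08 × CFM × ΔT°F, with 1 watt ≈ 3.412 BTU/hr); the 30 kW rack load and the 20°F temperature rise are assumptions for illustration.

# Minimal sketch: estimate the airflow a rack needs to carry away its heat,
# using the standard sensible-heat relationship BTU/hr = 1.08 * CFM * dT(F).
# The rack load and allowed temperature rise are illustrative assumptions.

WATTS_TO_BTU_HR = 3.412
SENSIBLE_HEAT_FACTOR = 1.08   # approximate constant for air near sea level

rack_load_w = 30_000          # hypothetical high-density rack
delta_t_f = 20                # assumed intake-to-exhaust temperature rise

required_cfm = rack_load_w * WATTS_TO_BTU_HR / (SENSIBLE_HEAT_FACTOR * delta_t_f)
print(f"Airflow required: {required_cfm:,.0f} CFM")

If blocked tiles, missing blanking panels or under-floor obstructions deliver less than this volume to the cold aisle, the equipment either overheats or the CRACs must work harder to compensate, which is why the checklist above matters.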

Age of equipment

To make a broad generalization, older equipment is less efficient and has fewer power saving features than newer equipment. This tends to be true for chillers, IT hardware, UPSes, CRACs and so on.

When assessing replacements, bear in mind that there is a lot of sales puffery, so a buyer must be cautious. For example, the power requirements quoted in a server’s marketing brochure may not remotely reflect the configuration your organization actually needs, which can have a far higher power demand.

Thus, review both existing and new equipment, get power requirements in writing as part of the contract, and hold the vendor accountable.

Wherever possible, test the equipment in your environment, or a similar one, to verify the actual demands that will be placed on the power and cooling systems.
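A simple way to quantify the gap the preceding paragraphs warn about is to compare the brochure figure with a measured draw for your configuration and translate the difference into annual energy and cost; the wattages, electricity rate and 24x7 operation below are illustrative assumptions.

# Minimal sketch: compare a vendor-quoted power figure with a measured draw
# for the configuration actually deployed, and translate the gap into annual
# energy and cost. All figures are illustrative assumptions; cooling overhead
# would add further cost on top of this.

HOURS_PER_YEAR = 8_760

quoted_w = 350           # figure from a (hypothetical) marketing brochure
measured_w = 520         # measured draw of the fully configured server
electricity_rate = 0.10  # assumed $ per kWh

gap_w = measured_w - quoted_w
annual_kwh_gap = gap_w * HOURS_PER_YEAR / 1_000
print(f"Power gap:         {gap_w} W per server")
print(f"Annual energy gap: {annual_kwh_gap:,.0f} kWh per server")
print(f"Annual cost gap:   ${annual_kwh_gap * electricity_rate:,.0f} per server")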

12  “Guidelines for Energy-Efficient Datacenters”, The Green Grid, 16 February 2007, http://www.thegreengrid.org.

13  “Data Center Dirty Secrets”, Kenneth G Brill, Forbes, 30 June 2008, http://www.forbes.com/2008/06/29/google-microsoft-economics-tech-cio-cx_kgb_0630goog.html.

14  “Ken Brill Speaks: What Constitutes Green IT”, A Vodcast by Uptime Institute, www.uptimeinstitute.org/content/view/159/57/.

15  “Revolutionizing Data Center Efficiency – McKinsey/Institute Report Released”, Will Forrest, Podcast and PowerPoint at the Uptime Institute, http://www.uptimeinstitute.org.

16  Of course, cooler locations will provide even greater savings compared to warmer locations. This can also vary from season to season but there are still savings when engineered properly.

17  “The Green Data Center – Steps for the Journey”, Mike Ebbers, Alvin Galea, Marc Tu Duy Khiem, and Michael Schaefer, IBM Redpaper draft, 2 June 2008.
