
Is Data Center Airflow Still a Mystery?


Is data center airflow still a mystery? Pardon my bluntness: only if you let it be. Not everyone who operates a data center is a physicist, or a specialist in whatever branch of science covers air circulation in confined spaces, but they all have access to a variety of product offerings that do the heavy lifting for them. Despite this plethora of tools for identifying the areas of your data center that are hot enough to smelt metal, a romp through any number of “mission critical” facilities will still uncover a healthy complement of empty metal racks being bathed in the cool breezes delivered from the perforated tiles surrounding them. Since many of you reading this are “shocked…shocked” that this level of airflow apostasy persists in so many shrines to compute and storage, it makes sense to examine how and why many of our data center peers remain in a state of “airflow mystification”.

The Proper Tools For Data Center Airflow Management

We’ve probably all heard the old adage, “If a tree falls in the forest and there is no one there to hear it, does it make a sound?” Data center airflow is kind of like that, only different. To understand the path air takes through a data center, it helps to have the proper tool for the job, and the correct tool in this case is Computational Fluid Dynamics (CFD) software. The whole purpose of a CFD package is to provide a color-coded visual representation of the airflow in your data center (blue and green good; orange and red, not so much) based on the proposed layout, and to enable you to make any modifications that will improve your cooling and, subsequently, your energy efficiency. As we shall see, not all raised floor (or slab) layouts are created equal.

Interestingly, it is estimated that 80% of data center customers use CFD tools to optimize the original layout of the facility, but there is a precipitous drop-off in usage on an ongoing basis. This is, of course, a logical decision if one never expects the layout of the data center to change. In practice, that is rarely the case. At a minimum, most data centers perform hardware refreshes every 3-5 years, and in a world where not having a cloud presence of your very own, or at least plans for one, makes you the data center equivalent of Neanderthal man, maximizing the efficiency of your data center cooling methodology is a basic operational requirement.

Why Is CFD Modeling an Essential Component of Data Center Planning and Operation?

As self-evident as the answer to the question in the headline may seem, it never hurts to understand the underlying rationale for procuring and using CFD software, if for no other reason than someone in the finance department might ask. More than ever, data centers are evolving entities in which it’s easy to make deployments but not as easy to undo them. A failure to plan for the consequences of a revised data center layout on airflow and cooling can, and usually does, result in a gradual deterioration of your data center’s cooling capacity. In some cases, facilities begin experiencing cooling issues at loads significantly below design capacity, leaving a large volume of capacity that can never be used, a phenomenon commonly referred to as “stranded” capacity.
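As a rough illustration of how stranding can be quantified, the sketch below compares a room’s design cooling capacity against the load it can actually support once airflow problems cap the usable fraction. The capacity figure and the 80% usable fraction are assumptions chosen for the example, not numbers from this article.

```python
# Hedged sketch: estimating stranded cooling capacity.
# The capacity and "usable fraction" figures are illustrative assumptions.

def stranded_capacity_kw(design_capacity_kw: float, usable_fraction: float) -> float:
    """Return the cooling capacity (kW) that can no longer be used because
    airflow problems limit the load the room can actually support."""
    return design_capacity_kw * (1.0 - usable_fraction)

if __name__ == "__main__":
    design_kw = 1000.0       # nameplate cooling capacity of the room
    usable_fraction = 0.80   # assume recirculation/bypass caps usable cooling at 80%
    print(f"Stranded cooling capacity: {stranded_capacity_kw(design_kw, usable_fraction):.0f} kW")
```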

As previously stated, an estimated 80% of data center operators make use of CFD modeling, either themselves or, more commonly, through their data center provider, to evaluate and determine the optimal layout of a new facility. If this modeling is performed in concert with the initial commissioning process, thereby removing extraneous variables that could skew future readings, the site can be calibrated by establishing initial benchmarks that reflect the performance standards unique to that facility.

Operators who elect not to use a CFD modeling tool in a post-turnover environment forfeit the ability to evaluate the effectiveness of their cooling efforts and to assess the impact that a potential new configuration will have on airflow within the facility. In practical terms, CFD lets operators “pre-test” alternatives prior to implementation, using the concept of Cooling Path Management (CPM), to ensure that the most efficient solution is the one that is finally implemented.

CPM is the process of stepping through the full route taken by the cooling air and systematically minimizing or eliminating potential breakdowns along the way, with the ultimate goal of meeting the air intake requirement of each unit of IT equipment. Cooling paths are influenced by a number of variables, including the room configuration, the IT equipment and how it is arranged relative to other equipment, and any changes to the facility, such as AHU settings, cabinet arrangement and equipment placement, that fundamentally alter those paths. To proactively avoid cooling problems and inefficiencies that creep in over time, CPM is therefore essential both to the initial design of the room and to configuration management of the data center throughout its lifespan.
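As a minimal sketch of the kind of check CPM implies, one could compare the airflow each cabinet requires against what its cooling path actually delivers. The cabinet data below is invented, and the calculation uses the common “CFM ≈ 3.16 × watts ÷ ΔT(°F)” rule of thumb, not a method prescribed by this article.

```python
# Hedged sketch of a Cooling Path Management (CPM) style check: does each
# cabinet's cooling path deliver at least the airflow its IT load requires?
# Cabinet data is purely illustrative.

def required_cfm(it_load_watts: float, delta_t_f: float = 20.0) -> float:
    """Rule-of-thumb airflow requirement: CFM ~= 3.16 * watts / delta-T (deg F)."""
    return 3.16 * it_load_watts / delta_t_f

cabinets = [
    # (name, IT load in watts, airflow delivered to the intake in CFM)
    ("A01", 5000, 900),
    ("A02", 8000, 1100),
    ("A03", 12000, 1500),
]

for name, watts, delivered_cfm in cabinets:
    need = required_cfm(watts)
    status = "OK" if delivered_cfm >= need else "SHORTFALL"
    print(f"{name}: needs {need:.0f} CFM, path delivers {delivered_cfm} CFM -> {status}")
```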

The establishment of benchmarks and the evaluation of cooling paths enable CFD users to institute a program of “continuous modeling”. The value of continuous modeling is, as referenced above, the ability to test potential changes before moving the IT equipment in: a lot of important “what-ifs” can be answered (and costs avoided) while still meeting essential requirements such as availability, capacity and efficiency. Examples of continuous modeling applications include (a simple illustration follows the list):

  1. Creating custom cabinet layouts to predict the impact of various configurations
  2. Increasing cabinet power density or modeling custom cabinets
  3. Modeling hot aisle/cold aisle containment
  4. Changing the control systems that regulate VFDs to move capacity where needed
  5. Increasing the air temperature safely without breaking a temperature SLA
  6. Simulating upcoming AHU maintenance or AHU failure scenarios that can’t safely be tested in a production environment
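To make item 6 concrete, a continuous model effectively answers questions like: if one AHU is taken offline, can the remaining units still carry the IT load? The unit counts, capacities and load in the sketch below are assumptions for illustration, not figures from the article.

```python
# Hedged sketch of an AHU failure "what-if": check whether the remaining
# air handlers can still cover the cooling load with units out of service.
# Capacities and load are illustrative assumptions.

ahu_capacity_kw = 250.0   # cooling capacity per AHU
ahu_count = 5             # installed units
it_load_kw = 900.0        # current IT load to be cooled

def remaining_margin_kw(units_offline: int) -> float:
    """Cooling headroom (kW) with the given number of AHUs out of service."""
    return (ahu_count - units_offline) * ahu_capacity_kw - it_load_kw

for offline in (0, 1, 2):
    margin = remaining_margin_kw(offline)
    verdict = "covered" if margin >= 0 else "NOT covered"
    print(f"{offline} AHU(s) offline: margin {margin:+.0f} kW -> load {verdict}")
```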

In each of these applications, the appropriate modeling tools are used in concert with the initial calibration data to determine the best method of implementing a desired change. The ability to proactively identify how far the site has drifted from its initial benchmarks can aid in identifying more effective alternatives that not only improve operational performance, but also reduce the time and cost associated with their implementation.
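A minimal sketch of that benchmark comparison might look like the following; the metrics, values and 10% alert threshold are all assumptions for illustration.

```python
# Hedged sketch: flag metrics that have drifted from the benchmarks
# captured at commissioning.  Values and threshold are illustrative.

benchmarks = {"avg_intake_temp_c": 22.0, "supply_return_delta_t_c": 10.0, "pue": 1.35}
current    = {"avg_intake_temp_c": 24.5, "supply_return_delta_t_c": 7.5,  "pue": 1.48}

THRESHOLD = 0.10  # flag anything more than 10% away from its benchmark

for metric, baseline in benchmarks.items():
    deviation = (current[metric] - baseline) / baseline
    flag = "REVIEW" if abs(deviation) > THRESHOLD else "ok"
    print(f"{metric}: {deviation:+.1%} vs benchmark -> {flag}")
```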

Why Don’t All Data Center Operators Use CFD Software?

Although there are any number of reasons an operator can, or has, cited for eschewing the ongoing use of CFD tools, they all basically align under two main themes:

  1. There is no one on staff that has the skills to make effective use of these software packages
  2. We can’t afford it

The lack of skills argument seems to strain the boundaries of credulity since the average data center tech tends to be a little more computer literate than your grandmother. When this high degree of computer literacy is coupled with the training programs offered by most CFD vendors, mastery of the software seems to be an achievable goal.

The second most common reason for not acquiring a CFD platform is cost. While “it’s too expensive” does tend to be an acceptable excuse for most data center related software applications (DCIM immediately springs to mind), it rings a little hollow when compared to the ongoing cost of efficiently cooling your company’s multi-million dollar investment. To put the disparity into perspective, a 2015 IDC data center survey found that 24% of a data center budget is associated with cooling. Applied to an average annual data center budget of $1.2M, that works out to roughly $288,000 a year spent on cooling. For those of you who still cling to list price as the reason for not adding CFD functionality to your repertoire, you may want to reacquaint yourself with the old adage, “penny wise, pound foolish”.
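For the record, that figure is simply the survey percentage applied to the quoted budget; the sketch below spells out the arithmetic, and the 10% efficiency gain in the second step is a hypothetical assumption, not a promised result.

```python
# The cooling-spend arithmetic from the paragraph above, plus a hedged
# "what would a 10% efficiency gain be worth" comparison.  The savings
# rate is an illustrative assumption.

annual_budget = 1_200_000      # average annual data center budget ($)
cooling_share = 0.24           # portion spent on cooling (2015 IDC survey)
cooling_spend = annual_budget * cooling_share
print(f"Annual cooling spend: ${cooling_spend:,.0f}")   # ~$288,000

assumed_savings_rate = 0.10    # assume CFD-guided changes trim cooling spend by 10%
print(f"Hypothetical annual savings: ${cooling_spend * assumed_savings_rate:,.0f}")
```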

I Still Don’t Need a CFD Solution, So How Do I De-Mystify?

When your favorite football team is on a losing streak, you’ll frequently hear the head coach say, “We need to get back to basics.” That is certainly a good strategy, and one that applies to data centers as well. Even operators who still find CFD solutions to be an unnecessary luxury can demonstrate a basic understanding of the mysteries of airflow by using the following strategies within their facilities. (If you haven’t implemented these in your own data center, following these guidelines, along with the simple temperature check sketched after the list, will at least take your mystification level down a few notches.)

  1. Re-check your number of perforated tiles—Frequently this is a problem that arises from data center reconfigurations and hardware refreshes. The key areas to look at are your hot aisles and whitespace areas, and diagnosing the problem is pretty straightforward since there shouldn’t be any perf tiles in these locations.
  2. Seal your unsealed openings—Just when you thought you were done with this one, very often it turns out that you probably aren’t. Common areas for leakage may be found under electrical gear such as PDU’s and power panels. Be vigilant.
  3. Make blanking panels your friend—You just can’t have too many of these guys covering the open spaces in your racks and, chances are, you don’t. Recirculation through uncovered slots can raise the intake temperature of IT equipment by as much as 15°F.
  4. Check your temperature settings—If you’re still wearing a sweater when you reset a server, you might want to review ASHRAE’s expanded operating temperature guidelines to see where you fall on the psychrometric chart and evaluate how much you would save by raising your facility’s operating temperature a few degrees.
  5. While you’re at it check those humidity settings too—We all know that cooler air is drier air and that means a higher potential for static-electrical discharges, but your facility shouldn’t feel like a tropical rain forest either. Review these as part of your exercise with 4 above.
  6. Poorly calibrated temperature and humidity sensors—You should check your temperature and relative humidity sensors at least once every six months and recalibrate if necessary.
  7. Use hot aisle/cold aisle—Most of you aren’t guilty of this, but a study by Schneider Electric found that 25% of data centers are still not using this technique.
  8. Empty cabinet spaces—When one or more cabinet spaces are left empty your airflow balance can become, well…out of balance. This situation can lead to the recirculation of exhaust air into the cold aisle or loss of cool air from the cold aisle.
  9. Poor rack layout—Deviations from the desired hot aisle/cold aisle configuration, with CRACs or AHUs at the end of each row, such as small islands of racks or rows oriented front-to-back, are going to reduce your effective airflow.
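As a companion to items 4 through 6, here is a minimal sketch of a periodic intake-temperature check against ASHRAE’s recommended envelope of roughly 18-27°C; the cabinet readings are invented for illustration.

```python
# Hedged sketch: flag cabinets whose intake temperature falls outside the
# ASHRAE recommended envelope (~18-27 C).  Readings are illustrative.

ASHRAE_LOW_C, ASHRAE_HIGH_C = 18.0, 27.0

intake_readings_c = {
    "A01": 21.5,
    "A02": 28.3,   # likely recirculation or a missing blanking panel
    "B07": 17.2,   # possibly overcooled -- wasted energy
}

for cabinet, temp_c in intake_readings_c.items():
    if temp_c > ASHRAE_HIGH_C:
        print(f"{cabinet}: {temp_c:.1f} C -- above recommended range, check airflow")
    elif temp_c < ASHRAE_LOW_C:
        print(f"{cabinet}: {temp_c:.1f} C -- below recommended range, consider raising setpoints")
    else:
        print(f"{cabinet}: {temp_c:.1f} C -- within recommended range")
```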

Summary of Data Center Airflow

While there is no “mystery” to the effects of airflow within a data center, the real question is to what degree the available tools and practices are being used within a specific facility. At the most basic level, the fundamentals of optimizing airflow, through actions like perforated tile placement and the use of blanking panels, have been well documented, albeit not yet universally implemented. A CFD modeling tool gives a data center operator a multi-faceted means of managing airflow within the site, which translates into more efficient energy utilization. Used on a continuing basis, these tools not only ensure the optimization of the site’s initial layout, but also enable the operator to establish the specific benchmarks needed for ongoing performance evaluation and capacity planning for future expansion. In short, the degree of mystery involving data center airflow is a function of the tool sets used within the facility.