What Is Data Center Cooling?
Data centers use a significant amount of energy to power servers, and those servers generate a lot of heat. All that heat can raise the temperature of hardware in data centers to a point where equipment could be damaged or unsafe. That’s why data centers have cooling systems that keep hardware at a temperature that allows it to run safely and efficiently.
The American Society of Heating, Refrigerating, and Air Conditioning Engineers (ASHRAE) 2015 Thermal Guidelines recommend an equipment environment of 18 to 27°C (~64 to 80°F) for all classes of data center equipment, with a different allowable range for each class.
Why Data Center Cooling Matters
Data center cooling is one of the most important areas of innovation for data centers. A data center’s ability to control heat impacts its tier, a classification created by The Uptime Institute that evaluates the center’s reliability and ability to meet businesses’ needs. Data center computer systems and IT hardware run 24/7 to ensure the servers behind our digital economy are continuously available. Maintaining reliability and efficiency is of the utmost importance.
Keeping always-on data centers cool can come at a significant energy cost. According to an analysis of energy company earnings calls, a large data center requires the same amount of electricity as 750,000 homes. Since cooling systems use an estimated 50% of a data center’s power, an efficient system can have a massive impact on the overall cost to run a data center and on its environmental footprint.
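To see what that 50% share implies, here is a minimal back-of-the-envelope sketch; the IT load, cooling share and electricity price are illustrative assumptions, not figures from any particular facility:

```python
# Back-of-the-envelope: how the cooling share drives a facility's power bill.
# All figures below are illustrative assumptions, not measured data.

it_load_mw = 30.0        # assumed IT (server) load of a large facility, in megawatts
cooling_share = 0.5      # assumption: cooling uses ~50% of total facility power
price_per_mwh = 80.0     # assumed electricity price, USD per MWh

# If cooling is 50% of total power, total power is roughly IT load / (1 - 0.5).
total_mw = it_load_mw / (1 - cooling_share)
cooling_mw = total_mw - it_load_mw

annual_cooling_cost = cooling_mw * 24 * 365 * price_per_mwh
print(f"Cooling load: {cooling_mw:.1f} MW")
print(f"Annual cooling electricity cost: ${annual_cooling_cost:,.0f}")
```

Even under these simplified assumptions, shaving a few percentage points off the cooling share translates into millions of dollars per year for a large facility.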
Data Center Cooling Systems
Data center operators have achieved significant improvements in the energy efficiency of cooling systems over the past 50 years. Let’s break down the history of data center cooling, and how it has evolved.
The Early Days of Data Center Cooling
From the 1970s to 2000, most data centers were cooled by pressurized air. Raised floor systems delivered cold air from a computer room air conditioner (CRAC) or computer room air handler (CRAH). Perforated floor tiles allowed cold air to rise from the pressurized space below into the main room. The cool air mixed with the hot air in the room until the space reached the required temperature and humidity – much like how a room in a house or office is cooled by air conditioning. After the hardware in the room heated the air, it returned to the pressurized space below to be cooled again.
Raised floor systems are still used by some data centers today, but they are expensive, inefficient and tend to cool spaces unevenly. As data centers grew in size and output in the 2000s, operators had to consider how increasing loads were impacting cooling systems’ performance. It was clear that, over time, innovation was needed to transform cooling systems and make them more efficient.
Evolution of Cooling Technologies
In the 2000s, data center cooling evolved from basic air conditioning to more sophisticated systems. Hot and cold aisle containment strategically organizes server racks and other hardware in alternating rows: rack fronts draw cool supply air from the cold aisles, while hot exhaust air is expelled into the hot aisles behind them. Containment systems keep the hot and cold aisles separated and control air pressure so the two air streams don’t mix. This method of cooling is more energy-efficient than raised floor systems and allows for more consistent temperatures.
Air cooling is reliable, easy to implement and used by many data centers today. However, it still uses a significant amount of energy – especially in hot climates. Additionally, growing AI workloads are pushing air cooling systems to the brink, forcing data centers to innovate further.
Modern Cooling Technologies
As computing densities have increased with processing-intensive applications like AI training, racks in data centers are generating more heat. This requires advanced cooling systems to avoid equipment failure and safety issues.
Facing the limitations of air-cooling systems, data center developers have been exploring new methods to make cooling systems even more efficient. This has led to a transition to more advanced thermal management techniques, like evaporative cooling and liquid cooling.
Here’s a breakdown of modern cooling systems data centers are adopting today:
Evaporative Cooling
Evaporative cooling, sometimes called adiabatic cooling, uses the evaporation of water to cool the air in a data center. Outside air is drawn through wet cooling pads, and as the water evaporates it absorbs heat from the air, lowering its temperature. This approach uses less energy than mechanical air conditioning.
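As a rough sketch of the arithmetic involved (the temperatures and pad effectiveness below are illustrative assumptions; real performance depends heavily on local humidity):

```python
# Direct evaporative cooling: supply air approaches the wet-bulb temperature.
# The inputs below are illustrative assumptions, not vendor data.

dry_bulb_c = 35.0         # assumed outside air temperature (°C)
wet_bulb_c = 22.0         # assumed wet-bulb temperature, which depends on humidity
pad_effectiveness = 0.85  # assumed saturation effectiveness of the wet cooling pads

# Standard direct-evaporative relation:
# supply = dry_bulb - effectiveness * (dry_bulb - wet_bulb)
supply_c = dry_bulb_c - pad_effectiveness * (dry_bulb_c - wet_bulb_c)
print(f"Supply air temperature: {supply_c:.1f} °C")  # ~24 °C in this example
```

The same relation also shows the method’s limit: in humid climates the wet-bulb temperature is close to the dry-bulb temperature, so there is little cooling to be gained.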
Immersion Cooling
With immersion cooling, hardware is submerged in a dielectric fluid that absorbs its heat without conducting electricity. Immersion cooling is more efficient than air cooling because liquid absorbs far more heat than air does. The heated fluid is then circulated through heat exchangers, which carry the heat away. A simple way to think about it is to imagine jumping into a swimming pool after going for a run on a hot day. You’re going to cool down much faster than if you stood in front of a fan.
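A back-of-the-envelope comparison shows why; the fluid properties below are ballpark assumptions for a generic dielectric coolant, not data for any specific product:

```python
# Back-of-the-envelope: heat absorbed per cubic metre per degree of temperature rise.
# Property values are ballpark assumptions, not data for any specific coolant.

air_density = 1.2             # kg/m^3, air at room conditions
air_specific_heat = 1005.0    # J/(kg*K)

fluid_density = 850.0         # kg/m^3, generic dielectric coolant (assumed)
fluid_specific_heat = 2100.0  # J/(kg*K), generic dielectric coolant (assumed)

air_volumetric = air_density * air_specific_heat        # ~1.2 kJ per m^3 per K
fluid_volumetric = fluid_density * fluid_specific_heat  # ~1.8 MJ per m^3 per K

print(f"The liquid absorbs roughly {fluid_volumetric / air_volumetric:.0f}x "
      "more heat than air per unit volume per degree")
```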
There are two types of immersion cooling: single-phase and two-phase.
- Single-phase immersion cooling keeps the dielectric fluid in its liquid state, circulating it past hot components and through a heat exchanger.
- Two-phase immersion cooling uses fluorocarbon fluids that boil at low temperatures. Heat from the components boils the fluid, and the resulting vapor is recovered, condensed and returned to the bath.
Liquid Cooling
Liquid cooling encompasses immersion cooling, as well as direct-to-chip cooling, where coolant is circulated directly to components that generate heat.
Other types of liquid cooling include in-rack cooling, which is a more targeted type of cooling for high-density data centers, and cold-plate cooling, which uses a cold plate attached to an individual component.
Depending on the equipment inside a data center, liquid coolants may be more effective than air cooling systems.
Energy-Efficient Cooling and Sustainability
Demand for data centers is only increasing, and with it, data centers’ use of energy. Data centers already account for roughly 4% of global energy consumption, and The Electric Power Research Institute predicts that by 2030, they will use up to 9% of total electricity in the US alone. Goldman Sachs predicts that AI will drive a 160% increase in data center power demand by 2030.
This threatens the data center industry’s ability to continue developing to meet demand, as policymakers and citizens express concerns around an already-strained energy grid, carbon emissions and rising costs. This makes efficient operations that minimize data centers’ energy use and carbon footprint even more critical. Liquid cooling is also becoming more compelling, as analysis finds it does not cost more than air cooling.
Future Trends in Data Center Cooling
Almost 40% of data centers are using liquid cooling technology today. Going forward, energy-efficient cooling systems will likely be the default for new data centers, and we’ll continue to see legacy data centers transition from air to liquid (or hybrid) cooling operations. With AI workloads straining air cooling systems, this transition looks inevitable.
Additionally, data centers are continuing to pioneer new technologies that make liquid cooling even more efficient. IoT technologies, for example, can predict when maintenance is needed and optimize water management. There are many exciting prospects of liquid cooling that are yet to be realized as data centers take energy efficiency and optimization to the next level.
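As a toy illustration of the predictive-maintenance idea, a monitoring system can extrapolate a drifting coolant temperature to estimate when it will cross an alarm threshold; the readings, threshold and simple linear trend below are all assumptions for illustration, not a description of any real product:

```python
# Toy sketch: flag cooling maintenance before a coolant loop hits its alarm threshold.
# Readings and threshold are made-up illustrative values.

readings_c = [30.1, 30.4, 30.9, 31.3, 31.8]  # hourly coolant temperatures (assumed)
alarm_c = 35.0                                # assumed alarm threshold (°C)

# Simple per-hour rate of change across the window.
rate = (readings_c[-1] - readings_c[0]) / (len(readings_c) - 1)

if rate > 0:
    hours_to_alarm = (alarm_c - readings_c[-1]) / rate
    print(f"At the current drift, the alarm threshold is ~{hours_to_alarm:.0f} hours away")
else:
    print("Coolant temperature is stable; no maintenance flag")
```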
Want to keep exploring the future of data center cooling? Tune in to an episode of the Not Your Father’s Data Center podcast, where host Raymond Hawkins delves into Data Center Energy Consumption, discussing innovations in cooling, power management, and the latest trends in renewable energy integration. Don’t miss it!