Hyper-converged infrastructure can cause new data center cooling challenges. Before implementation, determine which temperatures and cooling units will work best.
Despite the benefits of hyper-converged infrastructure, like simplified IT management, these systems can present unique challenges from a data center cooling standpoint.
When you mount hyper-converged infrastructure boxes in racks, you create high heat loads in a dense space. It’s important to keep the cooling paths unobstructed, but getting enough cooling air to the boxes at the right temperature, and removing the heat they give off, is more difficult than with conventional servers.
The principles are no different from those for any other IT hardware, but the compactness of hyper-converged infrastructure (HCI) — and the ease with which you can fill a cabinet with it — can make efficient data center cooling systems difficult to create. Many data centers aren’t equipped to handle cooling requirements for hyper-converged servers — but with the right techniques, you can adapt to make your data center as efficient as possible.
The power supplies for HCI are often rated well above 1,000 watts (W), which means the boxes give off a lot of heat. A full rack of 2U-high HCI boxes could draw 25-30 kW, while typical 1U servers are about 350-500 W each. Even if you’re doing all the right things to minimize air mixing, such as implementing hot and cold aisles, blanking panels, filler panels and containment, you’re still likely to be under-cooled, even if you have the theoretical cooling capacity to handle the loads.
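For a rough sense of the difference, here's a minimal sketch in Python; the 42U rack, the hypothetical 1,600 W HCI power supplies and the 75-90% draw range are illustrative assumptions, while the 350-500 W figure for 1U servers comes from above:

    # Rough rack heat-load comparison (illustrative figures, not measurements).
    RACK_UNITS = 42

    # Hypothetical full rack of 2U HCI boxes with 1,600 W power supplies.
    hci_boxes = RACK_UNITS // 2                 # 21 boxes
    hci_rack_watts = hci_boxes * 1_600          # 33,600 W at the nameplate rating
    print(f"HCI rack, nameplate: {hci_rack_watts / 1000:.1f} kW")
    print(f"HCI rack, ~75-90% draw: {hci_rack_watts * 0.75 / 1000:.1f}-{hci_rack_watts * 0.9 / 1000:.1f} kW")

    # Conventional rack of 1U servers at 350-500 W each.
    print(f"1U-server rack: {RACK_UNITS * 350 / 1000:.1f}-{RACK_UNITS * 500 / 1000:.1f} kW")

Even at partial draw, the HCI rack lands in the 25-30 kW range, roughly twice a fully loaded rack of conventional 1U servers.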
What are the best data center cooling systems for HCI?
Operate close to the top of the ASHRAE-recommended inlet temperature range, 80.6 degrees Fahrenheit (27 degrees Celsius), not only to save energy but also to boost air conditioner cooling capacity. A higher inlet temperature should result in a commensurately higher return air temperature to the air conditioners. A typical 20-ton computer room air conditioning (CRAC) unit rated at 84 kW of capacity with 75-degree-Fahrenheit return air can provide 137 kW of cooling with 90-degree-Fahrenheit return air. That's quite a difference, but the extra capacity doesn't guarantee adequate cooling unless the unit can actually deliver it to the computing equipment.
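To see why return air temperature matters, note that a unit's sensible capacity at a fixed airflow is roughly 1.08 x CFM x (return temperature minus supply temperature), the same relationship behind the airflow formula later in this article. Here's a minimal sketch, assuming 12,000 CFM and roughly 54-degree supply air; these are illustrative assumptions, not manufacturer data:

    # Rough sensible-capacity estimate: BTU/hr = 1.08 x CFM x (return - supply).
    # Assumed figures for illustration; check your CRAC's actual performance data.
    CFM = 12_000          # airflow of a typical 20-ton CRAC
    SUPPLY_F = 54         # assumed supply (discharge) air temperature, degrees F

    def sensible_kw(return_f: float) -> float:
        btu_per_hr = 1.08 * CFM * (return_f - SUPPLY_F)
        return btu_per_hr / 3_414    # convert BTU/hr to kW

    print(f"75 F return air: ~{sensible_kw(75):.0f} kW")   # ~80 kW
    print(f"90 F return air: ~{sensible_kw(90):.0f} kW")   # ~137 kW

The exact numbers depend on the specific unit, but the trend is the point: warmer return air lets the same CRAC remove substantially more heat.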
Effective data center cooling systems for HCI take both capacity and air flow. If there's insufficient air quantity, the server fans will try to pull more air from wherever they can get it: over the tops of cabinets, through unsealed spaces between rack devices, between adjacent cabinets or between the bases of cabinets and the floor. That creates two problems. First, the server fans speed up and consume extra energy. Second, the air they draw from those paths is warmer than the air from the air conditioners.
If you already run at higher inlet temperatures, then these expensive boxes will run hotter than intended, which can, at a minimum, cause data errors. It can also shorten the lives of the boxes and cause them to fail.
How much air do you need for HCI?
With this formula, you can quickly determine whether it’s even possible to deliver the necessary air quantity to your equipment:
CFM = BTU / (1.08 x TD)

Where:
CFM is cubic feet per minute of air flow
BTU is the heat load in British thermal units per hour (BTU = watts x 3.414)
1.08 is a correction constant for the weight of air under normal conditions
TD is the temperature differential, also known as Delta T or ΔT: the difference between inlet and discharge air temperature, typically about 25 degrees Fahrenheit
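As a minimal sketch, here's the same calculation in Python; the function name and the 25-degree default TD are just illustrative, and the quick checks below can be reproduced with it:

    # CFM = BTU / (1.08 x TD), where BTU = watts x 3.414.
    def required_cfm(watts: float, td_f: float = 25.0) -> float:
        """Return the airflow, in CFM, needed to carry away a given heat load at a given TD."""
        return watts * 3.414 / (1.08 * td_f)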
As a quick check, assume the power supply rating is the device load when it runs at nearly full utilization. A 1,600 W unit is then equivalent to about 5,460 BTU per hour. Assuming a TD of 25 degrees Fahrenheit, the device requires about 202 CFM of air. By itself, that's not a problem.
If you fill a rack with HCI systems, however, you could need more than 4,000 CFM of air. If an air conditioner can deliver 12,000 CFM — which is fairly typical for a 20-ton CRAC — it could cool only about three full racks of these systems. Even if you assume only 75% server utilization on average, you still need more than 3,000 CFM of air per cabinet.
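Extending that sketch to a full rack, with an assumed 42U rack of 2U boxes, 1,600 W supplies and a 12,000 CFM CRAC, reproduces those numbers:

    # Rack-level airflow check: CFM = watts x 3.414 / (1.08 x TD).
    def required_cfm(watts, td_f=25.0):
        return watts * 3.414 / (1.08 * td_f)

    boxes_per_rack = 42 // 2                  # 2U boxes in a 42U rack
    rack_watts = boxes_per_rack * 1_600       # 33,600 W at the nameplate rating

    print(f"Full load: ~{required_cfm(rack_watts):,.0f} CFM per rack")         # ~4,250 CFM
    print(f"75% load:  ~{required_cfm(rack_watts * 0.75):,.0f} CFM per rack")  # ~3,190 CFM

    crac_cfm = 12_000                         # air a typical 20-ton CRAC can move
    print(f"Full HCI racks one CRAC can feed: ~{crac_cfm / required_cfm(rack_watts):.1f}")  # ~2.8, about three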
What if you can’t deliver that much air? The formula can be solved for TD:
TD = BTU / (1.08 x CFM)
Four full cabinets of this equipment at 75% utilization would need about 12,000 CFM. But if the other cabinets in the row add enough load to make the row the equivalent of six cabinets of HCI, you need at least 18,000 CFM. Your CRAC can only deliver 12,000 CFM. If the server fans run at full speed but can't pull any more air from anywhere, and the equipment climbs toward full load, your TD increases to roughly 53 degrees Fahrenheit. Computing equipment can't tolerate that kind of heat rise, and it will either shut itself down for self-preservation or fail.
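Here's the same scenario as a sketch, assuming six cabinets' worth of HCI at full load against a single 12,000 CFM CRAC:

    # TD = BTU / (1.08 x CFM): the temperature rise that results when airflow is capped.
    def resulting_td(watts, cfm):
        return watts * 3.414 / (1.08 * cfm)

    # Six cabinets of 2U HCI boxes (21 per cabinet) at a full 1,600 W each, one 12,000 CFM CRAC.
    load_watts = 6 * 21 * 1_600               # 201,600 W
    print(f"TD: ~{resulting_td(load_watts, 12_000):.0f} degrees F")   # ~53 F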
Other air flow limiters
When you have a number of CRACs, several of them can deliver air to the same row. But if you use under-floor air, even grate-type tiles that offer 56% open area can typically deliver only about 900 to 1,600 CFM, depending on under-floor pressure, or roughly 5-9 kW of cooling under ideal conditions. If you install too many of these tiles or add fan-boosted tiles, you'll drop the under-floor pressure and air-starve other cabinets in the row and room. Duct size limits overhead air delivery, so even if your combined air conditioners can deliver enough air, there's always a limit to how much can get to the computing hardware.
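To put the tile numbers in perspective, here's a quick sketch of how many perforated tiles one dense HCI cabinet might need, using the 900 to 1,600 CFM per-tile range above and the roughly 3,200 CFM per-cabinet figure from earlier; per-tile delivery in practice depends on under-floor pressure:

    # Rough tile budget per high-density cabinet (illustrative).
    import math

    cabinet_cfm = 3_200            # ~75% utilization of a full HCI rack, from earlier
    for tile_cfm in (900, 1_600):
        tiles = math.ceil(cabinet_cfm / tile_cfm)
        print(f"At {tile_cfm:,} CFM per tile: ~{tiles} tiles per cabinet")

Dedicating two to four tiles to a single cabinet quickly eats into the under-floor pressure available to the rest of the row.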
What changes do you need to make to data center cooling systems to support HCI?
When you get above 7,500 W to 10,000 W in a cabinet, you'll most likely need supplemental data center cooling systems. A variety of options are available that are specifically designed for high-density cooling, including in-row and overhead coolers, rear-door heat exchangers and even liquid immersion systems. And it's likely that we'll see more high-performance systems designed for direct liquid cooling.