3 Important Data Center Power Considerations
Calculating a data center’s power usage effectiveness (PUE) helps establish a power-usage baseline. PUE is the ratio of total power entering the facility to the power consumed by IT equipment. In other words, it’s a gauge of energy efficiency. The ideal ratio is 1.0, meaning every watt the facility draws goes to IT equipment; in 2020, typical data centers had a PUE of around 1.59, according to the Uptime Institute. A higher PUE indicates that a data center could be running more efficiently, while a lower ratio suggests that energy is being used effectively to get compute work done.
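The calculation itself is simple enough to sketch in a few lines. The facility and IT load figures below are hypothetical, chosen only to reproduce the typical ratio mentioned above:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 1,590 kW entering the facility, 1,000 kW reaching IT gear
print(round(pue(1590, 1000), 2))  # 1.59
```

Everything above the IT load in that ratio is overhead: cooling, lighting, and conversion losses.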
As you evaluate data center power usage, there are three important (but often overlooked) considerations to keep in mind that may impact how the facility runs.
1. Voltage Level
Higher voltages let you deliver the same amount of power at lower current, which reduces losses in distribution and conversion. Across the United States, 480V power is typically brought into most data center facilities. From there, to power data center equipment (servers, switches, storage systems, etc.), the voltage is stepped down to either 208V or 120V.
To make this conversion, data centers rely on three-phase transformers. These transformers generate heat (which increases cooling requirements) and contribute to efficiency loss: every time you step the voltage down, a portion of the power is wasted.
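The losses compound across stages. As a rough sketch, the per-stage efficiencies below are illustrative assumptions, not measured values:

```python
# How conversion stages compound: each step-down keeps only a fraction
# of the power fed into it. Efficiency figures are assumptions.
stages = {
    "480V -> 208V transformer": 0.97,
    "208V -> 12V server PSU":   0.94,
}

delivered = 1.0  # fraction of input power remaining
for name, efficiency in stages.items():
    delivered *= efficiency

print(f"Power reaching the load: {delivered:.1%}")  # ~91.2%
```

Even with each stage in the mid-to-high 90s, nearly a tenth of the power is lost as heat before it does any compute work, and that heat in turn raises the cooling bill.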
Think of it like a structured cabling system: In these systems, each connection point causes signal loss within a channel—it’s a disruption in data flow. The same holds true with power. At each connection point, energy efficiency is lost because there’s an interruption in power flow.
To increase efficiency, some large data centers are finding ways to use 480V instead of stepping it down. Facebook is a good example: The company chose to run 480V three-phase power to its data center racks so it doesn’t need transformers and experiences virtually no power loss. Google also distributes power directly to the rack at 480V AC with a three-phase rectifier that converts AC to DC within the rack.
2. Power Stranding
When a data center relies on three-phase power (which can deliver roughly 73% more power than single-phase power over conductors of the same size), it’s possible for power to become stranded.
This happens whenever the power distributed to a rack exceeds the amount of power it consumes during peak utilization. While this may not seem like a big deal, multiplying a few kilowatts by the number of racks that fill your data center may reveal otherwise. In some cases, stranded power can equal nearly 50% of total power available.
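The multiplication is worth doing explicitly. The rack counts and power figures below are hypothetical, but they show how a few stranded kilowatts per rack scales:

```python
def stranded_kw(provisioned_kw: float, peak_draw_kw: float) -> float:
    """Power delivered to a rack but never consumed, even at peak utilization."""
    return provisioned_kw - peak_draw_kw

# Hypothetical facility: 200 racks, each provisioned for 10 kW but peaking at 6 kW
racks = 200
per_rack = stranded_kw(10.0, 6.0)
print(f"{per_rack * racks:.0f} kW stranded across the facility")  # 800 kW
```

In this sketch, 40% of the provisioned power is paid for but never used, in line with the near-50% figure cited above for worst cases.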
To strand as little power as possible, it’s crucial to balance your load across the three lines so that each carries roughly the same current. For example: You don’t want one line pulling 12A while another line only pulls 3A.
Power management is part of the equation, but it also comes down to how you plug in things like servers and SANs. Connecting all your storage hardware (which pulls a lot of power) at the bottom of the rack on line 3 while connecting switches (which don’t pull much power) at the top of the rack on line 1 can lead to stranded power.
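A simple way to quantify how lopsided the loading is: compare each line’s current to the three-line average. This is a sketch (the currents are taken from the 12A/3A example above, plus an assumed 7A on the third line):

```python
def phase_imbalance(l1_amps: float, l2_amps: float, l3_amps: float) -> float:
    """Percent imbalance: maximum deviation from the average line current."""
    currents = [l1_amps, l2_amps, l3_amps]
    avg = sum(currents) / 3
    return max(abs(c - avg) for c in currents) / avg * 100

# The lopsided loading from the text: 12A on one line, 3A on another
print(f"{phase_imbalance(12, 3, 7):.0f}% imbalance")
print(f"{phase_imbalance(7, 7, 8):.0f}% imbalance")  # nearly balanced
```

Distributing heavy loads like storage arrays across all three lines, rather than stacking them on one, is what drives that percentage down.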
3. Zombie Servers
Something else to consider regarding data center power: zombie servers. These pieces of equipment continue to run while contributing nothing to compute resources. In other words, they consume power but serve no purpose. (If you unplugged one, nothing of consequence would happen, aside from the power savings!)
Make sure your servers are working up to their full potential and capacity limits. Managed power distribution units (PDUs) that control power usage down to the outlet level can tell you whether a specific server is running any applications. If not, you can either put it back in use or pull the server out of the data center if it’s no longer needed.
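The detection logic can be as simple as flagging outlets whose draw never rises above an idle threshold. This is a sketch: the readings and the 60 W threshold are illustrative assumptions, and a real managed PDU would supply per-outlet power data through its own interface (commonly SNMP or a REST API):

```python
# Flag outlets whose measured draw never exceeds an assumed idle threshold.
IDLE_WATTS = 60

# Hypothetical per-outlet power samples (watts) collected from a managed PDU
outlet_watts = {
    "rack4-outlet1": [310, 295, 342],  # busy server
    "rack4-outlet2": [55, 52, 58],     # zombie candidate: idle draw only
}

zombies = [name for name, samples in outlet_watts.items()
           if max(samples) <= IDLE_WATTS]
print(zombies)  # ['rack4-outlet2']
```

Flagged outlets are candidates for investigation, not automatic shutdown: confirm the server runs no workloads before repurposing or decommissioning it.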
Need help evaluating your data center strategy to make better use of power and space? We can help you get started. Send me a note with your questions!