The Dirty Little Secrets of PUE

I was reading an article the other day asking the question, “Whose data centers are more efficient, Facebook’s or Google’s?”, and I couldn’t help but be struck by the irrelevance of the entire premise for the average enterprise or service provider data center operator. Certainly there is a small curiosity factor here, but isn’t this really just a more sophisticated way of asking “Ginger or MaryAnn?” Interesting to ponder, but is it going to change the way you operate your data center? Probably not.

While these two titans Indian-leg-wrestle their way to be the first to claim nirvana (in this case, a PUE of 1.0), data center professionals whose sites support everything from the company’s email to commodities futures trades to ERP are too busy to be designing their own servers, cleaning off their roofs, and implementing new containment strategies to shave another hundredth of a point from their performance rating. To a certain extent, aren’t both of these exercises in futility?

The fact of the matter is this: Facebook and Google run relatively homogeneous IT loads in the tens of megawatts for their applications. The luxury that the Facebooks and Googles of the world have is the tightness of the integration of their stack. Because their businesses consume tens of megawatts per application, their cost models dictate a tight integration between software, hardware, and facility, and that integration becomes extremely economical at that scale. In other words, they have control over their entire IT environment. You don’t. You have a smattering of apps and hardware in which the only thing that is homogeneous is the rate of change over time. Ah, the homogeneous environment, who remembers those days? Ah yes, the beloved green screen that went with our mainframe…

Unfortunately, reality bites. When we talk about PUEs for the big Internet guys, we are doing ourselves a disservice. Yes, they are innovative. Yes, they are best practices. Yes, they are what we aspire to be. But the fact of the matter is that while Microsoft can move server and storage capacity around in 300 kW chunks, you can’t. You have to serve your customer, whether an internal business unit or a third party. Sometimes that means they tell you the kind of hardware to use. In almost every case, it means they control the software that drives the utilization of the hardware. Lack of control over utilization is a truth, as is the inability to tightly integrate your company’s software/hardware/facility stack. Unless you are Facebook or Google, where your business is the data center, it is too tough, does not make economic sense, or simply cannot be done. As a side note, this dilemma adheres to the law of diminishing returns. For tens or hundreds of megawatts, it works. For 2 MW, not so much.

To most in our industry, the latest in efficiency can be had in one of three ways: (1) you can move to the perfect climate and incur the costs of a remote operation; (2) you can use the latest in technology (although I am hard pressed to see how evaporative cooling or fresh air is “technology”); or (3) you can do some combination of the two.

How about another approach…

I would argue that it is more important to achieve a level of data center efficiency that delivers optimal, sustainable, and predictable performance at the lowest load level possible. This not only reduces your operational costs but also provides you with a level of certainty that positively impacts your budgeting (and power pricing margins, for service providers) as the IT load within your facility increases. We shouldn’t forget that data center efficiency isn’t about score-boarding in a press release, but about cost-effectively (and in some cases, profitably) operating the facility. This strategy better enables us to answer the larger questions that we are confronted with, which in this case is obviously…MaryAnn.
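The low-load point is worth making concrete. PUE is simply total facility power divided by IT equipment power, and a facility carries a fixed overhead (UPS losses, baseline cooling, lighting) regardless of how full it is. A minimal sketch, using a toy model with hypothetical numbers (the 400 kW fixed overhead and 25% variable overhead below are illustrative, not figures from any real facility), shows why the same building posts a much worse PUE when lightly loaded:

```python
def pue(it_load_kw, fixed_overhead_kw=400.0, variable_overhead_ratio=0.25):
    """PUE = total facility power / IT equipment power.

    Toy model: overhead = a fixed component (UPS losses, lighting,
    baseline cooling) plus a component proportional to IT load.
    All figures are hypothetical, for illustration only.
    """
    total_kw = it_load_kw + fixed_overhead_kw + variable_overhead_ratio * it_load_kw
    return total_kw / it_load_kw

# The identical facility, at three different fill levels:
for load_kw in (500, 1000, 2000):
    print(f"{load_kw} kW IT load -> PUE {pue(load_kw):.2f}")
# 500 kW  -> PUE 2.05
# 1000 kW -> PUE 1.65
# 2000 kW -> PUE 1.45
```

The design choice here is the point of the paragraph above: a site engineered for predictable performance at low load (a small fixed overhead) protects your costs through the entire fill-up curve, which matters more to an enterprise operator than the headline PUE a hyperscaler reports at full load.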