A Punch in the Face

Okay, I think by now we’ve all read the New York Times article on data centers, and we can all step back and utter a collective “ouch.” Yes, it can be euphemistically described as “less than flattering,” and at certain points the industry was portrayed as a 1950s sci-fi villain, laughing maniacally, bent on global domination: “Today Quincy, tomorrow the world! Meheheheha.” But let’s face it: we as an industry aren’t exactly wearing a white dress to this party either.

Just like in any 12-step program, the first step in fixing a problem is admitting that you have one:

“Does anyone have anything to share with the group? Yes, you in the back.”
“My name is Data Center Industry.”
“Hi, Data Center Industry.”
“I’m Data Center Industry, and I have a capital spending, operating cost and energy efficiency problem.”

And we do. It’s not as malicious or secretive as the picture the Times attempts to paint, but it is a hard set of problems in a young industry. Despite all we have learned, some still build huge monolithic sites all at once rather than populating and powering them incrementally. And yes, some still operate the majority of their data centers using a “belt and suspenders” approach. We know we do this; it’s built into the specs that many providers, and their customers, write. While none of it is part of a master plan to upset the global balance of power, the Times article probably did leave a lot of folks wondering why they ever bothered worrying about the Russians. So what do we do? And for you guys thinking lynching is an option, throw some cold water on yourselves.

Step One: Understand Differences
Understanding the differences means distinguishing between Internet data centers and the mission critical facilities operated by businesses. While business facilities are new world “back offices” whose clear first priority is reliable, consistent operation, Internet data centers are a hybrid that combines elements of a factory with those of a lab. Over the past seven (7) years the number of wholesale changes among the big Internets has been astonishing. Google uses containers. Then they don’t. Facebook loves evaporative cooling. Then they don’t. One guy buys servers. Another guy makes them. As Mike Manos wrote in his blog, Loosebolts, the Internets are tackling complex problems in real time while simultaneously addressing their business needs. Not all data centers are the same.

Step Two: Continue Industry Led Innovation
Organizations like the Green Grid, OpenCompute, the Uptime Institute, etc. need to continue to push and share innovative approaches to data center efficiency. Note that I said continue, not start. Perhaps more importantly, they need to take a more active role in promoting advancements in these areas. The data center industry is characterized by continuous improvement in design and operations. However, if we don’t do a better job of promoting our successes, it becomes much easier for our critics to demonize us. Basically, we suck at marketing.

Step Three: Eliminate Waste
The first thing we need to do is strive for data center designs and operations that eliminate waste. The great thing about our business is that eliminating waste is not only efficient, it also helps with the capital and operating expense problem. This is more of a design philosophy than a targeted mandate, but it goes to the heart of the matter of efficiency. For example, we should not overbuild. If the facility is ultimately going to be 500,000 square feet, great, but we don’t put in all the UPS and generators up front. And perhaps the better question is: why build all 500,000 square feet up front at all? Why not five (5) 100,000 square foot units or two (2) 250,000 square foot facilities? Building it all at once not only wastes physical materials (what if the site never reaches capacity?), it consumes more energy than is required throughout the life of the facility. We also need to re-examine our definitions of redundancy. Is fault tolerance ever needed? I would argue the extra $10-$20 million is better spent on software development that eliminates applications requiring fault tolerance. By adopting an “eliminate waste” approach, we address a good number of the issues the article raises.
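To make the overbuilding point concrete, here’s a rough back-of-the-envelope sketch in Python. The numbers are all hypothetical assumptions of mine (a 500,000 square foot target and demand ramping at 50,000 square feet per year over ten years), not figures from this post or from any provider; the point is simply to show how phased construction shrinks the built-but-idle footprint compared with a monolithic build.

```python
def idle_capacity_years(phase_size, total=500_000, ramp_per_year=50_000, years=10):
    """Sum of built-but-unoccupied square footage over the demand ramp,
    in square-foot-years. Capacity is added in `phase_size` chunks, and a
    new phase is built only when demand would exceed what is standing.
    All inputs are illustrative assumptions, not industry data."""
    built = 0
    idle = 0
    for year in range(1, years + 1):
        demand = min(ramp_per_year * year, total)
        while built < demand:          # build the next phase only when needed
            built += phase_size
        built = min(built, total)
        idle += built - demand         # capacity paid for but sitting empty
    return idle

# Monolithic: all 500,000 sq ft (and its UPS/generators) on day one.
monolithic = idle_capacity_years(phase_size=500_000)

# Incremental: five 100,000 sq ft units, built as demand materializes.
phased = idle_capacity_years(phase_size=100_000)

print(monolithic, phased)  # 2250000 250000
```

Under these made-up assumptions, the monolithic build carries nine times as much idle, powered-and-maintained capacity over the ramp as the phased approach, which is the waste argument in miniature.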

Mike Tyson used to say, “Everyone has a plan until they get hit.” I think we can all agree that as an industry we just got tagged pretty good. How we respond is up to us. Outright denial and finger pointing are probably not the most effective uses of time and resources, no matter how cathartic they may be. The first step has to be an honest internal review of our methodologies and the reasons we use them. I think that when viewed through the lens of less wasteful design and operational goals, a number of the energy efficiency issues we face begin to fall by the wayside. After all, while the new facilities are orders of magnitude better than the ones built just seven (7) years ago, it’s not like we are turning off the older ones; they will still be running for the next 15-20 years. Although we did just absorb a punch in the face, it’s early in the fight.
(Next Blog: The article and regulation)