The Five Tool Player: MEP
Great baseball teams are usually “strong up the middle”. This means they have good players at catcher, shortstop, second base and center field. It makes sense, since the catcher handles the ball on every play, shortstops and second basemen cover the most ground in the infield, and the center fielder hawks down everything in the outfield. It’s the same with the mechanical, electrical and plumbing (MEP) systems for data centers: if their performance is poor, it manifests as a lower level of reliability for the facility itself. Thus, it is extremely important to understand the fundamentals of MEP design, its componentry and the potential ramifications of the decisions made regarding them. For a long time, data centers have been designed with a focus only on the plays made, not the errors. But just like that up-and-coming young shortstop who can turn two from deep in the hole on one play, it is the errors he makes that cost him his job. From a data center perspective, this means avoiding errors in three fundamental areas.
Number One: “It’s more efficient to do it this way”—PUE is an excellent measure of data center efficiency. Unfortunately, the relentless pursuit of a number worthy of a press release often causes data center designers to base their decisions on a less-than-thorough analysis. In any data center MEP decision there are four elements that are fundamentally intertwined:
– Performance
– External dependencies
– Capital expense
– Operating expense
The common mistake made in this area is basing the decision on performance alone. For example, a decision to use an evaporative cooling system made solely on the basis of reducing PUE a few tenths of a point is ultimately going to make someone very disappointed (usually the CFO) when they conclude that the company spent $500,000 on a system (capital expense) that requires a few million gallons of water annually (operating expense). Meanwhile, the PUE was “more efficient” by a whopping 0.04 drop at a 10¢ power rate (external dependency). Please don’t laugh; this is a real anecdote. Obviously, decisions based on performance efficiency alone have a strong likelihood of ending unpleasantly for everyone involved, so remember the old adage: “Nature abhors a vacuum, and so does your business when you make a decision in one”.
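The anecdote above can be checked on the back of an envelope. A minimal sketch, using the figures from the text ($500,000 capex, a 0.04 PUE drop, a few million gallons of water, a 10¢ power rate read here as $0.10/kWh) plus an assumed 1 MW IT load and an assumed municipal water rate, since the original doesn’t give either:

```python
# Back-of-the-envelope payback for the evaporative-cooling anecdote.
# Figures from the text: $500,000 capex, 0.04 PUE drop, 10 cents/kWh power,
# "a few million gallons" of water. IT load and water rate are assumptions.

IT_LOAD_KW = 1_000          # assumed 1 MW critical IT load
DELTA_PUE = 0.04            # PUE improvement, from the text
POWER_RATE = 0.10           # $/kWh, reading of the "10 cent" rate in the text
CAPEX = 500_000             # $, from the text
WATER_GALLONS = 2_000_000   # "a few million gallons annually" (assumed 2M)
WATER_RATE = 0.004          # $/gallon, assumed municipal rate
HOURS_PER_YEAR = 8_760

# Every kWh of IT load now needs DELTA_PUE fewer kWh of facility overhead.
kwh_saved = IT_LOAD_KW * HOURS_PER_YEAR * DELTA_PUE
energy_savings = kwh_saved * POWER_RATE            # $/yr
water_cost = WATER_GALLONS * WATER_RATE            # $/yr
net_savings = energy_savings - water_cost          # $/yr

print(f"Energy saved:   ${energy_savings:,.0f}/yr")
print(f"Water cost:     ${water_cost:,.0f}/yr")
print(f"Simple payback: {CAPEX / net_savings:.1f} years")
```

Under these assumptions the net savings come to roughly $27,000 a year against a half-million-dollar spend, a simple payback of well over 15 years. No wonder the CFO was disappointed.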
Number Two: “But it’s the latest technology”—It’s human nature to fall in love with something new. Who among us is immune to the shiny bauble? In terms of data centers, the problem is that shiny and new often turns passé and obsolescent very quickly. When you consider that the fundamental purpose of building a data center is to erect a facility that will operate for 30 years while the equipment within it changes every 3-5, the folly of following the latest fad becomes pretty obvious. The best way to future-proof a data center is to design it using technology that has proven itself over time. With data centers, substance usually triumphs over style. When your industry still considers “fresh air” a “new technology”, it’s probably best to stay away from servers dunked in mineral oil until it has been around for a few decades. There’s nothing worse than a fielder who cannot make the routine play…
Number Three: “More systems equal fewer failures”—The desire to design away the possibility of a single catastrophic event is common within the industry. It even makes sense on a certain level. If side B is the failsafe for side A, and side C backs up both of them, it seems like it should be more reliable. Then someone makes a high-voltage mistake during routine maintenance, your automatic static transfer switches flip, and everything comes down. As Bill Mazzetti points out in his expert opinion, complexity of design typically doesn’t equate to heightened reliability. You see: people are the problem. Nearly every outage I have seen is a series of one-off events, with people failing at multiple stages along the way. Rather than design for a “black swan” event with automated PLC state machines more complex than decision making in Washington, MEP systems should be built to address the fact that the scenario to resolve is most probably going to happen in the middle of the night, when clarity, rather than complexity, is of the utmost importance.
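The argument above can be made with a toy probability model. A minimal sketch, with entirely illustrative numbers (the per-path failure probability and the human-error probability are assumptions, not field data): redundant paths drive down the chance that everything fails independently, but a common-mode human error defeats all paths at once, so reliability hits a floor no amount of added systems can lower.

```python
# Toy model of the "more systems equal fewer failures" fallacy.
# p: assumed annual chance a single power path fails on its own
# h: assumed annual chance of a common-mode human error (e.g. a
#    maintenance mistake) that takes down every path at once.

def annual_failure_prob(paths: int, p: float = 0.05, h: float = 0.01) -> float:
    """Outage if ALL independent paths fail (p**paths) OR one
    human-error event defeats them all (h). Small-probability
    approximation; ignores the p*h overlap term."""
    return p ** paths + h

for n in (1, 2, 3, 4):
    print(f"{n} path(s): ~{annual_failure_prob(n):.4f} chance of an outage")
```

Going from one path to two helps a lot; going from three to four changes almost nothing, because the human-error term dominates. That is the case for spending on operational clarity rather than a fourth layer of automation.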
Just like being strong up the middle doesn’t mean every player has to be an all-star, a data center’s MEP systems don’t need to be flashy; they should be solid and dependable. This mindset is important if a facility is to avoid the three most common areas for mistakes in MEP planning and design. Although this design philosophy may not win many awards for innovation, it does help ensure a data center that is there when its customers need it to be, and, oh yeah, it costs less too.