The problem with the escalating performance rating of “good, better, best” is that nothing comes after “best”. It’s really a static mode of looking at things and makes no provision for any degree of continued progress. While “More Better”, “Bester” or “Bestiest” aren’t in our lexicon of superlatives past the age of 6, the inability to plan for continued improvement is a common human error. In closets and drawers all over America are Betamax players, Flip video cameras, Microsoft Zunes and LaserDisc players that were quickly and soundly trumped by new technological competitors. This same lack of foresight is also found in the data center industry, particularly in terms of hardware.
Many data centers are designed with IT hardware in mind. While the IT hardware that will initially run inside the facility is certainly an important consideration, it is only a small portion of the planning equation. Many data center designers become so enamored with the equipment that will run in their new facility that they lose sight of the fact that the average facility changes out its hardware every three (3) to five (5) years. As too many data center designers have found out, a focus on what is currently the “best” is a proven strategy for turning a data center that was intended to last for 30 years into a destination location for weddings and bar mitzvahs in a fraction of the time.
In designing a data center from a static perspective, you are discounting the fact that both standards and technology regularly change. If, for example, you design your upcoming enterprise data center facility to add water-cooling directly to your racks, there is a strong possibility that this architecture will become obsolete with the next update of allowable operating ranges in the ASHRAE TC9.9 standards. Since hardware providers like Cisco, Dell and HP continue to operate, it is reasonable to assume that they will continue to discontinue existing products and release new ones. Thus, regular cycles of hardware refreshes have to be factored into the design equation.
The most important factor to consider when looking at the functional obsolescence question of data centers is that with each refresh cycle performance continues to improve, providing more processing and storage capacity per kilowatt. That’s right, every data center that gets an IT hardware refresh performs more work per kW. That’s why you still see people operating in 40-year-old facilities today. That power outlet otherwise known as a data center is a long-term asset. Don’t believe me? When was the last time you got a new TV for your house? Does that TV perform much better than the old one? Did you have to rewire the house to plug it in? I didn’t think so.
Hardware is an important consideration in the design of any new data center. The key is to enter the planning phase understanding that your design must be broad-based to accommodate regular hardware refresh cycles rather than focused on the equipment that will drive its initial applications. Although it may seem to be a simple concept, designing a facility to support the standards and technology enhancements that will continue to characterize the industry is very often overlooked. Those that overlook it typically find their facilities working their way down the “good, better, best” continuum.