
The Five Tool Data Center: Software


In this final installment of the Five Tool Data Center, we arrive at the application layer of the OSI stack and the need to understand the impact of software on the facility. The software-based applications a data center supports are the critical driver of its hardware requirements. From a high-level perspective, accommodating an organization’s software demands seems like it should be a straightforward exercise: determine the number of users to be supported, examine the trend data, and set the benchmarks for the associated hardware. The collective sigh in the background is the lament of data center planners everywhere wishing that this were the case.

Man may have gone to the moon on a 16-bit computer with a few kilobytes of memory, but any real link between programmers and an understanding of how their programs impact hardware efficiency ended about the time Armstrong’s feet touched the lunar surface. To think that I used to write assembly code and could read hex is still a nightmare that I try to repress. Since re-compile means never having to say you’re sorry, how is the data center manager supposed to ensure that software uses its kW efficiently? Today’s answer for faster software is faster hardware, and as we discussed in our last installment on hardware (link here), this constant need for faster hardware drives 3-5 year refresh cycles.

Although they are responsible for supporting all of an organization’s applications, data center professionals are not the masters of their fate. They may control things like the company’s email, but they are at the mercy of the corporate business units for the bulk of the applications the company runs. The potential for yet another application to be thrust into their realm of responsibility keeps capacity planning an inexact science for today’s data center managers. Even virtualization, the supposed panacea for the crowded data center floor, has had the unanticipated effect of eliminating physical “zombie” servers only to replace them with 20X more virtual “zombie” servers. Virtualization’s impact on planning can be summarized by updating an old adage: “the data center abhors a vacuum.”
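To make that sprawl problem concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (server counts, sprawl factor, zombie rates) is an assumption chosen purely to illustrate the mechanism, not data from any real facility; with these particular assumptions the arithmetic happens to land on the 20X multiplier mentioned above.

```python
# Back-of-envelope sketch of the "virtual zombie" problem.
# All figures below are illustrative assumptions, not measurements.

physical_servers = 500        # assumed pre-virtualization server count
physical_zombie_rate = 0.10   # assumed share of physical boxes doing no real work

vm_sprawl_factor = 4.0        # assumed: cheap provisioning multiplies the VM count
vm_zombie_rate = 0.50         # assumed share of VMs provisioned and then forgotten

physical_zombies = physical_servers * physical_zombie_rate
total_vms = physical_servers * vm_sprawl_factor
vm_zombies = total_vms * vm_zombie_rate

print(f"Physical zombie servers: {physical_zombies:.0f}")
print(f"Virtual zombie servers:  {vm_zombies:.0f}")
print(f"Multiplier:              {vm_zombies / physical_zombies:.0f}x")
```

The point of the sketch is not the specific numbers but the mechanism: once spinning up a server costs nothing, the idle population grows far faster than the physical one it replaced.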

Although tools like improved DCiM platforms will, over time, give data center operators greater insight into and control over the applications running in their increasingly virtualized facilities (or, more practically, quickly show a business unit that the 20kW blade rack it “needed” is peaking at 6kW), understanding the relationship between software and hardware will always be a necessary skill. Factoring software into data center design demands a broad perspective: the inefficiencies of software design will continue to produce overestimates of the draw on hardware, and the next great phase of energy efficiency will come from software utilization, not from hardware or the MEP plant. That broader view ensures a data center isn’t built just to accommodate immediate requirements but those of the future as well, and it is key to ensuring a data center investment can support the organization’s needs over a prolonged (20-30 year) lifespan rather than providing only temporary relief to an immediate need.
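The kind of check a DCiM platform makes easy can be sketched in a few lines: compare what each business unit asked for (provisioned kW per rack) against the measured peak draw, and flag the stranded capacity. The rack names, readings, and 50% threshold below are hypothetical examples for illustration, not output from any particular DCiM product.

```python
# Compare provisioned rack power against measured peak draw and flag
# stranded capacity. All rack IDs and readings are hypothetical.

racks = {
    # rack_id: (provisioned_kw, measured_peak_kw)
    "BU-A-blade-01": (20.0, 6.0),   # the "20kW" blade rack peaking at 6kW
    "BU-B-web-07":   (10.0, 8.5),
    "BU-C-db-03":    (15.0, 4.2),
}

for rack_id, (provisioned_kw, peak_kw) in racks.items():
    utilization = peak_kw / provisioned_kw
    stranded_kw = provisioned_kw - peak_kw
    flag = f"  <-- {stranded_kw:.1f} kW stranded" if utilization < 0.5 else ""
    print(f"{rack_id}: {peak_kw:.1f} kW peak of {provisioned_kw:.1f} kW "
          f"provisioned ({utilization:.0%}){flag}")
```

Nothing here is sophisticated; the hard part is having trustworthy measured data per rack in the first place, which is exactly what the monitoring layer is for.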
If you listen to the boys talking faceplate draw, you’ve already lost. Through DCiM and true staging and testing of equipment and applications, we may finally be able to cut the waste out of the data center. Piece of cake, eh?
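As a closing illustration of the staging-and-test point, here is a rough sketch of deriving a derating factor: instead of provisioning to the sum of faceplate (nameplate) ratings, measure the gear under a representative application load and plan to the measured peak plus headroom. The wattages and 20% headroom figure are assumed bench readings for illustration, not vendor specifications.

```python
# Derive a faceplate derating factor from assumed staging measurements.

faceplate_watts = [750, 750, 500, 500, 1100]      # nameplate ratings per server
measured_peak_watts = [410, 395, 260, 240, 620]   # assumed readings under test load

faceplate_total = sum(faceplate_watts)
measured_total = sum(measured_peak_watts)
derating = measured_total / faceplate_total

print(f"Faceplate sum: {faceplate_total} W")
print(f"Measured peak: {measured_total} W")
print(f"Derating:      {derating:.0%} of faceplate")

headroom = 1.2  # assumed planning margin over measured peak
print(f"Plan for:      {measured_total * headroom:.0f} W for this rack")
```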