No one denies that the volume of information traversing the Internet is growing at a rate faster than the rabbit population at your local pet store. The anecdotal evidence surrounds us. In almost any public environment, the guy who stands out is the one who isn’t hunched over a glowing screen with his thumbs tapping out a message faster than a woodpecker assaults a tree trunk. A 2014 study conducted by Cisco attempted to quantify the volume of this activity and arrived at the following estimates:

  • Annual global IP traffic will pass one zettabyte (that’s a one followed by 21 zeros) by 2016 and surpass 1.6 zettabytes by 2018.
  • This traffic volume has increased fivefold in just the past five years (a quick calculation of the implied annual growth rate follows this list).
  • By 2018, over half of all traffic will be generated by non-PC devices, the number of which will be double the global population.
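
To put the second bullet in perspective, a fivefold increase over five years works out to a compound annual growth rate of roughly 38 percent. The sketch below is simply back-of-the-envelope arithmetic derived from those two figures; it is not a number reported by the Cisco study itself.

    # Back-of-the-envelope growth arithmetic based on the bullet above
    # (the multiplier and period come from the article; the rate is derived).
    growth_multiple = 5   # traffic grew fivefold...
    years = 5             # ...over the past five years

    # Compound annual growth rate: (end / start) ** (1 / years) - 1
    cagr = growth_multiple ** (1 / years) - 1
    print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~38.0%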

The other aspect of this continued escalation in data volume is the immediacy with which its consumers expect it to be available. The demands of this combination of volume and speed will continue to act as stressors on existing data centers and are forcing providers to examine how to keep data as close to end users as possible, both to enhance its value and to minimize latency. These requirements will manifest themselves in more stratified data center structures.

The Inefficiencies of the Traditional Data Center Structure

The nature of the data passing through the data center is evolving. Huge packets for rich applications such as video are intermixed with billions of tiny bits of information from the billions of devices that comprise the continually growing “Internet of Things” (IoT). The common thread that links these seemingly disparate data types is their shared need for continuous, near real-time processing. Due to deficiencies in their design, construction and operation, many of today’s existing facilities are simply not up to the challenges posed by these demanding processing requirements. In effect, many existing data centers and their supporting network structures were not designed and built to process these heterogeneous volumes of data effectively: to deliver a video within the window a customer defines as acceptable while simultaneously performing the analytics a manufacturing company needs to track its inventory status in real time.

The Stratified Structure and New Roles for Data Centers

The underlying principle of a stratified structure is to keep data as close to the end user as possible in order to reduce or eliminate the negative impact of latency. From a structural perspective, a stratified architecture features one or more large, centralized data center facilities that work in concert with multiple smaller “edge” data centers strategically located closer to end users and, in the very near future, micro facilities that will operate in concert with specific edge sites.
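
As a minimal illustration of the shape of this hierarchy, the sketch below models a central facility supporting two edge sites, each of which fronts several micro sites. The site names and counts are hypothetical, used only to make the tiered relationships concrete.

    from dataclasses import dataclass, field

    # Minimal, hypothetical model of a stratified topology: a central facility
    # supports regional edge sites, which in turn front micro sites located
    # close to end users.

    @dataclass
    class Site:
        name: str
        tier: str                          # "central", "edge" or "micro"
        children: list = field(default_factory=list)

        def add(self, child: "Site") -> "Site":
            self.children.append(child)
            return child

    central = Site("central-1", "central")
    edge_east = central.add(Site("edge-east", "edge"))
    edge_west = central.add(Site("edge-west", "edge"))
    for i in range(1, 4):
        edge_east.add(Site(f"micro-east-{i}", "micro"))
        edge_west.add(Site(f"micro-west-{i}", "micro"))

    def describe(site: Site, depth: int = 0) -> None:
        """Print the hierarchy, one indented line per site."""
        print("  " * depth + f"{site.tier}: {site.name}")
        for child in site.children:
            describe(child, depth + 1)

    describe(central)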

Although they have yet to be implemented on a large scale, increasing data volumes and latency sensitivity will give rise to micro data centers (<150 kW) that will serve as the initial point of interaction with end users. In this role they will function as screening and filtering agents for edge facilities, moving information to and from a localized area. More specifically, they will determine which data or requests are passed on to the data centers above them in the hierarchy, and they will also deliver “top level” (most common) content directly to end users themselves. The primary impact of this localization of delivery will be levels of latency below what is currently achievable. An important consideration for the effective development and implementation of these micro sites is the need for a more sophisticated level of network, compute and storage management than is available today.
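
The screening role described above can be sketched as a simple serve-or-forward decision: hold the most commonly requested (“top level”) content locally and pass everything else up to the parent edge facility. The cached paths and function names below are hypothetical placeholders, not part of any specific product.

    # Hypothetical sketch of a micro data center's screening/filtering role:
    # serve the most common ("top level") content from a local cache and
    # forward everything else to the parent edge facility.

    TOP_LEVEL_CONTENT = {                      # illustrative "most common" items
        "/video/trailer-1080p": b"...locally cached bytes...",
        "/firmware/sensor-v2": b"...locally cached bytes...",
    }

    def forward_to_edge(path: str) -> bytes:
        """Placeholder for the upstream call to the regional edge site."""
        print(f"forwarding {path} to the edge facility")
        return b"...fetched from edge..."

    def handle_request(path: str) -> bytes:
        """Serve locally when possible; otherwise escalate up the hierarchy."""
        if path in TOP_LEVEL_CONTENT:
            return TOP_LEVEL_CONTENT[path]     # lowest-latency path: no hop upstream
        return forward_to_edge(path)

    handle_request("/video/trailer-1080p")     # served directly by the micro site
    handle_request("/analytics/inventory")     # screened and passed up the hierarchy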

The next level within a stratified architecture is the edge data center. Edge facilities serve as “regional” sites supporting one or more micro locations. As a result, they perform the processing and caching functions within the parameters of what would be defined as the lowest acceptable level of latency for their coverage region. As the regional points of aggregation and processing, edge facilities will require a higher level of capacity and reliability than their connected micro sites. Tier III certification will be a standard edge data center requirement, with average capacities coalescing around 1-3 MW.
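
In the same spirit, the edge tier’s regional processing-and-cache role can be sketched as answering from a regional cache where possible and aggregating everything else toward the central facility, while tracking the region’s latency budget. The 50 ms budget and the names below are illustrative assumptions, not figures from the article.

    import time

    # Hypothetical sketch of an edge site's regional role: answer requests
    # escalated by micro sites from a regional cache when possible, otherwise
    # fetch from the central facility, and flag any response that exceeds the
    # region's latency budget. The 50 ms figure is an assumption.

    LATENCY_BUDGET_S = 0.050
    REGIONAL_CACHE = {"/video/trailer-1080p": b"...regional copy..."}

    def fetch_from_central(path: str) -> bytes:
        """Placeholder for the high-bandwidth call up to the central facility."""
        return b"...fetched from central..."

    def handle_from_micro(path: str) -> bytes:
        """Serve a request passed up by a micro site, tracking elapsed latency."""
        start = time.monotonic()
        payload = REGIONAL_CACHE.get(path) or fetch_from_central(path)
        elapsed = time.monotonic() - start
        if elapsed > LATENCY_BUDGET_S:
            print(f"warning: {path} exceeded the {LATENCY_BUDGET_S * 1000:.0f} ms budget")
        return payload

    handle_from_micro("/video/trailer-1080p")   # regional cache hit
    handle_from_micro("/analytics/inventory")   # miss: aggregated toward central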

The central facilities within a stratified structure house an organization’s mission-critical applications and serve as the central applications processing points (CAPP) within the architecture, with backups conducted on an infrequent basis relative to their connected edge data centers. Due to their level of criticality, the applications within a centralized data center are “non-divisible”: from a practical perspective, it is inefficient, in terms of both cost and overall system overhead, to run these functions in multiple facilities. The vast amount of bi-directional information flow prescribed in a stratified structure necessitates “fat pipe” connectivity between the central facilities and the edge locations they support.

Division of Labor

Simply put, the primary efficiency of a stratified versus a centralized data center network architecture is based upon the historical concept of the division of labor. Each component has a specific role in the structure, which promotes efficiency by reducing overhead and negating distance limitations. The multiple advantages of the stratified structure, including flexibility, security, adaptability and mission-critical capability at every level, enable it to meet the technical and consumer demands for compute, storage and access, and make it the logical next phase in the evolution of the role of the data center.

Planning for a Stratified Structure

Successful implementation of a stratified structure must be part of a long-term strategic plan. For many organizations this will impose a new discipline on their data center planning process. In contrast to the quasi-reactive mode of planning that characterizes most organizations today (“we’re almost out of capacity at data center X, so we need to put together an RFP for a new site”), planning for a stratified architecture will require end users to work within an expanded time horizon, one in which the central question becomes how many data centers the organization will need over the next 5-10 years.

Summary

The massive volumes of information and the continually shrinking definition of “acceptable latency” are placing new demands not just on individual data centers but on their aggregated network architectures as well. In a world where even the youngest consumer of data complains about load times, building bigger data centers is neither the only nor the best answer to these requirements. The need to keep data as close to the end user as possible is rapidly becoming the most important factor in data center planning. Relying on the time-tested principle of the division of labor, stratified architectures will come to dominate the data center landscape in the coming years.
