The Way to Predictive Analytics: Creating Data Infrastructure


Paradoxically, data center management to date hasn’t really involved data. Maintenance is based on arbitrary schedules, viewed and performed piecemeal at the equipment level, and involves human intervention, i.e., the introduction of error. In the data center market, we’ve reached the point where redundancy means outright failure rarely happens – only degradation. Understanding the failure point is elusive precisely because failure doesn’t actually happen. Getting to predictive analytics requires data infrastructure and a systemic approach.

This is what Schneider Electric and Compass Datacenters are working on. We’re considering the data center as a whole – as a complex system – not as a collection of individual assets. Asset-by-asset analytics are overwhelming, and they don’t show cause and effect. Suppose a piece of gear fails, for example: what’s the impact on the UPS? Or how does adjusting the ambient temperature affect the performance of the electrical infrastructure?

In other words, what’s the cascading effect of any failure? It’s unknowable unless the entire system is considered. We’re building asset models based on domain expertise, but treating them as a system. The collective data will drive predictability. We’ll connect as many data points as possible, and this data infrastructure will let us accumulate the data needed to build rules-based models.

Data Infrastructure First

The current lack of data infrastructure means there’s not enough data to build high-performance machine learning. Yet this is the precursor to AI. The AI conversation tends to get carried away; AI in data centers doesn’t really exist at this stage. We have to work on the basics first to deliver advanced analytics.

Creating data infrastructure starts with the cloud. Then comes instrumenting the data center and ensuring the telemetry is in place to aggregate as much data as possible. Essentially, the result will be a registry of all the assets in one place. A consistent asset model across the system will deliver higher-value analytics, and that will allow us to better control the context to gain insight.
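To make the idea of a single registry with a consistent asset model concrete, here is a minimal sketch in Python. All names, asset types, and telemetry keys are hypothetical; the point is simply that every asset, regardless of type, exposes the same core fields so analytics can treat the fleet uniformly.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One entry in the registry: a consistent shape for every asset type."""
    asset_id: str
    asset_type: str          # e.g., "UPS", "CRAH", "switchgear" (illustrative)
    site: str
    telemetry: dict = field(default_factory=dict)  # latest readings

class AssetRegistry:
    """A single registry of all assets, across equipment types and sites."""
    def __init__(self):
        self._assets = {}

    def register(self, asset: Asset):
        self._assets[asset.asset_id] = asset

    def by_site(self, site: str):
        return [a for a in self._assets.values() if a.site == site]

# Hypothetical assets and readings, for illustration only.
registry = AssetRegistry()
registry.register(Asset("ups-01", "UPS", "dallas", {"load_pct": 42.0}))
registry.register(Asset("crah-07", "CRAH", "dallas", {"supply_temp_c": 21.5}))
print(len(registry.by_site("dallas")))  # 2
```

Because every asset shares the same schema, a query like `by_site` works the same way whether the asset is a UPS or a cooling unit – which is the "consistent asset model" the paragraph above describes.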

Security is always a question when it comes to data, especially when we’re talking about more and more data. But we can’t let it be an obstacle. Of course, Schneider Electric equipment has been thoroughly cyber-tested at the ground level. A larger focus for building a secure data infrastructure, however, should be on people and processes, because the majority of vulnerabilities fall in these areas.

The Redundancy Dilemma

Data centers involve a large footprint of equipment that’s often heavily redundant – sometimes triple-redundant. That means equipment rarely fails outright. As the data center industry simultaneously ages and expands, redundancy will become an issue that only analytics can address. In essence, we’re talking about failure mode and effects analysis (FMEA).

This approach has been used in aerospace for years. It looks at every component within the system, analyzes each of that component’s failure modes, and determines what the effect of each failure would be on the overall system.
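A classical FMEA ranks failure modes by a risk priority number (RPN): the product of severity, occurrence, and detectability scores, each typically rated 1–10. The sketch below illustrates that ranking; the assets, failure modes, and scores are invented for illustration, not drawn from any real analysis.

```python
# Each failure mode gets three 1-10 ratings:
#   S = severity of the effect, O = likelihood of occurrence,
#   D = difficulty of detection (10 = hardest to detect).
failure_modes = [
    {"asset": "UPS-A",       "mode": "battery cell degradation", "S": 8, "O": 5, "D": 6},
    {"asset": "CRAH-3",      "mode": "fan bearing wear",         "S": 4, "O": 6, "D": 3},
    {"asset": "Switchgear-1","mode": "breaker fails open",       "S": 9, "O": 2, "D": 7},
]

# RPN = S * O * D; higher means more critical to the overall system.
for fm in failure_modes:
    fm["rpn"] = fm["S"] * fm["O"] * fm["D"]

ranked = sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True)
for fm in ranked:
    print(fm["asset"], fm["mode"], fm["rpn"])
# UPS-A battery cell degradation 240
# Switchgear-1 breaker fails open 126
# CRAH-3 fan bearing wear 72
```

The ranking makes the next point visible in miniature: only a few failure modes dominate the risk, while the rest barely affect the overall system.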

Only a couple of points in any given system are critical to failure. It’s not that other parts aren’t important, but some things can fail and not affect the overall system performance. This is a novel idea for data centers and exactly what we are doing now.

In data center management, we need the ability to use live data from the data center to understand how individual assets are performing within a system. Then we’ll re-rate and create a risk hierarchy within the system against the overall potential for failure.
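One simple way to sketch that re-rating step: combine each asset’s criticality to the system with a condition score inferred from live telemetry, and sort the result into a risk hierarchy. This is only an illustrative model under assumed inputs – the asset names, scores, and the risk formula are all hypothetical, not Schneider Electric’s actual method.

```python
assets = [
    # (asset_id,
    #  criticality 0-1: impact of this asset's failure on the system,
    #  condition 0-1: health inferred from live telemetry, 1.0 = like new)
    ("ups-01",    0.9, 0.60),
    ("crah-07",   0.4, 0.35),
    ("genset-02", 0.8, 0.90),
]

# Risk = criticality * (1 - condition): degraded, critical assets rank highest.
hierarchy = sorted(assets, key=lambda a: a[1] * (1 - a[2]), reverse=True)
for asset_id, crit, cond in hierarchy:
    print(asset_id, round(crit * (1 - cond), 2))
# ups-01 0.36
# crah-07 0.26
# genset-02 0.08
```

As live telemetry updates the condition scores, the hierarchy reorders itself – the system-level view of failure potential the paragraph above calls for.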

In most industrial analytics applications, there’s room to fail, and you may never see it; all that’s apparent is performance degradation. Redundancy obscures how the assets underneath are performing. Data will tell the full story and potentially reduce redundancy – and thereby capex.

The Benefits of Predictive Analytics

Beyond reducing upfront costs and longer-term investment, analytics will decrease failures and interventions too. They’ll provide visibility and improve asset performance for higher uptime and a longer mean time between failures (MTBF). Ultimately, risk will be lower and the life cycle optimized when applying data-driven asset management, i.e., predictive analytics.
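MTBF itself is a simple statistic once the data infrastructure exists to log failures: total operating time between failures divided by the number of failure intervals. A minimal sketch, with made-up timestamps for a single hypothetical asset:

```python
from datetime import datetime

# Illustrative failure log for one asset (dates are invented).
failures = [
    datetime(2022, 1, 10),
    datetime(2022, 6, 2),
    datetime(2023, 1, 15),
]

# MTBF = mean of the gaps between consecutive failures.
gaps = [(b - a).days for a, b in zip(failures, failures[1:])]
mtbf_days = sum(gaps) / len(gaps)
print(round(mtbf_days, 1))  # 185.0
```

With a fleet-wide failure log, the same calculation per asset type is what lets analytics demonstrate that MTBF is actually lengthening over time.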

The broader goal is to replicate this across multiple locations and geographies. The full value lies in benchmarking mission-critical environments against each other. The larger the volume of data, the better we’ll get at it.

Learn more about Schneider’s full portfolio of software and services solutions for colocation providers.