When a new concept or technology like edge data centers is introduced into the marketplace, it’s often difficult to cut through the hype to get the information you need to know. If you’re thinking about implementing an edge computing solution, you know what I’m talking about.
Too often, the industry’s discussion of the edge is focused on questions that seem highly philosophical, leading to debates that tend to go in circles rather than move the conversation forward:
- What is the edge?
- Where is the edge?
- How is your definition of the edge different than mine?
- Why is your definition of the edge totally wrong?
- Why is my definition totally right?
- Which of these is the true “killer app” for the edge: autonomous cars or virtual reality?
Edge discussions often seem more like the existential debates of French philosophers sitting in their cafés on the Left Bank of Paris than conversations about how to solve edge computing issues in practical ways. Yes, there is a time and a place for those kinds of debates, but they tend to be a distraction when it comes to IT deployments.
Since nobody ever looked to Sartre for advice on technology, I would like to propose a different set of questions you and your team should be asking yourselves as you look at edge data centers, talk to vendors and select hardware and software partners.
Before you pick an edge data center solution, have you mapped out your “Edge IT” strategy?
So much of our industry’s discussion of edge computing focuses narrowly on competing data center units that the bigger picture often gets overlooked. Edge infrastructure exists to support an overall IT strategy, and that strategy needs to be the dog that wags the tail, philosophically speaking. Your edge discussions should therefore start with an IT discussion rather than an edge data center discussion: What services and applications will the edge data center serve? What are today’s hardware requirements, and how are they likely to evolve over time? Do you know what services and applications the edge data center will need to support a year from now? Three years from now?
The answers to those questions will drive key requirements for storage, compute, and connectivity. The problem, though, is that the future-looking questions on that list are largely unanswerable, just like the most tormenting existential conundrums. It is therefore critical to admit your lack of control over the unknowable future, like any good existentialist does, and focus on selecting edge infrastructure that has flexibility and scalability as cornerstones of its design.
How important is it to align edge data center specifications and requirements with those of larger core data centers?
The future vexes existentialists, and it vexes anyone involved in IT infrastructure, too. It’s impossible to know what will happen, other than that change will be a constant. In the data center world, that change typically revolves around hardware upgrades and the launch of new applications. Those changes are going to happen, and the decisions you make now about your edge facilities will dictate how difficult those changes will be to carry out across your distributed infrastructure.
When organizations leverage non-conforming or non-standard facilities as they go to the edge, those data center implementations create what I call the “snowflake problem.” Instead of carrying out upgrades and changes in a uniform fashion across all sites, the IT team must work around the unique aspects of each facility. The more snowflakes you have in your infrastructure, the more effort your team will spend on each facility’s idiosyncrasies, and as edge facilities grow in number, the scale of that snowflake challenge grows as well. This is a massive burden for a rapidly growing platform or service, so it is essential that your edge facilities be designed around the same standards as the rest of your sites.
Once you map out your “Edge IT” needs, is the edge truly “mission critical”?
If existentialists had a favorite color, it would be gray, because to them so little in this world is black and white. The same is true for edge computing. Discussion of the edge tends to be very binary: some advocates insist that edge computing is truly mission-critical (emphasizing technical specifications that mirror core data centers), while others talk about edge-based services in nice-to-have terms, as workloads that can fail over to a nearby data center (de-emphasizing specs that rise to that level). In fact, some edge experts manage to say both things in the same sentence, even though the two sentiments contradict each other. The world is clearly a confusing place, especially when it comes to the edge.
Are edge data centers just a nice-to-have that can improve latency but that doesn’t need to meet truly stringent specifications? Or are they genuinely mission-critical?
This question isn’t just a philosophical debate, though, because edge facility designs vary wildly in how seriously they take that mission-critical threshold. Depending on how your organization views edge deployments and what bar you set for them, some providers will be better suited than others. Furthermore, you may decide that you don’t want a single black-and-white approach to the edge at all. You may need different levels of criticality for different sites, and your provider will need to deliver facilities that match the exact shade of gray each edge site requires, both today and down the road. Thinking in gray rather than black and white will help you build out the infrastructure you truly need and save you headaches, and budget, down the line.
Is the edge data center ruggedized for the range of environments that we will deploy in?
This is a critical question to ask, but an easy one to overlook, since some pilot projects are conducted in geographies radically different from where these facilities will eventually proliferate. For example, if your test unit is deployed in Miami, you will see how it performs in high temperatures and humidity, and perhaps in very high winds. But do you know how it will perform in the pervasive rain of the Pacific Northwest? Or through the wide temperature swings of the Southwest? Is the facility secure enough against unauthorized access?
These questions are typically baked into the design of core data centers because everyone implicitly knows that a large data center in Florida needs to withstand different conditions than one in an earthquake zone in California. But this key question can be overlooked in edge discussions because of the temptation to use individual, site-specific designs that vary, sometimes significantly, between geographies and environments. That site-by-site thinking is something to watch out for, because edge computing sites vary so dramatically in climate, security, natural disaster exposure, and much more that you want a standard design ruggedized for the full range.
Can your technicians actually fit inside to gain the access they need?
This may seem like a silly question, but it’s a very practical one. The interior layout of some edge solutions is so focused on shipping efficiency that the resulting tech-unfriendly design can cause years of pain every time you need an upgrade or a repair. Too often, this is discovered after a unit is installed rather than early in the process. So be sure to bring one of your technicians along on facility tours and see whether they can reach the areas required for the most common work orders they will handle. It’s worth a few days of extra freight time to avoid a decade of struggling with a non-standard facility layout. Otherwise, you might be stuck with the existential crisis of an edge facility that is an inaccessible box, less a data center than a symbol of human futility.
What does management look like at scale? Is it as easy with 1,000 units as it is with one?
Of all these questions, this one may be the most important to ask early, because the impact of finding out too late is enormous. The test units of edge data centers that providers have set up around the country are typically standalone units. That is plenty for answering most of the previous questions, but it doesn’t help you understand how well the solution will scale. After all, telecom companies, for example, are looking at eventually deploying hundreds of these units. If managing 100 is 100 times harder than managing one, that is a problem. Your organization should therefore ask a lot of questions about how the solution scales, how your management software will work with the edge solution, and how the provider’s software will integrate with your existing data center management systems.
Existential debates are fascinating, but they don’t tend to provide the answers you need to plan your edge strategy and select a provider. In other words, focus on the practical rather than the esoteric. In many ways, the questions that need to be asked about prospective edge facilities are no different from those you’d use in evaluating any other data center alternative. Asking the right questions is the key to ensuring that you can move forward successfully with your edge implementation plans. As Sartre himself said, “There is no reality except in action,” and taking action on edge deployments will be accelerated by a more practical approach to the topic. Maybe Sartre does have something to say about technology after all. Who knew?
*Portions of this blog were previously published on telecomramblings.com
About the Author:
Sharif Fotouh is the CEO of Compass EdgePoint and an ex-Googler. Fotouh is responsible for Compass EdgePoint’s edge data center solutions as part of the company’s comprehensive core-to-the-edge offering to customers. He is recognized across both the information technology and data center industries as one of the preeminent experts on edge computing. He has more than 10 years of experience leading large data center and technology teams, including founding and leading Google Fiber’s national network facilities and deployment engineering program.