When discussing the economics of the bandwidth market, the primary equation is this: how much data can you move, and how fast can you move it? The ideal, of course, is being able to move large amounts of data in a very fast, efficient manner. This has been the common goal of anyone tasked with moving a bunch of stuff (virtual or otherwise) from point A to point B.
Digital Traffic Rules of Transport
Digital traffic lives by the same rules of transport as physical, vehicle-based traffic: the number of lanes controls the volume, not the length of the highway. No one – and I mean no one – from cattle ranchers to game servers to commuters likes a choke point. We want broad, spacious lanes for all of our transport needs. Unfortunately, this isn’t as easy to achieve as one would think.
Amazon: A Practical Example
A practical example that perfectly illustrates the “lane vs. length” issue is how Amazon approaches the logistics of delivering millions of products to millions of people across the country in the fastest yet most economical way possible. First off, let’s address the fact that it’s not practical (not even possible, really) for Amazon to ship every order from one or two central locations. The transport capacity (trucks and roads) required to do that would hopelessly clog the central locations themselves, never mind the surrounding roads and highways. Think of it this way. Shipments would slow to a crawl if everything that Amazon sold on the east coast had to come down Interstate 95 (think the 405 on the west coast) after it was ordered. And we’d be staring at a sea of Amazon-branded trucks while we sat in our 4-hour gridlocked traffic jam, the situation only worsened by the fact that we would simultaneously NOT be getting our latest Amazon Prime order when we were expecting it. A lose/lose for you, me, and Amazon.
Amazon’s solution for balancing the transport requirements of large volumes of goods (Amazon wants to carry anything and everything you might want to buy) against the expectation of responsiveness (I don’t want to have to wait) is to employ a network of regional, local, and dare we say Edge distribution centers. By operating decentralized warehouses that support their respective metropolitan areas while also receiving goods as needed from Amazon’s global supply chain of wholesalers, Amazon maintains that balance between bandwidth and speed.
Similarly, when discussing the distribution of virtual goods (that is, data), a decentralized approach that incorporates a blended network of regional, local, and Edge data centers makes the most sense – especially when we’re talking about any sort of volume.
Data Center Economics and the Edge Data Centers
The Amazon example illustrates the idea of balancing transport capacity across layers in the pursuit of economics and responsiveness. It’s worth noting that economics and responsiveness feed each other: when you achieve economies in delivery, you increase your responsiveness, and better responsiveness means better service, which means more customers, ergo better economics. If all the data had to be transported from a single, central location to the edge of the network (that is, from a common US-West or US-East data center to end users all over the country/world), the pipes (fiber optic capacity) would have to be as big as the peak workload. Put simply, this can’t happen. So how do you get around it? Building Edge data centers and using them to distribute data ahead of time balances the transport requirements. In other words, bigger pipes + less distance to travel = a more efficient, lower-cost network.
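To make the “bigger pipes + less distance” intuition concrete, here is a back-of-the-envelope sketch in Python. The cost function and every number in it are our own illustrative assumptions (not real network pricing): transport cost is modeled as scaling with the capacity you provision times the distance it must span.

```python
# Toy model: cost grows with provisioned capacity (Gbps) times fiber
# distance (km). All figures are illustrative assumptions.

def transport_cost(gbps: float, km: float, cost_per_gbps_km: float = 0.01) -> float:
    """Cost of provisioning `gbps` of capacity over `km` of fiber."""
    return gbps * km * cost_per_gbps_km

# Central model: every user is served from one distant data center,
# so the long-haul link must be sized for the full peak workload.
peak_gbps = 1000
central = transport_cost(peak_gbps, km=4000)

# Edge model: content is distributed ahead of time, so the long haul
# only carries a steady replication trickle; the peak load rides a
# short hop from a nearby edge data center instead.
replication = transport_cost(gbps=50, km=4000)   # pre-fill the edge sites
last_mile = transport_cost(peak_gbps, km=100)    # short hop to end users
edge = replication + last_mile

assert edge < central  # shorter distance at peak beats one giant pipe
```

Under these toy numbers the edge model is far cheaper, because the expensive long-haul span no longer has to be sized for peak demand.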
Data Centers as Close as Possible to the Point of Consumption
StackPath CDN Moves Data Closer to the End User
StackPath, a key partner, is a global edge-compute provider that has implemented this strategy in one of the services on their platform: their global CDN. A Content Delivery Network (CDN) applies the above strategy to static content – images, videos, etc. – which in and of itself is a huge help in keeping the multiple lanes of bandwidth as clear and freely moving as possible.
StackPath has taken this concept even further by applying it to dynamic content, even actual computational services (websites, applications, APIs), at the Edge, as close as possible to the point of consumption. StackPath pre-positions that movie, that game, that eCommerce website at the Edge of the network so that it is as physically close as possible to end users – and therefore faster and more cost-effective.
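The pre-positioning idea can be sketched in a few lines of Python. This is a toy illustration, not StackPath’s actual API – the class, object names, and content keys are all hypothetical:

```python
# Toy sketch of edge pre-positioning: popular content is copied to an
# edge site ahead of time, so most requests never travel back to the
# distant origin. All names here are illustrative, not a real API.

ORIGIN = {"movie.mp4": b"...", "game.pkg": b"...", "store.html": b"..."}

class EdgeSite:
    def __init__(self, region: str):
        self.region = region
        self.cache: dict[str, bytes] = {}

    def preposition(self, keys: list[str]) -> None:
        # Push high-demand objects to the edge before anyone asks for them.
        for key in keys:
            self.cache[key] = ORIGIN[key]

    def serve(self, key: str) -> tuple[bytes, str]:
        # Serve locally when possible; fall back to the distant origin.
        if key in self.cache:
            return self.cache[key], f"edge:{self.region}"
        return ORIGIN[key], "origin"

miami = EdgeSite("us-east")
miami.preposition(["movie.mp4", "store.html"])
body, served_from = miami.serve("movie.mp4")
assert served_from == "edge:us-east"   # popular content never leaves town
```

The design choice this sketch captures is the trade at the heart of the section: spend a little transport up front (replication) to avoid a long, expensive trip at request time.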
To continue the Amazon example, the Prime power users among you will notice that certain items are always available and can, in some cases, be shipped to you the same day. It is no coincidence that these are high-volume items that Amazon chooses to keep stocked at a continuously monitored level in the regional warehouse serving your particular zip code. This is why, when you order a 3-pack of USB cables versus a replacement windshield wiper motor for a 1997 Jeep TJ, you receive the former by the end of the day and the latter in 2-3 business days (assuming it even qualifies for Prime shipping). Likewise, when you’re in Miami reading the latest from The Washington Post and then find yourself finishing that article once you’ve touched down in San Francisco, you won’t notice any lag in your content consumption, because you’ve been served an identical copy of the content you started on the east coast, now on the west, thanks to StackPath’s CDN.
This distributed network strategy has allowed StackPath to leverage the scale of their CDN, driving down unit costs while simultaneously building a highly performant network edge. And because all of their services run on the same platform as their Container and VM offerings, customers can deploy globally distributed, highly performant compute services – a far more complex and dynamic offering than static content – in mere minutes.
Edge Data Centers Ensure Availability
In much the same vein of logic that makes the idea of traveling to a fuel refinery for every tank of gas sound… insane, so too is the idea – at least among the data center crowd – of having to go back to a core data center for every computational service. Edge data centers (and local gas stations) are working to ensure that the services you need are available to you locally.