100,000 Carrier Clouds Coming Online
by Curtis Collicutt on January 28, 2019
Over the next few years, we expect to see a large number of data centers coming online.
By 2030, we could see one hundred thousand new carrier cloud data centers. — CIMI Corp
In this context, a “carrier cloud” is edge computing owned by Communication Service Providers, also known as Carriers. Edge computing is somewhat loosely defined, in that the computing power of an “edge” location can vary by several orders of magnitude. For this post, let’s say an edge location exists outside of a traditional data center and consists of at least two servers, up to a full rack.
The edge cloud is much different from the typical concept of “cloud” you may know, which is effectively a massive, centralized location for storage and compute, usually typified by Amazon Web Services, Microsoft Azure, Google Cloud, or large private data centers.
In the case of carriers, let’s consider their centralized clouds to live in their existing, traditional regional data centers.
Centralized cloud, especially public cloud, is viable because of economies of scale: buying power in the commercial sense, and accumulated expertise in the technological sense. Edge cloud is different.
Edge cloud doesn’t depend on economies of scale but on the preservation of a short control loop. — CIMI Corp
“Short control loop” here is about latency. If a remote system, for example a small sensor of some kind, needs decisions made by more powerful systems, it can request them over the network. If the decision-making systems are physically distant, latency can become an issue. However, if they are close, i.e. at the edge, latency is reduced.
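To put rough numbers on the control loop, here is a back-of-envelope sketch. The distances and the fiber propagation speed (about two-thirds the speed of light, i.e. roughly 200 km per millisecond) are illustrative assumptions, not figures from the article, and the calculation ignores queuing, serialization, and processing delays:

```python
# Back-of-envelope fiber propagation delay.
# Assumption: signals travel ~200 km per millisecond in optical fiber.
C_FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds, fiber only
    (ignores queuing, serialization, and processing delays)."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

central = round_trip_ms(1000)  # hypothetical regional DC ~1000 km away
edge = round_trip_ms(10)       # hypothetical edge site ~10 km away

print(f"central: {central:.1f} ms, edge: {edge:.2f} ms")
```

Even before any processing happens, the distant data center spends milliseconds just moving bits, while the nearby edge site's propagation delay is effectively negligible.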
So, exactly where is the edge? The most straightforward answer is the Carrier Central Office, or CO. Globally we could probably meet the 100,000 number directly from CO locations.
Several organizations have projects to define COs that are also data centers, such as CORD, “Central Office Re-Architected as Datacenter.” Another example is Intel’s “Next Generation Central Office.”
A Next Generation Central Office is a fiber-rich mini or edge datacenter that can support both fixed and mobile traffic and serve up to 35,000 subscribers per central office compared to 5,000 in today’s CO. Located between Access Network and Metro Transport, the NGCO functions as a local edge datacenter with a smaller area and power footprint than a traditional, centralized or hyperscale datacenter. — https://builders.intel.com/blog/next-generation-central-office-transform-network-edge/
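The subscriber figures in that quote also hint at why CO conversions alone may not add up to 100,000 sites. A quick sketch using only the 5,000 and 35,000 subscribers-per-CO numbers quoted above (the national subscriber base is a hypothetical number for illustration):

```python
# Figures from the Intel quote above.
SUBS_PER_CO = 5_000     # traditional CO
SUBS_PER_NGCO = 35_000  # Next Generation Central Office

# Hypothetical subscriber base, for illustration only.
subscribers = 50_000_000

cos_needed = -(-subscribers // SUBS_PER_CO)      # ceiling division
ngcos_needed = -(-subscribers // SUBS_PER_NGCO)

print(f"traditional COs: {cos_needed}, NGCOs: {ngcos_needed}")
```

Because each NGCO serves seven times the subscribers, serving the same population needs roughly one-seventh the sites: consolidation pushes the CO count down, not up.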
A percentage of COs will indeed become small data centers, but that number might not get us to 100,000. Where else can the edge live?
Given the minimum level of computing power we set for this post, at least two servers, it’s entirely possible to deploy at cell sites, not just COs. There are a significant number of cell sites: over 250,000 in the United States alone. Cell sites are typically a less hospitable environment for servers, but there are plans to place compute infrastructure there, including an entire paradigm dedicated to putting compute power in cell sites: Multi-access Edge Computing, or MEC.
MEC is a natural development in the evolution of mobile base stations and the convergence of IT and telecommunications networking. — https://www.etsi.org/technologies-clusters/technologies/multi-access-edge-computing
Between making COs into data centers and putting compute into cell sites, we will see a vast increase in the number of data centers. Globally, the introduction of 100,000 data centers, albeit small ones, seems not only possible but likely.
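As a closing sanity check, a speculative sizing sketch: the 100,000 target and the 250,000 US cell sites come from this post, but the global CO-conversion count is a hypothetical assumption purely for illustration:

```python
TARGET_SITES = 100_000    # CIMI Corp's 2030 figure
US_CELL_SITES = 250_000   # US cell site count cited above

# Hypothetical: suppose this many COs worldwide become edge
# data centers, and cell sites fill the remaining gap.
co_data_centers = 20_000

gap = TARGET_SITES - co_data_centers
# Fraction of US cell sites alone that would close the gap;
# the global cell site count is much larger.
cell_fraction_needed = gap / US_CELL_SITES

print(f"gap: {gap}, fraction of US cell sites: {cell_fraction_needed:.0%}")
```

Under these assumptions, equipping around a third of US cell sites alone would close the gap, which is why the 100,000 figure reads as plausible rather than outlandish.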