This blog first appeared in Mission Critical on Sept. 17, 2019.
To ask how 5G will affect life inside the data center is analogous to asking how a city would stand up to a natural disaster. The answer is, it depends—on the city and the storm. Make no mistake, 5G will most definitely alter how data centers are designed and, in some cases, will change the role they play in the larger network. By some estimates, data centers will be spending over half their operating budget to support 5G by 2025. The $64,000 question is, “where will the money be spent and for what?” To be more specific means digging a bit deeper. Grab your shovel.
5G: Promises and problems
Let’s start with what we know. From a technical perspective, 5G will have several defining characteristics. Most obvious is the use of the new 5G New Radio (NR) air interface, which will exploit new spectrum and bring latency down into the single-digit millisecond range. That enhanced performance will drive deployment of billions of edge-based connected devices and create the need for flexible, user-centric networks. To deal with the crush of new data, networks will rely heavily on virtualized architectures such as network slicing, as well as other cloud-based technologies like AI and machine learning.
The wild card in all of this is the applications that 5G will enable. Self-driving vehicles, industrial automation, facial recognition, machine-to-machine communications and a mind-numbing range of smart city apps: you name it, and it’s in somebody’s pipeline. The problem is, they have widely varying requirements for latency, reliability and the volume and type of data traffic they generate. Unless you know the parameters of the problem, it’s tough to speculate on how it will affect the data center. We can, however, provide some insight based on CommScope’s experience and perspective. Here are a couple of key trends we see affecting the data center ecosystem, and a question that may be at the heart of the matter.
Moving compute and storage to the edge
User-centric networks involve pushing compute and storage resources closer to the users and connected devices. The only way to meet the ultra-reliable, low-latency requirements will be to deploy these edge nodes as mesh networks with east-west traffic flows and parallel data paths. In some cases, these nodes may be big enough to classify as pod-type or micro data centers, similar to those being used by the telecom and cable providers. The bigger question is what effect this will have on the core data center. Again, it depends.
Cloud-scale data centers as well as the larger enterprise facilities may be only slightly affected, as they already use distributed processing and are designed to handle the increased data flow from the edge. A bit more disruption may occur among retail multi-tenant data centers (MTDCs), which have traditionally grown in response to rising demand for cloud-scale services. To maintain that relationship, look for retail MTDCs to relocate closer to the edge, providing regional points of presence for the cloud-scale facilities.
Perhaps the biggest changes will occur among the service providers as they refine the relationship between their core data centers and the evolving central offices and centralized RAN (CRAN) hubs. Increasing virtualization of the core networks and the radio-access networks is necessary to handle the anticipated 5G data flow and enable service providers to easily move compute and storage capacity where it is needed.
The effects of virtualization, AI and machine learning
Another potential disrupter will be the increasing use of virtualization, artificial intelligence (AI) and machine learning at massive scale. These technologies will require accelerated server speeds and higher network capacity to enable a greater volume of increasingly sophisticated edge services. Building the data models will require processing massive data pools which, in most cases, are best matched to core data center capabilities.
Most of the data that will be used to develop AI models will come from the edge. This hints at a potential change in how the larger cloud-scale data centers will support the network. One scenario involves using the horsepower of the core data center to assemble data from the edge and develop the models. The finished models would then be pushed out to deliver localized low-latency services. The process would then be repeated, creating a feedback loop that continually refines the operating model.
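To make that loop a little more concrete, here is a deliberately simplified Python sketch. Everything in it is hypothetical and invented for illustration (the CoreDataCenter and EdgeNode classes, the toy "model"), but it captures the basic rhythm: edge nodes collect data, the core aggregates it and builds the model, and the finished model is pushed back out so decisions can be made locally, without a round trip.

```python
# Conceptual sketch only: class and method names here are hypothetical,
# invented for illustration; they do not reference any real 5G or vendor API.
from statistics import mean

class CoreDataCenter:
    """Stands in for the core: aggregates edge data and builds a (toy) model."""
    def train_model(self, edge_samples):
        # Placeholder for heavy model training: here, just an average threshold.
        return {"threshold": mean(edge_samples)}

class EdgeNode:
    """Stands in for an edge node: serves low-latency decisions and buffers new data."""
    def __init__(self):
        self.model = None
        self.buffer = []

    def observe(self, reading):
        self.buffer.append(reading)
        if self.model:
            # Local, low-latency inference using the model pushed from the core.
            return "alert" if reading > self.model["threshold"] else "ok"
        return "no-model"

    def drain(self):
        samples, self.buffer = self.buffer, []
        return samples

# One turn of the feedback loop: edge collects -> core trains -> model pushed back out.
core = CoreDataCenter()
edges = [EdgeNode(), EdgeNode()]
for edge, readings in zip(edges, [[3, 5, 7], [2, 8, 9]]):
    for r in readings:
        edge.observe(r)

pooled = [s for edge in edges for s in edge.drain()]  # backhaul the edge data to the core
model = core.train_model(pooled)                      # core builds the refined model
for edge in edges:
    edge.model = model                                # push the finished model to the edge
print(edges[0].observe(6))                            # now answered locally, at the edge
```

In a real deployment the "training" step would be a serious AI workload and the "push" would ride the service provider's transport network, but the division of labor between core and edge is the point of the scenario.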
In the service provider networks, increasing virtualization may have a more direct effect on the core data center. As wireless and wireline networks become more virtualized and converged, the business case for a single physical-layer infrastructure grows stronger. The big question is the degree of convergence that will occur between the core network and the RAN, and where it will happen. Central office or data center?
Prioritizing and balancing the data load
Of course, the degree of change within the various data center environments will depend on the application requirements. Some of the billions of sensors and devices will produce a steady stream of data, while others will deliver it intermittently or in irregular bursts. How can data collection and processing be optimized? How much data should remain local to the edge device or edge node, and how much needs to be processed in the core data center?
Once these questions are answered, network engineers need to determine the best way to move the data through the network. Different latency and reliability requirements demand the ability to prioritize data traffic at a very granular level. What can be off-loaded onto the internet via local Wi-Fi, and what must be backhauled to the cloud service provider (CSP) data center? And remember, edge networks must fit into a financial model that makes them profitable.
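As a thought experiment, here is one way that kind of granular traffic placement could be sketched in Python. The thresholds, field names and categories are all hypothetical, chosen for illustration rather than drawn from any standard or product, but they show how latency, reliability and volume might steer a given flow toward the edge, local Wi-Fi or the core.

```python
# Conceptual sketch only: thresholds and field names are hypothetical,
# invented for illustration of edge-versus-core traffic placement.
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    max_latency_ms: float          # how quickly the data must be acted on
    needs_high_reliability: bool   # ultra-reliable service required?
    mb_per_hour: float             # rough volume, a stand-in for backhaul cost

def place_traffic(flow: Flow) -> str:
    """Pick where a flow should be handled, balancing latency, reliability and cost."""
    if flow.max_latency_ms <= 10:
        # Single-digit-millisecond budgets can only be met at the edge node itself.
        return "process at edge node"
    if not flow.needs_high_reliability and flow.mb_per_hour < 100:
        # Tolerant, low-volume traffic can be off-loaded to the internet via local Wi-Fi.
        return "off-load via local Wi-Fi"
    # Everything else is worth backhauling to the CSP/core data center.
    return "backhaul to core data center"

for f in [Flow("factory robot control", 5, True, 20),
          Flow("parking sensor updates", 5000, False, 1),
          Flow("video analytics archive", 60000, True, 800)]:
    print(f.name, "->", place_traffic(f))
```

In practice, that placement logic would live in the network orchestration layer and weigh many more variables, including the cost side of the financial model mentioned above.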
Stay tuned for the answers
Even as the first 5G network services begin to go live, these questions are still very much up in the air. From the data center manager to the CIO, learning how to mine the capabilities of 5G will require a lot of on-the-job training. The avalanche of new data from the network edge will demand serious compute and storage horsepower. But exactly how much horsepower will data centers need, and where will they need to be located? You guessed it: that depends.
One thing I can promise, though, is that CommScope will be among the first to understand what changes are coming and how they affect data centers of all types. After all, it’s our job to know what’s next. Stay tuned…
If you are in North America and want to learn more about 5G and the role of the data center, then sign up for our six-city roadshow. It will educate data center professionals on how 5G, next-generation speeds, new physical layer topologies and other trends are transforming the data center industry. The cities on the tour include:
- San Jose, California on Thursday, October 10
- Seattle, Washington on Thursday, October 17
- Manassas, Virginia (outside of Washington D.C.) on Wednesday, October 30
- Frisco, Texas (outside of Dallas) on Thursday, November 7
- Rosemont, Illinois (outside of Chicago) on Thursday, November 14
- Hoboken, New Jersey (outside of New York City) on Thursday, November 21
Register by clicking here.