How C-RAN Evolution Impacts the Way We Test

On September 5th, 2017, China Mobile, together with the other 13 partners in the C-RAN ecosystem, released the first version of the 5G C-RAN Wireless Cloud Network General Technical Report (hereafter, the Technical Report). It marked a milestone on C-RAN's long evolutionary path since 2009. In the past, a C-RAN whitepaper was published every few years; this Technical Report, however, goes much deeper and gets to the heart of the matter, addressing the outstanding technical obstacles in next-generation RAN deployments. As many of you know, the C-RAN concept was initially proposed to address the fronthaul bottleneck - which is why a 'Centralized' BBU (Baseband Unit) is so key in RAN deployments. Over the past 8 years, the C in C-RAN has been interpreted in different ways, ranging from 'Centralized' and 'Cooperative' to 'Cloud' and 'Clean'.

Over the last 4 years, the big telecom service providers have, one by one, announced bold plans to restructure their networks around SDN and NFV technologies: first AT&T's Domain 2.0, then DT's PAN-NET, China Telecom's CTNet2025, and China Unicom's CUBE-Net. China Mobile debuted its own journey under the NovoNet brand in 2015. Though six years had passed since the birth of the C-RAN concept, the NovoNet vision resonated with it very well, and NFV with a cloud implementation has since become the prevailing vision. With the BBU split into the DU and CU (Distributed Unit and Centralized Unit), most CU components will be virtualized, or even "cloudified". This is why "Cloud RAN" has now become the most common interpretation of the C-RAN acronym.

Meanwhile, the testing story has evolved too. We have been continuously developing methodologies for testing NFV and the cloud. Importantly, we have learned that test tools on their own are insufficient - what is required is an entire test solution, including expertise. For example, in C-RAN NFVi testing, one of the key elements depicted in the Technical Report, multiple test tools were used together, including tools sourced from Spirent and open source communities, as well as custom hardware and software tools. The combination of these tool sets and specialized expertise enabled Spirent to deliver testing services to our service provider customers. A key deliverable for such customers is intelligent reporting, which has become more important than ever; without it we would be drowning in test data.

Automation has also been essential to glue all the parts together, both for executing tests and for reporting results. The whole testing cycle would be a nightmare for the test engineer if it could not be automated. Doing some simple arithmetic: for 16 common test scenarios in a 3-layer decoupled NFV environment, evaluating just 3 vendors at each layer yields a grand total of 16 x 3 x 3 x 3 = 432 test scenarios. The number of test cases that need to be run in an actual environment is in fact much higher, going beyond what is practical with manual testing, to say nothing of the impact on test efficiency.

Even with automation, test methodologies remain the key to guiding every actual test practice. But we need NEW test methodologies, simply because we are facing a completely different monster. And last but not least, virtualized test tools are also required in most cases. Fortunately, we are ready for all of it.
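The arithmetic above generalizes to any number of layers and vendors. As a minimal sketch (the layer names are illustrative, not from the Technical Report), the test-matrix size can be computed like this:

```python
# Sketch: sizing the test matrix for a 3-layer decoupled NFV environment.
# Layer names below (hardware / virtualization / VNF) are illustrative
# placeholders for the three decoupled layers.
vendors_per_layer = {"hardware": 3, "virtualization": 3, "vnf": 3}
common_scenarios = 16

# Every vendor combination across layers must be covered.
combinations = 1
for vendor_count in vendors_per_layer.values():
    combinations *= vendor_count

total_scenarios = common_scenarios * combinations
print(total_scenarios)  # 16 * 3 * 3 * 3 = 432
```

Even a modest matrix like this one grows multiplicatively with each additional layer or vendor, which is why automating execution and reporting quickly becomes a necessity rather than a convenience.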

Let’s zoom in on the exact test topologies for C-RAN NFVi testing, which are also shown in the Technical Report. Four typical VM traffic models are defined, as shown in the figure below.

[Figure: Block diagram showing vSwitch and Virtual Networks]

Model 1 is for an SR-IOV deployment scenario; models 2, 3 and 4 are for OVS-DPDK deployment scenarios. For each model, data forwarding performance is measured both with a typical number of VMs deployed and with the maximum number of VMs deployed.
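Taken together, the four traffic models and the two deployment sizes define an eight-run measurement matrix. A minimal sketch of how such a matrix could be enumerated (the structure is from the description above; the loop itself is our own illustration):

```python
# Sketch: enumerating the NFVi forwarding-performance test matrix.
# Model 1 uses SR-IOV; models 2-4 use OVS-DPDK, per the Technical Report.
TRAFFIC_MODELS = {1: "SR-IOV", 2: "OVS-DPDK", 3: "OVS-DPDK", 4: "OVS-DPDK"}
VM_PROFILES = ["typical", "maximum"]  # two deployment sizes per model

test_runs = [(model, datapath, profile)
             for model, datapath in TRAFFIC_MODELS.items()
             for profile in VM_PROFILES]

for model, datapath, profile in test_runs:
    print(f"Model {model} ({datapath}): measure forwarding "
          f"performance with {profile} VM count")
```

Each run would then record throughput, latency and jitter for that model/profile pair.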

[Figure: Chart showing Latency and Jitter Change as VM Numbers Increase]

As you can see in this example, latency and jitter go up as the number of VMs increases. Such measurements give a feel for how many VMs can be deployed per server while still meeting the latency and jitter requirements of the 5G RAN.
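In practice, that "feel" can be turned into a concrete sizing rule: sweep the VM count, record latency and jitter at each step, and keep the largest deployment that stays within budget. A minimal sketch, where all numbers (measurements and budgets alike) are illustrative placeholders, not values from the report:

```python
# Sketch: choosing a per-server VM count from measured latency/jitter.
# All figures below are illustrative examples, not report data.
measurements = {  # vm_count: (latency_us, jitter_us)
    4:  (80, 10),
    8:  (120, 18),
    16: (210, 35),
    32: (450, 90),
}
LATENCY_BUDGET_US = 250  # hypothetical 5G RAN latency budget
JITTER_BUDGET_US = 40    # hypothetical jitter budget

# Keep only VM counts whose measurements fit both budgets.
feasible = [n for n, (lat, jit) in measurements.items()
            if lat <= LATENCY_BUDGET_US and jit <= JITTER_BUDGET_US]
max_vms = max(feasible) if feasible else 0
print(max_vms)  # 16 under these illustrative numbers
```

The same pattern extends naturally to additional metrics (CPU, memory) by adding columns to the measurements and conditions to the filter.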

Certainly, there are more metrics to pay attention to, such as CPU and memory utilization, depending on what services are to be deployed. We’ll leave that to a future discussion.

To learn more about our SDN and NFV solutions visit: https://www.spirent.com/Solutions/SDN-NFV-Solutions.
