RAN-in-the-Cloud: Why must a successful O-RAN implementation run on a cloud architecture?

A new radio access network standard called Open RAN (O-RAN) promises to accelerate the disaggregation of 5G networks. Until recently, the RAN was completely closed, creating vendor lock-in and allowing a handful of vendors to command high prices. This not only drove up the cost of building out a RAN site, but also created inflexible, closed networks that left no room for new monetizable services. Mobile Network Operators (MNOs) and governments decided that this was not an ideal situation, which led to the formation of the O-RAN Alliance, a standards development organization (SDO) that creates detailed specifications for the various internal interfaces of a RAN, thus allowing for its disaggregation. Disaggregation is expected to foster innovation, result in new monetizable services, and reduce costs.

What does the future hold for O-RAN? I think there are three possibilities:

  1. O-RAN is a failure
  2. O-RAN gets a hollow victory
  3. O-RAN is a true success

Let us evaluate each scenario.

Scenario #1 – O-RAN is a failure: This could happen if O-RAN is unable to meet or exceed existing proprietary RAN solutions on key performance, power, and cost metrics. I think the probability of this outcome is relatively low. Technologies such as NVIDIA Aerial and alternatives from other semiconductor vendors will ensure that O-RAN matches proprietary RAN on performance, at similar or lower price points. Nevertheless, we cannot eliminate this possibility yet, as we need to see more proof points.

Scenario #2 – O-RAN gets a hollow victory: If O-RAN merely matches proprietary RAN on key metrics and offers disaggregation as its only differentiator, there is a significant danger that incumbents will “O-RAN-wash” their products and the status quo will persist for 5G. The incumbents will call their vertically integrated products O-RAN compliant while in reality they will only support a few open interfaces. Interoperability with third parties will be suboptimal, forcing MNOs to purchase a vertically integrated stack. In this case, there simply won’t be enough leverage to force the incumbent vendors to truly open up, nor will there be enough incentive for MNOs to try out a new vendor.

Scenario #3 – O-RAN is a true success: For this outcome, O-RAN-based implementations must provide greater value than proprietary RAN. Let’s explore what that would take.

Embracing Cloud Architecture Will Be a Game Changer

For O-RAN-based implementations to provide more value than proprietary RAN, they must use an end-to-end cloud architecture and be deployed in a true datacenter cloud or edge environment; hence the term “RAN-in-the-Cloud”. The reason is simple: a cloud can run multiple workloads, so a cloud-hosted O-RAN can support multi-tenancy and multiple services on the same infrastructure.

Because RAN deployments are dimensioned for peak traffic, they are chronically underutilized, typically running at less than 50% utilization. In a traditional architecture that uses specialized acceleration or an appliance-like implementation, nothing can be done to improve this number. In a RAN-in-the-Cloud implementation, by contrast, the cloud can run other workloads during periods of underutilization. An O-RAN implementation built by fully embracing cloud principles will therefore function in a far superior manner to proprietary RAN, because utilization can be optimized. With increased utilization, the effective CAPEX and power consumption will be significantly reduced. The RAN also becomes flexible and configurable, e.g., 4T4R, 32T32R, or 64T64R, and TDD or FDD, on the same infrastructure.

As an added benefit, when the RAN is underutilized, MNOs can pivot their GPU-accelerated infrastructure to other services such as edge AI, video applications, CDN, and more, improving the monetization of new edge applications and services. Overall, these capabilities will provide MNOs with the leverage they need to push the incumbents toward full O-RAN compliance and/or to try out new and innovative O-RAN vendors.
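To make the utilization argument concrete, here is a minimal back-of-the-envelope sketch of the effective cost per useful compute-hour when idle RAN capacity can be backfilled with other workloads. All of the figures (site cost, utilization levels, backfill fraction) are hypothetical assumptions for illustration, not measurements from any deployment.

```python
# Hypothetical comparison: dedicated RAN appliance vs. RAN-in-the-Cloud
# where idle capacity is backfilled with other workloads.
# All figures below are illustrative assumptions, not measured data.

HOURS_PER_MONTH = 730
MONTHLY_COST = 10_000.0     # assumed fully loaded cost of one site ($/month)
RAN_UTILIZATION = 0.45      # assumed average RAN utilization (<50% of peak)
BACKFILL_FRACTION = 0.80    # assumed share of idle capacity usable by other workloads


def cost_per_useful_hour(utilization: float) -> float:
    """Cost per hour of capacity that is actually doing revenue-bearing work."""
    return MONTHLY_COST / (HOURS_PER_MONTH * utilization)


# Dedicated appliance: only RAN traffic counts as useful work.
appliance_cost = cost_per_useful_hour(RAN_UTILIZATION)

# RAN-in-the-Cloud: idle cycles are partially monetized by edge AI,
# video, CDN, and similar workloads sharing the same infrastructure.
cloud_utilization = RAN_UTILIZATION + (1 - RAN_UTILIZATION) * BACKFILL_FRACTION
cloud_cost = cost_per_useful_hour(cloud_utilization)

print(f"Dedicated appliance: ${appliance_cost:.2f} per useful compute-hour")
print(f"RAN-in-the-Cloud:    ${cloud_cost:.2f} per useful compute-hour")
print(f"Effective utilization: {RAN_UTILIZATION:.0%} -> {cloud_utilization:.0%}")
```

The same reasoning applies to power: the energy the site draws is amortized over more useful work, so the effective cost per service delivered falls even if the absolute draw stays roughly constant.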

To be considered RAN-in-the-Cloud, the O-RAN implementation must use:

●      General-purpose compute with a cloud layer such as Kubernetes

●      General-purpose acceleration, for example NVIDIA GPUs, that can be used by non-O-RAN workloads such as AI/ML, video services, CDNs, edge IoT, and more

●      Software-defined xHaul and networking

●      A vendor-neutral SMO (Service Management and Orchestration) that can perform the dynamic switching of workloads from RAN→non-RAN→RAN; the SMO[1] also needs the intelligence to understand how the utilization of the wireless network varies over time (see the sketch after this list). The Aarna Networks Multi Cluster Orchestration Platform SMO is a perfect example of such a component.
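To illustrate the kind of workload switching an SMO could drive, here is a minimal sketch using the Kubernetes Python client to shift shared GPU capacity between a RAN workload and an edge AI workload as RAN utilization changes. The deployment names, namespace, thresholds, and utilization source are hypothetical placeholders; a real SMO would rely on proper telemetry, policy, and O-RAN interfaces rather than this simplified logic.

```python
# Illustrative sketch of SMO-style workload switching on Kubernetes.
# Deployment names, namespace, and thresholds are hypothetical; a production
# SMO would use real telemetry, policies, and O-RAN-defined interfaces.
from kubernetes import client, config

NAMESPACE = "edge-site-1"                  # hypothetical namespace for one cell site
RAN_DEPLOYMENT = "oran-du"                 # hypothetical RAN (DU) deployment
EDGE_AI_DEPLOYMENT = "edge-ai-inference"   # hypothetical non-RAN workload
LOW_UTILIZATION = 0.30                     # assumed threshold to release GPU capacity
HIGH_UTILIZATION = 0.70                    # assumed threshold to reclaim GPU capacity


def scale(apps: client.AppsV1Api, name: str, replicas: int) -> None:
    """Patch a Deployment's replica count."""
    apps.patch_namespaced_deployment_scale(
        name, NAMESPACE, body={"spec": {"replicas": replicas}}
    )


def rebalance(ran_utilization: float) -> None:
    """Shift shared GPU capacity between RAN and edge AI based on RAN utilization."""
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    apps = client.AppsV1Api()

    if ran_utilization < LOW_UTILIZATION:
        # Off-peak: shrink the RAN footprint and let edge AI use the GPUs.
        scale(apps, RAN_DEPLOYMENT, 1)
        scale(apps, EDGE_AI_DEPLOYMENT, 4)
    elif ran_utilization > HIGH_UTILIZATION:
        # Peak traffic: give the GPUs back to the RAN.
        scale(apps, EDGE_AI_DEPLOYMENT, 0)
        scale(apps, RAN_DEPLOYMENT, 4)


if __name__ == "__main__":
    rebalance(ran_utilization=0.25)  # in practice, fed by SMO/RIC telemetry
```

The design point here is simply that the switching decision lives in a vendor-neutral orchestration layer above the cloud platform, not inside any single vendor's RAN stack.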

You can see an example of this architecture in the upcoming GTC session this week: “Big Leap in VRAN: Full Stack Acceleration, Cloud First, AI and 6G Ready [S51797]”. In my view, this reference architecture will drive O-RAN to its full potential and is the type of architecture MNOs should be evaluating in their labs.

References:

●      NVIDIA Blog: https://developer.nvidia.com/blog/ran-in-the-cloud-delivering-cloud-economics-to-5g-ran/

●      Video: https://www.youtube.com/watch?v=FrWF1L8jI8c


[1] Strictly speaking, the SMO as defined by the O-RAN Alliance is only applicable for the RAN domain. However, we are using the term SMO more broadly to include orchestration of other domains such as edge computing applications, transport, and more.
