Nvidia Superchip Boosts Open RAN

Nvidia, the graphics processing unit (GPU) giant, has unveiled its "Grace Hopper" superchip (Light Reading), underscoring its continued commitment to advancing AI and Open RAN technologies. The announcement marks a significant step forward in cloud economics and wireless connectivity.

Per Nvidia, the Grace CPU runs Layer 2 and above (L2+), the Hopper GPU provides inline acceleration at Layer 1, and the BlueField DPU handles timing synchronization for open fronthaul 7.2 -- the interface between baseband and radios developed by the O-RAN Alliance. Nvidia reports 36 Gbit/s on the downlink and a 2.5x improvement in power efficiency. SoftBank and Fujitsu are among the early customers lining up behind this AI-plus-RAN approach, which combines powerful AI analytics at the edge with a software-defined 5G RAN.
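The workload split described above can be summarized as a simple mapping from RAN function to the processor that hosts it. The sketch below is purely illustrative (the names and helper function are our own, not an Nvidia API) and just captures the placement Nvidia describes:

```python
# Illustrative sketch only -- not an Nvidia API. Models the workload
# placement described above: each RAN function is pinned to the part
# of the Grace Hopper + BlueField platform that accelerates it.

RAN_PLACEMENT = {
    "L2+ (MAC/RLC/PDCP and above)": "Grace CPU",
    "L1 (PHY, inline acceleration)": "Hopper GPU",
    "Open fronthaul 7.2 timing synchronization": "BlueField DPU",
}

def processor_for(function: str) -> str:
    """Return the processor a RAN function is pinned to in this sketch."""
    return RAN_PLACEMENT[function]

for function, processor in RAN_PLACEMENT.items():
    print(f"{function} -> {processor}")
```

The point of the split is that Layer 1 signal processing is massively parallel (a natural GPU fit), while timing-sensitive fronthaul I/O is offloaded to the DPU, leaving the Grace CPU free for the higher layers.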

RAN-in-the-Cloud Demo

In a recent demonstration, Nvidia collaborated with Radisys and Aarna Networks to showcase RAN-in-the-Cloud: a 5G radio access network hosted entirely as a service on multi-tenant cloud infrastructure, running as a containerized solution alongside other applications. The proof of concept includes a single-pane-of-glass orchestrator from Aarna and Radisys that dynamically manages the 5G RAN workloads, 5G Core applications, and services end-to-end, in real time, on NVIDIA GPU accelerators and architecture.
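The economic idea behind RAN-in-the-Cloud is that GPU capacity not needed by the RAN at a given moment can be backfilled with other tenants' containerized applications. The toy scheduler below is our own illustration of that principle, not the Aarna/Radisys orchestrator: RAN workloads get priority on GPUs, and whatever remains is shared with other apps.

```python
# Toy sketch (illustrative only, not the Aarna/Radisys orchestrator):
# a multi-tenant placement function that gives 5G RAN workloads
# priority on GPU capacity and backfills idle GPUs with other apps.

def place_workloads(gpu_count: int, ran_demand: int, other_apps: list) -> dict:
    """Assign GPUs: RAN workloads first, then other containerized apps."""
    placement = {"ran": min(ran_demand, gpu_count), "others": []}
    free = gpu_count - placement["ran"]
    for app in other_apps:
        if free == 0:
            break  # no idle GPUs left to backfill
        placement["others"].append(app)
        free -= 1
    return placement

# Example: 4 GPUs, RAN needs 2, other tenant apps backfill the rest.
print(place_workloads(4, 2, ["video-analytics", "ai-inference", "batch"]))
# -> {'ran': 2, 'others': ['video-analytics', 'ai-inference']}
```

In a real deployment this decision loop runs continuously, since RAN demand varies with traffic load over the day.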

Learn more about this demo by downloading the solution brief. This RAN-in-the-Cloud stack is ready for customer field trials in 2H 2023. Contact us to learn more.
