5G and edge computing are expected to be large new market opportunities; ABI Research predicts a $1.5T market by 2030. However, the advent of 5G and edge computing has placed enormous stress on application management. Gone are the days when every network component was a piece of hardware with fixed functionality; in 5G and edge computing, everything is software. Looking at this problem quantitatively:
Stress on application management = number of edge/core sites × application instances × application changes per unit of time
In 5G and edge computing there are 100,000s of edge sites; 10,000s of application instances, created by a combination of a large number of applications and network slicing (which causes multiple instances of each application to be created); and 10s of application changes per hour. The stress on application management is therefore roughly a million times greater than anything we manage today.
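The arithmetic behind the "million times" claim can be checked directly. The 5G/edge figures are the order-of-magnitude estimates from the formula above; the "today" baseline (a handful of sites, hundreds of instances, occasional changes) is our assumption for comparison, not a measured figure.

```python
# Order-of-magnitude check of the application-management stress formula.
edge_sites = 100_000        # 100,000s of edge sites
app_instances = 10_000      # 10,000s of application instances
changes_per_hour = 10       # 10s of application changes per hour

stress_5g = edge_sites * app_instances * changes_per_hour

# Assumed "today" baseline: ~10 sites, ~1,000 instances, ~1 change/hour.
stress_today = 10 * 1_000 * 1

print(stress_5g // stress_today)  # → 1000000, i.e. a million times greater
```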
Let’s shift gears a little bit and explore an analogy before continuing. The popular pets vs. cattle analogy was first applied to infrastructure: each server was treated as a pet, with admins upgrading and maintaining servers individually before onboarding applications onto them. With cloud computing, a group of servers is now treated as a single unit. Applications, however, are still treated as pets, and given the huge stress on application management, the pets approach will not work. We need to adopt a cattle methodology to simplify application management (see our prior blog on the pets vs. cattle analogy). With this new approach, initial orchestration becomes:
Register K8s clouds with the orchestrator (manual/automatic)
Onboard Helm chart for each application
Orchestrate onto 1 to N clouds with multiple instances with a click of a button
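The three steps above can be sketched as a small fan-out: register clouds once, onboard a chart once, then deploy to N clouds with multiple instances in a single call. The function and variable names here are illustrative, not the actual AMCOP API, and the chart URL is a placeholder.

```python
# Cattle-style initial orchestration, sketched as pure data flow.
clouds = {}  # registered K8s clouds: name -> kubeconfig path
charts = {}  # onboarded applications: app name -> Helm chart location

def register_cloud(name, kubeconfig):
    clouds[name] = kubeconfig

def onboard_chart(app, chart_url):
    charts[app] = chart_url

def orchestrate(app, targets, instances_per_cloud=1):
    """Plan `instances_per_cloud` copies of `app` on every target cloud."""
    return [(cloud, charts[app], i)
            for cloud in targets
            for i in range(instances_per_cloud)]

register_cloud("edge-east", "~/.kube/edge-east.conf")
register_cloud("edge-west", "~/.kube/edge-west.conf")
onboard_chart("free5gc", "https://example.com/charts/free5gc")  # placeholder URL

# One call, 2 clouds x 2 instances = 4 deployments.
deployments = orchestrate("free5gc", ["edge-east", "edge-west"],
                          instances_per_cloud=2)
```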
The cattle methodology also works best for ongoing Lifecycle Management (LCM), since it is not feasible to operate millions of application management endpoints (GUI or API). Instead, LCM is handled as follows:
For app-independent LCM actions, use a unified endpoint for all application instances
For category-dependent LCM actions (e.g. O-RAN, 5G Core, SD-WAN, firewall), use a unified dashboard for that application category that manages all instances from any vendor
For app-dependent LCM actions (e.g. AR/VR, drone control), use management endpoints retrofitted to connect to multiple instances of that application
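The first case above, an app-independent unified endpoint, can be sketched as a single call that fans one action out to every matching instance, rather than touching each instance's own endpoint. The data shapes and function name below are illustrative assumptions.

```python
# Unified LCM endpoint sketch: one action, applied across all instances.
instances = [
    {"app": "upf",      "cloud": "edge-east", "status": "running"},
    {"app": "upf",      "cloud": "edge-west", "status": "running"},
    {"app": "firewall", "cloud": "edge-west", "status": "running"},
]

def lcm_action(action, selector=None):
    """Apply `action` to every instance matching `selector` (a dict of fields)."""
    touched = 0
    for inst in instances:
        if selector is None or all(inst.get(k) == v for k, v in selector.items()):
            inst["status"] = action
            touched += 1
    return touched

# Restart every UPF instance, across all clouds, with one call.
restarted = lcm_action("restarting", {"app": "upf"})
```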
The cattle methodology is helpful for service assurance as well. With the pets methodology, you would have to log into many management endpoints to troubleshoot and raise tickets. With the cattle methodology, application (and optionally infrastructure) telemetry is sent to a closed-loop automation system (big data or AI/ML) that takes corrective actions automatically.
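A toy version of that closed loop: telemetry flows in, a policy is evaluated, and a corrective action flows out with no human logging into per-instance dashboards. The simple CPU threshold here is a stand-in for the big-data/AI-ML analytics described above; instance names and the scale-out action are invented for illustration.

```python
# Minimal closed-loop automation sketch: telemetry -> policy -> action.
def closed_loop(telemetry, cpu_threshold=0.8):
    actions = []
    for sample in telemetry:
        if sample["cpu"] > cpu_threshold:
            # Policy hit: automatically scale out the overloaded instance.
            actions.append(("scale_out", sample["instance"]))
    return actions

telemetry = [
    {"instance": "upf-edge-east-0", "cpu": 0.95},  # overloaded
    {"instance": "upf-edge-west-1", "cpu": 0.40},  # healthy
]
actions = closed_loop(telemetry)
# Only the overloaded instance triggers a corrective action.
```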
With the recently announced 2.0 version of the Aarna Networks Multi Cluster Orchestration Platform (AMCOP), we are solving all three aspects of network service and application management:
Initial orchestration
Ongoing Lifecycle Management (LCM)
Service assurance, i.e. real-time policy-driven closed-loop automation
AMCOP 2.0 has three new capabilities:
Full integration of the Intel OpenNESS EMCO (our orchestration engine) with the ONAP CDS project, for Day 1 and Day 2 configuration and lifecycle management of cloud native network functions (CNFs) and cloud native applications (CNAs).
An early access version of the Aarna Analytics Platform, based on Google’s CDAP project, which can be used for real-time policy-driven closed-loop automation. The analytics platform will also be the foundation for additional technologies from us, such as the O-RAN Non-Real-Time RIC (NONRTRIC) and the Network Data Analytics Function (NWDAF).
Full support for end-to-end 5G network slicing.
A high-level block diagram of AMCOP 2.0 is shown below:
AMCOP 2.0 is available for a free trial. Give it a shot. You can onboard a free 5GC and orchestrate that onto a Kubernetes cloud.
Also, don’t forget to join our “Cloud Native Application (CNA) Orchestration on Multiple Kubernetes Edge Clouds” meetup on Monday, February 22, at 7 AM PT. In this hands-on technical meetup, we will show you how to onboard and orchestrate edge computing applications on multiple K8s edge clouds.