NWDAF Rel 17 Explained - Architecture, Features and Use Cases

The 5G System is expected to be AI-capable for optimal allocation and usage of network resources. The analytics functionality of the 5G system is separated from the other core functions to ensure better modularisation and reach. The Network Data Analytics Function (NWDAF), with its 3GPP-compliant interfaces, provides data analytics on the 5G Core. NWDAF Rel 15 did not see much adoption because data was unavailable and the 3GPP specifications for NWDAF were not fully defined. Now, with 5G deployments kicking off and 3GPP standardizing all the necessary specifications, a complete implementation of NWDAF is possible. The NWDAF architecture is defined in 3GPP TS 23.288, and the detailed specification, including the APIs, is defined in 3GPP TS 29.520.

Release 17 specifies two separate NWDAF logical functions:

  • Analytics Logical Function (AnLF)
  • Model Training Logical Function (MTLF)

NWDAF is responsible for the data collection and storage required for inference, but it can use other functions to achieve this. A typical NWDAF use case consists of one or more machine learning models. Building a machine learning model is an iterative process: data scientists experiment with different models and different data sets, and even after deployment a model requires constant monitoring and retraining. A typical use case therefore consists of many ML models, with overlapping data fed into them.


Fig 1: NWDAF Architecture

NWDAF is different from the other NFs in the 5G Core because of:

  1. The requirement of retraining. Once we deploy a conventional NF, we don't expect its behaviour to change. An ML model is different: it is tightly coupled to the data it was trained on, so if data patterns change from trial runs to the actual environment, the model might behave differently. This is called Data Drift (see the sketch below).
  2. The requirement of historical data. NFs only need the current state of the machine, but an ML system analyses historical data to derive future values.
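
To make data drift concrete, here is a minimal sketch, not tied to any 3GPP interface, that compares the distribution of a feature at training time against live traffic using a two-sample Kolmogorov-Smirnov test; the feature, sample sizes and threshold are all illustrative.

```python
# Minimal data-drift check: compare a feature's training-time distribution
# with its live distribution. All names and thresholds are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)

# Samples the model was trained on (e.g. per-cell load measurements) ...
training_load = rng.normal(loc=0.5, scale=0.10, size=5_000)
# ... and samples observed in the live network after traffic patterns shifted.
live_load = rng.normal(loc=0.7, scale=0.15, size=5_000)

# A small p-value means the two samples are unlikely to come from the same
# distribution, i.e. the data has drifted and retraining may be needed.
result = ks_2samp(training_load, live_load)
DRIFT_P_VALUE = 0.01  # illustrative significance threshold

if result.pvalue < DRIFT_P_VALUE:
    print(f"Data drift detected (KS statistic = {result.statistic:.3f})")
else:
    print("No significant drift detected")
```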

NWDAF is an API layer that provides a standard interface through which other network elements obtain analytics. The consumer can be an NF, the OAM or an AF, and consumers subscribe to analytics through NWDAF. Each NWDAF is identified by an Analytics ID and an Area of Interest; a single NWDAF can also serve multiple Analytics IDs. The Area of Interest is the geographical area that an NF belongs to, and since a UE is mobile it can move from one Area of Interest to another. Because the data sets are huge and shared across different NWDAF ML models, the design must ensure there is no duplication of effort in collecting and storing the data. A simplified subscription request is sketched below.
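
As a rough illustration of the consumer side, the sketch below sends a subscription request shaped after the Nnwdaf_EventsSubscription service of TS 29.520; the host, notification URI and exact payload fields are simplified assumptions, so the spec (or your vendor's API) is authoritative.

```python
# Hedged sketch of an analytics consumer subscribing to NWDAF analytics.
# The URL, host and payload are simplified; consult TS 29.520 for the
# exact schema.
import requests

NWDAF_API_ROOT = "http://nwdaf.example.com"  # illustrative NWDAF endpoint

subscription = {
    # One subscription can carry several analytics events (Analytics IDs).
    "eventSubscriptions": [
        {"event": "UE_MOBILITY"},  # the analytics the consumer wants
    ],
    # Where the NWDAF should POST notifications for this subscription.
    "notificationURI": "http://consumer.example.com/nwdaf-notify",
}

resp = requests.post(
    f"{NWDAF_API_ROOT}/nnwdaf-eventssubscription/v1/subscriptions",
    json=subscription,
    timeout=5,
)
resp.raise_for_status()
# The Location header typically identifies the created subscription
# resource, which the consumer later uses to modify or delete it.
print("Subscription created at:", resp.headers.get("Location"))
```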

Features of NWDAF -

1. Aggregation
Fig 2: Aggregation supported by NWDAF


There are different types of aggregation that an NWDAF can perform -

●     Aggregation based on Area of Interest -

Each Area of Interest can have a separate NWDAF. In some cases the analytics consumer might require a larger area of interest; in the example above, the consumer requires three Areas of Interest. An NWDAF can act as an aggregator by collecting data from the NWDAFs associated with the other Areas of Interest and sending a single aggregated result to the consumer.

●     Aggregation based on Analytics -

An analytics use case can be built from other use cases. In the example above, the NWDAF with Analytics ID 3 is composed of the NWDAF with Analytics ID 1 and the NWDAF with Analytics ID 2, combined by means of some logic. This kind of aggregation is called Analytics Aggregation. Both styles are sketched below.
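
The sketch below illustrates both aggregation styles. The data structures and the combining logic are invented for illustration; 3GPP defines the service operations, not how an implementation merges partial results.

```python
# Illustrative sketch of both aggregation styles described above.
from statistics import mean

# --- Aggregation based on Area of Interest -------------------------------
# Partial results reported by NWDAFs serving three different areas.
per_area_load = {
    "area-1": 0.62,
    "area-2": 0.71,
    "area-3": 0.55,
}

def aggregate_areas(partials: dict[str, float]) -> float:
    """Combine per-area results into one answer for the consumer."""
    return mean(partials.values())

print("Aggregated load over 3 areas:", aggregate_areas(per_area_load))

# --- Aggregation based on Analytics --------------------------------------
# AID 3 is derived from AID 1 and AID 2 "by means of a logic"; here the
# logic is a simple weighted sum, purely as an example.
aid1_result = 0.8   # e.g. predicted cell load
aid2_result = 0.3   # e.g. predicted abnormal-behaviour score

aid3_result = 0.7 * aid1_result + 0.3 * aid2_result
print("Derived AID 3 result:", aid3_result)
```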

2. Analytics Subscription Transfer

One NWDAF can transfer subscriptions to another NWDAF. For example, suppose an analytics consumer is receiving analytics data for a UE through an NWDAF associated with a particular Area of Interest. If the UE moves to another Area of Interest, the first NWDAF transfers the consumer's subscription to the NWDAF associated with the new Area of Interest, which then continues sending analytics data to the consumer. This also comes in handy when an NWDAF undergoes graceful shutdown or performs load balancing.
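
A toy sketch of such a handover follows, with an invented object model; 3GPP defines the transfer service operations, not these classes.

```python
# Toy sketch of an analytics subscription transfer between two NWDAF
# instances when a UE moves to a new Area of Interest. Class and method
# names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Subscription:
    consumer_uri: str   # where notifications are delivered
    analytics_id: str   # e.g. "UE_MOBILITY"
    ue_id: str

@dataclass
class Nwdaf:
    area_of_interest: str
    subscriptions: list[Subscription] = field(default_factory=list)

    def transfer_subscription(self, sub: Subscription, target: "Nwdaf") -> None:
        # Hand over the full subscription context so notifications to the
        # consumer continue seamlessly from the target NWDAF.
        self.subscriptions.remove(sub)
        target.subscriptions.append(sub)
        print(f"Transferred {sub.analytics_id} for {sub.ue_id} "
              f"from {self.area_of_interest} to {target.area_of_interest}")

nwdaf_a = Nwdaf("area-1")
nwdaf_b = Nwdaf("area-2")
sub = Subscription("http://consumer/notify", "UE_MOBILITY", "imsi-001")
nwdaf_a.subscriptions.append(sub)

# The UE moves from area-1 to area-2: the serving NWDAF hands over.
nwdaf_a.transfer_subscription(sub, nwdaf_b)
```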

MLOps - the complete picture

MLOps comprises practices for deploying and maintaining machine learning models in production networks. The word “MLOps” is a compound of “machine learning” and “DevOps”. It includes the following components, which are also the prerequisites for building an NWDAF platform -

●     Configuration Module

●     Data Collection Module on the Core/Edge (should be as per 3GPP standards)

●     Long-Term Data Collection Module

●     Data Verification Module

●     Machine Resource Management

●     Feature Extraction Module

●     Analysis Tools

●     Process Management Tools

●     Data Serving Module (should be as per 3GPP standards)

●     Monitoring Module

●     ML Code


Fig 3: MLOps Introduction

3GPP defines the following group of standard functions for supporting data analytics in 5G network deployments -

●     NWDAF-AnLF - Analytics Logical Function

●     NWDAF-MTLF - Model Training Logical Function

●     DCCF - Data Collection Coordination (& Delivery) Function

●     ADRF - Analytics Data Repository Function

●     MFAF - Messaging Framework Adaptor Function


Fig 4 : Complete Loop

The NF/OAM/AF that acts as the analytics consumer requests analytics from the NWDAF, either directly or through the DCCF. NWDAF is divided into two functions, AnLF and MTLF. The Analytics Logical Function (AnLF) is responsible for receiving the analytics request and sending the response back to the consumer. The AnLF requires the model endpoints, which are provided by the Model Training Logical Function (MTLF); the MTLF trains the model and deploys it as an inference microservice.

The AnLF also requires the historical data that the model microservice needs for prediction. For this it requests the DCCF (Data Collection Coordination and Delivery Function), the central point for managing all data requests. If another NF has already requested the same set of data and that data is available, the DCCF sends it directly to the NWDAF; otherwise the DCCF initiates a data transfer from the data provider. The actual data transfer then happens between the MFAF (Messaging Framework Adaptor Function) and the ADRF (Analytics Data Repository Function), which stores the required historical data.

Finally, the DCCF passes the data on to the NWDAF's AnLF. The AnLF requests a prediction from the model microservice, constructs the response in the 3GPP format, and passes the prediction back to the analytics consumer. The whole loop is sketched below.
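
The sketch below condenses this loop into plain functions; each one stands in for a network call to the named 5G function, and all signatures are invented for illustration.

```python
# Simplified end-to-end sketch of the analytics loop described above.

def mtlf_get_model_endpoint(analytics_id: str) -> str:
    """MTLF trains/deploys the model and exposes an inference endpoint."""
    return f"http://model-serving/{analytics_id}/predict"

def dccf_fetch_data(analytics_id: str, area_of_interest: str) -> list[float]:
    """DCCF returns cached data if another NF already requested it,
    otherwise it coordinates a transfer from the provider via MFAF/ADRF."""
    return [0.4, 0.5, 0.7]  # placeholder historical samples

def model_predict(endpoint: str, history: list[float]) -> float:
    """Call the inference microservice deployed by the MTLF."""
    return sum(history) / len(history)  # stand-in for a real model call

def anlf_handle_request(analytics_id: str, area_of_interest: str) -> dict:
    """AnLF: receive the consumer's request, orchestrate, respond."""
    endpoint = mtlf_get_model_endpoint(analytics_id)
    history = dccf_fetch_data(analytics_id, area_of_interest)
    prediction = model_predict(endpoint, history)
    # In practice the response is encoded in the 3GPP-defined format.
    return {"analyticsId": analytics_id, "prediction": prediction}

print(anlf_handle_request("NF_LOAD", "area-1"))
```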

Data Collection -

The data collection for analytics by NWDAF happens at 3 levels -

●     For feature engineering, analysis and offline training, data is collected for the long term and can be stored in a data lake/data warehouse.

●     The data required for online training (which is managed by the MTLF) can be collected in the ADRF.

●     The data required by the AnLF for model inference may come from the ADRF/NF/OAM. This data is shorter-term, like a few hours of data.

Model Serving & MTLF -

To understand the MTLF we need to know what it is serving. A model basically contains code and trained parameters, but for applications to use it we need to wrap it in a microservice, so that the analytics result is available to the application as an endpoint. Different frameworks are available for this, such as TF Serving (TensorFlow Model Serving), the TorchServe framework, Triton Inference Server (NVIDIA's framework) and Acumos AI.
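
For example, a model deployed behind TF Serving can be queried over its documented REST interface (the ":predict" URL shape and the "instances"/"predictions" JSON keys); the host, port and model name below are illustrative.

```python
# Querying a model served by TensorFlow Serving over REST. Host, port and
# model name are placeholders for a real deployment.
import requests

TF_SERVING_URL = "http://model-serving:8501/v1/models/nf_load:predict"

# One inference request with two input rows (feature vectors).
payload = {"instances": [[0.4, 0.5, 0.7], [0.6, 0.6, 0.8]]}

resp = requests.post(TF_SERVING_URL, json=payload, timeout=5)
resp.raise_for_status()
print(resp.json()["predictions"])
```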

The following are the input formats that the MTLF accepts -

●     ML Code - for online training

●     Saved Models - include both the code and the trained parameters; this is the most popular way to share pretrained models (see the export sketch below)

●     Container Images
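
As a small illustration of the Saved Model format, the sketch below exports a trivial TensorFlow module (code plus trained parameters together) into a directory that a serving framework can deploy; the model and paths are placeholders.

```python
# Export a (trivial) model in TensorFlow's SavedModel format: the code and
# the trained parameters travel together. Model and paths are illustrative.
import tensorflow as tf

class NfLoadModel(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.random.normal([3, 1]), name="w")

    @tf.function(input_signature=[tf.TensorSpec([None, 3], tf.float32)])
    def predict(self, x):
        # Stand-in for a real trained analytics model.
        return tf.matmul(x, self.w)

tf.saved_model.save(NfLoadModel(), "export/nf_load/1")
```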

Model Monitoring and Feedback -

An ML model's performance may decay over time, and this can negatively impact the performance of the system, for example by over-allocating resources or degrading the user experience. So continuous self-monitoring and retraining are required. Retraining with newer data can be managed by the MTLF or outside the edge/core, and in certain cases the ML model may need to be redesigned. The MTLF needs to send a trigger to the model management layer when retraining within the MTLF is no longer effective.
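
A minimal monitoring loop might look like the following; the window size, error metric and threshold are illustrative, and the retraining trigger is a placeholder for whatever the MTLF or model management layer exposes.

```python
# Minimal monitoring sketch: track prediction error against observed
# ground truth and fire a retraining trigger when accuracy decays.
from collections import deque

WINDOW = 100           # recent predictions to track
ERROR_THRESHOLD = 0.2  # mean absolute error that triggers retraining

recent_errors: deque[float] = deque(maxlen=WINDOW)

def trigger_retraining(mae: float) -> None:
    # In an NWDAF this could start an MTLF-internal retraining job, or
    # notify the model management layer if retraining alone no longer
    # restores performance.
    print(f"MAE {mae:.3f} above threshold; requesting retraining")

def record_outcome(predicted: float, observed: float) -> None:
    recent_errors.append(abs(predicted - observed))
    if len(recent_errors) == WINDOW:
        mae = sum(recent_errors) / WINDOW
        if mae > ERROR_THRESHOLD:
            trigger_retraining(mae)
```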

ML Pipeline - NWDAF Interaction

             

Fig 5 : ML Pipeline

The part in the cloud is the ML pipeline. Three interfaces (the ones drawn in blue) connect the cloud with the components at the edge. The first is the Model Deploy interface, required to push a model from the cloud layer to the MTLF. The second is the Model Feedback interface, which the MTLF uses to send feedback to the upper layer. The third is the Pull Data interface, which is required to send data for ML training to be stored in the data warehouse/data lake.
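
Since 3GPP does not standardise the cloud-side pipeline, the stubs below only illustrate the responsibilities of the three interfaces; the names and signatures are invented.

```python
# Invented stubs for the three blue interfaces in Fig 5.
from typing import Protocol

class ModelDeploy(Protocol):
    def deploy(self, model_artifact: bytes, analytics_id: str) -> str:
        """Push a trained model from the cloud layer to the MTLF and
        return the resulting inference endpoint."""
        ...

class ModelFeedback(Protocol):
    def report(self, analytics_id: str, metrics: dict[str, float]) -> None:
        """Let the MTLF send accuracy/drift metrics back to the pipeline."""
        ...

class PullData(Protocol):
    def pull(self, analytics_id: str, since: str) -> bytes:
        """Move training data from the edge/core into the data
        warehouse / data lake."""
        ...
```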

The data in the data warehouse should be easily accessible for experiments by data analysts, hence it sits in the cloud. ETL/ELT is the step where data is Extracted, Transformed and Loaded into storage; in some cases, when the endpoint is a data lake, the data is loaded without any transformation. Data collection from the source is done in batches, although streaming of data is gaining traction, and NWDAF is designed so that streaming can also be used. Deciding which data to upload, or which data an ML experiment requires, is a complex subject: uploading all data can lead to unnecessary use of bandwidth, and the data protection regulations of the geographical area where the edge is located also apply. Hence this component should be designed very carefully.
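
A tiny batch ETL job, sketched with pandas; the fields, paths and aggregation are illustrative.

```python
# Tiny batch ETL sketch: extract raw records, transform them, and load the
# result into columnar storage. A real pipeline must also respect local
# data-protection rules before uploading anything to the cloud.
import pandas as pd

# Extract: a batch of raw measurements exported from the edge.
raw = pd.DataFrame({
    "cell_id": ["c1", "c1", "c2"],
    "load": [0.42, 0.55, 0.61],
    "ts": ["2022-01-01T00:00", "2022-01-01T00:05", "2022-01-01T00:00"],
})

# Transform: parse timestamps and aggregate to per-cell averages.
raw["ts"] = pd.to_datetime(raw["ts"])
curated = raw.groupby("cell_id", as_index=False)["load"].mean()

# Load: write to Parquet, a typical data-warehouse/data-lake format
# (requires a Parquet engine such as pyarrow).
curated.to_parquet("cell_load.parquet", index=False)
```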

Distributed System Platform -

Fig 6 : Distributed System Platform

The components of NWDAF are treated in a similar way to Network Functions by the underlying platform, and they are installed in a distributed manner. The NWDAF should have minimal dependency on the underlying platform of the ecosystem.

NWDAF SDK -

The platform provides an SDK/framework for developing the AnLF & MTLF, which reduces the coding effort for MLOps engineers. The SDK framework should hide the complexity of the 3GPP standard from the developer, and it should also support advanced NWDAF features like aggregation and subscription transfer. A hypothetical sketch of such an SDK follows.
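
There is no standard NWDAF SDK, so the following is a purely hypothetical sketch of the kind of API such an SDK could offer: the developer registers a handler per Analytics ID, and the framework would wrap it in the 3GPP-compliant request handling.

```python
# Purely hypothetical SDK sketch: the decorator and registry below are
# invented to show how a framework could hide the 3GPP plumbing from the
# MLOps engineer, who only writes the inference logic.
from typing import Callable

HANDLERS: dict[str, Callable] = {}

def analytics(analytics_id: str):
    """Register an AnLF handler for one Analytics ID."""
    def register(fn: Callable) -> Callable:
        HANDLERS[analytics_id] = fn
        return fn
    return register

@analytics("NF_LOAD")
def predict_nf_load(history: list[float]) -> float:
    # The SDK would wrap this in subscription handling, 3GPP encoding
    # and notification delivery.
    return sum(history) / len(history)

# The framework would dispatch incoming analytics requests like this:
print(HANDLERS["NF_LOAD"]([0.4, 0.6]))
```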

3GPP Release 17 Timeline -

  • Q3 2021 - We can expect the Architecture to freeze

  • Q2 2022 - We can expect a Stage 3 freezing with detailed definitions and APIs

  • Q3 2022 - We can expect the protocol codes to be frozen.
