
Sriram Rupanagunta

Enabling RAGOps

This is a follow-up to the earlier blog “From RAGs to Riches” by my colleague, Amar Kapadia. 

Setting up GenAI for an Enterprise involves multiple steps, which can be categorized as follows: 

  • Infrastructure Orchestration, which includes the servers/GPUs with cloud software, virtualization tools, and the networking infrastructure. There may be additional requirements depending on the Enterprise's needs, such as:
      ◦ SD-WAN setup between their locations
      ◦ Access to the Enterprise data from their SaaS infrastructure (Confluence/Jira/Salesforce, etc.)
      ◦ Connectivity to public clouds, if needed
      ◦ Connectivity to the repos where the GenAI models are hosted (Hugging Face, etc.)
      ◦ If this is set up in Cloud Edge DCs (such as Equinix), the fabric may need to be configured to connect to other Edge locations or the public clouds, using network edge devices (routers/firewalls that run as xNFs)
  • GenAI Orchestration, which includes bringing up the GenAI tools, either for training or for inferencing. 
  • RAG Orchestration, which includes building the necessary Vector DB from various Enterprise sources and using it as part of the Inferencing pipeline. 

All of the above requires a sophisticated Orchestrator that can work in a generic manner and provide single-click (or single-command) functionality. 

The flow is as follows: 

  • The Admin creates a high-level Intent that describes the necessary infrastructure, connectivity requirements, site details, and tools 
  • The Orchestrator takes the Intent as input and sets up the necessary infrastructure and applications
  • The Orchestrator also monitors the infra/applications for failures or performance issues and makes the necessary adjustments (it could work with one of the existing tools, such as TMS, for this function).
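To make this flow concrete, here is a minimal sketch of what such an intent, and the setup steps an orchestrator might derive from it, could look like. The schema, field names, and the `orchestrate` function are illustrative assumptions for this post, not AMCOP's actual API.

```python
# Hypothetical high-level intent; every field name below is illustrative.
intent = {
    "site": "equinix-sv15",
    "infrastructure": {"servers": {"count": 2, "gpu": "A100"}, "kubernetes": True},
    "connectivity": {
        "sdwan_sites": ["hq", "branch-1"],
        "public_clouds": ["aws"],
        "saas_sources": ["confluence", "jira"],
    },
    "genai": {"mode": "inference", "model_repo": "huggingface",
              "rag": {"vector_db": "milvus"}},
}

def orchestrate(intent: dict) -> list[str]:
    """Walk the intent and emit the ordered setup steps (placeholder logic)."""
    steps = [f"provision infra at {intent['site']}"]
    for site in intent["connectivity"]["sdwan_sites"]:
        steps.append(f"configure SD-WAN link to {site}")
    for src in intent["connectivity"]["saas_sources"]:
        steps.append(f"connect SaaS data source {src}")
    steps.append(f"deploy GenAI stack ({intent['genai']['mode']})")
    if "rag" in intent["genai"]:
        steps.append(f"deploy vector DB {intent['genai']['rag']['vector_db']}")
    return steps

for step in orchestrate(intent):
    print(step)
```

The point of the declarative form is that the same intent can be re-applied idempotently for Day-2 reconciliation, which is what enables the single-click experience described above.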

I hope this sheds some light on the topic and gives some clarity on how to go about setting up the underlying infrastructure for RAGOps. 

AMCOP can orchestrate AI (and more specifically, GenAI) workloads on various platforms. At Aarna Networks, we offer an open source, zero-touch orchestrator, AMCOP (also offered as a SaaS, AES), for lifecycle management, real-time policy, and closed loop automation for edge and 5G services. If you’d like to discuss your orchestration needs, please contact us for a free consultation.

Next Steps

Contact us for help on getting started with RAGOps. The Aarna Networks Multi Cluster Orchestration Platform orchestrates and manages edge environments, including support for RAGOps. We have specifically created an offering that is suitable for NSPs by focusing not just on the FM and related ML components, but also on the infrastructure, e.g., using Equinix Metal to speed up deployment and Equinix Fabric for seamless data connectivity. As an NVidia partner, we have deep expertise with server platforms like the NVidia Grace Hopper and platform components such as NVidia Triton and NeMo.

Amar Kapadia

A Glimpse from PTC'24

At the recent Pacific Telecommunications Council (PTC) '24 event held in Honolulu, Hawaii, Subramanian Sankaranarayanan, AVP at Aarna Networks, took the stage to deliver an insightful talk on “Multi-Domain Edge Connectivity Services for Equinix Metal, Network Edge, Fabric, and Multi-Cloud.”

Subbu's presentation centered on the dynamic evolution of data centers towards Infrastructure-as-a-Service (IaaS) and the complexities inherent in multi-vendor IaaS deployments. He highlighted the innovative solutions offered by the Linux Foundation Edge Akraino PCEI, an award-winning blueprint, for orchestrating and managing cloud edge infrastructures.

A focal point of his discussion was Aarna Edge Services (AES), a SaaS platform instrumental in simplifying the deployment and orchestration of infrastructure, apps, and network services at the cloud edge. Subbu illustrated various use cases of AES, demonstrating its efficiency in reducing deployment time from weeks to less than an hour and optimizing cloud-adjacent storage and GenAI processes.

The session provided valuable insights into the future of cloud and edge computing, emphasizing the importance of seamless integration and efficient management in today's interconnected digital world.

Subbu's expertise and the innovative approaches discussed at PTC'24 paint an exciting picture of the future of cloud edge management and multi-cloud deployments, promising a more streamlined, efficient, and interconnected digital ecosystem.

We are grateful to the Pacific Telecommunications Council (PTC) for this amazing opportunity, the memorable exposure, and all the time we spent at PTC’24.

If you couldn't connect with us at the event, feel free to contact us to arrange a meeting.

Amar Kapadia

Exploring Edge-Native Application Design Behaviors

In December 2023, the tech community welcomed a groundbreaking whitepaper titled "Edge-Native Application Design Behaviours." This comprehensive document delves into the dynamic realm of Edge-native application design, providing invaluable insights for developers and architects navigating the unique challenges of Edge environments.

Evolution from CNCF IoT to Edge-Native Principles

Building upon the foundational principles outlined in the CNCF IoT Edge Native Application Principles Whitepaper, this latest release adapts and refines these principles specifically for Edge environments. The result is a guide that serves as an indispensable resource for those working on Edge-native applications, offering practical guidelines and illuminating insights.

Navigating Key Aspects of Edge-Native Design

The whitepaper meticulously explores key aspects crucial for Edge-native design, unraveling the intricacies of concurrency, scale, autonomy, disposability, capability sensitivity, data persistence, and operational considerations. A particular highlight is a real-world scenario, illustrating the application of these design behaviours in a tangible context.

Decoding Edge Native Application Design

Understanding Edge-native application design necessitates recognizing its departure from cloud-native design. Edges, as autonomous entities, play a pivotal role in ingesting, transforming, buffering, and displaying data locally. Distributed edge components complement these entities, handling functions to reduce bandwidth consumption and adhere to location-based policies.

Design Constraints and Principles

Edge-native applications face distinct design constraints, such as connectivity, data-at-rest, and resource constraints. The whitepaper emphasises the importance of evolving cloud-native application design principles to address these constraints effectively. Key principles include the separation of data and code, stateless processes, share-nothing entities, and the separation of build and run stages.
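As a toy illustration of two of those principles, stateless processes and the separation of data and code, the sketch below keeps all state in an external store (a plain dict standing in for, say, a Redis instance at the edge) so that any replica, or a restarted instance, can serve the next request. All names here are hypothetical, not taken from the whitepaper.

```python
# Stand-in for an external state service (e.g. Redis); state lives outside
# the process, which is what makes the handler itself stateless/disposable.
external_store = {}

def handle_reading(device_id: str, value: float) -> dict:
    """Stateless handler: it holds no in-process state between calls, so an
    edge platform can freely kill, restart, or scale replicas of it."""
    history = external_store.setdefault(device_id, [])
    history.append(value)
    return {"device": device_id, "count": len(history), "last": value}

print(handle_reading("sensor-1", 21.5))  # {'device': 'sensor-1', 'count': 1, 'last': 21.5}
print(handle_reading("sensor-1", 22.0))  # count becomes 2
```

Because the process shares nothing and externalizes its state, it also satisfies the disposability behavior the whitepaper discusses: losing the instance loses no data.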

Guidelines for Edge-Native Development

For developers venturing into Edge-native applications, the whitepaper provides a detailed reference guide. Topics such as concurrency and scale, edge autonomy, disposability, capability sensitivity, data persistence, metrics/logs, and operational considerations are meticulously explored.

A Glimpse into the Future

As the digital landscape evolves, Edge-native application design becomes increasingly vital. The whitepaper not only serves as a guide but also charts a course for future development in this dynamic field. The principles and insights shared pave the way for innovation, ensuring that Edge-native applications are not just efficient but also resilient in the face of evolving technological landscapes.

Click here to download the whitepaper.

Amar Kapadia

From RAGs to Riches

A Unique RAGOps Opportunity for NSPs to Offer RAG to their Enterprise Customers

Enterprises are going to embrace GenAI, of that there is no doubt. GenAI will add value in just about every function of an enterprise. The speed at which an enterprise adopts GenAI will clearly result in a competitive advantage. However, a more durable and lasting competitive moat will result by blending enterprise data with the GenAI model. The more data an enterprise can utilize for GenAI, the deeper their competitive moat. 

There are two options for an enterprise to mix corporate data with the GenAI model:

  1. Fine-tune an existing Foundational Model: In this option, an enterprise fine-tunes a private copy of an existing GenAI Foundational Model (FM) with its own corporate data. Though much simpler than training a new GenAI model, which we are not even considering, this option is difficult for most enterprises. It requires GPUs to the tune of millions of dollars, a high degree of skill to set up Large Language Model Operations (LLMOps) pipelines, and the need to continuously fine-tune the model to prevent it from drifting or getting stale.
  2. Retrieval Augmented Generation (RAG): In this approach, an enterprise uses a lightweight Foundational Model (FM) that has generic natural language processing capability but no real domain knowledge. Users then supplement the prompt with relevant data retrieved in real time to get a meaningful result. RAG can also reduce hallucination by citing the exact data source(s). However, this approach is network heavy: each prompt may generate a large amount of traffic to retrieve the relevant data. 
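To make the RAG flow concrete, here is a minimal, self-contained sketch of prompt augmentation. A toy word-overlap ranking stands in for the embedding model and vector search a real deployment would use, and the documents and function names are invented for illustration.

```python
# Toy corpus standing in for enterprise data sources.
DOCUMENTS = [
    "Q3 revenue grew 12% driven by the APAC region.",
    "The VPN outage on May 2 was caused by an expired certificate.",
    "Employee onboarding requires badge, laptop, and SSO setup.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by word overlap with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Augment the user prompt with retrieved context and cite the source,
    which is how RAG grounds the answer and counters hallucination."""
    context = retrieve(query, DOCUMENTS)
    cited = "\n".join(f"[source {i + 1}] {c}" for i, c in enumerate(context))
    return f"Answer using only the context below.\n{cited}\n\nQuestion: {query}"

print(build_prompt("what caused the VPN outage"))
```

Note that the retrieval step runs on every prompt, which is exactly why the approach is network heavy: the relevant enterprise data travels to the model each time rather than being baked into its weights.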

In that sense, the two approaches are analogous to the following:

Fine tuning an FM is akin to tapping into an intelligent employee who has been fully trained in your corporate data. Of course, they need to be trained on an ongoing basis to stay current.

RAG is similar to hiring an intelligent employee/consultant who doesn’t have prior knowledge of any specific domain, but is fast enough to read any information you want in real-time.

Given the above: Most enterprises will use RAG

There are three deployment models for RAG:

  1. Public model – In this option, a public model (e.g., from Microsoft) is used for RAG. The public model will use corporate data to provide the response. The fly in the ointment is the requirement to move all the relevant data to a public GenAI service provider. Some enterprises might be comfortable with this, but most will not be, for a variety of reasons.
  2. Private model in a public cloud – In this approach, an enterprise uses a private FM in a public cloud along with other components such as vector databases. This is convenient but, again, all the data needs to be shipped to the public cloud. This is perhaps less scary than the previous option since the data would reside in a private repository; nevertheless, it is a lot to swallow.
  3. Private model in a private cloud – In this option, the enterprise would use a private FM, along with other components like a vector database, in a private cloud. What makes this approach attractive is that the private cloud already has all the required network connections to internal data sources. However, this approach does require a bit more sophistication on the part of the user to deploy and manage RAG.

From the above, it is clear: A RAG model in a private cloud will dominate

Enter Network Service Providers (NSP)

Unlike ML/LLMOps which require significant ML expertise, RAG does not. In fact, RAG requires expertise in data connectivity since the value of a RAG model is directly proportional to the amount of corporate data made available to it. Who better to provide managed RAG than the provider of SD-WAN and managed IP networks?

NSPs are best positioned to offer managed RAG

Getting Started with RAGOps

RAGOps may be summed up as a DevOps-based methodology to deploy and manage a RAG model. RAGOps requires the following steps:

  • Deploy virtual infrastructure with GPUs to host the RAG model. This may be a combination of virtual compute (containers, VMs), storage, virtual networks, and Kubernetes/hypervisor layer.
  • Deploy an FM along with a vector database, text embedding, and other data sources.
  • Deploy supporting guardrail/management/monitoring components.
  • Set up data pipelines to collect Enterprise data from diverse sources and populate the vector database.
  • Monitor and manage (upgrade, scale, troubleshoot) the environment over Day 1 and Day 2 as needed.
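The data-pipeline step above, collecting enterprise data and populating the vector database, can be sketched as follows. The `embed` function, `VectorStore` class, and chunk size are stand-ins for a real text-embedding model and a database such as Milvus or pgvector; none of them reflect an actual product API.

```python
import hashlib

def embed(text: str) -> list[float]:
    """Toy deterministic embedding (placeholder for a real embedding model)."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:8]]

def chunk(doc: str, size: int = 200) -> list[str]:
    """Split a document into fixed-size character chunks."""
    return [doc[i:i + size] for i in range(0, len(doc), size)]

class VectorStore:
    """In-memory stand-in for a vector database."""
    def __init__(self):
        self.rows = []

    def upsert(self, source: str, text: str):
        self.rows.append({"source": source, "text": text,
                          "vector": embed(text)})

def ingest(sources: dict[str, str]) -> VectorStore:
    """Chunk, embed, and upsert every document from every source."""
    store = VectorStore()
    for name, doc in sources.items():
        for piece in chunk(doc):
            store.upsert(name, piece)
    return store

# 450 chars -> 3 chunks; 100 chars -> 1 chunk.
store = ingest({"confluence": "A" * 450, "jira": "B" * 100})
print(len(store.rows))
```

In production this pipeline runs continuously as sources change, which is the "Ops" in RAGOps: the vector database is a living index of enterprise data, not a one-time build.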

Since NSPs can provide data connectivity, they hold a competitive advantage. However, the competitive advantage NSPs hold will not last forever. For this reason:

NSPs need to start RAGOps PoCs for enterprise customers ASAP

Next Steps

Contact us for help on getting started with RAGOps.

The Aarna Networks Multi Cluster Orchestration Platform orchestrates and manages edge environments, including support for RAGOps. We have specifically created an offering that is suitable for NSPs by focusing not just on the FM and related ML components, but also on the infrastructure, e.g., using Equinix Metal to speed up deployment and Equinix Fabric for seamless data connectivity. As an NVidia partner, we have deep expertise with server platforms like the NVidia Grace Hopper and platform components such as NVidia Triton and NeMo.

Amar Kapadia

Aarna Networks 2023 Highlights and What’s to Come

Happy New Year to all! Hope 2023 was a good year for you and 2024 will be even better.

For Aarna, we closed 2023 with several major accomplishments:

  • Pushed our flagship product, Aarna Networks Multi Cluster Orchestration Platform (AMCOP), into production
  • Established ourselves as #1 Private 5G orchestrator (successfully implemented E2E P5G solution in partnership with Druid and Airspan)
  • Established ourselves as #1 ML + O-RAN SMO through collaboration with NVidia and interop exercises at TIP, Digital Catapult, i14y, O-RAN Plugfests
  • Released a beta version of our cloud edge orchestration SaaS product, Aarna Edge Services (AES), initially targeting the Cloud Adjacent Storage use case followed by Cloud Adjacent GenAI (RAG), edge⇔multi cloud networking, and more
  • Recognized for contributions to Linux Foundation projects, including Nephio (where we are the #3 contributor), Akraino, and the 5G SuperBlueprint
  • Hired 15+ new staff
  • Raised Series A financing

For 2024, here’s what we are working on:

  • Solidifying our position in Private 5G
  • Building on our edge ML work to establish ourselves as the #1 Edge ML orchestration company
  • Providing comprehensive cloud edge orchestration features to cover storage, networking, and GenAI/ML use cases
  • Expanding Nephio to a number of enterprise use cases by collaborating with other open source communities such as Kubernetes, OpenTofu, Ansible, LF AI & Data Foundation and more

If you are looking to join our journey (as a customer, investor, partner, advisor, employee, press/analyst, or other) please reach out to us.


Amar & Sriram

Brandon Wick

Getting Started in GenAI with a Private, Zero-Trust, Fully Managed LLM

In June of 2023, we announced a partnership with Predera to offer a packaged Generative AI solution to the industry — a private, fully managed LLM combining Predera’s AIQ MLOps platform toolstack with Aarna Networks’ AMCOP for zero-touch orchestration, configuration, and management of upgrades/updates. 

We’re now pleased to announce that NetFoundry has been added to this offering, bringing a zero trust security approach built on Ziti (available as open source OpenZiti or as the CloudZiti SaaS). Hosted in an Equinix data center, all connections are made using software-only zero trust endpoints with outbound connections and ‘authenticate-before-connect’, making the service ‘dark’ to the internet with no inbound ports. This provides a security posture beyond that of managed commercial solutions, with a user experience as simple as ‘it just being available on the internet’. Get the Solution Brief

This significantly strengthens security and controls, as unauthorized attackers have no network access by which to exploit the data. The solution also includes built-in identity, authentication and authorization, least privilege access, granular visibility, and audit controls.

The GenAI offering also provides resources such as Intel Xeon 6338 processors and NVidia A100 GPUs, hosted on Equinix Metal. Users can choose between LLMs like Llama, Dolly, or NeMo, along with support services for model fine-tuning and operations, a management dashboard, and now a zero trust security overlay.

Businesses today need to move fast and start taking advantage of the GenAI revolution while avoiding security threats and bogging down LLM adoption with overly-complex configurations. Get the Solution Brief to learn more and start building your own private, zero trust, fully managed, LLM for GenAI.

To better understand zero trust security, please see these assets from NetFoundry: