The Role of Telco Edge


As the demand for faster, more reliable digital services continues to grow, telecommunication providers face a paradigm shift from centralized data management to decentralized, edge-based processing. Telco Edge computing emerges as a critical architecture to address this need, offering localized data processing capabilities closer to end-users. This document outlines the essential elements of Telco Edge deployment, covering the taxonomy of edge nodes and edge layers, deployment strategies, and the overall benefits and challenges associated with edge computing.



Edge computing is revolutionizing how networks are designed and managed. By moving data processing capabilities from central cloud data centers to localized nodes near users, Telco Edge offers several key advantages that make it indispensable for the future of telecommunications: latency reduction, optimized bandwidth usage, scalability and flexibility, and enhanced security and data privacy, among others.




The adoption of edge computing by telecommunications providers, enterprises, and industry leaders is transforming how data is handled, moving from a cloud-centric model to a distributed edge-based model. The Telco Edge is a key enabler of next-generation applications such as 5G, IoT, and real-time data analytics.




Telco Edge Deployment Locations

Access Edge Location

Access nodes represent the first point of contact between users and the Telco Edge network. At these locations, edge nodes are deployed to handle data processing for nearby users. Multi-access Edge Computing (MEC) brings compute resources closer to the end-user, enabling low-latency applications such as video gaming, streaming, or smart city applications.

Access Aggregation Points (Far Edge)

Typically located in regional hubs, these points process large volumes of data in real time and support applications requiring rapid responses, such as virtual reality, industrial IoT, or autonomous vehicle coordination.

Regional Aggregation Points (Near Edge)

Data from access points is further processed at these locations, making them ideal for latency-sensitive applications that require coordination over a wide geographic area. These aggregation points are crucial in environments such as smart cities, where real-time data from traffic systems, environmental sensors, and public safety devices must be processed and acted upon immediately.

National Core Data Centers

These centers handle large-scale processing, data storage, and complex analytics. They provide the backbone for the Telco Edge network, ensuring that vast amounts of data can be analyzed, stored, and disseminated across the network efficiently.

Telco Edge Functional Necessities

Service Discovery

MEC applications need to locate available services dynamically as they operate in the edge network. Edge service discovery locates an appropriate edge cloud, based on location and other information, and provides a dynamic IP address for the MEC app via mobile network internal information and DNS lookup. Selecting an appropriate edge cloud requires device locations, edge node locations, and the set of instantiated edge services. This information is maintained via the AMF (Access and Mobility Management Function, which tracks user location) and the SMF (Session Management Function, which knows the identity of the UPF anchor) and is used to infer edge node locations and hence to select an edge node.
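As a rough illustration of this selection logic, the sketch below picks an edge site that covers the UE's tracking area, prefers the site colocated with the UE's UPF anchor, and returns the MEC app instance's dynamically assigned IP (what a local DNS lookup would answer). The site inventory, service registry, and identifiers are illustrative assumptions, not real 5G core or MEC platform interfaces.

```python
# Hypothetical sketch of edge service discovery; all data below is illustrative.
from dataclasses import dataclass, field

@dataclass
class EdgeSite:
    site_id: str
    served_tracking_areas: set
    colocated_upfs: set
    services: dict = field(default_factory=dict)   # MEC app name -> dynamic IP

EDGE_SITES = [
    EdgeSite("edge-west-1", {"TA-1001", "TA-1002"}, {"upf-west-1"},
             {"video-analytics": "10.20.0.15"}),
    EdgeSite("edge-east-1", {"TA-2001"}, {"upf-east-1"},
             {"video-analytics": "10.30.0.22"}),
]

def discover_service(app_name, ue_tracking_area, upf_anchor):
    """Select an edge site covering the UE's tracking area (from the AMF) that
    hosts the requested MEC app, preferring the site colocated with the UE's
    UPF anchor (from the SMF), and return the app instance's IP address."""
    candidates = [s for s in EDGE_SITES
                  if ue_tracking_area in s.served_tracking_areas
                  and app_name in s.services]
    candidates.sort(key=lambda s: upf_anchor not in s.colocated_upfs)
    return candidates[0].services[app_name] if candidates else None

print(discover_service("video-analytics", "TA-1001", "upf-west-1"))  # 10.20.0.15
```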


Device Mobility and Service Continuity

As devices move through different geographical regions, Telco Edge must ensure seamless service continuity to maintain the Quality of Service (QoS).


Traffic Steering

Traffic steering directs data flows in real time to maintain the appropriate balance between network capacity and demand. This includes steering traffic to edge nodes that can efficiently handle localized data and routing traffic to destination MEC application instances. Such routing policies and rules can be requested by applications.
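A simplified sketch of such a steering decision is shown below: a flow is routed to the least-loaded edge node that hosts the destination MEC app and falls back to the central cloud when local nodes are saturated. The node catalogue and load threshold are illustrative assumptions, not part of any standardized interface.

```python
# Illustrative traffic-steering decision; node data and thresholds are made up.
EDGE_NODES = {
    "edge-1": {"apps": {"ar-render"}, "load": 0.62},
    "edge-2": {"apps": {"ar-render", "iot-agg"}, "load": 0.91},
}
CENTRAL_CLOUD = "central-dc"
MAX_LOAD = 0.85  # illustrative saturation threshold

def steer(flow_app: str) -> str:
    """Route the flow to the least-loaded eligible edge node, else the cloud."""
    eligible = [(meta["load"], node) for node, meta in EDGE_NODES.items()
                if flow_app in meta["apps"] and meta["load"] < MAX_LOAD]
    return min(eligible)[1] if eligible else CENTRAL_CLOUD

print(steer("ar-render"))  # -> edge-1
print(steer("iot-agg"))    # -> central-dc (edge-2 is above the load threshold)
```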


Network Capability Exposure for Service Enhancement

Network capabilities can be exposed as a service to third-party applications, allowing for advanced service customization and enhancements.


NextraNet Edge Solution Architecture

[Figure: NextraNet Edge Solution Architecture. A CI pipeline (application development, commit to the source code repository, build & test in the CI engine, publish to the MEC image and MEC config registries) feeds a CD pipeline in which a CD agent deploys and syncs MEC app, MEC manager, and MEC platform configs to edge clusters. The MEC Orchestrator handles workload placement & node selection, monitoring, access control, and route handling via the Cluster API, spanning Day-0 infrastructure-as-code at design time and Day-1/Day-2 deployment and operation at run time.]

This figure outlines our Telco Edge Solution Architecture, presenting a high-level view of how edge computing integrates with telecom networks. Our proposed telco edge blueprint includes both cloud and edge layers. The architecture ensures seamless integration between the central cloud and distributed edge resources.

This architectural view shows how edge computing is integrated with centralized cloud resources, supporting a hybrid approach where some tasks are processed at the edge, while others remain in the cloud. The deployment and management are automated using orchestration tools. This helps to manage the complexity of deploying services across distributed nodes while maintaining centralized control over network and application resources.


MEC Orchestrator

Automated Edge Node Selection and Zero-touch Deployment

This feature allows the system to automatically select the best-suited edge node for deploying a service or application. It uses predefined rules to identify which node will offer the best performance for a given workload. “Zero-touch” means that this entire process is done without manual intervention, which accelerates deployment times and reduces human error.

Policy-based Workload Placement

Workloads, such as applications and data processing tasks, are placed on the appropriate edge node based on policies defined by network operators. These policies take into account factors like latency, bandwidth, and resource availability. For example, an AI application requiring real-time processing would be placed on an edge node closest to the data source (see the placement sketch after this list).

Continuous Deployment

The Continuous Deployment (CD) agent automates the deployment of new or modified configurations. The orchestrator automatically detects changes in the configuration repository and deploys them to the relevant edge or cloud clusters.
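As a rough illustration of policy-based placement, the sketch below scores candidate nodes against a workload's latency bound and resource request and picks the lowest-latency feasible node. The policy fields and node attributes are hypothetical, not the orchestrator's actual interface.

```python
# Hypothetical placement policy evaluation; node inventory is illustrative.
NODES = [
    {"name": "edge-a", "latency_ms": 4, "free_vcpu": 8, "free_mem_gb": 16},
    {"name": "edge-b", "latency_ms": 12, "free_vcpu": 32, "free_mem_gb": 64},
    {"name": "region-dc", "latency_ms": 35, "free_vcpu": 256, "free_mem_gb": 512},
]

def place(workload):
    """workload: dict with max_latency_ms, vcpu, mem_gb (illustrative policy fields)."""
    feasible = [n for n in NODES
                if n["latency_ms"] <= workload["max_latency_ms"]
                and n["free_vcpu"] >= workload["vcpu"]
                and n["free_mem_gb"] >= workload["mem_gb"]]
    # Among feasible nodes, prefer the lowest latency to the data source.
    return min(feasible, key=lambda n: n["latency_ms"])["name"] if feasible else None

# A real-time AI inference workload lands on the closest node that can host it.
print(place({"max_latency_ms": 10, "vcpu": 4, "mem_gb": 8}))  # -> edge-a
```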

MEC Platform Manager

Intent-based Description

In an intent-based system, the operator defines high-level intents or goals rather than configuring individual parameters. For example, an operator may define an intent to "deploy an AR application with the lowest possible latency." The platform manager then handles all the underlying details, like selecting the best edge nodes, configuring the network, and deploying the app to meet this intent.

Custom Resource Definition

This refers to the ability to create and manage Custom Resource Definitions (CRDs) in Kubernetes. CRDs allow operators to extend Kubernetes by defining new resource types that are specific to their application needs. For example, a custom resource might define a specific network configuration or deployment type for a 5G service. This ensures flexibility and adaptability in managing edge and cloud resources.
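As an example of working with custom resources, the snippet below uses the Kubernetes Python client to create an instance of a hypothetical "MECApp" custom resource. The group, version, kind, and spec fields are illustrative; the actual resource types depend on the CRDs installed by the platform manager.

```python
# Sketch: creating a hypothetical "MECApp" custom resource instance.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
api = client.CustomObjectsApi()

mec_app = {
    "apiVersion": "mec.example.com/v1alpha1",   # illustrative CRD group/version
    "kind": "MECApp",
    "metadata": {"name": "ar-render", "namespace": "edge-apps"},
    "spec": {                                   # hypothetical intent-style fields
        "image": "registry.example.com/ar-render:1.4.0",
        "maxLatencyMs": 10,
        "placement": {"preferredSites": ["edge-west-1"]},
    },
}

api.create_namespaced_custom_object(
    group="mec.example.com", version="v1alpha1",
    namespace="edge-apps", plural="mecapps", body=mec_app,
)
```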

MEC Platform

Enhanced User-plane Data Processing Acceleration

This refers to optimizing data processing at the edge to ensure that tasks such as video streaming, IoT data processing, and other high-throughput applications can be handled efficiently without routing data back to the central cloud.

Edge Enabler Layer: DNS Handling, Traffic Routing Engine

The edge enabler layer is responsible for managing DNS (Domain Name System) queries locally at the edge, reducing response times. The Traffic Routing Engine ensures that network traffic is intelligently routed to the appropriate edge nodes for processing, further enhancing performance.

API Gateway

The MEC platform includes an API gateway that allows third-party applications and services to interact with the MEC infrastructure. It acts as a management layer, handling traffic between applications and the underlying network or edge resources.
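A minimal gateway sketch is shown below, assuming Flask and an illustrative mapping from public API paths to internal edge service endpoints; a production gateway would additionally handle authentication, rate limiting, and observability.

```python
# Minimal API-gateway sketch: forward incoming API calls to the edge service
# that backs them. Backend endpoints below are illustrative.
import requests
from flask import Flask, request, Response

app = Flask(__name__)

ROUTES = {
    "location": "http://edge-enabler.local:8080/location",
    "qos": "http://edge-enabler.local:8080/qos",
}

@app.route("/api/<service>/<path:rest>", methods=["GET", "POST"])
def proxy(service, rest):
    backend = ROUTES.get(service)
    if backend is None:
        return Response("unknown service", status=404)
    upstream = requests.request(
        request.method, f"{backend}/{rest}",
        params=request.args, data=request.get_data(), timeout=5,
    )
    return Response(upstream.content, status=upstream.status_code)

if __name__ == "__main__":
    app.run(port=8000)
```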

Exclusive Capabilities & Specifications

Diversity of Ownership Models and Multi-tenancy Support

Our solution covers a wide range of service models, varying from application-as-a-service to MEC-platform-as-a-service, in both multi-tenant and service-provider-centric deployments.

Private 5G Network

This solution provides a zero-touch automation framework for the automatic deployment of 5G Core Network Functions in a central cloud cluster (central office), while user plane functions are distributed across edge node clusters (branches), facilitating a flexible realization of 5G private networks.
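As a simplified illustration of this split, the snippet below declares which 5G core functions run centrally and which user-plane functions run per branch, in the style of a placement plan a zero-touch pipeline might consume; the cluster names and structure are assumptions.

```python
# Illustrative placement plan: control-plane NFs in the central cluster,
# one UPF per branch edge cluster. Names are hypothetical.
PLACEMENT = {
    "central-office": ["AMF", "SMF", "NRF", "AUSF", "UDM"],
    "branch-edge-1": ["UPF"],
    "branch-edge-2": ["UPF"],
}

def render_deployments(placement):
    """Expand the plan into (cluster, network-function) deployment tasks."""
    return [(cluster, nf) for cluster, nfs in placement.items() for nf in nfs]

for cluster, nf in render_deployments(PLACEMENT):
    print(f"deploy {nf} -> {cluster}")
```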

Scalability

The solution offers high scalability and flexibility within the cloud environment. This allows for seamless adaptation to changing demands and ensures optimal performance across various workloads.

Hybrid Cloud Deployment

The solution can be deployed and integrated across public, private, and hybrid cloud environments. This provides flexibility and adaptability to diverse infrastructure setups.

Cloud-native

The software is developed according to cloud-native design principles, utilizing microservices. This approach ensures scalability, flexibility, and adaptability to the dynamic cloud environment.

ETSI-MEC Compliant

The solution is fully compliant with the ETSI MEC reference architecture. This ensures interoperability and adherence to industry standards for edge computing.

Telco Edge-native Supported Features

Traffic Steering

Local Routing and Traffic Steering refers to the methods used in 5G networks to direct user data traffic along optimal paths within the network infrastructure. Traffic steering mechanisms dynamically adjust the routing based on network conditions, user location, and service policies, ensuring that data takes the most efficient route to its destination, thereby enhancing performance and resource efficiency. Supported mechanisms include Uplink Classification and Multi-homing.

AF-influenced Traffic Routing

Application Functions (AF) can steer traffic based on application needs, allowing for better resource allocation. The AF communicates with network entities to convey specific service requirements or preferences. By influencing traffic steering, the network can allocate appropriate resources, adjust routing paths, and prioritize traffic to meet the application's needs.
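To make this concrete, the sketch below assembles a traffic-influence request in the spirit of the 3GPP NEF northbound API (TS 29.522). The endpoint URL, identifiers, and field values are illustrative, and a real deployment would also require authentication and operator-specific provisioning.

```python
# Illustrative AF traffic-influence request, loosely following the structure of
# Nnef_TrafficInfluence (3GPP TS 29.522). All values are placeholders.
import requests

NEF_URL = "https://nef.operator.example/3gpp-traffic-influence/v1/af-123/subscriptions"

subscription = {
    "afServiceId": "ar-render",
    "dnn": "internet",
    "anyUeInd": True,                       # apply to any UE using this DNN
    "trafficFilters": [
        {"flowId": 1, "flowDescriptions": ["permit out ip from any to 10.20.0.15/32"]}
    ],
    "trafficRoutes": [
        {"dnai": "edge-west-1"}             # steer matching traffic toward this edge DNAI
    ],
}

resp = requests.post(NEF_URL, json=subscription, timeout=10)  # auth omitted
print(resp.status_code)
```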

Network Capability Exposure

Network Capability Exposure allows 5G networks to offer external applications and services access to certain network functions and information through standardized Application Programming Interfaces (APIs). By exposing capabilities such as user location, network status, or quality of service parameters, third-party developers can create innovative services that leverage the network's advanced features. This openness fosters a collaborative ecosystem where network operators and service providers can jointly develop value-added services, driving innovation and enhancing user experiences.
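As one illustration of consuming an exposed capability, the sketch below subscribes a third-party application to UE location reporting through a monitoring-event style API, following the general shape of the 3GPP NEF monitoring-event API (TS 29.122); the URL and payload fields are illustrative rather than a specific operator's interface.

```python
# Illustrative subscription to UE location reporting via a NEF monitoring-event
# style API (in the spirit of 3GPP TS 29.122). Values are placeholders.
import requests

NEF_URL = "https://nef.operator.example/3gpp-monitoring-event/v1/third-party-app/subscriptions"

subscription = {
    "externalId": "user123@operator.example",
    "monitoringType": "LOCATION_REPORTING",
    "maximumNumberOfReports": 10,
    "notificationDestination": "https://app.example.com/location-callback",
}

resp = requests.post(NEF_URL, json=subscription, timeout=10)  # auth omitted
print(resp.status_code)
```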

Building Real-world AI-based Use Cases

Edge AI involves running AI and Machine Learning (ML) algorithms directly on edge nodes, reducing the need for data to be sent to central cloud locations for processing. For AI-enabled applications, ML-based inferencing needs to be done locally.

We promote a design pattern that decouples the application code from the ML model, so that the application image and the ML model are delivered on independent life cycles. The application keeps running without downtime while a new ML model is being delivered. Once the model is delivered, the application code can query the system for the new ML model and reinitialize its local inferencing service to use the newly delivered model without interruption.
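A minimal sketch of this decoupling is shown below, assuming models are published as versioned files to a shared path the application can poll; the registry location, model format, and polling interval are illustrative.

```python
# Sketch of decoupled model delivery: the application keeps serving inference
# while a background thread watches a (hypothetical) model registry path and
# swaps in a newly delivered model version without restarting the service.
import glob
import threading
import time

MODEL_DIR = "/models/face-recognition"   # illustrative registry mount point
_current = {"version": None, "model": None}
_lock = threading.Lock()

def _load(path):
    # Placeholder for framework-specific model loading (ONNX, TorchScript, etc.).
    return path

def _watch(poll_seconds=30):
    while True:
        # Assumes lexically sortable version directories, e.g. v0001, v0002, ...
        versions = sorted(glob.glob(f"{MODEL_DIR}/v*"))
        if versions and versions[-1] != _current["version"]:
            model = _load(versions[-1])          # load outside the lock
            with _lock:                          # then swap atomically
                _current["version"], _current["model"] = versions[-1], model
        time.sleep(poll_seconds)

def infer(frame):
    with _lock:
        model = _current["model"]
    return None if model is None else f"inference with {model}"  # placeholder

threading.Thread(target=_watch, daemon=True).start()
```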

CCTV Video Stream Processing

Face Recognition: AI algorithms running at the edge can identify faces in real-time, a feature that's useful in security applications, personalized services, and retail analytics.

Scenario Description using LLM (Large Language Models): LLMs can be used to describe complex scenarios or interpret data at the edge. For instance, LLMs can generate insights or summaries based on patterns observed in the data collected at the edge.

Tracing and Heatmap: This refers to tracking the movement and behaviors of individuals or objects in a physical space, generating heatmaps to visualize activity. This capability can be used in areas like smart cities, retail, and traffic management.
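As a small illustration of the heatmap idea, the snippet below accumulates detected object positions into a 2D occupancy grid using NumPy; the frame size, grid resolution, and the detections themselves are illustrative.

```python
# Accumulate detected positions (e.g., people detected in CCTV frames) into a
# coarse 2D grid; the grid can then be rendered as a heatmap overlay.
import numpy as np

FRAME_W, FRAME_H = 1920, 1080
GRID_W, GRID_H = 64, 36                       # coarse heatmap resolution

heat = np.zeros((GRID_H, GRID_W), dtype=np.int32)

def add_detections(points):
    """points: iterable of (x, y) pixel coordinates from the edge detector."""
    for x, y in points:
        gx = min(int(x / FRAME_W * GRID_W), GRID_W - 1)
        gy = min(int(y / FRAME_H * GRID_H), GRID_H - 1)
        heat[gy, gx] += 1

add_detections([(960, 540), (970, 545), (100, 80)])
print(heat.max(), heat.sum())  # hottest cell count and total detections
```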

Value Monetization

Possibility of Building an Application Marketplace

With the edge infrastructure in place, telcos and third parties can create marketplaces for applications that leverage the edge, offering services like IoT management, AI-powered analytics, and content delivery.

Reduced Capex due to Optimized Resource Utilization 

The solution allows for efficient use of resources at the edge, reducing the need for significant capital expenditure (Capex) on centralized data centers.

Ease of Service Monetization based on Network Slicing and Customization of Network Functions Placement across Edge Nodes

Telcos can monetize services by offering customizable network functions and network slicing, which allows them to tailor network resources to the needs of different clients, particularly in industries like manufacturing, healthcare, and smart cities.