H04L47/76

System and method for supporting a usage calculation process in a cloud infrastructure environment

Systems and methods described herein support a usage calculation process in a cloud infrastructure environment. The usage calculation process can be used to determine whether a requested transaction that targets a compartment within a tree-structure of compartments violates any compartment quota or limit within parent compartments within the tree-structure.
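The quota walk described above can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the class and function names are assumptions.

```python
# Hypothetical sketch of the usage-calculation process: before committing a
# transaction against a target compartment, the requested usage is checked
# against the quota of the target and of every parent compartment in the tree.

class Compartment:
    def __init__(self, name, quota=None, parent=None):
        self.name = name
        self.quota = quota    # None means no limit at this level
        self.usage = 0
        self.parent = parent

def transaction_allowed(target, requested):
    """Return True only if adding `requested` units to `target` violates no
    quota in the target compartment or any of its ancestors."""
    node = target
    while node is not None:
        if node.quota is not None and node.usage + requested > node.quota:
            return False
        node = node.parent
    return True

def commit(target, requested):
    """Record the usage in the target and every ancestor after a successful check."""
    node = target
    while node is not None:
        node.usage += requested
        node = node.parent
```

Note that a request can be denied by an ancestor's quota even when the target compartment itself has headroom, which is the "parent compartments" check the abstract describes.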

Multi-slice support for MEC-enabled 5G deployments
11700628 · 2023-07-11

A system configured to track network slicing operations within a 5G communication network includes processing circuitry configured to determine a network slice instance (NSI) associated with a QoS flow of a UE. The NSI communicates data for a network function virtualization (NFV) instance of a Multi-Access Edge Computing (MEC) system within the 5G communication network. Latency information for a plurality of communication links used by the NSI is retrieved. The plurality of communication links includes a first set of non-MEC communication links associated with a radio access network (RAN) of the 5G communication network and a second set of MEC communication links associated with the MEC system. A slice configuration policy is generated based on the retrieved latency information and slice-specific attributes of the NSI. Network resources of the 5G communication network used by the NSI are reconfigured based on the generated slice configuration policy.
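The policy-generation step can be illustrated roughly as below: end-to-end latency for the NSI is aggregated over its non-MEC (RAN) and MEC links and compared against a slice latency budget. The function name, the additive latency model, and the scaling rule are all assumptions for illustration, not details from the patent.

```python
# Illustrative sketch: aggregate latency over the NSI's non-MEC and MEC links,
# then emit a slice configuration decision based on the slice's latency budget.

def slice_policy(non_mec_latencies_ms, mec_latencies_ms, latency_budget_ms):
    """Generate a (hypothetical) slice configuration decision from retrieved
    per-link latencies and a slice-specific latency budget."""
    total = sum(non_mec_latencies_ms) + sum(mec_latencies_ms)
    if total <= latency_budget_ms:
        return {"action": "keep", "total_latency_ms": total}
    # Over budget: request resources proportional to the overshoot.
    overshoot = total / latency_budget_ms
    return {"action": "scale_up", "total_latency_ms": total,
            "resource_factor": round(overshoot, 2)}
```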

Techniques for bi-direction preemption indication transmissions
11552734 · 2023-01-10

Aspects described herein relate to bi-direction preemption indication transmissions. In one example, a node such as an integrated access and backhaul (IAB) node may determine that a set of one or more resources are preempted for use for both an uplink transmission and a downlink transmission, and transmit, to a user equipment (UE), the bi-direction preemption indication indicating that the set of one or more resources are preempted for use for both of the uplink transmission and the downlink transmission. In another example, a UE may receive a bi-direction preemption indication indicating that a set of one or more resources are preempted for use for both an uplink transmission and a downlink transmission, and perform rate matching for both of the uplink transmission and downlink transmission based on the set of one or more resources indicated by the bi-direction preemption indication.
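On the UE side, the rate-matching step amounts to excluding the preempted resources from both the uplink and downlink allocations. The following is a simplified sketch under that reading; the function name and resource representation are assumptions.

```python
# Hypothetical sketch: a single bi-direction preemption indication removes the
# same preempted resource set from both the uplink and downlink allocations.

def apply_preemption(uplink_resources, downlink_resources, preempted):
    """Return the (uplink, downlink) resources still usable after rate
    matching around the bi-directionally preempted set."""
    preempted_set = set(preempted)
    ul = [r for r in uplink_resources if r not in preempted_set]
    dl = [r for r in downlink_resources if r not in preempted_set]
    return ul, dl
```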

Recalibrating resource profiles for network slices in a 5G or other next generation wireless network

The technologies described herein are generally directed to facilitating the allocation, scheduling, and management of network slice resources. According to some embodiments, a system can facilitate performance of operations. The operations can include, based on a request for a network service type received from a user device, allocating a network slice of a network to the user device, the network slice having previously been assigned a capacity of a resource of the network in accordance with a resource profile. The operations further include monitoring performance of the network slice, resulting in monitored slice performance that is compared to a performance requirement of the network service type. Another operation includes, based on the monitored slice performance, facilitating recalibration of the resource profile in accordance with a condition associated with the network service type, resulting in a modification of the capacity of the resource assigned to the network slice.
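The recalibration loop can be sketched as a simple feedback rule: grow the assigned capacity when monitored performance falls short of the service type's requirement, shrink it when the slice comfortably overperforms. The thresholds and step size below are illustrative assumptions, not values from the patent.

```python
# Hypothetical recalibration rule for a network slice's resource profile,
# driven by monitored performance versus the service-type requirement.

def recalibrate(capacity, measured_throughput, required_throughput,
                step=0.25, floor=1.0):
    """Return the adjusted resource capacity: grow by `step` on a shortfall,
    shrink by `step` (down to `floor`) when well over the requirement."""
    if measured_throughput < required_throughput:
        return capacity * (1 + step)
    if measured_throughput > 1.5 * required_throughput:
        return max(floor, capacity * (1 - step))
    return capacity
```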

NETWORK OPERATION METHOD, APPARATUS, AND DEVICE AND STORAGE MEDIUM
20220417112 · 2022-12-29

Provided are a network operation method, an apparatus, a device, and a storage medium. The network operation method includes a management node receiving virtualized network function information carrying at least one dynamic network change flag, where the at least one dynamic network change flag indicates whether a dynamic network change is supported, and the management node operating on a first-type network according to the virtualized network function information.

CONTINUOUS LEARNING MODELS ACROSS EDGE HIERARCHIES

Systems and methods are provided for continuous learning of models across hierarchies in a multi-access edge computing environment. In particular, an on-premises edge server, using a model, generates inference data associated with captured stream data. A data drift determiner detects data drift in the inference data by comparing the data against reference data generated using a golden model; the data drift indicates a loss of accuracy in the inference data. A gateway maintains one or more models in a model cache for updating the deployed model and instructs one or more servers to train the new model. The gateway then transmits the trained model to update the model in the on-premises edge server. Training the new model includes determining an on-premises edge server with computing resources available to train the new model while continuing to generate inference data for incoming stream data in the data analytics pipeline.
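The drift check can be pictured as comparing the deployed model's outputs against the golden model's reference outputs and flagging drift when the disagreement rate crosses a threshold. The function name and the disagreement-rate metric are illustrative assumptions.

```python
# Hypothetical data-drift determiner: compare deployed-model inferences with
# golden-model reference outputs and flag drift above a disagreement threshold.

def drift_detected(inferences, reference, threshold=0.1):
    """Return True when the fraction of outputs disagreeing with the golden
    model's reference exceeds `threshold`, indicating a loss of accuracy."""
    disagreements = sum(1 for a, b in zip(inferences, reference) if a != b)
    return disagreements / len(reference) > threshold
```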

METHOD AND APPARATUS FOR DEPLOYING TENANT DEPLOYABLE ELEMENTS ACROSS PUBLIC CLOUDS BASED ON HARVESTED PERFORMANCE METRICS OF TYPES OF RESOURCE ELEMENTS IN THE PUBLIC CLOUDS

Some embodiments of the invention provide a method of deploying first and second tenant deployable elements to a set of one or more public clouds, the first and second tenant deployable elements being different types of elements. The method identifies first and second sets of performance metrics respectively for first and second sets of candidate resource elements to use to deploy the first and second tenant deployable elements. The two sets of performance metrics are different sets of metrics because the first and second tenant deployable elements are different types of elements, the first set of performance metrics having at least one metric that is not included in the second set of performance metrics. The method uses the different sets of metrics to evaluate the first and second sets of candidate resource elements, in order to select one of the first set of candidate resource elements for the first tenant deployable element and to select one of the second set of candidate resource elements for the second tenant deployable element. The method deploys the first and second tenant deployable elements in the set of public clouds by using the selected candidate resource elements.
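The per-element-type selection can be sketched as scoring each candidate resource element using only the metric set relevant to that element type. The weighted-sum scoring below is an assumption for illustration; the patent does not specify how metrics are combined.

```python
# Hypothetical selection step: each tenant deployable element type supplies its
# own metric set (as weights), and candidates are ranked on only those metrics.

def select_resource_element(candidates, metric_weights):
    """Return the candidate with the highest weighted score over the metrics
    relevant to this element type; missing metrics count as zero."""
    def score(candidate):
        return sum(w * candidate.get(m, 0.0) for m, w in metric_weights.items())
    return max(candidates, key=score)
```

Because each element type supplies different weights, the same candidate pool can yield different selections for the two tenant deployable elements, matching the abstract's point that the two metric sets differ.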

Self-describing packet headers for concurrent processing

A Self-Describing Packet Block (SDPB) is defined that allows concurrent processing of various fixed headers in a packet block, taking advantage of multiple cores in a networking node's forwarding-path architecture. The SDPB allows concurrent processing of the header data, metadata, and conditional commands carried in the same data packet by checking a serialization flag set upon creation of the data packet, without needing to serialize the processing, or even the parsing, of the packet. When one or more commands in one or more sub-blocks may be processed concurrently, those commands are distributed to multiple processing resources for processing in parallel. This architecture allows multiple unique functionalities, each with its own separate outcome (executing commands, performing service chaining, performing telemetry, and enabling virtualization and path steering), to be performed concurrently with a simplified packet architecture and without incurring additional encapsulation overhead.
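The dispatch decision can be illustrated as follows: the packet's serialization flag selects between in-order execution and fanning sub-block commands out to parallel workers. Representing commands as plain callables and using a thread pool are simplifying assumptions for illustration.

```python
# Hypothetical dispatch of an SDPB's sub-block commands: when the packet's
# serialization flag is clear, commands run concurrently on a worker pool;
# when it is set, they run strictly in order.
from concurrent.futures import ThreadPoolExecutor

def process_packet(sub_block_commands, serialized):
    """Execute each sub-block command, concurrently unless the packet's
    serialization flag requires in-order processing. Results keep the
    sub-block order either way."""
    if serialized:
        return [cmd() for cmd in sub_block_commands]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda cmd: cmd(), sub_block_commands))
```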
