H04L47/83

Slice management system and slice management method

An object is to provide a slice management system capable of assigning slices to a plurality of business operators. In a parent SMF 100, a slice management table 103 manages the resources of slices administered by a child SMF 100a (and similar child SMFs), and a communication unit 101 notifies the child SMF 100a of those resources. The child SMF 100a receives the resource information and stores it in its own slice management table 106. The child SMF 100a can therefore manage the resources of the slices it administers, enabling independent resource management at the child SMF.
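The parent-to-child delegation described above can be sketched as follows. This is a minimal illustration only: the class names, method names, and table layout (`ParentSMF`, `ChildSMF`, `delegate`, a dict keyed by slice ID) are assumptions, not the patent's actual structures.

```python
# Sketch of a parent SMF delegating slice resources to a child SMF.
# All names and the table layout are illustrative assumptions.

class ChildSMF:
    def __init__(self, name):
        self.name = name
        self.slice_table = {}  # plays the role of slice management table 106

    def receive_resources(self, slice_id, resources):
        # Store the delegated resources so this SMF can manage them independently.
        self.slice_table[slice_id] = dict(resources)

    def allocate(self, slice_id, amount):
        res = self.slice_table[slice_id]
        if res["available"] >= amount:
            res["available"] -= amount
            return True
        return False  # cannot exceed the delegated quota

class ParentSMF:
    def __init__(self):
        self.slice_table = {}  # plays the role of slice management table 103

    def delegate(self, child, slice_id, resources):
        # Record ownership, then notify the child (communication unit 101's role).
        self.slice_table[slice_id] = {"owner": child.name, **resources}
        child.receive_resources(slice_id, resources)

parent = ParentSMF()
child = ChildSMF("SMF-100a")
parent.delegate(child, "slice-1", {"available": 100})
print(child.allocate("slice-1", 30))  # True: 70 units remain
print(child.allocate("slice-1", 80))  # False: exceeds the remaining quota
```

The point of the sketch is the division of labour: once the child has its own copy of the resource table, allocation decisions need no round trip to the parent.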

Cloud infrastructure planning assistant via multi-agent AI

Cloud infrastructure planning systems and methods can utilize artificial intelligence/machine learning agents for developing a plan of demand, plan of record, plan of execution, and plan of availability for developing cloud infrastructure plans that are more precise and accurate, and that learn from previous planning and deployments. Some agents include one or more of supervised, unsupervised, and reinforcement machine learning to develop accurate predictions and perform self-tuning alone or in conjunction with other agents.

Method and system for managing service quality according to network status predictions

Aspects of the subject disclosure may include, for example, obtaining predicted available bandwidths for an end user device, monitoring buffer occupancy of a buffer of the end user device, determining bit rates for portions of media content according to the predicted available bandwidths and according to the buffer occupancy, and adjusting bit rates for portions of media content according to the predicted available bandwidths and according to the buffer occupancy during streaming of the media content to the end user device over a wireless network. Other embodiments are disclosed.
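The bit-rate adjustment described above can be sketched as a simple selection rule. The bit-rate ladder, thresholds, and function names below are assumptions for illustration; the abstract does not specify how the prediction and buffer occupancy are combined.

```python
# Illustrative sketch: pick a bit rate from predicted bandwidth, then step
# down if buffer occupancy is low. Ladder values and thresholds are assumed.

BITRATE_LADDER = [500, 1_000, 2_500, 5_000]  # kbps, lowest to highest

def select_bitrate(predicted_bw_kbps, buffer_seconds, target_buffer=10.0):
    # Start from the highest rate the predicted bandwidth can sustain.
    sustainable = [r for r in BITRATE_LADDER if r <= predicted_bw_kbps]
    rate = sustainable[-1] if sustainable else BITRATE_LADDER[0]
    # If the buffer is running low, step down one rung to refill it faster.
    if buffer_seconds < 0.3 * target_buffer and rate != BITRATE_LADDER[0]:
        rate = BITRATE_LADDER[BITRATE_LADDER.index(rate) - 1]
    return rate

print(select_bitrate(3_000, 8.0))  # 2500: bandwidth-limited, buffer healthy
print(select_bitrate(3_000, 1.0))  # 1000: buffer low, step down one rung
print(select_bitrate(200, 1.0))    # 500: already at the lowest rung
```

Using both signals prevents the failure modes of either alone: prediction error is caught by the buffer check, while the prediction avoids draining the buffer before reacting.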

Technologies for switching network traffic in a data center

Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
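The forward-as-a-function-of-protocol step can be sketched as a classify-then-dispatch routine. The frame markers and handler names below are toy assumptions; a real switch would parse actual link-layer headers in hardware.

```python
# Sketch of protocol-aware forwarding: determine the link layer protocol of
# a received frame, then dispatch to the matching forwarding routine.
# The marker byte and handler names are illustrative assumptions.

def detect_protocol(frame: bytes) -> str:
    # Toy classifier: a real switch parses preamble/header fields instead.
    if frame.startswith(b"\xfb"):  # assumed marker for a second protocol
        return "protocol-b"
    return "ethernet"

def forward(frame: bytes) -> str:
    handlers = {
        "ethernet": lambda f: "forwarded via Ethernet pipeline",
        "protocol-b": lambda f: "forwarded via protocol-B pipeline",
    }
    return handlers[detect_protocol(frame)](frame)

print(forward(b"\x00\x11\x22"))  # forwarded via Ethernet pipeline
print(forward(b"\xfb\x01\x02"))  # forwarded via protocol-B pipeline
```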

Throttling queue for a request scheduling and processing system

Various methods and systems for implementing request scheduling and processing in a multi-tenant distributed computing environment are provided. Requests to utilize system resources in the distributed computing environment are stored in account queues corresponding to tenant accounts. If storing a request in an account queue would exceed a throttling threshold, such as a limit on the number of requests stored per account, the request is dropped to a throttling queue. A scheduler prioritizes processing requests stored in the throttling queue before processing requests stored in the account queues. The account queues can be drained using dominant resource scheduling. In some embodiments, a request is not picked up from an account queue if processing the request would exceed a predefined hard limit on system resource utilization for the corresponding tenant account. In some embodiments, the hard limit is defined as a percentage of threads the system has to process requests.
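The queueing behaviour described above can be sketched as follows. The class name, per-account cap, and round-robin drain order are assumptions; the abstract specifies only the spill-to-throttling-queue rule and the throttling queue's scheduling priority.

```python
# Sketch of per-account queues with a cap; overflow spills to a shared
# throttling queue, which the scheduler drains first. Names are assumptions.

from collections import deque

class RequestScheduler:
    def __init__(self, per_account_limit=2):
        self.per_account_limit = per_account_limit
        self.account_queues = {}         # tenant account -> deque of requests
        self.throttling_queue = deque()  # shared overflow queue

    def submit(self, account, request):
        q = self.account_queues.setdefault(account, deque())
        if len(q) >= self.per_account_limit:
            # Exceeds the throttling threshold: drop to the throttling queue.
            self.throttling_queue.append((account, request))
        else:
            q.append(request)

    def next_request(self):
        # The throttling queue has priority over the account queues.
        if self.throttling_queue:
            return self.throttling_queue.popleft()[1]
        for q in self.account_queues.values():
            if q:
                return q.popleft()
        return None

s = RequestScheduler(per_account_limit=1)
s.submit("tenant-a", "r1")
s.submit("tenant-a", "r2")  # exceeds the cap -> throttling queue
print(s.next_request())     # r2: throttled requests are scheduled first
print(s.next_request())     # r1: then the account queue drains
```

In the full system the account queues would drain by dominant resource scheduling rather than the simple iteration shown here.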

Accelerated resource allocation techniques

Examples described herein can be used to determine and suggest a computing resource allocation for a workload request made from an edge gateway. The computing resource allocation can be suggested using computing resources provided by an edge server cluster. Telemetry data and performance indicators of the workload request can be tracked and used to determine the computing resource allocation. Artificial intelligence (AI) and machine learning (ML) techniques can be used in connection with a neural network to accelerate determinations of suggested computing resource allocations based on hundreds to thousands (or more) of telemetry data in order to suggest a computing resource allocation. Suggestions made can be accepted or rejected by a resource allocation manager for the edge gateway and the edge server cluster.

Differential overbooking in a cloud computing environment

Techniques for differential overbooking on a cloud database. These techniques may include determining a reservation amount of a multi-tenant resource for a first service based upon an overbooking characteristic of the first service, and determining that a total usage value of the multi-tenant resource by a plurality of services is greater than a threshold value. In addition, the techniques may include determining a service usage value of the multi-tenant resource by the first service, determining a first overage value of the first service based on the service usage value and the reservation amount, and performing a resource reclamation process over the multi-tenant resource based on the first overage value of the first service.
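The overage computation and reclamation trigger can be sketched numerically. The reservation formula, threshold, and all figures below are illustrative assumptions; the abstract only requires that reclamation be driven by per-service overage once total usage crosses a threshold.

```python
# Sketch of the differential-overbooking reclamation check; the reservation
# formula, 90% trigger, and all numbers are illustrative assumptions.

def reservation(base_share, overbooking_factor):
    # A more aggressively overbooked service reserves less of the resource.
    return base_share / overbooking_factor

def overage(service_usage, reserved):
    # How far a service's usage exceeds its reservation (never negative).
    return max(0.0, service_usage - reserved)

def reclaim_targets(usages, reservations, capacity, threshold=0.9):
    total = sum(usages.values())
    if total <= threshold * capacity:
        return {}  # total usage below the trigger: no reclamation needed
    # Reclamation candidates, each with its overage amount.
    return {s: overage(usages[s], reservations[s])
            for s in usages if overage(usages[s], reservations[s]) > 0}

usages = {"svc-a": 60.0, "svc-b": 35.0}
reservations = {"svc-a": reservation(50.0, 1.0),  # 50: no overbooking
                "svc-b": reservation(50.0, 2.0)}  # 25: overbooked 2x
print(reclaim_targets(usages, reservations, capacity=100.0))
# Total usage 95 > 90: both services exceed their reservations by 10
```

The "differential" aspect is that svc-b's smaller reservation (from its overbooking characteristic) makes it hit overage sooner at the same usage level.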

METHOD OF LOAD FORECASTING VIA ATTENTIVE KNOWLEDGE TRANSFER, AND AN APPARATUS FOR THE SAME

A method of forecasting a future load may include: obtaining source data sets and a target data set that have been collected from a plurality of source base stations and a target base station, respectively; among a plurality of source machine learning models, selecting at least one source machine learning model that has a traffic load prediction performance higher than that of a target machine learning model through a negative transfer analysis; obtaining model weights to be applied to the target machine learning model and the selected at least one source machine learning model via an attention neural network that is jointly trained with the target machine learning model and the selected source machine learning models; obtaining a load forecasting model for the target base station by combining the target machine learning model and the selected at least one source machine learning model according to the model weights; and predicting a future communication traffic load of the target base station based on the load forecasting model.
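The attention-weighted combination step can be sketched as follows. The toy models and the fixed attention scores below are assumptions standing in for the jointly trained networks; only the softmax-weighted ensemble structure is illustrated.

```python
# Sketch of combining a target model with selected source models via
# attention weights. The models are toy callables, and the scores stand in
# for the output of a trained attention network (all assumptions).

import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def combined_forecast(models, attention_scores, features):
    # Weighted sum of per-model load predictions; weights come from attention.
    weights = softmax(attention_scores)
    return sum(w * m(features) for w, m in zip(weights, models))

# Toy target and source models predicting traffic load from one feature.
target_model = lambda x: 2.0 * x
source_model = lambda x: 2.0 * x + 1.0  # a source model that transfers well

load = combined_forecast([target_model, source_model],
                         attention_scores=[0.0, 0.0],  # equal attention
                         features=5.0)
print(load)  # average of 10.0 and 11.0 -> 10.5
```

The negative-transfer analysis in the abstract would run before this step, filtering out source models whose inclusion would drag the ensemble below the target model alone.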

UTILIZING A MODEL TO MANAGE RESOURCES OF A NETWORK DEVICE AND TO PREVENT NETWORK DEVICE OVERSUBSCRIPTION BY ENDPOINT DEVICES
20220368648 · 2022-11-17

A network device may receive configuration data identifying resource subscription thresholds associated with a plurality of respective endpoint devices and may receive traffic from the plurality of endpoint devices. The network device may process the traffic and the configuration data, with a resource allocation model, to determine that processing traffic associated with a first endpoint device requires allocating a resource of the network device, and may process the configuration data, with the resource allocation model, to identify the resource of the network device from a particular resource of the network device that is currently allocated to traffic associated with a second endpoint device. The network device may allocate the particular resource of the network device to the traffic associated with the first endpoint device, and may process the traffic associated with the first endpoint device with the particular resource to generate processed traffic.

Method and Device of Network Resource Allocation

Disclosed is a method of network resource allocation. The method includes: generating an adjacency matrix of nodes in a metropolitan area network (MAN) according to spatial adjacency relationships of the nodes; generating a network state feature matrix according to traffic information of each node; extracting traffic spatial features of the nodes from the adjacency matrix and the network state feature matrix through a traffic spatial feature extraction model; obtaining predicted traffic of the nodes from the traffic spatial features through a traffic prediction model; and performing network resource allocation according to the predicted traffic of the nodes. Further, a device for network resource allocation and a non-transitory computer-readable storage medium are also disclosed.
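The adjacency-matrix and feature-matrix step can be sketched with a single graph-convolution-style aggregation. The three-node topology, feature values, and proportional allocation rule below are assumptions; the abstract does not specify the extraction or prediction models.

```python
# Sketch of one spatial-feature extraction step on a MAN graph: adjacency
# matrix A encodes which nodes are spatially adjacent, X holds per-node
# traffic features, and A @ X aggregates each node's neighbourhood.
# The topology, values, and allocation rule are illustrative assumptions.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# 3 nodes: 0-1 and 1-2 adjacent; self-loops keep each node's own features.
A = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]

# One traffic feature per node (e.g. current load in Gbps).
X = [[2.0],
     [4.0],
     [6.0]]

# Neighbourhood aggregation: node 1, adjacent to both others, sees all three.
H = matmul(A, X)
print(H)  # [[6.0], [12.0], [10.0]]

# Toy allocation stage: share capacity in proportion to aggregated traffic.
total = sum(row[0] for row in H)
allocation = [row[0] / total for row in H]
print([round(a, 3) for a in allocation])  # node 1 gets the largest share
```

In the disclosed method, the aggregated features would feed a trained traffic prediction model rather than the proportional rule shown here.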