H04L41/0897

Dynamically scalable application firewall deployment for cloud native applications

A configuration of a cloud application exposed via a public IP address is duplicated with modifications to include a private IP address that exposes the application internally. The original configuration is updated so that external network traffic sent to the application is redirected to and distributed across agents running on nodes of a cloud cluster, by which web application firewalls (WAFs) are implemented. A set of agents whose WAFs should inspect the redirected network traffic is selected based on cluster metrics, such as network and resource utilization metrics. The redirected network traffic targets a port allocated to the agents that is unique to the application; ports are allocated on a per-application basis so that each agent can support WAF protection for multiple applications. Network traffic that a WAF allows to pass is directed from the agent to the application via its private IP address.
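The per-application port allocation and metric-based agent selection described above can be sketched in Python. This is an illustrative sketch only: the class names, the base port number, and the utilization thresholds are assumptions, not details from the patent.

```python
# Hypothetical sketch: per-application port allocation shared by WAF agents,
# plus cluster-metric-based agent selection. All names are illustrative.

BASE_PORT = 30000  # assumed starting port for per-application allocation

class PortAllocator:
    """Allocates one port per application, shared by every agent, so a
    single agent can host WAF inspection for many applications."""

    def __init__(self):
        self._ports = {}       # app_id -> allocated port
        self._next = BASE_PORT

    def port_for(self, app_id):
        # Same application always maps to the same unique port.
        if app_id not in self._ports:
            self._ports[app_id] = self._next
            self._next += 1
        return self._ports[app_id]

def select_agents(agents, max_cpu=0.8, max_net=0.8):
    """Pick agents whose resource and network utilization leave headroom
    to inspect the redirected traffic."""
    return [a for a in agents if a["cpu"] < max_cpu and a["net"] < max_net]

alloc = PortAllocator()
agents = [{"name": "agent-1", "cpu": 0.3, "net": 0.2},
          {"name": "agent-2", "cpu": 0.9, "net": 0.1}]
print(alloc.port_for("shop"))   # 30000
print(alloc.port_for("shop"))   # 30000 (stable per application)
print([a["name"] for a in select_agents(agents)])   # ['agent-1']
```

Traffic arriving on an application's allocated port would then be inspected by the selected agents' WAFs before being forwarded to the private IP address.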

CUSTOMER-DEFINED CAPACITY LIMIT PLANS FOR COMMUNICATION NETWORKS
20230100729 · 2023-03-30 ·

Disclosed are various embodiments for customer-defined capacity limit plans in communication networks. In one embodiment, a request for a service from a radio-based network is received from a first client device. A network function in the radio-based network is determined to be at a capacity limit. Service from the network function to a second client device is suspended in response to determining that the network function is at the capacity limit and based at least in part on a rule set specific to the radio-based network. The first client device is provided access to the network function instead of the second client device.
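The rule-based suspension above can be sketched as a small admission function: when a function is at capacity, a lower-priority client is preempted in favor of the incoming request. The priority-based rule set is an illustrative assumption; the patent leaves the rules customer-defined.

```python
# Illustrative sketch of a customer-defined capacity-limit rule set
# (not the patent's actual implementation): preempt the lowest-priority
# active client when the network function is at its capacity limit.

def admit(request_client, active_clients, capacity, rules):
    """rules maps client -> priority (higher wins). Returns the new
    active set after applying the capacity-limit rule set."""
    if len(active_clients) < capacity:
        return active_clients | {request_client}
    # At capacity: find the lowest-priority client to suspend.
    victim = min(active_clients, key=lambda c: rules.get(c, 0))
    if rules.get(request_client, 0) > rules.get(victim, 0):
        return (active_clients - {victim}) | {request_client}
    return active_clients  # request denied; no one preempted

rules = {"first-responder": 10, "consumer": 1}
print(admit("first-responder", {"consumer"}, capacity=1, rules=rules))
# {'first-responder'}: the consumer device is suspended
```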

HETEROGENEOUS INDEXING AND LOAD BALANCING OF BACKUP AND INDEXING RESOURCES

Indexing preferences generally associate each data source with a type of indexing technology and/or with an index/catalog and/or with a computing device that hosts the index/catalog for tracking backup data generated from the source data. Indexing preferences govern which index/catalog receives transaction logs for a given storage operation. Thus, indexing destinations are defined granularly and flexibly in reference to the source data. Load balancing without user intervention assures that load is fairly distributed across the various index/catalogs in the illustrative backup systems by autonomously initiating migration jobs. Criteria for initiating migration jobs are based on past usage and going-forward trends. An illustrative migration job re-associates data sources with a different destination media agent and/or index/catalog, including transferring some or all relevant transaction logs and/or indexing information from the old host to the new host.
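The migration criteria described above (past usage plus going-forward trend) can be sketched as a simple projection check. The statistic and threshold below are assumptions for illustration, not the patent's actual criteria.

```python
# Hypothetical criteria for initiating an index/catalog migration job:
# trigger when a host's projected load exceeds the cluster mean by a
# threshold factor. All parameters are illustrative.

def should_migrate(host_load_history, cluster_mean, growth_per_week,
                   horizon_weeks=4, threshold=1.5):
    """Combine past usage (history average) with a going-forward trend
    (linear growth over a horizon) to decide whether to migrate."""
    current = sum(host_load_history) / len(host_load_history)
    projected = current + growth_per_week * horizon_weeks
    return projected > threshold * cluster_mean

# A hot index host trending upward should migrate; a quiet one should not.
print(should_migrate([80, 90, 100], cluster_mean=60, growth_per_week=5))  # True
print(should_migrate([40, 45, 50], cluster_mean=60, growth_per_week=1))   # False
```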

Network Slice Quota Management

Apparatuses, systems, and methods for performing network slice quota management. A network slice quota management function may store capacity information for one or more network slices. The network slice quota management function may receive a request for an indication of whether a network slice has additional capacity. The network slice quota management function may provide an indication of whether the network slice has additional capacity in response to the request. The capacity information may relate to the capacity of the network slice with respect to the number of wireless devices registered for the network slice, or to the capacity of the network slice with respect to the number of packet sessions established with the network slice, or both, among various possibilities.
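The quota function above tracks two capacity dimensions: registered devices and established PDU sessions. A minimal sketch, assuming an in-memory store and simple counters (the class and method names are illustrative):

```python
# Minimal sketch of a network slice quota management function, tracking
# capacity by registered devices and by established sessions, per the
# abstract. Names and data layout are assumptions.

class SliceQuotaManager:
    def __init__(self):
        self._caps = {}   # slice_id -> (max_devices, max_sessions)
        self._used = {}   # slice_id -> [devices, sessions]

    def store(self, slice_id, max_devices, max_sessions):
        """Store capacity information for a network slice."""
        self._caps[slice_id] = (max_devices, max_sessions)
        self._used.setdefault(slice_id, [0, 0])

    def has_capacity(self, slice_id):
        """Answer a request for an indication of remaining capacity."""
        devs, sess = self._used[slice_id]
        max_d, max_s = self._caps[slice_id]
        return devs < max_d and sess < max_s

    def register_device(self, slice_id):
        self._used[slice_id][0] += 1

mgr = SliceQuotaManager()
mgr.store("embb-1", max_devices=2, max_sessions=10)
mgr.register_device("embb-1")
print(mgr.has_capacity("embb-1"))   # True: 1 of 2 devices used
mgr.register_device("embb-1")
print(mgr.has_capacity("embb-1"))   # False: device quota reached
```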

Self-healing and dynamic optimization of VM server cluster management in multi-cloud platform

Virtual machine server clusters are managed using self-healing and dynamic optimization to achieve closed-loop automation. The technique uses adaptive thresholding to develop actionable quality metrics for benchmarking and anomaly detection. Real-time analytics are used to determine the root cause of KPI violations and to locate impact areas. Self-healing and dynamic optimization rules automatically correct common issues via no-touch automation, replacing manual processes in which finger-pointing between operations staff is prevalent, resulting in consolidation, flexibility, and reduced deployment time.
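Adaptive thresholding for KPI anomaly detection can be sketched as an alarm band that tracks a rolling mean and standard deviation rather than a fixed benchmark. The specific statistic and the k-sigma rule below are assumptions; the patent does not specify this exact formula.

```python
# Sketch of adaptive thresholding for KPI anomaly detection: the alarm
# threshold adapts to recent samples instead of being a fixed value.
# Illustrative only.

from statistics import mean, stdev

def is_anomaly(history, value, k=3.0):
    """Flag a KPI sample more than k standard deviations away from the
    rolling mean of recent samples."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > k * sigma

history = [10.0, 11.0, 9.5, 10.5, 10.0]   # recent KPI samples
print(is_anomaly(history, 10.4))   # False: within the adaptive band
print(is_anomaly(history, 25.0))   # True: a KPI violation
```

A real system would feed flagged violations into root-cause analytics before triggering a self-healing rule.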

Wireless communication network to serve a protocol data unit (PDU) session type over a radio access network (RAN)

A wireless communication network serves a Protocol Data Unit (PDU) session type over a Radio Access Network (RAN). The wireless communication network comprises a Network Repository Function (NRF), a Management and Orchestration (MANO) system, a User Plane Function (UPF), a RAN, and User Equipment (UEs). The NRF receives requests for UPFs that can serve the PDU session type over the RAN and responsively transfers UPF responses indicating other UPFs that cannot serve the PDU session type over the RAN. The NRF determines when the transfer of these UPF responses is excessive. In response, the NRF transfers an instantiation request to the MANO system to instantiate a new UPF that can serve the PDU session type over the RAN. The MANO system instantiates the new UPF. The new UPF serves the PDU session type to the UEs over the RAN.
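The NRF-side logic above (count negative discovery responses, then ask MANO to instantiate a new UPF) can be sketched as follows. The class names, the threshold, and the UPF record format are all illustrative assumptions, not from the patent.

```python
# Hypothetical sketch of the NRF logic: count negative UPF discovery
# responses per PDU session type and request a new UPF from MANO once
# they become excessive. All names and the threshold are illustrative.

class MANO:
    def __init__(self):
        self.instantiated = []
    def instantiate_upf(self, session_type):
        self.instantiated.append(session_type)

class NRF:
    def __init__(self, mano, threshold=3):
        self.mano = mano
        self.threshold = threshold
        self.negative = {}   # session_type -> negative-response count

    def handle_upf_request(self, session_type, upfs):
        capable = [u for u in upfs if session_type in u["types"]]
        if capable:
            return capable
        # Negative response: no registered UPF serves this session type.
        n = self.negative.get(session_type, 0) + 1
        self.negative[session_type] = n
        if n >= self.threshold:            # transfers became excessive
            self.mano.instantiate_upf(session_type)
            self.negative[session_type] = 0
        return []

mano = MANO()
nrf = NRF(mano, threshold=2)
upfs = [{"name": "upf-1", "types": {"ipv4"}}]
nrf.handle_upf_request("ethernet", upfs)
nrf.handle_upf_request("ethernet", upfs)
print(mano.instantiated)   # ['ethernet']: a new UPF was requested
```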

RESOURCE SELECTION FOR COMPLEX SOLUTIONS

Techniques described herein relate to a method for composing complex solutions. The method may include receiving, by a system control processor manager, a composition request to compose a composed information handling system, the request comprising a solution manifest file; parsing, by the system control processor manager, the solution manifest file to identify a solution requirement set; performing, using the solution requirement set, an analysis of a telemetry data map and a topology and connectivity graph; making a determination, based on the analysis, that the composition request may be satisfied using resources represented in the topology and connectivity graph; and composing the composed information handling system based on the determination.
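The parse-then-analyze flow above can be sketched with a toy manifest format. The JSON layout and the flat resource map (standing in for the telemetry data map and topology/connectivity graph) are assumptions for illustration only.

```python
# Sketch of the composition flow: parse a solution manifest into a
# requirement set, then check it against available resources. The
# manifest format and resource map shown are assumptions.

import json

def parse_manifest(manifest_json):
    """Extract the solution requirement set from a manifest file."""
    return json.loads(manifest_json)["requirements"]

def can_satisfy(requirements, topology):
    """topology maps resource type -> free units, a simplified stand-in
    for the telemetry data map and topology/connectivity graph."""
    return all(topology.get(r["type"], 0) >= r["count"]
               for r in requirements)

manifest = ('{"requirements": [{"type": "gpu", "count": 2},'
            ' {"type": "nvme", "count": 4}]}')
topology = {"gpu": 4, "nvme": 8, "fpga": 1}
print(can_satisfy(parse_manifest(manifest), topology))   # True
```

If the determination succeeds, composition would proceed by reserving the matched resources for the composed system.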

MULTI-SITE HYBRID NETWORKS ACROSS CLOUD ENVIRONMENTS

A method of deploying a network service across a plurality of data centers includes the steps of: in response to a request for or relating to a network service, identifying virtual network functions associated with the network service and determining network connectivity requirements of the virtual network functions; and issuing commands to provision a virtual link between at least two of the data centers in which the virtual network functions are to be deployed.
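The steps above (identify VNFs, derive inter-site connectivity needs, issue provisioning commands) can be sketched as follows. The catalog contents, command strings, and site names are all hypothetical.

```python
# Illustrative sketch of the deployment steps: identify the VNFs for a
# network service, detect that they span sites, and emit commands to
# provision a virtual link first. All names are hypothetical.

CATALOG = {
    "5g-core": [("amf", "site-a"), ("upf", "site-b")],
}

def deploy(service):
    vnfs = CATALOG[service]
    sites = {site for _, site in vnfs}
    commands = []
    if len(sites) > 1:
        # VNFs span data centers: provision a virtual link between them.
        a, b = sorted(sites)
        commands.append(f"provision-vlink {a} {b}")
    commands += [f"deploy-vnf {name} {site}" for name, site in vnfs]
    return commands

for cmd in deploy("5g-core"):
    print(cmd)
# provision-vlink site-a site-b
# deploy-vnf amf site-a
# deploy-vnf upf site-b
```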

5G-ENABLED MASSIVELY DISTRIBUTED ON-DEMAND PERSONAL CLOUD SYSTEM AND METHOD
20230093627 · 2023-03-23 ·

The technology described herein allocates resources in a cloud computing environment using a 5G network. The system can connect a device to the 5G network and collect data related to the device such as a location of the device and characteristics of use of the device with the 5G network. The system can create a device service profile of the device based at least in part on the data related to the device. The system can then dynamically partition computing resources within the cloud computing environment for the device based on the device service profile and a time-of-day in the location of the device to thereby provide on-demand access to content or services in the cloud computing environment to the device over the 5G network.
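The profile-and-time-of-day partitioning above can be sketched with a toy sizing function. The tier/peak-hour weighting is an assumption for illustration; the patent does not give a formula.

```python
# Hypothetical sketch of profile-based partitioning: size a device's
# cloud resource slice from its service profile and the local time of
# day. The weighting scheme below is an assumption.

def partition(profile, local_hour, base_units=10):
    """Scale the resource slice up during the profile's peak hours."""
    factor = 2.0 if local_hour in set(profile["peak_hours"]) else 1.0
    return int(base_units * profile["tier"] * factor)

profile = {"tier": 2, "peak_hours": [19, 20, 21]}   # evening-heavy user
print(partition(profile, local_hour=20))   # 40 units during peak
print(partition(profile, local_hour=9))    # 20 units off-peak
```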

REINFORCEMENT LEARNING (RL) AND GRAPH NEURAL NETWORK (GNN)-BASED RESOURCE MANAGEMENT FOR WIRELESS ACCESS NETWORKS

A computing node to implement an RL management entity in an NG wireless network includes a NIC and processing circuitry coupled to the NIC. The processing circuitry is configured to generate a plurality of network measurements for a corresponding plurality of network functions. The network functions are configured as a plurality of ML models forming a multi-level hierarchy. Control signaling from an ML model of the plurality is decoded, the ML model being at a predetermined level (e.g., a lowest level) in the hierarchy. The control signaling is responsive to a corresponding network measurement and at least second control signaling from a second ML model at a level that is higher than the predetermined level. A plurality of reward functions is generated for training the ML models, based on the control signaling from the ML model at the predetermined level in the multi-level hierarchy.
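A toy sketch of the hierarchy above: a low-level policy emits control signaling from its own network measurement plus signaling received from a higher-level policy, and a reward function scores the result for training. Real systems would use trained RL/GNN models; the stub policies and reward below are purely illustrative.

```python
# Toy sketch of the multi-level ML hierarchy: higher-level control
# signaling constrains a lowest-level model, and a reward function is
# generated from that model's control signaling. Illustrative only.

def high_level_policy(cell_load):
    """Higher-level model: coarse target utilization for the cell."""
    return 0.5 if cell_load > 0.8 else 0.9

def low_level_policy(measurement, high_level_signal):
    """Lowest-level model: control action from its own network
    measurement and the signaling from the level above."""
    return min(measurement, high_level_signal)

def reward(action, measurement):
    """Reward for training the low-level model: penalize deviation
    from the measured demand."""
    return -abs(action - measurement)

m = 0.7                        # network measurement for one function
hi = high_level_policy(0.9)    # congested cell -> cap at 0.5
act = low_level_policy(m, hi)  # low-level control signaling
print(act, round(reward(act, m), 2))   # 0.5 -0.2
```

In training, such rewards would be computed per level so each model in the hierarchy learns against signaling from the level above it.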