Patent classifications
H04L47/805
Optimized disaster-recovery-as-a-service system
Methods, computer program products, and systems are presented. The methods include, for instance: analyzing a dataset associated with a service provided by a data protection service provider in order to determine a policy for when and how to replicate the respective components of the dataset corresponding to the service from a source site to a target site, such that the target site may perform the service at minimum cost.
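A minimal sketch of the cost-minimizing replication planning described above, in Python. The mode names, `rpo` (recovery point objective) field, and cost figures are illustrative assumptions, not taken from the patent.

```python
def plan_replication(components, modes):
    """For each dataset component, pick the cheapest replication mode
    that still meets the component's recovery-point objective.
    Fields ("name", "rpo", "cost") are hypothetical."""
    plan = {}
    for comp in components:
        # A mode is feasible when it loses no more data than the component tolerates.
        feasible = [m for m in modes if m["rpo"] <= comp["rpo"]]
        # "When and how" collapses here to: cheapest feasible mode (an assumption).
        plan[comp["name"]] = min(feasible, key=lambda m: m["cost"])["name"]
    return plan
```

A component with a tight recovery objective would be forced onto synchronous replication, while a tolerant one could take the cheaper asynchronous path.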
ALLOCATING RESOURCES FOR COMMUNICATION AND SENSING SERVICES
An apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code being configured to, with the at least one processor, cause the apparatus at least to perform: determining, per communication service to which resources are to be allocated, a metric value using a first predefined set of rules; determining, per sensing service to which resources are to be allocated, a metric value using a second predefined set of rules; sorting the communication services and the sensing services into a sorted order based on the metric values using a third rule; and allocating resources for the communication services and the sensing services based on the sorted order.
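A rough sketch of that three-rule flow in Python. The concrete rule bodies, service fields, and tie-break are assumptions made for illustration only; the patent leaves the rule sets abstract.

```python
def comm_metric(svc):
    # First predefined rule set (assumed: weight priority of communication services).
    return svc["priority"] * 2

def sensing_metric(svc):
    # Second predefined rule set (assumed: combine priority and urgency).
    return svc["priority"] + svc["urgency"]

def allocate(comm_services, sensing_services, capacity):
    scored = [(comm_metric(s), s) for s in comm_services]
    scored += [(sensing_metric(s), s) for s in sensing_services]
    # Third rule (assumed): sort descending by metric, ties broken by name.
    scored.sort(key=lambda t: (-t[0], t[1]["name"]))
    allocation = {}
    for metric, svc in scored:
        take = min(svc["demand"], capacity)
        if take:
            allocation[svc["name"]] = take
            capacity -= take
    return allocation
```

The point of the shared sorted order is that communication and sensing services compete for the same resource pool under one priority scheme rather than being allocated separately.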
Techniques for excess resource utilization
Techniques to utilize excess resources in a cloud system, such as by enabling an auxiliary resource utilizer to use resources while they are not needed to support primary resource utilizers, are described herein. Some embodiments are directed to identifying and allocating excess capacity of resources in a cloud system to auxiliary resource utilizers based on one or more policies. In various embodiments, excess resources in one or more of the set of resources in the cloud system, or cloud resources, may be determined based on monitoring utilization of the cloud resources by the primary resource utilizers. In many embodiments, an auxiliary resource utilizer that is in compliance with a set of utilization policies may be identified and the excess resources may be allocated to the auxiliary resource utilizer.
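The monitor-then-allocate loop above can be sketched as follows; the policy predicates and the `request` field are hypothetical stand-ins for the patent's "set of utilization policies".

```python
def excess_capacity(total, primary_usage):
    # Excess = provisioned capacity minus what primary utilizers currently consume.
    return {r: total[r] - primary_usage.get(r, 0) for r in total}

def allocate_excess(excess, auxiliaries, policies):
    # Only auxiliary utilizers in compliance with every policy qualify (per the abstract).
    eligible = [a for a in auxiliaries if all(p(a) for p in policies)]
    grants = {}
    for aux in eligible:
        for res, free in excess.items():
            take = min(aux["request"].get(res, 0), free)
            if take:
                grants.setdefault(aux["name"], {})[res] = take
                excess[res] -= take
    return grants
```

In practice the excess figures would come from continuous utilization monitoring, and grants would be revocable when primary demand returns.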
MULTI-TENANT RESOURCE MANAGEMENT IN A GATEWAY
Described herein are systems, methods, and software to manage resources in a gateway shared by multiple tenants. In one example, a system may monitor usage of resources by a tenant of the gateway and compare the usage with usage limits associated with the resources. The system may further determine when the usage of a resource exceeds a usage limit associated with the resource and, when the usage of the resource exceeds the usage limit, identify an operation associated with causing the usage limit to be exceeded and block the operation.
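A minimal admission check capturing that compare-and-block behavior; the dictionary shapes and the idea of checking the projected usage before the operation runs are illustrative assumptions.

```python
def check_operation(tenant_usage, limits, op):
    """Block an operation that would push a tenant's resource usage
    over its per-resource limit; otherwise record the new usage."""
    projected = tenant_usage.get(op["resource"], 0) + op["amount"]
    if projected > limits[op["resource"]]:
        return "blocked"   # the operation causing the excess is denied
    tenant_usage[op["resource"]] = projected
    return "allowed"
```

A per-tenant limit table is what keeps one tenant's burst from starving the others on the shared gateway.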
Auto switching for enterprise federated network slice
A method in which an enterprise switches its devices to various federated network slices across operators based on cost, time, quality, and/or availability parameters defined in flexible rules managed by the enterprise. The method includes obtaining, by a controller of an enterprise, one or more parameters of a device served by a network slice of a core network. The method further includes, based on the one or more parameters of the device and one or more rules, determining, by the controller, whether a triggering event associated with a slice reselection occurred and based on the triggering event and the one or more rules, selecting, by the controller, a federated network slice from among a plurality of network slices provided by a plurality of core networks. The method further includes the controller causing the device to switch from the network slice to the federated network slice.
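The trigger-then-reselect logic might look like the following sketch; the rule shape, and the `cost`/`quality` slice fields, are assumptions standing in for the enterprise's flexible rules.

```python
def should_reselect(params, rules):
    # A triggering event occurs when any enterprise rule fires on the
    # device's observed parameters (assumed semantics).
    return any(rule(params) for rule in rules)

def select_slice(slices, params):
    # Among federated slices meeting the quality floor, pick the cheapest
    # (one plausible reading of cost/quality-driven selection).
    viable = [s for s in slices if s["quality"] >= params["min_quality"]]
    return min(viable, key=lambda s: s["cost"])
```

The controller would then signal the device to detach from its current slice and attach to the selected federated slice.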
Network Access Control Method, SDF, CP, UP, and Network System
A network device having at least one processor and one or more non-transitory memories storing programming instructions that are associated with a steering decision function (SDF) in a network system, the instructions including instructions to: obtain a carrier-grade network address translation (CGN) resource pool by receiving CGN resources reported by a plurality of user planes (UPs), where the network system includes the SDF, the plurality of UPs, and a control plane (CP); receive a CGN instance obtaining request sent by the CP, the CGN instance obtaining request indicating to allocate a CGN instance to a user equipment; allocate a first CGN instance to the user equipment based on the CGN resource pool, the first CGN instance indicating a first UP, of the plurality of UPs, having an available CGN resource; and send the first CGN instance to the CP.
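A toy model of the SDF's pool-building and allocation steps; representing CGN resources as a simple free count and using first-fit selection are simplifying assumptions, not the patent's method.

```python
class SteeringDecisionFunction:
    """Minimal SDF sketch: UPs report CGN resources, the CP asks for instances."""

    def __init__(self):
        self.pool = {}  # UP id -> count of available CGN resources (assumed model)

    def report(self, up_id, resources):
        # UPs report their CGN resources; the SDF aggregates them into a pool.
        self.pool[up_id] = resources

    def allocate_instance(self, ue_id):
        # Allocate a CGN instance on a UP that still has an available
        # resource (first-fit here is an assumption), then return it to the CP.
        for up_id, free in self.pool.items():
            if free > 0:
                self.pool[up_id] -= 1
                return {"ue": ue_id, "up": up_id}
        return None  # pool exhausted
```

Keeping the pool in the SDF rather than the CP is the point of the split: the CP only requests instances, while the SDF tracks where CGN capacity actually exists.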
DYNAMIC BANDWIDTH ALLOCATION IN CLOUD NETWORK SWITCHES BASED ON TRAFFIC DEMAND PREDICTION
Embodiments for dynamic bandwidth allocation in cloud network switches in a cloud computing environment are provided. Quality of service (QoS) policies may be dynamically changed in one or more cloud network switches based on dynamically estimating expected traffic demands for each of a plurality of traffic classes, wherein bandwidth is dynamically allocated among queues based on changing the QoS policies.
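One plausible sketch of the estimate-then-reallocate step, assuming a moving-average demand predictor and proportional sharing; the actual estimation method and QoS-policy mechanism are not specified in the abstract.

```python
def allocate_bandwidth(link_capacity, demand_history):
    """Estimate expected demand per traffic class as the mean of recent
    samples (assumed predictor), then split the switch's bandwidth
    among queues in proportion to expected demand."""
    expected = {cls: sum(h) / len(h) for cls, h in demand_history.items()}
    total = sum(expected.values())
    # Updated QoS policy: each class's queue weight tracks its predicted share.
    return {cls: link_capacity * d / total for cls, d in expected.items()}
```

Re-running this on each monitoring interval is what makes the QoS policy "dynamic": queue weights follow predicted demand instead of staying at static provisioned values.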
System and method for providing bandwidth congestion control in a private fabric in a high performance computing environment
Systems and methods for providing bandwidth congestion control in a private fabric in a high performance computing environment. An exemplary method can provide, at one or more microprocessors, a first subnet, the first subnet comprising a plurality of switches, a plurality of host channel adapters, wherein each of the host channel adapters comprises at least one host channel adapter port, and wherein the plurality of host channel adapters are interconnected via the plurality of switches, and a plurality of end nodes. The method can provide, at a host channel adapter, an end node ingress bandwidth quota associated with an end node attached to the host channel adapter. The method can receive, at the end node of the host channel adapter, ingress bandwidth, the ingress bandwidth exceeding the ingress bandwidth quota of the end node.
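A simple quota check illustrating the ingress-side accounting; the abstract only describes detecting that ingress exceeds the quota, so the throttling response below is purely an assumed reaction.

```python
def admit_ingress(node_quota, current_rate, incoming_rate):
    """Return (admitted, excess) for traffic arriving at an end node
    whose host channel adapter enforces an ingress bandwidth quota.
    Rate-limiting the excess is an assumption for illustration."""
    headroom = max(0, node_quota - current_rate)
    admitted = min(incoming_rate, headroom)
    return admitted, incoming_rate - admitted
```

In a real fabric the excess would typically trigger congestion signaling back toward the senders rather than a simple local drop.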
Reducing latency in downloading electronic resources using multiple threads
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reducing latency in presenting content. In one aspect, a system includes a native application that presents an interactive item and a latency reduction engine. The latency reduction engine detects interaction with the interactive item that links to a first electronic resource that is different from the native application and provided by a first network domain and in response to the detecting, reduces latency in presenting the first electronic resource, including executing a first processing thread and a second processing thread in parallel. The first processing thread requests a second electronic resource from a second network domain and loads the second electronic resource and, in response to the loading, stores a browser cookie for the second network domain. The second processing thread requests the first electronic resource and presents the first electronic resource.
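The two-thread parallelism above can be sketched with Python's standard `threading` module; the `fetch` function and the URLs are hypothetical stand-ins for the real network requests and cookie-setting loads.

```python
import threading

def fetch(url, results, key):
    # Stand-in for an HTTP request; a real implementation would also
    # store any cookie the response sets (assumed behavior).
    results[key] = f"content of {url}"

def open_resource(primary_url, cookie_url):
    results = {}
    # Thread 1: warm the second domain so its browser cookie is already
    # stored by the time the primary resource needs it.
    t1 = threading.Thread(target=fetch, args=(cookie_url, results, "cookie"))
    # Thread 2: request and present the resource the user actually tapped.
    t2 = threading.Thread(target=fetch, args=(primary_url, results, "page"))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results
```

Running the cookie-priming load concurrently instead of sequentially is where the latency reduction comes from: the user-visible thread never waits on the second domain.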
AUTOMATIC NETWORK CONFIGURATION
Automatic network configuration includes obtaining, by a virtual private network service provider infrastructure system, ranking data for data transport pathways between the virtual private network service provider infrastructure system and an external system, wherein a respective data transport pathway from the data transport pathways includes a respective exit node in the virtual private network service provider infrastructure system in communication with a respective entry node in the external system, wherein obtaining the ranking data includes obtaining at least a portion of the ranking data by testing a service provided by the external system via the entry node, and allocating, by the virtual private network service provider infrastructure system, a data transport pathway from the data transport pathways to a communication session, wherein the data transport pathway is a highest-ranking data transport pathway in the ranking data.
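The rank-and-allocate step reduces to the sketch below; the caller-supplied `probe` function stands in for the abstract's "testing a service provided by the external system via the entry node", and higher scores are assumed to mean better pathways.

```python
def rank_pathways(pathways, probe):
    # Score each exit-node/entry-node pathway by probing the external
    # service through it (probe semantics are an assumption).
    return sorted(pathways, key=probe, reverse=True)

def allocate_session(pathways, probe):
    # A new communication session gets the highest-ranking pathway.
    return rank_pathways(pathways, probe)[0]
```

Probing through the actual entry node, rather than just measuring raw link latency, is what lets the ranking reflect end-to-end service quality.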