Patent classifications
H04L47/781
On-demand application-driven network slicing
Disclosed are various embodiments for on-demand application-driven network slicing. In one embodiment, it is determined that an application implemented in a particular computing device has an increased quality-of-service requirement in order to send or receive data via a communications network. The increased quality-of-service requirement is greater than an existing quality-of-service provided to the application by the communications network. The application sends a request that causes capacity in a network slice having the increased quality-of-service requirement in the communications network to be allocated for the application. The data is transmitted to or from the application using the network slice.
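The core decision in this abstract — request slice capacity only when the required quality of service exceeds what the network currently provides — can be sketched as follows. All names here (`QosRequirement`, `SliceBroker`, `ensure_qos`) are illustrative assumptions, not the patent's API:

```python
# Sketch: an application compares its required QoS against the QoS it
# currently receives and, only if the requirement is greater, asks a
# network-side broker to allocate capacity in a suitable slice.
from dataclasses import dataclass

@dataclass
class QosRequirement:
    bandwidth_mbps: float
    max_latency_ms: float

class SliceBroker:
    """Stand-in for the network entity that allocates slice capacity."""
    def __init__(self):
        self.allocations = {}

    def allocate(self, app_id: str, qos: QosRequirement) -> str:
        slice_id = f"slice-{len(self.allocations) + 1}"
        self.allocations[app_id] = (slice_id, qos)
        return slice_id

def ensure_qos(app_id, current, required, broker):
    """Send a slice request only if the required QoS exceeds the current QoS."""
    needs_more = (required.bandwidth_mbps > current.bandwidth_mbps
                  or required.max_latency_ms < current.max_latency_ms)
    return broker.allocate(app_id, required) if needs_more else None
```

Once a slice identifier is returned, the application's traffic would be carried over that slice; if the existing QoS already suffices, no request is made.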
Methods and apparatus to execute a workload in an edge environment
Methods and apparatus to execute a workload in an edge environment are disclosed. An example apparatus includes a node scheduler to accept a task from a workload scheduler, the task including a description of a workload and tokens, a workload executor to execute the workload, the node scheduler to access a result of execution of the workload and provide the result to the workload scheduler, and a controller to access the tokens and distribute at least one of the tokens to at least one provider, the provider to provide a resource to the apparatus to execute the workload.
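The token flow described above — a node accepts a task bundling a workload with tokens, executes it, returns the result, and a controller distributes the tokens to resource providers — can be illustrated with a toy model. The class and the even-split payout rule are assumptions for illustration only:

```python
# Toy edge node: accept_task() runs the workload, returns its result, and
# distributes the accompanying tokens across the node's resource providers.
class Node:
    def __init__(self, providers):
        self.providers = providers                  # providers to compensate
        self.earned = {p: 0 for p in providers}

    def accept_task(self, workload, tokens):
        result = workload()                         # workload executor step
        self._distribute(tokens)                    # controller step
        return result

    def _distribute(self, tokens):
        # split tokens as evenly as possible, earlier providers get the remainder
        share, rem = divmod(tokens, len(self.providers))
        for i, p in enumerate(self.providers):
            self.earned[p] += share + (1 if i < rem else 0)
```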
Throttling data streams from source computing devices
Local management of data stream throttling in data movement operations, such as secondary-copy operations in a storage management system, is disclosed. A local throttling manager may interoperate with co-resident data agents and/or a media agent executing on any given local computing device, whether a client computing device or a secondary storage computing device. The local throttling manager may allocate and manage the available bandwidth for various jobs and their constituent data streams—across the data agents and/or media agent. Bandwidth is allocated and re-allocated to data streams used by ongoing jobs, in response to new jobs starting and old jobs completing, without having to pause and restart ongoing jobs to accommodate bandwidth adjustments. The illustrative embodiment also provides local users with a measure of control over data streams—to suspend, pause, and/or resume them—independently from the centralized storage manager that manages the overall storage system.
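The key property of the abstract — bandwidth is re-split across ongoing jobs' streams as jobs start and finish, without pausing or restarting any stream — can be sketched with an equal-per-stream policy. The class name and the even division are illustrative assumptions:

```python
# Sketch of a local throttling manager: a fixed link budget is re-divided
# among all active data streams whenever a job starts or finishes. Each
# ongoing stream simply adopts its new rate; nothing is paused or restarted.
class LocalThrottler:
    def __init__(self, total_bandwidth_mbps):
        self.total = total_bandwidth_mbps
        self.jobs = {}                        # job_id -> number of data streams

    def _reallocate(self):
        streams = sum(self.jobs.values())
        per_stream = self.total / streams if streams else 0
        return {job: per_stream * n for job, n in self.jobs.items()}

    def job_started(self, job_id, streams):
        self.jobs[job_id] = streams
        return self._reallocate()

    def job_finished(self, job_id):
        del self.jobs[job_id]
        return self._reallocate()
```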
DYNAMIC SLICE PRIORITY HANDLING
Embodiments provide functionality for dynamic handling of network slice priorities. In an embodiment, a slice priority manager receives, from a network services node, data indicating changes in the network resources available for maintaining instantiated network slices in a communication network. Based on the received data, changes in the network services node's ability to maintain the instantiated network slices are identified, and a communication service provider is notified of the changes. The slice priority manager then receives, from the communication service provider, slice priority data indicating a preferred order in which at least a portion of the network slices should be allocated the network resources. In turn, an indication of the slice priority data is forwarded to the network services node to update the allocation of network resources among the slices.
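The effect of the provider's preferred order can be shown with a toy allocator: when available capacity drops, slices earlier in the priority order keep their resources and later ones are starved. The greedy fill rule is an assumption for illustration:

```python
# Toy priority-driven allocation: grant each slice its demand, in the
# provider's preferred order, until the available capacity is exhausted.
def allocate_by_priority(capacity, demands, priority_order):
    grants = {}
    for slice_id in priority_order:
        grant = min(demands.get(slice_id, 0), capacity)
        grants[slice_id] = grant
        capacity -= grant
    return grants
```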
CLOUD DATA CENTER TENANT-LEVEL OUTBOUND RATE LIMITING METHOD AND SYSTEM
A cloud data center tenant-level outbound rate limiting method includes: starting a timer; receiving outbound packets of tenants in a current period and generating statistics on them; obtaining local traffic rate information of the tenants based on all the outbound packets of the tenants in the current period, and generating local bandwidth demand frames of the tenants based on the local traffic rate information; when the timer reaches the end of the current period, sending the local bandwidth demand frames of the tenants to a switch; receiving a global bandwidth demand frame sent by the switch, and computing bandwidth budgets of the tenants based on the local traffic rate information of the tenants and the global bandwidth demand frame; and modifying rate limiting parameters to limit the rate of the outbound packets of the tenants in the next period.
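The budget computation at the end of each period can be sketched as follows. The abstract only states that budgets are computed from the local rate information and the global demand frame; the proportional-split rule below (each host receives a share of the tenant's cap proportional to its local demand) is an assumption:

```python
# Sketch: split each tenant's total bandwidth cap across hosts in proportion
# to the host's locally measured demand versus the global (switch-reported)
# demand for that tenant.
def compute_budgets(local_rates, global_rates, tenant_caps):
    """local_rates/global_rates: tenant -> Mbps; tenant_caps: tenant -> Mbps."""
    budgets = {}
    for tenant, cap in tenant_caps.items():
        local = local_rates.get(tenant, 0)
        total = global_rates.get(tenant, 0)
        share = local / total if total else 0
        budgets[tenant] = cap * share        # this host's slice of the cap
    return budgets
```

The resulting per-tenant budgets would then be written into the rate limiter's parameters for the next period.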
METHOD AND SYSTEM FOR A PROACTIVE ASSIGNMENT OF VIRTUAL NETWORK FUNCTIONS IN LOCAL DATA SYSTEMS
A method for managing data includes obtaining, by a service function chain (SFC) orchestrator, an SFC request for an SFC, where the SFC comprises at least one virtual network function (VNF) and one service. In response to the SFC request, the method includes: determining a set of candidate local data systems (LDSs) based on a resource availability mapping; performing an LDS analysis on the set of candidate LDSs; based on the LDS analysis, assigning the VNF to a candidate LDS of the set of candidate LDSs and assigning the service to a second LDS of the set of candidate LDSs; and, based on the assigning of the VNF and the service, initiating a deployment of the VNF and the service.
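The candidate-filtering and assignment steps can be illustrated with a small sketch. The "most free capacity" scoring used for the LDS analysis is an assumption; the abstract does not specify the analysis criteria:

```python
# Sketch: filter LDSs that can host the VNF from a resource availability
# mapping, place the VNF on the best-fitting candidate, and place the
# service on a second candidate (co-locating only if no other fits).
def assign_sfc(vnf_demand, service_demand, availability):
    """availability: lds_name -> free capacity units."""
    candidates = {k: v for k, v in availability.items() if v >= vnf_demand}
    if not candidates:
        return None                                   # no LDS can host the VNF
    vnf_lds = max(candidates, key=candidates.get)     # most free capacity
    rest = {k: v for k, v in candidates.items()
            if k != vnf_lds and v >= service_demand}
    svc_lds = max(rest, key=rest.get) if rest else vnf_lds
    return vnf_lds, svc_lds
```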
BLOCKED XOR FILTER FOR BLACKLIST FILTERING
A method of filtering a URL against a blacklist includes receiving at least a portion of a Uniform Resource Locator (URL), and determining which of a plurality of XOR filters is applicable to the received at least a portion of the URL, where each of the plurality of XOR filters represents a different portion of a URL blacklist. The at least a portion of the URL is forwarded to the applicable one of the plurality of XOR filters and is processed there to produce an output indicating whether the URL is likely on the blacklist.
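The "blocked" structure — partition the blacklist into several filters and consult only the applicable one per lookup — can be sketched with a Python `set` standing in for each XOR filter. A real XOR filter would expose the same membership interface with much less memory and a small false-positive rate; the routing rule (hash of the first character) is an illustrative assumption:

```python
# Sketch of a blocked blacklist filter: the blacklist is split across several
# membership filters, and a lookup is routed to exactly one of them.
class BlockedBlacklist:
    def __init__(self, blacklist, num_blocks=4):
        self.num_blocks = num_blocks
        self.blocks = [set() for _ in range(num_blocks)]   # XOR-filter stand-ins
        for url in blacklist:
            self.blocks[self._block_for(url)].add(url)

    def _block_for(self, url):
        # deterministic routing: same URL always maps to the same block
        return ord(url[0]) % self.num_blocks

    def probably_blacklisted(self, url):
        # only the applicable block is consulted
        return url in self.blocks[self._block_for(url)]
```

Because both insertion and lookup use the same routing function, each lookup touches one small filter rather than one large one.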
Techniques for excess resource utilization
Techniques to utilize excess resources in a cloud system, such as by enabling an auxiliary resource utilizer to use resources while they are not needed to support primary resource utilizers, are described herein. Some embodiments are directed to identifying and allocating excess capacity of resources in a cloud system to auxiliary resource utilizers based on one or more policies. In various embodiments, excess resources in one or more of the set of resources in the cloud system, or cloud resources, may be determined based on monitoring utilization of the cloud resources by the primary resource utilizers. In many embodiments, an auxiliary resource utilizer that is in compliance with a set of utilization policies may be identified and the excess resources may be allocated to the auxiliary resource utilizer.
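The two checks the abstract describes — compute headroom left by primary utilizers, and lend it only to an auxiliary utilizer that complies with the utilization policies — can be sketched directly. The policy predicates and names are illustrative assumptions:

```python
# Sketch: excess capacity is whatever the primary utilizers leave unused;
# it is granted to an auxiliary utilizer only if every policy predicate holds.
def excess_capacity(total, primary_usage):
    return max(total - sum(primary_usage.values()), 0)

def allocate_excess(total, primary_usage, auxiliary, policies):
    """policies: list of predicates the auxiliary utilizer must satisfy."""
    if not all(policy(auxiliary) for policy in policies):
        return 0
    return excess_capacity(total, primary_usage)
```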
Abstraction layer to cloud services
Aspects of the disclosure relate to providing cloud computing resources from one or more cloud service providers for a client computing device through a computing platform. The client computing device may benefit from an economy of scale while being able to obtain different types of cloud services across a plurality of cloud providers. The client computing device may request an initial amount of cloud services and subsequently may request cloud services that utilize a requested amount of cloud resources. The requested amount of cloud resources may be apportioned among the plurality of cloud service providers to provide the requested cloud service. The computing platform may also support a cloud abstraction layer that mediates between the client computing device and the one or more cloud providers, so that the client computing device can obtain cloud services in a transparent manner.
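The apportioning step can be sketched as a simple fill across providers behind the abstraction layer. The greedy fill order and the "spare capacity" inputs are assumptions; the abstract does not state how the split is chosen:

```python
# Sketch: fill a requested resource amount from each provider's spare
# capacity in order, so the client sees one request regardless of how
# many providers actually supply it.
def apportion(requested, spare_capacity):
    """spare_capacity: provider -> units it can currently supply."""
    if requested > sum(spare_capacity.values()):
        raise ValueError("request exceeds combined provider capacity")
    shares, remaining = {}, requested
    for provider, spare in spare_capacity.items():
        take = min(spare, remaining)
        shares[provider] = take
        remaining -= take
    return shares
```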
EFFICIENT ROUTING OF COMMUNICATIONS IN A MESH NETWORK
A method including receiving, by an infrastructure device in communication with a first device in a mesh network, a binding request from a meshnet local port associated with the first device that is dedicated for communicating meshnet data associated with the first device, the binding request requesting the infrastructure device to determine a currently allocated public port associated with the first device; and transmitting, by the infrastructure device to the first device, a response indicating the currently allocated public port associated with the first device. Various other aspects are contemplated.
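The request/response exchange can be modeled with a toy infrastructure device that remembers which public port is currently mapped to a device's dedicated meshnet port and answers binding requests with it. The class and method names are illustrative assumptions:

```python
# Toy binding exchange: the infrastructure device tracks the public port
# currently allocated (e.g. by NAT) for each device's meshnet local port,
# and a binding request is answered with that port.
class InfrastructureDevice:
    def __init__(self):
        self.port_map = {}        # (device_id, local_port) -> public port

    def observe_mapping(self, device_id, local_port, public_port):
        self.port_map[(device_id, local_port)] = public_port

    def handle_binding_request(self, device_id, local_port):
        # respond with the currently allocated public port, if one is known
        return self.port_map.get((device_id, local_port))
```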