Patent classifications
H04L47/827
METHOD AND SYSTEM FOR DISTRIBUTIVE FLOW CONTROL AND BANDWIDTH MANAGEMENT IN A NETWORK
A method and system for distributive flow control and bandwidth management in networks is disclosed. The method includes: providing multiple Internet Protocol (IP) Gateways (IPGWs) that each have a maximum send rate and one or more sessions with associated throughput criteria, wherein each IPGW performs flow control by limiting information flows by the respective maximum send rate and throughput criteria; providing multiple Code Rate Organizers (CROs) that each have a bandwidth capacity, wherein each CRO performs bandwidth allocation of its respective bandwidth capacity to one or more IPGWs of the multiple IPGWs; interconnecting the multiple IPGWs with the multiple CROs; and performing bandwidth management across the multiple CROs and IPGWs. In the method, an IPGW of the multiple IPGWs provides flow control across a plurality of the CROs of the multiple CROs, and a CRO of the multiple CROs allocates bandwidth to a plurality of the IPGWs of the multiple IPGWs.
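The many-to-many relationship described above can be sketched as follows. This is a minimal illustration, not the patented method: the proportional-sharing rule and all names (`allocate`, `send_rate`, the demand dictionary) are assumptions for this example.

```python
# Hypothetical sketch: a CRO divides its bandwidth capacity among the
# IPGWs it serves, and an IPGW caps its aggregate flow (grants received
# across multiple CROs) at its own maximum send rate.

def allocate(cro_capacity, demands):
    """Split one CRO's capacity across IPGW demands, proportionally,
    never granting an IPGW more than it requested."""
    total = sum(demands.values())
    if total <= cro_capacity:
        return dict(demands)  # everything fits
    scale = cro_capacity / total
    return {ipgw: rate * scale for ipgw, rate in demands.items()}

def send_rate(ipgw_max_rate, grants):
    """An IPGW's effective send rate: the sum of its grants from
    multiple CROs, limited by its maximum send rate."""
    return min(ipgw_max_rate, sum(grants))
```

One CRO thus serves a plurality of IPGWs (via `allocate`), while one IPGW enforces flow control across grants from a plurality of CROs (via `send_rate`).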
Remote port for network connectivity for non-colocated customers of a cloud exchange
In general, techniques are described for network connectivity for non-colocated customers of a cloud exchange. A programmable network platform for the cloud exchange comprises processing circuitry configured to: configure a virtual network device in the data center to run a network service for a customer; receive, from the customer, a request for a remote port and network information for a network service provider connectivity service for the customer; assign, in response to receiving the request for the remote port, a remote port of the cloud exchange to the customer; and configure, in response to receiving the request for the remote port using the network information, the cloud exchange to connect the network service provider connectivity service to the virtual network device via the remote port of the cloud exchange.
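The request/assign/configure flow above can be sketched as a small state machine. The data model (a free-port pool and a connection table) and all names are invented for illustration; the patent does not specify them.

```python
# Hedged sketch of the remote-port assignment flow: assign a port on
# request, then wire the network service provider (NSP) connectivity
# service to the customer's virtual network device via that port.

class CloudExchange:
    def __init__(self, ports):
        self.free_ports = list(ports)
        self.connections = {}

    def assign_remote_port(self, customer):
        """Assign a free remote port of the exchange to the customer."""
        port = self.free_ports.pop(0)
        self.connections[port] = {"customer": customer}
        return port

    def connect(self, port, nsp_service, virtual_device):
        """Connect the NSP connectivity service to the customer's
        virtual network device via the assigned remote port."""
        self.connections[port].update(nsp=nsp_service, device=virtual_device)
```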
INTEROPERABLE CLOUD BASED MEDIA PROCESSING USING DYNAMIC NETWORK INTERFACE
A method of processing media content in Moving Picture Experts Group (MPEG) Network Based Media Processing (NBMP) includes obtaining a plurality of tasks for processing the media content, providing an interface between an NBMP workflow manager and a cloud manager by providing an NBMP Link application program interface (API), which links the plurality of tasks together, identifying an amount of network resources to be used for processing the media content, by using the NBMP Link API, and processing the media content in accordance with the identified amount of network resources.
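The linking step can be sketched as below. `Link` here is a stand-in data structure, not the actual NBMP Link API, whose interface is defined by MPEG NBMP; the bandwidth field and function names are assumptions.

```python
# Illustrative sketch: links connect media-processing tasks, and the
# workflow manager totals the network resources those links require
# before requesting them from the cloud manager.

class Link:
    def __init__(self, src_task, dst_task, bandwidth_mbps):
        self.src_task = src_task
        self.dst_task = dst_task
        self.bandwidth_mbps = bandwidth_mbps

def required_bandwidth(links):
    """Amount of network resource to request for all task-to-task links."""
    return sum(link.bandwidth_mbps for link in links)
```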
On-demand access to compute resources
Disclosed are systems, methods and computer-readable media for controlling and managing the identification and provisioning of resources within an on-demand center as well as the transfer of workload to the provisioned resources. One aspect involves creating a virtual private cluster within the on-demand center for the particular workload from a local environment. A method of managing resources between a local compute environment and an on-demand environment includes detecting an event associated with a local compute environment and, based on the detected event, identifying information about the local environment, establishing communication with an on-demand compute environment, transmitting the information about the local environment to the on-demand compute environment, provisioning resources within the on-demand compute environment to substantially duplicate the local environment, and transferring workload from the local environment to the on-demand compute environment. The event can be a threshold or a triggering event within or outside of the local environment.
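The event-driven hand-off above can be sketched as follows. The class, function names, and the load-threshold trigger are invented for this example; the patent covers thresholds and triggering events generally.

```python
# Hedged sketch: when local load crosses a threshold (the triggering
# event), provision a cluster in the on-demand center that duplicates
# the local environment's spec, then transfer the queued workload.

class OnDemandCenter:
    def __init__(self):
        self.clusters = []

    def provision(self, spec):
        """Provision resources that substantially duplicate the spec."""
        self.clusters.append(dict(spec))
        return len(self.clusters) - 1  # cluster id

def transfer_on_threshold(local_load, threshold, local_spec, center, queue):
    """Detect the event, provision a matching cluster on demand,
    and move the queued work; returns None if no event occurred."""
    if local_load < threshold:
        return None
    cluster_id = center.provision(local_spec)
    moved, queue[:] = list(queue), []  # drain the local queue
    return cluster_id, moved
```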
Aggregated service status reporter
Systems as described herein may include generating an aggregated service status report for a real-time service delivery platform. A plurality of services running in a service domain may be determined. A request for a status of system behavior corresponding to a particular service may be received. Service connection details of the particular service may be discovered and metric data of real-time data movement may be tracked. Real-time snapshot aggregation of the particular service may be provided. In a variety of embodiments, a real-time system behavior report for the service across availability zones may be presented.
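The cross-zone aggregation can be sketched as below. The metric schema (request and error counts) is an assumption chosen for illustration; the patent does not fix a schema.

```python
# Illustrative sketch: combine per-availability-zone metrics for one
# service into a single real-time snapshot report.

def aggregate_snapshot(zone_metrics):
    """zone_metrics: {zone: {"requests": n, "errors": m}} ->
    one aggregated report across all availability zones."""
    total_req = sum(m["requests"] for m in zone_metrics.values())
    total_err = sum(m["errors"] for m in zone_metrics.values())
    return {
        "requests": total_req,
        "errors": total_err,
        "error_rate": total_err / total_req if total_req else 0.0,
        "zones": sorted(zone_metrics),
    }
```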
AUTOMATED SERVER WORKLOAD MANAGEMENT USING MACHINE LEARNING
Systems and methods are disclosed for managing workload among server clusters. According to certain embodiments, the system may include a memory storing instructions and a processor. The processor may be configured to execute the instructions to determine historical behaviors of the server clusters in processing a workload. The processor may also be configured to execute the instructions to construct cost models for the server clusters based at least in part on the historical behaviors. Each cost model is configured to predict a processor utilization demand of a workload. The processor may further be configured to execute the instructions to receive a workload and determine efficiencies of processing the workload by the server clusters based at least in part on at least one of the cost models or an execution plan of the workload.
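A minimal sketch of such a cost model follows, assuming it takes the form of a per-cluster linear fit of historical CPU utilization against workload size. The patent does not specify the model form; this choice and all names are illustrative only.

```python
# Hedged sketch: fit a slope (CPU demand per unit of workload) to each
# cluster's history, then route a new workload to the cluster whose
# model predicts the lowest processor utilization demand.

def fit_cost_model(history):
    """history: list of (workload_size, cpu_utilization) pairs.
    Least-squares slope through the origin."""
    num = sum(w * c for w, c in history)
    den = sum(w * w for w, _ in history)
    return num / den

def most_efficient_cluster(models, workload_size):
    """models: {cluster_name: slope}. Pick the cluster with the
    lowest predicted CPU demand for this workload."""
    return min(models, key=lambda name: models[name] * workload_size)
```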
END-TO-END SERVICE LEVEL METRIC APPROXIMATION
Described are examples for providing service level monitoring for a network hosting applications as a cloud service. A service level monitoring device may receive end-to-end measurements of service usage collected at user devices for a plurality of applications hosted as cloud services. The service level monitoring device may determine degraded applications of the plurality of applications based on anomalies in the measurements. The service level monitoring device may determine a service level metric based on an aggregation of the degraded applications. In some examples, the service level monitoring device may detect a network outage affecting the service.
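The anomaly-then-aggregate pipeline can be sketched as below. The z-score test, the threshold value, and the "fraction of healthy applications" metric are assumptions for this example; the patent claims anomaly detection and aggregation generally.

```python
# Hedged sketch: flag an application as degraded when its latest
# end-to-end measurement deviates strongly from its history, then
# report the share of non-degraded applications as the metric.

from statistics import mean, pstdev

def is_degraded(samples, latest, z_threshold=3.0):
    """Simple z-score anomaly test against historical measurements."""
    mu, sigma = mean(samples), pstdev(samples)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

def service_level_metric(apps):
    """apps: {name: (history, latest)} -> fraction of healthy apps."""
    degraded = sum(1 for hist, latest in apps.values()
                   if is_degraded(hist, latest))
    return 1 - degraded / len(apps)
```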
Fallback command in a modular control system
A device may include a memory storing instructions and a processor configured to execute the instructions to receive an instruction from an administration device; identify a link selector in the instruction that corresponds to a resource attribute of a first resource that specifies how a second resource is to be controlled by the first resource; query a database of contracts between resources to determine that the second resource is available to be controlled by the first resource, based on resource contracts associated with the second resource. The processor may be further configured to generate a resource contract between the first resource and the second resource that indicates the second resource is controlled by the first resource and enable the first resource to communicate with the second resource in accordance with the generated resource contract.
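The contract check and grant can be sketched as follows. The contract record shape and function names are invented for illustration; the patent describes a database of contracts between resources without fixing a representation.

```python
# Hedged sketch: before the first resource may control the second,
# query existing contracts to confirm the second resource is not
# already controlled, then record a new contract.

def is_available(resource, contracts):
    """A resource is available unless some contract already names it
    as the controlled party."""
    return all(c["controlled"] != resource for c in contracts)

def grant_contract(controller, resource, contracts):
    """Generate a contract indicating that `resource` is controlled
    by `controller`, enabling communication under that contract."""
    if not is_available(resource, contracts):
        raise ValueError(f"{resource} already controlled")
    contract = {"controller": controller, "controlled": resource}
    contracts.append(contract)
    return contract
```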
System for request aggregation in cloud computing services
Cloud-based computing systems, although often claimed to offer virtually unlimited resources, can become oversubscribed due to the budget constraints of cloud users. The disclosed invention proposes a mechanism to identify various types of "mergeable" tasks. The system also determines when it is appropriate to aggregate tasks and how to allocate them so that the QoS of other tasks is not affected. Experimental results under real-world workload settings show that the disclosed system improves robustness in the face of oversubscription and reduces the overall time of using cloud services by more than 14%.
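One possible mergeability rule can be sketched as below: two tasks merge only if they request the same service type and the combined demand stays within a per-request cap, so other tasks' QoS is untouched. This rule, the cap, and the greedy grouping are assumptions for illustration, not the patented mechanism.

```python
# Hedged sketch: greedily aggregate same-type tasks while the merged
# resource demand stays within a cap that protects other tasks' QoS.

def merge_tasks(tasks, cap):
    """tasks: list of (service_type, demand) pairs.
    Returns the aggregated task list after greedy merging."""
    merged = []
    for svc, demand in tasks:
        for group in merged:
            if group[0] == svc and group[1] + demand <= cap:
                group[1] += demand  # fold into an existing request
                break
        else:
            merged.append([svc, demand])  # start a new request
    return [(svc, d) for svc, d in merged]
```

Fewer, larger requests mean fewer round trips to the cloud service, which is one plausible source of the reported time savings.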