H04L47/827

PACKET AGGREGATION AND DISAGGREGATION METHOD
20220060569 · 2022-02-24

The present disclosure provides packet aggregation and disaggregation methods. In the packet aggregation method, a protocol-independent packet processor (P4) switch stores plural message headers of plural packets in a ring buffer. When the message headers stored in the ring buffer reach a pre-defined amount of data, the P4 switch replaces the first flag header in the current packet with a second flag header so as to form a work packet. The egress pipeline of the P4 switch recirculates the work packet repeatedly; each time it receives the work packet, a message header is extracted from the plural message headers in the ring buffer and added to the work packet for packet aggregation.
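The recirculation loop described above can be sketched in plain Python. This is an illustrative model only, not P4 code: the buffer capacity, flag values, and packet fields (`RING_CAPACITY`, `FLAG_WORK`, etc.) are assumptions, and a single drain loop stands in for repeated egress recirculation.

```python
from collections import deque

RING_CAPACITY = 4            # pre-defined amount of headers that triggers aggregation
FLAG_NORMAL, FLAG_WORK = 0x01, 0x02

ring = deque()               # ring buffer of stored message headers

def ingress(packet):
    """Store the packet's message header; once the buffer is full,
    rewrite the flag header to turn the current packet into a work packet."""
    ring.append(packet["header"])
    if len(ring) >= RING_CAPACITY:
        packet["flag"] = FLAG_WORK
        return egress_recirculate(packet)
    return None              # packet absorbed, waiting for more headers

def egress_recirculate(work_packet):
    """Each pass pulls one header from the ring buffer and appends it to
    the work packet; the loop models repeated recirculation through egress."""
    aggregated = []
    while ring:
        aggregated.append(ring.popleft())
    work_packet["payload"] = aggregated
    return work_packet

for i in range(3):
    assert ingress({"header": f"h{i}", "flag": FLAG_NORMAL}) is None
out = ingress({"header": "h3", "flag": FLAG_NORMAL})
```

After the fourth header arrives, `out` is a work packet carrying all four buffered headers.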

EFFICIENT ERROR CORRECTION THAT AGGREGATES DIFFERENT MEDIA INTO ENCODED CONTAINER PACKETS

Large source data packets having large packet sizes, and small source data packets having packet sizes smaller than the large packet sizes, are received. The small source data packets and the large source data packets are sent to a receiving device without forward error correction (FEC). The small source data packets are aggregated into a container packet having a header configured to differentiate the container packet from the large source data packets and the small source data packets. The large source data packets and the container packet are encoded with forward error correction to produce FEC-encoded packets to enable forward error correction of the large source data packets and the container packet at the receiving device. The FEC-encoded packets are sent to the receiving device.
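A minimal sketch of the aggregation step, under stated assumptions: the size threshold, container framing (length-prefixed concatenation), and a single XOR parity symbol standing in for the real FEC code are all illustrative, not the patented encoder.

```python
# Packet-type values distinguish the container from source packets (assumed).
TYPE_SOURCE, TYPE_CONTAINER = 0, 1
SMALL_THRESHOLD = 8  # bytes; packets at or below this size count as "small" (assumed)

def aggregate(packets):
    large = [p for p in packets if len(p) > SMALL_THRESHOLD]
    small = [p for p in packets if len(p) <= SMALL_THRESHOLD]
    # length-prefixed concatenation lets the receiver split the container
    container = b"".join(len(p).to_bytes(2, "big") + p for p in small)
    return [(TYPE_SOURCE, p) for p in large] + [(TYPE_CONTAINER, container)]

def xor_parity(encoded):
    """One XOR repair symbol over zero-padded payloads (toy stand-in for FEC)."""
    width = max(len(p) for _, p in encoded)
    parity = bytes(width)
    for _, p in encoded:
        padded = p.ljust(width, b"\x00")
        parity = bytes(a ^ b for a, b in zip(parity, padded))
    return parity

pkts = [b"a" * 20, b"hi", b"ok", b"b" * 15]
enc = aggregate(pkts)          # two large source packets plus one container
repair = xor_parity(enc)       # lets the receiver rebuild any one lost packet
```

XORing the parity with the surviving packets recovers a single lost packet, which is the property the real FEC code provides with much better overhead.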

Methods and systems for portably deploying applications on one or more cloud systems

Methods and systems for provisioning services or resources on a cloud service for successful execution of an application include detecting a request for executing an application on a cloud service. In response to the request, a descriptor record for the application is retrieved from a descriptor file. The descriptor record is specific to the cloud service and provides details of the environmental resources or services required for executing the application. Resource and service requirements are translated into actions to be taken in the cloud service environment for provisioning the resources or services required by the application. The actions to be taken are brokered to occur in a pre-defined sequence based on details provided in the descriptor record for the application. Status of the actions taken is provided. The status is used to determine whether the required resources or services have been provisioned for successful execution of the application in the cloud service.
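The descriptor-driven flow can be sketched as follows. The descriptor field names (`resources`, `action`, `order`) and the cloud/application identifiers are hypothetical; a dictionary update stands in for the real provisioning calls.

```python
# Hypothetical descriptor file: one record per (application, cloud service) pair.
descriptor_file = {
    "analytics-app": {
        "aws": {  # the descriptor record is specific to one cloud service
            "resources": [
                {"action": "create_network", "order": 1},
                {"action": "provision_vm", "order": 2},
                {"action": "attach_storage", "order": 3},
            ]
        }
    }
}

def provision(app, cloud):
    """Broker the record's actions in their pre-defined sequence and
    report the status of each action taken."""
    record = descriptor_file[app][cloud]
    status = {}
    for step in sorted(record["resources"], key=lambda s: s["order"]):
        status[step["action"]] = "provisioned"   # stand-in for the real call
    return status

status = provision("analytics-app", "aws")
ready = all(v == "provisioned" for v in status.values())
```

The `ready` flag models the final check that every required resource was provisioned before the application executes.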

SYSTEM AND METHOD FOR OPTIMIZING DEPLOYMENT OF A PROCESSING FUNCTION IN A MEDIA PRODUCTION WORKFLOW
20220052922 · 2022-02-17

A system is provided for optimizing deployment of a processing function in a media production workflow. The system includes a media production workflow generator that builds the media production workflow including the processing function and determines deployment criteria, which include an input dataset for the processing function and an atomic compute function for executing the processing function. Moreover, a deployment topology generator generates topologies of the resources available in a cloud computing network based on the determined deployment criteria, with the generated topologies indicating different configurations of resources for executing the processing function and a processor for executing the atomic compute function of the processing function. Furthermore, a deployment optimizer selects an optimal topology for deploying the processing function within the cloud computing network, with the optimal topology selected to include the processor that optimizes accessibility of electronic memory for executing the atomic compute function.
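A toy version of the optimizer's selection step, assuming (purely for illustration) that memory accessibility is scored by latency and that deployment criteria impose a cost ceiling; the topology fields and figures are invented.

```python
# Candidate topologies the deployment topology generator might emit (assumed values).
candidate_topologies = [
    {"processor": "cpu-general", "memory_latency_ns": 95, "cost": 1.0},
    {"processor": "cpu-hbm",     "memory_latency_ns": 40, "cost": 2.5},
    {"processor": "gpu",         "memory_latency_ns": 60, "cost": 4.0},
]

def select_topology(topologies, max_cost):
    """Pick the feasible configuration whose processor has the most
    accessible electronic memory (here: lowest memory latency)."""
    feasible = [t for t in topologies if t["cost"] <= max_cost]
    return min(feasible, key=lambda t: t["memory_latency_ns"])

best = select_topology(candidate_topologies, max_cost=3.0)
```

Under these assumed numbers the optimizer rejects the GPU on cost and prefers the high-bandwidth-memory CPU over the general-purpose one.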

Anomaly Detection and Classification Using Telemetry Data

Historical telemetry data can be used to generate predictions for various classes of data at various aggregates of a system that implements an online service. An anomaly detection process can then be utilized to detect anomalies for a class of data at a selected aggregate. An example anomaly detection process includes receiving telemetry data originating from a plurality of client devices, selecting a class of data from the telemetry data, converting the class of data to a set of metrics, aggregating the set of metrics according to a component of interest to obtain values of aggregated metrics over time for the component of interest, determining a prediction error by comparing the values of the aggregated metrics to a prediction, detecting an anomaly based at least in part on the prediction error, and transmitting an alert message of the anomaly to a receiving entity.
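The aggregate-predict-compare steps above can be sketched minimally. The moving-average "prediction" and the fixed error threshold are assumptions standing in for whatever model the service actually trains; the telemetry shape is invented.

```python
from statistics import mean

def aggregate_metrics(telemetry, component):
    """Aggregate a metric over all client devices, per time tick,
    for one component of interest."""
    return [sum(d["errors"] for d in tick if d["component"] == component)
            for tick in telemetry]

def detect_anomaly(values, threshold=3.0):
    """Compare the latest aggregated value to a naive historical
    prediction; flag an anomaly when the prediction error is large."""
    history, current = values[:-1], values[-1]
    prediction = mean(history)
    prediction_error = abs(current - prediction)
    return prediction_error > threshold, prediction_error

# Three ticks of simulated telemetry from two components.
telemetry = [
    [{"component": "auth", "errors": 2},  {"component": "db", "errors": 1}],
    [{"component": "auth", "errors": 3},  {"component": "db", "errors": 1}],
    [{"component": "auth", "errors": 12}, {"component": "db", "errors": 2}],
]
values = aggregate_metrics(telemetry, "auth")
is_anomaly, err = detect_anomaly(values)   # spike in the last tick
```

In a real deployment the anomaly flag would trigger the alert message to the receiving entity rather than just a boolean.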

Flow state aware management of QoS through dynamic aggregate bandwidth adjustments

Conventional packet network nodes react to congestion in the packet network by dropping packets in a manner which is perceived by users to be indiscriminate. In embodiments of the invention, indiscriminate packet discards are prevented by causing packets to be discarded on lower-priority flows and flow aggregates. Further action is taken to reduce the likelihood of packet discards. When an aggregate set of flows raises a congestion alarm, action is taken to try to increase aggregate capacity by excising capacity from pre-assigned donor aggregates. A donor aggregate may be carrying flows classified, for example, as best effort. Another type of donor capacity is re-assignable unused capacity. Aggregates may have capacity added either up to a defined limit or, temporarily, beyond that limit provided free capacity is available, with the excess removable back to the defined limit when other aggregates are below their own defined limits and need increased capacity.
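The donor-capacity excision can be illustrated with a toy model. The aggregate fields, the greedy donor order, and the numbers are assumptions; a real node would also handle the temporary over-limit case described above.

```python
# Per-aggregate state: allocated capacity, defined limit, current usage,
# and whether the aggregate is a pre-assigned donor (values assumed).
aggregates = {
    "premium":     {"capacity": 100, "limit": 150, "used": 100, "donor": False},
    "best_effort": {"capacity": 80,  "limit": 80,  "used": 30,  "donor": True},
    "unused_pool": {"capacity": 50,  "limit": 50,  "used": 0,   "donor": True},
}

def on_congestion_alarm(name, needed):
    """Excise spare capacity from donor aggregates and add it to the
    congested aggregate, never exceeding its defined limit."""
    agg = aggregates[name]
    for donor in (a for a in aggregates.values() if a["donor"]):
        spare = donor["capacity"] - donor["used"]
        take = min(spare, needed, agg["limit"] - agg["capacity"])
        donor["capacity"] -= take
        agg["capacity"] += take
        needed -= take
        if needed <= 0:
            break
    return agg["capacity"]

new_cap = on_congestion_alarm("premium", needed=40)
```

Here the best-effort donor has 50 units of spare capacity, so the congested premium aggregate grows by the full 40 it asked for while staying under its limit.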

Systems, methods, and apparatuses for predicting availability of a resource

Techniques for predicting the availability of a resource are described. An exemplary method includes obtaining capacity data indicating an amount of capacity available in a cloud provider network to satisfy the request; generating, using a machine learning model that has been trained based at least in part on an output of an automated historical hindsight learner that is an integer linear program, an approval prediction, wherein the approval prediction indicates that the request is to be approved based on one or more launch parameters of the request and the capacity data; receiving information from a downstream component that controls the resource that the approval prediction is incorrect; and evaluating the incorrect approval prediction using a hindsight learner and predictor explainer.
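What the sketch below shows is only the feedback path: a stand-in threshold rule replaces the trained model and the integer-linear-program hindsight learner, and the mismatch log stands in for the evaluation by the hindsight learner and predictor explainer. All names are hypothetical.

```python
def predict_approval(launch_params, capacity_available):
    """Toy 'model': approve the request when the requested instance count
    fits within the cloud provider network's available capacity."""
    return launch_params["instances"] <= capacity_available

def record_feedback(prediction, actual_outcome, log):
    """The downstream component that controls the resource reports the
    true outcome; mismatches are queued for hindsight evaluation."""
    if prediction != actual_outcome:
        log.append({"prediction": prediction, "actual": actual_outcome})
    return log

mispredictions = []
pred = predict_approval({"instances": 8}, capacity_available=10)
record_feedback(pred, actual_outcome=False, log=mispredictions)
```

The queued mismatch is exactly the "incorrect approval prediction" that the hindsight learner and predictor explainer would later analyze.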

Acquiring resource lease using multiple lease servers

A lease on a resource is obtained in a circumstance in which multiple lease servers are capable of granting a lease to the resource. A computing entity attempts to obtain the lease on the resource by causing a lease request to be sent to each of at least most (and perhaps all) of the lease servers. In response, the computing entity receives one or more responses to the lease requests. If the computing entity receives grants of a lease from a majority of the lease servers that are capable of granting a lease to the resource, then it is determined that the computing entity acquired a lease on the resource. On the other hand, if the computing entity receives grants of a lease from less than a majority of the lease servers, it is determined that the computing entity failed to acquire the lease on the resource.
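The majority rule is straightforward to sketch. Each lease server is simulated here as a callable that grants or refuses; the real protocol's request transport and lease terms are omitted.

```python
def acquire_lease(resource, servers):
    """Send a lease request to every server capable of granting the lease;
    the lease is acquired only if a majority respond with a grant."""
    grants = sum(1 for server in servers if server(resource))
    return grants > len(servers) // 2

# Three simulated lease servers: two grant, one refuses -> majority, acquired.
acquired = acquire_lease("res-1", [lambda r: True, lambda r: True, lambda r: False])

# One grant out of four servers -> no majority, lease not acquired.
denied = acquire_lease("res-2", [lambda r: False] * 3 + [lambda r: True])
```

Requiring a strict majority is what makes the scheme safe: two computing entities can never both collect majority grants for the same resource at the same time.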

Caching of service decisions
11431639 · 2022-08-30

Some embodiments provide a method for processing a packet received by a managed forwarding element. The method performs a series of packet classification operations based on header values of the received packet. The packet classification operations determine a next destination of the received packet. When the series of packet classification operations specifies to send the packet to a network service that performs payload transformations on the packet, the method (1) assigns a service operation identifier to the packet that identifies the service operations for the network service to perform on the packet, (2) sends the packet to the network service with the service operation identifier, and (3) stores a cache entry for processing subsequent packets without the series of packet classification operations. The cache entry includes the assigned service operation identifier. The network service uses the assigned service operation identifier to process packets without performing its own classification operations.
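The fast-path/slow-path split can be sketched as a flow-keyed cache. The flow key, the identifier scheme (`"svc-op-7"`), and the one-rule classifier are illustrative assumptions, not the embodiment's actual encoding.

```python
decision_cache = {}

def classify(headers):
    """Stand-in for the full series of packet classification operations;
    returns the service operation identifier to attach to the packet."""
    return "svc-op-7" if headers["dst_port"] == 443 else None

def process_packet(headers):
    """Return (service_op_id, hit): cached decisions skip classification."""
    flow_key = (headers["src"], headers["dst"], headers["dst_port"])
    if flow_key in decision_cache:               # fast path: cache hit
        return decision_cache[flow_key], True
    service_op_id = classify(headers)            # slow path: full pipeline
    decision_cache[flow_key] = service_op_id     # cache entry stores the id
    return service_op_id, False

hdr = {"src": "10.0.0.1", "dst": "10.0.0.2", "dst_port": 443}
first_id, hit1 = process_packet(hdr)    # slow path, populates the cache
second_id, hit2 = process_packet(hdr)   # fast path, same identifier
```

Because the identifier travels with the packet, the network service can act on it directly instead of repeating the classification itself, which is the point of the scheme.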

Data center management system

Provided is a data center management system including a data center, a cloud platform, and an application platform. The application platform is configured to perform external network access through an application interface layer and to send a calculation request to the data center. The data center includes a storage resource pool configured to perform distributed storage of files, and a network resource pool configured to send a scheduling request to the cloud platform according to the calculation request so as to schedule the cloud platform. The cloud platform includes a calculation resource pool and a shared database. The calculation resource pool is configured to perform a distributed calculation between adjacent processing nodes according to a received scheduling request, and to call files in the storage resource pool, or a calculation resource and external shared data in the shared database. The shared database is configured to collect and store the calculation resource and the external shared data.
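The request flow between the three platforms can be modeled as a toy pipeline. Class and method names, the file contents, and the summation standing in for the distributed calculation are all assumptions for illustration.

```python
class CloudPlatform:
    """Holds the calculation resource pool and the shared database."""
    def __init__(self):
        self.shared_database = {"calc_resource": "compute-pool-1"}

    def schedule(self, request, storage_pool):
        # Distributed calculation between adjacent processing nodes,
        # simulated here as a sum over the requested files.
        files = [storage_pool[name] for name in request["files"]]
        return {"result": sum(files),
                "used": self.shared_database["calc_resource"]}

class DataCenter:
    """Storage resource pool plus a network resource pool that turns a
    calculation request into a scheduling request for the cloud platform."""
    def __init__(self, cloud):
        self.storage_resource_pool = {"a.dat": 2, "b.dat": 3}
        self.cloud = cloud

    def handle_calculation_request(self, request):
        return self.cloud.schedule(request, self.storage_resource_pool)

# Application platform side: a calculation request sent to the data center.
dc = DataCenter(CloudPlatform())
out = dc.handle_calculation_request({"files": ["a.dat", "b.dat"]})
```

The data center never computes anything itself here; it only brokers the scheduling request, matching the division of roles in the abstract.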