H04L41/147

EDGE PROCESSING FOR DATA TRANSMISSION
20180013635 · 2018-01-11

In some examples, a computing device may determine a prediction of a network outage of a network. The computing device may determine a priority of one or more data types expected to be received during the network outage. Further, the computing device may determine a latency category of the one or more data types expected to be received during the network outage. The computing device may store a data transmission rule for the one or more data types at least partially based on the priority and the latency category. The computing device may receive, from one or more data generators, during the network outage, data for transmission to the network. The computing device may transmit at least some of the received data to the network at least partially based on the data transmission rule.
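The rule-based buffering described above can be sketched in a few lines. This is an illustrative sketch only: the names (`DataRule`, `decide_rule`, `drain_buffer`), the priority/latency thresholds, and the three actions are assumptions, not the patent's scheme.

```python
# Hypothetical sketch: derive a per-data-type transmission rule from
# priority and latency category, then apply the rules to data buffered
# during an outage.
from dataclasses import dataclass

@dataclass
class DataRule:
    action: str  # "send_via_backup", "send_on_restore", or "discard"

def decide_rule(priority: int, latency_category: str) -> DataRule:
    # High-priority, latency-sensitive data goes over a backup link;
    # high-priority but latency-tolerant data waits for the primary
    # network to return; low-priority data may be discarded.
    if priority >= 2 and latency_category == "low_latency":
        return DataRule("send_via_backup")
    if priority >= 1:
        return DataRule("send_on_restore")
    return DataRule("discard")

def drain_buffer(buffered, rules):
    # buffered: list of (data_type, payload); rules: data_type -> DataRule
    to_transmit = []
    for data_type, payload in buffered:
        rule = rules.get(data_type, DataRule("discard"))
        if rule.action != "discard":
            to_transmit.append((rule.action, payload))
    return to_transmit

rules = {
    "alarm": decide_rule(2, "low_latency"),
    "telemetry": decide_rule(1, "latency_tolerant"),
    "debug_log": decide_rule(0, "latency_tolerant"),
}
out = drain_buffer(
    [("alarm", b"a"), ("telemetry", b"t"), ("debug_log", b"d")], rules
)
# out keeps the alarm and telemetry entries and drops the debug log
```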

SYSTEM AND METHOD FOR SCALING APPLICATION CONTAINERS IN CLOUD ENVIRONMENTS

A method includes polling, via a service specific manager operating on a software container in a cloud infrastructure, usage of different application resources and parameters for each service of a plurality of services provided in the cloud infrastructure to yield respective polled data for each service, collating, at the service specific manager, the respective polled data for each service to yield a collation, and based on the collation, deriving a respective weight for each service which a container manager can use to create multiple instances of a new service. The method further includes communicating the respective weight for each service to the container manager and determining, via the container manager, whether to scale up or scale down container services based on the respective weight for each service.
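The collate-then-weigh step could look roughly like the following. The equal CPU/memory weighting and the scale thresholds are illustrative assumptions, not values from the claim.

```python
# Illustrative sketch: collate polled CPU/memory usage per service into
# a single weight, then compare the weight to scaling thresholds.
def derive_weight(samples):
    # samples: list of dicts with fractional "cpu" and "mem" usage
    if not samples:
        return 0.0
    cpu = sum(s["cpu"] for s in samples) / len(samples)
    mem = sum(s["mem"] for s in samples) / len(samples)
    return 0.5 * cpu + 0.5 * mem  # equal weighting, for illustration

def scaling_decision(weight, scale_up_at=0.75, scale_down_at=0.25):
    if weight > scale_up_at:
        return "scale_up"
    if weight < scale_down_at:
        return "scale_down"
    return "hold"

polled = {
    "auth": [{"cpu": 0.9, "mem": 0.8}, {"cpu": 0.85, "mem": 0.9}],
    "billing": [{"cpu": 0.1, "mem": 0.2}],
}
decisions = {
    svc: scaling_decision(derive_weight(samples))
    for svc, samples in polled.items()
}
# decisions: {"auth": "scale_up", "billing": "scale_down"}
```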

Impact predictions based on incident-related data

The disclosure herein describes predicting the potential impact of issues reported in incident ticket data on infrastructure elements. A ticket manager component includes an impact model that uses machine learning to analyze real-time event and metric data together with incident-related data to generate predicted impact data. The predicted impact data identifies potentially impacted infrastructure elements, such as potentially impacted users, infrastructure components predicted to be impacted by the issue, and/or an updated time period associated with the issue. The ticket manager component creates labeled incident tickets by updating user-generated incident tickets with additional data generated by the impact model, including predicted impact data and/or additional details associated with the issue. The labeled incident tickets are provided back to the model as training data to further refine the predictions generated by the model.
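The labeling feedback loop can be sketched as follows, with a trivial stub standing in for the machine-learning impact model; `predict_impact`, `label_ticket`, and the event-rate threshold are illustrative placeholders.

```python
# Minimal sketch of the feedback loop: a (stub) impact model annotates
# user-generated tickets, and each labeled ticket becomes a new
# training example for refining the model.
def predict_impact(ticket, event_rates):
    # Placeholder model: flag components whose recent event rate
    # exceeds a threshold as potentially impacted by the issue.
    impacted = [c for c, rate in event_rates.items() if rate > 10]
    return {"impacted_components": impacted}

def label_ticket(ticket, event_rates, training_set):
    labeled = dict(ticket)
    labeled.update(predict_impact(ticket, event_rates))
    training_set.append(labeled)  # fed back as training data
    return labeled

training = []
t = label_ticket({"id": 1, "issue": "login slow"},
                 {"auth-db": 42, "cdn": 3}, training)
# t gains "impacted_components": ["auth-db"], and the labeled ticket
# now sits in the training set
```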

System and method for determining a network performance property in at least one network
11711310 · 2023-07-25

Systems and methods of determining a network performance property in at least one computer network, including: sampling traffic in active communication with the at least one computer network, analyzing the sampled traffic to group communication packets to flows, and predicting at least one network property of the at least one network based on the grouped communication packets and based on at least one traffic parameter in the at least one network, where the at least one traffic parameter is selected from the group consisting of: union of packet streams, intersection of packet streams, and differences of packet streams, and where the predicted at least one network property is selected from the group consisting of: total number of flows, number of flows with a predefined characteristic, number of packets, and volume of packets.
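Grouping sampled packets into flows and projecting a total flow count can be sketched as below. The 5-tuple key is standard practice, but the inverse-sampling-rate scale-up is a simplifying assumption, not the patent's estimator.

```python
# Sketch: group sampled packets into flows by their 5-tuple, then
# naively estimate the total number of flows from the sampling rate.
from collections import defaultdict

def group_flows(packets):
    # packets: list of (src, dst, sport, dport, proto, size)
    flows = defaultdict(list)
    for pkt in packets:
        flows[pkt[:5]].append(pkt[5])  # key on the 5-tuple
    return flows

def estimate_total_flows(sampled_flow_count, sampling_rate):
    return round(sampled_flow_count / sampling_rate)

pkts = [
    ("10.0.0.1", "10.0.0.2", 1234, 80, "tcp", 100),
    ("10.0.0.1", "10.0.0.2", 1234, 80, "tcp", 200),
    ("10.0.0.3", "10.0.0.2", 5678, 443, "tcp", 300),
]
flows = group_flows(pkts)
# 2 distinct flows observed; at a 1-in-10 sampling rate the naive
# estimate is about 20 flows in total
est = estimate_total_flows(len(flows), 0.1)
```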

Device pairing by cognitive computing

A computer-implemented method and a computer program product for device pairing by cognitive computing. A cognitive computing system creates a knowledge corpus about a user's historical device-pairing activities. The cognitive computing system predicts the user's device-pairing needs based on analysis of the historical activities. The cognitive computing system identifies devices in the surrounding area that can be paired. An augmented reality device tracks the user's eye direction toward a device, extrapolates the eye direction to create an eye focus direction of the user, obtains from the cognitive computing system an eye focus line with an arrow pointing to the device, and creates an augmented reality overlay which shows the eye focus line. Upon user approval, the augmented reality device pairs the device with a device the user is currently using.
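The eye-focus-line selection step reduces to a ray-to-point geometry problem, sketched below in 2D; the functions and room coordinates are purely illustrative stand-ins for the AR device's gaze extrapolation.

```python
# Hypothetical geometry sketch: extrapolate the tracked eye direction
# into a ray and select the pairable device closest to that ray.
import math

def distance_to_ray(origin, direction, point):
    # Perpendicular distance from point to the ray origin + t*direction
    ox, oy = origin
    dx, dy = direction
    px, py = point
    norm = math.hypot(dx, dy)
    dx, dy = dx / norm, dy / norm
    t = max(0.0, (px - ox) * dx + (py - oy) * dy)  # clamp behind-origin
    cx, cy = ox + t * dx, oy + t * dy
    return math.hypot(px - cx, py - cy)

def select_device(origin, gaze, devices):
    # devices: name -> (x, y) position in the room
    return min(devices, key=lambda d: distance_to_ray(origin, gaze, devices[d]))

devices = {"tv": (5.0, 0.2), "speaker": (0.0, 5.0)}
chosen = select_device((0.0, 0.0), (1.0, 0.0), devices)
# gaze points along +x, so the TV (almost on that axis) is selected
```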

Predictive routing using machine learning in SD-WANs

In one embodiment, a supervisory service for a software-defined wide area network (SD-WAN) obtains telemetry data from one or more edge devices in the SD-WAN. The service trains, using the telemetry data as training data, a machine learning-based model to predict tunnel failures in the SD-WAN. The service receives feedback from the one or more edge devices regarding failure predictions made by the trained machine learning-based model. The service retrains the machine learning-based model, based on the received feedback.
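The train/predict/feedback/retrain cycle can be sketched conceptually. Here a trivial packet-loss threshold stands in for the machine-learning model; the functions and the telemetry fields are assumptions for illustration only.

```python
# Conceptual sketch of the predict/feedback/retrain cycle, with a
# loss-threshold "model" standing in for the ML model.
def train(telemetry):
    # Fit a loss threshold at the mean loss of samples that preceded a
    # tunnel failure (a deliberately simple stand-in for training).
    failure_losses = [t["loss"] for t in telemetry if t["failed"]]
    return sum(failure_losses) / len(failure_losses)

def predict_failure(threshold, sample):
    return sample["loss"] >= threshold

def retrain(telemetry, feedback):
    # Fold edge-device feedback (observed outcomes) back into training.
    return train(telemetry + feedback)

history = [{"loss": 0.02, "failed": False}, {"loss": 0.30, "failed": True}]
model = train(history)                         # threshold 0.30
pred = predict_failure(model, {"loss": 0.25})  # False: below threshold
feedback = [{"loss": 0.25, "failed": True}]    # edge says it did fail
model = retrain(history, feedback)             # threshold drops to 0.275
```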
Load balancing during increased data traffic latency

A system includes at least one server that is configured to provide a multi-client network service to a plurality of existing users. When the server receives requests to join the multi-client network service from new users, the server may issue timestamps to each new user, obtain load metrics based on the requests or timestamps, and collect the load metrics to obtain historical data characterizing demand in the multi-client network service over time. Further, based on the historical data, the server can predict a future load demand in the multi-client network service and, based on the future load demand, selectively enable at least one of the plurality of new users to join the multi-client network service.
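The admit-or-defer decision can be sketched as below: bucket historical join timestamps per interval, use the average as a naive forecast, and admit a new user only if current load plus forecast stays under capacity. The mean-per-interval forecast and the capacity check are illustrative assumptions.

```python
# Sketch: forecast near-term join demand from historical timestamps,
# then gate new joins against remaining capacity.
from collections import Counter

def forecast_next_interval(join_timestamps, interval=60):
    buckets = Counter(ts // interval for ts in join_timestamps)
    if not buckets:
        return 0.0
    return sum(buckets.values()) / len(buckets)  # mean joins/interval

def admit(current_load, forecast, capacity):
    return current_load + forecast <= capacity

joins = [0, 10, 50, 65, 70, 130]   # seconds; buckets of 3, 2, 1 joins
f = forecast_next_interval(joins)  # 2.0 expected joins next minute
ok = admit(current_load=97, forecast=f, capacity=100)    # admitted
full = admit(current_load=99, forecast=f, capacity=100)  # deferred
```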
Unified recommendation engine
11711287 · 2023-07-25

A system receives, from one or more subsystems, one or more predicted outcomes associated with a device. The system provides at least a subset of the predicted outcomes as input to a machine learning model trained to identify a set of resolution actions. The system receives, from the machine learning model, the set of resolution actions for the subset of the predicted outcomes, wherein each resolution action in the set of resolution actions is associated with a probability of resolving at least one of the predicted outcomes in the subset of predicted outcomes. The system identifies a first resolution action from the set of resolution actions, wherein the first resolution action has the highest probability of resolving the at least one of the predicted outcomes in the subset of predicted outcomes. The system provides a first instruction to execute the first resolution action.
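The final selection step is a straightforward argmax over action probabilities, sketched below; the data structures and field names are illustrative, not the patent's.

```python
# Sketch: pick the resolution action with the highest probability of
# resolving the predicted outcomes, then emit an execute instruction.
def select_resolution(actions):
    # actions: list of {"action": str, "probability": float}
    return max(actions, key=lambda a: a["probability"])

def build_instruction(action):
    return {"execute": action["action"]}

predicted = [
    {"action": "restart_service", "probability": 0.62},
    {"action": "rotate_credentials", "probability": 0.18},
    {"action": "reallocate_memory", "probability": 0.81},
]
best = select_resolution(predicted)
instruction = build_instruction(best)
# instruction == {"execute": "reallocate_memory"}
```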