H04L45/08

SELECTING PATHS FOR HIGH PREDICTABILITY USING CLUSTERING

In one embodiment, a device forms a plurality of clusters of network paths used to convey traffic for an online application by applying clustering to telemetry data for those network paths. The device determines a predictability metric for a particular cluster in the plurality of clusters. The device provides an indication of the predictability metric for the particular cluster for display. The device enables, based in part on the predictability metric, predictive routing for the network paths in the particular cluster.
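The clustering and scoring steps above can be sketched in a few lines. This is a minimal illustration, not the patent's method: the bucketing-by-mean-latency "clustering", the variance-based predictability score, and the 0.5 threshold are all assumptions chosen for clarity.

```python
import statistics

def cluster_paths(telemetry, bucket_ms=20):
    """Group paths into clusters by quantizing their mean latency.
    telemetry: dict of path name -> list of latency samples (ms).
    A stand-in for the patent's unspecified clustering step."""
    clusters = {}
    for path, samples in telemetry.items():
        key = int(statistics.mean(samples) // bucket_ms)
        clusters.setdefault(key, []).append(path)
    return clusters

def predictability(telemetry, paths):
    """Score a cluster in (0, 1]: low latency variance across its
    member paths yields a high predictability score."""
    samples = [s for p in paths for s in telemetry[p]]
    return 1.0 / (1.0 + statistics.pvariance(samples))

telemetry = {
    "path-a": [10, 11, 10, 12],
    "path-b": [11, 10, 12, 11],
    "path-c": [80, 120, 60, 140],   # erratic path
}
clusters = cluster_paths(telemetry)
scores = {k: predictability(telemetry, v) for k, v in clusters.items()}
# Enable predictive routing only for sufficiently predictable clusters.
eligible = [v for k, v in clusters.items() if scores[k] > 0.5]
```

With this data the two stable paths land in one cluster that passes the threshold, while the erratic path forms a cluster scored too low for predictive routing.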

ROUTING ONLINE APPLICATION TRAFFIC BASED ON PATH FATE SHARING METRICS

In one embodiment, a device identifies a plurality of paths between a pair of network addresses, wherein one of the pair of network addresses is associated with an online application. The device obtains telemetry data from the plurality of paths for the online application. The device computes, based on the telemetry data, fate sharing metrics for the plurality of paths. The device controls routing of application traffic between the pair of network addresses, based on the fate sharing metrics for the plurality of paths.
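One plausible fate-sharing metric is the correlation of loss (or latency) time series between paths: paths that fail together score near 1. The sketch below uses a hand-rolled Pearson correlation and a "pick the least fate-coupled backup" policy; both are illustrative assumptions, not the patent's definition.

```python
import statistics

def fate_sharing(a, b):
    """Pearson correlation of two paths' loss time series.
    Values near 1 suggest the paths share fate (fail together)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    norm = (sum((x - ma) ** 2 for x in a) *
            sum((y - mb) ** 2 for y in b)) ** 0.5
    return cov / norm

loss = {
    "p1": [0, 5, 0, 9, 0, 4],
    "p2": [0, 6, 0, 8, 0, 5],   # rises and falls with p1
    "p3": [3, 0, 4, 0, 5, 0],   # independent of p1
}
primary = "p1"
# Route application traffic so the backup is least fate-coupled
# to the primary path.
backup = min((p for p in loss if p != primary),
             key=lambda p: fate_sharing(loss[primary], loss[p]))
```

Here `p2` is a poor backup for `p1` (their losses are strongly correlated), so `p3` is selected.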

COMMUNICATION ANALYSIS FOR DYNAMIC AUTO-ROUTING AND LOAD BALANCING
20220353173 · 2022-11-03

A network analysis device is configured to obtain metric information that is associated with a plurality of messages and to input the metric information into a first machine learning model that outputs a traffic volume classification based on the metric information. The network analysis device is further configured to obtain bandwidth information that is associated with a plurality of network devices and to input the bandwidth information and the traffic volume classification into a second machine learning model that outputs routing recommendations based on the bandwidth information and the traffic volume classification. The network analysis device is further configured to generate routing instructions based on the routing recommendations and to reconfigure a routing device based on the routing instructions.
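The two-stage pipeline can be sketched with the models stubbed out as plain functions. The byte threshold, the "high"/"low" classes, and the pick-the-most-headroom recommendation are hypothetical stand-ins for the two trained models the abstract describes.

```python
def classify_traffic(metrics):
    """First-stage model stand-in: bucket message metrics into a
    traffic-volume class (a real system would use a trained model)."""
    total = sum(m["bytes"] for m in metrics)
    return "high" if total > 10_000 else "low"

def recommend_route(bandwidth, volume_class):
    """Second-stage model stand-in: under high volume, recommend the
    device with the most available bandwidth; otherwise keep the
    default route."""
    if volume_class == "high":
        return max(bandwidth, key=bandwidth.get)
    return "default"

metrics = [{"bytes": 8_000}, {"bytes": 6_000}]
bandwidth = {"r1": 40, "r2": 90}          # Mbps headroom per device
# Output of stage 1 feeds stage 2; the result would drive the
# routing instructions that reconfigure a routing device.
instruction = recommend_route(bandwidth, classify_traffic(metrics))
```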

Offline optimization for traffic engineering with segment routing

Various exemplary embodiments relate to a method of offline, traffic-matrix-aware segment routing. The method may include receiving a traffic matrix based upon all the traffic between nodes i and j that is routed in the network; and determining the amount of traffic between nodes i and j that will be routed through node k, based on minimizing a maximum link utilization for the traffic matrix by determining that the total amount of flow on each link e in the network is less than the link's capacity.
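The optimization described above is commonly written as the following 2-segment routing linear program (a standard textbook formulation; the symbols are ours, not the patent's):

```latex
\begin{aligned}
\min_{\theta,\,x} \quad & \theta \\
\text{s.t.} \quad & \sum_{k} x_{ij}^{k} = 1 && \forall\, i,j \\
& \sum_{i,j}\sum_{k} d_{ij}\, x_{ij}^{k}\, g_{ij}^{k}(e) \;\le\; \theta\, c(e) && \forall\, e \\
& x_{ij}^{k} \ge 0
\end{aligned}
```

where \(d_{ij}\) is the traffic-matrix demand from node \(i\) to node \(j\), \(x_{ij}^{k}\) is the fraction of that demand routed through intermediate node \(k\), \(g_{ij}^{k}(e)\) is the fraction of the shortest-path route \(i \to k \to j\) that traverses link \(e\), and \(c(e)\) is the capacity of link \(e\). Minimizing \(\theta\) minimizes the maximum link utilization; \(\theta \le 1\) certifies that the total flow on every link stays below its capacity, matching the constraint the abstract states.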

Topological Learning Method and Apparatus for OPENFLOW Network Cross Conventional IP Network
20170302562 · 2017-10-19 ·

A topological learning method and apparatus for an OPENFLOW network across a conventional Internet Protocol (IP) network. The method includes obtaining, by a controller, M OPENFLOW switch (OFS) ports connected to a same conventional IP network; determining whether there is a logical switch corresponding to the conventional IP network; if the controller determines that there is no such logical switch, creating and storing the information about the logical switch, where the information about the logical switch includes related information of the M OFS ports, and the related information of each OFS port includes link information in a direction from the port to the logical switch and/or link information in a direction from the logical switch to the port; and managing, by the controller, the logical switch as a common OPENFLOW switch of an OPENFLOW network.

BLOCKING UNDESIRABLE COMMUNICATIONS IN VOICE OVER INTERNET PROTOCOL SYSTEMS
20170303126 · 2017-10-19

Blocking of undesirable voice over internet protocol (VOIP) communications is disclosed. A communication screening service initiates operations to block a threat posed by a VOIP communication upon receiving the communication from a gateway server. The communication may include an audio/video conversation and/or an audio/video conference. Next, the metadata and content of the communication are analyzed to detect a threat, such as a scamming scheme and/or a phishing scheme, from a sender of the communication. A rejection of the communication is generated to disrupt the threat associated with the communication. The rejection is transmitted to the gateway server to prompt the gateway server to block the communication.
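The analyze-then-reject flow can be sketched as follows. The marker list, the call fields, and the rejection format are all hypothetical; a real screening service would apply far richer content and metadata analysis than substring matching.

```python
SCAM_MARKERS = ("wire transfer", "gift card", "irs")  # illustrative only

def screen_call(call):
    """Flag a call when its metadata (caller ID) or content
    (transcript) matches known scam/phishing markers, and emit a
    rejection for the gateway server to act on."""
    text = (call.get("transcript", "") + " " +
            call.get("caller_id", "")).lower()
    if any(marker in text for marker in SCAM_MARKERS):
        # Rejection sent back to the gateway server to block the call.
        return {"action": "reject", "reason": "suspected scam"}
    return {"action": "allow"}

verdict = screen_call({"caller_id": "+1-555-0100",
                       "transcript": "pay your IRS debt with a gift card"})
```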

SYSTEM AND METHOD TO REDUCE BANDWIDTH REQUIREMENT FOR VISIBILITY EVENT PACKET STREAMING USING A PREDICTED MAXIMAL VIEW FRUSTUM AND PREDICTED MAXIMAL VIEWPOINT EXTENT, EACH COMPUTED AT RUNTIME
20170295222 · 2017-10-12

There is provided a method of predictive prefetching and transmitting, from a server to a client device, at least one partial visibility event packet and/or deferred visibility event packet including renderable graphics information occluded from a first viewcell and not occluded from a second viewcell, including otherwise renderable graphics information in a client view frustum not previously transmitted to the client device. The method includes determining an estimated maximal client view frustum; calculating a subset comprising renderable graphics information that is included in the estimated maximal client view frustum; determining whether the calculated subset has previously been transmitted to the client device by comparing the calculated subset to the stored renderable graphics information previously transmitted; and transmitting the at least one partial visibility event packet and/or deferred visibility event packet to the client device if said packet has not been previously transmitted to the client device.
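The bandwidth-saving core of the method, computing the subset inside the estimated maximal frustum and sending only what has not already been transmitted, can be illustrated in miniature. The scene is reduced to 1-D positions and the frustum to an interval; real visibility event packets carry mesh geometry, not IDs.

```python
def prefetch_packets(scene, max_frustum, already_sent):
    """Keep only geometry inside the estimated maximal client view
    frustum, then transmit only the pieces not previously sent,
    updating the server's record of transmitted information."""
    lo, hi = max_frustum
    subset = {gid for gid, pos in scene.items() if lo <= pos <= hi}
    to_send = subset - already_sent   # skip previously transmitted data
    already_sent |= to_send           # remember what was just sent
    return to_send

scene = {"mesh-a": 2.0, "mesh-b": 9.0, "mesh-c": 5.0}
sent = {"mesh-a"}                     # already on the client
packets = prefetch_packets(scene, (0.0, 6.0), sent)
```

Only `mesh-c` is transmitted: `mesh-b` falls outside the estimated maximal frustum and `mesh-a` was sent earlier, which is exactly the bandwidth reduction the title claims.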

Data plane for learning flows, collecting metadata regarding learned flows and exporting metadata regarding learned flows

Some embodiments provide a data-plane forwarding circuit that can be configured to learn about a new message flow and to maintain metadata about the new message flow without a control plane first configuring the data plane to maintain metadata about the flow. To perform its forwarding operations, the data plane includes several data message processing stages that are configured to process the data tuples associated with the data messages received by the data plane. In some embodiments, parts of the data plane message-processing stages are also configured to operate as a flow-tracking circuit that includes (1) a flow-identifying circuit to identify message flows received by the data plane, and (2) a first set of storages to store metadata about the identified flows.
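The two pieces of the flow-tracking circuit, flow identification and metadata storage, map naturally onto a small sketch. The 5-tuple key and the packet/byte counters are conventional choices, not details taken from the patent, and the hardware pipeline is modeled as an ordinary function.

```python
from collections import defaultdict

def flow_key(pkt):
    """Flow-identifying step: the classic 5-tuple."""
    return (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"],
            pkt["proto"])

# First set of storages: metadata per learned flow. defaultdict means a
# new flow is learned on first sight, with no control-plane setup.
flows = defaultdict(lambda: {"pkts": 0, "bytes": 0})

def process(pkt):
    """Per-packet processing stage: learn the flow if new and update
    its metadata."""
    md = flows[flow_key(pkt)]
    md["pkts"] += 1
    md["bytes"] += pkt["len"]

for pkt in [{"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234,
             "dport": 80, "proto": 6, "len": 500}] * 3:
    process(pkt)
```

The collected metadata could then be exported, as the title describes, without the control plane ever having installed a per-flow entry.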

Overlay network routing using a programmable switch
11258635 · 2022-02-22

The techniques discussed herein include storing a fast-path and a slow-path table in a memory associated with a programmable switch, such as a cache of the programmable switch. An offload controller may control the contents of the fast-path and/or slow-path table and may thereby control behavior of the programmable switch. The programmable switch may route a received packet to a gateway if the packet generates a hit in the slow-path table. If the received packet generates a hit in the fast-path table, the packet may be forwarded directly to a virtual private cloud (VPC), virtual switch thereof, and/or to a virtual machine (VM).
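The lookup behavior can be sketched as below. The table layout, key format, and fast-before-slow lookup order are assumptions (the abstract does not fix an order); in hardware these would be match-action table lookups populated by the offload controller, not Python dicts.

```python
def route(pkt_key, fast_path, slow_path):
    """Fast-path hit: forward directly to the VPC target (virtual
    switch or VM). Slow-path hit: punt to the gateway. Otherwise the
    packet is a miss for the offload controller to handle."""
    if pkt_key in fast_path:
        return ("forward", fast_path[pkt_key])
    if pkt_key in slow_path:
        return ("gateway", slow_path[pkt_key])
    return ("miss", None)

# Tables as installed by a hypothetical offload controller.
fast = {("10.0.0.5", 443): "vm-17"}
slow = {("10.0.0.9", 22): "gw-1"}
decision = route(("10.0.0.5", 443), fast, slow)
```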

Collaborative AI on transactional data with privacy guarantees

A data intersection is assessed of the data to be used between at least two parties. The data is to be used in an artificial intelligence (AI) application. An evaluation is performed of the set of instructions required for the AI application, where the evaluation creates a modified set of instructions in which operands are symbolically associated with corresponding privacy levels. Using the assessed data intersection and the modified set of instructions, a mapping is created from the data to operands with associated privacy metrics. The mapping treats overlapping data from the assessed data intersection differently from data that is not overlapping, improving privacy relative to operating without the mapping. The AI application is executed using the data to produce at least one parameter of the AI application. The at least one parameter is output for use for a trained version of the AI application. Apparatus, methods, and computer program products are described.
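The intersection-aware mapping step can be illustrated in miniature. The two-level privacy scheme, the set-intersection test, and the (operation, operand) instruction format are all illustrative assumptions; the abstract does not specify how privacy levels are assigned.

```python
def build_mapping(party_a, party_b, instructions):
    """Assess the data intersection of the two parties, then map each
    instruction operand to a privacy level: data held by both parties
    (the overlap) is tagged stricter than party-local data."""
    shared = party_a & party_b            # assessed data intersection
    def level(operand):
        return "high" if operand in shared else "low"
    return {op: level(operand) for op, operand in instructions}

mapping = build_mapping(
    {"alice", "bob"},                     # party A's records
    {"bob", "carol"},                     # party B's records
    [("sum", "bob"), ("count", "alice")]  # modified instruction set
)
```

Here the operand `bob` sits in the intersection and is handled at the stricter level, while `alice` is party-local, mirroring the abstract's different treatment of overlapping and non-overlapping data.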