Patent classifications
H04L41/142
Increasing QoS throughput and efficiency through lazy byte batching
Described embodiments improve the performance of a computer network by selectively forwarding packets to bypass quality of service (QoS) processing, avoiding processing delays during critical periods of high demand; throughput and efficiency are increased by sacrificing a small amount of QoS accuracy. QoS processing may be applied to only a subset of packets of a flow or connection, referred to herein as “lazy” processing or lazy byte batching. Packets that bypass QoS processing may be immediately forwarded with the same QoS settings as packets of the flow for which QoS processing is applied, yielding substantial overhead savings with only a minimal decline in accuracy.
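The abstract above can be sketched in code. The following is a minimal, illustrative reading of lazy byte batching, assuming a sampling scheme in which one of every N packets of a flow undergoes full QoS classification and the rest reuse the cached result; all names and the port-based classifier are hypothetical, not from the patent.

```python
def classify_qos(packet):
    """Stand-in for expensive QoS processing (the rule here is illustrative)."""
    return "high" if packet.get("port") == 443 else "best-effort"

class LazyQosForwarder:
    """Applies full QoS processing to 1 of every N packets per flow;
    bypassed packets are forwarded with the flow's cached QoS settings."""

    def __init__(self, sample_interval=8):
        self.sample_interval = sample_interval
        self.cache = {}    # flow id -> last computed QoS class
        self.counts = {}   # flow id -> packets seen so far

    def forward(self, flow_id, packet):
        n = self.counts.get(flow_id, 0)
        self.counts[flow_id] = n + 1
        if flow_id not in self.cache or n % self.sample_interval == 0:
            self.cache[flow_id] = classify_qos(packet)  # full QoS processing
        return self.cache[flow_id]  # bypassed packets reuse the flow's class
```

With `sample_interval=8`, seven of every eight packets skip classification entirely, which is where the claimed overhead saving comes from; the accuracy cost is that a mid-flow class change is noticed only at the next sampled packet.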
Feature-agnostic behavior profile based anomaly detection
Techniques for user behavior anomaly detection. At least one low-variance characteristic is compared to an expected result for the corresponding low-variance characteristic to determine whether it is within a pre-selected range of the expected result. A security response action is taken in response to the low-variance characteristic not being within that range. At least one high-variance characteristic is likewise compared to an expected result for the corresponding high-variance characteristic to determine whether it is within a pre-selected range of the expected result. A security response action is taken in response to the high-variance characteristic not being within that range. Access is provided if the low-variance and high-variance characteristics are both within their respective expected ranges.
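The two-tier check described above can be sketched as follows; the profile structure, characteristic names, and tolerance values are illustrative assumptions, not taken from the patent.

```python
def check_behavior(observed, profile):
    """Compare observed characteristics to an expected behavior profile.

    profile maps "low_variance" and "high_variance" to dicts of
    name -> (expected_value, tolerance); both tiers must pass for access.
    """
    for tier in ("low_variance", "high_variance"):
        for name, (expected, tol) in profile[tier].items():
            if abs(observed[name] - expected) > tol:
                return "security_response"  # characteristic out of range
    return "grant_access"  # all characteristics within expected ranges
```

A low-variance characteristic (e.g. typical login hour) would carry a tight tolerance, while a high-variance one (e.g. bytes transferred) would carry a wide one.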
Providing dynamic serviceability for software-defined data centers
Examples described herein include systems and methods for providing dynamic serviceability for a software-defined data center (“SDDC”). An example method can include collecting data-center metrics from a management service that monitors the SDDC, filtering the collected metrics based on a predetermined list of metrics provided by a partner entity, and translating the filtered metrics into a partner-specific format requested by the partner entity. The example method can also include generating metadata associated with the translated data-center information and transmitting the metadata and translated data-center information to a partner site associated with the partner entity. If the partner site is not available, the method can include transmitting the information to a partner-accessible storage location and, when the partner site becomes available, identifying the storage location and the failed delivery attempt.
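The filter–translate–deliver pipeline described above might look like the following sketch; the function names, JSON format choice, and fallback signature are assumptions for illustration.

```python
import json

def export_metrics(metrics, allowed, fmt="json"):
    """Filter collected metrics to a partner-provided list and translate
    them into the partner-requested format, returning (metadata, payload)."""
    filtered = {k: v for k, v in metrics.items() if k in allowed}
    payload = json.dumps(filtered, sort_keys=True) if fmt == "json" else str(filtered)
    metadata = {"count": len(filtered), "format": fmt}
    return metadata, payload

def deliver(metadata, payload, partner_up, send, store):
    """Send to the partner site if available; otherwise fall back to a
    partner-accessible storage location to be picked up later."""
    if partner_up:
        send(metadata, payload)
        return "delivered"
    store(metadata, payload)
    return "stored"
```

`send` and `store` are injected callables so the same pipeline works against a live partner endpoint or the fallback storage location.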
Network flow measurement method, network measurement device, and control plane device
A network flow measurement method is applicable to a system including a network measurement device and a control plane device. The network flow measurement method includes measuring, by the network measurement device, first data, where the first data includes a first-type data structure, the first-type data structure includes first measurement information of a flow, and the first measurement information corresponds to a bit of a keyword of the flow. The method further includes sending, by the network measurement device, the first data to the control plane device.
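One plausible reading of "measurement information corresponding to a bit of a keyword of the flow" is a bit-indexed counter structure, with one counter per bit position of the flow keyword; the sketch below is an illustrative assumption, not the patent's actual data structure.

```python
def record_flow(counters, flow_key, key_bits=16):
    """Bit-indexed measurement sketch: for each packet of a flow, increment
    the counter for every bit position that is set in the flow keyword.
    The control plane can later analyze the per-bit counters."""
    for i in range(key_bits):
        if (flow_key >> i) & 1:
            counters[i] += 1
    return counters
```

The per-bit counters (the "first data") are what the measurement device would periodically send up to the control plane device.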
Determining delay based on a measurement code block
This application provides an example delay measurement method and an example network device. The method includes receiving, by a first network device, a first service flow. The method also includes determining, by the first network device, a first delay value based on a first measurement code block in the first service flow. The first delay value is a time difference between a first moment at which the first measurement code block is detected in the first network device and a second moment at which the first measurement code block is detected in the first network device.
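Computing a delay value as the difference between two moments at which the same measurement code block is detected can be sketched as below; the class name and the use of a block identifier to pair up the two detections are illustrative assumptions.

```python
class DelayMeter:
    """Records the first moment a measurement code block is detected and,
    on the second detection of the same block, returns the time difference
    (the delay value)."""

    def __init__(self):
        self.first_seen = {}  # block id -> first detection moment

    def observe(self, block_id, timestamp):
        if block_id not in self.first_seen:
            self.first_seen[block_id] = timestamp  # first moment
            return None
        # second moment: delay value is the difference between the two moments
        return timestamp - self.first_seen.pop(block_id)
```

Timestamps here are abstract numbers; a real device would use a hardware clock at each detection point.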
Methods and apparatus for supporting dynamic network scaling based on learned patterns and sensed data
Methods and apparatus for predicting communications resources which will be needed at a venue and then dynamically controlling the amount of available resources are described. In various embodiments, real time or near real time video of areas of the venue is used to predict the number of people in a portion of a venue and/or the direction of movement. Along with other information, such as the type of event and/or event schedule, the collected information is supplied to a set of trained resource requirement models which are used to predict future resource needs at a venue, e.g., while an event is ongoing. Commands are sent to dynamically vary the amount of communications resources provided to one or more portions of the venue. Resources which can be varied include but are not limited to fixed wired WAN bandwidth, WiFi bandwidth, cellular bandwidth, network based on-demand services, transcoding services, firewall services, etc.
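The predict-then-scale loop described above might be sketched as follows; the model weights, event-type multipliers, and headroom factor are toy values for illustration, standing in for the trained resource requirement models the abstract describes.

```python
def predict_bandwidth(people_count, event_type, base_mbps=100):
    """Toy stand-in for a trained resource requirement model: predicts
    bandwidth demand from an estimated crowd size and event type."""
    multiplier = {"concert": 2.0, "sports": 1.5}.get(event_type, 1.0)
    return base_mbps + people_count * 0.5 * multiplier

def scaling_command(current_mbps, predicted_mbps, headroom=1.2):
    """Emit a command to dynamically vary provisioned resources, keeping
    some headroom above the predicted demand."""
    target = predicted_mbps * headroom
    if target > current_mbps:
        return ("scale_up", target)
    return ("hold", current_mbps)
```

In the described system, `people_count` would come from video analytics of venue areas, and the resulting commands would adjust WAN, WiFi, or cellular capacity for the relevant portion of the venue.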
Device, method and program for generating network topologies
It is an objective of the present disclosure to reduce computational complexity for obtaining a network topology configuration capable of efficiently accommodating a plurality of traffic demands assumed between nodes present in a network.
A device of the present disclosure includes: a computation traffic generation unit that creates a traffic demand matrix whose elements are values obtained, individually for each demand between nodes present in a target network area, from a plurality of traffic demand matrices created based on predicted traffic demands between the nodes; and a network topology generation unit that generates a network topology based on the traffic demand matrix generated by the computation traffic generation unit and port information of the nodes in the network area.
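The two units above can be sketched as follows. Taking the element-wise maximum over the predicted matrices is one plausible reading of obtaining each element individually from a plurality of matrices, and the greedy link placement under port constraints is an illustrative assumption, not the disclosure's actual algorithm.

```python
def combine_demands(matrices):
    """Computation traffic generation unit sketch: build one demand matrix
    by taking, per node pair, the maximum over the predicted matrices."""
    n = len(matrices[0])
    return [[max(m[i][j] for m in matrices) for j in range(n)]
            for i in range(n)]

def generate_topology(demand, ports):
    """Network topology generation unit sketch: greedily add direct links
    for the largest demands while both endpoints still have free ports."""
    n = len(demand)
    free = list(ports)  # remaining ports per node
    pairs = sorted(((demand[i][j], i, j)
                    for i in range(n) for j in range(i + 1, n)), reverse=True)
    links = []
    for d, i, j in pairs:
        if d > 0 and free[i] > 0 and free[j] > 0:
            links.append((i, j))
            free[i] -= 1
            free[j] -= 1
    return links
```

The point of pre-combining the matrices is the complexity reduction the objective mentions: topology generation runs once on a single matrix instead of once per predicted demand matrix.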
Methods and Apparatus Relating to Machine-Learning in a Communications Network
Aspects of the disclosure provide a method performed by a central Network Data Analytics Function (NWDAF) in a communications network. The communications network comprises one or more local NWDAFs configured to develop a model using federated learning, in which each local NWDAF stores a copy of the model and trains the copy of the model by inputting training data into a machine-learning process. The method comprises receiving, from the one or more local NWDAFs, a respective local model update comprising an update to values of one or more parameters of the model generated by training a respective copy of the model using machine-learning. The method further comprises combining the local model updates received from the one or more local NWDAFs to obtain a combined model update.
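Combining the local model updates at the central NWDAF can be sketched as a FedAvg-style weighted mean; the flat parameter-vector representation and the weighting scheme (e.g. by local sample count) are assumptions for illustration.

```python
def combine_updates(local_updates, weights=None):
    """Central NWDAF sketch: combine parameter updates from local NWDAFs
    into one combined model update via a weighted average."""
    if weights is None:
        weights = [1.0] * len(local_updates)  # unweighted average by default
    total = sum(weights)
    n_params = len(local_updates[0])
    return [sum(w * u[k] for u, w in zip(local_updates, weights)) / total
            for k in range(n_params)]
```

In the federated setup described, only these parameter updates travel to the central NWDAF; the training data itself stays at each local NWDAF.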