Patent classifications
H04L47/2441
SYSTEM AND METHOD FOR MANAGING NETWORK TRAFFIC USING FAIR-SHARE PRINCIPLES
A system and method for managing network traffic in a distributed environment. The system includes: a plurality of logic modules configured to determine policy data related to bandwidth management and at least one split criterion as a basis for shaping network traffic; a control processor associated with each one of the plurality of logic modules, each control processor configured to determine data associated with each of a plurality of traffic flows at the associated logic module and to coordinate traffic actions over the plurality of logic modules; a packet processor associated with each control processor and configured to determine a traffic action based on each traffic flow and received policy data; and at least two shaper objects configured to receive a split of the traffic flows and enforce the determined traffic action on their respective traffic flow.
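The split-and-shape idea in the abstract above can be sketched as follows. This is an illustrative toy, not the patented implementation: the split criterion, the per-interval byte budget, and the fair division of bandwidth across splits are all assumptions made for the example.

```python
# Flows are partitioned by a split criterion (here, application class), and
# each partition is handed to its own shaper object, which enforces a fair
# share of the configured total bandwidth.
from collections import defaultdict

class Shaper:
    """Toy shaper: admits packets against a per-second byte budget."""
    def __init__(self, rate_bps):
        self.budget_bytes = rate_bps // 8
        self.admitted = 0

    def enforce(self, packet_bytes):
        # Forward the packet only while this interval's budget remains.
        if self.admitted + packet_bytes <= self.budget_bytes:
            self.admitted += packet_bytes
            return "forward"
        return "drop"

def split_flows(flows, criterion):
    """Partition flows using the policy's split criterion."""
    splits = defaultdict(list)
    for flow in flows:
        splits[criterion(flow)].append(flow)
    return splits

flows = [
    {"id": 1, "app": "video", "bytes": 800},
    {"id": 2, "app": "web",   "bytes": 400},
    {"id": 3, "app": "video", "bytes": 900},
]
splits = split_flows(flows, criterion=lambda f: f["app"])

# Fair-share policy: the total bandwidth is divided equally across splits.
total_bps = 16_000
shapers = {k: Shaper(total_bps // len(splits)) for k in splits}

actions = {f["id"]: shapers[f["app"]].enforce(f["bytes"])
           for split in splits.values() for f in split}
# The "video" split exceeds its 1000-byte share, so its second flow is dropped.
```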
METHOD AND APPARATUS FOR PERFORMING TRAFFIC FLOW MANAGEMENT IN WIRELESS COMMUNICATIONS SYSTEM WITH AID OF AUXILIARY INFORMATION
A method for performing traffic flow management with the aid of auxiliary information, and an associated apparatus, are provided. The method may include: carrying first auxiliary information in at least one first data unit of first data units in a first stream, wherein the first stream and a second stream are assigned to a same traffic identifier (TID); and sending, to a second device, the at least one first data unit carrying the first auxiliary information and at least one second data unit in the second stream, wherein the first auxiliary information comprises an indication of the first data units being part of the first stream.
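The mechanism above can be sketched as a sender that tags one stream's data units with auxiliary stream-membership information, so a receiver can separate two streams that share a TID. The field names and record shapes below are illustrative assumptions, not from the patent.

```python
# Two streams share one TID; data units of the first stream carry auxiliary
# information marking their stream membership, letting the receiver demux.

def tag_units(stream_a, stream_b, tid):
    """Emit data units for both streams under one TID; units from stream A
    carry auxiliary info indicating they are part of stream A."""
    units = []
    for payload in stream_a:
        units.append({"tid": tid, "aux": {"stream": "A"}, "payload": payload})
    for payload in stream_b:
        units.append({"tid": tid, "aux": None, "payload": payload})
    return units

def demux(units):
    """Receiver side: recover stream A using the auxiliary indication."""
    a = [u["payload"] for u in units if u["aux"] and u["aux"]["stream"] == "A"]
    b = [u["payload"] for u in units if u["aux"] is None]
    return a, b

units = tag_units(["a1", "a2"], ["b1"], tid=5)
recovered_a, recovered_b = demux(units)
```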
INCREASED COVERAGE OF APPLICATION-BASED TRAFFIC CLASSIFICATION WITH LOCAL AND CLOUD CLASSIFICATION SERVICES
A cloud-based traffic classification engine maintains a catalog of application-based traffic classes which have been developed based on known applications, and a local traffic classification engine maintains a subset of these classes. Network traffic intercepted by the firewall which cannot be classified by the local engine is forwarded to the cloud-based engine for classification. Upon determination of a class of the traffic, the cloud-based engine forwards the determined class and corresponding signature to the local engine. The firewall maintains a cache which is updated with the signatures corresponding to the class communicated by the cloud-based engine. Subsequent network traffic sent from the application can be determined to correspond to the application and classified accordingly at the firewall based on the cached signatures. Localization of the cache to the firewall reduces latency of traffic classification operations as the catalog of classification information stored in the cloud scales.
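The local-cache / cloud-fallback flow can be sketched as below: the local engine tries its cached signatures first, consults the cloud catalog on a miss, and caches the returned class so that repeat traffic classifies locally. The catalog contents and signature names are assumptions for illustration.

```python
# Stand-in for the cloud-based engine's catalog of application signatures.
CLOUD_CATALOG = {"sig-video": "streaming", "sig-chat": "messaging"}

class LocalClassifier:
    """Local engine with a signature cache and a counter of cloud lookups."""
    def __init__(self):
        self.cache = {}          # signature -> traffic class
        self.cloud_lookups = 0

    def classify(self, signature):
        # Fast path: signature already cached at the firewall.
        if signature in self.cache:
            return self.cache[signature]
        # Miss: forward to the cloud-based engine, then cache the answer
        # so subsequent traffic with this signature classifies locally.
        self.cloud_lookups += 1
        traffic_class = CLOUD_CATALOG.get(signature, "unknown")
        self.cache[signature] = traffic_class
        return traffic_class

engine = LocalClassifier()
first = engine.classify("sig-video")    # goes to the cloud
second = engine.classify("sig-video")   # served from the local cache
```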
SYSTEMS, DEVICES AND METHODS WITH OFFLOAD PROCESSING DEVICES
A method can include receiving network packets including forwarding plane packets; evaluating header information of the network packets to map network packets to any of a plurality of destinations on the module, each destination corresponding to any of a plurality of services executed by offload processors of the module; configuring operations of the offload processors; and in response to forwarding plane packets, executing operations on the forwarding plane packets; wherein the receiving, evaluating, and processing of the forwarding plane packets are performed independently of the host processor. Corresponding systems and methods are also disclosed.
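The header-to-service mapping step can be sketched as a lookup table keyed on header fields, with each entry naming a service run by an offload processor. The table contents, header fields, and service names are illustrative assumptions.

```python
# Map header fields (destination port, protocol) to an offload service,
# so forwarding-plane packets are dispatched without involving the host.
SERVICE_TABLE = {
    (80, "tcp"): "http_proxy",
    (53, "udp"): "dns_cache",
}

def dispatch(packet):
    """Evaluate a packet's header info and map it to an offload service."""
    key = (packet["dst_port"], packet["proto"])
    return SERVICE_TABLE.get(key, "default_forward")

svc_http = dispatch({"dst_port": 80, "proto": "tcp"})
svc_other = dispatch({"dst_port": 22, "proto": "tcp"})
```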
Low latency for network devices not supporting LLD
An optimizing agent of a network device that does not support low latency DOCSIS can identify traffic or packets associated with a client resource for an optimized service flow. For example, the optimizing agent can receive a priority notification associated with a client resource from a low latency controller that is indicative of a low latency requirement associated with the client resource. The optimizing agent identifies the traffic for the optimized service flow based on the priority notification. The identifying can require modifying one or more parameters of an existing service flow, creating a new service flow, or selecting an existing service flow with low latency. The identified traffic can be routed to the optimized service flow to achieve low latency or high QoS.
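The agent's select-or-create decision can be sketched as follows. The latency budget, flow record shape, and decision logic are assumptions made for illustration; the patent only states that the agent may modify, create, or select a service flow.

```python
def select_service_flow(flows, notification, latency_budget_ms=10):
    """Given a priority notification, return (action, flow): reuse an
    existing low-latency service flow if one exists, else create one."""
    for flow in flows:
        if flow["latency_ms"] <= latency_budget_ms:
            return ("select_existing", flow)
    # No suitable existing flow: create a new low-latency service flow
    # for the client resource named in the notification.
    new_flow = {"id": max((f["id"] for f in flows), default=0) + 1,
                "latency_ms": latency_budget_ms,
                "resource": notification["resource"]}
    return ("create_new", new_flow)

# Only a best-effort flow exists, so a new low-latency flow is created.
action1, flow1 = select_service_flow(
    [{"id": 1, "latency_ms": 50}], {"resource": "cloud-game"})

# A low-latency flow already exists, so it is selected.
action2, flow2 = select_service_flow(
    [{"id": 1, "latency_ms": 5}], {"resource": "cloud-game"})
```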
High performance software-defined core network
A method comprising instantiating virtual routers (VRs) at each of a set of nodes that form a network. Each VR is coupled to the network and to a tenant of the node. The network comprises virtual links in an overlay network provisioned over an underlay network including servers of a public network. The method comprises configuring at least one VR to include a feedback control system comprising at least one objective function that characterizes the network. The method comprises configuring the VR to receive link state data of a set of virtual links of the virtual links, and control routing of a tenant traffic flow of each tenant according to a best route of the network determined by the at least one objective function using the link state data.
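The routing step above can be sketched as an objective function that scores candidate routes from observed link state, with the VR steering each tenant flow onto the best-scoring route. The scoring formula (latency plus a weighted loss penalty) is an assumption for illustration; the patent does not specify the objective function's form.

```python
def objective(route, link_state):
    """Score a route from link state data; lower is better. Cost is the
    sum of per-link latency plus a penalty proportional to loss rate."""
    return sum(link_state[link]["latency_ms"] + 100 * link_state[link]["loss"]
               for link in route)

def best_route(routes, link_state):
    """Pick the route the objective function rates best."""
    return min(routes, key=lambda r: objective(r, link_state))

# Link state for three virtual links of the overlay.
link_state = {
    "l1": {"latency_ms": 20, "loss": 0.0},
    "l2": {"latency_ms": 5,  "loss": 0.1},
    "l3": {"latency_ms": 8,  "loss": 0.0},
}
# The direct route wins: l2+l3 is faster but its loss penalty dominates.
chosen = best_route([("l1",), ("l2", "l3")], link_state)
```

In a feedback-control arrangement, fresh link state would be fed back into the objective on each control interval, re-steering tenant flows as conditions change.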
Inferring quality of experience (QoE) based on choice of QoE inference model
In one example, a location of a potential bottleneck of network traffic in a network is identified. Based on the location of the potential bottleneck, a first QoE inference model is selected from a plurality of respective QoE inference models. The respective QoE inference models are each trained to infer a respective QoE of the network traffic based on one or more respective network traffic metrics generated by monitoring the network traffic at a respective location in the network. One or more first network traffic metrics of the one or more respective network traffic metrics are generated by monitoring the network traffic at a first respective location. The one or more first network traffic metrics are provided to the first QoE inference model to infer a first respective QoE.
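The model-selection step can be sketched as a table of QoE inference models keyed by bottleneck location, where each model consumes the metrics collected at its own monitoring point. The models here are toy linear functions on a 1-to-5 QoE scale; the real models, metrics, and scale are assumptions for illustration.

```python
# One trained model per potential bottleneck location, each consuming the
# metrics generated by monitoring traffic at that location.
MODELS = {
    "access": lambda m: 5.0 - 0.05 * m["rtt_ms"],   # access-link model: RTT-driven
    "core":   lambda m: 5.0 - 10.0 * m["loss"],     # core model: loss-driven
}

def infer_qoe(bottleneck_location, metrics):
    """Select the model matching the bottleneck location, then infer QoE,
    clamped to a 1.0-5.0 scale."""
    model = MODELS[bottleneck_location]
    return max(1.0, min(5.0, model(metrics)))

qoe_access = infer_qoe("access", {"rtt_ms": 40})   # 5.0 - 2.0 = 3.0
qoe_core = infer_qoe("core", {"loss": 0.5})        # clamped to the floor
```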