Patent classifications
H04L47/10
Flow-based management of shared buffer resources
An apparatus for controlling a Shared Buffer (SB) includes an interface and an SB controller. The interface accesses flow-based data counts and admission states. The SB controller performs flow-based accounting of packets received by a network device coupled to a communication network to produce flow-based data counts, each data count associated with one or more respective flows, and generates admission states based at least on the flow-based data counts, each admission state being generated from one or more respective flow-based data counts.
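The flow-based accounting and admission mechanism described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class name, the single per-flow byte threshold, and the "admit"/"drop" states are all assumptions made for the example.

```python
class SharedBufferController:
    """Illustrative sketch: per-flow data counts drive admission states."""

    def __init__(self, flow_threshold):
        self.flow_threshold = flow_threshold   # bytes allowed per flow (assumed policy)
        self.flow_counts = {}                  # flow id -> accounted bytes

    def account(self, flow_id, packet_len):
        """Add a received packet's bytes to its flow's data count."""
        self.flow_counts[flow_id] = self.flow_counts.get(flow_id, 0) + packet_len

    def release(self, flow_id, packet_len):
        """Subtract bytes when a packet leaves the shared buffer."""
        self.flow_counts[flow_id] = max(0, self.flow_counts.get(flow_id, 0) - packet_len)

    def admission_state(self, flow_id):
        """Derive an admission state from the flow-based data count."""
        return "admit" if self.flow_counts.get(flow_id, 0) < self.flow_threshold else "drop"
```

In this toy form, a flow's admission state flips to "drop" once its accounted bytes reach the threshold and back to "admit" as its packets drain from the buffer.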
Grade of service control closed loop
Systems and methods for managing network traffic receive, at a grade of service device, network traffic information for a plurality of network traffic channels from a network device separate from the grade of service device. The network traffic information is compared to a threshold to determine a behavior value for each network traffic channel. Each network traffic channel is mapped to a grade of service according to its behavior value.
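The threshold-to-behavior-to-grade pipeline can be sketched in a few lines. The rate-based metric, the binary behavior value, and the two grade names below are assumptions for illustration; the abstract does not specify them.

```python
def grade_channels(channel_rates, threshold, grades=("best_effort", "priority")):
    """Map each traffic channel to a grade of service via a behavior value.

    Assumed convention: a channel whose measured rate exceeds the threshold
    gets behavior value 1 (misbehaving) and the lower grade; otherwise it
    gets behavior value 0 and the higher grade.
    """
    mapping = {}
    for channel, rate in channel_rates.items():
        behavior = 1 if rate > threshold else 0
        mapping[channel] = grades[0] if behavior else grades[1]
    return mapping
```

A channel reporting 200 units against a threshold of 100 would land in the lower grade, while one reporting 50 keeps the higher grade.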
Queue protection using a shared global memory reserve
The subject technology relates to the management of a shared buffer memory in a network switch. Systems, methods, and machine-readable media are provided for receiving a data packet at a first network queue from among a plurality of network queues, determining if a fill level of a queue in a shared buffer of the network switch exceeds a dynamic queue threshold, and, in the event that the fill level exceeds the dynamic queue threshold, determining if a fill level of the first network queue is less than a static queue minimum threshold.
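The two-stage check above (dynamic threshold first, then a static minimum that acts as a protected reserve) can be sketched as a single admission function. The alpha-scaled dynamic threshold is a common shared-buffer heuristic assumed here for illustration; the function name and parameters are not from the patent.

```python
def admit_packet(queue_fill, shared_fill, shared_capacity, alpha, static_min):
    """Dynamic-threshold admission with a static per-queue minimum reserve.

    The dynamic threshold scales with remaining shared-buffer space
    (alpha * free space), an assumed formulation. A queue over the dynamic
    threshold can still admit from a global reserve while it remains below
    its static minimum threshold.
    """
    free = shared_capacity - shared_fill
    dynamic_threshold = alpha * free
    if queue_fill < dynamic_threshold:
        return True                    # normal admission under the dynamic threshold
    return queue_fill < static_min     # protected by the static minimum reserve
```

The static minimum keeps a nearly empty queue from starving when heavy queues have consumed most of the shared buffer.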
SYSTEM AND METHOD FOR ADAPTING TRANSMISSION RATE COMPUTATION BY A CONTENT TRANSMITTER
A computerized system having multiple congestion control modules determines a transmission rate for data traffic towards a destination device over a communication network, the transmission rate being updated for specific time intervals. Each congestion control module repeatedly collects performance-related data describing performance of content transmitted from the module to the destination device during specific time intervals, and executes a transmission function for computing a next transmission rate for a next time interval. The transmission function receives as input performance-related data associated with prior transmission rates selected at prior time intervals, and includes configurable parameters. The system also includes one or more analyzers, each communicating with one or more of the congestion control modules, where each analyzer periodically executes an adjusting function for reconfiguring the configurable parameters of the transmission function.
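The split between a per-interval transmission function with configurable parameters and a separate analyzer that retunes those parameters can be sketched as below. The AIMD-style rate update and the loss-ratio adjustment rule are stand-in assumptions; the patent does not specify the functions themselves.

```python
class CongestionControlModule:
    """Sketch: per-interval rate computation with analyzer-tunable parameters.

    A toy AIMD-style transmission function: additive increase `a` on good
    intervals, multiplicative decrease `b` on lossy ones. Both parameters
    are configurable so an analyzer can retune them.
    """

    def __init__(self, rate, a=1.0, b=0.5):
        self.rate = rate
        self.params = {"a": a, "b": b}
        self.history = []   # collected performance-related data per interval

    def next_rate(self, loss_observed):
        """Compute the next interval's transmission rate from prior performance."""
        self.history.append((self.rate, loss_observed))
        if loss_observed:
            self.rate *= self.params["b"]
        else:
            self.rate += self.params["a"]
        return self.rate


def analyzer_adjust(module):
    """Illustrative adjusting function: if most recent intervals saw loss,
    shrink the decrease factor so the module backs off harder."""
    recent = module.history[-10:]
    if recent and sum(1 for _, lost in recent if lost) > len(recent) / 2:
        module.params["b"] *= 0.8
```

The key design point mirrored here is the separation of concerns: the module runs the fast per-interval loop, while the analyzer runs a slower loop over collected history to reshape the module's behavior.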
SYSTEMS, DEVICES AND METHODS WITH OFFLOAD PROCESSING DEVICES
A method can include receiving network packets including forwarding plane packets; evaluating header information of the network packets to map network packets to any of a plurality of destinations on the module, each destination corresponding to any of a plurality of services executed by offload processors of the module; configuring operations of the offload processors; and in response to forwarding plane packets, executing operations on the forwarding plane packets; wherein the receiving, evaluating, and processing of the forwarding plane packets are performed independently of the host processor. Corresponding systems and methods are also disclosed.
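The header-evaluation step that maps packets to offload services can be sketched as a simple dispatch table. The choice of a (destination IP, destination port) key and the service names are illustrative assumptions; the abstract leaves the header fields unspecified.

```python
def dispatch(packet_header, service_map):
    """Map a packet to an offload-processor service by header fields.

    `service_map` keys are assumed (dst_ip, dst_port) tuples; each value
    names the service an offload processor on the module would run.
    Unmatched packets fall through to a host-processing default.
    """
    key = (packet_header["dst_ip"], packet_header["dst_port"])
    return service_map.get(key, "host_default")
```

In the disclosed design this lookup would run on the module itself, so matched forwarding-plane packets never involve the host processor.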
End-to-end prioritization for mobile base station
A method for utilizing quality of service information in a network with tunneled backhaul is disclosed, comprising: establishing a backhaul bearer at a base station with a first core network, the backhaul bearer established by a backhaul user equipment (UE) at the base station, the backhaul bearer having a single priority parameter, the backhaul bearer terminating at a first packet data network gateway in the first core network; establishing an encrypted internet protocol (IP) tunnel between the base station and a coordinating gateway in communication with the first core network and a second core network; facilitating, for at least one UE attached at the base station, establishment of a plurality of UE data bearers encapsulated in the encrypted IP tunnel, each with its own QoS Class Identifier (QCI); and transmitting prioritized data of the plurality of UE data bearers via the backhaul bearer and the coordinating gateway to the second core network.
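The core problem here is that many per-QCI UE bearers get funneled into one backhaul bearer with a single priority. One way to preserve end-to-end prioritization, sketched below as an assumption rather than the patented scheduler, is to order tunneled packets by their QCI before they enter the backhaul bearer. Note that in 3GPP each QCI maps to a separate priority level rather than being ordered strictly by its numeric value; the numeric sort here is a simplification.

```python
def schedule_backhaul(packets):
    """Order tunneled UE-bearer packets by QCI before the single-priority
    backhaul bearer, so per-bearer priority survives the tunnel.

    Simplifying assumption: lower QCI value means higher priority.
    """
    return sorted(packets, key=lambda p: p["qci"])
```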
TECHNIQUES FOR PROCESSING NETWORK FLOWS
Improved network traffic flow processing techniques are described. In a network device providing multiple processing planes, each processing plane comprising multiple processing units, techniques are described that take advantage of flow affinity/locality principles such that the same processing component of a processing plane, which previously performed processing for a network flow, is used for performing subsequent processing for the same network flow. This enables faster processing of network traffic flows by the network device. In certain implementations, the techniques described herein can be implemented in a network virtualization device (NVD) that is configured to perform network virtualization functions.
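The flow affinity principle described above is commonly realized by hashing a flow's identifying tuple to pick a processing unit, so every packet of the flow lands on the same unit and its state stays warm. The sketch below assumes a 5-tuple key and a SHA-256 hash; the patent does not commit to either.

```python
import hashlib

def select_processing_unit(flow_five_tuple, num_units):
    """Flow-affinity dispatch: deterministically hash the flow's 5-tuple
    (src ip, dst ip, protocol, src port, dst port) to one of `num_units`
    processing units, so a flow always reaches the same unit."""
    digest = hashlib.sha256(repr(flow_five_tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_units
```

Because the mapping is deterministic, no shared lookup table is needed: any ingress path computes the same unit for the same flow.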