Patent classifications
H04L47/562
System and method for distributing packets in a network
A system and method for distributing packets in a network are disclosed. The method comprises a step of receiving at least one data packet at a first node from a second node. The method also comprises a step of determining a current set of weights which are applied by the second node to distribute data packets across a first plurality of links. The received data packets are analysed to determine if the current set of weights is to be adjusted (step S102). When it is determined that the current set of weights is to be adjusted, an adjusted set of weights is generated by determining an adjustment factor (step S104). The adjustment factor is applied to the current weight for the selected link and at least one other current weight in the current set of weights.
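The abstract does not specify how the remaining weights change once the adjustment factor is applied, so the following is only a minimal sketch of one plausible policy: scale the selected link's weight by the factor and spread the difference evenly across the other links so the total weight is preserved. All names are illustrative.

```python
def adjust_weights(weights, selected, factor):
    """Apply an adjustment factor to the selected link's weight and
    redistribute the difference across the remaining links so the
    total weight is preserved (hypothetical policy; the abstract
    does not define the redistribution rule)."""
    adjusted = list(weights)
    old = adjusted[selected]
    new = old * factor
    adjusted[selected] = new
    others = [i for i in range(len(adjusted)) if i != selected]
    if not others:
        return adjusted  # single link: nothing to rebalance against
    share = (old - new) / len(others)
    for i in others:
        adjusted[i] += share
    return adjusted
```

For example, halving the weight of link 0 in `[4.0, 4.0, 4.0]` yields `[2.0, 5.0, 5.0]`, keeping the total at 12.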
MONITORING PACKET RESIDENCE TIME AND CORRELATING PACKET RESIDENCE TIME TO INPUT SOURCES
An output circuit, included in a device, may determine counter information associated with a packet provided via an output queue managed by the output circuit. The output circuit may determine that a latency event, associated with the output queue, has occurred. The output circuit may provide the counter information and time of day information associated with the counter information. The output circuit may provide a latency event notification associated with the output queue. An input circuit, included in the device, may receive the latency event notification associated with the output queue. The input circuit may determine performance information associated with an input queue. The input queue may correspond to the output queue and may be managed by the input circuit. The input circuit may provide the performance information associated with the input queue and time of day information associated with the performance information.
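A minimal sketch of the output-circuit side of this scheme, assuming a residence-time threshold as the latency event trigger (the abstract does not say how the event is detected). The class tracks a per-queue packet counter and, on a latency event, emits the counter information together with time-of-day information.

```python
import time

class OutputQueueMonitor:
    """Hypothetical sketch: count packets leaving an output queue and
    raise a latency event notification when a packet's residence time
    exceeds a threshold. Field names are illustrative."""

    def __init__(self, latency_threshold_s):
        self.latency_threshold_s = latency_threshold_s
        self.packets_out = 0  # counter information for the queue

    def packet_dequeued(self, enqueue_ts, now=None):
        """Record a dequeued packet; return a notification on a latency
        event, otherwise None."""
        now = time.time() if now is None else now
        self.packets_out += 1
        residence = now - enqueue_ts
        if residence > self.latency_threshold_s:
            # latency event: report counter info plus time of day
            return {"event": "latency",
                    "counter": self.packets_out,
                    "time_of_day": now,
                    "residence_s": residence}
        return None
```

An input circuit receiving the returned notification could then look up the corresponding input queue and report its own performance information in the same shape.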
Methods and systems for queue and pipeline latency metrology in network devices and smart NICs
Inbound packets can be received by a network device that determines a receive pipeline latency metric based on a plurality of receive pipeline residency times of the inbound packets and determines a receive queue latency metric based on a plurality of receive queue residency times of the inbound packets. The receive queue latency metric and the receive pipeline latency metric can be reported to a data collector. The network appliance may also receive a plurality of outbound packets on a transmit queue, determine a transmit queue latency metric based on the transmit queue residency times of the outbound packets, and determine a transmit pipeline latency metric based on the transmit pipeline residency times of the outbound packets. The outbound packets may be transmitted toward their destination. The transmit queue latency metric and the transmit pipeline latency metric can be reported to the data collector.
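The abstract leaves the form of each latency metric open, so the sketch below assumes a simple mean-and-maximum summary over the residency times, bundled into the four metrics named (receive/transmit, queue/pipeline) for a data collector. Function names are illustrative.

```python
def latency_metric(residency_times_ns):
    """Summarize a list of residency times (hypothetical metric; the
    abstract does not define it, so mean and max are used here)."""
    return {"avg_ns": sum(residency_times_ns) / len(residency_times_ns),
            "max_ns": max(residency_times_ns)}

def report_metrics(rx_queue, rx_pipe, tx_queue, tx_pipe):
    """Bundle the four metrics the abstract names, as one record that
    could be sent to a data collector."""
    return {"rx_queue": latency_metric(rx_queue),
            "rx_pipeline": latency_metric(rx_pipe),
            "tx_queue": latency_metric(tx_queue),
            "tx_pipeline": latency_metric(tx_pipe)}
```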
Buffer management method and apparatus
A memory management method includes: determining that available storage space of a first memory in a network device is less than a first threshold, where the first threshold is greater than 0 and the first memory stores a first packet queue; and deleting at least one packet at the tail of the first packet queue from the first memory based on the available storage space of the first memory being less than the first threshold. When the available storage space of the first memory is less than the first threshold, a packet queue, namely, the first packet queue, is selected and a packet at the tail of the packet queue is deleted from the first memory.
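The tail-drop behaviour described above can be sketched as follows, assuming packets are byte strings whose length is their storage cost; the abstract does not specify how the queue to trim is selected, so a single queue is taken as given.

```python
from collections import deque

def enforce_threshold(queue, available_bytes, threshold_bytes):
    """Drop packets from the tail of the selected queue until available
    memory is no longer below the threshold (sketch of the described
    method; sizing by len() is an assumption)."""
    dropped = []
    while available_bytes < threshold_bytes and queue:
        pkt = queue.pop()            # delete from the tail
        available_bytes += len(pkt)  # reclaim its storage
        dropped.append(pkt)
    return available_bytes, dropped
```

With 1 byte free, a threshold of 5, and a queue `[b'aaaa', b'bb', b'ccc']`, the tail packets `b'ccc'` and `b'bb'` are deleted, leaving 6 bytes available.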
TRAFFIC-SHAPING HTTP PROXY FOR DENIAL-OF-SERVICE PROTECTION
In accordance with some aspects of the present disclosure, an apparatus is disclosed. In some embodiments, the apparatus includes a processor and a memory. In some embodiments, the memory includes programmed instructions that, when executed by the processor, cause the apparatus to receive a request from a client; determine a family of metrics; schedule the request based on the family of metrics; and, in response to satisfying one or more scheduling criteria, send the request to a backend server.
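The abstract does not name the family of metrics or the scheduling criteria, so the sketch below assumes two illustrative metrics, queue depth and a per-client quota: requests beyond the queue depth are shed (the denial-of-service angle), and queued requests are released to the backend only while their client is under quota.

```python
class ThrottlingProxy:
    """Hypothetical sketch of the described proxy; the metrics used
    here (queue depth, per-client count) are assumptions, not taken
    from the patent."""

    def __init__(self, max_queue_depth, max_per_client):
        self.max_queue_depth = max_queue_depth
        self.max_per_client = max_per_client
        self.queue = []        # pending (client_id, request) pairs
        self.per_client = {}   # requests already sent per client

    def receive(self, client_id, request):
        if len(self.queue) >= self.max_queue_depth:
            return "rejected"  # shed load under DoS pressure
        self.queue.append((client_id, request))
        return "queued"

    def schedule(self):
        """Return the next request whose client satisfies the quota
        criterion, or None if no request may be sent yet."""
        for i, (client, req) in enumerate(self.queue):
            if self.per_client.get(client, 0) < self.max_per_client:
                del self.queue[i]
                self.per_client[client] = self.per_client.get(client, 0) + 1
                return (client, req)  # would be sent to the backend
        return None
```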
IN-ORDER PROCESSING OF NETWORK PACKETS
The described technology relates to real-time processing of network packets. An example system reorders messages received at a server over a communication network from distributed clients in order to eliminate, or at least substantially reduce, the effects of jitter (delay variance) experienced in the network. The reordering of messages may enable a data processing application to process packets more consistently in the time order in which the respective packets entered a geographically distributed network.
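A minimal sketch of such reordering, assuming each message carries a sequence number assigned at network entry: out-of-order arrivals are buffered, and every contiguous run starting at the next expected number is released for in-order processing. This is an illustration of the general technique, not the patented system.

```python
class Reorderer:
    """Buffer out-of-order messages keyed by sequence number and
    release them in order (sketch; names are illustrative)."""

    def __init__(self, first_seq=0):
        self.expected = first_seq
        self.pending = {}

    def push(self, seq, payload):
        """Accept one message; return the list of messages now
        deliverable in order (possibly empty)."""
        self.pending[seq] = payload
        ready = []
        while self.expected in self.pending:
            ready.append(self.pending.pop(self.expected))
            self.expected += 1
        return ready
```

A real deployment would also bound the buffer and time out missing sequence numbers so one lost message cannot stall delivery indefinitely.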
REQUEST THROTTLING USING PI-ES CONTROLLER
Techniques for providing request throttling using proportional, integral, and exponential smoothing algorithms are disclosed. A distributed computing system can include a throttler engine that receives a plurality of requests targeting a software component within the distributed computing system. The throttler engine can aggregate the requests into a queue based on a time window. The throttler engine can determine a received request rate and a request rate limit for the software component and then compute a throttled request rate. The throttled request rate can include correction terms derived from proportional and integral computations and a correction term obtained from an exponential smoothing algorithm. The throttler engine can then provide throttled requests from the queue to the software component.
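A minimal sketch of combining the three ingredients the abstract names: exponential smoothing of the observed request rate, plus proportional and integral correction terms against the rate limit. The gains and smoothing factor below are illustrative defaults, not values from the patent.

```python
class PIESThrottler:
    """Hypothetical PI-plus-exponential-smoothing rate controller;
    kp, ki, and alpha are assumed tuning parameters."""

    def __init__(self, limit, kp=0.5, ki=0.1, alpha=0.3):
        self.limit = limit          # configured request rate limit
        self.kp, self.ki = kp, ki   # proportional / integral gains
        self.alpha = alpha          # smoothing factor
        self.integral = 0.0
        self.smoothed = 0.0

    def throttled_rate(self, received_rate):
        """Compute the throttled request rate for one time window."""
        # exponential smoothing of the observed rate
        self.smoothed = self.alpha * received_rate + (1 - self.alpha) * self.smoothed
        error = self.limit - self.smoothed  # negative when over limit
        self.integral += error
        rate = self.limit + self.kp * error + self.ki * self.integral
        # clamp: never exceed the limit, never go negative
        return max(0.0, min(rate, self.limit))
```

When the smoothed rate sits under the limit the controller passes requests at the full limit; sustained overload drives the proportional and integral terms negative and the throttled rate down.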
Implementing a queuing system in a distributed network
A web application has a limit on the total number of concurrent users. As requests from client devices are received from users, a determination is made whether the application can accept those users. When the threshold number of users has been exceeded, new users are prevented from accessing the web application and are assigned to a queue system. A webpage may be sent to the users indicating queue status and may provide their estimated wait time. A cookie may be sent to the client for tracking the position of the user in the application queue. The users are assigned to a user bucket associated with a time interval of their initial request. When user slots become available, the users queued in the user bucket (starting from the oldest user bucket) are allowed access to the web application.
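The bucket-based admission scheme above can be sketched as follows; the bucket interval, class, and method names are illustrative, and cookie tracking and wait-time estimation are omitted for brevity.

```python
from collections import deque

class WaitingRoom:
    """Sketch of the described queue system: users above the
    concurrency limit go into time-interval buckets; freed slots
    admit users from the oldest bucket first."""

    def __init__(self, max_concurrent, bucket_seconds=60):
        self.max_concurrent = max_concurrent
        self.bucket_seconds = bucket_seconds
        self.active = set()
        self.buckets = {}  # bucket start time -> deque of user ids

    def request_access(self, user_id, now):
        if len(self.active) < self.max_concurrent:
            self.active.add(user_id)
            return "admitted"
        # assign the user to the bucket for their request's interval
        bucket = int(now // self.bucket_seconds) * self.bucket_seconds
        self.buckets.setdefault(bucket, deque()).append(user_id)
        return "queued"

    def release_slot(self, user_id):
        self.active.discard(user_id)
        # admit queued users, oldest bucket first (dict preserves
        # insertion order, and buckets are created chronologically)
        while self.buckets and len(self.active) < self.max_concurrent:
            oldest = next(iter(self.buckets))
            q = self.buckets[oldest]
            if not q:
                del self.buckets[oldest]
                continue
            self.active.add(q.popleft())
```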
METHOD AND SYSTEM FOR SEQUENCING USER DATA PACKETS
A method and a system for sequencing user data packets are provided. The method includes the following. A user data packet including a packet data convergence protocol (PDCP) data packet data unit (PDU) header is received at hardware of a sublayer of a data link layer. The PDCP data PDU header is decoded at the sublayer to obtain a sequence number of the user data packet. The received user data packet is queued at the sublayer according to the sequence number of the user data packet to assemble a set of consecutively numbered user data packets. At least a portion of the set of consecutively numbered user data packets is delivered from the sublayer to another sublayer of the data link layer.
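The header decode and sequencing steps can be sketched for the 12-bit SN PDCP data PDU format, where the first octet carries the D/C bit, three reserved bits, and the four most significant SN bits, and the second octet carries the remaining eight. The assembly helper is a simplification: it returns one consecutive run starting from the lowest SN present, whereas the real sublayer delivers runs as they complete.

```python
def decode_pdcp_sn_12bit(pdu: bytes) -> int:
    """Extract the 12-bit sequence number from a PDCP data PDU header
    (low nibble of octet 0 holds SN[11:8], octet 1 holds SN[7:0])."""
    return ((pdu[0] & 0x0F) << 8) | pdu[1]

def assemble_in_order(pdus):
    """Queue PDUs by decoded SN and return the longest consecutive run
    starting from the lowest SN present (simplified sketch of the
    described sequencing; ignores SN wrap-around)."""
    by_sn = {decode_pdcp_sn_12bit(p): p for p in pdus}
    sn = min(by_sn)
    run = []
    while sn in by_sn:
        run.append(by_sn.pop(sn))
        sn += 1
    return run
```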