Patent classifications
H04L49/9084
Host device with multi-path layer configured for detection and resolution of oversubscription conditions
An apparatus comprises a host device configured to communicate over a network with a storage system comprising a plurality of storage devices. The host device comprises a set of input-output queues and a multi-path input-output driver configured to select input-output operations from the set of input-output queues for delivery to the storage system over the network. The multi-path input-output driver is further configured to maintain payload size counters to track outstanding command payload for respective ones of a plurality of paths from the host device to the storage system, to detect an oversubscription condition relating to at least one of the paths based at least in part on values of one or more of the payload size counters, and to initiate one or more automated actions responsive to the detected oversubscription condition. For example, automated deployment of one or more additional paths associated with respective spare communication links between the host device and the storage system may be initiated.
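The counter mechanism described above can be illustrated with a small sketch. All names here (`PathMonitor`, `OVERSUBSCRIPTION_BYTES`, the spare-path action) are illustrative assumptions, not terms from the patent; the threshold value is arbitrary.

```python
OVERSUBSCRIPTION_BYTES = 1_000_000  # assumed per-path threshold


class PathMonitor:
    """Tracks outstanding command payload per path and flags oversubscription."""

    def __init__(self, paths):
        self.outstanding = {p: 0 for p in paths}  # payload size counters
        self.spare_deployed = False

    def dispatch(self, path, payload_bytes):
        # The counter grows when an I/O operation is sent down the path...
        self.outstanding[path] += payload_bytes
        if self.outstanding[path] > OVERSUBSCRIPTION_BYTES:
            self._on_oversubscription(path)

    def complete(self, path, payload_bytes):
        # ...and shrinks when the command completes.
        self.outstanding[path] -= payload_bytes

    def _on_oversubscription(self, path):
        # One possible automated action from the abstract: deploy a spare link.
        self.spare_deployed = True
```

The key point is that the counters track payload bytes in flight, not just command counts, so a path saturated by a few large I/Os is detected as readily as one saturated by many small ones.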
DATA PACKET PROCESSING METHOD AND APPARATUS, AND DEVICE
Embodiments of the present invention disclose a data packet processing method and apparatus, and a device. The method includes: if a first data packet is received, determining a first cache queue that is in a first buffer and that is used to store the first data packet; buffering the first data packet in a second buffer if a state of the first cache queue is an invalid state, where a data amount of the first data packet is less than a capacity of the second buffer, and the state of the first cache queue is set to the invalid state when a current data amount of the first buffer reaches a capacity of the first buffer; and if a data amount of the second buffer reaches the capacity of the second buffer, sending all data packets in the second buffer to a control plane device.
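A toy model of the two-buffer scheme follows: per-queue storage in a first buffer, spillover of small packets into a second buffer once a queue is invalidated, and a flush to the control plane when the second buffer fills. Class and attribute names, and all capacities, are illustrative assumptions.

```python
class TwoBufferForwarder:
    def __init__(self, first_capacity, second_capacity):
        self.first_capacity = first_capacity
        self.second_capacity = second_capacity
        self.first = {}          # queue_id -> list of packets (first buffer)
        self.first_used = 0
        self.invalid = set()     # cache queues currently in the invalid state
        self.second = []         # second buffer
        self.second_used = 0
        self.control_plane = []  # packets handed off to the control plane device

    def receive(self, queue_id, packet):
        if queue_id not in self.invalid and self.first_used + len(packet) <= self.first_capacity:
            self.first.setdefault(queue_id, []).append(packet)
            self.first_used += len(packet)
            if self.first_used >= self.first_capacity:
                # queue state -> invalid once the first buffer reaches capacity
                self.invalid.add(queue_id)
        elif len(packet) < self.second_capacity:
            self.second.append(packet)
            self.second_used += len(packet)
            if self.second_used >= self.second_capacity:
                # send all data packets in the second buffer to the control plane
                self.control_plane.extend(self.second)
                self.second.clear()
                self.second_used = 0
```

The second buffer thus acts as a batching stage: individual overflow packets are not punted to the control plane one at a time but accumulated and delivered together.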
Read instruction queues in a network device
To more efficiently utilize buffer resources, schedulers within a traffic manager may generate and queue read instructions for reading buffered portions of data units that are ready to be sent to the egress blocks. The traffic manager may be configured to select a read instruction for a given buffer bank from the read instruction queues based on a scoring mechanism or other selection logic. To avoid sending too much data to an egress block during a given time slot, once a data unit portion has been read from the buffer, it may be temporarily stored in a shallow read data cache. Alternatively, a single, non-bank specific controller may determine all of the read instructions and write operations that should be executed in a given time slot. The read instruction queue architecture may be duplicated for link memories and other memories in addition to the buffer memory.
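A simplified sketch of per-bank read instruction queues with a score-based selection step is shown below. The scoring function used here (serve the deepest queue first) is an assumption; the abstract only says "a scoring mechanism or other selection logic".

```python
from collections import deque


class ReadScheduler:
    def __init__(self, banks):
        # one read instruction queue per buffer bank
        self.queues = {b: deque() for b in banks}

    def enqueue(self, bank, instruction):
        self.queues[bank].append(instruction)

    def select(self, bank):
        """Pick the next read instruction for the given buffer bank."""
        q = self.queues[bank]
        return q.popleft() if q else None

    def select_any(self):
        # Example cross-bank selection logic: serve the deepest queue first.
        bank = max(self.queues, key=lambda b: len(self.queues[b]))
        return self.select(bank)
```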
PRIORITY-BASED FLOW CONTROL
Some embodiments provide a method for a hardware forwarding element. The method adds a received packet to a buffer. The method determines whether adding the packet to the buffer causes the buffer to pass one of multiple flow control thresholds, each of which corresponds to a different packet priority. When adding the packet to the buffer causes the buffer to pass a particular flow control threshold corresponding to a particular priority, the method generates a flow control message for the particular priority.
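The threshold-crossing behavior can be modeled in a few lines. A flow control message for priority p is generated only when adding a packet moves occupancy across p's threshold, not on every packet while above it. The `"PAUSE"` message form and threshold values are illustrative.

```python
class PfcBuffer:
    def __init__(self, thresholds):
        # thresholds: {priority: occupancy_threshold}, one per packet priority
        self.thresholds = thresholds
        self.occupancy = 0
        self.messages = []

    def add_packet(self, size):
        before = self.occupancy
        self.occupancy += size
        for prio, thr in self.thresholds.items():
            if before < thr <= self.occupancy:
                # this packet pushed the buffer past the threshold for prio
                self.messages.append(("PAUSE", prio))
```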
Hierarchical statistically multiplexed counters and a method thereof
Embodiments of the present invention relate to an architecture that uses hierarchical statistically multiplexed counters to extend counter life by orders of magnitude. Each level includes statistically multiplexed counters, which comprise P base counters and S subcounters, wherein the S subcounters are dynamically concatenated with the P base counters. When a row in a level overflows, counters in the next level up are used to extend counter life. The hierarchical statistically multiplexed counters can be used with an overflow FIFO to further extend counter life.
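The overflow idea behind the hierarchy can be sketched with a rough model: each level holds narrow counters, and when a level-n counter wraps, the carry is absorbed one level up, extending the effective counter width. The statistical-multiplexing and subcounter-concatenation details of the patent are omitted here; this is only the carry-propagation skeleton.

```python
class HierarchicalCounter:
    def __init__(self, levels, bits_per_level):
        self.limit = 1 << bits_per_level   # wrap point of a single level
        self.levels = [0] * levels

    def increment(self, amount=1):
        carry = amount
        for i in range(len(self.levels)):
            total = self.levels[i] + carry
            self.levels[i] = total % self.limit
            carry = total // self.limit    # overflow is handled one level up
            if carry == 0:
                break

    def value(self):
        # reassemble the full count from the per-level digits
        return sum(v * self.limit ** i for i, v in enumerate(self.levels))
```

Because high levels are touched only on overflow, they can be fewer and wider (or live in slower memory) than the heavily updated base level.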
Data plane for processing function scalability
The present disclosure generally discloses a data plane configured for processing function scalability. The processing functions for which scalability is supported may include charging functions, monitoring functions, security functions, or the like.
APPLICATION AND NETWORK AWARE ADAPTIVE COMPRESSION FOR BETTER QOE OF LATENCY SENSITIVE APPLICATIONS
This disclosure is directed to embodiments of systems and methods for performing compression of data in a queue. A device intermediary between a client and a server may determine that a length of time to move existing data maintained in a queue from the queue exceeds a predefined threshold. The device may identify, responsive to the determination, a first quantity of the existing data to undergo compression, and a second quantity of the existing data according to a compression ratio of the compression. The device may reserve, according to the second quantity, a first portion of the queue that maintained the first quantity of the existing data, to place compressed data obtained from applying the compression to the first quantity of the existing data. The device may place incoming data into the queue beyond the reserved first portion of the queue.
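A toy version of the compression decision follows: if draining the queued bytes at the current link rate would take too long, compress the oldest portion in place and let new data enter behind the now-smaller reserved region. The drain threshold, the 50% "first quantity" choice, and the use of zlib are all assumptions for illustration.

```python
import zlib

DRAIN_THRESHOLD_S = 0.05  # assumed acceptable drain time


def maybe_compress(queue_bytes: bytearray, link_bytes_per_s: float) -> bytes:
    drain_time = len(queue_bytes) / link_bytes_per_s
    if drain_time <= DRAIN_THRESHOLD_S:
        return bytes(queue_bytes)           # drains fast enough; leave as-is
    first_quantity = len(queue_bytes) // 2  # portion selected for compression
    compressed = zlib.compress(bytes(queue_bytes[:first_quantity]))
    # the reserved first portion now holds compressed data; the rest follows
    return compressed + bytes(queue_bytes[first_quantity:])
```

Compressing only when the queue is slow to drain is what makes the scheme latency-aware: a lightly loaded queue pays no CPU cost for compression.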
WiFi receiver architecture
A communication device determines that a data rate of a data packet exceeds a threshold, and in response, lowers a processing speed of an equalizer device to prevent a buffer from overflowing. The buffer stores outputs generated by the equalizer device. A forward error correction code decoder device of the communication device processes outputs of the equalizer device corresponding to the data packet to generate decoded information corresponding to the data packet. The communication device transmits an acknowledgment packet to acknowledge the data packet such that transmission of the acknowledgment packet begins within a required time period, defined by a communication protocol, after an end of the data packet.
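The rate-matching condition implied by the abstract can be stated as a one-line rule: if the incoming data rate exceeds what the downstream stages can drain, scale the equalizer's processing speed down so its output rate never exceeds the buffer's drain rate. The function below is a hypothetical illustration of that rule, not the patented circuit.

```python
def equalizer_speed(data_rate_mbps: float, drain_rate_mbps: float,
                    full_speed: float, threshold_mbps: float) -> float:
    """Return the equalizer speed (fraction of full speed) for a packet."""
    if data_rate_mbps <= threshold_mbps:
        return full_speed
    # Scale down so that buffer input rate <= buffer drain rate,
    # keeping the equalizer-output buffer from overflowing.
    return full_speed * (drain_rate_mbps / data_rate_mbps)
```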
TRAFFIC MANAGEMENT IN A NETWORK SWITCHING SYSTEM WITH REMOTE PHYSICAL PORTS
In a switching system that comprises a central switching device and at least one port extender device, the central switching device includes at least one port configured to interface with the port extender device, and the port extender device includes a plurality of front ports for interfacing with one or more networks. The central switching device includes a processor that processes packets received from the at least one port extender device, and a plurality of egress queues for storing processed packets that are to be forwarded to the at least one port extender device for transmission via respective ones of the front ports. The central switching device also includes a flow control processor configured to, responsively to flow control messages received from the at least one port extender device, control transmission of packets to the at least one port extender device to prevent overflow of egress queues of the port extender device.
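The flow-control reaction at the central device can be sketched as follows: transmission toward a remote front port stops while that port is paused, which keeps the extender's small egress queue from overflowing. The `"XOFF"`/`"XON"` message names and the one-packet-per-port service loop are illustrative assumptions.

```python
from collections import deque


class CentralEgress:
    def __init__(self, front_ports):
        # one egress queue per remote front port on the extender
        self.queues = {p: deque() for p in front_ports}
        self.paused = set()
        self.wire = []  # packets actually transmitted to the port extender

    def enqueue(self, port, packet):
        self.queues[port].append(packet)

    def on_flow_control(self, port, message):
        # flow control message received from the port extender device
        if message == "XOFF":
            self.paused.add(port)
        else:
            self.paused.discard(port)

    def service(self):
        # transmit one packet per non-paused front port
        for port, q in self.queues.items():
            if port not in self.paused and q:
                self.wire.append((port, q.popleft()))
```

Queuing thus happens in the central device's larger memory, with the extender only signaling back-pressure per front port.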
Method of data caching in delay tolerant network based on information centric network, computer readable medium and device for performing the method
Provided is a method of data caching in a delay tolerant network based on an information centric network, and a recording medium and a device for performing the same. The data caching method includes: checking a remaining buffer amount and a buffer usage amount of a node; caching data received from another node in the node according to a data caching policy, in case the remaining buffer amount of the node is greater than a preset remaining buffer amount threshold; deleting data cached in the node according to a data deletion policy, in case the buffer usage amount of the node is less than a preset buffer usage amount threshold; and setting an initial Time-to-Live (TTL) value of the data received from another node, or updating the TTL of data cached in the node, using information of the data received from another node or information of the node.
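An illustrative version of the caching decision follows: cache only while the remaining buffer exceeds a threshold, attach or refresh a TTL on each item, and evict expired items as the deletion policy. The class name, thresholds, and expiry-based deletion policy are assumptions; the patent's usage-amount deletion trigger is simplified away here.

```python
import time


class DtnNode:
    def __init__(self, capacity, min_remaining, default_ttl):
        self.capacity = capacity
        self.min_remaining = min_remaining   # preset remaining-buffer threshold
        self.default_ttl = default_ttl       # initial TTL for new data
        self.cache = {}                      # name -> (size, expiry_time)
        self.used = 0

    def remaining(self):
        return self.capacity - self.used

    def receive(self, name, size, ttl=None):
        # data caching policy: cache only if enough buffer remains afterward
        if self.remaining() - size > self.min_remaining:
            expiry = time.time() + (ttl if ttl is not None else self.default_ttl)
            if name in self.cache:
                self.used -= self.cache[name][0]  # TTL update for known data
            self.cache[name] = (size, expiry)
            self.used += size

    def expire(self, now=None):
        now = time.time() if now is None else now
        for name in [n for n, (_, exp) in self.cache.items() if exp <= now]:
            self.used -= self.cache.pop(name)[0]  # drop expired data
```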