H04L49/9084

HOST DEVICE WITH MULTI-PATH LAYER CONFIGURED FOR DETECTION AND RESOLUTION OF OVERSUBSCRIPTION CONDITIONS
20200204495 · 2020-06-25 ·

An apparatus comprises a host device configured to communicate over a network with a storage system comprising a plurality of storage devices. The host device comprises a set of input-output queues and a multi-path input-output driver configured to select input-output operations from the set of input-output queues for delivery to the storage system over the network. The multi-path input-output driver is further configured to maintain payload size counters to track outstanding command payload for respective ones of a plurality of paths from the host device to the storage system, to detect an oversubscription condition relating to at least one of the paths based at least in part on values of one or more of the payload size counters, and to initiate one or more automated actions responsive to the detected oversubscription condition. For example, automated deployment of one or more additional paths associated with respective spare communication links between the host device and the storage system may be initiated.
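The counter-based detection described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class name `PathMonitor`, the 8 MiB threshold, and the spare-path deployment policy are all invented for the example.

```python
# Hypothetical sketch: per-path payload size counters with an
# oversubscription threshold. When a path's outstanding payload
# exceeds the threshold, a spare communication link is deployed
# as an additional path. Names and threshold are assumptions.

OVERSUB_THRESHOLD = 8 * 1024 * 1024  # 8 MiB of outstanding payload per path

class PathMonitor:
    def __init__(self, paths, spare_paths):
        self.counters = {p: 0 for p in paths}   # outstanding payload bytes
        self.spare_paths = list(spare_paths)

    def on_dispatch(self, path, payload_size):
        """Called when an I/O operation is sent down a path."""
        self.counters[path] += payload_size

    def on_complete(self, path, payload_size):
        """Called when the storage system acknowledges the I/O."""
        self.counters[path] -= payload_size

    def check_oversubscription(self):
        """Detect oversubscribed paths; deploy a spare link if available."""
        actions = []
        for path, outstanding in list(self.counters.items()):
            if outstanding > OVERSUB_THRESHOLD and self.spare_paths:
                spare = self.spare_paths.pop(0)
                self.counters[spare] = 0   # new path joins the monitored set
                actions.append((path, spare))
        return actions
```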

Method and system for storing packets for bonded communication links

Method and system for storing packets received from bonded communication links according to the latency of the communication link that has the largest latency among all communication links of the bonded communication links. Embodiments of the present invention can be applied to bonded communication links, including wireless connections, Ethernet connections, Internet Protocol connections, asynchronous transfer mode, virtual private networks, WiFi, high-speed downlink packet access, GPRS, LTE, and X.25. The present invention presents methods comprising the steps of estimating a storage size of a queue, wherein the queue is for storing the one or more packets received from the bonded communication links. The storage size is based on one or more factors, including the largest latency, the bandwidth of each of the plurality of communication links, and an allowed time duration of packet storage.
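The sizing rule above can be illustrated with a short sketch: the queue must absorb what all links can deliver during the largest link latency, capped by the allowed storage duration. The function name and the exact combination of factors are assumptions for illustration, not taken from the claims.

```python
# Illustrative sketch of the storage-size estimate for bonded links.
# Formula is an assumed interpretation of the factors named in the
# abstract: largest latency, per-link bandwidth, allowed hold time.

def estimate_queue_size(latencies_s, bandwidths_bps, max_hold_s):
    """Return an estimated queue size in bytes for bonded links.

    latencies_s    -- per-link latency in seconds
    bandwidths_bps -- per-link bandwidth in bits per second
    max_hold_s     -- allowed time a packet may sit in the queue
    """
    largest_latency = max(latencies_s)
    hold = min(largest_latency, max_hold_s)   # cap by allowed duration
    total_bps = sum(bandwidths_bps)           # aggregate bonded bandwidth
    return int(total_bps * hold / 8)          # bits -> bytes
```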

REMOTE MEMORY MANAGEMENT
20200192857 · 2020-06-18 ·

Remote memory management of the memory of a consumer computer by a producer computer is described. A system is described that can include a first computer, and a second computer communicatively coupled to the first computer via a remote direct memory access enabled communication network. The first computer can include a first operating system. The second computer can include a second operating system and a second memory. The second memory can include a plurality of buffers. The first computer can remotely manage the plurality of buffers of the second memory of the second computer without involving either the first operating system or the second operating system. The managing can further include the first computer identifying available buffers amongst the plurality of buffers. Related methods, apparatuses, articles, non-transitory computer program products, and non-transitory computer-readable media are also within the scope of this disclosure.

CONTROL APPARATUS

It is possible to perform transfer with low latency. The control apparatus includes a routing control unit, transmission queues, and a plurality of controllers. The routing control unit includes a buffer; a normal transmission unit configured to output, among inputted frames, a frame other than a frame to be retransmitted to the transmission queue of the controller corresponding to a network serving as a transfer destination, and, when the controller corresponding to the network serving as the transfer destination is in a full state in which no more frames can be stored in the transmission queue, specify the inputted frame as the frame to be retransmitted and store the inputted frame in the buffer; and a signal handling unit configured to, when a cancellation signal indicating that the full state has been canceled is received from any of the plurality of controllers, output, to the transmission queue of the controller that has transmitted the cancellation signal, the frame to be retransmitted that is to be transferred to the network corresponding to that controller. When the full state is canceled, the controller transmits the cancellation signal to the signal handling unit.
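The hold-and-flush flow above can be sketched briefly. All names (`RoutingControl`, `send`, `on_cancellation`) and the bounded-queue model are invented for illustration; the sketch only mirrors the described behavior of holding frames when a controller's queue is full and flushing them on the cancellation signal.

```python
# Sketch: frames go to a controller's bounded transmission queue;
# when the queue is full the frame is held in a buffer, and a
# cancellation signal (full state canceled) flushes held frames
# destined for that controller's network.

from collections import deque

class RoutingControl:
    def __init__(self, queue_capacity):
        self.capacity = queue_capacity
        self.tx_queues = {}        # network id -> deque of frames
        self.retransmit = deque()  # (network id, frame) awaiting space

    def send(self, network, frame):
        q = self.tx_queues.setdefault(network, deque())
        if len(q) < self.capacity:
            q.append(frame)        # normal transmission
        else:
            self.retransmit.append((network, frame))  # full: hold frame

    def on_cancellation(self, network):
        """A controller reports its full state is canceled; flush frames."""
        q = self.tx_queues.setdefault(network, deque())
        held = [f for (n, f) in self.retransmit if n == network]
        self.retransmit = deque((n, f) for (n, f) in self.retransmit
                                if n != network)
        for frame in held:
            if len(q) < self.capacity:
                q.append(frame)
            else:
                self.retransmit.append((network, frame))  # still no room
```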

Network Device
20200177514 · 2020-06-04 ·

This application provides a network device, including a buffer module, a counting module, a control module, and a sending module. The buffer module includes N queues, configured to buffer M data streams, where N is less than M. The counting module includes M counters, the M counters are in a one-to-one correspondence with the M data streams, and the M counters are configured to count buffer quantities for the M data streams in the N queues. The control module is configured to: when a count value on a first counter exceeds a corresponding threshold, discard a to-be-enqueued data packet of a data stream corresponding to the first counter, or control the sending module to send pause indication information to an upper-level control module. On one hand, resource consumption of the network device can be reduced. On the other hand, buffer pressure of the network device can be relieved.
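The counting scheme above can be sketched in a few lines: M per-stream counters guard N shared queues (N < M), and a stream whose counter reaches its threshold has its new packets dropped (or a pause indication could be sent instead). The class and method names are invented for the example.

```python
# Minimal sketch of per-stream admission control: each stream has a
# counter of how many of its packets are buffered; crossing the
# threshold rejects further packets until some are dequeued.

class StreamCounters:
    def __init__(self, thresholds):
        self.thresholds = thresholds             # per-stream thresholds
        self.counts = {s: 0 for s in thresholds}

    def try_enqueue(self, stream):
        """Return True to buffer the packet, False to drop (or pause)."""
        if self.counts[stream] >= self.thresholds[stream]:
            return False   # drop, or send pause indication upstream
        self.counts[stream] += 1
        return True

    def on_dequeue(self, stream):
        """A buffered packet of this stream left its queue."""
        self.counts[stream] -= 1
```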

Network interface device that sets an ECN-CE bit in response to detecting congestion at an internal bus interface

A network device includes a Network Interface Device (NID) and multiple servers. Each server is coupled to the NID via a corresponding PCIe bus. The NID has a network port through which it receives packets. The packets are destined for one of the servers. The NID detects a PCIe congestion condition regarding the PCIe bus to the server. Rather than transferring the packet across the bus, the NID buffers the packet and places a pointer to the packet in an overflow queue. If the level of bus congestion is high, the NID sets the packet's ECN-CE bit. When PCIe bus congestion subsides, the packet passes to the server. The server responds by returning an ACK whose ECE bit is set. The originating TCP endpoint in turn reduces the rate at which it sends data to the destination server, thereby reducing congestion at the PCIe bus interface within the network device.
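The marking behavior above can be sketched as follows. This is a simplified assumption-laden illustration: the congestion threshold, the class name, and the use of overflow-queue depth as the "high congestion" signal are invented; only the ECN encoding is standard (the ECN field is the low 2 bits of the IP TOS/traffic-class byte, with CE = 0b11).

```python
# Sketch: when the internal bus is congested, buffer the packet in an
# overflow queue; if congestion is high, set ECN-CE in the IP header
# so the originating TCP endpoint slows down.

from collections import deque

ECN_CE = 0b11          # Congestion Experienced codepoint
HIGH_CONGESTION = 4    # assumed overflow-queue depth that triggers marking

class NidPort:
    def __init__(self):
        self.overflow = deque()   # buffered packets awaiting bus space

    def on_packet(self, tos_byte, bus_congested):
        """Return (tos_byte, queued) after applying ECN-CE marking."""
        if bus_congested:
            if len(self.overflow) >= HIGH_CONGESTION:
                tos_byte = (tos_byte & ~0b11) | ECN_CE  # mark CE
            self.overflow.append(tos_byte)
            return tos_byte, True
        return tos_byte, False   # forwarded straight across the bus
```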

System and method to perform lossless data packet transmissions
11876735 · 2024-01-16 ·

A system may include a primary memory, a secondary memory, and a processor that may be communicatively coupled to one another. The processor may be configured to control data packet transmissions received via an input to the primary memory and the secondary memory. Further, the processor may be configured to monitor a current buffering level of the primary memory and compare the current buffering level to a first buffering threshold. The first buffering threshold may be indicative of a buffering capacity difference between a first buffering capacity of the primary memory and a second buffering capacity of the secondary memory. In response to determining that the current buffering level is equal to or greater than the first buffering threshold, the processor may pause the data packet transmissions via the input to the primary memory and the secondary memory.
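The pause decision above can be sketched under assumed semantics: the threshold equals the capacity gap between the two memories, and reaching it pauses input to both. The class name and the exact arithmetic are illustrative assumptions.

```python
# Sketch: pause packet input once the primary memory's buffering level
# reaches a threshold derived from the capacity difference between the
# primary and secondary memories.

class PacketBuffer:
    def __init__(self, primary_capacity, secondary_capacity):
        # first buffering threshold: capacity difference between memories
        self.threshold = primary_capacity - secondary_capacity
        self.level = 0          # current buffering level of primary memory
        self.paused = False

    def on_receive(self, size):
        """Buffer a packet unless input is paused; update pause state."""
        if self.paused:
            return False        # transmissions via the input are paused
        self.level += size
        if self.level >= self.threshold:
            self.paused = True  # pause input to both memories
        return True
```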

STREAMING MEDIA DELIVERY SYSTEM
20200153881 · 2020-05-14 ·

Streaming media, such as audio or video files, is sent via the Internet. The media are immediately played on a user's computer. Audio/video data is transmitted from the server under control of a transport mechanism. A server buffer is prefilled with a predetermined amount of the audio/video data. When the transport mechanism causes data to be sent to the user's computer, it is sent more rapidly than it is played out by the user system. The audio/video data in the user buffer accumulates; and interruptions in playback as well as temporary modem delays are avoided.

EGRESS FLOW MIRRORING IN A NETWORK DEVICE
20200153759 · 2020-05-14 ·

At least a payload of a packet that is received by a network device is stored in a packet memory. The packet is processed at least to determine at least one egress port via which the packet is to be transmitted, modify a header of the packet to generate a modified header, and determine, based at least in part on the modified header, whether the packet is to be transmitted or to be discarded by the network device. In response to determining that the packet is to be transmitted, the at least the payload of the packet is retrieved from the packet memory, a transmit packet is generated at least by combining the at least the payload of the packet with the modified header, and the transmit packet is transmitted via the determined at least one egress port of the network device.

Store and forward logging in a content delivery network
10630611 · 2020-04-21 ·

A computer-implemented method on a device in a content delivery (CD) network. The device has hardware including storage with at least one first class of storage and at least one second class of storage, the first class of storage being faster than the second class of storage. A first portion of the first class of storage is allocated for log data, and a second portion of the second class of storage is allocated for log data. The method includes obtaining log event data from at least one component or service on the device that is to be delivered to a component or service on a distinct device. Each log event data item has a priority. If a connection to at least one external location is lost, at least some of the log event data items are selectively stored in the storage, wherein the storing is based on the priority of the log event data items. Otherwise, if the connection is not lost, at least some of the log event data items are sent to the at least one external location.
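The store-and-forward behavior above can be sketched briefly: while the uplink is down, events are kept in faster or slower storage by priority; otherwise they are sent on. The function name, the priority cutoff, and the rule that high priority maps to faster storage are all assumptions for illustration.

```python
# Sketch: priority-based selective storage of log events when the
# connection to the external location is lost.

FAST_PRIORITY_CUTOFF = 5   # assumed: priority >= cutoff -> faster storage

def handle_log_events(events, connected, fast_store, slow_store, send):
    """events: iterable of (priority, data) tuples."""
    for priority, data in events:
        if connected:
            send(data)                  # deliver to the external location
        elif priority >= FAST_PRIORITY_CUTOFF:
            fast_store.append(data)     # first (faster) class of storage
        else:
            slow_store.append(data)     # second class of storage
```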