Patent classifications
H04L49/9026
PACKET FORWARDING APPARATUS WITH BUFFER RECYCLING AND ASSOCIATED PACKET FORWARDING METHOD
A packet forwarding apparatus includes a first storage device and a processor. The first storage device has a plurality of buffers allocated therein, and at least one buffer included in the plurality of buffers is arranged to buffer at least one packet. The processor is arranged to execute a Linux kernel to perform software-based packet forwarding associated with the at least one packet. The at least one buffer allocated in the first storage device is recycled through direct memory access (DMA) management, and is reused for buffering at least one other packet.
MULTI-STREAM SCHEDULING FOR TIME SENSITIVE NETWORKING
A network interface device for implementing multi-stream scheduling for time sensitive networking includes direct memory access (DMA) circuitry, comprising: descriptor parsing circuitry to read a packet descriptor from a descriptor cache, wherein the packet descriptor includes at least one scheduling control parameter including: a launch time offset, a gate cycle offset, or a reduction ratio; wherein the packet descriptor is associated with a packet stream having a traffic class; and scheduling circuitry to schedule packets from the packet stream for transmission using the at least one scheduling control parameter.
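The three scheduling control parameters named in the abstract can be combined into a per-packet launch-time computation. The sketch below is a hypothetical interpretation, not the patented circuit: the descriptor fields and the formula (skip cycles per the reduction ratio, then offset from the gate cycle start) are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class PacketDescriptor:
    # Scheduling control parameters named in the abstract (units assumed: ns).
    launch_time_offset: int   # delay after the gated window opens
    gate_cycle_offset: int    # shift relative to the gate-control cycle start
    reduction_ratio: int      # transmit on every Nth cycle only
    traffic_class: int

def launch_time(desc, cycle_start, cycle_index):
    """Return a transmit timestamp for this cycle, or None if the
    reduction ratio skips the cycle for this stream."""
    if cycle_index % desc.reduction_ratio != 0:
        return None
    return cycle_start + desc.gate_cycle_offset + desc.launch_time_offset

d = PacketDescriptor(launch_time_offset=500, gate_cycle_offset=100,
                     reduction_ratio=2, traffic_class=5)
assert launch_time(d, cycle_start=1_000_000, cycle_index=0) == 1_000_600
assert launch_time(d, cycle_start=1_010_000, cycle_index=1) is None
```

A reduction ratio of 2 halves the stream's effective rate: every other gate cycle carries no packet from this stream.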
Combined write enable mask and credit return field
A credit return field is used in a credit-based flow control system to indicate that one or more credits are being returned to a sending device from a receiving device. Based on the number of credits available, the sending device determines whether to send data or wait until more credits are returned. A write enable mask allows a wide data field to be used even when a smaller amount of data is to be written. A novel data packet uses a combined write enable mask and credit return field. In one mode, the field contains a write enable mask. In another mode, the field contains credit return data. If the field contains credit return data, a default value (e.g., all ones) is used for the write enable mask. The mode may be selected based on another value in the data packet.
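The mode-dependent decoding of the shared field can be sketched in a few lines. This is an illustrative model only: the field width, the mode-bit encoding, and the function name are assumptions, while the all-ones default mask in credit-return mode comes directly from the abstract.

```python
FIELD_WIDTH = 8  # illustrative width of the shared field, in bits

def decode_field(mode_bit, field):
    """Interpret the shared field as either a write enable mask or
    credit return data, per a mode value elsewhere in the packet."""
    if mode_bit == 0:
        # The field carries the write enable mask directly; no credits.
        return {"write_mask": field, "credits": 0}
    # The field carries credits; the mask defaults to all ones,
    # i.e. every byte lane of the wide data field is written.
    return {"write_mask": (1 << FIELD_WIDTH) - 1, "credits": field}

assert decode_field(0, 0b00001111) == {"write_mask": 0b00001111, "credits": 0}
assert decode_field(1, 3) == {"write_mask": 0xFF, "credits": 3}
```

Sharing one field this way trades mask granularity for credit-return bandwidth: credits can only ride along on packets that write the full wide field.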
PACKET PROCESSING OF STREAMING CONTENT IN A COMMUNICATIONS NETWORK
Aspects of the present disclosure include devices within a transmission path of streamed content forwarding received data packets of the stream to the next device or “hop” in the path prior to buffering the data packet at the device. In this method, typical buffering of the data stream may therefore occur at the destination device for presentation at a consuming device, while the devices along the transmission path may transmit a received packet before buffering it. Further, devices along the path may also buffer the content stream after forwarding, to fill subsequent requests for dropped data packets of the content stream. Also, in response to receiving a request for the content stream, a device may first transmit a portion of the contents of its gateway buffer to the requesting device to fill a respective buffer at the receiving device.
DEVICE AND METHOD FOR PROCESSING DATA PACKET
An electronic device, according to various embodiments of the present invention, comprises a network connection device, at least one processor, and a memory operatively connected to the at least one processor, wherein the memory stores instructions which, when executed, cause the at least one processor to: receive a data packet from the network connection device; add the data packet to a packet list corresponding to the data packet; and when the number of data packets included in the packet list is less than a threshold value, flush the data packets to a network stack on the basis of a flush time value for controlling a packet aggregation function, wherein the flush time value may be determined on the basis of the network throughput.
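The aggregation logic described above — accumulate packets in a list and, when the list is below the threshold, flush based on a throughput-derived flush time — can be approximated as follows. This is a minimal sketch under assumed semantics (flush when the threshold is reached or the flush time has elapsed since the first packet); the names and units are illustrative.

```python
class PacketAggregator:
    """Hold packets in a list; flush them to the network stack when the
    list reaches a threshold or a throughput-derived flush time expires."""
    def __init__(self, threshold, flush_time, stack):
        self.threshold = threshold
        self.flush_time = flush_time    # seconds, derived from throughput
        self.stack = stack              # stand-in for the network stack
        self.packets = []
        self.first_arrival = None

    def add(self, pkt, now):
        if not self.packets:
            self.first_arrival = now
        self.packets.append(pkt)
        if (len(self.packets) >= self.threshold
                or now - self.first_arrival >= self.flush_time):
            self.flush()

    def flush(self):
        self.stack.extend(self.packets)
        self.packets.clear()

stack = []
agg = PacketAggregator(threshold=3, flush_time=0.005, stack=stack)
agg.add("p1", now=0.000)              # below threshold, timer not expired
agg.add("p2", now=0.006)              # flush time exceeded -> flushed early
assert stack == ["p1", "p2"]
```

Tying the flush time to measured throughput is the key idea: on a fast link, aggregation can wait longer to batch more packets; on a slow link, a short flush time keeps latency bounded even when the threshold is never reached.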
CPU AND PRIORITY BASED EARLY DROP PACKET PROCESSING SYSTEMS AND METHODS
Described embodiments provide systems and methods for CPU load and priority based early drop packet processing. A device can establish a priority level for each traffic class of a plurality of traffic classes. The device can receive a plurality of packets. The device can determine a processing level of one or more processors of the device prior to processing the plurality of packets. The device can select one or more packets of the plurality of packets to drop responsive to the priority level of one or more traffic classes associated with the one or more packets and the processing level of the one or more processors.
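A simple model of CPU-load and priority based early drop: when processor load is high, select the lowest-priority packets for dropping before they consume processing cycles. The load threshold and the "drop classes below the median priority" policy are assumptions for illustration, not the claimed selection method.

```python
def select_drops(packets, priority, cpu_load, load_limit=0.8):
    """Return packets to drop before processing: under high CPU load,
    shed the lowest-priority traffic classes first."""
    if cpu_load < load_limit:
        return []                       # no pressure, process everything
    # Illustrative policy: drop classes whose priority is below the median.
    floor = sorted(priority.values())[len(priority) // 2]
    return [p for p in packets if priority[p["tc"]] < floor]

priority = {"voice": 3, "video": 2, "bulk": 1}   # per-traffic-class levels
pkts = [{"id": 1, "tc": "voice"}, {"id": 2, "tc": "bulk"}]
assert select_drops(pkts, priority, cpu_load=0.5) == []
assert select_drops(pkts, priority, cpu_load=0.9) == [{"id": 2, "tc": "bulk"}]
```

The "early" in early drop matters: the load check happens before any packet processing, so discarded low-priority traffic never competes with high-priority traffic for CPU time.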
DATA PROCESSING SYSTEM AND ACCELERATOR THEREFOR
A data processing system includes a host and an accelerator. The host transmits, to the accelerator, input data together with data identification information based on a data classification criterion. When the input data is received from the host, the accelerator classifies the input data as any one of feature data, a parameter, or a bias based on the data identification information, distributes the input data, performs pre-processing on the feature data, and outputs computed result data to the host or feeds the result data back so that computation processing is performed on the result data again.
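The classification-and-dispatch step can be sketched as a small dispatcher keyed on the identification info, with pre-processing applied only to feature data. The numeric encoding, the function name, and the max-normalization pre-processing step are all hypothetical stand-ins.

```python
def dispatch(input_data, data_id):
    """Classify host input as feature data, a parameter, or a bias from
    the identification info; pre-process only feature data."""
    kinds = {0: "feature", 1: "parameter", 2: "bias"}  # illustrative encoding
    kind = kinds[data_id]
    if kind == "feature":
        # Stand-in pre-processing step (here, max-normalization).
        peak = max(input_data)
        input_data = [x / peak for x in input_data]
    return kind, input_data

kind, data = dispatch([2.0, 4.0], data_id=0)
assert kind == "feature" and data == [0.5, 1.0]
assert dispatch([0.1], data_id=2) == ("bias", [0.1])
```

Routing by an explicit identification tag rather than inspecting the data itself is what lets the accelerator distribute the three kinds to different internal paths without parsing the payload.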
Technologies for scalable network packet processing with lock-free rings
Technologies for network packet processing include a computing device that receives incoming network packets. The computing device adds the incoming network packets to an input lockless shared ring, and then classifies the network packets. After classification, the computing device adds the network packets to multiple lockless shared traffic class rings, with each ring associated with a traffic class and output port. The computing device may allocate bandwidth between network packets active during a scheduling quantum in the traffic class rings associated with an output port, schedule the network packets in the traffic class rings for transmission, and then transmit the network packets in response to scheduling. The computing device may perform traffic class separation in parallel with bandwidth allocation and traffic scheduling. In some embodiments, the computing device may perform bandwidth allocation and/or traffic scheduling on each traffic class ring in parallel. Other embodiments are described and claimed.