H04L49/9042

Packet descriptor storage in packet memory with cache

A first memory device stores (i) a head part of a FIFO queue structured as a linked list (LL) of LL elements arranged in the order in which the LL elements were added to the FIFO queue and (ii) a tail part of the FIFO queue. A second memory device stores a middle part of the FIFO queue, the middle part comprising LL elements following, in the order, the head part and preceding, in the order, the tail part. A queue controller retrieves LL elements in the head part from the first memory device, moves LL elements in the middle part from the second memory device to the head part in the first memory device prior to the head part becoming empty, and updates LL parameters corresponding to the moved LL elements to indicate that storage of the moved LL elements has changed from the second memory device to the first memory device.
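The head/middle/tail split described above can be sketched in Python. The capacities, refill policy, and the per-element `location` parameter below are illustrative assumptions, not details from the abstract; only the structure (fast-memory head and tail, slow-memory middle, refill before the head empties, location updates on move) follows the claim.

```python
from collections import deque

class TwoTierFifo:
    """Sketch: head and tail parts live in a fast first memory device,
    the middle part spills to a slower second memory device."""

    def __init__(self, head_capacity=3, tail_capacity=3):
        self.head = deque()      # first memory device: head part
        self.tail = deque()      # first memory device: tail part
        self.middle = deque()    # second memory device: middle part
        self.head_capacity = head_capacity
        self.tail_capacity = tail_capacity
        self.location = {}       # per-element parameter: which device holds it

    def enqueue(self, elem):
        self.tail.append(elem)
        self.location[elem] = "first"
        if len(self.tail) > self.tail_capacity:
            spill = self.tail.popleft()      # oldest tail element overflows
            self.middle.append(spill)
            self.location[spill] = "second"

    def _refill_head(self):
        # Move middle elements into the head *before* the head runs dry,
        # updating each moved element's location parameter.
        while self.middle and len(self.head) < self.head_capacity:
            moved = self.middle.popleft()
            self.head.append(moved)
            self.location[moved] = "first"

    def dequeue(self):
        if not self.head:
            self._refill_head()
        if not self.head:                    # middle empty too: drain the tail
            self.head, self.tail = self.tail, deque()
        elem = self.head.popleft()
        self._refill_head()                  # keep the head primed
        del self.location[elem]
        return elem
```

Because spills only ever move the oldest tail element and refills only ever move the oldest middle element, dequeues come out in strict enqueue order even as elements migrate between the two memories.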

Addressless merge command with data item identifier
10146468 · 2018-12-04 ·

An addressless merge command includes an identifier of an item of data, and a reference value, but no address. A first part of the item is stored in a first place. A second part is stored in a second place. To move the first part so that the first and second parts are merged, the command is sent across a bus to a device. The device translates the identifier into a first address ADR1, and uses ADR1 to read the first part. Stored in or with the first part is a second address ADR2 indicating where the second part is stored. The device extracts ADR2, and uses ADR1 and ADR2 to issue bus commands. Each bus command causes a piece of the first part to be moved. When the entire first part has been moved, the device returns the reference value to indicate that the merge command has been completed.
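A minimal Python model of that flow, assuming an illustrative memory layout that is not specified in the abstract: at ADR1 the first part is stored as a header of ADR2 and a length (two little-endian 32-bit words) followed by the payload, and the merge places the first part immediately before the second part.

```python
import struct

def addressless_merge(memory, id_table, item_id, ref_value, piece=4):
    """Sketch of a device handling an addressless merge command.
    The command carries only item_id and ref_value, no address."""
    adr1 = id_table[item_id]                 # translate identifier -> ADR1
    # ADR2 and the first part's length are stored with the first part (assumed layout).
    adr2, length = struct.unpack_from("<II", memory, adr1)
    src = adr1 + 8                           # payload follows the 8-byte header
    dst = adr2 - length                      # first part lands just before second part
    moved = 0
    while moved < length:                    # one bus command per piece
        n = min(piece, length - moved)
        memory[dst + moved : dst + moved + n] = memory[src + moved : src + moved + n]
        moved += n
    return ref_value                         # signal that the merge completed
```

The caller never supplies an address: the identifier-to-ADR1 table and the embedded ADR2 let the device derive both endpoints of the move itself.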

Bus interface unit and operating method therefor

A bus interface unit for exchanging data via a bus system includes at least one bus control unit for connection to the bus system, having a control unit that is configured to output data received via the bus control unit from the bus system, and/or data derived therefrom, to an external unit, and/or to output data obtained from an external unit, and/or data derived therefrom, via the bus control unit to the bus system.

POWER SAVE PROTOCOL FOR LOW POWER DEVICE EXPLOITING WAKEUP SIGNAL RECEIVER

Disclosed herein is a method performed by a wireless device. The method includes wirelessly transmitting a legacy physical layer protocol data unit (PPDU) before wirelessly transmitting a wake-up receiver PPDU to protect the wake-up receiver PPDU. The legacy PPDU includes a preamble and a legacy frame. The legacy frame includes a power management field that indicates that the wireless device is transitioning to a doze state to cause other wireless devices to refrain from transmitting to the wireless device.

CACHING A DATA PAYLOAD ON A PERIPHERAL DEVICE FOR DELIVERY TO A TARGET DEVICE

Described are techniques for caching a data payload on a peripheral device for delivery to a target device. The techniques include receiving, at a peripheral device via a short-range wireless protocol, a data payload intended for a target device, where the data payload is received from a source device configured to send the data payload to the target device. The techniques further include storing the data payload in a memory of the peripheral device for a time that allows the peripheral device to be placed in network proximity of the target device and transfer the data payload from the peripheral device to the target device. The techniques further include detecting the target device via a short-range wireless network, and sending the data payload to the target device via a short-range wireless protocol used by the target device.
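The store-and-forward relay described above reduces to a small cache keyed by target. The class and method names below are assumptions for the sketch; the radio protocols themselves are out of scope.

```python
class PeripheralCache:
    """Sketch: a peripheral caches a payload from a source device and
    delivers it later when the target device comes into proximity."""

    def __init__(self):
        self.pending = {}    # target id -> payload held in peripheral memory

    def receive_from_source(self, target_id, payload):
        # Payload arrives over a short-range link, addressed to the target.
        self.pending[target_id] = payload

    def on_target_detected(self, target_id, send):
        # Target seen on the short-range network: transfer and release.
        if target_id in self.pending:
            send(self.pending.pop(target_id))
```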

Packet processing method, network device, and related device

In a packet processing method, a network device receives a packet of an application running in a server connected to the network device. The network device separates the data of the application from the packet and writes the data of the application into memory in the server that is allocated to the application.
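The separate-and-write step can be sketched as a header strip followed by a direct write into the application's buffer; the fixed `header_len` and the buffer interface are assumptions for illustration.

```python
def process_packet(packet: bytes, header_len: int,
                   app_buffer: bytearray, write_off: int) -> int:
    """Sketch: separate the application data from the packet and write it
    into memory allocated to the application in the server."""
    data = packet[header_len:]                       # strip protocol headers
    app_buffer[write_off:write_off + len(data)] = data
    return write_off + len(data)                     # next free offset
```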

PROCESSING PACKETS ACCORDING TO HIERARCHY OF FLOW ENTRY STORAGES
20180262434 · 2018-09-13 ·

Some embodiments provide a method for processing a packet received by a managed forwarding element. The method performs a series of packet classification operations based on header values of the received packet. The packet classification operations determine a next destination of the received packet. When the series of packet classification operations specifies to send the packet to a network service that performs payload transformations on the packet, the method (1) assigns a service operation identifier to the packet that identifies the service operations for the network service to perform on the packet, (2) sends the packet to the network service with the service operation identifier, and (3) stores a cache entry for processing subsequent packets without the series of packet classification operations. The cache entry includes the assigned service operation identifier. The network service uses the assigned service operation identifier to process packets without performing its own classification operations.
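The cache-entry mechanism can be sketched as follows; the flow-key abstraction, identifier allocation scheme, and counter are illustrative assumptions, not claim details.

```python
class ForwardingElement:
    """Sketch: classify a flow once, cache the result together with a
    service operation identifier, and skip classification on later packets."""

    def __init__(self, classify):
        self.classify = classify    # slow path: header values -> (dest, ops)
        self.cache = {}             # flow key -> (next destination, op id)
        self.op_table = {}          # op id -> operations for the network service
        self.next_id = 0
        self.slow_path_hits = 0     # counts full classification runs

    def process(self, flow_key, headers):
        if flow_key in self.cache:
            return self.cache[flow_key]     # no classification operations needed
        self.slow_path_hits += 1
        dest, ops = self.classify(headers)  # series of classification operations
        op_id = self.next_id
        self.next_id += 1
        self.op_table[op_id] = ops          # service looks its operations up by id
        entry = (dest, op_id)
        self.cache[flow_key] = entry        # cache entry includes the assigned id
        return entry
```

The network service side needs only the identifier: it indexes `op_table` (or its own copy of the mapping) instead of reclassifying the packet.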

DETERMINISTIC AND EFFICIENT MESSAGE PACKET MANAGEMENT

Methods, devices, and systems for facilitation of deterministic management of a plurality of electronic message packets communicated to an application via a network from a plurality of message sources. The facilitation involves receiving an electronic message packet from the network, determining data indicative of the order in which the electronic message packet was received relative to previously received electronic message packets, and providing the order to the application.

Method of dynamically allocating buffers for packet data received onto a networking device

A method of dynamically allocating buffers involves receiving a packet onto an ingress circuit. The ingress circuit includes a memory that stores a free buffer list, and an allocated buffer list. Packet data of the packet is stored into a buffer. The buffer is associated with a buffer identification (ID). The buffer ID is moved from the free buffer list to the allocated buffer list once the packet data is stored in the buffer. The buffer ID is used to read the packet data from the buffer and into an egress circuit and is stored in a de-allocation buffer list in the egress circuit. A send buffer IDs command is received from a processor onto the egress circuit and instructs the egress circuit to send the buffer ID to the ingress circuit such that the buffer ID is pushed onto the free buffer list.
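The buffer-ID lifecycle above (free list → allocated list → de-allocation list → back to the free list on the processor's command) can be sketched in Python; the class and method names are assumptions for the sketch.

```python
from collections import deque

class IngressCircuit:
    def __init__(self, num_buffers):
        self.free = deque(range(num_buffers))   # free buffer list
        self.allocated = set()                  # allocated buffer list
        self.buffers = {}                       # buffer ID -> packet data

    def receive(self, packet):
        buf_id = self.free.popleft()            # take an ID from the free list
        self.buffers[buf_id] = packet           # store packet data in the buffer
        self.allocated.add(buf_id)              # move ID to the allocated list
        return buf_id

    def reclaim(self, buf_ids):
        for buf_id in buf_ids:                  # push IDs back onto the free list
            self.allocated.discard(buf_id)
            del self.buffers[buf_id]
            self.free.append(buf_id)

class EgressCircuit:
    def __init__(self, ingress):
        self.ingress = ingress
        self.dealloc = []                       # de-allocation buffer list

    def transmit(self, buf_id):
        data = self.ingress.buffers[buf_id]     # read packet data via the buffer ID
        self.dealloc.append(buf_id)             # hold the ID until told to release
        return data

    def send_buffer_ids(self):
        # "Send buffer IDs" command from the processor: return held IDs
        # to the ingress circuit so they rejoin the free list.
        ids, self.dealloc = self.dealloc, []
        self.ingress.reclaim(ids)
```

Deferring the free-list push until the explicit command lets the processor batch de-allocations rather than releasing each buffer as its packet leaves.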

USER TRAFFIC GENERATION METHOD AND APPARATUS
20180227233 · 2018-08-09 ·

A user traffic generation method is disclosed. The method includes: receiving a user traffic generation instruction; performing, according to the user traffic generation instruction and index information pre-stored in a first on-chip static random access memory (SRAM) of a field programmable gate array (FPGA), a prefetch operation and a cache operation on a user packet that is stored in a dynamic random access memory (DRAM) and indicated by the index information, where the first on-chip SRAM is configured to store index information of all user packets that need to be used, and the DRAM is configured to store all the user packets; and generating user traffic according to a user packet that is cached during the cache operation. According to the embodiments of the present invention, storage of a large number of user packets can be implemented, and user traffic can be generated at a line rate.
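The SRAM-index / DRAM-payload split can be sketched as follows; the `(offset, length)` index format and the cache policy are assumptions for illustration, and line-rate emission is modeled simply as returning the cached packets.

```python
class TrafficGenerator:
    """Sketch: a small on-chip index locates user packets in bulk DRAM,
    and prefetched packets are cached before traffic is generated."""

    def __init__(self, dram, sram_index):
        self.dram = dram          # DRAM: stores all the user packets
        self.index = sram_index   # SRAM: packet id -> (offset, length) in DRAM
        self.cache = {}           # on-chip cache filled by the prefetch operation

    def prefetch(self, packet_ids):
        # Prefetch + cache operation: pull the indicated packets out of DRAM.
        for pid in packet_ids:
            off, length = self.index[pid]
            self.cache[pid] = self.dram[off:off + length]

    def generate(self, packet_ids):
        # Generate user traffic from packets cached during the cache operation.
        return [self.cache[pid] for pid in packet_ids]
```

Keeping only the small index on-chip is what lets the design hold a large packet set in cheap DRAM while still feeding the transmit path from fast memory.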