H04L49/9031

JOINING DATA WITHIN A RECONFIGURABLE FABRIC
20180324112 · 2018-11-08 ·

Techniques are disclosed for managing data within a reconfigurable computing environment. In a multiple processing element environment, such as a mesh network or other suitable topology, there is an inherent need to pass data between processing elements. Subtasks are divided among multiple processing elements. The output resulting from the subtasks is then merged by a downstream processing element. In such cases, a join operation can be used to combine data from multiple upstream processing elements. A control agent executes on each processing element. A memory buffer is disposed between upstream processing elements and the downstream processing element. The downstream processing element is configured to automatically perform an operation based on the availability of valid data from the upstream processing elements.
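The join semantics described here can be sketched minimally in Python: downstream work fires only when every upstream buffer holds valid data. The queue objects and the `join_when_ready` helper are illustrative, not from the patent.

```python
from queue import Queue

def join_when_ready(upstream_buffers, op):
    """Fire the downstream operation only once every upstream
    buffer holds a valid item; otherwise the downstream stalls."""
    if all(not q.empty() for q in upstream_buffers):
        return op([q.get() for q in upstream_buffers])
    return None  # at least one input not yet valid

# Two upstream "processing elements" deposit subtask results.
a, b = Queue(), Queue()
a.put(3)
assert join_when_ready([a, b], sum) is None   # b not ready: no fire
b.put(4)
assert join_when_ready([a, b], sum) == 7      # both valid: join fires
```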

DATA TRANSMISSION DEVICE ON SERVER, DATA TRANSMISSION METHOD AND PROGRAM ON SERVER
20240333541 · 2024-10-03 ·

An on-server data transmission device (200) that performs data transfer control on an interface part in user space includes: a data transfer part (220) configured to launch a thread that monitors packet arrivals using a polling model; and a sleep control manager (210) configured to manage data arrival schedule information and deliver it to the data transfer part (220), thereby performing sleep control on the data transfer part (220). The data transfer part (220) is configured to put the thread into a sleep state based on the data arrival schedule information delivered from the sleep control manager (210), and to set a timer that expires immediately before data arrives, cancelling the sleep state and waking the thread.
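The timing logic above can be sketched as a small helper: the polling thread sleeps until a guard interval before the scheduled arrival, then resumes polling. The guard constant and function name are assumptions for illustration.

```python
import time

GUARD_S = 0.005  # wake slightly before the predicted arrival

def sleep_duration(next_arrival_ts, now=None):
    """Return how long the polling thread should sleep so its timer
    expires just before the scheduled data arrival."""
    now = time.monotonic() if now is None else now
    return max(0.0, next_arrival_ts - GUARD_S - now)

# Arrival predicted 100 ms from "now": sleep ~95 ms, then poll again.
assert abs(sleep_duration(0.100, now=0.0) - 0.095) < 1e-9
assert sleep_duration(0.001, now=0.0) == 0.0  # too close: stay awake
```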

Systems and methods for efficiently storing packet data in network switches
10044646 · 2018-08-07 ·

A network switch allocates large-scale memory units as data packets are received in order to implement per-queue, circular egress buffers. Each large-scale memory unit is larger than the maximum packet length of the received packets and is capable of storing a plurality of data packets, thereby reducing the number of memory allocation events that are required to process a given number of data packets. Efficient techniques for writing to and reading from the large-scale egress memory units have been developed and may be used to reduce processing delays. Such techniques are compatible with relatively inexpensive memory devices, such as dynamic random access memory (DRAM), that may be separate from the circuitry used to process the data packets. The described architectures are easily scalable so that a large number of ports (e.g., thousands) may be implemented at a relatively low cost and complexity without introducing significant processing delays.
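The allocation-saving effect can be illustrated with a toy Python model of one egress queue: a new large-scale unit is allocated only when the current one cannot hold the next packet. The unit size and class shape are assumptions, not the patented design.

```python
MAX_PKT = 1500          # maximum packet length (bytes)
UNIT = 16 * MAX_PKT     # each large-scale unit stores many packets

class EgressQueue:
    """Per-queue egress buffer built from large memory units; an
    allocation event occurs only when the current unit is full."""
    def __init__(self):
        self.free_in_unit = 0
        self.allocations = 0

    def enqueue(self, pkt_len):
        if pkt_len > self.free_in_unit:
            self.allocations += 1          # one allocation event
            self.free_in_unit = UNIT
        self.free_in_unit -= pkt_len

q = EgressQueue()
for _ in range(32):
    q.enqueue(MAX_PKT)                     # 32 max-size packets
assert q.allocations == 2                  # vs. 32 per-packet allocations
```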

SYSTEM AND METHOD TO EFFICIENTLY SERIALIZE PARALLEL STREAMS OF INFORMATION
20180167332 · 2018-06-14 ·

A system and method for serializing parallel streams of information. The system and method employ a plurality of buffers and a controller. The plurality of buffers are configured to store information received from a demodulator and output the stored information to a decoder. The controller is configured to store a plurality of frames of information output in a parallel manner from the demodulator into the plurality of buffers, and control the output of the plurality of buffers such that each of the plurality of frames is output to the decoder once stored.
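A minimal Python sketch of this serialization: frames from parallel streams are stored in per-stream buffers, and the controller drains each buffer to a single decoder output once frames are stored. The round-robin drain order is an assumption for illustration.

```python
from collections import deque

def serialize(parallel_frames):
    """Store frames arriving in parallel into per-stream buffers,
    then output each frame to the decoder once it is stored."""
    buffers = [deque(stream) for stream in parallel_frames]
    out = []
    while any(buffers):
        for buf in buffers:           # round-robin over the buffers
            if buf:
                out.append(buf.popleft())
    return out

# Three demodulator streams, two frames each, merged to one output.
assert serialize([["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]) == \
    ["a1", "b1", "c1", "a2", "b2", "c2"]
```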

PACKET VALIDATION IN VIRTUAL NETWORK INTERFACE ARCHITECTURE

Roughly described, a network interface device receiving data packets from a computing device for transmission onto a network, the data packets having a certain characteristic, transmits a packet only if the sending queue has authority to send packets having that characteristic. The data packet characteristics can include transport protocol number, source and destination port numbers, and source and destination IP addresses, for example. Authorizations can be programmed into the NIC by a kernel routine upon establishment of the transmit queue, based on the privilege level of the process for which the queue is being established. In this way, a user process can use an untrusted user-level protocol stack to initiate data transmission onto the network, while the NIC protects the remainder of the system or network from certain kinds of compromise.
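The per-queue authorization check can be sketched as a lookup keyed by packet characteristics. The queue identifiers, field names, and authorization table here are illustrative, not taken from the patent.

```python
# Authorizations programmed per transmit queue by a trusted kernel
# routine when the queue is established (table contents illustrative).
AUTH = {
    "q0": {("tcp", "10.0.0.5", 80)},   # q0 may send only this tuple
}

def nic_transmit(queue_id, proto, src_ip, port):
    """Transmit only if the sending queue has authority to send
    packets with this characteristic; otherwise drop."""
    return (proto, src_ip, port) in AUTH.get(queue_id, set())

assert nic_transmit("q0", "tcp", "10.0.0.5", 80) is True
assert nic_transmit("q0", "udp", "10.0.0.5", 80) is False   # wrong proto
assert nic_transmit("q1", "tcp", "10.0.0.5", 80) is False   # no authority
```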

Antenna buffer management for downlink physical layer

A method and system for managing a transmission buffer memory at a base station in a wireless communication system, the base station having a plurality of cells, the memory being circular and shared among the cells so that memory fragmentation when cells are removed or added is avoided.
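A toy Python model of the sharing idea: because all cells allocate from one rolling head of a single circular memory, removing or adding a cell leaves no per-cell hole to fragment. The class and its interface are assumptions for illustration.

```python
class SharedRing:
    """One circular buffer shared by all cells: regions are carved
    from a single rolling head rather than per-cell partitions."""
    def __init__(self, size):
        self.size, self.head = size, 0

    def alloc(self, n):
        start = self.head % self.size   # wrap around the circular memory
        self.head += n
        return start

ring = SharedRing(8)
assert [ring.alloc(3) for _ in range(3)] == [0, 3, 6]  # fills, then wraps
assert ring.alloc(3) == 1                              # 9 % 8
```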

Memory buffer management method and system having multiple receive ring buffers
09584446 · 2017-02-28 ·

The present invention is directed to a method and system of memory management that features dual buffer rings, each of which includes descriptors identifying addresses of a memory space, referred to as buffers, in which portions of data packets are stored. Typically, the header segment of each data packet is stored at a first set of a plurality of buffers, and the portion of the payload segment that does not fit among the buffers of the first set is stored in the buffers of a second set. In this manner, the size of the individual buffers associated with the first buffer rings may be kept to the smallest size of useable storage space, and the buffers corresponding to the second buffer ring may be arbitrary in size.
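The split between the two rings can be sketched as a small placement calculation: the header (plus any payload that fits alongside it) goes to small first-ring buffers, and only the overflow consumes a second-ring buffer. The buffer sizes and helper name are illustrative assumptions.

```python
SMALL = 128   # first-ring buffer size: the smallest useful size
LARGE = 2048  # second-ring buffers may be arbitrary in size

def place_packet(header_len, payload_len):
    """Return (small_buffers, large_buffers) used for one packet:
    header in the first ring, payload overflow in the second."""
    small_used = -(-header_len // SMALL)          # ceil division
    spill = max(0, header_len + payload_len - small_used * SMALL)
    large_used = -(-spill // LARGE) if spill else 0
    return small_used, large_used

assert place_packet(64, 40) == (1, 0)     # fits in one small buffer
assert place_packet(64, 1500) == (1, 1)   # payload spills to large ring
```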

Utilization based multi-buffer self-calibrated dynamic adjustment management

Provided are techniques for utilization based multi-buffer self-calibrated dynamic adjustment management. A sub-buffer is assigned to each entity of multiple entities. Under control of each of the multiple entities, the current utilization rates of the entity and each other entity are summed; a number of data segments for the entity is determined based on the entity's current utilization rate relative to that of each of the other entities; and the size of the assigned sub-buffer is adjusted based on that determination.
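The proportional adjustment can be sketched in a few lines of Python: each entity's sub-buffer is resized according to its utilization relative to the summed utilization of all entities. The function name and integer-division rounding are assumptions for illustration.

```python
def adjust_sub_buffers(total_size, utilization):
    """Resize each entity's sub-buffer in proportion to its current
    utilization rate relative to the sum over all entities."""
    total_util = sum(utilization.values())
    return {entity: total_size * u // total_util
            for entity, u in utilization.items()}

# A busy entity's sub-buffer grows at the expense of idle ones.
sizes = adjust_sub_buffers(1000, {"a": 60, "b": 30, "c": 10})
assert sizes == {"a": 600, "b": 300, "c": 100}
```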