H04L49/9052

Buffer management for multiple radio access technologies

Certain aspects of the present disclosure relate to methods and apparatus for buffer management at a user equipment (UE) for multiple radio access technologies (RATs). Certain aspects provide for transmitting data to the UE using a first RAT and a second RAT based on the assumed size of a first portion of a buffer at the UE, assumed to be allocated for storing data received using the first RAT, and the assumed size of a second portion of the buffer, assumed to be allocated for storing data received using the second RAT. The size of the first portion is based on a first number of resources the UE is capable of using for communicating and a second number of resources the UE is configured or allocated to use. The size of the second portion is based on the overall size of the buffer and the size of the first portion.
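The partitioning rule described above can be sketched in a few lines. This is a minimal illustration, not the claimed method: the function name and the proportional split (first portion scaled by the configured-to-capable resource ratio, second portion taking the remainder) are assumptions for clarity.

```python
def partition_buffer(total_size, capable_resources, allocated_resources):
    """Split a UE buffer between two RATs (illustrative sketch).

    The first portion scales with the fraction of the UE's resource
    capability actually configured/allocated for the first RAT; the
    second portion is the remainder of the overall buffer.
    """
    first = total_size * allocated_resources // capable_resources
    second = total_size - first
    return first, second

# Example: 1 MiB buffer, UE capable of 16 carriers, 4 configured for RAT 1
first, second = partition_buffer(1 << 20, 16, 4)
```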

Architecture for wireless avionics communication networks

Embodiments of the invention include methods and systems for architectures for wireless avionics communication networks. The embodiments further include detecting a signal strength of wireless nodes, assigning a primary data controller and standby data controller for each of the wireless nodes based at least in part on the signal strength, and generating a deployment matrix based on the assignment of the primary data controller and the standby data controller. The embodiments also include broadcasting the deployment matrix over a wired connection, allocating a buffer size based on data rates of each of the wireless nodes connected to the primary data controller and the standby data controller, and exchanging data based on the deployment matrix.
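The assignment and sizing steps above can be sketched as follows. All names and data shapes here are hypothetical: signal strengths are taken as per-node readings of each controller, the primary/standby pair is chosen by descending signal strength, and each controller's buffer is sized from the aggregate data rate of the nodes it serves over an assumed buffering window.

```python
def build_deployment(signal, data_rates, window_s=0.1):
    """signal: {node: {controller: dBm}}; data_rates: {node: bytes/s}.

    Returns (matrix, buffers): `matrix` maps each wireless node to its
    (primary, standby) data controllers, chosen by descending signal
    strength; `buffers` sizes each controller's buffer from the summed
    data rates of its connected nodes over one buffering window.
    """
    matrix, load = {}, {}
    for node, readings in signal.items():
        ranked = sorted(readings, key=readings.get, reverse=True)
        primary, standby = ranked[0], ranked[1]
        matrix[node] = (primary, standby)
        for ctrl in (primary, standby):          # both serve the node
            load[ctrl] = load.get(ctrl, 0) + data_rates[node]
    buffers = {c: int(rate * window_s) for c, rate in load.items()}
    return matrix, buffers
```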

Hybrid automatic repeat request management for differing types of hybrid automatic repeat request processes

A method, an apparatus, and a computer program product for wireless communication are provided. The apparatus may determine whether traffic received by the wireless communication device is associated with a first type of hybrid automatic repeat request (HARQ) process or a second type of HARQ process, and/or may allocate a sub-buffer for the traffic, wherein the sub-buffer is selected from a set of sub-buffers of a first size when the traffic is associated with the first type of HARQ process, wherein the sub-buffer is selected from a set of sub-buffers of a second size when the traffic is associated with the second type of HARQ process, and wherein at least one sub-buffer of the first size includes two or more sub-buffers of the second size.
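The sub-buffer relationship described above (large sub-buffers, each containing two or more small ones) can be sketched with a simple pool. The class name, string-typed HARQ process labels, and the split-on-demand policy are illustrative assumptions, not the claimed apparatus.

```python
class HarqBufferPool:
    """Carve a buffer into large sub-buffers, each splittable into
    `split` smaller sub-buffers, so a first-size sub-buffer contains
    two or more second-size sub-buffers."""

    def __init__(self, total, large_size, split=2):
        self.large = [i * large_size for i in range(total // large_size)]
        self.small_size = large_size // split
        self.small = []
        self.split = split

    def allocate(self, harq_type):
        """First-type HARQ traffic gets a large sub-buffer; second-type
        traffic gets a small one, splitting a large buffer on demand."""
        if harq_type == "first":
            return ("large", self.large.pop())
        if not self.small:                       # split one large buffer
            base = self.large.pop()
            self.small = [base + k * self.small_size
                          for k in range(self.split)]
        return ("small", self.small.pop())
```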

BUFFER CONTROL METHOD, NETWORK ELEMENT, AND CONTROLLER
20200120046 · 2020-04-16

A buffer control method, a network element, and a system. The method includes: receiving, by a network element, a flow table message from a controller, where the flow table message includes buffer information for data packets matching a flow table; processing, by the network element, a buffer for the data packets based on the buffer information; and sending a flow table response message to the controller. With this method, the network element can save, in a corresponding manner, at least one data packet matching the flow table to the buffer associated with that flow table. A per-data-flow buffer processing scheme can thus be supported in the OpenFlow protocol, meeting the data buffering requirements of a mobile network.
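The message flow above can be sketched as two functions. This is a schematic stand-in, not the OpenFlow wire protocol: real flow-mod and response messages are binary-encoded structures, and the dict fields (`match`, `buffer`, `max_pkts`) are hypothetical.

```python
def handle_flow_mod(flow_tables, packet_buffers, flow_mod):
    """Install a flow entry carrying per-flow buffer information,
    then answer the controller with a flow-table response."""
    match = flow_mod["match"]
    flow_tables[match] = flow_mod["buffer"]      # e.g. {"max_pkts": 8}
    packet_buffers.setdefault(match, [])
    return {"type": "FLOW_TABLE_RESPONSE", "match": match, "status": "OK"}

def buffer_packet(flow_tables, packet_buffers, match, packet):
    """Save a data packet into the buffer of the flow table it matches,
    respecting that flow's configured buffer limit."""
    limit = flow_tables[match]["max_pkts"]
    buf = packet_buffers[match]
    if len(buf) < limit:
        buf.append(packet)
        return True
    return False                                 # buffer full, drop
```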

Adjusting buffer size for network interface controller
10608963 · 2020-03-31

Systems and methods for adjusting the receive buffer size for network interface controllers. An example method may comprise: maintaining, by a computer system, a moving window referencing a pre-defined number of incoming data packets; responsive to receiving a new data packet, shifting the moving window to include the new data packet while excluding the least recently received data packet; calculating a weighted average of the sizes of the incoming data packets referenced by the moving window, wherein the most recently received data packet is associated with a first weight that is higher than a second weight associated with the least recently received data packet; and adjusting, using the weighted average value, the size of a buffer allocated for incoming data packets.
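The moving-window computation above maps directly to a small class. This is a minimal sketch under assumed parameters: linear weights (1 for the oldest packet up to `window` for the newest) and a `headroom` multiplier standing in for however the real method derives the final buffer size from the average.

```python
from collections import deque

class RxBufferSizer:
    """Track the last `window` packet sizes and derive a receive-buffer
    size from their weighted average, weighting recent packets more
    heavily (weights 1..window from oldest to newest)."""

    def __init__(self, window=8, headroom=4):
        self.sizes = deque(maxlen=window)   # the moving window
        self.headroom = headroom            # buffer = avg * headroom

    def on_packet(self, size):
        self.sizes.append(size)             # evicts the least recent
        weights = range(1, len(self.sizes) + 1)
        avg = (sum(w * s for w, s in zip(weights, self.sizes))
               / sum(weights))
        return int(avg * self.headroom)
```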

Technologies for buffering received network packet data

Technologies for buffering received network packet data include a compute device with a network interface controller (NIC) configured to determine a packet size of a network packet received by the NIC and identify a preferred buffer size between a small buffer and a large buffer. The NIC is further configured to select, from the descriptor, a buffer pointer based on the preferred buffer size, wherein the buffer pointer comprises one of a small buffer pointer corresponding to a first physical address in memory allocated to the small buffer or a large buffer pointer corresponding to a second physical address in memory allocated to the large buffer. Additionally, the NIC is configured to store at least a portion of the network packet in the memory based on the selected buffer pointer. Other embodiments are described herein.
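The pointer selection step can be sketched as follows. The descriptor is modeled as a dict with `small_ptr` and `large_ptr` fields standing in for the physical addresses a real NIC descriptor would carry; the 256-byte small-buffer size is an assumed example.

```python
def pick_buffer(descriptor, packet_size, small_size=256):
    """Choose between a descriptor's small- and large-buffer pointers
    based on the received packet's size: packets that fit in the small
    buffer use it, larger packets fall through to the large buffer."""
    if packet_size <= small_size:
        return descriptor["small_ptr"]
    return descriptor["large_ptr"]
```

In hardware the payoff is memory efficiency: small packets do not pin a full-size buffer, while large packets avoid being split across many small ones.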

TECHNOLOGIES FOR JITTER-ADAPTIVE LOW-LATENCY, LOW POWER DATA STREAMING BETWEEN DEVICE COMPONENTS
20200092185 · 2020-03-19

Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
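The producer's mode switching can be sketched as below. Only the producer half is shown, and the capacities and thresholds are illustrative placeholders, not values from the disclosure.

```python
class JitterAdaptiveProducer:
    """Producer side of the scheme: write into a small local buffer for
    low latency; when it fills, spill to a larger remote buffer; fall
    back to local once the remote drains below a low threshold."""

    def __init__(self, local_cap=4, low_threshold=1):
        self.local, self.remote = [], []
        self.local_cap = local_cap
        self.low = low_threshold
        self.mode = "local"

    def produce(self, item):
        if self.mode == "local" and len(self.local) >= self.local_cap:
            self.mode = "remote"           # local full: spill to remote
        elif self.mode == "remote" and len(self.remote) <= self.low:
            self.mode = "local"            # remote drained: back to local
        (self.local if self.mode == "local" else self.remote).append(item)
        return self.mode
```

A consumer would mirror this, reading from whichever buffer its mode selects and entering a catch-up mode when the local buffer exceeds its high threshold.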

Efficient means of combining network traffic for 64-bit and 31-bit workloads

A method, system and computer-usable medium are disclosed for performing a network traffic combination operation. With the network traffic combination operation, a plurality of input queues are defined by an operating system for an adapter based upon workload type (e.g., as determined by a transport layer). Additionally, the operating system defines each input queue to match the virtual memory architecture of the transport layer (e.g., one input queue is defined as 31-bit and the other input queue is defined as 64-bit). When data is received off the wire as inbound data from a physical NIC, the network adapter associates the inbound data with the appropriate memory type. Thus, data copies are eliminated, and memory consumption and associated storage-management operations are reduced for the smaller-bit-architecture communications, while the operating system continues executing in a larger-bit-architecture configuration.
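The queue-steering idea can be sketched as a single routing step. The `workload_bits` tag is a hypothetical stand-in for the transport-layer classification; a real implementation would operate on adapter descriptors, not Python dicts.

```python
def route_inbound(packet, queues):
    """Steer an inbound packet to the input queue whose virtual memory
    architecture matches its workload, so the payload lands directly in
    the right address space and no cross-memory-type copy is needed."""
    key = "31bit" if packet["workload_bits"] == 31 else "64bit"
    queues[key].append(packet)
    return key
```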
