Patent classifications
H04L47/6275
Flow-based management of shared buffer resources
An apparatus for controlling a Shared Buffer (SB), the apparatus including an interface and a SB controller. The interface is to access flow-based data counts and admission states. The SB controller is to perform flow-based accounting of packets received by a network device coupled to a communication network, for producing flow-based data counts, each flow-based data count associated with one or more respective flows, and to generate admission states based at least on the flow-based data counts, each admission state being generated from one or more respective flow-based data counts.
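The flow-based accounting described above can be sketched minimally: per-flow byte counts drive a per-flow admission state. The class name, the single-threshold scheme, and the admit/drop rule below are illustrative assumptions, not the patented method.

```python
# Minimal sketch of flow-based shared-buffer accounting (illustrative only):
# per-flow data counts are maintained, and each flow's admission state is
# generated from its count. The fixed per-flow limit is an assumption.

class SharedBufferController:
    def __init__(self, flow_limit_bytes):
        self.flow_limit = flow_limit_bytes
        self.counts = {}          # flow id -> occupied bytes (flow-based data count)
        self.admission = {}       # flow id -> True (admit) / False (drop)

    def _update_state(self, flow):
        # Admission state is generated from the flow's data count.
        self.admission[flow] = self.counts.get(flow, 0) < self.flow_limit

    def on_packet_in(self, flow, size):
        """Account an arriving packet; return whether it was admitted."""
        self._update_state(flow)
        if not self.admission[flow]:
            return False
        self.counts[flow] = self.counts.get(flow, 0) + size
        return True

    def on_packet_out(self, flow, size):
        """Release buffer space when a packet leaves the shared buffer."""
        self.counts[flow] = max(0, self.counts.get(flow, 0) - size)
        self._update_state(flow)
```

A flow that exceeds its share is denied admission until departures bring its count back under the limit.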
Congestion Mitigation in a Distributed Storage System
A system comprises a plurality of computing devices that are communicatively coupled via a network and have a file system distributed among them, and comprises one or more file system request buffers residing on one or more of the plurality of computing devices. File system choking management circuitry that resides on one or more of the plurality of computing devices is operable to separately control: a first rate at which a first type of file system requests (e.g., one of data requests, data read requests, data write requests, metadata requests, metadata read requests, and metadata write requests) are fetched from the one or more buffers, and a second rate at which a second type of file system requests (e.g., another of data requests, data read requests, data write requests, metadata requests, metadata read requests, and metadata write requests) are fetched from the one or more buffers.
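The separately controlled fetch rates can be sketched as two request buffers each drained at its own tunable rate per scheduling tick. The per-tick rate mechanism here is an assumption for illustration; the abstract does not specify how the rates are enforced.

```python
# Illustrative sketch of per-request-type fetch-rate control ("choking"):
# data and metadata requests sit in separate buffers, each drained at an
# independently tunable rate on every scheduling tick.

from collections import deque

class ChokingManager:
    def __init__(self, data_rate, metadata_rate):
        # Requests fetched per scheduling tick, per request type.
        self.rates = {"data": data_rate, "metadata": metadata_rate}
        self.buffers = {"data": deque(), "metadata": deque()}

    def submit(self, kind, request):
        self.buffers[kind].append(request)

    def tick(self):
        """Fetch up to rates[kind] requests from each buffer this tick."""
        fetched = []
        for kind, rate in self.rates.items():
            for _ in range(min(rate, len(self.buffers[kind]))):
                fetched.append(self.buffers[kind].popleft())
        return fetched
```

Because the two rates are independent, metadata traffic can be throttled without slowing data traffic, and vice versa.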
DYNAMIC LOAD BALANCING FOR MULTI-CORE COMPUTING ENVIRONMENTS
Methods, apparatus, systems, and articles of manufacture are disclosed for dynamic load balancing for multi-core computing environments. An example apparatus includes a first core and a plurality of second cores of a processor, and circuitry in a die of the processor separate from the first and the second cores, the circuitry to enqueue identifiers in one or more queues in the circuitry associated with respective ones of data packets of a packet flow, allocate one or more of the second cores to dequeue first ones of the identifiers in response to a throughput parameter of the first core not satisfying a throughput threshold to cause the one or more of the second cores to execute one or more operations on first ones of the data packets, and provide the first ones to one or more data consumers to distribute the first data packets.
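The dequeue-side behavior can be sketched as follows: packet identifiers are enqueued, and additional "second" cores are allocated whenever the first core's measured throughput fails the threshold. The names and the one-core-at-a-time allocation rule are illustrative assumptions.

```python
# Sketch of dynamic load balancing over a queue of packet identifiers:
# extra cores are allocated when the first core's throughput parameter
# does not satisfy the threshold, enlarging the dequeue batch.

from collections import deque

class LoadBalancer:
    def __init__(self, throughput_threshold, max_extra_cores):
        self.queue = deque()              # identifiers of data packets
        self.threshold = throughput_threshold
        self.max_extra = max_extra_cores
        self.extra_cores = 0              # "second" cores currently allocated

    def enqueue(self, packet_id):
        self.queue.append(packet_id)

    def rebalance(self, first_core_throughput):
        """Allocate one more second core if the first core can't keep up."""
        if first_core_throughput < self.threshold:
            self.extra_cores = min(self.extra_cores + 1, self.max_extra)
        return self.extra_cores

    def dequeue_batch(self, per_core_batch):
        """Dequeue identifiers for the first core plus any extra cores."""
        n = per_core_batch * (1 + self.extra_cores)
        return [self.queue.popleft() for _ in range(min(n, len(self.queue)))]
```

Keeping the queue in dedicated circuitry (modeled here by the `LoadBalancer` object) lets cores be added or removed without repartitioning the flow.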
Apparatus and method for prioritization of random access in a multi-user wireless communication system
The present disclosure relates to the prioritization of devices taking part in a multi-user random access wireless communication. Based on some known conditions, devices that comply with the conditions are given preferential treatment during the random access period. The preferential treatment may refer to the eligible devices being allowed to access more resource units during the random access, or it may also mean faster access to the medium during the random access. By taking advantage of the methods described in the present disclosure, it is possible to assign higher priority to selected frame types and/or device categories in a multi-user random access wireless communication system.
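Both forms of preferential treatment mentioned above (more resource units, faster medium access) can be sketched in a toy model. The contention-window sizes and the half-split of resource units are assumptions, not values from the disclosure.

```python
# Toy sketch of prioritized multi-user random access: devices meeting the
# known condition ("eligible") draw backoff from a smaller contention window
# (faster access) and may contend on more resource units (RUs).

import random

def backoff_slot(device, priority_window=4, normal_window=16, rng=random):
    """Eligible devices pick their backoff from a smaller window."""
    window = priority_window if device["eligible"] else normal_window
    return rng.randrange(window)

def eligible_resource_units(device, all_rus):
    """Eligible devices may contend on every RU; others on only half."""
    return all_rus if device["eligible"] else all_rus[: len(all_rus) // 2]
```

A smaller backoff window statistically wins the medium sooner, which is one way "faster access" can be realized without reserving dedicated resources.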
UPLINK DATA TRANSMISSION SCHEDULING
An apparatus and method for uplink data transmission scheduling are disclosed. In an example, the method can include obtaining, by at least one processor, a plurality of packets to be transmitted via uplink. The method can also include queueing, by the at least one processor, the plurality of packets according to logical channel prioritization. The method can further include receiving, by the at least one processor, a service grant after the queueing. The method can additionally include trimming, by the at least one processor, the plurality of packets according to a grant size of the service grant.
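The queue-then-trim sequence can be sketched directly: packets are ordered by logical-channel priority, then the queued set is cut down to fit the grant size. The greedy fill below is an illustrative assumption, not the claimed trimming algorithm.

```python
# Sketch of the two steps: (1) queue packets by logical channel
# prioritization (lower value = higher priority), (2) trim the queued
# packets to the grant size of the received service grant.

def queue_by_lcp(packets):
    """packets: (logical_channel_priority, size_bytes) tuples."""
    return sorted(packets, key=lambda p: p[0])

def trim_to_grant(queued, grant_size):
    """Keep the highest-priority packets that fit within the uplink grant."""
    kept, used = [], 0
    for prio, size in queued:
        if used + size <= grant_size:
            kept.append((prio, size))
            used += size
    return kept
```

Trimming after the grant arrives means the transmitter never commits more data than the grant can carry, at the cost of dropping or deferring low-priority packets.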
COMMUNICATION METHOD AND APPARATUS
Embodiments of this disclosure provide a communication method and apparatus, to reduce a waiting delay during transmission of retransmitted data and out-of-order data. In this method, a user plane function network element may receive first data and second data, determines, based on first information, that the second data is retransmitted data of the first data, and sends indication information to an access network network element. The indication information herein may indicate that the second data is the retransmitted data, or indicate a sending priority of the second data. Based on this solution, the access network network element may send the first data and the second data based on the indication information, and may preferentially send the retransmitted data to reduce the waiting delay during data transmission.
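The two roles in the abstract can be sketched separately: the user-plane side detects the retransmission and attaches the indication, and the access-network side sends indicated data first. Using a repeated sequence number as the "first information" is an assumption for illustration.

```python
# Sketch of retransmission detection and prioritized sending: the user
# plane function tags second data as a retransmission of first data (here
# detected via a repeated sequence number), and the access network element
# sends retransmitted data preferentially to cut waiting delay.

def classify(seen_seqs, seq, payload):
    """UPF side: tag data with a retransmission indication."""
    is_retx = seq in seen_seqs
    seen_seqs.add(seq)
    return {"seq": seq, "payload": payload, "retransmitted": is_retx}

def send_order(pending):
    """Access-network side: retransmitted data is sent first."""
    # False sorts before True, so retransmitted items lead the order.
    return sorted(pending, key=lambda d: not d["retransmitted"])
```

Because `sorted` is stable, data of equal indication keeps its arrival order, so the prioritization does not itself introduce new reordering.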
CONCURRENT USE OF MULTIPLE PROTOCOLS ON A SINGLE RADIO
A method for concurrent execution of multiple protocols using a single radio of a wireless communication device is provided that includes receiving, in a radio command scheduler, a first radio command from a first protocol stack of a plurality of protocol stacks executing on the wireless communication device, determining a scheduling policy for the first radio command based on a current state of each protocol stack of the plurality of protocol stacks, and scheduling the first radio command in a radio command queue for the radio based on the scheduling policy, wherein the radio command scheduler uses the radio command queue to schedule radio commands received from the plurality of protocol stacks.
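The scheduler described above can be sketched as a priority queue whose ordering is derived from protocol-stack state. The state names, the state-to-priority mapping, and the use of the submitting stack's state alone are illustrative assumptions; the method bases the policy on the state of every stack.

```python
# Sketch of a radio command scheduler for one shared radio: each submitted
# command is placed in the radio command queue at a position derived from
# protocol-stack state. The priority mapping here is an assumption.

import heapq

class RadioCommandScheduler:
    # Assumed policy: commands from a connected stack preempt idle stacks.
    STATE_PRIORITY = {"connected": 0, "scanning": 1, "idle": 2}

    def __init__(self):
        self.queue = []           # min-heap: (priority, seq, command)
        self.stack_states = {}    # stack name -> current state
        self._seq = 0             # tie-breaker preserving arrival order

    def update_state(self, stack, state):
        self.stack_states[stack] = state

    def submit(self, stack, command):
        """Schedule a command using the submitting stack's current state."""
        prio = self.STATE_PRIORITY[self.stack_states.get(stack, "idle")]
        heapq.heappush(self.queue, (prio, self._seq, command))
        self._seq += 1

    def next_command(self):
        """Pop the next command to run on the single radio."""
        return heapq.heappop(self.queue)[2] if self.queue else None
```

The monotonically increasing sequence number keeps commands of equal priority in arrival order, which matters when a protocol stack issues a dependent sequence of radio commands.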