Patent classifications
H04L47/527
STREAMING ALGORITHM FOR DEFICIT ROUND ROBIN ARBITRATION
Methods and systems are provided for implementing a streaming deficit round robin arbiter that provides fair utilization of a single link. In some aspects, the methods and systems can include: specifying a quantum size indicating how much of a stream's link is available for use; adding the quantum size to a deficit counter indicating available bandwidth; determining whether to provide a first data packet to an autonomous vehicle system based on the deficit counter and without determining a data packet size of the first data packet; and providing the first data packet to the autonomous vehicle system based on that determination.
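For contrast with the claimed streaming variant (which avoids inspecting packet sizes), a minimal sketch of classic size-aware deficit round robin, where each active queue's deficit counter grows by the quantum per round and is spent as packets are dequeued:

```python
from collections import deque

def deficit_round_robin(queues, quantum):
    """Classic size-aware DRR, shown for contrast with the claimed
    streaming variant. Each active queue's deficit counter grows by
    `quantum` per round and is spent as packets are dequeued."""
    deficits = [0] * len(queues)
    sent = []  # (queue index, packet size) in transmission order
    while any(queues):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0  # an emptied queue forfeits its deficit
                continue
            deficits[i] += quantum
            while q and q[0] <= deficits[i]:
                size = q.popleft()
                deficits[i] -= size
                sent.append((i, size))
    return sent
```

With a quantum of 500, a queue holding two 300-byte packets and a queue holding one 600-byte packet interleave over two rounds, since neither queue can overdraw its deficit within a single round.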
Scheduling method and apparatus for a quality of service data flow
This application provides a scheduling method and apparatus. The method includes: determining, by an application processor, a type of a to-be-sent data packet, where the type is a guaranteed bit rate (GBR) type or a non-GBR type, and putting the packet into a quality of service (QoS) data flow corresponding to that type; scheduling, by the application processor, a to-be-sent data packet in the QoS data flow corresponding to the GBR type, to send it to a modem in the terminal in which the application processor is located; and, after determining that a data transmission rate requirement of the GBR type is met, scheduling a to-be-sent data packet in a QoS data flow corresponding to the non-GBR type to send it to the modem.
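A minimal sketch of this GBR-first ordering, with the rate requirement simplified to a byte count; the tuple layout and names are illustrative, not taken from the application:

```python
def schedule_packets(packets, gbr_bytes_required):
    """Emit all GBR packets first; release non-GBR packets only once
    the GBR rate requirement (simplified here to a byte count) is met.
    `packets` is a list of (type, size) tuples."""
    gbr = [p for p in packets if p[0] == "GBR"]
    non_gbr = [p for p in packets if p[0] != "GBR"]
    order, gbr_bytes_sent = [], 0
    for pkt in gbr:
        order.append(pkt)
        gbr_bytes_sent += pkt[1]
    if gbr_bytes_sent >= gbr_bytes_required:
        order.extend(non_gbr)  # GBR requirement met: serve non-GBR flows
    return order
```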
METHOD AND APPARATUS FOR QUANTUM COMPUTING BASED RESOURCE ALLOCATION IN WIRELESS COMMUNICATION SYSTEM
The disclosure relates to a 5G or 6G communication system for supporting a higher data transmission rate. A method performed by an apparatus of a wireless communication system is provided. The method includes receiving, from a base station, first information related to interference among a plurality of user equipments (UEs) that are to receive a resource allocation, second information related to a number of available resources, and third information related to a resource allocation reward associated with each of the plurality of user equipments (UEs), selecting a plurality of qubits based on the first information and the second information, and generating, based on the third information, resource allocation information derived from the plurality of qubits, where the resource allocation to the plurality of UEs is based on the resource allocation information.
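The underlying combinatorial problem the qubits encode can be illustrated with a classical greedy stand-in: choose UEs by reward, subject to the interference constraints and the resource budget. This is only a sketch of the optimization target, not the disclosed quantum method:

```python
def allocate(ues, interference_pairs, num_resources, rewards):
    """Classical greedy stand-in for the qubit-encoded allocation:
    pick up to `num_resources` UEs in descending reward order,
    skipping any UE that interferes with one already chosen."""
    chosen = []
    for ue in sorted(ues, key=lambda u: rewards[u], reverse=True):
        if len(chosen) >= num_resources:
            break
        if any((ue, c) in interference_pairs or (c, ue) in interference_pairs
               for c in chosen):
            continue  # interference constraint violated
        chosen.append(ue)
    return chosen
```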
Early credit return for credit-based flow control
A device allocates buffer space for storing data received from another device, which holds a credit balance corresponding to the amount of buffer space. The sending device reduces its number of credits by the cost of a packet and sends the packet. To ensure that the buffer does not overflow, the sending device spends a credit for each entry in the buffer that could be consumed by the sent data packet. When received data is added to the buffer without consuming a new entry, a response packet that returns a credit is sent to the sending device before the data is read from the buffer. The sending device can thus continue sending data without waiting for the buffer to be read, enabling the communication between the two devices to make more efficient use of the buffer.
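The sender side of this credit discipline can be sketched as follows; the class and method names are illustrative:

```python
class CreditSender:
    """Sender side of credit-based flow control. One credit is spent
    per buffer entry a packet could consume; a credit-return packet
    (which the receiver may issue early, before the buffer is read,
    when incoming data merges into an existing entry) replenishes
    the balance."""
    def __init__(self, credits):
        self.credits = credits
    def try_send(self, entries_needed):
        """Send only if the receiver's buffer cannot overflow."""
        if self.credits < entries_needed:
            return False
        self.credits -= entries_needed
        return True
    def on_credit_return(self, n=1):
        self.credits += n
```

An early credit return lets the sender resume before the receiver has drained the buffer, which is the efficiency gain the abstract describes.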
SYSTEMS AND METHODS FOR QUEUE CONTROL BASED ON CLIENT-SPECIFIC PROTOCOLS
The present disclosure generally relates to controlling access to resources by selectively processing requests stored in a task queue to prioritize certain requests over others, thereby preventing automated scripts from accessing the resources. More specifically, the present disclosure relates to a normalization and prioritization system for controlling access to resources by queuing resource requests based on a client-defined normalization process that uses one or more data sources.
Processing packets in an electronic device
A network traffic manager receives, from an ingress port in a group of ingress ports, a cell of a packet destined for an egress port. Upon determining that a number of cells of the packet stored in a buffer queue meets a threshold value, the manager checks whether the group of ingress ports has been assigned a token for the queue. Upon determining that the group of ingress ports has been assigned the token, the manager determines that other cells of the packet are stored in the buffer, and accordingly stores the received cell in the buffer, and stores linking information for the received cell in a receive context for the packet. When all cells of the packet have been received, the manager copies linking information for the packet cells to the buffer queue or a copy generator queue, and releases the token from the group of ingress ports.
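The admission decision for an arriving cell can be condensed into a small predicate; the below-threshold behavior and parameter names are assumptions for illustration:

```python
def accept_cell(cells_queued, threshold, group_has_token, packet_started):
    """Admission check for a cell arriving at a congested queue:
    below the threshold, cells are accepted unconditionally (assumed);
    at or above it, only the ingress-port group holding the queue's
    token, and only for a packet whose earlier cells are already
    buffered, may keep storing cells."""
    if cells_queued < threshold:
        return True
    return group_has_token and packet_started
```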
VOQ-based network switch architecture using multi-stage arbitration fabric scheduler
A network switch is capable of supporting cut-through switching and interface channelization with enhanced system performance. The network switch includes a plurality of ingress tiles, each tile including a virtual output queue (VOQ) scheduler operable to submit schedule requests to a fabric scheduler. Data is requested in units of a quantum, which may aggregate multiple packets and which reduces scheduling latency. Each request is associated with a start-of-quantum or a middle-of-quantum state to support cut-through. The fabric scheduler performs a multi-stage scheduling process to progressively narrow the selection of requests, including arbitration stages at the virtual output port, virtual output port group, tile, egress port, and port group levels. Each tile receives the grants for its requests and accordingly sends the requested data to a switch fabric for transmission to the destination egress ports.
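The progressive-narrowing idea can be sketched as successive per-group filtering across the hierarchy levels; the level keys and the first-wins tie-break are assumptions, not the patented arbitration policy:

```python
def multi_stage_arbitrate(requests, levels):
    """Progressive narrowing across scheduler hierarchy levels: at
    each level, at most one request survives per group key."""
    survivors = requests
    for level in levels:
        seen, kept = set(), []
        for req in survivors:
            key = req[level]
            if key not in seen:  # first request wins within each group
                seen.add(key)
                kept.append(req)
        survivors = kept
    return survivors
```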
TOKENIZED BANDWIDTH AND NETWORK AVAILABILITY IN A NETWORK
Embodiments described herein are directed to utilizing a tokenized system to manage network bandwidth. A total network bandwidth availability is determined for a network, and a total number of tokens is determined for that total network bandwidth. The system also determines a total number of users for the network. When a user sends a network usage request to use or access the network, the system selects and allocates a number of tokens for the user based on the network usage request, the total number of network users, and the total number of tokens. The user's device can then access and use the network if the user has a sufficient number of available tokens for that usage. The number of tokens for the user is reduced based on the amount of data used by the user.
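One plausible allocation rule consistent with the three inputs the embodiment names: grant the smaller of the user's equal share of the token pool and the tokens the request actually needs. The equal-share policy and the `bytes_per_token` granularity are assumptions; the description leaves the exact formula open:

```python
def tokens_to_allocate(requested_bytes, total_tokens, total_users, bytes_per_token):
    """Grant min(fair share of the pool, tokens the request needs)."""
    fair_share = total_tokens // total_users
    tokens_needed = -(-requested_bytes // bytes_per_token)  # ceiling division
    return min(fair_share, tokens_needed)
```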
Packet transfer apparatus, method, and program
A packet transfer apparatus is configured to perform packet exchange processing that exchanges multiple continuous packets with low delay while maintaining fairness between communication flows of the same priority level. The packet transfer apparatus includes: a packet classification unit; queues that hold the classified packets for each classification; and a dequeue processing unit that extracts packets from the queues. The dequeue processing unit includes a scheduling unit that controls the amount of packets extracted from the queue for a specific communication flow, based on information about the amount of data that the communication flow has requested to transmit continuously in packets.
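A minimal sketch of one dequeue pass over same-priority flows, where a flow that has signalled it needs N continuous packets may dequeue up to N in a single visit; the structure of `burst_requests` is an assumption:

```python
from collections import deque

def dequeue_round(queues, burst_requests, default_burst=1):
    """One dequeue pass: flows with a declared continuous-transmission
    request get a larger per-visit budget, others the default."""
    out = []
    for flow, q in queues.items():
        budget = burst_requests.get(flow, default_burst)
        for _ in range(budget):
            if not q:
                break
            out.append(q.popleft())
    return out
```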
Techniques to manage data transmissions
A transmitter can manage when a transmit queue is permitted to transmit and the amount of data it is permitted to transmit. After a transmit queue is permitted to transmit, it can be placed in a sleep state if it has exceeded its permitted data transmission quota. The wake time of the transmit queue can be scheduled based on a token accumulation rate for that queue. The token accumulation rate can be increased if the transmit queue has other data to transmit after the data transmission, and decreased if it does not.
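The adaptive pacing can be sketched as a token bucket whose fill rate reacts to backlog; the doubling/halving factors are illustrative, not taken from the description:

```python
class PacedQueue:
    """Token-paced transmit queue with an adaptive accumulation rate:
    the rate rises when backlog remains after a send and falls when
    the queue drains."""
    def __init__(self, rate, quota):
        self.rate = float(rate)    # tokens accumulated per time unit
        self.quota = float(quota)  # bytes permitted per transmission
        self.tokens = 0.0
    def sleep_time(self):
        """Time until enough tokens accumulate to cover the quota."""
        return max(self.quota - self.tokens, 0.0) / self.rate
    def on_transmit(self, bytes_sent, backlog_remains):
        self.tokens -= bytes_sent
        if backlog_remains:
            self.rate *= 2.0  # more data waiting: accumulate faster
        else:
            self.rate = max(self.rate / 2.0, 1.0)  # queue went idle
```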