H04L47/58

RELEASE-TIME BASED PRIORITIZATION OF ON-BOARD CONTENT
20220263763 · 2022-08-18 ·

Approaches are described for release-time-driven (RTD) prioritization of on-board content scheduling and delivery to in-transit transport craft via communications systems. In the context of a constrained network, content is scheduled for delivery to in-transit on-board media servers in a manner driven by respective release times and other prioritization factors associated with the updated content. Each content item is associated with an RTD priority profile that can define a release time, a release priority, and a profile plot for the content. The RTD priority profiles can be used to compute priority surfaces that define priority scores over a multidimensional space at a particular time. A subset of the content can be selected for delivery based on the priority surfaces and scheduled for delivery according to network capacity determinations.
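The abstract's mechanism can be illustrated with a minimal Python sketch. The exponential profile shape, the field names, and the greedy capacity fill are all assumptions for illustration; the patent's actual profile plots and surface computation are not specified here.

```python
import math

def rtd_priority(release_time, release_priority, now, decay=0.1):
    """Toy RTD score: priority ramps up as the release time approaches
    and decays more slowly after release (assumed exponential profile)."""
    dt = release_time - now            # seconds until (or since) release
    if dt >= 0:
        return release_priority * math.exp(-decay * dt)   # ramp toward release
    return release_priority * math.exp(decay * dt / 2)    # post-release decay

def select_for_delivery(items, now, capacity):
    """Score every content item, then greedily fill available link capacity."""
    scored = sorted(items, key=lambda it: -rtd_priority(it["release"], it["prio"], now))
    chosen, used = [], 0
    for it in scored:
        if used + it["size"] <= capacity:
            chosen.append(it["name"])
            used += it["size"]
    return chosen
```

Items nearer their release time (or carrying a higher release priority) win the constrained capacity, which matches the selection-then-scheduling split described above.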

WIRELESS COMMUNICATION METHOD FOR MULTI-USER TRANSMISSION SCHEDULING, AND WIRELESS COMMUNICATION TERMINAL USING SAME
20220109637 · 2022-04-07 ·

The present invention relates to a wireless communication terminal and a wireless communication method for efficiently scheduling uplink multi-user transmission.

To this end, provided is a base wireless communication terminal including: a transceiver configured to transmit and receive a wireless signal; and a processor configured to control an operation of the transceiver, wherein the processor selects an access category for transmitting a trigger frame which solicits an uplink multi-user transmission, performs a backoff procedure for transmitting the trigger frame based on the selected access category, and transmits the trigger frame when a backoff counter of the backoff procedure expires. A wireless communication method using the same is also provided.
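The per-access-category backoff can be sketched as follows. The parameter values and the `idle_slots` abstraction are assumptions modeled loosely on 802.11 EDCA, not the patent's claimed procedure.

```python
import random

# Hypothetical EDCA-style parameters per access category: (AIFSN, CWmin, CWmax)
ACCESS_CATEGORIES = {
    "AC_VO": (2, 3, 7),      # voice: short contention window
    "AC_BE": (3, 15, 1023),  # best effort
}

def backoff_then_send(ac, send_trigger_frame, idle_slots, rng=random):
    """Draw a backoff counter for the selected access category, count it
    down one per observed idle slot, and transmit the trigger frame when
    the counter reaches zero."""
    _aifsn, cw_min, _cw_max = ACCESS_CATEGORIES[ac]
    counter = rng.randint(0, cw_min)
    for _ in idle_slots:
        if counter == 0:
            return send_trigger_frame()
        counter -= 1
    return None  # medium never stayed idle long enough to transmit
```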

Method and device for supporting multiple wireless protocols with a medium access control preprocessor

In one embodiment, a method includes: obtaining a multi-protocol schedule, wherein the multi-protocol schedule includes scheduling information characterizing packets associated with a plurality of wireless protocols, wherein each of the plurality of wireless protocols is associated with a respective virtual gateway of a plurality of virtual gateways; detecting, by a wireless transceiver, a first packet related to a first wireless protocol of the plurality of wireless protocols based on the multi-protocol schedule; and transmitting, by the wireless transceiver, the first packet related to the first wireless protocol to a first virtual gateway of the plurality of virtual gateways. According to some embodiments, the method is performed by a device (e.g., a MAC preprocessor) that includes a wireless transceiver, one or more processors, and non-transitory memory.
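The schedule-driven dispatch to per-protocol virtual gateways can be sketched in a few lines. The slot-keyed schedule and dict-of-lists gateways are illustrative assumptions, not the embodiment's data structures.

```python
def route_packets(schedule, packets, gateways):
    """Dispatch each detected packet to the virtual gateway registered
    for its protocol, but only if the multi-protocol schedule expects
    that protocol in the packet's time slot."""
    delivered = []
    for pkt in packets:
        if schedule.get(pkt["slot"]) != pkt["protocol"]:
            continue  # protocol not scheduled in this slot; drop or defer
        gateways[pkt["protocol"]].append(pkt["payload"])
        delivered.append(pkt["payload"])
    return delivered
```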

SCHEDULING SOLUTION CONFIGURATION METHOD AND APPARATUS, COMPUTER READABLE STORAGE MEDIUM THEREOF, AND COMPUTER DEVICE
20210120097 · 2021-04-22 ·

A scheduling scheme configuration method includes performing state verification on a plurality of operation dimensions involved in generating a scheduling scheme, and, in response to one or more of the operation dimensions being abnormal, removing the one or more abnormal operation dimensions to generate a new scheduling scheme.
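The verify-then-prune flow reads naturally as a filter over dimensions. The health predicate and the shape of the generated scheme are placeholders; the patent does not define them here.

```python
def generate_scheme(dimensions):
    """Hypothetical: a scheme is simply built from its surviving dimensions."""
    return {"dims": list(dimensions)}

def configure_schedule(dimensions, is_healthy):
    """Perform state verification on each operation dimension; remove any
    abnormal dimension and regenerate the scheduling scheme from the rest."""
    healthy = [d for d in dimensions if is_healthy(d)]
    return generate_scheme(healthy)
```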

Systems and methods for providing lockless bimodal queues for selective packet capture

In a network system, an application receiving packets can consume one or more packets in two or more stages, where the second and the later stages can selectively consume some but not all of the packets consumed by the preceding stage. Packets are transferred between two consecutive stages, called producer and consumer, via a fixed-size storage. Both the producer and the consumer can access the storage without locking it and, to facilitate selective consumption of the packets by the consumer, the consumer can transition between awake and sleep modes, where the packets are consumed in the awake mode only. The producer may also switch between awake and sleep modes. Lockless access is made possible by controlling the operation of the storage by the producer and the consumer both according to the mode of the consumer, which is communicated via a shared memory location.
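A simplified single-producer/single-consumer model of the bimodal queue is sketched below. Python cannot demonstrate true lock-free memory ordering, so this is a behavioral sketch only: the consumer's mode sits in a shared attribute standing in for the shared memory location, and the producer overwrites the oldest entries while the consumer sleeps.

```python
class BimodalQueue:
    """Fixed-size ring buffer shared by one producer and one consumer.
    Packets are consumed only in 'awake' mode; in 'asleep' mode the
    producer reclaims the oldest slots instead of blocking."""
    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.head = 0          # next write position (producer-owned)
        self.tail = 0          # next read position (consumer-owned)
        self.mode = "awake"    # shared word written by the consumer

    def produce(self, pkt):
        self.buf[self.head % self.size] = pkt
        self.head += 1
        if self.mode == "asleep" and self.head - self.tail > self.size:
            self.tail = self.head - self.size  # drop oldest while consumer sleeps

    def consume(self):
        if self.mode != "awake" or self.tail == self.head:
            return None                        # consume in awake mode only
        pkt = self.buf[self.tail % self.size]
        self.tail += 1
        return pkt
```

Because producer and consumer each write only their own index and both read the consumer's mode, the design avoids a lock in this simplified model, mirroring the selective-consumption behavior described above.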

Backpressure signaling for wireless communications

Methods, systems, and devices for wireless communications are described. In some wireless systems, a base station centralized unit (CU) may communicate with a user equipment (UE) through a multi-hop backhaul architecture. This multi-hop backhaul connection may include a donor base station and any number of relay base stations connected via backhaul links. In some cases, the relay base stations or the UE may experience data congestion in a logical channel-specific buffer. The relay base stations or UE may implement backpressure signaling (e.g., in the medium access control (MAC) layer) to mitigate the congestion. A wireless device operating as a mobile termination (MT) endpoint may transmit a backpressure report message to a wireless device operating as a base station distributed unit (DU) endpoint for the logical channel. The base station DU may adjust a scheduling rate for data unit transmissions over the indicated logical channel based on the backpressure report.
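The DU-side reaction to a backpressure report can be sketched as a simple occupancy-driven rate controller. The thresholds, multipliers, and report fields are illustrative assumptions; the patent does not prescribe a specific control law.

```python
def adjust_scheduling_rate(current_rate, report, min_rate=1.0):
    """Scale the DU's scheduling rate for one logical channel by the
    buffer occupancy reported by the MT endpoint: back off when the
    logical-channel buffer is near full, ramp up when it drains."""
    occupancy = report["buffered_bytes"] / report["buffer_size"]
    if occupancy > 0.8:
        return max(min_rate, current_rate * 0.5)   # congested: halve the rate
    if occupancy < 0.2:
        return current_rate * 1.25                 # draining: ramp back up
    return current_rate
```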

SYSTEMS AND METHODS FOR PREDICTIVE SCHEDULING AND RATE LIMITING
20210083986 · 2021-03-18 ·

Systems and methods are disclosed for enhancing network performance by using modified traffic control (e.g., rate limiting and/or scheduling) techniques to control a rate of packet (e.g., data packet) traffic to a queue scheduled by a Quality of Service (QoS) engine for reading and transmission. In particular, the QoS engine schedules packets using estimated packet sizes before an actual packet size is known by a direct memory access (DMA) engine coupled to the QoS engine. The QoS engine subsequently compensates for discrepancies between the estimated packet sizes and actual packet sizes (e.g., when the DMA engine has received an actual packet size of the scheduled packet). Using these modified traffic control techniques that leverage estimating packet sizes may reduce and/or eliminate latency introduced due to determining actual packet sizes.
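The estimate-then-compensate idea is easy to sketch: charge the queue an estimated size at scheduling time, and fold the error into the next charge once the DMA engine reports the actual size. Class and method names are hypothetical.

```python
class PredictiveScheduler:
    """Schedule against an estimated packet size before the actual size
    is known, then reconcile the accumulated error on later passes."""
    def __init__(self, estimate=1500):
        self.estimate = estimate
        self.debt = 0                 # bytes over-/under-charged so far

    def schedule(self):
        charged = self.estimate + self.debt   # fold prior error into this charge
        self.debt = 0
        return max(0, charged)

    def on_actual_size(self, actual):
        self.debt += actual - self.estimate   # compensate the next pass
```

Scheduling never waits on the DMA engine, which is how the technique removes the latency of determining actual packet sizes.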

RESOURCE ALLOCATION METHOD AND APPARATUS
20210068126 · 2021-03-04 ·

Various embodiments provide a resource allocation method and an apparatus. In those embodiments, a terminal device obtains a first parameter indicating a maximum service data volume to be provided by an access network device for a first service in a first time length, and determines, based on the maximum service data volume, a resource of a media access control (MAC) protocol data unit (PDU) to be occupied by buffered data of the first service. Because the maximum service data volume of the first service is taken into account, unlike in a conventional method, the resource of the MAC PDU can be allocated to the data of the first service more appropriately.
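The determination reduces to capping the service's share of the PDU, sketched below. The three-way minimum is an assumed simplification of the claimed computation.

```python
def mac_pdu_resource(buffered_bytes, max_service_volume, pdu_capacity):
    """Resource occupied by one service's buffered data in a MAC PDU:
    bounded by the buffer itself, by the network-signalled maximum
    service data volume for the time window, and by the PDU capacity."""
    return min(buffered_bytes, max_service_volume, pdu_capacity)
```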

METHODS, SYSTEMS AND APPARATUSES FOR OPTIMIZING TIME-TRIGGERED ETHERNET (TTE) NETWORK SCHEDULING BY USING A DIRECTIONAL SEARCH FOR BIN SELECTION

Methods, systems and apparatuses for scheduling a plurality of Virtual Links (VLs) in a Time-Triggered Ethernet (TTE) network by a network scheduling and configuration tool (NST): establishing a collection of bins corresponding to the smallest harmonic period that allows full network traversal of a time-triggered traffic packet, for determining available bin sets for sending the VL data; processing the VLs with a scheduling algorithm in a strict order, scheduling all of the highest-rate VLs before lower-rate VLs; and scheduling reservations for the VLs in bins by tracking the time available in each bin, optionally spreading the VL data across available bin sets by sorting a list of available bins by ascending bin utilization and specifying a left-to-right or right-to-left sort order when searching for available bins, based on a position in the timeline between the transmitter and receiver end stations.
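The directional bin search can be sketched as a sort by ascending utilization with a positional tiebreaker. The bin representation and the tiebreak-by-index rule are assumptions; the patent ties direction to timeline position between end stations.

```python
def pick_bin(bins, vl_size, left_to_right=True):
    """Sort candidate bins by ascending utilization, break ties by
    timeline position (left-to-right or right-to-left), and reserve
    the first bin with enough free time for the VL."""
    order = sorted(range(len(bins)),
                   key=lambda i: (bins[i]["used"] / bins[i]["cap"],
                                  i if left_to_right else -i))
    for i in order:
        if bins[i]["cap"] - bins[i]["used"] >= vl_size:
            bins[i]["used"] += vl_size   # track remaining time in the bin
            return i
    return None  # no bin can hold this virtual link
```

Sorting by utilization spreads VL data across the bin set; flipping the direction changes which of the equally loaded bins is tried first.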

Hybrid queue system for request throttling
10944683 · 2021-03-09 ·

Systems and related methods are disclosed to store and throttle requests received by a service provider. In embodiments, the system includes two queues, a first-in-first-out (FIFO) queue and an overflow queue. An incoming request is stored in the overflow queue when there is no room in the FIFO queue. The overflow queue stores the requests in some priority order, which determines the order that the requests are promoted onto the FIFO queue and throttled. The FIFO queue may be sized according to a response time requirement provided in a service level agreement (SLA). In some embodiments, the FIFO queue may dynamically adjust its size based on the expected processing time or abandon duration of incoming requests. The hybrid approach allows a system to handle requests in simple FIFO order in normal circumstances, and in a more sophisticated priority order when the system is overloaded.
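The hybrid FIFO-plus-overflow behavior is captured by the sketch below, using a min-heap for the priority-ordered overflow. Class names and the promote-on-dequeue policy are illustrative assumptions.

```python
import heapq
from collections import deque

class HybridQueue:
    """Fixed-size FIFO backed by a priority-ordered overflow heap.
    Requests spill into the overflow heap when the FIFO is full and
    are promoted back in priority order as the FIFO drains."""
    def __init__(self, fifo_size):
        self.fifo_size = fifo_size
        self.fifo = deque()
        self.overflow = []              # (priority, request) min-heap

    def enqueue(self, request, priority=0):
        if len(self.fifo) < self.fifo_size:
            self.fifo.append(request)   # normal path: plain FIFO order
        else:
            heapq.heappush(self.overflow, (priority, request))

    def dequeue(self):
        if not self.fifo:
            return None
        request = self.fifo.popleft()
        if self.overflow:               # promote best overflow request
            self.fifo.append(heapq.heappop(self.overflow)[1])
        return request
```

Under light load only the deque is touched, so service stays simple FIFO; only when the system is overloaded does the heap's priority ordering take effect, matching the hybrid approach described above.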