Patent classifications
H04L12/873
Wireless communication method for multi-user transmission scheduling, and wireless communication terminal using same
The present invention relates to a wireless communication terminal and a wireless communication method for efficiently scheduling uplink multi-user transmission. To this end, provided is a base wireless communication terminal including: a transceiver configured to transmit and receive a wireless signal; and a processor configured to control an operation of the transceiver. The processor selects an access category for transmitting a trigger frame that solicits an uplink multi-user transmission, performs a backoff procedure for transmitting the trigger frame based on the selected access category, and transmits the trigger frame when a backoff counter of the backoff procedure expires. A wireless communication method using the same is also provided.
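The per-access-category backoff procedure described above can be sketched as follows. This is a minimal illustration, not the patented method: the access-category names and contention-window values are EDCA-style placeholders, and the slot countdown is simplified to a loop.

```python
import random

# Illustrative contention-window minimums per access category
# (values are assumptions, not from the abstract).
AC_PARAMS = {
    "AC_VO": {"cw_min": 3},   # voice: short contention window
    "AC_BE": {"cw_min": 15},  # best effort: longer window
}

def draw_backoff(access_category, rng=random.randrange):
    """Draw a backoff counter uniformly in [0, cw_min]."""
    cw = AC_PARAMS[access_category]["cw_min"]
    return rng(cw + 1)

def contend_and_send(access_category, send_trigger_frame, rng=random.randrange):
    """Count the backoff counter down one idle slot at a time and
    transmit the trigger frame once the counter expires (reaches zero)."""
    counter = draw_backoff(access_category, rng)
    while counter > 0:
        counter -= 1  # another idle medium slot observed
    return send_trigger_frame()
```

In a real terminal the countdown would pause whenever the medium is sensed busy; the loop here stands in for that slot-by-slot decrement.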
Systems and methods for packing data in a scalable memory system protocol
A memory device includes a memory component that stores data and a processor. The processor may receive requests from a requesting component to perform a plurality of data operations, generate a plurality of packets associated with the plurality of data operations, and continuously transmit the packets until each of them has been transmitted. Each packet after the first is transmitted on the clock cycle immediately following the transmission of the previous packet.
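The back-to-back transmission timing can be illustrated with a small scheduling sketch, assuming one packet per clock cycle; the function name and the list-of-tuples representation are assumptions for illustration only.

```python
def schedule_packets(packets, start_cycle=0):
    """Assign each generated packet a transmit cycle: the first packet
    goes out at start_cycle, and every later packet on the clock cycle
    immediately after its predecessor, with no idle cycles between."""
    return [(start_cycle + i, pkt) for i, pkt in enumerate(packets)]
```

The point of the scheme is that the gap between consecutive transmit cycles is always exactly one.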
INTEGRATED GATEWAY PLATFORM FOR FULFILLMENT SERVICES
An integrated gateway system configured to perform: receiving online data transmissions from a user computing device of a user; authenticating that a source of the online data transmissions matches the user computing device; transmitting the online data transmissions to the internal gateway system; authenticating credentials of the user as a pre-authorized user; restricting a number of incoming calls using a rate-limiting throttle system; transmitting the online data transmissions to the communication management system; batching the online data transmissions into one or more micro-batches based on one or more rules; transmitting the one or more micro-batches to one or more respective backend services using an events stream system; receiving respective responses transmitted from the one or more respective backend services in response to each one of the one or more micro-batches; and performing each respective task of one or more tasks based on the respective responses from the one or more respective backend services. Other embodiments are disclosed.
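Two of the steps above, the rate-limiting throttle and the rule-based micro-batching, can be sketched as follows. Both are illustrative stand-ins: the abstract does not specify the throttle algorithm or the batching rules, so a simple call counter and a fixed batch size are assumed here.

```python
class Throttle:
    """Minimal counter-based stand-in for the rate-limiting throttle
    system; the windowing semantics are an assumption."""
    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    def allow(self):
        """Admit a call if the per-window limit has not been reached."""
        if self.count < self.limit:
            self.count += 1
            return True
        return False

def micro_batch(transmissions, max_batch_size):
    """Group transmissions into micro-batches of at most max_batch_size
    items, a simple stand-in for the abstract's 'one or more rules'."""
    return [transmissions[i:i + max_batch_size]
            for i in range(0, len(transmissions), max_batch_size)]
```

A production throttle would typically reset the counter per time window (or use a token bucket), and batching rules could also key on destination backend service rather than batch size alone.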
Fabric control protocol for data center networks with packet spraying over multiple alternate data paths
A fabric control protocol is described for use within a data center in which a switch fabric provides full mesh interconnectivity such that any of the servers may communicate packet data for a given packet flow to any other of the servers using any of a number of parallel data paths within the data center switch fabric. The fabric control protocol enables spraying of individual packets for a given packet flow across some or all of the multiple parallel data paths in the data center switch fabric and, optionally, reordering of the packets for delivery to the destination. The fabric control protocol may provide end-to-end bandwidth scaling and flow fairness within a single tunnel based on endpoint-controlled requests and grants for flows. In some examples, the fabric control protocol packet structure is carried over an underlying protocol, such as the User Datagram Protocol (UDP).
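The spray-and-reorder idea can be sketched as below. This is a toy model under stated assumptions: round-robin path selection is one possible spraying policy (the abstract only says packets are sprayed across some or all paths), and per-packet sequence numbers are assumed as the reordering key.

```python
import itertools

def spray(packets, paths):
    """Spray successive packets of one flow across the parallel data
    paths round-robin; each packet carries a sequence number so the
    destination can restore ordering."""
    next_path = itertools.cycle(paths)
    return [(seq, next(next_path), pkt) for seq, pkt in enumerate(packets)]

def reorder(received):
    """Restore original packet order at the destination by sequence
    number, regardless of the arrival order across paths."""
    return [pkt for _seq, _path, pkt in sorted(received)]
```

Because each packet of the flow may traverse a different path, arrival order is not guaranteed, which is why the optional destination-side reordering step exists.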
PACKET FORWARDING APPARATUS, METHOD AND PROGRAM
For an application that transmits a group of packets intended to be sent at once, the group of successive packets is transferred with low latency while fairness is maintained among communication flows of identical priority. A packet transfer device 100 includes: a packet classification unit 120 configured to classify received packets; queues 140, one per classification, each with an assigned priority; a dequeue processing unit 150 configured to extract packets from the queues under a predetermined rule based on the priorities set to the queues; and a queue priority control unit 130 configured, upon detecting that the reception amount of packets of a communication flow temporarily or intermittently increases above its amount under a normal condition, to temporarily raise the priority of the queue holding that flow's packets above its normal priority for the duration of the increase.
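The burst-triggered priority boost can be sketched as a small policy class. The threshold-based burst detector and the two fixed priority levels are illustrative assumptions; the patent leaves the detection rule unspecified.

```python
class QueuePriorityControl:
    """Sketch of the queue priority control unit: while a flow's packet
    arrival rate exceeds its normal level, the flow's queue is served
    at a temporarily raised priority (values are illustrative)."""
    def __init__(self, normal_priority, boosted_priority, burst_threshold):
        self.normal = normal_priority
        self.boosted = boosted_priority
        self.threshold = burst_threshold

    def priority_for(self, arrival_rate):
        """Return the queue priority to use for the observed rate."""
        return self.boosted if arrival_rate > self.threshold else self.normal
```

The boost is self-reverting: as soon as the observed rate drops back to the normal level, the queue is served at its normal priority again, preserving long-run fairness among same-priority flows.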
Flow table aging optimized for DRAM access
A flow table management system can include a hardware memory module communicatively coupled to a network interface card. The hardware memory module is configured to store a flow table including a plurality of network flow entries. The network interface card further includes a flow table age cache configured to store a set of recently active network flows and a flow table management module configured to manage a duration for which respective network flow entries in the flow table stored in the hardware memory module remain in the flow table using the flow table age cache. In some implementations, age information about each respective flow in the flow table is stored in the hardware memory module in an age state table that is separate from the flow table.
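One aging pass over the separate age state table might look like the sketch below. This is an assumed policy for illustration: flows present in the age cache (the recently active set) have their age reset, all others age by one, and entries past a maximum age are evicted. The function and parameter names are not from the patent.

```python
def age_flows(age_table, age_cache, max_age):
    """One aging pass over the DRAM-resident age state table.

    age_table maps flow id -> current age; age_cache is the set of
    recently active flows held on the network interface card. Flows
    seen in the cache are refreshed, the rest age by one, and entries
    older than max_age are evicted from the table."""
    survivors = {}
    for flow, age in age_table.items():
        new_age = 0 if flow in age_cache else age + 1
        if new_age <= max_age:
            survivors[flow] = new_age
    return survivors
```

Keeping ages in a separate, compact table means the periodic aging sweep touches far less DRAM than scanning full flow entries would, which is the access-pattern optimization the title refers to.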
Hybrid packet memory for buffering packets in network devices
A network device processes received packets at least to determine the port or ports of the network device via which to transmit the packets. The network device also classifies the packets into packet flows, the packet flows being further categorized into traffic pattern categories according to the traffic pattern characteristics of the packet flows. The network device buffers, according to the traffic pattern categories of the packet flows, packets that belong to the packet flows in a first packet memory or in a second packet memory, the first packet memory having a memory access bandwidth different from a memory access bandwidth of the second packet memory. After processing the packets, the network device retrieves the packets from the first packet memory or the second packet memory in which the packets are buffered, and forwards the packets to the determined one or more ports for transmission.
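The category-to-memory buffering decision reduces to a small routing function, sketched here under assumptions: the category names and the two-way fast/slow split are illustrative, since the abstract only states that the two memories differ in access bandwidth.

```python
def pick_packet_memory(traffic_category):
    """Choose which packet memory buffers a flow based on its traffic
    pattern category (categories and memory names are illustrative)."""
    high_bandwidth_categories = {"bursty", "latency_sensitive"}
    if traffic_category in high_bandwidth_categories:
        return "high_bw_memory"   # e.g. on-chip buffer
    return "low_bw_memory"        # e.g. external DRAM
```

The design rationale is to reserve the scarcer high-bandwidth memory for flows whose traffic patterns actually need it, while bulk flows tolerate the slower memory.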
SERVER, SERVER SYSTEM, AND METHOD OF INCREASING NETWORK BANDWIDTH OF SERVER
[Problem] An available network bandwidth is increased without limiting processing of applications.
[Solution] A server 20A includes a normal NIC 11 as an NIC having an expansion function, and a virtual patch panel 21, implemented by software, having a transfer function of transferring packets between the normal NIC 11 and an accelerator utilization type NIC 15. The server 20A is configured such that, when a packet is transferred between the normal NIC 11 and the accelerator utilization type NIC 15 via the virtual patch panel 21, the target function 16 transfers the packet to and from the APLs 12a to 12c.
PACKET TRANSFER APPARATUS, METHOD, AND PROGRAM
Provided is a packet transfer apparatus configured to perform packet exchange processing that exchanges multiple continuous packets with low delay while maintaining fairness between communication flows of the same priority level. A packet transfer apparatus 100 includes: a packet classification unit 120; queues 130 that hold the classified packets for each classification; and a dequeue processing unit 140 that extracts packets from the queues 130. The dequeue processing unit 140 includes a scheduling unit 141 that controls the amount of packets extracted from the queue 130 of a specific communication flow, based on information on the amount of data that the communication flow has requested to transmit continuously in packets.
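The scheduling unit's rule can be sketched as a budget computation. This is an assumed interpretation for illustration: a flow that has advertised a continuous-transmission amount is allowed up to that burst in one dequeue round, while other flows are capped at a fair share; all names and the exact max/min formula are illustrative.

```python
def dequeue_budget(queued_bytes, requested_burst_bytes, fair_share_bytes):
    """Bytes to extract from one flow's queue in a dequeue round.

    A flow that advertised a continuous burst (requested_burst_bytes > 0)
    may drain up to that burst at once, so the burst goes out with low
    delay; otherwise the flow gets its fair share. Never extract more
    than is actually queued."""
    budget = max(fair_share_bytes, requested_burst_bytes)
    return min(queued_bytes, budget)
```

Fairness is preserved because the enlarged budget applies only while the advertised burst is being drained; once it is gone, the flow reverts to the fair share like its same-priority peers.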
SYSTEM AND METHOD FOR DYNAMIC QUEUE MANAGEMENT USING QUEUE PROTOCOLS
A system and method for efficiently processing and managing data stored in a queue. A processing device may process the data stored in the queue. Queue protocols can be applied to the queue to efficiently process and manage data stored in the queue. Queue protocols may facilitate efficient use of processing resources that process the data stored in one or more queues. A queue protocol may include at least a first protocol for facilitating transfer of data in the queue to another queue processed by another processing device or a second protocol for inhibiting transfer of data in the queue to another queue.
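The two queue protocols named above, one enabling transfer of queued data to another processing device's queue and one inhibiting it, can be sketched as follows. The protocol names, the list-based queues, and the dispatch structure are illustrative assumptions.

```python
def apply_queue_protocol(queue, protocol, target_queue):
    """Apply a queue protocol to a queue (sketch).

    'transfer' moves the queue's data to a queue handled by another
    processing device; 'inhibit' pins the data to the current queue."""
    if protocol == "transfer":
        target_queue.extend(queue)  # hand the work to the other device
        queue.clear()
    elif protocol == "inhibit":
        pass  # data stays with this processing device
    else:
        raise ValueError(f"unknown queue protocol: {protocol}")
    return queue, target_queue
```

A scheduler could pick 'transfer' to rebalance load onto an idle processing device and 'inhibit' when the data has affinity to the current device (e.g. warm caches).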