H04L47/622

Traffic-shaping HTTP proxy for denial-of-service protection
11757929 · 2023-09-12

In accordance with some aspects of the present disclosure, an apparatus is disclosed. In some embodiments, the apparatus includes a processor and a memory. In some embodiments, the memory includes programmed instructions that, when executed by the processor, cause the apparatus to receive a request from a client; determine a family of metrics; schedule the request based on the family of metrics; and in response to satisfying one or more scheduling criteria, send the request to a backend server.
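The receive/schedule/dispatch flow above can be sketched in Python. This is a hypothetical illustration only: the abstract does not specify the metric family, so a per-client request rate over a sliding window stands in as the scheduling criterion, and `ShapingProxy` is an assumed name.

```python
from collections import defaultdict, deque

class ShapingProxy:
    """Hypothetical sketch: queue client requests and release them to a
    backend only when a per-client rate criterion is satisfied."""
    def __init__(self, max_per_window, window_s=1.0):
        self.max_per_window = max_per_window
        self.window_s = window_s
        self.history = defaultdict(deque)   # client -> send timestamps
        self.pending = deque()              # queued (client, request, t) tuples

    def receive(self, client, request, now):
        self.pending.append((client, request, now))

    def _criteria_met(self, client, now):
        h = self.history[client]
        while h and now - h[0] >= self.window_s:
            h.popleft()                     # drop sends outside the window
        return len(h) < self.max_per_window

    def dispatch(self, now):
        """Forward every pending request whose client is under its rate limit."""
        sent, kept = [], deque()
        for client, request, t in self.pending:
            if self._criteria_met(client, now):
                self.history[client].append(now)
                sent.append((client, request))
            else:
                kept.append((client, request, t))
        self.pending = kept
        return sent
```

Requests exceeding the rate criterion stay queued rather than being dropped, which is what shapes (rather than polices) a burst.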

Time-sensitive networking

A network device comprising a set of queues and a time-aware shaper, which comprises a set of transmission gates and a gate control list. The gate control list comprises a set of individual gate control lists, each individual gate control list configured to control a respective gate and comprising a sequence of entries, each entry comprising a duration of time.
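The per-gate list of timed entries can be modeled as a repeating cycle of (duration, open) states, in the style of IEEE 802.1Qbv time-aware shaping. A minimal sketch, with assumed class names (the abstract does not name a gate-state representation):

```python
class GateControlList:
    """Hypothetical sketch of one individual gate control list: a repeating
    sequence of (duration, is_open) entries controlling a single gate."""
    def __init__(self, entries):
        self.entries = entries                       # [(duration, is_open), ...]
        self.cycle = sum(d for d, _ in entries)      # total cycle time

    def is_open(self, t):
        t %= self.cycle                              # lists repeat each cycle
        for duration, gate_open in self.entries:
            if t < duration:
                return gate_open
            t -= duration
        return False

class TimeAwareShaper:
    def __init__(self, gate_lists):
        self.gate_lists = gate_lists                 # one list per queue/gate

    def open_gates(self, t):
        """Indices of queues whose transmission gate is open at time t."""
        return [i for i, g in enumerate(self.gate_lists) if g.is_open(t)]
```

With complementary entries, two queues alternate access to the link, which is the basic scheduled-traffic pattern in time-sensitive networking.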

Method for maintaining cache consistency during reordering

Systems, apparatuses, and methods for performing efficient data transfer in a computing system are disclosed. A computing system includes multiple fabric interfaces in clients and a fabric. A packet transmitter in the fabric interface includes multiple queues, each for storing packets of a respective type, and a corresponding address history cache for each queue. Queue arbiters in the packet transmitter select candidate packets for issue and determine when address history caches on both sides of the link store the upper portion of the address. The packet transmitter sends a source identifier and a pointer for the request in the packet on the link, rather than the entire request address, which reduces the size of the packet. The queue arbiters support out-of-order issue from the queues. The queue arbiters detect conflicts with out-of-order issue and adjust the outbound packets and fields stored in the queue entries to avoid data corruption.
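The address-compression idea — both ends of the link keep identical caches of upper address bits, so a hit lets the transmitter send a short pointer instead of the full address — can be sketched as follows. All names and sizes here are assumptions for illustration; the abstract does not specify cache geometry or replacement policy.

```python
class AddressHistoryCache:
    """Hypothetical sketch: a small FIFO cache of upper address bits kept
    in lockstep on both sides of the link."""
    def __init__(self, size=4, lower_bits=12):
        self.lower_bits = lower_bits
        self.entries = [None] * size
        self.next_slot = 0

    def upper(self, addr):
        return addr >> self.lower_bits

    def lookup(self, addr):
        u = self.upper(addr)
        return next((i for i, e in enumerate(self.entries) if e == u), None)

    def insert(self, addr):
        self.entries[self.next_slot] = self.upper(addr)
        self.next_slot = (self.next_slot + 1) % len(self.entries)

def encode(tx_cache, addr):
    """Transmitter: send a pointer on a hit, the full address on a miss."""
    idx = tx_cache.lookup(addr)
    if idx is not None:
        return ("ptr", idx, addr & ((1 << tx_cache.lower_bits) - 1))
    tx_cache.insert(addr)
    return ("full", addr)

def decode(rx_cache, packet):
    """Receiver: rebuild the full address, mirroring cache updates."""
    if packet[0] == "full":
        rx_cache.insert(packet[1])
        return packet[1]
    _, idx, low = packet
    return (rx_cache.entries[idx] << rx_cache.lower_bits) | low
```

Because both caches apply the same insertions in the same order, a pointer sent by the transmitter indexes the same upper-address entry at the receiver; out-of-order issue (as in the abstract) would require the conflict detection the patent describes to keep the two caches consistent.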

Techniques for ephemeral messaging with a message queue

Techniques for ephemeral messaging are described. In one embodiment, an apparatus may comprise a delayed-action worker module operative to wake according to a wake timer; determine a current update object for a delayed-action cursor for a recipient update queue for a messaging system, the delayed-action cursor associated with an action delay for the recipient update queue; determine a delayed-action activity for the current update object; perform the delayed-action activity for the current update object; determine a next update object for the delayed-action cursor for the recipient update queue; and determine a next wake timer for the delayed-action worker module based on the action delay and a creation time for the next update object. Other embodiments are described and claimed.
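The cursor-and-wake-timer loop can be sketched in Python. This is a hypothetical reading of the abstract: the delayed action is assumed to be ephemeral-message expiry, and the next wake time is derived from the next update's creation time plus the fixed action delay, as the abstract describes.

```python
ACTION_DELAY = 10.0   # assumed: seconds an update lingers before its action fires

class DelayedActionWorker:
    """Hypothetical sketch: walk a recipient update queue with a cursor,
    fire the delayed action on every update old enough, and compute the
    next wake timer from the next update's creation time."""
    def __init__(self, queue):
        self.queue = queue    # list of (creation_time, update_id), oldest first
        self.cursor = 0       # delayed-action cursor into the queue

    def wake(self, now):
        performed = []
        while self.cursor < len(self.queue):
            created, uid = self.queue[self.cursor]
            if now - created < ACTION_DELAY:
                break                       # current update object not yet due
            performed.append(uid)           # perform delayed action (e.g. expire)
            self.cursor += 1
        if self.cursor < len(self.queue):
            next_created, _ = self.queue[self.cursor]
            next_wake = next_created + ACTION_DELAY
        else:
            next_wake = None                # empty: sleep until a new update
        return performed, next_wake
```

Waking exactly when the next object becomes due avoids both polling and late expiry.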

SERVICE PROCESSING METHOD AND APPARATUS, AND STORAGE MEDIUM
20220217091 · 2022-07-07

This application discloses example service processing methods and apparatuses. One example method includes obtaining a quantity of transmission windows corresponding to each of n services within a unit time period or in a unit data frame, wherein the unit time period or the unit data frame comprises m transmission windows, a total quantity of transmission windows corresponding to the n services is not greater than m, and both m and n are integers greater than 1. Corresponding transmission windows from the m transmission windows can then be allocated to each service based on the quantity of transmission windows corresponding to each of the n services. Based on the transmission windows corresponding to each service, service data of the n services can then be multiplexed into multiplexed data transmitted in one channel. The multiplexed data can then be sent.
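A minimal sketch of the allocate-then-multiplex steps, assuming a simple sequential placement of each service's windows (the abstract does not prescribe a placement order):

```python
def allocate_windows(counts, m):
    """Hypothetical sketch: map each of n services onto its quantity of
    transmission windows within a frame of m windows; total must not exceed m."""
    assert sum(counts) <= m and len(counts) > 1
    schedule = [None] * m                 # None = unallocated window
    slot = 0
    for service, qty in enumerate(counts):
        for _ in range(qty):
            schedule[slot] = service
            slot += 1
    return schedule

def multiplex(schedule, service_data):
    """Fill each window with the next data unit of the service that owns it,
    producing one multiplexed frame for a single channel."""
    iters = [iter(d) for d in service_data]
    return [next(iters[s], None) if s is not None else None for s in schedule]
```
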

COMMUNICATION DEVICE, NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM, AND COMMUNICATION SYSTEM

A communication device installed on a moving body includes: multiple types of communication interfaces configured to perform communications based on multiple types of communication methods, respectively; multiple types of queues associated with the multiple types of communication interfaces, respectively; and a communication controller provided between at least one application and the multiple types of queues. Each communication interface is configured to transmit a packet stored in an associated queue among the multiple types of queues to an external device. The communication controller is configured to: receive a transmission packet to be transmitted to the external device from the at least one application; identify a communication requirement for each transmission packet; select at least one selection queue for each transmission packet among the multiple types of queues according to the communication requirement; and store the transmission packet in the at least one selection queue.
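The controller's steering step — identify a requirement per packet, then pick the queue(s) whose communication method satisfies it — can be sketched as below. The interface names and the requirement-to-interface mapping are assumptions for illustration; the abstract only says a requirement is identified and at least one queue is selected.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class Packet:
    payload: str
    requirement: str     # e.g. "low_latency", "high_reliability", "bulk"

class CommunicationController:
    """Hypothetical sketch: sit between applications and several
    interface-specific queues, steering each transmission packet to the
    queue(s) of the communication method(s) meeting its requirement."""
    ROUTING = {                                      # assumed mapping
        "low_latency": ["v2x"],
        "high_reliability": ["cellular", "wifi"],    # duplicate over two methods
        "bulk": ["wifi"],
    }

    def __init__(self):
        self.queues = {name: deque() for name in ("cellular", "wifi", "v2x")}

    def submit(self, packet):
        """Store the packet in each selected queue; the associated
        interface would later transmit it to the external device."""
        for name in self.ROUTING[packet.requirement]:
            self.queues[name].append(packet)
```

Note that a single packet may be stored in more than one queue, matching the abstract's "at least one selection queue".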

EFFICIENT RESOURCE ALLOCATION IN LATENCY FLOOR IMPLEMENTATION
20220301053 · 2022-09-22

The disclosed embodiments relate to electronic trading system architectures, for processing incoming orders to an electronic trading system, that feature a latency floor mechanism imparting a delay on incoming orders. In particular, the disclosed embodiments implement a latency floor mechanism that compensates both for latency variations among traders' abilities to submit transactions and for variations in the volume of transactions they submit.
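One common way to realize a latency floor — not necessarily the patent's specific mechanism — is to batch orders arriving within one floor interval and release the batch in a randomized order, so arriving a few microseconds earlier confers no advantage. A hypothetical sketch:

```python
import random

class LatencyFloor:
    """Hypothetical sketch of a latency-floor mechanism: orders arriving
    within one floor interval are held and released together, shuffled,
    neutralizing sub-interval speed advantages."""
    def __init__(self, floor_ms, rng=None):
        self.floor_ms = floor_ms
        self.rng = rng or random.Random()
        self.batch = []
        self.window_start = None

    def submit(self, order, now_ms):
        """Queue an order; if the current interval has elapsed, release
        the previous batch and start a new interval."""
        if self.window_start is None:
            self.window_start = now_ms
        released = []
        if now_ms - self.window_start >= self.floor_ms:
            released = self.flush()
            self.window_start = now_ms
        self.batch.append(order)
        return released

    def flush(self):
        self.rng.shuffle(self.batch)        # randomize within-batch order
        out, self.batch = self.batch, []
        return out
```

A production mechanism would also address the volume-variation problem the abstract mentions, e.g. by bounding per-participant orders per batch.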

Multi-path RDMA transmission

In accordance with implementations of the subject matter described herein, a solution for multi-path RDMA transmission is provided. In the solution, at least one packet is generated based on an RDMA message to be transmitted from a first device to a second device. The first device has an RDMA connection with the second device via a plurality of paths. A first packet in the at least one packet includes a plurality of fields, which include information for transmitting the first packet over a first path of the plurality of paths. The at least one packet is transmitted to the second device over the plurality of paths via an RDMA protocol. The first packet is transmitted over the first path. The multi-path RDMA transmission solution according to the subject matter described herein can efficiently utilize rich network paths while maintaining a low memory footprint in a network interface card.
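The packetization step — chunk the RDMA message, tag each packet with path information, and spread packets across the connection's paths — can be sketched as below. Round-robin path selection and the field names are assumptions; the abstract only requires that each packet carry information identifying its path.

```python
from dataclasses import dataclass
from itertools import cycle

MTU = 1024  # assumed per-packet payload size

@dataclass
class MPRdmaPacket:
    path_id: int      # field identifying which path carries this packet
    seq: int          # position of the chunk within the message
    payload: bytes

def packetize(message, paths):
    """Hypothetical sketch: chunk an RDMA message and spread the packets
    round-robin over the connection's paths."""
    path_iter = cycle(paths)
    return [MPRdmaPacket(next(path_iter), seq, message[off:off + MTU])
            for seq, off in enumerate(range(0, len(message), MTU))]

def reassemble(packets):
    """Receiver side: restore message order by sequence number,
    regardless of which path each packet arrived on."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))
```

Keeping only a sequence number and path id per packet is one way to stay within a NIC's small on-chip memory while still using many paths.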

MULTI-ACCESS MANAGEMENT SERVICE QUEUEING AND REORDERING TECHNIQUES
20220116334 · 2022-04-14

The present disclosure is related to multi-queue management techniques and packet reordering techniques for inter-radio access technology (RAT) and intra-RAT traffic steering. The multi-queue management and packet reordering techniques may be used in Multi-Access Management Services (MAMS) framework, which is a programmable framework that provides mechanisms for the flexible selection of network paths in a multi-access (MX) communication environment, based on an application's needs. Other embodiments may be described and/or claimed.
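When traffic is steered across paths with different delays, packets arrive out of order and must be re-sequenced before delivery. A hypothetical sketch of such a reordering buffer (the hold threshold and loss-declaration rule are assumptions, not the patent's specific technique):

```python
class Reorderer:
    """Hypothetical sketch of per-flow packet reordering for multi-path
    traffic steering: buffer out-of-order packets, release in-sequence runs,
    and skip a gap once too many later packets have queued behind it."""
    def __init__(self, max_hold=3):
        self.expected = 0       # next sequence number to deliver
        self.buffer = {}        # seq -> packet, held out of order
        self.max_hold = max_hold

    def push(self, seq, pkt):
        self.buffer[seq] = pkt
        out = []
        while self.expected in self.buffer:          # deliver in-order run
            out.append(self.buffer.pop(self.expected))
            self.expected += 1
        if len(self.buffer) >= self.max_hold:        # declare the gap lost
            self.expected = min(self.buffer)
            while self.expected in self.buffer:
                out.append(self.buffer.pop(self.expected))
                self.expected += 1
        return out
```

A real MAMS implementation would typically pair the hold count with a reordering timer so a lone missing packet cannot stall delivery indefinitely.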

SWITCH AND SCHEDULING METHOD FOR PACKET FORWARDING OF THE SAME
20220070109 · 2022-03-03

A switch and a scheduling method for packet forwarding of the same are provided. The switch includes a plurality of absorb queues, a plurality of egress ports, and an absorb scheduler. Each of the egress ports includes a plurality of egress queues, each respectively connected to one of the absorb queues, which are different from one another. The scheduling method includes: generating a priority state for each of the egress queues of each of the egress ports; determining a packet-forwarding priority state for each of the absorb queues according to the priority state of each of the egress queues connected thereto; and the absorb scheduler selecting, according to the priority state of each of the egress queues, one of the absorb queues from which to send a stored packet to a target egress queue of a target egress port.
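The scheduling steps can be sketched as follows: each absorb queue derives a forwarding priority from the egress queues it feeds, and the scheduler dequeues from the highest-priority non-empty absorb queue. The max-aggregation of egress priorities is an assumption; the abstract does not specify how the per-absorb-queue state is derived.

```python
from collections import deque

class AbsorbScheduler:
    """Hypothetical sketch: absorb queues inherit a packet-forwarding
    priority from their connected egress queues; the scheduler serves the
    highest-priority non-empty absorb queue first."""
    def __init__(self, n_absorb, egress_priority):
        # egress_priority[a]: priority states of the egress queues connected
        # to absorb queue a (higher number = more urgent)
        self.queues = [deque() for _ in range(n_absorb)]
        self.egress_priority = egress_priority

    def enqueue(self, absorb_idx, packet):
        self.queues[absorb_idx].append(packet)

    def priority(self, absorb_idx):
        # assumed aggregation: an absorb queue is as urgent as its most
        # urgent connected egress queue
        return max(self.egress_priority[absorb_idx])

    def select(self):
        """Pick a packet from the highest-priority non-empty absorb queue."""
        candidates = [i for i, q in enumerate(self.queues) if q]
        if not candidates:
            return None
        best = max(candidates, key=self.priority)
        return self.queues[best].popleft()
```
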