H04L47/562

BUFFER MANAGEMENT METHOD AND APPARATUS
20210392092 · 2021-12-16 ·

A memory management method includes: determining that available storage space of a first memory in a network device is less than a first threshold, where the first threshold is greater than 0 and the first memory stores a first packet queue; and deleting at least one packet at the tail of the first packet queue from the first memory based on the available storage space being less than the first threshold. In other words, when the available storage space of the first memory falls below the first threshold, a packet queue (the first packet queue) is selected and a packet at its tail is deleted from the first memory.
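The threshold-driven tail-drop step can be sketched as follows. This is a minimal illustration, not the patented implementation; the fixed per-packet size and the function name are assumptions.

```python
from collections import deque

def enforce_threshold(queue: deque, available: int, threshold: int,
                      packet_size: int) -> int:
    """Delete packets from the tail of the selected queue until the
    available storage space reaches the threshold. Returns reclaimed space.
    Assumes every packet frees `packet_size` units when deleted."""
    reclaimed = 0
    while available + reclaimed < threshold and queue:
        queue.pop()               # delete the packet at the tail
        reclaimed += packet_size  # space freed by the deleted packet
    return reclaimed
```

With 10 units free, a threshold of 30, and 16-unit packets, two tail packets are deleted before the threshold is satisfied.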

Access control method, access control device, and data processing device

An access control unit includes: packet buffers provided for each user; a packet identification unit that stores received packets in the corresponding packet buffer; a scheduling unit that decides which packet buffer is to be the object of transfer; a transfer control unit that, when the application processing circuit can accept an update of reference data and the packet buffer decided by the scheduling unit differs from the packet buffer currently the object of transfer, updates the reference data to correspond to the packet buffer decided by the scheduling unit; and a buffer selection unit that connects the packet buffer decided to be the object of transfer to the packet transfer unit once the update of the reference data is completed.
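The transfer-control decision can be sketched as below. This is a hypothetical model of the described condition, not the actual circuit: the class, method names, and the representation of reference data as the selected buffer's key are all assumptions.

```python
class AccessControl:
    """Sketch: buffer is connected to the transfer unit only after the
    reference data has been updated to match the scheduled buffer."""

    def __init__(self, buffers: dict):
        self.buffers = buffers   # per-user packet buffers
        self.current = None      # buffer currently the object of transfer
        self.reference = None    # reference data held by the app circuit

    def transfer_step(self, scheduled, app_ready: bool):
        # Update reference data only if the circuit can accept an update
        # and the scheduled buffer differs from the current one.
        if app_ready and scheduled != self.current:
            self.reference = scheduled
        # Connect the buffer once the reference-data update is complete.
        if self.reference == scheduled:
            self.current = scheduled
        return self.current
```

If the application circuit is busy (`app_ready` false), the previously connected buffer remains the object of transfer.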

INFERENCE MODEL OPTIMIZATION
20220188676 · 2022-06-16 ·

An approach to optimize performance for large-scale inference models. Data in the form of images is received from sensors such as cameras. The data is processed to generate data tags associated with the context of the images and to partition the images. Model tags are generated based on data characteristics or user input. The tags and their associated data are added to a time-based queue for delivery to the appropriate inference models. Based on the embedded delivery time and frequency, the partitioned images are delivered to the appropriate inference models.
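A time-based queue keyed on the embedded delivery time could be sketched with a min-heap. This is an assumed structure for illustration; the class and method names are not from the patent.

```python
import heapq

class DeliveryQueue:
    """Sketch: tagged items ordered by embedded delivery time, popped
    and routed to their model once that time arrives."""

    def __init__(self):
        self._heap = []  # min-heap of (delivery_time, model_tag, data)

    def add(self, delivery_time: float, model_tag: str, data):
        heapq.heappush(self._heap, (delivery_time, model_tag, data))

    def due(self, now: float):
        """Yield (model_tag, data) pairs whose delivery time has arrived."""
        while self._heap and self._heap[0][0] <= now:
            _, tag, data = heapq.heappop(self._heap)
            yield tag, data
```

Items added out of order are still delivered in delivery-time order, which matches the time-based delivery described above.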

FLOWLET SCHEDULER FOR MULTICORE NETWORK PROCESSORS
20230275689 · 2023-08-31 ·

Systems and methods of using a packet order work scheduler (POWS) to assign packets to a set of scheduler queues for supplying packets to parallel processing units. A processing unit and its associated scheduler queue are dedicated to a specific flow until a queue-reallocation event, which may correspond to the associated scheduler queue being idle for at least a certain interval (as indicated by its age counter), or being the least recently used, when a new flow arrives. In that case, the scheduler queue and the associated processing unit may be reallocated to the new flow and disassociated from the previous flow. As a result, dynamic packet workload balancing can be advantageously achieved across the multiple processing paths.
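The queue-reallocation policy can be sketched as follows. This is a simplified software model of the described behavior; the `IDLE_LIMIT` value, class name, and use of a plain list for the age counters are assumptions.

```python
IDLE_LIMIT = 100  # age-counter ticks before a queue counts as idle (assumed)

class FlowletScheduler:
    """Sketch: each scheduler queue stays dedicated to one flow; a new
    flow takes an idle queue, or else the least recently used one."""

    def __init__(self, n_queues: int):
        self.flow_of = [None] * n_queues    # flow bound to each queue
        self.age = [IDLE_LIMIT] * n_queues  # ticks since queue last used

    def assign(self, flow) -> int:
        if flow in self.flow_of:
            # Existing binding: keep packet order on the same queue.
            q = self.flow_of.index(flow)
        else:
            # Queue-reallocation event: prefer an idle queue, else the LRU.
            idle = [i for i, a in enumerate(self.age) if a >= IDLE_LIMIT]
            q = idle[0] if idle else max(range(len(self.age)),
                                         key=self.age.__getitem__)
            self.flow_of[q] = flow          # reallocate to the new flow
        self.age[q] = 0                     # reset the queue's age counter
        return q

    def tick(self):
        self.age = [a + 1 for a in self.age]
```

Because an active flow always maps back to the same queue, per-flow packet order is preserved while idle capacity is reclaimed for new flows.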

Implementing a queuing system in a distributed network

A web application has a limit on the total number of concurrent users. As requests from client devices are received, a determination is made whether the application can accept those users. When the threshold number of users has been exceeded, new users are prevented from accessing the web application and are assigned to a queue system. A webpage may be sent to the users indicating their queue status and estimated wait time. A cookie may be sent to the client for tracking the position of the user in the application queue. The users are assigned to a user bucket associated with the time interval of their initial request. When user slots become available, the queued users are admitted to the web application starting from the oldest user bucket.
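The bucket-based admission flow can be sketched as below. This is an illustrative model, not the patented system; the class name, the 60-second bucket interval, and the in-memory data structures are assumptions.

```python
from collections import OrderedDict

class WaitingRoom:
    """Sketch: users over capacity are queued in buckets keyed by the
    time interval of their first request; oldest bucket is admitted first."""

    def __init__(self, capacity: int, interval: int = 60):
        self.capacity = capacity
        self.interval = interval      # bucket width in seconds (assumed)
        self.active = set()           # users inside the application
        self.buckets = OrderedDict()  # interval start -> queued users

    def request(self, user, now: float) -> str:
        if user in self.active:
            return "admitted"
        if len(self.active) < self.capacity:
            self.active.add(user)
            return "admitted"
        # Over capacity: queue the user in the bucket for this interval.
        bucket = int(now // self.interval) * self.interval
        self.buckets.setdefault(bucket, []).append(user)
        return "queued"

    def release(self, user):
        self.active.discard(user)
        # Admit queued users starting from the oldest bucket.
        while self.buckets and len(self.active) < self.capacity:
            start, users = next(iter(self.buckets.items()))
            self.active.add(users.pop(0))
            if not users:
                del self.buckets[start]
```

In a real deployment the cookie mentioned above would carry the user's bucket and position so the queue survives page reloads.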

Highly deterministic latency in a distributed system

A distributed computing system, such as may be used to implement an electronic trading system, supports a notion of fairness in latency. The system does not favor any particular client. Thus, being connected to a particular access point into the system (such as via a gateway) does not give any particular device an unfair advantage or disadvantage over another. That end is accomplished by precisely controlling latency, that is, the time between when request messages arrive at the system and a time at which corresponding response messages are permitted to leave. The precisely controlled, deterministic latency can be fixed over time, or it can vary according to some predetermined pattern, or vary randomly within a predetermined range of values.
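The fixed-latency case can be sketched as follows: a response is held until a target latency has elapsed since its request arrived, regardless of when processing finished. This is an assumed simplification; the class and method names are illustrative only.

```python
import heapq

class LatencyEqualizer:
    """Sketch: responses are released exactly `target` after the
    corresponding request arrived, equalizing latency across gateways."""

    def __init__(self, target_latency: float):
        self.target = target_latency
        self._pending = []  # min-heap of (release_time, response)

    def on_response(self, arrival_time: float, response):
        # Release time is fixed relative to request arrival, not to when
        # the response was actually computed.
        heapq.heappush(self._pending, (arrival_time + self.target, response))

    def releasable(self, now: float):
        out = []
        while self._pending and self._pending[0][0] <= now:
            out.append(heapq.heappop(self._pending)[1])
        return out
```

Varying the per-request target according to a pattern, or drawing it from a bounded random range, yields the other two modes described above.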

PACKET FORWARDING METHOD, PACKET FORWARDING APPARATUS AND ELECTRONIC DEVICE
20230254263 · 2023-08-10 ·

The embodiments of the present disclosure provide a packet forwarding method, a packet forwarding apparatus, and an electronic device. A corresponding outbound-interface-direction-queue scheduling period is set up for the outbound interface on the network device, together with the number of time slices in that scheduling period and the length of one time slice, and the time slices within the scheduling period are allocated to the outbound-interface-direction queues corresponding to the outbound interface. The network device can then schedule, in each time slice of the scheduling period, the packets in the outbound-interface-direction queue corresponding to that time slice. This keeps the delay of waiting for scheduling an outbound-interface-direction queue between the worst-case and best-case delays, realizes a deterministic queue-scheduling delay, and thus realizes a deterministic packet forwarding delay.
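The slice-to-queue mapping can be sketched as below. The slice length, period size, and the particular allocation are assumed values for illustration only.

```python
SLICE_LEN = 10         # microseconds per time slice (assumed)
SLICES_PER_PERIOD = 4  # time slices in one scheduling period (assumed)

# Assumed allocation of time slices to outbound-interface-direction queues.
slice_owner = {0: "q0", 1: "q1", 2: "q0", 3: "q2"}

def queue_for_time(t_us: int) -> str:
    """Return the queue scheduled at absolute time t_us (microseconds).
    The period repeats, so each queue's wait is bounded by its slice
    positions within the period, giving a deterministic delay."""
    slice_idx = (t_us // SLICE_LEN) % SLICES_PER_PERIOD
    return slice_owner[slice_idx]
```

Because the mapping repeats every period, a queue's worst-case wait is the longest gap between its allocated slices, which is what bounds the scheduling delay.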

SWITCH DEVICE, CONTROL DEVICE AND CORRESPONDING METHODS FOR ENHANCED SCHEDULABILITY AND THROUGHPUT ON A TSN NETWORK

A device and method for a switch to operate as an intermediate node in a Time Sensitive Network (TSN) are provided. The switch transmits a frame at a given time only if, according to a configuration stored by the switch, it is the right frame to be transmitted at that time; otherwise, the switch does not transmit the frame at that time. Further, a device and method for scheduling transmission of a data packet from a talker node to a listener node are provided, including sending to each switch of a subset of switches in the network a configuration comprising information on the flow and the timing at which it is to be output from the switch.
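The per-switch gate check can be sketched as follows. Representing the stored configuration as a slot-to-flow mapping over a repeating cycle is an assumption for illustration.

```python
def may_transmit(config: dict, now: int, frame_flow: str) -> bool:
    """Sketch: a frame is transmitted only if its flow matches the one
    configured for the current slot. `config` maps slot index (within a
    repeating cycle of len(config) slots) to the allowed flow id."""
    slot = now % len(config)
    return config.get(slot) == frame_flow
```

A frame belonging to a different flow than the one configured for the current slot is simply held until its own slot arrives.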

Packet Scheduling Method, Scheduler, Network Device, and Network System
20210359931 · 2021-11-18 ·

A network device adds an extreme low latency (ELL) service packet to an ELL queue, and adds a time-sensitive (TS) service packet to a TS queue. A packet in the TS queue is sent within the time window corresponding to the TS queue, and is not allowed to be sent outside that time window. When the remaining time period, obtained by subtracting from the time window the time period required by the to-be-sent TS service packets within that window, is greater than or equal to a first threshold, a packet in the ELL queue is allowed to be sent within the time window corresponding to the TS queue. The first threshold is the time period required for sending one or more ELL service packets in the ELL queue.
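The window-sharing rule reduces to a single slack comparison, sketched below; the function and parameter names are illustrative assumptions.

```python
def can_send_ell(window_len: float, ts_send_time: float,
                 ell_send_time: float) -> bool:
    """Sketch: an ELL packet may borrow the TS queue's time window only
    if the slack left after all pending TS packets is at least the time
    needed to send the ELL packet (the first threshold)."""
    remaining = window_len - ts_send_time  # slack left in the TS window
    return remaining >= ell_send_time      # first-threshold comparison
```

This guarantees the borrowed transmission can finish before the TS window's own traffic is jeopardized.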

Traffic-shaping HTTP proxy for denial-of-service protection
11757929 · 2023-09-12 ·

In accordance with some aspects of the present disclosure, an apparatus is disclosed. In some embodiments, the apparatus includes a processor and a memory. In some embodiments, the memory includes programmed instructions that, when executed by the processor, cause the apparatus to receive a request from a client; determine a family of metrics; schedule the request based on the family of metrics; and, in response to satisfying one or more scheduling criteria, send the request to a backend server.
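The abstract does not name its metrics or criteria, so the sketch below uses a token bucket as one assumed example of a scheduling criterion; the class name, rate/burst parameters, and decision logic are all illustrative, not the patented method.

```python
import time

class ShapingProxy:
    """Sketch: a request is forwarded to the backend only when the
    tracked metric (here, a token bucket) satisfies the criterion."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def schedule(self, request) -> bool:
        # Refill tokens according to elapsed time, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:   # scheduling criterion satisfied
            self.tokens -= 1.0
            return True          # send the request to the backend server
        return False             # hold or reject the request
```

Under a flood of requests the bucket drains, so excess traffic is shed at the proxy instead of reaching the backend, which is the denial-of-service protection the title refers to.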