H04L47/621

PACKET PROCESSING CONFIGURATIONS
20230043461 · 2023-02-09 ·

Examples described herein relate to an interface and a network interface device coupled to the interface and comprising circuitry. In some examples, the circuitry is to receive packet data to be egressed, wherein the packet data does not specify a destination, and to process the packet data to be egressed to generate a mapping of ingress packet to target based on a determination.
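The mapping step can be illustrated with a minimal Python sketch; the function name, the `pick_target` policy, and the field names are hypothetical stand-ins, since the abstract does not specify how the determination is made:

```python
def build_ingress_to_target_map(packets, pick_target):
    """For egress packet data that carries no destination, derive one
    via a selection policy and record an ingress-packet-to-target
    mapping. `pick_target` is a stand-in for the device's determination."""
    mapping = {}
    for pkt in packets:
        mapping[pkt["ingress_id"]] = pick_target(pkt)
    return mapping
```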

CONVERGENCE SUBLAYER FOR USE IN A WIRELESS BROADCASTING SYSTEM

A method of encapsulating data and a single frequency network configured to perform the method are disclosed. A content stream of data packets is received, and the data packets in the content stream are formatted in accordance with a first protocol. Information identifying a container size established for the content stream is received. The data packets formatted in accordance with the first protocol are fragmented and packed to form data units formatted in accordance with a second protocol, and the data units are sized based on the container size. The data units formatted in accordance with the second protocol are encapsulated to form second protocol data packets. The second protocol data packets are provided to a transmitter that is synchronized to one or more transmitters in a single frequency network so that each transmitter in the single frequency network broadcasts a same signal that includes the second protocol data packets.
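The fragment-and-pack step can be sketched at the byte level in Python. This is a toy model under assumed semantics (packets concatenated, last unit zero-padded); the real convergence sublayer adds second-protocol headers that are omitted here:

```python
def pack_into_containers(packets, container_size):
    """Fragment and pack first-protocol packets into fixed-size
    second-protocol data units sized by the container size."""
    stream = b"".join(packets)  # concatenate first-protocol payloads
    units = [stream[i:i + container_size]
             for i in range(0, len(stream), container_size)]
    if units and len(units[-1]) < container_size:
        units[-1] = units[-1].ljust(container_size, b"\x00")  # pad final unit
    return units
```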

CIRCUIT AND METHOD FOR CREDIT-BASED FLOW CONTROL
20180013689 · 2018-01-11 ·

A receiving circuit of a communications link comprises: a first data buffer configured to input, under control of a first clock signal, data of a first data stream transmitted by a transmitting circuit, and to generate a credit trigger signal indicating when a data value is read from the first data buffer, wherein data is read from the first data buffer, or from a further data buffer coupled to the output of the first data buffer, under control of a second clock signal; and a credit generation circuit configured to generate, based on the credit trigger signal, a credit signal for transmission to the transmitting circuit under control of the first clock signal, the credit signal indicating that one or more further data values of the first data stream can be transmitted by the transmitting circuit.
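The credit mechanism can be modeled with a short, single-threaded Python sketch: the transmitter spends a credit per value sent, and each read from the receive buffer returns a credit. This abstracts away the two clock domains the circuit actually crosses:

```python
class CreditLink:
    """Toy model of credit-based flow control: the transmitter may only
    send while it holds credits; each read returns one credit."""
    def __init__(self, buffer_depth):
        self.credits = buffer_depth  # transmitter starts with full credits
        self.fifo = []

    def transmit(self, value):
        if self.credits == 0:
            return False             # must wait for a credit signal
        self.credits -= 1
        self.fifo.append(value)
        return True

    def read(self):
        value = self.fifo.pop(0)     # read side (second clock domain)
        self.credits += 1            # credit trigger -> credit signal
        return value
```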

SYSTEMS AND METHODS FOR ZERO-FOOTPRINT LARGE-SCALE USER-ENTITY BEHAVIOR MODELING
20230006892 · 2023-01-05 ·

Systems and methods are disclosed herein for reducing storage space used in tracking behavior of a plurality of network endpoints by modeling the behavior with a behavior model. To this end, control circuitry may determine a respective network endpoint, of a plurality of network endpoints, to which each respective record of a plurality of received records corresponds. The control circuitry then may assign a dedicated queue for each respective network endpoint, and transmit, to each dedicated queue, each record that corresponds to the respective network endpoint to which the respective dedicated queue is assigned. The control circuitry may then determine, for each respective network endpoint, a respective behavior model, and may store each respective behavior model to memory.
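The queue-per-endpoint idea can be sketched in Python. The "behavior model" here is just summary counts standing in for whatever model the control circuitry computes; the names are illustrative, not from the disclosure:

```python
from collections import defaultdict

def build_behavior_models(records):
    """Route each (endpoint, event) record to a dedicated per-endpoint
    queue, then reduce each queue to a compact summary so raw records
    need not be stored."""
    queues = defaultdict(list)
    for endpoint, event in records:
        queues[endpoint].append(event)  # dedicated queue per endpoint
    # replace raw records with summary statistics (the storage saving)
    return {ep: {"events": len(q), "distinct": len(set(q))}
            for ep, q in queues.items()}
```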

TRAFFIC SHAPING METHOD AND APPARATUS
20230239248 · 2023-07-27 ·

This application provides a traffic shaping method and apparatus. The method includes: A packet marking apparatus receives a first packet; the packet marking apparatus determines an enqueuing queue of the first packet; and the packet marking apparatus marks a queue identifier of the first packet as a queue identifier of the enqueuing queue of the first packet, and then sends the queue identifier of the first packet to a packet output apparatus, where the packet output apparatus is configured to send, based on the queue identifier of the first packet, the first packet to a corresponding queue for output. In this way, the packet output time after traffic shaping can be determined.
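The mark-then-output split can be sketched as two Python functions; the dict-based packet, the `classify` callback, and the queue names are assumptions for illustration:

```python
def mark_queue_id(packet, classify):
    """Packet marking stage: determine the enqueuing queue and record
    its identifier in the packet metadata."""
    packet["queue_id"] = classify(packet)
    return packet

def output_stage(packet, queues):
    """Packet output stage: enqueue the packet on the queue named by
    its marked identifier."""
    queues[packet["queue_id"]].append(packet)
```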

Reduced-complexity integrated guaranteed-rate optical packet switch
11716557 · 2023-08-01 ·

A reduced-complexity optical packet switch which can provide a deterministic guaranteed rate of service to individual traffic flows is described. The switch contains N input ports, M output ports and N*M Virtual Output Queues (VOQs). Packets are associated with a flow f; they arrive at an input port and depart on an output port according to a predetermined routing for the flow. These packets are buffered in a VOQ. The switch can be configured to store several deterministic periodic schedules, which can be managed by an SDN control-plane. A scheduling frame is defined as a set of F consecutive time-slots, where data can be transmitted over connections between input ports and output ports in each time-slot. Each input port can be assigned a first deterministic periodic transmission schedule, which determines which VOQ is selected to transmit, for every time-slot in the scheduling frame. Each input port can be assigned a second deterministic periodic schedule, which determines which traffic flow within a VOQ is selected to transmit. Each input port can be assigned a third deterministic periodic schedule, which specifies to which VOQ an arriving packet (if any) is destined, for each time-slot in a scheduling frame. Each input port can be assigned a fourth deterministic periodic schedule, which specifies to which Flow-VOQ within a VOQ an arriving packet (if any) is destined. In this manner, each traffic flow can receive a deterministic guaranteed-rate of transmission through the switch.
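The first schedule (which VOQ may transmit in each time-slot of a frame) can be sketched in Python. This toy model covers only one of the four schedules the abstract describes, with made-up VOQ contents:

```python
def serve_frame(schedule, voqs):
    """One scheduling frame of F time-slots: in each slot, the
    deterministic periodic schedule names which VOQ (if any) may
    transmit the packet at its head."""
    sent = []
    for slot, voq_id in enumerate(schedule):  # F consecutive time-slots
        if voq_id is not None and voqs[voq_id]:
            sent.append((slot, voqs[voq_id].pop(0)))
    return sent
```

Because the schedule is fixed and periodic, each flow's share of the F slots gives it a deterministic guaranteed rate.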

NETWORK DEVICE THAT UTILIZES PACKET GROUPING
20230013473 · 2023-01-19 ·

A packet group processor of a network device defines groups of packets among packets that are being processed by the network device, each of at least some of the groups of packets defining a respective group of at least two different packets. Each group includes one or more packets to be transmitted via a respective same network interface. A transmit processor makes a single transmit decision that a particular group of at least two packets is to be transmitted via a corresponding network interface, and in response to the single transmit decision, transfers the particular group of at least two packets to the corresponding network interface for transmission.
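The grouping step can be sketched in Python: packets sharing an egress interface are gathered so a single transmit decision can cover the whole group. The dict-based packets and key names are assumptions:

```python
from collections import defaultdict

def group_by_interface(packets):
    """Group packets that will leave via the same network interface,
    so one transmit decision applies to each group as a unit."""
    groups = defaultdict(list)
    for pkt in packets:
        groups[pkt["egress_if"]].append(pkt)
    return groups
```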

Managing virtual output queues

A first node of a packet switched network transmits at least one flow of protocol data units of a network to at least one output context of one of a plurality of second nodes of the network. The first node includes X virtual output queues (VOQs). The first node receives, from at least one of the second nodes, at least one fair rate record. Each fair rate record corresponds to a particular second node output context and describes a recommended rate of flow to the particular output context. The first node allocates up to X of the VOQs among flows corresponding to i) currently allocated VOQs, and ii) the flows corresponding to the received fair rate records. The first node operates each allocated VOQ according to the corresponding recommended rate of flow until a deallocation condition obtains for the each allocated VOQ.
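The allocation step can be sketched in Python. The selection policy (keeping the flows with the highest recommended rates when more than X compete) is an assumption; the claim only bounds the count at X VOQs:

```python
def allocate_voqs(current, fair_rate_records, limit):
    """Allocate up to `limit` VOQs among flows with currently allocated
    VOQs and flows named in received fair rate records; each kept flow
    is paced at its recommended rate."""
    rates = dict(current)
    for flow, rate in fair_rate_records:
        rates[flow] = rate  # newer fair rate record wins
    # assumed policy: prefer the highest recommended rates when over limit
    kept = sorted(rates.items(), key=lambda kv: kv[1], reverse=True)[:limit]
    return dict(kept)
```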

Throttling queue for a request scheduling and processing system

Various methods and systems for implementing request scheduling and processing in a multi-tenant distributed computing environment are provided. Requests to utilize system resources in the distributed computing environment are stored in account queues corresponding to tenant accounts. If storing a request in an account queue would exceed a throttling threshold such as a limit on the number of requests stored per account, the request is dropped to a throttling queue. A scheduler prioritizes processing requests stored in the throttling queue before processing requests stored in the account queues. The account queues can be drained using dominant resource scheduling. In some embodiments, a request is not picked up from an account queue if processing the request would exceed a predefined hard limit on system resource utilization for the corresponding tenant account. In some embodiments, the hard limit is defined as a percentage of threads the system has to process requests.
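The queue discipline can be sketched in Python: per-account queues with a depth cap, overflow spilling to a shared throttling queue that the scheduler drains first. Class and method names are illustrative, and the dominant-resource draining of account queues is reduced here to simple first-found order:

```python
from collections import deque

class ThrottlingScheduler:
    """Toy model: account queues with a per-account depth limit;
    overflow is dropped to a throttling queue, which is served first."""
    def __init__(self, per_account_limit):
        self.limit = per_account_limit
        self.accounts = {}
        self.throttling = deque()

    def submit(self, account, request):
        q = self.accounts.setdefault(account, deque())
        if len(q) >= self.limit:
            self.throttling.append((account, request))  # dropped to throttling queue
        else:
            q.append(request)

    def next_request(self):
        if self.throttling:              # throttling queue has priority
            return self.throttling.popleft()
        for account, q in self.accounts.items():
            if q:
                return (account, q.popleft())
        return None
```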

Systems and methods for zero-footprint large-scale user-entity behavior modeling
11509540 · 2022-11-22 ·

Systems and methods are disclosed herein for reducing storage space used in tracking behavior of a plurality of network endpoints by modeling the behavior with a behavior model. To this end, control circuitry may determine a respective network endpoint, of a plurality of network endpoints, to which each respective record of a plurality of received records corresponds. The control circuitry then may assign a dedicated queue for each respective network endpoint, and transmit, to each dedicated queue, each record that corresponds to the respective network endpoint to which the respective dedicated queue is assigned. The control circuitry may then determine, for each respective network endpoint, a respective behavior model, and may store each respective behavior model to memory.