H04L49/90

BUFFERING OF LAYER THREE MESSAGES FOR SUBSCRIBER TRACING
20230231793 · 2023-07-20

Systems, methods, apparatuses, and computer program products for buffering of messages for more complete subscriber or user equipment tracing are provided. For example, a method can include buffering information needed for an individual trace prior to an activation opportunity of the individual trace. The method may also include providing the buffered information to a trace collection entity conditioned on the individual trace being activated.
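
The abstract describes retaining Layer 3 messages before a trace can be activated and delivering them only if activation occurs. The Python sketch below illustrates one plausible reading; the class, the fixed-size deque, and the send_to_tce callback are illustrative assumptions, not interfaces from the publication.

```python
from collections import deque

class TraceBuffer:
    """Retains recent per-subscriber messages so that, if an individual
    trace is activated later, messages from before activation are not lost."""

    def __init__(self, capacity=64):
        self._messages = deque(maxlen=capacity)  # oldest entries drop when full
        self._active = False

    def record(self, message):
        """Buffer a message prior to any trace-activation opportunity."""
        if self._active:
            send_to_tce(message)             # trace running: forward live
        else:
            self._messages.append(message)   # keep for possible later activation

    def activate_trace(self):
        """On activation, deliver the buffered history to the collection entity."""
        self._active = True
        while self._messages:
            send_to_tce(self._messages.popleft())

def send_to_tce(message):
    print("TCE <-", message)  # stand-in for a real trace collection entity

buf = TraceBuffer()
buf.record("RRCSetupRequest")  # buffered: no trace active yet
buf.activate_trace()           # flushes the pre-activation history to the TCE
```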

Congestion Mitigation in a Distributed Storage System

A system comprises a plurality of computing devices that are communicatively coupled via a network and have a file system distributed among them, and comprises one or more file system request buffers residing on one or more of the plurality of computing devices. File system choking management circuitry that resides on one or more of the plurality of computing devices is operable to separately control: a first rate at which a first type of file system requests (e.g., one of data requests, data read requests, data write requests, metadata requests, metadata read requests, and metadata write requests) are fetched from the one or more buffers, and a second rate at which a second type of file system requests (e.g., another of data requests, data read requests, data write requests, metadata requests, metadata read requests, and metadata write requests) are fetched from the one or more buffers.
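
The separately controlled per-type fetch rates lend themselves to a worked example. Below is a minimal Python sketch of such "choking" using one token bucket per request class; the abstract does not prescribe token buckets, and all names here are invented for illustration.

```python
import time
from collections import deque

class ChokedFetcher:
    def __init__(self, rates):
        # rates: allowed fetches per second, set independently per request type
        self.rates = rates
        self.tokens = {t: 0.0 for t in rates}
        self.buffers = {t: deque() for t in rates}  # file system request buffers
        self.last = time.monotonic()

    def submit(self, req_type, request):
        self.buffers[req_type].append(request)

    def fetch_ready(self):
        """Fetch requests whose type still has rate budget; each type's
        budget refills at its own configured rate (the choking control)."""
        now = time.monotonic()
        elapsed, self.last = now - self.last, now
        ready = []
        for t, rate in self.rates.items():
            self.tokens[t] = min(self.tokens[t] + rate * elapsed, rate)  # 1 s burst cap
            while self.tokens[t] >= 1.0 and self.buffers[t]:
                ready.append(self.buffers[t].popleft())
                self.tokens[t] -= 1.0
        return ready

# e.g. throttle metadata writes far harder than data reads under congestion:
fetcher = ChokedFetcher({"data_read": 500.0, "metadata_write": 50.0})
```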

Datapath for multiple tenants

A novel design of a gateway that handles traffic in and out of a network by using a datapath pipeline is provided. The datapath pipeline includes multiple stages for performing various data-plane packet-processing operations at the edge of the network. The processing stages include centralized routing stages and distributed routing stages, and can also include service-providing stages such as NAT and firewall. The gateway caches the results of previous packet-processing operations and reapplies those results to subsequent packets that meet certain criteria. For packets that have no applicable or valid result from previous packet-processing operations, the gateway datapath daemon executes the pipelined packet-processing stages, records a set of data from each stage of the pipeline, and synthesizes those data into a cache entry for subsequent packets.
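
The cache-then-replay behavior can be made concrete with a short sketch. Assuming each stage returns a transformed packet plus the actions it applied, a single cache entry can be synthesized from the recorded actions and replayed on later packets of the same flow; the flow key, stage signature, and nat_stage example are illustrative assumptions only.

```python
flow_cache = {}  # flow key -> synthesized list of actions

def flow_key(packet):
    return (packet["src"], packet["dst"], packet["proto"])

def process(packet, stages):
    key = flow_key(packet)
    cached = flow_cache.get(key)
    if cached is not None:                # fast path: reapply the prior result
        return apply_actions(packet, cached)
    recorded = []
    for stage in stages:                  # slow path: full pipelined processing
        packet, actions = stage(packet)   # e.g. routing, NAT, firewall stages
        recorded.extend(actions)
    flow_cache[key] = recorded            # synthesize one cache entry for the flow
    return packet

def apply_actions(packet, actions):
    for action in actions:
        packet = action(packet)
    return packet

def nat_stage(packet):                    # toy stage: source NAT
    rewrite = lambda p: dict(p, src="10.0.0.1")
    return rewrite(packet), [rewrite]

pkt = {"src": "192.0.2.9", "dst": "198.51.100.4", "proto": 6}
process(dict(pkt), [nat_stage])   # first packet: runs the pipeline, fills cache
process(dict(pkt), [nat_stage])   # same flow: served from the cache entry
```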

PACKET PROCESSING OF STREAMING CONTENT IN A COMMUNICATIONS NETWORK

Aspects of the present disclosure include devices within a transmission path of streamed content that forward received data packets of the stream to the next device or "hop" in the path prior to buffering the data packets locally. In this method, typical buffering of the data stream may therefore occur at the destination device for presentation at a consuming device, while the devices along the transmission path may transmit a received packet before buffering it. Further, devices along the path may also buffer the content stream after forwarding, in order to fill subsequent requests for dropped data packets of the content stream. Also, in response to receiving a request for the content stream, a device may first transmit a portion of the contents of its gateway buffer to the requesting device to fill a respective buffer at the receiving device.
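
A small sketch may help fix the ordering: the intermediate hop relays each packet downstream first, then retains a copy to answer retransmission requests and to prime a newly joining receiver's buffer. The class shape and callbacks below are assumptions.

```python
class ForwardingHop:
    def __init__(self, downstream, history=1024):
        self.downstream = downstream   # next device ("hop") in the path
        self.buffer = {}               # seq -> packet, retained after forwarding
        self.history = history

    def on_packet(self, seq, packet):
        self.downstream(seq, packet)   # transmit before buffering
        self.buffer[seq] = packet      # then buffer for dropped-packet fills
        if len(self.buffer) > self.history:
            self.buffer.pop(min(self.buffer))   # evict the oldest entry

    def on_retransmit_request(self, seq, reply):
        if seq in self.buffer:         # fill the request from this hop's buffer
            reply(seq, self.buffer[seq])

    def on_stream_request(self, reply, prefill=32):
        # prime the requester's buffer from recent history before live packets
        for seq in sorted(self.buffer)[-prefill:]:
            reply(seq, self.buffer[seq])

hop = ForwardingHop(lambda s, p: print("forwarded", s))
hop.on_packet(1, b"frame-1")
hop.on_retransmit_request(1, lambda s, p: print("refilled", s))
```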

Processing task deployment in adapter devices and accelerators

Example approaches for processing task deployment in adapter devices and accelerators are described. In an example, a service request is received by an adapter device. The service request is indicative of a service associated with a virtual multi-layer network switch. An accelerator may be integrated into the adapter device or coupled to the adapter device. A set of processing tasks associated with the service is identified based on the service request. A processing task instance corresponding to at least one of the set of processing tasks is deployed in one of the adapter device and the accelerator, based on predefined configuration information. The predefined configuration information includes policies for executing each of the set of processing tasks in one of the adapter device and the accelerator.
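
Since placement is driven by predefined configuration, it can be pictured as a policy table mapping each processing task of a service to an execution target. The service and task names below are invented; the abstract does not name concrete tasks.

```python
# Hypothetical policy table: for each service, where each task is deployed.
CONFIG = {
    "vswitch_overlay": {                 # a service of the virtual multi-layer switch
        "tunnel_encap": "accelerator",   # offload encapsulation
        "acl_lookup": "accelerator",     # offload flow classification
        "flow_stats": "adapter",         # keep bookkeeping on the adapter
    },
}

def deploy(service_request):
    """Deploy a task instance on the adapter or the accelerator,
    per the predefined configuration for the requested service."""
    service = service_request["service"]
    return [(task, target) for task, target in CONFIG[service].items()]

print(deploy({"service": "vswitch_overlay"}))
```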

Application-level network queueing

There is disclosed in one example a network interface card (NIC), comprising: an ingress interface to receive incoming traffic; a plurality of queues to queue incoming traffic; an egress interface to direct incoming traffic to a plurality of server applications; and a queuing engine, including logic to: uniquely associate a queue with a selected server application; receive an incoming network packet; determine that the selected server application may process the incoming network packet; and assign the incoming network packet to the queue.
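
The core of the claim is a one-to-one binding between a queue and a server application, with incoming packets steered to the queue of the application that can process them. The sketch below assumes classification by destination port; the claim does not fix a classification rule.

```python
from queue import Queue

queues = {}                        # application id -> its dedicated queue

def associate(app_id):
    queues[app_id] = Queue()       # uniquely associate a queue with this app
    return queues[app_id]

def on_packet(packet):
    app_id = packet["dst_port"]    # assumed classifier: port identifies the app
    if app_id in queues:           # the app can process this packet
        queues[app_id].put(packet)     # assign it to that app's own queue

web_q = associate(443)             # e.g. a TLS server application
on_packet({"dst_port": 443, "payload": b"..."})
print(web_q.get())
```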

Accelerating distributed reinforcement learning with in-switch computing

A programmable switch includes an input arbiter to analyze packet headers of incoming packets and determine which of the incoming packets are part of gradient vectors received from worker computing devices that are performing reinforcement learning. The programmable switch also includes an accelerator coupled to the input arbiter, the accelerator to: receive the incoming packets from the input arbiter; asynchronously aggregate gradient values of the incoming packets, as the gradient values are received, to generate an aggregated data packet associated with a gradient segment of the gradient vectors; and transfer the aggregated data packet to the input arbiter to be transmitted to the worker computing devices, which are to update local weights based on the aggregated data packet.
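
As a rough model of the in-switch aggregation, gradient values can be summed per segment as packets arrive in any order, with an aggregated packet emitted once every worker has contributed. The worker count, packet layout, and broadcast callback are assumptions.

```python
NUM_WORKERS = 4
partial = {}   # segment id -> (running sums, set of workers already seen)

def on_gradient_packet(worker, segment, values, broadcast):
    sums, seen = partial.setdefault(segment, ([0.0] * len(values), set()))
    if worker in seen:
        return                         # duplicate: aggregate each worker once
    for i, v in enumerate(values):     # aggregate asynchronously, on arrival
        sums[i] += v
    seen.add(worker)
    if len(seen) == NUM_WORKERS:       # segment complete
        broadcast(segment, sums)       # workers update local weights from this
        del partial[segment]

for w in range(NUM_WORKERS):           # packets may arrive from workers in any order
    on_gradient_packet(w, 7, [0.1, -0.2], lambda s, g: print("segment", s, g))
```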