H04L49/9063

Queuing system to predict packet lifetime in a computing device
10623329 · 2020-04-14

Techniques are disclosed for a queuing system for network devices. In one example, a network device includes a plurality of memories and processing circuitry connected to the plurality of memories. The plurality of memories includes a local memory of the processing circuitry and a memory external to the processing circuitry. The processing circuitry is configured to receive an incoming network packet to be processed, wherein the network packet is held in a queue prior to processing, and to determine a predicted lifetime of the network packet based on a dequeue rate for the queue. The processing circuitry is further configured to select a first memory from the plurality of memories based on the predicted lifetime, and to store the network packet at the first memory in response to selecting the first memory from the plurality of memories.
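The selection logic described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes the predicted lifetime is queue depth divided by dequeue rate, and a fixed threshold below which packets stay in fast local memory (all names and the threshold are hypothetical).

```python
def predict_lifetime(queue_depth_pkts: int, dequeue_rate_pps: float) -> float:
    """Estimated time (seconds) the packet will wait in the queue."""
    return queue_depth_pkts / dequeue_rate_pps

def select_memory(queue_depth_pkts: int, dequeue_rate_pps: float,
                  threshold_s: float = 0.001) -> str:
    """Short-lived packets go to on-chip local memory; long-lived
    packets are spilled to memory external to the processing circuitry."""
    lifetime = predict_lifetime(queue_depth_pkts, dequeue_rate_pps)
    return "local" if lifetime < threshold_s else "external"
```

Under this model, a packet joining a shallow, fast-draining queue is kept close to the processing circuitry, while a packet expected to linger is moved to larger external memory.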

Software-defined Interconnection Method and Apparatus for Heterogeneous Protocol Data

The present disclosure provides a software-defined interconnection method and apparatus for heterogeneous protocol data, including: determining ports respectively corresponding to multiple network node devices connected with a software-definable network switching device; configuring each of the ports according to the protocol type of its corresponding network node device, to obtain first configuration information, and configuring the port rate of each port according to the data transmission rate of the network node device, to obtain second configuration information; receiving, through each of the ports, a data packet sent by the connected network node device; and determining, according to the protocol type of the destination port of the data packet, whether protocol conversion needs to be performed, wherein, if conversion is needed, the data packet is encapsulated into the protocol type of the destination port and sent to the destination port.
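The forwarding decision above can be sketched as a small dispatch function. This is a hypothetical illustration only: the port table, protocol names, and packet fields are invented for the example, and conversion happens only when the ingress packet's protocol differs from the destination port's configured protocol.

```python
# Per-port configuration (first/second configuration information):
# protocol type and port rate, keyed by port number.
PORT_CONFIG = {
    1: {"protocol": "ethernet", "rate_gbps": 10},
    2: {"protocol": "fibre_channel", "rate_gbps": 8},
}

def forward(packet: dict, dst_port: int) -> dict:
    """Re-encapsulate the packet only if its protocol differs from
    the destination port's configured protocol type."""
    dst_proto = PORT_CONFIG[dst_port]["protocol"]
    if packet["protocol"] != dst_proto:
        packet = {"protocol": dst_proto, "payload": packet["payload"]}
    return packet
```

A packet already matching the destination port's protocol passes through unchanged; otherwise its payload is wrapped in the destination protocol before transmission.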

QUEUING SYSTEM TO PREDICT PACKET LIFETIME IN A COMPUTING DEVICE
20200007454 · 2020-01-02

Techniques are disclosed for a queuing system for network devices. In one example, a network device includes a plurality of memories and processing circuitry connected to the plurality of memories. The plurality of memories includes a local memory of the processing circuitry and a memory external to the processing circuitry. The processing circuitry is configured to receive an incoming network packet to be processed, wherein the network packet is held in a queue prior to processing, and to determine a predicted lifetime of the network packet based on a dequeue rate for the queue. The processing circuitry is further configured to select a first memory from the plurality of memories based on the predicted lifetime, and to store the network packet at the first memory in response to selecting the first memory from the plurality of memories.

PACKET BUFFERING TECHNOLOGIES

Examples described herein relate to a switch. In some examples, the switch includes circuitry that is configured to: based on receipt of a packet and a level of a first queue, select among a first memory and a second memory device among multiple second memory devices to store the packet, based on selection of the first memory, store the packet in the first memory, and based on selection of the second memory device among multiple second memory devices, store the packet into the selected second memory device. In some examples, the packet is associated with an ingress port and an egress port, and the selected second memory device is associated with a third port that is different than the ingress port or the egress port associated with the packet.
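One plausible reading of the selection described above can be sketched as follows. This is an assumption-laden illustration: it supposes a queue fill-level threshold decides between the first memory and the second memory devices, and that the second device is chosen so its associated port differs from the packet's ingress and egress ports (all names are hypothetical).

```python
def select_buffer(queue_level: int, threshold: int,
                  ingress: int, egress: int,
                  second_ports: list) -> str:
    """Pick the first memory when the queue is shallow; otherwise pick
    a second memory device attached to a third port distinct from the
    packet's ingress and egress ports."""
    if queue_level < threshold:
        return "first_memory"
    for port in second_ports:
        if port not in (ingress, egress):
            return f"second_memory@port{port}"
    raise RuntimeError("no eligible second memory device")
```

The third-port constraint keeps the spill traffic off the ports the packet itself is using.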

AXI-CAPI adapter

The coherent accelerator processor interface (CAPI) provides high performance when using heterogeneous compute architectures, but CAPI is not compatible with the advanced extensible interface (AXI) used by many accelerators. The examples herein describe an AXI-CAPI adapter (e.g., a hardware architecture) that converts AXI signals to CAPI signals and vice versa. In one example, the AXI-CAPI adapter includes four modules: a low-level shim, a high-level shim, an AXI full module, and an AXI Lite module, which are organized in a hierarchy of hardware elements. Each of the modules can output a different version of the AXI signals using the hierarchical structure.

Packet loss mitigation in an elastic container-based network

Packet loss mitigation may be provided. First, queue control data may be sent to a first container, and a route may then be stalled after sending the queue control data. The route may correspond to a data path that leads to the first container. Next, modified queue control data may be received from the first container, and, in response to receiving the modified queue control data, the first container may be deleted safely with empty queues, preventing packet loss.
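The drain-before-delete sequence above can be sketched with toy container and route objects. Everything here is illustrative (the abstract does not specify interfaces): the container drains its queue on receiving control data, the route is stalled to stop new traffic, and deletion proceeds only once the container reports empty queues.

```python
class Container:
    def __init__(self):
        self.queue = []
        self.deleted = False
    def receive_queue_control(self):
        # On receiving queue control data, drain any queued packets.
        self.queue.clear()
    def report_queues(self):
        # The "modified queue control data" sent back to the controller.
        return {"queues_empty": not self.queue}
    def delete(self):
        self.deleted = True

class Route:
    def __init__(self):
        self.stalled = False
    def stall(self):
        self.stalled = True

def drain_and_delete(container: Container, route: Route) -> bool:
    container.receive_queue_control()   # 1. send queue control data
    route.stall()                       # 2. stall the route to the container
    ack = container.report_queues()     # 3. receive modified control data
    if ack["queues_empty"]:
        container.delete()              # 4. safe delete: no packets in flight
        return True
    return False
```

Stalling the route before checking the queues is what prevents packets from arriving between the empty-queue check and the deletion.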

Protocol data unit management
10455641 · 2019-10-22

A system and method are provided for configuring a plurality of PDU sessions. In an implementation, a core network function transmits a PDU bulk configuration request to at least one network node associated with the plurality of PDU sessions. The PDU bulk configuration request includes a PDU session identifying component to enable the at least one network node to identify the plurality of PDU sessions, and a PDU session modifying component identifying required changes to one or more PDU session parameters to configure the plurality of PDU sessions.
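The bulk request's two components can be illustrated with a small sketch. The field names and the matching rule are assumptions, not the patented format: the identifying component is modeled as a key/value selector over session parameters, and the modifying component as a dict of parameter updates applied to every matched session.

```python
def apply_bulk_request(sessions: dict, request: dict) -> list:
    """Apply the modifying component to every PDU session matched by
    the identifying component; returns the ids of modified sessions."""
    selector = request["session_identifying"]   # e.g. {"slice": "s1"}
    changes = request["session_modifying"]      # parameter updates
    modified = []
    for sid, params in sessions.items():
        if all(params.get(k) == v for k, v in selector.items()):
            params.update(changes)
            modified.append(sid)
    return modified
```

A single request thus reconfigures many sessions at once, instead of one signaling exchange per session.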

Programmable logic applications for an array of high on/off ratio and high speed non-volatile memory cells

A non-volatile programmable circuit configurable to perform logic functions is provided. The programmable circuit can employ two-terminal non-volatile memory devices to store information, thereby mitigating or avoiding disturbance of programmed data in the absence of external power. Two-terminal resistive switching memory devices having high current on/off ratios and fast switching times can also be employed for high performance, and facilitating a high density array. For look-up table applications, input/output response times can be several nanoseconds or less, facilitating much faster response times than a memory array access for retrieving stored data.
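The look-up-table use mentioned above can be modeled abstractly in software (the hardware itself is resistive cells, not Python): each two-terminal cell stores one bit as a high or low resistance state, and a logic function is realized by programming the table once, after which it persists without power. All class and method names here are invented for illustration.

```python
class ResistiveLUT:
    """Toy model of a LUT built from non-volatile two-terminal cells:
    one stored bit per input combination."""
    def __init__(self, n_inputs: int):
        self.cells = [0] * (2 ** n_inputs)
    def program(self, truth_table: list):
        # One-time programming; state is retained without external power.
        self.cells = list(truth_table)
    def evaluate(self, *bits: int) -> int:
        index = 0
        for b in bits:
            index = (index << 1) | b
        return self.cells[index]

# Program a 2-input XOR: rows 00, 01, 10, 11 map to 0, 1, 1, 0.
lut = ResistiveLUT(2)
lut.program([0, 1, 1, 0])
```

Evaluation is a single indexed read, which is why LUT response can be faster than a general memory array access.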

Hierarchical network traffic scheduling using dynamic node weighting
10382582 · 2019-08-13

The techniques may provide a hierarchical scheduler for dynamically computing rate credits when a plurality of queues share an intermediate node. For example, the hierarchical scheduler may group respective sets of queues with respective virtual subscribers to be associated with a shared intermediate node. The weight used by the shared intermediate node may be computed as a function of the number of virtual subscriber child members of the shared intermediate node and their respective weights to correctly proportion the services to the queues. The techniques may also provide a hierarchical scheduler for dynamically computing rate credits allocated to queues associated with a shared intermediate node. For example, the number of rate credits allocated to a queue for a virtual subscriber is based on the product of the virtual subscriber weight and a queue weighted fraction of the queues for the virtual subscriber.
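The two computations named in the abstract can be sketched directly. The specific weighting function is an assumption (the abstract only says the node weight is "a function of" its virtual-subscriber children; a sum is one plausible choice), while the credit formula follows the stated product of subscriber weight and queue weighted fraction.

```python
def node_weight(subscriber_weights: list) -> float:
    """Weight of the shared intermediate node, here taken as the sum
    of its virtual-subscriber child weights (one plausible function)."""
    return sum(subscriber_weights)

def queue_credits(subscriber_weight: float, queue_weight: float,
                  all_queue_weights: list) -> float:
    """Rate credits for one queue of a virtual subscriber:
    subscriber weight times the queue's weighted fraction."""
    fraction = queue_weight / sum(all_queue_weights)
    return subscriber_weight * fraction
```

As subscribers join or leave the shared node, recomputing `node_weight` keeps the services proportioned correctly across the queues.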

Parallel processing apparatus and method for controlling communication
10353857 · 2019-07-16

A packet transmitting unit transmits, to a node via RDMA communication, a packet to which a first identifier representing a predetermined process and a second identifier, which is a logical identifier representing a destination communication interface, have been added as a destination. A plurality of communication interfaces exist. A packet receiving unit receives a packet transmitted from the node via RDMA communication, selects a communication interface that is the destination of the received packet and is used in the predetermined process, based on the first identifier and the second identifier added to the received packet, and transfers the received packet to the selected communication interface.
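The receive-side selection can be sketched as a table lookup keyed by the two identifiers. The table, field names, and interface names are hypothetical; the point is that the (process identifier, logical interface identifier) pair in the packet, not a physical address, selects the communication interface.

```python
# Mapping from (first identifier: process, second identifier: logical
# destination interface) to a physical communication interface;
# assumed to be populated when the process is set up.
IFACE_TABLE = {
    ("proc_a", 0): "nic0",
    ("proc_a", 1): "nic1",
}

def dispatch(packet: dict) -> str:
    """Select the communication interface used in the predetermined
    process for this packet's logical destination."""
    key = (packet["first_id"], packet["second_id"])
    return IFACE_TABLE[key]
```

Because the second identifier is logical, the mapping to physical interfaces can change without the sender needing to know which of the plural interfaces will receive the packet.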