Patent classifications
H04L49/9078
SYSTEM AND METHOD OF A HIGH BUFFERED HIGH BANDWIDTH NETWORK ELEMENT
A method and apparatus of a network element that processes a packet are described. In an exemplary embodiment, the network element receives, with a packet switch unit, a packet that includes a destination address and arrived on an ingress interface. The network element further determines whether the packet is to be stored in an external queue. In addition, the network element identifies the external queue for the packet based on one or more characteristics of the packet. The network element additionally forwards the packet to a packet storage unit, wherein the packet storage unit includes storage for the external queue. Furthermore, the network element receives the packet back from the packet storage unit and forwards it to an egress interface corresponding to the external queue.
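The spill-to-external-queue decision described in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the class and method names (`PacketSwitchUnit`, `select_external_queue`), the on-chip depth limit, and the use of destination address plus traffic class as the "characteristics" are all assumptions for the example.

```python
ON_CHIP_LIMIT = 4  # assumed on-chip queue depth before spilling externally

class PacketSwitchUnit:
    def __init__(self):
        self.on_chip = []   # shallow on-chip queue
        self.external = {}  # queue-id -> list, backed by the packet storage unit

    def select_external_queue(self, packet):
        # Identify the external queue from packet characteristics
        # (here, hypothetically: destination address and traffic class).
        return (packet["dst"], packet.get("tc", 0))

    def enqueue(self, packet):
        if len(self.on_chip) < ON_CHIP_LIMIT:
            self.on_chip.append(packet)  # fast path: keep the packet on chip
        else:
            # Spill: identify the external queue and forward the packet
            # to the packet storage unit.
            qid = self.select_external_queue(packet)
            self.external.setdefault(qid, []).append(packet)

    def dequeue_external(self, qid):
        # Retrieve the packet from the packet storage unit so it can be
        # handed to the egress interface for this external queue.
        q = self.external.get(qid, [])
        return q.pop(0) if q else None
```

A real switch would key the external queue on egress port and QoS class and move packets by DMA rather than Python lists; the sketch only shows the control flow.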
Technologies for jitter-adaptive low-latency, low power data streaming between device components
Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
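The producer's mode switching with hysteresis (spill to the remote buffer when local is full, return to local once the remote buffer drains below the low threshold) can be modeled directly. A minimal sketch, assuming arbitrary capacity and threshold values; the names `Stream`, `produce`, and `consume` are illustrative, not from the patent.

```python
from collections import deque

LOCAL_CAP = 4   # assumed local buffer capacity
LOW_WATER = 1   # assumed remote-buffer low threshold (switch back to local)
HIGH_WATER = 3  # assumed local-buffer high threshold (consumer catch-up)

class Stream:
    def __init__(self):
        self.local = deque()
        self.remote = deque()
        self.prod_mode = "local"
        self.cons_mode = "normal"

    def produce(self, item):
        # Switch back to local mode once the remote buffer has drained
        # below the predetermined low threshold.
        if self.prod_mode == "remote" and len(self.remote) < LOW_WATER:
            self.prod_mode = "local"
        # Switch to remote mode when the local buffer is full.
        if self.prod_mode == "local" and len(self.local) >= LOCAL_CAP:
            self.prod_mode = "remote"
        (self.local if self.prod_mode == "local" else self.remote).append(item)

    def consume(self):
        # The consumer enters catch-up mode when the local buffer exceeds
        # the high threshold, draining local-first until it empties.
        if len(self.local) > HIGH_WATER:
            self.cons_mode = "catch-up"
        elif not self.local:
            self.cons_mode = "normal"
        src = self.local if self.local else self.remote
        return src.popleft() if src else None
```

In hardware the local buffer would be a small on-core structure and the remote buffer shared memory; the hysteresis gap between the two thresholds is what keeps mode switches infrequent under jitter.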
PACKET DESCRIPTOR STORAGE IN PACKET MEMORY WITH CACHE
A first memory device stores (i) a head part of a FIFO queue structured as a linked list (LL) of LL elements arranged in the order in which the LL elements were added to the FIFO queue and (ii) a tail part of the FIFO queue. A second memory device stores a middle part of the FIFO queue, the middle part comprising LL elements following, in the order, the head part and preceding, in the order, the tail part. A queue controller retrieves LL elements in the head part from the first memory device, moves LL elements in the middle part from the second memory device to the head part in the first memory device prior to the head part becoming empty, and updates LL parameters corresponding to the moved LL elements to indicate that storage of the moved LL elements has changed from the second memory device to the first memory device.
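The head/middle/tail split can be sketched with three queues standing in for the two memory devices. This is a simplified model under assumed capacities: `SplitFifo` and the `"mem1"`/`"mem2"` location tags are hypothetical names, and real hardware would track linked-list pointers rather than Python deques.

```python
from collections import deque

HEAD_CAP = 2  # assumed head-part capacity in the first (fast) memory
TAIL_CAP = 2  # assumed tail-part capacity in the first (fast) memory

class SplitFifo:
    def __init__(self):
        self.head = deque()    # first memory device (e.g., on-chip SRAM)
        self.middle = deque()  # second memory device (e.g., external DRAM)
        self.tail = deque()    # first memory device
        self.location = {}     # LL parameter: element -> memory device

    def enqueue(self, elem):
        self.tail.append(elem)
        self.location[elem] = "mem1"
        # Elements overflowing the tail move to the middle part
        # in the second memory device.
        if len(self.tail) > TAIL_CAP:
            moved = self.tail.popleft()
            self.middle.append(moved)
            self.location[moved] = "mem2"

    def _refill_head(self):
        # Move middle elements into the head before the head empties,
        # updating the LL parameters to point at the first memory device.
        while len(self.head) < HEAD_CAP and self.middle:
            moved = self.middle.popleft()
            self.head.append(moved)
            self.location[moved] = "mem1"
        # With the middle part empty, drain directly from the tail.
        while len(self.head) < HEAD_CAP and not self.middle and self.tail:
            self.head.append(self.tail.popleft())

    def dequeue(self):
        self._refill_head()
        return self.head.popleft() if self.head else None
```

The point of the arrangement is that both enqueue (tail) and dequeue (head) always hit the fast memory, while the slow memory only absorbs the bulk of a long queue in the middle.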
NETWORK PROCESSOR WITH EXTERNAL MEMORY PROTECTION
Systems and methods for protecting external memory resources to prevent bandwidth collapse in a network processor. One embodiment is a network processor including an input port configured to receive packets from a source device, on-chip memory configured to store packets in queues, and external memory configured to provide a backing store to the on-chip memory. The network processor also includes a processor configured, in response to determining that the source device is unresponsive to a congestion notification, to reduce a size of one or more queues to prevent packets transferring from the on-chip memory to the external memory.
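The protective mechanism, shrinking a misbehaving source's queue so its traffic can never reach the external backing store, can be sketched as follows. The timeout, floor, and halving policy are assumed values for illustration; `NetworkProcessor` and its methods are hypothetical names, not the patent's claim language.

```python
class Queue:
    def __init__(self, cap):
        self.cap = cap
        self.items = []

    def enqueue(self, pkt):
        # Packets beyond the (possibly reduced) cap are dropped on chip
        # instead of spilling to external memory.
        if len(self.items) < self.cap:
            self.items.append(pkt)
            return True
        return False

class NetworkProcessor:
    def __init__(self):
        self.queues = {}    # source -> on-chip queue
        self.notified = {}  # source -> time a congestion notification was sent

    def on_congestion(self, src, now):
        self.notified[src] = now

    def check_source(self, src, now, timeout=5, min_cap=8):
        # A source still congested after the timeout is deemed unresponsive
        # to the congestion notification; halve its queue cap (down to a
        # floor) so traffic stops transferring to the external memory.
        sent = self.notified.get(src)
        if sent is not None and now - sent > timeout:
            q = self.queues[src]
            q.cap = max(min_cap, q.cap // 2)
```

Dropping at a small on-chip cap trades some loss from one unresponsive source for preserved external-memory bandwidth for everyone else, which is the collapse the abstract is guarding against.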
Switching methods and systems for a network interface card
Methods and systems for network communication are provided. One method includes receiving a network packet at a first network interface card (NIC) operationally coupled to a computing device; identifying a second NIC as a destination for the network packet; placing the network packet by the first NIC at a host memory location, without utilizing resources of a processor of the computing device; notifying the second NIC that the network packet has been placed at the host memory location; retrieving the network packet by the second NIC from the host memory location; transmitting the network packet by the second NIC to another destination; notifying the first NIC by the second NIC that the packet has been transmitted by the second NIC; and freeing the host memory location by the first NIC.
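The handoff sequence in this claim can be walked through step by step. A toy model only, under obvious simplifications: all names (`HostMemory`, `Nic`, `forward`) are hypothetical, and real NICs would use DMA engines and doorbell registers rather than Python lists.

```python
class HostMemory:
    def __init__(self):
        self.slots = {}

    def alloc(self, slot_id, packet):
        self.slots[slot_id] = packet

    def free(self, slot_id):
        del self.slots[slot_id]

class Nic:
    def __init__(self, name, host):
        self.name, self.host = name, host
        self.doorbell = []  # notifications from the peer NIC
        self.wire = []      # packets "transmitted" onward

def forward(nic1, nic2, host, packet, slot=0):
    host.alloc(slot, packet)          # 1. NIC1 places the packet in host
                                      #    memory, bypassing the host CPU
    nic2.doorbell.append(slot)        # 2. NIC1 notifies NIC2
    s = nic2.doorbell.pop(0)          # 3. NIC2 retrieves the packet
    nic2.wire.append(host.slots[s])   # 4. NIC2 transmits it to its destination
    nic1.doorbell.append(("done", s)) # 5. NIC2 notifies NIC1 of completion
    host.free(s)                      # 6. NIC1 frees the host memory location
```

The design choice worth noting is that the host CPU never touches the packet: host memory acts purely as a staging buffer between two peer devices.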
Apparatus and method for adjusting a rate at which data is transferred from a media access controller to a memory in a physical-layer circuit
A physical-layer circuit including a memory, a physical-layer device and a control circuit. The memory receives data from a media access controller (MAC) at a first rate. The MAC is separate from the physical-layer circuit. The physical-layer device receives the data from the memory and transmits the data from the physical-layer circuit to a peer device. The physical-layer device transfers the data from the memory to the peer device at a second rate. An amount of data stored in the memory is based on a difference between the first and second rates. The second rate is less than the first rate. The control circuit is connected between the memory and the physical-layer device. The control circuit monitors the amount of the data stored in the memory and, based on the amount of the data stored in the memory, transmits a frame to the MAC to decrease the first rate.
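The control circuit's occupancy-based feedback loop can be sketched with two watermarks. The thresholds and the `"decrease-rate"`/`"resume-rate"` frames are assumed stand-ins for this example (the abstract only specifies a frame that decreases the first rate); `PhyBuffer` is a hypothetical name.

```python
PAUSE_THRESHOLD = 6   # assumed occupancy at which the PHY asks the MAC to slow
RESUME_THRESHOLD = 2  # assumed occupancy at which full rate may resume

class PhyBuffer:
    def __init__(self):
        self.level = 0          # amount of data stored in the memory
        self.frames_to_mac = [] # control frames sent back to the MAC

    def mac_write(self, n):
        self.level += n                      # data arrives at the first rate

    def phy_drain(self, n):
        self.level = max(0, self.level - n)  # data leaves at the second rate

    def monitor(self):
        # The control circuit watches occupancy: because the first rate
        # exceeds the second, the buffer grows until the MAC is told to
        # decrease the first rate.
        if self.level >= PAUSE_THRESHOLD:
            self.frames_to_mac.append("decrease-rate")
        elif self.level <= RESUME_THRESHOLD:
            self.frames_to_mac.append("resume-rate")
```

This is the same shape as IEEE 802.3 PAUSE-style flow control, applied here inside the MAC-to-PHY path rather than between link partners.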
Method and Apparatus for Managing Reception of Secure Data Packets
A logic circuit for managing reception of secure data packets in an industrial controller snoops data being transferred by a Media Access Controller (MAC) between a network port and a shared memory location within the industrial controller. The logic circuit is configured to perform authentication and/or decryption on the data packet as the data packet is being transferred between the port and the shared memory location. The logic circuit performs authentication as the data is being transferred and completes authentication shortly after the MAC has completed transferring the data to the shared memory. The logic circuit coordinates operation with the MAC and signals a Software Packet Processing (SPP) module when authentication is complete. The logic circuit is further configured to decrypt the data packet, if necessary, and to similarly coordinate operation with the MAC and delay signaling the SPP module that data is ready until decryption is complete.
DATA ACCESS TECHNOLOGIES
Examples described herein relate to at least one processor and circuitry, when operational, to: cause a first number of processors of the at least one processor to access queues exclusively allocated for packets to be processed by the first number of processors; cause a second number of processors of the at least one processor to identify commands consistent with Non-volatile Memory Express (NVMe) over Quick UDP Internet Connections (QUIC), wherein the commands are received in the packets and the second number is based at least in part on a rate of received commands; and cause performance of the commands using a third number of processors. In some examples, the circuitry, when operational, is to: based on detection of a new connection on a first port, associate the new connection with a second port, wherein the second port is different than the first port, and select at least one processor to identify and process commands received on the new connection.
System and method for regulating NVMe-oF command requests and data flow across a network with mismatched rates
One embodiment can provide a method and system for implementing flow control. During operation, a switch identifies a command from a host to access a storage device coupled to the switch. The switch queues the command in a command queue corresponding to the host. In response to determining that an amount of data pending transmission to the host from the storage device is below a predetermined threshold, the switch removes a command from the command queue and forwards the removed command to the storage device.
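The switch's gating rule, releasing a host's commands to the storage device only while the data pending toward that host stays below a threshold, can be sketched as follows. The threshold value, the per-command `read_len` field, and the name `SwitchFlowControl` are assumptions for illustration, not details from the patent.

```python
PENDING_THRESHOLD = 4096  # assumed bytes of data in flight toward the host

class SwitchFlowControl:
    def __init__(self):
        self.cmd_queue = []    # per-host command queue inside the switch
        self.pending_bytes = 0 # data queued from the storage device to the host
        self.to_storage = []   # commands released to the storage device

    def host_command(self, cmd):
        self.cmd_queue.append(cmd)  # commands are parked per host
        self.release()

    def data_sent(self, nbytes):
        # Data delivered to the host reduces the pending amount,
        # which may unblock queued commands.
        self.pending_bytes = max(0, self.pending_bytes - nbytes)
        self.release()

    def release(self):
        # Forward commands only while the amount of data pending
        # transmission to the host is below the threshold.
        while self.cmd_queue and self.pending_bytes < PENDING_THRESHOLD:
            cmd = self.cmd_queue.pop(0)
            self.pending_bytes += cmd["read_len"]
            self.to_storage.append(cmd)
```

Holding commands at the switch, rather than letting the fast storage device flood a slower host link, keeps the rate mismatch from turning into buffer exhaustion inside the fabric.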