Patent classifications
H04L49/9078
Packet Buffer Spill-Over in Network Devices
Packets to be transmitted from a network device are buffered in queues in a first packet memory. In response to detecting congestion in a queue in the first packet memory, one or more groups of multiple packets are transferred from the first packet memory to a second packet memory, the second packet memory being configured to buffer a portion of the traffic bandwidth supported by the network device. Prior to transmission of the packets from the network device, packets among the one or more groups are transferred from the second packet memory back to the first packet memory. The packets transferred back to the first packet memory are then retrieved from the first packet memory and forwarded to one or more network ports for transmission from the network device.
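The spill-over scheme can be sketched as a two-tier FIFO. This is a simplified variant that diverts new arrivals into the second memory once the queue is congested (rather than moving already-buffered packets); the group size and congestion threshold are illustrative assumptions, not values from the patent:

```python
from collections import deque

GROUP_SIZE = 4             # packets per spill-over group (assumed)
CONGESTION_THRESHOLD = 8   # primary queue depth treated as congestion (assumed)

class SpillOverBuffer:
    """One egress queue backed by two packet memories: a fast primary
    memory and a larger secondary memory used only under congestion."""

    def __init__(self):
        self.primary = deque()    # first packet memory
        self.secondary = deque()  # second packet memory (groups of packets)

    def enqueue(self, pkt):
        # Once congested, arrivals accumulate in the second memory in
        # groups, preserving FIFO order across both memories.
        if self.secondary or len(self.primary) >= CONGESTION_THRESHOLD:
            if self.secondary and len(self.secondary[-1]) < GROUP_SIZE:
                self.secondary[-1].append(pkt)
            else:
                self.secondary.append([pkt])
        else:
            self.primary.append(pkt)

    def dequeue(self):
        # Spilled groups return to the first memory before egress, so
        # transmission always reads from the first packet memory.
        if not self.primary and self.secondary:
            self.primary.extend(self.secondary.popleft())
        return self.primary.popleft() if self.primary else None
```

Routing congested arrivals straight to the second memory keeps FIFO order trivially, at the cost of keeping the second memory on the data path until it fully drains.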
Protocol data unit end handling with fractional data alignment and arbitration fairness
In at least one embodiment, a method for handling data units in a multi-user system includes granting a shared resource to a user of a plurality of users for a transaction associated with an entry of a transaction data structure. The method includes determining whether the transaction stored the last partial data of a data unit associated with the user in an alignment register associated with the user. The method includes asserting a request for arbitration of a plurality of transactions associated with the plurality of users; the request is asserted for an additional transaction associated with the entry in response to determining that the transaction stored the last partial data in the alignment register. The method may include flushing the last partial data from the alignment register to a target memory in response to detecting an additional grant of the shared resource to the user for the additional transaction.
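A minimal single-user sketch of the alignment-register behavior follows; the 4-byte register width and the byte-string "target memory" are assumptions for illustration, and the `needs_flush` flag stands in for asserting the extra arbitration request:

```python
ALIGN = 4  # alignment-register width in bytes (assumed)

class UserChannel:
    """Per-user alignment register feeding a target memory in full
    ALIGN-byte words; fractional tail data needs one extra transaction."""

    def __init__(self):
        self.reg = b""          # alignment register (partial word)
        self.memory = b""       # target memory
        self.needs_flush = False

    def write(self, data, end_of_unit):
        """One granted transaction: absorb data, emit full words."""
        self.reg += data
        full = len(self.reg) - len(self.reg) % ALIGN
        self.memory += self.reg[:full]
        self.reg = self.reg[full:]
        # If the data unit ended with partial data left in the register,
        # the user must re-arbitrate for one additional transaction.
        self.needs_flush = end_of_unit and bool(self.reg)

    def flush(self):
        """The additionally granted transaction: drain the partial word."""
        self.memory += self.reg
        self.reg = b""
        self.needs_flush = False
```

The point of the extra arbitration round is fairness: the flush competes with other users' requests instead of extending the original grant.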
COMPUTER NETWORK PACKET TRANSMISSION TIMING
Establishing an expected transmit time at which a network interface controller (NIC) is expected to transmit a next packet. Enqueuing, with the NIC and before the expected transmit time, a packet P₁ to be transmitted at the expected transmit time. Upon enqueuing P₁, incrementing the expected transmit time by an expected transmit duration of P₁. Transmitting at the NIC's line rate and timestamping enqueued P₁ with its actual transmit time. Adjusting the expected transmit time by the difference between P₁'s actual transmit time and P₁'s expected transmit time. Requesting, before completion of transmitting P₁, to transmit a packet P₂ at time t(P₂). Enqueuing, in sequence, zero or more filler packets P₀, such that the current expected transmit time plus the duration of transmitting the P₀s at the line rate equals t(P₂). Transmitting at the line rate each enqueued P₀ and, upon enqueuing each P₀, incrementing the expected transmit time by the expected transmit duration of that P₀. Enqueuing P₂ for transmission directly following enqueuing the final P₀. Transmitting, by the NIC, enqueued P₂ at t(P₂).
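The gap-filling arithmetic can be sketched as below; the 10 Gb/s line rate and the fixed filler-packet size are assumptions for the example, not values from the patent:

```python
LINE_RATE = 10e9 / 8   # bytes per second at an assumed 10 Gb/s line rate

def tx_duration(size_bytes):
    """Time to transmit a packet of the given size at the line rate."""
    return size_bytes / LINE_RATE

def fill_gap(expected_tx, p1_size, p2_time, filler_size):
    """After enqueuing P1, return how many filler packets P0 are needed
    so that the expected transmit time lands exactly on t(P2), along
    with the updated expected transmit time."""
    expected_tx += tx_duration(p1_size)   # increment for enqueued P1
    n_fillers = round((p2_time - expected_tx) / tx_duration(filler_size))
    expected_tx += n_fillers * tx_duration(filler_size)  # increment per P0
    return n_fillers, expected_tx
```

Because the NIC transmits back-to-back at line rate, padding the queue with precisely enough filler bytes is what pins P₂'s wire time to t(P₂).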
Low latency data synchronization
In some examples, a computing device for processing data streams includes storage to store instructions and a processor to execute the instructions. The processor executes the instructions to receive respective data streams provided from a plurality of data producer sensors. The processor also staggers the time of triggering of a first of the plurality of data producer sensors relative to the time of triggering of a second of the plurality of data producer sensors, to minimize concurrency between data frames of the stream received from the first sensor and data frames of the stream received from the second sensor. The processor also processes the data streams from the plurality of data producer sensors in a time-shared manner and provides the processed data streams to one or more consumers of the processed data streams.
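A minimal sketch of the two ideas, assuming evenly spaced trigger offsets and simple round-robin processing (both are illustrative choices, not details from the abstract):

```python
def stagger_offsets(frame_period_ms, n_sensors):
    """Spread sensor trigger times evenly over one frame period so the
    sensors' data frames arrive with minimal overlap."""
    return [i * frame_period_ms / n_sensors for i in range(n_sensors)]

def process_time_shared(streams):
    """Process one frame from each sensor stream in turn (round-robin),
    standing in for time-shared processing of the staggered streams."""
    processed = []
    for frame_idx in range(max(len(s) for s in streams)):
        for stream in streams:
            if frame_idx < len(stream):
                processed.append(stream[frame_idx])
    return processed
```

With staggered triggers, each sensor's frame tends to arrive while the others are idle, so the round-robin loop rarely has to queue concurrent frames.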
Low-latency data packet distributor
A data packet distributor (DPD) includes a memory and a data packet transmission device (DPTD) connected to the memory. The DPTD receives a first data packet and identifies a destination queue for attempting transmission of the first data packet. The attempt is unsuccessful when a second data packet associated with the identified destination queue is present in the memory or when the identified destination queue is unavailable for receiving the first data packet. The DPTD stores the first data packet in the memory when the attempt is unsuccessful and re-attempts the transmission to the identified destination queue at the end of a time interval. The re-attempt is successful when the second data packet is absent from the memory and the identified destination queue is available for receiving the first data packet.
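A sketch of the attempt/hold/re-attempt cycle; the per-destination held-packet deque and the `tick()` method modeling the end of the time interval are assumptions for illustration:

```python
from collections import deque

class Distributor:
    """DPD sketch: immediate transmission when the destination queue is
    clear, otherwise the packet is held in memory and re-attempted."""

    def __init__(self, dests):
        self.queues = {d: [] for d in dests}        # destination queues
        self.available = {d: True for d in dests}   # queue availability
        self.memory = {d: deque() for d in dests}   # held packets per dest

    def send(self, dest, pkt):
        """Attempt immediate transmission; buffer in memory on failure."""
        # Unsuccessful if an earlier packet for this destination is still
        # held in memory, or the destination queue is unavailable.
        if self.memory[dest] or not self.available[dest]:
            self.memory[dest].append(pkt)
            return False
        self.queues[dest].append(pkt)
        return True

    def tick(self):
        """At the end of each time interval, re-attempt held packets."""
        for dest, held in self.memory.items():
            while held and self.available[dest]:
                self.queues[dest].append(held.popleft())
```

Checking the memory before the queue keeps delivery order: a new packet can never overtake one that was held earlier for the same destination.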
DATA TRAFFIC REDUCTION FOR REDUNDANT DATA STREAMS
Systems and methods are disclosed for reducing processing of received redundant data streams. A first network controller receives a first stream of data packets, and a second network controller receives a second stream of data packets redundant to the first stream. The first network controller determines, using the value of an identifier of a first packet of the first stream, that a corresponding packet of the second stream having that identifier value has not already been received, and outputs the first packet in response. The second network controller determines, using the value of an identifier of a second packet of the second stream, that the first packet has already been received, and drops the second packet in response.
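The first-arrival-wins logic can be sketched with a shared seen-identifier set consulted by both controllers; the set-based state and integer identifiers are assumptions for illustration:

```python
class StreamDeduplicator:
    """Shared de-duplication state for two redundant stream receivers:
    the first controller to see a packet identifier outputs the packet;
    the copy arriving on the other controller is dropped."""

    def __init__(self):
        self.seen = set()    # identifier values already received
        self.output = []     # packets passed on for processing

    def receive(self, ident, payload):
        if ident in self.seen:
            return False     # corresponding packet already received: drop
        self.seen.add(ident)
        self.output.append(payload)
        return True          # first arrival: output the packet
```

Either controller may win for any given identifier, so downstream processing sees exactly one copy per packet regardless of which stream delivered it first.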
Queue management method and apparatus
A queue management method and apparatus are disclosed. The queue management method includes: storing a first packet to a first buffer cell included in a first macrocell, where the first macrocell is enqueued to a first entity queue, the first macrocell includes N consecutive buffer cells, and the first buffer cell belongs to the N buffer cells; correcting, based on a packet length of the first packet, the average packet length in the first macrocell that was obtained before the first packet is stored, to obtain a current average packet length in the first macrocell; and generating, based on the first macrocell and the first entity queue, queue information corresponding to the first macrocell, where the queue information includes a position of the first macrocell in the first entity queue, a head pointer in the first macrocell, a tail pointer in the first macrocell, and the current average packet length in the first macrocell.
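The average-length correction and the per-macrocell queue information can be sketched as follows; the running-mean update and the dictionary layout of the queue information are assumptions for illustration:

```python
class Macrocell:
    """A macrocell of N consecutive buffer cells with a running average
    packet length that is corrected on every store."""

    def __init__(self, n_cells):
        self.cells = [None] * n_cells
        self.count = 0        # packets currently stored
        self.avg_len = 0.0    # current average packet length

    def store(self, pkt):
        self.cells[self.count] = pkt
        # Correct the previous average with the new packet's length.
        self.avg_len = (self.avg_len * self.count + len(pkt)) / (self.count + 1)
        self.count += 1

    def queue_info(self, position):
        """Queue information for this macrocell in its entity queue."""
        return {"position": position,
                "head": 0, "tail": self.count - 1,
                "avg_len": self.avg_len}
```

Tracking an average length per macrocell lets the scheduler estimate queue depth in bytes from cell occupancy alone, without walking the packets.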
EFFICIENT PACKET QUEUEING FOR COMPUTER NETWORKS
A method during a first cycle includes receiving, at a first port of a device, a plurality of network packets. The method may include storing, by the device, at least some portion of a first packet of the plurality of network packets at a first address within a first record bank and, concurrent with that storing, storing at least some portion of a second packet of the plurality of network packets at a second address within a second record bank different from the first record bank. The method may further include storing, by the device, the first address and the second address in a first link stash associated with the first record bank, and updating, by the device, a tail pointer to reference the second address.
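A sketch of the same-cycle dual-bank write with a per-bank link stash; the dictionary-backed banks and the exact stash layout are assumptions for illustration:

```python
class BankedQueue:
    """Two record banks written in the same cycle; the first bank's link
    stash holds the (address, next-address) pair until the link can be
    written back, and the tail pointer tracks the newest record."""

    def __init__(self):
        self.banks = [{}, {}]            # addr -> stored packet portion
        self.link_stash = [None, None]   # per-bank pending (addr, next_addr)
        self.tail = None

    def store_pair(self, pkt_a, addr_a, pkt_b, addr_b):
        # Same cycle: first packet to bank 0, second packet to bank 1.
        self.banks[0][addr_a] = pkt_a
        self.banks[1][addr_b] = pkt_b
        # The first bank's link stash records that addr_a links to addr_b,
        # deferring the link write-back so bank 0 needs no second port.
        self.link_stash[0] = (addr_a, addr_b)
        # Tail pointer now references the second packet's address.
        self.tail = addr_b
```

Stashing the link instead of writing it immediately is what allows both banks to accept a packet in the same cycle with single-ported memories.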