Patent classifications
H04L49/9015
Hierarchical hardware linked list approach for multicast replication engine in a network ASIC
A multicast rule is represented in a hierarchical linked list with N tiers. Each tier or level in the hierarchical linked list corresponds to a network layer of a network stack that requires replication. Redundant groups in each tier are eliminated such that the groups in each tier are stored exactly once in a replication table. A multicast replication engine traverses the hierarchical linked list and replicates a packet according to each node in the hierarchical linked list.
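The tiered structure described in this abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation: the `Node` fields, the group names, and the depth-first traversal are assumptions. The key ideas shown are that each tier links siblings at one network layer, that two upper-tier groups can share one deduplicated lower-tier list, and that the engine emits one copy per leaf.

```python
# Hypothetical sketch of an N-tier hierarchical linked list for multicast
# replication. Each tier models one network layer; a lower-tier group list
# is stored once and shared, mirroring the deduplicated replication table.

class Node:
    def __init__(self, group_id, next_node=None, child=None):
        self.group_id = group_id   # replication group at this tier
        self.next = next_node      # sibling in the same tier
        self.child = child         # head of the next (lower) tier

def replicate(packet, node, copies):
    """Depth-first traversal: a copy is produced at every leaf node."""
    while node is not None:
        if node.child is None:
            copies.append((packet, node.group_id))
        else:
            replicate(packet, node.child, copies)
        node = node.next

# Two L3 groups share one deduplicated L2 group list.
l3_a = Node("L3-a", child=Node("L2-x", Node("L2-y")))
l3_b = Node("L3-b", child=l3_a.child)   # same L2 list, stored once
l3_a.next = l3_b
root = Node("root", child=l3_a)

copies = []
replicate("pkt0", root.child, copies)
# One copy per (L3 group, L2 group) leaf combination: four in total.
```

Sharing `l3_a.child` between both L3 nodes is the sketch's analogue of storing each tier's groups exactly once in the replication table.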
Sending packets using optimized PIO write sequences without sfences and out-of-order credit returns
Methods and apparatus for sending packets using optimized PIO write sequences without sfences and out-of-order credit returns. Sequences of Programmed Input/Output (PIO) write instructions to write packet data to a PIO send memory are received by a processor in an original order and executed out of order, resulting in the packet data being written to send blocks in the PIO send memory out of order, while the packets themselves are stored in sequential order once all of the packet data is written. The packets are egressed out of order by egressing packet data contained in the send blocks to an egress block using a non-sequential packet order that differs from the sequential packet order. In conjunction with egressing the packets, corresponding credits are returned in the non-sequential packet order. A block list comprising a linked list and a free list is used to facilitate out-of-order packet egress and the corresponding out-of-order credit returns.
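The block-list idea can be sketched as below. All names are illustrative assumptions, not the patent's structures: a free list hands out send blocks, a per-packet list records which blocks each packet occupies, and egressing a packet out of order frees its blocks, which stands in for the out-of-order credit return.

```python
# Hedged sketch: deque as the free list of send blocks; egressing a packet
# in non-sequential order returns its credits (freed blocks) in that order.
from collections import deque

class BlockList:
    def __init__(self, num_blocks):
        self.free = deque(range(num_blocks))  # free list of send blocks
        self.owned = {}                       # pkt_id -> its send blocks

    def write_packet(self, pkt_id, nblocks):
        # Writes may complete out of order; blocks are reserved up front.
        self.owned[pkt_id] = [self.free.popleft() for _ in range(nblocks)]

    def egress(self, pkt_id):
        """Egress a packet and return its credits (freed blocks)."""
        credits = self.owned.pop(pkt_id)
        self.free.extend(credits)             # out-of-order credit return
        return credits

bl = BlockList(8)
for pkt in (0, 1, 2):                         # packets written in order
    bl.write_packet(pkt, 2)
returned = [bl.egress(p) for p in (1, 2, 0)]  # egressed non-sequentially
```

After all three packets egress, every block is back on the free list even though the credits came back in a different order than the packets were written.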
Flow control device and method
A flow control device includes an analysis unit that identifies the flow of a received packet; a plurality of queues that temporarily store packets sorted by flow; an allocation information storage unit that stores allocation information indicating the queue allocated to each flow; a sorting unit that decides which queue is the storage destination of the received packet and sorts the packet based on the analysis unit's result and the allocation information; a saved-packet holding unit that saves packets belonging to flows for which the sorting unit finds no queue allocation information; and a transmission unit that transmits the packets temporarily stored in the plurality of queues and the packets saved in the saved-packet holding unit to a processing unit that processes packets.
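The sorting path can be illustrated with a short sketch (the flow names, table layout, and packet format are assumptions for illustration only): packets whose flow has an allocated queue are enqueued there, while packets from flows with no allocation go to a saved-packet holding area.

```python
# Minimal sketch of the sorting unit's decision, assuming a flow->queue
# allocation table and a holding area for flows with no allocation yet.
from collections import defaultdict, deque

allocation = {"flow-a": 0, "flow-b": 1}   # allocation information storage
queues = defaultdict(deque)               # per-queue temporary storage
saved = deque()                           # saved-packet holding unit

def sort_packet(packet):
    flow = packet["flow"]                 # analysis unit: identify the flow
    if flow in allocation:
        queues[allocation[flow]].append(packet)
    else:
        saved.append(packet)              # no allocation yet: save aside

for pkt in ({"flow": "flow-a"}, {"flow": "flow-c"}, {"flow": "flow-b"}):
    sort_packet(pkt)
```

The transmission unit would then drain both `queues` and `saved` toward the processing unit.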
Hierarchical hardware linked list approach for multicast replication engine in a network ASIC
A multicast rule is represented in a hierarchical linked list with N tiers. Each tier or level in the hierarchical linked list corresponds to a network layer of a network stack that requires replication. Redundant groups in each tier are eliminated such that the groups in each tier are stored exactly once in a replication table. A multicast replication engine traverses the hierarchical linked list and replicates a packet according to each node in the hierarchical linked list.
Data scheduling method and switching device
Embodiments of this application provide a method that includes: receiving a first data flow that includes a plurality of data units; inputting N₁ data units of the plurality of data units and a first source marking unit into a first source queue; inputting M₁ data units of the plurality of data units and a first target marking unit into a first target queue, wherein the N₁ data units and the M₁ data units are different data units; scheduling the N₁ data units and the first source marking unit based on the first source marking unit and the first target marking unit; and scheduling the first target marking unit and the M₁ data units, wherein the first target marking unit and the M₁ data units are scheduled later than the N₁ data units and the first source marking unit.
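The ordering constraint can be sketched as follows. This is an assumed simplification, not the patent's scheduler: the source queue's units and its marking unit drain first, and the target marking unit gates the target queue's units so they drain strictly afterward.

```python
# Hedged sketch of the marking-unit ordering between two queues.
from collections import deque

SRC_MARK, TGT_MARK = "SRC_MARK", "TGT_MARK"
source_q = deque(["u1", "u2", SRC_MARK])   # N1 units + source marking unit
target_q = deque([TGT_MARK, "u3", "u4"])   # target marking unit + M1 units

def schedule(source_q, target_q):
    out = []
    while source_q:                        # source portion scheduled first
        out.append(source_q.popleft())
    # The source marking unit has passed; the target marking unit now
    # releases the remaining units of the flow, preserving overall order.
    while target_q:
        out.append(target_q.popleft())
    return out

order = schedule(source_q, target_q)
```

The invariant shown is the abstract's claim: every M₁ unit is scheduled after the N₁ units and the source marking unit.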
Systems and methods for efficiently storing a distributed ledger of records
Systems and methods for efficiently storing a distributed ledger of records. In an exemplary aspect, a method may include generating a record comprising a payload and a header, wherein the payload stores a state of a data object associated with a distributed ledger and the header stores a reference to state information in the payload. The method may further comprise including the record in a trunk filament comprising a first plurality of records indicative of historic states of the data object, wherein the trunk filament is part of a first lifeline. The method may include identifying a jet of the distributed ledger, wherein the jet is a logical structure storing a second lifeline with a second plurality of records. In response to determining that the first plurality of records is related to the second plurality of records, the method may include storing the first lifeline in the jet.
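The record/filament/lifeline/jet vocabulary can be made concrete with a small sketch. The class shapes and field names below are illustrative assumptions; only the relationships come from the abstract: a header references state in its payload, records chain into a trunk filament of historic states, a lifeline wraps the filament, and related lifelines are stored in a jet.

```python
# Illustrative sketch of the storage hierarchy described in the abstract.
class Record:
    def __init__(self, state):
        self.payload = {"state": state}        # state of the data object
        self.header = {"state_ref": "state"}   # reference into the payload

class Lifeline:
    def __init__(self, object_id):
        self.object_id = object_id
        self.trunk_filament = []               # historic states, in order

    def append_state(self, state):
        self.trunk_filament.append(Record(state))

class Jet:
    """Logical structure storing lifelines whose records are related."""
    def __init__(self):
        self.lifelines = []

life = Lifeline("obj-1")
for s in ("created", "updated", "closed"):
    life.append_state(s)

jet = Jet()
jet.lifelines.append(life)                     # store the related lifeline
latest = life.trunk_filament[-1]
state = latest.payload[latest.header["state_ref"]]
```

Resolving `state` through the header's reference mirrors the abstract's "header stores a reference to state information in the payload".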
Method and apparatus for using multiple linked memory lists
An apparatus and method for queuing data to a memory buffer. The method includes selecting a queue from a plurality of queues; receiving a token of data from the selected queue and requesting, by a queue module, addresses and pointers from a buffer manager for addresses allocated by the buffer manager for storing the token of data. Subsequently, a memory list is accessed by the buffer manager and addresses and pointers are generated to allocated addresses in the memory list which comprises a plurality of linked memory lists for additional address allocation. The method further includes writing into the accessed memory list the pointers for the allocated address where the pointers link together allocated addresses; and migrating to other memory lists for additional address allocations upon receipt of subsequent tokens of data from the queue; and generating additional pointers linking together the allocated addresses in the other memory lists.
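The allocation-and-migration behavior can be sketched as below. The class and its fields are assumptions for illustration: the buffer manager pops addresses from one memory list, records a pointer linking each allocation to the next, and migrates to another memory list once the current one is exhausted.

```python
# Hedged sketch of a buffer manager spanning multiple linked memory lists.
class BufferManager:
    def __init__(self, memory_lists):
        self.lists = [list(r) for r in memory_lists]  # multiple memory lists
        self.current = 0
        self.pointers = {}        # addr -> next allocated addr (the links)
        self.prev = None

    def allocate(self):
        if not self.lists[self.current]:
            self.current += 1     # migrate to the next memory list
        addr = self.lists[self.current].pop(0)
        if self.prev is not None:
            self.pointers[self.prev] = addr   # link allocations together
        self.prev = addr
        return addr

bm = BufferManager([range(100, 102), range(200, 202)])
chain = [bm.allocate() for _ in range(4)]     # spans both memory lists
```

The pointer table links the third allocation (from the second list) back to the second allocation (from the first list), so the chain of allocated addresses stays contiguous across the migration.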
Apparatus and buffer control method thereof in a wireless communication system
A 5G or pre-5G communication system supporting a higher data rate than a beyond-4G communication system such as LTE is provided. A method by an apparatus for controlling buffers in a wireless communication system comprises storing information related to a packet in at least one of a first buffer or a second buffer, transmitting data generated based on the packet, and, when an acknowledgement signal is received for the data, discarding the stored information.
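The ack-gated discard can be shown in a few lines. The buffer shapes and sequence numbers are assumptions for illustration only: packet information is held in one of two buffers until an acknowledgement for the transmitted data arrives, at which point the stored information is discarded.

```python
# Minimal sketch of the two-buffer, discard-on-ack behavior.
first_buffer, second_buffer = {}, {}

def store(seq, info, use_first=True):
    (first_buffer if use_first else second_buffer)[seq] = info

def on_ack(seq):
    # Acknowledgement received for the transmitted data: drop stored info.
    first_buffer.pop(seq, None)
    second_buffer.pop(seq, None)

store(1, b"hdr1")                      # held in the first buffer
store(2, b"hdr2", use_first=False)     # held in the second buffer
on_ack(1)                              # frees the entry for packet 1 only
```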
Technologies for scalable network packet processing with lock-free rings
Technologies for network packet processing include a computing device that receives incoming network packets. The computing device adds the incoming network packets to an input lockless shared ring, and then classifies the network packets. After classification, the computing device adds the network packets to multiple lockless shared traffic class rings, with each ring associated with a traffic class and output port. The computing device may allocate bandwidth between network packets active during a scheduling quantum in the traffic class rings associated with an output port, schedule the network packets in the traffic class rings for transmission, and then transmit the network packets in response to scheduling. The computing device may perform traffic class separation in parallel with bandwidth allocation and traffic scheduling. In some embodiments, the computing device may perform bandwidth allocation and/or traffic scheduling on each traffic class ring in parallel. Other embodiments are described and claimed.