Patent classification: H04L12/883
SYSTEMS AND METHODS FOR EFFICIENTLY STORING A DISTRIBUTED LEDGER OF RECORDS
Systems and methods for efficiently storing a distributed ledger of records. In an exemplary aspect, a method may include generating a record comprising a payload and a header, wherein the payload stores a state of a data object associated with a distributed ledger and the header stores a reference to state information in the payload. The method may further comprise including the record in a trunk filament comprising a first plurality of records indicative of historic states of the data object, wherein the trunk filament is part of a first lifeline. The method may include identifying a jet of the distributed ledger, wherein the jet is a logical structure storing a second lifeline with a second plurality of records. In response to determining that the first plurality of records is related to the second plurality of records, the method may include storing the first lifeline in the jet.
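The abstract above can be pictured as a small data model: records (payload + header) appended to a trunk filament inside a lifeline, with related lifelines grouped into a jet. The sketch below is a hypothetical illustration only; every class name, field, and the toy "relatedness" test are assumptions, not the patent's own definitions.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    payload: dict   # state of the data object
    header: dict    # reference to state information in the payload

@dataclass
class Lifeline:
    object_id: str
    trunk: list = field(default_factory=list)  # trunk filament: historic states

    def append_state(self, state: dict) -> Record:
        # Header references the payload's position in the trunk filament.
        rec = Record(payload=state, header={"state_ref": len(self.trunk)})
        self.trunk.append(rec)
        return rec

@dataclass
class Jet:
    """Logical structure holding related lifelines."""
    lifelines: dict = field(default_factory=dict)

    def related(self, lifeline: Lifeline) -> bool:
        # Toy relatedness test: shares an object-id prefix with a stored lifeline.
        prefix = lifeline.object_id.split("/")[0]
        return any(k.startswith(prefix) for k in self.lifelines)

    def store(self, lifeline: Lifeline) -> None:
        self.lifelines[lifeline.object_id] = lifeline
```

Grouping related lifelines into one jet is what lets the ledger be stored compactly: a lifeline is placed in a jet only when its records are found to be related to records already there.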
Packet processing method and router
Embodiments of the application describe a packet processing method and a router. The method includes: receiving, by an input line card, at least one packet; obtaining, by the input line card, information about an available first buffer block in a third buffer module, where the third buffer module is a first buffer module that includes an available first buffer block; allocating, by the input line card, a third buffer block to each of the at least one packet based on at least one buffer information block stored in the input line card and the information about an available first buffer block; and buffering, by the input line card, each packet into the third buffer block. Distributed packet buffering can be implemented by using the method.
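The allocation step described above, the line card picking the first buffer module that reports a free block and writing the packet there, can be sketched as follows. This is a minimal illustration under assumed names; the real method involves buffer information blocks and hardware line cards, none of which are modeled here.

```python
class BufferModule:
    """A buffer module holding a fixed number of buffer blocks."""
    def __init__(self, name, n_blocks):
        self.name = name
        self.blocks = [None] * n_blocks   # None marks a free buffer block

    def first_free(self):
        for i, b in enumerate(self.blocks):
            if b is None:
                return i
        return None

class InputLineCard:
    def __init__(self, modules):
        self.modules = modules            # ordered list of buffer modules

    def buffer_packet(self, packet):
        # Scan modules in order; use the first one with an available block.
        for mod in self.modules:
            idx = mod.first_free()
            if idx is not None:
                mod.blocks[idx] = packet
                return mod.name, idx
        raise RuntimeError("no buffer block available")
```

Because each line card allocates against whichever module currently has free blocks, the buffering load is spread across modules rather than pinned to one, which is the "distributed packet buffering" the abstract refers to.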
Uplink orthogonal frequency multiple access (UL-OFDMA) resource unit (RU) distribution among deinterleavers
A method for deinterleaving a plurality of resource units (RUs) where each RU includes data and parameters. The method includes assigning each RU into one of a first deinterleaver and a second deinterleaver and storing the parameters of each respective RU to a respective buffer in an order. The method also includes processing the data of the respective RU in the one of the first deinterleaver and the second deinterleaver and upon completion of the processing of the data of a respective RU, outputting the data of the respective RU from the one of the first deinterleaver and the second deinterleaver into which the respective RU was assigned, outputting the parameters of the respective RU corresponding to one of the first deinterleaver and the second deinterleaver into which the respective RU was assigned, and aligning the parameters of the respective RU with the data of the respective RU based on the order of storage of the parameters in the respective buffer.
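The alignment trick in this abstract, storing each RU's parameters in a per-deinterleaver buffer in assignment order so they can be rejoined with the processed data, can be sketched with two FIFOs. The round-robin assignment policy, the class names, and the stand-in "deinterleave" operation below are all assumptions for illustration.

```python
from collections import deque

class Deinterleaver:
    def __init__(self):
        self.param_fifo = deque()   # parameters stored in assignment order

    def assign(self, ru):
        self.param_fifo.append(ru["params"])

    def process(self, ru):
        data = list(reversed(ru["data"]))   # stand-in for real deinterleaving
        params = self.param_fifo.popleft()  # FIFO order realigns params with data
        return data, params

def distribute(rus):
    """Assign each RU to one of two deinterleavers and collect aligned output."""
    deints = [Deinterleaver(), Deinterleaver()]
    out = []
    for i, ru in enumerate(rus):
        d = deints[i % 2]   # assumed policy: alternate between deinterleavers
        d.assign(ru)
        out.append(d.process(ru))
    return out
```

The key point mirrored here is that data and parameters travel separately but are reunited by the order of storage in the parameter buffer, so the output of each deinterleaver carries the correct parameters for its RU.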
Delayed VoIP packet delivery
An approach is provided that receives a number of incoming packets over a computer network. The packets are part of a Voice over Internet Protocol (VoIP) session and correspond to vocalizations spoken by a sender during the session. At least one of the packets is received out of the order in which the packets were sent by the sender. Based on a delay encountered during the receiving of the incoming packets, the approach increases a playback speed. The set of packets is then used to audibly play an analog rendition of the vocalizations to the receiving user at the increased playback speed.
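The two steps described, reordering out-of-order packets and raising the playback rate when delay accumulates, can be sketched as a small planning function. The threshold, speed-up rate, and cap below are illustrative assumptions, not values from the patent.

```python
def playback_plan(packets, delay_ms, base_speed=1.0, threshold_ms=100):
    """packets: list of (seq, audio) tuples, possibly out of order.

    Returns the audio in sequence order plus a playback speed that rises
    with excess delay so the receiver can catch up to real time.
    """
    ordered = [audio for _, audio in sorted(packets)]
    speed = base_speed
    if delay_ms > threshold_ms:
        # Assumed policy: +1% speed per 10 ms of excess delay, capped at 1.5x.
        speed = min(1.5, base_speed + 0.001 * (delay_ms - threshold_ms))
    return ordered, speed
```

Playing slightly faster than real time lets the receiver drain the backlog caused by late packets without dropping any of the sender's speech.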
Network Interface Device
Roughly described: a network interface device has an interface. The interface is coupled to first network interface device circuitry, host interface circuitry and host offload circuitry. The host interface circuitry is configured to interface to a host device and has a scheduler configured to schedule providing and/or receiving of data to/from the host device. The interface is configured to allow at least one of: data to be provided to said host interface circuitry from at least one of said first network interface device circuitry and said host offload circuitry; and data to be provided from said host interface circuitry to at least one of said first network interface device circuitry and said host offload circuitry.
TECHNOLOGIES FOR JITTER-ADAPTIVE LOW-LATENCY, LOW POWER DATA STREAMING BETWEEN DEVICE COMPONENTS
Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
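The mode switching described above is a hysteresis between a fast local buffer and a larger remote buffer: the producer spills to the remote buffer when the local one fills, and returns once the remote buffer drains below a low threshold. The sketch below illustrates that hysteresis only; the capacities, thresholds, and the simplified consumer (which just reads local-first) are assumptions, and the consumer's catch-up mode is not modeled.

```python
class StreamBuffer:
    def __init__(self, local_cap=2, low=0):
        self.local, self.remote = [], []
        self.local_cap, self.low = local_cap, low
        self.prod_mode = "local"

    def put(self, item):
        # Producer-side hysteresis between local and remote buffer modes.
        if self.prod_mode == "local" and len(self.local) >= self.local_cap:
            self.prod_mode = "remote"          # local buffer full: spill
        elif self.prod_mode == "remote" and len(self.remote) <= self.low:
            self.prod_mode = "local"           # remote drained: switch back
        (self.local if self.prod_mode == "local" else self.remote).append(item)

    def get(self):
        # Simplified consumer: drain the local buffer first, then the remote.
        return (self.local or self.remote).pop(0)
```

The low-threshold check is what prevents thrashing: the producer does not flip back to local mode the instant a single remote slot frees up, only once the remote buffer has largely drained.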
Buffer assignment balancing in a network device
Techniques for improved handling of queues of data units are described, such as queues of buffered data units of differing types and/or sources within a switch or other network device. When the size of a queue surpasses the state entry threshold for a certain state, the queue is said to be in the certain state. While in the certain state, data units assigned to the queue may be handled differently in some respect, such as being marked or being dropped without further processing. The queue remains in this certain state until its size falls below the state release threshold for the state. The state release threshold is adjusted over time in, for example, a random or pseudo-random manner. Among other aspects, in some embodiments, this adjustment of the state release threshold addresses fairness issues that may arise with respect to the treatment of different types or sources of data units.
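The entry/release hysteresis with a randomized release threshold can be sketched as below. The specific thresholds, the jitter range, and the fixed RNG seed are assumptions chosen for illustration; the patent's "handled differently" step (marking or dropping) is represented only by the boolean state.

```python
import random

class Queue:
    def __init__(self, entry=100, release_base=80, jitter=10, rng=None):
        self.size = 0
        self.entry = entry                       # state entry threshold (fixed)
        self.release_base, self.jitter = release_base, jitter
        self.rng = rng or random.Random(0)
        self.release = release_base              # state release threshold (varies)
        self.congested = False

    def adjust_release(self):
        # Pseudo-random adjustment of the release threshold over time.
        self.release = self.release_base + self.rng.randint(-self.jitter,
                                                            self.jitter)

    def update(self, delta):
        self.size = max(0, self.size + delta)
        if not self.congested and self.size > self.entry:
            self.congested = True      # enter state: mark/drop handling applies
        elif self.congested and self.size < self.release:
            self.congested = False     # leave state once below release threshold
        return self.congested
```

Randomizing the release point means different queues (and hence different traffic types or sources) exit the state at slightly different sizes over time, which is the fairness mechanism the abstract alludes to.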
Network processors
The present disclosure is directed to a network processor for processing high volumes of traffic provided by today's access networks at (or near) wireline speeds. The network processor can be implemented within a residential gateway to perform, among other functions, routing to deliver high speed data services (e.g., data services with rates up to 10 Gbit/s) from a wide area network (WAN) to end user devices in a local area network (LAN).