H03M13/276

MULTICORE SHARED CACHE OPERATION ENGINE

Techniques for accessing memory by a memory controller, comprising receiving, by the memory controller, a memory management command to perform a memory management operation at a virtual memory address, translating the virtual memory address to a physical memory address, wherein the physical memory address comprises an address within a cache memory, and outputting an instruction to the cache memory based on the memory management command and the physical memory address.
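
The translate-then-issue flow in this abstract can be sketched in a few lines. This is an illustrative model only, assuming a simple flat page table; the class and method names (MemoryController, Cache.issue) are hypothetical, not from the patent.

```python
# Hypothetical sketch: a memory controller translates a virtual address via a
# page table, then outputs an instruction to the cache for the resulting
# physical address. All names and the 4 KiB page size are assumptions.
PAGE_SIZE = 4096

class Cache:
    def issue(self, command, paddr):
        # Stand-in for the real cache instruction interface.
        return f"{command}@{paddr:#x}"

class MemoryController:
    def __init__(self, page_table, cache):
        self.page_table = page_table  # virtual page number -> physical page number
        self.cache = cache

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        ppn = self.page_table[vpn]    # raises KeyError on an unmapped page
        return ppn * PAGE_SIZE + offset

    def handle(self, command, vaddr):
        # Receive a memory management command at a virtual address, translate,
        # and output a cache instruction at the physical address.
        paddr = self.translate(vaddr)
        return self.cache.issue(command, paddr)

mc = MemoryController({0x12: 0x99}, Cache())
print(mc.handle("writeback_invalidate", 0x12345))
```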

DEINTERLEAVING METHOD AND DEINTERLEAVING SYSTEM PERFORMING THE SAME

A deinterleaving method and a deinterleaving system performing the same are disclosed. According to an example embodiment, a data processing method includes dividing data into first data blocks of a first number of bits, performing deinterleaving on the first data blocks, and dividing deinterleaved data into second data blocks of a second number of bits and outputting the second data blocks, wherein the first number of bits is determined based on a minimum switching unit of a deinterleaving operation and the second number of bits.
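
The divide/deinterleave/regroup pipeline above can be sketched as follows. The specific row-write/column-read deinterleaver, the block sizes, and all names are assumptions chosen for illustration, not the patented design.

```python
# Minimal sketch: split the stream into first blocks sized for the
# deinterleaver's minimum switching unit, deinterleave each, then regroup the
# result into second (output) blocks.
def chunk(seq, n):
    return [seq[i:i + n] for i in range(0, len(seq), n)]

def deinterleave(data, rows, cols):
    # Invert a row-write / column-read block interleaver.
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

FIRST_BITS = 6    # assumed: minimum switching unit of the deinterleaver
SECOND_BITS = 3   # assumed: output block width

stream = [0, 3, 1, 4, 2, 5, 6, 9, 7, 10, 8, 11]
first_blocks = chunk(stream, FIRST_BITS)
flat = [b for blk in first_blocks for b in deinterleave(blk, 2, 3)]
second_blocks = chunk(flat, SECOND_BITS)
print(second_blocks)
```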

Butterfly network on load data return

A method is shown that transforms and aligns a plurality of fields from an input to an output data stream using a multilayer butterfly or inverse butterfly network. Many transformations are possible with such a network, which may include separate control of each multiplexer. This invention supports a limited set of multiplexer control signals, which enables a similarly limited set of data transformations. This limited capability is offset by the reduced complexity of the multiplexer control circuits.
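
The control trade-off can be illustrated with a butterfly network in which each stage is driven by a single swap bit rather than per-multiplexer control. This is a sketch under assumed parameters (8 lanes, 3 stages); the function names are illustrative.

```python
# Butterfly network over 8 lanes with one control bit per stage (the "limited
# set of multiplexer control signals" trade-off): fewer transformations, but
# far simpler control logic than per-multiplexer signals.
def butterfly_stage(lanes, span, swap):
    # Pairs elements `span` apart; the single control bit swaps every pair.
    out = list(lanes)
    if swap:
        for i in range(len(lanes)):
            if i & span == 0:
                out[i], out[i | span] = lanes[i | span], lanes[i]
    return out

def butterfly(lanes, controls):
    # controls[k] drives the stage with span len/2, len/4, ..., 1.
    span = len(lanes) // 2
    for bit in controls:
        lanes = butterfly_stage(lanes, span, bit)
        span //= 2
    return lanes

# Rotating an 8-lane vector by 4 needs only the first stage's swap bit:
print(butterfly(list(range(8)), [1, 0, 0]))
```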

Multi-processor bridge with cache allocate awareness

Techniques for loading data, comprising receiving a memory management command to perform a memory management operation to load data into a cache memory before execution of an instruction that requests the data, formatting the memory management command into one or more instructions for a cache controller associated with the cache memory, and outputting an instruction to the cache controller to load the data into the cache memory based on the memory management command.
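
The command-formatting step can be sketched as breaking one preload command into per-line instructions for the cache controller. The 64-byte line size and the instruction names are assumptions for illustration.

```python
# Hypothetical sketch: format a single "preload this region" memory management
# command into one allocate instruction per cache line it covers, so the data
# is resident before the consuming instruction executes.
LINE = 64  # assumed cache line size in bytes

def format_preload(base, length):
    # Align down to a line boundary, then emit one instruction per line
    # touched by [base, base + length).
    first = base - (base % LINE)
    return [("allocate_line", addr) for addr in range(first, base + length, LINE)]

print(format_preload(0x1000, 130))
```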

Virtual network pre-arbitration for deadlock avoidance and enhanced performance

A device includes a data path, a first interface configured to receive a first memory access request from a first peripheral device, and a second interface configured to receive a second memory access request from a second peripheral device. The device further includes an arbiter circuit configured to, in a first clock cycle, select a pre-arbitration winner between the first memory access request and the second memory access request based on a first number of credits allocated to a first destination device and a second number of credits allocated to a second destination device. The arbiter circuit is further configured to, in a second clock cycle, select a final arbitration winner from among the pre-arbitration winner and a subsequent memory access request based on a comparison of a priority of the pre-arbitration winner and a priority of the subsequent memory access request.
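
The two-stage scheme can be sketched as a credit check in cycle one followed by a priority comparison in cycle two. The request fields and tie-breaking rule here are assumptions, not the patented logic.

```python
# Sketch of two-stage arbitration: cycle 1 picks a pre-arbitration winner by
# destination credits; cycle 2 lets a later, higher-priority request displace
# it. Field names ("credits", "prio") are illustrative.
def pre_arbitrate(reqs):
    # Cycle 1: choose the request whose destination holds the most credits.
    return max(reqs, key=lambda r: r["credits"])

def final_arbitrate(pre_winner, subsequent):
    # Cycle 2: compare priorities against any subsequent request.
    if subsequent is not None and subsequent["prio"] > pre_winner["prio"]:
        return subsequent
    return pre_winner

a = {"id": "periph0", "credits": 3, "prio": 1}
b = {"id": "periph1", "credits": 1, "prio": 2}
late = {"id": "periph2", "credits": 2, "prio": 5}
winner = final_arbitrate(pre_arbitrate([a, b]), late)
print(winner["id"])  # periph2 displaces the credit-based winner
```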

Decoding method and apparatus, network device, and storage medium
11843396 · 2023-12-12

A decoding method and apparatus, a network device, and a storage medium are provided. The method includes: receiving data before de-interleaving and soft bit encoding locations; dividing the data before de-interleaving to obtain first data banks; acquiring punctured data, and obtaining second data banks according to the punctured data, wherein the data before de-interleaving and the punctured data are determined in encoded data according to the soft bit encoding locations; and performing decoding according to the soft bit encoding locations, the first data banks and the second data banks, so as to obtain decoded data.
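
The handling of punctured data before decoding can be illustrated by depuncturing: filling punctured positions with neutral soft bits. This is a generic sketch, not the patented method; the zero-LLR convention and function name are assumptions.

```python
# Illustrative depuncturing: rebuild the full soft-bit sequence by inserting
# neutral values (0.0 LLR, "no information") at the punctured locations, then
# hand the result to the decoder.
def depuncture(received, punctured_positions, total_len):
    out = [0.0] * total_len
    it = iter(received)
    for i in range(total_len):
        if i not in punctured_positions:
            out[i] = next(it)
    return out

print(depuncture([0.9, -1.2, 0.4], {1, 3}, 5))
```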

DEINTERLEAVER
20210175905 · 2021-06-10

A method, apparatus, and system for a deinterleaver.

TIME INTERLEAVER, TIME DEINTERLEAVER, TIME INTERLEAVING METHOD, AND TIME DEINTERLEAVING METHOD
20210152486 · 2021-05-20

A convolutional interleaver included in a time interleaver, which performs convolutional interleaving, includes: a first switch that switches a connection destination of an input of the convolutional interleaver to one end of one of a plurality of branches; FIFO memories provided in the plurality of branches except one branch, wherein the number of FIFO memories differs among the plurality of branches; and a second switch that switches a connection destination of an output of the convolutional interleaver to another end of one of the plurality of branches. The first and second switches switch the connection destination each time as many cells as the codewords per frame have passed, advancing the corresponding branch of the connection destination sequentially and repeatedly among the plurality of branches.
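
The branch structure can be sketched with per-branch FIFOs of differing depth and two synchronized switches. For brevity this sketch switches branches on every cell rather than after a codeword count; branch count, depths, and names are assumptions.

```python
# Minimal convolutional interleaver sketch: branch 0 has no FIFO, branch k
# delays by k * depth_step cells; both switches rotate through the branches
# together. (Per-cell switching variant, assumed for simplicity.)
from collections import deque

class ConvInterleaver:
    def __init__(self, branches, depth_step):
        self.fifos = [deque([0] * (k * depth_step)) for k in range(branches)]
        self.branch = 0  # both switches point at the same branch

    def push(self, cell):
        f = self.fifos[self.branch]
        f.append(cell)          # input switch feeds one end of the branch
        out = f.popleft()       # output switch reads the other end
        # Switch both connection destinations to the next branch.
        self.branch = (self.branch + 1) % len(self.fifos)
        return out

il = ConvInterleaver(3, 1)
print([il.push(c) for c in [1, 2, 3, 4, 5, 6]])
```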

SYSTEM, METHOD, AND APPARATUS FOR INTERLEAVING DATA
20210143841 · 2021-05-13

A method, system, and apparatus for interleaving data including creating a buffer, writing input data, and reading output data out of the buffer.
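
The create/write/read steps above follow a common buffer pattern that can be shown concretely. The write-rows/read-columns scheme and dimensions here are assumptions for illustration.

```python
# Generic buffer-based interleaving sketch: create a buffer, write the input
# data row-wise, then read the output data out column-wise.
ROWS, COLS = 3, 4
buf = [[None] * COLS for _ in range(ROWS)]     # create the buffer

data = list(range(ROWS * COLS))
for i, v in enumerate(data):                   # write input row-wise
    buf[i // COLS][i % COLS] = v

out = [buf[r][c] for c in range(COLS) for r in range(ROWS)]  # read column-wise
print(out)
```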