H03M13/01

Multi-processor bridge with cache allocate awareness

Techniques for loading data, comprising receiving a memory management command to perform a memory management operation to load data into the cache memory before execution of an instruction that requests the data, formatting the memory management command into one or more instructions for a cache controller associated with the cache memory, and outputting an instruction to the cache controller to load the data into the cache memory based on the memory management command.
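
A minimal sketch of the flow this abstract describes: one preload command is formatted into per-line instructions for the cache controller. The command fields, the CacheController interface, and the 64-byte line size are illustrative assumptions, not details from the patent.

```python
# Sketch of formatting one memory management command into per-line
# cache-controller instructions. Interfaces and line size are assumed.

CACHE_LINE_SIZE = 64  # bytes; assumed line size

class CacheController:
    def load_line(self, address: int) -> None:
        # Stand-in for the hardware operation that fills one cache line.
        print(f"prefetching line at 0x{address:x}")

def handle_memory_management_command(ctrl: CacheController,
                                     base_address: int,
                                     length: int) -> None:
    """Format one preload command into one or more per-line
    instructions for the cache controller."""
    first = base_address - (base_address % CACHE_LINE_SIZE)
    last = base_address + length - 1
    addr = first
    while addr <= last:
        ctrl.load_line(addr)          # output one instruction per line
        addr += CACHE_LINE_SIZE

handle_memory_management_command(CacheController(), 0x1000, 200)
```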

Virtual network pre-arbitration for deadlock avoidance and enhanced performance

A device includes a data path, a first interface configured to receive a first memory access request from a first peripheral device, and a second interface configured to receive a second memory access request from a second peripheral device. The device further includes an arbiter circuit configured to, in a first clock cycle, select a pre-arbitration winner between the first memory access request and the second memory access request based on a first number of credits allocated to a first destination device and a second number of credits allocated to a second destination device. The arbiter circuit is further configured to, in a second clock cycle, select a final arbitration winner from among the pre-arbitration winner and a subsequent memory access request based on a comparison of a priority of the pre-arbitration winner and a priority of the subsequent memory access request.
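
One way to picture the two-stage arbitration is sketched below. The credit rule (prefer the destination with more remaining credits) and the request fields are assumptions for demonstration; the abstract does not specify how credits decide the pre-arbitration winner.

```python
# Illustrative two-stage arbitration: credit-based pre-arbitration in
# cycle 1, priority comparison in cycle 2. All names are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    source: str
    destination: str
    priority: int  # higher value wins

def pre_arbitrate(req_a: Request, req_b: Request,
                  credits: dict) -> Request:
    """Cycle 1: pick a pre-arbitration winner by comparing the credits
    remaining for each request's destination device."""
    if credits[req_a.destination] >= credits[req_b.destination]:
        return req_a
    return req_b

def final_arbitrate(pre_winner: Request,
                    subsequent: Optional[Request]) -> Request:
    """Cycle 2: compare the pre-arbitration winner against a newly
    arrived request on priority alone."""
    if subsequent is not None and subsequent.priority > pre_winner.priority:
        return subsequent
    return pre_winner

credits = {"mem0": 3, "mem1": 1}
winner = pre_arbitrate(Request("periph0", "mem0", 1),
                       Request("periph1", "mem1", 2), credits)
print(final_arbitrate(winner, Request("periph2", "mem0", 5)).source)
```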

Quality of service (QoS) aware data storage decoder

Techniques related to a QoS-aware decoder architecture for data storage are described. In an example, QoS specifications include a QoS latency specification indicative of an acceptable latency for completing the processing of a data read command. The decoder may store this QoS latency specification. In operation, the decoder generates a latency measurement indicative of the actual latency of the processing. If a comparison of the latency measurement and the QoS latency specification indicates a violation of the specification, the decoder can terminate the decoding and generate a decoding failure.
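
A toy model of the latency check, assuming an iterative decoder with a wall-clock timing source; the decode loop, the per-iteration cost, and the 0.5 s budget are stand-ins, not the patented design.

```python
# Toy model of terminating a decode when the stored QoS latency
# specification would be violated. Iteration model is hypothetical.
import time

class QosAwareDecoder:
    def __init__(self, qos_latency_s: float):
        self.qos_latency_s = qos_latency_s  # stored QoS latency spec

    def decode(self, residual_errors: int) -> bool:
        start = time.monotonic()
        while residual_errors > 0:
            # Latency measurement vs. the stored QoS specification.
            if time.monotonic() - start > self.qos_latency_s:
                return False          # terminate early: decoding failure
            residual_errors -= 1      # stand-in for one decode iteration
            time.sleep(0.001)         # model per-iteration work
        return True                   # decoded within the QoS budget

print(QosAwareDecoder(qos_latency_s=0.5).decode(residual_errors=10))
```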

Memory controller and method for decoding memory devices with early hard-decode exit

A method and apparatus for decoding are disclosed. The method includes receiving a first Forward Error Correction (FEC) block of read values, starting a hard-decode process in which a number of check node failures is identified, and, during the hard-decode process, comparing the identified number of check node failures to a decode threshold. When the identified number of check node failures is not greater than the decode threshold, the hard-decode process continues. When the identified number of check node failures is greater than the decode threshold, the method includes: stopping the hard-decode process prior to its completion; generating output indicating that additional reads are required; receiving one or more additional FEC blocks of read values; mapping the first FEC block of read values and the additional FEC blocks of read values into soft-input values; and performing a soft-decode process on the soft-input values.
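
The early-exit rule can be sketched with a simplified parity-check model: count the unsatisfied checks for the current hard decisions and bail out to the additional-reads path when the count exceeds the threshold. The matrix, the bits, and the return values below are illustrative, not the patented decoder.

```python
# Early hard-decode exit: compare check-node failures to a threshold
# and fall back to soft decoding when it is exceeded.
import numpy as np

def check_node_failures(H: np.ndarray, hard_bits: np.ndarray) -> int:
    """Number of unsatisfied parity checks for the current hard decision."""
    return int(np.count_nonzero(H @ hard_bits % 2))

def decode_with_early_exit(H, hard_bits, decode_threshold: int) -> str:
    failures = check_node_failures(H, hard_bits)
    if failures > decode_threshold:
        # Stop the hard decode; signal that additional reads are needed
        # so the FEC blocks can be mapped to soft-input values.
        return "need-more-reads"
    # Otherwise continue the hard-decode process (omitted here).
    return "hard-decode-continues"

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
bits = np.array([1, 0, 1, 1])
print(decode_with_early_exit(H, bits, decode_threshold=1))
```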

Memory control method, memory storage device and memory control circuit unit

A memory control method for a rewritable non-volatile memory module is provided according to an exemplary embodiment of the disclosure. The method includes: sending a first read command sequence which indicates a reading of a first physical unit by using a first read voltage level to obtain first data; decoding the first data; sending a second read command sequence which indicates a reading of the first physical unit by using a second read voltage level to obtain second data; decoding the second data with assistance information to improve a decoding success rate of the second data if the second read voltage level meets a first condition or the second data meets a second condition; and decoding the second data without the assistance information if the second read voltage level does not meet the first condition and the second data does not meet the second condition.
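
Sketched below is one plausible shape of the two-read flow: the second read is decoded with the first read's data as assistance information only when either condition holds. The specific conditions shown (a voltage window and a small bit-difference count) are assumptions; the abstract does not define them.

```python
# Hypothetical sketch of conditional assisted decoding of a second read.

def decode(data: bytes, assistance: bytes = None) -> bool:
    """Stand-in for the ECC decode; assistance info (here, the first
    read's result) would bias bit reliabilities in a real decoder."""
    return assistance is not None  # toy success model

def handle_read(first_data: bytes, second_data: bytes,
                second_voltage_mv: int,
                voltage_window=(300, 700)) -> bool:
    # First condition (assumed): second read voltage inside a window.
    meets_first = voltage_window[0] <= second_voltage_mv <= voltage_window[1]
    # Second condition (assumed): second data differs little from first.
    diff = sum(a != b for a, b in zip(first_data, second_data))
    meets_second = diff <= 2
    if meets_first or meets_second:
        return decode(second_data, assistance=first_data)
    return decode(second_data)

print(handle_read(b"\x0f\x0f", b"\x0f\x0e", second_voltage_mv=500))
```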

Separate storage and control of static and dynamic neural network data within a non-volatile memory array
20210304009 · 2021-09-30

Methods and apparatus are disclosed for managing the storage of static and dynamic neural network data within a non-volatile memory (NVM) die for use with deep neural networks (DNN). Some aspects relate to separate trim sets for separately configuring a static data NVM array for static input data and a dynamic data NVM array for dynamic synaptic weight data. For example, the static data NVM array may be configured via one trim set for data retention, whereas the dynamic data NVM array may be configured via another trim set for write performance. The trim sets may specify different configurations for error correction coding, write verification, and read threshold calibration, as well as different read/write voltage thresholds. In some examples, neural network regularization is provided within a DNN by setting trim parameters to encourage bit flips to avoid overfitting. Some examples relate to managing non-DNN data, such as stochastic gradient data.
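
The separate trim sets might look like the sketch below, with one set tuned for retention of static input data and another for write performance on dynamic weight data. Field names and values are illustrative assumptions.

```python
# Hypothetical per-array trim sets for a static-data NVM array and a
# dynamic-weight NVM array. All fields and values are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class TrimSet:
    ecc_strength: int         # correctable bits per codeword
    write_verify: bool        # verify-after-write enabled
    read_threshold_cal: bool  # periodic read-threshold calibration
    program_voltage_mv: int

# Static-data array: favor data retention.
STATIC_TRIM = TrimSet(ecc_strength=72, write_verify=True,
                      read_threshold_cal=True, program_voltage_mv=2600)

# Dynamic-weight array: favor write performance; the occasional bit
# flip can even act as regularization during training, per the abstract.
DYNAMIC_TRIM = TrimSet(ecc_strength=24, write_verify=False,
                       read_threshold_cal=False, program_voltage_mv=2200)

def configure_array(array_name: str, trim: TrimSet) -> None:
    print(f"{array_name}: {trim}")

configure_array("static_nvm", STATIC_TRIM)
configure_array("dynamic_nvm", DYNAMIC_TRIM)
```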

Data interpretation with modulation error ratio analysis
11042433 · 2021-06-22

Methods and systems for analyzing data are disclosed. An example method can comprise receiving a first data signal, decoding the first data signal, determining a second data signal based on the decoded first data signal, and determining a modulation error ratio based on a difference between the first data signal and the second data signal.
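
A short sketch of the computation, using the standard MER definition (ideal-signal power over error power, in dB) and an assumed QPSK slicer as the decode step that re-derives the second signal from the first.

```python
# MER from a received signal: decode it, rebuild the ideal signal from
# the decisions, and compare. The QPSK slicer is an assumed example.
import numpy as np

def qpsk_slice(rx: np.ndarray) -> np.ndarray:
    """Decode: map each received sample to the nearest QPSK point."""
    return (np.sign(rx.real) + 1j * np.sign(rx.imag)) / np.sqrt(2)

def mer_db(rx: np.ndarray) -> float:
    ideal = qpsk_slice(rx)   # second signal, from the decoded first
    err = rx - ideal         # difference between the two signals
    return 10 * np.log10(np.sum(np.abs(ideal) ** 2) /
                         np.sum(np.abs(err) ** 2))

rng = np.random.default_rng(0)
sym = qpsk_slice(rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
noisy = sym + 0.05 * (rng.standard_normal(1000)
                      + 1j * rng.standard_normal(1000))
print(f"MER ~ {mer_db(noisy):.1f} dB")
```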

Improving performance of a bit flipping (BF) decoder of an error correction system
20210281278 · 2021-09-09

Techniques are described for improving the decoding latency and throughput of an error correction system that includes a bit flipping (BF) decoder, where the BF decoder uses a bit flipping procedure. In an example, different decoding parameters are determined including any of a decoding number of a decoding iteration, a checksum of a codeword, a degree of a variable node, and a bit flipping threshold defined for the bit flipping procedure. Based on one or more of these decoding parameters, a decision can be generated to skip the bit flipping decoding procedure, thereby decreasing the decoding latency and increasing the decoding throughput. Otherwise, the bit flipping decoding procedure can be performed to compute a bit flipping energy and determine whether particular bits are to be flipped or not. Hence, the overall performance (e.g., bit error rate) is not significantly impacted.
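
The skip decision might be modeled as below, using only the checksum, the variable-node degree, and the bit-flipping threshold from the parameters listed above. The rule shown (skip when even the maximum possible energy cannot reach the threshold) is one plausible reading, not the patented criterion, and the energy definition is the common textbook form.

```python
# Hypothetical skip rule for the bit flipping procedure.

def bit_flip_energy(unsatisfied_checks: int, channel_disagrees: bool) -> int:
    """Energy of one variable node in a BF iteration: unsatisfied
    neighboring checks, plus one if the current decision disagrees
    with the channel value."""
    return unsatisfied_checks + (1 if channel_disagrees else 0)

def can_skip_bit_flip(checksum: int, var_degree: int,
                      flip_threshold: int) -> bool:
    if checksum == 0:
        return True  # all parity checks satisfied: nothing to flip
    # A node's energy is at most its degree plus the disagreement bonus;
    # below the threshold, no bit could ever flip, so skip the procedure.
    return var_degree + 1 < flip_threshold

print(can_skip_bit_flip(checksum=5, var_degree=2, flip_threshold=4))  # True
print(bit_flip_energy(unsatisfied_checks=3, channel_disagrees=True))  # 4
```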
