Patent classifications
G06F12/04
METHOD AND SYSTEM FOR IN-LINE ECC PROTECTION
A memory system having an interconnect configured to receive commands from a system to read data from and/or write data to a memory device. The memory system also has a bridge configured to receive the commands from the interconnect, to manage ECC data, and to perform address translation between system addresses and physical memory device addresses by calculating a first ECC memory address for a first ECC data block that is after and adjacent to a first data block having a first data address, calculating a second ECC memory address for a second ECC data block that is after and adjacent to the first ECC data block, and calculating a second data address that is after and adjacent to the second ECC data block. The bridge may also check and calculate ECC data for a complete burst of data, and/or cache ECC data for a complete burst of data that includes read and/or write data.
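The address translation described above can be pictured as interleaving pairs of data blocks with pairs of ECC blocks in physical memory, so that each first data block is followed by its ECC block, a second ECC block, and then the second data block. The sketch below is a minimal illustration of that layout; the block sizes and the two-blocks-per-group pairing are assumptions, not details from the patent.

```python
# Hypothetical in-line ECC layout per the abstract:
#   [data0][ecc0][ecc1][data1] [data2][ecc2][ecc3][data3] ...
# Block sizes here are illustrative assumptions.
DATA_BLK = 4096   # bytes per data block (assumed)
ECC_BLK = 512     # bytes per ECC block (assumed)

def system_to_physical(sys_addr: int) -> int:
    """Translate a linear system address to a physical device address,
    skipping over the in-line ECC blocks."""
    pair = sys_addr // (2 * DATA_BLK)            # each group holds 2 data blocks
    off = sys_addr % (2 * DATA_BLK)
    base = pair * (2 * DATA_BLK + 2 * ECC_BLK)   # group stride incl. 2 ECC blocks
    if off < DATA_BLK:                           # first data block of the group
        return base + off
    # second data block sits after both ECC blocks
    return base + DATA_BLK + 2 * ECC_BLK + (off - DATA_BLK)

def ecc_addresses(pair: int) -> tuple:
    """Physical addresses of the two ECC blocks of a group: the first ECC
    block directly follows the first data block, the second directly
    follows the first ECC block."""
    base = pair * (2 * DATA_BLK + 2 * ECC_BLK)
    first = base + DATA_BLK
    return first, first + ECC_BLK
```

With these assumed sizes, system address 0 maps to physical 0, while the second data block of the first group is shifted past both 512-byte ECC blocks.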
CONTROLLER AND OPERATION METHOD THEREOF FOR MANAGING READ COUNT INFORMATION OF MEMORY BLOCK
A method for performing a sudden power-off recovery operation of a controller controlling a memory device includes: obtaining open block information for open blocks of the memory device and read counts for the open blocks; updating each of the read counts by adding a set value to it; storing the updated read counts in the memory device; after the storing, sequentially reading pages in each of the open blocks, based on the open block information and without further updating the read counts, to detect a boundary page; and controlling the memory device to program dummy data in the detected boundary page.
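The recovery flow above can be sketched as follows. The `OpenBlock` model, the `is_programmed` oracle, and the `SAFETY_MARGIN` set value are illustrative assumptions, not details from the patent.

```python
SAFETY_MARGIN = 100  # the "set value" added to each read count (assumed)

class OpenBlock:
    """Toy open-block model: pages [0, programmed_upto) hold valid data."""
    def __init__(self, block_id, num_pages, programmed_upto):
        self.block_id = block_id
        self.num_pages = num_pages
        self._upto = programmed_upto

    def is_programmed(self, page):
        return page < self._upto

def spo_recovery(open_blocks, read_counts, store_counts, program_dummy):
    # 1) bump each open block's read count by the set value and persist,
    #    covering the reads the recovery scan itself will perform
    for blk in open_blocks:
        read_counts[blk.block_id] += SAFETY_MARGIN
    store_counts(dict(read_counts))
    # 2) scan pages sequentially (no further count updates) to find the
    #    boundary page: the first unprogrammed page in each open block
    for blk in open_blocks:
        for page in range(blk.num_pages):
            if not blk.is_programmed(page):
                # 3) close the partially written block with dummy data
                program_dummy(blk, page)
                break
```

Persisting the padded counts before the scan is what lets the sequential reads proceed without per-read count updates.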
Snap read optimization for media management for a memory sub-system
A plurality of entries associated with a media management operation for a plurality of transfer units are stored. A respective destination location for each of the transfer units is determined in connection with the garbage collection procedure such that a subset of the plurality of transfer units aligns with a codeword boundary on the memory page. A plurality of write commands in connection with the media management operation are issued based at least in part on the determining.
Data writing method and apparatus, and electronic device
In the field of data reading and writing technologies, a data writing method, a data writing apparatus, and an electronic device are provided. The data writing method includes: determining whether a start storage address of a first data block is aligned with a bus bit width of a storage; in response to determining that the start storage address of the first data block is not aligned with the bus bit width of the storage, determining whether a second data block, which is the data block immediately before the first data block, is compressed; and in response to determining that the second data block is compressed, executing a complete write on a first beat of the first data block.
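The decision flow can be sketched as below. The bus width, the mode names, and the fallback branch for an uncompressed preceding block are illustrative assumptions (the abstract only specifies the compressed case).

```python
BUS_WIDTH_BYTES = 16  # storage bus width in bytes (assumed)

def is_aligned(addr: int) -> bool:
    return addr % BUS_WIDTH_BYTES == 0

def choose_write_mode(start_addr: int, prev_block_compressed: bool) -> str:
    """Pick how the first beat of a data block is written."""
    if is_aligned(start_addr):
        return "normal-write"
    # unaligned start: consult the data block immediately before this one
    if prev_block_compressed:
        return "complete-write-first-beat"   # the case named in the abstract
    return "read-modify-write-first-beat"    # assumed fallback, not in abstract
```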
Apparatus for obsolete mapping counting in NAND-based storage devices
In one or more embodiments, a NAND-based data storage device includes a device controller configured to receive a memory write command from a host specifying a set of memory locations to be written to, and to determine whether the command is for a random write. In response to the determination, the device controller is further configured to configure one or more update entries to an update layer of a mapping architecture of the device for the set of memory locations, such that the one or more update entries are respectively aligned with a size of a pre-defined MRU of mapping data for the device. By aligning the update entries with the smaller MRU, smaller regions of memory may be flagged as obsolete, increasing efficiency. In one embodiment, the device controller further includes a RAM, and the update layer is stored in the RAM.
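The MRU alignment described above can be sketched as splitting a random-write range into update entries that each cover exactly one mapping-resolution-sized region; the MRU granularity and the `(start, length)` entry representation are illustrative assumptions.

```python
MRU = 8  # mapping-data granularity in logical units (assumed)

def aligned_update_entries(start: int, length: int) -> list:
    """Cover [start, start + length) with update entries that each begin on
    an MRU boundary and span one MRU, so whole MRUs can later be flagged
    obsolete without partial-region bookkeeping."""
    end = start + length
    cur = start - (start % MRU)   # round down to an MRU boundary
    entries = []
    while cur < end:
        entries.append((cur, MRU))
        cur += MRU
    return entries
```

Because every entry is MRU-aligned, marking an entry obsolete invalidates exactly one MRU of mapping data, which is the efficiency gain the abstract points to.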
SYSTEMS AND METHODS FOR COMPOSABLE COHERENT DEVICES
Provided are systems, methods, and apparatuses for resource allocation. The method can include: determining a first value of a parameter associated with at least one first device in a first cluster; determining a threshold based on the first value of the parameter; receiving a request for processing a workload at the first device; determining that a second value of the parameter associated with at least one second device in a second cluster meets the threshold; and responsive to meeting the threshold, routing at least a portion of the workload to the second device.
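The threshold-and-route steps can be sketched as follows; the choice of parameter (e.g. load), the 80% threshold rule, and the 50/50 split are illustrative assumptions, since the abstract leaves all three abstract.

```python
def make_threshold(first_value: float) -> float:
    # assumed rule: threshold is 80% of the first device's parameter value
    return first_value * 0.8

def route(workload: float, first_value: float, second_value: float,
          first_dev: str, second_dev: str) -> list:
    """Return (device, share) assignments for the workload."""
    threshold = make_threshold(first_value)
    # if the second cluster's device meets the threshold, offload a portion
    if second_value <= threshold:
        return [(first_dev, workload * 0.5), (second_dev, workload * 0.5)]
    return [(first_dev, workload)]
```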
SELECT DECOMPRESSION HEADERS AND SYMBOL START INDICATORS USED IN WRITING DECOMPRESSED DATA
One or more units of decompressed data of a plurality of units of decompressed data are written to a target location for subsequent writing to memory. The plurality of units of decompressed data includes a plurality of symbol outputs and has associated therewith a plurality of decompression headers. A determination is made that the subsequent writing to memory of at least a portion of another unit of decompressed data to be written to the target location is to be stalled. A symbol start position of the other unit of decompressed data and a decompression header of a selected unit of the one or more units of decompressed data written to the target location are provided to a component of a computing environment. The decompression header is used for the subsequent writing of the other unit of decompressed data to memory.
PROCESSOR INSTRUCTIONS FOR DATA COMPRESSION AND DECOMPRESSION
A processor that includes compression instructions to compress multiple adjacent data blocks of uncompressed read-only data stored in memory into one compressed read-only data block and store the compressed read-only data block in multiple adjacent blocks in the memory is provided. During execution of an application to operate on the read-only data, one of the multiple adjacent blocks storing the compressed read-only data block is read from memory, stored in a prefetch buffer, and decompressed in the memory controller. In response to a subsequent request during execution of the application for an adjacent data block in the compressed read-only data block, the uncompressed adjacent block is read directly from the prefetch buffer.
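The read path can be sketched as follows, with `zlib` standing in for the hardware codec and a two-logical-blocks-per-compressed-region layout as an illustrative assumption; the class and its interface are hypothetical.

```python
import zlib

class PrefetchingController:
    """Toy memory-controller model that decompresses a compressed region and
    keeps the adjacent logical block in a prefetch buffer."""
    def __init__(self, memory: dict):
        self.memory = memory   # block index -> compressed bytes
        self.prefetch = {}     # block index -> decompressed bytes

    def read_block(self, idx: int) -> bytes:
        # a subsequent request for an adjacent block of the same compressed
        # region is served directly from the prefetch buffer
        if idx in self.prefetch:
            return self.prefetch.pop(idx)
        data = zlib.decompress(self.memory[idx])  # decompress in the controller
        # assume each compressed region expands into two adjacent logical blocks
        half = len(data) // 2
        self.prefetch[idx + 1] = data[half:]
        return data[:half]
```

The second read hits the prefetch buffer and avoids a second decompression, which is the latency win the abstract describes.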
COMPRESSED CACHE MEMORY WITH PARALLEL DECOMPRESS ON FAULT
An embodiment of an integrated circuit may comprise hardware decompression accelerators coupled to a core, a compressed cache, a processor communicatively coupled to the hardware decompression accelerators and the compressed cache, and memory communicatively coupled to the processor. The memory stores microcode instructions that, when executed by the processor, cause the processor to: load a page table entry in response to an indication of a page fault; determine whether the page table entry indicates that the page is to be decompressed on fault; and, if so, modify a first decompression work descriptor at a first address and a second decompression work descriptor at a second address based on information from the page table entry, and generate a first enqueue transaction to the hardware decompression accelerators with the first address of the first decompression work descriptor and a second enqueue transaction to the hardware decompression accelerators with the second address of the second decompression work descriptor. Other embodiments are disclosed and claimed.
Electronic device and control method thereof
Disclosed are an electronic device and a control method thereof. The electronic device according to the present disclosure includes a memory, a cache memory, and a CPU, and includes a processor which controls the electronic device by using a program stored in the memory, wherein the CPU monitors an input address through which an input value is accessed in the cache memory, and changes the input address when accesses to the input address in the cache memory follow a preset pattern.