Patent classifications
G06F2212/214
MEMORY EXPANSION WITH PERSISTENT PREDICTIVE PREFETCHING
A memory device with non-volatile memory and persistent predictive prefetching provides high-speed storage to a computer system. The memory device uses a non-volatile memory to store data and a volatile memory to cache data from the non-volatile memory. The computer system sends access requests to obtain data in the non-volatile memory. A prediction engine in the memory device receives the access requests, computes access histories from them, and stores the histories in an access history table. Based on the stored access history table, the prediction engine predicts non-volatile memory addresses that will be accessed in the future and causes the data at the predicted addresses of the non-volatile memory to be stored in the volatile memory. The memory device stores the predictions in the non-volatile memory so that past predictions can be used after the computer system restarts.
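The access-history-driven prefetch described above can be illustrated with a minimal Python sketch. The class name, the dict-backed memories, and the "most frequent successor" prediction policy are illustrative assumptions, not the patented design; a real device would also serialize the history table to non-volatile memory so predictions survive a restart.

```python
from collections import defaultdict

class PrefetchEngine:
    """Minimal sketch of a predictive prefetcher: it records address-to-address
    transitions in an access history table and stages the most likely next
    address's data in a volatile cache ahead of the request."""

    def __init__(self, nvm):
        self.nvm = nvm        # non-volatile store (modeled as a dict)
        self.cache = {}       # volatile cache of prefetched data
        self.history = defaultdict(lambda: defaultdict(int))  # addr -> {next_addr: count}
        self.last_addr = None

    def access(self, addr):
        # Record the observed transition in the access history table.
        if self.last_addr is not None:
            self.history[self.last_addr][addr] += 1
        self.last_addr = addr
        # Serve from the volatile cache on a prefetch hit, else from NVM.
        data = self.cache.pop(addr) if addr in self.cache else self.nvm[addr]
        # Predict the next address (assumed policy: most frequent successor)
        # and prefetch its data into the volatile cache.
        successors = self.history.get(addr)
        if successors:
            predicted = max(successors, key=successors.get)
            self.cache[predicted] = self.nvm[predicted]
        return data
```

After the pattern 0 → 4 has been seen once, a later access to 0 stages the data at address 4 in the volatile cache before it is requested.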
CLOUD STORAGE ACCELERATION LAYER FOR ZONED NAMESPACE DRIVES
Systems, apparatuses, and methods provide for a memory controller to manage a tiered memory including a persistent memory cache tier and a zoned namespace drive memory capacity tier. For example, the memory controller includes logic to translate a standard zoned namespace drive address associated with a user write to a tiered memory address write. The tiered memory address write is associated with the tiered memory including the persistent memory cache tier and the zoned namespace drive memory capacity tier. A plurality of tiered memory address writes is collected, where the plurality of tiered memory address writes includes the tiered memory address write and other tiered memory address writes in the persistent memory cache tier. The collected plurality of tiered memory address writes is transferred from the persistent memory cache tier to the zoned namespace drive memory capacity tier via an append-type zoned namespace drive write command.
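A minimal sketch of the collect-then-append flow, assuming a dict-backed cache tier, a list modeling one zone, and an arbitrary flush threshold of four writes. The key point it shows is that a zone is written strictly sequentially: the append step assigns the on-zone offset (the write pointer) and records it in a mapping, rather than letting the host choose the address.

```python
class TieredZNSController:
    """Sketch of a cache tier that collects writes and flushes them to a
    zoned-namespace capacity tier with an append-style command."""

    FLUSH_THRESHOLD = 4  # assumed batch size, illustrative only

    def __init__(self):
        self.cache_tier = {}   # persistent-memory cache tier: lba -> data
        self.zone = []         # capacity tier: one append-only zone
        self.l2p = {}          # lba -> zone offset, filled in at flush time

    def write(self, lba, data):
        # A user write is translated into a tiered write landing in the cache tier.
        self.cache_tier[lba] = data
        if len(self.cache_tier) >= self.FLUSH_THRESHOLD:
            self.flush()

    def flush(self):
        # Transfer the collected writes: the zone's write pointer (here,
        # the list length) decides placement, as with a zone-append command.
        for lba, data in sorted(self.cache_tier.items()):
            self.l2p[lba] = len(self.zone)
            self.zone.append(data)
        self.cache_tier.clear()

    def read(self, lba):
        # Unflushed data is still in the cache tier; otherwise follow the mapping.
        if lba in self.cache_tier:
            return self.cache_tier[lba]
        return self.zone[self.l2p[lba]]
```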
STORAGE SYSTEM AND METHOD FOR ACCESSING SAME
A data access system includes a processor and a storage system having a main memory and a cache module. The cache module includes an FLC controller and a cache. The cache is configured as an FLC to be accessed prior to accessing the main memory. The processor is coupled to levels of cache separate from the FLC. In response to data required by the processor not being in those levels of cache, the processor generates a physical address corresponding to a physical location in the storage system. The FLC controller generates a virtual address based on the physical address; the virtual address corresponds to a physical location within the FLC or the main memory. In response to the virtual address not corresponding to a physical location within the FLC, the cache module causes the data required by the processor to be retrieved from the main memory.
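The FLC lookup-then-fallback behavior can be sketched as follows. This is a simplification under stated assumptions: the dict lookup stands in for the controller's physical-to-virtual translation, and the FIFO eviction policy and two-line capacity are illustrative, not from the abstract.

```python
class FLCModule:
    """Sketch of a cache configured as an FLC in front of main memory: a hit
    resolves within the FLC; a miss causes retrieval from main memory and
    installation of the line into the FLC."""

    def __init__(self, main_memory, capacity=2):
        self.main = main_memory   # dict: physical address -> data
        self.flc = {}             # lines currently held in the FLC
        self.capacity = capacity

    def read(self, paddr):
        # The lookup plays the role of the controller's address translation.
        if paddr in self.flc:
            return self.flc[paddr]          # address maps into the FLC: hit
        data = self.main[paddr]             # miss: retrieve from main memory
        if len(self.flc) >= self.capacity:  # assumed FIFO eviction policy
            self.flc.pop(next(iter(self.flc)))
        self.flc[paddr] = data
        return data
```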
TECHNIQUES FOR NON-CONSECUTIVE LOGICAL ADDRESSES
Methods, systems, and devices for memory operations are described. A first set of commands may be received for accessing a memory device. The first set of commands may include non-consecutive logical addresses that correspond to consecutively indexed physical addresses. A determination that the non-consecutive logical addresses correspond to consecutively indexed physical addresses may be made based on a first mapping stored in a volatile memory. A second mapping may be transferred to the volatile memory based on the determination. The second mapping may include an indication of whether information stored at a set of physical addresses is valid. A second set of commands including non-consecutive logical addresses may be received for accessing the memory device. Data for the second set of commands that include the non-consecutive logical addresses may be retrieved from the memory device using the second mapping.
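The two mappings can be sketched as two small functions: the first checks whether a run of non-consecutive logical addresses lands on consecutively indexed physical addresses, and the second carries a per-physical-address validity flag used at retrieval time. The dict/list representations and function names are assumptions for illustration.

```python
def consecutive_physical(l2p, lbas):
    """First mapping check: do these (possibly non-consecutive) logical
    addresses map to consecutively indexed physical addresses?"""
    pbas = [l2p[lba] for lba in lbas]
    return all(b - a == 1 for a, b in zip(pbas, pbas[1:]))

def read_with_validity(media, l2p, valid, lbas):
    """Second mapping: one validity flag per physical address; data is
    retrieved only where the stored information is marked valid."""
    return [media[l2p[lba]] if valid[l2p[lba]] else None for lba in lbas]
```

For example, logical addresses 10, 3, 7 are non-consecutive, yet they may map to physical addresses 0, 1, 2, which the first check detects.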
Architecture utilizing a middle map between logical to physical address mapping to support metadata updates for dynamic block relocation
A method for block addressing is provided. The method includes moving content of a data block referenced by a logical block address (LBA) from a first physical block corresponding to a first physical block address (PBA) to a second physical block corresponding to a second PBA, wherein, prior to the moving, a logical map maps the LBA to a middle block address (MBA) and a middle map maps the MBA to the first PBA, and, in response to the moving, updating the middle map to map the MBA to the second PBA instead of the first PBA.
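The benefit of the middle map is that relocation touches only one level of indirection: the logical map's LBA → MBA binding (and any metadata referencing the MBA) stays fixed while the MBA → PBA entry is rewritten. A minimal sketch, with dict-backed maps and an illustrative MBA allocator:

```python
class MiddleMapFTL:
    """Sketch of two-level block addressing: logical map (LBA -> MBA) is
    written once; only the middle map (MBA -> PBA) changes on relocation."""

    def __init__(self):
        self.logical = {}    # LBA -> MBA
        self.middle = {}     # MBA -> PBA
        self.blocks = {}     # PBA -> block content
        self._next_mba = 0

    def write(self, lba, pba, content):
        if lba not in self.logical:
            self._next_mba += 1            # assumed monotonic MBA allocator
            self.logical[lba] = self._next_mba
        self.middle[self.logical[lba]] = pba
        self.blocks[pba] = content

    def relocate(self, lba, new_pba):
        # Move the content, then update only the middle map entry.
        mba = self.logical[lba]
        old_pba = self.middle[mba]
        self.blocks[new_pba] = self.blocks.pop(old_pba)
        self.middle[mba] = new_pba

    def read(self, lba):
        return self.blocks[self.middle[self.logical[lba]]]
```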
Memory system and method of controlling memory system
According to one embodiment, a memory system includes a non-volatile semiconductor memory, a block management unit, and a transcription unit. The semiconductor memory includes a plurality of blocks to which data can be written in both a first mode and a second mode. The block management unit manages a block that stores no valid data as a free block. When the number of free blocks managed by the block management unit is smaller than or equal to a predetermined threshold value, the transcription unit selects one or more used blocks that store valid data as transcription source blocks and transcribes the valid data stored in the transcription source blocks to free blocks in the second mode.
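The transcription step resembles garbage collection: when free blocks run low, valid data from selected source blocks is consolidated into a free block, and the emptied sources become free. A minimal sketch, assuming blocks are modeled as lists of valid items and a "fewest valid items first" selection policy (the policy and the choice of two sources are illustrative):

```python
def transcribe_if_needed(used, free, threshold):
    """Sketch of transcription. used: dict block_id -> list of valid data
    items (invalid data omitted for brevity); free: list of free block ids.
    At or below the threshold, valid data is transcribed (conceptually in
    the second write mode) into a free block and the sources are freed."""
    if len(free) > threshold:
        return used, free
    # Select transcription source blocks: fewest valid items first (assumed).
    sources = sorted(used, key=lambda b: len(used[b]))[:2]
    dest = free.pop(0)
    moved = [item for b in sources for item in used[b]]
    for b in sources:
        del used[b]
        free.append(b)       # the source now stores no valid data
    used[dest] = moved       # transcribed into the destination block
    return used, free
```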
Information processing apparatus, computer-readable recording medium having stored therein memory control program, and computer-readable recording medium having stored therein information processing program
An information processing apparatus including: a first management data storing region that stores a plurality of first links, one provided for each of multiple calculating cores, each representing an order of migration of the pages of the page group allocated to that calculating core; a second management data storing region that stores a second link, provided for an operating system, that manages a plurality of pages selected in accordance with the order of migration from the page groups of the plurality of first links as a group of candidate pages to be migrated to the second memory; and a migration processor that migrates data of a page selected from the group of the second link from the first memory to the second memory. With this configuration, the occurrence of spinlocks is reduced, so that the load on the processor is reduced.
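The two-level link structure can be sketched as per-core ordered lists plus one shared candidate list. The point of the split is that each core updates only its own first link on an access; the shared second link is touched only when candidates are collected or migrated, so lock contention is rare. The deque representation, the recency ordering, and the one-candidate-per-core collection are illustrative assumptions.

```python
from collections import deque

class MigrationManager:
    """Sketch of per-core first links (each core's page migration order) and
    a shared second link of candidate pages for migration to a second memory."""

    def __init__(self, num_cores):
        self.first_links = [deque() for _ in range(num_cores)]  # per-core order
        self.second_link = deque()    # OS-wide group of candidate pages

    def record_access(self, core, page):
        # Only this core's own link is updated: least recent at the head.
        link = self.first_links[core]
        if page in link:
            link.remove(page)
        link.append(page)

    def collect_candidates(self, per_core=1):
        # Pull the coldest pages of each first link into the second link.
        for link in self.first_links:
            for _ in range(min(per_core, len(link))):
                self.second_link.append(link.popleft())

    def migrate_one(self, first_memory, second_memory):
        # Migrate one candidate page's data from the first to the second memory.
        if not self.second_link:
            return None
        page = self.second_link.popleft()
        second_memory[page] = first_memory.pop(page)
        return page
```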
METADATA MANAGEMENT IN NON-VOLATILE MEMORY DEVICES USING IN-MEMORY JOURNAL
Various implementations described herein relate to systems and methods for managing metadata for an atomic write operation, including determining metadata for data, queuing the metadata in an atomic list, in response to determining that an atomic commit has occurred, moving the metadata from the atomic list to write lookup lists based on logical information of the data, and determining one of the metadata pages of a non-volatile memory for each of the write lookup lists based on the logical information.
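A minimal sketch of that flow: metadata entries accumulate in an in-memory atomic list and become visible only on commit, at which point each entry is routed to a write lookup list keyed by the metadata page its logical address falls into. The page-span constant and the range-based page selection are assumptions for illustration.

```python
class AtomicMetadataJournal:
    """Sketch of atomic-write metadata handling: queue in an atomic list,
    then on commit distribute entries to per-metadata-page lookup lists."""

    PAGE_SPAN = 1024  # assumed logical range covered by one metadata page

    def __init__(self):
        self.atomic_list = []
        self.lookup_lists = {}   # metadata page id -> list of (lba, meta)

    def queue(self, lba, meta):
        # Metadata is held in the atomic list until commit.
        self.atomic_list.append((lba, meta))

    def commit(self):
        # Atomic commit: route each entry to a write lookup list chosen
        # from the logical information (here, which page span the LBA hits).
        for lba, meta in self.atomic_list:
            page = lba // self.PAGE_SPAN
            self.lookup_lists.setdefault(page, []).append((lba, meta))
        self.atomic_list.clear()

    def abort(self):
        # Uncommitted metadata is simply discarded.
        self.atomic_list.clear()
```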
Maintaining an active track data structure to determine active tracks in cache to process
Provided is a computer program product for managing tracks of a storage in a cache. An active track data structure indicates tracks in the cache that have an active status. An active bit in the cache control block for a track is set to indicate active for a track indicated as active in the active track data structure. In response to processing the cache control block, a determination is made, from the cache control block for the track, whether the track is active or inactive, to determine processing for the cache control block.
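The duplication between the active track data structure and the per-track active bit can be sketched as a set mirrored into per-track control blocks: membership changes update both, so processing a control block can decide active versus inactive without scanning the set. The class and field names are illustrative.

```python
class CacheDirectory:
    """Sketch of an active-track data structure whose membership is mirrored
    by an active bit in each track's cache control block."""

    def __init__(self):
        self.active_tracks = set()   # the active track data structure
        self.control_blocks = {}     # track id -> cache control block

    def add_track(self, track):
        self.control_blocks[track] = {"active": False}

    def set_active(self, track):
        self.active_tracks.add(track)
        self.control_blocks[track]["active"] = True

    def set_inactive(self, track):
        self.active_tracks.discard(track)
        self.control_blocks[track]["active"] = False

    def process(self, track):
        # The determination comes from the control block itself, not the set.
        return "process" if self.control_blocks[track]["active"] else "skip"
```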
Hybrid memory module
A hybrid memory module includes a cache of relatively fast and durable dynamic random-access memory (DRAM) in service of a larger amount of relatively slow and wear-sensitive flash memory. An address buffer on the module maintains a static random-access memory (SRAM) cache of addresses for data cached in the DRAM.
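A minimal sketch of the three-level arrangement, assuming dict/set stand-ins for the memories: the SRAM holds only the addresses of lines cached in DRAM, so a hit/miss decision is a fast tag check that never touches flash, and flash is read (and worn) only on a miss.

```python
class HybridModule:
    """Sketch of a hybrid memory module: flash backs the data, DRAM caches
    it, and a small SRAM in the address buffer caches the addresses of the
    lines currently held in DRAM."""

    def __init__(self, flash):
        self.flash = flash       # large, slow, wear-sensitive backing store
        self.dram = {}           # DRAM cache of flash data
        self.sram_tags = set()   # SRAM cache of addresses cached in DRAM

    def read(self, addr):
        if addr in self.sram_tags:   # fast SRAM tag check: DRAM hit
            return self.dram[addr]
        data = self.flash[addr]      # slow path: read flash on a miss
        self.dram[addr] = data
        self.sram_tags.add(addr)     # track the newly cached address
        return data
```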