Patent classifications
G06F12/0864
STORAGE SYSTEM AND METHOD FOR ACCESSING SAME
A data access system including a processor and a storage system including a main memory and a cache module. The cache module includes an FLC controller and a cache. The cache is configured as an FLC to be accessed prior to accessing the main memory. The processor is coupled to levels of cache separate from the FLC. The processor generates, in response to data required by the processor not being in the levels of cache, a physical address corresponding to a physical location in the storage system. The FLC controller generates a virtual address based on the physical address. The virtual address corresponds to a physical location within the FLC or the main memory. The cache module causes, in response to the virtual address not corresponding to the physical location within the FLC, the data required by the processor to be retrieved from the main memory.
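A minimal sketch of the lookup path this abstract describes, assuming a direct-mapped FLC in front of main memory; the class names, sizes, and organization below are illustrative assumptions, not the patent's design.

```python
# Hypothetical model: the FLC controller maps a processor physical address onto an
# FLC line and retrieves the line from main memory when the FLC does not hold it.

class MainMemory:
    def __init__(self, size):
        self.data = bytearray(size)

    def read(self, address, length):
        return bytes(self.data[address:address + length])

class FLCController:
    def __init__(self, num_lines, line_size, main_memory):
        self.num_lines = num_lines
        self.line_size = line_size
        self.tags = [None] * num_lines               # which memory block each FLC line holds
        self.lines = [bytes(line_size)] * num_lines  # FLC data storage
        self.main_memory = main_memory

    def access(self, physical_address):
        index = (physical_address // self.line_size) % self.num_lines
        tag = physical_address // (self.line_size * self.num_lines)
        if self.tags[index] != tag:                  # miss in the FLC: retrieve from main memory
            base = physical_address - physical_address % self.line_size
            self.lines[index] = self.main_memory.read(base, self.line_size)
            self.tags[index] = tag
        return self.lines[index][physical_address % self.line_size]

flc = FLCController(num_lines=64, line_size=64, main_memory=MainMemory(1 << 20))
print(flc.access(0x1_2340))  # first access misses the FLC and fills the line from main memory
```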
Information processing apparatus and cache control method
According to one embodiment, an information processing apparatus includes a storage device, a volatile memory, and a processor. The storage device includes a controller, a first nonvolatile storage module, and a second nonvolatile storage module whose access speed is higher than an access speed of the first nonvolatile storage module. The processor is configured to execute an operating system and a cache driver that are loaded into the volatile memory. The cache driver uses at least part of an area in the second nonvolatile storage module as a cache for the first nonvolatile storage module.
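A rough sketch of how such a cache driver could serve reads, with part of the faster nonvolatile storage holding recently used blocks of the slower one; the class names, LRU policy, and block interface are assumptions, not the patent's interface.

```python
from collections import OrderedDict

class SimpleDevice:
    """Dict-backed stand-in for a nonvolatile block device."""
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba, bytes(self.block_size))

    def write(self, lba, data):
        self.blocks[lba] = data

class CacheDriver:
    """Uses part of the fast device as an LRU block cache for the slow device."""
    def __init__(self, slow, fast, capacity_blocks):
        self.slow, self.fast = slow, fast
        self.capacity = capacity_blocks
        self.slots = OrderedDict()            # slow-device LBA -> slot on the fast device

    def read_block(self, lba):
        if lba in self.slots:                 # cache hit: serve from the fast device
            self.slots.move_to_end(lba)
            return self.fast.read(self.slots[lba])
        data = self.slow.read(lba)            # cache miss: read the slow device
        if len(self.slots) >= self.capacity:  # evict the least recently used cached block
            _, slot = self.slots.popitem(last=False)
        else:
            slot = len(self.slots)
        self.fast.write(slot, data)           # fill the freed or next slot on the fast device
        self.slots[lba] = slot
        return data
```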
CACHE-BASED TRACE LOGGING USING TAGS IN AN UPPER-LEVEL CACHE
Cache-based trace logging using tags in an upper cache level. A processor influxes a cache line into a first cache level from an upper second cache level. Influxing the cache line into the first cache level includes, based on the first cache level being a recording cache, the processor reading a tag that is (i) stored in the second cache level and (ii) associated with the cache line. Based on reading the tag, the processor determines whether a first value of the cache line within the second cache level has been previously captured by a trace. The processor performs one of (i) when the first value is determined to have been previously logged, following a logged value logic path when influxing the cache line; or (ii) when the first value is determined to have not been previously logged, following a non-logged value logic path when influxing the cache line.
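A compact sketch of the influx decision, assuming a per-line "logged" tag kept alongside the upper (second) cache level; the function and argument names are invented for illustration.

```python
def influx_line(line_addr, l2_data, l2_logged_tags, l1_cache, trace):
    """Bring a line from the upper (L2) level into the recording (L1) level,
    choosing the logged or non-logged logic path based on the L2 tag."""
    value = l2_data[line_addr]
    if l2_logged_tags.get(line_addr):
        # Logged-value path: the value was already captured by the trace,
        # so the influx does not need to record it again.
        pass
    else:
        # Non-logged-value path: capture the value into the trace and mark
        # the line as logged in the upper cache level.
        trace.append((line_addr, value))
        l2_logged_tags[line_addr] = True
    l1_cache[line_addr] = value

trace, l2_tags, l1 = [], {}, {}
l2 = {0x80: 0xDEAD}
influx_line(0x80, l2, l2_tags, l1, trace)   # first influx logs the value
influx_line(0x80, l2, l2_tags, l1, trace)   # second influx takes the logged-value path
print(trace)                                # [(128, 57005)]
```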
Power optimized prefetching in set-associative translation lookaside buffer structure
A computer system includes a processor and a prefetch engine. The processor is configured to generate a demand access stream. The prefetch engine is configured to initiate a first prefetch request based on the demand access stream and perform a first prefetch that includes performing a translation lookaside buffer (TLB) lookup on a TLB structure in response to the first prefetch request. The processor determines a TLB entry in response to performing the TLB lookup and performs at least one second prefetch based on the TLB entry without performing a subsequent TLB lookup on the TLB structure.
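A small sketch of the idea: a single TLB lookup supplies the translation for a run of prefetches that stay within the same page; the page size, stride, and dict-based TLB below are assumptions for illustration.

```python
PAGE_SIZE = 4096

def prefetch_run(start_vaddr, stride, count, tlb, issue_prefetch):
    """Do the TLB lookup once for the first prefetch, then reuse that entry
    for later prefetches until the stream crosses a page boundary."""
    vpn = start_vaddr // PAGE_SIZE
    ppn = tlb[vpn]                              # single TLB lookup for the whole run
    for i in range(count):
        vaddr = start_vaddr + i * stride
        if vaddr // PAGE_SIZE != vpn:           # crossed the page: a new lookup would be needed
            break
        issue_prefetch(ppn * PAGE_SIZE + vaddr % PAGE_SIZE)

tlb = {0x12: 0x7F}                              # virtual page 0x12 -> physical page 0x7F
prefetch_run(0x12000, stride=64, count=8, tlb=tlb,
             issue_prefetch=lambda pa: print(hex(pa)))
```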
Hybrid memory module
A hybrid memory module includes cache of relatively fast and durable dynamic, random-access memory (DRAM) in service of a larger amount of relatively slow and wear-sensitive flash memory. An address buffer on the module maintains a static, random-access memory (SRAM) cache of addresses for data cached in DRAM.
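A toy model of the arrangement described: DRAM acting as a data cache in front of flash, with a small SRAM-like table recording which flash addresses are currently cached in DRAM; the direct-mapped layout and names are assumptions, not the module's actual design.

```python
class HybridMemoryModule:
    def __init__(self, dram_lines, line_size):
        self.line_size = line_size
        self.dram = [bytes(line_size)] * dram_lines   # fast, durable DRAM cache for flash data
        self.sram_tags = [None] * dram_lines          # SRAM cache of addresses cached in DRAM
        self.flash = {}                               # flash line address -> bytes (slow, wear-sensitive)

    def read(self, flash_addr):
        index = (flash_addr // self.line_size) % len(self.dram)
        base = flash_addr - flash_addr % self.line_size
        if self.sram_tags[index] != base:             # address not in the SRAM table: fill DRAM from flash
            self.dram[index] = self.flash.get(base, bytes(self.line_size))
            self.sram_tags[index] = base
        return self.dram[index][flash_addr % self.line_size]
```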
Data caching methods of cache systems
A cache system includes a cache memory having a plurality of blocks, a dirty line list storing status information of a predetermined number of dirty lines among dirty lines in the plurality of blocks, and a cache controller controlling a data caching operation of the cache memory and providing statuses and variation of statuses of the dirty lines, according to the data caching operation, to the dirty line list. The cache controller performs a control operation to always store status information of a least-recently-used (LRU) dirty line into a predetermined storage location of the dirty line list.
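A sketch of the dirty-line list behavior, assuming an ordered table whose fixed head slot always holds the status of the least-recently-used dirty line; the container choice and names are not taken from the patent.

```python
from collections import OrderedDict

class DirtyLineList:
    """Tracks status for a fixed number of dirty lines; the LRU dirty line's
    status is always available at the predetermined head position."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()                    # line address -> status, ordered LRU first

    def update(self, line_addr, status):
        if line_addr in self.entries:
            self.entries.move_to_end(line_addr)         # most recently used moves to the tail
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)            # make room by dropping the current LRU entry
        self.entries[line_addr] = status

    def lru_dirty_line(self):
        return next(iter(self.entries.items()), None)   # predetermined storage location: the head
```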
Update of deduplication fingerprint index in a cache memory
In some examples, a system performs data deduplication using a deduplication fingerprint index in a hash data structure comprising a plurality of blocks, wherein a block of the plurality of blocks comprises fingerprints computed based on content of respective data values. The system merges, in a merge operation, updates for the deduplication fingerprint index to the hash data structure stored in a persistent storage. As part of the merge operation, the system mirrors the updates to a cached copy of the hash data structure in a cache memory, and updates, in an indirect block, information regarding locations of blocks in the cached copy of the hash data structure.
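A rough sketch of the merge path, with the hash data structure reduced to per-block fingerprint sets; the persistent storage, cache memory, and indirect block are plain Python containers here, and all names are assumptions rather than the system's actual structures.

```python
def merge_fingerprint_updates(updates, persistent_index, cache_memory, indirect_block):
    """Merge fingerprint updates into the persistent hash structure, mirror the
    merged blocks into the cached copy, and record their cache locations in an
    indirect block."""
    for block_id, new_fingerprints in updates.items():
        block = persistent_index.setdefault(block_id, set())
        block |= new_fingerprints                        # merge into persistent storage
        if block_id in indirect_block:                   # mirror into the existing cached block
            cache_memory[indirect_block[block_id]] = set(block)
        else:                                            # cache a new block and note its location
            cache_memory.append(set(block))
            indirect_block[block_id] = len(cache_memory) - 1

persistent, cache, indirect = {}, [], {}
merge_fingerprint_updates({7: {"f1", "f2"}}, persistent, cache, indirect)
merge_fingerprint_updates({7: {"f3"}}, persistent, cache, indirect)
print(indirect, cache)   # {7: 0} [{'f1', 'f2', 'f3'}] (set ordering may vary)
```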