Patent classifications
G06F2212/604
PRE-FETCH MECHANISM FOR COMPRESSED MEMORY LINES IN A PROCESSOR-BASED SYSTEM
Some aspects of the disclosure relate to a pre-fetch mechanism for a cache line compression system that increases effective RAM capacity and optimizes overflow area reads. For example, a pre-fetch mechanism may allow the memory controller to pipeline reads from an area with fixed-size slots (the main compressed area) and reads from an overflow area. The overflow area is arranged so that the cache line most likely containing the overflow data for a particular line can be calculated by a decompression engine. In this manner, the cache line decompression engine may fetch from the overflow area in advance, before the actual location of the overflow data is known.
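A minimal sketch of the idea, not the patent's implementation: the fixed-slot read and a speculative read of the likely overflow line are issued back to back so the memory controller can pipeline them. The slot size, overflow layout, and helper names below are assumptions for illustration only.

```python
SLOT_SIZE = 64          # bytes per fixed-size compressed slot (assumed)
OVERFLOW_BASE = 0x8000  # start of the overflow area (assumed)
OVERFLOW_LINE = 64      # overflow granule size (assumed)

def likely_overflow_addr(line_index: int) -> int:
    # The overflow area is arranged so the likely overflow line for a given
    # cache line can be computed directly from its index.
    return OVERFLOW_BASE + line_index * OVERFLOW_LINE

def read_line(memory: dict, line_index: int) -> bytes:
    slot_addr = line_index * SLOT_SIZE
    # Issue both reads together so they can be pipelined, instead of waiting
    # for decompression to discover whether (and where) overflow data exists.
    slot = memory.get(slot_addr, b"\x00" * SLOT_SIZE)
    prefetched_overflow = memory.get(likely_overflow_addr(line_index), b"")
    compressed, needs_overflow = slot[:-1], bool(slot[-1])
    return compressed + (prefetched_overflow if needs_overflow else b"")
```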
Cache memory system using a tag comparator to determine update candidates and operating method thereof
A cache memory apparatus includes a tag comparator and an update controller. The tag comparator is configured to compare the upper bits of each piece of tag data included in the set indicated by a received set address with the upper bits of a received tag address, compare the other bits of each piece of tag data with the other bits of the tag address, and determine whether there is a cache hit or a cache miss based on the results of the comparisons. The update controller is configured to, in response to a cache miss, determine as an update candidate a piece of cache data included in the set and corresponding to the tag data, based on the result of the comparison of the upper bits of each piece of tag data with the upper bits of the tag address, and to update the update candidate with new data.
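A rough sketch of the split tag comparison described above. The bit widths, the way a set is stored, and the tie-break for the update candidate are assumptions, not details from the abstract.

```python
UPPER_SHIFT = 12                  # assumed split between "upper" and remaining tag bits
LOWER_MASK = (1 << UPPER_SHIFT) - 1

def lookup(tag_array, set_index, tag_address):
    ways = tag_array[set_index]            # tag data stored in the selected set
    upper = tag_address >> UPPER_SHIFT
    lower = tag_address & LOWER_MASK
    candidate = None
    for way, stored in enumerate(ways):
        upper_match = (stored >> UPPER_SHIFT) == upper
        lower_match = (stored & LOWER_MASK) == lower
        if upper_match and lower_match:
            return ("hit", way)
        if upper_match and candidate is None:
            # On a miss, prefer as the update candidate a way whose upper
            # tag bits already match the requested address.
            candidate = way
    return ("miss", candidate if candidate is not None else 0)
```

For example, in a 4-way set where exactly one stored tag shares the request's upper bits, a miss returns that way as the update candidate.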
Preserving user changes to a shared layered resource
User changes may be preserved across updates to a layer. When a layering client mounts a layer, a corresponding layering write cache is mounted. Changes to layered resources, such as files, registry entries, and registry values, are made only to the layering write cache. A request to create a file in the layer is directed to the layering write cache such that the new file is created in the layering write cache. A request to open a layered resource is directed to the copy in the layering write cache if the layered resource is in the layering write cache. A request to write to a layered resource is directed to the layering write cache if the layered resource is in the layering write cache. If the layered resource is not in the layering write cache, the layered resource is copied to the layering write cache before the write request is redirected.
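The redirection rules above can be illustrated with a simple in-memory model: `layer` holds the shared resources and `write_cache` holds per-user changes. Class, method, and field names here are illustrative only.

```python
class LayeringClient:
    def __init__(self, layer: dict):
        self.layer = layer          # shared, read-only layered resources
        self.write_cache = {}       # mounted alongside the layer

    def create(self, path: str, data: bytes) -> None:
        # New files are created only in the layering write cache.
        self.write_cache[path] = data

    def open(self, path: str) -> bytes:
        # Opens go to the write cache copy when one exists.
        if path in self.write_cache:
            return self.write_cache[path]
        return self.layer[path]

    def write(self, path: str, data: bytes) -> None:
        # Copy-on-write: pull the shared resource into the write cache
        # before redirecting the write to it.
        if path not in self.write_cache:
            self.write_cache[path] = self.layer[path]
        self.write_cache[path] = data
```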
Caching of metadata for deduplicated LUNs
Efficient processing of user data read requests in a deduplicated data storage system places the metadata for the most frequently requested data in data structures and locations in the system hierarchy where the metadata will be most rapidly available. Because the total amount of such metadata makes storing all of it in high-speed memory expensive, the system and method described use both the temporal and the spatial characteristics of user system activity in any epoch to adjust the contents of the metadata cache, responding to the dynamics of a multi-user or multi-application environment in which the storage system is not made aware of the time-changing mix of operations except by observing the individual requests. A history record is used to promote metadata from the slow memory to the fast memory, and a process selection may be adjusted based on address-space activity.
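A sketch of a history-driven promotion policy under simplifying assumptions: "fast" and "slow" stores hold deduplication metadata, and a per-epoch access count decides what to promote. The threshold, capacity handling, and names are assumed, not taken from the abstract.

```python
from collections import Counter

class MetadataCache:
    def __init__(self, fast_capacity: int, promote_threshold: int = 3):
        self.fast = {}                      # metadata held in fast memory
        self.slow = {}                      # metadata held in slow memory
        self.fast_capacity = fast_capacity
        self.promote_threshold = promote_threshold
        self.history = Counter()            # per-epoch access counts

    def get(self, block_addr):
        self.history[block_addr] += 1
        if block_addr in self.fast:
            return self.fast[block_addr]
        meta = self.slow.get(block_addr)
        # Promote metadata that the history record shows is hot this epoch.
        if meta is not None and self.history[block_addr] >= self.promote_threshold:
            if len(self.fast) < self.fast_capacity:
                self.fast[block_addr] = meta
        return meta

    def end_epoch(self):
        # Reset the history so the cache tracks the current mix of operations.
        self.history.clear()
```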
Information processing system and information processing method
An information processing system includes: an information processing apparatus; and a terminal apparatus, wherein the terminal apparatus includes a first processor configured to measure response times for respective volumes in the information processing apparatus, and the information processing apparatus includes a second processor configured to reduce a capacity of an allocated cache memory in accordance with the response times.
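One way to read the interaction, sketched here with an assumed threshold and shrink step (neither comes from the abstract): when the measured per-volume response times are already comfortably low, the allocated cache can be reduced.

```python
def adjust_cache_capacity(response_times_ms, capacity_mb,
                          target_ms=5.0, step_mb=64, floor_mb=128):
    # The terminal measures per-volume response times; the storage side
    # reduces the allocated cache capacity in accordance with them.
    if response_times_ms and max(response_times_ms) < target_ms:
        capacity_mb = max(floor_mb, capacity_mb - step_mb)
    return capacity_mb
```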
MULTIPLE READ AND WRITE PORT MEMORY
A memory device includes content banks configured to store content data and parity banks configured to store parity data for reconstructing the content data. In response to receiving, in a first clock cycle, a first request requesting a first operation to be performed in a first content bank and a second request requesting to write new content data to the first content bank, the memory device performs the first operation in the first content bank, and writes the new content data to a second content bank. The second content bank is selected from a subset of content banks whose corresponding parity banks are different from the parity banks that correspond with the first content bank. The memory device updates, based on the new content data written to the second content bank, parity data in the parity banks that correspond with the second content bank.
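A simplified sketch of the write-redirection idea, assuming each content bank maps to a single parity bank, parity is an XOR over the banks that share it, and ignoring the remapping bookkeeping a real device would need; bank counts and names are illustrative.

```python
NUM_CONTENT_BANKS = 4
PARITY_OF = {0: 0, 1: 0, 2: 1, 3: 1}   # content bank -> parity bank (assumed)

def _xor(values):
    out = 0
    for v in values:
        out ^= v
    return out

def handle_cycle(content, parity, op_bank, write_bank, addr, new_data):
    # First request proceeds in its own bank; modeled here as a read.
    first_result = content[op_bank][addr]

    if write_bank == op_bank:
        # Conflict: redirect the write to a bank whose parity bank differs
        # from the parity bank of the busy bank.
        candidates = [b for b in range(NUM_CONTENT_BANKS)
                      if PARITY_OF[b] != PARITY_OF[op_bank]]
        write_bank = candidates[0]

    content[write_bank][addr] = new_data
    # Update the parity that covers the bank actually written.
    group = [b for b in range(NUM_CONTENT_BANKS)
             if PARITY_OF[b] == PARITY_OF[write_bank]]
    parity[PARITY_OF[write_bank]][addr] = _xor(content[b][addr] for b in group)
    return first_result, write_bank
```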
DYNAMIC ALLOCATION OF CACHE BASED ON INSTANTANEOUS BANDWIDTH CONSUMPTION AT COMPUTING DEVICES
A mechanism is described for facilitating dynamic cache allocation in computing devices. A method of embodiments, as described herein, includes facilitating monitoring one or more bandwidth consumptions of one or more clients accessing a cache associated with a processor; computing one or more bandwidth requirements of the one or more clients based on the one or more bandwidth consumptions; and allocating one or more portions of the cache to the one or more clients in accordance with the one or more bandwidth requirements.
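A sketch of one possible allocation rule consistent with the description: treat each client's share of the monitored bandwidth as its requirement and divide the cache in the same proportion. The proportional rule, units, and client names are assumptions.

```python
def allocate_cache(cache_size_kb, consumed_bw_mbps):
    # consumed_bw_mbps: monitored bandwidth consumption per client.
    total = sum(consumed_bw_mbps.values())
    if total == 0:
        share = cache_size_kb // max(len(consumed_bw_mbps), 1)
        return {client: share for client in consumed_bw_mbps}
    # Allocate cache portions in proportion to the computed requirements.
    return {client: int(cache_size_kb * bw / total)
            for client, bw in consumed_bw_mbps.items()}
```

For example, `allocate_cache(8192, {"gpu": 300, "display": 100})` returns 6144 KB for "gpu" and 2048 KB for "display".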
Cache Associativity Allocation
Cache associativity allocation is described. In accordance with the described techniques, a portion of associativity of a cache is allocated to a category of cache requests. The portion of associativity corresponds to a subset of cachelines of the cache. A request is received to access the cache, and a cacheline of the subset of cachelines is allocated to the request based on a category associated with the request. Data corresponding to the request is loaded into the cacheline of the subset of cachelines.
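A brief sketch of restricting a request category to a subset of ways within a set. The number of ways, the category names, and the replacement choice are illustrative assumptions.

```python
WAYS_PER_SET = 8
WAYS_FOR_CATEGORY = {"prefetch": range(0, 2),   # prefetches limited to two ways
                     "demand":   range(2, 8)}   # demand requests get the rest

def choose_way(set_state, category):
    # Only the subset of cachelines (ways) allocated to this category is
    # eligible; pick an invalid way first, else the first eligible way.
    eligible = WAYS_FOR_CATEGORY[category]
    for way in eligible:
        if set_state[way] is None:
            return way
    return next(iter(eligible))

def fill(set_state, category, data):
    way = choose_way(set_state, category)
    set_state[way] = data        # load the data into the allocated cacheline
    return way
```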
Generation and use of memory access instruction order encodings
Apparatus and methods are disclosed for controlling execution of memory access instructions in a block-based processor architecture using a hardware structure that indicates a relative ordering of memory access instructions in an instruction block. In one example of the disclosed technology, a method of executing an instruction block having a plurality of memory load and/or memory store instructions includes selecting a next memory load or memory store instruction to execute based on dependencies encoded within the block, and on a store vector that stores data indicating which memory load and memory store instructions in the instruction block have executed. The store vector can be masked using a store mask. The store mask can be generated when decoding the instruction block, or copied from an instruction block header. Based on the encoded dependencies and the masked store vector, the next instruction can issue when its dependencies are available.
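A sketch of gating issue with a store vector and store mask, using bit positions as instruction identifiers. Treating the encoded dependency as "all earlier memory operations named in the mask have executed" is a simplifying assumption, not the disclosed encoding.

```python
def can_issue(instr_slot, store_vector, store_mask):
    # store_vector: bit i set when memory instruction i has executed.
    # store_mask: bit i set for positions the block actually uses
    #             (generated at decode or copied from the block header).
    earlier = (1 << instr_slot) - 1          # positions before this one
    required = store_mask & earlier          # earlier memory ops that exist
    executed = store_vector & store_mask     # mask the store vector
    return (executed & required) == required

def issue(instr_slot, store_vector):
    # Record the instruction as executed once it issues.
    return store_vector | (1 << instr_slot)
```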
Selective allocation of CPU cache slices to database objects
A central processing unit (CPU) forming part of a computing device initiates execution of code associated with each of a plurality of objects used by a worker thread. The CPU has an associated cache that is split into a plurality of slices. A cache slice allocation algorithm determines, for each object, whether any of the slices will be exclusive to or shared by the object. Thereafter, for each object, any slices determined to be exclusive to the object are activated such that the object exclusively uses such slices, and any slices determined to be shared by the object are activated such that the object shares or is configured to share such slices.
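A sketch of the allocation outcome under a deliberately trivial rule (large objects get an exclusive slice, everything else shares the remainder); the rule, slice count, and size threshold are assumptions standing in for the patent's allocation algorithm.

```python
NUM_SLICES = 8

def allocate_slices(objects):
    # objects: {name: estimated_size_kb} used by the worker thread.
    allocation, next_slice = {}, 0
    shared = set(range(NUM_SLICES))           # slices available for sharing
    for name, size_kb in sorted(objects.items(), key=lambda kv: -kv[1]):
        if size_kb > 512 and next_slice < NUM_SLICES:
            # Activate one slice exclusively for this object.
            allocation[name] = {"exclusive": {next_slice}, "shared": set()}
            shared.discard(next_slice)
            next_slice += 1
        else:
            # Remaining objects share the non-exclusive slices.
            allocation[name] = {"exclusive": set(), "shared": set(shared)}
    return allocation
```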