Patent classifications
G06F2212/1021
Apparatus and method for writing data in a memory
A device for writing data to a memory, the device including: a first write buffer having a first data width that matches the width of write data included in a write request, the first write buffer being configured to store the write data as first data; a second write buffer having a second data width that matches the data width of the memory and is greater than the first data width, the second write buffer being configured to store second data; and a controller configured to, based on a write address included in the write request and an address of the second data stored in the second write buffer, write the first data stored in the first write buffer to the second write buffer and write the second data stored in the second write buffer to the memory.
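The mechanism reads roughly as follows in code; this is a minimal sketch assuming a 4-byte request width and a 32-byte memory line (the abstract fixes neither), with all names (WriteController, flush, etc.) invented for illustration:

```python
# Sketch of the two-stage write path described above. The widths and the
# flush-on-line-change rule are illustrative assumptions, not the patent's design.

FIRST_WIDTH = 4    # bytes per write request (first write buffer)
SECOND_WIDTH = 32  # bytes per memory line (second write buffer)

class WriteController:
    def __init__(self, memory: bytearray):
        self.memory = memory
        self.line_addr = None                 # base address of the second data
        self.line = bytearray(SECOND_WIDTH)   # second write buffer contents

    def write(self, addr: int, data: bytes):
        assert len(data) == FIRST_WIDTH
        base = addr - (addr % SECOND_WIDTH)   # wide line containing addr
        # If the write falls outside the line held in the second buffer,
        # drain the second buffer to memory first.
        if self.line_addr is not None and base != self.line_addr:
            self.flush()
        if self.line_addr is None:
            self.line_addr = base
            self.line[:] = self.memory[base:base + SECOND_WIDTH]
        off = addr - base
        self.line[off:off + FIRST_WIDTH] = data   # merge first data into second buffer

    def flush(self):
        if self.line_addr is not None:
            self.memory[self.line_addr:self.line_addr + SECOND_WIDTH] = self.line
            self.line_addr = None

mem = bytearray(128)
ctl = WriteController(mem)
ctl.write(0, b"\x01\x02\x03\x04")   # coalesced in the second buffer
ctl.write(4, b"\x05\x06\x07\x08")   # same wide line: merged, no memory write yet
ctl.write(64, b"\xff\xff\xff\xff")  # new line: previous line drains to memory
ctl.flush()
```

Narrow writes are thus gathered into full memory-width lines before touching the memory, which is the point of the two buffers.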
Tile-based graphics
A tile-based graphics system has a rendering space sub-divided into a plurality of tiles which are to be processed. Graphics data items, such as parameters or texels, are fetched into a cache for use in processing one of the tiles. Indicators are determined for the graphics data items, whereby the indicator for a graphics data item indicates the number of tiles with which that graphics data item is associated. The graphics data items are evicted from the cache in accordance with the indicators of the graphics data items. For example, the indicator for a graphics data item may be a count of the number of tiles with which that graphics data item is associated, whereby the graphics data item(s) with the lowest count(s) is (are) evicted from the cache.
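The eviction rule can be sketched as below; the per-item counts and the decrement-on-tile-completion step are assumptions used only to make the example concrete:

```python
# Sketch of count-based eviction: each cached graphics data item carries the
# number of tiles still associated with it, and the lowest count is evicted first.

class TileAwareCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.counts = {}   # key -> number of tiles still associated with the item
        self.data = {}

    def insert(self, key, value, tile_count: int):
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the item associated with the fewest remaining tiles.
            victim = min(self.counts, key=self.counts.get)
            del self.counts[victim], self.data[victim]
        self.data[key] = value
        self.counts[key] = tile_count

    def tile_done(self, keys):
        # A tile finished: its items are now needed by one fewer tile.
        for k in keys:
            if k in self.counts:
                self.counts[k] -= 1

cache = TileAwareCache(capacity=2)
cache.insert("texel_a", b"...", tile_count=5)  # shared by many tiles: keep longer
cache.insert("texel_b", b"...", tile_count=1)  # used by one tile: evict first
cache.insert("texel_c", b"...", tile_count=3)  # evicts texel_b (lowest count)
assert "texel_b" not in cache.data
```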
Victim cache that supports draining write-miss entries
A caching system including a first sub-cache and a second sub-cache in parallel with the first sub-cache, wherein the second sub-cache includes a set of cache lines, line type bits configured to store an indication that a corresponding cache line of the set of cache lines is configured to store write-miss data, and an eviction controller configured to flush stored write-miss data based on the line type bits.
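A rough sketch of the second sub-cache, assuming one line-type bit per line and a drain operation triggered externally (field names and the trigger are illustrative, not from the claim):

```python
# Sketch of a second sub-cache whose lines are tagged as either victim lines
# or write-miss lines, with an eviction controller that flushes only the latter.

class SubCacheLine:
    def __init__(self, addr, data, is_write_miss: bool):
        self.addr = addr
        self.data = data
        self.is_write_miss = is_write_miss   # the "line type bit"

class VictimSubCache:
    def __init__(self):
        self.lines = []

    def allocate_victim(self, addr, data):
        self.lines.append(SubCacheLine(addr, data, is_write_miss=False))

    def allocate_write_miss(self, addr, data):
        self.lines.append(SubCacheLine(addr, data, is_write_miss=True))

    def drain_write_misses(self, memory: dict):
        # Eviction controller: flush only the lines whose type bit marks them
        # as write-miss data, leaving victim lines in place.
        kept = []
        for line in self.lines:
            if line.is_write_miss:
                memory[line.addr] = line.data
            else:
                kept.append(line)
        self.lines = kept

mem = {}
sc = VictimSubCache()
sc.allocate_victim(0x100, b"evicted-from-L1")
sc.allocate_write_miss(0x200, b"write-miss-data")
sc.drain_write_misses(mem)
assert mem == {0x200: b"write-miss-data"} and len(sc.lines) == 1
```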
Machine learning to improve caching efficiency in a storage system
A system and method improve caching efficiency in a data storage system by performing machine learning processes on metadata relating to extents of data blocks, rather than on the individual blocks themselves. Thus, once the storage devices are divided into extents, various metadata regarding accesses to the blocks within each extent are aggregated, and per-extent features are extracted. These features are used to train a regression model that is subsequently used to infer the most likely “hotness” value for each extent at a future time. These predicted values, which may be further classified as, e.g., “hot”, “warm”, and “cold” using thresholds, are used to implement the cache replacement policy. Embodiments scale to large and multi-layered caches, and may avoid common caching problems such as thrashing by adjusting the extent size. Policy goal functions may be optimized by dynamically adjusting the classification thresholds.
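The pipeline can be sketched end to end; the feature set, the choice of least-squares regression, and the hot/warm/cold thresholds below are all illustrative assumptions, since the abstract does not fix them:

```python
# Sketch of per-extent hotness prediction: aggregate block metadata into
# extent features, fit a regression, classify predictions with thresholds.
import numpy as np

def extent_features(block_accesses, extent_size, num_blocks):
    # Aggregate block-level access counts into per-extent features:
    # total accesses and fraction of blocks touched in the window.
    n_extents = num_blocks // extent_size
    feats = np.zeros((n_extents, 2))
    for block, count in block_accesses.items():
        e = block // extent_size
        feats[e, 0] += count
        feats[e, 1] += 1 / extent_size
    return feats

def train(feats, future_hotness):
    # Linear regression via least squares: hotness ~ w . [features, 1]
    X = np.hstack([feats, np.ones((len(feats), 1))])
    w, *_ = np.linalg.lstsq(X, future_hotness, rcond=None)
    return w

def classify(feats, w, hot=100.0, warm=10.0):
    X = np.hstack([feats, np.ones((len(feats), 1))])
    pred = X @ w                      # inferred future "hotness" per extent
    return ["hot" if p >= hot else "warm" if p >= warm else "cold" for p in pred]

# Toy training data: accesses in window t predict accesses in window t+1.
past = extent_features({0: 300, 1: 250, 512: 5}, extent_size=256, num_blocks=1024)
future = np.array([520.0, 6.0, 0.0, 0.0])
w = train(past, future)
print(classify(past, w))   # extents labelled for the cache replacement policy
```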
COMPUTER-READABLE RECORDING MEDIUM STORING ADJUSTMENT PROGRAM AND ADJUSTMENT METHOD
A non-transitory computer-readable recording medium stores an adjustment program for causing a computer to perform a process including: acquiring a computation performance characteristic that indicates, for each adjustable dimension, a computation performance value obtained through computation that uses a cache memory in a processor; extracting, by using the computation performance characteristic, an adjustment condition for adjusting any adjustable dimension at which computation performance decreases because of cache misses caused by cache-line conflicts in the cache memory; and inserting adjustment processing based on the adjustment condition into a specific program that is executed by the processor and uses the adjustable dimension.
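The kind of adjustment this targets is array-dimension padding. A rough sketch follows, assuming a conflict-prone power-of-two row length and a pad-by-one-cache-line rule; the measured speedup depends on the actual cache geometry:

```python
# Sketch of the dimension adjustment described above: measure performance for
# a candidate dimension, then pad it so successive rows do not map to the same
# cache sets. The alias condition and pad amount are assumptions.
import numpy as np, time

def measure(dim: int, iters: int = 50) -> float:
    # "Computation performance characteristic": time a column walk for one
    # candidate adjustable dimension (the row length of the array).
    a = np.zeros((1024, dim))
    t0 = time.perf_counter()
    for _ in range(iters):
        a[:, 0] += 1.0                    # row stride of dim*8 bytes is
    return time.perf_counter() - t0       # conflict-prone at powers of two

def adjust(dim: int, line_floats: int = 8) -> int:
    # "Adjustment condition": if the dimension would make row strides alias
    # in the cache, pad it by one cache line's worth of elements.
    return dim + line_floats if dim % 1024 == 0 else dim

for d in (1024, adjust(1024)):
    print(d, f"{measure(d):.4f}s")   # the padded dimension typically runs faster
```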
Apparatus, Device, Method, and Computer Program for Managing Memory of a Computer System
Examples relate to an apparatus, a device, a method, and computer program for managing memory of a computer system, and to a computer system comprising such an apparatus or device. The apparatus is configured to obtain first information on accesses to at least one of a first tier of memory and a second tier of memory within a memory hierarchy of the computer system from a page table, the first and second tiers of memory being below the processor cache tiers of the memory hierarchy, the first tier of memory having a higher memory performance than the second tier of memory. The apparatus is configured to obtain second information on accesses to at least one of the first tier of memory and the second tier of memory from logged processor events related to the accesses to the first tier of memory and the second tier of memory. The apparatus is configured to select one or more memory pages to be moved between the first tier of memory and the second tier of memory based on the first and second information on the accesses to at least one of the first tier of memory and the second tier of memory.
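A minimal sketch of the selection step, combining the two information sources the abstract names (page-table accessed bits and logged processor events such as PMU samples); the scoring weights and thresholds are assumptions:

```python
# Sketch of page selection for a two-tier memory: blend coarse page-table
# accessed bits with finer-grained processor-event counts per page.

def select_pages_to_move(pte_accessed: dict, pmu_samples: dict,
                         tier_of: dict, promote_threshold: float = 4.0):
    """Return (promote, demote) page lists for a two-tier memory."""
    promote, demote = [], []
    for page, tier in tier_of.items():
        # First information: accessed bit seen set during a page-table scan.
        # Second information: logged event samples attributed to the page.
        score = 2.0 * pte_accessed.get(page, 0) + pmu_samples.get(page, 0)
        if tier == "slow" and score >= promote_threshold:
            promote.append(page)   # hot page in the slow tier: move up
        elif tier == "fast" and score == 0:
            demote.append(page)    # cold page in the fast tier: move down
    return promote, demote

tiers = {0x1000: "fast", 0x2000: "slow", 0x3000: "slow"}
accessed = {0x2000: 1}             # accessed bit observed for page 0x2000
samples = {0x2000: 7, 0x3000: 1}   # event samples per page
print(select_pages_to_move(accessed, samples, tiers))
# promotes the hot slow page 0x2000 and demotes the idle fast page 0x1000
```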
CACHE SUPPORT FOR INDIRECT LOADS AND INDIRECT STORES IN GRAPH APPLICATIONS
Techniques for operating on an indirect memory access instruction, where the instruction accesses a memory location via at least one indirect address. A pipeline processes the instruction, and a memory operation engine generates a first access to the at least one indirect address and a second access to a target address determined by the at least one indirect address. A cache memory used with the pipeline and the memory operation engine caches pointers. In response to a cache hit when executing the indirect memory access instruction, operations dereference a pointer to obtain the at least one indirect address, do not set a cache bit, and return data for the instruction without storing the data in the cache memory; in response to a cache miss, operations set the cache bit, obtain and store a cache line for the missed pointer, and return data without storing the data in the cache memory.
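The hit/miss behavior can be sketched for a single-level indirect load (data = memory[memory[addr]]); the class and field names below are assumptions:

```python
# Sketch of the pointer-only cache described above: pointer lines are cached
# on a miss, but the dereferenced target data always bypasses the cache,
# since graph workloads rarely reuse it.

class PointerCache:
    def __init__(self):
        self.lines = {}      # addr -> (cache_bit, pointer value)

    def indirect_load(self, memory: dict, addr: int):
        if addr in self.lines:
            # Cache hit: dereference the cached pointer to get the indirect
            # address; the cache bit is left alone and no data is cached.
            _, ptr = self.lines[addr]
        else:
            # Cache miss: set the cache bit and store the pointer's line.
            ptr = memory[addr]
            self.lines[addr] = (1, ptr)
        return memory[ptr]   # target data is returned but never cached

mem = {0x10: 0x40, 0x40: "neighbor-list"}
pc = PointerCache()
print(pc.indirect_load(mem, 0x10))  # miss: pointer at 0x10 is cached
print(pc.indirect_load(mem, 0x10))  # hit: pointer reused, data still uncached
```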
Lookahead priority collection to support priority elevation
A queuing requester for access to a memory system is provided. Transaction requests are received from two or more requesters for access to the memory system, each transaction request including an associated priority value. A request queue of the received transaction requests is formed in the queuing requester. A highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when the highest priority value is higher than the priority value of the oldest transaction request in the request queue; otherwise, the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue is then provided to the memory system with the selected priority value. An arbitration contest with other requesters for access to the memory system is performed using the selected priority value.
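The elevation rule is small enough to sketch directly; the class and method names are assumptions:

```python
# Sketch of lookahead priority elevation: the oldest queued request issues
# first, but at the highest priority found anywhere in the queue, so a younger
# urgent request "elevates" the requests ahead of it.
from collections import deque

class ElevatingQueue:
    def __init__(self):
        self.queue = deque()        # (priority, payload), oldest at the left

    def enqueue(self, priority: int, payload):
        self.queue.append((priority, payload))

    def issue(self):
        # Look ahead across all pending requests for the highest priority.
        highest = max(p for p, _ in self.queue)
        oldest_priority, payload = self.queue.popleft()
        # Elevate the oldest request if something younger outranks it; the
        # selected value is then used in arbitration for the memory system.
        return max(highest, oldest_priority), payload

q = ElevatingQueue()
q.enqueue(1, "read A")      # old, low priority
q.enqueue(7, "read B")      # young, urgent
prio, req = q.issue()
print(prio, req)            # 7 read A: the oldest request issues elevated
```

Issuing in age order at an elevated priority keeps ordering intact while preventing a low-priority head-of-line request from stalling an urgent one behind it.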
MULTI-STAGE CACHE TAG WITH FIRST STAGE TAG SIZE REDUCTION
An embodiment of an integrated circuit comprises circuitry to generate a cache tag for data to be stored in a cache memory, store a first portion of the cache tag in a primary tag memory, and store a second portion of the cache tag in a secondary tag memory, wherein a size of the first portion is smaller than a size of the second portion. Other embodiments are disclosed and claimed.
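A sketch of the two-stage lookup this enables; the 8-bit/24-bit split of a 32-bit tag is an illustrative assumption:

```python
# Sketch of a split cache tag: a small first-stage partial tag filters most
# misses cheaply, and the larger second-stage tag confirms true hits.

FIRST_BITS = 8                          # primary tag memory: small, fast
SECOND_BITS = 24                        # secondary tag memory: the remainder

class TwoStageTags:
    def __init__(self, num_lines: int):
        self.primary = [None] * num_lines     # first portion of each tag
        self.secondary = [None] * num_lines   # second portion of each tag

    @staticmethod
    def split(tag: int):
        return tag & ((1 << FIRST_BITS) - 1), tag >> FIRST_BITS

    def store(self, line: int, tag: int):
        self.primary[line], self.secondary[line] = self.split(tag)

    def lookup(self, line: int, tag: int) -> bool:
        first, second = self.split(tag)
        if self.primary[line] != first:
            return False        # cheap first-stage reject: no second read needed
        return self.secondary[line] == second

tags = TwoStageTags(num_lines=4)
tags.store(0, 0xABCD12)
print(tags.lookup(0, 0xABCD12))  # True: both stages match
print(tags.lookup(0, 0xABCD13))  # False: rejected by the small first stage
```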
FROZEN TIME CACHE FOR MULTI-HOST READ OPERATIONS
Aspects of a storage device including a memory and a controller are provided. The controller may receive a prefetch request to retrieve data for a host having a promoted stream. The controller may access a frozen time table indicating hosts for which data has been prefetched and frozen times associated with the host and other hosts. The controller can determine whether the host has a higher priority over other hosts included in the frozen time table based on corresponding frozen times and data access parameters associated with the host. The controller may determine to prefetch the data for the host in response to the prefetch request when the host has a higher priority than the other hosts. The controller can receive a host read command associated with the promoted stream from the host and provide the prefetched data to the host in response to the host read command.
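The decision the controller makes can be sketched as follows; the ranking rule, the freeze window, and all names are assumptions made for illustration:

```python
# Sketch of the frozen-time decision: prefetched data for a host is held
# ("frozen") with a timestamp, and a new prefetch request wins only if its
# host outranks the hosts already recorded in the table.
import time

class FrozenTimeTable:
    def __init__(self):
        self.entries = {}    # host_id -> (frozen_at, access_score)

    def record_prefetch(self, host_id: int, access_score: float):
        self.entries[host_id] = (time.monotonic(), access_score)

    def has_priority(self, host_id: int, access_score: float) -> bool:
        # The requesting host must outrank every other tabled host, judged by
        # its data access parameters and how long the other data has been frozen.
        now = time.monotonic()
        for other, (frozen_at, other_score) in self.entries.items():
            if other == host_id:
                continue
            age = now - frozen_at
            if access_score <= other_score and age < 1.0:   # 1 s freeze window
                return False    # a recently frozen host still outranks us
        return True

table = FrozenTimeTable()
table.record_prefetch(host_id=1, access_score=5.0)
if table.has_priority(host_id=2, access_score=9.0):    # promoted, busier stream
    table.record_prefetch(host_id=2, access_score=9.0) # prefetch for host 2
```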