Patent classifications
G06F2212/221
Lookahead priority collection to support priority elevation
A queuing requester for access to a memory system is provided. Transaction requests, each with an associated priority value, are received from two or more requesters for access to the memory system. A request queue of the received transaction requests is formed in the queuing requester. The highest priority value of all pending transaction requests within the request queue is determined. An elevated priority value is selected when that highest priority value is higher than the priority value of the oldest transaction request in the request queue; otherwise the priority value of the oldest transaction request is selected. The oldest transaction request in the request queue is then provided to the memory system with the selected priority value. An arbitration contest with other requesters for access to the memory system is performed using the selected priority value.
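A minimal C sketch of the selection rule described in this abstract; the queue layout and the function name select_arbitration_priority are illustrative assumptions, not the patent's implementation.

```c
#include <stddef.h>

/* Hypothetical transaction request: an identifier plus a priority value
   (higher number = higher priority). */
typedef struct {
    unsigned id;
    unsigned priority;
} txn_t;

/* Simple FIFO view of the request queue: entry 0 is the oldest request. */
typedef struct {
    txn_t  entries[16];
    size_t count;
} queue_t;

/* Select the priority used in the arbitration contest: the oldest request's
   own priority, elevated to the highest priority pending anywhere in the
   queue when that value is higher (lookahead priority collection). */
static unsigned select_arbitration_priority(const queue_t *q)
{
    if (q->count == 0)
        return 0;

    unsigned highest = 0;
    for (size_t i = 0; i < q->count; i++)
        if (q->entries[i].priority > highest)
            highest = q->entries[i].priority;

    unsigned oldest = q->entries[0].priority;
    return (highest > oldest) ? highest : oldest;
}
```

The request issued to the memory system is still the oldest one in the queue; only the priority value it carries into the arbitration contest is elevated.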
GUARANTEED REAL-TIME CACHE CARVEOUT FOR DISPLAYED IMAGE PROCESSING SYSTEMS AND METHODS
An electronic device may include an electronic display to display an image based on processed image data. The electronic device may also include image processing circuitry to generate the processed image data based on input image data and previously determined data stored in memory. The image processing circuitry may also operate according to real-time computing constraints. Cache memory may store the previously determined data in a provisioned section of the cache memory allotted to the image processing circuitry. Additionally, a controller may manage reading and writing of the previously determined data to the provisioned section of the cache memory.
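A rough C sketch of the idea of a provisioned cache section reserved for a real-time client; the carveout_t type and helper names are hypothetical and not taken from the disclosure.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical carveout descriptor: a contiguous slice of cache reserved
   for the real-time image processing client. */
typedef struct {
    uint8_t *base;   /* start of the provisioned section        */
    size_t   size;   /* bytes allotted to the real-time client  */
} carveout_t;

/* Writes of previously determined data land only inside the provisioned
   section; in this sketch a bounds check confines the client to it. */
static int carveout_write(carveout_t *c, size_t offset,
                          const void *data, size_t len)
{
    if (offset + len > c->size)
        return -1;                    /* outside the carveout */
    memcpy(c->base + offset, data, len);
    return 0;
}

static int carveout_read(const carveout_t *c, size_t offset,
                         void *data, size_t len)
{
    if (offset + len > c->size)
        return -1;
    memcpy(data, c->base + offset, len);
    return 0;
}
```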
Cache program operation of three-dimensional memory device with static random-access memory
Embodiments provide a three-dimensional (3D) memory device with a 3D NAND memory array having a plurality of pages, an on-die cache coupled to the memory array on the same chip and configured to cache a plurality of batches of program data between a host and the memory array, the on-die cache having SRAM cells, and a controller coupled to the on-die cache on the same chip. The controller is configured to check a status of an (N−2)th batch of program data, N being an integer equal to or greater than 2, program an (N−1)th batch of program data into respective pages in the 3D NAND memory array, and cache an Nth batch of program data in respective space in the on-die cache as a backup copy of the Nth batch of program data.
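The three-batch pipelining described above could look roughly like the following C sketch; the hook functions (nand_program, sram_cache, and so on) are placeholders for the on-die operations, not the device's actual interface.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical device hooks standing in for on-die operations. */
static bool nand_program_succeeded(int batch) { (void)batch; return true; }
static void nand_program(int batch) { printf("program batch %d to NAND\n", batch); }
static void sram_cache(int batch)   { printf("cache batch %d in SRAM\n", batch); }
static void sram_release(int batch) { printf("release batch %d from SRAM\n", batch); }

/* For each new batch N (N >= 2): check that batch N-2 programmed correctly
   (its SRAM backup can then be released), program batch N-1 into the NAND
   array, and keep batch N in the SRAM cache as a backup copy. */
static void cache_program_step(int n)
{
    if (nand_program_succeeded(n - 2))
        sram_release(n - 2);     /* backup no longer needed    */
    nand_program(n - 1);         /* program the previous batch */
    sram_cache(n);               /* back up the newest batch   */
}
```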
INTEGRATED CIRCUIT AND METHOD FOR EXECUTING CACHE MANAGEMENT OPERATION
An integrated circuit and a method for executing a cache management operation are provided. The integrated circuit includes a master interface, a slave interface, and a link. The link is connected between the master interface and the slave interface, and the link includes an A-channel, a B-channel, a C-channel, a D-channel, and an E-channel. The A-channel is configured to transmit a cache management operation message of the master interface to the slave interface, and the cache management operation message is configured to manage data consistency between different data caches. The D-channel is configured to transmit a cache management operation acknowledgement message of the slave interface to the master interface.
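An illustrative sketch of the request/acknowledge exchange: the message formats and the channel-send stubs below are assumptions for demonstration, not the actual link protocol.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative message formats; the field names are assumptions. */
typedef struct {
    uint64_t address;   /* cache line the operation targets        */
    uint8_t  opcode;    /* e.g. clean / invalidate / flush         */
} cmo_msg_t;            /* master -> slave on the A-channel        */

typedef struct {
    uint64_t address;
    uint8_t  status;    /* completion status of the operation      */
} cmo_ack_t;            /* slave -> master on the D-channel        */

/* Stub channel transports, for illustration only. */
static void a_channel_send(const cmo_msg_t *m)
{
    printf("A-channel: CMO op %u @ %#llx\n",
           (unsigned)m->opcode, (unsigned long long)m->address);
}

static void d_channel_send(const cmo_ack_t *a)
{
    printf("D-channel: ack status %u @ %#llx\n",
           (unsigned)a->status, (unsigned long long)a->address);
}

/* Master side: issue a cache management operation over the A-channel. */
static void master_issue_cmo(uint64_t addr, uint8_t op)
{
    cmo_msg_t m = { .address = addr, .opcode = op };
    a_channel_send(&m);
}

/* Slave side: after applying the operation to its cache, acknowledge over
   the D-channel so the master knows the caches are consistent again. */
static void slave_complete_cmo(const cmo_msg_t *m)
{
    cmo_ack_t a = { .address = m->address, .status = 0 };
    d_channel_send(&a);
}
```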
Coherent memory access
Apparatuses and methods related to providing coherent memory access. An apparatus for providing coherent memory access can include a memory array, a first processing resource, a first cache line and a second cache line coupled to the memory array, a first cache controller, and a second cache controller. The first cache controller coupled to the first processing resource and to the first cache line can be configured to provide coherent access to data stored in the second cache line and corresponding to a memory address. A second cache controller coupled through an interface to a second processing resource external to the apparatus and coupled to the second cache line can be configured to provide coherent access to the data stored in the first cache line and corresponding to the memory address. Coherent access can be provided using a first cache line address register of the first cache controller which stores the memory address and a second cache line address register of the second cache controller which also stores the memory address.
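A simplified C model of the idea, assuming each controller exposes a cache line address register that a peer can compare against; all names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of one cache controller: the address register records
   which memory address its cache line currently holds. */
typedef struct {
    uint64_t line_addr_reg;   /* memory address held in this cache line */
    bool     valid;
    uint8_t  line[64];        /* the cached data itself                 */
} cache_ctrl_t;

/* Coherent read, as sketched here: if the peer controller's address
   register matches the requested address, serve the data from the peer's
   cache line rather than from possibly stale local state or memory. */
static const uint8_t *coherent_read(const cache_ctrl_t *self,
                                    const cache_ctrl_t *peer,
                                    uint64_t addr)
{
    if (peer->valid && peer->line_addr_reg == addr)
        return peer->line;   /* the current copy lives in the peer    */
    if (self->valid && self->line_addr_reg == addr)
        return self->line;
    return 0;                /* not cached: fall back to the array    */
}
```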
SEMICONDUCTOR DEVICE AND METHOD FOR CONTROLLING SEMICONDUCTOR DEVICE
A semiconductor device includes: a first cache that includes a first memory and rewrite flags that indicate whether rewriting has been performed for each piece of data held in the first memory; and a second cache that includes a second memory and a third memory that has a lower writing speed than the second memory, stores data evicted from the first cache in the second memory when a rewrite flag corresponding to the evicted data indicates a rewrite state, and stores data evicted from the first cache in the third memory when a rewrite flag corresponding to the evicted data indicates a non-rewrite state.
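The eviction routing can be sketched as below, with hypothetical store callbacks standing in for the second (faster-write) and third (slower-write) memories.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical second-level cache backed by two memories with different
   write speeds. */
typedef struct {
    void (*store_fast)(uint64_t addr, const uint8_t *data, size_t len);
    void (*store_slow)(uint64_t addr, const uint8_t *data, size_t len);
} l2_cache_t;

/* On eviction from the first cache, the rewrite flag picks the target:
   data marked as rewritten goes to the second (faster-write) memory,
   data never rewritten goes to the third (slower-write) memory. */
static void on_l1_evict(l2_cache_t *l2, uint64_t addr,
                        const uint8_t *data, size_t len, bool rewrite_flag)
{
    if (rewrite_flag)
        l2->store_fast(addr, data, len);
    else
        l2->store_slow(addr, data, len);
}
```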
INTERLEAVED CACHE CONTROLLERS WITH SHARED METADATA AND RELATED DEVICES AND SYSTEMS
Interleaved cache controllers with shared metadata are disclosed and described. A memory system may comprise a plurality of cache controllers and a metadata store interconnected by a metadata store fabric. The metadata store receives information from at least one of the plurality of cache controllers, a portion of which is stored as shared distributed metadata. The metadata store provides the plurality of cache controllers with shared access to the hosted shared distributed metadata.
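A toy C sketch of a shared metadata store that any of the interleaved controllers can publish to and read from; the entry layout and slot addressing are invented for illustration, and the fabric itself is not modelled.

```c
#include <stdbool.h>
#include <stdint.h>

#define META_SLOTS 256

/* Hypothetical shared metadata entry: which controller produced it and the
   tag it describes. */
typedef struct {
    bool     valid;
    uint8_t  owner_ctrl;   /* interleaved controller that wrote the entry */
    uint64_t tag;          /* metadata shared between the controllers     */
} meta_entry_t;

/* The metadata store holds entries written by any controller and readable
   by all of them. */
static meta_entry_t metadata_store[META_SLOTS];

static void metadata_publish(uint8_t ctrl, unsigned slot, uint64_t tag)
{
    metadata_store[slot % META_SLOTS] =
        (meta_entry_t){ .valid = true, .owner_ctrl = ctrl, .tag = tag };
}

/* Any interleaved cache controller may look up metadata that a different
   controller published. */
static const meta_entry_t *metadata_lookup(unsigned slot)
{
    const meta_entry_t *e = &metadata_store[slot % META_SLOTS];
    return e->valid ? e : 0;
}
```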
ADAPTIVE RESIZABLE CACHE/LCM FOR IMPROVED POWER
Systems, apparatuses, and methods of adaptively controlling a cache operating voltage are provided that comprise receiving indications of a plurality of cache usage amounts. Each cache usage amount corresponds to an amount of data to be accessed in a cache by one of a plurality of portions of a data processing application. The plurality of cache usage amounts are determined based on the received indications. A voltage level applied to the cache is adaptively controlled based on one or more of the determined cache usage amounts. Memory access to the cache is controlled to be directed to a non-failing portion of the cache at the applied voltage level.
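One way to picture the voltage selection is the sketch below; the thresholds and millivolt levels are made-up illustrations, not values from the disclosure.

```c
#include <stddef.h>

/* Hypothetical voltage selection: pick the lowest cache operating voltage
   that still covers the largest reported cache usage amount. */
static unsigned select_cache_voltage_mv(const size_t *usage_bytes, size_t n)
{
    size_t peak = 0;
    for (size_t i = 0; i < n; i++)
        if (usage_bytes[i] > peak)
            peak = usage_bytes[i];

    if (peak <= 64 * 1024)   return 600;  /* small footprint: low voltage */
    if (peak <= 256 * 1024)  return 750;  /* medium footprint             */
    return 900;                           /* full cache active            */
}
```

At a reduced voltage some portions of the cache may fail, which is why the abstract also directs accesses to a non-failing portion; that steering is not modelled in this sketch.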
Cache architecture for comparing data
The present disclosure includes apparatuses and methods for a cache architecture. An example apparatus that includes a cache architecture according to the present disclosure can include an array of memory cells configured to store multiple cache entries per page of memory cells; and sense circuitry configured to determine whether cache data corresponding to a request from a cache controller is located at a location in the array corresponding to the request, and return a response to the cache controller indicating whether cache data is located at the location in the array corresponding to the request.
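A behavioural C sketch of what the sense circuitry decides, assuming a page holds a handful of tagged entries; the types and constants are illustrative only.

```c
#include <stdbool.h>
#include <stdint.h>

#define ENTRIES_PER_PAGE 4

/* Hypothetical page of memory cells holding several cache entries; the
   sense circuitry is modelled as a tag comparison per entry. */
typedef struct {
    uint64_t tag[ENTRIES_PER_PAGE];
    bool     valid[ENTRIES_PER_PAGE];
} cache_page_t;

/* Model of the request/response exchange in the abstract: the array-side
   logic reports whether the requested data is present in the page, rather
   than the cache controller reading the page out and comparing it itself. */
static bool page_contains(const cache_page_t *page, uint64_t requested_tag)
{
    for (int i = 0; i < ENTRIES_PER_PAGE; i++)
        if (page->valid[i] && page->tag[i] == requested_tag)
            return true;   /* hit: respond to the cache controller */
    return false;          /* miss                                  */
}
```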
Cache architecture for comparing data on a single page
The present disclosure includes apparatuses and methods for a cache architecture. An example apparatus that includes a cache architecture according to the present disclosure can include an array of memory cells configured to store multiple cache entries per page of memory cells; and sense circuitry configured to determine whether cache data corresponding to a request from a cache controller is located at a location in the array corresponding to the request, and return a response to the cache controller indicating whether cache data is located at the location in the array corresponding to the request.