Patent classifications
G06F12/0215
Selectively utilizing a read page cache mode in a memory subsystem
A method is described, which includes receiving, by a memory subsystem, a memory command targeted at a memory array; determining, by the memory subsystem, whether the memory command is a high priority memory command; and determining whether the memory subsystem is processing any non-high priority memory commands. The memory subsystem enables a read page cache mode for processing the memory command in response to determining that (1) the memory command is a high priority memory command and (2) the memory subsystem is not processing any non-high priority memory commands. Thereafter, the memory subsystem processes the memory command using the read page cache mode.
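A minimal C sketch of the mode-selection logic described above; the structure fields, the in-flight counter, and the function name are illustrative assumptions, not the patented implementation.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical command and subsystem state; field names are illustrative. */
    struct mem_cmd {
        bool high_priority;
    };

    struct mem_subsystem {
        size_t inflight_low_priority;    /* non-high-priority commands in flight */
        bool read_page_cache_enabled;
    };

    /* Enable the read page cache mode only when the incoming command is high
     * priority and no non-high-priority commands are currently being processed. */
    static void process_command(struct mem_subsystem *ss, const struct mem_cmd *cmd)
    {
        ss->read_page_cache_enabled =
            cmd->high_priority && ss->inflight_low_priority == 0;

        /* ... issue the command to the memory array using the selected mode ... */
    }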
Memory-network processor with programmable optimizations
Various embodiments are disclosed of a multiprocessor system with processing elements optimized for high performance and low power dissipation, and of an associated method of programming the processing elements. Each processing element may comprise a fetch unit, a plurality of address generator units, and a plurality of pipelined datapaths. The fetch unit may be configured to receive a multi-part instruction, wherein the multi-part instruction includes a plurality of fields. First and second address generator units may generate, based on different fields of the multi-part instruction, addresses from which to retrieve first and second data for use by an execution unit for the multi-part instruction or a subsequent multi-part instruction. The execution units may perform operations using a single pipeline or multiple pipelines based on third and fourth fields of the multi-part instruction.
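A rough C sketch of how two address generator fields and a pipeline-select field of such a multi-part instruction might be decoded; the field names, widths, and base-address parameters are hypothetical.

    /* Illustrative layout of a multi-part instruction; not the patented encoding. */
    struct multi_part_insn {
        unsigned field_a;   /* consumed by address generator 0 */
        unsigned field_b;   /* consumed by address generator 1 */
        unsigned field_c;   /* selects single- vs multi-pipeline execution */
        unsigned field_d;   /* selects the operation for the datapath(s) */
    };

    struct fetch_result {
        unsigned addr0;     /* address of the first operand */
        unsigned addr1;     /* address of the second operand */
        int use_dual_pipe;  /* nonzero: issue to multiple pipelines */
    };

    /* Each address generator derives an operand address from its own field. */
    static struct fetch_result decode(const struct multi_part_insn *insn,
                                      unsigned base0, unsigned base1)
    {
        struct fetch_result r;
        r.addr0 = base0 + insn->field_a;          /* address generator 0 */
        r.addr1 = base1 + insn->field_b;          /* address generator 1 */
        r.use_dual_pipe = (insn->field_c != 0);   /* pipeline selection */
        return r;
    }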
Adaptive page close prediction
Systems, apparatuses, and methods for performing efficient memory accesses for a computing system are disclosed. In various embodiments, a computing system includes one or more computing resources and a memory controller coupled to a memory device. The memory controller determines that a memory access request targets a given bank of multiple banks. An access history for the given bank is updated based on whether the memory access request hits on an open page within the given bank, and a page hit rate for the given bank is determined. The memory controller sets an idle cycle limit based at least in part on this page hit rate; the idle cycle limit is the maximum amount of time the given bank will be held open while idle before being closed.
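A small C sketch of the page-hit-rate tracking and idle-cycle-limit adjustment described above; the thresholds and limit values are illustrative assumptions.

    #include <stdbool.h>

    /* Per-bank access history; thresholds and limits below are illustrative. */
    struct bank_state {
        unsigned hits;
        unsigned accesses;
        unsigned idle_cycle_limit;   /* cycles an idle open bank is kept open */
    };

    static void record_access(struct bank_state *b, bool page_hit)
    {
        b->accesses++;
        if (page_hit)
            b->hits++;

        /* Scale the idle cycle limit with the page hit rate: banks that hit
         * often stay open longer while idle before being closed. */
        unsigned hit_rate_pct = 100u * b->hits / b->accesses;
        if (hit_rate_pct >= 75)
            b->idle_cycle_limit = 256;
        else if (hit_rate_pct >= 50)
            b->idle_cycle_limit = 64;
        else
            b->idle_cycle_limit = 16;
    }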
Concurrent page cache resource access in a multi-plane memory device
A memory device includes a first memory array, a second memory array, and a page cache circuit coupled to the first memory array and the second memory array. The page cache circuit includes at least one set of concurrent resources and at least one shared resource, wherein the at least one set of concurrent resources are asynchronously and concurrently accessible by the first memory array and the second memory array, and wherein the at least one shared resource is accessible in a time-multiplexed fashion by the first memory array and the second memory array.
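A rough software model, in C, of the concurrent-versus-shared resource split; the per-plane latches, the shared buffer, and the mutex standing in for time-multiplexed hardware arbitration are all illustrative assumptions.

    #include <pthread.h>
    #include <stddef.h>

    /* Per-plane "concurrent" resources need no coordination; the single shared
     * resource is accessed one array at a time, modeled here by a mutex. */
    struct page_cache {
        unsigned char plane_latch[2][4096];   /* concurrent, per-array resources */
        unsigned char shared_buf[4096];       /* shared resource */
        pthread_mutex_t shared_lock;          /* serializes shared accesses */
    };

    static void write_concurrent(struct page_cache *pc, int plane, size_t i,
                                 unsigned char v)
    {
        pc->plane_latch[plane][i] = v;        /* each array proceeds independently */
    }

    static void write_shared(struct page_cache *pc, size_t i, unsigned char v)
    {
        pthread_mutex_lock(&pc->shared_lock); /* time-multiplexed access */
        pc->shared_buf[i] = v;
        pthread_mutex_unlock(&pc->shared_lock);
    }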
Apparatuses and methods for transferring data using a cache
The present disclosure includes apparatuses and methods related to shifting data. An example apparatus comprises a cache coupled to an array of memory cells and a controller. The controller is configured to perform a first operation beginning at a first address to transfer data from the array of memory cells to the cache, and perform a second operation concurrently with the first operation, the second operation beginning at a second address.
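A minimal C sketch of the overlapped operations described above, assuming hypothetical controller fields; it only shows that a second operation beginning at a second address is issued while the array-to-cache transfer begun at the first address is still in flight.

    /* Hypothetical controller state for the two overlapping operations. */
    struct xfer_controller {
        int array_to_cache_busy;   /* first operation still in progress */
        unsigned first_addr;
        unsigned second_addr;
    };

    static void issue_operations(struct xfer_controller *c,
                                 unsigned first_addr, unsigned second_addr)
    {
        /* First operation: begin moving data from the memory array to the cache. */
        c->first_addr = first_addr;
        c->array_to_cache_busy = 1;

        /* Second operation: begins at a different address without waiting for
         * the first operation to complete. */
        c->second_addr = second_addr;
    }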
Memory access techniques in memory devices with multiple partitions
Methods, systems, and devices for operating a memory array are described. A memory controller may be configured to provide enhanced bandwidth on a command/address (C/A) bus, which may have a relatively low pin count, through use of a next partition command that may repeat an array command from a current partition at a different partition indicated by the next partition command. Such a next partition command may use fewer clock cycles than a command that includes a complete instruction and memory location information.
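A small C sketch of how a next partition command might be handled, assuming an illustrative command structure: the previous full array command is repeated with only the partition field replaced, so far fewer bits cross the C/A bus.

    /* Illustrative encoding of a full array command. */
    struct array_cmd {
        unsigned opcode;
        unsigned row;
        unsigned partition;
    };

    /* A next partition command repeats the last full array command but targets
     * the partition it indicates, using fewer clock cycles on the C/A bus. */
    static struct array_cmd apply_next_partition(struct array_cmd last_cmd,
                                                 unsigned next_partition)
    {
        struct array_cmd c = last_cmd;   /* reuse opcode and row address */
        c.partition = next_partition;    /* redirect to the indicated partition */
        return c;
    }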
Systems and methods for revising permanent ROM-based programming
An application program stored in a ROM includes a function lookup data structure in which each function called by the application program has an identifier and a memory address at which the function is located and can be executed. Upon startup, the function lookup data structure is copied to a RAM as a revised lookup data structure and is compared to a revision lookup data structure also written to that RAM or elsewhere. If the revision lookup data structure contains replacement functions having the same function identifiers but new memory addresses, these new memory addresses are written over the existing addresses in the revised lookup data structure for those replacement functions. The application program refers to the revised lookup data structure to find and execute the functions; thus the original application program on the ROM can continue to be used with revised functions.
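A brief C sketch of the revision step described above, with hypothetical structure and function names: entries in the RAM copy whose identifiers also appear in the revision table have their addresses overwritten.

    #include <stddef.h>

    /* Hypothetical lookup entry: a function identifier and the address to call. */
    struct func_entry {
        unsigned id;
        void (*addr)(void);
    };

    /* Overwrite addresses in the RAM copy for every identifier that also
     * appears in the revision table, so calls reach the replacement functions. */
    static void apply_revisions(struct func_entry *ram_table, size_t n,
                                const struct func_entry *rev_table, size_t m)
    {
        for (size_t i = 0; i < m; i++)
            for (size_t j = 0; j < n; j++)
                if (ram_table[j].id == rev_table[i].id)
                    ram_table[j].addr = rev_table[i].addr;
    }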
Servicing CPU demand requests with inflight prefetches
Disclosed embodiments provide a technique in which a memory controller determines whether a fetch address is a miss in an L1 cache and, when a miss occurs, allocates a way of the L1 cache, determines whether the allocated way matches a scoreboard entry of pending service requests, and, when such a match is found, determines whether a request address of the matching scoreboard entry matches the fetch address. When the matching scoreboard entry also has a request address matching the fetch address, the scoreboard entry is converted to a demand request.
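A compact C sketch of the scoreboard check on an L1 miss, using illustrative field names: a pending entry that matches both the allocated way and the fetch address is promoted to a demand request.

    #include <stddef.h>

    /* Hypothetical scoreboard entry for a pending (e.g. prefetch) service request. */
    struct sb_entry {
        int valid;
        unsigned way;               /* L1 way the fill will land in */
        unsigned long req_addr;
        int is_demand;
    };

    /* On an L1 miss with an allocated way: if a pending entry matches both the
     * way and the fetch address, promote it to a demand request instead of
     * issuing a new fill. Returns 1 when an inflight request was promoted. */
    static int service_miss(struct sb_entry *sb, size_t n,
                            unsigned way, unsigned long fetch_addr)
    {
        for (size_t i = 0; i < n; i++) {
            if (sb[i].valid && sb[i].way == way && sb[i].req_addr == fetch_addr) {
                sb[i].is_demand = 1;
                return 1;
            }
        }
        return 0;   /* no match: a new fill request would be allocated */
    }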
Methods and apparatus for allocation in a victim cache system
Methods, apparatus, systems and articles of manufacture are disclosed for allocation in a victim cache system. An example apparatus includes a first cache storage, a second cache storage, a cache controller coupled to the first cache storage and the second cache storage and operable to receive a memory operation that specifies an address, determine, based on the address, that the memory operation evicts a first set of data from the first cache storage, determine that the first set of data is unmodified relative to an extended memory, and cause the first set of data to be stored in the second cache storage.
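A short C sketch of the clean-victim allocation decision, with hypothetical structures: an evicted line is placed in the second (victim) cache storage only if it is unmodified relative to the extended memory.

    /* Hypothetical cache line with a dirty bit tracking modification. */
    struct cache_line {
        int valid;
        int dirty;                  /* modified relative to the extended memory */
        unsigned long tag;
        unsigned char data[64];
    };

    /* Allocate an evicted line into the victim cache only if it is clean;
     * dirty data would instead be written back to the extended memory. */
    static void on_eviction(struct cache_line *victim_slot,
                            const struct cache_line *evicted)
    {
        if (evicted->valid && !evicted->dirty)
            *victim_slot = *evicted;
    }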
Memory system and operating method thereof
Systems and methods that relate to memory devices are disclosed. In some implementations, a memory system includes a first data storage device and a second data storage device. Each of the first and second data storage devices includes a plurality of memory blocks, each memory block including a plurality of memory cells operable to store one or more data bits, and page buffers that cache data to be written to or read from the memory blocks on a page basis. A controller includes a cache memory configured to temporarily store first data, and is configured to move the first data from a portion of the cache memory to one or more of the page buffers of the first data storage device and to allocate that portion of the cache memory as a temporary buffer for storing data.
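A minimal C sketch of the cache-reuse flow described above; the region size, structure fields, and helper name are assumptions: first data staged in a portion of the controller cache is moved to a page buffer of the first storage device, and that portion is then marked as a temporary buffer.

    #include <string.h>

    #define REGION_SIZE 4096   /* illustrative size of the staged cache portion */

    struct controller_cache {
        unsigned char region[REGION_SIZE];   /* portion holding the first data */
        int region_is_temp_buffer;           /* set once the portion is reused */
    };

    /* Move the first data from the cache portion into a page buffer of the
     * first storage device, then reallocate that portion as a temporary buffer. */
    static void stage_and_release(struct controller_cache *cc,
                                  unsigned char *page_buffer)
    {
        memcpy(page_buffer, cc->region, REGION_SIZE);
        cc->region_is_temp_buffer = 1;
    }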