Apparatuses and methods for transferring data
The present disclosure includes apparatuses and methods related to shifting data. An example apparatus comprises a cache coupled to an array of memory cells and a controller. The controller is configured to perform a first operation beginning at a first address to transfer data from the array of memory cells to the cache, and perform a second operation concurrently with the first operation, the second operation beginning at a second address.
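The abstract leaves both operations generic; the minimal C++ sketch below, with an invented Controller type and plain byte vectors standing in for the memory-cell array and the cache, shows two transfers running concurrently, each beginning at its own address.

```cpp
// Sketch only: Controller, transfer(), and the vectors are invented
// stand-ins for the claimed hardware, not the patented design.
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

struct Controller {
    const std::vector<uint8_t>& array;  // array of memory cells
    std::vector<uint8_t>& cache;        // cache coupled to the array

    // Copy `len` bytes from the array into the cache, starting at `addr`.
    void transfer(std::size_t addr, std::size_t len) {
        for (std::size_t i = 0; i < len; ++i)
            cache[addr + i] = array[addr + i];
    }
};

int main() {
    std::vector<uint8_t> array(1024, 0xAB), cache(1024, 0);
    Controller ctl{array, cache};

    // The first operation begins at a first address (0); the second
    // runs concurrently and begins at a second address (512).
    std::thread first(&Controller::transfer, &ctl, 0, 512);
    std::thread second(&Controller::transfer, &ctl, 512, 512);
    first.join();
    second.join();
}
```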
Cache miss thread balancing
A simultaneous multithread (SMT) processor having a shared dispatch pipeline includes a first circuit that detects a cache miss thread. A second circuit determines a first cache hierarchy level at which the detected cache miss occurred. A third circuit determines a Next To Complete (NTC) group in the thread and a plurality of additional groups (X) in the thread. The additional groups (X) are dynamically configured based on the detected cache miss. A fourth circuit determines whether any groups in the thread are younger than the determined NTC group and the plurality of additional groups (X), and flushes all the determined younger groups from the cache miss thread.
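As a rough C++ sketch of the flush rule, with invented names (Group, extraGroups) and an illustrative policy that derives X from the cache level of the miss: every group younger than the NTC group plus the X additional groups is flushed from the cache-miss thread.

```cpp
// Sketch only: Group, extraGroups(), and the sequence numbers are
// invented; X is derived from the miss level purely for illustration.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Group { int seq; };  // higher seq == younger group

// Illustrative policy: misses deeper in the hierarchy keep more
// groups alive (X grows with the cache level of the miss).
int extraGroups(int missLevel) { return missLevel; }  // X

void flushYounger(std::vector<Group>& thread, int ntcSeq, int missLevel) {
    int keepUpTo = ntcSeq + extraGroups(missLevel);  // NTC + X survive
    thread.erase(std::remove_if(thread.begin(), thread.end(),
                                [=](const Group& g) { return g.seq > keepUpTo; }),
                 thread.end());
}

int main() {
    std::vector<Group> missThread{{10}, {11}, {12}, {13}, {14}};
    flushYounger(missThread, /*ntcSeq=*/10, /*missLevel=*/2);  // keep 10..12
    for (const Group& g : missThread) std::printf("group %d survives\n", g.seq);
}
```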
Performing maintenance operations
There is provided an apparatus that includes an input port to receive, from a requester, any one of: a lookup operation comprising an input address, and a maintenance operation. Maintenance queue circuitry stores a maintenance queue of at least one maintenance operation and address storage stores a translation between the input address and an output address in an output address space. In response to receiving the input address, the output address is provided in dependence on the maintenance queue. In response to storing the maintenance operation, the maintenance queue circuitry causes an acknowledgement to be sent to the requester. By providing a separate maintenance queue for performing the maintenance operation, there is no need for a requester to be blocked while maintenance is performed.
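A hedged C++ sketch of that arrangement, with invented names (TranslationUnit, enqueueMaintenance): queued maintenance is acknowledged immediately, and lookups provide the output address in dependence on the queue, so the requester is never blocked while maintenance is pending.

```cpp
// Sketch only: TranslationUnit and its members are invented; the
// `true` return stands in for the acknowledgement to the requester.
#include <cstdint>
#include <deque>
#include <optional>
#include <unordered_map>

using Addr = std::uint64_t;
struct MaintenanceOp { Addr target; };

class TranslationUnit {
    std::unordered_map<Addr, Addr> table_;  // input -> output address
    std::deque<MaintenanceOp> queue_;       // maintenance queue
public:
    void map(Addr in, Addr out) { table_[in] = out; }

    // Storing the operation acknowledges the requester at once,
    // so the requester is not blocked while maintenance runs.
    bool enqueueMaintenance(MaintenanceOp op) {
        queue_.push_back(op);
        return true;  // acknowledgement
    }

    // The output address is provided in dependence on the queue:
    // a translation shadowed by pending maintenance is withheld.
    std::optional<Addr> lookup(Addr in) const {
        for (const MaintenanceOp& op : queue_)
            if (op.target == in) return std::nullopt;
        auto it = table_.find(in);
        if (it == table_.end()) return std::nullopt;
        return it->second;
    }
};

int main() {
    TranslationUnit tu;
    tu.map(0x1000, 0x8000);
    tu.enqueueMaintenance({0x1000});               // acknowledged immediately
    return tu.lookup(0x1000).has_value() ? 1 : 0;  // withheld: returns 0
}
```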
Multi-cache based digital output generation
Multi-cache-based digital output generation is provided. A system receives data objects that include fields from a remote data source. The system sorts the data objects based on a field to generate a sorted data set, then cleans the sorted data set based on a policy to generate a clean data set. The system receives a request for a type of digital output based on the data objects received from the data source and loads a portion of the clean data set into a first-level cache. The system selects a machine learning model configured for the type of digital output and loads a primary cache with a subset of the fields stored in the first-level cache, selected based on the machine learning model. Based on the first-level cache being complete, the system generates digital output of the requested type from data in the primary cache.
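The flow condenses into a short C++ sketch; every type below is invented, the stand-in cleaning policy simply drops empty fields, and a plain field list takes the place of the machine learning model's selection.

```cpp
// Sketch only: Record, the cleaning policy, and the field list that
// stands in for the machine learning model are all invented.
#include <algorithm>
#include <iterator>
#include <map>
#include <string>
#include <vector>

using Record = std::map<std::string, std::string>;

std::vector<Record> sortByField(std::vector<Record> rs, const std::string& f) {
    std::sort(rs.begin(), rs.end(),
              [&](const Record& a, const Record& b) { return a.at(f) < b.at(f); });
    return rs;
}

// Policy-based cleaning; the stand-in policy drops empty fields.
std::vector<Record> clean(std::vector<Record> rs) {
    for (Record& r : rs)
        for (auto it = r.begin(); it != r.end();)
            it = it->second.empty() ? r.erase(it) : std::next(it);
    return rs;
}

int main() {
    std::vector<Record> objects{{{"id", "2"}, {"name", "b"}},
                                {{"id", "1"}, {"name", ""}}};
    auto firstLevel = clean(sortByField(objects, "id"));  // first-level cache

    // A real model would select these fields for the output type.
    std::vector<std::string> modelFields{"id"};
    std::vector<Record> primary;  // primary cache
    for (const Record& r : firstLevel) {
        Record row;
        for (const std::string& f : modelFields)
            if (auto it = r.find(f); it != r.end()) row[f] = it->second;
        primary.push_back(row);
    }
    // With the first-level cache complete, output would be generated
    // from `primary`; generation itself is beyond this sketch.
}
```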
Method and Apparatus for Dynamically Adapting Sizes of Cache Partitions in a Partitioned Cache
The sizes of cache partitions in a partitioned cache are dynamically adjusted by determining, for each request, how many cache misses will occur in connection with implementing the request against the cache partition. The cache partition associated with the current request is increased in size by the number of cache misses, and one or more other cache partitions are decreased in size, causing cache evictions to occur from those partitions rather than from the current cache partition. The other cache partitions to be decreased in size may be determined by ranking the cache partitions according to frequency of use and selecting the least frequently used cache partition to be reduced in size.
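The resizing rule reduces to a few lines; in this C++ sketch the Partition bookkeeping is invented, the current partition grows by its miss count, and the least frequently used other partition gives up the same number of entries.

```cpp
// Sketch only: Partition and its counters are invented bookkeeping.
#include <algorithm>
#include <cstddef>
#include <vector>

struct Partition {
    std::size_t size;  // entries currently granted to this partition
    std::size_t uses;  // frequency-of-use counter for ranking
};

void rebalance(std::vector<Partition>& parts, std::size_t current,
               std::size_t misses) {
    parts[current].uses++;
    if (misses == 0) return;

    // Rank by frequency of use; pick the least used other partition.
    std::size_t victim = (current == 0) ? 1 : 0;
    for (std::size_t i = 0; i < parts.size(); ++i)
        if (i != current && parts[i].uses < parts[victim].uses) victim = i;

    std::size_t take = std::min(misses, parts[victim].size);
    parts[victim].size -= take;   // evictions land in the victim...
    parts[current].size += take;  // ...not in the current partition
}

int main() {
    std::vector<Partition> parts{{100, 5}, {100, 1}, {100, 9}};
    rebalance(parts, /*current=*/0, /*misses=*/8);
    // parts[1] (least frequently used) shrinks to 92; parts[0] grows to 108.
}
```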
Indicating latency associated with a memory request in a system
Methods, systems, and devices for a latency indication in a memory system or sub-system are described. An interface controller of a memory system may transmit an indication of a time delay (e.g., a wait signal) to a host in response to receiving an access command from the host. The interface controller may transmit such an indication when the latency associated with performing the access command is likely to be greater than the latency anticipated by the host. The interface controller may determine the time delay based on a status of a buffer, a status of a memory device, or both. When transmitting a signal that includes an indication of a time delay, the interface controller may use a pin designated and configured to transmit command or control information to the host. The interface controller may use a quantity, duration, or pattern of pulses to indicate the duration of a time delay.
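As an illustration of the quantity-of-pulses scheme only, the C++ sketch below drives an invented stand-in for the designated pin once per unit of expected delay; the real signaling is a hardware matter the abstract does not pin down.

```cpp
// Sketch only: pulsePin() is a stand-in for driving the designated
// pin; the actual signaling is hardware-specific.
#include <cstdio>

void pulsePin() { std::printf("pulse\n"); }  // one pulse on the pin

// Quantity-of-pulses scheme: one pulse per unit of expected delay.
void indicateDelay(unsigned delayUnits) {
    for (unsigned i = 0; i < delayUnits; ++i) pulsePin();
}

int main() {
    unsigned bufferBacklog = 3, deviceBusy = 1;           // status inputs
    unsigned expectedDelay = bufferBacklog + deviceBusy;  // illustrative sum
    indicateDelay(expectedDelay);  // the host counts pulses to size its wait
}
```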
MANAGING DIRECT MEMORY ACCESS
Managing direct memory access (DMA) by: defining a translate control entity (TCE) cache flag for cache memory addresses, receiving a DMA TCE-related request, checking the TCE cache flag status, and completing the TCE-related request according to the TCE cache flag status.
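A minimal C++ sketch of that sequence, with invented structures (TceEntry, handleTceRequest): each cached TCE carries the cache flag, and the request completes according to the flag's status.

```cpp
// Sketch only: TceEntry and handleTceRequest are invented structures.
#include <cstdint>
#include <unordered_map>

struct TceEntry {
    std::uint64_t translation;
    bool cacheFlag;  // the TCE cache flag defined per cached address
};

enum class Completion { FromCache, Refetch };

Completion handleTceRequest(const std::unordered_map<std::uint64_t, TceEntry>& cache,
                            std::uint64_t addr) {
    auto it = cache.find(addr);
    // Complete according to the flag status: a set flag permits use
    // of the cached translation; otherwise the TCE is re-fetched.
    if (it != cache.end() && it->second.cacheFlag) return Completion::FromCache;
    return Completion::Refetch;
}

int main() {
    std::unordered_map<std::uint64_t, TceEntry> tceCache{{0x10, {0x9000, true}}};
    return handleTceRequest(tceCache, 0x10) == Completion::FromCache ? 0 : 1;
}
```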
PROCESSOR PIPELINE MANAGEMENT DURING CACHE MISSES
Systems and methods of performing processor pipeline management include receiving an instruction for processing, determining that data in a first memory sub-group of a memory group needed to process the instruction is not available in a cache that ensures fixed latency access, and determining that the instruction should be put in a sleep state. The sleep state indicates that the instruction will not be reissued until the instruction is moved to a wakeup state. The methods also include associating the instruction with a ticket identifier (ID) that corresponds with a second memory sub-group of the memory group, and moving the instruction to the wakeup state based on the second memory sub-group of the memory group being moved into the cache.
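The sleep/wakeup bookkeeping sketches easily; in the C++ below the names (TicketId, sleepOn, subGroupArrived) are invented, an instruction sleeps under the ticket ID of the missing sub-group, and the sub-group's arrival in the cache wakes every holder of that ticket.

```cpp
// Sketch only: TicketId, sleepOn(), and subGroupArrived() are
// invented names for the bookkeeping described in the abstract.
#include <cstdio>
#include <unordered_map>
#include <vector>

using TicketId = int;  // one ticket per memory sub-group
struct Instruction { int id; bool asleep = false; };

std::unordered_map<TicketId, std::vector<Instruction*>> sleepers;

void sleepOn(Instruction& insn, TicketId ticket) {
    insn.asleep = true;                 // will not be reissued...
    sleepers[ticket].push_back(&insn);  // ...until the ticket fires
}

void subGroupArrived(TicketId ticket) {  // sub-group moved into the cache
    for (Instruction* insn : sleepers[ticket]) {
        insn->asleep = false;  // wakeup state: eligible for reissue
        std::printf("instruction %d woken\n", insn->id);
    }
    sleepers.erase(ticket);
}

int main() {
    Instruction load{42};
    sleepOn(load, /*ticket=*/7);  // data for sub-group 7 missed the cache
    subGroupArrived(7);           // the fill completes; reissue permitted
}
```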
Method for pipelined read optimization to improve performance of reading data from data cache and storage units
According to some embodiments, a backup storage system receives a request from a client for access to data segments. For each of a first set of groups of the requested data segments that are stored in a solid-state device (SSD) cache, the system issues a first batch job to retrieve that group from the SSD cache via a first set of input/output (IO) threads. For each of a second set of groups of the requested data segments that are not stored in the SSD cache, the system issues a second batch job to retrieve that group from the storage units of the storage system via a second set of IO threads. The system assembles the received segments and returns them to the client together.
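A rough C++ sketch of the split, with invented helpers (readFromSsdCache, readFromStorage): cached and uncached segments are fetched by separate IO threads in parallel, then assembled and returned together.

```cpp
// Sketch only: the helpers and the cache-membership set are invented;
// a mutex guards the shared map the IO threads assemble into.
#include <map>
#include <mutex>
#include <set>
#include <string>
#include <thread>
#include <vector>

std::string readFromSsdCache(int seg) { return "ssd:" + std::to_string(seg); }
std::string readFromStorage(int seg) { return "disk:" + std::to_string(seg); }

int main() {
    std::vector<int> requested{1, 2, 3, 4};
    std::set<int> inSsdCache{1, 3};  // illustrative cache-lookup result

    std::map<int, std::string> assembled;  // keeps segment order
    std::mutex m;
    std::vector<std::thread> io;  // both batches run in parallel
    for (int seg : requested) {
        bool cached = inSsdCache.count(seg) != 0;
        io.emplace_back([&, seg, cached] {
            std::string data = cached ? readFromSsdCache(seg)
                                      : readFromStorage(seg);
            std::lock_guard<std::mutex> lock(m);
            assembled[seg] = data;
        });
    }
    for (std::thread& t : io) t.join();
    // All segments received: return them to the client together.
}
```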