G06F12/0859

Techniques for non-deterministic operation of a stacked memory system
11422887 · 2022-08-23

Techniques for non-deterministic operation of a stacked memory system are provided. In an example, a method of operating a memory package can include receiving a plurality of memory access requests for a channel at a logic die, returning first data to a host in response to a first memory access request of the plurality of memory access requests, returning an indication of data not ready to the host in response to a second memory access request of the plurality of memory access requests for second data, returning a first index to the host with the indication of data not ready, returning an indication data is ready with third data in response to a third memory access request of the plurality of memory access requests, and returning the first index with the indication of data ready.
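
The request/response flow in that abstract can be sketched in software. Below is a minimal, deliberately simplified Python model of the logic-die side, assuming a per-channel pending table and a random completion delay to stand in for non-deterministic DRAM timing; the class and field names (LogicDie, Status, Response) are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional
import random


class Status(Enum):
    DATA = auto()          # requested data returned immediately
    NOT_READY = auto()     # data not ready; an index is returned instead
    DATA_READY = auto()    # previously indexed data is now ready


@dataclass
class Response:
    status: Status
    index: Optional[int] = None
    data: Optional[bytes] = None


class LogicDie:
    """Per-channel request handling on the logic die of a stacked memory package."""

    def __init__(self):
        self._next_index = 0
        self._pending = {}   # index -> address still being fetched from the stack
        self._ready = {}     # index -> data that has completed since the last response

    def access(self, address: int) -> Response:
        # If an earlier request has since completed, deliver its data together
        # with the index that was handed out when "not ready" was returned.
        if self._ready:
            index, data = self._ready.popitem()
            return Response(Status.DATA_READY, index=index, data=data)

        # Stand-in for non-deterministic timing: sometimes the data is
        # available right away, sometimes the host must wait for it.
        if random.random() < 0.5:
            return Response(Status.DATA, data=self._read(address))

        index = self._next_index
        self._next_index += 1
        self._pending[index] = address
        return Response(Status.NOT_READY, index=index)

    def tick(self):
        """Background completion: move one pending fetch into the ready set."""
        if self._pending:
            index, address = self._pending.popitem()
            self._ready[index] = self._read(address)

    @staticmethod
    def _read(address: int) -> bytes:
        return address.to_bytes(8, "little")
```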

Forward caching memory systems and methods

Systems, apparatuses, and methods related to memory systems and operation are described. A memory system may be coupled to a processor, which includes a memory controller. The memory controller may determine whether targeting of first data and second data by the processor to perform an operation results in processor-side cache misses. When targeting of the first data and the second data result in processor-side cache misses, the memory controller may determine a single memory access request that requests return of both the first data and the second data and instruct the processor to output the single memory access request to a memory system via one or more data buses coupled between the processor and the memory system to enable processing circuitry implemented in the processor to perform the operation based at least in part on the first data and the second data when returned from the memory system.
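
A rough Python sketch of the combining step is given below, assuming a dict-like processor-side cache and a memory-side object with a fetch() method that accepts a list of addresses; both interfaces and the class names are assumptions made for illustration, not the patent's own API.

```python
class BackingMemory:
    """Stand-in for the memory system on the other side of the data bus(es)."""

    def __init__(self, contents):
        self._contents = contents

    def fetch(self, addresses):
        # Modeled as a single memory access request returning every address asked for.
        return {a: self._contents[a] for a in addresses}


class ForwardCachingController:
    """Memory controller that folds two processor-side cache misses into one request."""

    def __init__(self, cache, memory):
        self.cache = cache        # processor-side cache: {address: data}
        self.memory = memory      # memory system reachable over the data bus(es)

    def get_operands(self, addr_a, addr_b):
        miss_a = addr_a not in self.cache
        miss_b = addr_b not in self.cache

        if miss_a and miss_b:
            # Both operands miss: one request asks for both pieces of data.
            returned = self.memory.fetch([addr_a, addr_b])
        elif miss_a:
            returned = self.memory.fetch([addr_a])
        elif miss_b:
            returned = self.memory.fetch([addr_b])
        else:
            returned = {}

        self.cache.update(returned)
        return self.cache[addr_a], self.cache[addr_b]
```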

MEMORY CACHE FOR DISAGGREGATED MEMORY

A memory control system comprises a memory cache and a disaggregated memory pool including a plurality of physical memory media configured to provide volatile data storage for any of a plurality of compute nodes communicatively coupled with the disaggregated memory pool. Processing componentry of the memory control system is configured to populate the memory cache with data items stored by the plurality of compute nodes within the disaggregated memory pool according to a cache fill policy. Upon receiving a memory read request for a data item stored in the disaggregated memory pool from a compute node, the memory cache and disaggregated memory pool are searched in parallel for the data item. Upon retrieving the data item from either the memory cache or disaggregated memory pool, the data item is provided to the compute node.
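
The parallel lookup can be illustrated with two threads, one searching the memory cache and one searching the pool; the dict-based stores and the cache-on-read fill policy below are simplifying assumptions.

```python
from concurrent.futures import ThreadPoolExecutor


def read_item(cache: dict, pool: dict, key):
    """Search the memory cache and the disaggregated pool in parallel for `key`."""
    with ThreadPoolExecutor(max_workers=2) as executor:
        cache_future = executor.submit(cache.get, key)
        pool_future = executor.submit(pool.get, key)

        item = cache_future.result()
        if item is not None:
            return item                  # served from the memory cache

        item = pool_future.result()      # otherwise wait for the slower pool lookup
        if item is not None:
            cache[key] = item            # simple fill policy: populate cache on read
        return item
```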

Processor pipeline management during cache misses using next-best ticket identifier for sleep and wakeup

Systems and methods of performing processor pipeline management include receiving an instruction for processing, determining that data in a first memory sub-group of a memory group needed to process the instruction is not available in a cache that ensures fixed latency access, and determining that the instruction should be put in a sleep state. The sleep state indicates that the instruction will not be reissued until the instruction is moved to a wakeup state. The methods also include associating the instruction with a ticket identifier (ID) that corresponds with a second memory sub-group of the memory group, and moving the instruction to the wakeup state based on the second memory sub-group of the memory group being moved into the cache.
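
The sleep/wakeup bookkeeping can be sketched as below. Assumptions for illustration: cache residency is tracked per (group, sub-group) pair, and the "next-best" ticket simply corresponds to the following sub-group of the same memory group.

```python
from collections import defaultdict


class PipelineScheduler:
    def __init__(self, resident_subgroups):
        self.cache = set(resident_subgroups)   # (group, sub-group) pairs in the cache
        self.tickets = {}                      # (group, sub-group) -> ticket id
        self.sleeping = defaultdict(list)      # ticket id -> instructions asleep on it
        self._next_ticket = 0

    def issue(self, instr, group, subgroup):
        if (group, subgroup) in self.cache:
            return "execute"                   # data is available with fixed latency

        # Data is missing: sleep the instruction under a ticket that corresponds
        # with a second sub-group of the same group (here, simply the next one).
        proxy = (group, subgroup + 1)
        ticket = self.tickets.setdefault(proxy, self._new_ticket())
        self.sleeping[ticket].append(instr)
        return "sleep"

    def on_fill(self, group, subgroup):
        """A memory sub-group was moved into the cache; wake instructions holding its ticket."""
        self.cache.add((group, subgroup))
        ticket = self.tickets.pop((group, subgroup), None)
        return self.sleeping.pop(ticket, []) if ticket is not None else []

    def _new_ticket(self):
        self._next_ticket += 1
        return self._next_ticket
```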

Method and apparatus for dynamically adapting sizes of cache partitions in a partitioned cache

The sizes of cache partitions in a partitioned cache are dynamically adjusted by determining, for each request, how many cache misses will occur in connection with implementing the request against the cache partition. The cache partition associated with the current request is increased in size by the number of cache misses, and one or more other cache partitions are decreased in size, causing cache evictions to occur from those other cache partitions rather than from the current cache partition. The other cache partitions that are to be decreased in size may be determined by ranking the cache partitions according to frequency of use and selecting the least frequently used cache partition to be reduced in size.
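
A small Python sketch of that resizing rule follows; the slot-count bookkeeping and the frequency table are assumptions made for illustration.

```python
def rebalance(partition_sizes, use_frequency, current, misses):
    """Grow the current partition by the number of misses and shrink the least
    frequently used other partition so the total cache size stays constant.

    partition_sizes: dict of partition name -> size in cache slots
    use_frequency:   dict of partition name -> how often it has been used
    """
    if misses == 0 or len(partition_sizes) < 2:
        return partition_sizes

    partition_sizes[current] += misses

    # Rank the other partitions by frequency of use and shrink the coldest one,
    # so evictions happen there instead of in the current partition.
    victim = min((p for p in partition_sizes if p != current),
                 key=lambda p: use_frequency.get(p, 0))
    partition_sizes[victim] = max(0, partition_sizes[victim] - misses)
    return partition_sizes
```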

Managing direct memory access

Managing direct memory access (DMA) by: defining a translate control entity (TCE) cache flag for cache memory addresses, receiving a DMA TCE related request, checking the TCE cache flag status, and completing the TCE related request according to the TCE cache flag status.
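
The flag-driven flow can be sketched as follows; the TCE table layout, the 4 KiB I/O page granularity, and the flag bit are all assumptions, not details from the patent.

```python
TCE_CACHED = 0x1  # hypothetical flag bit: this translation is held in cache memory


def handle_tce_request(tce_table, cache_memory, main_memory, io_addr):
    """Complete a DMA TCE related request according to the TCE cache flag status."""
    entry = tce_table[io_addr >> 12]              # one TCE per 4 KiB I/O page (assumed)
    if entry["flags"] & TCE_CACHED:
        return cache_memory[entry["real_addr"]]   # flag set: use the cached translation
    return main_memory[entry["real_addr"]]        # flag clear: fall back to memory
```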

APPARATUSES AND METHODS FOR TRANSFERRING DATA
20210173770 · 2021-06-10

The present disclosure includes apparatuses and methods related to shifting data. An example apparatus comprises a cache coupled to an array of memory cells and a controller. The controller is configured to perform a first operation beginning at a first address to transfer data from the array of memory cells to the cache, and perform a second operation concurrently with the first operation, the second operation beginning at a second address.
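
The controller in the abstract is hardware, but the overlap of the two operations can be mimicked in software with a thread per operation; the list-backed array and cache and the equal transfer length are assumptions for illustration.

```python
import threading


def transfer(array, cache, start, length):
    """One operation: copy `length` cells starting at `start` from the array into the cache."""
    cache[start:start + length] = array[start:start + length]


def run_concurrently(array, cache, first_addr, second_addr, length):
    # First operation begins at the first address...
    op1 = threading.Thread(target=transfer, args=(array, cache, first_addr, length))
    op1.start()
    # ...while the second operation, beginning at a second address, runs concurrently.
    transfer(array, cache, second_addr, length)
    op1.join()
```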

CACHING DEVICE, CACHE, SYSTEM, METHOD AND APPARATUS FOR PROCESSING DATA, AND MEDIUM
20210271475 · 2021-09-02

A caching device, an instruction cache, a system for processing an instruction, a method and apparatus for processing data and a medium are provided. The caching device includes a first queue, a second queue, a write port group, a read port, a first pop-up port, a second pop-up port and a press-in port. The write port group is configured to write cache data into a set storage address in the first queue and/or the second queue; the read port is configured to read all cache data from the first queue and/or the second queue at one time; the press-in port is configured to press cache data into the first queue and/or the second queue; the first pop-up port is configured to pop up cache data from the first queue; and the second pop-up port is configured to pop up cache data from the second queue.
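
The ports can be modeled as methods over two queues, as in the sketch below; the method names and the queue-selection argument are assumptions based only on the abstract.

```python
from collections import deque


class CachingDevice:
    def __init__(self):
        self.q1, self.q2 = deque(), deque()

    def write(self, queue, index, data):
        """Write port group: write cache data into a set storage address."""
        target = self.q1 if queue == 1 else self.q2
        target[index] = data               # the addressed slot must already exist

    def read_all(self):
        """Read port: read all cache data from both queues at one time."""
        return list(self.q1) + list(self.q2)

    def push(self, queue, data):
        """Press-in port: press cache data into the selected queue."""
        (self.q1 if queue == 1 else self.q2).append(data)

    def pop_first(self):
        """First pop-up port: pop cache data from the first queue."""
        return self.q1.popleft()

    def pop_second(self):
        """Second pop-up port: pop cache data from the second queue."""
        return self.q2.popleft()
```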

TECHNIQUES FOR NON-DETERMINISTIC OPERATION OF A STACKED MEMORY SYSTEM
20210200632 · 2021-07-01

Techniques for non-deterministic operation of a stacked memory system are provided. In an example, a method of operating a memory package can include receiving a plurality of memory access requests for a channel at a logic die, returning first data to a host in response to a first memory access request of the plurality of memory access requests, returning an indication of data not ready to the host in response to a second memory access request of the plurality of memory access requests for second data, returning a first index to the host with the indication of data not ready, returning an indication data is ready with third data in response to a third memory access request of the plurality of memory access requests, and returning the first index with the indication of data ready.
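
This publication shares its abstract with the granted patent listed earlier, so rather than repeating the device-side sketch, the fragment below sketches the host side of the same protocol, reusing the Status and LogicDie names from that earlier sketch; the bookkeeping here is likewise an illustrative assumption.

```python
class Host:
    """Host side of the earlier sketch: outstanding reads are keyed by the
    index returned with each data-not-ready indication."""

    def __init__(self, logic_die):
        self.die = logic_die
        self.outstanding = {}   # index -> address the host is still waiting on
        self.completed = {}     # address -> data, possibly delivered out of order

    def request(self, address):
        resp = self.die.access(address)
        if resp.status is Status.DATA:
            self.completed[address] = resp.data
        elif resp.status is Status.NOT_READY:
            # Remember the index; the data will arrive with a later response.
            self.outstanding[resp.index] = address
        elif resp.status is Status.DATA_READY:
            # The returned index identifies the earlier request whose data has
            # now arrived, so it can be matched back to that request's address.
            earlier = self.outstanding.pop(resp.index)
            self.completed[earlier] = resp.data
            self.request(address)    # then retry the address just asked for
```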

Calculation processing apparatus and method for controlling calculation processing apparatus
10996954 · 2021-05-04

By including a storing device that stores a plurality of memory access instructions decoded by a decoder and outputs the stored memory access instructions to a cache memory; a determiner that determines whether the storing device has capacity to store the plurality of memory access instructions; and an inhibitor that, when the determiner determines that the storing device does not have capacity to store a first memory access instruction included in the plurality of memory access instructions, inhibits execution of a second memory access instruction included in the plurality of memory access instructions and subsequent to the first memory access instruction for a predetermined time period, regardless of a result of the determination made on the second memory access instruction by the determiner, the calculation processing apparatus inhibits a switch of the order of a store instruction and a load instruction.
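
A rough Python sketch of that inhibit rule is below; the queue depth, the length of the inhibit window, and the cycle-based interface are assumptions for illustration.

```python
class AccessIssuer:
    """Issue logic that refuses later memory accesses for a while once the
    storing device has refused one, so a load cannot overtake a store."""

    def __init__(self, capacity=4, inhibit_cycles=8):
        self.queue = []                        # storing device for decoded accesses
        self.capacity = capacity
        self.inhibit_until = -1                # cycle until which issue is inhibited
        self.inhibit_cycles = inhibit_cycles

    def try_issue(self, instr, cycle):
        # While inhibited, refuse every subsequent access regardless of whether
        # room has since opened up (i.e., regardless of the determiner's result).
        if cycle < self.inhibit_until:
            return False
        if len(self.queue) >= self.capacity:   # determiner: no room for this instruction
            self.inhibit_until = cycle + self.inhibit_cycles
            return False
        self.queue.append(instr)               # storing device later forwards it to the cache
        return True
```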