G06F12/0844

Address probing for transaction

Embodiments relate to address probing for a transaction. An aspect includes determining, before starting execution of a transaction, a plurality of addresses that will be used by the transaction during execution. Another aspect includes probing each address of the plurality of addresses to determine whether any of the plurality of addresses has an address conflict. Yet another aspect includes, based on determining that none of the plurality of addresses has an address conflict, starting execution of the transaction.
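The probe-then-execute flow described above can be sketched as a small simulation. This is a minimal illustration, not the patented mechanism: the `conflict_table` dictionary and function names are assumptions standing in for whatever hardware conflict detection the embodiment uses.

```python
def probe_and_start(transaction_addresses, conflict_table, run_transaction):
    """Probe every address a transaction will use before starting it.

    conflict_table maps an address to True when probing that address
    would report an address conflict (e.g., another owner holds it).
    Returns True when the transaction was started, False otherwise.
    """
    # Probe each address up front; stop at the first conflict found.
    for addr in transaction_addresses:
        if conflict_table.get(addr, False):
            return False  # at least one address conflicts: do not start
    # None of the addresses had a conflict, so execution may begin.
    run_transaction()
    return True
```

Probing all addresses before starting avoids aborting a partially executed transaction when a conflict would have been detectable up front.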

DATA STORAGE DEVICE AND OPERATING METHOD THEREOF
20210382824 · 2021-12-09 ·

A data storage device includes a first memory device; a second memory device including a fetch region configured to store data evicted from the first memory device and a prefetch region divided into a plurality of sections; storage; and a controller configured to control the first memory device, the second memory device, and the storage. The controller may include a memory manager configured to select prefetch data having a set section size from the storage, load the selected prefetch data into the prefetch region, and update the prefetch data based on a data read hit ratio of each of the plurality of sections.
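The hit-ratio-driven update of the prefetch region can be modeled as follows. This is a toy sketch under assumed policy details (the abstract does not say which section is replaced; evicting the lowest-hit-ratio section is one plausible reading, and all names here are illustrative).

```python
class PrefetchRegion:
    """Toy model of a prefetch region divided into fixed-size sections.

    Each section tracks reads and hits; when new prefetch data is
    loaded, the section with the lowest read-hit ratio is replaced.
    """

    def __init__(self, num_sections):
        self.sections = [{"data": None, "reads": 0, "hits": 0}
                         for _ in range(num_sections)]

    def read(self, index, hit):
        # Record one read against a section, counting whether it hit.
        s = self.sections[index]
        s["reads"] += 1
        if hit:
            s["hits"] += 1

    def hit_ratio(self, s):
        return s["hits"] / s["reads"] if s["reads"] else 0.0

    def update(self, new_data):
        # Replace the section whose data is read least successfully.
        victim = min(self.sections, key=self.hit_ratio)
        victim.update({"data": new_data, "reads": 0, "hits": 0})
```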

Timed Data Transfer between a Host System and a Memory Sub-System
20210374060 · 2021-12-02 ·

A memory sub-system is configured to schedule the transfer of data from a host system for write commands, reducing the amount of data buffered in the memory sub-system and the time it is buffered. For example, after receiving a plurality of streams of write commands from a host system, the memory sub-system identifies a plurality of media units in the memory sub-system for concurrent execution of a plurality of write commands, respectively. In response to the plurality of commands being identified for concurrent execution in the plurality of media units, respectively, the memory sub-system initiates communication of the data of the write commands from the host system to a local buffer memory of the memory sub-system. The memory sub-system has capacity to buffer write commands in a queue for possible out-of-order execution, but only limited capacity for buffering the data of the portion of the write commands that are about to be executed.
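The scheduling idea (queue commands without data; fetch data from the host only once a command is matched to a free media unit, bounded by buffer capacity) can be sketched like this. The function and parameter names are assumptions for illustration, not the sub-system's actual interface.

```python
def schedule_transfers(pending_writes, free_media_units, buffer_slots):
    """Select write commands for concurrent execution and only then
    initiate transfer of their data from the host.

    pending_writes: list of (command_id, media_unit) tuples queued
    without their data. A host-to-buffer data transfer is initiated
    only for commands mapped to a currently free media unit, and at
    most buffer_slots transfers start (the local buffer is small).
    Returns (started_command_ids, still_pending_commands).
    """
    started = []
    remaining = []
    for cmd_id, unit in pending_writes:
        if len(started) < buffer_slots and unit in free_media_units:
            free_media_units.discard(unit)  # unit now busy with this write
            started.append(cmd_id)          # initiate data transfer now
        else:
            remaining.append((cmd_id, unit))  # stays queued, data still on host
    return started, remaining
```

Note the out-of-order effect: a command for a busy media unit is skipped while a later command for a free unit proceeds.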

HIGH SPEED MEMORY SYSTEM INTEGRATION
20220197806 · 2022-06-23 ·

Embodiments disclosed herein include memory architectures with stacked memory dies. In an embodiment, an electronic device comprises a base die and an array of memory dies over and electrically coupled to the base die. In an embodiment, the array of memory dies comprises caches. In an embodiment, a compute die is over and electrically coupled to the array of memory dies. In an embodiment, the compute die comprises a plurality of execution units.

STORAGE DEVICE AND OPERATING METHOD THEREOF
20220188234 · 2022-06-16 ·

A storage device includes: a memory device including a plurality of planes and a plurality of cache buffers and data buffers; and a memory controller configured to control the memory device to transmit first data and second data from a first plane and a second plane into a first cache buffer and a second cache buffer, respectively, and to control the first cache buffer and the second cache buffer to transmit the first data and the second data to the memory controller. In response to a read request for third data from a host while the first data is being transmitted from the first cache buffer to the memory controller, the memory controller transmits a cache read command to the memory device such that the memory device reads the third data after the first data is completely transmitted to the memory controller, but before the second data is transmitted from the second cache buffer.
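The ordering constraint in this abstract (the cache read for the third data is slotted in after the first transfer completes but before the second begins) can be made concrete as an event timeline. This is a schematic sketch; the event labels are assumptions, not the device's command set.

```python
def interleave_cache_read(first, second, third):
    """Return the order of operations when a host read for `third`
    arrives while `first` is still streaming out of its cache buffer.

    The cache read command for `third` is issued once `first` has been
    completely transmitted, and before `second` leaves its cache buffer.
    """
    events = []
    events.append(("transmit", first))         # first data -> controller
    events.append(("cache_read_cmd", third))   # issued after first completes
    events.append(("transmit", second))        # second data follows
    return events
```

Issuing the cache read in the gap between the two transfers lets the array read for the third data overlap with the second data's transfer.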

Processor system having memory interleaving, and method for accessing interleaved memory banks with one clock cycle

A processor system comprises a memory having at least two interleaved memory banks, at least two multiplexers which are respectively coupled to one of the at least two interleaved memory banks via a respective memory bank bus, a first processor or processor core which is coupled to first multiplexer inputs of the at least two multiplexers via a first data bus, a second processor or processor core which is coupled to second multiplexer inputs of the at least two multiplexers via a second data bus, and at least two queue buffers which are arranged in the second data bus between the second processor or processor core and the second multiplexer inputs of the at least two multiplexers. The first processor or processor core is configured to have read access or write access only to one of the at least two interleaved memory banks within one clock cycle.
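The bank-interleaving at the heart of this arrangement maps consecutive words to alternating banks, so the two cores can hit different banks in the same clock cycle. A minimal sketch of that mapping, assuming word-granularity interleaving and a 4-byte word (the patent does not fix these parameters):

```python
def bank_for_address(addr, num_banks=2, word_bytes=4):
    """Map a byte address to an interleaved memory bank.

    With two interleaved banks and word-granularity interleaving,
    consecutive words alternate between bank 0 and bank 1, so two
    masters accessing adjacent words fall on different bank buses.
    """
    return (addr // word_bytes) % num_banks
```

The queue buffers on the second data bus absorb the cycles in which both cores target the same bank, preserving the first core's single-cycle access guarantee.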