G06F3/0613

Apparatus and method for improving input/output throughput of memory system
11567667 · 2023-01-31

Disclosed is a memory system including a plurality of memory dies configured to store data in various storage modes, and a controller coupled with the plurality of memory dies via a plurality of channels. The controller is configured to perform a correlation operation on multiple read requests among a plurality of read requests received from a host, so that the plurality of memory dies output plural pieces of data corresponding to the plurality of read requests via the plurality of channels in an interleaved manner. The controller determines whether to perform the correlation operation based on the number of read requests, and performs the correlation operation on multiple read requests that are related to the same storage mode and different channels.
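The correlation rule described above (group reads sharing a storage mode but targeting different channels, and only when enough requests are pending) can be sketched as follows. The names `ReadRequest` and `correlate`, the threshold handling, and the mode labels are assumptions for illustration, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReadRequest:
    address: int
    storage_mode: str   # e.g. "SLC" or "TLC" (illustrative labels)
    channel: int

def correlate(requests, min_count=2):
    """Group requests by storage mode; within a group, keep at most one
    request per channel so the grouped reads can be serviced in an
    interleaved way over different channels. Correlation is skipped when
    fewer than `min_count` requests arrive, mirroring the controller's
    check on the number of read requests."""
    if len(requests) < min_count:
        return []                       # not worth correlating
    groups = {}
    for req in requests:
        bucket = groups.setdefault(req.storage_mode, {})
        # one request per channel per group -> interleaving is possible
        bucket.setdefault(req.channel, req)
    # only groups spanning more than one channel benefit from interleaving
    return [list(b.values()) for b in groups.values() if len(b) > 1]
```

Requests on the same channel cannot overlap their data transfers, which is why a correlated group keeps one request per channel.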

Managing memory command engine using command sequence analysis

Various embodiments described herein provide for using analysis of a sequence of commands (issued by a host system) to manage a memory command component, such as a read engine or a write engine of a memory system.
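One plausible concrete form of such command-sequence analysis, offered purely as an illustration (the abstract names the idea but not an algorithm): classify a recent window of read addresses as sequential, so that a hypothetical read engine could enable prefetching when the workload looks sequential.

```python
def mostly_sequential(addresses, stride=1, threshold=0.75):
    """Return True when at least `threshold` of consecutive address pairs
    advance by exactly `stride` (an illustrative sequentiality test)."""
    if len(addresses) < 2:
        return False
    hits = sum(1 for a, b in zip(addresses, addresses[1:]) if b - a == stride)
    return hits / (len(addresses) - 1) >= threshold
```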

Storage system accommodating varying storage capacities
11714715 · 2023-08-01

A plurality of storage nodes in a single chassis is provided. The plurality of storage nodes in the single chassis is configured to communicate together as a storage cluster. Each of the plurality of storage nodes includes nonvolatile solid-state memory for user data storage. The plurality of storage nodes is configured to distribute the user data and metadata associated with the user data throughout the plurality of storage nodes such that the plurality of storage nodes maintains the ability to read the user data, using erasure coding, despite a loss of two of the plurality of storage nodes. A plurality of compute nodes is included in the single chassis, each of which is configured to communicate with the plurality of storage nodes. A method for accessing user data in a plurality of storage nodes having nonvolatile solid-state memory is also provided.
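The two-node fault tolerance claimed above follows from a standard erasure-coding property, shown here as a minimal sketch (not the patent's implementation): with k data shards plus m parity shards spread across k + m nodes, the data stays readable as long as any k shards survive, so m = 2 tolerates the loss of any two nodes.

```python
def readable(total_nodes, parity_shards, failed_nodes):
    """Erasure-coded data is recoverable when the number of surviving
    nodes is at least the number of data shards (total minus parity).
    Shard counts here are illustrative, not from the patent."""
    data_shards = total_nodes - parity_shards
    return total_nodes - failed_nodes >= data_shards
```

With 8 nodes and 2 parity shards, any double failure is survivable, but a triple failure is not.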

Nonvolatile memory device supporting high-efficiency I/O interface

A nonvolatile memory device includes a first pin that receives a first signal, a second pin that receives a second signal, third pins that receive third signals, a fourth pin that receives a write enable signal, a memory cell array, and a memory interface circuit that obtains a command, an address, and data from the third signals in a first mode and obtains the command and the address from the first signal and the second signal and the data from the third signals in a second mode. In the first mode, the memory interface circuit obtains the command from the third signals and obtains the address from the third signals. In the second mode, the memory interface circuit obtains the command from the first signal and the second signal and obtains the address from the first signal and the second signal.
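The mode-dependent pin assignment described above can be summarized as a small decode table. This is a sketch of the abstract's two modes only; the pin-group names are assumptions.

```python
def source_of(field, mode):
    """Which pin group carries `field` ("command", "address", or "data")
    in `mode` (1 or 2). In the first mode, command, address, and data all
    arrive on the third pins; in the second mode, command and address
    arrive on the first and second pins while data stays on the third pins."""
    if mode == 1:
        return "third_pins"
    if field == "data":
        return "third_pins"
    return "first_and_second_pins"
```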

Data flow management in a heterogeneous memory device using a thermal profile

A computer-implemented method, a computer program product, and a computer system for data flow management in a heterogeneous memory device. A media controller redirects traffic from a first non-volatile memory (NVM) to a second NVM in response to determining that an instantaneous temperature of the first NVM reaches a first predetermined temperature at which redirecting the traffic is started. The media controller throttles to reduce the traffic to the second NVM in response to determining that the instantaneous temperature is higher than a second predetermined temperature at which throttling is started. The media controller redirects the traffic back to the first NVM in response to determining that the instantaneous temperature is not higher than the second predetermined temperature and is lower than a third predetermined temperature at which throttling is ended. The first NVM is thermally sensitive, while the second NVM is thermally tolerant.
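The three-threshold policy above can be sketched as a small state machine. The ordering `t_resume < t_redirect < t_throttle` (a hysteresis band) and all names are illustrative assumptions; the abstract fixes the thresholds' roles but not their exact relationship.

```python
def next_state(route, temp, t_redirect, t_throttle, t_resume):
    """One policy step. `route` is where first-NVM traffic currently goes
    ("first" or "second"); `temp` is the first NVM's instantaneous
    temperature. Returns (new_route, throttle_second)."""
    if temp > t_throttle:
        # hotter than the throttling threshold: keep traffic on the
        # second NVM but throttle it to reduce the load
        return ("second", True)
    if route == "first" and temp >= t_redirect:
        # first NVM getting hot: start redirecting to the tolerant NVM
        return ("second", False)
    if route == "second" and temp < t_resume:
        # cooled below the resume point: send traffic back to the first NVM
        return ("first", False)
    return (route, False)
```

Keeping the resume point below the redirect point prevents the controller from flapping between the two media around a single threshold.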

Dynamic latency management of active-active configurations using multi-pathing software

An apparatus comprises a host device that includes a multi-path input-output (MPIO) driver configured to control delivery of input-output (IO) operations from the host device to first and second storage systems over a plurality of paths through a network. The MPIO driver determines latency values for the paths to the first and second storage systems, retrieves additional information corresponding to the paths and first and second storage systems, generates a first message comprising at least portions of the latency values and additional information, and sends the first message to a multi-pathing management appliance. A second message is received from the multi-pathing management appliance, the second message being generated based on at least a portion of the first message. The MPIO driver selects one or more paths for delivery of given ones of the IO operations based at least in part on at least a portion of the second message.
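As a hedged sketch of how the latency values might drive path selection (the abstract does not specify the policy), a driver could pick the lowest-latency path, or spread IO operations across paths in inverse proportion to latency. Both functions below are illustrative, not the patent's method.

```python
def select_path(paths):
    """paths: dict mapping path id -> latest latency (ms), e.g. as adjusted
    after the appliance's response message. Pick the lowest-latency path."""
    return min(paths, key=paths.get)

def weighted_shares(paths, total_ios):
    """Distribute `total_ios` operations across paths inversely to latency
    (one simple load-balancing policy among many)."""
    inv = {p: 1.0 / latency for p, latency in paths.items()}
    scale = total_ios / sum(inv.values())
    return {p: round(w * scale) for p, w in inv.items()}
```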

PARTIAL ARRAY REFRESH TIMING

A memory controller combines information about which memory component segments are not being refreshed with the information about which rows are going to be refreshed next, to determine, for the current refresh command, the total number of rows that are going to be refreshed. Based on this total number of rows, the memory controller selects how long to wait after the refresh command before issuing a next subsequent command. When the combination of masked segments and the refresh scheme results in less than the ‘nominal’ number of rows typically refreshed in response to a single refresh command, the waiting period before the next command (e.g., non-refresh command) is issued may be reduced from the ‘nominal’ minimum time period, thereby allowing the next command to be issued earlier.
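The timing rule above amounts to counting the rows actually refreshed and shrinking the post-refresh wait when that count falls below the nominal per-command count. The linear scaling below is a simplifying assumption for illustration; the text does not mandate a specific formula.

```python
def rows_this_command(next_rows_per_segment, masked_segments):
    """Total rows the current refresh command will refresh: the rows
    scheduled next in each segment, minus those in masked segments."""
    return sum(rows for seg, rows in next_rows_per_segment.items()
               if seg not in masked_segments)

def refresh_wait_ns(rows_to_refresh, nominal_rows, nominal_wait_ns):
    """Scale the wait before the next (non-refresh) command by the fraction
    of nominal rows actually refreshed, never exceeding the nominal wait."""
    if rows_to_refresh >= nominal_rows:
        return nominal_wait_ns
    return nominal_wait_ns * rows_to_refresh // nominal_rows
```

With half the segments masked, half the rows are refreshed and the waiting period halves, letting the next command issue earlier.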

DATA MANAGEMENT APPARATUS, DATA MANAGEMENT METHOD, AND DATA STORAGE DEVICE
20230028301 · 2023-01-26

A data management apparatus, a data management method, and a data storage device are provided. The data management apparatus includes a management unit and a data migration unit. The management unit manages data transmission channels between two types of storage media with different transmission performance. The data migration unit then migrates data between the two types of storage media through the managed data transmission channels. In this way, the data management apparatus can migrate data directly between storage media with different transmission performance, and a CPU in the system does not need to perform processing such as instruction conversion and protocol conversion, so the delay caused by such CPU processing is avoided. In addition, because the CPU does not need to perform the data migration, CPU resource overheads are reduced.
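The division of labor described above can be sketched minimally: a management unit owns the channels between a fast and a slow medium, and a migration unit moves data over a managed channel without touching the host CPU path. All class and method names here are invented for illustration.

```python
class ManagementUnit:
    """Owns the data transmission channels between the two media."""
    def __init__(self, channels):
        self._free = list(channels)   # available channel ids

    def acquire(self):
        return self._free.pop() if self._free else None

    def release(self, ch):
        self._free.append(ch)

class MigrationUnit:
    """Migrates data between a fast and a slow medium over managed channels."""
    def __init__(self, mgmt, fast, slow):
        self.mgmt, self.fast, self.slow = mgmt, fast, slow

    def migrate(self, key, to_fast):
        ch = self.mgmt.acquire()
        if ch is None:
            return False              # no channel available; caller retries
        src, dst = (self.slow, self.fast) if to_fast else (self.fast, self.slow)
        dst[key] = src.pop(key)       # direct copy over the managed channel
        self.mgmt.release(ch)
        return True
```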

METHOD OF INPUTTING AND OUTPUTTING DATA, ELECTRONIC DEVICE AND COMPUTER PROGRAM PRODUCT
20230026565 · 2023-01-26

A method, an electronic device, and a computer program product for inputting and outputting data are disclosed. The method includes receiving a target I/O request for a storage device from an application, determining that a first offset or a second offset is greater than zero, and generating a plurality of I/O requests based on the target address. The I/O requests include a first I/O request for a first data segment in the target data and at least one other I/O request for the other data segments in the target data. For the first I/O request, the method includes executing a direct I/O operation on the first data segment by bypassing a cache associated with the storage device.
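Under the assumption (not stated explicitly in the abstract) that the offsets describe misalignment of the target range against a block boundary, the splitting step can be sketched as: peel off the leading unaligned segment, serve it with direct I/O bypassing the cache, and issue the remainder as a separate request. The block size and the `direct` flag are illustrative.

```python
BLOCK = 4096  # assumed block size for the example

def split_request(start, length, block=BLOCK):
    """Return a list of (offset, size, direct) sub-requests covering
    [start, start + length). The leading unaligned piece, if any, is
    marked for direct I/O (cache bypass)."""
    reqs = []
    first_off = start % block               # the "first offset"
    if first_off > 0:
        head = min(block - first_off, length)
        reqs.append((start, head, True))    # direct I/O, bypass cache
        start, length = start + head, length - head
    if length > 0:
        reqs.append((start, length, False)) # remainder, cache-eligible
    return reqs
```

An already-aligned request is passed through unsplit, since its first offset is zero.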

REDUCING WRITE DELAYS WHILE TAKING SNAPSHOTS
20230022243 · 2023-01-26

Snapshots are processed without holding all write operations while the snapshots are being activated. Rather than holding all write operations until the snapshots are activated, write operations may be allowed to proceed. Snapshot write processing, including the updating of snapshot metadata, may be temporarily suspended while the snapshots are being activated, and write operations received during activation are logged. After snapshots have been activated for all logical storage units (LSUs) for which snapshots were instructed to be activated, the logging of write operations may be stopped, and the logged write entries are processed to determine whether any of the logged write operations require updating snapshot information of any logical storage elements (LSEs) of the LSUs. While the logged write operations are being processed, any write operation received from a host for an LSE having a logged write operation may be held until the corresponding logged write operation, or all logged write operations, have been processed.
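The log-during-activation, replay-after flow above can be sketched as follows; the class and method names are invented, and the replay here is reduced to reporting which LSEs had logged writes.

```python
class SnapshotActivation:
    """Sketch: while snapshots are activating, writes proceed but are
    logged; after activation the log is replayed so snapshot information
    can be updated where needed."""
    def __init__(self):
        self.activating = False
        self.log = []                 # (lse, data) writes seen mid-activation

    def begin(self):
        self.activating = True

    def write(self, lse, data):
        if self.activating:
            self.log.append((lse, data))   # proceed, but log for later
            return "logged"
        return "applied"

    def finish(self):
        """Activation done: stop logging and replay logged writes,
        returning the LSEs whose snapshot information may need updating."""
        self.activating = False
        replayed = [lse for lse, _ in self.log]
        self.log.clear()
        return replayed
```

A real implementation would also hold any new write that collides with a still-unprocessed logged write, as the abstract describes; that arbitration is omitted here for brevity.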