G06F2212/224

Cluster-based storage device buffering

Technologies are provided for storing data by alternating the performance of data write operations using multiple clusters of storage devices. Data is written to internal buffers of storage devices in one cluster while data stored in buffers of storage devices in another cluster is transferred to the storage devices' permanent storages. When available buffer capacity in a cluster falls below a specified threshold, data write commands are no longer sent to the cluster and the storage devices in the cluster transfer data stored in their buffers to their permanent storages. While the data is being transferred, data write commands are transmitted to other clusters. When the data transfer is complete, the storage devices in the cluster can be scheduled to receive data write commands again. A cluster can be selected for performing a given data write request by matching the attributes of the cluster to parameters of the data write request.
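The alternating scheme above can be sketched as follows. This is a minimal, hypothetical model: the `Cluster` class, the synchronous `flush`, and the threshold check are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of alternating writes across storage-device clusters.
class Cluster:
    def __init__(self, name, buffer_capacity):
        self.name = name
        self.buffer_capacity = buffer_capacity
        self.buffered = 0          # bytes currently held in device buffers
        self.flushing = False      # True while buffers drain to permanent storage

    def available(self):
        return self.buffer_capacity - self.buffered

    def flush(self):
        # Transfer buffered data to permanent storage, then rejoin the rotation
        # (modeled as synchronous here; the patent describes this as concurrent).
        self.buffered = 0
        self.flushing = False


def select_cluster(clusters, write_size, threshold):
    """Pick a cluster whose free buffer space stays above the threshold;
    clusters that fall below it stop receiving writes and drain instead."""
    for c in clusters:
        if c.flushing:
            continue
        if c.available() - write_size < threshold:
            c.flushing = True      # stop scheduling writes to this cluster
            c.flush()
            continue
        c.buffered += write_size
        return c.name
    raise RuntimeError("no cluster can accept the write")
```

With two 100-unit clusters and a threshold of 20, a stream of 70-unit writes alternates between the clusters, each flushing while the other absorbs writes.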

CONSIDERING A FREQUENCY OF ACCESS TO GROUPS OF TRACKS AND DENSITY OF THE GROUPS TO SELECT GROUPS OF TRACKS TO DESTAGE
20190294552 · 2019-09-26 ·

Provided are a computer program product, system, and method for considering a frequency of access to groups of tracks and density of the groups to select groups of tracks to destage. One of a plurality of densities for one of a plurality of groups of tracks is incremented in response to determining at least one of that the group is not ready to destage and that one of the tracks in the group in the cache transitions to being ready to destage. A determination is made of a group frequency indicating a frequency at which tracks in the group are modified. At least one of the density and the group frequency is used for each of the groups to determine whether to destage the group. The tracks in the group in the cache are destaged to the storage in response to determining to destage the group.
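A rough model of the selection logic follows. The counters, thresholds, and the "dense and cold" preference are assumptions made for illustration; the patent combines density and group frequency without fixing a particular formula.

```python
# Illustrative model: pick a group of tracks to destage by combining its
# density (count of destage-ready tracks) with how often its tracks are
# still being modified.
class TrackGroup:
    def __init__(self, name):
        self.name = name
        self.density = 0        # incremented as tracks become ready to destage
        self.modifications = 0  # recent writes to tracks in the group

    def track_ready(self):
        self.density += 1

    def track_modified(self):
        self.modifications += 1


def pick_group_to_destage(groups, min_density, max_frequency):
    """Prefer dense groups whose tracks are no longer modified often:
    destaging them writes many tracks at once, and cold tracks are
    unlikely to be re-dirtied soon, so the work is not wasted."""
    candidates = [g for g in groups
                  if g.density >= min_density and g.modifications <= max_frequency]
    if not candidates:
        return None
    return max(candidates, key=lambda g: g.density).name
```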

SHINGLED MAGNETIC RECORDING DRIVE THAT UPDATES MEDIA CACHE DATA IN-PLACE
20190278710 · 2019-09-12 ·

When a shingled magnetic recording (SMR) hard disk drive (HDD) receives a write command that references one or more target logical block addresses (LBAs) and determines that one or more target LBAs are included in a range of LBAs for which data are stored in a memory of the drive, additional data are written to the media cache of the SMR HDD along with the write data during the same disk access. The additional data include data that are stored in the volatile memory and are associated with one or more LBAs that are adjacent in LBA space to the target LBAs. The one or more LBAs that are adjacent in LBA space to the target LBAs may include a first group of LBAs that is adjacent to and follows the target LBAs and a second group of LBAs that is adjacent to and precedes the target LBAs.
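The widening step can be sketched like this. The function name, the dict-based cache, and the contiguity walk are illustrative assumptions standing in for the drive's internal bookkeeping.

```python
# Sketch: widen a media-cache write with LBAs adjacent to the targets whose
# data are already held in the drive's volatile memory, so they can be
# written in the same disk access.
def widen_write(target_lbas, cached):
    """target_lbas: sorted, contiguous LBAs from the write command.
    cached: dict mapping LBA -> data held in volatile memory.
    Returns the full sorted list of LBAs to write in one disk access."""
    lbas = list(target_lbas)
    # extend backwards over cached LBAs immediately preceding the targets
    lba = target_lbas[0] - 1
    while lba in cached:
        lbas.insert(0, lba)
        lba -= 1
    # extend forwards over cached LBAs immediately following the targets
    lba = target_lbas[-1] + 1
    while lba in cached:
        lbas.append(lba)
        lba += 1
    return lbas
```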

SMR DRIVE THAT MAINTAINS PHYSICAL LOCATIONS OF BANDS VIA FLUSH/COPY OPERATIONS

A shingled magnetic recording (SMR) hard disk drive (HDD) performs additional SMR band copy and/or flush operations to ensure that data associated with logical bands that are adjacent or proximate in logical space are stored in physical locations in the SMR HDD that are proximate in physical space. As a result, efficient execution of read commands that span multiple logical bands of the SMR HDD is ensured.

Sense flags in a memory device

Methods for programming sense flags may include programming memory cells coupled to first data lines in a main memory array, and programming memory cells coupled to second data lines in the main memory array while programming memory cells coupled to data lines in a flag memory array with flag data indicative of the memory cells coupled to the second data lines being programmed. Methods for sensing flags may include performing a sense operation on memory cells coupled to first data lines of a main memory array and memory cells coupled to data lines of a flag memory array, and determining a program indication of memory cells coupled to second data lines of the main memory array from the sense operation performed on the memory cells coupled to the data lines of the flag memory array.
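A loose software model of the flag scheme is sketched below. The class, method names, and dict-backed "arrays" are assumptions; the point is only that sensing the flag array alone reveals whether the second data lines were programmed.

```python
# Rough model: when cells on the second data lines are programmed, flag
# cells in a separate flag memory array are programmed in the same
# operation; sensing the flag array later indicates that programming
# without sensing the second-data-line cells directly.
class FlaggedArray:
    def __init__(self):
        self.main = {}   # data line -> programmed value
        self.flags = {}  # flag line -> True once paired lines programmed

    def program_first(self, lines, values):
        self.main.update(zip(lines, values))

    def program_second(self, lines, values, flag_line):
        self.main.update(zip(lines, values))
        self.flags[flag_line] = True   # flag written alongside the data

    def second_lines_programmed(self, flag_line):
        # A sense of the flag array stands in for sensing the main cells.
        return self.flags.get(flag_line, False)
```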

Caching network fabric for high performance computing

An apparatus and method exchange data between two nodes of a high performance computing (HPC) system using a data communication link. The apparatus has one or more processing cores, RDMA engines, cache coherence engines, and multiplexers. The multiplexers may be programmed by a user application, for example through an API, to selectively couple either the RDMA engines, cache coherence engines, or a mix of these to the data communication link. Bulk data transfer to the nodes of the HPC system may be performed using paged RDMA during initialization. Then, during computation proper, random access to remote data may be performed using a coherence protocol (e.g. MESI) that operates on much smaller cache lines.
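The engine-selection idea can be illustrated with a toy multiplexer. Everything here is assumed for the sketch: the engine names, the page and cache-line sizes, and the transaction count as a stand-in for link traffic.

```python
# Loose sketch of the programmable multiplexer: an application steers the
# data link to the RDMA engines for bulk paged transfers, or to the cache
# coherence engines for fine-grained, cache-line-sized access.
class LinkMux:
    def __init__(self):
        self.route = "rdma"  # default engine behind the data link

    def select(self, engine):
        if engine not in ("rdma", "coherence"):
            raise ValueError("unknown engine")
        self.route = engine

    def transfer(self, nbytes):
        # Bulk data favors paged RDMA; random access to remote data favors
        # much smaller cache lines under a coherence protocol (e.g. MESI).
        unit = 4096 if self.route == "rdma" else 64
        return (nbytes + unit - 1) // unit  # number of link transactions
```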

DRAM-BASED STORAGE CACHING METHOD AND DRAM-BASED SMART TERMINAL
20190258582 · 2019-08-22 ·

Embodiments of the present disclosure provide a DRAM-based storage caching method for a smart terminal, and the method includes: capturing an IO delivered by an upper-layer application; determining, based on a configuration policy, whether the IO belongs to a pre-specified to-be-cached IO type; and when the IO belongs to the pre-specified to-be-cached IO type, performing a corresponding caching operation for the IO in a DRAM disk based on a read/write type of the IO and a preset caching policy, where the DRAM disk is a block device created by using a reserved part of DRAM space of an operating system.
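The capture-and-filter flow reads naturally as a small dispatcher. The policy fields, IO shape, and return strings below are illustrative assumptions; the dict stands in for the block device created on reserved DRAM.

```python
# Minimal sketch: intercept an IO from an upper-layer application, check its
# type against a configured policy, and cache it in a DRAM-backed block
# device according to its read/write type.
CACHE_POLICY = {"cached_types": {"metadata", "small-random"}}

dram_disk = {}  # stands in for the block device built on reserved DRAM


def handle_io(io):
    """io: dict with 'type', 'rw' ('read' or 'write'), 'lba', 'data'.
    Returns a string describing how the IO was handled."""
    if io["type"] not in CACHE_POLICY["cached_types"]:
        return "passthrough"
    if io["rw"] == "write":
        dram_disk[io["lba"]] = io["data"]   # write-cache in the DRAM disk
        return "cached-write"
    if io["lba"] in dram_disk:
        return "cache-hit"
    return "cache-miss"
```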

Split head invalidation for consumer batching in pointer rings
10372608 · 2019-08-06 ·

A split head invalidation system includes a first memory including a ring buffer, a second memory, and a processor in communication with the first memory. The processor includes a consumer processor and a producer processor. The consumer processor is configured to maintain a head and tail pointer, detect a request to copy a memory entry from the ring buffer, and consume the memory entry. Consuming the memory entry includes iteratively testing a value associated with the memory entry in a slot indicated by the head pointer, retrieving the respective memory entry from the slot, and advancing the head pointer to the next slot until reaching a threshold quantity of slots. Additionally, the consumer processor is configured to invalidate each slot from the head pointer to the tail pointer after reaching the threshold quantity.
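The consumer side can be sketched as follows. The ring layout, the `None` sentinel for an empty slot, and the pointer bookkeeping are assumptions; the essential point is that slots are retrieved in a batch first and only then invalidated in a single pass from the trailing pointer to the head.

```python
# Sketch of split head invalidation: consume up to a batch of slots,
# advancing the head pointer, then clear every consumed slot in one pass.
EMPTY = None

class RingConsumer:
    def __init__(self, ring):
        self.ring = ring
        self.head = 0   # next slot to consume
        self.tail = 0   # first slot not yet invalidated

    def consume_batch(self, batch_size):
        out = []
        n = len(self.ring)
        # iteratively test and retrieve entries until the batch threshold
        while len(out) < batch_size and self.ring[self.head % n] is not EMPTY:
            out.append(self.ring[self.head % n])  # retrieve the entry
            self.head += 1                        # advance to the next slot
        # split invalidation: clear all consumed slots after the batch
        while self.tail < self.head:
            self.ring[self.tail % n] = EMPTY
            self.tail += 1
        return out
```

Deferring invalidation to one pass keeps the producer's and consumer's cache-line writes to the ring from interleaving on every entry.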

ELECTRONIC SYSTEM, INFORMATION PROCESSING DEVICE, AND CONTROL METHOD
20190235599 · 2019-08-01 ·

An electronic system includes N electronic components where N is an integer of 2 or more, (N+1) batteries, and N selection circuits associated with the respective N electronic components. Each of the N electronic components is coupled to two batteries among the (N+1) batteries. Combinations of two batteries coupled to the respective N electronic components are different from each other. Each of (N1) batteries among the (N+1) batteries is coupled to two electronic components among the N electronic components. Combinations of two electronic components coupled to the respective (N1) batteries are different from each other. Each of the N selection circuits is configured to supply, as driving power, electric power output from at least one of two batteries coupled to a corresponding electronic component among the N electronic components to the corresponding electronic component.