Patent classifications
G06F2212/21
Method and system for hardware accelerated read-ahead caching
A system and method for efficient cache buffering are provided. The disclosed method includes determining that a read-ahead operation is to be performed in response to receiving a host Input/Output (I/O) command and, in response to that determination, allocating a new Local Message Identifier (LMID) for the read-ahead operation. The method further includes sending a buffer allocation request to a buffer manager module, the buffer allocation request containing parameters associated with the read-ahead operation, and then causing the buffer manager module to allocate at least one Internal Scatter Gather List (ISGL) and Buffer Section Identifier (BSID) in accordance with the parameters contained in the buffer allocation request. The method further includes enabling the cache manager module to perform a hash search using a row or strip number and identification information available in the new LMID.
Data storage device performance optimization method and apparatus
A method includes storing a data group in a first zone of a plurality of radial zones of a data storage disc. Each different one of the plurality of zones has a different throughput level. The method further includes obtaining information related to an access frequency of the data group stored in the first zone of the plurality of zones. Based on the information related to the access frequency of the data group and the different throughput levels of the different zones, a determination is made as to whether to migrate the data group from the first zone of the plurality of zones to a second zone of the plurality of zones.
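The migration decision above can be sketched as a small policy function. This is a hedged illustration, not the patent's actual method: the zone ordering, the hot/cold threshold, and the function names (`choose_zone`, `should_migrate`) are all assumptions made for the example.

```python
def choose_zone(access_freq, zone_throughputs, hot_threshold=100):
    """Return the index of the zone a data group should occupy.

    zone_throughputs is ordered from highest to lowest throughput
    (outer to inner radial zones). A frequently accessed group is
    placed in the fastest zone; an infrequently accessed one in the
    slowest, freeing fast zones for hot data.
    """
    if access_freq >= hot_threshold:
        return 0                          # fastest (outermost) zone
    return len(zone_throughputs) - 1      # slowest (innermost) zone


def should_migrate(current_zone, access_freq, zone_throughputs):
    """Migrate only when the preferred zone differs from the current one."""
    return choose_zone(access_freq, zone_throughputs) != current_zone
```

A real implementation would weigh migration cost against the expected throughput gain; the sketch only captures the access-frequency-driven placement decision.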
Surface-based logical storage units in multi-platter disks
Techniques for treating surfaces of a multi-platter disk as independent units are described herein. Each surface of a plurality of surfaces of a multi-platter disk is identified and a set of storage layout data describing the storage format of the surface is received. A logical address is calculated based on the surface layout data and at least a portion of the storage layout data is stored on the surface. The logical address of the surface is then provided for use by other services.
Power saving mechanisms for a dynamic mirror service policy
Described is a storage system and method for reducing power consumption. The storage system has first and second physical disks configured to provide mirroring. The first physical disk is placed into a power-saving mode of operation, while the second physical disk is in an active mode of operation responding to read and write requests. The first physical disk transitions from the power-saving mode of operation to an active mode of operation for destaging writes pending from cache to the first physical disk, while the second physical disk responds to read and write requests. The second physical disk then transitions from the active mode of operation to the power-saving mode of operation, while the first physical disk responds to read and write requests.
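The alternating role swap above can be modeled as a tiny state machine. This is a toy sketch under stated assumptions: the class and attribute names (`MirrorPair`, `pending_writes`) are invented for illustration, and a real array would track destage queues in cache rather than Python lists.

```python
ACTIVE, POWER_SAVING = "active", "power-saving"


class MirrorPair:
    """Two mirrored disks, one active and one sleeping at any time."""

    def __init__(self):
        self.modes = {"disk1": POWER_SAVING, "disk2": ACTIVE}
        self.pending_writes = {"disk1": [], "disk2": []}

    def write(self, block):
        # The active disk services the write immediately; for the
        # sleeping disk, the write is held in cache as a pending destage.
        for disk, mode in self.modes.items():
            if mode == POWER_SAVING:
                self.pending_writes[disk].append(block)

    def swap_roles(self):
        # Wake the sleeping disk, destage its pending writes, then put
        # the previously active disk into power-saving mode.
        sleeping = next(d for d, m in self.modes.items() if m == POWER_SAVING)
        active = next(d for d, m in self.modes.items() if m == ACTIVE)
        self.pending_writes[sleeping].clear()   # destage complete
        self.modes[sleeping], self.modes[active] = ACTIVE, POWER_SAVING
```

The key property the sketch preserves is that reads and writes are always serviced by exactly one active disk while the other saves power, with pending writes destaged at each transition.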
SMR drive with multi-level write-cache for high random-write performance
A shingled magnetic recording (SMR) hard disk drive (HDD) is configured with a multi-level cache. To expedite execution of read commands, the SMR HDD is configured to generate and store a Bloom filter in a memory that can be quickly accessed by the drive controller whenever data are stored in certain levels of the multi-level cache. When data are flushed from one level of media cache to an SMR band included in a lower level of media cache, a Bloom filter is generated based on the logical block addresses (LBAs) stored in that SMR band. Thus, when the SMR HDD receives a read command for data that are associated with a particular LBA and are stored in an SMR region of the HDD, the drive controller can query the Bloom filter for each different SMR region of the HDD in which data for that LBA can possibly be stored.
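The per-band lookup above can be illustrated with a minimal Bloom filter: a query answers "definitely not in this band" or "possibly in this band", so the controller only searches bands that might hold the LBA. The hashing scheme, filter size, and helper names here are assumptions for the sketch, not the drive's actual design.

```python
import hashlib


class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # bit array packed into one integer

    def _positions(self, lba):
        # Derive num_hashes independent bit positions from the LBA.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{lba}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, lba):
        for pos in self._positions(lba):
            self.bits |= 1 << pos

    def might_contain(self, lba):
        # False means "definitely absent"; True means "possibly present".
        return all(self.bits & (1 << pos) for pos in self._positions(lba))


def bands_to_search(lba, band_filters):
    """Return only the SMR bands whose Bloom filter might hold the LBA."""
    return [band for band, bf in band_filters.items() if bf.might_contain(lba)]
```

Because Bloom filters never produce false negatives, a band whose filter rejects the LBA can be skipped safely; false positives only cost an extra (unsuccessful) band search.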
Two-pass logical write for interlaced magnetic recording
An exemplary write method disclosed herein includes receiving a request to write data to a consecutive sequence of logical block addresses (LBAs); identifying a first non-contiguous sequence of data tracks mapped to a first portion of the consecutive sequence of LBAs; and identifying a second non-contiguous sequence of data tracks mapped to a second portion of the consecutive sequence of LBAs, where the second portion sequentially follows the first portion. The method further includes writing the data of the second portion of the consecutive sequence of LBAs to the first non-contiguous sequence of data tracks during a first pass of a transducer head through the radial zone and writing the data of the first portion of the consecutive sequence of LBAs to the second non-contiguous sequence of data tracks during a second, subsequent pass of the transducer head through the radial zone.
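The essence of the two-pass scheme above is reordering LBA-ordered data into consecutive track order, one pass per track sequence. The sketch below is a simplified illustration, assuming a given LBA-to-(pass, track) mapping; the mapping itself and the function name are inventions for the example, not the patent's layout.

```python
def two_pass_schedule(lba_to_track, data):
    """Reorder LBA-ordered writes into two consecutive-track passes.

    data: dict mapping LBA -> payload.
    lba_to_track: dict mapping LBA -> (pass_number, track).
    Returns (pass1, pass2): lists of (track, payload), each sorted by
    track so the transducer head sweeps the radial zone consecutively
    during each pass.
    """
    passes = {1: [], 2: []}
    for lba, payload in data.items():
        pass_no, track = lba_to_track[lba]
        passes[pass_no].append((track, payload))
    return sorted(passes[1]), sorted(passes[2])
```

The point of the reordering is that although consecutive LBAs map to non-contiguous tracks, each pass still writes tracks in consecutive physical order, avoiding long seeks within the zone.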
INTELLIGENT PREFETCH DISK-CACHING TECHNOLOGY
Systems, apparatuses and methods may provide for technology to automatically identify a plurality of non-volatile memory locations associated with a file in response to a close operation with respect to the file and automatically conduct a prefetch from one or more of the plurality of non-volatile memory locations that have been most recently accessed and do not reference cached file segments. The prefetch may be conducted in response to an open operation with respect to the file and on a per-file segment basis.
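The close-time bookkeeping and open-time prefetch above can be sketched as a small tracker. The data structures and names (`PrefetchTracker`, `on_close`, `on_open`) are assumptions for illustration; a real implementation would record physical memory locations rather than segment IDs.

```python
class PrefetchTracker:
    def __init__(self):
        # file -> list of (segment_id, last_access_time), most recent first
        self.file_segments = {}

    def on_close(self, file, accesses):
        """Record, at close time, the segments touched while the file was open.

        accesses: list of (segment_id, access_time) pairs.
        """
        self.file_segments[file] = sorted(
            accesses, key=lambda a: a[1], reverse=True
        )

    def on_open(self, file, cached_segments, limit=4):
        """Return segments to prefetch when the file is reopened:
        most recently accessed first, skipping already-cached segments."""
        candidates = self.file_segments.get(file, [])
        return [seg for seg, _ in candidates if seg not in cached_segments][:limit]
```

This mirrors the two conditions in the abstract: the prefetch targets the most recently accessed locations and excludes any that reference cached file segments.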
SYSTEM AND METHOD FOR NEGATIVE FEEDBACK CACHE DATA FLUSH IN PRIMARY STORAGE SYSTEMS
A method, computer program product, and computer system for determining, by a computing device, a number of dirty pages capable of being generated per process on a backing device. It may be determined whether the number of dirty pages capable of being generated per process on the backing device exceeds a threshold set point of actual dirty pages currently generated per process on the backing device. A variable amount of time to sleep may be determined. Sleep may be executed for the variable amount of time, wherein generation of additional dirty pages is paused.
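The negative-feedback pause above can be sketched as a function that returns the variable sleep time: below the set point the writer runs freely, and above it the sleep grows with the overshoot. The proportional form, the gain, and the cap are assumptions made for the example, not the patent's actual formula.

```python
def throttle_sleep_time(pages_per_proc, set_point, gain=0.01, max_sleep=0.2):
    """Return seconds a process should sleep before dirtying more pages.

    pages_per_proc: dirty pages the process can currently generate.
    set_point: threshold of actual dirty pages per process on the
    backing device. Sleep scales with the relative overshoot and is
    capped so no writer stalls indefinitely.
    """
    if pages_per_proc <= set_point:
        return 0.0                                  # under the set point
    overshoot = (pages_per_proc - set_point) / set_point
    return min(max_sleep, overshoot * gain)         # variable, capped sleep
```

Pausing dirty-page generation for a time proportional to the overshoot is the negative-feedback element: the harder a process pushes past the set point, the longer it sleeps, letting flushing catch up.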
METHOD AND APPARATUS FOR CACHING DATA IN A SOLID STATE DISK (SSD) OF A HYBRID DRIVE THAT INCLUDES THE SSD AND A HARD DISK DRIVE (HDD)
A system includes a read/write module and a caching module. The read/write module is configured to access a first portion of a recording surface of a rotating storage device. Data is stored on the first portion of the recording surface of the rotating storage device at a first density. The caching module is configured to cache data on a second portion of the recording surface of the rotating storage device at a second density. The second portion of the recording surface of the rotating storage device is separate from the first portion of the recording surface of the rotating storage device. The second density is less than the first density.
Two-pass logical write for interlaced magnetic recording
An exemplary write method disclosed herein includes receiving a request to write data to a consecutive sequence of logical block addresses (LBAs) that is mapped to a non-contiguous sequence of data tracks on a storage medium, and writing the data of the consecutive sequence of LBAs to the non-contiguous sequence of data tracks according to a consecutive track order.