G06F12/124

Tiering between storage media in a content aware storage system
11216388 · 2022-01-04

Tiering data between storage media in a content aware storage system is provided. An aspect includes, for each metadata page (MP) of a plurality of MPs: storing a first copy of the MP in a high tier storage, a second copy in an intermediate tier storage, and a third copy in a low tier storage. Upon determining, in response to monitoring available space in the high tier storage, that usage of the high tier storage exceeds a threshold value, an aspect includes identifying a least recently used (LRU) MP, deleting the LRU MP from the high tier storage, and destaging active entries of a metadata journal for the LRU MP. An aspect further includes receiving a request to read one of the plurality of MPs and, upon determining the requested MP is the LRU MP, reading the MP from the intermediate tier storage.
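The tiering flow above (three copies per MP, threshold-driven LRU eviction from the high tier with journal destage, and read fallback to the intermediate tier) can be sketched as follows. All class, method, and field names are illustrative, not from the patent; the capacity threshold stands in for the monitored-space check:

```python
from collections import OrderedDict

class TieredMetadataStore:
    """Sketch: each metadata page (MP) has copies in high, intermediate,
    and low tiers; the high tier evicts its LRU page when a capacity
    threshold is exceeded, and reads of an evicted page fall back to
    the intermediate tier."""

    def __init__(self, high_capacity):
        self.high = OrderedDict()   # MP id -> page, LRU order (oldest first)
        self.mid = {}               # second copy
        self.low = {}               # third copy
        self.high_capacity = high_capacity
        self.journal = {}           # active metadata-journal entries per MP

    def store(self, mp_id, page):
        # Store a copy of the MP in every tier.
        self.mid[mp_id] = page
        self.low[mp_id] = page
        self.high[mp_id] = page
        self.high.move_to_end(mp_id)
        self._enforce_threshold()

    def _enforce_threshold(self):
        # High-tier usage exceeded the threshold: delete the LRU MP
        # from the high tier and destage its active journal entries.
        while len(self.high) > self.high_capacity:
            lru_id, _ = self.high.popitem(last=False)
            self._destage_journal(lru_id)

    def _destage_journal(self, mp_id):
        # Apply any active journal entries for the evicted MP to the
        # surviving lower-tier copies.
        for update in self.journal.pop(mp_id, []):
            self.mid[mp_id] = update
            self.low[mp_id] = update

    def read(self, mp_id):
        if mp_id in self.high:            # fast path: high tier hit
            self.high.move_to_end(mp_id)
            return self.high[mp_id]
        return self.mid[mp_id]            # evicted: intermediate tier
```

A third read fallback to the low tier would follow the same pattern if the intermediate copy were also unavailable.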

METHODS AND SYSTEMS FOR STORAGE, RETRIEVAL, AND VISUALIZATION OF SIGNALS AND SIGNAL FEATURES
20220300434 · 2022-09-22

An implantable device includes a memory and a processor coupled to the memory and configured to perform actions, including: receiving electrical signals from tissue of a patient; and in response to each of a plurality of triggers, storing a portion of the received electrical signals, occurring after the trigger and extending for a limited duration, in the memory on a first-in-first-out basis. Another implantable device includes a memory and a processor coupled to the memory and configured to perform actions, including: receiving electrical signals from tissue of a patient; in response to each of a plurality of triggers, determining at least one feature of the received electrical signals; and storing the at least one feature in the memory on a first-in-first-out basis.
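The first variant (store a fixed-duration window of samples after each trigger, first-in-first-out) maps naturally onto a bounded FIFO. A minimal sketch, with illustrative names and the window expressed as a sample count:

```python
from collections import deque

class SignalRecorder:
    """Sketch: on each trigger, the window of samples occurring after
    the trigger and extending for a limited duration is stored FIFO,
    so the oldest capture is discarded once memory is full."""

    def __init__(self, max_captures, window_len):
        self.captures = deque(maxlen=max_captures)  # FIFO eviction
        self.window_len = window_len

    def on_trigger(self, signal, trigger_idx):
        # Store the portion of the signal occurring after the trigger,
        # limited to window_len samples.
        window = signal[trigger_idx:trigger_idx + self.window_len]
        self.captures.append(list(window))
```

The second variant would store computed features instead of raw windows in the same FIFO structure.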

Deduplication assisted caching policy for content based read cache (CBRC)

The disclosure provides an approach for implementing a deduplication (DD) assisted caching policy for a content based read cache (CBRC). Embodiments include receiving a first input/output (I/O) to write first data in storage as associated with a first logical block address (LBA); when the first data is located in a CBRC or in a DD cache located in memory, incrementing a first deduplication counter associated with the first data; when the first data is located in neither the CBRC nor the DD cache, creating the first deduplication counter; when the first deduplication counter meets a threshold after incrementing, and the first data is not located in the DD cache, adding the first data to the DD cache; and writing the first data to the storage as associated with the first LBA.

System for improving input/output performance

In one embodiment, data communication apparatus includes a network interface including one or more ports for connection to a packet data network and configured to receive content transfer requests from at least one remote device over the network, a storage sub-system to be connected to local peripheral storage devices, and including at least one peripheral interface, and a memory sub-system including a cache and RAM, and processing circuitry to manage transfer of content between the remote device(s) and the local peripheral storage devices via the peripheral interface(s) and the cache, responsively to the content transfer requests, while pacing commencement of serving of respective ones of the content transfer requests responsively to a metric of the storage sub-system so that while ones of the content transfer requests are being served, other ones of the content transfer requests pending serving are queued in at least one pending queue.
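The pacing behavior described above (commence serving a transfer request only when a storage sub-system metric permits, otherwise hold it in a pending queue) can be sketched as follows. The choice of metric here, a cap on in-flight transfers, is an illustrative stand-in, as are all names:

```python
from collections import deque

class PacedTransferManager:
    """Sketch: commencement of serving a content transfer request is
    gated on a metric of the storage sub-system (here, the number of
    in-flight transfers); requests that cannot start yet are queued in
    a pending queue."""

    def __init__(self, max_in_flight):
        self.max_in_flight = max_in_flight
        self.in_flight = 0
        self.pending = deque()   # requests pending serving
        self.started = []        # requests whose serving has commenced

    def submit(self, request):
        self.pending.append(request)
        self._pace()

    def complete(self, request):
        # A transfer finished; capacity frees up, so more may commence.
        self.in_flight -= 1
        self._pace()

    def _pace(self):
        # Commence serving queued requests only while the metric permits.
        while self.pending and self.in_flight < self.max_in_flight:
            self.started.append(self.pending.popleft())
            self.in_flight += 1
```

A real implementation would derive the metric from cache occupancy or peripheral-interface load rather than a fixed in-flight cap.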

Maintaining ghost cache statistics for demoted data elements

A method for maintaining statistics for data elements in a cache is disclosed. The method maintains a heterogeneous cache comprising a higher performance portion and a lower performance portion. The method maintains, within the lower performance portion, a ghost cache containing statistics for data elements that are currently contained in the heterogeneous cache, and data elements that have been demoted from the heterogeneous cache within a specified time interval. The method maintains updates to the statistics in an update area within the higher performance portion. The method determines whether the updates have reached a specified threshold and, in the event the updates have reached the specified threshold, flushes the updates from the update area to the ghost cache to update the statistics. A corresponding system and computer program product are also disclosed.
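The batched-update scheme above can be sketched as follows. The statistic tracked (a per-element access count) and all names are illustrative assumptions; the point is that updates accumulate in a small fast-tier area and reach the ghost cache only on threshold-triggered flushes:

```python
class GhostCacheStats:
    """Sketch: statistics live in a ghost cache in the lower performance
    portion; updates accumulate in an update area in the higher
    performance portion and are flushed to the ghost cache once their
    count reaches a threshold."""

    def __init__(self, flush_threshold):
        self.ghost = {}            # element id -> access count (lower tier)
        self.update_area = []      # pending updates (higher tier)
        self.flush_threshold = flush_threshold

    def record_access(self, element_id):
        self.update_area.append(element_id)
        # Flush only when updates reach the specified threshold.
        if len(self.update_area) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # Apply pending updates from the update area to the ghost cache.
        for eid in self.update_area:
            self.ghost[eid] = self.ghost.get(eid, 0) + 1
        self.update_area.clear()
```

Batching this way keeps the frequent statistic writes on the fast medium and amortizes the slower ghost-cache updates.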

SEMICONDUCTOR APPARATUS AND TRANSFER METHOD
20220083490 · 2022-03-17

A semiconductor apparatus selects a first packet from a plurality of packets stored in a buffer and transfers the first packet. The semiconductor apparatus switches among a plurality of different conditions for grouping the plurality of packets according to a priority order of the plurality of conditions, selects the first packet from a plurality of packets pertaining to a group extracted on the condition selected by the switching according to a given selecting scheme, and transfers the first packet from the buffer.
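The selection logic above can be sketched as follows. The condition predicates and the "oldest first" selecting scheme are illustrative assumptions; the essential shape is trying the grouping conditions in priority order and selecting from the first non-empty group:

```python
def select_packet(buffer, conditions):
    """Sketch: iterate grouping conditions in priority order; the first
    condition that extracts a non-empty group wins, and a packet is
    selected from that group by a given scheme (here: oldest first),
    then transferred out of the buffer."""
    for cond in conditions:                 # priority order
        group = [p for p in buffer if cond(p)]
        if group:
            packet = group[0]               # selecting scheme: oldest first
            buffer.remove(packet)           # transfer it from the buffer
            return packet
    return None                             # buffer empty / no match
```

With an urgent-first condition followed by a catch-all, urgent packets drain before ordinary ones, falling back gracefully when none are urgent.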

Access optimized partial cache collapse

Aspects of the present disclosure relate to systems and methods for improving performance of a partial cache collapse by a processing device. Certain embodiments provide a method for performing a partial cache collapse procedure, the method including: counting, in each cache way of a group of cache ways, a number of dirty cache lines having dirty bits indicating the cache line has been modified; selecting, from the group, at least one cache way for collapse, based on its corresponding number of dirty cache lines; and performing the partial cache collapse procedure based on the at least one cache way selected from the group for collapse.
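The way-selection step above can be sketched as follows. The abstract does not say whether ways with the fewest or the most dirty lines are preferred; this sketch assumes fewest (cheapest to write back before collapse), and all names are illustrative:

```python
def choose_ways_to_collapse(ways, n_collapse):
    """Sketch: count, in each cache way of the group, the number of
    lines whose dirty bit indicates the line was modified, then select
    ways for collapse based on that count (assumed here: fewest dirty
    lines first). Each way is a list of (dirty_bit, data) lines."""
    counts = []
    for idx, way in enumerate(ways):
        dirty = sum(1 for dirty_bit, _ in way if dirty_bit)
        counts.append((dirty, idx))
    counts.sort()                        # fewest dirty lines first
    return [idx for _, idx in counts[:n_collapse]]
```

The collapse procedure itself would then write back the selected ways' remaining dirty lines and power down those ways.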

Cache systems of memory systems and data caching methods of cache systems
11269785 · 2022-03-08

A cache system includes a cache memory having a plurality of blocks, a dirty line list storing status information of a predetermined number of dirty lines among dirty lines in the plurality of blocks, and a cache controller controlling a data caching operation of the cache memory and providing statuses and variation of statuses of the dirty lines, according to the data caching operation, to the dirty line list. The cache controller performs a control operation to always store status information of a least-recently-used (LRU) dirty line into a predetermined storage location of the dirty line list.
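The dirty line list's key invariant, that the LRU dirty line's status always sits at a predetermined storage location, can be sketched with a recency-ordered map whose front slot is that location. Names and the capacity-overflow handling are illustrative:

```python
from collections import OrderedDict

class DirtyLineList:
    """Sketch: status information for at most `capacity` dirty lines,
    kept in recency order so the least-recently-used dirty line is
    always at a fixed location (the front of the list)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # line address -> status, LRU first

    def touch(self, addr, status):
        # A caching operation dirtied or updated a line: record its
        # status and move it to the most-recently-used end.
        self.lines[addr] = status
        self.lines.move_to_end(addr)
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)   # drop oldest tracked entry

    def lru_dirty(self):
        # The LRU dirty line always occupies the predetermined front slot.
        return next(iter(self.lines)) if self.lines else None
```

Keeping the LRU dirty line at a fixed slot lets the controller pick a write-back victim in O(1) without scanning the blocks.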

CACHE REPLACEMENT WITH NO ADDITIONAL MEMORY SPACE
20210326272 · 2021-10-21

A method configures a cache to implement an LRU management technique. The cache has N entries divided into B buckets. Each bucket has a number of entries equal to P entries*M vectors, wherein N=B*P*M. Any P entry within any M vector is ordered using an in-vector LRU ordering process. Any entry within any bucket is ordered in LRU within the vectors and buckets. The LRU management technique moves a found entry to a first position within a same M vector, responsive to a lookup for a specified key, and permutes the found entry and a last entry in a previous M vector, responsive to the found entry already being in the first position within a vector and the same one of the M vectors not being a first vector in the bucket in the moving step.
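The in-bucket ordering above can be sketched as follows. A bucket holds M vectors of P entries; on a hit the entry moves to the front of its vector, and an entry already at the front of a non-first vector is permuted with the last entry of the previous vector, so hot entries migrate toward the bucket's front without extra bookkeeping memory. Names are illustrative:

```python
class BucketLRU:
    """Sketch: one bucket of M vectors, each holding P entries, ordered
    LRU within and across vectors by the move-to-front / permute rule."""

    def __init__(self, vectors):
        self.vectors = vectors   # list of M lists, each of P entries

    def lookup(self, key):
        for vi, vec in enumerate(self.vectors):
            if key in vec:
                pi = vec.index(key)
                if pi > 0:
                    # Move the found entry to the first position within
                    # its vector (in-vector LRU ordering).
                    vec.insert(0, vec.pop(pi))
                elif vi > 0:
                    # Already first, and not in the bucket's first
                    # vector: permute with the last entry of the
                    # previous vector.
                    prev = self.vectors[vi - 1]
                    prev[-1], vec[0] = vec[0], prev[-1]
                return True
        return False
```

Repeated hits thus promote an entry one step per lookup, approximating global LRU order with no auxiliary timestamps or pointers.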