G06F2212/281

PRIORITY-BASED STORAGE AND ACCESS OF COMPRESSED MEMORY LINES IN MEMORY IN A PROCESSOR-BASED SYSTEM

In an aspect, high-priority lines are stored starting at an address aligned to the cache line size (for instance, 64 bytes), and low-priority lines are stored in the memory space left over by the compression of the high-priority lines. The space left by the high-priority lines, and hence the low-priority lines themselves, are managed through pointers that are also stored in memory. In this manner, the contents of low-priority lines can be moved to different memory locations as needed. The efficiency of higher-priority compressed memory accesses is improved by removing the indirection otherwise required to find and access compressed memory lines; this is especially advantageous for immutable compressed contents. The use of pointers for low-priority lines is advantageous due to the full flexibility of placement, especially for mutable compressed contents that may need to move within memory, for instance as they change in size over time.
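
A minimal Python sketch of how this placement might be modeled; the 64-byte line size, the dict-based pointer table, and all identifiers here are illustrative assumptions, not details taken from the abstract:

```python
# Illustrative model of priority-based placement of compressed lines.
# Assumptions (not from the abstract): 64-byte lines, a dict as the
# "pointer table", and pre-compressed byte strings as input.

LINE_SIZE = 64
memory = bytearray(LINE_SIZE * 16)   # a small region of 16 line slots
free_gaps = []                       # (offset, length) left by compression
low_prio_pointers = {}               # line id -> (offset, length) in memory

def store_high_priority(line_id, compressed):
    """High-priority line goes at its aligned slot; no indirection needed."""
    base = line_id * LINE_SIZE
    memory[base:base + len(compressed)] = compressed
    gap = LINE_SIZE - len(compressed)
    if gap:
        free_gaps.append((base + len(compressed), gap))  # leftover space

def store_low_priority(line_id, compressed):
    """Low-priority line is placed in any gap big enough and tracked by pointer."""
    for i, (off, length) in enumerate(free_gaps):
        if length >= len(compressed):
            memory[off:off + len(compressed)] = compressed
            low_prio_pointers[line_id] = (off, len(compressed))
            free_gaps[i] = (off + len(compressed), length - len(compressed))
            return
    raise MemoryError("no gap large enough")

def read_high_priority(line_id, size):
    base = line_id * LINE_SIZE           # direct, address-aligned access
    return bytes(memory[base:base + size])

def read_low_priority(line_id):
    off, size = low_prio_pointers[line_id]   # one level of indirection
    return bytes(memory[off:off + size])
```

The high-priority read path above needs only an aligned base address, mirroring the removal of indirection described in the abstract, while the pointer table gives low-priority lines full freedom of placement.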

METHODS FOR MINIMIZING FRAGMENTATION IN SSD WITHIN A STORAGE SYSTEM AND DEVICES THEREOF
20170371556 · 2017-12-28 ·

A method, non-transitory computer readable medium, and device that assist with reducing memory fragmentation in solid state devices include identifying an allocation area within an address range to write data from a cache. Next, it is determined whether the identified allocation area includes previously stored data. The previously stored data is read from the identified allocation area when it is determined that the identified allocation area comprises previously stored data. Next, both the write data from the cache and the read previously stored data are written back into the identified allocation area sequentially through the address range.
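
A minimal sketch of the read-merge-rewrite flow described above; the allocation-area size, the dict-backed device, and the function names are assumptions made for the example, not taken from the filing:

```python
# Illustrative sketch: rewrite an allocation area sequentially with both
# the cached write data and any data previously stored in that area.

ALLOC_AREA_SIZE = 8            # blocks per allocation area (assumed)
ssd = {}                       # block address -> data

def flush_cache_to_area(area_start, cached_blocks):
    """Write cached data plus previously stored data back sequentially."""
    area = range(area_start, area_start + ALLOC_AREA_SIZE)

    # Read previously stored data found inside the identified allocation area.
    previously_stored = [ssd[a] for a in area if a in ssd]

    # Merge and write back sequentially through the address range so the
    # area stays contiguous rather than fragmented.
    merged = list(cached_blocks) + previously_stored
    for addr, data in zip(area, merged):
        ssd[addr] = data
```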

SYNCHRONOUS INPUT/OUTPUT (I/O) CACHE LINE PADDING

A computer-implemented method for synchronous input/output (I/O) cache line padding is described. The cache line padding occurs between a server having a processor executing an operating system and a recipient control unit. The method can include receiving, via the processor at the recipient control unit, a partial line direct memory access (DMA) write request; fetching, via the processor, a device table entry (DTE) associated with the partial line DMA write request; determining, via the processor, a cache line size for a synchronous input/output (I/O) cache line; and writing a full cache line DMA write request by padding, via the processor, the partial line DMA write request with a padded portion, where the padded portion is based on the cache line size.
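
A minimal sketch of padding a partial-line DMA write to a full cache line; the 256-byte line size, the dict-based device table, and the zero-fill padding are assumptions for the example only:

```python
# Illustrative sketch: fetch the DTE, determine the cache line size, and
# pad the partial-line write so a full line is written.

device_table = {0x1: {"cache_line_size": 256}}   # DTE lookup (assumed layout)

def pad_partial_dma_write(dte_id, payload: bytes) -> bytes:
    """Return a full-cache-line write built from a partial-line payload."""
    line_size = device_table[dte_id]["cache_line_size"]
    pad_len = (-len(payload)) % line_size        # bytes needed to reach a boundary
    return payload + b"\x00" * pad_len           # padded portion based on line size

full_line = pad_partial_dma_write(0x1, b"partial payload")
assert len(full_line) % 256 == 0
```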

SYSTEM AND METHOD FOR PROTECTING GPU MEMORY INSTRUCTIONS AGAINST FAULTS

A system and method for protecting memory instructions against faults are described. The system and method include converting slave instructions to dummy operations, modifying the memory arbiter to issue up to N master and N slave global/shared memory instructions per cycle, sending the master memory requests to the memory system, using the slave requests for error checking, entering the master requests into the GM/LM FIFO, storing the slave requests in a register, and comparing the entered master requests with the stored slave requests.
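
A minimal sketch of the duplicate-and-compare flow; the value of N, the FIFO and register names, and the request representation are assumptions, since the patent describes this at the hardware-arbiter level:

```python
# Illustrative sketch: master requests go to the memory system (modeled by
# the GM/LM FIFO); slave copies are held back purely for error checking.

from collections import deque

N = 2                        # requests of each kind issued per cycle (assumed)
gm_lm_fifo = deque()         # master requests entered into the GM/LM FIFO
slave_register = []          # slave copies stored for comparison

def issue_cycle(master_reqs, slave_reqs):
    for m, s in zip(master_reqs[:N], slave_reqs[:N]):
        gm_lm_fifo.append(m)         # master request proceeds to memory
        slave_register.append(s)     # slave request is only used for checking

def check_for_faults():
    """Compare issued master requests against the stored slave copies."""
    faults = []
    while gm_lm_fifo and slave_register:
        m, s = gm_lm_fifo.popleft(), slave_register.pop(0)
        if m != s:
            faults.append((m, s))    # mismatch indicates a fault in the request path
    return faults
```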

USING A SECOND CONTENT-ADDRESSABLE MEMORY TO MANAGE MEMORY BURST ACCESSES IN MEMORY SUB-SYSTEMS
20220382677 · 2022-12-01 ·

A request to access data at an address is received from a host system. A tag associated with the address is determined not to be found in first entries in a first content-addressable memory (CAM) or in second entries in a second CAM. Responsive to determining that the tag is not found in the first entries or in the second entries, a particular entry is selected from among the first entries that each include valid data. A determination is made whether the particular entry satisfies a condition indicating that content in the particular entry is to be stored in the second CAM. The content is associated with other data stored in the cache. Responsive to determining that the condition is satisfied, the content of the particular entry is stored in one of the second entries to maintain the data in the cache.
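
A minimal sketch of tag lookup across two content-addressable memories, with spill of a first-CAM entry into the second CAM instead of outright eviction; the CAM sizes, the retention condition, and the dict-based model are assumptions for the example:

```python
# Illustrative model: a miss in both CAMs selects a valid first-CAM entry,
# and if it satisfies the retention condition its content moves to CAM 2
# so the associated cached data is kept.

FIRST_CAM_SIZE = 4
first_cam = {}     # tag -> metadata for recently accessed cached data
second_cam = {}    # tag -> metadata retained (e.g., for burst accesses)

def meets_retention_condition(meta):
    # Assumed condition: the entry was hit more than once.
    return meta["hits"] > 1

def access(address):
    tag = address >> 6                   # tag derived from the address (assumed)
    if tag in first_cam:
        first_cam[tag]["hits"] += 1
        return "hit-first-cam"
    if tag in second_cam:
        second_cam[tag]["hits"] += 1
        return "hit-second-cam"

    # Miss in both CAMs: make room in the first CAM if it is full.
    if len(first_cam) >= FIRST_CAM_SIZE:
        victim_tag, victim_meta = next(iter(first_cam.items()))  # a valid entry
        del first_cam[victim_tag]
        if meets_retention_condition(victim_meta):
            second_cam[victim_tag] = victim_meta   # keep the data cached via CAM 2
    first_cam[tag] = {"hits": 1}
    return "miss"
```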

SCALED SET DUELING FOR CACHE REPLACEMENT POLICIES
20170357588 · 2017-12-14 ·

A processing system includes a cache with cache lines that are partitioned into a first subset of the cache lines and second subsets of the cache lines. The processing system also includes one or more counters that are associated with the second subsets of the cache lines. The processing system further includes a processor configured to modify the one or more counters in response to a cache hit or a cache miss associated with the second subsets. The one or more counters are modified by an amount determined by one or more characteristics of the memory access request that generated the cache hit or the cache miss.
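
A minimal sketch of a set-dueling counter whose update amount is scaled by a characteristic of the request; the weighting by request type, the two sample subsets, and the sign convention are assumptions chosen for the example:

```python
# Illustrative sketch: two sample subsets each follow a candidate policy,
# and misses update a shared counter by a weight tied to the request type.

policy_counter = 0   # >0 means policy A accumulated more weighted misses (assumed)

WEIGHT = {"prefetch": 1, "demand": 4}   # demand misses count more (assumed scaling)

def on_sample_access(subset, outcome, request_type):
    """Update the counter only for accesses to the dedicated sample subsets."""
    global policy_counter
    if outcome != "miss":
        return
    delta = WEIGHT[request_type]
    if subset == "follows_policy_A":
        policy_counter += delta
    elif subset == "follows_policy_B":
        policy_counter -= delta

def policy_for_follower_sets():
    # Follower sets adopt whichever policy accumulated fewer weighted misses.
    return "policy_B" if policy_counter > 0 else "policy_A"
```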

DEMOTE SCAN PROCESSING TO DEMOTE TRACKS FROM CACHE
20170351619 · 2017-12-07 ·

Provided are a computer program product, system, and method for demote scan processing to demote tracks from cache. Tracks in the storage that are stored in the cache are indicated in a cache list. The cache list is scanned to determine unmodified tracks to demote. In response to processing an indicated modified track in the cache list while scanning the cache list, a destage is initiated for the processed indicated modified track and the scan of the cache list continues to determine unmodified tracks. In response to processing a number of modified tracks indicated in the cache list, a determination is made of an unmodified track in the cache list, and the scan continues, from the determined unmodified track, for unmodified tracks to demote.
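
A minimal sketch of the demote scan; the threshold of modified tracks, the list representation, and the callback names are assumptions for the example:

```python
# Illustrative sketch: demote unmodified tracks, destage modified tracks
# while continuing the scan, and after a threshold of modified tracks
# skip ahead to the next unmodified track and resume from there.

MODIFIED_THRESHOLD = 3   # assumed number of modified tracks before skipping ahead

def demote_scan(cache_list, demote, destage):
    """cache_list: list of (track_id, modified) entries in scan order."""
    modified_seen = 0
    i = 0
    while i < len(cache_list):
        track, modified = cache_list[i]
        if not modified:
            demote(track)                  # unmodified track can be demoted now
            modified_seen = 0
        else:
            destage(track)                 # initiate destage; keep scanning
            modified_seen += 1
            if modified_seen >= MODIFIED_THRESHOLD:
                # Determine the next unmodified track and continue from it.
                while i + 1 < len(cache_list) and cache_list[i + 1][1]:
                    i += 1
                modified_seen = 0
        i += 1
```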

Guest ordering of host file system writes
09836402 · 2017-12-05 ·

Systems and methods for data storage management technology that enables a guest module of a virtual machine to indicate an order in which a host module should write data from physical memory to a secondary storage. An example method may comprise: identifying, by a processing device executing a host module, a plurality of modifications to physical memory made by a plurality of direct access operations executed by a guest module of a virtual machine; determining, by the host module, an order of the plurality of modifications to physical memory; receiving, by the host module, a synchronization request from the guest module; and responsive to the synchronization request, copying, by the host module, data from the physical memory to a secondary storage in view of the order of the plurality of modifications.
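
A minimal sketch of a host module that records the order of guest direct-access modifications and flushes them in that order on a synchronization request; page-level granularity and all names are assumptions for the example:

```python
# Illustrative sketch: the host tracks which pages the guest modified and
# in what order, then copies them to secondary storage in that order when
# the guest issues a synchronization request.

class HostModule:
    def __init__(self, physical_memory, secondary_storage):
        self.memory = physical_memory      # page -> data in physical memory
        self.storage = secondary_storage   # page -> data on secondary storage
        self.modification_order = []       # pages in the order the guest touched them

    def on_guest_direct_access(self, page):
        """Record a modification made by a guest direct-access operation."""
        if page in self.modification_order:
            self.modification_order.remove(page)
        self.modification_order.append(page)   # most recent modification last

    def on_sync_request(self):
        """Copy modified pages to secondary storage in modification order."""
        for page in self.modification_order:
            self.storage[page] = self.memory[page]
        self.modification_order.clear()
```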

CACHING SYSTEMS AND METHODS FOR HARD DISK DRIVES AND HYBRID DRIVES
20170344276 · 2017-11-30 ·

A system includes a read/write module and a caching module. The read/write module is configured to access a first portion of a recording surface of a rotating storage device. Data is stored on the first portion of the recording surface of the rotating storage device at a first density. The caching module is configured to cache data on a second portion of the recording surface of the rotating storage device at a second density. The second portion of the recording surface of the rotating storage device is separate from the first portion of the recording surface of the rotating storage device. The second density is less than the first density.
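
A minimal sketch of routing accesses either to the higher-density main portion or the lower-density cache portion of the same recording surface; the dict-backed regions and the "hot data" heuristic are assumptions for the example:

```python
# Illustrative sketch: frequently written blocks are cached on the second,
# lower-density portion of the surface; reads prefer the cached copy.

main_region = {}    # LBA -> data stored at the first (higher) density
cache_region = {}   # LBA -> data cached at the second (lower) density

def write(lba, data, write_counts):
    write_counts[lba] = write_counts.get(lba, 0) + 1
    if write_counts[lba] >= 2:          # assumed caching heuristic
        cache_region[lba] = data        # cache at the second, lower density
    else:
        main_region[lba] = data         # store at the first, higher density

def read(lba):
    # The cached copy, if present, is preferred over the main-region copy.
    return cache_region.get(lba, main_region.get(lba))
```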

Cost Effective Service Level Agreement Data Management

The embodiments described herein relate to dynamically managing metric data of a network environment with respect to a data storage system. A data retention policy is analyzed, which includes extracting one or more metric definitions from the retention policy. A relevance of a set of metric data is identified based on the analysis. The set of metric data includes an aggregation of one or more metric observations. A storage location in the data storage system for the set of metric data is selected based on the identified relevance. The data storage system includes a cache storage location and a persistent storage location. The set of metric data is retained in the selected storage location. As the retention policy is modified, select data may be re-classified and moved within the storage system based on the re-classification.
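
A minimal sketch of classifying metric data from a retention policy and placing it in the cache or persistent storage location; the policy format, the relevance rule, the aggregation, and both store objects are assumptions made for the example:

```python
# Illustrative sketch: the retention policy maps metrics to a relevance,
# which selects the storage location; a policy change re-classifies and
# moves data between locations.

retention_policy = {"latency_p99": "high", "queue_depth": "low"}  # metric -> relevance

cache_store = {}        # high-relevance aggregated metrics kept hot
persistent_store = {}   # everything else retained on slower storage

def place_metric(name, observations):
    aggregated = sum(observations) / len(observations)   # simple aggregation (assumed)
    relevance = retention_policy.get(name, "low")
    target = cache_store if relevance == "high" else persistent_store
    target[name] = aggregated

def reclassify_on_policy_change():
    """Move metrics between locations when the retention policy changes."""
    for name in list(cache_store):
        if retention_policy.get(name, "low") != "high":
            persistent_store[name] = cache_store.pop(name)
    for name in list(persistent_store):
        if retention_policy.get(name) == "high":
            cache_store[name] = persistent_store.pop(name)
```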