Patent classifications
G06F2212/70
Non-sequential write for sequential read back
A storage device controller addresses consecutively-addressed portions of incoming data to consecutive data tracks on a storage medium and writes the consecutively-addressed portions to the consecutive data tracks in a non-consecutive track order. In one implementation, the storage device controller reads the data back from the consecutive data tracks in a consecutive address order in a single sequential read operation.
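The write/read asymmetry the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration: the even-tracks-then-odd-tracks write order is one arbitrary example of a non-consecutive track order, not the claimed mechanism.

```python
def write_out_of_order(portions):
    """Assign consecutively-addressed portion i to consecutive track i,
    but physically write even-numbered tracks first, then odd-numbered
    tracks (one possible non-consecutive track order)."""
    tracks = {}
    write_order = ([i for i in range(len(portions)) if i % 2 == 0]
                   + [i for i in range(len(portions)) if i % 2 == 1])
    for track in write_order:
        tracks[track] = portions[track]   # portion i still lands on track i
    return tracks

def sequential_read_back(tracks):
    """A single sequential pass over the consecutive tracks returns the
    data in consecutive address order."""
    return [tracks[t] for t in sorted(tracks)]
```

Because the address-to-track mapping stays consecutive, the out-of-order writes impose no reordering cost at read time.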
INVOKING INPUT/OUTPUT (I/O) THREADS AND DEMOTE THREADS ON PROCESSORS TO DEMOTE TRACKS FROM A CACHE
Provided are a computer program product, system, and method for invoking Input/Output (I/O) threads and demote threads on processors to demote tracks from a cache. An Input/Output (I/O) thread, executed by a processor, processes I/O requests directed to tracks from the storage that are stored in the cache. A demote thread, executed by the processor, processes a demote ready list, indicating tracks eligible to demote from cache, to select tracks to demote from the cache to free cache segments in the cache. After processing a number of I/O requests, the I/O thread processes the demote ready list to demote tracks from the cache in response to determining that a number of free cache segments in the cache is below a free cache segment threshold.
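A toy model of the I/O-thread path can make the trigger condition concrete. The batch size, threshold, and demotion count below are illustrative assumptions, not values from the patent.

```python
from collections import deque

class Cache:
    def __init__(self, total_segments, free_threshold, io_batch):
        self.free = total_segments
        self.free_threshold = free_threshold
        self.io_batch = io_batch          # I/O requests between demote checks
        self.demote_ready = deque()       # tracks eligible for demotion
        self.requests_since_check = 0

    def io_request(self, track):
        """I/O-thread path: serve the request, then, after a number of
        requests, demote if free segments fall below the threshold."""
        if self.free > 0:
            self.free -= 1                # the track occupies a cache segment
        self.demote_ready.append(track)   # assume it becomes demote-eligible
        self.requests_since_check += 1
        if (self.requests_since_check >= self.io_batch
                and self.free < self.free_threshold):
            self.demote_batch(n=2)
            self.requests_since_check = 0

    def demote_batch(self, n):
        """Demote-thread work: free segments by demoting listed tracks."""
        for _ in range(min(n, len(self.demote_ready))):
            self.demote_ready.popleft()
            self.free += 1
```

The point of the design is that the I/O thread only pays the demotion cost when free space is actually scarce, rather than on every request.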
PRESERVATION OF MODIFIED CACHE DATA IN LOCAL NON-VOLATILE STORAGE FOLLOWING A FAILOVER
A dual-server based storage system maintains a first cache and a first non-volatile storage (NVS) in a first server, and a second cache and a second NVS in a second server, where data in the first cache is also written in the second NVS and data in the second cache is also written in the first NVS. In response to a failure of the first server, a determination is made as to whether space exists in the second NVS to accommodate the data stored in the second cache. In response to determining that space exists in the second NVS to accommodate the data stored in the second cache, the data is transferred from the second cache to the second NVS.
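The failover decision reduces to a capacity check on the surviving server, sketched below. The dict-based cache and NVS are stand-ins; the space-accounting policy is an assumption for illustration.

```python
class Server:
    def __init__(self, nvs_capacity):
        self.cache = {}            # modified (dirty) data in volatile cache
        self.nvs = {}              # non-volatile copy of the peer's cache data
        self.nvs_capacity = nvs_capacity

def handle_failover(survivor):
    """On peer failure: if the survivor's local NVS has room, preserve the
    survivor's own modified cache data in that NVS so it is no longer
    protected only by the (now failed) peer."""
    needed = len(survivor.cache)
    available = survivor.nvs_capacity - len(survivor.nvs)
    if needed <= available:
        survivor.nvs.update(survivor.cache)   # data now survives power loss
        return True
    return False
```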
INVOKING DEMOTE THREADS ON PROCESSORS TO DEMOTE TRACKS INDICATED IN DEMOTE READY LISTS FROM A CACHE WHEN A NUMBER OF FREE CACHE SEGMENTS IN THE CACHE IS BELOW A FREE CACHE SEGMENT THRESHOLD
Provided are a computer program product, system, and method for invoking demote threads on processors to demote tracks from a cache. A plurality of demote ready lists indicate tracks eligible to demote from the cache. In response to determining that a number of free cache segments in the cache is below a free cache segment threshold, a determination is made of a number of demote threads to invoke on processors based on the number of free cache segments and the free cache segment threshold. The determined number of demote threads are invoked to demote tracks in the cache indicated in the demote ready lists, wherein each invoked demote thread processes one of the demote ready lists to select tracks to demote from the cache to free cache segments in the cache.
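One plausible policy for sizing the demote-thread count from the free-segment deficit is sketched here. The proportional scaling and the cap are illustrative choices, not the patent's formula.

```python
import math

def demote_threads_to_invoke(free_segments, free_threshold, max_threads=8):
    """Hypothetical policy: the further free segments fall below the
    threshold, the more demote threads are invoked, up to max_threads
    (one thread per demote ready list)."""
    if free_segments >= free_threshold:
        return 0
    deficit_ratio = (free_threshold - free_segments) / free_threshold
    return min(max_threads, max(1, math.ceil(deficit_ratio * max_threads)))
```

Scaling the thread count with the deficit lets a mildly full cache pay for one background thread while a nearly exhausted cache recruits every available list processor.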
Synchronizing updates of page table status indicators and performing bulk operations
A synchronization capability synchronizes updates to page tables by forcing updates in cached entries to be made visible in memory (i.e., in in-memory page table entries). A synchronization instruction is used that ensures, after the instruction has completed, that updates to the cached entries that occurred prior to the synchronization instruction are made visible in memory. Synchronization may be used to facilitate memory management operations, such as bulk operations used to change a large section of memory to read-only, operations to manage a free list of memory pages, and/or operations associated with terminating processes.
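The ordering guarantee can be modeled in a few lines: cached-entry updates are buffered, and the bulk operation issues the synchronization step first so no prior update is lost. The data structures are stand-ins for hardware-cached page table entries, not the claimed design.

```python
class PageTable:
    def __init__(self, pages):
        self.memory_ptes = {p: {"dirty": False, "ro": False} for p in pages}
        self.cached_updates = {}      # updates held only in cached entries

    def touch(self, page):
        self.cached_updates[page] = {"dirty": True}   # not yet in memory

    def sync(self):
        """Synchronization instruction: force cached-entry updates to be
        visible in the in-memory page table entries."""
        for page, upd in self.cached_updates.items():
            self.memory_ptes[page].update(upd)
        self.cached_updates.clear()

    def make_readonly_bulk(self, pages):
        """Bulk operation: synchronize first, so dirty indicators set
        before the operation are visible before pages go read-only."""
        self.sync()
        for p in pages:
            self.memory_ptes[p]["ro"] = True
```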
CACHING SYSTEMS AND METHODS FOR HARD DISK DRIVES AND HYBRID DRIVES
A system includes a read/write module and a caching module. The read/write module is configured to access a first portion of a recording surface of a rotating storage device. Data is stored on the first portion of the recording surface of the rotating storage device at a first density. The caching module is configured to cache data on a second portion of the recording surface of the rotating storage device at a second density. The second portion of the recording surface of the rotating storage device is separate from the first portion of the recording surface of the rotating storage device. The second density is less than the first density.
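The capacity trade the abstract implies, i.e., sacrificing density on a separate cache portion of the surface, can be stated numerically. The fraction and density figures below are made-up illustrations.

```python
def drive_layout(total_tracks, cache_fraction, main_density, cache_density):
    """Split the recording surface into a main portion at a first (higher)
    density and a separate cache portion at a second (lower) density.
    Densities are in arbitrary capacity units per track."""
    assert cache_density < main_density    # the abstract's constraint
    cache_tracks = int(total_tracks * cache_fraction)
    main_tracks = total_tracks - cache_tracks
    return {"main_capacity": main_tracks * main_density,
            "cache_capacity": cache_tracks * cache_density}
```

The lower-density cache portion gives up capacity in exchange for more robust, faster-to-access writes, which suits its role as a cache.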
REDUCING LATENCY BY CACHING DERIVED DATA AT AN EDGE SERVER
To deliver up-to-date, coherent user data to applications upon request, the disclosed technology includes systems and methods for caching data and metadata after it has been synchronously loaded, for future retrieval with a page load time close to zero milliseconds. To provide this experience, data needs to be stored as locally to a user as possible, in the cache on the local device or in an edge cache located geographically nearby, for use in responding to requests. Applications that maintain caches of API results can be notified of an invalidation, detect it, propagate it to any further client tiers with the appropriate derivative type mapping, and refresh their cached values so that clients need not synchronously repeat the API requests. This ensures that the client has access to the most up-to-date copy of the data as inexpensively as possible in terms of bandwidth and latency.
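The invalidation-propagation step can be sketched as a cache that knows which derived entries depend on each API result and which client tiers sit below it. The class, key names, and mapping shape are hypothetical, not the disclosed system's API.

```python
class EdgeCache:
    def __init__(self, derivative_map=None):
        self.store = {}
        self.derivative_map = derivative_map or {}  # API key -> derived keys
        self.clients = []                           # further client tiers

    def put(self, key, value):
        self.store[key] = value

    def invalidate(self, key):
        """Drop the cached API result and its mapped derivative entries,
        then propagate the invalidation to each client tier."""
        self.store.pop(key, None)
        for derived in self.derivative_map.get(key, []):
            self.store.pop(derived, None)
        for client in self.clients:
            client.invalidate(key)
```

Pushing invalidations down the tiers is what lets clients refresh lazily instead of re-issuing synchronous API requests on every load.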
METHOD OF CONTROLLING STORAGE DEVICE AND RANDOM ACCESS MEMORY AND METHOD OF CONTROLLING NONVOLATILE MEMORY DEVICE AND BUFFER MEMORY
A method of controlling a storage device and a random access memory includes, when a size of write-requested data is greater than a threshold, writing the write-requested data in the storage device and writing an address of the storage device in which the write-requested data is written in the random access memory. When the size of the write-requested data is smaller than or equal to the threshold, the write-requested data is written in the random access memory. The threshold corresponds to a size greater than the size of the area allocated in the random access memory to store the address.
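The size-based routing rule is easy to express directly. The byte-length measure and the address bookkeeping below are illustrative simplifications of the claimed control method.

```python
class HybridWriter:
    def __init__(self, threshold, addr_size):
        # The threshold must exceed the RAM area an address occupies,
        # otherwise storing the address could cost more RAM than the data.
        assert threshold > addr_size
        self.threshold = threshold
        self.storage = {}        # storage device contents, by address
        self.ram = {}            # RAM: either small data or an address
        self.next_addr = 0

    def write(self, key, data):
        if len(data) > self.threshold:
            addr = self.next_addr
            self.storage[addr] = data         # large data -> storage device
            self.ram[key] = ("addr", addr)    # RAM keeps only the address
            self.next_addr += 1
        else:
            self.ram[key] = ("data", data)    # small data stays in RAM
```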
Caching and deduplication of data blocks in cache memory
Techniques for deduplicating data in cache memory include determining that a first data block stored in the cache memory matches a second data block stored in the cache memory. It is further determined that a number of accesses associated with at least one of the first data block or the second data block is equal to or greater than a threshold number of accesses. In response to determining that the number of accesses is equal to or greater than the threshold number of accesses, the first data block is deduplicated in the cache memory.
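The two-part condition, matching blocks plus a hot-enough access count, can be sketched as below. The byte-equality match and the "share one copy" step are simplifications; real cache dedup would compare fingerprints and remap references.

```python
class DedupCache:
    def __init__(self, access_threshold):
        self.blocks = {}          # cache key -> block contents
        self.accesses = {}        # cache key -> access count
        self.access_threshold = access_threshold

    def read(self, key):
        self.accesses[key] = self.accesses.get(key, 0) + 1
        return self.blocks[key]

    def maybe_dedup(self, key_a, key_b):
        """Deduplicate only when the blocks match AND at least one of
        them has been accessed at least access_threshold times."""
        if self.blocks[key_a] != self.blocks[key_b]:
            return False
        hot = max(self.accesses.get(key_a, 0), self.accesses.get(key_b, 0))
        if hot >= self.access_threshold:
            self.blocks[key_a] = self.blocks[key_b]  # share a single copy
            return True
        return False
```

Gating dedup on access frequency limits the comparison and remapping cost to blocks whose cache residency is actually worth optimizing.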
Technologies for indirect branch target security
Technologies for indirect branch target security include a computing device having a processor to execute an indirect branch instruction. The processor may determine an indirect branch target of the indirect branch instruction, load a memory tag associated with the indirect branch target, and determine whether the memory tag is set. The processor may generate a security fault if the memory tag is not set. The processor may load an encrypted indirect branch target, decrypt the encrypted branch target using an activation record key stored in an activation key register, and perform a jump to the indirect branch target. The processor may generate a next activation record coordinate as a function of the activation record key and a return address of a call instruction and generate the next activation record key as a function of the next activation record coordinate. Other embodiments are described and claimed.
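A software model of the tag check and the key chaining is sketched below. The XOR cipher and the SHA-256 stand-in for the hardware key-derivation function are loud simplifications; only the control flow (decrypt, check tag, fault or jump, derive next key) mirrors the abstract.

```python
import hashlib

def next_activation_key(current_key, return_address):
    """Toy key schedule: generate the next activation record coordinate
    as a function of the current key and a call's return address, then
    derive the next key from that coordinate (hash stands in for the
    processor's function)."""
    coordinate = hashlib.sha256(
        current_key.to_bytes(8, "little")
        + return_address.to_bytes(8, "little")
    ).digest()
    return int.from_bytes(coordinate[:8], "little")

class TagProtectedBranch:
    def __init__(self):
        self.tags = set()     # addresses whose memory tag is set

    def set_tag(self, addr):
        self.tags.add(addr)   # mark addr as a legitimate branch target

    def jump_indirect(self, encrypted_target, key):
        """Decrypt the stored target (XOR stands in for the cipher using
        the activation record key), then fault unless the target's
        memory tag is set."""
        target = encrypted_target ^ key
        if target not in self.tags:
            raise RuntimeError("security fault: memory tag not set")
        return target
```

An attacker who overwrites the encrypted target without the activation record key decrypts to a garbage address whose tag is almost certainly unset, triggering the security fault.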