G06F2212/281

Multicast tree-based data distribution in distributed shared cache

Systems and methods for multicast tree-based data distribution in a distributed shared cache. An example processing system comprises: a plurality of processing cores, each processing core communicatively coupled to a cache; a tag directory associated with the caches of the plurality of processing cores; a shared cache associated with the tag directory; and a processing logic configured, responsive to receiving an invalidate request with respect to a certain cache entry, to: allocate, within the shared cache, a shared cache entry corresponding to the certain cache entry; transmit, to at least one of a tag directory or a processing core that last accessed the certain cache entry, an update read request with respect to the certain cache entry; and, responsive to receiving an update of the certain cache entry, broadcast the update to at least one of one or more tag directories or one or more processing cores identified by a tag corresponding to the certain cache entry.
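The invalidate-then-broadcast flow above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's hardware logic; all class names, fields, and the use of a sharer set as the "tag" are assumptions made for the sketch.

```python
class Core:
    """Illustrative processing core with a private cache."""
    def __init__(self):
        self.cache = {}

    def read_line(self, addr):
        return self.cache[addr]

    def deliver_update(self, addr, data):
        self.cache[addr] = data


class TagDirectory:
    """Illustrative tag directory backed by a shared cache."""
    def __init__(self):
        self.shared_cache = {}   # address -> data
        self.sharers = {}        # address -> set of core ids (the "tag")
        self.last_writer = {}    # address -> core that last accessed the line

    def record_access(self, core_id, addr):
        self.sharers.setdefault(addr, set()).add(core_id)
        self.last_writer[addr] = core_id

    def invalidate(self, addr, cores):
        """On an invalidate request: allocate a shared-cache entry, send an
        update-read to the last accessor, then broadcast the update to every
        sharer recorded in the tag."""
        self.shared_cache[addr] = None                  # allocate entry
        update = cores[self.last_writer[addr]].read_line(addr)
        self.shared_cache[addr] = update
        for core_id in self.sharers.get(addr, set()):   # multicast
            cores[core_id].deliver_update(addr, update)
```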

Increased destaging efficiency for smoothing of destage tasks based on speed of disk drives

For increased destaging efficiency, destaging tasks are smoothed to reduce long input/output (I/O) read operations in a computing environment. The ramp-up of the destaging tasks is adjusted based on the speed of the disk drives when smoothing the destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks, by calculating destaging tasks according to one of a standard time interval and a variable recomputed destaging task interval.
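One smoothing step of the kind described above might look like the following sketch. The step-size rule, the interval formula, and all parameter names are assumptions for illustration, not the patent's actual computation.

```python
def smooth_destage_tasks(current, desired, drive_speed_factor,
                         standard_interval=1.0):
    """One smoothing step: move the current task count toward the desired
    count; faster drives ramp up (or down) in larger steps.
    Returns (new_task_count, next_interval_seconds)."""
    if current == desired:
        return current, standard_interval
    step = max(1, int(drive_speed_factor))   # faster drives -> bigger step
    if current < desired:
        new = min(desired, current + step)
    else:
        new = max(desired, current - step)
    # Variable recomputed interval (illustrative): poll sooner while a
    # large gap remains, fall back to the standard interval when closed.
    gap = abs(desired - new)
    interval = standard_interval if gap == 0 else standard_interval * 10 / (gap + 10)
    return new, interval
```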

Compressing portions of a buffer cache using an LRU queue

Techniques are described for compressing cache pages from an LRU (Least-Recently-Used) queue so that data takes longer to age off and be removed from the cache. This increases the likelihood that data will be available within the cache upon subsequent re-access, reducing the need for costly disk accesses due to cache misses.
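A minimal sketch of the idea, assuming an in-memory queue and zlib compression (the capacity, eviction policy details, and the "compress once before evicting" rule are illustrative choices, not the described implementation):

```python
import zlib
from collections import OrderedDict

class CompressingCache:
    """LRU queue whose coldest page is compressed in place instead of
    being evicted outright, so data ages off more slowly."""
    def __init__(self, capacity=4):
        self.q = OrderedDict()          # key -> (bytes, is_compressed); LRU first
        self.capacity = capacity

    def get(self, key):
        data, compressed = self.q.pop(key)
        if compressed:
            data = zlib.decompress(data)    # re-access decompresses the page
        self.q[key] = (data, False)         # promote to the MRU end
        return data

    def put(self, key, data):
        self.q[key] = (data, False)
        while len(self.q) > self.capacity:
            lru_key = next(iter(self.q))
            page, compressed = self.q[lru_key]
            if compressed:
                self.q.popitem(last=False)          # evict only compressed pages
            else:
                # Compress instead of evicting: the page gets one more
                # pass through the queue before it can be removed.
                self.q[lru_key] = (zlib.compress(page), True)
                break
```

Because the LRU page is compressed rather than dropped, a subsequent `get` can still serve it from memory at the cost of one decompression, avoiding a disk read.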

MEMORY SYSTEM AND OPERATION METHOD THEREOF
20170220472 · 2017-08-03

A memory system may include a plurality of first and second memory devices, each comprising M-bit multi-level cells (MLCs), M-bit multi-buffers, and transmission buffers; a cache memory suitable for caching data input to or output from the plurality of first and second memory devices; and a controller suitable for programming data cached by the cache memory to a memory device selected from among the first and second memory devices by transferring the program data to the M-bit multi-buffers of the selected memory device whenever M bits of the program data are cached in the cache memory, and for controlling the selected memory device to perform the necessary parts of a program preparation operation, except for a secondary preparation operation, until the input of the program data ends or the M-bit multi-buffers of the selected memory device are full.
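The buffering condition, stripped of the device details, can be illustrated as follows. The bit-stream representation and the buffer capacity are assumptions for the sketch; the actual system transfers data between hardware buffers.

```python
def program_via_cache(stream_bits, m, multi_buffer_capacity):
    """Sketch of the caching flow: each time M bits of program data land in
    the cache, they are transferred to the selected device's M-bit
    multi-buffers; input stops when the stream ends or the buffers fill."""
    buffers = []
    cache = []
    for bit in stream_bits:
        cache.append(bit)
        if len(cache) == m:                     # M bits cached -> transfer
            buffers.append(tuple(cache))
            cache.clear()
            if len(buffers) == multi_buffer_capacity:
                break                           # multi-buffers are full
    return buffers
```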

CACHE OFFLOAD ACROSS POWER FAIL

The disclosed technology provides for selection of a subset of available non-volatile memory devices in an array to receive dirty cache data of a volatile cache responsive to detection of a power failure. In one implementation, the selection of the non-volatile memory devices is based on one or more predictive power parameters usable to estimate the time remaining during which a reserve power supply can support a cache offload to the selected subset of devices.
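A greedy version of such a selection might be sketched as below. The use of write bandwidth as the predictive parameter and the highest-bandwidth-first ordering are assumptions; the implementation may weigh other power parameters.

```python
def select_offload_devices(devices, dirty_bytes, reserve_seconds):
    """Pick a subset of NVM devices whose combined write bandwidth can
    flush the dirty cache within the reserve-power window.
    `devices` is a list of (name, write_mb_per_s) tuples (illustrative)."""
    chosen, bandwidth_mb = [], 0.0
    for name, bw in sorted(devices, key=lambda d: d[1], reverse=True):
        chosen.append(name)
        bandwidth_mb += bw
        # Estimated offload time at the combined bandwidth.
        if dirty_bytes / (bandwidth_mb * 1e6) <= reserve_seconds:
            return chosen
    return None   # reserve power cannot cover the offload
```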

Method and device for processing data
09772946 · 2017-09-26

A method and device are provided for processing data. The method includes: after receiving data input by a data bus, writing the data into an uplink-side shared cache according to the destination indication and the valid-bit-field indication of the data; polling the uplink-side shared cache in a fixed timeslot order; reading out the data in the uplink-side shared cache; and outputting the data to the respective corresponding channels. The method and device enable effective saving of cache resources, reduced pressure on area and timing, and improved cache utilization, while reliably achieving data caching and bit-width conversion.
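The write-then-poll cycle can be sketched as follows. Per-channel queues, treating the valid-field indication as a byte count, and the round-robin read-out are simplifying assumptions, not the device's actual buffer layout.

```python
from collections import deque

class UplinkSharedCache:
    """Sketch: incoming bus words are routed into per-channel queues inside
    one shared buffer by destination, then read out in a fixed timeslot
    order and delivered to the corresponding channels."""
    def __init__(self, channels):
        self.queues = {ch: deque() for ch in channels}
        self.order = list(channels)          # fixed polling order

    def write(self, destination, data, valid_bytes):
        # Keep only the valid portion of the bus word (valid-field
        # indication modeled as a byte count for simplicity).
        self.queues[destination].append(data[:valid_bytes])

    def poll(self):
        """One polling round: emit (channel, data) for each non-empty
        queue, visiting channels in the fixed timeslot order."""
        out = []
        for ch in self.order:
            if self.queues[ch]:
                out.append((ch, self.queues[ch].popleft()))
        return out
```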

METHOD AND SYSTEM FOR USING DOWNGRADED FLASH DIE FOR CACHE APPLICATIONS
20170271030 · 2017-09-21

A method and apparatus for identifying and using low-cost, unqualified dies suitable for an SSD cache application are disclosed. Embodiments of the present invention enable production of a cache-die SSD with sufficient data retention and endurance to meet the demands of modern data centers while reducing infrastructure costs. According to one embodiment, the method includes extracting application data from the SSD cache application, modeling the behavior of the SSD cache application based on the application data, characterizing a first unqualified die to determine at least one quantified property of the die, and testing the at least one quantified property against the modeled behavior of the SSD cache application to determine whether the die is suitable for the SSD cache.
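The final testing step reduces to comparing each quantified die property against the requirement derived from the workload model. The property names and the meets-or-exceeds rule below are illustrative assumptions:

```python
def qualify_die(die_properties, workload_model):
    """Return True if every quantified property of the die meets or
    exceeds what the modeled SSD-cache workload demands.
    Both arguments map property names to numbers (illustrative keys)."""
    return all(
        die_properties.get(prop, 0) >= required
        for prop, required in workload_model.items()
    )
```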

Information processing apparatus, non-transitory computer readable medium, and information processing method
09811149 · 2017-11-07

An information processing apparatus includes a first controller, a second controller, and a shared memory. The first controller outputs a transition signal that indicates a transition to a power-saving mode in which power consumption is reduced compared to a normal mode. On the basis of the transition signal output from the first controller, the second controller outputs instruction information indicating an instruction provided from the first controller before the transition signal was output. The shared memory is connected to and shared by the first controller and the second controller, and stores the instruction information output from the second controller even after the transition is made to the power-saving mode.

PROVISIONING VIRTUAL MACHINES WITH A SINGLE IDENTITY AND CACHE VIRTUAL DISK
20210406186 · 2021-12-30

A virtual disk is provided to a computing environment. The virtual disk includes identity information to enable identification of a virtual machine within the computing environment. The size of the virtual disk is increased within the computing environment so that the virtual disk can act both as storage for the identity information and as a cache of other system data used to operate the virtual machine. The virtual machine is booted within the computing environment and is configured to at least access the virtual disk, which both holds the identity information and caches other system data to operate the virtual machine. Related apparatus, systems, techniques and articles are also described.

LOW-BIT DENSITY MEMORY CACHING OF PARALLEL INDEPENDENT THREADS
20220188231 · 2022-06-16

A first data item is programmed to a first memory page of a first block included in a cache that resides in a first portion of a memory device. The first data item is associated with a first processing thread. One or more second memory pages, each including a second data item associated with the first processing thread, are identified; the second memory pages are contained by a second block of the cache. The first data item and each second data item are copied to a second portion of the memory device. The first memory page and each of the one or more second memory pages are designated as invalid.
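The copy-then-invalidate compaction of one thread's pages can be sketched as follows. Representing each cache block as a list of (thread, data, valid) tuples is an assumption made for illustration; the actual technique operates on memory pages inside a low-bit-density cache region.

```python
def compact_thread_data(cache_blocks, target_region, thread_id):
    """Copy every valid page belonging to one processing thread out of the
    cache blocks into the denser target region of the device, marking each
    source page invalid afterward. Returns the data items that moved."""
    moved = []
    for block in cache_blocks:
        for i, (tid, data, valid) in enumerate(block):
            if valid and tid == thread_id:
                target_region.append(data)
                block[i] = (tid, data, False)   # designate the page invalid
                moved.append(data)
    return moved
```

Grouping the copies by thread keeps logically related data contiguous in the second (higher-density) portion of the device, which is the point of identifying the second pages by thread before compacting.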