G06F12/122

Calculating and adjusting ghost cache size based on data access frequency

A method for maintaining statistics for data elements in a cache is disclosed. The method maintains a heterogeneous cache comprising a higher performance portion and a lower performance portion. The method maintains, within the lower performance portion, a ghost cache containing statistics for data elements that are currently contained in the heterogeneous cache, and data elements that have been demoted from the heterogeneous cache within a specified time interval. The method calculates a size of the ghost cache based on an amount of frequently accessed data that is stored in backend storage volumes behind the heterogeneous cache. The method alters the size of the ghost cache as the amount of frequently accessed data changes. A corresponding system and computer program product are also disclosed.
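
The abstract describes a sizing rule rather than an implementation; the Python sketch below assumes a ghost cache keyed by data-element ID, a fixed retention window for demoted elements, and that the cache is sized to hold one statistics entry per frequently accessed backend element. The class name, window, and element size are all illustrative assumptions.

```python
from collections import OrderedDict
import time

class GhostCache:
    """Statistics for elements currently in the heterogeneous cache and
    for elements demoted from it within a retention window (assumed model)."""

    def __init__(self, retention_secs=3600, initial_entries=1024):
        self.retention_secs = retention_secs
        self.max_entries = initial_entries
        self.stats = OrderedDict()  # element_id -> (access_count, last_seen)

    def record_access(self, element_id):
        count, _ = self.stats.pop(element_id, (0, 0.0))
        self.stats[element_id] = (count + 1, time.time())  # move to MRU end
        self._evict()

    def resize(self, frequently_accessed_bytes, element_size=4096):
        # Assumed rule: one statistics entry per frequently accessed
        # element stored on the backend volumes.
        self.max_entries = frequently_accessed_bytes // element_size
        self._evict()

    def _evict(self):
        now = time.time()
        stale = [k for k, (_, seen) in self.stats.items()
                 if now - seen > self.retention_secs]
        for k in stale:                     # drop entries past the time interval
            del self.stats[k]
        while len(self.stats) > self.max_entries:
            self.stats.popitem(last=False)  # trim least recently seen
```

A monitoring task that re-measures the amount of frequently accessed backend data would periodically call resize(), which corresponds to the "alters the size" step of the abstract.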

Content Distribution Network Supporting Popularity-Based Caching

A content delivery network may provide content items to requesting devices using a popularity-based distribution hierarchy. A central analysis system may determine popularity data for a content item stored in a first caching device. The central analysis system may determine that a change in the popularity data is beyond a threshold value. The central analysis system may then transmit an instruction to move the content item from the first caching device to a second caching device in a different tier of caching devices than the first caching device. The central analysis system may update a content index to indicate that the content item has been moved to the second caching device. A user device may be redirected to request the content item directly from the second caching device.
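
A minimal Python sketch of the central analysis system's decision loop; the tier names, the 25% change threshold, and the print-based move instruction are hypothetical stand-ins, since the abstract specifies the behavior but not the transports.

```python
THRESHOLD = 0.25  # assumed: fractional popularity change that triggers a move

class CentralAnalysis:
    def __init__(self):
        self.popularity = {}     # content_id -> last recorded popularity score
        self.content_index = {}  # content_id -> caching device holding the item

    def report(self, content_id, device, score):
        """Ingest a popularity measurement for a content item on a device."""
        prev = self.popularity.get(content_id)
        self.popularity[content_id] = score
        self.content_index.setdefault(content_id, device)
        if not prev:
            return
        if abs(score - prev) / prev > THRESHOLD:
            target = self._pick_tier(score)
            if target != self.content_index[content_id]:
                self._move(content_id, self.content_index[content_id], target)
                self.content_index[content_id] = target  # update content index

    def _pick_tier(self, score):
        # Assumed two-tier layout: popular items on an edge device,
        # unpopular ones on a mid-tier device.
        return "edge-cache-1" if score >= 0.5 else "mid-tier-cache-1"

    def _move(self, content_id, src, dst):
        print(f"instructing {src} to move {content_id} to {dst}")

    def redirect(self, content_id):
        """Answer a user device's lookup with the device now holding the item."""
        return self.content_index[content_id]
```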

Distributed Quantum Entanglement Cache

An entangled quantum cache includes a quantum store that receives a plurality of quantum states and is configured to store and order the plurality of quantum states and to provide select ones of the stored and ordered plurality of quantum states to a quantum data output at a first desired time. A fidelity system is configured to determine a fidelity of at least some of the plurality of quantum states. A classical store is coupled to the fidelity system and configured to store classical data comprising the determined fidelity information and an index that associates particular ones of classical data with particular ones of the plurality of quantum states and to supply at least some of the classical data to a classical data output at a second desired time. A processor is connected to the classical store and determines the first time based on the index.
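
The quantum store itself cannot be expressed in ordinary code, but the classical side can be sketched: fidelity records sit in a priority structure whose index ties each record to a quantum-store slot, and the "processor" step picks which slot to emit next. Treating the "first desired time" as "now, highest fidelity first" is an assumed policy, not something the abstract states.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ClassicalRecord:
    sort_key: float                        # ordering key (negated fidelity)
    slot_id: int = field(compare=False)    # index into the quantum store
    fidelity: float = field(compare=False)

class ClassicalStore:
    """Holds fidelity data plus an index associating each record
    with a particular stored quantum state (assumed slot IDs)."""

    def __init__(self):
        self._heap = []

    def record(self, slot_id, fidelity):
        # Negate fidelity so the min-heap yields the best state first.
        heapq.heappush(self._heap, ClassicalRecord(-fidelity, slot_id, fidelity))

    def next_output(self):
        """Processor step: choose which stored state to emit next,
        under the assumed highest-fidelity-first policy."""
        rec = heapq.heappop(self._heap)
        return rec.slot_id, rec.fidelity
```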

HOT PAGE DETECTION BY SAMPLING TLB RESIDENCY
20230057083 · 2023-02-23

The disclosed technology provides for an improved memory tiering arrangement. In one aspect, an apparatus includes a sampling register and logic, responsive to sequential read requests, to read page data entries stored in successive locations in a TLB and provide page data from the page data entries as sequential outputs of the sampling register. In another aspect, a method includes generating a page residency list based on scanning, via a sampling register, page data entries stored in successive locations in a TLB, determining, for each page, whether the respective page is a hot page or a cold page based on the page residency list, and assigning hot pages to a first memory tier and cold pages to a second memory tier. Scanning page data entries stored in the TLB can include issuing a sequence of read requests to the sampling register sufficient to read all entries in the TLB.
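
A behavioral Python sketch of the method claim, modeling the sampling register as a callable that returns the page entry at a given TLB slot; the scan count and hotness threshold are assumptions, and a real implementation would read the register through hardware interfaces not shown here.

```python
from collections import Counter

HOT_THRESHOLD = 3   # assumed: scans in which a page must appear to count as hot
NUM_SCANS = 8       # assumed number of scan passes

def scan_tlb(sampling_register_read, tlb_size):
    """Issue one sequential read per TLB slot, i.e. a sequence of read
    requests 'sufficient to read all entries in the TLB'."""
    return [sampling_register_read(i) for i in range(tlb_size)]

def build_residency_list(sampling_register_read, tlb_size):
    residency = Counter()
    for _ in range(NUM_SCANS):
        for page in scan_tlb(sampling_register_read, tlb_size):
            if page is not None:      # empty TLB slot
                residency[page] += 1
    return residency

def assign_tiers(residency, all_pages):
    hot = {p for p, n in residency.items() if n >= HOT_THRESHOLD}
    # Hot pages go to the first (fast) tier, cold pages to the second tier.
    return {p: ("tier0" if p in hot else "tier1") for p in all_pages}
```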

Maintaining data structures in a memory subsystem comprised of a plurality of memory devices

Provided are a computer program product, system, and method for maintaining data structures in a virtual memory comprised of a plurality of heterogeneous memory devices. Access counts are maintained for a plurality of data structures stored in a first level memory device. A determination is made of the data structures in the first level memory device having the lowest access counts. The determined data structures are deleted from the first level memory device while copies of the data structures are retained in a second level memory device, wherein the first level memory device has lower latency than the second level memory device.
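
A small Python sketch of the demotion pass, assuming per-structure access counters and a caller-chosen number of structures to demote; "deleted from the first level while retaining copies in the second level" maps to the pop-and-store move below.

```python
def demote_coldest(first_level, second_level, access_counts, n=1):
    """Move the n least-accessed structures out of the low-latency
    first-level device, keeping copies in the second-level device."""
    coldest = sorted(first_level, key=lambda k: access_counts.get(k, 0))[:n]
    for key in coldest:
        second_level[key] = first_level.pop(key)  # retain copy in second level
    return coldest

# Usage sketch with hypothetical structure names and counts:
first = {"btree_root": b"...", "hash_dir": b"...", "journal_idx": b"..."}
second = {}
counts = {"btree_root": 120, "hash_dir": 3, "journal_idx": 47}
demote_coldest(first, second, counts, n=1)  # moves "hash_dir", the coldest
```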

Cache management in a printing system in a virtualized computing environment

A varied least recently used (VLRU) caching technique is used to enable print data to be available at a cache of a client for printing, even after an agent performs a deletion of a hash value for the print data at a cache of the agent. The deletion of the print data (cached at the cache of the client) is postponed at the client device via the use of a waiting list, so that the cached print data can be printed at a physical printer of the client, in response to receiving a delayed print job from the agent that specifies the hash value as a result of a deduplication process performed by the agent.
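
A Python sketch of the client-side bookkeeping, assuming the waiting list is a set of hash values whose data the agent has already evicted; the names and the error path are illustrative.

```python
class ClientPrintCache:
    def __init__(self):
        self.data = {}             # hash_value -> cached print data
        self.waiting_list = set()  # hashes deleted at the agent, kept here

    def agent_deleted(self, hash_value):
        """The agent evicted this hash; postpone the local deletion
        by placing the hash on the waiting list."""
        if hash_value in self.data:
            self.waiting_list.add(hash_value)

    def delayed_print_job(self, hash_value, printer):
        """A deduplicated job arrives carrying only the hash;
        serve it from the still-cached print data."""
        payload = self.data.get(hash_value)
        if payload is None:
            raise KeyError("print data not cached; full resend needed")
        printer(payload)
        if hash_value in self.waiting_list:
            # The postponed job has printed; complete the deletion now.
            self.waiting_list.discard(hash_value)
            del self.data[hash_value]
```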

Information processing apparatus and computer-readable recording medium having stored therein process allocation determining program
11487582 · 2022-11-01

An information processing apparatus includes a plurality of groups, each group including a first memory, a second memory that differs in processing speed from the first memory, and a processor including a memory controller that is connected to the first memory and the second memory and that controls access from processes to the first memory and the second memory. A first processor among the plurality of processors of the plurality of groups is configured to determine an allocation of the plurality of processes onto the plurality of processors based on a characteristic of the plurality of processes accessing data stored in the first memory or the second memory in each of the plurality of groups.
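
A Python sketch of the allocation decision, assuming each process is characterized by the fraction of its accesses that hit the faster memory and that the first processor greedily places the most fast-memory-hungry processes onto the groups with the most fast-memory headroom; the scoring rule is an illustrative assumption, since the abstract only says the allocation is based on a process characteristic.

```python
from dataclasses import dataclass

@dataclass
class Group:
    processor_id: int
    fast_mem_free: int     # bytes free in the first (faster) memory
    slow_mem_free: int     # bytes free in the second memory

@dataclass
class Process:
    pid: int
    working_set: int       # bytes the process touches
    fast_mem_ratio: float  # fraction of accesses hitting the faster memory

def allocate(processes, groups):
    """Decide, on the 'first processor', where each process should run,
    based on its memory-access characteristic (assumed greedy rule)."""
    placement = {}
    # Place the most fast-memory-hungry processes first.
    for proc in sorted(processes, key=lambda p: -p.fast_mem_ratio):
        need = int(proc.working_set * proc.fast_mem_ratio)
        best = max(groups, key=lambda g: g.fast_mem_free)
        placement[proc.pid] = best.processor_id
        best.fast_mem_free -= need
        best.slow_mem_free -= proc.working_set - need
    return placement
```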