Systems and methods for coupled cache management
11513968 · 2022-11-29

Methods, systems, and computer-readable storage media for maintaining and utilizing a unified cache memory. The method first identifies a unified cache memory associated with an application and populates it with data for access during application execution. The unified cache memory is associated with coupled lookup elements, which include multiple keys and multiple values coupled together. The coupled lookup elements are available to the application for access to all possible views of the data.
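
As a rough Python sketch only: one way the coupled lookup elements might be modeled, with several keys and several value views bound to a single cache element so that any key reaches every view. The names (CoupledElement, UnifiedCache) and the example keys are illustrative assumptions, not the patent's implementation.

    # Illustrative only: coupled lookup elements in a unified cache
    # (hypothetical names; not the patent's actual implementation).
    from dataclasses import dataclass, field

    @dataclass
    class CoupledElement:
        keys: tuple                                   # multiple keys coupled together
        values: dict = field(default_factory=dict)    # multiple views of the same data

    class UnifiedCache:
        def __init__(self):
            self._index = {}                          # every key maps to the same element

        def put(self, element: CoupledElement):
            for key in element.keys:
                self._index[key] = element

        def get(self, key):
            return self._index.get(key)               # any key exposes all coupled views

    cache = UnifiedCache()
    cache.put(CoupledElement(keys=("order:42", "invoice:9001"),
                             values={"row": (42, "pending"), "json": '{"id": 42}'}))
    element = cache.get("invoice:9001")               # reachable through either key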

Preserving data integrity during controller failures

Systems and processes are disclosed to preserve data integrity during a storage controller failure. In some examples, a storage controller of an active-active controller configuration can back up data and corresponding cache elements to allow a surviving controller to construct a correct state of a failed controller's write cache. To accomplish this, the systems and processes can implement a relative time stamp for the cache elements that allows the backed-up data to be merged on a block-by-block basis.
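
A minimal Python sketch of the block-by-block merge idea, assuming each cache element carries a block address and a relative time stamp; the CacheElement structure and field names are assumptions, not the patent's on-controller format.

    # Illustrative only: merge a failed controller's backed-up write-cache
    # elements with the survivor's own, keeping the newest element per block.
    from dataclasses import dataclass

    @dataclass
    class CacheElement:
        block: int       # block address the element covers
        rel_ts: int      # relative time stamp assigned when the write was cached
        data: bytes

    def merge_write_caches(surviving, backed_up):
        merged = {}
        for elem in list(surviving) + list(backed_up):
            current = merged.get(elem.block)
            if current is None or elem.rel_ts > current.rel_ts:
                merged[elem.block] = elem     # newer stamp wins for this block
        return merged

    state = merge_write_caches(
        [CacheElement(block=7, rel_ts=3, data=b"old")],
        [CacheElement(block=7, rel_ts=5, data=b"new"),
         CacheElement(block=9, rel_ts=1, data=b"x")],
    )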

Instance Deployment Method, Instance Management Node, Computing Node, and Computing Device
20220365877 · 2022-11-17

In a method, an instance management node receives a request for creating a service instance; the instance management node obtains a cache configuration corresponding to the service instance; and the instance management node creates the service instance on a computing node, and creates a cache instance on the computing node based on the cache configuration. In this way, the service instance may provide a service by using the matched cache instance.
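
As a loose illustration of the deployment flow only: the lookup table, function names, and instance records below are assumptions, not the patent's API; a real instance management node would drive a scheduler or orchestrator.

    # Illustrative only: create a service instance plus a co-located cache
    # instance from a matching cache configuration (hypothetical names).
    CACHE_CONFIGS = {
        "image-service": {"size_mb": 512, "eviction": "lru"},
    }

    def handle_create_request(service_name: str, compute_node: str) -> dict:
        cache_config = CACHE_CONFIGS.get(service_name, {})        # obtain matching cache config
        service_instance = {"service": service_name, "node": compute_node}
        cache_instance = {"node": compute_node, **cache_config}   # cache instance on same node
        service_instance["cache"] = cache_instance                # service uses matched cache
        return service_instance

    instance = handle_create_request("image-service", "node-17")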

Verifiable Cacheable Calculations

A method implements verifiable cacheable calculations. A result is calculated. The result is hashed to generate a name of the result. The result is an input of a set of inputs from which the name is generated. Each input of the set of inputs identifies one of a data set, a query, and a function. The result is stored in a cache using the name generated from hashing the result. A request is received to access the result using the name. The result is retrieved from the cache using the name generated from hashing the result corresponding to the input. The result is presented in response to the request.
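
One possible reading of this scheme, sketched in Python: the cache name is a hash derived from identifiers of the data set, query, and function, and the stored result is hashed as well so it can be verified. The helper names and hash choices are assumptions, not the patent's exact scheme.

    # Illustrative only: a verifiable, content-addressed calculation cache.
    import hashlib, json

    _cache = {}

    def _name(dataset_id: str, query: str, function: str) -> str:
        # Name derived by hashing the inputs that identify the calculation.
        payload = json.dumps([dataset_id, query, function])
        return hashlib.sha256(payload.encode()).hexdigest()

    def store(dataset_id, query, function, result):
        name = _name(dataset_id, query, function)
        # Hashing the result as well lets a reader verify what was cached.
        _cache[name] = {"result": result,
                        "result_hash": hashlib.sha256(repr(result).encode()).hexdigest()}
        return name

    def fetch(name):
        return _cache[name]["result"]

    name = store("sales_2023", "SELECT sum(amount)", "sum", 12345)
    assert fetch(name) == 12345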

Virtual machine backup and restoration

Reversing deletion of a virtual machine including managing, by a storage system, a repository of virtual machine snapshots on a datastore; receiving, by the storage system, a request to recover a deleted virtual machine from the datastore; accessing, by the storage system, the repository of virtual machine snapshots on the datastore to generate a list of deleted virtual machines associated with virtual machine snapshots in the repository of virtual machine snapshots; receiving, by the storage system, a selection of one of the deleted virtual machines in the list of deleted virtual machines; and recovering, by the storage system, the selected deleted virtual machine using a virtual machine snapshot for the selected deleted virtual machine.
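
A small Python sketch of the recovery steps only; the repository layout, field names, and snapshot identifiers are invented for illustration and do not reflect a real storage system's metadata.

    # Illustrative only: list deleted VMs that still have snapshots, then
    # recover a selected one from its snapshot (hypothetical data layout).
    snapshots = [
        {"vm": "web-01", "deleted": True,  "snapshot_id": "snap-101"},
        {"vm": "db-02",  "deleted": False, "snapshot_id": "snap-102"},
        {"vm": "web-03", "deleted": True,  "snapshot_id": "snap-103"},
    ]

    def list_deleted_vms(repo):
        # Generate the list of deleted VMs associated with snapshots in the repository.
        return sorted({s["vm"] for s in repo if s["deleted"]})

    def recover(repo, vm_name):
        snap = next(s for s in repo if s["vm"] == vm_name and s["deleted"])
        return {"recovered_vm": vm_name, "from_snapshot": snap["snapshot_id"]}

    candidates = list_deleted_vms(snapshots)      # e.g. ["web-01", "web-03"]
    restored = recover(snapshots, candidates[0])  # user-selected VM is recovered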

Data migration based on performance characteristics of memory blocks

A performance manager (400, 500) and a method (200) performed thereby are provided, for managing the performance of a logical server of a data center. The data center comprises at least one memory pool in which a memory block has been allocated to the logical server. The method (200) comprises determining (230) performance characteristics associated with a first portion of the memory block, comprised in a first memory unit of the at least one memory pool; and identifying (240) a second portion of the memory block, comprised in a second memory unit of the at least one memory pool, to which data of the first portion of the memory block may be migrated to apply performance characteristics associated with the second portion. The method (200) further comprises initiating migration (250) of the data to the second portion of the memory block.
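
A minimal Python sketch of the migration decision, assuming access latency stands in for the determined performance characteristics; the MemoryPortion structure and metric are assumptions, not the claimed method's actual telemetry.

    # Illustrative only: pick a portion in another memory unit whose
    # performance characteristics should apply to the data, then migrate.
    from dataclasses import dataclass

    @dataclass
    class MemoryPortion:
        unit: str
        latency_ns: int      # determined performance characteristic

    def identify_target(current, candidates):
        better = [c for c in candidates if c.latency_ns < current.latency_ns]
        return min(better, key=lambda c: c.latency_ns) if better else None

    def maybe_migrate(current, candidates):
        target = identify_target(current, candidates)
        if target is not None:
            return f"migrate data from {current.unit} to {target.unit}"   # initiate migration
        return "keep data in place"

    print(maybe_migrate(MemoryPortion("unit-A", 300),
                        [MemoryPortion("unit-B", 120), MemoryPortion("unit-C", 450)]))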

REDUCING REQUESTS USING PROBABILISTIC DATA STRUCTURES
20230090835 · 2023-03-23

Techniques are disclosed relating to providing and using probabilistic data structures to at least reduce requests between database nodes. In various embodiments, a first database node processes a database transaction that involves writing a set of database records to an in-memory cache of the first database node. As part of processing the database transaction, the first database node may insert, in a set of probabilistic data structures, a set of database keys that correspond to the set of database records. The first database node may send, to a second database node, the set of probabilistic data structures to enable the second database node to determine whether to request, from the first database node, a database record associated with a database key.
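
A minimal Python sketch of the exchange, assuming a Bloom filter as the probabilistic data structure; the filter size, hash count, and key format are illustrative assumptions rather than the patent's parameters.

    # Illustrative only: node 1 records written database keys in a Bloom
    # filter and ships it; node 2 consults the filter before sending a request.
    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1024, hashes=3):
            self.size, self.hashes = size_bits, hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, key: str):
            for i in range(self.hashes):
                digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.size

        def add(self, key: str):
            for p in self._positions(key):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, key: str) -> bool:
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

    # Node 1: insert the keys written to its in-memory cache, then send the filter.
    written = BloomFilter()
    for key in ("acct:17", "acct:23"):
        written.add(key)

    # Node 2: only request a record if the filter says the key may be cached on node 1.
    def should_request(key: str) -> bool:
        return written.might_contain(key)   # False means the key was definitely not written

    should_request("acct:17")   # True  -> worth sending a request to node 1
    should_request("acct:99")   # almost certainly False -> skip the request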

Asynchronous garbage collection in parallel transaction system without locking
11481321 · 2022-10-25

Methods, systems, and computer-readable storage media for determining that a transaction of a plurality of transactions performed in at least a portion of a system includes a delete operation, the plurality of transactions being managed by a secondary transaction manager and including a subset of all transactions performed in the system, in response to the delete operation, inserting a clean-up entry in the secondary transaction manager, attaching the clean-up entry to a subsequent transaction in order to determine and assign a time to the clean-up entry that is used to subsequently trigger garbage collection, and selectively comparing the time to a most-recently-reported minimum read timestamp that is periodically reported to the secondary transaction manager from a primary transaction manager of the system, wherein the clean-up entry is executed in response to determining that the time is less than the most-recently-reported minimum read timestamp.
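
A compact Python sketch of the scheduling logic only; the queue structure, function names, and in-process variables are assumptions, and a real secondary transaction manager would coordinate with the primary over its own protocol.

    # Illustrative only: clean-up entries get their time from a subsequent
    # transaction and are executed once no reader can still see the deleted data.
    cleanup_queue = []            # clean-up entries awaiting a timestamp or GC
    min_read_timestamp = 0        # periodically reported by the primary transaction manager

    def on_delete(row_id):
        # A delete inside a transaction produces a clean-up entry with no time yet.
        cleanup_queue.append({"row": row_id, "time": None})

    def on_subsequent_transaction(commit_time):
        # Attach pending entries to the next transaction to assign their time.
        for entry in cleanup_queue:
            if entry["time"] is None:
                entry["time"] = commit_time

    def run_garbage_collection():
        global cleanup_queue
        # Execute only entries whose time is below the reported minimum read timestamp.
        ready = [e for e in cleanup_queue
                 if e["time"] is not None and e["time"] < min_read_timestamp]
        cleanup_queue = [e for e in cleanup_queue if e not in ready]
        return ready

    on_delete("row-7")
    on_subsequent_transaction(commit_time=41)
    min_read_timestamp = 50
    collected = run_garbage_collection()   # row-7's entry is now safe to clean up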

CACHE REFRESH SYSTEM AND PROCESSES
20230081780 · 2023-03-16

The present disclosure relates generally to computer systems and, more particularly, to a cache refresh system and related processes and methods of use. The method of refreshing data in cache memory includes: setting, by a computer system, a refresh indicator to “true”; refreshing data in the cache memory, by the computer system, upon a determination that the refresh indicator is set to “true”; and setting, by the computer system, the refresh indicator to “false” after the refreshing of the cache memory.
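
A minimal Python sketch of the refresh-indicator loop described above; the cache dictionary and loader function are placeholders, since the claim is agnostic to how fresh data is obtained.

    # Illustrative only: refresh the cache when the indicator is "true",
    # then clear the indicator to "false".
    cache = {"data": "stale", "refresh": "true"}

    def load_fresh_data():
        return "fresh"                    # stand-in for re-reading the backing store

    def maybe_refresh(c):
        if c["refresh"] == "true":        # refresh only when the indicator is set
            c["data"] = load_fresh_data()
            c["refresh"] = "false"        # clear the indicator after refreshing

    maybe_refresh(cache)                  # cache["data"] is now "fresh"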