G06F12/0269

Garbage collection command scheduling

Systems and methods are disclosed for the intelligent scheduling of garbage collection operations on a solid state memory. In certain embodiments, a method may comprise initiating a garbage collection process for a solid state memory (SSM) having a multiple die architecture, determining an order of die access for the garbage collection process based on an activity table indicating a use of one or more die in the multiple die architecture, and performing the garbage collection process based on the determined order of die access. Garbage collection reads may be directed to idle die to avoid conflicts with die busy performing other operations, thereby improving system performance.
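The die-ordering idea above can be sketched as follows; this is a minimal illustration under our own naming assumptions (`schedule_gc_reads`, an activity table mapping die id to in-flight operation count), not the patent's implementation.

```python
# Sketch: order die access for garbage collection so reads target
# idle dies first, avoiding conflicts with busy dies.

def schedule_gc_reads(dies, activity_table):
    """Order dies by in-flight operation count from the activity table,
    so idle dies (count 0) come first and the busiest die comes last."""
    return sorted(dies, key=lambda d: activity_table.get(d, 0))

# Dies 1 and 3 are idle, die 2 is lightly busy, die 0 is busiest.
order = schedule_gc_reads([0, 1, 2, 3], {0: 5, 1: 0, 2: 2, 3: 0})
```

Because `sorted` is stable, equally idle dies keep their original relative order.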

REDUCING IDLE RESOURCE USAGE
20180011789 · 2018-01-11 ·

A method, computer program product, and system for reallocating resources of an idle application or program includes a computer for running an application or a program and starting a predetermined time interval. The computer increases a number counter for each event triggered during the predetermined time interval, and the event is a predetermined trigger that is activated during the running of the application or program. The method and system include comparing a total number of events that occur during the predetermined time interval to a threshold value. The total number of events is the value of the number counter at the end of the predetermined interval. In response to determining, by the computer, that the total number of events is below the threshold value, resources allocated to the program are released by activating, using the computer, either: i) a garbage collector application, or ii) a resource release application.
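The counter-and-threshold logic can be modeled in a few lines; the function and variable names below are illustrative assumptions, not the patent's terminology.

```python
# Sketch: at the end of the predetermined interval, compare the event
# counter to the threshold; if below, treat the application as idle
# and release its resources (e.g. by invoking a garbage collector).

def end_of_interval(event_count, threshold, release_resources):
    """Return True (and trigger the release callback) when the
    application produced fewer events than the threshold."""
    if event_count < threshold:
        release_resources()
        return True
    return False

released = []
idle = end_of_interval(2, 5, lambda: released.append("gc"))
```

A busy application (count at or above the threshold) would leave its resources untouched until the next interval.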

Compiling application with multiple function implementations for garbage collection

Functions of an application may include multiple implementations that have corresponding behaviors but perform different garbage collection-related activities such that the different implementations may be executed during different garbage collection phases to reduce overall garbage collection overhead during application execution.
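A toy model of phase-based implementation selection is shown below: one logical function, two variants with the same observable result, one of which performs extra collector-related work. All names and the phase values are our assumptions for illustration.

```python
# Sketch: the runtime dispatches to the implementation matching the
# current garbage collection phase; both variants compute the same
# result, but only one does GC-related bookkeeping.

marked = []                      # stands in for collector bookkeeping
gc_phase = "marking"             # e.g. "idle", "marking", "relocating"

def double_plain(x):
    return x * 2

def double_with_gc_work(x):
    marked.append(x)             # extra GC-related activity in this phase
    return x * 2                 # same behavior as the plain variant

def double(x):
    impl = double_with_gc_work if gc_phase == "marking" else double_plain
    return impl(x)
```

Outside the marking phase, the plain variant runs and the per-call GC overhead disappears.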

GARBAGE COLLECTION OF TREE STRUCTURE WITH PAGE MAPPINGS

A key-value engine may perform garbage collection for a tree or hierarchical data structure on an append-only storage device with page mappings. The key-value engine may separate hot and cold data to reduce write amplification, track extent usage using a restricted or limited amount of memory, efficiently answer queries of valid extent usage, and adaptively or selectively defragment pages in snapshots in rounds of garbage collection.
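The hot/cold separation mentioned above can be sketched as a routing decision during relocation; the threshold and names are assumptions, not the engine's API.

```python
# Sketch: during GC relocation, frequently updated ("hot") pages are
# grouped into their own extents so rewriting them does not drag cold
# pages along, which reduces write amplification.

def pick_destination_extent(page_id, update_counts, hot_threshold=5):
    """Route a relocated page to the hot or cold extent based on how
    often it has been updated."""
    if update_counts.get(page_id, 0) >= hot_threshold:
        return "hot"
    return "cold"
```

Pages never seen in `update_counts` default to cold, the safe choice for rarely touched data.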

Write barrier for remembered set maintenance in generational Z garbage collector

During execution of garbage collection, an application receives a first request to overwrite a reference field of an object, the object comprising a first reference and the first request comprising a memory address at which the reference field is stored, and a second reference to be written to the reference field. Responsive to receiving the first request, the system determines a current remembered set phase, and loads the first reference. The application determines that remembered set metadata of the first reference does not match the current remembered set phase. Responsive to that determination, the application adds an entry to a remembered set data structure, modifies the second reference to include the current remembered set phase as the remembered set metadata, and stores the modified second reference to the reference field. In subsequent writes to the reference field, the application refrains from adding to the remembered set data structure.
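A simplified model of this phase-tagged barrier follows. A real collector encodes the remembered set phase in metadata bits of the reference itself; here it is modeled as a `(reference, phase)` pair, and the class name is ours.

```python
# Sketch: a write barrier that appends to the remembered set only on
# the first reference-field write of each remembered set phase.

class RememberedSetBarrier:
    def __init__(self, phase):
        self.phase = phase                 # current remembered set phase
        self.remembered_set = []
        self.heap = {}                     # address -> (reference, phase tag)

    def write(self, address, new_ref):
        old = self.heap.get(address)
        if old is None or old[1] != self.phase:
            # metadata does not match the current phase:
            # remember this location once
            self.remembered_set.append(address)
        # store the new reference tagged with the current phase, so
        # later writes in the same phase skip the remembered-set append
        self.heap[address] = (new_ref, self.phase)

barrier = RememberedSetBarrier(phase=1)
barrier.write(0x10, "A")
barrier.write(0x10, "B")   # same phase: no second remembered-set entry
```

Advancing the phase would make the next write to `0x10` take the slow path again, exactly once.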

Tracking garbage collection states of references

Garbage collection (GC) states are stored within references stored on a heap memory to track a progress of GC operations with respect to the references. GC state may be stored in a non-addressable portion of references. Based on the GC state of a particular reference, a set of GC operations are selected and performed for the reference. However, references stored on a call stack do not include any indication of GC state. Hence, loading a reference from heap to call stack involves removing the indication of GC state. Writing a reference to heap involves adding the indication of GC state. References embedded within a compiled method also do not indicate any GC state. Metadata of the compiled method indicate a GC state, which is implicated to the embedded references. GC operations are selected and performed for each embedded reference based on the GC state of the compiled method.
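The non-addressable-bits idea can be illustrated with alignment bits; the choice of three low bits (8-byte object alignment) is an assumption for the sketch.

```python
# Sketch: GC state lives in the low (alignment) bits of a heap
# reference; stack references carry no such indication.

GC_STATE_BITS = 3                      # 8-byte alignment frees 3 low bits
GC_STATE_MASK = (1 << GC_STATE_BITS) - 1

def store_to_heap(address, state):
    """Writing a reference to the heap adds the GC-state indication."""
    return (address & ~GC_STATE_MASK) | state

def load_to_stack(ref):
    """Loading a reference onto the call stack removes the indication."""
    return ref & ~GC_STATE_MASK

def gc_state_of(ref):
    return ref & GC_STATE_MASK

tagged = store_to_heap(0x1000, 2)
```

A collector would then dispatch on `gc_state_of(ref)` to select the GC operations to perform for that reference.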

Implementing state-based frame barriers to process colorless roots during concurrent execution

An application thread executes concurrently with a garbage collection (GC) thread traversing a call stack of the application thread. Frames of the call stack that have been processed by the GC thread assume a global state associated with the GC thread. The application thread may attempt to return to a target frame that has not yet assumed the global state. The application thread hits a frame barrier, preventing return to the target frame. The application thread determines a frame state of the target frame. The application thread selects appropriate operations for bringing the target frame to the global state based on the frame state. The selected operations are performed to bring the target frame to the global state. The application thread returns to the target frame.
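The frame-barrier control flow reads naturally as a small dispatch table; the state names and the fix-up operation below are illustrative stand-ins, not the actual GC operations.

```python
# Sketch: returning to a frame that has not assumed the global GC
# state first runs state-specific fix-up operations (the barrier),
# then marks the frame as up to date.

GLOBAL_STATE = "marked"

def fix_up_unprocessed(frame):
    frame["fixed"] = True          # stand-in for the real GC operations

REMEDIATION = {"unprocessed": fix_up_unprocessed}

def return_to(frame):
    if frame["state"] != GLOBAL_STATE:          # frame barrier hit
        REMEDIATION[frame["state"]](frame)      # selected by frame state
        frame["state"] = GLOBAL_STATE           # frame now at global state
    return frame

target = return_to({"state": "unprocessed"})
```

Frames already at the global state skip the barrier entirely, so the common case stays cheap.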

STORING HIGHLY READ DATA AT LOW IMPACT READ DISTURB PAGES OF A MEMORY DEVICE

A highly read data manager of a memory device receives a request to perform a data relocation operation on a first wordline of a plurality of wordlines for a memory device, the memory device comprising a plurality of multi-level memory cells, wherein each multi-level memory cell comprises a plurality of pages; determines that the first wordline comprises data stored at one or more high read disturb pages of the plurality of pages; determines whether the data comprises a characteristic that satisfies a threshold criterion in relation to additional data stored on additional wordlines of the plurality of wordlines; responsive to determining that the data comprises the characteristic that satisfies the threshold criterion, identifies one or more low read disturb pages of the plurality of pages of a target wordline for relocating the data; and responsive to identifying the one or more low read disturb pages of the target wordline, stores at least a portion of the data at the one or more low read disturb pages of the target wordline.
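The relocation decision can be sketched as follows, reading the "characteristic that satisfies a threshold criterion" as a read-frequency ratio versus peer wordlines; the ratio, structures, and names are our assumptions.

```python
# Sketch: if data on a wordline is read far more often than data on
# other wordlines, relocate it to the target wordline's page with the
# lowest read-disturb impact.

def plan_relocation(read_count, peer_average, ratio, page_disturb):
    """Return the target page with the smallest read-disturb impact
    when the data qualifies as highly read, else None."""
    if peer_average == 0 or read_count / peer_average < ratio:
        return None                         # not highly read: leave in place
    return min(page_disturb, key=page_disturb.get)

# Data read 9x more than its peers; "extra" is the least-disturbed page.
page = plan_relocation(900, 100, 4.0, {"lower": 3, "upper": 9, "extra": 1})
```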

Asynchronous garbage collection in parallel transaction system without locking
11481321 · 2022-10-25 ·

Methods, systems, and computer-readable storage media for determining that a transaction of a plurality of transactions performed in at least a portion of a system includes a delete operation, the plurality of transactions being managed by a secondary transaction manager and including a subset of all transactions performed in the system, in response to the delete operation, inserting a clean-up entry in the secondary transaction manager, attaching the clean-up entry to a subsequent transaction in order to determine and assign a time to the clean-up entry that is used to subsequently trigger garbage collection, and selectively comparing the time to a most-recently-reported minimum read timestamp that is periodically reported to the secondary transaction manager from a primary transaction manager of the system, wherein the clean-up entry is executed in response to determining that the time is less than the most-recently-reported minimum read timestamp.
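The trigger condition above reduces to a timestamp comparison: a clean-up entry may run only once its assigned time falls below the minimum read timestamp last reported by the primary transaction manager, so no reader can still observe the deleted version. The data shapes below are illustrative assumptions.

```python
# Sketch: execute clean-up entries older than the most recently
# reported minimum read timestamp; defer the rest without locking.

def collect_garbage(cleanup_entries, min_read_ts):
    executed = [e for e in cleanup_entries if e["time"] < min_read_ts]
    pending = [e for e in cleanup_entries if e["time"] >= min_read_ts]
    return executed, pending

executed, pending = collect_garbage(
    [{"key": "a", "time": 10}, {"key": "b", "time": 50}], min_read_ts=30)
```

Deferred entries are simply re-checked the next time a minimum read timestamp is reported.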

MECHANISMS FOR TRUNCATING TENANT DATA
20230060733 · 2023-03-02 ·

Techniques are disclosed relating to truncating a tenant's data from a table. A database node may maintain a multi-tenant table having records for tenants. Maintaining the table may include writing a record for a tenant into an in-memory cache and performing a flush operation to flush the record to a shared storage. The database node may write a truncate record into the in-memory cache that truncates a tenant from the table such that records of the tenant having a timestamp indicating a time before the truncate record cannot be accessed as part of a record query. While the truncate record remains in the in-memory cache, the database node may receive a request to perform a record query for a key of the tenant, make a determination on whether a record was committed for the key after the truncate record was committed, and return a response based on the determination.
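The visibility rule for truncated tenants can be modeled in a few lines; record shapes and field names below are assumptions for illustration, not the database node's schema.

```python
# Sketch: records of the tenant committed before the truncate record
# are invisible to record queries; only later commits can be returned.

def query(records, truncate_ts, key):
    """Return the newest record for `key` committed after the truncate
    record, or None if every matching record predates the truncation."""
    visible = [r for r in records if r["key"] == key and r["ts"] > truncate_ts]
    return max(visible, key=lambda r: r["ts"]) if visible else None

records = [
    {"key": "k1", "ts": 5,  "value": "old"},   # before truncate: hidden
    {"key": "k1", "ts": 20, "value": "new"},   # after truncate: visible
]
result = query(records, truncate_ts=10, key="k1")
```

This is why the determination in the abstract hinges on whether a record for the key was committed after the truncate record: if none was, the query answers as if the tenant's data were gone.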