Patent classifications
G06F12/0269
Concurrent garbage collection with minimal graph traversal
Systems and techniques for garbage collection are disclosed for concurrently performing a garbage collection cycle in a single traversal of a garbage collection heap while application threads are running. The garbage collection cycle includes marking a first memory object as live. The garbage collection cycle also includes determining that a forwarding pointer of the first memory object points to the first memory object. The garbage collection cycle further includes evacuating the first memory object to a free region based on the determining. The garbage collection cycle additionally includes evacuating a second memory object in the same single traversal of the garbage collection heap in which the first memory object is being marked live.
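The mark-then-check-forwarding-then-evacuate flow in this abstract can be sketched as a toy model. Everything here (the `Obj` class, the self-referential forwarding convention, the list standing in for a free region) is an illustrative assumption, not the patent's actual implementation, and the traversal is reduced to a flat root scan for brevity.

```python
# Toy sketch of a single-traversal mark-and-evacuate cycle: an object whose
# forwarding pointer points to itself has not yet been evacuated.

class Obj:
    def __init__(self, name):
        self.name = name
        self.forward = self   # forwarding pointer initially self-referential
        self.marked = False

def gc_cycle(roots, free_region):
    """Mark objects live and evacuate them in the same pass."""
    for obj in roots:
        obj.marked = True                  # step 1: mark as live
        if obj.forward is obj:             # step 2: forwarding pointer == self?
            copy = Obj(obj.name)           # step 3: evacuate to the free region
            copy.marked = True
            free_region.append(copy)
            obj.forward = copy             # install forwarding pointer
    return free_region

free = []
a, b = Obj("a"), Obj("b")
gc_cycle([a, b], free)   # both objects marked and evacuated in one traversal
```

A second call to `gc_cycle` would skip evacuation, since each forwarding pointer now refers to the evacuated copy rather than back to the object itself.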
Storing highly read data at low impact read disturb pages of a memory device
A highly read data manager of a memory device receives a request to perform a data relocation operation on a first wordline of a plurality of wordlines for a memory device, the memory device comprising a plurality of multi-level memory cells, wherein each multi-level memory cell comprises a plurality of pages; determines that the first wordline comprises data stored at one or more high read disturb pages of the plurality of pages; determines whether the data comprises a characteristic that satisfies a threshold criterion in relation to additional data stored on additional wordlines of the plurality of wordlines; responsive to determining that the data comprises the characteristic that satisfies the threshold criterion, identifies one or more low read disturb pages of the plurality of pages of a target wordline for relocating the data; and responsive to identifying the one or more low read disturb pages of the target wordline, stores at least a portion of the data at the one or more low read disturb pages of the target wordline.
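The decision flow in this abstract can be sketched in a few lines. The read-count characteristic, the ratio-style threshold, and the dictionary layout of wordlines and pages below are all assumptions made for illustration; the patent does not specify them.

```python
# Hypothetical sketch of the relocation decision for highly read data.

def plan_relocation(source, others, target, threshold=2.0):
    """Return the low read disturb pages of the target wordline that should
    receive the hot data, or None if relocation is not warranted."""
    # 1. Is data stored at high read disturb pages of the source wordline?
    hot = [p for p in source["pages"] if p["disturb"] == "high" and p["data"]]
    if not hot:
        return None
    # 2. Characteristic vs. other wordlines: here, a read-count ratio
    #    standing in for the patent's "threshold criterion".
    avg_other = sum(w["reads"] for w in others) / len(others)
    if source["reads"] < threshold * avg_other:
        return None
    # 3. Identify low read disturb pages on the target wordline.
    low = [p for p in target["pages"] if p["disturb"] == "low"]
    return low or None

source = {"reads": 500, "pages": [{"disturb": "high", "data": b"x"}]}
others = [{"reads": 100}, {"reads": 100}]
target = {"pages": [{"disturb": "low", "data": None},
                    {"disturb": "high", "data": None}]}
plan = plan_relocation(source, others, target)
```

Here `plan` holds the single low read disturb page of the target wordline, since the source's read count exceeds twice the average of the other wordlines.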
Colorless roots implementation in Z garbage collector
A request is received, from a mutator thread, to load a first reference to a first object from a heap memory onto a call stack of the mutator thread. Responsive to receiving the request, a system retrieves the first reference from the heap memory. The system executes a bitwise shift operation that (a) removes one or more bits representing a first garbage collection state of the first reference and (b) generates a second reference from the first reference. Based on a particular bit, of the one or more bits removed from the first reference by the shift operation, the system determines whether to perform a set of garbage collection operations on the first reference to bring the first reference to a good state. The second reference, without any indication of any garbage collection state, is stored to the call stack.
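The shift-based load barrier described here can be illustrated with a toy colored reference. The 4-bit color field, the bit positions, and the meaning of the "marked" bit below are assumptions for illustration only; ZGC's actual metadata layout differs.

```python
# Sketch of a colorless-roots load barrier: GC state lives in the low bits
# of a colored reference, and a single shift both strips the state and
# produces the plain reference that goes onto the call stack.

COLOR_BITS = 4
MARKED_BIT = 0b0001   # hypothetical "good state" bit among the removed bits

def load_barrier(colored_ref):
    color = colored_ref & ((1 << COLOR_BITS) - 1)  # bits about to be removed
    plain_ref = colored_ref >> COLOR_BITS          # shift strips the GC state
    needs_gc_work = not (color & MARKED_BIT)       # test one particular bit
    return plain_ref, needs_gc_work

ref = (0x7F000 << COLOR_BITS) | MARKED_BIT   # a reference in the good state
plain, slow_path = load_barrier(ref)
```

Because the stored reference is fully colorless, stack slots never need to be revisited when the garbage collector changes its global state; only heap loads pass through the barrier.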
Method and apparatus for performing access control of memory device with aid of multi-stage garbage collection management
A method and apparatus for performing access control of a memory device with the aid of multi-stage garbage collection (GC) management are provided. The method includes: during a first GC stage, sending a first simple read command to the non-volatile (NV) memory of the memory device in order to try reading first valid data from a first source block, sending the first valid data into an internal buffer of the NV memory to be programmed into a first destination block, sending a second simple read command to the NV memory in order to try reading second valid data from the first source block, and, in response to the read of the second valid data from the first source block being unsuccessful, refraining from retrying that read; completing at least one host-triggered operation; and during a second GC stage, retrying the read of the second valid data from the first source block.
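The two-stage flow can be sketched as follows. The `simple_read`/`retry_read` callables and the page naming are stand-ins invented for this sketch; real NAND command handling is far more involved.

```python
# Illustrative two-stage GC: the first stage copies whatever reads cleanly
# and defers failed reads instead of retrying them inline, so host-triggered
# operations are not delayed; a later stage performs the retries.

def first_gc_stage(pages, simple_read):
    destination, deferred = [], []
    for page in pages:
        data = simple_read(page)        # "simple read": no retry on failure
        if data is None:
            deferred.append(page)       # retry postponed to a later GC stage
        else:
            destination.append(data)
    return destination, deferred

def second_gc_stage(deferred, retry_read):
    return [retry_read(p) for p in deferred]

simple_read = lambda p: None if p == "p2" else f"data-{p}"
dest, deferred = first_gc_stage(["p1", "p2"], simple_read)
# ... at least one host-triggered operation completes here ...
recovered = second_gc_stage(deferred, lambda p: f"data-{p}")
```

The design choice is latency-driven: read retries are expensive, so deferring them keeps the first GC stage short and lets host I/O proceed between stages.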
Garbage collection of tree structure with page mappings
A key-value engine may perform garbage collection for a tree or hierarchical data structure on an append-only storage device with page mappings. The key-value engine may separate hot and cold data to reduce write amplification, track extent usage using a restricted or limited amount of memory, efficiently answer queries of valid extent usage, and adaptively or selectively defragment pages in snapshots in rounds of garbage collection.
SNAPSHOT AT THE BEGINNING MARKING IN Z GARBAGE COLLECTOR
During execution of garbage collection marking, an application thread receives a first request to overwrite a reference field of an object, the object comprising at least a first reference and the first request comprising a second reference to be written to the reference field. Responsive to receiving the first request, the application thread determines a marking parity for objects being traversed by the garbage collection marking process and loads the first reference from the heap. The application thread determines that marking metadata of the first reference does not match the marking parity. Responsive to that determination, the application thread adds the first reference to a marking list, modifies the second reference to include the current marking parity as the marking metadata, and stores the modified second reference to the first reference field. In subsequent writes to the reference field, the application thread refrains from adding to the marking list.
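The snapshot-at-the-beginning barrier described here can be reduced to a small sketch. Representing references as dictionaries with a `parity` key is an assumption for illustration; the real collector encodes the parity as metadata bits inside the reference itself.

```python
# Sketch of an SATB-style write barrier: on the first overwrite of a field
# within a marking cycle, the old referent is pushed to a marking list so the
# snapshot of the object graph at the start of marking stays complete.

marking_list = []
PARITY = 1   # current marking parity; flips on each marking cycle

def write_barrier(obj, field, new_ref):
    old = obj[field]
    if old is not None and old.get("parity") != PARITY:
        marking_list.append(old)                 # keep the old referent markable
    obj[field] = dict(new_ref, parity=PARITY)    # tag stored ref with parity

obj = {"f": {"id": "A", "parity": 0}}
write_barrier(obj, "f", {"id": "B"})   # first write this cycle: A is recorded
write_barrier(obj, "f", {"id": "C"})   # parity now matches: no new entry
```

Because the stored reference carries the current parity, every later write to the same field sees matching metadata and skips the marking list, so the barrier's slow path runs at most once per field per cycle.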
WRITE BARRIER FOR REMEMBERED SET MAINTENANCE IN GENERATIONAL Z GARBAGE COLLECTOR
During execution of garbage collection, an application receives a first request to overwrite a reference field of an object, the object comprising a first reference and the first request comprising a memory address at which the reference field is stored, and a second reference to be written to the reference field. Responsive to receiving the first request, the system determines a current remembered set phase, and loads the first reference. The application determines that remembered set metadata of the first reference does not match the current remembered set phase. Responsive to that determination, the application adds an entry to a remembered set data structure, modifies the second reference to include the current remembered set phase as the remembered set metadata, and stores the modified second reference to the reference field. In subsequent writes to the reference field, the application refrains from adding to the remembered set data structure.
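The remembered-set barrier follows the same once-per-phase pattern, and can be sketched under similar assumptions: the phase tag stored in a dictionary and the address-keyed heap below are illustrative stand-ins for metadata bits and real memory addresses.

```python
# Sketch of a remembered-set write barrier: the first overwrite of a field in
# a given phase records the field's address in the remembered set; references
# stored afterwards carry the current phase, so repeat writes are skipped.

remembered_set = set()
PHASE = 1   # current remembered set phase

def rset_barrier(heap, addr, new_target):
    old = heap[addr]
    if old is not None and old["phase"] != PHASE:
        remembered_set.add(addr)                     # record the address once
    heap[addr] = {"target": new_target, "phase": PHASE}

heap = {0x10: {"target": "X", "phase": 0}}
rset_barrier(heap, 0x10, "Y")   # phase mismatch: address 0x10 is recorded
rset_barrier(heap, 0x10, "Z")   # phase matches: remembered set unchanged
```

In a generational collector this keeps the remembered set deduplicated without a separate lookup table: the phase metadata embedded in the stored reference is the proof that the entry already exists.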
CONCURRENT COMPUTATION ON DATA STREAMS USING COMPUTATIONAL GRAPHS
Disclosed are some implementations of systems, apparatus, methods and computer program products for generating and implementing computational graphs that facilitate concurrent computation on data streams. A computational graph includes a plurality of nodes, where each node has one or more stages associated therewith. Each stage may be associated with a corresponding operation that is to be performed on data associated with that stage.
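A minimal reading of this node/stage structure can be sketched as follows. The `Node` class, its stage list, and the thread-pool driver are assumptions invented for the sketch, not the disclosed implementation.

```python
# Toy computational-graph node: each node carries an ordered list of stages,
# each stage an operation applied to the data associated with that stage;
# a thread pool processes stream items through the node concurrently.

from concurrent.futures import ThreadPoolExecutor

class Node:
    def __init__(self, stages):
        self.stages = stages          # one operation per stage

    def process(self, item):
        for op in self.stages:        # apply each stage's operation in order
            item = op(item)
        return item

node = Node([lambda x: x + 1, lambda x: x * 2])
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(node.process, [1, 2, 3]))  # concurrent over stream
# results == [4, 6, 8]
```

`ThreadPoolExecutor.map` preserves input order, so the output stream stays aligned with the input even though items are processed concurrently.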