Patent classifications
G06F12/0269
METHOD AND APPARATUS FOR PERFORMING ACCESS CONTROL OF MEMORY DEVICE WITH AID OF MULTI-STAGE GARBAGE COLLECTION MANAGEMENT
A method and apparatus for performing access control of a memory device with aid of multi-stage garbage collection (GC) management are provided. The method includes: during a first GC stage, sending a first simple read command to the nonvolatile (NV) memory in order to try reading first valid data from a first source block, sending the first valid data into an internal buffer of the NV memory, for being programmed into a first destination block, sending a second simple read command to the NV memory in order to try reading second valid data from the first source block, and in response to reading the second valid data from the first source block being unsuccessful, refraining from retrying the read of the second valid data from the first source block; completing at least one host-triggered operation; and during a second GC stage, retrying reading the second valid data from the first source block.
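A hypothetical Python sketch of the deferred-retry idea described above (all function names are invented; real implementations live in SSD controller firmware): the first GC stage copies whatever a cheap "simple read" can fetch and defers failures, and a second stage retries those failures only after host-triggered operations have been served.

```python
def gc_first_stage(simple_read, pages):
    """First GC stage: copy pages readable with a cheap 'simple read';
    defer any failed page instead of retrying it inline."""
    copied, deferred = [], []
    for page in pages:
        data = simple_read(page)       # no retry / error recovery here
        if data is None:
            deferred.append(page)      # retry postponed to a later GC stage
        else:
            copied.append((page, data))
    return copied, deferred

def gc_second_stage(recovery_read, deferred):
    """Second GC stage (after host-triggered operations complete):
    retry the deferred pages with full read recovery enabled."""
    return [(page, recovery_read(page)) for page in deferred]
```

The point is latency: read-recovery procedures are expensive, so postponing them keeps the first GC stage short and leaves room for host I/O in between.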
SYSTEMS AND METHODS FOR ZERO DOWNTIME DISTRIBUTED SEARCH SYSTEM UPDATES
A method and apparatus for performing search system upgrades are described. The method may include processing a software upgrade for a search system cluster distributed over one or more nodes, the one or more nodes comprising current search system data nodes. The method may also include allocating at least a set of one or more search system data nodes for the software upgrade including at least one upgraded search system data node. Furthermore, the method can include receiving, during the software upgrade, transaction data for a transaction, and receiving search requests to be executed by the search system cluster. Additionally, the method may include performing ingestion of all received transaction data comprising storing and indexing the transaction data in both the current search system data nodes and the at least one upgraded search system data node, and processing the search requests by the search system cluster against the current search system data nodes until the software upgrade is determined to be complete.
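A minimal sketch of the dual-write pattern this abstract describes, with invented names and Python lists standing in for index nodes: during the upgrade, every ingested document goes to both old and new nodes, while searches are answered only by the old nodes until the cutover.

```python
class UpgradingCluster:
    """During an upgrade, writes go to both the current and the upgraded
    data nodes, while searches are served only by the current nodes until
    the upgrade is declared complete."""

    def __init__(self, current_nodes, upgraded_nodes):
        self.current = current_nodes      # nodes on the old software version
        self.upgraded = upgraded_nodes    # nodes on the new software version
        self.upgrade_complete = False

    def ingest(self, doc):
        # dual-write: index transaction data into both node sets
        for node in self.current + self.upgraded:
            node.append(doc)

    def search(self, match):
        # current nodes answer queries until the cutover
        nodes = self.upgraded if self.upgrade_complete else self.current
        return [d for node in nodes for d in node if match(d)]
```

Because the upgraded nodes receive every write from the start, no re-ingestion pass is needed when queries finally cut over, which is what makes the upgrade "zero downtime".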
Intelligent write-amplification reduction for data storage devices configured on autonomous vehicles
Systems, methods and apparatus of intelligent write-amplification reduction for data storage devices configured on autonomous vehicles. For example, a data storage device of a vehicle includes: one or more storage media components; a controller configured to store data into and retrieve data from the one or more storage media components according to commands received in the data storage device; an address map configured to map between: logical addresses specified in the commands received in the data storage device, and physical addresses of memory cells in the one or more storage media components; and an artificial neural network configured to receive, as input and as a function of time, operating parameters indicative of a data access pattern, and generate, based on the input, a prediction to determine an optimized data placement scheme. The controller is configured to adjust the address map according to the optimized data placement scheme.
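A toy sketch of why predicted data placement reduces write amplification; the abstract's artificial neural network is replaced here by a stub predictor, and all names are hypothetical. Grouping data with similar predicted update frequency into the same block means fewer still-valid pages must be copied when that block is later erased.

```python
def place_writes(writes, predict_hot, hot_block, cold_block):
    """Steer each logical write to a block matching its predicted update
    frequency; short-lived ('hot') data grouped together tends to
    invalidate whole blocks at once, cutting GC copy traffic."""
    for lba in writes:
        (hot_block if predict_hot(lba) else cold_block).append(lba)
```

In the patented device the predictor is a neural network fed time-varying operating parameters; here `predict_hot` is any callable returning a boolean.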
Multi-threaded pause-less replicating garbage collection
A method and a system for garbage collection on a computer system. The method includes initiating a garbage collection process on the system by a garbage collector. The garbage collector includes one or more garbage collector threads. The method also includes marking a plurality of referenced objects using the garbage collector threads and one or more application threads during a preemption point. The method includes replicating the referenced objects using the garbage collector threads and marking for replication any newly discovered referenced objects found by scanning the application thread stack from a low-water mark. The method also includes replicating the newly discovered referenced objects and overwriting any reference to the old memory location.
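A simplified Python sketch of the two pieces named above (invented names; a dict stands in for the heap): replication copies everything reachable and records a forwarding table, and the low-water mark lets the collector rescan only the portion of an application thread's stack that changed since the last scan.

```python
def replicate(heap, roots):
    """Copy every object reachable from `roots` to a new location,
    returning a forwarding table from old to new locations.
    `heap` maps each object to the objects it references."""
    forwarding, worklist = {}, list(roots)
    while worklist:
        obj = worklist.pop()
        if obj in forwarding:
            continue
        forwarding[obj] = len(forwarding)    # new location (slot index)
        worklist.extend(heap.get(obj, ()))   # newly discovered references
    return forwarding

def fix_stack(stack, low_water_mark, forwarding):
    """Rescan only the stack slots at or above the low-water mark and
    overwrite references to old locations with the replicas."""
    for i in range(low_water_mark, len(stack)):
        stack[i] = forwarding.get(stack[i], stack[i])
```

Scanning from the low-water mark is what keeps the collector "pause-less": slots below it were already processed, so only the stack's recent growth needs attention.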
INTERVAL GARBAGE COLLECTION FOR MULTI-VERSION CONCURRENCY CONTROL IN DATABASE SYSTEMS
Technologies for performing garbage collection in database systems, such as multi-version concurrency control (MVCC) database systems, are described. For example, different garbage collection techniques can be used separately or in various combinations, including interval garbage collection, group garbage collection, and table garbage collection. For example, a particular type of combination, called hybrid garbage collection, uses techniques from interval garbage collection and group garbage collection, or from interval, group, and table garbage collection.
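A sketch of the interval idea in Python (hypothetical names; real MVCC engines track this per row version): each superseded version is visible over a timestamp interval, and it becomes garbage once no active snapshot's timestamp falls inside that interval.

```python
def is_garbage(version, active_snapshots):
    """A superseded row version with visibility interval [begin, end) is
    collectible iff no active reader's snapshot timestamp lies in it."""
    begin, end = version
    return all(not (begin <= ts < end) for ts in active_snapshots)

def interval_gc(versions, active_snapshots):
    """Keep only the versions some active snapshot can still see."""
    return [v for v in versions if not is_garbage(v, active_snapshots)]
```

Group and table garbage collection, per the abstract, apply the same reclamation decision at coarser granularities (groups of versions, whole tables); the hybrid combines them.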
SEAMLESS HIGH PERFORMANCE INTEROPERABILITY BETWEEN DIFFERENT TYPE GRAPHS THAT SHARE A GARBAGE COLLECTOR
Multiple different type hierarchies can communicate in a high performance and seamless manner by sharing a GC and interface dispatch logic. A runtime environment can support multiple independent type hierarchies, each type hierarchy defined by the module which defines the root of a type graph and some other helper functionality. Code that uses the dispatch logic has to follow certain rules in order to maintain GC and type safety. Different types in disjoint type graphs can behave as if they were one type for cross type graph communication purposes.
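A very reduced Python sketch of shared interface dispatch across disjoint type graphs (all names invented): as long as every type graph registers its types in one common dispatch table, call sites can invoke methods without knowing which graph an object came from.

```python
# One dispatch table shared by all type graphs; each graph registers
# the implementations for its own types here.
DISPATCH = {}

def register(type_id, method, impl):
    """Called by the module that defines a type graph's root."""
    DISPATCH[(type_id, method)] = impl

def invoke(obj, method, *args):
    """Uniform cross-graph call: obj is a (type_id, payload) pair."""
    type_id, payload = obj
    return DISPATCH[(type_id, method)](payload, *args)
```

The sharing that makes this safe in the patented runtime, a common GC and rules that registered code must follow, is elided here; the sketch only shows why objects from different graphs "behave as if they were one type" at call sites.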
Consolidated and concurrent remapping and identification for colorless roots
During a concurrent Relocation Phase, a GC thread relocates live objects, as an application thread executes. References in a frame on a call stack are remapped if the application thread attempts to access the frame. References on the call stack remain stale if no application thread attempts access. The GC thread may proceed with a subsequent phase of a GC cycle, even if a frame has stale references and therefore has not assumed a remap state. During a concurrent Mark/Remap Phase, the call stack may include frames in different frame states. The GC thread selects appropriate operations for processing each frame based on the respective frame state. When the GC thread encounters a frame not in the remap state, references therein are first remapped, and then identified as roots. Hence, root reference remapping and identification are performed in a single concurrent phase of a GC cycle.
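The two behaviors above can be sketched in a few lines of Python (hypothetical names; frames are dicts and the forwarding table maps old to new addresses): frames are remapped lazily when touched, and the later Mark/Remap pass both remaps any still-stale frame and reports its references as roots in one step.

```python
def access_frame(frame, forwarding):
    """Remap-on-access: a frame's stale references are rewritten to the
    relocated addresses only when the frame is actually touched."""
    if not frame["remapped"]:
        frame["refs"] = [forwarding.get(r, r) for r in frame["refs"]]
        frame["remapped"] = True
    return frame["refs"]

def mark_remap_phase(stack, forwarding):
    """Consolidated phase: remap every frame still in a stale state and
    identify its references as roots, in a single pass."""
    roots = []
    for frame in stack:
        roots.extend(access_frame(frame, forwarding))
    return roots
```

Reusing the same remap routine in both paths is what lets remapping and root identification share one concurrent phase instead of two.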
System and method for improving memory usage in virtual machines at a cost of increasing CPU usage
An apparatus includes at least one processor executing a method for managing memory among a plurality of concurrently-running virtual machines, and a non-transitory memory device that stores a set of computer readable instructions for implementing and executing said memory management method. A memory optimization mechanism can reduce a memory usage of a virtual machine at a cost of increasing a central processing unit (CPU) usage. Information on a memory usage and a CPU usage of each virtual machine is periodically collected. When a first virtual machine exhibits high memory use, at least one second virtual machine with an extra CPU capacity is identified. A memory optimization mechanism is applied to the second virtual machine to reduce memory used by the second virtual machine, thereby providing a portion of freed memory that is then allocated to the first virtual machine.
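A toy Python sketch of the periodic rebalancing pass described above (all names and thresholds invented): find a memory-pressured VM, find a donor with spare CPU, and model the CPU-for-memory trade (e.g. memory compression) by shrinking the donor's footprint and granting the freed pages to the pressured VM.

```python
def rebalance(vms, mem_threshold=0.9, cpu_headroom=0.5, reclaim=0.1):
    """One pass over periodically collected usage data: apply a memory
    optimization (costing CPU) to a low-CPU donor VM and reassign the
    freed memory to a memory-pressured VM."""
    pressured = [v for v in vms if v["mem_use"] / v["mem"] > mem_threshold]
    donors = [v for v in vms
              if v["cpu_use"] < cpu_headroom and v not in pressured]
    for vm in pressured:
        if not donors:
            break
        donor = min(donors, key=lambda v: v["cpu_use"])
        freed = int(donor["mem"] * reclaim)
        donor["mem"] -= freed   # donor spends CPU to live in less memory
        vm["mem"] += freed
```

The thresholds are placeholders; the abstract leaves the exact optimization mechanism (compression, ballooning, and so on) and the selection policy open.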
MANAGING OBJECTS STORED IN MEMORY
In one example in accordance with the present disclosure, a method for managing objects stored in memory may include identifying a first object in a heap. The heap may be accessed by a plurality of mutator threads executing within a first plurality of operating system processes. The method may also include determining that the first object is a reachable object and determining that a first range of memory locations in the heap does not contain any reachable object. The method may also include receiving an allocation request from a second mutator thread from the plurality of mutator threads and allocating a first free chunk of memory including at least a part of the first range of memory locations to the second mutator thread.
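A compact Python sketch of the two steps named above, with invented names and (start, size) tuples standing in for heap addresses: after reachability analysis, the gaps between reachable objects are the free ranges, and an allocation request is satisfied from the first range large enough.

```python
def free_ranges(heap_size, reachable):
    """Given reachable (start, size) allocations, return the gaps
    between them as free (start, size) ranges."""
    gaps, cursor = [], 0
    for start, size in sorted(reachable):
        if start > cursor:
            gaps.append((cursor, start - cursor))
        cursor = max(cursor, start + size)
    if cursor < heap_size:
        gaps.append((cursor, heap_size - cursor))
    return gaps

def allocate(gaps, request):
    """First-fit: hand the requesting mutator thread a chunk from the
    first free range large enough, shrinking that range."""
    for i, (start, size) in enumerate(gaps):
        if size >= request:
            gaps[i] = (start + request, size - request)
            return start
    return None
```

The cross-process aspect of the claim (mutator threads spread over several operating system processes) would add synchronization around these structures; the sketch shows only the range bookkeeping.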
REDUCING WRITE BARRIERS IN SNAPSHOT-BASED GARBAGE COLLECTION
Garbage collection methods and systems include determining that a condition for performing concurrent marking has been met, based on object write frequency. It is determined that an opportunity for performing concurrent marking has occurred, based on a stop-the-world event. Performance of concurrent marking is delayed until a future stop-the-world event, to prevent pre-write barriers.
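The scheduling rule above can be sketched as a small Python state machine (hypothetical names and threshold): the write-frequency condition arms the collector, but marking actually starts only at a stop-the-world event, so the snapshot can be taken without installing pre-write barriers in the meantime.

```python
class MarkingScheduler:
    """Begin concurrent marking only at a stop-the-world (STW) event,
    once object-write frequency is low enough; if the condition is met
    between STW events, the start is delayed until the next one."""

    def __init__(self, freq_threshold=100):
        self.freq_threshold = freq_threshold
        self.pending = False

    def observe_write_rate(self, writes_per_sec):
        # condition: the heap is quiet enough for snapshot-based marking
        if writes_per_sec < self.freq_threshold:
            self.pending = True

    def on_stop_the_world(self):
        # opportunity: an STW pause lets marking begin barrier-free
        if self.pending:
            self.pending = False
            return "start-concurrent-marking"
        return "no-op"
```

Deferring the start to a pause that happens anyway is the trade: slightly later marking in exchange for skipping the per-write cost of pre-write barriers.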