Patent classifications
G06F2212/1036
Method and system for accelerating storage of data in write-intensive computer applications
A method of optimising the service rate of a buffer in a computer system having memory stores of a first and a second type is described. The method selectively services the buffer by routing data to the memory stores of the first and second types based on the read/write capacity of the memory store of the first type.
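The routing decision can be sketched as follows; the class name, the capacity model, and the two-item limit are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch: drain a shared buffer by routing each item to the
# first (fast) memory store while it has spare write capacity, spilling
# the remainder to the second store.

class TwoTierRouter:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity  # writes the fast store still accepts
        self.fast, self.slow = [], []

    def service(self, buffer):
        """Service the buffer, preferring the first memory store."""
        for item in buffer:
            if self.fast_capacity > 0:
                self.fast.append(item)
                self.fast_capacity -= 1
            else:
                self.slow.append(item)
        buffer.clear()

router = TwoTierRouter(fast_capacity=2)
router.service([b"a", b"b", b"c"])
# two items land in the fast store, the overflow in the slow store
```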
Operating method of memory controller to reduce lifespan damage, operating method of host and storage device
An operating method of a memory controller may include receiving a state analysis request for a memory from a host, determining a fragment state of the memory, determining a lifespan situation of the memory, generating an analysis result indicating whether garbage collection is restricted on the basis of the fragment state and the lifespan situation, and providing the analysis result to the host.
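The controller's analysis step can be sketched as one decision function; the thresholds and the rule that severe fragmentation overrides the end-of-life restriction are invented for illustration and are not claimed by the patent.

```python
# Hypothetical sketch: combine a fragmentation measure with a remaining-
# lifespan estimate to tell the host whether garbage collection should be
# restricted. All thresholds are assumptions for illustration.

def analyze(fragment_ratio, remaining_life_ratio,
            frag_threshold=0.5, life_threshold=0.2):
    """Return True if garbage collection should be restricted.

    GC is restricted when the memory is near end of life, unless the
    memory is so fragmented that GC is still worth its wear cost.
    """
    near_end_of_life = remaining_life_ratio < life_threshold
    badly_fragmented = fragment_ratio > frag_threshold
    return near_end_of_life and not badly_fragmented
```

For example, a lightly fragmented device near end of life restricts GC, while a heavily fragmented one does not.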
Electronic device and method for utilizing memory space thereof
In various embodiments, an electronic device may include a display, a memory including a first space storing no data and a second space storing data, and a processor. The processor may be configured to control the electronic device to: receive a setting value for a fast data storage mode of the memory; allocate a predetermined size of the free space of the electronic device's file system as a temporary memory space for the fast data storage mode, based on the setting value; control the memory to allocate a predetermined size of the first space as a borrowed space for the fast data storage mode, corresponding to the size of the temporary memory space; recognize the occurrence of an event for starting data storage through the fast data storage mode; and control the memory to perform the data storage using the borrowed space in response to the occurrence of the event.
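The borrow-and-release lifecycle can be sketched as follows; the class, byte accounting, and sizes are hypothetical, not taken from the patent.

```python
# Hypothetical sketch: when fast-storage mode is enabled, borrow part of the
# file system's free space as a temporary memory space, and return it when
# the mode ends. Names and the accounting model are illustrative assumptions.

class FastStorageMode:
    def __init__(self, fs_free_bytes):
        self.fs_free_bytes = fs_free_bytes  # free space in the file system
        self.borrowed_bytes = 0             # space lent to the fast mode

    def enable(self, requested_bytes):
        """Allocate up to requested_bytes of free space as borrowed space."""
        borrow = min(requested_bytes, self.fs_free_bytes)
        self.fs_free_bytes -= borrow
        self.borrowed_bytes = borrow

    def disable(self):
        """Return the borrowed space to the file system."""
        self.fs_free_bytes += self.borrowed_bytes
        self.borrowed_bytes = 0
```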
Collecting statistics for persistent memory
Disclosed herein are techniques for management of a non-volatile memory device. In one example, an integrated circuit comprises a cache device and a management controller. The cache device is configured to store a first mapping between logical addresses and physical addresses of a first memory, the first mapping being a subset of a full mapping between logical addresses and physical addresses of the first memory that is stored in a second memory, and an access count associated with each of the physical addresses of the first mapping. The management controller is configured to: maintain access statistics of the first memory based on the access counts stored in the cache device; and determine the mapping between logical addresses and physical addresses stored in the second memory based on the access statistics and predicted likelihoods of at least some of the logical addresses receiving an access operation.
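A cached logical-to-physical mapping that tracks per-entry access counts can be sketched as below; the LRU eviction policy, capacity, and method names are assumptions for illustration only.

```python
# Hypothetical sketch: a small cached subset of the L2P (logical-to-physical)
# mapping that keeps a per-entry access count; a management controller could
# read those counts as access statistics for the full table.
from collections import OrderedDict

class CachedMapping:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # logical addr -> [physical addr, count]

    def access(self, logical, physical):
        """Record one access, evicting the least-recent entry when full."""
        if logical in self.entries:
            self.entries[logical][1] += 1
            self.entries.move_to_end(logical)
        else:
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)
            self.entries[logical] = [physical, 1]

    def hottest(self, n):
        """Logical addresses most likely to receive another access."""
        return sorted(self.entries, key=lambda l: -self.entries[l][1])[:n]
```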
USING A COMMON POOL OF BLOCKS FOR USER DATA AND A SYSTEM DATA STRUCTURE
A method includes identifying, by a processing device, a common pool of blocks comprising a first plurality of blocks allocated to system data and a second plurality of blocks allocated to user data; determining whether user data has been written to the second plurality of blocks within a threshold period of time; and responsive to determining that the user data has not been written to the second plurality of blocks within the threshold period of time, allocating a block from the second plurality of blocks to the first plurality of blocks.
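The rebalancing step of the claim can be sketched as a single function; the list representation and second-based timing are illustrative assumptions.

```python
# Hypothetical sketch: if no user data has been written to the user-data
# blocks within a threshold window, lend one of those blocks to the
# system-data side of the common pool.

def rebalance(system_blocks, user_blocks, last_user_write, now, threshold):
    """Move one block from the user pool to the system pool when the user
    pool has been idle for at least `threshold` seconds."""
    if user_blocks and (now - last_user_write) >= threshold:
        system_blocks.append(user_blocks.pop())
    return system_blocks, user_blocks
```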
Memory system for storing data of log-structured merge tree structure and data processing system including the same
A memory system includes a storage medium having a plurality of memory regions. A controller is configured to allocate each of a plurality of open memory regions among the memory regions to one or more levels and, in response to a write request received from a host device that includes data and a level of the data, to store the data in an open memory region allocated to that level. The level may correspond to the level, within a structure such as a log-structured merge (LSM) tree, of the predetermined file unit that contains the data.
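The level-aware placement can be sketched as follows; the class and the use of lists as stand-ins for open memory regions are assumptions for illustration.

```python
# Hypothetical sketch: keep one open memory region per LSM level and direct
# each level-tagged write into its level's region, so data with similar
# lifetimes (same compaction level) stays physically together.

class LevelAwareStore:
    def __init__(self):
        self.open_regions = {}  # level -> data written at that level

    def write(self, data, level):
        # Allocate an open region for this level on first use.
        self.open_regions.setdefault(level, []).append(data)

store = LevelAwareStore()
store.write(b"sst-0", level=0)
store.write(b"sst-1", level=1)
store.write(b"sst-0b", level=0)
# level-0 data stays together, separated from level-1 data
```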
DYNAMICALLY STORING DATA TO DATA STORAGE DEVICES WITHIN PRE-DEFINED COOLING ZONES
A method includes receiving a write command including data to be stored and a logical block address associated with the data. The method includes selecting, from a pre-defined plurality of cooling zones of a data storage chassis, a cooling zone in which to store the data. The method further includes writing the data to a data storage device in the cooling zone.
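The zone-selection step can be sketched as below; the dictionary-based chassis model, the temperature field, and the coolest-zone policy are illustrative assumptions rather than claimed details.

```python
# Hypothetical sketch: among the pre-defined cooling zones of the chassis,
# pick the coolest zone that still has capacity for the incoming write.

def select_zone(zones):
    """zones: list of dicts with 'temp_c' and 'free_bytes'.
    Returns the coolest zone that can accept the write, or None."""
    candidates = [z for z in zones if z["free_bytes"] > 0]
    return min(candidates, key=lambda z: z["temp_c"]) if candidates else None
```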
Relocating data in a memory device
Methods that facilitate optimized relocation of data in a memory are presented. In addition to a memory controller component, a memory manager component can be employed to increase available processing resources and enable more efficient execution of higher-level functions. Higher-level functions can be delegated to the memory manager component so that they execute with reduced or no load on the memory controller component's resources. A uni-bus or multi-bus architecture can be employed to further optimize data relocation operations. A first bus can be utilized for data access operations including read, write, erase, refresh, or combinations thereof, among others, while a second bus can be designated for higher-level operations including data compaction, error code correction, wear leveling, or combinations thereof, among others.
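The delegation of higher-level operations off the data path can be sketched with two queues standing in for the two buses; the thread-and-queue model is an assumption for illustration, not the patent's hardware design.

```python
# Hypothetical sketch: a separate memory-manager worker consumes higher-level
# operations (compaction, ECC, wear leveling) from its own "bus" so the
# memory controller's data bus stays free for reads and writes.
import queue
import threading

data_bus = queue.Queue()         # bus 1: read/write/erase traffic
maintenance_bus = queue.Queue()  # bus 2: higher-level operations

def memory_manager(done):
    while True:
        op = maintenance_bus.get()
        if op is None:  # shutdown sentinel
            break
        done.append(op)  # perform the operation off the data path

done = []
worker = threading.Thread(target=memory_manager, args=(done,))
worker.start()
maintenance_bus.put("wear_leveling")
maintenance_bus.put(None)
worker.join()
```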
MANAGING CACHE REPLACEMENT IN A STORAGE CACHE BASED ON INPUT-OUTPUT ACCESS TYPES OF DATA STORED IN THE STORAGE CACHE
An apparatus comprises a processing device configured to monitor a storage cache storing a plurality of cache pages to determine whether the storage cache reaches one or more designated conditions and to determine cache replacement scores for at least a subset of the cache pages, the cache replacement scores being determined based at least in part on input-output access types for data stored in the cache pages. The processing device is also configured to select, responsive to determining that the storage cache has reached at least one of the one or more designated conditions, at least one of the cache pages to move from the storage cache to a storage device based at least in part on the determined cache replacement scores. The processing device is further configured to move the selected at least one of the plurality of cache pages from the storage cache to the storage device.
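The access-type-based scoring can be sketched as follows; the weight table and the idea that randomly written data is costlier to re-create than sequentially read data are invented for illustration.

```python
# Hypothetical sketch: score cache pages by the IO access type of their data
# (sequential reads are cheap to re-fetch; random writes are expensive) and,
# when the cache reaches a designated condition, demote the lowest scorer.

ACCESS_TYPE_WEIGHT = {
    "seq_read": 1,    # cheapest to re-fetch from the backing store
    "seq_write": 2,
    "rand_read": 3,
    "rand_write": 4,  # most valuable to keep cached
}

def pick_victim(pages):
    """pages: dict of page_id -> access type.
    Returns the page to move from the storage cache to a storage device."""
    return min(pages, key=lambda p: ACCESS_TYPE_WEIGHT[pages[p]])
```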
Limiting hot-cold swap wear leveling
Embodiments include methods, systems, devices, instructions, and media for limiting hot-cold swap wear leveling in memory devices. In one embodiment, wear metric values are stored and monitored using multiple wear leveling criteria. The multiple wear leveling criteria include a hot-cold swap wear leveling criterion, which may make use of a write count offset value. Based on a first wear metric value of a first management group and a second wear metric value of a second management group, the first management group and the second management group are selected for a wear leveling swap operation. The wear leveling swap operation is performed with a whole management group read operation of the first management group to read a set of data, and a whole management group write operation to write the set of data to the second management group.
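The offset-limited swap criterion can be sketched as below; using raw write counts as the wear metric and dictionaries as management groups are assumptions for illustration.

```python
# Hypothetical sketch: swap the data of the most-worn (hot) management group
# with the least-worn (cold) one, but only when their write counts differ by
# more than an offset value, limiting needless hot-cold swaps.

def maybe_swap(write_counts, groups, offset):
    """write_counts: per-group write totals; groups: per-group data.
    Performs a whole-group read/write swap when the wear spread exceeds
    the write count offset; returns whether a swap occurred."""
    hot = max(write_counts, key=write_counts.get)
    cold = min(write_counts, key=write_counts.get)
    if write_counts[hot] - write_counts[cold] > offset:
        groups[hot], groups[cold] = groups[cold], groups[hot]
        return True
    return False
```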