G06F3/0674

DEVICE, METHOD OF CONTROLLING A CACHE MEMORY, AND STORAGE MEDIUM
20170220486 · 2017-08-03

A device includes a processor. A storage region of a storage is divided into regions, and blocks temporarily store the data held in those regions as the regions are accessed by the head. Based on the position of a first region and the current position of the head, the processor calculates a first time for reading the first region, which is associated with a first block determined as a deletion candidate by a given method. Based on the position of a second region and the current position of the head, the processor likewise calculates a second time for reading the second region, which is associated with a second block not determined as a deletion candidate by the method. When the processor determines that the second time is shorter than the first time, the processor deletes the second block.
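
The eviction choice above can be sketched as follows; the linear seek-cost model, the constants, and the function names are illustrative assumptions, not from the patent:

```python
def estimated_read_time(head_pos, region_pos, seek_cost=1.0, read_cost=5.0):
    # Hypothetical cost model: seek time proportional to head travel
    # distance, plus a fixed transfer time for reading the region.
    return seek_cost * abs(region_pos - head_pos) + read_cost

def choose_victim(head_pos, candidate_region_pos, other_region_pos):
    """Return which cached block to delete: "first" (the candidate chosen
    by the base method, e.g. LRU) or "second" (a block whose backing
    region is cheaper to re-read from the current head position)."""
    t_first = estimated_read_time(head_pos, candidate_region_pos)
    t_second = estimated_read_time(head_pos, other_region_pos)
    return "second" if t_second < t_first else "first"
```

The point of the comparison is that evicting a block whose region sits near the head costs little, since it can be re-read cheaply.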

Systems and methods for accessing storage-as-memory

Aspects of the embodiments are directed to systems, devices, and methods for accessing storage-as-memory. Embodiments include a microprocessor with a microprocessor system agent, and a field-programmable gate array (FPGA). The FPGA includes an FPGA system agent that processes memory access requests received from the microprocessor system agent across a communications link, a memory controller communicatively coupled to the FPGA system agent, and a high-speed serial interface that links the FPGA system agent with a storage system. Embodiments can also include a storage device connected to the FPGA by the high-speed serial interface.
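
A toy software model of the request routing described above; the address threshold, class names, and dictionary-backed stores are illustrative assumptions (the actual design is hardware):

```python
from dataclasses import dataclass

@dataclass
class MemRequest:
    op: str          # "read" or "write"
    addr: int
    data: bytes = b""

class FpgaSystemAgent:
    """Toy model: requests below storage_base go to local memory (via the
    memory controller); requests at or above it go over the serial link
    to the storage system, which thereby appears as ordinary memory."""
    def __init__(self, storage_base):
        self.storage_base = storage_base
        self.memory = {}    # stands in for the memory controller's DRAM
        self.storage = {}   # stands in for the linked storage system
    def handle(self, req):
        backing = self.storage if req.addr >= self.storage_base else self.memory
        if req.op == "write":
            backing[req.addr] = req.data
            return b""
        return backing.get(req.addr, b"\x00")
```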

DATA PROCESSING METHOD AND STORAGE DEVICE
20220269431 · 2022-08-25

This application provides a data processing method and a storage device, and belongs to the field of storage technologies. The storage device performs deduplication and compression at different granularities: it deduplicates data at a large granularity and compresses the data at a small granularity. This removes the limitation that the deduplication granularity and the compression granularity must be the same, and to some extent avoids both the deduplication-ratio decrease caused by an excessively large granularity and the compression-ratio decrease caused by an excessively small granularity, improving the overall reduction ratio of deduplication and compression.
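
A minimal sketch of the two-granularity scheme, assuming SHA-256 fingerprints, zlib compression, and the grain sizes shown (all illustrative choices, not from the application):

```python
import hashlib
import zlib

DEDUP_GRAIN = 8192   # large granularity used for deduplication (assumed)
COMP_GRAIN = 2048    # small granularity used for compression (assumed)

def reduce_data(data, store):
    """Deduplicate at DEDUP_GRAIN; compress only new chunks, each split
    into COMP_GRAIN sub-chunks that are compressed independently.
    Returns the list of fingerprints referencing the stored chunks."""
    refs = []
    for i in range(0, len(data), DEDUP_GRAIN):
        chunk = data[i:i + DEDUP_GRAIN]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:
            store[fp] = [zlib.compress(chunk[j:j + COMP_GRAIN])
                         for j in range(0, len(chunk), COMP_GRAIN)]
        refs.append(fp)
    return refs
```

A duplicate large chunk is stored once, while compression still operates on the smaller sub-chunks.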

Data management for a data storage device

Managing data stored in at least one data storage device (DSD) of a computer system where the at least one DSD includes at least one disk for storing data. A Linear Tape File System (LTFS) write or read command is generated including an LTFS block address. The LTFS block address is translated to a device address for the at least one DSD and data on a disk of the at least one DSD is written or read at the device address.
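
The translation step can be sketched as a fixed-size mapping from LTFS block addresses to device sector addresses; both size constants and the function name are illustrative assumptions:

```python
LTFS_BLOCK_SIZE = 524288   # assumed LTFS block size in bytes (512 KiB)
SECTOR_SIZE = 4096         # assumed DSD sector size in bytes

def ltfs_to_device_address(ltfs_block, partition_offset=0):
    """Translate an LTFS block address into a device sector address on
    the disk, optionally offset by the start of the target partition."""
    byte_offset = ltfs_block * LTFS_BLOCK_SIZE
    return partition_offset + byte_offset // SECTOR_SIZE
```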

Per-memory group swap device
09772776 · 2017-09-26

Systems and methods are disclosed for swapping a memory page from memory to a swap device. An example system for swapping a memory page from memory to a swap device includes a memory to store one or more memory pages. The system also includes a swap device selector that receives an indication to swap out a memory page from memory to a swap device. The swap device selector identifies a memory group to which the memory page belongs and selects a swap device from a plurality of swap devices assigned to the identified memory group. The memory group identifies a plurality of applications having a common property. The system further includes a swap module that copies the memory page into the selected swap device.
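
The selector described above can be sketched as follows, assuming a round-robin policy within each group and that each page record carries the group of its owning application (both assumptions):

```python
class SwapDeviceSelector:
    def __init__(self, group_devices):
        # group_devices: memory group name -> list of swap device names,
        # i.e. the plurality of swap devices assigned to each group.
        self.group_devices = group_devices
        self.next_index = {g: 0 for g in group_devices}

    def group_of(self, page):
        # Hypothetical: the page record names its application's group.
        return page["group"]

    def select(self, page):
        """Identify the page's memory group, then pick one of that
        group's swap devices (round-robin here, as an example policy)."""
        group = self.group_of(page)
        devices = self.group_devices[group]
        i = self.next_index[group]
        self.next_index[group] = (i + 1) % len(devices)
        return devices[i]
```

A swap module would then copy the page into the device returned by `select`.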

Large object containers with size criteria for storing mid-sized large objects

A method, computer program product and system are provided. The method, computer program product and system execute a process for storing an object in an object container that is stored in a persistency of a disk storage. The object container has size criteria whereby objects meeting the size criteria of the object container can be assigned to the object container. The object container can facilitate storing multiple objects to optimize disk storage usage by facilitating the assigning of multiple objects to the same disk storage page.
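
One way the size criteria and page sharing could be sketched; the class name, thresholds, and page size are assumptions for illustration:

```python
class ObjectContainer:
    def __init__(self, min_size, max_size, page_size=16384):
        self.min_size, self.max_size = min_size, max_size
        self.page_size = page_size
        self.objects = []
        self.used = 0

    def accepts(self, obj):
        # Only objects meeting the container's size criteria qualify.
        return self.min_size <= len(obj) <= self.max_size

    def add(self, obj):
        if not self.accepts(obj):
            raise ValueError("object does not meet the size criteria")
        self.objects.append(obj)
        self.used += len(obj)

    def pages_used(self):
        # Mid-sized objects share disk pages instead of taking one each.
        return -(-self.used // self.page_size)   # ceiling division
```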

STREAMING DATA READING METHOD BASED ON EMBEDDED FILE SYSTEM

A streaming data reading method based on an embedded file system, including: receiving a request for reading streaming data and, when the requested streaming data exists on a disk, creating a new reading task for the request, allocating storage space to the newly created task, and initializing the relevant parameters; decomposing the reading task into a plurality of sub-tasks, each sub-task responsible for reading one piece of physically continuous data and caching it; extracting data from the sub-task cache, packaging it according to a streaming data format, submitting each packaged block of data to the caller of the reading task, and releasing the sub-task and triggering the next one after submission; and, when all sub-tasks have completed successfully, reporting normal completion of the task to the caller and waiting for the caller to end the current reading task.
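
The decomposition into sub-tasks can be sketched as follows, assuming the on-disk layout is given as a list of physically continuous extents; the names and callback shape are illustrative, not from the method:

```python
def split_into_subtasks(extents, request_len):
    """Each sub-task reads one physically continuous run on disk.
    extents: list of (start_sector, length) runs (assumed layout)."""
    subtasks, remaining = [], request_len
    for start, length in extents:
        if remaining <= 0:
            break
        n = min(length, remaining)
        subtasks.append({"start": start, "length": n})
        remaining -= n
    return subtasks

def run_reading_task(extents, request_len, read_extent, deliver):
    # Trigger sub-tasks one after another: read the extent, deliver the
    # packaged block to the caller, then move on to the next sub-task.
    for st in split_into_subtasks(extents, request_len):
        deliver(read_extent(st["start"], st["length"]))
    return "complete"
```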

STORING MID-SIZED LARGE OBJECTS FOR USE WITH AN IN-MEMORY DATABASE SYSTEM

A method, computer program product and system are provided. The method, computer program product and system execute a process for storing an object in an object container that is stored in a persistency of a disk storage. The object container has size criteria whereby objects meeting the size criteria of the object container can be assigned to the object container. The object container can facilitate storing multiple objects to optimize disk storage usage by facilitating the assigning of multiple objects to the same disk storage page.

Server and associated computer program product using different transmission speed for different portions of cold data

The present invention provides a server that includes a network interface, a processor, and a first storage device, wherein the processor is arranged to communicate with an electronic device via the network interface, and the first storage device stores data. In operation, the processor determines whether the data is cold data; when the data is determined to be cold data, the processor moves a second portion of the data to a second storage device while a first portion of the data remains in the first storage device, wherein the data amount of the first portion is less than that of the second portion, and the access speed of the first storage device is higher than that of the second storage device.
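
A minimal sketch of the split, assuming a tunable fraction determines the small first portion kept on the fast device; the fraction and names are not from the patent:

```python
def tier_cold_data(data, fast_fraction=0.1):
    """Split cold data into a small first portion (kept on the faster
    first storage device, e.g. to serve the start of a transfer quickly)
    and a larger second portion (moved to the slower second device).
    fast_fraction is an assumed tuning parameter well below 0.5, so the
    first portion is always smaller than the second."""
    cut = max(1, int(len(data) * fast_fraction))
    first_portion = data[:cut]    # remains in the first storage device
    second_portion = data[cut:]   # moved to the second storage device
    return first_portion, second_portion
```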

Enhancing lifetime of non-volatile cache by reducing intra-block write variation

A method, a system and a computer-readable medium for writing to a cache memory are provided. The method comprises maintaining a write count associated with a set, the set containing a memory block associated with a physical block address. A mapping from a logical address to the physical address of the block is also maintained. The method shifts the mapping based on the value of the write count and writes data to the block based on the mapping.
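
The write-count-driven remapping can be sketched as follows, assuming a rotate-by-one shift policy and an illustrative threshold; the names and constants are not from the patent:

```python
SHIFT_THRESHOLD = 16   # writes per set before shifting (assumed)

class ShiftedSet:
    """Cache set that rotates its logical-to-physical word mapping as
    the set's write count grows, spreading writes across the block to
    reduce intra-block write variation."""
    def __init__(self, num_words=8):
        self.num_words = num_words
        self.write_count = 0     # write count maintained per set
        self.shift = 0           # current rotation of the mapping
        self.words = [0] * num_words

    def physical(self, logical):
        # Mapping from a logical word address to a physical one.
        return (logical + self.shift) % self.num_words

    def write(self, logical, value):
        # Writes go through the current mapping; once the write count
        # reaches the threshold, the mapping is shifted by one word.
        self.words[self.physical(logical)] = value
        self.write_count += 1
        if self.write_count % SHIFT_THRESHOLD == 0:
            self.shift = (self.shift + 1) % self.num_words
```

Repeated writes to the same logical word thus land on different physical words over time.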