System and method for protecting GPU memory instructions against faults

A system and method for protecting memory instructions against faults are described. The system and method include converting slave instructions to dummy operations, modifying a memory arbiter to issue up to N master and N slave global/shared memory instructions per cycle, sending master memory requests to the memory system, using slave requests for error checking, entering master requests into the GM/LM FIFO, storing slave requests in a register, and comparing the entered master requests with the stored slave requests.
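The duplicate-and-compare flow described above can be pictured with a minimal sketch (Python is used purely for illustration; the arbiter class, field names, and the FIFO/register shapes are invented for this sketch, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class MemoryRequest:
    address: int
    data: int

class LockstepMemoryArbiter:
    """Pairs each master memory request with a slave copy; only the
    master proceeds toward memory, the slave is held for error checking."""

    def __init__(self):
        self.gm_lm_fifo = []      # master requests queued toward memory
        self.slave_register = {}  # slave copies held back for comparison

    def issue(self, request_id, master: MemoryRequest, slave: MemoryRequest):
        # Master request enters the GM/LM FIFO and proceeds to memory.
        self.gm_lm_fifo.append((request_id, master))
        # Slave request is treated as a dummy op: stored, never sent.
        self.slave_register[request_id] = slave

    def check(self):
        # Compare each entered master request with its stored slave copy;
        # any mismatch indicates a fault in the duplicated pipeline.
        faults = []
        for request_id, master in self.gm_lm_fifo:
            slave = self.slave_register.pop(request_id)
            if master != slave:
                faults.append(request_id)
        self.gm_lm_fifo.clear()
        return faults

arbiter = LockstepMemoryArbiter()
arbiter.issue(0, MemoryRequest(0x1000, 42), MemoryRequest(0x1000, 42))  # healthy pair
arbiter.issue(1, MemoryRequest(0x2000, 7), MemoryRequest(0x2004, 7))   # corrupted address
print(arbiter.check())  # -> [1]
```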

DISTRIBUTED VFS WITH SHARED PAGE CACHE
20220027327 · 2022-01-27

An apparatus includes a memory including a shared page cache and program instructions for a distributed virtual file system (VFS) for use in performing input/output (I/O) operations. An operating system of the computing system executes a central VFS in a first thread and executes a first application and the program instructions for the distributed VFS in a second thread. The distributed VFS determines that a first page, including data to which the first application has requested access, is stored in the shared page cache. In response to the determination, the distributed VFS accesses the requested data from the shared page cache without signaling the operating system or the central VFS. The computing system may be implemented in a device including a microkernel operating system.
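A minimal sketch of the fast path (Python; the page-cache keying and the os.pread fallback are assumptions made for illustration, not the patent's mechanism in detail):

```python
import os

class DistributedVFS:
    """If the page holding the requested data is already in the shared
    page cache, serve it without signaling the OS or the central VFS;
    otherwise fall back to a normal read and populate the cache."""

    PAGE_SIZE = 4096

    def __init__(self, shared_page_cache):
        # Maps (fd, page_number) -> bytes; shared with the central VFS.
        self.shared_page_cache = shared_page_cache

    def read(self, fd, offset, length):
        page_no = offset // self.PAGE_SIZE
        start = offset % self.PAGE_SIZE
        page = self.shared_page_cache.get((fd, page_no))
        if page is not None:
            return page[start:start + length]          # hit: no syscall
        data = os.pread(fd, self.PAGE_SIZE, page_no * self.PAGE_SIZE)
        self.shared_page_cache[(fd, page_no)] = data   # populate cache
        return data[start:start + length]
```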

Thread prefetch mechanism

An apparatus to facilitate data prefetching is disclosed. The apparatus includes a memory, one or more execution units (EUs) to execute a plurality of processing threads and prefetch logic to prefetch pages of data from the memory to assist in the execution of the plurality of processing threads.
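As a rough illustration of such prefetch logic, the sketch below uses an invented sequential next-N-pages policy (the abstract does not name a policy) to warm the cache ahead of the executing threads:

```python
class PrefetchLogic:
    """Sequential prefetcher sketch: when a thread touches page p, pull
    the next `depth` pages from memory into the cache ahead of the EUs."""

    def __init__(self, memory, cache, depth=2):
        self.memory = memory   # page_number -> page data
        self.cache = cache     # warmed pages available to the EUs
        self.depth = depth     # how many pages ahead to prefetch

    def on_access(self, page_no):
        if page_no not in self.cache:
            self.cache[page_no] = self.memory[page_no]   # demand fetch
        for ahead in range(1, self.depth + 1):
            nxt = page_no + ahead
            if nxt in self.memory and nxt not in self.cache:
                self.cache[nxt] = self.memory[nxt]       # prefetch
```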

MEMORY CACHE MANAGEMENT BASED ON STORAGE CAPACITY FOR PARALLEL INDEPENDENT THREADS
20220012176 · 2022-01-13

A request to write a first data item associated with a first thread to a memory device is received. The memory device includes a first portion and a second portion. The first portion includes a cache that includes a first block to be utilized for data caching, and a second block and a third block to be used for block compaction. The second block is associated with a high modification frequency and the third block is associated with a low modification frequency. In response to determining that a first memory page in the first block is available for writing the first data item, the first data item is written to the first memory page. A determination is made that a memory page criterion associated with the first thread has been satisfied. In response to identifying each of a set of second memory pages associated with the first thread written to at least one of the second block or the third block, the data of the first memory page and each of the set of second memory pages is copied to the second portion of the memory device. The first memory page is marked as invalid on the first block, and each of the set of second memory pages associated with the first thread is marked as invalid on at least one of the second block or the third block.
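One way to picture this flow is the hedged sketch below (Python; the class, the page-count criterion, and the dictionary-based block layout are all assumptions made for illustration):

```python
class ThreadAwareCache:
    """Writes land in block 1; compaction moves pages to the hot or cold
    block; once a thread's page criterion is met, all of its pages are
    copied to the second (main) portion and invalidated in the cache."""

    def __init__(self, limit=4):
        self.block1 = {}  # data-caching block:  page_id -> (thread, data)
        self.block2 = {}  # compaction block for frequently modified pages
        self.block3 = {}  # compaction block for rarely modified pages
        self.main = {}    # second portion of the memory device
        self.limit = limit  # per-thread criterion (assumed: page count)

    def write(self, thread, page_id, data):
        self.block1[page_id] = (thread, data)
        if self._count(thread) >= self.limit:    # criterion satisfied
            self._fold(thread)

    def compact(self, page_id, hot):
        # During compaction a cached page moves to the hot or cold block.
        entry = self.block1.pop(page_id)
        (self.block2 if hot else self.block3)[page_id] = entry

    def _count(self, thread):
        blocks = (self.block1, self.block2, self.block3)
        return sum(1 for b in blocks for t, _ in b.values() if t == thread)

    def _fold(self, thread):
        # Copy every page of the thread to the main portion, then mark
        # the source pages invalid (here: delete them) in all blocks.
        for block in (self.block1, self.block2, self.block3):
            for page_id, (t, data) in list(block.items()):
                if t == thread:
                    self.main[page_id] = data
                    del block[page_id]
```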

Provisioning virtual machines with a single identity and cache virtual disk

A virtual disk is provided to a computing environment. The virtual disk includes identity information to enable identification of a virtual machine within the computing environment. A size of the virtual disk is increased within the computing environment to enable the virtual disk to act as storage for the identity information and as a cache of other system data needed to operate the virtual machine. The virtual machine is booted within the computing environment. The virtual machine is configured to at least access the virtual disk, which both includes the identity information and caches the other system data needed to operate the virtual machine. Related apparatus, systems, techniques and articles are also described.
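A very loose sketch of the provisioning sequence might look as follows (Python; the VirtualDisk shape, the size figures, and the environment dictionary are invented stand-ins, not the patent's mechanism):

```python
from dataclasses import dataclass, field

@dataclass
class VirtualDisk:
    identity: dict                             # machine identity records
    size_gb: int                               # current provisioned size
    cache: dict = field(default_factory=dict)  # cached system data

def provision_vm(environment, identity, base_size_gb=1, grow_by_gb=9):
    # Provide the disk with identity information, then grow it so the
    # same single disk can also cache other system data for the VM.
    disk = VirtualDisk(identity=identity, size_gb=base_size_gb)
    disk.size_gb += grow_by_gb        # one disk: identity + cache
    environment["disk"] = disk
    environment["booted"] = True      # boot the VM against that one disk
    return disk
```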

CACHE PROGRAM OPERATION OF THREE-DIMENSIONAL MEMORY DEVICE WITH STATIC RANDOM-ACCESS MEMORY
20220413771 · 2022-12-29

A three-dimensional (3D) memory device includes a 3D NAND memory array, an on-die static random-access memory (SRAM), and peripheral circuits formed on the same chip as the on-die SRAM. The peripheral circuits include a page buffer coupled to the on-die SRAM and a controller coupled to the on-die SRAM and the page buffer. The controller may be configured to load program data into the page buffer and cache the program data into the on-die SRAM as a backup copy of the program data. In response to a status indicating that programming the program data from the page buffer into the 3D NAND memory array has failed, the controller may be further configured to transmit the backup copy of the program data in the on-die SRAM to the page buffer, and program the backup copy of the program data in the page buffer into the 3D NAND memory array.
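The backup-and-reprogram loop can be sketched as follows (Python; the program_fn callback, the retry count, and the dictionary-based NAND stand-in are assumptions made for illustration):

```python
class NandProgrammer:
    """Cache-program sketch: data is loaded into the page buffer and
    mirrored into on-die SRAM; on a program failure the SRAM backup is
    copied back to the page buffer and programmed again."""

    def __init__(self, array, max_retries=2):
        self.array = array          # page_addr -> data (3D NAND stand-in)
        self.page_buffer = None
        self.sram_backup = None
        self.max_retries = max_retries

    def program(self, page_addr, data, program_fn):
        self.page_buffer = bytes(data)
        self.sram_backup = bytes(data)     # backup copy in on-die SRAM
        for _ in range(self.max_retries + 1):
            if program_fn(self.array, page_addr, self.page_buffer):
                return True                # program status: pass
            # Status: fail. Restore the page buffer from the SRAM backup
            # (the buffer may have been reused for the next cached page).
            self.page_buffer = self.sram_backup
        return False
```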

DATA UPDATING TECHNOLOGY
20220342541 · 2022-10-27

A storage system includes a management node and a plurality of storage nodes forming a redundant array of independent disks (RAID). When the management node determines, based on a received write request, that not all data in an entire stripe is updated, the management node sends update data chunks obtained from the to-be-written data to the corresponding storage nodes. The storage nodes do not directly update, based on the received update data chunks, the data blocks stored in their storage devices; instead, each storage node stores its update data chunk in a non-volatile memory (NVM) cache and sends the update data chunk to another storage node for backup. This data updating method reduces the write amplification caused in a stripe update process, thereby improving the update performance of the storage system.

Data updating technology
11422703 · 2022-08-23

A storage system includes a management node and a plurality of storage nodes forming a redundant array of independent disks (RAID). When the management node determines, based on a received write request, that not all data in an entire stripe is updated, the management node sends update data chunks obtained from the to-be-written data to the corresponding storage nodes. The storage nodes do not directly update, based on the received update data chunks, the data blocks stored in their storage devices; instead, each storage node stores its update data chunk in a non-volatile memory (NVM) cache and sends the update data chunk to another storage node for backup. This data updating method reduces the write amplification caused in a stripe update process, thereby improving the update performance of the storage system.

Using a second content-addressable memory to manage memory burst accesses in memory sub-systems

A request to access data at an address is received from a host system. A tag associated with the address is determined not to be found in first entries in a first content-addressable memory (CAM) or in second entries in a second CAM. Responsive to determining that the tag is not found in the first entries or in the second entries, a particular entry is selected from among the first entries that each include valid data. A determination is made whether the particular entry satisfies a condition indicating that content in the particular entry is to be stored in the second CAM. The content is associated with other data stored in the cache. Responsive to determining that the condition is satisfied, the content of the particular entry is stored in one of the second entries to maintain the data in the cache.
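A hedged sketch of the two-CAM scheme follows (Python; the FIFO replacement order and the "was reused at least once" retention condition are assumptions, since the abstract leaves the condition unspecified):

```python
from collections import OrderedDict

class TwoCamCache:
    """On a miss in both CAMs, a valid first-CAM entry is evicted; if it
    satisfies the retention condition, its content moves to the second
    CAM so the cached data survives the burst of new accesses."""

    def __init__(self, cam1_size=4, cam2_size=4):
        self.cam1 = OrderedDict()   # tag -> (data, hit_count)
        self.cam2 = OrderedDict()   # tag -> data
        self.cam1_size = cam1_size
        self.cam2_size = cam2_size

    def access(self, tag, load_fn):
        if tag in self.cam1:                  # hit in the first CAM
            data, hits = self.cam1[tag]
            self.cam1[tag] = (data, hits + 1)
            return data
        if tag in self.cam2:                  # hit in the second CAM
            return self.cam2[tag]
        # Miss in both CAMs: make room in the first CAM if it is full.
        if len(self.cam1) >= self.cam1_size:
            victim_tag, (victim_data, hits) = self.cam1.popitem(last=False)
            if hits > 0:                      # condition: entry was reused
                if len(self.cam2) >= self.cam2_size:
                    self.cam2.popitem(last=False)
                self.cam2[victim_tag] = victim_data   # keep data cached
        self.cam1[tag] = (load_fn(tag), 0)
        return self.cam1[tag][0]
```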

Data storage method and apparatus, and server

This disclosure relates to a data storage method and apparatus, and a server. The method includes receiving, by a first server, a write instruction sent by a second server, storing target data in a cache of a controller, detecting a read instruction for the target data, and storing the target data in a storage medium of a non-volatile memory based on the read instruction. In other words, when the second server needs to write the target data to the first server, the target data is not only written to the cache of the first server, but also written to the storage medium of the first server. This can ensure that the data in the cache is written to the storage medium promptly.
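A minimal sketch of the cache-then-persist-on-read behavior (Python; the class and method names are invented for illustration):

```python
class FirstServer:
    """Sketch of the described flow: a write from the second server
    lands in the controller cache; when a read instruction for that
    data is detected, the data is also persisted to the non-volatile
    storage medium, so the cached copy reaches stable storage promptly."""

    def __init__(self):
        self.cache = {}            # controller cache
        self.storage_medium = {}   # non-volatile storage medium

    def handle_write(self, key, data):
        self.cache[key] = data     # write instruction from second server

    def handle_read(self, key):
        data = self.cache[key]
        # Read instruction detected for the target data: ensure the
        # cached copy is written to the storage medium as well.
        self.storage_medium[key] = data
        return data
```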