G06F2212/20

METHOD FOR REFRESHING DYNAMIC RANDOM ACCESS MEMORY AND A COMPUTER SYSTEM

A method for refreshing a dynamic random access memory (DRAM) and a computer system are provided. When the address of a refresh unit in a DRAM and the refresh information of the refresh unit are acquired, the address and the refresh information are encapsulated as a DRAM access request, where the refresh unit is the storage space in the DRAM on which one refresh operation is performed, and the refresh information of the refresh unit includes a refresh cycle of the refresh unit. The address and the refresh information of the refresh unit are then written, using the DRAM access request, into refresh data space, where the refresh data space is storage space that is preset in the DRAM for storing an address of at least one refresh unit and refresh information of the at least one refresh unit.
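The flow described above can be sketched in a few lines. This is a toy model, not the patented implementation: the `RefreshInfo`/`AccessRequest` names, the millisecond unit for the refresh cycle, and the dict standing in for the preset refresh data space are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RefreshInfo:
    refresh_cycle_ms: int  # refresh cycle of the unit (hypothetical unit: ms)

@dataclass
class AccessRequest:
    """A DRAM access request encapsulating a refresh unit's address and info."""
    unit_address: int
    info: RefreshInfo

# Stand-in for the "refresh data space": storage preset in the DRAM for
# the address and refresh information of at least one refresh unit.
refresh_data_space: dict[int, RefreshInfo] = {}

def encapsulate(unit_address: int, refresh_cycle_ms: int) -> AccessRequest:
    """Encapsulate a refresh unit's address and refresh info as an access request."""
    return AccessRequest(unit_address, RefreshInfo(refresh_cycle_ms))

def write_to_refresh_data_space(req: AccessRequest) -> None:
    """Write the address and refresh info into the refresh data space."""
    refresh_data_space[req.unit_address] = req.info

req = encapsulate(0x1F00, 64)
write_to_refresh_data_space(req)
```

The key point the sketch captures is that refresh parameters travel through the ordinary access path (the request) rather than a dedicated refresh signal.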

Method and apparatus for accessing data stored in a storage system that includes both a final level of cache and a main memory
09594693 · 2017-03-14

A data access system including a storage drive, a processor, and a cache module. The processor, in response to data required by the processor not being cached within one or more levels of cache of the processor, generates a first physical address (PA). The cache module includes a memory and first and second controllers. The memory is a final level of cache. The first controller converts the first PA into a virtual address. The second controller: converts the virtual address into a second PA; based on the second PA, determines whether the data is cached within the memory; and, if the data is cached, accesses and forwards the data to the processor. The first or second controller determines whether a cache miss has occurred and, in response to a cache miss and based on the second PA or a third PA of the storage drive, retrieves the data from the storage drive.
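The two-controller lookup path can be illustrated with a minimal sketch. The XOR and mask functions standing in for the first controller (PA to virtual address) and second controller (virtual address to second PA) are arbitrary toy mappings, and the dicts modelling the final level of cache and the storage drive are assumptions for illustration only.

```python
class CacheModule:
    """Toy model of the cache module: two address-translation controllers
    in front of a memory serving as the final level of cache."""

    def __init__(self, drive):
        self.drive = drive    # backing storage drive: second-PA -> data
        self.memory = {}      # final level of cache: second-PA -> data

    def first_controller(self, first_pa):
        # Convert the first physical address into a virtual address
        # (toy mapping: flip all 16 address bits).
        return first_pa ^ 0xFFFF

    def second_controller(self, va):
        # Convert the virtual address into a second physical address
        # (toy mapping: keep the low 12 bits).
        return va & 0x0FFF

    def access(self, first_pa):
        va = self.first_controller(first_pa)
        pa2 = self.second_controller(va)
        if pa2 in self.memory:            # hit in the final level of cache
            return self.memory[pa2]
        data = self.drive[pa2]            # miss: retrieve from the storage drive
        self.memory[pa2] = data           # fill the cache for future accesses
        return data

drive = {0x0A0A: b"payload"}
cache = CacheModule(drive)
data = cache.access(0xF5F5)   # miss on first access, then cached
```

A second `cache.access(0xF5F5)` would be served from `cache.memory` without touching the drive, which is the point of interposing the final cache level between processor and storage.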

DATA ACCESS IN HYBRID MAIN MEMORY SYSTEMS
20170052742 · 2017-02-23

Implementations of the present disclosure include methods, systems, and computer-readable storage media for: identifying a data processing function to be executed in a hybrid main memory system, the hybrid main memory system including a first type of main memory and a second type of main memory, the data processing function including data access operations to access the hybrid main memory system; accessing a write metric for the data processing function, the write metric based at least in part on the proportion of the data access operations that are write operations; and, based at least in part on the write metric being less than a threshold value, designating the data processing function for execution in the first type of main memory.

CODECS FOR DNA DATA STORAGE

Described herein are systems and methods for encoding digital data into oligonucleotides and decoding the oligonucleotides back into digital data. The encoding and decoding schemes include an inner codec for transforming the digital data into bases, and vice versa. The encoding and decoding schemes also include an outer codec comprising an error correction scheme.
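An inner codec of the kind described transforms digital data into bases and vice versa. The sketch below uses the simplest such mapping, two bits per base, which is an illustrative assumption; real DNA storage codecs typically add run-length and GC-content constraints, and the outer error-correction codec is omitted entirely here.

```python
# Illustrative inner codec: map each 2-bit pair to one nucleotide base.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Transform digital data into a string of bases (the oligonucleotide)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(oligo: str) -> bytes:
    """Transform a string of bases back into digital data."""
    bits = "".join(BASE_TO_BITS[base] for base in oligo)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

Each byte becomes exactly four bases, and `decode(encode(data))` returns the original data; in the full scheme an outer error-correction code would wrap this inner transform to tolerate synthesis and sequencing errors.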

METHOD FOR MEMORY ALLOCATION DURING EXECUTION OF A NEURAL NETWORK
20260044724 · 2026-02-12 ·

According to an aspect, a method is proposed for defining placements, in a volatile memory, of temporary scratch buffers used during an execution of an artificial neural network, the method comprising: determining an execution order of layers of the neural network; defining placements, in a heap memory zone of the volatile memory, of intermediate result buffers generated by each layer, according to the execution order of the layers; determining at least one free area of the heap memory zone over the execution of the layers; and defining placements of temporary scratch buffers in the at least one free area of the heap memory zone, according to the execution order of the layers.
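The placement steps above can be sketched with a small heap planner. The first-fit policy, the heap size, and the buffer sizes are all illustrative assumptions; the abstract fixes only that buffers are placed in execution order and that scratch buffers go into free areas opened up during execution.

```python
class HeapPlanner:
    """Toy first-fit planner over a heap memory zone of a volatile memory."""

    def __init__(self, heap_size: int):
        self.free = [(0, heap_size)]  # free areas as (offset, size) pairs

    def alloc(self, size: int) -> int:
        """Place a buffer in the lowest free area that fits (first fit)."""
        self.free.sort()
        for i, (off, sz) in enumerate(self.free):
            if sz >= size:
                self.free[i] = (off + size, sz - size)
                return off
        raise MemoryError("no free area large enough")

    def release(self, off: int, size: int) -> None:
        """Free a buffer's area (no coalescing in this sketch)."""
        self.free.append((off, size))

planner = HeapPlanner(1024)
a = planner.alloc(256)   # intermediate result buffer of layer 1
b = planner.alloc(256)   # intermediate result buffer of layer 2
planner.release(a, 256)  # layer 1's result is no longer needed
s = planner.alloc(128)   # layer 3's temporary scratch buffer reuses the free area
```

Here the scratch buffer lands at the offset vacated by layer 1's result buffer, which is the effect the method aims for: scratch space occupies free areas of the heap zone rather than growing the zone.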