G06F2212/305

Translation lookaside buffer in memory

Examples of the present disclosure provide apparatuses and methods related to a translation lookaside buffer in memory. An example method comprises receiving, from a host, a command including a virtual address, and translating the virtual address to a physical address in volatile memory of a memory device using a translation lookaside buffer (TLB).
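
As a rough illustration of the translation step described above, here is a minimal Python sketch of a TLB lookup; the class shape, capacity, eviction policy, and page size are illustrative assumptions, not details from the patent:

```python
PAGE_SIZE = 4096

class TLB:
    """Minimal TLB: caches virtual-page -> physical-frame translations."""
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.entries = {}                    # vpn -> pfn

    def translate(self, vaddr, page_table):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn in self.entries:              # TLB hit: no page-table walk
            pfn = self.entries[vpn]
        else:                                # TLB miss: walk the page table
            pfn = page_table[vpn]
            if len(self.entries) >= self.capacity:
                self.entries.pop(next(iter(self.entries)))  # evict oldest entry
            self.entries[vpn] = pfn
        return pfn * PAGE_SIZE + offset

page_table = {0: 5, 1: 9}                    # vpn -> pfn mapping from the host
tlb = TLB()
print(tlb.translate(4100, page_table))       # 36868 (frame 9 * 4096 + offset 4)
```

A second lookup of the same virtual page would then hit in `tlb.entries` without consulting the page table.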

Nonvolatile memory device and operation method thereof

A nonvolatile memory device includes a nonvolatile memory, a volatile memory serving as a cache memory of the nonvolatile memory, and a first controller configured to control the nonvolatile memory. The nonvolatile memory device further includes a second controller configured to receive a device write command and an address and, in response, to sequentially transmit a first read command with the address and then a first write command with the address to the volatile memory through a first bus, and to transmit a second write command with the address to the first controller through a second bus.
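
A minimal sketch of that write path in Python (the buses and controllers are abstracted to dictionary accesses; class and method names are hypothetical, and the purpose of the initial cache read is an assumption):

```python
class NVMDevice:
    """Write path from the abstract: on a device write, the second
    controller reads then writes the volatile cache, and also issues a
    write to the nonvolatile memory via the first controller."""
    def __init__(self):
        self.volatile = {}       # cache memory (first bus)
        self.nonvolatile = {}    # nonvolatile memory (second bus)

    def device_write(self, addr, data):
        old = self.volatile.get(addr)   # first read command + address
        self.volatile[addr] = data      # first write command + address
        self.nonvolatile[addr] = data   # second write command + address
        return old                      # previous cache contents, if any

dev = NVMDevice()
dev.device_write(0x10, b"new")
print(dev.nonvolatile[0x10])  # b'new'
```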

GUARANTEED REAL-TIME CACHE CARVEOUT FOR DISPLAYED IMAGE PROCESSING SYSTEMS AND METHODS
20230081746 · 2023-03-16 ·

An electronic device may include an electronic display to display an image based on processed image data. The electronic device may also include image processing circuitry to generate the processed image data based on input image data and previously determined data stored in memory. The image processing circuitry may also operate according to real-time computing constraints. Cache memory may store the previously determined data in a provisioned section of the cache memory allotted to the image processing circuitry. Additionally, a controller may manage reading and writing of the previously determined data to the provisioned section of the cache memory.
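
The key idea — a provisioned cache section that other traffic can never evict — can be sketched as follows (a toy Python model; the partitioning scheme and names are illustrative assumptions, not the patent's design):

```python
class CarveoutCache:
    """Cache with a fixed region reserved ("carved out") for a real-time
    client, so its entries are never evicted by other traffic."""
    def __init__(self, total_lines, reserved_lines):
        self.reserved = {}                  # provisioned section
        self.shared = {}
        self.reserved_cap = reserved_lines
        self.shared_cap = total_lines - reserved_lines

    def put(self, key, value, realtime=False):
        region, cap = ((self.reserved, self.reserved_cap) if realtime
                       else (self.shared, self.shared_cap))
        if key not in region and len(region) >= cap:
            region.pop(next(iter(region)))  # evict within the same region only
        region[key] = value

    def get(self, key):
        if key in self.reserved:
            return self.reserved[key]
        return self.shared.get(key)

cache = CarveoutCache(total_lines=4, reserved_lines=2)
cache.put("frame_stats", 1, realtime=True)  # lands in the carveout
for k in ("a", "b", "c"):                   # shared traffic churns the cache
    cache.put(k, 0)
print(cache.get("frame_stats"))             # 1: the carveout is untouched
```

Because eviction is confined to each region, the real-time image pipeline's previously determined data stays resident regardless of other workloads.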

DATA AGGREGATION PROCESSING APPARATUS AND METHOD, AND STORAGE MEDIUM
20230128085 · 2023-04-27 ·

The present disclosure provides a data aggregation processing apparatus and method, and a storage medium. In the apparatus, a sequence grouping module splits a relatively complex target data sequence into a plurality of subsequences, which are then aggregated simultaneously, in parallel, by a plurality of parallel aggregation modules included in an aggregation module group, to obtain corresponding intermediate processing results. A merging module then merges the intermediate processing results into an aggregation processing result for the target data sequence. This addresses technical problems of existing methods, such as low aggregation processing efficiency and a prolonged processing procedure, thereby improving aggregation efficiency and shortening the waiting time of users.
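
The split / parallel-aggregate / merge flow can be illustrated with a short Python sketch (sum stands in for the unspecified aggregation function; the chunking strategy and worker count are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def aggregate(seq, n_groups=4):
    """Split the target sequence into subsequences (sequence grouping),
    aggregate them in parallel (parallel aggregation modules), then merge
    the intermediate results (merging module)."""
    chunks = [seq[i::n_groups] for i in range(n_groups)]
    with ThreadPoolExecutor(max_workers=n_groups) as pool:
        partials = list(pool.map(sum, chunks))   # intermediate results
    return sum(partials)                         # merged final result

print(aggregate(list(range(100))))  # 4950
```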

METHOD AND APPARATUS FOR SORTING DATA, STORAGE APPARATUS
20230076550 · 2023-03-09 ·

For each data item in a plurality of data items, the item is read from a cache unit, and the group to which it belongs is determined based at least in part on a predetermined grouping rule. After the groups to which the plurality of data items belong have been determined, a determination is made of (1) the quantity of groups and (2) the quantity of data items in each group. Data items belonging to the same group are then written into a contiguous storage space of the cache unit, by sequentially reading the plurality of data items from the cache unit and sequentially writing them back into the cache unit.
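
This two-pass pattern (count group sizes, then write each group into its contiguous region) resembles the placement step of a counting sort. A minimal Python sketch, with the grouping rule passed in as a function (an illustrative assumption):

```python
def group_contiguously(data, group_of):
    """Two-pass grouping: count items per group, derive each group's start
    offset, then write every item into its group's contiguous region."""
    counts = {}
    for item in data:                   # pass 1: quantity of data per group
        g = group_of(item)
        counts[g] = counts.get(g, 0) + 1
    offsets, pos = {}, 0
    for g in sorted(counts):            # start offset of each group's region
        offsets[g] = pos
        pos += counts[g]
    out = [None] * len(data)
    for item in data:                   # pass 2: contiguous, order-preserving write
        g = group_of(item)
        out[offsets[g]] = item
        offsets[g] += 1
    return out

print(group_contiguously([3, 1, 4, 1, 5, 9, 2, 6], lambda x: x % 2))
# [4, 2, 6, 3, 1, 1, 5, 9]: evens first, then odds, each group contiguous
```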

Memory tiering techniques in computing systems

Techniques of memory tiering in computing devices are disclosed herein. One example technique includes retrieving, from a first tier in a first memory, data from a data portion and metadata from a metadata portion of the first tier upon receiving a request to read data corresponding to a system memory section. The method can then include analyzing the retrieved metadata, which contains data location information, to determine whether the first tier currently contains data corresponding to the system memory section in the received request. In response to determining that the first tier currently contains that data, the method includes transmitting the retrieved data from the data portion of the first memory to the processor in response to the received request. Otherwise, the method can include identifying a memory location in the first or second memory that contains data corresponding to the system memory section and retrieving the data from the identified memory location.
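
A toy Python model of that read path, treating the first tier as a small direct-mapped cache whose metadata records which section each slot holds (the slot count and tagging scheme are assumptions for illustration):

```python
TIER1_SETS = 8

def tiered_read(section, tier1, tier2):
    """Read path for two tiers: the fast tier stores data plus metadata
    recording which system memory section each slot currently holds."""
    slot = section % TIER1_SETS
    data, meta = tier1[slot]            # one access returns data + metadata
    if meta == section:                 # first tier holds the section
        return data, "tier1"
    return tier2[section], "tier2"      # otherwise read the slower memory

tier1 = [(None, -1)] * TIER1_SETS
tier1[3] = ("hot", 3)                   # slot 3 currently caches section 3
tier2 = {3: "hot", 11: "cold"}
print(tiered_read(3, tier1, tier2))     # ('hot', 'tier1')
print(tiered_read(11, tier1, tier2))    # ('cold', 'tier2'): slot 3 holds 3, not 11
```

Fetching data and metadata together means a hit costs a single first-tier access, which is the point of colocating the two portions.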

Memory-adaptive processing method for convolutional neural network

A memory-adaptive processing method for a convolutional neural network includes a feature map counting step, a size relation counting step and a convolution calculating step. The feature map counting step is for counting a number of a plurality of input channels of a plurality of input feature maps, an input feature map tile size, a number of a plurality of output channels of a plurality of output feature maps and an output feature map tile size for a convolutional layer operation. The size relation counting step is for obtaining a cache free space size in a feature map cache and counting a size relation. The convolution calculating step is for performing the convolutional layer operation with the input feature maps to produce the output feature maps according to a memory-adaptive processing technique, and the memory-adaptive processing technique includes a dividing step and an output-group-first processing step.
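
As an illustration of the sizing arithmetic behind such a technique, here is a hedged Python sketch that picks an output-group size so one input tile set plus one output group fits in the cache's free space; the formula and parameters are assumptions, not the patent's actual method:

```python
def plan_output_groups(in_ch, out_ch, tile_bytes, cache_free_bytes):
    """Output-group-first sizing: choose how many output-channel tiles
    can be produced per pass so that the input tiles plus one output
    group fit in the feature-map cache's free space."""
    input_bytes = in_ch * tile_bytes            # input tiles stay resident
    budget = cache_free_bytes - input_bytes     # space left for output tiles
    group = max(1, min(out_ch, budget // tile_bytes))
    passes = -(-out_ch // group)                # ceil(out_ch / group)
    return group, passes

print(plan_output_groups(16, 64, 1024, 32768))  # (16, 4)
```

With 16 input channels of 1 KiB tiles in a 32 KiB free region, 16 KiB remains for outputs, so the 64 output channels are produced 16 at a time over 4 passes.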

CACHE WITH COMPRESSED DATA AND TAG

Cache line data and metadata are compressed and stored in first and, optionally, second memory regions, the metadata including an address tag. When the compressed data fit entirely within a primary block in the first memory region, both data and metadata are retrieved in a single memory access. Otherwise, overflow data is stored in an overflow block in the second memory region. The first and second memory regions may be located in the same row of a DRAM, for example, or in different regions of a DRAM and may be configured to enable standard DRAM components to be used. Compression and decompression logic circuits may be included in a memory controller.
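
The pack-tag-with-compressed-data idea can be sketched in Python, with `zlib` standing in for the hardware compressor and the block and tag sizes chosen arbitrarily for illustration:

```python
import zlib

BLOCK = 64   # primary block size in bytes (illustrative)
TAG = 8      # bytes reserved for the address tag (illustrative)

def store_line(tag, line, primary, overflow):
    comp = zlib.compress(line)
    record = tag.to_bytes(TAG, "little") + comp   # metadata packed with data
    primary[tag] = record[:BLOCK]
    overflow.pop(tag, None)
    if len(record) > BLOCK:                       # doesn't fit one block:
        overflow[tag] = record[BLOCK:]            # spill to the overflow block

def load_line(tag, primary, overflow):
    record = primary[tag] + overflow.get(tag, b"")
    assert int.from_bytes(record[:TAG], "little") == tag  # tag check
    return zlib.decompress(record[TAG:])

primary, overflow = {}, {}
store_line(5, b"\x00" * 64, primary, overflow)    # compresses well
print(5 in overflow)  # False: tag + data fit one block, so one access
```

A well-compressing line needs only the single primary-block access; an incompressible line pays for a second access to the overflow block.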

Memory access methods and apparatus

A disclosed example apparatus includes a row address register (412) to store a row address corresponding to a row (608) in a memory array (602). The example apparatus also includes a row decoder (604) coupled to the row address register to assert a signal on a wordline (704) of the row after the memory receives a column address. In addition, the example apparatus includes a column decoder (606) to selectively activate a portion of the row based on the column address and the signal asserted on the wordline.
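
A behavioral Python sketch of that access ordering (the class, word size, and method names are hypothetical; real decoders are combinational hardware, not method calls):

```python
class RowBufferedArray:
    """Access order from the abstract: the row address is latched in a
    register, and the wordline is asserted only after the column address
    arrives, so only the addressed portion of the row is activated."""
    def __init__(self, rows, cols, word=8):
        self.mem = [[0] * cols for _ in range(rows)]
        self.word = word
        self.row_reg = None                 # row address register (412)

    def set_row(self, row):
        self.row_reg = row                  # latch row; wordline not yet asserted

    def read(self, col):
        row = self.mem[self.row_reg]        # wordline asserted after column arrives
        start = col * self.word             # column decoder selects one slice
        return row[start:start + self.word]

arr = RowBufferedArray(rows=4, cols=32)
arr.mem[2][8:16] = list(range(8))
arr.set_row(2)
print(arr.read(1))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Activating only the addressed portion of the row, rather than the full row, is what the column decoder's selective activation buys.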