
MEMORY DEVICE AND OPERATION METHOD OF THE SAME
20210209018 · 2021-07-08

A device is provided that includes a first memory, a second memory, and an accessing circuit. Actual addresses of the first memory and the second memory alternately correspond to reference addresses of a processing circuit. The accessing circuit is configured to perform the steps outlined below. A read command corresponding to a reference read address is received from the processing circuit, and the reference read address is converted to an actual read address of the first memory and the second memory. First read data is read from a first one of the first memory and the second memory according to the actual read address while, simultaneously, second read data is prefetched from a second one of the first memory and the second memory according to a next read address.
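As a rough illustration of the alternating address scheme, the following Python sketch (all names hypothetical, not the claimed circuit) maps even reference addresses to one memory and odd ones to the other, so that reading address n from one memory can overlap prefetching address n + 1 from the other:

```python
# Behavioral sketch of the alternating address scheme (hypothetical names):
# even reference addresses map to memory A, odd ones to memory B, so reading
# address n from one memory can overlap prefetching n + 1 from the other.

class DualMemory:
    def __init__(self, mem_a, mem_b):
        self.mems = (mem_a, mem_b)      # two physical memories
        self.prefetch = None            # (ref_addr, data) fetched last cycle

    def to_actual(self, ref_addr):
        """Convert a reference address to (memory index, actual address)."""
        return ref_addr % 2, ref_addr // 2

    def read(self, ref_addr):
        # Serve from the prefetch buffer when it already holds this address.
        if self.prefetch and self.prefetch[0] == ref_addr:
            data = self.prefetch[1]
        else:
            bank, addr = self.to_actual(ref_addr)
            data = self.mems[bank][addr]
        # "Simultaneously" prefetch the next reference address, which by
        # construction lives in the other memory.
        nbank, naddr = self.to_actual(ref_addr + 1)
        self.prefetch = ((ref_addr + 1, self.mems[nbank][naddr])
                         if naddr < len(self.mems[nbank]) else None)
        return data

mem = DualMemory(mem_a=[10, 12, 14], mem_b=[11, 13, 15])
assert [mem.read(i) for i in range(6)] == [10, 11, 12, 13, 14, 15]
```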

Translation lookaside buffer in a switch

Examples of the present disclosure provide apparatuses and methods related to a translation lookaside buffer in memory. An example method comprises receiving a command including a virtual address from a host and translating the virtual address to a physical address on volatile memory of a memory device using a translation lookaside buffer (TLB).
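A minimal software model of such a TLB lookup might look as follows; the structure and parameters below are illustrative assumptions, not the disclosed hardware:

```python
# Minimal TLB sketch (hypothetical structure): a small fully associative
# cache of page translations with least-recently-used eviction.

from collections import OrderedDict

PAGE_SHIFT = 12                              # assume 4 KiB pages

class TLB:
    def __init__(self, entries=16):
        self.entries = entries
        self.map = OrderedDict()             # virtual page -> physical page

    def translate(self, vaddr, page_table):
        vpage, offset = vaddr >> PAGE_SHIFT, vaddr & ((1 << PAGE_SHIFT) - 1)
        if vpage in self.map:                # TLB hit
            self.map.move_to_end(vpage)
        else:                                # TLB miss: walk the page table
            self.map[vpage] = page_table[vpage]
            if len(self.map) > self.entries:
                self.map.popitem(last=False) # evict least recently used
        return (self.map[vpage] << PAGE_SHIFT) | offset

tlb = TLB()
assert tlb.translate(0x1234, page_table={1: 7}) == (7 << 12) | 0x234
```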

Apparatuses and methods for configurable memory array bank architectures
10788985 · 2020-09-29

Apparatuses and methods for configurable memory array bank architectures are described. An example apparatus includes a mode register configured to store information related to bank architecture and a memory array including a plurality of memory banks. The plurality of memory banks are configured to be arranged in a bank architecture based at least in part on the information related to bank architecture stored in the mode register.
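One way to picture the mode-register mechanism is a small field whose value selects how a fixed pool of banks is grouped. The sketch below is purely illustrative, with a hypothetical field layout and arrangement table:

```python
# Sketch of a mode register selecting a bank architecture (the field layout
# and arrangement options are hypothetical, for illustration only).

BANK_ARCH_FIELD = 0b11        # two mode-register bits select the arrangement
ARCHITECTURES = {
    0b00: (16, 1),            # 16 bank groups x 1 bank
    0b01: (8, 2),             # 8 bank groups x 2 banks
    0b10: (4, 4),             # 4 bank groups x 4 banks
}

def arrange_banks(mode_register):
    groups, per_group = ARCHITECTURES[mode_register & BANK_ARCH_FIELD]
    return [[g * per_group + b for b in range(per_group)]
            for g in range(groups)]

assert arrange_banks(0b01) == [[0, 1], [2, 3], [4, 5], [6, 7],
                               [8, 9], [10, 11], [12, 13], [14, 15]]
```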

Decoupling memory metadata granularity from page size

The disclosure provides an approach for tracking metadata (e.g., accessed and dirty bits) of page tables at a finer granularity than the page size. As disclosed herein, a modification to existing hardware designs may enable finer page table metadata granularity, leading to a more precise representation of the state of memory and an improvement in system performance and efficiency. Finer-grained dirty metadata can dramatically improve the efficiency and simplicity of subsystems that track memory modifications.
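The idea can be sketched as keeping a dirty bitmap per page rather than a single dirty bit per page; the block and page sizes below are illustrative assumptions, not the disclosed design:

```python
# Sketch of dirty metadata tracked at sub-page granularity (sizes are
# hypothetical): one dirty bit per 512 B block instead of per 4 KiB page.

PAGE_SIZE, BLOCK_SIZE = 4096, 512
BLOCKS_PER_PAGE = PAGE_SIZE // BLOCK_SIZE    # 8 dirty bits per page

dirty = {}                                   # page number -> dirty bitmap

def mark_dirty(addr):
    page, block = addr // PAGE_SIZE, (addr % PAGE_SIZE) // BLOCK_SIZE
    dirty[page] = dirty.get(page, 0) | (1 << block)

def dirty_blocks(page):
    """Blocks to copy, e.g. when checkpointing: precise, not whole pages."""
    return [b for b in range(BLOCKS_PER_PAGE) if dirty.get(page, 0) >> b & 1]

mark_dirty(4096 + 600)                       # write into page 1, block 1
assert dirty_blocks(1) == [1]
```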

CACHE COHERENT NODE CONTROLLER FOR SCALE-UP SHARED MEMORY SYSTEMS

The present invention relates to cache coherent node controllers for scale-up shared memory systems. In particular, a computer system is disclosed comprising at least a first group of CPU modules connected to at least one first FPGA Node Controller configured to execute transactions, directly or through a first interconnect switch, to at least one second FPGA Node Controller connected to a second group of CPU modules running a single instance of an operating system.

PROGRAMMABLE CACHE COHERENT NODE CONTROLLER

A computer system includes a first group of CPU modules operatively coupled to at least one first Programmable ASIC Node Controller configured to execute transactions, directly or through a first interconnect switch, to at least one second Programmable ASIC Node Controller connected to a second group of CPU modules running a single instance of an operating system.
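Both node-controller abstracts above route transactions either directly to a peer controller or through an interconnect switch. The toy Python model below sketches that routing decision; the topology and names are hypothetical, not either claimed design:

```python
# Toy routing sketch for a node controller (hypothetical topology): a
# transaction goes directly to a directly linked peer controller, and
# otherwise through the interconnect switch.

class Switch:
    def forward(self, src, dest, transaction):
        return f"node{src} -> switch -> node{dest}: {transaction}"

class NodeController:
    def __init__(self, node_id, direct_links, switch):
        self.node_id = node_id
        self.direct_links = direct_links   # node ids reachable point-to-point
        self.switch = switch               # fallback interconnect switch

    def send(self, transaction, dest):
        if dest in self.direct_links:
            return f"node{self.node_id} -> node{dest}: {transaction}"
        return self.switch.forward(self.node_id, dest, transaction)

nc0 = NodeController(0, direct_links={1}, switch=Switch())
assert nc0.send("read 0xbeef", dest=1) == "node0 -> node1: read 0xbeef"
assert nc0.send("read 0xbeef", dest=2) == "node0 -> switch -> node2: read 0xbeef"
```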

APPARATUSES AND METHODS FOR CONFIGURABLE MEMORY ARRAY BANK ARCHITECTURES
20200004420 · 2020-01-02

Apparatuses and methods for configurable memory array bank architectures are described. An example apparatus includes a mode register configured to store information related to bank architecture and a memory array including a plurality of memory banks. The plurality of memory banks are configured to be arranged in a bank architecture based at least in part on the information related to bank architecture stored in the mode register.

CACHE CONFIGURATION PERFORMANCE ESTIMATION
20190361808 · 2019-11-28

Computer-implemented methods using machine learning are provided for generating an estimated cache performance of a cache configuration. A neural network is trained using, as inputs, a set of memory access parameters generated from a non-cycle-accurate simulation of a data processing system comprising the cache configuration and a cache configuration value, and using, as outputs, cache performance values generated by a cycle-accurate simulation of the data processing system comprising the cache configuration. The trained neural network is then provided with sets of memory access parameters generated from a non-cycle-accurate simulation of a proposed data processing system and a selected cache configuration and generates estimated cache performance values for that selected cache configuration.
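The train-then-estimate flow could be sketched as below, with a tiny one-hidden-layer network standing in for the disclosed model and synthetic data in place of real simulation output:

```python
# Sketch of the train/estimate flow (a small one-hidden-layer network as a
# stand-in for the disclosed model; all data here is made up).

import numpy as np

rng = np.random.default_rng(0)

# Inputs: memory-access parameters from a fast, non-cycle-accurate simulation
# plus a cache-configuration value. Targets: performance values from a slow,
# cycle-accurate simulation of the same configurations.
X = rng.random((256, 4))                     # access params + config value
y = (0.6 * X[:, 0] - 0.3 * X[:, 3] + 0.2)[:, None]

W1, b1 = rng.normal(0, 0.5, (4, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

for _ in range(2000):                        # plain gradient descent on MSE
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    g = 2 * (pred - y) / len(X)              # dLoss/dpred
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)           # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    for p, gp in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.1 * gp

# Estimate performance for a proposed system and selected cache configuration
# without rerunning the expensive cycle-accurate simulation.
x_new = rng.random((1, 4))
estimate = np.tanh(x_new @ W1 + b1) @ W2 + b2
print("estimated cache performance:", float(estimate[0, 0]))
```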

APPARATUSES AND METHODS FOR COMPUTE ENABLED CACHE
20190354485 · 2019-11-21 ·

The present disclosure includes apparatuses and methods for a compute enabled cache. An example apparatus comprises a compute component, a memory, and a controller coupled to the memory. The controller is configured to operate on a block select and a subrow select as metadata for a cache line to control placement of the cache line in the memory, allowing for a compute enabled cache.
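The placement mechanism can be pictured as metadata bits indexing into the memory; the encoding below is a hypothetical illustration, not the claimed controller:

```python
# Placement sketch (hypothetical encoding): block-select and subrow-select
# metadata carried with a cache line steer where the controller places it,
# so the line lands in a region the compute component can operate on.

BLOCKS, SUBROWS_PER_BLOCK, LINES_PER_SUBROW = 4, 8, 16

def place(cache_line, block_select, subrow_select, memory):
    """Store the line at the location its metadata selects."""
    index = (block_select * SUBROWS_PER_BLOCK + subrow_select) * LINES_PER_SUBROW
    memory[index:index + len(cache_line)] = cache_line
    return index

memory = [0] * (BLOCKS * SUBROWS_PER_BLOCK * LINES_PER_SUBROW)
line = list(range(16))
idx = place(line, block_select=2, subrow_select=3, memory=memory)
assert idx == (2 * 8 + 3) * 16 and memory[idx:idx + 16] == line
```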

Cache memory architecture

Various implementations described herein are directed to a device. The device may include a first tier having a processor and a first cache memory that are coupled together via control logic to operate as a computing architecture. The device may include a second tier having a second cache memory that is coupled to the first cache memory. Also, the first tier and the second tier may be integrated together with the computing architecture to operate as a stackable cache memory architecture.
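A two-tier lookup path gives a feel for the stackable arrangement; the fill and eviction policy below is an illustrative assumption, not the described hardware:

```python
# Two-tier lookup sketch (hypothetical policy): a miss in the first-tier
# cache falls through to the stacked second-tier cache, then to backing
# memory, filling the first tier on the way back.

class Tier:
    def __init__(self, size):
        self.size, self.data = size, {}

    def get(self, addr):
        return self.data.get(addr)

    def fill(self, addr, value):
        if len(self.data) >= self.size:
            self.data.pop(next(iter(self.data)))   # evict oldest insertion
        self.data[addr] = value

def read(addr, l1, l2, backing):
    value = l1.get(addr)
    if value is None:
        value = l2.get(addr)
        if value is None:
            value = backing[addr]                  # miss in both tiers
            l2.fill(addr, value)
        l1.fill(addr, value)                       # fill first tier on return
    return value

l1, l2, backing = Tier(2), Tier(8), {a: a * 10 for a in range(32)}
assert read(5, l1, l2, backing) == 50              # cold miss
assert read(5, l1, l2, backing) == 50              # now a first-tier hit
```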