G06F12/0653

SPATIAL CACHE
20210406197 · 2021-12-30

A cache includes a p-by-q array of memory units; a row addressing unit; and a column addressing unit. Each memory unit has an m-by-n array of memory cells. The column addressing unit has, for each memory unit, m n-to-one multiplexers, one associated with each of the m rows of the memory unit, wherein each n-to-one multiplexer has an input coupled to each of the n memory cells associated with the row associated with that multiplexer. The row addressing unit has, for each memory unit, n m-to-one multiplexers, one associated with each of the n columns of the memory unit, wherein each m-to-one multiplexer has an input coupled to each of the m memory cells associated with the column associated with that multiplexer. The row addressing unit and column addressing unit support reading and/or writing of the array of memory units, e.g. using virtual or physical addresses.
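The dual addressing scheme above can be sketched behaviorally: each memory unit is an m-by-n cell array readable either via the column addressing unit (one n-to-one mux per row, all selecting the same column) or via the row addressing unit (one m-to-one mux per column, all selecting the same row). This is an illustrative model only; class and method names are assumptions, not taken from the patent.

```python
class MemoryUnit:
    """One m-by-n array of memory cells."""

    def __init__(self, m, n):
        self.m, self.n = m, n
        self.cells = [[0] * n for _ in range(m)]

    def read_via_column_unit(self, col):
        """Column addressing: each of the m n-to-one muxes selects cell
        `col` of its row, so one value per row is produced."""
        return [self.cells[r][col] for r in range(self.m)]

    def read_via_row_unit(self, row):
        """Row addressing: each of the n m-to-one muxes selects cell
        `row` of its column, so one value per column is produced."""
        return list(self.cells[row])


class SpatialCache:
    """A p-by-q array of memory units sharing the two addressing units."""

    def __init__(self, p, q, m, n):
        self.units = [[MemoryUnit(m, n) for _ in range(q)] for _ in range(p)]
```

A real device would implement the muxes in hardware; the sketch only shows which cell each mux input corresponds to.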

Techniques to facilitate a hardware-based table lookup

Techniques to facilitate a hardware-based table lookup of a table maintained in one or more types of memories or memory domains include examples of receiving a search request forwarded from a queue management device. Examples also include implementing table lookups to obtain a result and sending the result to an output queue of the queue management device for the queue management device to forward the result to a requestor of the search request.
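The request/response flow above can be sketched in a few lines: search requests are drained from an input queue, each key is looked up in the table, and (requestor, result) pairs are placed on the output queue for the queue management device to return. Queue and table shapes are assumptions for illustration.

```python
from collections import deque

def service_lookups(request_queue, table, output_queue):
    """Drain the input queue, perform a table lookup per request, and
    enqueue (requestor, result) pairs on the output queue."""
    while request_queue:
        requestor, key = request_queue.popleft()
        output_queue.append((requestor, table.get(key)))
```

In hardware the lookup engine would run continuously; the function models one drain pass over the queue.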

Addressing for disaggregated memory pool

A method for memory address mapping in a disaggregated memory system includes receiving an indication of one or more ranges of host physical addresses (HPAs) from a compute node of a plurality of compute nodes, the one or more ranges of HPAs including a plurality of memory addresses corresponding to different allocation slices of the disaggregated memory pool that are allocated to the compute node. The one or more ranges of HPAs are converted into a contiguous range of device physical addresses (DPAs). For each DPA, a target address decoder (TAD) is identified based on a slice identifier and a slice-to-TAD index. Each DPA is mapped to a media-specific physical element of a physical memory unit of the disaggregated memory pool based on the TAD.
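The translation steps above can be sketched as follows: the HPA ranges are flattened slice-by-slice into one contiguous DPA space, and each DPA's slice identifier selects a target address decoder through a slice-to-TAD index. The fixed slice size and all names are assumptions for illustration, not the patent's values.

```python
SLICE_SIZE = 1 << 20  # assumed 1 MiB allocation slices

def hpa_ranges_to_dpas(hpa_ranges):
    """Convert ranges of HPAs (start, length) into a contiguous DPA range,
    returning a slice-granular HPA -> DPA mapping."""
    mapping, dpa = {}, 0
    for start, length in hpa_ranges:
        for off in range(0, length, SLICE_SIZE):
            mapping[start + off] = dpa
            dpa += SLICE_SIZE
    return mapping

def tad_for_dpa(dpa, slice_to_tad):
    """Identify the target address decoder from the DPA's slice identifier
    and the slice-to-TAD index."""
    slice_id = dpa // SLICE_SIZE
    return slice_to_tad[slice_id]
```

The key property is that discontiguous HPA ranges collapse into one dense DPA space, so the slice-to-TAD index can be a simple array.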

Buffer Addressing for a Convolutional Neural Network
20210390368 · 2021-12-16

A method for providing input data for a layer of a convolutional neural network “CNN”, the method comprising: receiving input data comprising input data values to be processed in a layer of the CNN; determining addresses in banked memory of a buffer in which the received data values are to be stored based upon format data indicating a format parameter of the input data in the layer and indicating a format parameter of a filter which is to be used to process the input data in the layer; and storing the received input data values at the determined addresses in the buffer for retrieval for processing in the layer.
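A minimal sketch of bank-aware address generation, assuming the goal is that input values read in the same cycle by a filter fall in different banks of the buffer. The row-major layout, the parameter names, and the modulo bank-interleaving are assumptions; the patent's format data could drive a different mapping.

```python
def banked_address(x, y, c, width, channels, num_banks):
    """Map an input value at (x, y, channel) to a (bank, offset) pair in
    the banked buffer, interleaving consecutive linear addresses across
    banks so neighbouring values can be fetched in parallel."""
    linear = (y * width + x) * channels + c   # row-major linear address
    return linear % num_banks, linear // num_banks
```

With this interleaving, the `channels`-adjacent values a filter tap touches land in distinct banks whenever `channels` and `num_banks` are coprime, which is the kind of constraint the format data would encode.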

HIERARCHICAL MEMORY SYSTEMS
20210390060 · 2021-12-16

Apparatuses, systems, and methods for hierarchical memory systems are described. A hierarchical memory system can leverage persistent memory to store data that is generally stored in a non-persistent memory, thereby increasing an amount of storage space allocated to a computing system at a lower cost than approaches that rely solely on non-persistent memory. An example method includes initiating a read request associated with an address from an input/output device, redirecting the read request to a hierarchical memory component, generating, by the hierarchical memory component, an interrupt message to send to a hypervisor, gathering, at the hypervisor, address register access information from the hierarchical memory component, and determining a physical location of data associated with the read request.
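The read path in the example method can be walked through with a toy model: the redirected read latches its address in the hierarchical memory component, which raises an interrupt; the hypervisor then gathers the address-register information and determines where the data physically lives. The classes are illustrative stand-ins, not the patent's implementation.

```python
class HierarchicalMemoryComponent:
    """Receives redirected reads and signals the hypervisor."""

    def __init__(self):
        self.address_register = None

    def handle_read(self, address):
        self.address_register = address   # latch the requested address
        return "interrupt"                # interrupt message to the hypervisor


class Hypervisor:
    """Resolves redirected reads against the persistent backing store."""

    def __init__(self, persistent_store):
        self.store = persistent_store

    def on_interrupt(self, component):
        addr = component.address_register  # gather address register access info
        return self.store[addr]            # determine the data's physical location
```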

System and method for split storage stack

In certain embodiments, a method includes starting an application as a first process within a user space of an operating system of a computing device. The application instantiates a data storage system associated with the operating system. The method also includes starting a block device service as a second process within the user space of the operating system, the block device service being configured to manage a persistent storage device of the computing device. In addition, the method includes receiving, by a kernel of the operating system, a system call request from the application to communicate with the block device service, wherein the system call request is generated by the application using the data storage system and comprises an access request to access the persistent storage device. The method further includes providing the application, in response to the system call request, access to the block device service through an inter-process communication (IPC) channel.
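A toy model of the split stack: both the application and the block device service live in user space, and the kernel's only role is to answer the system call by handing the application a channel to the service. In this sketch the "IPC channel" is simply a callable handle; a real system would use sockets or shared memory. All names are illustrative.

```python
class BlockDeviceService:
    """User-space process managing the persistent storage device."""

    def __init__(self):
        self.blocks = {}

    def handle(self, op, block, data=None):
        if op == "write":
            self.blocks[block] = data
        return self.blocks.get(block)


class Kernel:
    """Mediates the system call that connects app and service."""

    def __init__(self, service):
        self.service = service

    def syscall_connect(self):
        # Stand-in for establishing the IPC channel to the service.
        return self.service.handle
```

The design point is that block I/O logic stays out of the kernel: after the one connect system call, the application talks to the service directly.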

DYNAMIC LOAD BALANCING FOR POOLED MEMORY

Examples described herein relate to a memory controller to allocate an address range for a process among multiple memory pools based on service level parameters associated with the address range and performance capabilities of the multiple memory pools. In some examples, the service level parameters include one or more of latency, network bandwidth, amount of memory allocation, memory bandwidth, data encryption use, type of encryption to apply to stored data, use of data encryption to transport data to a requester, memory technology, and/or durability of a memory device.
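A hypothetical sketch of the allocation decision: the controller matches an address range's service level parameters against each pool's capabilities and picks the first pool that satisfies them. The two parameters checked and all field names are illustrative; a real controller would weigh the full parameter list above.

```python
def choose_pool(required, pools):
    """Return the name of the first pool whose capabilities meet the
    latency and memory-bandwidth service-level parameters, else None."""
    for name, caps in pools.items():
        if (caps["latency_ns"] <= required["latency_ns"]
                and caps["bandwidth_gbps"] >= required["bandwidth_gbps"]):
            return name
    return None
```

Ordering the pools from fastest to slowest turns first-fit into a simple load-balancing policy: demanding ranges land in near memory, relaxed ones spill to far pools.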

RANDOM SEED GENERATING CIRCUIT OF MEMORY SYSTEM
20220197792 · 2022-06-23

A random seed generating circuit of a memory system includes a first address generating circuit, a second address generating circuit, a table circuit and a seed generating circuit. The first address generating circuit generates an initial address based on target page information. The second address generating circuit generates a plurality of table addresses based on the target page information and a plurality of partial addresses, which are divided from the initial address. The table circuit outputs, from a plurality of tables, a plurality of table values respectively corresponding to the plurality of table addresses. The seed generating circuit generates a random seed based on the plurality of table values.
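The four-stage pipeline can be modeled behaviorally: an initial address is derived from the target page information, divided into partial addresses, each partial address (combined with the page information) forms a table address, and the table values are combined into the seed. The hash constant, the XOR combining, and the table shapes are assumptions for illustration only.

```python
def generate_seed(page_info, tables, bits_per_part=4):
    """Behavioral model of the seed-generating pipeline."""
    # First address generating circuit: initial address from page info
    # (multiplicative hash is an assumed stand-in).
    initial = (page_info * 2654435761) & 0xFFFF
    # Divide the initial address into partial addresses.
    mask = (1 << bits_per_part) - 1
    parts = [(initial >> (i * bits_per_part)) & mask
             for i in range(len(tables))]
    # Second address generating circuit: table addresses from the
    # partial addresses and the page info.
    addrs = [(p ^ (page_info & mask)) % len(t)
             for p, t in zip(parts, tables)]
    # Table circuit: one value per table; seed circuit: combine them.
    seed = 0
    for t, a in zip(tables, addrs):
        seed ^= t[a]
    return seed
```

Because every stage is a pure function of the target page, the same page always yields the same seed, which is what lets the descrambler reproduce it on read.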

Optimized record placement in graph database

Methods and systems are disclosed for optimizing record placement in a graph by minimizing fragmentation when writing data. Issues with fragmented data within a graph database are addressed on the record level by placing data that is frequently accessed together contiguously within memory. For example, a dynamic rule set may be developed based on dynamically analyzing access patterns of the graph database, policies, system characteristics and/or other heuristics. Based on statistics regarding normal query patterns, the systems and methods may identify an optimal position for certain types of edges that are often traversed with respect to particular types of nodes.

Disassociating memory units with a host system
11741008 · 2023-08-29

A command indicating a logical address and a length is received from a host system. One or more memory units in a memory sub-system corresponding to the logical address and the length are identified. An indicator associated with the one or more memory units is set to indicate that the one or more memory units are invalid. The one or more memory units are excluded from a media management operation performed in the memory sub-system.
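The steps above can be sketched as: a (logical address, length) command selects the overlapped memory units, an invalid indicator is set on each, and the media management operation (e.g. garbage collection or wear leveling) skips flagged units. The unit granularity and names are assumptions for illustration.

```python
UNIT_SIZE = 4096  # assumed memory-unit granularity in bytes

def invalidate(units, logical_addr, length):
    """Set the invalid indicator on every unit overlapped by the range
    [logical_addr, logical_addr + length)."""
    first = logical_addr // UNIT_SIZE
    last = (logical_addr + length - 1) // UNIT_SIZE
    for u in range(first, last + 1):
        units[u]["invalid"] = True

def media_management_candidates(units):
    """Exclude invalid units from the media management operation."""
    return [i for i, u in enumerate(units) if not u["invalid"]]
```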