UTILIZATION OF A DISTRIBUTED INDEX TO PROVIDE OBJECT MEMORY FABRIC COHERENCY
20220137818 · 2022-05-05

Embodiments of the invention provide systems and methods to implement an object memory fabric. Object memory modules may include object storage storing memory objects, memory object meta-data, and a memory module object directory. Each memory object and/or memory object portion may be created natively within the object memory module and may be managed at a memory layer. The memory module object directory may index all memory objects and/or portions within the object memory module. A hierarchy of object routers may communicatively couple the object memory modules. Each object router may maintain an object cache state for the memory objects and/or portions contained in object memory modules below the object router in the hierarchy. The hierarchy, based on the object cache state, may behave in aggregate as a single object directory communicatively coupled to all object memory modules and may process requests based on the object cache state.
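The router-hierarchy idea above can be sketched in a few lines. This is a hypothetical software analogue, not the patented implementation: every name here (`ObjectRouter`, `MemoryModule`, `lookup`) is invented for illustration. Each router caches which child subtree holds an object, so the tree in aggregate resolves requests like a single object directory.

```python
# Hypothetical sketch: routers track which child holds each object,
# so the hierarchy behaves in aggregate as one object directory.

class MemoryModule:
    """Leaf node: holds objects in a per-module object directory."""
    def __init__(self, name):
        self.name = name
        self.directory = {}          # object_id -> object data

    def store(self, object_id, data):
        self.directory[object_id] = data

    def lookup(self, object_id):
        return self.directory.get(object_id)

class ObjectRouter:
    """Interior node: caches which child subtree holds each object."""
    def __init__(self, children):
        self.children = children
        self.cache_state = {}        # object_id -> child index

    def register(self, object_id, child_index):
        self.cache_state[object_id] = child_index

    def lookup(self, object_id):
        child = self.cache_state.get(object_id)
        if child is None:
            return None              # object not below this router
        return self.children[child].lookup(object_id)

# Two modules under one router resolve requests like a single directory.
m0, m1 = MemoryModule("m0"), MemoryModule("m1")
root = ObjectRouter([m0, m1])
m1.store("obj-42", b"payload")
root.register("obj-42", 1)
assert root.lookup("obj-42") == b"payload"
```

A request entering at the root is steered down only the subtree whose cached state claims the object, which is what lets the hierarchy stand in for one global directory.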

Cache and method for managing cache

A cache and a method for managing a cache are provided. The cache includes a storage circuit, a buffer circuit and a control circuit. The buffer circuit stores data in a first-in first-out (FIFO) manner. The control circuit is coupled to the storage circuit and the buffer circuit and is configured to find a storage space in the storage circuit and write the data to the storage space.
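A minimal software analogue of the three circuits described above, with all names invented for illustration: incoming data enters a FIFO buffer, and a control routine drains the buffer, finds a free space in the storage array, and writes the data there.

```python
from collections import deque

# Illustrative sketch: buffer circuit (FIFO) feeding a storage circuit
# under a simple control routine.

class Cache:
    def __init__(self, capacity):
        self.storage = [None] * capacity   # storage circuit
        self.buffer = deque()              # buffer circuit (FIFO)

    def enqueue(self, data):
        self.buffer.append(data)           # first in ...

    def drain_one(self):
        """Control logic: pop the oldest item, write it to a free slot."""
        if not self.buffer:
            return None
        data = self.buffer.popleft()       # ... first out
        slot = self.storage.index(None)    # find a free storage space
        self.storage[slot] = data
        return slot

cache = Cache(capacity=4)
cache.enqueue("a")
cache.enqueue("b")
assert cache.drain_one() == 0 and cache.storage[0] == "a"
assert cache.drain_one() == 1 and cache.storage[1] == "b"
```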

SYSTEMS AND METHODS FOR A CROSS-LAYER KEY-VALUE STORE WITH A COMPUTATIONAL STORAGE DEVICE

Provided is a method of data storage, the method including receiving, at a host of a key-value store, a request to access a data node stored on a storage device of the key-value store, locating an address corresponding to the data node in a host cache on the host, and determining that the data node is in a kernel cache on the storage device.
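The lookup path above can be sketched as follows; the data structures are hypothetical stand-ins, not the patent's design. The host cache maps a key to the address of its data node, and a device-side set records which nodes are resident in the kernel cache.

```python
# Illustrative cross-layer lookup: host cache resolves key -> address,
# then the kernel cache on the device is checked for residency.

host_cache = {"user:7": 0x1000}        # key -> data-node address on device
kernel_cache = {0x1000}                # addresses resident in kernel cache

def access(key):
    addr = host_cache.get(key)         # locate address in the host cache
    if addr is None:
        return ("miss", None)
    in_kernel = addr in kernel_cache   # is the node in the kernel cache?
    return ("kernel-hit" if in_kernel else "device-read", addr)

assert access("user:7") == ("kernel-hit", 0x1000)
assert access("user:9") == ("miss", None)
```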

Utilization of a distributed index to provide object memory fabric coherency
11775171 · 2023-10-03

Embodiments of the invention provide systems and methods to implement an object memory fabric. Object memory modules may include object storage storing memory objects, memory object meta-data, and a memory module object directory. Each memory object and/or memory object portion may be created natively within the object memory module and may be managed at a memory layer. The memory module object directory may index all memory objects and/or portions within the object memory module. A hierarchy of object routers may communicatively couple the object memory modules. Each object router may maintain an object cache state for the memory objects and/or portions contained in object memory modules below the object router in the hierarchy. The hierarchy, based on the object cache state, may behave in aggregate as a single object directory communicatively coupled to all object memory modules and may process requests based on the object cache state.

Managing meta-data in an object memory fabric
11755202 · 2023-09-12

Embodiments of the invention provide systems and methods to implement a hardware-based multi-node processing system of an object memory fabric. Hardware-based processing nodes, operatively coupled to one another, may each include object memory modules storing and managing memory objects, each memory object being created natively within the memory module and managed by the memory module at a memory layer, and each memory object including memory object data and memory object metadata. The memory object metadata may include triggers that specify additional operations to be executed by any object memory module of the hardware-based processing nodes when the respective memory object is located at that object memory module and accessed as part of that module's processing of requests.
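The trigger mechanism above can be sketched as follows. This is a hedged illustration, not the patented design: each object carries data plus metadata, and the metadata names additional operations that the hosting module runs when the object is accessed. All class and function names are invented.

```python
# Illustrative sketch: metadata-carried triggers run as a side effect
# of accessing the memory object at its hosting module.

class MemoryObject:
    def __init__(self, data, triggers=()):
        self.data = data
        self.metadata = {"triggers": list(triggers)}

class ObjectMemoryModule:
    def __init__(self):
        self.objects = {}
        self.log = []                            # records trigger effects

    def access(self, object_id):
        obj = self.objects[object_id]
        for trigger in obj.metadata["triggers"]: # run additional operations
            trigger(self, object_id)
        return obj.data

def prefetch_neighbor(module, object_id):
    """Example trigger (hypothetical): request a prefetch on access."""
    module.log.append(f"prefetch after {object_id}")

mod = ObjectMemoryModule()
mod.objects["a"] = MemoryObject(b"hello", triggers=[prefetch_neighbor])
assert mod.access("a") == b"hello"
assert mod.log == ["prefetch after a"]
```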

Implementation of an object memory centric cloud
11755201 · 2023-09-12

Embodiments of the invention provide systems and methods to implement an object memory fabric including hardware-based processing nodes having memory modules storing and managing memory objects created natively within the memory modules and managed by the memory modules at a memory layer, where physical addresses of memory and storage are managed with the memory objects based on an object address space that is allocated on a per-object basis with an object addressing scheme. Each node may utilize the object addressing scheme to couple to additional nodes to operate as a set of nodes so that all memory objects of the set are accessible based on the object addressing scheme, which defines invariant object addresses for the memory objects that are invariant with respect to physical memory storage locations and storage location changes of the memory objects within the memory module and across all modules interfacing with the object memory fabric.
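The invariant-address idea above reduces to a level of indirection, sketched here with hypothetical data structures: each object's address in the object address space never changes, and a fabric-wide map translates it to the current physical location, so migrating the object updates only the mapping.

```python
# Illustrative sketch: an invariant object address resolved through a
# mutable object-to-physical map. Addresses and node names are invented.

object_to_physical = {"oid:0xA1": ("node3", 0x7F000)}

def resolve(object_addr):
    """Translate an invariant object address to its physical location."""
    return object_to_physical[object_addr]

before = resolve("oid:0xA1")
object_to_physical["oid:0xA1"] = ("node9", 0x10000)   # object migrates
after = resolve("oid:0xA1")

# The object address stayed invariant; only the physical mapping moved.
assert before == ("node3", 0x7F000)
assert after == ("node9", 0x10000)
```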

TREATING MAIN MEMORY AS A COLLECTION OF TAGGED CACHE LINES FOR TRACE LOGGING
20220269614 · 2022-08-25

Treating main memory as a collection of tagged cache lines for trace logging. A computer system allocates a plurality of memory blocks, and a corresponding plurality of tags, within a main memory. Each tag indicates whether data stored in a corresponding memory block has been captured by an execution trace. The computer system synchronizes these tags with tags in a memory cache and manages a traced status of the memory blocks. This can include one or more of (i) setting a tag to indicate that a memory block has not been captured based on identifying a direct memory access operation, (ii) setting a tag based on whether a paged-in value of a memory block has been captured, (iii) setting a tag or memory categorization based on whether a memory block has been initialized, or (iv) setting a tag or memory categorization based on whether a memory block is mapped to a file.
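The tagging scheme can be modeled in miniature; this sketch is illustrative only and models just case (i). One tag per memory block records whether that block's value is already captured in the execution trace, and a DMA write clears the tag so the block is re-logged on its next traced access.

```python
# Illustrative model: per-block "captured" tags gating trace logging,
# with DMA writes invalidating the tag (case (i) above).

N_BLOCKS = 4
memory = [0] * N_BLOCKS
captured = [False] * N_BLOCKS      # tags: value captured by the trace?
trace = []

def traced_read(block):
    if not captured[block]:        # not yet in the trace: log it now
        trace.append((block, memory[block]))
        captured[block] = True
    return memory[block]

def dma_write(block, value):
    memory[block] = value
    captured[block] = False        # DMA sets the tag to "not captured"

traced_read(2)                     # first access: logged
traced_read(2)                     # already captured: nothing new logged
dma_write(2, 99)                   # DMA clears the tag
traced_read(2)                     # re-logged with the new value
assert trace == [(2, 0), (2, 99)]
```

The payoff is that repeated reads of an already-captured block cost nothing in trace volume, while any externally modified block is guaranteed to be re-captured.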

Memory having a static cache and a dynamic cache

The present disclosure includes memory having a static cache and a dynamic cache. A number of embodiments include a memory, wherein the memory includes a first portion configured to operate as a static single level cell (SLC) cache and a second portion configured to operate as a dynamic SLC cache when the entire first portion of the memory has data stored therein.
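A minimal sketch of the two-portion behavior described above, with sizes and names invented for illustration: writes land in the static SLC cache first, and once every static slot holds data, subsequent writes spill into the dynamic SLC cache in the second portion.

```python
# Illustrative sketch: static SLC cache fills first, then the dynamic
# SLC cache takes over. Capacities are arbitrary for the example.

static_cache = [None] * 2          # first portion: static SLC cache
dynamic_cache = [None] * 3         # second portion: dynamic SLC cache

def slc_write(data):
    """Write to the first free slot, preferring the static cache."""
    for cache, name in ((static_cache, "static"), (dynamic_cache, "dynamic")):
        for i, slot in enumerate(cache):
            if slot is None:
                cache[i] = data
                return name
    raise RuntimeError("both caches full")

assert slc_write("a") == "static"
assert slc_write("b") == "static"
assert slc_write("c") == "dynamic"  # entire static portion has data
```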

Adaptive caching in a multi-tier cache

Provided are a computer program product, system, and method for staging data from storage to a fast cache tier of a multi-tier cache in a non-adaptive sector caching mode in which data staged in response to a read request is limited to track sectors required to satisfy the read request. Data is also staged from storage to a slow cache tier of the multi-tier cache in a selected adaptive caching mode of a plurality of adaptive caching modes available for staging data of tracks. Adaptive caching modes are selected for the slow cache tier as a function of historical access ratios. Prestage requests for the slow cache tier are enqueued in one of a plurality of prestage request queues of various priority levels as a function of the selected adaptive caching mode and historical access ratios. Other aspects and advantages are provided, depending upon the particular application.
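The slow-tier policy above can be sketched as a small selection function; the thresholds, mode names, and ratio definition here are invented for illustration and are not the patent's values. An adaptive caching mode and a prestage priority are chosen from a historical access ratio, and the prestage request is enqueued at that priority.

```python
import heapq

# Illustrative sketch: historical access ratio selects the adaptive
# caching mode and the priority of the prestage request.

prestage_queue = []                # (priority, track, mode) min-heap

def select_mode(access_ratio):
    """Map a historical access ratio to (mode, priority); thresholds invented."""
    if access_ratio > 0.8:
        return "full-track", 0     # high reuse: prestage whole track, urgent
    if access_ratio > 0.4:
        return "partial-track", 1
    return "sector-only", 2        # low reuse: minimal prestage, low priority

def enqueue_prestage(track, access_ratio):
    mode, priority = select_mode(access_ratio)
    heapq.heappush(prestage_queue, (priority, track, mode))
    return mode

enqueue_prestage("t1", 0.9)
enqueue_prestage("t2", 0.1)
assert heapq.heappop(prestage_queue)[1] == "t1"   # higher priority first
```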

Utilization of a distributed index to provide object memory fabric coherency
11126350 · 2021-09-21

Embodiments of the invention provide systems and methods to implement an object memory fabric. Object memory modules may include object storage storing memory objects, memory object meta-data, and a memory module object directory. Each memory object and/or memory object portion may be created natively within the object memory module and may be managed at a memory layer. The memory module object directory may index all memory objects and/or portions within the object memory module. A hierarchy of object routers may communicatively couple the object memory modules. Each object router may maintain an object cache state for the memory objects and/or portions contained in object memory modules below the object router in the hierarchy. The hierarchy, based on the object cache state, may behave in aggregate as a single object directory communicatively coupled to all object memory modules and may process requests based on the object cache state.