Patent classifications
G06F12/0824
Object memory data flow triggers
Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. More specifically, embodiments of the present invention are directed to an instruction set of an object memory fabric. This object memory fabric instruction set can include trigger instructions defined in metadata for a particular memory object. Each trigger instruction can comprise a single instruction and action that, upon a reference to a specific object, initiates or performs defined actions such as pre-fetching other objects or executing a trigger program.
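As a rough illustration of the mechanism described above, the Python sketch below attaches trigger metadata to objects and fires a pre-fetch action when an object is referenced. All names here (ObjectMemory, put, get, the "prefetch" action) are hypothetical stand-ins, not terms from the patent, and the data-movement side effect is reduced to a set insert.

```python
# Toy model: trigger instructions live in per-object metadata and fire on reference.
class ObjectMemory:
    def __init__(self):
        self.objects = {}      # object id -> data
        self.triggers = {}     # object id -> trigger metadata: (action, target) pairs
        self.prefetched = set()

    def put(self, oid, data, triggers=()):
        self.objects[oid] = data
        self.triggers[oid] = list(triggers)

    def get(self, oid):
        # A reference to the object fires each trigger defined in its metadata.
        for action, target in self.triggers.get(oid, []):
            if action == "prefetch":
                self.prefetched.add(target)  # stand-in for pre-fetching the object
        return self.objects[oid]

mem = ObjectMemory()
mem.put("B", b"payload-B")
mem.put("A", b"payload-A", triggers=[("prefetch", "B")])
mem.get("A")                     # referencing A triggers a pre-fetch of B
assert "B" in mem.prefetched
```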
Dirty cache line write-back tracking
A cache system may include a cache to store a plurality of cache lines in a write-back mode; dirty cache line counter circuitry to store a count of dirty cache lines in the cache, increment the count when a new dirty cache line is added to the cache, and decrement the count when an old dirty cache line is written back from the cache; dirty cache line write-back tracking circuitry to store an ordering of the dirty cache lines in a write-back order; mapping circuitry to map the dirty lines into the ordering; and controller circuitry to use the mapping circuitry to identify an evicted dirty cache line in the ordering and remove the evicted dirty cache line from the ordering.
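A minimal software model of this arrangement, assuming write-back order is simply the order in which lines became dirty, could look like the following; the class and method names are illustrative, and an OrderedDict plays the role of both the ordering and the mapping circuitry.

```python
from collections import OrderedDict

class DirtyLineTracker:
    def __init__(self):
        self.order = OrderedDict()   # line address -> None, oldest dirty line first
        self.count = 0               # dirty cache line counter

    def mark_dirty(self, addr):
        if addr not in self.order:   # new dirty line: count up, append to ordering
            self.order[addr] = None
            self.count += 1

    def write_back_oldest(self):
        addr, _ = self.order.popitem(last=False)  # oldest dirty line written back
        self.count -= 1
        return addr

    def on_eviction(self, addr):
        # The mapping (here, the dict itself) locates the evicted dirty line
        # so it can be removed from the ordering without a linear scan.
        if addr in self.order:
            del self.order[addr]
            self.count -= 1

t = DirtyLineTracker()
t.mark_dirty(0x100); t.mark_dirty(0x140)
t.on_eviction(0x100)
assert t.write_back_oldest() == 0x140 and t.count == 0
```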
Method for PRP/SGL handling for out-of-order NVME controllers
Read latency for a read operation to a host implementing a PRP/SGL buffer is reduced by generating an address table representing the linked-list structure defining the PRP/SGL buffer. The address table may be generated concurrently with reading of data referenced by the read command from a NAND storage device. A block table for tracking status of LBAs referenced by IO commands may include a reference to the address table, which is used to transfer LBAs to host memory as soon as the address table is complete and a block of data referenced by an LBA has been read from the NAND storage device.
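The sketch below flattens a toy PRP-style linked list into a flat address table, in the spirit of the abstract; the chaining format (each list's last entry points at the next list) is a simplification of the actual NVMe PRP rules, the concurrency with the NAND read is not modeled, and all names are illustrative.

```python
def build_address_table(prp1, prp2, lists, pages_needed):
    """Flatten a toy PRP1/PRP2 linked-list structure into a flat address table."""
    table = [prp1]                   # PRP1 always points at the first data page
    if pages_needed == 2:
        table.append(prp2)           # exactly two pages: PRP2 is a plain pointer
    elif pages_needed > 2:
        entry_list = lists[prp2]     # otherwise PRP2 points at a PRP list
        while len(table) < pages_needed:
            remaining = pages_needed - len(table)
            if remaining > len(entry_list):
                table.extend(entry_list[:-1])       # last entry chains onward
                entry_list = lists[entry_list[-1]]
            else:
                table.extend(entry_list[:remaining])
    return table

lists = {0x9000: [0x2000, 0x3000, 0x7000], 0x7000: [0x4000, 0x5000]}
assert build_address_table(0x1000, 0x9000, lists, 5) == \
    [0x1000, 0x2000, 0x3000, 0x4000, 0x5000]
```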
Coherence-based cache-line Copy-on-Write
A method of performing a copy-on-write on a shared memory page is carried out by a device communicating with a processor via a coherence interconnect. The method includes: adding a page table entry so that a request to read a first cache line of the shared memory page includes a cache-line address of the shared memory page and a request to write to a second cache line of the shared memory page includes a cache-line address of a new memory page; in response to the request to write to the second cache line, storing new data of the second cache line in a second memory and associating the cache-line address of the second cache line with the new data stored in the second memory; and in response to a request to read the second cache line, reading the new data of the second cache line from the second memory.
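Below is a minimal sketch of cache-line-granularity copy-on-write, assuming 64-byte lines and modeling the "second memory" as a dictionary keyed by cache-line offset within the page; the coherence interconnect itself is not modeled, and all names are hypothetical.

```python
LINE = 64

class CowPage:
    def __init__(self, shared_page):
        self.shared = shared_page    # original, still-shared page contents
        self.second_memory = {}      # line offset -> newly written line data

    def write_line(self, line_off, data):
        # Writes land in the second memory, leaving the shared page intact.
        self.second_memory[line_off] = data

    def read_line(self, line_off):
        # Reads see the new data if the line was written, else the shared copy.
        if line_off in self.second_memory:
            return self.second_memory[line_off]
        return self.shared[line_off:line_off + LINE]

page = CowPage(bytes(4096))
page.write_line(128, b"\xff" * LINE)
assert page.read_line(0) == bytes(LINE)        # untouched line: shared page
assert page.read_line(128) == b"\xff" * LINE   # written line: second memory
```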
STORAGE OF TREE DATA STRUCTURES
Disclosed herein is a computer-implemented method for storing binary tree data in memory. The binary tree data comprises parent node data, first child node data and second child node data. The computer-implemented method comprises determining a first child node memory address, the first child node memory address being less than a parent node memory address; determining a second child node memory address, the second child node memory address being greater than the parent node memory address; storing the parent node data at the parent node memory address; storing the first child node data at the first child node memory address; and storing the second child node data at the second child node memory address.
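One simple way to satisfy these address constraints is to lay the nodes out in in-order sequence, so every node's first (left) child lands at a lower address and its second (right) child at a higher one. The Python sketch below, with a plain list standing in for memory, illustrates that idea; it is one possible layout consistent with the abstract, not necessarily the patent's method.

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def store_inorder(root):
    """Store tree data so that left-child address < parent address < right-child address."""
    memory, addr_of = [], {}
    def visit(node):
        if node is None:
            return
        visit(node.left)              # left subtree gets the lower addresses
        addr_of[node] = len(memory)   # parent lands between its two children
        memory.append(node.value)
        visit(node.right)             # right subtree gets the higher addresses
    visit(root)
    return memory, addr_of

root = Node("p", Node("l"), Node("r"))
memory, addr = store_inorder(root)
assert addr[root.left] < addr[root] < addr[root.right]
```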
Efficient erasure-coded storage in distributed data systems
Techniques for efficiently storing client data blocks on a distributed-computing system are provided. The system includes a fast performance tier and a large capacity tier. The capacity tier stores the client data blocks in erasure-encoded data stripes. The performance tier stores logical map data, including an address map indicating a correspondence between logical addresses associated with a first layer of the system and physical addresses associated with a second layer. A method includes receiving a request to include additional client data blocks in the client data blocks. The request indicates logical addresses for the additional blocks. Corresponding physical addresses for the additional blocks are determined, and each additional block is stored at its physical address. Additional logical map data is stored in the performance tier. Storing the additional logical map data includes updating the address map to indicate the correspondence between the logical addresses and the physical addresses for the additional blocks.
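A toy model of the two tiers follows, with a dict-based address map standing in for the performance tier and XOR parity over two-block stripes standing in for a real erasure code; every name here is illustrative rather than taken from the patent.

```python
def xor_parity(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class TwoTierStore:
    def __init__(self):
        self.address_map = {}   # performance tier: logical -> physical address
        self.capacity = {}      # capacity tier: physical address -> data block
        self.parity = []        # capacity tier: parity for each 2-block stripe
        self.next_phys = 0

    def write_blocks(self, logical_to_data):
        for logical, block in logical_to_data.items():
            phys = self.next_phys              # determine a physical address
            self.next_phys += 1
            self.capacity[phys] = block        # store the block in the capacity tier
            self.address_map[logical] = phys   # update the logical map data
        # Toy 2+1 "erasure code": XOR parity over each pair of new blocks.
        data = list(logical_to_data.values())
        for a, b in zip(data[::2], data[1::2]):
            self.parity.append(xor_parity(a, b))

store = TwoTierStore()
store.write_blocks({10: b"\x01" * 8, 11: b"\x02" * 8})
assert store.capacity[store.address_map[10]] == b"\x01" * 8
```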
CONTROLLER WITH CACHING AND NON-CACHING MODES
An apparatus includes a CPU core, a first cache subsystem coupled to the CPU core, and a second memory coupled to the cache subsystem. The first cache subsystem includes a configuration register, a first memory, and a controller. The controller is configured to: receive a request directed to an address in the second memory and, in response to the configuration register having a first value, operate in a non-caching mode. In the non-caching mode, the controller is configured to provide the request to the second memory without caching data returned by the request in the first memory. In response to the configuration register having a second value, the controller is configured to operate in a caching mode. In the caching mode the controller is configured to provide the request to the second memory and cache data returned by the request in the first memory.
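A behavioral sketch of the mode switch, assuming a one-bit configuration register; dictionaries stand in for the first memory (the cache) and the second memory, and the names are hypothetical.

```python
CACHING, NON_CACHING = 1, 0

class Controller:
    def __init__(self, backing):
        self.config_register = NON_CACHING
        self.first_memory = {}        # cache storage: address -> data
        self.second_memory = backing  # second (backing) memory

    def read(self, addr):
        if self.config_register == CACHING and addr in self.first_memory:
            return self.first_memory[addr]       # cache hit in caching mode
        data = self.second_memory[addr]          # provide request to second memory
        if self.config_register == CACHING:
            self.first_memory[addr] = data       # caching mode: fill the cache
        return data                              # non-caching mode: no fill

ctrl = Controller({0x40: 7})
ctrl.read(0x40)
assert 0x40 not in ctrl.first_memory   # non-caching mode left no copy behind
ctrl.config_register = CACHING
ctrl.read(0x40)
assert ctrl.first_memory[0x40] == 7    # caching mode cached the returned data
```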
Scalable System on a Chip
An integrated circuit (IC) including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture. For example, the IC may include an interconnect fabric configured to provide communication between the memory controllers and the processor cores, graphics processing units, and peripheral circuits, and an off-chip interconnect coupled to the interconnect fabric and configured to couple the interconnect fabric to a corresponding interconnect fabric on another instance of the integrated circuit. The interconnect fabric and the off-chip interconnect provide an interface that transparently connects the memory controllers, processor cores, graphics processing units, and peripheral circuits, whether in a single instance of the integrated circuit or across two or more instances of the integrated circuit.
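As a loose illustration of such transparent scaling, the sketch below interleaves requests across whatever pool of memory controllers the fabric exposes, so an agent issues the same request whether one die or two are present. The interleave-by-address-bits scheme and all names are assumptions made for illustration, not details from the patent.

```python
class Die:
    def __init__(self, name, num_controllers):
        self.controllers = [f"{name}:mc{i}" for i in range(num_controllers)]

def route(dies, addr, interleave=256):
    # The fabric plus the die-to-die link present every memory controller,
    # on however many dies, as one flat pool; agents never see the boundary.
    pool = [mc for die in dies for mc in die.controllers]
    return pool[(addr // interleave) % len(pool)]

one_die = [Die("die0", 4)]
two_dies = [Die("die0", 4), Die("die1", 4)]
assert route(one_die, 0x1500) == "die0:mc1"   # 4-controller pool
assert route(two_dies, 0x1500) == "die1:mc1"  # same request, 8-controller pool
```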
Systems and methods for implementing coherent memory in a multiprocessor system
Data units are stored in private caches in nodes of a multiprocessor system, each node containing at least one processor (CPU), at least one cache private to the node, and at least one cache location buffer (CLB) private to the node. In each CLB, location information values are stored, each location information value indicating a location associated with a respective data unit, wherein each location information value stored in a given CLB indicates the location to be either a location within the private cache disposed in the same node as the given CLB, a location in one of the other nodes, or a location in a main memory. Coherence of the values of the data units is maintained using a cache coherence protocol. The location information values stored in the CLBs are updated by the cache coherence protocol in accordance with movements of their respective data units.
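A small model of CLB-tracked locations, assuming the three location kinds named above (local private cache, another node, main memory); the coherence protocol's bookkeeping is reduced to updating CLB entries when a data unit moves, and all names are illustrative.

```python
LOCAL_CACHE, OTHER_NODE, MAIN_MEMORY = "local", "remote", "memory"

class Node:
    def __init__(self):
        self.private_cache = {}   # data unit id -> value
        self.clb = {}             # data unit id -> (kind, where) location info

    def lookup(self, unit):
        kind, where = self.clb[unit]          # CLB gives the unit's location
        if kind == LOCAL_CACHE:
            return self.private_cache[unit]
        return where.private_cache[unit] if kind == OTHER_NODE else None

def move_unit(unit, src, dst):
    # Movement of a data unit updates the location information in both CLBs,
    # standing in for the coherence protocol's update.
    dst.private_cache[unit] = src.private_cache.pop(unit)
    dst.clb[unit] = (LOCAL_CACHE, dst)
    src.clb[unit] = (OTHER_NODE, dst)

a, b = Node(), Node()
a.private_cache["u"] = 42
a.clb["u"] = (LOCAL_CACHE, a)
b.clb["u"] = (OTHER_NODE, a)
move_unit("u", a, b)
assert a.lookup("u") == 42 and b.lookup("u") == 42
```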
Techniques For Multi-Tiered Data Storage In Multi-Tenant Caching Systems
Embodiments of the invention are directed to systems and methods for utilizing a multi-tiered caching architecture in a multi-tenant caching system. A portion of the in-memory cache may be allocated as dedicated shares (e.g., dedicated allocations) that are each dedicated to a particular tenant, while another portion of the in-memory cache (e.g., a shared allocation) can be shared by all tenants in the system. When a threshold period of time has elapsed since data stored in a dedicated allocation was last accessed, the data may be migrated to the shared allocation. If data is accessed from the shared allocation, it may be migrated back to the dedicated allocation. Utilizing the techniques for providing a multi-tiered approach to a multi-tenant caching system can increase performance and decrease latency with respect to conventional caching systems.
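A sketch of the two allocations and the time-based migrations, assuming a fixed idle threshold; a real system would also bound each allocation's size and evict under memory pressure. Class and method names are hypothetical.

```python
import time

class MultiTenantCache:
    def __init__(self, idle_threshold=60.0):
        self.dedicated = {}     # tenant id -> {key: value}, per-tenant allocation
        self.shared = {}        # (tenant id, key) -> value, shared by all tenants
        self.last_access = {}   # (tenant id, key) -> last-access timestamp
        self.idle_threshold = idle_threshold

    def put(self, tenant, key, value):
        self.dedicated.setdefault(tenant, {})[key] = value
        self.last_access[(tenant, key)] = time.monotonic()

    def get(self, tenant, key):
        if (tenant, key) in self.shared:
            # Access from the shared allocation migrates the entry back
            # to the tenant's dedicated allocation.
            self.dedicated.setdefault(tenant, {})[key] = self.shared.pop((tenant, key))
        value = self.dedicated.get(tenant, {}).get(key)
        if value is not None:
            self.last_access[(tenant, key)] = time.monotonic()
        return value

    def demote_idle(self):
        # Entries untouched for longer than the threshold move to the
        # shared allocation, freeing dedicated space for hot data.
        now = time.monotonic()
        for tenant, entries in self.dedicated.items():
            stale = [k for k in entries
                     if now - self.last_access[(tenant, k)] > self.idle_threshold]
            for k in stale:
                self.shared[(tenant, k)] = entries.pop(k)
```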