G06F2212/304

MULTI-PROCESSOR, MULTI-DOMAIN, MULTI-PROTOCOL, CACHE COHERENT, SPECULATION AWARE SHARED MEMORY AND INTERCONNECT

A device includes an interconnect and a plurality of devices connected to the interconnect. The plurality of devices includes a first interface connected to the interconnect and a second interface connected to the interconnect. The plurality of devices further includes a first memory bank connected to the interconnect and a second memory bank connected to the interconnect. The plurality of devices further includes an external memory interface connected to the interconnect and a controller configured to establish virtual channels among the plurality of devices connected to the interconnect.

Credit aware central arbitration for multi-endpoint, multi-core system

A device includes a data path, a first interface configured to receive a first memory access request from a first peripheral device, and a second interface configured to receive a second memory access request from a second peripheral device. The device further includes an arbiter circuit configured to determine a first destination device connected to the data path and associated with the first memory access request and a first credit threshold corresponding to the first memory access request. The arbiter circuit is further configured to determine a second destination device connected to the data path and associated with the second memory access request and a second credit threshold corresponding to the second memory access request. The arbiter circuit is configured to arbitrate access to the data path by the first memory access request and the second memory access request based on the first credit threshold and the second credit threshold.
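The credit-gated arbitration described above can be sketched in software as a minimal model: each destination holds a pool of credits, and a request may win arbitration only while its destination still holds at least the request's credit threshold. All names and the one-credit-per-win charge here are illustrative assumptions, not details from the abstract.

```python
class CreditArbiter:
    """Toy credit-aware arbiter for requests bound for shared destinations."""

    def __init__(self, credits):
        # credits: dict mapping destination id -> available credits
        self.credits = dict(credits)

    def arbitrate(self, requests):
        """requests: list of (source, destination, credit_threshold).

        Grants the first request whose destination still holds at least
        `credit_threshold` credits, charging one credit on a win.
        Returns (source, destination), or None if no request may proceed.
        """
        for src, dst, threshold in requests:
            if self.credits.get(dst, 0) >= threshold:
                self.credits[dst] -= 1
                return (src, dst)
        return None  # all eligible destinations are saturated
```

In this sketch, a request for a saturated destination is simply skipped, so one congested memory bank does not block requests headed elsewhere on the data path.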

Adaptive credit-based replenishment threshold used for transaction arbitration in a system that supports multiple levels of credit expenditure
11429527 · 2022-08-30

A device includes an arbiter circuit configured to receive a first request for a resource. The first request is associated with a first credit cost. The arbiter circuit is further configured to receive a second request for the resource. The second request is associated with a second credit cost. The arbiter circuit is further configured to select the first request for the resource as an arbitration winner. The arbiter circuit is further configured to decrement a number of available credits associated with the resource by the first credit cost. The arbiter circuit is further configured to, in response to the number of available credits associated with the resource falling to a lower credit threshold, wait until the number of available credits associated with the resource reaches an upper credit threshold to select an additional arbitration winner for the resource.
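The lower/upper threshold pair described above forms a hysteresis loop: once the pool falls to the lower threshold, arbitration for that resource stalls until replenishment lifts the pool back to the upper threshold. The sketch below models only that behavior; class and method names are assumptions for illustration.

```python
class HysteresisCredits:
    """Credit pool with lower/upper replenishment thresholds."""

    def __init__(self, credits, lower, upper):
        self.credits = credits
        self.lower = lower      # stall when credits fall to this level
        self.upper = upper      # resume only once credits climb back here
        self.stalled = False

    def try_grant(self, cost):
        """Attempt to select a winner with the given credit cost."""
        if self.stalled:
            return False        # wait for replenishment to the upper threshold
        self.credits -= cost
        if self.credits <= self.lower:
            self.stalled = True
        return True

    def replenish(self, amount):
        self.credits += amount
        if self.credits >= self.upper:
            self.stalled = False
```

Because resumption requires reaching the *upper* threshold, a burst of high-cost winners cannot immediately re-drain the pool after a single returned credit, which is the point of supporting multiple levels of credit expenditure.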

Multicore, multibank, fully concurrent coherence controller

A system includes a multi-core shared memory controller (MSMC). The MSMC includes a snoop filter bank, a cache tag bank, and a memory bank. The cache tag bank is connected to both the snoop filter bank and the memory bank. The MSMC further includes a first coherent slave interface connected to a data path that is connected to the snoop filter bank. The MSMC further includes a second coherent slave interface connected to the data path that is connected to the snoop filter bank. The MSMC further includes an external memory master interface connected to the cache tag bank and the memory bank. The system further includes a first processor package connected to the first coherent slave interface and a second processor package connected to the second coherent slave interface. The system further includes an external memory device connected to the external memory master interface.

CONFIGURABLE CACHE FOR MULTI-ENDPOINT HETEROGENEOUS COHERENT SYSTEM
20220229779 · 2022-07-21

A device includes a memory bank. The memory bank includes data portions of a first way group. The data portions of the first way group include a data portion of a first way of the first way group and a data portion of a second way of the first way group. The memory bank further includes data portions of a second way group. The device further includes a configuration register and a controller configured to individually allocate, based on one or more settings in the configuration register, the first way and the second way to one of an addressable memory space and a data cache.
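A minimal way to picture the per-way allocation above is a configuration register with one bit per way, where each bit steers that way either into the data cache or into the addressable memory space. The bit layout is purely an assumption for illustration; the abstract does not specify the register format.

```python
def allocate_ways(config_reg, num_ways):
    """Split way indices by configuration-register bits.

    Assumed encoding: bit w == 1 keeps way w in the data cache,
    bit w == 0 maps way w into the addressable memory space.
    Returns (cache_ways, sram_ways).
    """
    cache_ways = [w for w in range(num_ways) if (config_reg >> w) & 1]
    sram_ways = [w for w in range(num_ways) if not (config_reg >> w) & 1]
    return cache_ways, sram_ways
```

Allocating way by way (rather than whole banks) lets software trade cache capacity against scratchpad memory at a fine granularity, per way group.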

System, method and computer readable medium for file encryption and memory encryption of secure byte-addressable persistent memory and auditing

A method comprising initializing, by a processor, a field identification (FID) field and a file type field in a memory encryption counter block associated with pages for each file of a plurality of files stored in a persistent memory device (PMD), in response to a command by an operating system (OS). The file type field identifies whether each file associated with the FID field is one of an encrypted file and a memory location. The method includes decrypting data of a page stored in the PMD, based on a read command by a requesting core, which includes determining whether the requested page is an encrypted file or a memory location. If the requested page is an encrypted file, decryption is performed based on a first encryption pad generated from the file encryption key of the encrypted file and a second encryption pad generated from a processor key of the secure processor.
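The two-pad decryption path can be sketched with XOR pads derived from a counter: memory-only pages see the processor-key pad alone, while encrypted files additionally apply the per-file pad. Real counter-mode memory encryption derives pads with AES; SHA-256 stands in here purely so the sketch is self-contained, and every key, counter, and function name is an assumption.

```python
import hashlib

def pad(key: bytes, counter: int, length: int) -> bytes:
    # Illustrative pad derivation (real designs use AES in counter mode).
    # Only valid for lengths up to one digest (32 bytes) in this sketch.
    return hashlib.sha256(key + counter.to_bytes(8, "big")).digest()[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def decrypt_page(ciphertext, counter, file_key, cpu_key, is_encrypted_file):
    """Apply the processor-key pad always; add the file-key pad for files."""
    data = xor(ciphertext, pad(cpu_key, counter, len(ciphertext)))
    if is_encrypted_file:
        data = xor(data, pad(file_key, counter, len(ciphertext)))
    return data
```

Since XOR is its own inverse, the same routine serves for encryption in this sketch, which makes the file-versus-memory distinction easy to test.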

Storage system
11385815 · 2022-07-12

A storage system includes a redundancy group formed of storage drives that store host data and redundant data in a distributed manner, and a controller that controls access to the redundancy group. The controller is configured to: select, from among the storage drives in the redundancy group, a subset of the storage drives whose number is equal to or smaller than the redundancy level of the redundancy group, and set that subset to a power saving state; receive, from a host, a read request directed at a target storage drive in the redundancy group; and, when the target storage drive is in the power saving state, restore the target data corresponding to the read request from data collected from storage drives in the redundancy group other than the target storage drive, and return the target data to the host.
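The restore path above can be sketched for the single-parity case: a read aimed at a powered-down drive is rebuilt by XOR-ing the corresponding blocks of every other drive in the group, so the sleeping drive never has to spin up. Drive layout and names are illustrative assumptions.

```python
from functools import reduce

def parity(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def read_block(drives, asleep, target):
    """drives: list of equal-length blocks (data drives plus one parity).

    Serve the read directly when the target drive is awake; otherwise
    reconstruct its block from all remaining group members.
    """
    if target not in asleep:
        return drives[target]
    return parity([blk for i, blk in enumerate(drives) if i != target])
```

With higher redundancy levels the same idea applies, which is why the abstract caps the number of powered-down drives at the group's redundancy level: reconstruction must still be possible from the drives left awake.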

MULTICORE SHARED CACHE OPERATION ENGINE

Techniques including receiving configuration information for a trigger control channel of one or more trigger control channels, the configuration information defining a first one or more triggering events, receiving a first memory management command, storing the first memory management command, detecting the first one or more triggering events, and triggering the stored first memory management command based on the detected first one or more triggering events.
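The trigger-channel behavior above amounts to: configure a channel with triggering events, store a memory management command, and release the command when a matching event is detected. The sketch below models just that sequencing; event and command names are illustrative assumptions.

```python
class TriggerChannel:
    """Toy model of one trigger control channel."""

    def __init__(self, triggering_events):
        self.triggering_events = set(triggering_events)
        self.pending = []   # stored commands awaiting a trigger
        self.fired = []     # commands released for execution

    def store(self, command):
        self.pending.append(command)

    def on_event(self, event):
        # Release the oldest stored command when a configured event arrives.
        if event in self.triggering_events and self.pending:
            self.fired.append(self.pending.pop(0))
```

Decoupling command storage from triggering is what lets software queue a cache operation ahead of time and have the hardware launch it exactly when the configured event occurs.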

DELAYED SNOOP FOR IMPROVED MULTI-PROCESS FALSE SHARING PARALLEL THREAD PERFORMANCE
20220156193 · 2022-05-19 ·

Techniques for maintaining cache coherency comprising storing a set of data blocks associated with a main process in a cache line of a main cache memory, storing a first local copy of the set of data blocks in a first local cache memory of a first processor, storing a second local copy of the set of data blocks in a second local cache memory of a second processor, executing, by the first processor, a first child process of the main process to generate first output data, writing the first output data to a first data block of the first local copy as a write through, writing the first output data to the first data block of the main cache memory as part of the write through, transmitting an invalidate request to the second local cache memory, marking the second local copy of the set of data blocks as delayed, and transmitting an acknowledgment to the invalidate request.
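The "delayed" marking above can be sketched as follows: rather than dropping its copy the moment an invalidate arrives, the second cache marks the line delayed and acknowledges immediately; the stale copy is refreshed only when that processor next reads the line. Structure and names are illustrative assumptions.

```python
class LocalCache:
    """One processor's local copy of a cache line, with delayed snoop."""

    def __init__(self, line):
        self.line = dict(line)   # offset -> value
        self.delayed = False

    def snoop_invalidate(self):
        self.delayed = True      # defer the actual invalidation
        return "ack"             # acknowledge without waiting

    def read(self, offset, fetch_main):
        if self.delayed:
            self.line = dict(fetch_main())  # pick up remote writes now
            self.delayed = False
        return self.line[offset]
```

For falsely shared lines, where each thread touches different words, this avoids repeated invalidate/refetch ping-pong while the writer is still streaming write-throughs, yet the reader still sees the updated line on its next access.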

Distributed error detection and correction with hamming code handoff

A device includes a data path, a first interface connected to the data path and configured to receive a request from a processor package to write a data value to a memory address, and a controller connected to the data path and configured to receive the request to write the data value to the memory address and to calculate a Hamming code of the data value. The controller is configured to transmit the data value and the Hamming code on the data path. The device includes an external memory interleave connected to the data path. The external memory interleave is configured to receive the data value and calculate a test Hamming code of the data value and to determine whether to send the data value to an external memory interface to be written to the memory address based on a comparison of the Hamming code and the test Hamming code.
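The handoff above can be illustrated in software: the controller computes a Hamming code over the data word, and the external memory interleave recomputes a test code and forwards the write only when the two match, so corruption on the data path is caught before it reaches external memory. The bit width and function names are assumptions for illustration.

```python
def hamming_code(data: int, width: int = 8) -> int:
    """Even-parity Hamming check bits over a `width`-bit data word.

    Check bit p covers the data bits whose 1-based index has bit p set.
    """
    bits = [(data >> i) & 1 for i in range(width)]
    code = 0
    p = 0
    while (1 << p) <= width:
        parity = 0
        for i, bit in enumerate(bits, start=1):
            if i & (1 << p):
                parity ^= bit
        code |= parity << p
        p += 1
    return code

def forward_write(data: int, code: int) -> bool:
    """Interleave-side check: recompute and compare before forwarding."""
    return hamming_code(data) == code
```

Handing the code off alongside the data means each hop can verify the transfer independently, distributing error detection along the path instead of concentrating it at the memory interface.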