Patent classifications
G06F2212/304
DELAYED SNOOP FOR IMPROVED MULTI-PROCESS FALSE SHARING PARALLEL THREAD PERFORMANCE
Techniques for maintaining cache coherency comprising storing a set of data blocks associated with a main process in a cache line of a main cache memory, storing a first local copy of the set of data blocks in a first local cache memory of a first processor, storing a second local copy of the set of data blocks in a second local cache memory of a second processor, executing a first child process of the main process to generate first output data, writing the first output data to a first data block of the first local copy as a write through, writing the first output data to the first data block of the main cache memory as a part of the write through, transmitting an invalidate request to the second local cache memory, marking the second local copy of the set of data blocks as delayed, and transmitting an acknowledgment to the invalidate request.
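The sequence of steps in this abstract can be sketched in code. The following is a minimal, hypothetical model (class and field names are assumptions, not from the patent): a write-through updates the writer's local copy and the main cache, and the peer's copy is marked "delayed" rather than discarded, which is the point of the delayed snoop under false sharing.

```python
# Hypothetical sketch of the delayed-snoop idea described above.
class LocalCache:
    def __init__(self):
        self.lines = {}          # address -> (data, state)

    def fill(self, addr, data):
        self.lines[addr] = (data, "valid")

    def invalidate_delayed(self, addr):
        # Mark the local copy as delayed rather than discarding it outright.
        if addr in self.lines:
            data, _ = self.lines[addr]
            self.lines[addr] = (data, "delayed")
        return "ack"             # acknowledgment to the invalidate request

class MainCache:
    def __init__(self):
        self.lines = {}

def write_through(addr, value, writer, main, peer):
    writer.fill(addr, value)               # write to the first local copy
    main.lines[addr] = value               # write-through to the main cache
    return peer.invalidate_delayed(addr)   # snoop: mark the peer's copy delayed

main, c1, c2 = MainCache(), LocalCache(), LocalCache()
c1.fill(0x40, 0)
c2.fill(0x40, 0)
ack = write_through(0x40, 7, c1, main, c2)
```

In this sketch the delayed copy stays resident, so a later read by the second processor can decide whether to refetch, deferring the cost of the snoop.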
MULTICORE SHARED CACHE OPERATION ENGINE
Techniques for accessing memory by a memory controller, comprising receiving, by the memory controller, a memory management command to perform a memory management operation at a virtual memory address, translating the virtual memory address to a physical memory address, wherein the physical memory address comprises an address within a cache memory, and outputting an instruction to the cache memory based on the memory management command and the physical memory address.
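The translate-then-dispatch flow above can be illustrated with a short sketch. The page-table layout, command names, and instruction format here are assumptions for illustration only:

```python
# Hypothetical sketch: a memory controller receives a management command with a
# virtual address, translates it, and emits an instruction aimed at the cache.
PAGE_SIZE = 4096
page_table = {0x1: 0x9}    # virtual page number -> physical page number (assumed)

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

def handle_command(cmd, vaddr):
    paddr = translate(vaddr)   # physical address lies within the cache memory
    # Output an instruction based on the command and the physical address.
    return {"op": cmd, "phys_addr": paddr}

instr = handle_command("flush", 0x1010)
```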
STORAGE SYSTEM
A storage system includes a redundancy group formed of storage drives that store host data and redundant data in a distributed manner, and a controller that controls access to the redundancy group. The controller is configured to: select, from among the storage drives in the redundancy group, a subset of the storage drives whose number is equal to or smaller than the redundancy level of the redundancy group, and set that subset to a power saving state; receive, from a host, a read request directed to a target storage drive in the redundancy group; and, when the target storage drive is in the power saving state, restore the target data corresponding to the read request from data collected from the storage drives in the redundancy group other than the target storage drive, and return the target data to the host.
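The restore path can be sketched with single-parity (RAID-5-style XOR) redundancy, which is one possible instance of the redundancy scheme the abstract leaves abstract. A read targeting a sleeping drive is served by reconstructing its data from the remaining drives instead of waking it:

```python
# Hypothetical sketch: serve a read for a power-saving drive by XOR restore.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]   # host data on drives 0..2
parity = xor_blocks(data)                        # redundant data on drive 3
drives = data + [parity]
asleep = {1}                                     # drive 1 set to power saving state

def read(drive_id):
    if drive_id in asleep:
        # Restore from the drives in the group other than the target drive.
        others = [d for i, d in enumerate(drives) if i != drive_id]
        return xor_blocks(others)
    return drives[drive_id]

restored = read(1)
```

Because the number of sleeping drives never exceeds the redundancy level (one, for single parity), every read remains serviceable without waking a drive.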
MRAM NOISE MITIGATION FOR WRITE OPERATIONS WITH SIMULTANEOUS BACKGROUND OPERATIONS
A method of writing data utilizes a pipeline to process write operations of a first plurality of data words addressed to a memory bank. The method also comprises writing a second plurality of data words into an error buffer, wherein the second plurality of data words comprises data words that are awaiting write verification. Additionally, the method comprises searching for at least one data word that is awaiting write verification in the error buffer, wherein verify operations associated with the at least one data word occur in a same row as the write operation. Finally, the method comprises determining if an address associated with any of the at least one data word is proximal to an address for the write operation and preventing a verify operation associated with the at least one data word from occurring in a same cycle as the write operation.
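The scheduling check in the last step can be sketched as follows. The row-decode split and the "proximal means same row" reading are assumptions for illustration; the abstract only states that conflicting verifies are kept out of the write's cycle:

```python
# Hypothetical sketch: hold back verify operations for buffered data words whose
# address shares a row with the incoming write, so they never share its cycle.
ROW_BITS = 8     # assumed: low 8 address bits select the column, the rest the row

def same_row(a, b):
    return (a >> ROW_BITS) == (b >> ROW_BITS)

def schedule(write_addr, error_buffer):
    """Split pending verifies into those allowed in the write's cycle and those deferred."""
    allowed, deferred = [], []
    for addr in error_buffer:
        (deferred if same_row(addr, write_addr) else allowed).append(addr)
    return allowed, deferred

allowed, deferred = schedule(0x1234, [0x1210, 0x1300, 0x12FF])
```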
SYSTEM, METHOD AND COMPUTER READABLE MEDIUM FOR FILE ENCRYPTION AND MEMORY ENCRYPTION OF SECURE BYTE-ADDRESSABLE PERSISTENT MEMORY AND AUDITING
A method comprising initializing, by a processor, a field identification (FID) field and a file type field in a memory encryption counter block associated with the pages of each file of a plurality of files stored in a persistent memory device (PMD), in response to a command by an operating system (OS). The file type field identifies whether each file associated with the FID field is one of an encrypted file and a memory location. The method includes decrypting data of a page stored in the PMD, based on a read command by a requesting core, and, when decrypting, determining whether the requested page is an encrypted file or a memory location. If the requested page is an encrypted file, decryption is performed based on a first encryption pad generated from a file encryption key of the encrypted file and a second encryption pad generated from a processor key of a secure processor.
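The two-pad decryption for encrypted files can be sketched in a counter-mode style. The pad derivation below (a hash over key and counter) is purely illustrative and is not the patent's construction; only the layering of a file-key pad and a processor-key pad comes from the abstract:

```python
# Hypothetical sketch: a page is protected by XORing two keystream pads, one
# derived from the file encryption key and one from the processor key.
import hashlib

def pad(key: bytes, counter: int, length: int) -> bytes:
    # Illustrative pad generator (stand-in for the real counter-block scheme).
    stream, block = b"", 0
    while len(stream) < length:
        stream += hashlib.sha256(
            key + counter.to_bytes(8, "big") + block.to_bytes(4, "big")).digest()
        block += 1
    return stream[:length]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

file_key, cpu_key, ctr = b"file-key", b"cpu-key", 42   # assumed key material
plain = b"secret page data"
first_pad = pad(file_key, ctr, len(plain))
second_pad = pad(cpu_key, ctr, len(plain))
cipher = xor(xor(plain, first_pad), second_pad)
recovered = xor(xor(cipher, second_pad), first_pad)    # decryption reverses both pads
```

A page marked as a plain memory location would skip the file-key pad and use only the processor-key path.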
SYSTEM, SECURE PROCESSOR AND METHOD FOR RESTORATION OF A SECURE PERSISTENT MEMORY
Embodiments include a method comprising storing each encrypted data block of a ciphertext page, with corresponding encrypted error correction code (ECC) bits, in a persistent memory device (PMD). The encrypted ECC bits verify both an encryption counter value of an encryption operation and a plaintext block of the ciphertext page from a decryption operation. The method includes decrypting, using the decryption operation during a read operation of a memory controller, a respective block of the ciphertext page and the corresponding encrypted ECC bits stored in the PMD, using a current counter value, to form the plaintext block and decrypted ECC bits. The method includes verifying the plaintext block with the decrypted ECC bits, and performing a security check of the encryption counter value, using the decrypted ECC bits, in response to the plaintext block failing the verification. A system and secure processor are provided.
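The verify-then-check flow can be sketched as below. The use of a short digest as an ECC stand-in, and the counter-recovery loop, are illustrative assumptions; the abstract only says the ECC bits bind both the plaintext and the counter, and that a failed plaintext check triggers a security check of the counter:

```python
# Hypothetical sketch: ECC bits computed over plaintext and counter; a failed
# plaintext verification triggers a security check of the counter value.
import hashlib

def ecc_bits(plaintext: bytes, counter: int) -> bytes:
    # Illustrative ECC stand-in: a truncated digest over plaintext and counter.
    return hashlib.sha256(plaintext + counter.to_bytes(8, "big")).digest()[:4]

def verify(plaintext, counter, stored_ecc, search_window=8):
    if ecc_bits(plaintext, counter) == stored_ecc:
        return "ok"
    # Plaintext failed verification: security-check the counter value by
    # testing nearby candidates against the decrypted ECC bits.
    for candidate in range(counter + 1, counter + search_window):
        if ecc_bits(plaintext, candidate) == stored_ecc:
            return f"stale counter, actual {candidate}"
    return "corrupt"

ecc = ecc_bits(b"block", 5)
```

This lets the memory controller distinguish a stale counter (e.g. after a crash of persistent memory state) from genuine data corruption.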
Translation lookaside buffer invalidation
A type of translation lookaside buffer (TLB) invalidation instruction is described which specifically targets a first type of TLB that stores combined stage-1-and-2 entries, which depend on both stage 1 and stage 2 translation data, and which is configured to ignore a TLB invalidation command that invalidates based on a first set of one or more invalidation conditions, including an address-based invalidation condition depending on matching of an intermediate address. A second type of TLB, other than the first type, ignores the invalidation command triggered by the first type of TLB invalidation instruction. This approach helps to limit the performance impact of stage 2 invalidations in systems supporting a combined stage-1-and-2 TLB that cannot invalidate by intermediate address.
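The division of labor between the two TLB types can be sketched as follows. The command names and the full-flush behavior of the combined TLB are assumptions chosen to make the filtering concrete:

```python
# Hypothetical sketch: a combined stage-1-and-2 TLB ignores address-based
# stage 2 invalidations (it cannot match by intermediate address), while other
# TLBs ignore the instruction that specifically targets the combined type.
class TLB:
    def __init__(self, kind, entries):
        self.kind = kind                  # "combined-s1s2" or "split"
        self.entries = set(entries)       # entries keyed by intermediate address

    def handle(self, cmd, ipa=None):
        if cmd == "inval_by_ipa":
            if self.kind == "combined-s1s2":
                return                    # cannot match by intermediate address: ignore
            self.entries.discard(ipa)     # address-based stage 2 invalidation
        elif cmd == "inval_combined":
            if self.kind != "combined-s1s2":
                return                    # second type of TLB ignores this instruction
            self.entries.clear()          # invalidate combined stage-1-and-2 entries

combined = TLB("combined-s1s2", {0x100, 0x200})
split = TLB("split", {0x100, 0x200})
combined.handle("inval_by_ipa", ipa=0x100)   # ignored by the combined TLB
split.handle("inval_by_ipa", ipa=0x100)      # removes the matching entry
combined.handle("inval_combined")            # flushes the combined TLB
split.handle("inval_combined")               # ignored by the second type
```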
Multi-processor bridge with cache allocate awareness
Techniques for loading data, comprising receiving a memory management command to perform a memory management operation to load data into a cache memory before execution of an instruction that requests the data, formatting the memory management command into one or more instructions for a cache controller associated with the cache memory, and outputting an instruction to the cache controller to load the data into the cache memory based on the memory management command.
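The "formatting into one or more instructions" step can be sketched by splitting a preload request across cache lines. The 64-byte line size and the instruction dictionary shape are assumptions for illustration:

```python
# Hypothetical sketch: reformat one memory management (preload) command into
# one cache-controller allocate instruction per covered cache line.
LINE_SIZE = 64   # assumed cache line size

def format_preload(base_addr, length):
    first = base_addr - (base_addr % LINE_SIZE)
    last = (base_addr + length - 1) - ((base_addr + length - 1) % LINE_SIZE)
    return [{"op": "allocate", "line_addr": a}
            for a in range(first, last + LINE_SIZE, LINE_SIZE)]

# A 96-byte region starting mid-line spans three cache lines.
instrs = format_preload(0x1030, 96)
```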
INFORMATION PROCESSING APPARATUS, CACHE CONTROL APPARATUS AND CACHE CONTROL METHOD
According to an embodiment, an information processing apparatus includes a cache memory and a cache controller. The cache controller includes a first circuit, a second circuit and a third circuit. The first circuit is configured to store a designated address range for a cache maintenance process. The second circuit is configured to determine whether or not addresses accessed in the cache memory by the information processing apparatus are within the designated address range. The third circuit is configured to store reservation information for reserving execution of the cache maintenance process for cache lines corresponding to addresses within the designated address range.
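The three circuits map naturally onto a small model. The class name, the 64-byte line granularity, and the set-based reservation store are assumptions made for this sketch:

```python
# Hypothetical sketch of the three circuits: range storage, range check, and
# reservation of cache maintenance for matching cache lines.
class CacheMaintenanceController:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi        # first circuit: designated address range
        self.reserved = set()            # third circuit: reservation information

    def in_range(self, addr):
        # Second circuit: is the accessed address within the designated range?
        return self.lo <= addr < self.hi

    def access(self, addr):
        if self.in_range(addr):
            # Reserve maintenance for the 64-byte cache line holding this address.
            self.reserved.add(addr & ~0x3F)

ctl = CacheMaintenanceController(0x1000, 0x2000)
ctl.access(0x1444)   # in range: its cache line is reserved for maintenance
ctl.access(0x3000)   # out of range: no reservation recorded
```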