G06F12/0835

Dynamic On-Demand Device-Assisted Paging

Systems, methods, and devices for efficient I/O page fault handling are provided. A system may include a peripheral device that accesses guest memory of a virtual machine using direct memory access (DMA) and a processing device that runs the virtual machine. The processing device may include a buffer allocated to receive a payload from the peripheral device while an input/output page fault corresponding to a page of the guest memory is resolved. The processing device may also include an input/output page fault queue to store a descriptor corresponding to the input/output page fault and a fault buffer queue to store a descriptor corresponding to a location of the buffer allocated to receive the payload while the input/output page fault is resolved.
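
As a rough illustration of how the two queues described above might cooperate, the sketch below (in C, with all structure and function names invented for illustration rather than taken from the disclosure) parks a faulting DMA payload in a pre-allocated buffer, records a fault descriptor and a buffer descriptor in parallel queues, and replays the payload once the guest page is resident.

    /* Illustrative sketch only: pairing an I/O page fault descriptor with a
     * fault-buffer descriptor that records where the DMA payload is parked
     * while the guest page is faulted in. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define FAULT_QUEUE_DEPTH 16
    #define BOUNCE_BUF_SIZE   4096

    struct io_page_fault_desc {         /* entry in the I/O page fault queue */
        uint64_t guest_page_addr;       /* faulting guest-physical page */
        uint32_t device_id;             /* peripheral that issued the DMA */
        uint32_t payload_len;
    };

    struct fault_buffer_desc {          /* entry in the fault buffer queue */
        void    *buffer;                /* buffer holding the payload meanwhile */
        uint32_t payload_len;
    };

    static struct io_page_fault_desc fault_queue[FAULT_QUEUE_DEPTH];
    static struct fault_buffer_desc  buffer_queue[FAULT_QUEUE_DEPTH];
    static uint8_t bounce_pool[FAULT_QUEUE_DEPTH][BOUNCE_BUF_SIZE];
    static unsigned tail;

    /* Device payload arrives while the target page is not resident:
     * enqueue a fault descriptor and stash the payload in a bounce buffer. */
    static int absorb_faulting_dma(uint64_t gpa, uint32_t dev,
                                   const void *payload, uint32_t len)
    {
        if (tail == FAULT_QUEUE_DEPTH || len > BOUNCE_BUF_SIZE)
            return -1;                                  /* queue or buffer full */
        fault_queue[tail]  = (struct io_page_fault_desc){ gpa, dev, len };
        buffer_queue[tail] = (struct fault_buffer_desc){ bounce_pool[tail], len };
        memcpy(bounce_pool[tail], payload, len);
        tail++;
        return 0;
    }

    /* Once the page fault is resolved, replay the parked payload into guest memory. */
    static void replay_after_fault(unsigned idx, void *guest_page)
    {
        memcpy(guest_page, buffer_queue[idx].buffer, buffer_queue[idx].payload_len);
    }

    int main(void)
    {
        uint8_t guest_page[BOUNCE_BUF_SIZE] = {0};
        const char msg[] = "DMA payload";
        absorb_faulting_dma(0x7f000, 3, msg, sizeof msg);
        replay_after_fault(0, guest_page);              /* after fault resolution */
        printf("%s\n", (char *)guest_page);
        return 0;
    }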

INTERLEAVED CACHE PREFETCHING
20230205701 · 2023-06-29 ·

A method includes receiving, at a direct memory access (DMA) controller of a memory device, a first command from a first cache controller coupled to the memory device to prefetch first data from the memory device and sending the prefetched first data, in response to receiving the first command, to a second cache controller coupled to the memory device. The method can further include receiving a second command from the second cache controller to prefetch second data from the memory device, and sending the prefetched second data, in response to receiving the second command, to a third cache controller coupled to the memory device.
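
The following sketch assumes a simple round-robin model in which the memory device's DMA controller forwards each prefetched line to the cache controller after the one that issued the command; the controller count, line size, and function names are illustrative assumptions, not details from the application.

    /* Illustrative sketch: the DMA controller of the memory device prefetches a
     * line on command from one cache controller and delivers it to the next
     * controller in sequence, so consecutive commands interleave the fills. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NUM_CACHE_CTRL 3
    #define LINE_SIZE      64

    static uint8_t memory_device[1 << 16];                    /* backing memory */
    static uint8_t cache_fill_buf[NUM_CACHE_CTRL][LINE_SIZE]; /* per-controller fill buffers */

    /* A prefetch command as issued by cache controller `src`. */
    struct prefetch_cmd {
        unsigned src;       /* controller that issued the command */
        uint32_t addr;      /* line-aligned address to prefetch   */
    };

    static void dma_handle_prefetch(const struct prefetch_cmd *cmd)
    {
        unsigned dst = (cmd->src + 1) % NUM_CACHE_CTRL;       /* next controller */
        memcpy(cache_fill_buf[dst], &memory_device[cmd->addr], LINE_SIZE);
        printf("prefetch @0x%x from ctrl %u delivered to ctrl %u\n",
               (unsigned)cmd->addr, cmd->src, dst);
    }

    int main(void)
    {
        struct prefetch_cmd first  = { .src = 0, .addr = 0x100 };  /* first command  */
        struct prefetch_cmd second = { .src = 1, .addr = 0x140 };  /* second command */
        dma_handle_prefetch(&first);    /* data goes to controller 1 */
        dma_handle_prefetch(&second);   /* data goes to controller 2 */
        return 0;
    }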

MEMORY MANAGEMENT DEVICE
20230195633 · 2023-06-22 ·

Memory modules and associated devices and methods are provided using a memory copy function between a cache memory and a main memory that may be implemented in hardware. Address translation may additionally be provided.

EFFICIENT AND CONCURRENT MODEL EXECUTION

An accelerator is disclosed. A circuit may process a data to produce a processed data. A first tier storage may include a first capacity and a first latency. A second tier storage may include a second capacity and a second latency. The second capacity may be larger than the first capacity, and the second latency may be slower than the first latency. A bus may be used to transfer at least one of the data or the processed data between the first tier storage and the second tier storage.
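
A minimal sketch of the tiered layout, assuming a statically sized fast tier and slow tier with a plain memory copy standing in for the bus transfer; the capacities and names are invented for illustration.

    /* Illustrative sketch: a small, low-latency first tier and a larger,
     * slower second tier, with data moved between them over a shared bus. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define TIER1_CAPACITY (64 * 1024)        /* smaller, lower latency */
    #define TIER2_CAPACITY (4 * 1024 * 1024)  /* larger, higher latency */

    static uint8_t tier1[TIER1_CAPACITY];
    static uint8_t tier2[TIER2_CAPACITY];

    /* Bus transfer: stage processed data out of the fast tier into the larger
     * tier so the fast tier stays available for the data being processed. */
    static void bus_transfer(uint8_t *dst, const uint8_t *src, size_t len)
    {
        memcpy(dst, src, len);                /* stand-in for a DMA bus move */
    }

    int main(void)
    {
        /* Produce "processed data" in the fast tier, then spill it to tier 2. */
        memset(tier1, 0xAB, 1024);
        bus_transfer(tier2, tier1, 1024);
        printf("spilled %d bytes from tier 1 to tier 2\n", 1024);
        return 0;
    }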

VERIFICATION OF OFF-CHIP COMPUTER-READABLE INSTRUCTIONS AND RELATED SYSTEMS, METHODS, AND APPARATUSES

An apparatus may comprise an off-chip data storage device and a semiconductor device package including processing circuitry and an on-chip memory device, the off-chip data storage device including master data and portions of the computer-readable instructions. The processing circuitry may retrieve the master data, which includes a digital signature that may be used to verify the master data and a hash table that may include hash information for other portions of the computer-readable instructions. The processing circuitry may also verify the master data responsive to the digital signature, retrieve a portion, calculate a hash value of the retrieved portion, and determine whether the calculated hash value correlates to hash information of the hash table.
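
One way the verification flow could look in code is sketched below; the signature check is stubbed and a 64-bit FNV-1a hash stands in for whatever hash the hash table actually carries, so every name and algorithm choice here is an assumption for illustration only.

    /* Illustrative sketch of per-portion verification: verify the master data
     * against its digital signature, then check each retrieved portion against
     * the hash table carried in the master data. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_PORTIONS 4

    struct master_data {
        uint8_t  digital_signature[64];      /* signs the master data             */
        uint64_t portion_hash[NUM_PORTIONS]; /* hash table for the other portions */
    };

    /* Stand-in for verifying the master data against its digital signature. */
    static bool verify_signature(const struct master_data *md)
    {
        (void)md;
        return true;    /* a real device would use its cryptographic engine here */
    }

    /* Stand-in hash: 64-bit FNV-1a over a retrieved portion. */
    static uint64_t hash_portion(const uint8_t *p, size_t len)
    {
        uint64_t h = 0xcbf29ce484222325ULL;
        for (size_t i = 0; i < len; i++) {
            h ^= p[i];
            h *= 0x100000001b3ULL;
        }
        return h;
    }

    /* Hash a retrieved portion and compare it against the hash table entry. */
    static bool verify_portion(const struct master_data *md, unsigned idx,
                               const uint8_t *portion, size_t len)
    {
        return hash_portion(portion, len) == md->portion_hash[idx];
    }

    int main(void)
    {
        uint8_t portion[128] = { 0x90 };                 /* pretend instructions */
        struct master_data md = {0};
        md.portion_hash[0] = hash_portion(portion, sizeof portion);

        if (!verify_signature(&md))
            return 1;                                    /* master data rejected */
        printf("portion 0 %s\n",
               verify_portion(&md, 0, portion, sizeof portion) ? "verified" : "rejected");
        return 0;
    }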

Active input/output expander of a memory sub-system

A value setting associated with one or more parameters of a host-side interface and a memory-side interface of an input/output (I/O) expander is configured to enable Open NAND Flash Interface (ONFI)-compliant communications between a host system and a target memory die of a memory sub-system. The I/O expander processes one or more ONFI-compliant communications between the host system and the target memory die, wherein the one or more ONFI-compliant communications relate to execution of a memory access operation.

Cache memory system and operating method for the same

A cache memory system includes a cache memory, which stores cache data corresponding to portions of main data stored in a main memory, along with priority data corresponding to each piece of cache data; a table storage unit, which stores a priority table containing information regarding access frequencies of the main data; and a controller, which, when a piece of the main data is requested, determines whether cache data corresponding to the request is stored in the cache memory, deletes one piece of the cache data based on the priority data, and updates the cache with new data, wherein the priority data is determined based on the access-frequency information.
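
A compact sketch of the controller behavior, assuming the priority data is simply the access count kept in the priority table and that the lowest-priority line is the eviction victim; the sizes, names, and exact policy are illustrative assumptions.

    /* Illustrative sketch: on a miss, evict the cached entry with the lowest
     * priority (derived from access frequency) and refill the slot. */
    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_ENTRIES 4
    #define MAIN_ENTRIES  16

    struct cache_line {
        int      main_index;     /* which piece of main data this line holds */
        uint32_t data;
        uint32_t priority;       /* derived from access frequency            */
    };

    static uint32_t main_memory[MAIN_ENTRIES];
    static uint32_t access_freq[MAIN_ENTRIES];           /* the priority table */
    static struct cache_line cache[CACHE_ENTRIES] = {
        { -1 }, { -1 }, { -1 }, { -1 }                    /* start empty        */
    };

    static uint32_t request(int idx)
    {
        access_freq[idx]++;                               /* update frequency   */

        /* Hit: return cached data and refresh its priority. */
        for (int i = 0; i < CACHE_ENTRIES; i++)
            if (cache[i].main_index == idx) {
                cache[i].priority = access_freq[idx];
                return cache[i].data;
            }

        /* Miss: evict the lowest-priority line and load the new data. */
        int victim = 0;
        for (int i = 1; i < CACHE_ENTRIES; i++)
            if (cache[i].priority < cache[victim].priority)
                victim = i;
        cache[victim] = (struct cache_line){ idx, main_memory[idx], access_freq[idx] };
        return cache[victim].data;
    }

    int main(void)
    {
        for (int i = 0; i < MAIN_ENTRIES; i++)
            main_memory[i] = 100 + i;
        request(1); request(1); request(2);               /* warm the cache     */
        printf("read %u\n", request(9));                  /* miss evicts a low-priority line */
        return 0;
    }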

MEMORY SYSTEM HAVING MULTIPLE CACHE PAGES AND OPERATING METHOD THEREOF
20170329709 · 2017-11-16 ·

A semiconductor memory system and an operating method thereof include a controller; and a memory device including a memory page manager, Nand pages, and multiple cache pages, wherein the Nand pages include current Nand pages and next Nand pages; the current Nand pages correspond to a read command received from the controller; the memory page manager is configured to manage the correlation between the Nand pages and the multiple cache pages, predict the next Nand pages based at least in part on the read command, the current Nand pages, or a combination thereof, and send the Nand pages to the controller; and the multiple cache pages contain pages loaded from the Nand pages.
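
The sketch below assumes the memory page manager predicts the next Nand pages sequentially from the page named in the read command and loads them into the cache pages ahead of the next read; the sequential policy and all names are assumptions made for illustration.

    /* Illustrative sketch: on a read command, load the current page and the
     * predicted following pages into the cache pages, tracking which NAND
     * page each cache page correlates to. */
    #include <stdio.h>

    #define NAND_PAGES  64
    #define CACHE_PAGES 4
    #define PAGE_SIZE   2048

    static char nand[NAND_PAGES][PAGE_SIZE];
    static char cache_page[CACHE_PAGES][PAGE_SIZE];
    static int  cached_nand_page[CACHE_PAGES] = { -1, -1, -1, -1 }; /* correlation */

    /* Handle a read command and prefetch the predicted next pages. */
    static void handle_read(int current_page)
    {
        for (int i = 0; i < CACHE_PAGES && current_page + i < NAND_PAGES; i++) {
            int predicted = current_page + i;            /* sequential prediction */
            for (int b = 0; b < PAGE_SIZE; b++)
                cache_page[i][b] = nand[predicted][b];   /* load into cache page  */
            cached_nand_page[i] = predicted;
        }
    }

    int main(void)
    {
        handle_read(10);
        printf("cache page 1 holds NAND page %d\n", cached_nand_page[1]); /* 11 */
        return 0;
    }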

PROTECTION OF DATA IN MEMORY OF AN INTEGRATED CIRCUIT USING A SECRET TOKEN
20220358054 · 2022-11-10 ·

Methods, systems, apparatuses, and computer program products are provided for protecting data in a memory of an integrated circuit (IC). A process token is obtained in a special purpose IC from a host that is external to and communicatively connected to the special purpose IC. The process token is stored in a first memory portion of the special purpose IC. In response to receiving a processing request from the host, the processing request is processed, and data generated by processing the processing request is written in a second memory portion of the special purpose IC. When a read request is received to read the data in the second memory portion, a determination is made whether the read request includes a read token that matches the previously stored process token. If the read token matches the process token, the data in the second memory portion may be returned to the host.
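
A minimal sketch of the token gate, assuming a single stored process token and a simple equality check on reads; the token width, memory layout, and processing step are placeholders, not details from the disclosure.

    /* Illustrative sketch: the process token lands in one memory portion,
     * results in another, and reads of the result portion are only honored
     * when the read token matches the stored process token. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define RESULT_SIZE 256

    static uint64_t stored_process_token;       /* first memory portion  */
    static uint8_t  result_region[RESULT_SIZE]; /* second memory portion */

    /* Host hands the device a process token before submitting work. */
    static void store_process_token(uint64_t token)
    {
        stored_process_token = token;
    }

    /* Processing a host request writes its output into the second portion. */
    static void process_request(const uint8_t *in, size_t len)
    {
        for (size_t i = 0; i < len && i < RESULT_SIZE; i++)
            result_region[i] = in[i] ^ 0x5A;    /* stand-in for real processing */
    }

    /* Reads are only served when the read token matches the stored token. */
    static int read_result(uint64_t read_token, uint8_t *out, size_t len)
    {
        if (read_token != stored_process_token)
            return -1;                          /* mismatch: data stays protected */
        memcpy(out, result_region, len < RESULT_SIZE ? len : RESULT_SIZE);
        return 0;
    }

    int main(void)
    {
        uint8_t in[4] = {1, 2, 3, 4}, out[4];
        store_process_token(0xDEADBEEFCAFEULL);
        process_request(in, sizeof in);
        printf("wrong token: %d\n", read_result(0x1234, out, sizeof out));            /* -1 */
        printf("right token: %d\n", read_result(0xDEADBEEFCAFEULL, out, sizeof out)); /*  0 */
        return 0;
    }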

Granting exclusive cache access using locality cache coherency state

A cache coherency management facility reduces latency in granting exclusive access to a cache in certain situations. A node requests exclusive access to a cache line of the cache. The node is in one region of nodes of a plurality of regions of nodes. The one region of nodes includes the node requesting exclusive access and another node of the computing environment, in which the two nodes are local to one another as defined by a predetermined criterion. The node requesting exclusive access checks a locality cache coherency state of the other node, the locality cache coherency state being specific to the other node and indicating whether the other node has access to the cache line. Based on the checking indicating that the other node has access to the cache line, a determination is made that the node requesting exclusive access is to be granted exclusive access to the cache line. The determination is independent of transmission of information relating to the cache line from nodes of the one or more other regions of nodes.
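
As a rough sketch of the locality check, the code below keeps a per-neighbor access bit on each cache line and grants exclusive access locally when the neighbor in the same region already holds access; the two-node region and all names are illustrative assumptions.

    /* Illustrative sketch: if the requester's local neighbor already has access
     * to the line, exclusive access can be granted without waiting on
     * information from nodes in other regions. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NODES_PER_REGION 2

    struct cache_line_state {
        /* Locality coherency state: per local node, does it have access? */
        bool local_has_access[NODES_PER_REGION];
    };

    /* Node `requester` (index within its region) asks for exclusive access.
     * Returns true when the grant can be decided from local state alone. */
    static bool grant_exclusive_locally(struct cache_line_state *line, int requester)
    {
        int neighbor = (requester + 1) % NODES_PER_REGION;   /* the other local node */

        if (line->local_has_access[neighbor]) {
            /* The neighbor holds access, so ownership is known to sit within
             * this region: grant exclusivity without a remote lookup. */
            line->local_has_access[neighbor]  = false;       /* invalidate neighbor  */
            line->local_has_access[requester] = true;
            return true;
        }
        return false;   /* fall back to the normal, region-crossing protocol */
    }

    int main(void)
    {
        struct cache_line_state line = { .local_has_access = { false, true } };
        printf("granted locally: %s\n",
               grant_exclusive_locally(&line, 0) ? "yes" : "no");
        return 0;
    }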