G06F12/084

MEMORY INCLUSIVITY MANAGEMENT IN COMPUTING SYSTEMS

Techniques of memory inclusivity management are disclosed herein. One example technique includes receiving a request from a core of a CPU to write a block of data corresponding to a first cacheline to a swap buffer at a memory. In response to the request, the method can include retrieving metadata corresponding to the first cacheline, the metadata including a bit encoding a status value that indicates whether the memory block at the memory currently contains data of the first cacheline or data corresponding to a second cacheline, the first and second cachelines alternately sharing the swap buffer at the memory. When the decoded status value indicates that the memory block currently contains the data corresponding to the first cacheline, an instruction is transmitted to a memory controller to write the block of data directly to the memory block.
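
The status-bit check described above can be sketched as a small simulation. All names (`handle_write`, the `occupant` field, the `spare` slot) are illustrative assumptions, not terminology from the patent.

```python
def handle_write(cacheline_id, data, swap_buffer, metadata):
    """Write `data` for `cacheline_id` into the memory block the two
    cachelines alternately share, consulting the metadata status bit."""
    # The status value records which of the two cachelines currently
    # occupies the shared memory block.
    if metadata["occupant"] == cacheline_id:
        # Block already holds this cacheline's data: write directly,
        # no swap needed.
        swap_buffer["block"] = data
    else:
        # Block holds the sibling cacheline: move its data to the spare
        # slot, write the new data, and flip the status value.
        swap_buffer["spare"] = swap_buffer["block"]
        swap_buffer["block"] = data
        metadata["occupant"] = cacheline_id
    return swap_buffer, metadata
```

The direct-write path is the fast case the abstract highlights: when the status bit already names the requesting cacheline, no swap traffic is generated.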

VIRTUALIZATION ENGINE FOR VIRTUALIZATION OPERATIONS IN A VIRTUALIZATION SYSTEM
20220413888 · 2022-12-29 ·

Methods, systems, and computer storage media for providing virtualization operations, including an activate operation, a suspend operation, and a resume operation, for virtualization in a virtualization system. In operation, a unique identifier and file metadata associated with a first file are stored in a cache engine. The cache engine manages the first file of an application running on a virtual machine so as to circumvent writing file data of the first file to an OS disk during a suspend operation of the virtual machine and to circumvent reading file data of the first file from the OS disk during a resume operation of the virtual machine. Based on a resume operation associated with the virtual machine and the file metadata, the file data of the first file stored in the cache engine is accessed. The file data is communicated to the virtual machine, which is associated with the suspend and resume operations.
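
A minimal sketch of the disk-bypass idea, assuming a keyed in-memory store; the class and method names are invented for illustration and do not come from the patent.

```python
class CacheEngine:
    """Toy model of the cache engine: file data keyed by a unique
    identifier survives across suspend/resume without touching the
    OS disk."""

    def __init__(self):
        self._store = {}  # unique identifier -> (file metadata, file data)

    def suspend(self, uid, metadata, data):
        # On suspend, keep the file data in the cache engine instead of
        # writing it to the OS disk.
        self._store[uid] = (metadata, data)

    def resume(self, uid):
        # On resume, serve the file data straight from the cache engine
        # instead of reading it back from the OS disk.
        metadata, data = self._store[uid]
        return metadata, data
```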

DYNAMICALLY COALESCING ATOMIC MEMORY OPERATIONS FOR MEMORY-LOCAL COMPUTING
20220414013 · 2022-12-29 ·

Dynamically coalescing atomic memory operations for memory-local computing is disclosed. In an embodiment, it is determined whether a first atomic memory access and a second atomic memory access are candidates for coalescing. In response to a triggering event, the atomic memory accesses that are candidates for coalescing are coalesced in a cache prior to requesting memory-local processing by a memory-local compute unit. Atomic memory accesses in the same cache line may be coalesced directly, while atomic memory accesses in different cache lines may be coalesced using a multicast memory-local processing command.
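
One way to picture both cases is a coalescing pass over buffered atomic-add requests: same-line accesses merge into one operation, and the set of distinct lines becomes the target of a single multicast command. The function and the tuple layout are assumptions for illustration, not the patented format.

```python
from collections import defaultdict

def coalesce_atomics(pending):
    """pending: list of (cache_line, offset, delta) atomic-add requests.

    Returns a per-line map of merged operations plus the list of cache
    lines a single multicast memory-local command would target."""
    by_line = defaultdict(lambda: defaultdict(int))
    for line, offset, delta in pending:
        # Atomic adds to the same line and offset commute, so their
        # deltas merge into one memory-local operation.
        by_line[line][offset] += delta
    per_line = {line: dict(offsets) for line, offsets in by_line.items()}
    # Accesses in different cache lines are covered by one multicast
    # command addressed to every affected line.
    multicast_targets = sorted(per_line)
    return per_line, multicast_targets
```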

METHODS AND APPARATUSES FOR DYNAMICALLY CHANGING DATA PRIORITY IN A CACHE

Embodiments are generally directed to methods and apparatuses for dynamically changing data priority in a cache. An embodiment of an apparatus comprises a priority controller to: receive a memory access request requesting data; and set a priority flag for the memory access request, based on an accumulated access amount of the data stored in the memory block to be accessed by the request, to dynamically change the priority level of the requested data.
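
The accumulated-access logic can be sketched as follows; the class name and the byte-count threshold are assumptions chosen for the example.

```python
class PriorityController:
    """Toy priority controller: tracks accumulated bytes accessed per
    memory block and flags requests to hot blocks as high priority."""

    def __init__(self, threshold):
        self.threshold = threshold   # assumed tunable cutoff, in bytes
        self.access_bytes = {}       # memory block -> accumulated bytes

    def request(self, block, nbytes):
        # Accumulate the access amount for the targeted memory block.
        total = self.access_bytes.get(block, 0) + nbytes
        self.access_bytes[block] = total
        # Once the block's accumulated accesses cross the threshold, set
        # the priority flag so the cache can raise the data's priority
        # (e.g., retain its lines longer under eviction pressure).
        return {"block": block, "priority": total >= self.threshold}
```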

Hybrid Memory Module
20220406354 · 2022-12-22 ·

A memory module includes a cache of relatively fast and durable dynamic random-access memory (DRAM) in service of a larger amount of relatively slow and wear-sensitive nonvolatile memory. A local controller manages communication between the DRAM cache and the nonvolatile memory to accommodate disparate access granularities, reduce the requisite number of memory transactions, and minimize the flow of data external to the nonvolatile memory components.
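
A toy model of the granularity-bridging role of the local controller: small writes are absorbed at byte granularity in the DRAM cache, and the nonvolatile memory sees only whole-page transactions. The page size and all names are assumptions for illustration.

```python
class HybridModule:
    """Toy hybrid memory module: a DRAM cache absorbs fine-grained
    writes; the wear-sensitive NVM sees only coarse page flushes."""

    PAGE = 4096  # assumed NVM access granularity, in bytes

    def __init__(self):
        self.dram = {}       # page number -> bytearray(PAGE) write buffer
        self.nvm_writes = 0  # count of coarse NVM transactions issued

    def write(self, addr, data):
        # Fine-grained writes land in the DRAM cache at byte granularity.
        page, off = divmod(addr, self.PAGE)
        buf = self.dram.setdefault(page, bytearray(self.PAGE))
        buf[off:off + len(data)] = data

    def flush(self, nvm):
        # Each dirty page becomes one coarse NVM write, so many small
        # writes collapse into few transactions, limiting wear.
        for page, buf in self.dram.items():
            nvm[page] = bytes(buf)
            self.nvm_writes += 1
        self.dram.clear()
```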

VARIABLE PROTECTION WINDOW EXTENSION FOR A TARGET ADDRESS OF A STORE-CONDITIONAL REQUEST

A processing unit includes a processor core and an associated cache memory. The cache memory establishes a reservation of a hardware thread of the processor core for a store target address and services a store-conditional request of the processor core by conditionally updating the shared memory with store data based on whether the hardware thread has a reservation for the store target address. The cache memory receives a hint associated with the store-conditional request indicating an intent of the store-conditional request. The cache memory protects the store target address against access by any conflicting memory access request during a protection window extension following servicing of the store-conditional request. The cache memory establishes a first duration for the protection window extension based on the hint having a first value and establishes a different second duration for the protection window extension based on the hint having a different second value.
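
The hint-to-duration mapping reduces to a simple selection; the cycle counts and hint encoding below are invented for the example, as the abstract does not specify them.

```python
def protection_window(hint, short_cycles=16, long_cycles=64):
    """Pick the duration of the protection window extension that follows
    a serviced store-conditional, based on the request's intent hint.

    Hint value 1 (assumed encoding) signals that more stores to the same
    target address are imminent, so the cache shields the address from
    conflicting accesses for longer; any other value gets the short
    window."""
    return long_cycles if hint == 1 else short_cycles
```

During the returned number of cycles, the cache would defer or retry conflicting memory access requests to the store target address.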