Patent classifications
G06F2212/6012
Utilization of a distributed index to provide object memory fabric coherency
Embodiments of the invention provide systems and methods to implement an object memory fabric. Object memory modules may include object storage storing memory objects, memory object metadata, and a memory module object directory. Each memory object and/or memory object portion may be created natively within the object memory module and may be managed at a memory layer. The memory module object directory may index all memory objects and/or portions within the object memory module. A hierarchy of object routers may communicatively couple the object memory modules. Each object router may maintain an object cache state for the memory objects and/or portions contained in the object memory modules below it in the hierarchy. Based on the object cache state, the hierarchy may behave in aggregate as a single object directory communicatively coupled to all object memory modules, and may process requests accordingly.
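The distributed-directory behavior described in this abstract can be sketched roughly as follows. All class and method names here are hypothetical illustrations, not the patent's actual implementation: each router records only which child (module or sub-router) holds an object, so a lookup walks down the hierarchy as if consulting a single global directory.

```python
class ObjectMemoryModule:
    """Leaf node: holds memory objects and a local object directory."""
    def __init__(self, name):
        self.name = name
        self.directory = {}          # object_id -> object data

    def put(self, object_id, data):
        self.directory[object_id] = data

    def get(self, object_id):
        return self.directory.get(object_id)


class ObjectRouter:
    """Maintains an object cache state for the modules below it:
    it knows which child holds each object, not the object itself."""
    def __init__(self, children):
        self.children = children     # modules or sub-routers
        self.cache_state = {}        # object_id -> child holding it

    def register(self, object_id, child):
        self.cache_state[object_id] = child

    def route(self, object_id):
        """Resolve a request by descending through the hierarchy."""
        child = self.cache_state.get(object_id)
        if child is None:
            return None              # not present below this router
        if isinstance(child, ObjectRouter):
            return child.route(object_id)
        return child.get(object_id)
```

In aggregate, a request issued at the root resolves exactly as if a single directory indexed every module, which is the coherency property the abstract describes.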
MEMORY HAVING A STATIC CACHE AND A DYNAMIC CACHE
The present disclosure includes memory having a static cache and a dynamic cache. A number of embodiments include a memory, wherein the memory includes a first portion configured to operate as a static single level cell (SLC) cache and a second portion configured to operate as a dynamic SLC cache when the entire first portion of the memory has data stored therein.
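A minimal sketch of the allocation policy this abstract describes, with hypothetical names and block-counting in place of real flash management: writes land in the static SLC region until it is entirely full, and only then spill into the dynamic SLC region.

```python
class SLCCache:
    """Flash cache with a fixed static SLC region and a dynamic SLC
    region used only once the static region is completely full."""
    def __init__(self, static_blocks, dynamic_blocks):
        self.static_capacity = static_blocks
        self.dynamic_capacity = dynamic_blocks
        self.static_used = 0
        self.dynamic_used = 0

    def write_block(self):
        """Return which region absorbed the write, or None if both
        regions are full (data would bypass the SLC cache)."""
        if self.static_used < self.static_capacity:
            self.static_used += 1
            return "static"
        if self.dynamic_used < self.dynamic_capacity:
            self.dynamic_used += 1
            return "dynamic"
        return None
```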
MEMORY UTILIZED AS BOTH SYSTEM MEMORY AND NEAR MEMORY
An embodiment of a memory controller device includes technology to control access to a multi-level memory including at least a first level memory and a second level memory, provide direct access to the first level memory based on a system memory address, cache accesses to the second level memory in a second portion of the first level memory, and address a memory space with a total memory capacity which includes a first capacity of the first portion of the first level memory plus a second capacity of the second level memory. Other embodiments are disclosed and claimed.
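The address-space arrangement in this abstract can be sketched as follows, under simplifying assumptions (a direct-mapped near-memory cache, unit-sized "lines", hypothetical names throughout): system addresses below the direct-mapped boundary hit the first portion of near memory directly, while higher addresses map to far memory and are cached in the second near-memory portion, so the advertised capacity is the sum of the two.

```python
class TwoLevelMemoryController:
    """System address space = [0, near_direct) served directly from
    the first portion of near memory, plus [near_direct, near_direct
    + far) served from far memory and cached in the second portion."""
    def __init__(self, near_direct, near_cache, far):
        self.near_direct = near_direct
        self.near_cache = near_cache   # cache entries, direct-mapped
        self.far = far
        self.tags = {}                 # cache index -> cached far address

    @property
    def total_capacity(self):
        # First portion of near memory plus all of far memory; the
        # caching portion contributes no addressable capacity.
        return self.near_direct + self.far

    def access(self, addr):
        if addr < self.near_direct:
            return ("near", addr)      # direct access, no caching
        far_addr = addr - self.near_direct
        index = far_addr % self.near_cache
        if self.tags.get(index) == far_addr:
            return ("near-cache-hit", index)
        self.tags[index] = far_addr    # fill on miss
        return ("far-miss", far_addr)
```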
Fast cache warm-up
An embodiment of a semiconductor package apparatus may include technology to determine whether a memory request for a second level memory results in a miss with respect to a first level memory; if the memory request results in the miss, determine whether the range of the second level memory corresponding to the memory request is unwritten; and, if that range is determined to be unwritten, blank the corresponding range of the first level memory. Other embodiments are disclosed and claimed.
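The warm-up shortcut this abstract describes can be sketched as below (hypothetical names; lists stand in for the two memory levels): when the missing far-memory range was never written, the near-memory range is simply zero-filled ("blanked") instead of being fetched, avoiding a pointless read of far memory.

```python
def handle_miss(near, far_written, request_range):
    """On a first-level miss, blank the first-level range if the
    corresponding second-level range is unwritten; otherwise fall
    back to a normal fill from the second level."""
    start, end = request_range
    if not any(far_written[start:end]):
        for i in range(start, end):
            near[i] = 0              # blank; no far-memory read needed
        return "blanked"
    return "fetch-from-far"          # range holds real data
```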
Cache partitioning in a multicore processor
Techniques described herein generally include methods and systems related to cache partitioning in a chip multiprocessor. Partitioning the cache for a single thread or application between multiple data sources improves the energy or latency efficiency of a chip multiprocessor by exploiting variations in the energy and latency costs of the multiple data sources. Partition sizes for each data source may be selected using an optimization algorithm that minimizes or otherwise reduces the latency or energy consumption associated with cache misses.
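One simple form such a partition-size optimization could take is sketched below, assuming two data sources, per-source miss-rate curves indexed by allocated cache ways, and a per-miss cost for each source (all of these inputs and names are illustrative assumptions, not the patent's algorithm):

```python
def best_partition(total_ways, miss_curves, miss_costs):
    """Exhaustively choose per-source way counts that minimize the
    total miss cost (miss rate x per-miss cost), subject to the way
    budget. Two data sources are assumed for clarity."""
    best = None
    for w0 in range(total_ways + 1):
        w1 = total_ways - w0
        cost = (miss_curves[0][w0] * miss_costs[0] +
                miss_curves[1][w1] * miss_costs[1])
        if best is None or cost < best[0]:
            best = (cost, (w0, w1))
    return best[1]
```

With more sources or larger caches, the same objective would typically be optimized with dynamic programming rather than exhaustive search.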
WRITE-BACK CACHE FOR STORAGE CONTROLLER USING PERSISTENT SYSTEM MEMORY
Systems and methods provide a storage controller with write-back caching capabilities that may be used in scenarios where the storage controller is required to provide write-through caching and is thus unable to utilize internal cache memory for write-back caching. The storage controller utilizes an allocation of persistent memory made available by the host IHS (Information Handling System) to which the storage controller is coupled. In scenarios where the storage controller is required to provide write-through caching, the storage controller may be configured to route received write data to the allocated host memory. In this manner, the data integrity provided by write-through operations is maintained, while the host IHS also gains the speed of write-back operations. When ready to store the write data, the storage controller may request the flushing of write data from the allocated host memory.
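A rough sketch of this write path, with hypothetical names and a dictionary standing in for the backing store: because the host-allocated buffer is persistent, a write can be acknowledged as soon as it lands there (write-back speed) without sacrificing the durability that write-through mode requires, and the buffer is drained to the backing store later.

```python
class StorageController:
    """Uses a host-provided persistent memory buffer for write-back
    caching while internal cache must operate write-through."""
    def __init__(self, host_pmem_capacity):
        self.host_pmem = []            # host-allocated persistent buffer
        self.capacity = host_pmem_capacity
        self.backing_store = {}        # LBA -> data on stable storage

    def write(self, lba, data):
        if len(self.host_pmem) >= self.capacity:
            self.flush()               # make room before buffering
        # Data is persistent once in host pmem, so the write is
        # acknowledged immediately without a write-through penalty.
        self.host_pmem.append((lba, data))
        return "ack"

    def flush(self):
        """Drain buffered write data to the backing store."""
        for lba, data in self.host_pmem:
            self.backing_store[lba] = data
        self.host_pmem.clear()
```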
ADAPTIVE CACHE SIZING PER WORKLOAD
- Balaji Vembu
- Altug Koker
- Josh B. Mastronarde
- Nikos Kaburlasos
- Abhishek R. Appu
- Sanjeev S. Jahagirdar
- Eric J. Asperheim
- Subramaniam Maiyuran
- Kiran C. Veernapu
- Pattabhiraman K
- Kamal Sinha
- Bhushan M. Borole
- Wenyin Fu
- Joydeep Ray
- Prasoonkumar Surti
- Eric J. Hoekstra
- Travis T. Schluessler
- Linda L. Hurd
Briefly, in accordance with one or more embodiments, an apparatus comprises a processor to monitor cache utilization of an application during execution of the application for a workload, and a memory to store cache utilization statistics responsive to the monitored cache utilization. The processor is to determine an optimal cache configuration for the application based at least in part on the cache utilization statistics for the workload, such that the smallest sufficient amount of cache is turned on for subsequent executions of the workload by the application.
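The selection step this abstract describes might be sketched as follows, under the assumption (purely illustrative) that the stored statistics take the form of an observed hit rate per candidate cache size: the smallest size that still meets a hit-rate target is enabled, and the remainder of the cache can be power-gated.

```python
def smallest_sufficient_cache(utilization_stats, target_hit_rate):
    """Given observed hit rate per candidate cache size (in KB, say),
    return the smallest size whose hit rate meets the target, so the
    rest of the cache can stay powered off for this workload."""
    for size in sorted(utilization_stats):
        if utilization_stats[size] >= target_hit_rate:
            return size
    return max(utilization_stats)    # fall back to the full cache
```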