Patent classifications
G06F2212/6012
CACHE BLOCK BUDGETING TECHNIQUES
Methods, systems, and devices for cache block budgeting techniques are described. In some memory systems, a controller may configure a memory device with a cache. The cache may include a first subset of blocks configured to statically operate in a first mode and a second subset of blocks configured to dynamically switch between operating in the first mode and a second mode. A block operating in the second mode may be configured to store relatively more bits per memory cell than a block operating in the first mode. The controller may track and store, for each block of the second subset of blocks, a respective ratio of cycles performed in the first mode to cycles performed in the second mode. The controller may select a block from the second subset of blocks to switch between modes responsive to a trigger and based on the respective ratio for the block.
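As a rough illustration of the mechanism this abstract describes, the following Python sketch tracks a per-block ratio of first-mode (fewer bits per cell, e.g. SLC) cycles to second-mode (denser, e.g. QLC) cycles and uses it to pick a block to switch on a trigger. All names, the mode labels, and the choice to switch the highest-ratio block are assumptions for illustration, not terminology or policy from the patent.

```python
from dataclasses import dataclass

@dataclass
class HybridBlock:
    block_id: int
    mode: str = "SLC"     # first mode: fewer bits per memory cell
    slc_cycles: int = 0   # program/erase cycles performed in the first mode
    qlc_cycles: int = 0   # cycles performed in the denser second mode

    def cycle_ratio(self) -> float:
        """Respective ratio of first-mode cycles to second-mode cycles."""
        return self.slc_cycles / max(self.qlc_cycles, 1)

class CacheBudgetController:
    def __init__(self, num_dynamic_blocks: int):
        # The "second subset": blocks allowed to switch between modes.
        self.dynamic_blocks = [HybridBlock(i) for i in range(num_dynamic_blocks)]

    def record_cycle(self, block: HybridBlock) -> None:
        if block.mode == "SLC":
            block.slc_cycles += 1
        else:
            block.qlc_cycles += 1

    def select_block_to_switch(self) -> HybridBlock | None:
        """On a trigger (e.g. cache pressure), switch the first-mode block
        with the highest first-to-second-mode cycle ratio (an assumed
        selection rule) so wear is spread across both modes."""
        candidates = [b for b in self.dynamic_blocks if b.mode == "SLC"]
        if not candidates:
            return None
        victim = max(candidates, key=HybridBlock.cycle_ratio)
        victim.mode = "QLC"
        return victim
```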
Memory utilized as both system memory and near memory
An embodiment of a memory controller device includes technology to control access to a multi-level memory including at least a first level memory and a second level memory, provide direct access to a first portion of the first level memory based on a system memory address, cache accesses to the second level memory in a second portion of the first level memory, and address a memory space with a total memory capacity which includes a first capacity of the first portion of the first level memory plus a second capacity of the second level memory. Other embodiments are disclosed and claimed.
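A minimal address-routing sketch of the two-level scheme in this abstract: addresses below the first portion's capacity go directly to near memory, and the remainder of the exposed space maps to far memory, cached in the near memory's second portion. The capacities, line size, and direct-mapped policy are all assumptions.

```python
NEAR_DIRECT = 4 << 30   # first portion of first level (near) memory
NEAR_CACHE  = 4 << 30   # second portion, used as a cache for far memory
FAR         = 64 << 30  # second level (far) memory
TOTAL       = NEAR_DIRECT + FAR   # total capacity exposed to software

LINE = 64

def route(addr: int) -> tuple[str, int]:
    assert 0 <= addr < TOTAL
    if addr < NEAR_DIRECT:
        # Direct access to the first portion of the first level memory.
        return ("near-direct", addr)
    # Accesses to the second level are cached in the second portion
    # (a direct-mapped placement is assumed here for simplicity).
    far_addr = addr - NEAR_DIRECT
    cache_set = (far_addr // LINE) % (NEAR_CACHE // LINE)
    return ("near-cache", NEAR_DIRECT + cache_set * LINE)
```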
Instance Deployment Method, Instance Management Node, Computing Node, and Computing Device
In a method, an instance management node receives a request for creating a service instance; the instance management node obtains a cache configuration corresponding to the service instance; and the instance management node creates the service instance on a computing node, and creates a cache instance on the computing node based on the cache configuration. In this way, the service instance may provide a service by using the matched cache instance.
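A hedged sketch of the deployment flow described above: the management node looks up the cache configuration matching the requested service, then creates the service instance and its cache instance together on a computing node. The node API and the configuration table are invented for illustration.

```python
# Hypothetical per-service cache configurations held by the
# instance management node.
CACHE_CONFIGS = {
    "image-service": {"engine": "redis", "size_mb": 512},
    "search-service": {"engine": "memcached", "size_mb": 1024},
}

class ComputeNode:
    def __init__(self, name: str):
        self.name, self.instances = name, []

    def create(self, kind: str, spec: dict):
        self.instances.append((kind, spec))
        return (kind, spec)

def deploy(node: ComputeNode, service: str):
    """Receive a creation request, obtain the matching cache
    configuration, and create both instances on the node."""
    cache_cfg = CACHE_CONFIGS.get(service)
    svc = node.create("service", {"name": service})
    cache = node.create("cache", cache_cfg) if cache_cfg else None
    return svc, cache
```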
Distributed index for fault tolerant object memory fabric
Embodiments of the invention provide systems and methods for managing processing, memory, storage, network, and cloud computing to significantly improve the efficiency and performance of processing nodes. Embodiments can implement an object memory fabric including object memory modules storing memory objects that are created natively within the object memory module and managed at a memory layer. The memory module object directory may index all memory objects within the object memory module. Each object router in a hierarchy communicatively coupling the object memory modules may include a router object directory that indexes all memory objects, and portions thereof, contained in object memory modules below the object router in the hierarchy. The hierarchy of object routers may behave in aggregate as a single object directory communicatively coupled to all object memory modules and may process requests based on the router object directories.
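The following sketch illustrates the hierarchical directory behavior: leaf modules index their own objects, each router indexes everything below it, and a request is routed down the hierarchy via those directories. Class and method names are assumptions, not the patent's terminology.

```python
class ObjectMemoryModule:
    def __init__(self):
        self.objects = {}            # object id -> object data

    def object_ids(self):
        return set(self.objects)

class ObjectRouter:
    def __init__(self, children):
        self.children = children     # modules or lower-level routers

    def directory(self):
        """Router object directory: indexes every object held in any
        module below this router, so the hierarchy behaves in
        aggregate as a single object directory."""
        index = {}
        for child in self.children:
            ids = (child.object_ids()
                   if isinstance(child, ObjectMemoryModule)
                   else child.directory().keys())
            for oid in ids:
                index[oid] = child
        return index

    def route(self, object_id):
        """Forward a request toward the module holding object_id."""
        child = self.directory().get(object_id)
        if isinstance(child, ObjectRouter):
            return child.route(object_id)
        return child                 # the owning module, or None
```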
Distributed caching systems and methods
Example distributed caching systems and methods are described. In one implementation, a system has multiple host systems, each of which includes a cache resource that is accessed by one or more consumers. A management server is coupled to the multiple host systems and presents available cache resources and resources associated with available host systems to a user. The management server receives a user selection of at least one available cache resource and at least one host system. The selected host system is then configured to share the selected cache resource.
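A minimal sketch of that management-server flow: enumerate available cache resources and hosts, accept a user selection, and configure the selected host to share its cache. The data structures and the notion of a "shared" flag are illustrative assumptions.

```python
class HostSystem:
    def __init__(self, name: str, cache_gb: int):
        self.name, self.cache_gb, self.shared = name, cache_gb, False

class ManagementServer:
    def __init__(self, hosts):
        self.hosts = hosts

    def available(self):
        """Present available cache resources and host systems."""
        return [(h.name, h.cache_gb) for h in self.hosts if not h.shared]

    def configure_share(self, host_name: str) -> HostSystem:
        """Receive the user's selection and configure the selected
        host system to share its cache resource."""
        host = next(h for h in self.hosts if h.name == host_name)
        host.shared = True
        return host
```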
Method and apparatus for adjusting cache prefetch policies based on predicted cache pollution from dynamically evolving workloads
A cache management system includes a sequentiality determination process configured to determine sequentiality profiles of a workload of IO traces as the workload dynamically changes over time. A learning process is trained to learn a correlation between workload sequentiality and cache pollution, and the trained learning process is used to predict cache pollution before the cache starts to experience symptoms of excessive pollution. The predicted pollution value is used by a cache policy adjustment process to change the prefetch policy applied to the cache, to proactively control the manner in which prefetching is used to write data to the cache. Selection of the cache policy is implemented on a per-LUN basis, so that cache performance for each LUN is individually managed by the cache management system.
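A sketch of that per-LUN control loop: compute a sequentiality profile from recent IO traces, feed it to a trained predictor of cache pollution, and pick a prefetch policy before pollution symptoms appear. The sequentiality metric, the stand-in model, and the policy thresholds are all assumptions.

```python
def sequentiality(trace):
    """Fraction of IOs whose address immediately follows the previous
    one: a crude sequentiality profile over a trace window."""
    hits = sum(1 for a, b in zip(trace, trace[1:]) if b == a + 1)
    return hits / max(len(trace) - 1, 1)

def choose_prefetch_policies(lun_traces, model):
    """Per-LUN policy selection driven by predicted pollution; the
    model is any callable trained to map sequentiality to pollution."""
    policies = {}
    for lun, trace in lun_traces.items():
        pollution = model(sequentiality(trace))
        if pollution > 0.5:
            policies[lun] = "no-prefetch"      # proactively back off
        elif pollution > 0.2:
            policies[lun] = "conservative"
        else:
            policies[lun] = "aggressive"
    return policies

# Toy example: a stand-in "model" where low sequentiality implies
# high pollution. lun0 is sequential, lun1 is random.
print(choose_prefetch_policies(
    {"lun0": [1, 2, 3, 4], "lun1": [9, 2, 7, 5]},
    model=lambda s: 1.0 - s))
```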
APPLICATION PROGRAMMING INTERFACE FOR FINE GRAINED LOW LATENCY DECOMPRESSION WITHIN PROCESSOR CORE
Methods and apparatus relating to techniques for increasing per core memory bandwidth by using forget store operations are described. In an embodiment, a cache stores a buffer. Execution circuitry executes an instruction. The instruction causes one or more cachelines in the cache to be marked based on a start address for the buffer and a size of the buffer. A marked cacheline in the cache is to be prevented from being written back to memory. Other embodiments are also disclosed and claimed.
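A sketch of the cacheline-marking idea in that abstract: every line overlapping a buffer of a given start address and size is flagged, and flagged lines are dropped on eviction rather than written back, saving memory bandwidth. The cache model below is an assumption, not the patent's microarchitecture.

```python
LINE = 64

class Cache:
    def __init__(self):
        self.lines = {}   # line address -> (data, dirty, marked)

    def mark_forget(self, start: int, size: int) -> None:
        """Mark every cacheline overlapping [start, start + size),
        as the instruction in the abstract would."""
        first = start // LINE
        last = (start + size - 1) // LINE
        for n in range(first, last + 1):
            addr = n * LINE
            data, dirty, _ = self.lines.setdefault(addr, (b"", False, False))
            self.lines[addr] = (data, dirty, True)

    def evict(self, line_addr: int, memory: dict) -> None:
        data, dirty, marked = self.lines.pop(line_addr)
        if dirty and not marked:
            memory[line_addr] = data   # normal writeback
        # Marked lines are prevented from being written back to memory.
```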
Memory system with program mode switching based on mixed and sequential workloads
A memory system includes: a memory device; a host interface suitable for receiving write commands and queueing the received write commands in an interface queue; a workload manager suitable for detecting, in a cache program mode, a mixed workload when a read count is greater than a first threshold value, the read count representing a number of read commands queued in the interface queue and the mixed workload representing receipt of a mix of read and write commands; a mode manager suitable for switching from the cache program mode to a normal program mode when the mixed workload is detected; and a processor suitable for processing write commands queued in a command queue in the cache program mode and processing write commands queued in the interface queue in the normal program mode when the mixed workload is detected.
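A toy sketch of that mode-switch rule: count the read commands queued in the interface queue while in the cache program mode, and switch to the normal program mode once the count exceeds the first threshold. The queue representation and the threshold value are illustrative assumptions.

```python
READ_THRESHOLD = 8   # the "first threshold value" from the abstract

class MemorySystem:
    def __init__(self):
        self.interface_queue = []    # commands tagged ("R", ...) / ("W", ...)
        self.mode = "cache-program"

    def mixed_workload_detected(self) -> bool:
        """Mixed workload: in cache program mode, the number of queued
        read commands exceeds the first threshold value."""
        read_count = sum(1 for cmd in self.interface_queue if cmd[0] == "R")
        return self.mode == "cache-program" and read_count > READ_THRESHOLD

    def step(self) -> None:
        if self.mixed_workload_detected():
            # Mixed read/write traffic: cache programming would stall
            # reads, so fall back to the normal program mode.
            self.mode = "normal-program"
```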