G06F2212/282

PROVIDING ROLLING UPDATES OF DISTRIBUTED SYSTEMS WITH A SHARED CACHE
20230205697 · 2023-06-29

Disclosed herein are system, apparatus, article of manufacture, method, and/or computer program product embodiments for providing rolling updates of distributed systems with a shared cache. An embodiment operates by receiving a data item key corresponding to a request from a user profile operating on a media player and receiving a version identifier corresponding to a first version of an application operating on the media player. It is determined that a shared cache includes a first value and a second value for the data item key. A key component is generated corresponding to the user profile. Both the generated key component and the data item key are provided to the shared cache, and the first value of the data item as stored in the shared cache is received. The first value of the first version of the data item is updated.
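
A minimal sketch of the keying scheme this abstract describes, in Python. The class and helper names (VersionedSharedCache, key_component_for) are invented for illustration, and deriving the key component by hashing the profile and version identifiers is an assumption, not the patented derivation:

    import hashlib

    class VersionedSharedCache:
        def __init__(self):
            # (key component, data item key) -> stored value
            self._store = {}

        def put(self, key_component, data_item_key, value):
            self._store[(key_component, data_item_key)] = value

        def get(self, key_component, data_item_key):
            return self._store.get((key_component, data_item_key))

    def key_component_for(profile_id, version_id):
        # Pin a user profile to the value written by its application
        # version, so entries for the same data item key from two
        # versions coexist in the shared cache without colliding.
        return hashlib.sha256(f"{profile_id}:{version_id}".encode()).hexdigest()[:8]

    cache = VersionedSharedCache()
    kc = key_component_for("profile-42", "v1")
    cache.put(kc, "watch-state", {"position": 120})
    print(cache.get(kc, "watch-state"))  # {'position': 120}

During a rolling update, requests tagged with the old version identifier keep resolving to the old value, while updated media players resolve to the new one.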

PRIORITY-BASED STORAGE AND ACCESS OF COMPRESSED MEMORY LINES IN MEMORY IN A PROCESSOR-BASED SYSTEM

In an aspect, high-priority lines are stored starting at an address aligned to a cache line size, for instance 64 bytes, and low-priority lines are stored in the memory space left by the compression of the high-priority lines. The space left by the high-priority lines, and hence the low-priority lines themselves, is managed through pointers that are also stored in memory. In this manner, the contents of low-priority lines can be moved to different memory locations as needed. The efficiency of higher-priority compressed memory accesses is improved by removing the indirection otherwise required to find and access compressed memory lines, which is especially advantageous for immutable compressed contents. The use of pointers for low-priority lines is advantageous due to the full flexibility of placement, especially for mutable compressed contents that may need to move within memory, for instance as they change in size over time.
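
A toy Python model of this layout (all structure names are invented; the real mechanism is hardware-managed). High-priority lines are addressed directly via alignment, while low-priority lines are reached through a pointer table and placed in the slack left by compression:

    CACHE_LINE = 64

    class CompressedMemory:
        def __init__(self, num_lines):
            self.mem = bytearray(num_lines * CACHE_LINE)
            self.slack = []     # (addr, size) gaps left by compression
            self.low_ptr = {}   # low-priority line id -> (addr, size)

        def store_high(self, line_idx, data):
            addr = line_idx * CACHE_LINE        # aligned: no indirection
            self.mem[addr:addr + len(data)] = data
            if len(data) < CACHE_LINE:
                self.slack.append((addr + len(data), CACHE_LINE - len(data)))

        def load_high(self, line_idx):
            addr = line_idx * CACHE_LINE
            return bytes(self.mem[addr:addr + CACHE_LINE])

        def store_low(self, line_id, data):
            for i, (addr, size) in enumerate(self.slack):
                if size >= len(data):           # first fit into a gap
                    self.mem[addr:addr + len(data)] = data
                    self.slack[i] = (addr + len(data), size - len(data))
                    self.low_ptr[line_id] = (addr, len(data))
                    return
            raise MemoryError("no slack left by high-priority compression")

        def load_low(self, line_id):
            addr, size = self.low_ptr[line_id]  # one pointer dereference
            return bytes(self.mem[addr:addr + size])

Because low-priority placement is fully described by the pointer table, a mutable line that grows can be copied into a larger gap and only its pointer updated.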

Register file segments for supporting code block execution by using virtual cores instantiated by partitionable engines
09842005 · 2017-12-12

A system for executing instructions using a plurality of register file segments for a processor. The system includes a global front end scheduler for receiving an incoming instruction sequence, wherein the global front end scheduler partitions the incoming instruction sequence into a plurality of code blocks of instructions and generates a plurality of inheritance vectors describing interdependencies between instructions of the code blocks. The system further includes a plurality of virtual cores of the processor coupled to receive code blocks allocated by the global front end scheduler, wherein each virtual core comprises a respective subset of resources of a plurality of partitionable engines, wherein the code blocks are executed by using the partitionable engines in accordance with a virtual core mode and in accordance with the respective inheritance vectors. A plurality of register file segments are coupled to the partitionable engines for providing data storage.
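
The inheritance vectors can be pictured with a short Python sketch (a software analogy only; the patent describes hardware). Here a code block is reduced to the set of registers it writes, and each block's vector records which earlier block produced each register value it may inherit:

    def inheritance_vectors(code_blocks, num_regs=8):
        # -1 means the register value comes from initial state.
        last_writer = [-1] * num_regs
        vectors = []
        for block_id, block in enumerate(code_blocks):
            vectors.append(list(last_writer))   # producers this block inherits
            for dest_reg in block:              # block = registers it writes
                last_writer[dest_reg] = block_id
        return vectors

    # Block 0 writes r1 and r2; block 1 rewrites r1; block 2 writes nothing.
    vecs = inheritance_vectors([[1, 2], [1], []])
    print(vecs[2][1], vecs[2][2])  # 1 0: r1 comes from block 1, r2 from block 0

A scheduler could use such vectors to route a dependent block toward the partitionable engines holding its producers' register file segments.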

Selective allocation of CPU cache slices to database objects
09842052 · 2017-12-12

A central processing unit (CPU) forming part of a computing device initiates execution of code associated with each of a plurality of objects used by a worker thread. The CPU has an associated cache that is split into a plurality of slices. It is determined, by a cache slice allocation algorithm for each object, whether any of the slices will be exclusive to or shared by the object. Thereafter, for each object, any slices determined to be exclusive to the object are activated such that the object exclusively uses such slices and any slices determined to be shared by the object are activated such that the object shares or is configured to share such slices.
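
A sketch of one possible allocation pass, assuming each object advertises an estimated slice demand and whether it tolerates sharing. The policy below (largest demand first, exclusive slices if they fit, otherwise shared) is invented for illustration and is not the patented algorithm:

    def allocate_slices(objects, num_slices):
        free = list(range(num_slices))
        plan = {}
        for obj in sorted(objects, key=lambda o: o["slices"], reverse=True):
            if not obj["shareable"] and len(free) >= obj["slices"]:
                exclusive, free = free[:obj["slices"]], free[obj["slices"]:]
                plan[obj["name"]] = {"exclusive": exclusive, "shared": []}
            else:
                # Share the remaining slices; if none remain, share them all.
                shared = free or list(range(num_slices))
                plan[obj["name"]] = {"exclusive": [], "shared": shared}
        return plan

    objs = [{"name": "hash_table", "slices": 4, "shareable": False},
            {"name": "scan_buffer", "slices": 2, "shareable": True}]
    print(allocate_slices(objs, 8))
    # hash_table gets slices 0-3 exclusively; scan_buffer shares 4-7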

CACHE ALLOCATION TECHNIQUES
20230185723 · 2023-06-15

Methods, systems, and devices for cache allocation techniques are described. The memory system may receive a write command that includes a stream identification (ID) associated with performance constraints for data streams. The memory system may determine an application identification (ID) that indicates a type of data based on the stream ID of the write command. In some cases, the memory system may assign a level associated with the application ID that indicates an amount of data to be written for the write operation and allocate an amount of space available to be written for the write operation in a single-level cell cache based on the assigned level. The memory system may write the data to the single-level cell cache within the amount of space of the single-level cell cache.
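
The described flow can be sketched as follows; the mapping tables and the slc_cache_write primitive are hypothetical stand-ins for device-specific logic:

    # Illustrative tables only; real mappings are device-specific.
    STREAM_TO_APP = {0x01: "video", 0x02: "telemetry"}
    APP_TO_LEVEL = {"video": 3, "telemetry": 1}   # level ~ expected write volume
    LEVEL_TO_SLC_BYTES = {1: 4 << 20, 2: 16 << 20, 3: 64 << 20}

    def handle_write(stream_id, data):
        # Stream ID -> application ID -> level -> SLC cache allocation,
        # then write the data within the allocated space.
        app_id = STREAM_TO_APP[stream_id]
        level = APP_TO_LEVEL[app_id]
        budget = LEVEL_TO_SLC_BYTES[level]
        if len(data) > budget:
            raise ValueError("write exceeds allocated SLC cache space")
        slc_cache_write(data, budget)

    def slc_cache_write(data, budget):
        print(f"writing {len(data)} bytes into a {budget >> 20} MiB SLC allocation")

    handle_write(0x01, b"x" * 1024)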

CACHING SYSTEMS AND METHODS FOR HARD DISK DRIVES AND HYBRID DRIVES
20170344276 · 2017-11-30

A system includes a read/write module and a caching module. The read/write module is configured to access a first portion of a recording surface of a rotating storage device. Data is stored on the first portion of the recording surface of the rotating storage device at a first density. The caching module is configured to cache data on a second portion of the recording surface of the rotating storage device at a second density. The second portion of the recording surface of the rotating storage device is separate from the first portion of the recording surface of the rotating storage device. The second density is less than the first density.
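
In software terms, the arrangement behaves like a write cache in front of a denser main store: the low-density region absorbs writes cheaply and is destaged to the high-density region later. The sketch below is a toy analogy, not drive firmware:

    class TwoDensityDrive:
        def __init__(self):
            self.main = {}     # high-density portion: lba -> data
            self.cache = {}    # separate low-density portion: lba -> data

        def write(self, lba, data):
            self.cache[lba] = data          # fast, rewrite-friendly region

        def read(self, lba):
            # The cached copy, if present, is the newest.
            return self.cache.get(lba, self.main.get(lba))

        def flush(self):
            self.main.update(self.cache)    # destage to the dense region
            self.cache.clear()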

Adaptive cache partitioning

Described apparatuses and methods partition a cache memory based, at least in part, on a metric indicative of prefetch performance. The amount of cache memory allocated for metadata related to prefetch operations versus cache storage can be adjusted based on operating conditions. Thus, the cache memory can be partitioned into a first portion allocated for metadata pertaining to an address space (prefetch metadata) and a second portion allocated for data associated with the address space (cache data). The amount of cache memory allocated to the first portion can be increased under workloads that are suitable for prefetching and decreased otherwise. The first portion may include one or more cache units, cache lines, cache ways, cache sets, or other resources of the cache memory.
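
One plausible control loop, sketched in Python: a prefetch-accuracy metric decides whether cache lines move toward the metadata partition or back to cache data. The thresholds, step size, and metadata cap are invented for illustration:

    def repartition(total_lines, metadata_lines, useful, issued,
                    step=64, lo=0.2, hi=0.6):
        # Grow the prefetch-metadata partition when prefetches pay off
        # (high useful/issued ratio) and shrink it otherwise.
        accuracy = useful / issued if issued else 0.0
        if accuracy > hi:
            metadata_lines = min(metadata_lines + step, total_lines // 2)
        elif accuracy < lo:
            metadata_lines = max(metadata_lines - step, 0)
        return metadata_lines, total_lines - metadata_lines

    print(repartition(4096, 512, useful=900, issued=1000))
    # accuracy 0.9 -> (576, 3520): metadata grows, cache data shrinks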

Providing memory management unit (MMU) partitioned translation caches, and related apparatuses, methods, and computer-readable media

Providing memory management unit (MMU) partitioned translation caches, and related apparatuses, methods, and computer-readable media. In this regard, an apparatus comprising an MMU is provided. The MMU comprises a translation cache providing a plurality of translation cache entries defining address translation mappings. The MMU further comprises a partition descriptor table providing a plurality of partition descriptors defining a corresponding plurality of partitions each comprising one or more translation cache entries of the plurality of translation cache entries. The MMU also comprises a partition translation circuit configured to receive a memory access request from a requestor. The partition translation circuit is further configured to determine a translation cache partition identifier (TCPID) of the memory access request, identify one or more partitions of the plurality of partitions based on the TCPID, and perform the memory access request on a translation cache entry of the one or more partitions.
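
A sketch of the lookup path only, with invented field names and a dictionary standing in for the partition descriptor table; the actual MMU performs this in hardware:

    # TCPID -> translation-cache entry indices owned by that partition.
    PARTITION_DESCRIPTORS = {
        0: range(0, 16),
        1: range(16, 64),
    }

    def translate(request, translation_cache):
        # Resolve which partition the requestor may use via its TCPID,
        # then search only that partition's entries.
        tcpid = request["tcpid"]
        for idx in PARTITION_DESCRIPTORS[tcpid]:
            entry = translation_cache[idx]
            if entry and entry["va"] == request["va"]:
                return entry["pa"]            # hit within own partition
        return None                           # miss: walk page tables, fill

    cache = [None] * 64
    cache[17] = {"va": 0x4000, "pa": 0x9000}
    print(hex(translate({"tcpid": 1, "va": 0x4000}, cache)))  # 0x9000

Confining each requestor to its own partition prevents one device's translations from evicting another's.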

Cache management of logical-physical translation metadata
11263149 · 2022-03-01

The present disclosure describes aspects of cache management of logical-physical translation metadata. In some aspects, a cache for logical-physical translation entries of a storage media system is divided into a plurality of segments. An indexer is configured to efficiently balance the distribution of the logical-physical translation entries between the segments. A search engine associated with the cache is configured to search respective cache segments, and a cache manager may leverage masked search functionality of the search engine to reduce the overhead of cache flush operations.
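
A short Python sketch of the segmentation idea, with invented names: the indexer hashes the logical address so entries spread evenly across segments, and a flush touches only one segment's dirty entries (the hardware masked search is approximated here by a software filter):

    import hashlib

    NUM_SEGMENTS = 8
    segments = [dict() for _ in range(NUM_SEGMENTS)]  # lba -> (phys, dirty)

    def segment_for(lba):
        # Indexer: hash the logical address to balance the distribution.
        h = hashlib.blake2b(lba.to_bytes(8, "little"), digest_size=2).digest()
        return int.from_bytes(h, "little") % NUM_SEGMENTS

    def insert(lba, phys, dirty=True):
        segments[segment_for(lba)][lba] = (phys, dirty)

    def flush_segment(seg_idx):
        # Select only dirty entries, write them back, mark them clean.
        dirty = {lba: v for lba, v in segments[seg_idx].items() if v[1]}
        for lba, (phys, _) in dirty.items():
            segments[seg_idx][lba] = (phys, False)
        return dirty

    insert(0x1000, 0xABCD)
    print(flush_segment(segment_for(0x1000)))  # only the dirty entry is flushed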