Patent classifications
G06F2212/314
Dynamic ingestion throttling of data log
A technique for controlling acceptance of host application data into a data log in a data storage system includes selectively accepting or refusing newly arriving host data into the data log based on a comparison between an oldest entry in the data log and an age threshold. The age threshold is dynamically updated based on system heuristics. As long as the oldest log entry is younger than the age threshold, the data log continues to accept newly arriving host application data, acknowledging IO requests to host applications as the data specified in those requests is entered into the log. However, when the oldest log entry is older than the age threshold, newly arriving data are temporarily refused entry into the data log. Instead, they are placed on a pending list, where they are kept until the data log is again accepting new log entries.
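A minimal Python sketch of the age-threshold gate described above; the class and method names, the initial threshold value, and the policy for draining the pending list are illustrative assumptions, not the patented implementation.

    import time
    from collections import deque

    class DataLog:
        """Accept or defer host writes based on the age of the oldest log entry."""
        def __init__(self, age_threshold_s=5.0):
            self.age_threshold_s = age_threshold_s   # updated by system heuristics
            self.log = deque()       # (timestamp, data) entries, oldest first
            self.pending = deque()   # writes deferred while the log is too old

        def oldest_age(self):
            return 0.0 if not self.log else time.monotonic() - self.log[0][0]

        def write(self, data):
            """Return True if the write entered the log (IO acknowledged)."""
            if self.oldest_age() < self.age_threshold_s:
                self.log.append((time.monotonic(), data))
                return True                # acknowledge the IO request
            self.pending.append(data)      # refuse for now; keep on pending list
            return False

        def destage_oldest(self):
            """Remove the oldest entry, then admit pending writes that now fit."""
            if self.log:
                self.log.popleft()
            while self.pending and self.oldest_age() < self.age_threshold_s:
                self.log.append((time.monotonic(), self.pending.popleft()))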
System-on-chip for speculative execution event counter checkpointing and restoring
An example system for speculative execution event counter checkpointing and restoring may include a plurality of symmetric cores, at least one of the symmetric cores to simultaneously process a plurality of threads and to perform out-of-order instruction processing for the plurality of threads; at least one shared cache circuit to be shared among two or more of the symmetric cores. The system may further include a memory controller to couple the symmetric cores to a system memory and a data communication interface to couple one or more of the cores to input/output devices. The system may further include event counter circuitry comprising: a plurality of event counters, including programmable event counters and fixed event counters, and one or more configuration registers to store configuration data to specify an event type to be counted by the programmable event counters, wherein at least one of the one or more configuration registers is to store configuration data for a plurality of the programmable event counters. The system may further include transactional memory circuitry to process transactional memory operations including load operations and store operations, the transactional memory circuitry to process a transaction begin instruction to indicate a start of a transactional execution region of a program, a transaction end instruction to indicate an end of the transactional execution region, and a transaction abort instruction to abort processing of the transactional execution region. The system may further include transaction checkpoint circuitry to store a processor state at the start of the transactional execution region of the program, the processor state including values of one or more of the event counters. The system may further include lock elision circuitry to cause critical sections of the program to execute as transactions on multiple threads without acquiring a lock, the lock elision circuitry to cause the critical sections to be re-executed non-speculatively using one or more locks in response to detecting a transaction failure.
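As a rough software model (not the actual circuitry), the sketch below captures the checkpoint/restore behavior: counter values are snapshotted when a transactional region begins and rolled back if it aborts. The class layout, counter counts, and method names are assumptions.

    class EventCounters:
        # Toy model of checkpointing event counters around a transactional region.
        def __init__(self, n_programmable=4, n_fixed=3):
            self.counters = [0] * (n_programmable + n_fixed)
            self._checkpoint = None

        def count(self, idx, n=1):
            self.counters[idx] += n

        def tx_begin(self):
            # Checkpoint circuitry: snapshot processor state, counters included.
            self._checkpoint = list(self.counters)

        def tx_end(self):
            self._checkpoint = None        # commit: keep speculative counts

        def tx_abort(self):
            # Restore pre-transaction counter values on failure.
            self.counters = self._checkpoint
            self._checkpoint = None

    counters = EventCounters()
    counters.tx_begin()
    counters.count(0, 10)   # events counted speculatively inside the region
    counters.tx_abort()     # transaction fails, e.g. a lock elision conflict
    assert counters.counters[0] == 0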
Dynamic caching module selection for optimized data deduplication
Embodiments of the invention provide a method, system, and computer program product for dynamic caching module selection for optimized data deduplication. In an embodiment of the invention, a method for dynamic caching module selection for optimized data deduplication is provided. The method includes receiving a request to retrieve data and classifying the request. The method also includes identifying, from amongst multiple caching modules each having a different configuration, a particular caching module associated with the classification of the request. Finally, the method includes deduplicating the data in the identified caching module.
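One plausible reading of the selection step, sketched in Python: the request is classified, the classification picks one of several differently configured caching modules, and deduplication runs under that module's configuration. The size-based classifier, the module names, and the chunk-based dedup placeholder are all assumptions.

    # Hypothetical module configurations keyed by request classification.
    CACHING_MODULES = {
        "small_random": {"block_size": 4096},
        "large_sequential": {"block_size": 65536},
    }

    def classify(request):
        # Assumed heuristic: route by requested size.
        return "large_sequential" if request["size"] >= 65536 else "small_random"

    def deduplicate(data, module):
        # Placeholder for the module's dedup logic: chunk at the module's
        # block size and keep only the first copy of each repeated chunk.
        size = module["block_size"]
        seen, unique = set(), []
        for i in range(0, len(data), size):
            chunk = data[i:i + size]
            if chunk not in seen:
                seen.add(chunk)
                unique.append(chunk)
        return b"".join(unique)

    def handle(request):
        module = CACHING_MODULES[classify(request)]
        return deduplicate(request["data"], module)

    deduped = handle({"size": 16384, "data": b"abcd" * 4096})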
Data copy avoidance across a storage
Embodiments of the present disclosure relate to methods and apparatuses for data copy avoidance in which, after a data access request is received from a first storage node, what a second storage node sends back to the first storage node is not the address of a second storage space in a second mirrored cache, but the address of a first storage space in a first cache corresponding to the second storage space. In this way, data access may be implemented directly in the first cache on the first storage node, which can reduce data communication across different storage nodes, eliminate potential system performance bottlenecks, and enhance data access performance.
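A toy two-node model of the idea, with the mirror map and all names assumed for illustration: the owning node answers an access request with an address in the requester's own cache, so the read never crosses nodes.

    class Node:
        def __init__(self, name):
            self.name = name
            self.cache = {}        # local address -> data
            self.mirror_map = {}   # own address -> peer's corresponding address

    def remote_access(requester, owner, owner_addr):
        # Instead of returning owner_addr (a slot in the owner's mirrored
        # cache), the owner returns the corresponding address in the
        # requester's cache, so the data is read locally.
        local_addr = owner.mirror_map[owner_addr]
        return requester.cache[local_addr]

    node1, node2 = Node("first"), Node("second")
    node1.cache[0x10] = b"payload"    # data already present in node1's cache
    node2.mirror_map[0x90] = 0x10     # node2's slot maps to node1's slot
    assert remote_access(node1, node2, 0x90) == b"payload"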
Apparatus and method for controlling shared cache of multiple processor cores by using individual queues and shared queue
A control unit stores data used in a process to a shared cache memory. The control unit provides a shared queue in a memory space of the shared cache memory and performs LRU control with the use of the shared queue. The control unit also provides a local queue in the memory space of the shared cache memory. The control unit enqueues to the local queue a CBE (management information entry) for a cache page used by a core in a process. Upon satisfaction of a predetermined condition, the control unit dequeues a plurality of CBEs from the local queue and enqueues the dequeued CBEs to the shared queue.
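The sketch below shows one way the two-level queueing could look in software: cores append CBEs to their own local queues, and batches are later moved to the shared queue that defines the LRU order. The batch-size trigger stands in for the unspecified "predetermined condition", and all names are assumptions.

    from collections import deque

    class SharedCacheLRU:
        def __init__(self, n_cores, batch_size=8):
            self.shared = deque()                           # global LRU order
            self.local = [deque() for _ in range(n_cores)]  # one queue per core
            self.batch_size = batch_size

        def touch(self, core, cbe):
            # Record the access locally, avoiding shared-queue contention.
            self.local[core].append(cbe)
            if len(self.local[core]) >= self.batch_size:    # assumed condition
                self.drain(core)

        def drain(self, core):
            # Dequeue the batched CBEs and enqueue them to the shared queue,
            # moving each to the most-recently-used end of the LRU order.
            while self.local[core]:
                cbe = self.local[core].popleft()
                if cbe in self.shared:
                    self.shared.remove(cbe)
                self.shared.append(cbe)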
Large scale storage system and method of operating thereof
A distributed storage system comprising interconnected computer nodes; each one of the computer nodes comprising at least one processing resource configured to execute a Unified Distributed Storage Platform (UDSP) agent; at least one of the computer nodes comprising one or more resources, including at least one cache resource configured to cache objects and having corresponding cache-related parameters; at least one UDSP agent of a respective computer node having the at least one cache resource is configured to: monitor the cache-related parameters of the at least one cache resource connected to the respective computer node to determine whether the cache-related parameters meet at least one first SLS criterion; and, in case the at least one first SLS criterion is not met, initiate handoff of at least part of one or more cache object spaces of the at least one cache resource to at least one other computer node whose cache-related parameters, after it receives the at least part of the one or more cache object spaces, meet at least one second SLS criterion.
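A loose sketch of the monitoring-and-handoff loop, assuming SLS criteria expressed as hit-rate and latency thresholds and a crude latency projection; none of these specifics come from the abstract.

    class CacheNode:
        def __init__(self, hit_rate, latency_ms, object_spaces=()):
            self.cache_params = {"hit_rate": hit_rate, "latency_ms": latency_ms}
            self.object_spaces = list(object_spaces)

        def projected_params(self, incoming):
            # Assumed model: each received object space adds 0.5 ms latency.
            p = dict(self.cache_params)
            p["latency_ms"] += 0.5 * len(incoming)
            return p

    def meets_sls(params, sls):
        return (params["hit_rate"] >= sls["min_hit_rate"]
                and params["latency_ms"] <= sls["max_latency_ms"])

    def monitor(node, peers, first_sls, second_sls):
        # UDSP-agent-style check: if the first SLS criterion fails, hand off
        # cache object spaces to a peer that would still meet the second.
        if meets_sls(node.cache_params, first_sls):
            return
        for peer in peers:
            if meets_sls(peer.projected_params(node.object_spaces), second_sls):
                peer.object_spaces.extend(node.object_spaces)
                node.object_spaces.clear()
                break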
System and method for event monitoring in cache coherence protocols without explicit invalidations
Synchronization events associated with cache coherence are monitored without using invalidations. A callback-read is issued to a memory address associated with the synchronization event; the callback-read either returns the last value written to the memory address or blocks until the next write to that address takes place and then returns the newly written value.
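In software terms the callback-read behaves like a condition-variable wait, as in the Python sketch below; the class and the single-word granularity are illustrative assumptions.

    import threading

    class CallbackWord:
        # Callback-read target: returns the last written value, or blocks
        # until the next write if nothing has been written yet.
        def __init__(self):
            self._cond = threading.Condition()
            self._value = None
            self._written = False

        def write(self, value):
            with self._cond:
                self._value = value
                self._written = True
                self._cond.notify_all()   # wake any blocked callback-reads

        def callback_read(self):
            with self._cond:
                while not self._written:  # no invalidation traffic; just wait
                    self._cond.wait()
                return self._value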
Write request processing method, processor, and computer
Embodiments of the present disclosure provide a write request processing method, a processor, and a computer. A first computer is connected to a second computer, and respective operating systems run on the first computer and the second computer. The first computer includes a first processor, which is connected to a second processor of the second computer by using a system bus. The first computer includes a first memory address space, and a second memory address space of the second computer is a mirror address space of the first memory address space. The first processor mirrors data written into the first memory address space to the second memory address space by using the system bus, which can reduce mirroring operation latency and improve IOPS performance of the system.
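A toy model of the mirrored address spaces, with the system-bus transfer reduced to a byte copy; the class name and sizes are assumptions.

    class MirroredMemory:
        def __init__(self, size):
            self.first = bytearray(size)    # first computer's address space
            self.second = bytearray(size)   # peer's mirror address space

        def write(self, offset, data):
            self.first[offset:offset + len(data)] = data
            # Processor-driven mirroring over the system bus, modeled as a
            # copy; propagating the write below the OS is what trims
            # mirroring latency from the IO path.
            self.second[offset:offset + len(data)] = data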
Reduced page load time utilizing cache storage
Described herein is a cache that can be stored in a user-partitioned region of storage and utilized to reduce the amount of time required to present content responsive to content requests. A request for content associated with a region of a user interface can be received, and data corresponding to a list item in a cache can be accessed. Content associated with the data can be presented in the region of the user interface via the same presentation as the most recent presentation of the content. At a time subsequent to when the content is initially presented in the region, new data associated with the list item can be retrieved. In examples where the new data corresponds to updated data, the presentation can be modified based partly on the updated data, and the new data can be written to the cache in a location corresponding to the list item.
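The flow reads like a render-from-cache-then-refresh pattern; a minimal sketch, assuming a render callback and a dict-backed cache (both hypothetical):

    class Region:
        def render(self, data):
            print("rendering:", data)

    def present(region, list_item, cache, fetch):
        cached = cache.get(list_item)
        if cached is not None:
            region.render(cached)       # same presentation as last time
        new_data = fetch(list_item)     # retrieved after the initial paint
        if new_data != cached:
            region.render(new_data)     # modify presentation for updated data
            cache[list_item] = new_data # write back at the item's location

    cache = {"item-1": "cached headline"}
    present(Region(), "item-1", cache, lambda item: "fresh headline")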
Cache memory system and method for accessing cache line
A cache memory system is provided. The cache memory system includes multiple upper level caches and a current level cache. Each upper level cache includes multiple cache lines. The current level cache includes an exclusive tag random access memory (Exclusive Tag RAM) and an inclusive tag random access memory (Inclusive Tag RAM). The Exclusive Tag RAM is configured to preferentially store an index address of a cache line that is in each upper level cache and whose status is unique dirty (UD). The Inclusive Tag RAM is configured to store an index address of a cache line that is in each upper level cache and whose status is unique clean (UC), shared clean (SC), or shared dirty (SD).
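A small sketch of the routing rule the abstract describes, with the state names taken from the abstract and everything else assumed:

    EXCLUSIVE_STATES = {"UD"}               # unique dirty
    INCLUSIVE_STATES = {"UC", "SC", "SD"}   # unique/shared clean, shared dirty

    class TagRAMs:
        def __init__(self):
            self.exclusive_tag_ram = set()  # index addresses of UD lines
            self.inclusive_tag_ram = set()

        def record(self, index_addr, state):
            # A line moves between RAMs when its coherence state changes.
            self.exclusive_tag_ram.discard(index_addr)
            self.inclusive_tag_ram.discard(index_addr)
            if state in EXCLUSIVE_STATES:
                self.exclusive_tag_ram.add(index_addr)
            elif state in INCLUSIVE_STATES:
                self.inclusive_tag_ram.add(index_addr)
            else:
                raise ValueError("unknown line state: " + state)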