G06F2212/6042

CACHE MANAGEMENT

It is determined that a cache operation relating to the transfer of data between a cache memory and a data storage system is required. A state of a utilization model is received, the utilization model including requirements for utilization of resources of the data storage system over a time period, and the state indicating a cost of resource utilization associated with cache operations in the current time period. Whether to perform the cache operation is determined based on the utilization requirements and the state of the utilization model. If the cache operation is not to be performed and is a write operation, it is determined whether the cache memory is full. If so, the cache operation is managed according to an emergency cache management process; if not, the data associated with the cache operation is maintained in the cache memory.
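The decision flow above can be sketched in a few lines. The threshold comparison, the names (`decide_write`, `cost_threshold`), and the list-backed cache are illustrative assumptions, not details from the abstract:

```python
def decide_write(cost, cost_threshold, cache, capacity, data):
    """Decide whether to perform a cache write now, based on the cost of
    resource utilization reported by the utilization model's state."""
    if cost <= cost_threshold:
        return "perform"            # utilization is cheap: destage now
    # Operation deferred, but a write must still land somewhere.
    if len(cache) >= capacity:
        return "emergency"          # cache full: emergency cache management
    cache.append(data)              # otherwise keep the data in the cache
    return "retained"
```

The `"emergency"` branch stands in for whatever eviction or forced-destage process the emergency cache management entails.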

Queue Optimization in Cloud Computing

Systems and methods for object-based data storage are provided. A queue may be maintained of operations relating to a plurality of documents operable to be maintained at an object-based data storage. An independent operation may be identified in the queue that must be processed prior to processing at least one dependent operation, thereby enabling parallelization of processing of operations in the queue. The identified independent operation may then be processed, and the dependent operations may be processed subsequently.
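Processing independent operations before their dependents amounts to a topological ordering of the queue; a minimal sketch using Python's standard `graphlib` (the operation names are invented for illustration):

```python
from graphlib import TopologicalSorter

def dependency_order(deps):
    """deps maps each queued operation to the set of operations it
    depends on. Independent operations (empty dependency set) come
    first; once an independent operation is processed, its dependents
    become eligible, and operations with no mutual dependency could be
    processed in parallel."""
    return list(TopologicalSorter(deps).static_order())
```

A real implementation would dispatch each ready batch to workers rather than flatten the order into a list.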

Methods and apparatus to facilitate an atomic operation and/or a histogram operation in a cache pipeline

Methods, apparatus, systems and articles of manufacture to facilitate an atomic operation and/or a histogram operation in a cache pipeline are disclosed. An example system includes a cache storage coupled to an arithmetic component; and a cache controller coupled to the cache storage, wherein the cache controller is operable to: receive a memory operation that specifies a set of data; retrieve the set of data from the cache storage; utilize the arithmetic component to determine a set of counts of respective values in the set of data; generate a vector representing the set of counts; and provide the vector.
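In software terms, the histogram step counts occurrences of each value in the retrieved data and emits the counts as a vector; a sketch, assuming small non-negative integer values (which the abstract does not specify):

```python
from collections import Counter

def histogram_vector(data, num_bins):
    """Return a vector of counts of each value in data, as the
    arithmetic component attached to the cache storage would produce."""
    counts = Counter(data)
    return [counts.get(value, 0) for value in range(num_bins)]
```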

Data As Compute

A method includes storing a function representing a set of data elements stored in a backing memory and, in response to a first memory read request for a first data element of the set of data elements, calculating a function result representing the first data element based on the function.
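The idea is that a read returns a value computed from a stored function rather than a stored element; a toy sketch, with an invented arithmetic-sequence function standing in for the backing function:

```python
def function_backed_reader(fn):
    """Store only the generating function; service each read request by
    calculating the function result for the requested element."""
    def read(index):
        return fn(index)
    return read

# Example: elements 1, 4, 7, 10, ... represented by f(i) = 3*i + 1.
read = function_backed_reader(lambda i: 3 * i + 1)
```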

METHOD AND SYSTEM FOR IMPLEMENTING LOCK FREE SHARED MEMORY WITH SINGLE WRITER AND MULTIPLE READERS
20170308466 · 2017-10-26 ·

A method and a system for implementing a lock-free shared memory accessible by a plurality of readers and a single writer are provided herein. The method includes: maintaining a memory accessible by the readers and the writer, wherein the memory is a hash table having at least one linked list of buckets, each bucket in the linked list having: a bucket ID, a pointer to an object, and a pointer to another bucket; calculating a pointer to one bucket of the linked list of buckets based on a hash function in response to a read request by any of the readers; and traversing the linked list of buckets to read a series of objects corresponding to the traversed buckets, while checking that the writer has not added, amended, or deleted objects pointed to by any of said traversed buckets, wherein said checking is carried out in a single atomic action.
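The single-atomic-check idea resembles a seqlock: the writer bumps a version counter around each mutation, and a reader retries its traversal if the version changed mid-read. A simplified, single-threaded sketch (a dict stands in for the bucket linked list; all names are invented):

```python
class SingleWriterTable:
    """Single writer, many readers; readers detect concurrent writes by
    re-checking one version counter (the 'single atomic action')."""
    def __init__(self):
        self.version = 0      # odd while a write is in progress
        self.buckets = {}     # bucket ID -> object

    def write(self, bucket_id, obj):          # called by the one writer
        self.version += 1                     # mark write in progress
        self.buckets[bucket_id] = obj
        self.version += 1                     # mark write complete

    def read(self, bucket_ids):               # called by any reader
        while True:
            v = self.version
            if v % 2 == 1:
                continue                      # writer active; retry
            objs = [self.buckets.get(b) for b in bucket_ids]
            if self.version == v:             # unchanged: consistent read
                return objs
```

In a real multi-threaded version the counter loads and stores would need atomic operations and memory fences; Python is used here only to show the control flow.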

ASYMMETRICAL MEMORY MANAGEMENT
20170300415 · 2017-10-19 ·

Described herein are embodiments of asymmetric memory management to enable high bandwidth accesses. In embodiments, a high bandwidth cache or high bandwidth region can be synthesized using the bandwidth capabilities of more than one memory source. In one embodiment, memory management circuitry includes input/output (I/O) circuitry coupled with a first memory and a second memory. The I/O circuitry is to receive memory access requests. The memory management circuitry also includes logic to determine if the memory access requests are for data in a first region of system memory or a second region of system memory, and in response to a determination that one of the memory access requests is to the first region and a second of the memory access requests is to the second region, access data in the first region from the cache of the first memory and concurrently access data in the second region from the second memory.
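The bandwidth win comes from steering requests by region so the two memories can be accessed concurrently; a minimal routing sketch (the address boundary between regions is an invented parameter):

```python
def route_by_region(requests, boundary):
    """Split memory access requests by region: region-1 addresses are
    served from the first memory's cache, region-2 addresses from the
    second memory, so both can be serviced concurrently."""
    to_first = [addr for addr in requests if addr < boundary]
    to_second = [addr for addr in requests if addr >= boundary]
    return to_first, to_second
```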

Bioinformatics systems, apparatuses, and methods executed on an integrated circuit processing platform

A system, method and apparatus for executing a bioinformatics analysis on genetic sequence data includes an integrated circuit formed of a set of hardwired digital logic circuits that are interconnected by physical electrical interconnects. One of the physical electrical interconnects forms an input to the integrated circuit that may be connected with an electronic data source for receiving reads of genomic data. The hardwired digital logic circuits may be arranged as a set of processing engines, each processing engine being formed of a subset of the hardwired digital logic circuits to perform one or more steps in the bioinformatics analysis on the reads of genomic data. Each subset of the hardwired digital logic circuits may be formed in a wired configuration to perform the one or more steps in the bioinformatics analysis.

Method, apparatus and system for adjusting voltage of supercapacitor
09823722 · 2017-11-21 ·

A method for adjusting a voltage of a supercapacitor is disclosed. The method, which is used to retard aging of the supercapacitor and extend its service life, includes: acquiring information that carries a system service volume; configuring a size of an available capacity value of the cache according to the information; and adjusting a working voltage of the supercapacitor according to the configured size of the available capacity value of the cache.
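A hedged sketch of the described chain: service volume sizes the available cache, and the cache size in turn sets the working voltage. The linear scaling and all parameter names are assumptions; real firmware would use device-specific curves:

```python
def adjust_supercap_voltage(service_volume, max_cache_mb, v_min, v_max):
    """Scale the available cache capacity with service volume (0.0-1.0),
    then set the supercapacitor working voltage just high enough to back
    up that much cache on power loss; a lower voltage retards aging."""
    cache_mb = max(1, round(service_volume * max_cache_mb))
    voltage = v_min + (v_max - v_min) * cache_mb / max_cache_mb
    return cache_mb, voltage
```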

READ DISCARDS IN A PROCESSOR SYSTEM WITH WRITE-BACK CACHES
20170293556 · 2017-10-12 ·

A system and method are provided for managing a shared memory system. A multiprocessor system includes a first CPU and a second CPU, each having a private L1 cache. The system further includes a level 2 (L2) cache shared between the first CPU and the second CPU, a memory coherency manager (CM), and an I/O device. The second CPU is configured to request ownership of a cache line in the L1 cache of the first CPU that is in a Modified state. Later, upon receiving a read discard command from the I/O device, the second CPU is configured to request that the CM update the cache line from the Modified state to a Shared state.
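The key transition is a downgrade rather than an invalidation; a toy coherency-manager sketch (state names follow MESI; the function and its dict-based state table are illustrative, not from the patent):

```python
def handle_read_discard(line_states, line):
    """On a read discard from the I/O device, the CM moves the line from
    Modified to Shared: the dirty data is written back, but a readable
    copy stays in the cache instead of being invalidated."""
    if line_states.get(line) == "Modified":
        line_states[line] = "Shared"
    return line_states.get(line)
```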

Resource estimation for MVCC-enabled database
11256680 · 2022-02-22 ·

Systems and methods may include execution of a database workload on a plurality of database tables, collection of execution statistics associated with execution of the database workload, determination of an in-memory row storage cache size for multi-version concurrency control based on the collected execution statistics, and configuration of a database system to allocate the in-memory row storage cache size for multi-version concurrency control.
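One plausible shape for the estimation step, from collected statistics to a cache-size recommendation. The statistic names, the peak-versions-times-average-row-size formula, and the safety factor are all assumptions, not details from the abstract:

```python
def estimate_mvcc_cache_bytes(samples, safety_factor=1.25):
    """Estimate the in-memory row storage cache size for MVCC: the peak
    number of concurrent row versions observed during the workload,
    times the average row size, padded by a safety factor."""
    peak_versions = max(s["live_versions"] for s in samples)
    avg_row_bytes = sum(s["row_bytes"] for s in samples) / len(samples)
    return int(peak_versions * avg_row_bytes * safety_factor)
```

The result would then be used to configure the database system's row-store allocation.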