Patent classifications
G06F12/122
Information processing apparatus and computer-readable recording medium having stored therein process allocation determining program
An information processing apparatus including a plurality of groups, each group including a first memory, a second memory that differs in processing speed from the first memory, and a processor including a memory controller that is connected to the first memory and the second memory and that controls accesses from processes to the first memory and the second memory, wherein a first processor among the plurality of processors of the plurality of groups is configured to determine, based on a characteristic of the plurality of processes accessing data stored in the first memory or the second memory in each of the plurality of groups, an allocation of the plurality of processes onto the plurality of processors.
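The abstract does not disclose the concrete allocation rule, so the following is only a minimal sketch of the general idea: place each process on a processor in the group that holds most of the data it touches, breaking ties toward the least-loaded group. All names and the tie-break heuristic are illustrative, not the patented method.

from collections import Counter

def allocate(processes):
    """processes: list of (pid, [group id of each page the process touches])."""
    load = Counter()                # processes already placed per group
    allocation = {}
    for pid, accessed_groups in processes:
        counts = Counter(accessed_groups)
        # Prefer the group holding most of the process's data; break
        # ties toward the least-loaded group's processor.
        best = min(counts, key=lambda g: (-counts[g], load[g]))
        allocation[pid] = best
        load[best] += 1
    return allocation

print(allocate([("p0", [0, 0, 1]), ("p1", [1, 1, 1]), ("p2", [0, 1])]))
# -> {'p0': 0, 'p1': 1, 'p2': 0}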
METHOD TO MINIMIZE HOT/COLD PAGE DETECTION OVERHEAD ON RUNNING WORKLOADS
Methods and apparatus to minimize hot/cold page detection overhead on running workloads. A page meta data structure is populated with meta data associated with memory pages in one or more far memory tiers. In conjunction with one or more processes accessing memory pages to perform workloads, the page meta data structure is updated to reflect accesses to the memory pages. The page meta data, which reflects the current state of memory, is used to determine which pages are “hot” pages and which pages are “cold” pages, wherein hot pages are memory pages with relatively higher access frequencies and cold pages are memory pages with relatively lower access frequencies. Variations on the approach include filtering meta data updates on pages in memory regions of interest and applying one or more filters to trigger meta data updates based on one or more conditions. A callback function may also be triggered to execute synchronously with memory page accesses.
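As a rough illustration of the access-counting idea, the sketch below keeps a software-maintained per-page metadata table and a region-of-interest filter; the patent's in-kernel structures, filter conditions, and callback mechanism are only approximated, and every name is hypothetical.

from collections import defaultdict

PAGE_SIZE = 4096
meta = defaultdict(int)                 # page number -> access count
region_of_interest = range(0, 1024)     # only track pages in this region

def on_access(addr):
    page = addr // PAGE_SIZE
    if page in region_of_interest:      # filter meta data updates by region
        meta[page] += 1

def classify(hot_threshold=8):
    """Split tracked pages into hot and cold by access frequency."""
    hot = {p for p, n in meta.items() if n >= hot_threshold}
    cold = set(meta) - hot
    return hot, cold

for a in [0, 100, 200] * 5 + [8192]:
    on_access(a)
print(classify())                       # -> ({0}, {2})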
Distributed quantum entanglement cache
An entangled quantum cache includes a quantum store that receives a plurality of quantum states and is configured to store and order the plurality of quantum states and to provide select ones of the stored and ordered plurality of quantum states to a quantum data output at a first desired time. A fidelity system is configured to determine a fidelity of at least some of the plurality of quantum states. A classical store is coupled to the fidelity system and configured to store classical data comprising the determined fidelity information and an index that associates particular ones of the classical data with particular ones of the plurality of quantum states and to supply at least some of the classical data to a classical data output at a second desired time. A processor is connected to the classical store and determines the first desired time based on the index.
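On the classical side, the fidelity index described here could be approximated by a priority structure keyed on estimated fidelity, serving the highest-fidelity stored state first. The sketch below is purely illustrative, assumes nothing about the quantum hardware, and uses hypothetical names throughout.

import heapq

class ClassicalIndex:
    def __init__(self):
        self._heap = []                      # (-fidelity, slot)

    def record(self, slot, fidelity):
        heapq.heappush(self._heap, (-fidelity, slot))

    def best_slot(self):
        """Return the slot holding the highest-fidelity stored state."""
        neg_f, slot = heapq.heappop(self._heap)
        return slot, -neg_f

idx = ClassicalIndex()
for slot, f in [(0, 0.91), (1, 0.97), (2, 0.85)]:
    idx.record(slot, f)
print(idx.best_slot())                       # -> (1, 0.97)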
Virtual file system for cloud-based shared content
A server in a cloud-based environment interfaces with storage devices that store shared content accessible by two or more users. Individual items within the shared content are associated with respective object metadata that is also stored in the cloud-based environment. Download requests initiate downloads of instances of a virtual file system module to two or more user devices associated with two or more users. The downloaded virtual file system modules capture local metadata that pertains to local object operations directed by the users over the shared content. Changed object metadata attributes are delivered to the server and to other user devices that are accessing the shared content. Peer-to-peer connections can be established between the two or more user devices. Objects can be divided into smaller portions such that processing the individual smaller portions of a larger object reduces the likelihood of a conflict between user operations over the shared content.
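The portion-splitting idea in the last sentence can be illustrated with one content hash per portion, so that concurrent edits conflict only when both sides change the same portion differently. The 4 MiB portion size, SHA-256 choice, and function names are assumptions, not taken from the patent.

import hashlib

CHUNK = 4 * 1024 * 1024                # 4 MiB portions; size is an assumption

def chunk_ids(data: bytes):
    """One content hash per portion of the object."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def conflicting_portions(base, mine, theirs):
    """Indices of portions that both sides changed, and changed differently."""
    b, m, t = chunk_ids(base), chunk_ids(mine), chunk_ids(theirs)
    return [i for i in range(min(len(b), len(m), len(t)))
            if m[i] != b[i] and t[i] != b[i] and m[i] != t[i]]

Two users who modify disjoint portions then produce no conflicting indices, so only same-portion edits need reconciliation.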
Heterogeneous-latency memory optimization
Memory pages are background-relocated from a low-latency local operating memory of a server computer to a higher-latency memory installation that enables high-resolution access monitoring and thus access-demand differentiation among the relocated memory pages. Higher access-demand memory pages are background-restored to the low-latency operating memory, while lower access-demand pages are maintained in the higher-latency memory installation and yet-lower access-demand pages are optionally moved to a yet higher-latency memory installation.
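A rough sketch of the promotion/demotion logic this describes follows, with three tiers driven by observed access counts. The thresholds, capacity limit, and tier names are assumptions; the patent's actual monitoring granularity is not reproduced here.

def rebalance(access_counts, fast_capacity, hot_min=16, cold_max=1):
    """Partition pages into fast, slow, and slowest tiers by access demand."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    # Restore the hottest pages to low-latency memory, up to capacity.
    fast = [p for p in ranked[:fast_capacity] if access_counts[p] >= hot_min]
    rest = [p for p in ranked if p not in fast]
    # Keep moderately used pages in the monitored tier; demote the coldest.
    slow = [p for p in rest if access_counts[p] > cold_max]
    slowest = [p for p in rest if access_counts[p] <= cold_max]
    return fast, slow, slowest

counts = {"a": 40, "b": 20, "c": 3, "d": 1}
print(rebalance(counts, fast_capacity=2))    # -> (['a', 'b'], ['c'], ['d'])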
MULTI-ADAPTIVE CACHE REPLACEMENT POLICY
Techniques for performing cache operations are provided. The techniques include tracking performance events for a plurality of test sets of a cache, detecting a replacement policy change trigger event associated with a test set of the plurality of test sets, and in response to the replacement policy change trigger event, operating non-test sets of the cache according to a replacement policy associated with the test set.
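This resembles set dueling: dedicated test sets each run a fixed policy, and the remaining (non-test) sets follow whichever policy is currently winning. Below is a compact sketch under that reading, where a saturating counter of test-set misses serves as the replacement policy change trigger event; the policy names and threshold are illustrative assumptions.

class PolicySelector:
    def __init__(self, threshold=32):
        self.counter = 0                # > 0 favors policy A, < 0 favors B
        self.threshold = threshold
        self.active = "A"               # policy used by the non-test sets

    def miss_in_test_set(self, policy):
        self.counter += 1 if policy == "B" else -1
        # Trigger event: one test group clearly outperforms the other.
        if abs(self.counter) >= self.threshold:
            self.active = "A" if self.counter > 0 else "B"
            self.counter = 0            # reset after the policy change

sel = PolicySelector()
for _ in range(32):
    sel.miss_in_test_set("B")           # policy B's test sets keep missing
print(sel.active)                       # -> 'A' (non-test sets switch to A)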
MEMORY MANAGEMENT METHOD AND TERMINAL DEVICE
This application provides a memory management method and a terminal device. According to the method, the frequently accessed memory pages and the rarely accessed memory pages that are required for starting and using an application may be determined from the memory pages on which page exceptions occur in a target running application. A terminal retains, in memory, the memory pages that are required for starting and using the application, to resolve the freezing that occurs when an application running in the background is started again, and to improve the keepalive performance of the application. The terminal migrates the rarely accessed memory pages to a swap partition of a storage, to save memory resources.
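A hedged sketch of the described classification: record which pages fault while the target application starts and runs, keep the frequently faulting pages resident, and plan to swap the rest. The recording window, threshold, and names are assumptions, not taken from the application.

faulted = {}                            # page -> fault count in the window

def on_page_fault(page):
    faulted[page] = faulted.get(page, 0) + 1

def plan(all_pages, keep_min=2):
    """Keep pages that faulted often; migrate the rest to swap."""
    keep = {p for p in all_pages if faulted.get(p, 0) >= keep_min}
    to_swap = set(all_pages) - keep
    return keep, to_swap

for p in [1, 1, 2, 3, 1]:
    on_page_fault(p)
print(plan({1, 2, 3, 4}))               # -> ({1}, {2, 3, 4})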
Cache eviction methods
One example method includes a cache eviction operation. Entries in a cache are maintained in an entry list that includes a recent list and a frequent list. When an eviction operation is initiated or triggered, timestamps of last access for the entries are adjusted by corresponding adjustment values. Candidates for eviction are identified based on the adjusted timestamps of last access. At least some of the candidates are evicted from the cache.
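A minimal sketch of the adjusted-timestamp selection: entries on the frequent list receive a larger adjustment than entries on the recent list, and the oldest adjusted timestamps become the eviction candidates. The adjustment values and entry layout are placeholders, not taken from the patent.

import time

ADJUST = {"recent": 0.0, "frequent": 30.0}   # placeholder adjustment values

def eviction_candidates(entries, n):
    """entries: list of (key, last_access_ts, list_name); return n candidates."""
    adjusted = [(ts + ADJUST[which], key) for key, ts, which in entries]
    adjusted.sort()                          # oldest adjusted time first
    return [key for _, key in adjusted[:n]]

now = time.time()
entries = [("a", now - 60, "recent"),        # old, and only recently listed
           ("b", now - 60, "frequent"),      # just as old, but frequently used
           ("c", now - 5,  "recent")]
print(eviction_candidates(entries, 1))       # -> ['a']

The boost for frequent-list entries means a frequently used entry must be considerably staler than a recent-list entry before it is selected for eviction.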