G06F12/123

EVICTION MECHANISM
20220391325 · 2022-12-08

A device comprising: storage comprising a group of partitions, and a controller operable to place data into a selected one of the partitions and to evict existing data from the selected partition when it is already occupied. The eviction is performed according to an eviction policy under which each partition has an associated age indicator that is operable to cycle through a sequence of J steps. Each age indicator is able to run ahead of the current oldest age indicator, but only as long as the age indicators of all the partitions in the group, between them, form a run of no more than K consecutive steps in the sequence, where K&lt;J−1. The partition selected for eviction is one of the partitions in the group with the oldest age indicator.
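The policy can be illustrated with a minimal Python sketch. All names (`CyclicAgeGroup`, `touch`, `victim`) are illustrative, not from the patent; ages are modeled as integers on a cycle of J steps, with `head` tracking the newest step in use:

```python
class CyclicAgeGroup:
    """Illustrative sketch of the cyclic age-indicator eviction policy.

    Each partition's age indicator is a step on a cycle of J steps. All
    indicators must lie within a run of at most K consecutive steps (K < J-1),
    which keeps "oldest" unambiguous despite wrap-around.
    """

    def __init__(self, num_partitions, J=8, K=4):
        assert K < J - 1
        self.J, self.K = J, K
        self.ages = [0] * num_partitions  # every indicator starts at step 0
        self.head = 0                     # newest step currently in use

    def touch(self, p):
        """Advance partition p's indicator, if the K-step run allows it."""
        new_head = (self.head + 1) % self.J
        others = [a for i, a in enumerate(self.ages) if i != p]
        # width of the run from the oldest remaining indicator up to new_head
        span = max(((new_head - a) % self.J for a in others), default=0) + 1
        if span <= self.K:               # run ahead only while the run stays <= K
            self.head = new_head
        self.ages[p] = self.head

    def victim(self):
        """Partition whose indicator is farthest behind the newest step."""
        return max(range(len(self.ages)),
                   key=lambda i: (self.head - self.ages[i]) % self.J)
```

With J=8 and K=4, a repeatedly touched partition can advance only K−1 steps ahead of a partition whose indicator never moves, after which further advances are refused until the laggard is evicted or touched.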

CACHING FOR DISK BASED HYBRID TRANSACTIONAL ANALYTICAL PROCESSING SYSTEM
20220391394 · 2022-12-08

A method for caching partial data pages to support optimized transactional processing and analytical processing with a minimal memory footprint may include loading, from disk to memory, a portion of a data page. The memory may include a first cache for storing partial data pages and a second cache for storing full data pages. The portion of the data page may be loaded into the first cache. A data structure may be updated to indicate that the portion of the data page has been loaded into the first cache. When the data structure indicates that the data page has been loaded into the first cache in its entirety, the data page may be transferred from the first cache to the second cache. One or more queries may be performed using at least the portion of the data page loaded into the memory. Related systems and articles of manufacture are also provided.
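The two-tier scheme can be sketched in Python. The class and method names are illustrative; the tracking "data structure" is modeled as a set of loaded portion indices per page:

```python
class PartialPageCache:
    """Illustrative sketch of the partial/full two-tier page cache.

    A tracker records which portions of each page have been loaded into the
    first (partial) cache; once every portion is present, the page is
    assembled and promoted to the second (full-page) cache.
    """

    def __init__(self, portions_per_page):
        self.portions_per_page = portions_per_page
        self.partial = {}   # first cache:  page_id -> {portion_idx: bytes}
        self.full = {}      # second cache: page_id -> assembled page bytes
        self.tracker = {}   # data structure: page_id -> set of loaded indices

    def load_portion(self, page_id, idx, data):
        # simulate loading one portion of a page from disk into the first cache
        self.partial.setdefault(page_id, {})[idx] = data
        loaded = self.tracker.setdefault(page_id, set())
        loaded.add(idx)
        if len(loaded) == self.portions_per_page:
            # tracker indicates the page is complete: promote to second cache
            parts = self.partial.pop(page_id)
            self.full[page_id] = b"".join(
                parts[i] for i in range(self.portions_per_page))
            del self.tracker[page_id]
```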

BUFFER POOL MANAGEMENT
20220382671 · 2022-12-01

A processor may allocate a first buffer segment from a buffer pool. The first buffer segment may be configured with a first contiguous range of memory for a first data partition of a data table, the first data partition comprising a first plurality of data blocks. A processor may store the first plurality of data blocks in order into the first buffer segment. In response to a data access request for a target data block of the first plurality of data blocks, a processor may retrieve the target data block from the first buffer segment.
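A minimal Python sketch of the idea follows; the names are illustrative. Because a partition's blocks are stored in order within one contiguous segment, a block can be located with offset arithmetic rather than a per-block lookup table:

```python
class BufferPool:
    """Illustrative sketch of per-partition contiguous buffer segments."""

    def __init__(self, block_size):
        self.block_size = block_size
        self.segments = {}  # partition_id -> contiguous bytearray

    def allocate_segment(self, partition_id, num_blocks):
        # one contiguous range of memory for the whole partition
        self.segments[partition_id] = bytearray(num_blocks * self.block_size)

    def store_blocks(self, partition_id, blocks):
        seg = self.segments[partition_id]
        for i, block in enumerate(blocks):  # blocks stored in order
            seg[i * self.block_size:(i + 1) * self.block_size] = block

    def get_block(self, partition_id, block_idx):
        # target block located by offset arithmetic within the segment
        seg = self.segments[partition_id]
        start = block_idx * self.block_size
        return bytes(seg[start:start + self.block_size])
```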

QUERY PROCESSING FOR DISK BASED HYBRID TRANSACTIONAL ANALYTICAL PROCESSING SYSTEM
20220382758 · 2022-12-01

A method for processing a query may include receiving a query associated with one or more predicate columns and one or more aggregate columns. To respond to the query, one or more partial data pages including the one or more predicate columns but not the one or more aggregate columns may be loaded from disk to memory. For each partial data page, a first value occupying the one or more predicate columns may be evaluated to identify one or more rows satisfying a predicate associated with the query. A portion of a data page containing the one or more aggregate columns may be loaded from disk into memory. A result of the query corresponding to a second value occupying the one or more aggregate columns may be generated based on the portion of the data page loaded into the memory. Related systems and articles of manufacture are also provided.
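The two-phase evaluation can be sketched in Python. Pages are modeled as lists of row dictionaries and the "load from disk" steps are simulated by projecting only the needed columns; all names are illustrative:

```python
def query_partial_pages(pages, predicate_cols, aggregate_cols,
                        predicate, aggregate):
    """Illustrative sketch of predicate-first, aggregate-second evaluation.

    Phase 1 loads only the predicate columns of each page to find matching
    rows; phase 2 loads the aggregate-column portion only for pages that
    actually have matches, keeping the memory footprint small.
    """
    matches = {}  # page index -> row indices satisfying the predicate
    for pid, page in enumerate(pages):
        # phase 1: load only the predicate columns (simulated partial read)
        pred_values = [{c: row[c] for c in predicate_cols} for row in page]
        rows = [i for i, v in enumerate(pred_values) if predicate(v)]
        if rows:
            matches[pid] = rows

    results = []
    for pid, rows in matches.items():
        # phase 2: load the aggregate-column portion of the matching pages
        results.extend({c: pages[pid][i][c] for c in aggregate_cols}
                       for i in rows)
    return aggregate(results)
```

For example, with a predicate `x > 1` and an aggregate summing `y`, only the `y` values of rows that passed the predicate phase are ever materialized.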

MEMORY DEVICE AND METHOD FOR ACCESSING MEMORY DEVICE

The invention provides a memory device including a memory array, an internal memory, and a processor. The memory array stores node mapping tables for accessing data in the memory array. The internal memory includes a cached mapping table area and has a root mapping table. The processor determines, according to the root mapping table, whether a first node mapping table of the node mapping tables is temporarily stored in the cached mapping table area. In response to the first node mapping table being temporarily stored in the cached mapping table area, the processor accesses data according to the first node mapping table in the cached mapping table area, marks the modified first node mapping table with an asynchronous index identifier, and writes the modified first node mapping table back from the cached mapping table area to the memory array.
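The root-table lookup and deferred write-back can be sketched in Python. The class and attribute names are illustrative; the "asynchronous index identifier" is modeled simply as membership in a dirty set:

```python
class MappingTableCache:
    """Illustrative sketch of root/node mapping-table caching with write-back.

    The root table records whether each node mapping table is cached;
    modified tables are marked dirty (standing in for the asynchronous
    index identifier) and flushed back to the memory array later.
    """

    def __init__(self, memory_array):
        self.memory_array = memory_array  # node_id -> mapping table (dict)
        self.cache = {}                   # cached node mapping tables
        self.root = {}                    # root table: node_id -> cached?
        self.dirty = set()                # tables marked for async write-back

    def get_table(self, node_id):
        if not self.root.get(node_id):    # consult the root mapping table
            self.cache[node_id] = dict(self.memory_array[node_id])
            self.root[node_id] = True
        return self.cache[node_id]

    def update_mapping(self, node_id, logical, physical):
        table = self.get_table(node_id)
        table[logical] = physical
        self.dirty.add(node_id)           # mark the modified table

    def write_back(self):
        for node_id in self.dirty:        # flush marked tables to the array
            self.memory_array[node_id] = dict(self.cache[node_id])
        self.dirty.clear()
```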

LATERAL PERSISTENCE DIRECTORY STATES

Aspects of the invention include defining one or more processor units, each comprising a processor having at least one cache, with the processor units coupled together by an interconnect fabric. For each of the plurality of caches, a plurality of cache lines is arranged into one or more congruence classes, each congruence class comprising a chronology vector. Each cache in the plurality of caches is arranged into a cluster of caches based on a plurality of scope domains. A first cache line to evict is determined based on the chronology vector, and a target cache for installing the first cache line is determined based on a scope of the first cache line and a saturation metric associated with the target cache, wherein the scope of the first cache line is determined based on lateral persistence tag bits.
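The eviction-and-install decision can be sketched in Python. All names are illustrative: the chronology vector is modeled as a per-line age, the scope from the lateral persistence tag bits selects a cluster of candidate caches, and a candidate is accepted only if its saturation metric is below a threshold:

```python
def choose_eviction_and_target(congruence_class, clusters, saturation,
                               threshold):
    """Illustrative sketch of chronology-vector eviction with lateral install.

    congruence_class: list of (line, age) pairs; age stands in for the
        chronology vector's recency ordering.
    clusters: scope (decoded from lateral persistence tag bits) -> list of
        candidate target cache ids, in preference order.
    saturation: cache id -> saturation metric in [0, 1].
    """
    # evict the line the chronology vector marks as least recently used
    victim = max(congruence_class, key=lambda entry: entry[1])[0]

    scope = victim["lp_tag"]  # scope encoded in lateral persistence tag bits
    for cache_id in clusters.get(scope, []):
        if saturation[cache_id] < threshold:
            return victim, cache_id       # install laterally into this cache
    return victim, None                   # no viable target: cast out
```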