G06F16/24539

CONTINUOUS CLOUD-SCALE QUERY OPTIMIZATION AND PROCESSING
20230214385 · 2023-07-06 ·

Runtime statistics collected from the actual performance of operations on a set of data are used to dynamically modify the execution plan for processing that data. The operations performed are modified to include statistics collection operations, with the statistics tailored to the specific operations being quantified. An optimization policy defines how often optimization is attempted and how much more efficient a new execution plan must be to justify transitioning away from the current one. Optimization is based on the collected runtime statistics, but also takes into account already materialized intermediate data to gain further efficiency by avoiding reprocessing.
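The re-optimization gate described above can be sketched as a simple policy check; the names (`OptimizationPolicy`, `should_switch`) and the relative-gain formulation are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class OptimizationPolicy:
    interval_ops: int   # attempt re-optimization only after N completed operations
    min_gain: float     # required relative cost improvement, e.g. 0.2 = 20%

def should_switch(policy, current_cost, candidate_cost, ops_since_last_attempt):
    """Decide whether a re-optimized plan justifies abandoning the current one."""
    if ops_since_last_attempt < policy.interval_ops:
        return False                      # policy says: too soon to re-optimize
    if candidate_cost >= current_cost:
        return False                      # candidate plan is no better
    gain = (current_cost - candidate_cost) / current_cost
    return gain >= policy.min_gain        # switch only if the gain is large enough
```

In a full system, `candidate_cost` would itself be estimated from the collected runtime statistics and would credit intermediate results that are already materialized.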

Information processing system and non-transitory computer readable medium

An information processing system comprising a processor programmed to: receive a question asked by a questioner, an answer provided by an answerer to the question, and a rating by a rater with respect to at least one of the question and the answer; manage relationship information, the relationship information being related to the questioner, the answerer, and the rating by the rater; acquire attribute information about each of the questioner, the answerer, and the rater; and present rating information based on the relationship information and in response to a condition specified by a requester with respect to the attribute information.

Techniques for Tiered Cache in NoSQL Multiple Shard Stores
20230214387 · 2023-07-06 ·

Computer technology for: (i) performing prefetching based on shard workload in NoSQL; and/or (ii) performing distribution of stored data over the various tiers of a cache memory based on shard workload in NoSQL. This can help achieve better load balance among and between the shards of a database and the respectively associated nodes on which the shards are stored.
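Item (ii) can be sketched as ranking shards by observed workload and filling cache tiers from fastest to slowest; the function name, the workload metric, and the per-tier capacities are assumptions for illustration:

```python
def assign_tiers(shard_workloads, tier_capacities):
    """shard_workloads: {shard_id: requests/sec}.
    tier_capacities: number of shards each tier holds, ordered fastest first.
    Returns {shard_id: tier_index}, with hotter shards in faster tiers."""
    ranked = sorted(shard_workloads, key=shard_workloads.get, reverse=True)
    assignment, tier, used = {}, 0, 0
    for shard in ranked:
        # advance to the next tier once the current one is full
        while tier < len(tier_capacities) and used >= tier_capacities[tier]:
            tier += 1
            used = 0
        assignment[shard] = min(tier, len(tier_capacities) - 1)
        used += 1
    return assignment
```

For example, with one slot in the fast tier, only the busiest shard's data lands there and the rest spill to the slower tier.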

Workload pool hierarchy for a search and indexing system

Resource management includes storing, for multiple workload pools of a data intake and query system, a workload pool hierarchy arranged in multiple workload pool layers. After storing, a processing request is assigned a selected subset of workload pools in a second layer of the workload pool hierarchy based on the type of the processing request. The processing request is then assigned to an individual workload pool in the selected subset to obtain a selected workload pool. Execution of the processing request is initiated on the selected workload pool.
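The two-step assignment can be sketched as follows; the hierarchy layout, pool names, and least-loaded tie-breaking are assumptions, since the abstract does not say how the individual pool is chosen from the subset:

```python
# Second layer of a hypothetical hierarchy: request type -> subset of pools.
hierarchy = {
    "search": ["search-pool-1", "search-pool-2"],
    "ingest": ["ingest-pool-1"],
}

# Current number of requests running on each pool (illustrative state).
pool_load = {"search-pool-1": 3, "search-pool-2": 1, "ingest-pool-1": 0}

def select_pool(request_type):
    """Step 1: pick the subset by request type.
    Step 2: pick an individual pool from the subset (here: least loaded)."""
    subset = hierarchy[request_type]
    selected = min(subset, key=pool_load.get)
    pool_load[selected] += 1   # execution is initiated on the selected pool
    return selected
```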

MAKING DECISIONS FOR PLACING DATA IN A MULTI-TENANT CACHE

Placement decisions may be made to place data in a multi-tenant cache. Usage of multi-tenant cache nodes for performing access requests may be obtained. Usage prediction techniques may be applied to the usage to determine placement decisions for data amongst the multi-tenant cache nodes. Placement actions for the data amongst the multi-tenant cache nodes may be performed according to the placement decisions.
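As a minimal sketch of one possible usage prediction technique (the abstract names none), an exponential moving average over observed access counts can rank keys, with the hottest keys placed in the cache first; all names and the EWMA choice are assumptions:

```python
def predict_usage(history, alpha=0.5):
    """history: access counts per past interval; returns an EWMA prediction."""
    estimate = 0.0
    for count in history:
        estimate = alpha * count + (1 - alpha) * estimate
    return estimate

def place(keys_history, capacity):
    """Greedy placement decision: cache the keys with the highest
    predicted usage, up to the cache-node capacity."""
    predictions = {k: predict_usage(h) for k, h in keys_history.items()}
    ranked = sorted(predictions, key=predictions.get, reverse=True)
    return set(ranked[:capacity])
```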

USING QUERY LOGS TO OPTIMIZE EXECUTION OF PARAMETRIC QUERIES

The present disclosure relates to systems, methods, and computer-readable media for optimizing selection of a cached execution plan to use in processing a parametric query. For example, systems described herein involve training a plan selection model that makes use of machine learning to identify an execution plan from a set of pre-selected execution plans based on the predicted cost of executing a query instance in accordance with the selected execution plan (e.g., relative to the predicted costs of executing the query instance using other pre-selected execution plans). This application describes features related to lowering costs associated with selecting the execution plan in a way that will continue to become more accurate over time based on training and refining the plan selection model.
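The selection step reduces to picking the pre-selected plan with the lowest predicted cost for the query instance's parameters. In this sketch, a toy selectivity-based cost function stands in for the trained model; the function names, the feature, and the two plans are all assumptions:

```python
def select_plan(cached_plans, features, cost_model):
    """cached_plans: candidate plan ids pre-selected from the query log.
    cost_model(plan, features) -> predicted execution cost.
    Returns the plan with the minimum predicted cost."""
    return min(cached_plans, key=lambda plan: cost_model(plan, features))

def toy_cost(plan, features):
    """Stand-in for the trained plan selection model: cost depends on the
    parameter's selectivity, so different parameter values favor different plans."""
    selectivity = features["selectivity"]
    return {"index_scan": 10 + 1000 * selectivity,
            "full_scan": 300}[plan]
```

A selective parameter value favors the index scan; an unselective one favors the full scan, which is exactly the behavior a per-instance selection model aims to capture.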

REFLECTION CREATION SYSTEM, REFLECTION CREATION METHOD, AND REFLECTION CREATION PROGRAM
20220405283 · 2022-12-22 ·

A reflection creation system 1 includes a storage device 3 that holds a history 24 of reflections and an arithmetic device 2. Based on the content of the reflection represented by a new query, the arithmetic device selects from the history, among queries related to a past reflection having the same content as that reflection, a query whose reflection creation source differs from that of the new query, and creates the reflection based on the selected past query.

TIME AWARE CACHING
20220398245 · 2022-12-15 ·

The present disclosure relates to time aware caching. One method includes: receiving an API request for data from a database, wherein the request defines a time window associated with the data; creating a first and second query based on the request, wherein the first query corresponds to a first chunk of the time window and the second query corresponds to a second chunk of the time window; hashing a first statement associated with the first query to produce a first key and hashing a second statement associated with the second query to produce a second key; retrieving a first portion of the data, corresponding to the first chunk of the time window, from cache responsive to a determination that the first key is in the cache; and retrieving a second portion of the data, corresponding to the second chunk of the time window, from the database responsive to a determination that the second key is not in the cache.
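The method above can be sketched end to end: split the window into fixed-size chunks, hash each chunk's statement into a cache key, and serve each chunk from cache or from the database. The chunking scheme, SQL text, and SHA-256 key format are assumptions for illustration:

```python
import hashlib

def fetch_window(start, end, chunk, cache, query_db):
    """start/end/chunk: integer seconds; cache: dict of key -> rows;
    query_db(lo, hi): fetches rows for the half-open interval [lo, hi)."""
    results = []
    lo = start
    while lo < end:
        hi = min(lo + chunk, end)
        # A per-chunk statement yields the same key for the same chunk next time.
        statement = f"SELECT * FROM metrics WHERE ts >= {lo} AND ts < {hi}"
        key = hashlib.sha256(statement.encode()).hexdigest()
        if key in cache:                  # key in cache: reuse the stored chunk
            results.extend(cache[key])
        else:                             # key not in cache: hit the database
            rows = query_db(lo, hi)
            cache[key] = rows
            results.extend(rows)
        lo = hi
    return results
```

On a repeated request for an overlapping window, only the chunks whose keys are absent from the cache reach the database.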

Reducing requests using probabilistic data structures

Techniques are disclosed relating to providing and using probabilistic data structures to at least reduce requests between database nodes. In various embodiments, a first database node processes a database transaction that involves writing a set of database records to an in-memory cache of the first database node. As part of processing the database transaction, the first database node may insert, in a set of probabilistic data structures, a set of database keys that correspond to the set of database records. The first database node may send, to a second database node, the set of probabilistic data structures to enable the second database node to determine whether to request, from the first database node, a database record associated with a database key.
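A minimal Bloom filter illustrates the exchange: the writing node inserts the keys of the records in its in-memory cache, and the receiving node tests a key before deciding to send a request. The size and hash-count parameters are illustrative, and this sketch is not the patent's actual structure:

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, key):
        # Derive several bit positions by salting one hash function.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def might_contain(self, key):
        # False means definitely absent (no cross-node request needed);
        # True means possibly present (a request may still miss).
        return all(self.bits[pos] for pos in self._positions(key))
```

Because a negative answer is exact, the second node can skip a request for any key the filter rules out, which is what reduces traffic between the nodes.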

Caching objects from a data store
11520789 · 2022-12-06 · ·

In some examples, a database management node updates object metadata with indicators of access frequencies of a plurality of objects in a data store that is remotely accessible by the database management node over a network. The database management node selects a subset of the plurality of objects based on the indicators and caches the subset in local storage.
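The selection step can be sketched as a frequency-ordered greedy fill of the local storage budget; the metadata shape and the size-budget constraint are assumptions, since the abstract only says the subset is selected based on the indicators:

```python
def choose_cache_subset(object_metadata, budget_bytes):
    """object_metadata: {object_id: (access_count, size_bytes)}.
    Returns the most frequently accessed objects that fit in the budget."""
    by_frequency = sorted(object_metadata.items(),
                          key=lambda item: item[1][0], reverse=True)
    subset, used = [], 0
    for object_id, (count, size) in by_frequency:
        if used + size <= budget_bytes:   # skip objects that would overflow
            subset.append(object_id)
            used += size
    return subset
```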