G06F16/2246

SMART BALANCE TREE LOOKUP

The present disclosure describes techniques for performing a smart tree lookup operation in a balanced tree. The techniques according to the present disclosure comprise: identifying at least one data entry to be searched within the balanced tree; extracting a plurality of keys of the balanced tree; determining whether all or a subset of the plurality of keys are required for searching the at least one data entry within the balanced tree; in response to determining that the subset of keys is required for searching the at least one data entry, generating a first compare function for each of the at least one data entry using the subset of keys; and traversing a first path for the at least one data entry based on the first compare function. Accordingly, the techniques of the present disclosure enable an efficient balanced tree lookup operation.
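The lookup described above can be sketched in a few lines. This is an illustrative, hypothetical implementation (the names `make_compare_key`, `lookup`, and the dict-based entries are assumptions, not from the disclosure), assuming the key subset forms a prefix of the composite key so that projecting onto it preserves sort order:

```python
from bisect import bisect_left

def make_compare_key(subset_fields):
    """Return a compare function projecting an entry onto the needed key subset."""
    def key_fn(entry):
        return tuple(entry[f] for f in subset_fields)
    return key_fn

def lookup(index, subset_fields, target):
    """index: list of entries sorted by the full composite key."""
    key_fn = make_compare_key(subset_fields)
    keys = [key_fn(e) for e in index]   # projected keys keep the sort order
    pos = bisect_left(keys, target)     # stands in for the tree traversal
    return index[pos] if pos < len(keys) and keys[pos] == target else None

index = [{"a": 1, "b": 10}, {"a": 2, "b": 20}, {"a": 3, "b": 30}]
print(lookup(index, ("a",), (2,)))  # {'a': 2, 'b': 20}
```

A real balanced tree would apply the generated compare function at each node visited; the sorted list and binary search above model the same traversal decision.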

HOST, STORAGE SYSTEM INCLUDING THE HOST, AND OPERATING METHOD OF THE HOST

A host, a storage system, and an operating method of the host are provided. The host includes a host memory configured to store a tree structure including a leaf node and an index node, an index management module configured to manage an index based on the tree structure and generate a first log corresponding to the leaf node based on a first update request corresponding to a first key-value entry included in the leaf node, and a device driver configured to generate a first write command corresponding to the first log and transmit the generated first write command to a key-value storage device, so as to store the first log in the key-value storage device. The index management module is configured to generate a first new key-value entry, the first new key-value entry including a first value updated based on the first update request, as the first log.
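The update path above can be sketched as follows. This is a hedged, minimal model (the class names `IndexManager` and `KeyValueDevice` are illustrative assumptions): the update to a leaf-node entry is materialized as a new key-value entry that serves as the log, which the driver layer persists as a write command:

```python
class KeyValueDevice:
    """Stands in for the real key-value storage device."""
    def __init__(self):
        self.stored = []
    def write(self, entry):
        self.stored.append(entry)          # models the device write command

class IndexManager:
    def __init__(self, device):
        self.leaf = {}          # in-memory leaf node: key -> value
        self.device = device

    def update(self, key, new_value):
        log_entry = (key, new_value)       # new key-value entry acts as the log
        self.leaf[key] = new_value         # apply update to the in-memory tree
        self.device.write(log_entry)       # hand the log to the device driver

dev = KeyValueDevice()
idx = IndexManager(dev)
idx.update("k1", 42)
print(dev.stored)   # [('k1', 42)]
```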

Artificial intelligence decision making neuro network core system and information processing method using the same
11580404 · 2023-02-14

An artificial intelligence decision making neuro network core system and an information processing method using the same include an electronic device linking to an unsupervised neural network interface module, an asymmetric hidden layers input module linking to the unsupervised neural network interface module and a neuron module formed with tree-structured data, a layered weight parameter module linking to the neuron module formed with tree-structured data and a non-linear PCA (Principal Component Analysis) module, an input module of the lead backpropagation unit linking to the non-linear PCA module and a tuning module, and an output module of the lead backpropagation unit linking to the tuning module and the non-linear PCA module. When the electronic device receives raw data, the raw data is processed and learned via all the modules, and programs are updated to generate decision results that accommodate a variety of scenarios, in order to elevate the reference value and practicality of the decision result.

Efficient traversal of hierarchical datasets

In one embodiment, a method comprises receiving a request for a particular user identification (ID) to perform a particular operation on a particular data object. An entitlement cache associates each operation that the particular user ID is entitled to perform with a first encoding of a tuple of a plurality of tuples. An object mapping cache associates each tuple of the plurality of tuples with a second encoding of each tuple of the plurality of tuples. An object mapping is used to determine a first tuple. The object mapping cache is used to determine a first vector of one or more left values based on the first tuple. The entitlement cache is used to determine a second vector of one or more value pairs. In response to identifying a match between the first vector and the second vector, the particular user ID is granted access to the particular data object.
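The matching step can be illustrated with a small sketch. This assumes (as the "left values" and "value pairs" suggest, though the abstract does not confirm it) a nested-set encoding of the hierarchy, where each value pair is a (left, right) interval spanning a subtree and access is granted when an object's left value falls inside any entitled interval; the function name `is_entitled` is hypothetical:

```python
def is_entitled(object_left_values, entitled_pairs):
    """Grant access if any left value lies inside any entitled (left, right) interval."""
    return any(lo <= lv <= hi
               for lv in object_left_values
               for (lo, hi) in entitled_pairs)

# user is entitled to operate anywhere under the subtree spanning (4, 11)
print(is_entitled([7], [(4, 11)]))   # True
print(is_entitled([15], [(4, 11)]))  # False
```

The appeal of this encoding is that subtree containment becomes a constant-time interval test, so no recursive traversal of the hierarchy is needed per access check.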

Preventing DBMS deadlock by eliminating shared locking

A DBMS receives a database-access request that includes an instruction to non-destructively read a database table row. The DBMS assigns the request a TSN identifier and creates a TSN image that identifies all TSNs assigned to transactions that are not yet committed. The DBMS traverses a linked list of log entries that identifies a chronological history of transactions performed on the same row. The DBMS infers that the table row currently contains data stored by the most recently logged transaction that is not contained in the TSN image and that has thus been committed. The DBMS then continues to process statements of the transaction based on the assumption that the row contains the inferred value. The DBMS performs this procedure without acquiring a shared lock on the data page or on the index leaf page that points to the table row.
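The visibility inference above can be sketched as a snapshot check over the row's log chain. This is a hedged model, not the patented implementation (the function and variable names are illustrative): walk the chain newest-first and take the first version whose TSN is absent from the snapshot of uncommitted transactions:

```python
def current_row_value(log_chain, uncommitted_tsns):
    """log_chain: [(tsn, value), ...] ordered newest first.

    Return the value written by the most recent transaction that was
    already committed when the TSN image (snapshot) was taken.
    """
    for tsn, value in log_chain:
        if tsn not in uncommitted_tsns:   # committed at snapshot time
            return value
    return None                            # no committed version is visible

chain = [(105, "v3"), (101, "v2"), (90, "v1")]   # newest first
print(current_row_value(chain, {105}))  # 'v2': TSN 105 was still uncommitted
```

Because the reader only consults the log chain and its private snapshot, no shared lock on the data page or index leaf page is ever taken, which removes a class of reader-writer deadlocks.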

Implementing linear algebra functions via decentralized execution of query operator flows

A method for execution by a query processing system includes determining a query request that indicates a plurality of operators, where the plurality of operators includes at least one relational algebra operator and further includes at least one non-relational operator. A query operator execution flow is generated from the query request that indicates a serialized ordering of the plurality of operators. A query resultant of the query is generated by facilitating execution of the query via a set of nodes of a database system that each perform a plurality of operator executions in accordance with the query operator execution flow, where a subset of the set of nodes each execute at least one operator execution corresponding to the at least one non-relational operator in accordance with the execution of the query.
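A minimal sketch of such a flow, under loose assumptions (the operator names and list-of-rows representation are illustrative, not from the patent): a relational operator (a filter) and a non-relational, linear-algebra-style operator (a scaling) are placed in one serialized ordering and executed in sequence:

```python
def op_filter(rows, pred):
    """Relational algebra operator: selection."""
    return [r for r in rows if pred(r)]

def op_scale(rows, factor):
    """Non-relational operator: elementwise scaling of each row vector."""
    return [[x * factor for x in r] for r in rows]

def execute_flow(rows, flow):
    """Execute operators in their serialized ordering, piping rows through."""
    for op, arg in flow:
        rows = op(rows, arg)
    return rows

data = [[1, 2], [3, 4], [5, 6]]
flow = [(op_filter, lambda r: r[0] > 1), (op_scale, 10)]
print(execute_flow(data, flow))  # [[30, 40], [50, 60]]
```

In the patented system this flow is distributed across a set of nodes, with only a subset of nodes executing the non-relational operators; the sketch shows the single-node semantics of the serialized ordering.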

Cache conscious techniques for generation of quasi-dense grouping codes of compressed columnar data in relational database systems

Herein are techniques for dynamic aggregation of results of a database request, including concurrent grouping of result items in memory based on quasi-dense keys. Each of many computational threads concurrently performs as follows. A hash code is calculated that represents a particular natural grouping key (NGK) for an aggregate result of a database request. Based on the hash code, the thread detects that a set of distinct NGKs that are already stored in the aggregate result does not contain the particular NGK. A distinct dense grouping key for the particular NGK is statefully generated. The dense grouping key is bound to the particular NGK. Based on said binding, the particular NGK is added to the set of distinct NGKs in the aggregate result.
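The key-binding step can be illustrated single-threaded. This is a hedged sketch (a real implementation would use per-thread hash tables keyed by the hash code; the function name is an assumption): the first time a natural grouping key (NGK) is seen, it is statefully bound to the next dense code, and repeats reuse the binding:

```python
def assign_dense_keys(ngks):
    """Map each natural grouping key to a dense grouping code, in arrival order."""
    binding = {}                # NGK -> dense grouping key
    codes = []
    for ngk in ngks:
        if ngk not in binding:              # hash probe misses: new distinct NGK
            binding[ngk] = len(binding)     # statefully generate the next code
        codes.append(binding[ngk])
    return codes, binding

codes, binding = assign_dense_keys(["US", "DE", "US", "FR", "DE"])
print(codes)    # [0, 1, 0, 2, 1]
print(binding)  # {'US': 0, 'DE': 1, 'FR': 2}
```

Dense codes let downstream aggregation index directly into arrays instead of probing a hash table per row, which is where the cache-consciousness pays off.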

Free space management in a block store
11580013 · 2023-02-14

Various embodiments set forth techniques for free space management in a block store. The techniques include receiving a request to allocate one or more blocks in a block store, accessing a sparse hierarchical data structure to identify an allocator page identifying a region of a backing store having a greatest number of free blocks, and allocating the one or more blocks.
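The allocation policy can be sketched with a flat stand-in for the sparse hierarchy. This is an illustrative model only (the `BlockStore` class and its fields are assumptions): a per-region free-block count takes the place of the allocator pages, and allocation picks the region with the greatest number of free blocks:

```python
class BlockStore:
    def __init__(self, regions, blocks_per_region):
        # free-block count per region; models the allocator-page summaries
        self.free = [blocks_per_region] * regions

    def allocate(self, n):
        """Allocate n blocks from the region with the most free blocks."""
        region = max(range(len(self.free)), key=lambda i: self.free[i])
        if self.free[region] < n:
            raise MemoryError("no region has enough free blocks")
        self.free[region] -= n
        return region

store = BlockStore(regions=3, blocks_per_region=8)
store.free = [2, 8, 5]          # simulate prior allocations
print(store.allocate(4))        # 1 (the region with the most free blocks)
print(store.free)               # [2, 4, 5]
```

The sparse hierarchical structure in the patent serves the same role as the `max` scan here, but lets the fullest region be found without examining every region's count.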

Industrial data verification using secure, distributed ledger

A verification platform may include a data connection to receive a stream of industrial asset data, including a subset of the industrial asset data, from industrial asset sensors. The verification platform may store the subset of industrial asset data into a data store, the subset of industrial asset data being marked as invalid, and record a hash value associated with a compressed representation of the subset of industrial asset data combined with metadata in a secure, distributed ledger (e.g., associated with blockchain technology). The verification platform may then receive a transaction identifier from the secure, distributed ledger and mark the subset of industrial asset data in the data store as being valid after using the transaction identifier to verify that the recorded hash value matches a hash value of an independently created version of the compressed representation of the subset of industrial asset data combined with metadata.
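The hash-matching step can be sketched as follows. This is a hedged illustration (JSON plus zlib stand in for whatever compressed representation the platform actually uses, and the function name is an assumption): the same hash is computed over the compressed data-plus-metadata twice, once at recording time and once from an independently created copy, and the subset is marked valid only on a match:

```python
import hashlib
import json
import zlib

def record_hash(data, metadata):
    """Hash a compressed representation of the data combined with metadata."""
    blob = zlib.compress(json.dumps({"d": data, "m": metadata},
                                    sort_keys=True).encode())
    return hashlib.sha256(blob).hexdigest()

recorded = record_hash([1.2, 3.4], {"sensor": "s1"})   # stored in the ledger
recreated = record_hash([1.2, 3.4], {"sensor": "s1"})  # independent re-creation
print(recorded == recreated)   # True: safe to mark the subset as valid
```

Sorting the JSON keys makes the serialization deterministic, which matters here: the verification only works if the independently created representation is byte-identical.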

Allocating cache memory in a dispersed storage network

A method for execution by a dispersed storage network (DSN) managing unit includes receiving access information from a plurality of distributed storage and task (DST) processing units via a network. Cache memory utilization data is generated based on the access information. Configuration instructions are generated for transmission via the network to the plurality of DST processing units based on the cache memory utilization data.
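A minimal sketch of the last two steps, under stated assumptions (the proportional-allocation policy and the function name `generate_config` are illustrative, not from the patent): access counts reported by the DST processing units are aggregated into utilization data, and per-unit cache sizes are emitted in proportion to observed load:

```python
def generate_config(access_info, total_cache):
    """Apportion total_cache across units proportionally to their access counts."""
    total = sum(access_info.values()) or 1   # avoid division by zero
    return {unit: total_cache * count // total
            for unit, count in access_info.items()}

info = {"dst-1": 300, "dst-2": 100, "dst-3": 100}
print(generate_config(info, total_cache=1000))
# {'dst-1': 600, 'dst-2': 200, 'dst-3': 200}
```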