G06F16/24562

SMART CONTRACT-BASED DATA PROCESSING METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM

A data processing method, apparatus, and device based on a smart contract, and a storage medium, can improve the execution speed of a smart contract and reduce its running time. A contract call request for executing a transaction service is acquired, together with a first asynchronous variable statement corresponding to an asynchronous variable function name in the smart contract. A first memory and a second memory associated with a first variable parameter are queried based on the first asynchronous variable statement. A data read request is transmitted to the first memory and a separate data read request is transmitted to the second memory, in the asynchronous request manner indicated by the first asynchronous variable statement, to obtain first to-be-read data and second to-be-read data used for executing the transaction service.
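
The two reads can be issued concurrently rather than one after the other. A minimal Python sketch using `asyncio`, with plain dictionaries standing in for the first and second memories (all names here are illustrative, not from the patent):

```python
import asyncio

# Hypothetical stand-ins for the "first memory" and "second memory".
MEMORY_A = {"balance:alice": 100}
MEMORY_B = {"balance:bob": 50}

async def read_memory(memory, key):
    # Simulate an asynchronous storage read.
    await asyncio.sleep(0)
    return memory[key]

async def execute_transaction(key_a, key_b):
    # Issue both reads concurrently, as the asynchronous variable
    # statement indicates, instead of awaiting each read in turn.
    first, second = await asyncio.gather(
        read_memory(MEMORY_A, key_a),
        read_memory(MEMORY_B, key_b),
    )
    return first, second

result = asyncio.run(execute_transaction("balance:alice", "balance:bob"))
```

Because `asyncio.gather` preserves argument order, the first result always corresponds to the first memory regardless of which read completes first.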

Methods for updating reference count and shared objects in a concurrent system

A method to manage concurrent access to a shared resource in a distributed computing environment. A reference counter is incremented for every use of an object subtype in a session and decremented for every release of an object subtype in a session. A session counter is incremented upon the first instance of fetching an object type into a session cache and decremented upon having no instances of the object type in use in the session. When both the reference counter and the session counter are zero, the object type may be removed from the cache.
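
A minimal sketch of the two-counter scheme in Python (the class and method names are hypothetical):

```python
class TypeCache:
    """Per-type reference count plus session count; a type is
    evictable only when both counters have returned to zero."""

    def __init__(self):
        self.refs = {}      # object type -> active uses (reference counter)
        self.sessions = {}  # object type -> sessions holding it (session counter)

    def fetch(self, t):
        # First fetch of a type into the session cache bumps the session counter.
        if self.refs.get(t, 0) == 0:
            self.sessions[t] = self.sessions.get(t, 0) + 1
        self.refs[t] = self.refs.get(t, 0) + 1

    def release(self, t):
        self.refs[t] -= 1
        # No instances of the type left in use: release the session hold.
        if self.refs[t] == 0:
            self.sessions[t] -= 1

    def evictable(self, t):
        # Removable from the cache only when both counters are zero.
        return self.refs.get(t, 0) == 0 and self.sessions.get(t, 0) == 0
```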

Message Object Traversal In High-Performance Network Messaging Architecture
20230027817 · 2023-01-26 ·

A communications system implements instructions including maintaining a message object that includes an array of entries. Each entry of the array includes a field identifier, a data type, and a next entry pointer. The next entry pointers and a head pointer establish a linked list of entries. The instructions include, in response to a request to add a new entry to the message object, calculating an index based on a field identifier of the new entry and determining whether the entry at the calculated index within the array of entries is active. The instructions include, if the entry is inactive, writing a data type, field identifier, and data value of the new entry to the calculated index, and inserting the new entry into the linked list. The instructions include, if the entry is already active, selectively expanding the size of the array and repeating the calculating and determining.
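
The array-plus-linked-list layout can be sketched as follows. The index calculation, collision handling, and expansion policy are simplifying assumptions: the sketch uses `field_id % size` as the index function and does not handle collisions that arise while rehashing during expansion.

```python
class MessageObject:
    def __init__(self, size=8):
        # Each entry: [field_id, data_type, value, next_index], or None if inactive.
        self.entries = [None] * size
        self.head = None  # head pointer of the linked list of active entries

    def _index(self, field_id):
        # Calculate the array index from the field identifier.
        return field_id % len(self.entries)

    def add(self, field_id, data_type, value):
        # If the entry at the calculated index is already active,
        # selectively expand the array and repeat the calculation.
        while self.entries[self._index(field_id)] is not None:
            self._expand()
        idx = self._index(field_id)
        self.entries[idx] = [field_id, data_type, value, self.head]
        self.head = idx  # insert the new entry into the linked list

    def _expand(self):
        # Double the array and re-link the surviving entries.
        old = [e for e in self.entries if e is not None]
        self.entries = [None] * (len(self.entries) * 2)
        self.head = None
        for field_id, data_type, value, _ in old:
            idx = self._index(field_id)
            self.entries[idx] = [field_id, data_type, value, self.head]
            self.head = idx

    def get(self, field_id):
        entry = self.entries[self._index(field_id)]
        if entry is not None and entry[0] == field_id:
            return entry[2]
        return None
```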

STATELESS STREAM HANDLING AND RESHARDING
20230214396 · 2023-07-06 ·

Systems and methods are disclosed for stateless stream handling and resharding. In one implementation, a first shard including one or more messages is generated. The first shard is associated with a first shard version attribute. The first shard and the first shard version attribute are provided as a first atomic update within a data stream. The first shard is resharded into at least a second shard. The second shard is associated with a second shard version attribute. The second shard and the second shard version attribute are provided as a second atomic update within the data stream.
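
One way to picture the versioned atomic updates, with a Python list standing in for the data stream (the names and the even-split policy are illustrative assumptions):

```python
stream = []

def publish(shard_id, messages, version):
    # The shard and its shard version attribute travel together
    # as a single atomic update within the data stream.
    stream.append({"shard": shard_id, "messages": messages, "version": version})

def reshard(update, new_shard_id):
    # Splitting a shard produces updates with a newer version, so a
    # stateless consumer can order them without tracking shard lineage.
    half = len(update["messages"]) // 2
    publish(update["shard"], update["messages"][:half], update["version"] + 1)
    publish(new_shard_id, update["messages"][half:], update["version"] + 1)
```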

EFFECTIVE AND SCALABLE BUILDING AND PROBING OF HASH TABLES USING MULTIPLE GPUS
20230214225 · 2023-07-06 ·

Described approaches provide for effectively and scalably using multiple GPUs to build and probe hash tables and materialize results of probes. Random memory accesses by the GPUs to build and/or probe a hash table may be distributed across GPUs and executed concurrently using global location identifiers. A global location identifier may be computed from data of an entry and identify a global location for an insertion and/or probe using the entry. The global location identifier may be used by a GPU to determine whether to perform an insertion or probe using an entry and/or where the insertion or probe is to be performed. To coordinate GPUs in materializing results of probing a hash table, a global offset into the global output buffer may be maintained in memory accessible to each of the GPUs, or the GPUs may compute global offsets using an exclusive sum of the local output buffer sizes.
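
The coordination step can be illustrated in Python: an exclusive prefix sum over the local output buffer sizes gives each GPU a disjoint write offset into the global output buffer, and a modular hash stands in for the global location identifier (GPU count, bucket layout, and the hash function are assumptions for illustration):

```python
def exclusive_sum(sizes):
    # Exclusive prefix sum: each GPU's global offset is the total
    # size of all local output buffers that come before it.
    offsets, total = [], 0
    for s in sizes:
        offsets.append(total)
        total += s
    return offsets

def materialize(local_buffers):
    # Copy each local buffer into the global output buffer at its
    # computed offset; the writes are disjoint, so they can proceed
    # concurrently without further coordination.
    offsets = exclusive_sum([len(b) for b in local_buffers])
    out = [None] * sum(len(b) for b in local_buffers)
    for off, buf in zip(offsets, local_buffers):
        out[off:off + len(buf)] = buf
    return out

def owner_gpu(key, num_gpus, buckets_per_gpu):
    # A global location identifier computed from the entry's data; it
    # selects both the GPU that handles the entry and the bucket used.
    gid = hash(key) % (num_gpus * buckets_per_gpu)
    return gid // buckets_per_gpu, gid % buckets_per_gpu
```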

Key-value storage using a skip list

This disclosure provides various techniques that may allow for accessing values stored in a data structure that stores multiple values corresponding to database transactions using a skip list. A key may be used to traverse the skip list to access data associated with the key. The skip list maintains an ordering of multiple keys, each associated with a particular record in the data structure, using indirect links between data records in the data structure that reference buckets included in a hash table. Each bucket includes pointers to one or more records in the skip list.
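
A toy Python illustration of the indirection, in which each record stores the hash bucket of its successor rather than a direct pointer, and buckets hold the pointers back to records (the bucket count, integer keys, and successor search are simplifying assumptions; skip-list levels are omitted):

```python
buckets = {}   # bucket id -> list of record keys (pointers into the list)
records = {}   # key -> {"value": ..., "next_bucket": ...}

def bucket_of(key, num_buckets=4):
    # Hash-table bucket referenced by the indirect link.
    return key % num_buckets

def insert_sorted(keys_values):
    # Build the ordered chain using indirect (bucketed) links.
    ordered = sorted(keys_values)
    for (k, v), nxt in zip(ordered, ordered[1:] + [(None, None)]):
        nb = bucket_of(nxt[0]) if nxt[0] is not None else None
        records[k] = {"value": v, "next_bucket": nb}
        buckets.setdefault(bucket_of(k), []).append(k)

def traverse(start_key):
    # Follow indirect links: record -> successor's bucket -> successor record.
    out, k = [start_key], start_key
    while True:
        nb = records[k]["next_bucket"]
        if nb is None:
            return out
        # Pick the smallest key in the bucket larger than the current one.
        candidates = [c for c in buckets[nb] if c > k]
        if not candidates:
            return out
        k = min(candidates)
        out.append(k)
```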

System and method for providing bottom-up aggregation in a multidimensional database environment

In accordance with an embodiment, the system supports bottom-up aggregation in a multidimensional database computing environment. A dynamic flow is coupled with a data retrieval layer or data-fetching component, which in some environments can incorporate a kernel-based data structure, referred to herein as an odometer retriever, or odometer. The odometer manages pointers to data blocks, contains control information, and otherwise operates as an array of arrays of pointers to stored members. This enables bottom-up aggregation of cube data which, for example with pure aggregating queries, provides considerable run-time improvement.
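
A minimal Python sketch of the array-of-arrays-of-pointers idea behind the odometer, driving a bottom-up sum over leaf data blocks (the member names and single-dimension layout are illustrative assumptions):

```python
# Stored data blocks, keyed by member name.
blocks = {"jan": 10, "feb": 20, "mar": 30}

# The odometer: an array of arrays of pointers to stored members.
# Each inner array corresponds to one dimension of the cube.
odometer = [
    ["jan", "feb", "mar"],  # pointers to the month members
]

def aggregate_bottom_up(odometer, blocks):
    # Pure aggregating query: walk the pointer arrays and sum the
    # leaf blocks, instead of computing upper levels top-down.
    total = 0
    for pointer_array in odometer:
        for pointer in pointer_array:
            total += blocks[pointer]
    return total
```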

Media sharing across service providers
11514099 · 2022-11-29 ·

Embodiments including methods and apparatus to share files and file recommendations are disclosed. Data is received indicating a particular media item from a first service provider, where the particular media item is accessible from the first service provider according to a first pointer. A second pointer is identified in a database according to which the particular media item is accessible from a second service provider. Data indicating the second pointer is transmitted to a media playback system via at least one of a WAN or a LAN.
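
The cross-provider lookup can be pictured as a keyed pointer table; the item identifiers and URI schemes below are invented for illustration:

```python
# Hypothetical pointer database: the same media item is reachable
# through a different pointer (URI) at each service provider.
pointer_db = {
    ("item-42", "provider_a"): "a://media/42",
    ("item-42", "provider_b"): "b://tracks/42",
}

def resolve(item_id, target_provider):
    # Identify the pointer under which the target provider
    # exposes the same media item, if any.
    return pointer_db.get((item_id, target_provider))
```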

Method and system for encapsulating and storing information from multiple disparate data sources
11507556 · 2022-11-22 ·

An example computer-implemented method and computer system, each adapted for encapsulating digital data records in multiple, differently structured and unstructured formats, the data records being ingested from multiple data storage locations, are described herein. In the method, each ingested data record is separated into a plurality of tuple structures, and each tuple is split into a data part and a fieldname part. A pointer is created by combining the fieldname part, a record identifier of the data record, and a database identifier of the storage location where the data record was stored. The pointer is appended to the data part to form a digital stem cell (DSC) that is stored in a single data store, each formed DSC having the same structure.
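
A minimal Python sketch of forming DSCs from one ingested record (the dictionary shapes and identifier formats are assumptions):

```python
def to_dscs(record, record_id, db_id):
    # Each (fieldname, value) tuple yields one digital stem cell:
    # the value is the data part, and the appended pointer combines
    # the fieldname, the record identifier, and the identifier of
    # the database where the record was stored.
    dscs = []
    for fieldname, value in record.items():
        pointer = (fieldname, record_id, db_id)
        dscs.append({"data": value, "pointer": pointer})
    return dscs
```

Every DSC has the same two-field structure regardless of the source record's original format, which is what allows them to share a single data store.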

DATA SECURITY USING RANDOMIZED FEATURES
20220358236 · 2022-11-10 ·

Data security using randomized features provides improved protection of user data within a cloud infrastructure. Received files are broken apart into data blocks that are randomly written into storage locations, which are recorded in sequence into a key comprising an array of pointers. Data blocks may be randomly sized between maximum and minimum parameters. Storage locations may first be tested to prevent unwanted overwrites of preexisting data; undersized locations may receive a partial write, plus a pointer to an overflow location into which the remainder of the data is written. Randomized data storage is kept separate and isolated from pointer-based key storage via separate communication channels and separate storage infrastructures. Download speeds may be boosted via parallel processing of data blocks out of storage and into reassembly according to the key's pointer sequence. Reassembled files may be worked upon and then saved back into the cloud infrastructure.
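
A seeded Python sketch of the split-scatter-and-key scheme; the block-size bounds, slot count, and free-slot test are illustrative assumptions, and the partial-write/overflow handling described above is omitted:

```python
import random

def store_file(data, storage, min_block=2, max_block=4, rng=None):
    # Break the file into randomly sized blocks and write each into a
    # random free storage slot, recording the slots in sequence into
    # the key (an array of pointers).
    rng = rng or random.Random(0)
    key, i = [], 0
    while i < len(data):
        size = rng.randint(min_block, max_block)  # random size within bounds
        block = data[i:i + size]
        i += size
        while True:
            loc = rng.randrange(len(storage))
            if storage[loc] is None:  # test first to avoid overwriting data
                storage[loc] = block
                key.append(loc)
                break
    return key

def retrieve_file(key, storage):
    # Reassemble by following the key's pointer sequence; since the
    # slots are independent, the blocks could be fetched in parallel.
    return b"".join(storage[loc] for loc in key)
```

Keeping `key` and `storage` in separate infrastructures, as the abstract describes, means neither alone reveals the file.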