Patent classifications
G06F2212/465
PARALLEL DATA SYNCHRONIZATION OF HIERARCHICAL DATA
A data sync cache is maintained to facilitate syncing of child data objects between a first computing system and a second computing system. Responsive to successful syncing of a parent data object of a child data object by a first sync engine, parent object data sync information indicating that the parent data object was successfully synced is written to the data sync cache. Prior to initiating a sync of the child data object by a second sync engine different from the first sync engine, a cache lookup of the data sync cache is performed to determine if the sync information is contained therein. If the data sync cache includes the sync information, the child data object sync is initiated. In this manner, failed syncs of child data objects are reduced along with the expensive API calls to the second computing system that would otherwise be required to retry failed syncs.
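The cache-gated child sync described in this abstract can be sketched roughly as follows. All names here are illustrative assumptions, not identifiers from the patent:

```python
# Hypothetical sketch of the described data sync cache: the cache records
# which parent objects synced successfully, and a child sync is only
# initiated after a cache lookup confirms its parent is present.

class DataSyncCache:
    """In-memory cache recording which parent objects synced successfully."""

    def __init__(self):
        self._synced_parents = set()

    def record_parent_synced(self, parent_id):
        # Written by the first sync engine after a successful parent sync.
        self._synced_parents.add(parent_id)

    def parent_was_synced(self, parent_id):
        return parent_id in self._synced_parents


def sync_child(cache, child_id, parent_id, sync_fn):
    """Initiate a child sync only if the cache shows its parent was synced.

    Skipping the sync avoids the expensive API call to the second system
    that would fail because the parent does not exist there yet.
    """
    if not cache.parent_was_synced(parent_id):
        return False  # defer the child sync; parent not yet synced
    sync_fn(child_id)
    return True
```

The lookup replaces a trial-and-error API call with a cheap local check, which is where the claimed reduction in failed syncs comes from.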
System and method for improved performance in a multidimensional database environment
In accordance with an embodiment, described herein is a system and method for improving performance within a multidimensional database computing environment. A multidimensional database, utilizing a block storage option, performs numerous input/output (I/O) operations when executing calculations. To separate I/O operations from calculations, a background task queue is created to identify data blocks requiring I/O. The background task queue is utilized by background writer threads to execute the I/O operations in parallel with calculations.
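The separation of I/O from calculation via a background task queue might look like the following sketch (thread count, sentinel shutdown, and all names are assumptions):

```python
# Illustrative sketch: the calculation thread enqueues data blocks that
# require I/O, while background writer threads drain the queue and perform
# the writes in parallel with the ongoing calculation.
import queue
import threading


def run_calculation(blocks, write_block):
    task_queue = queue.Queue()

    def background_writer():
        while True:
            block = task_queue.get()
            if block is None:          # sentinel: shut this writer down
                task_queue.task_done()
                break
            write_block(block)         # I/O happens off the calc thread
            task_queue.task_done()

    writers = [threading.Thread(target=background_writer) for _ in range(2)]
    for w in writers:
        w.start()

    results = []
    for block in blocks:
        results.append(block * 2)      # stand-in for the real calculation
        task_queue.put(block)          # hand the dirty block to the writers

    for _ in writers:
        task_queue.put(None)
    for w in writers:
        w.join()
    return results
```

The calculation never blocks on a write; it only pays the cost of an enqueue.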
Database tool
A memory stores a first cache and a second cache. A processor copies a first portion of data from a first table stored in a database into a second table. The processor further determines that a second portion of data from the first table will be overwritten and copies the second portion into a third table. The processor further determines that a probability that a user will access a third portion of the first table is greater than a threshold and copies the third portion into the first cache. The processor further determines a fourth portion of the first table that the user accesses at a frequency greater than a set frequency and copies the fourth portion into the second cache.
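The two cache-placement decisions in this abstract (probability of access versus frequency of access) can be sketched as a simple policy function. Thresholds and names are assumptions for illustration:

```python
# Hypothetical sketch of the placement policy: portions accessed above a
# set frequency go to the second cache, portions whose access probability
# exceeds a threshold go to the first cache, the rest stay in the database.

def place_portions(portions, access_prob, access_freq,
                   prob_threshold=0.5, freq_threshold=10):
    """Map each portion of the first table to a destination."""
    placement = {}
    for p in portions:
        if access_freq.get(p, 0) > freq_threshold:
            placement[p] = "second_cache"
        elif access_prob.get(p, 0.0) > prob_threshold:
            placement[p] = "first_cache"
        else:
            placement[p] = "database"
    return placement
```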
Elastic Columnar Cache for Cloud Databases
A method for providing elastic columnar cache includes receiving cache configuration information indicating a maximum size and an incremental size for a cache associated with a user. The cache is configured to store a portion of a table in a row-major format. The method includes caching, in a column-major format, a subset of the plurality of columns of the table in the cache and receiving a plurality of data requests requesting access to the table and associated with a corresponding access pattern requiring access to one or more of the columns. While executing one or more workloads, the method includes, for each column of the table, determining an access frequency indicating a number of times the corresponding column is accessed over a predetermined time period and dynamically adjusting the subset of columns based on the access patterns, the maximum size, and the incremental size.
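The dynamic adjustment step can be sketched as a greedy selection over per-column access counts, bounded by the configured maximum size. The sizes, counts, and function name are assumptions:

```python
# Minimal sketch of the adjustment step: keep the most frequently accessed
# columns in the column-major cache, admitting columns in frequency order
# until the next column would exceed the maximum cache size.

def adjust_cached_columns(column_sizes, access_counts, max_size):
    """Pick the subset of columns to cache in column-major format."""
    chosen, used = [], 0
    ranked = sorted(column_sizes,
                    key=lambda c: access_counts.get(c, 0),
                    reverse=True)
    for col in ranked:
        if used + column_sizes[col] <= max_size:
            chosen.append(col)
            used += column_sizes[col]
    return chosen
```

Rerunning this after each measurement window gives the elastic behavior: the cached subset grows, shrinks, or swaps columns as workload access patterns shift.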
Techniques for providing I/O hints using I/O flags
Techniques for processing I/O operations may include: issuing, by a process of an application on a host, an I/O operation; determining, by a driver on the host, that the I/O operation is a read operation directed to a logical device used as a log to log writes performed by the application, wherein the read operation reads first data stored at one or more logical addresses of the logical device; storing, by the driver, an I/O flag in the I/O operation, wherein the I/O flag has a first flag value denoting an expected read frequency associated with the read operation; sending the I/O operation from the host to the data storage system; and performing first processing of the I/O operation on the data storage system, wherein said first processing includes using the first flag value in connection with caching the first data in a cache of the data storage system.
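The host-side tagging and storage-side use of the hint might be sketched as follows. The flag value, dict-based I/O representation, and caching decision are assumptions, not the actual driver or array interface:

```python
# Hedged sketch: the host driver tags a read directed at a log device with
# an expected-read-frequency hint; the storage system later consults the
# flag when deciding how to cache the returned data.

SEQUENTIAL_LOG_READ = 0x01   # hypothetical flag: log data rarely re-read


def tag_io(io_op, log_devices):
    """Driver step: set an I/O hint flag on reads to a log device."""
    if io_op["op"] == "read" and io_op["device"] in log_devices:
        io_op["flag"] = SEQUENTIAL_LOG_READ
    return io_op


def cache_decision(io_op, data):
    """Storage-side step: use the flag value when caching the read data.

    Log reads are typically sequential and read once, so flagged data is
    marked for early eviction rather than retained like ordinary reads.
    """
    if io_op.get("flag") == SEQUENTIAL_LOG_READ:
        return ("cache_evict_early", data)
    return ("cache_normal", data)
```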
Low latency access to data sets using shared data set portions
Systems and methods are described for providing rapid access to data sets used by serverless function executions. Rather than pre-loading an entire data set into an environment of a serverless function, which might incur large latencies, the environment is provided with a local access view of the data set, such as in the form of a read-only mount point. As blocks within the data set are requested, a local process can translate the requests into requests for corresponding network objects. The network objects are then retrieved, and the relevant portion of the object is made available to the environment. Network objects may be shared among multiple data sets, so a host device may include a cache enabling an object retrieved for a first environment to also be used to service requests from a second environment.
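The block-to-object translation and the shared host cache can be sketched as below. The object layout, block map, and fetch callback are illustrative assumptions:

```python
# Illustrative sketch: block reads against a local view are translated into
# network-object fetches, with a host-level cache so an object fetched for
# one serverless environment can also serve requests from another.

class HostObjectCache:
    def __init__(self, fetch_object):
        self._fetch = fetch_object     # stand-in for a network/object-store call
        self._objects = {}
        self.fetch_count = 0

    def get(self, object_id):
        if object_id not in self._objects:
            self._objects[object_id] = self._fetch(object_id)
            self.fetch_count += 1      # only the first environment pays the fetch
        return self._objects[object_id]


def read_block(cache, block_map, block_id):
    """Translate a block read into the relevant slice of a network object."""
    object_id, offset, length = block_map[block_id]
    data = cache.get(object_id)
    return data[offset:offset + length]
```

Because the environment only triggers fetches for blocks it actually reads, the function avoids the startup latency of pre-loading the entire data set.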
Metadata cache for storing manifest portion
Example implementations relate to storing manifest portions in a metadata cache. An example includes receiving, by a storage controller, a read request associated with a first data unit. In response to receiving the read request, the storage controller stores a manifest portion in a metadata cache, the stored manifest portion comprising a plurality of records, the plurality of records including a first record associated with the first data unit. The storage controller determines storage information of the first data unit using pointer information included in the first record of the stored manifest portion, and replaces the pointer information in the first record with the determined storage information of the first data unit.
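The pointer-replacement step can be sketched as follows; the record layout and all names are assumptions rather than the patented data structures:

```python
# Hypothetical sketch: on a read request, the manifest portion is stored in
# the metadata cache, the pointer in the matching record is resolved to
# storage information, and the record is updated in place so later lookups
# skip the indirection.

def lookup_data_unit(metadata_cache, manifest, container_index, unit_id):
    """Return storage info for unit_id, caching and updating the manifest."""
    if "manifest" not in metadata_cache:
        metadata_cache["manifest"] = dict(manifest)   # store the portion
    record = metadata_cache["manifest"][unit_id]
    if "storage_info" not in record:
        record = dict(record)
        # Resolve the pointer once, then replace it with the result.
        record["storage_info"] = container_index[record.pop("pointer")]
        metadata_cache["manifest"][unit_id] = record
    return record["storage_info"]
```

After the first lookup, the cached record answers directly, which is the benefit of replacing the pointer rather than merely following it.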
Data loading method, data loading apparatus, and recording medium
A non-transitory computer-readable recording medium stores a program that causes a computer to execute processing including: allocating a plurality of records to a page in shared memory that can be accessed simultaneously by a plurality of processes; receiving the plurality of records; writing, for each of the plurality of records, information indicating a writing region on the page, and generating a writing process corresponding to each record; generating, based on the records written by the executed writing processes to the writing regions indicated by the information on the page, the page with at least one of the records written; and loading the generated page into the database.
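The region-allocation and parallel-write steps can be sketched as below. A plain byte buffer stands in for the shared-memory page, and threads stand in for the per-record writing processes; all of this is an illustrative assumption:

```python
# Illustrative sketch: writing regions on the page are allocated per record
# up front, each record is written to its own region concurrently, and the
# assembled page is returned ready to be loaded into the database.
import threading


def build_page(records, page_size=64):
    page = bytearray(page_size)
    regions, offset = [], 0
    for rec in records:                 # allocate a writing region per record
        regions.append((offset, len(rec)))
        offset += len(rec)

    def write(rec, region):             # one writing "process" per record
        start, length = region
        page[start:start + length] = rec

    threads = [threading.Thread(target=write, args=(r, reg))
               for r, reg in zip(records, regions)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return bytes(page[:offset])         # the generated page, ready to load
```

Because each writer owns a disjoint region, the writes need no locking, which is what makes the parallel loading safe.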
High-performance implementation of sharing of read-only data in a multi-tenant environment
A container is a collection of schemas, objects, and related structures in a multitenant container database (CDB) that appears logically to an application as a separate database. Within a CDB, each container has a unique ID and name. The root database and every PDB is considered a container. PDBs isolate data and operations so that, from the perspective of a user or application, each PDB appears as if it were a traditional non-CDB. A database management system that manages a container database is a container database management system (CDBMS). Data and metadata in the root database may include common schemas that make frequently used functionality available CDB-wide. To execute a query that accesses a common schema, sessions of a PDB may access the common schema without switching database contexts.