G06F16/2336

TRANSACTIONAL KEY-VALUE STORE
20170371912 · 2017-12-28 ·

Example implementations herein can be used to build, maintain, and access databases in multi-core computing systems with large volatile random-access memory (VRAM) and huge non-volatile random-access memory (NVRAM). The database, with optimistic concurrency control, can be built on a transactional key-value data store that includes logically equivalent data pages stored in both VRAM and NVRAM. Data records in volatile data pages in the VRAM represent the most recent version of the data. Data records in the NVRAM are immutable and are organized in a stratified composite snapshot. A distributed log gleaner process is used to process log entries corresponding to transactions on the volatile data pages and construct the snapshot. The log gleaner sorts the log entries by epoch, key range, and most recent use to partition the snapshot across multiple nodes.
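A minimal sketch of the log gleaner's partitioning step, assuming log entries are `(epoch, key, payload)` tuples and each key range maps to one snapshot node. The names and data shapes are illustrative, not the patented implementation.

```python
from collections import defaultdict

def glean(log_entries, key_ranges):
    """Sort log entries by epoch, then bucket them by key range so each
    (epoch, range) partition can be assigned to a snapshot node."""
    partitions = defaultdict(list)  # (epoch, range_index) -> entries
    for epoch, key, payload in sorted(log_entries):
        for i, (lo, hi) in enumerate(key_ranges):
            if lo <= key < hi:
                partitions[(epoch, i)].append((key, payload))
                break
    return dict(partitions)
```

Each resulting partition holds only the log entries one node needs to construct its slice of the composite snapshot.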

Write lock conflicts in a storage network

A storage unit operates by: receiving a write slice request, wherein the write slice request includes a plurality of encoded data slices and wherein the write slice request corresponds to a range; determining whether a write lock conflict exists based on the range; issuing an unfavorable write slice response when the write lock conflict is determined to exist; and when the write lock conflict is determined to not exist: initiating local storage of the plurality of encoded data slices; and issuing a favorable write slice response.
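A hedged sketch of that decision logic, treating a range as a half-open interval and a conflict as an overlap with any range already write-locked. All names here are assumptions for illustration.

```python
class StorageUnit:
    def __init__(self):
        self.locked_ranges = []  # (start, end) ranges with active write locks
        self.stored = []

    def handle_write_slice_request(self, slices, rng):
        start, end = rng
        # A write lock conflict exists when the requested range overlaps
        # any currently locked range.
        conflict = any(start < l_end and l_start < end
                       for l_start, l_end in self.locked_ranges)
        if conflict:
            return "unfavorable"            # issue unfavorable response
        self.locked_ranges.append(rng)      # take the write lock
        self.stored.extend(slices)          # initiate local storage
        return "favorable"                  # issue favorable response
```

A second request whose range overlaps an outstanding write is rejected rather than queued, pushing retry policy to the requester.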

SHARED MEMORY-BASED TRANSACTION PROCESSING
20170345094 · 2017-11-30 ·

Described is a method for updating a first computer memory of a first transaction engine that processes transactions of a first topic and a second computer memory of a second transaction engine that processes transactions of a second topic different from the first topic, where the transactions involve or update a common factor. After the first transaction engine processes a transaction, it notifies a separate process affiliated or associated with each of the remaining transaction engines of the execution of the transaction. Each such associated process updates a local shared memory that it shares with the respective transaction engine. A memory master may also be notified of the transactions and their completion, and the memory master may maintain state information. In a stock market or other electronically-implemented exchange or bourse context, the transactions may be orders for matching engines in an order book.
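A loose sketch of the fan-out described above: after one engine executes a transaction touching a common factor, companion processes update the shared memory of every other engine. The classes and the dict-as-shared-memory are illustrative assumptions.

```python
class Engine:
    def __init__(self, topic, shared_memory):
        self.topic = topic
        self.shared = shared_memory  # memory shared with its companion process

class Companion:
    """Separate process associated with one engine (modeled as an object)."""
    def __init__(self, engine):
        self.engine = engine

    def on_transaction(self, factor, value):
        self.engine.shared[factor] = value  # update the local shared memory

def execute(engine, companions, factor, value):
    engine.shared[factor] = value            # first engine applies it locally
    for c in companions:                     # notify remaining engines
        if c.engine is not engine:
            c.on_transaction(factor, value)
```

In an exchange context, `factor` might be a security's last price, kept consistent across matching engines for different order books.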

Temporal Logical Transactions

In supporting temporal logical transactions, a database management system (DBMS) determines that a temporal logical transaction time (T) is set for a temporal logical transaction. The DBMS receives a change request for a current row in a current table. A history row for a history table corresponding to the current table is created. The values in the history row are set to the values in the current row, where a begin time in the history row has same value as a begin time in the current row, and an end time in the history row is set to T. When the begin time equals the end time in the history row, the DBMS does not store the history row in the history table. The values in the current row are changed according to the change request, and the begin time in the current row is set to T.
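The history-row rule above can be sketched as follows, assuming rows are dicts with `begin`, `end`, and data fields; the row layout is a hypothetical simplification of the DBMS tables.

```python
def apply_change(current_row, change, T, history_table):
    """Apply a change request at temporal logical transaction time T."""
    history_row = dict(current_row)   # copy values from the current row
    history_row["end"] = T            # end time of the history row is T
    # When begin equals end, the history row covers no time: do not store it.
    if history_row["begin"] != history_row["end"]:
        history_table.append(history_row)
    current_row.update(change)        # change values per the change request
    current_row["begin"] = T          # begin time of the current row is T
    return current_row
```

The `begin == end` check collapses multiple changes made at the same transaction time into a single visible version.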

SYSTEMS AND METHODS FOR IMPLEMENTING A MULTI-HOST RECORD LOCK DEADLOCK FEEDBACK MECHANISM

A method includes retrieving, by a processor, a first entry from a global wait list as a current waiting lock. The method further includes decreasing, by the processor, a deadlock timer of the current waiting lock. The method further includes determining, by the processor, whether the deadlock timer equals zero. The method further includes appending, by the processor, the current waiting lock to an end of a deadlock victim selection list, if the deadlock timer equals zero. The method further includes selecting, by the processor, a victim from the deadlock victim selection list.
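A minimal sketch of that loop, assuming waiting locks are dicts with a `timer` field; the victim policy shown (first entry of the selection list) is an assumption, as the abstract does not specify one.

```python
def scan_global_wait_list(global_wait_list):
    """One pass over the global wait list: decrement deadlock timers and
    select a victim from locks whose timer reached zero."""
    victim_selection = []
    for waiting_lock in global_wait_list:  # retrieve each entry in turn
        waiting_lock["timer"] -= 1         # decrease its deadlock timer
        if waiting_lock["timer"] == 0:     # expired: append to victim list
            victim_selection.append(waiting_lock)
    # Select a victim from the deadlock victim selection list, if any.
    return victim_selection[0] if victim_selection else None
```

Aborting the victim releases its held locks, letting the remaining waiters in the cycle proceed.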

COMPUTER-READABLE RECORDING MEDIUM FOR STORING DATA PROCESSING PROGRAM, DATA PROCESSING METHOD, AND DATA PROCESSING APPARATUS
20220058204 · 2022-02-24 ·

A method includes: repeatedly executing a first processing configured to generate, in response to input of conversion examples of values, a conversion program for converting values of records, convert the values of the records by executing the conversion program, and display a conversion result; and executing a second processing configured to select one or more second records from the records, each of the one or more second records being a record on an upper side of a first record, the first record being a record for which a conversion example is added in a second or subsequent input, determine whether a value of each second record is changed in a latest conversion result from a previous conversion result, and, in response to a value of a second record being changed, cause the first processing to highlight the value of that second record in display of the latest conversion result.
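The change-detection step of the second processing can be sketched as a diff over the records above the newly exemplified one; the list-of-strings representation is an illustrative assumption.

```python
def rows_to_highlight(previous, latest, first_record_index):
    """Return indices of second records (above the first record) whose value
    changed between the previous and latest conversion results."""
    changed = []
    for i in range(first_record_index):  # records on the upper side only
        if previous[i] != latest[i]:     # value changed by the new example
            changed.append(i)
    return changed
```

Highlighting these rows shows the user which earlier conversions were silently altered by the conversion example they just added.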

Using a local cache to store, access and modify files tiered to cloud storage

Systems and methods are provided herein for efficient local caching of data tiered to cloud storage to help reduce the bandwidth cost of repeated reads and writes to the same region of a stubbed file, increase the performance of write operations, and increase performance of read operations to portions of a stubbed file accessed repeatedly. When operations are directed toward data tiered to the cloud, the data can be read from cloud storage and stored within a local cache. A cache tracking tree can be generated and used to track file regions of a stub file, cached states associated with regions of the stub file, a set of cache flags, and other file and mapping data. For example, the cache state of regions of a stub file can be tracked including a cached data state, a non-cached state, a modified state, or a truncated state. Operations directed toward stubbed files can then look to the cache tracking tree to determine the most efficient way to access, retrieve, or operate on the data that maximizes local file system performance while reducing network activity.
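A hedged sketch of a per-region cache tracking structure consulted before choosing between local and cloud reads. The region size, state names, and flat-dict layout (rather than a tree) are simplifying assumptions.

```python
REGION = 4096  # bytes per tracked region (assumed)

class CacheTracker:
    def __init__(self):
        # region index -> "cached" | "modified" | "truncated"
        self.states = {}

    def state(self, offset):
        # Untracked regions of the stub file are non-cached by default.
        return self.states.get(offset // REGION, "non-cached")

    def mark(self, offset, state):
        self.states[offset // REGION] = state

    def read_plan(self, offset):
        # Non-cached regions must be fetched from cloud storage; cached,
        # modified, and truncated regions are served from the local cache.
        return "cloud" if self.state(offset) == "non-cached" else "local"
```

Repeated reads of the same region hit the local cache after the first cloud fetch, which is the bandwidth saving the abstract describes.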

Shared volumes in distributed RAID over shared multi-queue storage devices

A method for data storage, in a system that includes multiple servers, multiple multi-queue storage devices and at least one storage controller that communicate over a network, includes receiving in a server, from an application running on the server, a request to access data belonging to one or more stripes. The stripes are stored on the storage devices and are shared with one or more other servers. In response to the request, the following are performed at least partially in parallel: (i) requesting one or more global locks that prevent the other servers from accessing the stripes, and (ii) reading at least part of the stripes from the storage devices speculatively, irrespective of whether the global locks are granted. Execution of the request is completed upon verifying that the speculatively-read data is valid.
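An illustrative sketch (not the patented implementation) of running the speculative read in parallel with lock acquisition, validating the read by a storage version number once the locks are held. The stub interfaces are assumptions.

```python
import threading

def access_stripes(lock_mgr, storage, stripe_ids):
    result = {}

    def speculative_read():
        # (ii) read the stripes irrespective of whether locks are granted
        result["data"] = [storage.read(s) for s in stripe_ids]
        result["version"] = storage.version

    t = threading.Thread(target=speculative_read)
    t.start()
    lock_mgr.acquire(stripe_ids)   # (i) request the global locks in parallel
    t.join()
    # Complete only after verifying the speculatively-read data is valid.
    if result["version"] != storage.version:
        result["data"] = [storage.read(s) for s in stripe_ids]  # re-read
    lock_mgr.release(stripe_ids)
    return result["data"]
```

In the common uncontended case the read finishes before the locks arrive, hiding the lock round-trip behind the data transfer.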

VERSION CONTROL OF RECORDS IN AN ELECTRONIC DATABASE
20170277743 · 2017-09-28 ·

Systems, methods, and other embodiments associated with concurrently maintaining separate versions of records in an electronic database are described. In one embodiment, a method includes enabling the electronic database to concurrently store separate versions of a record by using a set of system columns to maintain the separate versions together in the electronic database and provide access to each of the separate versions in isolation from one another. The example method may also include, in response to identifying a change request to modify the record, generating an additional version of the record by adding it to the electronic database with a new row identifier in a row identifier column and, in a source column, the row identifier of the original record, to uniquely identify the additional version as a version of the record and avoid conflicts between multiple versions of the record.

OBJECT MANAGER

Techniques described herein relate to automated approval of resource requests. More specifically, resource request data is retrieved, identified, processed and aggregated to automate approval of the request.