Patent classifications
G06F2212/465
DATA MANAGEMENT METHOD AND COMPUTER-READABLE RECORDING MEDIUM STORING DATA MANAGEMENT PROGRAM
A data management method causes a computer to execute processing including: creating, when a predetermined data processing program performs data processing, high-frequency state item list information that lists high-frequency state items whose access frequency to a data store is high; determining, when state information that includes a value of a high-frequency state item is written to the data store, whether or not the state information corresponds to a high-frequency state item with reference to the high-frequency state item list information; and grouping and writing pieces of the state information for a plurality of the high-frequency state items.
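The buffering behaviour this abstract describes could be sketched roughly as below. All names (`DataStore`, `GroupingWriter`, `write_state`) and the threshold/group-size values are invented for illustration, and an in-memory list stands in for the data store:

```python
# Minimal sketch of grouping writes for high-frequency state items.
# Names and parameters are assumptions, not taken from the patent.
from collections import Counter

class DataStore:
    def __init__(self):
        self.records = []              # grouped or single writes land here
        self.access_counts = Counter() # observed access frequency per item

    def record_access(self, item):
        self.access_counts[item] += 1

    def high_frequency_items(self, threshold):
        # Build the "high-frequency state item list" from observed counts.
        return {item for item, n in self.access_counts.items() if n >= threshold}

class GroupingWriter:
    def __init__(self, store, threshold=3, group_size=2):
        self.store = store
        self.hot = store.high_frequency_items(threshold)
        self.group_size = group_size
        self.buffer = []               # pending high-frequency state info

    def write_state(self, item, value):
        if item in self.hot:
            # Buffer high-frequency items and flush them as one grouped write.
            self.buffer.append((item, value))
            if len(self.buffer) >= self.group_size:
                self.store.records.append(("group", list(self.buffer)))
                self.buffer.clear()
        else:
            self.store.records.append(("single", [(item, value)]))
```

The point of the grouping step is that many small writes of frequently updated items are coalesced into one write to the store.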
METHOD AND SYSTEM FOR CONSTRUCTING PERSISTENT MEMORY INDEX IN NON-UNIFORM MEMORY ACCESS ARCHITECTURE
A method for constructing a persistent memory index in a non-uniform memory access architecture includes: maintaining partial persistent views in a persistent memory and maintaining a global volatile view in a DRAM; an underlying persistent memory index processing a request in a foreground thread when cold data is accessed; when hot data is accessed, reading a key-value pair for a piece of hot data in the global volatile view in response to a query operation carried in the request, and in response to an insert/update/delete operation carried in the request, updating a local partial persistent view and the global volatile view; and in response to a hotspot migration, a background thread generating new partial persistent views and a new global volatile view, and recycling the partial persistent views and the global volatile view for old hot data into the underlying persistent memory index.
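The hot/cold routing in this abstract might be sketched as follows. Ordinary dicts stand in for the underlying persistent index, the partial persistent views, and the global volatile view, and the class and method names are assumptions:

```python
# Sketch of hot/cold request routing across persistent and volatile views.
# Dicts replace real persistent-memory and DRAM structures.
class HybridIndex:
    def __init__(self, hot_keys):
        self.base = {}        # underlying persistent memory index (cold path)
        self.volatile = {}    # global volatile view in DRAM (hot path)
        self.partial = {}     # partial persistent view (durability for hot data)
        self.hot_keys = set(hot_keys)

    def get(self, key):
        # Hot reads are served from the DRAM view; cold reads hit the base index.
        if key in self.hot_keys:
            return self.volatile.get(key)
        return self.base.get(key)

    def put(self, key, value):
        if key in self.hot_keys:
            # Hot updates touch both the partial persistent view and the DRAM view.
            self.partial[key] = value
            self.volatile[key] = value
        else:
            self.base[key] = value

    def migrate_hotspot(self, new_hot_keys):
        # Background step: recycle old hot data into the base index, then
        # rebuild the views around the new hot set.
        self.base.update(self.partial)
        self.hot_keys = set(new_hot_keys)
        self.partial = {}
        self.volatile = {k: self.base[k] for k in self.hot_keys if k in self.base}
```

In the abstract, the migration runs in a background thread; here it is a synchronous call for simplicity.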
System and method for a semantically-driven smart data cache
A method of integrating data across multiple data stores is provided. The method includes ingesting diverse data from multiple data sources and reconciling the ingested data by updating semantic models based on it. The method further includes storing the ingested data based on one or more classifications of the data sources according to the semantic models and automatically generating scalable service endpoints that are semantically consistent with the classification of the data sources. The generated scalable service endpoints are application programming interfaces. The method also includes determining a protocol based on the scalable service endpoints in response to receiving a call from one or more recipient systems, and responding to the call by providing data in the classification of the data sources.
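The ingest-classify-serve flow could be sketched loosely as below. The semantic models are reduced to per-classification field sets, and the class name, endpoint paths, and data shapes are all invented examples:

```python
# Loose sketch of a semantically driven cache: ingest updates a per-class
# semantic model, and one endpoint is generated per classification.
class SemanticCache:
    def __init__(self):
        self.models = {}   # classification -> set of known fields
        self.data = {}     # classification -> list of ingested records

    def ingest(self, source_class, record):
        # Reconcile: fold any new fields into the semantic model, then store.
        self.models.setdefault(source_class, set()).update(record)
        self.data.setdefault(source_class, []).append(record)

    def endpoints(self):
        # One semantically consistent API endpoint per classification.
        return {c: f"/api/{c}" for c in self.models}

    def respond(self, source_class):
        # Answer a recipient call with the data held under that classification.
        return self.data.get(source_class, [])
```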
USING DYNAMIC DATA STRUCTURES FOR STORING DATA OBJECTS
A technique for dynamic data structure usage for storing data objects is described. In one example of the present disclosure, a system can receive a data object and properties associated with the data object. The system can determine, based on at least one of the properties and pre-defined rules for data objects and corresponding object types, an object type of the data object and a first data structure for storing the data object that is different from a second data structure currently storing data objects in the memory. The system can output a command for causing the first data structure to store the data object in the memory.
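The property-driven structure selection might look like this in miniature. The rules, property names, and structure choices are invented stand-ins for the patent's "pre-defined rules":

```python
# Sketch of rule-driven data structure selection: object properties are
# matched against simple rules to pick a storage structure.
import heapq
from collections import deque

def choose_structure(properties):
    # Hypothetical pre-defined rules mapping properties to a structure.
    if properties.get("ordered_by_priority"):
        return "heap"
    if properties.get("fifo"):
        return "queue"
    return "list"

class DynamicStore:
    def __init__(self):
        self.structures = {"heap": [], "queue": deque(), "list": []}

    def store(self, obj, properties):
        kind = choose_structure(properties)
        if kind == "heap":
            heapq.heappush(self.structures["heap"], obj)
        else:
            self.structures[kind].append(obj)
        return kind  # which structure received the object
```

The key idea from the abstract is that the chosen structure can differ from the one currently holding other objects, based purely on the incoming object's properties.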
Machine learning model for micro-service compliance requirements
Embodiments relate to a computer system, computer program product, and computer-implemented method to train a machine learning (ML) model using artificial intelligence to learn an association between (regulatory) compliance requirements and features of micro-service training datasets. The trained ML model is leveraged to determine the compliance requirements of a micro-service requiring classification. In an exemplary embodiment, once the micro-service has been classified with respect to applicable compliance requirements, the classified micro-service may be used as an additional micro-service training dataset to further train the ML model and thereby improve its performance.
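The train/classify/retrain loop could be toy-sketched as below. A nearest-neighbour overlap rule stands in for the unspecified ML model, and the feature and requirement names are invented:

```python
# Toy sketch of the compliance-classification loop: train on labelled
# micro-services, classify a new one, feed it back as training data.
class ComplianceModel:
    def __init__(self):
        self.examples = []   # (feature set, compliance requirement set)

    def train(self, features, requirements):
        self.examples.append((set(features), set(requirements)))

    def classify(self, features):
        # Pick the training micro-service with the largest feature overlap.
        features = set(features)
        best = max(self.examples, key=lambda ex: len(ex[0] & features))
        return best[1]

    def retrain_with(self, features):
        # Feed a newly classified micro-service back in to improve the model.
        reqs = self.classify(features)
        self.train(features, reqs)
        return reqs
```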
Adaptive caching for hybrid columnar databases with heterogeneous page sizes
Disclosed herein are system, method, and computer program product embodiments for adaptive caching for hybrid columnar databases with heterogeneous page sizes. An embodiment operates by receiving a request to load a new page of memory from disk into a buffer cache. The embodiment scans one or more pools in the buffer cache, each comprising one or more pages of the same size. The embodiment determines an increment of a reuse rate for the pools within a time interval, a cumulative reuse rate that is the sum of those increments over several time intervals, and a gliding average reuse rate of the cumulative reuse rate over several time intervals. The embodiment compares the gliding average reuse rates of the pools to a threshold to dynamically determine whether a pool should reuse memory from its own existing pages or rebalance memory from one or more victim pools.
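The reuse-rate bookkeeping reads naturally as a sliding window per pool. A minimal sketch, with an invented window size, threshold semantics, and victim rule:

```python
# Sketch of per-pool reuse-rate tracking and victim selection.
# Window size and threshold are arbitrary illustration values.
from collections import deque

class PagePool:
    def __init__(self, window=3):
        # Reuse-rate increments for the most recent time intervals.
        self.increments = deque(maxlen=window)

    def end_interval(self, reuse_increment):
        self.increments.append(reuse_increment)

    def cumulative_reuse(self):
        return sum(self.increments)

    def gliding_average(self):
        return self.cumulative_reuse() / len(self.increments) if self.increments else 0.0

def pick_victim(pools, threshold):
    # Pools whose gliding average falls below the threshold can give up
    # memory; the coldest such pool is chosen as the victim to rebalance from.
    cold = {name: p.gliding_average() for name, p in pools.items()
            if p.gliding_average() < threshold}
    return min(cold, key=cold.get) if cold else None
```

A pool above the threshold reuses its own pages; when `pick_victim` returns a pool name, memory is rebalanced from that victim instead.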
Utilizing a persistent write cache as a redo log
A storage control system receives a first write request and a second write request following the first write request. The first and second write requests comprise respective first and second data items for storage to a primary storage. First and second cache write operations are performed in parallel to write the first and second data items to a persistent write cache. The first cache write operation is a split write operation, which comprises writing a parsing header for the first data item to the write cache and writing a payload of the first data item to the write cache. The second cache write operation comprises storing the second data item and associated metadata in the write cache, and waiting for an acknowledgment that the parsing header for the first data item has been successfully stored in the write cache before returning an acknowledgment indicating successful storage of the second data item.
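The ordering constraint between the two cache writes can be sketched as below. The parallelism is elided (calls run sequentially), a list stands in for the persistent write cache, and flags model the acknowledgments; all names are invented:

```python
# Sketch of the split-write ordering rule: the second item's acknowledgment
# is withheld until the first item's parsing header is durable.
class WriteCache:
    def __init__(self):
        self.log = []           # stand-in for the persistent write cache
        self.header_acked = {}  # header durability acknowledgments

    def split_write(self, item_id, payload):
        # First write: parsing header and payload are written separately.
        self.log.append(("header", item_id))
        self.header_acked[item_id] = True   # ack that the header is stored
        self.log.append(("payload", item_id, payload))

    def write_with_metadata(self, item_id, payload, waits_on):
        # Second write: stored with its metadata, but acknowledged only once
        # the preceding item's parsing header has been acknowledged.
        self.log.append(("item+meta", item_id, payload))
        if not self.header_acked.get(waits_on):
            return False   # data is in the cache, but cannot be acknowledged yet
        return True        # acknowledged: redo-log ordering preserved
```

Waiting on the header (rather than the whole first item) keeps the cache usable as a redo log: replay can always parse entries in order.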
OBJECT STORAGE DEVICE AND AN OPERATING METHOD THEREOF
A controller includes: an interface unit configured to receive an access request for object data; and an indexing unit configured to determine whether to divide the object data and, when the object data is divided, store a first portion of the object data in a first memory and a second portion of the object data in a first storage space and a second storage space, wherein the first and second storage spaces have a latency greater than a latency of the first memory.
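The divide-and-place policy might be sketched like this. The split threshold, the even split across the two slower spaces, and all names are illustrative assumptions:

```python
# Sketch of dividing an object across a fast memory and two slower
# storage spaces; threshold and split rule are invented.
class ObjectStore:
    SPLIT_THRESHOLD = 8   # objects larger than this are divided

    def __init__(self):
        self.fast_memory = {}   # low-latency first memory
        self.slow_tier_a = {}   # first higher-latency storage space
        self.slow_tier_b = {}   # second higher-latency storage space

    def put(self, key, data):
        if len(data) <= self.SPLIT_THRESHOLD:
            self.fast_memory[key] = data
            return False        # not divided
        # Divide: head in fast memory, remainder across the slow spaces.
        head, rest = data[:self.SPLIT_THRESHOLD], data[self.SPLIT_THRESHOLD:]
        mid = len(rest) // 2
        self.fast_memory[key] = head
        self.slow_tier_a[key] = rest[:mid]
        self.slow_tier_b[key] = rest[mid:]
        return True             # divided

    def get(self, key):
        # Reassemble the object from all tiers.
        return (self.fast_memory.get(key, b"")
                + self.slow_tier_a.get(key, b"")
                + self.slow_tier_b.get(key, b""))
```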
Synchronizing garbage collection and incoming data traffic
The technology describes performing garbage collection while data writes are occurring, which can create a conflict: a new reference to an otherwise non-referenced candidate object for garbage collection may be written after that candidate object has been detected. In one example implementation, orphaned binary large objects (BLOBs) that are not referenced by a descriptor file and are beyond a certain age are detected and deleted via an object references table traversal as part of garbage collection. Before reclaiming a deleted BLOB's capacity, a background process restores the deleted BLOB if a new descriptor file reference to the BLOB was written during the object references table traversal. Capacity is reclaimed only after the object references table traversal and the background processing complete, and only for those BLOBs that were deleted and not restored.
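The detect/delete/restore/reclaim sequence can be condensed into one function. The data model (ages, reference sets) and the age cutoff are invented for illustration:

```python
# Sketch of GC synchronized with incoming writes: orphans are deleted,
# newly re-referenced ones are restored, and only the rest are reclaimed.
def garbage_collect(blobs, references, new_references, age_cutoff):
    """blobs: {blob_id: age}; references: blob ids seen referenced during
    the object references table traversal; new_references: blob ids that
    gained a descriptor-file reference while the traversal ran."""
    # Traversal pass: delete orphaned BLOBs beyond the age cutoff.
    deleted = {b for b, age in blobs.items()
               if b not in references and age > age_cutoff}
    # Background pass: restore any deleted BLOB that gained a new reference.
    restored = deleted & set(new_references)
    # Capacity is reclaimed only for BLOBs that stayed deleted.
    reclaimed = deleted - restored
    return restored, reclaimed
```

The restore pass is what resolves the write/GC race described in the abstract: a concurrent write can resurrect a BLOB right up until capacity is actually reclaimed.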