G06F2212/465

Pattern detection system and method

A method, computer program product, and computing system for receiving content on a high-availability storage system. The content is compared to one or more entries in a static database associated with a cache memory system of the high-availability storage system. If the content does not match any entry in the static database, the content is compared to one or more entries in a dynamic database associated with the cache memory system. If the content does not match any entry in the dynamic database, the content is written to the cache memory system and a representation of the content is written to a temporal database associated with the cache memory system, where it is maintained for a defined period of time.
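
As a rough illustration, a minimal Python sketch of that lookup flow follows; the database and cache structures, the TTL value, and all names are illustrative assumptions, not details taken from the patent.

```python
import time

# Illustrative stand-ins: the patent does not specify concrete structures
# for the three databases or the cache.
STATIC_DB = {"known-pattern-a", "known-pattern-b"}  # fixed entries
DYNAMIC_DB = set()                                  # entries learned at runtime
TEMPORAL_DB = {}                                    # representation -> insertion time
CACHE = {}
TTL_SECONDS = 3600                                  # the "defined period of time"

def expire_temporal_entries(now=None):
    """Drop temporal-database entries older than the defined period."""
    now = time.time() if now is None else now
    for key in [k for k, t in TEMPORAL_DB.items() if now - t > TTL_SECONDS]:
        del TEMPORAL_DB[key]

def receive_content(key, content):
    """Compare content to the static, then the dynamic, database; on no
    match, cache it and record a representation in the temporal database."""
    if content in STATIC_DB:
        return "matched static"
    if content in DYNAMIC_DB:
        return "matched dynamic"
    CACHE[key] = content
    TEMPORAL_DB[hash(content)] = time.time()  # representation + timestamp
    return "cached"
```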

SYNCHRONIZING GARBAGE COLLECTION AND INCOMING DATA TRAFFIC
20220075719 · 2022-03-10

The technology describes performing garbage collection while data writes are occurring, which can lead to a conflict: a new reference to an otherwise non-referenced candidate object for garbage collection may be written after the non-referenced candidate object is detected. In one example implementation, orphaned binary large objects (BLOBs) that are not referenced by a descriptor file and are beyond a certain age are detected and deleted via an object references table traversal as part of garbage collection. Before reclaiming a deleted BLOB's capacity, a background process operates to restore the deleted BLOB if a new descriptor file reference to the BLOB was written during the object references table traversal. Capacity is reclaimed only after the object references table traversal and the background processing complete, and only for those BLOBs that were deleted and not restored.
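
A minimal Python sketch of the delete-then-reconcile cycle described above, assuming an in-memory blob table and a callable that reports the blob ids currently referenced by descriptor files; all names and the grace-age constant are illustrative, not from the patent.

```python
import time

GRACE_AGE_SECONDS = 24 * 3600  # the "certain age" threshold (assumed)

def collect_garbage(blobs, current_descriptor_refs):
    """One GC cycle over a dict of blob_id -> {"created": ts, "deleted": bool}.
    current_descriptor_refs() returns the blob ids referenced *now*, so
    references written during the traversal become visible in phase 2."""
    now = time.time()
    referenced = current_descriptor_refs()

    # Phase 1: object references table traversal; mark orphaned,
    # sufficiently old BLOBs as deleted.
    deleted = set()
    for blob_id, meta in blobs.items():
        if blob_id not in referenced and now - meta["created"] > GRACE_AGE_SECONDS:
            meta["deleted"] = True
            deleted.add(blob_id)

    # Phase 2: background process; a new descriptor-file reference may have
    # been written during the traversal, so restore those BLOBs.
    restored = deleted & current_descriptor_refs()
    for blob_id in restored:
        blobs[blob_id]["deleted"] = False

    # Phase 3: reclaim capacity only after both phases complete, and only
    # for BLOBs that were deleted and not restored.
    for blob_id in deleted - restored:
        del blobs[blob_id]
    return deleted - restored
```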

Methods for providing data values using asynchronous operations and querying a plurality of servers

A processing system server and methods for performing asynchronous data store operations. The server includes a processor which maintains a cache of objects in communication with the server. The processor executes an asynchronous computation to determine the value of a first object. In response to a request for the first object occurring before the asynchronous computation has determined the value of the first object, a value of the first object is returned from the cache. In response to a request for the first object occurring after the asynchronous computation has determined the value of the first object, a value of the first object determined by the asynchronous computation is returned. The asynchronous computation may comprise at least one future, such as a ListenableFuture, or at least one process or thread. Execution of an asynchronous computation may occur with a frequency correlated with how frequently the object changes or how important it is to have a current value of the object. The asynchronous computation may receive different values from at least two servers and may determine the value of an object based on time stamps.
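
A minimal Python sketch of the cached-versus-computed return behavior, using concurrent.futures in place of the ListenableFuture mentioned in the abstract; the class and method names are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

class AsyncObjectStore:
    """Serve the cached value until an asynchronous computation finishes,
    then serve the freshly computed value."""

    def __init__(self):
        self._cache = {}      # last known value per object key
        self._pending = {}    # key -> Future for an in-flight computation
        self._pool = ThreadPoolExecutor(max_workers=4)

    def refresh(self, key, compute):
        """Start an asynchronous computation of the object's value."""
        self._pending[key] = self._pool.submit(compute)

    def get(self, key):
        """Before the computation completes: return the cached value.
        After it completes: return (and cache) the computed value."""
        future = self._pending.get(key)
        if future is not None and future.done():
            self._cache[key] = future.result()
            del self._pending[key]
        return self._cache.get(key)
```

Per the abstract, calls to a method like refresh could be scheduled at a frequency correlated with how often the object changes or how important a current value is.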

Computing tile

Systems, apparatuses, and methods related to a computing tile are described. The computing tile may perform operations on received data, without intervening commands, to extract relevant data from data streamed through the computing tile. In an example, the computing tile is configured to receive a command to initiate an operation that reduces a block of data from a first size to a second size. The computing tile can then receive a block of data from a memory device coupled to the apparatus and perform an operation on the block of data that extracts predetermined data, reducing the block from the first size to the second size.
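
A minimal Python sketch of the streaming size-reduction idea, in software, whereas the patent describes a hardware computing tile; the field names and example data are illustrative.

```python
def reduce_block(block, wanted_fields):
    """Extract predetermined data from a block, shrinking it from a first
    size to a second, smaller size."""
    return {k: v for k, v in block.items() if k in wanted_fields}

def stream_through_tile(blocks, wanted_fields):
    """Operate on blocks as they stream through the tile, with no
    intervening commands between blocks."""
    for block in blocks:
        yield reduce_block(block, wanted_fields)

# Example: keep only the fields a query actually needs.
blocks = [{"id": 1, "temp": 21.5, "raw": b"\x00" * 64},
          {"id": 2, "temp": 19.8, "raw": b"\x00" * 64}]
reduced = list(stream_through_tile(blocks, {"id", "temp"}))
```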

Systems and techniques for aggregation, display, and sharing of data

Systems and techniques for aggregation, display, and sharing of data are described. Graphic items representing data objects identified by a data package may be displayed on timelines. Each timeline may be associated with a respective class of data, and each graphic item displayed on a respective timeline may represent one or more of the data objects in the class associated with the respective timeline.
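
A minimal Python sketch, assuming each data object carries a class name and a timestamp, of how objects might be grouped into per-class timelines; the structure is an illustrative guess, not the patent's.

```python
from collections import defaultdict

def build_timelines(data_package):
    """Group a data package's objects into one timeline per data class.
    Each (timestamp, object) pair would back one graphic item."""
    timelines = defaultdict(list)
    for obj in data_package:
        timelines[obj["class"]].append((obj["timestamp"], obj))
    for items in timelines.values():
        items.sort(key=lambda pair: pair[0])  # chronological order per timeline
    return timelines
```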

System and method for a semantically-driven smart data cache

An embodiment of the disclosure provides a method of integrating data across multiple data stores in a smart cache in order to provide data to one or more recipient systems. The method includes automatically ingesting diverse data from multiple data sources, automatically reconciling the ingested diverse data by updating semantic models based on the ingested diverse data, storing the ingested diverse data based on one or more classifications of the data sources according to the semantic models, automatically generating scalable service endpoints which are semantically consistent according to the classification of the data sources, and responding to a call from the one or more recipient systems by providing data in the classification of the data sources.
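
A minimal Python sketch of the ingest/reconcile/store/serve pipeline, under the simplifying assumptions that a "semantic model" is a learned set of record fields and that classification is read from a declared source type; all names are illustrative.

```python
class SmartCache:
    """Ingest -> reconcile -> store -> serve, per classification."""

    def __init__(self):
        self.semantic_models = {}  # classification -> known record fields
        self.stores = {}           # classification -> stored records

    def classify(self, source):
        """Classify a data source; here simply by a declared type."""
        return source.get("type", "unclassified")

    def reconcile(self, classification, records):
        """Update the semantic model for this classification from the
        ingested data, e.g. by learning the records' fields."""
        model = self.semantic_models.setdefault(classification, set())
        for record in records:
            model.update(record.keys())

    def ingest(self, source, records):
        classification = self.classify(source)
        self.reconcile(classification, records)
        self.stores.setdefault(classification, []).extend(records)

    def endpoint(self, classification):
        """A semantically consistent service endpoint for one
        classification of data sources."""
        return lambda: self.stores.get(classification, [])
```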

Method, apparatus and computer program product for managing address in storage system

Techniques manage addresses in a storage system. In such techniques, an address page of an address pointing to target data in the storage system is determined in response to receiving an access request for accessing data in the storage system. A transaction for managing the address page is generated on the basis of the address page, where the transaction at least comprises an indicator of the address page and a state of the transaction. A counter describing how many times the address page is referenced is set. The transaction is executed at a control node of the storage system on the basis of the counter. With such techniques, the access speed for addresses in the storage system can be accelerated, and thus the overall response speed of the storage system can be increased.
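
A minimal Python sketch of the transaction-plus-counter idea; the dataclass fields mirror the abstract's "indicator" and "state", while everything else is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class AddressPageTransaction:
    """Per the abstract, a transaction at least comprises an indicator of
    the address page and a state of the transaction."""
    page_indicator: int
    state: str = "pending"

class AddressManager:
    def __init__(self):
        self.ref_counts = {}  # page indicator -> reference counter

    def on_access_request(self, page_indicator):
        """Determine the address page for the target data, generate a
        transaction, and bump the page's reference counter."""
        self.ref_counts[page_indicator] = self.ref_counts.get(page_indicator, 0) + 1
        return AddressPageTransaction(page_indicator)

    def execute_at_control_node(self, txn):
        """Execute the transaction on the basis of the counter, e.g. only
        act on a page while it is still referenced."""
        if self.ref_counts.get(txn.page_indicator, 0) > 0:
            txn.state = "executed"
            self.ref_counts[txn.page_indicator] -= 1
        return txn.state
```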

Network-integrated storage data cache

In one example, a switch in the form of a network core switch includes one or more card slots. A cache memory device is also included that is configured to be received in one of the card slots. The switch further includes one or more storage node connection ports in communication with the cache memory device, and also includes one or more client communication ports in communication with the cache memory device.
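
A minimal Python sketch modeling the described components (card slots, a cache memory card, storage node and client ports) and a plausible read path through the in-switch cache; the patent describes hardware, so this is only a structural analogy with invented names.

```python
from dataclasses import dataclass, field

@dataclass
class CacheCard:
    """The cache memory device received in one of the card slots."""
    capacity_gb: int
    data: dict = field(default_factory=dict)

@dataclass
class CoreSwitch:
    card_slots: list = field(default_factory=list)          # may hold CacheCards
    storage_node_ports: list = field(default_factory=list)  # to storage nodes
    client_ports: list = field(default_factory=list)        # to clients

    def read(self, key, storage_backend):
        """Serve a client read from the in-switch cache when possible;
        otherwise fetch via a storage node and populate the cache."""
        for card in self.card_slots:
            if key in card.data:
                return card.data[key]    # cache hit inside the switch
        value = storage_backend[key]     # miss: go out to a storage node
        if self.card_slots:
            self.card_slots[0].data[key] = value
        return value
```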

Machine Learning Model For Micro-Service Compliance Requirements

Embodiments relate to a computer system, computer program product, and computer-implemented method to train a machine learning (ML) model using artificial intelligence to learn an association between (regulatory) compliance requirements and features of micro-service training datasets. The trained ML model is leveraged to determine the compliance requirements of a micro-service requiring classification. In an exemplary embodiment, once the micro-service has been classified with respect to applicable compliance requirements, the classified micro-service may be used as an additional micro-service training dataset to further train the ML model and thereby improve its performance.
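
A minimal Python sketch of the train-then-classify loop, using scikit-learn as an assumed stack; the features, labels, and model choice are illustrative, not from the patent.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: feature dicts for micro-services and their compliance
# labels. The features and labels are invented for illustration.
features = [
    {"handles_pii": 1, "stores_payments": 0, "region": "eu"},
    {"handles_pii": 0, "stores_payments": 1, "region": "us"},
]
labels = ["GDPR", "PCI-DSS"]

vectorizer = DictVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(features), labels)

def classify_microservice(service_features):
    """Determine the compliance requirements of an unclassified service."""
    return model.predict(vectorizer.transform([service_features]))[0]

# Per the abstract, a newly classified micro-service can then be added to
# the training set and the model retrained to improve its performance.
```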

Method, device and computer program product for cache-based index mapping and data access
11113195 · 2021-09-07

Embodiments for accessing data are provided. A method of accessing data comprises: receiving a request to access first data in a storage device, at least a part of the data in the storage device being cached in a cache, and index information of that part of the data being recorded in an index structure associated with the cache; querying the index structure to determine whether the first data is cached in the cache; and accessing the first data based on a result of the query. Embodiments of the present disclosure can improve data access efficiency while reducing memory consumption.
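
A minimal Python sketch of the index-first access path; using a plain set of keys as the index structure is an illustrative assumption, since the abstract leaves the structure unspecified.

```python
class IndexedCache:
    """A cache plus an index structure recording what is cached."""

    def __init__(self, storage):
        self.storage = storage  # backing storage device (key -> value)
        self.cache = {}
        self.index = set()      # index information: keys currently cached

    def access(self, key):
        """Query the index structure, then access the data accordingly."""
        if key in self.index:          # hit: read from the cache
            return self.cache[key]
        value = self.storage[key]      # miss: read from the storage device
        self.cache[key] = value
        self.index.add(key)            # record index information
        return value
```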