Patent classifications
G06F16/134
Distributed file cache
Embodiments of the present invention relate to methods, systems, and computer program products for file management in a distributed file cache system. In some embodiments, a method is disclosed. According to the method, responsive to determining that at least one client node is obtaining a file of a first version stored at a storage node, one or more processors generate contact information indicating that the file of the first version is accessible from the storage node and the at least one client node, and record the contact information in a distributed hash table. The storage node and the at least one client node are included in a plurality of nodes associated with the distributed hash table. Further, one or more processors generate first version information indicating that the file is of the first version and record the first version information in a blockchain associated with the plurality of nodes.
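The bookkeeping this abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: a plain dict stands in for the distributed hash table, a hash-chained list stands in for the blockchain, and all class and method names are hypothetical.

```python
import hashlib
import json

class FileCacheRegistry:
    """Illustrative stand-in for the DHT + blockchain bookkeeping."""

    def __init__(self):
        self.dht = {}      # (file, version) -> set of nodes holding the file
        self.chain = []    # append-only, hash-chained version records

    def record_fetch(self, file_name, version, storage_node, client_nodes):
        # Contact information: the file at this version is now accessible
        # from the storage node and from every client node that fetched it.
        self.dht[(file_name, version)] = {storage_node, *client_nodes}

        # Version information, chained to the previous record by hash.
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"file": file_name, "version": version, "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.chain.append(record)

    def sources(self, file_name, version):
        return self.dht.get((file_name, version), set())

registry = FileCacheRegistry()
registry.record_fetch("report.txt", 1, "storage-1", ["client-a", "client-b"])
print(sorted(registry.sources("report.txt", 1)))
# prints ['client-a', 'client-b', 'storage-1']
```

A subsequent fetch of the file at a new version would append a second record whose `prev` field is the hash of the first, giving the tamper-evidence the blockchain is there to provide.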
Storage system performing data deduplication, method of operating storage system, and method of operating data processing system
A storage system performing data deduplication includes a storage device configured to store data received from a host, and a controller configured to receive the data and an index associated with the data received from the host. The controller includes a memory configured to store mapping information and a reference count, the mapping information associating the index received from the host with a physical address of the storage system, the reference count associated with the index received from the host. The controller determines whether the data received from the host corresponds to a duplicate of data previously stored in the storage device by reading, from the memory, the mapping information and the reference count, the reading based on the index received from the host. The controller performs a deduplication process by updating the reference count if the data received from the host corresponds to a duplicate of the data previously stored in the storage device.
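The dedup decision can be sketched in a few lines. This is a hedged illustration under simplifying assumptions (in-memory tables, integer physical addresses); the class and field names are illustrative, not the patent's.

```python
class DedupController:
    """Sketch of index-keyed deduplication: mapping info and a reference
    count decide whether a physical write is needed. Illustrative only."""

    def __init__(self):
        self.mapping = {}     # index -> physical address
        self.refcount = {}    # index -> reference count
        self.storage = {}     # physical address -> data (the storage device)
        self._next_addr = 0

    def write(self, index, data):
        # Read mapping info and reference count based on the host's index.
        addr = self.mapping.get(index)
        if addr is not None and self.storage[addr] == data:
            # Duplicate detected: bump the reference count and skip
            # the physical write.
            self.refcount[index] += 1
            return addr
        # New data: allocate a physical address and record the mapping.
        addr, self._next_addr = self._next_addr, self._next_addr + 1
        self.storage[addr] = data
        self.mapping[index] = addr
        self.refcount[index] = 1
        return addr
```

Writing the same `(index, data)` pair twice returns the same physical address with a reference count of 2, and the storage device holds a single copy.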
Replicating data utilizing a virtual file system and cloud storage
A computer-implemented method according to one embodiment includes receiving, at a virtual file system, replicated data from a physical file system, transferring the replicated data from the virtual file system to cloud storage, and providing access to the replicated data in response to an unavailability of the physical file system, utilizing the virtual file system and the cloud storage.
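The three steps of this method (receive replica, transfer to cloud, fail over on unavailability) can be sketched as below. Dicts stand in for the file systems and cloud storage; all names are hypothetical.

```python
class VirtualFileSystem:
    """Sketch of the replication flow: replicas arrive from a physical
    file system, are transferred to cloud storage, and reads fail over
    to the cloud copy when the physical system is unavailable."""

    def __init__(self):
        self.staged = {}    # replicas received at the virtual file system
        self.cloud = {}     # cloud-storage copies

    def receive_replica(self, path, data):
        self.staged[path] = data
        # Transfer the replicated data on to cloud storage.
        self.cloud[path] = self.staged.pop(path)

    def read(self, path, physical_fs):
        if physical_fs is not None:       # physical file system available
            return physical_fs[path]
        return self.cloud[path]           # serve via cloud storage instead
```

Passing `physical_fs=None` models the unavailability case: the same path remains readable through the virtual file system and its cloud copy.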
Techniques for in-memory data searching
One embodiment of the invention is directed to a method for performing efficient data searches in a distributed computing system. The method may comprise receiving a search request including a key. The key may be provided to a block-based table manager via a programming interface external to a virtual machine executing on a computer system. The programming interface may provide a translation between a first programming framework of the virtual machine and a second programming framework of the block-based table manager. Providing the key may cause the block-based table manager to conduct a search for a value corresponding to the key. The value may be provided in response to the search request. Utilizing such block-based tables may enable a data search to be performed using on-board memory of a computing node operating within a distributed computing system.
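A block-based table lookup can be sketched as a two-step search: a sparse index of first-keys-per-block locates the block, then a scan within the block finds the value. This is a generic sketch of the data structure only (the cross-framework programming interface is omitted), and the block size of 4 is arbitrary.

```python
import bisect

class BlockBasedTable:
    """Sketch: sorted key/value pairs grouped into fixed-size blocks,
    with a sparse index holding the first key of each block."""
    BLOCK_SIZE = 4

    def __init__(self, items):
        pairs = sorted(items.items())
        self.blocks = [pairs[i:i + self.BLOCK_SIZE]
                       for i in range(0, len(pairs), self.BLOCK_SIZE)]
        self.index = [block[0][0] for block in self.blocks]

    def get(self, key):
        # Step 1: binary-search the sparse index for the candidate block.
        i = bisect.bisect_right(self.index, key) - 1
        if i < 0:
            return None
        # Step 2: scan the block for the exact key.
        for k, v in self.blocks[i]:
            if k == key:
                return v
        return None

table = BlockBasedTable({f"k{i:02d}": i for i in range(10)})
print(table.get("k07"))
# prints 7
```

Keeping only the sparse index hot while blocks live in on-board memory is what makes this layout attractive for in-memory search on a computing node.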
AUTOMATED-APPLICATION-RELEASE-MANAGEMENT SUBSYSTEM THAT SUPPORTS INSERTION OF ADVICE-BASED CROSSCUTTING FUNCTIONALITY INTO PIPELINES
The current document is directed to automated-application-release-management facilities that support aspect-oriented-programming-like insertion of plug-in-implemented advice into release pipelines. In a described implementation, advice is represented by entries in an advice set or aggregation. These entries encode rules, advice types, and references to advice-implementing plug-ins. During release-pipeline execution, calls to the advice-implementing plug-ins are inserted prior to and after tasks in workflows corresponding to the tasks that are then executed by a workflow-execution engine. Rules may include release-pipeline parameters, and advice definitions may use wildcard characters and other elements of regular expressions to match pipeline, stage, and task names.
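The before/after insertion can be sketched with wildcard matching over task names, which the abstract explicitly allows. The advice-entry format and function names here are illustrative assumptions, not the document's schema.

```python
import fnmatch

def run_pipeline(tasks, advice_set):
    """Execute (name, task) pairs, wrapping each task with the before/after
    callbacks of every advice entry whose pattern matches the task name."""
    log = []
    for name, task in tasks:
        matched = [a for a in advice_set
                   if fnmatch.fnmatch(name, a["pattern"])]
        for a in matched:
            log.append(a["before"](name))   # advice call inserted before
        log.append(task())
        for a in matched:
            log.append(a["after"](name))    # advice call inserted after
    return log
```

With an advice entry whose pattern is `deploy-*`, only deployment tasks get the crosscutting calls, while a `build` task runs untouched; this is the aspect-oriented flavor the abstract describes.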
Automatic website data migration
Aspects of the present disclosure involve systems and methods for performing operations comprising: retrieving, from a content management system (CMS), website generation data that references a data model type stored on the CMS; importing, by a migration agent from the CMS, a definition of the data model type referenced by the website generation data as a local version of the data model type; detecting, by the migration agent, a change to a property of the local version of the data model type; and generating, by the migration agent, a migration script to migrate the change to the property of the local version of the data model type to the data model type stored on the CMS.
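The detect-and-generate step can be sketched as a diff between the local copy and the CMS definition. The definition format and the migration-step vocabulary below are hypothetical, chosen only to make the flow concrete.

```python
def generate_migration(cms_type, local_type):
    """Diff a locally imported data model type against the CMS definition
    and emit migration-script steps for each changed property."""
    steps = []
    # Properties added or changed in the local version.
    for prop, value in local_type["properties"].items():
        if cms_type["properties"].get(prop) != value:
            steps.append({"op": "set_property",
                          "property": prop, "value": value})
    # Properties removed from the local version.
    for prop in cms_type["properties"].keys() - local_type["properties"].keys():
        steps.append({"op": "remove_property", "property": prop})
    return {"type": cms_type["name"], "steps": steps}
```

Applying the returned steps against the CMS would bring the stored data model type in line with the local edits, which is the migration the abstract's script performs.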
Dynamic application instance discovery and state management within a distributed system
Dynamic application instance discovery and state management within a distributed system. A distributed system may implement application instances configured to perform one or more application functions within the distributed system, and discovery and failure detection daemon (DFDD) instances, each configured to store an indication of a respective operational state of each member of a respective group of the application instances. Each of the DFDD instances may repeatedly execute a gossip-based synchronization protocol with another one of the DFDD instances, where execution of the protocol between DFDD instances includes reconciling differences among membership of the respective groups of application instances. A new application instance may be configured to notify a particular DFDD instance of its availability to perform an application function. The particular DFDD instance may be configured to propagate the new instance's availability to other DFDD instances via execution of the synchronization protocol, without intervention on the part of the new application instance.
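The reconciliation at the heart of the gossip protocol can be sketched as a versioned merge: each DFDD holds a map of instance name to (state, version), and a pairwise exchange keeps the higher-versioned entry on both sides. The tuple layout is an illustrative assumption.

```python
def gossip(dfdd_a, dfdd_b):
    """Reconcile membership differences between two DFDD state maps,
    each mapping instance name -> (operational state, version)."""
    for instance in dfdd_a.keys() | dfdd_b.keys():
        entry_a = dfdd_a.get(instance, (None, -1))
        entry_b = dfdd_b.get(instance, (None, -1))
        winner = entry_a if entry_a[1] >= entry_b[1] else entry_b
        dfdd_a[instance] = winner
        dfdd_b[instance] = winner
```

A new instance that registers with one DFDD appears in the others after enough pairwise rounds, with no further action by the instance itself, which is the propagation property the abstract claims.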
REDUCING PROBABILISTIC FILTER QUERY LATENCY
Systems and techniques for reducing probabilistic filter query latency are described herein. A query for a probabilistic filter that is stored on a first media may be received from a caller. In response to receiving the query, cached segments of the probabilistic filter stored on a second media may be obtained. Here, the probabilistic filter provides a set membership determination that is conclusive in a determination that an element is not in a set. The query may be executed on the cached segments, resulting in a partial query result. Retrieval of remaining data of the probabilistic filter from the first media to the second media may be initiated without intervention from the caller. Here, the remaining data corresponds to the query and is not in the cached segments. The partial query result may then be returned to the caller.
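A Bloom filter is the canonical probabilistic filter with conclusive negatives, so the partial-query idea can be sketched over one whose bit array is split into segments, only some of which are cached. The segment count, sizes, and hashing scheme below are illustrative assumptions.

```python
import hashlib

class SegmentedBloom:
    """Sketch: a Bloom filter split into segments; a query over only the
    cached segments can still conclusively report 'not in set' when any
    probed cached bit is zero."""

    def __init__(self, nbits=256, nsegments=4, nhashes=3):
        self.bits = [0] * nbits
        self.seg_size = nbits // nsegments
        self.nhashes = nhashes
        self.cached = {0, 1}   # segments resident on the fast media

    def _probes(self, item):
        for i in range(self.nhashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % len(self.bits)

    def add(self, item):
        for p in self._probes(item):
            self.bits[p] = 1

    def partial_query(self, item):
        """Return 'absent', 'maybe', or 'need-more' using cached segments
        only; a real system would also start fetching uncached segments."""
        unresolved = False
        for p in self._probes(item):
            if p // self.seg_size in self.cached:
                if not self.bits[p]:
                    return "absent"      # conclusive negative, no fetch
            else:
                unresolved = True        # probe falls in uncached data
        return "need-more" if unresolved else "maybe"
```

The latency win is the `"absent"` path: the caller gets a conclusive answer from cache alone, while the background fetch of the remaining segments proceeds only when needed.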
SYSTEM AND METHOD FOR DIRECT OBJECT TO FILE MAPPING IN A GLOBAL FILESYSTEM
A method for storing a file in a cloud storage service (CSS) having a blocks index indexing blocks, each block having a unique block identifier, the entries of the index indicating, for each block identifier, a location of the block within an object storage system (OSS), the method comprising: the CSS transmitting a list of block identifiers indicating respective blocks that are not in the blocks index but which are indicated by a received file map for the file; adding an entry into the blocks index to indicate a location within the OSS for each block of the list that has been successfully uploaded to the OSS; and, when all of the blocks have been successfully uploaded, concatenating all blocks of the received file map in an order specified by the received file map to form a file object corresponding to the file in the OSS.
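The three claimed steps (report missing blocks, index uploaded blocks, concatenate per the file map) can be sketched as below. Content-hash block identifiers and the `oss://` location scheme are illustrative assumptions, not the patent's.

```python
import hashlib

class CloudStorageService:
    """Sketch of direct object-to-file mapping: a blocks index maps block
    identifiers to object-store locations, and a finished file is the
    concatenation of its blocks in file-map order."""

    def __init__(self):
        self.blocks_index = {}   # block id -> location within the OSS
        self.oss = {}            # object storage: location -> bytes

    def missing_blocks(self, file_map):
        """Block identifiers in the file map not yet in the blocks index."""
        return [bid for bid in file_map if bid not in self.blocks_index]

    def upload_block(self, data):
        bid = hashlib.sha256(data).hexdigest()
        location = f"oss://blocks/{bid}"
        self.oss[location] = data
        self.blocks_index[bid] = location   # index the uploaded block
        return bid

    def finalize(self, name, file_map):
        """Concatenate all blocks in file-map order into the file object."""
        assert not self.missing_blocks(file_map), "upload incomplete"
        data = b"".join(self.oss[self.blocks_index[bid]] for bid in file_map)
        self.oss[f"oss://files/{name}"] = data
        return data
```

A client holding the file map uploads only the blocks the service reports as missing, so blocks shared between files are stored and transferred once.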