Patent classifications
G06F16/1734
SELECTING SUBSCRIBING COMPUTING NODE TO EXECUTE DATA STORAGE PLAN FOR DATA SHARD
A distributed database system maintains a database including a data shard for which a primary computing node is responsible. The primary computing node identifies a data storage plan for the data shard. The plan identifies a file subset of data storage files of the shard to be merged into a larger data storage file, and a node subset of computing nodes of the system that subscribe to the data shard. The primary node identifies which computing nodes of the node subset each have sufficient computing resources to execute the plan, as candidate computing nodes. The primary node identifies which files of the file subset each candidate computing node locally caches. The primary node selects one candidate computing node to execute the plan, based on the files of the file subset that each candidate computing node locally caches. The primary node causes the selected candidate computing node to execute the plan.
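The selection step described above can be sketched in a few lines. This is a minimal illustration, not the patented method: the names (`select_merge_node`, the `has_resources` and `cached_files` fields) are hypothetical, and it assumes the "sufficient resources" check has already been reduced to a boolean per node.

```python
# Hypothetical sketch: among subscriber nodes with enough resources, pick the
# one that locally caches the most files from the merge plan's file subset.
def select_merge_node(plan_files, subscribers):
    # subscribers: list of dicts with "name", "has_resources", "cached_files"
    candidates = [n for n in subscribers if n["has_resources"]]
    if not candidates:
        return None  # no node can execute the plan right now
    # Prefer the candidate with the largest overlap with the files to merge,
    # so the least data must be fetched before the merge runs.
    return max(candidates,
               key=lambda n: len(set(n["cached_files"]) & set(plan_files)))["name"]
```

Choosing by cache overlap is one plausible reading of "based on the files of the file subset that each candidate computing node locally caches"; a real system might also weight file sizes or current load.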
Automatic Mount of Application Filesystem Intermediate Snapshot Creation Timepoints
An Application Data Management System (ADMS) enables an application file system to be mounted at any selected reconstruction time (T.sub.R). If the reconstruction time T.sub.R falls intermediate snapshot creation timepoints, the ADMS creates a version of the application file system at the selected reconstruction time T.sub.R using a snapshot of the data file from a previous application file system snapshot creation timepoint, and a snapshot of the log file from a subsequent application file system snapshot creation timepoint. The ADMS uses the snapshot of the log file from the subsequent snapshot creation timepoint to replay transactions on the snapshot of the data file from the previous snapshot creation timepoint up to the selected reconstruction time T.sub.R. This enables the state of the application file system to be recreated and mounted at any arbitrary selected reconstruction time, even if the selected reconstruction time is not coincident with snapshot creation timepoints.
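The replay mechanism can be illustrated with a toy model. This sketch is not the ADMS implementation: it assumes the data snapshot is a key-value map taken at an earlier timepoint and the log snapshot is a list of timestamped writes from a later timepoint, and it replays only the writes up to T_R.

```python
# Hypothetical sketch of reconstruction at an intermediate time t_r.
def mount_at(t_r, data_snapshot, log_snapshot):
    # data_snapshot: (t_prev, {key: value}) from the earlier timepoint
    # log_snapshot: list of (timestamp, key, value) from the later timepoint
    t_prev, state = data_snapshot
    state = dict(state)  # work on a copy of the snapshot
    for ts, key, value in sorted(log_snapshot):
        if t_prev < ts <= t_r:  # replay transactions only up to t_r
            state[key] = value
    return state
```

Any t_r between the two snapshot timepoints yields a consistent state, which is the point of the abstract: the mount time need not coincide with a snapshot creation timepoint.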
CONTINUOUS DATA PROTECTION USING A WRITE FILTER
A reference snapshot of a storage is stored. Data changes that modify the storage are received. The data changes are captured by a write filter of the storage. The received data changes are logged. The data changes occurring after an instance time of the reference snapshot are applied to the reference snapshot to generate a first incremental snapshot corresponding to a first intermediate reference restoration point. The data changes occurring after an instance time of the first incremental snapshot are applied to the first incremental snapshot to generate a second incremental snapshot corresponding to a second intermediate reference restoration point.
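The chaining of incremental snapshots can be modeled simply. A minimal sketch, assuming the reference snapshot is a key-value map and the write filter's log is a list of timestamped changes; `build_restoration_points` and the tuple shapes are illustrative, not from the patent.

```python
# Hypothetical sketch: apply logged changes to a reference snapshot to produce
# a chain of incremental snapshots (intermediate reference restoration points).
def build_restoration_points(reference, changes, cut_times):
    # reference: (t0, {key: value}); changes: list of (ts, key, value);
    # cut_times: sorted instance times at which incremental snapshots are cut
    t_prev, state = reference
    state = dict(state)
    points = []
    for t_cut in cut_times:
        for ts, key, value in sorted(changes):
            if t_prev < ts <= t_cut:  # changes after the previous snapshot
                state[key] = value
        points.append((t_cut, dict(state)))
        t_prev = t_cut  # the next increment builds on this snapshot
    return points
```

Each point is derived from the previous one rather than from the original reference, mirroring how the second incremental snapshot is generated from the first.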
Method, device and computer program product for shrinking storage space
Techniques for shrinking a storage space involve determining the used storage space in a storage pool allocated to a plurality of file systems, and determining a usage level of the storage space in the storage pool based on the used storage space in, and the storage capacity of, the storage pool. The techniques further involve shrinking storage space from one or more of the plurality of file systems based on the usage level of the storage pool. Such techniques may automatically shrink storage space in one or more file systems from the global level of the storage pool, determining an auto-shrink strategy according to the overall performance of the storage pool, thereby improving the efficiency of auto shrink and balancing system performance against space savings.
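A pool-level shrink decision might look like the following sketch. Everything here is an assumption for illustration: the high-watermark trigger, the 20% per-file-system headroom, and the function name are not from the patent, which leaves the concrete auto-shrink strategy open.

```python
# Hypothetical sketch: decide shrinking at the global pool level.
def plan_shrink(fs_used, fs_allocated, pool_capacity, high_watermark=0.8):
    # fs_used / fs_allocated: {fs_name: bytes}; pool_capacity: bytes
    pool_used = sum(fs_allocated.values())  # space the pool has handed out
    usage = pool_used / pool_capacity       # pool-level usage level
    if usage < high_watermark:
        return {}  # pool not under pressure; skip shrinking entirely
    # Reclaim allocated-but-unused space, keeping ~20% headroom per file system.
    return {name: alloc - min(alloc, int(fs_used[name] * 1.2))
            for name, alloc in fs_allocated.items()}
```

The key idea matches the abstract: the trigger is the pool's global usage level, not any single file system's, so shrinking only runs when it actually benefits the pool.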
Performing quantum file copying
Performing quantum file copying is disclosed herein. In one example, upon receiving a request to copy a source quantum file comprising a plurality of source qubits, a quantum file manager accesses a quantum file registry record identifying the plurality of source qubits and a location of each of the plurality of source qubits. The quantum file manager next allocates a plurality of target qubits equal in number to the plurality of source qubits, and copies data stored by each of the source qubits into a corresponding target qubit. The quantum file manager then generates a target quantum file registry record that identifies the plurality of target qubits and their locations. In some examples, a quantum file move operation may be performed by deleting the source quantum file after the copy operation, and updating the target quantum file registry record with the same quantum file identifier as the source quantum file.
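The registry bookkeeping in the abstract can be sketched classically. Note that copying arbitrary qubit *states* is constrained by the no-cloning theorem; this sketch models only the classical registry operations the abstract describes (allocation, record creation, and identifier transfer on a move), with all names hypothetical.

```python
import itertools

_qubit_ids = itertools.count()  # toy allocator for fresh target qubits

def copy_quantum_file(registry, src_name, dst_name):
    # registry: {name: {"id": file_id, "qubits": [(qubit, location), ...]}}
    src = registry[src_name]
    # Allocate one target qubit per source qubit, recording its location.
    targets = [(next(_qubit_ids), loc) for _, loc in src["qubits"]]
    registry[dst_name] = {"id": dst_name, "qubits": targets}
    return registry[dst_name]

def move_quantum_file(registry, src_name, dst_name):
    copy_quantum_file(registry, src_name, dst_name)
    file_id = registry[src_name]["id"]
    del registry[src_name]              # delete the source after the copy
    registry[dst_name]["id"] = file_id  # keep the original file identifier
```

The move operation matches the abstract's description: copy, delete the source, then carry the source's quantum file identifier over to the target record.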
Creating database clones at a specified point-in-time
A point-in-time clone may be created for a database. A request to create the point-in-time clone may be received. The clone may be provided with access to a storage for the database that stores a history of modifications to the database, which can be applied to return data of the database in its state at the specified point in time. The clone may then be updated, with the updates made to the clone stored for subsequent access by the clone.
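A toy model of a modification-history store makes the mechanism concrete. This is a sketch under simplifying assumptions (the history is a flat list of timestamped writes; `read_at` and `create_clone` are hypothetical names), not the patented design.

```python
# Hypothetical sketch: a modification history that can return data as of
# any point in time, plus a clone that keeps its own updates separately.
def read_at(history, key, t):
    # history: append-only list of (timestamp, key, value) modifications
    value = None
    for ts, k, v in sorted(history):
        if k == key and ts <= t:
            value = v  # last write at or before t wins
    return value

def create_clone(history, t_clone):
    # The clone shares the base history up to the clone point and records
    # its own subsequent updates for its exclusive use.
    return {"base": [m for m in history if m[0] <= t_clone], "updates": []}
```

Because the clone only references the shared history rather than copying data, creation is cheap regardless of database size; only the clone's own updates consume new storage.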
Transparent interpretation and integration of layered software architecture event streams
A computerized method includes analyzing, by a processing device, program code, including a control flow graph, of one or more applications that are executable by an operating system of a computing device to determine event-logging functions of the program code that generate event logs; extracting, by the processing device based on the event-logging functions, log message strings from the program code that describe event-logging statements; identifying, by the processing device, via control flow analysis, possible control flow paths of the log message strings through the control flow graph; storing, in a database accessible by the processing device, the possible control flow paths; and inputting, by the processing device into a log parser, the possible control flow paths of the log message strings to facilitate interpretation of application events during runtime execution of the one or more applications.
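One small piece of this pipeline, matching runtime log lines back to the message strings extracted from the code, can be sketched as follows. The template strings, placeholder conventions, and function names are illustrative assumptions; the patent's method additionally uses the control flow paths, which this sketch omits.

```python
import re

# Hypothetical log message strings extracted from event-logging statements;
# "%s"/"%d" placeholders become regex groups for matching at runtime.
templates = ["user %s logged in", "request %d failed"]

def compile_template(template):
    pattern = re.escape(template).replace("%s", r"(\S+)").replace("%d", r"(\d+)")
    return re.compile("^" + pattern + "$")

parsers = [(t, compile_template(t)) for t in templates]

def parse_log_line(line):
    # Return the matching template and its extracted arguments, if any.
    for template, rx in parsers:
        m = rx.match(line)
        if m:
            return template, m.groups()
    return None
```

Feeding the parser the statically known message strings (and, in the patent, their possible control flow paths) is what lets a log parser interpret events without hand-written rules per application.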
Supporting storage using a multi-writer log-structured file system
Solutions for supporting storage using a multi-writer log-structured file system (LFS) are disclosed that include receiving incoming data from an object of a plurality of objects that are configured to simultaneously write to the LFS from different nodes; based at least on receiving the incoming data, determining whether sufficient free segments are available in a local segment usage table (SUT) for writing the incoming data; based at least on determining that insufficient free segments are available, requesting allocation of new free segments; writing the incoming data to a log; acknowledging the writing to the object; determining whether the log has accumulated a full segment of data; based at least on determining that the log has accumulated a full segment of data, writing the full segment of data to a first segment of the free segments; and updating the local SUT to mark the first segment as no longer free.
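The per-writer path can be sketched as a small state machine. This is a toy model under stated assumptions: a 4-entry segment size, a dict standing in for the local segment usage table (SUT), and a local stub for the allocation request; none of these names come from the source.

```python
# Hypothetical sketch of one writer's log-then-segment write path.
class Writer:
    SEGMENT_SIZE = 4  # toy segment size, in log entries

    def __init__(self, free_segments):
        self.sut = {s: "free" for s in free_segments}  # local segment usage table
        self.log = []       # durable log for incoming data
        self.segments = {}  # full segments written out

    def write(self, data):
        if not any(state == "free" for state in self.sut.values()):
            self._request_allocation()           # ask for new free segments
        self.log.append(data)                    # write to the log first...
        if len(self.log) >= self.SEGMENT_SIZE:   # ...then flush full segments
            seg = next(s for s, st in self.sut.items() if st == "free")
            self.segments[seg] = self.log[:self.SEGMENT_SIZE]
            self.log = self.log[self.SEGMENT_SIZE:]
            self.sut[seg] = "used"               # mark it no longer free
        return "ack"                             # acknowledge to the object

    def _request_allocation(self):
        # Stub: a real system would request segments from a global allocator
        # so that writers on different nodes never claim the same segment.
        self.sut[max(self.sut, default=-1) + 1] = "free"
```

Keeping the SUT local and requesting allocations only when free segments run out is what lets multiple writers on different nodes share one LFS without coordinating on every write.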
Intelligent management of stub files in hierarchical storage
Intelligent management of stub files in hierarchical storage is provided by: in response to identifying a file to migrate from a file system to offline storage, providing metadata for the file to a machine learning engine; receiving a stub profile for the file from the machine learning engine that indicates an offset from a beginning of the file and a length from the offset for previewing the file; and migrating the portion of the file from the file system to an offline storage based on the stub profile. In some embodiments this further comprises: monitoring file system operations; in response to detecting a read operation of the portion of the file: determining a file type; providing file data to the machine learning engine; and performing a supervised learning operation based on the file type and the file data to update the machine learning engine.
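The stub-creation step can be sketched without the machine learning engine. A minimal illustration: the profile shape `{"offset", "length"}` follows the abstract, but `migrate_with_stub` and the idea of keeping the preview bytes inline in the stub are assumptions for this sketch.

```python
# Hypothetical sketch: split a file into a stub (kept in the file system)
# and the full content (migrated to offline storage), using the stub
# profile produced by the machine learning engine.
def migrate_with_stub(data, profile):
    # profile: {"offset": start of preview, "length": preview length}
    off, ln = profile["offset"], profile["length"]
    stub = {
        "offset": off,
        "length": ln,
        "preview": data[off:off + ln],  # portion retained for previews
    }
    offline_copy = data  # full content lives in offline storage
    return stub, offline_copy
```

The value of a learned profile is that the preview window need not start at byte zero; for formats whose useful preview lies mid-file, the engine can point the stub at the right region.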
Efficient filename storage and retrieval
The disclosed technology relates to a system configured to detect a modification to a node in a tree data structure. The node is associated with a content item managed by a content management service as well as a filename. The system may append the filename and a separator to a filename array, determine a location of the filename in the filename array, and store the location of the filename in the node.
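The append-and-store scheme can be shown in a few lines. A minimal sketch, assuming a NUL byte as the separator and a list of strings as the growable filename array; the function names and the `name_location` field are illustrative.

```python
SEP = "\x00"  # assumed separator between filenames in the array

def add_filename(array, node, filename):
    # array: list of chunks forming the filename array; node: tree node dict
    location = sum(len(chunk) for chunk in array)  # offset of the new name
    array.append(filename + SEP)                   # append name + separator
    node["name_location"] = location               # store location in the node
    return location

def get_filename(array, node):
    buf = "".join(array)
    start = node["name_location"]
    return buf[start:buf.index(SEP, start)]  # read up to the separator
```

Storing only a fixed-size offset in each node keeps the tree nodes compact even for long filenames, while the shared array holds each name exactly once.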