Patent classifications
G06F11/2023
Distributed Application Orchestration Management in a Heterogeneous Distributed Computing Environment
Distributed application orchestration management is provided. A first passive member of a set of passive members sends a notification message to other members indicating that the first passive member is initiating start of a distributed application in response to the first passive member validating that a self-restart by a leader member failed. The first passive member compares timestamps associated with an attempt to start the distributed application by other passive members in the set of passive members. The first passive member stops a particular attempt to start the distributed application in response to the first passive member determining that a timestamp associated with the particular attempt to start the distributed application by the first passive member is newer than another timestamp of another passive member. The first passive member designates the other passive member having an older timestamp as a new leader member to continue starting the distributed application.
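The timestamp arbitration described above can be sketched as follows. This is an illustrative reading of the abstract, not the patented implementation; the member names and epoch-seconds timestamps are hypothetical:

```python
def elect_new_leader(attempts):
    """Given a mapping of passive-member id -> start-attempt timestamp
    (epoch seconds), the member with the oldest attempt continues starting
    the distributed application; members with newer timestamps stop their
    attempts and designate the oldest as the new leader."""
    leader = min(attempts, key=attempts.get)        # oldest timestamp wins
    stopped = [m for m in attempts if m != leader]  # newer attempts stop
    return leader, stopped
```

In this sketch, each passive member would run the same comparison after broadcasting its notification message, so all members converge on the same leader.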
DATABASE OPTIMIZATION USING RECORD CORRELATION AND INTERMEDIATE STORAGE MEDIA
An embodiment includes deriving usage data associated with records of a database by monitoring requests to perform read operations on the records of the database. The embodiment generates record correlation data representative of correlations between respective groups of records of the database by parsing the usage data associated with the records of the database. The embodiment stores a plurality of records received as respective write requests during a first time interval in an intermediate storage medium. The embodiment identifies a correlation in the record correlation data between a first record of the plurality of records and a second record of the plurality of records. The embodiment selects, responsive to identifying the correlation, a first location in the database for writing the first record and a second location in the database for writing the second record based on a proximity of the first location to the second location.
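One possible sketch of the correlation-and-placement idea follows. It assumes read requests arrive in batches and that "proximity" means adjacent slots in a linear layout; both are assumptions for illustration, not details from the abstract:

```python
from collections import Counter
from itertools import combinations

def correlate(read_batches):
    """Derive usage data: count how often pairs of records are read together."""
    pairs = Counter()
    for batch in read_batches:
        for a, b in combinations(sorted(set(batch)), 2):
            pairs[(a, b)] += 1
    return pairs

def place(buffered_writes, correlations, threshold=2):
    """Choose write locations for records held in the intermediate storage
    medium: correlated pairs get adjacent slots, the rest follow."""
    layout, placed = [], set()
    for (a, b), n in correlations.most_common():
        if n < threshold:
            break
        if a in buffered_writes and b in buffered_writes and not {a, b} & placed:
            layout += [a, b]          # adjacent = proximate locations
            placed |= {a, b}
    layout += [r for r in buffered_writes if r not in placed]
    return layout
```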
System and method for automatically scaling a cluster based on metrics being monitored
In accordance with an embodiment, described herein is a system and method for use in a distributed computing environment, for automatically scaling a cluster based on metrics being monitored. A cluster that comprises a plurality of nodes or brokers and supports one or more colocated partitions across the nodes can be associated with an exporter process and alert manager that monitors metrics associated with the cluster. Various metrics can be associated with user-configured alerts that trigger or otherwise indicate the cluster should be scaled. When a particular alert is raised, a callback handler associated with the cluster, for example an operator, can automatically bring up one or more new nodes that are added to the cluster, and then reassign a selection of existing colocated partitions to the new nodes/brokers, such that computational load can be distributed within the newly-scaled cluster environment.
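A minimal sketch of the reassignment step such a callback handler might perform after new brokers join. Round-robin reassignment is an assumption here; the abstract only says a selection of existing partitions is moved to the new nodes:

```python
def scale_out(brokers, partitions, new_broker_ids):
    """After an alert brings up new brokers, reassign partitions
    round-robin across the enlarged broker set to spread load."""
    all_brokers = brokers + new_broker_ids
    assignment = {}
    for i, p in enumerate(sorted(partitions)):
        assignment[p] = all_brokers[i % len(all_brokers)]
    return assignment
```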
Distributed File System that Provides Scalability and Resiliency
A distributed storage management system comprising nodes that form a cluster, a distributed block layer that spans the nodes in the cluster, and file system instances deployed on the nodes. Each file system instance comprises a data management subsystem and a storage management subsystem disaggregated from the data management subsystem. The storage management subsystem comprises a node block store that forms a portion of the distributed block layer and a storage manager that manages a key-value store and virtualized storage supporting the node block store. A file system volume hosted by the data management subsystem maps to a logical block device hosted by the virtualized storage in the storage management subsystem. The key-value store includes, for a data block of the logical block device, a key that comprises a block identifier for the logical block device and a value that comprises the data block.
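The key-value mapping for the distributed block layer can be illustrated as follows. The key structure (logical block device id, block number) paraphrases the abstract's block identifier, and the class name is hypothetical:

```python
class NodeBlockStore:
    """Key-value store: key = (logical block device id, block number),
    value = the data block itself."""
    def __init__(self):
        self.kv = {}

    def write(self, device_id, block_no, data):
        self.kv[(device_id, block_no)] = data

    def read(self, device_id, block_no):
        # Returns None for blocks that were never written
        return self.kv.get((device_id, block_no))
```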
Managing containers on a data storage system
Mechanisms and techniques are employed for managing the allocation and load balancing of storage system resources for the containerized, distributed execution of applications on a storage system. A control component executing on a processing component of the storage system may control reserving the necessary resources on one or more processing components to implement an application, and control a container management module to create, deploy and/or modify one or more containers on one or more processing components of the storage system. The one or more containers then may be executed to implement the application. Multiple processing components of the storage system may have a resource management module executing thereon. The control component may exchange communications with the one or more resource management modules of each processing component to determine the resources available within the processing component; e.g., to determine whether the processing component can satisfy the resource requirements of the application.
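The control component's resource query could look like the following sketch, assuming each processing component's resource management module reports available cpu/memory as a simple dictionary (an assumption for illustration):

```python
def find_placement(app_requirements, components):
    """Exchange communications with each component's resource manager
    and return the components that can satisfy the application's
    resource requirements (e.g., for container placement)."""
    candidates = []
    for name, available in components.items():
        if all(available.get(k, 0) >= v for k, v in app_requirements.items()):
            candidates.append(name)
    return candidates
```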
Fast multipath failover
A host device is configured to obtain a default timeout value of the host device for the submission of an input-output (IO) operation to a storage system and to determine a first timeout value that is less than the default timeout value. The host device is further configured to submit the IO operation to the storage system along a first path using the first timeout value and to determine that the submission of the IO operation along the first path has timed out. The host device is further configured to determine a second timeout value that is greater than the first timeout value and to submit the IO operation to the storage system along a second path using the second timeout value.
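The escalating-timeout failover can be sketched as below. The specific policy (first timeout = default divided by the number of paths, doubled on each retry, capped at the default) is an assumption; the abstract requires only that the first timeout is less than the default and the second is greater than the first:

```python
def submit_with_failover(paths, default_timeout, submit):
    """Submit an IO along the first path with a short timeout; on
    timeout, retry on the next path with a larger timeout, so a slow
    path fails over quickly instead of waiting the full default."""
    timeout = default_timeout / len(paths)  # first timeout < default
    for path in paths:
        if submit(path, timeout):           # returns True on success
            return path
        timeout = min(timeout * 2, default_timeout)  # escalate
    return None
```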
Resource pool management method and apparatus, resource pool control unit, and communications device
This application provides a resource pool management method and apparatus, a resource pool control unit, and a communications device. The method is applied to a resource pool system including a plurality of communications devices, and one resource pool control unit is deployed on each communications device. A first resource pool control unit that is responsible for managing a resource pool at a current moment receives a resource application request of an application program on any communications device, allocates, from the resource pool according to a preset rule, a first resource including one or more logical hardware devices, and sends a resource configuration request to a second resource pool control unit, so that the second resource pool control unit completes configuration of the first resource based on the resource configuration request, to provide a required hardware device resource for the application program.
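The first control unit's allocation step might be sketched as first-fit over a shared pool of logical hardware devices. The "preset rule" in the abstract is unspecified, so first-fit is an assumption:

```python
def allocate(pool, request_count):
    """Allocate logical hardware devices from the resource pool for an
    application's request; pool maps device id -> in-use flag."""
    free = [d for d, in_use in pool.items() if not in_use]
    if len(free) < request_count:
        return None                 # pool cannot satisfy the request
    granted = free[:request_count]  # first-fit selection
    for d in granted:
        pool[d] = True              # mark as configured/in use
    return granted
```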
High availability for a relational database management system as a service in a cloud platform
A Relational Database Management System (“RDBMS”) as a service cluster may include a master RDBMS Virtual Machine (“VM”) node associated with an Internet Protocol (“IP”) address and a standby RDBMS VM node associated with an IP address. The RDBMS as a service (e.g., PostgreSQL as a service) may also include n controller VM nodes each associated with an IP address. An internal load balancer may receive requests from cloud applications and include a frontend IP address different from the RDBMS as a service IP addresses, and a backend pool including indications of the master RDBMS VM node and the standby RDBMS VM node. A Hyper-Text Transfer Protocol (“HTTP”) custom probe may transmit requests for the health of the master RDBMS VM node and the standby RDBMS VM node via the associated IP addresses, and responses to the requests may be used in connection with a failover operation.
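The probe-driven failover decision can be sketched as below. The IP addresses and the promotion policy are hypothetical; the abstract only says probe responses are used in connection with a failover operation:

```python
def failover_decision(probe):
    """Probe the master and standby health endpoints; if the master is
    unhealthy and the standby is healthy, fail over to the standby.
    `probe(ip)` returns True if the node at that IP reports healthy."""
    master_ok = probe("10.0.0.1")   # hypothetical master VM IP
    standby_ok = probe("10.0.0.2")  # hypothetical standby VM IP
    if master_ok:
        return "master"
    if standby_ok:
        return "failover-to-standby"
    return "unavailable"
```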
Data storage migration in replicated environment
The described technology is generally directed towards replicating metadata representing a virtual data structure corresponding to replicated legacy data instead of the actual data for the data structure. Once virtual chunks are replicated to a remote, newer storage system, the corresponding legacy data is locally read into the virtual chunks to transform the virtual chunks into real data chunks of the remote newer storage system. A checksum can be replicated for the remote newer storage system to evaluate the consistency of the data. Efficient data storage migration is thus accomplished in a replicated environment based on relatively negligible replication traffic between two remote locations, while still assuring the consistency of migrated data.
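A sketch of the virtual-chunk flow: replicate only metadata plus a checksum, then hydrate the chunk on the remote system from local legacy reads and verify consistency. CRC32 stands in here for whatever checksum the system actually uses (an assumption):

```python
import zlib

def replicate_virtual_chunk(metadata, checksum):
    """Replicate a virtual chunk: metadata and checksum only, no data,
    keeping cross-site replication traffic negligible."""
    return {"meta": metadata, "checksum": checksum, "data": None}

def hydrate(chunk, local_read):
    """On the remote newer storage system, read the legacy data locally
    into the virtual chunk and verify it against the replicated checksum,
    transforming the virtual chunk into a real data chunk."""
    data = local_read(chunk["meta"])
    if zlib.crc32(data) != chunk["checksum"]:
        raise ValueError("checksum mismatch: migrated data inconsistent")
    chunk["data"] = data
    return chunk
```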