Patent classifications
G06F9/5072
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION PROCESSING PROGRAM
An information processing apparatus includes: a memory; and a processor coupled to the memory and configured to: divide a job in units of computing nodes for a plurality of computing nodes; determine whether to execute scale-out or scale-in on the basis of the load that arises when each of the computing nodes executes its portion of the divided job; execute, in a case where scale-out is determined, the scale-out according to the division of the job in units of computing nodes; and execute, in a case where scale-in is determined, the scale-in according to the division of the job in units of computing nodes.
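The abstract above describes a load-driven scaling loop tied to per-node job division. A minimal sketch of that idea follows, assuming a simple even split and fixed load thresholds; the function names (`divide_job`, `decide_scaling`, `rebalance`) and the threshold values are illustrative, not taken from the patent.

```python
# Hypothetical sketch: divide a job in units of computing nodes, then decide
# scale-out/scale-in from the load each node would carry, and re-divide.
# Thresholds and names are assumptions made for this example.

def divide_job(job_size: int, num_nodes: int) -> list[int]:
    """Split a job into per-node units as evenly as possible."""
    base, rem = divmod(job_size, num_nodes)
    return [base + (1 if i < rem else 0) for i in range(num_nodes)]

def decide_scaling(load_per_node: float, high: float = 0.8, low: float = 0.3) -> str:
    """Decide scale-out/scale-in from the load each node would carry."""
    if load_per_node > high:
        return "scale-out"
    if load_per_node < low:
        return "scale-in"
    return "steady"

def rebalance(job_size: int, num_nodes: int, capacity_per_node: int) -> tuple[str, list[int]]:
    """Apply the scaling decision, then re-divide the job in node units."""
    load = (job_size / num_nodes) / capacity_per_node
    action = decide_scaling(load)
    if action == "scale-out":
        num_nodes += 1
    elif action == "scale-in" and num_nodes > 1:
        num_nodes -= 1
    return action, divide_job(job_size, num_nodes)
```

Note that the scaling action and the job division are coupled: adding or removing a node immediately changes how the job is split, which is the point the claim emphasizes.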
DIFFERENTIATED WORKLOAD TELEMETRY
In an approach for generating differentiated workload telemetry data, a processor associates one or more services with workload-related telemetry by generating an event emitter. A processor performs a correlation analysis of the relationships and connections among connected resources and of current traffic into and out of the one or more services. A processor labels the domain context of each telemetry event. A processor communicates each telemetry event to a global event handler. A processor performs a real-time cross-correlation of telemetry data with the global event handler. A processor updates a real-time differentiated workload report.
Cloud hybrid application storage management (CHASM) system
The cloud hybrid application storage management system spans local data center and cloud-based storage and provides a unified view of content and administration throughout an enterprise. The system manages synchronization of storage locations, ensuring that files are replicated, uniquely identified, and protected against corruption. The system ingests digital media assets and creates instances of the assets, each with its own identification and rights, and houses the identifications and relationships in a Central Asset Registry (CAR). The system tracks the different instances of the assets across multiple storage locations using the CAR, which ties together disparate digital asset management (DAM) repository systems and cloud-based storage archives in which the instances reside. While the invention treats and manages multiple files/instances independently, the CAR identifies them as related to each other.
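A registry of this shape can be sketched as a mapping from asset identifier to its instances across storage locations, with checksums guarding against corruption. The class and method names below are assumptions for illustration, not the patented implementation.

```python
# Illustrative sketch of a Central Asset Registry (CAR): each asset maps to
# related instances in multiple storage locations, each instance carrying a
# checksum that protects against corruption. Names are hypothetical.
import hashlib
from dataclasses import dataclass, field

@dataclass
class AssetInstance:
    instance_id: str
    location: str          # e.g. "local-dam" or "cloud-archive"
    checksum: str          # content digest, guards against corruption

@dataclass
class CentralAssetRegistry:
    assets: dict = field(default_factory=dict)  # asset_id -> [AssetInstance]

    def ingest(self, asset_id: str, location: str, content: bytes) -> AssetInstance:
        """Register a new instance of an asset at a storage location."""
        inst = AssetInstance(
            instance_id=f"{asset_id}:{location}",
            location=location,
            checksum=hashlib.sha256(content).hexdigest(),
        )
        self.assets.setdefault(asset_id, []).append(inst)
        return inst

    def related_instances(self, asset_id: str) -> list:
        """All instances the CAR identifies as related to one another."""
        return self.assets.get(asset_id, [])

    def verify(self, asset_id: str, location: str, content: bytes) -> bool:
        """Check a replica at a location against its registered checksum."""
        digest = hashlib.sha256(content).hexdigest()
        return any(i.checksum == digest and i.location == location
                   for i in self.assets.get(asset_id, []))
```

The instances remain independent files, but the registry is what ties them together as one logical asset.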
Intra-footprint computing cluster bring-up
Methods, systems and computer program products for intra-footprint computing cluster bring-up within a virtual private cloud. A network connection is established between an initiating module and a virtual private cloud (VPC). An initiating module allocates resources of the virtual private cloud including a plurality of nodes that correspond to members of a to-be-configured computing cluster. A cluster management module having coded therein an intended computing cluster configuration is configured into at least one of the plurality of nodes. The members of the to-be-configured computing cluster interoperate from within the VPC to accomplish a set of computing cluster bring-up operations that configure the plurality of members into the intended computing cluster configuration. Execution of bring-up instructions of the management module serves to allocate networking IP addresses of the virtual private cloud. The allocated networking IP addresses of the virtual private cloud are assigned to networking interfaces of the plurality of nodes.
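The final bring-up step, allocating VPC addresses and assigning them to node interfaces, can be sketched with the standard library alone. The function name and the one-address-per-node policy are illustrative assumptions, not the patent's procedure.

```python
# Minimal sketch of the IP-allocation step of cluster bring-up: draw host
# addresses from the VPC's CIDR block and assign one to each node's
# networking interface. Names and policy are assumptions for illustration.
import ipaddress

def bring_up(vpc_cidr: str, node_names: list[str]) -> dict[str, str]:
    """Allocate one VPC address per node and assign it to its interface."""
    hosts = ipaddress.ip_network(vpc_cidr).hosts()
    assignments = {}
    for name in node_names:
        assignments[name] = str(next(hosts))  # assigned to the node's NIC
    return assignments
```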
Provisioning edge backhauls for dynamic workloads
Network capacity is provisioned in a computing environment comprising a computing service provider and an edge computing network. A cost function is applied to usage data for a number of user endpoints at the edge computing network, a number and type of workloads at the edge computing network, offload capability of the edge computing network, and resource capacities at the edge computing network. An estimated network capacity is determined, where the workloads are dynamic, and the cost function is usable to optimize the network capacity with respect to one or more criteria.
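The abstract names the inputs to the cost function: user-endpoint counts, workload demand, edge offload capability, and edge resource capacities. The sketch below assumes a simple linear form with an illustrative per-endpoint weight; the actual cost function and its optimization criteria are not given in the abstract.

```python
# Hedged sketch: estimate the backhaul capacity to provision at an edge
# network after accounting for local offload. The linear demand model and
# the 0.5 Mbps-per-endpoint weight are assumptions for illustration only.

def estimate_capacity(num_endpoints: int,
                      workload_demand_mbps: float,
                      offload_fraction: float,
                      edge_capacity_mbps: float) -> float:
    """Capacity the backhaul must carry after edge offload is applied."""
    per_endpoint_mbps = 0.5                       # assumed baseline demand
    demand = num_endpoints * per_endpoint_mbps + workload_demand_mbps
    served_at_edge = min(demand * offload_fraction, edge_capacity_mbps)
    return max(demand - served_at_edge, 0.0)
```

Because the workloads are dynamic, such an estimate would be recomputed as the usage data changes.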
Method and system for electing a master in a cloud based distributed system using a serverless framework
A method and system elect a master node from a plurality of nodes in a distributed system. A serverless elector function periodically outputs an election API call to a load balancer. The load balancer elects a master node from a plurality of candidate nodes each time the load balancer receives the election API call.
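The two-part flow above, a periodic serverless function and a load balancer that elects on each call, can be simulated in a few lines. The class names and the random choice of master are illustrative assumptions; the abstract does not specify how the load balancer picks among candidates.

```python
# Illustrative simulation: a serverless elector issues an election API call
# each tick, and the load balancer elects a master from its candidate pool
# every time it receives the call. All names here are hypothetical.
import random

class LoadBalancer:
    def __init__(self, candidates: list[str]):
        self.candidates = candidates
        self.master = None

    def handle_election_call(self) -> str:
        """Elect a master from the current candidate pool (policy assumed)."""
        self.master = random.choice(self.candidates)
        return self.master

def serverless_elector(lb: LoadBalancer, ticks: int) -> list[str]:
    """Simulate the periodic elector: one election API call per tick."""
    return [lb.handle_election_call() for _ in range(ticks)]
```

The design keeps election state out of the nodes themselves: mastership is re-decided on every periodic call, so a failed master is replaced at the next tick without any node-side consensus round.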
Leader election in a distributed system based on node weight and leadership priority based on network performance
Example implementations relate to consensus protocols in a stretched network. According to an example, a method includes continuously monitoring network performance and/or network latency among a cluster of nodes in a distributed computer system. Leadership priority for each node is set based at least in part on the monitored network performance or network latency. Each node has a vote weight based at least in part on the leadership priority of the node, and each node's vote is biased by that vote weight. The node whose biased vote tally exceeds the biased tally received by any other node in the cluster is selected as the leader node.
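The weight-biased tally can be sketched directly. The mapping from latency to vote weight below (an inverse-latency formula) is an assumption for illustration; the abstract only says the weight derives from monitored network performance or latency.

```python
# Minimal sketch of weight-biased leader election: each voter's vote is
# scaled by a weight derived from its measured latency, and the candidate
# with the highest biased tally wins. The weight formula is an assumption.

def vote_weight(latency_ms: float) -> float:
    """Lower latency -> higher leadership priority -> heavier vote."""
    return 1.0 / (1.0 + latency_ms)

def elect_leader(latencies: dict[str, float], votes: dict[str, str]) -> str:
    """Tally votes, each biased by the voter's weight; highest tally leads."""
    tally: dict[str, float] = {}
    for voter, candidate in votes.items():
        tally[candidate] = tally.get(candidate, 0.0) + vote_weight(latencies[voter])
    return max(tally, key=tally.get)
```

Note that a well-connected node can win with fewer raw votes: in the test below, node `a` beats two raw votes for `b` because `a`'s single vote carries more weight.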
Automated runtime configuration for dataflows
Methods, systems and computer program products are provided for automated runtime configuration for dataflows to automatically select or adapt a runtime environment or resources to a dataflow plan prior to execution. Metadata generated for dataflows indicates dataflow information, such as the numbers and types of sources, sinks and operations, and the amount of data being consumed, processed and written. Weighted dataflow plans are created from unweighted dataflow plans based on this metadata. Weights that indicate operation complexity or resource consumption are generated for data operations. A runtime environment or resources to execute a dataflow plan is selected based on the weighted dataflow and/or a maximum flow. Preferences may be provided to influence weighting and runtime selections.
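A simple version of weighting a plan and selecting a runtime for it is sketched below. The per-operation weight table and the tier thresholds are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: turn an unweighted operation list into a weighted plan
# cost, then select a runtime environment sized to that cost. The weight
# table and thresholds are assumptions made for this example.

OP_WEIGHTS = {"read": 1, "filter": 1, "join": 5, "aggregate": 3, "write": 1}

def weigh_plan(ops: list[str]) -> int:
    """Create a weighted plan cost from an unweighted operation list."""
    return sum(OP_WEIGHTS.get(op, 2) for op in ops)  # unknown ops cost 2

def select_runtime(ops: list[str]) -> str:
    """Pick a runtime environment based on the weighted dataflow cost."""
    cost = weigh_plan(ops)
    if cost <= 4:
        return "single-node"
    if cost <= 10:
        return "small-cluster"
    return "large-cluster"
```

The preferences mentioned in the abstract could be modeled as overrides on `OP_WEIGHTS` or on the tier thresholds, steering the same plan toward a cheaper or faster runtime.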
Ad hoc decentralized cloud infrastructure
Technologies for establishing and utilizing a decentralized cloud infrastructure using a plurality of mobile computing devices include broadcasting for the formation of the decentralized cloud computing and storage infrastructure and establishing wireless communications between the plurality of mobile computing devices. The plurality of mobile computing devices self-organize and cooperate with one another to establish a structured decentralized cloud infrastructure that exposes and shares resources, services, and/or applications for ad hoc or socially-driven decentralized cloud computing purposes.
Method For Organizing Tasks In The Nodes Of A Computer Cluster, Associated Task Organizer And Cluster
The invention relates to a method for organizing tasks in at least some nodes of a computer cluster, comprising: first, launching two containers on each of said nodes, a standard container and a priority container; next, for all or part of said nodes with two containers, at each node: while no priority task occurs, assigning one or more available resources of the node to its standard container in order to execute a standard task, its priority container not executing any task; and, when a priority task occurs, dynamically switching only a portion of the resources from the standard container to the priority container, such that the priority task is executed in the priority container with the switched portion of the resources while the standard task continues to be executed, without being halted, in the standard container with the non-switched portion of the resources.
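The per-node behaviour above can be sketched as a small state machine: all resources serve the standard container until a priority task arrives, then only a portion is switched over. The 50% split and the class name are assumed parameters for illustration; the patent does not fix the portion.

```python
# Illustrative sketch of the two-container node: a priority task switches
# only a portion of the resources to the priority container, and the
# standard task keeps running on the remainder. The 0.5 split is assumed.

class Node:
    def __init__(self, cpus: int):
        self.standard_cpus = cpus   # standard container starts with everything
        self.priority_cpus = 0      # priority container idles

    def priority_task_arrives(self, portion: float = 0.5) -> None:
        """Dynamically switch only a portion of resources to the priority
        container; the standard task continues, un-halted, on the rest."""
        switched = int(self.standard_cpus * portion)
        self.standard_cpus -= switched
        self.priority_cpus += switched

    def priority_task_done(self) -> None:
        """Return the switched resources to the standard container."""
        self.standard_cpus += self.priority_cpus
        self.priority_cpus = 0
```

The key property is that the standard container never drops to zero resources while a priority task runs, so the standard task is slowed but never halted.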