G06F2209/5019

Communication management apparatus and communication management method

A communication management apparatus suppresses occurrence of a communication anomaly in a cluster by including: an acquisition unit acquiring quantities of traffic of communications performed by one or more communication units operating in each of a plurality of computers constituting a cluster; a prediction unit predicting future quantities of traffic of the communications; an identification unit calculating, for each of the computers, a total of the future quantities of traffic of the communication units operating in the computer and identifying a first computer for which the total exceeds a threshold; and a move control unit controlling a move of a communication unit operating in the first computer to a second computer.
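The acquire → predict → identify → move pipeline in this abstract can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the prediction is a simple mean of recent samples, and all names (`plan_moves`, the unit/computer identifiers) are invented for the example.

```python
from statistics import mean

def predict_traffic(history):
    """Predict a unit's future traffic as the mean of its recent samples."""
    return mean(history)

def plan_moves(units, threshold):
    """units: {computer: {unit: [traffic samples]}}.
    Returns (unit, source, target) moves for each computer whose
    predicted total traffic exceeds the threshold."""
    predicted = {c: {u: predict_traffic(h) for u, h in us.items()}
                 for c, us in units.items()}
    totals = {c: sum(p.values()) for c, p in predicted.items()}
    moves = []
    for first, total in totals.items():
        if total > threshold:
            # Move the heaviest unit to the least-loaded computer.
            unit = max(predicted[first], key=predicted[first].get)
            second = min(totals, key=totals.get)
            if second != first:
                moves.append((unit, first, second))
    return moves
```

A real controller would also re-estimate the target's load after each planned move; this sketch only shows the threshold-triggered selection.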

Method to optimize restore based on data protection workload prediction
11693743 · 2023-07-04 ·

An intelligent method of selecting a data recovery site upon receiving a data recovery request. The backup system collects historical activity data of the storage system to identify the workload of each data recovery site. A predicted workload for each data recovery site is then generated from the collected data. When a request for data recovery is received, the system first identifies which data recovery sites have copies of the files to be recovered. It then uses the predicted workloads of these sites to determine whether to use a geographically local site or a site that may be geographically remote but has a lower workload.
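The site-selection rule described here (prefer the local site unless its predicted load is too high, otherwise take the least-loaded site holding a copy) can be sketched as follows. The data layout, the function name, and the load threshold are all illustrative assumptions, not taken from the patent.

```python
def choose_recovery_site(file_id, sites, local_site, load_threshold=0.7):
    """sites: {name: {"files": set of file ids, "predicted_load": float}}.
    Prefer the geographically local site unless its predicted load is
    high; otherwise pick the candidate with the lowest predicted load."""
    # Only sites that actually hold a copy of the file are candidates.
    candidates = {n: s for n, s in sites.items() if file_id in s["files"]}
    if not candidates:
        return None
    if (local_site in candidates
            and candidates[local_site]["predicted_load"] < load_threshold):
        return local_site
    return min(candidates, key=lambda n: candidates[n]["predicted_load"])
```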

ALLOCATING OF COMPUTING RESOURCES FOR APPLICATIONS

A method for performing scheduling includes extracting information from at least one log file for an application. The method also includes determining an allocation of cloud resources for the application based on the information from the log file(s).
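A toy version of log-driven allocation might extract peak resource figures from application logs and size the cloud allocation with some headroom. The log line format (`cpu=… mem=…`), the regex, and the headroom factor are hypothetical; the patent does not specify them.

```python
import re

def allocate_from_logs(log_lines, headroom=1.2):
    """Scan hypothetical log lines of the form 'cpu=1.5 mem=512' for
    peak CPU and memory use, then size the allocation with headroom."""
    peak_cpu = peak_mem = 0.0
    for line in log_lines:
        m = re.search(r"cpu=([\d.]+)\s+mem=(\d+)", line)
        if m:
            peak_cpu = max(peak_cpu, float(m.group(1)))
            peak_mem = max(peak_mem, int(m.group(2)))
    return {"cpu": peak_cpu * headroom, "mem_mb": int(peak_mem * headroom)}
```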

SELECTING A NODE OF A WORK GROUP FOR EXECUTING A TARGET TRANSACTION OF ANOTHER WORK GROUP TO EXECUTE SKIPPABLE STEPS PRIOR TO A PREDICTED INTERRUPTION

A computing network includes nodes of different work groups. Nodes of a work group are dedicated to transactions of the work group. If a node of a first work group is predicted to have an idleness window, a second work group may borrow the node to execute a transaction of the second work group. At least a subset of steps of the transaction may be categorized into a step group. Trees of a transaction may be categorized into one or more tree groups. A node is selected for executing a transaction if the predicted idleness duration of the node is sufficient relative to the predicted runtime of the transaction, the step group, and/or tree group. A credit system is maintained. A first work group transfers a credit to a second work group when borrowing a node of the second work group for executing a transaction of the first work group.

SELECTING A NODE DEDICATED TO TRANSACTIONS OF A PARTICULAR WORK GROUP FOR EXECUTING A TARGET TRANSACTION OF ANOTHER WORK GROUP

A computing network includes nodes of different work groups. Nodes of a work group are dedicated to transactions of the work group. If a node of a first work group is predicted to have an idleness window, a second work group may borrow the node to execute a transaction of the second work group. At least a subset of steps of the transaction may be categorized into a step group. Trees of a transaction may be categorized into one or more tree groups. A node is selected for executing a transaction if the predicted idleness duration of the node is sufficient relative to the predicted runtime of the transaction, the step group, and/or tree group. A credit system is maintained. A first work group transfers a credit to a second work group when borrowing a node of the second work group for executing a transaction of the first work group.

SELECTING A NODE GROUP OF A WORK GROUP FOR EXECUTING A TARGET TRANSACTION OF ANOTHER WORK GROUP TO OPTIMIZE PARALLEL EXECUTION OF STEPS OF THE TARGET TRANSACTION

A computing network includes nodes of different work groups. Nodes of a work group are dedicated to transactions of the work group. If a node of a first work group is predicted to have an idleness window, a second work group may borrow the node to execute a transaction of the second work group. At least a subset of steps of the transaction may be categorized into a step group. Trees of a transaction may be categorized into one or more tree groups. A node is selected for executing a transaction if the predicted idleness duration of the node is sufficient relative to the predicted runtime of the transaction, the step group, and/or tree group. A credit system is maintained. A first work group transfers a credit to a second work group when borrowing a node of the second work group for executing a transaction of the first work group.
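The shared mechanism in this family of abstracts (borrow a node of another work group when its predicted idleness window covers the transaction's predicted runtime, and transfer a credit to the lending group) can be sketched minimally. The data shapes and names are assumptions for illustration only.

```python
def borrow_node(nodes, runtime, borrower, credits):
    """nodes: {name: {"group": str, "idle_window": float}}.
    Select a node from another work group whose predicted idleness
    window is at least the transaction's predicted runtime, and
    transfer one credit from the borrower to the lending group."""
    for name, node in nodes.items():
        if node["group"] != borrower and node["idle_window"] >= runtime:
            credits[borrower] -= 1
            credits[node["group"]] += 1
            return name
    return None  # no node has a sufficient idleness window
```

The claims also allow comparing the window against the runtime of a step group or tree group rather than the whole transaction; the same comparison applies with a different `runtime` argument.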

MODEL MANAGEMENT SYSTEM AND MODEL MANAGEMENT METHOD
20220413926 · 2022-12-29 ·

There is provided a model management system that manages, for each computing environment and each service, a model capable of inferring the quantity of resources of the computing environment used when an application operates. The model management system includes an acquiring unit that acquires environment information, including at least one of configuration information and setting information of a computing environment, from each of a plurality of computing environments; a detecting unit that detects a computing environment in which the environment information acquired by the acquiring unit has changed; and a selecting unit that selects, as an update target candidate, a model associated with the computing environment detected by the detecting unit.
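The detect-and-select flow (compare newly acquired environment information against the previous snapshot, then flag models bound to changed environments as update candidates) reduces to a small sketch. The dictionary shapes and function name are assumptions, not from the publication.

```python
def select_update_candidates(old_envs, new_envs, models):
    """old_envs/new_envs: {env_id: environment info (config/settings)}.
    models: {model_id: env_id}. Detect environments whose acquired
    information changed and return the associated models as
    update target candidates."""
    changed = {e for e, info in new_envs.items() if old_envs.get(e) != info}
    return [m for m, env in models.items() if env in changed]
```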

INTELLIGENT RESOURCE MANAGEMENT
20220413931 · 2022-12-29 ·

A system and method for distributing resources in a computing system is disclosed. The resources include hardware components in a hardware pool, a management infrastructure, and an application. A telemetry system is coupled to the resources to collect operational data from the operation of the resources. A data analytics system is coupled to the telemetry system to predict a future operational data value based on the collected operational data. A policy engine is coupled to the data analytics system to determine a configuration to allocate the resources based on the future operational data value.
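The telemetry → analytics → policy chain can be illustrated with a deliberately naive sketch: a linear extrapolation standing in for the analytics system, and a target-utilization rule standing in for the policy engine. Both are stand-ins chosen for the example; the publication does not prescribe them.

```python
import math

def predict_next(samples):
    """Analytics stand-in: linear extrapolation from the last two
    telemetry samples."""
    if len(samples) < 2:
        return samples[-1]
    return samples[-1] + (samples[-1] - samples[-2])

def units_needed(predicted_util, target=0.6):
    """Policy stand-in: provision enough resource units that the
    predicted utilization per unit stays at or below the target."""
    return max(1, math.ceil(predicted_util / target))
```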

POSITIONING OF EDGE COMPUTING DEVICES

A processor may receive user data associated with one or more locations of a user in an environment. The processor may receive edge computing data associated with utilization of edge computing resources by the user. The processor may analyze the edge computing data to associate a context with an edge computing resource need. The processor may analyze the user data to associate a context with a location of the user within the environment. The processor may determine a first location of the user in the environment at a first time. The processor may predict a first edge computing need of the user in the first location. The processor may determine an arrangement of one or more edge computing devices configured to meet the first edge computing need of the user at the first time.
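One simplistic reading of the final step (arranging edge devices to meet a predicted need at the user's predicted location) is to place just enough devices, preferring candidate sites closest to the user. Modeling locations as scalar positions and giving every device the same capacity are assumptions made purely for the sketch.

```python
def arrange_devices(predicted_need, device_capacity, locations, user_location):
    """Place just enough identical edge devices to satisfy the predicted
    need, preferring candidate locations closest to the user."""
    needed = -(-predicted_need // device_capacity)  # ceiling division
    ranked = sorted(locations, key=lambda loc: abs(loc - user_location))
    return ranked[:needed]
```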

ADAPTIVE CONTROL OF DEADLINE-CONSTRAINED WORKLOAD MIGRATIONS
20220413942 · 2022-12-29 ·

Adaptive control of deadline-constrained workload migrations can include monitoring migrations of workloads forming a wave migrating from a source computing node to a target computing node. The monitoring can be performed in real time. The migrations can be performed by transferring image replications of each workload over a data communication network. Based on an expected bandwidth availability, a likelihood that a cutover deadline associated with the wave is exceeded prior to completing a migration of each of the wave's workloads can be predicted. Migration of one or more selected workloads can be suspended in response to determining that exceeding the cutover deadline prior to completing migration of each of the wave's workloads is likely.
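The core feasibility check here is whether the wave's remaining transfer, at the expected bandwidth, fits before the cutover deadline; if not, selected workloads are suspended. A minimal sketch of that check, suspending the largest workloads first (one possible selection heuristic, not necessarily the patent's):

```python
def plan_wave(workloads, bandwidth, time_to_deadline):
    """workloads: {name: remaining bytes to replicate}.
    If transferring every image replication at the expected bandwidth
    would exceed the cutover deadline, suspend the largest workloads
    until the remainder fits. Returns the suspended workload names."""
    suspended = []
    active = dict(workloads)
    while active and sum(active.values()) / bandwidth > time_to_deadline:
        biggest = max(active, key=active.get)
        suspended.append(biggest)
        del active[biggest]
    return suspended
```

The abstract's real-time monitoring would re-run this check as bandwidth estimates change during the wave.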