G06F2209/503

COORDINATED MICROSERVICES

Techniques are provided for a coordinated microservice system including a coordinator and multiple services that interact with each other. Each of the services can have multiple execution instances, which run independently of each other. In operation, each instance of each service can use, or otherwise depend upon, one or more of the other services to perform at least some of its respective function(s). The coordinator monitors execution requests from each instance of the services to other services, along with performance metrics of those other services, and calculates an available capacity of the other services upon which the requesting services depend based on the monitored performance metrics and the level(s) of resource consumption associated with each of the execution requests. The coordinator then selects one of the execution requests for execution based on the available capacity of the other services, so that the selected request can be serviced without degrading the other services.
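The coordinator's selection step might be sketched as follows; the names, capacity units, and first-fit policy are illustrative assumptions, since the abstract does not prescribe an implementation:

```python
def select_request(requests, capacity):
    """Pick an execution request that fits within the headroom of every
    downstream service it depends on, so servicing it does not degrade
    those services. requests: list of {'id', 'needs': {service: units}};
    capacity: {service: available units}. Returns a request id or None."""
    for req in requests:
        if all(capacity.get(svc, 0) >= units
               for svc, units in req["needs"].items()):
            return req["id"]
    return None  # defer: no request fits without degrading a dependency

requests = [
    {"id": "r1", "needs": {"auth": 5, "db": 10}},
    {"id": "r2", "needs": {"auth": 2, "db": 3}},
]
capacity = {"auth": 3, "db": 4}
print(select_request(requests, capacity))  # prints "r2"
```

Here "r1" would overcommit both dependencies, so the coordinator defers it and services "r2" instead.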

Computerized control of execution pipelines

Systems, methods, and other embodiments associated with controlling an execution pipeline are described. In one embodiment, a method includes generating an execution pipeline for executing a plurality of tasks. The example method may also include evaluating execution definitions of the tasks to identify execution properties of the plurality of tasks. The example method may also include assigning each task to an execution environment selected from a set of execution environments based upon execution properties of the task matching execution properties of the execution environments. The example method may also include controlling the execution pipeline to execute each task within the assigned execution environments.
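The property-matching assignment described in this abstract can be illustrated with a minimal sketch, where sets stand in for the execution properties derived from task execution definitions (an assumed representation, not one the abstract specifies):

```python
def assign_tasks(tasks, environments):
    """tasks: {task: set of execution properties identified from its
    execution definition}; environments: {environment: set of properties
    it supports}. Each task is assigned to the first environment whose
    supported properties cover all of the task's properties."""
    assignment = {}
    for task, props in tasks.items():
        for env, supported in environments.items():
            if props <= supported:  # every task property is matched
                assignment[task] = env
                break
    return assignment

tasks = {"train": {"gpu"}, "report": {"cpu"}}
environments = {"vm-small": {"cpu"}, "vm-gpu": {"cpu", "gpu"}}
print(assign_tasks(tasks, environments))
```

The pipeline controller would then dispatch each task to its assigned environment.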

Processing allocation in data center fleets
11637791 · 2023-04-25 ·

A method and system for allocating tasks among processing devices in a data center. The method may include: receiving a request to allocate a task to one or more processing devices, the request indicating a required bandwidth for performing the task, and a list of predefined processing device groups connected to a host server, the list indicating the availability of each processing device group for allocation of tasks and the available bandwidth of each available processing device group; assigning the task to a processing device group having an available bandwidth greater than or equal to the required bandwidth for performing the task; and updating the list to indicate that the processing device group to which the task is assigned, and any other processing device group sharing at least one processing device with it, are unavailable. The task may be assigned to the available processing device group having the lowest power requirement.
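A toy version of this allocation step, with hypothetical group records (the abstract does not fix a data layout), could look like:

```python
def allocate(groups, required_bw):
    """Assign a task to the available group with bandwidth >= required_bw
    and the lowest power need, then mark that group and every group
    sharing at least one processing device with it as unavailable."""
    candidates = [g for g in groups
                  if g["available"] and g["bandwidth"] >= required_bw]
    if not candidates:
        return None
    chosen = min(candidates, key=lambda g: g["power"])
    for g in groups:
        if g["devices"] & chosen["devices"]:
            g["available"] = False
    return chosen["name"]

groups = [
    {"name": "g1", "devices": {1, 2}, "bandwidth": 10, "power": 5, "available": True},
    {"name": "g2", "devices": {2, 3}, "bandwidth": 10, "power": 3, "available": True},
    {"name": "g3", "devices": {4},    "bandwidth": 4,  "power": 1, "available": True},
]
print(allocate(groups, 8))  # prints "g2"
```

"g2" wins on power among the groups with enough bandwidth; "g1" shares device 2 with it, so both become unavailable while "g3" remains free.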

Management system and management method for migrating a business system operating on a target infrastructure

A management system manages an infrastructure system configured to provide a resource for operating a business system and including a plurality of infrastructures with different architectures. The management system comprises a monitoring module configured to monitor a usage state of the resource by the business system, and an analysis module configured to identify a business system that is a migration target. The analysis module is configured to: analyze a usage tendency of the resource by the business system, determine which of the infrastructures is appropriate for the business system, and store a result of the determination as a first determination result; identify a target infrastructure, which is an infrastructure for which the resource is predicted to become insufficient; select a business system as the migration target from among business systems operating on the target infrastructure based on the first determination result; and migrate the selected business system.
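The selection logic might be sketched as below; the record layout and the "prefer systems whose determined-appropriate infrastructure is elsewhere" policy are assumptions layered on the abstract's first determination result:

```python
def pick_migration(systems, capacity, predicted_load):
    """systems: list of {'name', 'host', 'preferred'}, where 'preferred'
    is the infrastructure the analysis determined appropriate (the first
    determination result). capacity / predicted_load: {infrastructure:
    resource units}. Returns (system, destination) for a system running
    on an infrastructure predicted to run short of the resource and whose
    appropriate infrastructure is elsewhere, or None."""
    targets = {i for i in capacity if predicted_load.get(i, 0) > capacity[i]}
    for s in systems:
        if s["host"] in targets and s["preferred"] != s["host"]:
            return s["name"], s["preferred"]
    return None

systems = [
    {"name": "billing",  "host": "vm-cluster", "preferred": "vm-cluster"},
    {"name": "batch-bi", "host": "vm-cluster", "preferred": "bare-metal"},
]
print(pick_migration(systems, {"vm-cluster": 10, "bare-metal": 20},
                     {"vm-cluster": 12, "bare-metal": 5}))
```

Migrating "batch-bi" both relieves the overcommitted infrastructure and moves the system to the architecture the analysis deemed appropriate for it.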

Systems and methods for dynamically routing an event to a component of a hybrid application

The present disclosure is directed to dynamically routing an event to a component of a hybrid application. For example, a method may include: detecting an event from a first component of a first component type of a hybrid application; transmitting a request to execute a function associated with the event to a plurality of components, the plurality of components being a combination of components of the first component type and components of a second component type different from the first component type, the first component being different from the plurality of components; dynamically determining which component of the plurality of components to assign to execute the function, the dynamically determining being based on which components of the plurality of components are available to execute the function and one or more rules; assigning the function to the determined component; and receiving a result of the function from the determined component.
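A minimal sketch of the dynamic determination step, with hypothetical component records and a single illustrative rule (the abstract leaves both unspecified):

```python
def route_event(event, components, rules):
    """Dynamically pick a component to execute the function associated
    with an event: the component must be available, offer the function,
    and satisfy every routing rule. components: list of {'name', 'type',
    'available', 'functions'}; rules: predicates over (event, component)."""
    for c in components:
        if (c["available"]
                and event["function"] in c["functions"]
                and all(rule(event, c) for rule in rules)):
            return c["name"]
    return None

components = [
    {"name": "native-view", "type": "native", "available": False,
     "functions": {"render"}},
    {"name": "web-view", "type": "web", "available": True,
     "functions": {"render"}},
]
rules = [lambda event, c: c["type"] != event.get("exclude_type")]
print(route_event({"function": "render"}, components, rules))
```

The native component is skipped because it is unavailable, so the function is assigned to the web component of the other component type.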

PROVIDING A USER-CENTRIC APPLICATION

According to aspects of the present disclosure, there are provided methods and apparatus for providing a user-centric application comprising a plurality of service modules. The method comprises: identifying available resources associated with each of a plurality of edge devices in a local network; identifying a first device of the local network, the first device offering a user input service; identifying a second device of the local network, the second device to receive an output event and present the output event for consumption by the user; for each service module, which provides a portion of a functionality of the user-centric application and has an associated resource request, deploying the service module to an edge device based on the associated resource request and the identified available resources; and configuring data flows between the deployed service modules, the first device, and the second device to realize the user-centric application.
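The per-module deployment decision could be sketched as a greedy first-fit over the identified available resources; the single-unit resource model and largest-first ordering are assumptions for illustration:

```python
def deploy_modules(modules, free):
    """Greedy first-fit placement of service modules onto edge devices.
    modules: {module: requested resource units}; free: {device: available
    units}, mutated as modules are placed. Largest requests go first so
    big modules are not stranded by earlier small placements."""
    placement = {}
    for mod, need in sorted(modules.items(), key=lambda kv: -kv[1]):
        for dev, avail in free.items():
            if avail >= need:
                placement[mod] = dev
                free[dev] = avail - need
                break
        else:
            return None  # the local network cannot host this module
    return placement

print(deploy_modules({"ui": 2, "inference": 3}, {"hub": 3, "display": 2}))
```

Data flows between the placed modules, the input device, and the output device would then be configured on top of this placement.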

CONTAINERIZED WORKLOAD MANAGEMENT IN CONTAINER COMPUTING ENVIRONMENT
20230123350 · 2023-04-20 ·

Techniques for managing containerized workloads in a container computing environment are disclosed. For example, a method comprises the following steps. In a container computing environment configured to create an instance of a containerized workload for executing a microservice, the method computes a parameter based on a first set of execution conditions for the microservice, wherein the parameter represents a resource utilization value at which at least one additional instance of the containerized workload is created for executing the microservice. The method then re-computes the parameter based on a second set of execution conditions for the microservice.
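One plausible way such a parameter (an autoscaling utilization threshold) could be computed and then re-computed as execution conditions change is shown below; the specific formula and condition names are illustrative assumptions, not taken from the abstract:

```python
def scale_threshold(conditions):
    """Compute the resource-utilization value at which an additional
    instance of the containerized workload is created. Illustrative rule:
    tighter latency targets leave less queueing headroom, so scale out
    earlier; the result is clamped to [0.5, 0.9]."""
    headroom = conditions["latency_target_ms"] / conditions["per_request_ms"]
    return round(min(0.9, max(0.5, 1 - 1 / headroom)), 2)

# First set of execution conditions, then a re-computation for a second set.
print(scale_threshold({"latency_target_ms": 100, "per_request_ms": 10}))  # 0.9
print(scale_threshold({"latency_target_ms": 40, "per_request_ms": 10}))   # 0.75
```

The point of the re-computation is that the same microservice scales out earlier (at 75% rather than 90% utilization) once its latency budget tightens.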

DYNAMICALLY PERFORMING A STORAGE SERVICE JOB WORKFLOW
20230123568 · 2023-04-20 ·

An indication of a storage service job to be performed is received. A task to be performed for the storage service job is determined. The task is added to a work queue. Execution of one or more tasks in the work queue that includes the task is dynamically managed. Resources are dynamically allocated to one or more virtualization containers that are assigned to execute the one or more tasks in the work queue. An identification of one or more new tasks to be performed for the storage service job is received from one of the virtualization containers executing the task. The one or more new tasks are added to the work queue.
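The queue-draining loop, including containers reporting newly identified tasks back into the work queue, can be sketched as follows (the snapshot/chunk task names are hypothetical):

```python
from collections import deque

def run_job(initial_tasks, execute):
    """Drain a work queue for a storage service job. execute(task) runs
    the task (in practice inside a virtualization container) and returns
    any new tasks that container identified, which are queued in turn."""
    queue = deque(initial_tasks)
    completed = []
    while queue:
        task = queue.popleft()
        queue.extend(execute(task))
        completed.append(task)
    return completed

def execute(task):
    # A snapshot task discovers per-chunk copy tasks; chunks yield nothing.
    return ["copy-chunk-0", "copy-chunk-1"] if task == "snapshot" else []

print(run_job(["snapshot"], execute))
```

In the described system the resource allocation to the containers is itself dynamic; this sketch only shows the task-flow side.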

DISTRIBUTED COMPUTING WITH VARIABLE ENERGY SOURCE AVAILABILITY

A computer system that includes a plurality of compute clusters that are located at different geographical locations. Each compute cluster is powered by a local energy source at a geographical location of that compute cluster. Each local energy source has a pattern of energy supply that is variable over time based on an environmental factor. The computer system further includes a server system that executes a global scheduler that distributes virtual machines that perform compute tasks for server-executed software programs to the plurality of compute clusters, which form a distributed compute platform. To distribute virtual machines for a target server-executed software program, the global scheduler is configured to select a subset of compute clusters that have different, complementary patterns of energy supply such that the subset of compute clusters aggregately provides a target compute resource availability for virtual machines for the target server-executed software program.
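The subset-selection idea can be illustrated with a brute-force sketch over per-slot energy supply patterns; the time-slot model and exhaustive search are assumptions made for clarity, not the scheduler's actual algorithm:

```python
from itertools import combinations

def pick_clusters(supply, target):
    """supply: {cluster: [energy available per time slot]}; target:
    compute resource availability required in every slot. Returns the
    smallest subset whose aggregate supply covers the target in each
    slot, i.e. clusters with complementary energy patterns."""
    slots = len(next(iter(supply.values())))
    names = list(supply)
    for size in range(1, len(names) + 1):
        for subset in combinations(names, size):
            if all(sum(supply[n][t] for n in subset) >= target
                   for t in range(slots)):
                return set(subset)
    return None

# Solar peaks by day, wind by night: neither alone meets the target.
print(pick_clusters({"solar": [3, 0], "wind": [0, 3]}, 2))
```

Neither cluster covers both slots on its own, but together their complementary patterns aggregately provide the target availability.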

Machine-Learning-Based Load Balancing for Cloud-Based Disaster Recovery Apparatuses, Processes and Systems
20230069593 · 2023-03-02 ·

The Machine-Learning-Based Load Balancing for Cloud-Based Disaster Recovery Apparatuses, Processes and Systems (“MLLB”) transforms workload agent installation request, AWCD training request, NWCD training request, asset workload classification request, node workload classification request, asset virtualization request inputs via MLLB components into workload agent installation response, AWCD training response, NWCD training response, asset workload classification response, node workload classification response, asset virtualization response outputs. An asset virtualization request datastructure is obtained. A set of asset workload classification labels for the asset determined using an asset workload classification datastructure is retrieved. A set of node workload classification labels for each node in a set of available compute nodes determined using a node workload classification datastructure is retrieved. A set of compatible candidate compute nodes is determined using a set of capacity threshold rules. A virtual machine corresponding to the asset is instantiated on a selected candidate compute node.
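The candidate-node filtering stage, applying the capacity threshold rules to the classification labels, might look like the following sketch; the label sets, utilization figures, and per-label thresholds are hypothetical stand-ins for the MLLB datastructures:

```python
def candidate_nodes(asset_labels, nodes, thresholds):
    """Determine compatible candidate compute nodes for an asset.
    nodes: {node: {'labels': set of node workload classification labels,
    'utilization': current fraction}}; thresholds: {label: maximum
    utilization permitted when hosting that workload class} (the
    capacity threshold rules). A node qualifies if it shares a label
    with the asset and satisfies every applicable threshold."""
    out = []
    for name, node in nodes.items():
        shared = asset_labels & node["labels"]
        if shared and all(node["utilization"] <= thresholds.get(label, 1.0)
                          for label in shared):
            out.append(name)
    return out

nodes = {
    "node-a": {"labels": {"db", "web"}, "utilization": 0.6},
    "node-b": {"labels": {"db"}, "utilization": 0.9},
}
print(candidate_nodes({"db"}, nodes, {"db": 0.8}))  # only node-a qualifies
```

A virtual machine for the asset would then be instantiated on one node selected from the returned candidates.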