G06F9/4856

Determining optimal placements of workloads on multiple platforms as a service in response to a triggering event

A computer-implemented method, a computer program product, and a computer system for placements of workloads in a system of multiple platforms as a service. A computer detects a triggering event for modifying a matrix that pairs respective workloads on respective platforms and includes attributes of running respective workloads on respective platforms. The computer recalculates the attributes in the matrix, in response to the triggering event being detected. The computer determines optimal placements of the respective workloads on the respective platforms, based on information in the matrix. The computer places the respective workloads on the respective platforms, based on the optimal placements.
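The recalculate-then-select loop described above can be sketched as follows. This is an illustrative model only: the cost formula, the metric names (`cpu_cost`, `mem_cost`), and the function names are assumptions, not details from the patent.

```python
def recalculate_attributes(matrix, platform_metrics):
    """Refresh each (workload, platform) entry of the matrix from fresh
    platform metrics, as triggered by the detected event."""
    return {
        (w, p): platform_metrics[p]["cpu_cost"] * demand["cpu"]
                + platform_metrics[p]["mem_cost"] * demand["mem"]
        for (w, p), demand in matrix.items()
    }

def optimal_placements(costs):
    """Pick, per workload, the platform with the lowest recalculated cost."""
    best = {}
    for (w, p), c in costs.items():
        if w not in best or c < best[w][1]:
            best[w] = (p, c)
    return {w: p for w, (p, _) in best.items()}

# Matrix pairing workloads with platforms, holding per-pair resource demand.
matrix = {("w1", "A"): {"cpu": 2, "mem": 4}, ("w1", "B"): {"cpu": 2, "mem": 4}}
metrics = {"A": {"cpu_cost": 1.0, "mem_cost": 0.5},
           "B": {"cpu_cost": 0.5, "mem_cost": 0.25}}
costs = recalculate_attributes(matrix, metrics)
print(optimal_placements(costs))  # {'w1': 'B'}
```

In this sketch the triggering event simply drives a call to `recalculate_attributes`, after which placements are read directly off the refreshed matrix.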

VIRTUAL MACHINE MIGRATION METHOD AND RELATED DEVICE

Embodiments of this application disclose a virtual machine migration method. One example method includes: instructing, by a controller, a proxy virtual machine to mount a volume; replacing, by using the proxy virtual machine, a driver of an original platform in the volume with a driver of a target platform; and mounting the replaced volume to a target virtual machine.
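The controller/proxy flow can be modeled minimally as below. The volume layout, driver names, and function names are illustrative assumptions, not taken from the patent.

```python
def replace_driver(volume, original_driver, target_driver):
    """Proxy VM step: swap the original platform's driver inside the
    mounted volume for the target platform's driver."""
    volume["drivers"].remove(original_driver)
    volume["drivers"].append(target_driver)
    return volume

def migrate(volume, original_driver, target_driver, target_vm):
    """Controller flow: the proxy mounts the volume, swaps the driver,
    then the replaced volume is mounted to the target VM."""
    replaced = replace_driver(volume, original_driver, target_driver)
    target_vm["volumes"].append(replaced)
    return target_vm

vol = {"drivers": ["virtio-src"]}
vm = {"volumes": []}
migrate(vol, "virtio-src", "virtio-dst", vm)
print(vm["volumes"][0]["drivers"])  # ['virtio-dst']
```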

MIGRATION OF VIRTUAL COMPUTE INSTANCES USING REMOTE DIRECT MEMORY ACCESS

A virtual compute instance is migrated between hosts using remote direct memory access (RDMA). The hosts are equipped with RDMA-enabled network interface controllers for carrying out RDMA operations between them. Upon failure of a first host and copying of page tables of the virtual compute instance to the first host's memory, a first RDMA operation is performed to transfer the page tables from the first host's memory to the second host's memory. Then, second RDMA operations are performed to transfer data pages of the virtual compute instance from the first host's memory to the second host's memory, with references to memory locations of the data pages specified in the page tables. The page tables of the virtual compute instance are reconstructed to reference memory locations of the data pages in the second host's memory and stored therein.
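The two-phase transfer can be modeled with a toy sketch in which dicts stand in for host memory and string keys for memory addresses; the function and key names are illustrative assumptions, not from the patent.

```python
def migrate_instance(src_mem, dst_mem, page_table_addr):
    # First RDMA operation: transfer the page tables out of the first
    # host's memory.
    page_tables = src_mem[page_table_addr]
    # Second RDMA operations: copy each data page at the location the
    # page tables reference into the second host's memory.
    new_tables = {}
    for vpage, src_addr in page_tables.items():
        dst_addr = "dst:" + vpage
        dst_mem[dst_addr] = src_mem[src_addr]
        new_tables[vpage] = dst_addr
    # Reconstruct the page tables to reference the destination copies
    # and store them in the second host's memory.
    dst_mem["page_tables"] = new_tables
    return dst_mem

src = {"pt": {"v0": "a0", "v1": "a1"}, "a0": b"page0", "a1": b"page1"}
dst = {}
migrate_instance(src, dst, "pt")
print(dst["page_tables"])  # {'v0': 'dst:v0', 'v1': 'dst:v1'}
```

The key point the sketch preserves is ordering: the page tables must arrive first, because they supply the source addresses for every subsequent data-page transfer.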

HITLESS CONTAINER UPGRADE WITHOUT AN ORCHESTRATOR

Systems, methods, and computer-readable media are disclosed for performing a hitless upgrade of executable code in the absence of an orchestrator or other upgrade manager. A mechanism is disclosed that utilizes containers to update software functionality, features, or the like without interrupting a service provided by a container and without relying on an orchestrator or other upgrade manager to coordinate the upgrade process. State information indicative of a current state of module(s) within a container is maintained in an external data store such as a state database. A hand-off from a current container to a new container that updates the current container's module code/functionality can be initiated upon determining that a state metric calculated by the current container at a future timestamp matches a state metric independently calculated by the new container at the same timestamp.
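The state-metric comparison can be sketched as below; the digest construction and the state-database layout are assumptions for illustration, not the patent's definitions.

```python
import hashlib
import json

def state_metric(state, timestamp):
    """Deterministic digest of module state at a given timestamp, so two
    containers can compute it independently and compare."""
    payload = json.dumps({"state": state, "ts": timestamp}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def ready_for_handoff(state_db, timestamp):
    """Hand off only when the current and new containers derive the same
    metric from state held in the external store."""
    current = state_metric(state_db["current"], timestamp)
    new = state_metric(state_db["new"], timestamp)
    return current == new

db = {"current": {"seq": 41}, "new": {"seq": 41}}
print(ready_for_handoff(db, timestamp=1000))  # True
db["new"]["seq"] = 42
print(ready_for_handoff(db, timestamp=1000))  # False
```

Matching metrics indicate the new container has converged on the same state, so it can take over the service without interruption.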

Proactive cluster compute node migration at next checkpoint of cluster upon predicted node failure

While scheduled checkpoints are being taken of a cluster of active compute nodes distributively executing an application in parallel, a likelihood of failure of the active compute nodes is periodically and independently predicted. Responsive to the likelihood of failure of a given active compute node exceeding a threshold, the given active compute node is proactively migrated to a spare compute node of the cluster at a next scheduled checkpoint. Another spare compute node of the cluster can perform prediction and migration. Prediction can be based on both hardware events and software events regarding the active compute nodes.
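The checkpoint-time decision can be sketched as follows. The prediction callable is a stand-in for whatever hardware/software-event model is used, and the threshold value is an illustrative assumption.

```python
def migrate_at_checkpoint(active, spares, predict_failure, threshold=0.8):
    """At a scheduled checkpoint, proactively move each active node whose
    predicted failure likelihood exceeds the threshold onto a spare."""
    migrations = {}
    for node in list(active):
        if predict_failure(node) > threshold and spares:
            spare = spares.pop(0)
            migrations[node] = spare
            active[active.index(node)] = spare
    return migrations

# Stub predictor: precomputed likelihoods per node.
risk = {"n1": 0.1, "n2": 0.95}
nodes = ["n1", "n2"]
spares = ["s1"]
moves = migrate_at_checkpoint(nodes, spares, risk.get)
print(moves, nodes)  # {'n2': 's1'} ['n1', 's1']
```

Because migration happens only at the next scheduled checkpoint, the spare resumes from consistent checkpointed state rather than requiring a live transfer.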

Cluster resource management in distributed computing systems

Techniques are provided for managing resources among clusters of computing devices in a computing system. Resource reassignment messages are generated to indicate that servers are reassigned in response to compute loads exceeding or falling below certain thresholds. Techniques also include establishing communications with the reassigned servers to assign compute loads without physically relocating the servers from one cluster to another.
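The threshold-driven reassignment can be sketched as below; the cluster names, load representation, and message fields are illustrative assumptions.

```python
def reassignment_messages(cluster_loads, high=0.85, low=0.25):
    """Pair overloaded clusters with underloaded ones and emit messages
    that logically reassign a server to the overloaded cluster; no
    hardware is physically relocated."""
    over = sorted(c for c, load in cluster_loads.items() if load > high)
    under = sorted(c for c, load in cluster_loads.items() if load < low)
    return [{"reassign_server_from": u, "to": o} for o, u in zip(over, under)]

loads = {"A": 0.9, "B": 0.1, "C": 0.5}
print(reassignment_messages(loads))  # [{'reassign_server_from': 'B', 'to': 'A'}]
```

A subsequent step (not modeled here) would establish communications with each reassigned server to hand it its new compute load.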

Continuous Liveness and Integrity of Applications During Migration
20230009930 · 2023-01-12

Managing application migration is provided. An API server on a controller node is invoked to update a configuration map of a reverse proxy on a worker node for the reverse proxy to route user service requests corresponding to unmigrated applications of a set of applications from a first computing platform to a second computing platform to maintain liveness of the unmigrated applications during migration. The API server is invoked to build an image for an application of the set of applications based on source code of the application obtained from the second computing platform. The API server is invoked to generate a pod on the worker node to perform a workload of the application using the image. The API server is invoked to update a service on the worker node to select the pod on the worker node performing the workload of the application.
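The reverse proxy's routing decision reduces to a configuration-map lookup, sketched below. The field names and platform hostnames are illustrative assumptions, not the patent's configuration schema.

```python
def route_request(config_map, app):
    """Send requests for migrated apps to the second platform; keep
    unmigrated apps routed to the first platform so they remain live
    throughout the migration."""
    if app in config_map["migrated"]:
        return config_map["new_platform"]
    return config_map["old_platform"]

cfg = {
    "migrated": {"billing"},          # apps already rebuilt and running as pods
    "old_platform": "first.example",  # hypothetical first-platform endpoint
    "new_platform": "second.example", # hypothetical second-platform endpoint
}
print(route_request(cfg, "billing"))  # second.example
print(route_request(cfg, "reports"))  # first.example
```

As each application's image is built and its pod and service come up, the API server would update `migrated`, flipping that application's traffic over with no gap in liveness.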

APPLICATION LIFECYCLE MANAGEMENT BASED ON REAL-TIME RESOURCE USAGE
20230010567 · 2023-01-12

Application lifecycle management based on real-time resource usage. A first plurality of resource values that quantify real-time computing resources used by a first instance of an application is determined at a first point in time. Based on the first plurality of resource values, one or more utilization values are stored in a profile that corresponds to the application. Subsequent to storing the one or more utilization values in the profile, it is determined that a second instance of the application is to be initiated. The profile is accessed, and the second instance of the application is caused to be initiated on a first computing device utilizing the one or more utilization values identified in the profile.
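The profile store and the placement of the second instance can be sketched as follows. The resource names, the averaging rule for deriving utilization values, and the device fields are illustrative assumptions.

```python
def record_usage(profiles, app, readings):
    """Reduce real-time resource samples from a running instance to
    utilization values stored in the application's profile."""
    profiles[app] = {k: sum(v) / len(v) for k, v in readings.items()}

def place_second_instance(profiles, app, devices):
    """Initiate the next instance on the first device that can satisfy
    the profiled utilization values."""
    need = profiles[app]
    for dev in devices:
        if dev["cpu_free"] >= need["cpu"] and dev["mem_free"] >= need["mem"]:
            return dev["name"]
    return None

profiles = {}
record_usage(profiles, "svc", {"cpu": [1.0, 3.0], "mem": [256, 512]})
devices = [{"name": "d1", "cpu_free": 1.0, "mem_free": 512},
           {"name": "d2", "cpu_free": 4.0, "mem_free": 1024}]
print(place_second_instance(profiles, "svc", devices))  # d2
```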

Distributable and customizable load-balancing of data-associated computation via partitions and virtual processes

Methods, systems, computer-readable media, and apparatuses for determining partitions and virtual processes in a simulation are presented. A plurality of partitions of a simulated world may be determined, and each partition may correspond to a different metric for entities in the simulated world. A plurality of virtual processes for the simulated world may also be determined. The system may assign a different virtual process to each partition. An indication of the partitions may be sent to one or more partition enforcer services, and an indication of the virtual processes may be sent to a virtual process manager.
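The partitioning and one-virtual-process-per-partition assignment can be sketched as below; the metric function and entity fields are illustrative assumptions, and the enforcer/manager services are out of scope.

```python
def build_partitions(entities, metric):
    """Group simulated-world entities by the metric each partition tracks."""
    partitions = {}
    for e in entities:
        partitions.setdefault(metric(e), []).append(e)
    return partitions

def assign_virtual_processes(partitions, vprocs):
    """Assign a different virtual process to each partition; the result
    is what would be indicated to the partition enforcer services and
    the virtual process manager."""
    if len(vprocs) < len(partitions):
        raise ValueError("need one virtual process per partition")
    return {part: vp for part, vp in zip(sorted(partitions), vprocs)}

ents = [{"id": 1, "zone": "north"}, {"id": 2, "zone": "south"},
        {"id": 3, "zone": "north"}]
parts = build_partitions(ents, lambda e: e["zone"])
print(assign_virtual_processes(parts, ["vp0", "vp1"]))  # {'north': 'vp0', 'south': 'vp1'}
```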

Processor core power management in a virtualized environment
11550607 · 2023-01-10

Processor core power management in a virtualized environment. A hypervisor, executing on a processor device of a computing host, the processor device having a plurality of processor cores, receives, from a guest operating system of a virtual machine, a request to set a virtual central processing unit (VCPU) of the virtual machine to a first requested P-state level of a plurality of P-state levels. Based on the request, the hypervisor associates the VCPU with a first processor core having a P-state that corresponds to the first requested P-state level.
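The hypervisor's association step can be sketched as a lookup over the cores' current P-states; the core and P-state naming is an illustrative assumption.

```python
def associate_vcpu(core_pstates, requested_level):
    """Return a processor core whose current P-state corresponds to the
    P-state level the guest requested for its VCPU, or None if no core
    currently matches."""
    for core, level in core_pstates.items():
        if level == requested_level:
            return core
    return None

cores = {"core0": "P0", "core1": "P2", "core2": "P1"}
print(associate_vcpu(cores, "P1"))  # core2
```

The effect is that the guest's power-management request is honored by scheduling placement rather than by changing any core's P-state directly.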