G06F2209/505

Dynamic resource allocation for computational simulation

Systems and methods for automated resource allocation during a computational simulation are described herein. An example method includes analyzing a set of simulation inputs to determine a first set of computing resources for performing a simulation, and starting the simulation with the first set of computing resources. The method also includes dynamically analyzing at least one attribute of the simulation to determine a second set of computing resources for performing the simulation, and performing the simulation with the second set of computing resources. The second set of computing resources is different from the first set of computing resources.
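
The two-phase allocation described above can be sketched roughly as follows. This is a minimal illustration, not the patented method: the function names and the one-core-per-10,000-cells sizing heuristic are invented for the example.

```python
def estimate_initial_resources(sim_inputs):
    """Analyze simulation inputs to determine a first set of computing
    resources (toy heuristic: one core per 10,000 mesh cells, floor of 2)."""
    return {"cores": max(2, sim_inputs["mesh_cells"] // 10_000)}

def reallocate(current, observed_load):
    """Dynamically analyze a runtime attribute of the simulation (here,
    per-core load) to determine a second, different set of resources."""
    cores = current["cores"]
    if observed_load > 0.9:        # saturated: grow the allocation
        cores *= 2
    elif observed_load < 0.3:      # mostly idle: shrink, keeping a floor
        cores = max(2, cores // 2)
    return {"cores": cores}

first = estimate_initial_resources({"mesh_cells": 40_000})
second = reallocate(first, observed_load=0.95)
```

Any real implementation would draw on richer attributes (memory pressure, solver convergence rate) rather than a single load number.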

INTELLIGENT KEYWORD RECOMMENDER

A system, a method, and a computer program product for generating keywords for a solution note that resolves an issue associated with a computing component. A dataset for training a keyword data model is received; the dataset includes a plurality of variables associated with one or more values. The keyword data model is configured to determine, as a function of one or more variables in the plurality of variables, one or more keywords in a plurality of keywords associated with a computing solution, in a plurality of computing solutions, for resolving a problem with an operation of a computing component in a plurality of computing components. The keyword data model is trained using the received dataset and applied to one or more variables in the received dataset to generate one or more keywords. The one or more keywords associated with the computing solution are generated.
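
A crude stand-in for such a keyword data model can be built from co-occurrence counts alone; the sketch below is an illustrative assumption (a frequency model), not the trained model the publication describes.

```python
from collections import Counter, defaultdict

def train(dataset):
    """dataset: list of (variables, keywords) pairs from past solution
    notes. Returns a model mapping each variable to keyword counts."""
    model = defaultdict(Counter)
    for variables, keywords in dataset:
        for var in variables:
            model[var].update(keywords)
    return model

def generate_keywords(model, variables, top_n=2):
    """Apply the trained model to one or more variables to generate
    the keywords most associated with the matching computing solution."""
    scores = Counter()
    for var in variables:
        scores.update(model.get(var, Counter()))
    return [kw for kw, _ in scores.most_common(top_n)]

dataset = [
    (["disk_full", "db"], ["storage", "cleanup"]),
    (["disk_full"], ["storage"]),
    (["timeout", "db"], ["network", "retry"]),
]
model = train(dataset)
keywords = generate_keywords(model, ["disk_full"])
```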

Optimizing Virtual Machine Scheduling on Non-Uniform Cache Access (NUCA) Systems
20230026837 · 2023-01-26 ·

Techniques for optimizing virtual machine (VM) scheduling on a non-uniform cache access (NUCA) system are provided. In one set of embodiments, a hypervisor of the NUCA system can partition the virtual CPUs of each VM running on the system into logical constructs referred to as last level cache (LLC) groups, where each LLC group is sized to match (or at least not exceed) the LLC domain size of the system. The hypervisor can then place and load-balance the virtual CPUs of each VM on the system’s cores in a manner that attempts to keep virtual CPUs which are part of the same LLC group within the same LLC domain, subject to various factors such as compute load, cache contention, and so on.
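
The grouping and placement steps can be sketched as follows, under the simplifying assumption that placement only balances vCPU counts (the abstract also mentions cache contention and other factors):

```python
def partition_into_llc_groups(vcpu_ids, llc_domain_size):
    """Split a VM's virtual CPUs into LLC groups, each sized to match
    (or at least not exceed) the LLC domain size."""
    return [vcpu_ids[i:i + llc_domain_size]
            for i in range(0, len(vcpu_ids), llc_domain_size)]

def place_groups(groups, domains):
    """Greedy placement: keep each LLC group within a single LLC domain,
    choosing the least-loaded domain for each group."""
    load = {d: 0 for d in domains}
    placement = {}
    for group in groups:
        target = min(load, key=load.get)
        placement[tuple(group)] = target
        load[target] += len(group)
    return placement

groups = partition_into_llc_groups(list(range(10)), llc_domain_size=4)
placement = place_groups(groups, domains=["llc0", "llc1"])
```

Keeping a group inside one domain means its vCPUs share an LLC, so cache lines touched by one vCPU stay warm for the others.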

INTELLIGENT AUTO-SCALING OF CONTAINERIZED WORKLOADS IN CONTAINER COMPUTING ENVIRONMENT
20230229511 · 2023-07-20 ·

Techniques for managing containerized workloads in a container computing environment are disclosed. For example, a method comprises the following steps. The method predicts a composite time delay value for initializing an instance of a containerized workload for executing a microservice within a container computing environment. The method then computes at least one target resource utilization parameter, based on the predicted composite time delay value, for use by the container computing environment.
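
One plausible way the predicted delay could feed a target utilization parameter is to lower the scaling threshold as start-up delay grows, so new instances are requested before saturation. The linear formula and the 50% headroom cap below are illustrative assumptions, not the patented computation.

```python
def target_utilization(base_target, predicted_delay_s, max_delay_s=60.0):
    """Derive a target resource utilization from the predicted composite
    time delay for initializing a containerized workload instance: the
    longer an instance takes to come up, the lower (earlier-triggering)
    the target must be."""
    headroom = min(predicted_delay_s / max_delay_s, 0.5)
    return base_target * (1.0 - headroom)

# A 30 s predicted delay halves the remaining headroom on a 0.8 target.
t = target_utilization(0.8, predicted_delay_s=30.0)
```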

RESOURCE SCHEDULING WITH UPGRADE AWARENESS IN VIRTUALIZED ENVIRONMENT
20230229510 · 2023-07-20 ·

Aspects of workload reallocation within a software-defined data center (SDDC) undergoing an upgrade are described. As upgrades become available for services and other types of applications installed on a cluster of host devices within a data center, an upgrade of the installed services may be required for each of the host devices. During a cluster upgrade, the order in which hosts in the cluster are upgraded is determined as a function of the evacuation costs and evacuation policies associated with each host device in the computing cluster. In addition, a maintenance cost associated with a workload that needs to be evacuated from a host undergoing an upgrade is determined based on the upgrade sequence. The maintenance cost can then be used as a factor in selecting an optimal candidate host to which the workload can be migrated while the host on which it currently runs is being upgraded.
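
The ordering and target-selection steps can be sketched as below; the `no-evacuate` policy flag and the cost fields are invented for illustration.

```python
def plan_upgrade(hosts):
    """Order hosts for upgrade by evacuation cost (cheapest first),
    skipping hosts whose evacuation policy forbids evacuation."""
    order = sorted((h for h in hosts if h["policy"] != "no-evacuate"),
                   key=lambda h: h["evac_cost"])
    return [h["name"] for h in order]

def pick_target(candidates, maintenance_cost):
    """Select the optimal candidate host for a migrating workload: the
    one with the lowest maintenance cost."""
    return min(candidates, key=maintenance_cost)

hosts = [
    {"name": "h1", "evac_cost": 5, "policy": "default"},
    {"name": "h2", "evac_cost": 2, "policy": "default"},
    {"name": "h3", "evac_cost": 1, "policy": "no-evacuate"},
]
order = plan_upgrade(hosts)
```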

Creating virtual machine groups based on request

Embodiments of the present invention provide a method, a system, and an apparatus for creating a virtual machine. The method includes: receiving a virtual machine creation request to create a plurality of virtual machines; dividing the plurality of virtual machines into a plurality of virtual machine groups; determining a home physical rack for each virtual machine group, where one virtual machine group corresponds to one home physical rack; and creating each virtual machine group on the home physical rack of each virtual machine group. Because each virtual machine group is created on its home physical rack, each virtual machine group is equivalent to one physical rack.
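
The divide-and-home steps can be sketched in a few lines; the round-robin rack assignment is an illustrative assumption (a real placement would weigh rack capacity and affinity):

```python
def create_vm_groups(vm_ids, group_size, racks):
    """Divide the requested VMs into groups and assign each group a
    home physical rack, one rack per group (round-robin here)."""
    groups = [vm_ids[i:i + group_size]
              for i in range(0, len(vm_ids), group_size)]
    return {tuple(g): racks[i % len(racks)] for i, g in enumerate(groups)}

placement = create_vm_groups(["vm%d" % i for i in range(5)],
                             group_size=2, racks=["rackA", "rackB"])
```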

CONTAINER-AS-A-SERVICE (CAAS) CONTROLLER FOR SELECTING A BARE-METAL MACHINE OF A PRIVATE CLOUD FOR A CLUSTER OF A MANAGED CONTAINER SERVICE

Embodiments described herein are generally directed to a controller of a managed container service that facilitates selection among bare-metal machines available within a private cloud. According to an example, a request is received by a Container-as-a-Service controller from a CaaS portal to create a cluster based at least in part on resources of a private cloud of a customer of a managed container service. An inventory of bare-metal machines available within the private cloud is received from a Bare-Metal-as-a-Service (BMaaS) provider associated with the private cloud. A particular bare-metal machine is identified for the cluster by selecting among the available bare-metal machines based on cluster information associated with the request, the inventory, and a best fit algorithm configured in accordance with a policy established by the customer.
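
A textbook best-fit selection over such an inventory might look like the following; the CPU/memory fields and tie-breaking order are assumptions for the sketch, and a customer policy could plug in a different key function.

```python
def best_fit(inventory, cluster_req):
    """Classic best-fit: among bare-metal machines that satisfy the
    cluster's CPU/memory requirements, choose the one with the least
    spare capacity, leaving large machines free for large requests."""
    ok = [m for m in inventory
          if m["cpus"] >= cluster_req["cpus"]
          and m["mem_gb"] >= cluster_req["mem_gb"]]
    if not ok:
        return None
    return min(ok, key=lambda m: (m["cpus"] - cluster_req["cpus"],
                                  m["mem_gb"] - cluster_req["mem_gb"]))

inventory = [
    {"name": "bm-small", "cpus": 8,  "mem_gb": 32},
    {"name": "bm-large", "cpus": 64, "mem_gb": 512},
]
chosen = best_fit(inventory, {"cpus": 8, "mem_gb": 16})
```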

Container Orchestration System
20230222006 · 2023-07-13 ·

The present disclosure provides a system for coordinating the distribution of resource instances (RIs), e.g. Kubernetes nodes, that belong to different infrastructure providers providing resource instances at different locations. Each infrastructure provider provides one or more resource instances, which can be spread over multiple locations. Several Kubernetes master nodes are deployed to manage the RIs spread among the multiple infrastructure providers and locations.
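
A minimal sketch of spreading RIs across several master nodes, assuming plain round-robin sharding (the disclosure does not specify the assignment policy):

```python
from collections import defaultdict

def assign_to_masters(resource_instances, masters):
    """Spread resource instances from multiple providers and locations
    across several master nodes, round-robin."""
    shards = defaultdict(list)
    for i, ri in enumerate(resource_instances):
        shards[masters[i % len(masters)]].append(ri)
    return dict(shards)

# Each RI is (provider, location, node) -- illustrative tuples only.
ris = [("provider-a", "eu", "node1"),
       ("provider-a", "us", "node2"),
       ("provider-b", "eu", "node3")]
shards = assign_to_masters(ris, masters=["m0", "m1"])
```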

TASK SCHEDULING METHOD FOR AUTOMATED MACHINE LEARNING
20230015759 · 2023-01-19 ·

A method for scheduling a task for AutoML (Automated Machine Learning) by a terminal includes: setting a ratio between (1) a first task requiring a plurality of arithmetic devices and (2) a second task requiring one arithmetic device, in a cluster connected with the terminal; allocating a third task for the AutoML on the basis of the set ratio; receiving a request for allocation of a session from a user; inspecting whether the session is allocable on the basis of the ratio of the second task; and allocating the session to the arithmetic device associated with the second task on the basis of the ratio of the second task when the session is allocable.
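
The admission check can be sketched as below, under the assumption that the second-task share of devices is simply `total × ratio` and sessions claim those devices one at a time:

```python
def allocate_session(total_devices, second_task_ratio, sessions_in_use):
    """Devices are split by a configured ratio between multi-device
    (first) tasks and single-device (second) tasks; a user session is
    allocable only while a device in the second-task share is free."""
    second_task_devices = int(total_devices * second_task_ratio)
    if sessions_in_use < second_task_devices:
        return sessions_in_use      # index of the device granted
    return None                     # session not allocable under the ratio
```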

DYNAMIC CROSS-ARCHITECTURE APPLICATION ADAPTION
20230014741 · 2023-01-19 ·

Embodiments described herein are generally directed to improving performance of high-performance computing (HPC) or artificial intelligence (AI) workloads on cluster computer systems. According to one embodiment, a section of a high-performance computing (HPC) or artificial intelligence (AI) workload executing on a cluster computer system is identified as significant to a figure of merit (FOM) of the workload. An alternate placement among multiple heterogeneous compute resources of a node of the cluster computer system is determined for a portion of the section currently executing on a given compute resource of the multiple heterogeneous compute resources. After predicting an improvement to the FOM based on the alternate placement, the portion is relocated to the alternate placement.
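The predict-then-relocate decision can be sketched as follows; the resource names and the toy FOM predictor are assumptions for illustration, and a real system would weigh migration cost against the predicted gain.

```python
def maybe_relocate(section, current, resources, predict_fom):
    """Relocate a FOM-significant workload section to an alternate
    heterogeneous compute resource only if the predictor forecasts an
    improved figure of merit (higher is better)."""
    best = max(resources, key=lambda r: predict_fom(section, r))
    if predict_fom(section, best) > predict_fom(section, current):
        return best
    return current

# Toy predictor: pretend the "hot_loop" section runs best on the GPU.
fom = {("hot_loop", "cpu"): 1.0,
       ("hot_loop", "gpu"): 1.8,
       ("hot_loop", "fpga"): 1.2}
choice = maybe_relocate("hot_loop", "cpu", ["cpu", "gpu", "fpga"],
                        lambda s, r: fom[(s, r)])
```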