G06F9/505

INTELLIGENT AUTO-SCALING OF CONTAINERIZED WORKLOADS IN CONTAINER COMPUTING ENVIRONMENT
20230229511 · 2023-07-20 ·

Techniques for managing containerized workloads in a container computing environment are disclosed. For example, a method comprises the following steps. The method predicts a composite time delay value for initializing an instance of a containerized workload for executing a microservice within a container computing environment. The method then computes at least one target resource utilization parameter, based on the predicted composite time delay value, for use by the container computing environment.
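As an illustration of how a predicted cold-start delay could drive a target utilization parameter, here is a minimal sketch. The function name, the headroom heuristic, and all parameters are assumptions for illustration, not the patent's actual formula:

```python
def target_utilization(predicted_delay_s, arrival_rate_rps,
                       capacity_rps, max_utilization=0.9):
    """Lower the utilization target when container cold-starts are slow,
    so spare capacity can absorb load while a new instance initializes.

    predicted_delay_s: composite time delay to initialize an instance
    arrival_rate_rps:  observed request arrival rate (requests/sec)
    capacity_rps:      requests/sec one instance can serve
    """
    # Requests expected to arrive while one new instance spins up.
    backlog = predicted_delay_s * arrival_rate_rps
    # Headroom (as a fraction of capacity) needed to serve that backlog.
    headroom = min(backlog / capacity_rps, max_utilization)
    return round(max_utilization - headroom, 3)
```

A slower predicted cold start yields a lower utilization target, so the autoscaler scales out earlier.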

HIGHLY CONCURRENT AND RESPONSIVE APPLICATION PROGRAMMING INTERFACE (API) IN EVENTUAL CONSISTENCY ENVIRONMENT
20230229527 · 2023-07-20 ·

The disclosure relates to processing application programming interface (API) requests. Embodiments include receiving, at an API wrapper, from a first caller, a first call to an API and sending the first call to the API. Embodiments include receiving, by the API wrapper, from one or more second callers, a second one or more calls to the API prior to receiving a response from the API to the first call. Embodiments include receiving, by the API wrapper, the response from the API to the first call and responding to the first call from the first caller with the response from the API to the first call. Embodiments include responding, by the API wrapper, to the second one or more calls from the one or more second callers with the response from the API to the first call without sending the second one or more calls to the API.
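The wrapper behavior described above resembles request coalescing (sometimes called "singleflight"): one in-flight call to the backing API serves every caller that arrives before the response returns. A minimal thread-based sketch, with hypothetical class and method names:

```python
import threading

class APIWrapper:
    """Coalesce concurrent identical calls: the first caller invokes the
    backing API; callers arriving before it returns share its response."""

    def __init__(self, api_fn):
        self._api_fn = api_fn
        self._lock = threading.Lock()
        self._inflight = {}   # key -> threading.Event for the leader's call
        self._results = {}    # key -> most recent response for that key

    def call(self, key):
        with self._lock:
            event = self._inflight.get(key)
            leader = event is None
            if leader:                      # first caller for this key
                event = threading.Event()
                self._inflight[key] = event
        if leader:
            try:
                # Only the leader actually reaches the backing API.
                self._results[key] = self._api_fn(key)
            finally:
                with self._lock:
                    del self._inflight[key]
                event.set()                 # wake the waiting followers
        else:
            event.wait()                    # follower: reuse leader's response
        return self._results[key]
```

In an eventually consistent environment this trades strict freshness for responsiveness: followers accept a response that was requested marginally before their own call.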

RESOURCE SCHEDULING WITH UPGRADE AWARENESS IN VIRTUALIZED ENVIRONMENT
20230229510 · 2023-07-20 ·

Aspects of workload reallocation within a software-defined data center (SDDC) undergoing an upgrade are described. As upgrades become available for services and other types of applications installed on a cluster of host devices within a data center, an upgrade of the installed services may be required for each of the host devices. During a cluster upgrade, the order in which hosts in the cluster are upgraded is determined as a function of evacuation costs and evacuation policies associated with each host device in the computing cluster. In addition, a maintenance cost associated with a workload that must be evacuated from a host undergoing an upgrade is determined based on the upgrade sequence. The maintenance cost can then be used as a factor for selecting an optimal candidate host to which the workload is migrated when its current host is upgraded.
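One plausible reading of the maintenance-cost factor can be sketched as follows: a candidate host that sits later in the planned upgrade sequence is preferred, because a workload placed there is less likely to be evacuated again soon. The function names, host fields, and scoring rule are illustrative assumptions:

```python
def pick_migration_target(demand, candidates, upgrade_position):
    """Select a host for an evacuated workload: it must have spare
    capacity, and hosts upgraded later in the sequence are preferred
    (placing the workload there carries a lower maintenance cost).

    demand:           resource units the workload needs
    candidates:       dicts with "name", "capacity", "used"
    upgrade_position: host name -> index in the planned upgrade order
    """
    viable = [h for h in candidates
              if h["capacity"] - h["used"] >= demand]
    # A later slot in the upgrade order means the workload will not be
    # disturbed again for longer, i.e. lower maintenance cost.
    return max(viable, key=lambda h: upgrade_position[h["name"]])
```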

ON-BOARDING VIRTUAL INFRASTRUCTURE MANAGEMENT SERVER APPLIANCES TO BE MANAGED FROM THE CLOUD

A method of on-boarding a virtual infrastructure management (VIM) server appliance in which VIM software for locally managing a software-defined data center (SDDC) is installed, to enable the VIM server appliance to be centrally managed through a cloud service includes upgrading the VIM server appliance from a current version to a higher version that supports communication with agents of the cloud service, modifying configurations of the upgraded VIM server appliance according to a prescriptive configuration required by the cloud service, and deploying a gateway appliance for running the agents of the cloud service that communicate with the cloud service and the upgraded VIM server appliance.

PROPAGATING JOB CONTEXTS TO A JOB EXECUTION ENVIRONMENT
20230229515 · 2023-07-20 ·

In a job management environment comprising a plurality of job systems and a scheduler for scheduling a job submitted to the job management environment to a job system for running, a processor in a first job system intercepts, from outside of a first container in the first job system, a first job before it is sent to the scheduler. A processor in the first job system determines whether the first job is submitted from a container in the first job system. In response to a first determination that the first job is submitted from a container in the first job system, a processor in the first job system determines contexts of the first job, the contexts of the first job including a context related to the first container. A processor in the first job system sends the first job together with the contexts of the first job to the scheduler.
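The interception step above can be sketched as a small function that enriches a job with container context before it reaches the scheduler. The job dictionary shape, field names, and context keys are assumptions for illustration:

```python
def intercept_job(job, known_containers):
    """Intercept a job bound for the scheduler; if it was submitted from
    a container in this job system, attach container-related context.

    job:              dict describing the job, possibly carrying the id
                      of the container it originated from
    known_containers: container id -> metadata for this job system
    """
    origin = job.get("origin_container")
    if origin in known_containers:
        info = known_containers[origin]
        # Propagate the originating container's context with the job.
        job["contexts"] = {"container_id": origin,
                           "image": info["image"],
                           "namespace": info["namespace"]}
    return job  # forwarded to the scheduler, contexts included if any
```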

INTELLIGENT ORCHESTRATION OF CLASSIC-QUANTUM COMPUTATIONAL GRAPHS

One example method includes: receiving a computation workflow defined by a graph that includes quantum computing nodes; receiving a catalogue of quantum computing instances that are available in a hybrid classic-quantum computation infrastructure; transforming the graph, using information from the catalogue, to create a first graph transformation in which each quantum computing node is assigned a respective candidate resource allocation identifying candidate resources operable to execute the quantum algorithm associated with that node; and optimizing the computation workflow by selecting, for each quantum computing node, a resource from its candidate resource allocation, where the optimizing transforms the first graph transformation into a second graph transformation that specifies the selected resource for each node.
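The two transformations could be sketched as follows: the first attaches candidate instances from the catalogue to each quantum node, the second picks one resource per node. The data shapes, the qubit-count feasibility test, and the cheapest-candidate objective are illustrative assumptions, not the patent's optimization criteria:

```python
def transform_and_optimize(graph, catalogue):
    """First transformation: attach candidate quantum instances to each
    quantum node; second transformation: select one resource per node."""
    # First graph transformation: candidate resource allocation per node,
    # built from catalogue information (here: enough qubits to run it).
    first = {
        node: [inst for inst in catalogue
               if inst["qubits"] >= attrs["qubits_needed"]]
        for node, attrs in graph.items() if attrs["kind"] == "quantum"
    }
    # Second graph transformation: pick a resource per node (here, the
    # cheapest feasible candidate).
    second = {node: min(cands, key=lambda inst: inst["cost"])
              for node, cands in first.items()}
    return first, second
```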

Optimizing distribution of heterogeneous software process workloads
11561836 · 2023-01-24 ·

A request is received to schedule a new software process. Description data associated with the new software process is retrieved. A workload resource prediction is requested and received for the new software process. A landscape directory is analyzed to determine a computing host in a managed landscape on which to load the new software process. The new software process is executed on the computing host.
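The scheduling flow above can be condensed into a short sketch: predict the resource needs from the description data, then scan the landscape directory for a host with room. The field names and the best-fit-by-free-CPU ordering are assumptions for illustration:

```python
def schedule_process(description, predict_resources, landscape):
    """Place a new software process: request a workload resource
    prediction from its description data, then pick a computing host
    from the landscape directory with enough spare capacity.

    description:       description data for the new process
    predict_resources: callable returning e.g. {"cpu": 2, "mem_gb": 4}
    landscape:         landscape directory entries, one dict per host
    """
    need = predict_resources(description)
    # Prefer the host with the most free CPU that still fits the need.
    for host in sorted(landscape, key=lambda h: h["free_cpu"], reverse=True):
        if (host["free_cpu"] >= need["cpu"]
                and host["free_mem_gb"] >= need["mem_gb"]):
            return host["name"]
    raise RuntimeError("no host in the landscape fits the prediction")
```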

Efficient inter-chip interconnect topology for distributed parallel deep learning
11561840 · 2023-01-24 ·

The present disclosure provides a system comprising: a first group of computing nodes and a second group of computing nodes, wherein the first and second groups are neighboring devices and each of the first and second groups comprising: a set of computing nodes A-D, and a set of intra-group interconnects, wherein the set of intra-group interconnects communicatively couple computing node A with computing nodes B and C and computing node D with computing nodes B and C; and a set of inter-group interconnects, wherein the set of inter-group interconnects communicatively couple computing node A of the first group with computing node A of the second group, computing node B of the first group with computing node B of the second group, computing node C of the first group with computing node C of the second group, and computing node D of the first group with computing node D of the second group.
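The claimed wiring can be enumerated directly from the text: within each four-node group, A and D each link to B and C (but A-D and B-C are not linked), and like-labeled nodes of neighboring groups are linked. A small sketch that builds this link set for a chain of groups; the node-naming scheme is an assumption:

```python
def build_topology(num_groups):
    """Build the described interconnect as a set of undirected links."""
    links = set()
    for g in range(num_groups):
        a, b, c, d = (f"{g}.A", f"{g}.B", f"{g}.C", f"{g}.D")
        # Intra-group interconnects: A-B, A-C, D-B, D-C
        # (a 4-node ring with no A-D or B-C edge).
        links |= {frozenset(p) for p in [(a, b), (a, c), (d, b), (d, c)]}
    for g in range(num_groups - 1):
        # Inter-group interconnects: like-labeled nodes of neighbors.
        for label in "ABCD":
            links.add(frozenset((f"{g}.{label}", f"{g + 1}.{label}")))
    return links
```

With two neighboring groups this yields 8 intra-group links plus 4 inter-group links, and every node has degree 3.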

Embedded persistent queue

Various aspects are disclosed for distributed application management using an embedded persistent queue framework. In some aspects, task execution data is monitored from a plurality of task execution engines. A task request is identified. The task request can include a task and a Boolean predicate for task assignment. The task is assigned to a task execution engine embedded in a distributed application process if the Boolean predicate is true and the capacity of the task execution engine is sufficient to execute the task. The task is enqueued in a persistent queue. The task is retrieved from the persistent queue and executed.
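A minimal sketch of the two pieces described above: a queue that persists tasks in an embedded database, and an assignment step gated on the request's Boolean predicate and engine capacity. SQLite stands in as the embedded store, and all names are illustrative assumptions:

```python
import sqlite3

class PersistentQueue:
    """Minimal embedded persistent queue: tasks survive process restarts
    when backed by a SQLite file rather than in-process memory."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS tasks"
                        " (id INTEGER PRIMARY KEY, payload TEXT)")

    def enqueue(self, payload):
        self.db.execute("INSERT INTO tasks (payload) VALUES (?)",
                        (payload,))
        self.db.commit()

    def dequeue(self):
        # Oldest task first; delete it as it is handed out.
        row = self.db.execute(
            "SELECT id, payload FROM tasks ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        self.db.execute("DELETE FROM tasks WHERE id = ?", (row[0],))
        self.db.commit()
        return row[1]

def assign(task, predicate, engine_capacity, queue):
    """Enqueue the task for an embedded execution engine only when the
    request's Boolean predicate is true and capacity is available."""
    if predicate(task) and engine_capacity > 0:
        queue.enqueue(task)
        return True
    return False
```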

Workload aware security patch management

Example implementations relate to a method and system for securing a workload from a security vulnerability based on management of critical patches for the workload. The method includes obtaining information of existing patches for each of a plurality of infrastructure resources that are required to execute the workload, where the infrastructure resources are segregated into multiple layers. The method further includes determining dependency of the infrastructure resources across the multiple layers and identifying the security vulnerability related to the infrastructure resources. The method further includes evaluating perceived criticalities of first and second new patches for the security vulnerability based on a workload weightage, a resource age of the infrastructure resources, and an actual criticality of the first and second new patches. Further, the method includes installing the first new patch followed by the second new patch on the infrastructure resources based on the perceived criticalities, in an order of the determined dependency.
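The perceived-criticality evaluation and the resulting install order could be sketched as below. The scoring formula (weighting vendor criticality by workload weightage and resource age) and the tie-break on layer dependency order are assumptions for illustration, not the patent's actual computation:

```python
def perceived_criticality(actual_criticality, workload_weight,
                          resource_age, max_age=10):
    """Score a new patch: scale its actual (vendor) criticality by how
    important the workload is and how old the affected resource is."""
    age_factor = min(resource_age / max_age, 1.0)
    return actual_criticality * workload_weight * (0.5 + 0.5 * age_factor)

def install_order(patches, dependency_order):
    """Sort patches by descending perceived criticality, breaking ties
    by the determined dependency order of the layers they touch."""
    return sorted(patches,
                  key=lambda p: (-p["perceived"],
                                 dependency_order[p["layer"]]))
```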