G06F9/5083

CONTENT BASED READ CACHE ASSISTED WORKLOAD DEPLOYMENT

In an example, a computer-implemented method for deploying a workload in a virtualized computing environment includes retrieving a digest file corresponding to the workload. The digest file may include a plurality of hash values from a storage device, and each hash value corresponds to a data block of a plurality of data blocks associated with a virtual disk stored in the storage device. Further, the method includes determining whether the plurality of hash values in the digest file match data in a content-based read cache (CBRC) of a destination host computing system, and obtaining data blocks corresponding to hash values that are not present in the CBRC from the storage device to store in the destination host computing system. Furthermore, the method includes deploying the workload on the destination host computing system upon obtaining the data blocks corresponding to hash values that are not present in the CBRC.
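The digest-matching step above can be sketched as follows. This is a minimal illustration, not the patented implementation: the CBRC is modeled as a hash-keyed dict, the storage device as a dict of blocks, and all names (`deploy_with_cbrc`, `digest`, `cbrc`, `storage`) are hypothetical.

```python
def deploy_with_cbrc(digest, cbrc, storage):
    """Check each digest hash against the CBRC; fetch only missing blocks.

    digest:  list of (hash_value, block_id) pairs from the digest file
    cbrc:    dict mapping hash_value -> block data (the destination cache)
    storage: dict mapping block_id -> block data (the storage device)
    Returns the blocks that had to be read from storage.
    """
    fetched = {}
    for hash_value, block_id in digest:
        if hash_value not in cbrc:          # cache miss: read from storage
            data = storage[block_id]
            cbrc[hash_value] = data         # store on the destination host
            fetched[block_id] = data
    return fetched
```

Only cache misses touch the storage device, which is the point of the scheme: blocks already cached on the destination host need not be transferred before the workload is deployed.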

Pre-warming scheme to load machine learning models

Techniques for adding and warming a host are described. In some instances, a method is described that includes: determining that at least one group of hosts is to be increased by adding an additional host to the group of hosts; sending a request to the group of hosts for a list of machine learning models loaded per host of the group of hosts; receiving, from each host, the list of loaded machine learning models; loading at least a proper subset of the list of loaded machine learning models into random access memory of the additional host; receiving a request to perform an inference; routing the request to the additional host of the group of hosts; performing the inference using the additional host of the group of hosts; and providing a result of the inference to an external entity.
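A minimal sketch of the pre-warming step: poll the existing hosts for their loaded models, then load the most commonly loaded ones (a proper subset) onto the new host. The ranking by frequency and the names `prewarm_new_host` and `top_k` are illustrative assumptions, not taken from the abstract.

```python
from collections import Counter

def prewarm_new_host(group_model_lists, new_host_models, top_k=2):
    """group_model_lists: one list of loaded model names per existing host.
    new_host_models: set standing in for the new host's RAM.
    Loads the top_k most commonly loaded models onto the new host."""
    counts = Counter(m for host_models in group_model_lists
                       for m in host_models)
    for model, _ in counts.most_common(top_k):
        new_host_models.add(model)          # stands in for loading into RAM
    return new_host_models
```

Choosing models by how many peers have them loaded is one plausible heuristic; the new host is then ready to serve routed inference requests without a cold-start model load.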

MACHINE-LEARNING TRAINING SERVICE FOR SYNTHETIC DATA
20230229513 · 2023-07-20 ·

Various embodiments, methods, and systems for implementing a distributed computing system machine-learning training service are provided. Initially, a machine learning model is accessed. A plurality of synthetic data assets are accessed, where a synthetic data asset is associated with asset-variation parameters that are programmable for machine-learning. The machine learning model is retrained using the plurality of synthetic data assets. The machine-learning training service is further configured for executing real-time calls to generate an on-the-fly-generated synthetic data asset such that the on-the-fly-generated synthetic data asset is rendered in real-time to preclude pre-rendering and storing the on-the-fly-generated synthetic data asset. The machine-learning training service further supports hybrid-based machine learning training, where the machine learning model is trained based on a combination of the plurality of synthetic data assets, a plurality of non-synthetic data assets, and synthetic data asset metadata associated with the plurality of synthetic data assets.
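The hybrid-training idea can be illustrated as a batch sampler that mixes synthetic and non-synthetic assets. This is a toy sketch under assumed names (`hybrid_batch`, `synth_fraction`); the actual service's asset formats and rendering pipeline are not described here.

```python
import random

def hybrid_batch(synthetic, real, synth_fraction=0.5, batch_size=4, rng=None):
    """Sample one training batch mixing synthetic and non-synthetic assets.

    synthetic / real: lists of asset identifiers
    synth_fraction:   share of the batch drawn from synthetic assets
    """
    rng = rng or random.Random(0)
    n_synth = int(batch_size * synth_fraction)
    batch = (rng.sample(synthetic, n_synth)
             + rng.sample(real, batch_size - n_synth))
    rng.shuffle(batch)                      # interleave the two sources
    return batch
```

In the patented service, the synthetic assets could additionally be rendered on the fly from asset-variation parameters rather than drawn from a pre-rendered pool.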

Update management device, update management system, and update management method

An update management device manages software update of a plurality of ECUs included in an in-vehicle network, the update management device including: an information acquiring unit for acquiring load information indicating a load of each of the plurality of ECUs, performance information indicating a performance of each of the plurality of ECUs, and configuration information indicating the configuration of the in-vehicle network; and an update setting unit for selecting a restoration execution ECU that executes a restoration process of update data from among the plurality of ECUs using the load information, the performance information, and the configuration information acquired by the information acquiring unit.
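One way to picture the update setting unit's selection: score each ECU by spare capacity weighted by performance, restricted to ECUs the configuration information marks as reachable. The scoring formula and all field names are illustrative assumptions.

```python
def select_restoration_ecu(ecus):
    """Pick the ECU to execute the restoration process of update data.

    ecus: list of dicts with
      'name'      ECU identifier
      'load'      current load, 0.0 (idle) to 1.0 (saturated)
      'perf'      performance figure, higher is better
      'reachable' True if the configuration info allows this ECU
    """
    candidates = [e for e in ecus if e["reachable"]]
    # Favor high performance and low load: perf * spare capacity.
    best = max(candidates, key=lambda e: e["perf"] * (1.0 - e["load"]))
    return best["name"]
```

A busy high-performance ECU can thus lose to an idle mid-range one, which matches the abstract's use of both load and performance information.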

Determining a future operation failure in a cloud system

Examples described relate to determining a future operation failure in a cloud system. In an example, a historical utilization of resources for performing an operation in a cloud system may be determined. A current utilization of resources in the cloud system may be determined. Based on the historical utilization of resources for performing the operation in the cloud system and the current utilization of resources in the cloud system, a determination may be made whether a future performance of the operation in the cloud system is likely to be a failure. In response to a determination that the future performance of the operation in the cloud system is likely to be a failure, an alert may be generated.
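A minimal sketch of the determination: if the operation's historical resource usage, added on top of current usage, would exceed capacity for any resource, flag a likely failure. The additive model and the function name `predict_failure` are assumptions for illustration.

```python
def predict_failure(historical_usage, current_usage, capacity):
    """Return True (raise an alert) if running the operation now is likely
    to fail, per the simple model: current + historical demand > capacity.

    All three arguments map resource name -> amount (e.g. 'cpu', 'mem').
    """
    return any(current_usage[r] + historical_usage[r] > capacity[r]
               for r in historical_usage)
```

A real system would likely smooth the historical figures over many runs rather than use a single observation.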

Disaggregated system domain

An approach is disclosed that configures a computer system node from components that are each connected to an intra-node network. The configuring is performed by selecting a set of components, including at least one processor, and assigning each of the components a different address range within the node. An operating system is run on the processor included in the node with the operating system accessing each of the assigned components.
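The address-assignment step can be sketched as packing each selected component into a disjoint range of the node's address space. The layout (contiguous, in selection order) and the name `assign_address_ranges` are illustrative.

```python
def assign_address_ranges(components, base=0x0):
    """components: list of (name, size_in_bytes) for the selected components.
    Returns name -> (start, end_exclusive), each range disjoint within
    the node so the operating system can address every component."""
    ranges = {}
    addr = base
    for name, size in components:
        ranges[name] = (addr, addr + size)
        addr += size                        # next component starts after this one
    return ranges
```

The operating system running on the node's processor would then access each component through its assigned range over the intra-node network.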

Load balancing of resources

Embodiments presented herein provide techniques for balancing a multidimensional set of resources of different types within a distributed resources system. Each host computer providing the resources publishes a status on current resource usage by guest clients. Upon identifying a local imbalance, the host computer determines a source workload to migrate to or from the resources container to minimize the variance in resource usage. Additionally, when placing a new resource workload, the host computer selects a resources container that minimizes the variance to further balance resource usage.
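The variance-minimizing placement can be sketched directly: for each candidate container, compute the variance of its usage vector after adding the new workload's demand, and pick the container where that variance is smallest. Container names and the `place_workload` signature are illustrative.

```python
def place_workload(containers, demand):
    """containers: dict name -> per-resource usage fractions, e.g. [cpu, mem].
    demand: the new workload's demand, same vector length.
    Returns the container whose post-placement usage has minimal variance."""
    def variance(v):
        mean = sum(v) / len(v)
        return sum((x - mean) ** 2 for x in v) / len(v)

    def usage_after(usage):
        return [u + d for u, d in zip(usage, demand)]

    return min(containers, key=lambda c: variance(usage_after(containers[c])))
```

Minimizing variance keeps the resource dimensions evenly loaded, so no single resource type (CPU, memory, storage) becomes the bottleneck.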

Optimizing distribution of heterogeneous software process workloads
11561836 · 2023-01-24 ·

A request is received to schedule a new software process. Description data associated with the new software process is retrieved. A workload resource prediction is requested and received for the new software process. A landscape directory is analyzed to determine a computing host in a managed landscape on which to load the new software process. The new software process is executed on the computing host.

Embedded persistent queue

Various aspects are disclosed for distributed application management using an embedded persistent queue framework. In some aspects, task execution data is monitored from a plurality of task execution engines. A task request is identified. The task request can include a task and a Boolean predicate for task assignment. The task is assigned to a task execution engine embedded in a distributed application process if the Boolean predicate is true, and a capacity of the task execution engine is sufficient to execute the task. The task is enqueued in a persistent queue. The task is retrieved from the persistent queue and executed.
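The assignment rule above (predicate true and capacity sufficient, then enqueue durably) can be sketched with an in-memory stand-in for the persistent queue; the class name and capacity model are illustrative, and a real embedded persistent queue would survive process restarts.

```python
from collections import deque

class TaskQueue:
    """Toy task queue for one task execution engine."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()        # stands in for the persistent queue

    def assign(self, task, predicate):
        """Assign only if the Boolean predicate holds and capacity remains."""
        if predicate(task) and len(self.queue) < self.capacity:
            self.queue.append(task)             # enqueue for execution
            return True
        return False

    def run_next(self):
        """Retrieve the next task from the queue for execution."""
        return self.queue.popleft() if self.queue else None
```

Because tasks are enqueued before execution, an engine restart can resume from the persisted queue rather than losing in-flight work.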

Workload aware security patch management

Example implementations relate to a method and system for securing a workload from a security vulnerability based on management of critical patches for the workload. The method includes obtaining information of existing patches for each of a plurality of infrastructure resources that are required to execute the workload, where the infrastructure resources are segregated as multiple layers. The method further includes determining dependency of the infrastructure resources across the multiple layers and identifying the security vulnerability related to the infrastructure resources. The method further includes evaluating perceived criticalities of first and second new patches for the security vulnerability based on a workload weightage, a resource age of the infrastructure resources, and an actual criticality of the first and second new patches. Further, the method includes installing the first new patch followed by the second new patch on the infrastructure resources based on the perceived criticalities, in an order of the determined dependency.
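One plausible reading of the criticality evaluation: scale each patch's actual criticality by the workload weightage and an age factor, then install in descending order of the result. The formula and names (`perceived_criticality`, `max_age`) are hypothetical; the abstract does not give the actual weighting.

```python
def perceived_criticality(actual, workload_weight, resource_age, max_age=10):
    """Illustrative formula: vendor criticality scaled by workload importance,
    with older resources treated as more urgent."""
    return actual * workload_weight * (1 + resource_age / max_age)

def patch_order(patches, workload_weight, resource_age):
    """patches: dict patch name -> actual criticality.
    Returns patch names sorted highest perceived criticality first."""
    return sorted(
        patches,
        key=lambda p: perceived_criticality(
            patches[p], workload_weight, resource_age),
        reverse=True,
    )
```

In the described method this ordering would additionally be constrained by the cross-layer dependency of the infrastructure resources, so a dependent layer is patched only after the layer it relies on.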