G06F2009/4557

System and method for optimizing network topology in a virtual computing environment
11579913 · 2023-02-14

A computer network optimization methodology is disclosed. In a computer-implemented method, components of a computing environment are automatically monitored and have a feature selection analysis performed thereon. The feature selection analysis determines whether features of the components are in frequent communication and are generating network latency. Provided the feature selection analysis determines that features of the components are not well defined, a similarity analysis of the features is performed. Results of the feature selection methodology are generated, and the components involved in the network traffic latency are reassigned to mitigate the latency.
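
A minimal sketch of the reassignment step, assuming a hypothetical traffic log and a simple message-count threshold in place of the abstract's unspecified feature selection and similarity analyses:

```python
# Hypothetical traffic log: (component_a, component_b, message_count).
TRAFFIC = [
    ("web", "cache", 9500),
    ("cache", "db", 8800),
    ("web", "auth", 120),
]

# Current placement of components onto hosts (names are illustrative).
placement = {"web": "host-1", "cache": "host-2", "db": "host-3", "auth": "host-3"}

CHATTY_THRESHOLD = 1000  # assumed cutoff for "frequent communication"

def reassign_chatty_pairs(traffic, placement, threshold):
    """Co-locate component pairs whose traffic exceeds the threshold, a
    stand-in for the abstract's 'reassigned to mitigate the latency' step."""
    for a, b, count in sorted(traffic, key=lambda t: -t[2]):
        if count >= threshold and placement[a] != placement[b]:
            placement[b] = placement[a]  # move b onto a's host
    return placement

print(reassign_chatty_pairs(TRAFFIC, placement, CHATTY_THRESHOLD))
```

Co-locating the chattiest pairs removes the cross-host hops that were generating the latency.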

Edge computing system
11582283 · 2023-02-14

A method of traffic reduction in a mesh computing system (400) is provided, the mesh computing system (400) comprising hosts located on edge nodes of the mesh computing system (400) and a central registry located outside the mesh computing system (400), the central registry holding container images. The method comprises, at a first host located at a first edge node, receiving (920) a request from a client for an image and sending (930) a request for the image to at least one other host of the mesh computing system (400). When the first host receives (940) notification that at least a second host holds the image, the first host downloads (960) the image from the second host. The first host then creates (970) a container from the image. A host at a node (636; 700) and a mesh computing system (400) are also provided.
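
A minimal sketch of the peer-first pull flow, with hypothetical Host objects standing in for edge nodes; the discovery mechanism and registry interface are assumptions:

```python
class Host:
    def __init__(self, name, images):
        self.name = name
        self.images = set(images)  # locally cached image names

    def handle_client_request(self, image, peers, registry):
        """Peer-first image pull, following the flow in the abstract:
        ask peers (930), download from a peer that holds the image
        (940/960), fall back to the central registry otherwise, then
        create a container from the image (970)."""
        if image not in self.images:
            holder = next((p for p in peers if image in p.images), None)
            source = holder.name if holder else registry
            self.images.add(image)           # download (960) from peer or registry
            print(f"{self.name}: pulled {image} from {source}")
        return f"container({image})"         # create container (970)

registry = "central-registry"
edge_a, edge_b = Host("edge-a", []), Host("edge-b", ["nginx:1.25"])
print(edge_a.handle_client_request("nginx:1.25", [edge_b], registry))
```

Pulling from a nearby peer instead of the central registry is what yields the claimed traffic reduction.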

Honoring resource scheduler constraints during maintenances

The present disclosure describes a technique for honoring virtual machine placement constraints established on a first host in a virtualized computing environment. Upon receiving a request to migrate one or more virtual machines from the first host, and without violating the virtual machine placement constraints, the technique identifies an architecture of the first host, provisions a second host with an architecture compatible with that of the first host, adds the second host to the cluster of hosts, and migrates the one or more virtual machines from the first host to the second host.
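
A minimal sketch of the maintenance flow under assumed host and cluster structures (the constraint model itself is not detailed in the abstract):

```python
def migrate_for_maintenance(cluster, first_host, provision_host):
    """Sketch of the described flow: identify the first host's architecture,
    provision a compatible second host, add it to the cluster, and migrate
    the VMs without violating placement constraints. Names are illustrative."""
    arch = first_host["arch"]                      # identify architecture
    second_host = provision_host(arch)             # provision compatible host
    cluster.append(second_host)                    # add to cluster
    second_host["vms"].extend(first_host["vms"])   # migrate the VMs
    first_host["vms"] = []
    return second_host

cluster = [{"name": "host-1", "arch": "x86_64", "vms": ["vm-a", "vm-b"]}]
provision = lambda arch: {"name": "host-2", "arch": arch, "vms": []}
print(migrate_for_maintenance(cluster, cluster[0], provision))
```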

Policy enforcement and performance monitoring at sub-LUN granularity
11579910 · 2023-02-14

Techniques are provided for enforcing policies at a sub-logical unit number (LUN) granularity, such as at a virtual disk or virtual machine granularity. A block range of a virtual disk of a virtual machine stored within a LUN is identified. A quality of service policy object is assigned to the block range to create a quality of service workload object. A target block range targeted by an operation is identified. A quality of service policy of the quality of service policy object is enforced upon the operation using the quality of service workload object based upon the target block range being within the block range of the virtual disk.
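
A minimal sketch of sub-LUN enforcement, assuming a block-range-to-policy mapping; the field names and the IOPS attribute are illustrative:

```python
from dataclasses import dataclass

@dataclass
class QosWorkload:
    """A QoS policy bound to a block range of a virtual disk within a LUN,
    mirroring the 'quality of service workload object' in the abstract."""
    start: int
    end: int          # exclusive end of the virtual disk's block range
    max_iops: int     # illustrative policy attribute

def enforce(workloads, op_start, op_end):
    """Return the policy whose block range contains the operation's target
    range, i.e. enforcement at sub-LUN (virtual disk) granularity."""
    for w in workloads:
        if w.start <= op_start and op_end <= w.end:
            return w
    return None  # operation targets blocks outside any governed range

lun_policies = [QosWorkload(0, 4096, max_iops=500), QosWorkload(4096, 8192, max_iops=2000)]
print(enforce(lun_policies, 100, 200))  # governed by the first virtual disk's policy
```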

Systems and methods for virtual machine resource optimization using machine learning techniques

Systems described herein may allow for the intelligent configuration of containers onto virtualized resources. Such systems may generate configurations, based on received parameters, for use in configuring (e.g., installing, instantiating, etc.) virtualized resources. Once generated, a configuration may be selected according to determined selection parameters and/or intelligent selection techniques.
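
A minimal generate-then-select sketch, with an assumed cost score standing in for the abstract's "determined selection parameters" and intelligent selection techniques:

```python
def generate_configurations(params):
    """Generate candidate container-on-VM configurations from received
    parameters (illustrative: vary CPU and memory around requested values)."""
    return [
        {"cpus": params["cpus"] + dc, "mem_gb": params["mem_gb"] + dm}
        for dc in (0, 1) for dm in (0, 2)
    ]

def select_configuration(configs, score):
    """Pick a configuration per a selection criterion; here, an assumed
    cost score to minimize."""
    return min(configs, key=score)

candidates = generate_configurations({"cpus": 2, "mem_gb": 4})
best = select_configuration(candidates, lambda c: c["cpus"] * 10 + c["mem_gb"])
print(best)  # the cheapest candidate that satisfies the request
```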

Integrity-preserving cold migration of virtual machines

A method includes identifying a source virtual machine to be migrated from a source domain to a target domain, extracting file-in-use metadata and shared asset metadata from virtual machine metadata of the source virtual machine, and copying one or more files identified in the file-in-use metadata to a target virtual machine in the target domain. For each of one or more shared assets identified in the shared asset metadata, the method further includes (a) determining whether or not the shared asset already exists in the target domain, (b) responsive to the shared asset already existing in the target domain, updating virtual machine metadata of the target virtual machine to specify the shared asset, and (c) responsive to the shared asset not already existing in the target domain, copying the shared asset to the target domain and updating virtual machine metadata of the target virtual machine to specify the shared asset.
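
A minimal sketch of steps (a) through (c), assuming dictionary-shaped metadata; a real implementation would copy file contents rather than names:

```python
def cold_migrate(source_meta, target_domain):
    """Sketch of the described flow: copy files-in-use, then for each shared
    asset copy it only if the target domain lacks it, updating the target
    VM's metadata either way. Structures are illustrative."""
    target_meta = {"files": [], "shared_assets": []}
    for f in source_meta["files_in_use"]:
        target_meta["files"].append(f)              # copy file to target VM
    for asset in source_meta["shared_assets"]:
        if asset not in target_domain["assets"]:    # (a) existence check
            target_domain["assets"].add(asset)      # (c) copy the asset over
        target_meta["shared_assets"].append(asset)  # (b)/(c) update metadata
    return target_meta

src = {"files_in_use": ["disk0.vmdk"], "shared_assets": ["base-image-v1"]}
tgt = {"assets": {"base-image-v1"}}
print(cold_migrate(src, tgt))  # asset already exists, so only metadata is updated
```

Skipping the copy when the asset already exists in the target domain is what preserves integrity without duplicating shared state.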

Platform independent GPU profiles for more efficient utilization of GPU resources

Disclosed are various examples for platform independent graphics processing unit (GPU) profiles for more efficient utilization of GPU resources. A virtual machine configuration can be identified to include a platform independent graphics computing requirement. Hosts can be identified as available in a computing environment based on the platform independent graphics computing requirement. The virtual machine can be placed on a host based on a consideration of host priority.
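
A minimal sketch of requirement filtering plus priority-based placement; the GPU-memory field is an assumed stand-in for a platform independent graphics computing requirement:

```python
def place_vm(vm_requirement, hosts):
    """Sketch of the placement flow: filter hosts that can satisfy a
    platform-independent graphics requirement, then place the VM on the
    highest-priority candidate. Fields are illustrative."""
    candidates = [h for h in hosts if h["gpu_mem_gb"] >= vm_requirement["gpu_mem_gb"]]
    if not candidates:
        return None
    return max(candidates, key=lambda h: h["priority"])  # host priority consideration

hosts = [
    {"name": "gpu-host-1", "gpu_mem_gb": 8, "priority": 2},
    {"name": "gpu-host-2", "gpu_mem_gb": 16, "priority": 5},
]
print(place_vm({"gpu_mem_gb": 8}, hosts))  # both qualify; priority breaks the tie
```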

SYSTEM AND METHOD OF UTILIZING THERMAL PROFILES ASSOCIATED WITH WORKLOAD EXECUTING ON INFORMATION HANDLING SYSTEMS

In one or more embodiments, one or more systems, one or more methods, and/or one or more processes may determine first thermal attribute values associated with multiple information handling systems (IHSs) with respect to a period of time as the IHSs execute a first workload; determine multiple variance ranges respectively associated with the first thermal attribute values; periodically determine second thermal attribute values associated with the IHSs as the IHSs execute a second workload; determine that a thermal attribute value of the second thermal attribute values exceeds a respective variance range of the variance ranges as a first information handling system (IHS) of the IHSs executes the second workload; generate an alert based at least on the thermal attribute value exceeding the respective variance range; and in response to the alert, transfer at least a portion of the second workload from the first IHS to a second IHS of the IHSs.
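
A minimal sketch of the variance-range check, assuming a mean +/- k·stdev band as the "variance range" (the abstract does not specify how the ranges are derived):

```python
import statistics

def variance_range(baseline_samples, k=2.0):
    """Derive a tolerated band from first-workload samples; mean +/- k stdev
    is an assumption standing in for the patent's 'variance ranges'."""
    mean = statistics.mean(baseline_samples)
    stdev = statistics.stdev(baseline_samples)
    return (mean - k * stdev, mean + k * stdev)

def check_and_transfer(reading, rng, workload, first_ihs, second_ihs):
    """Alert when a second-workload reading leaves the band, then transfer a
    portion of the workload to another IHS, per the abstract."""
    low, high = rng
    if low <= reading <= high:
        return first_ihs                      # within range; nothing to do
    print(f"ALERT: {reading}C outside {low:.1f}-{high:.1f}C on {first_ihs}; "
          f"transferring part of {workload} to {second_ihs}")
    return second_ihs

baseline = [61.0, 62.5, 60.8, 61.9]           # first-workload temperatures
print(check_and_transfer(71.0, variance_range(baseline), "batch-job", "ihs-1", "ihs-2"))
```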

FAIL-SAFE POST COPY MIGRATION OF CONTAINERIZED APPLICATIONS
20230043180 · 2023-02-09

A supervisor on a destination host receives a request to migrate an application from a source host to the destination host and determines a total amount of memory associated with the application on the source host. The supervisor on the destination host allocates one or more memory pages in a page table on the destination host to satisfy the total amount of memory associated with the application on the source host, where the one or more memory pages are to be associated with the application on the destination host. Responsive to determining that the one or more memory pages have been allocated on the destination host, the supervisor on the destination host initiates migration of the application from the source host to the destination host.
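
A minimal sketch of the allocate-before-migrate ordering that makes the post-copy migration fail-safe; the page-table structures are assumptions:

```python
def migrate_post_copy(app, source, destination, page_size=4096):
    """Fail-safe ordering from the abstract: allocate all destination pages
    first, and only then initiate migration. Structures are illustrative."""
    total = source["apps"][app]                    # total memory on source
    pages_needed = -(-total // page_size)          # ceiling division
    if pages_needed > destination["free_pages"]:
        raise MemoryError("destination cannot satisfy the application's memory")
    destination["free_pages"] -= pages_needed      # allocate in the page table
    destination.setdefault("page_table", {})[app] = pages_needed
    print(f"allocated {pages_needed} pages; initiating migration of {app}")

src = {"apps": {"web-app": 1_048_576}}             # 1 MiB resident on source
dst = {"free_pages": 1024}
migrate_post_copy("web-app", src, dst)
```

Reserving every page up front means the migration can never stall mid-flight for lack of destination memory, which is the fail-safe property in the title.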

Traversing a large connected component on a distributed file-based data structure

A distributed system including multiple processing nodes is provided. The distributed system can perform certain acts. The acts can include receiving a set of input nodes and a set of criteria. The acts can include obtaining an adjacency list representing a large connected component. The large connected component can include nodes, edges, and edge metadata. A quantity of the nodes of the large connected component can exceed 1 billion. The adjacency list can be distributed across the multiple processing nodes. The nodes of the large connected component can include the input nodes. The acts also can include performing one or more iterations of traversing the large connected component until a stopping condition is satisfied. Each iteration can include processing a set of input nodes at the multiple processing nodes using the set of criteria to generate first data at the multiple processing nodes; determining a set of output nodes such that each output node of the set of output nodes is one hop from a respective input node of the set of input nodes; consolidating the first data from the multiple processing nodes to a first processing node of the multiple processing nodes; processing the first data at the first processing node; and assigning the set of input nodes for a subsequent iteration of the one or more iterations based on the set of output nodes when the stopping condition is not satisfied. The acts further can include outputting second data based on the first data received and processed at the first processing node during the one or more iterations. Other embodiments are disclosed.
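
A single-process sketch of the iterative one-hop traversal; sharding of the adjacency list across processing nodes and consolidation at a first processing node are elided, and an assumed hop limit stands in for the stopping condition:

```python
def traverse(adjacency, input_nodes, criteria, max_hops):
    """Process the current frontier against the criteria ('first data'),
    take the one-hop neighbors as the next frontier ('output nodes'), and
    stop when the frontier empties or the hop limit is reached."""
    frontier, seen, results = set(input_nodes), set(input_nodes), []
    for hop in range(max_hops):
        results.extend(n for n in frontier if criteria(n))          # first data
        frontier = {m for n in frontier for m in adjacency.get(n, [])} - seen
        if not frontier:
            break                              # nothing is one hop away; stop
        seen |= frontier                       # next iteration's input nodes
    return results                             # second data, based on first data

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(traverse(graph, {"a"}, criteria=lambda n: n != "c", max_hops=3))
```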