Patent classifications
G06F9/4868
Intelligent and automatic load balancing of workloads on replication appliances based on appliance load scores
Various systems and methods are provided in which a replication process is initiated between a primary site and a recovery site, each having a plurality of gateway appliances. Replication loads are evaluated for each given gateway appliance of the plurality of gateway appliances. If a determination is made that at least one gateway appliance of the plurality of gateway appliances is not overloaded, the plurality of gateway appliances are sorted based on the replication loads respectively associated with each gateway appliance. A determination is then made as to whether the relative difference in replication loads between the gateway appliance having the highest replication load and the gateway appliance having the lowest replication load exceeds a difference threshold, to determine whether the replication workloads between the gateway appliances should be rebalanced.
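The rebalancing decision described in the abstract can be sketched as follows. This is a hypothetical illustration, not the patented implementation; the load scores, the overload threshold, and the relative-difference formula are assumptions.

```python
def should_rebalance(loads, overload_threshold, difference_threshold):
    """loads: dict mapping appliance name -> replication load score."""
    # Rebalancing is only considered if at least one appliance is not overloaded.
    if all(load >= overload_threshold for load in loads.values()):
        return False
    # Sort appliances by their replication load.
    ranked = sorted(loads.items(), key=lambda item: item[1])
    lowest, highest = ranked[0][1], ranked[-1][1]
    # Rebalance when the relative difference between the most- and
    # least-loaded appliances exceeds the configured threshold.
    relative_difference = (highest - lowest) / highest if highest else 0.0
    return relative_difference > difference_threshold
```

For example, two appliances with load scores 0.9 and 0.2 differ by roughly 78% relative to the higher load, so a 50% difference threshold would trigger rebalancing.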
NETWORK FUNCTIONS VIRTUALIZATION MANAGEMENT AND ORCHESTRATION METHOD, NETWORK FUNCTIONS VIRTUALIZATION MANAGEMENT AND ORCHESTRATION SYSTEM, AND PROGRAM
A network functions virtualization management and orchestration system with a VNF descriptor (VNFD) including an information element that allows an instance created based on the VNFD to be distinguished by name. The information element includes an information element of a VM name that describes a naming rule for a virtual machine (VM).
LIVE UPDATING A VIRTUAL MACHINE VIRTUALIZING PHYSICAL RESOURCES
For a first virtual machine (VM) executing in a physical machine, a second VM is instantiated in the physical machine, the first VM using a physical adapter installed in the physical machine, the first VM virtualizing a portion of physical memory of the physical machine, the second VM virtualizing the physical adapter. The second VM is deployed using a memory mapping virtualizing the portion of physical memory. Checkpointing of an application executing in the first VM is caused, generating application state data of the application. The application is caused to execute in the second VM using the application state data. Process data of the application is caused to be updated in the second VM, the updating instructing the application to use the memory mapping.
SYSTEM AND METHOD OF APPLICATION TRANSITIONS BETWEEN INFORMATION HANDLING SYSTEMS
In one or more embodiments, one or more systems, one or more methods, and/or one or more processes may: execute a first application within a first operating system (OS) virtualization on a first information handling system (IHS); suspend the first application at a point of execution; determine one or more statuses associated with the first application, in which the one or more statuses include the point of execution where the first application was suspended; provide the one or more statuses to a second IHS; configure a second application and a second OS virtualization with the one or more statuses associated with the first application within the first OS virtualization; establish input/output associated with the second application with one or more components of the first IHS via a network; and execute the second application within the second OS virtualization on the second IHS at the point of execution.
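The suspend/transfer/resume flow above can be modeled in a toy sketch. The class and field names below are illustrative assumptions; a real system would transfer the statuses over a network and restore far richer state than a single point of execution.

```python
from dataclasses import dataclass, field

@dataclass
class AppStatus:
    # Includes the point of execution where the application was suspended.
    point_of_execution: int
    extra: dict = field(default_factory=dict)

class InformationHandlingSystem:
    def __init__(self, name):
        self.name = name
        self.app_status = None

    def suspend_application(self, point_of_execution):
        # Capture the statuses at suspension time.
        self.app_status = AppStatus(point_of_execution)
        return self.app_status

    def configure_application(self, status):
        # Configure the second application/OS virtualization with the statuses.
        self.app_status = status

    def execute_application(self):
        # Resume execution at the captured point of execution.
        return self.app_status.point_of_execution

first = InformationHandlingSystem("IHS-1")
second = InformationHandlingSystem("IHS-2")
status = first.suspend_application(point_of_execution=42)
second.configure_application(status)  # provided over a network in practice
resumed_at = second.execute_application()
```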
Multi-level caching to deploy local volatile memory, local persistent memory, and remote persistent memory
A technique is introduced for applying multi-level caching to deploy various types of physical memory to service captured memory calls from an application. The various types of physical memory can include local volatile memory (e.g., dynamic random-access memory), local persistent memory, and/or remote persistent memory. In an example embodiment, a user-space page fault notification mechanism is used to defer assignment of actual physical memory resources until a memory buffer is accessed by the application. After populating a selected physical memory in response to an initial user-space page fault notification, page access information can be monitored to determine which pages continue to be accessed and which pages are inactive, to identify candidates for eviction.
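A minimal model of the deferred-assignment and eviction-candidate logic might look like the following. The class, tier names, and threshold are assumptions for illustration; the real mechanism would use a kernel facility such as a user-space page-fault notification rather than a Python dictionary.

```python
class TieredMemoryManager:
    """Toy model: pages get physical backing only on first access (standing in
    for the user-space page-fault notification), and access counts identify
    inactive pages as eviction candidates."""
    TIERS = ("local_dram", "local_pmem", "remote_pmem")

    def __init__(self):
        self.backing = {}        # page -> physical memory tier
        self.access_count = {}   # page -> number of accesses observed

    def access(self, page):
        if page not in self.backing:
            # Deferred assignment: pick a physical tier only on first touch.
            self.backing[page] = self.TIERS[0]
        self.access_count[page] = self.access_count.get(page, 0) + 1

    def eviction_candidates(self, min_accesses):
        # Pages no longer being accessed are candidates for eviction or
        # demotion to a slower tier.
        return [p for p, n in self.access_count.items() if n < min_accesses]
```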
MIGRATION CONTEXT AND FLOW GRAPH BASED MIGRATION CONTROL
In some examples, migration context and flow graph based migration control may include ascertaining an application that is to be migrated from a physical environment to a cloud environment, and determining a migration issue associated with the migration of the application. Migration context and flow graph based migration control may further include identifying, from a historical issue database, a plurality of historical issues, determining, for the migration issue and the plurality of historical issues, unified proximities, sorting, based on the determined unified proximities, the historical issues, selecting, from the sorted historical issues, a topmost historical issue, and determining, from the topmost historical issue, a resolution associated with the topmost historical issue. Further, migration context and flow graph based migration control may include executing the resolution to resolve the migration issue, and performing, based on the resolved migration issue, migration of the application from the physical environment to the cloud environment.
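The proximity-ranking step above (compute unified proximities, sort, pick the topmost historical issue, and take its resolution) can be sketched as follows. The token-overlap proximity function is a stand-in assumption; the patent's "unified proximities" are not specified here.

```python
def token_overlap(a, b):
    # Illustrative proximity: number of shared words (higher = closer).
    return len(set(a.split()) & set(b.split()))

def resolve_migration_issue(migration_issue, historical_issues, proximity):
    """historical_issues: list of (issue_text, resolution) pairs."""
    # Determine proximities between the migration issue and each historical
    # issue, sort by proximity, and select the topmost (closest) one.
    ranked = sorted(historical_issues,
                    key=lambda entry: proximity(migration_issue, entry[0]),
                    reverse=True)
    topmost_issue, resolution = ranked[0]
    return resolution
```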
Detection of faults in performance of micro instructions
Micro-architectural fault detectors are described. An example of storage mediums includes instructions for receiving one or more micro instructions for scheduling in a processor, the processor including one or more processing resources; and performing fault detection in performance of the one or more micro instructions utilizing one or more of a first idle canary detection mode, wherein the first mode includes assigning at least one component as an idle canary detector to perform a canary process with an expected outcome, and a second micro-architectural redundancy execution mode, wherein the second mode includes replicating a first micro instruction to generate micro instructions for performance by a set of processing resources.
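Both detection modes reduce to simple comparisons, sketched below at a very high level. The callables standing in for processing resources and the canary detector are assumptions; real detectors operate on micro-architectural state, not Python functions.

```python
def redundant_execute(micro_instruction, resources):
    """Redundancy execution mode: replicate a micro instruction across a set
    of processing resources and flag a fault when results disagree."""
    results = [resource(micro_instruction) for resource in resources]
    fault_detected = len(set(results)) > 1
    return results[0], fault_detected

def idle_canary_check(detector, expected_outcome):
    """Idle canary mode: run a canary process with a known expected outcome
    on an otherwise idle component; a mismatch indicates a fault."""
    return detector() == expected_outcome
```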
Systems and methods for managing resources in a hyperconverged infrastructure cluster
Various approaches for managing computational resources in a hyperconverged infrastructure (HCI) cluster include identifying the hosts associated with the HCI cluster for providing one or more computational resources thereto; for each of the hosts, determining a revenue and/or an expense for allocating the computational resource(s) to the HCI cluster; and determining whether to clone, suspend or terminate each host in the HCI cluster based at least in part on the associated revenue and/or expense.
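One way to picture the clone/suspend/terminate decision is as a revenue-to-expense policy per host. The thresholds below are purely illustrative assumptions, not values from the patent.

```python
def host_action(revenue, expense, clone_margin=2.0):
    """Decide whether to clone, suspend, or terminate a host based on the
    revenue and expense of allocating its resources to the HCI cluster."""
    if expense == 0 or revenue / expense >= clone_margin:
        return "clone"       # highly profitable: add capacity
    if revenue >= expense:
        return "suspend"     # marginal: park the host without losing state
    return "terminate"       # running at a loss: reclaim the host
```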
SYSTEMS AND METHODS FOR LOAD BALANCING BASED ON THERMAL PARAMETERS
In accordance with embodiments of the present disclosure, a system may include a plurality of slots each configured to receive a modular information handling system, a plurality of air movers each configured to cool at least one modular information handling system disposed in at least one of the plurality of slots, and a controller communicatively coupled to the plurality of slots and the plurality of air movers and configured to, based on one or more thermal operational parameters associated with the plurality of slots and the plurality of air movers, determine an optimal allocation of at least one workload to a particular information handling system of a plurality of modular information handling systems received in the plurality of slots.
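A simplified placement policy based on thermal operational parameters might look like this. The slot parameters (current temperature, thermal limit, air-mover cooling headroom) and the margin formula are illustrative assumptions.

```python
def place_workload(slots, workload_heat):
    """slots: dict slot_id -> {"temp": current temp (C), "limit": thermal
    limit (C), "cooling": air-mover headroom (C)}. Pick the slot whose
    projected temperature after taking the workload leaves the most margin."""
    def margin(slot):
        p = slots[slot]
        projected = p["temp"] + workload_heat - p["cooling"]
        return p["limit"] - projected

    feasible = [s for s in slots if margin(s) > 0]
    if not feasible:
        return None  # no slot can take the workload within thermal limits
    return max(feasible, key=margin)
```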
METHODS AND SYSTEMS FOR INSTANTIATING AND TRANSPARENTLY MIGRATING EXECUTING CONTAINERIZED PROCESSES
A method for instantiating and transparently migrating executing containerized processes includes receiving, by a container engine executing on a first machine, an instruction to instantiate a container image on the first machine. The container engine transmits, to a modified container runtime process, executing on the first machine, the instruction to instantiate the container image on the first machine. The modified container runtime process generates, on the first machine, a shim process representing the instantiated container image. The shim process forwards the instruction to an agent executing on a second machine, via a proxy connected to the agent via a network connection. The agent directs instantiation of the container image as a containerized process. A scheduler component executing on the first machine determines to migrate the containerized process to a third machine. The scheduler component directs migration of the containerized process to the third machine, during execution of the containerized process.
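The shim/agent relationship and the scheduler-directed migration can be sketched as follows. The classes and the checkpoint/restore state are assumptions for illustration; the actual mechanism involves a modified container runtime, a network proxy, and live process state.

```python
class Agent:
    """Runs on a remote machine and hosts containerized processes."""
    def __init__(self, machine):
        self.machine = machine
        self.processes = {}

    def instantiate(self, image, state=None):
        # Instantiate the container image as a containerized process,
        # optionally restoring checkpointed state.
        self.processes[image] = {"state": state or {"step": 0}, "running": True}

    def checkpoint(self, image):
        # Capture and remove the process so it can resume elsewhere.
        return self.processes.pop(image)["state"]

class ShimProcess:
    """Stands in for the container locally and forwards to a remote agent."""
    def __init__(self, image, agent):
        self.image = image
        self.agent = agent
        agent.instantiate(image)  # forwarded via a proxy in practice

    def migrate_to(self, new_agent):
        # Scheduler-directed migration: checkpoint on the old machine and
        # restore on the new one, transparently to the container's user.
        state = self.agent.checkpoint(self.image)
        new_agent.instantiate(self.image, state)
        self.agent = new_agent

second = Agent("machine-2")
third = Agent("machine-3")
shim = ShimProcess("my-image", second)  # instantiated on the second machine
shim.migrate_to(third)                  # migrated to the third machine
```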