Patent classifications
G06F2209/508
Capacity management in a cloud computing system using virtual machine series modeling
A method for minimizing allocation failures in a cloud computing system without overprovisioning may include determining a predicted supply for a virtual machine series in a system unit of the cloud computing system during an upcoming time period. The predicted supply may be based on a shared available current capacity and a shared available future added capacity for the virtual machine series in the system unit. The method may also include predicting an available capacity for the virtual machine series in the system unit during the upcoming time period. The predicted available capacity may be based at least in part on a predicted demand for the virtual machine series in the system unit during the upcoming time period and the predicted supply. The method may also include taking at least one mitigation action in response to determining that the predicted demand exceeds the predicted supply during the upcoming time period.
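The supply/demand check described in this abstract can be sketched in a few lines. This is an illustrative simplification, not the patented method: capacities and demand are collapsed to single numbers, and the "mitigation action" is reduced to a returned flag.

```python
def predicted_supply(current_capacity: float, future_added: float) -> float:
    """Predicted supply = shared available current capacity
    plus shared available future added capacity."""
    return current_capacity + future_added

def check_capacity(predicted_demand: float, current: float, future_added: float):
    """Return a (status, predicted_available_capacity) pair for the
    upcoming time period; 'mitigate' signals that at least one
    mitigation action should be taken."""
    supply = predicted_supply(current, future_added)
    available = supply - predicted_demand
    if predicted_demand > supply:
        return "mitigate", available
    return "ok", available
```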
HIGH AVAILABILITY SCHEDULER EVENT TRACKING
Aspects include monitoring, by a controller, an operational status of a tracker system that is configured to track and record a current status of a job being executed and to report completion of the job to the controller. The recording includes storing two copies of the current status, where a first copy is stored in a shared memory location accessible by the controller. In response to determining, based on the monitoring, that the tracker system is operational, the controller waits to receive a job completion message for the job from the tracker system and performs a job completion action based on receiving the job completion message. In response to determining that the tracker system is not operational, the controller obtains the current status of the job from the shared memory location and performs the job completion action based on the current status indicating that the job has completed.
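The operational/fallback branch above can be sketched as follows. The message value `"done"` and the shared-memory layout (a dict with a `"state"` key) are assumptions for illustration only.

```python
def complete_job(tracker_operational: bool, wait_for_message, shared_status: dict) -> str:
    """If the tracker system is operational, wait for its completion
    message; otherwise fall back to the status copy that the tracker
    kept in shared memory."""
    if tracker_operational:
        msg = wait_for_message()
        return "completed" if msg == "done" else "pending"
    # Tracker down: consult the shared-memory copy of the current status.
    return "completed" if shared_status.get("state") == "done" else "pending"
```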
Systems and methods for managing usage of computing resources
A processor-implemented method is disclosed. The method includes: obtaining, from an activity logging system, activity data associated with one or more defined computing tasks, the activity data indicating progress towards completion of the one or more defined computing tasks, the defined computing tasks being associated with one or more projects; obtaining, from a resource usage monitoring system, time-based resource tracking data associated with at least one of the projects, the resource tracking data including project identifying data associated with the at least one project and project time data identifying one or more time periods reflecting use of a computing resource in association with the at least one project; determining mappings of the one or more time periods to the one or more defined computing tasks based on the project identifying data and the activity data associated with the one or more defined computing tasks; determining, based on the mappings, that at least one task-based resource usage criterion is satisfied; and in response to determining that the at least one task-based resource usage criterion is satisfied, generating a notification of resource usage for display on a computing device.
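A minimal sketch of the mapping and criterion steps above, under simplifying assumptions: each time period is a `(project_id, hours)` pair, periods are mapped to tasks solely by project identifier, and the "task-based resource usage criterion" is a plain hours threshold.

```python
def map_periods_to_tasks(periods, tasks):
    """Accumulate each period's hours onto tasks that belong to the
    same project (a simplification of mapping via activity data)."""
    mapping = {}
    for project_id, hours in periods:
        for task in tasks:
            if task["project"] == project_id:
                mapping[task["id"]] = mapping.get(task["id"], 0) + hours
    return mapping

def usage_notifications(mapping, threshold):
    """Tasks whose accumulated usage satisfies the threshold criterion."""
    return [task_id for task_id, hours in mapping.items() if hours >= threshold]
```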
Methods and systems for balancing loads in distributed computer networks for computer processing requests with variable rule sets and dynamic processing loads
Methods and systems are described for balancing loads in distributed computer networks for computer processing requests with variable rule sets and dynamic processing loads. The methods and systems may include determining an initial allocation of a plurality of processing requests to a plurality of available domains that has a lowest initial sum excess processing load. The methods and systems may then retrieve an updated estimated processing load for at least one of the plurality of processing requests and determine a secondary allocation of the plurality of processing requests to the plurality of available domains.
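One common way to approximate an allocation with low excess load is greedy longest-processing-time assignment; the sketch below uses that as a stand-in for the patented allocation step, with the secondary allocation simply re-run after merging in updated estimates.

```python
def allocate(requests: dict, domains: list):
    """Greedy sketch: assign requests (largest estimated load first)
    to the currently least-loaded domain."""
    loads = {d: 0.0 for d in domains}
    assignment = {}
    for req_id, load in sorted(requests.items(), key=lambda kv: -kv[1]):
        target = min(loads, key=loads.get)
        loads[target] += load
        assignment[req_id] = target
    return assignment, loads

def reallocate(requests: dict, domains: list, updated_estimates: dict):
    """Secondary allocation after retrieving updated processing loads."""
    refreshed = {**requests, **updated_estimates}
    return allocate(refreshed, domains)
```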
Monitoring resource utilization via intercepting bare metal communications between resources
A system for providing computer implemented services using information handling systems includes a composed information handling system that provides, at least in part, the computer implemented services and a system control processor manager. The system control processor manager instantiates a utilization monitor in a system control processor of the composed information handling system; and monitors, using the utilization monitor, a use rate of computing resources of the composed information handling system by a client.
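The use rate that the utilization monitor reports could be computed as in the sketch below, assuming periodic consumption samples against a fixed allocated capacity (the sampling model is an assumption, not part of the abstract).

```python
def use_rate(consumption_samples: list, allocated_capacity: float) -> float:
    """Mean observed consumption divided by allocated capacity,
    i.e. the fraction of the composed system's resources the
    client actually used over the sampling window."""
    total = sum(consumption_samples)
    return total / (len(consumption_samples) * allocated_capacity)
```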
System and method for facilitating management of cloud infrastructure by using smart bots
A system and method for facilitating management of cloud infrastructure by using smart bots is disclosed. The method includes obtaining one or more insights associated with one or more user accounts on a cloud infrastructure from one or more cloud infrastructure resources and determining one or more cloud infrastructure issues associated with the one or more user accounts by validating the obtained one or more insights based on a set of predefined rules. The method further includes creating one or more customized bots for the determined one or more cloud infrastructure issues based on one or more user parameters by using a rule engine based AI model and deploying the created one or more customized bots on the one or more cloud infrastructure resources. Further, the method includes managing the cloud infrastructure via the deployed one or more customized bots.
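The validate-then-create pipeline above can be sketched as a rule table applied to account insights; the rule names, thresholds, and bot naming here are invented for illustration and stand in for the patent's rule-engine-based AI model.

```python
# Hypothetical predefined rules: insight name -> predicate that
# flags the insight as a cloud-infrastructure issue.
RULES = {
    "cpu_idle_pct": lambda v: v > 80,       # heavily idle instance
    "unattached_disk_gb": lambda v: v > 0,  # orphaned storage
}

def find_issues(insights: dict) -> list:
    """Validate obtained insights against the predefined rule set."""
    return [name for name, value in insights.items()
            if name in RULES and RULES[name](value)]

def create_bots(issues: list) -> list:
    """One customized bot per detected issue (naming is illustrative)."""
    return [f"bot:{issue}" for issue in issues]
```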
ALLOCATING OF COMPUTING RESOURCES FOR APPLICATIONS
A method for performing scheduling includes extracting information from at least one log file for an application. The method also includes determining an allocation of cloud resources for the application based on the information from the log file(s).
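A sketch of the two steps named above: extract a figure from application log files, then size the allocation from it. The `memory_mb=` log format and the 20% headroom factor are assumptions made for the example.

```python
import re

def extract_peak_memory(log_lines: list) -> int:
    """Scan log lines for 'memory_mb=<n>' entries and return the peak."""
    peak = 0
    for line in log_lines:
        match = re.search(r"memory_mb=(\d+)", line)
        if match:
            peak = max(peak, int(match.group(1)))
    return peak

def recommend_allocation(log_lines: list, headroom: float = 1.2) -> int:
    """Size the cloud memory allocation from the observed peak plus headroom."""
    return int(extract_peak_memory(log_lines) * headroom)
```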
HARVESTING AND USING EXCESS CAPACITY ON LEGACY WORKLOAD MACHINES
Some embodiments provide a novel method for deploying containerized applications. The method of some embodiments deploys a data collecting agent on a machine that operates on a host computer and executes a set of one or more workload applications. From this agent, the method receives data regarding consumption of a set of resources allocated to the machine by the set of workload applications. The method assesses excess capacity of the set of resources for use to execute a set of one or more containers, and then deploys the set of one or more containers on the machine to execute one or more containerized applications. In some embodiments, the set of workload applications are legacy workloads deployed on the machine before the installation of the data collecting agent. By deploying one or more containers on the machine, the method of some embodiments maximizes the usage of the machine, which was previously deployed to execute legacy non-containerized workloads.
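The excess-capacity assessment can be sketched as a per-resource subtraction followed by a fit check; resource names and units are illustrative.

```python
def excess_capacity(allocated: dict, consumed: dict) -> dict:
    """Per-resource capacity left over by the legacy workloads
    (allocated minus reported consumption from the agent)."""
    return {r: allocated[r] - consumed.get(r, 0) for r in allocated}

def can_deploy(container_needs: dict, allocated: dict, consumed: dict) -> bool:
    """Deploy the container set only if every requested resource
    fits inside the measured excess."""
    excess = excess_capacity(allocated, consumed)
    return all(excess.get(r, 0) >= need for r, need in container_needs.items())
```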
METHOD AND APPARATUS TO SELECT ASSIGNABLE DEVICE INTERFACES FOR VIRTUAL DEVICE COMPOSITION
Scalable I/O Virtualization (Scalable IOV) allows efficient and scalable sharing of Input/Output (I/O) devices across a large number of containers or Virtual Machines. Scalable IOV defines the granularity of sharing of a device as an Assignable Device Interface (ADI). In response to a request for a virtual device composition, an ADI is selected based on affinity to the same NUMA node as the running virtual machine, utilization metrics for the Input-Output Memory Management Unit (IOMMU) and utilization metrics of a device of a same device class. Selecting the ADI based on locality and utilization metrics reduces latency and increases throughput for a virtual machine running critical or real-time workloads.
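The selection policy above can be sketched as a two-level score: NUMA affinity first, then combined IOMMU and device utilization as a tiebreaker. The candidate record fields are illustrative.

```python
def select_adi(candidates: list, vm_numa_node: int) -> str:
    """Pick the ADI on the VM's NUMA node with the lowest combined
    IOMMU + device utilization; fall back to remote ADIs only when
    no local candidate exists."""
    def score(adi):
        numa_penalty = 0 if adi["numa_node"] == vm_numa_node else 1
        return (numa_penalty, adi["iommu_util"] + adi["device_util"])
    return min(candidates, key=score)["id"]
```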
NODE MANAGEMENT METHOD, DEVICE AND APPARATUS, STORAGE MEDIUM, AND SYSTEM
A node management method, a node management apparatus, a cluster node manager, a non-transitory computer-readable storage medium and a network function virtualization system are disclosed. The node management method may include: receiving node life cycle management information (S11); performing life cycle management on a node according to the node life cycle management information, where the node life cycle management includes at least one of node creation, node scaling and node release (S12).
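The life-cycle dispatch (steps S11 and S12 above) can be sketched as a handler that applies the action named in the received management information to a cluster; the dict-based cluster model and field names are assumptions for illustration.

```python
def manage_node(lifecycle_info: dict, cluster: dict) -> dict:
    """Receive node life-cycle management information (S11) and perform
    the corresponding management action on the cluster (S12)."""
    action, node = lifecycle_info["action"], lifecycle_info["node"]
    if action == "create":
        cluster[node] = {"size": 1}
    elif action == "scale":
        cluster[node]["size"] = lifecycle_info["size"]
    elif action == "release":
        cluster.pop(node, None)
    return cluster
```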