Patent classifications
G06F9/505
MULTI-DEVICE PROCESSING ACTIVITY ALLOCATION
Allocating processing activities among multiple computing devices can include identifying multiple computing activities of a computer-executable process and, for each computing activity identified, estimating in real time the computing resources needed. The identifying can be in response to detecting a computer-executable instruction executed by one of multiple communicatively coupled computing devices, and the computer-executable instruction can be associated with the computer-executable process. A current condition and configuration of each of the computing devices can be determined in real time. For each computing device, an effect induced by executing one or more of the plurality of activities can be predicted, the predicting based on each computing device's current condition and configuration and performed by a machine learning model trained using data collected from prior real-time processing of example process activities. Based on the predicting, computing activities can be allocated in real time among the computing devices.
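A minimal sketch of the allocation step described above, assuming a greedy policy: for each activity, the device with the smallest predicted induced effect is chosen, and device state is updated as activities are assigned. The `predict_effect` function is a hypothetical stand-in for the trained machine learning model; the load/capacity fields are illustrative assumptions.

```python
def predict_effect(device, activity):
    # Hypothetical stand-in for the trained model: predicted induced
    # effect is current load plus the activity's estimated resource
    # need, scaled by the device's capacity.
    return (device["load"] + activity["cost"]) / device["capacity"]

def allocate(activities, devices):
    """Greedily assign each activity to the device with the smallest
    predicted induced effect, updating device load as we go."""
    assignment = {}
    for act in activities:
        best = min(devices, key=lambda d: predict_effect(d, act))
        best["load"] += act["cost"]
        assignment[act["name"]] = best["name"]
    return assignment

devices = [
    {"name": "phone", "load": 2.0, "capacity": 4.0},
    {"name": "laptop", "load": 1.0, "capacity": 8.0},
]
activities = [{"name": "decode", "cost": 2.0}, {"name": "render", "cost": 4.0}]
print(allocate(activities, devices))  # → {'decode': 'laptop', 'render': 'laptop'}
```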
Generation of cloud service inventory
A data model characterizing a plurality of resources is received. The data model associates a first resource within a first remote computing environment with a first tag and a second resource within a second remote computing environment with a second tag. The data model is received from a database that is separate from the first remote computing environment and the second remote computing environment. The plurality of resources is grouped based on the first tag and the second tag. The grouping can form a first group associated with the first tag and a second group associated with the second tag. A first list of resources characterizing the first group and a second list of resources characterizing the second group is provided. Related apparatus, systems, techniques and articles are also described.
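The tag-based grouping step might be sketched as follows: resource records, as the data model could supply them, are bucketed by tag, producing one list of resources per tag. The field names here are assumptions for illustration, not the patent's schema.

```python
from collections import defaultdict

def group_by_tag(resources):
    """Group resource names into lists keyed by their tag."""
    groups = defaultdict(list)
    for res in resources:
        groups[res["tag"]].append(res["name"])
    return dict(groups)

# Resources from two separate remote computing environments.
inventory = [
    {"name": "vm-1", "env": "cloud-a", "tag": "billing"},
    {"name": "db-7", "env": "cloud-b", "tag": "analytics"},
    {"name": "vm-2", "env": "cloud-a", "tag": "billing"},
]
print(group_by_tag(inventory))
# → {'billing': ['vm-1', 'vm-2'], 'analytics': ['db-7']}
```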
Containerized workload scheduling
A method for containerized workload scheduling can include determining a network state for a first hypervisor in a virtual computing cluster (VCC). The method can further include determining a network state for a second hypervisor. Containerized workload scheduling can further include deploying a container to run a containerized workload on a virtual computing instance (VCI) deployed on the first hypervisor or the second hypervisor based, at least in part, on the determined network state for the first hypervisor and the second hypervisor.
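A minimal sketch of the scheduling decision, under assumed metrics: each hypervisor's network state is scored (low latency and high free bandwidth are favored) and the containerized workload's VCI is deployed on the better-scoring hypervisor. The scoring weights and field names are illustrative assumptions.

```python
def network_score(state):
    # Favor low latency and high available bandwidth; the 10x latency
    # weight is an arbitrary illustrative choice.
    return state["free_bandwidth_mbps"] - 10.0 * state["latency_ms"]

def pick_hypervisor(hypervisors):
    """Return the name of the hypervisor with the best network state."""
    return max(hypervisors, key=lambda h: network_score(h["net_state"]))["name"]

hypervisors = [
    {"name": "hv-1", "net_state": {"latency_ms": 2.0, "free_bandwidth_mbps": 400.0}},
    {"name": "hv-2", "net_state": {"latency_ms": 0.5, "free_bandwidth_mbps": 450.0}},
]
print(pick_hypervisor(hypervisors))  # → hv-2
```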
System and method for cloud workload provisioning
Disclosed is a system and method for cloud workload provisioning. In one implementation, the present invention provides a system enabling automated guidance to the user for the workload to be provisioned. The present invention matches the user's workload profile against a wide variety of historical data sets, making it easy for users to choose the cloud provisioning for various kinds of workloads. The system can automatically readjust a workload profile for cloud provisioning. The system can provide a manual selection option for cloud provisioning. In one embodiment, the present invention provides a system and method that derives a workload provision scaling factor using a historical data set. Furthermore, the system and method can automatically or manually readjust the provision scaling factor based on a workload profile for cloud provisioning.
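One way the scaling-factor derivation might look, as a hedged sketch: the factor is taken as the worst observed ratio of peak demand to provisioned capacity across past runs of similar workloads, and can be overridden manually. The formula is an illustrative assumption, not the patented mechanism.

```python
def scaling_factor(history):
    """history: list of (provisioned, peak_used) pairs for similar workloads."""
    ratios = [peak / provisioned for provisioned, peak in history]
    return max(ratios)  # provision for the worst observed case

def provision(baseline, history, manual_factor=None):
    # The factor can be readjusted automatically from history,
    # or set manually by the user.
    factor = manual_factor if manual_factor is not None else scaling_factor(history)
    return baseline * factor

history = [(100, 80), (100, 120), (100, 95)]
print(provision(100, history))                      # automatic → 120.0
print(provision(100, history, manual_factor=1.5))   # manual override → 150.0
```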
Scheduler for amp architecture with closed loop performance and thermal controller
Systems and methods are disclosed for scheduling threads on a processor that has at least two different core types, such as an asymmetric multiprocessing system. Each core type can run at a plurality of selectable dynamic voltage and frequency scaling (DVFS) states. Threads from a plurality of processes can be grouped into thread groups. Execution metrics are accumulated for threads of a thread group and fed into a plurality of tunable controllers for the thread group. A closed loop performance control (CLPC) system determines a control effort for the thread group and maps the control effort to a recommended core type and DVFS state. A closed loop thermal and power management system can limit the control effort determined by the CLPC for a thread group, and limit the power, core type, and DVFS states for the system. Deferred interrupts can be used to increase performance.
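An illustrative sketch of mapping a control effort to a recommended core type and DVFS state, with a thermal limiter capping the effort before the mapping. The threshold scheme, the 0.0–1.0 effort range, and the state table are assumptions for illustration only.

```python
# Assumed DVFS state table: (frequency MHz, core type),
# ordered from lowest to highest performance.
DVFS_STATES = [(600, "E"), (1200, "E"), (1800, "P"), (2400, "P")]

def map_control_effort(effort, thermal_cap=1.0):
    """Map a CLPC control effort in [0, 1] to (core type, frequency),
    after the thermal/power management system caps the effort."""
    effort = min(effort, thermal_cap)  # closed-loop thermal limit
    idx = min(int(effort * len(DVFS_STATES)), len(DVFS_STATES) - 1)
    freq, core = DVFS_STATES[idx]
    return core, freq

print(map_control_effort(0.9))                   # → ('P', 2400)
print(map_control_effort(0.9, thermal_cap=0.4))  # capped → ('E', 1200)
```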
Software switch and method therein
A software switch and a method performed by the software switch are disclosed. The software switch receives, from a node deploying a virtual machine, a request for a virtual port to be polled by the virtual machine. The request includes a Central Processing Unit “CPU” identity identifying a CPU on which the virtual machine executes. The request includes an indication of a clock frequency at which the CPU is set to operate. The software switch determines a number of packets in a queue associated with the virtual port. The software switch adjusts the clock frequency of the CPU based on the number of packets in the queue. A corresponding computer program and a computer program carrier are also disclosed.
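The frequency-adjustment idea can be sketched as a simple control rule, assuming watermark thresholds on queue depth: scale the polling CPU's clock up when packets back up at the virtual port, and down when the queue drains. The step sizes, watermarks, and frequency bounds are assumptions.

```python
def adjust_clock(current_mhz, queue_len, min_mhz=800, max_mhz=2400,
                 high_watermark=64, low_watermark=8, step_mhz=200):
    """Return a new CPU clock frequency based on the number of packets
    queued at the virtual port polled by the virtual machine's CPU."""
    if queue_len > high_watermark:
        return min(current_mhz + step_mhz, max_mhz)  # queue backing up
    if queue_len < low_watermark:
        return max(current_mhz - step_mhz, min_mhz)  # queue draining
    return current_mhz  # within the comfortable band

print(adjust_clock(1600, queue_len=100))  # busy queue → 1800
print(adjust_clock(1600, queue_len=2))    # idle queue → 1400
```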
Method for establishing system resource prediction and resource management model through multi-layer correlations
A method for establishing a system resource prediction and resource management model through multi-layer correlations is provided. The method builds an estimation model by analyzing the relationships between a main application workload, resource usage of the main application, and resource usage of sub-application resources, and prepares in advance the specific resources needed to meet future requirements. This multi-layer analysis, prediction, and management method differs from the prior art, which focuses only on single-level estimation and resource deployment. The present invention can utilize more of the interactive relationships among different layers to perform predictions effectively, thereby reducing the hidden resource management costs of operating application services.
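The multi-layer idea might be sketched by chaining two simple fitted models: workload predicts main-application usage, and main-application usage in turn predicts sub-application usage. The linear form is an illustrative assumption; the patent does not specify the estimation technique.

```python
def fit_linear(xs, ys):
    """Least-squares slope and intercept for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Layer 1: workload → main-app usage; Layer 2: main-app → sub-app usage.
workload = [10, 20, 30, 40]
main_usage = [25, 45, 65, 85]   # observed: ≈ 2*w + 5
sub_usage = [12, 22, 32, 42]    # observed: ≈ 0.5*m - 0.5

a1, b1 = fit_linear(workload, main_usage)
a2, b2 = fit_linear(main_usage, sub_usage)

# Chain the layers to predict resources for a future workload of 50.
future_workload = 50
main_pred = a1 * future_workload + b1
sub_pred = a2 * main_pred + b2
print(main_pred, sub_pred)  # → 105.0 52.0
```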
Processing rest API requests based on resource usage satisfying predetermined limits
A request manager analyzes API calls from a client to a host application for state and performance information. If current utilization of the host application's processing or memory footprint resources exceeds predetermined levels, then the incoming API call is not forwarded to the application. If current utilization of the host application's processing and memory resources does not exceed the predetermined levels, then the request manager quantifies the processing or memory resources required to report the requested information and determines whether projected utilization of the host application's processing or memory resources, inclusive of the resources required to report the requested information, exceeds the predetermined levels. If the predetermined levels are not exceeded, then the request manager forwards the API call to the application for processing.
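A minimal sketch of the admission check: an incoming call is forwarded only if current utilization is under the limits and the projected utilization, including the estimated cost of serving this call, also stays under them. The limit values and cost estimates are illustrative assumptions.

```python
# Predetermined utilization limits (fractions of capacity) — assumed values.
CPU_LIMIT, MEM_LIMIT = 0.8, 0.8

def should_forward(cpu_now, mem_now, cpu_cost, mem_cost):
    """Decide whether the request manager forwards an API call."""
    if cpu_now > CPU_LIMIT or mem_now > MEM_LIMIT:
        return False  # host application is already over the limits
    # Projected utilization includes the resources needed to serve this call.
    projected_cpu = cpu_now + cpu_cost
    projected_mem = mem_now + mem_cost
    return projected_cpu <= CPU_LIMIT and projected_mem <= MEM_LIMIT

print(should_forward(0.5, 0.4, cpu_cost=0.1, mem_cost=0.1))   # → True
print(should_forward(0.75, 0.4, cpu_cost=0.1, mem_cost=0.1))  # → False
```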
Platform independent GPU profiles for more efficient utilization of GPU resources
Disclosed are various examples for platform independent graphics processing unit (GPU) profiles for more efficient utilization of GPU resources. A virtual machine configuration can be identified to include a platform independent graphics computing requirement. Hosts can be identified as available in a computing environment based on the platform independent graphics computing requirement. The virtual machine can be placed on a host based on a consideration of host priority.
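The placement step might look like the following hedged sketch: filter the hosts that can satisfy a platform-independent graphics requirement (here a minimum GPU memory, as an assumed proxy), then place the virtual machine on the highest-priority matching host. Field names and the priority rule are assumptions.

```python
def place_vm(requirement, hosts):
    """Return the name of the host to place the VM on, or None if no
    host satisfies the platform-independent graphics requirement."""
    candidates = [h for h in hosts
                  if h["gpu_mem_gb"] >= requirement["min_gpu_mem_gb"]]
    if not candidates:
        return None
    # Among satisfying hosts, consider host priority.
    return max(candidates, key=lambda h: h["priority"])["name"]

hosts = [
    {"name": "host-a", "gpu_mem_gb": 8, "priority": 2},
    {"name": "host-b", "gpu_mem_gb": 16, "priority": 1},
    {"name": "host-c", "gpu_mem_gb": 16, "priority": 3},
]
print(place_vm({"min_gpu_mem_gb": 12}, hosts))  # → host-c
```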
Technologies for providing shared memory for accelerator sleds
Technologies for providing shared memory for accelerator sleds include an accelerator sled that receives, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request identifies the region of memory with a logical address. Additionally, the accelerator sled determines, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled routes the memory access request to a memory device associated with the determined physical address.
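An illustrative sketch of the memory-controller step: look up the physical address mapped to the request's logical address, then route the request to the memory device whose range contains that physical address. The map layout and address ranges are assumptions.

```python
# Assumed logical→physical map and per-device physical ranges.
LOGICAL_TO_PHYSICAL = {0x1000: 0x8000, 0x2000: 0xA000}
DEVICE_RANGES = [("mem-dev-0", 0x8000, 0x9FFF),
                 ("mem-dev-1", 0xA000, 0xBFFF)]

def route_request(logical_addr):
    """Translate a logical address and route to the owning memory device."""
    phys = LOGICAL_TO_PHYSICAL[logical_addr]
    for device, lo, hi in DEVICE_RANGES:
        if lo <= phys <= hi:
            return device, phys
    raise ValueError("unmapped physical address")

print(route_request(0x2000))  # → ('mem-dev-1', 40960)
```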