Patent classifications
G06F9/5094
REDUCING THE ENVIRONMENTAL IMPACT OF DISTRIBUTED COMPUTING
A process includes obtaining a workload and a set of candidate computing resources and predicting amounts of carbon emissions attributable to executing the workload on different members of the set of candidate computing resources. The process also includes predicting measures of computing performance achieved by the different members of the set of candidate computing resources in executing the workload, and computing a set of scores based on the amounts of carbon emissions and the measures of computing performance. The process also includes orchestrating the workload based on the scores.
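The scoring-and-orchestration step described above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the resource names, the carbon/performance weighting, and the score formula are all assumptions made for the example.

```python
# Hypothetical scoring step: combine predicted carbon emissions with
# predicted performance and orchestrate onto the best-scoring candidate.

def score(carbon_kg: float, perf: float, carbon_weight: float = 0.5) -> float:
    """Lower predicted carbon and higher predicted performance both raise the score."""
    return carbon_weight * (1.0 / carbon_kg) + (1.0 - carbon_weight) * perf

def orchestrate(candidates: dict) -> str:
    """candidates maps resource name -> (predicted carbon in kg, predicted performance)."""
    return max(candidates, key=lambda name: score(*candidates[name]))

# Illustrative candidate set: a low-carbon region wins despite lower performance.
placements = {
    "us-east-gpu": (12.0, 0.9),
    "nordic-gpu": (3.0, 0.7),
}
best = orchestrate(placements)
```

A real orchestrator would feed the score into a scheduler rather than a single `max`, but the shape of the decision is the same.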
DYNAMIC CROSS-ARCHITECTURE APPLICATION ADAPTION
Embodiments described herein are generally directed to improving performance of high-performance computing (HPC) or artificial intelligence (AI) workloads on cluster computer systems. According to one embodiment, a section of a high-performance computing (HPC) or artificial intelligence (AI) workload executing on a cluster computer system is identified as significant to a figure of merit (FOM) of the workload. An alternate placement among multiple heterogeneous compute resources of a node of the cluster computer system is determined for a portion of the section currently executing on a given compute resource of the multiple heterogeneous compute resources. After predicting an improvement to the FOM based on the alternate placement, the portion is relocated to the alternate placement.
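The relocation decision above can be sketched in a few lines. The placement names and the FOM predictions are illustrative assumptions; the point is only that the portion moves when, and only when, an alternate placement is predicted to improve the figure of merit.

```python
# Hypothetical relocation rule: keep the current placement unless an
# alternative placement has a strictly better predicted FOM.

def maybe_relocate(current_fom: float, predicted_fom: dict, current: str) -> str:
    """Return the placement with the best predicted figure of merit."""
    best = max(predicted_fom, key=predicted_fom.get)
    return best if predicted_fom[best] > current_fom else current

# Illustrative: the GPU placement is predicted to improve the FOM, so relocate.
placement = maybe_relocate(
    current_fom=1.0,
    predicted_fom={"cpu": 1.0, "gpu": 1.4, "fpga": 1.2},
    current="cpu",
)
```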
Hardware accelerated compute kernels for heterogeneous compute environments
A request to perform a compute task is received. A plurality of compute processor resources eligible to perform the compute task is identified, wherein the plurality of compute processor resources includes two or more of the following: a field-programmable gate array, an application-specific integrated circuit, a graphics processing unit, or a central processing unit. Based on an optimization metric, one of the compute processor resources is dynamically selected to perform the compute task.
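The dynamic selection step can be sketched as a lookup keyed on the optimization metric. The resource table, metric names, and cost numbers below are illustrative assumptions, not figures from the patent.

```python
# Hypothetical per-resource cost model for a heterogeneous pool.
RESOURCES = {
    "fpga": {"latency_ms": 2.0, "energy_j": 0.5},
    "gpu":  {"latency_ms": 1.0, "energy_j": 3.0},
    "cpu":  {"latency_ms": 8.0, "energy_j": 1.0},
}

def select_resource(metric: str) -> str:
    """Dynamically pick the eligible resource that minimizes the chosen metric."""
    return min(RESOURCES, key=lambda r: RESOURCES[r][metric])

fastest = select_resource("latency_ms")
greenest = select_resource("energy_j")
```

Swapping the optimization metric changes the winner without touching the selection logic, which is the essence of the metric-driven dispatch described above.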
System, apparatus and method for providing hardware state feedback to an operating system in a heterogeneous processor
In one embodiment, a processor includes a power controller having a resource allocation circuit. The resource allocation circuit may: receive a power budget for a first core and at least one second core and scale the power budget based at least in part on at least one energy performance preference value to determine a scaled power budget; determine a first maximum operating point for the first core and a second maximum operating point for the at least one second core based at least in part on the scaled power budget; determine a first efficiency value for the first core based at least in part on the first maximum operating point for the first core and a second efficiency value for the at least one second core based at least in part on the second maximum operating point for the at least one second core; and report a hardware state change to an operating system scheduler based on the first efficiency value and the second efficiency value. Other embodiments are described and claimed.
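A simplified model of that flow is sketched below. The linear EPP scaling, the per-core MHz-per-watt figures, and the budget split are all illustrative assumptions; the claimed hardware behavior is not specified at this level of detail.

```python
# Hypothetical model: scale the power budget by an energy-performance
# preference (EPP), derive per-core maximum operating points, compute
# efficiency values, and derive a hint for the OS scheduler.

def scaled_budget(budget_w: float, epp: float) -> float:
    """epp in [0, 1]: 0 favors performance, 1 favors efficiency (assumed linear)."""
    return budget_w * (1.0 - 0.5 * epp)

def max_operating_point(share_w: float, mhz_per_watt: float) -> float:
    return share_w * mhz_per_watt

def efficiency(mhz: float, watts: float) -> float:
    return mhz / watts

budget = scaled_budget(10.0, 0.4)                      # 8.0 W after EPP scaling
big_share, small_share = budget * 0.6, budget * 0.4
big_eff = efficiency(max_operating_point(big_share, 200.0), big_share)
small_eff = efficiency(max_operating_point(small_share, 350.0), small_share)
hint_to_os = "prefer_small" if small_eff > big_eff else "prefer_big"
```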
COMPUTING POWER SHARING-RELATED EXCEPTION REPORTING AND HANDLING METHODS AND DEVICES, STORAGE MEDIUM, AND TERMINAL APPARATUS
Provided are a method and an apparatus for reporting and handling an exception in computing power sharing, a storage medium, and a terminal device. The method for reporting an exception in computing power sharing includes: detecting a current hardware state and a current battery state; and reporting an exception to a network unit when the hardware state or the battery state reaches a preset exception threshold, or when a change in the hardware state or the battery state reaches a preset reporting threshold. The method for handling an exception in computing power sharing includes: receiving an exception reported from a cooperative computing terminal; determining a total workload assigned to the cooperative computing terminal and a remaining workload of the cooperative computing terminal; and determining, based on the exception and the remaining workload, to reassign the remaining workload or the total workload.
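The two-part reporting rule can be sketched as a single predicate: report when the state crosses an absolute exception threshold, or when its change since the last report crosses a reporting threshold. The threshold values in the example are illustrative assumptions.

```python
# Hypothetical reporting rule for a cooperative computing terminal.

def should_report(state: float, last_reported: float,
                  exception_threshold: float, change_threshold: float) -> bool:
    """True if the state itself, or its change since the last report,
    reaches the corresponding preset threshold."""
    return (state >= exception_threshold
            or abs(state - last_reported) >= change_threshold)

# e.g. device temperature: absolute exception limit 85, report swings >= 10.
report = should_report(state=70.0, last_reported=55.0,
                       exception_threshold=85.0, change_threshold=10.0)
```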
Method and apparatus for providing thermal wear leveling
Exemplary embodiments provide thermal wear spreading among a plurality of thermal die regions in an integrated circuit, or among dies, by using die region wear-out data that represents a cumulative amount of time each of a number of thermal die regions in one or more dies has spent at a particular temperature level. In one example, die region wear-out data is stored in persistent memory and is accrued over the life of each respective thermal region, so that long-term monitoring of temperature levels in the various die regions is used to spread thermal wear among them. In one example, thermal wear is spread by controlling task execution, such as thread execution among one or more processing cores or dies, and/or data access operations for a memory.
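The wear-spreading policy can be sketched as simple bookkeeping plus a placement choice: accrue per-region time-at-temperature, then steer the next task toward the least-worn region. The in-memory dictionary below is an illustrative stand-in for the persisted die-region wear-out data.

```python
# Hypothetical wear-leveling sketch: cumulative seconds spent hot, per region.
wear_seconds = {"region0": 0.0, "region1": 0.0, "region2": 0.0}

def accrue(region: str, seconds_hot: float) -> None:
    """Accrue time spent at the monitored temperature level for a region."""
    wear_seconds[region] += seconds_hot

def pick_region() -> str:
    """Place the next task on the region with the least accrued thermal wear."""
    return min(wear_seconds, key=wear_seconds.get)

accrue("region0", 120.0)
accrue("region1", 40.0)
next_region = pick_region()   # the untouched region is preferred
```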
READINESS STATES FOR PARTITIONED INTERNAL RESOURCES OF A MEMORY CONTROLLER
Apparatus, systems, and methods are presented for controlling readiness states for partitioned internal resources of a memory controller. The controller may include at least one internal hardware resource that is partitioned so that readiness states for individual partitions of the internal hardware resource are individually controllable. The controller may determine a value for a parameter that corresponds to upcoming workload for the controller. The controller may compare the value to a set of thresholds. The controller may control the readiness states for the partitions of the internal hardware resource based on the comparison of the parameter to the set of thresholds.
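The threshold comparison can be sketched as a banding function: the predicted-workload parameter is compared against an ordered set of thresholds, and the number of partitions held in a ready state grows with the band it falls into. The threshold values and the one-partition floor are illustrative assumptions.

```python
# Hypothetical readiness-state policy for a partitioned internal resource.

def ready_partitions(workload: float, thresholds: list, total: int) -> int:
    """Wake one additional partition for each threshold the workload
    parameter reaches, always keeping at least one partition ready."""
    crossed = sum(1 for t in thresholds if workload >= t)
    return min(1 + crossed, total)

# Illustrative: workload 75 crosses the 20 and 50 thresholds -> 3 partitions ready.
n = ready_partitions(workload=75.0, thresholds=[20.0, 50.0, 90.0], total=4)
```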
Processor core power management in a virtualized environment
A hypervisor executing on a processor device of a computing host, the processor device having a plurality of processor cores, receives, from a guest operating system of a virtual machine, a request to set a virtual central processing unit (VCPU) of the virtual machine to a first requested P-state level of a plurality of P-state levels. Based on the request, the hypervisor associates the VCPU with a first processor core having a P-state that corresponds to the first requested P-state level.
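The association step can be sketched as matching the requested P-state against the physical cores' current P-states. The core inventory and P-state labels below are illustrative assumptions.

```python
# Hypothetical inventory: physical core id -> current P-state.
core_pstates = {0: "P0", 1: "P2", 2: "P1", 3: "P2"}

def place_vcpu(requested_pstate: str) -> int:
    """Associate the VCPU with a core whose P-state matches the request."""
    for core, pstate in core_pstates.items():
        if pstate == requested_pstate:
            return core
    raise LookupError(f"no core currently at {requested_pstate}")

core = place_vcpu("P2")   # first matching core
```

A real hypervisor would also weigh load balance and affinity; the sketch shows only the P-state-matching association the abstract describes.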
Memory system
A memory system includes a first nonvolatile memory, a first processor, and a second processor. The first processor sets a first assignment amount. The second processor performs access to the first nonvolatile memory, calculates a consumed amount corresponding to an operation time of the first nonvolatile memory during the access, and transmits a notification to the first processor when the consumed amount reaches the first assignment amount.
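The budget-and-notify exchange can be sketched as follows: the second processor accumulates operation time per access and flags a notification once the assignment amount is consumed. The microsecond figures are illustrative assumptions.

```python
# Hypothetical sketch of the consumed-amount bookkeeping on the second processor.

def run_accesses(assignment_us: float, access_times_us: list) -> tuple:
    """Return (consumed_us, index of the access that triggered the
    notification to the first processor, or None if never triggered)."""
    consumed = 0.0
    for i, op_time in enumerate(access_times_us, start=1):
        consumed += op_time
        if consumed >= assignment_us:
            return consumed, i   # notification sent to the first processor
    return consumed, None

consumed, notified_at = run_accesses(100.0, [30.0, 40.0, 50.0])
```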
Migrating quantum services from quantum computing devices to quantum simulators
Migration of quantum services from quantum computing devices to quantum simulators is disclosed herein. In one example, a quantum computing device executes a migration service that receives a system stress indicator from a system monitor that tracks a status of the quantum computing device and/or a status of qubits maintained by the quantum computing device. The migration service determines, based on the system stress indicator, that a quantum service running on the quantum computing device is to be migrated. Upon determining that the quantum service is to be migrated, the migration service retrieves a QASM file that contains quantum programming instructions defining the quantum service. The QASM file is then transmitted to a quantum simulator running on a classical computing device for failover execution. In some examples, the classical computing device then executes a simulated quantum service within the quantum simulator based on the QASM file.
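The migration trigger can be sketched as a threshold check followed by a hand-off of the QASM source to a simulator endpoint. The stress limit, QASM text, and the simulator stub are illustrative assumptions; a real deployment would transmit the file over the network to a quantum simulator on a classical host.

```python
# Hypothetical migration-service sketch.
QASM = "OPENQASM 2.0;\nqreg q[1];\nh q[0];"

def maybe_migrate(stress: float, limit: float, simulator) -> str:
    """Migrate the quantum service for failover execution when the
    system stress indicator exceeds the limit; return where it runs."""
    if stress > limit:
        simulator(QASM)            # transmit the QASM file for failover
        return "simulator"
    return "quantum_device"

received = []
target = maybe_migrate(stress=0.9, limit=0.7, simulator=received.append)
```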