Patent classifications
G06F9/4893
MASTER ELECTRONIC APPARATUS, ELECTRONIC APPARATUS AND CONTROLLING METHOD THEREOF
A master electronic apparatus, an electronic apparatus, and a controlling method thereof, where the master electronic apparatus includes a communication interface and a processor. The processor receives first data and second data regarding predicted power consumption amounts corresponding to respective tasks of a first electronic apparatus and a second electronic apparatus, calculates summed-up values of the predicted power consumption amounts for respective times, and compares the summed-up values with instantaneous power amount limits for those times. When the summed-up values are smaller than the instantaneous power amount limits, the processor transmits a task approval signal to the second electronic apparatus; when it identifies a time at which a summed-up value is greater than or equal to the instantaneous power amount limit, it transmits, based on priorities, a control signal controlling an operation in the identified time to at least one of the first electronic apparatus and the second electronic apparatus.
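The check described above can be sketched as follows; the function name and the dictionary-based representation of per-slot predictions and limits are assumptions for illustration, not taken from the patent.

```python
def plan_tasks(predictions_a, predictions_b, limits):
    """Sum the two apparatuses' predicted power draw per time slot and
    compare each sum against that slot's instantaneous power limit.

    predictions_a, predictions_b, limits: dicts mapping time slot -> watts.
    Returns ('approve', []) when every slot is under its limit, otherwise
    ('control', [slots at or over the limit]) so the caller can throttle
    operation in those slots based on task priorities.
    """
    over = [t for t in sorted(limits)
            if predictions_a.get(t, 0) + predictions_b.get(t, 0) >= limits[t]]
    return ('approve', []) if not over else ('control', over)
```

For example, with limits `{0: 400, 1: 500}` and per-slot predictions summing to 250 W and 550 W, slot 1 would trigger the control signal.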
Power management of components within a storage management system
As the volume of data under management expands rapidly, so do the costs associated with storing and managing that data on secondary storage devices. The illustrative approach improves the information management system by delaying certain tasks that meet a set of criteria until a specified threshold is met. The system receives a request for a task to be performed on a set of data stored on secondary storage devices. A power management module determines whether the task satisfies a set of criteria for delayed execution, queues the task, and, when a specified threshold of queued tasks is met, powers up the necessary components to execute the tasks.
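The queue-until-threshold behavior can be sketched as below; the class name, the count-based threshold, and the `is_delayable` predicate are illustrative assumptions (the patent's criteria and threshold type are not specified here).

```python
class PowerManagementModule:
    """Toy sketch: hold delayable tasks in a queue until a count
    threshold is reached, then 'power up' the storage components
    (not modeled) and drain the queue in one batch."""

    def __init__(self, threshold, is_delayable):
        self.threshold = threshold          # queued-task count that triggers execution
        self.is_delayable = is_delayable    # predicate: does this task meet the delay criteria?
        self.queue = []
        self.executed = []

    def submit(self, task):
        if not self.is_delayable(task):
            self.executed.append(task)      # run immediately, no delay criteria met
            return
        self.queue.append(task)
        if len(self.queue) >= self.threshold:
            # here the real system would power up secondary storage devices
            self.executed.extend(self.queue)
            self.queue.clear()
```

Batching the delayed tasks means the secondary storage hardware is powered up once per batch instead of once per task, which is the power saving the abstract describes.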
Scheduler for amp architecture with closed loop performance and thermal controller
Systems and methods are disclosed for scheduling threads on a processor that has at least two different core types, such as an asymmetric multiprocessing system. Each core type can run at a plurality of selectable dynamic voltage and frequency scaling (DVFS) states. Threads from a plurality of processes can be grouped into thread groups. Execution metrics are accumulated for threads of a thread group and fed into a plurality of tunable controllers for the thread group. A closed loop performance control (CLPC) system determines a control effort for the thread group and maps the control effort to a recommended core type and DVFS state. A closed loop thermal and power management system can limit the control effort determined by the CLPC for a thread group, and limit the power, core type, and DVFS states for the system. Deferred interrupts can be used to increase performance.
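The mapping from a (thermally limited) control effort to a core type and DVFS state might look like the sketch below; the 0..1 effort scale, the 0.5 core-type cutoff, and the five DVFS states are invented thresholds for illustration, not values from the patent.

```python
def recommend(control_effort, thermal_cap=1.0):
    """Map a CLPC control effort in [0, 1] to (core type, DVFS state).

    The thermal/power management loop caps the effort first, so a hot
    system is steered toward efficiency cores and lower DVFS states.
    """
    effort = min(control_effort, thermal_cap)
    core = 'performance' if effort > 0.5 else 'efficiency'
    dvfs_state = int(effort * 4)            # illustrative: 5 states, 0 (lowest) .. 4
    return core, dvfs_state
```

Note how the same high control effort yields a performance-core recommendation when unconstrained, but an efficiency-core one once the thermal limiter clamps it.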
SYSTEM AND METHOD OF UTILIZING THERMAL PROFILES ASSOCIATED WITH WORKLOAD EXECUTING ON INFORMATION HANDLING SYSTEMS
In one or more embodiments, one or more systems, one or more methods, and/or one or more processes may determine first thermal attribute values associated with multiple information handling systems (IHSs) with respect to a period of time as the IHSs execute a first workload; determine multiple variance ranges respectively associated with the first thermal attributes; periodically determine second thermal attribute values associated with the IHSs as the IHSs execute a second workload; determine that a thermal attribute value of the second thermal attribute values exceeds a respective variance range of the variance ranges as a first information handling system (IHS) of the IHSs executes the second workload; generate an alert based at least on the thermal attribute value exceeding the respective variance range; and in response to the alert, transfer at least a portion of the second workload from the first IHS to a second IHS of the IHSs.
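The variance-range check that precedes the alert and workload transfer can be sketched as follows; the symmetric baseline-plus/minus-range representation and the function name are assumptions for illustration.

```python
def thermal_alerts(baseline, variance, current):
    """Return the IHSs whose current thermal attribute value falls
    outside its learned variance range.

    baseline: dict IHS -> first-workload thermal attribute value.
    variance: dict IHS -> allowed deviation from the baseline.
    current:  dict IHS -> value measured while the second workload runs.
    The caller would transfer workload away from any IHS returned here.
    """
    alerts = []
    for ihs, value in current.items():
        low = baseline[ihs] - variance[ihs]
        high = baseline[ihs] + variance[ihs]
        if not (low <= value <= high):
            alerts.append(ihs)
    return alerts
```

An IHS reading 70 against a baseline of 60 with a ±5 range would be flagged, while one reading 56 against 55 ± 5 would not.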
SELECTIVE MULTITHREADED EXECUTION OF MEMORY TRAINING BY CENTRAL PROCESSING UNIT (CPU) SOCKETS
Embodiments described herein are generally directed to selective multithreaded execution of memory training by CPU sockets. In an example, a memory configuration and a current phase of execution of memory training for each of multiple CPU sockets of a computer system are received. Based on the memory configuration and the current phase of execution of each of the CPU sockets, an estimated power usage across all CPU sockets may be determined. Based on the estimated power usage and a power consumption threshold (e.g., PTAM or PA), performance of the current phase of execution of one or more CPU sockets may be selectively released for one or more channels of the one or more CPU sockets.
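A greedy version of the selective-release decision can be sketched as below; the per-phase power table and all of its values are invented for illustration, as is the function name (the patent's power model and PTAM/PA thresholds are not detailed here).

```python
# Illustrative per-phase power costs in watts -- values are invented.
PHASE_POWER = {'read_training': 40, 'write_training': 55, 'idle': 5}

def sockets_to_release(socket_phases, power_limit):
    """Release each socket's current training phase only while the
    estimated total power stays within the consumption threshold.

    socket_phases: dict socket id -> current memory-training phase.
    Returns the sockets whose phase execution is released now; the
    rest would wait for a later scheduling pass.
    """
    released, total = [], 0
    for socket, phase in socket_phases.items():
        cost = PHASE_POWER[phase]
        if total + cost <= power_limit:
            released.append(socket)
            total += cost
    return released
```

With a 100 W limit, two 55 W write-training sockets cannot both run, but a 55 W and a 40 W phase can, which is the staggering the abstract describes.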
INTELLIGENT SELECTION OF OPTIMIZATION METHODS IN HETEROGENEOUS ENVIRONMENTS
Intelligent selection of optimization methods in heterogeneous environments is described. In some embodiments, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: identify a context; rank a plurality of optimization methods based upon the context; and execute at least a subset of the ranked optimization methods.
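The rank-then-execute-a-subset flow can be sketched as follows; the scorer-function representation of "context" and all names here are illustrative assumptions.

```python
def rank_and_select(context, methods, top_k=1):
    """Rank optimization methods against the identified context and
    return the top_k method names to execute.

    methods: list of (name, scorer) pairs, where scorer(context)
    returns a relevance score for this context (higher is better).
    """
    ranked = sorted(methods, key=lambda m: m[1](context), reverse=True)
    return [name for name, _ in ranked[:top_k]]
```

For instance, a battery-saving method would outrank a turbo method when the context reports the IHS is on battery, and the ordering would flip on AC power.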
Configuration management based on thermal state
The systems and methods manage thermal states of a device through user configuration of a client application on the device. They set thermal thresholds associated with the device, infer those thresholds from information gathered by the client application running on the device, and implement a stored policy associated with a violation of one of the thermal thresholds by one of the monitored thermal states.
ENHANCED POWER MANAGEMENT FOR SUPPORT OF PRIORITY SYSTEM EVENTS
Embodiments are generally directed to enhanced power management for support of priority system events. An embodiment of a system includes a processing element; a memory including a registry for information regarding one or more system events that are designated as priority events; a mechanism to track operation of events that require Turbo mode operation for execution; and a power control unit to implement a power management algorithm. The system is to maintain a first energy budget and a second, residual energy budget for operation in a Turbo power mode, and the power management algorithm is to determine whether to authorize execution of a detected system event in the Turbo power mode based on the second residual energy budget upon determining that the first energy budget is not sufficient for execution of the detected system event and that the detected system event is designated as a priority event. Priority designations for the priority events may include a first High Priority designation and a second Critical designation.
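The two-budget authorization rule can be sketched as below; the function signature and the string-based priority designations are assumptions for illustration.

```python
def authorize_turbo(priority, first_budget, residual_budget, cost):
    """Decide whether a detected event may run in Turbo mode.

    The first energy budget is spent whenever it covers the event's
    cost; the residual budget is touched only for events designated
    High Priority or Critical. Returns (approved, (first, residual)).
    """
    if first_budget >= cost:
        return True, (first_budget - cost, residual_budget)
    if priority in ('high', 'critical') and residual_budget >= cost:
        return True, (first_budget, residual_budget - cost)
    return False, (first_budget, residual_budget)
```

A non-priority event that exhausts the first budget is simply denied Turbo mode, while a Critical event with the same cost draws down the residual budget instead.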
NEURAL NETWORK POWER MANAGEMENT IN A MULTI-GPU SYSTEM
Systems, apparatuses, and methods for managing power consumption for a neural network implemented on multiple graphics processing units (GPUs) are disclosed. A computing system includes a plurality of GPUs implementing a neural network. In one implementation, the plurality of GPUs draw power from a common power supply. To prevent the power consumption of the system from exceeding a power limit for long durations, the GPUs coordinate the scheduling of tasks of the neural network. One or more first GPUs schedule their computation tasks so as not to overlap with the computation tasks of one or more second GPUs. In this way, the system spends less time consuming power in excess of a power limit, allowing the neural network to be implemented in a more power efficient manner.
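The non-overlapping schedule can be sketched as a simple staggering of compute bursts; the function name and the duration-dictionary representation are illustrative assumptions.

```python
def stagger(gpu_task_durations):
    """Assign each GPU's computation task a start time so that no two
    compute bursts overlap, keeping the instantaneous draw on the
    shared power supply to roughly one GPU's peak at a time.

    gpu_task_durations: dict GPU id -> task duration (time units).
    Returns a dict GPU id -> start time.
    """
    starts, t = {}, 0
    for gpu, duration in gpu_task_durations.items():
        starts[gpu] = t
        t += duration
    return starts
```

Fully serializing the bursts trades latency for peak-power headroom; a real scheduler would likely allow partial overlap up to the power limit rather than none at all.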
Software assisted power management
Embodiments include an apparatus comprising an execution unit coupled to a memory, a microcode controller, and a hardware controller. The microcode controller is to identify a global power and performance hint in an instruction stream that includes first and second instruction phases to be executed in parallel, identify a first local hint based on synchronization dependence in the first instruction phase, and use the first local hint to balance power consumption between the execution unit and the memory during parallel executions of the first and second instruction phases. The hardware controller is to use the global hint to determine an appropriate voltage level of a compute voltage and a frequency of a compute clock signal for the execution unit during the parallel executions of the first and second instruction phases. The first local hint includes a processing rate for the first instruction phase or an indication of the processing rate.