G06F9/5094

Dynamically updating logical identifiers of cores of a processor

In one embodiment, a processor includes a plurality of cores each including a first storage to store a physical identifier for the core and a second storage to store a logical identifier associated with the core; a plurality of thermal sensors to measure a temperature at a corresponding location of the processor; and a power controller including a dynamic core identifier logic to dynamically remap a first logical identifier associated with a first core to associate the first logical identifier with a second core, based at least in part on a temperature associated with the first core, the dynamic remapping to cause a first thread to be migrated from the first core to the second core transparently to an operating system. Other embodiments are described and claimed.
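The remapping described above can be sketched in a few lines. This is a hypothetical illustration of dynamic logical-to-physical core remapping driven by per-core temperature; the function name, data shapes, and the 85 °C threshold are assumptions, not the patented implementation.

```python
def remap_hot_cores(logical_to_physical, temps, hot_threshold=85.0):
    """Return a new logical->physical map where any logical ID bound to a
    core hotter than hot_threshold moves to the coolest idle core.

    logical_to_physical: {logical_id: physical_id}
    temps: {physical_id: temperature_celsius} for every core
    """
    mapping = dict(logical_to_physical)
    in_use = set(mapping.values())
    idle = [c for c in temps if c not in in_use]
    for lid, phys in sorted(mapping.items()):
        if temps[phys] > hot_threshold and idle:
            # Pick the coolest spare core; the OS keeps seeing the same
            # logical ID, so the migration is transparent to it.
            coolest = min(idle, key=temps.get)
            if temps[coolest] < temps[phys]:
                idle.remove(coolest)
                idle.append(phys)
                mapping[lid] = coolest
    return mapping
```

For example, with logical core 0 on a 92 °C physical core and an idle 55 °C core available, `remap_hot_cores({0: 0, 1: 1}, {0: 92.0, 1: 70.0, 2: 55.0, 3: 60.0})` rebinds logical 0 to physical core 2 while logical 1 stays put.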

Energy-aware computing system
11567561 · 2023-01-31

An energy-aware system is provided. The system includes an energy harvester adapted to supply harvested energy as an output for storage at an energy storage; and a scheduler, the scheduler being made up of, at least in part, hardware of the energy-aware system, the scheduler operable to schedule execution of operations performed by the energy-aware system, wherein the scheduler is configured to: determine if a current voltage level at the energy storage is higher than a start voltage level; and cause initiation of execution of at least a portion of one of the operations when the start voltage level of the one of the operations is lower than or equal to the current voltage level.
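The scheduling check described above reduces to a simple comparison of each operation's start voltage against the current voltage at the energy storage. A minimal sketch, assuming an illustrative `Operation` record (the field names are not from the patent):

```python
from collections import namedtuple

# Illustrative record: each operation declares the minimum stored voltage
# it needs before execution may begin.
Operation = namedtuple("Operation", ["name", "start_voltage"])

def runnable_operations(current_voltage, operations):
    """Return the operations whose start voltage level is lower than or
    equal to the current voltage level at the energy storage."""
    return [op for op in operations if op.start_voltage <= current_voltage]
```

With 2.5 V harvested, `runnable_operations(2.5, [Operation("sense", 1.8), Operation("transmit", 3.0)])` admits only the low-voltage sensing operation.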

Platform slicing of central processing unit (CPU) resources

Examples herein relate to assigning, by a system agent of a central processing unit (CPU), an operating frequency to a core group based on a priority level of the core group while avoiding throttling of the system agent. Avoiding throttling of the system agent can include maintaining a minimum performance level of the system agent. A minimum performance level of the system agent can be based on a minimum operating frequency. Assigning, by a system agent of a central processing unit, an operating frequency to a core group based on a priority level of the core group while avoiding throttling of the system agent can avoid a thermal limit of the CPU. Avoiding a thermal limit of the CPU can include adjusting the operating frequency of the core group in response to performance indicators of the CPU. A performance indicator can indicate that CPU utilization corresponds to the Thermal Design Point (TDP).
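The allocation described above can be pictured as reserving the system agent's minimum frequency first and then granting core-group requests in priority order from the remaining budget. The following is a sketch under assumed data shapes and budget semantics, not the patented method:

```python
def assign_frequencies(groups, budget_mhz, system_agent_min_mhz):
    """groups: list of (name, priority, requested_mhz); higher priority wins.

    Reserve the system agent's minimum operating frequency so it is never
    throttled, then grant each group's request from what remains.
    """
    remaining = budget_mhz - system_agent_min_mhz
    plan = {"system_agent": system_agent_min_mhz}
    for name, _prio, req in sorted(groups, key=lambda g: -g[1]):
        grant = min(req, max(remaining, 0))
        plan[name] = grant
        remaining -= grant
    return plan
```

With a 4500 MHz budget and a 1000 MHz system-agent floor, a high-priority group requesting 2000 MHz is fully granted while a low-priority group absorbs only the leftover 1500 MHz.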

CARBON FOOTPRINT CLIMATE IMPACT SCORES FOR DATACENTER WORKLOADS
20230236904 · 2023-07-27

The technology described herein is directed towards determining a datacenter's power consumption of its devices at the workload level, from which an objective carbon footprint impact score can be determined. Devices can include servers, network devices such as switches, and storage devices. For a group of workloads at a location, workload power consumption values can be determined based on collected power-related workload metrics data. The power consumption values are used in determining per-workload carbon footprint values for the workloads based on the location. One or more actions can be taken to modify the respective carbon footprint values, e.g., moving a workload to a different location, changing device hardware, and so on.
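The per-workload scoring above amounts to converting measured workload power into energy and weighting it by the location's grid carbon intensity. A minimal sketch; the intensity figures and data shapes are illustrative assumptions:

```python
def carbon_footprint_kg(power_watts, hours, grid_kg_per_kwh):
    """Carbon footprint = energy (kWh) x location carbon intensity."""
    energy_kwh = power_watts * hours / 1000.0
    return energy_kwh * grid_kg_per_kwh

def score_workloads(workloads, intensity_by_location):
    """workloads: list of (name, location, avg_watts, hours).
    intensity_by_location: {location: kg CO2 per kWh} (assumed figures).
    """
    return {
        name: carbon_footprint_kg(watts, hours, intensity_by_location[loc])
        for name, loc, watts, hours in workloads
    }
```

Because the score depends on the location's intensity, moving a workload to a lower-intensity region, as the abstract suggests, directly lowers its score.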

TEMPERATURE CONTROL METHOD, MEMORY STORAGE APPARATUS, AND MEMORY CONTROL CIRCUIT UNIT
20230021668 · 2023-01-26

A temperature control method, a memory storage apparatus, and a memory control circuit unit are disclosed. The method includes: detecting a system parameter of the memory storage apparatus, and the system parameter reflects wear of a rewritable non-volatile memory module in the memory storage apparatus; determining a temperature control threshold value according to the system parameter; and performing a temperature reducing operation in response to a temperature of the memory storage apparatus reaching the temperature control threshold value to reduce the temperature of the memory storage apparatus.
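The wear-dependent threshold described above can be sketched as a mapping from module wear to a throttle temperature: the more worn the rewritable non-volatile memory module, the earlier the temperature-reducing operation triggers. The linear mapping and the 85 °C/70 °C endpoints are illustrative assumptions:

```python
def temperature_threshold(wear_ratio, fresh_limit_c=85.0, worn_limit_c=70.0):
    """Map wear (0.0 = new module, 1.0 = end of rated life) to the
    temperature control threshold; worn modules throttle sooner."""
    wear_ratio = min(max(wear_ratio, 0.0), 1.0)
    return fresh_limit_c - wear_ratio * (fresh_limit_c - worn_limit_c)

def should_reduce_temperature(current_temp_c, wear_ratio):
    """True once the apparatus temperature reaches the threshold."""
    return current_temp_c >= temperature_threshold(wear_ratio)
```

At 80 °C, a fresh module keeps running while a fully worn one (threshold 70 °C) already performs the temperature-reducing operation.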

WORKLOAD AWARE VIRTUAL PROCESSING UNITS
20230024130 · 2023-01-26 ·

A processing unit is configured differently based on an identified workload, and each configuration of the processing unit is exposed to software (e.g., to a device driver) as a different virtual processing unit. Using these techniques, a processing system is able to provide different configurations of the processing unit to support different types of workloads, thereby conserving system resources. Further, by exposing the different configurations as different virtual processing units, the processing system is able to use existing device drivers or other system infrastructure to implement the different processing unit configurations.
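One way to picture the exposure of configurations as virtual units is a table of named configurations that driver-facing code enumerates like distinct devices. This is a hedged sketch; the virtual-unit names and parameters are invented for illustration:

```python
class ProcessingUnit:
    # Each entry is one workload-specific configuration, exposed to
    # software as if it were a separate virtual processing unit.
    CONFIGS = {
        "vpu-graphics": {"compute_units": 8, "cache_kb": 512},
        "vpu-compute":  {"compute_units": 16, "cache_kb": 128},
    }

    def virtual_units(self):
        """Enumerate the virtual devices a driver would discover."""
        return sorted(self.CONFIGS)

    def configure(self, virtual_name):
        """Apply the configuration behind the chosen virtual unit."""
        return dict(self.CONFIGS[virtual_name])
```

Existing device-selection infrastructure can then pick "vpu-compute" for a compute workload without knowing that both names resolve to the same physical unit.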

SYSTEM AND METHODS FOR SERVER POWER MANAGEMENT
20230229216 · 2023-07-20 ·

A system and methods are provided for improving power efficiency of a data center, including: acquiring training data including power caps, utilization rates, and a measure of Service Level Agreement (SLA) compliance of one or more computer servers of the data center; creating a model for determining power caps according to measured utilization rates of the one or more computer servers, wherein the determined power caps, when applied to the one or more computer servers, reduce power consumption and meet the measure of SLA compliance; and applying the model, according to subsequent data received during a second operating period, to determine a power cap to apply to the one or more computer servers, wherein the subsequent data includes a subsequent utilization rate of the one or more computer servers.

Power aware load placement

Techniques are described for enabling a service provider to determine the power utilization of electrical lineups powering physical servers in a data center and place virtual machine instances into the physical servers based on the power utilization and a user-specified preference of a virtual machine instance type. In one embodiment, a computer-implemented method includes determining a power utilization for each lineup of a plurality of lineups that comprise a plurality of racks of physical servers, selecting a lineup of the plurality of lineups for the virtual machine instance based on the power utilizations for the plurality of lineups, selecting a virtual machine slot for the virtual machine instance from a plurality of candidate virtual machine slots of the physical servers of the lineup based on the user-specified preference, and causing the virtual machine slot of a physical server of the lineup to execute the virtual machine instance.
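The two selection steps in the method above (lineup by power utilization, then slot by instance-type preference) can be sketched as follows; the data shapes are assumptions for illustration:

```python
def place_instance(lineups, preferred_type):
    """lineups: {name: {"utilization": 0..1, "slots": [instance type, ...]}}.

    Visit lineups from least to most power-utilized, then return the first
    candidate slot matching the user-specified instance type, as
    (lineup_name, slot_index). Returns None when no slot matches.
    """
    for name in sorted(lineups, key=lambda n: lineups[n]["utilization"]):
        for i, slot_type in enumerate(lineups[name]["slots"]):
            if slot_type == preferred_type:
                return name, i
    return None
```

Given a heavily loaded lineup "A" and a lightly loaded lineup "B" that both offer an "m5" slot, placement lands on "B", balancing electrical load across lineups.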

DATA PLANE SCALABLE ARCHITECTURE FOR WIRELESS COMMUNICATION
20230019102 · 2023-01-19 ·

Embodiments of apparatus and method for data plane management are disclosed. In one example, an apparatus for both uplink and downlink communication can include a plurality of downlink clusters, each downlink cluster including a downlink cluster processor configured to process three or more downlink data layers. The apparatus can also include a plurality of uplink clusters, each uplink cluster including an uplink cluster processor configured to process three or more uplink data layers. The apparatus can further include a controller configured to scale the plurality of downlink clusters and configured to scale the plurality of uplink clusters. Scaling the plurality of downlink clusters and the plurality of uplink clusters can include activating or deactivating one or more clusters of the plurality of downlink clusters, the plurality of uplink clusters, or both the plurality of downlink clusters and the plurality of uplink clusters.
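The controller's scaling decision reduces to sizing the active cluster count to the offered data layers. A sketch, assuming each cluster processor handles a fixed number of layers (the abstract says three or more; four is an assumed value here):

```python
def clusters_needed(active_layers, layers_per_cluster=4, max_clusters=8):
    """Activate just enough clusters for the current layer count and
    deactivate the rest; the same rule applies per direction (uplink or
    downlink)."""
    needed = -(-active_layers // layers_per_cluster)  # ceiling division
    return min(needed, max_clusters)
```

Five active layers thus keep two clusters powered; when traffic drops to four layers, one cluster can be deactivated to save power.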

DEVICE, METHOD AND SYSTEM TO PROVIDE THREAD SCHEDULING HINTS TO A SOFTWARE PROCESS

Techniques and mechanisms for providing a thread scheduling hint to an operating system of a processor which comprises first cores and second cores. In an embodiment, the first cores are of a first type which corresponds to a first range of sizes, and the second cores are of a second type which corresponds to a second range of sizes smaller than the first range of sizes. A power control unit (PCU) of the processor is to detect that an inefficiency, of a first operational mode of the processor, would exist while an indication of an amount of power, to be available to the processor, is below a threshold. Based on the detecting, the PCU hints to an executing software process that a given core is to be included in, or omitted from, a pool of cores available for thread scheduling. The hint indicates the given core based on a relative prioritization of the first core type and the second core type.
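The hint logic above can be illustrated as a pool filter: when the indicated available power falls below the threshold, the PCU's hint prioritizes the smaller second-type cores and omits the larger first-type cores from the schedulable pool. The threshold semantics and core labels below are assumptions, not the claimed mechanism:

```python
def scheduling_hint(available_power_w, power_threshold_w, cores):
    """cores: list of (core_id, "big" | "small") pairs.

    Returns the set of core IDs hinted to the software process as
    available for thread scheduling.
    """
    if available_power_w < power_threshold_w:
        # Below the power threshold the first operational mode would be
        # inefficient: hint that only the small cores be scheduled.
        return {cid for cid, kind in cores if kind == "small"}
    # Otherwise every core stays in the pool.
    return {cid for cid, _ in cores}
```

At 5 W available against a 10 W threshold, only the small cores remain in the hinted pool; at 15 W, all four cores do.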