Patent classifications
G06F9/5094
SELECTIVE MULTITHREADED EXECUTION OF MEMORY TRAINING BY CENTRAL PROCESSING UNIT (CPU) SOCKETS
Embodiments described herein are generally directed to selective multithreaded execution of memory training by CPU sockets. In an example, a memory configuration and a current phase of execution of memory training for each of multiple CPU sockets of a computer system are received. Based on the memory configuration and the current phase of execution of each of the CPU sockets, an estimated power usage across all CPU sockets may be determined. Based on the estimated power usage and a power consumption threshold (e.g., PTAM or PA), performance of the current phase of execution of one or more CPU sockets may be selectively released for one or more channels of the one or more CPU sockets.
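The gating described above might be sketched as follows. All names, phase labels, per-phase power figures, and the greedy admission order are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: release per-socket memory-training phases only while
# the estimated total power stays under a budget (e.g., PTAM or PA).
PHASE_POWER_W = {"read_training": 12.0, "write_training": 15.0, "idle": 1.0}

def select_sockets_to_release(sockets, power_threshold_w):
    """sockets maps socket_id -> current training phase. Sockets whose
    phase fits under the remaining budget are released; others stay held."""
    released, estimated_power = [], 0.0
    # Greedily admit sockets, cheapest phase first.
    for socket_id, phase in sorted(sockets.items(),
                                   key=lambda kv: PHASE_POWER_W[kv[1]]):
        cost = PHASE_POWER_W[phase]
        if estimated_power + cost <= power_threshold_w:
            estimated_power += cost
            released.append(socket_id)
    return released

# Example: under a 30 W budget, only the two read-training sockets fit.
print(select_sockets_to_release(
    {0: "read_training", 1: "write_training", 2: "read_training"}, 30.0))
# → [0, 2]
```

Held sockets would be re-evaluated on the next pass once released sockets advance to a cheaper phase.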
Method and devices for processing sensor data by applying one or more processing pipelines to the sensor data
In one embodiment, the method includes: obtaining, by a first processing device, energy demand data representative of the energy consumption of respective tasks of a processing pipeline; obtaining, by the first processing device, battery availability data representative of the available energy of the batteries of other respective processing devices; for respective tasks of the processing pipeline, selecting, by the first processing device, one of the processing devices for executing the task, as a function of the energy demand data and the battery availability data; and controlling, by the first processing device, the execution of the respective tasks on the selected processing devices.
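A minimal sketch of the selection step, assuming one particular policy (pick the eligible device with the most battery headroom, decrementing as tasks are assigned); the device and task names are invented for illustration:

```python
def assign_tasks(energy_demand, battery_available):
    """Map each pipeline task to a device with enough remaining battery,
    preferring the device with the most headroom."""
    remaining = dict(battery_available)
    assignment = {}
    for task, demand in energy_demand.items():
        candidates = [d for d, b in remaining.items() if b >= demand]
        if not candidates:
            raise RuntimeError(f"no device has enough energy for {task}")
        device = max(candidates, key=lambda d: remaining[d])
        remaining[device] -= demand          # account for the assigned load
        assignment[task] = device
    return assignment

# dev_a (4.0 J) cannot run the 5.0 J detect task, so both land on dev_b.
print(assign_tasks({"filter": 2.0, "detect": 5.0},
                   {"dev_a": 4.0, "dev_b": 10.0}))
# → {'filter': 'dev_b', 'detect': 'dev_b'}
```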
Cognitive processing resource allocation
A processor may run a background process to identify a first task being initiated by a first user on a device, where the first task is associated with a first application. The processor may identify the first user of the device. The processor may analyze one or more interactions of the first user associated with the first application on the device. The processor may allocate, based at least in part on identification of the first user, identification of the first task, or analysis of the one or more interactions of the first user, computing resources to one or more hardware components on the device.
INTELLIGENT SELECTION OF OPTIMIZATION METHODS IN HETEROGENEOUS ENVIRONMENTS
Intelligent selection of optimization methods in heterogeneous environments is described. In some embodiments, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: identify a context; rank a plurality of optimization methods based upon the context; and execute at least a subset of the ranked optimization methods.
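The rank-then-execute-a-subset flow might look like the following sketch; the contexts, method names, and score table are illustrative assumptions rather than anything specified by the abstract:

```python
# Hypothetical context -> per-method scores for an IHS.
CONTEXT_SCORES = {
    "on_battery": {"cpu_throttle": 3, "gpu_offload": 1, "prefetch": 0},
    "ac_power":   {"cpu_throttle": 0, "gpu_offload": 3, "prefetch": 2},
}

def rank_methods(context):
    """Rank optimization methods from most to least suitable for a context."""
    scores = CONTEXT_SCORES[context]
    return sorted(scores, key=scores.get, reverse=True)

def execute_top(context, k=2):
    """Run only the top-k ranked methods (execution itself is elided here)."""
    return rank_methods(context)[:k]

print(execute_top("on_battery"))  # → ['cpu_throttle', 'gpu_offload']
```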
TECHNIQUES FOR DISTRIBUTED PROCESSING TASK PORTION ASSIGNMENT
Various embodiments are generally directed to techniques for assigning portions of a task among individual cores of one or more processor components of each processing device of a distributed processing system. An apparatus to assign processor component cores to perform task portions includes a processor component; an interface to couple the processor component to a network to receive data that indicates available cores of base and subsystem processor components of processing devices of a distributed processing system, the subsystem processor components made accessible on the network through the base processor components; and a core selection component for execution by the processor component to select cores from among the available cores to execute instances of task portion routines of a task based on a selected balance point between compute time and power consumption needed to execute the instances of the task portion routines. Other embodiments are described and claimed.
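The "selected balance point between compute time and power consumption" suggests a weighted cost over candidate cores. A sketch under that assumption (the core tuples and the single-knob `alpha` weighting are illustrative):

```python
def select_cores(available_cores, n_portions, alpha=0.5):
    """Pick cores for task portions by minimizing a weighted cost:
    alpha weights compute time, (1 - alpha) weights power draw.
    Each core is a (core_id, time_s, power_w) tuple; alpha is the
    balance-point knob."""
    def cost(core):
        _, time_s, power_w = core
        return alpha * time_s + (1 - alpha) * power_w
    ranked = sorted(available_cores, key=cost)
    return [core_id for core_id, _, _ in ranked[:n_portions]]

# A fast-but-hungry base core vs. two slow-but-frugal subsystem cores.
cores = [("base0", 1.0, 20.0), ("sub0", 4.0, 3.0), ("sub1", 3.5, 4.0)]
print(select_cores(cores, 2, alpha=0.2))  # power-leaning → ['sub0', 'sub1']
print(select_cores(cores, 1, alpha=1.0))  # time-only → ['base0']
```

Sliding `alpha` toward 1 favors the fast base-processor cores; toward 0 it favors the low-power subsystem cores reached through them.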
Methods and apparatus to execute a workload in an edge environment
Methods and apparatus to execute a workload in an edge environment are disclosed. An example apparatus includes a node scheduler to accept a task from a workload scheduler, the task including a description of a workload and tokens, a workload executor to execute the workload, the node scheduler to access a result of execution of the workload and provide the result to the workload scheduler, and a controller to access the tokens and distribute at least one of the tokens to at least one provider, the provider to provide a resource to the apparatus to execute the workload.
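The controller's token distribution might be sketched as a proportional settlement step after execution; the provider names and the proportional-share policy are assumptions for illustration:

```python
def settle_task(tokens, providers):
    """Distribute a task's tokens among resource providers in proportion
    to their contribution (integer shares; any remainder is withheld)."""
    total = sum(providers.values())
    return {p: tokens * share // total for p, share in providers.items()}

# provider_a supplied 3 units of resource, provider_b supplied 1.
print(settle_task(100, {"provider_a": 3, "provider_b": 1}))
# → {'provider_a': 75, 'provider_b': 25}
```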
MULTIVARIABLE CONTROLLER FOR COORDINATED CONTROL OF COMPUTING DEVICES AND BUILDING INFRASTRUCTURE IN DATA CENTERS OR OTHER LOCATIONS
A method includes obtaining first information associated with control of multiple computing devices, where the first information relates to possible changes to processing tasks performed by the computing devices. The method also includes obtaining second information associated with building infrastructure operations performed by one or more building systems of one or more buildings that house the computing devices. The method further includes identifying one or more changes to one or more of the computing devices using the first and second information. In addition, the method includes outputting third information identifying the one or more changes.
Adaptive memory performance control by thread group
A device implementing adaptive memory performance control by thread group may include a memory and at least one processor. The at least one processor may be configured to execute a group of threads on one or more cores. The at least one processor may be configured to monitor a plurality of metrics corresponding to the group of threads executing on one or more cores. The metrics may include, for example, a core stall ratio and/or a power metric. The at least one processor may be configured to determine, based at least in part on the plurality of metrics, a memory bandwidth constraint with respect to the group of threads executing on the one or more cores. The at least one processor may be configured to, in response to determining the memory bandwidth constraint, increase a memory performance corresponding to the group of threads executing on the one or more cores.
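The decision described (high core stall ratio plus acceptable power metric implies a memory-bandwidth constraint, which triggers a performance step-up) might be sketched as follows; the threshold value and metric names are illustrative assumptions:

```python
def adjust_memory_performance(metrics, stall_threshold=0.3, perf_level=0):
    """If a thread group's core-stall ratio exceeds the threshold while
    power headroom remains, treat the group as memory-bandwidth
    constrained and step up the memory performance level."""
    constrained = (metrics["core_stall_ratio"] > stall_threshold
                   and metrics["power_w"] < metrics["power_budget_w"])
    return perf_level + 1 if constrained else perf_level

# Stalling heavily with power to spare: raise memory performance.
print(adjust_memory_performance(
    {"core_stall_ratio": 0.45, "power_w": 8.0, "power_budget_w": 12.0}))
# → 1
```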
ALLOCATING MEMORY AND REDIRECTING MEMORY WRITES IN A CLOUD COMPUTING SYSTEM BASED ON TEMPERATURE OF MEMORY MODULES
Systems and methods for allocating memory and redirecting data writes based on temperature of memory modules in a cloud computing system are described. A method includes maintaining temperature profiles for a first plurality of memory modules and a second plurality of memory modules. The method includes automatically redirecting a first request to write to memory from a first compute entity being executed by a first processor to a selected one of a first plurality of memory chips, whose temperature does not meet or exceed a temperature threshold, included in at least the first plurality of memory modules, and automatically redirecting a second request to write to memory from a second compute entity being executed by a second processor to a selected one of a second plurality of memory chips, whose temperature does not meet or exceed the temperature threshold, included in at least the second plurality of memory modules.
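The redirection decision reduces to choosing a module below the threshold. A sketch with one assumed tie-breaking policy (prefer the coolest eligible module; DIMM names and temperatures are invented):

```python
def redirect_write(modules, temp_threshold_c):
    """Route a memory write to the coolest module whose temperature does
    not meet or exceed the threshold; return None if none is eligible."""
    eligible = {m: t for m, t in modules.items() if t < temp_threshold_c}
    if not eligible:
        return None
    return min(eligible, key=eligible.get)

# dimm0 is at or above 70 °C, so the write goes to the coolest remaining one.
profile = {"dimm0": 71.0, "dimm1": 58.5, "dimm2": 63.0}
print(redirect_write(profile, 70.0))  # → 'dimm1'
```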
DATA TRANSMISSION METHOD, WEARABLE APPARATUS, AND STORAGE MEDIUM
A data transmission method is provided. The method is applied to a wearable apparatus configured with a first operating system and a second operating system, where the power consumption of the first operating system is lower than that of the second operating system. The method includes: obtaining a type of data to be transmitted; detecting current screen state information of the wearable apparatus; determining target data according to the current screen state information of the wearable apparatus and the type of data to be transmitted; and transmitting the target data from the first operating system of the wearable apparatus to the second operating system of the wearable apparatus.
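The target-data determination might be sketched as a policy table keyed on screen state and data type. The data types and the buffer-while-screen-off policy are illustrative assumptions:

```python
# Hypothetical policy: what the low-power OS forwards to the high-power OS.
POLICY = {
    # (screen_on, data_type) -> transmit now?
    (True,  "notification"): True,
    (True,  "sensor_batch"): True,
    (False, "notification"): True,   # wake the second OS for notifications
    (False, "sensor_batch"): False,  # buffer bulk data until screen is on
}

def select_target_data(screen_on, pending):
    """Return the pending items the first (low-power) OS should transmit
    to the second (high-power) OS given the current screen state."""
    return [item for dtype, item in pending if POLICY[(screen_on, dtype)]]

pending = [("notification", "msg#1"), ("sensor_batch", "hr-samples")]
print(select_target_data(False, pending))  # → ['msg#1']
```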