G06F9/5094

Resource management unit for capturing operating system configuration states and offloading tasks
11526380 · 2022-12-13

This disclosure describes methods, devices, systems, and procedures in a computing system for capturing a configuration state of an operating system executing on a central processing unit (CPU), and offloading resource-related tasks, based on the configuration state, to a resource management unit such as a system-on-chip (SoC). The resource management unit identifies a status of each resource based on the captured configuration state of the operating system. The resource management unit then processes tasks associated with the status of the resources, such as modifying a clock rate of a clocked component in the computing system. This can relieve the CPU of processing those tasks, thereby improving overall computing system performance and dynamics.
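As an illustration only (class, field, and threshold names below are assumptions, not the patent's implementation), the offloading idea can be sketched as a unit that reads a captured configuration snapshot and adjusts a clock rate without involving the CPU:

```python
# Hypothetical sketch: a resource management unit derives each resource's
# status from a captured OS configuration snapshot and handles the
# clock-rate task on behalf of the CPU.

from dataclasses import dataclass

@dataclass
class ResourceState:
    name: str
    active: bool
    utilization: float  # 0.0 .. 1.0

class ResourceManagementUnit:
    def __init__(self, base_clock_mhz: int):
        self.clock_mhz = base_clock_mhz

    def process(self, snapshot: list[ResourceState]) -> int:
        """Identify resource status from the snapshot, then scale the clock."""
        busy = [r for r in snapshot if r.active]
        if not busy:
            self.clock_mhz = 400          # all idle: drop to a floor rate
        elif max(r.utilization for r in busy) > 0.8:
            self.clock_mhz = 1600         # heavy load: raise the rate
        else:
            self.clock_mhz = 800          # moderate load: default rate
        return self.clock_mhz

rmu = ResourceManagementUnit(base_clock_mhz=800)
rmu.process([ResourceState("gpu", True, 0.9)])  # returns the raised rate
```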

Methods and apparatus to implement always-on context sensor hubs for processing multiple different types of data inputs

Methods and apparatus to implement always-on context sensor hubs for processing multiple different types of data inputs are disclosed. An example apparatus includes a first processor core to implement a host controller, and a second processor core to implement an offload engine. The host controller includes first logic to process sensor data associated with an electronic device when the electronic device is in a low power mode. The host controller is to offload a computational task associated with the sensor data to the offload engine. The offload engine includes second logic to execute the computational task.
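A minimal sketch of the host-controller/offload-engine split described above (the class names and the averaging stand-in task are illustrative assumptions, not the patented logic):

```python
# Hypothetical sketch: the host controller handles light sensor work in
# low-power mode and hands heavier computation to the offload engine.

class OffloadEngine:
    """Second logic: executes a computational task on behalf of the host."""
    def execute(self, task, data):
        return task(data)

class HostController:
    """First logic: processes sensor data, offloading when it is costly."""
    def __init__(self, engine: OffloadEngine):
        self.engine = engine

    def handle_sample(self, sample: list[float]) -> float:
        if len(sample) <= 4:                    # cheap enough to do locally
            return sum(sample) / len(sample)
        # offload the computational task to the engine
        return self.engine.execute(lambda d: sum(d) / len(d), sample)
```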

Information processing device and information processing method
11526378 · 2022-12-13

An information processing device that includes: a memory; and a monitoring processor coupled to the memory, wherein the monitoring processor is configured to, in accordance with temperature information of a chip on which a plurality of monitored processors are mounted, stop execution of tasks that are designated in advance as having low priority, among a plurality of tasks respectively executed at any of the plurality of monitored processors.
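The monitoring decision can be sketched as follows (a hedged illustration; the threshold values and the priority encoding are assumptions):

```python
# Hypothetical monitoring-processor logic: once the chip temperature
# crosses a limit, select the tasks whose pre-set priority falls at or
# below a low-priority cutoff so they can be stopped.

def tasks_to_stop(tasks: dict[str, int], chip_temp_c: float,
                  limit_c: float = 85.0, low_priority: int = 3) -> list[str]:
    """tasks maps task name -> priority (lower number = lower priority)."""
    if chip_temp_c < limit_c:
        return []                     # chip is cool enough: stop nothing
    return [name for name, prio in tasks.items() if prio <= low_priority]
```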

Automated learning technology to partition computer applications for heterogeneous systems

Systems, apparatuses and methods may provide for technology that identifies a prioritization data structure associated with a function, wherein the prioritization data structure lists hardware resource types in priority order. The technology may also allocate a first type of hardware resource to the function if the first type of hardware resource is available, wherein the first type of hardware resource has a highest priority in the prioritization data structure. Additionally, the technology may allocate, in the priority order, a second type of hardware resource to the function if the first type of hardware resource is not available.

Electronic device and method for performing temperature control

An electronic device and method of operating an electronic device are provided. The electronic device includes a temperature measurement unit configured to measure a temperature of each of multiple components of the electronic device, and a controller configured to change, based on a first reference temperature, an operating frequency of the controller to a first operating frequency when a temperature of the controller, measured by the temperature measurement unit, reaches the first reference temperature and change, based on a third reference temperature that is lower than the first reference temperature, the operating frequency of the controller to a second operating frequency when a temperature of at least one component of the multiple components reaches a second reference temperature while the controller operates at the first operating frequency.
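The two-step frequency change can be sketched as a small state update (a hedged illustration; the reference temperatures and frequency values are made-up examples, not the patent's parameters):

```python
# Hypothetical sketch of the two-step control: drop to a first operating
# frequency when the controller reaches the first reference temperature;
# drop further to a second operating frequency when, while at the first
# frequency, any component reaches the second reference temperature and
# the controller is above the (lower) third reference temperature.

NORMAL_MHZ, FIRST_MHZ, SECOND_MHZ = 2000, 1500, 1000

def next_frequency(freq_mhz: int, ctrl_temp: float, comp_temps: list[float],
                   t1: float = 90.0, t2: float = 95.0, t3: float = 80.0) -> int:
    if ctrl_temp >= t1:                               # first reference reached
        freq_mhz = FIRST_MHZ
    if (freq_mhz == FIRST_MHZ and ctrl_temp >= t3     # third reference (t3 < t1)
            and any(t >= t2 for t in comp_temps)):    # second reference reached
        freq_mhz = SECOND_MHZ
    return freq_mhz
```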

Cooperative dynamic clock and voltage scaling (DCVS) between two processor systems
11520628 · 2022-12-06

In a real-time system having first and second processor systems, cooperative dynamic clock and voltage scaling (“DCVS”) may include a first processor system monitoring a condition indicative of first processor workload, adjusting a first processor operating frequency in response to a detected amount of change in the first processor workload, and providing an indication based on the detected amount of change in the first processor workload to the second processor contemporaneously with providing first processor output data to the second processor. The cooperative DCVS may further include the second processor system adjusting a second processor operating frequency in response to the indication.
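A minimal sketch of the cooperative exchange (the class names, the proportional scaling rule, and the hint format are assumptions for illustration only):

```python
# Hypothetical sketch of cooperative DCVS: the first processor system
# scales its own frequency with a detected workload change and sends a
# hint contemporaneously with its output data; the second processor
# system scales its frequency in response to that hint.

class Processor:
    def __init__(self, freq_mhz: int):
        self.freq_mhz = freq_mhz

class FirstProcessor(Processor):
    def on_workload_change(self, delta: float) -> tuple[int, float]:
        """Adjust own frequency; return (output data, indication) for the peer."""
        self.freq_mhz = int(self.freq_mhz * (1 + delta))
        return self.freq_mhz, delta      # data + contemporaneous indication

class SecondProcessor(Processor):
    def on_indication(self, delta: float) -> None:
        self.freq_mhz = int(self.freq_mhz * (1 + delta))

p1, p2 = FirstProcessor(1000), SecondProcessor(800)
_, hint = p1.on_workload_change(0.25)   # workload rose 25%
p2.on_indication(hint)                   # second system follows suit
```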

Operating a power source as a heating device in an information handling system (IHS)
11520392 · 2022-12-06

Systems and methods for operating a power source as a heating device in an Information Handling System (IHS) are described. In some embodiments, an IHS may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: receive an indication to increase a temperature of the IHS and, in response to the indication, concurrently set a first power supply in source mode and a second power supply in sink mode.
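As a hedged illustration of the mode-setting step (supply names, mode strings, and the temperature comparison are assumptions, not the IHS firmware interface):

```python
# Hypothetical sketch: on an indication to raise the IHS temperature,
# concurrently place one power supply in source mode and the other in
# sink mode so power circulates and dissipates as heat in the chassis.

def on_temperature_indication(ihs_temp_c: float, target_c: float,
                              supplies: dict[str, str]) -> dict[str, str]:
    if ihs_temp_c < target_c:
        # heating requested: drive one supply into the other
        supplies["psu1"], supplies["psu2"] = "source", "sink"
    else:
        # normal operation: both supplies source power
        supplies["psu1"] = supplies["psu2"] = "source"
    return supplies
```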

PROCESSING SYSTEM, PROCESSING METHOD, AND COMPUTER READABLE MEDIUM STORING PROCESSING PROGRAM
20220375276 · 2022-11-24

A processing system is in a host center configured to communicate with edge computers, each of which is mounted on one of a plurality of moving bodies, and the processing system is configured to perform host processing on a service data set related to a cloud service. The processing system includes a processor configured to set divided data of the service data set, and to search, for each of the divided data, for the edge computer that satisfies a necessary condition required for performing distribution processing on the divided data, by monitoring whether each of the edge computers satisfies the necessary condition.
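The host-center search can be sketched as a simple matching loop (an illustration under assumptions: the "necessary condition" is modeled here as free capacity, which the abstract does not specify):

```python
# Hypothetical sketch: for each divided slice of the service data set,
# the host center searches the monitored edge computers for one that
# satisfies the necessary condition (modeled as sufficient free capacity).

def assign_slices(slices: list[int], edges: dict[str, int]) -> dict[int, str]:
    """slices: required capacity per divided data; edges: name -> free capacity."""
    plan: dict[int, str] = {}
    for i, need in enumerate(slices):
        for name, free in edges.items():
            if free >= need:            # this edge satisfies the condition
                plan[i] = name
                edges[name] -= need     # reserve its capacity
                break
    return plan

assign_slices([2, 3], {"edge-a": 2, "edge-b": 5})  # one edge per slice
```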

User presence prediction driven device management

Pooling computing resources based on inferences about a plurality of hardware devices. The method includes identifying inference information about the plurality of hardware devices. The method further includes, based on the inference information, optimizing resource usage of the plurality of hardware devices.

Compute optimization mechanism for deep neural networks

Embodiments provide mechanisms to facilitate compute operations for deep neural networks. One embodiment comprises a graphics processing unit comprising one or more multiprocessors, at least one of the one or more multiprocessors including a register file to store a plurality of different types of operands and a plurality of processing cores. The plurality of processing cores includes a first set of processing cores of a first type and a second set of processing cores of a second type. The first set of processing cores are associated with a first memory channel and the second set of processing cores are associated with a second memory channel.