Patent classifications
G06F9/5094
Display system using system level resources to calculate compensation parameters for a display module in a portable device
A system including a display module and a system module. The display module is integrated in a portable device with a display communicatively coupled to one or more of a driver unit, a measurement unit, a timing controller, a compensation sub-module, and a display memory unit. The system module is communicatively coupled to the display module and has one or more interface modules, one or more processing units, and one or more system memory units. At least one of the processing units and the system memory units is programmable to calculate new compensation parameters for the display module during an offline operation.
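As a rough illustration (not taken from the patent), the offline operation described above could look like the following sketch, where the system module reads measurement data from the display module and writes new compensation parameters back to display memory. All names (`TARGET_NITS`, `compute_compensation`, the luminance values) are illustrative assumptions.

```python
# Hypothetical sketch: a system-level processing unit computing new
# compensation parameters for a display module during an offline period.

TARGET_NITS = 400.0  # assumed target luminance

def compute_compensation(measured):
    """Return a multiplicative gain per measured region so output matches the target."""
    return [TARGET_NITS / m if m > 0 else 1.0 for m in measured]

class DisplayModule:
    def __init__(self, measured_luminance):
        self.measured_luminance = measured_luminance  # from the measurement unit
        self.display_memory = []                      # compensation parameter storage

class SystemModule:
    def offline_update(self, display):
        # Runs only while the display is idle ("offline operation").
        params = compute_compensation(display.measured_luminance)
        display.display_memory = params               # write back over the interface
        return params

display = DisplayModule([380.0, 400.0, 420.0])
gains = SystemModule().offline_update(display)
print([round(g, 3) for g in gains])   # → [1.053, 1.0, 0.952]
```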
Load sharing between wireless earpieces
A method for off-loading tasks between a set of wireless earpieces in an embodiment of the present invention may have one or more of the following steps: (a) monitoring battery levels of the set of wireless earpieces, (b) determining the first wireless earpiece battery level and the second wireless earpiece battery level, (c) communicating the battery levels of each wireless earpiece to the other wireless earpiece of the set of wireless earpieces, (d) assigning a first task involving one or more of the following: computing tasks, background tasks, audio processing tasks, and sensor data analysis tasks from one of the set of wireless earpieces to the other wireless earpiece if the battery level of the one of the set of wireless earpieces falls below a critical threshold, and (e) communicating data for use in performing a second task to the other wireless earpiece if the second task is communicated to the first wireless earpiece.
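Steps (a) through (e) of the claimed method could be sketched as below. The 20% threshold and the task names are assumptions for illustration, not values from the patent.

```python
CRITICAL_THRESHOLD = 0.20  # assumed critical battery level (20%)

class Earpiece:
    def __init__(self, name, battery):
        self.name, self.battery, self.tasks = name, battery, []

def balance(left, right):
    # (a)-(c): each earpiece learns both battery levels
    levels = {left.name: left.battery, right.name: right.battery}
    # (d): move tasks off the earpiece with the lower battery level
    low, high = (left, right) if left.battery < right.battery else (right, left)
    moved = []
    if low.battery < CRITICAL_THRESHOLD:
        moved, low.tasks = low.tasks, []
        high.tasks.extend(moved)   # (e): data for the task travels with it
    return levels, moved

l = Earpiece("left", 0.15)
l.tasks = ["audio_processing", "sensor_analysis"]
r = Earpiece("right", 0.80)
levels, moved = balance(l, r)
print(moved)   # → ['audio_processing', 'sensor_analysis']
```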
Methods and arrangements for automated improvement of quality of service of a data center
Automated improvement of the quality of service of a data center. Transients of the power grid feeding a power supply unit are monitored by a probe. Information on the transients is provided across an interface to a server of the data center. Based on characteristics of the transients, the reliability of the data center is automatically updated. A request for migration of a workload requiring higher reliability than the updated reliability can be sent to a central management. When the central management has identified another data center that can meet the required reliability, it migrates or relocates the workload to that data center.
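A minimal sketch of this flow, under invented assumptions (a transient above 10% of nominal is "severe" and each severe transient lowers reliability by 0.01; data-center names and reliability figures are made up):

```python
def reliability_from_transients(transients, baseline=0.999):
    # Assumption: each severe transient (>10% deviation from nominal)
    # lowers the data center's reliability rating.
    severe = sum(1 for t in transients if abs(t) > 0.10)
    return max(0.0, baseline - 0.01 * severe)

class CentralManagement:
    def __init__(self, data_centers):
        self.data_centers = data_centers  # name -> reliability rating
    def migrate(self, workload, required):
        # Find another data center that can meet the required reliability.
        for name, rel in self.data_centers.items():
            if rel >= required:
                return name
        return None

transients = [0.02, 0.15, 0.12, 0.01]     # probe samples, fraction of nominal
updated = reliability_from_transients(transients)
cm = CentralManagement({"dc-b": 0.999, "dc-c": 0.95})
target = cm.migrate("billing-db", required=0.995) if updated < 0.995 else None
print(round(updated, 3), target)   # → 0.979 dc-b
```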
METHOD FOR DIMENSIONING ELECTRIC POWER RESERVED BY BASE STATIONS
A method for dimensioning the electric power reserved by at least one current base station among a plurality of base stations connected to a virtualization manager of a network infrastructure is disclosed. The method is implemented by the virtualization manager and includes: receiving a request for dimensioning the reserved electric power; configuring, according to the dimensioning request, at least one server of the at least one current base station; and controlling, according to the configuration of the at least one server, at least one virtual computing resource of the network infrastructure, so as to dimension the reserved electric power.
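One way to picture the virtualization manager's role, assuming (purely for illustration) a fixed power budget per virtual CPU:

```python
WATTS_PER_VCPU = 15  # assumed power budget per virtual CPU

class VirtualizationManager:
    def __init__(self, servers):
        self.servers = servers  # base-station server -> allocated vCPUs

    def dimension(self, station, reserved_watts):
        # Configure the station's server so its virtual computing
        # resources fit within the requested reserved electric power.
        vcpus = reserved_watts // WATTS_PER_VCPU
        self.servers[station] = vcpus
        return vcpus

vm = VirtualizationManager({"bs-1": 8})
print(vm.dimension("bs-1", reserved_watts=60))   # → 4
```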
USING SPARSITY METADATA TO REDUCE SYSTOLIC ARRAY POWER CONSUMPTION
A processing apparatus can include a general-purpose parallel processing engine comprising a matrix accelerator including a multi-stage systolic array, where each stage includes multiple processing elements associated with multiple processing channels. The multiple processing elements are configured to receive output sparsity metadata that is independent of input sparsity of input matrix elements and perform processing operations on the input matrix elements based on the output sparsity metadata.
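A software analogue of the idea (not the hardware itself): a per-output metadata mask decides which output elements the array computes at all, so the multiply-accumulate work for masked-off outputs is skipped regardless of the input values. The function and variable names are illustrative.

```python
def sparse_matmul(a, b, out_mask):
    """Multiply a (m x k) by b (k x n), computing only outputs whose
    metadata bit in out_mask is 1; the rest stay zero and their
    multiply-accumulate work is skipped, saving array power."""
    m, k, n = len(a), len(b), len(b[0])
    out = [[0] * n for _ in range(m)]
    skipped = 0
    for i in range(m):
        for j in range(n):
            if out_mask[i][j]:
                out[i][j] = sum(a[i][p] * b[p][j] for p in range(k))
            else:
                skipped += 1   # processing elements gated off for this output
    return out, skipped

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
mask = [[1, 0], [0, 1]]       # output sparsity metadata, independent of inputs
out, skipped = sparse_matmul(a, b, mask)
print(out, skipped)   # → [[19, 0], [0, 50]] 2
```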
HARDWARE-ASSISTED CORE FREQUENCY AND VOLTAGE SCALING IN A POLL MODE IDLE LOOP
A hardware controller within a core of a processor is described. The hardware controller includes telemetry logic to generate telemetry data that indicates an activity state of the core; core stall detection logic to determine, based on the telemetry data from the telemetry logic, whether the core is in an idle loop state; and a power controller that, in response to the core stall detection logic determining that the core is in the idle loop state, is to decrease a power mode of the core from a first power mode associated with a first set of power settings to a second power mode associated with a second set of power settings.
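The detection-and-downscale loop described above might be modeled as follows. The 5% retire-ratio threshold and the `P0`/`P1` mode names are assumptions used only to make the sketch concrete.

```python
LOW_RETIRE_RATIO = 0.05   # assumed threshold: few instructions retired per poll

class Core:
    def __init__(self):
        self.power_mode = "P0"          # first power mode (higher settings)

    def telemetry(self, retired, polls):
        # Telemetry logic: activity indicator for the core.
        return retired / max(polls, 1)

    def stall_detect(self, activity):
        # Core stall detection: busy-polling but doing almost no real work.
        return activity < LOW_RETIRE_RATIO

    def power_step(self, retired, polls):
        # Power controller: decrease the power mode on an idle-loop state.
        if self.stall_detect(self.telemetry(retired, polls)):
            self.power_mode = "P1"      # second power mode (lower freq/voltage)
        return self.power_mode

core = Core()
print(core.power_step(retired=3, polls=100))   # → P1 (poll-mode idle loop)
```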
MANAGING COMPUTE RESOURCES AND RUNTIME OBJECT LOAD STATUS IN A PLATFORM FRAMEWORK
Embodiments of systems and methods for managing compute resources and runtime object load status in a platform framework are described. In some embodiments, an Information Handling System (IHS) may include a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution, cause the IHS to: receive, at a platform framework via an Application Programming Interface (API), an arbitration policy; notify an application, by the platform framework via the API, of a state change with respect to the arbitration policy based upon a change in context; receive, at the platform framework from the application via the API, an identification of at least one compute resource to execute a workload associated with the arbitration policy; and offload the workload to the compute resource.
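The four API interactions in the abstract (receive a policy, notify on a context change, receive a resource identification, offload) could be sketched like this; the class names, the `on_battery` context, and the iGPU/dGPU resources are hypothetical.

```python
class PlatformFramework:
    def __init__(self):
        self.policy = None
        self.subscribers = []

    def set_arbitration_policy(self, policy):       # received via the API
        self.policy = policy

    def subscribe(self, app):
        self.subscribers.append(app)

    def context_change(self, context):              # e.g. AC power -> battery
        for app in self.subscribers:                # notify of the state change
            app.on_policy_state_change(self.policy, context)

    def offload(self, workload, resource):
        return f"{workload} -> {resource}"

class App:
    def __init__(self, fw):
        self.fw = fw

    def on_policy_state_change(self, policy, context):
        # Application identifies the compute resource under the new context.
        resource = "iGPU" if context == "on_battery" else "dGPU"
        self.result = self.fw.offload("inference", resource)

fw = PlatformFramework()
app = App(fw)
fw.subscribe(app)
fw.set_arbitration_policy("prefer-low-power")
fw.context_change("on_battery")
print(app.result)   # → inference -> iGPU
```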
Technologies for providing advanced management of power usage limits in a disaggregated architecture
Technologies for providing advanced management of power usage limits in a disaggregated architecture include a compute device. The compute device includes circuitry configured to execute operations associated with a workload in a disaggregated system. The circuitry is also configured to determine whether a present power usage of the compute device is within a predefined range of a power usage limit assigned to the compute device. Additionally, the circuitry is configured to send, to a device in the disaggregated system and in response to a determination that the present power usage of the compute device is not within the predefined range of its assigned power usage limit, offer data indicative of an offer to reduce the power usage limit assigned to the compute device, so that a power usage limit of another compute device in the disaggregated system can be increased.
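A toy version of the offer mechanism, under assumed numbers (a 20 W margin defines "within the predefined range"; the sled names and wattages are invented):

```python
MARGIN = 20  # watts of headroom that counts as "within range" (assumed)

class ComputeDevice:
    def __init__(self, name, limit, usage):
        self.name, self.limit, self.usage = name, limit, usage

    def offer_headroom(self):
        headroom = self.limit - self.usage
        if headroom > MARGIN:                 # usage not within range of limit
            offer = headroom - MARGIN
            self.limit -= offer               # reduce this device's limit...
            return offer                      # ...and offer the watts to a peer
        return 0

a = ComputeDevice("sled-a", limit=300, usage=180)
b = ComputeDevice("sled-b", limit=250, usage=245)
offer = a.offer_headroom()
b.limit += offer                              # peer's limit is increased
print(offer, a.limit, b.limit)   # → 100 200 350
```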
Computing Device Control of a Job Execution Environment
Job execution environment control techniques are described to manage policy selection and implementation to control use of job executors by a computing device, automatically and without user intervention. These techniques are usable to select a policy from a plurality of policies that is then used to control lifecycles of job executors of a job execution environment of a computing device. Further, these techniques are usable to dynamically change the selected policy during runtime in response to changes in the job execution environment.
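A possible reading of "select a policy, control executor lifecycles, switch dynamically" in miniature; the two policy names, their executor caps, and the memory-pressure trigger are assumptions:

```python
POLICIES = {
    # policy name -> max concurrent job executors kept alive (assumed values)
    "aggressive_reuse": 8,
    "conservative":     2,
}

def select_policy(env):
    # Select a policy from the plurality based on the execution environment.
    return "conservative" if env["memory_pressure"] else "aggressive_reuse"

def apply_policy(executors, policy):
    # Control executor lifecycles: retire idle executors above the cap.
    cap = POLICIES[policy]
    keep, retired = executors[:cap], executors[cap:]
    return keep, retired

env = {"memory_pressure": False}
policy = select_policy(env)
keep, retired = apply_policy([f"exec{i}" for i in range(4)], policy)

env["memory_pressure"] = True                  # environment changes at runtime
policy = select_policy(env)                    # policy switched dynamically
keep, retired = apply_policy(keep, policy)
print(policy, len(keep), retired)   # → conservative 2 ['exec2', 'exec3']
```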
METHOD AND APPARATUS FOR DIFFERENTIALLY OPTIMIZING QUALITY OF SERVICE (QoS)
A method and apparatus for differentially optimizing quality of service (QoS) includes: establishing a system model of a multi-task offloading framework; acquiring a mode in which users execute a computation task; executing the system model of the multi-task offloading framework according to that mode; and optimizing QoS on the basis of a multi-objective optimization method using multi-agent deep reinforcement learning. According to the present invention, an offloading policy is calculated on the basis of the multi-user differentiated QoS using multi-agent deep reinforcement learning; with the differentiated QoS requirements of different users in the system taken into account, a global offloading decision is made according to task performance requirements and the network resource state, and differentiated performance optimization is performed for different user requirements, thereby effectively improving system resource utilization and user quality of service.
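The patent's decision is produced by a trained multi-agent deep reinforcement learning policy; as a deliberately simplified stand-in, the sketch below makes the global offloading decision with a greedy rule driven by each user's differentiated latency requirement and a network resource state (edge capacity). All user data and thresholds are invented for illustration.

```python
# Hypothetical simplification of the differentiated-QoS offloading decision.
USERS = [
    {"name": "u1", "latency_req_ms": 20,  "task_ms_local": 50, "task_ms_edge": 10},
    {"name": "u2", "latency_req_ms": 200, "task_ms_local": 60, "task_ms_edge": 15},
]

def offload_decisions(users, edge_capacity):
    decisions = {}
    # Serve the strictest latency requirements first (differentiated QoS).
    for u in sorted(users, key=lambda u: u["latency_req_ms"]):
        if edge_capacity > 0 and u["task_ms_local"] > u["latency_req_ms"]:
            decisions[u["name"]] = "edge"   # local execution misses the deadline
            edge_capacity -= 1              # network resource state updated
        else:
            decisions[u["name"]] = "local"
    return decisions

print(offload_decisions(USERS, edge_capacity=1))   # → {'u1': 'edge', 'u2': 'local'}
```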