G06F9/5094

Load sharing between wireless earpieces
20230092994 · 2023-03-23 ·

A method for off-loading tasks between a set of wireless earpieces in an embodiment of the present invention may have one or more of the following steps: (a) monitoring battery levels of the set of wireless earpieces, (b) determining the first wireless earpiece battery level and the second wireless earpiece battery level, (c) communicating the battery levels of each wireless earpiece to the other wireless earpiece of the set of wireless earpieces, (d) assigning a first task involving one or more of the following: computing tasks, background tasks, audio processing tasks, and sensor data analysis tasks from one of the set of wireless earpieces to the other wireless earpiece if the battery level of the one of the set of wireless earpieces falls below a critical threshold, and (e) communicating data for use in performing a second task to the other wireless earpiece if the second task is communicated to the first wireless earpiece.
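The battery-driven off-loading in steps (a) through (d) can be sketched as follows. The 20% critical threshold, the left/right labels, and the task-dictionary layout are assumptions for illustration; the patent leaves these open.

```python
# Assumed value; the patent does not fix the critical threshold.
CRITICAL_THRESHOLD = 0.20

def assign_tasks(left_battery, right_battery, tasks):
    """Monitor both battery levels and off-load a task from its preferred
    earpiece to the other one when the preferred side is below the
    critical threshold and the other side is not (step (d))."""
    levels = {"left": left_battery, "right": right_battery}
    assignments = {"left": [], "right": []}
    for task in tasks:
        preferred = task["preferred"]
        other = "right" if preferred == "left" else "left"
        if levels[preferred] < CRITICAL_THRESHOLD <= levels[other]:
            assignments[other].append(task["name"])   # off-load
        else:
            assignments[preferred].append(task["name"])
    return assignments
```

In a real pair of earpieces, step (c) would carry the two battery levels over the inter-earpiece link before this decision runs on either side.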

Resource Management Unit for Capturing Operating System Configuration States and Offloading Tasks
20230088718 · 2023-03-23 ·

This disclosure describes methods, devices, systems, and procedures in a computing system for capturing a configuration state of an operating system executing on a central processing unit (CPU), and offloading resource-related tasks, based on the configuration state, to a resource management unit such as a system-on-chip (SoC). The resource management unit identifies a status of each resource based on the captured configuration state of the operating system. The resource management unit then processes tasks associated with the status of the resources, such as modifying a clock rate of a clocked component in the computing system. This can relieve the CPU of processing those tasks, thereby improving overall computing system performance and dynamics.
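The clock-rate example can be sketched as a policy the resource management unit runs against the captured configuration state, so the CPU never executes the loop itself. The field names (`cpu_load`, `memory_clock_mhz`) and the thresholds are illustrative assumptions.

```python
def offload_clock_management(state, low=0.25, high=0.75):
    """Resource-management-unit sketch: derive a clocked component's
    status from the captured OS configuration state and return a new
    clock rate for it; an empty dict means no change is needed."""
    if state["cpu_load"] < low:                 # system mostly idle
        return {"memory_clock_mhz": state["memory_clock_mhz"] // 2}
    if state["cpu_load"] > high:                # system under pressure
        return {"memory_clock_mhz": state["memory_clock_mhz"] * 2}
    return {}                                   # status nominal
```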

Resource Tapping Method, Resource Tapping Apparatus and Electronic Device
20230092978 · 2023-03-23 ·

This disclosure provides a resource tapping method, a resource tapping apparatus and an electronic device, and relates to the field of computer technology, in particular to the technical field of artificial intelligence, such as deep learning and machine learning. A specific implementation is as follows: obtaining operation data in M resource dimensions of a target cabinet, the M resource dimensions including a power resource, where M is a positive integer; determining a target power over-allocation value of the target cabinet based on the operation data, the target power over-allocation value being used for indicating an allowable power increment on the basis of a power rating of the target cabinet; and determining, based on the target power over-allocation value, a first quantity of additional servers deployable in the target cabinet.
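For the power dimension alone (one of the M dimensions), the over-allocation computation can be sketched as the gap between the cabinet's power rating and its observed peak draw, divided by an assumed per-server draw. A real implementation would combine all M resource dimensions, likely via a learned model.

```python
def target_over_allocation(power_samples_w, rated_power_w):
    """Sketch: the allowable power increment on the basis of the rating
    is taken as the headroom between the rating and the observed peak."""
    peak = max(power_samples_w)
    return max(0, rated_power_w - peak)

def deployable_servers(over_allocation_w, per_server_w):
    """First quantity of additional servers that fit in the increment."""
    return over_allocation_w // per_server_w
```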

METHOD FOR DATA PROCESSING AND APPARATUS, AND ELECTRONIC DEVICE
20220342706 · 2022-10-27 ·

A method for data processing includes: receiving, from a network-side device, capability information for processing data; determining whether the capability information satisfies a preset requirement of data to be processed; and, in response to the capability information satisfying the preset requirement, sending the data to be processed to the network-side device for processing.
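The three steps reduce to a capability check gating the transfer. A minimal sketch, where the capability fields (`compute_gflops`, `memory_mb`) are illustrative assumptions rather than names from the patent:

```python
def maybe_offload(capability, requirement, payload, send):
    """Compare the network-side device's reported capability against the
    preset requirement of the data to be processed, and send the data
    for processing only when every requirement is met."""
    satisfied = all(capability.get(key, 0) >= needed
                    for key, needed in requirement.items())
    if satisfied:
        send(payload)
    return satisfied
```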

INFORMATION REPORTING METHOD, APPARATUS AND DEVICE, AND STORAGE MEDIUM
20220342713 · 2022-10-27 ·

Described are an information reporting method, apparatus, and device, and a storage medium. The method includes: a terminal sends AI/ML capability information to a network device, the AI/ML capability information indicating resource information of the terminal for processing an AI/ML service. According to the AI/ML capability information reported by the terminal, the network device can flexibly switch the AI/ML model run by the terminal, distribute an appropriate AI/ML model to the terminal, and adjust AI/ML training parameters and the like. Therefore, while ensuring that an AI/ML task can be completed, AI/ML resources of the terminal, such as its processing capability, storage capability, and battery, can be utilized more efficiently, safeguarding the reliability, timeliness, and efficiency of terminal-based AI/ML operations.
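The network-side model-switching decision can be sketched as picking the largest model the reported capability can support. The model catalogue, its resource figures, and the capability fields are hypothetical, introduced only for illustration:

```python
# Hypothetical catalogue: minimum memory (MB) and battery fraction per model.
MODELS = {"tiny": (50, 0.05), "base": (200, 0.20), "large": (800, 0.50)}

def select_model(capability):
    """Network-device sketch: from the terminal's reported AI/ML
    capability information, distribute the largest model whose
    resource demands the terminal can satisfy, or None if none fit."""
    fitting = [name for name, (mem, batt) in MODELS.items()
               if capability["memory_mb"] >= mem
               and capability["battery"] >= batt]
    return max(fitting, key=lambda n: MODELS[n][0]) if fitting else None
```

As the terminal's battery drains, re-running the selection against fresh capability reports yields the flexible switching the abstract describes.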

Performance scaling for binary translation

Embodiments relate to improving user experiences when executing binary code that has been translated from other binary code. Binary code (instructions) for a source instruction set architecture (ISA) cannot natively execute on a processor that implements a target ISA. The instructions in the source ISA are binary-translated to instructions in the target ISA and are executed on the processor. The overhead of performing binary translation and/or the overhead of executing binary-translated code are compensated for by increasing the speed at which the translated code is executed, relative to non-translated code. Translated code may be executed on hardware that has one or more power-performance parameters of the processor set to increase the performance of the processor with respect to the translated code. The increase in power-performance for translated code may be proportional to the degree of translation overhead.
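The proportional compensation can be sketched as a frequency boost scaled by measured translation overhead, capped at an assumed maximum. Treating clock frequency as the power-performance parameter, and the 1.5x cap, are assumptions for illustration:

```python
def boosted_frequency(base_mhz, translation_overhead, max_boost=1.5):
    """Raise one power-performance parameter (here, clock frequency) in
    proportion to the binary-translation overhead: an overhead of 0.25
    (translated code costs 25% more cycles than native) yields a 1.25x
    boost, capped at the assumed maximum."""
    boost = min(1.0 + translation_overhead, max_boost)
    return int(base_mhz * boost)
```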

Techniques for generating a system cache partitioning policy
11609860 · 2023-03-21 ·

In various embodiments, a computing system includes, for example, a plurality of processing units that share access to a system cache. A cache management application receives, for example, resource savings information for each processing unit. The resource savings information indicates, for example, amounts of a resource (e.g., power) that are saved when different units of the system cache are allocated to a processing unit. The cache management application determines, for example, the number of units of system cache to allocate to each processing unit based on the received resource savings information.
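One way the cache management application could turn per-unit savings into an allocation is a greedy pass over marginal savings; the greedy strategy itself is an assumption, since the patent only says the allocation is based on the received savings information:

```python
def partition_cache(total_units, savings):
    """Greedy sketch: savings[u][k] is the power saved when the (k+1)-th
    unit of system cache is allocated to processing unit u. Hand out one
    cache unit at a time to whichever processor gains the most from its
    next unit, stopping when no further savings remain."""
    alloc = {u: 0 for u in savings}
    for _ in range(total_units):
        def marginal(u):
            k = alloc[u]
            return savings[u][k] if k < len(savings[u]) else 0
        best = max(alloc, key=marginal)
        if marginal(best) == 0:
            break
        alloc[best] += 1
    return alloc
```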

DISPLAY SYSTEM USING SYSTEM LEVEL RESOURCES TO CALCULATE COMPENSATION PARAMETERS FOR A DISPLAY MODULE IN A PORTABLE DEVICE
20230081884 · 2023-03-16 ·

A system including a display module and a system module. The display module is integrated in a portable device with a display communicatively coupled to one or more of a driver unit, a measurement unit, a timing controller, a compensation sub-module, and a display memory unit. The system module is communicatively coupled to the display module and has one or more interface modules, one or more processing units, and one or more system memory units. At least one of the processing units and the system memory units is programmable to calculate new compensation parameters for the display module during an offline operation.

GREENER SOFTWARE DEFINED STORAGE STACK
20220342705 · 2022-10-27 ·

A method for managing client resources includes: receiving a desired load factor representing the number of input/output operations per second (IOPS) needed to implement an application on a set of cores of a client device; determining, based on the desired load factor and a latency factor, a maximum number of IOPS that can be executed by the cores of the client device before reaching system saturation; determining a pattern of the IOPS being executed on the set of cores based on historical IOPS information for the latency factor; and, based on the historical IOPS information, determining to execute the IOPS on a subset of the set of cores.
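The core-subset decision can be sketched by sizing the subset so the desired IOPS load stays below the saturation point, letting the remaining cores idle (the "greener" part). The 0.9 saturation factor is an assumed stand-in for the latency-derived saturation limit:

```python
import math

def cores_needed(desired_iops, max_iops_per_core, saturation=0.9):
    """Pick the smallest subset of cores that serves the desired IOPS
    while keeping each core below the assumed saturation fraction of
    its maximum; cores outside the subset can idle or be power-gated."""
    usable_per_core = max_iops_per_core * saturation
    return math.ceil(desired_iops / usable_per_core)
```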

PROCESSING DATA STREAM MODIFICATION TO REDUCE POWER EFFECTS DURING PARALLEL PROCESSING
20230078991 · 2023-03-16 ·

Certain aspects of the present disclosure provide a method for performing parallel data processing, including: receiving data for parallel processing from a data processing requestor; generating a plurality of data sub-blocks; determining a plurality of data portions in each data sub-block of the plurality of data sub-blocks; changing an order of the plurality of data portions in at least one data sub-block of the plurality of data sub-blocks; providing the plurality of data sub-blocks, including the at least one data sub-block comprising the changed order of the plurality of data portions, to a plurality of processing units for parallel processing; and receiving processed data associated with the plurality of data sub-blocks from the plurality of processing units.
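The reordering step can be sketched as a per-sub-block rotation of the data portions, so the parallel processing units do not all hit the same data pattern, and hence the same power draw, at the same instant. Rotation is an assumed reordering scheme; the claim only requires a changed order in at least one sub-block:

```python
def stagger_sub_blocks(data, num_sub_blocks, portions_per_block):
    """Split the stream into sub-blocks, split each sub-block into
    portions, then rotate each sub-block's portion order by a different
    amount before handing the sub-blocks to the processing units."""
    block_len = len(data) // num_sub_blocks
    out = []
    for i in range(num_sub_blocks):
        block = data[i * block_len:(i + 1) * block_len]
        p_len = len(block) // portions_per_block
        portions = [block[j * p_len:(j + 1) * p_len]
                    for j in range(portions_per_block)]
        shift = i % portions_per_block          # per-sub-block rotation
        out.append(portions[shift:] + portions[:shift])
    return out
```

The receiving side would undo the rotation (or tag portions with indices) before reassembling the processed data.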