Patent classifications
G06F9/30083
REDUCING SAVE RESTORE LATENCY FOR POWER CONTROL
A method of save-restore operations includes monitoring, by a power controller of a parallel processor (such as a graphics processing unit), a register bus for one or more register write signals. The power controller determines that a register write signal is addressed to a state register that is designated to be saved prior to changing a power state of the parallel processor from a first state to a second state having a lower level of energy usage. The power controller instructs a copy of data corresponding to the state register to be written to a local memory module of the parallel processor. Subsequently, the parallel processor receives a power state change signal and writes state register data saved at the local memory module to an off-chip memory prior to changing the power state of the parallel processor.
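The mechanism in this abstract can be sketched in a few lines: a controller snoops register writes, shadows only the designated state registers into local memory, and flushes those shadows off-chip when a low-power transition is requested. This is a minimal illustrative model, not the patented implementation; all class and register names (`PowerController`, `CFG0`, and so on) are assumptions.

```python
class PowerController:
    """Illustrative model of the save-restore power controller."""

    def __init__(self, designated_registers):
        self.designated = set(designated_registers)  # registers to save
        self.local_memory = {}     # on-chip shadow copies
        self.off_chip_memory = {}  # survives the low-power state

    def on_register_write(self, reg, value):
        """Snoop the register bus; shadow writes to designated registers."""
        if reg in self.designated:
            self.local_memory[reg] = value

    def on_power_state_change(self):
        """Flush shadow copies off-chip before entering the lower state."""
        self.off_chip_memory.update(self.local_memory)
        self.local_memory.clear()


pc = PowerController({"CFG0", "CFG1"})
pc.on_register_write("CFG0", 0xAB)    # shadowed (designated register)
pc.on_register_write("SCRATCH", 0x1)  # ignored (not designated)
pc.on_power_state_change()
assert pc.off_chip_memory == {"CFG0": 0xAB}
```

Because only designated registers are shadowed as they are written, the expensive save at transition time reduces to a single flush of already-collected data.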
Performance scaling for binary translation
Embodiments relate to improving user experiences when executing binary code that has been translated from other binary code. Binary code (instructions) for a source instruction set architecture (ISA) cannot natively execute on a processor that implements a target ISA. The instructions in the source ISA are binary-translated to instructions in the target ISA and are executed on the processor. The overhead of performing binary translation and/or the overhead of executing binary-translated code are compensated for by increasing the speed at which the translated code is executed, relative to non-translated code. Translated code may be executed on hardware that has one or more power-performance parameters of the processor set to increase the performance of the processor with respect to the translated code. The increase in power-performance for translated code may be proportional to the degree of translation overhead.
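The final sentence, that the power-performance increase may be proportional to the translation overhead, can be modeled as a simple capped boost. This is a hypothetical sketch; the function name, the linear relationship, and the cap are assumptions, not details from the patent.

```python
def frequency_for(base_freq_mhz, translation_overhead, max_boost=0.5):
    """Boost clock frequency in proportion to binary-translation overhead.

    translation_overhead: fraction of extra work due to translation
    (e.g. 0.2 means translated code costs roughly 20% more cycles).
    The boost is capped at max_boost to stay within power limits.
    """
    boost = min(translation_overhead, max_boost)
    return base_freq_mhz * (1.0 + boost)


assert frequency_for(2000, 0.2) == 2400.0  # 20% overhead -> 20% boost
assert frequency_for(2000, 0.9) == 3000.0  # capped at 50% boost
```

A proportional boost like this aims to make translated code run at roughly the same wall-clock speed the user would see from native code.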
MEMORY-NETWORK PROCESSOR WITH PROGRAMMABLE OPTIMIZATIONS
Various embodiments are disclosed of a multiprocessor system with processing elements optimized for high performance and low power dissipation and an associated method of programming the processing elements. Each processing element may comprise a fetch unit and a plurality of address generator units and a plurality of pipelined datapaths. The fetch unit may be configured to receive a multi-part instruction, wherein the multi-part instruction includes a plurality of fields. First and second address generator units may generate, based on different fields of the multi-part instruction, addresses from which to retrieve first and second data for use by an execution unit for the multi-part instruction or a subsequent multi-part instruction. The execution units may perform operations using a single pipeline or multiple pipelines based on third and fourth fields of the multi-part instruction.
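The multi-part instruction flow above can be illustrated with a small decode model: two address generator units each read their own field, and further fields select single- or multi-pipeline execution. Field names and widths here are invented for illustration.

```python
from collections import namedtuple

# Hypothetical layout: two AGU offset fields plus pipeline-select and op fields.
MultiPartInstruction = namedtuple(
    "MultiPartInstruction", ["agu0_field", "agu1_field", "pipe_sel", "op_field"])


def generate_addresses(insn, base0, base1):
    """Each AGU derives an operand address from its own instruction field."""
    addr_a = base0 + insn.agu0_field  # first data address
    addr_b = base1 + insn.agu1_field  # second data address
    return addr_a, addr_b


def select_pipelines(insn):
    """A later field chooses execution on one pipeline or several."""
    return "multi" if insn.pipe_sel else "single"


insn = MultiPartInstruction(agu0_field=4, agu1_field=8, pipe_sel=1, op_field="mac")
assert generate_addresses(insn, 100, 200) == (104, 208)
assert select_pipelines(insn) == "multi"
```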
Apparatus and branch prediction circuitry having first and second branch prediction schemes, and method
A processing pipeline may have first and second execution circuits having different performance or energy consumption characteristics. Instruction supply circuitry may support different instruction supply schemes with different energy consumption or performance characteristics. This can allow a further trade-off between performance and energy efficiency. Architectural state storage can be shared between the execute units to reduce the overhead of switching between the units. In a parallel execution mode, groups of instructions can be executed on both execute units in parallel.
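The claim that shared architectural state reduces switching overhead can be shown with a toy model: if both execute units reference one register file, a switch needs no state copy at all. The class names are illustrative assumptions.

```python
class SharedRegisterFile:
    """Single architectural copy of the register state."""

    def __init__(self, n):
        self.regs = [0] * n


class ExecuteUnit:
    def __init__(self, name, state):
        self.name = name
        self.state = state  # a reference to shared state, not a copy


shared = SharedRegisterFile(16)
big = ExecuteUnit("high-performance", shared)
little = ExecuteUnit("energy-efficient", shared)

big.state.regs[3] = 42            # written while running on one unit
assert little.state.regs[3] == 42  # visible on the other with no transfer
```

In hardware terms, the switch cost drops from copying the full architectural state to simply redirecting instruction supply to the other unit.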
Computing system translation to promote efficiency
A system and method for utilizing the processing power of computing devices such as smart devices are provided. The system includes one or more distributed smart devices and a management server that communicates with the smart devices in order to determine whether they are idle and whether there are viable compute tasks that can be performed on the smart devices based on each smart device's status, configuration, utility and network parameters, and availability. Some tasks may be performed in low power mode to save energy.

COMPUTING SYSTEM AND METHOD
A datacenter including a first voltage sensor and/or first amperage sensor along with a second voltage sensor and/or second amperage sensor. The first amperage and voltage sensors are associated with a first computing device (FCD) and the second amperage and voltage sensors are associated with a second computing device (SCD). The datacenter also includes an electronic control unit (ECU) that communicates with the FCD and the SCD. The ECU is configured to receive FCD energy consumption information and additional updated FCD energy consumption information via the first voltage and/or amperage sensor(s). FCD energy consumption information is associated with energy consumed by the FCD during a first customer billing cycle. The ECU is also configured to divide a first blockchain mining reward into a first customer portion and a first datacenter portion and withhold the first customer portion when the first datacenter portion is less than a first minimum threshold.
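The reward-split logic in the last sentence can be sketched directly: the metered energy cost determines the datacenter portion, and the customer portion is withheld when the datacenter portion falls below a minimum threshold. The formula relating energy cost to the datacenter portion is an assumption; the abstract only specifies the split and the withholding condition.

```python
def split_reward(reward, energy_cost, min_datacenter_portion):
    """Split a blockchain mining reward into customer and datacenter portions.

    Assumes the datacenter portion covers metered energy cost, capped at the
    reward itself; the customer portion is withheld (zeroed) when the
    datacenter portion is below the minimum threshold.
    """
    datacenter_portion = min(energy_cost, reward)
    customer_portion = reward - datacenter_portion
    if datacenter_portion < min_datacenter_portion:
        customer_portion = 0.0  # withheld per the threshold rule
    return customer_portion, datacenter_portion


assert split_reward(10.0, 3.0, 2.0) == (7.0, 3.0)
assert split_reward(10.0, 1.0, 2.0) == (0.0, 1.0)  # customer portion withheld
```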
CIRCUIT AND REGISTER TO PREVENT EXECUTABLE CODE ACCESS
Certain aspects provide a computing system including a first CPU configured to load executable code for the computing system. The computing system further includes a first memory configured to store the executable code at an address range of the first memory. The first memory is local to a second CPU. The computing system further includes a one-time programmable or read-only register configured to store an indication of the address range. The computing system further includes a circuit configured to: determine if a first memory address associated with a first memory access command is in the address range based on the indication of the address range stored in the register; when the first memory address is in the address range, refrain from sending the first command to the first memory; and when the first memory address is not in the address range, send the first command to the first memory.
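The guard circuit's filtering rule is simple enough to state in code: commands whose address falls inside the protected range recorded in the one-time-programmable register are dropped; all others are forwarded. This is a behavioral sketch with invented names, not a circuit description.

```python
class AccessGuard:
    """Behavioral model of the circuit gating access to executable code."""

    def __init__(self, range_start, range_end):
        # Models the one-time-programmable / read-only register holding
        # the indication of the protected address range.
        self.range = (range_start, range_end)

    def may_forward(self, command):
        """Return True if the memory access command may reach the memory."""
        start, end = self.range
        # Refrain from sending commands that target the executable range.
        return not (start <= command["addr"] <= end)


guard = AccessGuard(0x1000, 0x1FFF)
assert guard.may_forward({"addr": 0x1234}) is False  # blocked: in range
assert guard.may_forward({"addr": 0x2000}) is True   # forwarded
```

Storing the range in a one-time-programmable register means the protection cannot be altered by software after boot, which is the point of the scheme.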
Methods and systems for a user interface for illumination power, management, and control
Systems and methods for illumination power, management and control can include lighting fixtures, lighting controllers, databases, and gateways. The lighting controllers can power the lighting fixtures, control the lighting fixtures, and store fixture state data and controller state data. The lighting controllers can be connected to building mains power (e.g., 240 VAC) and provide DC power to the lighting fixtures. The lighting controllers can read state data from and control the fixtures via a digital interface. The database server can store user profiles, site profiles, fixture property data, and controller property data. The gateway can read and modify the state data stored by the lighting controllers, and can query the database server for the property data. The gateway can also provide a user interface through which users, based on authorization, can read and write the state data (e.g., fixture on/off) and the property data.
Neural network computation device and method
The present disclosure provides a computation device including: a computation module for executing a neural network computation, and a power conversion module connected to the computation module, for converting input data and/or output data of the neural network computation into power data. The present disclosure further provides a computation method. The computation device and method of the present disclosure may reduce the cost of storage resources and computing resources, and may increase the computation speed.
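A plausible reading of "power data" is a power-of-two encoding: each value is stored as a sign and an exponent, so multiplications reduce to shifts, saving both storage and compute. That interpretation, and every name below, is an assumption about the abstract, not a statement of the patented method.

```python
import math


def to_power(x):
    """Quantize x to the nearest power of two, as (sign, exponent)."""
    if x == 0:
        return (0, 0)
    exp = round(math.log2(abs(x)))
    return (1 if x > 0 else -1, exp)


def power_multiply(weight_power, activation):
    """Multiply by a power-of-two weight; in hardware this is a bit shift."""
    sign, exp = weight_power
    return sign * activation * (2 ** exp)


w = to_power(3.7)                      # 3.7 quantizes to 2**2
assert w == (1, 2)
assert power_multiply(w, 5.0) == 20.0  # shift-based multiply
```

The storage saving comes from keeping only a small exponent per value; the compute saving comes from replacing multipliers with shifters.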