Y02D10/00

Accelerated deep learning

Techniques in advanced deep learning provide improvements in one or more of accuracy of learning, accuracy of prediction, speed of learning, performance of learning, and energy efficiency of learning. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has processing resources and memory resources. Each routing element enables communication via wavelets with at least nearest neighbors in a 2D mesh. Stochastic gradient descent, mini-batch gradient descent, and continuous propagation gradient descent are techniques usable to train weights of a neural network modeled by the processing elements. Reverse checkpoint is usable to reduce memory usage during the training.
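Of the training techniques named above, mini-batch gradient descent is the easiest to show compactly. The following is a minimal sketch of it for a single-weight linear model; the function name, data, and hyperparameters are illustrative and not taken from the abstract.

```python
import random

# Minimal sketch of mini-batch gradient descent fitting y = w * x.
# All names and hyperparameters are illustrative.
def minibatch_sgd(samples, lr=0.01, batch_size=4, epochs=50, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(samples)
        for i in range(0, len(samples), batch_size):
            batch = samples[i:i + batch_size]
            # Gradient of mean squared error w.r.t. w over the mini-batch.
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad
    return w

# Fit y = 3x from noiseless samples; w should converge toward 3.
data = [(x, 3.0 * x) for x in range(1, 9)]
w_hat = minibatch_sgd(list(data))
```

Stochastic gradient descent is the `batch_size=1` special case of the same loop; continuous propagation variants differ in when weight updates are applied, not in the gradient itself.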

Printing apparatus

A printing apparatus includes: a printing section including a plurality of printing elements that perform printing on a medium; a power supply circuit supplying power to the printing section; a control circuit controlling the printing section and the power supply circuit; and a USB Type-C interface configured to couple to an external device and including a data transmission/reception terminal, a power input/output terminal, and a state identification terminal. When the state identification terminal detects that an external device receiving power via the power input/output terminal is coupled, and the printing section performs printing, more power is supplied to the external device when the drive rate of the plurality of printing elements is low than when the drive rate is high.
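The inverse relationship between printing-element drive rate and the power offered to the Type-C peripheral can be sketched as a simple budget split. The wattage figures below are hypothetical stand-ins, not values from the abstract.

```python
# Hypothetical power budget: whatever the printing section does not draw
# at the current drive rate can be offered to the external device.
TOTAL_BUDGET_W = 45.0      # illustrative total supply capacity
PRINTHEAD_PEAK_W = 30.0    # illustrative draw at 100% drive rate

def external_power_w(drive_rate):
    """drive_rate: fraction (0.0-1.0) of printing elements driven at once."""
    if not 0.0 <= drive_rate <= 1.0:
        raise ValueError("drive rate must be in [0, 1]")
    # A low drive rate leaves a larger share for the external device.
    return TOTAL_BUDGET_W - PRINTHEAD_PEAK_W * drive_rate

low_rate_power = external_power_w(0.2)
high_rate_power = external_power_w(0.8)
```

With these numbers, a 20% drive rate leaves 39 W for the peripheral while an 80% drive rate leaves 21 W, matching the claimed ordering.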

Frame protocol of memory device

Techniques are described herein for a frame training procedure that identifies a frame boundary and generates a frame clock marking the beginning and the end of a frame. During an activation time period after a power-up event, the memory device may initiate the frame training procedure. After the frame training procedure is complete, the memory device may execute a frame synchronization procedure that identifies the beginning of each frame from the frame clock alone, without the use of headers or other information within the frame. Once the frames are synchronized, the memory device may use that frame clock during the entire active session (e.g., until a power-down event) to identify the beginning of each frame.
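The key idea, that after training frame starts are located purely by counting clock cycles rather than parsing headers, can be sketched as follows. The class and field names are illustrative, assuming training yields a frame length and a phase offset.

```python
# Sketch of header-less frame synchronization: training is assumed to
# have produced a frame length and a phase; thereafter every frame start
# is found by arithmetic on the cycle count alone. Names are illustrative.
class FrameClock:
    def __init__(self, frame_len, phase):
        self.frame_len = frame_len   # cycles per frame, learned in training
        self.phase = phase           # cycle index of the first frame start

    def is_frame_start(self, cycle):
        return cycle >= self.phase and (cycle - self.phase) % self.frame_len == 0

# After training fixed frame_len=8 and phase=3, frame starts repeat
# every 8 cycles for the rest of the active session.
clk = FrameClock(frame_len=8, phase=3)
starts = [c for c in range(30) if clk.is_frame_start(c)]
```

Because no in-band header is consulted, the full frame payload remains available for data once synchronization is established.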

Method and apparatus for synchronizing the time stamp counter

A method and apparatus for synchronizing a time stamp counter (TSC) associated with a processor core in a computer system includes initializing the TSC by synchronizing it with at least one other TSC in a hierarchy of TSCs. One or more processor cores are powered down. Upon powering up of the one or more processor cores, the TSC associated with each such core is re-synchronized with the at least one other TSC in the hierarchy of TSCs.
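The hierarchy-based re-synchronization described above can be sketched in a few lines. The class below is purely illustrative; a real TSC is a hardware counter, and this model ignores propagation latency between levels of the hierarchy.

```python
# Illustrative model of a TSC hierarchy: a core-level counter that lost
# its value across a power-down re-synchronizes from its parent, which
# in turn synchronizes from its own parent, up to the reference root.
class Tsc:
    def __init__(self, parent=None):
        self.value = 0
        self.parent = parent

    def synchronize(self):
        # Walk up the hierarchy first, then copy the parent's count.
        if self.parent is not None:
            self.parent.synchronize()
            self.value = self.parent.value

root = Tsc()
root.value = 1_000_000          # reference TSC keeps counting
core_tsc = Tsc(parent=root)
core_tsc.value = 0              # state lost on core power-down
core_tsc.synchronize()          # re-sync on power-up
```

After `synchronize()`, the core's counter agrees with the reference at the top of the hierarchy, which is the property software such as `rdtsc`-based timing relies on.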

Reducing save restore latency for power control based on write signals

A method of save-restore operations includes monitoring, by a power controller of a parallel processor (such as a graphics processing unit), a register bus for one or more register write signals. The power controller determines that a register write signal is addressed to a state register that is designated to be saved prior to changing a power state of the parallel processor from a first state to a second state having a lower level of energy usage. The power controller instructs a copy of data corresponding to the state register to be written to a local memory module of the parallel processor. Subsequently, the parallel processor receives a power state change signal and writes the state register data saved at the local memory module to an off-chip memory prior to changing the power state of the parallel processor.
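The snooping behavior described above, mirroring only save-designated registers as they are written so the eventual power-state change only flushes the mirror, can be sketched as follows. All names, addresses, and the dictionary-based "memory" are illustrative.

```python
# Sketch of write-signal snooping for save-restore: writes to registers
# in the save set are mirrored into local memory as they occur, and the
# mirror is flushed off-chip on a power-state change. Names are illustrative.
class PowerController:
    def __init__(self, save_set):
        self.save_set = set(save_set)   # registers designated to be saved
        self.local_copy = {}            # mirror held in local memory

    def on_register_write(self, addr, value):
        if addr in self.save_set:
            self.local_copy[addr] = value

    def on_power_state_change(self, off_chip):
        # Flush the mirrored state to off-chip memory before powering down.
        off_chip.update(self.local_copy)

pc = PowerController(save_set={0x10, 0x14})
pc.on_register_write(0x10, 0xAB)   # save-designated: mirrored
pc.on_register_write(0x20, 0xCD)   # not in save set: ignored
dram = {}
pc.on_power_state_change(dram)
```

The latency saving comes from the fact that only the already-mirrored registers must be written off-chip at the power transition, rather than walking every state register at that moment.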

Methods and devices for power management based on synthetic machine learning benchmarks

A method for power management based on synthetic machine learning benchmarks includes: generating a record of synthetic machine learning benchmarks for synthetic machine learning models obtained by varying machine learning network topology parameters; receiving hardware information from a client device that is executing, or preparing to execute, a machine learning program; selecting a synthetic machine learning benchmark based on the correlation of the hardware information with the synthetic machine learning models; and determining work schedules based on the selected synthetic machine learning benchmark.

Method and apparatus for a power-efficient framework to maintain data synchronization of a mobile personal computer to simulate a connected scenario

An apparatus and method for a power-efficient framework to maintain data synchronization of a mobile personal computer (MPC) are described. In one embodiment, the method includes the detection of a data synchronization wakeup event while the MPC is operating according to a sleep state. Subsequent to the wakeup event, system resources beyond the minimum required to re-establish a network connection are disabled. In one embodiment, user data from a network server is synchronized on the MPC without user intervention; the mobile platform then resumes operation according to the sleep state. In one embodiment, a wakeup alarm is programmed according to a user history profile regarding received e-mails. In a further embodiment, data synchronizing involves disabling a display and throttling the system processor to operate at a reduced frequency. Other embodiments are described and claimed.
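The wake-sync-sleep cycle can be sketched as a single function: on a synchronization wakeup, keep only the resources needed for the network link, fetch the data without user intervention, and return to the sleep state. The resource names and flow below are illustrative.

```python
# Sketch of the power-efficient sync cycle. Resource names are
# hypothetical; the display and GPU stay disabled during the sync.
MINIMAL_RESOURCES = {"radio", "cpu_low_freq"}

def sync_cycle(available_resources, fetch_updates):
    # Enable only the minimal set needed to re-establish the network.
    enabled = available_resources & MINIMAL_RESOURCES
    updates = fetch_updates()    # synchronize user data, no user input
    # After syncing, the platform resumes the sleep state.
    return {"enabled": sorted(enabled), "updates": updates, "state": "sleep"}

result = sync_cycle({"radio", "cpu_low_freq", "display", "gpu"},
                    fetch_updates=lambda: ["mail_1", "mail_2"])
```

The user-history-driven wakeup alarm would sit outside this function, choosing *when* `sync_cycle` runs, e.g. shortly before the times e-mail typically arrives.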

Power management of components within a storage management system

As the volume of data under management expands rapidly, so do the costs associated with storing and managing that data on secondary storage devices. The illustrative approach provides an improvement to the information management system by delaying certain tasks that meet a set of criteria until a specified threshold is met. The system receives a request for a task to be performed on a set of data stored on secondary devices. A power management module determines whether the task satisfies a set of criteria for delayed execution, queues the task, and, when a specified threshold of queued tasks is met, powers up the necessary components to execute the tasks.
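The queue-until-threshold behavior can be sketched as follows. The delay criterion, the threshold value, and the power-up counter are illustrative; the point is that one component spin-up serves a whole batch of deferred tasks.

```python
# Sketch of threshold-based delayed execution: qualifying tasks are
# queued, and the secondary-storage components power up only once
# enough work has accumulated. Criterion and threshold are illustrative.
class DelayedTaskQueue:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.queue = []
        self.power_ups = 0          # counts component spin-ups

    def meets_delay_criteria(self, task):
        return task.get("priority") == "low"   # hypothetical criterion

    def submit(self, task, run):
        if not self.meets_delay_criteria(task):
            run([task])             # urgent task: execute immediately
            return
        self.queue.append(task)
        if len(self.queue) >= self.threshold:
            self.power_ups += 1     # one spin-up for the whole batch
            run(self.queue)
            self.queue.clear()

executed = []
q = DelayedTaskQueue(threshold=3)
for i in range(3):
    q.submit({"id": i, "priority": "low"}, executed.extend)
```

Three deferred tasks trigger a single power-up, whereas executing each immediately could have spun the devices up three times.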

Dynamic selection of cores for processing responses

Methods, systems, and devices for the dynamic selection of cores for processing responses are described. A memory sub-system can receive, from a host system, a read command to retrieve data. The memory sub-system can include a first core and a second core. The first core can process the read command based on receiving the read command. The first core can identify the second core for processing a read response associated with the read command. The first core can issue an internal command to retrieve the data from a memory device of the memory sub-system. The internal command can include an indication of the second core selected to process the read response.
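The command flow above, in which the first core tags the internal command with the core chosen to handle the response, can be sketched as follows. The field names and the selection callback are illustrative, not the memory sub-system's actual interface.

```python
# Sketch of dynamic response-core selection: the first core processes
# the host read command, picks a second core for the response, and
# embeds that choice in the internal command. Names are illustrative.
def handle_read_command(read_cmd, cores, pick_response_core):
    first_core = cores[0]                      # processes the host command
    second_core = pick_response_core(cores)    # chosen for the read response
    internal_cmd = {
        "op": "fetch",
        "addr": read_cmd["addr"],
        "response_core": second_core,          # routing hint for the reply
        "issued_by": first_core,
    }
    return internal_cmd

cmd = handle_read_command({"addr": 0x1000},
                          cores=["core0", "core1", "core2"],
                          pick_response_core=lambda cs: cs[-1])
```

Carrying the selection inside the internal command means the memory device's reply can be routed to the second core without the first core staying in the loop.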

Method and apparatus for mining competition relationships between POIs

A method and apparatus for mining a competition relationship between points of interest (POIs). An embodiment of the method includes: acquiring a graphlet mining result obtained by mining users' map retrieval data, which encompasses attribute information of retrieved target POIs, the graphlet mining result encompassing occurrence frequencies of respective preset situations, where a preset situation comprises conforming to attribute information of POIs represented by a corresponding preset graphlet and to a preset association relationship between attribute information of at least two POIs; for a first POI and a second POI, determining an occurrence frequency of the preset situation corresponding to a preset graphlet in which attribute information of the two POIs co-occurs, and generating a relationship feature of the pair; and inputting the relationship feature into a pre-trained relationship prediction model to obtain a competition relationship prediction result for the first POI and the second POI.
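The feature-construction step can be sketched as collecting, for a POI pair, the co-occurrence frequency under each preset graphlet. The graphlet labels, the weight vector, and the threshold "model" below are hypothetical stand-ins for the pre-trained relationship prediction model.

```python
# Sketch of building a relationship feature from graphlet-mining counts
# and scoring it; the stand-in linear "model" is hypothetical, not the
# patent's trained predictor. Graphlet labels g1-g3 are illustrative.
def relationship_feature(pair, graphlet_counts):
    # One occurrence frequency per preset graphlet situation in which
    # both POIs' attribute information co-occurs.
    return [graphlet_counts.get((pair, g), 0) for g in ("g1", "g2", "g3")]

def predict_competition(feature, weights=(0.5, 0.3, 0.2), threshold=2.0):
    score = sum(w * f for w, f in zip(weights, feature))
    return score >= threshold

counts = {(("poi_a", "poi_b"), "g1"): 4, (("poi_a", "poi_b"), "g3"): 2}
feat = relationship_feature(("poi_a", "poi_b"), counts)
is_competitor = predict_competition(feat)
```

In the embodiment described, the linear scorer would be replaced by the pre-trained relationship prediction model; the feature layout, one frequency per preset graphlet, is the part this sketch illustrates.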