G06N3/065

Artificial neuron synaptic weights implemented with variable dissolvable conductive paths

A low-power, controllable, and reconfigurable method to control the weights of model neurons in an Artificial Neural Network is disclosed. Memristors are utilized as adjustable synapses, where the memristor resistance reflects the synapse weight. The injection of extremely small electric currents (a few nanoamperes) into each cell forces the resistance to drop abruptly by several orders of magnitude due to the formation of a conductive path between the two electrodes. These conductive paths dissolve as soon as the current injection stops, and the cells return to their initial state. Repeated injection of currents into the same cell results in an almost identical resistance drop. Different, stable resistance values in each cell can be controllably achieved by injecting different current values.
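
The volatile-filament scheme described above can be sketched as a toy model: injecting a programming current forms a conductive path that drops the cell resistance, and removing the current dissolves the path, restoring the pristine state. The class name, the inverse current-to-resistance mapping, and all constants are illustrative assumptions, not values from the disclosure.

```python
class MemristorSynapse:
    """Toy model of a volatile memristor synapse (assumed parameters)."""
    R_INITIAL = 1e9  # pristine high-resistance state, ohms (illustrative)

    def __init__(self):
        self.resistance = self.R_INITIAL

    def inject_current(self, current_nA):
        # Conductive path forms; resistance drops by orders of magnitude,
        # with the final value set by the injected current (assumed inverse law).
        self.resistance = 1e6 / current_nA

    def stop_injection(self):
        # Path dissolves; the cell returns to its initial state.
        self.resistance = self.R_INITIAL

    def weight(self):
        # Synaptic weight read out as conductance (1/R).
        return 1.0 / self.resistance

cell = MemristorSynapse()
cell.inject_current(4.0)      # a few nanoamperes
low_r = cell.resistance       # several orders of magnitude below R_INITIAL
cell.stop_injection()         # volatile: back to the pristine state
```

Injecting a different current value would land the cell on a different stable resistance, which is how distinct weight levels are set.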

Software assisted power management

Embodiments include an apparatus comprising an execution unit coupled to a memory, a microcode controller, and a hardware controller. The microcode controller is to identify a global power and performance hint in an instruction stream that includes first and second instruction phases to be executed in parallel, identify a first local hint based on a synchronization dependence in the first instruction phase, and use the first local hint to balance power consumption between the execution unit and the memory during parallel execution of the first and second instruction phases. The hardware controller is to use the global hint to determine an appropriate voltage level of a compute voltage and a frequency of a compute clock signal for the execution unit during the parallel execution of the first and second instruction phases. The first local hint includes a processing rate for the first instruction phase or an indication of the processing rate.
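
As a rough illustration (not the patented microcode), the global hint could select a voltage/frequency operating point while a per-phase local hint throttles the faster consumer so the parallel phases finish together. The table, hint names, and rate model are all assumptions for illustration.

```python
# Hypothetical global hint -> (compute voltage in V, clock in MHz) table.
DVFS_TABLE = {
    "compute_bound": (1.10, 3000),
    "memory_bound":  (0.85, 1800),
    "balanced":      (0.95, 2400),
}

def apply_global_hint(hint):
    """Hardware-controller role: pick a voltage/frequency point from the hint."""
    return DVFS_TABLE[hint]

def balance_phase(execution_rate, memory_rate):
    """Microcode-controller role: duty-cycle the faster side so the
    execution unit and memory consume balanced power during the phase."""
    if execution_rate > memory_rate:
        return {"execution_unit": memory_rate / execution_rate, "memory": 1.0}
    return {"execution_unit": 1.0, "memory": execution_rate / memory_rate}

volts, mhz = apply_global_hint("memory_bound")
duty = balance_phase(execution_rate=4.0, memory_rate=2.0)  # compute-heavy phase
```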

Redundant memory access for rows or columns containing faulty memory cells in analog neural memory in deep learning artificial neural network

Numerous embodiments are disclosed for accessing redundant non-volatile memory cells in place of one or more rows or columns containing one or more faulty non-volatile memory cells during a program, erase, read, or neural read operation in an analog neural memory system used in a deep learning artificial neural network.
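
A minimal sketch of the row-redundancy idea, with assumed class and method names: accesses to a row flagged as faulty are transparently redirected to a redundant row, so program and read operations need no changes.

```python
class RedundantArray:
    """Toy memory array with spare rows substituted for faulty ones."""

    def __init__(self, n_rows, n_redundant, n_cols=8):
        self.rows = [[0.0] * n_cols for _ in range(n_rows + n_redundant)]
        self.remap = {}            # faulty row -> redundant row index
        self.next_spare = n_rows   # spare rows live past the normal rows

    def mark_faulty(self, row):
        # Retire the faulty row by mapping it to the next free spare row.
        self.remap[row] = self.next_spare
        self.next_spare += 1

    def _resolve(self, row):
        # Redirect the access if this row has been remapped.
        return self.remap.get(row, row)

    def program(self, row, values):
        self.rows[self._resolve(row)] = list(values)

    def read(self, row):
        return self.rows[self._resolve(row)]

arr = RedundantArray(n_rows=4, n_redundant=2)
arr.mark_faulty(2)                 # row 2 contains a faulty cell
arr.program(2, [1.0] * 8)          # lands in the redundant row instead
```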

Neuromorphic device with crossbar array structure storing both weights and neuronal states of neural networks

Neuromorphic methods, systems and devices are provided. The embodiment may include a neuromorphic device which may comprise a crossbar array structure and an analog circuit. The crossbar array structure may include N input lines and M output lines interconnected at junctions via N×M electronic devices, each of which, in preferred embodiments, includes a memristive device. The input lines may comprise N.sub.1 first input lines and N.sub.2 second input lines. The first input lines may be connected to the M output lines via N.sub.1×M first devices of said electronic devices. Similarly, the second input lines may be connected to the M output lines via N.sub.2×M second devices of said electronic devices. The analog circuit may be configured to program the electronic devices so that the first devices store synaptic weights and the second devices store neuronal states.
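
The partitioned crossbar can be sketched numerically: the N1 first input lines drive devices holding synaptic weights, the N2 second input lines drive devices holding neuronal states, and both share the same M output lines, which sum all contributions. Function and variable names are assumptions.

```python
def crossbar_output(x1, x2, first_devices, second_devices):
    """Each output line m sums the weight-device and state-device currents:
    out[m] = sum_i x1[i]*first_devices[i][m] + sum_j x2[j]*second_devices[j][m]."""
    M = len(first_devices[0])
    out = [0.0] * M
    for i, v in enumerate(x1):                 # N1 first input lines
        for m in range(M):
            out[m] += v * first_devices[i][m]  # synaptic weights
    for j, v in enumerate(x2):                 # N2 second input lines
        for m in range(M):
            out[m] += v * second_devices[j][m] # neuronal states
    return out

# N1 = 1, N2 = 1, M = 2 toy example:
y = crossbar_output([1.0], [2.0], [[1.0, 0.5]], [[0.25, 1.0]])
```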

Accelerating sparse matrix multiplication in storage class memory-based convolutional neural network inference

Techniques are presented for accelerating in-memory matrix multiplication operations for convolutional neural network (CNN) inference in which the weights of a filter are stored in the memory of a storage class memory device, such as a ReRAM or phase change memory based device. To improve performance for inference operations when filters exhibit sparsity, a zero column index and a zero row index are introduced to account for columns and rows having all zero weight values. These indices can be saved in a register on the memory device, and when performing a column- or row-oriented matrix multiplication, if the zero row/column index indicates that the column/row contains all zero weights, the access of the corresponding bit/word line is skipped, as the result will be zero regardless of the input.
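
The zero-row-index optimization can be sketched in software: a per-row flag marks rows whose weights are all zero, and those word-line accesses are skipped during a row-oriented multiply since they cannot contribute to the output. Names are illustrative, not from the disclosure.

```python
def build_zero_row_index(weights):
    """One flag per row: True if every weight in the row is zero."""
    return [all(w == 0 for w in row) for row in weights]

def row_oriented_matmul(inputs, weights, zero_row_index):
    """Accumulate input*weight per column, skipping all-zero rows."""
    cols = len(weights[0])
    out = [0.0] * cols
    accessed = 0
    for r, x in enumerate(inputs):
        if zero_row_index[r]:
            continue               # skip the word line: result is zero anyway
        accessed += 1
        for c in range(cols):
            out[c] += x * weights[r][c]
    return out, accessed

W = [[0, 0, 0], [1, 2, 3], [0, 0, 0], [4, 5, 6]]   # sparse filter weights
idx = build_zero_row_index(W)
y, n_accessed = row_oriented_matmul([1, 1, 1, 1], W, idx)
```

Here only two of four word lines are accessed, which is the performance win when filters are sparse.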

Image sensor having on-chip compute circuit

In one example, an apparatus comprises: a first sensor layer, including an array of pixel cells configured to generate pixel data; and one or more semiconductor layers located beneath the first sensor layer with the one or more semiconductor layers being electrically connected to the first sensor layer via interconnects. The one or more semiconductor layers comprises on-chip compute circuits configured to receive the pixel data via the interconnects and process the pixel data, the on-chip compute circuits comprising: a machine learning (ML) model accelerator configured to implement a convolutional neural network (CNN) model to process the pixel data; a first memory to store coefficients of the CNN model and instruction codes; a second memory to store the pixel data of a frame; and a controller configured to execute the codes to control operations of the ML model accelerator, the first memory, and the second memory.
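
The on-chip dataflow can be modeled loosely: pixel data travels from the sensor layer over the interconnects into a frame memory, while the controller runs accelerator code that reads CNN coefficients from a separate memory to process the frame. The class, the one-line stand-in for the CNN, and all names are assumptions for illustration.

```python
class OnChipCompute:
    """Toy model of the compute layers beneath the sensor layer."""

    def __init__(self, coefficients):
        self.coeff_mem = coefficients  # first memory: CNN coefficients + code
        self.frame_mem = None          # second memory: pixel data of one frame

    def receive_frame(self, pixel_data):
        # Pixel data arrives from the sensor layer via the interconnects.
        self.frame_mem = pixel_data

    def run_accelerator(self):
        # Stand-in for the CNN model: a single weighted sum over the frame.
        return sum(p * w for p, w in zip(self.frame_mem, self.coeff_mem))

chip = OnChipCompute(coefficients=[0.5, 0.5])
chip.receive_frame([2.0, 4.0])
result = chip.run_accelerator()
```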

INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
20230023123 · 2023-01-26

A reservoir includes a common input layer; first and second output layers that output first and second readout values based on an input; a first partial reservoir including the input layer and the first output layer; and a second partial reservoir, between the input layer and the second output layer, whose size is larger than that of the first partial reservoir. The training processing includes: first, calculating a third output weight that reduces a difference between a first product-sum value of a third readout value and a first output weight, and teaching data; and second, calculating a fourth output weight that reduces a difference between a second product-sum value of a fourth readout value and a second output weight, and differential teaching data that is a difference between a third product-sum value of the third readout value and the third output weight, and the teaching data.
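
Reading the two-stage training as residual fitting, a hedged sketch: the smaller partial reservoir's output weight is fit against the teaching data, then the larger partial reservoir's output weight is fit against the residual (the "differential teaching data"). Scalar weights and least squares are simplifying assumptions, not the claimed method.

```python
def fit_weight(readouts, targets):
    """Least-squares scalar weight w minimizing sum((w*r - t)^2)."""
    num = sum(r * t for r, t in zip(readouts, targets))
    den = sum(r * r for r in readouts)
    return num / den

def train(readout1, readout2, teaching):
    # First calculation: fit the first partial reservoir to the teaching data.
    w3 = fit_weight(readout1, teaching)
    # Differential teaching data: residual left by the first readout.
    residual = [t - w3 * r for r, t in zip(readout1, teaching)]
    # Second calculation: fit the second partial reservoir to the residual.
    w4 = fit_weight(readout2, residual)
    return w3, w4

w3, w4 = train([1.0, 2.0], [0.5, 1.0], [2.0, 4.0])
```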

NEURAL NETWORK COMPUTING DEVICE AND COMPUTING METHOD THEREOF
20230027768 · 2023-01-26

A computing method for performing a matrix multiplying-and-accumulating computation by a flash memory array which includes word lines, bit lines and flash memory cells. The computing method includes the following steps: respectively storing a weight value in each of the flash memory cells, receiving a plurality of input voltages via the word lines, performing a computation on one of the input voltages and the weight value by each of the flash memory cells to obtain an output current, outputting the output currents of the flash memory cells via the bit lines, and accumulating the output currents of the flash memory cells connected to the same bit line of the bit lines to obtain a total output current. Each of the flash memory cells is an analog device, and each of the input voltages, each of the output currents and each of the weight values are analog values.
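
The analog multiply-and-accumulate reduces to two physical laws, sketched numerically below: each cell multiplies its stored weight (as a conductance) by its word-line voltage via Ohm's law, and the shared bit line sums the resulting currents via Kirchhoff's current law. Names are illustrative.

```python
def bitline_current(input_voltages, conductances):
    """Total current on one bit line: I_total = sum_i V_i * G_i."""
    return sum(v * g for v, g in zip(input_voltages, conductances))

def array_mac(input_voltages, G):
    """G[i][j]: conductance (weight) of the cell at word line i, bit line j.
    Returns one accumulated current per bit line."""
    n_bitlines = len(G[0])
    return [bitline_current(input_voltages, [row[j] for row in G])
            for j in range(n_bitlines)]

# Two word lines, two bit lines:
currents = array_mac([0.5, 1.0], [[2.0, 0.0],
                                  [1.0, 3.0]])
```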

ANALOG NEUROMORPHIC CIRCUIT WITH STACKS OF RESISTIVE MEMORY CROSSBAR CONFIGURATIONS
20230028592 · 2023-01-26

An analog neuromorphic circuit is disclosed having resistive memory crossbar configurations positioned in the analog neuromorphic circuit to form a 3D stack. Input voltages are applied to an input selector unit that selects a first selected resistive memory crossbar configuration to which the input voltages are applied. Output voltages are generated by the first selected resistive memory crossbar configuration from a propagation of the input voltages through resistive memories positioned on the first selected resistive memory crossbar configuration. An output selector unit selects the first selected resistive memory crossbar configuration that generates the output voltages. Each output voltage corresponds to an output of the first selected resistive memory crossbar configuration as selected by the output selector unit. An activation function unit receives the output voltages generated by the first selected resistive memory crossbar configuration and executes a function based on the output voltages received from the first selected resistive memory crossbar configuration.
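
The stacked selection can be sketched as follows: a selector routes the input voltages to one crossbar layer in the 3D stack, that layer's resistive memories propagate them into output voltages, and an activation function unit transforms the selected layer's outputs. The class name and the choice of tanh as the activation are assumptions.

```python
import math

class CrossbarStack:
    """Toy 3D stack of crossbar layers, each a conductance matrix."""

    def __init__(self, layers):
        self.layers = layers  # list of 2D conductance matrices

    def forward(self, inputs, selected_layer):
        # Input/output selector units pick one layer of the stack.
        G = self.layers[selected_layer]
        # Propagate input voltages through that layer's resistive memories.
        outputs = [sum(v * G[i][j] for i, v in enumerate(inputs))
                   for j in range(len(G[0]))]
        # Activation function unit acts on the selected layer's outputs.
        return [math.tanh(o) for o in outputs]

stack = CrossbarStack([
    [[1.0, 0.0], [0.0, 1.0]],   # layer 0: identity-like conductances
    [[0.5, 0.5], [0.5, 0.5]],   # layer 1: uniform conductances
])
out = stack.forward([1.0, 0.0], selected_layer=0)
```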