Patent classifications
G06N3/06
Artificial neuron
An artificial neuron including: a membrane capacitor; an input of an external synaptic excitation in current, the membrane capacitor integrating the input current; a negative-feedback impulse circuit, supplied by a power supply at a negative voltage between −200 mV and 0 mV and at a positive voltage between 0 mV and +200 mV, including: a bridge based on pMOS and nMOS transistors in series and linked by a midpoint to the membrane capacitor, the midpoint defining the output of the artificial neuron, at least one delay capacitor between the gate and the source of one of the transistors of the bridge, at least two CMOS inverters between the membrane capacitor and the gates of the transistors of the bridge.
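The claimed circuit behaves like a leaky integrate-and-fire neuron: the membrane capacitor integrates the input current until the inverter chain flips the output bridge, and the negative-feedback impulse circuit discharges the membrane. A minimal software sketch of that behavior, with illustrative parameter values not taken from the patent:

```python
# Hypothetical software model of the analog neuron described above: the
# membrane capacitor integrates input current, and negative feedback resets
# the membrane after each spike. All parameter values are assumptions.

def simulate_neuron(input_current, c_mem=1.0, v_thresh=0.1, v_reset=-0.1, dt=1e-3):
    """Integrate an input-current sequence; return the emitted spike times."""
    v = 0.0
    spikes = []
    for step, i_in in enumerate(input_current):
        v += (i_in / c_mem) * dt          # capacitor integrates the input current
        if v >= v_thresh:                 # inverter chain trips the output bridge
            spikes.append(step * dt)
            v = v_reset                   # negative-feedback impulse discharges membrane
    return spikes
```

Driving the model with a constant 0.5 input produces a spike roughly every 0.4 simulated seconds under these assumed parameters.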
Method and apparatus for generating a chemical structure using a neural network
A method of generating a chemical structure performed by a neural network device includes receiving a target property value and a target structure characteristic value; selecting first generation descriptors; generating second generation descriptors; determining, using a first neural network of the neural network device, property values of the second generation descriptors; determining, using a second neural network of the neural network device, structure characteristic values of the second generation descriptors; selecting, from the second generation descriptors, candidate descriptors that satisfy the target property value and the target structure characteristic value; and generating, using the second neural network of the neural network device, chemical structures for the selected candidate descriptors.
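The generate-predict-select loop in this abstract can be sketched in miniature. Here the two neural networks are stood in for by plain scoring functions, descriptors are represented as single numbers, and the mutation step and tolerance are assumptions for illustration only:

```python
# Toy sketch of the descriptor-evolution loop: generate second-generation
# descriptors from the first generation, score them with two predictors
# (stand-ins for the patent's two neural networks), and keep candidates
# that satisfy both targets. Names and thresholds are assumptions.
import random

def select_candidates(first_gen, predict_property, predict_structure,
                      target_property, target_structure, tol=0.5, n_children=4):
    # Generate second-generation descriptors by perturbing the first generation.
    second_gen = [d + random.uniform(-1.0, 1.0)
                  for d in first_gen for _ in range(n_children)]
    # Keep only descriptors whose predicted property and structure
    # characteristic values both fall within tolerance of the targets.
    return [d for d in second_gen
            if abs(predict_property(d) - target_property) <= tol
            and abs(predict_structure(d) - target_structure) <= tol]
```

In the patent, a second network would then decode each surviving descriptor back into a chemical structure; this sketch stops at candidate selection.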
Neuromorphic event-driven neural computing architecture in a scalable neural network
An event-driven neural network including a plurality of interconnected core circuits is provided. Each core circuit includes an electronic synapse array that has multiple digital synapses interconnecting a plurality of digital electronic neurons. A synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. A neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. Each core circuit also has a scheduler that receives a spike event and delivers the spike event to a selected axon in the synapse array based on a schedule for deterministic event delivery.
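The scheduler described above can be modeled as a priority queue keyed on delivery time: spike events are enqueued with a delay and released to their target axons only at the scheduled tick, which is what makes delivery deterministic. A small sketch (field names are assumptions, not the patent's):

```python
# Toy model of the per-core scheduler: spike events are queued with a
# delivery delay and released only at their scheduled tick, giving
# deterministic event delivery to axons in the synapse array.
import heapq

class SpikeScheduler:
    def __init__(self):
        self._queue = []   # entries: (delivery_tick, sequence, axon_id)
        self._seq = 0      # tie-breaker keeps same-tick delivery order deterministic

    def receive(self, axon_id, current_tick, delay):
        """Schedule a spike event for delivery `delay` ticks from now."""
        heapq.heappush(self._queue, (current_tick + delay, self._seq, axon_id))
        self._seq += 1

    def deliver(self, current_tick):
        """Return the axon ids whose scheduled delivery time has arrived."""
        due = []
        while self._queue and self._queue[0][0] <= current_tick:
            due.append(heapq.heappop(self._queue)[2])
        return due
```

Each delivered axon id would then drive a column of the synapse array, depositing weighted input onto the connected neurons.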
Systems for introducing memristor random telegraph noise in Hopfield neural networks
Systems are provided for implementing a hardware accelerator. The hardware accelerator emulates a stochastic neural network and includes a first memristor crossbar array and a second memristor crossbar array. The first memristor crossbar array can be programmed to calculate node values of the neural network. The node values can be calculated in accordance with rules that reduce an energy function associated with the neural network. The second memristor crossbar array is coupled to the first memristor crossbar array and programmed to introduce noise signals into the neural network. The noise signals can be introduced such that the energy function associated with the neural network converges toward a global minimum and modifies the calculated node values.
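The two-crossbar scheme can be sketched in software: one "array" computes the Hopfield node updates that lower the energy, and a second injects noise into the threshold decision so the network can escape local minima. Weights, noise amplitude, and the update rule below are illustrative assumptions:

```python
# Sketch of a noisy Hopfield update. The weighted sums play the role of the
# first memristor crossbar; the additive noise term plays the role of the
# second crossbar's random-telegraph-noise injection. Values are assumptions.
import random

def hopfield_step(weights, state, noise_amplitude=0.0, rng=random):
    """One synchronous update of bipolar (+1/-1) Hopfield node values."""
    new_state = []
    for row in weights:
        activation = sum(w * s for w, s in zip(row, state))
        # Noise perturbs the threshold decision, letting the network
        # occasionally move uphill in energy and escape local minima.
        activation += rng.uniform(-noise_amplitude, noise_amplitude)
        new_state.append(1 if activation >= 0 else -1)
    return new_state
```

With `noise_amplitude=0.0` the update is the deterministic energy-descent rule; raising the amplitude (and annealing it toward zero) is the standard way such noise drives convergence toward a global minimum.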
Method and system for processing neural network
The present disclosure provides a neural network processing system comprising a multi-core processing module, composed of a plurality of core processing modules, for executing vector multiplication and addition operations in a neural network operation; an on-chip storage medium; an on-chip address index module; and an ALU module for executing non-linear operations that the multi-core processing module cannot complete, according to input data acquired from the multi-core processing module or the on-chip storage medium. The plurality of core processing modules either share the on-chip storage medium and the ALU module, or each have an independent on-chip storage medium and ALU module. The present disclosure improves the operating speed of the neural network processing system, making its performance higher and more efficient.
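The division of labor described above can be illustrated in a few lines: core modules each handle a slice of the vector multiply-accumulate work, and a shared "ALU module" applies the non-linear operation the cores cannot compute. The row-slicing scheme and the sigmoid activation are assumptions chosen for illustration:

```python
# Toy sketch of the claimed split: parallel core modules do the vector
# multiplication/addition over row slices, and a shared ALU module applies
# the non-linear activation. Slicing and activation choice are assumptions.
import math

def multicore_layer(weights, inputs, n_cores=2):
    n = len(weights)
    partial = [0.0] * n
    # Each "core" processes an interleaved slice of output rows (MAC work).
    for core in range(n_cores):
        for i in range(core, n, n_cores):
            partial[i] = sum(w * x for w, x in zip(weights[i], inputs))
    # Shared "ALU module": non-linear operation applied to every partial sum.
    return [1.0 / (1.0 + math.exp(-p)) for p in partial]
```

In hardware the row slices would run concurrently; the sequential loop here only shows how the work is partitioned.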
Decompression apparatus for decompressing a compressed artificial intelligence model and control method thereof
A decompression apparatus is provided. The decompression apparatus includes a memory configured to store compressed data to be decompressed and used in neural network processing of an artificial intelligence model; a decoder including a plurality of logic circuits related to the compression method of the compressed data, configured to decompress the compressed data through the plurality of logic circuits based on an input of the compressed data and to output the decompressed data; and a processor configured to obtain data in a neural-network-processible form from the data output by the decoder.
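The decompress-then-reshape flow can be illustrated with a deliberately simple stand-in: a run-length decoder plays the role of the patent's logic-circuit decompression, and a reshape step plays the role of the processor that produces data in a neural-network-processible form. Both the encoding scheme and the layout are assumptions:

```python
# Hypothetical illustration of the two-stage flow: a "decoder" expands
# compressed weight data (here, simple run-length pairs as a stand-in for
# the patent's logic circuits), and a "processor" reshapes the flat output
# into a weight matrix a neural network layer can consume.

def decode_rle(compressed):
    """Decoder stage: expand (value, run_length) pairs into flat weight data."""
    flat = []
    for value, run in compressed:
        flat.extend([value] * run)
    return flat

def to_processible_form(flat, rows, cols):
    """Processor stage: reshape flat data into a rows x cols weight matrix."""
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]
```

Run-length coding suits pruned models, where long runs of zero weights compress well; the patent's actual compression method is not specified here.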