Patent classifications
G06N3/06
SYNTHESIS OF BRANCHING MORPHOLOGIES
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating model neurons. In one aspect, a method includes receiving a plurality of descriptions of branches of dendrites of one or more neurons and generating a collection of model neurites. Each of the descriptions characterizes, for an individual branch, i) a distance from a cell body at which the individual branch first bifurcates and ii) a distance from the cell body at which the individual branch actually terminates. Generating the collection of model neurites includes repeatedly selecting a description of a branch from the plurality and probabilistically generating a topology of a model neurite based on the selected description. The probabilistic generation of the model neurite includes deciding whether to bifurcate, terminate, or continue the model neurite at different positions based on the selected description.
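As one plausible reading of this abstract, the probabilistic growth procedure can be sketched as a recursive walk that, at each position, bifurcates, terminates, or continues. The step size, branching probability, termination probability, and depth cap below are illustrative assumptions, not parameters taken from the patent:

```python
import random

def generate_neurite(bifurcation_dist, termination_dist, step=1.0, max_depth=4):
    """Probabilistically grow one model neurite, segment by segment.

    bifurcation_dist and termination_dist correspond to the measured
    distances in each branch description; the probability forms and
    constants here are assumed for illustration.
    """
    def grow(pos, depth):
        while pos < termination_dist:
            if depth < max_depth and pos >= bifurcation_dist and random.random() < 0.3:
                # bifurcate: recurse into two daughter branches
                return {"bifurcate_at": pos,
                        "children": [grow(pos + step, depth + 1),
                                     grow(pos + step, depth + 1)]}
            if random.random() < pos / termination_dist * 0.1:
                break  # probabilistic early termination
            pos += step  # otherwise continue growing
        return {"terminate_at": pos}

    return grow(0.0, 0)
```

Repeatedly sampling a branch description from the input collection and calling `generate_neurite` on its two distances would yield the described collection of model neurites.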
AUTOMATICALLY DETERMINING NEURAL NETWORK ARCHITECTURES BASED ON SYNAPTIC CONNECTIVITY
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for determining network architectures based on synaptic connectivity. One of the methods includes processing a network input using a neural network to generate a network output, comprising: processing the network input using an encoder subnetwork of the neural network to generate an embedding of the network input; processing the embedding of the network input using a first connectivity layer of the neural network to generate a first connectivity layer output; processing the first connectivity layer output using a brain emulation subnetwork of the neural network to generate a brain emulation subnetwork output; processing the brain emulation subnetwork output using a second connectivity layer of the neural network to generate a second connectivity layer output; and processing the second connectivity layer output using a decoder subnetwork of the neural network to generate the network output.
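The described pipeline (encoder, first connectivity layer, brain emulation subnetwork, second connectivity layer, decoder) can be sketched as a plain forward pass. The dense layers, the binary sparsity mask standing in for a synaptic connectivity graph, and all sizes below are illustrative assumptions; the patent's brain emulation subnetwork derives its structure from measured synaptic connectivity rather than from a random mask:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(in_dim, out_dim):
    # A plain tanh dense layer stands in for each subnetwork.
    w = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: np.tanh(x @ w)

# Assumed synapse graph: a binary mask zeroing out absent connections.
connectivity_mask = rng.random((16, 16)) < 0.2
brain_w = rng.standard_normal((16, 16)) * connectivity_mask

encoder = dense(8, 16)
conn_layer_1 = dense(16, 16)
conn_layer_2 = dense(16, 16)
decoder = dense(16, 4)

def forward(x):
    h = encoder(x)            # embedding of the network input
    h = conn_layer_1(h)       # first connectivity layer output
    h = np.tanh(h @ brain_w)  # brain emulation subnetwork output
    h = conn_layer_2(h)       # second connectivity layer output
    return decoder(h)         # network output
```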
IMPLEMENTING NEURAL NETWORKS THAT INCLUDE CONNECTIVITY NEURAL NETWORK LAYERS USING SYNAPTIC CONNECTIVITY
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for implementing connectivity neural network layers. One of the methods includes processing a network input using a neural network to generate a network output, comprising: generating a layer input to a connectivity layer of the neural network based on the network input, wherein the layer input to the connectivity layer comprises a plurality of input values arranged in a plurality of input channels; processing the layer input using the connectivity layer to generate a layer output comprising a plurality of output values arranged in a plurality of output channels; processing the plurality of output channels of the connectivity layer using a brain emulation subnetwork of the neural network to generate a brain emulation subnetwork output; and generating the network output based on the brain emulation subnetwork output.
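One way to read the channel arrangement in this abstract is that the connectivity layer maps a set of input channels to a set of output channels, each output channel being a learned mixture of the input channels, after which the brain emulation subnetwork processes the output channels. The mixing formulation, channel counts, and sparsity mask below are all assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

C_IN, C_OUT, D = 3, 5, 8  # assumed channel counts and per-channel size

mix = rng.standard_normal((C_OUT, C_IN))          # connectivity layer weights
adjacency = (rng.random((D, D)) < 0.25) * 1.0     # assumed synapse graph
brain_w = rng.standard_normal((D, D)) * adjacency

def connectivity_layer(x):
    # x: (C_IN, D) input values arranged in input channels;
    # each output channel is a mixture of the input channels.
    return mix @ x                                 # -> (C_OUT, D)

def brain_emulation(channels):
    # process each output channel with the sparsity-masked weights
    return np.tanh(channels @ brain_w)             # -> (C_OUT, D)

layer_out = connectivity_layer(rng.standard_normal((C_IN, D)))
net_out = brain_emulation(layer_out).mean(axis=0)  # pooled network output
```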
BIOPROCESSING ELEMENT AND NEURAL NETWORK PROCESSOR WITH BIOPROCESSING ELEMENT
A bioprocessing device performing an operation based on cultured biological neurons includes: an electrode layer comprising electrodes connected to the biological neurons; circuit layers, stacked with the electrode layer, comprising circuits for the biological neurons; and inter-layer connectors configured to connect the electrode layer and the circuit layers.
DATA AUGMENTATION USING BRAIN EMULATION NEURAL NETWORKS
In one aspect, there is provided a method performed by one or more data processing apparatus, the method including receiving a training dataset having multiple training examples, where each training example includes: (i) an image, and (ii) a segmentation defining a target region of the image that has been classified as including pixels in a target category. The method further includes determining a respective refined segmentation for each training example, including, for each training example, processing the target region of the image defined by the segmentation for the training example using a de-noising neural network to generate a network output that defines the refined segmentation for the training example. The method further includes training a segmentation machine learning model on the training examples of the training dataset, including, for each training example training the segmentation machine learning model to process the image included in the training example to generate a model output that matches the refined segmentation for the training example.
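The two-stage procedure here (refine each training segmentation with a de-noising network, then train the segmentation model against the refined targets) can be sketched as follows. The `denoise` function below is a trivial majority-smoothing stand-in for the de-noising neural network, purely an assumption for illustration:

```python
import numpy as np

def denoise(region):
    # Stand-in for the de-noising neural network: smooth the binary
    # segmentation mask by majority vote (assumed behavior).
    return (region.mean() > 0.5) * np.ones_like(region)

def refine_dataset(training_examples):
    """Produce a refined segmentation for each (image, segmentation) pair.

    The segmentation model would then be trained so its output on each
    image matches the refined segmentation rather than the original one.
    """
    refined = []
    for image, segmentation in training_examples:
        refined_seg = denoise(segmentation)
        refined.append((image, refined_seg))
    return refined
```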
Low spike count ring buffer mechanism on neuromorphic hardware
Low spike count ring buffer mechanisms on neuromorphic hardware are provided. A ring buffer comprises a plurality of memory cells. The plurality of memory cells comprises one or more neurosynaptic cores. A demultiplexer is operatively coupled to the ring buffer. The demultiplexer is adapted to receive input comprising a plurality of spikes and write sequentially to each of the plurality of memory cells. A plurality of output connectors is operatively coupled to the ring buffer. Each of the plurality of output connectors is adapted to provide an output based on contents of a subset of the plurality of memory cells.
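The mechanism described can be sketched in software: the demultiplexer writes incoming spike counts sequentially into ring cells, wrapping around, and each output connector reads a fixed subset of cells. Treating each connector's subset as a contiguous window is one plausible interpretation, not something the abstract specifies:

```python
class SpikeRingBuffer:
    """Software sketch of the spike ring buffer mechanism."""

    def __init__(self, n_cells, window):
        self.cells = [0] * n_cells
        self.window = window  # assumed subset size per output connector
        self.head = 0         # demultiplexer write position

    def write(self, spike_count):
        # Demultiplexer: write to the next cell, wrapping around.
        self.cells[self.head] = spike_count
        self.head = (self.head + 1) % len(self.cells)

    def read(self, connector_idx):
        # Output connector: aggregate its subset of memory cells.
        start = connector_idx * self.window
        return sum(self.cells[start:start + self.window])
```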
ENHANCED DIGITAL SIGNAL PROCESSOR (DSP) NAND FLASH
Systems and methods for digital signal processing (DSP) in a non-volatile memory (NVM) device comprising CMOS coupled to an NVM die of a data storage device. According to certain embodiments, one or more DSP calculations are provided by a controller to the CMOS components of the NVM, which configure one or more memory dies to carry out atomic calculations on the data resident on the die. The results of the calculations of each die are provided to an output latch for each die, propagating data back to the configured calculation portion as needed and otherwise forwarding the results to the controller. The controller aggregates the results of the DSP calculations of each die and presents the results to the host system.
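The data flow here (per-die atomic calculation, result latched per die, controller-side aggregation for the host) can be sketched as follows; the choice of a sum as the atomic operation and as the aggregation is an assumption for illustration:

```python
def die_atomic_calc(die_data, op):
    # Each configured memory die performs an atomic calculation on the
    # data resident on that die; a sum is assumed here.
    if op == "sum":
        return sum(die_data)
    raise ValueError(f"unsupported operation: {op}")

def controller_aggregate(dies, op="sum"):
    # Each die's result lands in its output latch; the controller then
    # aggregates the per-die results and presents them to the host.
    latches = [die_atomic_calc(d, op) for d in dies]
    return sum(latches)

total = controller_aggregate([[1, 2], [3, 4], [5]])
```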
SYSTEM AND METHOD FOR CONTROLLING PHYSICAL SYSTEMS USING BRAIN WAVES
Embodiments of a system for controlling an object using brainwaves are disclosed. The system includes a set of EEG electrodes configured to be positioned on a head of a user and to collect EEG signals. The system further includes one or more computer readable storage mediums storing a framework configured to execute an extensible architecture through which EEG signals are interpreted for control of the object. The framework includes an EEG device plugin associated with the set of EEG electrodes and configured to extract the EEG signals from the set of EEG electrodes. The framework also includes an interpreter plugin configured to convert the EEG signals extracted by the EEG device plugin into a command. Further, the framework includes an object control plugin configured to access the command through an extension point of the interpreter plugin and to execute the command to control the object.
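The plugin chain described (EEG device plugin extracts signals, interpreter plugin converts them into a command, object control plugin accesses the command through the interpreter's extension point and executes it) can be sketched as three cooperating classes. The canned signal window and threshold rule below are purely illustrative stand-ins:

```python
class EEGDevicePlugin:
    """Extracts raw EEG samples; a stub returning canned data here."""

    def extract(self):
        return [0.12, -0.05, 0.33]  # assumed raw signal window

class InterpreterPlugin:
    """Converts extracted EEG signals into a command.

    A simple threshold rule stands in for real signal interpretation.
    """

    def interpret(self, signals):
        return "MOVE_FORWARD" if sum(signals) > 0 else "STOP"

class ObjectControlPlugin:
    """Accesses the interpreter's command via its extension point and
    executes it against the controlled object."""

    def __init__(self, interpreter):
        self.interpreter = interpreter  # extension point

    def execute(self, signals):
        command = self.interpreter.interpret(signals)
        return f"executed {command}"

device = EEGDevicePlugin()
controller = ObjectControlPlugin(InterpreterPlugin())
result = controller.execute(device.extract())
```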
Apparatus and method for executing recurrent neural network and LSTM computations
Aspects for Long Short-Term Memory (LSTM) blocks in a recurrent neural network (RNN) are described herein. As an example, the aspects may include one or more slave computation modules, an interconnection unit, and a master computation module collectively configured to calculate an activated input gate value, an activated forget gate value, a current cell status of the current computation period, an activated output gate value, and a forward pass result.
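The quantities named in the abstract (activated input, forget, and output gate values, the current cell status, and the forward pass result) map directly onto a standard LSTM step, which can be sketched as follows. The weight shapes are assumptions, and biases are omitted for brevity; the patent distributes this computation across slave modules, an interconnection unit, and a master module, which this single-function sketch does not model:

```python
import numpy as np

rng = np.random.default_rng(2)
H = 4  # hidden size (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, acting on [input, previous hidden state].
W_i, W_f, W_o, W_c = (rng.standard_normal((2 * H, H)) * 0.1 for _ in range(4))

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([x, h_prev])
    i = sigmoid(z @ W_i)                    # activated input gate value
    f = sigmoid(z @ W_f)                    # activated forget gate value
    c = f * c_prev + i * np.tanh(z @ W_c)   # current cell status
    o = sigmoid(z @ W_o)                    # activated output gate value
    h = o * np.tanh(c)                      # forward pass result
    return h, c
```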