Patent classifications
G06N3/061
Analog switched-capacitor neural network
Systems and methods are provided for reducing power in in-memory computing, matrix-vector computations, and neural networks. An apparatus for in-memory computing using charge-domain circuit operation includes transistors configured as memory bit cells, transistors configured to perform in-memory computing using the memory bit cells, capacitors configured to store a result of in-memory computing from the memory bit cells, and switches, wherein, based on the setting of each switch, the charges on at least a portion of the capacitors are shorted together. Shorting the capacitors together yields a computation result.
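The charge-sharing principle behind the abstract can be illustrated with a small behavioral model (a sketch only, not the patented circuit): each bit cell drives its capacitor according to a one-bit multiply, and shorting equal capacitors together makes the common voltage the average of the per-cell voltages, which is proportional to the dot product.

```python
def charge_domain_dot(weights, inputs, vdd=1.0):
    """Behavioral model of in-memory computing via capacitive charge sharing.

    Each bit cell drives its capacitor to vdd if w_i AND x_i == 1, else 0 V.
    When the switches short all N equal capacitors together, charge
    conservation gives V_shared = (sum of per-cell voltages) / N, which is
    proportional to the dot product of weights and inputs.
    """
    assert len(weights) == len(inputs)
    n = len(weights)
    cell_voltages = [vdd * (w & x) for w, x in zip(weights, inputs)]
    v_shared = sum(cell_voltages) / n      # charge sharing across equal caps
    dot = round(v_shared * n / vdd)        # recover the integer dot product
    return v_shared, dot

v, d = charge_domain_dot([1, 0, 1, 1], [1, 1, 0, 1])  # two cells contribute
```

Here two of the four bit-cell products are 1, so the shared voltage settles at vdd/2 and the recovered dot product is 2.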
HIERARCHICAL SCALABLE NEUROMORPHIC SYNAPTRONIC SYSTEM FOR SYNAPTIC AND STRUCTURAL PLASTICITY
In one embodiment, the present invention provides a neural network circuit comprising multiple symmetric core circuits. Each symmetric core circuit comprises a first core module and a second core module. Each core module comprises a plurality of electronic neurons, a plurality of electronic axons, and an interconnection network comprising multiple electronic synapses interconnecting the axons to the neurons. Each synapse interconnects an axon to a neuron. The first core module and the second core module are logically overlaid on one another such that neurons in the first core module are proximal to axons in the second core module, and axons in the first core module are proximal to neurons in the second core module. Each neuron in each core module receives axonal firing events via interconnected axons and generates a neuronal firing event according to a neuronal activation function.
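A single core module's behavior can be sketched as a crossbar of binary synapses with a threshold activation (an illustrative software model, not the patented hardware; sizes and the threshold rule are assumptions):

```python
import numpy as np

def core_step(synapses, axon_events, threshold):
    """One update of a core module.

    synapses: (A, N) 0/1 matrix of electronic synapses (axon -> neuron).
    axon_events: length-A 0/1 vector of axonal firing events.
    Each neuron sums the events arriving on its connected axons and fires
    when the sum reaches the threshold (a simple activation function).
    """
    membrane = axon_events @ synapses           # integrate axonal events
    return (membrane >= threshold).astype(int)  # neuronal firing events

synapses = np.array([[1, 0, 1],
                     [1, 1, 0],
                     [0, 1, 1]])               # 3 axons x 3 neurons
spikes = core_step(synapses, np.array([1, 1, 0]), threshold=2)
```

With axons 0 and 1 firing, only neuron 0 receives two events and fires.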
Living Machine for the Manufacture of Living Knowledge
A Living Machine for the manufacture of living knowledge: living individuals create living knowledge by practicing the knowledge creation process in repeated knowledge creation cycles, within which living knowledge economics operates.
Method and apparatus for training semantic segmentation model, computer device, and storage medium
A method and apparatus for training a semantic segmentation model, a computer device, and a storage medium are described herein. The method includes: constructing a training sample set; inputting the training sample set into a deep network model for training; inputting the training sample set into a weight transfer function for training to obtain a bounding box prediction mask parameter; and constructing a semantic segmentation model.
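The weight-transfer step in the abstract can be sketched in miniature (a hypothetical illustration; the shapes, the linear transfer function, and all names are assumptions, not the patented method): a small learned mapping converts bounding-box prediction weights into mask prediction parameters.

```python
import numpy as np

def weight_transfer(box_weights, transfer_matrix):
    """Map detection-head weights to mask-head parameters.

    In this sketch the weight transfer function is a single learned linear
    map applied to each class's bounding-box prediction weights.
    """
    return box_weights @ transfer_matrix

rng = np.random.default_rng(0)
box_w = rng.standard_normal((4, 8))      # per-class box-head weights (4 classes)
T = rng.standard_normal((8, 16))         # learned weight transfer function
mask_params = weight_transfer(box_w, T)  # bounding box prediction mask params
```

Training would fit T jointly with the deep network so that classes with only box supervision still receive usable mask parameters.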
ARTIFICIAL NEUROMORPHIC CIRCUIT AND OPERATION METHOD
An artificial neuromorphic circuit includes a synapse circuit and a post-neuron circuit. The synapse circuit includes a phase change element, a first switch having at least three terminals, and a second switch. The phase change element includes first and second terminals. The first switch includes first, second, and control terminals, as does the second switch. The first switch is configured to receive a first pulse signal. The second switch is coupled to the phase change element and the first switch, and is configured to receive a second pulse signal. The post-neuron circuit includes a capacitor and an input terminal. The input terminal of the post-neuron circuit charges the capacitor in response to the first pulse signal. The post-neuron circuit generates a firing signal based on the voltage level of the capacitor and a threshold voltage, and generates a control signal based on the firing signal. The control signal controls turning on the second switch. The second pulse signal flows through the second switch to control the state of the phase change element, which determines the weight of the artificial neuromorphic circuit.
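The described operation follows an integrate-and-fire pattern, which can be sketched behaviorally (voltages, the reset rule, and the weight increment are illustrative assumptions, not circuit values): the first pulse charges the capacitor; crossing the threshold produces the firing signal, whose derived control signal gates the second pulse to update the phase change element's state.

```python
def neuromorphic_step(v_cap, pulse_height, threshold, weight, lr=0.1):
    """One input pulse applied to the post-neuron circuit.

    The input terminal charges the capacitor; when its voltage reaches the
    threshold, the neuron fires, resets, and the firing-derived control
    signal lets the second pulse nudge the phase-change state (the weight).
    """
    v_cap += pulse_height        # first pulse charges the capacitor
    fired = v_cap >= threshold   # firing signal from the comparison
    if fired:
        v_cap = 0.0              # reset after firing
        weight += lr             # second pulse updates phase-change state
    return v_cap, fired, weight

v, w = 0.0, 0.5
events = []
for _ in range(5):               # five identical input pulses
    v, fired, w = neuromorphic_step(v, 0.4, 1.0, w)
    events.append(fired)
```

With 0.4 V pulses and a 1.0 V threshold, the neuron fires on the third pulse, and the stored weight is bumped once.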
Low energy deep-learning networks for generating auditory features for audio processing pipelines
Low energy deep-learning networks for generating auditory features, such as mel-frequency cepstral coefficients (MFCCs), in audio processing pipelines are provided. In various embodiments, a first neural network is trained to output auditory features (such as MFCCs, linear predictive coding coefficients, perceptual linear predictive coefficients, spectral coefficients, filter bank coefficients, and/or spectro-temporal receptive fields) based on input audio samples. A second neural network is trained to output a classification based on input auditory features. An input audio sample is provided to the first neural network, and auditory features are received from it. Those auditory features are provided to the second neural network, and a classification of the input audio sample is received from it.
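The two-stage pipeline can be sketched as two chained networks (layer sizes, the single-layer networks, and the ReLU nonlinearity are assumptions for illustration): the first maps raw audio samples to MFCC-like auditory features, and the second maps those features to class scores.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

rng = np.random.default_rng(1)
W_feat = rng.standard_normal((160, 13))  # audio frame -> 13 "MFCC-like" features
W_cls = rng.standard_normal((13, 4))     # auditory features -> 4 classes

audio = rng.standard_normal(160)         # one input audio frame
features = relu(audio @ W_feat)          # first network: auditory features
scores = features @ W_cls                # second network: class scores
label = int(np.argmax(scores))           # classification of the input audio
```

The point of the split is that the low-energy feature network can run continuously while the classifier consumes only the compact feature vector.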
Neuromorphic computing device utilizing a biological neural lattice
Techniques are disclosed for fabricating and using a neuromorphic computing device including biological neurons. For example, a method for fabricating a neuromorphic computing device includes forming a channel in a first substrate and forming at least one sensor in a second substrate. At least a portion of the channel in the first substrate is seeded with a biological neuron growth material. The second substrate is attached to the first substrate such that the at least one sensor is proximate to the biological neuron growth material, and growth of the seeded material is stimulated so that a neuron grows in that portion of the channel.
METHODS, SYSTEMS, AND APPARATUSES FOR DETERMINING VIEWERSHIP
Methods, systems, and apparatuses for determining viewership of a content item are described herein. Machine learning techniques may be used to determine which user(s) among a user group at a multi-user location are consuming a content item. A probability engine may train a machine learning model, using one or more machine learning algorithms, on demographic attributes and content attributes associated with a plurality of single-user locations. The trained machine learning model may then be used to determine which user(s) among at least two users residing at a multi-user location are consuming a content item.
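A minimal sketch of the idea, assuming a nearest-profile classifier (the feature vectors, demographic groups, and names are all hypothetical; the patent does not specify this particular model): single-user locations, where the viewer is known, supply labeled examples; each household member at a multi-user location is then scored against the content's attributes.

```python
def train_profiles(single_user_samples):
    """Average content-attribute vectors per demographic group.

    single_user_samples: list of (demographic group, content attribute vector)
    pairs collected from single-user locations, where the viewer is known.
    """
    sums, counts = {}, {}
    for demo, content_vec in single_user_samples:
        acc = sums.setdefault(demo, [0.0] * len(content_vec))
        for i, v in enumerate(content_vec):
            acc[i] += v
        counts[demo] = counts.get(demo, 0) + 1
    return {d: [s / counts[d] for s in acc] for d, acc in sums.items()}

def most_likely_viewer(profiles, household, content_vec):
    """Pick the household member whose group profile best matches the content."""
    def score(demo):
        p = profiles[demo]
        return -sum((a - b) ** 2 for a, b in zip(p, content_vec))
    return max(household, key=lambda member: score(member[1]))

# Toy training data from single-user homes: (demographic, content attributes).
samples = [("adult", [1.0, 0.0]), ("adult", [0.9, 0.1]),
           ("child", [0.0, 1.0]), ("child", [0.1, 0.9])]
profiles = train_profiles(samples)
household = [("alice", "adult"), ("bob", "child")]       # multi-user location
viewer = most_likely_viewer(profiles, household, [0.95, 0.05])
```

For adult-skewed content attributes, the model attributes the viewing to the adult household member.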
ANALOG CIRCUITS FOR IMPLEMENTING BRAIN EMULATION NEURAL NETWORKS
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for implementing brain emulation neural networks using analog circuits. One of the methods includes obtaining data defining a synaptic connectivity graph representing synaptic connectivity between neurons in a brain of a biological organism, wherein the synaptic connectivity graph comprises a plurality of nodes and edges, wherein each edge connects a pair of nodes, each node corresponds to a respective neuron in the brain of the biological organism, and each edge connecting a pair of nodes in the synaptic connectivity graph corresponds to a synaptic connection between a pair of neurons; determining an artificial neural network architecture corresponding to the synaptic connectivity graph; and generating, from the artificial neural network architecture, a design of an analog circuit that is configured to execute a plurality of operations of an artificial neural network having the artificial neural network architecture.
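The graph-to-architecture step can be sketched as follows (a software illustration of the mapping only; the edge list, sizes, and dense-matrix representation are assumptions, and the analog-circuit generation itself is not modeled): each synaptic connection in the graph becomes one allowed weight, and all other weights are masked to zero.

```python
import numpy as np

# Synaptic connectivity graph: each edge is a (pre-neuron, post-neuron) pair
# corresponding to a synaptic connection in the biological brain.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
n = 4

mask = np.zeros((n, n))
for pre, post in edges:
    mask[pre, post] = 1.0                      # one edge -> one allowed weight

rng = np.random.default_rng(2)
weights = rng.standard_normal((n, n)) * mask   # architecture from the graph

x = np.array([1.0, 0.5, -0.5, 0.0])
activations = x @ weights                      # one brain-emulation layer pass
```

An analog realization would then place a circuit element only where the mask is nonzero, so the circuit's topology mirrors the synaptic connectivity graph.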
RECURRENT NEURAL NETWORK ARCHITECTURES BASED ON SYNAPTIC CONNECTIVITY GRAPHS
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for implementing a recurrent neural network that includes a brain emulation subnetwork. One of the methods includes obtaining an input sequence; and processing the input sequence using a recurrent neural network, wherein the recurrent neural network comprises a brain emulation subnetwork having a network architecture that has been determined according to a synaptic connectivity graph, the processing comprising: at a first time step, processing a first input element in the input sequence to generate a hidden state of the recurrent neural network; at each of a plurality of subsequent time steps, updating the hidden state of the recurrent neural network; and at each of one or more of the plurality of time steps, generating an output element for the time step based on the updated hidden state for the time step.
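The per-time-step processing described above can be sketched as a minimal recurrent loop (sizes, the tanh cell, and the random sparse recurrence standing in for the brain emulation subnetwork are assumptions): the first input element initializes the hidden state, each subsequent element updates it, and an output element is generated from the updated state.

```python
import numpy as np

rng = np.random.default_rng(3)
W_in = rng.standard_normal((2, 5))                    # input -> hidden
# Sparse recurrent matrix standing in for the brain emulation subnetwork,
# whose connectivity would come from a synaptic connectivity graph.
W_rec = rng.standard_normal((5, 5)) * (rng.random((5, 5)) < 0.3)
W_out = rng.standard_normal((5, 2))                   # hidden -> output

inputs = [rng.standard_normal(2) for _ in range(4)]   # input sequence

hidden = np.tanh(inputs[0] @ W_in)                    # first time step
outputs = []
for x in inputs[1:]:                                  # subsequent time steps
    hidden = np.tanh(x @ W_in + hidden @ W_rec)       # update hidden state
    outputs.append(hidden @ W_out)                    # output for this step
```

In the patent's formulation the fixed, graph-derived recurrence carries the biological connectivity, while the input and output projections remain ordinary trainable layers.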