Patent classifications
G06N3/048
SPARSITY-AWARE COMPUTE-IN-MEMORY
Certain aspects of the present disclosure provide techniques for performing machine learning computations in a compute-in-memory (CIM) array comprising a plurality of bit cells, including: determining that a sparsity of input data to a machine learning model exceeds an input data sparsity threshold; disabling one or more bit cells in the CIM array based on the sparsity of the input data prior to processing the input data; processing the input data with bit cells not disabled in the CIM array to generate an output value; applying a compensation to the output value based on the sparsity to generate a compensated output value; and outputting the compensated output value.
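The claimed flow can be sketched as a simple digital model of one CIM column. This is an illustrative sketch only, not the patented circuit: the function and parameter names (`cim_multiply_accumulate`, `sparsity_threshold`) are assumptions, and the compensation is a placeholder for the analog correction the hardware would apply.

```python
def cim_multiply_accumulate(inputs, weights, sparsity_threshold=0.5):
    # Determine whether the input sparsity exceeds the threshold.
    s = inputs.count(0) / len(inputs)
    if s > sparsity_threshold:
        # Disable bit cells driven by zero activations before processing.
        enabled = [i for i, x in enumerate(inputs) if x != 0]
    else:
        enabled = list(range(len(inputs)))
    # Process the input with the bit cells that remain enabled.
    output = sum(inputs[i] * weights[i] for i in enabled)
    # Apply a sparsity-based compensation to the output; in hardware this
    # would correct the analog readout for the disabled cells (placeholder).
    compensation = 0.0
    return output + compensation
```

In a digital model skipping zero inputs is exact, so the compensation term is zero; in the analog array, disabling cells shifts the readout, which is why the claim applies a sparsity-based correction.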
INPUT CIRCUITRY FOR ANALOG NEURAL MEMORY IN A DEEP LEARNING ARTIFICIAL NEURAL NETWORK
Numerous embodiments of input circuitry for an analog neural memory in a deep learning artificial neural network are disclosed.
PERFORMANCE-ADAPTIVE SAMPLING STRATEGY TOWARDS FAST AND ACCURATE GRAPH NEURAL NETWORKS
Techniques for implementing a performance-adaptive sampling strategy towards fast and accurate graph neural networks are provided. In one technique, a graph that comprises multiple nodes and edges connecting the nodes is stored. An embedding for each node is initialized, as well as a sampling policy for sampling neighbors of nodes. One or more machine learning techniques are used to train a graph neural network and learn embeddings for the nodes. Using the one or more machine learning techniques comprises, for each node: (1) selecting, based on the sampling policy, a set of neighbors of the node; (2) based on the graph neural network and embeddings for the node and the set of neighbors, computing a performance loss; and (3) based on a gradient of the performance loss, modifying the sampling policy.
SYSTEMS AND METHODS FOR PROVIDING A MULTI-PARTY COMPUTATION SYSTEM FOR NEURAL NETWORKS
A system and method are disclosed for secure multi-party computations. The system performs operations including establishing an API for coordinating joint operations between a first access point and a second access point related to performing a secure prediction task in which the first access point and the second access point will perform private computation of first data and second data without the parties having access to each other's data. The operations include storing a list of assets representing metadata about the first data and the second data, receiving a selection of the second data for use with the first data, managing an authentication and authorization of communications between the first access point and the second access point and performing the secure prediction task using the second data operating on the first data.
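The patent describes the coordination layer; the private computation itself typically rests on a primitive such as additive secret sharing, which the following sketch illustrates. This is an assumption about the underlying mechanism, not the patented API: neither party alone learns the other's value from a single share.

```python
import random

MODULUS = 2**31

def share(value, modulus=MODULUS):
    """Split a private value into two additive shares, one per access
    point, so that either share alone reveals nothing about the value."""
    r = random.randrange(modulus)
    return r, (value - r) % modulus

def reconstruct(share_a, share_b, modulus=MODULUS):
    """Recombine the two shares to recover the original value."""
    return (share_a + share_b) % modulus
```

A secure prediction task would run the model arithmetic directly on such shares, with the two access points exchanging only masked intermediate values.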
OUTPUT CIRCUITRY FOR ANALOG NEURAL MEMORY IN A DEEP LEARNING ARTIFICIAL NEURAL NETWORK
Numerous embodiments of output circuitry for an analog neural memory in a deep learning artificial neural network are disclosed. In some embodiments, a common mode circuit is used with differential cells, W+ and W−, that together store a weight, W. The common mode circuit can utilize current sources, variable resistors, or transistors as part of the structure for introducing a common mode voltage bias.
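The differential storage scheme can be written as a one-line relation, W = W+ − W−. The sketch below is a numeric model only (names are assumptions, not the patented circuit); it shows why the common-mode bias introduced by the common mode circuit does not corrupt the stored weight.

```python
def differential_readout(i_plus, i_minus, common_mode_bias=0.0):
    """Recover the weight W from the W+ and W- cell readouts.
    The common-mode bias is applied to both branches, so it
    cancels exactly in the subtraction."""
    return (i_plus + common_mode_bias) - (i_minus + common_mode_bias)
```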
RECORD MATCHING MODEL USING DEEP LEARNING FOR IMPROVED SCALABILITY AND ADAPTABILITY
Systems and methods are described for linking records from different databases. For each record of a received record set, a search may be performed for similar records based on having similar field values. Each record of the record set, together with its identified similar records, may be assigned to a sub-group. Pairs of records may be formed within each sub-group, and comparative and identifying features may be extracted from each field of the pairs of records. A trained model may then be applied to the extracted feature differences to determine a similarity score. Cluster identifiers may be applied to records within each sub-group having similarity scores greater than a predetermined threshold. In response to a query for a requested record, all records having the same cluster identifier may be output on a graphical interface, allowing users to observe linked records for a person in the different databases.
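The pairing, scoring, and cluster-identifier steps can be sketched end to end. This is a minimal stand-in, not the patented system: `toy_model` replaces the trained deep-learning model with mean field agreement, and the union-find linking is one plausible way to realize shared cluster identifiers.

```python
from itertools import combinations

def extract_features(rec_a, rec_b, fields):
    # Comparative features: per-field agreement indicators (a real
    # system would use edit distances, embeddings, etc.).
    return [1.0 if rec_a[f] == rec_b[f] else 0.0 for f in fields]

def toy_model(features):
    # Stand-in for the trained similarity model.
    return sum(features) / len(features)

def link_records(records, fields, threshold=0.6):
    """Assign a cluster identifier to each record; records whose
    pairwise similarity score exceeds the threshold share one."""
    parent = list(range(len(records)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i, j in combinations(range(len(records)), 2):
        score = toy_model(extract_features(records[i], records[j], fields))
        if score > threshold:
            parent[find(i)] = find(j)  # link into one cluster
    return [find(i) for i in range(len(records))]
```

Querying then reduces to returning every record whose cluster identifier matches that of the requested record.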
METHOD FOR PREDICTING STRUCTURE OF INDOOR SPACE USING RADIO PROPAGATION CHANNEL ANALYSIS THROUGH DEEP LEARNING
A method for predicting a structure of an indoor space using radio propagation channel analysis through deep learning is disclosed. Channel data of radio signals are collected for various indoor spaces, and radio channel parameter data such as power delay profile (PDP), angle of arrival (AoA), and angle of departure (AoD) are extracted therefrom. A large amount of propagation channel parameter data is input to an artificial neural network together with vertex coordinate data of the corresponding indoor spaces, and deep learning is performed in advance. The propagation channel parameter data are then extracted from the indoor space to be predicted, and the best-matching indoor space is detected based on the trained artificial neural network. The best-matching indoor space is predicted as the structure of the indoor space.
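The matching step can be sketched with the trained network replaced by a nearest-neighbor lookup over stored channel-parameter vectors. This is an assumption made for illustration; the actual method learns the mapping with a neural network, but the input/output contract (channel parameters in, vertex coordinates of the best-matching space out) is the same.

```python
def best_matching_space(measured_params, trained_spaces):
    """Stand-in for the trained network: return the vertex coordinates
    of the indoor space whose stored channel parameters (PDP/AoA/AoD
    features) are closest to the measured ones."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(trained_spaces,
               key=lambda s: dist(s["channel_params"], measured_params))
    return best["vertices"]
```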
SUPER RESOLUTION USING CONVOLUTIONAL NEURAL NETWORK
An apparatus for super resolution imaging includes a convolutional neural network (104) to receive a low resolution frame (102) and generate a high resolution luminance component frame. The apparatus also includes a hardware scaler (106) to receive the low resolution frame (102) and generate a high resolution chrominance component frame. The apparatus further includes a combiner (108) to combine the high resolution luminance component frame and the high resolution chrominance component frame to generate a high resolution frame (110).
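The split pipeline can be sketched on YUV planes. Both upscaling paths are replaced here by nearest-neighbor interpolation as a placeholder (in the apparatus, the luminance path is the CNN and the chrominance path is the hardware scaler); the data flow between components (102)–(110) is what the sketch shows.

```python
def nearest_upscale(plane, factor):
    # Placeholder upscaler for both paths: nearest-neighbour interpolation.
    out = []
    for r in range(len(plane) * factor):
        src = plane[r // factor]
        out.append([src[c // factor] for c in range(len(src) * factor)])
    return out

def super_resolve(yuv_lr, factor=2):
    """Low resolution frame in, high resolution frame out."""
    y_hr = nearest_upscale(yuv_lr["Y"], factor)  # CNN path (104)
    u_hr = nearest_upscale(yuv_lr["U"], factor)  # hardware scaler (106)
    v_hr = nearest_upscale(yuv_lr["V"], factor)  # hardware scaler (106)
    return {"Y": y_hr, "U": u_hr, "V": v_hr}     # combiner (108) -> (110)
```

Spending the neural network only on luminance is the design point: the eye is most sensitive to luminance detail, so the cheap hardware scaler suffices for chrominance.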
NEURAL NETWORK OPTIMIZATION METHOD AND APPARATUS
The present disclosure relates to neural network optimization methods and apparatuses in the field of artificial intelligence. One example method includes sampling a preset hyperparameter search space to obtain multiple hyperparameter combinations. Multiple iterative evaluations are performed on the multiple hyperparameter combinations to obtain multiple performance results for each hyperparameter combination. Any iterative evaluation comprises obtaining at least one performance result for each hyperparameter combination and, if a hyperparameter combination meets a first preset condition, re-evaluating that hyperparameter combination to obtain a re-evaluated performance result. An optimal hyperparameter combination is then determined. If the optimal hyperparameter combination does not meet a second preset condition, a preset model is updated, based on the multiple performance results of each hyperparameter combination, for the next round of sampling. Otherwise, if the optimal hyperparameter combination meets the second preset condition, it is used as the hyperparameter combination of a neural network.
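The loop can be sketched as follows. The concrete conditions here are assumptions chosen for illustration: the "first preset condition" becomes "score above 0.8 triggers re-evaluation", the "second preset condition" becomes "mean score reaches a target", and the model-based sampler is reduced to uniform random sampling with a comment marking where the preset model would be updated.

```python
import random

def search(evaluate, space, rounds=3, samples=4, target=0.9):
    """Iteratively sample, evaluate, re-evaluate, and either stop or
    update the sampling model (sketch of the claimed loop)."""
    results = {}
    best = None
    for _ in range(rounds):
        combos = [tuple(random.choice(v) for v in space.values())
                  for _ in range(samples)]
        for c in combos:
            results.setdefault(c, []).append(evaluate(c))
            # First preset condition (illustrative): promising
            # combinations are re-evaluated for a more stable estimate.
            if results[c][-1] > 0.8:
                results[c].append(evaluate(c))
        mean = lambda c: sum(results[c]) / len(results[c])
        best = max(results, key=mean)
        if mean(best) >= target:
            return best  # second preset condition met
        # Otherwise a model-based sampler would update its preset
        # (surrogate) model from `results` before resampling (omitted).
    return best
```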
IMAGE PROCESSING METHOD, NETWORK TRAINING METHOD, AND RELATED DEVICE
This application provides an image processing method, a network training method, and a related device, and relates to image processing technologies in the artificial intelligence field. The method includes: inputting a first image including a first vehicle into an image processing network to obtain a first result output by the image processing network, where the first result includes location information of a two-dimensional (2D) bounding box of the first vehicle, coordinates of a wheel of the first vehicle, and a first angle of the first vehicle, the first angle indicating an included angle between a side line of the first vehicle and a first axis of the first image; and generating location information of a three-dimensional (3D) bounding box of the first vehicle based on the first result.
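The geometric role of the network outputs can be sketched as follows. This is an illustrative fragment, not the patented reconstruction: it only expresses the vehicle's side line as a ray through the wheel contact point at the predicted angle, the starting point from which the 3D box edges would be derived.

```python
import math

def side_line(wheel_xy, angle_deg):
    """Return the vehicle side line in point-direction form: a ray
    through the wheel contact point at the predicted included angle
    to the image x-axis. Intersecting this line with the 2D bounding
    box edges would locate the visible 3D box corners."""
    x, y = wheel_xy
    theta = math.radians(angle_deg)
    return (x, y), (math.cos(theta), math.sin(theta))
```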