Patent classifications
G06F18/2136
Spatially sparse neural network accelerator for multi-dimension visual analytics
Systems, apparatuses and methods may provide for technology that decodes data via an instruction that indicates a number of rulebooks to be processed, an input feature size, an output feature size, and a plurality of feature map base addresses, rearranges spatially distributed voxel output feature maps in the decoded data based on weight planes, and performs a channel-wise multiply-accumulate (MAC) operation on the rearranged spatially distributed voxel output feature maps to obtain an output, wherein the channel-wise MAC operation is performed as partial accumulations by a plurality of processing elements.
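The channel-wise MAC with partial accumulations described above can be illustrated with a minimal NumPy sketch. The function name, shapes, and the channel-slicing scheme are assumptions for illustration, not the patented accelerator design:

```python
import numpy as np

def channelwise_mac_partial(features, weights, num_pes=4):
    """Channel-wise MAC split into partial accumulations across a pool
    of processing elements (PEs).

    features: (voxels, in_channels) rearranged voxel feature maps.
    weights:  (in_channels, out_channels) weight plane.
    Each PE accumulates a partial sum over its slice of input channels;
    the final output is the sum of all PE partial accumulations.
    """
    voxels, in_ch = features.shape
    partials = []
    for pe in range(num_pes):
        ch = slice(pe * in_ch // num_pes, (pe + 1) * in_ch // num_pes)
        partials.append(features[:, ch] @ weights[ch, :])
    return np.sum(partials, axis=0)
```

Summing the per-PE partials reproduces the dense matrix product, which is why the accelerator can distribute the accumulation without changing the result.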
Machine learning sparse computation mechanism
An apparatus to facilitate processing of a sparse matrix is disclosed. The apparatus includes a plurality of processing units each comprising one or more processing elements, including logic to read operands, a multiplication unit to multiply two or more operands and a scheduler to identify operands having a zero value and prevent scheduling of the operands having the zero value at the multiplication unit.
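A software analogue of the zero-skipping scheduler is straightforward: never dispatch a multiply when either operand is zero. This is an illustrative sketch, not the hardware design:

```python
def sparse_mac(a_row, b_col):
    """Multiply-accumulate that skips zero operands, mimicking a
    scheduler that never issues zero-valued operands to the
    multiplication unit."""
    acc = 0.0
    for a, b in zip(a_row, b_col):
        if a == 0.0 or b == 0.0:
            continue  # the scheduler would not dispatch this pair
        acc += a * b
    return acc
```

Skipped pairs contribute exactly zero to the accumulation, so the result matches a dense dot product while avoiding the wasted multiplies.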
Machine learning sparse computation mechanism
Techniques to improve performance of matrix multiply operations are described in which a compute kernel can specify one or more element-wise operations to perform on output of the compute kernel before the output is transferred to higher levels of a processor memory hierarchy.
Systems and methods for out-of-distribution classification
An embodiment proposed herein uses sparsification techniques to train a neural network with a high feature dimension, which may yield desirable in-domain detection accuracy, while pruning away less important dimensions in the output. Specifically, a sparsification vector is generated based on a Gaussian distribution (or another probabilistic distribution) and is multiplied with the higher-dimensional output to reduce the number of feature dimensions. The pruned output may then be used for the neural network to learn the sparsification vector. In this way, out-of-distribution detection accuracy can be improved.
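A minimal sketch of the sparsification-vector idea: draw a Gaussian vector, keep only its largest-magnitude entries, and multiply it element-wise with the network output. The keep-fraction rule and function names are assumptions; the abstract does not specify how entries are selected:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparsify_output(features, keep_fraction=0.25):
    """Zero most output dimensions via a Gaussian-based sparsification
    vector (illustrative top-magnitude selection rule)."""
    s = rng.normal(size=features.shape[-1])  # Gaussian sparsification vector
    k = int(keep_fraction * s.size)
    mask = np.zeros_like(s)
    mask[np.argsort(np.abs(s))[-k:]] = 1.0   # keep k largest-magnitude entries
    return features * (s * mask)             # pruned, same dimensionality
```

In the claimed training scheme, the sparsification vector itself would be learned; here it is simply sampled to show the pruning mechanics.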
Generating a sparse feature vector for classification
An apparatus for classifying an input includes a classifier and a feature extractor. The feature extractor is configured to generate a feature vector based on the input. The feature extractor is also configured to set a number of elements of the feature vector to zero to produce a sparse feature vector. The sparse feature vector has the same dimensions as the feature vector generated by the feature extractor but includes fewer non-zero elements. The feature extractor is further configured to forward the sparse feature vector to the classifier to classify the input.
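The zeroing step can be sketched in a few lines. Which elements get zeroed is not stated in the abstract; a smallest-magnitude rule is one plausible choice:

```python
import numpy as np

def make_sparse_feature_vector(feature_vector, num_zeros):
    """Zero the `num_zeros` smallest-magnitude elements, keeping the
    vector's dimensionality unchanged (illustrative selection rule)."""
    out = feature_vector.copy()
    idx = np.argsort(np.abs(out))[:num_zeros]  # smallest-magnitude indices
    out[idx] = 0.0
    return out
```

The classifier then receives a vector of the same shape but with fewer non-zero elements, as the abstract describes.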
Training a Neural Network
A computer-implemented method of training a neural network configured to combine a set of coefficients with respective input data values. To train a test implementation of the neural network: sparsity is applied to one or more of the coefficients according to a sparsity parameter, the sparsity parameter indicating a level of sparsity to be applied to the set of coefficients; the test implementation of the neural network is operated on training input data using the coefficients so as to form training output data; the accuracy of the neural network is assessed in dependence on the training output data; the sparsity parameter is updated in dependence on the accuracy of the neural network; and a runtime implementation of the neural network is configured in dependence on the updated sparsity parameter.
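The apply-sparsity / assess-accuracy / update-parameter loop can be sketched as follows. The magnitude-pruning rule and the accept-while-accuracy-holds update are stand-ins for the claimed (unspecified) mechanisms:

```python
import numpy as np

def apply_sparsity(coeffs, sparsity):
    """Zero the fraction `sparsity` of smallest-magnitude coefficients."""
    k = int(sparsity * coeffs.size)
    out = coeffs.copy()
    if k:
        out[np.argsort(np.abs(out))[:k]] = 0.0
    return out

def tune_sparsity(coeffs, evaluate, target_acc, steps=10, step_size=0.05):
    """Raise the sparsity parameter while assessed accuracy stays above
    a target; stop when it drops (a simple stand-in update rule).

    `evaluate` maps sparsified coefficients to an accuracy score.
    """
    sparsity = 0.0
    for _ in range(steps):
        trial = min(1.0, sparsity + step_size)
        if evaluate(apply_sparsity(coeffs, trial)) >= target_acc:
            sparsity = trial  # accuracy acceptable: accept more sparsity
        else:
            break
    return sparsity
```

The returned sparsity parameter would then configure the runtime implementation, per the final step of the claim.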
Systems and methods for weakly supervised training of a model for monocular depth estimation
Systems, methods, and other embodiments described herein relate to semi-supervised training of a depth model for monocular depth estimation. In one embodiment, a method includes training the depth model according to a first stage that is self-supervised and that uses first training data comprising pairs of training images, respective pairs including separate frames depicting a scene of a monocular video. The method includes training the depth model according to a second stage that is weakly supervised and that uses second training data to produce depth maps according to the depth model. The second training data comprises individual images with corresponding sparse depth data and provides for updating the depth model according to second-stage loss values that are based, at least in part, on the depth maps and the depth data.
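The second-stage loss over sparse depth data can be sketched as a masked comparison: the predicted depth map is penalized only at pixels where a sparse ground-truth measurement exists. The L1 penalty and zero-means-missing convention are assumptions:

```python
import numpy as np

def sparse_depth_loss(pred_depth, sparse_depth):
    """Weakly supervised second-stage loss sketch: compare the predicted
    depth map to sparse depth data only where measurements are present
    (zeros mark missing measurements; L1 penalty is an assumption)."""
    mask = sparse_depth > 0
    if not mask.any():
        return 0.0  # no supervision available for this image
    return float(np.abs(pred_depth[mask] - sparse_depth[mask]).mean())
```

Because the loss touches only measured pixels, a handful of depth points per image suffices to supervise this stage, which is what makes the supervision "weak".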
Quantile hurdle modeling systems and methods for sparse time series prediction applications
A server computer may receive and process a plurality of time series data to generate sparse datasets based on sparsity levels. The server computer applies a time series forecasting model to each respective subset of previous data points of the sparse datasets, increasingly, at a first time granularity to generate a set of prediction values and a set of residuals; applies a regression model to the set of residuals to generate a set of adjusted residuals for the sparse datasets; and generates a visualized explanation based on the set of prediction values and the set of adjusted residuals for one or more of the sparse datasets.
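The forecast-then-adjust-residuals pipeline can be illustrated with simple stand-ins: a rolling-mean "forecasting model" produces prediction values and residuals, and a linear regression over time adjusts the residuals. Both models are assumptions; the claim does not specify which forecasting or regression models are used:

```python
import numpy as np

def forecast_and_adjust(series, window=3):
    """Illustrative quantile-hurdle-style pipeline: forecast each point
    from its preceding window, collect residuals, then regress the
    residuals on time and subtract the fitted trend."""
    preds, resids = [], []
    for t in range(window, len(series)):
        p = np.mean(series[t - window:t])  # rolling-window one-step forecast
        preds.append(p)
        resids.append(series[t] - p)
    resids = np.asarray(resids)
    t = np.arange(len(resids))
    a, b = np.polyfit(t, resids, 1)        # residual = a*t + b
    adjusted = resids - (a * t + b)        # regression-adjusted residuals
    return np.asarray(preds), resids, adjusted
```

The prediction values and adjusted residuals would then feed the visualized explanation described in the abstract.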