G06F18/2136

Feature Summarization Filter With Applications Using Data Analytics
20170344620 · 2017-11-30

A data-analytics application may be optimized for implementation on a computing device to conserve computing resources or to provide timely results, such as a prediction, recommendation, inference, or diagnosis about a monitored system, process, event, or user. A feature filter or classifier is generated and incorporated into, or used by, the application to provide the optimization. The feature filter or classifier is generated based on a set of significant features, determined using a data condensation and summarization process, from a high-dimensional set of available features characterizing the monitored target. For example, a process that combines sparse principal component analysis with sparse singular value decomposition and applies k-medoids clustering may determine the significant features. Insignificant features may be filtered out or left unused, as the information they represent is expressed by the significant features.
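The condensation-and-summarization idea can be sketched in plain NumPy (a hypothetical illustration, not the patented method): sparse loadings from a thresholded SVD flag which features carry variance, and a small k-medoids pass groups correlated features so that one medoid represents each group. All function names, thresholds, and data here are invented.

```python
import numpy as np

def sparse_loadings(X, n_components=2, threshold=0.1):
    """Rank-limited SVD with small loadings zeroed out (a crude sparse PCA stand-in)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T.copy()              # (n_features, n_components)
    V[np.abs(V) < threshold] = 0.0
    return V

def k_medoids_features(D, k, n_iter=20, seed=0):
    """Plain k-medoids on a feature-feature distance matrix D; returns medoid indices."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return np.sort(medoids)

def significant_features(X, k=2):
    V = sparse_loadings(X)
    active = np.where(np.abs(V).sum(axis=1) > 0)[0]    # features with nonzero loadings
    D = 1.0 - np.abs(np.corrcoef(X[:, active].T))      # correlation distance
    return active[k_medoids_features(D, k)]

rng = np.random.default_rng(1)
base = rng.normal(size=(200, 2))
# Six features: two informative correlated pairs plus two near-constant noise columns.
X = np.column_stack([base[:, 0], base[:, 0] + 0.01 * rng.normal(size=200),
                     base[:, 1], base[:, 1] + 0.01 * rng.normal(size=200),
                     0.001 * rng.normal(size=200), 0.001 * rng.normal(size=200)])
picked = significant_features(X, k=2)   # one medoid per correlated group
```

The filtered-out columns are "insignificant" in the abstract's sense: each dropped feature is nearly duplicated by a medoid that survives.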

SPATIALLY SPARSE NEURAL NETWORK ACCELERATOR FOR MULTI-DIMENSION VISUAL ANALYTICS

Systems, apparatuses and methods may provide for technology that decodes data via an instruction that indicates a number of rulebooks to be processed, an input feature size, an output feature size, and a plurality of feature map base addresses, rearranges spatially distributed voxel output feature maps in the decoded data based on weight planes, and performs a channel-wise multiply-accumulate (MAC) operation on the rearranged spatially distributed voxel output feature maps to obtain an output, wherein the channel-wise MAC operation is performed as partial accumulations by a plurality of processing elements.
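The partial-accumulation scheme can be mimicked in NumPy (an illustrative sketch of the dataflow, not the described hardware): each "processing element" owns a slice of the input channels, computes a partial sum, and the partials are reduced into the output.

```python
import numpy as np

def channelwise_mac(features, weights, n_pe=4):
    """features: (n_voxels, c_in); weights: (c_in, c_out).
    Each simulated processing element accumulates a partial MAC over its
    channel slice; the partials are then summed, mimicking distributed
    accumulation across PEs."""
    n_voxels, c_in = features.shape
    slices = np.array_split(np.arange(c_in), n_pe)
    partials = [features[:, s] @ weights[s, :] for s in slices]  # per-PE partial sums
    return np.sum(partials, axis=0)                              # reduce partials

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 16))     # 5 voxels, 16 input channels
w = rng.normal(size=(16, 8))         # 16 input -> 8 output channels
out = channelwise_mac(feats, w, n_pe=4)
ref = feats @ w                      # the undistributed reference result
```

Splitting over channels rather than voxels is what makes the accumulation "channel-wise": every PE touches every voxel but only its own channel slice.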

VISION-AIDED AERIAL NAVIGATION
20170328716 · 2017-11-16

An aerial vehicle is navigated using vision-aided navigation that classifies regions of acquired still image frames as featureless or feature-rich, and thereby avoids expending time and computational resources attempting to extract and match false features from the featureless regions. The classification may be performed by computing a texture metric, such as by testing the widths of peaks of a region's autocorrelation function against a threshold (which may be an adaptive threshold), or by using a model trained with a machine learning method on a training dataset comprising images of featureless and feature-rich regions. Such a machine learning method can use a support vector machine. The resulting matched feature observations can be data-fused with other sensor data to correct a navigation solution based on GPS and/or IMU data.
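A toy version of the autocorrelation peak-width test reads as follows (the half-maximum measurement and the threshold value are illustrative choices, not the patent's): a featureless region yields a broad central autocorrelation peak, while rich texture decays quickly away from zero lag.

```python
import numpy as np

def autocorr2d(patch):
    """Circular autocorrelation via FFT, normalized so the zero-lag peak is 1."""
    p = patch - patch.mean()
    F = np.fft.fft2(p)
    ac = np.fft.ifft2(F * np.conj(F)).real
    return np.fft.fftshift(ac) / ac.flat[0]   # ac[0,0] is the zero-lag maximum

def peak_halfwidth(patch):
    ac = autocorr2d(patch)
    row = ac[ac.shape[0] // 2]                # central row through the peak
    return np.count_nonzero(row >= 0.5)       # samples at or above half maximum

def is_featureless(patch, width_threshold=8):
    """Broad peak -> featureless; narrow peak -> feature-rich."""
    return peak_halfwidth(patch) > width_threshold

rng = np.random.default_rng(0)
textured = rng.normal(size=(32, 32))                   # high-frequency texture
flat = np.outer(np.linspace(0, 1, 32), np.ones(32))    # smooth gradient, featureless
```

An adaptive threshold, as the abstract allows, would replace the fixed `width_threshold` with one derived from recent frames.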

Hierarchical machine learning system for lifelong learning

Embodiments described herein cover a hierarchical machine learning system with a separated perception subsystem (which includes a hierarchy of nodes having at least a first layer and a second layer) and application subsystem. In one example embodiment, a first node in the first layer receives a first input and processes at least a portion of the first input to generate a first feature vector. A second node in the second layer processes a second input comprising at least a portion of the first feature vector to generate a second feature vector. The first node generates a first sparse feature vector from the first feature vector and/or the second node generates a second sparse feature vector from the second feature vector. A third node of the perception subsystem then processes at least one of the first sparse feature vector or the second sparse feature vector to determine an output.
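The layered flow above can be sketched minimally (layer sizes, the linear nodes, and the top-k sparsification rule are all assumptions for illustration): each node maps its input to a feature vector, and a sparsification step zeroes all but the strongest activations before the vector moves up the hierarchy.

```python
import numpy as np

def sparsify_topk(v, k):
    """Keep the k largest-magnitude entries of v; zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 32))    # first-layer node (untrained, for shape only)
W2 = rng.normal(size=(32, 24))    # second-layer node consumes the first feature vector
W3 = rng.normal(size=(24, 10))    # third node maps sparse features to an output

x = rng.normal(size=16)
f1 = x @ W1                        # first feature vector
s1 = sparsify_topk(f1, k=8)        # first sparse feature vector
f2 = s1 @ W2                       # second feature vector from (part of) the first
s2 = sparsify_topk(f2, k=6)        # second sparse feature vector
output = s2 @ W3                   # perception-subsystem output
```

Only the sparse vectors cross node boundaries, which is the point of the design: downstream nodes see a compact, stable code rather than the full dense activation.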

SPARSE INFERENCE MODULES FOR DEEP LEARNING
20170316311 · 2017-11-02

Described is a sparse inference module that can be incorporated into a deep learning system. For example, the deep learning system includes a plurality of hierarchical feature channel layers, each feature channel layer having a set of filters. A plurality of sparse inference modules can be included such that a sparse inference module resides electronically within each feature channel layer. Each sparse inference module is configured to receive data and match the data against a plurality of pattern templates to generate a degree of match value for each of the pattern templates, with the degree of match values being sparsified such that only those degree of match values that exceed a predetermined threshold, or a fixed number of the top degree of match values, are provided to subsequent feature channels in the plurality of hierarchical feature channels, while other, losing degree of match values are quenched to zero.
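The sparsification rule can be sketched directly (cosine similarity stands in for whatever matching function a real feature-channel layer would use; all names and data are illustrative): degree-of-match values that neither exceed the threshold nor land in the top-N are quenched to zero.

```python
import numpy as np

def degree_of_match(x, templates):
    """Cosine similarity between input x and each pattern template (one per row)."""
    t = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    return t @ (x / np.linalg.norm(x))

def sparse_inference(x, templates, threshold=None, top_n=None):
    """Keep winners by threshold and/or fixed count; quench losers to zero."""
    dom = degree_of_match(x, templates)
    keep = np.zeros_like(dom, dtype=bool)
    if threshold is not None:
        keep |= dom > threshold                   # survivors by threshold
    if top_n is not None:
        keep[np.argsort(dom)[-top_n:]] = True     # or a fixed number of top matches
    return np.where(keep, dom, 0.0)               # losing values quenched to zero

rng = np.random.default_rng(0)
templates = rng.normal(size=(10, 32))             # 10 pattern templates
x = templates[3] + 0.1 * rng.normal(size=32)      # input close to template 3
y = sparse_inference(x, templates, top_n=2)       # fixed-count sparsification
y2 = sparse_inference(x, templates, threshold=0.8)  # threshold sparsification
```

Either rule yields the same effect described above: only a handful of strong matches propagate to subsequent feature channels.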

FEW-SHOT OBJECT DETECTION USING SEMANTIC RELATION REASONING
20220058432 · 2022-02-24

Disclosed herein is an improved few-shot detector that utilizes semantic relation reasoning to learn novel objects from both visual information and the semantic relations of base-class objects. Specifically, a semantic space is constructed using word embeddings. Guided by the word embeddings of the classes, the detector is trained to project objects from the visual space to the semantic space and to align their image representations with the corresponding class embeddings.
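The projection-and-alignment step might look as follows in a toy setting (the projection matrix is random rather than learned, and the embeddings and dimensions are invented): visual features are mapped into the word-embedding space, and classification picks the class embedding best aligned with the projection.

```python
import numpy as np

def classify(visual_feat, P, class_embeddings):
    """Project a visual feature to semantic space and score alignment per class."""
    z = P @ visual_feat                                   # visual -> semantic space
    z = z / np.linalg.norm(z)
    E = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
    scores = E @ z                                        # cosine alignment per class
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
class_embeddings = rng.normal(size=(5, 4))    # 5 classes, 4-d "word" embeddings
P = rng.normal(size=(4, 8))                   # projection: 8-d visual -> 4-d semantic
# Construct a feature whose projection lands exactly on class 2's embedding.
feat = np.linalg.pinv(P) @ class_embeddings[2]
pred, scores = classify(feat, P, class_embeddings)
```

In training, `P` would be fit so that base-class (and later novel-class) features align with their word embeddings; novel classes benefit because their embeddings already sit in relation to the base classes.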

STRUCTURE-PRESERVING COMPOSITE MODEL FOR SKIN LESION SEGMENTATION
20170243345 · 2017-08-24

A structure-preserving composite model for skin lesion segmentation includes partitioning a dermoscopic image into superpixels at a first scale. Each superpixel is a vertex on a graph defined by color coordinates and spatial coordinates, and represents a number of pixels of the dermoscopic image according to the first scale. The model further includes constructing a plurality of k background templates by k-means clustering of selected superpixels in space and color; generating sparse representations of the superpixels based on the background templates; calculating a reconstruction error for each superpixel by comparing its sparse representation to its original color coordinates and spatial coordinates; and outputting a confidence map that identifies each pixel of the dermoscopic image as belonging or not belonging to a skin lesion, based on the reconstruction errors of the representative superpixels.
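The reconstruction-error principle can be shown schematically (plain least squares stands in for the sparse coding the model actually uses, and all data is synthetic): superpixels that the background templates reconstruct poorly receive high error, i.e. high lesion confidence.

```python
import numpy as np

def reconstruction_errors(features, templates):
    """features: (n_superpixels, d) color+spatial vectors; templates: (k, d).
    Reconstruct each feature from the template dictionary and return the
    per-superpixel residual norm."""
    T = templates.T                                   # (d, k) dictionary
    coef, *_ = np.linalg.lstsq(T, features.T, rcond=None)
    recon = (T @ coef).T
    return np.linalg.norm(features - recon, axis=1)

rng = np.random.default_rng(0)
background = rng.normal(size=(3, 5))                  # k=3 background templates, d=5
# 20 "skin" superpixels near the templates, 4 "lesion" superpixels far off-template.
skin = background[rng.integers(0, 3, size=20)] + 0.01 * rng.normal(size=(20, 5))
lesion = rng.normal(size=(4, 5)) + 5.0
errors = reconstruction_errors(np.vstack([skin, lesion]), background)
```

Thresholding (or normalizing) these errors per superpixel, then broadcasting each superpixel's score to its member pixels, yields the confidence map described above.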

Machine learning sparse computation mechanism

Techniques to improve performance of matrix multiply operations are described in which a compute kernel can specify one or more element-wise operations to perform on output of the compute kernel before the output is transferred to higher levels of a processor memory hierarchy.
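The fusion described above can be illustrated in plain NumPy (tile size and the choice of ReLU are illustrative; a real kernel would fuse in registers or shared memory, not host arrays): the element-wise operation is applied to each output tile while it is still "local", instead of as a second pass over memory after write-back.

```python
import numpy as np

def matmul_fused(A, B, elementwise=lambda t: np.maximum(t, 0.0), tile=4):
    """Tiled matmul with the element-wise op fused onto each output tile
    before it is 'written back' to the result buffer."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.empty((m, n))
    for i in range(0, m, tile):
        t = A[i:i + tile] @ B           # compute one output tile
        C[i:i + tile] = elementwise(t)  # fuse ReLU before the tile leaves the kernel
    return C

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 5))
B = rng.normal(size=(5, 3))
C = matmul_fused(A, B)
ref = np.maximum(A @ B, 0.0)            # unfused two-pass reference
```

The results match; the benefit of the fused form is that the output never makes a round trip through higher levels of the memory hierarchy just to have a cheap element-wise op applied.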

Method and apparatus for detecting salient object in image

A method and an apparatus for detecting a salient object in an image include separately performing convolution processing corresponding to at least two convolutional layers on a to-be-processed image to obtain at least two first feature maps of the image; performing superposition processing on the first feature maps included in a superposition set, in at least two such sets, to obtain at least two second feature maps, wherein the at least two sets are in a one-to-one correspondence with the at least two second feature maps, and a resolution of a first feature map included in a superposition set is lower than or equal to a resolution of the corresponding second feature map; and splicing the at least two second feature maps to obtain a saliency map.
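A toy rendition of that pipeline (shapes, the 2x nearest-neighbor upsampling, and the final channel reduction are all invented for illustration): lower-resolution first feature maps are upsampled and superposed onto higher-resolution ones, and the resulting second feature maps are spliced along a channel axis and reduced to a single-channel saliency map.

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbor 2x upsampling so resolutions can match for superposition."""
    return fmap.repeat(2, axis=0).repeat(2, axis=1)

def superpose(high_res, low_res):
    """Add an upsampled low-resolution map onto a higher-resolution one."""
    return high_res + upsample2x(low_res)

rng = np.random.default_rng(0)
f_hi = rng.normal(size=(8, 8))    # first feature map from a shallow layer (higher res)
f_lo = rng.normal(size=(4, 4))    # first feature map from a deeper layer (lower res)

s1 = superpose(f_hi, f_lo)        # second feature map from the {f_hi, f_lo} set
s2 = upsample2x(f_lo)             # second feature map from the {f_lo} set

spliced = np.stack([s1, s2])      # splice along a channel axis: (2, 8, 8)
saliency = spliced.mean(axis=0)   # reduce the spliced channels to one saliency map
```

This respects the abstract's resolution constraint: every first feature map entering a superposition set is at or below the resolution of the second feature map that set produces.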