Patent classifications
G06F18/2136
METHOD FOR CO-SEGMENTING THREE-DIMENSIONAL MODELS REPRESENTED BY SPARSE AND LOW-RANK FEATURES
Presently disclosed is a method for co-segmenting three-dimensional models represented by sparse and low-rank features, comprising: pre-segmenting each three-dimensional model of a three-dimensional model class to obtain patches for that model; constructing a histogram over the patches of each model to obtain a patch feature vector for that model; computing a sparse and low-rank representation of the patch feature vector of each model to obtain a representation coefficient and a representation error for that model; determining a confident representation coefficient for each model from its representation coefficient and representation error; and clustering the confident representation coefficients to co-segment the respective models.
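The pipeline in this abstract can be sketched end to end. In the sketch below, plain ridge-regularized least squares stands in for the patented sparse-and-low-rank coding, the "confident coefficient" selection step is omitted, and `patch_histogram`, `self_representation`, and `cosegment` are illustrative names, not from the patent.

```python
import numpy as np

def patch_histogram(patch_values, n_bins=8, vrange=(0.0, 1.0)):
    # Histogram of a per-patch scalar descriptor -> fixed-length feature vector.
    h, _ = np.histogram(patch_values, bins=n_bins, range=vrange)
    h = h.astype(float)
    return h / max(h.sum(), 1.0)  # normalize so patches of different sizes compare

def self_representation(X, lam=1e-2):
    # Express each feature column of X as a combination of the others.
    # Ridge-regularized least squares is a simplified stand-in for the
    # sparse-and-low-rank coding described in the abstract.
    n = X.shape[1]
    C = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ X)
    np.fill_diagonal(C, 0.0)  # a patch should not represent itself
    return C

def cosegment(C, k=2, iters=50):
    # Cluster rows of the symmetrized coefficient matrix with a tiny k-means;
    # the patent clusters "confident" coefficients instead.
    A = np.abs(C) + np.abs(C.T)
    idx = np.linspace(0, len(A) - 1, k).astype(int)  # deterministic seeding
    centers = A[idx].copy()
    for _ in range(iters):
        labels = np.argmin(((A[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = A[labels == j].mean(axis=0)
    return labels
```

With two groups of patches whose descriptors occupy disjoint histogram bins, the coefficient matrix becomes block-diagonal and the clustering recovers the groups.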
Multi-domain neighborhood embedding and weighting of sampled data
This document describes “Multi-domain Neighborhood Embedding and Weighting” (MNEW) for use in processing point cloud data, including sparsely populated data obtained from a lidar, a camera, a radar, or a combination thereof. MNEW is a process based on a dilation architecture that captures pointwise and global features of the point cloud data, together with multi-scale local semantics adopted from a hierarchical encoder-decoder structure. Neighborhood information is embedded in both a static geometric domain and a dynamic feature domain. Geometric distance, feature similarity, and local sparsity can be computed and transformed into adaptive weighting factors that are reapplied to the point cloud data. This enables an automotive system to perform well on both sparse and dense point cloud data. Processing point cloud data via the MNEW techniques promotes greater adoption of sensor-based autonomous driving and perception-based systems.
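The core weighting idea — combine geometric distance, feature similarity, and local sparsity into per-neighbor weights — can be sketched as follows. The formulas, kernel widths, and the choice of mean neighbor distance as a sparsity proxy are illustrative assumptions, not the patented MNEW definitions.

```python
import numpy as np

def mnew_weights(points, feats, k=4, sigma_d=1.0, sigma_f=1.0):
    # Pairwise geometric distances; pick the k nearest neighbors of each point.
    d = np.linalg.norm(points[:, None] - points[None], axis=-1)
    nbr = np.argsort(d, axis=1)[:, 1:k + 1]          # skip index 0 (the point itself)
    rows = np.arange(len(points))[:, None]
    geo = np.exp(-(d[rows, nbr] / sigma_d) ** 2)     # geometric-domain affinity
    fd = np.linalg.norm(feats[:, None] - feats[None], axis=-1)
    sim = np.exp(-(fd[rows, nbr] / sigma_f) ** 2)    # feature-domain affinity
    sparsity = d[rows, nbr].mean(axis=1)             # per-point local sparsity proxy
    w = geo * sim * sparsity[nbr]                    # up-weight sparser neighbors
    return nbr, w / w.sum(axis=1, keepdims=True)     # normalized per point
```

A real implementation would compute this per layer inside the encoder-decoder; this sketch only shows how the three quantities fold into one adaptive weight.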
Systems and methods for a cross media joint friend and item recommendation framework
Various embodiments of systems and methods for cross media joint friend and item recommendations are disclosed herein.
Systems and methods for quantum processing of data using a sparse coded dictionary learned from unlabeled data and supervised learning using encoded labeled data elements
Systems, methods, and embodiments thereof relate to unsupervised or semi-supervised feature learning using a quantum processor. To achieve unsupervised or semi-supervised feature learning, the quantum processor is programmed to perform Hierarchical Deep Learning (referred to as HDL) over one or more data sets. Systems and methods search for, parse, and detect maximally repeating patterns within or across data sets. Embodiments use sparse coding to detect maximally repeating patterns in or across data; examples of sparse coding include L0 and L1 sparse coding. Some implementations may append, incorporate, or attach labels to dictionary elements, or to constituent elements of one or more dictionaries. There may be a logical association between a label and the element it labels, such that the process of unsupervised or semi-supervised feature learning spans both the elements and the incorporated, attached, or appended labels.
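L1 sparse coding, one of the two variants the abstract names, has a well-known classical solver that clarifies what the quantum processor is being asked to optimize. The sketch below solves min over a of ½‖x − Da‖² + λ‖a‖₁ by iterative soft-thresholding (ISTA); it is a classical illustration, not the patented quantum procedure.

```python
import numpy as np

def ista(D, x, lam=0.1, iters=200):
    # L1 sparse coding of signal x over dictionary D (columns = atoms).
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ a - x)                  # gradient of the smooth term
        z = a - g / L                          # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold step
    return a
```

The soft-threshold makes small coefficients exactly zero, which is what produces the sparse dictionary representation the abstract builds on; L0 sparse coding instead caps the number of nonzero coefficients directly.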
Image processing method, an image processing apparatus, and a surveillance system
An image processing method including: capturing changes in a monitored scene; and performing a sparse feature calculation on the changes in the monitored scene to obtain a sparse feature map.
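A minimal reading of this abstract is frame differencing followed by thresholding, which leaves a map that is mostly zeros. The sketch below assumes that reading; the threshold, the grayscale-frame input, and the coordinate-list output are illustrative choices, not the patented calculation.

```python
import numpy as np

def sparse_change_map(prev_frame, cur_frame, thresh=25):
    # Per-pixel absolute difference between two grayscale frames stands in
    # for the "changes in the monitored scene".
    diff = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > thresh                       # keep only significant changes
    sparse = np.where(mask, diff, 0).astype(np.uint8)  # sparse feature map
    coords = np.argwhere(mask)                 # COO-style coordinate list
    return sparse, coords
```

A surveillance system would then run detection or recognition only on the nonzero coordinates, which is the practical payoff of keeping the map sparse.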
MACHINE LEARNING SPARSE COMPUTATION MECHANISM
Techniques to improve performance of matrix multiply operations are described in which a compute kernel can specify one or more element-wise operations to perform on output of the compute kernel before the output is transferred to higher levels of a processor memory hierarchy.
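The fusion idea — apply an element-wise operation to each output tile before it leaves the kernel, rather than in a second pass over memory — can be modeled in a few lines. This is a NumPy model of the dataflow, not GPU kernel code; the tile size and the `elementwise` callback are illustrative.

```python
import numpy as np

def gemm_fused(A, B, elementwise=None):
    # Tiled matrix multiply with an optional fused element-wise epilogue.
    C = np.empty((A.shape[0], B.shape[1]), dtype=A.dtype)
    tile = 64
    for i in range(0, A.shape[0], tile):
        for j in range(0, B.shape[1], tile):
            block = A[i:i + tile] @ B[:, j:j + tile]   # compute one output tile
            if elementwise is not None:
                block = elementwise(block)             # fuse op before write-back
            C[i:i + tile, j:j + tile] = block
    return C
```

In a real kernel the tile lives in registers or shared memory, so fusing e.g. a ReLU here avoids writing the raw product to the higher levels of the memory hierarchy and reading it back.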
IMAGE RECOGNITION SYSTEM, IMAGE RECOGNITION SERVER, AND IMAGE RECOGNITION METHOD
An object of the present invention is to provide an image recognition system, an image recognition server, and an image recognition method with a new high-security framework that can exploit multi-device diversity. The image recognition system according to the present disclosure includes a computationally non-intensive encryption algorithm based on random unitary transformation and achieves a high level of security. In addition, the image recognition system achieves high recognition performance by using ensemble learning to integrate recognition results based on the dictionaries of four different devices.
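The property that makes random unitary transformation usable for recognition is that it preserves norms and distances, so dictionary matching still works on the transformed (encrypted) features. The sketch below uses the standard QR construction of a random orthogonal matrix as an illustration; the patented scheme's actual key generation and protocol are not shown.

```python
import numpy as np

def random_unitary(n, seed=0):
    # Random real-orthogonal matrix via QR of a Gaussian matrix
    # (a common construction; here it models a device's secret key).
    rng = np.random.default_rng(seed)
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))   # sign fix for a well-distributed Q

def encrypt(features, Q):
    # Each row of `features` is transformed by the device's secret Q.
    return features @ Q.T
```

Because Q is orthogonal, distances between encrypted feature vectors equal distances between the originals, which is why a server-side dictionary can be matched without decrypting.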
SYSTEMS AND METHODS TO OBTAIN SUFFICIENT VARIABILITY IN CLUSTER GROUPS FOR USE TO TRAIN INTELLIGENT AGENTS
A method, system, and computer program product for checking that clusters representative of the transactional activity of a group of persons exhibit sufficient variability, including: receiving transactional data; forming clusters from the received transactional data that represent groups of persons who behave similarly; determining that a cluster representing such a group is not sufficiently variable; and increasing, in response to that determination, the variability of the cluster. In an embodiment, the method further includes: creating a superset cluster consisting of both the cluster and its parent; creating test data using the superset as a baseline; injecting the test data into the superset cluster; and determining whether the superset cluster rejects the injected test data as an indication of insufficient variability.
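The variability check and the injection step can be sketched with toy criteria. Here "sufficiently variable" is approximated by a per-feature standard-deviation floor, and the injected test data is sampled from the parent cluster's spread; both choices are illustrative stand-ins for the patent's rejection-based criterion.

```python
import numpy as np

def cluster_is_variable(points, min_std=0.1):
    # Toy test: the cluster is variable enough if every feature's
    # standard deviation clears a floor.
    return bool(np.all(points.std(axis=0) >= min_std))

def inject_test_data(parent, cluster, n=10, seed=0):
    # Sample synthetic records from the parent cluster's mean/spread
    # (the "baseline") and add them to the under-variable cluster.
    rng = np.random.default_rng(seed)
    synth = rng.normal(parent.mean(axis=0), parent.std(axis=0),
                       size=(n, parent.shape[1]))
    return np.vstack([cluster, synth])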
Systems and methods for out-of-distribution classification
An embodiment provided herein preprocesses the input samples to the classification neural network, e.g., by adding Gaussian noise to word/sentence representations to make the function of the neural network satisfy Lipschitz property such that a small change in the input does not cause much change to the output if the input sample is in-distribution. Method to induce properties in the feature representation of neural network such that for out-of-distribution examples the feature representation magnitude is either close to zero or the feature representation is orthogonal to all class representations. Method to generate examples that are structurally similar to in-domain and semantically out-of domain for use in out-of-domain classification training. Method to prune feature representation dimension to mitigate long tail error of unused dimension in out-of-domain classification. Using these techniques, the accuracy of both in-domain and out-of-distribution identification can be improved.