Patent classifications
G06V10/7753
Obstacle distribution simulation method, device and terminal based on a probability graph
Embodiments of an obstacle distribution simulation method, device and terminal based on a probability graph are provided. The method can include: acquiring a plurality of point clouds of a plurality of frames; acquiring real labeling data of an acquisition vehicle at vehicle labeled positions, and acquiring data of a simulation position of the acquisition vehicle; determining the number of obstacles to be simulated at a position to be simulated; extracting real labeling data of the obstacles, and constructing a labeling data set; dividing the labeling data set into a plurality of grids and calculating occurrence probabilities of the plurality of obstacles; selecting the determined number of obstacles to be simulated according to the occurrence probabilities; and acquiring a position distribution of the selected obstacles to be simulated for the position to be simulated based on the real labeling data of the selected obstacles to be simulated.
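The grid-probability and weighted-selection steps described above can be sketched as follows. This is an illustrative sketch only, assuming 2-D obstacle positions and frequency-based cell probabilities; the function names and grid parameters are not from the patent.

```python
import random
from collections import Counter

def grid_probabilities(positions, grid_size):
    """Divide labeled obstacle positions into grid cells and estimate each
    cell's obstacle-occurrence probability by relative frequency."""
    counts = Counter((int(x // grid_size), int(y // grid_size)) for x, y in positions)
    total = sum(counts.values())
    return {cell: c / total for cell, c in counts.items()}

def sample_obstacles(positions, probs, grid_size, n):
    """Select n labeled obstacles for the position to be simulated,
    weighting each obstacle by its grid cell's occurrence probability."""
    weights = [probs[(int(x // grid_size), int(y // grid_size))] for x, y in positions]
    return random.choices(positions, weights=weights, k=n)
```

The position distribution for the simulated obstacles would then be derived from the real labeling data of the selected obstacles.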
UNSUPERVISED REPRESENTATION LEARNING WITH CONTRASTIVE PROTOTYPES
The system and method are directed to prototypical contrastive learning (PCL). PCL explicitly encodes the hierarchical semantic structure of the dataset into the learned embedding space and prevents the network from exploiting low-level cues to solve the unsupervised learning task. PCL introduces prototypes as latent variables to help find the maximum-likelihood estimation of the network parameters in an expectation-maximization framework. PCL iteratively performs an E-step, finding prototypes with clustering, and an M-step, optimizing the network on a contrastive loss.
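The E/M alternation can be sketched at a high level. This is a toy stand-in: the E-step assigns points to their nearest prototype (in place of full k-means over deep embeddings), and the M-step simply pulls each embedding toward its prototype (in place of optimizing a contrastive loss over the network).

```python
import math

def e_step(embeddings, prototypes):
    """E-step: assign each embedding to its nearest prototype (clustering)."""
    return [min(range(len(prototypes)), key=lambda k: math.dist(e, prototypes[k]))
            for e in embeddings]

def m_step(embeddings, prototypes, assign, lr=0.5):
    """M-step: move each embedding toward its assigned prototype
    (a toy stand-in for optimizing the contrastive loss)."""
    return [[e_i + lr * (prototypes[a][i] - e_i) for i, e_i in enumerate(e)]
            for e, a in zip(embeddings, assign)]
```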
TRAINING POINT CLOUD PROCESSING NEURAL NETWORKS USING PSEUDO-ELEMENT-BASED DATA AUGMENTATION
Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for performing training of a neural network that is configured to process a network input comprising a point cloud to generate a network output for a point cloud processing task. The system obtains a set of labeled training examples and a set of unlabeled point clouds, generates a respective pseudo-label for each unlabeled point cloud, generates a plurality of pseudo-elements based on the respective pseudo-label for each unlabeled point cloud, generates augmented training data by augmenting the labeled training examples using the pseudo-elements generated for the unlabeled point clouds, and performs training of the neural network on the augmented training data.
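The pseudo-element pipeline can be sketched minimally. Here the trained model is stubbed as a simple per-point classifier, a "pseudo-element" is the subset of points sharing a pseudo-label, and augmentation pastes that subset into a labeled scene at an offset; all names and the 2-D point representation are illustrative assumptions.

```python
def pseudo_label(points, model):
    """Generate a pseudo-label per point using a (stubbed) trained model."""
    return [model(p) for p in points]

def extract_pseudo_elements(points, labels, target):
    """A pseudo-element: the points predicted to belong to one object class."""
    return [p for p, l in zip(points, labels) if l == target]

def augment(labeled_scene, pseudo_element, offset):
    """Augment a labeled scene by pasting the pseudo-element at an offset."""
    ox, oy = offset
    return labeled_scene + [(x + ox, y + oy) for x, y in pseudo_element]
```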
VIDEO REPRESENTATION LEARNING
Certain aspects of the present disclosure provide techniques for training a first model based on a first labeled video dataset; generating a plurality of action-words based on output generated by the first model processing motion data in videos of an unlabeled video dataset; defining labels for the videos in the unlabeled video dataset based on the generated action-words; and training a second model based on the labels for the videos in the unlabeled video dataset.
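The action-word step resembles vector quantization of motion features. A minimal sketch, assuming the first model's outputs are clustered into centroids ("action-words") and a video's label is defined from the words assigned to its clips; the majority-vote labeling rule is an assumption for illustration.

```python
import math

def quantize(features, centroids):
    """Map each per-clip motion feature to its nearest action-word index."""
    return [min(range(len(centroids)), key=lambda k: math.dist(f, centroids[k]))
            for f in features]

def define_label(features, centroids):
    """A simple video-level label: the most frequent action-word."""
    words = quantize(features, centroids)
    return max(set(words), key=words.count)
```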
SYSTEMS AND METHODS FOR SEMI-SUPERVISED LEARNING WITH CONTRASTIVE GRAPH REGULARIZATION
Embodiments described herein provide an approach (referred to as “Co-training” mechanism throughout this disclosure) that jointly learns two representations of the training data, their class probabilities and low-dimensional embeddings. Specifically, two representations of each image sample are generated: a class probability produced by the classification head and a low-dimensional embedding produced by the projection head. The classification head is trained using memory-smoothed pseudo-labels, where pseudo-labels are smoothed by aggregating information from nearby samples in the embedding space. The projection head is trained using contrastive learning on a pseudo-label graph, where samples with similar pseudo-labels are encouraged to have similar embeddings.
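The memory-smoothing step for pseudo-labels can be sketched as follows: a sample's class probabilities are averaged with those of its nearest neighbors in the embedding space. The choice of k and the uniform neighbor weighting are assumptions for illustration, not the patent's exact aggregation.

```python
import math

def smooth_pseudo_label(idx, embeddings, probs, k=2):
    """Smooth sample idx's class probabilities by averaging with its
    k nearest neighbors in the low-dimensional embedding space."""
    order = sorted(range(len(embeddings)),
                   key=lambda j: math.dist(embeddings[idx], embeddings[j]))
    neighbors = [j for j in order if j != idx][:k]
    n_cls = len(probs[idx])
    return [(probs[idx][c] + sum(probs[j][c] for j in neighbors)) / (k + 1)
            for c in range(n_cls)]
```

Since each input is a valid distribution, the averaged output remains one; samples with similar smoothed pseudo-labels then form edges in the pseudo-label graph used for contrastive training of the projection head.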
DATA AUGMENTATION METHOD, LEARNING DEVICE, AND RECORDING MEDIUM
First optimization processing for optimizing parameters of a DNN and second optimization processing for optimizing hyperparameters for each sample used in data augmentation processing are alternately performed. The first optimization processing includes causing the DNN to predict a first augmentation label from a first augmented sample, calculating a first error function between the first augmentation label and a first correct label for a first sample, and updating the parameters in accordance with the first error function. The second optimization processing includes acquiring a second sample, causing the DNN after the updating of the parameters to predict a second label from the second sample, calculating a second error function between the second label and a second correct label for the second sample, and updating the hyperparameter in accordance with a gradient obtained by differentiation of the second error function with respect to the hyperparameter.
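The alternating scheme can be sketched with a toy scalar model. This is a deliberate simplification: the "DNN" is one weight, the augmentation scales the input by the hyperparameter, and a finite difference stands in for differentiating the second error function through the parameter update.

```python
def train_step(w, x, y, h, lr=0.1):
    """First optimization: gradient step on the model parameter w, using a
    sample augmented under hyperparameter h."""
    x_aug = x * (1 + h)              # toy augmentation controlled by h
    err = w * x_aug - y              # first error (squared loss below)
    return w - lr * 2 * err * x_aug  # d(err**2)/dw = 2 * err * x_aug

def val_loss(w0, x, y, h, x_val, y_val):
    """Second error function: loss of the updated model on a second sample."""
    w1 = train_step(w0, x, y, h)
    return (w1 * x_val - y_val) ** 2

def hyper_step(w0, x, y, h, x_val, y_val, lr=0.01, eps=1e-4):
    """Second optimization: update h along the gradient of the second error
    with respect to h (finite difference in place of autodiff)."""
    g = (val_loss(w0, x, y, h + eps, x_val, y_val)
         - val_loss(w0, x, y, h - eps, x_val, y_val)) / (2 * eps)
    return h - lr * g
```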
Automated Content Analysis and Annotation
A system includes a computing platform having processing hardware, and a system memory storing a software code. The processing hardware is configured to execute the software code to receive content including an image having multiple image regions, determine boundaries of each of the image regions to identify multiple bounded image regions, identify, within each of the bounded image regions, one or more local features and one or more global features, and identify, within each of the bounded image regions, another one or more local features based on a comparison with corresponding local features identified in each of one or more other bounded image regions. The processing hardware is further configured to execute the software code to annotate each of the bounded image regions using its respective one or more local features, its other one or more local features, and its one or more global features, to provide annotated content.
AUTOMATED ANNOTATION OF VISUAL DATA THROUGH COMPUTER VISION TEMPLATE MATCHING
A method of generating labeled training images for a machine learning system includes providing a set of labeled images, each of the labeled images in the set of labeled images depicting an instance of a type of object and comprising a label identifying the type of object, providing an unlabeled image including an instance of the object, generating bounding box coordinates for one or more bounding boxes around the instance of the object in the unlabeled image using the labeled images in the set of labeled images as templates, consolidating the one or more bounding boxes into a consolidated bounding box around the instance of the object in the unlabeled image, and labeling the consolidated bounding box according to the type of object to generate a labeled output image including bounding box coordinates of the consolidated bounding box.
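The matching and consolidation steps can be sketched directly. Here a labeled image is used as a template, slid over the unlabeled image with a sum-of-squared-differences score, and multiple resulting boxes are consolidated by coordinate averaging; SSD scoring and mean consolidation are illustrative choices, not fixed by the patent.

```python
def match_template(image, template):
    """Slide the template over the image and return the bounding box
    (top, left, bottom, right) of the best SSD match."""
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    r, c = best_pos
    return (r, c, r + th, c + tw)

def consolidate(boxes):
    """Consolidate several candidate boxes by averaging their coordinates."""
    n = len(boxes)
    return tuple(sum(b[i] for b in boxes) / n for i in range(4))
```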
INTELLIGENT LAYOUT DESIGN METHOD OF CURVILINEARLY STIFFENED STRUCTURES BASED ON IMAGE FEATURE LEARNING
An intelligent layout design method for curvilinearly stiffened structures based on image feature learning. First, the design variables of the curvilinearly stiffened structure are determined based on a path function. An autoencoder network is built to learn the structural characteristics of the images, and transfer learning of the model is then carried out. A convolutional neural network is built to learn from the image set with mechanical-response labels. Finally, an evolutionary algorithm optimizes the layout of the curvilinearly stiffened structure based on the trained model. The invention addresses the difficulty that traditional optimization methods have with design problems involving many, variable design variables, and is a promising technique for the layout design of components in the engineering field.
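The final optimization step can be sketched as a simple (1+1) evolutionary loop. The `fitness` callable stands in for the trained surrogate network evaluating a candidate stiffener layout (encoded as path-function design variables); the mutation scheme and acceptance rule are illustrative assumptions.

```python
import random

def evolve(fitness, init, steps=200, sigma=0.1, seed=0):
    """(1+1) evolution: mutate the current best layout with Gaussian noise
    and keep the candidate if the surrogate predicts a better response."""
    rng = random.Random(seed)
    best = list(init)
    best_f = fitness(best)
    for _ in range(steps):
        cand = [g + rng.gauss(0, sigma) for g in best]
        f = fitness(cand)
        if f < best_f:
            best, best_f = cand, f
    return best, best_f
```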
AUTONOMOUS AND CONTINUOUSLY SELF-IMPROVING LEARNING SYSTEM
A system and methods are provided in which an artificial intelligence inference module identifies targeted information in large-scale unlabeled data. The inference module autonomously learns hierarchical representations from the unlabeled data and continually self-improves from self-labeled data points, using a teacher model trained to detect known targets from combined inputs: a small, hand-labeled, curated dataset prepared by a domain expert, together with intermediate and global context features self-generated from the unlabeled dataset by unsupervised and self-supervised processes. The trained teacher model processes further unlabeled data to self-generate new weakly supervised training samples that are self-refined and self-corrected, without human supervision, and then used as inputs to a noisy student model trained in a semi-supervised learning process on a combination of the teacher model's training set and the new weakly supervised training samples. With each iteration, the noisy student model continually self-optimizes its learned parameters against a set of configurable validation criteria, such that the learned parameters of the noisy student surpass and replace those of the prior-iteration teacher model; these optimized learned parameters are periodically used to update the artificial intelligence inference module.
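One self-improvement iteration can be sketched at a high level. Models are stubbed as 1-nearest-neighbor classifiers, the student's noise injection is omitted, and a simple accuracy threshold stands in for the configurable validation criteria; all of these are illustrative simplifications.

```python
import math

def nn_predict(train, x):
    """Stub model: predict the label of the nearest training point."""
    return min(train, key=lambda t: math.dist(t[0], x))[1]

def self_training_iteration(labeled, unlabeled, validation, teacher_train):
    """Teacher pseudo-labels unlabeled data; the student trains on the
    combined set and replaces the teacher only if it passes validation."""
    pseudo = [(x, nn_predict(teacher_train, x)) for x in unlabeled]
    student_train = labeled + pseudo  # combined training set

    def accuracy(train):
        return sum(nn_predict(train, x) == y for x, y in validation) / len(validation)

    # Configurable validation criterion: student must match or beat teacher
    if accuracy(student_train) >= accuracy(teacher_train):
        return student_train  # student's parameters replace the teacher's
    return teacher_train
```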