Patent classifications
G06V10/7753
UNSUPERVISED DOMAIN ADAPTATION WITH NEURAL NETWORKS
Approaches presented herein provide for unsupervised domain transfer learning. In particular, three neural networks can be trained together using at least labeled data from a first domain and unlabeled data from a second domain. Features of the data are extracted using a feature extraction network. A first classifier network uses these features to classify the data, while a second classifier network uses these features to determine the relevant domain. A combined loss function is used to optimize the networks, with the goal of the feature extraction network extracting features that the first classifier network can use to classify the data accurately, while preventing the second classifier network from determining the domain of the data. Such optimization enables object classification to be performed with high accuracy for either domain, even though there may have been little to no labeled training data for the second domain.
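A minimal NumPy sketch of the combined objective described above. The subtraction of the weighted domain loss stands in for the adversarial coupling between the feature extractor and the domain classifier; the network outputs, labels, and the weight λ are illustrative assumptions, not values from the patent.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Mean negative log-probability of each sample's true class.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def combined_loss(class_probs, class_labels, domain_probs, domain_labels, lam=0.1):
    # Reward accurate class predictions while rewarding the feature
    # extractor when the domain classifier is confused (minus sign).
    l_cls = cross_entropy(class_probs, class_labels)
    l_dom = cross_entropy(domain_probs, domain_labels)
    return l_cls - lam * l_dom

# Toy batch: 2 samples, 3 object classes, 2 domains.
class_probs = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]])
domain_probs = np.array([[0.5, 0.5], [0.5, 0.5]])  # domain classifier is confused
loss = combined_loss(class_probs, np.array([0, 1]), domain_probs, np.array([0, 1]))
print(loss)
```

A confused domain classifier (uniform domain probabilities) lowers the combined loss, which is what pushes the extracted features toward domain invariance.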
SYSTEMS AND COMPUTER-IMPLEMENTED METHODS FOR IDENTIFYING ANOMALIES IN AN OBJECT AND TRAINING METHODS THEREFOR
A system identifies anomalies in an image of an object. An input image of the object containing zero or more anomalies is supplied to an image encoder. The image encoder generates an image model. The image model is applied to an image decoder that forms a substitute non-anomalous image of the object. Differences between the input image and the substitute non-anomalous image identify zero or more areas of the input image that contain the zero or more anomalies. The system implements a flow-based model and has been trained using (a) a set of augmented anomaly-free images of the object applied at the image encoder and (b) a reconstruction loss calculated based on a norm of differences between each augmented anomaly-free image of the object and a corresponding output image from the image decoder.
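A small NumPy sketch of the two operations the abstract describes: a reconstruction loss based on a norm of pixel-wise differences, and an anomaly map obtained by comparing the input to the substitute non-anomalous reconstruction. The 2×2 "images" and the threshold are toy assumptions.

```python
import numpy as np

def reconstruction_loss(x, x_hat, p=1):
    # Norm of pixel-wise differences (L1 by default), averaged over pixels.
    return np.mean(np.abs(x - x_hat) ** p)

def anomaly_map(x, x_hat, thresh=0.2):
    # Flag pixels where the input deviates strongly from the
    # decoder's substitute non-anomalous reconstruction.
    return np.abs(x - x_hat) > thresh

x = np.array([[0.1, 0.9], [0.1, 0.1]])      # input with a "defect" at (0, 1)
x_hat = np.array([[0.1, 0.1], [0.1, 0.1]])  # anomaly-free reconstruction
print(reconstruction_loss(x, x_hat))  # 0.2
print(anomaly_map(x, x_hat))
```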
CONTINUOUS TRAINING METHODS FOR SYSTEMS IDENTIFYING ANOMALIES IN AN IMAGE OF AN OBJECT
A system identifying anomalies in an image of an object is first trained using first sets of images corresponding to first anomaly types for the object. A model of the object is formed in a latent space. A label for each anomalous image is used to calculate vectors containing means and standard deviations for each first anomaly type. The means and standard deviations are used to calculate a log-likelihood loss for each first anomaly type. The system is retrained using second sets of images corresponding to second anomaly types for the object. The vectors are supplemented using labels for each second anomaly type. A statistically sufficient sample of information in the mean and standard deviation vectors is supplied to the latent space. A log-likelihood loss for each of the first and second anomaly types is calculated based on their respective mean and standard deviation.
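One way to read the per-type statistics is as a diagonal Gaussian per anomaly type in the latent space; a sketch under that assumption, with hypothetical type names ("scratch", "dent") and a 4-dimensional latent code. Adding a new type is just adding an entry to the statistics, mirroring the continuous-training idea.

```python
import numpy as np

def gaussian_log_likelihood(z, mu, sigma):
    # Log-likelihood of latent vector z under the diagonal Gaussian
    # (mu, sigma) stored for one anomaly type.
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (z - mu)**2 / (2 * sigma**2))

# Hypothetical per-type statistics kept between training rounds; a second
# round simply supplements the vectors with new anomaly types.
stats = {"scratch": (np.zeros(4), np.ones(4))}
stats["dent"] = (np.full(4, 2.0), np.ones(4))  # added during retraining

z = np.zeros(4)  # latent code of a new image
ll = {k: gaussian_log_likelihood(z, mu, sd) for k, (mu, sd) in stats.items()}
best = max(ll, key=ll.get)
print(best)  # z is closest to the "scratch" statistics
```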
Methods and systems for identifying internal conditions in juvenile fish through non-invasive means
Methods and systems are disclosed for improvements in aquaculture that allow for increasing the number of fish and the efficiency of harvesting in an aquaculture setting by identifying and predicting internal conditions of juvenile fish based on external characteristics that are imaged through non-invasive means.
APPLYING SELF-CONFIDENCE IN MULTI-LABEL CLASSIFICATION TO MODEL TRAINING
A computer model is trained to classify regions of a space (e.g., a pixel of an image or a voxel of a point cloud) according to a multi-label classification. To improve the model's accuracy, the model's self-confidence is determined with respect to its own predictions of regions in a training space. The self-confidence is determined based on the class predictions, such as a difference between the highest-predicted class and a second-highest-predicted class. When these are similar, it may reflect areas for potential improvement by focusing training on these low-confidence areas. Additional training may be performed by including, in subsequent training iterations, modified training data that focuses on low-confidence areas. As another example, additional training may be performed using the self-confidence to modify a classification loss used to refine parameters of the model.
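The margin-based self-confidence measure can be sketched directly: the gap between the highest and second-highest predicted class probability per region, with small gaps flagging regions to emphasize in further training. The per-region probabilities and the 0.1 threshold are illustrative.

```python
import numpy as np

def self_confidence(probs):
    # Margin between the highest and second-highest class probability
    # for each region; a small margin means low self-confidence.
    top2 = np.sort(probs, axis=-1)[..., -2:]
    return top2[..., 1] - top2[..., 0]

# Class probabilities for 3 regions over 3 classes.
probs = np.array([[0.9, 0.05, 0.05],   # confident
                  [0.4, 0.35, 0.25],   # low confidence
                  [0.5, 0.45, 0.05]])  # low confidence
margins = self_confidence(probs)
low_conf = margins < 0.1   # regions to emphasize in subsequent iterations
print(margins, low_conf)
```

The same margin could equally be used as a per-region weight on the classification loss, which is the second variant the abstract mentions.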
MOBILE AI
A machine learning model can be updated based on collected, initially unlabeled data. The unlabeled data can be labeled based on comparisons to labeled data. The newly labeled data, referred to as “weak labeled data” (as it was labeled without the direct input of a professional), can then be used as training data to retrain the machine learning model.
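One simple realization of "labeling based on comparisons to labeled data" is nearest-neighbor label transfer; a sketch under that assumption, with toy 2-D feature vectors. The abstract does not specify the comparison, so this is illustrative.

```python
import numpy as np

def weak_label(unlabeled, labeled, labels):
    # Assign each unlabeled sample the label of its nearest labeled
    # sample (Euclidean distance) — a "weak label" produced without
    # a human annotator.
    dists = np.linalg.norm(unlabeled[:, None, :] - labeled[None, :, :], axis=-1)
    return labels[np.argmin(dists, axis=1)]

labeled = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = np.array([0, 1])
unlabeled = np.array([[0.5, 0.2], [9.0, 11.0]])
print(weak_label(unlabeled, labeled, labels))  # [0 1]
```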
Generative Adversarial Network Medical Image Generation for Training of a Classifier
Mechanisms are provided to implement a machine learning training model. The machine learning training model trains an image generator of a generative adversarial network (GAN) to generate medical images approximating actual medical images. The machine learning training model augments a set of training medical images to include one or more generated medical images generated by the image generator of the GAN. The machine learning training model trains a machine learning model based on the augmented set of training medical images to identify anomalies in medical images. The trained machine learning model is applied to new medical image inputs to classify the medical images as having an anomaly or not.
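The augmentation step can be sketched without a real GAN: a stand-in generator maps latent noise to images, and its output is concatenated with the real training set. The toy generator, image size, and counts are assumptions; in practice the generator would be a trained GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(z, W):
    # Stand-in for a trained GAN generator: maps latent noise to images.
    return np.tanh(z @ W)

def augment(real_images, n_generated, latent_dim, W):
    # Augment the set of training images with generator output.
    z = rng.normal(size=(n_generated, latent_dim))
    fake = toy_generator(z, W)
    return np.concatenate([real_images, fake], axis=0)

real = rng.normal(size=(10, 16))   # 10 "real" images, 16 pixels each
W = rng.normal(size=(4, 16))       # frozen toy generator weights
augmented = augment(real, n_generated=5, latent_dim=4, W=W)
print(augmented.shape)  # (15, 16)
```

The classifier for anomaly detection would then be trained on `augmented` instead of `real`.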
RECOGNIZER TRAINING DEVICE, RECOGNITION DEVICE, DATA PROCESSING SYSTEM, DATA PROCESSING METHOD, AND STORAGE MEDIUM
The disclosure trains a recognizer that outputs a recognition result by using a time series of feature data as an input. In addition, the disclosure sets a data range, whose length is a specified time width, over a set of feature data to which a time is added, and selects a specified number of pieces of the feature data from within the data range; adds a teacher label corresponding to the recognition result to the selected plurality of pieces of feature data, whose time order is retained, based on information regarding the plurality of pieces of feature data; and trains the recognizer by using, as training data, a set of the plurality of pieces of feature data, whose time order is retained, and the teacher label.
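The windowing step can be sketched as: set a data range of a given time width over time-stamped feature data, then pick a fixed number of feature vectors inside it while keeping their time order. Evenly spaced picks and the "walking" teacher label are illustrative assumptions.

```python
import numpy as np

def select_window(times, feats, t_start, width, n_select):
    # Select n_select feature vectors from the data range
    # [t_start, t_start + width), retaining time order.
    mask = (times >= t_start) & (times < t_start + width)
    idx = np.nonzero(mask)[0]
    # Evenly spaced picks inside the range keep the time order intact.
    picks = idx[np.linspace(0, len(idx) - 1, n_select).round().astype(int)]
    return feats[picks]

times = np.arange(10.0)            # one feature vector per second
feats = np.arange(10.0)[:, None]   # toy 1-D features
sample = select_window(times, feats, t_start=2.0, width=6.0, n_select=3)
label = "walking"                  # hypothetical teacher label for the window
print(sample.ravel(), label)
```

The pair `(sample, label)` is one training example for the recognizer.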
SYSTEMS AND METHODS FOR PARTIALLY SUPERVISED LEARNING WITH MOMENTUM PROTOTYPES
A learning mechanism with partially-labeled web images is provided that corrects noisy labels during learning. Specifically, the mechanism employs a momentum prototype that represents common characteristics of a specific class. One training objective is to minimize the difference between the normalized embedding of a training image sample and the momentum prototype of the corresponding class. Meanwhile, during the training process, the momentum prototype is used to generate a pseudo label for the training image sample, which can then be used to identify and remove out-of-distribution (OOD) samples and to correct the noisy labels from the original partially-labeled training images. The momentum prototype for each class is in turn constantly updated based on the embeddings of new training samples and their pseudo labels.
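A sketch of the two core operations: pseudo-labeling a normalized embedding by its most similar class prototype, and the momentum update that slowly drifts the prototype toward new embeddings of that class. The momentum value 0.99 and the 2-D embeddings are illustrative.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def update_prototype(proto, embedding, m=0.99):
    # Momentum update: the prototype moves slowly toward new
    # normalized embeddings of its class.
    return normalize(m * proto + (1 - m) * embedding)

def pseudo_label(embedding, prototypes):
    # Assign the class of the most similar (highest dot product) prototype.
    return int(np.argmax(prototypes @ embedding))

protos = normalize(np.array([[1.0, 0.0], [0.0, 1.0]]))  # 2 class prototypes
z = normalize(np.array([0.9, 0.1]))                     # new sample embedding
c = pseudo_label(z, protos)                             # → class 0
protos[c] = update_prototype(protos[c], z)
print(c, protos[c])
```

A sample whose embedding is far from every prototype would, under this scheme, be a candidate OOD sample to remove.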
METHOD FOR ANNOTATING TRAINING DATA
The invention relates to a method of annotating training data for an artificial intelligence, comprising the following steps: storing, in a database, a set of data to be annotated; storing, in said database, at least a first description of a first facet for data selection in said set of data, said first description being associated with a first task to be performed by said artificial intelligence; selecting said first facet in said database; applying said first facet to data in said set of data to obtain first filtered data; receiving at least a first annotation of said first filtered data; and storing said first annotation in the database in association with said first facet.
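The facet mechanism can be sketched as a predicate applied to stored records, with the received annotations kept in association with the facet. The dataset, the "night-time images" facet, and the "pedestrian" annotation are hypothetical examples, not from the patent.

```python
def apply_facet(dataset, facet):
    # Filter the stored data with a facet: a predicate describing which
    # records are relevant to one annotation task.
    return [d for d in dataset if facet(d)]

# Hypothetical dataset and facet for a night-time labeling task.
dataset = [{"id": 1, "time": "day"}, {"id": 2, "time": "night"},
           {"id": 3, "time": "night"}]
night_facet = lambda d: d["time"] == "night"

filtered = apply_facet(dataset, night_facet)
annotations = {d["id"]: "pedestrian" for d in filtered}  # received annotations
print([d["id"] for d in filtered])  # [2, 3]
```

Storing `annotations` keyed alongside the facet description reproduces the final step of the claimed method.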