G06V10/7753

AUTONOMOUS VEHICLE SYSTEM FOR INTELLIGENT ON-BOARD SELECTION OF DATA FOR BUILDING A REMOTE MACHINE LEARNING MODEL
20220230021 · 2022-07-21 ·

Systems and methods for on-board selection of data logs for training a machine learning model are disclosed. The system includes an autonomous vehicle having a plurality of sensors and a processor. The processor receives a plurality of unlabeled images from the plurality of sensors, a machine learning model, and a loss function corresponding to the machine learning model. For each of the plurality of images, the processor then determines one or more predictions using the machine learning model, computes an importance function based on the loss function and the one or more predictions, and transmits that image to a remote server for updating the machine learning model when a value of the importance function is greater than a threshold.
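The thresholded selection rule above can be sketched in a few lines. The abstract does not specify the importance function, so predictive entropy and the 0.5 threshold used here are illustrative assumptions, not the patent's actual loss-derived function.

```python
import numpy as np

def importance(probs: np.ndarray) -> float:
    # Entropy of the predicted class distribution, used here as a
    # stand-in importance function: high uncertainty suggests the
    # image would be informative for updating the remote model.
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def select_for_upload(batch_probs, threshold=0.5):
    # Indices of images whose importance exceeds the threshold;
    # these are the images the vehicle would transmit to the server.
    return [i for i, p in enumerate(batch_probs) if importance(p) > threshold]

confident = np.array([0.98, 0.01, 0.01])  # low entropy, kept on board
uncertain = np.array([0.40, 0.35, 0.25])  # high entropy, transmitted
selected = select_for_upload([confident, uncertain])
```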

LABEL-FREE PERFORMANCE EVALUATOR FOR TRAFFIC LIGHT CLASSIFIER SYSTEM
20210390349 · 2021-12-16 ·

A method is disclosed for evaluating a classifier used to determine a traffic light signal state in images. The method includes, by a computer vision system of a vehicle, receiving at least one image of a traffic signal device of an imminent intersection. The traffic signal device includes a traffic signal face including one or more traffic signal elements. The method includes classifying, by a traffic light classifier (TLC), a classification state of the traffic signal face using labeled images correlated to the received at least one image. The classification state controls an operation of the vehicle at the intersection. The method includes evaluating a performance of the classifying of the classification state generated by the TLC. The evaluation is a label-free performance evaluation based on unlabeled images. The method includes training the TLC based on the evaluated performance.

Image learning device, image learning method, neural network, and image classification device
11200460 · 2021-12-14 ·

An object of the invention is to provide an image learning device, an image learning method, a neural network, and an image classification device which can support appropriate classification of an image. In the image learning device according to an aspect of the invention, the neural network performs a first task of classifying a recognition target in a medical image and outputting a classification score as an evaluation result, and a second task different from the first task. The neural network updates a weight coefficient on the basis of a comparison result between the classification score output for the medical image of a first image group and a ground truth classification label, and does not reflect the classification score output for the medical image of a second image group in an update of the weight coefficient, for the first task. The neural network updates the weight coefficient on the basis of the evaluation result output for the medical image of the first image group and the evaluation result output for the medical image of the second image group, for the second task.
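The masking of the second image group in the first-task update can be sketched as follows. NumPy arrays, cross-entropy for the first task, and a mean-consistency penalty for the second task are all assumptions for illustration; the abstract does not name the concrete losses.

```python
import numpy as np

def two_task_loss(class_scores, labels, in_first_group, eval_first, eval_second):
    # Task 1: cross-entropy over the first (labeled) image group only;
    # scores for the second image group are masked out of the update.
    z = class_scores - class_scores.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(labels)), labels])
    task1 = float((ce * in_first_group).sum() / max(in_first_group.sum(), 1))
    # Task 2: uses evaluation results of BOTH image groups, here a
    # simple penalty on the gap between their mean outputs (placeholder).
    task2 = float((eval_first.mean() - eval_second.mean()) ** 2)
    return task1 + task2

# Sample 1 is in the first group and correctly scored; sample 2 is in
# the second group, so its (badly scored) logits do not affect task 1.
loss = two_task_loss(
    class_scores=np.array([[10.0, 0.0], [0.0, 10.0]]),
    labels=np.array([0, 0]),
    in_first_group=np.array([1.0, 0.0]),
    eval_first=np.array([0.5]),
    eval_second=np.array([0.5]),
)
```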

Adversarial network for transfer learning
11200459 · 2021-12-14 ·

Disclosed herein are arrangements that facilitate the transfer of knowledge from models for a source data-processing domain to models for a target data-processing domain. A convolutional neural network space for a source domain is factored into a first classification space and a first reconstruction space. The first classification space stores class information and the first reconstruction space stores domain-specific information. A convolutional neural network space for a target domain is factored into a second classification space and a second reconstruction space. The second classification space stores class information and the second reconstruction space stores domain-specific information. The distributions of the first classification space and the second classification space are aligned.
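The abstract does not say how the alignment is enforced (the title suggests an adversarial network); a minimal moment-matching stand-in for aligning features drawn from the two classification spaces is:

```python
import numpy as np

def mean_alignment_loss(src_feats, tgt_feats):
    # Squared distance between the mean feature vectors of the source
    # and target classification spaces; driving this toward zero is a
    # crude form of distribution alignment (adversarial training, as in
    # the title, is a stronger alternative).
    return float(np.sum((src_feats.mean(axis=0) - tgt_feats.mean(axis=0)) ** 2))

aligned = mean_alignment_loss(np.ones((4, 3)), np.ones((5, 3)))
shifted = mean_alignment_loss(np.ones((4, 3)), np.zeros((5, 3)))
```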

ROBUST LEARNING DEVICE, ROBUST LEARNING METHOD, AND ROBUST LEARNING PROGRAM
20210383274 · 2021-12-09 ·

This robust learning device 10 includes a quantity-increasing unit 11 which, given the classification results of a classification model that classifies learning data into one of two or more classes, increases by a predetermined amount the highest of the per-class scores taken prior to activation of the classification model's output layer, excluding the score of the correct class indicated by the learning data's correct label.
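The quantity-increasing step can be sketched as below, assuming "quantity-increase" means adding a fixed margin to the highest pre-activation score among the non-correct classes; the margin value is an illustrative assumption.

```python
import numpy as np

def boost_top_wrong_logit(logits, correct_idx, delta=1.0):
    # Raise the highest score among the non-correct classes by delta,
    # before the output-layer activation. Training against the boosted
    # scores forces a larger margin for the correct class, which is the
    # intended robustness effect.
    out = logits.astype(float).copy()
    wrong = [i for i in range(len(out)) if i != correct_idx]
    top_wrong = max(wrong, key=lambda i: out[i])
    out[top_wrong] += delta
    return out

boosted = boost_top_wrong_logit(np.array([2.0, 5.0, 1.0]), correct_idx=1)
```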

SYSTEM AND METHOD FOR UTILIZING GROUPED PARTIAL DEPENDENCE PLOTS AND GAME-THEORETIC CONCEPTS AND THEIR EXTENSIONS IN THE GENERATION OF ADVERSE ACTION REASON CODES

A framework for interpreting machine learning models is proposed that utilizes interpretability methods to determine the contribution of groups of input variables to the output of the model. Input variables are grouped based on dependencies with other input variables. The groups are identified by processing a training data set with a clustering algorithm. Once the groups of input variables are defined, scores related to each group of input variables for a given instance of the input vector processed by the model are calculated according to one or more algorithms. The algorithms can utilize group Partial Dependence Plot (PDP) values, Shapley Additive Explanations (SHAP) values, and Banzhaf values, and their extensions among others, and a score for each group can be calculated for a given instance of an input vector per group. These scores can then be sorted, ranked, and then combined into one hybrid ranking.
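The sort-rank-combine step might look like the following sketch, where the per-group scores (e.g. grouped SHAP or PDP values) are assumed to be already computed for one instance of the input vector, and the hybrid ranking is formed by averaging per-method ranks; the averaging rule is an assumption.

```python
def hybrid_ranking(scores_by_method):
    # scores_by_method: {method name: {group name: score}} for one
    # instance of the input vector. Rank the groups within each method
    # by absolute score, then order groups by their average rank.
    groups = list(next(iter(scores_by_method.values())))
    avg_rank = {}
    for g in groups:
        ranks = []
        for scores in scores_by_method.values():
            ordered = sorted(scores, key=lambda k: abs(scores[k]), reverse=True)
            ranks.append(ordered.index(g) + 1)
        avg_rank[g] = sum(ranks) / len(ranks)
    return sorted(groups, key=lambda g: avg_rank[g])

# Hypothetical group scores for a credit-decision instance.
ranking = hybrid_ranking({
    "shap": {"income": 0.9, "utilization": 0.4, "history": 0.1},
    "pdp":  {"income": 0.7, "utilization": 0.5, "history": 0.2},
})
```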

METHOD, APPARATUS, AND ELECTRONIC DEVICE FOR TRAINING NEURAL NETWORK MODEL
20210374474 · 2021-12-02 ·

The present disclosure relates to a method for training a neural network model, performed at an electronic device. The method includes: performing initial training by using a first training sample set to obtain an initial neural network model; performing a prediction on a second training sample set by using the initial neural network model to obtain a prediction result for each of the training samples in the second training sample set; determining a plurality of preferred samples from the second training sample set based on the prediction results; adding the plurality of preferred samples, once annotated, to the first training sample set to obtain an expanded first training sample set; and continuing to train the initial neural network model by using the expanded first training sample set to obtain an updated neural network model, until a training ending condition is satisfied.
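The abstract does not define what makes a sample "preferred"; one plausible criterion from active learning is least confidence. A sketch under that assumption:

```python
def pick_preferred(predictions, k=2):
    # predictions: per-sample class-probability lists produced by the
    # initial model on the second training sample set. Select the k
    # samples the model is least confident about (smallest top-class
    # probability) as preferred candidates for annotation and addition
    # to the first training sample set.
    ranked = sorted(range(len(predictions)), key=lambda i: max(predictions[i]))
    return ranked[:k]

preferred = pick_preferred([[0.90, 0.10], [0.55, 0.45], [0.60, 0.40]])
```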

DEFECT DETECTION METHOD AND APPARATUS
20210374928 · 2021-12-02 ·

A computer implemented method including acquiring a live image of a subject physical sample of a product or material; inputting the live image to a trained generator neural network to generate a defect-free reconstruction of the live image; comparing the defect-free reconstruction with the live image to determine a difference; and identifying a defect corresponding to the subject physical sample at the location of the determined difference. Unsupervised training of the generator neural network includes acquiring a set of images of a defect-free physical sample and executing a training phase comprising a plurality of training epochs, in which: training data images are synthesized by superimposing defect image data onto each member of the set of defect-free images as a respective parent image; the synthesized training data images are reconstructed by the generator neural network, which is iteratively trained to minimize a loss function between each such reconstruction and the respective defect-free parent image; and the amount of difference between a training data image and its parent image caused by the superimposed defect image data is increased from a minimum to a maximum as a function of a training condition.
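At inference time the comparison step reduces to a pixelwise difference; the 0.2 threshold below is an illustrative assumption, and the reconstruction here is simply supplied rather than produced by a trained generator.

```python
import numpy as np

def find_defects(live, reconstruction, threshold=0.2):
    # Pixels where the live image departs from its defect-free
    # reconstruction by more than the threshold are flagged as defect
    # locations (row, column indices).
    diff = np.abs(live - reconstruction)
    return np.argwhere(diff > threshold)

live = np.array([[0.0, 0.0], [0.0, 0.9]])   # one anomalous pixel
recon = np.zeros((2, 2))                    # stand-in defect-free reconstruction
defect_locations = find_defects(live, recon)
```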

SELF-SUPERVISED CROSS-VIDEO TEMPORAL DIFFERENCE LEARNING FOR UNSUPERVISED DOMAIN ADAPTATION
20210374481 · 2021-12-02 ·

A method is provided for Cross Video Temporal Difference (CVTD) learning. The method adapts a source domain video to a target domain video using a CVTD loss. The source domain video is annotated, and the target domain video is unannotated. The CVTD loss is computed by quantizing clips derived from the source and target domain videos by dividing the source domain video into source domain clips and the target domain video into target domain clips. The CVTD loss is further computed by sampling two clips from each of the source domain clips and the target domain clips to obtain four sampled clips including a first source domain clip, a second source domain clip, a first target domain clip, and a second target domain clip. The CVTD loss is computed as |(second source domain clip−first source domain clip)−(second target domain clip−first target domain clip)|.
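The stated loss reduces directly to code. Representing each sampled clip as a fixed-length feature vector, and averaging the absolute value elementwise, are assumptions for illustration:

```python
import numpy as np

def cvtd_loss(src1, src2, tgt1, tgt2):
    # |(second source domain clip - first source domain clip)
    #  - (second target domain clip - first target domain clip)|,
    # averaged over the clip feature dimensions.
    return float(np.mean(np.abs((src2 - src1) - (tgt2 - tgt1))))

# Identical temporal differences in both domains give zero loss.
matched = cvtd_loss(np.zeros(4), np.ones(4), np.zeros(4), np.ones(4))
# A larger source-domain difference is penalized.
mismatched = cvtd_loss(np.zeros(4), 2 * np.ones(4), np.zeros(4), np.ones(4))
```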

Systems and Methods for Out-of-Distribution Detection
20210374524 · 2021-12-02 ·

Some embodiments of the current disclosure describe methods and systems for detecting out-of-distribution (OOD) data. For example, a method for detecting OOD data includes obtaining, at a neural network composed of a plurality of layers, a set of training data generated according to a distribution. Further, the method comprises generating, via a processor, a feature map by combining mapping functions corresponding to the plurality of layers into a vector of mapping-function elements, and mapping, by the feature map, the set of training data to a set of feature space training data in a feature space. Further, the method comprises identifying, via the processor, a hyper-ellipsoid in the feature space enclosing the feature space training data based on the generated feature map. In addition, the method comprises determining, via the processor, that a first test data sample is OOD data when the mapped first test data sample in the feature space is outside the hyper-ellipsoid.
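A minimal sketch of the hyper-ellipsoid test, assuming the ellipsoid is fit from the empirical mean and covariance of the feature-space training data (a Mahalanobis-distance check); both that fitting rule and the radius of 3.0 are illustrative assumptions.

```python
import numpy as np

def fit_hyper_ellipsoid(feats):
    # Mean and inverse covariance of the feature-space training data;
    # the hyper-ellipsoid is the set of points within a fixed
    # Mahalanobis distance of the mean. A small ridge keeps the
    # covariance invertible.
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def is_ood(x, mu, cov_inv, radius=3.0):
    # A test sample is flagged as OOD when its mapped feature vector
    # lies outside the hyper-ellipsoid.
    d2 = float((x - mu) @ cov_inv @ (x - mu))
    return d2 > radius ** 2

feats = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
mu, cov_inv = fit_hyper_ellipsoid(feats)
```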