Patent classifications
G06V10/778
PLATFORM FOR PERCEPTION SYSTEM DEVELOPMENT FOR AUTOMATED DRIVING SYSTEM
The present invention relates to methods and systems that utilize production vehicles to develop new perception features, both for new sensor hardware and for new algorithms for existing sensors, by using self-supervised continuous training. To achieve this, the production vehicle's own perception output is fused with other sensors in order to generate a bird's eye view of the road scenario over time. The bird's eye view is synchronized with buffered sensor data that was recorded when the road scenario took place, and is subsequently used to train a new perception model to output the bird's eye view directly.
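The synchronization step, pairing buffered sensor frames with the bird's eye view generated for the same moment, can be sketched as follows. This is a minimal illustration, not the patented method; the timestamped frame/label representation and the `max_skew` tolerance are assumptions made for the sketch:

```python
from bisect import bisect_left

def synchronize(buffered_frames, bev_labels, max_skew=0.05):
    """Pair each buffered sensor frame (timestamp, frame) with the
    bird's-eye-view label (timestamp, bev) whose timestamp is closest,
    within max_skew seconds. The resulting pairs serve as training
    examples for a model that outputs the bird's eye view directly."""
    label_times = [t for t, _ in bev_labels]
    pairs = []
    for t, frame in buffered_frames:
        i = bisect_left(label_times, t)
        # Candidates: the label just before and just after timestamp t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(label_times)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(label_times[k] - t))
        if abs(label_times[j] - t) <= max_skew:
            pairs.append((frame, bev_labels[j][1]))
    return pairs
```

Frames with no sufficiently close bird's eye view are simply dropped, which keeps noisy alignments out of the self-supervised training set.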
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING DEVICE
An information processing method is executed by a computer and includes: obtaining a third model in which a first model which is a machine learning model that performs a deblurring process of an input image and outputs a feature quantity and a second model which is a machine learning model that performs an object recognition process of the input image and outputs a result of the object recognition are connected so that an output of the first model becomes an input of the second model; training the third model through machine learning so that a difference between a result of object recognition that is output from the third model after a training image with blur is input into the third model and reference data relating to the result of the object recognition corresponding to the training image decreases; and outputting the third model which has undergone the training.
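The connection of the first (deblurring) and second (recognition) models into a trainable third model can be illustrated with a toy sketch. Linear maps stand in for the two networks and a squared error stands in for the recognition loss; the dimensions, learning rate, and single training pair are illustrative assumptions, not the method of the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: a "first model" (deblurring) and a "second model"
# (recognition), connected so that the output of the first model
# becomes the input of the second model (the "third model").
W1 = rng.normal(scale=0.2, size=(4, 4))   # deblurring stage
W2 = rng.normal(scale=0.2, size=(2, 4))   # recognition stage

def third_model(x):
    return W2 @ (W1 @ x)

def loss(x_blur, y_ref):
    # Difference between the recognition output for a blurred input
    # and the reference data for that training image.
    d = third_model(x_blur) - y_ref
    return float(d @ d)

# One toy training pair: a blurred input and its reference output.
x_blur = rng.normal(size=4)
y_ref = np.array([1.0, 0.0])

initial_loss = loss(x_blur, y_ref)
lr = 0.05
for _ in range(500):
    h = W1 @ x_blur
    err = W2 @ h - y_ref
    # Chain rule: the error gradient flows through the recognition
    # stage back into the deblurring stage, training both jointly.
    gW2 = 2 * np.outer(err, h)
    gW1 = 2 * np.outer(W2.T @ err, x_blur)
    W2 -= lr * gW2
    W1 -= lr * gW1
```

The key point mirrored here is that only the end-to-end recognition loss is minimized; the deblurring stage is never trained against sharp images directly.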
AI-ASSISTED HUMAN DATA AUGMENTATION AND CONTINUOUS TRAINING FOR MACHINE LEARNING MODELS
A method is provided for training at least one classifier model used by an artificial intelligence (AI) system to recognize each of a set of objects and to assign each of the set of objects to a class. The method includes training the at least one classifier model on a training dataset, thereby producing at least one trained classifier model; using the at least one trained classifier model to detect and classify each member of a set of objects, thereby generating a set of inferences, wherein each inference includes (a) a cropped image of a classified object, (b) the classified object's inferred class, and (c) a confidence score associated with the inferred classification; examining the set of inferences with a machine implemented audit trigger, wherein the audit trigger identifies a subset of the set of inferences whose members have (i) a confidence score that falls below a predetermined threshold value, or (ii) a missing classification; and if the identified subset has at least one member, subjecting the identified subset to a human audit, thereby yielding a corrected set of observations, wherein, for each member of the corrected set of observations, the inferred class of the corresponding member of the set of inferences is replaced with a corrected class. The corrected set of observations is then added to a training dataset and used to improve the future accuracy of the classifier model.
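The machine-implemented audit trigger described above, flagging inferences whose confidence falls below a threshold or whose classification is missing and then substituting human-corrected classes, might look like this minimal sketch. The dictionary fields and helper names are assumptions for illustration:

```python
def audit_trigger(inferences, threshold=0.8):
    """Identify the subset of inferences needing human audit:
    confidence below the threshold, or a missing classification."""
    return [inf for inf in inferences
            if inf.get("inferred_class") is None
            or inf.get("confidence", 0.0) < threshold]

def apply_audit(inferences, corrections):
    """Replace inferred classes with human-corrected classes, yielding
    the corrected set of observations to add to the training dataset."""
    corrected = []
    for inf in inferences:
        cls = corrections.get(inf["id"], inf["inferred_class"])
        corrected.append({**inf, "inferred_class": cls})
    return corrected
```

High-confidence inferences bypass the human entirely, so audit effort concentrates on the cases the classifier is least sure about.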
TRAINING ENERGY-BASED MODELS FROM A SINGLE IMAGE FOR INTERNAL LEARNING AND INFERENCE USING TRAINED MODELS
In contrast to prior works that model the internal distribution of patches within an image implicitly with a top-down latent variable model (e.g., a generator), embodiments explicitly represent the statistical distribution within a single image by using an energy-based generative framework, in which a pyramid of energy functions, each parameterized by a bottom-up deep neural network, captures the distributions of patches at different resolutions. Embodiments of a coarse-to-fine sequential training and sampling strategy are also presented to train the model efficiently. Besides learning to generate random samples from white noise, embodiments can learn in parallel with a self-supervised task (e.g., recovering an input image from its corrupted version), which can further improve the descriptive power of the learned model. Embodiments do not require an auxiliary model (e.g., a discriminator) to assist the training, and they also unify internal statistics learning and image generation in a single framework.
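Sampling from an energy-based model of this kind is typically done with Langevin dynamics: gradient descent on the energy with injected Gaussian noise. The one-dimensional sketch below is a generic illustration of that sampler, not the pyramid framework of the abstract; the quadratic energy used in the usage example is an assumption:

```python
import numpy as np

def langevin_sample(grad_energy, x0, step=0.01, n_steps=100, seed=0):
    """Langevin dynamics: follow the negative energy gradient while
    injecting Gaussian noise, which draws approximate samples from
    the density proportional to exp(-E(x))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.normal(size=x.shape)
        x = x - 0.5 * step * grad_energy(x) + np.sqrt(step) * noise
    return x
```

In a coarse-to-fine scheme, a sample drawn at one resolution of the pyramid would be upsampled and used as the starting point `x0` for the next, finer energy function.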
Data generating method, and computing device and non-transitory medium implementing same
A data generating method includes obtaining first sample data, determining a type of the first sample data and a corresponding data expansion method, expanding the first sample data according to the determined data expansion method to generate second sample data, and dividing the first sample data and the second sample data into a training set and a verification set according to a preset rule. A data model is trained according to the training set, and the data model is verified according to the verification set after training.
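The expansion and split described above can be sketched as follows. The expansion method is passed in as a function (standing in for the method chosen from the data type), and the preset rule is modeled as a fixed ratio over the combined data; both are assumptions for the sketch:

```python
def expand_and_split(first_samples, expand_fn, train_ratio=0.8):
    """Expand the first sample data with the expansion method chosen
    for its type (expand_fn), then divide the first and second sample
    data into a training set and a verification set by a preset rule
    (here: a fixed ratio over the combined list)."""
    second_samples = [expand_fn(s) for s in first_samples]
    combined = first_samples + second_samples
    cut = int(len(combined) * train_ratio)
    return combined[:cut], combined[cut:]
```

A model would then be trained on the first returned set and verified on the second.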
Reservoir computing
Provided is a reservoir computing system including a reservoir having a random laser for emitting a non-linear optical signal with respect to an input signal. The reservoir computing system also includes a converter for converting the non-linear optical signal into an output signal by applying a conversion function. The conversion function is trained by using a training input signal and a target output signal.
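In reservoir computing, only the conversion function (the readout) is trained; the reservoir itself, here the random laser, stays fixed. A software analogue illustrates the idea with a random recurrent network as the reservoir and a linear readout fitted by ridge regression on a training input signal and target output signal; all sizes and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: a software analogue of the nonlinear
# optical element. Only the readout below is ever trained.
n_in, n_res = 1, 50
W_in = rng.normal(scale=0.5, size=(n_res, n_in))
W_res = rng.normal(size=(n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # echo-state scaling

def run_reservoir(u):
    """Drive the reservoir with the input signal and collect its
    nonlinear state at each step."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W_res @ x)
        states.append(x)
    return np.array(states)

# Train the conversion function (a linear readout) by ridge regression.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
target = np.roll(u, 1)            # toy task: reproduce the previous input
X = run_reservoir(u)
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ target)
pred = X @ W_out
```

Because training touches only the linear readout, the physical reservoir never needs to be differentiable or even precisely characterized.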
METHOD AND SYSTEM FOR SCENE GRAPH GENERATION
Broadly speaking, the disclosure relates to computer-implemented methods and systems for scene graph generation, and in particular to training a machine learning (ML) model to generate a scene graph. The method includes inputting a training image into a machine learning model and outputting a predicted label for at least two objects in the training image and a predicted label for a relationship between the at least two objects. The training method includes calculating a loss that takes into account both a supervised loss, calculated by comparing the predicted labels to the actual labels for the training image, and a logic-based loss, calculated by comparing the predicted labels to stored integrity constraints comprising common-sense knowledge. Advantageously, this means that the performance of the model is improved without increasing processing at inference time.
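The combined training loss can be sketched as a supervised term plus a logic-based term. Here the integrity constraints are modeled simply as a set of forbidden (subject, relation, object) triples and the weighting `alpha` is an assumption; the abstract does not specify either representation:

```python
import math

def supervised_loss(pred_probs, true_idx):
    # Cross-entropy between the predicted label distribution and the
    # actual label for the training image.
    return -math.log(pred_probs[true_idx] + 1e-12)

def logic_loss(subj, rel, obj, forbidden_triples):
    """Penalty when a predicted (subject, relation, object) triple
    violates a stored common-sense integrity constraint (modeled
    here as membership in a set of forbidden triples)."""
    return 1.0 if (subj, rel, obj) in forbidden_triples else 0.0

def total_loss(pred_probs, true_idx, triple, forbidden_triples, alpha=0.5):
    # The training loss combines the supervised and logic-based terms.
    return supervised_loss(pred_probs, true_idx) + alpha * logic_loss(*triple, forbidden_triples)
```

Since the logic term only shapes training, inference runs the unmodified model, which is why there is no added processing at inference time.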
METHOD FOR TRAINING IMAGE RECOGNITION MODEL BASED ON SEMANTIC ENHANCEMENT
Embodiments of the present disclosure provide a method and apparatus for training an image recognition model based on semantic enhancement, a method and apparatus for recognizing an image, an electronic device, and a computer-readable storage medium. The method for training an image recognition model based on semantic enhancement comprises: extracting, from an inputted first image that is unannotated and has no textual description, a first feature representation of the first image; calculating a first loss function based on the first feature representation; extracting, from an inputted second image that is unannotated but has an original textual description, a second feature representation of the second image; calculating a second loss function based on the second feature representation; and training an image recognition model based on a fusion of the first loss function and the second loss function.
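A plausible shape for the two losses and their fusion is sketched below: a self-supervised consistency loss for images with no text, an image-text alignment loss for images with a description, and a weighted sum as the fusion. The cosine-similarity formulation and the weights are assumptions; the abstract does not define the loss functions concretely:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def first_loss(feat_view1, feat_view2):
    """Self-supervised loss for a first image that is unannotated and
    has no textual description: feature representations of two
    augmented views of the same image should agree."""
    return 1.0 - cosine(feat_view1, feat_view2)

def second_loss(img_feat, text_feat):
    """Loss for a second image that carries an original textual
    description: its feature representation should align with the
    features of that description."""
    return 1.0 - cosine(img_feat, text_feat)

def fused_loss(l1, l2, w1=1.0, w2=1.0):
    # The recognition model is trained on a fusion of both losses.
    return w1 * l1 + w2 * l2
```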