Patent classifications
G06F18/254
Defect detection of a component in an assembly
A system for validating installation correctness of a component in a test assembly includes a housing having a platform including a tiered surface. The tiered surface forms an abutment surface configured as a stop against which a test assembly is abutted. A plurality of cameras is positioned to capture different views of the test assembly. A processing device is configured to execute instructions to capture, from each of the plurality of cameras, an image of the test assembly, which includes a plurality of components. Each of the plurality of components is analyzed within each of the captured images. A matching score is determined, and an indication of whether the plurality of components was correctly installed in the test assembly is generated.
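The abstract leaves the scoring and decision logic unspecified. A minimal Python sketch of one plausible reading, in which per-camera detections are compared against a reference layout (all function names, tolerances, and thresholds here are hypothetical, not taken from the patent):

```python
def matching_score(detected, reference, tol=5.0):
    """Fraction of reference components with a detection within tol pixels.

    detected and reference map component name -> (x, y) position.
    """
    matched = 0
    for name, (rx, ry) in reference.items():
        dx, dy = detected.get(name, (float("inf"), float("inf")))
        if abs(dx - rx) <= tol and abs(dy - ry) <= tol:
            matched += 1
    return matched / len(reference)

def correctly_installed(per_camera_detections, reference, threshold=0.9):
    """Require every camera view to score above the threshold."""
    return all(matching_score(d, reference) >= threshold
               for d in per_camera_detections)
```

Requiring all views to pass, rather than averaging, is one simple way to make a missing or misplaced component in any single view fail the assembly.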
Machine learning model training method and apparatus, server, and storage medium
A machine learning model training method includes training a machine learning model using features of each sample in a training set, based on an initial first weight and an initial second weight. In one iteration, the method includes determining, based on a predicted loss of each sample, a first sample set in which a target variable is incorrectly predicted and a second sample set in which the target variable is correctly predicted, and determining an overall predicted loss of the first sample set based on the predicted loss and a first weight of each sample in the first sample set. The method also includes updating the first weight and a second weight of each sample in the first sample set based on the overall predicted loss, and inputting the updated second weight, the features, and the target variable of each sample to the machine learning model and initiating a next iteration.
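The loss-weighted re-weighting of misclassified samples resembles classic boosting. A minimal sketch of one such iteration in the AdaBoost style (this is a standard boosting update used for illustration, not the patent's disclosed formula):

```python
import math

def boosting_iteration(weights, correct):
    """One boosting-style iteration: compute the overall predicted loss of
    the misclassified set, then raise the weights of misclassified samples
    and lower those of correctly classified ones.

    weights: current per-sample weights; correct: per-sample bool flags.
    """
    total = sum(weights)
    overall_loss = sum(w for w, c in zip(weights, correct) if not c) / total
    alpha = 0.5 * math.log((1 - overall_loss) / overall_loss)
    updated = [w * math.exp(alpha if not c else -alpha)
               for w, c in zip(weights, correct)]
    norm = sum(updated)
    return [w / norm for w in updated], alpha
```

With this update the misclassified set always carries half the total weight after normalization, so the next iteration concentrates on the hard samples.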
Face verification method and apparatus
A face verification method and apparatus are disclosed. The face verification method includes selecting a current verification mode, from among plural verification modes, to be implemented for verifying the face; determining one or more recognizers, from among plural recognizers, based on the selected current verification mode; extracting feature information from information of the face using at least one of the determined one or more recognizers; and indicating whether a verification is successful based on the extracted feature information.
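The mode-to-recognizer mapping is the core of the claim. A minimal Python sketch of one plausible arrangement, where a stricter mode selects more recognizers (the mode names, feature extractors, and threshold are hypothetical):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical mapping from verification mode to the recognizers it uses.
MODE_RECOGNIZERS = {
    "unlock": ["global"],            # fast path, one recognizer
    "payment": ["global", "local"],  # stricter path, multiple recognizers
}

def verify(face, enrolled, recognizers, mode, threshold=0.9):
    """Extract features with only the recognizers selected for the mode;
    every selected recognizer must report a sufficient similarity."""
    for name in MODE_RECOGNIZERS[mode]:
        extract = recognizers[name]
        if cosine(extract(face), extract(enrolled)) < threshold:
            return False
    return True
```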
Apparatus, method, and medium for merging pattern detection results
There is provided an information processing apparatus. An acquisition unit acquires a plurality of pattern discrimination results each indicating a location of a pattern that is present in an image. A selection unit selects a predetermined number of pattern discrimination results from the plurality of pattern discrimination results. A determination unit determines whether or not the selected predetermined number of pattern discrimination results are to be merged, based on a similarity of the locations indicated by the predetermined number of pattern discrimination results. A merging unit merges the predetermined number of pattern discrimination results for which the determination unit determined that merging is to be performed. A control unit controls the selection unit, the determination unit, and the merging unit to repeatedly perform their respective processes.
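Merging detections whose locations are similar is commonly done with an intersection-over-union test. A minimal sketch of such a repeated select-decide-merge loop over axis-aligned boxes (the averaging merge and the 0.5 threshold are illustrative choices, not the patent's):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def merge_results(boxes, iou_threshold=0.5):
    """Fold each result into an existing one when their locations are
    similar enough, otherwise keep it as a new result."""
    merged = []
    for box in boxes:
        for i, m in enumerate(merged):
            if iou(box, m) >= iou_threshold:
                # merge by coordinate-wise averaging
                merged[i] = tuple((x + y) / 2 for x, y in zip(box, m))
                break
        else:
            merged.append(box)
    return merged
```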
LiDAR localization using 3D CNN network for solution inference in autonomous driving vehicles
In one embodiment, a method for solution inference using neural networks in LiDAR localization includes constructing a cost volume in a solution space for a predicted pose of an autonomous driving vehicle (ADV), the cost volume including a number of sub-volumes, each sub-volume representing a matching cost between a keypoint from an online point cloud and a corresponding keypoint on a pre-built point cloud map. The method further includes regularizing the cost volume using convolutional neural networks (CNNs) to refine the matching costs, and inferring, from the regularized cost volume, an optimal offset of the predicted pose. The optimal offset can be used to determine a location of the ADV.
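Inferring a sub-cell offset from a cost volume is typically done with a soft-argmin: costs are turned into a probability distribution and the expected offset is taken. A one-dimensional sketch of that step (the CNN regularization is omitted; this shows only the inference over already-refined costs):

```python
import math

def infer_offset(costs, offsets):
    """Soft-argmin over a regularized 1-D cost volume.

    Lower cost -> higher probability; returns the expected offset.
    """
    weights = [math.exp(-c) for c in costs]
    total = sum(weights)
    probs = [w / total for w in weights]
    return sum(p * o for p, o in zip(probs, offsets))
```

Because the result is a probability-weighted average rather than a hard argmin, the inferred offset can fall between the discrete candidates in the solution space.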
Method for Generating an Augmented Video
The invention relates to a method (800) performed by a portable computer device (600) configured to generate an augmented reality video (ARV). The method comprises detecting first object proposal region information (B.sub.a) using a first trained model (M.sub.a) based on a frame (F.sub.n) of a video (V), the first trained model (M.sub.a) configured to provide object proposal regions having an accurate width; detecting second object proposal region information (B.sub.b) using a second trained model (M.sub.b) based on the frame (F.sub.n) of the video (V), the second trained model (M.sub.b) configured to provide object proposal regions having an accurate height; determining combined object proposal region information (B.sub.Combined) by combining object proposal regions of the first object proposal region information (B.sub.a) overlapping with object proposal regions of the second object proposal region information (B.sub.b); and generating an augmented reality video (ARV) by generating an augmented frame (AF), wherein the augmented frame (AF) is generated by overlaying object proposal regions comprised in the combined object proposal region information (B.sub.Combined) onto the frame (F.sub.n) of the video (V) and adding the augmented frame (AF) to the augmented reality video (ARV).
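One natural reading of the combination step is to take the horizontal extent from the width-accurate model and the vertical extent from the height-accurate model wherever their proposals overlap. A minimal sketch under that assumption (the (x, y, w, h) box format and the combination rule are illustrative):

```python
def overlaps(a, b):
    """True if two (x, y, w, h) regions intersect."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def combine(width_boxes, height_boxes):
    """For each overlapping pair, take x and width from the
    width-accurate box and y and height from the height-accurate box."""
    combined = []
    for wa in width_boxes:
        for hb in height_boxes:
            if overlaps(wa, hb):
                combined.append((wa[0], hb[1], wa[2], hb[3]))
    return combined
```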
System and method for digital image steganography detection using an ensemble of neural spatial rich models
Exemplary systems and methods are disclosed for detecting embedded data in a digital image. The system includes a processing device that extracts one or more features from a digital image and analyzes the one or more extracted features in a plurality of steganography analyzers, each steganography analyzer configured to execute a different steganography algorithm. The processing device generates an output data value at each steganography analyzer, the output data value indicating a probability that the digital image includes steganography according to the steganography algorithm of the steganography analyzer. Each output probability value is fed to an ensemble classifier, the ensemble classifier including a neural network in which the output probability values of the plurality of steganography analyzers are ensembled together to generate an output ensemble data value indicating a probability that the digital image includes any steganography according to the steganography algorithms of the steganography analyzers.
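The final combination layer can be as simple as a learned weighted sum of the per-analyzer probabilities passed through a sigmoid. A one-neuron sketch of that ensembling step (the weights and bias here would come from training; the values in the test are arbitrary):

```python
import math

def ensemble_probability(analyzer_probs, weights, bias=0.0):
    """One-layer neural combiner over per-algorithm detection probabilities.

    Returns the probability that the image hides any payload, according
    to the weighted agreement of the individual steganography analyzers.
    """
    z = bias + sum(w * p for w, p in zip(weights, analyzer_probs))
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to [0, 1]
```

A learned combiner can down-weight analyzers that are unreliable for certain cover-image statistics, which a fixed maximum or average over the analyzer outputs cannot do.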
Board damage classification system
A board damage classification system includes a Convolutional Neural Network (CNN) sub-engine and a Graph Convolutional Network (GCN) sub-engine that were trained based on digital images of structures that have experienced natural disasters. The CNN sub-engine receives a board digital image of a board, analyzes the board digital image to identify board features, and determines a board feature damage classification for the board features. The GCN sub-engine receives a board feature graph that was generated using the board digital image and that includes nodes that correspond to the board features in the board digital image, and defines relationships between the nodes included in the board feature graph. The board feature damage classification determined by the CNN sub-engine and the relationships defined by the GCN sub-engine are then used to generate a board damage classification that includes a damage probability for board features in the board digital image.
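The graph side of this pipeline lets damage evidence at one feature influence its neighbours. A toy sketch of a single graph-convolution-style propagation step over per-feature damage scores (the node names, edge list, and mixing factor are invented for illustration):

```python
def propagate(scores, edges, alpha=0.5):
    """One graph-convolution-style step: mix each node's damage score
    with the mean score of its neighbours.

    scores: node -> damage score in [0, 1]; edges: undirected node pairs.
    """
    neighbours = {n: [] for n in scores}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    out = {}
    for n, s in scores.items():
        nbrs = neighbours[n]
        mean = sum(scores[m] for m in nbrs) / len(nbrs) if nbrs else s
        out[n] = (1 - alpha) * s + alpha * mean
    return out
```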
Automatically choosing data samples for annotation
Among other things, we describe techniques for automatically selecting data samples for annotation. The techniques use bounding box prediction based on a bounding box score distribution, spatial probability density determined from bounding box sizes and positions, and an ensemble score variance determined from outputs of multiple machine learning models to select data samples for annotation. In an embodiment, temporal inconsistency cues are used to select data samples for annotation. In an embodiment, digital map constraints or other map-based data are used to exclude data samples from annotation. In an exemplary application, the annotated data samples are used to train a machine learning model that outputs perception data for an autonomous vehicle application.
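The ensemble-variance criterion is the easiest of these cues to sketch: samples on which the models disagree most are the most informative to label. A minimal sketch of that selection step (the data layout, a list of (sample_id, per-model scores) pairs, is an assumption):

```python
def select_for_annotation(ensemble_scores, k):
    """Pick the k samples whose per-model scores disagree the most.

    ensemble_scores: list of (sample_id, [score from each model]).
    """
    def variance(scores):
        mean = sum(scores) / len(scores)
        return sum((s - mean) ** 2 for s in scores) / len(scores)
    ranked = sorted(ensemble_scores,
                    key=lambda kv: variance(kv[1]), reverse=True)
    return [sample_id for sample_id, _ in ranked[:k]]
```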
Confident peak-aware response time estimation by exploiting telemetry data from different system configurations
A prediction manager for providing responsiveness predictions for deployments includes persistent storage and a predictor. The persistent storage stores training data and conditioned training data. The predictor is programmed to: obtain training data based on a configuration of at least one deployment of the deployments and a measured responsiveness of the at least one deployment; perform a peak extraction analysis on the measured responsiveness to obtain conditioned training data; obtain a prediction model using the training data and a first untrained prediction model; obtain a confidence prediction model using the conditioned training data and a second untrained prediction model; obtain a combined prediction using the prediction model and the confidence prediction model; and perform, based on the combined prediction, an action set to prevent a responsiveness failure.
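A minimal sketch of the two data paths described above, with a simple threshold-based peak extractor and a confidence-weighted blend of the two models' outputs (the factor-of-mean peak rule and the linear blend are illustrative assumptions, not the patent's method):

```python
def extract_peaks(response_times, factor=2.0):
    """Keep measurements well above the mean: the conditioned training
    data used to fit the peak-aware confidence model."""
    mean = sum(response_times) / len(response_times)
    return [t for t in response_times if t > factor * mean]

def combined_prediction(base_pred, peak_pred, peak_confidence):
    """Blend the base prediction with the peak-aware prediction,
    weighted by how confident the peak model is."""
    return (1 - peak_confidence) * base_pred + peak_confidence * peak_pred
```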