G06V10/771

COMPUTER-IMPLEMENTED METHOD AND SYSTEM FOR GENERATING A SYNTHETIC TRAINING DATA SET FOR TRAINING A MACHINE LEARNING COMPUTER VISION MODEL

A computer-implemented method for generating a synthetic training data set for training a machine learning computer vision model for performing at least one user-defined computer vision task, in which spatially resolved sensor data are processed and evaluated with respect to at least one user-defined object of interest, including: receiving at least one model of a user-defined object of interest; determining at least one render parameter, in particular multiple render parameters; generating a set of training images by rendering the at least one model of the object of interest based on the at least one render parameter; generating annotation data for the set of training images with respect to the at least one object of interest; and providing a training data set including the set of training images and the annotation data for being output to the user and/or for training the computer vision model.
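The claimed pipeline can be sketched as below. This is a minimal illustration only: the renderer and annotator are passed in as hypothetical callables (`render_fn`, `annotate_fn`), and the specific render parameters shown (pose angles, lighting) are assumptions, since the abstract leaves both abstract.

```python
import random

def generate_training_set(model_path, n_images, render_fn, annotate_fn):
    """Sample render parameters, render the object model, and pair each
    rendered image with annotation data, mirroring the claimed steps."""
    images, annotations = [], []
    for _ in range(n_images):
        # Illustrative render parameters (pose and lighting); the claim
        # only requires determining at least one such parameter.
        params = {
            "yaw": random.uniform(0.0, 360.0),
            "pitch": random.uniform(-30.0, 30.0),
            "light_intensity": random.uniform(0.5, 1.5),
        }
        images.append(render_fn(model_path, params))
        annotations.append(annotate_fn(params))
    return {"images": images, "annotations": annotations}
```

In practice `render_fn` would wrap a 3-D renderer and `annotate_fn` would derive, e.g., bounding boxes or segmentation masks from the known object pose.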

STORAGE MEDIUM, INFORMATION PROCESSING DEVICE, AND TRAINING PROCESSING METHOD
20220405526 · 2022-12-22

A storage medium storing a training processing program that causes at least one computer to execute a process that includes acquiring a deviation degree of a feature in a training dataset, by using a determination model, the training dataset being unlabeled; selecting one or more pieces of data included in the training dataset based on the deviation degree; outputting the selected one or more pieces of data or related data related to the selected one or more pieces of data; receiving an input of a determination result by a user for the one or more pieces of data; and determining an adjustment standard used to adjust a feature of each piece of the data included in the training dataset based on the received determination result, wherein when determination target data is determined by the determination model, a feature of the determination target data is adjusted based on the adjustment standard.
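The selection and adjustment steps can be sketched as follows. All names are illustrative; the abstract does not specify how the deviation degree is computed or what form the adjustment standard takes, so a scalar deviation function and a simple mean-shift adjustment are assumed here.

```python
def select_outliers(features, deviation_fn, k):
    """Rank unlabeled samples by their deviation degree and return the
    indices of the top-k candidates to present for user determination."""
    ranked = sorted(range(len(features)),
                    key=lambda i: deviation_fn(features[i]), reverse=True)
    return ranked[:k]

def make_adjustment(target_mean, confirmed_features):
    """Derive an adjustment standard (here, a mean shift) from the
    user-confirmed samples; applied to features of determination
    target data before the determination model is run."""
    m = sum(confirmed_features) / len(confirmed_features)
    return lambda feature: feature + (target_mean - m)
```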

COMPUTER-IMPLEMENTED METHOD FOR DEFECT ANALYSIS, APPARATUS FOR DEFECT ANALYSIS, COMPUTER-PROGRAM PRODUCT, AND INTELLIGENT DEFECT ANALYSIS SYSTEM
20220405909 · 2022-12-22

A computer-implemented method for defect analysis is provided. The computer-implemented method includes obtaining a plurality of sets of defect point coordinates, a respective set of the plurality of sets of defect point coordinates including coordinates of defect points in a respective substrate of a plurality of substrates, the coordinates of defect points in the respective substrate being coordinates in an image coordinate system; combining the plurality of sets of defect point coordinates according to the image coordinate system into a composite set of coordinates to generate a composite image; and performing a clustering analysis to classify defect points in the composite set in the composite image into a plurality of clusters.
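The combine-then-cluster flow can be illustrated with a minimal sketch. The greedy distance-threshold grouping below is a stand-in for the clustering analysis, which the abstract does not name (a real system might use DBSCAN or k-means); `eps` is an assumed neighbourhood radius in image coordinates.

```python
def cluster_defects(coord_sets, eps=2.0):
    """Combine per-substrate defect coordinates into one composite set
    (shared image coordinate system) and group nearby points."""
    points = [p for s in coord_sets for p in s]  # composite set
    clusters = []
    for p in points:
        for c in clusters:
            # Join the first cluster containing a point within eps.
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2
                   for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters
```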

POSE DETERMINING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
20220398767 · 2022-12-15

This application provides a pose determining method performed by an electronic device. The method includes: determining a first two-dimensional plane point in a first video frame captured by a camera, in response to a user-selected point within a display region of a target horizontal plane in a real world captured in the first video frame; obtaining first orientation information of the camera when acquiring the first video frame; determining a first three-dimensional space point corresponding to the first two-dimensional plane point in the real world and first coordinates of the first three-dimensional space point in a camera coordinate system; and determining a pose of the camera when acquiring the first video frame in the world coordinate system, according to the first orientation information of the camera and the first coordinates of the first three-dimensional space point in the real world.
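The core geometric step, recovering the 3-D space point behind the user-selected pixel, can be sketched by intersecting the viewing ray with the target horizontal plane. For brevity the camera orientation is assumed to be the identity (axes aligned with the world); the claimed method would use the measured first orientation information here, and the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`) are assumed known.

```python
def backproject_to_plane(pixel, fx, fy, cx, cy, cam_pos, plane_y=0.0):
    """Intersect the camera ray through a clicked pixel with the target
    horizontal plane y = plane_y, yielding the 3-D space point used to
    fix the camera pose."""
    # Ray direction from the pinhole model (camera axes = world axes).
    dx = (pixel[0] - cx) / fx
    dy = (pixel[1] - cy) / fy
    dz = 1.0
    # Ray parameter where the ray meets the horizontal plane.
    t = (plane_y - cam_pos[1]) / dy
    return (cam_pos[0] + t * dx, cam_pos[1] + t * dy, cam_pos[2] + t * dz)
```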

METHOD FOR DETERMINING OBJECT INFORMATION RELATING TO AN OBJECT IN A VEHICLE ENVIRONMENT, CONTROL UNIT AND VEHICLE
20220398852 · 2022-12-15

The disclosure relates to a method for determining object information relating to an object in an environment of a vehicle having a camera. The method includes: capturing the environment with the camera from a first position; changing the position of the camera; capturing the environment with the camera from a second position; determining object information relating to an object by selecting at least one first pixel in the first image and at least one second pixel in the second image, by selecting the first pixel and the second pixel such that they are assigned to the same object point of the object, and determining object coordinates of the assigned object point by triangulation. Changing the position of the camera is brought about by controlling an active actuator system in the vehicle. The actuator system adjusts the camera by an adjustment distance without changing a driving condition of the vehicle.
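The triangulation step reduces to intersecting the two viewing rays through the matched pixels. A 2-D sketch under assumed inputs (camera positions and ray directions already derived from the pixels and camera model) is:

```python
def triangulate_2d(c1, d1, c2, d2):
    """Triangulate an object point from two camera positions c1, c2 and
    the ray directions d1, d2 through the matched pixels, by solving
    c1 + t1*d1 = c2 + t2*d2 for t1 (Cramer's rule on the 2x2 system)."""
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = c2[0] - c1[0], c2[1] - c1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (c1[0] + t1 * d1[0], c1[1] + t1 * d1[1])
```

The small, actuator-controlled adjustment distance between the two capture positions provides the baseline that makes this system solvable.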

METHOD AND SYSTEM FOR SCENE GRAPH GENERATION

Broadly speaking, the disclosure relates to computer-implemented methods and systems for scene graph generation, and in particular to training a machine learning, ML, model to generate a scene graph. The method includes inputting a training image into a machine learning model and outputting a predicted label for at least two objects in the training image and a predicted label for a relationship between the at least two objects. The training method includes calculating a loss that takes into account both a supervised loss, calculated by comparing the predicted labels to the actual labels for the training image, and a logic-based loss, calculated by comparing the predicted labels to stored integrity constraints comprising common-sense knowledge. Advantageously, this means that the performance of the model is improved without increasing processing at inference time.
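The two-term training loss can be sketched as below. All names are illustrative: the supervised term is taken as negative log-likelihood and the logic term as the probability mass placed on constraint-violating (object, relation) pairs, which are plausible instantiations the abstract itself does not fix.

```python
import math

def combined_loss(pred_obj_probs, true_obj, pred_rel_probs, true_rel,
                  constraints, alpha=0.1):
    """Supervised loss (NLL of the ground-truth object and relation
    labels) plus an alpha-weighted logic-based loss penalising mass on
    relation labels that violate the stored integrity constraints."""
    supervised = (-math.log(pred_obj_probs[true_obj])
                  - math.log(pred_rel_probs[true_rel]))
    logic = sum(pred_rel_probs[r] for r in range(len(pred_rel_probs))
                if (true_obj, r) in constraints)
    return supervised + alpha * logic
```

Since the constraints only shape the loss during training, inference runs the unchanged model with no extra cost, matching the stated advantage.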

IMAGE RECOGNITION SYSTEM

According to the present invention, an image recognition system calculates the importance of a feature for each target shape recognized in an image and for each type of feature, and determines the correctness of a recognition result by comparing the importance with a statistic for each type of feature, for each target shape.
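The comparison step can be illustrated with a small sketch. The statistic is assumed here to be a per-feature-type mean and standard deviation, with a tolerance band deciding correctness; the abstract does not specify the statistic or the decision rule.

```python
def check_recognition(importances, stats, tol=2.0):
    """For one recognized target shape, compare the per-feature-type
    importance against the stored statistic (mean, std) and accept the
    recognition result only if every feature type stays within tol
    standard deviations of its mean."""
    for ftype, value in importances.items():
        mean, std = stats[ftype]
        if abs(value - mean) > tol * std:
            return False
    return True
```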