G06V10/771

FACIAL STRUCTURE ESTIMATING DEVICE, FACIAL STRUCTURE ESTIMATING METHOD, AND FACIAL STRUCTURE ESTIMATING PROGRAM
20230215016 · 2023-07-06

A facial structure estimating device 10 includes an acquiring unit 11 and a controller 13. The acquiring unit 11 acquires a facial image. The controller 13 functions as an estimator 16 that estimates a facial structure from a facial image. The controller 13 tracks a starting feature point constituting the facial structure, using a tracking algorithm, in the facial image of a frame subsequent to the facial image used to estimate the facial structure. The controller 13 obtains a resulting feature point by tracking the tracked feature point back, using the tracking algorithm, in the facial image of the original frame. The controller 13 selects, as a facial image for learning, a facial image for which the interval between the resulting feature point and the starting feature point is less than or equal to a threshold. The controller 13 trains the estimator 16 using the facial image selected for learning and the facial structure estimated by the estimator 16 based on that facial image.
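The frame-selection step described above is a forward-backward consistency check. A minimal sketch, with the tracker abstracted as a callable (a real system might use pyramidal Lucas-Kanade optical flow) and all names hypothetical:

```python
import numpy as np

def forward_backward_error(start_pts, track_forward, track_backward):
    """Track feature points into the next frame, then back, and measure drift.

    start_pts: (N, 2) array of starting feature points in the original frame.
    track_forward / track_backward: callables mapping (N, 2) -> (N, 2),
    standing in for a real tracking algorithm.
    """
    tracked = track_forward(start_pts)    # tracked feature points in the next frame
    resulting = track_backward(tracked)   # resulting feature points back in the original frame
    return np.linalg.norm(resulting - start_pts, axis=1)

def select_for_learning(start_pts, track_forward, track_backward, threshold):
    """Accept the frame for self-training only if every feature point's
    forward-backward interval is at or below the threshold."""
    errors = forward_backward_error(start_pts, track_forward, track_backward)
    return bool(np.all(errors <= threshold)), errors
```

A frame whose points track back to within the threshold of where they started is kept as a learning image; otherwise it is rejected as unreliable.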

SYSTEM AND METHOD OF EXTRACTING OR INSPECTING A FEATURE OF AN OBJECT USING THERMAL IMAGING, AND A METHOD OF INSPECTING AN OBJECT OF A GARMENT PRODUCT
20230217087 · 2023-07-06

A system and method of extracting or inspecting a feature of an object using thermal imaging, and a method of inspecting an object of a garment product. The system includes a source of thermal influence arranged to heat or cool an object; an imager arranged to capture a plurality of images of the object while the object is subjected to the thermal influence; and an image processor arranged to process the plurality of images and to distinguish a feature of interest from the other portions of the object presented in the plurality of images.
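One way such an image processor could distinguish a feature is by how fast each pixel's temperature changes across the captured frames, since different materials (e.g. seams or padding in a garment) heat and cool at different rates. A minimal sketch under that assumption, fitting a per-pixel linear cooling rate:

```python
import numpy as np

def feature_mask_from_cooling(frames, rate_threshold):
    """Separate a feature of interest from the rest of the object by the
    per-pixel rate of temperature change across a stack of thermal frames.

    frames: (T, H, W) array of thermal readings taken after the thermal
    influence is applied. Returns a boolean (H, W) mask where the absolute
    cooling/heating rate exceeds rate_threshold. The linear-rate fit is an
    illustrative assumption, not the patented method itself.
    """
    t = np.arange(frames.shape[0], dtype=float)
    t = t - t.mean()
    # least-squares slope of temperature vs. time at every pixel
    rates = np.tensordot(t, frames - frames.mean(axis=0), axes=(0, 0)) / (t ** 2).sum()
    return np.abs(rates) > rate_threshold
```

Pixels whose temperature evolves at a markedly different rate from the bulk of the object are flagged as the feature of interest.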

DETECTION OF ARTIFACTS IN MEDICAL IMAGES

There is provided a method of re-classifying a clinically significant feature of a medical image as an artifact, comprising: feeding a target medical image captured by a specific medical imaging sensor at a specific setup into a machine learning model, obtaining a target feature map as an outcome of the machine learning model, wherein the target feature map includes target features classified as clinically significant, analyzing the target feature map with respect to sample feature map(s) obtained as an outcome of the machine learning model fed a sample medical image captured by at least one of: the same specific medical imaging sensor and the same specific setup, wherein the sample feature map(s) includes sample features classified as clinically significant, identifying target feature(s) depicted in the target feature map having attributes matching sample feature(s) depicted in the sample feature map(s), and re-classifying the identified target feature(s) as an artifact.
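The matching-and-relabelling step can be sketched with features represented as attribute dictionaries. The attribute set (position and size) and the tolerances are illustrative assumptions; a real system would compare whatever attributes the feature maps encode:

```python
def reclassify_artifacts(target_features, sample_features, pos_tol, size_tol):
    """Re-label clinically significant target features as artifacts when a
    feature with matching attributes recurs in a sample feature map obtained
    from the same imaging sensor and/or setup.

    target_features / sample_features: lists of dicts with hypothetical
    'x', 'y', 'size' attributes (targets also carry a 'label').
    """
    relabelled = []
    for tf in target_features:
        is_artifact = any(
            abs(tf["x"] - sf["x"]) <= pos_tol
            and abs(tf["y"] - sf["y"]) <= pos_tol
            and abs(tf["size"] - sf["size"]) <= size_tol
            for sf in sample_features
        )
        relabelled.append({**tf, "label": "artifact" if is_artifact else tf["label"]})
    return relabelled
```

A feature that recurs at the same location and scale across independent acquisitions from the same sensor/setup is more plausibly a sensor artifact than a clinical finding.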

Signal translation system and signal translation method

A signal translating method may include, according to one aspect of the present application: receiving a source signal of a first domain; identifying erroneous features and effective features in the source signal; translating the source signal of the first domain into a first virtual signal of a second domain, the first virtual signal being one in which the erroneous features included in the source signal have been removed; and outputting the first virtual signal. A virtual signal of the second domain from which the erroneous features have been removed may therefore be output.
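A minimal sketch of the translation step, under illustrative assumptions: the first domain is time, the second domain is frequency, erroneous features arrive as a boolean mask, and removal is done by zeroing the flagged samples before the domain transform. None of these choices is dictated by the abstract itself:

```python
import numpy as np

def translate_signal(source, error_mask):
    """Remove samples flagged as erroneous features, then map the cleaned
    first-domain (time) signal into a second-domain (frequency) 'virtual
    signal' via the real FFT magnitude spectrum."""
    cleaned = np.where(error_mask, 0.0, source)   # remove erroneous features
    virtual = np.abs(np.fft.rfft(cleaned))        # first domain -> second domain
    return virtual
```

With the erroneous samples suppressed, the second-domain virtual signal stays close to that of the uncorrupted source, whereas translating the raw signal carries the error energy into every frequency bin.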

Methods, systems, articles of manufacture, and apparatus to classify labels based on images using artificial intelligence

Example methods, apparatus, and articles of manufacture to classify labels based on images using artificial intelligence are disclosed. An example apparatus includes a regional proposal network to: determine a first bounding box for a first region of interest in a first input image of a product; and determine a second bounding box for a second region of interest in a second input image of the product; a neural network to: generate a first classification for a first label in the first input image using the first bounding box; and generate a second classification for a second label in the second input image using the second bounding box; a comparator to determine that the first input image and the second input image correspond to the same product; and a report generator to link the first classification and the second classification to the product.
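The comparator-plus-report-generator stage can be sketched as grouping per-image label classifications by product. The detection dictionaries and the `same_product` callable stand in for the region-proposal/classifier outputs and the comparator; all names are illustrative:

```python
def link_label_classifications(detections, same_product):
    """Link label classifications from different images to a common product.

    detections: list of dicts with hypothetical 'product_key' and
    'label_class' fields (the classifier's output per bounding box).
    same_product: callable deciding whether two keys denote one product,
    standing in for the comparator. Returns {product_key: [label classes]}.
    """
    report = {}
    for det in detections:
        for key in report:
            if same_product(key, det["product_key"]):
                report[key].append(det["label_class"])
                break
        else:  # no existing product matched: start a new report entry
            report[det["product_key"]] = [det["label_class"]]
    return report
```

Two images judged by the comparator to show the same product thus contribute their label classifications to a single linked report entry.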

COMPUTER-IMPLEMENTED METHOD AND SYSTEM FOR GENERATING A SYNTHETIC TRAINING DATA SET FOR TRAINING A MACHINE LEARNING COMPUTER VISION MODEL

A computer-implemented method for generating a synthetic training data set for training a machine learning computer vision model to perform at least one user-defined computer vision task, in which spatially resolved sensor data are processed and evaluated with respect to at least one user-defined object of interest, including: receiving at least one model of a user-defined object of interest; determining at least one render parameter, in particular multiple render parameters; generating a set of training images by rendering the at least one model of the object of interest based on the at least one render parameter; generating annotation data for the set of training images with respect to the at least one object of interest; and providing a training data set, including the set of training images and the annotation data, to be output to the user and/or used for training the computer vision model.
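The render-and-annotate loop can be sketched as a sweep over a grid of render parameters (e.g. camera angle, lighting), pairing every rendered image with its annotation. `render` and `annotate` are stand-ins for a real renderer and an annotation routine (e.g. projected bounding boxes); the parameter names are hypothetical:

```python
import itertools

def generate_training_set(render, model, param_grid, annotate):
    """Render the object model under every combination of render parameters
    and return the images together with per-image annotation data.

    param_grid: {parameter_name: [values]}; the Cartesian product of the
    value lists defines the set of render parameter combinations.
    """
    images, annotations = [], []
    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        images.append(render(model, **params))
        annotations.append(annotate(model, params))
    return {"images": images, "annotations": annotations}
```

Because the annotation is derived from the known model and render parameters rather than labelled by hand, the resulting training data set is fully annotated by construction.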