Patent classifications
G06V10/774
SYSTEMS AND METHODS FOR AUTOMATED X-RAY INSPECTION
A computer-implemented method of automated X-ray inspection during the production of printed circuit board (PCB) assemblies. The method includes capturing an X-ray image of a PCB assembly, determining a first error indicator based on image processing of the captured X-ray image, determining, when the first error indicator indicates the PCB assembly as faulty, a second error indicator based on the captured X-ray image using a trained adaptive algorithm, and outputting the second error indicator as a result of the inspection.
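The two-stage pipeline the abstract describes (a cheap image-processing check, then a trained model only for flagged boards) could be sketched as follows. The dark-pixel rule, the thresholds, and `toy_model` are illustrative assumptions standing in for the patented image processing and the trained adaptive algorithm:

```python
import numpy as np

def first_error_indicator(xray: np.ndarray, dark_ratio_threshold: float = 0.2) -> bool:
    """Stage 1: a cheap rule-based check. Flags the board as potentially
    faulty when the fraction of dark pixels (solder voids tend to appear
    dark in X-ray) exceeds a threshold. Rule and threshold are assumptions."""
    dark_fraction = float(np.mean(xray < 0.3))
    return dark_fraction > dark_ratio_threshold

def inspect(xray: np.ndarray, model) -> str:
    """Two-stage inspection: only images flagged by the fast first stage
    reach the (assumed pre-trained) model; its verdict is the output."""
    if not first_error_indicator(xray):
        return "pass"      # stage 1 sees no anomaly, stage 2 is skipped
    return model(xray)     # stage 2 refines the decision

# Toy stand-in for a trained classifier: mean-intensity rule.
toy_model = lambda img: "faulty" if img.mean() < 0.5 else "pass"
```

The point of the split is cost: most boards never reach the expensive model, which is invoked only to confirm or overturn the cheap first indicator.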
LEARNING APPARATUS, LEARNING METHOD, AND RECORDING MEDIUM
In a learning apparatus, an acquisition unit acquires image data and label data corresponding to the image data. An object candidate extraction unit extracts each object candidate rectangle from the image data. A correct answer data generation unit generates a background object label corresponding to each background object included in each object candidate rectangle as correct answer data corresponding to the object candidate rectangle by using the label data. A prediction unit predicts a classification using each object candidate rectangle and outputs a prediction result. An optimization unit optimizes the object candidate extraction unit and the prediction unit using the prediction result and the correct answer data.
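The correct-answer-data generation step above (labelling each candidate rectangle, including background objects) can be illustrated with a simple IoU-based assignment. The IoU threshold and the `"background"` label name are assumptions; the real correct answer data generation unit may use a different rule:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def generate_correct_answer(candidates, gt_boxes, gt_labels,
                            bg_label="background", thr=0.5):
    """For each candidate rectangle, emit the label of the best-overlapping
    ground-truth box, or the background label when no box overlaps by thr.
    This is the correct answer data the prediction unit is optimized against."""
    answers = []
    for cand in candidates:
        best, best_iou = bg_label, thr
        for box, label in zip(gt_boxes, gt_labels):
            v = iou(cand, box)
            if v >= best_iou:
                best, best_iou = label, v
        answers.append(best)
    return answers
```

A candidate that covers no labelled object thus still receives supervised signal, as a background example, which is what lets the extraction and prediction units be optimized jointly.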
Image Processing Method and Apparatus, Computer Device, Storage Medium, and Program Product
Methods and apparatuses for image processing are provided. A first image belonging to a first image domain is acquired and input to an image processing model to be trained to obtain a second image belonging to a second image domain. A first correlation degree between an image feature of the first image and an image feature of the second image is calculated to obtain a target feature correlation degree. A second correlation degree between the feature value distribution of the image feature of the first image and the feature value distribution of the image feature of the second image is calculated to obtain a distribution correlation degree. Model parameters of the image processing model are adjusted in a direction in which the target feature correlation degree is increased and a direction in which the distribution correlation degree is increased, so as to obtain a trained image processing model.
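The two correlation degrees above can be made concrete with a minimal sketch: Pearson correlation of the flattened features for the target feature correlation degree, and correlation of value histograms for the distribution correlation degree. Using Pearson correlation and histograms here is an assumption; the patent does not fix the exact measures:

```python
import numpy as np

def feature_correlation(f1, f2):
    """Target feature correlation degree (assumed Pearson over flattened
    features); higher means the image content is better preserved."""
    return float(np.corrcoef(f1.ravel(), f2.ravel())[0, 1])

def distribution_correlation(f1, f2, bins=16):
    """Distribution correlation degree: correlation of feature-value
    histograms, comparing value distributions rather than spatial layout."""
    lo, hi = min(f1.min(), f2.min()), max(f1.max(), f2.max())
    h1, _ = np.histogram(f1, bins=bins, range=(lo, hi))
    h2, _ = np.histogram(f2, bins=bins, range=(lo, hi))
    return float(np.corrcoef(h1, h2)[0, 1])

def correlation_loss(f1, f2):
    """Training objective: minimizing this loss increases both correlation
    degrees, which is the adjustment direction the abstract describes."""
    return 2.0 - feature_correlation(f1, f2) - distribution_correlation(f1, f2)
```

In a real trainer these quantities would be computed on the model's feature maps and differentiated through; the sketch only shows what is being maximized.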
DEVICE AND COMPUTER-IMPLEMENTED METHOD FOR OBJECT TRACKING
A device and computer-implemented method for object tracking. The method comprises providing a sequence of digital images and determining a sequence of relational graph embeddings. A first relational graph embedding of the sequence comprises a first object embedding representing a first object in a first digital image of the sequence, and a first relation embedding of a relation for the first object embedding. The first relation embedding relates the first object embedding to embeddings representing other objects of the first digital image in the first relational graph embedding, and to embeddings in a second relational graph embedding of the sequence that represent objects of a second digital image of the sequence.
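A toy version of the idea, relating each object both to the other objects in its frame and to objects in the next frame, might look like the following. The hand-crafted relation embedding (offset from the mean of the other objects) and the greedy cosine-similarity matching are illustrative assumptions in place of the learned relational graph embeddings:

```python
import numpy as np

def relation_embedding(obj_emb, frame_embs):
    """Toy relation embedding: the object's offset from the mean of the
    other objects in the same frame (assumes at least two objects)."""
    others = [e for e in frame_embs if e is not obj_emb]
    return obj_emb - np.mean(others, axis=0)

def match_frames(frame1, frame2):
    """Greedy cross-frame association: each object in frame1 is linked to
    the frame2 object whose combined (object + relation) embedding is most
    similar, i.e. the relation to the second frame's embeddings."""
    rel1 = [np.concatenate([e, relation_embedding(e, frame1)]) for e in frame1]
    rel2 = [np.concatenate([e, relation_embedding(e, frame2)]) for e in frame2]
    sim = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return [max(range(len(rel2)), key=lambda j: sim(r, rel2[j])) for r in rel1]
```

Encoding each object together with its relations makes the matching robust to two objects that look alike but sit in different configurations, which is the motivation for relational embeddings in tracking.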
METHOD OF FUSING IMAGE, AND METHOD OF TRAINING IMAGE FUSION MODEL
A method of fusing an image, a method of training an image fusion model, an electronic device, and a storage medium are provided. The method of fusing the image includes: encoding a stitched image obtained by stitching a foreground image and a background image, so as to obtain a feature map; and decoding the feature map to obtain a fused image, wherein the feature map is decoded by: performing a weighting on the feature map by using an attention mechanism, so as to obtain a weighted feature map; performing a fusion on the feature map according to feature statistical data of the weighted feature map, so as to obtain a fused feature; and decoding the fused feature to obtain the fused image.
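The decoding steps — attention weighting, then fusion driven by the statistics of the weighted map — could be sketched like this. The softmax-over-norms attention and the AdaIN-style use of mean/std as the "feature statistical data" are assumptions, not the patented mechanism:

```python
import numpy as np

def attention_weight(feat):
    """Toy spatial attention: softmax over per-location feature norms.
    feat has shape (H, W, C); the returned weights sum to 1 over H x W."""
    scores = np.linalg.norm(feat, axis=-1)
    w = np.exp(scores - scores.max())
    return w / w.sum()

def fuse(feat):
    """Fusion according to statistics of the attention-weighted map:
    re-normalises the feature map to the mean/std of its weighted version
    (in the spirit of AdaIN-style statistic transfer)."""
    w = attention_weight(feat)[..., None]
    weighted = feat * w
    mu, sigma = weighted.mean(), weighted.std() + 1e-8
    normed = (feat - feat.mean()) / (feat.std() + 1e-8)
    return normed * sigma + mu
```

Driving the fusion statistics through attention lets salient regions (typically the pasted foreground boundary) dominate the statistics the whole map is matched to.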
STORAGE MEDIUM, DETERMINATION DEVICE, AND DETERMINATION METHOD
A non-transitory computer-readable storage medium storing a determination program that causes at least one computer to execute a process, the process including: acquiring a group of captured images that includes images of a face to which markers are attached; selecting, from a plurality of patterns that indicate a transition of the positions of the markers, a first pattern that corresponds to a time-series change in the positions of the markers in consecutive images of the group; and determining the occurrence intensity of an action based on a determination criterion of the action determined from the first pattern and on the positions of the markers in a captured image that follows the consecutive images in the group.
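The two determination steps — matching the observed marker trajectory to a pattern, then scoring action intensity against a criterion — could be sketched as follows. The L2 pattern distance, the `rest` / `full_displacement` fields of the criterion, and the linear intensity scale are illustrative assumptions:

```python
import numpy as np

def select_pattern(trajectory, patterns):
    """Pick the pattern (dict name -> position sequence) whose transition
    of marker positions is closest, in L2 distance, to the observed
    time-series of positions in the consecutive images."""
    dist = lambda name: float(np.linalg.norm(
        np.asarray(trajectory) - np.asarray(patterns[name])))
    return min(patterns, key=dist)

def action_intensity(position, criterion):
    """Occurrence intensity: displacement of the marker in a later image
    relative to the criterion's rest position, scaled by the displacement
    that counts as full intensity, clipped to [0, 1] (assumed definition)."""
    disp = float(np.linalg.norm(np.asarray(position) - np.asarray(criterion["rest"])))
    return min(disp / criterion["full_displacement"], 1.0)
```

The selected pattern would in practice determine which criterion applies (e.g. a brow-raise pattern selects a brow-raise criterion); the sketch keeps the two steps separate for clarity.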