
LEARNING DATA SET GENERATION DEVICE, LEARNING DATA SET GENERATION METHOD, AND RECORDING MEDIUM

Provided is a device that generates a learning data set for learning a cloud correction processing method. The device includes a synthesis unit configured to: treat a more-cloud image and a less-cloud image that include the same observation object as a set, and receive a first thick-cloud area indicating a thick cloud in the more-cloud image and a second thick-cloud area indicating the thick cloud in the less-cloud image; execute a first operation on the first and/or second thick-cloud area to generate a first mask in the less-cloud image; execute a second operation on the first and/or second thick-cloud area to generate a second mask in the more-cloud image; and adopt, as learning data, the set including data comprising the generated first mask and the more-cloud image and data comprising the generated second mask and the less-cloud image.
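The mask-generation steps could be sketched as follows; the boolean-grid representation and the use of set union for both the first and second operations are illustrative assumptions, since the abstract does not specify the operations:

```python
def union_mask(area_a, area_b):
    """Combine two thick-cloud areas into one mask (union is an assumed operation)."""
    return [[a or b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(area_a, area_b)]

def build_learning_pair(more_cloud, less_cloud, thick_a, thick_b):
    """Pair each image with a mask derived from the thick-cloud areas,
    yielding the two data items that together form one learning-set entry."""
    first_mask = union_mask(thick_a, thick_b)   # mask placed over the less-cloud image
    second_mask = union_mask(thick_a, thick_b)  # mask placed over the more-cloud image
    return ((first_mask, more_cloud), (second_mask, less_cloud))
```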

Automatic Bounding Region Annotation for Localization of Abnormalities

Mechanisms are provided for automatically annotating input images with bounding region annotations and corresponding anomaly labels. The mechanisms segment an input image to generate a mask corresponding to recognized internal structures of a subject. A template data structure is generated that specifies standardized internal structure zones of the subject. The mechanisms register the mask with the template data structure to generate a template registered mask identifying standardized internal structure zones present within the mask, and generate bounding region annotations for each standardized internal structure zone of the template registered mask. The bounding region annotations are correlated with labels indicating whether or not the bounding region comprises an anomaly in the input image based on an analysis of a received natural language text description of the input image. The bounding region annotations and labels are stored in association with the input image.
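As an illustrative sketch of the bounding-region step (the zone labels and the anomaly set, which would come from the natural-language analysis, are hypothetical stand-ins), each standardized zone's bounding box can be computed from the labelled mask and paired with an anomaly flag:

```python
def zone_bounding_box(mask, zone_label):
    """Return (row_min, col_min, row_max, col_max) enclosing one labelled zone."""
    rows = [r for r, row in enumerate(mask) for v in row if v == zone_label]
    cols = [c for row in mask for c, v in enumerate(row) if v == zone_label]
    if not rows:
        return None
    return (min(rows), min(cols), max(rows), max(cols))

def annotate(mask, zone_labels, anomaly_zones):
    """Pair each zone's bounding box with an anomaly flag; anomaly_zones would
    be derived elsewhere from the text-description analysis."""
    return {z: (zone_bounding_box(mask, z), z in anomaly_zones)
            for z in zone_labels}
```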

Medical image diagnostic apparatus, image processing apparatus, and registration method

A medical image diagnostic apparatus according to an embodiment includes processing circuitry configured to determine a plurality of small blocks for each of a plurality of pieces of medical image data, generate a plurality of superpixels corresponding to the plurality of small blocks, assign a label to at least one of the plurality of pieces of medical image data, and perform registration between the plurality of pieces of medical image data using the plurality of superpixels and the label.
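One way the label-guided registration might be sketched (centroid matching of the labelled superpixels, and a pure-translation model, are assumptions for illustration):

```python
def register_offset(superpixels_a, superpixels_b, label):
    """Estimate a translation aligning two label maps by matching the
    centroids of the superpixels carrying the given label."""
    def centroid(sp):
        pts = [(r, c) for r, row in enumerate(sp)
                      for c, v in enumerate(row) if v == label]
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    ca, cb = centroid(superpixels_a), centroid(superpixels_b)
    return (cb[0] - ca[0], cb[1] - ca[1])
```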

TEXTURE DETECTION METHOD, TEXTURE IMAGE COMPENSATION METHOD AND DEVICE, AND ELECTRONIC DEVICE
20210264575 · 2021-08-26

A texture image compensation method, a texture detection method and a device, and an electronic device are provided. The texture detection method includes: performing an image difference calculation on a first texture image acquired by a texture detection device and a foreign object correction image acquired by the texture detection device, to compensate for foreign object information in the first texture image and to acquire a second texture image; and performing the texture detection by using the second texture image.
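The image-difference step might look like the following sketch; treating "difference" as a pixel-wise subtraction clamped to the 8-bit range is an assumption, as the abstract does not define the calculation:

```python
def compensate(texture_img, foreign_img):
    """Subtract the foreign-object correction image pixel-wise, clamping the
    result to the valid 8-bit range, to yield the second texture image."""
    return [[max(0, min(255, t - f)) for t, f in zip(row_t, row_f)]
            for row_t, row_f in zip(texture_img, foreign_img)]
```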

Image detection device, image detection method and storage medium storing program
11107223 · 2021-08-31

Provided are an image detection device, an image detection method, and a program capable of improving robustness to target deformation, by optimizing the template shape when performing target detection using template matching. An image detection device 100 for detecting a target from an input image comprises: a template generation unit 10 that generates a template for detecting the target; a mask generation unit 20 that generates a mask which shields a portion of the template, on the basis of temporal variations of feature points extracted from an area of the image including the target; and a detection unit 30 that detects the target from the image using the template, a portion of which is shielded by the mask.
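A minimal sketch of matching with a masked template follows; the sum-of-absolute-differences score and the exhaustive search are illustrative choices, not necessarily those of the device:

```python
def masked_sad(image, template, mask, top, left):
    """Sum of absolute differences at (top, left), skipping shielded pixels."""
    score = 0
    for r, template_row in enumerate(template):
        for c, t_val in enumerate(template_row):
            if not mask[r][c]:  # the mask shields this template pixel
                score += abs(image[top + r][left + c] - t_val)
    return score

def detect(image, template, mask):
    """Return the (top, left) offset with the lowest masked SAD score."""
    h, w = len(template), len(template[0])
    positions = [(t, l) for t in range(len(image) - h + 1)
                        for l in range(len(image[0]) - w + 1)]
    return min(positions, key=lambda p: masked_sad(image, template, mask, *p))
```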

System and method for facilitating efficient damage assessments
11069048 · 2021-07-20

Embodiments described herein provide a system for facilitating image sampling for training a target detector. During operation, the system obtains a first image depicting a first target. Here, the continuous part of the first target in the first image is labeled and enclosed in a target bounding box. The system then generates a set of positive image samples from an area of the first image enclosed by the target bounding box. A respective positive image sample includes at least a part of the first target. The system can train the target detector with the set of positive image samples to detect a second target from a second image. The target detector can be an artificial intelligence (AI) model capable of detecting an object.
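The positive-sampling step could be sketched like this; enumerating every fixed-size crop whose window lies inside the bounding box is one simple policy (the patent's sampling strategy is not specified here):

```python
def positive_samples(image, bbox, size):
    """Enumerate all size-by-size crops lying inside the target bounding box,
    so every sample contains at least part of the labelled target.
    bbox = (top, left, bottom, right), inclusive coordinates."""
    top, left, bottom, right = bbox
    crops = []
    for r in range(top, bottom - size + 2):
        for c in range(left, right - size + 2):
            crops.append([row[c:c + size] for row in image[r:r + size]])
    return crops
```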

Using rear sensor for wrong-way driving warning
11087628 · 2021-08-10

Using a rear sensor to sense wrong-way driving. A method may include sensing, by a rear sensor of a vehicle, an environment of the vehicle to provide rear-sensed information; processing the rear-sensed information to provide at least one rear-sensed vehicle progress direction indication; generating or receiving at least one front-sensed vehicle progress direction indication, wherein the at least one front-sensed vehicle progress direction indication is generated by processing front-sensed information acquired during right-way progress; comparing the at least one rear-sensed vehicle progress direction indication to the at least one front-sensed vehicle progress direction indication to determine whether the vehicle is wrong-way driving; and responding to a finding of wrong-way driving.
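The comparison step might reduce to a heading check such as the sketch below; representing each direction indication as a compass heading, and the 30-degree tolerance, are illustrative assumptions:

```python
def is_wrong_way(rear_heading_deg, front_heading_deg, tolerance_deg=30.0):
    """Flag wrong-way driving when the rear-sensed progress direction is
    roughly opposite the front-sensed right-way direction."""
    # Smallest angular difference between the two headings, in [0, 180].
    diff = abs((rear_heading_deg - front_heading_deg + 180.0) % 360.0 - 180.0)
    return diff > 180.0 - tolerance_deg
```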

Volumetric video creation from user-generated content

A processing system having at least one processor may obtain at least a first source video from a first endpoint device and a second source video from a second endpoint device, where each of the first source video and the second source video is a two-dimensional video, determine that the first source video and the second source video share at least one feature that is the same for both the first source video and the second source video, and generate a volumetric video from the first source video and the second source video, where the volumetric video comprises a photogrammetric combination of the first source video and the second source video.

METHOD OF AND SYSTEM FOR GENERATING TRAINING IMAGES FOR INSTANCE SEGMENTATION MACHINE LEARNING ALGORITHM
20210241034 · 2021-08-05

A method and a system for generating training images for training an instance segmentation machine learning algorithm (MLA). A set of image-level labelled images are received, where a given image is labelled with a label indicative of a presence of an object having an object class in the image. A classification MLA detects the object having the object class in each image. A class activation map (CAM) indicative of discriminative regions used by the classification MLA for detecting the object in each image is generated. A region proposal MLA is used to generate region proposals for each image. A pseudo mask of the respective object is generated based on the region proposals and the CAM, where a pseudo mask is indicative of pixels corresponding to the respective object class. The pseudo masks are used as a label with the image-level labelled images for training the instance segmentation MLA.
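A sketch of the pseudo-mask combination step follows; thresholding the CAM and keeping the activated pixels inside the best-covered proposal box is an assumed fusion rule, not necessarily the patented one:

```python
def pseudo_mask(cam, proposals, threshold=0.5):
    """Threshold the CAM, pick the region proposal covering the most
    activated pixels, and keep activated pixels inside that box.
    Each proposal is (top, left, bottom, right), inclusive."""
    active = [[v >= threshold for v in row] for row in cam]

    def coverage(box):
        top, left, bottom, right = box
        return sum(active[r][c] for r in range(top, bottom + 1)
                                for c in range(left, right + 1))

    top, left, bottom, right = max(proposals, key=coverage)
    return [[active[r][c] and top <= r <= bottom and left <= c <= right
             for c in range(len(cam[0]))] for r in range(len(cam))]
```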

INSPECTION APPARATUS, INSPECTION METHOD, AND STORAGE MEDIUM

An inspection apparatus includes an image generation device which generates a second image corresponding to a first image, and a defect detection device which detects a defect in the second image. Each of the first and second images includes partial regions, each comprising pixels. The defect detection device is configured to estimate a first value indicating a position difference between the first and second images for each of the partial regions, based on a luminance difference between the first and second images; estimate a second value indicating a reliability of the first value for each of the partial regions; and estimate a position difference between the first and second images for each of the pixels, based on the first and second values estimated for each of the partial regions.
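The per-pixel estimation step might look like the following sketch, where inverse-distance weighting of the block-level position differences by their reliabilities is an assumed interpolation scheme, not necessarily the apparatus's:

```python
def per_pixel_shift(block_shifts, block_reliabilities, block_centers, pixel):
    """Interpolate one pixel's position difference as a reliability-weighted
    average of the partial-region estimates (inverse-distance weighting)."""
    px, py = pixel
    num = den = 0.0
    for shift, rel, (cx, cy) in zip(block_shifts, block_reliabilities,
                                    block_centers):
        dist = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
        weight = rel / (1.0 + dist)  # reliable, nearby blocks dominate
        num += weight * shift
        den += weight
    return num / den
```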