Patent classification: G06T2207/20132
TEMPLATE-BASED IMAGE PROCESSING FOR TARGET SEGMENTATION AND METROLOGY
One or more images of a portion of a wafer with fabricated devices are acquired using an imaging tool. A pattern of repeating features in an input image of a wafer is identified using various methods, such as correlation and clustering of neighboring vectors. A template is generated based on the found pattern of repeating features. The template is aligned with the acquired image to identify target locations. The target locations are then isolated from the original image for performing detailed metrology.
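The correlation step above can be sketched with a brute-force normalized cross-correlation: slide the generated template over the image and keep locations whose correlation score clears a threshold. This is a minimal illustration, not the patent's implementation; the 6×6 "wafer" image and the 2×2 repeating feature are made up for the example.

```python
import numpy as np

def find_template_matches(image, template, threshold=0.95):
    """Slide a template over an image and return (row, col) locations where
    the normalized cross-correlation score exceeds a threshold."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    matches = []
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            window = image[y:y + th, x:x + tw]
            w = (window - window.mean()) / (window.std() + 1e-8)
            if (w * t).mean() >= threshold:
                matches.append((y, x))
    return matches

# A tiny image containing two copies of a repeating 2x2 feature.
feature = np.array([[9.0, 1.0], [1.0, 9.0]])
img = np.zeros((6, 6))
img[0:2, 0:2] = feature
img[3:5, 3:5] = feature
print(find_template_matches(img, feature))  # → [(0, 0), (3, 3)]
```

In practice the quadratic sliding-window loop would be replaced by an FFT-based correlation or a library routine; the point here is only the thresholded-correlation logic.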
Generation of synthetic image data
Techniques are generally described for generation of photorealistic synthetic image data. A generator network generates first synthetic image data. A first class of image data represented by a first portion of the first synthetic image data is detected and the first portion is sent to a first discriminator network. The first discriminator network generates a prediction of whether the first portion of the first synthetic image data is synthetically generated. A second class of image data represented by a second portion of the first synthetic image data is detected and the second portion is sent to a second discriminator network. The second discriminator network generates a prediction of whether the second portion of the first synthetic image data is synthetically generated. The generator network is updated based on the predictions of the discriminators.
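The routing logic described above can be sketched as follows: each detected portion is classified, then scored by the discriminator registered for that class. The class names ("face", "background") and the sharpness-based stub discriminators are illustrative assumptions, not from the abstract.

```python
def route_to_discriminators(portions, classify, discriminators):
    """For each portion, detect its class and collect that class's
    discriminator prediction of whether the portion is synthetic."""
    predictions = []
    for portion in portions:
        cls = classify(portion)
        predictions.append((cls, discriminators[cls](portion)))
    return predictions

# Stub setup: a portion is a (kind, sharpness) pair; each class-specific
# discriminator flags a portion as synthetic at a different sharpness bar.
discriminators = {
    "face": lambda p: p[1] > 0.9,
    "background": lambda p: p[1] > 0.99,
}
classify = lambda p: p[0]

portions = [("face", 0.95), ("background", 0.95)]
print(route_to_discriminators(portions, classify, discriminators))
# → [('face', True), ('background', False)]
```

The collected per-class predictions would then feed the generator's loss, which is the update step the abstract describes.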
COMPUTER ASSISTED SURGERY SYSTEM, SURGICAL CONTROL APPARATUS AND SURGICAL CONTROL METHOD
A computer assisted surgery system comprising: a computerised surgical apparatus; and a control apparatus; wherein the control apparatus comprises circuitry configured to: receive information indicating a first region of a surgical scene from which information is obtained by the computerised surgical apparatus to make a decision; receive information indicating a second region of the surgical scene from which information is obtained by a medical professional to make a decision; determine if there is a discrepancy between the first and second regions of the surgical scene; and if there is a discrepancy between the first and second regions of the surgical scene: perform a predetermined process based on the discrepancy.
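One plausible way to realize the discrepancy check is to compare the two regions' overlap; the intersection-over-union test below is an assumption for illustration, as the abstract does not specify a metric.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def has_discrepancy(machine_region, surgeon_region, threshold=0.5):
    """Flag a discrepancy when the two regions overlap too little, which
    would trigger the predetermined process."""
    return iou(machine_region, surgeon_region) < threshold

print(has_discrepancy((0, 0, 10, 10), (0, 0, 10, 10)))    # → False
print(has_discrepancy((0, 0, 10, 10), (20, 20, 30, 30)))  # → True
```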
DEEP NEURAL NETWORK FRAMEWORK FOR PROCESSING OCT IMAGES TO PREDICT TREATMENT INTENSITY
Systems and methods relate to processing optical coherence tomography (OCT) images to predict characteristics of a treatment to be administered to effectively treat age-related macular degeneration. The processing can include pre-processing the image by flattening and/or cropping the image and processing the pre-processed image using a neural network. The neural network can include a deep convolutional neural network. An output of the neural network can indicate a predicted frequency and/or interval at which a treatment (e.g., anti-vascular endothelial growth factor therapy) is to be administered so as to prevent leakage of vasculature in the eye.
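The flattening pre-processing step can be sketched as shifting each A-scan column so that a reference retinal layer lines up across the B-scan. The column-shift scheme and the synthetic 5×3 test image below are illustrative assumptions; the abstract does not fix a flattening method.

```python
import numpy as np

def flatten_oct(image, ref_depths):
    """Shift each column so a reference-layer depth aligns across columns,
    a common flattening step before cropping."""
    target = int(np.median(ref_depths))
    flattened = np.zeros_like(image)
    for col, depth in enumerate(ref_depths):
        flattened[:, col] = np.roll(image[:, col], target - depth)
    return flattened

# Synthetic B-scan: a bright layer sits at a different depth in each column.
img = np.zeros((5, 3))
img[1, 0] = img[2, 1] = img[3, 2] = 1.0
flat = flatten_oct(img, [1, 2, 3])
print(flat[2])  # → [1. 1. 1.]
```

After flattening, cropping to a fixed band around the aligned layer yields a compact, consistently framed input for the convolutional network.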
META-LEARNING FOR DETECTING OBJECT ANOMALY FROM IMAGES
Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for training a classification neural network. The system generates, from a set of object-specific data, one or more meta-learning datasets for one or more respective initial training tasks. The system determines values for a set of meta parameters by performing meta-learning with a classification neural network on the one or more meta-learning datasets. The system obtains a set of labeled training examples for a characteristic-detection task. The system determines, based at least on the values for the set of meta parameters and using the set of labeled training examples, target values for the network parameters of the classification neural network to perform the characteristic-detection task.
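The inner/outer-loop structure of such meta-learning can be illustrated with a toy Reptile-style sketch: the abstract's classification setting is replaced here by trivial 1-D regression tasks (fitting y = w·x), purely so the two loops fit in a few deterministic lines. The algorithm choice and all numbers are assumptions for illustration.

```python
def adapt(w, task_w, steps=10, lr=0.1):
    """Inner loop: gradient steps on squared error for one task
    (input x fixed at 1.0 to keep the sketch deterministic)."""
    for _ in range(steps):
        grad = 2.0 * (w - task_w)  # d/dw of (w*x - task_w*x)^2 at x = 1
        w -= lr * grad
    return w

def meta_train(task_ws, meta_lr=0.5, epochs=50):
    """Outer loop: nudge the shared initialization toward each task's
    adapted weights -- these are the learned 'meta parameter' values."""
    w_meta = 0.0
    for _ in range(epochs):
        for task_w in task_ws:
            w_meta += meta_lr * (adapt(w_meta, task_w) - w_meta)
    return w_meta

w0 = meta_train([1.0, 3.0])  # initialization lands between the task optima
```

The learned initialization `w0` is then fine-tuned on the labeled examples for the target characteristic-detection task, mirroring the abstract's final step.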
DUAL-LEVEL MODEL FOR SEGMENTATION
The present disclosure describes techniques for dual-level semantic segmentation. Data may be input to a first segmentation network. The input data comprises an image and label information associated with the image. The image may be captured at nighttime and may comprise a plurality of regions. At least one region among the plurality of regions may be determined based at least in part on output of the first segmentation network. The at least one region of the image may be cropped. The cropped at least one region may be input to a second segmentation network. A final output may be produced based on the output of the first segmentation network and output of the second segmentation network.
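The coarse-then-refined flow can be sketched as: run the first network over the whole image, crop a selected region, run the second network on the crop, and paste the refined labels back over the coarse ones. The stub "networks" below are placeholders for trained segmentation models, and the region is chosen by hand rather than derived from the first network's output.

```python
import numpy as np

def dual_level_segment(image, coarse_net, fine_net, region):
    """Segment the whole image coarsely, re-segment one cropped region with
    a second network, and merge the refined result into the final output."""
    coarse = coarse_net(image)
    y0, y1, x0, x1 = region
    refined = fine_net(image[y0:y1, x0:x1])
    final = coarse.copy()
    final[y0:y1, x0:x1] = refined
    return final

# Stub networks: the coarse pass labels everything 0; the fine pass
# recovers label 1 inside the selected (e.g. poorly lit) region.
coarse_net = lambda im: np.zeros_like(im, dtype=int)
fine_net = lambda patch: np.ones_like(patch, dtype=int)

img = np.zeros((4, 4))
out = dual_level_segment(img, coarse_net, fine_net, (1, 3, 1, 3))
print(out.sum())  # → 4
```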
HIERARCHICAL IMAGE GENERATION VIA TRANSFORMER-BASED SEQUENTIAL PATCH SELECTION
Systems and methods for image processing are described. Embodiments of the present disclosure identify a first image depicting a first object; identify a plurality of candidate images depicting a second object; select a second image from the plurality of candidate images depicting the second object based on the second image and a sequence of previous images including the first image using a crop selection network trained to select a next compatible image based on the sequence of previous images; and generate a composite image depicting the first object and the second object based on the first image and the second image.
IMAGE PROCESSING APPARATUS, METHOD AND PROGRAM, LEARNING APPARATUS, METHOD AND PROGRAM, AND DERIVATION MODEL
An image processing apparatus includes at least one processor. From a tomographic image including a structure, the processor derives three-dimensional coordinate information that defines both a position of the structure in the tomographic plane and a position of an end part of the structure outside the tomographic plane, in a direction intersecting the tomographic image.
IMAGE ANNOTATION TOOLS
A method of annotating known objects in road images captured from a sensor-equipped vehicle, the method implemented in an annotation system and comprising: receiving at the annotation system a road image containing a view of a known object; receiving ego localization data, as computed in a map frame of reference via localization applied to sensor data captured by the sensor-equipped vehicle, the ego localization data indicating an image capture pose of the road image in the map frame of reference; determining, from a predetermined road map, an object location of the known object in the map frame of reference, the predetermined road map representing a road layout in the map frame of reference, wherein the known object is one of: a piece of road structure, and an object on or adjacent to a road; computing, in an image plane defined by the image capture pose, an object projection by projecting an object model of the known object from the object location into the image plane; and storing, in an image database, image data of the road image, in association with annotation data of the object projection for annotating the image data with a location of the known object in the image plane.
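The projection step can be sketched with a standard pinhole camera model: transform the map-frame object point into the camera frame using the pose from ego localization, apply the intrinsics, and divide by depth. The identity pose and the intrinsic matrix values below are made-up illustrations, not from the patent.

```python
import numpy as np

def project_to_image(point_map, pose_R, pose_t, K):
    """Project a 3-D map-frame point into pixel coordinates, given the
    camera pose (map-to-camera rotation R, translation t) and intrinsics K."""
    p_cam = pose_R @ point_map + pose_t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]  # perspective divide

# Identity pose; focal length 100 px, principal point at (50, 50).
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0,   0.0,  1.0]])
R, t = np.eye(3), np.zeros(3)
uv = project_to_image(np.array([1.0, 0.0, 10.0]), R, t, K)
print(uv)  # → [60. 50.]
```

Projecting every vertex of the object model this way yields the 2-D outline that becomes the stored annotation.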
Image processing apparatus and non-transitory computer readable medium storing program
An image processing apparatus includes an input unit that inputs an image, and a processor configured to read out a program stored in a memory and execute it. The processor is configured to detect an intended subject from the input image by a first detection method, set an intended subject region for the detected intended subject, detect the intended subject from the input image by a second detection method different from the first detection method, and update the set intended subject region by using a detection result of the second detection method.
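One simple realization of the update step is to expand the first method's region to also cover the second method's detection. Boxes as (x1, y1, x2, y2) and the bounding-box-union rule are assumptions for illustration; the abstract leaves the update strategy open.

```python
def update_region(region, detection):
    """Expand an intended-subject region to also cover a detection from a
    second method; both boxes are (x1, y1, x2, y2)."""
    if detection is None:
        return region  # second method found nothing; keep the region
    return (min(region[0], detection[0]), min(region[1], detection[1]),
            max(region[2], detection[2]), max(region[3], detection[3]))

# First method found the subject's face; the second found its full body.
face = (40, 30, 60, 50)
body = (35, 30, 65, 120)
print(update_region(face, body))  # → (35, 30, 65, 120)
```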