Patent classifications
G06T7/586
COMPUTER VISION METHOD AND SYSTEM
A computer vision method for generating a three-dimensional reconstruction of an object, the method comprising: receiving a set of photometric stereo images of the object, the set comprising a plurality of images captured under illumination from different directions provided by one or more light sources; using a trained neural network to generate a normal map of the object; and producing a 3D reconstruction of said object from said normal map, wherein using said trained neural network comprises converting said set of photometric stereo images to an input form suitable for an input layer of said neural network, wherein said input form comprises, for each pixel, a representation of the different lighting directions and their corresponding intensities, obtained from photometric stereo images to which a compensation has been applied, the compensation being determined from an estimate of the distance between the light source and the point on the object to which the pixel corresponds.
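For context, the claim builds on classical photometric stereo, in which per-pixel normals are recovered from intensities under known lighting directions. The sketch below shows that classical least-squares baseline under a Lambertian model with distant lights; it is not the patented network or its distance-based compensation, and the function name and array layout are illustrative assumptions.

```python
import numpy as np

def estimate_normals(intensities, light_dirs):
    """Classical photometric stereo: per-pixel normals by least squares.

    intensities: (K, H, W) array of pixel intensities under K lightings.
    light_dirs:  (K, 3) array of unit lighting-direction vectors.

    Under a Lambertian model, I_k = rho * (l_k . n). Stacking the K
    observations gives L @ g = I with g = rho * n, solved per pixel.
    """
    K, H, W = intensities.shape
    I = intensities.reshape(K, -1)                        # (K, H*W)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)                    # rho = |g|
    normals = g / np.maximum(albedo, 1e-8)                # unit normals
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)
```

With K >= 3 non-coplanar lights the per-pixel system is determined; the recovered normal map can then be integrated into a surface, which is the reconstruction step the claim performs from its network-predicted normals.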
IMAGE PROCESSING SYSTEM
Provided is an image processing system capable of estimating the three-dimensional shape of a semiconductor pattern or particle while addressing two problems: reduced measurement accuracy in the height direction and the enormous time required to acquire learning data. The image processing system according to the disclosure stores the detectable range of a detector provided in a charged particle beam device in a storage device in advance, generates a simulated image of a three-dimensional shape pattern using the detectable range, and learns the relationship between the simulated image and the three-dimensional shape pattern in advance.
3D STRUCTURE INSPECTION OR METROLOGY USING DEEP LEARNING
Methods and systems for determining information for a specimen are provided. Certain embodiments relate to bump height 3D inspection and metrology using deep learning artificial intelligence. For example, one embodiment includes a deep learning (DL) model configured for predicting height of one or more 3D structures formed on a specimen based on one or more images of the specimen generated by an imaging subsystem. One or more computer systems are configured for determining information for the specimen based on the predicted height. Determining the information may include, for example, determining if any of the 3D structures are defective based on the predicted height. In another example, the information determined for the specimen may include an average height metric for the one or more 3D structures.
ADAPTIVE LIGHT SOURCE
A method includes capturing a first image of a scene, detecting a face in a section of the scene from the first image, and activating an infrared (IR) light source to selectively illuminate the section of the scene with IR light. The IR light source includes an array of IR light emitting diodes (LEDs). The method includes capturing a second image of the scene under selective IR lighting from the IR light source, detecting the face in the second image, and identifying a person based on the face in the second image.
System and method for tracking
Systems and methods are provided for generating calibration information for a media projector. The method includes tracking at least the position of a tracking apparatus that can be positioned on a surface. The media projector shines a test spot on the surface, and the test spot corresponds to a known pixel coordinate of the media projector. The system includes a computing device in communication with at least two cameras, each of which is able to capture images of one or more light sources attached to an object. The computing device determines the object's position by comparing images of the light sources and generates an output comprising the real-world position of the object. This real-world position is mapped to the known pixel coordinate of the media projector.
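Determining a 3D position by comparing the same light source across two camera images is commonly done by linear (DLT) triangulation. A minimal sketch, assuming the two cameras' 3x4 projection matrices are already calibrated (the abstract does not specify this machinery; the function and matrix names are illustrative):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: (3, 4) camera projection matrices.
    x1, x2: (2,) pixel coordinates of the same light source in each image.
    Returns the 3D point in world coordinates.
    """
    # Each view contributes two rows of the homogeneous system A @ X = 0,
    # derived from x = P @ X (up to scale).
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

The triangulated real-world position can then be associated with the projector pixel that produced the test spot, which is the calibration mapping the abstract describes.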
Method for inspecting mounting state of component, printed circuit board inspection apparatus, and computer readable recording medium
A printed circuit board inspection apparatus can inspect the mounting state of a component as follows. It generates depth information on the component using a pattern of light reflected from the component mounted on a printed circuit board and received by an image sensor. It generates two-dimensional image data for the component using at least one of light of a first wavelength, light of a second wavelength, light of a third wavelength, and light of a fourth wavelength reflected from the component and received by a first image sensor. It then inputs the depth information and the two-dimensional image data into a machine learning-based model, obtains depth information with reduced noise from the model, and uses the noise-reduced depth information to inspect the mounting state.
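The abstract does not say how depth is computed from the reflected light pattern; a common choice in board inspection is phase-shifting profilometry, where N sinusoidally shifted fringe images yield a wrapped phase proportional to surface height. A minimal sketch under that assumption (the fringe model and function name are not from the patent):

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped fringe phase from N phase-shifted pattern images.

    images: (N, H, W) intensities I_k = A + B * cos(phi + 2*pi*k/N), N >= 3.
    Returns phi in (-pi, pi]; height is proportional to the unwrapped phase.
    """
    N = images.shape[0]
    shifts = 2 * np.pi * np.arange(N) / N
    # Sum over uniform shifts isolates sin(phi) and cos(phi) terms:
    # sum_k I_k sin(d_k) = -(N/2) B sin(phi),  sum_k I_k cos(d_k) = (N/2) B cos(phi)
    num = np.tensordot(np.sin(shifts), images, axes=1)
    den = np.tensordot(np.cos(shifts), images, axes=1)
    return np.arctan2(-num, den)
```

The wrapped phase would then be unwrapped and scaled to a depth map, which is the (noisy) depth input the abstract feeds to the machine learning-based model.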
METHOD AND APPARATUS FOR SUBJECT IDENTIFICATION
Comprehensive 2D learning images are collected for learning subjects. Standardized 2D gallery images of many gallery subjects are collected, one per gallery subject. A 2D query image of a query subject is collected, of arbitrary viewing aspect, illumination, etc. 3D learning models, 3D gallery models, and a 3D query model are determined from the learning, gallery, and query images. A transform is determined for a selected learning model and each gallery model that yields or approximates the query image. The transform is at least partly 3D, such as 3D illumination transfer or 3D orientation alignment. The transform is applied to each gallery model so that the transformed gallery models more closely resemble the query model. 2D transformed gallery images are produced from the transformed gallery models, and are compared against the 2D query image to identify whether the query subject is also any of the gallery subjects.