Patent classifications
G06T7/586
Information processing apparatus, information processing method, and program
There is provided an information processing apparatus, an information processing method, and a program with which highly accurate depth information can be acquired. The information processing apparatus includes an interpolation image generation unit, a difference image generation unit, and a depth calculation unit. From a sequence of a first normal image, a pattern image captured under infrared pattern light irradiation, and a second normal image, the interpolation image generation unit generates, on the basis of the first and second normal images, an interpolation image corresponding to the time at which the pattern image is captured. The difference image generation unit generates a difference image between the interpolation image and the pattern image. The depth calculation unit calculates depth information by using the difference image.
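The interpolation-and-difference pipeline this abstract describes can be sketched as follows. This is a minimal illustration assuming linear blending of the two normal frames and numpy image arrays; the function names are illustrative and not from the patent:

```python
import numpy as np

def interpolate_frame(first_normal, second_normal, alpha=0.5):
    """Blend the two normal (non-patterned) frames to estimate the
    scene's ambient appearance at the pattern frame's capture time."""
    return (1.0 - alpha) * first_normal + alpha * second_normal

def difference_image(pattern_image, interpolated):
    """Isolate the projected infrared pattern by subtracting the
    interpolated ambient image from the pattern image."""
    return np.clip(pattern_image - interpolated, 0.0, None)
```

The difference image then feeds a structured-light depth computation; removing the ambient component first is what makes the extracted pattern, and hence the depth, more accurate.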
Apparatus and method for obstacle detection
Apparatuses, methods, and storage media associated with surface traversal by robotic apparatuses equipped with an obstacle detection system are described herein. In some instances, the obstacle detection system is mounted on the body of the apparatus and includes one or more light sources to illuminate the surface to be traversed; a camera to capture one or more images of the illuminated surface; and a processing device, coupled with the camera and the light sources, to process the captured one or more images and detect, or cause to be detected, an obstacle disposed on the illuminated surface, based at least in part on a result of the processing. Other embodiments may be described and claimed.
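One simple reading of the detection step, sketched under the assumption that an obstacle appears as a brightness deviation from the expected appearance of the illuminated surface (the function name and threshold are illustrative, not from the patent):

```python
import numpy as np

def detect_obstacles(image, background, threshold=0.1):
    """Flag pixels of the captured image that deviate from the expected
    illuminated-surface background by more than `threshold`."""
    deviation = np.abs(image.astype(float) - background.astype(float))
    mask = deviation > threshold
    return mask, bool(mask.any())
```

A real system would add morphological filtering and size gating on the mask to reject sensor noise before reporting an obstacle.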
REAL-TIME 3D FACIAL ANIMATION FROM BINOCULAR VIDEO
A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
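The loss factor over selected points can be illustrated as a mean squared distance between points chosen in the test image and the same points in the model's rendition. This is a hypothetical sketch of such a point-based loss, not the patent's actual formulation:

```python
import numpy as np

def landmark_loss(test_points, rendered_points):
    """Mean squared 2D distance between selected points in the test
    image and their counterparts in the model's rendition."""
    diffs = np.asarray(test_points, dtype=float) - np.asarray(rendered_points, dtype=float)
    return float(np.mean(np.sum(diffs ** 2, axis=-1)))
```

A loss of zero means the rendition reproduces the selected points exactly; the model update step would then move parameters in the direction that reduces this value.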
Deep photometric learning (DPL) systems, apparatus and methods
An imaging system is disclosed herein. The imaging system includes an imaging apparatus and a computing system. The imaging apparatus includes a plurality of light sources positioned at a plurality of positions and angles relative to a stage configured to support a specimen, and is configured to capture a plurality of images of a surface of the specimen. The computing system is in communication with the imaging apparatus and is configured to generate a 3D-reconstruction of the surface of the specimen by receiving, from the imaging apparatus, the plurality of images of the surface of the specimen, generating, via a deep learning model, a height map of the surface of the specimen based on the plurality of images, and outputting a 3D-reconstruction of the surface of the specimen based on the height map generated by the deep learning model.
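The patent maps multi-light images to a height map with a deep learning model. As a rough classical stand-in for intuition, per-pixel least-squares photometric stereo recovers surface normals from images taken under known light directions; this sketch illustrates that substitute technique, not the disclosed model:

```python
import numpy as np

def photometric_normals(images, light_dirs):
    """Classical photometric stereo: solve L n = I per pixel in the
    least-squares sense, where L holds the light directions and I the
    observed intensities, then normalize to unit surface normals."""
    L = np.asarray(light_dirs, dtype=float)           # (k, 3)
    I = np.stack([im.ravel() for im in images])       # (k, n_pixels)
    n, *_ = np.linalg.lstsq(L, I, rcond=None)         # (3, n_pixels), scaled normals
    norm = np.linalg.norm(n, axis=0, keepdims=True)
    return (n / np.clip(norm, 1e-9, None)).T.reshape(images[0].shape + (3,))
```

Integrating the recovered normal field (e.g. by Poisson integration) would then yield the height map that the deep learning model in the abstract predicts directly.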
IMAGE PROCESSING APPARATUS AND VIRTUAL ILLUMINATION SYSTEM
An image processing apparatus includes: a first input interface that acquires first image data indicating an original image shot with an imaging device with respect to a subject in a shooting environment; a second input interface that acquires illuminance distribution information indicating a distribution based on illumination light radiated onto the subject from an illumination device in the shooting environment; a user interface that receives a user operation to set virtual illumination; and a controller that generates second image data by retouching the first image data to apply an illumination effect onto the original image with reference to the illuminance distribution information, the illumination effect corresponding to the virtual illumination set by the user operation.
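One way to picture the retouching step: treat the illuminance distribution as a per-pixel light map, divide it out of the original image to approximate reflectance, then re-light with the user-set virtual illumination. This is a hypothetical sketch of that intrinsic-image-style approach, not the disclosed controller logic:

```python
import numpy as np

def apply_virtual_illumination(original, illuminance, virtual_illuminance, eps=1e-6):
    """Generate second image data: divide out the measured illuminance
    distribution to estimate reflectance, then apply the illumination
    effect of the user-set virtual light."""
    reflectance = original / np.clip(illuminance, eps, None)
    return reflectance * virtual_illuminance
```

Because the measured illuminance distribution anchors the decomposition, the virtual illumination effect stays consistent with how the subject was actually lit at shooting time.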
Holographic imaging device and method
A holographic imaging device is disclosed. In one aspect, the holographic imaging device comprises an imaging unit comprising at least two light sources, wherein the imaging unit is configured to illuminate an object by emitting at least two light beams with the at least two light sources. The first and second light beams have different wave vectors and wavelengths. The holographic imaging device further comprises a processing unit configured to obtain at least two holograms of the object by controlling the imaging unit to sequentially illuminate the object with the first light beam and the second light beam, respectively, construct at least two 2D image slices based on the at least two holograms, wherein each 2D image slice is constructed at a determined depth within the object volume, and generate a three-dimensional image of the object based on a combination of the 2D image slices.
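Constructing a 2D image slice at a chosen depth from a recorded hologram is commonly done by angular-spectrum propagation. The sketch below illustrates that general technique (not necessarily the patent's reconstruction method), assuming a monochromatic complex-valued hologram sampled at pixel pitch dx:

```python
import numpy as np

def propagate(hologram, wavelength, dz, dx):
    """Angular-spectrum propagation of a hologram field to depth dz,
    yielding one 2D image slice of the object volume."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=dx)          # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.clip(arg, 0.0, None))  # drop evanescent waves
    transfer = np.exp(1j * kz * dz)
    return np.fft.ifft2(np.fft.fft2(hologram) * transfer)
```

Repeating the propagation for a set of depths, once per hologram, and stacking the resulting slices gives the three-dimensional image the abstract describes.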