Patent classifications
G06T7/586
ELECTRONIC DEVICE AND CONTROL METHOD THEREOF
An electronic device and a control method thereof are provided. The electronic device includes a camera, a camera flash, and at least one processor configured to control the camera to capture a natural light image and a depth image of an object, control the camera and the camera flash to capture an artificial light image of the object, obtain distance information from the depth image to generate a depth mask image, create a cluster mask image from the natural light image, obtain a flash image in which the illuminance of the natural light image has been removed from the illuminance of the artificial light image, obtain an optimization parameter based on the distance information, the depth mask image, the cluster mask image, and the flash image, and obtain three-dimensional topographic information and surface reflection information about the object based on the obtained optimization parameter.
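The core preprocessing in this abstract is removing ambient illumination from the flash exposure and masking pixels by distance before optimization. Below is a minimal NumPy sketch of those two steps, assuming linear-intensity images of matching size; the function and parameter names are illustrative and not taken from the patent.

```python
import numpy as np

def pure_flash_image(natural_img, flash_img):
    """Keep only the flash contribution by removing ambient illuminance.

    Assumes both inputs are linear-intensity arrays of the same shape
    (illustrative names; not the patent's terminology).
    """
    diff = flash_img.astype(np.float64) - natural_img.astype(np.float64)
    return np.clip(diff, 0.0, None)

def depth_mask(depth_img, near, far):
    """Build a binary mask of pixels whose distance lies within [near, far]."""
    return ((depth_img >= near) & (depth_img <= far)).astype(np.float64)
```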
Three-dimensional modeling using hemispherical or spherical visible light-depth images
Three-dimensional modeling includes obtaining a hemispherical or spherical visible light-depth image capturing an operational environment of a user device, generating a perspective converted hemispherical or spherical visible light-depth image, generating a three-dimensional model of the operational environment based on the perspective converted hemispherical or spherical visible light-depth image, and outputting the three-dimensional model. Obtaining the hemispherical or spherical visible light-depth image includes obtaining a hemispherical or spherical visual light image and obtaining a hemispherical or spherical non-visual light depth image. Generating the perspective converted hemispherical or spherical visible light-depth image includes generating a perspective converted hemispherical or spherical visual light image and generating a perspective converted hemispherical or spherical non-visual light depth image.
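The "perspective conversion" of a hemispherical or spherical image can be illustrated by sampling a pinhole view out of an equirectangular panorama (a gnomonic projection). A minimal nearest-neighbour sketch follows, assuming the spherical image is stored in equirectangular form covering 360 x 180 degrees; the field-of-view, yaw, and pitch parameters are illustrative, not from the patent.

```python
import numpy as np

def equirect_to_perspective(equi, fov_deg=90.0, yaw=0.0, pitch=0.0, out_hw=(256, 256)):
    """Sample a pinhole (perspective) view out of an equirectangular panorama."""
    H, W = equi.shape[:2]
    h, w = out_hw
    f = 0.5 * w / np.tan(np.radians(fov_deg) / 2.0)  # focal length in pixels
    # Pixel grid centred on the optical axis.
    xs, ys = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate the viewing rays by yaw (about y), then pitch (about x).
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    dirs = dirs @ (Ry @ Rx).T
    # Convert rays to longitude/latitude, then to panorama pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])        # [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return equi[v, u]
```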
Device for extracting depth information and method thereof
A device for extracting depth information according to one embodiment of the present invention includes: a light outputting unit for outputting infrared (IR) light; a light inputting unit for receiving light that is output from the light outputting unit and reflected from an object; a light adjusting unit for adjusting the angle of the light so that it is radiated onto a first area including the object, and then adjusting the angle of the light so that it is radiated onto a second area; and a controlling unit for estimating the motion of the object by using at least one of the light input from the first area and the light input from the second area.
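The controlling unit's motion estimate compares the light returned from the first and second illuminated areas. One conventional way to estimate a translational shift between two such IR frames is phase correlation; the sketch below assumes same-sized 2-D intensity arrays and is illustrative rather than the patent's actual method.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the (dy, dx) motion between two IR frames via phase correlation.

    `frame_a` / `frame_b` are assumed to be same-sized 2-D intensity arrays
    captured from the first and second illuminated areas (illustrative names).
    """
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = Fa * np.conj(Fb)
    cross /= np.maximum(np.abs(cross), 1e-12)       # normalise to pure phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap peaks that fall past the array midpoint into negative shifts.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)  # (dy, dx) in pixels
```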
Layered Scene Decomposition CODEC Method
A system and methods are provided for a CODEC driving a real-time light field display for multi-dimensional video streaming, interactive gaming, and other light field display applications, applying a layered scene decomposition strategy. Multi-dimensional scene data is divided into a plurality of data layers of increasing depth as the distance between a given layer and the plane of the display increases. Data layers are sampled using a plenoptic sampling scheme and rendered using hybrid rendering, such as perspective and oblique rendering, to encode light fields corresponding to each data layer. The resulting compressed (layered) core representation of the multi-dimensional scene data is produced at predictable rates, then reconstructed and merged at the light field display in real time by applying view synthesis protocols, including edge-adaptive interpolation, to reconstruct pixel arrays in stages (e.g., columns then rows) from reference elemental images.
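The decomposition assigns scene content to depth layers whose extent grows with distance from the display plane. Below is a minimal sketch of one such layer schedule, using a geometric growth factor as an assumption; the patent's actual plenoptic sampling rates and layer boundaries are not specified here.

```python
import numpy as np

def layer_bounds(z_near, z_far, num_layers, growth=2.0):
    """Compute layer boundaries whose depth extent grows with distance
    from the display plane (geometric growth is an illustrative choice)."""
    widths = growth ** np.arange(num_layers)
    widths = widths / widths.sum() * (z_far - z_near)
    return z_near + np.concatenate(([0.0], np.cumsum(widths)))

def assign_layers(depth_map, bounds):
    """Assign each pixel of a depth map to the index of its enclosing layer."""
    return np.clip(np.searchsorted(bounds, depth_map, side="right") - 1,
                   0, len(bounds) - 2)
```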
PROPERTIES MEASUREMENT DEVICE
An intra-oral optical scanning method includes projecting a pattern onto an intra-oral feature, the pattern including at least a first area illuminated by a first color of light, a second area illuminated by a second color of light, and at least one non-illuminated area; making a first image of the first area, the second area, and the non-illuminated area; differentiating between the first color of light and the second color of light in the first image of the projected pattern; and determining from the image of the non-illuminated area at least one of an ambient light level, a level of scattered light, a level of light absorption, and a level of light reflected from at least one of the first area and the second area. Related apparatus and methods are also described.
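The non-illuminated area of the projected pattern gives a direct estimate of ambient light, which can then be removed before separating the two projected colours. A minimal sketch follows, under the assumption that the two colours land predominantly in the red and blue channels; the names and channel choices are illustrative, not from the patent.

```python
import numpy as np

def ambient_level(image, dark_mask):
    """Estimate the ambient light level from pixels in the non-illuminated area.

    `dark_mask` is assumed to be a boolean mask marking the projected
    pattern's non-illuminated region (illustrative names).
    """
    return float(np.median(image[dark_mask]))

def separate_colors(image_rgb, ambient):
    """Subtract the estimated ambient level, then split the pattern into its
    first- and second-colour responses (assumed here to be red vs. blue)."""
    corrected = np.clip(image_rgb.astype(np.float64) - ambient, 0.0, None)
    return corrected[..., 0], corrected[..., 2]
```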
DUAL-PATTERN OPTICAL 3D DIMENSIONING
An optical dimensioning system includes one or more light emitting assemblies configured to project one or more predetermined patterns on an object; an imaging assembly configured to sense light scattered and/or reflected off the object, and to capture an image of the object while the patterns are projected; and a processing assembly configured to analyze the image of the object to determine one or more dimension parameters of the object. The light emitting assembly may include a single-piece optical component configured to produce a first pattern and a second pattern. The patterns may be distinguishable based on directional filtering, feature detection, feature shift detection, or the like. A method for optical dimensioning includes illuminating an object with at least two detectable patterns, and calculating dimensions of the object by analyzing the pattern separation of the elements comprising the projected patterns. One or more pattern generators may produce the patterns.
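Dimensioning from the shift of projected pattern elements reduces, in the simplest calibrated case, to the standard structured-light triangulation relation Z = f * B / d. Below is a minimal sketch of that relation and of converting a pixel extent to a metric width; the system's real calibration model is not specified here, and all parameter names are illustrative.

```python
def depth_from_pattern_shift(shift_px, focal_px, baseline_m):
    """Triangulate range from the observed shift of a projected pattern element.

    Uses the standard structured-light relation Z = f * B / d, where d is the
    shift in pixels, f the focal length in pixels, and B the baseline in metres.
    """
    if shift_px <= 0:
        raise ValueError("pattern shift must be positive for a finite range")
    return focal_px * baseline_m / shift_px

def object_width(pixel_extent, depth_m, focal_px):
    """Convert the object's extent in pixels at the measured depth into metres."""
    return pixel_extent * depth_m / focal_px
```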
Deep Photometric Learning (DPL) Systems, Apparatus and Methods
An imaging system is disclosed herein. The imaging system includes an imaging apparatus and a computing system. The imaging apparatus includes a plurality of light sources positioned at a plurality of positions and a plurality of angles relative to a stage configured to support a specimen. The imaging apparatus is configured to capture a plurality of images of a surface of the specimen. The computing system is in communication with the imaging apparatus. The computing system is configured to generate a 3D-reconstruction of the surface of the specimen by receiving, from the imaging apparatus, the plurality of images of the surface of the specimen, generating, by the computing system via a deep learning model, a height map of the surface of the specimen based on the plurality of images, and outputting a 3D-reconstruction of the surface of the specimen based on the height map generated by the deep learning model.
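Once the deep learning model has produced a per-pixel height map, the 3D-reconstruction step amounts to lifting that map into a point cloud (or mesh). A minimal sketch of the point-cloud lift follows, assuming a dense H x W height array and a uniform lateral pixel pitch; the parameter name is illustrative and not taken from the patent.

```python
import numpy as np

def height_map_to_point_cloud(height_map, pixel_pitch=1.0):
    """Turn a per-pixel height map into an (N, 3) point cloud.

    Assumes the model's output is a dense H x W array of heights and that
    neighbouring pixels are `pixel_pitch` apart in both lateral directions.
    """
    H, W = height_map.shape
    ys, xs = np.mgrid[0:H, 0:W]
    return np.stack([xs.ravel() * pixel_pitch,
                     ys.ravel() * pixel_pitch,
                     height_map.ravel()], axis=1)
```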