Patent classifications
G06T7/586
Properties measurement device
An intra-oral optical scanning method including: projecting a pattern onto an intra-oral feature, the pattern including at least a first area illuminated by a first color of light, a second area illuminated by a second color of light, and at least one non-illuminated area; making a first image of the first area, the second area and the non-illuminated area; differentiating between the first color of light and the second color of light in the first image of the projected pattern; and determining from the image of the non-illuminated area at least one of an ambient light level, a level of scattered light, a level of light absorption and a level of light reflected from at least one of the first area and the second area. Related apparatus and methods are also described.
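The idea of using a deliberately non-illuminated region to estimate and remove ambient light can be sketched as follows. This is an illustrative reading of the abstract, not the patented implementation; the function name, mask inputs, and mean-based estimate are assumptions.

```python
import numpy as np

def estimate_ambient_and_correct(image, illuminated_mask, dark_mask):
    """Estimate ambient light from a non-illuminated region and subtract it.

    image: 2-D array of pixel intensities for one color channel.
    illuminated_mask / dark_mask: boolean arrays marking the projected
    pattern's lit and deliberately non-illuminated areas (hypothetical
    inputs for this sketch).
    """
    ambient = float(np.mean(image[dark_mask]))            # ambient light level
    corrected = image.astype(float) - ambient             # remove ambient bias
    signal = float(np.mean(corrected[illuminated_mask]))  # pattern signal level
    return ambient, np.clip(corrected, 0, None), signal
```

With the ambient level removed, the lit areas' intensities reflect only the projected pattern, which makes differentiating the two colors more reliable.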
Signal generating systems for three-dimensional imaging systems and methods thereof
Methods and systems for generating illumination modulation signals, image intensifier gain modulation signals, and image sensor shutter control signals in a three-dimensional image-capturing system with one or more frequency synthesizers are disclosed. The illumination modulation signals, image intensifier gain modulation signals, and image sensor shutter signals are all coherent with one another, being derived from a common clock source. The use of frequency synthesizer concepts allows rapid modulation phase changes for homodyne operation, and further allows rapid modulation frequency changes to mitigate the effects of inter-camera interference.
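The rapid, coherent phase changes described above are what enable homodyne range measurement. A common four-bucket demodulation (an illustrative standard technique, not taken from the patent) recovers the modulation phase from four samples taken at 0°, 90°, 180°, and 270° offsets between the illumination and gain signals, then maps phase to distance:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def homodyne_distance(samples, mod_freq_hz):
    """Four-bucket homodyne demodulation (illustrative sketch).

    samples: intensities measured at 0, 90, 180, 270 degree phase offsets
    between the illumination modulation and the gain modulation.
    """
    a0, a90, a180, a270 = samples
    phase = math.atan2(a90 - a270, a0 - a180)  # recovered modulation phase
    if phase < 0:
        phase += 2 * math.pi
    # One full phase cycle corresponds to a round trip of one modulation
    # wavelength, so the unambiguous range is C / (2 * mod_freq_hz).
    return (phase / (2 * math.pi)) * C / (2 * mod_freq_hz)
```

Because all four samples are taken against signals derived from one clock, any phase offset between them is measurement signal rather than synthesizer drift, which is why coherence matters here.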
Baffles for three-dimensional sensors having spherical fields of view
In one example, a distance sensor includes a camera to capture images of a field of view, a plurality of light sources arranged around a lens of the camera, wherein each light source of the plurality of light sources is configured to project a plurality of beams of light into the field of view, and wherein the plurality of beams of light creates a pattern of projection artifacts in the field of view that is visible to a detector of the camera, a baffle attached to a first light source of the plurality of light sources, wherein the baffle is positioned to limit a fan angle of a plurality of beams of light that is projected by the first light source, and a processing system to calculate a distance from the distance sensor to an object in the field of view, based on an analysis of the images.
Visible light blocking lens assembly and electronic device including the same
A disclosed lens assembly may include at least four lenses sequentially arranged along an optical axis from a subject to an image sensor. Among the at least four lenses, a first lens disposed closest to the subject may have a visible light transmittance ranging from 0% to 5%, and, among subject-side surfaces and image-sensor-side surfaces of remaining lenses other than the first lens, at least four surfaces may include an inflection point. The lens assembly or an electronic device including the lens assembly may be variously implemented according to embodiments.
Method for training depth estimation model, electronic device, and storage medium
A method for training a depth estimation model includes: obtaining sample images; generating sample depth images and sample residual maps corresponding to the sample images; determining sample photometric error information corresponding to the sample images based on the sample depth images; and obtaining a target depth estimation model by training an initial depth estimation model based on the sample images, the sample residual maps and the sample photometric error information.
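The photometric error in such self-supervised pipelines is typically computed between the target image and source views warped into the target frame via the predicted depth. A minimal sketch of one common formulation (the per-pixel-minimum L1 error over warped sources is an assumption here, not stated in the abstract):

```python
import numpy as np

def photometric_loss(target, warped_sources):
    """Per-pixel minimum L1 photometric error over warped source views.

    target: target-view image as an array.
    warped_sources: source images already warped into the target view using
    the predicted depth (the warping itself is assumed done upstream).
    Taking the per-pixel minimum over sources reduces occlusion artifacts.
    """
    errors = [np.abs(target - w) for w in warped_sources]  # L1 per source
    per_pixel_min = np.minimum.reduce(errors)              # robust to occlusion
    return float(per_pixel_min.mean())
```

Minimizing this loss drives the depth network to predict depths whose induced warps best reconstruct the target image, so no ground-truth depth is needed.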
Augmented reality system using structured light
An augmented reality system having a light source and a camera is described. The light source projects a pattern of light onto a scene, the pattern being periodic. The camera captures an image of the scene including the projected pattern. A projector pixel of the projected pattern corresponding to an image pixel of the captured image is determined. A disparity of each correspondence is determined, the disparity being the amount by which corresponding pixels are displaced between the projected pattern and the captured image. A three-dimensional computer model of the scene is generated based on the disparity. A virtual object in the scene is rendered based on the three-dimensional computer model.
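Converting the per-correspondence disparity into depth follows the standard triangulation relation shared by stereo and projector-camera (structured light) systems; the symbols below are illustrative, not taken from the patent:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from projector-camera disparity.

    Standard relation: depth = focal_length * baseline / disparity, where
    focal_px is the focal length in pixels and baseline_m is the distance
    between the projector and camera centers in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Large disparities correspond to near points and small disparities to far points, which is why depth resolution degrades with distance in such systems.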