Patent classifications
H04N13/25
Wide-angle stereoscopic vision with cameras having different parameters
A stereoscopic vision system uses at least two cameras having different parameters to image a scene and create stereoscopic views. The different parameters of the two cameras can be intrinsic or extrinsic, including, for example, the distortion profile of the lens in the cameras, the field of view of the lens, the orientation of the cameras, the positions of the cameras, the color spectrum of the cameras, the frame rate of the cameras, the exposure time of the cameras, the gain of the cameras, the aperture size of the lenses, or the like. An image processing apparatus is then used to process the images from the at least two different cameras to provide optimal stereoscopic vision to a display.
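One way to read the abstract is that depth can still be triangulated when the two cameras differ in an intrinsic parameter such as focal length, provided the images are brought into a common geometry first. The sketch below is an illustrative assumption, not the patented pipeline: it uses a simple pinhole model in which each camera's pixel coordinate is divided by its own focal length before the classic disparity-to-depth step; `project` and `depth_from_mixed_stereo` are hypothetical names.

```python
# Sketch: depth from two pinhole cameras with DIFFERENT focal lengths
# (one of the "different parameters" the abstract mentions). The model
# and names are illustrative assumptions, not the patent's method.

def project(f, baseline_offset, X, Z):
    """Pinhole projection of a point at lateral X, depth Z,
    seen by a camera shifted by baseline_offset along X."""
    return f * (X - baseline_offset) / Z

def depth_from_mixed_stereo(u1, u2, f1, f2, B):
    """Recover depth from pixel coordinates of the same point in two
    cameras with focal lengths f1, f2 and baseline B.
    u1 = f1*X/Z and u2 = f2*(X-B)/Z  ->  Z = B / (u1/f1 - u2/f2)."""
    return B / (u1 / f1 - u2 / f2)

# Synthetic check: a point at depth 4.0 m
f1, f2, B = 1200.0, 800.0, 0.1      # focal lengths in px, baseline in m
X, Z = 0.5, 4.0
u1 = project(f1, 0.0, X, Z)
u2 = project(f2, B, X, Z)
print(depth_from_mixed_stereo(u1, u2, f1, f2, B))   # recovers 4.0
```

Normalizing by each camera's own focal length is the minimal "equalization" step; a real system would also correct the differing distortion profiles the abstract lists.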
Distance measuring camera
A distance measuring camera 1 includes a first optical system for collecting light from a subject to form a first subject image, a second optical system for collecting the light from the subject to form a second subject image, an imaging part S for imaging the first subject image formed by the first optical system and the second subject image formed by the second optical system, and a distance calculating part 4 for calculating a distance to the subject based on the first subject image and the second subject image imaged by the imaging part S. The first optical system and the second optical system are configured so that a change of a magnification of the first subject image according to the distance to the subject is different from a change of a magnification of the second subject image according to the distance to the subject.
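The key idea can be illustrated with a thin-lens model: if the two optical systems have different focal lengths (and optionally a longitudinal offset), the ratio of the two image magnifications depends on the subject distance and can be inverted for it. The thin-lens formula and all names below are illustrative assumptions, not the patent's actual optics.

```python
# Sketch of the magnification-ratio idea: two optical systems whose
# magnifications change DIFFERENTLY with subject distance let the
# distance be solved from the ratio of the two image sizes alone.
# Thin-lens model and names are illustrative assumptions.

def magnification(f, a):
    """Thin-lens magnification for focal length f, subject distance a."""
    return f / (a - f)

def distance_from_ratio(r, f1, f2, D):
    """Invert r = m1/m2 for subject distance a, where the second system
    sits a longitudinal offset D farther from the subject:
    r = f1*(a + D - f2) / (f2*(a - f1))
      ->  a = f1*(D + f2*(r - 1)) / (r*f2 - f1)."""
    return f1 * (D + f2 * (r - 1.0)) / (r * f2 - f1)

# Forward-simulate a subject at 2.0 m, then recover the distance
f1, f2, D, a = 0.05, 0.03, 0.01, 2.0     # metres
r = magnification(f1, a) / magnification(f2, a + D)
print(distance_from_ratio(r, f1, f2, D))   # recovers ~2.0
```

Note that the denominator `r*f2 - f1` degenerates when the two systems are identical (`f1 == f2`, `D == 0`), which mirrors the abstract's requirement that the magnification changes differ.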
MULTI-APERTURE ZOOM DIGITAL CAMERAS AND METHODS OF USING SAME
Multi-aperture zoom digital cameras comprising first and second scanning cameras having respective first and second native fields of view (FOV) and operative to scan a scene in respective substantially parallel first and second planes over solid angles larger than the respective native FOV, wherein the first and second cameras have respective centers that lie on an axis that is perpendicular to the first and second planes and are separated by a distance B from each other, and a camera controller operatively coupled to the first and second scanning cameras and configured to control the scanning of each camera.
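The abstract combines two ingredients: a controller that steps each narrow native FOV across a wider target angle, and a stereo baseline B between the camera centers for triangulation. The tiling scheme, overlap value, and function names below are illustrative assumptions, not the claimed controller.

```python
import math

# Sketch: tile scan positions so each camera's native FOV covers a
# larger target angle, and triangulate depth over the baseline B.
# The tiling/overlap scheme and names are illustrative assumptions.

def scan_centers(native_fov_deg, target_fov_deg, overlap_deg=2.0):
    """Evenly spaced scan-center angles whose native FOVs, with some
    overlap, cover target_fov_deg."""
    step = native_fov_deg - overlap_deg
    n = math.ceil((target_fov_deg - native_fov_deg) / step) + 1
    span = target_fov_deg - native_fov_deg   # range of center angles
    return [-span / 2.0 + i * span / max(n - 1, 1) for i in range(n)]

def depth_from_disparity(f_px, B, disparity_px):
    """Classic stereo triangulation: Z = f * B / d."""
    return f_px * B / disparity_px

centers = scan_centers(native_fov_deg=25.0, target_fov_deg=70.0)
print(len(centers))                                  # 3 scan positions
print(depth_from_disparity(1000.0, 0.012, 4.0))      # 3.0 m
```

Because both cameras scan in substantially parallel planes perpendicular to the axis joining their centers, the same scan-center schedule can be applied to each while the baseline geometry stays fixed.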
Electronic device and method for controlling the same
An electronic device (100) and a method for controlling the electronic device (100) are provided. The electronic device (100) includes a time-of-flight (TOF) module (20), a color camera (30), a monochrome camera (40), and a processor (10). The TOF module (20) is configured to capture a depth image of a subject. The color camera (30) is configured to capture a color image of the subject. The monochrome camera (40) is configured to capture a monochrome image of the subject. The processor (10) is configured to obtain a current brightness of ambient light in real time, and to construct a three-dimensional image of the subject according to the depth image, the color image, and the monochrome image when the current brightness is less than a first threshold.
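The processor's decision rule can be sketched as a simple brightness gate: below the first threshold the monochrome image (which gathers more light) is folded into the 3D reconstruction alongside depth and color. The threshold value, names, and bright-light fallback below are illustrative assumptions; the abstract only specifies the low-light branch.

```python
# Sketch of the control logic: include the monochrome capture in the
# 3D reconstruction only when ambient brightness falls below the first
# threshold. Threshold value and names are illustrative assumptions.

FIRST_THRESHOLD = 10.0   # lux, assumed value

def choose_sources(brightness_lux):
    """Select which captures feed the 3D reconstruction."""
    if brightness_lux < FIRST_THRESHOLD:
        # Low light: monochrome detail supplements depth + color
        return ("depth", "color", "monochrome")
    # Bright scene (assumed fallback): depth + color suffice
    return ("depth", "color")

print(choose_sources(3.0))     # low light -> all three sources
print(choose_sources(200.0))   # bright    -> depth + color only
```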
MULTI-VIEW IMAGE FUSION BY IMAGE SPACE EQUALIZATION AND STEREO-BASED RECTIFICATION FROM TWO DIFFERENT CAMERAS
Methods for fusing images acquired with two cameras having different sensor types, for example a visible (VIS) digital camera and a short wave infrared (SWIR) camera, include performing image space equalization on the images acquired with the different sensor types before performing rectification and registration of such images in a fusion process.
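One simple reading of "image space equalization" is normalizing each sensor's intensities to a common statistical scale, so that VIS and SWIR pixel values become comparable before rectification, registration, and blending. The zero-mean/unit-variance normalization and the 50/50 average below are illustrative assumptions, not the patented equalization.

```python
import statistics

# Sketch: statistically equalize VIS and SWIR intensities before
# fusing. The normalisation choice and the 50/50 average are
# illustrative assumptions, not the patented method.

def equalize(pixels):
    """Map intensities to zero mean / unit variance."""
    mu = statistics.fmean(pixels)
    sigma = statistics.pstdev(pixels) or 1.0
    return [(p - mu) / sigma for p in pixels]

def fuse(vis, swir):
    """Average equalized, already-rectified-and-registered pixel rows."""
    v, s = equalize(vis), equalize(swir)
    return [(a + b) / 2.0 for a, b in zip(v, s)]

vis = [10, 20, 30, 40]            # visible-band intensities
swir = [1000, 1100, 1200, 1300]   # SWIR intensities on another scale
print([round(x, 3) for x in fuse(vis, swir)])
```

Without the equalization step, the SWIR values would dominate any direct combination purely because of their larger numeric range, which is the mismatch the abstract's equalization stage addresses.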
MULTI-DIMENSIONAL DATA CAPTURE OF AN ENVIRONMENT USING PLURAL DEVICES
Embodiments of the invention describe apparatuses, systems, and methods related to data capture of objects and/or an environment. In one embodiment, a user can capture time-indexed three-dimensional (3D) depth data using one or more portable data capture devices that can capture time-indexed color images of a scene with depth information and location and orientation data. In addition, the data capture devices may be configured to capture a spherical view of the environment around the data capture device.
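A capture from such a device can be modeled as one time-indexed record that bundles the color image, depth information, and device pose, so that frames from plural devices can later be merged on the shared time index. The field names and types below are illustrative assumptions.

```python
from dataclasses import dataclass

# Sketch: one time-indexed capture record combining the color image,
# depth map, and device pose the abstract describes. Field names and
# types are illustrative assumptions.

@dataclass
class CaptureFrame:
    timestamp: float    # seconds since capture start (the time index)
    color: list         # RGB image (placeholder type)
    depth: list         # per-pixel depth map
    position: tuple     # device position (x, y, z)
    orientation: tuple  # device orientation quaternion (w, x, y, z)

frames = [
    CaptureFrame(0.05, [[250]], [[1.1]], (0.1, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)),
    CaptureFrame(0.00, [[255]], [[1.2]], (0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)),
]
# Frames from plural devices can be merged and ordered on the time index
merged = sorted(frames, key=lambda fr: fr.timestamp)
print([fr.timestamp for fr in merged])   # [0.0, 0.05]
```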