Patent classifications
H04N13/211
SINGLE PULSE LIDAR CORRECTION TO STEREO IMAGING
An apparatus for determining a distance to a target area includes at least one imaging system configured to provide at least two images of the target area, the images being associated with different imaging axes for forming a stereo image of the target area. The apparatus also includes a Lidar system including at least one laser configured to direct an optical beam to the target area and an optical detection system configured to receive a portion of the optical beam from the target area and establish a distance to the target area based on the received portion.
VISUALIZATION ALIGNMENT FOR THREE-DIMENSIONAL SCANNING
A method for generating a three-dimensional virtual representation of a physical object includes performing a first scan of the physical object while the physical object has a first position. A displayed visualization of the physical object is generated based on the first scan. Aligning input is received that increases the correspondence between the displayed visualization and a second position of the physical object. A second scan of the physical object is performed while the object is in the second position. Based on the first scan and the second scan, a three-dimensional virtual representation of the physical object is generated.
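The claimed flow — scan, align to the object's new position, scan again, merge — can be sketched in a toy 2-D setting. All function names here are illustrative, not from the patent, and the user's "aligning input" is modeled as a known rigid transform rather than an interactive adjustment:

```python
import math

def rigid_transform(points, angle, tx, ty):
    """Apply a 2D rigid transform (rotation, then translation) to points."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def merge_scans(scan1, scan2, alignment):
    """Bring scan2 into scan1's frame using the user-confirmed alignment
    (angle, tx, ty), then combine the two point sets into one model."""
    angle, tx, ty = alignment
    # Invert the pose change before merging.
    c, s = math.cos(-angle), math.sin(-angle)
    aligned = [(c * (x - tx) - s * (y - ty), s * (x - tx) + c * (y - ty))
               for x, y in scan2]
    return scan1 + aligned

# Toy 2D stand-in for the object's surface points.
model = [(1.0, 0.0), (0.0, 1.0)]

scan1 = list(model)                    # first scan: object in its first position
pose_change = (math.pi / 2, 0.5, 0.0)  # object moved between the two scans
scan2 = rigid_transform(model, *pose_change)  # second scan sees the moved object

merged = merge_scans(scan1, scan2, pose_change)
# After alignment, scan2's points land back on the model's points.
```

The design point the claim hinges on is that the alignment is established *before* the second scan (via the visualization and the user's input), so the merge step needs no automatic registration.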
Devices and methods for generating a 3D imaging dataset of an object
A computerized imaging system and method for creating a 3D imaging dataset of an object are disclosed. The computerized imaging system includes an object stage mounted on a system base plate; the object stage is configured to rotate 360 degrees about its axis perpendicular to the base plate plane. The computerized imaging system also includes an elongated elevation arm positioned alongside the object stage, wherein the elongated elevation arm has an image sensor, at least one lens, and a mirror mounted thereon, and wherein the optical axis of the image sensor is parallel to the elevation axis of the elongated elevation arm. The image sensor captures a plurality of images of the object at a plurality of rotation and elevation angles of the object stage and the elongated elevation arm.
Image sensor, imaging apparatus, and image processing device
An image sensor captures subject images from partial luminous fluxes that have passed through different regions of the total luminous flux of one optical system. The image sensor comprises a pixel arrangement containing a plurality of each of at least three types of pixels: non-parallax pixels whose aperture mask produces a viewpoint in a reference direction, first parallax pixels whose aperture mask produces a viewpoint in a first direction different from the reference direction, and second parallax pixels whose aperture mask produces a viewpoint in a second direction different from the reference direction. The aperture mask of the first parallax pixels and the aperture mask of the second parallax pixels each have an aperture area wider than half the aperture in the direction in which the viewpoint changes.
Multiscale depth estimation using depth from defocus
To extend the working range of depth from defocus (DFD) particularly on small depth of field (DoF) images, DFD is performed on an image pair at multiple spatial resolutions and the depth estimates are then combined. Specific implementations construct a Gaussian pyramid for each image of an image pair, perform DFD on the corresponding pair of images at each level of the two image pyramids, convert DFD depth scores to physical depth values using calibration curves generated for each level, and combine the depth values from all levels in a coarse-to-fine manner to obtain a final depth map that covers the entire depth range of the scene.
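The coarse-to-fine combination step can be sketched as follows. This is a simplified illustration, not the patent's method: it uses nearest-neighbor upsampling and binary per-pixel confidence masks in place of the per-level calibration curves described above, and assumes depth values have already been converted to physical units:

```python
def upsample2x(depth):
    """Nearest-neighbor upsample of a 2D depth map by a factor of two."""
    out = []
    for row in depth:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def fuse_coarse_to_fine(levels):
    """levels: list of (depth_map, confidence_map) pairs, coarsest first.
    Each finer level overrides the upsampled coarser estimate wherever its
    confidence mask is set; elsewhere the coarse estimate survives, which is
    how the coarse levels extend the working range beyond the fine level's."""
    depth, _ = levels[0]
    for fine_depth, fine_conf in levels[1:]:
        depth = upsample2x(depth)
        for i in range(len(fine_depth)):
            for j in range(len(fine_depth[0])):
                if fine_conf[i][j]:
                    depth[i][j] = fine_depth[i][j]
    return depth

# 1x1 coarse level plus a 2x2 fine level with two confident pixels.
coarse = ([[5.0]], None)
fine = ([[1.0, 0.0], [0.0, 2.0]], [[1, 0], [0, 1]])
fused = fuse_coarse_to_fine([coarse, fine])  # [[1.0, 5.0], [5.0, 2.0]]
```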
System and Method for Generating Motion-Stabilized Images of a Target Using Lidar and Video Measurements
A system uses range and Doppler velocity measurements from a lidar system and images from a video system to estimate a six degree-of-freedom trajectory of a target. The system estimates this trajectory in two stages: a first stage in which the range and Doppler measurements from the lidar system along with various feature measurements obtained from the images from the video system are used to estimate first stage motion aspects of the target (i.e., the trajectory of the target); and a second stage in which the images from the video system and the first stage motion aspects of the target are used to estimate second stage motion aspects of the target. Once the second stage motion aspects of the target are estimated, a three-dimensional image of the target may be generated.
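A greatly simplified 1-D analogue of the two-stage idea is sketched below. The real system estimates a six degree-of-freedom trajectory; here stage 1 is reduced to a least-squares fit of lidar range samples to a constant-velocity model, and stage 2 to folding the video system's mean residual offset back into that trajectory. The function names and the constant-velocity assumption are illustrative, not from the patent:

```python
def fit_constant_velocity(times, ranges):
    """Stage-1 analogue: least-squares fit of r(t) = r0 + v*t to lidar
    range samples (the Doppler channel would constrain v directly)."""
    n = len(times)
    st, sr = sum(times), sum(ranges)
    stt = sum(t * t for t in times)
    srt = sum(t * r for t, r in zip(times, ranges))
    v = (n * srt - st * sr) / (n * stt - st * st)
    r0 = (sr - v * st) / n
    return r0, v

def refine_with_video(r0, v, video_residuals):
    """Stage-2 analogue: the video images measure residual offsets of the
    target relative to the stage-1 trajectory; fold their mean back into
    the range intercept (a deliberate oversimplification)."""
    return r0 + sum(video_residuals) / len(video_residuals), v

times = [0.0, 1.0, 2.0]
ranges = [10.0, 8.0, 6.0]        # target closing at 2 units per second
r0, v = fit_constant_velocity(times, ranges)
r0_ref, v_ref = refine_with_video(r0, v, [0.1, 0.1, 0.1])
```

The structural point carried over from the abstract is the ordering: the lidar-driven estimate comes first, and the video measurements refine it rather than being fused in a single joint solve.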
Stereoscopic image capturing device and stereoscopic image capturing method
A camera unit that captures and generates a plurality of images of a subject, and a setting unit that sets different image capturing positions of the camera unit, are provided. The setting unit sets the image capturing positions so that the distance between the n-th and the (n+1)-th image capturing positions differs from the distance between the m-th and the (m+1)-th image capturing positions, where n and m are different natural numbers.
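The defining property — at least two consecutive capture gaps that differ — can be illustrated by generating positions with geometrically increasing spacing. The geometric progression is an arbitrary illustrative choice, not a scheme from the patent:

```python
def capture_positions(num, base_step=1.0, ratio=1.5):
    """Generate image-capturing positions along a line whose consecutive
    gaps grow geometrically, so no two gaps are equal (the claimed property
    that the n-th gap differs from the m-th gap for n != m)."""
    pos, positions = 0.0, [0.0]
    step = base_step
    for _ in range(num - 1):
        pos += step
        positions.append(pos)
        step *= ratio
    return positions

pts = capture_positions(4)                    # [0.0, 1.0, 2.5, 4.75]
gaps = [b - a for a, b in zip(pts, pts[1:])]  # [1.0, 1.5, 2.25]
```

Unequal baselines like these give each stereo pair a different disparity-to-depth sensitivity, which is the practical motivation for varying the spacing.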
Imager integrated circuit and stereoscopic image capture device
An imager integrated circuit is intended to cooperate with an optical system configured to direct light rays from a scene to an inlet face of the circuit. The circuit is configured to perform a simultaneous stereoscopic capture of N images corresponding to N distinct views of the scene, each of the N images corresponding to light rays directed by a portion of the optical system different from the portions directing the rays corresponding to the N−1 other images. The circuit includes N subsets of pixels made on a same substrate, each of the N subsets of pixels being intended to perform the capture of one of the N associated images, and means interposed between each of the N subsets of pixels and the inlet face of the circuit, configured to pass the rays corresponding to the image associated with said subset of pixels and to block the other rays.
System and Method for Tracking Objects Using Lidar and Video Measurements
A system uses range and Doppler velocity measurements from a lidar system and images from a video system to estimate a six degree-of-freedom trajectory of a target. The system estimates this trajectory in two stages: a first stage in which the range and Doppler measurements from the lidar system along with various feature measurements obtained from the images from the video system are used to estimate first stage motion aspects of the target (i.e., the trajectory of the target); and a second stage in which the images from the video system and the first stage motion aspects of the target are used to estimate second stage motion aspects of the target. Once the second stage motion aspects of the target are estimated, a three-dimensional image of the target may be generated.