Patent classifications
H04N13/00
Binocular See-Through AR Head-Mounted Display Device and Information Display Method Therefor
A binocular see-through AR head-mounted display device is disclosed. Given that the mapping relationships f_c→s and f_d→i are pre-stored in the head-mounted device, the position of the target object in the camera image is obtained through an image tracking method and is mapped to the screen coordinate system of the head-mounted device in order to calculate the left/right image display positions. Through a monocular distance-finding method, the distance between the target object and the camera is calculated in real time with reference to the imaging scale of the camera, so as to calculate a left-right image distance, thereby calculating the left or the right image display position. Correspondingly, the present invention also provides an information display method for a binocular see-through AR head-mounted display device and an augmented reality information display system. The present invention is highly reliable with low cost. Conventional depth-of-field adjustment changes an image distance of an optical element. The present invention, however, breaks with this convention and calculates the left and the right image display positions for depth-of-field adjustment without changing the structure of the optical device. The present invention is novel and practical compared to changing an optical focal length.
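As a rough illustration of the depth-of-field idea above, the sketch below shifts the left and right screen positions of a rendered label horizontally so that the fused virtual image appears at the measured object distance. All function and parameter names, the default values, and the pinhole-style geometry are assumptions for illustration, not taken from the patent:

```python
def lr_display_positions(x_screen, y_screen, distance_m,
                         ipd_m=0.064, screen_distance_m=2.0,
                         px_per_m=1000.0):
    """Hypothetical sketch: place the left/right copies of an image so
    the fused result appears at `distance_m`.  By similar triangles,
    a point at distance Z seen through a virtual screen at distance d
    needs a per-eye horizontal offset of (ipd/2) * (1 - d/Z)."""
    shift_m = (ipd_m / 2.0) * (1.0 - screen_distance_m / distance_m)
    shift_px = shift_m * px_per_m
    # left image moves left, right image moves right (uncrossed
    # disparity) when the object is farther than the virtual screen
    left = (x_screen - shift_px, y_screen)
    right = (x_screen + shift_px, y_screen)
    return left, right
```

When the object sits exactly at the virtual screen distance the shift vanishes; for farther objects the two images move apart, which is what pushes the perceived depth outward without changing the optics.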
BALANCING COLORS IN A SCANNED THREE-DIMENSIONAL IMAGE
A method of balancing colors of three-dimensional (3D) points measured by a scanner from a first location and a second location. The scanner measures 3D coordinates and colors of first object points from a first location and second object points from a second location. The scene is divided into local neighborhoods, each containing at least a first object point and a second object point. An adapted second color is determined for each second object point based at least in part on the colors of first object points in the local neighborhood.
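A minimal sketch of the local-neighborhood balancing idea: bucket points from both scans into cubic cells and, in each cell that contains points from both scans, scale the second scan's values so their cell mean matches the first scan's. The point format (x, y, z, gray), the cubic cells, and the gain-based adaptation are assumptions; the patent does not fix these details:

```python
from collections import defaultdict

def adapt_second_colors(first_pts, second_pts, cell=1.0):
    """Each point is (x, y, z, gray).  Within each cubic cell of side
    `cell`, second-scan gray values are scaled so that their cell mean
    matches the first scan's cell mean; cells with no first-scan
    reference are left unchanged."""
    def key(p):
        return tuple(int(c // cell) for c in p[:3])

    first_sum = defaultdict(lambda: [0.0, 0])
    second_sum = defaultdict(lambda: [0.0, 0])
    for p in first_pts:
        s = first_sum[key(p)]; s[0] += p[3]; s[1] += 1
    for p in second_pts:
        s = second_sum[key(p)]; s[0] += p[3]; s[1] += 1

    adapted = []
    for p in second_pts:
        k = key(p)
        if k in first_sum:
            gain = ((first_sum[k][0] / first_sum[k][1]) /
                    (second_sum[k][0] / second_sum[k][1]))
        else:
            gain = 1.0  # no first-scan points in this neighborhood
        adapted.append((p[0], p[1], p[2], p[3] * gain))
    return adapted
```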
Measuring Accuracy of Image Based Depth Sensing Systems
A special test target may enable standardized testing of performance of image based depth measuring systems. In addition, the error in measured depth with respect to the ground truth may be used as a metric of system performance. This test target may aid in identifying the limitations of the disparity estimation algorithms.
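The error-with-respect-to-ground-truth metric can be as simple as aggregating per-pixel depth residuals; the helper below is a hypothetical sketch (names and choice of mean-absolute and root-mean-square error are assumptions):

```python
import math

def depth_error_metrics(measured, ground_truth):
    """Compare measured depths against ground-truth depths for the
    same pixels and summarize the error as (MAE, RMSE)."""
    errs = [m - g for m, g in zip(measured, ground_truth)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, rmse
```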
ELECTRONIC DEVICE, METHOD AND COMPUTER PROGRAM
An electronic device, comprising a processor which is configured to reconstruct in real-time a preview image of compressed sensing image data.
Free-viewpoint method and system
A method of generating a 3D reconstruction of a scene, a plurality of cameras being positioned around the scene, comprises: obtaining the extrinsics and intrinsics of a virtual camera within the scene; accessing a data structure so as to determine a camera pair that is to be used in reconstructing the scene from the viewpoint of the virtual camera; wherein the data structure defines a voxel representation of the scene, the voxel representation comprising a plurality of voxels, at least some of the voxel surfaces being associated with respective camera pair identifiers; wherein each camera pair identifier associated with a respective voxel surface corresponds to a camera pair that has been identified as being suitable for obtaining depth data for the part of the scene within that voxel and for which the averaged pose of the camera pair is oriented towards the voxel surface; identifying, based on the obtained extrinsics and intrinsics of the virtual camera, at least one voxel that is within the field of view of the virtual camera and a corresponding voxel surface that is oriented towards the virtual camera; identifying, based on the accessed data structure, at least one camera pair that is suitable for reconstructing the scene from the viewpoint of the virtual camera; and generating a reconstruction of the scene from the viewpoint of the virtual camera based on the images captured by the cameras in the identified at least one camera pair.
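The voxel-surface-to-camera-pair lookup might be sketched as a plain dictionary keyed by (voxel index, surface identifier); given the surfaces that fall inside the virtual camera's frustum and face it, the candidate camera pairs are the union of the associated identifiers. Everything below (key format, surface labels, camera names) is illustrative only:

```python
def camera_pairs_for_view(table, visible_surfaces):
    """Collect the camera pairs registered for the (voxel, surface)
    keys that are visible to, and oriented towards, the virtual
    camera.  Frustum culling and facing tests are assumed done by
    the caller."""
    pairs = set()
    for key in visible_surfaces:
        pairs.update(table.get(key, ()))
    return pairs

# Hypothetical table: voxel (0,0,0)'s +z face is best covered by the
# pair (camA, camB); voxel (1,0,0)'s +z face by (camB, camC).
table = {((0, 0, 0), '+z'): {('camA', 'camB')},
         ((1, 0, 0), '+z'): {('camB', 'camC')}}
```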
SINGLE-VIEW FEATURE-LESS DEPTH AND TEXTURE CALIBRATION
A method and apparatus for performing a single-view depth and texture calibration are described. In one embodiment, the apparatus comprises a calibration unit operable to perform a single-view calibration process using a captured single view of a target having a plurality of plane geometries with detectable features, the target being at a single orientation, and to generate calibration parameters to calibrate one or more of the projector and multiple cameras using the single view of the target.
STEREO IMAGE MATCHING APPARATUS AND METHOD REQUIRING SMALL CALCULATION
A stereo image matching apparatus includes a processor which includes: a bit distributor distributing the values of each pixel of stereo images into sequential N bits and outputting a plurality of stereo images including the sequential N bits; a plurality of cost calculators, each receiving the plurality of stereo images and calculating matching cost values for each pixel of each of the stereo images; a confidence calculator calculating a matching confidence by using cost characteristics of the respective matching cost values calculated by the plurality of cost calculators; and a depth determiner determining, as the final depth value, a depth value for which the matching confidence is high and the matching cost values are relatively low.
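A simplified sketch of the cost/confidence idea: compute a matching cost per candidate disparity, take the confidence as the gap between the two lowest costs, and accept the lowest-cost disparity when that gap is large. This omits the patent's bit-distribution stage and substitutes a plain sum-of-absolute-differences cost; all names are assumptions:

```python
def best_disparity(left_row, right_row, x, max_d, window=1):
    """For pixel `x` of `left_row`, evaluate SAD matching cost against
    `right_row` at each disparity 0..max_d over a small window, then
    return (best disparity, confidence), where confidence is the cost
    gap between the best and second-best candidates."""
    costs = []
    for d in range(max_d + 1):
        if x - d < 0:
            break  # candidate falls off the right image
        c = sum(abs(left_row[x + i] - right_row[x - d + i])
                for i in range(-window, window + 1)
                if 0 <= x + i < len(left_row)
                and 0 <= x - d + i < len(right_row))
        costs.append((c, d))
    costs.sort()  # lowest cost first
    best_c, best_d = costs[0]
    confidence = (costs[1][0] - best_c) if len(costs) > 1 else float('inf')
    return best_d, confidence
```

A real implementation would keep the depth value only when the confidence clears a threshold, matching the "high confidence, relatively low cost" criterion above.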
Computer Vision Based Driver Assistance Devices, Systems, Methods and Associated Computer Executable Code
The present invention includes computer vision based driver assistance devices, systems, methods and associated computer executable code (hereinafter collectively referred to as: “ADAS”). According to some embodiments, an ADAS may include one or more fixed image/video sensors and one or more adjustable or otherwise movable image/video sensors, characterized by different dimensions of fields of view. According to some embodiments of the present invention, an ADAS may include improved image processing. According to some embodiments, an ADAS may also include one or more sensors adapted to monitor/sense an interior of the vehicle and/or the persons within. An ADAS may include one or more sensors adapted to detect parameters relating to the driver of the vehicle and processing circuitry adapted to assess mental conditions/alertness of the driver and directions of driver gaze. These may be used to modify ADAS operation/thresholds.
Generation of three-dimensional scans for intraoperative imaging
A system for executing a three-dimensional (3D) intraoperative scan of a patient is disclosed. A 3D scanner controller projects the object points included in a first two-dimensional (2D) intraoperative image onto a first image plane and the object points included in a second 2D intraoperative image onto a second image plane. The 3D scanner controller determines first epipolar lines associated with the first image plane and second epipolar lines associated with the second image plane based on an epipolar plane that triangulates the object points included in the first 2D intraoperative image to the object points included in the second 2D intraoperative image. Each epipolar line provides a depth of each object point as projected onto the first image plane and the second image plane. The 3D scanner controller converts the first 2D intraoperative image and the second 2D intraoperative image into the 3D intraoperative scan of the patient based on the depth of each object point provided by each corresponding epipolar line.
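For the special case of rectified cameras, where the epipolar lines reduce to horizontal scanlines, the depth recovered by such a triangulation is the familiar Z = f·b/d. The sketch below uses assumed parameter names and is not the patent's specific method:

```python
def triangulate_depth(x_left, x_right, focal_px, baseline_m):
    """Rectified-stereo depth: for a point at column x_left in one
    image and x_right in the other, disparity d = x_left - x_right
    gives depth Z = focal_px * baseline_m / d."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("zero/negative disparity: point at infinity "
                         "or mismatched correspondence")
    return focal_px * baseline_m / disparity
```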
Information processing apparatus, program and information processing method
An apparatus and method provide logic for processing information. In one implementation, an apparatus includes a display unit configured to display a first stereoscopic image. The first stereoscopic image includes first content and second content, which may be disposed at corresponding display positions in a depth direction, and at least a portion of the first content appears to overlap at least a portion of the second content. A position-changing unit is configured to modify the display positions of the first and second content in response to the apparent overlap. A control unit is configured to generate a signal to display a second stereoscopic image that includes the first and second content disposed at the modified display positions. The display unit is further configured to display the second stereoscopic image such that the second stereoscopic image reduces the apparent overlap between the first and second content.