Patent classifications
H04N13/246
FREE VIEWPOINT VIDEO GENERATION AND INTERACTION METHOD BASED ON DEEP CONVOLUTIONAL NEURAL NETWORK
A Free Viewpoint Video (FVV) generation and interaction method based on a deep Convolutional Neural Network (CNN) includes the steps of: acquiring multi-viewpoint data of a target scene with a synchronous shooting system built on an appropriately arranged multi-camera array, to obtain groups of synchronous video frame sequences from a plurality of viewpoints, and rectifying the baselines of the sequences at pixel level in batches; extracting, through encoding and decoding network structures, features of each group of viewpoint images input into a designed and trained deep CNN model, to obtain deep feature information of the scene, and combining that information with the input images to generate a virtual viewpoint image between each pair of adjacent physical viewpoints at each moment; and synthesizing all viewpoints into frames of the FVV, based on time and the spatial positions of the viewpoints, by stitching matrices. The method eliminates the need for camera rectification and depth image calculation.
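The final synthesis step above can be sketched in a few lines: physical views and the virtual views synthesized between adjacent camera pairs are merged in spatial order, and each FVV frame at a moment t gathers every viewpoint's frame for that moment. The data layout (dicts keyed by viewpoint index, with virtual views at half-integer positions) is an illustrative assumption, not the patent's representation.

```python
# Hypothetical sketch of FVV frame assembly: physical camera views are
# keyed by integer positions, synthesized virtual views by the half-
# integer positions between adjacent cameras; sorting the keys gives
# the spatial viewpoint order.
def assemble_fvv(physical, virtual):
    """physical: {view_index: [frame_t0, frame_t1, ...]};
    virtual: same shape, keyed i + 0.5 for views between i and i+1."""
    order = sorted(list(physical) + list(virtual))  # spatial order
    n_frames = min(len(seq) for seq in
                   list(physical.values()) + list(virtual.values()))
    # For each moment t, collect one frame per viewpoint in spatial order.
    return [[(physical.get(v) or virtual[v])[t] for v in order]
            for t in range(n_frames)]

phys = {0: ["p0_t0", "p0_t1"], 1: ["p1_t0", "p1_t1"]}
virt = {0.5: ["v_t0", "v_t1"]}
fvv = assemble_fvv(phys, virt)
# fvv[0] == ["p0_t0", "v_t0", "p1_t0"]
```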
CAMERA PARAMETER DERIVATION APPARATUS, CAMERA PARAMETER DERIVATION METHOD, AND CAMERA PARAMETER DERIVATION PROGRAM
A camera parameter derivation apparatus derives camera parameters of a plurality of cameras for which the following conditions have been set: the cameras are arranged at ideal positions on a straight line at equal intervals, and the directions of all arranged cameras are parallel to each other. The apparatus includes: an internal parameter derivation unit that derives internal parameter matrices of the cameras based on estimated internal parameter matrices for all cameras arranged at estimated positions while oriented in estimated directions; a camera position derivation unit that derives ideal positions of the cameras that minimize the maximum of the distances between the estimated positions and the ideal positions; and a rotation matrix derivation unit that derives rotation matrices for correcting external parameters such that errors from the parallel directions are equal to or less than a threshold value.
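The camera position derivation step is a minimax fit: choose equally spaced collinear positions s + i·d that minimize the worst-case distance to the estimated positions. A hedged 1-D sketch (the patent works with 3-D positions; this simplification keeps only the along-line coordinate): for a fixed interval d the optimal offset s centers the residuals r[i] = x[i] - i·d, giving error (max r - min r)/2, and that error is convex in d, so a ternary search over d finds the best interval.

```python
# 1-D minimax fit of equally spaced ideal positions s + i*d to
# estimated positions x[i]. Illustrative simplification of the
# patent's 3-D camera position derivation unit.
def fit_equal_intervals(x, lo=0.0, hi=10.0, iters=200):
    def half_range(d):
        # For fixed d, the best-centered maximum error is half the
        # spread of the residuals x[i] - i*d.
        r = [xi - i * d for i, xi in enumerate(x)]
        return (max(r) - min(r)) / 2.0

    # half_range is convex in d (max of absolute linear functions),
    # so ternary search converges to the optimal interval.
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if half_range(m1) < half_range(m2):
            hi = m2
        else:
            lo = m1
    d = (lo + hi) / 2
    r = [xi - i * d for i, xi in enumerate(x)]
    s = (max(r) + min(r)) / 2.0        # optimal offset for this d
    return s, d, half_range(d)         # start, interval, worst-case error

# Four cameras nominally 1 unit apart, with measurement noise:
s, d, err = fit_equal_intervals([0.0, 1.1, 1.9, 3.0])
# d ≈ 0.95, worst-case error ≈ 0.075
```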
SYSTEM FOR DETERMINING AN EXPECTED FIELD OF VIEW
An image capture system is configured to align a field of view of the image capture component with a field of view of a user of the system. In some cases, the image capture system may adjust the field of view of the image data based at least in part on orientation and position data associated with the capture device.
METHOD AND SYSTEM FOR GENERATING A DEPTH MAP
A system for depth estimation, comprises at least a first and a second depth estimation optical systems, each configured for receiving a light beam from a scene and estimating depths within the scene, wherein the first system is a monocular depth estimation optical system; and an image processor, configured for receiving depth information from the first and second systems, and generating a depth map or a three-dimensional image of the scene based on the received depth information.
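The image processor's fusion step can be sketched as a confidence-weighted combination of the two systems' depth estimates, falling back to whichever system produced a valid value where the other failed. The weighting scheme and the use of `None` for invalid depths are assumptions for illustration; the patent does not specify the fusion rule.

```python
# Hypothetical fusion of a monocular depth map with a second system's
# depth map (e.g. stereo). Each pixel carries a confidence weight;
# where both systems report depth, the fused value is the weighted
# average, otherwise the single valid value is kept.
def fuse_depth(mono, stereo, w_mono, w_stereo):
    """All inputs are equally sized 2-D lists; None marks invalid depth."""
    fused = []
    for r in range(len(mono)):
        row = []
        for c in range(len(mono[r])):
            m, s = mono[r][c], stereo[r][c]
            if m is not None and s is not None:
                wm, ws = w_mono[r][c], w_stereo[r][c]
                row.append((wm * m + ws * s) / (wm + ws))
            else:
                row.append(m if m is not None else s)
        fused.append(row)
    return fused

mono   = [[2.0, 4.0], [None, 3.0]]
stereo = [[2.2, None], [5.0, 3.0]]
w_m    = [[1.0, 1.0], [1.0, 1.0]]
w_s    = [[3.0, 1.0], [1.0, 3.0]]
depth_map = fuse_depth(mono, stereo, w_m, w_s)
# depth_map[0][0] == 2.15 (weighted); [0][1] and [1][0] are fallbacks
```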
CALIBRATING SENSOR ALIGNMENT WITH APPLIED BENDING MOMENT
Examples are disclosed that relate to calibration data related to a determined alignment of sensors on a wearable display device. One example provides a wearable display device comprising a frame, a first sensor and a second sensor, one or more displays, a logic system, and a storage system. The storage system comprises calibration data related to a determined alignment of the sensors with the frame in a bent configuration and instructions executable by the logic system. The instructions are executable to obtain first sensor data and second sensor data respectively from the first and second sensors, determine a distance from the wearable display device to a feature based at least upon the first and second sensor data using the calibration data, obtain a stereo image to display based upon the distance from the wearable display device to the feature, and output the stereo image via the displays.
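Once calibration characterizes how the bent frame shifts the sensors, the distance step reduces to the standard stereo relation Z = f·B/disparity, with the calibration data supplying a disparity correction. The focal length, baseline, and offset values below are illustrative assumptions, not figures from the patent.

```python
# Hedged sketch: distance to a feature from two sensors on a (bent)
# frame, using a calibration-supplied disparity offset. Z = f * B / d.
def feature_distance(x_left, x_right, f_px, baseline_m, disparity_offset=0.0):
    # Apply the calibration correction to the raw pixel disparity.
    disparity = (x_left - x_right) + disparity_offset
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    return f_px * baseline_m / disparity

# Feature at x=640 in the left sensor and x=600 in the right one,
# with a +2 px correction for frame bending (all values illustrative):
z = feature_distance(640, 600, f_px=700.0, baseline_m=0.06, disparity_offset=2.0)
# z = 700 * 0.06 / 42 = 1.0 m
```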
INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
An information processing apparatus includes: a processor; and a memory storing a program which, when executed by the processor, causes the information processing apparatus to obtain an image and correction information on a first optical system and a second optical system, the image including a first area corresponding to a first image inputted via the first optical system and a second area corresponding to a second image inputted via the second optical system having a predetermined parallax with respect to the first optical system; execute correcting processing of correcting, based on the correction information, positions of a pixel included in the first area and a pixel included in the second area in the image, and generate a processed image by executing processing of transforming the corrected first area and the corrected second area.
Methods and systems for traffic monitoring
A system and method for determining a dimension of a target. The method includes: determining a camera parameter, the camera parameter including at least one of a focal length, a yaw angle, a roll angle, a pitch angle, or a height of one or more cameras; acquiring a first image and a second image of a target captured by the one or more cameras; generating a first corrected image and a second corrected image by correcting the first image and the second image; determining a parallax between a pixel in the first corrected image and a corresponding pixel in the second corrected image; determining an outline of the target; and determining a dimension of the target based at least in part on the camera parameter, the parallax, and the outline of the target.
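The final step above combines the three quantities: depth follows from the parallax via Z = f·B/parallax, and a pixel extent of the outline then maps to a metric size via w = w_px·Z/f. The sketch below assumes a simple horizontal-extent measurement and illustrative camera values; the patent's camera parameters (angles, height) would enter a fuller geometric model.

```python
# Hedged sketch: metric target dimension from parallax and outline.
def target_dimension(outline_px, parallax_px, f_px, baseline_m):
    xs = [p[0] for p in outline_px]
    width_px = max(xs) - min(xs)              # horizontal extent of outline
    depth_m = f_px * baseline_m / parallax_px  # Z = f * B / parallax
    return width_px * depth_m / f_px           # back-project extent to metres

# Rectangular outline of a vehicle in the corrected image (illustrative):
outline = [(100, 50), (300, 50), (300, 180), (100, 180)]
w = target_dimension(outline, parallax_px=40.0, f_px=800.0, baseline_m=0.5)
# depth = 800 * 0.5 / 40 = 10 m; width = 200 px * 10 / 800 = 2.5 m
```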