H04N13/10

Augmented three dimensional point collection of vertical structures

Automated methods and systems are disclosed, including a method comprising: capturing images and three-dimensional LIDAR data of a geographic area with an image capturing device and a LIDAR system, as well as location and orientation data for each of the images corresponding to the location and orientation of the image capturing device capturing the images, the images depicting an object of interest and the three-dimensional LIDAR data including the object of interest; storing the three-dimensional LIDAR data on a non-transitory computer readable medium; analyzing the images with a computer system to determine three-dimensional locations of points on the object of interest; and updating the three-dimensional LIDAR data with the three-dimensional locations of points on the object of interest determined by analyzing the images to create a 3D point cloud.
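
The abstract does not specify how image points are converted to 3D locations; a standard way to do this, given projection matrices built from the recorded location and orientation data, is linear (DLT) triangulation across two views. The sketch below is illustrative only and not the patented method; the function names and the simple vstack-based cloud update are assumptions.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (built from the recorded
             location and orientation of the image capturing device).
    x1, x2 : (u, v) normalized pixel observations of the same point
             on the object of interest in each image.
    Returns the estimated 3D point in world coordinates.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous point X; stack them and take the null vector.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]        # dehomogenize

def augment_point_cloud(lidar_points, image_derived_points):
    """Update the stored LIDAR cloud (Nx3) with image-derived points (Mx3)."""
    return np.vstack([lidar_points, image_derived_points])
```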

MAPPING OF SPHERICAL IMAGE DATA INTO RECTANGULAR FACES FOR TRANSPORT AND DECODING ACROSS NETWORKS
20190182473 · 2019-06-13 ·

A system captures a first hemispherical image and a second hemispherical image, each hemispherical image including an overlap portion, the overlap portions capturing the same field of view, the two hemispherical images collectively comprising a spherical FOV and separated along a longitudinal plane. The system maps a modified first hemispherical image to a first portion of the 2D projection of a cubic image, the modified first hemispherical image including a non-overlap portion of the first hemispherical image, and maps a modified second hemispherical image to a second portion of the 2D projection of the cubic image, the modified second hemispherical image also including a non-overlap portion. The system maps the overlap portions of the first hemispherical image and the second hemispherical image to the 2D projection of the cubic image, and encodes the 2D projection of the cubic image to generate an encoded image representative of the spherical FOV.
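
The core packing idea, separating each hemisphere into non-overlap and overlap portions and laying all four regions out in one 2D frame for the encoder, can be sketched with plain arrays. The horizontal layout below is a simplified stand-in for the cubic 2D projection described above, and the convention that the overlap occupies the last columns of each hemisphere is an assumption.

```python
import numpy as np

def pack_for_encoding(hemi1, hemi2, overlap_cols):
    """Pack two hemispherical images into one 2D frame for encoding.

    hemi1, hemi2 : HxW image arrays; the last `overlap_cols` columns of
    each are the overlap portion (the same field of view seen by both
    lenses). Layout: [hemi1 non-overlap | hemi2 non-overlap | overlaps].
    """
    non1, ov1 = hemi1[:, :-overlap_cols], hemi1[:, -overlap_cols:]
    non2, ov2 = hemi2[:, :-overlap_cols], hemi2[:, -overlap_cols:]
    # Non-overlap portions first, then both overlap bands, so a decoder
    # can blend the overlaps when reconstructing the spherical FOV.
    return np.hstack([non1, non2, ov1, ov2])
```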

Depth measurement using a phase grating
10317205 · 2019-06-11 ·

Binocular depth-perception systems use binary, phase-antisymmetric gratings to cast point-source responses onto an array of photosensitive pixels. The gratings and arrays can be manufactured to tight tolerances using well characterized and readily available integrated-circuit fabrication techniques, and can thus be made small, cost-effective, and efficient. The gratings produce point-source responses that are large relative to the pitch of the pixels, and that exhibit wide ranges of spatial frequencies and orientations. Such point-source responses are easy to distinguish from fixed-pattern noise that results from spatial frequencies of structures that form the array.

Depth map generation apparatus, method and non-transitory computer-readable medium therefor
10313657 · 2019-06-04 · ·

The present disclosure provides a depth map generation apparatus, including a camera assembly with at least three cameras, an operation mode determination module and a depth map generation module. The camera assembly with at least three cameras may include a first camera, a second camera and a third camera that are sequentially aligned on a same axis. The operation mode determination module may be configured to determine an operation mode of the camera assembly. The operation mode includes at least: a first mode using images of non-adjacent cameras, and a second mode using images of adjacent cameras. Further, the depth map generation module may be configured to generate depth maps according to the determined operation mode.
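
The abstract does not say how the operation mode is chosen; one plausible criterion follows from the standard stereo relation Z = f·B/d, since the non-adjacent pair has the wider baseline B and so preserves disparity at long range. The distance threshold and pair names in this sketch are assumptions for illustration, not taken from the patent.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Standard pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def select_mode(target_distance_m, threshold_m=5.0):
    """Pick which camera pair of a three-camera, same-axis assembly to use.

    Non-adjacent cameras (first + third) give the wider baseline, which
    preserves disparity, and hence depth accuracy, at long range; adjacent
    cameras keep nearby objects inside both fields of view. The 5 m
    threshold is an illustrative assumption.
    """
    if target_distance_m >= threshold_m:
        return ("first_camera", "third_camera")   # first mode: non-adjacent
    return ("first_camera", "second_camera")      # second mode: adjacent
```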

SETTING APPARATUS, SETTING METHOD, AND STORAGE MEDIUM
20190163356 · 2019-05-30 ·

A setting method for setting a parameter of a virtual viewpoint relating to a virtual viewpoint video to be generated based on images captured by a plurality of cameras includes receiving a first user operation for a first parameter relating to a position of a virtual viewpoint, determining a settable range of a second parameter of the virtual viewpoint by using the first parameter based on the received first user operation, and setting, as parameters of the virtual viewpoint, the first parameter based on the received first user operation and the second parameter that is based on a second user operation different from the first user operation and falls within the determined settable range.
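
The mechanism above, where the first parameter constrains the range the second parameter may take, amounts to computing a range from the first value and clamping the second into it. The concrete parameters (camera height, pan angle) and the range rule in this sketch are hypothetical stand-ins; the patent does not specify them.

```python
def settable_range(height_m):
    """Hypothetical rule: a lower virtual camera permits a narrower pan
    range (in degrees), so the viewpoint stays plausible."""
    half = min(90.0, 30.0 + 10.0 * height_m)
    return (-half, half)

def set_viewpoint(height_m, requested_pan_deg):
    """Set the first parameter (height) from the first user operation,
    then clamp the second parameter (pan, from a different second user
    operation) into the settable range the first parameter permits."""
    lo, hi = settable_range(height_m)
    return {"height_m": height_m,
            "pan_deg": max(lo, min(hi, requested_pan_deg))}
```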

Capturing and aligning three-dimensional scenes

Systems and methods for building a three-dimensional composite scene are disclosed. Certain embodiments of the systems and methods may include the use of a three-dimensional capture device that captures a plurality of three-dimensional images of an environment. Some embodiments may further include elements concerning aligning and/or mapping the captured images. Various embodiments may further include elements concerning reconstructing the environment from which the images were captured. The methods disclosed herein may be performed by a program embodied on a non-transitory computer-readable storage medium when the program is executed by a processor.
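
A common building block for aligning overlapping 3D captures is estimating the rigid transform (rotation R, translation t) between corresponding point sets, e.g. via the Kabsch algorithm. The patent does not prescribe this particular method; the sketch below is a standard textbook technique offered for orientation.

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch algorithm: best-fit rotation R and translation t such that
    R @ src[i] + t ~ dst[i] for corresponding Nx3 point sets."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Cross-covariance of the centered correspondences.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```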
