H04N13/122

MATCHING SEGMENTS OF VIDEO FOR VIRTUAL DISPLAY OF A SPACE
20220394228 · 2022-12-08 ·

Systems, methods, and non-transitory computer-readable media storing instructions that, when executed, cause a processor to perform operations to display a three-dimensional (3D) space. The method may include, with an imaging device, capturing a first series of frames as the imaging device travels from a first location to a second location within a space, and capturing a second series of frames as the imaging device travels from the second location to the first location. The method may also include determining a first segment in the first series of frames that matches a second segment in the second series of frames to create a segmentation dataset, generating video clip data based on the segmentation dataset, the video clip data defining a series of video clips, and displaying the series of video clips.
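The forward/reverse matching step can be sketched briefly. This is a minimal illustration, not the claimed method: a coarse grayscale-histogram similarity stands in for whatever frame-matching criterion the patent actually uses, and the returned index pairs stand in for the segmentation dataset.

```python
import numpy as np

def frame_signature(frame, bins=16):
    """Coarse grayscale histogram used as a cheap per-frame descriptor."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def match_segments(series_a, series_b, threshold=0.9):
    """Pair each frame of the first pass with its best match in the
    reversed second pass; runs of good matches form matched segments."""
    sig_a = [frame_signature(f) for f in series_a]
    sig_b = [frame_signature(f) for f in reversed(series_b)]
    matches = []
    for i, sa in enumerate(sig_a):
        sims = [1.0 - 0.5 * np.abs(sa - sb).sum() for sb in sig_b]
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            matches.append((i, len(series_b) - 1 - j))
    return matches  # stand-in segmentation dataset: (forward, reverse) pairs
```

Because the second pass retraces the first in reverse, reversing it before comparison aligns the two series in travel order.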

Process and apparatus for the capture of plenoptic images between arbitrary planes

A process and an apparatus for the plenoptic capture of photographic or cinematographic images of an object or a 3D scene (10) of interest are based on a correlated light emitting source and correlation measurement, along the line of “Correlation Plenoptic Imaging” (CPI). A first image sensor (Da) and a second image sensor (Db) detect images along a path of a first light beam (a) and a second light beam (b), respectively. A processing unit (100) of the intensities detected by the synchronized image sensors (Da, Db) is configured to retrieve the propagation direction of light by measuring spatio-temporal correlations between light intensities detected in the image planes of at least two arbitrary planes (P′, P″; D′b, D″a) chosen in the vicinity of the object or within the 3D scene (10).
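The core correlation measurement can be illustrated with a short sketch. This assumes the simplest form of the computation, the covariance of intensity fluctuations between the two synchronized sensors averaged over many shots; the actual CPI reconstruction involves much more.

```python
import numpy as np

def correlation_map(frames_a, frames_b):
    """Spatio-temporal correlation of intensity fluctuations between two
    synchronized sensors: G(xa, xb) = <dIa(xa) * dIb(xb)> over the shots."""
    a = np.asarray(frames_a, dtype=float)  # shape (shots, Na) pixels
    b = np.asarray(frames_b, dtype=float)  # shape (shots, Nb) pixels
    da = a - a.mean(axis=0)                # intensity fluctuations
    db = b - b.mean(axis=0)
    return da.T @ db / a.shape[0]          # shape (Na, Nb)
```

Each entry of the result correlates one pixel of sensor Da with one pixel of sensor Db, which is the raw quantity from which the propagation direction of light is retrieved.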

SYSTEMS AND METHODS OF MULTIVIEW STYLE TRANSFER

A system and method of multiview style transfer apply a style transfer to individual views of a multiview image in a way that produces consistent results across all images. In some embodiments, the multiview style transfer includes receiving first and second images representative of first and second perspectives of a scene and first and second disparity maps corresponding to the first and second images, generating a first stylized image, generating a stylized shifted image based on the first stylized image and the first disparity map, generating a second stylized image based on a guided filter of the stylized shifted image and the second image, and generating first and second stylized images based on the stylized shifted images and the disparity maps.
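The disparity-based shifting step, which carries the style from one view into the other so the two stylized views stay consistent, can be sketched as follows. This is a simplified illustration assuming per-row integer horizontal disparities and a zero fill for occlusion holes; the guided-filter stage is omitted.

```python
import numpy as np

def shift_by_disparity(image, disparity):
    """Warp a (H, W) image toward the neighboring view by shifting each
    pixel horizontally by its disparity; disoccluded pixels stay zero."""
    h, w = image.shape
    out = np.zeros_like(image)
    cols = np.arange(w)
    for y in range(h):
        target = cols + disparity[y].astype(int)
        valid = (target >= 0) & (target < w)
        out[y, target[valid]] = image[y, cols[valid]]
    return out
```

In the described pipeline, the shifted stylized image would then guide the stylization of the second view so that corresponding scene points receive the same style in both perspectives.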

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
20220385883 · 2022-12-01 ·

There is provided an image processing apparatus comprising an obtainment unit and a generation unit. The obtainment unit obtains a first circular fisheye image accompanied by a first missing region in which no pixel value is present. The generation unit generates a first equidistant cylindrical projection image by performing first equidistant cylindrical transformation processing based on the first circular fisheye image. The generation unit generates the first equidistant cylindrical projection image such that a first corresponding region corresponding to the first missing region has a pixel value in the first equidistant cylindrical projection image.
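The equidistant cylindrical transformation can be sketched with an inverse lookup. This is a hedged illustration only: it assumes an equidistant fisheye model (radius proportional to the off-axis angle), a square sensor with the optical axis along +z, and leaves out-of-field pixels at zero rather than filling the missing region as the apparatus does.

```python
import numpy as np

def fisheye_to_equirect(fisheye, out_w, out_h, fov=np.pi):
    """Map an equidistant circular fisheye image (square, axis along +z)
    to an equidistant cylindrical (equirectangular) image."""
    size = fisheye.shape[0]
    out = np.zeros((out_h, out_w), dtype=fisheye.dtype)
    for v in range(out_h):
        lat = (0.5 - (v + 0.5) / out_h) * np.pi            # +pi/2 .. -pi/2
        for u in range(out_w):
            lon = ((u + 0.5) / out_w - 0.5) * 2 * np.pi    # -pi .. +pi
            # unit direction for this longitude/latitude
            x = np.cos(lat) * np.sin(lon)
            y = np.sin(lat)
            z = np.cos(lat) * np.cos(lon)
            theta = np.arccos(np.clip(z, -1, 1))           # angle off axis
            if theta > fov / 2:
                continue                                    # outside the fisheye
            r = theta / (fov / 2) * (size / 2)             # equidistant model
            phi = np.arctan2(y, x)
            px = int(size / 2 + r * np.cos(phi))
            py = int(size / 2 + r * np.sin(phi))
            if 0 <= px < size and 0 <= py < size:
                out[v, u] = fisheye[py, px]
    return out
```

A production version would vectorize the lookup and interpolate; the point here is only the direction-to-fisheye-radius mapping that the transformation performs.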

Systems and methods for temporal corrections for parallax reprojection

Systems are configured for generating temporally corrected pass-through images. In some instances, the systems obtain depth maps of an environment at a first timepoint, generate a 3D representation of the environment by unprojecting the depth information represented in the depth map, and obtain one or more first images of the environment captured at a second timepoint. The systems may also be configured to perform a first intermediate projection to identify first texture information from the one or more first images, identify a display pose associated with the system, generate a display projection of the 3D representation, and create a composite image based on the display projection and the first texture information.
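The unproject-then-reproject core of a parallax reprojection can be sketched as below. This is a minimal illustration assuming a pinhole intrinsic matrix K shared by capture and display, and a rigid capture-to-display transform; visibility handling, texture sampling, and compositing are omitted.

```python
import numpy as np

def reproject(depth, K, capture_to_display):
    """Unproject a depth map into 3D points, then project those points
    into the display pose, returning per-pixel target coordinates."""
    h, w = depth.shape
    vs, us = np.mgrid[0:h, 0:w]
    # pixel -> camera-space point at the measured depth
    x = (us - K[0, 2]) / K[0, 0] * depth
    y = (vs - K[1, 2]) / K[1, 1] * depth
    pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1)
    pts = pts.reshape(-1, 4) @ capture_to_display.T     # rigid motion
    u2 = pts[:, 0] / pts[:, 2] * K[0, 0] + K[0, 2]
    v2 = pts[:, 1] / pts[:, 2] * K[1, 1] + K[1, 2]
    return u2.reshape(h, w), v2.reshape(h, w)
```

With the identity transform the mapping reduces to the original pixel grid; a nonzero head motion between the capture and display timepoints shifts each pixel by a depth-dependent parallax.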

MULTI-VIEW IMAGE FUSION BY IMAGE SPACE EQUALIZATION AND STEREO-BASED RECTIFICATION FROM TWO DIFFERENT CAMERAS

Methods to solve the problem of performing fusion of images acquired with two cameras having different sensor types, for example a visible (VIS) digital camera and a short wave infrared (SWIR) camera, include performing image space equalization on images acquired with the different sensor types before performing rectification and registration of such images in a fusion process.
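One simple form of image space equalization is histogram matching, sketched below. This is an assumption for illustration, not the patented procedure: the SWIR image is remapped to the VIS image's gray-level distribution, and a plain average stands in for the rectification, registration, and fusion stages.

```python
import numpy as np

def match_histograms(src, ref):
    """Remap src gray levels so its histogram approximates ref's: a
    stand-in for image space equalization across sensor types."""
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # CDF-to-CDF lookup
    return mapped[np.searchsorted(s_vals, src)]

def fuse(vis, swir):
    """Equalize the SWIR image to the VIS image, then average them
    (a placeholder for the rectification/registration/fusion chain)."""
    return 0.5 * (vis + match_histograms(swir, vis))
```

Equalizing the two modalities first makes the later stereo-based rectification and registration operate on comparable intensity statistics rather than on raw, sensor-specific responses.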