Patent classifications
H04N13/111
Iterative synthesis of views from data of a multi-view video
Synthesis of an image of a view from data of a multi-view video. The synthesis includes an image processing phase as follows: generating image synthesis data from texture data of at least one image of a view of the multi-view video; calculating an image of a synthesised view from the generated synthesis data and at least one image of a view of the multi-view video; analysing the image of the synthesised view relative to a synthesis performance criterion; if the criterion is met, delivering the image of the synthesised view; and if not, iterating the processing phase. The calculation of an image of a synthesised view at a current iteration includes modifying, based on synthesis data generated in the current iteration, an image of the synthesised view calculated during a processing phase preceding the current iteration.
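The loop below is a minimal sketch of the iterative processing phase described in this abstract, using NumPy arrays as stand-in images. The helpers generate_synthesis_data and quality_score, the blend weight, and the threshold are hypothetical illustrations, not details from the patent.

```python
import numpy as np

def generate_synthesis_data(texture: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: derive per-pixel synthesis data
    # (e.g. a disparity-like map) from a view's texture.
    return np.gradient(texture.mean(axis=-1))[0]

def quality_score(image: np.ndarray) -> float:
    # Hypothetical performance criterion (a simple variance proxy).
    return float(np.var(image))

def synthesize_view(source: np.ndarray, max_iters: int = 10,
                    threshold: float = 0.9) -> np.ndarray:
    synthesized = source.copy()
    for _ in range(max_iters):
        data = generate_synthesis_data(synthesized)        # generate
        # Modify the image synthesized in the preceding phase using
        # the data generated in the current iteration (the 0.1 blend
        # weight is an arbitrary illustration).
        synthesized = synthesized + 0.1 * data[..., None]  # calculate
        if quality_score(synthesized) >= threshold:        # analyse
            break                                          # deliver
    return synthesized
```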
System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
A method for generating a three-dimensional (3D) model of an object includes: capturing images of the object from a plurality of viewpoints, the images including color images; generating a 3D model of the object from the images, the 3D model including a plurality of planar patches; for each patch of the planar patches: mapping image regions of the images to the patch, each image region including at least one color vector; and computing, for each patch, at least one minimal color vector among the color vectors of the image regions mapped to the patch; generating a diffuse component of a bidirectional reflectance distribution function (BRDF) for each patch of planar patches of the 3D model in accordance with the at least one minimal color vector computed for each patch; and outputting the 3D model with the BRDF for each patch.
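A minimal sketch of the minimal-color-vector step: for each patch, the diffuse BRDF component is estimated from the smallest color vector observed across the image regions mapped to that patch. Ranking observations by luminance is an assumption; the abstract only specifies that a minimal color vector is computed.

```python
import numpy as np

def diffuse_brdf_per_patch(patch_regions):
    # patch_regions: patch_id -> list of RGB color vectors observed in
    # the image regions mapped to that patch.
    diffuse = {}
    for patch_id, colors in patch_regions.items():
        colors = np.asarray(colors, dtype=float)
        # The observation least affected by specular highlights is taken
        # as the minimal color vector; ranking by luminance is my choice.
        luminance = colors @ np.array([0.2126, 0.7152, 0.0722])
        diffuse[patch_id] = colors[np.argmin(luminance)]
    return diffuse

# Two patches, with two and one observed regions respectively.
print(diffuse_brdf_per_patch({0: [[0.9, 0.4, 0.4], [0.5, 0.1, 0.1]],
                              1: [[0.3, 0.8, 0.3]]}))
```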
Methods and apparatus for encoding, communicating and/or using images
Methods and apparatus for capturing, communicating and using image data to support virtual reality experiences are described. Images, e.g., frames, are captured at a high resolution but at a lower frame rate than is used for playback. Interpolation is applied to captured frames to generate interpolated frames. Captured frames, along with interpolated frame information, are communicated to the playback device. The combination of captured and interpolated frames corresponds to a second frame playback rate which is higher than the image capture rate. Cameras operate at a high image resolution but at a slower frame rate than the same cameras could achieve at a lower resolution. Interpolation is performed prior to delivery to the user device, with the segments to be interpolated selected based on motion and/or lens FOV information. A relatively small amount of interpolated frame data is communicated compared to captured frame data, for efficient bandwidth use.
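A sketch of the capture-rate-to-playback-rate interpolation under stated assumptions: frames are linearly blended as a placeholder for true motion-compensated interpolation, and a mean-absolute-difference score stands in for the motion analysis used to select segments.

```python
import numpy as np

def interpolate_frames(captured, factor=2, motion_threshold=5.0):
    # captured: list of (H, W, 3) frames at the capture rate. Returns a
    # sequence at factor x that rate; blended frames are inserted only
    # where inter-frame motion is noticeable.
    out = []
    for a, b in zip(captured, captured[1:]):
        out.append(a)
        motion = np.abs(b.astype(float) - a.astype(float)).mean()
        for k in range(1, factor):
            t = k / factor
            # Static segments repeat the captured frame, so little
            # interpolated data needs to be communicated.
            if motion > motion_threshold:
                out.append(((1 - t) * a + t * b).astype(a.dtype))
            else:
                out.append(a)
    out.append(captured[-1])
    return out
```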
Multiscopic image capture system
Systems, devices, and methods disclosed herein may generate captured views and a plurality of intermediate views within a pixel disparity range, T_d, the plurality of intermediate views being extrapolated from the captured views.
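A toy illustration, assuming purely horizontal disparity: intermediate views are produced by shifting a captured view within the disparity budget T_d. Real extrapolation would use per-pixel disparity; the uniform np.roll shift is only a stand-in.

```python
import numpy as np

def intermediate_views(left, right, n_views=4, t_d=16):
    # left, right: captured (H, W, 3) views. Produces n_views views
    # whose horizontal shift stays within the disparity budget t_d
    # (the T_d of the abstract).
    views = [left]
    for i in range(1, n_views + 1):
        shift = int(round(i * t_d / (n_views + 1)))
        views.append(np.roll(left, -shift, axis=1))  # uniform shift
    views.append(right)
    return views
```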
Method for encoding immersive image and method for decoding immersive image
Disclosed herein is a method for encoding an immersive image. The method includes detecting a non-diffuse surface in a first texture image of a first view, generating an additional texture image from the first texture image based on the detected non-diffuse surface, performing pruning on the additional texture image based on a second texture image of a second view, generating a texture atlas based on the pruned additional texture image, and encoding the texture atlas.
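The sketch below walks the encoding steps in order (detect non-diffuse pixels, build the additional texture image, prune against the second view, pack an atlas). The specular-brightness detector, the pruning threshold, and the flattened "atlas" are all hypothetical simplifications.

```python
import numpy as np

def encode_immersive(tex1, tex2, specular_thresh=0.9, prune_thresh=0.05):
    # tex1, tex2: float RGB textures in [0, 1] of the first and second view.
    # 1. Detect non-diffuse pixels in the first view (bright pixels as a
    #    crude specularity proxy).
    non_diffuse = tex1.max(axis=-1) > specular_thresh
    # 2. Additional texture image carrying only the non-diffuse content.
    additional = np.where(non_diffuse[..., None], tex1, 0.0)
    # 3. Prune pixels that are redundant with the second view.
    redundant = np.abs(additional - tex2).mean(axis=-1) < prune_thresh
    # 4. Pack surviving pixels into a (flattened) atlas; a real encoder
    #    would compress this with a standard codec.
    atlas = additional[non_diffuse & ~redundant]
    return atlas
```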
Method for image processing of image data for image and visual effects on a two-dimensional display wall
A scene captured of a live action scene, while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall in the live action scene. Pixel display values to add or modify an image effect or a visual effect are determined, and the image data is adjusted using the pixel display values and the image matte.
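A minimal compositing sketch of the matte-driven adjustment: the matte splits the frame into actor and display wall regions, and an example effect (a brightness gain, my assumption) is applied only to the wall region.

```python
import numpy as np

def apply_wall_effect(frame, matte, effect_gain=1.2):
    # frame: float RGB live-action frame containing the display wall.
    # matte: float mask in [0, 1]; 1 = live actor, 0 = precursor image
    # shown on the wall.
    effected = np.clip(frame * effect_gain, 0.0, 1.0)  # example effect
    actor = matte[..., None]
    # Keep actor pixels unchanged; adjust only the wall region.
    return frame * actor + effected * (1.0 - actor)
```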