Patent classifications
H04N13/133
AUTOMATIC SELECTION OF VIEWPOINT CHARACTERISTICS AND TRAJECTORIES IN VOLUMETRIC VIDEO PRESENTATIONS
A method for automatic selection of viewpoint characteristics and trajectories in volumetric video presentations includes: receiving a plurality of video streams depicting a scene, wherein the plurality of video streams provides images of the scene from a plurality of different viewpoints; identifying a set of desired viewpoint characteristics for a volumetric video traversal of the scene; determining a trajectory through the plurality of video streams that is consistent with the set of desired viewpoint characteristics; rendering a volumetric video traversal that follows the trajectory, wherein the rendering comprises compositing the plurality of video streams; and publishing the volumetric video traversal for viewing on a user endpoint device.
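The abstract above can be read as a per-frame viewpoint optimization. A minimal sketch of that reading follows; the characteristic names (`height`, `distance`), the scoring function, and the data layout are illustrative assumptions, not the patent's actual implementation:

```python
def viewpoint_score(viewpoint, desired):
    """Score how well one candidate viewpoint matches the desired
    characteristics (lower is better). Simple L1 distance, an assumption."""
    return sum(abs(viewpoint[k] - desired[k]) for k in desired)

def select_trajectory(frames, desired):
    """For each frame, pick the candidate viewpoint whose characteristics
    best match the desired set. `frames` is a list (per time step) of lists
    of candidate viewpoint dicts drawn from the available video streams."""
    return [min(candidates, key=lambda v: viewpoint_score(v, desired))
            for candidates in frames]

# Two frames, each offering a low close-up view and a high wide view.
frames = [
    [{"height": 2.0, "distance": 5.0}, {"height": 10.0, "distance": 20.0}],
    [{"height": 2.5, "distance": 6.0}, {"height": 9.0, "distance": 18.0}],
]
desired = {"height": 2.0, "distance": 5.0}   # prefer a low close-up traversal
trajectory = select_trajectory(frames, desired)
```

In practice the trajectory determination would also penalize abrupt jumps between consecutive viewpoints; the per-frame minimum shown here is only the simplest consistent choice.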
Method for image processing of image data for a two-dimensional display wall with three-dimensional objects
A scene captured of a live action scene, while a display wall displaying a precursor image is positioned to be part of that scene, may be processed. To perform the processing, stereoscopic image data of the live action scene is received, and display wall metadata of the precursor image is determined. A first portion of the stereoscopic image data, comprising a stage element in the live action scene, is then determined based on the stereoscopic image data and the display wall metadata. A second portion of the stereoscopic image data, comprising the display wall displaying the precursor image, is also determined. Thereafter, an image matte for the stereoscopic image data is generated based on the first portion and the second portion.
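One plausible reading of the matte step, assuming the display wall metadata supplies the wall's known depth and the stereoscopic image data yields a per-pixel depth map (both assumptions for illustration): pixels measurably nearer than the wall are treated as stage elements and kept in the matte, while pixels at wall depth are masked out:

```python
def generate_matte(depth_map, wall_depth, tolerance=0.05):
    """Return a per-pixel matte: 1.0 for stage elements (closer to the
    camera than the display wall), 0.0 for pixels on the wall surface.
    The depth threshold and tolerance are illustrative assumptions."""
    return [[1.0 if d < wall_depth - tolerance else 0.0 for d in row]
            for row in depth_map]

# A 2x3 depth map: an actor at 1.5-2.0 m in front of a wall at 5.0 m.
depth = [[2.0, 4.98, 5.0],
         [1.5, 5.01, 5.0]]
matte = generate_matte(depth, wall_depth=5.0)
```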
Stereoscopic imaging device and method for image processing
A stereoscopic imaging device includes at least a first and a second image recording unit configured to record a first and a second original image of an object from different perspectives, wherein the original images differ at least with regard to one item of image information, an image display unit for imaging displayed images, an image processing unit for further processing the original images, and the image processing unit is configured to supplement at least one of the two original images with at least one item of image information from the other original image to generate a displayed image. In addition, a method for generating at least one displayed image that can be imaged on an image display unit is provided.
A METHOD, APPARATUS AND SYSTEM FOR REDUCING CROSSTALK OF AUTO STEREOSCOPIC DISPLAYS
The disclosure describes a method, apparatus, and system for reducing crosstalk of auto-stereoscopic displays that use higher-resolution panels. In such panels, only a fraction of the total number of views is generated, by sending the same signal on a number of adjacent views. A signal-processing correcting function is applied to the fractioned views to reduce crosstalk.
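A minimal sketch of one such correcting function, assuming a simple linear leakage model in which each view optically receives a small fraction of its neighbours' light; the `leak` coefficient, the clamping at zero, and the function name are illustrative assumptions rather than the patent's actual correction:

```python
def precorrect(views, leak=0.1):
    """Pre-compensate each view's signal by subtracting a fraction of its
    neighbours, so that after optical leakage adds that fraction back, the
    perceived views approximate the intended ones."""
    out = []
    for i, v in enumerate(views):
        left = views[i - 1] if i > 0 else 0.0
        right = views[i + 1] if i < len(views) - 1 else 0.0
        out.append(max(0.0, v - leak * (left + right)))
    return out

# Three adjacent views carrying intensities for one pixel position.
views = [1.0, 0.5, 1.0]
corrected = precorrect(views)
```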
IMAGING DEVICE, IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD
A third imaging unit, including pixels without a polarization characteristic, is interposed between a first imaging unit and a second imaging unit, each including a pixel having a polarization characteristic for each of a plurality of polarization directions. A depth map is generated from the viewpoint of the first imaging unit by matching processing that uses a first image generated by the first imaging unit and a second image generated by the second imaging unit. A normal map is generated on the basis of the polarization state of the first image. Integration processing of the depth map and the normal map is performed, and a depth map with high accuracy is generated. The integrated depth map is converted into a map from the viewpoint of the third imaging unit, so that an image free from deterioration can be generated.
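The core idea of the integration step, in a deliberately simplified 1-D form: the normal map fixes the local surface slope, which can be propagated along a scanline and blended with the noisier matched depth. The blend weight and all names below are illustrative assumptions, not the device's actual fusion algorithm:

```python
def integrate(depth, slopes, w=0.8):
    """Blend measured depth with depth propagated from per-pixel surface
    slopes (derived from the normal map). `w` weights the slope prediction
    against the raw matched depth; its value is an assumption."""
    refined = [depth[0]]
    for i in range(1, len(depth)):
        propagated = refined[i - 1] + slopes[i - 1]   # slope-predicted depth
        refined.append(w * propagated + (1 - w) * depth[i])
    return refined

# Noisy matched depth along a scanline; normals imply a steady +0.1 slope.
depth = [1.0, 1.2, 1.1, 1.3]
slopes = [0.1, 0.1, 0.1]
refined = integrate(depth, slopes)
```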
STEREO IMAGE GENERATING METHOD AND ELECTRONIC APPARATUS UTILIZING THE METHOD
A stereo image generating method and an electronic apparatus utilizing the method are provided. The electronic apparatus includes a first camera and a second camera capable of capturing stereo images, and a resolution of the first camera is larger than that of the second camera. In the method, a first image is captured by the first camera, and a second image is captured by the second camera. The second image is upscaled to the resolution of the first camera, and a depth map is generated with use of the first image and the upscaled second image. With reference to the depth map, the first image is re-projected to reconstruct a reference image of the second image. An occlusion region in the reference image is detected and compensated by using the upscaled second image. A stereo image including the first image and the compensated reference image is generated.
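The re-projection and occlusion-compensation steps can be sketched for a single scanline, assuming a rectified pair in which the depth map has already been converted to per-pixel horizontal disparities (nearer pixels shift farther). Function and variable names are illustrative:

```python
def reproject(first_image, disparity, upscaled_second):
    """Shift each pixel of the high-resolution first image by its disparity
    to reconstruct the second view; pixels with no source (occlusions) are
    filled from the upscaled low-resolution second image."""
    width = len(first_image)
    ref = [None] * width
    for x in range(width):
        tx = x - disparity[x]               # target column in the second view
        if 0 <= tx < width:
            ref[tx] = first_image[x]        # later (nearer) writes win
    # Occlusion compensation: holes take the upscaled second image's pixel.
    return [ref[x] if ref[x] is not None else upscaled_second[x]
            for x in range(width)]

row = [10, 20, 30, 40]       # one scanline of the first (high-res) image
disp = [0, 0, 2, 2]          # far pixels shift 0 columns, near pixels shift 2
second = [11, 21, 31, 41]    # same scanline from the upscaled second image
out = reproject(row, disp, second)
```

Columns 2 and 3 have no re-projected source in this toy example, so they are filled from the upscaled second image, mirroring the occlusion detection and compensation the abstract describes.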
Critical alignment of parallax images for autostereoscopic display
A method is provided for generating an autostereoscopic display. The method includes acquiring a first parallax image and at least one other parallax image. At least a portion of the first parallax image may be aligned with a corresponding portion of the at least one other parallax image. Alternating views of the first parallax image and the at least one other parallax image may be displayed.
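The alignment step can be illustrated in one dimension: find the horizontal shift that best registers a chosen portion of one parallax image against the corresponding portion of the other, here by minimizing the mean absolute difference. The search range and function names are assumptions for illustration:

```python
def best_shift(a, b, max_shift=3):
    """Return the shift of `b` (in samples) that best aligns it with `a`,
    scored by mean absolute difference over the overlapping region."""
    def mad(shift):
        pairs = [(a[i], b[i - shift]) for i in range(len(a))
                 if 0 <= i - shift < len(b)]
        return sum(abs(x - y) for x, y in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=mad)

# The same intensity feature appears one sample earlier in `b`.
a = [0, 0, 5, 9, 5, 0, 0]
b = [0, 5, 9, 5, 0, 0, 0]
shift = best_shift(a, b)
```

Once the shift is known, the second parallax image can be translated by it before the two views are displayed in alternation.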