H04N13/275

DISPLAYING THREE-DIMENSIONAL OBJECTS

Methods, apparatus, devices, and systems for displaying three-dimensional objects by individually diffracting different colors of light are provided. In one aspect, an optical device includes: a first optically diffractive component including a first diffractive structure configured to diffract a first color of light having a first incident angle at a first diffracted angle, a second optically diffractive component including a second diffractive structure configured to diffract a second color of light having a second incident angle at a second diffracted angle, a first reflective layer configured to totally reflect the first color of light having the first incident angle and transmit the second color of light, and a second reflective layer configured to totally reflect the second color of light having the second incident angle. The first reflective layer is between the first and second diffractive structures, and the second diffractive structure is between the first and second reflective layers.
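
The ordering of the layers is easier to see written out as a stack. The Python sketch below only restates the constraints given in the abstract (which layer sits between which); the assumption that light enters at the first diffractive structure is mine, and nothing else about the device is implied.

```python
# Illustrative ordering only, taken from the constraints in the abstract, with the
# light assumed (my assumption) to enter at the first diffractive structure.
stack = [
    "first diffractive structure",   # diffracts the first color at the first incident angle
    "first reflective layer",        # totally reflects the first color, transmits the second
    "second diffractive structure",  # diffracts the second color at the second incident angle
    "second reflective layer",       # totally reflects the second color
]
# Both stated constraints hold for this ordering:
#   - the first reflective layer is between the two diffractive structures
#   - the second diffractive structure is between the two reflective layers
```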

Method, apparatus and stream for volumetric video format

Methods and devices for encoding and decoding data representative of a 3D scene are provided. To that end, first data representative of the texture of the 3D scene visible from a first viewpoint is encoded into one or more first tracks, the first data being arranged in first tiles of a first frame, with a part of the 3D scene associated with each first tile. Second data representative of depth associated with points of the 3D scene is encoded into one or more second tracks, the second data being arranged in second tiles of a second frame, the total number of second tiles being greater than the total number of first tiles. A set of second tiles is allocated to each first tile, and patches are arranged in that set, each patch corresponding to a two-dimensional parametrization of a group of 3D points comprised in the part of the 3D scene associated with the first tile and comprising second data representative of depth associated with the 3D points of the group. Instructions to extract at least a part of the first data and the second data from at least a part of the one or more first tracks and the one or more second tracks are further encoded into one or more third tracks.
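
As a rough illustration of the tiling relationship, the Python sketch below allocates second (depth) tiles to first (texture) tiles and packs patches into the allocated set. The class names, the even split of second tiles, and the round-robin packing are assumptions made for clarity; the abstract only fixes that there are more second tiles than first tiles, that each first tile is allocated a set of second tiles, and that patches carrying depth for groups of 3D points are arranged in that set.

```python
from dataclasses import dataclass, field

@dataclass
class Patch:
    """2D parametrization of a group of 3D points, carrying their depth data."""
    group_id: int
    depth_map: list

@dataclass
class FirstTile:
    """Texture tile of the first frame; owns a set of second (depth) tiles."""
    tile_id: int
    second_tiles: list = field(default_factory=list)

def allocate_second_tiles(first_tiles, num_second_tiles):
    """Give each first tile its own set of second tiles (more second than first tiles)."""
    assert num_second_tiles > len(first_tiles)
    per_tile = num_second_tiles // len(first_tiles)   # even split is an assumption
    for k, tile in enumerate(first_tiles):
        tile.second_tiles = list(range(k * per_tile, (k + 1) * per_tile))
    return first_tiles

def pack_patches(first_tile, patches):
    """Arrange the patches of this part of the scene into the tile's second tiles (round robin)."""
    packing = {t: [] for t in first_tile.second_tiles}
    for i, patch in enumerate(patches):
        packing[first_tile.second_tiles[i % len(first_tile.second_tiles)]].append(patch)
    return packing

# Two texture tiles, eight depth tiles; three patches land in tile 0's allocation.
tiles = allocate_second_tiles([FirstTile(0), FirstTile(1)], num_second_tiles=8)
layout = pack_patches(tiles[0], [Patch(0, []), Patch(1, []), Patch(2, [])])
```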

METHODS, SYSTEMS, AND MEDIA FOR GENERATING AND RENDERING IMMERSIVE VIDEO CONTENT
20230209031 · 2023-06-29 ·

Methods, systems, and media for generating and rendering immersive video content are provided. In some embodiments, the method comprises: receiving information indicating positions of cameras in a plurality of cameras; generating a mesh on which video content is to be projected based on the positions of the cameras in the plurality of cameras, wherein the mesh is comprised of a portion of a faceted cylinder, and wherein the faceted cylinder has a plurality of facets each corresponding to a projection from a camera in the plurality of cameras; receiving video content corresponding to the plurality of cameras; and transmitting the video content and the generated mesh to a user device in response to receiving a request for the video content from the user device.
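
As a sketch of the mesh described above, the Python snippet below builds a portion of a faceted cylinder with one flat facet per camera, each facet centered on that camera's angular position around the rig. The radius, facet height, and the mapping from camera positions to angles are illustrative assumptions; the abstract only states that the mesh is a portion of a faceted cylinder whose facets correspond to the cameras' projections.

```python
import math

def faceted_cylinder_mesh(camera_angles_deg, radius=1.0, height=2.0):
    """Return (vertices, quads) for a partial faceted cylinder, one flat facet per camera angle."""
    angles = sorted(math.radians(a) for a in camera_angles_deg)
    assert len(angles) >= 2, "sketch assumes at least two cameras"
    # Facet boundaries halfway between neighboring cameras, extended symmetrically at the ends.
    edges = [(angles[i] + angles[i + 1]) / 2 for i in range(len(angles) - 1)]
    edges = [2 * angles[0] - edges[0]] + edges + [2 * angles[-1] - edges[-1]]
    vertices, quads = [], []
    for i in range(len(angles)):
        a0, a1 = edges[i], edges[i + 1]
        base = len(vertices)
        for a in (a0, a1):
            x, z = radius * math.cos(a), radius * math.sin(a)
            vertices.append((x, -height / 2, z))   # bottom edge of the facet
            vertices.append((x, +height / 2, z))   # top edge of the facet
        quads.append((base, base + 1, base + 3, base + 2))  # one flat quad per camera
    return vertices, quads

# Example: three cameras spread over 90 degrees give a three-facet partial cylinder.
verts, faces = faceted_cylinder_mesh([0, 45, 90])
```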

THREE-DIMENSIONAL DATA CREATION METHOD, CLIENT DEVICE, AND SERVER

A three-dimensional data creation method in a client device includes: creating three-dimensional data of a surrounding area of the client device using sensor information that is obtained through a sensor with which the client device is equipped and that indicates a surrounding condition of the client device; estimating a self-location of the client device using the created three-dimensional data; and transmitting the obtained sensor information to a server or another client device.
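
A minimal Python sketch of the client-side flow follows. Every helper in it (read_sensor, build_point_cloud, estimate_self_location, transmit) is a placeholder standing in for the real sensor processing, localization, and transport; only the ordering of the three steps comes from the abstract.

```python
def read_sensor():
    """Placeholder: sensor information indicating the client's surrounding condition."""
    return {"ranges": [1.2, 0.9, 3.4], "timestamp": 0}

def build_point_cloud(sensor_info):
    """Placeholder: create 3D data of the surrounding area from the sensor information."""
    return [(r, 0.0, 0.0) for r in sensor_info["ranges"]]

def estimate_self_location(point_cloud):
    """Placeholder: estimate the client device's own position from the created 3D data."""
    return (0.0, 0.0, 0.0)

def transmit(sensor_info, destination="server"):
    """Placeholder: send the obtained sensor information to a server or another client device."""
    pass

def client_step():
    sensor_info = read_sensor()
    cloud = build_point_cloud(sensor_info)      # step 1: create 3D data
    pose = estimate_self_location(cloud)        # step 2: estimate self-location
    transmit(sensor_info)                       # step 3: transmit the sensor information
    return pose
```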

METHODS AND APPARATUS FOR ENCODING, COMMUNICATING AND/OR USING IMAGES
20230199333 · 2023-06-22 ·

Methods and apparatus for capturing, communicating, and using image data to support virtual reality experiences are described. Images, e.g., frames, are captured at a high resolution but at a lower frame rate than is used for playback. Interpolation is applied to the captured frames to generate interpolated frames. The captured frames, along with interpolated frame information, are communicated to the playback device. The combination of captured and interpolated frames corresponds to a second frame rate that is higher than the image capture rate. The cameras operate at a higher image resolution, but at a slower frame rate, than the same cameras could achieve at a lower resolution. Interpolation is performed prior to delivery to the user device, with the segments to be interpolated selected based on motion and/or lens field-of-view (FOV) information. A relatively small amount of interpolated frame data is communicated compared to the captured frame data, making efficient use of bandwidth.
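
The Python sketch below illustrates the relationship between the capture rate and the playback rate, interpolating only segments that appear to be moving so that little extra data needs to be sent. The block-based comparison, the linear blend, and the motion threshold are stand-in assumptions; the abstract specifies only that interpolated frames are derived from captured frames, with segments selected using motion and/or lens FOV information.

```python
import numpy as np

def interpolate_segments(frame_a, frame_b, motion_threshold=2.0, block=16):
    """Return one interpolated frame between two captured frames.

    Only blocks whose mean absolute change exceeds the threshold are interpolated
    (here with a simple linear blend); static blocks are repeated from frame_a,
    so the interpolated-frame data that must be communicated stays small."""
    out = frame_a.copy()
    h, w = frame_a.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = frame_a[y:y+block, x:x+block].astype(np.float32)
            b = frame_b[y:y+block, x:x+block].astype(np.float32)
            if np.abs(b - a).mean() > motion_threshold:        # moving segment
                out[y:y+block, x:x+block] = ((a + b) / 2).astype(frame_a.dtype)
    return out

# 30 fps capture doubled to a 60 fps playback stream: one interpolated frame is
# inserted between each pair of captured frames.
captured = [np.zeros((64, 64), np.uint8), np.full((64, 64), 10, np.uint8)]
playback = [captured[0], interpolate_segments(captured[0], captured[1]), captured[1]]
```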

METHOD, APPARATUS, AND DEVICE FOR REALIZING VIRTUAL STEREOSCOPIC SCENE
20170359571 · 2017-12-14 ·

A method and a system for realizing a virtual stereoscopic scene based on mapping are provided. The method comprises: acquiring a distance E_R between an observer's two eyes, a maximum convex displaying distance N_R of a real screen, a distance Z_R from the observer's eyes to the real screen, and a maximum concave displaying distance F_R of the real screen; calculating a parallax d_N_R at N_R and a parallax d_F_R at F_R; acquiring a distance N_V between a virtual single camera and a virtual near clipping plane, and a distance F_V between the virtual single camera and a virtual far clipping plane; calculating a distance E_V between a left virtual camera and a right virtual camera, and asymmetric perspective projection parameters of the left and right virtual cameras; and performing a perspective projection transformation of the scene content of the virtual single camera and displaying the virtual stereoscopic scene.
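
The parallax quantities named above follow from standard screen-space geometry, sketched in Python below together with the usual off-axis (asymmetric) frustum shift for the two virtual cameras. This is the textbook derivation suggested by the abstract's symbols, not the patent's exact mapping; in particular, how E_V is derived from the real-screen quantities is not reproduced here.

```python
def real_screen_parallaxes(E_R, N_R, Z_R, F_R):
    """Screen parallax by similar triangles: a point N_R in front of the screen gives
    crossed (negative) parallax, a point F_R behind it gives uncrossed (positive) parallax."""
    d_N_R = -E_R * N_R / (Z_R - N_R)
    d_F_R = E_R * F_R / (Z_R + F_R)
    return d_N_R, d_F_R

def asymmetric_frustum(E_V, conv_dist, near, half_width, half_height, left_camera=True):
    """Off-axis projection bounds (l, r, b, t, n) for one of the two virtual cameras,
    converging on a zero-parallax plane at distance conv_dist."""
    shift = (E_V / 2) * (near / conv_dist)
    shift = shift if left_camera else -shift
    return (-half_width + shift, half_width + shift, -half_height, half_height, near)

# Example: 65 mm eye separation, screen 600 mm away, 100 mm pop-out and 200 mm depth
# behind the screen give parallaxes of -13 mm and 16.25 mm respectively.
print(real_screen_parallaxes(E_R=65.0, N_R=100.0, Z_R=600.0, F_R=200.0))
```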

Multi-perspective stereoscopy from light fields

Methods and systems are provided for generating stereoscopic content with granular control over binocular disparity, based on multi-perspective imaging from representations of light fields. The stereoscopic content is computed as piecewise-continuous cuts through a representation of a light field, minimizing an energy that reflects prescribed parameters such as a depth budget, a maximum binocular disparity gradient, and a desired stereoscopic baseline. The methods and systems may be used for efficient and flexible stereoscopic post-processing, such as reducing excessive binocular disparity while preserving perceived depth, or retargeting already captured scenes to various viewing settings. Moreover, such methods and systems are highly useful for content creation in the context of multi-view autostereoscopic displays, and they provide a novel conceptual approach to stereoscopic image processing and post-production.
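
One way to read the energy mentioned above is as a sum of penalties for exceeding the prescribed limits, evaluated over a candidate cut. The Python sketch below is an illustrative assumption of that form, taking the per-column disparities produced by the cut as input; the actual formulation and optimization in the work may differ.

```python
def cut_energy(disparities, depth_budget, max_gradient, target_baseline,
               baseline_of_cut, w_budget=1.0, w_grad=1.0, w_base=1.0):
    """Illustrative energy for a candidate cut (assumed form, not the paper's).

    disparities: binocular disparity per output column produced by the cut."""
    # Penalize disparities that exceed the allowed depth budget.
    e_budget = sum(max(0.0, abs(d) - depth_budget) for d in disparities)
    # Penalize disparity changes steeper than the maximum allowed gradient.
    e_grad = sum(max(0.0, abs(d2 - d1) - max_gradient)
                 for d1, d2 in zip(disparities, disparities[1:]))
    # Penalize deviation from the desired stereoscopic baseline.
    e_base = abs(baseline_of_cut - target_baseline)
    return w_budget * e_budget + w_grad * e_grad + w_base * e_base
```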