H04N13/161

Method for synthesizing intermediate view of light field, system for synthesizing intermediate view of light field, and method for compressing light field

A method of synthesizing intermediate views of a light field includes selecting a configuration of specific input views of a light field collected by a light field acquiring device, specifying coordinates of intermediate views to be synthesized and inputting the specified coordinates to a neural network, and synthesizing the intermediate views, using the neural network, based on a scene disparity, the selected configuration of the specific input views, and the specified coordinates of the intermediate views.
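As a rough illustration of the abstract's synthesis step, the sketch below replaces the learned network with simple disparity-compensated warping and averaging: each input view is shifted in proportion to the scene disparity and its angular offset from the requested intermediate coordinates, then the shifted views are blended. All function and variable names here are illustrative, not taken from the patent.

```python
import numpy as np

def synthesize_intermediate_view(input_views, coords, disparity):
    """Blend a sparse configuration of input views into an intermediate view.

    A crude stand-in for the learned synthesis network: each input view,
    keyed by its (u, v) grid position, is shifted by the scene disparity
    scaled by its angular offset from the target coordinates, then the
    shifted views are averaged.
    """
    target_u, target_v = coords
    acc = None
    for (u, v), view in input_views.items():
        # Shift proportional to the angular distance from the target view.
        du = int(round(disparity * (target_u - u)))
        dv = int(round(disparity * (target_v - v)))
        shifted = np.roll(np.roll(view, du, axis=1), dv, axis=0)
        acc = shifted.astype(np.float64) if acc is None else acc + shifted
    return acc / len(input_views)

# Four corner views of a 2x2 configuration, synthesized at the centre.
views = {(0, 0): np.full((4, 4), 10.0), (0, 1): np.full((4, 4), 20.0),
         (1, 0): np.full((4, 4), 30.0), (1, 1): np.full((4, 4), 40.0)}
centre = synthesize_intermediate_view(views, coords=(0.5, 0.5), disparity=2.0)
```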

Video transmission method, video transmission device, video receiving method and video receiving device
11528509 · 2022-12-13

A video transmission method that includes predicting, from a texture picture or a depth picture of an anchor viewing position, a picture for a target viewing position on the basis of target viewing position information and processing a prediction error with respect to the predicted picture on the basis of a source picture of the target viewing position. An error-prone region map is generated on the basis of the predicted picture and the source picture. The video transmission method also includes patch packing the prediction error-processed picture on the basis of the error-prone region map and encoding the packed patch on the basis of the texture picture or the depth picture of the anchor viewing position.
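A minimal sketch of the error-prone region map and patch-packing steps, assuming a simple absolute-difference threshold as the error criterion (the patent does not specify one) and hypothetical function names:

```python
import numpy as np

def error_prone_region_map(predicted, source, threshold=8):
    """Mark pixels where prediction from the anchor view fails badly."""
    diff = np.abs(predicted.astype(np.int32) - source.astype(np.int32))
    return diff > threshold

def pack_error_patches(residual, region_map, patch=4):
    """Keep only residual patches that overlap the error-prone region.

    Illustrative packing: an encoder would transmit these patches (plus
    their grid positions) instead of the full prediction-error picture.
    """
    patches = []
    h, w = residual.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            if region_map[y:y + patch, x:x + patch].any():
                patches.append(((y, x), residual[y:y + patch, x:x + patch]))
    return patches

predicted = np.zeros((8, 8), dtype=np.uint8)
source = predicted.copy()
source[0, 2] = 60                       # one badly mispredicted pixel
emap = error_prone_region_map(predicted, source)
patches = pack_error_patches(source.astype(np.int32) - predicted, emap)
```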

CODING SCHEME FOR DEPTH DATA
20220394229 · 2022-12-08

Methods of encoding and decoding depth data are disclosed. In an encoding method, depth values and occupancy data are both encoded into a depth map. The method adapts how the depth values and occupancy data are converted to map values in the depth map. For example, it may adaptively select a threshold, above or below which all values represent unoccupied pixels. By adapting how the depth and occupancy are encoded, based on analysis of the depth values, the method can enable more effective encoding and transmission of the depth data and occupancy data. The encoding method outputs metadata defining the adaptive encoding. This metadata can be used by a corresponding decoding method, to decode the map values. Also provided are an encoder and a decoder for depth data, and a corresponding bitstream, comprising a depth map and its associated metadata.
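The threshold idea can be sketched as follows. This toy codec reserves map value 0 for unoccupied pixels (a fixed simplification of the adaptive threshold selection the abstract describes), rescales occupied depths into the remaining levels, and returns the depth range and threshold as the metadata a decoder would need; all names are illustrative:

```python
def encode_depth_map(depth_values, occupancy, levels=256):
    """Encode depth + occupancy into one map with a threshold and metadata.

    Values at or below the threshold signal an unoccupied pixel; occupied
    depths are rescaled into the remaining range.
    """
    occupied = [d for d, o in zip(depth_values, occupancy) if o]
    d_min, d_max = min(occupied), max(occupied)
    threshold = 0                    # simplification: level 0 = unoccupied
    span = levels - 2                # levels threshold+1 .. levels-1
    encoded = []
    for d, o in zip(depth_values, occupancy):
        if not o:
            encoded.append(threshold)
        else:
            encoded.append(threshold + 1 +
                           round((d - d_min) / (d_max - d_min or 1) * span))
    metadata = {"threshold": threshold, "d_min": d_min, "d_max": d_max}
    return encoded, metadata

def decode_depth_map(encoded, metadata, levels=256):
    """Invert the mapping using the transmitted metadata."""
    t, d_min, d_max = (metadata["threshold"], metadata["d_min"],
                       metadata["d_max"])
    span = levels - 2
    out = []
    for v in encoded:
        if v <= t:
            out.append((None, False))          # unoccupied pixel
        else:
            out.append((d_min + (v - t - 1) / span * (d_max - d_min), True))
    return out

encoded, meta = encode_depth_map([1.0, 5.0, 3.0], [True, True, False])
decoded = decode_depth_map(encoded, meta)
```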

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION PROCESSING SYSTEM
20220394231 · 2022-12-08

The present technology relates to an information processing device, an information processing method, a program, and an information processing system capable of providing a better user experience to a user who views a free viewpoint moving image.

An information processing device of the present technology includes: a transmission unit that transmits a moving image; and a control unit that controls the transmission unit to transmit a free viewpoint moving image or a real camera viewpoint moving image on the basis of a result of determination on whether or not the free viewpoint moving image has been successfully generated, the free viewpoint moving image being a moving image generated by using a plurality of camera moving images generated by imaging a subject by using a plurality of cameras and viewed from an arbitrary position and direction, the real camera viewpoint moving image being a moving image generated from a camera moving image generated by imaging the subject by using a camera and viewed from a position and direction of the camera. The present technology is applicable to a real-time volumetric system that transmits a free viewpoint moving image in real time.
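The control unit's fallback logic reduces to a simple selection rule, sketched below with generation failure modelled as a `None` frame (an assumption; the patent does not specify how failure is represented):

```python
def select_stream(free_viewpoint_frame, camera_frame):
    """Choose which moving image to transmit.

    If free-viewpoint generation failed (modelled here as None), fall
    back to the real-camera viewpoint so playback never stalls.
    """
    if free_viewpoint_frame is not None:
        return ("free_viewpoint", free_viewpoint_frame)
    return ("real_camera", camera_frame)

ok = select_stream("fv_frame_0", "cam_frame_0")
fallback = select_stream(None, "cam_frame_0")
```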

Cloud-based Rendering of Interactive Augmented/Virtual Reality Experiences

Systems and methods for cloud-based rendering of interactive augmented reality (AR) and/or virtual reality (VR) experiences. A client device may initiate execution of a content application on a server and provide information associated with the content application to the server. The client device may initialize, while awaiting a notification from the server, local systems associated with the content application and, upon receipt of the notification, provide, to the server, information associated with the local systems. Further, the client device may receive, from the server, data associated with the content application and render an AR/VR scene based on the received data. The data may be based, at least in part, on the information associated with the local systems. The providing and receiving may be performed periodically, e.g., at a rate to sustain a comfortable viewing environment of the AR/VR scene by a user of the client device.
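The client-side handshake and periodic exchange can be sketched as below. The `server` interface (`start_app`, `poll_ready`, `send_state`, `next_frame`) is entirely hypothetical; none of these names come from the patent text:

```python
def client_session(server, local_systems, frames=3):
    """Sketch of the client-side handshake and render loop.

    The client starts the content application on the server, initializes
    its local systems while waiting for the server's readiness
    notification, then periodically sends local state and receives
    scene data to render.
    """
    server.start_app("content_app", info={"resolution": (1920, 1080)})
    # Initialize local systems (tracking, input, display) while the
    # server spins up the content application.
    state = {name: init() for name, init in local_systems.items()}
    while not server.poll_ready():
        pass                               # await the server's notification
    rendered = []
    for _ in range(frames):                # periodic exchange at display rate
        server.send_state(state)           # e.g. head pose, controller input
        rendered.append(server.next_frame())  # scene data based on the state
    return rendered

class FakeServer:
    """Stand-in server used only to exercise the loop above."""
    def __init__(self):
        self._polls = 0
        self._frame = 0
    def start_app(self, name, info):
        self.app = name
    def poll_ready(self):
        self._polls += 1
        return self._polls >= 2            # "ready" on the second poll
    def send_state(self, state):
        self.last_state = state
    def next_frame(self):
        self._frame += 1
        return f"frame{self._frame}"

result = client_session(FakeServer(), {"tracker": lambda: "pose0"})
```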

Apparatus, a method and a computer program for volumetric video

There are disclosed various methods, apparatuses and computer program products for volumetric video encoding and decoding. In some embodiments of a method for encoding, one or more patches formed from three-dimensional image information are obtained. The one or more patches represent projection data of at least a part of an object onto a projection plane. A priority is determined for at least one of the one or more patches, and the one or more patches are projected to a projection plane. An indication of the priority is encoded into or along a bitstream. In some embodiments of a method for decoding, one or more encoded patches formed from three-dimensional image information are received. At least one indication of priority determined for at least one of the one or more patches is also received, and the patches are reconstructed in the order defined by the at least one indication of priority.
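A minimal sketch of signalling priority along the bitstream and reconstructing in priority order, with patches reduced to labelled placeholders and all names illustrative:

```python
def encode_patches(patches):
    """Attach a per-patch priority indication alongside each patch.

    Each patch is a dict with 'data' and a 'priority' (lower value =
    earlier reconstruction); the priority travels with the patch,
    mirroring "encoded into or along a bitstream".
    """
    return [(p["priority"], p["data"]) for p in patches]

def decode_patches(bitstream):
    """Reconstruct patches in the order defined by the priority indications."""
    return [data for _, data in sorted(bitstream, key=lambda item: item[0])]

stream = encode_patches([{"data": "torso", "priority": 2},
                         {"data": "face", "priority": 0},
                         {"data": "arm", "priority": 1}])
order = decode_patches(stream)
```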