Patent classifications
H04N19/597
2D UV ATLAS SAMPLING BASED METHODS FOR DYNAMIC MESH COMPRESSION
Method, apparatus, and system for sampling-based dynamic mesh compression are provided. The process may include determining one or more sample positions associated with an input mesh based on one or more sampling rates, and determining an occupancy status associated respectively with each of the one or more sample positions, the occupancy status indicating whether each of the one or more sample positions is within the boundaries of one or more polygons defined by the input mesh. The process may include generating a sample-based occupancy map based on the occupancy status associated respectively with each of the one or more sample positions.
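The occupancy-map construction described in this abstract can be illustrated with a short sketch. This is not the claimed implementation — the grid layout, sampling-rate handling, and the sign-based point-in-triangle test are assumptions chosen for clarity:

```python
import numpy as np

def point_in_triangle(p, a, b, c):
    """Sign-based barycentric test: True if p lies inside (or on) triangle abc."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def occupancy_map(triangles, width, height, sampling_rate):
    """Mark each sample position occupied if it falls inside any UV-atlas triangle."""
    occ = np.zeros((height // sampling_rate, width // sampling_rate), dtype=np.uint8)
    for i in range(occ.shape[0]):
        for j in range(occ.shape[1]):
            p = (j * sampling_rate, i * sampling_rate)  # sample position from sampling rate
            if any(point_in_triangle(p, *tri) for tri in triangles):
                occ[i, j] = 1  # occupancy status: inside a mesh polygon
    return occ
```

A coarser sampling rate yields a smaller occupancy map at the cost of boundary precision, which is the core rate/fidelity trade-off the abstract alludes to.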
CODING SCHEME FOR VIDEO DATA USING DOWN-SAMPLING/UP-SAMPLING AND NON-LINEAR FILTER FOR DEPTH MAP
Methods of encoding and decoding video data are provided. In an encoding method, source video data comprising one or more source views is encoded into a video bitstream. Depth data of at least one of the source views is nonlinearly filtered and downsampled prior to encoding. After decoding, the decoded depth data is up-sampled and nonlinearly filtered.
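The down-sample/up-sample pipeline with a nonlinear filter can be sketched as follows. The choice of a 3×3 median filter and a decimation factor of 2 is an assumption for illustration; the patent covers nonlinear filtering generally:

```python
import numpy as np

def median3x3(depth):
    """Nonlinear (median) filter: preserves depth discontinuities better than linear blur."""
    padded = np.pad(depth, 1, mode='edge')
    h, w = depth.shape
    stacked = np.stack([padded[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    return np.median(stacked, axis=0)

def encode_side_downsample(depth, factor=2):
    """Encoder side: nonlinearly filter, then keep every `factor`-th sample."""
    return median3x3(depth)[::factor, ::factor]

def decode_side_upsample(depth, factor=2):
    """Decoder side: nearest-neighbour up-sample, then nonlinearly filter."""
    up = np.repeat(np.repeat(depth, factor, axis=0), factor, axis=1)
    return median3x3(up)
```

Median filtering is chosen over linear interpolation in such schemes because averaging across a depth edge invents depth values that belong to neither surface.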
METHOD AND APPARATUS FOR SELECTING NEIGHBOR POINT IN POINT CLOUD, ENCODER, AND DECODER
This application provides a method for selecting a neighbor point of a current point in a point cloud. The method includes: determining, from point cloud data, a target region where the current point is located, the target region comprising a plurality of points; determining, for at least two decoded target points in the target region, a weight coefficient of each of the at least two target points, the at least two target points not comprising the current point; determining a weight of each of the at least two target points according to the weight coefficient and geometry information of each of the at least two target points and geometry information of the current point; and selecting, according to the weight of each of the at least two target points, at least one of the target points as the neighbor point of the current point.
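The weighted neighbor selection described above can be sketched briefly. The specific weight formula (coefficient divided by squared geometric distance) and the top-k selection are illustrative assumptions, not the claimed formula:

```python
import numpy as np

def select_neighbors(current, candidates, coeffs, k=3):
    """Weight each decoded candidate point by coeff / squared distance to the
    current point, then pick the k highest-weight points as its neighbors."""
    current = np.asarray(current, dtype=float)
    weights = []
    for p, c in zip(candidates, coeffs):
        d2 = np.sum((np.asarray(p, dtype=float) - current) ** 2)  # geometry term
        weights.append(c / (d2 + 1e-9))  # small epsilon avoids division by zero
    order = np.argsort(weights)[::-1]    # highest weight first
    return [candidates[i] for i in order[:k]]
```

Combining a per-point coefficient with geometric distance lets the codec prefer candidates that are both spatially close and, for example, already well reconstructed.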
Iterative synthesis of views from data of a multi-view video
Synthesis of an image of a view from data of a multi-view video. The synthesis includes an image processing phase as follows: generating image synthesis data from texture data of at least one image of a view of the multi-view video; calculating an image of a synthesised view from the generated synthesis data and at least one image of a view of the multi-view video; analysing the image of the synthesised view relative to a synthesis performance criterion; if the criterion is met, delivering the image of the synthesised view; and if not, iterating the processing phase. The calculation of an image of a synthesised view at a current iteration includes modifying, based on synthesis data generated in the current iteration, an image of the synthesised view calculated during a processing phase preceding the current iteration.
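The iterate-until-criterion control flow in this abstract reduces to a simple loop. The `refine_step` and `quality` callables below are hypothetical placeholders for the synthesis-data generation and the performance criterion, respectively:

```python
import numpy as np

def synthesize_view(initial, refine_step, quality, threshold, max_iters=10):
    """Iteratively refine a synthesised view: each iteration modifies the
    previous estimate with newly generated synthesis data, and the loop
    stops as soon as the quality criterion is met."""
    estimate = initial.copy()             # first synthesis from texture data
    for _ in range(max_iters):
        if quality(estimate) >= threshold:
            return estimate               # criterion met: deliver the image
        estimate = refine_step(estimate)  # modify previous result with new synthesis data
    return estimate                       # best effort after max_iters
```

The key point of the claim is that each iteration modifies the previously calculated image rather than synthesizing from scratch, which the loop variable `estimate` captures.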
In-tree geometry quantization of point clouds
An example device includes one or more processors, and memory storing instructions that when executed by the processors, cause the processors to receive points that represent a point cloud in three-dimensional space, and generate a data structure representing the point cloud. Generating the data structure includes encoding a position of each point in each dimension as a sequence of bits according to a tree data structure; partitioning each of the sequences into two or more portions according to a scaling depth; determining that a subset of the points is spatially isolated from a remainder of the points; quantizing each of the portions associated with the subset of the points according to a first quantization step size; quantizing each of the portions associated with the remainder of the points according to a second quantization step size; and including the quantized portions in the data structure.
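The bit-partitioning and dual-step-size quantization can be sketched as below. Treating the scaling depth as a plain bit split, and quantizing only the low bits, is a simplification of the in-tree scheme for illustration:

```python
import numpy as np

def quantize_positions(points, isolated_mask, step_isolated, step_dense, scaling_depth):
    """Split each coordinate into high bits (tree levels above the scaling
    depth, kept losslessly) and low bits (quantized with a step size chosen
    by whether the point is spatially isolated)."""
    points = np.asarray(points, dtype=np.int64)
    high = points >> scaling_depth                 # portion above the scaling depth
    low = points & ((1 << scaling_depth) - 1)      # portion below the scaling depth
    steps = np.where(isolated_mask[:, None], step_isolated, step_dense)
    low_q = (low // steps) * steps                 # coarser step for isolated points
    return (high << scaling_depth) | low_q
```

Isolated points contribute little to perceived quality, so giving them a larger quantization step (here `step_isolated`) saves bits where the error is least visible.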
AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VOLUMETRIC VIDEO
A method comprising: providing a 3D representation of at least one object as an input for an encoder (500); projecting the 3D representation onto at least one 2D patch (502); generating at least a geometry image and a texture image from the 2D patch (504); generating, based on the geometry image, a mesh comprising a number of vertices (506); mapping the number of vertices to two-dimensional (2D) coordinates of the texture image (508); and signalling said 2D coordinates of the texture image to be applied to the number of vertices of the mesh in or along a bitstream (510).
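The vertex-to-texture-coordinate mapping of steps 506–508 can be illustrated with a minimal sketch. Building one vertex per geometry-image pixel and normalizing pixel coordinates into UV space are assumptions; the patent leaves the meshing strategy open:

```python
import numpy as np

def mesh_from_geometry_image(geometry):
    """Build a vertex per geometry-image pixel (x, y, depth) and map each
    vertex back to the 2D texture coordinate it was projected from."""
    h, w = geometry.shape
    ys, xs = np.mgrid[0:h, 0:w]
    vertices = np.stack([xs.ravel(), ys.ravel(), geometry.ravel()], axis=1)
    # normalized 2D texture coordinates, signalled alongside the mesh
    uv = np.stack([xs.ravel() / max(w - 1, 1), ys.ravel() / max(h - 1, 1)], axis=1)
    return vertices, uv
```

Because the geometry and texture images share the same patch layout, the UV coordinate of a vertex is recoverable directly from its pixel position, which is what makes signalling them "in or along" the bitstream cheap.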