H04N19/62

Processing of motion information in multidimensional signals through motion zones and auxiliary information through auxiliary zones

Computer processor hardware receives zone information specifying multiple elements of a rendition of a signal belonging to a zone. The computer processor hardware also receives motion information associated with the zone. The motion information can be encoded to indicate to which corresponding element in a reference signal each of the multiple elements in the zone pertains. For each respective element in the zone as specified by the zone information, the computer processor hardware utilizes the motion information to derive a corresponding location value in the reference signal; the corresponding location value indicates a location in the reference signal to which the respective element pertains.
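The abstract above describes mapping every element of a motion zone to a location in a reference signal from shared motion information. A minimal sketch of that idea, assuming a hypothetical affine motion model `(a, b, c, d, tx, ty)` for the zone (the patent does not fix a particular model):

```python
def derive_reference_locations(zone_elements, motion):
    """For each (x, y) element in a zone, derive the location in the
    reference signal that the element pertains to, using motion
    information shared by the whole zone. 'motion' is a hypothetical
    affine parameter tuple (a, b, c, d, tx, ty)."""
    a, b, c, d, tx, ty = motion
    locations = {}
    for (x, y) in zone_elements:
        # One parameter set serves every element of the zone, so no
        # per-element motion vector needs to be transmitted.
        locations[(x, y)] = (a * x + b * y + tx, c * x + d * y + ty)
    return locations
```

For a pure translation, `a, b, c, d = 1, 0, 0, 1` and `(tx, ty)` is the zone's motion vector; derived locations may be fractional, in which case the decoder would interpolate the reference signal.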

Overlay processing method in 360 video system, and device thereof

A 360 image data processing method performed by a 360 video receiving device, according to the present invention, comprises the steps of: receiving 360 image data; acquiring, from the 360 image data, information on an encoded picture and metadata; decoding the picture on the basis of the information on the encoded picture; and rendering the decoded picture and an overlay on the basis of the metadata, wherein the metadata includes overlay-related metadata, the overlay is rendered on the basis of the overlay-related metadata, and the overlay-related metadata includes information on a region of the overlay.
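The rendering step above places the overlay in a region given by overlay-related metadata. A simplified sketch, assuming a hypothetical metadata dict with `x`, `y`, `width`, `height` fields and pictures represented as 2D lists (the actual metadata syntax is defined by the standard, not here):

```python
def render_with_overlay(decoded_picture, overlay, overlay_metadata):
    """Composite an overlay onto a decoded picture using region
    information from hypothetical overlay-related metadata: a dict
    with 'x', 'y', 'width', 'height' giving the overlay region."""
    x0, y0 = overlay_metadata["x"], overlay_metadata["y"]
    w, h = overlay_metadata["width"], overlay_metadata["height"]
    out = [row[:] for row in decoded_picture]  # copy the base picture
    for dy in range(h):
        for dx in range(w):
            # Overwrite the region's samples with the overlay's samples.
            out[y0 + dy][x0 + dx] = overlay[dy][dx]
    return out
```

A real implementation would also handle projection of the region onto the sphere and alpha blending; this sketch only shows the region-driven placement.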

Methods and devices for encoding and reconstructing a point cloud

This method comprises: accessing (2) a point cloud (PC) comprising a plurality of points defined by attributes, said attributes including a spatial position of a point in a 3D space and at least one feature of the point; segmenting (2) the point cloud into one or more clusters (C_i) of points on the basis of the attributes of the points; and, for at least one cluster (C_i): constructing (4) a similarity graph having a plurality of vertices and at least one edge, the similarity graph representing a similarity among neighboring points of the cluster (C_i) in terms of the attributes, the plurality of vertices including vertices P_i and P_j corresponding to points of the cluster (C_i); assigning one or more weights w_i,j to one or more edges connecting vertices P_i and P_j of the graph; computing (6) a transform using the one or more assigned weights, said transform being characterized by coefficients; and quantizing (8) and encoding (10) the transform coefficients.
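A toy sketch of the weight-assignment, transform, and quantization steps for the smallest non-trivial case, a two-point cluster. The distance-based weight formula and the quantization step are assumptions for illustration; for a two-node graph the Laplacian eigenvectors are the fixed constant and difference vectors, so the transform basis does not depend on the weight:

```python
import math

def encode_cluster(points, step=0.5):
    """Graph-transform coding sketch for a 2-point cluster.
    Each point is (position, feature). The similarity weight on the
    single edge decays with distance (assumed exponential kernel);
    the 2-node graph transform is the fixed average/difference basis."""
    (p0, f0), (p1, f1) = points
    w01 = math.exp(-abs(p0 - p1))      # weight w_0,1 on the edge P_0-P_1
    # Transform coefficients (Laplacian eigenvector projections):
    avg = (f0 + f1) / math.sqrt(2)     # low-frequency coefficient
    diff = (f0 - f1) / math.sqrt(2)    # high-frequency coefficient
    # Uniform quantization of the transform coefficients.
    quantized = [round(avg / step), round(diff / step)]
    return w01, quantized
```

With more than two points per cluster, the weights w_i,j shape the Laplacian and hence the basis; points with similar features then compact their energy into few coefficients.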

Three-dimensional data encoding method, three-dimensional data decoding method, three-dimensional data encoding device, and three-dimensional data decoding device

A three-dimensional data encoding method includes: performing a conversion process including a displacement on the second of first point cloud data and second point cloud data having a same time; combining the first point cloud data and the second point cloud data after being subjected to the conversion process, to generate third point cloud data; and encoding the third point cloud data to generate a bitstream. The bitstream includes first information and second information, the first information indicating to which of the first point cloud data and the second point cloud data each of the three-dimensional points included in the third point cloud data belongs, the second information indicating details of the displacement.
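A minimal sketch of the combining step, assuming the conversion process is a pure translation and points are (x, y, z) tuples. The returned membership list and displacement correspond to the first and second information carried in the bitstream:

```python
def combine_point_clouds(first, second, displacement):
    """Shift the second point cloud by 'displacement' and merge it with
    the first. Record per-point membership (0 = first cloud, 1 = second)
    and the displacement itself, so a decoder can separate the clouds
    and undo the shift."""
    dx, dy, dz = displacement
    shifted = [(x + dx, y + dy, z + dz) for (x, y, z) in second]
    combined = first + shifted
    membership = [0] * len(first) + [1] * len(second)
    return combined, membership, displacement
```

Encoding the merged cloud once, instead of each cloud separately, lets a single spatial structure (e.g. one octree) cover both while the side information preserves their identities.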

Decomposition of residual data during signal encoding, decoding and reconstruction in a tiered hierarchy

Computer processor hardware receives a first set of adjustment values. The first set of adjustment values specify adjustments to be made to a predicted rendition of a signal generated at a first level of quality to reconstruct a rendition of the signal at the first level of quality. The computer processor hardware processes the first set of adjustment values and derives a second set of adjustment values based on the first set of adjustment values and a rendition of the signal at a second level of quality. The second level of quality is lower than the first level of quality.
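One way to picture the derivation of lower-tier adjustment values is folding the first-level residuals down to the coarser grid. The sketch below assumes 1D signals and simple block averaging, and omits the dependence on the second-level rendition that the abstract also mentions; it is an illustration of tiered decomposition, not the patented method:

```python
def derive_lower_tier_adjustments(first_adjustments, block=2):
    """Fold first-level adjustment values (residuals) down to a lower
    level of quality by averaging each block of values, so part of the
    correction can be applied to the coarser rendition instead."""
    return [
        sum(first_adjustments[i:i + block]) / block
        for i in range(0, len(first_adjustments), block)
    ]
```

Applying the averaged values at the lower level and only the leftover detail at the higher level tends to shrink the total data needed to carry the adjustments.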

Encoding and decoding based on blending of sequences of samples along time

Computer processor hardware receives image data specifying element settings for each image of multiple original images in a sequence. The computer processor hardware analyzes the element settings across the multiple original images. The computer processor hardware then utilizes the element settings of the multiple original images in the sequence to produce first encoded image data specifying a set of common image element settings, the set of common image element settings being a baseline to substantially reproduce each of the original images in the sequence.
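A small sketch of producing a set of common image element settings from a sequence, assuming images are flat lists of sample values and the per-element average serves as the blended baseline (the patent does not prescribe averaging specifically). Each original image can then be substantially reproduced from the baseline plus its residuals:

```python
def common_image_settings(images):
    """Blend a sequence of images into one set of common element
    settings (here, the per-element average) and compute per-image
    residuals relative to that baseline."""
    n = len(images)
    baseline = [sum(frame[i] for frame in images) / n
                for i in range(len(images[0]))]
    residuals = [[frame[i] - baseline[i] for i in range(len(frame))]
                 for frame in images]
    return baseline, residuals
```

When the images in the sequence are similar, the residuals are small, so encoding one baseline plus residuals costs less than encoding every image independently.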