Patent classifications
H04N19/85
Image processing apparatus and image processing method for decoding raw image data encoded with lossy encoding scheme
An image processing apparatus decodes encoded RAW data that includes subband data encoded with a lossy encoding scheme, and determines one of a plurality of classifications based on the decoded subband data, wherein the plurality of classifications are based on a feature of an image. The apparatus also obtains correction data corresponding to the determined classification, and corrects recomposed data, which is obtained by applying frequency recomposition to the decoded subband data, based on the correction data, to obtain the corrected data as decoded RAW data.
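As a rough illustration of the decode-then-correct flow described above, the sketch below assumes a one-level 1-D Haar transform as the frequency decomposition/recomposition and a detail-magnitude threshold as the classification feature; the abstract specifies neither, so both are stand-ins, as are all function and parameter names.

```python
def haar_decompose(x):
    """One-level 1-D Haar analysis: approximation (lo) and detail (hi) subbands."""
    lo = [(a + b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    hi = [(a - b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    return lo, hi

def haar_recompose(lo, hi):
    """Frequency recomposition: exact inverse of haar_decompose."""
    out = []
    for l, h in zip(lo, hi):
        out.extend([l + h, l - h])
    return out

def classify(h, thresholds):
    """Map a detail coefficient to a class index by magnitude
    (a stand-in for the patent's image-feature classification)."""
    mag = abs(h)
    for i, t in enumerate(thresholds):
        if mag < t:
            return i
    return len(thresholds)

def decode_with_correction(lo, hi, thresholds, correction_table):
    """Recompose the (lossily decoded) subbands, then add the per-class
    correction data obtained from the determined classification."""
    out = []
    for l, h in zip(lo, hi):
        c = correction_table[classify(h, thresholds)]
        out.extend([l + h + c, l - h + c])
    return out
```

With an all-zero correction table this reduces to plain recomposition; nonzero entries shift each sample pair according to its class.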
3D POINT CLOUD ENHANCEMENT WITH MULTIPLE MEASUREMENTS
Systems and methods are described for refining first point cloud data using at least second point cloud data and one or more sets of quantizer shifts. An example point cloud decoding method includes obtaining data representing at least a first point cloud and a second point cloud; obtaining information identifying at least a first set of quantizer shifts associated with the first point cloud; and obtaining refined point cloud data based on at least the first point cloud, the first set of quantizer shifts, and the second point cloud. The obtaining of the refined point cloud data may include performing a subtraction based on at least the first set of quantizer shifts. Corresponding encoding systems and methods are also described.
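A minimal sketch of the decoding steps above, assuming scalar per-axis quantizer shifts and averaging as the refinement rule; the abstract does not fix either choice, and all names here are hypothetical.

```python
def dequantize(points, step, shifts):
    """Reconstruct coordinates from quantized points, performing the
    subtraction based on the signaled per-axis quantizer shifts."""
    return [tuple(p[i] * step - shifts[i] for i in range(3)) for p in points]

def refine(cloud_a, cloud_b):
    """Combine two measurements of the same scene by averaging
    corresponding points (one simple way to obtain refined data
    from a first and a second point cloud)."""
    return [tuple((a[i] + b[i]) / 2.0 for i in range(3))
            for a, b in zip(cloud_a, cloud_b)]
```

A real codec would also need correspondence matching between the clouds; zipping assumes the points are already paired.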
EFFICIENT ENCODING OF FILM GRAIN NOISE
One embodiment of the present invention sets forth a technique for encoding video frames. The technique includes performing one or more operations to generate a plurality of denoised video frames associated with a video sequence. The technique also includes determining a first set of motion vectors based on a first denoised frame included in the plurality of denoised video frames and a second denoised frame included in the plurality of denoised video frames, and determining a first residual between the second denoised frame and a prediction frame associated with the second denoised frame. The technique further includes performing one or more operations to generate an encoded video frame associated with the second denoised frame based on the first set of motion vectors, the first residual, and a first frame that is included in the video sequence and corresponds to the first denoised frame.
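The key idea above — estimate motion and the residual on denoised frames, but form the final prediction from the original (grainy) reference so the reference grain carries into the reconstruction instead of being coded — can be sketched in 1-D. Block-wise SAD search stands in for real motion estimation, the denoised frames are assumed given, and all names are illustrative.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def motion_search(ref, cur, block=4, search=2):
    """Per-block integer displacement minimizing SAD (toy 1-D motion estimation)."""
    mvs = []
    for start in range(0, len(cur), block):
        blk = cur[start:start + block]
        best = min(
            range(-search, search + 1),
            key=lambda d: sad(blk, ref[start + d:start + d + block])
            if 0 <= start + d and start + d + block <= len(ref) else float("inf"),
        )
        mvs.append(best)
    return mvs

def predict(ref, mvs, block=4):
    """Build a prediction frame by fetching each block at its motion vector."""
    pred = []
    for i, mv in enumerate(mvs):
        start = i * block + mv
        pred.extend(ref[start:start + block])
    return pred

def encode_frame(orig_ref, den_ref, den_cur):
    """Motion vectors and residual come from the denoised frames, but the
    reconstruction predicts from the original frame corresponding to the
    first denoised frame, reusing its film grain."""
    mvs = motion_search(den_ref, den_cur)
    residual = [c - p for c, p in zip(den_cur, predict(den_ref, mvs))]
    recon = [p + r for p, r in zip(predict(orig_ref, mvs), residual)]
    return mvs, residual, recon
```

When the original and denoised references coincide (no noise), the reconstruction is exact; with a noisy original reference, its grain is transported into the output for free.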
VIDEO PROCESSING APPARATUS USING INTERNAL PREDICTION BUFFER THAT IS SHARED BY MULTIPLE CODING TOOLS FOR PREDICTION
A video processing apparatus implemented in a chip includes an on-chip prediction buffer and a processing circuit. The on-chip prediction buffer is shared by a plurality of coding tools for prediction, and is used to store reference data. The processing circuit supports the coding tools for prediction, reads a plurality of first reference data from the on-chip prediction buffer as input data of a first coding tool that is included in the coding tools and enabled by the processing circuit, and writes output data of the first coding tool enabled by the processing circuit into the on-chip prediction buffer as a plurality of second reference data.
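The claimed hardware arrangement can be modeled in software: one buffer object, several prediction tools that each read their reference data from it and write their output back into it. The two "tools" below (an intra-block-copy-like copy and a CCLM-like scaled cross-component prediction) are illustrative choices only; the abstract does not name specific tools.

```python
class PredictionBuffer:
    """Toy model of an on-chip prediction buffer shared by multiple
    coding tools: all tools read reference samples from, and write
    output samples back into, the same storage."""
    def __init__(self, size):
        self.data = [0] * size

    def read(self, offset, length):
        return self.data[offset:offset + length]

    def write(self, offset, samples):
        self.data[offset:offset + len(samples)] = samples

def intra_block_copy(buf, src, dst, length):
    """First tool: copy previously reconstructed samples within the buffer."""
    buf.write(dst, buf.read(src, length))

def cross_component_predict(buf, luma_off, chroma_off, length, alpha):
    """Second tool: scale co-located samples, sharing the same buffer."""
    buf.write(chroma_off, [alpha * s for s in buf.read(luma_off, length)])
```

Sharing one buffer means the second tool can consume what the first tool just wrote, which is the point of the shared on-chip storage.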
Color volume transforms in coding of high dynamic range and wide color gamut sequences
A method of encoding a digital video, comprising receiving a high dynamic range (HDR) master of a video, a reference standard dynamic range (SDR) master of the video, and target SDR display properties at an encoder, finding a color volume transform that converts HDR values from the HDR master into SDR values that, when converted for display on the target SDR display, are substantially similar to SDR values in the reference SDR master, converting HDR values from the HDR master into SDR values using the color volume transform, generating metadata items that identify the color volume transform to decoders, and encoding the SDR values into a bitstream.
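A drastically simplified sketch of the encoding steps above, assuming the color volume transform is a single least-squares scalar gain fitted against the reference SDR grade and clipped to the target display peak — a one-parameter stand-in for a full transform. The metadata schema is hypothetical.

```python
def fit_gain(hdr, reference_sdr):
    """Least-squares scalar gain mapping HDR values onto the reference
    SDR master (stand-in for finding a full color volume transform)."""
    num = sum(h * s for h, s in zip(hdr, reference_sdr))
    den = sum(h * h for h in hdr)
    return num / den

def apply_transform(hdr, gain, peak=1.0):
    """Convert HDR values to SDR, clipping to the target display peak."""
    return [min(gain * h, peak) for h in hdr]

def make_metadata(gain):
    """Metadata item identifying the transform to decoders (hypothetical schema)."""
    return {"transform": "scalar_gain", "gain": gain}
```

A decoder receiving the metadata could invert the gain on unclipped samples to approximately recover HDR values from the SDR bitstream.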