Patent classifications
H04N19/21
Image alignment method and device therefor
Provided is a method for automatically performing image alignment without a user input. An image alignment method performed by an image alignment device, according to one embodiment of the present invention, can comprise the steps of: recognizing at least one person in an input image; determining a person-of-interest among the recognized persons; and performing image alignment on the input image on the basis of the person-of-interest, wherein the image alignment is performed without any input from a user of the image alignment device.
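The recognize/determine/align steps above can be sketched as follows. This is a hypothetical illustration, not the patented method: person detection is stubbed out with precomputed bounding boxes, the largest box is taken as one plausible person-of-interest criterion (the abstract leaves the criterion open), and "alignment" is modeled as a clamped crop centered on that person.

```python
# Hypothetical sketch: detect persons, pick a person-of-interest,
# and align (crop) the image around that person with no user input.
# Detection is stubbed with precomputed (x0, y0, x1, y1) boxes.

def choose_person_of_interest(boxes):
    """Pick the person with the largest bounding-box area
    (an illustrative saliency criterion)."""
    return max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))

def align_crop(image_size, box, crop_size):
    """Center a crop window on the chosen box, clamped to the image."""
    iw, ih = image_size
    cw, ch = crop_size
    cx = (box[0] + box[2]) // 2
    cy = (box[1] + box[3]) // 2
    x0 = min(max(cx - cw // 2, 0), iw - cw)
    y0 = min(max(cy - ch // 2, 0), ih - ch)
    return (x0, y0, x0 + cw, y0 + ch)

boxes = [(10, 10, 40, 80), (100, 20, 200, 220)]
poi = choose_person_of_interest(boxes)
crop = align_crop((320, 240), poi, (160, 160))
```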
Method and apparatus for supporting augmented and/or virtual reality playback using tracked objects
Methods are described for capturing and generating information about objects in a 3D environment that can be used to support augmented reality or virtual reality playback operations in a data-efficient manner. In various embodiments, one or more frames including foreground objects are generated and transmitted with corresponding information that can be used to determine where the foreground objects are to be positioned relative to a background for one or more frame times. Data efficiency is achieved by specifying different locations for a foreground object at different frame times, avoiding in some embodiments the need to transmit an image and depth information defining the shape of the foreground object for each frame time. The frames can be encoded using a video encoder even though some of the information communicated is not pixel values but alpha blending values, object position information, mesh distortion information, etc.
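The data-efficiency idea above can be illustrated with a toy sketch, assuming (hypothetically) that a foreground patch is sent once alongside a small per-frame-time position record, and the player composites it onto the background at playback time. The function names and the grid-of-pixels model are illustrative, not the patented encoder.

```python
# Illustrative sketch: transmit a foreground object's image once plus
# per-frame-time position metadata, rather than re-sending the object's
# image and depth for every frame time.

def build_stream(fg_object_id, positions):
    """positions: {frame_time: (x, y)} -- one small record per frame time."""
    return {"object": fg_object_id, "positions": positions}

def compose(background, fg, stream, frame_time):
    """Paste the foreground patch onto a copy of the background grid
    at the position recorded for the requested frame time."""
    frame = [row[:] for row in background]
    x, y = stream["positions"][frame_time]
    for dy, row in enumerate(fg):
        for dx, px in enumerate(row):
            frame[y + dy][x + dx] = px
    return frame

bg = [[0] * 6 for _ in range(4)]       # 4x6 background grid
fg = [[9]]                             # 1x1 foreground "object"
stream = build_stream("ball", {0: (1, 1), 1: (4, 2)})
f0 = compose(bg, fg, stream, 0)
f1 = compose(bg, fg, stream, 1)
```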
Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
The image decoding method includes determining a context for use in a current block to be processed, from among a plurality of contexts, wherein in the determining: the context is determined under a condition that control parameters of a left block and an upper block are used, when the signal type is a first type; and the context is determined under a third condition that the control parameter of the upper block is not used and a hierarchical depth of a data unit to which the control parameter of the current block belongs is used, when the signal type is a third type, and the third type is one or more of (i) “merge_flag”, (ii) “ref_idx_l0” or “ref_idx_l1”, (iii) “inter_pred_flag”, (iv) “mvd_l0” or “mvd_l1”, (v) “intra_chroma_pred_mode”, (vi) “cbf_luma”, and (vii) “cbf_cb” or “cbf_cr”.
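The branching rule in the abstract can be sketched in a few lines: first-type signals derive their context from both the left and upper neighbour control parameters, while third-type signals ignore the upper neighbour and use the hierarchical depth of the current data unit instead. The context encodings below (tuples) and the first-type example signal are made-up illustrations, not the actual CABAC context indices.

```python
# Sketch of the context-selection rule: third-type signals drop the
# upper-neighbour dependency and key the context on hierarchical depth.

THIRD_TYPE = {"merge_flag", "ref_idx_l0", "ref_idx_l1", "inter_pred_flag",
              "mvd_l0", "mvd_l1", "intra_chroma_pred_mode",
              "cbf_luma", "cbf_cb", "cbf_cr"}

def select_context(signal_type, left_param, upper_param, depth):
    if signal_type in THIRD_TYPE:
        # Third condition: upper block's parameter is NOT used;
        # the data unit's hierarchical depth drives the context.
        return ("depth", depth)
    # First condition: control parameters of both the left
    # and upper blocks are used.
    return ("neighbours", left_param + upper_param)

ctx_first = select_context("split_flag", 1, 1, 2)   # hypothetical first-type signal
ctx_third = select_context("merge_flag", 1, 1, 2)   # third-type signal
```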
Apparatus and a method for associating a video block partitioning pattern to a video coding block
Embodiments of the invention relate to an apparatus for associating a video block partitioning pattern to a video coding block, wherein the apparatus comprises: an obtainer adapted to obtain values of a set of segmentation mask samples, wherein each segmentation mask sample of the set of segmentation mask samples represents a different position in a segmentation mask adapted to define video coding block partitions of the video coding block; a selector adapted to select a video block partitioning pattern from a predetermined group of video block partitioning patterns based on the values of segmentation mask samples of the set of segmentation mask samples; and an associator adapted to associate the selected video block partitioning pattern to the video coding block.
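The obtainer/selector/associator pipeline above can be sketched as follows. The choice of four corner sample positions, the pattern names, and the decision rules are illustrative assumptions; the abstract only requires that mask samples at different positions drive the selection from a predetermined pattern group.

```python
# Minimal sketch: sample a binary segmentation mask at a few positions
# and map those values to one pattern from a predetermined group.

def obtain_samples(mask):
    """Obtainer: read mask values at four corner positions (assumption)."""
    h, w = len(mask), len(mask[0])
    return (mask[0][0], mask[0][w - 1], mask[h - 1][0], mask[h - 1][w - 1])

def select_pattern(samples):
    """Selector: map corner samples to a pattern from a fixed group."""
    tl, tr, bl, br = samples
    if tl == tr and bl == br and tl != bl:
        return "horizontal_split"
    if tl == bl and tr == br and tl != tr:
        return "vertical_split"
    return "no_split"

def associate(block, mask):
    """Associator: attach the selected pattern to the coding block."""
    block["partition_pattern"] = select_pattern(obtain_samples(mask))
    return block

mask = [[0, 0, 0, 0],
        [0, 0, 0, 0],
        [1, 1, 1, 1],
        [1, 1, 1, 1]]
block = associate({"id": 7}, mask)
```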
Face-based frame rate upsampling for video calls
A method includes receiving a set of video frames that correspond to a video, including a first video frame and a second video frame that each include a face, wherein the second video frame is subsequent to the first video frame. The method further includes performing face tracking on the first video frame to identify a first face resampling keyframe and performing face tracking on the second video frame to identify a second face resampling keyframe. The method further includes deriving an interpolation amount. The method further includes determining a first interpolated face frame based on the first face resampling keyframe and the interpolation amount. The method further includes determining a second interpolated face frame based on the second face resampling keyframe and the interpolation amount. The method further includes rendering an interpolated first face and an interpolated second face. The method further includes displaying a final frame.
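The interpolation step described above can be illustrated with a toy blend between two keyframes. This assumes face geometry is represented as 2D landmark points and that the interpolation is linear in the derived amount `t`; a real system would render textured face meshes, and these representational choices are not stated in the abstract.

```python
# Sketch: blend face geometry between two face resampling keyframes
# by an interpolation amount t, to synthesize an intermediate frame.

def interpolate_face(key1, key2, t):
    """Linearly blend corresponding landmarks: (1 - t)*key1 + t*key2."""
    return [((1 - t) * x1 + t * x2, (1 - t) * y1 + t * y2)
            for (x1, y1), (x2, y2) in zip(key1, key2)]

key1 = [(10.0, 10.0), (20.0, 10.0)]   # first face resampling keyframe
key2 = [(14.0, 10.0), (24.0, 12.0)]   # second face resampling keyframe
mid = interpolate_face(key1, key2, 0.5)
```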
Alpha channel prediction
Image coding using alpha channel prediction may include generating a reconstructed image using alpha channel prediction and outputting the reconstructed image. Generating the reconstructed image using alpha channel prediction may include decoding reconstructed color channel values for a current pixel expressed with reference to a first color space, obtaining color space converted color channel values for the current pixel by converting the reconstructed color channel values to a second color space, obtaining an alpha channel lower bound for an alpha channel value for the current pixel using the color space converted color channel values, generating a candidate predicted alpha value for the current pixel, obtaining an adjusted predicted alpha value for the current pixel using the candidate predicted alpha value and the alpha channel lower bound, generating a reconstructed pixel for the current pixel using the adjusted predicted alpha value, and including the reconstructed pixel in the reconstructed image.
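The lower-bound-and-adjust step above can be sketched under one common assumption: with premultiplied-alpha storage, each color channel value cannot exceed the alpha value, so the maximum reconstructed channel gives a lower bound for alpha, and any candidate prediction below that bound is raised to it. The premultiplied-alpha premise and the 8-bit values are illustrative assumptions, not the patent's exact derivation.

```python
# Hedged sketch of alpha-channel-prediction clamping, assuming
# premultiplied alpha: every color channel value <= alpha, so
# max(channels) is a valid lower bound for the alpha prediction.

def alpha_lower_bound(color_channels):
    """With premultiplied alpha, alpha >= every channel value."""
    return max(color_channels)

def adjust_predicted_alpha(candidate, lower_bound):
    """Raise the candidate prediction to the feasible range if needed."""
    return max(candidate, lower_bound)

rgb = (96, 200, 64)          # reconstructed, color-space-converted values
bound = alpha_lower_bound(rgb)
alpha = adjust_predicted_alpha(180, bound)   # candidate below the bound
```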