H04N11/04

Triangle prediction with applied-block settings and motion storage settings
11546603 · 2023-01-03

A video coder receives data from a bitstream for a block of pixels to be encoded or decoded as the current block of a current picture of a video. Upon determining that an applied-block setting of the current block satisfies a threshold condition, the video coder generates a first prediction based on first motion information for a first prediction unit of the current block, and a second prediction based on second motion information for a second prediction unit of the current block. The video coder generates a third prediction, based on both the first and second motion information, for an overlap prediction region defined by the partitioning between the first and second prediction units. The video coder encodes or decodes the current block using the first, second, and third predictions.
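The blending in the overlap region can be sketched as a weighted combination of the two unidirectional predictions, with weights that ramp across the diagonal partition. The 8×8 block size, the band width, and the linear weight profile below are illustrative assumptions; the abstract only states that the overlap region is predicted from both motion informations.

```python
import numpy as np

def blend_triangle_prediction(pred1, pred2, block_size=8):
    """Blend two predictions across a diagonal partition, using a
    weighted overlap band along the diagonal.  The weight ramp is
    illustrative, not the patent's exact weighting."""
    n = block_size
    rows, cols = np.indices((n, n))
    dist = cols - rows  # signed distance from the diagonal:
                        # negative = PU1 side, positive = PU2 side
    # Weight for pred1 ramps linearly from 1 to 0 across the band.
    w1 = np.clip(0.5 - dist / 4.0, 0.0, 1.0)
    return w1 * pred1 + (1.0 - w1) * pred2
```

Samples far from the diagonal take only their own prediction unit's prediction; samples on the diagonal average the two.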

Video coding and decoding

A method of encoding a motion information predictor index for an Affine Merge mode, comprising: generating a list of motion information predictor candidates; selecting one of the motion information predictor candidates in the list as an Affine Merge mode predictor; and generating a motion information predictor index for the selected motion information predictor candidate using CABAC coding, one or more bits of the motion information predictor index being bypass CABAC coded.
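A common arrangement matching this description is a truncated-unary binarization in which only the first bin is context (CABAC) coded and the remaining bins are bypass coded. The toy encoder and the exact bin split below are assumptions for illustration; the abstract says only that one or more bits of the index are bypass coded.

```python
class ToyBinEncoder:
    """Toy stand-in for a CABAC engine: records each bin and
    whether it was context coded or bypass coded."""
    def __init__(self):
        self.bins = []  # list of (mode, bit) pairs

    def context_bin(self, bit):
        self.bins.append(("ctx", bit))

    def bypass_bin(self, bit):
        self.bins.append(("byp", bit))

def encode_affine_merge_index(enc, idx, max_idx):
    """Truncated-unary coding of a merge-candidate index: the first
    bin is context coded, all later bins are bypass coded."""
    for i in range(idx):
        (enc.context_bin if i == 0 else enc.bypass_bin)(1)
    if idx < max_idx:  # terminating zero, unless idx is the maximum
        (enc.context_bin if idx == 0 else enc.bypass_bin)(0)
```

Bypass bins skip the probability-model update, so pushing most index bins to bypass mode trades a little compression for faster entropy coding.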

Context initialization in entropy coding

A decoder includes an entropy decoder configured to derive the bins of binarizations of syntax elements from the data stream using binary entropy decoding, selecting a context among different contexts and updating probability states associated with those contexts depending on previously decoded portions of the data stream; a desymbolizer configured to debinarize the binarizations to obtain integer values of the syntax elements; and a reconstructor configured to reconstruct the video from the integer values of the syntax elements using a quantization parameter. The entropy decoder is configured to distinguish between 126 probability states and to initialize the probability states associated with the different contexts according to a linear equation of the quantization parameter, deriving, for each of the different contexts, the slope and offset of the linear equation from the first and second four-bit parts of a respective 8-bit initialization value.
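The initialization step can be sketched as follows: split the 8-bit value into two nibbles, map them to a slope and an offset, and evaluate the linear equation at the slice quantization parameter. The nibble-to-slope/offset mappings and the QP anchor of 32 are illustrative assumptions; real codecs use fixed tabulated mappings.

```python
def init_context_state(init_value, qp, num_states=126):
    """Derive a context's initial probability state from an 8-bit
    initialization value via a linear equation of the QP.  The
    slope/offset scaling here is a sketch, not a codec's table."""
    slope_idx  = (init_value >> 4) & 0xF   # upper four bits
    offset_idx = init_value & 0xF          # lower four bits
    slope  = slope_idx * 5 - 45            # illustrative signed slope
    offset = (offset_idx << 3) - 16        # illustrative offset
    state = ((slope * (qp - 32)) >> 4) + offset + num_states // 2
    return max(1, min(num_states, state))  # clamp to the 126 states
```

Storing only one byte per context while still letting the initial probability track the QP is the point of the slope/offset split.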

Method and apparatus for encoding/decoding video

Disclosed are a method and apparatus for encoding/decoding a video. According to an embodiment, provided is a method of setting a level for each of one or more regions, including: decoding, from a bitstream, a definition syntax element related to level definition and a designation syntax element related to target designation; defining one or more levels based on the definition syntax element; and setting, for a target region designated by the designation syntax element, a target level that the designation syntax element designates among the defined levels.

Method and apparatus of signaling subpicture information in video coding

Methods and apparatus for video coding are disclosed. According to one method, a bitstream is generated or received that includes a first syntax and a second syntax. The first syntax is related to a target number of bits used to represent a set of third syntaxes, each of which specifies one subpicture ID for one subpicture in a set of subpictures. The second syntax is related to the total number of subpictures in the set, where the number of values that can be represented by the target number of bits is equal to or greater than the total number of subpictures. According to another method, the subpicture ID syntaxes have different values for different subpictures.
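The constraint that the representable count must be at least the total number of subpictures fixes the minimum signaled bit length. A minimal sketch (the function name is an assumption; the floor of one bit follows from needing at least one ID bin):

```python
def subpic_id_len_bits(num_subpics):
    """Smallest bit length such that 2**bits >= num_subpics,
    i.e. the fewest bits that can represent every subpicture ID."""
    bits = 1
    while (1 << bits) < num_subpics:
        bits += 1
    return bits
```

So 16 subpictures fit in 4-bit IDs, while 17 force 5 bits; signaling the length once avoids coding each ID with a worst-case width.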

Picture tile attributes signaled using loop(s) over tiles

For encoding a picture comprising a plurality of tiles into a bitstream, a method and apparatus are provided for signaling tile attribute values per tile using a compact syntax: a loop (or loops) over the tiles. The tile attributes may, for example, take the form of a set of tile syntax elements (one syntax element per tile attribute), or of a set of flags that enable or disable the usage of the tile attributes. These embodiments give an encoder the freedom to assign attribute values per tile, or per any subset of tiles in a picture, while keeping the signaling compact.
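The per-tile loop can be sketched as below. The writer class, its method names, and the `qp_offset` attribute are hypothetical placeholders; the abstract specifies only that attributes (or enable/disable flags) are signaled inside a loop over tiles.

```python
class BitstreamWriter:
    """Toy bitstream writer: records signaled symbols instead of
    packing bits (hypothetical stand-in for a real writer)."""
    def __init__(self):
        self.symbols = []

    def put_flag(self, b):
        self.symbols.append(("flag", int(b)))

    def put_uvlc(self, v):  # unsigned variable-length code
        self.symbols.append(("uvlc", v))

def write_tile_attributes(writer, tile_attrs):
    """Signal per-tile attribute values with a single loop over
    the tiles, conditioning each value on its enable flag."""
    for attrs in tile_attrs:                      # loop over tiles
        writer.put_flag(attrs["enabled"])         # enable/disable flag
        if attrs["enabled"]:
            writer.put_uvlc(attrs["qp_offset"])   # example attribute
```

Disabled tiles cost one flag bit each, so the syntax stays compact even when only a few tiles carry non-default attributes.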

Video encoding method and video decoding method

A video encoding method encodes a multi-view image including one or more basic view images and a plurality of reference view images. The method includes: determining a pruning order of the plurality of reference view images; acquiring a plurality of residual reference view images by pruning the plurality of reference view images, based on the one or more basic view images, according to the pruning order; encoding the one or more basic view images and the plurality of residual reference view images; and outputting a bitstream including encoding information of the one or more basic view images and the plurality of residual reference view images.

Method and device for coding transform coefficient

An image decoding method according to the present document comprises the steps of: receiving a bitstream including residual information; deriving a quantized transform coefficient for a current block on the basis of the residual information included in the bitstream; deriving a residual sample for the current block on the basis of the quantized transform coefficient; and generating a reconstructed picture on the basis of the residual sample for the current block, wherein the residual information may be derived via different syntax elements depending on whether a transform has been applied to the current block.

Device and method for scalable coding of video information

An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a first video layer having a current picture. The processor is configured to process a first offset associated with the current picture, the first offset indicating a difference between (a) most significant bits (MSBs) of a first picture order count (POC) of a previous picture in the first video layer that precedes the current picture in decoding order and (b) MSBs of a second POC of the current picture.
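Splitting a POC into MSB and LSB parts, and signaling only the MSB offset between pictures, can be sketched as below. The function name, the sign convention of the difference, and the default LSB cycle length are assumptions; the abstract states only that the offset is the difference between the two MSB parts.

```python
def poc_msb_offset(prev_poc, curr_poc, max_poc_lsb=16):
    """Difference between the MSB parts of the previous picture's
    POC and the current picture's POC, where max_poc_lsb is the
    LSB cycle length (a power of two).  Sign convention assumed."""
    prev_msb = prev_poc - (prev_poc % max_poc_lsb)  # strip LSB part
    curr_msb = curr_poc - (curr_poc % max_poc_lsb)
    return prev_msb - curr_msb
```

Signaling this offset lets a decoder recover the current picture's full POC from the previous picture in decoding order even across LSB wrap-arounds.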

Increasing effective eyebox size of an HMD
09851565 · 2017-12-26

A method of dynamically increasing the effective size of an eyebox of a head-mounted display includes displaying a computer generated image ("CGI") to an eye of a user wearing the head-mounted display. The CGI is perceivable by the eye within an eyebox. An eye image of the eye is captured while the eye is viewing the CGI. A location of the eye is determined based upon the eye image. A lateral position of the eyebox is dynamically adjusted based upon the determined location of the eye, thereby extending the effective size of the eyebox from which the eye can view the CGI.
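The adjustment step can be sketched as a simple tracking update that shifts the eyebox center toward the detected eye position. Proportional control and the gain value are illustrative assumptions; the abstract says only that the lateral position is adjusted based on the determined eye location.

```python
def update_eyebox_center(eye_x_mm, eyebox_center_mm, gain=0.5):
    """Shift the eyebox laterally toward the detected eye position.
    A proportional update with an assumed gain; gain=1.0 would snap
    the eyebox directly onto the eye each frame."""
    return eyebox_center_mm + gain * (eye_x_mm - eyebox_center_mm)
```

Repeating this per captured frame keeps the (small) optical eyebox centered on the pupil, which is what makes its effective size larger than its instantaneous size.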