Patent classifications
H04N19/64
Systems and methods for optimizing video encoding
The disclosed computer-implemented method may include receiving, from a client device, a video and data about at least one specialized construct applied to the video. The method may also include detecting, based on the data about the specialized construct, a region of interest at which to apply the specialized construct to the video. Additionally, the method may include reapplying the specialized construct to the video at the region of interest. Furthermore, the method may include encoding the video by prioritizing bit-rate allocation for the region of interest containing the specialized construct. Various other methods, systems, and computer-readable media are also disclosed.
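The abstract's final step, prioritizing bit-rate allocation for the region of interest, can be sketched as a per-block quantization-parameter (QP) map. This is a minimal illustration, not the patent's method: the function name, block representation, and QP offset are all assumptions; lowering QP for blocks overlapping the ROI is a common way to steer more bits toward it.

```python
# Hypothetical sketch of ROI-prioritized bit-rate allocation: blocks that
# overlap the region of interest receive a lower quantization parameter
# (i.e., more bits). All names and the QP offset are illustrative.

def allocate_qp(blocks, roi, base_qp=32, roi_qp_offset=-6):
    """Return a per-block QP list favoring blocks that overlap the ROI.

    blocks: list of (x, y, w, h) tuples
    roi:    (x, y, w, h) region of interest
    """
    rx, ry, rw, rh = roi
    qp_map = []
    for (bx, by, bw, bh) in blocks:
        # Standard axis-aligned rectangle overlap test.
        overlaps = bx < rx + rw and rx < bx + bw and by < ry + rh and ry < by + bh
        qp_map.append(base_qp + roi_qp_offset if overlaps else base_qp)
    return qp_map
```

In a real rate controller the offset would feed into the rate-distortion loop rather than being a fixed constant.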
Encoding and decoding a sequence of pictures
An apparatus for decoding a sequence of pictures from a data stream is configured for decoding a picture of the sequence by deriving a residual transform signal of the picture from the data stream; combining the residual transform signal with a buffered transform signal approximation of a previous picture of the sequence so as to obtain a transform signal representing the picture, the transform signal comprising a plurality of transform coefficients; and subjecting the transform signal to a spectral-to-spatial transformation. The apparatus is configured for deriving the buffered transform signal approximation from a further transform signal representing the previous picture, so that the buffered transform signal approximation comprises approximations of further transform coefficients of the further transform signal.
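The two core decoding steps in this abstract, combining the residual with a buffered transform-domain approximation and then applying a spectral-to-spatial transformation, can be sketched as follows. This is an illustrative assumption, not the claimed apparatus: signals are 1-D coefficient lists and the spectral-to-spatial transform is a plain orthonormal inverse DCT-II.

```python
import math

# Illustrative sketch (names assumed): the decoder adds the decoded residual
# transform signal to a buffered approximation of the previous picture's
# transform signal, then applies a spectral-to-spatial transform
# (here a 1-D orthonormal inverse DCT-II).

def combine(residual, buffered_approx):
    """Transform-domain prediction: transform signal = approximation + residual."""
    return [b + r for b, r in zip(buffered_approx, residual)]

def inverse_dct(coeffs):
    """Spectral-to-spatial transformation (orthonormal inverse DCT-II)."""
    n = len(coeffs)
    out = []
    for x in range(n):
        s = coeffs[0] / math.sqrt(n)
        for k in range(1, n):
            s += coeffs[k] * math.sqrt(2 / n) * math.cos(
                math.pi * (2 * x + 1) * k / (2 * n))
        out.append(s)
    return out
```

The point of the scheme is that prediction happens in the transform domain, so the inverse transform runs once per picture on the combined signal.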
Methods and apparatus for unified significance map coding
Methods and apparatus are provided for unified significance map coding. An apparatus includes a video encoder (400) for encoding transform coefficients for at least a portion of a picture. The transform coefficients are obtained using a plurality of transforms. One or more context sharing maps are generated for the transform coefficients based on a unified rule. The one or more context sharing maps are for providing at least one context that is shared among at least some of the transform coefficients obtained from at least two different ones of the plurality of transforms.
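The idea of a context-sharing map generated by a unified rule can be sketched as a single position-to-context mapping applied to every transform size. The specific rule below (diagonal distance from DC, capped at a last shared context) is an assumption for illustration, not the rule claimed in the patent.

```python
# Hedged sketch of a unified context-sharing rule (illustrative, not the
# patent's actual rule): significance-map contexts are assigned from each
# coefficient's position by one rule shared across transform sizes, so
# coefficients obtained from different transforms can share a context.

def context_sharing_map(width, height, num_contexts=9):
    """Map each (x, y) coefficient position to a shared context index."""
    ctx = {}
    for y in range(height):
        for x in range(width):
            # One rule for all transform sizes: low-frequency positions get
            # distinct contexts; the rest share the last context.
            ctx[(x, y)] = min(x + y, num_contexts - 1)
    return ctx
```

Because the rule depends only on position, a 4x4 and an 8x8 transform assign the same context to their common low-frequency positions, which is the sharing the abstract describes.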
Method for decoding an image, encoding method, devices, terminal equipment and associated computer programs
A method for decoding an encoded data stream representative of at least one image, which is divided into blocks. The decoding method includes, for a current block: evaluating a plurality of value hypotheses of at least one description element of the current block, by calculating a likelihood measurement per hypothesis; calculating a disparity in the likelihood measurements obtained; determining at least one parameter of a decoder based on the calculated disparity; decoding, using the determined decoder, complementary information for identifying at least one of the hypotheses; and identifying at least one of the hypotheses using the decoded complementary information and obtaining a value of the at least one description element for the current block, from the at least one identified hypothesis.
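The disparity-driven parameter adaptation in this abstract can be sketched in a few lines. Everything here is an assumption for illustration: disparity as the spread of the likelihood measurements, and the adapted decoder parameter as the number of bits of complementary information to read.

```python
# Illustrative sketch (names and threshold assumed): the decoder scores each
# value hypothesis with a likelihood, measures the disparity of those scores,
# and adapts a decoder parameter -- here, how many bits of complementary
# information it reads to identify the winning hypothesis.

def disparity(likelihoods):
    """Spread of the likelihood measurements across the hypotheses."""
    return max(likelihoods) - min(likelihoods)

def bits_needed(likelihoods, threshold=0.5):
    """High disparity: hypotheses are easy to tell apart, so little
    complementary information is needed; low disparity: more is needed."""
    return 1 if disparity(likelihoods) >= threshold else 2
```

The intuition is that when one hypothesis is clearly more likely than the others, the stream only needs to confirm or refute it cheaply.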
Method, apparatus and system for decoding and generating an image frame from a bitstream
A system and method of decoding a set of greatest coded line index values for a precinct of video data from a video bitstream, the precinct of video data including one or more subbands. The method comprises decoding a greatest coded line index prediction mode for each subband from the video bitstream; decoding a plurality of greatest coded line index delta values for each subband from the video bitstream using the greatest coded line index prediction mode for the subband; and producing the greatest coded line index values for each subband using the plurality of greatest coded line index delta values and the greatest coded line index prediction mode for the subband.
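The final step, producing the greatest coded line index (GCLI) values from the decoded delta values and the subband's prediction mode, can be sketched as below. The mode names are assumptions for illustration: "raw" (the delta is the value itself) and "horizontal" (the delta is taken against the previous value in the line).

```python
# Hedged sketch with assumed mode names: GCLI values for a subband are
# reconstructed from decoded delta values according to the subband's
# prediction mode -- "raw" or "horizontal" prediction.

def reconstruct_gcli(deltas, mode):
    """Produce GCLI values from delta values under the given prediction mode."""
    values = []
    prev = 0
    for d in deltas:
        v = d if mode == "raw" else prev + d
        values.append(v)
        prev = v
    return values
```

Signaling the prediction mode per subband lets the encoder pick whichever predictor makes the deltas cheapest to code for that subband.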
Template matching for JVET intra prediction
A method of decoding JVET video, comprising defining a coding unit (CU) template within a decoded area of a video frame, the CU template being positioned above and/or to the left of a current decoding position for which data was intra predicted, defining a search window within the decoded area, the search window being adjacent to the CU template, generating a plurality of candidate prediction templates based on pixel values in the search window, each of the plurality of candidate prediction templates being generated using different intra prediction modes, calculating a matching cost between the CU template and each of the plurality of candidate prediction templates, selecting an intra prediction mode that generated the candidate prediction template that had the lowest matching cost relative to the CU template, and generating a prediction CU for the current decoding position based on the intra prediction mode.
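The mode-selection core of this method, scoring each candidate prediction template against the CU template and keeping the cheapest mode, can be sketched as follows. The cost function (sum of absolute differences) and the flat-list template representation are illustrative assumptions.

```python
# Illustrative sketch (cost metric and data layout assumed): the decoder
# generates one candidate prediction template per intra mode from the search
# window, scores each against the reconstructed CU template with SAD, and
# selects the intra mode with the lowest matching cost.

def sad(a, b):
    """Sum of absolute differences: the matching cost between templates."""
    return sum(abs(x - y) for x, y in zip(a, b))

def select_intra_mode(cu_template, candidates):
    """candidates: dict mapping mode name -> candidate prediction template."""
    return min(candidates, key=lambda mode: sad(cu_template, candidates[mode]))
```

Because both encoder and decoder can run this search on already-decoded samples, the chosen mode need not be signaled explicitly, which is the appeal of template matching.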
Template-based inter prediction techniques based on encoding and decoding latency reduction
Video coding methods are described for reducing latency in template-based inter coding. In some embodiments, a method is provided for coding a video that includes a current picture and at least one reference picture. For at least a current block in the current picture, a respective predicted value is generated (e.g., using motion-compensated prediction) for each sample in a template region adjacent to the current block. Once the predicted values are generated for each sample in the template region, a process is invoked to determine a template-based inter prediction parameter by using predicted values in the template region and sample values of the reference picture. This process can be invoked without waiting for reconstructed sample values in the template region. Template-based inter prediction of the current block is then performed using the determined template-based inter prediction parameter.
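The latency-reduction idea, deriving the template-based parameter from motion-compensated *predicted* template samples instead of waiting for reconstructed ones, can be sketched with an offset-only model. The DC-offset parameter (as in a simplified illumination-compensation scheme) and all names are assumptions for illustration.

```python
# Hedged sketch (offset-only parameter model assumed): a template-based
# inter prediction parameter -- here a DC offset -- is derived from
# motion-compensated *predicted* template samples and the co-located
# reference template samples, so the decoder need not wait for the
# template region to be reconstructed.

def derive_offset(predicted_template, reference_template):
    """Parameter derivation using predicted (not reconstructed) samples."""
    n = len(predicted_template)
    return (sum(predicted_template) - sum(reference_template)) / n

def apply_inter_prediction(reference_block, offset):
    """Template-based inter prediction of the current block."""
    return [s + offset for s in reference_block]
```

Since the predicted template samples are available as soon as motion compensation finishes, the parameter derivation no longer sits on the critical reconstruction path, which is the latency saving the abstract describes.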