Patent classifications
H04N19/146
Video encoding method, video decoding method, video encoding apparatus, and video decoding apparatus
A video encoding method of performing scalable encoding on input video includes: determining a total number of layers of the scalable encoding to be less than or equal to a maximum layer count determined according to a frame rate; and performing the scalable encoding on the input video to generate a bitstream, using the determined total number of layers.
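The claimed constraint — the total layer count never exceeds a frame-rate-dependent maximum — can be sketched as follows. The specific frame-rate thresholds and layer counts here are illustrative assumptions, not values from the patent; the thresholds reflect the common dyadic case where each extra temporal layer halves the base-layer frame rate.

```python
# Hypothetical mapping from frame rate to a maximum temporal-layer count.
# Thresholds and counts are assumptions for illustration only.

def max_layer_count(frame_rate: float) -> int:
    """Assumed rule: higher frame rates permit more temporal layers,
    since each extra dyadic layer halves the base-layer frame rate."""
    if frame_rate >= 120:
        return 5
    if frame_rate >= 60:
        return 4
    if frame_rate >= 30:
        return 3
    return 2

def choose_total_layers(frame_rate: float, requested_layers: int) -> int:
    # Clamp the requested layer count to the frame-rate-dependent maximum,
    # matching the "less than or equal to" condition in the claim.
    return min(requested_layers, max_layer_count(frame_rate))
```

The encoder would then run its scalable encode with `choose_total_layers(...)` layers.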
Fast multi-rate encoding for adaptive HTTP streaming
According to embodiments of the disclosure, information from higher- and lower-quality encoded video segments is used to limit Rate-Distortion Optimization (RDO) for each Coding Tree Unit (CTU). A method first encodes the highest bit-rate segment and subsequently uses it to encode the lowest bit-rate video segment. The block structures and selected reference frames of both the highest and lowest bit-rate segments are then used to predict and shorten the RDO process for each CTU at the middle bit rates. The method delays just one frame using parallel processing. This approach reduces time complexity for the middle bit rates compared to the reference software, while quality degradation is negligible.
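The pruning idea can be sketched as follows: for each CTU, the middle-rate encode only evaluates CU depths bounded by the depths chosen at the highest- and lowest-rate encodes, exploiting the tendency of block depth to vary monotonically with rate. The function names and the depth-range heuristic are illustrative assumptions, not the disclosed algorithm verbatim.

```python
# Sketch of RDO pruning for middle-rate encodes, assuming the depth
# picked at the highest rate bounds the search on one side and the
# depth picked at the lowest rate bounds it on the other.

def candidate_depths(depth_high: int, depth_low: int) -> range:
    """CU depth range to try for a middle-rate CTU, bounded by the
    depths chosen in the highest- and lowest-rate encodes."""
    lo, hi = sorted((depth_low, depth_high))
    return range(lo, hi + 1)

def rdo_search(depths, rd_cost):
    """Pick the depth with minimal RD cost among the pruned candidates.
    `rd_cost` stands in for the encoder's rate-distortion evaluation."""
    return min(depths, key=rd_cost)
```

Shrinking the candidate set from all depths to this bounded range is where the time-complexity reduction comes from.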
DEVICE AND METHOD FOR CODING VIDEO DATA
A method of decoding a bitstream by an electronic device is provided. The electronic device receives the bitstream. In addition, the electronic device determines a block unit from an image frame according to the bitstream and selects a plurality of intra candidate modes from a plurality of intra default modes for the block unit. The electronic device further generates a template prediction for each of the plurality of intra candidate modes, selects a plurality of prediction modes from the plurality of intra candidate modes based on the template predictions, and reconstructs the block unit based on the plurality of prediction modes.
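The mode-narrowing step can be sketched as a ranking: each intra candidate mode is scored by how well its template prediction matches the reconstructed template region, and the best few survive. The cost function and the keep-count are assumptions for illustration; a real decoder would compute, e.g., SAD over the template samples.

```python
# Sketch of template-based intra mode selection, assuming `template_cost`
# maps a mode to its template-matching cost (lower is better).

def select_prediction_modes(candidate_modes, template_cost, num_modes=2):
    """Rank the intra candidate modes by the cost of their template
    predictions and keep the best `num_modes` as prediction modes."""
    ranked = sorted(candidate_modes, key=template_cost)
    return ranked[:num_modes]
```

The block unit is then reconstructed from the surviving prediction modes.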
Robust encoding/decoding of escape-coded pixels in palette mode
Approaches to robust encoding and decoding of escape-coded pixels in a palette mode are described. For example, sample values of escape-coded pixels in palette mode are encoded/decoded using a binarization process that depends on a constant value of quantization parameter (“QP”) for the sample values. Or, as another example, sample values of escape-coded pixels in palette mode are encoded/decoded using a binarization process that depends on sample depth for the sample values. Or, as still another example, sample values of escape-coded pixels in palette mode are encoded/decoded using a binarization process that depends on some other fixed rule. In example implementations, these approaches avoid dependencies on unit-level QP values when parsing the sample values of escape-coded pixels, which can make encoding/decoding more robust to data loss.
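The sample-depth variant can be sketched as a fixed-length binarization whose code length depends only on sample depth, so the parser needs no unit-level QP to know how many bits to read. This is a minimal illustration of the parsing-robustness point, not the codec's actual entropy-coding pipeline.

```python
# Sketch: fixed-length binarization of an escape-coded sample value,
# with code length determined solely by sample depth (an assumption
# standing in for the described fixed-rule binarization).

def binarize_escape_sample(value: int, sample_depth: int) -> str:
    """Emit the value as exactly `sample_depth` bits."""
    assert 0 <= value < (1 << sample_depth)
    return format(value, f"0{sample_depth}b")

def parse_escape_sample(bits: str) -> int:
    """The parser recovers the value without consulting any QP."""
    return int(bits, 2)
```

Because the bit count is fixed by sample depth alone, a corrupted QP elsewhere in the unit cannot desynchronize the parse.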
Techniques for optimizing encoding tasks
In various embodiments, a shot collation application causes multiple encoding instances to encode a source video sequence that includes at least two shot sequences. The shot collation application assigns a first shot sequence to a first chunk. Subsequently, the shot collation application determines that a second shot sequence does not meet a collation criterion with respect to the first chunk. Consequently, the shot collation application assigns the second shot sequence or a third shot sequence derived from the second shot sequence to a second chunk. The shot collation application causes a first encoding instance to independently encode each shot sequence assigned to the first chunk. Similarly, the shot collation application causes a second encoding instance to independently encode each shot sequence assigned to the second chunk. Finally, a chunk assembler combines the first encoded chunk and the second encoded chunk to generate an encoded video sequence.
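The collation logic can be sketched with a simple greedy rule. Here the collation criterion is assumed to be a maximum chunk duration; a shot that would overflow the current chunk starts a new one, and an oversized shot is split into pieces (the "third shot sequence derived from the second" in the abstract). The criterion and the splitting rule are illustrative assumptions.

```python
# Sketch of shot collation under an assumed duration criterion.
# Shots are represented by their durations in seconds.

def collate_shots(shot_durations, max_chunk_seconds):
    """Greedily pack shots into chunks of at most `max_chunk_seconds`,
    splitting any shot that is itself longer than the limit."""
    chunks, current, used = [], [], 0.0
    for d in shot_durations:
        # Split an oversized shot into limit-sized pieces.
        pieces = []
        while d > max_chunk_seconds:
            pieces.append(max_chunk_seconds)
            d -= max_chunk_seconds
        if d > 0:
            pieces.append(d)
        for p in pieces:
            if used + p > max_chunk_seconds and current:
                chunks.append(current)       # close the full chunk
                current, used = [], 0.0
            current.append(p)
            used += p
    if current:
        chunks.append(current)
    return chunks
```

Each resulting chunk would then go to its own encoding instance, and the chunk assembler concatenates the encoded chunks in order.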
Smoothing bit rate variations in the distribution of media content
Methods and apparatus are described for delivering streams of media content in ways that smooth out the peaks that might otherwise occur due to the bit rate variations that result from encoding of the media content. This is accomplished by controlling the timing of the transmission of packets of the encoded media content.
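One simple way to realize "controlling the timing of the transmission of packets" is to pace transmissions in proportion to cumulative bytes over the segment's playback duration, so the send rate tracks the average bit rate instead of bursting at encoder peaks. This pacing rule is an illustrative assumption, not the patented scheduler.

```python
# Sketch: compute a send time for each packet so that bytes go out at
# a near-constant rate across the segment's playback duration.

def pacing_schedule(packet_sizes, segment_duration):
    """Return a transmission time (seconds from segment start) for each
    packet, proportional to the bytes already sent before it."""
    total = sum(packet_sizes)
    times, sent = [], 0
    for size in packet_sizes:
        times.append(segment_duration * sent / total)
        sent += size
    return times
```

For example, a burst of large packets early in the segment gets spread out, flattening the instantaneous rate seen by the network.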
ADAPTIVELY ENCODING VIDEO FRAMES USING CONTENT AND NETWORK ANALYSIS
An example apparatus for adaptively encoding video frames includes a network analyzer to predict an instant bitrate based on channel throughput feedback received from a network. The apparatus also includes a content analyzer to generate ladder info based on a received frame. The apparatus further includes an adaptive decision executer to determine a frame rate, a video resolution, and a target frame size based on the predicted instant bitrate and the ladder info. The apparatus further includes an encoder to encode the frame based on the frame rate, the video resolution, and the target frame size.
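The adaptive decision step can be sketched as picking the highest ladder rung that fits under the predicted instant bitrate, then deriving a per-frame byte budget from that rung's bitrate and frame rate. The ladder tuple format `(bps, width, height, fps)` and the fallback to the lowest rung are assumptions for illustration.

```python
# Sketch of the adaptive decision: map a predicted bitrate and an
# assumed ladder of (bps, width, height, fps) rungs to encoder settings.

def adaptive_decision(predicted_bps, ladder):
    """Choose the highest rung whose bitrate fits the prediction
    (falling back to the lowest rung), and compute a target frame
    size in bytes from that rung's bitrate and frame rate."""
    fitting = [rung for rung in ladder if rung[0] <= predicted_bps]
    bps, width, height, fps = max(fitting, default=min(ladder))
    target_frame_bytes = bps / fps / 8
    return width, height, fps, target_frame_bytes
```

The encoder would then be configured with the returned resolution, frame rate, and per-frame budget, and the network analyzer's next throughput feedback drives the next decision.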