H04N19/10

Adaptive streaming using chunked time-to-offset mapping
09807138 · 2017-10-31

Systems and methods are provided herein relating to adaptive video streaming. Time-to-offset mapping associated with a set of video blocks can be broken up into chunks. A client can download a first set of seek index chunks and use the first set of seek index chunks to select a stream. Seek index chunks within remaining sets of seek index chunks can be ranked for relevance based on client capabilities. A subset of the remaining sets of seek index chunks can be downloaded based on the rankings and client capabilities during streaming. Chunked time-to-offset mapping can facilitate faster startup when playing streamed video.
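The chunking and ranking described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the seek index is modeled as sorted (time, byte-offset) pairs, and "relevance" is approximated as distance of a chunk's time span from the playhead; all names and the ranking heuristic are assumptions.

```python
# Hypothetical sketch of chunked time-to-offset mapping: the seek index is
# split into chunks, and remaining chunks are ranked by relevance (modeled
# here as distance from the current playback position) so a client can
# download the most useful chunks first during streaming.

def chunk_seek_index(seek_index, chunk_size):
    """Split a time->byte-offset mapping (list of (time, offset) pairs,
    sorted by time) into chunks of at most chunk_size entries."""
    return [seek_index[i:i + chunk_size]
            for i in range(0, len(seek_index), chunk_size)]

def rank_chunks(chunks, playhead_time):
    """Order chunks by how close their time span is to the playhead."""
    def distance(chunk):
        start, end = chunk[0][0], chunk[-1][0]
        if start <= playhead_time <= end:
            return 0
        return min(abs(playhead_time - start), abs(playhead_time - end))
    return sorted(chunks, key=distance)

index = [(t, t * 1000) for t in range(0, 60, 5)]   # one entry per 5 s
chunks = chunk_seek_index(index, 4)
ranked = rank_chunks(chunks, playhead_time=27)      # chunk covering 20-35 s first
```

A real client would additionally weight the ranking by the client capabilities mentioned in the abstract (bandwidth, decode limits), which this sketch omits.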

VIDEO CODING USING A SALIENCY MAP
20170310979 · 2017-10-26

A video coder includes a processing resource and a non-transitory storage device containing instructions executable by the processing resource to compute a weighted Δ frame based on a saliency map and a Δ frame. The saliency map is to indicate the relative importance of each pixel in a current frame based on its perceptual significance. The Δ frame is to include differences between corresponding pixels in a current frame and a motion predicted frame.
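The weighted Δ frame computation can be illustrated as below. This is a minimal sketch under stated assumptions, not the claimed coder: frames are plain 2-D arrays, and the saliency map is assumed to hold per-pixel weights in [0, 1].

```python
# Illustrative weighted delta frame: each residual pixel (current minus
# motion-predicted) is scaled by its saliency weight so that perceptually
# important regions retain more residual energy.

def weighted_delta_frame(current, predicted, saliency):
    """current, predicted: 2-D lists of pixel values; saliency: 2-D list
    of weights in [0, 1]. Returns the element-wise weighted difference."""
    return [[saliency[y][x] * (current[y][x] - predicted[y][x])
             for x in range(len(current[0]))]
            for y in range(len(current))]

cur = [[10, 20], [30, 40]]
pred = [[8, 20], [25, 40]]
sal = [[1.0, 0.5], [0.5, 0.0]]
wdelta = weighted_delta_frame(cur, pred, sal)   # → [[2.0, 0.0], [2.5, 0.0]]
```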

Adaptive deadzone and rate-distortion skip in video processing
09781418 · 2017-10-03

This disclosure relates to implementing an adaptive deadzone for one or more quantized coefficients in a quantized block. In particular, one or more candidate blocks with one or more coefficients and an end of block (EOB) indicator are generated. The one or more coefficients are a subset of the one or more quantized coefficients in the quantized block. A cost value for each of the one or more candidate blocks is calculated based at least in part on a rate value and a distortion value of the one or more coefficients in each of the one or more candidate blocks. Accordingly, a candidate block from the one or more candidate blocks with a lowest calculated cost value is selected as an output block.
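The candidate-block selection can be sketched as a standard rate-distortion loop. This is a toy illustration, not the disclosed implementation: each candidate keeps a prefix of the quantized coefficients up to a candidate end-of-block position, the rest are zeroed, and the rate and distortion models (and λ) are made-up stand-ins.

```python
# Illustrative rate-distortion candidate selection: for each candidate
# end-of-block (EOB) position, zero the trailing coefficients, compute
# cost = rate + lambda * distortion, and keep the cheapest candidate.

def rd_select(coeffs, lam=0.5):
    best_cost, best_block = float("inf"), None
    for eob in range(len(coeffs) + 1):          # candidate EOB positions
        candidate = coeffs[:eob] + [0] * (len(coeffs) - eob)
        rate = sum(1 + abs(c) for c in candidate if c != 0)   # toy rate model
        distortion = sum((a - b) ** 2 for a, b in zip(coeffs, candidate))
        cost = rate + lam * distortion
        if cost < best_cost:
            best_cost, best_block = cost, candidate
    return best_block

# Small trailing coefficients are cheaper to drop than to code.
output_block = rd_select([9, 4, 1, 1, 0, 1])    # → [9, 4, 0, 0, 0, 0]
```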

Methods and apparatuses for encoding and decoding video using periodic buffer description

A method of encoding video including: writing a plurality of predetermined buffer descriptions into a sequence parameter set of a coded video bitstream; writing a plurality of updating parameters into a slice header of the coded video bitstream for selecting and modifying one buffer description out of the plurality of buffer descriptions; and encoding a slice into the coded video bitstream using the slice header and the modified buffer description.
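The select-and-modify step can be illustrated with plain data structures. This is a conceptual sketch only: the dictionaries stand in for the sequence parameter set and slice-header syntax, and all field names and the override scheme are assumptions, not the actual bitstream format.

```python
# Conceptual periodic buffer descriptions: a set of predefined descriptions
# is signaled once (here, standing in for the sequence parameter set), and
# per-slice updating parameters select one description and override some of
# its reference-picture entries before the slice is encoded against it.

sps_buffer_descriptions = [
    {"id": 0, "refs": [-1, -2, -3]},   # illustrative reference POC deltas
    {"id": 1, "refs": [-1, -4, -8]},
]

def apply_slice_update(descriptions, selected_id, overrides):
    """Select a buffer description by id and apply per-slice overrides,
    a mapping from entry index to replacement reference delta."""
    bd = dict(next(d for d in descriptions if d["id"] == selected_id))
    refs = list(bd["refs"])                 # copy: the SPS set is unchanged
    for index, delta in overrides.items():
        refs[index] = delta
    bd["refs"] = refs
    return bd

# Slice header selects description 1 and replaces its second entry.
modified = apply_slice_update(sps_buffer_descriptions, 1, {1: -2})
```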

Advanced motion estimation
09743103 · 2017-08-22

Encoding and decoding using advanced motion estimation may include encoding a video stream including a plurality of frames by generating a first encoded frame based on a first frame from the plurality of frames, generating a first reconstructed frame based on the first encoded frame, generating reference frame index information based on the first reconstructed frame, generating an encoded reference frame based on the first reconstructed frame, generating a second reconstructed reference frame based on the encoded reference frame, and generating a second encoded frame based on a second frame from the plurality of frames, the reference frame index information, and the second reconstructed reference frame.
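The sequence of generating steps above can be sketched as a pipeline. This is a loose illustration under toy assumptions: encode/reconstruct are modeled as lossy rounding, and the reference index is a simple checksum, so the only point preserved is the data flow, in which the reconstructed frames (not the source frames) drive the reference index and the second encode.

```python
# Loose sketch of the described pipeline with placeholder codec operations.

def encode(frame):
    return [v // 2 for v in frame]          # toy lossy encode

def reconstruct(encoded):
    return [v * 2 for v in encoded]         # toy decode

def reference_index(frame):
    return sum(frame) % 256                 # toy index over the frame

def encode_stream(frames):
    first_encoded = encode(frames[0])
    first_recon = reconstruct(first_encoded)
    ref_info = reference_index(first_recon)      # index from reconstruction
    encoded_ref = encode(first_recon)
    recon_ref = reconstruct(encoded_ref)
    # The second frame is coded against the twice-reconstructed reference.
    residual = [a - b for a, b in zip(frames[1], recon_ref)]
    return {"ref_info": ref_info, "second_encoded": encode(residual)}

out = encode_stream([[10, 11, 13], [12, 11, 15]])
```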

Method of accelerated test automation through unified test workflows
09740596 · 2017-08-22

Various embodiments describe techniques, methods, and systems for accelerated test automation. A first script representing a first test case of an application under test is invoked in response to a set of input data. From the first script, a plurality of generalized script elements are invoked, where each generalized script element tests a specific functionality of the application under test. A second script, representing a second test case, is executed, and at least some of the plurality of generalized script elements that were invoked by the first script are also invoked by the second script. Thereafter, it is determined whether the first and second test cases have passed or failed based on execution of the first and second scripts.
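The reuse of generalized script elements can be sketched as below. All names (`login_element`, `search_element`, the dictionary-shaped application) are hypothetical stand-ins chosen for illustration; the point is only that two test scripts share the same element functions.

```python
# Illustrative unified test workflows: test cases are composed from shared
# "generalized script elements", each exercising one piece of functionality
# of the application under test, so multiple scripts can reuse them.

def login_element(app, user):
    return user in app["users"]

def search_element(app, query):
    return query in app["catalog"]

def run_script(app, elements):
    """A test case passes only if every generalized element passes."""
    return all(element(app) for element in elements)

app_under_test = {"users": {"alice"}, "catalog": {"widget"}}

first_case = [lambda a: login_element(a, "alice"),
              lambda a: search_element(a, "widget")]
second_case = [lambda a: login_element(a, "alice"),     # element reused
               lambda a: search_element(a, "gadget")]

first_passed = run_script(app_under_test, first_case)    # passes
second_passed = run_script(app_under_test, second_case)  # fails on search
```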

Variable number of intra modes for video coding

A video coder determines that a first block of the video data is intra mode coded; based on a first height and a first width of the first block, identifies a group of N available intra prediction modes for the first block of the video data; selects, from the group of N available intra prediction modes, a first intra prediction mode used to code the first block of the video data; and codes the first block using the first intra prediction mode. A video coder generates a first most probable mode (MPM) candidate list for the block; codes a first flag indicating that an actual intra prediction mode used to code the block is not included in the first MPM candidate list; and generates a second MPM candidate list by deriving at least one candidate intra prediction mode based on an intra prediction mode in the first MPM candidate list.
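The two mechanisms above can be sketched together. This is a hedged illustration: the size thresholds, the mode counts, and the neighbour-derivation rule for the second MPM list are made up for the example and are not the actual codec tables.

```python
# Illustrative variable intra mode count and derived MPM list: the number N
# of available intra prediction modes depends on block dimensions, and a
# second MPM candidate list is derived from modes in the first list.

def num_intra_modes(width, height):
    """Pick N from block area (illustrative thresholds, not a real codec's)."""
    if width * height <= 16:
        return 19
    if width * height <= 64:
        return 35
    return 67

def second_mpm_list(first_mpm, n_modes):
    """Derive extra candidates as +/-1 neighbours of the first list's modes,
    skipping modes already present."""
    derived = []
    for mode in first_mpm:
        for neighbour in (mode - 1, mode + 1):
            cand = neighbour % n_modes
            if cand not in first_mpm and cand not in derived:
                derived.append(cand)
    return derived

n = num_intra_modes(8, 8)             # 64 samples -> 35 modes in this sketch
mpm2 = second_mpm_list([10, 26], n)   # neighbours of the first list's modes
```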