H04N19/152

Method and apparatus for decoding an enhanced video stream
09854272 · 2017-12-26

A method of decoding an enhanced video stream composed of base layer video access units and enhancement layer video access units, each access unit comprising a plurality of syntax structures, includes passing the syntax structures of the base layer access units to a base layer buffer, passing syntax structures of the enhancement layer access units to an enhancement layer buffer, outputting the syntax structures passed to the base layer buffer in a predetermined sequence, outputting the syntax structures passed to the enhancement layer buffer in a predetermined sequence, and recombining the sequences of syntax structures output by the base layer buffer and the enhancement layer buffer respectively to form a complete enhanced access unit, composed of base layer syntax structures and enhancement layer syntax structures in a predetermined sequence.
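The demultiplex-and-recombine flow described above can be sketched as follows. This is a hypothetical illustration, not the claimed implementation: syntax structures are modeled as simple `(layer, payload)` tuples, the two buffers are plain FIFO queues, and "predetermined sequence" is assumed to mean FIFO order with base layer structures preceding enhancement layer structures.

```python
from collections import deque

def split_and_recombine(access_unit):
    """Route base and enhancement layer syntax structures through
    separate FIFO buffers, then recombine them into one complete
    enhanced access unit: base layer structures first, each buffer
    output in arrival order (assumed ordering, for illustration)."""
    base_buf, enh_buf = deque(), deque()
    for layer, payload in access_unit:
        if layer == "base":
            base_buf.append(payload)
        else:
            enh_buf.append(payload)
    # Recombine the two output sequences into one access unit.
    return list(base_buf) + list(enh_buf)
```

For example, an interleaved unit `[("base", "sps"), ("enh", "sps_ext"), ("base", "slice0"), ("enh", "slice0_ext")]` would recombine to `["sps", "slice0", "sps_ext", "slice0_ext"]`.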

Local illumination compensation for video encoding and decoding using stored parameters

A local illumination compensation system for video encoding and decoding uses memory for storing illumination compensation parameters and does not require access to reconstructed pixels of neighboring blocks. A set of illumination compensation parameters is stored in a dedicated buffer, which is of limited size, and which is decoupled from the coding unit level storage of information. The buffer contains a set of illumination compensation parameters, which may be, for example, computed (or determined in some other manner) on the fly or determined beforehand (for example, obtained from the video signal or from a device).
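A minimal sketch of such a dedicated parameter buffer is shown below. The capacity, the `(scale, offset)` parameter form, and the oldest-entry eviction policy are all assumptions for illustration; the abstract only specifies that the buffer is of limited size and decoupled from coding-unit-level storage.

```python
from collections import deque

class LICParameterBuffer:
    """Hypothetical fixed-capacity buffer of illumination compensation
    parameters, stored as (scale, offset) pairs. When the buffer is
    full, the oldest entry is evicted (one plausible policy)."""

    def __init__(self, capacity=8):
        self.buf = deque(maxlen=capacity)  # limited size, FIFO eviction

    def store(self, scale, offset):
        self.buf.append((scale, offset))

    def lookup(self, index):
        return self.buf[index]

    def apply(self, pixel, index):
        """Apply a stored linear compensation model to one pixel value;
        no reconstructed neighboring pixels are needed."""
        scale, offset = self.buf[index]
        return scale * pixel + offset
```

Because the parameters live in their own small buffer rather than alongside each coding unit, per-block storage stays constant regardless of how many blocks share a parameter set.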

System and method for selecting quantization parameter (QP) in display stream compression (DSC)

An apparatus for coding video data according to certain aspects includes a memory for storing the video data and a processor. The memory includes a buffer. The processor is configured to receive the video data to be coded. The processor is further configured to determine a quantization parameter (QP) of a current block of the video data without considering a type of content of the video data and a rate-distortion model associated with the type of content. The processor is also configured to code the current block in a bitstream using the determined QP.
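One way to realize a content-agnostic QP decision is to drive QP from the rate buffer state alone. The sketch below assumes a simple threshold rule and illustrative thresholds; none of these specifics come from the source, which states only that QP is determined without a content-type classification or a per-content rate-distortion model.

```python
def select_qp(buffer_fullness, prev_qp, min_qp=0, max_qp=56):
    """Hypothetical content-agnostic QP update for a constant-bitrate
    rate buffer: QP moves with buffer fullness (0.0 to 1.0) only.
    Thresholds and step size are illustrative assumptions."""
    if buffer_fullness > 0.75:
        qp = prev_qp + 1   # buffer filling up: quantize more coarsely
    elif buffer_fullness < 0.25:
        qp = prev_qp - 1   # buffer draining: spend bits on quality
    else:
        qp = prev_qp       # steady state: keep the current QP
    return max(min_qp, min(max_qp, qp))
```

Avoiding content classification keeps the rate controller simple and its hardware cost low, which matters in display stream compression where the codec sits on a display link.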

Using Uplink Buffer Status to Improve Video Stream Adaptation Control

A method for sending video over a cellular link includes obtaining from a modem layer an uplink buffer status metric. An indication derived from the metric is transmitted from the modem layer to a video source application. The indication is used to adjust a video bitrate.
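A control rule of this kind can be sketched as below. The thresholds, scaling factors, and byte-count indication are assumptions for illustration; the source says only that an indication derived from the modem layer's uplink buffer status metric is used to adjust the video bitrate.

```python
def adjust_bitrate(current_bps, uplink_buffer_bytes,
                   high=200_000, low=50_000):
    """Hypothetical bitrate adaptation driven by uplink buffer
    occupancy reported from the modem layer. Thresholds and step
    factors are illustrative assumptions."""
    if uplink_buffer_bytes > high:
        return int(current_bps * 0.8)   # buffer backing up: reduce rate
    if uplink_buffer_bytes < low:
        return int(current_bps * 1.05)  # headroom available: probe up
    return current_bps                  # occupancy nominal: hold rate
```

Reading the buffer state directly from the modem layer lets the application react to radio-link congestion before end-to-end signals (such as receiver feedback) would reveal it.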

Frame Dropping Method for Video Frame and Video Sending Apparatus
20170347158 · 2017-11-30

A frame dropping method for a video frame and a video sending apparatus are used to perform frame dropping processing on video frames in order to reduce a quantity of dropped frames, enhance video playing smoothness, and improve user experience. A specific solution includes obtaining a video frame sequence of a to-be-sent video, establishing a reference relationship between video frames in the video frame sequence according to a preset criterion, detecting a data occupation length of buffered video frames in a video sending buffer while the video frame sequence is being sent, dropping a current to-be-buffered video frame when the data occupation length is greater than a preset threshold, and dropping all video frames in the video frame sequence that reference the current to-be-buffered video frame according to the reference relationship.
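The dependency-propagation step above can be sketched as a transitive closure over the reference relationship: once a frame is dropped, every frame that references it (directly or indirectly) becomes undecodable and is dropped too. The dictionary representation of the reference relationship and the frame labels are assumptions for illustration.

```python
def frames_to_drop(dropped_frame, references):
    """Given a frame to drop and a mapping frame -> list of frames
    that reference it, return the full set of frames that must be
    dropped: the frame itself plus every frame that transitively
    depends on it."""
    to_drop, stack = set(), [dropped_frame]
    while stack:
        frame = stack.pop()
        if frame in to_drop:
            continue
        to_drop.add(frame)
        # Any frame referencing this one is now undecodable.
        stack.extend(references.get(frame, []))
    return to_drop
```

For example, with `references = {"P1": ["P2", "B1"], "P2": ["P3"]}`, dropping `P1` also drops `P2`, `B1`, and `P3`, so the sender never transmits frames the receiver could not decode anyway.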

ENCODING AND DECODING USING TILING
20220353513 · 2022-11-03 ·

Video coding using tiling may include encoding a current frame by identifying a tile-width for encoding a current tile of the current frame, the tile-width indicating a cardinality of horizontally adjacent blocks in the current tile, identifying a tile-height for encoding the current tile of the current frame, the tile-height indicating a cardinality of vertically adjacent blocks in the current tile, and generating an encoded tile by encoding the current tile, such that a row of the current tile includes tile-width horizontally adjacent blocks from the plurality of blocks, and a column of the current tile includes tile-height vertically adjacent blocks from the plurality of blocks. Encoding the current frame may include outputting the encoded tile, wherein outputting the encoded tile includes including an encoded-tile size in an output bitstream, the encoded-tile size indicating a cardinality of bytes for including the encoded tile in the output bitstream.
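The encoded-tile size signaling can be sketched as below: the byte count is written ahead of each tile so a decoder can locate tile boundaries (and, for example, hand tiles to parallel decoder threads) without parsing tile contents. The 4-byte big-endian size field is an assumption for illustration, not a detail from the source.

```python
import struct

def emit_encoded_tile(bitstream, tile_payload):
    """Append one encoded tile to the output bitstream, preceded by
    its encoded-tile size (a cardinality of bytes). A 4-byte
    big-endian length prefix is an assumed wire format."""
    bitstream += struct.pack(">I", len(tile_payload))
    return bitstream + tile_payload

def read_encoded_tile(bitstream, offset=0):
    """Inverse operation: read the size field, then slice out the
    tile payload. Returns (payload, offset_of_next_tile)."""
    (size,) = struct.unpack_from(">I", bitstream, offset)
    start = offset + 4
    return bitstream[start:start + size], start + size
```

A decoder can thus skip directly from one tile to the next by advancing the offset by the signaled size, which is what makes independent or out-of-order tile decoding practical.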

IR camera and method for processing thermal image information
09807318 · 2017-10-31

A method for processing information from an IR detector of an IR camera comprises, in an embodiment, receiving a series of frames of data from said IR detector, the detector being operable to detect IR radiation from a scene and the frames of IR data representing detected IR radiation; and performing a compression of said frames of IR data; wherein each data value together with calibration data uniquely represents measured IR radiation from the scene.
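The property that a stored data value plus calibration data uniquely recovers the measured radiation can be illustrated with a simple linear calibration model. The linear form and the per-detector `(gain, offset)` parameters are assumptions for illustration; the source does not specify the calibration model.

```python
def decode_ir_value(raw_count, gain, offset):
    """Hypothetical linear calibration: a compressed/stored detector
    count plus calibration data (gain, offset) uniquely maps back to
    measured IR radiation. An invertible model like this is what lets
    the compressed value remain radiometrically meaningful."""
    return gain * raw_count + offset
```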