Patent classifications
H04N19/48
Quantization artifact suppression and signal recovery by transform domain filtering
An apparatus for decoding video data includes memory and one or more processors implemented in circuitry. The one or more processors are configured to receive a bitstream including encoded video data, decode, from the bitstream, values for one or more syntax elements to generate a residual block for a current block, prediction information for the current block, and transform domain filtering information. The one or more processors are further configured to reconstruct the current block using the prediction information and the residual block to generate a reconstructed block. In response to determining that the transform domain filtering information indicates that transform domain filtering is enabled for the current block, the one or more processors are configured to perform transform domain filtering on the reconstructed block to generate a filtered block.
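The control flow in this abstract (reconstruct prediction plus residual, then filter only when the signalled flag is set) can be sketched as follows. This is a minimal illustration, not the patent's actual method: the naive 1-D DCT pair, the hard coefficient threshold, and all function names are assumptions made for the example.

```python
import math

def dct(x):
    """Naive (unnormalised) DCT-II of a 1-D sample list."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def idct(coeffs):
    """Matching inverse (DCT-III with 2/n scaling and halved DC term)."""
    n = len(coeffs)
    return [(coeffs[0] / 2
             + sum(coeffs[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                   for k in range(1, n))) * 2 / n for i in range(n)]

def transform_domain_filter(samples, threshold):
    """Zero out small transform coefficients (quantization noise), transform back."""
    coeffs = [c if abs(c) >= threshold else 0.0 for c in dct(samples)]
    return idct(coeffs)

def decode_block(prediction, residual, tdf_enabled, threshold=4.0):
    """Reconstruct prediction + residual; filter only when the syntax flag is set."""
    reconstructed = [p + r for p, r in zip(prediction, residual)]
    if tdf_enabled:
        return transform_domain_filter(reconstructed, threshold)
    return reconstructed
```

With the flag off, the reconstructed block passes through untouched; with the flag on and a zero threshold, the transform round-trip reproduces the block exactly, so any difference in practice comes only from the suppressed coefficients.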
IMAGE PROCESSING SYSTEM FOR VERIFICATION OF RENDERED DATA
An image processing system for verifying that embedded digital content satisfies a predetermined criterion associated with display of the content, the image processing system comprising: a content embedding engine that embeds content in a resource provided by a content provider and that configures the resource for rendering; a rendering engine that renders the content embedded in the resource; an application interface engine that interfaces with the rendering engine and that generates a visualization of the resource and of the embedded content rendered in the resource; and an image processing engine that processes one or more pixels of the generated visualization of the resource and of the embedded content to verify that a specified visual element satisfies the predetermined criterion, and transmits verification data comprising an indication of whether the predetermined criterion is satisfied.
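The pixel-level verification step can be sketched as below. The rectangular-region colour check, the tolerance value, and the shape of the returned verification data are illustrative assumptions; the patent does not specify the criterion.

```python
def verify_region_color(pixels, region, expected_rgb, tolerance=8):
    """Check every pixel in a rectangular region against an expected colour.

    pixels: row-major list of rows of (r, g, b) tuples (the rendered visualization).
    region: (x, y, width, height) of the visual element to verify.
    Returns verification data: a dict with the pass/fail indication.
    """
    x0, y0, w, h = region
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            if any(abs(c - e) > tolerance
                   for c, e in zip(pixels[y][x], expected_rgb)):
                return {"criterion_satisfied": False, "failed_at": (x, y)}
    return {"criterion_satisfied": True}
```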
Methods and apparatus for foveated compression
The present disclosure relates to methods and apparatus for graphics processing. Aspects of the present disclosure can render at least one frame including display content at a server. Aspects of the present disclosure can also downscale the at least one frame including the display content, where a downscaling rate of one or more portions of the at least one frame is based on a location of each of the one or more portions. Moreover, aspects of the present disclosure can communicate the downscaled at least one frame including the display content to a client device. Aspects of the present disclosure can also encode the downscaled at least one frame including the display content. Further, aspects of the present disclosure can decode the encoded at least one frame including the display content. Aspects of the present disclosure can also upscale the at least one frame including the display content.
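The location-dependent downscaling rate described above can be sketched as a per-tile decision followed by box averaging. The distance thresholds and the three rate levels are illustrative assumptions, not values taken from the disclosure.

```python
import math

def downscale_rate(tile_center, gaze_point):
    """Pick a per-tile downscaling rate from the tile's distance to the gaze point.

    Tiles near the fovea keep full resolution; peripheral tiles are
    downscaled more aggressively (thresholds are illustrative).
    """
    d = math.dist(tile_center, gaze_point)
    if d < 100:
        return 1.0   # foveal region: no downscaling
    if d < 300:
        return 0.5   # mid-periphery: half resolution
    return 0.25      # far periphery: quarter resolution

def downscale_row(row, rate):
    """Box-average a 1-D row of samples by the given rate (rate = 1/k)."""
    k = round(1 / rate)
    return [sum(row[i:i + k]) / k for i in range(0, len(row), k)]
```

The client would invert the process with a matching upscale, so peripheral tiles cost fewer encoded samples while the foveal tile keeps full detail.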
Temporal signalling for video coding technology
An encoder (300) configured to receive an input video (302) comprising respective frames, each frame being divided into a plurality of tiles and each tile being divided into a plurality of blocks. The encoder is configured to generate a base encoded stream (310) using a base encoder (306), determine (334) a temporal mode for one or more further encoded enhancement streams (328) generated using an enhancement encoder, and generate the one or more further encoded enhancement streams (328) according to the determined temporal mode. The temporal mode is either a first temporal mode that does not apply non-zero values from a temporal buffer or a second temporal mode that does apply non-zero values from the temporal buffer (332). Generating the one or more further encoded enhancement streams comprises applying a transform (348) to each of a series of blocks. The temporal mode is determined for one or more of a frame, a tile, or a block.
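The two temporal modes can be sketched as below: mode 1 ignores the buffer, mode 2 applies the buffered values as a temporal prediction so only the delta is transformed and coded. Treating mode 2 as subtract-at-encode / add-at-decode, and the in-place buffer refresh, are assumptions for illustration rather than the patented scheme.

```python
def encode_enhancement(residuals, temporal_buffer, mode):
    """Mode 1 ignores the buffer; mode 2 subtracts the buffered (non-zero)
    values so only the temporal delta goes on to the transform stage."""
    if mode == 1:
        return list(residuals)
    return [r - b for r, b in zip(residuals, temporal_buffer)]

def decode_enhancement(coded, temporal_buffer, mode):
    """Inverse: mode 2 adds back the buffered values, then the buffer is
    refreshed with the reconstructed residuals for the next frame."""
    if mode == 1:
        recon = list(coded)
    else:
        recon = [c + b for c, b in zip(coded, temporal_buffer)]
    temporal_buffer[:] = recon  # update buffer in place
    return recon
```

Round-tripping a block through both functions with the same mode and buffer state reproduces the original residuals, which is the property the per-frame/tile/block mode decision relies on.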
Method and apparatus for syntax redundancy removal in palette coding
A method and apparatus for palette coding of a block of video data using a candidate prediction mode list with syntax redundancy removed are disclosed. In one embodiment, whether a redundant prediction mode exists in the candidate prediction mode list for the current samples of the current block is determined based on the candidate prediction mode list and the previous prediction mode associated with the previous samples. If the redundant prediction mode exists in the candidate prediction mode list, the redundant prediction mode is removed from the candidate prediction mode list to generate a reduced candidate prediction mode list. In another embodiment, whether a redundant predictor exists in a candidate predictor list for a current sample of the current block is determined based on a condition related to one or more predictors for the current sample of the current block.
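The redundancy-removal step in the first embodiment can be sketched as pruning the previous samples' mode from the candidate list before the remaining index is signalled. The mode names and the index-based signalling are illustrative assumptions (palette coding in HEVC SCC uses INDEX and COPY_ABOVE modes, but the exact syntax here is hypothetical).

```python
def reduced_candidate_list(candidates, previous_mode):
    """Drop the redundant mode: the previous samples' mode need not be
    signalled again, so removing it shortens the coded index range."""
    return [m for m in candidates if m != previous_mode]

def signal_mode(mode, candidates, previous_mode):
    """Return the index actually written to the bitstream for `mode`."""
    return reduced_candidate_list(candidates, previous_mode).index(mode)
```

With one candidate removed, a two-entry list needs only a one-bin flag instead of a longer index, which is where the bit saving comes from.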
Object region detection method, object region detection apparatus, and non-transitory computer-readable medium thereof
The present invention relates to an object region detection method, an object region detection apparatus, and a non-transitory computer-readable medium thereof, and more particularly to those capable of further accelerating object recognition and tracking analysis by preliminarily detecting an object region based on a parameter value obtained from an image decoding process and by referring to the detected object region during image analysis.
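The idea of reusing a decoder-side parameter to pre-select regions can be sketched as below. Using per-block motion-vector magnitudes as the parameter, and a fixed threshold, are assumptions for the example; the patent only says a parameter value from the decoding process is used.

```python
def detect_object_regions(mv_magnitudes, threshold=2.0):
    """Flag blocks whose decoded motion-vector magnitude exceeds a threshold.

    mv_magnitudes: 2-D grid of per-block magnitudes taken from the decoder.
    Contiguous flagged blocks approximate moving-object regions, so the
    expensive recognition/tracking model only runs on those blocks.
    """
    return [(row, col)
            for row, line in enumerate(mv_magnitudes)
            for col, mag in enumerate(line)
            if mag > threshold]
```

Because the magnitudes are a by-product of decoding, this pre-pass adds essentially no cost while shrinking the area the full analysis has to cover.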