H04N19/467

METHOD FOR EMBEDDING WATERMARK IN VIDEO DATA AND APPARATUS, METHOD FOR EXTRACTING WATERMARK IN VIDEO DATA AND APPARATUS, DEVICE, AND STORAGE MEDIUM

Disclosed in this application are a method and apparatus for embedding a watermark in video data, a method and apparatus for extracting a watermark from video data, a device, and a storage medium. The method for embedding the watermark includes: acquiring a target image frame in the video data; performing a time-frequency transformation on the target image frame to obtain target frequency domain data, the target frequency domain data comprising a matrix of frequency domain coefficients; changing the frequency domain coefficients in the target frequency domain data according to watermark data to obtain watermarked frequency domain data; performing an inverse time-frequency transformation on the watermarked frequency domain data to obtain a watermarked image frame; and synthesizing watermarked video data from the watermarked image frame.
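The embed-transform-modify-invert pipeline above can be sketched with a 2D DCT as the time-frequency transformation. The coefficient positions and the parity-quantization embedding rule below are illustrative assumptions, not the patented method.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark(frame, bits, strength=8.0):
    """Embed watermark bits by changing frequency domain coefficients:
    transform the frame, quantize chosen coefficients so their parity
    encodes each bit, then inverse-transform back to pixels."""
    coeffs = dctn(frame.astype(np.float64), norm="ortho")
    # hypothetical mid-band positions, one coefficient per watermark bit
    positions = [(3 + i, 4 + i) for i in range(len(bits))]
    for bit, (r, c) in zip(bits, positions):
        q = np.round(coeffs[r, c] / strength)
        if int(q) % 2 != bit:      # nudge quantized value to match parity
            q += 1
        coeffs[r, c] = q * strength
    return idctn(coeffs, norm="ortho")

def extract_watermark(frame, n_bits, strength=8.0):
    """Recover the bits from the parity of the same coefficients."""
    coeffs = dctn(frame.astype(np.float64), norm="ortho")
    positions = [(3 + i, 4 + i) for i in range(n_bits)]
    return [int(np.round(coeffs[r, c] / strength)) % 2 for r, c in positions]
```

Because the orthonormal DCT round-trip is numerically lossless in floating point, the extracted parities survive the inverse transformation; a real system would also need robustness to lossy video coding.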

Method and Apparatus For Selection of Content From A Stream of Data

A main stream contains successive content elements that encode video and/or audio information at a first data rate. A computation circuit (144) computes main fingerprints from the successive content elements. A reference stream is received at a second data rate lower than the first data rate; the reference stream defines a sequence of reference fingerprints. A comparator unit (144) compares the main fingerprints with the reference fingerprints. The main stream is monitored for the presence of content elements inserted between original content elements: the original content elements have main fingerprints that match successive reference fingerprints, while the inserted content elements have main fingerprints that match no reference fingerprint. Rendering of the inserted content elements is skipped. In one embodiment, when more than one content element matches, only one is rendered. In another embodiment, matching is used to control zapping to or from the main stream. In a further embodiment, matching is used to control the linking of separately received mark-up information, such as subtitles, to points in the main stream.
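The monitoring step can be sketched as follows, with a truncated hash standing in for a real perceptual fingerprint (an illustrative assumption):

```python
import hashlib

def fingerprint(element: bytes) -> str:
    # toy stand-in for a perceptual audio/video fingerprint
    return hashlib.sha256(element).hexdigest()[:16]

def filter_inserted(main_elements, reference_fps):
    """Compare main-stream fingerprints against the lower-rate
    reference fingerprint sequence: elements matching successive
    reference fingerprints are rendered; inserted elements between
    them are skipped."""
    ref_iter = iter(reference_fps)
    expected = next(ref_iter, None)
    for element in main_elements:
        if expected is not None and fingerprint(element) == expected:
            yield element                       # original content: render
            expected = next(ref_iter, None)
        # otherwise: inserted content, rendering is skipped
```

For example, with reference fingerprints for three scenes, advertisement elements spliced between the scenes produce no fingerprint match and are dropped from rendering.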

Point cloud compression with supplemental information messages

A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress the compressed attribute and/or spatial information. To compress the attribute and/or spatial information, the encoder converts the point cloud into an image-based representation; correspondingly, the decoder generates a decompressed point cloud from an image-based representation of the point cloud. Additionally, the encoder is configured to signal, and/or the decoder to receive, a supplementary message comprising volumetric tiling information that maps portions of the 2D image representations to objects in the point cloud. In some embodiments, characteristics of an object may additionally be signaled using the supplementary message or additional supplementary messages.

CASCADE PREDICTION

A first predictor is applied to an input image to generate first-stage predicted codewords approximating prediction target codewords of a prediction target image. Second-stage prediction target values are created by performing an inverse cascade operation on the prediction target codewords and the first-stage predicted codewords. A second predictor is applied to the input image to generate second-stage predicted values approximating the second-stage prediction target values. Multiple sets of cascade prediction coefficients are generated to comprise first and second sets of cascade prediction coefficients specifying the first and second predictors. The multiple sets of cascade prediction coefficients are encoded, in a video signal, as image metadata. The video signal is further encoded with the input image.
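The two-stage scheme can be sketched with polynomial predictors, taking subtraction as the inverse cascade operation; both choices are illustrative assumptions about the general structure described.

```python
import numpy as np

def fit_cascade(x, target, deg1=1, deg2=3):
    """Cascade prediction sketch. A first (low-order) predictor
    approximates the prediction target codewords; subtracting its
    output forms the second-stage prediction target values; a second
    predictor is then fit to those. The two coefficient sets would
    be encoded as image metadata alongside the input image."""
    c1 = np.polyfit(x, target, deg1)          # first set of coefficients
    stage2_target = target - np.polyval(c1, x)  # inverse cascade operation
    c2 = np.polyfit(x, stage2_target, deg2)   # second set of coefficients
    return c1, c2

def apply_cascade(x, c1, c2):
    # decoder-side reconstruction: cascade the two predictors
    return np.polyval(c1, x) + np.polyval(c2, x)
```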

METHOD AND APPARATUS FOR DECODING IMAGING RELATED TO SIGN DATA HIDING
20230007267 · 2023-01-05

An image decoding method performed by a decoding apparatus according to the present document comprises the steps of: obtaining a sign data hiding availability flag indicating whether or not sign data hiding is available for a current slice; obtaining a TSRC availability flag indicating whether or not TSRC is available for a transform skip block of the current slice; obtaining residual coding information about the transform skip block on the basis of the TSRC availability flag; deriving a residual sample for the transform skip block on the basis of the residual coding information; and generating a reconstruction picture on the basis of the residual sample, wherein the TSRC availability flag is obtained on the basis of the sign data hiding availability flag.
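The flag dependency claimed above can be sketched as slice-header parsing in which the TSRC availability flag is read or inferred depending on the sign data hiding flag; the syntax names and the exact condition are illustrative, not the VVC specification text.

```python
class BitReader:
    """Minimal bit source standing in for an entropy-decoded bitstream."""
    def __init__(self, bits):
        self._bits = iter(bits)

    def read_bit(self):
        return next(self._bits)

def parse_slice_flags(reader, transform_skip_enabled=True):
    """Obtain the TSRC availability flag on the basis of the sign
    data hiding availability flag: it is explicitly signalled only
    when sign data hiding is not in use, and inferred otherwise."""
    sdh_enabled = reader.read_bit() == 1
    if transform_skip_enabled and not sdh_enabled:
        tsrc_enabled = reader.read_bit() == 1   # explicitly signalled
    else:
        tsrc_enabled = False                    # inferred: TSRC unavailable
    return sdh_enabled, tsrc_enabled
```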

DIGITAL WATERMARKING
20230005094 · 2023-01-05

In one example, a method for inserting a digital watermark in a signal includes obtaining the signal comprising a plurality of frames, inserting a first digital watermark in a first frame of the plurality of frames, inserting a second digital watermark in a second frame of the plurality of frames, wherein the second digital watermark differs from the first digital watermark in at least one way selected from a group of: a location within a respective frame, a number of bits, a pattern of bits, and a number of bits of a noise, and outputting a watermarked signal including the first digital watermark in the first frame and the second digital watermark in the second frame.
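A sketch of frame-to-frame watermark variation follows, with each frame's mark differing in location and bit pattern; LSB substitution is an illustrative embedding choice, not the method of the disclosure.

```python
import numpy as np

def watermark_frames(frames, marks):
    """Embed a different digital watermark in each frame. Each entry
    of `marks` is (row, col, bits), so successive watermarks differ
    in their location within the frame and in the number/pattern of
    bits, as described."""
    out = []
    for frame, (row, col, bits) in zip(frames, marks):
        f = frame.copy()
        for i, bit in enumerate(bits):
            # overwrite the least significant bit at the chosen location
            f[row, col + i] = (f[row, col + i] & 0xFE) | bit
        out.append(f)
    return out
```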

Methods and apparatus to perform audio watermarking and watermark detection and extraction

Methods and apparatus to perform audio watermarking and watermark detection and extraction are disclosed. Example apparatus disclosed herein are to select frequency components to be used to represent a code, different sets of frequency components to represent respectively different information, respective ones of the frequency components in the sets of frequency components located in respective code bands, there being multiple code bands and spacing between adjacent code bands being equal to or less than the spacing between adjacent frequency components in the code bands. Disclosed example apparatus are also to synthesize the frequency components to be used to represent the code, combine the synthesized frequency components with an audio block of an audio signal, and output the audio signal and a video signal associated with the audio signal.
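The synthesis-and-combine step can be sketched as below, with one frequency component chosen per code band; the band layout, spacing, and amplitude are illustrative assumptions about the general scheme described.

```python
import numpy as np

def synthesize_code(bits, fs=48000, n=1024, base_hz=1000.0, band_hz=46.875):
    """Synthesize frequency components to represent a code: band i
    holds two candidate components, and the one selected (offset by
    the bit value) encodes that symbol of the code."""
    t = np.arange(n) / fs
    code = np.zeros(n)
    for i, bit in enumerate(bits):
        freq = base_hz + i * band_hz + bit * (band_hz / 2)
        code += 0.01 * np.sin(2 * np.pi * freq * t)
    return code

def embed_code(audio_block, bits):
    # combine the synthesized components with the host audio block
    return audio_block + synthesize_code(bits, n=len(audio_block))
```

A detector would locate the energized component within each code band (e.g. via an FFT of the block) to recover the code.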
