ERROR MODELING METHOD AND DEVICE FOR PREDICTION CONTEXT OF REVERSIBLE IMAGE WATERMARKING
20200250785 · 2020-08-06
Inventors
- Yikui Zhai (Jiangmen, CN)
- Wenbo Deng (Jiangmen, CN)
- Ying Xu (Jiangmen, CN)
- He Cao (Jiangmen, CN)
- Junying Gan (Jiangmen, CN)
- Tianlei Wang (Jiangmen, CN)
- Junying Zeng (Jiangmen, CN)
- Chuanbo Qin (Jiangmen, CN)
- Chaoyun Mai (Jiangmen, CN)
- Jinxin Wang (Jiangmen, CN)
CPC classification
G06T1/0028
PHYSICS
H04N1/32347
ELECTRICITY
G06T2201/0083
PHYSICS
Abstract
The present disclosure provides an error modeling method and device for the prediction context of reversible image watermarking. A predictor based on omnidirectional context is established; the prediction context is then self-adaptively error modeled to obtain a self-adaptive error model; and finally, output data from the self-adaptive error model is fed back to the predictor to update and correct the prediction context, so as to correct the prediction value of a current pixel x[i,j]. Because error modeling of the predictor's prediction context can capture the non-linear correlation between the current pixel and its prediction context, i.e., the non-linear correlation redundancy between pixels, this redundancy can be effectively removed. Thus, the embeddable watermarking capacity can be increased.
Claims
1. An error modeling method for prediction context of reversible image watermarking, comprising the following steps: S1: scanning an original image to obtain a current pixel x[i,j] and adjacent pixels surrounding the current pixel; S2: constructing prediction context according to the current pixel x[i,j] and the adjacent pixels surrounding the current pixel, and establishing a predictor based on omnidirectional context; S3: self-adaptively error modeling the prediction context to obtain a self-adaptive error model; and S4: feeding output data from the self-adaptive error model back to the predictor to update and correct the prediction context, so as to correct a prediction value of the current pixel x[i,j].
2. The error modeling method for prediction context of reversible image watermarking of claim 1, wherein, in the step S2, the predictor based on omnidirectional context is established, the formula for the predictor being:
3. The error modeling method for prediction context of reversible image watermarking of claim 2, wherein self-adaptively error modeling the prediction context in the step S3 comprises the following steps: S31: dividing the original image into four sub-images, the original image being I={x[i,j] | 1 ≤ i ≤ H, 1 ≤ j ≤ W}, where H and W are the height and width of the original image, the four sub-images being:
U₁={u₁[i,j]=x[2i,2j] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₂={u₂[i,j]=x[2i,2j+1] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₃={u₃[i,j]=x[2i+1,2j] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₄={u₄[i,j]=x[2i+1,2j+1] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁} where, H₁ and W₁ are the height and width of the sub-image U₁, the sub-image U₂, the sub-image U₃ and the sub-image U₄, respectively, and H₁ ≤ H and W₁ ≤ W; S32: quantifying the value of the prediction context; S33: calculating, by the quantified prediction context, a prediction error of pixels in the four sub-images, the prediction error being obtained by:
e[i,j]=u[i,j]−û[i,j] where, u[i,j] is the pixel in the sub-image, û[i,j] is the prediction value of the pixel in the sub-image, and e[i,j] is the prediction error of the pixel in the sub-image; and S34: establishing a self-adaptive error model according to the prediction error.
4. The error modeling method for prediction context of reversible image watermarking of claim 3, wherein the height H₁ and the width W₁ of the sub-image U₁, the sub-image U₂, the sub-image U₃ and the sub-image U₄ satisfy the following conditions, respectively:
H₁=(H−2)/2
W₁=(W−2)/2.
5. The error modeling method for prediction context of reversible image watermarking of claim 3, wherein, in the step S4, output data from the self-adaptive error model is fed back to the predictor to update and correct the prediction context, so as to correct a prediction value of the current pixel x[i,j], and the corrected prediction value ẋ of the current pixel x[i,j] is obtained by:
ẋ=x̂+ē(d,t) where, t is the parameter for the quantified prediction context, d is the prediction error of the predictor, ē(d,t) is the error feedback fed back to the predictor by the self-adaptive error model, and x̂ is the prediction value of the current pixel x[i,j] before correction.
6. A device for implementing an error modeling method for prediction context of reversible image watermarking, comprising a control module and a storage medium used for storing control instructions, the control module being configured to read the control instructions in the storage medium and execute the following steps: Q1: scanning an original image to obtain a current pixel x[i,j] and adjacent pixels surrounding the current pixel; Q2: constructing prediction context according to the current pixel x[i,j] and the adjacent pixels surrounding the current pixel, and establishing a predictor based on omnidirectional context; Q3: self-adaptively error modeling the prediction context to obtain a self-adaptive error model; and Q4: feeding output data from the self-adaptive error model back to the predictor to update and correct the prediction context, so as to correct a prediction value of the current pixel x[i,j].
7. The device according to claim 6, wherein, when the control module executes the step Q2, a predictor based on omnidirectional context is established, the formula for the predictor being:
8. The device according to claim 7, wherein, when the control module executes the step Q3, self-adaptively error modeling the prediction context comprises the following steps: Q31: dividing the original image into four sub-images, the original image being I={x[i,j] | 1 ≤ i ≤ H, 1 ≤ j ≤ W}, where H and W are the height and width of the original image, the four sub-images being:
U₁={u₁[i,j]=x[2i,2j] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₂={u₂[i,j]=x[2i,2j+1] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₃={u₃[i,j]=x[2i+1,2j] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₄={u₄[i,j]=x[2i+1,2j+1] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁} where, H₁ and W₁ are the height and width of the sub-image U₁, the sub-image U₂, the sub-image U₃ and the sub-image U₄, respectively, and H₁ ≤ H and W₁ ≤ W; Q32: quantifying the value of the prediction context; Q33: calculating, by the quantified prediction context, a prediction error of pixels in the four sub-images, the prediction error being obtained by:
e[i,j]=u[i,j]−û[i,j] where, u[i,j] is the pixel in the sub-image, û[i,j] is the prediction value of the pixel in the sub-image, and e[i,j] is the prediction error of the pixel in the sub-image; and Q34: establishing a self-adaptive error model according to the prediction error.
9. The device according to claim 8, wherein the height H₁ and the width W₁ of the sub-image U₁, the sub-image U₂, the sub-image U₃ and the sub-image U₄ satisfy the following conditions, respectively:
H₁=(H−2)/2
W₁=(W−2)/2.
10. The device according to claim 8, wherein, when the control module executes the step Q4, output data from the self-adaptive error model is fed back to the predictor to update and correct the prediction context, so as to correct a prediction value of the current pixel x[i,j], and the corrected prediction value ẋ of the current pixel x[i,j] is obtained by:
ẋ=x̂+ē(d,t) where, t is the parameter for the quantified prediction context, d is the prediction error of the predictor, ē(d,t) is the error feedback fed back to the predictor by the self-adaptive error model, and x̂ is the prediction value of the current pixel x[i,j] before correction.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0040] The present disclosure will be further described below with reference to the accompanying drawings and specific embodiments.
DETAILED DESCRIPTION
[0043] With reference to the accompanying drawings, an error modeling method for prediction context of reversible image watermarking according to an embodiment of the present disclosure comprises the following steps:
[0044] S1: scanning an original image to obtain a current pixel x[i,j] and adjacent pixels surrounding the current pixel;
[0045] S2: constructing prediction context according to the current pixel x[i,j] and the adjacent pixels surrounding the current pixel, and establishing a predictor based on omnidirectional context;
[0046] S3: self-adaptively error modeling the prediction context to obtain a self-adaptive error model; and
[0047] S4: feeding output data from the self-adaptive error model back to the predictor to update and correct the prediction context, so as to correct a prediction value of the current pixel x[i,j].
[0048] Wherein, in the step S2, a predictor based on omnidirectional context is established, the formula for the predictor being:
[0049] where, x̂[i,j] is the prediction value of the pixel x[i,j], x_n is the pixel located directly above the pixel x[i,j], x_w is the pixel located directly to the left of the pixel x[i,j], x_e is the pixel located directly to the right of the pixel x[i,j], and x_s is the pixel located directly below the pixel x[i,j].
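For illustration, prediction from this omnidirectional context can be sketched as follows. The simple averaging of the four neighbors is an assumption made for this sketch only; the disclosure defines its own predictor formula over the same context (x_n, x_w, x_e, x_s).

```python
import numpy as np

def omnidirectional_predict(img, i, j):
    """Predict pixel img[i, j] from its four direct neighbors:
    x_n (above), x_s (below), x_w (left), x_e (right).

    Averaging is an illustrative choice, not the patent's exact
    predictor formula.
    """
    x_n = int(img[i - 1, j])  # pixel directly above
    x_s = int(img[i + 1, j])  # pixel directly below
    x_w = int(img[i, j - 1])  # pixel directly to the left
    x_e = int(img[i, j + 1])  # pixel directly to the right
    return (x_n + x_s + x_w + x_e) // 4
```

Because the context includes the right (x_e) and below (x_s) neighbors, the scheme relies on the sub-image decomposition described later so that those pixels are available at decoding time.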
[0050] Wherein, self-adaptively error modeling the prediction context in the step S3 comprises the following steps:
[0051] S31: dividing the original image into four sub-images, the original image being I={x[i,j] | 1 ≤ i ≤ H, 1 ≤ j ≤ W}, where H and W are the height and width of the original image, the four sub-images being:
U₁={u₁[i,j]=x[2i,2j] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₂={u₂[i,j]=x[2i,2j+1] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₃={u₃[i,j]=x[2i+1,2j] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₄={u₄[i,j]=x[2i+1,2j+1] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
[0052] where, H₁ and W₁ are the height and width of the sub-image U₁, the sub-image U₂, the sub-image U₃ and the sub-image U₄, respectively, and H₁ and W₁ satisfy the following conditions, respectively:
H₁=(H−2)/2
W₁=(W−2)/2;
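In 0-based NumPy indexing, the four sub-images correspond to the four parity classes of the row and column indices. A minimal sketch of the division in step S31:

```python
import numpy as np

def split_into_subimages(img):
    """Divide an image into four interleaved sub-images by the
    parity of the row and column indices, mirroring the sets
    U1..U4 built from x[2i,2j], x[2i,2j+1], x[2i+1,2j] and
    x[2i+1,2j+1] (stated here with 0-based indexing)."""
    u1 = img[0::2, 0::2]  # even rows, even columns
    u2 = img[0::2, 1::2]  # even rows, odd columns
    u3 = img[1::2, 0::2]  # odd rows, even columns
    u4 = img[1::2, 1::2]  # odd rows, odd columns
    return u1, u2, u3, u4
```

The four sub-images partition the pixels, so their sizes sum to the size of the original image.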
[0053] S32: quantifying the value of the prediction context;
[0054] S33: calculating, by the quantified prediction context, a prediction error of pixels in the four sub-images, the prediction error being obtained by:
e[i,j]=u[i,j]−û[i,j]
[0055] where, u[i,j] is the pixel in the sub-image, û[i,j] is the prediction value of the pixel in the sub-image, and e[i,j] is the prediction error of the pixel in the sub-image; and
[0056] S34: establishing a self-adaptive error model according to the prediction error.
[0057] Specifically, in the step S34, a self-adaptive error model is established according to the prediction error. The self-adaptive error model is a model commonly used in reversible image watermarking; it has different expression forms and may be established by different methods according to different practical parameters. However, the error modeling method for prediction context of reversible image watermarking disclosed herein is limited neither to a specific self-adaptive error model nor to a specific method for establishing one. In the error modeling method of the present disclosure, only the output data from the self-adaptive error model is fed back to the predictor to update the prediction context of the predictor. Therefore, the specific method for establishing the self-adaptive error model will not be detailed here.
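One common way to realize such a self-adaptive error model (an illustrative choice, not a form mandated by the disclosure) is to keep, for each quantized context parameter t, a running mean of the prediction errors observed in that context bin, and to use that conditional mean as the feedback term:

```python
from collections import defaultdict

class AdaptiveErrorModel:
    """Running conditional-mean error model, indexed by the
    quantized context parameter t. Illustrative sketch only."""

    def __init__(self):
        self.err_sum = defaultdict(float)  # accumulated error per bin
        self.count = defaultdict(int)      # samples per bin

    def update(self, t, error):
        """Record a new prediction error observed in context bin t."""
        self.err_sum[t] += error
        self.count[t] += 1

    def feedback(self, t):
        """Return the mean error feedback for context bin t
        (0 if the bin has not been observed yet)."""
        if self.count[t] == 0:
            return 0.0
        return self.err_sum[t] / self.count[t]
```

Because encoder and decoder visit pixels in the same order and observe the same errors, both sides can maintain identical model state without side information.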
[0058] Wherein, in the step S4, output data from the self-adaptive error model is fed back to the predictor to update and correct the prediction context, so as to correct a prediction value of the current pixel x[i,j], and the corrected prediction value ẋ of the current pixel x[i,j] is obtained by:
ẋ=x̂+ē(d,t)
where, t is the parameter for the quantified prediction context, d is the prediction error of the predictor, ē(d,t) is the error feedback fed back to the predictor by the self-adaptive error model, and x̂ is the prediction value of the current pixel x[i,j] before correction.
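A minimal sketch of this correction step, assuming the error-feedback value for the current context has already been produced by the error model; clamping to the 8-bit pixel range is an added safeguard for this sketch, not a requirement of the disclosure:

```python
def correct_prediction(x_hat, error_feedback):
    """Add the context-conditioned error feedback to the raw
    prediction and clamp the result to a valid 8-bit pixel
    value, yielding the corrected prediction."""
    corrected = x_hat + error_feedback
    return max(0, min(255, int(round(corrected))))
```

For example, a raw prediction of 120 with a feedback of −2.4 is corrected to 118, moving the prediction closer to the true pixel value and shrinking the prediction error.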
[0059] Additionally, a device for storing an error modeling method for prediction context of reversible image watermarking is provided, comprising a control module and a storage medium used for storing control instructions, the control module being configured to read the control instructions in the storage medium and execute the following steps:
[0060] Q1: scanning an original image to obtain a current pixel x[i,j] and adjacent pixels surrounding the current pixel;
[0061] Q2: constructing prediction context according to the current pixel x[i,j] and the adjacent pixels surrounding the current pixel, and establishing a predictor based on omnidirectional context;
[0062] Q3: self-adaptively error modeling the prediction context to obtain a self-adaptive error model; and
[0063] Q4: feeding output data from the self-adaptive error model back to the predictor to update and correct the prediction context, so as to correct a prediction value of the current pixel x[i,j].
[0064] Wherein, when the control module executes the step Q2, a predictor based on omnidirectional context is established, the formula for the predictor being:
[0065] where, x̂[i,j] is the prediction value of the pixel x[i,j], x_n is the pixel located directly above the pixel x[i,j], x_w is the pixel located directly to the left of the pixel x[i,j], x_e is the pixel located directly to the right of the pixel x[i,j], and x_s is the pixel located directly below the pixel x[i,j].
[0066] Wherein, when the control module executes the step Q3, self-adaptively error modeling the prediction context comprises the following steps:
[0067] Q31: dividing the original image into four sub-images, the original image being I={x[i,j] | 1 ≤ i ≤ H, 1 ≤ j ≤ W}, where H and W are the height and width of the original image, the four sub-images being:
U₁={u₁[i,j]=x[2i,2j] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₂={u₂[i,j]=x[2i,2j+1] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₃={u₃[i,j]=x[2i+1,2j] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
U₄={u₄[i,j]=x[2i+1,2j+1] | 1 ≤ i ≤ H₁, 1 ≤ j ≤ W₁}
[0068] where, H₁ and W₁ are the height and width of the sub-image U₁, the sub-image U₂, the sub-image U₃ and the sub-image U₄, respectively, and H₁ ≤ H and W₁ ≤ W;
[0069] Q32: quantifying the value of the prediction context;
[0070] Q33: calculating, by the quantified prediction context, a prediction error of pixels in the four sub-images, the prediction error being obtained by:
e[i,j]=u[i,j]−û[i,j]
[0071] where, u[i,j] is the pixel in the sub-image, û[i,j] is the prediction value of the pixel in the sub-image, and e[i,j] is the prediction error of the pixel in the sub-image; and
[0072] Q34: establishing a self-adaptive error model according to the prediction error.
[0073] Wherein, the height H₁ and the width W₁ of the sub-image U₁, the sub-image U₂, the sub-image U₃ and the sub-image U₄ satisfy the following conditions, respectively:
H₁=(H−2)/2
W₁=(W−2)/2.
[0074] Wherein, when the control module executes the step Q4, output data from the self-adaptive error model is fed back to the predictor to update and correct the prediction context, so as to correct a prediction value of the current pixel x[i,j], and the corrected prediction value ẋ of the current pixel x[i,j] is obtained by:
ẋ=x̂+ē(d,t)
[0075] where, t is the parameter for the quantified prediction context, d is the prediction error of the predictor, ē(d,t) is the error feedback fed back to the predictor by the self-adaptive error model, and x̂ is the prediction value of the current pixel x[i,j] before correction.
[0076] Specifically, the non-linear correlation redundancy in the image cannot be completely removed by simply estimating the current pixel value from the prediction context in the predictor. Therefore, first, a predictor based on omnidirectional context is established; then, the prediction context is self-adaptively error modeled to obtain a self-adaptive error model; and finally, output data from the self-adaptive error model is fed back to the predictor to update and correct the prediction context, so as to correct the prediction value of a current pixel x[i,j]. Because error modeling of the predictor's prediction context can capture the non-linear correlation between the current pixel and its prediction context, i.e., the non-linear correlation redundancy between pixels, this redundancy can be effectively removed. Thus, the embeddable watermarking capacity can be increased.
[0077] For half-directional predictors or omnidirectional predictors, the prediction algorithm is mostly linear. Such a linear algorithm can effectively analyze the linear correlation redundancy between pixels, but fails to remove the non-linear correlation redundancy between the pixels, such as texture redundancy. However, by modeling the prediction context of the predictor, the non-linear correlation between the current pixel and the prediction context thereof can be found. Since the predictor used in this embodiment is a predictor based on omnidirectional context, the prediction context of the predictor is composed of at least eight pixels surrounding the current pixel, and each pixel has a value between 0 and 255. If the prediction context of the predictor were modeled directly, the model would have 256^8 cases, which would lead to a large amount of calculation and thus reduce the calculation efficiency. Therefore, the value of the prediction context is first quantified, and then the self-adaptive error modeling is performed using the quantified prediction context. In addition, because there is a certain correlation between prediction errors, the use of self-adaptive error modeling can also effectively eliminate the prediction bias of the predictor, thereby improving the prediction accuracy of the predictor. Then, the output data from the self-adaptive error model is fed back to the predictor, and the prediction context is updated and corrected, so as to correct the prediction value of the current pixel x[i,j]. Since the corrected prediction value reduces the prediction error of the predictor, the accuracy of prediction is enhanced. When quantifying the value of the prediction context, assuming that the parameter for the quantified prediction context is t and the prediction error of the predictor is d, the error feedback ē(d,t) can be obtained.
Then, the error feedback ē(d,t) is used to correct the predictor, and after the correction is introduced, the prediction value of the current pixel x[i,j] becomes ẋ instead of x̂, that is, ẋ=x̂+ē(d,t). In this case, the corrected ẋ will be closer to x than x̂. Therefore, the prediction error will be smaller, which can effectively increase the embeddable watermarking capacity.
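The quantification motivated above can be sketched as follows. Reducing each of the eight context pixels to a single comparison bit is an illustrative quantizer (not the disclosure's exact scheme), but it shows how the 256^8 raw contexts collapse into a manageable number of bins:

```python
def quantize_context(context, reference):
    """Map an 8-pixel prediction context to a small integer t.

    Each neighbor is compared against a reference value (e.g. the
    raw prediction) and reduced to one bit (above / not above), so
    256**8 raw contexts collapse into at most 2**8 = 256 bins.
    The one-bit comparison is an illustrative assumption.
    """
    t = 0
    for p in context:
        t = (t << 1) | (1 if p > reference else 0)
    return t
```

With at most 256 bins, the self-adaptive error model can gather enough samples per bin to estimate a reliable feedback term, which would be hopeless over 256^8 raw contexts.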
[0078] The preferred embodiments of the present disclosure have been specifically described. However, the present disclosure is not limited to those implementations. A person of ordinary skill in the art may make various equivalent variations or replacements without departing from the spirit of the present disclosure, and those equivalent variations or replacements shall be included in the scope defined by the appended claims.