SINGLE IMAGE DERAINING METHOD AND SYSTEM THEREOF

20230060736 · 2023-03-02


    Abstract

    A single image deraining method is proposed. A wavelet transforming step processes an initial rain image to generate an i-th stage low-frequency rain image and a plurality of i-th stage high-frequency rain images. An image deraining step inputs the i-th stage low-frequency rain image to a low-frequency deraining model to output an i-th stage low-frequency derain image. A first inverse wavelet transforming step recombines the n-th stage low-frequency derain image with the n-th stage high-frequency derain images to form an n-th stage derain image. A weighted blending step blends a (n−1)-th stage low-frequency derain image with the n-th stage derain image to generate a (n−1)-th stage blended derain image. A second inverse wavelet transforming step recombines the (n−1)-th stage high-frequency derain images with the (n−1)-th stage blended derain image to form a (n−1)-th stage derain image, and sets n to n−1 and repeats the last two steps.

    Claims

    1. A single image deraining method, which is configured to convert an initial rain image into a final derain image, the single image deraining method comprising: performing a wavelet transforming step to drive a processing unit to process the initial rain image to generate a first stage low-frequency rain image, a plurality of first stage high-frequency rain images, a second stage low-frequency rain image and a plurality of second stage high-frequency rain images according to a wavelet transforming procedure; performing an image deraining step to drive the processing unit to input the first stage low-frequency rain image and the second stage low-frequency rain image to a low-frequency deraining model to output a first stage low-frequency derain image and a second stage low-frequency derain image, and input the first stage high-frequency rain images and the second stage high-frequency rain images to a high-frequency deraining model to output a plurality of first stage high-frequency derain images and a plurality of second stage high-frequency derain images; performing a first inverse wavelet transforming step to drive the processing unit to recombine the second stage low-frequency derain image with the second stage high-frequency derain images to form a second stage derain image according to a first inverse wavelet transforming procedure; performing a weighted blending step to drive the processing unit to blend the first stage low-frequency derain image with the second stage derain image to generate a first stage blended derain image according to a weighted blending procedure; and performing a second inverse wavelet transforming step to drive the processing unit to recombine the first stage high-frequency derain images with the first stage blended derain image to form the final derain image according to a second inverse wavelet transforming procedure.

    2. The single image deraining method of claim 1, wherein the wavelet transforming step comprises: performing a first stage wavelet decomposing step to drive the processing unit to decompose the initial rain image into the first stage low-frequency rain image and the first stage high-frequency rain images according to the wavelet transforming procedure; and performing a second stage wavelet decomposing step to drive the processing unit to decompose the first stage low-frequency rain image into the second stage low-frequency rain image and the second stage high-frequency rain images according to the wavelet transforming procedure.

    3. The single image deraining method of claim 1, wherein the wavelet transforming procedure comprises a wavelet transforming function, the initial rain image, a first stage low-frequency wavelet coefficient, a plurality of first stage high-frequency wavelet coefficients, a second stage low-frequency wavelet coefficient and a plurality of second stage high-frequency wavelet coefficients, the wavelet transforming function is represented as SWT, the initial rain image is represented as R.sup.0, the first stage low-frequency wavelet coefficient is represented as I.sub.LL.sup.1, the first stage high-frequency wavelet coefficients are represented as I.sub.LH.sup.1, I.sub.HL.sup.1 and I.sub.HH.sup.1, respectively, the second stage low-frequency wavelet coefficient is represented as I.sub.LL.sup.2, and the second stage high-frequency wavelet coefficients are represented as I.sub.LH.sup.2, I.sub.HL.sup.2 and I.sub.HH.sup.2, respectively, and conform to a following equation:
    SWT(R.sup.0)=[I.sub.LL.sup.1,I.sub.LH.sup.1,I.sub.HL.sup.1,I.sub.HH.sup.1];
    SWT(I.sub.LL.sup.1)=[I.sub.LL.sup.2,I.sub.LH.sup.2,I.sub.HL.sup.2,I.sub.HH.sup.2].

    4. The single image deraining method of claim 1, wherein the second inverse wavelet transforming procedure comprises an inverse wavelet transforming function, a concatenation function, a weighted blending function, the first stage low-frequency derain image, the second stage low-frequency derain image, the first stage high-frequency derain images, the second stage high-frequency derain images and the final derain image, the inverse wavelet transforming function is represented as ISWT, the concatenation function is represented as concat, the weighted blending function is represented as IWB, the first stage low-frequency derain image is represented as O.sub.LL.sup.1, the second stage low-frequency derain image is represented as O.sub.LL.sup.2, the first stage high-frequency derain images are represented as O.sub.Detail.sup.1, the second stage high-frequency derain images are represented as O.sub.Detail.sup.2, and the final derain image is represented as C.sup.1, and conforms to a following equation:
    C.sup.1=ISWT(O.sub.Detail.sup.1,IWB(O.sub.LL.sup.1,ISWT(concat(O.sub.LL.sup.2,O.sub.Detail.sup.2)))).

    5. The single image deraining method of claim 1, wherein the weighted blending procedure comprises a weighted blending function, a weighted value, the first stage low-frequency derain image and the second stage derain image, the weighted blending function is represented as IWB, the weighted value is represented as α, the first stage low-frequency derain image is represented as image1, and the second stage derain image is represented as image2, and conforms to a following equation:
    IWB=image1*(1.0−α)+image2*α.

    6. The single image deraining method of claim 1, wherein the processing unit updates a parameter of the low-frequency deraining model and the high-frequency deraining model according to a loss function, and the loss function and the parameter conform to a following equation:

$$\mathrm{Loss}(w)=\frac{1}{m}\sum_{i=1}^{m}\sum_{n=1}^{N}\left\|H(y_{i,n}^{\mathrm{Detail}};w)-x_{i,n}^{\mathrm{Detail}}\right\|+\frac{1}{m}\sum_{i=1}^{m}\sum_{n=1}^{N-1}\left\|L(y_{i,n}^{LL};w)-x_{i,n}^{LL}\right\|^{2}+\frac{1}{m}\sum_{i=1}^{m}\left\|L(y_{i,N}^{LL};w)-x_{i,N}^{LL}\right\|^{2};$$

wherein Loss is the loss function, w is the parameter, i is a training image pair, m is a number of the training image pairs, n is a stage of a wavelet transformation, N is a number of the stages, H is the high-frequency deraining model, L is the low-frequency deraining model, y.sub.i,n.sup.LL is an n-th stage low-frequency rain image, x.sub.i,n.sup.LL is an n-th stage low-frequency rainless image, y.sub.i,n.sup.Detail is an n-th stage high-frequency rain image, x.sub.i,n.sup.Detail is an n-th stage high-frequency rainless image, y.sub.i,N.sup.LL is an N-th stage low-frequency rain image, and x.sub.i,N.sup.LL is an N-th stage low-frequency rainless image.

    7. A single image deraining method, which is configured to convert an initial rain image into a final derain image, the single image deraining method comprising: performing a wavelet transforming step to drive a processing unit to process the initial rain image to generate an i-th stage low-frequency rain image and a plurality of i-th stage high-frequency rain images according to a wavelet transforming procedure, wherein i is 1 to n, i and n are both positive integers, and n is greater than or equal to 3; performing an image deraining step to drive the processing unit to input the i-th stage low-frequency rain image to a low-frequency deraining model to output an i-th stage low-frequency derain image, and input the i-th stage high-frequency rain images to a high-frequency deraining model to output a plurality of i-th stage high-frequency derain images; performing a first inverse wavelet transforming step to drive the processing unit to recombine the n-th stage low-frequency derain image with the n-th stage high-frequency derain images to form an n-th stage derain image according to a first inverse wavelet transforming procedure; performing a weighted blending step to drive the processing unit to blend the (n−1)-th stage low-frequency derain image with the n-th stage derain image to generate an (n−1)-th stage blended derain image according to a weighted blending procedure; performing a second inverse wavelet transforming step to drive the processing unit to recombine the (n−1)-th stage high-frequency derain images with the (n−1)-th stage blended derain image to form an (n−1)-th stage derain image according to a second inverse wavelet transforming procedure, and set n to n−1; and performing a residual network learning step to drive the processing unit to repeatedly execute the weighted blending step and the second inverse wavelet transforming step according to n until n=2; wherein in response to determining that n=2 in the residual network learning step, the (n−1)-th stage derain image of the second inverse wavelet transforming step is the final derain image.

    8. A single image deraining system, which is configured to convert an initial rain image into a final derain image, the single image deraining system comprising: a storing unit configured to access the initial rain image, a wavelet transforming procedure, a low-frequency deraining model, a high-frequency deraining model, a first inverse wavelet transforming procedure, a weighted blending procedure and a second inverse wavelet transforming procedure; and a processing unit connected to the storing unit, wherein the processing unit is configured to implement a single image deraining method comprising: performing a wavelet transforming step to process the initial rain image to generate a first stage low-frequency rain image, a plurality of first stage high-frequency rain images, a second stage low-frequency rain image and a plurality of second stage high-frequency rain images according to the wavelet transforming procedure; performing an image deraining step to input the first stage low-frequency rain image and the second stage low-frequency rain image to the low-frequency deraining model to output a first stage low-frequency derain image and a second stage low-frequency derain image, and input the first stage high-frequency rain images and the second stage high-frequency rain images to the high-frequency deraining model to output a plurality of first stage high-frequency derain images and a plurality of second stage high-frequency derain images; performing a first inverse wavelet transforming step to recombine the second stage low-frequency derain image with the second stage high-frequency derain images to form a second stage derain image according to the first inverse wavelet transforming procedure; performing a weighted blending step to blend the first stage low-frequency derain image with the second stage derain image to generate a first stage blended derain image according to the weighted blending procedure; and performing a second inverse wavelet transforming 
step to recombine the first stage high-frequency derain images with the first stage blended derain image to form the final derain image according to the second inverse wavelet transforming procedure.

    9. The single image deraining system of claim 8, wherein the wavelet transforming step comprises: performing a first stage wavelet decomposing step to drive the processing unit to decompose the initial rain image into the first stage low-frequency rain image and the first stage high-frequency rain images according to the wavelet transforming procedure; and performing a second stage wavelet decomposing step to drive the processing unit to decompose the first stage low-frequency rain image into the second stage low-frequency rain image and the second stage high-frequency rain images according to the wavelet transforming procedure.

    10. The single image deraining system of claim 8, wherein the wavelet transforming procedure comprises a wavelet transforming function, the initial rain image, a first stage low-frequency wavelet coefficient, a plurality of first stage high-frequency wavelet coefficients, a second stage low-frequency wavelet coefficient and a plurality of second stage high-frequency wavelet coefficients, the wavelet transforming function is represented as SWT, the initial rain image is represented as R.sup.0, the first stage low-frequency wavelet coefficient is represented as I.sub.LL.sup.1, the first stage high-frequency wavelet coefficients are represented as I.sub.LH.sup.1, I.sub.HL.sup.1 and I.sub.HH.sup.1, respectively, the second stage low-frequency wavelet coefficient is represented as I.sub.LL.sup.2, and the second stage high-frequency wavelet coefficients are represented as I.sub.LH.sup.2, I.sub.HL.sup.2 and I.sub.HH.sup.2, respectively, and conform to a following equation:
    SWT(R.sup.0)=[I.sub.LL.sup.1,I.sub.LH.sup.1,I.sub.HL.sup.1,I.sub.HH.sup.1];
    SWT(I.sub.LL.sup.1)=[I.sub.LL.sup.2,I.sub.LH.sup.2,I.sub.HL.sup.2,I.sub.HH.sup.2].

    11. The single image deraining system of claim 8, wherein the second inverse wavelet transforming procedure comprises an inverse wavelet transforming function, a concatenation function, a weighted blending function, the first stage low-frequency derain image, the second stage low-frequency derain image, the first stage high-frequency derain images, the second stage high-frequency derain images and the final derain image, the inverse wavelet transforming function is represented as ISWT, the concatenation function is represented as concat, the weighted blending function is represented as IWB, the first stage low-frequency derain image is represented as O.sub.LL.sup.1, the second stage low-frequency derain image is represented as O.sub.LL.sup.2, the first stage high-frequency derain images are represented as O.sub.Detail.sup.1, the second stage high-frequency derain images are represented as O.sub.Detail.sup.2, and the final derain image is represented as C.sup.1, and conforms to a following equation:
    C.sup.1=ISWT(O.sub.Detail.sup.1,IWB(O.sub.LL.sup.1,ISWT(concat(O.sub.LL.sup.2,O.sub.Detail.sup.2)))).

    12. The single image deraining system of claim 8, wherein the weighted blending procedure comprises a weighted blending function, a weighted value, the first stage low-frequency derain image and the second stage derain image, the weighted blending function is represented as IWB, the weighted value is represented as α, the first stage low-frequency derain image is represented as image1, and the second stage derain image is represented as image2, and conforms to a following equation:
    IWB=image1*(1.0−α)+image2*α.

    13. The single image deraining system of claim 8, wherein the processing unit updates a parameter of the low-frequency deraining model and the high-frequency deraining model according to a loss function, and the loss function and the parameter conform to a following equation:

$$\mathrm{Loss}(w)=\frac{1}{m}\sum_{i=1}^{m}\sum_{n=1}^{N}\left\|H(y_{i,n}^{\mathrm{Detail}};w)-x_{i,n}^{\mathrm{Detail}}\right\|+\frac{1}{m}\sum_{i=1}^{m}\sum_{n=1}^{N-1}\left\|L(y_{i,n}^{LL};w)-x_{i,n}^{LL}\right\|^{2}+\frac{1}{m}\sum_{i=1}^{m}\left\|L(y_{i,N}^{LL};w)-x_{i,N}^{LL}\right\|^{2};$$

wherein Loss is the loss function, w is the parameter, i is a training image pair, m is a number of the training image pairs, n is a stage of a wavelet transformation, N is a number of the stages, H is the high-frequency deraining model, L is the low-frequency deraining model, y.sub.i,n.sup.LL is an n-th stage low-frequency rain image, x.sub.i,n.sup.LL is an n-th stage low-frequency rainless image, y.sub.i,n.sup.Detail is an n-th stage high-frequency rain image, x.sub.i,n.sup.Detail is an n-th stage high-frequency rainless image, y.sub.i,N.sup.LL is an N-th stage low-frequency rain image, and x.sub.i,N.sup.LL is an N-th stage low-frequency rainless image.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0010] The present disclosure can be more fully understood by reading the following detailed description of the embodiment, with reference made to the accompanying drawings as follows:

    [0011] FIG. 1 shows a flow chart of a single image deraining method according to a first embodiment of the present disclosure.

    [0012] FIG. 2 shows a schematic view of a Wavelet Approximation-Attention Residual (WAAR) of the single image deraining method of the first embodiment of the present disclosure.

    [0013] FIG. 3 shows a schematic view of a wavelet transforming step of the single image deraining method of FIG. 1.

    [0014] FIG. 4 shows a schematic view of a low-frequency deraining model and a high-frequency deraining model of the WAAR of FIG. 2.

    [0015] FIG. 5 shows a flow chart of a single image deraining method according to a second embodiment of the present disclosure.

    [0016] FIG. 6 shows a schematic view of a WAAR (i is 1 to n) of the single image deraining method of the second embodiment of the present disclosure.

    [0017] FIG. 7 shows a schematic view of a WAAR (i is 1 to 3) of a single image deraining method according to a third embodiment of the present disclosure.

    [0018] FIG. 8 shows a block diagram of a single image deraining system according to a fourth embodiment of the present disclosure.

    DETAILED DESCRIPTION

    [0019] The embodiments will be described with the drawings. For clarity, some practical details are described below. However, it should be noted that the present disclosure should not be limited by these practical details; that is, in some embodiments, these practical details are unnecessary. In addition, for simplifying the drawings, some conventional structures and elements are simply illustrated, and repeated elements may be represented by the same labels.

    [0020] It will be understood that when an element (or device) is referred to as being “connected to” another element, it can be directly connected to the other element, or it can be indirectly connected to the other element, that is, intervening elements may be present. In contrast, when an element is referred to as being “directly connected to” another element, there are no intervening elements present. In addition, although the terms first, second, third, etc. are used herein to describe various elements or components, these elements or components should not be limited by these terms. Consequently, a first element or component discussed below could be termed a second element or component.

    [0021] Please refer to FIGS. 1 and 2. FIG. 1 shows a flow chart of a single image deraining method 100 according to a first embodiment of the present disclosure. FIG. 2 shows a schematic view of a Wavelet Approximation-Attention Residual (WAAR) 110 of the single image deraining method 100 of the first embodiment of the present disclosure. In FIGS. 1 and 2, the single image deraining method 100 is configured to convert an initial rain image R.sup.0 into a final derain image C.sup.1, and includes performing a wavelet transforming step S01, an image deraining step S02, a first inverse wavelet transforming step S03, a weighted blending step S04 and a second inverse wavelet transforming step S05.

    [0022] The wavelet transforming step S01 is performed to drive a processing unit to process the initial rain image R.sup.0 to generate a first stage low-frequency rain image LL.sub.1, a plurality of first stage high-frequency rain images LH.sub.1, HL.sub.1, HH.sub.1, a second stage low-frequency rain image LL.sub.2 and a plurality of second stage high-frequency rain images LH.sub.2, HL.sub.2, HH.sub.2 according to a wavelet transforming procedure 411.

    [0023] The image deraining step S02 is performed to drive the processing unit to input the first stage low-frequency rain image LL.sub.1 and the second stage low-frequency rain image LL.sub.2 to a low-frequency deraining model 412 to output a first stage low-frequency derain image O.sub.LL.sup.1 and a second stage low-frequency derain image O.sub.LL.sup.2, and input the first stage high-frequency rain images LH.sub.1, HL.sub.1, HH.sub.1 and the second stage high-frequency rain images LH.sub.2, HL.sub.2, HH.sub.2 to a high-frequency deraining model 413 to output a plurality of first stage high-frequency derain images O.sub.Detail.sup.1 and a plurality of second stage high-frequency derain images O.sub.Detail.sup.2.

    [0024] The first inverse wavelet transforming step S03 is performed to drive the processing unit to recombine the second stage low-frequency derain image O.sub.LL.sup.2 with the second stage high-frequency derain images O.sub.Detail.sup.2 to form a second stage derain image DR.sup.2 according to a first inverse wavelet transforming procedure 414.

    [0025] The weighted blending step S04 is performed to drive the processing unit to blend the first stage low-frequency derain image O.sub.LL.sup.1 with the second stage derain image DR.sup.2 to generate a first stage blended derain image BDR.sup.1 according to a weighted blending procedure 415.

    [0026] The second inverse wavelet transforming step S05 is performed to drive the processing unit to recombine the first stage high-frequency derain images O.sub.Detail.sup.1 with the first stage blended derain image BDR.sup.1 to form the final derain image C.sup.1 according to a second inverse wavelet transforming procedure 416.

    [0027] Therefore, the single image deraining method 100 of the present disclosure decomposes the initial rain image R.sup.0 through a Stationary Wavelet Transform (SWT), and uses the low-frequency deraining model 412 and the high-frequency deraining model 413 to remove the rain patterns. Then, the processing unit performs an Inverse Stationary Wavelet Transform (ISWT) on the second stage low-frequency derain image O.sub.LL.sup.2 and the second stage high-frequency derain images O.sub.Detail.sup.2, and then performs an Image Weighted Blending (IWB) with the first stage low-frequency derain image O.sub.LL.sup.1. Finally, the processing unit performs another inverse stationary wavelet transform on the first stage high-frequency derain images O.sub.Detail.sup.1 and the first stage blended derain image BDR.sup.1 to generate the final derain image C.sup.1, so as to restore the initial rain image R.sup.0 to the final derain image C.sup.1 having a clean background.
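The Image Weighted Blending (IWB) step mentioned above reduces to a one-line linear combination of two images. The following is a minimal sketch in Python, assuming NumPy arrays for the images; the weighted value α is left open by the disclosure, so the 0.5 default below is purely illustrative:

```python
import numpy as np

def iwb(image1, image2, alpha=0.5):
    # Image Weighted Blending (IWB) as described in the text:
    #   image1 : first stage low-frequency derain image (O_LL^1)
    #   image2 : second stage derain image (DR^2)
    #   alpha  : weighted value in [0, 1]; 0.5 is an assumed default,
    #            not a value fixed by the disclosure.
    return image1 * (1.0 - alpha) + image2 * alpha
```

With α closer to 1 the blended result leans toward the second stage derain image; with α closer to 0 it leans toward the first stage low-frequency derain image.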

    [0028] Please refer to FIGS. 1, 2 and 3. FIG. 3 shows a schematic view of a wavelet transforming step S01 of the single image deraining method 100 of FIG. 1. In FIG. 1, the wavelet transforming step S01 includes a first stage wavelet decomposing step S011 and a second stage wavelet decomposing step S012. The first stage wavelet decomposing step S011 is performed to drive the processing unit to decompose the initial rain image R.sup.0 into the first stage low-frequency rain image LL.sub.1 and the first stage high-frequency rain images LH.sub.1, HL.sub.1, HH.sub.1 according to the wavelet transforming procedure 411. The second stage wavelet decomposing step S012 is performed to drive the processing unit to decompose the first stage low-frequency rain image LL.sub.1 into the second stage low-frequency rain image LL.sub.2 and the second stage high-frequency rain images LH.sub.2, HL.sub.2, HH.sub.2 according to the wavelet transforming procedure 411.

    [0029] In FIG. 3, according to the first stage wavelet decomposing step S011, the processing unit uses a low-pass filter F and a high-pass filter G to perform a first horizontal decomposition (i.e., along the rows in FIG. 3) and a first vertical decomposition (i.e., along the columns in FIG. 3) on the initial rain image R.sup.0 to generate four frequency bands with different frequencies (that is, the first stage low-frequency rain image LL.sub.1 and the first stage high-frequency rain images LH.sub.1, HL.sub.1, HH.sub.1). In detail, when the initial rain image R.sup.0 is input to the low-pass filter F, the low-pass filter F filters out the high-frequency part of the initial rain image R.sup.0 and outputs its low-frequency part; when the initial rain image R.sup.0 is input to the high-pass filter G, the high-pass filter G filters out the low-frequency part and outputs the high-frequency part. After that, according to the second stage wavelet decomposing step S012, the processing unit performs a second horizontal decomposition and a second vertical decomposition on the first stage low-frequency rain image LL.sub.1 to generate another four frequency bands with different frequencies (that is, the second stage low-frequency rain image LL.sub.2 and the second stage high-frequency rain images LH.sub.2, HL.sub.2, HH.sub.2).
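The two-stage decomposition described above can be sketched as follows. This is an illustrative approximation rather than the disclosed implementation: it assumes un-normalized Haar analysis filters (a pairwise average for F and a pairwise difference for G) with circular boundary handling, and it keeps every sub-band at the input size, as a stationary (undecimated) transform does:

```python
import numpy as np

def haar_swt2_level(img):
    """One stage of a stationary (undecimated) 2-D Haar decomposition.

    Rows are filtered first, then columns, without downsampling, so all
    four sub-bands keep the input size. Un-normalized filters and
    circular boundaries are assumed conventions, not the disclosure's.
    """
    def lo(a, axis):  # low-pass F: average of a pixel and its neighbor
        return (a + np.roll(a, -1, axis=axis)) / 2.0

    def hi(a, axis):  # high-pass G: difference of a pixel and its neighbor
        return (a - np.roll(a, -1, axis=axis)) / 2.0

    L = lo(img, axis=1)   # horizontal decomposition (rows)
    H = hi(img, axis=1)
    LL = lo(L, axis=0)    # vertical decomposition (columns): approximation
    LH = hi(L, axis=0)    # vertical detail band
    HL = lo(H, axis=0)    # horizontal detail band
    HH = hi(H, axis=0)    # diagonal detail band
    return LL, LH, HL, HH

# Two-stage decomposition: SWT of R^0, then SWT of LL_1.
rain = np.random.rand(8, 8)                      # stands in for R^0
LL1, LH1, HL1, HH1 = haar_swt2_level(rain)       # first stage
LL2, LH2, HL2, HH2 = haar_swt2_level(LL1)        # second stage
```

Because each filter pair splits a pixel into a sum half and a difference half, the four sub-bands of this sketch add back exactly to the input, which makes the inverse step trivial to check.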

    [0030] Specifically, the wavelet transforming procedure 411 includes a wavelet transforming function, the initial rain image, a first stage low-frequency wavelet coefficient, a plurality of first stage high-frequency wavelet coefficients, a second stage low-frequency wavelet coefficient and a plurality of second stage high-frequency wavelet coefficients. The wavelet transforming function is represented as SWT, the initial rain image is represented as R.sup.0, the first stage low-frequency wavelet coefficient is represented as I.sub.LL.sup.1, the first stage high-frequency wavelet coefficients are represented as I.sub.LH.sup.1, I.sub.HL.sup.1 and I.sub.HH.sup.1, respectively, the second stage low-frequency wavelet coefficient is represented as I.sub.LL.sup.2, and the second stage high-frequency wavelet coefficients are represented as I.sub.LH.sup.2, I.sub.HL.sup.2 and I.sub.HH.sup.2, respectively, and conform to the two following equations (1) and (2):


    SWT(R.sup.0)=[I.sub.LL.sup.1,I.sub.LH.sup.1,I.sub.HL.sup.1,I.sub.HH.sup.1]  (1);


    SWT(I.sub.LL.sup.1)=[I.sub.LL.sup.2,I.sub.LH.sup.2,I.sub.HL.sup.2,I.sub.HH.sup.2]  (2).

    [0031] The first stage low-frequency wavelet coefficient I.sub.LL.sup.1 corresponds to the first stage low-frequency rain image LL.sub.1 and contains the smooth part of the image. The first stage high-frequency wavelet coefficients I.sub.LH.sup.1, I.sub.HL.sup.1 and I.sub.HH.sup.1 correspond to the first stage high-frequency rain images LH.sub.1, HL.sub.1, HH.sub.1 and contain the vertical details, the horizontal details and the diagonal details of the image, respectively. Likewise, the second stage low-frequency wavelet coefficient I.sub.LL.sup.2 corresponds to the second stage low-frequency rain image LL.sub.2. The second stage high-frequency wavelet coefficients I.sub.LH.sup.2, I.sub.HL.sup.2 and I.sub.HH.sup.2 correspond to the second stage high-frequency rain images LH.sub.2, HL.sub.2, HH.sub.2, respectively. The wavelet transforming function can be a Haar Wavelet, but the present disclosure is not limited thereto.

    [0032] Please refer to FIG. 4. FIG. 4 shows a schematic view of the low-frequency deraining model 412 and the high-frequency deraining model 413 of the WAAR 110 of FIG. 2. In FIG. 4, the low-frequency deraining model 412 in the image deraining step S02 can include a convolution operation conv, an activation function PReLU and a plurality of residual blocks RLB. I.sub.LL.sup.n represents an n-th stage low-frequency wavelet coefficient, O.sub.LL.sup.n represents an n-th stage low-frequency derain image, F.sub.LF.sub.1.sup.n represents a shallow feature extracted from I.sub.LL.sup.n, F.sub.LL.sub.1.sup.n represents a calculation result of the first RLB, F.sub.LL.sub.2.sup.n represents a calculation result of the second RLB, and so on, such that F.sub.LL.sub.h.sup.n represents a calculation result of the h-th RLB and F.sub.LL.sub.H.sup.n represents a calculation result of the H-th RLB. In particular, F.sub.LF.sub.2.sup.n is a residual feature fusion result: F.sub.LL.sub.1.sup.n goes through the H residual blocks RLB for the deep feature extraction and the residual learning to generate F.sub.LL.sub.H.sup.n, the convolution operation conv is then performed on F.sub.LL.sub.H.sup.n, and finally the shallow feature F.sub.LF.sub.1.sup.n is added to the convolution result, so that F.sub.LF.sub.2.sup.n is obtained. h and H are both positive integers.

    [0033] On the other hand, the high-frequency deraining model 413 can include a convolution operation conv, a 1*1 convolution operation 1*1conv, a concatenation function C and a plurality of residual dense blocks RDB. I.sub.Detail.sup.n represents an n-th stage high-frequency wavelet coefficient, O.sub.Detail.sup.n represents an n-th stage high-frequency derain image, F.sub.HF.sub.1.sup.n represents a shallow feature extracted from I.sub.Detail.sup.n, F.sub.DD.sub.1.sup.n represents a calculation result of the first RDB, F.sub.DD.sub.2.sup.n represents a calculation result of the second RDB, and so on, such that F.sub.DD.sub.h.sup.n represents a calculation result of the h-th RDB and F.sub.DD.sub.H.sup.n represents a calculation result of the H-th RDB. In particular, F.sub.HF.sub.2.sup.n is a global feature connection result: F.sub.HF.sub.1.sup.n goes through the H residual dense blocks RDB for the deep feature extraction and the residual learning to generate a plurality of operation results (i.e., F.sub.DD.sub.1.sup.n to F.sub.DD.sub.H.sup.n), the operation results are then processed in sequence through the concatenation function C, the 1*1 convolution operation 1*1conv and the convolution operation conv, and finally the shallow feature F.sub.HF.sub.1.sup.n is added to the processed result, so that F.sub.HF.sub.2.sup.n is obtained.

    [0034] The low-frequency deraining model 412 and the high-frequency deraining model 413 in the WAAR 110 can form a deep network having the high-frequency blocks connected to the low-frequency blocks; in other words, the residual blocks RLB of the low-frequency deraining model 412 are respectively connected to the residual dense blocks RDB of the high-frequency deraining model 413, so as to extract richer image features, which are beneficial to obtain the potential dependence between the low-frequency features and the high-frequency features, so that the high-frequency deraining model 413 can remove the rain patterns more effectively in the high-frequency part. The training methods of the low-frequency deraining model 412 and the high-frequency deraining model 413 of the present disclosure are described in the following paragraphs.

    [0035] First, a training image pair of a rain image and a rainless image {y.sub.i, x.sub.i}.sub.i=1, . . . ,m is stored in the storing unit, wherein m is a number of the training image pairs. The processing unit is connected to the storing unit and carries out an n-th stage stationary wavelet transform (n=1, 2, . . . , N) on the rain image and the rainless image to obtain an n-th stage rain image y.sub.i,n and an n-th stage rainless image x.sub.i,n, which conform to the two following equations (3) and (4):


    y.sub.i,n={y.sub.i,n.sup.LL,y.sub.i,n.sup.Detail}  (3);


    x.sub.i,n={x.sub.i,n.sup.LL,x.sub.i,n.sup.Detail}  (4).

    [0036] Then, the processing unit updates a parameter of the low-frequency deraining model 412 and the high-frequency deraining model 413 according to a loss function, and the loss function and the parameter conform to a following equation (5):

    [00001]
$$\mathrm{Loss}(w)=\frac{1}{m}\sum_{i=1}^{m}\sum_{n=1}^{N}\left\|H(y_{i,n}^{\mathrm{Detail}};w)-x_{i,n}^{\mathrm{Detail}}\right\|+\frac{1}{m}\sum_{i=1}^{m}\sum_{n=1}^{N-1}\left\|L(y_{i,n}^{LL};w)-x_{i,n}^{LL}\right\|^{2}+\frac{1}{m}\sum_{i=1}^{m}\left\|L(y_{i,N}^{LL};w)-x_{i,N}^{LL}\right\|^{2}.\tag{5}$$

    [0037] Loss is the loss function, w is the parameter of the low-frequency deraining model 412 and the high-frequency deraining model 413, i is an index of the training image pairs, m is the number of the training image pairs, n is a stage of a wavelet transformation (i.e., the stationary wavelet transform), N is a number of the stages, y.sub.i,n.sup.LL is an n-th stage low-frequency rain image, x.sub.i,n.sup.LL is an n-th stage low-frequency rainless image, y.sub.i,n.sup.Detail is an n-th stage high-frequency rain image, x.sub.i,n.sup.Detail is an n-th stage high-frequency rainless image, y.sub.i,N.sup.LL is an N-th stage low-frequency rain image, and x.sub.i,N.sup.LL is an N-th stage low-frequency rainless image. In detail, w is the parameter representing the entire model. The parameter w is optimized by backpropagation, and the processing unit uses the loss function Loss to train the parameter w, so that the trained parameter w can jointly estimate the n-th stage low-frequency rain image y.sub.i,n.sup.LL, the n-th stage high-frequency rain image y.sub.i,n.sup.Detail, the n-th stage low-frequency rainless image x.sub.i,n.sup.LL and the n-th stage high-frequency rainless image x.sub.i,n.sup.Detail.
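    As an illustrative sketch only, the multi-stage loss of equation (5) can be written in NumPy as below. The high-frequency model H and low-frequency model L are passed in as callables, and the per-stage band layout (dictionaries keyed by "Detail" and "LL") is an assumption for illustration, not the disclosed implementation.

```python
import numpy as np

def wavelet_loss(H, L, pairs, N):
    """Sketch of equation (5): an L1 term on the high-frequency (Detail)
    bands of every stage 1..N, plus squared-L2 terms on the low-frequency
    (LL) bands of stages 1..N-1 and of the final N-th stage."""
    m = len(pairs)
    total = 0.0
    for y, x in pairs:  # y: rain-image bands, x: rainless-image bands
        for n in range(1, N + 1):
            total += np.abs(H(y["Detail"][n]) - x["Detail"][n]).sum()
        for n in range(1, N):
            total += ((L(y["LL"][n]) - x["LL"][n]) ** 2).sum()
        total += ((L(y["LL"][N]) - x["LL"][N]) ** 2).sum()
    return total / m
```

    When the models reproduce the rainless bands exactly, the loss is zero; any residual rain pattern in a band adds to the corresponding term.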

    [0038] Please refer to FIGS. 1 and 2. In the first inverse wavelet transforming step S03, the first inverse wavelet transforming procedure 414 is a reverse procedure of the wavelet transforming procedure 411, and is mainly configured to perform an inverse Haar wavelet transform on the second stage low-frequency derain image O.sub.LL.sup.2 and the second stage high-frequency derain images O.sub.Detail.sup.2; in other words, the first inverse wavelet transforming procedure 414 converts the wavelet coefficients corresponding to the second stage low-frequency derain image O.sub.LL.sup.2 and the second stage high-frequency derain images O.sub.Detail.sup.2 from the frequency domain back to the spatial domain to recompose the second stage derain image DR.sup.2.
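    For intuition only, one level of an undecimated (stationary) 2-D Haar transform and its zero-shift inverse can be sketched in NumPy as follows. This toy version uses circular boundaries and is an assumption for illustration, not the disclosed procedures 411 and 414.

```python
import numpy as np

def haar_swt2(img):
    """One level of the undecimated 2-D Haar transform: returns the
    low-frequency band LL and the high-frequency bands (LH, HL, HH),
    all at the same resolution as the input (circular boundary)."""
    r = np.roll(img, -1, axis=0)    # neighbour one row down
    c = np.roll(img, -1, axis=1)    # neighbour one column right
    rc = np.roll(r, -1, axis=1)     # diagonal neighbour
    LL = (img + r + c + rc) / 4.0   # average in both directions
    LH = (img - r + c - rc) / 4.0   # difference along rows, average along columns
    HL = (img + r - c - rc) / 4.0   # average along rows, difference along columns
    HH = (img - r - c + rc) / 4.0   # difference in both directions
    return LL, (LH, HL, HH)

def haar_iswt2(LL, details):
    """Zero-shift inverse of haar_swt2: the four bands sum back to the image."""
    LH, HL, HH = details
    return LL + LH + HL + HH
```

    The four bands sum back to the original image exactly, which is the perfect-reconstruction property the inverse wavelet transforming procedures rely on.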

    [0039] In the weighted blending step S04, the weighted blending procedure 415 includes a weighted blending function, a weighted value, the first stage low-frequency derain image and the second stage derain image. The weighted blending function is represented as IWB, the weighted value is represented as α, the first stage low-frequency derain image is represented as image1, and the second stage derain image is represented as image2, and these conform to the following equation (6):


    IWB=image1*(1.0−α)+image2*α  (6).

    [0040] In addition, the weighted value α in the first embodiment can be 0.5, but the present disclosure is not limited thereto.
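    A minimal sketch of the weighted blending function of equation (6), applied pixel-wise to NumPy arrays (the function name iwb is an assumption):

```python
import numpy as np

def iwb(image1, image2, alpha=0.5):
    """Equation (6): pixel-wise weighted blend of two images; with
    alpha = 0.5 the two images contribute equally."""
    return image1 * (1.0 - alpha) + image2 * alpha
```

    With alpha = 0.5 as in the first embodiment, the result is the simple average of the two images; other values shift weight toward image1 or image2.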

    [0041] It is worth explaining that the second inverse wavelet transforming procedure 416 in the second inverse wavelet transforming step S05 includes an inverse wavelet transforming function, a concatenation function, a weighted blending function, the first stage low-frequency derain image, the second stage low-frequency derain image, the first stage high-frequency derain images, the second stage high-frequency derain images and the final derain image. The inverse wavelet transforming function is represented as ISWT, the concatenation function is represented as concat, the weighted blending function is represented as IWB, the first stage low-frequency derain image is represented as O.sub.LL.sup.1, the second stage low-frequency derain image is represented as O.sub.LL.sup.2, the first stage high-frequency derain images are represented as O.sub.Detail.sup.1, the second stage high-frequency derain images are represented as O.sub.Detail.sup.2, and the final derain image is represented as C.sup.1, and these conform to the following equation (7):


    C.sup.1=ISWT(O.sub.Detail.sup.1,IWB(O.sub.LL.sup.1,ISWT(concat(O.sub.LL.sup.2,O.sub.Detail.sup.2))))  (7).
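    Equation (7) nests one weighted blend between two inverse transforms; a sketch of that composition follows, with ISWT and IWB supplied as callables (the signatures — an inverse transform taking a low-frequency band plus its detail bands, and a two-image blend — are assumptions for illustration):

```python
def compose_final(iswt, iwb, o_ll1, o_detail1, o_ll2, o_detail2):
    """Equation (7):
    C^1 = ISWT(O_Detail^1, IWB(O_LL^1, ISWT(concat(O_LL^2, O_Detail^2))))."""
    dr2 = iswt(o_ll2, o_detail2)   # second stage derain image DR^2
    bdr1 = iwb(o_ll1, dr2)         # first stage blended derain image
    return iswt(bdr1, o_detail1)   # final derain image C^1
```

    Reading inside out: the second stage bands are inverted first, the result is blended with the first stage low-frequency band, and a final inverse transform attaches the first stage detail bands.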

    [0042] Compared with the conventional image deraining method, the single image deraining method 100 of the present disclosure not only removes rain from the last stage low-frequency rain image, but also retains the high-frequency wavelet coefficients and the low-frequency wavelet coefficients in each stage, and then inputs the low-frequency rain image decomposed by the stationary wavelet transform in each stage to the low-frequency deraining model 412 for deraining. Therefore, the low-frequency rain patterns can be eliminated effectively and recursively through the weighted blending function IWB and the inverse wavelet transforming function ISWT, and the final derain image C.sup.1 having a clean background is restored.

    [0043] Please refer to FIGS. 5 and 6. FIG. 5 shows a flow chart of a single image deraining method 200 according to a second embodiment of the present disclosure. FIG. 6 shows a schematic view of a WAAR 210 (i is 1 to n) of the single image deraining method 200 of the second embodiment of the present disclosure. In FIGS. 5 and 6, the single image deraining method 200 is configured to convert an initial rain image R.sup.0 into a final derain image C.sup.1, and includes performing a wavelet transforming step S11, an image deraining step S12, a first inverse wavelet transforming step S13, a weighted blending step S14, a second inverse wavelet transforming step S15 and a residual network learning step S16.

    [0044] The wavelet transforming step S11 is performed to drive a processing unit to process the initial rain image R.sup.0 to generate an i-th stage low-frequency rain image and a plurality of i-th stage high-frequency rain images according to a wavelet transforming procedure 411, wherein i is 1 to n, i and n are both positive integers, and n is greater than or equal to 3. Specifically, the processing unit performs the n-stage stationary wavelet transform on the initial rain image R.sup.0 according to the wavelet transforming procedure 411 to generate a first stage low-frequency rain image LL.sub.1, a plurality of first stage high-frequency rain images LH.sub.1, HL.sub.1, HH.sub.1, a second stage low-frequency rain image LL.sub.2 and a plurality of second stage high-frequency rain images LH.sub.2, HL.sub.2, HH.sub.2, and so on to generate an n-th stage low-frequency rain image and a plurality of n-th stage high-frequency rain images.
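    The n-stage decomposition of step S11 can be sketched as a loop over a single-level transform supplied as a callable; the callable's return layout, (LL, (LH, HL, HH)), is an assumption for illustration:

```python
def multistage_swt(img, swt2_level, n):
    """Step S11 sketch: repeatedly decompose the running low-frequency
    band, keeping every stage's LL band and (LH, HL, HH) detail bands."""
    ll = img
    low_bands, detail_bands = [], []
    for _ in range(n):
        ll, details = swt2_level(ll)
        low_bands.append(ll)       # i-th stage low-frequency rain image
        detail_bands.append(details)  # i-th stage high-frequency rain images
    return low_bands, detail_bands
```

    Each iteration decomposes only the previous stage's low-frequency band, so the detail bands of every stage are preserved for the later reconstruction steps.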

    [0045] The image deraining step S12 is performed to drive the processing unit to input the first stage low-frequency rain image LL.sub.1 and the second stage low-frequency rain image LL.sub.2 to a low-frequency deraining model 412 to output a first stage low-frequency derain image O.sub.LL.sup.1 and a second stage low-frequency derain image O.sub.LL.sup.2, and input the first stage high-frequency rain images LH.sub.1, HL.sub.1, HH.sub.1 and the second stage high-frequency rain images LH.sub.2, HL.sub.2, HH.sub.2 to a high-frequency deraining model 413 to output a plurality of first stage high-frequency derain images O.sub.Detail.sup.1 and a plurality of second stage high-frequency derain images O.sub.Detail.sup.2, and so on to input the i-th stage low-frequency rain image to the low-frequency deraining model 412 to output an i-th stage low-frequency derain image, and input the i-th stage high-frequency rain images to the high-frequency deraining model 413 to output a plurality of i-th stage high-frequency derain images.

    [0046] The first inverse wavelet transforming step S13 is performed to drive the processing unit to recombine the n-th stage low-frequency derain image with the n-th stage high-frequency rain images to form an n-th stage derain image according to a first inverse wavelet transforming procedure 414.

    [0047] The weighted blending step S14 is performed to drive the processing unit to blend the (n−1)-th stage low-frequency derain image with the n-th stage derain image to generate a (n−1)-th stage blended derain image according to a weighted blending procedure 415.

    [0048] The second inverse wavelet transforming step S15 is performed to drive the processing unit to recombine the (n−1)-th stage high-frequency derain images with the (n−1)-th stage blended derain image to form a (n−1)-th stage derain image according to a second inverse wavelet transforming procedure 416, and then the processing unit sets n to n−1.

    [0049] The residual network learning step S16 is performed to drive the processing unit to repeatedly execute the weighted blending step S14 and the second inverse wavelet transforming step S15 according to n until n=2. In response to determining that n=2 in the residual network learning step S16, the (n−1)-th stage derain image of the second inverse wavelet transforming step S15 is the final derain image C.sup.1. The details of the abovementioned steps are then described below through more detailed embodiments.
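    Steps S13 to S16 above can be read as one reconstruction loop from the coarsest stage back to the first. A sketch follows, with hypothetical iswt and iwb callables and the 1-indexed stages stored in 0-indexed Python lists:

```python
def reconstruct_derain(o_ll, o_detail, iswt, iwb):
    """Steps S13-S16 sketch: o_ll[k] and o_detail[k] hold the derained
    low- and high-frequency bands of stage k+1, for stages 1..n."""
    n = len(o_ll)
    # Step S13: recombine the n-th stage bands into the n-th stage derain image.
    dr = iswt(o_ll[n - 1], o_detail[n - 1])
    # Steps S14-S16: blend the next coarser low-frequency band with the
    # current derain image, invert, and repeat down to the first stage.
    for k in range(n - 2, -1, -1):
        bdr = iwb(o_ll[k], dr)       # (k+1)-th stage blended derain image
        dr = iswt(bdr, o_detail[k])  # (k+1)-th stage derain image
    return dr  # final derain image C^1
```

    For n = 2 this loop reduces exactly to the two-stage composition of equation (7).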

    [0050] Please refer to FIGS. 5 and 7. FIG. 7 shows a schematic view of a WAAR 310 (i is 1 to 3) of a single image deraining method according to a third embodiment of the present disclosure. The steps of the single image deraining method (not shown) of the third embodiment are the same as the steps of the single image deraining method 200 of the second embodiment. Specifically, the single image deraining method of the third embodiment is obtained by setting n of the single image deraining method 200 of the second embodiment to 3.

    [0051] In FIGS. 5 and 7, the wavelet transforming step S11 is performed to drive the processing unit to process the initial rain image R.sup.0 to generate a first stage low-frequency rain image LL.sub.1, a plurality of first stage high-frequency rain images LH.sub.1, HL.sub.1, HH.sub.1, a second stage low-frequency rain image LL.sub.2 and a plurality of second stage high-frequency rain images LH.sub.2, HL.sub.2, HH.sub.2, a third stage low-frequency rain image LL.sub.3 and a plurality of third stage high-frequency rain images LH.sub.3, HL.sub.3, HH.sub.3 according to the wavelet transforming procedure 411.

    [0052] The image deraining step S12 is performed to drive the processing unit to input the first stage low-frequency rain image LL.sub.1, the second stage low-frequency rain image LL.sub.2 and the third stage low-frequency rain image LL.sub.3 to the low-frequency deraining model 412 to output a first stage low-frequency derain image O.sub.LL.sup.1, a second stage low-frequency derain image O.sub.LL.sup.2, and a third stage low-frequency derain image O.sub.LL.sup.3, and input the first stage high-frequency rain images LH.sub.1, HL.sub.1, HH.sub.1, the second stage high-frequency rain images LH.sub.2, HL.sub.2, HH.sub.2 and the third stage high-frequency rain images LH.sub.3, HL.sub.3, HH.sub.3 to the high-frequency deraining model 413 to output a plurality of first stage high-frequency derain images O.sub.Detail.sup.1, a plurality of second stage high-frequency derain images O.sub.Detail.sup.2 and a plurality of third stage high-frequency derain images O.sub.Detail.sup.3.

    [0053] The first inverse wavelet transforming step S13 is performed to drive the processing unit to recombine the third stage low-frequency derain image O.sub.LL.sup.3 with the third stage high-frequency derain images O.sub.Detail.sup.3 to form a third stage derain image DR.sup.3 according to the first inverse wavelet transforming procedure 414.

    [0054] The weighted blending step S14 is performed to drive the processing unit to blend the second stage low-frequency derain image O.sub.LL.sup.2 with the third stage derain image DR.sup.3 to generate a second stage blended derain image BDR.sup.2 according to the weighted blending procedure 415.

    [0055] The second inverse wavelet transforming step S15 is performed to drive the processing unit to recombine the second stage high-frequency derain images O.sub.Detail.sup.2 with the second stage blended derain image BDR.sup.2 to form a second stage derain image DR.sup.2 according to the second inverse wavelet transforming procedure 416, and then the processing unit sets n (i.e., 3) to n−1 (i.e., 2).

    [0056] The residual network learning step S16 is performed to drive the processing unit to repeatedly execute the weighted blending step S14 and the second inverse wavelet transforming step S15 according to the reset n (i.e., 2) until n=2. Since the reset n is already equal to 2, the processing unit only needs to execute the weighted blending step S14 and the second inverse wavelet transforming step S15 once more (i.e., execute a next weighted blending step S14 and a next second inverse wavelet transforming step S15).

    [0057] The next weighted blending step S14 is performed to drive the processing unit to blend the first stage low-frequency derain image O.sub.LL.sup.1 with the second stage derain image DR.sup.2 to generate a first stage blended derain image BDR.sup.1 according to the weighted blending procedure 415.

    [0058] The next second inverse wavelet transforming step S15 is performed to drive the processing unit to recombine the first stage high-frequency derain images O.sub.Detail.sup.1 with the first stage blended derain image BDR.sup.1 to form a first stage derain image according to the second inverse wavelet transforming procedure 416, and the first stage derain image is the final derain image C.sup.1.

    [0059] Therefore, the single image deraining method 200 of the second embodiment or the single image deraining method of the third embodiment performs the multi-stage stationary wavelet transform to decompose the initial rain image R.sup.0, and focuses on the low-frequency rain image in each stage to perform the inverse stationary wavelet transform and the image weighted blending so as to effectively remove the rain patterns of the low-frequency rain image in each stage.

    [0060] Please refer to FIGS. 1-2 and 8. FIG. 8 shows a block diagram of a single image deraining system 400 according to a fourth embodiment of the present disclosure. In FIGS. 1-2 and 8, the single image deraining system 400 is configured to convert an initial rain image R.sup.0 into a final derain image C.sup.1, and includes a storing unit 410 and a processing unit 420.

    [0061] The storing unit 410 is configured to access the initial rain image R.sup.0, a wavelet transforming procedure 411, a low-frequency deraining model 412, a high-frequency deraining model 413, a first inverse wavelet transforming procedure 414, a weighted blending procedure 415 and a second inverse wavelet transforming procedure 416. The processing unit 420 is connected to the storing unit 410 and configured to implement the single image deraining method 100, 200. In detail, the processing unit 420 can be a Digital Signal Processor (DSP), a Micro Processing Unit (MPU), a Central Processing Unit (CPU) or other electronic processors, but the present disclosure is not limited thereto. Therefore, the single image deraining system 400 of the present disclosure decomposes the initial rain image R.sup.0 through the stationary wavelet transform, and uses the low-frequency deraining model 412 and the high-frequency deraining model 413 to remove the rain patterns. Then, the inverse wavelet transforming function ISWT and the weighted blending function IWB are performed on the low-frequency derain image of each previous stage to restore the initial rain image R.sup.0 to the final derain image C.sup.1 having a clean background.

    [0062] In summary, the present disclosure has the following advantages. First, the initial rain image is decomposed through the multi-stage stationary wavelet transform, the high-frequency coefficients and the low-frequency coefficients are retained in each stage, and the inverse wavelet transforming function ISWT and the weighted blending function IWB are then applied to the low-frequency rain image in each stage so as to effectively remove the rain patterns of the low-frequency rain image in each stage. Second, as the residual blocks of the low-frequency deraining model and the residual dense blocks of the high-frequency deraining model are connected to each other, the high-frequency deraining model can remove the rain patterns more effectively in the high-frequency part. Third, the present disclosure not only performs deraining for the last stage low-frequency rain image, but also inputs the low-frequency rain image of each stage into the low-frequency deraining model for rain removal so as to avoid blurring the edges of the derain image.

    [0063] Although the present disclosure has been described in considerable detail with reference to certain embodiments thereof, other embodiments are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the embodiments contained herein.

    [0064] It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present disclosure without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims.