A METHOD, APPARATUS AND ELECTRIC DEVICE FOR IMAGE FUSION
20200342580 · 2020-10-29
CPC classification: G06T5/94 (PHYSICS)
Abstract
The present disclosure provides a method, an electronic device, and a computer readable storage medium for image fusion. The image fusion method includes calculating a fusion coefficient image M based on a first frame image I.sub.1 or based on both the first frame image I.sub.1 and a second frame image I.sub.2; calculating a first gradient D.sub.1 of the first frame image I.sub.1 and a second gradient D.sub.2 of the second frame image I.sub.2; calculating a preliminary fusion result J based on the calculated fusion coefficient image M, the first gradient D.sub.1 and the second gradient D.sub.2; and obtaining an output image I.sub.3 based on the calculated fusion coefficient image M, the first gradient D.sub.1, the second gradient D.sub.2 and the preliminary fusion result J, wherein brightness of the first frame image I.sub.1 is greater than brightness of the second frame image I.sub.2, and wherein the fusion coefficient image M is used to mark fusion weights of pixels in the first frame image I.sub.1.
Claims
1. An image fusion method, comprising: calculating a fusion coefficient image M based on a first frame image I.sub.1 or based on both the first frame image I.sub.1 and a second frame image I.sub.2; calculating a first gradient D.sub.1 of the first frame image I.sub.1 and a second gradient D.sub.2 of the second frame image I.sub.2; calculating a preliminary fusion result J based on the calculated fusion coefficient image M, the first gradient D.sub.1 and the second gradient D.sub.2; and obtaining an output image I.sub.3 based on the calculated fusion coefficient image M, the first gradient D.sub.1, the second gradient D.sub.2 and the preliminary fusion result J, wherein brightness of the first frame image I.sub.1 is greater than brightness of the second frame image I.sub.2, and wherein the fusion coefficient image M is used to mark fusion weights of pixels in the first frame image I.sub.1.
2. The method of claim 1, wherein obtaining the output image I.sub.3 based on the calculated fusion coefficient image M, the first gradient D.sub.1, the second gradient D.sub.2, and the preliminary fusion result J comprises: calculating a third gradient D.sub.3 based on the fusion coefficient image M, the first gradient D.sub.1, and the second gradient D.sub.2; and obtaining the output image I.sub.3 based on the calculated third gradient D.sub.3 and the preliminary fusion result J, wherein a sum of a difference between the output image I.sub.3 and the preliminary fusion result J and a difference between a gradient of the output image I.sub.3 and the third gradient D.sub.3 is a minimum.
3. The method of claim 2, wherein obtaining the output image I.sub.3 based on the calculated third gradient D.sub.3 and the preliminary fusion result J comprises: obtaining the output image I.sub.3 by a following optimization manner:
I.sub.3=argmin{(I.sub.3−J).sup.2+λ(∇I.sub.3−D.sub.3).sup.2}  (1) where ∇I.sub.3 represents a gradient of I.sub.3 and λ is a positive constant.
4. The method of claim 1, wherein calculating the fusion coefficient image M based on the first frame image I.sub.1 comprises: converting the first frame image I.sub.1 into a first grayscale image G.sub.1; and calculating the fusion coefficient image M according to a following equation:
M(x, y)=255−(255−G.sub.1(x, y))*α  (2) where M(x, y) represents a value of the fusion coefficient image M at pixel (x, y), G.sub.1(x, y) represents a gray value of the first grayscale image G.sub.1 at the pixel (x, y), α is a positive constant, and x and y are non-negative integers.
5. The method of claim 1, wherein calculating the fusion coefficient image M based on both the first frame image I.sub.1 and the second frame image I.sub.2 comprises: converting the first frame image I.sub.1 into a first grayscale image G.sub.1; converting the second frame image I.sub.2 into a second grayscale image G.sub.2; calculating a first fusion image M.sub.1 based on the first grayscale image G.sub.1; calculating a protection area M.sub.2 based on both the first grayscale image G.sub.1 and the second grayscale image G.sub.2; and calculating the fusion coefficient image M based on the calculated first fusion image M.sub.1 and the protection area M.sub.2.
6. The method of claim 5, wherein calculating the first fusion image M.sub.1 based on the first grayscale image G.sub.1 comprises calculating the first fusion image M.sub.1 according to a following equation:
M.sub.1(x, y)=255−(255−G.sub.1(x, y))*α  (3) where M.sub.1(x, y) represents a value of the first fusion image M.sub.1 at pixel (x, y), G.sub.1(x, y) represents a gray value of the first grayscale image G.sub.1 at the pixel (x, y), α is a positive constant, and x and y are non-negative integers; and wherein calculating the protection area M.sub.2 based on both the first grayscale image G.sub.1 and the second grayscale image G.sub.2 comprises calculating the protection area M.sub.2 according to a following equation:
7. The method of claim 5, wherein calculating the fusion coefficient image M based on the calculated first fusion image M.sub.1 and the protection area M.sub.2 comprises calculating the fusion coefficient image M according to a following equation:
M(x, y)=min(M.sub.1(x, y), M.sub.2(x, y))  (5) where M(x, y) represents a value of the fusion coefficient image M at the pixel (x, y), M.sub.1(x, y) represents the value of the first fusion image M.sub.1 at the pixel (x, y), M.sub.2(x, y) represents the value of the protection area M.sub.2 at the pixel (x, y), and x and y are non-negative integers.
8. The method of claim 2, wherein calculating the preliminary fusion result J based on the calculated fusion coefficient image M, the first gradient D.sub.1 and the second gradient D.sub.2 comprises calculating the preliminary fusion result J according to a following equation:
J(x, y)=D.sub.1(x, y)*(255−M(x, y))+M(x, y)*D.sub.2(x, y)  (6) where J(x, y) represents a value of the preliminary fusion result J at pixel (x, y), D.sub.1(x, y) represents a gradient value of the first gradient D.sub.1 at the pixel (x, y), D.sub.2(x, y) represents a gradient value of the second gradient D.sub.2 at the pixel (x, y), and M(x, y) represents a value of the fusion coefficient image M at the pixel (x, y).
9. The method of claim 2, wherein calculating the third gradient D.sub.3 based on the fusion coefficient image M, the first gradient D.sub.1, and the second gradient D.sub.2 comprises calculating the third gradient D.sub.3 according to a following equation:
D.sub.3.sup.x(x, y)=D.sub.1.sup.x(x, y)*(255−M(x, y))+M(x, y)*D.sub.2.sup.x(x, y)
D.sub.3.sup.y(x, y)=D.sub.1.sup.y(x, y)*(255−M(x, y))+M(x, y)*D.sub.2.sup.y(x, y)  (7) where D.sub.3.sup.x(x, y) represents a gradient of the third gradient D.sub.3 in the x direction at pixel (x, y), D.sub.3.sup.y(x, y) represents a gradient of the third gradient D.sub.3 in the y direction at the pixel (x, y), D.sub.1.sup.x(x, y) represents a gradient of the first gradient D.sub.1 in the x direction at the pixel (x, y), D.sub.1.sup.y(x, y) represents a gradient of the first gradient D.sub.1 in the y direction at the pixel (x, y), D.sub.2.sup.x(x, y) represents a gradient of the second gradient D.sub.2 in the x direction at the pixel (x, y), D.sub.2.sup.y(x, y) represents a gradient of the second gradient D.sub.2 in the y direction at the pixel (x, y), and x and y are non-negative integers.
10. The method of claim 1, further comprising: aligning the first frame image I.sub.1 and the second frame image I.sub.2.
11. The method of claim 10, further comprising: compensating the second frame image I.sub.2 by a following equation before aligning the first frame image I.sub.1 and the second frame image I.sub.2:
I.sub.2′=I.sub.2*b.sub.1/b.sub.2  (8) where I.sub.2′ is the compensated image of the second frame image I.sub.2, b.sub.1 is an average gray value of the first frame image I.sub.1, and b.sub.2 is an average gray value of the second frame image I.sub.2.
12. The method of claim 1, wherein the first frame image I.sub.1 and the second frame image I.sub.2 are two frame images with the highest brightness among image frames to be fused or two frame images with the lowest brightness among the image frames to be fused, and the method further comprises: in the case where the first frame image I.sub.1 and the second frame image I.sub.2 for an initial fusion are the two frame images with the highest brightness among the image frames to be fused, taking an output image I.sub.3 resulting from the last fusion as the first frame image I.sub.1 in the next fusion, and taking the frame image with the highest brightness among non-fused image frames of the image frames to be fused as the second frame image I.sub.2 in the next fusion; in the case where the first frame image I.sub.1 and the second frame image I.sub.2 for the initial fusion are the two frame images with the lowest brightness among the image frames to be fused, taking the frame image with the lowest brightness among non-fused image frames of the image frames to be fused as the first frame image I.sub.1 in the next fusion, and taking the output image I.sub.3 resulting from the last fusion as the second frame image I.sub.2 in the next fusion; fusing the first frame image I.sub.1 and the second frame image I.sub.2 by using the method of claim 1 to obtain a new output image I.sub.3; and repeating the previous steps until fusions of all the image frames among the image frames to be fused are completed.
13. An electronic device for image fusion, the electronic device comprising a processor and a memory having instructions stored thereon, which, when executed by the processor, cause the processor to: calculate a fusion coefficient image M based on a first frame image I.sub.1 or based on both the first frame image I.sub.1 and a second frame image I.sub.2; calculate a first gradient D.sub.1 of the first frame image I.sub.1 and a second gradient D.sub.2 of the second frame image I.sub.2; calculate a preliminary fusion result J based on the calculated fusion coefficient image M, the first gradient D.sub.1 and the second gradient D.sub.2; and obtain an output image I.sub.3 based on the calculated fusion coefficient image M, the first gradient D.sub.1, the second gradient D.sub.2 and the preliminary fusion result J, wherein brightness of the first frame image I.sub.1 is greater than brightness of the second frame image I.sub.2, and wherein the fusion coefficient image M is used to mark fusion weights of pixels in the first frame image I.sub.1.
14. The electronic device of claim 13, wherein obtaining the output image I.sub.3 based on the calculated fusion coefficient image M, the first gradient D.sub.1, the second gradient D.sub.2, and the preliminary fusion result J comprises: calculating a third gradient D.sub.3 based on the fusion coefficient image M, the first gradient D.sub.1, and the second gradient D.sub.2; and obtaining the output image I.sub.3 based on the calculated third gradient D.sub.3 and the preliminary fusion result J, wherein a sum of a difference between the output image I.sub.3 and the preliminary fusion result J and a difference between a gradient of the output image I.sub.3 and the third gradient D.sub.3 is a minimum.
15. The electronic device of claim 14, wherein obtaining the output image I.sub.3 based on the calculated third gradient D.sub.3 and the preliminary fusion result J comprises: obtaining the output image I.sub.3 by a following optimization manner:
I.sub.3=argmin{(I.sub.3−J).sup.2+λ(∇I.sub.3−D.sub.3).sup.2}  (9) where ∇I.sub.3 represents a gradient of I.sub.3 and λ is a positive constant.
16. The electronic device of claim 13, wherein calculating the fusion coefficient image M based on both the first frame image I.sub.1 and the second frame image I.sub.2 comprises: converting the first frame image I.sub.1 into a first grayscale image G.sub.1; converting the second frame image I.sub.2 into a second grayscale image G.sub.2; calculating a first fusion image M.sub.1 based on the first grayscale image G.sub.1; calculating a protection area M.sub.2 based on both the first grayscale image G.sub.1 and the second grayscale image G.sub.2; and calculating the fusion coefficient image M based on the calculated first fusion image M.sub.1 and the protection area M.sub.2.
17. The electronic device of claim 14, wherein calculating the third gradient D.sub.3 based on the fusion coefficient image M, the first gradient D.sub.1, and the second gradient D.sub.2 comprises calculating the third gradient D.sub.3 according to a following equation:
D.sub.3.sup.x(x, y)=D.sub.1.sup.x(x, y)*(255−M(x, y))+M(x, y)*D.sub.2.sup.x(x, y)
D.sub.3.sup.y(x, y)=D.sub.1.sup.y(x, y)*(255−M(x, y))+M(x, y)*D.sub.2.sup.y(x, y)  (10) where D.sub.3.sup.x(x, y) represents a gradient of the third gradient D.sub.3 in the x direction at pixel (x, y), D.sub.3.sup.y(x, y) represents a gradient of the third gradient D.sub.3 in the y direction at the pixel (x, y), D.sub.1.sup.x(x, y) represents a gradient of the first gradient D.sub.1 in the x direction at the pixel (x, y), D.sub.1.sup.y(x, y) represents a gradient of the first gradient D.sub.1 in the y direction at the pixel (x, y), D.sub.2.sup.x(x, y) represents a gradient of the second gradient D.sub.2 in the x direction at the pixel (x, y), D.sub.2.sup.y(x, y) represents a gradient of the second gradient D.sub.2 in the y direction at the pixel (x, y), and x and y are non-negative integers.
18. The electronic device of claim 13, wherein the instructions, when executed by the processor, further cause the processor to: align the first frame image I.sub.1 and the second frame image I.sub.2, before performing the processing of claim 13.
19. The electronic device of claim 13, wherein the first frame image I.sub.1 and the second frame image I.sub.2 are two frame images with the highest brightness among image frames to be fused or two frame images with the lowest brightness among the image frames to be fused, and the instructions, when executed by the processor, further cause the processor to: in the case where the first frame image I.sub.1 and the second frame image I.sub.2 for an initial fusion are the two frame images with the highest brightness among the image frames to be fused, take an output image I.sub.3 resulting from the last fusion as the first frame image I.sub.1 in the next fusion, and take the frame image with the highest brightness among non-fused image frames of the image frames to be fused as the second frame image I.sub.2 in the next fusion; in the case where the first frame image I.sub.1 and the second frame image I.sub.2 for the initial fusion are the two frame images with the lowest brightness among the image frames to be fused, take the frame image with the lowest brightness among non-fused image frames of the image frames to be fused as the first frame image I.sub.1 in the next fusion, and take the output image I.sub.3 resulting from the last fusion as the second frame image I.sub.2 in the next fusion; fuse the first frame image I.sub.1 and the second frame image I.sub.2 by using the method of claim 1 to obtain a new output image I.sub.3; and repeat the previous steps until fusions of all the image frames among the image frames to be fused are completed.
20. A computer readable storage medium storing instructions, which, when executed by a processor, cause the processor to: calculate a fusion coefficient image M based on a first frame image I.sub.1 or based on both the first frame image I.sub.1 and a second frame image I.sub.2; calculate a first gradient D.sub.1 of the first frame image I.sub.1 and a second gradient D.sub.2 of the second frame image I.sub.2; calculate a preliminary fusion result J based on the calculated fusion coefficient image M, the first gradient D.sub.1 and the second gradient D.sub.2; and obtain an output image I.sub.3 based on the calculated fusion coefficient image M, the first gradient D.sub.1, the second gradient D.sub.2 and the preliminary fusion result J, wherein brightness of the first frame image I.sub.1 is greater than brightness of the second frame image I.sub.2, and wherein the fusion coefficient image M is used to mark fusion weights of pixels in the first frame image I.sub.1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments of the disclosure in connection with the accompanying drawings. The accompanying drawings are provided for a further understanding of the embodiments of the present disclosure and form a part of the specification. The accompanying drawings, together with the embodiments of the present disclosure, are used to explain the present disclosure, but are not to be construed as limiting it. In the drawings, the same reference numerals refer to the same parts, steps or elements unless explicitly indicated otherwise.
DETAILED DESCRIPTION
[0023] In order to make the objects, technical solutions and advantages of the present disclosure clearer, exemplary embodiments according to the present disclosure will be described in detail below in connection with the accompanying drawings. The described embodiments are only a part, not all, of the embodiments of the present disclosure, and it should be understood that the present disclosure is not limited to the example embodiments described herein.
[0026] Hereinafter, an example flow of an image fusion method that fuses two frame images according to an embodiment of the present disclosure will be described in detail in connection with the accompanying flowchart.
[0028] Referring to the flowchart, the method may begin at step S200. At step S200, a fusion coefficient image M is calculated based on a first frame image I.sub.1 or based on both the first frame image I.sub.1 and a second frame image I.sub.2, wherein brightness of the first frame image I.sub.1 is greater than brightness of the second frame image I.sub.2, and the fusion coefficient image M is used to mark fusion weights of pixels in the first frame image I.sub.1. Example implementations of this step are described further below.
[0029] After that, the method may proceed to step S210. At step S210, a first gradient D.sub.1 of the first frame image I.sub.1 and a second gradient D.sub.2 of the second frame image I.sub.2 are calculated. Specifically, the first gradient D.sub.1 of the first frame image I.sub.1 and the second gradient D.sub.2 of the second frame image I.sub.2 may be calculated according to the following equations (1) and (2).
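As an illustrative sketch only (the patent's own equations (1) and (2) are not reproduced in this text), the gradients may be computed with simple per-axis forward differences; the function name and the choice of operator below are assumptions, not the patent's own definition.

```python
import numpy as np

def forward_gradients(img):
    """Per-axis forward differences, one plausible gradient operator.

    The exact operator of equations (1) and (2) is not reproduced in this
    text, so this implementation is an assumption, not the patent's own.
    """
    g = img.astype(np.float64)
    dx = np.zeros_like(g)
    dy = np.zeros_like(g)
    dx[:, :-1] = g[:, 1:] - g[:, :-1]  # gradient in the x direction
    dy[:-1, :] = g[1:, :] - g[:-1, :]  # gradient in the y direction
    return dx, dy

# Usage: D1x, D1y = forward_gradients(I1); D2x, D2y = forward_gradients(I2)
```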
[0030] After calculating the fusion coefficient image M, the first gradient D.sub.1 and the second gradient D.sub.2, the method may proceed to step S220. At step S220, a preliminary fusion result J is calculated based on the calculated fusion coefficient image M, the first gradient D.sub.1, and the second gradient D.sub.2. The calculation of the preliminary fusion result J based on the calculated fusion coefficient image M, the first gradient D.sub.1 and the second gradient D.sub.2 can be considered as a gradient reconstruction process. Specifically, in some embodiments, the preliminary fusion result J may be calculated based on the fusion coefficient image M, the first gradient D.sub.1, and the second gradient D.sub.2 according to the following equation (3):
J(x, y)=D.sub.1(x, y)*(255−M(x, y))+M(x, y)*D.sub.2(x, y)  (3)
[0031] Where J(x, y) represents a value of the preliminary fusion result J at pixel (x, y), D.sub.1(x, y) represents a gradient value of the first gradient D.sub.1 at pixel (x, y), D.sub.2(x, y) represents a gradient value of the second gradient D.sub.2 at pixel (x, y), M(x, y) represents a value of the fusion coefficient image M at pixel (x, y), and x, y are non-negative integers.
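A minimal sketch of equation (3), taking M on the 0-255 scale exactly as the equation is written (no normalization by 255 appears in the quoted equation):

```python
def preliminary_fusion(D1, D2, M):
    """Equation (3): J = D1*(255 - M) + M*D2, evaluated per pixel.

    Where the bright frame is overexposed (M near 255), the dark frame's
    gradient D2 dominates; where it is well exposed, D1 dominates.
    """
    return D1 * (255.0 - M) + M * D2
```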
[0032] There will be obvious fusion edges in the preliminary fusion result image, so it is not ideal to take the preliminary fusion result image directly as the output image. However, in some scenes, for example, scenes in which the viewer pays little attention to fusion edges, the preliminary fusion result image can be output directly as the fused image.
[0033] After that, the method may proceed to step S230. At step S230, an output image I.sub.3 is obtained based on the calculated fusion coefficient image M, the first gradient D.sub.1, the second gradient D.sub.2, and the preliminary fusion result J. The output image I.sub.3 is the fused image. The output image I.sub.3 fuses dark-part details captured by a bright frame (e.g., the first frame image I.sub.1) and highlight details captured by a dark frame (e.g., the second frame image I.sub.2), so that the fused image I.sub.3 can present more details than a single bright frame or a single dark frame.
[0034] The process of fusing two frame images, the first frame image I.sub.1 and the second frame image I.sub.2, by the image fusion method according to the embodiment of the present disclosure has been described above in connection with the accompanying flowchart.
[0035] The image fusion method described above in connection with the preceding flowchart can be extended with an image alignment step, as described below.
[0037] Compared with the image fusion method described above, the extended image fusion method further includes aligning the first frame image I.sub.1 and the second frame image I.sub.2 before performing the fusion processing.
[0038] In addition, in some embodiments, it is difficult to align the images due to the large brightness difference between bright frames and dark frames. Therefore, before using standard image alignment algorithms, e.g., the mesh-flow image alignment method or the global image alignment method based on homography, the dark frames can be brightness-compensated first. Specifically, the dark frames can be brightness-compensated according to the following equation (4).
I.sub.2′=I.sub.2*b.sub.1/b.sub.2  (4)
[0039] Where I.sub.2′ is the compensated image of the second frame image I.sub.2 (the dark frame), b.sub.1 is an average gray value of the first frame image I.sub.1 (the bright frame), and b.sub.2 is an average gray value of the second frame image I.sub.2. Compensating the brightness of the dark frames before image alignment enhances the stability of the alignment.
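A minimal sketch of equation (4), using the mean gray value as the average and the b.sub.1/b.sub.2 ratio given above:

```python
def compensate_dark_frame(I1, I2):
    """Equation (4): scale the dark frame I2 so that its average gray value
    matches that of the bright frame I1 before alignment (b1/b2 > 1)."""
    b1 = float(I1.mean())  # average gray value of the bright frame
    b2 = float(I2.mean())  # average gray value of the dark frame
    return I2 * (b1 / b2)
```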
[0040] In the above, example flows of image fusion methods to fuse two frame images according to the embodiments of the present disclosure are described in connection with the accompanying drawings. Hereinafter, example implementations of calculating the fusion coefficient image M at step S200 are described in detail.
[0042] In the first example, the fusion coefficient image M is calculated based on the first frame image I.sub.1 alone. The first frame image I.sub.1 is first converted into a first grayscale image G.sub.1, and the fusion coefficient image M is then calculated according to the following Equation (5):
M(x, y)=255−(255−G.sub.1(x, y))*α  (5)
[0043] Where M(x, y) represents the value of the fusion coefficient image M at pixel (x, y), G.sub.1(x, y) represents a gray value of the first grayscale image G.sub.1 at the pixel (x, y), α is a positive constant, and x and y are non-negative integers.
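A sketch of equation (5); α is not given a value in the text, so the default below is purely illustrative:

```python
import numpy as np

def fusion_coefficient_image(G1, alpha=0.8):
    """Equation (5): M = 255 - (255 - G1)*alpha on the 0-255 gray scale.

    alpha stands for the unnamed positive constant of the text; 0.8 is an
    illustrative choice. Where G1 is near 255 (overexposed in the bright
    frame), M is near 255, i.e. a large fusion weight for the dark frame.
    """
    M = 255.0 - (255.0 - G1.astype(np.float64)) * alpha
    return np.clip(M, 0.0, 255.0)
```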
[0045] In the second example, the fusion coefficient image M is calculated based on both the first frame image I.sub.1 and the second frame image I.sub.2. The first frame image I.sub.1 is converted into a first grayscale image G.sub.1, and the second frame image I.sub.2 is converted into a second grayscale image G.sub.2. Then, a first fusion image M.sub.1 is calculated based on the first grayscale image G.sub.1. Specifically, in some embodiments, the first fusion image M.sub.1 may be calculated according to the following Equation (6):
M.sub.1(x, y)=255−(255−G.sub.1(x, y))*α  (6)
[0046] Where M.sub.1(x, y) represents a value of the first fusion image M.sub.1 at pixel (x, y), G.sub.1(x, y) represents a gray value of the first grayscale image G.sub.1 at the pixel (x, y), α is a positive constant, and x and y are non-negative integers.
[0047] At step S200_10, a protection area M.sub.2 is calculated based on both the first grayscale image G.sub.1 and the second grayscale image G.sub.2. Specifically, in some embodiments, the protection area M.sub.2 may be calculated according to the following Equation (7):
[0048] Where M.sub.2(x, y) represents a value of the protection area M.sub.2 at pixel (x, y), G.sub.1(x, y) and G.sub.2(x, y) represent gray values of the first grayscale image G.sub.1 and the second grayscale image G.sub.2 at the pixel (x, y), respectively, and x and y are non-negative integers.
[0049] After that, the method may proceed to step S200_12. At step S200_12, the fusion coefficient image M is calculated based on the calculated first fusion image M.sub.1 and the protection area M.sub.2. Specifically, in some embodiments, the fusion coefficient image M may be calculated according to the following Equation (8):
M(x, y)=min(M.sub.1(x, y), M.sub.2(x, y))  (8)
[0050] Where M(x, y) represents a value of the fusion coefficient image M at pixel (x, y), M.sub.1(x, y) represents the value of the first fusion image M.sub.1 at the pixel (x, y), M.sub.2(x, y) represents the value of the protection area M.sub.2 at the pixel (x, y), and x and y are non-negative integers.
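Equation (8) in code; since Equation (7) for the protection area M.sub.2 is not reproduced in this text, M.sub.2 is treated here as a given input:

```python
import numpy as np

def combine_with_protection(M1, M2):
    """Equation (8): per-pixel minimum, so the protection area M2 caps the
    fusion weight wherever the protection area calls for suppression.
    (Equation (7), which produces M2, is not reproduced in this text.)"""
    return np.minimum(M1, M2)
```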
[0051] The method of calculating the fusion coefficient image M according to the embodiment of the present disclosure described above combines the first fusion image M.sub.1 with the protection area M.sub.2 by taking the per-pixel minimum, so that the fusion weight is capped in areas marked for protection.
[0053] Hereinafter, an example implementation of step S230, obtaining the output image I.sub.3 based on the fusion coefficient image M, the first gradient D.sub.1, the second gradient D.sub.2 and the preliminary fusion result J, is described.
[0054] At step S230_2, a third gradient D.sub.3 is calculated based on the fusion coefficient image M, the first gradient D.sub.1, and the second gradient D.sub.2. Specifically, in some embodiments, the third gradient D.sub.3 may be calculated based on the fusion coefficient image M, the first gradient D.sub.1, and the second gradient D.sub.2 according to the following equation (9):
D.sub.3.sup.x(x, y)=D.sub.1.sup.x(x, y)*(255−M(x, y))+M(x, y)*D.sub.2.sup.x(x, y)
D.sub.3.sup.y(x, y)=D.sub.1.sup.y(x, y)*(255−M(x, y))+M(x, y)*D.sub.2.sup.y(x, y)  (9)
[0055] Where D.sub.3.sup.x(x, y) represents a gradient of the third gradient D.sub.3 in the x direction at pixel (x, y), D.sub.3.sup.y(x, y) represents a gradient of the third gradient D.sub.3 in the y direction at the pixel (x, y), D.sub.1.sup.x(x, y) represents a gradient of the first gradient D.sub.1 in the x direction at the pixel (x, y), D.sub.1.sup.y(x, y) represents a gradient of the first gradient D.sub.1 in the y direction at the pixel (x, y), D.sub.2.sup.x(x, y) represents a gradient of the second gradient D.sub.2 in the x direction at the pixel (x, y), D.sub.2.sup.y(x, y) represents a gradient of the second gradient D.sub.2 in the y direction at the pixel (x, y), and x and y are non-negative integers.
[0056] Calculating the third gradient D.sub.3 by the above equation (9) enables the fused image to have the image gradient information of the second gradient D.sub.2 in the highlight region, while retaining the original image gradient information of the first frame image I.sub.1 in the regions that are not overexposed.
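Equation (9) componentwise, mirroring the weighting of equation (3):

```python
def fused_gradient_field(D1x, D1y, D2x, D2y, M):
    """Equation (9): blend each gradient component per pixel, with weight
    (255 - M) on the bright frame's gradient and M on the dark frame's."""
    D3x = D1x * (255.0 - M) + M * D2x
    D3y = D1y * (255.0 - M) + M * D2y
    return D3x, D3y
```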
[0057] After calculating the third gradient D.sub.3 and the preliminary fusion result J, the method may proceed to step S230_4. At step S230_4, the output image I.sub.3 is obtained based on the calculated third gradient D.sub.3 and the preliminary fusion result J, wherein a sum of the difference between the output image I.sub.3 and the preliminary fusion result J and the difference between the gradient of the output image I.sub.3 and the third gradient D.sub.3 is a minimum. In the present disclosure, minimizing this sum can be understood as obtaining the output image I.sub.3 by the following optimization manner:
I.sub.3=argmin{(I.sub.3−J).sup.2+λ(∇I.sub.3−D.sub.3).sup.2}  (10)
[0058] Where ∇I.sub.3 represents a gradient of I.sub.3, and λ is a positive constant.
[0059] Alternatively, in some embodiments, minimizing the sum of the difference between the output image I.sub.3 and the preliminary fusion result J and the difference between the gradient of the output image I.sub.3 and the third gradient D.sub.3 can be understood as obtaining the output image I.sub.3 by the following optimization manner:
I.sub.3=argmin{β(I.sub.3−J).sup.2+λ(∇I.sub.3−D.sub.3).sup.2}  (11)
[0060] Where ∇I.sub.3 represents the gradient of I.sub.3, and β and λ are positive constants.
[0061] Alternatively, in other embodiments, minimizing this sum can be understood as obtaining the output image I.sub.3 by the following optimization manner:
I.sub.3=argmin{[β(I.sub.3−J).sup.2+λ(∇I.sub.3−D.sub.3).sup.2].sup.p}  (12)
[0062] Where ∇I.sub.3 represents a gradient of I.sub.3, β and λ are positive constants, and p is an integer (e.g., 2 or −2).
[0063] Specifically, in the process of the above optimization manner (i.e., solving the above equations (10), (11) and (12) by optimization), these equations can be solved by various optimization manners existing or to be developed in the future. In some embodiments, equations (10), (11) and (12) may be solved by a multi-scale optimization manner to obtain the output image I.sub.3.
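For the quadratic objective of equation (10), one concrete single-scale solver (the multi-scale manner mentioned above is not shown) follows from the normal equations, a screened-Poisson system; the value of λ, the forward-difference operators and the border handling below are all illustrative assumptions:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def reconstruct_output(J, D3x, D3y, lam=0.1):
    """Solve equation (10) in closed form for the quadratic case.

    Setting the derivative of (I3 - J)^2 + lam*(grad(I3) - D3)^2 to zero
    gives (I + lam*(Dx'Dx + Dy'Dy)) i3 = j + lam*(Dx'd3x + Dy'd3y), where
    Dx, Dy are forward-difference operators on the flattened image. lam
    and the Neumann-style border handling are illustrative assumptions.
    """
    h, w = J.shape
    n = h * w
    dx = sp.diags([-np.ones(w), np.ones(w - 1)], [0, 1], format="lil")
    dx[-1, :] = 0  # zero gradient at the right border
    dy = sp.diags([-np.ones(h), np.ones(h - 1)], [0, 1], format="lil")
    dy[-1, :] = 0  # zero gradient at the bottom border
    Dx = sp.kron(sp.eye(h), dx).tocsr()  # d/dx on the row-major flattening
    Dy = sp.kron(dy, sp.eye(w)).tocsr()  # d/dy on the row-major flattening
    A = (sp.eye(n) + lam * (Dx.T @ Dx + Dy.T @ Dy)).tocsc()
    b = J.ravel() + lam * (Dx.T @ D3x.ravel() + Dy.T @ D3y.ravel())
    return spla.spsolve(A, b).reshape(h, w)
```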
[0064] Obtaining the output image I.sub.3 by the above algorithm based on gradient fusion and image reconstruction, using the fusion coefficient image M, the first gradient D.sub.1 and the second gradient D.sub.2 as described above, makes it possible to obtain a fused image without obvious fusion edges while retaining the details of both frames.
[0065] It should be understood that although in the present disclosure, the output image I.sub.3 can be obtained by the above optimization based on the third gradient D.sub.3 and the preliminary fusion result J, the present disclosure is not limited thereto. That is, based on the third gradient D.sub.3 and the preliminary fusion result J, the method of obtaining the output image I.sub.3 by the above optimization manner or its variations is within the scope of this specification and the appended claims.
[0066] The image fusion method to fuse two frame images together according to the embodiment of the present disclosure has been described in detail above. Hereinafter, an image fusion method to fuse multi-frame images together according to an embodiment of the present disclosure will be described.
[0067] In the multi-frame case, the first frame image I.sub.1 and the second frame image I.sub.2 for an initial fusion are the two frame images with the highest brightness among the image frames to be fused, or the two frame images with the lowest brightness among the image frames to be fused. In the former case, the output image I.sub.3 resulting from the last fusion is taken as the first frame image I.sub.1 in the next fusion, and the frame image with the highest brightness among the non-fused image frames is taken as the second frame image I.sub.2 in the next fusion. In the latter case, the frame image with the lowest brightness among the non-fused image frames is taken as the first frame image I.sub.1 in the next fusion, and the output image I.sub.3 resulting from the last fusion is taken as the second frame image I.sub.2 in the next fusion. The first frame image I.sub.1 and the second frame image I.sub.2 are then fused by the two-frame method described above to obtain a new output image I.sub.3, and these steps are repeated until fusions of all the image frames to be fused are completed.
[0068] The image fusion method for fusing multi-frame images together according to the embodiment of the present disclosure described above ensures that, in each fusion, the brightness of the first frame image I.sub.1 is greater than the brightness of the second frame image I.sub.2, so that the two-frame image fusion method described above can be applied at every iteration.
[0069] In addition, it should be understood that although the order of input images in the method described above proceeds from the brightest frames toward the darkest frames (or from the darkest frames toward the brightest frames), the present disclosure is not limited thereto.
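A sketch of the multi-frame loop in its brightest-first variant; using the mean gray value as the brightness measure, and the name fuse_two (standing in for the two-frame method above), are assumptions:

```python
def fuse_many(frames, fuse_two):
    """Iteratively fuse frames sorted from brightest to darkest.

    In each round the running result is the brighter input I1 and the
    brightest not-yet-fused frame is I2; the darkest-first variant of the
    text runs the mirror-image loop. fuse_two is the two-frame method.
    """
    ordered = sorted(frames, key=lambda f: float(f.mean()), reverse=True)
    result = ordered[0]
    for nxt in ordered[1:]:
        result = fuse_two(result, nxt)  # result stays brighter than nxt
    return result
```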
[0070] In the above, the present disclosure describes example flows of the image fusion method to fuse two frame images and to fuse multi-frame images according to embodiments of the present disclosure. Hereinafter, an electronic apparatus for image fusion according to an embodiment of the present disclosure will be described.
[0072] An electronic apparatus 800 for image fusion according to an embodiment of the present disclosure may include at least a fusion unit 820 and an output unit 830.
[0073] The fusion unit 820 may perform the following operations: calculating a fusion coefficient image M based on a first frame image I.sub.1 or based on both the first frame image I.sub.1 and a second frame image I.sub.2; calculating a first gradient D.sub.1 of the first frame image I.sub.1 and a second gradient D.sub.2 of the second frame image I.sub.2; calculating a preliminary fusion result J based on the calculated fusion coefficient image M, the first gradient D.sub.1 and the second gradient D.sub.2; and obtaining an output image I.sub.3 based on the calculated fusion coefficient image M, the first gradient D.sub.1, the second gradient D.sub.2, and the preliminary fusion result J, wherein brightness of the first frame image I.sub.1 is greater than brightness of the second frame image I.sub.2, and wherein the fusion coefficient image M is used to mark fusion weights of pixels in the first frame image I.sub.1.
[0074] After the fusion unit 820 completes the fusion of the images, the output unit 830 may output the output image I.sub.3.
[0075] Alternatively or additionally, the fusion unit 820 may also perform the image fusion method for fusing two frame images and the image fusion method for fusing multi-frame images described above.
[0076] An electronic device 900 for image fusion according to an embodiment of the present disclosure may include a processor and a memory 920 having instructions stored thereon which, when executed by the processor, cause the processor to perform the image fusion method described above.
[0077] Alternatively or additionally, the instructions, when executed by the processor, may also enable the processor to perform the image fusion method for fusing two frame images and the image fusion method for fusing multi-frame images according to the embodiments of the present disclosure described above.
[0079] A computer readable storage medium 1000 according to an embodiment of the present disclosure stores instructions which, when executed by a processor, cause the processor to perform the image fusion method described above.
[0080] Alternatively or additionally, the instructions, when executed by the processor, may also enable the processor to perform the image fusion method for fusing two frame images and the image fusion method for fusing multi-frame images according to the embodiments of the present disclosure described above.
[0081] In the present disclosure, the electronic apparatus 800 and the electronic device 900 that can implement the image fusion method according to the embodiment of the present disclosure may be electronic devices that can capture images, for example, but not limited to, cameras, video cameras, smart phones, tablet personal computers, mobile phones, video phones, desktop PCs, laptop PCs, netbook PCs, personal digital assistants, or any other electronic devices that can capture images. Alternatively, the electronic apparatus 800 and the electronic device 900 that can implement the image fusion method according to the embodiment of the present disclosure may also be computing devices connected, by wire or wirelessly, to electronic devices that can capture images.
[0082] In the present disclosure, the memory 920 may be a non-volatile memory device, for example, an electrically erasable programmable read-only memory (EEPROM), a flash memory, a phase change random access memory (PRAM), a resistance random access memory (RRAM), a nano floating gate memory (NFGM), a polymer random access memory (PoRAM), a magnetic random access memory (MRAM), a ferroelectric random access memory (FRAM), etc.
[0083] In the present disclosure, the computer readable storage medium 1000 may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer readable storage medium include the following: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a transmission medium such as those supporting the Internet or an intranet, or a magnetic storage device. Since the program can be captured electronically, for example by optical scanning of paper or other media, and then compiled, interpreted or otherwise processed in a suitable manner if necessary and stored in a computer memory, the computer readable storage medium can also be paper or another suitable medium on which the program is printed. In the context of this document, a computer readable storage medium may be any medium capable of containing, storing, communicating, propagating, or transmitting a program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include a propagated data signal with computer readable program code embodied therein, in baseband or as part of a carrier wave.
[0084] In the present disclosure, instructions, for example, the instructions stored in the memory 920 described above, may, when executed by a processor, implement the image fusion methods according to the embodiments of the present disclosure.
[0085] In the present disclosure, a processor, for example, the processor 120 of the apparatus described above, may be any processing circuitry capable of executing the stored instructions.
[0086] Furthermore, those of ordinary skill in the art should understand that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in computer software in conjunction with electronic hardware. Whether these functions are implemented in hardware or software depends on the particular application and design constraints of the technical scheme. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present disclosure.
[0087] The basic principles of the present disclosure have been described above in connection with specific embodiments. However, it should be pointed out that the advantages, merits, effects, etc. mentioned in the present disclosure are merely examples and are not limiting. These merits, advantages, effects, etc. cannot be considered as necessary for various embodiments of the present disclosure. In addition, the specific details disclosed above are for the purpose of illustration and for ease of understanding only, and are not limiting. The above details do not limit the disclosure to be necessarily implemented by using the above specific details.
[0088] The block diagrams of elements, apparatuses, devices, and systems involved in the present disclosure are merely illustrative examples and are not intended to require or imply that connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be recognized by those skilled in the art, these elements, apparatuses, devices, and systems may be connected, arranged, and configured in any manner. Words such as "including", "comprising", "having" and the like are open-ended words that mean "including but not limited to" and can be used interchangeably therewith. As used herein, the words "or" and "and" refer to "and/or" and may be used interchangeably therewith unless the context clearly indicates otherwise. As used herein, the phrase "such as" means "such as but not limited to" and may be used interchangeably therewith.
[0089] In addition, as used herein, the "or" used in an enumeration of items beginning with "at least one" indicates a disjunctive enumeration, so that, for example, an enumeration of "at least one of A, B, or C" means A or B or C, or AB or AC or BC, or ABC (i.e., A and B and C). Furthermore, the word "example" does not mean that the described example is preferred or better than other examples.
[0090] It should also be pointed out that in the system and method of the present disclosure, various components or steps can be decomposed and/or recombined. Such decomposition and/or recombination should be regarded as equivalent to the present disclosure.
[0091] Various changes, substitutions and alterations to the techniques described herein may be made without departing from the teachings defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the specific aspects of the processes, machines, manufacture, compositions of matter, means, methods, and actions described above. Processes, machines, manufacture, compositions of matter, means, methods or actions currently existing or later to be developed that perform substantially the same functions or achieve substantially the same results as the corresponding aspects described herein may be utilized. Accordingly, the appended claims include such processes, machines, manufacture, compositions of matter, means, methods, or actions within their scope.
[0092] The above description of the disclosed aspects is provided to enable any person skilled in the art to implement or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other aspects without departing from the scope of the present disclosure. Therefore, the present disclosure is not intended to be limited to the aspects shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
[0093] The above description has been given for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the present disclosure to the forms disclosed herein. Although a number of example aspects and embodiments have been discussed above, those skilled in the art will recognize certain variations, modifications, changes, additions and sub-combinations thereof.