Method and apparatus for generating HDRI
10298896 · 2019-05-21
Assignee
Inventors
- Caigao Jiang (Shenzhen, CN)
- Zisheng Cao (Shenzhen, CN)
- Mingyu Wang (Shenzhen, CN)
- Taiwen Liang (Shenzhen, CN)
Cpc classification
H04N9/646
ELECTRICITY
H04N23/86
ELECTRICITY
H04N23/741
ELECTRICITY
International classification
H04N17/00
ELECTRICITY
Abstract
The present invention provides a method and apparatus for generating a high dynamic range image (HDRI). After acquiring a first illuminance diagram, a base layer and detail layers are extracted from the first illuminance diagram, where the base layer contains low frequency information of the first illuminance diagram and the detail layers contain high frequency information of the first illuminance diagram. The dynamic range of the base layer is then compressed while the dynamic ranges of the detail layers remain uncompressed. Further, a second illuminance diagram is generated from the first illuminance diagram by fusing the base layer with the compressed dynamic range and the detail layers; the second illuminance diagram is mapped onto a plurality of preset color channels; and the images on the plurality of preset color channels are fused into the HDRI. Therefore, the detail information in the original illuminance diagram can be preserved.
Claims
1. A method for generating a high dynamic range image (HDRI), comprising: acquiring a first illuminance diagram; extracting from the first illuminance diagram a base layer and detail layers by using a lifting wavelet transform algorithm, the base layer containing low frequency information of the first illuminance diagram and the detail layers containing high frequency information of the first illuminance diagram; compressing a dynamic range of the base layer while dynamic ranges of the detail layers remain uncompressed; generating a second illuminance diagram from the first illuminance diagram by fusing the base layer with the compressed dynamic range and the detail layers using a lifting wavelet inverse transform algorithm; mapping the second illuminance diagram onto a plurality of preset color channels to obtain images on the plurality of preset color channels; and fusing the images on the plurality of preset color channels into the HDRI.
2. The method of claim 1, wherein acquiring the first illuminance diagram comprises: generating a set of calibration equations for a camera response function from images I1, I2, . . . , IN having different exposure conditions, wherein N is a total number of the images, which is an integer greater than or equal to 2; solving the set of calibration equations for the camera response function by using a QR decomposition algorithm to obtain the camera response function and an illuminance logarithm, based on sampling pixel points selected from the images I1, I2, . . . , IN; and obtaining the first illuminance diagram based on the camera response function and the illuminance logarithm.
3. The method of claim 2, wherein generating the set of calibration equations for the camera response function from the images I1, I2, . . . , IN comprises: acquiring the images I1, I2, . . . , IN; and using the images I1, I2, . . . , IN as known parameters to solve a preset objective function by a least square method to obtain the set of calibration equations for the camera response function.
4. The method of claim 3, wherein the preset objective function comprises:
5. The method of claim 1, before fusing the images on the color channels into the HDRI, further comprising: performing Gamma correction on the images on the color channels, respectively, wherein fusing the images on the color channels into the HDRI comprises fusing the Gamma-corrected images on the color channels into the HDRI.
6. An apparatus for generating a high dynamic range image (HDRI), comprising: an acquisition module for acquiring a first illuminance diagram; a generation module for generating a second illuminance diagram from the first illuminance diagram, wherein the generation module further includes: an extraction unit for extracting from the first illuminance diagram a base layer and detail layers using a lifting wavelet transform algorithm, the base layer containing low frequency information of the first illuminance diagram and the detail layers containing high frequency information of the first illuminance diagram; a compression unit for compressing a dynamic range of the base layer while dynamic ranges of the detail layers remain uncompressed; a fusion unit for fusing the base layer with the compressed dynamic range and the detail layers into the second illuminance diagram using a lifting wavelet inverse transform algorithm; a mapping module for mapping the second illuminance diagram onto a plurality of preset color channels to obtain images on the plurality of preset color channels; and a fusion module for fusing the images on the plurality of preset color channels into the HDRI.
7. An apparatus of claim 6, wherein the acquisition module comprises: an equation generating unit for generating a set of calibration equations for a camera response function from images I1, I2, . . . , IN having different exposure conditions, wherein N is a total number of the images, which is an integer greater than or equal to 2; a solving unit for solving the set of calibration equations for the camera response function by using a QR decomposition algorithm to obtain the camera response function and an illuminance logarithm, based on sampling pixel points selected from the images I1, I2, . . . , IN; and an acquisition unit for obtaining the first illuminance diagram based on the camera response function and the illuminance logarithm.
8. An apparatus of claim 7, wherein, for generating a set of calibration equations for a camera response function from images I1, I2, . . . , IN, the equation generating unit is further configured for: acquiring the images I1, I2, . . . , IN, and using the images I1, I2, . . . , IN as known parameters to solve a preset objective function by a least square method to obtain the set of calibration equations for the camera response function.
9. An apparatus of claim 8, wherein the preset objective function is:
10. An apparatus of claim 6, further comprising: a correction module for performing Gamma correction to the images on the plurality of color channels, respectively, before fusing the images on the color channels into the HDRI.
11. An apparatus of claim 10, wherein the fusion module fuses the Gamma-corrected images on the color channels into the HDRI.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) To more clearly explain the technical solutions of the embodiments of the present invention, a brief description of the drawings used in the detailed description of the embodiments is provided as follows. It is apparent to a person of ordinary skill in the art that the drawings described below illustrate only some embodiments of the present invention, and that other drawings can be derived from them without any creative effort.
DETAILED DESCRIPTION OF THE INVENTION
(8) The embodiments of the present invention provide a method and an apparatus for generating an HDRI, which compress only the dynamic range of a base layer of an illuminance diagram and leave the dynamic ranges of the detail layers of the illuminance diagram uncompressed, so as to preserve the details in the image.
(9) The technical solutions of the present invention will be described clearly and thoroughly in the following embodiments with the accompanying drawings. Apparently, the described embodiments are only a part but not all of the embodiments of the present invention. Based on the embodiments of the present invention, other embodiments may be derived by those skilled in the art without any creative effort, all of which shall fall within the scope of the present invention.
(10) An embodiment of the present invention discloses a method for generating an HDRI, comprising the following steps:
(11) A: acquiring a first illuminance diagram;
(12) B: generating a second illuminance diagram from the first illuminance diagram;
(13) wherein the second illuminance diagram is formed by fusing a base layer with a compressed dynamic range and detail layers, and the base layer and the detail layers are extracted from the first illuminance diagram;
(14) C: mapping the second illuminance diagram onto the preset color channels to obtain images on the preset color channels; and
(15) D: fusing the images on the color channels into the HDRI.
(16) The specific process of step B of this embodiment is shown in
(17) The method shown in
(18) S101: extracting a base layer and detail layers from a first illuminance diagram;
(19) Typically, an illuminance diagram is used to characterize a brightness value of each pixel point in an image. A base layer of the illuminance diagram refers to low frequency information of the illuminance diagram (i.e., the main energy component of the illuminance diagram), while a detail layer refers to high frequency information of the illuminance diagram. Thus, the detail layer comprises detail information in the illuminance diagram.
(20) S102: compressing the dynamic range of the base layer into a preset range;
(21) The dynamic range refers to the ratio between the maximum and the minimum values of a physical quantity to be measured, and it may have different meanings for different objects. In terms of a digital image, the dynamic range D refers to the ratio between the maximum brightness value and the minimum brightness value in the digital image:
D = L_max / L_min
(22) wherein the brightness is measured in candela per square meter (cd/m²).
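As a minimal illustration (not part of the patent; the function name and sample values are hypothetical), the dynamic range of a brightness map held in a NumPy array could be computed as:

```python
import numpy as np

def dynamic_range(luminance):
    # D = L_max / L_min; assumes strictly positive brightness values (cd/m^2)
    return float(luminance.max() / luminance.min())

lum = np.array([[0.5, 2.0],
                [100.0, 50.0]])   # brightness values in cd/m^2
print(dynamic_range(lum))          # 200.0
```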
(23) The capability of simultaneously presenting in an image the details of the brightest area and the darkest area of a natural scene is limited by the dynamic range of the image. In a real natural scene, the brightness has a very broad dynamic range (more than 9 orders of magnitude, e.g., from 10^−5 to 10^4 cd/m²), and the human visual system is capable of perceiving scene brightness over a dynamic range of about 5 orders of magnitude. However, the brightness dynamic range achievable by an existing display device is only about 2 orders of magnitude. As a result, there is a mismatch between the brightness dynamic range of a natural object as seen on a display device and as seen in the real world.
(24) In this embodiment, the compression of the dynamic range of the base layer into the preset range can be performed by any appropriate method. For example, the dynamic range of the base layer can be compressed into the preset range by multiplying the base layer by a number less than 1, where the specific value of that number is set according to the preset range. In terms of image display, the preset range is the dynamic range that can be displayed by a display device. Other methods for setting the preset range and/or compressing the dynamic range of the base layer can also be used.
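One possible realization of the scaling just described is a linear rescale of the base layer into a preset displayable range; this is only a sketch under the assumption that a plain affine mapping is acceptable, and the function name is hypothetical:

```python
import numpy as np

def compress_base(base, lo, hi):
    # linearly map the base layer's dynamic range into the preset [lo, hi]
    b_min, b_max = base.min(), base.max()
    scale = (hi - lo) / (b_max - b_min)   # a factor < 1 when compressing
    return lo + (base - b_min) * scale

base = np.array([0.0, 5.0, 10.0])
print(compress_base(base, 0.0, 1.0))   # [0.  0.5 1. ]
```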
(25) S103: fusing the base layer with the compressed dynamic range and the detail layers into a second illuminance diagram.
(26) In the method as shown in
(27) Furthermore, the method as shown in
(28) S201: extracting a base layer and detail layers of a first illuminance diagram by using a lifting wavelet transform algorithm;
(29) In this embodiment, depending on the characteristics of a lifting wavelet operator, the number of the detail layers may be 3. That is, the detail layers may include detail layers in three different directions.
(30) S202: compressing the dynamic range of the base layer into a preset range;
(31) S203: fusing the base layer with the compressed dynamic range and the detail layers into a second illuminance diagram by using a lifting wavelet inverse transform algorithm.
(32) In the method shown in
(33) It should be noted that the use of the lifting wavelet algorithm is merely an exemplary approach of the present embodiment and is not intended to be limiting.
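Since the patent leaves the choice of lifting wavelet open, the following sketch uses a one-level 2-D Haar lifting scheme as one concrete possibility: it produces a base layer (LL) and detail layers in three directions (LH, HL, HH), together with the matching inverse transform used for the fusion step. Even image dimensions and all names are assumptions for illustration:

```python
import numpy as np

def lift_haar_1d(x):
    # one Haar lifting step along the last axis: split, predict, update
    s = x[..., 0::2].astype(float)
    d = x[..., 1::2].astype(float)
    d = d - s          # predict: detail = odd - even
    s = s + d / 2      # update:  base   = (odd + even) / 2
    return s, d

def unlift_haar_1d(s, d):
    # exact inverse of lift_haar_1d
    s = s - d / 2
    d = d + s
    out = np.empty(s.shape[:-1] + (2 * s.shape[-1],))
    out[..., 0::2], out[..., 1::2] = s, d
    return out

def lwt2(img):
    # lift along rows, then columns -> base layer LL and details LH, HL, HH
    L, H = lift_haar_1d(img)
    LL, LH = (a.T for a in lift_haar_1d(L.T))
    HL, HH = (a.T for a in lift_haar_1d(H.T))
    return LL, (LH, HL, HH)

def ilwt2(LL, details):
    # inverse lifting: columns first, then rows
    LH, HL, HH = details
    L = unlift_haar_1d(LL.T, LH.T).T
    H = unlift_haar_1d(HL.T, HH.T).T
    return unlift_haar_1d(L, H)

img = np.arange(16.0).reshape(4, 4)
LL, det = lwt2(img)
rec = ilwt2(LL, det)          # lossless round trip
tone = ilwt2(LL * 0.5, det)   # fuse a compressed base with unchanged details
print(np.allclose(rec, img))  # True
```

The lifting formulation is attractive here because it is in-place, integer-friendly, and exactly invertible, which matches the patent's emphasis on hardware-friendly implementation.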
(34) The aforementioned method for generating an HDRI will be explained in detail below. As shown in
(35) S301: generating, from images I1, I2, . . . , IN, a set of calibration equations for a camera response function, where the images I1, I2, . . . , IN may be a set of images having different exposure conditions that are used to generate the HDRI (i.e., the images corresponding to the HDRI), and N is the total number of the images, an integer greater than or equal to 2.
(36) The specific implementation of step S301 may comprise the following steps:
(37) 1) acquiring the images I1, I2, . . . , IN; and
(38) 2) using the images I1, I2, . . . , IN as known parameters, solving a preset objective function by a least square method to obtain the set of calibration equations for the camera response function.
(39) In some embodiments, the objective function may be:
(40) O = Σ_{i=1}^{M} Σ_{j=1}^{N} {ω(Z_{i,j})·[g(Z_{i,j}) − ln E_i − ln Δt_j]}² + λ·Σ_{z=Z_min+1}^{Z_max−1} [ω(z)·g″(z)]²
where M is the total number of the pixel points of each image among the known parameters, g(Z_{i,j}) = ln E_i + ln Δt_j, E_i is a scene illuminance, Δt_j is an exposure time of the current image, Z_{i,j} is a pixel value of the current image, λ is a control parameter, ω(Z_{i,j}) is a weight function for the pixel value Z_{i,j}, g″(z) = g(z−1) − 2g(z) + g(z+1) is a discrete second derivative, Z_min is the minimum pixel value of the current image, and Z_max is the maximum pixel value of the current image.
(41) The principle for setting the objective function is as follows:
(42) 1) defining the relationship between a camera response curve and a scene illuminance E_i, an exposure time Δt_j and a pixel value Z_{i,j} of a digital image as:
Z_{i,j} = f(E_i · Δt_j)
(43) 2) assuming the camera response curve is smooth and monotonic such that the function f is invertible, inverting the above equation and taking the logarithm, with g = ln f⁻¹:
g(Z_{i,j}) = ln E_i + ln Δt_j
(44) 3) letting the extrema of Z be Z_min and Z_max to establish the following objective function:
(45) O = Σ_{i=1}^{M} Σ_{j=1}^{N} [g(Z_{i,j}) − ln E_i − ln Δt_j]² + λ·Σ_{z=Z_min+1}^{Z_max−1} [g″(z)]²
(46) Since there are always overexposed pixel points and underexposed pixel points among all the pixel points in an image, a simple weight function is typically added to the objective function:
(47) ω(z) = z − Z_min for z ≤ Z_mid, and ω(z) = Z_max − z for z > Z_mid,
where Z_mid = (Z_min + Z_max)/2. So the final objective function is:
(48) O = Σ_{i=1}^{M} Σ_{j=1}^{N} {ω(Z_{i,j})·[g(Z_{i,j}) − ln E_i − ln Δt_j]}² + λ·Σ_{z=Z_min+1}^{Z_max−1} [ω(z)·g″(z)]²
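The calibration equations for the camera response function implied by a weighted least-squares objective of this kind can be assembled into a single linear system. The sketch below is in the spirit of the classic Debevec–Malik calibration and is not taken verbatim from the patent (the pixel-value range of 0..15, the scale-fixing row, and all names are illustrative assumptions). It solves jointly for the response curve g and the illuminance logarithms ln E_i:

```python
import numpy as np

def gsolve(Z, log_dt, lam, z_min=0, z_max=255):
    # Z[i, j]: value of sampling pixel i in image j; log_dt[j] = ln(delta t_j)
    n = z_max - z_min + 1
    M, N = Z.shape
    z_mid = (z_min + z_max) / 2
    w = lambda z: (z - z_min) if z <= z_mid else (z_max - z)  # hat weight
    A = np.zeros((M * N + 1 + (n - 2), n + M))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(M):                       # data-fitting rows
        for j in range(N):
            wij = w(Z[i, j])
            A[k, Z[i, j] - z_min] = wij      # +w * g(Z_ij)
            A[k, n + i] = -wij               # -w * ln E_i
            b[k] = wij * log_dt[j]           # = w * ln(delta t_j)
            k += 1
    A[k, n // 2] = 1                         # fix the scale: g(z_mid) = 0
    k += 1
    for z in range(z_min + 1, z_max):        # smoothness rows: lam*w(z)*g''(z)
        wz = lam * w(z)
        A[k, z - z_min - 1:z - z_min + 2] = [wz, -2 * wz, wz]
        k += 1
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:n], x[n:]                      # g over [z_min, z_max], ln E_i

Z = np.array([[2, 5, 9], [4, 8, 13], [1, 3, 6]])   # 3 pixels x 3 exposures
g, ln_E = gsolve(Z, np.log([1.0, 2.0, 4.0]), lam=10.0, z_min=0, z_max=15)
print(g.shape, ln_E.shape)   # (16,) (3,)
```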
(49) S302: solving the set of calibration equations for the camera response function by using a QR decomposition algorithm to obtain the camera response function and an illuminance logarithm, based on sampling pixel points selected from the images I1, I2, . . . , IN;
(50) The QR decomposition algorithm first decomposes the coefficient matrix into the product of an orthogonal matrix and an upper triangular matrix, and then computes the result by back substitution. Compared with the conventional singular value decomposition (SVD) method, the QR decomposition algorithm is simpler, and thus easier to implement in hardware.
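A minimal illustration of the QR route described here (hypothetical names; NumPy's Householder QR followed by hand-written back substitution on the triangular factor):

```python
import numpy as np

def solve_via_qr(A, b):
    # least-squares solution of A x = b for a full-column-rank tall A:
    # decompose A = Q R, then back-substitute on the upper-triangular R
    Q, R = np.linalg.qr(A)            # 'reduced' QR: Q is (m, n), R is (n, n)
    y = Q.T @ b
    n = R.shape[1]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):    # back substitution, bottom row up
        x[i] = (y[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x = solve_via_qr(A, b)
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))   # True
```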
(51) A curve representation for the camera response function is shown in
(52) S303: obtaining the first illuminance diagram based on the camera response function and the illuminance logarithm;
(53) The specific implementation of this step may be any appropriate method.
(54) S304: extracting the base layer and the detail layers in 3 different directions of the first illuminance diagram by using a lifting wavelet transform algorithm;
(55) S305: compressing the dynamic range of the base layer into a preset range;
(56) S306: fusing the base layer with the compressed dynamic range and the detail layers into the second illuminance diagram by using a lifting wavelet inverse transform algorithm;
(57) S307: mapping the second illuminance diagram onto the R, G and B color channels;
(58) S308: performing a Gamma correction respectively on the images on the R, G and B color channels;
(59) S309: fusing the Gamma-corrected images on the R, G and B color channels into the HDRI.
(60) Since the Gamma correction corrects the brightness deviation of an image displayed on a display, the contrast of an HDRI obtained by fusing the Gamma-corrected images can be significantly improved.
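For illustration only (the gamma value of 2.2 and the [0, 1] normalization are assumptions, not specified by the patent), a per-channel Gamma correction can be sketched as:

```python
import numpy as np

def gamma_correct(channel, gamma=2.2):
    # channel is assumed normalized to [0, 1]; encoding raises to 1/gamma
    return np.clip(channel, 0.0, 1.0) ** (1.0 / gamma)

ch = np.array([0.0, 0.25, 1.0])     # one of the R, G or B channel images
out = gamma_correct(ch)
print(out[0], out[-1])               # 0.0 1.0
```

Applying the same correction independently to the R, G and B channel images before fusion brightens mid-tones, which is why the fused HDRI shows improved contrast on typical displays.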
(61) As can be seen from the above steps, the method for generating an HDRI described in the present embodiment has a higher execution speed, is easier to implement in hardware, and avoids the loss of image details during the process of generating the HDRI from the images I1, I2, . . . , IN. Further, since the details of the image are preserved, the occurrence of halos in the HDRI can be reduced to a large extent, and the HDRI generated by the method of the present embodiment may have better contrast.
(62) Compared with existing methods, the disclosed method according to the embodiments of the present invention can obtain clearer HDRIs as shown in
(63) In correspondence to the above method embodiments, an embodiment of the present invention further provides an apparatus for generating HDRI as shown in
(64) an acquisition module 601 for acquiring a first illuminance diagram;
(65) a generation module 602 for generating a second illuminance diagram from the first illuminance diagram, wherein the second illuminance diagram is formed by fusing a base layer with a compressed dynamic range and detail layers, and wherein the base layer and the detail layers are extracted from the first illuminance diagram;
(66) a mapping module 603 for mapping the second illuminance diagram onto preset color channels; and
(67) a fusion module 604 for fusing the images on the color channels into the HDRI.
(68) Optionally, the present embodiment may further comprise:
(69) a correction module 605 for performing a Gamma correction respectively to the images on each of the color channels before fusing the images on the color channels into the HDRI.
(70) When the correction module is present, the fusion module may be specifically used for fusing the Gamma-corrected images on the color channels into the HDRI.
(71) The extraction unit may be specifically used for extracting the base layer and the detail layers of the first illuminance diagram by using a lifting wavelet transform algorithm; and the fusion unit may be specifically used for fusing the base layer with the compressed dynamic range and the detail layers into the second illuminance diagram by using a lifting wavelet inverse transform algorithm.
(72) In the present embodiment, the acquisition module 601 may specifically comprise:
(73) an equation generating unit 6011 for generating a set of calibration equations for a camera response function from images I1, I2, . . . , IN having different exposure conditions, wherein N is the total number of the images, an integer greater than or equal to 2;
(74) a solving unit 6012 for solving the set of calibration equations for the camera response function by using a QR decomposition algorithm to obtain the camera response function and an illuminance logarithm, based on the sampling pixel points selected from the images I1, I2, . . . , IN; and
(75) an acquisition unit 6013 for obtaining the first illuminance diagram based on the camera response function and the illuminance logarithm.
(76) In particular, the specific implementation of the equation generating unit to generate the set of calibration equations for a camera response function from images I1, I2, . . . , IN may be as follows:
(77) The equation generating unit is specifically used for acquiring the images I1, I2, . . . , IN, and solving a preset objective function by a least square method to obtain the set of calibration equations for the camera response function by using the images I1, I2, . . . , IN as known parameters.
(78) Further, the preset objective function may be:
(79) O = Σ_{i=1}^{M} Σ_{j=1}^{N} {ω(Z_{i,j})·[g(Z_{i,j}) − ln E_i − ln Δt_j]}² + λ·Σ_{z=Z_min+1}^{Z_max−1} [ω(z)·g″(z)]²
(80) where M is the number of the pixel points of each image among the known parameters, g(Z_{i,j}) = ln E_i + ln Δt_j, E_i is a scene illuminance, Δt_j is an exposure time of the current image, Z_{i,j} is a pixel value of the current image, λ is a control parameter, ω(Z_{i,j}) is a weight function for the pixel value Z_{i,j}, Z_min is the minimum pixel value of the current image, and Z_max is the maximum pixel value of the current image.
(81) Particularly in the present embodiment, the generation module 602 may specifically comprise:
(82) an extraction unit 6021 for extracting the base layer and the detail layers of the first illuminance diagram;
(83) a compression unit 6022 for compressing the dynamic range of the base layer into a preset range; and
(84) a fusion unit 6023 for fusing the base layer with the compressed dynamic range and the detail layers into the second illuminance diagram.
(85) With the apparatus according to the present embodiment, when the dynamic range of an illuminance diagram is compressed, only the dynamic range of a base layer of the illuminance diagram is compressed whereas the dynamic range of detail layers thereof is not compressed. Thus, the completeness of the detail information can be preserved to the largest extent, thereby avoiding the loss of details.
(86) Specifically, in the present embodiment, the extraction unit may extract the base layer and the detail layers of the first illuminance diagram by using a lifting wavelet transform algorithm, and the fusion unit may fuse the base layer with the compressed dynamic range and the detail layers into the second illuminance diagram by using a lifting wavelet inverse transform algorithm. Since the lifting wavelet transform and its inverse are simple and fast, the apparatus described in the present embodiment not only avoids the loss of image details but also performs the processing simply and quickly. Therefore, the apparatus can be easily implemented in hardware with high computational efficiency.
(87) The functionalities of the method according to the embodiments, when implemented in the form of software, can be stored in a computer-readable storage medium. Based on this understanding, part or all of the technical solution of the embodiments of the present invention can be embodied in the form of a software product stored in a storage medium comprising a number of instructions configured to cause a computing device (such as a personal computer, a server, a mobile computing device or a network device) to execute all or some steps of the method according to various embodiments of the present invention. The storage medium may comprise various media, such as a flash disk, a removable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, that are capable of storing program code.
(88) Various embodiments of the present invention are described in a progressive way. The description of each embodiment is focused on the difference between that embodiment and the others, and identical or similar parts among various embodiments can be referred to each other.
(89) The description of the disclosed embodiments can allow those skilled in the art to implement or use the present invention. Various modifications to those embodiments will be apparent to those skilled in the art, and the general principle defined herein can be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention will not be limited to the embodiments disclosed herein, but shall conform to the broadest scope in consistence with the principle and the novel features disclosed herein.