Method and system for generating an output image from a plurality of corresponding input image channels

10789692 · 2020-09-29

Abstract

A method and system for generating an output image from a plurality, N, of corresponding input image channels is described. A Jacobian matrix of the plurality of corresponding input image channels is determined. The principal characteristic vector of the outer product of the Jacobian matrix is calculated. The sign associated with the principal characteristic vector is set whereby an input image channel pixel projected by the principal characteristic vector results in a positive scalar value. The output image as a per-pixel projection of the input channels in the direction of the principal characteristic vector is generated.

Claims

1. A method for generating an output image from a plurality, N, of corresponding input image channels, the method comprising: determining a Jacobian matrix of the plurality of corresponding input image channels; calculating the principal characteristic vector of the outer product of the Jacobian matrix; setting the sign associated with the principal characteristic vector whereby an input image channel pixel projected by the principal characteristic vector results in a positive scalar value; and generating the output image in image space as a per-pixel projection of the input channels in the direction of the principal characteristic vector.

2. A system for generating an output image from a plurality, N, of corresponding input image channels, the system comprising: an input arranged to access the N input image channels; a processor configured to execute computer program code for executing an image processing module, including: computer program code configured to determine a Jacobian matrix of the plurality of corresponding input image channels; computer program code configured to calculate the principal characteristic vector of the outer product of the Jacobian matrix; computer program code configured to set the sign associated with the principal characteristic vector whereby an input image channel pixel projected by the principal characteristic vector results in a positive scalar value; and, computer program code configured to generate the output image in image space as a per-pixel projection of the input channels in the direction of the principal characteristic vector.

3. The system of claim 2, wherein the computer program code configured to calculate further comprises: computer program code configured to generate a sparse N-vector projector image from said Jacobian matrix, for each element of the Jacobian matrix that is non-zero; and, computer program code configured to infill the sparse N-vector projector image for elements of the Jacobian matrix that are zero.

4. The system of claim 3, wherein the infilling comprises infilling by defining the vector for each zero element to be an average of a local neighborhood.

5. The system of claim 4, wherein the average is edge-sensitive.

6. The system of claim 3, wherein the computer program code configured to infill includes computer program code configured to smooth the N-vector projector image.

7. The system of claim 3, wherein the computer program code configured to infill includes computer program code configured to interpolate the N-vector projector image.

8. The system of claim 3, wherein the computer program code configured to infill includes computer program code configured to perform edge-sensitive diffusion on the N-vector projector image.

9. The system of claim 3, wherein the processor is configured to execute computer program code to scale each vector after infilling to have unit length.

10. The system of claim 3, wherein the processor is configured to execute computer program code to spread vectors after infilling to move each vector component a fixed multiple of angular degrees away from the mean.

11. The system of claim 3, wherein the infilling comprises bilaterally filtering the sparse N-vector projector image.

12. The system of claim 11, wherein the bilateral filter comprises a cross bilateral filter.

13. The system of claim 11, wherein the filter is arranged to filter each channel of the N-vector projector image independently.

14. The system of claim 2, wherein the processor is configured to execute computer program code to obtain downsampled input channels, perform said determining and calculating step on the downsampled input image channels and upsample the calculated principal characteristic vector for use in the generating step.

15. The system of claim 2, further comprising a look-up-table mapping between a unique input image vector and a principal characteristic vector, the system being arranged to access the look-up-table to determine the principal characteristic vectors for generating the output image.

16. The system of claim 2, wherein the input image has N channels and the output image has M channels, the principal characteristic vector comprising a per-pixel M×N matrix transform mapping the input image's N×2 Jacobian to a target M×2 output Jacobian.

17. The system of claim 16, wherein the processor is further configured to execute computer program code to per-pixel transform the input image channels by their respective M×N transform.

18. The system of claim 16, wherein the M×N transform maps the N×2 input image Jacobian to an M×2 accented Jacobian counterpart, and wherein the processor is configured to execute computer program code to generate a sparse M×N transform image from infilling the sparse N×2 image for elements of the Jacobian matrix that are zero.

19. The system of claim 16, wherein the processor is further configured to execute computer program code to perform said determining and calculating on downsampled input image channels and to upsample the calculated M×N transforms for use in generating the output image.

20. The system of claim 16, further comprising a look-up-table mapping between a unique input image vector and an M×N transform, the system being arranged to access the look-up-table to determine the M×N transform for generating the output image.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings in which:

(2) FIG. 1 is a flow diagram of a method for generating an output image from a plurality of corresponding input image channels;

(3) FIG. 2 is a flow diagram of aspects of an example implementation of the method of FIG. 1;

(4) FIG. 3 is a flow diagram of a method according to another embodiment; and,

(5) FIG. 4 is a schematic diagram of a system for generating an output image from a plurality, N, of corresponding input image channels according to an embodiment of the present invention.

DETAILED DESCRIPTION

(6) In the following description, I(x,y) is used to denote the xth, yth pixel of an n×m vector image. Each pixel has N planes. For example, if I(x,y) was a colour image that is defined with respect to the red, green and blue (RGB) colour space, the pixel would be an RGB vector: [R G B]. If the image also contained an image plane that was NIR (near-infrared), or a NIR image was associated with the RGB one, then each pixel would be a 4-vector: [R G B NIR].

(7) As will be appreciated, each plane may be a channel of a single image or may be from data of images on the same subject from different sources.

(8) To understand the derivative structure of an image it is differentiated, in the x and y directions, in each of the N image planes. This gives N*2 derivatives (x and y derivatives for each of the N image planes), summarized in the N×2 Jacobian matrix J:

(9) J = [ ∂I_1/∂x  ∂I_1/∂y
          ∂I_2/∂x  ∂I_2/∂y
             ⋮         ⋮
          ∂I_N/∂x  ∂I_N/∂y ]   (1)
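As an illustrative sketch only (not part of the claimed embodiments), the per-pixel Jacobian of equation (1) can be computed with NumPy's finite-difference gradient; `jacobian` is a hypothetical helper name:

```python
import numpy as np

def jacobian(channels):
    """Per-pixel N x 2 Jacobian (equation (1)) of an H x W x N image.

    Row k of each pixel's Jacobian holds the x and y derivatives of
    channel k: column 0 is the x derivative, column 1 the y derivative.
    """
    H, W, N = channels.shape
    J = np.empty((H, W, N, 2))
    for k in range(N):
        # np.gradient returns the derivative along axis 0 (y) first,
        # then along axis 1 (x)
        dy, dx = np.gradient(channels[:, :, k])
        J[:, :, k, 0] = dx
        J[:, :, k, 1] = dy
    return J
```

For an image whose channel 0 is a horizontal ramp, every pixel's Jacobian row 0 is [1, 0], as expected from the definition.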

(10) In the SW approach described above, a single equivalent derivative was sought that best approximates all the derivatives in all the image planes:

(11) SW(J) = [ z_x  z_y ]   (2)

(12) In the SW method the magnitude and orientation of the derived gradient are known but its sign is defined heuristically. In part, the artefacts discussed above that are seen in the SW method are related to this heuristic setting of the sign of the derived gradient. The SW method generally hallucinates new detail (not appearing in any of the input images or image channels), including halos, bending artefacts and large-scale false gradients.

(13) FIG. 1 is a flow diagram of a method for generating an output image from a plurality of corresponding input image channels.

(14) In step 10, a Jacobian matrix (J) is determined for the plurality of corresponding input image channels. An example of such a matrix is set out at (1) above.

(15) In step 20, the principal characteristic vector of the outer product of J is calculated.

(16) In step 30, the sign of the principal characteristic vector is determined. The sign is preferably determined such that an input image channel pixel projected by the characteristic vector should result in a positive scalar value. The sign of the projection vector is set accordingly.

(17) In step 40, an output image is generated as a per-pixel projection of the input channels in the direction of the principal characteristic vector.

(18) It has been identified that the unit length characteristic vector of the column space of J, denoted here as v, has various useful properties: i. v^t J (multiplication of v into J, a 1×N vector multiplied by an N×2 Jacobian) gives a gradient equivalent to that produced by (2) up to an unknown sign (which can be dealt with as discussed below). ii. Because property (i) is a linear operation and differentiation is also a linear operation, the order of operations can be swapped, that is:

(19) a. [ ∂/∂x (v^t I(x,y))   ∂/∂y (v^t I(x,y)) ] = v^t J   (3)

In the left hand side of (3) we differentiate after we make a new scalar image as the linear combination of the original image, where the components of v define the weightings of the per-channel combination. At a pixel it follows that, given v, an output image derived from N input channels (for example, the output image may be a fused image) can be computed directly (as the linear combination of the original image I(x,y)) and there is no need for reintegration. iii. Because it is intended for the output image to be displayable, the values of pixels for the output image must be all positive. The import of this is that

if v^t I(x,y) < 0 then v ← -v   (4)

One issue with the SW method is that the sign of the equivalent gradient vector is unknown. It has been suggested to set the sign to match the brightness gradient (R+G+B)/3, or to optimize the sign to maximise integrability of the desired gradient field. Each method requires further calculation and is not always appropriate. In contrast, unlike the SW method, in embodiments of the present invention the sign of the equivalent derivative vector can be assigned in a well-principled way (the left arrow in (4) means assign).
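A minimal sketch of the sign convention of (4) followed by the per-pixel projection of step 40, assuming `v` holds the principal characteristic vector at a pixel and `pixel` holds that pixel's N channel values (`orient_and_project` is a hypothetical helper name):

```python
import numpy as np

def orient_and_project(v, pixel):
    """Apply equation (4), then project the pixel onto v.

    If v^t I(x,y) < 0 the vector is negated, so the projected output
    value at this pixel is always non-negative (and thus displayable).
    """
    v = np.asarray(v, dtype=float)
    pixel = np.asarray(pixel, dtype=float)
    if v @ pixel < 0:
        v = -v                      # the assignment of equation (4)
    return v, float(v @ pixel)      # per-pixel scalar output
```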

(20) FIG. 2 is a flow diagram of aspects of an example implementation of the method of FIG. 1.

(21) As discussed with reference to FIG. 1 above, a per-pixel projection (linear combination) of the input channels I(x) in the direction of the principal characteristic vector v^x of the outer product of the Jacobian J produces the sought combined scalar image O(x):

O(x) = v^x · I(x) = Σ_{k=1..N} v_k^x I_k(x)   (5)

(22) The Jacobian is discussed above. There are various ways of arriving at the principal characteristic vector v^x; a preferred embodiment illustrating one way is set out below.

(23) The principal characteristic vector, v^x, is the first column vector of U^x:

v^x = U_1^x

(24) U in turn is part of the singular value decomposition of the Jacobian, J (the superscript x denotes the x,y image location):

J = U S V^T   (6)

(25) U, S and V are N×2, 2×2 and 2×2 matrices, respectively. U and V are orthonormal and S is a diagonal matrix (whose diagonal components are >=0).

(26) The SW equivalent gradient (up to an unknown sign) is the unit length principal eigenvector of Z scaled by the dominant eigenvalue. This is the first column of S V^T. Premultiplying (6) by the transpose of U^x returns the same equivalent gradient as found by the SW method.

(27) In other words, U^x is the product of the Jacobian and the inverse of the square root of the structure tensor, Z (Z = V S^2 V^T), from which it follows that the inverse square root of Z is V S^{-1}.

(28) The structure tensor is positive and semi-definite and the eigenvalues will therefore be real and positive. In images where underlying channels are continuous and eigenvalues are distinct, the principal characteristic vector of the outer product will also vary continuously and can be calculated from the above.
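Per pixel, the principal characteristic vector of the outer product J J^t is the first left singular vector of J, which NumPy's SVD provides directly. An illustrative sketch (`principal_vector` is a hypothetical helper name):

```python
import numpy as np

def principal_vector(J):
    """Unit-length principal characteristic vector of J J^t.

    J is one pixel's N x 2 Jacobian; per equation (6), J = U S V^T,
    and the sought vector is the first column of U. The singular
    values S are returned as well, for the thresholding of step 110.
    """
    U, S, Vt = np.linalg.svd(J, full_matrices=False)
    return U[:, 0], S
```

For two identical channels with gradient [1, 0], the principal vector is (up to sign) [1, 1]/√2: each channel is weighted equally in the projection.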

(29) In images having regions with zero derivatives or where the structure tensor has coincident eigenvalues (e.g. corners) there may be a large change in the projection direction found at one image location compared to another (discontinuity) that may be problematic in determining the principal characteristic vector. FIG. 2 is a flow diagram of a preferred method of processing image channel data to ensure suitability for embodiments of the present invention.

(30) In step 100, a projector image P(x, y) is initialised to zero at every pixel location.

(31) In step 110, P(x, y) is populated based on v^x and S^x as follows:

if min(S_11^x, S_22^x) > θ_1 and |S_11^x - S_22^x| > θ_2 then P(x,y) = v^x

(32) Wherever the two threshold conditions are met (that is, there is a non-zero Jacobian and the two eigenvalues are sufficiently different, i.e. everywhere the image has non-zero derivatives and we are not at the rarely occurring corners), this populates a sparse N-vector projector image P^s(x,y) (the s superscript denoting that the vector image is sparse).
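Steps 100-110 can be sketched as follows (an illustrative sketch only; `sparse_projector` is a hypothetical helper name and the θ_1/θ_2 defaults follow the 0.01 values given below):

```python
import numpy as np

def sparse_projector(J, t1=0.01, t2=0.01):
    """Sparse N-vector projector image P^s (steps 100-110).

    J has shape (H, W, N, 2). Where both singular values exceed t1
    (theta_1) and differ by more than t2 (theta_2), the pixel stores
    the principal characteristic vector; elsewhere it stays zero.
    """
    H, W, N, _ = J.shape
    P = np.zeros((H, W, N))          # step 100: initialise to zero
    for y in range(H):
        for x in range(W):
            U, S, Vt = np.linalg.svd(J[y, x], full_matrices=False)
            if min(S[0], S[1]) > t1 and abs(S[0] - S[1]) > t2:
                P[y, x] = U[:, 0]    # step 110: populate where valid
    return P
```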

(33) In order to ensure P(x,y) is a final projector image where every spatial location has a well-defined projection vector, P^s(x,y) is infilled. Specifically, the N-vector at every (x,y) location is the average of its local neighbourhood, where the average is also edge-sensitive. This is done in step 120, where P(x, y) is bilaterally filtered, preferably by applying a simple cross bilateral filter:

P(x,y) = BilatFilt(I(x,y), P^s(x,y), σ_d, σ_r)

(34) θ_1 and θ_2 are system parameters that may be varied according to implementation. In one embodiment they are set arbitrarily to 0.01 (assuming image values are in [0, 1]).

(35) Preferably, the bilateral filter is a cross bilateral filter with the range term defined by the original image I. The filtering is preferably carried out independently per channel, with a Gaussian spatial blur with standard deviation σ_d and the standard deviation on the range parameterised by σ_r. With σ_d = σ_r = 0, no diffusion takes place. As σ_d → ∞ and σ_r → ∞, the diffusion becomes a global mean, and the projection tends to a global weighted sum of the input channels. If σ_d → ∞ and σ_r = 0, each distinct vector of values in the image will be associated with the same projection vector, and so the bilateral filtering step defines a mapping which could be implemented as a look-up table.

(36) With the exception of these boundary cases, the standard deviations in the bilateral filter should be chosen to provide the diffusion sought, but should also be selected to ensure the spatial term is sufficiently large to avoid spatial artefacts.

(37) In one embodiment, σ_d and σ_r are set to min(X, Y)*4 and (max(I) - min(I))/4, respectively. In one embodiment, the values are found empirically.
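The cross bilateral infill of step 120 can be sketched naively as below. This is a deliberately simple O(HW·K) sketch (K the number of seeded pixels), not an efficient implementation; `cross_bilateral_infill` and its defaults are illustrative assumptions:

```python
import numpy as np

def cross_bilateral_infill(guide, P_sparse, sigma_d=2.0, sigma_r=0.2):
    """Edge-sensitive infill of the sparse projector image (step 120).

    Spatial weights are Gaussian in pixel distance; range weights are
    Gaussian in the *guide* (original input) image difference, so the
    guide's structure steers the diffusion. Only non-zero sparse
    vectors contribute to the average.
    """
    H, W, N = P_sparse.shape
    out = np.zeros_like(P_sparse)
    ys, xs = np.nonzero(np.any(P_sparse != 0.0, axis=2))  # seed pixels
    for y in range(H):
        for x in range(W):
            d2 = (ys - y) ** 2 + (xs - x) ** 2
            r2 = np.sum((guide[ys, xs] - guide[y, x]) ** 2, axis=-1)
            w = np.exp(-d2 / (2 * sigma_d ** 2) - r2 / (2 * sigma_r ** 2))
            if w.sum() > 0:
                out[y, x] = (w[:, None] * P_sparse[ys, xs]).sum(0) / w.sum()
    return out
```

With a uniform guide and a single seeded vector, every output location receives that vector, illustrating how the infill propagates projection directions into unseeded regions.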

(38) In step 130, P(x, y) is adjusted such that each projection direction is a unit vector.

(39) P(x,y) ← P(x,y) / ‖P(x,y)‖

(40) An optional step 140 may also be applied. In this step, a spread function is applied to P(x, y) to improve the projector image. In particular, in one example the spread function moves each of the projection directions a fixed multiple of angular degrees away from the mean (the diffusion step pulls in the opposite direction and results in projection directions closer to the mean compared with those found at step 110).

(41) The exact spread function to be applied will vary from implementation to implementation and will also be dependent on the domain in question.

(42) By default, the spread is performed by computing the average angular deviation from the mean before and after the diffusion. Post-diffusion vectors are scaled by a single factor k (k ≥ 1) so that the average angular deviation is the same as prior to the diffusion step. If the spread function creates negative values, the value is clipped to 0. This scaling factor k can be varied according to the requirements of each implementation. For example, in time-lapse photography images, k may be 2 to stretch the projector image. In multi-focus applications the value of k may be larger (such as 5 or 8).
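One plausible reading of the spread step is sketched below. Note the simplification: the text's default computes k from the pre- and post-diffusion angular deviations, whereas here k is simply supplied as a parameter; `spread` is a hypothetical helper name:

```python
import numpy as np

def spread(P, k=2.0):
    """Optional spread step 140 (a sketch, assuming a supplied k).

    Each unit projection vector in the H x W x N image P is pushed a
    factor k further from the mean direction; negative components are
    clipped to zero and the vectors renormalised to unit length.
    """
    H, W, N = P.shape
    mean = P.reshape(-1, N).mean(0)
    mean = mean / np.linalg.norm(mean)
    out = mean[None, None, :] + k * (P - mean[None, None, :])
    out = np.clip(out, 0.0, None)          # clip negatives, as in the text
    norms = np.linalg.norm(out, axis=2, keepdims=True)
    return out / np.where(norms == 0, 1.0, norms)
```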

(43) In the embodiment of FIG. 2, the projection vectors that are well defined across the image are interpolated or diffused. In preferred embodiments, this is achieved by applying a simple cross bilateral filter, which has been found to provide superior results to a standard Gaussian or median filter, as it uses the image structure contained in the input image channels to guide the diffusion of the projection vectors.

(44) There are other ways of providing an in-filled projection map including anisotropic diffusion, connected component labelling (enforcing the same projection for the same connected component in the input (in analogy to) or enforcing spatial constraints more strongly than in bilateral filtering).

(45) The final projection image can be further constrained so it is a function of the input multichannel image. That is, the projection image can be a look-up-table from the input multichannel image.

(46) After performing steps 100-130 (and optionally 140), the result is N values per pixel defining a projection direction along which the N-vector I(x) is projected to make a scalar output image.

(47) FIG. 3 is a flow diagram of a method according to another embodiment.

(48) In this embodiment, the input image(s) are downsampled in step 200 (or alternatively a downsampled version may have been provided or is obtained) and P is calculated only for the thumbnail image in step 210 (P may be calculated in the same manner as set out with reference to FIG. 2, for example). Joint bilateral upsampling is then used in step 220 to find the full resolution projector image, which is then used in generating the output image as a per-pixel projection of the non-downsampled input channels in step 230.
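Steps 200-230 can be sketched as below, with plain nearest-neighbour upsampling standing in for the joint bilateral upsampling of the text (`projector_via_thumbnail` and `compute_P` are hypothetical names; `compute_P` maps an H×W×N image to an H×W×N projector image):

```python
import numpy as np

def projector_via_thumbnail(channels, compute_P, factor=4):
    """Thumbnail acceleration (steps 200-230), as a sketch.

    The projector image is computed only on a downsampled thumbnail,
    then upsampled (here: nearest neighbour) and applied per pixel to
    the full-resolution channels.
    """
    thumb = channels[::factor, ::factor]   # step 200: downsample
    P_small = compute_P(thumb)             # step 210: P on the thumbnail
    H, W = channels.shape[:2]
    ys = np.minimum(np.arange(H) // factor, P_small.shape[0] - 1)
    xs = np.minimum(np.arange(W) // factor, P_small.shape[1] - 1)
    P_full = P_small[np.ix_(ys, xs)]       # step 220: upsample
    # step 230: per-pixel projection of the full-resolution channels
    return np.einsum('hwn,hwn->hw', P_full, channels)
```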

(49) Again, the final projection map can be a look-up-table (LUT) of the input multichannel image. The LUT can be calculated on the thumbnail.

(50) This thumbnail computation also has the advantage that the projector image can be computed in tiles, i.e. the method never needs to calculate the full-resolution projector image.

(51) In an example, an RGB-NIR image pair at 682×1024 resolution, fused as separate R, G and B channels for a total of 3 fusion steps, takes 54.93 seconds at full resolution, and 2.82 seconds when calculated on 68×102 downsampled thumbnail images, using a MATLAB implementation of an embodiment. This increase in speed does not significantly affect the resulting image: the mean SSIM (structural similarity index) between the full resolution and downsampled results over the corresponding image channels is 0.9991. In general it has been found that an image can be downsampled aggressively to 10K-pixel thumbnails (or even slightly less, as in this example) with good results. Almost always, if downsized to approximately VGA resolution, the results computed on the thumbnails are close to identical to those computed on the full resolution image.

(52) FIG. 4 is a schematic diagram of a system 400 for generating an output image from a plurality, N, of corresponding input image channels 401-404. As discussed above, the channels may be separate images, image feeds from cameras, components of a single or related image feeds, components of a single or related image file, etc. In the illustrated embodiment, cameras 401 and 402 (for example one may be RGB and one infra-red) and a data source 403/404 are illustrated as providing the image channels. The data source could, for example, be a layered image file in which each layer acts as a separate channel 403, 404. It will be appreciated that many combinations and permutations are possible and the possible sources of image channels are endless.

(53) The system 400 includes an input 410 arranged to access the N input image channels. This may be an interface or bus to a data feed, a file I/O device or system or some other input.

(54) The system 400 also includes a processor 420 and any necessary memory or other components needed for the system 400 to operate and to execute computer program code for executing an image processing module, including:

(55) computer program code configured to determine a Jacobian matrix of the plurality of corresponding input image channels;

(56) computer program code configured to calculate the principal characteristic vector of the outer product of the Jacobian matrix;

(57) computer program code configured to set the sign associated with the principal characteristic vector whereby an input image channel pixel projected by the principal characteristic vector results in a positive scalar value; and,

(58) computer program code configured to generate the output image as a per-pixel projection of the input channels in the direction of the principal characteristic vector.

(59) The output image 430 may be output, for example, via an I/O device or system to memory, a data store, via a network, to a user interface or to an image reproduction device such as a printer or other device for producing a hard-copy. The output image could also serve as an input to other systems.

(60) Extension to the Above Described Method.

(61) Suppose that instead of using the SW method to map the N x and y derivatives into a single equivalent gradient, some other function f is used. The vector function f() returns a 1×2 vector: a per-pixel x and y estimated derivative. As an example of such a function, the SW method may be modified so that, in deriving the equivalent gradient, large per-channel derivatives are weighted more than those that are small. At each pixel, the projection vector v is found that satisfies:
v^t J = f(J)   (7)

(62) Equation (7) is underdetermined. There are many v that will satisfy this.

(63) However, this can be addressed by determining the minimum norm solution:
v = Jc where c^t = f(J)[J^t J]^{-1}   (8)

(64) where c is a 2-vector. That is, v is in the column space of J. Alternatively, at a pixel, a v could be found that best (in a least squares sense) satisfies v^t J = f(J) at the given pixel and at all pixels in an associated neighbourhood.
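The minimum-norm solution of equations (7)-(8) can be sketched directly in NumPy (`min_norm_projector` is a hypothetical helper name; `g` stands for the target gradient f(J)):

```python
import numpy as np

def min_norm_projector(J, g):
    """Minimum-norm v with v^t J = g (equations (7)-(8)).

    J is an N x 2 Jacobian and g a length-2 target gradient. Since
    v = J c with c^t = g (J^t J)^{-1}, v lies in the column space of
    J, and J^t J must be invertible (non-degenerate Jacobian).
    """
    c = np.linalg.solve(J.T @ J, np.asarray(g, dtype=float))
    return J @ c
```

Checking the construction: v^t J = c^t J^t J = g (J^t J)^{-1} J^t J = g, so the returned vector reproduces the target gradient exactly.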

(65) As with embodiments discussed above, the initial projector vector image is initially sparse and should be processed to define a projection vector everywhere through an edge-sensitive diffusion process.

(66) Here, it is not important that each v(x,y) has unit length. Rather, what matters is that if a given final projector is formed as a weighted combination of the original projectors in the sparse projection image, the sum of the weights is 1:

(67) v(x,y) = Σ_{i,j} w(i,j) v^s(i,j),  then  v(x,y) ← v(x,y) / Σ_{i,j} w(i,j)   (9)

(68) The right hand side of (9) reads that the final projector image is scaled by the reciprocal of the sum of the weights (used in defining the final projector image v(x,y)).

(69) In WO2011/023969, a copy of which is herein incorporated by reference, an N-component image is fused into an M-component counterpart (where typically M<<N). An example would be to map the 4-channel RGB-NIR image into a 3 dimensional fused colour image.

(70) In the disclosed method and system, an N×2 source Jacobian J^S is transformed to (for the colour case) a 3×2 accented Jacobian J^A. Each of the 3 derivative planes in J^A is reintegrated to give the final image. In reintegrating (the derived fields are usually non-integrable), reintegration artefacts are often produced.

(71) In embodiments of the present invention, per pixel, the 3×N linear transform T could be solved such that:

T J^S = J^A   (10)

(72) Again, because of the linearity of differentiation, the fused 3-dimensional image at a given pixel can be computed as T I(x,y), since if we differentiate this transformed 3-channel image we precisely calculate J^A. As above, there are many Ts that satisfy (10). A least-norm solution can be used to define T uniquely. Or, T can be found in a least-squares sense by finding a single T that best satisfies a given pixel location and that of its neighbourhood.
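A least-norm T for equation (10) can be sketched with the Moore-Penrose pseudo-inverse (`min_norm_transform` is a hypothetical helper name; M = 3 for the colour case but any M works):

```python
import numpy as np

def min_norm_transform(J_S, J_A):
    """Per-pixel minimum-norm M x N transform T with T J^S = J^A
    (equation (10)).

    J_S is the N x 2 source Jacobian and J_A the M x 2 accented
    Jacobian. The equality is exact when J_S has full rank 2, since
    then pinv(J_S) @ J_S is the 2 x 2 identity.
    """
    return J_A @ np.linalg.pinv(J_S)
```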

(73) It follows then that in image regions where there are non-zero Jacobians J^S and J^A, T^s(x,y) can be calculated (as before, the superscript s draws attention to the fact that the transform image is initially sparse). A final, non-sparse T(x,y) (at each location a 3×N transform T) is arrived at by diffusing the initially sparse set of transforms. Applying an analogous diffusion process to that described in the last section, the final output fused image is equal to T(x,y)I(x,y).

(74) Again the T(x,y) can be a function of the input image (each multichannel input maps to a single transform and this mapping can be implemented as a Look-up-table).

(75) Various experiments have been performed to compare the above described method with other algorithms: the image fusion method of Eynard et al., based on using a graph Laplacian to find an M-to-N channel color mapping, and the Spectral Edge (SE) method of Connah et al., which is based on the structure tensor together with look-up-table based gradient reintegration. The results are set out in the draft paper annexed hereto as Annex 1 and incorporated herein by reference. The paper was published in ICCV '15 Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Dec. 7-13, 2015, pages 334-342.

(76) A comparison of embodiments of the present invention with antecedent methods is shown in Annex 1 FIG. 1, where there are two uniform white images with respectively the top left and bottom left quarters removed. The discrete wavelet transform (DWT) images were produced using a wavelet-based method which merges the coefficients of the two images at different scales. We ran a standard DWT image fusion implementation using the CM (choose maximum) selection method, which is simple and one of the best performing in a comparison.

(77) The input images are small so there is only a 7-level wavelet decomposition. In 1c and 1d the outputs using Daubechies 4 and Biorthogonal 1.3 wavelets are shown. Clearly neither the basic wavelet method nor the SW method (1e) works on this image fusion example. However, the result of an embodiment of the present invention (1f) fuses the images without artefact. The intensity profile of the green line in 1f, shown in 1h, has the desired equiluminant white values, whereas the SW intensity profile 1g shows substantial hallucinated intensity variation.

(78) In Annex 1, FIG. 2 there is shown the colour to greyscale image fusion example of an Ishihara plate used to test colour blindness. In Annex 1 FIG. 2f there is shown the output of the SW method. The SW method fails here because the image is composed of circles of colour on a white background. Because all edges are isolated in this way, the equivalent gradient field exactly characterizes the colour gradients and is integrable, and the output in Annex 1 FIG. 2f does not have integration artefacts. Yet the fused image does not capture the actual look and feel of the input. In contrast, the image produced by an embodiment of the present invention in FIG. 2e (intermediate steps are shown in Annex 1 FIGS. 2b-2d) shows that the initial projection directions are diffused, with the bilateral filtering step enforcing that the projection directions calculated at a pixel are considered in concert with other image regions.

(79) The resultant greyscale output can be used, for example, for image optimization for color-deficient viewers. The image in Annex 1 FIG. 2e may be used as a luminance channel replacement in LUV color space for the Protanope simulation image, mapping color variation in the original RGB image Annex 1 FIG. 3a that is invisible to color-deficient observers, to luminance channel detail which they can perceive. In this particular embodiment, a downsampling ratio of 0.5 is used with a k stretching parameter of 2. The result of a system suggested by Eynard et al. is also presented as a comparison; both methods achieve the desired result, although the result of Eynard et al. produces a higher level of discrimination, as their fusion changes the output colour values, whereas the output image produced by embodiments of the present invention only affects luminance.

(80) The quality of the resultant greyscale output from an RGB image can be measured by various metrics. The metric of Kuhn et al. compares the colour distances between pixels in the original RGB image with the greyscale distances between pixels in the output greyscale image. Annex 1, Table 1 shows a comparison of the results of this metric when applied to the RGB images from the Cadik data set, and the CIE L luminance channel, the result of Eynard et al., and the results of the embodiments of the present invention. It will be appreciated that the results of embodiments of the present invention are superior in many cases.

(81) Images captured for remote sensing applications normally span the visible and infrared wavelength spectrum. Taking data from Landsat 5's Thematic Mapper (TM), an example can be seen in Annex 1 FIG. 6. There are 7 captured image channels (3 in the visible spectrum and 4 infrared images). The three visible images are captured from 0.45-0.51 μm (blue), 0.52-0.60 μm (green), and 0.63-0.69 μm (red), which were used as the B, G and R channels respectively of the input RGB image. In Annex 1 FIG. 6a, an input RGB image is shown from the Landsat image set, and in Annex 1 FIGS. 6b and 6c the infrared bands 5 and 7, which include extra detail not present in the RGB bands. All 4 infrared channels are used in the fusion, but only 2 are shown here for reasons of space. The 4 infrared channels are fused with the R, G and B channels in turn, using the SW method in Annex 1 FIG. 6d and an embodiment of the present invention in Annex 1 FIG. 6f, and then the output RGB channels have high and low quantiles matched to the input RGB channels. In Annex 1 FIG. 6e there is shown the result of the Spectral Edge method, which directly fuses the RGB image and all 7 multiband images.

(82) For this application a downsampling ratio of 0.5 and a k stretching parameter of 2 were used. The resultant image is significantly more detailed than that produced by the SW method.

(83) In Annex 1 FIG. 3, a conventional RGB image (3a) is fused with a near-infrared (NIR) image (3b). Processing according to an embodiment of the present invention is applied 3 times: fusing the R-channel with the NIR, the G-channel with the NIR and the B-channel with the NIR. Post-processing is then performed in which the images are stretched so that their 0.05 and 0.95 quantiles are the same as the original RGB image. The resultant image is shown in Annex 1 FIG. 3e. For comparison there is shown the Spectral Edge output, Annex 1 FIG. 3c, and the Eynard et al. output, Annex 1 FIG. 3d. In the same image order there is shown a magnified detail inset in Annex 1 FIG. 3f. The output image of the POP method captures more NIR detail than the SE result, while producing more natural colors than the result of Eynard et al., which has a green color cast and a lack of color contrast. The POP result shows good color contrast, naturalness and detail. For this application a downsampling ratio of 0.1 and a k stretching parameter of 1 are used.

(84) Multifocus image fusion is another potential application. It has typically been studied using greyscale images: standard multifocus image fusion involves fusing two greyscale input images with different focal settings. In each input image approximately half the image is in focus, so by combining them an image in focus at every point can be produced.

(85) Annex 1, Table 2 shows a comparison of the performance of embodiments of the present invention (the POP image fusion method) on this task, on several standard multifocus image pairs, using standard image fusion quality metrics. The Q_(XY/F) metric is based on gradient similarity, the Q(X, Y, F) metric is based on the structural similarity image measure (SSIM), and the M_F^(XY) metric is based on mutual information. The results are compared to those of various comparable methods; the resultant image produced by embodiments of the present invention comes out ahead in the majority of cases.

(86) Plenoptic photography provides various refocusing options for color images, allowing images with different depths of focus to be created from a single exposure. Embodiments of the present invention can be used to fuse these differently focused images into a single image wholly in focus. Implementations can be fine-tuned for this application, due to the knowledge that only one of the images is in focus at each pixel. In one example a large k scaling term in the spread function is applied, and a downsampling ratio of 0.5 is used. This allows a crystal-clear output image, in focus at every pixel, to be created.

(87) Annex 1 FIG. 7 shows an image in which four different refocused images are created from a single exposure. Using an embodiment of the present invention, the differently focused images are fused into a single image in focus at every point; in comparison, the result of the method of Eynard et al. does not show perfect detail in all parts of the image, and has unnatural color information.

(88) Time-lapse photography involves capturing images of the same scene at different times. These can be fused using embodiments of the present invention in the case of greyscale images. For RGB images the stacks of R, G and B channels can be fused separately. This fusion result creates an output image which combines the most salient details of all the time-lapse images. For this application a downsampling ratio of 0.5 and a k stretching parameter of 2 is used. Annex 1, FIG. 8 shows a series of time-lapse images (from Eynard et al.) from different parts of the day and night, and results of POP fusion and the method of Eynard et al. The details only visible with artificial lighting at night are combined with details only visible in the daytime in both results, but the results from embodiments of the present invention produce far more natural colors.
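The channel-wise handling described above (fusing the stacks of R, G and B channels separately) can be sketched as follows, assuming numpy. The fusion step itself is passed in as a callable so the sketch stays independent of any particular fusion implementation; the function name and interface are assumptions for illustration.

```python
import numpy as np

def fuse_rgb_timelapse(frames, fuse):
    """Fuse a time-lapse stack of RGB frames channel by channel.

    frames: array of shape (T, H, W, 3), T registered RGB frames.
    fuse:   callable mapping an (H, W, T) channel stack to an (H, W) plane,
            e.g. a greyscale multi-channel fusion routine.
    """
    T, H, W, _ = frames.shape
    out = np.empty((H, W, 3))
    for c in range(3):
        # Gather channel c across all T frames into an (H, W, T) stack,
        # then fuse that stack down to a single output plane.
        out[..., c] = fuse(np.moveaxis(frames[:, :, :, c], 0, -1))
    return out
```

With a multi-channel fusion routine as `fuse`, each output channel combines the salient detail from every capture time, as in the Annex 1 FIG. 8 example.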

(89) It is to be appreciated that certain embodiments of the invention as discussed herein may be incorporated as code (e.g., a software algorithm or program) residing in firmware and/or on a computer-usable medium having control logic for enabling execution on a computer system having a computer processor. Such a computer system typically includes memory storage configured to provide output from execution of the code which configures a processor in accordance with the execution. The code can be arranged as firmware or software, and can be organized as a set of modules such as discrete code modules, function calls, procedure calls or objects in an object-oriented programming environment. If implemented using modules, the code can comprise a single module or a plurality of modules that operate in cooperation with one another.

(90) Optional embodiments of the invention can be understood as including the parts, elements and features referred to or indicated herein, individually or collectively, in any or all combinations of two or more of the parts, elements or features, and wherein specific integers are mentioned herein which have known equivalents in the art to which the invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth.

(91) Although illustrated embodiments of the present invention have been described, it should be understood that various changes, substitutions, and alterations can be made by one of ordinary skill in the art without departing from the present invention which is defined by the recitations in the claims below and equivalents thereof.