Method and Apparatus for Contrast Enhancement

20220392029 · 2022-12-08


Abstract

A multi-scale image processing algorithm for enhancing contrast in an electronic representation of an image, comprising: a) decomposing said digital image into a set of detail images at multiple resolution levels and a residual image at a resolution level lower than said multiple resolution levels, b) processing at least one pixel of said detail images, c) computing a processed image by applying a reconstruction algorithm to the residual image and the processed detail images, said reconstruction algorithm being such that if it were applied to the residual image and the detail images without processing, then said digital image or a close approximation thereof would be obtained. The processing comprises the steps of: d) calculating for said pixel, at least one statistical measure for two or more translation difference image pixel values within a neighbourhood of said pixel; and e) modifying the value of said pixel of said detail images as a function of said statistical measure and said value of said pixel of said detail images.

Claims

1. A method for enhancing the contrast of an electronic representation of an original image represented by an array of pixel values by processing said image, said processing comprising the steps of: a) decomposing said digital image into a set of detail images at multiple resolution levels and a residual image at a resolution level lower than said multiple resolution levels, b) processing at least one pixel of said detail images, and c) computing a processed image by applying a reconstruction algorithm to the residual image and the processed detail images, said reconstruction algorithm being such that if it were applied to the residual image and the detail images without processing, then said digital image or a close approximation thereof would be obtained, characterized in that said processing comprises the steps of: d) calculating for said pixel, at least one statistical measure that is computed as a central moment of two or more translation difference image pixel values within a neighborhood of said pixel, or as any combination of said central moments; wherein said translation difference image pixel value expresses the difference in pixel value between a central pixel and a neighboring pixel in an approximation image, and e) applying a function of said statistical measure and said value of said pixel of said detail images to obtain the value of said pixel of said detail images.

2. The method of claim 1, wherein said statistical measure is computed as a statistical dispersion of two or more translation difference image pixel values within a neighborhood of said pixel.

3. The method of claim 1, wherein said statistical measure is computed as the local variance of two or more translation difference image pixel values within a neighborhood of said pixel.

4. The method of claim 1, wherein said statistical measure is computed as the weighted average of the squared differences between translation difference image pixel values and the detail image pixel value.

5. The method of claim 1, wherein said statistical measure is computed as the weighted average of the squared differences between translation difference image pixel values and the detail image pixel value, wherein said detail image pixel value is computed as the weighted average of the translation difference image pixel values.

6. The method of claim 1, wherein said statistical measure is computed as the difference of a first predetermined percentile value and a second predetermined percentile value of translation difference image pixel values within a neighborhood of said pixel.

7. The method of claim 6, wherein said difference of a first predetermined percentile value and a second predetermined percentile value for said pixel is defined as the interquartile range within a neighborhood of said pixel.

8. The method of claim 1, wherein said statistical measure is computed as a predetermined percentile of the squared differences between translation difference image pixel values and the said detail image pixel value within a neighborhood of said pixel.

9. The method of claim 1, wherein said modifying is defined as a non-linear function of said statistical measure and said value of said pixel of said detail images.

10. The method of claim 1, wherein the modified value of said pixel of said detail images is defined as a product of an amplification image (a.sub.k) and the detail image:
d.sub.k out(i,j)=a.sub.k(i,j)d.sub.k(i,j) wherein d.sub.k out(i,j) is the modified detail image pixel value, d.sub.k(i,j) is the detail image pixel value and a.sub.k(i,j) is the amplification image pixel value.

11. The method of claim 10, wherein the amplification image pixel value is a function of at least one of said statistical measures.

12. The method of claim 10, wherein the amplification image is defined as: a.sub.k(i,j)=f.sub.k(√(var.sub.k(i,j)))/√(var.sub.k(i,j)), wherein var.sub.k is the local variance of at least one translation difference image pixel value within a neighborhood of said pixel.

13. The method of claim 12, wherein f.sub.k is a non-linear monotonically increasing conversion function with a slope that gradually decreases with increasing argument values.

14. The method of claim 1, wherein processing at least one pixel of said detail images is repeated at least one time for already processed detail image pixels.

15. An apparatus for enhancing the contrast in an electronic representation of an image comprising: means for decomposing said electronic representation into a set of detail images at multiple resolution levels and a residual image at a resolution level lower than the minimum of said multiple resolution levels, means for processing at least one pixel of said detail images, and means for computing a processed image by applying a reconstruction algorithm to the residual image and the processed detail images, said reconstruction algorithm being such that if it were applied to the residual image and the detail images without processing, then said electronic image or a close approximation thereof would be obtained, characterised in that said apparatus comprises: means for storing said detail images, and wherein said means for processing said detail images comprises: means for calculating for said pixel, at least one statistical measure that is computed as a central moment of two or more translation difference image pixel values within a neighborhood of said pixel, or as any combination of said central moments, wherein said translation difference image pixel value expresses the difference in pixel value between a central pixel and a neighboring pixel in an approximation image, and means for applying a function of said statistical measure and said value of said pixel of said detail images to obtain the value of said pixel of said detail images.

Description

BRIEF DESCRIPTION OF DRAWINGS

[0050] FIG. 1: Represents a comparison of the contrast enhancement results of the algorithm according to Mei et al. 2002 and the method of this invention. The curve drawn with “⋅” (dot) symbols represents the former method, while the curve drawn with “+” (plus) symbols represents the enhancement result based on the method of the invention. It can easily be seen that the latter method produces less overshoot in areas with sharp edges (such as, for instance, around pixel 15 in the graph).

DESCRIPTION OF EMBODIMENTS

[0051] The contrast enhancement algorithm of the present invention is applicable to all multi-scale detail representation methods from which the original image can be computed by applying the inverse transformation. Preferably, the multi-scale representation has a pyramidal structure such that the number of pixels in each detail image decreases at each coarser resolution level.

[0052] State-of-the-art multi-scale algorithms decompose an image into a multi-scale representation comprising detail images representing detail at multiple scales and a residual image. Some of the important multi-scale decompositions are the wavelet decomposition, the Laplacian-of-Gaussians (or LoG decomposition), the Difference-of-Gaussians (or DoG) decomposition and the Burt pyramid.

[0053] The wavelet decomposition is computed by applying a cascade of high-pass and low-pass filters followed by a subsampling step. The high-pass filter extracts the detail information out of an approximation image at a specific scale. In the Burt pyramid decomposition, the detail information is extracted out of an approximation image at scale k by subtracting the upsampled version of the approximation image at scale k+1.

[0054] In state-of-the-art methods such as the one disclosed in EP 527 525, a contrast-enhanced version of the image is created by conversion of the pixel values in the detail images, followed by multi-scale reconstruction. All of the above implementations of multi-scale decomposition share a common property: each pixel value in the detail images can be computed from an approximation image by combining the pixel values in a moving neighbourhood.

[0055] In the above cases the combining function is a weighted sum. For the wavelet decomposition the pixel values in the detail image at scale k are computed as:


d.sub.k+1=↓(h.sub.d*g.sub.k)


g.sub.k+1=↓(l.sub.d*g.sub.k)

[0056] with h.sub.d a high-pass filter, l.sub.d a low-pass filter, * the convolution operator and ↓ the subsampling operator (i.e. leaving out every second row and column).
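The analysis step above (d.sub.k+1 = ↓(h.sub.d*g.sub.k), g.sub.k+1 = ↓(l.sub.d*g.sub.k)) can be sketched numerically. This is a minimal illustration only: the separable Haar-like filter pair below is a hypothetical choice, since the patent does not fix h.sub.d and l.sub.d.

```python
import numpy as np
from scipy.signal import convolve2d

def wavelet_analysis_step(g_k, h_d, l_d):
    """One decomposition level: detail = down(h_d * g_k), approx = down(l_d * g_k)."""
    detail = convolve2d(g_k, h_d, mode="same", boundary="symm")
    approx = convolve2d(g_k, l_d, mode="same", boundary="symm")
    # the down-arrow (subsampling) operator: keep every second row and column
    return detail[::2, ::2], approx[::2, ::2]

# hypothetical separable Haar-like pair (outer products of 1-D filters)
l1 = np.array([0.5, 0.5])
h1 = np.array([0.5, -0.5])
l_d = np.outer(l1, l1)   # low-pass: coefficients sum to 1
h_d = np.outer(h1, h1)   # high-pass: coefficients sum to 0

g0 = np.random.rand(16, 16)
d1, g1 = wavelet_analysis_step(g0, h_d, l_d)
```

Because the high-pass coefficients sum to zero, a flat image yields an all-zero detail image, which is the expected behaviour of the detail extraction step.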

[0057] For the wavelet reconstruction the contrast enhanced approximation image at scale k is computed as:


h.sub.k=l.sub.r*(↑h.sub.k+1)+h.sub.r*(↑f.sub.k(d.sub.k(i,j),var.sub.k(i,j)))

[0058] with h.sub.r a high-pass filter, l.sub.r a low-pass filter and ↑ the upsampling operator (i.e. inserting pixels with value 0 in between any two rows and columns).

[0059] For the Burt decomposition the pixel values in the detail image at scale k are computed as:


d.sub.k=g.sub.k−4g*(↑g.sub.k+1)


or


d.sub.k=g.sub.k−4g*(↑(↓(g*g.sub.k)))


or


d.sub.k=(1−4g*(↑(↓g)))*g.sub.k

[0060] with g a Gaussian low-pass filter and 1 the identity operator.

[0061] For the Burt reconstruction the contrast enhanced approximation image at scale k is computed as:


h.sub.k=4g*(↑h.sub.k+1)+f.sub.k(d.sub.k(i,j),var.sub.k(i,j))

[0062] with f.sub.k(d.sub.k(i,j),var.sub.k(i,j)) the conversion operator.
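The Burt decomposition and reconstruction above can be sketched as a round trip. This is a sketch under stated assumptions: the separable 5×5 binomial kernel is a hypothetical stand-in for the Gaussian g, and the conversion operator is taken to be the identity, so the reconstruction must return the original image exactly (the property required of the reconstruction algorithm in claim 1).

```python
import numpy as np
from scipy.ndimage import convolve

# hypothetical separable 5x5 Gaussian-like kernel (binomial weights, sums to 1)
w1 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
G = np.outer(w1, w1)

def reduce_(g):
    """g_{k+1} = down(g * g_k): smooth, then keep every second row and column."""
    return convolve(g, G, mode="nearest")[::2, ::2]

def expand(g, shape):
    """4 g * (up g_{k+1}): insert zeros, smooth, scale by 4 to restore the mean level."""
    up = np.zeros(shape)
    up[::2, ::2] = g
    return 4.0 * convolve(up, G, mode="nearest")

g0 = np.random.rand(32, 32)
g1 = reduce_(g0)
d0 = g0 - expand(g1, g0.shape)      # detail image: d_k = g_k - 4g*(up g_{k+1})
recon = expand(g1, g0.shape) + d0   # reconstruction with identity conversion f_k(d) = d
```

With the identity conversion the reconstruction is exact by construction; contrast enhancement replaces d0 by a processed version before the final addition.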

[0063] The Multi-Scale Detail Pixel Values as Weighted Sums

[0064] Suppose that in the Burt multi-scale decomposition a 5×5 Gaussian filter is used with coefficients w.sub.s,t, with s=−2, . . . , 2 and t=−2, . . . , 2, that the subsampling operator removes every second row and column, and that the upsampling operator inserts pixels with value 0 in between any two rows and columns.

[0065] The pixel at position i,j in the approximation image g.sub.k+1 is computed as:

[00004] g_{k+1}(i,j) = \sum_{s=-2}^{2}\sum_{t=-2}^{2} w_{s,t}\, g_k(2i+s,\, 2j+t)

[0066] The pixel at position i,j in the upsampled image u.sub.k is computed as:

[00005] u_k(i,j) = \begin{cases} \sum_{s=-2}^{2}\sum_{t=-2}^{2} w_{s,t}\, g_k(i+s,\, j+t), & \text{if } i \text{ and } j \text{ are even} \\ 0, & \text{otherwise} \end{cases}

[0067] The pixel at position i,j in the upsampled, smoothed image gu.sub.k is computed as:

[00006] gu_k(i,j) = \begin{cases} \sum_{m\in\{-2,0,2\}}\sum_{n\in\{-2,0,2\}} w_{m,n} \sum_{s=-2}^{2}\sum_{t=-2}^{2} w_{s,t}\, g_k(i+s+m,\, j+t+n), & \text{if } i \text{ and } j \text{ are even} \\ \sum_{m\in\{-1,1\}}\sum_{n\in\{-2,0,2\}} w_{m,n} \sum_{s=-2}^{2}\sum_{t=-2}^{2} w_{s,t}\, g_k(i+s+m,\, j+t+n), & \text{if } i \text{ is odd and } j \text{ is even} \\ \sum_{m\in\{-2,0,2\}}\sum_{n\in\{-1,1\}} w_{m,n} \sum_{s=-2}^{2}\sum_{t=-2}^{2} w_{s,t}\, g_k(i+s+m,\, j+t+n), & \text{if } i \text{ is even and } j \text{ is odd} \\ \sum_{m\in\{-1,1\}}\sum_{n\in\{-1,1\}} w_{m,n} \sum_{s=-2}^{2}\sum_{t=-2}^{2} w_{s,t}\, g_k(i+s+m,\, j+t+n), & \text{if } i \text{ and } j \text{ are odd} \end{cases}

[0068] Finally, the pixel at position i,j in the detail image d.sub.k is computed as:

[00007] d_k(i,j) = \begin{cases} g_k(i,j) - 4\sum_{m\in\{-2,0,2\}}\sum_{n\in\{-2,0,2\}} w_{m,n} \sum_{s=-2}^{2}\sum_{t=-2}^{2} w_{s,t}\, g_k(i+s+m,\, j+t+n), & \text{if } i \text{ and } j \text{ are even} \\ g_k(i,j) - 4\sum_{m\in\{-1,1\}}\sum_{n\in\{-2,0,2\}} w_{m,n} \sum_{s=-2}^{2}\sum_{t=-2}^{2} w_{s,t}\, g_k(i+s+m,\, j+t+n), & \text{if } i \text{ is odd and } j \text{ is even} \\ g_k(i,j) - 4\sum_{m\in\{-2,0,2\}}\sum_{n\in\{-1,1\}} w_{m,n} \sum_{s=-2}^{2}\sum_{t=-2}^{2} w_{s,t}\, g_k(i+s+m,\, j+t+n), & \text{if } i \text{ is even and } j \text{ is odd} \\ g_k(i,j) - 4\sum_{m\in\{-1,1\}}\sum_{n\in\{-1,1\}} w_{m,n} \sum_{s=-2}^{2}\sum_{t=-2}^{2} w_{s,t}\, g_k(i+s+m,\, j+t+n), & \text{if } i \text{ and } j \text{ are odd} \end{cases}

[0069] Generally, the pixel at position i,j in the detail image d.sub.k can be computed as a weighted sum of pixels in the approximation image at the same or smaller scale k,k−1,k−2, . . . :

[00008] d_k(i,j) = g_l(ri,\, rj) - \sum_m \sum_n v_{m,n}\, g_l(ri+m,\, rj+n)

[0070] with l ∈ {0, . . . , k} and r=subsampling_factor.sup.(k−l)

[0071] Because,

[00009] \sum_m \sum_n v_{m,n} = 1

[0072] the pixel at position i,j in the detail image d.sub.k can be computed as:

[00010] d_k(i,j) = g_l(ri,rj) - \sum_m \sum_n v_{m,n}\, g_l(ri+m,\, rj+n) = \sum_m \sum_n v_{m,n}\, g_l(ri,rj) - \sum_m \sum_n v_{m,n}\, g_l(ri+m,\, rj+n) = \sum_m \sum_n v_{m,n}\,\big(g_l(ri,rj) - g_l(ri+m,\, rj+n)\big) = c_k(i,j)

[0073] The term (g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n)) is called the translation difference. It expresses the difference in pixel value between a central pixel and a neighbouring pixel in an approximation image, and is a measure of local contrast. The weighted sum of the translation differences is called a center difference c.sub.k(i,j). The weights can be chosen such that the center difference images are identical to the multi-scale detail images, or such that they are a close approximation of the multi-scale detail images. In a similar way as shown above, it can be proven that the detail images in other multi-scale decomposition methods can also be represented as a combination of translation difference images.
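The identity in [0072] can be checked numerically. The sketch below assumes uniform 3×3 weights v.sub.m,n = 1/9 (a choice the description only introduces later, in [0082]) and periodic boundary handling via np.roll; it verifies that the weighted sum of translation differences (the center difference) equals the pixel value minus the weighted local average.

```python
import numpy as np

def translation_differences(g, offsets):
    """Stack of translation difference images: g(i,j) - g(i+m, j+n) per offset."""
    return np.stack([g - np.roll(g, (-m, -n), axis=(0, 1)) for m, n in offsets])

# uniform 3x3 neighbourhood: v_{m,n} = 1/9, weights sum to 1
offsets = [(m, n) for m in (-1, 0, 1) for n in (-1, 0, 1)]
v = np.full(len(offsets), 1.0 / 9.0)

g = np.random.rand(16, 16)
tds = translation_differences(g, offsets)

# center difference: weighted sum of the translation differences
c = np.tensordot(v, tds, axes=1)

# equivalently, pixel value minus the weighted local average (since the weights sum to 1)
shifted = np.stack([np.roll(g, (-m, -n), axis=(0, 1)) for m, n in offsets])
d = g - np.tensordot(v, shifted, axes=1)
```

The two computations agree pixel for pixel, which is exactly the rewriting performed in [0072].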

[0074] According to a specific embodiment of this method, the center difference image may be computed by applying a linear function as a combining operator to the translation difference images. An example of such a linear function is

[00011] \sum_m \sum_n v_{m,n}\,\big(g_l(ri,rj) - g_l(ri+m,\, rj+n)\big)

[0075] such that this equation equals the detail image itself on condition that:

[00012] \sum_m \sum_n v_{m,n} = 1

[0076] In one embodiment, the statistical measure of the translation difference image pixel values can be calculated as their local variance. One way to calculate this is by applying a squaring operator as a combining operator to the translation difference images:

[00013] \mathrm{var}_k = \sum_m \sum_n v_{m,n}\,\big(g_l(ri,rj) - g_l(ri+m,\, rj+n)\big)^2

[0077] A correct way to define the variance is as the second central moment of a distribution, i.e. the expectation of the squared deviation of a random variable from its mean.

[0078] Thus, in another embodiment, the local variance of the translation difference image pixel values is computed as the weighted average of the squared difference between the translation difference image and the weighted average of the translation differences around each pixel of interest:

[00014] \mathrm{var}_k(i,j) = \sum_m \sum_n v_{m,n}\,\Big(g_l(ri,rj) - g_l(ri+m,\, rj+n) - \sum_m \sum_n v_{m,n}\,\big(g_l(ri,rj) - g_l(ri+m,\, rj+n)\big)\Big)^2

[0079] Or since d.sub.k(ri,rj)=Σ.sub.mΣ.sub.nv.sub.m,n(g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n)), the local variance of translation difference image pixel value can also be computed as the weighted average of the squared difference of the translation difference image and the detail image around each pixel of interest:

[00015] \mathrm{var}_k(i,j) = \sum_m \sum_n v_{m,n}\,\big(g_l(ri,rj) - g_l(ri+m,\, rj+n) - d_k(ri,rj)\big)^2

[0080] or in case that

[00016] \sum_m \sum_n v_{m,n} = 1, \text{ then } \mathrm{var}_k(i,j) = -d_k^2 + \sum_m \sum_n v_{m,n}\,\big(g_l(ri,rj) - g_l(ri+m,\, rj+n)\big)^2
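The equivalence of the two variance forms in [0078]-[0080] follows from expanding the square when the weights sum to 1; it can also be checked numerically. The sketch below again assumes uniform 3×3 weights and periodic boundaries (illustrative choices, not mandated by the description).

```python
import numpy as np

offsets = [(m, n) for m in (-1, 0, 1) for n in (-1, 0, 1)]
v = np.full(len(offsets), 1.0 / 9.0)   # weights sum to 1

g = np.random.rand(16, 16)
# translation difference images: g(i,j) - g(i+m, j+n)
t = np.stack([g - np.roll(g, (-m, -n), axis=(0, 1)) for m, n in offsets])

# detail value = weighted mean of the translation differences
d_k = np.tensordot(v, t, axes=1)

# form of [0079]: weighted average of the squared deviations from d_k
var_a = np.tensordot(v, (t - d_k) ** 2, axes=1)

# form of [0080]: -d_k^2 plus the weighted average of the squared translation differences
var_b = -d_k ** 2 + np.tensordot(v, t ** 2, axes=1)
```

Both arrays are identical up to floating-point rounding, confirming the algebraic rewriting.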

[0081] It has to be noted that the variance of the approximation image, as used in Mei et al. 2002, is defined differently, as follows:

[00017] \mathrm{var}_k(i,j) = \sum_m \sum_n v_{m,n}\,\Big(g_l(ri+m,\, rj+n) - \sum_m \sum_n v_{m,n}\, g_l(ri+m,\, rj+n)\Big)^2

[0082] As an example of an N×N neighbourhood, a 3×3 neighbourhood is to be understood as an area of 3×3 image pixels around a central pixel, such that m and n are both in {−1,0,1} and

[00018] v_{m,n} = \tfrac{1}{9}

for the mathematical notations above. v.sub.m,n can also have Gaussian weights.

[0083] In another embodiment, the statistical measure of the translation difference image pixel values could also be defined with higher central moments, such as the 2.sup.nd central moment (the variance, see above), or the 3.sup.rd central moment, the skewness:

[00019] \mathrm{skew}_k(i,j) = \sum_m \sum_n v_{m,n}\,\Big(g_l(ri,rj) - g_l(ri+m,\, rj+n) - \sum_m \sum_n v_{m,n}\,\big(g_l(ri,rj) - g_l(ri+m,\, rj+n)\big)\Big)^3

[0084] Or the 4.sup.th central moment, kurtosis:

[00020] \mathrm{kurtosis}_k(i,j) = \sum_m \sum_n v_{m,n}\,\Big(g_l(ri,rj) - g_l(ri+m,\, rj+n) - \sum_m \sum_n v_{m,n}\,\big(g_l(ri,rj) - g_l(ri+m,\, rj+n)\big)\Big)^4

[0085] In another embodiment, the statistical measure can also be any combination of different statistical measures.

[0086] In another embodiment, the statistical measure of the translation difference image pixel value is not computed with an average operator, but as a predetermined percentile.

[0087] As an example, this predetermined percentile may be defined as the maximum (i.e. the 100th percentile):

[00021] \mathrm{var}_k(i,j) = \max_{m,n}\,\big(g_l(ri,rj) - g_l(ri+m,\, rj+n) - d_k(ri,rj)\big)^2

[0088] Alternatively, other percentile values may be considered; instead of the maximum (the 100th percentile), a different percentile can be taken, such as the median (50th) or the 95th percentile.

[0089] In another embodiment, other measures of statistical dispersion of the translation difference image pixel values can be taken, for instance the difference of a first predetermined percentile value and a second predetermined percentile value of the translation difference image pixel values within a neighborhood. As an example, we can take the range of the translation difference image pixel values around a pixel, where the first predetermined percentile is the maximum and the second predetermined percentile is the minimum:

[00022] \max_{m,n}\,\big(g_l(ri,rj) - g_l(ri+m,\, rj+n)\big) - \min_{m,n}\,\big(g_l(ri,rj) - g_l(ri+m,\, rj+n)\big)

[0090] Another example is the interquartile range.
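The percentile-based dispersion measures of [0087]-[0090] can be sketched over a 3×3 neighbourhood (uniform weights and periodic boundaries are again illustrative assumptions):

```python
import numpy as np

offsets = [(m, n) for m in (-1, 0, 1) for n in (-1, 0, 1)]
g = np.random.rand(16, 16)
# translation difference images over the neighbourhood
t = np.stack([g - np.roll(g, (-m, -n), axis=(0, 1)) for m, n in offsets])
d_k = t.mean(axis=0)   # uniform weights: detail value = mean translation difference

# [0087]: maximum (100th percentile) of the squared deviations from d_k
disp_max = ((t - d_k) ** 2).max(axis=0)

# [0089]: range = maximum minus minimum of the translation differences
disp_range = t.max(axis=0) - t.min(axis=0)

# [0090]: interquartile range = 75th minus 25th percentile
disp_iqr = np.percentile(t, 75, axis=0) - np.percentile(t, 25, axis=0)
```

By construction the interquartile range never exceeds the full range, which makes it a more outlier-robust dispersion estimate.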

[0091] Conversion Operator

[0092] A statistical measure of translation difference image pixel value for at least one approximation image pixel within a neighbourhood of pixel and the value of the detail image pixel itself are used to modify the detail image pixel in order to obtain a processed detail image in such a way, that after reconstruction the contrast in the image is enhanced. As explained above, the term “statistical measure of translation difference image pixel value” means “any mathematical operation on translation difference image pixel value in a neighbourhood of a local approximation image pixel”. As explained above, this mathematical operation may for instance be based on the calculation of a local variance in an N×N neighbourhood.

[0093] The resulting statistical measure of translation difference image pixel value and the value of the detail image pixel are then used to modify the detail image pixel value in order to obtain a processed detail image in such a way, that after reconstruction contrast is enhanced.

[0094] This could be done with a function f.sub.k that is a function of the statistical measure var.sub.k(i,j) and the detail image pixel value d.sub.k(i,j), itself:


d.sub.k out(i,j)=f.sub.k(d.sub.k(i,j),var.sub.k(i,j))

[0095] Where d.sub.k out(i,j) is the resulting detail image pixel value.

[0096] In one embodiment, f.sub.k is a non-linear function of the statistical measure of translation difference image pixel value and the detail image pixel.

[0097] Conversion could also be applied with a multiplicative amplification image a.sub.k:


d.sub.k out(i,j)=a.sub.k(i,j)d.sub.k(i,j)

[0098] In one embodiment, a.sub.k(i,j) is a function of the statistical measure translation difference image pixel value:


a.sub.k(i,j)=f.sub.k(var.sub.k(i,j))

[0099] Typically, f.sub.k is non-linear in order to obtain contrast enhancement in the reconstructed image.

[0100] In one specific embodiment the amplification image can be defined as:

[00023] a_k(i,j) = \frac{f_k\big(\sqrt{\mathrm{var}_k(i,j)}\big)}{\sqrt{\mathrm{var}_k(i,j)}}

[0101] wherein var.sub.k(i,j) is the local variance of the translation difference image, and the mapping function f.sub.k is defined as, for instance, illustrated in EP 527 525, p 9.35:


f.sub.k: x → α x.sup.p

[0102] In this embodiment x will be √{square root over (var.sub.k(i,j))} (rather than the detail image pixel value, as in EP 527 525). As such:

[00024] a_k(i,j) = \frac{\alpha\,\big(\sqrt{\mathrm{var}_k(i,j)}\big)^{p}}{\sqrt{\mathrm{var}_k(i,j)}} = \alpha\,\big(\sqrt{\mathrm{var}_k(i,j)}\big)^{p-1}

[0103] where the power p is chosen within the interval 0&lt;p&lt;1, preferably within the interval 0.5&lt;p&lt;0.9. A comparative evaluation of a large number of computed radiography images of thorax and bones by a team of radiologists indicated that p=0.7 is the optimal value in most cases. α specifies a gain factor.
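The multiplicative amplification of [0102]-[0103] can be sketched as follows. The gain α and the small ε guarding the division at zero variance are hypothetical choices; p = 0.7 follows the preferred value stated above.

```python
import numpy as np

def amplification(var_k, alpha=1.0, p=0.7, eps=1e-8):
    """a_k = alpha * (sqrt(var_k))**(p-1); eps avoids division by zero variance."""
    s = np.sqrt(var_k) + eps
    return alpha * s ** (p - 1.0)

# local variances from very low (subtle detail) to very high (strong edges)
var_k = np.array([1e-4, 1e-2, 1.0, 100.0])
a_k = amplification(var_k)

# applying the factor to detail pixels: d_out = a_k * d_k
d_k = np.array([0.01, 0.1, 1.0, 10.0])
d_out = a_k * d_k
```

Since p − 1 < 0, the factor a_k decreases with increasing local variance, so details with low elementary contrast are boosted relative to those with high local variance, as stated in [0104].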

[0104] If this multiplicative amplification factor is applied to a detail image, then all details with a low variance of translation difference images (or elementary contrast) will be boosted relative to the image details which originally have a higher local variance.

[0105] In this respect, the above power function f.sub.k:x.fwdarw.a x.sup.p proved to perform very well, but it is clear that an infinite variety of monotonically increasing mapping functions can be found that will enhance subtle details with low variance. The main requirement is that a.sub.k(i,j) is higher, and thus the slope of said mapping function is steeper, in the region of argument values that correspond to low elementary contrast.

[0106] In an alternative embodiment, excessive noise amplification can be avoided by using a composite mapping function:

[00025] f_k:\; x \to \alpha\, c^{p_2}\,\Big(\frac{x}{c}\Big)^{p_1} \ \text{if } x < c; \qquad x \to \alpha\, x^{p_2} \ \text{if } x \ge c

[0107] where the power p.sub.2 is chosen in the interval 0&lt;p.sub.2&lt;1, preferably 0.5&lt;p.sub.2&lt;0.9, and most preferably p.sub.2=0.7 (however, the preferred value of p.sub.2 depends strongly on the exact kind of radiological examination, and may vary). The power p.sub.1 in this equation should not be smaller than p.sub.2, i.e. p.sub.1≥p.sub.2. The cross-over abscissa c specifies the transition point between both power functions.

[0108] Decreasing the power p.sub.2 will further enhance the contrast of subtle details, but at the same time the noise component will also be amplified. The noise amplification can be limited by choosing a power value p.sub.1 larger than p.sub.2, preferably 1.0, so that the slope of the mapping function is not extremely steep for the range of very small abscissae.

[0109] Ideally the cross-over abscissa c should be proportional to the standard deviation of the noise component (assuming additive noise), so the lowest amplitude details buried within the noise along with the majority of the noise signals will only be moderately amplified, according to the slope of the functional part controlled by the power p.sub.1, while the detail signals just exceeding the noise level will be amplified much more according to the slope of the functional part controlled by the power p.sub.2. The decreasing slope of the latter functional part still assures that the subtle details above the noise level are boosted relative to the high amplitude details. In this respect, the above composite power function proved to perform very well, but it is clear that an infinite variety of monotonically increasing mapping functions can be found that will enhance subtle details without boosting the noise to an excessive level.
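The composite mapping of [0106] can be sketched directly; the parameter values below (α, c, p.sub.1, p.sub.2) are hypothetical examples consistent with the constraints in [0107]. Continuity at the cross-over abscissa c follows from the c.sup.p2 prefactor.

```python
import numpy as np

def composite_mapping(x, alpha=1.0, c=0.05, p1=1.0, p2=0.7):
    """f_k(x) = alpha * c**p2 * (x/c)**p1 for x < c, alpha * x**p2 for x >= c."""
    x = np.asarray(x, dtype=float)
    below = alpha * c ** p2 * (x / c) ** p1   # gentler slope below the noise level c
    above = alpha * x ** p2                   # decreasing-slope power law above c
    return np.where(x < c, below, above)
```

With p1 = 1 the branch below c is linear, so the lowest-amplitude (mostly noise) contrasts are only moderately amplified, while signals just above c receive the strong boost of the p2 branch.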

[0110] The main requirement is that a.sub.k(i,j) is higher, and thus the slope of said mapping function is steeper, in the region of argument values that correspond to low elementary contrast than it is either in the sub-range of very small elementary contrast which mostly correspond to noise, or in the range of the larger elementary contrast.

[0111] When all detail images of the decomposition are modified using the same mapping according to one of the above methods, a uniform enhancement over all scales will be obtained. In a slightly modified embodiment, where a different mapping function is used at each resolution level e.g. by multiplying one of the above described mapping functions or multiplicative factor with a resolution level-dependent coefficient, it is possible to further increase the sharpness by setting the coefficient corresponding to the finest resolution level to a substantially higher value than the other coefficients as follows:


y=A.sub.iF(x)


for i=0 . . . L−1

[0112] where F(x) is one of the above described mappings, L is the number of resolution levels, and A.sub.i is a level dependent coefficient, e.g. A.sub.0>1, and A.sub.i<1 for 1≤i≤L−1.
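The level-dependent weighting y = A.sub.i F(x) of [0111]-[0112] can be sketched with hypothetical coefficient values (1.5 for the finest level, 0.9 for the others):

```python
import numpy as np

def level_coefficients(num_levels, finest=1.5, rest=0.9):
    """Hypothetical choice satisfying [0112]: A_0 > 1 boosts the finest level,
    A_i < 1 for i >= 1 attenuates the coarser levels."""
    A = np.full(num_levels, rest)
    A[0] = finest
    return A

A = level_coefficients(5)
# per level i, the mapping applied to the detail images becomes y = A[i] * F(x)
```

Setting A_0 substantially higher than the remaining coefficients increases perceived sharpness, since only the finest-scale details are boosted beyond the uniform enhancement.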

[0113] The detail images are modified starting from the lowest resolution detail image up to the finest level, which is the order in which they are needed in the course of the reconstruction process.

[0114] Subsequently, the reconstruction algorithm is applied to the modified detail images d.sub.k out.

[0115] In addition, other enhancements may be applied to d.sub.k prior or after the application of the conversion operator a.sub.k(i,j).

[0116] The above-mentioned aspects are realized by a method, an apparatus, a computer program product, and a computer readable medium as defined in the appended claims, which define the invention. Specific and preferred embodiments of this invention are set out in the dependent claims.

[0117] The following apparatus may be considered an example of the invention: an apparatus for enhancing the contrast in an electronic representation of an image comprising:
[0118] means for decomposing said electronic representation into a set of detail images at multiple resolution levels and a residual image at a resolution level lower than the minimum of said multiple resolution levels,
[0119] means for processing at least one pixel of said detail images,
[0120] means for computing a processed image by applying a reconstruction algorithm to the residual image and the processed detail images, said reconstruction algorithm being such that if it were applied to the residual image and the detail images without processing, then said electronic image or a close approximation thereof would be obtained, characterised in that said apparatus comprises:
[0121] means for storing said detail images, and wherein said means for processing said detail images comprises:
[0122] means for calculating for said pixel, at least one statistical measure for two or more translation difference image pixel values within a neighborhood of said pixel, and
[0123] means for modifying the value of said pixel of said detail images as a function of said statistical measure and said value of said pixel of said detail images.