Method and Apparatus for Contrast Enhancement

20220405886 · 2022-12-22

Assignee

Inventors

CPC classification

International classification

Abstract

This invention relates to a method for enhancing the contrast and other image quality aspects of an electronic representation of an image, based on a multi-scale decomposition and recomposition method. The image enhancement steps process the detail images by a conversion function whose defining parameters are optimized by the algorithm itself by means of a cost-function based optimization.

Claims

1. A method for enhancing the contrast of an electronic representation of an original image g.sub.0, represented by an array of pixel values, by processing said image, said processing comprising the steps of a) decomposing said digital image into a set of detail images d.sub.k at multiple resolution levels k and a residual image at a resolution level lower than said multiple resolution levels, b) processing at least one of said detail images d.sub.k by applying a multiplicative amplification image a.sub.k.sup.optimal governed by a parameter-set p.sub.k.sup.optimal, to obtain at least one processed detail image d.sub.k.sup.optimal, c) computing a processed image h.sub.0.sup.optimal by applying a reconstruction algorithm to the residual image, the unprocessed detail images d.sub.k and the at least one processed detail image d.sub.k.sup.optimal, said reconstruction algorithm being such that, if it were applied to the residual image and the detail images without processing, said digital image or a close approximation thereof would be obtained, characterized in that said processing comprises the step of: d) calculating said parameter-set p.sub.k.sup.optimal by optimizing a cost function L that is calculated from a processed detail image d.sub.k out, obtained by processing at least one of said detail images d.sub.k by applying a multiplicative amplification image a.sub.k governed by a parameter-set p.sub.k that is to be optimized.

2. The method according to claim 1, wherein said multiplicative amplification image a.sub.k is a function of d.sub.k: a.sub.k(i,j)=ƒ.sub.k(d.sub.k(i,j))/d.sub.k(i,j), wherein ƒ.sub.k is a mapping function that maps said detail image d.sub.k to the enhanced detail image d.sub.k out.

3. The method according to claim 1, wherein said multiplicative amplification image a.sub.k is a function of the variance of the translation difference image pixel value √{square root over (var.sub.k(i,j))}: a.sub.k(i,j)=ƒ.sub.k(√{square root over (var.sub.k(i,j))})/√{square root over (var.sub.k(i,j))}.

4. The method according to claim 1, wherein said multiplicative amplification image a.sub.k is a function of translation difference images: a.sub.k(i,j)=Σ.sub.mΣ.sub.nv.sub.m,nƒ.sub.k(g.sub.k(ri,rj)−g.sub.k(ri+m,rj+n))/d.sub.k(i,j).

5. The method according to claim 1, wherein said multiplicative amplification image a.sub.k is a function of d.sub.k, var.sub.k and translation difference images (g.sub.k(ri, rj)−g.sub.k(ri+m, rj+n)).

6. The method according to claim 2, wherein ƒ.sub.k is a non-linear monotonically increasing mapping function with a slope that gradually decreases with increasing argument values.

7. The method according to claim 1, wherein said cost function L is calculated from a processed image h.sub.0 obtained by applying said reconstruction algorithm to said residual image, said unprocessed detail images d.sub.k and said processed detail image d.sub.k out, the latter obtained by processing at least one of said detail images d.sub.k by applying a multiplicative amplification image a.sub.k governed by a parameter-set p.sub.k that is to be optimized.

8. The method according to claim 1, wherein said cost function L is calculated from statistical measures calculated from said processed detail image d.sub.k out.

9. The method according to claim 8, wherein said cost function L is a Student t-test of at least two sample sets h.sub.0.sup.bone and h.sub.0.sup.ST that are obtained from segmentation maps of the original image g.sub.0 of at least 2 distinct tissue types (bone and ST) and said processed image h.sub.0: t.sup.2=(h̄.sub.0.sup.bone−h̄.sub.0.sup.ST).sup.2/σ.sub.δ.sup.2, with h̄.sub.0.sup.bone and h̄.sub.0.sup.ST being the mean values of the sample sets for bone and soft tissue (ST), and σ.sub.δ.sup.2=(σ.sup.bone).sup.2/n.sup.bone+(σ.sup.ST).sup.2/n.sup.ST, with σ.sup.bone and σ.sup.ST being the standard deviations of the sample sets for bone and soft tissue (ST).

10. The method according to claim 8, wherein said cost function L is a Mann-Whitney U test of at least two sample sets h.sub.0.sup.bone and h.sub.0.sup.ST that are obtained from segmentation maps of the original image g.sub.0 of at least 2 distinct tissue types (bone and ST) and said processed image h.sub.0: U=Σ.sub.boneΣ.sub.STS(h.sub.0.sup.bone,h.sub.0.sup.ST), with S(X,Y)=1 if Y<X, S(X,Y)=½ if Y=X, and S(X,Y)=0 if Y>X.

11. The method according to claim 1, wherein said cost function L is determined by the relative entropy of said processed image h.sub.0: H(h.sub.0)=−Σ.sub.i=0.sup.nP(h.sub.0.sub.i) log(P(h.sub.0.sub.i))/log(n), with P(h.sub.0.sub.i) being the histogram bin count value for value h.sub.0.sub.i, and n the number of bins in said histogram.

12. The method according to claim 1, wherein said cost function L is determined by the mutual information I(g.sub.0, h.sub.0) between said original image g.sub.0 and said processed image h.sub.0: I(g.sub.0,h.sub.0)=Σ.sub.x∈g.sub.0Σ.sub.y∈h.sub.0P.sub.g.sub.0.sub.h.sub.0(x,y) log(P.sub.g.sub.0.sub.h.sub.0(x,y)/(P.sub.g.sub.0(x)P.sub.h.sub.0(y))), where P.sub.g.sub.0.sub.h.sub.0 is the joint probability density function of said original image g.sub.0 and said processed image h.sub.0, and where P.sub.g.sub.0 and P.sub.h.sub.0 are the marginal probability density functions of g.sub.0 and h.sub.0, respectively.

13. The method according to claim 1, wherein said cost function L is determined by a ratio of variances Var.sub.tot for said processed detail image d.sub.k out(i, j) and said original detail image d.sub.k(i, j): L=Var.sub.tot(log(√{square root over (var.sub.k.sup.out(i,j))}))/Var.sub.tot(log(√{square root over (var.sub.k(i,j))})), wherein the variance Var.sub.tot is calculated over all pixels i, j at all scales k, for the logarithm of the square root of the local variance √{square root over (var.sub.k(i,j))} of said processed or original detail image, and wherein var.sub.k(i, j) is the variance of translation difference images around pixel i, j at scale k.

14. The method according to claim 1, wherein said cost function L is determined by the ratio of two global statistical measures, wherein the numerator global statistical measure is calculated from local statistical measures for said processed image h.sub.0 and wherein the denominator global statistical measure is calculated from local statistical measures for said original image g.sub.0, wherein: L=Var.sub.tot(Var.sub.loc(h.sub.0)(i,j))/Var.sub.tot(Var.sub.loc(g.sub.0)(i,j)).

15. The method according to claim 14, wherein said global statistical measure is the variance Var.sub.tot over at least two pixels at at least one scale k, and wherein said local statistical measure is the logarithm of the square root of the variance of translation images log(√{square root over (var.sub.k(i,j))}), such that NL=Var.sub.tot(log(√{square root over (var.sub.k.sup.out(i,j))}))/Var.sub.tot(log(√{square root over (var.sub.k(i,j))})), wherein Var.sub.tot is the variance over all pixels in said image.

16. The method according to claim 1, wherein said cost function L is determined by calculation of the weighted average of RV(i, j) over all pixels: LIN=Σ.sub.i,jw.sub.i,jRV(i,j), wherein RV(i, j) is calculated for each pixel i, j as the relative variance over all scales k of the logarithm of the multiplicative amplification image a.sub.k:
RV(i,j)=10.sup.Var(log.sub.10(a.sub.k(i,j)))−1.

17. The method according to claim 1, wherein said cost function L is a weighted sum of cost functions.

18. The method according to claim 3, wherein ƒ.sub.k is a non-linear monotonically increasing mapping function with a slope that gradually decreases with increasing argument values.

19. The method according to claim 4, wherein ƒ.sub.k is a non-linear monotonically increasing mapping function with a slope that gradually decreases with increasing argument values.

Description

BRIEF DESCRIPTION OF DRAWINGS

[0025] FIG. 1 is a block scheme generally illustrating an apparatus according to the present invention.

[0026] FIG. 2 is a specific embodiment of an image acquisition apparatus, which reads out a phosphor plate 13 that was exposed by an X-ray source 10 in a storage phosphor cassette 12.

[0027] FIG. 3 is another embodiment of image acquisition apparatus, being a DR detector.

[0028] FIG. 4 is a block scheme illustrating the different steps of the contrast enhancing method.

[0029] FIG. 5 illustrates one way of performing the decomposition step in a method according to the present invention.

[0030] FIG. 6 illustrates an embodiment of a reconstruction algorithm.

[0031] FIG. 7 shows the coefficients of an example of a Gaussian filter.

DESCRIPTION OF EMBODIMENTS

[0032] In the following detailed description, reference is made to the above-referenced drawings in sufficient detail to allow those skilled in the art to practice the embodiments explained below.

[0033] The contrast enhancement algorithm of the present invention is applicable to all multi-scale detail representation methods from which the original image can be computed by applying the inverse transformation. Preferably, the multi-scale representation has a pyramidal structure such that the number of pixels in each detail image decreases at each coarser resolution level.

[0034] State-of-the-art multi-scale algorithms decompose an image into a multi-scale representation comprising detail images representing detail at multiple scales and a residual image. Some of the important multi-scale decompositions are the wavelet decomposition, the Laplacian-of-Gaussians (or LoG decomposition), the Difference-of-Gaussians (or DoG) decomposition and the Burt pyramid.

[0035] The wavelet decomposition is computed by applying a cascade of high-pass and low-pass filters followed by a subsampling step. The high-pass filter extracts the detail information out of an approximation image g.sub.k at a specific scale k (g.sub.0 being the original image at the original scale 0). In the Burt pyramid decomposition, the detail information is extracted out of an approximation image at scale k by subtracting the upsampled version of the approximation image at scale k+1.

[0036] A simplified block diagram of an apparatus according to the present invention is shown in FIG. 1. An image acquisition unit 1 acquires a digital image by sampling the output signal of an image sensor, such as a CCD sensor, a video camera, an image scanner or an image intensifying tube, and quantizes it using an A/D convertor into an array of pixel values, called the raw or original image 2, with pixel values typically 8 to 12 bits long. The pixel values are temporarily stored in memory if desired, and the digital image 2 is transmitted to an image enhancement unit 3, where the image contrast is adaptively enhanced in accordance with the present invention. Next, the enhanced image 4 is transmitted to the display mapping section 5, which modifies the pixel values according to a contrast curve such that the relevant image information is presented in an optimal way when the processed image 6 is visualised on an image output device 7, which produces either a hardcopy on transparent film or on paper, or a viewable image on a digital monitor or display screen.

[0037] A preferred embodiment of image acquisition unit 1 is shown in FIG. 2. A radiation image of an object 11 or part thereof, e.g. a patient, is recorded onto a photostimulable phosphor plate 13 by exposing said plate to X-rays originating from an X-ray source 10 and transmitted through the object. The photostimulable phosphor plate 13 is conveyed in a cassette 12.

[0038] In a radiation image readout apparatus the latent image stored in the photostimulable phosphor plate is read out by scanning the phosphor sheet with stimulating rays emitted by a laser 14. The stimulating rays are deflected according to the main scanning direction by means of a galvanometric deflection device 15. The secondary scanning motion is performed by transporting the phosphor sheet in the direction perpendicular to the scanning direction. A light collector 16 directs the light obtained by stimulated emission onto a photomultiplier 17 where it is converted into an electrical signal, which is next sampled by a sample and hold circuit 18, and converted into a 12 bit digital signal by means of an analog to digital converter 19. From there the digital image 2, called raw or original image, is sent to the enhancement section 3.

[0039] Another embodiment of image acquisition unit 1 is shown in FIG. 3: a radiation image of an object 11 or part thereof, e.g. a patient, is recorded onto a digital detector or direct radiography (DR) detector. The digital image 2 is directly produced by the detector unit.

[0040] The image enhancement system 3 consists of three main parts, schematically drawn in FIG. 4. In a decomposition section 30 the original image 2 is decomposed into a sequence of detail images, which represent the amount of detail present in the original image at multiple resolution levels, from fine to coarse.

[0041] After the last decomposition step a residual image 31′ may be left. The resulting detail images 31, which represent the amount of local detail at successive resolution levels, are next modified in modification section 32 by means of a non-linear mapping operation.

[0042] In state-of-the-art methods, such as the one disclosed in EP 527 525, a contrast enhanced version of the image is created by conversion in the modification section 32 of the pixel values in the detail images, followed by multi-scale reconstruction. All of the above implementations of multi-scale decomposition have a common property: each pixel value in the detail images can be computed out of an approximation image by combining the pixel values in a moving neighbourhood.

[0043] In the above cases the combining function is a weighted sum. For the wavelet decomposition the pixel values in the detail image at scale k are computed as:


d.sub.k+1=↓(h.sub.d*g.sub.k)


g.sub.k+1=↓(l.sub.d*g.sub.k)

with h.sub.d a high-pass filter, l.sub.d a low-pass filter, * the convolution operator and ↓ the subsampling operator (i.e. leaving out every second row and column).
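
As a minimal illustration of these two analysis equations, the following Python sketch runs one decomposition level on a 1-D signal. The Haar filter taps and the circular boundary handling are assumptions of this example, not the filters of the invention.

```python
# One wavelet-style analysis level: d = down(h_d * g), g' = down(l_d * g).
# Haar taps and circular (wrap-around) boundaries are assumed for simplicity.
def convolve(signal, taps):
    n = len(signal)
    return [sum(taps[t] * signal[(i + t) % n] for t in range(len(taps)))
            for i in range(n)]

def analysis_level(g_k):
    l_d = [0.5, 0.5]    # low-pass filter (pairwise average)
    h_d = [0.5, -0.5]   # high-pass filter (pairwise difference)
    # the subsampling operator keeps every second sample
    return convolve(g_k, l_d)[::2], convolve(g_k, h_d)[::2]

g0 = [4, 4, 8, 8, 2, 2, 6, 6]
g1, d1 = analysis_level(g0)   # approximation and detail at the next scale
```

With this pairwise-constant input, every retained detail coefficient is zero, so all signal content moves into the approximation g1.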

[0044] For the wavelet reconstruction the contrast enhanced approximation image at scale k is computed as:


h.sub.k=l.sub.r*(↑h.sub.k+1)+h.sub.r*(↑ƒ.sub.k(d.sub.k(i,j),var.sub.k(i,j)))

[0045] with h.sub.r a high-pass filter, l.sub.r a low-pass filter and ↑ the upsampling operator (i.e. inserting pixels with value 0 in between any two rows and columns).

[0046] For the Burt decomposition the pixel values in the detail image at scale k are computed as:


d.sub.k=g.sub.k−4g*(↑g.sub.k+1)

or


d.sub.k=g.sub.k−4g*(↑(↓(g*g.sub.k)))

or


d.sub.k=(1−4g*(↑(↓g)))*g.sub.k

with g a Gaussian low-pass filter and 1 the identity operator.

[0047] For the Burt reconstruction the contrast enhanced approximation image at scale k is computed as:


h.sub.k=4g*(↑h.sub.k+1)+ƒ.sub.k(d.sub.k(i,j),var.sub.k(i,j))

with ƒ.sub.k(d.sub.k(i, j), var.sub.k(i, j)) the conversion operator.
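
The Burt decomposition and reconstruction steps can be sketched in 1-D as follows; the 3-tap binomial smoothing filter, the circular boundaries, and the gain of 2 (playing the role of the factor 4 in the 2-D equations) are assumptions of this example. With ƒ.sub.k taken as the identity, the round trip reproduces the input exactly.

```python
# Burt-style pyramid in 1-D: d_k = g_k - expand(g_{k+1}),
# reconstruction h_k = expand(h_{k+1}) + f_k(d_k), here with f_k = identity.
def reduce_(g):
    # smooth with a [1/4, 1/2, 1/4] filter, then keep every second sample
    n = len(g)
    return [0.25 * g[(2 * i - 1) % n] + 0.5 * g[2 * i] + 0.25 * g[(2 * i + 1) % n]
            for i in range(n // 2)]

def expand(g):
    # insert zeros between samples, smooth, and compensate with a gain of 2
    u = [0.0] * (2 * len(g))
    for i, v in enumerate(g):
        u[2 * i] = v
    m = len(u)
    return [2.0 * (0.25 * u[(i - 1) % m] + 0.5 * u[i] + 0.25 * u[(i + 1) % m])
            for i in range(m)]

g0 = [3.0, 5.0, 7.0, 5.0, 3.0, 1.0, 1.0, 3.0]
g1 = reduce_(g0)                                   # coarser approximation
d0 = [a - b for a, b in zip(g0, expand(g1))]       # detail image at scale 0
h0 = [e + d for e, d in zip(expand(g1), d0)]       # reconstruction, f_k = identity
```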

[0048] Multi-Scale Detail Pixel Values as Weighted Sums

[0049] Suppose that in the Burt multi-scale decomposition a 5×5 Gaussian filter is used with coefficients w.sub.k,l, with k=−2, . . . , 2 and l=−2, . . . , 2, that the subsampling operator removes every second row and column, and that the upsampling operator inserts pixels with value 0 in between any two rows and columns.

[0050] The pixel at position i, j in the approximation image g.sub.k+1 is computed as:

[00001] g.sub.k+1(i,j)=Σ.sub.s=−2.sup.2Σ.sub.t=−2.sup.2w.sub.s,tg.sub.k(2i+s,2j+t)

The pixel at position i, j in the upsampled image u.sub.k is computed as:

[00002] u.sub.k(i,j)=Σ.sub.s=−2.sup.2Σ.sub.t=−2.sup.2w.sub.s,tg.sub.k(i+s,j+t), if i and j are even; u.sub.k(i,j)=0, if i and j are not both even.

[0051] The pixel at position i, j in the upsampled, smoothed image gu.sub.k is computed as:

[00003] gu.sub.k(i,j)=Σ.sub.m∈{−2,0,2}Σ.sub.n∈{−2,0,2}w.sub.m,nΣ.sub.s=−2.sup.2Σ.sub.t=−2.sup.2w.sub.s,tg.sub.k(i+s+m,j+t+n), if i and j are even;
gu.sub.k(i,j)=Σ.sub.m∈{−1,1}Σ.sub.n∈{−2,0,2}w.sub.m,nΣ.sub.s=−2.sup.2Σ.sub.t=−2.sup.2w.sub.s,tg.sub.k(i+s+m,j+t+n), if i is odd and j is even;
gu.sub.k(i,j)=Σ.sub.m∈{−2,0,2}Σ.sub.n∈{−1,1}w.sub.m,nΣ.sub.s=−2.sup.2Σ.sub.t=−2.sup.2w.sub.s,tg.sub.k(i+s+m,j+t+n), if i is even and j is odd;
gu.sub.k(i,j)=Σ.sub.m∈{−1,1}Σ.sub.n∈{−1,1}w.sub.m,nΣ.sub.s=−2.sup.2Σ.sub.t=−2.sup.2w.sub.s,tg.sub.k(i+s+m,j+t+n), if i and j are odd.

[0052] Finally, the pixel at position i, j in the detail image d.sub.k is computed as:

[00004] d.sub.k(i,j)=g.sub.k(i,j)−4Σ.sub.m∈{−2,0,2}Σ.sub.n∈{−2,0,2}w.sub.m,nΣ.sub.s=−2.sup.2Σ.sub.t=−2.sup.2w.sub.s,tg.sub.k(i+s+m,j+t+n), if i and j are even;
d.sub.k(i,j)=g.sub.k(i,j)−4Σ.sub.m∈{−1,1}Σ.sub.n∈{−2,0,2}w.sub.m,nΣ.sub.s=−2.sup.2Σ.sub.t=−2.sup.2w.sub.s,tg.sub.k(i+s+m,j+t+n), if i is odd and j is even;
d.sub.k(i,j)=g.sub.k(i,j)−4Σ.sub.m∈{−2,0,2}Σ.sub.n∈{−1,1}w.sub.m,nΣ.sub.s=−2.sup.2Σ.sub.t=−2.sup.2w.sub.s,tg.sub.k(i+s+m,j+t+n), if i is even and j is odd;
d.sub.k(i,j)=g.sub.k(i,j)−4Σ.sub.m∈{−1,1}Σ.sub.n∈{−1,1}w.sub.m,nΣ.sub.s=−2.sup.2Σ.sub.t=−2.sup.2w.sub.s,tg.sub.k(i+s+m,j+t+n), if i and j are odd.

Generally, the pixel at position i, j in the detail image d.sub.k can be computed as a weighted sum of pixels in the approximation image at the same or smaller scale k, k−1, k−2, . . . :

[00005] d.sub.k(i,j)=g.sub.l(ri,rj)−Σ.sub.mΣ.sub.nv.sub.m,ng.sub.l(ri+m,rj+n)

with l ∈ {0, . . . , k} and r=subsampling_factor.sup.(l−k)

[0053] Because

[00006] Σ.sub.mΣ.sub.nv.sub.m,n=1,

the pixel at position i, j in the detail image d.sub.k can be computed as:

[00007] d.sub.k(i,j)=g.sub.l(ri,rj)−Σ.sub.mΣ.sub.nv.sub.m,ng.sub.l(ri+m,rj+n)
d.sub.k(i,j)=Σ.sub.mΣ.sub.nv.sub.m,ng.sub.l(ri,rj)−Σ.sub.mΣ.sub.nv.sub.m,ng.sub.l(ri+m,rj+n)
d.sub.k(i,j)=c.sub.k(i,j)=Σ.sub.mΣ.sub.nv.sub.m,n(g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n))

[0054] The term (g.sub.l(ri, rj)−g.sub.l(ri+m, rj+n)) is called the translation difference. It expresses the difference in pixel value between a central pixel and a neighbouring pixel in an approximation image, and is a measure of local contrast. The weighted sum of the translation differences is called a centre difference c.sub.k(i, j). The weights can be chosen such that the centre difference images are identical to the multi-scale detail images, or such that they are a close approximation of the multi-scale detail images. In a similar way as disclosed above, it can be proven that the detail images in other multi-scale decomposition methods can also be represented as a combination of translation difference images.
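
The centre difference just described can be sketched in Python as follows; the uniform 1/9 weights of paragraph [0061] and the single-scale case (r = 1, l = k) are assumptions of this example, and border pixels are simply not evaluated.

```python
# Centre difference c_k(i, j): weighted sum of translation differences over a
# 3x3 neighbourhood, with uniform weights v = 1/9 (an assumed choice).
def centre_difference(img, i, j):
    v = 1.0 / 9.0
    centre = img[i][j]
    return sum(v * (centre - img[i + m][j + n])
               for m in (-1, 0, 1) for n in (-1, 0, 1))

g = [[1, 1, 1],
     [1, 10, 1],
     [1, 1, 1]]
# eight translation differences of 9 and one of 0: (1/9) * (8 * 9 + 0) = 8.0
c = centre_difference(g, 1, 1)
```

Because the weights sum to 1, this centre difference behaves like the detail image value of equation [00007].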

[0055] According to a specific embodiment of this method, the center difference image may be computed by applying a linear function as a combining operator to the translation difference images. An example of such a linear function is

[00008] Σ.sub.mΣ.sub.nv.sub.m,n(g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n))

such that this equation equals the detail image itself on condition that:

[00009] Σ.sub.mΣ.sub.nv.sub.m,n=1

[0056] In one embodiment, the statistical measure of translation difference image pixel value can be calculated as the local variance of translation difference image pixel value. One way to calculate this, is by applying a squaring operator as a combining operator to the translation difference images:

[00010] var.sub.k(i,j)=Σ.sub.mΣ.sub.nv.sub.m,n(g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n)).sup.2

[0057] A correct way to define variance is as the second central moment of a distribution, i.e. the expectation of the squared deviation of a random variable from its mean.

[0058] And thus, in another embodiment, the local variance of the translation difference image pixel value is computed as the weighted average of the squared difference of the translation difference image and the weighted average of the translation difference around each pixel of interest:

[00011] var.sub.k(i,j)=Σ.sub.mΣ.sub.nv.sub.m,n(g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n)−Σ.sub.mΣ.sub.nv.sub.m,n(g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n))).sup.2

Or since d.sub.k(ri, rj)=Σ.sub.mΣ.sub.nv.sub.m,n(g.sub.l(ri, rj)−g.sub.l(ri+m, rj+n)), the local variance of translation difference image pixel value can also be computed as the weighted average of the squared difference of the translation difference image and the detail image around each pixel of interest:

[00012] var.sub.k(i,j)=Σ.sub.mΣ.sub.nv.sub.m,n(g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n)−d.sub.k(ri,rj)).sup.2

[0059] or, in case that

[00013] Σ.sub.mΣ.sub.nv.sub.m,n=1, then var.sub.k(i,j)=−d.sub.k.sup.2+Σ.sub.mΣ.sub.nv.sub.m,n(g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n)).sup.2

[0060] It has to be noted that the variance of the approximation image, as used in Mei et al. 2002, is defined differently, as follows:

[00014] var.sub.k(i,j)=Σ.sub.mΣ.sub.nv.sub.m,n(g.sub.l(ri+m,rj+n)−Σ.sub.mΣ.sub.nv.sub.m,ng.sub.l(ri+m,rj+n)).sup.2

[0061] As an example of an N×N neighbourhood, a 3×3 neighbourhood has to be understood as a surface of 3×3 image pixels around a central pixel, such that m and n both range over {−1, 0, 1} and v.sub.m,n=1/9 in the mathematical notations above. v.sub.m,n can also have Gaussian weights.
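
The local variance of equation [00010] with the uniform 3×3 weights just described can be sketched as follows (the single-scale case r = 1 is an assumption of this example):

```python
# Local variance of translation differences over a 3x3 neighbourhood with
# uniform weights v = 1/9, as in paragraph [0061].
def var_translation(img, i, j):
    v = 1.0 / 9.0
    centre = img[i][j]
    return sum(v * (centre - img[i + m][j + n]) ** 2
               for m in (-1, 0, 1) for n in (-1, 0, 1))

g = [[1, 1, 1],
     [1, 4, 1],
     [1, 1, 1]]
# eight squared translation differences of 9 and one of 0: (1/9) * 72 = 8.0
var_c = var_translation(g, 1, 1)
```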

[0062] In another embodiment, the statistical measure of translation difference image pixel value is not limited to a variance of said translation difference image pixel values, but can also be any combination of different statistical measures or functions of said translation difference image pixel values, as for instance disclosed in EP3822907.

[0063] For instance, in another embodiment, the statistical measure of the translation difference image pixel value is not computed with an average operator, but as a predetermined percentile. As an example, this predetermined percentile may be defined as taking the maximum (or 100% percentile):

[00015] var.sub.k(i,j)=max.sub.m,n(g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n)−d.sub.k(ri,rj)).sup.2

[0064] Alternatively, other percentile values may be considered; instead of using the maximum (=100% percentile), a different percentile value can be taken such as for instance the median (50%) or 95%-percentile.

[0065] In another embodiment, other measures of the statistical dispersion of the translation difference image pixel value can be taken, for instance the difference of a first predetermined percentile value and a second predetermined percentile value of the translation difference image pixel values within a neighbourhood. As an example, we can take the range of the translation difference image pixel values around a pixel, where the first predetermined percentile is the maximum and the second predetermined percentile the minimum:

[00016] max.sub.m,n(g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n))−min.sub.m,n(g.sub.l(ri,rj)−g.sub.l(ri+m,rj+n))

Another example is the interquartile range.
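
The percentile-based dispersion measures above (maximum, range, interquartile range) can be sketched as follows; the nearest-rank percentile definition and the flat list of translation differences as input are assumptions of this example.

```python
# Percentile-based dispersion of the translation differences in a neighbourhood.
def percentile(values, q):
    """Nearest-rank percentile for q in [0, 100] (a simple assumed definition)."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(q / 100.0 * (len(s) - 1))))
    return s[k]

diffs = [-4, -1, 0, 0, 1, 1, 2, 3, 6]   # translation differences around a pixel

maximum = percentile(diffs, 100)                      # 100% percentile
rng = percentile(diffs, 100) - percentile(diffs, 0)   # range: max - min
iqr = percentile(diffs, 75) - percentile(diffs, 25)   # interquartile range
```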

[0066] In another embodiment, other multiscale decomposition methods based on steerable pyramids may be used, such as for instance disclosed in European application EP20180161.0, wherein different orientations are used:

d.sub.kn(i,j)=A.sub.kn(i,j) cos ϕ.sub.kn(i,j)

with d.sub.kn being the resulting detail image at scale k and orientation n, A.sub.kn the amplitude and ϕ.sub.kn the phase from the steerable pyramid decomposition. They are defined by the quadrature pair d.sub.kn and c.sub.kn:

[00017] A.sub.kn=√{square root over (d.sub.kn.sup.2+c.sub.kn.sup.2)}, ϕ.sub.kn=atan(c.sub.kn/d.sub.kn)

Where d.sub.kn is the detail image at scale (k) and orientation (n), and c.sub.kn is the conjugate detail image at scale (k) and orientation (n).

[0067] Conversion Operator

The goal of a conversion operator is to modify the detail images d.sub.k into enhanced detail images d.sub.k out, such that after reconstruction an enhanced image is obtained. The conversion operator referred to in this invention generally corresponds to a multiplicative amplification image (noted a.sub.k), but may as well be represented as a mapping function ƒ.sub.k. The term multiplicative amplification image implies that this conversion operator corresponds to an (image) matrix having the same dimensions as the detail image to be converted or enhanced. In fact, the multiplicative amplification image a.sub.k is a conversion function that may be based on the original detail image, wherein the zones to be emphasised or enhanced have relatively higher pixel values to express the desired local amplification. The multiplicative amplification image a.sub.k is “applied” to the detail image d.sub.k to obtain the modified detail image d.sub.k out by means of a pixel-by-pixel multiplication of the corresponding pixel values in both matrices or images.


d.sub.k out(i,j)=a.sub.k(i,j)d.sub.k(i,j)

The conversion operator may, however, also be represented by a mapping function ƒ.sub.k that operates on a pixel (i, j), rather than by a multiplicative amplification image a.sub.k (which operates on an image). It is clear that mapping function ƒ.sub.k and multiplicative amplification image a.sub.k are interchangeable by the following formula:


ƒ.sub.k(d.sub.k(i,j))=a.sub.k(i,j)d.sub.k(i,j)
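
This interchangeability can be checked numerically; the power mapping, the sign-preserving handling of negative detail values, and the guard for zero-valued detail pixels are assumptions of this sketch, not part of the invention's definition.

```python
# Numerical check that f_k and a_k are interchangeable:
# a_k(i, j) = f_k(d_k(i, j)) / d_k(i, j), so a_k(i, j) * d_k(i, j) = f_k(d_k(i, j)).
def f(x, alpha=1.0, p=0.7):
    # example monotonic power mapping, sign-preserving for negative details
    return alpha * (abs(x) ** p) * (1.0 if x >= 0 else -1.0)

d = [[-8.0, 0.5],
     [2.0, -0.25]]
# amplification image derived from the mapping (guard for zero-valued pixels)
a = [[f(x) / x if x != 0 else 1.0 for x in row] for row in d]
d_out = [[a[i][j] * d[i][j] for j in range(2)] for i in range(2)]
```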

[0068] The conversion operator is governed by a parameter-set p.sub.k that is different for each scale k. This means that the conversion operator can be expressed as a function (in principle any type of function) having the parameter-set p.sub.k as its defining parameters.


ƒ.sub.k(d.sub.k(i,j);p.sub.k)

or that the amplification image a.sub.k can be expressed as a function of said parameter-set p.sub.k,


a.sub.k(d.sub.k(i,j);p.sub.k)

Different approaches to obtain a suitable conversion operator exist.

[0069] Referring to FIG. 4, a preferred embodiment of the modification section 32 in accordance with the findings of the present invention comprises a memory 61 for temporarily storing the detail images 31 and the residual image 31′, and a conversion function 62 which converts and enhances each detail image d.sub.k according to a multiplicative amplification image a.sub.k:


d.sub.k out(i,j)=a.sub.k(i,j)d.sub.k(i,j)

[0070] The multiplicative amplification image a.sub.k may be a function of d.sub.k such as, for instance:

[00018] a.sub.k(i,j)=ƒ.sub.k(d.sub.k(i,j))/d.sub.k(i,j)

wherein ƒ.sub.k is a mapping function or enhancement function that maps the detail image to the enhanced detail image.

[0071] In a preferred embodiment, the multiplicative amplification image a.sub.k may be a function of √{square root over (var.sub.k(i, j))}, which is the square root of the local variance for each pixel of interest, such that:

[00019] a.sub.k(i,j)=ƒ.sub.k(√{square root over (var.sub.k(i,j))})/√{square root over (var.sub.k(i,j))}

[0072] In yet another embodiment, the multiplicative amplification image a.sub.k may be a combination of d.sub.k and var.sub.k:

[00020] a.sub.k(i,j)=(√{square root over (var.sub.k(i,j))}+|d.sub.k(i,j)|)/(n.sub.s+|d.sub.k(i,j)|)

wherein |d.sub.k (i, j)| is the absolute value of the detail image d.sub.k(i, j), and n.sub.s is a parameter to control the noise suppression level.

[0073] n.sub.s is chosen for each scale in such a way that √{square root over (var.sub.k(i, j))} and |d.sub.k (i, j)| have a similar order of magnitude. A higher value of n.sub.s results in more attenuation, a lower value in more amplification. For finer scales, with typically more noise, a larger noise suppression parameter is chosen, for coarser scales a smaller noise suppression is chosen. As an example, the noise suppression parameter n.sub.s=2.4 at scale 0, whereas n.sub.s=1 at scale 1 results in natural looking noise suppression in the reconstructed image.

[0074] In another embodiment, contrast enhancement is obtained by applying the conversion step to the translation differences:

[00021] ƒ.sub.k(d.sub.k(i,j))→Σ.sub.mΣ.sub.nv.sub.m,nƒ.sub.k(g.sub.k(ri,rj)−g.sub.k(ri+m,rj+n))

resulting in the following amplification image:

[00022] a.sub.k(i,j)=Σ.sub.mΣ.sub.nv.sub.m,nƒ.sub.k(g.sub.k(ri,rj)−g.sub.k(ri+m,rj+n))/d.sub.k(i,j)

[0075] In another embodiment, other multiscale decomposition methods based on steerable pyramids may be used:


d.sub.kn out(i,j)=a.sub.kn(A.sub.kn(i,j))·A.sub.kn(i,j)·cos ϕ.sub.kn(i,j)

with d.sub.kn out being the resulting enhanced detail image at scale k and orientation n, A.sub.kn the amplitude and ϕ.sub.kn the phase from the steerable pyramid decomposition, and a.sub.kn the multiplicative amplification, in a preferred embodiment defined as:

[00023] a.sub.kn(i,j)=ƒ.sub.kn(A.sub.kn(i,j))/A.sub.kn(i,j)

[0076] As stated above, it is important that the conversion operator is defined with a parameter-set p. In a first embodiment, the mapping function ƒ.sub.k is defined as:


ƒ.sub.k: x→αx.sup.p

[0077] For the preferred embodiment, the following amplification image is obtained:

[00024] a.sub.k(i,j)=α(√{square root over (var.sub.k(i,j))}).sup.p/√{square root over (var.sub.k(i,j))}=α(√{square root over (var.sub.k(i,j))}).sup.p−1

where the power p is chosen within the interval 0&lt;p&lt;1, preferably within the interval 0.5&lt;p&lt;0.9. A comparative evaluation of a large number of computed radiography images of thorax and bones by a team of radiologists indicated that p=0.7 is the optimal value in most cases. α defines a gain factor at scale k. In this embodiment the parameter-set comprises only a single parameter: the power p.

[0078] If this multiplicative amplification factor is applied to a detail image, then all details with a low variance of translation difference images (or elementary contrast) will be boosted relative to the image details which originally have a higher local variance.

[0079] In this respect, the above power function ƒ.sub.k: x→αx.sup.p proved to perform very well, but it is clear that an infinite variety of monotonically increasing mapping functions with a parameter-set p can be found that will enhance subtle details with low variance. The main requirement is that a.sub.k(i, j) is higher, and thus the slope of said mapping function is steeper, in the region of argument values that correspond to low elementary contrast.

[0080] When all detail images of the decomposition are modified using the same mapping according to one of the above methods, a uniform enhancement over all scales will be obtained. However, a better result is obtained by choosing slightly different mapping functions for each scale. As a consequence, the parameter-set p can be chosen to be dependent on the scale k of the detail image d.sub.k that is to be enhanced.


ƒ.sub.k: x→αx.sup.p.sup.k

Many different parameters p.sub.k can be chosen, so an optimization should be performed to achieve the best effect on the resulting image quality.

[0081] In an alternative embodiment, excessive noise amplification can be avoided by using a composite mapping function with a parameter set consisting of 3 parameters (p.sub.1, p.sub.2, c) per scale:

[00025] ƒ.sub.k: x→αc.sup.p.sup.2(x/c).sup.p.sup.1 if x&lt;c
ƒ.sub.k: x→αx.sup.p.sup.2 if x≥c

where the power p.sub.2 is chosen in the interval 0&lt;p.sub.2&lt;1, preferably 0.5&lt;p.sub.2&lt;0.9, and most preferably p.sub.2=0.7 (however, the preferred value of p.sub.2 highly depends upon the exact kind of radiological examination, and may vary). The power p.sub.1 in this equation should not be smaller than p.sub.2: p.sub.1≥p.sub.2, and the cross-over abscissa c specifies the transition point between both power functions.

[0082] Decreasing the power p.sub.2 will further enhance the contrast of subtle details, but at the same time the noise component will also be amplified. The noise amplification can be limited by choosing a power value p.sub.1 larger than p.sub.2, preferably 1.0, so that the slope of the mapping function is not extremely steep for the range of very small abscissae.

[0083] Ideally the cross-over abscissa c should be proportional to the standard deviation of the noise component (assuming additive noise), so the lowest amplitude details buried within the noise along with the majority of the noise signals will only be moderately amplified, according to the slope of the functional part controlled by the power p.sub.1, while the detail signals just exceeding the noise level will be amplified much more according to the slope of the functional part controlled by the power p.sub.2. The decreasing slope of the latter functional part still assures that the subtle details above the noise level are boosted relative to the high amplitude details. In this respect, the above composite power function proved to perform very well, but it is clear that an infinite variety of monotonically increasing mapping functions can be found that will enhance subtle details without boosting the noise to an excessive level.
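The composite mapping of equation [00025] can be sketched as follows; the default parameter values are illustrative assumptions, not prescribed by the text (the function name is also hypothetical):

```python
import numpy as np

def composite_mapping(x, alpha=1.0, p1=1.0, p2=0.7, c=0.01):
    """Composite power mapping f_k of [00025]: a gentle slope (power p1)
    below the cross-over abscissa c limits noise amplification, while the
    decreasing slope (power p2 < 1) above c boosts subtle details relative
    to high-amplitude ones. Defaults are illustrative assumptions."""
    x = np.asarray(x, dtype=float)
    below = alpha * c**p2 * (x / c)**p1   # branch for x < c (noise range)
    above = alpha * x**p2                 # branch for x >= c (detail range)
    return np.where(x < c, below, above)
```

Note that both branches agree at x = c, so the mapping is continuous and monotonically increasing, as required.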

[0084] The main requirement is that a.sub.k(i, j) is higher, and thus the slope of said mapping function is steeper, in the region of argument values that correspond to low elementary contrast than it is either in the sub-range of very small elementary contrast which mostly correspond to noise, or in the range of the larger elementary contrast.

[0085] Cost Function

From the above, it is clear that an infinite variety of monotonically increasing mapping functions ƒ.sub.k can be found. The parameter-set p.sub.k is typically dependent on the scale k of the detail image d.sub.k that is to be enhanced, but this is not a requirement. Many different parameter-sets p.sub.k can be chosen to modify the behaviour of the mapping functions ƒ.sub.k, all of which will have a different effect on the resulting image and image quality.

[0086] For this reason, in this invention an optimization of the parameter-set p.sub.k is performed to achieve the optimal effect of the mapping function ƒ.sub.k, or of the multiplicative amplification image a.sub.k, on the resulting image quality. The parameter-set p.sub.k (which may consist of multiple individual parameters that define the conversion operator) is considered as the variable for the optimization process. In this optimization process a cost function L will be optimized, resulting in one specific parameter-set p.sub.k optimal. This parameter-set p.sub.k optimal will define the optimal conversion operator (the optimal amplification image a.sub.k optimal), which is then applied to the detail image d.sub.k at resolution level or scale k to obtain the optimal results in the processed image. The optimization (which can be an iterative process that evaluates the outcomes of a cost function) thus results in the determination of the optimal parameter-set p.sub.k optimal that describes the optimal conversion operator.
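The optimization process described above can be sketched minimally as a brute-force search over candidate parameter-sets; any iterative or gradient-free optimizer known in the art could be substituted, and the function names are illustrative:

```python
import numpy as np

def optimize_parameters(candidates, cost):
    """Sketch of the optimization of [0086]: evaluate the cost function L
    for each candidate parameter-set p_k and keep the one with the lowest
    cost. A brute-force search stands in for any standard optimizer."""
    costs = [cost(p) for p in candidates]
    return candidates[int(np.argmin(costs))]
```

For example, with a hypothetical quadratic cost centered at 0.7, the search returns the candidate closest to that minimum.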

[0087] In the prior art, such as for instance disclosed in EP527525 “Method and apparatus for contrast enhancement”, Vuylsteke & Schoeters, (p. 9, l. 42), the optimization of the parameter-set p.sub.k was performed by a comparative evaluation of a large number of computed radiography images by a team of radiologists for each clinical image type (bones, thorax, soft tissue) and for each technical image type (defined by the gray-scale resolution—bit-depth—and spatial resolution of the image). In other words, the optimization of the parameter-set p.sub.k was a manual (or rather, human) process, which required human evaluation of the resulting processed images.

[0088] In addition, since the optimal values of p.sub.k differ for different images, and especially at the coarser scales, it is the goal of this invention to find these optimal values p.sub.k optimal based on statistics of the processed detail image or the final processed image, correlated with the original image and detail image. Therefore, a cost function or loss function L is composed, and this cost function is optimized in order to find the best values of p.sub.k: p.sub.k optimal, which are then applied to the detail image d.sub.k (at resolution level or scale k) to obtain the processed detail image d.sub.k optimal, which leads after the recomposition step to the final processed image h.sub.0.

[0089] The proper composition of this cost function is important: this cost function should be backed by a meaningful effect on the final image, should ideally show a continuous and smooth progression in function of p.sub.k and should be fast to calculate. In the following section, a number of embodiments are presented illustrating the general concept described above.

[0090] Six embodiments of such a cost function L are described in the following section:

[0091] 1) optimization of parameter-sets p.sub.k based on gray value tissue separation between distinct tissue types: in this embodiment, segmentation maps for at least 2 distinct tissue types are created for an image, such that a group of pixels in the image can be attributed to one of the distinct tissue types. Such a group of attributed pixels in the enhanced approximation image is called a sample set of the enhanced approximation image at scale k; the sample sets are then named for instance h.sub.k.sup.bone and h.sub.k.sup.ST to indicate their related tissue type (bone and soft tissue respectively).

Instead of the approximation image h.sub.k at scale k, one can also use the reconstructed image h.sub.0.

[0092] The optimization of the parameters p.sub.k is then performed by maximizing the distance between the sample sets h.sub.k.sup.bone and h.sub.k.sup.ST. In a different embodiment, for instance where lung tissue is the object of interest, this distance between the sample sets of soft tissue and ribs might be minimized.

[0093] In one embodiment, maximizing the distance can be achieved by maximizing the squared Student's t-statistic t.sup.2 for said sample sets (equivalently, minimizing −t.sup.2 as the cost):

[00026] t^2 = \left( \overline{h_k^{bone}} - \overline{h_k^{ST}} \right)^2 / \sigma_\delta^2,

with \overline{h_k^{bone}} and \overline{h_k^{ST}} being the mean values of the sample sets for bone and soft tissue (ST), and:

[00027] \sigma_\delta^2 = \frac{(\sigma_k^{bone})^2}{n_{bone}} + \frac{(\sigma_k^{ST})^2}{n_{ST}}

with σ.sub.k.sup.bone and σ.sub.k.sup.ST being the standard deviations of the sample sets for bone and soft tissue (ST), and n.sub.bone and n.sub.ST their respective sample sizes.
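The t.sup.2 separation measure of equations [00026] and [00027] can be sketched as follows (sample standard deviations are assumed; the function name is illustrative):

```python
import numpy as np

def t_squared(bone, st):
    """Squared Welch-type t-statistic of [00026]/[00027] between two sample
    sets, e.g. bone and soft-tissue pixels of the enhanced approximation
    image. Larger t^2 means better gray-value tissue separation."""
    bone, st = np.asarray(bone, float), np.asarray(st, float)
    # sigma_delta^2 = sigma_bone^2 / n_bone + sigma_ST^2 / n_ST
    var_delta = bone.var(ddof=1) / bone.size + st.var(ddof=1) / st.size
    return (bone.mean() - st.mean()) ** 2 / var_delta
```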

[0094] In another embodiment, maximizing the distance can be achieved by minimizing the Mann-Whitney U test for the sample sets h.sub.k.sup.bone and h.sub.k.sup.ST:

[00028] U = \sum_{bone} \sum_{ST} S(h_k^{bone}, h_k^{ST}), \qquad S(X, Y) = \begin{cases} 1 & \text{if } Y < X \\ \tfrac{1}{2} & \text{if } Y = X \\ 0 & \text{if } Y > X \end{cases}

[0095] Smaller U-values will result in fewer grey-values shared by bone and soft tissue and thus a better tissue separation.
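A brute-force sketch of the U statistic of equation [00028], summing S over all (bone, soft-tissue) pixel pairs (a rank-based formulation would be faster for large sample sets; the function name is illustrative):

```python
import numpy as np

def mann_whitney_u(bone, st):
    """Mann-Whitney U of [00028]: S = 1 where the soft-tissue value is
    below the bone value, 1/2 on ties, 0 otherwise, summed over all pairs.
    O(n*m) pairwise comparison via broadcasting."""
    bone = np.asarray(bone, float)[:, None]   # column: one row per bone pixel
    st = np.asarray(st, float)[None, :]       # row: one column per ST pixel
    return float(np.sum((st < bone) + 0.5 * (st == bone)))
```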

[0096] The segmentation maps for the different tissue types may be based on any segmentation algorithm designed for this purpose known in the art.

[0097] 2) optimization of parameters p.sub.k to obtain a better overall image contrast based on an entropy measure:

[0098] The relative entropy of a signal is calculated as:

[00029] H(h_k) = - \frac{\sum_{i=0}^{n} P(h_{k_i}) \log(P(h_{k_i}))}{\log(n)},

with P(h.sub.k.sub.i) the probability density function (or normalized bin count of the histogram) for h.sub.k having value h.sub.k.sub.i, and n the number of bins in the normalized histogram. As an extreme example, if all pixels of h.sub.k had the same value h.sub.k.sub.i, the probability density function would be a delta-function and the entropy would be zero. In case of histogram equalization, the entropy will be one. Different weights can be used for different tissue types, and the entropy can be calculated for different sample sets. Typically, high entropy is desirable for e.g. soft tissue and bone, while low entropy is desirable for implants, the collimated area or the background.
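The relative entropy of equation [00029] can be sketched as below; the bin count is an illustrative assumption:

```python
import numpy as np

def relative_entropy(image, n_bins=256):
    """Relative (normalized) entropy of [00029]: Shannon entropy of the
    n-bin histogram divided by log(n), so a flat (equalized) histogram
    gives 1 and a constant image gives 0."""
    counts, _ = np.histogram(np.asarray(image).ravel(), bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                 # the limit 0*log(0) is taken as 0
    return float(-np.sum(p * np.log(p)) / np.log(n_bins))
```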

[0099] 3) optimization of parameters p.sub.k to obtain a better overall image contrast based on conservation of mutual information. Non-linear modification of the detail images could impact the amount of mutual information of the enhanced approximation image in comparison with the original approximation image. To a certain extent, this is a wanted effect. Nevertheless, to limit this effect, the mutual information I(g.sub.k, h.sub.k) between input and output is calculated:

[00030] I(g_k, h_k) = \sum_{x \in g_k} \sum_{y \in h_k} P_{g_k h_k}(x, y) \log \left( \frac{P_{g_k h_k}(x, y)}{P_{g_k}(x) P_{h_k}(y)} \right),

where P.sub.g.sub.k.sub.h.sub.k is the joint probability density function of g.sub.k and h.sub.k (input and output approximation image or gray value image), and P.sub.g.sub.k and P.sub.h.sub.k are the marginal probability density functions of g.sub.k and h.sub.k respectively.
Instead of the approximation images g.sub.k and h.sub.k at scale k, one can also use the original and reconstructed images g.sub.0 and h.sub.0.

[0100] Higher mutual information between input and output means a higher mutual dependence. Typically, we want to keep I(g.sub.k, h.sub.k) above a certain threshold. Mutual information can be calculated for different sample sets.
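Equation [00030] can be sketched from a joint histogram of the input and output images; the bin count is an illustrative assumption:

```python
import numpy as np

def mutual_information(g, h, n_bins=64):
    """Mutual information I(g, h) of [00030], estimated from the joint
    histogram of input and output images; the marginals are the row and
    column sums of the normalized joint histogram."""
    joint, _, _ = np.histogram2d(np.ravel(g), np.ravel(h), bins=n_bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of g
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of h
    nz = p_xy > 0                           # skip empty cells (0*log 0 = 0)
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))
```

For instance, a binary image compared with itself yields I = log 2, the full entropy of the signal.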

[0101] In yet another embodiment the mutual information can be replaced by the correlation between the approximation image and the enhanced approximation image.

[0102] 4) optimization of parameters p.sub.k to obtain a better overall image contrast based on non-linear compression:

[00031] NL = \frac{Var_{tot}\left( \log( var_k^{out}(i, j) ) \right)}{Var_{tot}\left( \log( var_k(i, j) ) \right)}

In this embodiment the variance Var.sub.tot is the global variance over all pixels i,j and all scales k, var.sub.k(i, j) is the local variance of the original translation difference image pixel value at pixel i,j and scale k (the elementary contrast at pixel i,j and scale k), and var.sub.k out(i, j) is the local variance of the enhanced translation difference image pixel, which can be approximated by:


\sqrt{var_k^{out}(i, j)} = f_k\left( \sqrt{var_k(i, j)} \right)

The optimization of the parameters p.sub.k is then performed by evaluating NL, the ratio of Var.sub.tot for the processed detail images to that of the original detail images.

[0103] In the case that NL=1 the variance of elementary contrast in the processed detail images is the same as in the original detail images. When NL is smaller than 1, this indicates that the variance of the output is smaller than that of the original, and the contrast is non-linearly compressed by the conversion operator. This is typically achieved by a conversion operator which increases the elementary contrast in pixels with subtle details and decreases the elementary contrast in pixels with excessive details.

[0104] By selecting a target compression parameter NL for a detail image at a certain scale, more compression weight can be attributed to certain scales against which the parameters p.sub.k can then be optimized. NL can also be calculated for different sample sets in order to give more weight for certain tissue types.
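The NL measure of equation [00031] can be sketched as below, using the approximation sqrt(var_out) = f_k(sqrt(var)); the input shapes and function name are illustrative assumptions:

```python
import numpy as np

def nl_compression(local_var_per_scale, f_k_per_scale):
    """NL of [00031]: global variance of the log local variance after
    applying each scale's mapping f_k, divided by the same quantity
    before. NL < 1 indicates non-linear contrast compression.
    local_var_per_scale: one array of local variances per scale k;
    f_k_per_scale: the corresponding mapping functions."""
    v_in = np.concatenate([v.ravel() for v in local_var_per_scale])
    v_out = np.concatenate([
        f(np.sqrt(v.ravel())) ** 2          # sqrt(var_out) = f_k(sqrt(var))
        for v, f in zip(local_var_per_scale, f_k_per_scale)])
    return float(np.var(np.log(v_out)) / np.var(np.log(v_in)))
```

An identity mapping gives NL = 1, and a pure power mapping x → x^p gives NL = p², illustrating how a decreasing slope compresses the contrast range.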

[0105] 5) optimization of parameters p.sub.k to obtain a better overall image linearity based on the calculation of the relative variance: in this embodiment the relative variance (over all scales k) of the logarithm of the amplification for each pixel i,j is calculated:


RV(i, j) = 10^{Var(\log_{10}(a_k(i, j)))} - 1

[0106] It is noted that this relative variance RV(i, j) may be calculated using different weights for the different scales k. Next, the linearity LIN is calculated as the weighted average of RV (i, j) over all pixels.

[00032] LIN = \sum_{i,j} w_{i,j} \, RV(i, j)

In the case that LIN equals 0, the relative variance for all pixels equals zero; the amplification is then the same over all scales, and no non-linear image enhancement takes place.

[0107] Generally, non-linear image enhancement is desired across the entire image in order to obtain good image enhancement. However, in specific locations, e.g. sharp transitions at implants, linear enhancement is wanted. Therefore location-dependent weights w.sub.i,j are attributed to these specific locations, which means that selected zones or areas in the image may be attributed a higher weight in order to selectively optimize the cost-function (LIN) for these specific locations. Typically, large weights are attributed to locations where linearity is wanted (e.g. close to metal implants, at large edges, . . . ). These specific locations may be identified through standard segmentation algorithms described in the art, approximation image value, detail image value or a combination thereof.
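The LIN cost of equation [00032] with its location-dependent weights can be sketched as follows (the array shapes and function name are illustrative assumptions):

```python
import numpy as np

def linearity_cost(amplifications, weights):
    """LIN of [00032]: per pixel, the variance over scales k of
    log10(a_k(i,j)) gives RV(i,j) = 10**Var - 1, and LIN is the weighted
    sum of RV over all pixels.
    amplifications: array of shape (K, H, W), one amplification image per
    scale; weights: location-dependent weights w_ij of shape (H, W)."""
    rv = 10.0 ** np.var(np.log10(amplifications), axis=0) - 1.0
    return float(np.sum(weights * rv))
```

When the amplification is identical at every scale, RV vanishes everywhere and LIN = 0, matching the interpretation given above.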

[0108] 6) optimization of parameters p.sub.k to obtain a better overall image contrast based on similar amplification. As stated before, we only want slightly different amplifications between scales; in other words, we don't want excessive variation of the parameters p.sub.k over the scales. One way to enforce this is with the LIN cost-function. Another way is to minimize the change of the parameters p.sub.k over the scales. Therefore we minimize:

[00033] \frac{\partial^2 p_k}{\partial k^2}
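A discrete version of this penalty can be sketched with second differences over the scale index; summing the squared differences is an assumption (the text only requires the second derivative to be minimized):

```python
import numpy as np

def scale_smoothness_cost(p_k):
    """Discrete analogue of [00033]: the second difference of the
    parameters p_k over the scales, summed as a squared penalty so that
    a minimizer favors smoothly varying p_k."""
    p = np.asarray(p_k, float)
    return float(np.sum(np.diff(p, n=2) ** 2))
```

A linearly varying parameter sequence has zero cost, while any curvature over the scales is penalized.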

[0109] Often a weighted average of multiple cost-functions as described above will be used as a composed cost-function. Optimal weighting of these multiple cost-functions will result in an image-dependent parameter-set p.sub.k which yields further optimized enhanced images. Alternative cost functions may further be designed to optimize certain image quality aspects of a final processed image, on condition that a cost function can be found or designed that is a measure of the particular image quality aspect.

[0110] The cost-function is minimized and the optimal values for the parameter-set p.sub.k are searched for. Optimization may be based on any optimization algorithm designed for this purpose known in the art.

[0111] Reconstruction

The detail images are modified based on the optimal parameter set p.sub.k, starting from the lowest resolution detail image up to the finest level, which is the order in which they are needed in the course of the reconstruction process.

[0112] In addition, other enhancements may be applied to d.sub.k, g.sub.k, h.sub.k prior or after the application of the conversion operator.

[0113] In the image reconstruction section 34 the modified detail images 33 are next accumulated at all resolution levels, along with the residual image 31′ to compute the enhanced image 4.

[0114] A preferred embodiment of the decomposition process is depicted in FIG. 5. The original image is filtered by means of a low pass filter 41, and sub-sampled by a factor of two, which is implemented by computing the resulting low resolution approximation image g.sub.1 only at every other pixel position of every alternate row.

[0115] A detail image d.sub.0 at the finest level is obtained by interpolating the low resolution approximation g.sub.1 with doubling of the number of rows and columns, and pixel-wise subtracting the interpolated image from the original image 2.

[0116] The interpolation is effectuated by the interpolator 42, which inserts a column of zero values every other column, and a row of zero values every other row respectively, and next convolves the extended image with a low pass filter. The subtraction is done by the adder 43.

[0117] The same process is repeated on the low resolution approximation g.sub.1 instead of the original image 2, yielding an approximation of still lower resolution g.sub.2 and a detail image d.sub.1.

[0118] A sequence of detail images d.sub.k, k=[0,1, . . . , L−1] and a residual low resolution approximation g.sub.L are obtained by iterating the above process L times. The finest detail image d.sub.0 has the same size as the original image. The next coarser detail image d.sub.1 has only half as many rows and columns as the first detail image d.sub.0. At each step of the iteration the maximal spatial frequency of the resulting detail image is only half that of the previous finer detail image, and also the number of columns and rows is halved, in accordance with the Nyquist criterion.

[0119] After the last iteration a residual image g.sub.L 31′ is left which can be considered to be a very low resolution approximation of the original image. In the extreme case it consists of only 1 pixel which represents the average value of the original image 2. The filter coefficients of the low pass filter of the preferred embodiment are presented in FIG. 7. They correspond approximately to the samples of a two dimensional Gaussian distribution on a 5×5 grid. The same filter coefficients are used for the low pass filters 41, 41′, . . . 41′″ at all scales. The same filter kernel with all coefficients multiplied by 4 is also used within the interpolators 42, 42′, . . . 42″. The factor of 4 compensates for the insertion of zero pixel columns and rows as explained above. The corresponding reconstruction process is depicted in FIG. 6.

[0120] The residual image is first interpolated by interpolator 51 to twice its original size and the interpolated image is next pixelwise added to the detail image of the coarsest level d′.sub.L−1, using adder 52. The resulting image is interpolated and added to the next finer detail image. If this process is iterated L times using the unmodified detail images d.sub.L−1 . . . d.sub.0 then an image equal to the original image 2 will result. If, on the other hand, the detail images are modified before reconstruction according to the findings of the present invention, then a contrast enhanced image 4 will result. The interpolators 51, 51′ . . . 51″ are identical to those used in the decomposition section.
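The decomposition of [0114]-[0119] and the reconstruction of [0120] can be sketched as below. A separable 5-tap binomial low-pass is assumed in place of the actual 5×5 coefficients of FIG. 7, and the border handling is an assumption; with unmodified detail images the round trip reproduces the original exactly:

```python
import numpy as np

# Approximately Gaussian 5-tap kernel; an assumption standing in for FIG. 7.
KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

def _lowpass(img):
    """Separable 5-tap low-pass with reflective borders (an assumption)."""
    smooth = lambda r: np.convolve(np.pad(r, 2, mode="reflect"), KERNEL, "valid")
    img = np.apply_along_axis(smooth, 0, img)
    return np.apply_along_axis(smooth, 1, img)

def _interpolate(img):
    """Insert zero rows/columns, then low-pass; the factor 4 compensates
    for the inserted zeros, as in [0119]."""
    up = np.zeros((2 * img.shape[0], 2 * img.shape[1]))
    up[::2, ::2] = img
    return 4.0 * _lowpass(up)

def decompose(img, levels):
    """Multi-scale decomposition: detail images d_k plus a residual."""
    details, g = [], img
    for _ in range(levels):
        g_next = _lowpass(g)[::2, ::2]          # filter, then subsample by 2
        details.append(g - _interpolate(g_next))  # detail = g minus upsampled g_next
        g = g_next
    return details, g

def reconstruct(details, residual):
    """Reconstruction of [0120]: interpolate and add, coarsest level first."""
    g = residual
    for d in reversed(details):
        g = _interpolate(g) + d
    return g
```

Because each detail image is defined as the difference between an approximation and the interpolation of the next coarser one, reconstruction with unmodified details is exact by construction; enhancement is applied by modifying the details between the two passes.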

[0121] The image quality of the reproduction of a radiologic image can further be improved by boosting or suppressing the contribution of the detail information as a function of the average approximation values at a certain resolution level.

[0122] More specifically the contribution of the detail information is decreased in relatively bright image regions and enhanced in darker regions.

[0123] This method provides that the disturbing effect of noise is diminished without reducing the overall sharpness impression of the image reproduction, for noise perception is in general more prominent in bright areas than in the darker regions of a radiograph. To achieve this goal the previously described embodiment is modified according to one of the next implementations.

[0124] When the detail images are modified according to one of the above methods, and next accumulated in the previously described reconstruction section according to one of the above reconstruction methods, then the dynamic range of the resulting signal will normally exceed the original range. Therefore the resulting image signal is ultimately reduced to the dynamic range of the original image signal, or even smaller. In the former case the contrast of subtle details will show improved perceptibility in comparison with the original image; in the latter case the same perceptibility level may be reached with a smaller dynamic range, in accordance with the findings of the present invention. In a preferred embodiment the above reduction of dynamic range is accomplished by means of a display mapping module 5, which maps said reconstructed image signal 4 to an output signal 6 which represents the desired screen brightness or film density. The mapping is monotonic and may be linear or curved, depending on the desired gradation.