Method and Apparatus for Image Enhancement of Radiographic Images

20230230213 · 2023-07-20

Abstract

A processing method for enhancing the image quality of an image, more particularly a digital medical grey scale image, that comprises the steps of a) decomposing an original image into multiple detail images at different resolution levels and/or orientations, b) processing the detail images to obtain processed detail images, c) computing a result image by applying a reconstruction algorithm to the processed detail images, said reconstruction algorithm being such that if it were applied to the detail images without processing, then said original image or a close approximation thereof would be obtained, wherein the processing of the detail images comprises the steps of: d) calculating at least one conjugate detail image, and e) computing at least one value of the processed detail images as a function of said conjugate detail image and said detail images.

Claims

1-24. (canceled)

25. A method for enhancing the contrast of a digital representation of an original image X represented by an array of pixel values by processing said image, said processing comprising the steps of: a) decomposing said original image X into a high-pass filtered detail image h.sub.0 and a set of detail images b.sub.kn at multiple resolution levels (K) and/or multiple orientations (N), b) processing at least one pixel (i,j) of said detail images h.sub.0 and b.sub.kn to obtain processed detail images h′.sub.0 and b′.sub.kn, c) computing a result image X′ by applying a reconstruction algorithm to said processed detail images h′.sub.0 and b′.sub.kn, d) calculating at least one conjugate detail image c.sub.kn, e) computing at least one pixel value of the processed detail images h′.sub.0 and b′.sub.kn as a function of said conjugate detail image c.sub.kn and said detail images h.sub.0 and/or b.sub.kn, wherein a pixel of said processed detail images is calculated as:
b′.sub.kn(i,j)=A′.sub.kn(i,j)*cos ϕ.sub.kn(i,j), wherein ϕ.sub.kn is a phase detail image, calculated as ϕ.sub.kn=atan(c.sub.kn/b.sub.kn), and wherein A′.sub.kn is an amplitude of said processed detail image, calculated as
A′.sub.kn(i,j)=ƒ.sub.kn(A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1,ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1) wherein, {A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1} is an amplitude of at least one pixel of detail images, and {ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1} is a phase of at least one pixel of detail images, and wherein the amplitude of a detail image A.sub.kn {A.sub.k0, A.sub.k1, . . . , A.sub.kN−1} is calculated as:
A.sub.kn=√{square root over (b.sub.kn.sup.2+c.sub.kn.sup.2)}.

26. The method of claim 25, wherein detail images b.sub.kn are calculated by: a) downscaling the original image X into a pyramid of increasingly lower-resolution images l.sub.0, l.sub.1, . . . , l.sub.K−1, each being a consecutively downscaled version of its predecessor in said pyramid, and b) applying a convolution to each of said images l.sub.k ∈{l.sub.0, l.sub.1, l.sub.2, . . . , l.sub.K−1} by a set of N filters B.sub.n {B.sub.0, B.sub.1, . . . , B.sub.N−1} to calculate a corresponding set of N detail images b.sub.kn {b.sub.k0, b.sub.k1, . . . , b.sub.kN−1}, wherein each B.sub.n is a rotated version of filter B.sub.0.

27. The method of claim 26, wherein said N filters B.sub.n are steerable filters, in such a way that a convolution of an arbitrary rotation of B.sub.0 with said lower-resolution image l.sub.k can be calculated as a linear combination of said detail images b.sub.kn.

28. The method of claim 26, where said N filters B.sub.n are calculated as a linear combination of steerable filters.

29. The method of claim 26, wherein N is different for each resolution level.

30. The method of claim 25, wherein a conjugate detail image c.sub.kn is calculated by downscaling the original image X into a pyramid of increasingly lower-resolution images l.sub.0, l.sub.1, . . . , l.sub.K−1, each being a consecutively downscaled version of its predecessor in said pyramid and applying a convolution to each of said images l.sub.k∈{l.sub.0, l.sub.1, l.sub.2, . . . , l.sub.K−1} by a set of N conjugate filters C.sub.0, C.sub.1, . . . , C.sub.N−1.

31. The method of claim 30, wherein N conjugate filters C.sub.0, C.sub.1, . . . , C.sub.N−1 are rotations of a Hilbert transform of said filter B.sub.0.

32. The method of claim 30, wherein N conjugate filters C.sub.0, C.sub.1, . . . , C.sub.N−1 are calculated as a linear combination of steerable filters.

33. The method of claim 25, wherein f.sub.kn is rotationally invariant.

34. The method of claim 25, wherein A′.sub.kn(i,j)=ƒ.sub.kn(A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1,ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1)=A.sub.kn*g.sub.kn(A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1,ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1), and wherein g.sub.kn is a multiplicative positive amplification function.

35. The method of claim 34, wherein:
A′.sub.kn(i,j)=A.sub.kn(i,j)g1.sub.kn(A.sub.kn)g2.sub.kn(ϕ.sub.kn)g3.sub.kn(A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1,ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1) and wherein g1.sub.kn, g2.sub.kn and g3.sub.kn are multiplicative positive amplification functions.

36. The method of claim 34, wherein g1.sub.kn is a positive non-linear amplification function with an amplification that gradually decreases with increasing argument values.

37. The method of claim 34, wherein g2.sub.kn has a lower amplification for an argument of 90 degrees than for an argument of 0 degrees, and is symmetric around 90 degrees.

38. The method of claim 34, wherein g2.sub.kn(ϕ.sub.kn(i,j)) is limited to a band of values defined between an upper and lower limit.

39. The method of claim 34, wherein g3.sub.kn is a function fitted to a training set and with (A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1, ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1) as argument.

40. The method of claim 34, wherein g3.sub.kn(A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1, ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1)=g3.sub.kn(A.sub.k0 . . . A.sub.kN−1) and wherein amplification g3.sub.kn is larger for orientations n close to n.sub.max, wherein n.sub.max is calculated as the orientation with maximum amplitude.

41. The method of claim 25, wherein a parameter α controls the variation of g3.sub.kn for different n.

42. The method of claim 41, wherein α is a function of grayscale, segmentation value, the spread of n.sub.max in an M×M neighbourhood, or any combination thereof.

43. The method of claim 25, wherein said high-pass filtered detail image h.sub.0 is non-linearly processed by: (a) calculating for said pixel, at least one statistical measure for two or more translation difference image pixel values within a neighbourhood of said pixel; and (b) modifying the value of said pixel of said detail image h′.sub.0 as a function of said statistical measure, amplitude and phase of said detail image b.sub.kn and value of said pixel of said detail image h.sub.0, as follows:
h′.sub.0(i,j)=a.sub.0(i,j)h.sub.0(i,j)
wherein,
a.sub.0(i,j)=ƒ.sub.0(√{square root over (var.sub.0)},A.sub.kn,ϕ.sub.kn).

44. The method of claim 25, wherein said high-pass filtered detail image h.sub.0 is non-linearly processed as
h′.sub.0(i,j)=h.sub.0*g.sub.0(A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1,ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1).

45. The method of claim 25, wherein processing at least one pixel of said detail images is repeated at least once for already processed detail image pixels.

Description

BRIEF DESCRIPTION OF DRAWINGS

[0037] FIG. 1 depicts the steerable pyramid processing method by means of the different filters and images involved. The original grayscale image X is high-pass filtered with filter H.sub.0 into high-pass detail image h.sub.0 and low-pass filtered into low-pass image l.sub.0. The (in this case) three detail images b.sub.00, b.sub.01, b.sub.02 can be calculated from the convolution of low-pass image l.sub.0 with a set of (in this case) 3 steerable filters B.sub.n∈{B.sub.0, B.sub.1, B.sub.2}. Similarly, three conjugate images c.sub.00, c.sub.01, c.sub.02 can be calculated from the convolution of low-pass image l.sub.0 with a set of (in this case) 3 filters C.sub.n∈{C.sub.0, C.sub.1, C.sub.2}. Further low-pass filtering of l.sub.0 and downscaling results in l.sub.1, from which b.sub.10, b.sub.11, b.sub.12, c.sub.10, c.sub.11, c.sub.12 and l.sub.2 can be calculated in a similar way. In the figure the process stops at residual image l.sub.2, but it can be further iterated, finally resulting in a residual low-pass image l.sub.K (l.sub.2 in the example). The detail images b.sub.kn are filtered with inverse filters B′.sub.n∈{B′.sub.0, B′.sub.1, B′.sub.2} and iteratively added to the upscaled version of l.sub.k+1. In this way, and by adding high-pass detail image h.sub.0, the grayscale image X′ can be reconstructed.

[0038] FIG. 2 represents an example of filter sets that may be applied in the method of the invention: low-pass filter L.sub.0 and high-pass filter H.sub.0 for use at scale 0, low-pass filter L, 6 steerable filters B.sub.0-5 rotated by steps of 30 degrees, and 6 conjugate filters C.sub.0-5.

[0039] FIG. 3 shows an example of detail images after applying filters from FIG. 2.

[0040] FIG. 4 shows an example of amplitude images by using filters from FIG. 2.

DESCRIPTION OF EMBODIMENTS

[0041] In the following detailed description, reference is made to the above-referenced drawings in sufficient detail to allow those skilled in the art to practice the embodiments explained below.

[0042] In the following, the preferred embodiments are explained in more detail. As a first step in the process (reference is made to FIG. 1 for a complete overview of the process), and before the decomposition steps are started, a high-pass filter H.sub.0 is applied to the original image X, resulting in the high-pass filtered detail image h.sub.0. Since the detail images b.sub.kn are band-pass filtered, this detail image h.sub.0 is needed to reconstruct the finest detail. In a preferred embodiment, H.sub.0 is chosen in such a way that it is self-inverting, and it has to be applied again before reconstruction.

[0043] The high-pass filtering step is performed as follows: a) a high-pass filter operator is applied to the original image X, obtaining a high-pass filtered detail image h.sub.0 and storing said image h.sub.0 in memory, b) a low-pass filter operator L.sub.0 is applied to the original image X, obtaining a low-pass filtered image l.sub.0 at the original image resolution and storing said low-pass filtered full-resolution image l.sub.0 in a memory.

[0044] In another embodiment, the high-pass detail image h.sub.0 could be obtained as the difference between X and low-pass image l.sub.0.

[0045] As a second step, the low-pass filtered full-resolution image l.sub.0 of the original image X is scaled down into a pyramid of increasingly lower-resolution images l.sub.1, l.sub.2, . . . , l.sub.K, each consecutive image being a low-pass filtered (with filter L) and downscaled version of its predecessor in said pyramid. By downscaling the original size image l.sub.0, one thus obtains a smaller next image l.sub.1 that may be further scaled down.

[0046] The downscaling of the low-pass filtered image l.sub.k can be performed by any known downscaling algorithm such as bi-linear interpolation, box-scaling, bi-cubic interpolation or the like. A preferred downscaling method for the application in the invention is one where the spatial resolution is halved along both axes of the image. The number of pixels of the resulting image is therefore reduced by a factor of 4.
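By way of illustration, the preferred halving-along-both-axes downscaling can be sketched as a 2×2 box average. This is a minimal sketch; the function name, the use of nested lists as an image representation, and box averaging as the smoothing kernel are illustrative choices, not prescribed by the invention.

```python
def downscale_2x(img):
    """Halve the spatial resolution along both axes by 2x2 box
    averaging, so the pixel count drops by a factor of 4."""
    h, w = len(img), len(img[0])
    return [[(img[2*y][2*x] + img[2*y][2*x + 1] +
              img[2*y + 1][2*x] + img[2*y + 1][2*x + 1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]
```

Applying `downscale_2x` repeatedly to l.sub.0 yields the pyramid l.sub.1, l.sub.2, . . . described above.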

[0047] Since a downscaling operation on a digital image can be considered a smoothing operation, the processing of a downscaled image is equivalent to the processing of a smoothed version of the image. In this respect, it can be understood that the pyramidal decomposition approach amounts to processing the same image at different resolution levels, i.e. to addressing the different signal frequencies present in the image.

[0048] In a preferred embodiment the filters depicted in FIG. 2 and proposed by Simoncelli and Freeman, combined with reduction of resolution by 2×2 are used.

[0049] In a third step, and after the decomposition of the original image into lower resolution images l.sub.0, l.sub.1, l.sub.2 . . . l.sub.k, each resolution version l.sub.k∈{l.sub.0, l.sub.1, l.sub.2 . . . l.sub.K−1} is further decomposed by applying a set of N filters B.sub.n ∈{B.sub.0, B.sub.1, . . . , B.sub.N−1}, where B.sub.n is a rotated version of B.sub.0 by a rotation of n/N*π.

[0050] This is achieved by applying a convolution of this set of filters to each of the lower resolution images l.sub.k. As such for each l.sub.k, N detail images b.sub.kn are obtained and each of the N filters will gauge the local orientation at lower resolution image l.sub.k. As a result, for each detail image b.sub.kn, k indicates the scale or resolution level, and n the orientation.
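The convolution of a lower-resolution image with a bank of N oriented filters can be sketched as follows. This is an illustrative sketch: the zero-padded "same"-size filtering (in correlation form, as is conventional in image processing) and the tiny derivative kernels used below stand in for the actual steerable filters B.sub.n of FIG. 2.

```python
def convolve2d(img, k):
    """Zero-padded, same-size filtering of a 2-D image with kernel k
    (correlation form; for antisymmetric kernels true convolution
    differs only in sign)."""
    h, w = len(img), len(img[0])
    kh, kw = len(k), len(k[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    yy, xx = y + dy - oy, x + dx - ox
                    if 0 <= yy < h and 0 <= xx < w:
                        s += img[yy][xx] * k[dy][dx]
            out[y][x] = s
    return out

def detail_images(l_k, filters):
    """One detail image b_kn per oriented filter B_n at scale k."""
    return [convolve2d(l_k, B) for B in filters]
```

With two orthogonal derivative kernels, a vertical edge responds to the horizontal filter but not to the vertical one, which is exactly the orientation-gauging behaviour described above.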

[0051] In the preferred embodiment, this set of filters is steerable. This implies that the convolution of l.sub.k with any rotation of B.sub.0 can be computed as a linear combination of basis images b.sub.kn.

[0052] The simplest example of a steerable basis is a set of N (N−1).sup.th-order directional derivatives of a circularly symmetric function, for example the Gaussian derivatives that are represented by a square pixel kernel of a manageable size. Since the kernels need to be convolved with the image, the choice of the kernel size is a trade-off between accuracy and computing speed. A larger convolution kernel will require more calculation time, but will render more accurate results. Higher-order Gaussian derivatives typically need larger kernel sizes. A typical kernel size is 5×5, 7×7, 9×9 or 11×11.
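Sampling such a directional Gaussian derivative on a square kernel can be sketched as below for the first-order x-derivative. The size and sigma values are hypothetical illustrative choices, and the actual filters of the preferred embodiment are those of FIG. 2, not this kernel.

```python
import math

def gaussian_dx_kernel(size=7, sigma=1.5):
    """First-order x-derivative of a 2-D Gaussian, sampled on a
    size x size grid centered at the origin."""
    r = size // 2
    return [[-(x / sigma**2) * math.exp(-(x*x + y*y) / (2 * sigma**2))
             for x in range(-r, r + 1)]
            for y in range(-r, r + 1)]
```

The kernel is antisymmetric in x (an odd-order filter), which is what makes its response stronger for edges than for lines, as discussed further below.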

[0053] The steerable pyramid is computed iteratively using convolution and decimation operations, and it is “self-inverting”. A way to design such filters is described in “The steerable pyramid: A flexible architecture for multi-scale derivative computation” by Simoncelli and Freeman, In Proceedings International Conference on Image Processing (Vol. 3, pp. 444-447). IEEE., October 1995. These filters need to be band-limited, have a flat system response and can be used iteratively in order to construct a pyramid. In addition, they need to be steerable and self-inverting.

[0054] The derivative order (N−1) can be chosen. Higher-order derivatives will result in better orientation resolution, but also in larger kernels and more (N) detail images, and will consequently require a higher computational effort. For instance, a first-order directional derivative cannot discriminate between more than two orientations (only steps of 90 degrees), and is thus not useful for enhancing structures with multiple orientations. Since multiple orientations are commonly present in radiographic images because of their additive nature, higher-order derivatives are preferred.

[0055] In practice, and in a preferred embodiment, a steerable filter-set having 6 orientations with steps of 30-degree rotation is taken, to match the simple cells in the primary visual cortex of the human visual system. An illustration of such a steerable filter set is given in FIG. 2 as B.sub.0-5.

[0056] The convolution operation in the example of each of these filters with the lower resolution image l.sub.k results in detail images b.sub.k0-5 (an equal amount as the number of steerable filters in the set), of which an example is shown in FIG. 3. For each convolution with filters B.sub.0-5 a corresponding detail image b.sub.k0-5 is produced. In one embodiment the filters B.sub.n are calculated by a rotation of filter B.sub.0 with n/N*π.

[0057] In another embodiment separable steerable filters are used to calculate the rotated version or a close approximation. Convolution with separable basis filters is computationally cheaper. For most steerable filters, the basis filters are not x-y separable. According to Freeman and Adelson, Appendix D, for each filter that can be written as a polynomial in x and y, there is an x-y separable basis. These basis functions can then be used to steer the filter B.sub.0 to any wanted rotation, more specifically to rotations n/N*π.

[0058] At the same time, or in a fourth step, the Hilbert transform of the above steerable filter-set B.sub.n (indicated as C.sub.0-5 in the example of FIG. 2) is convolved with each lower resolution image l.sub.k to calculate the conjugate detail images c.sub.k0-5. These conjugate detail images comprise the “imaginary” part of the signal in the image.

[0059] In one embodiment, C.sub.0 is calculated as Hilbert transform of B.sub.0, and C.sub.n are rotated versions over n/N*π of C.sub.0. In another embodiment, C.sub.0 is a polynomial approximation of the Hilbert transform of B.sub.0. A set of N+1 steerable filters is created as described by Adelson and Freeman. In this approach, the filter response that corresponds to rotation of n/N*π is calculated as a linear combination of a set of (separable) basis filters.

[0060] The Hilbert transform is important in signal processing to find the analytic representation of a real-valued signal u(t), the so-called analytic signal. Specifically, the Hilbert transform of u(t) is its harmonic conjugate v(t), a function of the real variable t such that the complex-valued function u(t)+iv(t) admits an extension to the complex upper half-plane satisfying the Cauchy–Riemann equations.

[0061] In practice, by calculating the Hilbert transform of the real part of a digital signal, one can find the (complex valued) analytical signal. The amplitude of this analytical signal corresponds to the ‘energy’ of the signal because it is not modulated. More specifically, by applying a Riesz transform, which is an N-dimensional generalization of the Hilbert transform, it is possible to find the amplitude, phase and orientation of the signal at a certain scale (or “level”) of the image (and not only the real response). Calculating these Riesz transforms at different scales of the image will provide knowledge about the structure in the image at the different scales, which can then be used to modify the detail images accordingly. In this invention the detail images, or the real part of the steerable pyramid, will be modified, based on the calculated amplitude, phase and orientation information, and from which an enhanced image X′ can be calculated.

[0062] In order to explain this further, a simple example is given to show the importance of phase on image enhancement. For an odd order set of filters B.sub.n, the “real” part of the signal, or the detail image b.sub.kn, will generate more signal for edges than for lines; the detail images b.sub.kn thus will mainly comprise signals representing the presence of edges in the lower resolution image l.sub.k. For odd order B.sub.n, the Hilbert transform C.sub.n has even order, and the “imaginary” part of the signal, at phase +90°, c.sub.kn will generate more signal for lines than for edges, and the conjugate detail image c.sub.kn will mainly comprise signals representing lines. In this way, the phase of our signal will be a measure for the “edgeness” (i.e. the amount of edges present in a pixel) in the lower resolution image. And the amplitude or magnitude of the signal gives a measure for the “energy” in the signal, regardless of whether this signal represents a line or edge. In addition, one can obtain this amplitude and phase for each orientation and scale. As such, one can control the modification of detail image b.sub.kn based on the amplitude and ‘line-edge-ness’ (phase) in each direction, and at finer scales one could for instance enhance edges more than lines because the former are less prone to noise. In a simple and preferred embodiment, the amount of ‘line-edge-ness’ or phase enhancement is rotationally invariant or independent of its direction.

[0063] In a preferred embodiment, amplitude A.sub.kn and phase ϕ.sub.kn for each quadrature pair b.sub.kn and c.sub.kn, and thus for each scale (k) and orientation (n), are calculated as:

[00002] A.sub.kn=√{square root over (b.sub.kn.sup.2+c.sub.kn.sup.2)}, ϕ.sub.kn=atan(c.sub.kn/b.sub.kn)

[0064] Where the amplitude A.sub.kn is a measure for contrast at a certain scale k and orientation n, while the phase ϕ.sub.kn is a measure for line/edgeness at a certain scale k and orientation n.
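The per-pixel amplitude and phase of a quadrature pair can be sketched as follows. This is an illustrative sketch; `atan2` is used here as a numerically robust form of the atan(c/b) in the formula above (it also handles b = 0), which is an implementation choice not stated in the source.

```python
import math

def amplitude_phase(b, c):
    """Per-pixel amplitude A_kn and phase phi_kn of a quadrature
    pair (b_kn, c_kn) of detail and conjugate detail images."""
    A   = [[math.hypot(bv, cv) for bv, cv in zip(br, cr)]
           for br, cr in zip(b, c)]
    phi = [[math.atan2(cv, bv) for bv, cv in zip(br, cr)]
           for br, cr in zip(b, c)]
    return A, phi
```

As a check, the detail pixel is recovered as b.sub.kn = A.sub.kn·cos ϕ.sub.kn, matching the reconstruction formula given below.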

[0065] The corresponding detail image is equal to:


b.sub.kn(i,j)=A.sub.kn(i,j)*cos ϕ.sub.kn(i,j)

[0066] It is the object of this invention to non-linearly modify the amplitude into a processed amplitude A′.sub.kn, in order to obtain processed detail images with the same phase but a modified amplitude (contrast):


b′.sub.kn(i,j)=A′.sub.kn(i,j)*cos ϕ.sub.kn(i,j)


wherein:


A′.sub.kn(i,j)=ƒ.sub.kn(A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1,ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1)

[0067] f.sub.kn being a conversion function. Typically, ƒ.sub.kn is a non-linear function in order to obtain contrast enhancement in the reconstructed image.

[0068] Further, it is more convenient to define this conversion function as a multiplicative amplification function or a gain function g.sub.kn:


A′.sub.kn(i,j)=A.sub.kng.sub.kn(A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1,ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1)

[0069] In a preferred embodiment this amplification function is translation- and rotation-invariant (as is the steerable pyramid itself). In other words, translation or rotation of image X has almost no impact on the processing result: any rotation or translation of image X followed by decomposition, modification of the detail images, reconstruction and inverse rotation or translation should have no or almost no effect on enhanced image X′. In another embodiment, the amplification function is translationally invariant and only quasi-rotationally invariant. In this case, there is no or almost no effect on the processing only for some specific rotations, for instance steps of 30 degrees.

[0070] In a preferred embodiment, this amplification depends only on the phase and amplitude at the same scale k:

[00003] g.sub.kn(A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1, ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1)=g.sub.kn(A.sub.k0 . . . A.sub.kN−1, ϕ.sub.k0 . . . ϕ.sub.kN−1)

[0071] In another embodiment, g.sub.kn does not depend only on the amplitude and phase at scale k; it could also be a function of lower scales. E.g., one could calculate the correlation between A.sub.kn and A.sub.k+1n. Pixels at different scales with high correlations could get a similar amplification to reduce artefacts.

[0072] In practice, this amplification function is split into two or more multiplicative amplification functions. In a preferred embodiment g.sub.kn is split into three parts:


g.sub.kn(A.sub.k0 . . . A.sub.kN−1,ϕ.sub.k0 . . . ϕ.sub.kN−1)=g1.sub.kn(A.sub.kn)g2.sub.kn(ϕ.sub.kn)g3.sub.kn(A.sub.k0 . . . A.sub.kN−1)

[0073] Where g1.sub.kn is amplitude-dependent, g2.sub.kn is phase-dependent and g3.sub.kn is orientation-dependent.

[0074] In one specific embodiment the first amplitude dependent amplification function can be defined as:

[00004] g1.sub.kn=ƒ1.sub.kn(A.sub.kn)/A.sub.kn

[0075] Wherein A.sub.kn is the amplitude for scale k and orientation n and is a measure for contrast, and mapping function ƒ1.sub.kn can be defined as illustrated for instance in EP 527 525 (p. 9, line 35):


ƒ1.sub.kn:x.fwdarw.αx.sup.p

[0076] In this embodiment x will be A.sub.kn. (rather than being defined as the detail image pixel as in EP 527 525). As such:


g1.sub.kn=αA.sub.kn.sup.p−1

[0077] where the power p is chosen within the interval 0<p<1, preferably within the interval 0.5<p<0.9. A comparative evaluation of a large number of computed radiography images of mainly thorax and bony structures by a team of radiologists indicated that p=0.7 is the optimal value in most cases. α specifies a gain factor.
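The power-law gain above can be sketched as follows. The default values α = 1 and p = 0.7 follow the preferred values stated above; the function name and the handling of A = 0 are illustrative choices.

```python
def g1(A, alpha=1.0, p=0.7):
    """Amplitude gain g1_kn(A) = f1(A)/A = alpha * A**(p - 1).
    With 0 < p < 1 the gain grows as A shrinks, so subtle
    (low-amplitude) detail is boosted relative to strong detail."""
    return alpha * A ** (p - 1.0) if A > 0 else 0.0
```

Multiplying the amplitude by this gain yields A′ = A·g1(A) = α·A.sup.p, i.e. exactly the mapping ƒ1 applied to the amplitude.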

[0078] If this multiplicative amplification factor is applied to a detail image


b′.sub.kn(i,j)=g1.sub.kn(A.sub.kn(i,j))*A.sub.kn(i,j)*cos ϕ.sub.kn(i,j)

[0079] then all details with a low amplitude will be boosted relative to the image details which originally have a higher amplitude.

[0080] In this respect, the above power function ƒ1.sub.kn: x.fwdarw.αx.sup.p proved to perform very well, but it is clear that an infinite variety of monotonically increasing mapping functions can be found that will enhance subtle details with low amplitude. The main requirement is that g.sub.kn is higher, and thus the slope of said mapping function is steeper, in the region of argument values that correspond to low elementary contrast.

[0081] In an alternative embodiment, excessive noise amplification can be avoided by using a composite mapping function:

[00005] ƒ.sub.kn:x.fwdarw.αc.sup.p2(x/c).sup.p1 if x<c; x.fwdarw.αx.sup.p2 if x≥c

[0082] where the power p.sub.2 is chosen in the interval 0<p.sub.2<1, preferably 0.5<p.sub.2<0.9, and most preferably p.sub.2=0.7 (however the preferred value of p.sub.2 highly depends upon the exact kind of radiological examination, and may vary). The power p.sub.1 in this equation should not be smaller than p.sub.2: p.sub.1≥p.sub.2, where the cross-over abscissa c specifies the transition point between both power functions.

[0083] Decreasing the power p.sub.2 will further enhance the contrast of subtle details, but at the same time the noise component will also be amplified. The noise amplification can be limited by choosing a power value p.sub.1 larger than p.sub.2, preferably 1.0, so that the slope of the mapping function is not extremely steep for the range of very small abscissae.

[0084] Ideally the cross-over abscissa c should be proportional to the standard deviation of the noise component (assuming additive noise), so the lowest amplitude details buried within the noise along with the majority of the noise signals will only be moderately amplified, according to the value of the functional part controlled by the power p.sub.1, while the detail signals just exceeding the noise level will be amplified much more according to the value of the functional part controlled by the power p.sub.2. The decreasing value of the latter functional part still ensures that the subtle details above the noise level are boosted relative to the high amplitude details. In this respect, the above composite power function proved to perform very well, but it is clear that an infinite variety of monotonically increasing mapping functions can be found that will enhance subtle details without boosting the noise to an excessive level.
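The composite mapping can be sketched as follows. The default values (p.sub.1 = 1.0, p.sub.2 = 0.7) follow the preferred values stated above, while the cross-over abscissa c = 0.05 is a hypothetical value; in practice c should be chosen proportional to the noise standard deviation, as explained above.

```python
def f_composite(x, alpha=1.0, c=0.05, p1=1.0, p2=0.7):
    """Composite mapping: power p1 below the cross-over abscissa c
    (moderate amplification of noise-level detail), power p2 above
    it; the two branches meet continuously at x == c."""
    if x < c:
        return alpha * c ** p2 * (x / c) ** p1
    return alpha * x ** p2
```

Because the x < c branch equals α·c.sup.p2 at x = c, the mapping is continuous at the transition point, and with p.sub.1 ≥ p.sub.2 its slope near zero stays moderate, limiting noise amplification.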

[0085] The main requirement is that g.sub.kn is higher, and thus the slope of said mapping function is steeper, in the region of argument values that correspond to low elementary contrast than it is either in the sub-range of very small elementary contrast which mostly correspond to noise, or in the range of the larger elementary contrast.

[0086] When all detail images of the decomposition are modified using the same mapping according to one of the above methods, a uniform enhancement over all orientations, and a rotation invariant conversion is obtained. In addition, the enhancement will also be uniform over all scales. In a slightly modified embodiment, where a different mapping function is used at each resolution level e.g. by multiplying one of the above described mapping functions or multiplicative factor with a resolution level-dependent coefficient, it is possible to further increase the sharpness by setting the coefficient corresponding to the finest resolution level to a substantially higher value than the other coefficients as follows:


y=A.sub.iF(x)


for i=0 . . . K−1

[0087] where F(x) is one of the above described mappings, K is the number of resolution levels, and A.sub.i is a level dependent coefficient, e.g. A.sub.0>1, and A.sub.i<1 for 1≤i≤K−1.

[0088] In one specific embodiment the second, phase dependent, amplification function can be defined as:

[00006] g2.sub.kn(ϕ.sub.kn(i,j))=(1/n.sub.s)(1−ϕ.sub.kn(i,j)*2/π).sup.2

[0089] In which n.sub.s is a noise suppression factor. A higher value of n.sub.s results in more attenuation, a lower value in more amplification. g2.sub.kn may also be limited to a band of values defined between an upper and a lower bound, especially for those functions with high values. Lower and upper bounds of respectively 0.5 and 1 were found to give good results. A smaller value of the lower bound (e.g. zero) will result in more attenuation of noise and cartooning-like artefacts, while larger values of the upper bound (e.g. 2) will result in increased enhancement of the edges, and overshoot artefacts at the edges. n.sub.s is chosen for each scale: for finer scales, with typically more noise, a larger noise suppression parameter is chosen; for coarser scales a smaller noise suppression is chosen. As an example, the noise suppression parameter n.sub.s=1.2 at scale 0, whereas n.sub.s=0.5 at scale 1, results in natural-looking noise suppression in the reconstructed image.
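The phase-dependent gain with band limiting can be sketched as follows. The default n.sub.s = 1.2 and the clipping band [0.5, 1] follow the values stated above for scale 0; the function name and the radian convention (0 = edge, π/2 = line) are illustrative choices.

```python
import math

def g2(phi, ns=1.2, lo=0.5, hi=1.0):
    """Phase gain (1/ns) * (1 - phi*2/pi)**2, clipped to [lo, hi].
    Phase phi is in radians; lines (phi near pi/2) get less gain
    than edges (phi near 0), so line-like noise is attenuated."""
    g = (1.0 / ns) * (1.0 - phi * 2.0 / math.pi) ** 2
    return max(lo, min(hi, g))
```

Without the clipping (lo = 0) the raw function is symmetric around 90 degrees, as required of any admissible g2.sub.kn.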

[0090] The role of this function is to attenuate detail image pixels representing lines or points, because they are probably, especially at finer scales, noise. Lines have a phase close to 90 degrees, edges at 0 and 180 degrees for odd-ordered B.sub.n filters. The function g2.sub.kn( ) needs to have a lower amplification at a phase of 90 degrees than at 0 degrees. Since it is a multiplicative amplification function, it needs to be positive. In addition, this function also needs to be symmetric around 90 degrees, e.g. the same amplification has to be found for 90−x as for 90+x. Note that for even-ordered B.sub.n filters these functions need to be shifted by 90 degrees. The above function proved to perform very well, but it is clear that an infinite variety of functions can be found that meet these requirements.

[0091] Some examples of such functions are given below:

[00007] g2.sub.kn(ϕ.sub.kn(i,j))=(1/n.sub.s)*(1/sin(ϕ.sub.kn(i,j))); g2.sub.kn(ϕ.sub.kn(i,j))=(1/n.sub.s)*(1/tan(ϕ.sub.kn(i,j)).sup.2); g2.sub.kn(ϕ.sub.kn(i,j))=(1/n.sub.s)*cos(ϕ.sub.kn(i,j)).sup.2

[0092] In another embodiment, n.sub.s, as well as the lower and upper bound may be calculated based on particular parameters that can be derived from the image data (or detail image data). For instance, histogram values of b.sub.kn(i,j) and c.sub.kn(i,j) or a combination of such parameters may be applied in a mathematical function to derive the parameters above (n.sub.s, and lower and upper bound).

[0093] In one specific embodiment the third, orientation dependent, amplification function can be defined as a function:

[00008] g3.sub.kn(A.sub.k0 . . . A.sub.kN−1)=N if n=argmax.sub.n A.sub.kn, =0 elsewhere

[0094] In this way, A′.sub.kn will be non-zero only for the orientation n with the largest amplitude. This will reduce noise but tends to introduce many artifacts into the image, especially at corners and overlapping structures. This could be improved in a specific embodiment by using:

[00009] g3.sub.kn(A.sub.k0 . . . A.sub.kN−1)=(1−α)+Nα if n=argmax.sub.n A.sub.kn, =(1−α) elsewhere

[0095] In this function the direction with maximum amplitude, argmax.sub.n A.sub.kn, is calculated for each pixel, and g3.sub.kn is larger for n close to argmax.sub.n A.sub.kn. In addition, the sum of the amplification over all directions is kept the same.

[0096] Here α is a parameter. If α=0, g3.sub.kn is the identity function; if α=1, only the orientation with maximum amplitude is kept (i.e. identical to the previous embodiment). α controls the amount of ‘orientation smoothing’: a higher α gives more smoothing but more artifacts. These artifacts are more prominent at corners or overlapping structures. α can be different for different scales; typically, for finer scales, which contain more noise, a higher α is chosen.
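The orientation-dependent gain can be sketched as follows. The default α = 0.5 is a hypothetical value; the function name and the list-of-amplitudes interface are illustrative choices.

```python
def g3(amps, alpha=0.5):
    """Orientation gain: (1 - alpha) for every orientation, plus an
    extra N*alpha at the orientation with maximum amplitude, so the
    total gain summed over all N orientations is always N."""
    N = len(amps)
    n_max = max(range(N), key=lambda n: amps[n])
    return [(1 - alpha) + N * alpha if n == n_max else (1 - alpha)
            for n in range(N)]
```

Setting α = 0 yields the identity gain for all orientations; α = 1 keeps only the maximum-amplitude orientation, as described above.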

[0097] In addition, this function is not completely rotationally invariant, but only quasi rotationally invariant (only for rotations of n/N*π). In an improved embodiment, this can be solved by interpolation. In a specific embodiment one could find n.sub.max=argmax.sub.n A.sub.kn with sub-integer accuracy by using e.g. quadratic interpolation. In another embodiment, n.sub.max is found with sub-integer accuracy by calculating amplitudes for more rotations than only n*π/N.

[00012] g3_{kn}(A_{k0} \ldots A_{kN-1}) = \begin{cases} (1-\alpha) + N\alpha\,(\lceil n_{max}\rceil - n_{max}) & \text{if } n = \lfloor n_{max}\rfloor \\ (1-\alpha) + N\alpha\,(n_{max} - \lfloor n_{max}\rfloor) & \text{if } n = \lceil n_{max}\rceil \\ (1-\alpha) & \text{elsewhere} \end{cases}

[0098] In another embodiment, the steering equation, as for instance given in Freeman and Adelson, can be used instead of interpolation, e.g. for a 5.sup.th order polynomial basis filter:

[00013] g3_{kn}(A_{k0} \ldots A_{kN-1}) = (1-\alpha) + N\,\alpha\,k_n(\theta), \quad \text{with } k_n(\theta) = \tfrac{1}{6}\bigl(2\cos(\theta-\theta_n) + 2\cos(3(\theta-\theta_n)) + 2\cos(5(\theta-\theta_n))\bigr), \quad \theta = \frac{n_{max}\,\pi}{N}, \quad \theta_n = \frac{n\,\pi}{N}
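The steering weight k_n(θ) for the 5th-order case can be sketched as follows (assuming the cosine-series form above; the helper names are illustrative):

```python
import math

def k_n(theta, theta_n):
    """Steering interpolation weight for a 5th-order (odd) polynomial
    basis filter, following the Freeman-Adelson steering equation."""
    d = theta - theta_n
    return (2.0 * math.cos(d)
            + 2.0 * math.cos(3.0 * d)
            + 2.0 * math.cos(5.0 * d)) / 6.0

def g3_steered(alpha, N, n, n_max):
    """(1 - alpha) + N * alpha * k_n(theta), with theta = n_max*pi/N
    and theta_n = n*pi/N as in the text."""
    return (1.0 - alpha) + N * alpha * k_n(n_max * math.pi / N,
                                           n * math.pi / N)
```

At θ = θ_n the weight is 1, so when n_max is an integer the maximum-amplitude orientation receives the full boost (1 − α) + Nα, matching the interpolation-free variant.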

[0099] It is clear that many functions can be constructed wherein amplitudes in orientations n close to the maximum amplitude orientation n.sub.max are more enhanced than other orientations.

[0100] In another embodiment, multiple local maximum directions could be found and enhanced. This could be advantageous at locations with multiple orientations, such as crossing structures.

[0101] In another embodiment, one could make α a function of the spread of the main orientations of A.sub.kn (=argmax.sub.n A.sub.kn) in a M×M neighborhood around pixel (i,j). The goal is to reduce α at corners and overlapping structures, and thus reduce artefacts. These points typically have neighboring points with many different orientations and thus a large spread, whereas points on clearly oriented lines and edges have a much smaller spread. In one specific and preferred embodiment one could calculate the mean of all orientations in a M×M neighborhood:

[00014] R = \sqrt{\Bigl(\sum_{i\in M\times M} w_i \sin\bigl(\tfrac{2\pi}{N}\arg\max_n A_{kn}\bigr)\Bigr)^{2} + \Bigl(\sum_{i\in M\times M} w_i \cos\bigl(\tfrac{2\pi}{N}\arg\max_n A_{kn}\bigr)\Bigr)^{2}}

[0102] where w.sub.i is a weight such that Σ.sub.i∈M×M(w.sub.i)=1 (e.g. for 3×3: w.sub.i=1/9).

[0103] R equals 1 (R=1) in the case that the angles (2π/N)·argmax.sub.n A.sub.kn are the same for all pixels in M×M. R equals 0 (R=0) in the case that the angles are distributed evenly along the unit circle.

[0104] One could define the circular standard deviation stdev as:

stdev = \sqrt{-2\,\log(R)}

[0105] In a specific and preferred embodiment, one could make α dependent on this stdev. If stdev is close to zero, one wants maximal focus and thus a large α; otherwise a smaller α is preferred. It is clear that many variations with similar functionality exist; in a specific and preferred embodiment:

[00016] \alpha = \alpha_0 \exp\left(-\frac{stdev^{2}}{2\sigma_0^{2}}\right)

[0106] wherein suitable values are:

[00017] \sigma_0 = \frac{\pi}{N}

[0107] α.sub.0=0.5 at scale 0, 0.15 at other scales.
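The circular-spread computation and the resulting α can be sketched as follows (a sketch of the formulas above; the clamping of R and the small epsilon guarding log(0) are added numerical-safety assumptions):

```python
import math

def alpha_from_orientations(orientations, N, alpha0=0.5, weights=None):
    """Map the circular spread of dominant orientations in an M x M
    neighbourhood to the smoothing parameter alpha.
    orientations: argmax_n A_kn for each neighbourhood pixel (0..N-1)."""
    if weights is None:
        weights = [1.0 / len(orientations)] * len(orientations)  # e.g. 1/9 for 3x3
    s = sum(w * math.sin(2.0 * math.pi / N * o)
            for w, o in zip(weights, orientations))
    c = sum(w * math.cos(2.0 * math.pi / N * o)
            for w, o in zip(weights, orientations))
    R = min(1.0, math.sqrt(s * s + c * c))   # 1: all equal, 0: evenly spread
    stdev = math.sqrt(-2.0 * math.log(max(R, 1e-12)))  # circular stdev
    sigma0 = math.pi / N
    return alpha0 * math.exp(-stdev ** 2 / (2.0 * sigma0 ** 2))
```

A uniform neighbourhood yields stdev ≈ 0 and thus α ≈ α.sub.0 (maximal focus), while evenly spread orientations drive α towards 0.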

[0108] In another embodiment, α or α.sub.0 depends on the gray value of the original image or on a segmentation mask. In medical radiographic images we found that it is advantageous to take different α.sub.0 for different tissue types, e.g. a larger α.sub.0 for bone. This could be defined by the original grey values, but this is often not a good approach since greyscale values may be similar for different tissue types.

[0109] Defining this α.sub.0 based on the final or an approximation of the final pixel value is often a better approach. This could be done during a second processing or by reconstruction of scale k+1. Another approach could be to use a segmentation map. This segmentation could be for instance a segmentation map of different tissue types, obtained by for instance a convolutional neural network.

[0110] In another specific embodiment one could train or fit a regression for:


g.sub.kn(A.sub.00 . . . A.sub.0N−1 . . . A.sub.K−1N−1,ϕ.sub.00 . . . ϕ.sub.0N−1 . . . ϕ.sub.K−1N−1)

[0111] One could apply this multiplicative amplification function in combination with other multiplicative amplification functions. In one specific embodiment, g.sub.kn is a combination of amplitude dependent amplification function g1.sub.kn(A.sub.kn) and a trained amplification function gREG.sub.kn. In this way one could use g1.sub.kn to modify the amplitude and thus the contrasts (as explained before), and train a function for gREG.sub.kn to reduce artifacts and/or noise:

[00018] g_{kn}(A_{00} \ldots A_{0N-1} \ldots A_{K-1N-1}, \phi_{00} \ldots \phi_{0N-1} \ldots \phi_{K-1N-1}) = g1_{kn}(A_{kn}) \cdot gREG_{kn}(A_{00} \ldots A_{K-1N-1}, \phi_{00} \ldots \phi_{K-1N-1})

[0112] In another embodiment, gREG.sub.kn is applied first to reduce noise and/or artifacts, yielding a processed amplitude (gREG.sub.kn*A.sub.kn); next, this processed amplitude is used in another amplification function, e.g. g1.sub.kn:

[00019] g_{kn}(A_{00} \ldots A_{0N-1} \ldots A_{K-1N-1}, \phi_{00} \ldots \phi_{0N-1} \ldots \phi_{K-1N-1}) = g1_{kn}(gREG_{kn} \cdot A_{kn}) \cdot gREG_{kn}(A_{00} \ldots A_{K-1N-1}, \phi_{00} \ldots \phi_{K-1N-1})

[0113] In order to train this regression gREG.sub.kn, a training set of original images X(i,j) is collected to which artificial noise or artefacts are added to obtain a set of artificial images X.sub.n(i,j). This added noise may be Poisson noise, Gaussian noise, other noise or any combination thereof. Artefacts could be dead pixels/lines, a simulated overexposure, grid lines, ringing artefacts, scatter or other artefacts or any combination thereof. The original images X(i,j) may be a collection of high-dose images, phantom images or lower resolution images with less noise. For each X(i,j), A.sub.kn is calculated, and for each X.sub.n(i,j), An.sub.kn is calculated; next

[00020] gREG_{kn} = \frac{A_{kn}}{An_{kn}}

is calculated. Next a set of amplitudes and phases (An.sub.00 . . . An.sub.0N−1 . . . An.sub.K−1N−1, ϕn.sub.00 . . . ϕn.sub.0N−1 . . . ϕn.sub.K−1N−1) is computed for each X.sub.n(i,j) as explained before, and the wanted amplification gREG.sub.kn is fitted for each pixel to this set of amplitudes and phases. The goal is to train or fit a function f in such a way that:


f : (An_{00} \ldots An_{0N-1} \ldots An_{K-1N-1}, \phi n_{00} \ldots \phi n_{0N-1} \ldots \phi n_{K-1N-1}) \rightarrow gREG_{kn} \text{ such that } (gREG_{kn} \cdot An_{kn} - A_{kn})^{2} \text{ is minimized}

[0114] In another embodiment, one could take into account not only the amplitudes and phases for pixel (i,j), but also those of other pixels. In a specific embodiment, the pixels in a M×M neighbourhood are taken into account.

[0115] In order to train f one can use linear regression, Bayesian networks, principal component analysis, support vector machines, neural networks, or other machine learning algorithms. This function can then be applied to real data to reduce noise/artefacts.
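The per-pixel regression target and a deliberately minimal least-squares fit can be sketched as follows (illustrative names and a one-parameter model only; a real implementation would use the full set of amplitudes and phases as features and one of the regressors listed above):

```python
def greg_targets(A, An, eps=1e-6):
    """Per-pixel regression target gREG_kn = A_kn / An_kn, computed
    from clean amplitudes A and their artificially degraded
    counterparts An (given here as flat lists of pixel values)."""
    return [a / max(an, eps) for a, an in zip(A, An)]

def fit_scale(features, targets):
    """Least-squares fit of a one-parameter model gREG ~ w * feature,
    standing in for the fuller regressors mentioned in the text."""
    num = sum(f * t for f, t in zip(features, targets))
    den = sum(f * f for f in features)
    return num / den if den else 0.0
```

The epsilon simply guards against division by a zero noisy amplitude; it is an added assumption, not part of the source.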

[0116] The high-pass filtered detail image h.sub.0 may be transformed into an enhanced high-pass filtered detail image h′.sub.0 by means of existing methods based on the pixel value, surrounding pixel values or calculated statistical measures, as further described in for instance EP1933273 B1 and in European applications EP19202349.7 and EP19209050.4. In another embodiment the values of (b.sub.kn, c.sub.kn) can be used to modify this high-pass filtered detail image h.sub.0. For example, for each pixel, the mean of the amplifications at scale 0 (g.sub.0n)

[00021] \frac{1}{N}\sum_{n=0}^{N-1} g_{0n}

can be taken to amplify the detail image pixel values h.sub.0(i,j). In another embodiment, a hybrid method is used, based on existing methods and the values of (b.sub.kn, c.sub.kn); e.g. the amplification can be defined as a weighted average of both techniques. In another embodiment, the processed detail image h′.sub.0 will not be calculated and will be left out of the reconstruction, because h.sub.0 may mainly contain noise.

[0117] The low-pass residual l.sub.K may be transformed into an enhanced low-pass residual l′.sub.K by means of state-of-the-art methods. One could for instance use techniques as described in EP1933273 B1 and in European applications EP19202349.7 and EP19209050.4. In a preferred embodiment, the detail images b.sub.kn are only calculated for the first scales, because orientation is not very important at the lower scales. The enhanced low-pass residual l′.sub.K is then obtained by for instance a multi-scale contrast enhancement algorithm based on statistical measures of translation differences as described in European application EP19209050.4. In another embodiment, the low-pass residual will not be processed at all (identity processing) or will be left out of the reconstruction.

[0118] After processing the detail images b.sub.kn, the processed detail images b′.sub.kn are obtained. Subsequently, the processed detail images are convolved with the inverse filter set B′.sub.n. Because the steerable pyramid is self-inverting, these inverse filters are derived from the original filters B.sub.n by reflecting their coefficients about the center. In order to reconstruct the processed image X′, all filtered processed detail images are summed up, including the residual low-pass version l′.sub.K and the filtered high-pass version h′.sub.0:

[00022] X' = l'_K + \sum_{k=0}^{K-1}\sum_{n=0}^{N-1} B'_n * b'_{kn} + H_0 * h'_0
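A 1-D sketch of this reconstruction (inverse filters obtained by reflecting the coefficients, summation of all filtered bands) might look like the following; the helper names and the zero-padded boundary handling are assumptions:

```python
def reflect(filt):
    """Inverse filter of a self-inverting pyramid: the original filter
    with its coefficients reflected about the centre."""
    return filt[::-1]

def convolve_same(signal, filt):
    """'Same'-size 1-D convolution with zero padding, standing in for
    the 2-D filtering of the reconstruction."""
    n, m = len(signal), len(filt)
    half = m // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(m):
            k = i + half - j
            if 0 <= k < n:
                acc += signal[k] * filt[j]
        out.append(acc)
    return out

def reconstruct(lK, details, filters, h0, H0):
    """X' = l'_K + sum over k,n of B'_n * b'_kn + H_0 * h'_0 (1-D)."""
    x = list(lK)
    for b_kn, B_n in zip(details, filters):
        for i, v in enumerate(convolve_same(b_kn, reflect(B_n))):
            x[i] += v
    for i, v in enumerate(convolve_same(h0, H0)):
        x[i] += v
    return x
```

With a trivial identity filter, the detail band passes through unchanged, which makes the summation structure of the reconstruction easy to verify.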

[0119] The detail images are modified starting from the lowest resolution detail image up to the finest level. Subsequently, the reconstruction algorithm is applied to the modified detail images b′.sub.kn, again starting with the lowest resolution detail image up to the finest level.

[0120] In addition, other enhancements may be applied to b.sub.kn prior to or after the application of the conversion operator g.sub.kn(i,j).

[0121] This could be generalized to more dimensions, and especially to 3D images X, as proposed by Freeman and Adelson. This can be advantageous in the processing of temporal data (e.g. fluoroscopic radiographic imaging) or 3D data sets such as CT, cone beam CT, digital tomosynthesis and MRI. 3D filters that can be written as N.sup.th order polynomials multiplied with a radial windowing function (e.g. a Gaussian) can be steered by a set of (N+1).sup.2 basis filters. In order to keep the computational effort low, the use of x-y-z separable filters is even more advantageous. Similar to Simoncelli and Freeman, such a set of steerable filters for use in a steerable pyramid can be computed. The Hilbert transform is used to calculate the conjugate detail images. The detail images and conjugate detail images are used to calculate an amplitude and phase for each orientation, and a nonlinear modification is applied before reconstruction.