METHOD OF DECOMPOSING A RADIOGRAPHIC IMAGE INTO SUB-IMAGES OF DIFFERENT TYPES

20220092785 · 2022-03-24

    Abstract

    Digital signal representations of sub-images are obtained by applying an optimization process in which a sum is minimized, the sum having a first term representing a measure of the consistency of the sum of the digital signal representations of the sub-images with said radiographic image, and a second term that is a sum of cost functions, each describing the type of one of said sub-images.

    Claims

    1. A method comprising: decomposing a digital signal representation of an image into a sum of sub-images of different image types selected from the group consisting of a radiographic image, a collimation area image, a bone image, a soft tissue image, a noise image, a scatter image, a heel effect representing image, and an implant image, and minimizing a first term representing a measure of the consistency of the sum of the sub-images with said image and a second term representing a sum of cost functions of the different sub-images, each describing the likelihood of the image being a member of the type of the sub-images, wherein different image processing is applied to said sub-images.

    2. The method according to claim 1 wherein said cost functions are weighted by a corresponding weight value.

    3. The method according to claim 1 wherein said cost function is obtained through the use of a neural network trained with images of said different types.

    4. The method according to claim 1 wherein said cost function is obtained through the use of a neural network trained with phantom images.

    5. The method according to claim 1 wherein said cost function is obtained through the use of a neural network trained with simulations of radiographic images.

    6. The method according to claim 1 wherein differently processed sub-images are combined to form a combined processed image.

    7. The method according to claim 1 wherein a classification task is performed based on one or more of said sub-images.

    8. The method according to claim 1 wherein a cost function for a sub-image represents the total variation of the first derivative of the signal representation of the image.

    9. The method according to claim 1 wherein said cost function represents a noise measure.

    10. The method according to claim 1 wherein said process is initialized with sub-images generated by a trained neural network.

    11. A computer program product adapted to carry out the method of claim 1 when run on a computer.

    12. A computer readable medium comprising computer executable program code adapted to carry out the steps of claim 1.

    13. A computer-readable medium storing processor-executable instructions that, when executed by a processor, configure the processor for: decomposing a digital signal representation of an image into a sum of sub-images of different image types selected from the group consisting of a radiographic image, a collimation area image, a bone image, a soft tissue image, a noise image, a scatter image, a heel effect representing image, and an implant image, and minimizing a first term representing a measure of the consistency of the sum of the sub-images with said image and a second term representing a sum of cost functions of the different sub-images, each describing the likelihood of the image being a member of the type of the sub-images, wherein different image processing is applied to said sub-images.

    14. The computer-readable medium according to claim 13 wherein said cost functions are weighted by a corresponding weight value.

    15. The computer-readable medium according to claim 13 wherein the processor-executable instructions comprise instructions for executing a neural network to obtain the cost function, the neural network trained with one or more of images of said different types, phantom images, and simulations of radiographic images.

    16. A computer program product comprising processor-executable instructions that, when executed by a processor, configure the processor for: decomposing a digital signal representation of an image into a sum of sub-images of different image types selected from the group consisting of a radiographic image, a collimation area image, a bone image, a soft tissue image, a noise image, a scatter image, a heel effect representing image, and an implant image, and minimizing a first term representing a measure of the consistency of the sum of the sub-images with said image and a second term representing a sum of cost functions of the different sub-images, each describing the likelihood of the image being a member of the type of the sub-images, wherein different image processing is applied to said sub-images.

    17. The computer program product according to claim 16 wherein said cost functions are weighted by a corresponding weight value.

    18. The computer program product according to claim 16 wherein the processor-executable instructions comprise instructions for executing a neural network to obtain the cost function, the neural network trained with one or more of images of said different types, phantom images, and simulations of radiographic images.

    Description

    DETAILED DESCRIPTION OF THE INVENTION

    [0028] In this invention, an image Im is decomposed into different sub-images Im.sub.i such that

    [00003] ‖Im − Σ_i Im_i‖ < ε   (2)

    where ε is a constant allowing a fault tolerance, and 0≤i<N, with N the number of sub-images Im.sub.i.

    [0029] The constraint in Eq. 2 could also be written as

    [00004] ‖Im − Σ_i Im_i‖ = 0

    in which case no faults are tolerated.

    [0030] For each sub-image Im.sub.i, a specialized image processing task P.sub.i or classification task D.sub.i could be designed, which might perform better than their counterparts P and D working on the original image Im.

    [0031] The inverse problem as defined in Eq. (2) is highly underdetermined.

    [0032] An infinite number of correct but random images Im.sub.i can be generated whose sum results in Im.

    [0033] To guarantee that each sub-image Im.sub.i corresponds to a target sub-class of images (e.g. bone images), a cost function L.sub.i is created which expresses prior knowledge for a given sub-image (e.g. characteristics of a typical bone image).

    [0034] An example of L.sub.i could be a smoothness constraint, a Total Variation constraint, a similarity metric with a prior image, etc.

    [0035] The inverse problem can thus be written as:

    [00005] ‖Im − Σ_i Im_i‖ + Σ_i β_i L_i(Im_i) < ε   (3)

    [0036] where the first term measures the consistency with the original image Im, and the second term sums the cost functions L.sub.i of the different sub-images Im.sub.i, each with a weight β.sub.i.

    Design of Cost Functions L.sub.i

    [0037] The cost functions L.sub.i describe how well the sub-image Im.sub.i fits into the desired category i.

    [0038] It is of critical importance that the cost functions L.sub.i efficiently describe the desired category, as otherwise the decomposition of Im will result in meaningless sub-images Im.sub.i.

    [0039] For example, if Im.sub.i should represent the collimator, the corresponding L.sub.i could enforce a piecewise constant image, consisting of only 2 intensities (corresponding to metal and air).

    [0040] A possible cost function to express that the values of Im.sub.i should belong to a discrete set of J values a.sub.j, with j∈[1 . . . J], is


    L.sub.i(Im.sub.i)=Σ.sub.x,y min.sub.j|Im.sub.i,x,y−a.sub.j|, where

    [0041] a.sub.j represents a value in the image that is to be expected based on prior knowledge.
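The discrete-level cost above can be sketched in NumPy. This is an illustrative implementation, not the patented one; `levels` stands for the expected values a.sub.j:

```python
import numpy as np

def discrete_levels_cost(im_i, levels):
    # L_i(Im_i) = sum_{x,y} min_j |Im_i(x,y) - a_j|:
    # low when every pixel is close to one of the expected levels a_j.
    levels = np.asarray(levels, dtype=float)
    # distance of each pixel to every level a_j -> shape (H, W, J)
    dist = np.abs(im_i[..., None] - levels[None, None, :])
    return dist.min(axis=-1).sum()
```

For a collimator sub-image with expected levels [0, a.sub.1], the cost is zero exactly when every pixel takes one of the two values.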

    [0042] As an example, in the case of Im.sub.i representing the collimator, a.sub.0 could be 0 and a.sub.1 could be set equal to a predefined value. A possible method to derive a.sub.1 could be to acquire a representative flat-field exposure, containing the collimator shape. After log transform of the image, a.sub.1 could e.g. be derived as the difference between the average pixel values in the non-collimated and collimated areas of the image.
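The flat-field derivation of a.sub.1 might look as follows; the boolean `collimated_mask`, marking pixels under the collimator, is an assumed input:

```python
import numpy as np

def derive_a1(log_flatfield, collimated_mask):
    # a_1 = mean pixel value in the non-collimated area minus the mean
    # in the collimated area, both taken on the log-transformed exposure.
    return float(log_flatfield[~collimated_mask].mean()
                 - log_flatfield[collimated_mask].mean())
```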

    [0043] In another implementation, a.sub.j could be derived based on image statistics of Im.sub.i itself, e.g. each a.sub.j represents one of the most occurring pixel values in Im.sub.i. In the case of Im.sub.i representing the collimator, a.sub.0 could be set to 0 and a.sub.1 would represent the pixel value with the highest occurrence based on a histogram analysis of Im.sub.i.

    [0044] Another way to express piecewise constancy in a cost function is


    L.sub.i(Im.sub.i)=Σ.sub.x,y|Im.sub.i,x,y−Im.sub.i,x+1,y|+|Im.sub.i,x,y−Im.sub.i,x,y+1| or,

    [0045] using the L.sub.2 norm,

    [00006] L_i(Im_i) = Σ_{x,y} (Im_{i,x,y} − Im_{i,x+1,y})² + (Im_{i,x,y} − Im_{i,x,y+1})².
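Both piecewise-constancy costs can be sketched in NumPy. This is an illustrative implementation; boundary pixels without an x+1 or y+1 neighbour are simply skipped, which is an assumption:

```python
import numpy as np

def tv_l1(im_i):
    # L1 total variation: sum of absolute horizontal and vertical
    # finite differences; small for piecewise-constant images.
    return (np.abs(np.diff(im_i, axis=1)).sum()
            + np.abs(np.diff(im_i, axis=0)).sum())

def tv_l2(im_i):
    # L2 variant: squared finite differences, evaluated where both
    # the x+1 and y+1 neighbours exist.
    dx = im_i[:-1, :-1] - im_i[:-1, 1:]   # Im(x,y) - Im(x+1,y)
    dy = im_i[:-1, :-1] - im_i[1:, :-1]   # Im(x,y) - Im(x,y+1)
    return (dx**2 + dy**2).sum()
```

A perfectly constant image has zero cost under both variants; each intensity step raises the cost in proportion to its height (L1) or its square (L2).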

    [0046] Another term, which could be added to most cost functions, is the prior knowledge that all pixel values of Im.sub.i should be positive. This can be expressed e.g. as


    L.sub.i(Im.sub.i)=Σ.sub.x,y(|Im.sub.i,x,y|−Im.sub.i,x,y)
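The positivity term above is a one-liner in NumPy (an illustrative sketch):

```python
import numpy as np

def positivity_cost(im_i):
    # sum_{x,y} (|Im_i(x,y)| - Im_i(x,y)): zero when all pixels are
    # non-negative, and 2*|v| for each negative pixel of value v.
    return (np.abs(im_i) - im_i).sum()
```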

    [0047] In general, for any of the desired categories Im.sub.i, a cost function L.sub.i could be hand crafted.

    [0048] Another way to obtain a suitable cost function L.sub.i is through the use of neural networks.

    [0049] In recent years, much progress has been made in the domain of artificial intelligence. Powerful convolutional neural networks (CNNs) are nowadays capable of classifying images of a vast variety of subjects.

    [0050] A CNN could be trained to classify images into the different classes of sub-images.

    [0051] The final outcome of this CNN could be a vector of dimension N+1, in which each element represents the match score for sub-category i, and the last element the score for not belonging to any of the N categories.

    [0052] L.sub.i can thus be written as a function of the resulting output vector of this CNN:


    L.sub.i(Im)=1−CNN(Im).sub.i
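This CNN-based cost could be wired up as below. `toy_classifier` is a hypothetical stand-in for the trained network; any callable returning an (N+1)-vector of match scores would do:

```python
import numpy as np

def cnn_cost(im, classifier, i):
    # L_i(Im) = 1 - CNN(Im)_i: low when the classifier assigns a high
    # match score to sub-category i.
    return 1.0 - float(np.asarray(classifier(im))[i])

def toy_classifier(im):
    # Hypothetical stand-in for a trained CNN: two sub-image classes
    # scored from the mean intensity, plus a 'none of the above' score.
    s = float(np.clip(im.mean(), 0.0, 1.0))
    return np.array([s, 1.0 - s, 0.0])
```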

    [0053] The CNN could be trained with relevant examples of the different sub-categories. A method to obtain these images is to acquire them experimentally, e.g. acquiring images without any object exposed to obtain a relevant electronic noise image, or acquiring images with only a collimator, or using a phantom which only consists of material from a particular sub-class.

    [0054] Another method to obtain training images for this CNN is to generate projection images virtually, e.g. using CT scans of existing patients/objects.

    [0055] Existing algorithms for segmentation of tissue types in CT scans could be used to segment the CT scan first. These segmentation algorithms are in general easier to develop, because different structures do not overlap as they do in X-ray projection images.

    [0056] Subsequently, X-ray projection images Im.sub.i of the different sub-classes could be simulated from the CT scans, in which only the relevant tissue type i is retained per simulation.

    [0057] In another embodiment, prior knowledge could be integrated in the cost function using an auto-encoder. A denoising auto-encoder can be trained to represent a sub-class of images Im.sub.i, e.g. a set of collimation images, bone images, etc. A distance metric could subsequently be calculated between the original Im.sub.i and the output of the auto-encoder, assuming that if the image Im.sub.i truly belongs to the sub-class on which the auto-encoder is trained, the distance will be low. This distance could be used as a cost function L.sub.i.
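The auto-encoder distance cost might be sketched as follows. The Euclidean norm is one possible distance metric, and `toy_autoencoder` is a hypothetical stand-in for a trained denoising auto-encoder:

```python
import numpy as np

def autoencoder_cost(im_i, autoencoder):
    # Distance between Im_i and its auto-encoder reconstruction; a small
    # distance suggests Im_i belongs to the trained sub-class.
    return float(np.linalg.norm(im_i - autoencoder(im_i)))

def toy_autoencoder(im):
    # Hypothetical stand-in for an auto-encoder trained on two-level
    # (e.g. collimation) images: snap every pixel to 0 or 1.
    return np.where(im < 0.5, 0.0, 1.0)
```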

    Optimization

    [0058] Once the cost functions L.sub.i are defined, the inverse problem in Eq. (3) can be solved to obtain Im.sub.i. Different strategies could be followed to solve this inverse problem.

    [0059] In a first embodiment, an initial estimate Im.sub.i,0 is generated. This initial estimate might be a random image, a blank (zero) image, a low-pass filtered version of the original image, the result of another image decomposition algorithm (such as a virtual dual energy algorithm, which splits an image Im into a bone and a soft tissue image), the output of a trained neural network, etc. By choosing β.sub.i=0, the initial estimate can be kept for some sub-images.

    [0060] Then, the different images Im.sub.i are computed iteratively, wherein in each iteration n a new estimate Im.sub.i,n+1 is computed using the previous estimate Im.sub.i,n and a partial derivative image D.sub.i,n:

    [00007] D_{i,n} = ∂L_i(Im_{i,n}) / ∂Im_{i,x,y}

    Im_{i,n+1} = Im_{i,n} + λ_i D_{i,n}

    [0061] with λ.sub.i a weight. However, as n progresses, the sum of the sub-images

    [00008] Σ_i Im_{i,n}

    [0062] will most likely start to diverge from the initial image Im.

    [0063] Therefore, image consistency operations are needed to ensure that the sum of the sub-images Im.sub.i again results in the initial image Im.

    [0064] This could be achieved in various ways, e.g. by re-distributing the difference over the different components Im.sub.i:


    Im_{i,n+1} = Im_{i,n} + λ_i D_{i,n}

    Im_{j≠i,n+1} = Im_{j,n} − (λ_i / N) D_{i,n}
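The iteration with a consistency step might be sketched as below. This is a simplified illustration: numerical gradients stand in for the analytic derivative D_{i,n}, the descent sign, the uniform initial estimate Im/N, and the equal redistribution of the whole residual over all sub-images are assumptions:

```python
import numpy as np

def numerical_gradient(cost, im_i, eps=1e-6):
    # Central-difference approximation of dL_i/dIm_i per pixel.
    grad = np.zeros_like(im_i)
    it = np.nditer(im_i, flags=['multi_index'])
    for _ in it:
        idx = it.multi_index
        plus, minus = im_i.copy(), im_i.copy()
        plus[idx] += eps
        minus[idx] -= eps
        grad[idx] = (cost(plus) - cost(minus)) / (2 * eps)
    return grad

def decompose(im, costs, betas, lam=0.1, n_iter=100):
    # Gradient step on each sub-image, then redistribute the residual
    # Im - sum_i Im_i so the sum stays consistent with the original.
    N = len(costs)
    subs = [im / N for _ in range(N)]          # simple initial estimate
    for _ in range(n_iter):
        for i in range(N):
            grad = numerical_gradient(costs[i], subs[i])
            subs[i] = subs[i] - lam * betas[i] * grad   # descent step
        residual = im - sum(subs)              # consistency operation
        for i in range(N):
            subs[i] = subs[i] + residual / N
    return subs
```

With two quadratic costs pulling the sub-images toward 0 and 1 respectively, the scheme splits a constant image accordingly while keeping the sum equal to the input.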

    [0065] Another approach to ensure consistency is to add an additional sub-image Im.sub.N, which is defined as

    [00009] Im_N = Im − Σ_{i=0}^{N−1} Im_i.

    [0066] The optimization problem thus reduces to

    [00010] Σ_{i=0}^{N} β_i L_i(Im_i) < ε

    in which L.sub.N could be a simple norm, or another measure of the error.
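The residual sub-image of paragraph [0065] is straightforward to form (an illustrative sketch): by construction, adding it back reproduces the original image exactly.

```python
import numpy as np

def residual_sub_image(im, subs):
    # Im_N = Im - sum_{i=0}^{N-1} Im_i: the last sub-image absorbs the
    # residual, so the full decomposition sums exactly to Im.
    return im - sum(subs)
```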

    [0067] Having described in detail preferred embodiments of the current invention, it will now be apparent to those skilled in the art that numerous modifications can be made therein without departing from the scope of the invention as defined in the appended claims.