Model-based image reconstruction using analytic models learned by artificial neural networks
11354829 · 2022-06-07
CPC classification: G06F18/214 · G06T2211/441 · G06F18/217 · G06T11/005 · G06T11/006 (all PHYSICS)
Abstract
The present disclosure is related to methods and systems for image reconstruction including accelerated forward transformation with an Artificial Neural Network (ANN).
Claims
1. A method of image reconstruction, the method comprising recursive acts of: A) checking whether image data suffices predefined quality criteria, and when the image data does not suffice the predefined quality criteria, continuing with act C); C) generating predicted signal data based on the image data by accelerated forward transformation with an Artificial Neural Network (ANN) trained on providing an estimation for a solution of signal evolution equations; D) changing the image data in a signal domain and/or an image domain based on the predicted signal data and raw data received from an imaging device, wherein the changing of the image data comprises the acts of: D1) calculating a first cost value based on the predicted signal data and the raw data in the signal domain; D2) adapting the predicted signal data based on the first cost value in the signal domain; and D3) reconstructing image data based on the adapted predicted signal data; and continuing with act A).
2. The method of claim 1, wherein act D) further comprises the acts of: D4) calculating a second cost value based on the predicted signal data and the raw data; and D5) adjusting the image data based on the second cost value in the image domain.
3. The method of claim 1, further comprising, prior to recursive act A), an initial act of: generating initial image data based on the received raw data from the imaging device.
4. The method of claim 1, further comprising, after the recursive act A), a final act of: outputting the image data when the image data does suffice the quality criteria in act A).
5. The method of claim 1, further comprising an act of: B) regularizing the image data after act A) when the image data does not suffice the predefined quality criteria, wherein, in act C), the predicted signal data is generated based on the regularized image data.
6. The method of claim 1, further comprising an act of: E) adapting the raw data into error-corrected raw data based on the first cost value after act D1) providing for minimizing an error in the raw data due to specific conditions during image acquisition with the imaging device, wherein, in act D1) of a next iteration, the predicted signal data is compared to the adapted raw data.
7. The method of claim 6, wherein the act D3) of reconstructing the image data based on the error-corrected raw data and/or the adapted predicted signal data is effected by a second ANN trained on fast image reconstruction from the error-corrected raw data and/or the adapted predicted signal data.
8. The method of claim 1, wherein the act D3) of reconstructing the image data based on the adapted predicted signal data is effected by accelerated Bloch simulations readjusting the predicted signal returned by the ANN in act C) to eliminate gradient trajectory errors and field inhomogeneity and wherein an inverse Fast Fourier Transform (iFFT) is used for reconstructing the image data.
9. The method of claim 1, wherein, in the act D3), resampling is performed on the adapted predicted signal data before reconstructing the image data.
10. A method of image reconstruction, the method comprising recursive acts of: A) checking whether image data suffices predefined quality criteria, and when the image data does not suffice the predefined quality criteria, continuing with act C); C) generating predicted signal data based on the image data by accelerated forward transformation with an Artificial Neural Network (ANN) trained on providing an estimation for a solution of signal evolution equations; D) changing the image data in a signal domain and/or an image domain based on the predicted signal data and raw data received from an imaging device; and continuing with act A), wherein the ANN is trained via recursive training acts comprising: t1) calculating provisional predicted signal data corresponding to a set of training image data from a multitude of sets of training image data and by using values of internal connection weights of the ANN at a current iteration; t2) calculating a deviation between the provisional predicted signal data and a set of training signal data for a multitude of sets of training signal data corresponding to the set of training image data in act t1); t3) checking whether the deviation is equal to or smaller than a predefined abort criterion; t4) readjusting the values of the internal connection weights based on the deviation in case the deviation is not equal to or smaller than the predefined abort criterion in act t3); and continuing with act t1).
11. The method of claim 10, further comprising, prior to the recursive training act t1), an initial training act of: initializing the internal connection weights of the ANN with non-zero values.
12. The method of claim 10, further comprising, prior to the recursive training act t1), initial training acts of: providing the multitude of sets of training image data; initializing a discretization grid; calculating a set of training signal data for each set of training image data; and storing the set of training signal data together with the respective set of training image data.
13. A system for image reconstruction, the system comprising: a reconstruction controller configured to check whether image data suffices predefined quality criteria; an Artificial Neural Network (ANN) trained on providing an estimation for a solution of signal evolution equations configured to generate predicted signal data by accelerated forward transformation based on the image data when the image data does not suffice the predefined quality criteria; and an image changing module configured to change the image data in a signal domain and/or an image domain based on the predicted signal data and raw data received from an imaging device, wherein the change of the image data comprises a calculation of a first cost value based on the predicted signal data and the raw data in the signal domain; an adaption of the predicted signal data based on the first cost value in the signal domain; and a reconstruction of the image data based on the adapted predicted signal data, wherein the system is configured to recursively use the reconstruction controller, ANN, and image changing module until the image data suffices the predefined quality criteria.
14. The system of claim 13, wherein the image changing module is further configured to change the image data by: a calculation of a second cost value based on the predicted signal data and the raw data; and an adjustment of the image data based on the second cost value in the image domain.
15. The system of claim 13, wherein the reconstruction controller is further configured to, prior to checking whether the image data suffices the predefined quality criteria: generate initial image data based on the received raw data from the imaging device.
16. The system of claim 13, further comprising: a second ANN trained on fast image reconstruction from error-corrected raw data and/or the adapted predicted signal data, wherein the second ANN is configured to effect the reconstruction of the image data based on the error-corrected raw data and/or the adapted predicted signal data.
17. The system of claim 13, wherein the ANN is configured to be trained by: a calculation of a provisional predicted signal data corresponding to a set of training image data from a multitude of sets of training image data and by using values of internal connection weights of the ANN at a current iteration; a calculation of a deviation between the provisional predicted signal data and a set of training signal data for a multitude of sets of training signal data corresponding to the set of training image data in the calculation of the provisional predicted signal data; a check on whether the deviation is equal to or smaller than a predefined abort criterion; a readjustment of the values of the internal connection weights based on the deviation in case the deviation is not equal to or smaller than the predefined abort criterion in the check; and a continuation of the calculation of the provisional predicted signal data.
18. An imaging device comprising: a system for image reconstruction, the system having: a reconstruction controller configured to check whether image data suffices predefined quality criteria; an Artificial Neural Network (ANN) trained on providing an estimation for a solution of signal evolution equations configured to generate predicted signal data by accelerated forward transformation based on the image data when the image data does not suffice the predefined quality criteria; and an image changing module configured to change the image data in a signal domain and/or an image domain based on the predicted signal data and raw data received from an imaging device, wherein the change of the image data comprises a calculation of a first cost value based on the predicted signal data and the raw data in the signal domain; an adaption of the predicted signal data based on the first cost value in the signal domain; and a reconstruction of the image data based on the adapted predicted signal data, wherein the system is configured to recursively use the reconstruction controller, ANN, and image changing module until the image data suffices the predefined quality criteria.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The present disclosure and its technical field are subsequently explained in further detail by exemplary embodiments shown in the drawings. The exemplary embodiments provide a better understanding of the present disclosure and are in no case to be construed as limiting the scope of the present disclosure. Particularly, it is possible to extract aspects of the subject-matter described in the figures and to combine them with other components and findings of the present description or figures, if not explicitly described differently. Equal reference signs refer to the same objects, such that explanations from other figures may be supplementally used.
DETAILED DESCRIPTION
(8) In
(9) First, in the act of generating 2, initial image data I.sub.init is generated based on the raw data D.sub.raw received from the imaging device. Then, in act A) of checking 3, it is checked whether the image data I.sub.adapt, I.sub.init suffices the predefined quality criteria Q. Only during the first execution of act A) is the initial image data I.sub.init checked; in the following iterations the reconstructed image data I.sub.adapt is checked. In case the image data I.sub.adapt, I.sub.init does not suffice the predefined quality criteria Q, the method continues with act B) of regularizing 4.
(10) The image data I.sub.adapt, I.sub.init is regularized in the act B) of regularizing 4 via edge-preserving filtering and/or modelling physical noise effects and/or reducing noise while preserving edges and/or empirical corrections.
(11) In the following act C) of generating 5 predicted signal data D.sub.pre is generated based on the regularized image data I.sub.reg by accelerated forward transformation with an Artificial Neural Network (ANN). The ANN is trained on providing an estimation for a solution of signal evolution equations like the Bloch equations for MR imaging.
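The accelerated forward transformation of act C) can be sketched numerically. The following is a minimal illustration only: a small randomly initialized tanh MLP stands in for the trained ANN, and the layer sizes, the three per-pixel tissue parameters, and the signal length are all assumptions made for the sketch, not choices taken from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer MLP surrogate for the signal evolution equations:
# maps per-pixel tissue parameters (e.g., stand-ins for T1, T2, proton
# density) to a predicted signal timecourse of length n_t.
n_params, n_hidden, n_t = 3, 32, 8
W1 = rng.standard_normal((n_params, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_t)) * 0.1
b2 = np.zeros(n_t)

def ann_forward(image_params):
    """Accelerated forward transformation I -> D_pre (act C)."""
    h = np.tanh(image_params @ W1 + b1)   # hidden-layer activations
    return h @ W2 + b2                    # predicted signal data D_pre

image = rng.random((16 * 16, n_params))   # flattened parameter maps
d_pre = ann_forward(image)
print(d_pre.shape)                        # (256, 8)
```

A single forward pass replaces a costly numerical integration of the signal evolution equations, which is what makes the recursive loop computationally feasible.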
(12) The predicted signal data D.sub.pre is used together with the raw data D.sub.raw, D.sub.raw,adapt in the act D1) of calculating 6.1 the first cost value C.sub.1 for calculating the latter in the signal domain with the following equation:
C.sub.1=∥D.sub.pre−D.sub.raw∥.sub.2+α.sub.1P.sub.1(I)+β.sub.1P.sub.2(D.sub.pre)
(13) In this equation, P.sub.1 and P.sub.2 are predefined penalty functions (known from the prior art), α.sub.1 and β.sub.1 are parameters for controlling the corresponding amounts of penalization (e.g., experimentally fine-tuned to meet the predefined image quality criteria), I is the (e.g., current) image data, and ∥ ∥.sub.2 is the L2-norm.
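The first cost value can be computed directly from this equation. In the sketch below, the concrete penalty functions are assumptions for illustration only, since the disclosure leaves P.sub.1 and P.sub.2 unspecified: P.sub.1 is taken as an anisotropic total variation of the image and P.sub.2 as an L1 penalty on the predicted signal.

```python
import numpy as np

def total_variation(image):
    """Illustrative penalty P1: anisotropic total variation of the image."""
    return (np.abs(np.diff(image, axis=0)).sum()
            + np.abs(np.diff(image, axis=1)).sum())

def cost_c1(d_pre, d_raw, image, alpha1=0.01, beta1=0.001):
    """C1 = ||D_pre - D_raw||_2 + alpha1 * P1(I) + beta1 * P2(D_pre)."""
    data_term = np.linalg.norm(d_pre - d_raw)   # L2-norm data fidelity
    p2 = np.abs(d_pre).sum()                    # illustrative penalty P2
    return data_term + alpha1 * total_variation(image) + beta1 * p2

# When prediction and raw data agree and the image is flat, C1 vanishes.
print(cost_c1(np.zeros(4), np.zeros(4), np.zeros((3, 3))))  # 0.0
```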
(14) In the act E) of adapting 7, the raw data D.sub.raw is adapted into error-corrected raw data D.sub.raw,adapt based on the first cost value C.sub.1 after act D1). This provides for minimizing an error in the raw data D.sub.raw due to specific conditions during image acquisition with the imaging device. In the act D1) of calculating 6.1 of the next iteration (n+1), the then-current predicted signal data D.sub.pre is compared to the adapted raw data D.sub.raw,adapt.
(15) Next, in the act D2) of adapting 6.2 the predicted signal data D.sub.pre is adapted based on the first cost value C.sub.1 in the signal domain according to the following equation:
D.sub.adapt=D.sub.pre−γ.sub.1∇C.sub.1
(16) In this equation, γ.sub.1 is a factor for controlling the speed and stability (convergence) of the iterative process and ∇ is the gradient.
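The update of act D2) is a plain gradient step in the signal domain. The sketch below is a simplified assumption: only the L2 data term of C.sub.1 is differentiated, whose gradient with respect to D.sub.pre is (D.sub.pre − D.sub.raw)/∥D.sub.pre − D.sub.raw∥.sub.2; the penalty terms of C.sub.1 would contribute additional gradient terms.

```python
import numpy as np

def adapt_signal(d_pre, d_raw, gamma1=0.1):
    """Act D2 sketch: D_adapt = D_pre - gamma1 * grad C1 (data term only)."""
    residual = d_pre - d_raw
    norm = np.linalg.norm(residual)
    if norm == 0.0:
        return d_pre  # predicted signal already matches the raw data
    return d_pre - gamma1 * residual / norm

print(adapt_signal(np.array([1.0, 0.0]), np.zeros(2)))  # [0.9 0. ]
```

The factor γ.sub.1 trades off convergence speed against stability, exactly as stated above: too large a step can overshoot, too small a step slows the recursion.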
(17) The adapted predicted signal data D.sub.adapt is used in act D3) of reconstructing 6.3, where the image data I.sub.adapt for the next iteration (n+1) is reconstructed. The act D3) can be effected by a second ANN trained on fast image reconstruction from the error-corrected raw data D.sub.raw,adapt, or by accelerated Bloch simulations readjusting the predicted signal returned by the ANN in the act C) to eliminate gradient trajectory errors and field inhomogeneity, wherein an inverse Fast Fourier Transform (iFFT) is used for reconstructing the image data I.sub.adapt. Additionally, or alternatively, in the act D3), resampling is performed on the adapted predicted signal data D.sub.adapt before reconstructing the image data I.sub.adapt.
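The iFFT variant of act D3) can be shown in a few lines. This sketch assumes the adapted predicted signal data already lies on a Cartesian k-space grid; for non-Cartesian trajectories, the resampling mentioned above would first regrid the data before the inverse transform.

```python
import numpy as np

def reconstruct_image(d_adapt_kspace):
    """Act D3 sketch: reconstruct I_adapt from adapted predicted signal
    data on a Cartesian grid via an inverse FFT."""
    return np.fft.ifft2(np.fft.ifftshift(d_adapt_kspace))

# Round trip: an image transformed to k-space and back is recovered.
img = np.random.default_rng(1).random((8, 8))
kspace = np.fft.fftshift(np.fft.fft2(img))
recon = reconstruct_image(kspace)
print(np.allclose(recon.real, img))  # True
```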
(18) As soon as the image quality criteria in the act A) of a subsequent iteration (n+1) are met by the reconstructed image data I.sub.adapt (or already by the initial image data I.sub.init) the iteration is aborted and the act of outputting 8 is executed where the image data I.sub.adapt (or I.sub.init) is output for example to a printer or a display.
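The interplay of acts A), C), D1), D2), and D3) can be sketched end-to-end with a toy linear example. Everything here is an illustrative assumption: a Cartesian FFT stands in for the trained ANN forward model, the quality criterion is a simple threshold on the first cost value, and the step size of 0.5 is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.random((8, 8))
d_raw = np.fft.fft2(truth)  # raw data from the "scanner"

def forward(image):
    """Act C stand-in: FFT instead of a trained ANN forward model."""
    return np.fft.fft2(image)

# Initial image data from noisy raw data.
image = np.fft.ifft2(d_raw + rng.normal(0, 0.1, d_raw.shape)).real
for n in range(50):
    d_pre = forward(image)                    # act C
    c1 = np.linalg.norm(d_pre - d_raw)        # act D1: first cost value
    if c1 < 1e-3:                             # act A: quality criterion met
        break
    d_adapt = d_pre - 0.5 * (d_pre - d_raw)   # act D2: gradient-like step
    image = np.fft.ifft2(d_adapt).real        # act D3: reconstruction
print(c1 < 1e-3)  # True
```

In this linear toy the signal-domain error halves in every iteration, so the recursion terminates after a handful of passes; with a nonlinear ANN forward model the same loop structure applies, but convergence depends on the step size γ.sub.1 discussed above.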
(19) In
(20) In the act D4) of calculating 6.4, the second cost value C.sub.2 is calculated with the following equation:
C.sub.2=∥D.sub.pre−D.sub.raw∥.sub.2+α.sub.2P.sub.1(I)+β.sub.2P.sub.2(D.sub.pre)
(21) In this equation, α.sub.2 and β.sub.2 are parameters for controlling the corresponding amounts of penalization (e.g., experimentally fine-tuned to meet the predefined image quality criteria).
(22) In the following act D5) of adjusting 6.5 the image data I.sub.adapt, I.sub.init, I.sub.reg is adjusted based on the second cost value C.sub.2 in the image domain according to the following equation:
I.sub.n+1=I.sub.n−γ.sub.2∇C.sub.2
(23) In this equation, I.sub.n+1 is the image data for the following iteration (n+1), γ.sub.2 is a factor for controlling the speed and stability (e.g., convergence) of the iterative process and ∇ is the gradient.
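The image-domain update of act D5) can be sketched analogously to the signal-domain step. The assumptions here are for illustration only: the data term of C.sub.2 is taken as the squared L2-norm ∥F(I) − D.sub.raw∥².sub.2 with an orthonormal FFT F standing in for the ANN forward model, so its gradient is 2 F.sup.H(F(I) − D.sub.raw), and the penalty terms of C.sub.2 are omitted.

```python
import numpy as np

def adjust_image(image, d_raw, gamma2=0.25):
    """Act D5 sketch: I_{n+1} = I_n - gamma2 * grad C2 (data term only,
    orthonormal FFT forward model, adjoint = inverse FFT)."""
    residual = np.fft.fft2(image, norm="ortho") - d_raw
    grad = 2.0 * np.fft.ifft2(residual, norm="ortho").real
    return image - gamma2 * grad

truth = np.random.default_rng(4).random((8, 8))
d_raw = np.fft.fft2(truth, norm="ortho")
noisy = truth + 0.1                      # image with a constant offset error
better = adjust_image(noisy, d_raw)
print(np.linalg.norm(better - truth) < np.linalg.norm(noisy - truth))  # True
```

With this linear forward model each call shrinks the image-domain error by the factor (1 − 2γ.sub.2); γ.sub.2 again controls speed and stability of the iteration.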
(24) In
(25) Here, the act D3) of reconstructing 6.3 is executed together with the act D5) of adjusting 6.5: the reconstructed image data I.sub.adapt of act D3), derived from the adapted predicted signal data D.sub.adapt that was adapted with the first cost value C.sub.1 in the signal domain in acts D1) and D2), is additionally adjusted in the image domain by act D5) with the second cost value C.sub.2.
(26) In
(27) In
(28) In
(29) The acts 21 to 24 may be a separate method of providing training data for training the ANN 12.
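The training-data provision and the recursive training acts t1) to t4) of claim 10 can be sketched together. All concrete choices below are illustrative assumptions, not the disclosure's: a mono-exponential decay s(t) = exp(−t/T2) stands in for full Bloch simulations, a per-pixel T2 value plays the role of the training image data, and the network size, learning rate, and abort criterion are picked for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Training data provision (cf. acts 21 to 24): simulate training signal
# data on a discretization grid from a toy analytic signal model.
t_grid = np.linspace(0.01, 0.2, 8)               # discretization grid
train_params = rng.uniform(0.02, 0.3, (256, 1))  # training "image" data (T2)
train_signals = np.exp(-t_grid / train_params)   # training signal data

# Recursive training acts t1)-t4): fit a small tanh MLP by full-batch
# gradient descent (backpropagation).
W1 = rng.standard_normal((1, 16)) * 10.0; b1 = rng.standard_normal(16)
W2 = rng.standard_normal((16, 8)) * 0.1;  b2 = np.zeros(8)
lr, abort_criterion = 0.2, 1e-3
dev0 = None
for step in range(5000):
    h = np.tanh(train_params @ W1 + b1)          # t1) provisional prediction
    pred = h @ W2 + b2
    err = pred - train_signals
    deviation = np.mean(err ** 2)                # t2) deviation
    if dev0 is None:
        dev0 = deviation                         # remember the initial error
    if deviation <= abort_criterion:             # t3) abort criterion check
        break
    # t4) readjust the internal connection weights via backpropagation
    g2 = 2.0 * err / err.size
    g1 = (g2 @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * h.T @ g2;             b2 -= lr * g2.sum(axis=0)
    W1 -= lr * train_params.T @ g1;  b1 -= lr * g1.sum(axis=0)
print(deviation < dev0)  # True: the deviation shrinks over the iterations
```

Storing the simulated signal/parameter pairs once and reusing them across training runs matches the separate training-data method mentioned above.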
(30) Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations exist. It should be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration in any way. Rather, the foregoing summary and detailed description will provide those skilled in the art with a convenient road map for implementing at least one exemplary embodiment, it being understood that various changes may be made in the function and arrangement of elements described in an exemplary embodiment without departing from the scope as set forth in the appended claims and their legal equivalents. Generally, this disclosure is intended to cover any adaptations or variations of the specific embodiments discussed herein.
(31) In the foregoing detailed description, various features are grouped together in one or more examples for the purpose of streamlining the disclosure. It is understood that the above description is intended to be illustrative, and not restrictive. It is intended to cover all alternatives, modifications and equivalents as may be included within the scope of the disclosure. Many other examples will be apparent to one skilled in the art upon reviewing the above specification.
(32) Specific nomenclature used in the foregoing specification is used to provide a thorough understanding of the disclosure. However, it will be apparent to one skilled in the art in light of the specification provided herein that the specific details are not required in order to practice the disclosure. Thus, the foregoing descriptions of specific embodiments of the present disclosure are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed; many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical applications, to thereby enable others skilled in the art to best utilize the disclosure and various embodiments with various modifications as are suited to the particular use contemplated. Throughout the specification, the terms “including” and “in which” are used as the plain-English equivalents of the terms “comprising” and “wherein,” respectively. Moreover, the terms “first,” “second,” and “third,” etc., are used merely as labels, and are not intended to impose numerical requirements on or to establish a certain ranking of importance of their objects. In the context of the present description and claims the conjunction “or” is to be understood as inclusive (“and/or”) and not exclusive (“either . . . or”).