ACCELERATED TIME DOMAIN MAGNETIC RESONANCE SPIN TOMOGRAPHY
20230044166 · 2023-02-09
Inventors
- Hongyan Liu (Utrecht, NL)
- Alessandro Sbrizzi (Utrecht, NL)
- Cornelis Antonius Theodorus Van Den Berg (Utrecht, NL)
CPC classification
G01R33/561
PHYSICS
G01R33/5608
PHYSICS
International classification
G01R33/56
PHYSICS
Abstract
The present patent disclosure relates to a method and a device 700 for determining a spatial distribution of at least one tissue parameter within a sample based on a time domain magnetic resonance, TDMR, signal emitted from the sample after excitation of the sample according to an applied pulse sequence, a method of obtaining at least one time dependent parameter relating to a magnetic resonance, MR, signal emitted from a sample after excitation of the sample according to an applied spin echo pulse sequence, and a computer program product for performing the methods. A TDMR signal model is used to approximate the emitted time domain magnetic resonance signal. The model is factorized into one or more first matrix operators that have a non-linear dependence on the at least one tissue parameter and a remainder of the TDMR signal model.
Claims
1. A method for determining a spatial distribution of at least one tissue parameter within a sample based on a measured time domain magnetic resonance, TDMR, signal emitted from the sample after excitation of the sample according to an applied pulse sequence, the method comprising: i) determining a TDMR signal model to approximate the emitted time domain magnetic resonance signal, wherein the TDMR signal model is dependent on TDMR signal model parameters comprising the at least one tissue parameter within the sample, wherein the model is factorized into one or more first matrix operators that have a non-linear dependence on the at least one tissue parameter and a remainder of the TDMR signal model; ii) performing optimization with an objective function and constraints based on the first matrix operators and the remainder of the TDMR signal model until a difference between the TDMR signal model and the TDMR signal emitted from the sample is below a predefined threshold or until a predetermined number of repetitions is completed, in order to obtain an optimized or final set of TDMR signal model parameters; and iii) obtaining from the optimized or final set of TDMR signal model parameters the spatial distribution of the at least one tissue parameter.
2. The method according to claim 1, wherein one of the one or more first matrix operators represents the TDMR signal at echo time.
3. The method according to claim 2, wherein the model is factorized into at least two first matrix operators that have a non-linear dependence on the at least one tissue parameter and the remainder of the TDMR signal model, wherein a first of the at least two first matrix operators represents the TDMR signal at echo time, and wherein a second of the at least two first matrix operators represents a readout encoding matrix operator of the TDMR signal.
4. The method according to claim 1, wherein the remainder of the TDMR signal model comprises a readout encoding matrix operator of the TDMR signal.
5. The method according to claim 1, wherein the performing the optimization comprises using a surrogate predictive model wherein a TDMR signal is computed at echo time only based on the one or more first matrix operators, wherein the surrogate predictive model outputs the TDMR signal at echo time and one or more TDMR signal derivatives at echo time with respect to each of the at least one tissue parameter within the sample.
6. The method according to claim 1, wherein the TDMR signal model is a volumetric signal model and comprises a plurality of voxels, wherein preferably the step of performing optimization is done iteratively for each line in a phase encoding direction of the voxels of the TDMR signal model.
7. The method according to claim 6, wherein the TDMR signal at echo time is a compressed TDMR signal at echo time for each line of voxels, wherein the TDMR signal at echo time is compressed for each voxel, and/or wherein the remainder of the TDMR signal model is factorized into a diagonal phase encoding matrix, preferably for each of the lines of voxels, and a compression matrix for the TDMR signal at echo time.
8. The method according to claim 7, wherein the optimization with an objective function and constraints is representable by:
min.sub.α,Z∥D−CZ∥.sub.F.sup.2 subject to Y(α.sub.i)C.sup.r(α.sub.i)−Z.sub.i=0, i=1, . . . , N.sub.y,
wherein D represents the measured TDMR data; C represents the linear operator comprising the phase encoding and compression matrices; Y(α.sub.i) represents the compressed TDMR signal at echo time for the i.sup.th line of voxels; C.sup.r(α.sub.i) represents the readout encoding matrix operator; and Z.sub.i represents a slack variable.
9. The method according to claim 1, wherein the optimization with an objective function and constraints is representable by:
L.sub.λ(α,Z,W)=∥D−CZ∥.sub.F.sup.2+λΣ.sub.i=1.sup.N.sup.y∥Y(α.sub.i)C.sup.r(α.sub.i)−Z.sub.i+W.sub.i∥.sub.F.sup.2,
wherein L.sub.λ is a Lagrangian with λ representing the Lagrange multiplier; α represents the at least one tissue parameter; Z represents an alternative or slack variable; and W represents a dual variable for Z.
10. The method according to claim 9, wherein the non-linear optimization problem is represented by:
α.sub.i.sup.(k+1)=argmin.sub.α.sub.i∥Y(α.sub.i)C.sup.r(α.sub.i)−Z.sub.i.sup.(k+1)+W.sub.i.sup.(k)∥.sub.F.sup.2, i=1, . . . , N.sub.y.
11. The method according to claim 9, wherein the step ii) of performing optimization comprises: using a set of equations based on the factorized model, each equation of the set of equations being arranged to obtain an updated respective variable, wherein the variables comprise a first variable representing an auxiliary or slack variable, a second variable representing the at least one tissue parameter and a third variable representing a dual variable, the minimizing comprising: iii) obtaining an update value for the first variable while keeping the other variables fixed; iv) then obtaining an update for the second variable while keeping the other variables fixed; v) then obtaining an update for the third variable while keeping the other variables fixed, and vi) repeating steps iii), iv) and v), using the updated values of the respective variables as the respective input, until a difference between the updated second variable and the input second variable is smaller than a predefined threshold or until a predetermined number of repetitions is completed, thereby obtaining a final updated set of TDMR signal model parameters, wherein preferably: each equation is configured to obtain an updated variable for a line of voxels in the phase encoding direction; and/or the minimizing comprises estimating an initial set of the variables and thereafter sequentially performing the steps iii), iv) and v) according to: iii) obtaining an updated value for the first variable using the estimated initial set of variables as input; iv) obtaining an updated value for the second variable using the updated first variable and the initial third variable as input; v) obtaining an updated value for the third variable using the updated first variable and the updated second variable as input, and the step vi) of repeating is performed by using the updated values of the respective variables as the respective input until a difference between the updated second variable and the input second variable is smaller than a predefined threshold.
12. The method according to claim 11, wherein the step ii) of performing non-linear optimization comprises, for the (k+1).sup.th iteration: obtaining the updated value for the first variable according to
Z.sup.(k+1)=argmin.sub.Z L.sub.λ(α.sup.(k), Z, W.sup.(k));
obtaining the updated value for the second variable according to
α.sup.(k+1)=argmin.sub.α L.sub.λ(α, Z.sup.(k+1), W.sup.(k)), i.e.,
α.sub.i.sup.(k+1)=argmin.sub.α.sub.i∥Y(α.sub.i)C.sup.r(α.sub.i)−Z.sub.i.sup.(k+1)+W.sub.i.sup.(k)∥.sub.F.sup.2;
and obtaining the updated value for the third variable according to
W.sub.i.sup.(k+1)=W.sub.i.sup.(k)+Y(α.sub.i.sup.(k+1))C.sup.r(α.sub.i.sup.(k+1))−Z.sub.i.sup.(k+1).
13. The method according to claim 11, wherein the obtaining the updated value for the second variable is performed by solving N.sub.y separate nonlinear problems using a trust-region method.
14. The method according to claim 1, wherein the step ii) of performing optimization comprises using Alternating Direction Method of Multipliers (ADMM).
15. The method according to claim 5, wherein the surrogate predictive model is implemented as a neural network, a Bloch equation based model or simulator, or a dictionary based model.
16. The method according to claim 15, wherein the neural network is implemented as a deep neural network or a recurrent neural network, wherein, when the neural network is implemented as the deep neural network, the deep neural network is preferably fully connected.
17. The method according to claim 1, wherein the at least one tissue parameter comprises any one of a T1 relaxation time, T2 relaxation time, T2* relaxation time and a proton density, or a combination thereof.
18. The method according to claim 1, wherein the TDMR signal model is a Bloch based volumetric signal model.
19. A device for determining a spatial distribution of at least one tissue parameter within a sample based on a time domain magnetic resonance, TDMR, signal emitted from the sample after excitation of the sample according to an applied pulse sequence, the device comprising a processor which is configured to: i) determine a TDMR signal model to approximate the emitted time domain magnetic resonance signal, wherein the TDMR signal model is dependent on TDMR signal model parameters comprising the at least one tissue parameter within the sample, wherein the model is factorized into one or more first matrix operators that have a non-linear dependence on the at least one tissue parameter and a remainder of the TDMR signal model; ii) perform optimization with an objective function and constraints based on the first matrix operators and the remainder of the TDMR signal model until a difference between the TDMR signal model and the TDMR signal emitted from the sample is below a predefined threshold or until a predetermined number of repetitions is completed, in order to obtain an optimized or final set of TDMR signal model parameters; and iii) obtain from the optimized or final set of TDMR signal model parameters the spatial distribution of the at least one tissue parameter.
20. A method of obtaining at least one magnetic resonance, MR, signal derivative with respect to at least one respective tissue parameter of an MR signal, the MR signal being emitted from a sample after excitation of the sample according to an applied pulse sequence, the method comprising: performing an iterative non-linear optimization with an objective function and constraints in order to obtain an optimized or final value for the at least one MR signal derivative with respect to the at least one respective tissue parameter, wherein the performing of the optimization comprises, for each iteration of the non-linear optimization, using a predictive model receiving the at least one tissue parameter as input and outputting the at least one MR signal derivative with respect to each of the at least one tissue parameter within the sample.
21. The method according to claim 20, wherein the predictive model is implemented as a neural network configured to accept the at least one tissue parameter and parameters relating to the applied pulse sequence as input parameters, wherein the neural network is preferably a deep neural network or a recurrent neural network; and/or wherein the predictive model is implemented as a dictionary based predictive model or a Bloch equation based model; and/or wherein the predictive model is arranged to further predict or compute values of a magnetization and one or more derivatives thereof with respect to respective ones of the at least one tissue parameter within the sample, and/or wherein the at least one tissue parameter comprises one or any combination of a T.sub.1 relaxation time, a T.sub.2 relaxation time, a T.sub.2* relaxation time and a proton density, PD.
22. The method according to claim 20, wherein the predictive model is arranged to output the MR signal for echo time only; and/or wherein the MR signal is a time domain magnetic resonance, TDMR, signal.
23. A computer program product comprising computer-executable instructions for performing the method of claim 1, when the program is run on a computer.
24. A computer program product comprising computer-executable instructions for performing the method of claim 20, when the program is run on a computer.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0106] The accompanying drawings are used to illustrate presently preferred non-limiting exemplary embodiments of devices of the present disclosure. The above and other advantages of the features and objects of the disclosure will become more apparent and the aspects and embodiments will be better understood from the following detailed description when read in conjunction with the accompanying drawings, in which:
DESCRIPTION
[0117] In MR-STAT, parameter maps are reconstructed by iteratively solving the large scale, non-linear problem
min.sub.α∥d−s(α)∥.sub.2.sup.2,  (1)
[0118] where d is the data in the time domain, α denotes all parameter maps, and s is the volumetric signal model, such as a Bloch equation based signal model. Recent improvements have been obtained and are the subject of the at present pending application NL2022890, which is incorporated herein by reference in its entirety. However, MR-STAT reconstructions still lead to long computation times because of the large scale of the problem, requiring a high performance computing cluster for application in a clinical work-flow.
[0119] In an embodiment, the MR-STAT reconstructions are accelerated by following two strategies, namely: 1) adopting an Alternating Direction Method of Multipliers (ADMM) and 2) computing the signal and derivatives by a surrogate model. Although it is preferred to apply these two strategies simultaneously, it is possible to apply them independently in order to obtain a reduced reconstruction time. When applied simultaneously, the new algorithm achieves a two-orders-of-magnitude acceleration in reconstruction with respect to the state-of-the-art MR-STAT. A high-resolution 2D dataset is reconstructed within 10 minutes on a desktop PC. This thus facilitates the application of MR-STAT in the clinical work-flow.
Example of Implementation of an Alternating Minimization Method for MR-STAT: ADMM
[0120] The general MR-STAT optimization problem can be written as:
min.sub.α∥d−s(α)∥.sub.2.sup.2.  (1)
[0121] Problem Dimension: [0122] N.sub.y: Number of voxels in the phase encoding (Y) direction; [0123] N.sub.x: Number of voxels in the readout encoding (X) direction; [0124] N.sub.Tr: Number of RF pulses; [0125] N.sub.Read: Number of readout points per TR (repetition time); [0126] N.sub.Eig: Length of the compressed echo time signal;
[0127] A vector definition for problem (1) is as follows: [0128] d is the measured time domain data vector of length N.sub.Tr·N.sub.Read, and α collects the parameter maps over all N.sub.x·N.sub.y voxels.
[0130] Referring now to
[0131] Returning to the example implementation of the alternating minimization method, especially when assuming Cartesian sampling, the original volumetric signal s (Eq 1) can be factorized into different matrix operators, leading to the following form,
min.sub.α∥D−CM(α)∥.sub.F.sup.2, with M.sub.i(α)=Y(α.sub.i)C.sup.r(α.sub.i), i=1, . . . , N.sub.y,  (2)
wherein Y(α.sub.i) represents the (compressed) signal at echo time for the i.sup.th line of voxels, C.sup.r(α.sub.i) represents the readout encoding matrix operator, and C represents the remaining linear operator comprising the phase encoding and compression matrices.
[0132] A graphic illustration of the new problem (2) and the explanation of the operators is shown in the accompanying drawings.
[0139] In the above equation (2), and where used equivalently elsewhere in the present disclosure, the subscript F indicates that the norm (∥ . . . ∥) is the Frobenius norm.
[0140] We reformulate problem (2) as the following constrained problem
min.sub.α,Z∥D−CZ∥.sub.F.sup.2 subject to Y(α.sub.i)C.sup.r(α.sub.i)−Z.sub.i=0, i=1, . . . , N.sub.y,  (3)
[0141] by adding slack or auxiliary variables Z.sub.i. The non-linear constraints are then added into the objective function of eq. (3) to obtain an Augmented Lagrangian:
[0142] Augmented Lagrangian, viz. the nonlinear constraints are added into the objective function; [0143] in scaled form (after algebraic simplification) the Augmented Lagrangian reads
L.sub.λ(α,Z,W)=∥D−CZ∥.sub.F.sup.2+λΣ.sub.i=1.sup.N.sup.y∥Y(α.sub.i)C.sup.r(α.sub.i)−Z.sub.i+W.sub.i∥.sub.F.sup.2.  (4)
[0144] The introduced parameters/matrices are defined as: [0145] Z.sub.i: the slack or auxiliary variable for the i.sup.th line of voxels; [0146] W.sub.i: the corresponding scaled dual variable; [0147] λ: the Lagrange multiplier (penalty parameter).
[0148] The corresponding alternating update scheme is as follows.
[0149] For equation (4), the three variables α, Z, W are obtained sequentially during an ADMM iteration. Reference is made here to Boyd, S., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine learning, 3(1), 1-122, which is incorporated herein by reference in its entirety.
[0150] The following steps are performed, after obtaining an initial value for the three variables. Then, for the (k+1)th iteration:
[0151] 1. Update Z: Z.sup.(k+1)=argmin.sub.Z L.sub.λ(α.sup.(k), Z, W.sup.(k));
[0152] This is a linear problem and the closed form solution is given as:
Z.sup.(k+1)=(C*C+λI).sup.(−1)(C*D+λM.sup.(k)+λW.sup.(k)),
[0153] wherein I is an identity matrix, C is the linear operator of problem (2), and M.sup.(k) is given by M.sub.i.sup.(k)=Y(α.sub.i.sup.(k))C.sup.r(α.sub.i.sup.(k)).
[0154] This linear system can be solved also by standard iterative algorithms for linear least-squares problems.
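The closed-form Z-update above can be sketched numerically. In the fragment below (a minimal NumPy illustration), the operator C, data D, current model term M=Y(α.sup.(k))C.sup.r(α.sup.(k)) and dual W are small random stand-ins, not quantities from the disclosure; the final check verifies that the solved Z indeed zeroes the gradient of the quadratic sub-problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small stand-in dimensions (illustrative only).
n_samples, n_z = 12, 6
C = rng.standard_normal((n_samples, n_z)) + 1j * rng.standard_normal((n_samples, n_z))
D = rng.standard_normal(n_samples) + 1j * rng.standard_normal(n_samples)
M = rng.standard_normal(n_z) + 1j * rng.standard_normal(n_z)  # stand-in for Y(alpha)C^r(alpha)
W = rng.standard_normal(n_z) + 1j * rng.standard_normal(n_z)  # scaled dual variable
lam = 0.5

# Z^(k+1) = (C* C + lambda I)^(-1) (C* D + lambda M^(k) + lambda W^(k))
A = C.conj().T @ C + lam * np.eye(n_z)
b = C.conj().T @ D + lam * (M + W)
Z_new = np.linalg.solve(A, b)

# Z_new minimizes ||D - C Z||^2 + lambda ||M - Z + W||^2,
# so the gradient of that quadratic vanishes at Z_new.
grad = C.conj().T @ (C @ Z_new - D) + lam * (Z_new - M - W)
print(np.linalg.norm(grad) < 1e-9)  # → True
```

For larger problems the same normal equations would be handed to an iterative least-squares solver instead of a direct solve, as the next paragraph notes.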
[0155] 2. Update α: α.sup.(k+1)=argmin.sub.α L.sub.λ(α, Z.sup.(k+1), W.sup.(k)); per line of voxels this reads
α.sub.i.sup.(k+1)=argmin.sub.α.sub.i∥Y(α.sub.i)C.sup.r(α.sub.i)−Z.sub.i.sup.(k+1)+W.sub.i.sup.(k)∥.sub.F.sup.2;
[0156] in this sub-step, N.sub.y separate nonlinear problems can be solved, for instance by a trust-region method;
[0157] 3. Update W: W.sub.i.sup.(k+1)=W.sub.i.sup.(k)+Y(α.sub.i.sup.(k+1))C.sup.r(α.sub.i.sup.(k+1))−Z.sub.i.sup.(k+1); this is a simple linear computation.
[0158] In the above step 2, α is updated by solving the nonlinear problem
α.sub.i.sup.(k+1)=argmin.sub.α.sub.i∥Y(α.sub.i)C.sup.r(α.sub.i)−Z.sub.i.sup.(k+1)+W.sub.i.sup.(k)∥.sub.F.sup.2.
[0159] In step 2, Y(α.sub.i) can be the output of Network 1, 112, of
[0160] The C.sup.r(α.sub.i) matrix models the MR signal evolution during one readout. The preferred surrogate model only computes the MR signal at echo time, and the C.sup.r(α.sub.i) matrix is used in order to compute MR signals at all sample time points during the readout. The C.sup.r(α.sub.i) matrix describes the effects of (a) the frequency encoding gradient; (b) the T2 decay and (c) the B0 dephasing during the readout. These effects can be mathematically expressed in standard exponential and phase terms.
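As a hedged numerical sketch of these readout effects, the NumPy fragment below builds per-sample readout weights for a single voxel by combining (a) the frequency encoding phase, (b) the T2 decay and (c) the B0 dephasing; the gradient strength, voxel position, T2 value, off-resonance and time axis are invented stand-ins, not values from the disclosure:

```python
import numpy as np

gamma = 2 * np.pi * 42.577e6        # proton gyromagnetic ratio [rad/s/T]
g_x = 10e-3                         # frequency encoding gradient [T/m] (stand-in)
x = 0.01                            # voxel position [m] (stand-in)
t2 = 0.08                           # T2 relaxation time [s] (stand-in)
db0 = 2 * np.pi * 5.0               # B0 off-resonance [rad/s] (stand-in)
t = (np.arange(65) - 32) * 6.25e-5  # readout sample times relative to the echo [s]

# Per-sample readout weights combining (a) frequency encoding phase,
# (b) T2 decay and (c) B0 dephasing, all referenced to the echo time.
c_r = np.exp(1j * gamma * g_x * x * t) * np.exp(-t / t2) * np.exp(1j * db0 * t)

# At the echo time (t = 0) the weight is 1, so the echo-time signal
# Y(alpha) is propagated to all readout sample points by these factors.
print(np.isclose(c_r[32], 1.0))  # → True
```

Applying such a weight vector per voxel and summing over the voxels of a line reproduces the role of the C.sup.r(α.sub.i) operator in the factorized model.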
[0161] In summary, in the above ADMM scheme, step (1) solves a linear problem and step (2) solves N.sub.y small parallelizable nonlinear problems using the compressed signal, therefore substantially reducing the computational complexity w.r.t. the original MR-STAT.
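The overall alternating scheme can be illustrated on a toy problem. In the sketch below (NumPy), a deliberately simple scalar model y(α)=exp(−α) stands in for the non-linear factor Y(α.sub.i)C.sup.r(α.sub.i), and a small random matrix stands in for the linear operator C; the three updates are applied in the order described above. It illustrates the update structure only, not the disclosed reconstruction code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions and a toy per-voxel "signal model" y(alpha) = exp(-alpha),
# a stand-in for Y(alpha_i)C^r(alpha_i); C is a stand-in encoding operator.
n_vox, n_meas = 5, 8
C = rng.standard_normal((n_meas, n_vox))
alpha_true = rng.uniform(0.5, 2.0, n_vox)
d = C @ np.exp(-alpha_true)          # noiseless synthetic data

lam = 1.0
alpha = np.ones(n_vox)               # initial parameter estimate
W = np.zeros(n_vox)                  # scaled dual variable
A = C.T @ C + lam * np.eye(n_vox)    # system matrix for the Z-update

misfits = []
for _ in range(200):
    # 1. Z-update: closed-form linear solve (cf. step 1 of the scheme).
    Z = np.linalg.solve(A, C.T @ d + lam * (np.exp(-alpha) + W))
    # 2. alpha-update: here the small non-linear problems decouple per voxel
    #    and even have a closed form; MR-STAT uses a trust-region method.
    target = Z - W
    alpha = np.where(target > 1e-12, -np.log(np.maximum(target, 1e-12)), 30.0)
    # 3. W-update: simple linear dual ascent step.
    W = W + np.exp(-alpha) - Z
    misfits.append(np.linalg.norm(d - C @ np.exp(-alpha)))

print(misfits[-1] < 1e-3 and misfits[-1] < misfits[0])  # → True
```

As in the disclosed scheme, the expensive linear fit is solved once per iteration, while the non-linear step splits into small independent per-voxel (here even scalar) problems.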
[0162] The above ADMM approach is shown graphically in
[0163] The presently described ADMM scheme is implemented as an example for Cartesian acquisition but it will be apparent using the knowledge of the present disclosure that the ADMM scheme can be readily adapted for other kinds of acquisitions.
[0164] Example with Linear Constraint
[0165] The above Eq. 3 is an example of a non-linear constrained problem. Another example for implementation is to have a linear constraint problem, as follows:
[0166] Compared to Eq. 3, this is another way of approaching the optimization problem. Eq. 3 uses the nonlinear relationships as non-linear constraints in the first step, while the above uses the linear relationship as a linear constraint. An equivalent approach to the above steps of the ADMM iterations is followed for this linear constraint variant.
[0167] Regularization
[0168] In order to reconstruct better, less noisy images, different regularization terms can be added to the optimization problems for different parameter images (e.g., T.sub.1, T.sub.2, and the real and imaginary parts of the proton density (resp. real(PD) and imag(PD))). In order to solve the problem with such regularization terms, additional auxiliary variables and splitting schemes are added to the above Eq. (4):
[0169] Here, α.sub.j is the parameter image for the j.sup.th parameter (e.g. T.sub.1, T.sub.2, etc.), and the number of parameters which need regularization is N.sub.par. R(α.sub.j) is the regularization term for the j.sup.th parameter image, where R is any regularization such as L2 regularization or Total Variation (TV) regularization; γ.sub.j and η.sub.j are carefully chosen for different parameter images, in order to achieve optimal reconstructed image quality.
[0170] Thereafter, the alternating update scheme is also used to sequentially update all the parameters: 1. Update Z and β.sub.j; 2. Update α; 3. Update W and V. Adding regularization terms has almost no impact on the computation time.
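As a minimal hedged illustration of such a splitting update, the fragment below (NumPy) applies the proximal operator of a plain L2 regularizer, whose closed form is simple, to an invented parameter image; the weight γ and all values are placeholders, and a TV regularizer would use a different proximal step:

```python
import numpy as np

def prox_l2(v, gamma):
    # Proximal operator of gamma*||x||^2: argmin_x gamma*||x||^2 + 0.5*||x - v||^2.
    return v / (1.0 + 2.0 * gamma)

# beta_j-update for one (invented) parameter image: the extra splitting
# variable is pulled towards the regularizer's minimizer, while the dual
# variable (zero here) accounts for the constraint linking beta_j to alpha_j.
alpha_j = np.array([0.8, 1.2, 3.0])   # invented parameter image values
v_j = np.zeros_like(alpha_j)          # dual variable for the extra splitting
beta_j = prox_l2(alpha_j + v_j, gamma=0.5)
print(beta_j)  # → [0.4 0.6 1.5]
```

Because such proximal steps are element-wise and cheap, they add essentially nothing to the per-iteration cost, consistent with the remark above.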
[0171] Neural Network Surrogate MR Signal Model
[0172] Since MR-STAT, like some other quantitative MRI techniques, is solved by a derivative-based iterative optimization scheme, both the magnetization and its derivatives with respect to all reconstructed parameters are computed at each iteration using an MR signal model such as an Extended Phase Graph, EPG, model as described in Weigel, Matthias, Journal of Magnetic Resonance Imaging 41.2 (2015): 266-295, or a Bloch equation based model. To accelerate the signal computation, a neural network (NN) is designed and trained to learn to compute the signal and its derivatives with respect to the tissue parameters (α). Preferably, the NN is designed for either a balanced or a gradient spoiled sequence. The NN architecture according to an embodiment is shown in
[0173] In other words, the input of the NN is a combination of reconstructed parameters (T1, T2, B1, B0) and optionally time-independent sequence parameters such as TR and TE and time-dependent parameters such as flip angles. The output is the time-domain MR signal (transverse magnetization) and its derivatives w.r.t. the parameters of interest, such as the tissue parameters T.sub.1 and T.sub.2.
[0174] The network is split into two parts: the first part includes (sub-)Network 1 to Network 4, which learn the MR signal and its derivatives in a compressed (low-rank) domain. In this specific example, since there are three types of non-linear parameters to reconstruct, namely T.sub.1, T.sub.2 and B.sub.1, there are three different partial derivatives that are calculated. Thus, three networks for derivatives, i.e. Networks 2, 3 and 4, are present in the present example. In the case that fewer, more and/or other parameters need to be reconstructed, then fewer, more and/or other derivative networks will be needed. If one does not need to reconstruct, say, B.sub.1, then only two derivative networks are needed. In general, there is one (sub-)network for the signal and N (sub-)networks for the derivatives of each of the N parameters to be reconstructed.
[0175] Each of the Networks 1-4 in an embodiment has four fully connected layers with ReLU activation function. The second part of the network is the single linear layer which is represented by the compression matrix U. The matrix U is learned during the training. The first part of the network is preferably used in step 2 in the ADMM algorithm (for computing Y and dY/dα), and the second part of the network (linear step, i.e. matrix U) is used in step 1 of the ADMM algorithm.
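The two-part structure, non-linear fully connected layers followed by a single learned linear layer U, can be sketched as follows (NumPy); the layer widths, random weights and compressed length are invented placeholders rather than the trained network of the disclosure:

```python
import numpy as np

rng = np.random.default_rng(2)

def mlp_forward(x, weights, biases):
    # Four fully connected layers; ReLU on the hidden layers (cf. Networks 1-4).
    h = x
    for w_mat, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ w_mat + b, 0.0)
    return h @ weights[-1] + biases[-1]

n_in, n_hidden, n_eig, n_tr = 4, 32, 8, 100   # invented sizes; n_eig plays N_Eig
widths = [n_in, n_hidden, n_hidden, n_hidden, n_eig]
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(widths[:-1], widths[1:])]
biases = [np.zeros(b) for b in widths[1:]]

# Part 1: per-voxel inputs (e.g. T1, T2, B1, B0) -> compressed echo-time signal Y.
batch = rng.uniform(0.1, 1.0, (1000, n_in))           # batch of 1000 voxels
y_compressed = mlp_forward(batch, weights, biases)    # shape (1000, n_eig)

# Part 2: the single linear layer, i.e. the learned compression matrix U,
# expands the compressed representation back to the full echo-time train.
U = rng.standard_normal((n_eig, n_tr)) * 0.1
signal = y_compressed @ U                             # shape (1000, n_tr)
print(y_compressed.shape, signal.shape)  # → (1000, 8) (1000, 100)
```

This split mirrors the use described above: the non-linear part supplies Y and dY/dα for the α-update, while the linear layer U enters the linear Z-update.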
[0176] Several embodiments of neural network architectures are provided. It will be understood that other network architectures may also perform similarly to the below described embodiments.
[0177] One described embodiment is a Deep Neural Network having several layers comprising combinations of, for instance, non-linear activation functions, convolution layers, drop-out layers, max-pooling (or variant) layers, linear combination layers and fully-connected layers.
[0178] Another described embodiment is a Recurrent Neural Network having recurrent layers. Each recurrent layer comprises combinations of, for instance, one or more of Gated Recurrent Units (GRU); LSTM units; linear combination layers; drop-out layers; and/or convolution layers.
[0179] Network Architecture Example 1: Fully-Connected Neural Network
[0180] A fully-connected multi-layer neural network is the preferred implementation of the NN architecture of
[0181] The layers included in Networks 1-4 are shown in
[0182] The “?” signs indicate the batch size, which equals the number of voxels computed. For example, if we need to compute the signal for 1000 voxels, then “?” will be 1000. If signals and derivatives for 10000 voxels are to be computed, then: ?=10000.
[0183] Network Architecture Example 2: Recurrent Neural Network
[0184] In accordance with another example, a multi-layer recurrent neural network (RNN) is used, as shown in
[0186] Alternative Methods for Calculating Y(α.sub.i) and its Derivatives w.r.t. the Inputs α
[0187] A first alternative method to calculate Y(α.sub.i) and its derivatives w.r.t. the inputs α is to use a Bloch equation simulator, which is a common way to compute the signal. When using the factorized model of the present application, the computed signal would be the product of the U and Y(α) operators, analogous to the neural network model. This amounts to numerically solving the physical model represented by the Bloch equation.
[0188] Another alternative method to calculate Y(α.sub.i) and its derivatives w.r.t. the inputs α is to use a dictionary based method. The signal is computed on a limited number of representative values to generate a database of signal waveforms (dictionary). From this dictionary the compression matrix U can be derived by for instance Singular Value Decomposition, and the Y values can be obtained by interpolation.
[0189] One example implementation of the dictionary-based method is as follows.
[0190] 1. For a fixed MR scanning sequence, a dictionary D.sub.Full is simulated by solving the Bloch equations (physical model) while varying the input parameters such as, for instance, T.sub.1, T.sub.2 and B.sub.1. For T.sub.1, many (for instance 100 or more) values in the range of 100 to 5000 ms are sampled, usually uniformly on a logarithmic scale. For T.sub.2, many (for instance 100 or more) values in the range of 10 to 2000 ms are sampled, usually uniformly on a logarithmic scale. For B.sub.1, a uniform sample of many (for instance 11 or more) values in the range of 0.8 to 1.2 can be taken. The output dictionary D.sub.Full can be obtained by solving the Bloch equation for each combination of the above parameters; in this example, D.sub.Full would be a matrix of size 1120×(100*100*11), where 1120 is the MR sequence length, and each column of the matrix is the MR signal for specific values of T.sub.1, T.sub.2, and B.sub.1.
[0191] 2. The low-rank matrix U (compression matrix) is then computed from D.sub.Full by singular value decomposition (SVD) such that D.sub.Full=UY(α). For more information, see also McGivney, Debra F., et al. “SVD compression for magnetic resonance fingerprinting in the time domain.” IEEE Transactions on Medical Imaging 33.12 (2014): 2311-2322.
[0192] 3. In the factorized model, the Y(α.sub.i) matrix is computed from the dictionary for any input value α by doing a multi-dimensional (3 dimensions, for T.sub.1, T.sub.2 and B.sub.1 respectively) interpolation from the compressed dictionary matrix.
[0193] Therefore, although the NN is found to be a fast way to compute the magnetizations at echo time, the above provide alternative ways for calculating/obtaining the values for Y(α.sub.i) and its derivatives w.r.t. the inputs α.
[0194] Example Reconstruction Data
[0195] Both balanced and gradient spoiled MR-STAT sequences are used with Cartesian acquisition and slowly or smoothly time-varying flip angle trains. In an embodiment, the applied pulse sequence is configured to yield varying flip angles. Preferably, the radio frequency excitation pattern of the applied pulse sequence is configured to yield smoothly varying flip angles, such that a corresponding point-spread function is spatially limited in a width direction. Smoothly varying may indicate a sequence wherein the amplitude of the RF excitations changes in time by a limited amount. The amount of change between two consecutive RF excitations during sampling of a k-space (or of each k-space) is smaller than a predetermined amount, preferably smaller than 5 degrees. Such acquisitions are described in e.g. van der Heide, Oscar, et al., arXiv preprint arXiv:1904.13244 (2019), which is incorporated herein by reference in its entirety.
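The smoothness condition on the flip angle train can be expressed as a simple numerical check; the sinusoidal train below is an invented example, not a disclosed sequence:

```python
import numpy as np

n_tr = 1120                      # number of RF pulses (cf. the sequence length above)
idx = np.arange(n_tr)
# Invented smoothly varying train: slow sinusoidal modulation between ~10 and ~60 deg.
flip_angles = 35.0 + 25.0 * np.sin(2 * np.pi * idx / 400.0)

# Smoothness criterion: the change between two consecutive RF excitations
# should stay below a predetermined amount, preferably 5 degrees.
max_step = np.max(np.abs(np.diff(flip_angles)))
print(max_step < 5.0)  # → True
```

A candidate train violating the criterion would be rejected or re-smoothed before acquisition under this reading of the constraint.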
[0196] The neural networks are trained for balanced and spoiled signal models where the inputs are (T1, T2, B1, B0, TR, TE) and (T1, T2, B1, TR, TE), respectively. This can be done for instance as described in Weigel, Matthias, Journal of Magnetic Resonance Imaging 41.2 (2015): 266-295, which is incorporated herein by reference in its entirety. An imperfect slice profile is also modelled. Training of the NN is performed with Tensorflow using the ADAM optimizer for 6000 epochs. The NN surrogate results are obtained from both simulations and measured data from a Philips Ingenia 1.5T scanner. It is noted that, generally, the predictive models disclosed herein, in particular the neural networks, are configured such that they can be trained independently of the sample or scanner. Once the model is trained with certain types of input parameters, the model is able to output results for such parameters.
[0197] The accelerated MR-STAT reconstruction algorithm incorporating the surrogate model and the above described alternating minimization scheme (in particular ADMM) is implemented in MATLAB on an 8-Core desktop PC (3.7 GHz CPU). To validate the reconstruction results, gel phantom tubes were scanned with a spoiled MR-STAT sequence on a Philips Ingenia 3T scanner, and an interleaved inversion-recovery and multi spin-echo sequence (2DMix, 7 minutes acquisition) provided by the MR vendor (Philips) was also scanned as a benchmark comparison.
[0198] For in-vivo validation, the standard and accelerated MR-STAT reconstructions are run on both gradient spoiled acquisition (using a scan time of 9.8 s, TR of 8.7 ms, and TE of 4.6 ms) and balanced acquisition (using a scan time of 10.3 s, TR of 9.16 ms, TE of 4.58 ms).
[0203] With the accelerated MR-STAT algorithm, one 2D slice reconstruction requires approximately 157 seconds with single-coil data, and 671 seconds with four compressed virtual coil data. Compared with the results reported previously (50 minutes single-coil reconstruction on a 64-CPU cluster, as per e.g. van der Heide, Oscar, et al., in Proceedings of the ISMRM, Montreal, Canada, program number 4538 (2019)), the present accelerated method thus obtains a two-orders-of-magnitude acceleration in reconstruction time.
[0204] Now referring to
[0205] A person of skill in the art would readily recognize that steps of various above-described methods can be performed by programmed computers. Herein, some embodiments are also intended to cover program storage devices, e.g., digital data storage media, which are machine or computer readable and encode machine-executable or computer-executable programs of instructions, wherein said instructions perform some or all of the steps of said above-described methods. The program storage devices may be, e.g., digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media. The embodiments are also intended to cover computers programmed to perform said steps of the above-described methods.
[0206] The functions of the various elements shown in the FIGS., including any functional blocks labelled as “units”, “processors” or “modules”, may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “unit”, “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non volatile storage. Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the FIGS. are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
[0207] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
[0208] Whilst the principles of the described methods and devices have been set out above in connection with specific embodiments, it is to be understood that this description is merely made by way of example and not as a limitation of the scope of protection which is determined by the appended claims.