Method and apparatus for processing magnetic resonance data
11169235 · 2021-11-09
Assignee
- MAX-PLANCK-GESELLSCHAFT ZUR FOERDERUNG DER WISSENSCHAFTEN E. V. (Munich, DE)
- Eberhard Karls Universitaet Tuebingen (Tuebingen, DE)
Inventors
Cpc classification
G01R33/5608
PHYSICS
G01R33/56
PHYSICS
G01R33/50
PHYSICS
G06F17/18
PHYSICS
G01R33/485
PHYSICS
G01R33/443
PHYSICS
G01R33/4828
PHYSICS
International classification
G01R33/56
PHYSICS
G06F17/18
PHYSICS
Abstract
A method of processing magnetic resonance (MR) data of a sample under investigation includes the steps of providing the MR data collected with an MRI scanner apparatus, and subjecting the MR data to a multi-parameter nonlinear regression procedure based on a non-linear MR model and employing a set of input parameters, wherein the regression procedure results in creating a parameter map of model parameters of the sample, and wherein the input parameters (initial values and possibly boundaries) of the regression procedure are estimated by a machine learning based estimation procedure applied to the MR data. The machine learning based estimation procedure preferably includes at least one neural network and/or a support vector machine. Furthermore, an MRI scanner apparatus is described.
Claims
1. A method of processing magnetic resonance (MR) data of a sample under investigation, comprising the steps of: providing the MR data being collected with an MRI scanner apparatus, and subjecting the MR data to a multi-parameter nonlinear regression procedure based on a non-linear MR signal model and employing a set of input parameters, wherein the regression procedure results in creating a parameter map of model parameters of the sample, wherein the input parameters of the regression procedure are estimated by a machine learning based estimation procedure applied to the MR data.
2. The method according to claim 1, wherein the machine learning based estimation procedure comprises at least one of at least one neural network, and a support vector machine.
3. The method according to claim 1, wherein the machine learning based estimation procedure is an estimation procedure trained by at least one of simulation data, real data and manipulated real data.
4. The method according to claim 1, wherein the input parameters of the regression procedure are estimated by using a combination of an estimation procedure trained by simulation data, an estimation procedure trained by real data, and an estimation procedure trained by real but manipulated data.
5. The method according to claim 1, wherein the machine learning based estimation procedure is applied separately on N-dimensional MR data of voxels of the sample under investigation, wherein N refers to one or more sample features collected within each voxel.
6. The method according to claim 1, wherein the regression procedure includes a single iteration step for creating the parameter map of model parameters from the input parameters.
7. The method according to claim 1, including further steps of repeating the steps of estimating the input parameters and subjecting the MR data to the multi-parameter nonlinear regression procedure with a changed configuration of the machine learning based estimation procedure, until the parameter map of the model parameters provides a realistic approximation of the sample.
8. The method according to claim 1, wherein the non-linear MR model comprises at least one of a Bloch equation based MR signal model, an MR signal model based on adapted Bloch equations, a magnetic field mapping model and a T1 and T2 relaxation times estimating model.
9. The method according to claim 1, wherein the MR data comprise at least two phase and/or magnitude images and/or MR raw data being collected with the MRI scanner apparatus.
10. The method according to claim 1, wherein the step of estimating the input parameters is included in an image reconstruction procedure conducted by the MRI scanner apparatus.
11. The method according to claim 1, wherein the step of estimating the input parameters is conducted separately from operation of the MRI scanner apparatus.
12. The method according to claim 1, wherein the parameter map of the sample comprises at least one of an exponential T1, T2 map, a multi-exponential T1, T2 map, a spectroscopic imaging map, a compartmental map of parameters, an apparent diffusion coefficient (ADC)-map for varying B-values, a Kurtosis parameter map, a parameter map of perfusion and dynamic contrast enhanced imaging for Gadolinium based contrast agents as well as glucose, or glycosamines or oxymethyl glucose (OMG), a parameter map of spectroscopic imaging of nuclei, a parameter map of CEST parameters, a field parameter map (B1, B0), and a parameter map representing at least one of motion, breathing and pulsation with known non-linear influence.
13. The method according to claim 12, wherein the parameter map of the sample comprises the parameter map of CEST parameters including Z-spectra modelled by Multi-Lorentzian regression, or Henkelman-based water, MT and CEST pool regression of effect strength, proton pool sizes and exchange and relaxation rates.
14. The method according to claim 12, wherein the parameter map of the sample comprises the field parameter map (B1, B0) including Bloch-Siegert shift based B1 mapping, multi-flip angle mapping, DREAM or WASABI.
15. A magnetic resonance imaging (MRI) scanner apparatus, comprising an MRI scanner signal acquisition device arranged for collecting MR data, and a data processing unit that includes a regression processor configured for subjecting the MR data to a multi-parameter nonlinear regression procedure based on a non-linear MR model, wherein the regression procedure includes creating a parameter map of model parameters of a sample using a set of input parameters of the regression procedure, wherein the data processing unit includes an estimator section configured for estimating the input parameters of the regression procedure by applying a machine learning based estimation procedure on the MR data.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Further advantages and details of the invention are described in the following with reference to the attached drawings.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION
(7) Embodiments of the invention are described in the following with particular reference to the combination of the machine learning based estimation procedure with the regression procedure. The invention preferably is implemented with an MRI scanner as it is known per se. Accordingly, details of the MRI scanner, the available control schemes thereof, available excitations and read-out sequences, available schemes of MR signal acquisition and types of MR data are not described as they are known per se from prior art.
Embodiments of the MR Data Processing Method and MRI Scanner Apparatus
(9) The MR data processing method includes a configuration step S0, wherein the NN is configured. Configuration of the NN includes providing the NN architecture, in particular selecting the number of neurons and the number of layers. In a practical example, 400 neurons and 3 layers are provided. Additionally, the configuration of the NN can include selecting or adapting a regression model and an associated NN architecture, e.g. as a result of step S4 (see below). Furthermore, the configuration of the NN in step S0 includes training the NN, as described below.
(10) The main process starts with step S1 of providing the MR data, e.g. by a running MRI scanner operation. The MR data are stored in an MR data storage of the MRI scanner apparatus or any other computer setup. As an example, the MR data comprise raw data collected with the application of a read-out sequence with the MRI scanner, e.g., spin-echo, gradient echo, bSSFP (balanced steady-state free precession), EPI (echo planar imaging) and/or any magnetization-prepared readout, e.g., CEST, T1ρ, MP-RAGE (magnetization-prepared rapid gradient echo), or diffusion-weighted imaging. Spectroscopic imaging data can also be evaluated, such as MRSI (MR spectroscopic imaging) data, CSI (chemical shift imaging) data, or EPSI (echo planar spectroscopic imaging) data.
(11) Subsequently, in step S2, the NN is applied to the MR data to create the input parameters. As the result of step S2, the input parameters for the subsequent regression procedure are provided, e.g. as a vector or any other data format of initial values.
(12) The input parameters are employed as starting values in the non-linear regression procedure conducted in step S3. The regression procedure comprises an available non-linear regression, such as a least squares optimization (e.g. the Levenberg-Marquardt algorithm, Levmar), a Markov chain Monte Carlo (MCMC) simulation, a Bayesian nonlinear regression, or maximum-likelihood methods. As the result, a parameter map of the sample is created, which is output in step S5, e.g. for consideration by a user, further analysis, subsequent separate diagnostic steps and/or post-processing.
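The sequence S1 to S3 can be sketched for a single voxel as follows; the mono-exponential signal model and the `nn_estimate` stub are illustrative assumptions standing in for the trained estimator, not the patent's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative signal model (assumption): mono-exponential decay S(t) = a*exp(-R*t).
def model(params, t):
    a, R = params
    return a * np.exp(-R * t)

def nn_estimate(signal, t):
    # Stand-in for the trained NN estimator of step S2: here a crude
    # log-linear guess; in the patent this is a learned mapping.
    a0 = signal[0]
    R0 = max(1e-3, -np.polyfit(t, np.log(np.clip(signal, 1e-6, None)), 1)[0])
    return np.array([a0, R0])

# Step S1: simulated "measured" data for one voxel.
t = np.linspace(0.0, 2.0, 50)
true = np.array([1.0, 1.5])
data = model(true, t)

# Step S2: NN-based starting values; step S3: Levenberg-Marquardt refinement.
x0 = nn_estimate(data, t)
fit = least_squares(lambda p: model(p, t) - data, x0, method="lm")
print(fit.x)  # close to [1.0, 1.5]
```

With starting values this close to the truth, the non-linear fit needs only very few iterations, which is the speed-up exploited by the invention.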
(13) Optionally, a verification step S4 can be included, wherein the input parameters and/or the parameter map are verified, e.g. by comparing with reference data or by applying test routines analyzing mapping details. As a result of the verification step S4, if the input parameters and/or the parameter map do not provide a realistic representation of the sample, the configuration of the NN can be changed, e.g., by changing the network architecture, or even the regression procedure can be changed, e.g., from 3-exponential fitting to 4-exponential fitting.
(15) MR data are collected with the MRI scanner 10. The raw data can be subjected to a Fourier transformation (in particular FFT) for generating a series of MR data in an FFT section (not shown) of the MRI scanner 10 (MRI scanner signal acquisition device 10) or the data processing unit 20. The inventive neural network enabled non-linear parameter estimator section 21 is part of the data processing unit 20 (or alternatively part of the MRI scanner 10). From a series of at least two phase/magnitude images (and/or directly from raw data), it provides the best starting values and boundaries for a subsequent non-linear fit by the regression processor 22, which then creates reliable quantitative parameter maps.
(17) ANN 30 comprises an input layer 31, at least one hidden layer 32 and an output layer 33, each with a number of neurons 34. The number of neurons (nodes) and layers is adapted to the specific data that is used. An exemplary functioning network based on ANN 30 can be achieved using 400 neurons 34 and 3 layers 31, 32, 33, given by [100 200 100], where this notation stands for [(neurons in input layer 31) (neurons in hidden layer 32) (neurons in output layer 33)]. Other depths and widths of the network can be used as well. In a practical example, 3 T Z-spectra data are input at the input layer 31, and 9.4 T Lorentzian fit parameters are output at the output layer 33. With the same architecture as the ANN, LSTM network nodes can also be used to realize an LSTM based estimator.
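A minimal forward pass with the [100 200 100] layout above can be sketched in NumPy; the weights are random placeholders and the tanh activation is an assumption, as the text does not specify one.

```python
import numpy as np

rng = np.random.default_rng(0)

# [100 200 100]: 100 input neurons, 200 hidden neurons, 100 output neurons.
W1 = rng.standard_normal((200, 100)) * 0.05   # input  -> hidden
b1 = np.zeros(200)
W2 = rng.standard_normal((100, 200)) * 0.05   # hidden -> output
b2 = np.zeros(100)

def ann_forward(z_spectrum):
    """Map one sampled Z-spectrum (100 points) to 100 output parameters."""
    h = np.tanh(W1 @ z_spectrum + b1)   # hidden layer, tanh activation (assumed)
    return W2 @ h + b2                  # linear output layer

out = ann_forward(rng.standard_normal(100))
print(out.shape)  # (100,)
```

In the practical example above, the 100 inputs would be the sampled 3 T Z-spectrum of one voxel and the outputs the Lorentzian fit parameters used to seed the regression.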
(18) An exemplary CNN 40 can be built in an analogous manner.
(19) The NN is trained using at least one of the following types of training data:
(20) (i) training data provided by the underlying Bloch equations, or derivatives of the Bloch equations, including e.g. diffusion, chemical exchange, flow, pharmacological kinetics, etc., wherein simulation parameters are used as targets 36 and simulated data are used as inputs 35 for the NN training,
(ii) training data provided by real acquired data as inputs 35 and corresponding classical estimations of the parameter maps as targets 36 (as an example, a single volunteer can be used for training, as this is a voxel based approach, so a 3D dataset already provides a large amount of input data with easily 200 000 elements), and
(iii) any variations of the input data 35 of (ii), including real acquired data with added noise, motion and/or fluctuations, with the same targets 36 as in (ii).
(21) The training process is a so-called back-propagation optimization (e.g. scaled conjugate gradient backpropagation, Bayesian regularization backpropagation ([6], [7]), or others) of the free parameters of the network using input vectors and target parameter vectors of a training dataset. The starting point of the optimization uses randomly initialized matrices. To avoid overfitting, at least one of the following strategies is preferred. The first is an early stopping procedure, wherein the training data is randomly divided into a training set (70%), a validation set (15%) and a test set (15%), and the validation set is used to determine an early stopping criterion: if the rmse (root mean squared error) of the validation data does not improve within 5 iterations, the optimization is stopped. The second method is a regularization procedure, which uses a regularization factor γ to add a penalty to large weights and thus avoid overfitting. If mse is the mean squared error of the optimization and msw is the mean of the squared weights, then the optimized function with regularization is msereg = γ*msw + (1−γ)*mse. As an example, in the final network training γ was set to 0.5, but other values are possible depending on the data.
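The data split, early-stopping criterion and regularized objective described above can be sketched as follows; the actual optimizer (scaled conjugate gradient or Bayesian regularization backpropagation) is omitted, and the helper names are illustrative.

```python
import numpy as np

def msereg(errors, weights, gamma=0.5):
    """Regularized objective from the text: msereg = gamma*msw + (1-gamma)*mse."""
    mse = np.mean(np.square(errors))
    msw = np.mean(np.square(weights))
    return gamma * msw + (1.0 - gamma) * mse

def early_stop(val_rmse_history, patience=5):
    """Stop if validation RMSE has not improved within `patience` iterations."""
    if len(val_rmse_history) <= patience:
        return False
    best = min(val_rmse_history[:-patience])
    return min(val_rmse_history[-patience:]) >= best

# Random 70/15/15 split of a training dataset, as in the text.
n = 1000
idx = np.random.default_rng(1).permutation(n)
train, val, test = idx[:700], idx[700:850], idx[850:]

# errors [1, -1] give mse = 1; weights [2] give msw = 4.
print(msereg(np.array([1.0, -1.0]), np.array([2.0])))  # 0.5*4 + 0.5*1 = 2.5
```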
APPLICATION EXAMPLES
(22) 1. Z-Spectrum Fitting in CEST Experiments
(23) Multi-Lorentzian fitting is described as a first example of a multi-parameter nonlinear regression procedure, where NNs can be used for creating regression input parameters.
(24) MR data including CEST contrast information were acquired with the MRI scanner apparatus 100.
(26) To isolate CEST effects, the measured Z-spectra (input MR CEST data) were fitted with the multi-Lorentzian model
Z(Δω) = c − L1 − L2 − L3 − L4   (2)
with each L_x being a Lorentzian function.
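A minimal sketch of the multi-Lorentzian regression of equation (2) follows; the Lorentzian parametrization (amplitude, full width, frequency offset) and all pool values are illustrative assumptions, since the patent's Lorentzian definition is not reproduced in this text, and the scaled starting values stand in for the NN estimate.

```python
import numpy as np
from scipy.optimize import least_squares

def lorentzian(dw, a, gamma, delta):
    # Standard Lorentzian line with amplitude a, full width gamma and
    # center offset delta (assumed parametrization).
    return a * (gamma / 2) ** 2 / ((gamma / 2) ** 2 + (dw - delta) ** 2)

def z_model(p, dw):
    # Equation (2): Z = c - L1 - L2 - L3 - L4, parameters packed as
    # [c, a1, g1, d1, a2, g2, d2, ...].
    c, pools = p[0], np.reshape(p[1:], (-1, 3))
    return c - sum(lorentzian(dw, *pool) for pool in pools)

dw = np.linspace(-6, 6, 121)                       # offsets in ppm
true = np.array([1.0, 0.8, 2.0, 0.0, 0.05, 1.0, 3.5,
                 0.1, 3.0, -3.5, 0.03, 0.8, 2.0])  # 4 illustrative pools
data = z_model(true, dw)

x0 = true * 1.1                                    # stand-in for NN starting values
fit = least_squares(lambda p: z_model(p, dw) - data, x0)
print(np.round(fit.x[0], 3))
```

Seeded close to the solution, the least-squares fit resolves all four pools; a far-off start can instead land in a local minimum of this highly non-linear model.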
(29) Noisy Z-spectra from 3 slices in different brain regions were then used as an input for a 3-layer deep neural network (NN, 400 neurons in total) with the Lorentzian pool parameters fitted from de-noised Z-spectra as target values. Validation was performed by comparing the predicted Lorentzian pool amplitudes to those fitted from de-noised Z-spectra. The dataset was divided randomly into training (70%), validation (15%), and test sets (15%) to avoid overfitting. Trained with about 3000 iterations, the neural network was applied to noisy Z-spectra from other slices in the same volunteer, as well as to data from two additional healthy subjects.
(30) Lorentzian least squares fitting of de-noised, 3D Z-spectra takes approx. 6.5 minutes for 14 slices using parallel computation.
(33) 2. B0 and B1 Mapping—WASABI
(34) B0 and B1 mapping is described as a second example of the inventive MR data processing.
(35) However, the equation describing the outcome is highly non-linear, leading to local minima in the B0 and B1 mapping of brain data at ultra-high field, where field inhomogeneities are strong. The inventors have found that for WASABI B0 and B1 inhomogeneity mapping the non-linear fitting can be strongly improved using the invention, not only by avoiding local minima, but also by reducing the fitting time. It is noted that the off-resonance measurement mentioned here is very similar to the bSSFP profile, and the same advantageous result can be obtained by fitting bSSFP profiles.
(36) In more detail, according to [10] (see also textbooks on physical principles and sequence design in MRI), the Z-magnetization after the rectangular pulse is given by equation (4).
(38) Thus, given the pulse width t_p, this is a model with 4 free parameters: af, c, B1, and δω_a. While af and c only describe the amplitude modulation independent of the frequency offset, the parameter B1 changes the periodicity and the parameter δω_a the symmetry axis of the function.
(39) Equation (4) is fitted to the acquired data; employing the correct scanner frequency and the gyromagnetic ratio γ = 2π·42.578 MHz/T, the WASABI method yields B0 and B1 inhomogeneity maps.
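A sketch of a WASABI-type model function follows, assuming the standard rectangular-pulse Rabi formula from the WASABI literature; the exact amplitude parametrization via af and c in the patent's equation (4) is an assumption, as the equation itself is not reproduced in this text.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.578e6  # gyromagnetic ratio of 1H in rad/(s*T)

def wasabi(dw, af, c, B1, dwa, tp=5e-3):
    """Z-magnetization after an off-resonant rectangular pulse (assumed form).

    dw, dwa: frequency offsets in rad/s; B1 in T; tp: pulse duration in s.
    Free parameters af, c, B1, dwa as in the text: af and c set the amplitude
    modulation, B1 the periodicity, dwa the symmetry axis.
    """
    w1 = GAMMA * B1                               # on-resonance Rabi frequency
    weff = np.sqrt(w1 ** 2 + (dw - dwa) ** 2)     # effective precession frequency
    sin2theta = w1 ** 2 / weff ** 2               # tilt of the effective field
    return np.abs(af - c * sin2theta * np.sin(weff * tp / 2) ** 2)

dw = np.linspace(-2000, 2000, 201) * 2 * np.pi    # offset sweep in rad/s
z = wasabi(dw, af=1.0, c=1.0, B1=3.7e-6, dwa=0.0)
print(z.shape)
```

With dwa = 0 the profile is symmetric about zero offset, while changing B1 compresses or stretches the oscillation pattern, matching the parameter roles described in paragraph (38).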
(43)
  Method                                     Performance
  Levmar, 100 iterations and table lookup    611 seconds, good performance
  Levmar, 100 iterations                     122 seconds, bad performance around nasal cavities
  NN, Levmar 2 iterations                    34 seconds, good performance
  (Levmar: Levenberg-Marquardt algorithm)
(44) 3. Multi-Exponential Relaxation
(45) In relaxation measurements such as T1 or T2 mapping of biological samples, multi-exponential decays can often be observed. Here, the inventive NN based input parameter prediction can likewise provide good initial parameters for non-linear fitting routines. In particular, the inventors have found that for time series data such as multi-exponential decays, NNs can improve the non-linear parameter estimation for MR data of living tissue.
(46) The non-linear function of MR data S for a 3-exponential decay is represented by
S = a1*exp(−R1*t) + a2*exp(−R2*t) + a3*exp(−R3*t)   (5)
where the a_i are the compartments (representing volumes including matter with a certain relaxation rate) and the R_i are the relaxation rates in each compartment. According to the invention, an NN trained on multi-exponential target data is used for predicting input parameters for the regression, providing a_i and R_i based on measured MR data.
(47) S = 0.7*exp(−R1*t) + 0.1*exp(−R2*t) + 0.2*exp(−R3*t)
(48) For conventionally estimated poor input parameters, fitting the same data results in local minima fits that yield wrong estimation of at least one compartment/relaxation rate.
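A minimal sketch of fitting the 3-exponential model of equation (5) with starting values close to the truth, as the NN estimator would provide; the compartment amplitudes follow the example above, while the rate values and the naive starting point are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def tri_exp(p, t):
    # Equation (5): S = a1*exp(-R1*t) + a2*exp(-R2*t) + a3*exp(-R3*t)
    a1, R1, a2, R2, a3, R3 = p
    return a1*np.exp(-R1*t) + a2*np.exp(-R2*t) + a3*np.exp(-R3*t)

t = np.linspace(0.0, 5.0, 200)
true = np.array([0.7, 8.0, 0.1, 1.0, 0.2, 0.1])   # a1,R1,a2,R2,a3,R3 (rates illustrative)
data = tri_exp(true, t)

def fit(x0):
    return least_squares(lambda p: tri_exp(p, t) - data, x0, method="lm")

good = fit(true * 1.05)                            # NN-like start close to truth
poor = fit(np.array([0.33, 0.1, 0.33, 1.0, 0.33, 10.0]))  # naive start
print(good.cost, poor.cost)
```

The fit seeded near the true parameters recovers all three compartments essentially exactly, illustrating why accurate starting values matter for this strongly non-linear, nearly degenerate problem.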
(49) Since in MR data of biological samples the compartments and relaxation rates, as well as the actual number of compartments, can change strongly between different tissues, the invention allows for fast and robust avoidance of local minima when fitting multi-exponential decay or recovery data. In addition, the number of compartments, and thus the actually required fit model, can be estimated by the invention.
(50) 4. Further Applications
(51) Further applications of the invention are available in exponential T1, T2 mapping, multi-exponential T1 and T2 spectroscopy, spectroscopic imaging, metabolite concentration mapping, compartmental mapping of parameters (partial volume, fat-water, pharmacokinetics, etc.), ADC mapping for varying b-values, Kurtosis, perfusion and dynamic contrast enhanced imaging (modeling of input function; glucose, GAG, OMG, etc.), MR-CEST imaging (Z-spectra by multi-Lorentzian, or Henkelman-based water, MT and CEST pool fits), field mapping (B1, B0), e.g. Bloch-Siegert, multi-flip angle, DREAM, WASABI, and/or motion, breathing and pulsation measurements with known non-linear influence.
(52) The features of the invention disclosed in the above description, the drawings and the claims can be of significance both individually as well as in combination or sub-combination for the realization of the invention in its various embodiments. The invention is not restricted to the preferred embodiments described above. Rather a plurality of variants and derivatives is possible which also use the inventive concept and therefore fall within the scope of protection. In addition, the invention also claims protection for the subject and features of the subclaims independently of the features and claims to which they refer.