ACCELERATED MAGNETIC RESONANCE THERMOMETRY
20220196771 · 2022-06-23
Inventors
CPC classification
G01R33/5611
PHYSICS
G01R33/561
PHYSICS
G01R33/5608
PHYSICS
A61B5/055
HUMAN NECESSITIES
G01R33/50
PHYSICS
G01R33/4818
PHYSICS
G01R33/5615
PHYSICS
A61B5/4836
HUMAN NECESSITIES
International classification
G01R33/56
PHYSICS
A61B5/055
HUMAN NECESSITIES
G01R33/50
PHYSICS
G01R33/561
PHYSICS
Abstract
Systems and methods provide accelerated MR thermometry utilizing prior knowledge about the images to be reconstructed from incomplete k-space data, thereby facilitating accurate reconstruction. In various embodiments, missing data is computationally estimated using a machine learning algorithm such as a neural network, and an image is generated based on iteratively updated estimated missing information.
Claims
1. An imaging system comprising: a magnetic resonance imaging (MRI) apparatus configured to: (i) execute a first multi-echo pulse sequence to excite a target region; (ii) acquire a plurality of baseline k-space images of the target region, each baseline image being associated with an echo in the first multi-echo pulse sequence; (iii) execute a second multi-echo pulse sequence to excite the target region; and (iv) acquire a second plurality of k-space images, each of the second plurality of k-space images being associated with an echo in the second multi-echo pulse sequence and at least one of the second plurality of k-space images being under-sampled so as to be missing information compared to the baseline k-space images; and a computation unit configured to reconstruct a target image based at least in part on at least one baseline k-space image and at least one of the second plurality of k-space images.
2. The system of claim 1, wherein the target image is one of a thermal map, a susceptibility-weighted image, a T.sub.2* image, a micro-bleeding image, or a BOLD f-MRI image.
3. The system of claim 1, wherein the computation unit comprises a neural network.
4. The system of claim 3, wherein the neural network is a convolutional neural network.
5. The system of claim 3, wherein the computation unit is configured to, for at least one of the second plurality of k-space images, (i) computationally estimate the information missing therefrom using the neural network, and (ii) generate a target image based at least in part on the estimated missing information.
6. The system of claim 3, wherein the computation unit is configured to, for at least one of the second plurality of k-space images, (i) computationally estimate a corresponding anatomical image using the neural network, and (ii) generate a target image based at least in part on the estimated anatomical image.
7. The system of claim 3, wherein the computation unit is configured to, for at least one of the second plurality of k-space images, (i) estimate the information missing therefrom and reconstruct a corresponding anatomical image, (ii) computationally correct the anatomical image using the neural network, and (iii) generate a target image based at least in part on the corrected anatomical image.
8. The system of claim 1, wherein the computation unit is configured to, for at least one of the second plurality of k-space images, (i) computationally estimate the information missing therefrom, (ii) computationally update the estimated missing information based at least in part on an echo time associated therewith, and (iii) generate a map based at least in part on the updated estimated missing information, the at least one of the second plurality of k-space images corresponding thereto, and the baseline k-space image corresponding to the at least one of the second plurality of k-space images.
9. The system of claim 8, wherein the map comprises at least one of an anatomical image or a thermal map of the target region.
10. The system of claim 8, wherein the computation unit is further configured to computationally estimate the missing information based at least in part on a default value, a portion of one of the corresponding baseline k-space images, or a different one of the second plurality of k-space images.
11. The system of claim 8, wherein each one of the baseline k-space images comprises a plurality of pixels, the computation unit being further configured to update the estimated missing information based at least in part on a magnitude associated with a pixel in one of the baseline k-space images.
12. The system of claim 1, further comprising a first and a second MRI coil, wherein each of the baseline k-space images and each of the second plurality of k-space images is associated with at least one of the first or second MRI coil.
13. The system of claim 8, wherein the computation unit is further configured to (a) computationally reconstruct a plurality of baseline images from the plurality of baseline k-space images, and (b) for each one of the second plurality of k-space images, computationally reconstruct an image in a third image set based at least in part on the computationally estimated missing information associated therewith.
14. The system of claim 13, wherein each one of the baseline k-space images is acquired from an echo in response to the first multi-echo pulse sequence and each one of the second plurality of k-space images is acquired from an echo in response to the second multi-echo pulse sequence, the computation unit being further configured to correspond each reconstructed image in the third image set to one of the reconstructed baseline images based at least in part on echo times of the echoes associated therewith.
15. The system of claim 14, further comprising a plurality of MRI coils, each of the baseline k-space images and each of the second plurality of k-space images being associated with at least one of the MRI coils, the computation unit being further configured to relate each reconstructed image in the third image set to one of the reconstructed baseline images based at least in part on the associated at least one MRI coil.
16. The system of claim 14, wherein the echo time associated with the reconstructed image in the third image set is the same as the echo time associated with the corresponding reconstructed baseline image.
17. The system of claim 14, wherein the computation unit is further configured to (c) determine a phase difference between a pixel in one of the reconstructed baseline images and the corresponding pixel in the corresponding reconstructed image in the third image set.
18. The system of claim 17, wherein the computation unit is further configured to (d) update the phase difference based at least in part on the echo time of the k-space image from which said corresponding reconstructed image in the third image set is reconstructed.
19. The system of claim 18, wherein the phase differences associated with the reconstructed images in the third image set positively correlate with the echo times of the second plurality of k-space images.
20. The system of claim 18, wherein the computation unit is further configured to update the phase difference based at least in part on the echo times associated with at least two different images in the second plurality of k-space images.
21. The system of claim 18, wherein the computation unit is further configured to (e) update said corresponding reconstructed image in the third image set based at least in part on the updated phase difference, and (f) transform the updated image to a k-space image in a fourth image set.
22. The system of claim 17, wherein the computation unit is further configured to (g) update the k-space image in the fourth image set based at least in part on the corresponding image in the second plurality of k-space images acquired at a same echo time.
23. The system of claim 22, wherein the computation unit is further configured to iteratively perform steps (b)-(g) until a termination condition is satisfied.
24. The system of claim 23, wherein the termination condition corresponds to one or more of: a number of iterations exceeding a predetermined limit, or a change in the updated k-space image in the fourth image set or in the reconstructed image in the third image set between two iterations being below a predetermined minimum.
25. The system of claim 22, wherein the computation unit is further configured to execute an artificial neural network for performing steps (b)-(g) at least two times.
26. The system of claim 22, wherein the computation unit is further configured to computationally reconstruct an image in a fifth image set based at least in part on the updated k-space image in the fourth image set and computationally reduce noise from the reconstructed image in the fifth image set.
27. The system of claim 26, wherein the computation unit is further configured to apply a locally low-rank regularization to at least two of the images in at least one of the third image set or the fifth image set for reducing the noise therein.
28. A method of magnetic resonance imaging, the method comprising: executing a first multi-echo pulse sequence to excite a target region; acquiring a plurality of baseline k-space images of the target region, each baseline image being associated with an echo in the first multi-echo pulse sequence; executing a second multi-echo pulse sequence to excite the target region; acquiring a second plurality of k-space images, each of the second plurality of k-space images being associated with an echo in the second multi-echo pulse sequence and at least one of the second plurality of k-space images being under-sampled so as to be missing information compared to the baseline k-space images; and reconstructing a target image based at least in part on at least one baseline k-space image and at least one of the second plurality of k-space images.
29-51. (canceled)
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, with an emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments of the present invention are described with reference to the following drawings, in which:
DETAILED DESCRIPTION
[0038] The present invention provides systems and methods for monitoring a target (e.g., a treatment target) or other objects of interest in a region of interest in real time during an image-guided treatment/diagnosis procedure. The procedure may, for example, involve the application of focused ultrasound to (i.e., the sonication of) a material, a tissue or organ for the purpose of heating it, either to necrose, ablate, or otherwise destroy the tissue if it is, e.g., cancerous, or for non-destructive treatments such as pain amelioration or the controlled inducement of hyperthermia. Ultrasound may also be used for other, non-thermal types of treatment, such as, e.g., neuromodulation. Alternatively, the procedure may use different forms of therapeutic energy, such as, e.g., radio-frequency (RF) radiation, X-rays or gamma rays, or charged particles, or involve other treatment modalities such as cryoablation. Monitoring the temperature and/or location of the target in various treatment procedures may serve to guide the therapeutic energy beam to the target and/or around other, non-target tissues and organs, i.e., to adjust the beam focus, profile, and/or direction based on images of the affected anatomical region, which may, in some embodiments, also visualize the beam focus. MRI is a widely used technique for such image-based thermal and/or motion tracking. However, other imaging techniques, including, e.g., X-ray imaging, X-ray computed tomography (CT), or ultrasound imaging, may also be used and are within the scope of the present invention. In addition, the temperature and/or motion monitoring may be achieved using one or more two-dimensional images and/or three-dimensional images. MRI systems in which the techniques described herein may be implemented are well-known in the art; an exemplary system is shown in
[0039] In some embodiments, imaging during a procedure is simultaneously used to quantitatively monitor in vivo temperatures. This is particularly useful in MR-guided thermal therapy (e.g., MRgFUS treatment), where the temperature of a target treatment area (e.g., a tumor to be destroyed by heat) should be continuously monitored in order to assess the progress of treatment and correct for local differences in heat conduction and energy absorption to avoid damage to tissues surrounding the treatment area. The monitoring (e.g., measurement and/or mapping) of temperature with MR imaging is generally referred to as MR thermometry or MR thermal imaging.
[0040] Among various methods available for MR thermometry, the proton resonance frequency (PRF) shift method is often the method of choice due to its excellent linearity with respect to temperature change, near-independence from tissue type, and temperature map acquisition with high spatial and temporal resolution. The PRF shift method is based on the phenomenon that the MR resonance frequency of protons in water molecules changes linearly with temperature (with a constant of proportionality that, advantageously, is relatively constant between tissue types). Since the frequency change with temperature is small, only −0.01 ppm/° C. for bulk water and approximately −0.0096 to −0.013 ppm/° C. in tissue, the PRF shift is typically detected with a phase-sensitive imaging method in which the imaging is performed twice: first to acquire a baseline PRF phase image prior to a temperature change and then to acquire a second phase image after the temperature change—i.e., a treatment image—thereby capturing a small phase change that is proportional to the change in temperature. A map of temperature changes may then be computed from the (reconstructed) images by determining, on a pixel-by-pixel basis, phase differences between the baseline image and the treatment image, and converting the phase differences into temperature differences based on the PRF temperature dependence while taking into account imaging parameters such as the strength of the static magnetic field and echo time (TE) (e.g., of a gradient-recalled echo).
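The pixel-wise PRF conversion described above can be sketched as follows. This is a minimal illustration rather than the patented implementation; the function and constant names are ours, and the −0.01 ppm/°C coefficient is the bulk-water value quoted above:

```python
import numpy as np

# PRF change coefficient (bulk-water value quoted in the text) and the
# proton gyromagnetic ratio in Hz/T; both are assumptions of this sketch.
ALPHA_PPM_PER_C = -0.01
GAMMA_HZ_PER_T = 42.576e6

def prf_temperature_change(baseline, treatment, b0_tesla, te_seconds):
    """Convert the pixel-wise phase difference between complex baseline and
    treatment images into a temperature-change map (deg C)."""
    # Multiplying by the conjugate baseline yields the phase difference
    # without an explicit per-pixel phase subtraction.
    dphi = np.angle(treatment * np.conj(baseline))
    # delta_phi = 2*pi * gamma * (alpha * 1e-6) * B0 * TE * delta_T
    denom = 2 * np.pi * GAMMA_HZ_PER_T * (ALPHA_PPM_PER_C * 1e-6) * b0_tesla * te_seconds
    return dphi / denom
```

Note that np.angle wraps to (−π, π], so heating large enough to wrap the phase would require unwrapping, which this sketch omits.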
[0041] In typical MR images acquired using the single-echo GRE pulse sequence, a spatial shift Δr caused by field inhomogeneity may exist. Increasing the bandwidth of the MR receiver tends to reduce the spatial shift, but at the price of increasing the standard deviation (or the measurement error) of the temperature. As a result, choice of the bandwidth reflects a trade-off between minimizing the spatial shift and reducing the measurement error. To address this challenge, in some embodiments, the MRI system uses receivers with high bandwidth and applies a multi-echo GRE pulse sequence for PRF thermometry. For example, referring to
[0042] Accordingly, the fully sampled k-space data set depicted in
[0044] Thus, MR scanning utilizing the approaches described above may require acquisition of only N.sub.1/R (where R represents the imaging acceleration factor and is equal to 2 and 6 in
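The under-sampling itself can be illustrated with a simple Cartesian phase-encode mask keeping every R-th line; the optional fully sampled center block is a common practical addition assumed here, not something specified in the text:

```python
import numpy as np

def undersampling_mask(n_lines, r, n_center=0):
    """Boolean mask over N1 phase-encoding lines keeping every r-th line
    (about N1/r acquired lines), optionally plus a fully sampled center."""
    mask = np.zeros(n_lines, dtype=bool)
    mask[::r] = True  # acquire every r-th phase-encoding line
    c = n_lines // 2
    mask[c - n_center // 2 : c + n_center // 2] = True
    return mask
```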
[0045] It should be noted that the k-space data trajectories including multiple echoes in each TR 410 as illustrated in
[0046] Referring again to
K_treatment(k_y, k_x, echo) ↔ X_treatment(y, x, echo) Eq. (1)
[0047] In some embodiments, the iterative estimation procedure then imposes one or more constraints on the datasets so as to accurately estimate the missing k-space data. In one embodiment, the constraints are based on prior knowledge about the treatment images. The signal of an echo E.sub.i, S.sub.i,baseline, at a pixel (x, y) on the baseline image that has a magnitude ρ.sub.i (x, y) and a phase ϕ.sub.i (x, y) can be represented as:
S.sub.i,baseline=ρ.sub.i(x, y).Math.exp[iϕ.sub.i(x, y)] Eq. (2)
When the temperature increases ΔT in the region of interest during treatment, the signal of an echo E.sub.i, S.sub.i,treatment, at the pixel (x, y) of the treatment image can be represented as:
S.sub.i,treatment=S.sub.i,baseline.Math.exp(iΔϕ.sub.i)=S.sub.i,baseline.Math.exp(i2πKΔT.Math.TE.sub.i) Eq. (3)
where K=−0.01 ppm/° C. Thus, the phase difference of the echo E.sub.i (i.e., Δϕ.sub.i) between the treatment image and the corresponding baseline image may positively (e.g., linearly) correlate to the echo time TE.sub.i of the echo:
Δϕ_i = ∠(S_i,treatment·S*_i,baseline) = 2πKΔT·TE_i Eq. (4)
[0048] Accordingly, the first constraint imposed by the iterative estimation procedure 500 may be the linear relationship between the echo time TE and the phase difference between the treatment image and corresponding baseline image. Referring again to
Δϕ_new = F_0(y, x) + F_1(y, x)·TE ≈ Δϕ(y, x, echo) Eq. (5)
where F.sub.0 and F.sub.1 are two coefficients independent of TE. For example, the signal of an echo Ej associated with the baseline image may be represented as:
ρ.sub.j,baseline.Math.exp[iϕ.sub.j,baseline] Eq. (6)
The signal of the echo Ej associated with the treatment image may be represented as:
ρ.sub.j,treatmentexp[iϕ.sub.j,treatment]. Eq. (7)
Thus, for each location (x, y), a vector d.sub.j may be defined for echoes E.sub.j (j=1 to N):
d_j ≡ ρ_j,treatment·exp[i(ϕ_j,treatment − ϕ_j,baseline)] Eq. (8)
Based on Eq. (4), (ϕ.sub.j,treatment−ϕ.sub.j,baseline) can be represented as:
ϕ.sub.j,treatment−ϕ.sub.j,baseline=Δϕ.sub.j=2πKΔT.Math.TE.sub.j=F.sub.0+F.sub.1.Math.TE.sub.j Eq. (9)
Subsequently, we can define a vector v.sub.k with N−1 components based on the vector d.sub.j in Eq. (8):
v.sub.k≡d.sub.j+1.Math.d*.sub.j=ρ.sub.j,treatment.sup.2.Math.exp(iF.sub.1.Math.ΔTE), where j and k=1 to N−1 (10)
where ΔTE represents a TE increment between two neighboring echoes. Because the phase in Eq. (10) is independent of k, F.sub.1 can be represented as:
F_1 = ∠[mean(v)]/ΔTE Eq. (11)
where ∠ represents the four-quadrant arctangent operator and mean represents the mean over all the components of v.
[0049] In addition, we can define another vector u with N components based on the vector d and the known F.sub.1. The jth component of u, u.sub.j, is given by:
u.sub.j≡d.sub.j.Math.exp(−iF.sub.1.Math.TE.sub.j)=ρ.sub.j,treatment.Math.exp(iF.sub.0), j=1 to N
[0050] Because the phase of u.sub.j is independent of j, F.sub.0 can be represented as:
F_0 = ∠[mean(u)] Eq. (12)
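The estimation of F_1 and F_0 from the per-pixel vectors v and u (Eqs. (10)-(12)) can be sketched for a single pixel as follows; the function name is illustrative, and a uniform TE increment between echoes is assumed, as in the text:

```python
import numpy as np

def estimate_f0_f1(s_baseline, s_treatment, te):
    """Estimate the TE-independent coefficients F0 and F1 at one pixel from
    N baseline/treatment echo signals (1-D complex arrays)."""
    te = np.asarray(te, dtype=float)
    dte = te[1] - te[0]  # assumes a uniform TE increment between echoes
    # d_j carries the treatment magnitude and the baseline/treatment
    # phase difference (Eq. 8).
    d = s_treatment * np.exp(-1j * np.angle(s_baseline))
    # v_k = d_{j+1} * conj(d_j) has phase F1*dTE, independent of k (Eq. 10).
    v = d[1:] * np.conj(d[:-1])
    f1 = np.angle(np.mean(v)) / dte
    # u_j = d_j * exp(-i*F1*TE_j) has phase F0, independent of j.
    u = d * np.exp(-1j * f1 * te)
    f0 = np.angle(np.mean(u))
    return f0, f1
```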
[0051] Based on Eqs. (11) and (12), the two coefficients, F.sub.0 and F.sub.1, can be computed. As further described below, accurate values of F.sub.0 and F.sub.1 may be obtained after reasonable convergence of the estimated data is achieved. In various embodiments, the new phase differences estimated based on F.sub.0 and F.sub.1 together with the phases in the corresponding baseline image may then be utilized to update the phases of the pixels in the treatment image on a pixel-by-pixel basis (in step 508). For example, the signal of an echo Ej in the treatment image may be updated as:
ρ.sub.j,treatment.Math.exp[iϕ.sub.j,baseline].Math.exp[i(F.sub.0+F.sub.1.Math.TE.sub.j)] Eq. (13)
[0052] In addition, because the temperature increase resulting from the ultrasound procedure affects mostly the phases of the pixels in the treatment images, the iterative estimation procedure may impose the second constraint, which sets the magnitude of each pixel in the treatment image to be the same as that in the corresponding baseline image (in step 510):
|X_treatment(y, x, echo)| → |X_baseline(y, x, echo)| Eq. (14a)
Additionally or alternatively, the fact that the spin density ρ.sub.treatment decreases during heating by 10%-15% relative to ρ.sub.baseline (due to lengthening of T1 by about 25% resulting from heating to 20° C. above body temperature) can be taken into account. Hence the signal of echo j during treatment in Eq. (13) can be written as
S_j,treatment = ρ_j,baseline·A·exp(iϕ_j,baseline)·exp[i(F_0 + F_1·TE_j)] Eq. (14b)
where A (which is smaller than 1) is a real positive number that represents this decrease in signal amplitude. A depends on x, y but is independent of echo time. From Eqs. (6) and (14b) and the fact that A is the same for all TE, we conclude that
A = mean(|S_j,treatment| / |S_j,baseline|) Eq. (14c)
where S_j,treatment and S_j,baseline are the treatment and baseline signals at echo j and |·| represents the magnitude of the signal inside the brackets. The mean in Eq. (14c) is the mean over all the echoes and coils, since A is the same for all echoes and coils. During each iteration we calculate A using Eq. (14c) and then substitute it in Eq. (14b). Since A is spatially slowly varying, a standard Total Variation (TV) operator may be used to smooth it and improve its accuracy. The calculated signal S_j,treatment in Eq. (14b) is imposed in step 510 in
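The amplitude factor A can be computed per pixel as the mean magnitude ratio over echoes and coils, as in the following short sketch (the array layout and names are our choices; the TV smoothing step is omitted):

```python
import numpy as np

def amplitude_factor(s_baseline, s_treatment):
    """Per-pixel amplitude ratio A, averaged over all leading (echo, coil)
    axes of arrays shaped (..., y, x), since A is TE- and coil-independent."""
    ratio = np.abs(s_treatment) / np.abs(s_baseline)
    lead_axes = tuple(range(ratio.ndim - 2))  # every axis except (y, x)
    return ratio.mean(axis=lead_axes)
```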
[0053] After the above constraints are imposed, the iterative estimation procedure may apply a Fourier transform to the updated treatment images to generate updated k-space data sets (in step 512). In some embodiments, the iterative estimation procedure imposes a third constraint that replaces the updated k-space data sets by the originally acquired, under-sampled k-space data (in step 514):
K(k_ys, k_xs, echo) → K_samp(k_ys, k_xs, echo) Eq. (15)
where K.sub.samp represents the originally acquired k-space data at various locations, k.sub.ys and k.sub.xs. Thus, after step 514, the k-space data sets include the data acquired from the echoes as well as the data estimated based at least in part on the echo times as described above. After step 514, the k-space data can be inverse Fourier-transformed to reconstruct a treatment image (in step 516). Steps 504-516 may be iteratively performed until a termination condition (e.g., the image difference between two consecutive iterations is below a predetermined threshold or a certain number of iterations have been performed) is satisfied, indicating that the missing k-space data has been properly estimated. Thus, by imposing the constraints in a repeated, alternating manner as described above, a proper estimation of the missing k-space data can be achieved. In some embodiments, the iterative estimation procedure may use a convolutional or recurrent neural network trained to determine the missing k-space data (see, e.g., https://arxiv.org/pdf/1712.02862.pdf, which is incorporated herein by reference).
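The alternating loop of steps 504-516 can be sketched as a projection-onto-constraints iteration. This simplified version imposes only the baseline-magnitude constraint and k-space data consistency; the full procedure also re-fits the phase model F_0 + F_1·TE on each pass, which is omitted here:

```python
import numpy as np

def pocs_reconstruct(k_undersampled, mask, baseline_image, n_iter=50, tol=1e-6):
    """Alternate between an image-domain magnitude constraint and k-space
    data consistency until the k-space estimate stops changing."""
    k = k_undersampled.copy()  # missing samples start at zero
    prev = None
    for _ in range(n_iter):
        img = np.fft.ifft2(k)
        # Constraint: keep the estimated phase, impose the baseline magnitude.
        img = np.abs(baseline_image) * np.exp(1j * np.angle(img))
        k = np.fft.fft2(img)
        # Data consistency: restore the originally acquired samples.
        k[mask] = k_undersampled[mask]
        if prev is not None and np.linalg.norm(k - prev) < tol * np.linalg.norm(k):
            break
        prev = k.copy()
    return np.fft.ifft2(k)
```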
[0054] Referring again to
[0055] It should be noted that the second constraint that sets the magnitude of each pixel in the treatment image to be the same as that in the corresponding baseline image may be relaxed in some embodiments. For example, this constraint may be imposed in the first few iterations (e.g., 1-5 iterations) only; after reasonable convergence of the estimated data is achieved, this constraint may be removed (i.e., step 510 may be skipped after the first few iterations). Thus, the magnitudes of the pixels in the treatment images may be proximate to, but not necessarily identical to, those in the corresponding baseline reference images.
[0056] Subsequently, the treatment images may be compared against the corresponding baseline images to computationally generate a temperature map of the region of interest, thereby providing real-time monitoring of the temperature change during the ultrasound procedure (step 310). In one embodiment, the temperature map is generated by weighting the computed temperature change, ΔT.sub.i, associated with each echo E.sub.i in the echo train using a weighting function W. As described above, the temperature change, ΔT.sub.i, may be computed based on the phase difference, Δϕ.sub.i, between the image acquired from echo E.sub.i (at echo time TE.sub.i) and its corresponding baseline image:
ΔT_i = Δϕ_i/(2πK·TE_i)
A weighted combination of the temperature change, ΔT.sub.i, associated with each echo may then be computed to determine the temperature change associated with the treatment image:
ΔT = Σ_i W(TE_i)·ΔT_i / Σ_i W(TE_i) Eq. (16)
Steps 306-310 may be iteratively performed throughout the ultrasound procedure for monitoring the region of interest during the procedure.
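The weighted combination of per-echo temperature maps can be sketched as follows; the weighting function W is design-dependent (e.g., signal- or TE-based), so the weights here are simply passed in:

```python
import numpy as np

def combine_echo_temperatures(dT_per_echo, weights):
    """Normalized weighted sum of per-echo temperature-change maps.
    dT_per_echo: array (n_echoes, y, x); weights: length n_echoes."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the combined map stays in deg C
    return np.tensordot(w, dT_per_echo, axes=1)
```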
[0057] Accordingly, various embodiments of the present invention may efficiently increase the MR imaging rate (e.g., by under-sampling the k-space data in one or more echoes in a multi-echo GRE pulse sequence) while providing images with sufficient resolution (e.g., by properly determining the missing (or non-acquired) k-space data using the constraint(s) and iterative estimation approach described above) for monitoring the region in real time during the ultrasound procedure.
[0058] In the examples given above, only a single slice is excited in a given TR. Hence, when a total scan time of 3 seconds is desired, with an acceleration of R, the number of TR's is reduced by a factor of R; as a result, the TR increases by a factor of R. For example, as shown above, TR increases from 27 to 142 milliseconds when R=6; in addition, SNR increases by ˜1.9. An alternative approach in accordance with various embodiments excites all the slices simultaneously during each TR. During RF excitation, each slice is excited by a different phase, and the RF waveform is a linear combination of the waveforms required to excite each slice separately. If R slices are excited simultaneously, they overlap one another. To separate them, R waveforms may be applied to acquire R datasets, where the phase of each slice changes linearly from waveform to waveform in a known way. Finally, the slices may be separated by applying an inverse Fourier transform or Hadamard transform on the R datasets. In this case, a short TR of 27 milliseconds may be used to excite all R waveforms. However, since all the slices are simultaneously excited, the SNR increases by √R compared to a single slice with TR=27 milliseconds. This is very beneficial for a large R. For example, for R=8, the SNR gain is √8≈2.8.
For example, for R=2, where the phase of the second slice is inverted between the two acquisitions S.sub.n1 and S.sub.n2, the slices may be separated as:
Slice.sub.1=(S.sub.n1+S.sub.n2)/2,
Slice.sub.2=(S.sub.n1−S.sub.n2)/2.
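For the two-slice case, the separation above is a 2×2 Hadamard transform, sketched below (S_n1 and S_n2 are the two acquisitions, with the second slice phase-inverted in S_n2):

```python
import numpy as np

def separate_two_slices(s_n1, s_n2):
    """Hadamard separation of two simultaneously excited slices from two
    acquisitions: in-phase (s_n1) and with slice 2 phase-flipped (s_n2)."""
    slice1 = (s_n1 + s_n2) / 2.0
    slice2 = (s_n1 - s_n2) / 2.0
    return slice1, slice2
```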
[0059] Employing the approaches described above for accelerating the MR imaging rate may advantageously increase the spatial resolution of the images while acquiring a smaller number of slices for each temperature measurement compared to conventional MR scanning approaches. For example, when R=6, it may be possible to acquire three MR imaging slices, each having 256 phase-encoding steps (i.e., N.sub.1=256). The images having more phase-encoding steps may then provide a higher spatial resolution compared to the conventional MR images where 128 phase-encoding steps are used. In addition, when desired, the above-described approaches may increase the time resolution of the acquired images (i.e., reduce the time required for each temperature measurement). For example, when R=6, it may be possible to acquire an imaging dataset that has 3 slices in 1.6 seconds. This acquisition rate is faster compared to the conventional approaches where 3 seconds are required to acquire an imaging dataset having one slice.
[0060] The approach set forth in Eqs. (14a)-(14c) can be extended. In one embodiment, where the baseline image is not stable due to field map variations or movement of the water, several baseline images are collected and the treatment phase is fitted with a linear combination of the baseline images. In some embodiments, the approach can be formulated as a cost function to be minimized:
g = ∥y − E{Σ_n^N_b w_n·x_n}∥_2^2 Eq. (17)
where y represents the acquired k-space data, E the encoding (sampling Fourier) operator, w_n the weights of the linear combination, and N_b the number of baselines.
[0061] One example of such a model is where the baseline image is not stable (e.g., due to water movement) and some relaxation of T2* occurs upon heating (R2*=1/T2*). In this case, x can be computed as
x.sub.n,j=(Σ.sub.l.sup.Nbw.sub.l,jρ.sub.l,j).Math.A.Math.exp(iϕ.sub.j,baseline).Math.exp[i(F.sub.0+F.sub.1.Math.TE.sub.j)].Math.exp[−R*.sub.2.Math.TE.sub.j] Eq. (18)
where ρ.sub.l,j is the l-th baseline magnitude at echo j and N.sub.b is the number of baselines. Regularization terms can be applied to F.sub.1 and/or F.sub.0 and/or R.sub.2* and/or the amplitude A, across the measured spatial locations (either 2D or 3D). Other regularization terms, such as locally low-rank (LLR) regularization, can be implemented along the echo train. Additionally or alternatively, the LLR can be applied to the difference between the treatment data and the baseline. Additionally or alternatively, spatio-temporal correlations can be considered using temporally constrained reconstruction (TCR).
[0062] Another approach to accommodating missing k-space data is to use a trained neural network, such as a convolutional neural network (CNN) or another architecture that may include fully connected layers, a recurrent neural network (RNN), a “neural in neural” (NiN) network, or others. As is well known, neural networks process information in a manner similar to the human brain. A neural network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, worse, the network might function incorrectly. Neural networks that operate on the principle of “supervised learning” are trained using labeled input images (with the label specifying a characteristic relevant to the output, e.g., a classification).
[0063] In some embodiments, the neural network learns a mapping from aliased images in k-space to artifact-free images in k-space. In one image-driven method for network training, the loss function L includes a regularization term, e.g., L.sub.imag=∥x−y∥.sub.n.sup.n+λRe(x, θ). In this case, n is either 1 or 2 and the regularization parameter, Re(x, θ), can be derived from relevant prior knowledge about the probability distribution function. In one embodiment, Tikhonov regularization is used for data smoothness.
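A loss of this shape can be sketched as below; the finite-difference smoothness term is one possible Tikhonov-style choice for the regularizer Re(x, θ), not the patent's specific one:

```python
import numpy as np

def training_loss(x, y, lam=1e-3, n=2):
    """L = ||x - y||_n^n + lambda * R(x), with R(x) the sum of squared
    finite differences of x (a Tikhonov-style smoothness prior)."""
    fidelity = np.sum(np.abs(x - y) ** n)
    smooth = np.sum(np.abs(np.diff(x, axis=-1)) ** 2) \
        + np.sum(np.abs(np.diff(x, axis=-2)) ** 2)
    return fidelity + lam * smooth
```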
[0064] In other supervised learning approaches, knowledge of the encoding operator E_j{·}, which maps an image to the acquired k-space samples of echo j, is exploited through a data-consistency loss, e.g.,
L_us = ∥E_j{x} − y_j∥_n^n
where y_j represents the originally acquired, under-sampled k-space data of echo j.
[0065] The encoding operator E.sub.j, which represents the k trajectories, can also include free parameters in the neural network architecture (NNA) and can be learned. In some embodiments, the k-space data trajectory of different echoes can be different. In one embodiment, physical constraints are added to the gradient trajectories as loss functions, allowing the k sampling trajectories to be learned. The E.sub.j sampling pattern can be learned from spiral, radial, or other geometric pattern trajectories while allowing parameters that characterize the trajectories to be learned by the neural network.
[0066] In addition or alternatively, the actual k trajectories can be calculated based on a physical model that includes parameters for eddy current effects, concomitant gradient effects, and/or B.sub.0 inhomogeneities. The neural network can receive prescan data (such as a B.sub.0 field map or pre-scan gradient delay measurements) as input. Together with the fully sampled baseline trajectories, an auto-correction for the k trajectories can be achieved. In one embodiment, the k trajectories are corrected by varying the gradient waveforms prior to the treatment scan. In addition or alternatively, the neural network used to correct k trajectories is integrated within the overall NNA.
[0067] In addition or alternatively, another physically driven loss function can be used, namely, one based on the field map parameters that may be constructed from the echo images. In one embodiment, TMAP(ΔT), calculated based on Eq. (16), is used as the ground truth compared against the reconstructed TMAP(ΔT̂) from the NNA, where TMAP is the thermal map in real space obtained using the calculated ΔT at each pixel:
L_ΔT = ∥TMAP(ΔT) − TMAP(ΔT̂)∥
[0068] In one embodiment, the UNet CNN architecture is employed. The N.sub.coils under-sampled k-data can be used as the input data together with the N.sub.coils baseline k-data. In one embodiment, zero filling is set at the missing k-data points. In the case of non-Cartesian k sampling, gridding can be employed. Then the inverse Fourier transform of the k-data to real space is obtained. In one embodiment, the CNN is used iteratively, where the loss function L.sub.imag is active within the CNN steps, but between CNN steps, data-consistency loss terms L.sub.us and L.sub.bl are used. In another embodiment, an end-to-end CNN can proceed from k-space to the TMAP directly. Training data may be generated using realistic and detailed Bloch simulations, which account for a wide variety of realistic features: phantom shape, magnetic field drift, RF interference, eddy currents, different temperature patterns, shape, size, scale, movements between baseline and treatment, and other features.
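Preparing the zero-filled multi-coil input described in this paragraph might look like the following sketch (Cartesian sampling assumed; non-Cartesian data would be gridded first):

```python
import numpy as np

def zero_filled_input(k_data, mask):
    """Zero-fill missing k-space samples, then inverse-Fourier-transform each
    coil to real space. k_data: (n_coils, ky, kx); mask: (ky, kx) boolean."""
    k = np.where(mask, k_data, 0.0)  # zeros at the missing k-data points
    return np.fft.ifft2(k, axes=(-2, -1))
```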
[0069] Another embodiment utilizes a generative adversarial network (GAN). In one embodiment, the generative network loss function is L.sub.image with L1 and/or L2 regularization. The adversarial network may use data-consistency loss functions. Still another embodiment uses recurrent neural networks, given the sequential nature of the treatment phases (including the baseline phases).
[0070] Caffe, Keras, PyTorch, Theano and TensorFlow are suitable neural-network platforms (and may be cloud-based or local to an implemented system in accordance with design preferences). For tasks involving images, CNNs are typically preferred as the major component of the NNA. The use of CNNs presented herein is representative; an actual implementation can use alternative networks as well as combinations of NNAs. CNNs process images as input and may output a classification or another image, depending on the design. For example, a CNN based on the U-net architecture can be used to turn input data into an image. A CNN typically includes, inter alia, convolution layers and pooling layers that operate on the input image to produce an output.
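The convolution and pooling building blocks mentioned above can be illustrated with a minimal NumPy sketch. This is for exposition only, not a framework implementation; any of the named platforms supplies optimized, trainable versions of these layers.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (implemented as cross-correlation, the
    convention used by CNN frameworks): slide the kernel over the image
    and take the element-wise sum of products at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(image, size=2):
    """Non-overlapping max pooling: downsample by `size` in each
    dimension, keeping the maximum of each block."""
    h, w = image.shape
    return image[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))
```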
[0071] More generally, embodiments of the present invention use an NNA to generate or identify a thermal map from an input k-space image with missing data, with or without the addition of the full data collected as the baseline k-data or multiple baselines. In one approach, a training library includes a series of k-space images with missing data together with the full data of the baselines, and the labels are the corresponding thermal maps. For example, the labels may be generated as described above from k-space images with missing data, which serve as the training input.
[0072] This approach utilizes a direct transform from the subsampled k-space to the desired thermal maps. In another embodiment, the thermal map is created indirectly in two steps: the subsampled k-space is first transformed into complex images (e.g., images from at least two echoes) using an NNA, and the resulting images are then used to generate a thermal map using the conventional PRF technique. The first step, exploiting the NNA to reconstruct an MRI image from subsampled k-space, may be accomplished using different techniques. In one approach, k-space is first completed (e.g., by assigning zeros to the missing data) and reconstructed to an image that will exhibit artifacts, which the CNN is used to remove. In another approach, the NNA generates the image directly from the subsampled k-space. Additionally or alternatively, the NNA can be trained to accept as input at least one suboptimal thermal map (e.g., a map with artifacts) and generate as output one or more improved thermal maps (ML_Map). In some embodiments, the improved thermal maps can be used to further correct the reconstructed image (e.g., by replacing Eq. 11 with F.sub.1=2πKΔT=2πKML_Map).
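The second, conventional-PRF step of the two-step approach can be sketched as follows. This is an illustrative NumPy sketch: the constants (PRF coefficient, field strength) are assumed values for demonstration, and a single echo pair is used rather than a full multi-echo fit.

```python
import numpy as np

# Assumed scan parameters for illustration only.
ALPHA = -0.01e-6               # PRF shift coefficient, ppm/degC (typical value)
GAMMA = 2 * np.pi * 42.576e6   # proton gyromagnetic ratio, rad/s/T
B0 = 3.0                       # main field strength, T

def prf_thermal_map(img_treat, img_base, te):
    """Conventional PRF step: per-pixel temperature change from the phase
    difference between complex treatment and baseline echo images at
    echo time te (s). The conjugate product yields the wrapped phase
    difference without explicit unwrapping."""
    dphi = np.angle(img_treat * np.conj(img_base))
    return dphi / (GAMMA * ALPHA * B0 * te)
```

In the two-step embodiment, `img_treat` and `img_base` would be the complex echo images produced by the NNA from the subsampled and baseline k-space data, respectively.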
[0073] It should be noted that the MR accelerating approach described herein may be applied to any multi-echo MR pulse sequence; the under-sampled k-space data may be collected along any “incremented” dimensions (e.g., the phase-encoding dimension, spiral-interleaving dimension, radial-angle dimension, etc.) as long as the under-sampling is carried out in the echoes of the multi-echo trains. For example, the MR accelerating approach described herein may be applied in a multi-slice, multi-echo GRE pulse sequence and/or a multi-band, multi-echo GRE pulse sequence for acquiring 2D MR imaging slices, and the k-space trajectory may be Cartesian, spiral, radial or an arbitrary trajectory. For example, referring to
[0074] In some embodiments, a volumetric 3D MR thermal map is desirable. In this situation, three dimensions of the k-space data may be acquired, and two of the dimensions (e.g., indirect Z-encoding, spiral interleaving, etc.) may be incremented. Accordingly, the data along either one or both of the incremented dimensions may be under-sampled and estimated using the accelerated MR imaging approaches described above. For example, prior to treatment, the acquired k-space dataset(s) may include one or more full 3D baseline images. During treatment, the acquired k-space dataset(s) may include multiple shots corresponding to different values along the incremented dimension(s) in each TR; in addition, different shots may be sampled at different TEs. As a result, each shot in the treatment image may have some missing data. Again, because the main change between the baseline and treatment images is in the phase, the missing data in the treatment image(s) may be estimated using the iterative approach described above based on the constraint(s). In addition, the 3D data acquisition may be a Cartesian acquisition, a spiral acquisition, a radial acquisition (see, e.g., https://onlinelibrary.wiley.com/doi/abs/10.1002/mrm.26862), a radial acquisition in combination with 2D spiral planes (see, e.g., https://www.ncbi.nlm.nih.gov/pubmed/28643383 and https://onlinelibrary.wiley.com/doi/abs/10.1002/mrm.26862) or an arbitrary trajectory. In one embodiment, the arbitrary trajectory is defined based on the acquired baseline.
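The idea of sampling different subsets of an incremented dimension at different TEs can be sketched as follows. This is an illustrative NumPy sketch; the particular interleaving scheme (shifting the sampled lines by one per echo) is an assumption, not the patent's prescribed pattern.

```python
import numpy as np

def echo_interleaved_masks(n_pe, n_echoes, accel):
    """Boolean undersampling masks along one incremented dimension
    (n_pe lines), one mask per echo in the multi-echo train. Each echo
    keeps every `accel`-th line, with the starting offset rotated per
    echo so different echoes sample complementary subsets."""
    masks = np.zeros((n_echoes, n_pe), dtype=bool)
    for e in range(n_echoes):
        masks[e, e % accel::accel] = True
    return masks
```

With `n_echoes >= accel`, the union of the per-echo masks covers every line, so the fully sampled baseline plus the inter-echo constraints can support estimation of the per-echo missing data.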
[0075] Further, the methods for accelerating multi-echo GRE pulse sequences described herein are not limited to MR thermometry. Other applications that use multi-echo GRE pulse sequences may also benefit from the accelerated MR data-acquisition approaches. One example is the measurement of a field map, where fast field maps are acquired upon a field change or for the construction of susceptibility-weighted imaging (SWI) using the multi-echo GRE pulse sequence. Another exemplary application is the study of T.sub.2*, where the magnitude contrast and per-pixel signal decay across the echo images (e.g., micro-bleeding detection, BOLD PSD in f-MRI), as opposed to the phase, are under-sampled and estimated. Again, in all these applications, one or more fully sampled baseline images may be required prior to the procedures. If the baseline image(s) are not static with respect to the treatment image(s), a physical model may be integrated into the optimization procedure to predict and thereby compensate for the change (see, for example, Eq. (18)).
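For the T.sub.2* application, the per-pixel decay across echoes can be estimated with a log-linear fit, sketched below. This is an illustrative NumPy sketch under the standard mono-exponential model ln S(TE) = ln S.sub.0 − TE/T.sub.2*; noiseless, positive magnitudes are assumed (a weighted fit would be used with real data).

```python
import numpy as np

def t2star_map(echo_mags, tes):
    """Per-pixel T2* from multi-echo GRE magnitude images via a
    log-linear least-squares fit. echo_mags has shape
    (n_echoes, *spatial); tes lists the echo times in seconds."""
    tes = np.asarray(tes, dtype=float)
    # Fit ln S vs TE for every pixel at once; the slope is -1/T2*.
    logs = np.log(echo_mags.reshape(len(tes), -1))
    slope = np.polyfit(tes, logs, 1)[0]
    return (-1.0 / slope).reshape(echo_mags.shape[1:])
```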
[0076] In some situations, the SNR of the acquired signals may be too low to provide meaningful estimation of the missing data using the constraints described above. For example, when detecting micro-bleeding of the brain tissue, local dark spots may appear along the echo train images (fast T.sub.2*); as a result, the linear relationship between TE and the phase difference at these spots cannot be measured due to low SNR. In one embodiment, a weighting mask can be applied to the pixels corresponding to the dark spots such that the unreliable constraints resulting therefrom do not influence the reconstructed images.
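One way to realize such a weighting is in the linear TE-phase constraint itself, sketched below. This is an illustrative NumPy sketch (function name assumed): echoes whose magnitude falls below an SNR threshold receive near-zero weight, so their unreliable phases do not bias the fitted slope.

```python
import numpy as np

def weighted_phase_slope(phases, tes, weights):
    """Weighted least-squares slope of phase vs TE at one pixel.
    `weights` would be derived from the per-echo magnitudes (e.g., zero
    at dark micro-bleed spots), so low-SNR echoes do not influence the
    linear TE-phase constraint."""
    w = np.asarray(weights, dtype=float)
    t = np.asarray(tes, dtype=float)
    p = np.asarray(phases, dtype=float)
    tbar = np.sum(w * t) / np.sum(w)
    pbar = np.sum(w * p) / np.sum(w)
    return np.sum(w * (t - tbar) * (p - pbar)) / np.sum(w * (t - tbar) ** 2)
```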
[0077] In addition, the MRI system may include multiple MRI coils; each coil may receive at least a portion of the MR signals from the target region. Each of the k-space images acquired prior to and/or during the ultrasound procedure may be associated with one or more of the MRI coils. As a result, the reconstructed images may be based on the associated echo time as described above as well as the associated MRI coil(s). In one embodiment, information extracted from the MR signals received by the multiple coils may be merged; the merged information may be equivalent (or at least similar) to information measured using a single MR coil. Additionally or alternatively, the MR signals received by each coil may be processed as described above to reconstruct an image and/or generate a thermal map. The images and/or thermal maps associated with the multiple coils may then be combined, for example, using a weighted average to create an averaged image and/or thermal map of the target region.
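The weighted-average combination across coils can be sketched as follows. This is an illustrative NumPy sketch; the choice of magnitude-squared weights (a common SNR-motivated weighting) is an assumption, not the patent's prescribed scheme.

```python
import numpy as np

def combine_coil_maps(maps, coil_mags):
    """Combine per-coil thermal maps into one map using per-pixel
    magnitude-squared weights. `maps` and `coil_mags` both have shape
    (n_coils, *spatial); coil magnitudes are assumed nonzero at every
    pixel (no division-by-zero handling here)."""
    w = np.asarray(coil_mags, dtype=float) ** 2
    return np.sum(w * maps, axis=0) / np.sum(w, axis=0)
```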
[0078] In general, functionality for performing the accelerated MR imaging acquisition, including under-sampling the k-space data and computationally determining the missing k-space data using the iterative estimation approach described above, whether integrated within a controller of the MRI system and/or an ultrasound system, or provided by a separate external controller, may be structured in one or more modules implemented in hardware, software, or a combination of both. For embodiments in which the functions are provided as one or more software programs, the programs may be written in any of a number of high-level languages such as PYTHON, FORTRAN, PASCAL, JAVA, C, C++, C#, BASIC, various scripting languages, and/or HTML. Additionally, the software can be implemented in an assembly language directed to the microprocessor resident on a target computer (e.g., the controller); for example, the software may be implemented in Intel 80x86 assembly language if it is configured to run on an IBM PC or PC clone. The software may be embodied on an article of manufacture including, but not limited to, a floppy disk, a jump drive, a hard disk, an optical disk, a magnetic tape, a PROM, an EPROM, an EEPROM, a field-programmable gate array, or a CD-ROM. Embodiments using hardware circuitry may be implemented using, for example, one or more FPGA, CPLD or ASIC processors.
[0079] In addition, the term “controller” used herein broadly includes all necessary hardware components and/or software modules utilized to perform any functionality as described above; the controller may include multiple hardware components and/or software modules and the functionality can be spread among different components and/or modules. Finally, when there is a perturbation of the baseline magnitude, and a physical model is assumed (as described above), the optimization approach described above may include adjustment of the baseline images as well.
[0080] Certain embodiments of the present invention are described above. It is, however, expressly noted that the present invention is not limited to those embodiments; rather, additions and modifications to what is expressly described herein are also included within the scope of the invention.