Randomized dimension reduction for magnetic resonance image iterative reconstruction

20220381863 · 2022-12-01


    Abstract

    In a method for magnetic resonance imaging, pseudorandomly undersampled k-space imaging data are acquired with multiple receiver coils of an MRI imaging apparatus. MR image reconstruction is performed to produce a reconstructed MR image from the k-space imaging data by iteratively solving sketched approximations of an original reconstruction problem. The sketched approximations use a sketched model matrix A_S that is a lower-dimensional version of an original model matrix A of the original reconstruction problem. The sketched model matrix A_S preserves the Fourier structure of the MR reconstruction problem and reduces the number of coils actively used during reconstruction.

    Claims

    1. A method for magnetic resonance imaging comprising: acquiring with multiple receiver coils of an MRI imaging apparatus pseudorandomly undersampled k-space imaging data; performing MR image reconstruction to produce a reconstructed MR image from the k-space imaging data; wherein the reconstruction comprises iteratively solving sketched approximations of an original reconstruction problem, wherein the sketched approximations use a sketched model matrix A_S that is a lower-dimensional version of an original model matrix A of the original reconstruction problem; wherein the sketched model matrix A_S preserves the Fourier structure of the MR reconstruction problem and reduces the number of coils actively used during reconstruction.

    2. The method of claim 1 wherein the pseudorandomly undersampled k-space imaging data have both spatial and temporal dimensions.

    3. The method of claim 1 wherein iteratively solving the sketched approximations of the original reconstruction problem comprises iteratively solving second-order Taylor approximations with a sketched Hessian that includes the sketched model matrix A_S.

    4. The method of claim 1 wherein the sketched model matrix A_S is related to the original model matrix A by A_S^t = S^t A, where S is a structured sketching matrix.

    5. The method of claim 4 wherein the structured sketching matrix S is the product of two matrices S_PCA and S_R, where S_PCA estimates virtual coils from principal components and S_R sketches the virtual coils.

    6. The method of claim 5 wherein S_R sketches a reduced number of high-energy virtual coils and also includes random linear combinations of remaining low-energy coils.

    7. The method of claim 5 wherein multiplication of S_PCA by a coil sensitivity map operator C produces a sensitivity map operator C_PCA with virtual coils sorted according to descending energy.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0011] FIG. 1 is a schematic diagram of a structured sketching matrix S illustrating it as the product of two matrices S_PCA and S_R, and also illustrating a sensitivity map operator C_PCA as the product of S_PCA and a coil sensitivity map operator C.

    [0012] FIGS. 2A-2B are two graphs showing the convergence of the objective function during reconstruction of the 3D cones dataset with conventional reconstruction using all coils, with coil compression, and with coil sketching.

    [0013] FIG. 3 depicts three grids showing images of reconstruction results for the 3D cones dataset reconstructed with conventional reconstruction using all coils, with coil compression, and with coil sketching.

    [0014] FIGS. 4A-4B are two grids showing images of inference results for two slices from the test dataset reconstructed with baseline MoDL, PnP sketched MoDL, and sketched MoDL.

    DETAILED DESCRIPTION OF THE INVENTION

    General reconstruction

    [0015] Conventional MR image reconstruction aims to recover image x* by solving the convex optimization problem in Equation 1,

    [00001] x* = argmin_x f(x) := (1/2)||Ax − y||_2^2 + λR(x)   (Eq. 1)

    where x represents the image, A models the MR imaging process, y is the acquired undersampled k-space data, R is a regularization term, and λ is the regularization hyperparameter. The first term is usually referred to as the data consistency term, since it enforces consistency with the acquired data, while the second term is the denoising term that removes noise-like artifacts based on a prior image representation model. In compressed sensing, the latter term usually denoises by enforcing sparsity in a sparsifying transform domain such as wavelets.

    [0016] Matrix A has the structure described in Equation 2,


    A=UFC   (Eq. 2)

    where C represents the voxel-wise multiplication by c coil sensitivity maps, F is the coil-wise Fourier transform, and U is the undersampling operator. In conventional parallel imaging and compressed sensing (PICS) reconstruction, Equation 1 would be solved using an algorithm such as the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) or the Alternating Direction Method of Multipliers (ADMM), whereas state-of-the-art DL-based reconstruction methods use unrolled network architectures [1-4]. In all cases, the computational cost of solving Equation 1 is linearly proportional to the number of coils c.
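    For illustration, the structure of Equation 2 can be sketched in NumPy as a pair of matrix-free operators; the image size, coil count, and random sampling mask below are illustrative assumptions, not parameters from the disclosure.

```python
import numpy as np

def make_forward_model(maps, mask):
    """Build A = U F C and its adjoint A^H = C^H F^H U^H.

    maps : (c, N, N) coil sensitivity maps (the operator C)
    mask : (N, N) boolean undersampling pattern (the operator U)
    """
    def A(x):                                       # x: (N, N) image
        coil_imgs = maps * x                        # C: voxel-wise multiply
        ksp = np.fft.fft2(coil_imgs, norm="ortho")  # F: coil-wise FFT
        return mask * ksp                           # U: keep sampled locations
    def AH(y):                                      # y: (c, N, N) k-space
        coil_imgs = np.fft.ifft2(mask * y, norm="ortho")
        return np.sum(np.conj(maps) * coil_imgs, axis=0)
    return A, AH

# toy example: 2 coils, 8x8 image, ~50% pseudorandom sampling
rng = np.random.default_rng(0)
maps = rng.standard_normal((2, 8, 8)) + 1j * rng.standard_normal((2, 8, 8))
mask = rng.random((8, 8)) < 0.5
A, AH = make_forward_model(maps, mask)
x = rng.standard_normal((8, 8))
y = A(x)
```

Because the FFT is taken with orthonormal scaling, `AH` is the exact adjoint of `A`, which is all the iterative algorithms below need from the model.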

    [0017] In DL reconstruction with deep unrolled networks, similarly to conventional reconstruction, the reconstruction problem is solved by alternating between data consistency and regularization steps. The difference lies in the regularization step, where the regularization function R(x) is learned from historical data via convolutional neural networks (CNNs). The alternating steps can be described as:


    x_{k+1} = DC(z_k)   (Eq. 3)


    z_{k+1} = R_W(x_{k+1})   (Eq. 4)

    where DC is the operator enforcing data consistency, R_W is the operator enforcing regularization, and k is a step index. In this case, R_W is composed of CNNs parameterized by W and enforces a learned data-driven regularization. Two common examples of DC operators are shown in Equations 5 and 6.

    [00002] DC(z) = z − α A^H(Az − y)   (Eq. 5)

    DC(z) = argmin_x ||Ax − y||_2^2 + λ||x − z||_2^2   (Eq. 6)

    where α is the step size.
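    As a minimal, hedged illustration of Equation 5, the gradient-step data-consistency operator can be written with a generic dense matrix A; iterating it alone drives z toward the least-squares solution when the step size α is small enough. The problem size and step-size rule below are illustrative assumptions.

```python
import numpy as np

def dc_gradient_step(z, A, y, alpha):
    """One data-consistency update of Eq. 5: DC(z) = z - alpha * A^H (A z - y)."""
    return z - alpha * (A.conj().T @ (A @ z - y))

# toy overdetermined problem: repeated DC steps solve min_z ||Az - y||^2
rng = np.random.default_rng(1)
A = rng.standard_normal((12, 4))
y = rng.standard_normal(12)
alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # step 1/L with L = sigma_max(A)^2
z = np.zeros(4)
for _ in range(2000):
    z = dc_gradient_step(z, A, y, alpha)
```

In an unrolled network this operator would be interleaved with the learned regularization step of Eq. 4 rather than iterated to convergence; Equation 6 instead solves the data-consistency subproblem in closed form.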

    Sketched Reconstruction

    [0018] The inventors provide a formulation of the MR reconstruction problem using the iterative Hessian sketching (IHS) formulation [7,8]. Herein, Equation 1 is not solved directly. Instead, the reconstruction is performed by iteratively solving second-order Taylor approximations with a sketched Hessian that includes a lower-dimensional version of the model matrix, A_S^t = S^t A. The new formulation is presented in Equation 7, where the first term contains the sketched Hessian (A_S^t)^H A_S^t and the second term contains the gradient A^H(Ax^t − y) of the original problem (without the regularization term).

    [00003] x^{t+1} = argmin_x f̂(x) := (1/2)||A_S^t(x − x^t)||_2^2 + ⟨x, A^H(Ax^t − y)⟩ + λR(x)   (Eq. 7)

    [0019] An advantage of this method is that, since only the data consistency term is modified, the same algorithms can be employed to solve the optimization problem (e.g., FISTA, ADMM, or DL-based methods).

    [0020] Similarly, for deep unrolled networks, only the data consistency step DC is modified. The sketched formulations of example Equations 5 and 6 are shown in Equations 8 and 9, respectively.

    [00004] DC^t(z) = z − α((A_S^t)^H A_S^t(z − x^t) + A^H(Ax^t − y))   (Eq. 8)

    DC^t(z) = argmin_x (1/2)||A_S^t(x − x^t)||_2^2 + ⟨x, A^H(Ax^t − y)⟩ + (λ/2)||x − z||_2^2   (Eq. 9)

    Structured Sketching Matrix

    [0021] We further adapt the IHS formulation to the MR reconstruction problem by providing a structured sketching matrix S, denoted the eigen-sketching matrix. The structured design of the sketching matrix has three main purposes:

    [0022] 1. Preserve the Fourier structure of the original problem to keep leveraging the high efficiency of the fast Fourier transform.

    [0023] 2. Reduce the number of coils actively used during reconstruction to lower the computational burden of the reconstruction procedure.

    [0024] 3. Leverage previous work on coil compression by incorporating the principal component estimation into the sketching matrix.

    [0025] FIG. 1 shows the structure of the eigen-sketching matrix, which is composed of two matrices S_PCA and S_R that, effectively, combine the data linearly in the coil dimension. The S_PCA matrix performs the coil compression transformation [5], which yields a new coil sensitivity map operator C_PCA composed of virtual coils sorted in descending energy. Then, S_R sketches this new operator. Similarly to coil compression, the S_R matrix considers a reduced number of high-energy virtual coils. But unlike coil compression, the matrix also considers random linear combinations of the remaining low-energy coils, thus leveraging data from all the coils. The resulting sketched model matrix A_S^t at iteration t has the structure shown in Equation 10, where C_S^t represents the sketched sensitivity map operator that performs voxel-wise multiplication by ĉ < c coil sensitivity maps.


    A_S^t = UFC_S^t   (Eq. 10)

    [0026] Effectively, the computational cost of solving Equation 7 is now linearly proportional to the new number of actively used coils ĉ < c.
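    A rough NumPy sketch of one way to realize the eigen-sketching matrix of FIG. 1: S_PCA is estimated from the SVD of calibration coil data (as in SVD-based coil compression), and S_R keeps a few high-energy virtual coils while randomly combining the low-energy remainder. The calibration shape, coil counts, and the split between kept and randomly combined coils are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def eigen_sketch(calib, c_hat, n_keep, rng):
    """Draw one realization of the eigen-sketching matrix S = S_R @ S_PCA.

    calib  : (c, m) calibration coil data (c physical coils, m samples)
    c_hat  : number of actively used sketched coils, c_hat < c
    n_keep : high-energy virtual coils kept deterministically
             (the split n_keep < c_hat is an illustrative assumption)
    """
    c = calib.shape[0]
    # S_PCA: principal components of the coil dimension -> virtual coils
    # sorted by descending energy, as in SVD-based coil compression
    U, _, _ = np.linalg.svd(calib, full_matrices=False)
    S_pca = U.conj().T                                  # (c, c)
    # S_R: identity on the first n_keep virtual coils; remaining rows are
    # random linear combinations of the low-energy virtual coils
    S_r = np.zeros((c_hat, c))
    S_r[:n_keep, :n_keep] = np.eye(n_keep)
    S_r[n_keep:, n_keep:] = (rng.standard_normal((c_hat - n_keep, c - n_keep))
                             / np.sqrt(c - n_keep))
    return S_r @ S_pca                                  # (c_hat, c)

rng = np.random.default_rng(2)
calib = rng.standard_normal((20, 200))                  # 20 physical coils
S = eigen_sketch(calib, c_hat=4, n_keep=2, rng=rng)
```

In the reconstruction itself, S would multiply the coil dimension of the sensitivity operator C to form the sketched operator C_S^t of Equation 10, with a fresh random draw of S_R at each outer iteration t.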

    Algorithm examples

    [0027] Coil sketching is a general and flexible framework that can be applied to any iterative reconstruction algorithm. Herein, the inventors present the sketched versions of two commonly used reconstruction formulations.

    [0028] First, consider the most widely used formulation, L1-wavelets reconstruction, where the regularization term of the reconstruction problems (Eq. 1 and Eq. 7) becomes:


    R(x) = ||Ψx||_1   (Eq. 11)

    where Ψ is the wavelet transform. The conventional and sketched reconstruction problems can then be solved using the ISTA algorithm as described in Algorithm 1 and Algorithm 2, respectively.

    Algorithm 1: Conventional L1-wavelets reconstruction
     1: Initialize x_0
     2: for k = 0 to K − 1 do
     3:   z_{k+1} = x_k − α_k A^H(Ax_k − y)
     4:   x_{k+1} = Ψ^H T_λ(Ψ z_{k+1})   (T_λ: soft-thresholding with threshold λ)
     5: end for

    Algorithm 2: Sketched L1-wavelets reconstruction
     1: Initialize x_0^0
     2: for t = 0 to N − 1 do
     3:   g = A^H(Ax_0^t − y)
     4:   Generate random sketching matrix S^t
     5:   A_S^t = S^t A
     6:   for k = 0 to K − 1 do
     7:     z_{k+1}^t = x_k^t − α_{t,k}((A_S^t)^H A_S^t(x_k^t − x_0^t) + g)
     8:     x_{k+1}^t = Ψ^H T_λ(Ψ z_{k+1}^t)
     9:   end for
    10:   x_0^{t+1} = x_K^t
    11: end for

    where T_λ is the soft-thresholding operator with threshold λ.
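    To make Algorithm 2 concrete, the following self-contained toy implementation uses a dense model matrix, an identity sparsifying transform (Ψ = I), and a plain Gaussian sketch S^t in place of the eigen-sketch; the problem sizes, sketch dimension, and λ are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator T_t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sketched_ista(A, y, lam, n_outer, n_inner, sketch_dim, rng):
    """Algorithm 2 on a dense toy problem (Psi = I, Gaussian sketch)."""
    m, n = A.shape
    x0 = np.zeros(n)
    for t in range(n_outer):
        g = A.T @ (A @ x0 - y)                    # full gradient at x_0^t
        S = rng.standard_normal((sketch_dim, m)) / np.sqrt(sketch_dim)
        As = S @ A                                # sketched model matrix A_S^t
        alpha = 1.0 / np.linalg.norm(As, 2) ** 2  # inner step size
        x = x0.copy()
        for _ in range(n_inner):                  # inner ISTA on Eq. 7
            z = x - alpha * (As.T @ (As @ (x - x0)) + g)
            x = soft(z, alpha * lam)
        x0 = x                                    # x_0^{t+1} = x_K^t
    return x0

# sparse-recovery toy: 3-sparse signal, noiseless measurements
rng = np.random.default_rng(3)
m, n = 400, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[1, 4, 6]] = [1.0, -1.5, 2.0]
y = A @ x_true
x_hat = sketched_ista(A, y, lam=1e-4, n_outer=20, n_inner=50,
                      sketch_dim=160, rng=rng)
```

Each inner iteration multiplies only by the sketched matrix A_S^t; the full A appears once per outer iteration, in the gradient g. In the MR setting the analogous saving comes from using ĉ < c sketched coil maps (Eq. 10) inside the inner loop.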

    [0029] Additionally, the inventors present the algorithms for a commonly used DL formulation, MoDL [1], where the data consistency steps for the conventional and sketched reconstructions are given by Eq. 6 and Eq. 9, respectively. The algorithms are presented below:

    Algorithm 3: Conventional MoDL reconstruction
     1: Initialize x_0
     2: for k = 0 to K − 1 do
     3:   z_{k+1} = R_W(x_k)
     4:   x_{k+1} = argmin_x ||Ax − y||_2^2 + λ||x − z_{k+1}||_2^2 = (A^H A + λI)^{−1}(A^H y + λ z_{k+1})
     5: end for

    Algorithm 4: Sketched MoDL reconstruction
     1: Initialize x_0^0
     2: for k = 0 to K − 1 do
     3:   z_{k+1} = R_W(x_k^0)
     4:   for t = 0 to T − 1 do
     5:     g = A^H(Ax_k^t − y)
     6:     Generate random sketching matrix S^t
     7:     A_S^t = S^t A
     8:     Initialize the inner solver at x_k^t
     9:     x_k^{t+1} = argmin_x (1/2)||A_S^t(x − x_k^t)||_2^2 + ⟨x, g⟩ + (λ/2)||x − z_{k+1}||_2^2
               = ((A_S^t)^H A_S^t + λI)^{−1}((A_S^t)^H A_S^t x_k^t − g + λ z_{k+1})
    10:   end for
    11:   x_{k+1}^0 = x_k^T
    12: end for
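    The closed-form data-consistency solve of Eq. 6 (Algorithm 3, step 4) can be checked numerically on a small dense problem; note the λ factor multiplying z in the normal equations, which follows from setting the gradient A^H(Ax − y) + λ(x − z) to zero. The sizes and λ value below are illustrative assumptions.

```python
import numpy as np

def modl_dc(A, y, z, lam):
    """Closed-form data-consistency solve of Eq. 6:
    argmin_x ||Ax - y||_2^2 + lam * ||x - z||_2^2
           = (A^H A + lam*I)^{-1} (A^H y + lam*z)."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n),
                           A.conj().T @ y + lam * z)

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 6))
y = rng.standard_normal(30)
z = rng.standard_normal(6)      # stand-in for the CNN output R_W(x_k)
x = modl_dc(A, y, z, lam=0.5)
```

In the sketched variant (Algorithm 4, step 9), the same solve is performed with the much smaller matrix (A_S^t)^H A_S^t, at the cost of adding the precomputed full gradient g to the right-hand side.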

    Evaluation

    [0030] First, the inventors tested the method with a non-Cartesian retrospectively undersampled dataset: abdominal 3D cones acquired with a 20-channel phased-array torso coil. The inventors reconstructed the dataset with L1-wavelets regularization using accelerated (FISTA) versions of Algorithm 1 and Algorithm 2. Then, they compared the convergence of the objective function f(x), memory usage, and final image quality of the following three methods:

    [0031] 1. Conventional reconstruction with all c coils.

    [0032] 2. Reconstruction with coil compression using the first ĉ < c virtual coils.

    [0033] 3. Coil sketching with ĉ < c coils.

    [0034] Second, the inventors tested the feasibility of coil sketching in deep unrolled networks with a Cartesian 3D knee (320 slices) retrospectively undersampled dataset with 8 coil-compressed channels. The datasets were reconstructed using three networks:

    [0035] 1. Baseline network, which accords with the original DL framework.

    [0036] 2. Plug-and-play (PnP) sketched network, which uses the trained baseline network weights but replaces the data consistency (DC) steps at inference. The aim is to test the equivalency of the original and sketched DC steps.

    [0037] 3. Sketched network, which uses the sketched data consistency for both training and inference.

    [0038] Network 1 is described in Algorithm 3, and networks 2 and 3 are described in Algorithm 4. To evaluate training, memory usage and iteration times were compared. To evaluate inference, memory usage, iteration times, and image quality metrics were compared.

    Results

    [0039] From the first test with conventional reconstruction, FIGS. 2A-2B show that reconstructions with coil compression and coil sketching run considerably faster than reconstruction using all coils. However, reconstruction with coil compression converges to a suboptimal value, whereas the proposed method attains the optimal value. FIG. 3 shows that this suboptimality manifests as significantly lower image quality and noticeable shading artifacts. Table 1 summarizes the metrics. Memory usage and reconstruction times are considerably lower for coil compression and coil sketching. However, coil compression shows a significant decrease in image quality metrics, whereas coil sketching yields virtually the same image metrics as conventional reconstruction with all coils.

    [0040] From the second test with DL-based reconstruction, FIGS. 4A-4B show a visual comparison of reconstructions from the three networks discussed above, presenting inference results for two slices from the test dataset reconstructed with the baseline network, the PnP sketched network, and the sketched network, shown left to right in the four columns. Both visually and in terms of image metrics, the results are only marginally different. Table 2 summarizes the metrics measured for both training and inference. For training, the sketched network uses 18% less memory and 15% less time per iteration. For inference, the three networks have marginally different image metrics.

    Discussion

    [0041] For conventional reconstruction, the inventors tested the method with a non-Cartesian retrospectively undersampled dataset. Reconstructions with coil compression and coil sketching run considerably faster than reconstruction using all coils. However, reconstruction with coil compression converges to a suboptimal value, whereas coil sketching attains the optimal value. This suboptimality manifests as significantly lower image quality and noticeable shading artifacts in the 3D acquisition. Additionally, the memory usage of coil compression and coil sketching is considerably lower than that of conventional reconstruction with all coils. Coil sketching allows faster reconstruction (approximately 2×) and improved memory efficiency (approximately 5×) with virtually no penalty on reconstruction accuracy, by effectively using fewer coil sensitivity maps and, therefore, fewer Fourier transforms per iteration. This reduced computational cost enables reconstruction of larger datasets with higher resolution and dimensionality.

    [0042] For DL-based reconstruction, sketched MoDL shows a promising decrease in training memory usage that could facilitate DL-based reconstruction of higher-dimensional datasets, especially for non-Cartesian 3D acquisitions, where efficient coil compression techniques like GCC [6] cannot be used. Additionally, the decrease in iteration time could reduce the total training time by days, since networks are typically trained for tens of thousands of iterations. Moreover, the method can be flexibly integrated into various unrolled methods as a substitute for data consistency, in contrast to other memory-efficient techniques such as [11] that require invertible layers. Regarding inference, the three networks present virtually the same image metrics, which indicates no penalty on reconstruction accuracy.

    TABLE 1

                           All coils   Coil compression   Coil sketching
                           c = 20      ĉ = 4              ĉ = 4
      Max. memory (MB)     4,719       1,070              1,150
      Recon time (s)       109         57                 60
      PSNR (dB)            13.6        11.9               13.7
      MS-SSIM              0.81        0.77               0.81

    TABLE 2

                                     Baseline        PnP             Sketched
                                     network         network         network
                                     c = 8           ĉ = 4           ĉ = 4
      Training
        Max. memory (MB)             20,345          —               16,576 (−18%)
        Time per iteration (s)       2.90            —               2.48 (−15%)
      Inference
        SSIM                         0.886 ± 0.002   0.889 ± 0.004   0.889 ± 0.003
        PSNR (dB)                    39.94 ± 0.42    39.79 ± 0.43    39.85 ± 0.42
        Max. memory (MB)             1,731           2,331 (+35%)    2,331 (+35%)
        Time per image (s)           0.51            0.39 (−24%)     0.39 (−24%)

    Conclusion

    [0043] Herein is disclosed a general and flexible reconstruction algorithm that applies randomized dimension reduction to effectively reduce the computational cost of image reconstruction. The techniques of the invention can be used to reconstruct any undersampled magnetic resonance imaging dataset with increased speed and memory efficiency. This increased efficiency can be leveraged to afford reconstruction of datasets with higher dimensionality and spatiotemporal resolution. The invention is especially useful for high-dimensional MRI applications that usually have prohibitive computational and memory requirements, e.g., 2D/3D cardiac cine, 3D cones acquisitions, 4D flow, ND flow, or dynamic contrast-enhanced (DCE) MRI.

    [0044] Furthermore, deep learning-based reconstruction methods require the use of deep neural networks with millions of parameters, which increases the memory footprint and limits the method to only lower-dimensional MRI techniques such as 2D or 3D acquisitions. The invention can decrease computational cost of training and inference of DL-based reconstruction and could potentially enable applications to higher-dimensional MRI techniques.

    [0045] In other embodiments, the technique is applied to other types of MR image reconstruction problem formulations besides Eq. 1. For example, it may be applied to low-rank decomposition methods for high-dimensional applications (e.g., 4D flow, ND flow, or DCE). In other embodiments, different structured sketching matrices could be used for S_R and S_PCA. For S_R, other randomized sketching matrices could be tested, such as the Gaussian sketch or Hadamard sketch. For S_PCA, other coil compression transformation methods could be used. In other embodiments, other types of iterative sketching algorithms could be used besides Eq. 7, e.g., adaptive sketching methods. In other embodiments, other types of deep unrolled network data consistency steps (DC operators) could be sketched to improve computational efficiency.

    References

    [0046] 1. Aggarwal, H. K., Mani, M. P., & Jacob, M. (2018). MoDL: Model-based deep learning architecture for inverse problems. IEEE Transactions on Medical Imaging, 38(2), 394-405.

    [0047] 2. Hammernik, K., Klatzer, T., Kobler, E., Recht, M. P., Sodickson, D. K., Pock, T., & Knoll, F. (2018). Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine, 79(6), 3055-3071.

    [0048] 3. Sandino, C. M., Cheng, J. Y., Chen, F., Mardani, M., Pauly, J. M., & Vasanawala, S. S. (2020). Compressed sensing: From research to clinical practice with deep neural networks: Shortening scan times for magnetic resonance imaging. IEEE Signal Processing Magazine, 37(1), 117-127.

    [0049] 4. Yang, Y., Sun, J., Li, H., & Xu, Z. (2018). ADMM-CSNet: A deep learning approach for image compressive sensing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(3), 521-538.

    [0050] 5. Huang, F., Vijayakumar, S., Li, Y., Hertel, S., & Duensing, G. R. (2008). A software channel compression technique for faster reconstruction with many channels. Magnetic Resonance Imaging, 26(1), 133-141.

    [0051] 6. Zhang, T., Pauly, J. M., Vasanawala, S. S., & Lustig, M. (2013). Coil compression for accelerated imaging with Cartesian sampling. Magnetic Resonance in Medicine, 69(2), 571-582.

    [0052] 7. Oscanoa, J. A., Ong, F., Li, Z., Sandino, C. M., Ennis, D. B., Pilanci, M., & Vasanawala, S. S. (2021). Coil sketching for fast and memory-efficient iterative reconstruction. ISMRM 28th Annual Meeting, May 15-20, 2021 (virtual meeting).

    [0053] 8. Oscanoa, J. A., Ozturkler, B., Iyer, S. S., Li, Z., Sandino, C. M., Pilanci, M., Ennis, D. B., & Vasanawala, S. S. (2022). Coil-sketched unrolled networks for computationally-efficient deep MRI reconstruction. ISMRM 30th Annual Meeting, London, United Kingdom, May 6-12, 2022.

    [0054] 9. Pilanci, M., & Wainwright, M. J. (2016). Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares. The Journal of Machine Learning Research, 17(1), 1842-1879.

    [0055] 10. Pilanci, M., & Wainwright, M. J. (2015). Randomized sketches of convex programs with sharp guarantees. IEEE Transactions on Information Theory, 61(9), 5096-5115.

    [0056] 11. Kellman, M., Zhang, K., Markley, E., Tamir, J., Bostan, E., Lustig, M., & Waller, L. (2020). Memory-efficient learning for large-scale computational imaging. IEEE Transactions on Computational Imaging, 6, 1403-1414.