MULTIPLEX MRI IMAGE RECONSTRUCTION
20230160986 · 2023-05-25
Assignee
Inventors
- Zhang Chen (Cambridge, MA, US)
- Shanhui Sun (Cambridge, MA, US)
- Xiao Chen (Cambridge, MA, US)
- Terrence Chen (Cambridge, MA, US)
CPC classification
G01R33/5608
PHYSICS
G01R33/50
PHYSICS
International classification
G01R33/56
PHYSICS
Abstract
In Multiplex MRI image reconstruction, a hardware processor acquires sub-sampled Multiplex MRI data and reconstructs parametric maps from the sub-sampled Multiplex MRI data. A machine learning model or deep learning model uses the sub-sampled Multiplex MRI data as the input and parametric maps calculated from the fully sampled data, or from reconstructed fully sampled data, as the ground truth. The model learns to reconstruct the parametric maps directly from the sub-sampled Multiplex MRI data.
Claims
1. An apparatus for Multiplex MRI image reconstruction, comprising: a hardware processor coupled to a memory, wherein the hardware processor is configured to: acquire sub-sampled Multiplex MRI data; and reconstruct parametric maps from the sub-sampled Multiplex MRI data.
2. The apparatus according to claim 1, wherein the hardware processor is further configured to train a machine learning model to reconstruct the parametric maps from the acquired sub-sampled Multiplex MRI data using sub-sampled Multiplex MRI data as an input and parametric maps reconstructed from fully sampled Multiplex MRI data as the ground truth.
3. The apparatus according to claim 1, wherein the hardware processor is further configured to reconstruct echo images with the acquired sub-sampled Multiplex MRI data prior to reconstructing the parametric maps.
4. The apparatus according to claim 3, wherein the acquired sub-sampled Multiplex MRI data comprises echo images and the hardware processor is further configured to stack multiple ones of the echo images and input the stacked images into the machine learning model.
5. The apparatus according to claim 1, wherein the hardware processor is configured to acquire the sub-sampled Multiplex MRI data using different sampling masks.
6. The apparatus according to claim 1, wherein the hardware processor is further configured to divide the acquired sub-sampled Multiplex MRI data into two or more parts in a readout (RO) direction.
7. A computer implemented method for multiplex MRI reconstruction, the method comprising using a hardware processor to: acquire sub-sampled Multiplex MRI data; and reconstruct parametric maps from the sub-sampled Multiplex MRI data.
8. The computer implemented method according to claim 7, the method further comprising training a machine learning model to reconstruct the parametric maps from the acquired sub-sampled Multiplex MRI data using sub-sampled Multiplex MRI data as an input and parametric maps reconstructed from fully sampled Multiplex MRI data as the ground truth.
9. The computer implemented method according to claim 8, wherein the method further comprises reconstructing echo images with the acquired sub-sampled Multiplex MRI data prior to reconstructing the parametric maps.
10. The computer implemented method according to claim 9, wherein the acquired sub-sampled Multiplex MRI data comprises echo images and the method further comprises stacking multiple ones of the echo images and inputting the stacked images into the machine learning model.
11. The computer implemented method according to claim 9, wherein the method further comprises applying different sampling masks to different echo images of the sub-sampled Multiplex MRI data.
12. The computer implemented method according to claim 8, wherein the method further comprises dividing the acquired sub-sampled Multiplex MRI data into two or more parts in a readout (RO) direction; reconstructing the two or more parts separately, and combining the reconstructed two or more parts into a final full image.
13. A computer program product comprising a non-transitory computer-readable medium having machine-readable instructions stored thereon which, when executed by a computing apparatus, are configured to cause the computing apparatus to execute the method according to claim 7.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] In the following detailed portion of the present disclosure, the invention will be explained in more detail with reference to the example embodiments shown in the drawings, in which:
[0023]
[0024]
[0025]
[0026]
[0027] In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION OF THE DISCLOSED EMBODIMENTS
[0028] The following detailed description illustrates exemplary aspects of the disclosed embodiments and ways in which they can be implemented. Although some modes of carrying out the aspects of the disclosed embodiments have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practising the aspects of the disclosed embodiments are also possible.
[0029]
[0030] Multi-flip-angle (FA) and multi-echo gradient echo (GRE) imaging (hereinafter "Multiplex MRI") can simultaneously acquire multiple contrast images with just one single scan. With the single scan, Multiplex MRI can provide over 16 types of image contrast and 9 types of parametric mapping. One Multiplex MRI scan often includes a combination of several echoes and different flip angles, and each combination yields one echo image. With different echo and FA configuration settings, a single scan can generate multiple sets of echo data (e.g., 7 echoes × 2 flip angles = 14 echo images). Each echo image carries different contrast information, and the parametric maps, such as proton density weighted (PDW), T1 weighted (T1W), T2*, and quantitative susceptibility mapping (QSM), can then be calculated from the echo images.
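As an illustration of how a parametric map can be calculated from echo images, the following sketch fits a T2* map to multi-echo magnitude data with a log-linear least-squares fit. The fitting method, the function name `fit_t2_star`, and all array shapes are illustrative assumptions, not details taken from the disclosure:

```python
import numpy as np

def fit_t2_star(echo_images, echo_times):
    """Per-pixel log-linear fit of S(TE) = S0 * exp(-TE / T2*)
    from magnitude echo images of shape (n_echoes, H, W)."""
    log_s = np.log(np.maximum(echo_images, 1e-8))   # avoid log(0)
    te = np.asarray(echo_times, dtype=float)        # echo times, e.g. in ms
    n, h, w = echo_images.shape
    # Fit log S = log S0 - TE / T2* for every pixel at once
    slope, intercept = np.polyfit(te, log_s.reshape(n, -1), 1)
    t2_star = -1.0 / np.minimum(slope, -1e-8)       # T2* = -1 / slope
    s0 = np.exp(intercept)
    return t2_star.reshape(h, w), s0.reshape(h, w)
```

In practice each echo/FA combination would feed analogous per-pixel model fits for the other parametric maps (T1W, QSM, and so on).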
[0031] With reference to
[0032] In one embodiment, the apparatus 102 includes a processor 104, a neural network or machine learning model 106 and a memory 108. The processor 104, model 106 and memory 108 can be embodied in a single device or can comprise multiple devices communicatively coupled together.
[0033] There is further shown a communication network 110 and an imaging system 120 or apparatus. The communication network 110 generally includes a medium through which the imaging system 120 and the apparatus 102 can communicate with each other. The imaging system 120, which can comprise any suitable MRI imaging system, is configured to provide the Multiplex MRI data to the apparatus 102.
[0034] Although communication network 110 is shown communicatively coupling the imaging system 120 to the apparatus 102, the aspects of the disclosed embodiments are not so limited. In alternate embodiments the apparatus 102 can be connected or coupled to the imaging system 120 in any suitable manner. Additionally, the apparatus 102 can be configured to receive, acquire or generate Multiplex MRI data, as is generally described herein, from any suitable source in any suitable manner.
[0035] The aspects of the disclosed embodiments are directed to reconstructing parametric images or maps directly from the sub-sampled Multiplex MRI data, without first reconstructing the echo images for Multiplex MRI. In one embodiment, this workflow is achieved by training the machine learning model 106, also referred to as a deep learning model, using sub-sampled Multiplex MRI data as the input. During the training phase, parametric maps calculated from the fully sampled Multiplex MRI data, or from reconstructed fully sampled Multiplex MRI data, serve as the ground truth. The model 106 learns to reconstruct the parametric maps directly from the sub-sampled Multiplex MRI data by comparing the prediction of the model 106 during the training phase to the ground truth and updating the model weights. Once the model 106 is fully trained, it can be implemented in testing. During the testing phase, fully sampled Multiplex MRI data is not available.
[0036]
[0037] As illustrated in the example of
[0038] In the embodiment of
[0039] In one embodiment, the sampling masks used in each echo acquisition, shown in
[0040] By using different sampling masks, the acquired information can be complementary. For example, for certain echo images, Mask 1 can be used. For other echo images, Mask 2 or Mask N can be used. Mask 1 to Mask N are designed so that different regions of the data are sub-sampled. During reconstruction, the complementary information is combined to recover the missing information.
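One way the complementary masks described above could be generated is sketched below: each echo keeps the same fully sampled centre block but a distinct random subset of outer phase-encode lines, so the union across echoes covers more of k-space. The function name, the centre-block heuristic, and the parameter choices are all illustrative assumptions:

```python
import numpy as np

def complementary_masks(n_masks, n_lines, accel=4, center=8, seed=0):
    """Generate N 1-D phase-encode sampling masks whose outer sampled
    lines differ between echoes, so their union is more complete."""
    rng = np.random.default_rng(seed)
    c0 = n_lines // 2 - center // 2
    # Outer lines exclude the shared, fully sampled centre block
    outer = np.setdiff1d(np.arange(n_lines), np.arange(c0, c0 + center))
    per_mask = max(n_lines // accel - center, 1)
    masks = np.zeros((n_masks, n_lines), dtype=bool)
    masks[:, c0:c0 + center] = True             # shared centre lines
    for i in range(n_masks):
        picks = rng.choice(outer, size=per_mask, replace=False)
        masks[i, picks] = True                  # distinct outer lines
    return masks
```

Each mask sub-samples a different region of k-space, and a reconstruction that sees all echoes can combine the complementary samples.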
[0041] In one embodiment, a machine learning model or module 306 receives, as input 304, the sub-sampled Multiplex MRI data 302. The machine learning model 306 in this example is similar to the machine learning model 106 of
[0042] During training of the machine learning model 306, the machine learning model 306 receives the sub-sampled Multiplex MRI data 302 as the input 304 and outputs a prediction. The fully sampled Multiplex MRI data 310 serves as the ground truth. The prediction is compared to the ground truth and the model weights are updated 312 during the training phase.
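The predict-compare-update cycle described above can be sketched with a deliberately simplified stand-in: a linear model trained by gradient descent on synthetic arrays replaces the deep network 306, and all shapes and hyperparameters are illustrative assumptions rather than details of the disclosure:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins: flattened sub-sampled data (input 304) and parametric
# maps derived from fully sampled data (ground truth 310).
x_train = rng.normal(size=(256, 32))
w_true = rng.normal(size=(32, 8))
y_train = x_train @ w_true

w = np.zeros((32, 8))                 # model weights, initially zero
lr = 0.1
for _ in range(500):
    pred = x_train @ w                                   # prediction
    grad = x_train.T @ (pred - y_train) / len(x_train)   # MSE gradient
    w -= lr * grad                                       # weight update (312)

mse = float(np.mean((x_train @ w - y_train) ** 2))
```

A real implementation would substitute a deep network and an optimizer, but the control flow of the training phase is the same.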
[0043] During testing, the model 306 takes the sub-sampled Multiplex MRI data 302 as the input 304. The model 306 then generates or calculates the parametric maps 308.
[0044] In one embodiment, the input 304 can comprise the sub-sampled Multiplex MRI data with some pre-processing. This pre-processing can result in, for example, but is not limited to, coil compressed data or readout (RO) cropped data.
[0045] For example, a coil compression method can be used to reduce the number of coils such that fewer compressed coils are used for reconstruction. As another example, instead of reconstructing the full images at once, the images can be divided into several parts along the readout direction, each part can be reconstructed separately, and the parts can then be combined into the final full images. These can be considered data pre-processing steps.
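The two pre-processing steps above can be sketched as follows; the SVD-based compression and the split/combine helpers are common generic techniques used here as illustrative assumptions, not the specific methods of the disclosure:

```python
import numpy as np

def compress_coils(kspace, n_virtual):
    """SVD-based coil compression: project multi-coil k-space samples
    of shape (n_coils, n_samples) onto the dominant virtual coils."""
    u, _, _ = np.linalg.svd(kspace, full_matrices=False)
    return u[:, :n_virtual].conj().T @ kspace   # (n_virtual, n_samples)

def split_readout(image, n_parts):
    """Divide an image into parts along the readout (last) axis so
    each part can be reconstructed separately."""
    return np.array_split(image, n_parts, axis=-1)

def combine_readout(parts):
    """Recombine separately reconstructed parts into the full image."""
    return np.concatenate(parts, axis=-1)
```

When the coil data is well approximated by a low-rank mixture, compression to a few virtual coils preserves nearly all of the signal energy while shrinking the reconstruction problem.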
[0046] In one embodiment, the machine learning model 306, similarly to the neural network 106 of
[0047] Referring to
[0048] As shown in the exemplary workflow 400 of
[0049] In one embodiment, multiple echo images can be reconstructed together. This can be implemented, for example, by stacking the multiple echo images as an extra dimension in the input 404 and feeding the stack as the input 404 into the machine learning model 406. By stacking the echo images, the machine learning model 406 can take more information as input.
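The stacking described above can be sketched as follows; the 14-echo count matches the 7×2 example earlier in the description, while the image size and the real/imaginary channel split are illustrative assumptions:

```python
import numpy as np

# Hypothetical example: 14 complex echo images of size 128x128 are
# stacked along a new leading axis so the model sees all echoes at once.
echoes = [np.zeros((128, 128), dtype=np.complex64) for _ in range(14)]
stacked = np.stack(echoes, axis=0)            # shape (14, 128, 128)

# Networks that expect real-valued input often split complex data
# into separate real and imaginary channels:
net_input = np.stack([stacked.real, stacked.imag], axis=1)  # (14, 2, 128, 128)
```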
[0050] In the example workflows of
[0051]
[0052] The architecture 500 is merely exemplary. In alternate embodiments, any suitable network architecture can be used to implement the models 306/406 described herein.
[0053] Referring again to
[0054] In one embodiment, the processor 104 includes suitable logic, circuitry, interfaces and/or code that is configured to carry out the processes generally described herein. The processor 104 is configured to respond to and process instructions that drive the apparatus 102. Examples of the processor 104 include, but are not limited to, a microprocessor, a microcontroller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, or any other type of processing circuit. Optionally, the processor 104 may be one or more individual processors, processing devices and various elements associated with a processing device that may be shared by other processing devices. Additionally, the one or more individual processors, processing devices and elements are arranged in various architectures for responding to and processing the instructions that drive the apparatus 102. In one embodiment, the processor 104 is a hardware processor.
[0055] The memory 108 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to store instructions executable by the processor 104. The memory 108 is further configured to store the MRI data. The memory 108 may be further configured to store operating systems and associated applications of the apparatus 102, including the neural network 106. Examples of implementation of the memory 108 may include, but are not limited to, Random Access Memory (RAM), Read Only Memory (ROM), Hard Disk Drive (HDD), Flash memory, and/or a Secure Digital (SD) card. A computer readable storage medium for providing a non-transient memory may include, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
[0056] The neural network 106 generally refers to an artificial neural network. In one embodiment, the neural network 106 is an unsupervised neural network that uses machine learning.
[0057] The communication network 110 may be a wired or wireless communication network. Examples of the communication network 110 may include, but are not limited to, a Wireless Fidelity (Wi-Fi) network, a Local Area Network (LAN), a wireless personal area network (WPAN), a Wireless Local Area Network (WLAN), a wireless wide area network (WWAN), a cloud network, a Long Term Evolution (LTE) network, a plain old telephone service (POTS), a Metropolitan Area Network (MAN), and/or the Internet.
[0058] In one embodiment, referring also to
[0059] Referring again to
[0060] Various embodiments and variants disclosed above, with respect to the aforementioned system 100, apply mutatis mutandis to the method. The method described herein is computationally efficient and does not impose an undue processing burden on the processor 104.
[0061] Modifications to embodiments of the aspects of the disclosed embodiments described in the foregoing are possible without departing from the scope of the aspects of the disclosed embodiments as defined by the accompanying claims. Expressions such as “including”, “comprising”, “incorporating”, “have”, “is” used to describe and claim the aspects of the disclosed embodiments are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural.
[0062] Thus, while there have been shown, described and pointed out, fundamental novel features of the invention as applied to the exemplary embodiments thereof, it will be understood that various omissions, substitutions and changes in the form and details of devices and methods illustrated, and in their operation, may be made by those skilled in the art without departing from the spirit and scope of the presently disclosed invention. Further, it is expressly intended that all combinations of those elements, which perform substantially the same function in substantially the same way to achieve the same results, are within the scope of the invention. Moreover, it should be recognized that structures and/or elements shown and/or described in connection with any disclosed form or embodiment of the invention may be incorporated in any other disclosed or described or suggested form or embodiment as a general matter of design choice. It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto.