Feature space based MR guided PET reconstruction
11672492 · 2023-06-13
Assignee
Inventors
CPC classification
G06T11/008
PHYSICS
G06T2211/464
PHYSICS
G06V10/7715
PHYSICS
A61B6/501
HUMAN NECESSITIES
G06F18/213
PHYSICS
A61B6/5205
HUMAN NECESSITIES
International classification
G06F18/213
PHYSICS
Abstract
A method for PET image reconstruction acquires PET data by a PET scanner; reconstructs from the acquired PET data a seed PET image; builds a feature space from the seed PET image and anatomical images co-registered with the seed PET image; and performs a penalized maximum-likelihood reconstruction of a PET image from the seed PET image and the feature space using a penalty function that is calculated based on the differences between each voxel and its neighbors both in the PET image and in the feature space, regardless of their location in the image.
Claims
1. A method for PET image reconstruction comprising: acquiring PET data by a PET scanner; reconstructing from the acquired PET data a seed PET image; building a feature space from the seed PET image and anatomical images co-registered with the seed PET image; performing a penalized maximum-likelihood reconstruction of a PET image from the seed PET image and the feature space using a penalty function calculated based on differences between each voxel and its neighbors both in the PET image and in the feature space; wherein building a feature space comprises mapping values of voxels into an (N+1)-dimensional space, where N is a number of sets of the anatomical images.
2. The method of claim 1 wherein the penalty function is a combination of a first PL objective function calculated using a relative difference prior method on neighboring voxels in image space and a second PL objective function using the relative difference prior method on neighboring voxels in the feature space.
3. The method of claim 1 wherein the anatomical images are computed tomography images or magnetic resonance images.
4. The method of claim 1 wherein reconstructing the seed PET image uses OSEM PET reconstruction.
5. A method for PET image reconstruction comprising: acquiring PET data by a PET scanner; reconstructing from the acquired PET data a seed PET image; building a feature space from the seed PET image and anatomical images co-registered with the seed PET image; performing a penalized maximum-likelihood reconstruction of a PET image from the seed PET image and the feature space using a penalty function calculated based on differences between each voxel and its neighbors both in the PET image and in the feature space; wherein the penalty function is a combination of a first PL objective function calculated using a relative difference prior method on neighboring voxels in image space and a second PL objective function using the relative difference prior method on neighboring voxels in the feature space.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
(9) The resolution of PET images is limited by several physical factors, including positron range, scatter, and the finite size of the detector elements. The current spatial resolution of PET images is ˜4 mm for whole body PET/MR. Anatomical MR images with higher resolution and superior SNR have been used in PET reconstruction to improve the image quality and spatial resolution. However, these methods are generally vulnerable to mismatches between the anatomical image and the true activity distribution. To address this problem, the present invention provides a feature-based approach to incorporate anatomical priors in PET image reconstruction, where the location of a voxel or its neighbors in image space does not play a direct role, and where both functional and anatomical images are used to construct the feature space.
(10) An overview of a method for PET image reconstruction, according to an embodiment of the invention, is shown in
(11) The PET reconstruction technique of the present invention adapts a penalized maximum-likelihood algorithm based on the relative difference prior (RDP), which uses a block sequential regularized expectation maximization (BSREM) optimizer. Conventionally, the RDP applies activity-dependent smoothing to control noise at higher iterations and suppresses image noise in low-activity background regions. This relative difference penalty is calculated based on the difference in image space between each voxel activity and its neighboring voxels, as shown in
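As an illustration only (not the patented implementation), the relative difference penalty over image-space neighbors can be sketched in a few lines. The sketch assumes a 1-D image, uniform local weights (W.sub.j=W.sub.k=1), and the hypothetical helper name `rdp_penalty`:

```python
import numpy as np

def rdp_penalty(img, gamma=2.0, eps=1e-12):
    """Relative difference prior (RDP) penalty over a 1-D image.

    Sums, for each voxel j and each immediate neighbor k,
    (x_j - x_k)^2 / (x_j + x_k + gamma*|x_j - x_k|).
    """
    x = np.asarray(img, dtype=float)
    d = x[1:] - x[:-1]                      # differences to the right neighbor
    s = x[1:] + x[:-1]
    pair = d**2 / (s + gamma * np.abs(d) + eps)
    return 2.0 * pair.sum()                 # each unordered pair appears twice
                                            # in the double sum over j and k
```

Note how larger `gamma` shrinks the penalty on large differences, which is what gives the RDP its edge-preserving behavior.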
(12) In the present invention, a new penalty based on the relative differences between each voxel 104 and its neighboring voxels in both the feature space and image space was calculated and used in the BSREM reconstruction method. We have used the same framework to incorporate the MR anatomical priors into the image reconstruction by applying an additional penalty, which is calculated based on the relative difference between each voxel activity and its neighboring voxels in the feature space, as shown in
(13) The feature space is constructed from all co-registered multi-parametric functional or anatomical MR images together with an initial PET reconstruction; the initial reconstruction may use a conventional OSEM algorithm. Because the conventionally reconstructed PET images are included in the feature space, mismatches between the anatomical priors and the true activity distribution will not affect the final image.
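For concreteness, the mapping of claim 1 — each voxel to a point in an (N+1)-dimensional space built from the seed PET image and N anatomical series — might be sketched as below. The per-series zero-mean/unit-variance normalization and the name `build_feature_space` are illustrative assumptions, not details taken from the patent:

```python
import numpy as np

def build_feature_space(seed_pet, anatomical, eps=1e-12):
    """Map each voxel into an (N+1)-dimensional feature space:
    one coordinate from the seed PET image plus one per anatomical
    series.  Each series is normalized (zero mean, unit std) so that
    no single modality dominates feature-space distances."""
    series = [np.asarray(seed_pet, float)] + [np.asarray(a, float) for a in anatomical]
    feats = [(vol - vol.mean()) / (vol.std() + eps) for vol in series]
    # result shape: (n_voxels, N+1); voxel position plays no role here
    return np.stack([f.ravel() for f in feats], axis=1)
```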
Example 1
(14) The techniques of the present invention and their advantageous results are illustrated in the following examples. In the first example, subjects were injected with 8 mCi of FDG and underwent a 60-minute brain scan on a SIGNA PET/MR (GE Healthcare, Waukesha, Wis.). 3D T1 IR FSPGR, 3D T2 CUBE, and 3D T2 FLAIR CUBE images were acquired simultaneously with PET. The seed PET images were reconstructed with TOF-OSEM with both 2.78 mm and 1.17 mm slice thickness and a transaxial pixel size of 1.17×1.17 mm² (256×256 matrix on a 30 cm FOV). They were also reconstructed using the anatomical priors, i.e., the 3D T1, 3D T2, and 3D FLAIR images, with MR guided TOF-BSREM (MRgTOF-BSREM) at isotropic 1.17 mm³ resolution.
Example 2
(17) Two subjects were injected with 5 mCi of RM2 and, after a 45 min uptake time, underwent a 20 min prostate exam on a SIGNA PET/MR (GE Healthcare, Waukesha). T2 CUBE and DWI images were acquired simultaneously with PET. The PET images were reconstructed with TOF-OSEM at 2.34×2.34×2.78 mm³ resolution (256×256 matrix on a 60 cm FOV). They were also reconstructed using the anatomical priors, i.e., the 3D T2 CUBE and DWI images, with MRgTOF-BSREM at isotropic 1.17 mm³ resolution (512×512 matrix on a 60 cm FOV).
(19) MRgTOF-BSREM shows better spatial resolution and improved SNR compared to TOF-OSEM in both the brain and prostate exams. Because MRgTOF-BSREM uses the feature space and incorporates a conventional PET image reconstruction into that feature space, it is not vulnerable to mismatches between the MR images and the true activity distribution.
(20) The image-space penalty is calculated as

R_image(x) = Σ_{j=1}^{n_v} Σ_{k∈N_j} W_j W_k (x_j − x_k)² / (x_j + x_k + γ|x_j − x_k|)

(21) and the feature-space penalty is calculated as

R_feature(x) = Σ_{j=1}^{n_v} Σ_{k∈F_j} ω_j ω_k (x_j − x_k)² / (x_j + x_k + γ|x_j − x_k|)

where n_v is the number of all voxels, N_j is the set of neighbors of voxel j in image space, F_j is the set of neighbors of voxel j in feature space, W_j, W_k, ω_j, ω_k are local penalty weights, and γ controls the level of edge-preservation.
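The patent does not spell out how the feature-space neighborhood F_j is selected; one simple, assumed choice — used here only for illustration — is the k nearest voxels in feature space with uniform weights (ω_j=ω_k=1):

```python
import numpy as np

def feature_space_rdp(x, feats, k=3, gamma=2.0, eps=1e-12):
    """RDP-style penalty where each voxel's neighbors F_j are its
    k nearest voxels in feature space (brute-force search, uniform
    weights; O(n^2) memory, so for small illustrative volumes only)."""
    x = np.asarray(x, float)
    f = np.asarray(feats, float)
    # pairwise squared distances between feature vectors
    d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # a voxel is not its own neighbor
    total = 0.0
    for j in range(len(x)):
        for kk in np.argsort(d2[j])[:k]:    # k nearest in feature space
            diff = x[j] - x[kk]
            total += diff**2 / (x[j] + x[kk] + gamma * abs(diff) + eps)
    return total
```

Because neighbors are chosen by feature similarity, spatially distant voxels with similar PET/MR signatures can penalize each other, which is the "regardless of their location" property the abstract describes.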
(22) In step 612, the PML reconstruction for the current subset and iteration is completed by updating the image using the combined PL objective functions. Steps 608, 610, and 612 are repeated for all subsets and iterations (e.g., 28 subsets and 8 iterations), resulting in a final reconstructed image 614.
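The subset/iteration structure of steps 608-612 can be mimicked on a toy problem as below. This is only a sketch: the system model is a small dense matrix, the penalty gradient step uses a quadratic neighbor-difference penalty as a simple stand-in for the RDP-based terms, and the name `bsrem_like` is hypothetical:

```python
import numpy as np

def bsrem_like(y, A, n_iter=8, n_subsets=2, beta=1e-3, eps=1e-12):
    """Toy block-sequential penalized-ML loop: an MLEM update on each
    subset of measurements, followed by a gradient step on a quadratic
    neighbor-difference penalty (stand-in for the RDP terms)."""
    m, n = A.shape
    x = np.ones(n)                               # flat initial image
    subsets = np.array_split(np.arange(m), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            ratio = ys / (As @ x + eps)          # measured / modeled counts
            x = x * (As.T @ ratio) / (As.sum(axis=0) + eps)
            g = np.zeros(n)                      # quadratic penalty gradient
            g[:-1] += 2.0 * (x[:-1] - x[1:])
            g[1:] += 2.0 * (x[1:] - x[:-1])
            x = np.maximum(x - beta * g, eps)    # keep activities positive
    return x
```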
(23) In MRgBSREM, the number of anatomical series is not limited, and any co-registered anatomical images (e.g., MR, CT) can be used to form the feature space. Any existing method of forming the feature space or of calculating the weights between each voxel and its neighbors can be used within this framework. For instance, the images could be normalized differently (e.g., normalized to the intensity of a specific tissue instead of all tissues), and the weights could be derived from either the L1 or the L2 norm of the distance between each voxel and its neighbors.
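To make the L1/L2 weighting option concrete, here is a hedged sketch; the Gaussian kernel, the `sigma` parameter, and the name `neighbor_weights` are illustrative assumptions rather than choices specified in the text:

```python
import numpy as np

def neighbor_weights(f_j, f_neighbors, norm="l2", sigma=1.0):
    """Similarity weights between one voxel's feature vector f_j and
    its neighbors', from a Gaussian of the L1 or L2 feature distance,
    normalized to sum to 1."""
    diff = np.asarray(f_neighbors, float) - np.asarray(f_j, float)
    if norm == "l1":
        d = np.abs(diff).sum(axis=1)         # L1 norm of each difference
    else:
        d = np.sqrt((diff ** 2).sum(axis=1))  # L2 norm
    w = np.exp(-(d / sigma) ** 2)            # closer in feature space => larger weight
    return w / (w.sum() + 1e-12)
```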
(24) Any existing methods to update the penalty function (besides the RDP method that was suggested in step 610) in a PML reconstruction method based on neighboring voxels can be used.
(25) This method can be combined with any motion correction method. Since it generates high-resolution images, combining it with motion correction will improve the overall image quality. Whether the motion is captured by an external camera or estimated from short PET frames, and whether the motion correction is done through list-mode reconstruction (as disclosed in Spangler-Bickell M, et al., IEEE Trans. on Radiation and Plasma Medical Sciences 3 (4), 498-503) or by registering and averaging short PET frames, the proposed method can be used to reconstruct the short PET frames with anatomical priors or can be applied directly within the list-mode reconstruction.
(26) To accelerate the reconstruction, this method can be combined with any AI method (such as deep learning and convolutional neural networks). After training, the AI module receives the PET seed image along with the anatomical images and performs the 3D high-resolution PET reconstruction. For instance, the images reconstructed with anatomical priors by this method could be used as the ground truth to train a convolutional neural network, with anatomical priors and PET images as its input, to generate high-resolution PET images at its output.