DEEP LEARNING BASED THREE-DIMENSIONAL RECONSTRUCTION METHOD FOR LOW-DOSE PET IMAGING

Abstract

Disclosed is a three-dimensional low-dose PET reconstruction method based on deep learning. The method comprises the following steps: back projecting low-dose PET raw data into the image domain so that sufficient information from the raw data is retained; selecting an appropriate three-dimensional deep neural network structure to fit the mapping between the back projection of the low-dose PET data and a standard-dose PET image; and, after the network parameters have been learned from the training samples and fixed, performing three-dimensional PET image reconstruction directly from low-dose PET raw data, thereby obtaining a low-dose PET reconstructed image with lower noise and higher resolution than those produced by traditional reconstruction algorithms and image-domain denoising.

Claims

1. A three-dimensional low-dose PET reconstruction method based on deep learning, comprising the following steps: (1) performing a back projection method on low-dose PET raw data, which specifically comprises the following sub-steps: (1.1) performing attenuation correction processing on the low-dose PET raw data, and obtaining l.sub.pp_ac, a back projection of the attenuation-corrected low-dose PET data, by subjecting the data to the transposed system matrix of a PET scanner; (1.2) obtaining l.sub.rs, a back projection of random and scatter data, by subjecting the random and scatter data of the low-dose PET raw data to the transposed system matrix of the PET scanner; (1.3) simulating an all-1 PET image, subjecting the image to the system matrix of the PET scanner to obtain a three-dimensional projection, and then back projecting the result of the three-dimensional projection into the image domain to obtain l.sub.1, a back projection of the all-1 PET image; (1.4) subtracting l.sub.rs, the back projection of the random and scatter data obtained in step (1.2), from l.sub.pp_ac, the back projection of the attenuation-corrected low-dose PET data obtained in step (1.1), to implement random and scatter correction, and then dividing by l.sub.1, the back projection of the all-1 PET image, to obtain a corrected and regularized three-dimensional low-dose PET back projection l.sub.bp: l.sub.bp=(l.sub.pp_ac−l.sub.rs)/l.sub.1 (1); (2) taking l.sub.bp, the corrected and regularized three-dimensional low-dose PET back projection obtained in step (1.4), as an input of a deep neural network, taking a reconstructed standard-dose PET image as a label of the network, updating parameters of the deep neural network through the Adam optimization algorithm to minimize a target loss function, and completing training of the deep neural network, wherein the target loss function for training the deep neural network is as follows: L=1/(N.sub.xN.sub.yN.sub.z)·Σ.sub.i=1.sup.N.sup.xΣ.sub.j=1.sup.N.sup.yΣ.sub.k=1.sup.N.sup.z|C(l.sub.bp(i,j,k))−f.sup.full(i,j,k)| (2), where N.sub.x, N.sub.y, and N.sub.z represent the total numbers of pixels of the low-dose PET back projection or the standard-dose PET image in the horizontal, vertical, and axial directions, respectively, C(·) represents the mapping from the low-dose PET back projection l.sub.bp to the standard-dose PET reconstructed image f.sup.full fitted by a three-dimensional deep neural network, and (i,j,k) indexes the pixels in the image; and (3) performing the back projection method of step (1) on newly acquired low-dose PET raw data, and inputting the result into the deep neural network trained in step (2) to obtain a corresponding low-dose PET reconstructed image.

Description

BRIEF DESCRIPTION OF DRAWINGS

[0017] FIG. 1 is a flow chart of the back projection method of low dose PET raw data;

[0018] FIG. 2 is a comparison diagram of the reconstructed PET images from using the traditional algorithm and the algorithm of the present application.

DESCRIPTION OF EMBODIMENTS

[0019] The present application will be further explained with reference to the drawings and examples.

[0020] The low-dose raw data acquired by the PET scanner form a three-dimensional sinogram, obtained by projecting a PET image through the three-dimensional X-ray transform. The three-dimensional sinogram contains not only the axial-plane projections but also the oblique-plane projections passing through the axial planes, and is characterized by a large amount of data and highly redundant information. Because of the limitations of computing and storage capacities, it is difficult for a neural network to fit the mapping from the sinogram to the PET image directly. The back projection method for the three-dimensional sinogram provided by the present application, which retains the information of the sinogram, follows the flow chart shown in FIG. 1:

[0021] (1.1) attenuation correction processing is performed on the low-dose raw PET data, and a back projection of the attenuation corrected low-dose PET data, l.sub.pp_ac, is obtained using the transposed system matrix of a PET scanner;

[0022] (1.2) a back projection of random and scatter data, l.sub.rs, is obtained by subjecting the random and scatter data of the low-dose PET data to the transposed system matrix of the PET scanner;

[0023] (1.3) to deal with the spatial variance of the three-dimensional projection and back projection process caused by the limited axial extent of the PET scanner, the present application provides a regularization method for the PET back projection: a laminogram of a simulated all-1 PET image is generated by subjecting the simulated all-1 PET image to the system matrix of the PET scanner, and the result of the projection is then back projected into the image domain to obtain a back projection of the all-1 PET image, l.sub.1;

[0024] (1.4) the back projection of the random and scatter data obtained in step (1.2), l.sub.rs, is subtracted from the back projection of the attenuation-corrected low-dose PET data obtained in step (1.1), l.sub.pp_ac, to implement random and scatter correction, and the result is then divided by the back projection of the all-1 PET image, l.sub.1, to obtain the corrected and regularized three-dimensional low-dose PET back projection l.sub.bp:

[00003] l.sub.bp=(l.sub.pp_ac−l.sub.rs)/l.sub.1 (1)
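As an illustration, the back projection sub-steps (1.1)-(1.4) can be sketched in numpy, with a small random matrix standing in for the scanner system matrix and synthetic count data in place of real acquisitions (all sizes and values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the PET system matrix: n_lors lines of response x n_vox voxels.
n_lors, n_vox = 500, 64
A = rng.random((n_lors, n_vox)) * (rng.random((n_lors, n_vox)) < 0.1)

# Hypothetical attenuation-corrected prompts and randoms+scatter estimate.
p_ac = rng.poisson(20.0, size=n_lors).astype(float)
p_rs = 2.0 * rng.random(n_lors)

l_pp_ac = A.T @ p_ac                   # step (1.1): back projection of corrected prompts
l_rs = A.T @ p_rs                      # step (1.2): back projection of randoms + scatter
l_one = A.T @ (A @ np.ones(n_vox))     # step (1.3): back projection of projected all-1 image

# Step (1.4): corrected, regularized back projection, eq. (1).
l_bp = (l_pp_ac - l_rs) / l_one
```

In a real scanner the matrix-vector products would be replaced by the scanner's forward- and back-projection operators; only the arithmetic of equation (1) is taken from the method itself.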

[0025] The relationship between the corrected and regularized low-dose PET three-dimensional back projection l.sub.bp and the PET reconstructed image is expressed as follows:

[00004] f(x,y,z)=l.sub.bp(x,y,z)⊛F.sup.−1{1/H(υ,ψ)} (3)

[0026] where f(x,y,z) and l.sub.bp(x,y,z) represent the activity values of the three-dimensional PET image and of the corrected and regularized back projection at a point (x,y,z), respectively, ⊛ denotes three-dimensional convolution, and H(υ,ψ) is the three-dimensional Fourier transform of the rotationally symmetric PSF (point spread function), expressed in spherical coordinates and defined as:

[00005] H(υ,ψ)=2π/υ for |ψ|≤Θ; H(υ,ψ)=(4/υ)·sin.sup.−1(sin Θ/|sin ψ|) for |ψ|>Θ (4)

where υ is the radial frequency, ψ is the angle between the frequency vector and the scanner axis, and Θ is the axial acceptance angle of the scanner. F.sup.−1{1/H(υ,ψ)} represents a ramp-type three-dimensional image-domain filter; convolving the PET back projection with this filter recovers high-resolution PET images.
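For illustration, the filter F.sup.−1{1/H(υ,ψ)} of equations (3)-(4) can be discretized on an FFT grid. The sketch below assumes ψ is measured from the scanner axis and uses a hypothetical acceptance angle; it is a simplified discretization, not the exact filter of the embodiment:

```python
import numpy as np

def apply_inverse_colsher_filter(l_bp, theta_max):
    """Convolve a 3D back projection with F^{-1}{1/H(nu, psi)} via FFT.

    l_bp: 3D array indexed (x, y, z); theta_max: acceptance angle Theta in radians.
    psi is taken as the angle between the frequency vector and the z axis.
    """
    fx = np.fft.fftfreq(l_bp.shape[0])
    fy = np.fft.fftfreq(l_bp.shape[1])
    fz = np.fft.fftfreq(l_bp.shape[2])
    vx, vy, vz = np.meshgrid(fx, fy, fz, indexing="ij")
    nu = np.sqrt(vx**2 + vy**2 + vz**2)           # radial frequency
    rho = np.sqrt(vx**2 + vy**2)                  # transaxial component
    sin_psi = np.divide(rho, nu, out=np.zeros_like(nu), where=nu > 0)

    # Eq. (4): H = 2*pi/nu for |psi| <= Theta, 4*arcsin(sin Theta/|sin psi|)/nu otherwise.
    H = np.zeros_like(nu)
    full = (nu > 0) & (sin_psi <= np.sin(theta_max))
    part = (nu > 0) & (sin_psi > np.sin(theta_max))
    H[full] = 2.0 * np.pi / nu[full]
    H[part] = 4.0 * np.arcsin(np.sin(theta_max) / sin_psi[part]) / nu[part]

    inv_H = np.zeros_like(nu)                     # DC term left at zero
    inv_H[H > 0] = 1.0 / H[H > 0]
    return np.real(np.fft.ifftn(np.fft.fftn(l_bp) * inv_H))
```

A call such as `apply_inverse_colsher_filter(l_bp, np.deg2rad(15.0))` would sharpen a back-projected volume; the 15-degree angle is purely illustrative.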

[0027] The present application provides a three-dimensional deep neural network to fit the nonlinear and spatially variant deblurring mapping from the laminogram to the reconstructed PET image. The deep neural network is composed of two parts. The first part is a U-Net composed of 3D convolutional layers, 3D deconvolutional layers, and shortcuts between them. The convolutional layers encode the PET back projection to extract high-level features, and the deconvolutional layers decode the features to obtain a rough estimate of the PET image. The shortcuts in the network superimpose the outputs of the convolutional layers onto those of the corresponding deconvolutional layers, which improves training and effectively prevents degradation without increasing the number of network parameters.

[0028] The second part of the deep neural network is composed of multiple residual blocks in series, which further refine the high-frequency details in the rough estimate of the PET image. Since the low-frequency information contained in the rough estimate is similar to that of the standard-dose PET image, the residual blocks only need to learn the high-frequency residual between them, which improves the efficiency of network training.
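The two-part architecture described above can be sketched in PyTorch. The depths, channel counts, and kernel sizes below are illustrative placeholders, not the configuration of the embodiment:

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Residual block (second part): learns only a high-frequency residual."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # identity shortcut plus learned residual

class BPNet(nn.Module):
    """Hypothetical sketch: a shallow 3D encoder/decoder with a shortcut
    (first part), followed by serial residual blocks (second part)."""
    def __init__(self, ch=8, n_res=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, ch, 3, stride=2, padding=1),
                                 nn.ReLU(inplace=True))
        self.dec = nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1)
        self.refine = nn.Sequential(*[ResBlock3D(1) for _ in range(n_res)])

    def forward(self, x):
        rough = self.dec(self.enc(x)) + x  # shortcut superimposes input and decoder output
        return self.refine(rough)          # residual blocks refine high-frequency detail
```

A full implementation would use several encoder/decoder levels with a shortcut at each resolution, as in a standard U-Net; the single level here only illustrates the data flow.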

[0029] (2) The corrected and regularized three-dimensional back projection l.sub.bp of the low-dose PET data obtained in step (1.4) is therefore taken as the input of a deep neural network, and a reconstructed standard-dose PET image is taken as the label of the network; the parameters of the deep neural network are updated through the Adam optimization algorithm to minimize a target loss function, completing the training of the deep neural network and yielding an estimate {tilde over (C)}(·) of the mapping network. The standard-dose prior knowledge learned by training the mapping network C(·) can compensate for the high-frequency details missing from the low-dose PET back projection during testing. The target loss function of the deep neural network training is as follows:

[00006] L=1/(N.sub.xN.sub.yN.sub.z)·Σ.sub.i=1.sup.N.sup.xΣ.sub.j=1.sup.N.sup.yΣ.sub.k=1.sup.N.sup.z|C(l.sub.bp(i,j,k))−f.sup.full(i,j,k)| (2)

[0030] where N.sub.x, N.sub.y, and N.sub.z represent the total numbers of pixels of the low-dose PET back projection or the standard-dose PET image in the horizontal, vertical, and axial directions, respectively, C(·) represents the mapping from the low-dose PET back projection l.sub.bp to the standard-dose PET reconstructed image f.sup.full fitted by the three-dimensional deep neural network, and (i,j,k) indexes the pixels in the image.
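A minimal training step matching equation (2), minimizing the mean absolute error with the Adam optimizer, might look as follows in PyTorch; a single convolution stands in for the full network, and random tensors replace real back projections and standard-dose labels:

```python
import torch

# Stand-in for the mapping network C(.); in practice this would be the
# U-Net plus residual-block model described above.
model = torch.nn.Conv3d(1, 1, 3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.L1Loss()  # mean absolute error, i.e. eq. (2)

l_bp = torch.rand(4, 1, 16, 16, 16)    # corrected back projections (toy data)
f_full = torch.rand(4, 1, 16, 16, 16)  # standard-dose labels (toy data)

for step in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(l_bp), f_full)  # eq. (2) averaged over the mini-batch
    loss.backward()
    optimizer.step()
```

Real training would iterate over mini-batches of paired low-dose back projections and standard-dose reconstructions; the loop above only demonstrates the loss and optimizer wiring.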

[0031] (3) Newly acquired low-dose PET raw data are reconstructed using the trained mapping {tilde over (C)}(·): the back projection method of step (1) is first executed, and the processed back projection is then fed into the mapping network {tilde over (C)}(·) to obtain the corresponding low-dose PET reconstructed image.

[0032] The result of reconstructing low-dose PET data with a traditional reconstruction algorithm is shown in FIG. 2(a): the reconstructed image is heavily corrupted by noise, and lesions cannot be distinguished from the noise. The reconstruction algorithm proposed by the present application yields the reconstructed low-dose PET image shown in FIG. 2(b), whose signal-to-noise ratio is markedly higher than that of the image in FIG. 2(a).