Deep-learning based separation method of a mixture of dual-tracer single-acquisition PET signals with equal half-lives

11445992 · 2022-09-20

Abstract

The present invention discloses a deep belief network (DBN) based separation method for a mixture of dual-tracer single-acquisition PET signals labelled with the same isotope. It predicts the two separate PET signals by establishing a complex mapping between the dynamic mixed concentration distribution of a same-isotope-labeled dual-tracer pair and the two single-radiotracer concentration images. Based on compartment models and Monte Carlo simulation, the present invention selects three sets of tracer pairs labeled with the same radionuclide as study objects and simulates the entire PET process from injection to scanning to generate sufficient training and testing sets. When the testing sets are fed into the universal deep belief network trained on the training sets, the prediction results show that the two individual PET signals can be reconstructed well, which verifies the effectiveness of using a deep belief network to separate dual-tracer PET signals labelled with the same isotope.

Claims

1. A deep belief network based method of separating a mixture of dual-tracer positron emission tomography (PET) signals from dual tracers labelled with the same isotope, which comprises the following steps: (1) injecting the dual tracers, including tracer I and tracer II, labelled with the same isotope into a biological tissue and performing dynamic PET imaging on the biological tissue, obtaining a coincidence counting vector for each of a series of moments, and forming a dynamic coincidence counting sequence reflecting a concentration distribution of the mixture of the dual tracers, denoted as S^dual; (2) injecting tracer I and tracer II individually into the biological tissue in sequence and performing two separate dynamic PET image acquisitions on the biological tissue, obtaining the coincidence counting vectors of the two separate acquisitions at different moments, and constituting the dynamic coincidence counting sequences that respectively reflect the distributions of tracer I and tracer II, denoted as S^I and S^II; (3) using a PET reconstruction algorithm to reconstruct the dynamic PET image sequences X^dual, X^I and X^II corresponding to the dynamic coincidence counting sequences S^dual, S^I and S^II; (4) repeating steps (1)~(3) multiple times to generate a plurality of dynamic PET image sequences X^dual, X^I and X^II and dividing them into training sets and testing sets; (5) extracting the time activity curve (TAC) of each pixel from the dynamic PET image sequences X^dual, X^I and X^II, taking the TACs of the training-set X^dual as the input samples and the TACs of the corresponding X^I and X^II as the ground truth, and training a deep belief network on them to obtain a dual-tracer PET reconstruction model; and (6) inputting every TAC of any X^dual in the testing set into the reconstruction model one by one, outputting the TACs corresponding to X^I and X^II, and finally recombining the output TACs to obtain the dynamic PET images X_test^I and X_test^II corresponding to tracer I and tracer II.

2. The separation method described in claim 1, characterized in that, in the step (5), the TAC of each pixel is extracted from the dynamic PET image sequences X^dual, X^I and X^II according to the following formulas:
X^dual = [TAC_1^dual, TAC_2^dual, …, TAC_n^dual]^T
X^I = [TAC_1^I, TAC_2^I, …, TAC_n^I]^T
X^II = [TAC_1^II, TAC_2^II, …, TAC_n^II]^T
wherein TAC_1^dual~TAC_n^dual are the TACs of the 1st to n-th pixels in the dynamic PET image sequence X^dual, TAC_1^I~TAC_n^I are the TACs of the 1st to n-th pixels in X^I, TAC_1^II~TAC_n^II are the TACs of the 1st to n-th pixels in X^II, n is the total number of pixels of the PET image, and T denotes matrix transposition.

3. The separation method according to claim 1, characterized in that the specific process of training by a deep belief network (DBN) in the step (5) is as follows: 5.1 initializing the DBN framework consisting of an input layer, hidden layers and an output layer, wherein the hidden layers are composed of three stacked restricted Boltzmann machines (RBMs); 5.2 initializing the parameters of the DBN, which include the number of nodes in the hidden layers, the offset vectors and weight matrices between layers, the activation function and the maximum number of iterations; 5.3 pre-training the stacked RBMs; 5.4 passing the pre-trained parameters to the initialized deep belief network, substituting the TACs of the dynamic PET image sequence X^dual into the DBN, calculating an error function L between the output result and the corresponding ground truth, and continuously updating the parameters of the entire network by gradient descent until the error function L converges or the maximum number of iterations is reached, thus completing the training and obtaining the dual-tracer PET reconstruction model.

4. The separation method described in claim 3, characterized in that in step 5.3 the restricted Boltzmann machines in the hidden layers are pre-trained; each restricted Boltzmann machine is composed of one visible layer and one hidden layer, and the weights between the hidden layer and the visible layer are continuously updated by the contrastive divergence algorithm until the hidden layer can accurately represent the features of the visible layer and reconstruct the visible layer from them.

5. The separation method described in claim 3, characterized in that the error function L in the step 5.4 is defined as follows:
L = ||ŷ_j^I − TAC_j^I||_2^2 + ||ŷ_j^II − TAC_j^II||_2^2 + ζ||(ŷ_j^I + ŷ_j^II) − (TAC_j^I + TAC_j^II)||_2^2
wherein TAC_j^I is the TAC of the j-th pixel in the dynamic PET image sequence X^I; TAC_j^II is the TAC of the j-th pixel in the dynamic PET image sequence X^II; ŷ_j^I and ŷ_j^II denote the two output TACs corresponding to X^I and X^II obtained by feeding the TAC of the j-th pixel of the dynamic PET image sequence X^dual into the DBN; j is a natural number with 1≤j≤n, n is the total number of pixels of the PET image; || ||_2 is the L2 norm; and ζ is a predefined constant.

6. The separation method described in claim 1, characterized in that in the step (6) the TAC of the j-th pixel of X^dual in the testing set is input into the dual-tracer PET reconstruction model, and [TAC_j^I, TAC_j^II] is output corresponding to the two separated tracers, wherein 1≤j≤n and n is the total number of pixels of the PET image; after the TACs of all the pixels of X^dual have been tested in this way, the dynamic PET image sequences X_test^I and X_test^II that correspond to tracer I and tracer II are obtained according to the following formulas:
X_test^I = [TAC_1^I, TAC_2^I, …, TAC_n^I]^T
X_test^II = [TAC_1^II, TAC_2^II, …, TAC_n^II]^T
wherein T denotes matrix transposition.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is the flow diagram of the separation method of dual-tracer PET signals according to the present invention.

(2) FIG. 2 is the structure of the DBN according to the present invention.

(3) FIG. 3(a) is the Hoffman brain template.

(4) FIG. 3(b) is the complex brain template.

(5) FIG. 3(c) is the Zubal thorax template.

(6) FIG. 4(a) is the ground truth of the 9th frame of [11C]DTBZ.

(7) FIG. 4(b) is the estimated result of the 9th frame of [11C]DTBZ.

(8) FIG. 4(c) is the ground truth of the 9th frame of [11C]FMZ.

(9) FIG. 4(d) is the estimated result of the 9th frame of [11C]FMZ.

(10) FIG. 5(a) is the ground truth of the 9th frame of [62Cu]ATSM.

(11) FIG. 5(b) is the estimated result of the 9th frame of [62Cu]ATSM.

(12) FIG. 5(c) is the ground truth of the 9th frame of [62Cu]PTSM.

(13) FIG. 5(d) is the estimated result of the 9th frame of [62Cu]PTSM.

(14) FIG. 6(a) is the ground truth of the 9th frame of [18F]FDG.

(15) FIG. 6(b) is the estimated result of the 9th frame of [18F]FDG.

(16) FIG. 6(c) is the ground truth of the 9th frame of [18F]FLT.

(17) FIG. 6(d) is the estimated result of the 9th frame of [18F]FLT.

SPECIFIC EMBODIMENTS OF THE INVENTION

(18) In order to describe the present invention more specifically, detailed descriptions are provided below in conjunction with the attached figures and the following specific embodiments:

(19) As FIG. 1 shows, the DBN based separation method of a mixture of dual-tracer single-acquisition PET signals labelled with the same isotope according to the present invention includes the following steps:

(20) (1) Preparation of Training Data
1.1 Injecting the dual tracers labelled with the same isotope (tracer I and tracer II) into a biological tissue and performing dynamic PET imaging on it; a sequence of sinograms is then acquired, denoted as S^dual.
1.2 Injecting the individual tracer I and tracer II labelled with the same isotope into the biological tissue sequentially and performing two separate dynamic PET acquisitions, respectively; two sequences of sinograms are then acquired, denoted as S^I and S^II.
1.3 Using a reconstruction algorithm to reconstruct the sinograms into the concentration distributions of the radioactive tracers in the body, denoted as X^dual, Y^I and Y^II corresponding to S^dual, S^I and S^II, respectively.
1.4 Repeating steps 1.1~1.3 to generate enough sequences of dynamic PET images X^dual, Y^I and Y^II, which are later divided randomly into training sets and testing sets with a ratio of about 7:3.

(21) Extracting the pixel-based time activity curves (TACs) from the reconstructed images X^dual, Y^I and Y^II; the process can be described as follows:
X^dual = [x_1, x_2, x_3, …, x_N]^T, x_i = [x_i^1, x_i^2, …, x_i^M]^T
Y^I = [(y^1)_1, (y^1)_2, …, (y^1)_N]^T, (y^1)_i = [(y^1)_i^1, (y^1)_i^2, …, (y^1)_i^M]^T
Y^II = [(y^2)_1, (y^2)_2, …, (y^2)_N]^T, (y^2)_i = [(y^2)_i^1, (y^2)_i^2, …, (y^2)_i^M]^T
wherein x_i^j represents the radioactive concentration of pixel i at the j-th frame, (y^1)_i^j and (y^2)_i^j represent the radiotracer concentrations of pixel i for tracer I and tracer II at the j-th frame, respectively, N is the total number of pixels of the resulting PET image, and M is the total number of frames acquired by dynamic PET.
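In implementation terms, this extraction is simply a reshape-and-transpose of the dynamic image stack. A minimal NumPy sketch is given below; the function name and the toy dimensions are illustrative assumptions, not part of the invention:

```python
import numpy as np

def extract_tacs(dynamic_images):
    """Turn a dynamic PET sequence (M frames of H x W images) into
    per-pixel time activity curves (TACs).

    dynamic_images : array of shape (M, H, W)
    returns        : array of shape (N, M) with N = H*W, where row i is
                     the TAC x_i = [x_i^1, ..., x_i^M] of pixel i.
    """
    m, h, w = dynamic_images.shape
    # Flatten each frame, then transpose so every row follows one pixel
    # through all M frames.
    return dynamic_images.reshape(m, h * w).T

# Toy example: 3 frames of a 2 x 2 image.
frames = np.arange(12).reshape(3, 2, 2)
tacs = extract_tacs(frames)   # shape (4, 3); row 0 is pixel 0 over time
```

The transpose is what makes each row a TAC rather than a frame, matching the x_i vectors defined above.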

(22) (2) Preparation of Training Sets and Test Set Data.

(23) 70% of the TAC data set X^dual is extracted as the training set X_train^dual, and the remaining 30% is used as the testing set X_test^dual; the parts of Y^I and Y^II corresponding to the training set and the testing set are respectively concatenated to serve as the labels of the training set and the ground truth of the testing set, which can be written as follows:
Y_train = [Y_train^I, Y_train^II]
Y_test = [Y_test^I, Y_test^II]
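The split and label construction described in this step can be sketched as follows; the helper name, the fixed random seed and the toy array sizes are illustrative assumptions:

```python
import numpy as np

def split_and_label(tacs_dual, tacs_1, tacs_2, train_frac=0.7, seed=0):
    """Randomly split pixel TACs into training and testing sets and build
    the concatenated labels Y = [Y^I, Y^II].

    tacs_dual : (N, M) mixed-tracer TACs (network input)
    tacs_1/2  : (N, M) single-tracer TACs of tracer I and tracer II
    """
    n = tacs_dual.shape[0]
    order = np.random.default_rng(seed).permutation(n)
    cut = int(train_frac * n)
    tr, te = order[:cut], order[cut:]
    # Each label is the concatenation [TAC^I, TAC^II] of length 2M.
    labels = np.concatenate([tacs_1, tacs_2], axis=1)
    return (tacs_dual[tr], labels[tr]), (tacs_dual[te], labels[te])

# 10 pixels, 4 frames each.
dual = np.random.rand(10, 4)
y1, y2 = np.random.rand(10, 4), np.random.rand(10, 4)
(train_x, train_y), (test_x, test_y) = split_and_label(dual, y1, y2)
```

Concatenating the two single-tracer TACs into one label vector lets a single network output both separated signals at once.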

(24) (3) Constructing a deep belief network for the signal separation of dual-tracer PET labelled with the same isotope; as shown in FIG. 2, the deep belief network consists of an input layer, three hidden layers and an output layer.

(25) (4) The training set is input into this network for training. The training process is as follows:
4.1 Initializing the network: initializing the deep belief network, including the number of nodes of all the layers, the offset vectors and weight matrices, the learning rate, the activation function and the number of iterations.
4.2 Inputting X_train^dual into the network for training; the training process is divided into two parts, pre-training and fine-tuning.
4.2.1 Pre-training: the contrastive divergence algorithm is used to keep updating the parameters of each restricted Boltzmann machine until all the stacked restricted Boltzmann machines are trained well. In more detail, each restricted Boltzmann machine consists of a visible layer and a hidden layer; the contrastive divergence algorithm updates the weights until the hidden layer accurately expresses the features of the visible layer and can reconstruct the visible layer from them.
4.2.2 Fine-tuning: the parameters obtained by pre-training are copied to a common neural network with the same structure. These parameters are used as the initial values of the neural network in the final fine-tuning process. The error function L between the network output Ŷ_train and the label Y_train is calculated, and based on L the gradient descent algorithm updates the weight matrices of the entire network until the iteration stops.
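The pre-training step can be illustrated with a minimal CD-1 update for one restricted Boltzmann machine. This is a generic textbook sketch, not the patent's implementation; all names, sizes and hyperparameters are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1, rng=None):
    """One contrastive-divergence (CD-1) update of a single RBM.

    v0 : (batch, n_vis) visible batch; W : (n_vis, n_hid) weights.
    """
    rng = rng or np.random.default_rng(0)
    # Positive phase: hidden probabilities and a sample of hidden states.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to the visible layer and up again.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # Approximate log-likelihood gradient from the two phases.
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid

# Toy pre-training loop for one RBM (8 visible, 4 hidden units).
rng = np.random.default_rng(1)
W = 0.01 * rng.standard_normal((8, 4))
b_vis, b_hid = np.zeros(8), np.zeros(4)
data = (rng.random((32, 8)) > 0.5).astype(float)
for _ in range(50):
    W, b_vis, b_hid = cd1_step(data, W, b_vis, b_hid, rng=rng)
```

In a DBN, the hidden activations of each trained RBM become the visible data for the next RBM in the stack, before the fine-tuning of 4.2.2.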

(26) The error function L is as follows:
L = ||Ŷ_train^I − Y_train^I||_2^2 + ||Ŷ_train^II − Y_train^II||_2^2 + γ||(Ŷ_train^I + Ŷ_train^II) − (Y_train^I + Y_train^II)||_2^2
wherein the first two terms reflect the errors between the ground truth and the predicted values of the two tracers, while the last term constrains the sum of the two predicted signals to agree with the mixed signal; γ is a predefined constant used to adjust the proportion of the last term in the loss function.
4.3 Adjusting the parameters of the whole network by the backpropagation algorithm; for the j-th pixel the error function is:
Loss(ŷ_j^I, ŷ_j^II) = ||ŷ_j^I − TAC_j^I||_2^2 + ||ŷ_j^II − TAC_j^II||_2^2 + ξ||(ŷ_j^I + ŷ_j^II) − (TAC_j^I + TAC_j^II)||_2^2
wherein ŷ_j^I and ŷ_j^II are the predicted values of TAC_j^I and TAC_j^II output by the DBN, respectively, and ξ is the weight coefficient.
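A direct NumPy transcription of this per-pixel loss may clarify the role of the ξ term; the function name, the ξ value and the test vectors are illustrative:

```python
import numpy as np

def separation_loss(pred1, pred2, true1, true2, xi=0.1):
    """Loss = ||pred^I - TAC^I||_2^2 + ||pred^II - TAC^II||_2^2
            + xi * ||(pred^I + pred^II) - (TAC^I + TAC^II)||_2^2
    The xi term ties the sum of the two predictions to the mixed signal."""
    sq = lambda a: float(np.sum(np.square(a)))
    return (sq(pred1 - true1) + sq(pred2 - true2)
            + xi * sq((pred1 + pred2) - (true1 + true2)))

# Tracer I predicted perfectly, tracer II off by [1, 0]:
t1 = np.array([1.0, 2.0]); p1 = t1.copy()
t2 = np.zeros(2);          p2 = np.array([1.0, 0.0])
loss = separation_loss(p1, p2, t1, t2, xi=0.5)  # 0 + 1 + 0.5*1 = 1.5
```

Note that the sum-consistency term is redundant when both predictions are exact, but it penalizes solutions whose individual errors cancel in opposite directions.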

(27) (5) Inputting the TACs of the testing set X_test^dual into the trained neural network to obtain the separated signals of the dual-tracer PET labelled with the same isotope.
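The test-time procedure, separating every TAC and recombining the outputs into two dynamic image sequences, can be sketched as follows. The `model` callable is a stand-in for the trained DBN, and all names and sizes are illustrative:

```python
import numpy as np

def separate_and_reassemble(model, tacs_dual, h, w, m):
    """Feed every mixed TAC through the trained model and rebuild the two
    dynamic image sequences X_test^I and X_test^II.

    model     : callable mapping (N, M) mixed TACs to (N, 2M) outputs
                [TAC^I, TAC^II] -- a stand-in for the trained DBN
    tacs_dual : (N, M) with N = h*w pixels
    """
    out = model(tacs_dual)                 # (N, 2M)
    tac1, tac2 = out[:, :m], out[:, m:]
    # Rows are pixels and columns are frames; transpose back to (M, h, w).
    return tac1.T.reshape(m, h, w), tac2.T.reshape(m, h, w)

# Sanity check with a dummy "model" that returns the mixed TAC twice.
frames = np.arange(12).reshape(3, 2, 2).astype(float)
tacs = frames.reshape(3, 4).T              # 4 pixels, 3 frames
img1, img2 = separate_and_reassemble(
    lambda x: np.concatenate([x, x], axis=1), tacs, 2, 2, 3)
```

The reshape is the exact inverse of the TAC extraction, so a pass-through model reproduces the input frames.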

(28) Next, we validate the present invention by simulated experiment.

(29) (1) Phantom Selection

(30) There are three different dual-tracer groups in the training datasets, and every group is paired with a different phantom with different regions of interest (ROIs), which represent different biochemical environments. FIG. 3(a) is the Hoffman brain phantom with 3 ROIs for [18F]FDG+[18F]FLT; FIG. 3(b) is the complex brain phantom with 4 ROIs for [11C]FMZ+[11C]DTBZ; FIG. 3(c) is the Zubal thorax phantom with 3 ROIs for [62Cu]ATSM+[62Cu]PTSM.

(31) (2) The Simulation of PET Concentration Distribution

(32) The parallel compartment model was used to simulate the kinetics of the dual tracers, and the stable dual-tracer concentration distribution was acquired by solving the corresponding dynamic ordinary differential equations (ODEs). Likewise, the single compartment model based on kinetic parameters was used to simulate the kinetics of a single tracer in vivo, and the stable single-tracer concentration distribution was acquired by solving its ODEs.

(33) (3) The Simulation of PET Scanning Process

(34) Computer Monte Carlo simulations were used to perform the dynamic dual-tracer PET scans with the help of the GATE software. All simulations are based on the geometry of the full-3D whole-body PET scanner SHR74000, designed by Hamamatsu Photonics. The scanner has 6 detector rings, each ring containing 48 detector blocks, and each detector block consists of a 16×16 array of lutetium yttrium orthosilicate (LYSO) crystals; the ring diameter of the scanner is 826 mm. When the three groups of dual-tracer and single-tracer concentration distribution maps were input into the Monte Carlo system, the corresponding sinogram data were acquired.

(35) (4) Reconstruction Process

(36) The sinograms were reconstructed using the classical maximum-likelihood expectation-maximization (ML-EM) algorithm to obtain the concentration distributions of the simulated radiotracer pairs.
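A toy version of the ML-EM update may be helpful here; the system matrix and counts are illustrative, and a real PET system matrix is far larger and sparse:

```python
import numpy as np

def mlem(A, y, n_iter=20):
    """Classical ML-EM image reconstruction:
        x <- (x / A^T 1) * A^T (y / (A x))
    A : (n_bins, n_pix) system matrix; y : measured sinogram counts."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Trivial sanity check: with an identity system matrix, ML-EM recovers
# the measured counts exactly.
x = mlem(np.eye(3), np.array([2.0, 3.0, 4.0]))
```

The multiplicative update keeps the image non-negative, which is why ML-EM is a standard choice for emission tomography.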

(37) (5) Acquisition of TAC Curves

(38) Pixel-based TACs were obtained by recombining the reconstructed concentration distribution matrices of the three groups of mixed radiotracer images.

(39) (6) Training Process

(40) 70% of the TACs of the three dual-tracer groups ([18F]FDG+[18F]FLT, [11C]FMZ+[11C]DTBZ and [62Cu]ATSM+[62Cu]PTSM) were input into the DBN as training data for pre-training. The TACs of the single tracers serve as labels providing feedback for fine-tuning the entire network.

(41) (7) Testing

(42) The remaining 30% was used to evaluate the validity of the network.

(43) FIG. 4(a) and FIG. 4(b) are the simulated radioactive concentration image of the 9th frame of [11C]DTBZ and the corresponding predicted image obtained by the trained DBN, respectively. FIG. 4(c) and FIG. 4(d) are the simulated and predicted images of the 9th frame of [11C]FMZ, respectively. FIG. 5(a) and FIG. 5(b) are the simulated and predicted images of the 9th frame of [62Cu]ATSM, respectively. FIG. 5(c) and FIG. 5(d) are the simulated and predicted images of the 9th frame of [62Cu]PTSM, respectively. FIG. 6(a) and FIG. 6(b) are the simulated and predicted images of the 9th frame of [18F]FDG, respectively. FIG. 6(c) and FIG. 6(d) are the simulated and predicted images of the 9th frame of [18F]FLT, respectively.

(44) Comparing the predicted images with the simulated ground truth shows that the constructed deep belief network can separate dual-tracer PET signals carrying the same isotope label well. This confirms the effectiveness of the deep belief network in feature extraction and signal separation, and demonstrates that the method of the invention is effective for processing PET signals labelled with the same isotope.

(45) The description of the specific embodiments is intended to help ordinary technicians in the technical field understand and apply the present invention. A person familiar with the technology in this field can obviously modify the specific implementations mentioned above and apply the general principles described here to other instances without creative labor. Therefore, the present invention is not limited to the above embodiments; according to the disclosure of the present invention, improvements and modifications of the present invention shall all fall within its protection scope.