SUPER RESOLUTION IN POSITRON EMISSION TOMOGRAPHY IMAGING USING ULTRAFAST ULTRASOUND IMAGING
20210239863 · 2021-08-05
Inventors
- Bertrand Tavitian (Paris, FR)
- Mickaël Tanter (Paris, FR)
- Mailyn PEREZ-LIVA (Paris, FR)
- Joaquin LOPEZ HERRAIZ (Madrid, ES)
- Jean PROVOST (Montreal, CA)
CPC classification
G06T11/008
PHYSICS
G06T2211/464
PHYSICS
A61B6/5229
HUMAN NECESSITIES
G01T1/2985
PHYSICS
A61B6/5235
HUMAN NECESSITIES
G06T3/4076
PHYSICS
International classification
G01T1/29
PHYSICS
Abstract
An imaging method including: a) acquiring N successive positron emission tomography (PET) low resolution images Γ.sub.i and simultaneously, N successive Ultrafast Ultrasound Imaging (UUI) images Ui of a moving object; b) determining from each UUI image Ui, the motion vector fields M.sub.i that correspond to the spatio-temporal geometrical transformation of the motion of the object; c) obtaining a final estimated high resolution image H of the object by iterative determination of a high resolution image H.sup.n+1 obtained by applying several correction iterations to a current estimated high resolution image H.sup.n, n being the number of iterations, starting from an initial estimated high resolution image H.sup.1 of the object, each correction iteration including at least: i) warping the estimated high resolution image H.sup.n using the motion vector fields M.sub.i to determine a set of low resolution reference images L.sup.n.sub.i; ii) determining a differential image Di by difference between each PET image Γ.sub.i and the corresponding low resolution reference image L.sup.n.sub.i; iii) warping back the differential images Di using the motion vector fields M.sub.i and averaging the N warped back differential images to obtain a high resolution differential image; iv) determining the high resolution image H.sup.n+1 by correcting the high resolution image H.sup.n using the high resolution differential image; d) applying the motion vector fields M.sub.i of each UUI image Ui to the final high resolution image H.
Claims
1. An imaging method including: a) acquiring N successive positron emission tomography (PET) low resolution images Γ.sub.i and simultaneously, N successive Ultrafast Ultrasound Imaging (UUI) images Ui of a moving object; b) determining from each UUI image Ui, the motion vector fields M.sub.i that correspond to the spatio-temporal geometrical transformation of the motion of the object; c) obtaining a final estimated high resolution image H of the object by iterative determination of a high resolution image H.sup.n+1 obtained by applying several correction iterations to a current estimated high resolution image H.sup.n, n being the number of iterations, starting from an initial estimated high resolution image H.sup.1 of the object, each correction iteration including at least: i) warping said estimated high resolution image H.sup.n using the motion vector fields M.sub.i to determine a set of low resolution reference images L.sup.n.sub.i; ii) determining a differential image Di by difference between each PET image Γ.sub.i and the corresponding low resolution reference image L.sup.n.sub.i; iii) warping back said differential images Di using the motion vector fields M.sub.i and averaging the N warped back differential images to obtain a high resolution differential image; iv) determining the high resolution image H.sup.n+1 by correcting said high resolution image H.sup.n using said high resolution differential image; d) applying the motion vector fields M.sub.i of each UUI image Ui to said final high resolution image H.
2. The imaging method according to claim 1, wherein the motion vector fields M.sub.i of b) are estimated by combining global and local image estimators, characterizing respectively the intensity and the local phase information obtained from two consecutive frames of the set of UUI images.
3. The imaging method according to claim 2, wherein the motion vector fields M.sub.i are estimated according to the following equation:
4. The imaging method according to claim 2, wherein the local phase θ(x,y) is obtained from the monogenic signal using quadrature filters, which are the combination of an even-symmetric band-pass filter and of two consecutive odd-symmetric filters applied to the even component of the signal according to the following equation:
5. The imaging method according to claim 4, wherein the even-symmetric band-pass filter is a log-Gabor radial filter and the two consecutive odd-symmetric filters correspond to the Riesz transform of the log-Gabor radial filter, wherein three uniformly decreasing center frequencies (30, 20 and 10 pixels) are used for the band-pass of the log-Gabor filter, defining three consecutive scales, wherein at each scale, 20 iterations are performed and the similarity function with each local phase scale is evaluated, and wherein the final-scale motion field is regularized with a Gaussian low-pass kernel with a Full-Width at Half-Maximum of 10 pixels.
6. The imaging method according to claim 1, wherein a) is performed using the UUI-B-mode dynamic sequence of the UUI system and wherein the initial estimated high resolution image H.sup.1 of c) is obtained by up-sampling the image resulting from the motion registration of the N-frame gated-PET to the dimensions of the UUI-B-mode maps.
7. The imaging method according to claim 1, wherein the positron emission tomography (PET) low resolution images Γ.sub.i are positron emission tomography-computed tomography (PET-CT) images.
8. The imaging method according to claim 1, wherein in c) i), the low resolution reference images L.sup.n.sub.i are estimated by down-sampling H.sup.n to the dimensions of the low-resolution images Γ.sub.i, warping H.sup.n to a reference time frame i and blurring H.sup.n according to the following equation:
L.sup.n.sub.i=DBM.sub.iH.sup.n+Ψ.sub.i, wherein H is a high-resolution image of the inspected object, L.sub.i is a set containing i=1, . . . , N.sub.f low-resolution image frames of the object, N.sub.f is the total number of frames, M.sub.i is the motion vector fields (MVF) containing the spatio-temporal geometrical transformation of the object motion at frame i, B is a blurring kernel or point spread function (PSF) of the imaging system, D is a down-sampling kernel defining the difference in pixel size between H and L.sub.i, i.e. combining several voxels of the grid into one, for example using linear interpolation, and Ψ.sub.i an additive noise term.
9. The imaging method according to claim 1, wherein the n-th estimation H.sup.n of the high-resolution image is used to calculate the divergence of the gradient of the Total Variation (TV) model penalizing term, which measures the quality of the restored image by countering the effect of the noise term and which corresponds to the last term of the following equation, and wherein the (n+1)-th estimation H.sup.n+1 of the high-resolution image is obtained by means of the following steepest-descent algorithm:
10. An imaging device comprising a positron emission tomography (PET) scanner, an Ultrafast Ultrasound Imaging (UUI) scanner and a processor, wherein said PET scanner acquires a set of N successive low resolution images Γ.sub.i and said UUI scanner simultaneously acquires a set of N successive images Ui of a moving object; and wherein the processor is configured to perform at least: a) receiving the N successive positron emission tomography (PET) low resolution images Γ.sub.i and the simultaneously registered N successive Ultrafast Ultrasound Imaging (UUI) images Ui of said moving object; b) determining from each UUI image Ui, the motion vector fields M.sub.i that correspond to the spatio-temporal geometrical transformation of the motion of the object; c) obtaining a final estimated high resolution image H of the object by iterative determination of a high resolution image H.sup.n+1 obtained by applying several correction iterations to a current estimated high resolution image H.sup.n, n being the number of iterations, starting from an initial estimated high resolution image H.sup.1 of the object, each correction iteration including at least: i) warping said estimated high resolution image H.sup.n using the motion vector fields M.sub.i to determine a set of low resolution reference images L.sup.n.sub.i; ii) determining a differential image Di by difference between each PET image Γ.sub.i and the corresponding low resolution reference image L.sup.n.sub.i; iii) warping back said differential images Di using the motion vector fields M.sub.i and averaging the N warped back differential images to obtain a high resolution differential image; iv) determining the high resolution image H.sup.n+1 by correcting said high resolution image H.sup.n using said high resolution differential image; d) applying the motion vector fields M.sub.i of each UUI image Ui to said final high resolution image H.
11. The imaging device according to claim 10, wherein the PET scanner is a positron emission tomography-computed tomography (PET-CT) scanner and wherein the UUI scanner is configured to use real-time B-mode imaging.
12. The imaging device according to claim 10, wherein the UUI scanner comprises a transducer positioned over the moving object to be imaged by use of a motorized micropositioner and wherein the moving object to be imaged and the UUI transducer are positioned together inside the PET gantry.
13. The imaging device according to claim 10, further comprising a remote control unit that controls the motorized micropositioner, wherein the motorized micropositioner is a six-degrees-of-freedom motorized micropositioner and the control unit controls the motorized micropositioner in programmed steps.
14. A computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out: a) acquiring N successive positron emission tomography (PET) low resolution images Γ.sub.i and simultaneously, N successive Ultrafast Ultrasound Imaging (UUI) images Ui of a moving object; b) determining from each UUI image Ui, the motion vector fields M.sub.i that correspond to the spatio-temporal geometrical transformation of the motion of the object; c) obtaining a final estimated high resolution image H of the object by iterative determination of a high resolution image H.sup.n+1 obtained by applying several correction iterations to a current estimated high resolution image H.sup.n, n being the number of iterations, starting from an initial estimated high resolution image H.sup.1 of the object, each correction iteration including at least: i) warping said estimated high resolution image H.sup.n using the motion vector fields M.sub.i to determine a set of low resolution reference images L.sup.n.sub.i; ii) determining a differential image Di by difference between each PET image Γ.sub.i and the corresponding low resolution reference image L.sup.n.sub.i; iii) warping back said differential images Di using the motion vector fields M.sub.i and averaging the N warped back differential images to obtain a high resolution differential image; iv) determining the high resolution image H.sup.n+1 by correcting said high resolution image H.sup.n using said high resolution differential image; d) applying the motion vector fields M.sub.i of each UUI image Ui to said final high resolution image H.
15. The imaging method according to claim 1, wherein the moving object to be imaged is an organ, preferably a living heart, more preferably a living heart in a closed chest, even more preferably a rodent living heart in a closed chest, even more preferably a human living heart in a closed chest.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0060] Other features and advantages of the disclosure appear from the following detailed description of one non-limiting example thereof, with reference to the accompanying drawings.
[0061] In the drawings:
DETAILED DESCRIPTION
A. Super-Resolution in the Image Domain
[0065] SR restoration is an ill-conditioned problem since it aims to restore a high-resolution version of an object from a set of low-resolution images of it, that are related through a series of convolutional degrading steps such as motion, blurring, down-sampling and noise corruption:
L.sub.i=DBM.sub.iH+Ψ.sub.i (1)
where H is a high-resolution image of the inspected object, L.sub.i is a set containing i=1, . . . , N.sub.f low-resolution image frames of the object, N.sub.f is the total number of frames, M.sub.i are the motion vector fields (MVF) containing the spatio-temporal geometrical transformation of the object motion at frame i, B is a blurring kernel or point spread function (PSF) of the imaging system, D is a down-sampling kernel defining the difference in pixel size between H and L.sub.i, i.e. combining several voxels of the grid into one, for example using linear interpolation, and Ψ.sub.i is an additive noise term.
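For illustration, the degradation chain of Eq. (1) can be sketched numerically. This is a minimal sketch, assuming a box blur for B, block averaging for D and a rigid integer shift for M.sub.i; these are deliberate simplifications of the operators described in the text, not the actual implementation.

```python
import numpy as np

def blur(img, k=3):
    """Box blur standing in for the PSF kernel B (a Gaussian in the text)."""
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img, k // 2, mode='edge')
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def down(img, f=2):
    """Down-sampling operator D: average f-by-f blocks into one pixel."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def degrade(H, shift, sigma_noise=0.0, seed=0):
    """Forward model of Eq. (1): L_i = D B M_i H + noise.
    The motion M_i is reduced here to a rigid integer shift."""
    Mi_H = np.roll(H, shift, axis=(0, 1))          # motion operator M_i
    L = down(blur(Mi_H))                           # blur B, then down-sample D
    if sigma_noise > 0.0:
        rng = np.random.default_rng(seed)
        L = L + rng.normal(0.0, sigma_noise, L.shape)  # additive noise term
    return L

H = np.zeros((64, 64)); H[24:40, 24:40] = 1.0      # toy high-resolution object
frames = [degrade(H, (s, 0)) for s in (0, 2, 4)]   # N_f = 3 low-resolution frames
print(frames[0].shape)  # (32, 32)
```

Because B and D here conserve the total intensity away from the borders, each low-resolution frame carries the same integrated signal as H divided by the down-sampling area, which mirrors the count-preservation property reported later for the SR algorithm.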
[0066] Γ.sub.i is defined as being a dataset containing low-resolution frames of a Gated Cardiac-PET, such as successive time frames acquired synchronously with the electrocardiogram (ECG). Using Γ.sub.i, a high-resolution original image is estimated:
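The display equation labeled Eq. (2) is missing from this text. From the symbol definitions that follow it, a plausible reconstruction is the standard penalized least-squares problem (the exact norm and any weighting factors are assumptions):

```latex
H^{*} = \arg\min_{H} \left\{ \sum_{i=1}^{N_f} \left\| \Gamma_i - D B M_i H \right\|^{2} + \lambda \, \mathrm{TV}(H) \right\} \qquad (2)
```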
where L.sup.n.sub.i is a simulated set of low-resolution images that can be obtained through Eq. (1) using the high-resolution image H.sup.n; n=1, . . . , N defines the iterations of the algorithm; H* is the optimal high-resolution image that solves the variational problem of Eq. (2); TV(H) is a penalizing function that measures the quality of the restored image by countering the effect of the noise term Ψ.sub.i and λ is the regularization parameter that balances the weight of the penalization term.
[0067] Regarding the FDG uptake in the cardiac ventricular wall, a clear difference between healthy and injured tissue but very little variation of uptake inside the healthy tissue is expected. Hence, a Total Variation (TV) model has been adopted as penalizing term:
TV(H)=∫|∇H(r)|dr (3)
[0068] In Eq. (3), ∇ denotes the gradient operator. To solve the Euler-Lagrange equation associated with the variational problem of Eq. (2), a steepest-descent algorithm has been used:
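The display equation labeled Eq. (4) is likewise missing from this text. A reconstruction consistent with the inverse operators defined in paragraph [0069] and the steps of paragraph [0078] (the step size β and the exact sign convention of the TV term are assumptions) is:

```latex
H^{n+1} = H^{n} + \beta \left[ \frac{1}{N_f} \sum_{i=1}^{N_f} M_i^{-1} B^{-1} D^{-1} \left( \Gamma_i - L_i^{n} \right) + \lambda \, \nabla \cdot \left( \frac{\nabla H^{n}}{\left| \nabla H^{n} \right|} \right) \right] \qquad (4)
```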
[0069] In Eq. (4), D.sup.−1 is the inverse of the down-sampling operator that performs the up-sampling of data, i.e. it divides each voxel of the grid into smaller voxels, M.sub.i.sup.−1 inverts the MVF (the approximation M.sub.i.sup.−1=−M.sub.i has been used) and the deblurring kernel B.sup.−1 is the inverse of the blurring kernel B.
B. Motion Estimation Using Ultrafast Ultrasound Imaging
[0070] The UUI-B-mode has been used to characterize the MVF, i.e. the M.sub.i term in Eq. (1).
[0071] In the heart, the geometrical transformation that aligns two given phases of the heart's motion is non-rigid, as the pixels that contain heart tissue in an image move relatively independently of the pixels in close neighboring regions. Moreover, during the registration of ultrasound cardiac sequences, the ribs may induce dramatic attenuations of the signal intensity.
[0072] To account for the changes of intensity that might appear in ultrasonic sequences, both local and global image estimators have been combined by characterizing the local phase θ(x,y) and the intensity of the image [23]. A distance metric derived from Demons-like registration [26, 27] has been employed:
[0073] The local phase θ(x,y) can be obtained from the monogenic signal using quadrature filters, which are the combination of an even-symmetric band-pass filter (F.sub.e, giving as a result I.sub.e=F.sub.e*I, the even component of an image I, where * denotes convolution) and of two consecutive odd-symmetric filters (F.sub.o1 and F.sub.o2) applied to the even component of the signal:
[0074] A log-Gabor radial filter has been used as even filter and its Riesz transform as odd filters. Three uniformly decreasing center frequencies (30, 20 and 10 pixels) have been used for the band-pass of the log-Gabor filter, defining three consecutive scales. At each scale, 20 iterations were performed and the similarity function (Eq. (5)) was evaluated using additive corrections. The final-scale motion field was regularized with a Gaussian low-pass kernel with a Full-Width at Half-Maximum (FWHM) of 10 pixels.
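A compact sketch of this local-phase computation follows (single scale, in the Fourier domain). The bandwidth parameter sigma=0.55 and the filter normalization are assumptions not given in the text, which additionally uses three scales with iterative corrections.

```python
import numpy as np

def monogenic_phase(img, wavelength=10.0, sigma=0.55):
    """Local phase from the monogenic signal: a log-Gabor even filter plus
    its two Riesz (odd) components. wavelength is the centre in pixels."""
    rows, cols = img.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                          # avoid log(0) at the DC term
    f0 = 1.0 / wavelength                       # centre frequency of the band-pass
    logGabor = np.exp(-np.log(radius / f0) ** 2 / (2.0 * np.log(sigma) ** 2))
    logGabor[0, 0] = 0.0                        # zero DC response
    H1 = 1j * fx / radius                       # Riesz transform multipliers
    H2 = 1j * fy / radius                       # (the two odd filters)
    F = np.fft.fft2(img)
    even = np.real(np.fft.ifft2(F * logGabor))        # even component I_e
    odd1 = np.real(np.fft.ifft2(F * logGabor * H1))   # odd components
    odd2 = np.real(np.fft.ifft2(F * logGabor * H2))
    return np.arctan2(np.hypot(odd1, odd2), even)     # local phase theta(x,y)

rng = np.random.default_rng(0)
theta = monogenic_phase(rng.normal(size=(64, 64)))
print(theta.shape)  # (64, 64)
```

The returned phase lies in [0, π] and is insensitive to multiplicative intensity changes, which is why the text combines it with the intensity term to cope with rib-induced attenuation.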
C. Implementation Details for the Ultrasound-Based SR of Cardiac PET.
[0076] The algorithm is fed with the Γ.sub.i dataset of N.sub.f low-resolution Gated Cardiac-PET images and the co-registered UUI-B-mode N.sub.f frames.
[0077] The algorithm starts with an initial high-resolution guess H.sup.1, which is the image resulting from warping and averaging the N.sub.f Gated Cardiac-PET frames, up-sampled to the dimensions of the UUI-B-mode maps.
[0078] For all frames i, each difference (Γ.sub.i−L.sup.n.sub.i) is up-sampled, warped back to the selected frame of reference and averaged over all N.sub.f frames. Using the current estimation H.sup.n of the high-resolution image, the divergence of the gradient of the TV term (last term in Eq. (4)) is calculated.
[0079] To evaluate the gradient, border elements of the image are padded by replication outside the boundaries of the image. The regularization parameter λ is empirically set to 1×10.sup.−6 as it provided a satisfactory tradeoff between noise control and spatial resolution in our experiments (see the section on numerical experiments for details about the definitions of noise and spatial resolution used).
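The TV correction term, i.e. the divergence of the normalized gradient with replicate padding at the borders, can be sketched as follows. The forward-difference gradient and backward-difference divergence used here are a standard discretization choice, not taken from the text.

```python
import numpy as np

def tv_divergence(H, eps=1e-8):
    """div( grad H / |grad H| ): the TV term of the update equation.
    Border elements are padded by replication, as described in the text."""
    Hp = np.pad(H, 1, mode='edge')               # replicate border elements
    gx = Hp[1:-1, 2:] - Hp[1:-1, 1:-1]           # forward differences
    gy = Hp[2:, 1:-1] - Hp[1:-1, 1:-1]
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps)      # eps avoids division by zero
    nx, ny = gx / norm, gy / norm                # normalized gradient field
    # backward differences give the divergence (adjoint of the forward gradient)
    nxp = np.pad(nx, 1, mode='edge')
    nyp = np.pad(ny, 1, mode='edge')
    return (nxp[1:-1, 1:-1] - nxp[1:-1, :-2]) + (nyp[1:-1, 1:-1] - nyp[:-2, 1:-1])

H = np.zeros((16, 16)); H[4:12, 4:12] = 1.0      # piecewise-constant test image
corr = tv_divergence(H)
print(corr.shape)  # (16, 16)
```

On a piecewise-constant image the correction vanishes inside flat regions and acts only at edges, which is the behavior motivating the TV choice for the FDG uptake pattern described above.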
[0080] All these steps are reiterated until the root-mean-square-error (RMSE) between estimated and measured low-resolution images varies by less than 5% in two consecutive iterations.
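The overall loop of this section can be sketched as follows. The degrade/restore callables are placeholders for the DBM.sub.i forward chain and the M.sub.i.sup.−1B.sup.−1D.sup.−1 back-projection; the TV correction is omitted and the 5% stopping rule is simplified, so this is a skeleton rather than the patented implementation.

```python
import numpy as np

def sr_iterate(frames, H0, degrade, restore, tol=0.05, max_it=50):
    """Skeleton of the SR loop: correct H until the RMSE between measured
    and simulated low-resolution frames varies by less than tol (5%)."""
    H, prev = H0.astype(float), np.inf
    for n in range(max_it):
        sims = [degrade(H, i) for i in range(len(frames))]   # L_i^n
        diffs = [g - s for g, s in zip(frames, sims)]        # Gamma_i - L_i^n
        rmse = np.sqrt(np.mean([np.mean(d ** 2) for d in diffs]))
        if prev < np.inf and abs(prev - rmse) < tol * max(prev, 1e-12):
            break                                            # < 5% change: stop
        prev = rmse
        # up-sample, warp back and average the differences, then correct H
        H = H + np.mean([restore(d, i) for i, d in enumerate(diffs)], axis=0)
    return H

# toy check: with identity operators the single measured frame is recovered
f = np.full((8, 8), 2.0)
H = sr_iterate([f], np.zeros((8, 8)),
               degrade=lambda H, i: H, restore=lambda d, i: d)
print(H[0, 0])  # 2.0
```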
D. PETRUS: Positron Emission Tomography Registered Ultrasonography
[0081] PETRUS has been described in detail in [24]. Briefly, it combines a preclinical PET-CT scanner for small animals (nanoScan, Mediso Ltd., Hungary) with a clinical UUI scanner (Aixplorer, Supersonic Imagine, France). The UUI component of PETRUS provides thousands of images per second, allowing the exploration of rapid phenomena with unprecedented spatial resolution (<100 μm).
[0082] Concerning cardiac studies in rats, PETRUS uses a commercial pediatric/rheumatology ultrasound probe (SuperLinear™ SLH20-6, Supersonic Imagine, France) with negligible effects on PET image quality [9]. The probe is attached through a 35 cm long hollow carbon rectangular cuboid (Polyplan Composites, France) to a six-degree-of-freedom high-precision micromotor (Hexapod H811, Physik Instrumente, Germany; minimum incremental motion 0.2 μm) fixed to the animal bed of the PET-CT scanner. A home-made 3D-printed plastic holder joins the carbon arm and the probe. Acoustic impedance coupling between the probe and the depilated skin of the animal is obtained using degassed ultrasound gel (Medi'gel Blue ECG, Drexco Medical).
[0083] Co-registration between UUI and PET requires accurate tracking of the ultrasound probe inside the PET gantry. The automatic process of multimodal data co-registration between UUI and PET volumes (detailed in [36]) provides a mean co-registration accuracy of 0.10±0.03 mm. As a result, ultrasound images and co-registered PET-CT images corresponding to the same spatial location are extracted.
[0084] In the present study the UUI-B-mode is used, typically with a plane FOV of 25.6 mm×(20 to 30) mm, a pixel-size of 0.1 mm×0.1 mm, and 16 temporal frames covering uniformly the full heart cycle.
[0085] Therefore, an imaging method is described herein, including:
[0086] a) acquiring N successive positron emission tomography (PET) low resolution images Γ.sub.i and simultaneously, N successive Ultrafast Ultrasound Imaging (UUI) images Ui of a moving object;
[0087] b) determining from each UUI image Ui, the motion vector fields M.sub.i that correspond to the spatio-temporal geometrical transformation of the motion of the object;
[0088] c) obtaining a final estimated high resolution image H of the object by iterative determination of a high resolution image H.sup.n+1 obtained by applying several correction iterations to a current estimated high resolution image H.sup.n, n being the number of iterations, starting from an initial estimated high resolution image H.sup.1 of the object, each correction iteration including at least: [0089] i) warping said estimated high resolution image H.sup.n using the motion vector fields M.sub.i to determine a set of low resolution reference images L.sup.n.sub.i; [0090] ii) determining a differential image Di by difference between each PET image Γ.sub.i and the corresponding low resolution reference image L.sup.n.sub.i; [0091] iii) warping back said differential images Di using the motion vector fields M.sub.i and averaging the N warped back differential images to obtain a high resolution differential image; [0092] iv) determining the high resolution image H.sup.n+1 by correcting said high resolution image H.sup.n using said high resolution differential image;
[0093] d) applying the motion vector fields M.sub.i of each UUI image Ui to said final high resolution image H.
[0094] The method may further include one and/or other of the following features: [0095] the motion vector fields M.sub.i of b) are estimated by combining global and local image estimators, characterizing respectively the intensity and the local phase information obtained from two consecutive frames of the set of UUI images; [0096] the motion vector fields M.sub.i are estimated according to the following equation:
L.sup.n.sub.i=DBM.sub.iH.sup.n+Ψ.sub.i, [0108] wherein H is a high-resolution image of the inspected object, L.sub.i is a set containing i=1, . . . , N.sub.f low-resolution image frames of the object, N.sub.f is the total number of frames, M.sub.i are the motion vector fields (MVF) containing the spatio-temporal geometrical transformation of the object motion at frame i, B is a blurring kernel or point spread function (PSF) of the imaging system, D is a down-sampling kernel defining the difference in pixel size between H and L.sub.i, i.e. combining several voxels of the grid into one, for example using linear interpolation, and Ψ.sub.i is an additive noise term; [0109] the n-th estimation H.sup.n of the high-resolution image is used to calculate the divergence of the gradient of the Total Variation (TV) model penalizing term, which measures the quality of the restored image by countering the effect of the noise term and which corresponds to the last term of the following equation, and wherein the (n+1)-th estimation H.sup.n+1 of the high-resolution image is obtained by means of the following steepest-descent algorithm:
[0114] Besides, an imaging device is also disclosed, comprising a positron emission tomography (PET) scanner, an Ultrafast Ultrasound Imaging (UUI) scanner and a processor, wherein said PET scanner acquires a set of N successive low resolution images Γ.sub.i and said UUI scanner simultaneously acquires a set of N successive images Ui of a moving object; and wherein the processor is configured to perform at least:
[0115] a) receiving the N successive positron emission tomography (PET) low resolution images Γ.sub.i and the simultaneously registered N successive Ultrafast Ultrasound Imaging (UUI) images Ui of said moving object;
[0116] b) determining from each UUI image Ui, the motion vector fields M.sub.i that correspond to the spatio-temporal geometrical transformation of the motion of the object;
[0117] c) obtaining a final estimated high resolution image H of the object by iterative determination of a high resolution image H.sup.n+1 obtained by applying several correction iterations to a current estimated high resolution image H.sup.n, n being the number of iterations, starting from an initial estimated high resolution image H.sup.1 of the object, each correction iteration including at least: [0118] i) warping said estimated high resolution image H.sup.n using the motion vector fields M.sub.i to determine a set of low resolution reference images L.sup.n.sub.i; [0119] ii) determining a differential image Di by difference between each PET image Γ.sub.i and the corresponding low resolution reference image L.sup.n.sub.i; [0120] iii) warping back said differential images Di using the motion vector fields M.sub.i and averaging the N warped back differential images to obtain a high resolution differential image; [0121] iv) determining the high resolution image H.sup.n+1 by correcting said high resolution image H.sup.n using said high resolution differential image;
[0122] d) applying the motion vector fields M.sub.i of each UUI image Ui to said final high resolution image H.
[0123] The imaging device may further include one and/or other of the following features: [0124] the PET scanner is a positron emission tomography-computed tomography (PET-CT) scanner; [0125] the UUI scanner is configured to use real-time B-mode imaging; [0126] the UUI scanner comprises a transducer positioned over the moving object to be imaged by use of a motorized micropositioner and wherein the moving object to be imaged and the UUI transducer are positioned together inside the PET gantry; [0127] the imaging device further comprises a remote control unit that controls the motorized micropositioner, wherein the motorized micropositioner is a six-degrees-of-freedom motorized micropositioner and the control unit controls the motorized micropositioner in programmed steps, for which the coordinates are expressed as a function of the PET system coordinates; [0128] the moving object to be imaged is an organ, preferably a living heart, more preferably a rodent living heart, even more preferably a human living heart.
[0129] A computer-readable medium is also disclosed, comprising instructions which, when executed by a computer, cause the computer to carry out:
[0130] a) acquiring N successive positron emission tomography (PET) low resolution images Γ.sub.i and simultaneously, N successive Ultrafast Ultrasound Imaging (UUI) images Ui of a moving object;
[0131] b) determining from each UUI image Ui, the motion vector fields M.sub.i that correspond to the spatio-temporal geometrical transformation of the motion of the object;
[0132] c) obtaining a final estimated high resolution image H of the object by iterative determination of a high resolution image H.sup.n+1 obtained by applying several correction iterations to a current estimated high resolution image H.sup.n, n being the number of iterations, starting from an initial estimated high resolution image H.sup.1 of the object, each correction iteration including at least: [0133] i) warping said estimated high resolution image H.sup.n using the motion vector fields M.sub.i to determine a set of low resolution reference images L.sup.n.sub.i; [0134] ii) determining a differential image Di by difference between each PET image Γ.sub.i and the corresponding low resolution reference image L.sup.n.sub.i; [0135] iii) warping back said differential images Di using the motion vector fields M.sub.i and averaging the N warped back differential images to obtain a high resolution differential image; [0136] iv) determining the high resolution image H.sup.n+1 by correcting said high resolution image H.sup.n using said high resolution differential image;
[0137] d) applying the motion vector fields M.sub.i of each UUI image Ui to said final high resolution image H.
EXPERIMENTS
[0138] The method was tested with numerical and animal data (n=2) acquired with the non-invasive hybrid imaging system PETRUS, which fuses PET, CT and UUI.
[0139] SNR, contrast, and spatial resolution of the treated images were measured and compared to the values obtained with static PET.
1. Numerical Experiment
[0140] The performance of the Ultrasound-based SR algorithm was tested on a numerical phantom simulating a realistic Gated Cardiac-PET acquisition in a numerical rat, using the ROBY phantom [12]. A total of 8 cardiac and 8 respiratory frames with 1-mm maximum diaphragm displacement were simulated to evaluate the impact of the cardiac and respiratory motions. The input images of the simulation consisted of 256×256×237 voxels of 0.4×0.4×0.4 mm covering the thoracic cage of the ROBY phantom. The phantom was simulated using the Monte Carlo software MCGPU-PET [32], a fast simulator which takes into account the main relevant physical processes of the emission, transport and detection of the radiation. A generic pre-clinical scanner model was used, with an associated spatial resolution at the center of the scanner of ˜1.5 mm. Each simulation was based on the specific distribution of the activity and materials in each particular frame, and contained around 65 million counts, including trues and scatter coincidences. The acquired data of each frame were stored in 527 sinograms (direct and oblique), containing 129 radial and 168 angular bins respectively. The simulated data were reconstructed with GFIRST [33], using a 3D-OSEM algorithm and including attenuation and scatter corrections. These corrections were obtained from the known material distribution in each frame, using a two-class tissue segmentation (air and tissue). The images were reconstructed using 128×128×127 voxels of 0.8×0.8×0.746 mm. The whole simulation and reconstructions took ˜52 minutes in total on a single GPU (GeForce GTX 1080, 1.73 GHz, 8 GB).
[0141] The SR analysis was regionally limited to a single 2D slice in transverse orientation. The phantom activity information was used as an anatomical reference to estimate the MVF, with a down-sampling factor D of one pixel out of every two and a Gaussian filter with a FWHM of 1.5 mm as blurring kernel B. Mean pixel values of regions of interest (ROIs) located in the left-ventricle wall and in the ventricle cavity were quantified. In order to simulate a static PET acquisition, the respiratory and cardiac frames of the reconstructed PET images were averaged. To simulate a Gated Cardiac-PET sequence, all the respiratory frames were averaged within each phase of the cardiac cycle. As the observed effect of respiration in the simulated Gated Cardiac-PET was a small rigid translation in the cranio-caudal direction, a rigid registration was performed between the first frame of the reference anatomical image and the first frame of the simulated Gated Cardiac-PET before the registration of the cardiac phases. The estimated misfit was then applied to each frame of the Gated Cardiac-PET.
[0142] The improvement in image quality by the SR processing was assessed by measuring the Signal-to-Noise ratio (SNR), contrast and spatial resolution of the images. The SNR was calculated using the standard deviation (STD) and mean value (MEAN) in the ROIs as:
[0143] Contrast was defined as Weber's fraction [34] using the mean value in the left ventricle wall ROI (MEAN.sub.wall) and in the cavity of the ventricle ROI (MEAN.sub.cav).
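For illustration, both figures of merit can be computed as follows. Since the SNR equation itself is not reproduced in this text, the 20·log10(MEAN/STD) dB convention used here is an assumption; only Weber's fraction is defined explicitly above.

```python
import numpy as np

def snr_db(roi):
    """SNR in dB from ROI statistics (MEAN and STD); the 20*log10 convention
    is assumed, since the equation is missing from this text."""
    return 20.0 * np.log10(np.mean(roi) / np.std(roi))

def weber_contrast(mean_wall, mean_cav):
    """Weber's fraction between the ventricle-wall and cavity ROIs."""
    return (mean_wall - mean_cav) / mean_cav

# wall/cavity means taken from the numerical-experiment column of Table I
print(weber_contrast(4.02, 1.23))
print(round(snr_db(np.array([9.0, 10.0, 11.0])), 1))  # 21.8
```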
[0144] Spatial resolution was defined as the lateral spread function (LSF) of the ventricle's wall. This was evaluated as the FWHM of a Gaussian function fit to the mirrored duplicated points, external to the edge of the ventricle's wall, in a profile crossing the wall. The location of the external edge of the wall was extracted from the matching anatomical reference profile. The procedure was repeated on 5 intensity profiles drawn orthogonally to the heart wall, clockwise: basal lateral, mid lateral, apical, mid-septal and basal septal [35]. Resolution was then defined as the average of the five estimated LSFs.
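The FWHM extraction underlying the resolution figure can be sketched as below. A moment-based Gaussian width (2·sqrt(2 ln 2)·sigma) replaces the least-squares fit described in the text, and the 0.1 mm pixel size is taken from the UUI-B-mode description; both substitutions are illustrative.

```python
import numpy as np

def lsf_fwhm(profile, pixel_mm=0.1):
    """Moment-based FWHM estimate of a lateral spread function:
    2*sqrt(2*ln 2)*sigma, with sigma from the weighted second moment.
    This stands in for the Gaussian fit described in the text."""
    x = np.arange(len(profile), dtype=float)
    w = np.clip(profile, 0.0, None)              # keep non-negative weights
    mu = np.sum(w * x) / np.sum(w)               # profile centroid
    var = np.sum(w * (x - mu) ** 2) / np.sum(w)  # weighted variance
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * np.sqrt(var) * pixel_mm

x = np.arange(41.0)
gauss = np.exp(-(x - 20.0) ** 2 / (2.0 * 4.0 ** 2))  # sigma = 4 pixels
print(round(lsf_fwhm(gauss), 2))  # 0.94
```

On a clean Gaussian profile the moment estimate and a least-squares fit coincide; on noisy profiles the fit described in the text is the more robust choice.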
2. Animal Experiments
[0145] Animal experiments were approved by the French Ethical Committee (approval number 18-146). Real data acquired with the PETRUS system from two successive imaging sessions of a 10-week-old female Wistar rat were processed: (i) in normal conditions (baseline) and (ii) after surgically-induced myocardial infarction. Baseline imaging was done 3 days before surgery. During surgery, the rat was under isoflurane anesthesia (2.5%) and physiological parameters were constantly monitored. Analgesics were injected before and after surgery. A permanent ligation of the left anterior descending coronary artery (LADCA) was performed after thoracotomy under endotracheal intubation. The incision was sutured and the air in the thorax removed. The rat was imaged 4 hours after LADCA ligation. For both imaging sessions, the rat was positioned on a customized bed.
RESULTS
[0148] Quantitatively, the mean FDG uptake in the different ROIs using Ultrasound-based SR increased in the regions of the walls and decreased in the cavity of the ventricle. In the case of the infarcted heart, Ultrasound-based SR shows a better defined FDG distribution in the ventricle walls as well as a better delineation of the regions most affected after LADCA ligation. The mean contrast was twice that of the static images, the SNR was improved by 30% and the spatial resolution by 56%, reaching a mean value of 0.88 mm.
[0149] Therefore, the results show that Ultrasound-based SR enhances the quality of PET images of the beating rodent heart: with respect to static PET, image contrast is improved by a factor of two, signal-to-noise ratio by 40% and spatial resolution by 56% (˜0.88 mm). As a consequence, the metabolic defect following an acute cardiac ischemia was delineated with much higher anatomical precision.
[0150] The results support the view that Ultrasound-based SR improves the spatial resolution of the images without affecting the SNR. Analogously, all the image quality parameters evaluated were considerably improved, facilitating the identification of heart lesions, which appear enhanced in the images as a result of a better spatial resolution and a decreased PVE. The total amount of counts present in the entire gated sequence (2.35×10.sup.10 and 2.63×10.sup.10 counts for the intact and infarcted hearts, respectively) was not substantially modified by the Ultrasound-based SR algorithm (2.35×10.sup.10 counts and 2.65×10.sup.10 counts for the intact and infarcted heart). Importantly, the ultrasound image acquisition is performed during the PET acquisition and provides images in real time. Co-registration of multimodal data takes 2 minutes on a dual-core Intel® Xeon® CPU E5-2637 v4 @ 3.5 GHz, and most of this time is used for data loading of the PET volumes. Ultrasound-based SR adds a short computation time to the reconstruction process. The motion registration of 16 UUI-B-mode frames takes 4 minutes, and the SR algorithm 1 minute on the same machine. Both codes were run in non-parallelized conditions, while parallelization is likely to improve the execution time considerably.
TABLE I. QUANTITATIVE ANALYSIS OF TREATED IMAGES

Numerical experiments
  ROI             Static         U-based SR
  Wall (SUV=4)    3.32 ± 0.17    4.02 ± 0.11
  Cav. (SUV=1)    1.57 ± 0.19    1.23 ± 0.14
  Contrast (ad)   1.11           2.27
  SNR (dB)        12.91          15.63
  S. Res. (mm)    1.89           0.84

Animal experiments, baseline (BL)
  Wall            3.63 ± 0.21    4.27 ± 0.11
  Cav.            1.33 ± 0.11    1.01 ± 0.13
  Contrast (ad)   1.73           3.23
  SNR (dB)        12.38          15.89
  S. Res. (mm)    1.99           0.85

Animal experiments, infarcted (I)
  Wall            2.80 ± 0.20    3.88 ± 0.12
  Cav.            1.37 ± 0.26    1.15 ± 0.14
  Lesion          1.56 ± 0.15    2.18 ± 0.17
  Contrast (ad)   1.04           2.37
  SNR (dB)        11.46          15.10
  S. Res. (mm)    2.01           0.90

I: Infarcted; BL: Baseline; Static: Static PET; U-based SR: Super-Resolution based on Ultrasound; Wall: ventricle's wall; Cav.: cavity inside ventricle; Lesion: ischemic area.