Patent classifications
G06T2211/464
Method and system for generating multi-task learning-type generative adversarial network for low-dose PET reconstruction
The present application relates to a method and system for generating a multi-task learning-type generative adversarial network for low-dose PET reconstruction, and relates to the field of deep learning. The method includes connecting layers of the encoder with corresponding layers of the decoder via skip connections to provide a U-Net-type picture generator; generating a group of generative adversarial networks by pairing a plurality of picture generators with a plurality of discriminators in a one-to-one manner; obtaining a first multi-task learning-type generative adversarial network; designing a joint loss function 1 for improving image quality; and training the first multi-task learning-type generative adversarial network according to the joint loss function 1 in combination with an optimizer to provide a second multi-task learning-type generative adversarial network.
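The abstract does not specify the form of joint loss function 1. A minimal numpy sketch of one common choice for such GAN-based reconstruction, an adversarial term plus a weighted L1 fidelity term, is shown below; the function name, the weighting `lam`, and the loss form are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def joint_loss(generated, reference, disc_scores, lam=100.0):
    """Hypothetical joint loss: adversarial term plus weighted L1 fidelity.

    generated, reference : arrays of predicted and full-dose PET images
    disc_scores          : discriminator outputs in (0, 1) for generated images
    lam                  : weight trading adversarial realism against fidelity
    """
    adversarial = -np.mean(np.log(disc_scores + 1e-8))   # generator wants scores near 1
    fidelity = np.mean(np.abs(generated - reference))    # L1 distance to reference image
    return adversarial + lam * fidelity
```

With `lam` large, training is dominated by pixel-wise fidelity to the full-dose reference, while the adversarial term discourages the over-smoothing that a pure L1 loss tends to produce.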
Partial Scan and Reconstruction for a Positron Emission Tomography System
A method for performing a partial scan of a patient using a PET/CT system includes receiving a selection of a region of interest for scanning and performing a CT scan over the region of interest with the PET/CT system to acquire raw CT data. The raw CT data is reconstructed into one or more CT images. The PET/CT system is configured to limit data collection to the region of interest. A PET scan limited to the region of interest is performed with the PET/CT system to acquire raw PET data. The raw PET data is reconstructed into one or more PET images of the region of interest.
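The claimed limiting of PET data collection to the region of interest could be sketched, under the assumption of list-mode events with spatial coordinates, as a simple axial filter; the event layout and function name here are hypothetical.

```python
import numpy as np

def limit_to_roi(events, z_min, z_max):
    """Keep only list-mode PET events whose axial position lies in the ROI.

    events : array of shape (N, 3) holding hypothetical (x, y, z) per event
    """
    z = events[:, 2]
    mask = (z >= z_min) & (z <= z_max)
    return events[mask]
```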
Systems and methods for image reconstruction
A method may include obtaining a first acquisition time period related to a scan of a first modality performed on an object. The method may also include obtaining one or more second acquisition time periods related to a scan of a second modality performed on the object. The method may also include obtaining, based on the first acquisition time period and the one or more second acquisition time periods, target data of the object acquired in the scan of the first modality. The method may also include generating one or more target images of the object based on the target data.
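The selection of target data based on the two modalities' acquisition windows can be sketched as keeping first-modality events that fall inside the first acquisition period and overlap at least one second-modality period; this windowing scheme and all names are assumptions for illustration.

```python
import numpy as np

def select_target_events(event_times, first_window, second_windows):
    """Keep first-modality event timestamps inside the first acquisition
    window AND inside at least one second-modality window (hypothetical)."""
    t0, t1 = first_window
    in_first = (event_times >= t0) & (event_times <= t1)
    in_second = np.zeros_like(in_first)
    for s0, s1 in second_windows:
        in_second |= (event_times >= s0) & (event_times <= s1)
    return event_times[in_first & in_second]
```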
DEVICES AND PROCESS FOR SYNTHESIZING IMAGES FROM A SOURCE NATURE TO A TARGET NATURE
Images are synthesized from a source nature to a target nature through unsupervised machine learning (ML), based on an original training set of unaligned source and target images, by training a first ML architecture through an unsupervised first learning pipeline applied to the original set, to generate a first trained model and induced target images consisting of representations of original source images compliant with the target nature. A second ML architecture is trained through a supervised second learning pipeline applied to an induced training set of aligned image pairs, each including first and second items corresponding respectively to an original source image and its associated induced target image, to generate a second trained model enabling image synthesis from the source to the target nature. Applications include effective medical image translation.
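The two-stage scheme can be illustrated with deliberately simple stand-ins: moment matching plays the role of the unsupervised first model (producing induced targets aligned with their sources), and a least-squares affine fit plays the role of the supervised second model. Both stand-ins are assumptions for illustration only; the patent's actual architectures are unspecified ML models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unaligned training data: source and target images with different statistics.
source_imgs = rng.normal(0.0, 1.0, (20, 64))
target_imgs = rng.normal(5.0, 2.0, (20, 64))

# Stage 1 (stand-in for the unsupervised model): moment matching maps each
# source image to an "induced target" with the target set's global statistics.
mu_s, sd_s = source_imgs.mean(), source_imgs.std()
mu_t, sd_t = target_imgs.mean(), target_imgs.std()
induced = (source_imgs - mu_s) / sd_s * sd_t + mu_t   # aligned with their sources

# Stage 2 (stand-in for the supervised model): least-squares affine fit
# on the aligned (source, induced-target) pairs.
a, b = np.polyfit(source_imgs.ravel(), induced.ravel(), 1)

def synthesize(img):
    """Apply the stage-2 model: translate a source image to the target nature."""
    return a * img + b
```

The key structural point survives the simplification: stage 1 needs no aligned pairs, and stage 2 is ordinary supervised regression on the pairs stage 1 induces.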
Spectral CT-based 511 KeV for positron emission tomography
A virtual 511 keV attenuation map is generated from CT data. Spectral or multiple-energy CT is used to more accurately extrapolate the 511 keV attenuation map. Since spectral or multiple-energy CT may allow for material decomposition and/or provide additional information in the form of measurements at different energies, the modeling used to generate the 511 keV attenuation map may better account for all materials, including high-density material. The extrapolated 511 keV attenuation map is more likely to represent actual attenuation at 511 keV without requiring extra scanning using a 511 keV source external to the patient. The virtual 511 keV attenuation map (e.g., CT data extrapolated to 511 keV) may provide more accurate PET image reconstruction.
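Assuming the spectral CT material decomposition yields per-voxel density maps for basis materials such as water and bone, the 511 keV linear attenuation map follows as a weighted sum with each material's mass attenuation coefficient at 511 keV. The sketch below uses approximate coefficient values as stand-ins; neither the basis choice nor the numbers are taken from the patent.

```python
import numpy as np

# Illustrative mass attenuation coefficients at 511 keV (cm^2/g);
# approximate stand-in values, not taken from the patent.
MU_MASS_511 = {"water": 0.096, "bone": 0.090}

def attenuation_map_511(water_density, bone_density):
    """Combine material-decomposed density maps (g/cm^3) into a linear
    attenuation map (1/cm) at the PET annihilation energy of 511 keV."""
    return (water_density * MU_MASS_511["water"]
            + bone_density * MU_MASS_511["bone"])
```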
Generating Synthetic X-ray Images and Object Annotations from CT Scans for Augmenting X-ray Abnormality Assessment Systems
Systems and methods for generating a synthetic image are provided. An input medical image in a first modality is received. A synthetic image in a second modality is generated from the input medical image. The synthetic image is upsampled to increase a resolution of the synthetic image. An output image is generated to simulate image processing of the upsampled synthetic image. The output image is output.
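The upsampling step is not tied to a particular interpolation scheme in the abstract; a minimal nearest-neighbour version, chosen here purely for illustration, can be written with `np.repeat`.

```python
import numpy as np

def upsample_nearest(img, factor=2):
    """Nearest-neighbour upsampling: one simple way to raise the resolution
    of a synthetic image (the interpolation method is an assumption)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)
```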
IMPROVED METAL ARTIFACTS REDUCTION IN CONE BEAM RECONSTRUCTION
The present disclosure describes methods and apparatuses for reducing metal artifacts in cone beam computed tomography (CBCT) reconstructions. The methods use multiple imaging modalities to identify and locate metal present in a region of a dental patient's mouth and to generate a reconstructed 3-D volume image of the region with reduced metal artifacts using data obtained by the multiple modalities. According to example embodiments, the methods include creating a metal map using data from a first imaging modality and an initial 3-D reconstruction from data obtained from a second imaging modality including CBCT imaging. The metal map is registered to the initial 3-D reconstruction, and a reconstructed metal map and projection metal maps are subsequently produced and applied to projections from the CBCT imaging to generate interpolated projections. An artifact-reduced 3-D reconstruction is produced from the interpolated projections, and a final 3-D reconstruction, including merged metal information, is created therefrom.
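The core repair step, replacing projection bins flagged by a projected metal map with values interpolated from metal-free neighbours, can be sketched for a single projection row with `np.interp`; the 1-D linear interpolation is a common simplification, not the patent's specified method.

```python
import numpy as np

def interpolate_metal_bins(projection_row, metal_trace):
    """Replace projection bins flagged by the projected metal map with values
    linearly interpolated from the neighbouring metal-free bins."""
    bins = np.arange(projection_row.size)
    good = ~metal_trace
    repaired = projection_row.copy()
    repaired[metal_trace] = np.interp(bins[metal_trace], bins[good],
                                      projection_row[good])
    return repaired
```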
Deep-Learning-based T1-Enhanced Selection of Linear Coefficients (DL-TESLA) for PET/MR Attenuation Correction
Systems and methods for deep-learning-based T1-enhanced selection of linear attenuation coefficients (DL-TESLA) for PET/MR attenuation correction are described.
Systems and methods for a PET image reconstruction device
Methods, devices and apparatus for reconstructing a PET image are provided. According to an example of the method, a PET initial image may be reconstructed from PET data obtained by scanning a target object with a PET device, and an MRI image may be reconstructed from MRI data obtained by scanning the target object with an MRI device. A fusion image retaining only a boundary of the target object is generated based on the PET initial image and the MRI image, and a PET reconstructed image is obtained by combining the PET initial image with the fusion image, where the boundary of the target object is better defined in the PET reconstructed image than in the PET initial image.
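One plausible realisation of the fusion step, assumed here since the abstract does not fix the operator, is to take the MRI gradient magnitude as the boundary image and add a weighted copy of it to the initial PET image.

```python
import numpy as np

def boundary_fusion(pet_img, mri_img, alpha=0.5):
    """Extract the object boundary from the MRI image via gradient magnitude
    and add it to the initial PET image to sharpen the boundary (hypothetical
    realisation; the edge operator and the additive combination are assumptions)."""
    gy, gx = np.gradient(mri_img)          # per-axis finite differences
    boundary = np.hypot(gx, gy)            # gradient magnitude = boundary image
    return pet_img + alpha * boundary
```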
SUPER RESOLUTION IN POSITRON EMISSION TOMOGRAPHY IMAGING USING ULTRAFAST ULTRASOUND IMAGING
An imaging method including: a) acquiring N successive positron emission tomography (PET) low-resolution images Γ_i and, simultaneously, N successive Ultrafast Ultrasound Imaging (UUI) images U_i of a moving object; b) determining from each UUI image U_i the motion vector field M_i that corresponds to the spatio-temporal geometrical transformation of the motion of the object; c) obtaining a final estimated high-resolution image H of the object by iterative determination of a high-resolution image H^(n+1), obtained by applying several correction iterations to a current estimated high-resolution image H^n, n being the number of iterations, starting from an initial estimated high-resolution image H^1 of the object, each correction iteration including at least: i) warping the estimated high-resolution image H^n using the motion vector fields M_i to determine a set of low-resolution reference images L_i^n; ii) determining a differential image D_i by difference between each PET image Γ_i and the corresponding low-resolution reference image L_i^n; iii) warping back the differential images D_i using the motion vector fields M_i and averaging the N warped-back differential images to obtain a high-resolution differential image; iv) determining the high-resolution image H^(n+1) by correcting the high-resolution image H^n using the high-resolution differential image; d) applying the motion vector fields M_i of each UUI image U_i to the final high-resolution image H.
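The correction loop in steps i)-iv) is a classic iterative back-projection scheme. A 1-D toy version, with integer circular shifts standing in for the UUI-derived motion warps and block averaging standing in for the PET resolution model (both assumptions for illustration), looks like this:

```python
import numpy as np

def ibp_super_resolution(low_res_frames, shifts, factor=2, iters=10, beta=0.5):
    """Iterative back-projection: refine a high-resolution estimate H so that,
    after warping (integer circular shifts here) and downsampling, it matches
    each low-resolution frame. A 1-D toy version of steps i)-iv)."""
    H = np.repeat(low_res_frames[0].astype(float), factor)  # initial estimate H^1
    for _ in range(iters):
        corr = np.zeros(H.size)
        for frame, s in zip(low_res_frames, shifts):
            warped = np.roll(H, s)                       # i) warp H with motion M_i
            L = warped.reshape(-1, factor).mean(axis=1)  #    simulate low-res L_i^n
            D = frame - L                                # ii) differential image D_i
            D_hi = np.repeat(D, factor)                  # iii) upsample and
            corr += np.roll(D_hi, -s)                    #      warp back
        H = H + beta * corr / len(low_res_frames)        # iv) correction update H^(n+1)
    return H
```

Each iteration pushes the residual between the observed frames and the simulated low-resolution views of H back into the high-resolution grid, so consistent sub-pixel information from the motion-shifted frames accumulates in H.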