Patent classifications
G06T2211/421
METHOD OF PROCESSING COMPUTED TOMOGRAPHY (CT) DATA FOR FILTERED BACK PROJECTION (FBP)
The present invention relates to a method of processing computed tomography (CT) data for suppressing cone beam artefacts (CBA) in CT images reconstructed from said CT data. The reconstruction uses the Frequency Split (FS) method. However, a straightforward use of this method can lead to an undesired increase of the residual low-frequency noise left in the basis image after applying image-domain de-noising methods. This residual noise then propagates rather linearly to the spectral results. In order to avoid this increase of the noise, the method presented here uses the FS method selectively and yet effectively. Thus, in a first aspect of the invention there is provided a method of processing computed tomography (CT) data for suppressing cone beam artefacts (CBA) in CT images to be reconstructed from said CT data. The method comprises the steps of obtaining CT data generated during a CT scan of a patient (step S1); decomposing the obtained CT data in the projection domain, resulting in a plurality of decomposed sinograms (step S2); and non-uniformly spreading, between said decomposed sinograms, noise and/or inconsistencies that would lead to cone beam artefacts (step S3).
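The decomposition in step S2 can be illustrated with a frequency split along the detector axis. The following is a minimal sketch, not the patented method: the cutoff value, the axis choice, and the `frequency_split` helper are all assumptions made for illustration.

```python
import numpy as np

def frequency_split(sinogram, cutoff=0.1):
    """Split a sinogram into low- and high-frequency components along the
    detector axis (hypothetical illustration of step S2)."""
    n = sinogram.shape[-1]
    freqs = np.fft.fftfreq(n)                       # normalized frequencies
    lowpass = (np.abs(freqs) <= cutoff).astype(float)
    spectrum = np.fft.fft(sinogram, axis=-1)
    low = np.real(np.fft.ifft(spectrum * lowpass, axis=-1))
    high = sinogram - low                           # complementary component
    return low, high

# The two components sum back to the original sinogram, so content prone
# to cone beam artefacts can be confined to one of them (step S3).
sino = np.random.rand(180, 256)
low, high = frequency_split(sino)
```

Because the split is exactly complementary, any subsequent non-uniform treatment of the two sinograms cannot lose data from the original measurement.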
System for the detection and display of metal obscured regions in cone beam CT
A method for rendering metal-obscured regions in a volume radiographic image reconstructs a first 3D image from a plurality of 2D projection images obtained over a scan angle range relative to the subject and identifies metal in the first 3D image or metal shadows in the plurality of 2D projection images. Metal-obscured regions are then determined in a reconstructed 3D image of the subject, and an alternative, limited-angle reconstruction is performed for the metal-obscured regions and displayed to the user with an indication of its spatial relationship to the corresponding metal-obscured region.
Systems and methods for correcting mismatch induced by respiratory motion in positron emission tomography image reconstruction
The disclosure relates to PET imaging systems and methods. The systems may obtain a plurality of gated PET images of a subject and a CT image acquired by performing a spiral CT scan on the subject. Each gated PET image may include a plurality of sub-gated PET images. The CT image may include a plurality of sub-CT images, each of which corresponds to one of the plurality of sub-gated PET images. The systems may determine a target motion vector field between a target physiological phase and a physiological phase of the CT image based on the plurality of sub-gated PET images and the plurality of sub-CT images. The systems may reconstruct an attenuation-corrected PET image corresponding to the target physiological phase based on the target motion vector field, the CT image, and the PET data used for reconstructing the plurality of gated PET images.
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM
An image processing apparatus includes at least one processor. The processor is configured to execute region-of-interest image generation processing of generating a region-of-interest image from a projection image, which is obtained at an irradiation position closest to a position facing a detection surface of a radiation detector, among a series of projection images obtained by irradiating a breast with radiations and imaging the breast, and shape type determination processing of determining a type of a shape of a calcification image included in the region-of-interest image generated by the region-of-interest image generation processing.
Apparatus and method combining deep learning (DL) with an X-ray computed tomography (CT) scanner having a multi-resolution detector
A method and apparatus are provided that use a deep learning (DL) network together with a multi-resolution detector to perform X-ray projection imaging, providing resolution similar to a single-resolution detector but at lower cost and with less demand on the communication bandwidth between the rotating and stationary parts of an X-ray gantry. The DL network is trained using a training dataset that includes input data and target data. The input data includes projection data acquired using a multi-resolution detector, and the target data includes projection data acquired using a single-resolution, high-resolution detector. Thus, the DL network is trained to improve the resolution of projection data acquired using a multi-resolution detector. Further, the DL network can be trained to additionally correct other aspects of the projection data (e.g., noise and artifacts).
X-RAY IMAGING APPARATUS AND X-RAY IMAGE PROCESSING METHOD
An X-ray imaging apparatus includes an X-ray generator including a plurality of X-ray sources, an X-ray detector configured to detect X-rays radiated from the plurality of X-ray sources and generate a plurality of pieces of projection data, and a processor configured to apply log projection to each of the plurality of pieces of projection data, to apply weighted projection to the log-projected projection data, to apply a bidirectional ramp filter to the weighted-projected projection data, and to generate a tomographic image reconstructed based on each of the projection data to which the bidirectional ramp filter is applied.
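The processing chain described above (log projection, weighting, ramp filtering) follows the standard FBP preprocessing order. A minimal one-dimensional sketch is given below; it uses an ordinary frequency-domain ramp rather than the patent's bidirectional ramp filter, whose exact form is not specified here, and the function names are illustrative assumptions.

```python
import numpy as np

def ramp_filter(projections):
    """Apply a |f| ramp filter along the detector axis in the frequency
    domain (standard FBP ramp, not the patented bidirectional variant)."""
    n = projections.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))                # |f| frequency response
    spectrum = np.fft.fft(projections, axis=-1)
    return np.real(np.fft.ifft(spectrum * ramp, axis=-1))

def preprocess(intensity, i0=1.0, weights=None):
    """Log projection (Beer-Lambert), optional per-ray weighting,
    then ramp filtering -- the order described in the abstract."""
    proj = -np.log(intensity / i0)                  # log projection
    if weights is not None:
        proj = proj * weights                       # e.g. fan-beam weights
    return ramp_filter(proj)
```

Note that the ramp response is zero at DC, so a uniform projection filters to zero; this is why the weighting must be applied before, not after, the ramp filter.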
System and method for using non-contrast image data in CT perfusion imaging
A system and method for generating a parametric map of a subject's brain includes receiving non-contrast computed tomography (NCCT) imaging data and receiving computed tomography perfusion (CTP) data. The method further includes creating a baseline image by utilizing the NCCT data and generating a parametric map using the CTP data and the baseline image.
DEEP LEARNING BASED THREE-DIMENSIONAL RECONSTRUCTION METHOD FOR LOW-DOSE PET IMAGING
Disclosed is a three-dimensional low-dose PET reconstruction method based on deep learning. The method comprises the following steps: back projecting low-dose PET raw data to the image domain to retain enough information from the raw data; selecting an appropriate three-dimensional deep neural network structure to fit the mapping between the back projection of the low-dose PET data and a standard-dose PET image; and, after the network parameters have been learned from the training samples and fixed, performing three-dimensional PET image reconstruction starting from low-dose PET raw data, thereby obtaining a low-dose PET reconstructed image with lower noise and higher resolution compared with traditional reconstruction algorithms and image-domain noise reduction processing.
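The first step, back projecting raw data into the image domain as the network input, can be sketched with a simple unfiltered parallel-beam backprojection. This is a two-dimensional toy under assumed geometry (nearest-neighbour sampling, detector length equal to image width), not the disclosed 3D implementation.

```python
import numpy as np

def backproject(sinogram, angles, size):
    """Unfiltered backprojection of a parallel-beam sinogram onto a
    size x size grid; such an image would feed the 3D network as input."""
    coords = np.arange(size) - size / 2.0
    xx, yy = np.meshgrid(coords, coords)
    image = np.zeros((size, size))
    for proj, theta in zip(sinogram, angles):
        # detector coordinate of each pixel at this view angle
        t = xx * np.cos(theta) + yy * np.sin(theta) + size / 2.0
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        image += proj[idx]                          # smear the view back
    return image / len(angles)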
FEW-VIEW CT IMAGE RECONSTRUCTION SYSTEM
A system for few-view computed tomography (CT) image reconstruction is described. The system includes a preprocessing module, a first generator network, and a discriminator network. The preprocessing module is configured to apply a ramp filter to an input sinogram to yield a filtered sinogram. The first generator network is configured to receive the filtered sinogram, to learn a filtered back-projection operation, and to provide a first reconstructed image as output. The first reconstructed image corresponds to the input sinogram. The discriminator network is configured to determine whether a received image corresponds to the first reconstructed image or a corresponding ground-truth image. The generator network and the discriminator network together form a Wasserstein generative adversarial network (WGAN). The WGAN is optimized using an objective function based, at least in part, on a Wasserstein distance and based, at least in part, on a gradient penalty.
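The objective combining a Wasserstein distance with a gradient penalty is, in the standard WGAN-GP formulation (which the abstract appears to reference, though its exact loss is not given), the critic loss

```latex
L_D = \mathbb{E}_{\tilde{x} \sim P_g}\left[D(\tilde{x})\right]
    - \mathbb{E}_{x \sim P_r}\left[D(x)\right]
    + \lambda\, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}
      \left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\right)^2\right]
```

where \(P_r\) is the distribution of ground-truth images, \(P_g\) that of generator outputs, \(\hat{x}\) is sampled on straight lines between real and generated images, and \(\lambda\) weights the gradient penalty that softly enforces the 1-Lipschitz constraint required by the Wasserstein distance.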