
Systems and methods for image correction in positron emission tomography

A system for image correction in PET is provided. The system may acquire a PET image and a CT image of a subject. The system may generate, based on the PET image and the CT image, an attenuation-corrected PET image of the subject by applying an attenuation correction model. The attenuation correction model may be a trained cascaded neural network including a trained first model and at least one trained second model downstream of the trained first model. During application of the attenuation correction model, the input of each of the at least one trained second model may include the PET image, the CT image, and the output image of the previous trained model that is upstream of and connected to that trained second model.
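The cascade described above can be sketched as follows. The model internals here (`first_model`, `second_model` as simple array operations) are hypothetical placeholders for the trained networks; only the wiring, each second model receiving the PET image, the CT image, and the upstream model's output, follows the abstract.

```python
import numpy as np

def first_model(pet, ct):
    # Placeholder for the trained first model: produces an initial
    # attenuation-corrected estimate from the PET and CT images.
    return pet * (1.0 + 0.1 * ct)

def second_model(pet, ct, prev_out):
    # Placeholder for a trained second model: refines the output of the
    # model directly upstream, again conditioned on the PET and CT images.
    return 0.5 * prev_out + 0.5 * pet * (1.0 + 0.1 * ct)

def cascaded_attenuation_correction(pet, ct, n_second_models=2):
    """Apply the cascade: the first model runs once, then each second
    model receives (PET, CT, output of the previous model)."""
    out = first_model(pet, ct)
    for _ in range(n_second_models):
        out = second_model(pet, ct, out)
    return out

pet = np.ones((4, 4))
ct = np.full((4, 4), 2.0)
corrected = cascaded_attenuation_correction(pet, ct)
```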

SYSTEMS AND METHODS FOR AUTOMATED SINOGRAM COMPLETION, COMBINATION, AND COMPLETION BY COMBINATION
20170365075 · 2017-12-21 ·

Described herein are systems and methods for automated completion, combination, and completion by combination of sinograms. In certain embodiments, sinogram completion is based on a photographic (e.g., spectral or optical) acquisition and a CT acquisition (e.g., micro CT). In other embodiments, sinogram completion is based on two CT acquisitions. The sinogram to be completed may be truncated due to a detector crop (e.g., a center-based crop or an offset-based crop). The sinogram to be completed may be truncated due to a subvolume crop (e.g., based on a low-resolution image projected onto the sinogram).
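A minimal sketch of the detector-crop case: build a mask of the detector bins retained after a center- or offset-based crop, then fill the missing bins from a second, co-registered sinogram (e.g., one forward-projected from another CT acquisition). The function names and the simple copy-fill strategy are illustrative assumptions, not the patented completion method.

```python
import numpy as np

def truncation_mask(n_angles, n_bins, crop_width, offset=0):
    """Boolean mask of detector bins retained after a crop.

    crop_width: number of detector bins kept; offset: shift of the crop
    window from the detector center (offset-based crop)."""
    center = n_bins // 2 + offset
    lo = max(0, center - crop_width // 2)
    hi = min(n_bins, lo + crop_width)
    mask = np.zeros((n_angles, n_bins), dtype=bool)
    mask[:, lo:hi] = True
    return mask

def complete_sinogram(truncated, reference, mask):
    # Fill the cropped (missing) bins from the reference sinogram.
    completed = truncated.copy()
    completed[~mask] = reference[~mask]
    return completed

mask = truncation_mask(n_angles=3, n_bins=8, crop_width=4)
filled = complete_sinogram(np.zeros((3, 8)), np.ones((3, 8)), mask)
```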

SYSTEMS AND METHODS FOR IMAGE RECONSTRUCTION

A method may include obtaining a first acquisition time period related to a scan of a first modality performed on an object. The method may also include obtaining one or more second acquisition time periods related to a scan of a second modality performed on the object. The method may also include obtaining, based on the first acquisition time period and the one or more second acquisition time periods, target data of the object acquired in the scan of the first modality. The method may also include generating one or more target images of the object based on the target data.
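One plausible reading of "obtaining target data based on the two sets of acquisition time periods" is selecting the first-modality data acquired during windows that overlap the second modality's acquisition periods. The sketch below implements that interval intersection; the interpretation itself is an assumption, not stated in the abstract.

```python
def overlapping_windows(first_period, second_periods):
    """Return the portions of the first modality's acquisition period
    that overlap any of the second modality's acquisition periods.

    Periods are (start, end) pairs in any common time unit."""
    s0, e0 = first_period
    windows = []
    for s1, e1 in sorted(second_periods):
        s, e = max(s0, s1), min(e0, e1)
        if s < e:  # non-empty intersection
            windows.append((s, e))
    return windows

target = overlapping_windows((0, 10), [(2, 4), (8, 12)])
```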

Feature space based MR guided PET reconstruction

A method for PET image reconstruction acquires PET data with a PET scanner; reconstructs a seed PET image from the acquired PET data; builds a feature space from the seed PET image and anatomical images co-registered with the seed PET image; and performs a penalized maximum-likelihood reconstruction of a PET image from the seed PET image and the feature space, using a penalty function calculated from the differences between each voxel and its neighbors, both in the PET image and in the feature space, regardless of their location in the image.
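The distinctive part is the feature-space term: a voxel's "neighbors" are the voxels closest to it in feature space, wherever they sit in the image. The quadratic form and the brute-force nearest-neighbor search below are simplifying assumptions for illustration, not the patented penalty.

```python
import numpy as np

def feature_space_penalty(image, features, k=3):
    """Penalize differences between each voxel and its k nearest
    neighbors in feature space, regardless of image location.

    image:    N-D voxel values (e.g., the current PET estimate).
    features: same spatial shape plus a trailing feature dimension.
    """
    x = image.ravel()
    f = features.reshape(x.size, -1)
    total = 0.0
    for i in range(x.size):
        d = np.linalg.norm(f - f[i], axis=1)     # feature-space distances
        nbrs = np.argsort(d)[1:k + 1]            # skip the voxel itself
        total += np.sum((x[i] - x[nbrs]) ** 2)   # quadratic difference term
    return total

feat = np.arange(4.0).reshape(2, 2, 1)
flat_pen = feature_space_penalty(np.ones((2, 2)), feat)
edge_pen = feature_space_penalty(np.array([[0.0, 1.0], [0.0, 1.0]]), feat)
```

A uniform image incurs zero penalty regardless of the feature space, which is the behavior an edge-preserving prior needs.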

METHOD FOR GENERATION OF SYNTHETIC MAMMOGRAMS FROM TOMOSYNTHESIS DATA
20170316588 · 2017-11-02 ·

A method and related apparatus (VS) for synthesizing a projection image (S), in particular for use in mammography. It is proposed to compute a weight function from one image volume (V1), which is then used to implement a weighted forward projection through another image volume (V2) to compute a synthesized projection image (S).
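The weighted forward projection can be sketched in a few lines: weights derived from V1 modulate a sum through V2 along the projection axis. Deriving the weights as normalized V1 intensities is a stand-in assumption; the abstract does not specify the weight function.

```python
import numpy as np

def synthesize_projection(v1, v2, axis=0):
    """Weighted forward projection through v2 with weights from v1.

    Weights are v1 intensities normalized along the projection axis
    (an illustrative choice), so each output pixel is a weighted
    average through the volume."""
    w = v1 / (np.sum(v1, axis=axis, keepdims=True) + 1e-12)
    return np.sum(w * v2, axis=axis)

v1 = np.ones((2, 3, 3))                 # uniform weights
v2 = np.arange(18.0).reshape(2, 3, 3)   # tomosynthesis volume stand-in
synth = synthesize_projection(v1, v2)
```

With uniform weights this degenerates to a plain average along the projection axis; non-uniform V1 (e.g., emphasizing in-focus slices) shifts each output pixel toward the slices V1 marks as important.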

Method and apparatus for performing a tomographic examination of an object
11428648 · 2022-08-30 ·

A method and a related apparatus for performing a tomographic examination of an object (2) which advances through an examination area (6). The examination area (6) is irradiated with x-rays transversally to the motion trajectory of the object (2), and the residual intensity of the x-rays that have crossed the object (2) is repeatedly detected to obtain, for each detection, an electronic two-dimensional pixel map; the two-dimensional maps thus obtained are processed by a computer to obtain a three-dimensional tomographic reconstruction of the object (2). During the advancement, the object (2) is caused or allowed to rotate, at least partly uncontrolled, around one or more rotation axes that are transversal both to the motion trajectory and to the propagation directions of the x-rays crossing it. A computer also determines the spatial position of the object (2) relative to the one or more emitters (4) and/or the one or more detectors (5) at the instant when each two-dimensional map is detected, and factors this into the tomographic reconstruction.
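The key idea is that each detection carries its own pose, which the reconstruction must account for. A toy stand-in: unfiltered backprojection in 2-D where each projection is smeared back along its own (measured, not commanded) angle. Reducing the pose to a single in-plane angle is a deliberate simplification for illustration.

```python
import numpy as np

def backproject(projections, angles, size):
    """Pose-aware unfiltered backprojection (toy, 2-D).

    projections: list of 1-D detector readouts.
    angles:      the per-detection pose, here one in-plane angle each,
                 as determined at the instant each map was acquired.
    """
    ys, xs = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    recon = np.zeros((size, size))
    for p, theta in zip(projections, angles):
        t = xs * np.cos(theta) + ys * np.sin(theta)   # detector coordinate
        idx = np.clip(np.round(t + len(p) / 2).astype(int), 0, len(p) - 1)
        recon += p[idx]                               # smear back along ray
    return recon / len(angles)

projs = [np.ones(8) for _ in range(4)]
angs = np.linspace(0, np.pi, 4, endpoint=False)
recon = backproject(projs, angs, 5)
```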

PROBE LOCALIZATION
20170263021 · 2017-09-14 ·

A method of NM image reconstruction, including: (a) acquiring a first set of NM data of a part of the body; (b) collecting a probe position and/or probe NM data from an intrabody probe; (c) reconstructing an NM image from said NM data using said collected probe data.

Also described is a method of navigating to a target in a body, including: (a) acquiring an NM image of a part of the body; (b) collecting NM data from an intrabody probe; (c) correlating said image and said data; and (d) extracting location information of said probe relative to said target based on said correlated data.
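Step (c), correlating probe counts against the NM image, can be sketched in 1-D: slide the probe's short count trace along an image profile and keep the offset with maximal normalized correlation. The 1-D reduction and the correlation score are illustrative assumptions, not the disclosed algorithm.

```python
import numpy as np

def localize_probe(image_profile, probe_counts):
    """Return the offset along image_profile where the probe's count
    trace correlates best (normalized cross-correlation, brute force)."""
    n = len(probe_counts)
    pc = probe_counts - probe_counts.mean()
    best, best_score = 0, -np.inf
    for off in range(len(image_profile) - n + 1):
        seg = image_profile[off:off + n]
        denom = seg.std() * probe_counts.std() * n
        if denom == 0:
            continue  # flat segment: correlation undefined
        score = np.dot(seg - seg.mean(), pc) / denom
        if score > best_score:
            best, best_score = off, score
    return best

profile = np.array([0.0, 0.0, 5.0, 9.0, 5.0, 0.0, 0.0, 0.0])
probe = np.array([5.0, 9.0, 5.0])
offset = localize_probe(profile, probe)
```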

METHOD AND SYSTEM FOR GENERATING MULTI-TASK LEARNING-TYPE GENERATIVE ADVERSARIAL NETWORK FOR LOW-DOSE PET RECONSTRUCTION

The present application relates to a method and system for generating a multi-task learning-type generative adversarial network for low-dose PET reconstruction, in the field of deep learning. The method includes connecting layers of the encoder with layers of the decoder by skip connections to provide a U-Net type picture generator; generating a group of generative adversarial networks by matching a plurality of picture generators with a plurality of discriminators in a one-to-one manner; obtaining a first multi-task learning-type generative adversarial network; designing a joint loss function for improving image quality; and training the first multi-task learning-type generative adversarial network according to the joint loss function, in combination with an optimizer, to provide a second multi-task learning-type generative adversarial network.
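The abstract does not give the joint loss, so the sketch below uses the common pix2pix-style recipe, adversarial term plus a weighted L1 image-fidelity term, as an assumed stand-in for the generator's objective.

```python
import numpy as np

def joint_loss(fake, real, disc_score, lam=100.0):
    """Generator-side joint loss (assumed pix2pix-style form).

    fake:       generated full-dose-like PET image.
    real:       reference full-dose PET image.
    disc_score: discriminator's probability that `fake` is real.
    lam:        weight of the L1 image-quality term."""
    adv = -np.log(disc_score + 1e-12)    # adversarial term: push score -> 1
    l1 = np.mean(np.abs(fake - real))    # image-fidelity term
    return adv + lam * l1

real = np.zeros((2, 2))
perfect = joint_loss(real.copy(), real, disc_score=1.0)
fooled_half = joint_loss(real.copy(), real, disc_score=0.5)
```

A perfect reconstruction that fully fools the discriminator drives the loss to zero; a lower discriminator score or any pixelwise error raises it.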

System and method for registering multi-modality images

The present disclosure relates to a method and system for registering multi-modality images. The method may include: acquiring a first image relating to one or more reference objects; acquiring a second image relating to the one or more reference objects; determining a set of reference points based on the one or more reference objects; determining a set of mapping data corresponding to the set of reference points in the first image and the second image; and determining one or more registration parameters by comparing a plurality of back-projection errors generated in a plurality of iterations that are performed based on the set of mapping data.
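A toy version of the final step: over a number of iterations, each proposing candidate registration parameters (reduced here to a 2-D translation, an assumption), compute the back-projection error on the mapped reference points and keep the parameters with the smallest error.

```python
import numpy as np

def register(points_a, points_b, candidates):
    """Select the candidate translation minimizing the mean
    back-projection error between mapped reference point sets.

    points_a, points_b: (N, 2) arrays of corresponding reference points
    from the first and second image; candidates: one parameter proposal
    per iteration."""
    best, best_err = None, np.inf
    for t in candidates:
        err = np.mean(np.linalg.norm(points_a + t - points_b, axis=1))
        if err < best_err:
            best, best_err = t, err
    return best, best_err

a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = a + np.array([1.0, 2.0])  # second image shifted by (1, 2)
cands = [np.array([0.0, 0.0]), np.array([1.0, 2.0]), np.array([2.0, 1.0])]
t, err = register(a, b, cands)
```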
