Patent classifications
G06T2211/441
Medical imaging method and system
The present application provides a medical imaging method and system and a non-transitory computer-readable storage medium. The medical imaging method comprises obtaining an original image acquired by an X-ray imaging system and post-processing the original image with a trained network to obtain an optimized image.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
The CPU 20 obtains a three-dimensional image 50 that is a captured image of an organ of a subject, and obtains a plurality of ultrasound tomographic images 56 of the organ captured successively at different positions. For each ultrasound tomographic image 56, the CPU 20 identifies, among the plurality of CT tomographic images 49 that constitute the three-dimensional image 50, the CT tomographic image 49 corresponding to the same cross section as that ultrasound tomographic image 56, and associates position information indicating the position of the identified CT tomographic image 49 with that ultrasound tomographic image 56. The CPU 20 then generates a three-dimensional ultrasound image 58 from the plurality of ultrasound tomographic images 56 on the basis of the position information.
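The slice-association and stacking steps described above can be sketched as follows. This is an illustrative simplification, assuming slice positions are scalar coordinates along the scan axis and that association means nearest-position matching; the function names are hypothetical, not from the patent.

```python
def associate_positions(us_positions, ct_positions):
    """For each ultrasound slice position, return the position of the
    closest CT tomographic slice (the position information attached
    to that ultrasound image)."""
    associated = []
    for up in us_positions:
        nearest = min(ct_positions, key=lambda cp: abs(cp - up))
        associated.append(nearest)
    return associated

def stack_volume(us_slices, positions):
    """Order ultrasound slices by their associated positions to form
    a crude 3D ultrasound volume (list of slices along the scan axis)."""
    order = sorted(range(len(us_slices)), key=lambda i: positions[i])
    return [us_slices[i] for i in order]
```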
Single or a few views computed tomography imaging with deep neural network
A method for tomographic imaging comprising acquiring [200] a set of one or more 2D projection images [202] and reconstructing [204] a 3D volumetric image [216] from the set of one or more 2D projection images [202] using a residual deep learning network comprising an encoder network, a transform module and a decoder network, wherein the reconstructing comprises: transforming [206] by the encoder network the set of one or more 2D projection images [202] to 2D features [208]; mapping [210] by the transform module the 2D features [208] to 3D features [212]; and generating [214] by the decoder network the 3D volumetric image from the 3D features [212]. Preferably, the encoder network comprises 2D convolution residual blocks and the decoder network comprises 3D blocks without residual shortcuts within each of the 3D blocks.
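The encoder-transform-decoder data flow can be illustrated at the shape level. A common way to realize such a 2D-to-3D transform module is to fold feature channels into a depth dimension by reshaping; the patent does not specify the mapping, so the sketch below is an assumption, not the claimed implementation.

```python
import numpy as np

def transform_2d_to_3d(features_2d, depth):
    """Transform-module sketch: fold the channel axis of 2D feature
    maps (C*D, H, W) into a depth axis, yielding 3D feature maps
    (C, D, H, W) for the 3D decoder network."""
    c_times_d, h, w = features_2d.shape
    assert c_times_d % depth == 0, "channels must divide evenly by depth"
    return features_2d.reshape(c_times_d // depth, depth, h, w)
```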
Iterative image reconstruction framework
The present disclosure relates to image reconstruction with favorable properties in terms of noise reduction, spatial resolution, detail preservation and computational complexity. The disclosed techniques may include some or all of: a first-pass reconstruction, a simplified datafit term, and/or a deep learning denoiser. In various implementations, the disclosed technique is portable to different CT platforms, such as by incorporating a first-pass reconstruction step.
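The interplay of a first-pass reconstruction, a simplified data-fit term, and a deep learning denoiser can be sketched as a plug-and-play iteration: a gradient step on a least-squares data-fit term followed by a denoising step. The update rule below is a generic sketch of this class of methods, not the patent's specific algorithm; `denoise` stands in for the deep learning denoiser.

```python
import numpy as np

def iterative_recon(A, y, denoise, x0, step=0.1, iters=50):
    """Alternate a gradient step on the data-fit term ||Ax - y||^2
    with a denoising step; x0 plays the role of the first-pass
    reconstruction that initializes the iteration."""
    x = x0.astype(float)
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)  # data-fit gradient step
        x = denoise(x)                    # deep denoiser stand-in
    return x
```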
METHOD OF METAL ARTEFACT REDUCTION IN X-RAY DENTAL VOLUME TOMOGRAPHY
The present invention relates to a method of metal artefact reduction in x-ray dental volume tomography, the method comprising: a step (S1) of obtaining two-dimensional x-ray images (1) or a sinogram (2) of at least part (v) of a patient jaw (3a), acquired through relatively rotating an x-ray source (4) and a detector (5) around the patient jaw (3a); the method being characterized by further comprising: a step (S2) of detecting metal objects (6) in the two-dimensional x-ray images (1) or the sinogram (2) by using at least a trained artificial intelligence algorithm to generate 2D masks (7) which represent the metal objects (6) in the two-dimensional x-ray images (1) or 3D masks which represent the metal objects (6) in the sinogram (2), respectively; and a step (S4; S5) of reconstructing a three-dimensional tomographic image (8) based on the two-dimensional x-ray images (1) together with the generated 2D masks (7), or on the sinogram (2) together with the generated 3D masks, respectively.
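One standard way to use such masks in the reconstruction step is sinogram inpainting: bins flagged as metal-corrupted are replaced by interpolation from uncorrupted neighbours before reconstruction. The sketch below assumes this inpainting approach (the patent leaves the mask usage to steps S4/S5) and uses simple per-row linear interpolation.

```python
import numpy as np

def inpaint_sinogram(sino, metal_mask):
    """Replace metal-corrupted sinogram bins (mask == True) with
    linear interpolation from neighbouring uncorrupted bins in
    each detector row."""
    out = sino.astype(float).copy()
    cols = np.arange(sino.shape[1])
    for r in range(sino.shape[0]):
        bad = metal_mask[r]
        if bad.any() and not bad.all():
            out[r, bad] = np.interp(cols[bad], cols[~bad], out[r, ~bad])
    return out
```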
PROVISION OF CORRECTED MEDICAL IMAGE DATA
A method includes receiving image data of an examination object. A first temporary data record is created by applying a first correction to the image data. A further temporary data record is created by applying a further correction to the image data. The further correction at least partially corresponds to the first correction. A trained function is applied to input data that is based on the first temporary data record and the further temporary data record. A parameter of the trained function is based on an image quality metric. It is determined whether the first temporary data record has a higher image quality compared with the further temporary data record. When a result is positive, the first temporary data record is provided as the corrected medical image data. When the result is negative, the further temporary data record is provided as the image data, and part of the method is repeated.
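The compare-and-select logic can be sketched as a loop over candidate corrections, where a quality score stands in for the trained function whose parameter is based on an image quality metric. All names below are illustrative; the actual trained function and corrections are defined by the patent, not this sketch.

```python
def select_correction(image, corrections, quality):
    """Apply a first correction, then compare each further
    correction's output against the current best using a quality
    score (stand-in for the trained function); keep the record
    with the higher image quality."""
    best = corrections[0](image)  # first temporary data record
    for corr in corrections[1:]:
        candidate = corr(image)   # further temporary data record
        if quality(candidate) > quality(best):
            best = candidate      # negative result: repeat with this record
    return best
```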
THREE-DIMENSIONAL TOMOGRAPHY RECONSTRUCTION PIPELINE
A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., human body) between a beam source and imaging sensor. The beam effectively integrates along a path through the object producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed function backprojection unit, and a second neural network model. Given information for the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
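The fixed-function backprojection unit in the middle of the pipeline can be illustrated with a deliberately minimal case: two orthogonal parallel-beam views of a 2D object, where each projection value is smeared back along its ray and the contributions are summed. This toy sketch shows only the backprojection idea, not the patent's 3D unit or the surrounding neural network models.

```python
import numpy as np

def backproject_two_view(proj_0, proj_90):
    """Unfiltered backprojection of two orthogonal parallel-beam
    projections into an n-by-n image: each projection value is
    smeared back along its ray and the two views are summed."""
    n = proj_0.shape[0]
    recon = np.zeros((n, n))
    recon += proj_0[np.newaxis, :]   # 0-degree rays run down columns
    recon += proj_90[:, np.newaxis]  # 90-degree rays run across rows
    return recon
```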
METHOD AND SYSTEM FOR GENERATING MULTI-TASK LEARNING-TYPE GENERATIVE ADVERSARIAL NETWORK FOR LOW-DOSE PET RECONSTRUCTION
The present application relates to a method and system for generating a multi-task learning-type generative adversarial network for low-dose PET reconstruction, and relates to the field of deep learning. The method includes connecting layers of the encoder with layers of the decoder via skip connections to provide a U-Net type picture generator; generating a group of generative adversarial networks by matching a plurality of picture generators with a plurality of discriminators in a one-to-one manner; obtaining a first multi-task learning-type generative adversarial network; designing a first joint loss function for improving image quality; and training the first multi-task learning-type generative adversarial network with the first joint loss function in combination with an optimizer to provide a second multi-task learning-type generative adversarial network.
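A joint loss of the kind described typically combines a fidelity term between the generated and full-dose image with an adversarial term from the paired discriminator. The sketch below assumes an L1 fidelity term and an illustrative adversarial weight; the patent's actual joint loss function is not specified in the abstract.

```python
def joint_loss(pred, target, disc_score, adv_weight=0.01):
    """Illustrative joint loss for low-dose PET reconstruction:
    mean L1 fidelity between generated and full-dose images, plus
    an adversarial term from the one-to-one paired discriminator's
    score on the generated image."""
    l1 = sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
    adv = -disc_score  # generator wants the discriminator score high
    return l1 + adv_weight * adv
```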
END-TO-END TRAINING FOR A THREE-DIMENSIONAL TOMOGRAPHY RECONSTRUCTION PIPELINE
A three-dimensional (3D) density volume of an object is constructed from tomography images (e.g., x-ray images) of the object. The tomography images are projection images that capture all structures of an object (e.g., human body) between a beam source and imaging sensor. The beam effectively integrates along a path through the object producing a tomography image at the imaging sensor, where each pixel represents attenuation. A 3D reconstruction pipeline includes a first neural network model, a fixed function backprojection unit, and a second neural network model. Given information for the capture environment, the tomography images are processed by the reconstruction pipeline to produce a reconstructed 3D density volume of the object. In contrast with a set of 2D slices, the entire 3D density volume is reconstructed, so two-dimensional (2D) density images may be produced by slicing through any portion of the 3D density volume at any angle.
Model-based image reconstruction using analytic models learned by artificial neural networks
The present disclosure is related to methods and systems for image reconstruction including accelerated forward transformation with an Artificial Neural Network (ANN).