Patent classifications
A61B6/5258
METHOD AND SYSTEMS FOR ALIASING ARTIFACT REDUCTION IN COMPUTED TOMOGRAPHY IMAGING
Various methods and systems are provided for computed tomography imaging. In one embodiment, a method includes acquiring, with an x-ray detector and an x-ray source coupled to a gantry, a three-dimensional image volume of a subject while the subject moves through a bore of the gantry and the gantry rotates the x-ray detector and the x-ray source around the subject, inputting the three-dimensional image volume to a trained deep neural network to generate a corrected three-dimensional image volume with a reduction in aliasing artifacts present in the three-dimensional image volume, and outputting the corrected three-dimensional image volume. In this way, aliasing artifacts caused by sub-sampling may be removed from computed tomography images while preserving details, texture, and sharpness in the computed tomography images.
LOCAL ENHANCEMENT FOR A MEDICAL IMAGE
The present disclosure relates to locally enhancing medical images. In accordance with certain embodiments, a method includes determining a boundary of a region of interest in a displayed medical image, overlaying the boundary on the displayed medical image, adjusting a position of a collimator of a medical imaging system based on the determined boundary, enhancing image quality of the region of interest, and displaying the enhanced region of interest within the boundary.
Device and method for performing nuclear imaging
Gamma cameras may be used to obtain two-dimensional images of an emitting object; the most common form is the "Anger-type" gamma camera. The primary components of a conventional Anger-type gamma camera include, but are not limited to, a plurality of photo-multiplier tubes, a scintillator material, and a collimator. The disclosed invention claims a novel use of a gamma camera that eliminates the collimator. The new method forms an initial image from the incident radiation without depending on any mechanical or other means of restricting the incident radiation passed on to a position-sensitive radiation detector, and then uses mathematical deconvolution to produce an image of the object without the need for a collimator and without reliance on a pre-existing image.
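The collimator-free approach above amounts to deconvolving the detector's open-aperture response from the measured count image. The abstract does not specify the deconvolution algorithm, so the following is only a minimal numpy sketch using regularized Wiener deconvolution, with a hypothetical wide response standing in for the real system:

```python
import numpy as np

def wiener_deconvolve(measured, psf, reg=1e-4):
    """Recover the object image by dividing out the open-aperture system
    response in Fourier space, with a small regularizer to avoid
    amplifying noise where the response is weak."""
    H = np.fft.fft2(psf, s=measured.shape)
    G = np.fft.fft2(measured)
    F = np.conj(H) * G / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(F))

# Illustrative check: blur a point source with a broad (collimator-less)
# response, then deconvolve to recover it.
obj = np.zeros((32, 32))
obj[16, 16] = 1.0
psf = np.zeros((32, 32))
psf[:3, :3] = 1.0 / 9.0  # stand-in for a wide open-aperture response
measured = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))
recovered = wiener_deconvolve(measured, psf)
```

In practice the measured data are Poisson-noisy, so the regularizer (or an iterative scheme such as Richardson-Lucy) matters more than this noiseless toy suggests.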
Systems and methods for deep learning-based image reconstruction
Methods and systems for deep learning-based image reconstruction are disclosed herein. An example method includes receiving a set of imaging projection data, identifying a voxel to reconstruct, receiving a trained regression model, and reconstructing the voxel. The voxel is reconstructed by: projecting the voxel onto each imaging projection in the set according to an acquisition geometry, extracting adjacent pixels around each projected voxel, feeding the extracted adjacent-pixel data to the regression model to produce a reconstructed value of the voxel, and repeating the reconstruction for each voxel to be reconstructed to produce a reconstructed image.
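The per-voxel loop described above can be sketched as follows. This is a hypothetical stand-in: the parallel-beam geometry rotating about the z-axis, the 3x3 neighborhood, and the mean-valued "model" are illustrative assumptions, not the patented regression model or its acquisition geometry.

```python
import numpy as np

def project_voxel(voxel_xyz, angle):
    """Map a 3-D voxel position to a detector pixel for one projection.
    A parallel-beam geometry about the z-axis is assumed purely for
    illustration."""
    x, y, z = voxel_xyz
    u = x * np.cos(angle) + y * np.sin(angle)  # detector column
    return int(round(u)), int(round(z))        # (column, row)

def extract_patch(projection, center, half=1):
    """Extract the (2*half+1)^2 neighborhood around a projected voxel,
    replicating edge pixels at the borders."""
    padded = np.pad(projection, half, mode="edge")
    c, r = center
    return padded[r:r + 2 * half + 1, c:c + 2 * half + 1].ravel()

def reconstruct_voxel(voxel_xyz, projections, angles, model):
    """Concatenate the neighborhoods from every projection and feed the
    feature vector to a trained regression model (any callable here)."""
    feats = np.concatenate([
        extract_patch(p, project_voxel(voxel_xyz, a))
        for p, a in zip(projections, angles)
    ])
    return model(feats)

# Stand-in "trained model" and data: two flat 8x8 projections.
projections = [np.full((8, 8), 2.0), np.full((8, 8), 2.0)]
angles = [0.0, np.pi / 2]
mean_model = lambda f: float(np.mean(f))
value = reconstruct_voxel((2, 0, 3), projections, angles, mean_model)
```

A real deployment would replace `mean_model` with the trained regressor and loop `reconstruct_voxel` over the full voxel grid.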
Imaging planning apparatus and imaging planning method
An imaging planning apparatus according to one embodiment includes processing circuitry. The processing circuitry obtains a first value of a first index related to an X-ray dose and a second value of a second index related to image quality, based on an X-ray imaging condition of a subject set in a predetermined examination. The processing circuitry displays, on a display unit, an association chart indicating an association between the first index and the second index. Within the chart, it displays an acceptable range of the first index and the second index, the acceptable range being based on information related to a diagnostic reference level corresponding to the predetermined examination, in a manner distinguished from the range outside the acceptable range, and also displays a mark at the position corresponding to the first value and the second value.
Full dose PET image estimation from low-dose PET imaging using deep learning
Emission imaging data are reconstructed to generate a low dose reconstructed image. Standardized uptake value (SUV) conversion (30) is applied to convert the low dose reconstructed image to a low dose SUV image. A neural network (46, 48) is applied to the low dose SUV image to generate an estimated full dose SUV image. Prior to applying the neural network the low dose reconstructed image or the low dose SUV image is filtered using a low pass filter (32). The neural network is trained on a set of training low dose SUV images and corresponding training full dose SUV images to transform the training low dose SUV images to match the corresponding training full dose SUV images, using a loss function having a mean square error loss component (34) and a loss component (36) that penalizes loss of image texture and/or a loss component (38) that promotes edge preservation.
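The SUV conversion and the composite training loss can be illustrated with a small numpy sketch. The texture and edge terms below (a variance mismatch and a gradient-magnitude mismatch) are plausible stand-ins with illustrative weights; the abstract does not specify the exact form of the loss components (34, 36, 38).

```python
import numpy as np

def to_suv(activity_img_bq_per_ml, injected_dose_bq, body_weight_g):
    """Standardized uptake value: tissue activity concentration divided
    by injected dose per unit body weight."""
    return activity_img_bq_per_ml * body_weight_g / injected_dose_bq

def grad_mag(img):
    """Finite-difference gradient magnitude, used as a simple edge map."""
    gy, gx = np.gradient(img)
    return np.sqrt(gx ** 2 + gy ** 2)

def composite_loss(pred, target, w_tex=0.1, w_edge=0.1):
    """Mean-square error plus a texture term (variance mismatch) and an
    edge-preservation term (edge-map mismatch). The weights and the exact
    texture/edge terms are illustrative assumptions."""
    mse = np.mean((pred - target) ** 2)
    tex = abs(np.var(pred) - np.var(target))
    edge = np.mean((grad_mag(pred) - grad_mag(target)) ** 2)
    return mse + w_tex * tex + w_edge * edge

# Illustrative check: the loss is zero for identical images and positive
# when the prediction has been smoothed (texture and edges lost).
rng = np.random.default_rng(0)
target = rng.random((16, 16))
smoothed = (target + np.roll(target, 1, axis=0)
            + np.roll(target, -1, axis=0)) / 3.0
```

The extra terms push the network away from the over-smoothed solutions that a pure MSE loss tends to produce on denoising tasks.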
3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network
A 3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network is described. A machine learning method for low-dose computed tomography (LDCT) image correction is provided. The method includes training, by a training circuitry, a neural network (NN) based, at least in part, on two-dimensional (2-D) training data. The 2-D training data includes a plurality of 2-D training image pairs. Each 2-D image pair includes one training input image and one corresponding target output image. The training includes adjusting at least one of a plurality of 2-D weights based, at least in part, on an objective function. The method further includes refining, by the training circuitry, the NN based, at least in part, on three-dimensional (3-D) training data. The 3-D training data includes a plurality of 3-D training image pairs. Each 3-D training image pair includes a plurality of adjacent 2-D training input images and at least one corresponding target output image. The refining includes adjusting at least one of a plurality of 3-D weights based, at least in part, on the plurality of 2-D weights and based, at least in part, on the objective function. The plurality of 2-D weights includes the at least one adjusted 2-D weight.
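One common way to seed 3-D convolution weights from a trained 2-D network is kernel "inflation": repeat each 2-D kernel along the new depth axis and rescale so that a depth-constant input produces the same response as before. This is an illustrative assumption only; the abstract says the 3-D weights are adjusted based on the 2-D weights but does not state the transfer rule.

```python
import numpy as np

def inflate_2d_to_3d(w2d, depth):
    """Seed a 3-D convolution kernel from a trained 2-D kernel by
    repeating it along a new depth axis and dividing by the depth, so a
    depth-constant input yields the same response as the 2-D kernel.
    The subsequent 3-D refinement stage would then fine-tune these
    weights on the 3-D training pairs."""
    return np.repeat(w2d[np.newaxis, ...], depth, axis=0) / depth

w2d = np.arange(9.0).reshape(3, 3)   # a "trained" 2-D kernel (illustrative)
w3d = inflate_2d_to_3d(w2d, depth=3)
```

Summing the inflated kernel over depth recovers the original 2-D kernel, which is the invariant this initialization preserves.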
SYSTEMS AND METHODS FOR REAL-TIME VIDEO ENHANCEMENT
A computer-implemented method is provided for improving live video quality. The method comprises: acquiring, using a medical imaging apparatus, a stream of consecutive image frames of a subject, wherein the stream is acquired with a reduced radiation dose; applying a deep learning network model to the stream of consecutive image frames to generate an image frame with improved quality; and displaying the improved image frame in real time on a display.
Method and apparatus for improved medical imaging
This invention provides a method to optimize an x-ray beam for more than one structure within the field of view. The preferred embodiment comprises a modular construction of a collimator comprising multiple materials of varying thickness. A first attenuation is performed by the first portion of the collimator to optimize a first anatomic feature and a second attenuation is performed by the second portion of the collimator to optimize a second anatomic feature.
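Each collimator segment attenuates the beam according to the Beer-Lambert law, so its material (attenuation coefficient) and thickness set an independent exposure level for the anatomic feature behind it. A minimal sketch, with illustrative coefficients and thicknesses:

```python
import numpy as np

def transmitted_intensity(i0, mu_per_cm, thickness_cm):
    """Beer-Lambert attenuation of an x-ray beam through one collimator
    segment: I = I0 * exp(-mu * t)."""
    return i0 * np.exp(-mu_per_cm * thickness_cm)

# Hypothetical two-segment collimator: a thin, lightly attenuating
# segment over dense anatomy and a thicker, denser segment over
# radiolucent anatomy, each tuned to its own feature.
i_segment_1 = transmitted_intensity(1.0, mu_per_cm=0.4, thickness_cm=0.5)
i_segment_2 = transmitted_intensity(1.0, mu_per_cm=2.0, thickness_cm=0.3)
```

Choosing (mu, t) per segment is how the modular construction delivers a different effective exposure to each structure in the field of view.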
Deep neural network for CT metal artifact reduction
A deep neural network for metal artifact reduction is described. A method for computed tomography (CT) metal artifact reduction (MAR) includes generating, by a projection completion circuitry, an intermediate CT image data based, at least in part, on input CT projection data. The intermediate CT image data is configured to include relatively fewer artifacts than an uncorrected CT image reconstructed from the input CT projection data. The method further includes generating, by an artificial neural network (ANN), CT output image data based, at least in part, on the intermediate CT image data. The CT output image data is configured to include relatively fewer artifacts compared to the intermediate CT image data. The method may further include generating, by detail image circuitry, detail CT image data based, at least in part, on input CT image data. The CT output image data is generated based, at least in part, on the detail CT image data.
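The projection-completion first stage is classically implemented by inpainting the metal trace in the sinogram before reconstructing the intermediate image. As a hedged sketch, linear interpolation along detector rows is one common choice; the patent's projection completion circuitry may use a different scheme:

```python
import numpy as np

def complete_projections(sinogram, metal_trace):
    """Replace metal-corrupted detector samples by linear interpolation
    along each detector row of the sinogram (classic sinogram
    inpainting). `metal_trace` is a boolean mask of corrupted samples."""
    out = sinogram.copy()
    cols = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_trace[i]
        if bad.any() and not bad.all():
            out[i, bad] = np.interp(cols[bad], cols[~bad],
                                    sinogram[i, ~bad])
    return out

# Illustrative check: corrupt part of a linear sinogram row, then
# restore it by interpolation.
clean = np.tile(np.linspace(0.0, 1.0, 10), (4, 1))
trace = np.zeros_like(clean, dtype=bool)
trace[1, 4:7] = True
corrupted = clean.copy()
corrupted[trace] = 99.0
completed = complete_projections(corrupted, trace)
```

Reconstructing from the completed sinogram yields the intermediate CT image, which the ANN stage then refines further.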