Systems and methods for controlling imaging
A method for controlling a medical device may be provided. The method may include obtaining, via one or more cameras, first data regarding a first motion of a subject in an examination space of the medical device. The method may include obtaining, via one or more radars, second data regarding a second motion of the subject. The method may further include generating, based on the first data and the second data, a control signal for controlling the medical device to scan at least a part of the subject.
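The abstract describes fusing camera-derived and radar-derived motion data into a control signal. A minimal sketch of one plausible fusion scheme follows; the function names, weights, and the threshold rule are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: combine per-frame motion magnitudes from a camera and a
# radar into a scan/hold control signal. All names and values are assumed.

def fuse_motion(camera_motion, radar_motion, w_camera=0.5, w_radar=0.5):
    """Weighted average of two per-frame motion magnitudes."""
    return [w_camera * c + w_radar * r
            for c, r in zip(camera_motion, radar_motion)]

def control_signal(fused, threshold=0.3):
    """Emit 'scan' while the subject is sufficiently still, else 'hold'."""
    return ["scan" if m < threshold else "hold" for m in fused]

camera = [0.1, 0.2, 0.8, 0.1]   # e.g. optical-flow magnitude per frame
radar = [0.2, 0.1, 0.9, 0.2]    # e.g. Doppler-derived magnitude per frame
commands = control_signal(fuse_motion(camera, radar))
```

The two modalities are complementary: the camera sees surface motion while the radar can sense motion (e.g. respiration) through covers, so a simple weighted combination already suppresses single-sensor noise.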
MULTISCALE MODELING TO DETERMINE MOLECULAR PROFILES FROM RADIOLOGY
Systems and methods for analyzing pathologies utilizing quantitative imaging are presented herein. Advantageously, the systems and methods of the present disclosure utilize a hierarchical analytics framework that identifies and quantifies biological properties/analytes from imaging data and then identifies and characterizes one or more pathologies based on the quantified biological properties/analytes. This hierarchical approach, which uses imaging to examine underlying biology as an intermediary to assessing pathology, provides many analytic and processing advantages over systems and methods that are configured to determine and characterize pathology directly from the underlying imaging data.
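The two-stage structure described above can be sketched as two separate functions, one per level of the hierarchy. The analyte names, thresholds, and classification rule below are invented for illustration only.

```python
# Toy sketch of the hierarchical idea: first quantify biological analytes
# from image-derived statistics, then characterize pathology from the
# analytes rather than directly from the pixels.

def quantify_analytes(voxel_stats):
    """Stage 1: map raw image statistics to biological properties
    (the analytes and cutoffs here are assumed, not from the patent)."""
    return {
        "calcification": voxel_stats["mean_hu"] > 130,
        "lipid_core": voxel_stats["low_hu_fraction"] > 0.2,
    }

def characterize_pathology(analytes):
    """Stage 2: characterize pathology from the quantified analytes."""
    if analytes["lipid_core"] and not analytes["calcification"]:
        return "high-risk plaque"
    return "stable plaque"

result = characterize_pathology(quantify_analytes(
    {"mean_hu": 60.0, "low_hu_fraction": 0.35}))
```

Keeping the analyte layer explicit is what yields the claimed advantage: each stage can be validated and updated independently, instead of one opaque image-to-diagnosis mapping.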
SYSTEMS AND METHODS FOR IMAGE SEGMENTATION
Systems and methods for image segmentation are provided. The systems may obtain a target image and a template image relating to the target image. The template image may correspond to an initial mask reflecting initial segmentations of the template image. The systems may determine a first transformation and an intermediate template image by preliminarily registering the template image to the target image and generate an intermediate mask based on the initial mask and the first transformation. The systems may determine, based on the intermediate mask, one or more first regions from the target image and one or more second regions from the intermediate template image. The systems may determine a second transformation by registering each of the one or more second regions to a corresponding first region. The systems may determine a target mask according to which the target image can be segmented based on one or more second transformations.
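The coarse-then-regional registration described above can be illustrated on 1-D signals, with an integer shift standing in for each transformation. The matching criterion (sum of squared differences) and all names are assumptions for the sketch.

```python
# Hedged 1-D sketch of the two-stage registration: a first, global
# transformation aligns the whole template, then each region is
# registered separately for a refined, local transformation.

def best_shift(target, template, max_shift=5):
    """Integer shift of `template` that best matches `target` (SSD)."""
    def ssd(shift):
        total = 0.0
        for i, t in enumerate(target):
            j = i - shift
            total += (t - template[j]) ** 2 if 0 <= j < len(template) else t ** 2
        return total
    return min(range(-max_shift, max_shift + 1), key=ssd)

def shift_signal(sig, s):
    return [sig[i - s] if 0 <= i - s < len(sig) else 0.0
            for i in range(len(sig))]

template = [0, 0, 1, 2, 1, 0, 0, 0]
target = [0, 0, 0, 1, 2, 1, 0, 0]     # template moved right by 1

first = best_shift(target, template)            # first transformation
intermediate = shift_signal(template, first)    # intermediate template
# Region-wise refinement: register each half separately
second = [best_shift(target[:4], intermediate[:4]),
          best_shift(target[4:], intermediate[4:])]
```

In the patent's setting the shifts would be full spatial transformations and the regions would come from the intermediate mask; the structure (global pre-alignment, then per-region refinement, then mask transfer) is the same.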
QUANTIFICATION AND VISUALIZATION OF MYOCARDIUM FIBROSIS OF HUMAN HEART
Embodiments of the present disclosure relate to a method and a device for processing a first set of volumetric image data comprising cross-sectional images of a myocardium and displaying a second set of volumetric image data of the myocardium. A curved-plane to rectangular-plane transformation of cross-sectional images of the myocardium of a human heart is proposed. After the transformation, a combined and reconstructed set of myocardium images is superimposed with a modified Bull's Eye View (BEV) map and corresponding parameters indicating the extent of fibrosis to obtain a second set of volumetric image data of the myocardium. In addition to quantifying and displaying the extent of fibrosis, the proposed solution preserves neighborhood and adjacency criteria of abnormal tissues of the myocardium walls of the human heart.
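A bull's-eye style display assigns each myocardial sample, given by its circumferential angle and short-axis slice level, to a sector, and then reports a per-sector fibrosis fraction. The sketch below uses a simplified sectorization (6 sectors per ring, 3 rings) rather than the patent's modified BEV map; all names are illustrative.

```python
# Hedged sketch: map (angle, slice level) samples to bull's-eye sectors
# and compute the fraction of fibrotic samples per sector. The sector
# layout is a simplification, not the patent's modified BEV map.

def bullseye_segment(angle_deg, slice_level, sectors_per_ring=6):
    """Sector index for a sample at a circumferential angle (degrees)
    and short-axis slice level (0 = apical ... 2 = basal)."""
    sector = int((angle_deg % 360) / (360 / sectors_per_ring))
    return slice_level * sectors_per_ring + sector

def fibrosis_extent(samples):
    """Fraction of samples per sector flagged as fibrotic.
    samples: iterable of (angle_deg, slice_level, is_fibrotic)."""
    counts, fibrotic = {}, {}
    for angle, level, is_fibrotic in samples:
        seg = bullseye_segment(angle, level)
        counts[seg] = counts.get(seg, 0) + 1
        fibrotic[seg] = fibrotic.get(seg, 0) + (1 if is_fibrotic else 0)
    return {seg: fibrotic[seg] / counts[seg] for seg in counts}

extent = fibrosis_extent([(10, 0, True), (20, 0, False), (100, 2, True)])
```

Because adjacent angles and slices map to adjacent sectors, this kind of mapping is what lets the display preserve the neighborhood and adjacency relations of abnormal tissue mentioned in the abstract.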
Systems and methods for image correction
The present disclosure provides a system and method for motion field generation and image correction. The method may include obtaining a plurality of first sets of magnetic resonance (MR) image data of an object generated based on a plurality of first sets of imaging sequences. The method may include obtaining a motion curve of the object. The method may include obtaining positron emission tomography (PET) image data of the object generated in a scanning time period. The method may include generating one or more target motion fields corresponding to the scanning time period based on the plurality of first sets of MR image data and the motion curve. The method may include generating one or more corrected PET images by correcting, based on the one or more target motion fields, the PET image data.
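One plausible way to combine MR-derived motion fields with a motion curve, sketched below, is to tag each MR field with a reference motion amplitude and, for each PET time bin, select the field whose amplitude best matches the curve at that time. This nearest-amplitude lookup is an assumption for illustration; the patent does not specify the selection rule.

```python
# Hedged sketch: pick a target motion field for each PET time bin by
# matching the motion-curve amplitude at that time against the reference
# amplitudes of the MR-derived fields. All names/values are assumed.

def target_motion_fields(mr_fields, pet_bin_times, motion_curve):
    """mr_fields: {reference_amplitude: field},
    motion_curve: {time: amplitude}. Returns one field per PET bin."""
    out = []
    for t in pet_bin_times:
        amp = motion_curve[t]
        best = min(mr_fields, key=lambda a: abs(a - amp))
        out.append(mr_fields[best])
    return out

fields = {0.0: "field_exhale", 1.0: "field_inhale"}
curve = {0: 0.1, 1: 0.9, 2: 0.4}
targets = target_motion_fields(fields, [0, 1, 2], curve)
```

The selected fields would then be applied bin by bin to deform the PET data to a common reference position before summation, which is the correction step the abstract describes.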
Systems and methods for correcting mismatch induced by respiratory motion in positron emission tomography image reconstruction
The disclosure relates to PET imaging systems and methods. The systems may obtain a plurality of gated PET images of a subject and a CT image acquired by performing a spiral CT scan on the subject. Each gated PET image may include a plurality of sub-gated PET images. The CT image may include a plurality of sub-CT images, each of which corresponds to one of the plurality of sub-gated PET images. The systems may determine a target motion vector field between a target physiological phase and a physiological phase of the CT image based on the plurality of sub-gated PET images and the plurality of sub-CT images. The systems may reconstruct an attenuation-corrected PET image corresponding to the target physiological phase based on the target motion vector field, the CT image, and the PET data used to reconstruct the plurality of gated PET images.
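Determining a motion vector field between the CT phase and a target phase typically involves composing phase-to-phase displacement fields. A 1-D integer sketch of field composition follows; the data and clamping rule are illustrative assumptions.

```python
# Hedged 1-D sketch: compose two displacement fields so that a field from
# the CT image's phase to a target phase can be built from intermediate
# phase-to-phase fields. Values are toy integers for illustration.

def compose(d_ab, d_bc):
    """Compose 1-D integer displacement fields:
    d_ac(x) = d_ab(x) + d_bc(x + d_ab(x)), clamped at the borders."""
    n = len(d_ab)
    out = []
    for x in range(n):
        y = min(max(x + d_ab[x], 0), n - 1)   # where x lands in phase B
        out.append(d_ab[x] + d_bc[y])
    return out

d1 = [1, 1, 1, 0, 0]     # CT phase -> intermediate phase
d2 = [0, 1, 1, 1, 0]     # intermediate phase -> target phase
d_target = compose(d1, d2)
```

With the composed field in hand, the CT-derived attenuation map can be warped to the target physiological phase before attenuation-corrected reconstruction, which removes the respiratory mismatch the title refers to.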
Systems and methods for artificial intelligence-based image analysis for cancer assessment
Presented herein are systems and methods that provide for automated analysis of medical images to determine a predicted disease status (e.g., prostate cancer status) and/or a value corresponding to the predicted risk of the disease status for a subject. The approaches described herein leverage artificial intelligence (AI) to analyze intensities of voxels in a functional image, such as a PET image, and determine a risk and/or likelihood that a subject's disease, e.g., cancer, is aggressive. The approaches described herein can provide predictions of whether a subject who presents with localized disease has and/or will develop aggressive disease, such as metastatic cancer. These predictions are generated in a fully automated fashion and can be used alone, or in combination with other cancer diagnostic metrics (e.g., to corroborate predictions and assessments or highlight potential errors). As such, they represent a valuable tool in support of improved cancer diagnosis and treatment.
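Voxel-intensity-based risk scoring can be illustrated with a deliberately simple stand-in: normalize lesion uptake by a background reference and squash the ratio through a logistic function. The weights, the 2x cutoff, and the aggregation by maximum are all invented for this sketch and are not the patent's model.

```python
import math

# Hedged sketch: a toy lesion risk score from functional-image voxel
# intensities. The normalization and logistic cutoff are assumptions;
# the actual approach uses trained AI models, not a fixed formula.

def lesion_risk(voxels, background):
    """Peak lesion intensity relative to a background reference,
    mapped to (0, 1) via a logistic with an assumed 2x cutoff."""
    ratio = max(voxels) / background
    return 1.0 / (1.0 + math.exp(-(ratio - 2.0)))

risk = lesion_risk([1.2, 3.5, 4.8], background=1.6)
```

In a learned system this hand-set formula would be replaced by a neural network, but the interface is the same: voxel intensities in, a calibrated risk value out, usable alongside other diagnostic metrics.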
APPARATUS AND METHODS FOR UNSUPERVISED IMAGE DENOISING USING DOUBLE OVER-PARAMETERIZATION
A method, apparatus, and non-transitory computer-readable storage medium for image denoising are provided, whereby a deep image prior (DIP) neural network is trained to produce a denoised image. The second medical image is input to the DIP neural network, and a converging noise is combined with the output of the DIP network during training such that the converging noise combined with the output of the DIP network approximates the first medical image at the end of the training, wherein the output of the DIP network represents the denoised image.
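The idea of fitting "signal + converging noise" to the noisy observation can be shown with a toy stand-in rather than an actual DIP network: parameterize the signal as an elementwise product x_i = a_i*b_i (a proxy for the network output) and the noise as n_i = g_i**2 - h_i**2 (a common double over-parameterization of a signed noise term), then run plain gradient descent so that x + n converges to the observation y. This is a sketch of the over-parameterization trick only, not the patent's method.

```python
# Hedged toy: double over-parameterization on a 4-sample "image".
# x_i = a_i*b_i models the signal; n_i = g_i**2 - h_i**2 models the
# converging noise; gradient descent drives x + n toward y, so x is the
# denoised estimate and n absorbs the residual noise.

y = [1.0, 1.1, 5.0, 0.9]            # noisy observation (one outlier)
a = [0.5] * 4; b = [0.5] * 4        # signal parameters
g = [0.1] * 4; h = [0.1] * 4        # noise parameters (small init)
lr = 0.01

def residual():
    return [a[i] * b[i] + g[i] ** 2 - h[i] ** 2 - y[i] for i in range(4)]

loss_init = sum(r * r for r in residual())
for _ in range(5000):
    r = residual()
    for i in range(4):
        a[i] -= lr * 2 * r[i] * b[i]
        b[i] -= lr * 2 * r[i] * a[i]
        g[i] -= lr * 4 * r[i] * g[i]
        h[i] -= lr * -4 * r[i] * h[i]
loss_final = sum(r * r for r in residual())
```

In the full method the product a*b is replaced by a DIP network's output, and early stopping plus the implicit bias of the over-parameterized noise term keep the noise estimate from swallowing the signal.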
IMAGE ALIGNMENT APPARATUS, METHOD, AND PROGRAM
An image alignment apparatus includes at least one processor, and the processor derives, for each of first and second three-dimensional images each including a plurality of tomographic images and a common structure, first and second three-dimensional coordinate information that define an end part of the structure in a direction intersecting the tomographic image. The processor aligns the first three-dimensional image and the second three-dimensional image by using the first and second three-dimensional coordinate information to align the common structure included in each of the first three-dimensional image and the second three-dimensional image at least in the direction intersecting the tomographic image.
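Aligning along the direction intersecting the tomographic images amounts to finding, in each volume, the slice index where the common structure's end part lies and shifting one volume by the difference. The sketch below treats a volume as a list of 2-D slices; the "first slice containing the structure" rule is an assumption standing in for the derived coordinate information.

```python
# Hedged sketch: align two slice stacks along the slice axis using the
# end part of a common structure. Names and the end-detection rule are
# illustrative assumptions.

def structure_end(volume):
    """Index of the first slice (along the axis intersecting the
    tomographic images) in which the structure appears."""
    for z, sl in enumerate(volume):
        if any(any(v > 0 for v in row) for row in sl):
            return z
    return None

def align(vol_a, vol_b):
    """Shift vol_b along the slice axis so the end parts line up."""
    dz = structure_end(vol_a) - structure_end(vol_b)
    empty = [[0] * len(vol_b[0][0]) for _ in range(len(vol_b[0]))]
    if dz >= 0:
        return [empty] * dz + vol_b[:len(vol_b) - dz]
    return vol_b[-dz:] + [empty] * (-dz)

blank = [[0, 0], [0, 0]]; organ = [[1, 1], [1, 1]]
vol_a = [blank, blank, organ, organ]   # structure starts at slice 2
vol_b = [blank, organ, organ, blank]   # structure starts at slice 1
aligned = align(vol_a, vol_b)
```

Aligning on the structure's end coordinates rather than on full-volume similarity is what makes the approach robust when the two acquisitions cover different slice ranges.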
METHOD AND APPARATUS FOR AUTOMATED DETECTION OF LANDMARKS FROM 3D MEDICAL IMAGE DATA BASED ON DEEP LEARNING
A method for automated detection of landmarks from 3D medical image data using deep learning according to the present inventive concept includes receiving a 3D volume medical image, generating a 2D intensity value projection image based on the 3D volume medical image, automatically detecting an initial anatomical landmark using a first convolutional neural network based on the 2D intensity value projection image, generating a 3D volume area of interest based on the initial anatomical landmark, and automatically detecting a detailed anatomical landmark using a second convolutional neural network, different from the first convolutional neural network, based on the 3D volume area of interest.
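The first step, reducing the 3-D volume to a 2-D intensity value projection image for the coarse CNN, can be sketched directly. Maximum-intensity projection is used below as one common choice; the abstract does not mandate a particular projection.

```python
# Hedged sketch: collapse a 3-D volume (list of 2-D slices) into a 2-D
# intensity projection image by taking the maximum along the projection
# axis. This 2-D image would feed the first-stage CNN.

def intensity_projection(volume):
    depth = len(volume)
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[max(volume[z][r][c] for z in range(depth))
             for c in range(cols)] for r in range(rows)]

volume = [
    [[0, 1], [2, 0]],
    [[3, 0], [0, 4]],
]
mip = intensity_projection(volume)
```

The coarse landmark found on this 2-D image then defines a small 3-D area of interest, so the second, more expensive 3-D CNN only ever sees a fraction of the original volume.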