DEEP LEARNING-BASED IMAGE QUALITY ENHANCEMENT OF THREE-DIMENSIONAL ANATOMY SCAN IMAGES
Techniques are described for enhancing the quality of three-dimensional (3D) anatomy scan images using deep learning. According to an embodiment, a system is provided that comprises a memory that stores computer executable components, and a processor that executes the computer executable components stored in the memory. The computer executable components comprise a reception component that receives a scan image generated from 3D scan data relative to a first axis of a 3D volume, and an enhancement component that applies an enhancement model to the scan image to generate an enhanced scan image having a higher resolution relative to the scan image. The enhancement model comprises a deep learning neural network model trained on training image pairs respectively comprising a low-resolution scan image and a corresponding high-resolution scan image respectively generated relative to a second axis of the 3D volume.
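The pairing scheme described above can be sketched in plain Python. This is an illustrative assumption of how such training pairs might be assembled (the function names, the nested-list volume layout, and the row-dropping stand-in for low-resolution acquisition are all hypothetical, not the patent's implementation):

```python
def slice_along_axis(volume, axis, index):
    """Extract a 2-D slice from a 3-D volume stored as nested lists [z][y][x]."""
    if axis == 0:                      # z-slice: rows are y, columns are x
        return [row[:] for row in volume[index]]
    if axis == 1:                      # y-slice: rows are z, columns are x
        return [volume[z][index][:] for z in range(len(volume))]
    return [[volume[z][y][index] for y in range(len(volume[0]))]
            for z in range(len(volume))]   # x-slice: rows are z, columns are y

def downsample_rows(image, factor=2):
    """Simulate a low-resolution acquisition by keeping every `factor`-th row."""
    return [row[:] for row in image[::factor]]

def make_training_pairs(volume, axis=1, factor=2):
    """Pair each high-resolution slice along `axis` with a simulated
    low-resolution copy, yielding (low-res, high-res) training pairs."""
    n = {0: len(volume), 1: len(volume[0]), 2: len(volume[0][0])}[axis]
    pairs = []
    for i in range(n):
        hi = slice_along_axis(volume, axis, i)
        pairs.append((downsample_rows(hi, factor), hi))
    return pairs
```

A model trained on such pairs along one axis could then be applied to slices taken along the other axis, as the abstract describes.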
Fractal analysis of left atrium to predict atrial fibrillation recurrence
Embodiments discussed herein facilitate determination of risk of recurrence of atrial fibrillation (AF) after ablation based on fractal features. One example embodiment is configured to generate a binary mask of at least a portion of a CT scan of a heart of a patient with AF; compute one or more radiomic fractal-based features from at least one of the binary mask or the portion of the CT scan; provide the one or more radiomic fractal-based features to a trained machine learning (ML) classifier; and receive a prediction from the trained ML classifier of whether or not the AF will recur after AF ablation, wherein the prediction is based at least in part on the one or more radiomic fractal-based features.
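One common fractal-based radiomic feature is the box-counting dimension of a binary mask. The sketch below is a minimal stdlib-only illustration of that idea (the patent does not specify which fractal features are computed, so the choice of box-counting and all names here are assumptions):

```python
import math

def box_count(mask, box):
    """Count boxes of side `box` that contain at least one foreground pixel."""
    h, w = len(mask), len(mask[0])
    count = 0
    for by in range(0, h, box):
        for bx in range(0, w, box):
            if any(mask[y][x]
                   for y in range(by, min(by + box, h))
                   for x in range(bx, min(bx + box, w))):
                count += 1
    return count

def box_counting_dimension(mask, boxes=(1, 2, 4, 8)):
    """Least-squares slope of log(count) vs. log(1/box size):
    an estimate of the mask's fractal dimension."""
    xs = [math.log(1.0 / b) for b in boxes]
    ys = [math.log(box_count(mask, b)) for b in boxes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A completely filled region yields a dimension near 2, while an irregular boundary yields a non-integer value; features like this would be the classifier inputs the abstract describes.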
Medical object detection and identification
An approach is disclosed for improving the determination of the significant slice associated with a tumor from a volume of medical images. The approach is based on an annotation of the tumor range and of the slice index in which the tumor appears to have the largest area. The approach runs inference with a tumor growth classifier on a sliding window over the volume slices and creates a discrete integral function from the classifier predictions. The approach applies post-processing to the discrete integral function, which can include a smoothing function and a bias correction. The approach then selects the slice index of maximum value from the post-processing step.
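The sliding-window pipeline above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the signed growth scores (positive where tumor area grows through the window, negative where it shrinks, so the integral peaks at the largest-area slice), the moving-average smoother standing in for post-processing, and all names are assumptions:

```python
def significant_slice_index(window_scores, smooth_radius=1):
    """Pick the significant slice from per-window classifier predictions:
    integrate the signed scores, smooth, and take the argmax."""
    # Discrete integral (cumulative sum) of the per-window predictions.
    integral, total = [], 0.0
    for score in window_scores:
        total += score
        integral.append(total)
    # Post-processing: simple moving-average smoothing.
    smoothed = []
    for i in range(len(integral)):
        lo = max(0, i - smooth_radius)
        hi = min(len(integral), i + smooth_radius + 1)
        smoothed.append(sum(integral[lo:hi]) / (hi - lo))
    # Select the slice index of maximum smoothed value.
    return max(range(len(smoothed)), key=smoothed.__getitem__)
```

For scores `[1, 1, 1, -1, -1]` the integral is `[1, 2, 3, 2, 1]` and the selected index is 2, the slice where the integrated growth peaks.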
Method and data processing system for providing respiratory information
A method is for providing respiratory information. In an embodiment, the method includes receiving imaging data relating to a lung; calculating a perfusion fraction for each respective region of a set of regions of the lung, based on the imaging data; calculating a respective ventilation value for each respective region of the set of regions of the lung based on the imaging data; calculating a weighted average of respective ventilation values across all respective regions of the set of regions of the lung, wherein for each respective region of the set of regions of the lung, the respective ventilation value of the respective region is weighted with the perfusion fraction of the respective region; generating the respiratory information based on the weighted average of the respective ventilation values; and providing the respiratory information.
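The weighted average at the core of the method reduces to a perfusion-weighted sum over regions. A minimal sketch, assuming the perfusion fractions sum to one (the function name is illustrative):

```python
def perfusion_weighted_ventilation(perfusion_fractions, ventilation_values):
    """Average regional ventilation values, each weighted by the region's
    perfusion fraction (fractions assumed to sum to 1)."""
    assert len(perfusion_fractions) == len(ventilation_values)
    return sum(p * v for p, v in zip(perfusion_fractions, ventilation_values))
```

For example, regions with perfusion fractions 0.5, 0.3, 0.2 and ventilation values 1.0, 2.0, 3.0 yield 0.5·1.0 + 0.3·2.0 + 0.2·3.0 = 1.7, weighting ventilation toward the best-perfused regions.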
Predictive use of quantitative imaging
The present disclosure provides systems and methods for predicting a disease state of a subject using ultrasound imaging and ancillary information to the ultrasound imaging. At least two quantitative measurements of a subject, including at least one measurement taken using ultrasound imaging, can be identified as part of quantified information. One of the quantitative measurements can be compared to a first predetermined standard, included as part of ancillary information to the quantified information, in order to identify a first initial value. Further, another of the quantitative measurements can be compared to a second predetermined standard, included as part of the ancillary information, in order to identify a second initial value. Subsequently, the quantified information can be correlated with the ancillary information using the first initial value and the second initial value to determine a final value that is predictive of a disease state of the subject.
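One plausible reading of "compare to a predetermined standard, then combine into a final value" is standardization followed by a weighted combination. The sketch below is purely an assumption for illustration; the patent does not specify the comparison or the correlation, and every name and formula here is hypothetical:

```python
def initial_value(measurement, standard):
    """Deviation of a measurement from its predetermined reference standard,
    expressed as a z-score (an assumed form of 'comparison')."""
    return (measurement - standard["mean"]) / standard["sd"]

def disease_risk_score(us_measurement, ancillary_measurement,
                       us_standard, ancillary_standard, weights=(0.5, 0.5)):
    """Combine the two standardized initial values into one final value
    (a weighted sum standing in for the patent's correlation step)."""
    z1 = initial_value(us_measurement, us_standard)
    z2 = initial_value(ancillary_measurement, ancillary_standard)
    return weights[0] * z1 + weights[1] * z2
```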
Method for detecting image of object using convolutional neural network
The present application relates to a method for detecting an object image using a convolutional neural network. First, feature images are obtained by convolution kernels, and the image of the object under detection is located within the feature images using a default box and a boundary box. By comparison with sample images, the detected object image is classified as an esophageal cancer image or a non-esophageal cancer image. An input image from the image capturing device is thus analyzed by the convolutional neural network to judge whether it is an esophageal cancer image, helping the doctor interpret the detected object image.
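Default-box detectors typically match fixed anchor boxes to a predicted boundary box by intersection-over-union (IoU). The sketch below illustrates that matching step only; the IoU threshold, the function names, and the reduction of the patent's pipeline to this one step are assumptions:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_default_boxes(default_boxes, boundary_box, threshold=0.5):
    """Indices of default boxes overlapping the predicted boundary box
    strongly enough to be treated as detections."""
    return [i for i, d in enumerate(default_boxes)
            if iou(d, boundary_box) >= threshold]
```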
Standardized coronary artery disease metric
A computing system (118) includes a computer readable storage medium (122) with computer executable instructions (124), including a biophysical simulator (126), and a reference location (128), and a processor (120) configured to execute the biophysical simulator and simulate a reference FFR value at a predetermined location along a segmented coronary vessel indicated by the reference location. A computer readable storage medium encoded with computer readable instructions, which, when executed by a processor of a computing system, causes the processor to simulate a reference FFR value at a predetermined location along a segmented coronary vessel indicated by a predetermined reference location. A method including simulating a reference FFR value at a predetermined location along a segmented coronary vessel indicated by a predetermined reference location.
Digital unpacking of CT imagery
An improvement to automatic classifying of the threat level of objects in CT scan images of container content is described. Methods include automatic identification of object images whose threat level cannot be classified, and displaying a de-cluttered image on an operator's display to improve operator efficiency. The de-cluttered image includes, as subject images, the non-classifiable threat level object images. An improvement to resolution of non-classifiable threat objects includes computer-directed prompts for the operator to enter information regarding the subject image and, based on that information, identifying the object type. An improvement to automatic classifying of threat levels includes incrementally updating the classifier using the determined object type and the threat level of that object type.
Systems and methods for interpolation with resolution preservation
Various methods and systems are provided for artifact reduction with resolution preservation. In one example, a method includes obtaining projection data of an imaging subject, identifying a metal-containing region in the projection data, interpolating the metal-containing region to generate interpolated projection data, extracting high frequency content information from the projection data in the metal-containing region, adding the extracted high frequency content information to the interpolated projection data to generate adjusted projection data, and reconstructing one or more diagnostic images from the adjusted projection data.
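The interpolate-then-restore idea can be shown on a 1-D projection profile. The sketch below is an illustrative assumption (linear interpolation across the region and a moving-average low-pass filter stand in for whatever interpolation and frequency decomposition the method actually uses; names are hypothetical):

```python
def interpolate_region(signal, start, end):
    """Linearly interpolate samples in [start, end) from the boundary
    samples signal[start-1] and signal[end] (requires start >= 1, end < len)."""
    out = signal[:]
    left, right = signal[start - 1], signal[end]
    n = end - start + 1
    for k in range(start, end):
        t = (k - start + 1) / n
        out[k] = left * (1 - t) + right * t
    return out

def reduce_metal_artifact(projection, start, end, kernel=3):
    """Interpolate the metal-containing region, then add back the original
    high-frequency content (sample minus its moving-average low-pass)."""
    interpolated = interpolate_region(projection, start, end)
    r = kernel // 2
    for k in range(start, end):
        lo, hi = max(0, k - r), min(len(projection), k + r + 1)
        low_pass = sum(projection[lo:hi]) / (hi - lo)
        interpolated[k] += projection[k] - low_pass
    return interpolated
```

Re-injecting the high-frequency residual is what distinguishes this from plain interpolation: edges and fine detail that the interpolation smoothed away are restored before reconstruction.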
3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network
A 3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network is described. A machine learning method for low dose computed tomography (LDCT) image correction is provided. The method includes training, by a training circuitry, a neural network (NN) based, at least in part, on two-dimensional (2-D) training data. The 2-D training data includes a plurality of 2-D training image pairs. Each 2-D image pair includes one training input image and one corresponding target output image. The training includes adjusting at least one of a plurality of 2-D weights based, at least in part, on an objective function. The method further includes refining, by the training circuitry, the NN based, at least in part, on three-dimensional (3-D) training data. The 3-D training data includes a plurality of 3-D training image pairs. Each 3-D training image pair includes a plurality of adjacent 2-D training input images and at least one corresponding target output image. The refining includes adjusting at least one of a plurality of 3-D weights based, at least in part, on the plurality of 2-D weights and based, at least in part, on the objective function. The plurality of 2-D weights includes the at least one adjusted 2-D weight.
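One standard way to base 3-D weights on trained 2-D weights is "inflation": replicate each 2-D kernel along the new depth axis and rescale. Whether the patent uses exactly this initialization is not stated, so the sketch below is a hedged illustration of the general technique:

```python
def inflate_2d_kernel(kernel_2d, depth):
    """Build a 3-D convolution kernel from a trained 2-D kernel by replicating
    it `depth` times along the new axis and dividing by `depth`, so the 3-D
    filter's response to an input constant along depth matches the 2-D one."""
    h, w = len(kernel_2d), len(kernel_2d[0])
    return [[[kernel_2d[y][x] / depth for x in range(w)]
             for y in range(h)]
            for _ in range(depth)]
```

The inflated kernels would then serve as the starting point for the 3-D refinement stage described in the abstract, rather than training the 3-D network from scratch.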