Patent classifications
G06T2211/432
ACCESSIBLE NEURAL NETWORK IMAGE PROCESSING WORKFLOW
Improved (e.g., high-throughput, low-noise, and/or low-artifact) X-ray microscopy images are achieved using a deep neural network trained via an accessible workflow. The workflow involves selection of a desired improvement factor (x), which is used to automatically partition supplied data into two or more subsets for neural network training. The neural network is trained by generating reconstructed volumes for each of the subsets. The neural network can be trained to take projection images or reconstructed volumes as input and to output improved projection images or improved reconstructed volumes, respectively. Once trained, the neural network can be applied to the training data and/or subsequent data—optionally collected at a higher throughput—to ultimately achieve improved de-noising and/or other artifact reduction in the reconstructed volume.
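The partitioning step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the array shapes, the angular interleaving, and the function name are all assumptions.

```python
# Hypothetical sketch: partition a projection series into x subsets for
# self-supervised training. The interleaved angular split is an assumed
# strategy, chosen so each subset still covers the full angular range.
import numpy as np

def partition_projections(projections: np.ndarray, x: int) -> list:
    """Split a (n_angles, h, w) projection stack into x interleaved subsets.

    Each subset keeps every x-th projection angle; reconstructing a volume
    per subset yields x independent (noisier, faster-scan) volumes of the
    same object that can serve as training pairs for the network.
    """
    return [projections[i::x] for i in range(x)]

projs = np.random.rand(360, 8, 8)   # 360 projection angles of an 8x8 detector
subsets = partition_projections(projs, 4)
```

Each of the four subsets here corresponds to a 4x faster scan of the same object, which is the sense in which the improvement factor x drives the split.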
VISUALIZING AND EVALUATING 3D CROSS-SECTIONS
Methods, systems, and computer-readable media for generating a cross-section of a 3D model are disclosed. An example method includes determining a cross-section plane intersecting the 3D model, performing ray-tracing by passing each of a plurality of rays through a corresponding pixel of a viewing plane such that each ray intersects the cross-section plane, determining one or more rays that are within a threshold distance of the 3D model at their respective points of intersection with the cross-section plane, and highlighting pixels corresponding to the determined rays.
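The highlighting logic in this abstract can be sketched in a few lines. The sketch below is illustrative only: it assumes orthographic rays, a cutting plane at z = 0, and a unit sphere standing in for an arbitrary 3D model; all names are hypothetical.

```python
# Cast one ray per viewing-plane pixel, intersect it with the cross-section
# plane z = 0, and highlight pixels whose intersection point lies within a
# threshold distance of the model surface (here a unit sphere at the origin).
import numpy as np

def model_distance(points: np.ndarray) -> np.ndarray:
    # Distance from each point to the surface of a unit sphere at the origin.
    return np.abs(np.linalg.norm(points, axis=-1) - 1.0)

def highlight_mask(res: int, threshold: float) -> np.ndarray:
    # Orthographic rays along -z from a viewing plane above the model; each
    # ray meets the cutting plane z = 0 directly below its pixel.
    xs = np.linspace(-2, 2, res)
    px, py = np.meshgrid(xs, xs)
    hits = np.stack([px, py, np.zeros_like(px)], axis=-1)  # plane intersections
    return model_distance(hits) < threshold                # pixels to highlight

mask = highlight_mask(64, 0.05)
```

The highlighted pixels trace the curve where the cutting plane meets the model surface, which is exactly the cross-section outline the method aims to visualize.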
Apparatus and method combining deep learning (DL) with an X-ray computed tomography (CT) scanner having a multi-resolution detector
A method and apparatus are provided that use a deep learning (DL) network together with a multi-resolution detector to perform X-ray projection imaging, providing resolution similar to a single-resolution detector but at lower cost and with less demand on the communication bandwidth between the rotating and stationary parts of an X-ray gantry. The DL network is trained using a training dataset that includes input data and target data. The input data includes projection data acquired using a multi-resolution detector, and the target data includes projection data acquired using a single-resolution, high-resolution detector. Thus, the DL network is trained to improve the resolution of projection data acquired using a multi-resolution detector. Further, the DL network can be trained to additionally correct other aspects of the projection data (e.g., noise and artifacts).
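The training-pair construction described here can be sketched as follows. The detector geometry (a high-resolution central band with 2x2-binned outer regions) and the binning factor are assumptions for illustration, not the claimed detector layout.

```python
# Build one (input, target) training pair: the target is a uniformly
# high-resolution projection; the input simulates a multi-resolution
# detector by 2x2-binning the outer columns while keeping a central
# high-resolution band untouched.
import numpy as np

def simulate_multires(proj: np.ndarray, center_cols: int) -> np.ndarray:
    h, w = proj.shape
    lo = w // 2 - center_cols // 2
    hi = lo + center_cols
    out = proj.copy()
    for band in (slice(0, lo), slice(hi, w)):
        block = proj[:, band].reshape(h // 2, 2, -1, 2)
        binned = block.mean(axis=(1, 3))                          # 2x2 binning
        out[:, band] = np.repeat(np.repeat(binned, 2, 0), 2, 1)   # upsample back
    return out

target = np.random.rand(16, 16)             # high-resolution projection (target)
network_input = simulate_multires(target, 8)  # multi-resolution version (input)
```

The network then learns the mapping from `network_input` back to `target`, i.e., from multi-resolution to single-resolution projection data.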
Implicit Neural Representation Learning with Prior Embedding for Sparsely Sampled Image Reconstruction and Other Inverse Problems
Image reconstruction is an inverse problem that solves for a computational image based on sampled sensor measurements. Sparsely sampled image reconstruction poses additional challenges due to the limited number of measurements. In this work, we propose an implicit Neural Representation learning methodology with Prior embedding (NeRP) to reconstruct a computational image from sparsely sampled measurements. The method differs fundamentally from previous deep learning-based image reconstruction approaches in that NeRP exploits the internal information in an image prior and the physics of the sparsely sampled measurements to produce a representation of the unknown subject. No large-scale training data is required to train NeRP, only a prior image and the sparsely sampled measurements. In addition, we demonstrate that NeRP is a general methodology that generalizes across imaging modalities such as CT and MRI. We also show that NeRP can robustly capture the subtle yet significant image changes required for assessing tumor progression.
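The two-stage structure of the method (prior embedding, then fitting to sparse measurements) can be sketched with a toy stand-in. The sketch below replaces the coordinate MLP with a directly learnable image tensor and uses a simple subsampling mask as the forward operator; all of these simplifications are assumptions for illustration.

```python
# Toy NeRP-style pipeline: stage 1 embeds the prior by fitting the
# representation to a prior image; stage 2 refines it against sparsely
# sampled measurements through a known forward operator (a sampling mask).
import numpy as np

rng = np.random.default_rng(0)
truth = rng.random((8, 8))
prior = truth + 0.3 * rng.standard_normal((8, 8))   # imperfect prior image
mask = rng.random((8, 8)) < 0.25                     # sparse sampling pattern
measurements = truth[mask]                           # sparsely sampled data

rep = np.zeros((8, 8))                               # the "representation"
for _ in range(200):                                 # stage 1: prior embedding
    rep -= 0.5 * (rep - prior)
for _ in range(200):                                 # stage 2: measurement fit
    grad = np.zeros_like(rep)
    grad[mask] = rep[mask] - measurements
    rep -= 0.5 * grad
```

After stage 2, sampled locations match the measurements while unsampled locations retain the prior, which is the essential behavior the prior embedding provides when measurements are sparse.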
Systems and methods for iterative reconstruction
The disclosure relates to systems and methods for iterative reconstruction. Raw data detected from a plurality of angles by an imaging device may be obtained. A first seed image may be generated by performing a filtered back projection on the raw data. A first air mask may be determined by performing a minimum value back projection (BP) on the raw data. One or more images may be reconstructed by performing an iterative reconstruction based on the first seed image, the first air mask, and the raw data.
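The pipeline in this abstract (seed image, air mask, iterative update on the raw data) can be sketched with simplified stand-in operators. The random system matrix, the gradient-descent seed in place of filtered back projection, and the ground-truth-derived air mask in place of minimum-value back projection are all assumptions for illustration.

```python
# Toy sketch of the claimed pipeline: a seed image starts the iteration, an
# air mask of known-empty pixels is enforced at every step, and a damped
# gradient update fits the raw data.
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_rays = 16, 64
A = rng.random((n_rays, n_pix))     # stand-in forward projector
truth = rng.random(n_pix)
air = truth < 0.2                   # stand-in for the min-value-BP air mask
truth[air] = 0.0
raw = A @ truth                     # raw data from all angles

step = 1.0 / (A * A).sum()          # conservative step size (< 1/||A||^2)
image = step * (A.T @ raw)          # crude back-projection seed (stands in for FBP)
for _ in range(50):                 # iterative reconstruction
    image += step * (A.T @ (raw - A @ image))  # fit the raw data
    image[air] = 0.0                # enforce the air mask
    image = np.clip(image, 0.0, None)          # non-negativity
```

The air mask removes known-empty pixels from the solution space each iteration, which is the role the minimum-value back projection plays in the disclosed method.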
ENCODING PROGRAM MEDIA, ENCODING METHOD, ENCODING APPARATUS, DECODING PROGRAM MEDIA, DECODING METHOD, AND DECODING APPARATUS
An encoding method includes: acquiring a first image; separating the first image into a plurality of second images by extracting a pixel in the first image after every predetermined number of pixels in each of the horizontal and vertical directions of the first image; and encoding each of the separated second images. By transmitting these pieces of encoded data, even if a packet loss occurs in one of the second images, the missing pixels can be regenerated based on the corresponding neighboring pixels in the other second images.
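The separation step can be sketched concretely. The decimation factor of 2 in each direction, the averaging-based regeneration, and all names are assumed choices for illustration.

```python
# Separate an image into four sub-images by taking every 2nd pixel in each
# direction, then simulate losing one sub-image to packet loss and
# regenerating it from the co-located pixels of the surviving sub-images.
import numpy as np

def separate(img: np.ndarray, n: int = 2) -> list:
    return [img[i::n, j::n] for i in range(n) for j in range(n)]

img = np.arange(64, dtype=float).reshape(8, 8)
subs = separate(img)

# Lossless reassembly when all sub-images arrive: interleave them back.
reassembled = np.empty_like(img)
for k, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    reassembled[i::2, j::2] = subs[k]

# If sub-image 0 is lost, approximate it from its neighbors in the others.
recovered = np.mean([subs[1], subs[2], subs[3]], axis=0)
```

Because each lost pixel has immediate neighbors in the surviving sub-images, the regeneration is a local interpolation rather than a retransmission.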
SYSTEMS AND METHODS FOR AUTOMATED SINOGRAM COMPLETION, COMBINATION, AND COMPLETION BY COMBINATION
Described herein are systems and methods for automated completion, combination, and completion by combination of sinograms. In certain embodiments, sinogram completion is based on a photographic (e.g., spectral or optical) acquisition and a CT acquisition (e.g., micro CT). In other embodiments, sinogram completion is based on two CT acquisitions. The sinogram to be completed may be truncated due to a detector crop (e.g., a center-based crop or an offset-based crop). The sinogram to be completed may be truncated due to a subvolume crop (e.g., based on a low-resolution image projected onto the sinogram).
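Completion by combination can be sketched in its simplest form. The shapes, the center-based crop, and the direct column substitution from a second acquisition are all assumptions; a real system would register and scale the two acquisitions first.

```python
# A sinogram truncated by a center-based detector crop is completed by
# substituting the missing detector columns from a second, full-field
# acquisition of the same object.
import numpy as np

rng = np.random.default_rng(2)
full = rng.random((180, 32))        # second acquisition, full detector width
truncated = full.copy()
crop = np.zeros(32, dtype=bool)
crop[8:24] = True                   # detector kept only the center columns
truncated[:, ~crop] = 0.0           # truncation zeroes the outer columns

# Keep measured columns from the truncated sinogram; fill the rest from
# the second acquisition.
completed = np.where(crop[None, :], truncated, full)
```

Reconstructing from `completed` rather than `truncated` avoids the bright-rim truncation artifacts that a cropped sinogram produces.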
Medical imaging apparatus and method for processing medical image
A medical imaging apparatus includes a data acquirer configured to acquire measured data by detecting X-rays transmitted from an X-ray source through an object, and an image processor configured to acquire an initial image based on the measured data, alternately estimate region-of-interest (ROI)-outside measured data and ROI-inside measured data based on the measured data and the initial image, and acquire a reconstructed image based on the ROI-inside measured data.
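The alternating estimation can be sketched with heavily simplified operators. The additive ray model, the smoothing used as the ROI-outside model, and the non-negativity constraint are all assumptions for illustration, not the disclosed algorithm.

```python
# Each measured ray mixes ROI-inside and ROI-outside contributions; the two
# are estimated alternately: subtract the current outside estimate to get
# the inside data, constrain it physically, then refit a smooth outside model.
import numpy as np

rng = np.random.default_rng(3)
inside_true = rng.random(8)
outside_true = rng.random(8)
measured = inside_true + outside_true        # rays cross both regions

outside = np.full(8, measured.mean() / 2)    # crude start from the initial image
for _ in range(50):
    inside = np.clip(measured - outside, 0.0, None)   # estimate ROI-inside data
    raw_outside = measured - inside                   # re-estimate ROI-outside data
    outside = np.convolve(raw_outside, np.ones(3) / 3, mode="same")  # smooth model
```

The final reconstruction would then use only the `inside` estimate, which is the sense in which the reconstructed image is based on the ROI-inside measured data.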
CLASSIFIED TRUNCATION COMPENSATION
PET/MR images are compensated with simplified adaptive algorithms for truncated parts of the body. The compensation adapts to the specific location of truncation of the body or organ in the MR image, and to attributes of the truncation in the truncated body part. Anatomical structures in a PET image that do not require any compensation are masked using an MR image with a smaller field of view. The organs that are not masked are then classified by type of anatomical structure, orientation of the structure, and type of truncation. Structure-specific algorithms are used to compensate for a truncated anatomical structure. The compensation is validated for correctness, and the ROI is filled in where voxel data are missing. Attenuation maps are generated from the compensated ROI.
PET/MR images are compensated with simplified adaptive algorithms for truncated parts of the body. The compensation adapts to a specific location of truncation of the body or organ in the MR image, and to attributes of the truncation in the truncated body part. Anatomical structures in a PET image that do not require any compensation are masked using a MR image with a smaller field of view. The organs that are not masked are then classified as types of anatomical structures, the orientation of the anatomical structures, and type of truncation. Structure specific algorithms are used to compensate for a truncated anatomical structure. The compensation is validated for correctness and the ROI is filled in where there is missing voxel data. Attenuation maps are generated from the compensated ROI.