Patent classifications
G06T2211/428
METHOD AND APPARATUS FOR DETERMINING THE POSITION OF A SURGICAL TOOL RELATIVE TO A TARGET VOLUME INSIDE AN ANIMAL BODY
The invention relates to a method for determining the position of a surgical tool relative to a target volume inside an animal body according to a pre-plan comprising the steps of i) obtaining a plurality of two-dimensional images of said target volume using imaging means, each 2D-image being represented by an image data slice I(x,y,z); ii) reconstructing from said plurality of image data slices I(x,y,z) a three-dimensional image of said target volume using transformation means, said 3D-image being represented by a volumetric image data array V(x,y,z); iii) displaying said three-dimensional image of said target volume to a user using displaying means.
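The reconstruction step ii) can be sketched as stacking the acquired 2D image data slices into a single volumetric array. This is a minimal illustration with NumPy; the slice shape, the stacking axis, and the synthetic data are assumptions for demonstration, not details from the patent.

```python
import numpy as np

def reconstruct_volume(slices):
    """Stack 2D image data slices I(x, y) acquired at successive z-positions
    into a 3D volumetric image data array V(x, y, z)."""
    return np.stack(slices, axis=-1)  # z becomes the last axis

# Three synthetic 4x4 slices stand in for acquired 2D images.
slices = [np.full((4, 4), z, dtype=np.float32) for z in range(3)]
V = reconstruct_volume(slices)  # V.shape is (4, 4, 3): x, y, z
```

In practice the transformation means would also resample and interpolate between slices; plain stacking is the simplest consistent reading of I(x,y,z) → V(x,y,z).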
System and method for image reconstruction
The disclosure relates to a system and method for image reconstruction. The method may include the steps of: obtaining raw data corresponding to radiation rays within a volume, determining a radiation ray passing a plurality of voxels, grouping the voxels into a plurality of subsets such that at least some subsets of voxels are sequentially loaded into a memory, and performing a calculation relating to the sequentially loaded voxels. The radiation ray may be determined based on the raw data. The calculation may be performed by a plurality of processing threads in a parallel hardware architecture. A processing thread may correspond to a subset of voxels.
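The grouping-and-parallel-calculation scheme can be sketched as follows: the voxels a ray passes are split into fixed-size subsets, and each subset is handed to a worker thread. The subset size, the per-voxel calculation (a simple accumulation), and the synthetic volume are illustrative assumptions.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def group_voxels(voxel_indices, subset_size):
    """Split the voxels along a ray into subsets for sequential loading."""
    return [voxel_indices[i:i + subset_size]
            for i in range(0, len(voxel_indices), subset_size)]

def process_subset(volume, subset):
    """Placeholder calculation: accumulate voxel values along a ray segment."""
    return sum(volume[idx] for idx in subset)

volume = np.arange(27, dtype=np.float32).reshape(3, 3, 3)
ray = [(0, 0, 0), (1, 1, 1), (2, 2, 2), (0, 1, 2), (2, 1, 0)]  # voxels hit
subsets = group_voxels(ray, subset_size=2)

# One processing thread per subset of voxels, as the abstract describes.
with ThreadPoolExecutor() as pool:
    partial_sums = list(pool.map(lambda s: process_subset(volume, s), subsets))
line_integral = sum(partial_sums)
```

On a GPU the same decomposition maps naturally to thread blocks, with each subset sized to fit shared memory.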
Modeling a collapsed lung using CT data
A method of modeling lungs of a patient includes acquiring computed tomography data of a patient's lungs, storing a software application within a memory associated with a computer, the computer having a processor configured to execute the software application, executing the software application to differentiate tissue located within the patient's lung using the acquired CT data, generate a 3-D model of the patient's lungs based on the acquired CT data and the differentiated tissue, apply a material property to each tissue of the differentiated tissue within the generated 3-D model, generate a mesh of the 3-D model of the patient's lungs, calculate a displacement of the patient's lungs in a collapsed state based on the material property applied to the differentiated tissue and the generated mesh of the generated 3-D model, and display a collapsed lung model of the patient's lungs based on the calculated displacement of the patient's lungs.
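The tissue-differentiation and material-property steps can be sketched by classifying CT voxels with Hounsfield-unit thresholds and mapping each tissue class to a stiffness value. The HU ranges and the material values (a made-up Young's modulus in kPa) are assumptions for demonstration; the patent does not specify them at this level.

```python
import numpy as np

# Illustrative thresholds and material properties (assumptions, not from
# the patent).
HU_THRESHOLDS = {"air": (-1100, -900), "parenchyma": (-900, -500),
                 "vessel_or_tumor": (-500, 200)}
MATERIAL_KPA = {"air": 0.0, "parenchyma": 1.5, "vessel_or_tumor": 10.0}

def differentiate_tissue(ct_hu):
    """Label each voxel with a tissue type based on its HU value."""
    labels = np.full(ct_hu.shape, "unknown", dtype=object)
    for tissue, (lo, hi) in HU_THRESHOLDS.items():
        labels[(ct_hu >= lo) & (ct_hu < hi)] = tissue
    return labels

def material_map(labels):
    """Apply a material property to each differentiated tissue voxel."""
    return np.vectorize(lambda t: MATERIAL_KPA.get(t, np.nan))(labels)

ct = np.array([[-1000.0, -700.0], [-300.0, 50.0]])  # tiny synthetic CT patch
labels = differentiate_tissue(ct)
stiffness = material_map(labels)
```

The resulting per-voxel material map is what a finite-element mesh would consume to solve for the displacement field of the collapsed state.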
SYSTEMS AND METHODS OF ON-THE-FLY GENERATION OF 3D DYNAMIC IMAGES USING A PRE-LEARNED SPATIAL SUBSPACE
A method for performing real-time magnetic resonance (MR) imaging on a subject is disclosed. A prep pulse sequence is applied to the subject to obtain a high-quality spatial subspace, and a direct linear mapping from k-space training data to subspace coordinates. A live pulse sequence is then applied to the subject. During the live pulse sequence, real-time images are constructed using a fast matrix multiplication procedure on a single instance of the k-space training readout (e.g., a single k-space line or trajectory), which can be acquired at a high temporal rate.
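The real-time step reduces to two matrix multiplies: a pre-learned spatial subspace U maps subspace coordinates to voxels, and a linear map W takes a single k-space readout to those coordinates. The matrix sizes and random contents below are illustrative assumptions; in the actual method U and W come from the prep sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, rank, readout_len = 64, 4, 16

U = rng.standard_normal((n_voxels, rank))     # pre-learned spatial subspace
W = rng.standard_normal((rank, readout_len))  # readout -> subspace coords

def reconstruct_frame(kspace_readout):
    """Real-time frame = U @ (W @ readout): matrix multiplication only,
    so a new frame is available at the readout acquisition rate."""
    coords = W @ kspace_readout               # subspace coordinates
    return U @ coords                         # image in voxel space

frame = reconstruct_frame(rng.standard_normal(readout_len))
```

Because no iterative reconstruction runs in the live loop, the per-frame cost is a rank-sized matrix-vector product, which is what makes the on-the-fly generation feasible.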
Systems and methods for image correction in positron emission tomography
A system for image correction in PET is provided. The system may acquire a PET image and a CT image of a subject. The system may generate, based on the PET image and the CT image, an attenuation-corrected PET image of the subject by application of an attenuation correction model. The attenuation correction model may be a trained cascaded neural network including a trained first model and at least one trained second model downstream to the trained first model. During the application of the attenuation correction model, an input of each of the at least one trained second model may include the PET image, the CT image, and an output image of a previous trained model that is upstream and connected to the trained second model.
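The cascade's data flow can be sketched with stand-in functions: every second-stage model receives the PET image, the CT image, and the output of the model directly upstream. The placeholder arithmetic inside the "models" is an assumption; only the wiring reflects the abstract.

```python
import numpy as np

def first_model(pet, ct):
    """Stand-in for the trained first model."""
    return pet * 0.9 + ct * 0.1

def second_model(pet, ct, prev_output):
    """Stand-in for a trained second model; input includes the PET image,
    the CT image, and the upstream model's output image."""
    return (pet + ct + prev_output) / 3

def cascaded_correction(pet, ct, second_models):
    out = first_model(pet, ct)
    for model in second_models:   # each stage sees PET, CT, previous output
        out = model(pet, ct, out)
    return out

pet = np.ones((2, 2))
ct = np.zeros((2, 2))
corrected = cascaded_correction(pet, ct, [second_model, second_model])
```

Feeding the original PET and CT images to every stage, rather than only the previous output, lets each refinement stage correct residual errors without losing the raw measurements.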
Workflow, system and method for motion compensation in ultrasound procedures
An ultrasound imaging device (10) with an ultrasound probe (12) acquires a live ultrasound image which is displayed with a contour (62) or reference image (60) registered with the live ultrasound image using a composite transform (42). To update the composite transform, the ultrasound imaging device acquires a baseline three-dimensional ultrasound (3D-US) image (66) tagged with a corresponding baseline orientation of the ultrasound probe measured by a probe tracker, and one or more reference 3D-US images (70) each tagged with a corresponding reference orientation. Transforms (54) are computed to spatially register each reference 3D-US image with the baseline 3D-US image. A closest reference 3D-US image is determined whose corresponding orientation is closest to a current orientation of the ultrasound probe as measured by the probe tracker. The composite transform is updated to include the transform to spatially register the closest reference 3D-US image to the baseline 3D-US image.
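The orientation-matching and transform-composition steps can be sketched as follows: pick the reference 3D-US image whose tagged probe orientation is nearest the current tracked orientation, then compose its baseline registration transform into the composite transform. Representing orientations as 3-vectors and transforms as 4x4 matrices is an assumption for illustration.

```python
import numpy as np

def closest_reference(current_orientation, reference_orientations):
    """Index of the reference whose tagged orientation is closest to the
    current probe orientation reported by the tracker."""
    dists = [np.linalg.norm(np.asarray(o) - np.asarray(current_orientation))
             for o in reference_orientations]
    return int(np.argmin(dists))

def update_composite(live_to_closest_ref, closest_ref_to_baseline):
    """Composite transform including the registration of the closest
    reference 3D-US image to the baseline 3D-US image."""
    return closest_ref_to_baseline @ live_to_closest_ref

ref_orientations = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
idx = closest_reference((0.1, 0.9, 0.0), ref_orientations)
composite = update_composite(np.eye(4), np.eye(4))
```

Selecting by probe orientation means the motion compensation degrades gracefully: whichever reference pose the sonographer is nearest to supplies the registration.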
System and method for processing multi-dimensional and time-overlapping imaging data in real time with cloud computing
The present embodiments include a system and method for processing multi-dimensional images in real time through the use of third-party servers and cloud computing. The system includes a data acquisition processor, a data storage unit, an administrator processor, and a server. The server can be a cloud-based server. The method includes receiving multi-dimensional imaging data, compressing and blending the image data, transmitting the image data to a server, decompressing and deblending the data, generating multi-dimensional images, and transmitting the imaging data back to the administrator processor.
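The transmit leg of the pipeline can be sketched as a compress/decompress round trip. zlib on a raw byte buffer stands in for the compression-and-blending scheme, which the abstract does not specify; the array shape and dtype are likewise assumptions.

```python
import zlib
import numpy as np

def compress_for_transmit(image_data):
    """Compress multi-dimensional imaging data before sending it to the
    (cloud) server."""
    return zlib.compress(image_data.tobytes())

def decompress_on_server(payload, shape, dtype):
    """Server side: decompress and restore the multi-dimensional array."""
    return np.frombuffer(zlib.decompress(payload), dtype=dtype).reshape(shape)

frames = np.zeros((4, 8, 8), dtype=np.uint16)   # synthetic multi-frame data
payload = compress_for_transmit(frames)
restored = decompress_on_server(payload, frames.shape, frames.dtype)
```

Lossless byte-level compression keeps the round trip exact; a real system would likely add chunking and a header carrying shape and dtype so the server can reconstruct without out-of-band metadata.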
CT IMAGING DEPENDING ON AN INTRINSIC RESPIRATORY SURROGATE OF A PATIENT
A method for performing a CT imaging process based on an individual respiration behaviour of a patient, comprises: recording a respiratory movement of the patient by monitoring an intrinsic respiratory surrogate. In the context of recording the intrinsic respiratory surrogate, CT raw data are acquired from an examination volume of the patient, and 3D-CT images of subsequent stacks of the examination volume at different z-positions are reconstructed. An automatic organ segmentation is performed based on the reconstructed 3D-CT images of the subsequent stacks, wherein at least a portion of the examination volume is segmented. Furthermore, a respiratory movement of at least the portion of the examination volume is detected and determined as the intrinsic respiratory surrogate. The CT imaging process is then adapted based on the intrinsic respiratory surrogate of the patient.
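The surrogate-extraction step can be sketched by tracking the z-centroid of the segmented organ across subsequently reconstructed stacks; its displacement over time is the intrinsic respiratory surrogate. The synthetic masks and the choice of axis 0 as z are assumptions for illustration.

```python
import numpy as np

def organ_z_centroid(mask):
    """Mean z-index of the segmented organ voxels in one stack
    (axis 0 taken as z, an assumption)."""
    z_idx = np.nonzero(mask)[0]
    return float(z_idx.mean())

def respiratory_surrogate(masks):
    """Surrogate signal: organ centroid displacement across stacks,
    relative to the first stack."""
    centroids = np.array([organ_z_centroid(m) for m in masks])
    return centroids - centroids[0]

masks = []
for shift in (0, 1, 2, 1):                # organ moving with respiration
    m = np.zeros((8, 4, 4), dtype=bool)
    m[2 + shift:4 + shift] = True         # synthetic segmented organ slab
    masks.append(m)
surrogate = respiratory_surrogate(masks)  # rises and falls with the shift
```

The imaging process can then be gated or re-planned against this signal, with no external belt or camera needed since the surrogate is derived from the CT data itself.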
Method for displaying tumor location within endoscopic images
A method of displaying an area of interest within a surgical site includes modeling a patient's lungs and identifying a location of an area of interest within the model of the patient's lungs. The topography of the surface of the patient's lungs is determined using an endoscope having a first camera, a light source, and a structured light pattern source. Real-time images of the patient's lungs are displayed on a monitor and the real-time images are registered to the model of the patient's lungs using the determined topography of the patient's lungs. A marker indicative of the location of the area of interest is superimposed over the real-time images of the patient's lungs. If the marker falls outside of the field of view of the endoscope, an arrow is superimposed over the real-time images to indicate the direction in which the marker is located relative to the field of view.
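The out-of-view indicator logic can be sketched as a bounds check on the marker's projected 2D position: inside the endoscope frame, draw the marker; outside it, return a unit direction from the image center toward the marker for the arrow overlay. The image size and coordinates are assumptions for illustration.

```python
import numpy as np

def marker_overlay(marker_xy, image_size):
    """Return ('marker', position) when the area of interest projects inside
    the endoscope field of view, else ('arrow', direction) pointing from the
    image center toward the off-screen marker."""
    w, h = image_size
    x, y = marker_xy
    if 0 <= x < w and 0 <= y < h:
        return ("marker", (x, y))                 # in view: draw directly
    center = np.array([w / 2, h / 2])
    direction = np.array([x, y], dtype=float) - center
    direction = direction / np.linalg.norm(direction)
    return ("arrow", tuple(direction))            # out of view: point at it

kind_in, _ = marker_overlay((100, 120), (640, 480))       # inside the frame
kind_out, arrow = marker_overlay((900, 240), (640, 480))  # right of frame
```

Clamping the arrow to the image border at the returned direction would complete the overlay; that rendering detail is left out of the sketch.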