G06T2207/10084

SYSTEM AND METHOD FOR PROCESSING MULTIMODAL IMAGES
20170228896 · 2017-08-10 ·

Various aspects of a system and a method to process multimodal images are disclosed herein. In accordance with an embodiment, the system includes an image-processing device that generates a structured point cloud representing edge points of an anatomical portion. The structured point cloud is generated by shrink-wrapping an unstructured point cloud to a boundary of the anatomical portion. Diffusion filtering is performed to dilate the edge points of the structured point cloud so that they become mutually connected. A mask for the anatomical portion is created based on the diffusion filtering.
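The dilation-by-diffusion idea can be illustrated with a toy NumPy sketch (not the patented implementation): sparse edge points are spread by an isotropic diffusion step until neighbouring points merge, and thresholding the diffused field yields a binary mask. The function name, grid size, and parameters are illustrative assumptions.

```python
import numpy as np

def diffuse_and_mask(edge_points, shape, iters=2, thresh=0.05):
    """Dilate sparse edge points by isotropic diffusion (a toy stand-in
    for the patent's diffusion filtering) and threshold into a mask."""
    grid = np.zeros(shape, dtype=float)
    rows, cols = zip(*edge_points)
    grid[rows, cols] = 1.0
    for _ in range(iters):
        # mass-conserving 4-neighbour averaging step (discrete heat equation)
        padded = np.pad(grid, 1, mode="edge")
        grid = 0.2 * (padded[1:-1, 1:-1] + padded[:-2, 1:-1] +
                      padded[2:, 1:-1] + padded[1:-1, :-2] + padded[1:-1, 2:])
    return grid > thresh

# Two edge points three cells apart become connected after diffusion
mask = diffuse_and_mask([(4, 3), (4, 6)], (9, 9), iters=2, thresh=0.05)
```

After two diffusion steps the intermediate cells receive enough mass to exceed the threshold, so the two isolated edge points join into one connected region of the mask.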

INTRAORAL SCANNING USING A PRE-EXISTING MODEL
20220270340 · 2022-08-25 ·

An intraoral scanner system includes an intraoral scanner having an imaging device and a sensing face, and a computing device communicatively coupled to the intraoral scanner. The computing device receives first intraoral images of a three-dimensional intraoral object of a patient, generated by the intraoral scanner during an intraoral scanning of the three-dimensional intraoral object. The computing device registers a first intraoral image of the first intraoral images relative to a second intraoral image of the first intraoral images using a model of the three-dimensional intraoral object that existed prior to the intraoral scanning.
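If each intraoral image carries a rigid transform into the pre-existing model's coordinate frame, the image-to-image registration follows by composing those transforms. A minimal 2-D sketch, assuming rigid (rotation + translation) registration; all names and values are hypothetical:

```python
import numpy as np

def rigid(theta, tx, ty):
    """3x3 homogeneous rigid transform (2-D rotation + translation)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0, 1.0]])

# Hypothetical model-space poses of two intraoral images
T1_model = rigid(0.1, 5.0, -2.0)   # image 1 -> model
T2_model = rigid(-0.3, 1.0, 4.0)   # image 2 -> model

# Image-1 -> image-2 registration obtained through the shared model
T12 = np.linalg.inv(T2_model) @ T1_model
```

By construction, mapping a point through `T12` and then into the model agrees with mapping it into the model directly, which is the property the shared pre-existing model provides.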

High temporal resolution Doppler OCT imaging of retinal blood flow
09814384 · 2017-11-14 ·

Techniques are introduced to improve the ability of OCT to characterize fluid flow in the eye more accurately, including faster flow measurements and a method to reduce geometric uncertainties caused by eye movements.

Method and System for Analyzing Image Data

A method of analyzing image data comprises: obtaining a first image of a first part of an object; obtaining a second image of a second part of the object that overlaps the first part; obtaining a mapping between the first and second images; segmenting the second image to obtain a segmentation; detecting outliers in the first image by identifying extreme intensity values of elements within one or more classes of elements on the basis of the segmentation; replacing elements of the second image that correspond to at least some outliers of the first image with replacement values, to obtain a corrected second image; and updating the segmentation by performing the segmenting on the corrected second image. The detecting, replacing, and updating are performed iteratively until a predetermined convergence criterion is met, i.e., a point at which the tissue and lesion segmentations no longer change significantly.
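A much-simplified, single-image sketch of the iterative loop (per-class outlier detection, replacement, re-segmentation until convergence) might look as follows; the fixed-threshold segmentation and z-score outlier rule are assumptions, not the patent's method:

```python
import numpy as np

def segment(img, t=5.0):
    """Toy two-class segmentation by a fixed intensity threshold
    (hypothetical stand-in for the patent's segmentation step)."""
    return (img > t).astype(int)

def correct_iteratively(img, z_cut=1.5, max_iter=20):
    """Iteratively detect per-class intensity outliers, replace them
    with the class median, and re-segment until nothing changes."""
    img = img.astype(float).copy()
    labels = segment(img)
    for _ in range(max_iter):
        changed = False
        for cls in (0, 1):
            in_cls = labels == cls
            vals = img[in_cls]
            if vals.size == 0 or vals.std() == 0:
                continue
            mu, sd = vals.mean(), vals.std()
            # extreme intensities within the class are treated as outliers
            outliers = in_cls & (np.abs(img - mu) > z_cut * sd)
            if outliers.any():
                img[outliers] = np.median(vals)  # replacement values
                changed = True
        new_labels = segment(img)                # update the segmentation
        if not changed and np.array_equal(new_labels, labels):
            break                                # convergence criterion met
        labels = new_labels
    return img, labels
```

The loop terminates once no outlier is replaced and the segmentation no longer changes, mirroring the "no significant change" convergence criterion.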

DEEP CONVOLUTIONAL NEURAL NETWORKS FOR TUMOR SEGMENTATION WITH POSITRON EMISSION TOMOGRAPHY

The present disclosure relates to techniques for segmenting tumors with positron emission tomography (PET) using deep convolutional neural networks for image and lesion metabolism analysis. Particularly, aspects of the present disclosure are directed to: obtaining PET scans and computerized tomography (CT) or magnetic resonance imaging (MRI) scans for a subject; preprocessing the PET scans and the CT or MRI scans to generate standardized images; generating two-dimensional segmentation masks using two-dimensional segmentation models, implemented as part of a convolutional neural network architecture, that take the standardized images as input; generating three-dimensional segmentation masks using three-dimensional segmentation models, implemented as part of the convolutional neural network architecture, that take as input patches of image data associated with segments from the two-dimensional segmentation masks; and generating a final image mask by combining information from the two-dimensional segmentation masks and the three-dimensional segmentation masks.
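The final combination step could, for example, be a weighted average of the 2-D and 3-D model probabilities; the abstract only states that the two mask types are combined, so the fusion rule below is an assumption:

```python
import numpy as np

def fuse_probabilities(p2d, p3d, w=0.5, thresh=0.5):
    """Average the 2-D-model and 3-D-model probability maps voxel-wise
    and threshold into a final binary mask (hypothetical fusion rule)."""
    fused = w * p2d + (1.0 - w) * p3d
    return fused >= thresh

# Hypothetical probability maps from the 2-D and 3-D segmentation models
p2d = np.array([[0.9, 0.2], [0.7, 0.1]])
p3d = np.array([[0.8, 0.6], [0.2, 0.1]])
final = fuse_probabilities(p2d, p3d)
```

Only voxels whose averaged probability clears the threshold survive, so each model can veto the other's low-confidence detections.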

Partial volume correction in multi-modality emission tomography

For partial volume correction, the partial volume effect is simulated using patient-specific segmentation. An organ or other object of the patient is segmented using anatomical imaging. For simulation, the locations of the patient-specific object or objects are sub-divided, creating artificial boundaries in the object. A test activity is assigned to each sub-division and forward projected. The difference of the forward projected activity to the test activity provides a location-by-location partial volume correction map. This correction map is used in reconstruction from the measured emissions, resulting in more accurate activity estimation with less partial volume effect.
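The correction-map idea can be sketched as: assign a unit test activity to each subdivision, simulate the partial volume effect with a blur standing in for the forward projector, and take the per-location ratio of true to simulated activity. The box PSF below is a crude, assumed stand-in for a real scanner system model:

```python
import numpy as np

def box_psf(img, k=3):
    """Crude forward-projection stand-in: blur with a k-wide box PSF
    (a real system model would use the scanner PSF and projector)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def pvc_map(labels, eps=1e-6):
    """Per-subdivision partial-volume correction factors: unit test
    activity per subdivision, forward-projected, then ratioed."""
    corr = np.ones(labels.shape, dtype=float)
    for lab in np.unique(labels):
        test = (labels == lab).astype(float)  # unit test activity
        blurred = box_psf(test)               # simulated partial volume effect
        inside = labels == lab
        corr[inside] = test[inside] / (blurred[inside] + eps)
    return corr
```

Deep inside a subdivision the blur changes nothing and the factor is ~1; at boundaries activity spills out, so the factor grows, compensating the loss during reconstruction.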

Intraoral scanning using ultrasound and optical scan data
11341732 · 2022-05-24 ·

First intraoral images of a first portion of a three-dimensional intraoral object are received. The first intraoral images correspond to an intraoral scan of the three-dimensional intraoral object during a current patient visit. A pre-existing model that corresponds to the three-dimensional intraoral object is identified. The pre-existing model is based on intraoral data of the three-dimensional intraoral object captured during a previous patient visit. A first intraoral image of the first intraoral images is registered to a first portion of the pre-existing model. A second intraoral image of the first intraoral images is registered to a second portion of the pre-existing model.
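Registering an intraoral image to a portion of the pre-existing model is, at its core, a rigid point-set alignment. A minimal sketch using the Kabsch/Procrustes solution, assuming point correspondences are already known (a real system would also have to find them, e.g. with ICP):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch) mapping `src` points onto
    corresponding `dst` points: returns rotation R and translation t
    such that dst ~= src @ R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

With noiseless correspondences the fit recovers the ground-truth rotation and translation exactly, which makes it easy to sanity-check.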

VISUALIZING AN ORGAN USING MULTIPLE IMAGING MODALITIES COMBINED AND DISPLAYED IN VIRTUAL REALITY
20220122239 · 2022-04-21 ·

A system including first and second camera assemblies and a processor. The first camera assembly includes a first camera that produces, in a first imaging modality, a first image of an organ acquired from a first angle, and a first position sensor that produces a first position signal indicative of a first position and a first orientation of the first camera. The second camera assembly includes a second camera that produces, in a second imaging modality, a second image of the organ acquired from a second angle, and a second position sensor that produces a second position signal indicative of a second position and a second orientation of the second camera. The processor registers the first and second images based on the first and second position signals and displays, based on the first and second images, a third image that combines at least part of the first and second images.
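Registration from position signals amounts to building a camera-to-world pose from each sensor reading (position plus orientation) and chaining the two poses. A small sketch with hypothetical sensor values:

```python
import numpy as np

def pose_matrix(position, rotation):
    """Build a 4x4 camera-to-world pose from a position-sensor reading:
    a translation vector and a 3x3 orientation (rotation) matrix."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def cam2_to_cam1(p_cam2, T1, T2):
    """Map a homogeneous point from camera-2 coordinates into camera-1
    coordinates: the registration implied by the two position signals."""
    return np.linalg.inv(T1) @ T2 @ p_cam2
```

Once both images live in a common frame, combining them into the displayed third image is a rendering step rather than a geometric one.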

Systems and methods for medical image registration

There is provided a method for registration of intravital anatomical imaging modality image data and nuclear medicine image data of a patient's heart, comprising: obtaining anatomical image data including a heart of a patient outputted by an anatomical intravital imaging modality; obtaining nuclear medicine image data outputted by a nuclear medicine imaging modality, the nuclear medicine image data including the heart of the patient; identifying a segmentation of a network of vessels of the heart in the anatomical image data; identifying a contour of at least part of the heart in the nuclear medicine image data, the contour including at least one muscle wall border of the heart; correlating the segmentation with the contour; registering the correlated segmentation and the correlated contour to form a registered image of the anatomical image data and the nuclear medicine image data; and providing the registered image for display.
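The correlation step can be approximated, for intuition, by searching for the translation that best overlaps the vessel segmentation with the heart contour; the exhaustive integer-shift search and Dice score below are illustrative stand-ins for whatever optimiser the patent contemplates:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)

def best_shift(seg, contour, max_shift=5):
    """Exhaustive search for the integer translation that best
    correlates the vessel segmentation with the heart contour."""
    best, best_score = (0, 0), -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(seg, dy, axis=0), dx, axis=1)
            score = dice(shifted, contour)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best, best_score
```

Brute force is only viable for small search ranges; it is used here because it makes the correlation objective explicit.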

CORRECTING MOTION-RELATED DISTORTIONS IN RADIOGRAPHIC SCANS

A method comprising: receiving a radiographic image dataset representing a sequential radiographic scan of a region of a human subject; receiving three-dimensional (3D) image data representing an optical scan of a surface of said region, wherein said optical scan is performed simultaneously with said sequential radiographic scan; estimating a time-dependent motion of said subject during said sequential radiographic scan, relative to a specified position, based, at least in part, on said 3D image data; and using said estimated motion to determine corrections for said radiographic image dataset, based, at least in part, on a known transformation between corresponding coordinate systems of said radiographic image dataset and said 3D image data.
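A toy version of the motion-estimation-plus-known-transform idea: estimate per-frame drift from the optical surface points (centroid motion relative to the first frame) and map the resulting corrections into radiographic coordinates via the known transformation. Translation-only, with hypothetical names:

```python
import numpy as np

def motion_corrections(surface_frames, A):
    """Per-frame translation corrections: centroid drift of the optical
    surface scan relative to the first frame, expressed in radiographic
    coordinates via the known 4x4 transform A (optical -> radiographic).
    A rigid, translation-only toy model."""
    ref = surface_frames[0].mean(axis=0)
    R = A[:3, :3]
    corrections = []
    for frame in surface_frames:
        drift = frame.mean(axis=0) - ref  # motion in optical coordinates
        corrections.append(-R @ drift)    # undo it in radiographic coords
    return np.array(corrections)
```

Real motion correction would estimate full rigid (or deformable) motion per time point, but the coordinate-mapping role of the known transformation is the same.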