A61B6/466

System and method for generating a virtual mathematical model of the dental (stomatognathic) system

A method for forming a virtual 3D mathematical model of a dental system, including receiving DICOM files representing the dental system; identifying the number and location of voxels of tissues of the dental system; combining the voxels of the tissues into voxels of organs of the dental system; combining the organs into the virtual 3D mathematical model of the dental system, wherein the virtual 3D mathematical model supports linear, non-linear and volumetric measurements of the dental system; and presenting the virtual 3D mathematical model to a user. The DICOM files can originate from cone beam or multispiral computed tomography, MRI, PET and/or ultrasonography. The tissues include enamel, dentin, pulp, cartilage, periodontium, and/or jaw bone. The organs include teeth, gums, the temporomandibular joint and/or the jaw. A size of the voxels is typically between 40 μm and 200 μm.
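The tissue-to-organ combination step above can be sketched as a label remapping over the voxel grid. This is a minimal illustration only: the integer tissue/organ labels and the grouping table are assumptions for the sketch, not the patent's actual encoding.

```python
import numpy as np

# Hypothetical integer labels for tissues segmented from the DICOM volume.
ENAMEL, DENTIN, PULP, PERIODONTIUM, JAW_BONE = 1, 2, 3, 4, 5

# Assumed grouping: enamel/dentin/pulp form a tooth, periodontium belongs
# to the gums, jaw bone to the jaw; 0 is background in both label spaces.
TISSUE_TO_ORGAN = {0: 0, ENAMEL: 1, DENTIN: 1, PULP: 1,
                   PERIODONTIUM: 2, JAW_BONE: 3}  # 1=tooth, 2=gums, 3=jaw

def combine_tissues_into_organs(tissue_labels: np.ndarray) -> np.ndarray:
    """Map a per-voxel tissue label volume to a per-voxel organ label volume."""
    lut = np.zeros(max(TISSUE_TO_ORGAN) + 1, dtype=np.int32)
    for tissue, organ in TISSUE_TO_ORGAN.items():
        lut[tissue] = organ
    return lut[tissue_labels]

def organ_volume_mm3(organ_labels: np.ndarray, organ: int,
                     voxel_size_um: float = 100.0) -> float:
    """Volumetric measurement: voxel count times voxel volume,
    for isotropic voxels in the 40-200 um range named in the abstract."""
    voxel_mm = voxel_size_um / 1000.0
    return float(np.count_nonzero(organ_labels == organ)) * voxel_mm ** 3
```

With the organ labels in hand, linear and volumetric measurements reduce to counting and measuring labeled voxels, as `organ_volume_mm3` shows for volume.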

Method, system, device and medium for determining a blood flow velocity in a vessel

Method, system, device and medium for determining a blood flow velocity in a vessel are provided. An example method includes receiving a 3D model of the vessel, reconstructed from X-ray angiography images of the vessel. The method further includes specifying a segment of the 3D model by a start landmark and a termination landmark. Moreover, the method includes determining the blood flow velocity based on the length of the segment and the perfusion time for the segment, normalizing the blood flow velocity to correspond to a cardiac cycle. The method offers improved accuracy in calculating blood flow velocity and requires no additional modalities beyond the original X-ray angiogram sequences used to visualize the coronary arteries.
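A minimal sketch of the velocity computation described above, under two stated assumptions: the perfusion time is the transit time of the contrast bolus between the start and termination landmarks, and "normalizing to a cardiac cycle" is read here as expressing the distance travelled per cycle at a given heart rate (the patent's actual normalization may differ).

```python
def blood_flow_velocity_mm_per_s(segment_length_mm: float,
                                 perfusion_time_s: float) -> float:
    """Mean velocity over the segment between the two landmarks."""
    return segment_length_mm / perfusion_time_s

def normalize_to_cardiac_cycle(velocity_mm_per_s: float,
                               heart_rate_bpm: float) -> float:
    """One plausible normalization: distance travelled per cardiac
    cycle (mm/cycle), using the cycle duration from the heart rate."""
    cycle_s = 60.0 / heart_rate_bpm
    return velocity_mm_per_s * cycle_s
```

For example, a 30 mm segment perfused in 0.5 s gives 60 mm/s, which at 60 bpm (a 1 s cycle) corresponds to 60 mm per cardiac cycle.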

Method of using lung airway carina locations to improve ENB registration

Disclosed are systems, devices, and methods for registering a luminal network to a 3D model of the luminal network. An example method comprises generating a 3D model of a luminal network, identifying a target within the 3D model, determining locations of a plurality of carinas in the luminal network proximate the target, displaying guidance for navigating a location sensor within the luminal network, tracking the location of the location sensor, comparing the tracked locations of the location sensor with the portions of the 3D model representative of open space, displaying guidance for navigating the location sensor a predetermined distance into each lumen originating at the plurality of carinas proximate the target, tracking the location of the location sensor while it is navigated into each lumen, and updating the registration of the 3D model with the luminal network based on the tracked locations of the location sensor.
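The final registration-update step can be sketched as a least-squares rigid alignment (the Kabsch algorithm) between sensor locations tracked near the carinas and their assumed corresponding points in the 3D model. This is a generic illustration of rigid point-set registration, not the patent's specific ENB registration procedure, and the correspondence-finding step is assumed to have already happened.

```python
import numpy as np

def rigid_registration(sensor_pts: np.ndarray, model_pts: np.ndarray):
    """Least-squares rigid transform (R, t) mapping tracked sensor points
    onto corresponding 3D-model points: R @ s + t ~ m (Kabsch algorithm).
    Both inputs are N x 3 arrays of corresponding points."""
    cs, cm = sensor_pts.mean(axis=0), model_pts.mean(axis=0)
    H = (sensor_pts - cs).T @ (model_pts - cm)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ cs
    return R, t
```

In practice the sensor points collected a predetermined distance into each lumen give well-spread, non-collinear samples, which is what makes the rotation estimate well conditioned.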

System and method for navigating within the lung

Methods and systems for navigating to a target through a patient's bronchial tree are disclosed including a bronchoscope, a probe insertable into a working channel of the bronchoscope and including a location sensor, and a workstation in operative communication with the probe and the bronchoscope, the workstation including a user interface that guides a user through a navigation plan and is configured to present a central navigation view including a plurality of views configured for assisting the user in navigating the bronchoscope through central airways of the patient's bronchial tree toward the target, a peripheral navigation view including a plurality of views configured for assisting the user in navigating the probe through peripheral airways of the patient's bronchial tree to the target, and a target alignment view including a plurality of views configured for assisting the user in aligning a distal tip of the probe with the target.

METHODS AND APPARATUS FOR DEEP LEARNING BASED IMAGE ATTENUATION CORRECTION
20230009528 · 2023-01-12

Systems and methods for reconstructing medical images are disclosed. Measurement data from positron emission tomography (PET) and measurement data from an anatomy modality, such as magnetic resonance (MR) or computed tomography (CT), are received from an image scanning system. A PET image is generated based on the PET measurement data, and an anatomy image is generated based on the anatomy measurement data. A trained neural network is applied to the PET image and the anatomy image to generate an attenuation map. The neural network may be trained based on anatomy and PET images. In some examples, the trained neural network generates an initial attenuation map based on the anatomy image, registers the initial attenuation map to the PET image, and generates an enhanced attenuation map based on the registration. Further, a corrected image is reconstructed based on the generated attenuation map and the PET image.

TISSUE STATE GRAPHIC DISPLAY SYSTEM

A system is provided for augmenting a three-dimensional (3D) model of a heart to indicate the tissue state. The system accesses a 3D model of a heart, accesses two-dimensional (2D) images of tissue state slices of the heart, and accesses source location information of an arrhythmia. The system augments the 3D model with an indication of a source location based on the source location information. For each of a plurality of the tissue state slices of the heart, the system augments a 3D model slice of the 3D model that corresponds to that tissue state slice with an indication of the tissue state of the heart represented by the tissue state information of that tissue state slice. The system then displays a representation of the 3D model that indicates the source location of the arrhythmia and the tissue state of the heart.

VIRTUAL OBJECT DISPLAY SYSTEM, AND DISPLAY CONTROL METHOD AND DISPLAY CONTROL PROGRAM FOR THE SAME
20180011317 · 2018-01-11

A virtual object display system includes a plurality of head-mounted displays 1, each having a virtual object acquisition unit 22 that acquires a virtual object, a display information acquisition unit 23 that acquires display information used to display the virtual object, and a display control unit 24 that causes a display unit to display the virtual object on the basis of the display information. The system enables switching between first display control, which causes the virtual object to be displayed on the basis of display information acquired by each of the plurality of head-mounted displays 1, and second display control, which causes the virtual object to be displayed with an orientation identical to that of the virtual object displayed on the basis of display information acquired by another head-mounted display 1.

DISEASE CHARACTERIZATION FROM FUSED PATHOLOGY AND RADIOLOGY DATA
20180012356 · 2018-01-11

Methods and apparatus distinguish invasive adenocarcinoma (IA) from in situ adenocarcinoma (AIS). One example apparatus includes a set of circuits, and a data store that stores three dimensional (3D) radiological images of tissue demonstrating IA or AIS. The set of circuits includes a classification circuit that generates an invasiveness classification for a diagnostic 3D radiological image, a training circuit that trains the classification circuit to identify a texture feature associated with IA, an image acquisition circuit that acquires a diagnostic 3D radiological image of a region of tissue demonstrating cancerous pathology and that provides the diagnostic 3D radiological image to the classification circuit, and a prediction circuit that generates an invasiveness score based on the diagnostic 3D radiological image and the invasiveness classification. The training circuit trains the classification circuit using a set of 3D histological reconstructions combined with the set of 3D radiological images.

Visualization method and apparatus

An inverse visualization of a time-resolved angiographic image data set of a vascular system of a patient that was recorded by a medical imager during the flow of a contrast medium through the vascular system is provided. The time-resolved angiographic image data set of the vascular system has a temporal sequence of frames of the vascular system corresponding to the contrast medium filling process. A data set of bolus arrival times for each pixel or voxel is determined. The bolus arrival time corresponds to the time in the temporal sequence at which a predetermined contrast enhancement due to the contrast medium filling first occurs. A data set of temporally inverted bolus arrival times with respect to the contrast medium filling is determined for each pixel or voxel, resulting in a temporally inverted sequence of frames with respect to the contrast medium filling. The time-resolved angiographic image data set in the temporally inverted sequence is visualized.
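A minimal sketch of the bolus-arrival-time computation and its temporal inversion, assuming the frames arrive as a T×H×W array of contrast-enhancement values and the predetermined enhancement threshold is given; frame indices stand in for time here.

```python
import numpy as np

def bolus_arrival_times(frames: np.ndarray, threshold: float) -> np.ndarray:
    """Per-pixel index of the first frame whose contrast enhancement
    reaches the predetermined threshold (frames: T x H x W)."""
    exceeded = frames >= threshold
    first = np.argmax(exceeded, axis=0).astype(float)  # first True along time
    first[~exceeded.any(axis=0)] = np.nan              # bolus never arrives
    return first

def invert_arrival_times(bat: np.ndarray, n_frames: int) -> np.ndarray:
    """Temporally inverted arrival times: the last-filling pixel
    becomes the first in the inverted sequence, and vice versa."""
    return (n_frames - 1) - bat
```

Rendering the frames ordered by these inverted arrival times yields the temporally inverted filling sequence that the abstract describes visualizing.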