
REGISTRATION CHAINING WITH INFORMATION TRANSFER
20230051081 · 2023-02-16 ·

A registration chaining system provides information transfer along a chain of registrations of images of the same or different modalities. The registration at each link is based on a shared feature readily distinguished in that pair of images. The information is then transferred along the chain using these registrations.
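The abstract above describes composing per-link registrations so that information annotated in one image can be carried to the far end of the chain. A minimal sketch of that idea, assuming each link is represented as a 2D homogeneous transform matrix (the transform values and image names are hypothetical, not from the patent):

```python
import numpy as np

def compose_chain(transforms):
    """Compose 3x3 homogeneous 2D transforms along a registration chain.

    transforms[i] maps coordinates in image i to coordinates in image i+1,
    so the composition maps image 0 directly into the last image's frame.
    """
    total = np.eye(3)
    for t in transforms:
        total = t @ total
    return total

def transfer_point(point_xy, transforms):
    """Transfer an annotated point along the chain via the composed map."""
    p = np.array([point_xy[0], point_xy[1], 1.0])
    return (compose_chain(transforms) @ p)[:2]

# Hypothetical two-link chain: image A -> B (translation), B -> C (scaling).
a_to_b = np.array([[1, 0, 5], [0, 1, -2], [0, 0, 1]], dtype=float)
b_to_c = np.array([[2, 0, 0], [0, 2, 0], [0, 0, 1]], dtype=float)
print(transfer_point((10.0, 4.0), [a_to_b, b_to_c]))  # point expressed in image C
```

Each link only needs to register a shared feature to its immediate neighbor; the composition supplies the end-to-end mapping without ever registering the first and last images directly.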

Predictive use of quantitative imaging

The present disclosure provides systems and methods for predicting a disease state of a subject using ultrasound imaging and information ancillary to the ultrasound imaging. At least two quantitative measurements of a subject, including at least one measurement taken using ultrasound imaging, can be identified as part of quantified information. One of the quantitative measurements can be compared to a first predetermined standard, included as part of the ancillary information, in order to identify a first initial value. Another of the quantitative measurements can be compared to a second predetermined standard, also included as part of the ancillary information, in order to identify a second initial value. The quantified information can then be correlated with the ancillary information, using the first and second initial values, to determine a final value that is predictive of a disease state of the subject.
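The pipeline above reduces to: compare each measurement against its own standard to get an initial value, then combine the initial values into a single predictive value. A toy sketch under stated assumptions (the ratio-to-standard comparison and the multiplicative correlation are illustrative choices, not disclosed by the abstract):

```python
def predict_disease_value(m1, m2, standard1, standard2, correlate):
    """Hypothetical sketch: compare each quantitative measurement to its
    predetermined standard to get an initial value, then correlate the
    initial values into a final predictive value."""
    v1 = m1 / standard1  # assumed comparison: ratio to the standard
    v2 = m2 / standard2
    return correlate(v1, v2)

# Illustrative numbers only; a real correlation would be clinically derived.
final = predict_disease_value(1.2, 80.0, 1.0, 100.0, correlate=lambda a, b: a * b)
print(final)
```

The key structural point is that each measurement is normalized against ancillary reference data before the two are combined, rather than correlating raw measurements directly.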

System and method for predictive fusion
11580651 · 2023-02-14 ·

An image fusion system provides a predicted alignment between images of different modalities and synchronization of the alignment, once acquired. A spatial tracker detects and tracks a position and orientation of an imaging device within an environment. A predicted pose of an anatomical feature can be determined, based on previously acquired image data, with respect to a desired position and orientation of the imaging device. When the imaging device is moved into the desired position and orientation, a relationship is established between the pose of the anatomical feature in the image data and the pose of the anatomical feature imaged by the imaging device. Based on tracking information provided by the spatial tracker, the relationship is maintained even when the imaging device moves to various positions during a procedure.
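The maintained "relationship" above can be modeled as a fixed transform captured at the moment of alignment and re-expressed as the tracked device moves. A minimal sketch, assuming 2D rigid transforms as 3x3 homogeneous matrices (all pose values are hypothetical):

```python
import numpy as np

def rigid(rot_deg, tx, ty):
    """Build a 2D rigid transform as a 3x3 homogeneous matrix."""
    r = np.deg2rad(rot_deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, tx], [s, c, ty], [0, 0, 1.0]])

# At the moment of alignment, record the relationship between the prior
# image data and the imaging device (values are illustrative).
device_pose_at_alignment = rigid(0, 100, 50)  # tracker: device in room frame
prior_to_device = rigid(10, -3, 7)            # prior image -> device frame

# Later the spatial tracker reports a new device pose; the prior-to-device
# link is updated so the established alignment is preserved.
device_pose_now = rigid(25, 120, 40)
prior_to_device_now = (
    np.linalg.inv(device_pose_now) @ device_pose_at_alignment @ prior_to_device
)
```

The invariant is that the prior image's pose in the room frame never changes: only its expression in the moving device's frame is recomputed from tracking data, which is how the alignment survives device motion during a procedure.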

SYSTEM FOR 3D MULTI-PARAMETRIC ULTRASOUND IMAGING

Systems and methods are disclosed that facilitate obtaining two-dimensional (2D) ultrasound images, using two or more ultrasound imaging modes or modalities, to generate 2D multi-parametric ultrasound (mpUS) images and/or a three-dimensional (3D) mpUS image. The different ultrasound imaging modes acquire images in a common frame of reference during a single procedure to facilitate their registration. The mpUS images (2D or 3D) may be used for enhanced and/or automated detection of one or more suspicious regions. After one or more suspicious regions are identified, the mpUS images may be used with a real-time image to guide biopsy or therapy of the region(s). All of these processes may be performed in a single medical procedure.

3D MULTI-PARAMETRIC ULTRASOUND IMAGING

Systems and methods are disclosed that facilitate obtaining two-dimensional (2D) ultrasound images, using two or more ultrasound imaging modes or modalities, to generate 2D multi-parametric ultrasound (mpUS) images and/or a three-dimensional (3D) mpUS image. The different ultrasound imaging modes acquire images in a common frame of reference during a single procedure to facilitate their registration. The mpUS images (2D or 3D) may be used for enhanced and/or automated detection of one or more suspicious regions. After one or more suspicious regions are identified, the mpUS images may be used with a real-time image to guide biopsy or therapy of the region(s). All of these processes may be performed in a single medical procedure.

Systems and methods for artificial intelligence-based image analysis for cancer assessment

Presented herein are systems and methods that provide for automated analysis of medical images to determine a predicted disease status (e.g., prostate cancer status) and/or a value corresponding to predicted risk of the disease status for a subject. The approaches described herein leverage artificial intelligence (AI) to analyze intensities of voxels in a functional image, such as a PET image, and determine a risk and/or likelihood that a subject's disease, e.g., cancer, is aggressive. The approaches described herein can predict whether a subject who presents with localized disease has and/or will develop aggressive disease, such as metastatic cancer. These predictions are generated in a fully automated fashion and can be used alone, or in combination with other cancer diagnostic metrics (e.g., to corroborate predictions and assessments or highlight potential errors). As such, they represent a valuable tool in support of improved cancer diagnosis and treatment.

Apparatus and method for prostate cancer analysis

Provided is a method of operating an apparatus for prostate cancer analysis operated by at least one processor, the method including: receiving digital slide images prepared from serial sections of a prostatectomy specimen, and a gross image of the serial sections; acquiring prostate cancer-related histological information of each received digital slide image using an artificial neural network model trained to infer histological information from the digital slide images; generating digital pathology images by displaying the prostate cancer-related histological information inferred from the artificial neural network model on each digital slide image; and providing a histological mapping image in which a tumor region extracted from each digital pathology image is mapped to a gross image of the corresponding section.

Atlas-based segmentation using deep-learning
11710241 · 2023-07-25 ·

Techniques for enhancing image segmentation with the integration of deep learning are disclosed herein. An example method for atlas-based segmentation using deep learning includes: applying a deep learning model to a subject image to identify an anatomical feature, registering an atlas image to the subject image, using the deep learning segmentation data to improve a registration result, generating a mapped atlas, and identifying the feature in the subject image using the mapped atlas. Another example method for training and use of a trained machine learning classifier, in an atlas-based segmentation process using deep learning, includes: applying a deep learning model to an atlas image, training a machine learning model classifier using data from applying the deep learning model, estimating structure labels of areas of the subject image, and defining structure labels by combining the estimated structure labels with labels produced from atlas-based segmentation on the subject image.
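One way the deep-learning segmentation can "improve a registration result", as the first example method describes, is by initializing the atlas-to-subject registration from matched feature locations. A minimal sketch, assuming binary feature masks and a pure translation initializer (the masks and the centroid-based scheme are illustrative assumptions, not the patented method):

```python
import numpy as np

def centroid(mask):
    """Centroid (x, y) of a binary mask's foreground pixels."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def init_registration_from_dl(atlas_feature, subject_feature):
    """Translation that moves the deep-learning-segmented feature in the
    atlas onto the same feature in the subject image, used to initialize
    (and thereby improve) the atlas-to-subject registration."""
    return centroid(subject_feature) - centroid(atlas_feature)

# Hypothetical deep-learning masks of the same anatomical feature.
atlas_mask = np.zeros((10, 10), bool)
atlas_mask[2:5, 2:5] = True
subject_mask = np.zeros((10, 10), bool)
subject_mask[4:7, 5:8] = True
print(init_registration_from_dl(atlas_mask, subject_mask))  # (dx, dy)
```

Applying this translation to the atlas before (or as a constraint during) deformable registration yields the "mapped atlas" through which structure labels are transferred to the subject image.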

OUT-OF-DISTRIBUTION DETECTION FOR ARTIFICIAL INTELLIGENCE SYSTEMS FOR PROSTATE CANCER DETECTION
20230230228 · 2023-07-20 ·

Systems and methods are provided for determining whether input medical images are out-of-distribution with respect to the training images on which a machine learning based medical imaging analysis network was trained. One or more input medical images of a patient are received, and one or more reconstructed images are generated from them using a machine learning based reconstruction network. Based on the input images and the reconstructed images, it is determined whether the input images are out-of-distribution from the training images, and the determination is output.
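The usual intuition behind such reconstruction-based out-of-distribution detection is that a network trained on in-distribution data reconstructs similar inputs well and unfamiliar inputs poorly, so reconstruction error can be thresholded. A toy sketch of that decision rule (the stand-in `reconstruct` function and the threshold are illustrative assumptions, not the patented network):

```python
import numpy as np

def reconstruction_ood_score(image, reconstruct, threshold):
    """Flag an input as out-of-distribution when its reconstruction error
    exceeds a threshold calibrated on in-distribution training images."""
    recon = reconstruct(image)
    error = float(np.mean((image - recon) ** 2))
    return error, error > threshold

# Toy stand-in for the trained reconstruction network: it reproduces
# mid-range (in-distribution-like) intensities and distorts others.
reconstruct = lambda img: np.clip(img, 0.2, 0.8)

in_dist = np.full((8, 8), 0.5)   # reconstructed almost perfectly
out_dist = np.zeros((8, 8))      # reconstructed poorly
print(reconstruction_ood_score(in_dist, reconstruct, threshold=0.01))
print(reconstruction_ood_score(out_dist, reconstruct, threshold=0.01))
```

Gating the downstream analysis network on this flag prevents it from silently producing predictions on images unlike anything it was trained on.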

COMPARING HEALTHCARE PROVIDER CONTOURS USING AUTOMATED TOOL
20230222657 · 2023-07-13 ·

A computer-implemented intermediary is used by which contouring performed by two participants, such as two physicians, can be compared. First, the contouring performed by each participant is compared to contouring performed by the intermediary. Then, by way of the common intermediary and a transitive analysis, the contours of the two participants can be compared to each other.
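The two-step comparison above can be illustrated with a standard contour-overlap metric: each participant's contour is scored against the automated intermediary, and the per-participant scores are then related transitively. A minimal sketch, assuming binary contour masks and the Dice coefficient (the masks and the choice of Dice are illustrative assumptions):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary contour masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical contours: the automated intermediary plus two physicians.
intermediary = np.zeros((10, 10), bool)
intermediary[2:8, 2:8] = True
physician_a = np.zeros((10, 10), bool)
physician_a[2:8, 3:9] = True
physician_b = np.zeros((10, 10), bool)
physician_b[3:9, 2:8] = True

# Each participant is compared to the common intermediary first; the two
# scores are then related transitively rather than comparing A to B directly.
score_a = dice(physician_a, intermediary)
score_b = dice(physician_b, intermediary)
print(score_a, score_b, abs(score_a - score_b))
```

Routing every comparison through the same automated intermediary means new participants can be compared to all prior ones without pairwise re-analysis of every contour set.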