Patent classifications
G06T2207/30081
SYSTEMS, APPARATUSES, AND METHODS FOR ENDOSCOPY
A portable endoscopic system comprising an imaging unit for an endoscopic procedure. The imaging unit has an imaging coupler for receiving imaging information from an imaging assembly of an endoscope; a display integrated into a housing of the imaging unit; an image processing unit for processing the received imaging information into images of a time series and displaying the images in real time; a motion sensor configured to detect a motion of the housing; and a detection processing unit. The detection processing unit is configured to classify at least one anatomical feature in each image of the time series based on an artificial intelligence classifier; determine a confidence metric of the classification; determine a motion vector based on the detected motion; and display, concurrently with the corresponding image, the classification of the at least one anatomical feature, the determined confidence metric, and the determined motion vector.
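The per-frame pipeline described above (classify, derive a confidence metric, derive a motion vector from the housing's motion sensor) can be sketched minimally in Python. The classifier logits, class names, and accelerometer reading below are hypothetical stand-ins, not the patent's implementation:

```python
import numpy as np

def softmax(logits):
    """Convert raw classifier scores into class probabilities."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def annotate_frame(logits, class_names, accel_xy, dt):
    """Classify one frame, derive a confidence metric, and a motion vector.

    logits   -- raw scores from an AI classifier (hypothetical values)
    accel_xy -- motion-sensor reading for the housing (assumed 2-axis)
    dt       -- time since the previous frame, in seconds
    """
    probs = softmax(np.asarray(logits, dtype=float))
    idx = int(np.argmax(probs))
    confidence = float(probs[idx])       # confidence metric of the classification
    motion = np.asarray(accel_xy) * dt   # crude motion vector from sensed motion
    return class_names[idx], confidence, motion

label, conf, vec = annotate_frame([2.0, 0.5, -1.0],
                                  ["feature_a", "feature_b", "other"],
                                  accel_xy=[0.2, -0.1], dt=0.033)
```

The three returned values correspond to the three items the detection processing unit displays concurrently with the image.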
Systems and methods for the segmentation of multi-modal image data
There is provided a computer implemented method of automatic segmentation of three dimensional (3D) anatomical region(s) of interest (ROI) that include predefined anatomical structure(s) of a target individual, comprising: receiving 3D images of a target individual, each including the predefined anatomical structure(s), wherein each 3D image is based on a different respective imaging modality. In one implementation, each respective 3D image is inputted into a respective processing component of a multi-modal neural network, wherein each processing component independently computes a respective intermediate output, and the intermediate outputs are inputted into a common last convolutional layer(s) for computing the indication of the segmented 3D ROI(s). In another implementation, each respective 3D image is inputted into a respective encoding-contracting component of a multi-modal neural network, wherein each encoding-contracting component independently computes a respective intermediate output. The intermediate outputs are inputted into a single common decoding-expanding component for computing the indication of the segmented 3D ROI(s).
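The fusion topology claimed above (independent per-modality branches whose intermediate outputs feed one common final component) can be sketched with toy numpy operations standing in for the learned layers; the branch weights and threshold below are hypothetical, not the patent's trained network:

```python
import numpy as np

def branch_encode(volume, weight):
    """Per-modality processing component: computes an independent
    intermediate output (a weighted copy stands in for learned convolutions)."""
    return weight * volume

def fuse_and_segment(intermediates, threshold=1.0):
    """Common last component: combines all per-modality intermediates
    and emits a binary segmentation of the 3D ROI."""
    fused = np.sum(intermediates, axis=0)
    return (fused > threshold).astype(np.uint8)

# Two toy 3D "modalities" (e.g. CT and MRI volumes), 4x4x4 voxels each.
rng = np.random.default_rng(0)
ct  = rng.random((4, 4, 4))
mri = rng.random((4, 4, 4))
intermediates = [branch_encode(ct, 0.8), branch_encode(mri, 1.2)]
mask = fuse_and_segment(intermediates)
```

Each modality is processed independently until the fusion step, which is the structural point of both claimed implementations.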
SYSTEM AND METHOD FOR ESTIMATING AN INDICATOR OF THE TISSUE ACTIVITY OF AN ORGAN
The invention relates to a system and method for quantifying a novel biomarker of the tissue activity of a human or animal organ. By way of preferred application, such a biomarker describes the diffusivity of biological fluids in living tissues in the form of a novel indicator of the diffusion of water molecules in living tissues, on the basis of diffusion data resulting from the acquisition of a sequence of images of one or more parts of the body of an animal or human patient. Particularly resistant and stable with respect to noise present in the medical imaging signals from which the experimental data stem, the novel biomarker is relevant in a large number of applications including, non-exhaustively, the analysis and/or monitoring of cancers, or the assessment of strokes.
Determining at least one final two-dimensional image for visualizing an object of interest in a three dimensional ultrasound volume
The present invention relates to a device (2) and a method (100) for determining at least one final two-dimensional image or slice for visualizing an object of interest in a three-dimensional ultrasound volume. The method (100) comprises the steps: a) providing (101) a three-dimensional image of a body region of a patient body, wherein an applicator configured for fixating at least one radiation source is inserted into the body region; b) providing (102) an initial direction, in particular by randomly determining the initial direction within the three-dimensional image; c) repeating (103) the following sequence of steps s1) to s4): s1) determining (104), via a processing unit, a set-direction within the three-dimensional image based on the initial direction for the first sequence, or based on a probability map determined during a previous sequence; s2) extracting (105), via the processing unit, an image-set of two-dimensional images from the three-dimensional image, such that the two-dimensional images of the image-set are arranged coaxially and subsequently in the set-direction; s3) applying (106), via the processing unit, a pre-trained applicator classification method to each of the two-dimensional images of the image-set, resulting in a probability score for each two-dimensional image of the image-set indicating a probability of the applicator being depicted, in particular fully depicted, in the respective two-dimensional image in a cross-sectional view; and s4) determining (107), via the processing unit, a probability map representing the probability scores of the two-dimensional images of the image-set with respect to the set-direction. The method comprises the further step: d) determining (108), via the processing unit and after finishing the last sequence, the two-dimensional image associated with the highest probability score, in particular from the image-set determined during the last sequence, as the final two-dimensional image. The invention provides an efficient way to ensure that the ultrasound volume contains the required clinical information by providing the necessary scan planes depicting the object of interest, e.g. the applicator (6), in a three-dimensional ultrasound volume.
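The search loop in steps c) and d) — score a stack of slices along a candidate set-direction, keep the probability map, and return the highest-scoring slice — can be sketched as follows. The brightness-based slice scorer below is a hypothetical stand-in for the pre-trained applicator classifier, and the search is simplified to the three axis-aligned directions:

```python
import numpy as np

def slice_scores(volume, axis):
    """Stand-in for the pre-trained applicator classifier: score each 2D
    slice along `axis` by its mean intensity (bright applicator signal)."""
    return volume.mean(axis=tuple(a for a in range(3) if a != axis))

def find_best_slice(volume):
    """Try candidate set-directions, build a probability map per direction,
    and return the direction, peak probability, and best slice index."""
    best = (None, -np.inf, None)
    for axis in range(3):                  # simplified candidate directions
        probs = slice_scores(volume, axis) # probability map along this direction
        i = int(np.argmax(probs))
        if probs[i] > best[1]:
            best = (axis, float(probs[i]), i)
    return best

vol = np.zeros((8, 8, 8))
vol[:, :, 5] = 1.0   # synthetic bright applicator cross-section at z = 5
axis, peak, idx = find_best_slice(vol)
```

The returned slice index plays the role of the final two-dimensional image of step d).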
SYSTEMS AND METHODS FOR ARTIFICIAL INTELLIGENCE-BASED IMAGE ANALYSIS FOR CANCER ASSESSMENT
Presented herein are systems and methods that provide for automated analysis of medical images to determine a predicted disease status (e.g., prostate cancer status) and/or a value corresponding to predicted risk of the disease status for a subject. The approaches described herein leverage artificial intelligence (AI) to analyze intensities of voxels in a functional image, such as a PET image, and determine a risk and/or likelihood that a subject's disease, e.g., cancer, is aggressive. The approaches described herein can provide predictions of whether a subject who presents with localized disease has and/or will develop aggressive disease, such as metastatic cancer. These predictions are generated in a fully automated fashion and can be used alone or in combination with other cancer diagnostic metrics (e.g., to corroborate predictions and assessments or highlight potential errors). As such, they represent a valuable tool in support of improved cancer diagnosis and treatment.
Segmentation of histological tissue images into glandular structures for prostate cancer tissue classification
The method according to the invention utilizes a color decomposition of histological tissue image data to derive a density map. The density map corresponds to the portion of the image data that contains the stain/tissue combination associated with the stroma, and at least one gland is extracted from said density map. The glands are obtained by a combination of a mask and a seed for each gland, derived by adaptive morphological operations, and the seed is grown to the boundaries of the mask. The method may also derive an epithelial density map used to remove small objects not corresponding to epithelial tissue. The epithelial density map may further be utilized to improve the identification of glandular regions in the stromal density map. The segmented gland is extracted from the tissue data utilizing the grown seed as a mask. The gland is then classified according to its associated features.
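The seed-growing step above — expand a seed point until it reaches the boundaries of its mask — can be sketched as a 4-connected flood fill; the toy two-gland mask below is illustrative only, and the adaptive morphological derivation of masks and seeds is not reproduced:

```python
import numpy as np
from collections import deque

def grow_seed(mask, seed):
    """Grow a seed point to the boundaries of a binary mask
    (4-connected flood fill), yielding one segmented gland region."""
    h, w = mask.shape
    region = np.zeros_like(mask)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if 0 <= y < h and 0 <= x < w and mask[y, x] and not region[y, x]:
            region[y, x] = 1
            queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return region

# Toy density-map mask with two candidate glands; grow only the seeded one.
mask = np.zeros((6, 6), dtype=np.uint8)
mask[1:3, 1:3] = 1   # gland A
mask[4:6, 4:6] = 1   # gland B
gland = grow_seed(mask, (1, 1))
```

Only the connected component containing the seed is returned, which is what lets one seed per gland isolate that gland from neighboring structures.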
IMPEDED DIFFUSION FRACTION FOR QUANTITATIVE IMAGING DIAGNOSTIC ASSAY
Methods and systems are provided for analyzing diffusion weighted images (DWI) using impeded diffusion fraction models for quantitative imaging diagnostic assay of cancer, such as glandular tissue cancers. The impeded diffusion fraction models are tissue- and cancer-independent and generate a single score representative of the multi-compartment diffusion fractions occurring within each voxel of a DWI.
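A multi-compartment per-voxel fraction of this kind can be illustrated with a generic two-compartment DWI signal model; the bi-exponential form, diffusivities, and b-values below are common textbook assumptions, not the patent's specific model:

```python
import numpy as np

def biexp_signal(b, f, d_impeded=0.3e-3, d_free=2.0e-3, s0=1.0):
    """Generic two-compartment DWI signal: an impeded fraction f and a
    freely diffusing fraction (1 - f). Diffusivities in mm^2/s (assumed)."""
    return s0 * (f * np.exp(-b * d_impeded) + (1 - f) * np.exp(-b * d_free))

def fit_impeded_fraction(b_values, signal):
    """Grid-search the impeded diffusion fraction for one voxel by
    least-squares over candidate fractions."""
    candidates = np.linspace(0.0, 1.0, 201)
    errors = [np.sum((biexp_signal(b_values, f) - signal) ** 2)
              for f in candidates]
    return float(candidates[int(np.argmin(errors))])

b = np.array([0.0, 200.0, 500.0, 1000.0])  # b-values in s/mm^2 (assumed)
f_hat = fit_impeded_fraction(b, biexp_signal(b, 0.35))
```

Repeating the fit per voxel yields a single fraction score per voxel, analogous in spirit to the single score the abstract describes.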
Prognosis of prostate cancer with computerized histomorphometric features of tumor morphology from routine hematoxylin and eosin slides
Embodiments facilitate generating a biochemical recurrence (BCR) prognosis by accessing a digitized image of a region of tissue demonstrating prostate cancer (CaP) pathology associated with a patient; generating a set of segmented gland lumen by segmenting a plurality of gland lumen represented in the region of tissue using a deep learning segmentation model; generating a set of post-processed segmented gland lumen; extracting a set of quantitative histomorphometry (QH) features from the digitized image based, at least in part, on the set of post-processed segmented gland lumen; generating a feature vector based on the set of QH features; computing a histotyping risk score based on a weighted sum of the feature vector; generating a classification of the patient as BCR high-risk or BCR low-risk based on the histotyping risk score and a risk score threshold; generating a BCR prognosis based on the classification; and displaying the BCR prognosis.
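The final scoring stage described above — a weighted sum of the QH feature vector compared against a risk-score threshold — reduces to a few lines; the feature values, weights, and threshold below are hypothetical, not the trained histotyping model:

```python
import numpy as np

def histotyping_risk(qh_features, weights, threshold=0.5):
    """Weighted sum of quantitative histomorphometry (QH) features,
    then a high/low BCR risk classification against a threshold."""
    score = float(np.dot(qh_features, weights))
    label = "BCR high-risk" if score >= threshold else "BCR low-risk"
    return score, label

# Hypothetical QH feature vector and trained weights.
features = np.array([0.8, 0.2, 0.5])
weights  = np.array([0.6, 0.1, 0.3])
score, label = histotyping_risk(features, weights)
```

The upstream stages (deep-learning lumen segmentation, post-processing, feature extraction) would populate `features`; only the risk computation and thresholding are sketched here.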
Medical image displaying apparatus and method of displaying medical image using the same
Provided are a medical image displaying apparatus and a medical image displaying method for registering an ultrasound image with a previously obtained medical image and outputting a result of the registration, the medical image displaying method including: transmitting ultrasound signals to an object and receiving ultrasound echo signals from the object via an ultrasound probe of the medical image displaying apparatus; obtaining a first ultrasound image based on the ultrasound echo signals; performing image registration between the first ultrasound image and a first medical image that is previously obtained; obtaining a second ultrasound image of the object via the ultrasound probe; obtaining a second medical image by transforming the first medical image to correspond to the second ultrasound image; and displaying the second medical image together with the second ultrasound image.
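The core idea above — register a prior medical image to an ultrasound frame, then transform it to follow a later frame — can be sketched with a toy translation-only registration; the centroid-based shift estimate is a hypothetical stand-in for full multimodal registration:

```python
import numpy as np

def centroid_shift(img_a, img_b):
    """Estimate the integer translation between two frames from the
    centroids of their bright pixels (stand-in for registration)."""
    ca = np.array(np.nonzero(img_a > 0.5)).mean(axis=1)
    cb = np.array(np.nonzero(img_b > 0.5)).mean(axis=1)
    return np.round(cb - ca).astype(int)

def transform(medical_img, shift):
    """Apply the registration result so the previously obtained medical
    image corresponds to the newly acquired ultrasound view."""
    return np.roll(medical_img, tuple(shift), axis=(0, 1))

us_1 = np.zeros((8, 8)); us_1[2, 2] = 1.0   # first ultrasound image
us_2 = np.zeros((8, 8)); us_2[4, 5] = 1.0   # probe moved: feature shifted
mri  = np.zeros((8, 8)); mri[2, 2] = 1.0    # first medical image, registered to us_1
shift = centroid_shift(us_1, us_2)
mri_2 = transform(mri, shift)               # second medical image, shown with us_2
```

In the claimed system the transform would be re-estimated continuously so the displayed medical image tracks the live ultrasound.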
Image guided surgical methodology and system employing patient movement detection and correction
A method and system utilize an imaging device that generates images of target tissue of a patient during a surgical procedure that acts on the target tissue imaged by the imaging device. The method and system enable visual detection of patient movement during the surgical procedure by marking at least one spatial attribute of one or more identifiable features of the target tissue illustrated in an image presented in a display window. Prior to acting on the target tissue, a visual indicator of the spatial attribute(s) is superimposed on one or more subsequent images captured by the imaging device and displayed to the operator. The operator can visually compare a position of the visual indicator to a position of the operator-identified feature in order to detect movement of the patient during the procedure. The system and methodology also facilitate realignment that corrects for detected patient movement.
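The comparison the operator performs — marker position versus the feature's current position — can also be computed directly, as a minimal sketch; the pixel coordinates and tolerance below are hypothetical:

```python
import numpy as np

def detect_movement(marked_pos, current_pos, tol_px=3.0):
    """Compare the superimposed marker position with the feature's
    position in a later image; flag patient movement beyond a tolerance."""
    displacement = np.asarray(current_pos, float) - np.asarray(marked_pos, float)
    moved = bool(np.linalg.norm(displacement) > tol_px)
    return moved, displacement

# Marker placed on a tissue feature before acting on the target.
moved, disp = detect_movement(marked_pos=(120, 80), current_pos=(126, 83))
```

The returned displacement is exactly what a realignment step would need to correct for the detected movement.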