Patent classifications
G06V2201/031
SYSTEMS AND METHODS FOR IMAGE SEGMENTATION
Systems and methods for image segmentation are provided. The systems may obtain a target image and a template image relating to the target image. The template image may correspond to an initial mask reflecting initial segmentations of the template image. The systems may determine a first transformation and an intermediate template image by preliminarily registering the template image to the target image, and generate an intermediate mask based on the initial mask and the first transformation. The systems may determine, based on the intermediate mask, one or more first regions from the target image and one or more second regions from the intermediate template image. The systems may determine a second transformation by registering each of the one or more second regions to a corresponding first region. The systems may then determine, based on the one or more second transformations, a target mask according to which the target image can be segmented.
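The coarse-to-fine idea above can be illustrated with a minimal sketch. This is not the patent's implementation: the "transformations" here are reduced to integer translations, and the function names (`shift_mask`, `coarse_to_fine`) are hypothetical stand-ins for a real registration pipeline.

```python
import numpy as np

def shift_mask(mask, dy, dx):
    # Translate a labelled mask by (dy, dx); pixels shifted outside the
    # image are dropped. Stands in for warping a mask by a transformation.
    out = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    ys2, xs2 = ys + dy, xs + dx
    keep = (ys2 >= 0) & (ys2 < mask.shape[0]) & (xs2 >= 0) & (xs2 < mask.shape[1])
    out[ys2[keep], xs2[keep]] = mask[ys[keep], xs[keep]]
    return out

def coarse_to_fine(initial_mask, global_shift, local_shifts):
    # The first (global) transformation produces the intermediate mask;
    # each labelled region is then refined by its own second (local)
    # transformation, and the refined regions form the target mask.
    intermediate = shift_mask(initial_mask, *global_shift)
    target = np.zeros_like(intermediate)
    for label, (dy, dx) in local_shifts.items():
        region = np.where(intermediate == label, intermediate, 0)
        target = np.maximum(target, shift_mask(region, dy, dx))
    return intermediate, target
```

In practice the global and per-region transformations would come from an image registration step (e.g. affine then deformable), not from hand-supplied shifts.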
Systems and Methods for Quantification of Liver Fibrosis with MRI and Deep Learning
Embodiments provide a deep learning framework to accurately segment the liver and spleen using a convolutional neural network with both short and long residual connections to extract their radiomic and deep features from multiparametric MRI. Embodiments will provide an "ensemble" deep learning model to quantify biopsy-derived liver fibrosis stage and percentage using the integration of multiparametric MRI radiomic and deep features, MRE data, as well as routinely available clinical data. Embodiments will provide a deep learning model to quantify MRE-derived liver stiffness using multiparametric MRI radiomic and deep features and routinely available clinical data.
SELF-SUPERVISED LEARNING METHOD AND APPARATUS FOR IMAGE FEATURES, DEVICE, AND STORAGE MEDIUM
The present application provides a self-supervised learning method performed by a computer device. The method includes: performing data augmentation on an original medical image to obtain a first enhanced image and a second enhanced image, the first enhanced image and the second enhanced image being positive samples of each other; performing feature extraction on the first enhanced image and the second enhanced image by a feature extraction model to obtain a first image feature of the first enhanced image and a second image feature of the second enhanced image; determining a model loss of the feature extraction model based on the first image feature, the second image feature, and a negative sample image feature, the negative sample image feature being an image feature corresponding to other original medical images; and training the feature extraction model based on the model loss.
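The loss described above, which pulls the two augmented views together while pushing them away from features of other images, is commonly realised as a contrastive (InfoNCE-style) objective. The sketch below assumes that form; the patent does not specify the exact loss, and the `temperature` parameter is a conventional choice, not taken from the abstract.

```python
import numpy as np

def info_nce_loss(z1, z2, negatives, temperature=0.1):
    # z1, z2: feature vectors of the two enhanced views (a positive pair).
    # negatives: (n, d) array of features from other original images.
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    z1, z2, negatives = norm(z1), norm(z2), norm(negatives)
    pos = np.exp(z1 @ z2 / temperature)            # positive-pair similarity
    neg = np.exp(negatives @ z1 / temperature).sum()  # negative similarities
    # Loss is small when the positive pair dominates all negatives.
    return -np.log(pos / (pos + neg))
```

Minimising this loss over many images trains the feature extractor so that augmentations of the same image map to nearby features.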
AUTOMATED DETECTION OF TUMORS BASED ON IMAGE PROCESSING
Methods and systems disclosed herein relate generally to processing images to estimate whether at least part of a tumor is represented in the images. A computer-implemented method includes accessing an image of at least part of a biological structure of a particular subject, processing the image using a segmentation algorithm to extract a plurality of image objects depicted in the image, determining one or more structural characteristics associated with an image object of the plurality of image objects, processing the one or more structural characteristics using a trained machine-learning model to generate estimation data corresponding to an estimation of whether the image object corresponds to a lesion or tumor associated with the biological structure, and outputting the estimation data for the particular subject.
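The pipeline above (segment objects, measure structural characteristics, score them with a trained model) can be sketched as follows. The features chosen here (area, perimeter, circularity) and the linear scorer are illustrative assumptions; the patent does not name specific characteristics or a model form.

```python
import numpy as np

def structural_features(obj):
    # obj: binary mask of one extracted image object.
    area = obj.sum()
    # 4-neighbour transition count as a simple perimeter estimate.
    pad = np.pad(obj, 1)
    perim = sum(np.abs(np.diff(pad, axis=a)).sum() for a in (0, 1))
    circularity = 4 * np.pi * area / max(perim, 1) ** 2  # 1.0 for a disc
    return {"area": float(area), "perimeter": float(perim),
            "circularity": float(circularity)}

def estimate_lesion(features, w=(0.01, -2.0), bias=-0.5):
    # Hypothetical linear scorer standing in for the trained ML model:
    # larger, less circular objects score higher here.
    score = w[0] * features["area"] + w[1] * features["circularity"] + bias
    return 1.0 / (1.0 + np.exp(-score))  # probability-like estimate
```

A real system would replace `estimate_lesion` with a model fitted on labelled lesion/non-lesion objects.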
IMAGE ANALYSIS METHOD, IMAGE ANALYSIS DEVICE, IMAGE ANALYSIS SYSTEM, CONTROL PROGRAM, AND RECORDING MEDIUM
The disclosed feature makes it possible to accurately determine a change that has occurred in a tissue. The feature includes: a binarizing section (41) that generates, from an image to be analyzed, a plurality of binarized images having respective binarization reference values different from each other; a Betti number calculating section (42) that calculates, for each of the plurality of binarized images, a one-dimensional Betti number indicating the number of hole-shaped regions, each of which is surrounded by pixels having a first pixel value obtained by binarization and is constituted by pixels having a second pixel value obtained by binarization; and a determining section (44) that determines a change that has occurred in the tissue, based on the binarization reference value and the one-dimensional Betti number of the binarized image in which the one-dimensional Betti number is maximized.
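A minimal sketch of the binarize-and-count step: the one-dimensional Betti number of a binary image equals the number of enclosed background holes, which can be counted with a flood fill. The threshold sweep then picks the reference value that maximizes it. The function names are hypothetical and the connectivity choice (4-connected background) is an assumption.

```python
import numpy as np

def holes(binary):
    # 1-D Betti number of a binary image: 4-connected background
    # components that do not touch the border are enclosed holes.
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)

    def flood(y, x):
        stack, size, touches = [(y, x)], 0, False
        while stack:
            cy, cx = stack.pop()
            if not (0 <= cy < h and 0 <= cx < w):
                continue
            if seen[cy, cx] or binary[cy, cx]:
                continue
            seen[cy, cx] = True
            size += 1
            if cy in (0, h - 1) or cx in (0, w - 1):
                touches = True  # reaches the image border: not a hole
            stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
        return size, touches

    count = 0
    for y in range(h):
        for x in range(w):
            if not binary[y, x] and not seen[y, x]:
                size, touches = flood(y, x)
                if size and not touches:
                    count += 1
    return count

def max_betti_threshold(image, thresholds):
    # Binarize at each reference value and keep the one maximizing B1.
    best = max(thresholds, key=lambda t: holes(image >= t))
    return best, holes(image >= best)
```

For example, a bright ring binarized above its interior intensity yields one hole, while a lower threshold fills the ring in and yields zero.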
Co-heterogeneous and adaptive 3D pathological abdominal organ segmentation using multi-source and multi-phase clinical image datasets
The present disclosure describes a computer-implemented method for processing clinical three-dimensional images. The method includes training a fully supervised segmentation model using a labelled image dataset containing images for a disease at a predefined set of contrast phases or modalities, to allow the segmentation model to segment images at the predefined set of contrast phases or modalities; finetuning the fully supervised segmentation model through co-heterogeneous training and adversarial domain adaptation (ADA) using an unlabelled image dataset containing clinical multi-phase or multi-modality image data, to allow the segmentation model to segment images at contrast phases or modalities other than the predefined set of contrast phases or modalities; and further finetuning the fully supervised segmentation model using domain-specific pseudo labelling to identify pathological regions missed by the segmentation model.
DEEP LEARNING VOLUMETRIC DEFORMABLE REGISTRATION
A method and system for automated deformable registration of an organ from medical images includes generating segmentations of the organ by processing a first and second series of images corresponding to different organ states using a first trained CNN. A second trained CNN processes the first and second series of images and the segmentations to deformably register the second series of images to the first series of images. The second trained CNN predicts a displacement field by minimizing a registration loss function, where the displacement field maximizes colocalization of the organ between the different states.
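The registration loss minimised by the second CNN is typically a similarity term plus a smoothness penalty on the predicted displacement field. The abstract does not give the exact terms, so the sketch below assumes a common choice (mean squared error plus a gradient penalty); `smooth_weight` is a hypothetical hyperparameter.

```python
import numpy as np

def registration_loss(fixed, warped, disp, smooth_weight=0.01):
    # Similarity term: how well the warped moving image matches the fixed
    # image (maximizing organ colocalization between the two states).
    similarity = ((fixed - warped) ** 2).mean()
    # Regularization term: penalize spatial gradients of the displacement
    # field so the predicted deformation stays smooth.
    grads = [np.diff(disp, axis=a) for a in range(disp.ndim - 1)]
    smoothness = sum((g ** 2).mean() for g in grads)
    return similarity + smooth_weight * smoothness
```

During training, the network's weights are updated to minimise this quantity; the segmentations from the first CNN can additionally restrict the similarity term to the organ of interest.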
Information processing apparatus, information processing method, and program
Provided are an information processing apparatus, an information processing method, and a program capable of accumulating appropriate relearning data. An information processing apparatus includes an input unit that inputs input data to a learned model acquired in advance through machine learning using learning data, an acquisition unit that acquires output data output from the learned model through the input using the input unit, a reception unit that receives correction performed by a user for the output data acquired by the acquisition unit, and a storage controller that performs control for storing, as relearning data of the learned model, the input data and the output data that reflects the correction received by the reception unit in a storage unit in a case where a value indicating a correction amount acquired by performing the correction for the output data is equal to or greater than a threshold value.
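The storage-control logic above reduces to a simple rule: keep a sample for relearning only when the user's correction is large enough. A minimal sketch, with a hypothetical per-element correction measure (the patent does not fix how the correction amount is computed):

```python
def correction_amount(output, corrected):
    # Illustrative correction measure: number of elements the user changed.
    return sum(a != b for a, b in zip(output, corrected))

class RelearnStore:
    # Stores (input data, corrected output) pairs as relearning data only
    # when the correction amount meets or exceeds the threshold value.
    def __init__(self, threshold):
        self.threshold = threshold
        self.samples = []

    def submit(self, input_data, output, corrected):
        if correction_amount(output, corrected) >= self.threshold:
            self.samples.append((input_data, corrected))
            return True  # accumulated as relearning data
        return False     # correction too small; discarded
```

The threshold keeps trivially corrected (already near-correct) outputs out of the relearning set, so retraining focuses on cases the model got meaningfully wrong.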
System and Method for Predicting the Risk of Future Lung Cancer
Risk prediction models are trained and deployed to analyze images, such as computed tomography scans, for predicting risk of lung cancer (e.g., current or future risk of lung cancer) for one or more subjects. Individual risk prediction models are trained on nodule-specific and non-nodule-specific features, including longitudinal nodule-specific and longitudinal non-nodule-specific features, such that each risk prediction model can predict risk of lung cancer across different time horizons. Such risk prediction models are useful for developing preventive therapies for lung cancer by enabling clinical trial enrichment.
Ultrasound imaging apparatus with image selector
An ultrasound imaging system includes a cine buffer in which image frames produced during an examination are stored. A processor is programmed to select one or more image frames from the cine buffer for presentation to an operator for approval and inclusion in a patient record or other report. The operator can accept the proposed image frames or can select one or more other image frames from the cine buffer. The processor may select image frames at spaced intervals in the cine buffer for presentation. Alternatively, the processor compares image frames in the cine buffer with one or more target image frames; image frames that are similar to the target image frames are presented to the operator for confirmation. Alternatively, the processor can select image frames that contain a specific feature or that are similar to image frames previously selected by the operator when performing a particular type of examination.
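The similarity-based selection variant can be sketched in a few lines. The similarity measure here (negative mean squared error against the closest target frame) is an assumption for illustration; a real system might use a learned or feature-based similarity.

```python
import numpy as np

def select_similar_frames(cine_buffer, targets, top_k=2):
    # Rank buffered frames by similarity to any target frame and propose
    # the top_k candidates for operator confirmation.
    def score(frame):
        # Higher (closer to zero) is more similar to the best-matching target.
        return max(-((frame - t) ** 2).mean() for t in targets)

    order = sorted(range(len(cine_buffer)),
                   key=lambda i: score(cine_buffer[i]), reverse=True)
    return order[:top_k]
```

The returned indices identify candidate frames in the cine buffer; the operator then accepts them or picks alternatives, as described above.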