Workflow, system and method for motion compensation in ultrasound procedures
An ultrasound imaging device (10) with an ultrasound probe (12) acquires a live ultrasound image which is displayed with a contour (62) or reference image (60) registered with the live ultrasound image using a composite transform (42). To update the composite transform, the ultrasound imaging device acquires a baseline three-dimensional ultrasound (3D-US) image (66) tagged with a corresponding baseline orientation of the ultrasound probe measured by a probe tracker, and one or more reference 3D-US images (70) each tagged with a corresponding reference orientation. Transforms (54) are computed to spatially register each reference 3D-US image with the baseline 3D-US image. A closest reference 3D-US image is determined whose corresponding orientation is closest to a current orientation of the ultrasound probe as measured by the probe tracker. The composite transform is updated to include the transform to spatially register the closest reference 3D-US image to the baseline 3D-US image.
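As an illustration of the orientation-matching step described above, here is a minimal sketch, assuming probe orientations are tagged as unit quaternions and all transforms are 4x4 homogeneous matrices; the function and field names are hypothetical, not taken from the patent.

```python
# Illustrative sketch only: pick the reference 3D-US image whose tagged probe orientation
# is closest to the current probe orientation, then fold its reference-to-baseline
# registration into the composite transform.
import numpy as np

def quaternion_angle(q1, q2):
    """Angular distance (radians) between two unit quaternions."""
    dot = abs(float(np.dot(q1, q2)))
    return 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))

def update_composite_transform(current_orientation, references, base_transform):
    """references: list of dicts with a tagged 'orientation' and a 'to_baseline' transform."""
    closest = min(references,
                  key=lambda r: quaternion_angle(r["orientation"], current_orientation))
    # Composite transform: reference->baseline registration composed with the
    # existing baseline->display transform.
    return base_transform @ closest["to_baseline"], closest

# Toy usage: two tagged reference images, one near the current probe pose.
refs = [
    {"orientation": np.array([1.0, 0.0, 0.0, 0.0]), "to_baseline": np.eye(4)},
    {"orientation": np.array([0.7071, 0.0, 0.7071, 0.0]), "to_baseline": np.eye(4)},
]
composite, chosen = update_composite_transform(
    np.array([0.99, 0.0, 0.1, 0.0]), refs, np.eye(4))
```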
Prostate cancer tissue image classification with deep learning
The method of the present invention classifies the nuclei in prostate tissue images with a trained deep learning network and uses said nuclear classification to classify regions, such as glandular regions, according to their malignancy grade. The method according to the present disclosure also trains a deep learning network to identify the category of each nucleus in prostate tissue image data, said category representing the malignancy grade of the tissue surrounding that nucleus. The method of the present disclosure automatically segments the glands and identifies the nuclei in a prostate tissue data set. Said segmented glands are assigned a category by at least one domain expert, and said category is then used to automatically assign a category to each nucleus corresponding to the category of that nucleus's surrounding tissue. A multitude of windows, each window surrounding a nucleus, comprises the training data for the deep learning network.
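A minimal sketch of the training-data construction described above, assuming nuclei are given as (row, col) centroids and gland grades are available as a per-pixel label map; the window size and label-propagation rule are my assumptions for illustration, not the patent's exact procedure.

```python
import numpy as np

def extract_training_windows(image, nuclei, gland_labels, window=64):
    """Cut a window around each nucleus and label it with the expert-assigned grade
    of the surrounding tissue (read from a per-pixel gland label map)."""
    half = window // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    samples = []
    for (r, c) in nuclei:
        patch = padded[r:r + window, c:c + window]   # window centered on the nucleus
        label = int(gland_labels[r, c])              # grade of tissue around this nucleus
        samples.append((patch, label))
    return samples

# Toy example: random RGB tile, two nuclei, label map with a single "grade 3" region.
img = np.random.rand(256, 256, 3)
labels = np.zeros((256, 256), dtype=int)
labels[100:200, 100:200] = 3
data = extract_training_windows(img, [(120, 150), (40, 60)], labels)
```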
System of deep learning neural network in prostate cancer bone metastasis identification based on whole body bone scan images
A system for identifying prostate cancer bone metastasis from whole-body bone scan images with deep learning neural networks includes a pre-processing module for receiving the input whole-body bone scan images and a neural network module for detecting whether a prostate cancer bone metastasis is present. The neural network module includes: a chest network module that establishes a first-stage Faster R-CNN to segment chest-region training images from the input whole-body bone scan images and uses those training images to train a second-stage Faster R-CNN that categorizes cancerous bone metastasis lesions; and a pelvis network module that establishes a first-stage Faster R-CNN to segment pelvis-region training images from the input whole-body bone scan images and uses those training images to train a convolutional neural network that categorizes whether the image shows a bone metastasis.
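A hedged sketch of the two-stage idea: a first Faster R-CNN localizes a body region (chest or pelvis) in the whole-body bone scan, and a second network runs on the cropped region to detect metastasis lesions. Off-the-shelf torchvision models stand in for the patent's networks; the thresholds and guards are placeholders.

```python
import torch
import torchvision

# Untrained stand-ins; weights=None / weights_backbone=None avoids any download.
stage1 = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=2)
stage2 = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=2)

def detect_metastasis(scan: torch.Tensor):
    """scan: (3, H, W) whole-body bone scan tensor with values in [0, 1]."""
    stage1.eval(); stage2.eval()
    with torch.no_grad():
        region = stage1([scan])[0]                  # stage 1: find the chest/pelvis box
        if len(region["boxes"]) == 0:
            return None
        x1, y1, x2, y2 = region["boxes"][0].int().tolist()
        crop = scan[:, y1:y2, x1:x2]                # stage 2 sees only the cropped region
        if crop.shape[1] < 2 or crop.shape[2] < 2:  # skip degenerate crops
            return None
        lesions = stage2([crop])[0]                 # boxes/scores for candidate metastases
    return lesions

result = detect_metastasis(torch.rand(3, 512, 256))
```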
SYSTEMS AND METHODS FOR ANALYSIS OF PROCESSING ELECTRONIC IMAGES WITH FLEXIBLE ALGORITHMIC PROCESSING
A method may process an electronic image corresponding to a medical sample associated with a patient. The method may include receiving a selection of one or more artificial intelligence (AI) algorithms; receiving one or more whole slide images of a medical sample associated with a patient, the whole slide images being stored in a first container and originating from a first user; performing a task on the whole slide images using the one or more selected AI algorithms, the task comprising determining a characteristic of the medical sample in the whole slide images; generating, based on that characteristic, metadata associated with the whole slide images; and storing the metadata in a second container.
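A schematic sketch of that flow, using plain dictionaries for the two "containers" and callables for the selectable AI algorithms; the names and data structures are illustrative, not the claimed system's actual interfaces.

```python
from typing import Callable, Dict, List

def process_slides(slide_container: Dict[str, bytes],
                   algorithms: List[Callable[[bytes], dict]],
                   metadata_container: Dict[str, dict]) -> None:
    """Run each selected algorithm on each whole slide image from the first container,
    derive a characteristic, and store the resulting metadata in a second container."""
    for slide_id, slide_bytes in slide_container.items():
        for algo in algorithms:
            characteristic = algo(slide_bytes)   # e.g. tumor presence, grade, ...
            metadata_container.setdefault(slide_id, {}).update(characteristic)

# Toy usage: one fake slide and one fake "AI algorithm".
slides = {"slide_001": b"...whole slide image bytes..."}
meta: Dict[str, dict] = {}
process_slides(slides, [lambda wsi: {"suspicious": len(wsi) % 2 == 0}], meta)
```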
TRACKERLESS 2D ULTRASOUND FRAME TO 3D IMAGE VOLUME REGISTRATION
One embodiment provides an apparatus for registering a two dimensional (2D) ultrasound (US) frame and a three dimensional (3D) magnetic resonance (MR) volume. The apparatus includes a first deep neural network (DNN) and an image fusion management circuitry. The first DNN is configured to determine a 2D US pose vector based, at least in part, on 2D US frame data. The image fusion management circuitry is configured to register the 2D US frame data and a 3D MR volume data. The registering is based, at least in part, on the 2D US pose vector.
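A sketch under stated assumptions: a small CNN regresses a 6-DoF pose vector (three translations, three rotation angles) from a 2D US frame, and that pose would then parameterize where the frame sits inside the 3D MR volume. The architecture and pose parameterization are placeholders, not the patent's network.

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Tiny CNN that maps a single-channel 2D US frame to a 6-element pose vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 6)   # tx, ty, tz, rx, ry, rz

    def forward(self, us_frame):
        return self.head(self.features(us_frame).flatten(1))

model = PoseRegressor()
pose = model(torch.rand(1, 1, 128, 128))   # predicted 2D US pose vector, shape (1, 6)
```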
SYSTEMS, METHODS, AND APPARATUSES FOR GENERATING PRE-TRAINED MODELS FOR nnU-Net THROUGH THE USE OF IMPROVED TRANSFER LEARNING TECHNIQUES
Described herein are means for generating pre-trained models for nnU-Net through the use of improved transfer learning techniques, in which the pre-trained models are then utilized for the processing of medical imaging. According to a particular embodiment, there is a system specially configured for segmenting medical images, in which such a system includes: a memory to store instructions; a processor to execute the instructions stored in the memory; wherein the system is specially configured to: execute instructions via the processor for executing a pre-trained model from Models Genesis within a nnU-Net framework; execute instructions via the processor for learning generic anatomical patterns within the executing Models Genesis through self-supervised learning; execute instructions via the processor for transforming an original image using distortion and cutout-based methods; execute instructions via the processor for learning the reconstruction of the original image from the transformed image using an encoder-decoder architecture of the nnU-Net framework to identify the generic anatomical representation from the transformed image by recovering the original image; and wherein the architecture determined by the nnU-Net framework is utilized with Models Genesis and is trained to minimize the L2 distance between the prediction and ground truth. Other related embodiments are disclosed.
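A minimal illustration of the self-supervised step described above: distort or cut out part of an image and train an encoder-decoder to recover the original, minimizing the L2 distance between prediction and ground truth. The tiny network below is only a stand-in for the nnU-Net-determined architecture, and the cutout transform is one simplified Models-Genesis-style distortion.

```python
import torch
import torch.nn as nn

def cutout(x: torch.Tensor, size: int = 16) -> torch.Tensor:
    """Zero a random square region of each image (a simple cutout-based transformation)."""
    x = x.clone()
    _, _, h, w = x.shape
    top = torch.randint(0, h - size, (1,)).item()
    left = torch.randint(0, w - size, (1,)).item()
    x[:, :, top:top + size, left:left + size] = 0.0
    return x

# Stand-in encoder-decoder; the real architecture would be chosen by the nnU-Net framework.
encoder_decoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(encoder_decoder.parameters(), lr=1e-3)

original = torch.rand(4, 1, 64, 64)                    # stand-in for medical image patches
transformed = cutout(original)                         # distorted input
prediction = encoder_decoder(transformed)              # reconstruct the original image
loss = nn.functional.mse_loss(prediction, original)    # L2 between prediction and ground truth
optimizer.zero_grad()
loss.backward()
optimizer.step()
```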
Automated co-registration of prostate MRI data
Medical imaging analysis systems are configured to perform automatic image registration algorithms that perform three-dimensional (3D), affine, and/or intensity-based co-registration of magnetic resonance imaging (MRI) data, such as multiparametric MRI (mpMRI) data, using mutual information (MI) as a similarity metric. An apparatus comprises a computer-readable storage medium storing a plurality of imaging series of magnetic resonance imaging (MRI) data for imaged tissue of a patient; and a processor coupled to the computer-readable storage medium. The processor is configured to receive the imaging series of MRI data; identify a volume of interest (VOI) of each image of the imaging series of MRI data; compute registration parameters for the VOIs through the maximization of mutual information of the corrected VOIs; and register the VOIs using the computed registration parameters.
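A hedged sketch of 3D, intensity-based affine co-registration driven by mutual information, using SimpleITK as an example toolkit (the patent does not name a library); synthetic random volumes stand in for two mpMRI series, and the optimizer settings are placeholders.

```python
import numpy as np
import SimpleITK as sitk

# Synthetic stand-ins for two co-registered-to-be mpMRI volumes (VOIs).
fixed = sitk.GetImageFromArray(np.random.rand(32, 32, 32).astype(np.float32))
moving = sitk.GetImageFromArray(np.random.rand(32, 32, 32).astype(np.float32))

registration = sitk.ImageRegistrationMethod()
registration.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)  # MI similarity metric
registration.SetOptimizerAsGradientDescent(learningRate=0.1, numberOfIterations=50)
registration.SetInitialTransform(sitk.AffineTransform(3), inPlace=False)   # 3D affine model
registration.SetInterpolator(sitk.sitkLinear)

transform = registration.Execute(fixed, moving)           # computed registration parameters
registered = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```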
Artificial Intelligence-Based Assistant For Concurrent Review Of Needle Core Prostate Biopsies
One example method includes receiving a digital image of a needle core prostate biopsy, displaying, using a display device, a magnified portion of the digital image, obtaining, from a deep learning model, Gleason scores corresponding to patches of the magnified portion of the digital image, and displaying, using the display device, a superimposed overlay on the magnified portion of the digital image based on the Gleason scores and corresponding confidence values of the Gleason scores, the superimposed overlay comprising one or more outlines corresponding to one or more Gleason scores associated with the magnified portion of the digital image and comprising image patches having colors based on a Gleason score of the Gleason scores corresponding to an underlying portion of the magnified portion of the digital image and a confidence value of the corresponding Gleason score.
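Illustrative only: one way to build a colored, confidence-weighted overlay from per-patch Gleason scores, standing in for the display step above. The color palette, patch size, and alpha rule are assumptions, not the patent's choices.

```python
import numpy as np

GLEASON_COLORS = {3: (255, 255, 0), 4: (255, 128, 0), 5: (255, 0, 0)}   # hypothetical palette

def build_overlay(shape, patch_results, patch=128):
    """patch_results: list of (row, col, gleason_score, confidence) per image patch."""
    overlay = np.zeros((*shape, 4), dtype=np.uint8)        # RGBA overlay image
    for r, c, score, conf in patch_results:
        color = GLEASON_COLORS.get(score, (128, 128, 128))
        alpha = int(255 * conf)                            # more confident -> more opaque
        overlay[r:r + patch, c:c + patch] = (*color, alpha)
    return overlay

# Two example patches: Gleason 3 at high confidence, Gleason 4 at lower confidence.
ov = build_overlay((512, 512), [(0, 0, 3, 0.9), (128, 256, 4, 0.6)])
```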
Automated detection and annotation of prostate cancer on histopathology slides
Automated, machine learning-based systems are described for the analysis and annotation (i.e., detection or delineation) of prostate cancer (PCa) on histologically-stained pathology slides of prostatectomy specimens. A technical framework is described for automating the annotation of predicted PCa that is based on, for example, automated spatial alignment and colorimetric analysis of both H&E and IHC whole-slide images (WSIs). The WSIs may, as one example, be stained with a particular triple-antibody cocktail against high-molecular weight cytokeratin (HMWCK), p63, and α-methylacyl CoA racemase (AMACR).
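A rough sketch of the colorimetric-analysis idea: separate stain channels on an IHC tile with color deconvolution and threshold the DAB channel as a crude positivity mask, assuming the IHC WSI has already been spatially aligned to the H&E WSI. The scikit-image routine and threshold are illustrative choices, not the framework's actual method.

```python
import numpy as np
from skimage.color import rgb2hed

def ihc_positivity_mask(ihc_tile: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """ihc_tile: RGB tile (H, W, 3) in [0, 1] from a spatially aligned IHC WSI."""
    hed = rgb2hed(ihc_tile)        # hematoxylin / eosin / DAB channels via color deconvolution
    dab = hed[..., 2]              # DAB channel ~ antibody staining intensity
    return dab > threshold         # boolean mask of predicted antibody-positive regions

mask = ihc_positivity_mask(np.random.rand(256, 256, 3))
```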
Method and system for multi-modal tissue classification
A method for multi-modal tissue classification of an anatomical tissue involves a generation of a tissue classification volume (40) of the anatomical tissue derived from a spatial registration and an image extraction of one or more MRI features of the anatomical tissue and one or more ultrasound image features of the anatomical tissue. The method further involves a classification of each voxel of the tissue classification volume (40) as one of a plurality of tissue types including a healthy tissue voxel and an unhealthy tissue voxel.
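A minimal sketch, assuming co-registered MRI and ultrasound feature volumes of the same shape: stack the per-voxel features and classify each voxel as healthy or unhealthy with an off-the-shelf classifier, which here stands in for whatever classifier the method actually uses; the feature choices and toy labels are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in feature volumes after spatial registration and feature extraction.
mri_features = np.random.rand(16, 16, 16, 2)   # e.g. ADC and T2 intensity per voxel
us_features = np.random.rand(16, 16, 16, 1)    # e.g. echo intensity per voxel

# One feature vector per voxel of the tissue classification volume.
voxels = np.concatenate([mri_features, us_features], axis=-1).reshape(-1, 3)
labels = np.random.randint(0, 2, len(voxels))  # toy ground truth: 0 healthy, 1 unhealthy

clf = RandomForestClassifier(n_estimators=10).fit(voxels, labels)
tissue_classification_volume = clf.predict(voxels).reshape(16, 16, 16)
```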