Patent classifications
G06T2207/30064
Medical image aided diagnosis method and system combining image recognition and report editing
A medical image aided diagnosis method and system combining image recognition and report editing. The method comprises the following steps: S1, establishing an image semantic expression knowledge graph of medical images; S2, obtaining a medical image of a patient, determining a region of interest on a two-dimensional image, and providing candidate focus options for the patient according to the image semantic expression knowledge graph and the region of interest; and S3, determining a focus type according to the region of interest and the candidate focus options, segmenting a lesion region according to the focus type, generating a structured report on the region of interest of the patient's medical image, and adding the lesion region and the corresponding image-semantic expression content to a corresponding focus image library. Because medical image recognition combines the image semantic expression knowledge graph with a variety of machine learning techniques, sample images can be deeply accumulated, the knowledge graph can be continuously improved, and the aided-diagnosis capability for medical images can be enhanced.
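The abstract does not disclose how the knowledge graph is represented. A minimal sketch of step S2, assuming the graph maps anatomical region labels to candidate lesion (focus) types, might look like the following; all region and lesion names are hypothetical placeholders, not from the patent:

```python
# Hypothetical knowledge graph: anatomical region -> candidate focus types.
knowledge_graph = {
    "lung": ["nodule", "ground-glass opacity", "consolidation"],
    "liver": ["cyst", "hemangioma", "metastasis"],
}

def candidate_focus_options(region_of_interest_label):
    """Return candidate lesion options for a region of interest (step S2)."""
    return knowledge_graph.get(region_of_interest_label, [])

print(candidate_focus_options("lung"))
```

In step S3 the option confirmed by the reader would then select the segmentation model and structure the report, feeding the result back into the focus image library.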
MACHINE LEARNING DEVICE, ESTIMATION DEVICE, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND LEARNED MODEL
A machine learning device includes: a generation unit that generates, based on measurement data acquired before and after a deformation, a first shape model representing the shape of an object before the deformation and a second shape model representing the shape of the object after the deformation; and a learning unit that learns a relation between a feature amount, comprising difference values between each micro region constituting the first shape model and the other micro regions, and the displacement from each micro region of the first shape model to the corresponding micro region of the second shape model.
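The abstract leaves the model class of the learned relation open. A toy sketch of the idea, assuming micro regions are represented by centroid coordinates and approximating the relation with a linear least-squares fit (both assumptions, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
pre = rng.random((20, 3))            # micro-region centroids before deformation
true_shift = np.array([0.1, -0.2, 0.05])
post = pre + true_shift              # after deformation (toy rigid shift)

# Feature amount: difference values of each micro region vs. the others,
# summarized here as the offset from the mean centroid.
features = pre - pre.mean(axis=0)
displacement = post - pre            # learning target: per-region displacement

# Fit a linear map (with bias) from features to displacements.
X = np.hstack([features, np.ones((len(pre), 1))])
W, *_ = np.linalg.lstsq(X, displacement, rcond=None)
pred = X @ W
```

A trained model of this form can then estimate the deformed shape of a new object from its pre-deformation shape model alone.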
SYSTEMS AND METHODS FOR DETECTION AND STAGING OF PULMONARY FIBROSIS FROM IMAGE-ACQUIRED DATA
A method for ascertaining pulmonary fibrosis disease progression or treatment response includes obtaining a first set of computed tomography (CT) images of a lung and determining a first Pulmonary Surface Index (PSI) score for the lung by detecting a first actual lung boundary of the lung within the first set of CT images, determining a first approximated lung boundary within the first set of CT images, and determining the PSI score using inputs based on the first actual lung boundary and the first approximated lung boundary. The method also includes obtaining a second set of CT images of the lung and determining a second PSI score for the lung using inputs based on a second actual lung boundary and a second approximated lung boundary. The method also includes assessing pulmonary fibrosis treatment response or disease progression based on the first PSI score and the second PSI score.
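The abstract does not give the PSI formula. One plausible reading, offered purely as an assumption, is that the score relates the actual (possibly irregular, fibrotic) lung boundary to a smooth approximated boundary, e.g. as a perimeter ratio:

```python
import math

def perimeter(points):
    """Perimeter of a closed polygon given as (x, y) vertices."""
    return sum(
        math.dist(points[i], points[(i + 1) % len(points)])
        for i in range(len(points))
    )

def psi_score(actual_boundary, approximated_boundary):
    """Hypothetical PSI: ratio of actual to approximated boundary length.
    An irregular surface yields a longer actual boundary and a higher
    score. This formula is an illustrative assumption, not from the patent."""
    return perimeter(actual_boundary) / perimeter(approximated_boundary)

smooth = [(0, 0), (4, 0), (4, 4), (0, 4)]                   # approximated boundary
jagged = [(0, 0), (2, 1), (4, 0), (4, 4), (2, 3), (0, 4)]   # actual boundary
```

Progression or treatment response would then be assessed by comparing the score from the second CT series against the first.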
DEVICE AND METHOD FOR UNIVERSAL LESION DETECTION IN MEDICAL IMAGES
A method for performing a computer-aided diagnosis (CAD) for universal lesion detection includes: receiving a medical image; processing the medical image to predict lesion proposals and generating cropped feature maps corresponding to the lesion proposals; for each lesion proposal, applying a plurality of lesion detection classifiers to generate a plurality of lesion detection scores, the plurality of lesion detection classifiers including a whole-body classifier and one or more organ-specific classifiers; for each lesion proposal, applying an organ-gating classifier to generate a plurality of weighting coefficients corresponding to the plurality of lesion detection classifiers; and for each lesion proposal, performing weight gating on the plurality of lesion detection scores with the plurality of weighting coefficients to generate a comprehensive lesion detection score.
SYSTEMS AND METHODS FOR AUTOMATED DIGITAL IMAGE CONTENT EXTRACTION AND ANALYSIS
Systems and methods are configured to extract images from provided source data files and to preprocess such images for content-based image analysis. An image analysis system applies one or more machine-learning based models for identifying specific features within analyzed images, and for determining one or more measurements based at least in part on the identified features. Such measurements may be embodied as absolute measurements for determining an absolute distance between features, or relative measurements for determining a relative relationship between features. The determined measurements are input into one or more machine-learning based models for determining a classification for the image.
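The distinction drawn above between absolute and relative measurements can be sketched as follows, assuming detected features are returned as 2D pixel coordinates with a known scale (both assumptions for illustration):

```python
import math

def absolute_measurement(p1, p2, mm_per_pixel=1.0):
    """Absolute distance between two detected features, in mm."""
    return math.dist(p1, p2) * mm_per_pixel

def relative_measurement(d1, d2):
    """Relative relationship between two measurements, as a ratio."""
    return d1 / d2
```

Either kind of measurement then serves as an input feature to the downstream classification model.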
SYSTEMS AND METHODS FOR AUTOMATED DIGITAL IMAGE SELECTION AND PRE-PROCESSING FOR AUTOMATED CONTENT ANALYSIS
Systems and methods are configured for preprocessing of images for further content based analysis thereof. Such images are extracted from a source data file, by standardizing individual pages within a source data file as image data files, and identifying whether the image satisfies applicable size-based criteria, applicable color-based criteria, and applicable content-based criteria, among others, utilizing one or more machine-learning based models. Various systems and methods may identify particular features within the extracted images to facilitate further image-based analysis based on the identified features.
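The criteria checks might be sketched as a simple filter, assuming an image is summarized by its pixel dimensions and channel count; the actual thresholds and criteria are not disclosed in this abstract:

```python
def satisfies_criteria(width, height, channels,
                       min_size=(64, 64), require_color=True):
    """Illustrative size-based and color-based screening of an extracted image."""
    size_ok = width >= min_size[0] and height >= min_size[1]
    color_ok = channels >= 3 if require_color else True
    return size_ok and color_ok
```

Content-based criteria would add a machine-learning model on top of such cheap structural checks.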
DYNAMIC 3D LUNG MAP VIEW FOR TOOL NAVIGATION INSIDE THE LUNG
A method for implementing a dynamic three-dimensional lung map view for navigating a probe inside a patient's lungs includes loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images, inserting the probe into a patient's airways, registering a sensed location of the probe with the planned pathway, selecting a target in the navigation plan, presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe, navigating the probe through the airways of the patient's lungs toward the target, iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe, and updating the presented view by removing at least a part of an object forming part of the 3D model.
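The registration step, at its simplest, snaps the sensed probe location to the planned pathway. A sketch assuming the pathway is a polyline of 3D waypoints (the representation is an assumption):

```python
import math

def register_probe(sensed, pathway):
    """Return the pathway waypoint closest to the sensed probe location."""
    return min(pathway, key=lambda p: math.dist(p, sensed))

pathway = [(0, 0, 0), (1, 0, 0), (2, 1, 0), (3, 2, 1)]
```

The presented view would then be re-centered on the registered location at each iteration, removing occluding parts of the 3D model as needed.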
Systems and methods for integrating tomographic image reconstruction and radiomics using neural networks
Computed tomography (CT) screening, diagnosis, and other image analysis tasks are performed using one or more networks and/or algorithms that either integrate complementary tomographic image reconstruction and radiomics or map tomographic raw data directly to diagnostic findings within a machine learning framework. One or more reconstruction networks are trained to reconstruct tomographic images from a training set of CT projection data. One or more radiomics networks are trained to extract features from the tomographic images and associated training diagnostic data. The networks and algorithms are integrated into an end-to-end network and trained. A set of tomographic data, e.g., CT projection data, together with other relevant information about an individual, is input to the end-to-end network, which produces a potential diagnosis for the individual based on the features it extracts. The systems and methods can be applied to CT projection data, MRI data, nuclear imaging data, ultrasound signals, optical data, other types of tomographic data, or combinations thereof.
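The end-to-end idea amounts to composing a reconstruction stage with a radiomics stage so the pair can be trained jointly. A toy sketch, where backprojection and a logistic readout stand in for the trained networks (both stages are placeholders, not the patent's models):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((32, 64))              # toy forward projection operator

def reconstruct(projections):
    """Placeholder reconstruction stage: simple backprojection."""
    return A.T @ projections

def radiomics_features(image):
    """Placeholder radiomics stage: a few summary features."""
    return np.array([image.mean(), image.std(), image.max()])

def diagnose(projections, w, b=0.0):
    """Chain the stages and emit a probability-like diagnostic score."""
    z = radiomics_features(reconstruct(projections)) @ w + b
    return 1.0 / (1.0 + np.exp(-z))

score = diagnose(rng.random(32), w=np.array([0.1, -0.2, 0.05]))
```

Because the two stages are differentiable when implemented as networks, gradients from the diagnostic loss can flow back through the reconstruction, which is the point of integrating them end to end.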