Patent classifications
G06V2201/031
DIGITAL TISSUE SEGMENTATION AND MAPPING WITH CONCURRENT SUBTYPING
Accurate tissue segmentation is performed without a priori knowledge of tissue type or other extrinsic information not found within the subject image, and may be combined with classification analysis so that diseased tissue is not only delineated within an image but also characterized in terms of disease type. In various embodiments, a source image is decomposed into smaller overlapping subimages such as square or rectangular tiles. A predictor such as a convolutional neural network produces tile-level classifications that are aggregated to produce a tissue segmentation and, in some embodiments, to classify the source image or a subregion thereof.
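The tile-and-aggregate scheme can be sketched as follows. The tile classifier here is a trivial intensity-threshold stand-in for the convolutional neural network, and the tile size, stride, and majority-vote aggregation rule are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def tile_coords(h, w, tile=4, stride=2):
    """Top-left corners of overlapping square tiles covering an h x w image."""
    return [(y, x) for y in range(0, h - tile + 1, stride)
                   for x in range(0, w - tile + 1, stride)]

def segment(image, classify, tile=4, stride=2, n_classes=2):
    """Aggregate tile-level classifications into a per-pixel label map by voting."""
    h, w = image.shape
    votes = np.zeros((n_classes, h, w))
    for y, x in tile_coords(h, w, tile, stride):
        cls = classify(image[y:y + tile, x:x + tile])  # stand-in for a CNN
        votes[cls, y:y + tile, x:x + tile] += 1        # each tile votes on its pixels
    return votes.argmax(axis=0)

# Toy "predictor": call a tile diseased (class 1) if its mean intensity > 0.5.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
labels = segment(img, lambda t: int(t.mean() > 0.5))
```

A slide-level classification could then be derived from the same tile votes, e.g. by thresholding the fraction of tiles assigned to the diseased class.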
Predictive use of quantitative imaging
The present disclosure provides systems and methods for predicting a disease state of a subject using ultrasound imaging and information ancillary to the ultrasound imaging. At least two quantitative measurements of the subject, including at least one taken using ultrasound imaging, can be identified as part of quantified information. One of the quantitative measurements can be compared to a first predetermined standard, included as part of the ancillary information, in order to identify a first initial value. Another of the quantitative measurements can be compared to a second predetermined standard, also included as part of the ancillary information, in order to identify a second initial value. The quantified information can then be correlated with the ancillary information using the first initial value and the second initial value to determine a final value that is predictive of a disease state of the subject.
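As a rough illustration of the two-comparison scheme, the sketch below scores each measurement against its predetermined standard and combines the resulting initial values into a final value. The threshold rule, the weights, and the example clinical quantities named in the comment are all hypothetical; the abstract does not specify how the correlation is computed.

```python
def initial_value(measurement, standard):
    """Compare a quantitative measurement to a predetermined standard.
    Hypothetical rule: 1 if the value exceeds the standard's cutoff, else 0."""
    return 1 if measurement > standard else 0

def final_value(m1, m2, standard1, standard2, w1=0.6, w2=0.4):
    """Correlate the quantified information with the ancillary information:
    here, an illustrative weighted combination of the two initial values."""
    v1 = initial_value(m1, standard1)   # first initial value
    v2 = initial_value(m2, standard2)   # second initial value
    return w1 * v1 + w2 * v2

# e.g. an ultrasound elastography stiffness reading vs. a cutoff,
# plus a non-imaging lab value vs. its own cutoff (both hypothetical).
risk = final_value(9.1, 42.0, standard1=7.0, standard2=35.0)
```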
METHOD AND SYSTEM FOR AUTOMATICALLY DETECTING ANATOMICAL STRUCTURES IN A MEDICAL IMAGE
The invention relates to a computer-implemented method for automatically detecting anatomical structures (3) in a medical image (1) of a subject, the method comprising applying an object detector function (4) to the medical image, wherein the object detector function performs the steps of: (A) applying a first neural network (40) to the medical image, wherein the first neural network is trained to detect a first plurality of classes of larger-sized anatomical structures (3a), thereby generating as output the coordinates of at least one first bounding box (51) and a confidence score that it contains a larger-sized anatomical structure; (B) cropping (42) the medical image to the first bounding box, thereby generating a cropped image (11) containing the image content within the first bounding box (51); and (C) applying a second neural network (44) to the cropped image, wherein the second neural network is trained to detect at least one second class of smaller-sized anatomical structures (3b), thereby generating as output the coordinates of at least one second bounding box (54) and a confidence score that it contains a smaller-sized anatomical structure.
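Steps (A)-(C) amount to a coarse-to-fine detection cascade: detect a large structure, crop to its box, detect a small structure inside the crop, then map the second box back to image coordinates. A minimal sketch follows, with trivial stubs standing in for the two trained neural networks; the (x1, y1, x2, y2) box convention is an assumption.

```python
import numpy as np

def two_stage_detect(image, detect_large, detect_small):
    """Coarse-to-fine detection: find a larger-sized structure, crop to its
    bounding box, run a second detector on the crop, and map its box back."""
    (x1, y1, x2, y2), conf1 = detect_large(image)      # step (A): first box
    crop = image[y1:y2, x1:x2]                         # step (B): crop to it
    (sx1, sy1, sx2, sy2), conf2 = detect_small(crop)   # step (C): second box
    # Translate the second box from crop coordinates to image coordinates.
    box2 = (x1 + sx1, y1 + sy1, x1 + sx2, y1 + sy2)
    return ((x1, y1, x2, y2), conf1), (box2, conf2)

# Stub detectors standing in for the two trained neural networks.
img = np.zeros((100, 100))
large = lambda im: ((20, 30, 80, 90), 0.95)   # e.g. a heart chamber
small = lambda im: ((10, 5, 20, 15), 0.88)    # e.g. a valve, in crop coords
big, sm = two_stage_detect(img, large, small)
```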
PROGRAM, INFORMATION PROCESSING METHOD, METHOD FOR GENERATING LEARNING MODEL, METHOD FOR RELEARNING LEARNING MODEL, AND INFORMATION PROCESSING SYSTEM
A program and related methods that make a catheter system easier to use. The program is stored on a non-transitory computer-readable medium (CRM) and, when executed by a computer processor, performs a process comprising: acquiring a tomographic image generated using a diagnostic imaging catheter inserted into a lumen organ; inputting the acquired tomographic image to a first model configured, when a tomographic image is input, to output the types of a plurality of objects included in the tomographic image and the ranges of the respective objects in association with each other; and outputting the types and ranges of the objects output from the first model.
Apparatus and method for x-ray data generation
The apparatus for X-ray data generation according to an embodiment of the inventive concept includes a processor that receives 3D data to generate output data, and a buffer. The processor includes an extraction unit that extracts raw object data from the 3D data and projects it onto a 2D plane to generate first object data; an augmentation unit that performs data augmentation on the first object data to generate second object data; a composition unit that synthesizes the second object data with background data to generate composite data; and a post-processing unit that post-processes the composite data to generate the output data. The buffer stores a plurality of parameters related to the generation of the first object data, the second object data, the composite data, and the output data.
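The extraction-augmentation-composition-post-processing pipeline might look like the following sketch. The parallel-ray sum projection, flip-only augmentation, additive composition, and intensity clipping are simplified stand-ins for whatever each unit actually implements.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(volume, axis=0):
    """Extraction: project raw 3D object data onto a 2D plane (a parallel-ray
    sum, a simple stand-in for an X-ray forward projection)."""
    return volume.sum(axis=axis)

def augment(obj2d):
    """Augmentation: just a horizontal flip here; a real pipeline would also
    rotate, scale, add noise, etc."""
    return obj2d[:, ::-1]

def compose(obj2d, background):
    """Composition: overlay the projected object on background data."""
    return background + obj2d

def postprocess(img, max_val=255.0):
    """Post-processing: clip the composite to the valid intensity range."""
    return np.clip(img, 0.0, max_val)

volume = np.ones((4, 8, 8))             # toy 3D object data
first = project(volume)                 # first object data (8 x 8 plane)
second = augment(first)                 # second object data
background = rng.uniform(0, 10, (8, 8)) # toy background data
out = postprocess(compose(second, background))  # output data
```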
SYSTEM AND METHOD FOR EARLY DIAGNOSTICS AND PROGNOSTICS OF MILD COGNITIVE IMPAIRMENT USING HYBRID MACHINE LEARNING
A system and method for predicting mild cognitive impairment (MCI) related diagnoses and prognoses utilizing hybrid machine learning. More specifically, the system and method produce predictions of MCI conversion to dementia and the related prognosis. Using available medical imaging and non-imaging data, a diagnosis and prognosis model is trained using transfer learning. A platform may then receive a request from a clinician for a target patient's diagnosis or prognosis. The target patient's medical data is retrieved and used to create a model for the target patient. Details of the target patient's model and the diagnosis and prognosis model are then compared, a prediction is generated, and the prediction is returned to the clinician. As new medical data becomes available, it is fed into the respective model to improve accuracy and update predictions.
INTELLIGENT MULTI-SCALE MEDICAL IMAGE LANDMARK DETECTION
Intelligent multi-scale image parsing determines the optimal size of each observation by an artificial agent at a given point in time while searching for an anatomical landmark. The artificial agent begins searching the image data with a coarse field-of-view and iteratively decreases the field-of-view to locate the anatomical landmark. After searching at a coarse field-of-view, the artificial agent increases resolution to a finer field-of-view to analyze context and appearance factors and converge on the anatomical landmark. The artificial agent determines the applicable context and appearance factors at each effective scale.
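The coarse-to-fine search can be illustrated by a greedy agent that moves in fixed steps at each scale and shrinks its step (its effective field-of-view) once no move improves its score. The score function below is a synthetic stand-in for the learned context and appearance model, and the scale schedule is an illustrative assumption.

```python
import numpy as np

def find_landmark(score, start, scales=(8, 4, 2, 1), steps=10):
    """Greedy multi-scale search: at each scale the agent compares the score at
    neighboring positions one step away and moves to the best; when no move
    improves the score, it switches to a finer scale and repeats."""
    pos = np.array(start)
    for scale in scales:
        for _ in range(steps):
            candidates = [pos + scale * np.array(d)
                          for d in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]]
            best = max(candidates, key=lambda p: score(tuple(p)))
            if np.array_equal(best, pos):
                break            # converged at this scale; go finer
            pos = best
    return tuple(pos)

# Toy "appearance" score peaked at the landmark location (37, 21).
target = np.array([37, 21])
score = lambda p: -np.linalg.norm(np.array(p) - target)
found = find_landmark(score, start=(0, 0))
```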
Quality Control of Automated Whole-slide Analyses
The subject disclosure presents systems and methods for automatically selecting meaningful regions on a whole-slide image and performing quality control on the resulting collection of fields of view (FOVs). Density maps may be generated that quantify the local density of detection results. These density (heat) maps, as well as combinations of maps (such as a local sum, ratio, etc.), may be provided as input to an automated FOV selection operation. The selection operation may select regions of each heat map that represent extreme and average representative regions, based on one or more rules. The rules may be defined so as to generate the list of candidate FOVs, and may generally be formulated such that the FOVs chosen for quality control are the ones that require the most scrutiny and will benefit the most from assessment by an expert observer.
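A minimal sketch of the density-map-and-rules idea: detections are binned into a grid, and candidate FOVs are picked as the densest, sparsest, and most average cells. The cell size and the specific selection rule are illustrative assumptions, not the disclosure's actual rules.

```python
import numpy as np

def local_density(detections, shape, cell=4):
    """Bin the slide into cells and count detections per cell (a density map)."""
    h, w = shape
    grid = np.zeros((h // cell, w // cell))
    for y, x in detections:
        grid[y // cell, x // cell] += 1
    return grid

def select_fovs(density, k=1):
    """Rule-based FOV selection: pick the k densest, k sparsest, and k most
    average cells as candidate fields of view for quality control."""
    flat = density.ravel()
    order = np.argsort(flat)
    avg = np.argsort(np.abs(flat - flat.mean()))[:k]
    picks = np.concatenate([order[-k:], order[:k], avg])
    return [tuple(np.unravel_index(i, density.shape)) for i in picks]

dets = [(1, 1), (1, 2), (2, 1), (9, 9)]    # toy detection coordinates
dmap = local_density(dets, (12, 12), cell=4)
fovs = select_fovs(dmap)                   # densest cell listed first
```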
Systems and methods for machine learning based physiological motion measurement
A system for physiological motion measurement is provided. The system may acquire a reference image of a region of interest (ROI) corresponding to a reference motion phase and a target image of the ROI corresponding to a target motion phase, wherein the reference motion phase may be different from the target motion phase. The system may identify one or more feature points relating to the ROI from the reference image, and determine a motion field of the feature points from the reference motion phase to the target motion phase using a motion prediction model. An input of the motion prediction model may include at least the reference image and the target image. The system may further determine a physiological condition of the ROI based on the motion field.
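The motion-field step can be sketched as below. The uniform-shift lambda is a stand-in for the learned motion prediction model, and the hypokinesis read-out is a hypothetical example of deriving a physiological condition from the field; neither is specified by the abstract.

```python
import numpy as np

def motion_field(points, predict):
    """Displacement of each feature point from the reference motion phase to
    the target motion phase, as produced by a motion prediction model."""
    return np.array([predict(p) for p in points])

def assess(field, threshold=1.0):
    """Toy physiological read-out: mean displacement magnitude vs. a cutoff
    (e.g. reduced wall motion below the threshold could flag hypokinesis)."""
    mean_mag = float(np.linalg.norm(field, axis=1).mean())
    return mean_mag, bool(mean_mag < threshold)

# Stub standing in for the learned motion predictor: a uniform 2-pixel shift.
pts = np.array([[10.0, 10.0], [20.0, 15.0], [30.0, 12.0]])  # feature points
field = motion_field(pts, lambda p: np.array([2.0, 0.0]))
mean_disp, hypokinetic = assess(field)
```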
Photoacoustic image evaluation apparatus, method, and program, and photoacoustic image generation apparatus
A photoacoustic image evaluation apparatus includes a processor configured to: acquire a first photoacoustic image generated at a first point in time and a second photoacoustic image generated at a second point in time before the first point in time, the first and second photoacoustic images being generated by detecting photoacoustic waves produced inside a subject, who has undergone blood vessel regeneration treatment, by emission of light into the subject; acquire a blood vessel regeneration index, which indicates the state of blood vessels resulting from the regeneration treatment, based on a difference between a blood vessel included in the first photoacoustic image and a blood vessel included in the second photoacoustic image; and display the blood vessel regeneration index on a display.
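One hypothetical form of the regeneration index is a relative change in segmented vessel area between the two time points, sketched below. The abstract does not specify how the index is computed, so the formula and the binary vessel masks are assumptions.

```python
import numpy as np

def vessel_area(mask):
    """Number of vessel pixels segmented from a photoacoustic image."""
    return int(mask.sum())

def regeneration_index(current_mask, earlier_mask):
    """Hypothetical index: relative change in vessel area between the later
    (first) image and the earlier (second) image."""
    before = vessel_area(earlier_mask)
    after = vessel_area(current_mask)
    return (after - before) / max(before, 1)

earlier = np.zeros((8, 8))
earlier[3:5, :] = 1          # 16 vessel pixels at the earlier time point
current = np.zeros((8, 8))
current[3:6, :] = 1          # 24 vessel pixels at the later time point
idx = regeneration_index(current, earlier)
```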