Patent classifications
G06T2207/30032
Systems and methods for selecting and deduplicating images of event indicators
A system for selecting images of an event indicator includes a processor and a memory storing instructions which, when executed, cause the system to: access images of a portion of a gastrointestinal tract captured by a capsule endoscopy device; for each of the images, access one or more scores indicating a presence of an event indicator; select seed images from among the images based on the one or more scores; deduplicate the seed images by removing images that show the same occurrence of the event indicator, where the deduplicating utilizes a consecutive-image tracker; and present the deduplicated seed images in a graphical user interface to display potential occurrences of the event indicator.
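The abstract does not disclose the tracker's internals, but the seed selection and consecutive-image deduplication can be sketched with a simple frame-index gap heuristic. The function names, the score threshold, and `max_gap` are all hypothetical choices for illustration:

```python
def select_seed_images(scores, threshold=0.8):
    """Select frame indices whose event-indicator score meets a threshold."""
    return [i for i, s in enumerate(scores) if s >= threshold]


def deduplicate_consecutive(seed_indices, max_gap=1):
    """Collapse runs of (near-)consecutive seed frames into one
    representative, treating a frame within max_gap of the previous
    seed as the same occurrence of the event indicator."""
    deduped = []
    prev = None
    for idx in seed_indices:
        if prev is None or idx - prev > max_gap:
            deduped.append(idx)  # a new occurrence starts here
        prev = idx
    return deduped
```

With scores `[0.1, 0.9, 0.95, 0.2, 0.85]`, frames 1 and 2 are adjacent and collapse into one occurrence, while frame 4 survives as a separate one.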
Systems and methods for comparing distance between embedding of images to threshold to classify images as containing the same or different occurrence of event indicators
The present disclosure relates to systems and methods for determining whether two images of a gastrointestinal tract (GIT) contain the same occurrence of an event indicator or different occurrences of an event indicator. An exemplary processing system includes at least one processor and at least one memory storing instructions. When the instructions are executed by the processor(s), they cause the processing system to access a first image and a second image of a portion of a GIT, where the first image and the second image contain at least one occurrence of an event indicator, and to classify the first image and the second image by a classification system configured to provide an indication of whether the first image and second image contain the same occurrence of the event indicator or contain different occurrences of the event indicator.
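The title describes comparing the distance between image embeddings to a threshold. A minimal sketch of that decision rule, assuming the embeddings come from some trained encoder (not shown) and that Euclidean distance with a hypothetical threshold is used:

```python
import math


def same_occurrence(emb_a, emb_b, threshold):
    """Classify two images as showing the same occurrence of an event
    indicator when their embeddings lie within `threshold` of each
    other in Euclidean distance."""
    return math.dist(emb_a, emb_b) < threshold
```

A real system would calibrate the threshold on labeled pairs; here it is a free parameter.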
Medical image processing system, medical image processing method, and program
The present disclosure relates to a medical image processing system, a medical image processing method, and a program that facilitate understanding of the criteria or reasoning behind a determination made by a machine learning model. An estimation unit estimates a classification of a medical image with use of a machine learning model. A first calculation unit calculates first ground information indicative of the estimation ground of the classification by a first explanation technique, and a second calculation unit calculates second ground information indicative of the estimation ground of the classification by a second explanation technique different from the first explanation technique. An output controlling unit controls output of a first explanation image based on the first ground information and a second explanation image based on the second ground information. The present disclosure can be applied to a medical image processing system.
Portable edge AI-assisted diagnosis and quality control system for gastrointestinal endoscopy
In a decision-support system for gastrointestinal (GI) endoscopy, convolutional neural networks (CNNs) are set up to perform decision-support tasks according to endoscopic images. Each learnable kernel used in the CNNs is advantageously modeled as a linear combination of a set of fixed kernels, simplifying kernel learning and giving a lightweight kernel model that reduces required computation resources. Further computation-resource reduction can be made by CNN model compression via knowledge distillation and by using multi-task CNNs. These reductions enable the decision-support system to be realized as an edge computing system near a site of performing endoscopic examinations. The system can be automatically configured for esophagogastroduodenoscopy (EGD) or colonoscopy. In the system, lesion-detection results and quality-control results can be seamlessly integrated to provide value-added results, which are more valuable to the endoscopist than considering the lesion-detection results and quality-control results separately.
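The lightweight kernel model can be illustrated in isolation: each kernel is a weighted sum of fixed basis kernels, so training only needs to learn the per-kernel coefficients rather than every kernel entry. This sketch uses plain lists and hypothetical 2x2 basis kernels; a real CNN would do this per channel inside the convolution layers:

```python
def combine_kernels(basis, coeffs):
    """Form a kernel as a linear combination of fixed basis kernels;
    only `coeffs` would be learned during training, keeping the
    learnable parameter count small."""
    size = len(basis[0])
    out = [[0.0] * size for _ in range(size)]
    for kernel, c in zip(basis, coeffs):
        for i in range(size):
            for j in range(size):
                out[i][j] += c * kernel[i][j]
    return out
```

For a k x k kernel built from m fixed bases, the learnable parameters drop from k*k to m per kernel.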
Image analysis processing apparatus, endoscope system, operation method of image analysis processing apparatus, and non-transitory computer readable medium
There is provided an image analysis processing apparatus including a processor, in which the processor acquires a plurality of types of analysis images used in image analysis, performs the image analysis on the analysis image in parallel for each type of analysis image, acquires a plurality of analysis results through the image analysis, and performs control of displaying, on a display, an analysis result display based on the plurality of analysis results and a display image based on at least one type of analysis image among the plurality of types of analysis images.
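The parallel per-type analysis can be sketched with a thread pool: one analyzer runs per type of analysis image, and the per-type results are gathered for a combined display. The analyzer names and the dictionary-based interface are assumptions for illustration:

```python
from concurrent.futures import ThreadPoolExecutor


def analyze_in_parallel(analysis_images, analyzers):
    """Run one analyzer per analysis-image type concurrently and
    collect the results keyed by type for a combined display."""
    with ThreadPoolExecutor() as pool:
        futures = {kind: pool.submit(analyzers[kind], image)
                   for kind, image in analysis_images.items()}
        return {kind: future.result() for kind, future in futures.items()}
```

The display step would then render the returned result dictionary alongside at least one of the analysis images, as the abstract describes.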
Endoscopic examination support apparatus, endoscopic examination support method, and recording medium
In the endoscopic examination support apparatus, the three-dimensional model generation means generates a three-dimensional model of a luminal organ in which an endoscope camera is placed, based on endoscopic images acquired by imaging an interior of the luminal organ with the endoscope camera. The unobserved area detection means detects an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model. The display image generation means generates a display image including information indicating an observation achievement degree for each of a plurality of sites of the luminal organ, based on the detection result of the unobserved area. The endoscopic examination support apparatus may be used to support a user's decision-making.
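The "observation achievement degree" per site can be sketched as a coverage fraction over the 3-D model's surface patches. Representing sites and patches as plain Python collections is a simplification; the patch identifiers and site names are hypothetical:

```python
def observation_coverage(site_patches, unobserved):
    """Observation achievement degree per site: the fraction of a
    site's surface patches (from the 3-D model) that were NOT
    flagged as unobserved by the detection step."""
    coverage = {}
    for site, patches in site_patches.items():
        unseen = unobserved.get(site, set()) & set(patches)
        coverage[site] = 1.0 - len(unseen) / len(patches)
    return coverage
```

A display image would then color each site by its coverage value to flag under-observed regions.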
Anatomical location detection of features of a gastrointestinal tract of a patient
Generating a structured medical record from endoscopy data includes obtaining image data including endoscopic images representing portions of a gastrointestinal (GI) tract of a patient; determining features to extract from the image data, the features each representing a physical parameter of the GI tract; extracting the features from the image data; generating anatomical location data specifying a location within the GI tract of a portion of the GI tract represented in the image data; associating the anatomical location data with images that represent the portion of the GI tract; and storing, in a node of a data store, data entries including the anatomical location data and the associated one or more images. The data store is configured to receive structured queries for the data entries in the data store and provide the data entries, including the extracted features, in response to receiving the structured queries.
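The node-per-location store with structured queries can be sketched with a small in-memory class. The class names, the `location` key, and the example feature names are hypothetical; the abstract does not specify a storage backend:

```python
from dataclasses import dataclass, field


@dataclass
class RecordNode:
    """One node of the structured record: an anatomical location,
    the images showing that portion of the GI tract, and the
    features extracted from them."""
    location: str
    images: list = field(default_factory=list)
    features: dict = field(default_factory=dict)


class EndoscopyRecordStore:
    """Data store keyed by anatomical location; answers structured
    queries with the matching data entries."""

    def __init__(self):
        self._nodes = {}

    def add(self, node):
        self._nodes[node.location] = node

    def query(self, location):
        return self._nodes.get(location)
```

A production system would likely back this with a database and support richer query predicates than exact location lookup.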
Systems for tracking disease progression in a patient
Systems and methods for tracking an evolution of a disease in a colon of a patient over time are configured for operations including receiving video data representing a colon of a patient; segmenting the video data into segments representing portions of the colon of the patient; extracting, from the video data based on the segmenting, a set of features representing locations in the colon of the patient; and registering the resulting feature vector as representing the colon of the patient for tracking the evolution of the disease in the colon of the patient. The system can be configured to predict disease progression and drug dosage for patients.
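The segment-extract-register pipeline can be sketched end to end. The segmenter and feature extractor are passed in as stand-ins for the actual models, and the registry is a plain dictionary keyed by patient; all names here are hypothetical:

```python
def register_colon_features(frames, segmenter, extract_features,
                            registry, patient_id):
    """Segment colonoscopy video into portions of the colon, extract
    per-portion features, and register the feature vector under the
    patient for longitudinal disease tracking."""
    segments = segmenter(frames)
    feature_vector = [extract_features(segment) for segment in segments]
    registry.setdefault(patient_id, []).append(feature_vector)
    return feature_vector
```

Appending one feature vector per procedure gives the time series over which disease evolution could be tracked.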
Systems and methods of feature detection within medical images
A method including receiving image data including a plurality of CT scan images of at least a portion of a subject; segmenting the CT scan images to identify portions of each image corresponding to the subject's colon; analyzing axial views of the segmented CT scan images to identify a candidate polyp using a first CNN; analyzing at least two of axial views, sagittal views, and coronal views of the CT scan images corresponding to the candidate polyp using a second model to classify the candidate polyp as a polyp or not a polyp; and generating a user interface that includes the classified candidate polyp.
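The two-stage structure — a first model proposing candidates from axial views, a second model classifying each candidate from multiple views — can be sketched with the models abstracted as callables. The function names and the lookup interface are assumptions for illustration:

```python
def screen_for_polyps(axial_views, propose, multi_view_lookup, classify):
    """Two-stage screening: `propose` suggests polyp candidates from
    each axial view; `classify` labels each candidate using multiple
    views (axial, sagittal, coronal) fetched for that candidate."""
    findings = []
    for view in axial_views:
        for candidate in propose(view):
            views = multi_view_lookup(candidate)
            findings.append((candidate, classify(views)))
    return findings
```

Splitting proposal and classification lets a cheap first-stage model scan every slice while a heavier multi-view model examines only the candidates.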
Systems and methods of deep learning for colorectal polyp screening
Disclosed are various embodiments of systems and methods of deep learning for colorectal polyp screening and providing a prediction of neoplasticity of a polyp. A video of a colonoscopy procedure can be captured. Frames from the video or images associated with the colonoscopy procedure can be extracted. A model for classifying objects that appear in the frames or the images can be obtained. A classification can be determined for a polyp that appears in at least one of the frames or images based on applying the frames or images to an input layer of the model.