
ANATOMICAL STRUCTURE COMPLEXITY DETERMINATION AND REPRESENTATION

Various of the disclosed embodiments contemplate systems and methods for assessing structural complexity within an intra-surgical environment. For example, in some embodiments, surface characteristics from three-dimensional models of a patient interior, such as a colon, bronchial tube, or esophagus, may be used to infer the surface's level of complexity. Once determined, complexity may inform a number of downstream operations, such as helping surgical operators identify complex regions requiring more thorough review, automatically recognizing healthy or unhealthy tissue states, etc. While some embodiments apply to generally cylindrical internal structures, such as a colon or branching pulmonary pathways, other embodiments may be used within other structures, such as inflated laparoscopic regions between organs, joints, etc. Various embodiments also consider graphical and feedback indicia for representing the complexity assessments.
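The abstract does not say how complexity is computed from surface characteristics. One plausible illustration (not from the patent) is to score a triangle-mesh patch by how much its face normals disagree, since a flat region has uniform normals while a folded or irregular region does not:

```python
import numpy as np

def surface_complexity(vertices, faces):
    """Score complexity as the dispersion of face normals.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Returns 0.0 for a flat patch, approaching 1.0 as normals disagree.
    """
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces, dtype=int)
    # Face normals via the cross product of two edge vectors.
    n = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # |mean normal| is 1 for a flat patch, smaller when normals diverge.
    return 1.0 - np.linalg.norm(n.mean(axis=0))

# A flat square (two coplanar triangles) scores zero complexity.
flat_score = surface_complexity(
    [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
    [[0, 1, 2], [0, 2, 3]],
)
```

Regions whose score exceeds some threshold could then be flagged for more thorough review, as the abstract suggests.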

Information processing apparatus, control method, and program

An information processing apparatus (2000) detects an abnormal region (30) from a moving image frame (14). The abnormal region (30) is a region that is estimated to represent an abnormal part inside a body of a subject. The information processing apparatus (2000) generates and outputs output information based on the number of detected abnormal regions (30).
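The output-generation step can be sketched as a function of the detection count. The threshold and message fields below are illustrative assumptions, not details from the abstract:

```python
def generate_output(abnormal_regions, alert_threshold=3):
    """Build output information from the number of detected abnormal
    regions (threshold and message format are assumed for illustration)."""
    count = len(abnormal_regions)
    return {
        "count": count,
        "message": (
            "no abnormal regions detected" if count == 0
            else f"{count} abnormal region(s) detected"
        ),
        "alert": count >= alert_threshold,
    }

info = generate_output([{"bbox": (10, 20, 40, 60)}, {"bbox": (5, 5, 15, 15)}])
```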

User-interface for visualization of endoscopy procedures

A user-interface for visualizing a colonoscopy procedure includes a video region and a navigational map upon which coverage annotations are displayed. A live video feed received from a colonoscope is displayed in the video region. The navigational map depicts longitudinal sections of a colon. The coverage annotations are presented on the navigational map and indicate whether one or more of the longitudinal sections is deemed adequately inspected or inadequately inspected during the colonoscopy procedure.
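The mapping from per-section inspection data to coverage annotations could be as simple as a thresholded lookup; the section names and the 80% adequacy cutoff below are assumptions for illustration only:

```python
def annotate_coverage(section_coverage, threshold=0.8):
    """Map each longitudinal section's coverage fraction to the
    adequate/inadequate annotation shown on the navigational map."""
    return {
        name: ("adequate" if fraction >= threshold else "inadequate")
        for name, fraction in section_coverage.items()
    }

annotations = annotate_coverage(
    {"cecum": 0.95, "ascending": 0.60, "transverse": 0.85}
)
```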

Method for detection and pathological classification of polyps via colonoscopy based on anchor-free technique
11954857 · 2024-04-09

A method for detection and pathological classification of polyps via colonoscopy based on an anchor-free technique includes: performing feature extraction on a preprocessed color endoscopic image; enhancing and extending the extracted features; decoding the feature information of the enhanced and extended features through an anchor-free detection algorithm to acquire a polyp prediction box and a foreground prediction mask; extracting global and local feature vectors from the extended feature and the foreground prediction mask, respectively; and combining the global feature vector with the local feature vector to predict the polyp type through a fully-connected layer. Through the present application, the type of polyps can be correctly predicted, and both the polyp detection rate and the accuracy of pathological classification are improved.
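The final fusion-and-classification step can be illustrated in isolation: concatenate the global and local feature vectors and pass the result through a single fully-connected layer with softmax. Dimensions and weights below are arbitrary placeholders, not values from the patent:

```python
import numpy as np

def classify_polyp(global_feat, local_feat, weights, bias):
    """Fuse global and local feature vectors, then apply one
    fully-connected layer and softmax to get class probabilities."""
    x = np.concatenate([global_feat, local_feat])   # fused feature vector
    logits = weights @ x + bias                     # fully-connected layer
    exp = np.exp(logits - logits.max())             # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
n_classes, g_dim, l_dim = 4, 16, 16
probs = classify_polyp(
    rng.standard_normal(g_dim),
    rng.standard_normal(l_dim),
    rng.standard_normal((n_classes, g_dim + l_dim)),
    np.zeros(n_classes),
)
```

The predicted polyp type would then be `probs.argmax()`.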

METHOD AND APPARATUS FOR REAL-TIME DETECTION OF POLYPS IN OPTICAL COLONOSCOPY

A method for performing real-time detection and display of polyps in optical colonoscopy includes: a) acquiring and displaying a plurality of real-time images of colon regions at a video stream frame rate, each real-time image comprising a plurality of color channels; b) selecting one single color channel per real-time image to obtain single color pixels; c) scanning the single color pixels across each real-time image with a sliding sub-window; d) for each position of the sliding sub-window, extracting a plurality of local features from the single color pixels of the real-time image; e) passing the extracted local features through a classifier to determine whether a polyp is present within the sliding sub-window; f) framing, in real time on the display, the colon regions corresponding to positions of the sliding sub-window where polyps are detected. A system for carrying out such a method is also provided.
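Steps (c) through (f) amount to a classic sliding-window scan. The sketch below uses a trivial mean-intensity "classifier" and assumed window/stride sizes purely to show the control flow; the patent's actual local features and classifier are not specified at this level:

```python
import numpy as np

def sliding_window_detect(channel, classifier, win=32, stride=16):
    """Scan a single color channel with a sliding sub-window and
    return (x, y, w, h) boxes wherever the classifier fires."""
    h, w = channel.shape
    hits = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = channel[y:y + win, x:x + win]
            if classifier(patch):
                hits.append((x, y, win, win))  # box to frame on the display
    return hits

# Toy single-channel image: a bright square on a dark background,
# "detected" by a simple mean-brightness threshold.
img = np.zeros((128, 128))
img[40:72, 40:72] = 1.0
boxes = sliding_window_detect(img, lambda p: p.mean() > 0.5)
```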

Endoscope system, medical image processing device, and operation method therefor
11978209 · 2024-05-07

A medical image processing device acquires a reference image and a captured image. The reference image is a medical image associated with boundary line information, relating to the boundary line between an abnormal region and a normal region, and landmark information, relating to a landmark that is a characteristic structure of the subject; the captured image is a medical image captured in real time. The device detects the landmark from the captured image, calculates a ratio of match between the landmarks included in the reference image and in the captured image, estimates a correspondence relationship between the reference image and the captured image on the basis of the ratio of match and the information regarding the landmarks in both images, and generates a superimposition image in which the boundary line associated with the reference image is superimposed on the captured image on the basis of that correspondence relationship.
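One plausible reading of the "ratio of match" is the fraction of reference landmarks that are also found in the captured frame. The sketch below simplifies landmarks to hashable labels, which is an assumption for illustration:

```python
def landmark_match_ratio(reference_landmarks, captured_landmarks):
    """Fraction of reference landmarks also detected in the captured
    image; 1.0 means every reference landmark was matched."""
    ref = set(reference_landmarks)
    if not ref:
        return 0.0
    return len(ref & set(captured_landmarks)) / len(ref)

ratio = landmark_match_ratio(
    ["fold_a", "vessel_b", "scar_c"],   # landmarks in the reference image
    ["vessel_b", "fold_a"],             # landmarks detected in this frame
)
```

A high ratio would justify estimating the correspondence and superimposing the reference boundary line onto the captured image.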

ENDOSCOPE SYSTEM
20190239736 · 2019-08-08

An endoscope system capable of setting the optimal balance of light source wavelengths in accordance with a diagnosis purpose is provided. An endoscope system includes a diagnosis purpose acquisition unit, a plurality of light sources with different light emission wavelengths, a light quantity ratio storage unit, a light quantity ratio selection unit, and a light source control unit. The diagnosis purpose acquisition unit acquires a diagnosis purpose. The light quantity ratio storage unit stores correspondence between the diagnosis purpose and a plurality of light quantity ratios with different balances of respective emission light quantities of the plurality of light sources. The light quantity ratio selection unit refers to the light quantity ratio storage unit and selects the light quantity ratio that is used for the acquired diagnosis purpose. The light source control unit controls the plurality of light sources to emit illumination light with the selected light quantity ratio.
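The storage, selection, and control units map naturally onto a lookup table keyed by diagnosis purpose. The purposes, wavelength names, and ratio values below are invented for illustration; the patent does not enumerate them:

```python
# Hypothetical light quantity ratio storage unit: each diagnosis
# purpose maps to a balance of emission quantities per light source.
LIGHT_RATIO_TABLE = {
    "screening":       {"violet": 0.1, "blue": 0.2, "green": 0.4, "red": 0.3},
    "vessel_contrast": {"violet": 0.4, "blue": 0.4, "green": 0.1, "red": 0.1},
}

def select_light_ratio(diagnosis_purpose):
    """Light quantity ratio selection unit: look up the ratio stored
    for the acquired diagnosis purpose."""
    return LIGHT_RATIO_TABLE[diagnosis_purpose]

ratio = select_light_ratio("vessel_contrast")
```

The light source control unit would then drive each source at its share of the total illumination.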

ENDOSCOPE IMAGE PROCESSING APPARATUS AND ENDOSCOPE IMAGE PROCESSING METHOD
20190239718 · 2019-08-08

An endoscope image processing apparatus includes a region-of-interest detection apparatus, configured to sequentially receive observation images obtained by performing image pickup of an object and to perform processing for detecting a region of interest in each observation image, and a processor. When the region-of-interest detection apparatus detects a region of interest, the processor calculates an appearance time period, i.e., the time elapsed since the region of interest appeared within the observation image, and starts emphasis processing for emphasizing the position of the region of interest within the observation image at the timing at which the appearance time period reaches a predetermined time period.
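The timing logic reduces to a simple elapsed-time check per frame; the 0.5-second hold period below is an assumed value, since the abstract only says "predetermined":

```python
def should_emphasize(appeared_at, now, hold=0.5):
    """Return True once the region of interest's appearance time
    period (now - appeared_at, in seconds) reaches the hold period,
    at which point emphasis processing starts."""
    return (now - appeared_at) >= hold

early = should_emphasize(appeared_at=10.0, now=10.3)  # before the hold period
late = should_emphasize(appeared_at=10.0, now=10.6)   # hold period reached
```

Delaying emphasis this way avoids flickering highlights on regions that appear for only a frame or two.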

AUTOMATED IDENTIFICATION OF TUMOR BUDS

Automated image analysis methods to identify and quantify tumor buds in a high resolution image of a section of a tumor that is stained using either pan-cytokeratin AE1/3 or hematoxylin and eosin (H&E) are disclosed. The methods may be used to aid and/or replace manual visual inspection for tumor buds and may be used to predict a clinically relevant outcome or treatment in some cases. The disclosed methods may be used for many different cancer types, such as colorectal cancer.

Systems and methods for training generative adversarial networks and use of trained generative adversarial networks

The present disclosure relates to computer-implemented systems and methods for training and using generative adversarial networks. In one implementation, a system for training a generative adversarial network may include at least one processor that may provide a first plurality of images including representations of a feature-of-interest and indicators of locations of the feature-of-interest and use the first plurality and indicators to train an object detection network. Further, the processor(s) may provide a second plurality of images including representations of the feature-of-interest, and apply the trained object detection network to the second plurality to produce a plurality of detections of the feature-of-interest. Additionally, the processor(s) may provide manually set verifications of true positives and false positives with respect to the plurality of detections, use the verifications to train a generative adversarial network, and retrain the generative adversarial network using at least one further set of images, further detections, and further manually set verifications.
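The data-preparation stage of this pipeline, pairing each detection with its manually set true-positive/false-positive verification, can be sketched independently of any particular network. The record fields below are assumptions for illustration; the networks themselves are out of scope for this sketch:

```python
def build_verification_set(detections, verifications):
    """Pair each detection from the trained object detection network
    with its manual true/false-positive verification, producing
    (patch, is_true_positive) examples for adversarial training.
    Detections without a verification are skipped."""
    return [
        (det["patch"], verifications[det["id"]])
        for det in detections
        if det["id"] in verifications
    ]

examples = build_verification_set(
    [{"id": 1, "patch": "p1"}, {"id": 2, "patch": "p2"}, {"id": 3, "patch": "p3"}],
    {1: True, 2: False},  # manual verifications: id 3 not yet reviewed
)
```

Retraining, as the abstract describes, would repeat this pairing on further image sets and further verifications.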