G06T2207/30032

Systems and methods for processing real-time video from a medical image device and detecting objects in the video

The present disclosure relates to systems and methods for processing real-time video and detecting objects in the video. In one implementation, a system is provided that includes an input port for receiving real-time video obtained from a medical image device, a first bus for transferring the received real-time video, and at least one processor configured to receive the real-time video from the first bus, perform object detection by applying a trained neural network on frames of the received real-time video, and overlay a border indicating a location of at least one detected object in the frames. The system also includes a second bus for receiving the video with the overlaid border, an output port for outputting the video with the overlaid border from the second bus to an external display, and a third bus for directly transmitting the received real-time video to the output port.
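The detect-and-overlay step described in this abstract can be sketched as a per-frame loop; the detector here is a placeholder for the trained neural network, and all function names are illustrative rather than taken from the patent:

```python
import numpy as np

def overlay_border(frame, box, color=(0, 255, 0), thickness=2):
    """Draw a rectangular border around a detected object.

    frame: H x W x 3 uint8 array; box: (x0, y0, x1, y1) in pixels.
    Returns a copy of the frame with the border drawn on it.
    """
    out = frame.copy()
    x0, y0, x1, y1 = box
    c = np.array(color, dtype=np.uint8)
    out[y0:y0 + thickness, x0:x1] = c    # top edge
    out[y1 - thickness:y1, x0:x1] = c    # bottom edge
    out[y0:y1, x0:x0 + thickness] = c    # left edge
    out[y0:y1, x1 - thickness:x1] = c    # right edge
    return out

def process_stream(frames, detector):
    """Per-frame pipeline: run the detector, then overlay a border
    at the location of each detected object."""
    for frame in frames:
        for box in detector(frame):
            frame = overlay_border(frame, box)
        yield frame
```

In the patented system this loop would run on the processor between the first and second buses, while the third bus passes the unmodified video straight through.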

System and method for automatic processing of images from an autonomous endoscopic capsule

A method and system for processing of images from an autonomous endoscopic capsule includes acquiring a video stream from the endoscopic capsule; detecting repeat frames; removing repeat frames from the video stream; adjusting a playback speed of the video stream based on an estimated likelihood of skipping significant frames; detecting anomalies; marking frames with anomalies for further review by a physician; and displaying multiple images simultaneously on a physician's desktop, in chronological order, as a matrix.
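The repeat-frame removal step can be sketched with a simple inter-frame difference test; the threshold value and function name are illustrative assumptions, not details from the patent:

```python
import numpy as np

def remove_repeat_frames(frames, threshold=2.0):
    """Drop frames nearly identical to the previously kept frame.

    A frame is treated as a 'repeat' when its mean absolute pixel
    difference from the last kept frame falls below the threshold
    (in 8-bit intensity units).
    """
    kept = []
    last = None
    for frame in frames:
        f = frame.astype(np.int16)
        if last is None or np.mean(np.abs(f - last)) >= threshold:
            kept.append(frame)
            last = f
    return kept
```

A production system would likely use a more robust similarity measure, but the structure (compare against the last *kept* frame, not the immediately preceding one) is the key point.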

Phase identification of endoscopy procedures

Embodiments of a system, a machine-accessible storage medium, and a computer-implemented method are described in which operations are performed. The operations comprise receiving a plurality of image frames associated with a video of an endoscopy procedure, generating a probability estimate for one or more image frames included in the plurality of image frames, and identifying a transition in the video when the endoscopy procedure transitions from a first phase to a second phase based, at least in part, on the probability estimate for the one or more image frames. The probability estimate includes a first probability that the one or more image frames are associated with the first phase of the endoscopy procedure.
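One simple way to turn per-frame probability estimates into a transition point, sketched under the assumption that a moving average is used to suppress single-frame noise (the patent does not specify the smoothing scheme):

```python
def find_phase_transition(phase2_probs, window=5, threshold=0.5):
    """Locate the frame index where the procedure transitions to the
    second phase.

    phase2_probs: per-frame probability that the frame belongs to the
    second phase. Each frame's probability is smoothed over a centered
    window; the transition is the first frame whose smoothed probability
    crosses the threshold. Returns None if no transition is found.
    """
    n = len(phase2_probs)
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        smoothed = sum(phase2_probs[lo:hi]) / (hi - lo)
        if smoothed >= threshold:
            return i
    return None
```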

Systems, methods, and apparatuses for actively and continually fine-tuning convolutional neural networks to reduce annotation requirements

Described herein are systems, methods, and apparatuses for actively and continually fine-tuning convolutional neural networks to reduce annotation requirements, in which the trained networks are then utilized in the context of medical imaging. The success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, creating large annotated datasets is tedious, laborious, and time consuming, and demands costly, specialty-oriented skills. A novel method that naturally integrates active learning and transfer learning (fine-tuning) into a single framework is presented to dramatically reduce annotation cost, starting with a pre-trained CNN to seek worthy samples for annotation and gradually enhancing the CNN via continual fine-tuning. The described method was evaluated using three distinct medical imaging applications, demonstrating that it can reduce annotation efforts by at least half compared with random selection.
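The "seek worthy samples" step is the active-learning core. A minimal sketch, assuming uncertainty is measured by prediction entropy (one common choice; the patent family describes richer criteria):

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class-probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_annotation(predictions, budget):
    """Pick the 'worthiest' unlabeled samples: those the current CNN is
    least certain about, ranked by prediction entropy.

    predictions: {sample_id: class-probability list} from the current
    (fine-tuned) model. Returns the `budget` most uncertain sample ids,
    which are then annotated and used for the next round of fine-tuning.
    """
    ranked = sorted(predictions,
                    key=lambda s: entropy(predictions[s]),
                    reverse=True)
    return ranked[:budget]
```

Each round, the newly annotated samples are added to the training set and the CNN is fine-tuned again, which is what makes the loop "continual".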

Systems and methods for video-based positioning and navigation in gastroenterological procedures

The present disclosure provides systems and methods for improving detection and location determination accuracy of abnormalities during a gastroenterological procedure. One example method includes obtaining a video data stream generated by an endoscopic device during a gastroenterological procedure for a patient. The method includes generating a three-dimensional model of at least a portion of an anatomical structure viewed by the endoscopic device based at least in part on the video data stream. The method includes obtaining location data associated with one or more detected abnormalities based on localization data generated from the video data stream of the endoscopic device. The method includes generating a visual presentation of the three-dimensional model and the location data associated with the one or more detected abnormalities, and providing that visual presentation for use in diagnosis of the patient.

SYSTEM AND METHOD FOR AUTOMATIC POLYP DETECTION USING GLOBAL GEOMETRIC CONSTRAINTS AND LOCAL INTENSITY VARIATION PATTERNS
20170265747 · 2017-09-21

A system and methods for polyp detection using optical colonoscopy images are provided. In some aspects, the system includes an input configured to receive a series of optical images, and a processor configured to process the series of optical images with steps comprising receiving an optical image from the input, constructing an edge map corresponding to the optical image, the edge map comprising a plurality of edge pixels, and generating a refined edge map by applying a classification scheme based on patterns of intensity variation to the plurality of edge pixels in the edge map. The processor may also process the series with steps of identifying polyp candidates using the refined edge map, computing probabilities that identified polyp candidates are polyps, and generating a report, using the computed probabilities, indicating detected polyps. The system also includes an output for displaying the report.
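The edge-map construction and refinement steps can be sketched as follows. The gradient-magnitude edge detector and the simple contrast test are illustrative stand-ins; the patent's classifier learns much richer intensity-variation patterns:

```python
import numpy as np

def edge_map(image, threshold=30):
    """Build a binary edge map from central-difference gradient magnitude."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy) >= threshold

def refine_edge_map(image, edges, min_contrast=20):
    """Toy stand-in for the intensity-variation classifier: keep only
    edge pixels whose vertical neighbors differ by at least min_contrast,
    i.e. edges that separate regions of genuinely different brightness."""
    img = image.astype(np.float64)
    refined = np.zeros_like(edges)
    ys, xs = np.nonzero(edges)
    h = img.shape[0]
    for y, x in zip(ys, xs):
        if 0 < y < h - 1 and abs(img[y + 1, x] - img[y - 1, x]) >= min_contrast:
            refined[y, x] = True
    return refined
```

Polyp candidates would then be identified from the refined map, e.g. by grouping edge fragments under the global geometric (curvature) constraints the title refers to.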

System and method for detecting polyps from learned boundaries

A system and method for automated polyp detection in optical colonoscopy images is provided. In one embodiment, the system and method for polyp detection is based on an observation that image appearance around polyp boundaries differs from that of other boundaries in colonoscopy images. To reduce vulnerability against misleading objects, the image processing method localizes polyps by detecting polyp boundaries, while filtering out irrelevant boundaries, with a generative-discriminative model. To filter out irrelevant boundaries, a boundary removal mechanism is provided that captures changes in image appearance across polyp boundaries. Thus, in this embodiment the boundary removal mechanism is minimally affected by texture visibility limitations. In addition, a vote accumulation scheme is applied that enables polyp localization from fragmented edge segmentation maps without identification of whole polyp boundaries.
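The vote accumulation idea can be sketched in a Hough-like form; this is a simplified illustration (fixed voting radius, gradient direction as the boundary normal), not the patent's actual scheme:

```python
import numpy as np

def vote_for_centers(image, edges, radius):
    """Hough-style vote accumulation: each edge pixel casts a vote at the
    point `radius` pixels along its gradient direction, so fragmented
    boundary segments of a roughly circular polyp reinforce a common
    center even when the full boundary is never recovered."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)
    acc = np.zeros(img.shape)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        mag = np.hypot(gx[y, x], gy[y, x])
        if mag == 0:
            continue
        cy = int(round(y + radius * gy[y, x] / mag))
        cx = int(round(x + radius * gx[y, x] / mag))
        if 0 <= cy < acc.shape[0] and 0 <= cx < acc.shape[1]:
            acc[cy, cx] += 1
    return acc
```

The accumulator peak marks the likely polyp location; because each fragment votes independently, irrelevant boundaries that fail the removal mechanism contribute only scattered, non-reinforcing votes.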

Reconstruction with object detection for images captured from a capsule camera
09672620 · 2017-06-06

A method of processing images captured using a capsule camera is disclosed. According to one embodiment, two images designated as a reference image and a float image are received, where the float image corresponds to a captured capsule image and the reference image corresponds to a previously composited image or another captured capsule image prior to the float image. Automatic segmentation is applied to the float image and the reference image to detect any non-GI (non-gastrointestinal) region. The non-GI regions are excluded from the match measure between the reference image and a deformed float image during the registration process. The two images are stitched together by rendering them in a common coordinate system. In another embodiment, large areas of non-GI regions are removed directly from the input image, and the remaining portions are stitched together to form a new image without performing image registration.
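The exclusion of non-GI regions from the match measure can be sketched as a masked error score; the mean-squared-difference metric here is an illustrative choice, not necessarily the one used in the patent:

```python
import numpy as np

def masked_match_measure(reference, float_img, non_gi_mask):
    """Mean squared difference between two aligned images, computed only
    over pixels NOT flagged as non-GI (e.g. bubbles or debris), so those
    regions cannot corrupt the registration score.

    non_gi_mask: boolean array, True where a pixel is non-GI.
    """
    valid = ~non_gi_mask
    diff = reference.astype(np.float64) - float_img.astype(np.float64)
    return float(np.mean(diff[valid] ** 2))
```

During registration, the deformation of the float image would be optimized to minimize this masked measure.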

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM

The endoscopic image acquisition means acquires an endoscopic image. The polyp detection means detects a polyp area from the endoscopic image. The first estimation means estimates a size of the polyp based on an image of the detected polyp area. The output means outputs an estimation result of the size of the polyp.
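A first-order version of the size-estimation step, assuming a known millimetres-per-pixel scale (a real system would have to estimate this scale per frame from depth or optical parameters, which the abstract does not detail):

```python
def estimate_polyp_size_mm(box, mm_per_pixel):
    """Estimate polyp size as the longest side of its detected bounding
    box, converted to millimetres with a known pixel scale.

    box: (x0, y0, x1, y1) of the detected polyp area, in pixels.
    """
    x0, y0, x1, y1 = box
    return max(x1 - x0, y1 - y0) * mm_per_pixel
```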