
SYSTEMS AND METHODS FOR TRAINING GENERATIVE ADVERSARIAL NETWORKS AND USE OF TRAINED GENERATIVE ADVERSARIAL NETWORKS
20190385018 · 2019-12-19 ·

The present disclosure relates to computer-implemented systems and methods for training and using generative adversarial networks. In one implementation, a system for training a generative adversarial network may include at least one processor that may provide a first plurality of images including representations of a feature-of-interest and indicators of locations of the feature-of-interest and use the first plurality and indicators to train an object detection network. Further, the processor(s) may provide a second plurality of images including representations of the feature-of-interest, and apply the trained object detection network to the second plurality to produce a plurality of detections of the feature-of-interest. Additionally, the processor(s) may provide manually set verifications of true positives and false positives with respect to the plurality of detections, use the verifications to train a generative adversarial network, and retrain the generative adversarial network using at least one further set of images, further detections, and further manually set verifications.
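The train / detect / verify / retrain loop described above can be sketched in miniature. This is a hedged illustration only: the `ObjectDetector` and `AdversarialVerifier` classes and the `verify` oracle are placeholder stand-ins, not the disclosed networks.

```python
# Hypothetical sketch of the iterative train/detect/verify/retrain loop.
# The models below are trivial stubs, not the patent's implementation.

class ObjectDetector:
    """Stub detector: 'trains' by memorizing labeled locations."""
    def __init__(self):
        self.known = {}

    def train(self, images, indicators):
        # First plurality of images plus indicators of feature locations.
        self.known = dict(zip(images, indicators))

    def detect(self, images):
        # Produce (image, predicted_location) detections; 0 if unseen.
        return [(img, self.known.get(img, 0)) for img in images]


class AdversarialVerifier:
    """Stand-in for the adversarial network trained on TP/FP verdicts."""
    def __init__(self):
        self.true_positive_rate = 0.0

    def train(self, detections, verifications):
        confirmed = sum(verifications)
        self.true_positive_rate = confirmed / max(len(detections), 1)


def training_round(detector, verifier, images, verify):
    # One round: detect, collect manually set TP/FP verifications, retrain.
    detections = detector.detect(images)
    verdicts = [verify(d) for d in detections]
    verifier.train(detections, verdicts)
    return verifier.true_positive_rate
```

Calling `training_round` repeatedly with further image sets corresponds to the retraining step in the abstract.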

AUTONOMOUS HEALTHCARE VISUAL SYSTEM FOR REAL-TIME PREVENTION, DIAGNOSTICS, TREATMENTS AND REHABILITATION

Systems, methods, and other embodiments relate to autonomous clinical screening and assessment of a patient using a reinforcement learning framework. In at least one approach, a method includes determining, using a control model, a focus point for acquiring imaging data about the patient. The method includes controlling an imaging device according to the focus point to acquire the imaging data. The method includes analyzing the imaging data using a recognition model that outputs a result about a condition of the patient. The method includes providing the result.
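The four recited steps form a simple control loop, sketched below with stub callables; the control, imaging, and recognition components here are illustrative placeholders, not the disclosed reinforcement-learning models.

```python
# Illustrative control loop for the focus-point approach; all three
# model arguments are assumed stand-ins for the disclosed components.

def screening_step(control_model, imaging_device, recognition_model, patient):
    focus = control_model(patient)            # determine a focus point
    imaging_data = imaging_device(focus)      # control the imaging device
    result = recognition_model(imaging_data)  # analyze with recognition model
    return result                             # provide the result
```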

SYSTEMS AND METHODS FOR ORGAN SHAPE ANALYSIS FOR DISEASE DIAGNOSIS AND RISK ASSESSMENT
20240087751 · 2024-03-14 ·

A method for quantifying tissue morphology is provided. The method comprises receiving one or more images of a tissue, overlaying a grid system onto the one or more images, the grid system comprising a 180-degree radial system, a 360-degree radial system, or a parallel-lines system, and transforming the one or more images into compressed sums based on the grid system. The compressed sums may then be transformed into frequencies, for example using a fast Fourier transform (FFT). Machine learning may be used to provide a patient's risk of developing a disease or to indicate a diagnostic status of the tissue. In some embodiments, the tissue may be a brain.
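The parallel-lines variant can be sketched with numpy: pixel intensities are summed along parallel grid lines into "compressed sums", and an FFT turns that 1-D profile into frequency features. The synthetic image and grid spacing below are assumptions for illustration, not values from the disclosure.

```python
import numpy as np

# Hedged sketch of the parallel-lines grid variant described above.

def compressed_sums_parallel(image, axis=0):
    # Collapse the image along one direction of parallel grid lines.
    return image.sum(axis=axis)

def frequency_features(sums):
    # Remove the mean so the DC bin does not dominate, then take FFT magnitudes.
    return np.abs(np.fft.rfft(sums - sums.mean()))

# Synthetic "tissue" with an 8-pixel periodic structure (an assumption):
img = np.zeros((64, 64))
img[:, ::8] = 1.0
feats = frequency_features(compressed_sums_parallel(img, axis=0))
# The dominant frequency bin corresponds to the 8-pixel periodicity.
```

A downstream classifier would consume `feats` as the morphology descriptor.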

Detecting deficient coverage in gastroenterological procedures

The present disclosure is directed towards systems and methods that leverage machine-learned models to decrease the rate at which abnormal sites are missed during a gastroenterological procedure. In particular, the systems and methods of the present disclosure can use machine-learning techniques to determine the coverage rate achieved during a gastroenterological procedure. Measuring the coverage rate allows medical professionals to be alerted when coverage is deficient, so that additional coverage can be achieved and, as a result, the detection rate for abnormal sites (e.g., adenoma, polyp, lesion, tumor) during the gastroenterological procedure is increased.
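The coverage idea can be illustrated with a toy model (not the disclosed machine-learned estimator): treat the examined surface as discrete sectors and report coverage as the fraction of distinct sectors observed; the 90% alert threshold is an assumed value.

```python
# Illustrative coverage bookkeeping; sector discretization and the
# alert threshold are assumptions for the sake of the sketch.

def coverage_rate(observed_sectors, total_sectors):
    # Fraction of distinct sectors the camera has observed so far.
    return len(set(observed_sectors)) / total_sectors

def coverage_deficient(observed_sectors, total_sectors, threshold=0.9):
    # True when coverage is deficient and the professional should be alerted.
    return coverage_rate(observed_sectors, total_sectors) < threshold
```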

Image processing method and apparatus, computer-readable medium, and electronic device

Embodiments of this application include an image processing method and apparatus, a non-transitory computer-readable storage medium, and an electronic device. In the image processing method, a to-be-predicted medical image is input into a multi-task deep convolutional neural network model. The multi-task deep convolutional neural network model includes an image input layer, a shared layer, and n parallel task output layers. One or more lesion property prediction results of the to-be-predicted medical image are output through one or more of the n task output layers. The multi-task deep convolutional neural network model is trained with n types of medical image training sets, n being a positive integer greater than or equal to 2.
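The shared-layer / n-parallel-heads topology can be sketched as a tiny numpy forward pass; the weights, dimensions, and activation here are illustrative assumptions, not the disclosed architecture.

```python
import numpy as np

# Minimal sketch of one shared representation feeding n parallel task heads.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def multi_task_forward(x, shared_w, head_ws):
    # The shared layer output is reused by every parallel task output layer.
    shared = relu(x @ shared_w)
    return [shared @ w for w in head_ws]

x = rng.normal(size=(1, 16))                 # stand-in for input-layer features
shared_w = rng.normal(size=(16, 8))          # shared layer weights
head_ws = [rng.normal(size=(8, k)) for k in (2, 3)]  # n = 2 task heads
outputs = multi_task_forward(x, shared_w, head_ws)
```

Each entry of `outputs` corresponds to one task output layer's lesion-property prediction.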

METHOD FOR DETECTION AND PATHOLOGICAL CLASSIFICATION OF POLYPS VIA COLONOSCOPY BASED ON ANCHOR-FREE TECHNIQUE
20240046463 · 2024-02-08 ·

A method for detection and pathological classification of polyps via colonoscopy based on an anchor-free technique includes: performing feature extraction on a preprocessed color endoscopic image; enhancing and extending the extracted features; decoding the feature information of the enhanced and extended features through an anchor-free detection algorithm to acquire a polyp prediction box and a foreground prediction mask; extracting global and local feature vectors from the extended feature and the foreground prediction mask, respectively; and combining the global feature vector with the local feature vector to predict the polyp type through a fully connected layer. With the present application, the polyp type can be correctly predicted, and both the polyp detection rate and the accuracy of pathological classification are improved.
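The final fusion step can be sketched concretely: concatenate the global and local feature vectors and score polyp types with a fully connected layer. All dimensions and weights below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of global + local feature fusion followed by a
# fully connected classification layer; shapes are assumed.

def classify_polyp(global_feat, local_feat, fc_weights, fc_bias):
    combined = np.concatenate([global_feat, local_feat])  # feature fusion
    logits = fc_weights @ combined + fc_bias              # fully connected layer
    return int(np.argmax(logits))                         # predicted polyp type
```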

System and method for detection of suspicious tissue regions in an endoscopic procedure
10510144 · 2019-12-17 ·

An image processing system connected to an endoscope processes endoscopic images in real time to identify suspicious tissues such as polyps or cancer. The system applies preprocessing tools to clean the received images and then applies, in parallel, a plurality of detectors, including both conventional detectors and supervised machine-learning-based detectors. Post-processing is then applied in order to select the regions most probable to be suspicious among the detected regions. Frames identified as showing suspicious tissues can be marked on an output video display. Optionally, the size, type, and boundaries of the suspected tissue can also be identified and marked.
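The parallel-detector stage plus post-processing can be sketched as follows; each detector here is a stand-in callable returning (region, score) pairs, and averaging scores against a threshold is an assumed selection rule, not the patented one.

```python
# Illustrative sketch: run several detectors in parallel on a frame and
# keep only regions whose aggregated confidence clears a threshold.

def detect_suspicious(frame, detectors, threshold=0.5):
    votes = {}
    for detector in detectors:
        for region, score in detector(frame):
            votes.setdefault(region, []).append(score)
    # Post-processing: average the scores and keep high-confidence regions.
    return sorted(region for region, scores in votes.items()
                  if sum(scores) / len(scores) >= threshold)
```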

Method and Apparatus for Estimating Area or Volume of Object of Interest from Gastrointestinal Images
20190374155 · 2019-12-12 ·

A method and apparatus for estimating or measuring a physical area or physical volume of an object of interest in one or more images captured using an endoscope are disclosed. According to the present method, one or more structured-light images and one or more regular images captured using an imaging apparatus are received. An object of interest in the regular images is determined. Distance information associated with the object of interest with respect to the imaging apparatus is derived from the structured-light images. The physical area size or physical volume size of the object of interest is determined based on the regular images and the distance information. The imaging apparatus can be a capsule endoscope or an insertion endoscope.
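The role of the distance information can be illustrated with basic geometry: under a pinhole camera model, a region's physical extent grows linearly with its distance from the imager, so its area grows quadratically. The focal length and distance below are assumed example values, not parameters from the disclosure.

```python
# Back-of-the-envelope sketch of the area-from-distance geometry.

def physical_area_mm2(pixel_area, distance_mm, focal_length_px):
    # Each pixel spans (distance / focal_length) mm at the object plane,
    # so area scales by that factor squared.
    mm_per_px = distance_mm / focal_length_px
    return pixel_area * mm_per_px ** 2

# e.g. a 400-pixel region at 20 mm with a 500 px focal length:
# 400 * (20 / 500) ** 2 = 0.64 mm^2
```

In the disclosed method, the distance would come from the structured-light images rather than being assumed.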

SYSTEMS AND METHODS FOR PROCESSING REAL-TIME VIDEO FROM A MEDICAL IMAGE DEVICE AND DETECTING OBJECTS IN THE VIDEO
20240108209 · 2024-04-04 ·

The present disclosure relates to systems and methods for processing real-time video and detecting objects in the video. In one implementation, a system is provided that includes an input port for receiving real-time video obtained from a medical image device, a first bus for transferring the received real-time video, and at least one processor configured to receive the real-time video from the first bus, perform object detection by applying a trained neural network on frames of the received real-time video, and overlay a border indicating a location of at least one detected object in the frames. The system also includes a second bus for receiving the video with the overlaid border, an output port for outputting the video with the overlaid border from the second bus to an external display, and a third bus for directly transmitting the received real-time video to the output port.
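The overlay step can be sketched on a single grayscale frame: draw a rectangular border around a detected object's bounding box. The border value and box coordinates are illustrative choices, and the bus architecture is not modeled here.

```python
import numpy as np

# Sketch of overlaying a border at a detected object's location.

def overlay_border(frame, box, value=255):
    y0, x0, y1, x1 = box
    out = frame.copy()               # leave the received frame untouched
    out[y0, x0:x1 + 1] = value       # top edge
    out[y1, x0:x1 + 1] = value       # bottom edge
    out[y0:y1 + 1, x0] = value       # left edge
    out[y0:y1 + 1, x1] = value       # right edge
    return out
```

Copying the frame mirrors the abstract's third bus, which carries the unmodified video to the output port alongside the bordered version.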

Combined Compression and Feature Extraction Models for Storing and Analyzing Medical Videos
20240129515 · 2024-04-18 ·

A method of compressing and detecting target features of a medical video is presented herein. In some embodiments, the method may include receiving an uncompressed medical video comprising at least one target feature, compressing the uncompressed medical video to generate a compressed medical video based on a predicted location of the at least one target feature using a first pretrained machine learning model, and detecting the location of the at least one target feature of the compressed medical video using a second pretrained machine learning model. In some embodiments, the first pretrained machine learning model and the second pretrained machine learning model may be trained in tandem using domain-specific medical videos.
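The idea of compressing around a predicted target location can be sketched with a toy quantizer: compress the frame coarsely everywhere, then re-quantize the predicted region more finely so detection quality is preserved there. The region prediction and step sizes are assumptions, not the patent's learned models.

```python
import numpy as np

# Hypothetical sketch of location-aware compression via uneven quantization.

def roi_aware_quantize(frame, roi, roi_step=4, bg_step=32):
    y0, x0, y1, x1 = roi
    out = (frame // bg_step) * bg_step                           # coarse background
    out[y0:y1, x0:x1] = (frame[y0:y1, x0:x1] // roi_step) * roi_step  # finer ROI
    return out
```

A second model would then run detection on the quantized frame, concentrating fidelity where the first model predicted the target feature.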