Patent classifications
G06T2207/30032
SYSTEMS AND METHODS FOR PROCESSING REAL-TIME VIDEO FROM A MEDICAL IMAGE DEVICE AND DETECTING OBJECTS IN THE VIDEO
The present disclosure relates to systems and methods for processing real-time video and detecting objects in the video. In one implementation, a system is provided that includes an input port for receiving real-time video obtained from a medical image device, a first bus for transferring the received real-time video, and at least one processor configured to receive the real-time video from the first bus, perform object detection by applying a trained neural network on frames of the received real-time video, and overlay a border indicating a location of at least one detected object in the frames. The system also includes a second bus for receiving the video with the overlaid border, an output port for outputting the video with the overlaid border from the second bus to an external display, and a third bus for directly transmitting the received real-time video to the output port.
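The border-overlay step described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the frame is modeled as a nested list of grayscale pixels, and the detector output is assumed to be an inclusive bounding box `(y0, x0, y1, x1)`.

```python
# Illustrative sketch: drawing a rectangular border onto a frame to mark
# a detected object. Frame representation and box format are assumptions.

def overlay_border(frame, box, value=255, thickness=1):
    """Draw a rectangular border (inclusive box) onto the frame in place."""
    y0, x0, y1, x1 = box
    for t in range(thickness):
        for x in range(x0, x1 + 1):
            frame[y0 + t][x] = value  # top edge
            frame[y1 - t][x] = value  # bottom edge
        for y in range(y0, y1 + 1):
            frame[y][x0 + t] = value  # left edge
            frame[y][x1 - t] = value  # right edge
    return frame

frame = [[0] * 8 for _ in range(8)]
overlay_border(frame, (2, 2, 5, 5))
```

In the described system, this overlay would be applied per frame before the composited video is sent over the second bus to the output port.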
Automated identification of tumor buds
Automated image analysis methods to identify and quantify tumor buds in a high resolution image of a section of a tumor that is stained using either pan-cytokeratin AE1/3 or hematoxylin and eosin (H&E) are disclosed. The methods may be used to aid and/or replace manual visual inspection for tumor buds and may be used to predict a clinically relevant outcome or treatment in some cases. The disclosed methods may be used for many different cancer types, such as colorectal cancer.
AI systems for detecting and sizing lesions
An artificial intelligence (AI) platform, method and program product for detecting and sizing a lesion in real time during a clinical procedure. An AI platform is disclosed that includes: a trained classifier that includes a deep learning model trained to detect lesions and reference objects in image data; a real time video analysis system that receives a video feed during a clinical procedure, uses the trained classifier to determine if a video frame from the video feed has both a lesion and a reference object, calculates an actual size of the lesion based on a pixel size of both the lesion and the reference object, and outputs an indication that the lesion was detected and the actual size of the lesion.
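The sizing calculation described above reduces to a ratio: a reference object of known physical size establishes a millimetres-per-pixel scale, which is then applied to the lesion's pixel extent. The function name and the example reference size below are illustrative assumptions.

```python
# Illustrative sketch of the sizing step: the reference object's known
# physical size calibrates the pixel scale for the same frame.

def lesion_size_mm(lesion_px, reference_px, reference_mm):
    """Estimate lesion size from pixel measurements and a known reference."""
    mm_per_pixel = reference_mm / reference_px
    return lesion_px * mm_per_pixel

# e.g. a 7 mm instrument tip spanning 140 px, lesion spanning 240 px
size = lesion_size_mm(lesion_px=240, reference_px=140, reference_mm=7.0)  # ≈ 12 mm
```

This assumes the lesion and reference lie at a similar distance from the camera; the patented platform's deep learning model supplies the pixel sizes of both objects from the same video frame.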
Systems and methods for processing colon images and videos
There is provided a method of generating instructions for presenting a graphical user interface (GUI) for dynamically tracking at least one polyp in a plurality of endoscopic images of a colon of a patient, comprising: iterating for the plurality of endoscopic images: tracking a location of a region depicting at least one polyp within the respective endoscopic image relative to at least one previous endoscopic image, when the location of the region is external to the respective endoscopic image: computing a vector from within the respective endoscopic image to the location of the region external to the respective endoscopic image, creating an augmented endoscopic image by augmenting the respective endoscopic image with an indication of the vector, and generating instructions for presenting the augmented endoscopic image within the GUI.
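The vector computation described above can be sketched as follows. This is a hedged illustration under simplifying assumptions: the arrow is anchored at the image centre, and its tip is scaled back so it stays inside the frame; the function name and return format are invented for the example.

```python
import math

# Illustrative sketch: when the tracked polyp region moves off-screen,
# compute a direction vector from the image centre toward the external
# location, clipped to the frame so it can be drawn as an arrow.

def offscreen_vector(width, height, target_x, target_y):
    """Return (start, end, angle_deg) for an arrow toward an off-screen target."""
    cx, cy = width / 2, height / 2
    dx, dy = target_x - cx, target_y - cy
    angle = math.degrees(math.atan2(dy, dx))
    # Scale the vector so its tip stays inside the frame.
    scale = min(
        (width / 2 - 1) / abs(dx) if dx else math.inf,
        (height / 2 - 1) / abs(dy) if dy else math.inf,
    )
    end = (cx + dx * scale, cy + dy * scale)
    return (cx, cy), end, angle
```

The augmented endoscopic image would then render this arrow as the "indication of the vector" presented in the GUI.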
Endoscope image processing apparatus and endoscope image processing method
An endoscope image processing apparatus includes a region-of-interest detection apparatus configured to sequentially receive observation images obtained by performing image pickup of an object and to detect a region of interest in each observation image, and a processor. When the region-of-interest detection apparatus detects a region of interest, the processor calculates an appearance time period, that is, the time elapsed since the region of interest appeared within the observation image, and starts emphasis processing to emphasize the position of the region of interest within the observation image at the timing at which the appearance time period reaches a predetermined time period.
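The timing logic above, where emphasis begins only once the region of interest has been present for a predetermined period, can be sketched per frame. This is a simplified illustration (counting consecutive detected frames rather than wall-clock time); the function name is an assumption.

```python
# Illustrative sketch: emphasis (highlighting) starts only after the
# region of interest has been continuously present for a threshold
# number of frames, suppressing flicker from one-frame detections.

def emphasis_schedule(detections, threshold_frames):
    """Given per-frame detection booleans, return per-frame emphasis flags."""
    flags, run = [], 0
    for detected in detections:
        run = run + 1 if detected else 0
        flags.append(run >= threshold_frames)
    return flags

# Emphasis starts on the 3rd consecutive detected frame.
emphasis_schedule([True, True, True, True, False, True], threshold_frames=3)
# → [False, False, True, True, False, False]
```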
COLON POLYP IMAGE PROCESSING METHOD AND APPARATUS, AND SYSTEM
A colon polyp image processing method, apparatus, and system are disclosed in the embodiments of this application to detect the position of a polyp in real time and determine the polyp's properties, thereby improving the processing efficiency of polyp images. Embodiments of this application provide a colon polyp image processing method that can include detecting the position of a polyp in a to-be-processed endoscopic image by using a polyp positioning model, and locating a polyp image block in the endoscopic image. The polyp image block can include the position region of the polyp in the endoscopic image. The method can further include performing polyp type classification on the polyp image block by using a polyp property identification model, and outputting an identification result.
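The two-stage pipeline described here, positioning followed by property identification on the cropped block, can be sketched with stand-in callables. The real positioning and property models are trained networks; everything below is an illustrative assumption.

```python
# Illustrative sketch of the two-stage pipeline: a positioning model
# proposes a bounding box, the polyp image block is cropped from the
# endoscopic image, and a separate property model classifies the crop.

def classify_polyp(image, positioning_model, property_model):
    """Return (box, label) for a detected polyp, or None if none found."""
    box = positioning_model(image)  # (y0, x0, y1, x1) or None
    if box is None:
        return None
    y0, x0, y1, x1 = box
    block = [row[x0:x1] for row in image[y0:y1]]  # polyp image block
    return box, property_model(block)
```

Splitting localization from classification lets each model be trained and updated independently, which is the structure the abstract describes.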
Methods, systems, and media for simultaneously monitoring colonoscopic video quality and detecting polyps in colonoscopy
Mechanisms for simultaneously monitoring colonoscopic video quality and detecting polyps in colonoscopy are provided. In some embodiments, the mechanisms can include a quality monitoring system that uses a first trained classifier to monitor image frames from a colonoscopic video to determine which image frames are informative frames and which image frames are non-informative frames. The informative image frames can be passed to an automatic polyp detection system that uses a second trained classifier to localize and identify whether a polyp or any other suitable object is present in one or more of the informative image frames.
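The cascade described above, a quality filter feeding a detector, can be sketched as follows. The two classifiers are stand-in callables here; in the patented mechanism they are trained models.

```python
# Illustrative sketch of the cascade: a first classifier filters out
# non-informative frames, and only informative frames reach the
# polyp detection step.

def cascade(frames, is_informative, detect_polyp):
    """Yield (frame_index, detection_result) for informative frames only."""
    for i, frame in enumerate(frames):
        if not is_informative(frame):
            continue  # non-informative frame: skip detection entirely
        yield i, detect_polyp(frame)
```

Filtering first keeps the (typically heavier) detection model off frames that could only produce spurious results, which is the efficiency argument implicit in the two-classifier design.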
Map of body cavity
In one embodiment, a medical analysis system, includes a display, and processing circuitry to receive a three-dimensional map of an interior surface of a cavity within a body of a living subject, positions on the interior surface being defined in a spherical coordinate system wherein each position is defined by an angular coordinate pair and an associated radial distance from an origin, project the angular coordinate pair of respective positions from the interior surface to respective locations in a two-dimensional plane according to a coordinate transformation, compute respective elevation values from the plane at the respective locations based on at least the radial distance associated with the respective projected angular coordinate pair, and render to the display an image of a partially flattened surface of the interior surface with the partially flattened surface being elevated from the plane according to the computed respective elevation values at the respective locations.
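The projection step above can be sketched under stated assumptions: each surface position is an angular pair plus a radial distance, the angles map directly to plane coordinates (an equirectangular-style projection), and elevation is measured relative to the mean radius. The specific transformation and baseline are illustrative choices, not the claimed method.

```python
# Illustrative sketch: project angular coordinates of cavity-surface
# points to a plane and derive per-location elevation from the radial
# distance, relative to the mean radius (an assumed baseline).

def flatten_cavity(points):
    """points: list of (theta, phi, r) -> list of (u, v, elevation)."""
    mean_r = sum(r for _, _, r in points) / len(points)
    flat = []
    for theta, phi, r in points:
        u = phi                   # longitude-like angle -> plane x
        v = theta                 # colatitude-like angle -> plane y
        elevation = r - mean_r    # raised/lowered vs. mean radius
        flat.append((u, v, elevation))
    return flat
```

The rendered image would then show the partially flattened surface raised from the plane by these elevation values, preserving depth cues such as protruding lesions.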