Patent classifications
A61B1/000096
Intraoral Imaging Apparatus, Medical Apparatus, And Program
An intraoral imaging apparatus, a medical apparatus, and a program are provided that can supply auxiliary data for determination regarding diseases that differ in their intraoral findings. The intraoral imaging apparatus includes: an imaging device that acquires an intraoral image; a light source that emits light to a subject of the imaging device; a storage apparatus that stores an algorithm for performing determination of a specific disease; and an arithmetic apparatus, in which the arithmetic apparatus executes: a determination process of determining a possibility of the specific disease based on the image and the algorithm; and an output process of outputting a result of the determination process.
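The abstract does not disclose the stored algorithm, so the following is only a minimal sketch of the claimed control flow, assuming a hypothetical single-feature threshold (average image redness) standing in for the stored determination algorithm; the disease name and threshold are illustrative, not from the patent.

```python
# Hypothetical sketch: apply a stored "algorithm" to an intraoral image to
# estimate the possibility of a specific disease, then output the result.
from dataclasses import dataclass

@dataclass
class DeterminationResult:
    disease: str
    possibility: float  # 0.0 to 1.0

def determine(image_redness: float, threshold: float = 0.6) -> DeterminationResult:
    """Toy determination process: average redness of the intraoral image
    stands in for the stored classification algorithm (assumed feature)."""
    possibility = min(1.0, image_redness / threshold)
    return DeterminationResult(disease="gingivitis", possibility=round(possibility, 2))

def output_process(result: DeterminationResult) -> str:
    """Output process: format the determination result for display."""
    return f"{result.disease}: possibility {result.possibility:.2f}"

print(output_process(determine(0.45)))  # gingivitis: possibility 0.75
```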
AUTOMATED ASSESSMENT OF ENDOSCOPIC DISEASE
The application relates to devices and methods for analysing a colonoscopy video or a portion thereof, and for assessing the severity of ulcerative colitis in a subject by analysing a colonoscopy video obtained from the subject. Analysing a colonoscopy video comprises using a first deep neural network classifier to classify image data from the subject colonoscopy video or portion thereof into at least a first severity class (more severe endoscopic lesions) and a second severity class (less severe endoscopic lesions), wherein the first deep neural network has been trained at least in part in a weakly supervised manner using training image data from a plurality of training colonoscopy videos, the training image data comprising multiple sets of consecutive frames from the plurality of training colonoscopy videos, wherein frames in a set have the same severity class label. Devices and methods for providing a tool for analysing colonoscopy videos are also described.
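The weak-supervision scheme described above, where every frame in a set of consecutive frames inherits the same severity class label, can be sketched as a label-expansion step; the class codes and segment structure here are illustrative assumptions, and the actual deep network training is out of scope.

```python
# Sketch: per-frame training labels derived from a per-segment (weak)
# severity label, as the abstract describes for consecutive-frame sets.
SEVERE, LESS_SEVERE = 1, 0  # first and second severity classes (assumed codes)

def weak_labels(segments):
    """Expand (frames, segment_label) pairs into per-frame training labels:
    every frame in a segment inherits that segment's severity class."""
    frames, labels = [], []
    for segment_frames, segment_label in segments:
        frames.extend(segment_frames)
        labels.extend([segment_label] * len(segment_frames))
    return frames, labels

frames, labels = weak_labels([
    (["f0", "f1", "f2"], SEVERE),      # consecutive frames, one weak label
    (["f3", "f4"], LESS_SEVERE),
])
print(labels)  # [1, 1, 1, 0, 0]
```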
ENDOSCOPE HOST AND ENDOSCOPE DEVICE FOR INTELLIGENTLY DETECTING ORGANS
Disclosed are an endoscope host and an endoscope device for intelligently detecting organs. The device includes a main body having a connection channel for inserting an endoscope tube; a drive connection part and an electrical connection identification part have a first electrical connection point and a second electrical connection point, respectively. When the endoscope tube is inserted into the connection channel, the tube is electrically conducted with the first electrical connection point and the second electrical connection point to generate a driving signal and a type signal, respectively. An organ identification unit stores an organ comparison table and compares the type signal with the organ comparison table to obtain the organ type of the endoscope tube and generate an execution signal. A processing unit installed in the main body receives the driving signal and the type signal and displays a result image according to the execution signal.
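The organ-identification step reduces to a table lookup: the type signal read from the tube's connection point is compared against the stored organ comparison table. A minimal sketch, with entirely hypothetical signal codes and organ entries:

```python
# Sketch: compare a "type signal" from the inserted endoscope tube against
# a stored organ comparison table to obtain the organ type.
ORGAN_TABLE = {  # hypothetical type-signal -> organ mapping
    0x01: "esophagus",
    0x02: "stomach",
    0x03: "colon",
}

def identify_organ(type_signal: int) -> str:
    """Look the type signal up in the organ comparison table; the result
    determines the execution signal sent to the processing unit."""
    return ORGAN_TABLE.get(type_signal, "unknown")

print(identify_organ(0x03))  # colon
```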
ENDOSCOPE SYSTEM, CONTROL DEVICE, AND CONTROL METHOD OF CONTROL DEVICE
An endoscope system includes a light source that emits illumination light, an image sensor that captures an image of an object toward which the illumination light is emitted, and a processor. The processor is configured to determine whether a fluid is present in the object. If the fluid is not present in the object, the processor switches to a first observation mode in which to illuminate the object by first illumination light. If the fluid is present in the object, the processor switches to a second observation mode in which to illuminate the object by second illumination light. The second illumination light has a larger relative ratio of long-wavelength components than the first illumination light.
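The mode-switching logic is a simple conditional on the fluid-detection result; a sketch, assuming a boolean detection outcome (the abstract does not specify how the processor detects fluid):

```python
# Sketch of the observation-mode switching logic driven by fluid detection.
FIRST_MODE = "first observation mode"    # first illumination light
SECOND_MODE = "second observation mode"  # larger ratio of long wavelengths

def select_mode(fluid_present: bool) -> str:
    """Switch to the second observation mode only when fluid is detected;
    otherwise stay in the first observation mode."""
    return SECOND_MODE if fluid_present else FIRST_MODE

print(select_mode(True))   # second observation mode
print(select_mode(False))  # first observation mode
```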
Machine-learning-based visual-haptic system for robotic surgical platforms
Embodiments described herein provide various examples of a machine-learning-based visual-haptic system for constructing visual-haptic models for various interactions between surgical tools and tissues. In one aspect, a process for constructing a visual-haptic model is disclosed. This process can begin by receiving a set of training videos. The process then processes each training video in the set of training videos to extract one or more video segments that depict a target tool-tissue interaction from the training video, wherein the target tool-tissue interaction involves exerting a force by one or more surgical tools on a tissue. Next, for each video segment in the set of extracted video segments, the process annotates each video image in the video segment with a set of force levels predefined for the target tool-tissue interaction. The process subsequently trains a machine-learning model using the annotated video images to obtain a trained machine-learning model for the target tool-tissue interaction.
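The annotation step above can be sketched as tagging each frame with one of a predefined set of force levels; the level names are illustrative assumptions, and the actual model training is stubbed out with a simple per-level count that a real pipeline would replace with supervised learning on the annotated images.

```python
# Sketch: annotate video images with predefined force levels, then hand the
# annotated records to a (stubbed) training step.
FORCE_LEVELS = ["none", "light", "medium", "heavy"]  # hypothetical levels

def annotate_segment(frame_ids, levels):
    """Annotate each video image in a segment with one predefined force level."""
    assert all(level in FORCE_LEVELS for level in levels)
    return list(zip(frame_ids, levels))

def train_visual_haptic_model(annotated):
    """Training stub: count examples per force level; a real implementation
    would train a supervised model on the annotated video images."""
    counts = {}
    for _, level in annotated:
        counts[level] = counts.get(level, 0) + 1
    return counts

data = annotate_segment([10, 11, 12], ["light", "medium", "medium"])
print(train_visual_haptic_model(data))  # {'light': 1, 'medium': 2}
```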
Method of hub communication, processing, display, and cloud analytics
A method of displaying an operational parameter of a surgical system is disclosed. The method includes receiving, by a cloud computing system of the surgical system, first usage data from a first subset of surgical hubs of the surgical system; receiving, by the cloud computing system, second usage data from a second subset of surgical hubs of the surgical system; analyzing, by the cloud computing system, the first and the second usage data to correlate the first and the second usage data with surgical outcome data; determining, by the cloud computing system, based on the correlation, a recommended medical resource usage configuration; and displaying, on respective displays of the first and the second subsets of surgical hubs, indications of the recommended medical resource usage configuration.
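The correlate-then-recommend flow can be sketched with a toy model in which each hub reports a (configuration, outcome score) pair and the cloud recommends the configuration of the better-performing subset; the data shapes and the mean-outcome criterion are assumptions, not from the patent.

```python
# Sketch: correlate per-subset usage data with outcome scores and pick the
# configuration of the subset with the better mean outcome.
def recommend_configuration(first_subset, second_subset):
    """Each subset is a list of (usage_config, outcome_score) pairs; all hubs
    in a subset share a configuration in this toy model."""
    def mean_outcome(subset):
        return sum(score for _, score in subset) / len(subset)
    best = max((first_subset, second_subset), key=mean_outcome)
    return best[0][0]

rec = recommend_configuration(
    [("config-A", 0.82), ("config-A", 0.80)],
    [("config-B", 0.91), ("config-B", 0.89)],
)
print(rec)  # config-B
```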
ACCESSORY DEVICE FOR AN ENDOSCOPIC DEVICE
A support device for an endoscope comprises a tubular member configured for removable attachment to an outer surface of the endoscope near, or at, its distal end and a plurality of projecting elements extending outward from the outer surface of the tubular member and circumferentially spaced from each other. The device includes an optically transparent cover coupled to the tubular member and configured for covering the distal end of the endoscope when the tubular member is attached to the outer surface of the endoscope. The projecting elements provide support for the endoscope, improve visualization and center the scope as it passes through a body lumen, such as the colon. In addition, the cover seals the distal end of the endoscope to protect the scope and its components from debris, fluid, pathogens and other biomatter.
SYSTEMS AND METHODS FOR DETECTION AND ANALYSIS OF POLYPS IN COLON IMAGES
There is provided a method, comprising: feeding one or more 2D images of an internal surface of a colon, captured by an endoscopic camera, into a machine learning model, wherein the 2D image(s) exclude a depiction of an external measurement tool. The machine learning model is trained on records, each including 2D images of the internal surface of the colon of a respective subject labelled with ground-truth bounding boxes enclosing respective polyps and at least one of an indication of size and a type of the respective polyp indicating likelihood of developing malignancy. The method further comprises obtaining a bounding box for a polyp and at least one of an indication of size and type of the polyp, and generating instructions for presenting, within a graphical user interface (GUI), an overlay of the bounding box over the polyp and the at least one of the indication of size and type of the polyp.
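The model's output and the GUI-overlay step can be sketched as a small record type plus a label-composition function; the field names, box convention, and polyp-type classes are illustrative assumptions, and the actual rendering is reduced to the annotation string.

```python
# Sketch: compose the overlay annotation for a detected polyp from the
# model's bounding box, size indication, and type indication.
from dataclasses import dataclass

@dataclass
class PolypDetection:
    box: tuple          # (x, y, width, height) in pixels (assumed convention)
    size_mm: float      # indication of size
    polyp_type: str     # e.g. "adenomatous" vs "hyperplastic" (assumed classes)

def overlay_label(det: PolypDetection) -> str:
    """Build the text annotation shown next to the bounding box in the GUI."""
    x, y, w, h = det.box
    return f"box=({x},{y},{w},{h}) size={det.size_mm}mm type={det.polyp_type}"

print(overlay_label(PolypDetection((120, 80, 40, 32), 6.5, "adenomatous")))
```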
CAPSULE ENDOSCOPE APPARATUS AND METHOD OF SUPPORTING LESION DIAGNOSIS
Provided are a capsule endoscope apparatus for supporting a lesion diagnosis and a lesion diagnosis supporting method using the same. The capsule endoscope apparatus for supporting lesion diagnosis includes an imaging unit configured to capture one or more images of an inside of a body, a control unit configured to detect a suspected lesion region in the images and perform a precision diagnosis procedure when a suspected lesion region corresponding to a value equal to or greater than a certain threshold is detected, an image processing unit configured to process the images in the precision diagnosis procedure, and a communication module configured to transmit processed images to, and receive them from, another capsule endoscope apparatus or a terminal using a wireless communication method.
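The threshold-triggered control flow can be sketched as a filter over per-image suspicion scores; the score scale and threshold value are assumptions, and the precision diagnosis procedure itself is represented only by the indices of the images that would enter it.

```python
# Sketch: enter the precision diagnosis procedure only for images whose
# suspected-lesion score meets or exceeds the threshold.
THRESHOLD = 0.7  # assumed value; the patent only requires "a certain threshold"

def process_images(scores, threshold=THRESHOLD):
    """Return indices of images that trigger the precision diagnosis
    procedure (suspected-lesion score >= threshold)."""
    return [i for i, score in enumerate(scores) if score >= threshold]

flagged = process_images([0.2, 0.75, 0.4, 0.9])
print(flagged)  # [1, 3]
```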
AUGMENTED VISUALIZATION FOR A SURGICAL ROBOT USING A CAPTURED VISIBLE IMAGE COMBINED WITH A FLUORESCENCE IMAGE AND A CAPTURED VISIBLE IMAGE
An endoscope with an optical channel is held and positioned by a robotic surgical system. A capture unit captures (1) a visible first image at a first time and (2) a visible second image combined with a fluorescence image at a second time. An image processing system receives (1) the visible first image and (2) the visible second image combined with the fluorescence image and generates at least one fluorescence image. A display system outputs an output image including an artificial fluorescence image.
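One plausible way to generate the fluorescence image from the two captures is per-pixel subtraction of the visible-only first image from the combined second image; this subtraction is an assumption for illustration, not a method stated in the abstract, and real pipelines would also need registration between the two capture times.

```python
# Sketch (assumed method): estimate the fluorescence image as the clamped
# per-pixel difference between the combined capture and the visible capture.
def extract_fluorescence(visible, combined):
    """Per-pixel (combined - visible), clamped at zero, over 2D intensity
    grids; combined is assumed to be visible + fluorescence."""
    return [[max(0, c - v) for v, c in zip(vis_row, com_row)]
            for vis_row, com_row in zip(visible, combined)]

visible = [[10, 20], [30, 40]]
combined = [[10, 55], [30, 90]]
print(extract_fluorescence(visible, combined))  # [[0, 35], [0, 50]]
```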