Patent classifications
A61B1/000096
ENDOSCOPE SYSTEM AND METHOD FOR OPERATING THE SAME
An endoscope system that illuminates an object and captures reflected light from the object includes a control processor. The control processor acquires an examination image and determines whether the examination image shows a swallowing state or a non-swallowing state. In addition, the control processor detects a high pixel value region from the examination image and determines that the examination image shows the swallowing state in a case in which an area of the high pixel value region is equal to or greater than a first threshold value. Further, the control processor performs grayscale conversion on the examination image to obtain a grayscale image and performs a binarization process for obtaining the high pixel value region in a case in which a density value of a pixel of the grayscale image is equal to or greater than a second threshold value.
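The grayscale-conversion, binarization, and area-threshold steps described in this abstract can be sketched in a few lines. The threshold values, image shape, and luminance weights below are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def shows_swallowing(rgb_image, density_thresh=200, area_thresh=5000):
    """Return True if the image is judged to show a swallowing state.

    density_thresh (the "second threshold") and area_thresh (the
    "first threshold") are hypothetical values chosen for illustration.
    rgb_image is an (H, W, 3) float array.
    """
    # Grayscale conversion (standard luminance weighting, an assumption).
    gray = rgb_image @ np.array([0.299, 0.587, 0.114])
    # Binarization: pixels whose density value is at or above the
    # second threshold form the high pixel value region.
    high_region = gray >= density_thresh
    # Swallowing state if the region's area meets the first threshold.
    return int(high_region.sum()) >= area_thresh
```

A bright, saturated frame (large high-value area) would be classified as swallowing; a dark frame would not.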
ENDOSCOPE SYSTEM, MEDICAL IMAGE PROCESSING DEVICE, AND OPERATION METHOD THEREFOR
A medical image processing device includes a processor. The processor acquires an examination image of a subject captured by an endoscope, identifies an incision suitable site in the subject included in the examination image, and performs control for outputting incision suitable site information regarding the incision suitable site on the basis of the examination image. The identification of the incision suitable site is performed by using a learning image associated with a position of a muscular layer in the subject.
MEDICAL IMAGE PROCESSING DEVICE, ENDOSCOPE SYSTEM, AND MEDICAL IMAGE PROCESSING DEVICE OPERATION METHOD
A medical image processing device acquires a medical image, detects a position of a specific blood vessel having a predetermined thickness or more on the basis of the medical image, and performs control for outputting specific blood vessel position information regarding the position of the specific blood vessel to provide a notification. The detection of the position of a specific blood vessel is performed by using a learning image that is the medical image associated with information regarding a position of at least a part of the specific blood vessel.
PHASE IDENTIFICATION OF ENDOSCOPY PROCEDURES
Embodiments of a system, a machine-accessible storage medium, and a computer-implemented method are described in which operations are performed. The operations comprise receiving a plurality of image frames associated with a video of an endoscopy procedure, generating a probability estimate for one or more image frames included in the plurality of image frames, and identifying a transition in the video when the endoscopy procedure transitions from a first phase to a second phase based, at least in part, on the probability estimate for the one or more image frames. The probability estimate includes a first probability that the one or more image frames are associated with the first phase of the endoscopy procedure.
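One simple way to identify such a transition from per-frame probability estimates is to smooth them over a window and report the first frame where the phase-1 probability falls below a cutoff. The window size and cutoff here are illustrative choices, not values from the patent.

```python
def find_transition(phase1_probs, window=5, cutoff=0.5):
    """Return the index of the first frame at which the moving-average
    smoothed probability of being in the first phase drops below the
    cutoff, i.e. the estimated phase-1 -> phase-2 transition point.
    Returns None if no transition is found.
    """
    for i in range(len(phase1_probs) - window + 1):
        avg = sum(phase1_probs[i:i + window]) / window
        if avg < cutoff:
            return i
    return None
```

Smoothing over a window makes the estimate robust to single-frame classifier noise, at the cost of a few frames of latency.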
SYSTEMS AND METHODS FOR SCENE-ADAPTIVE IMAGE QUALITY IN SURGICAL VIDEO
One example method for scene-adaptive image quality in surgical video includes receiving a first video frame from an endoscope, the first video frame generated from a first raw image captured by an image sensor of the endoscope and processed by an image signal processing (“ISP”) pipeline having a plurality of ISP parameters; recognizing, using a trained machine learning (“ML”) model, a first scene type or a first scene feature type based on the first video frame; determining a first set of ISP parameters based on the first scene type or the first scene feature type; applying the first set of ISP parameters to the ISP pipeline; and receiving a second video frame from the endoscope, the second video frame generated from a second raw image captured by the image sensor and processed by the ISP pipeline using the first set of ISP parameters.
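The core of the described control flow is a mapping from a recognized scene type to a set of ISP parameters that is then applied to the pipeline for subsequent frames. The scene names and parameter values below are hypothetical stand-ins for illustration only.

```python
# Hypothetical presets: recognized scene type -> ISP parameter set.
ISP_PRESETS = {
    "smoke": {"denoise": 0.8, "contrast": 1.4, "sharpen": 0.2},
    "clear_tissue": {"denoise": 0.2, "contrast": 1.0, "sharpen": 0.6},
}
DEFAULT_PRESET = {"denoise": 0.4, "contrast": 1.0, "sharpen": 0.4}

def select_isp_parameters(scene_type):
    """Return the ISP parameter set for the recognized scene type,
    falling back to a default preset for unrecognized scenes."""
    return ISP_PRESETS.get(scene_type, DEFAULT_PRESET)
```

In use, the ML model classifies each decoded frame, and the returned preset is applied to the ISP pipeline before the next raw image is processed.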
Method for controlling smart energy devices
- Frederick E. Shelton, IV
- David C. Yates
- Jason L. Harris
- Kevin L. Houser
- John E. Brady
- Gregory A. Trees
- Patrick J. Scoggins
- Madeleine C. Jayme
- Kristen G. Denzinger
- Cameron R. Nott
- Craig N. Faller
- Amrita S. Sawhney
- Eric M. Roberson
- Stephen M. Leuck
- Brian D. Black
- Fergus P. Quigley
- Tamara Widenhouse
A method for controlling an operation of an ultrasonic blade of an ultrasonic electromechanical system is disclosed. The method includes providing an ultrasonic electromechanical system comprising an ultrasonic transducer coupled to an ultrasonic blade via an ultrasonic waveguide; applying, by an energy source, a power level to the ultrasonic transducer; determining, by a control circuit coupled to a memory, a mechanical property of the ultrasonic electromechanical system; comparing, by the control circuit, the mechanical property with a reference mechanical property stored in the memory; and adjusting, by the control circuit, the power level applied to the ultrasonic transducer based on the comparison of the mechanical property with the reference mechanical property.
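One step of the compare-and-adjust loop described above can be sketched as a proportional correction of the power level toward a stored reference. Treating the mechanical property as a resonant frequency, and the gain and power limits used here, are illustrative assumptions.

```python
def adjust_power(power, measured_freq, reference_freq,
                 gain=0.01, min_power=0.0, max_power=100.0):
    """One control-circuit step: compare a measured mechanical property
    (illustratively, transducer resonant frequency in Hz) with the
    reference stored in memory, and adjust the power level applied to
    the ultrasonic transducer proportionally to the difference.
    """
    error = reference_freq - measured_freq
    new_power = power + gain * error
    # Clamp to the energy source's operating range.
    return max(min_power, min(max_power, new_power))
```

A real device would run this repeatedly as part of a closed feedback loop, with the gain tuned to the transducer's dynamics.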
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND ENDOSCOPE APPARATUS
An image processing apparatus includes an inference part capable of inference using a first inference model for finding of a specific target object and a second inference model for discrimination about the specific target object, and a control unit to which a first picked-up image obtained under a first image pickup condition and a second picked-up image obtained under a second image pickup condition different from the first image pickup condition can be inputted. The control unit performs control such that, in a case where the first picked-up image is inputted, the inference part is caused to execute inference by using the first inference model, and, in a case where the second picked-up image is inputted, the inference part is caused to execute inference by using the second inference model.
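The control described above amounts to routing each image to the model that matches its image pickup condition. The condition labels and the comment examples below are illustrative assumptions.

```python
def run_inference(image, pickup_condition, finding_model, discrimination_model):
    """Dispatch an image to the inference model matching its image
    pickup condition. finding_model and discrimination_model are any
    callables taking an image and returning an inference result.
    """
    if pickup_condition == "first":     # e.g. white-light observation
        return finding_model(image)
    elif pickup_condition == "second":  # e.g. special-light observation
        return discrimination_model(image)
    raise ValueError(f"unknown pickup condition: {pickup_condition}")
```

This keeps the two models independent: the finding model can be optimized for screening, the discrimination model for characterizing a target already found.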
Method of soft tissue imaging system by different combinations of light engine, camera, and modular software
Architecture and methodology of imaging systems are provided for multispectral tissue imaging with various embodiments. The architectural designs comprise hardware of multispectral light engines and cameras and software of image acquisition, processing, modeling, visualization, and quantification. Embodiments of imaging hardware in a medical device can include a light engine of multiple sources for noncoherent light for visible and fluorescence imaging and coherent light of very narrow bandwidths for laser speckle imaging. The imaging software can include anatomical imaging by visible light, blood perfusion imaging by fluorophores in blood, blood flow distribution imaging by light of high coherence, blood oxygen saturation imaging by light absorption in tissues, and tissue composition imaging by light scattering in tissues based on the radiative transfer model of light-tissue interaction. Form factors in medical devices include endoscopic, laparoscopic, and arthroscopic devices in medical tower or robot systems, cart devices, and handheld scanning or tablet devices.
INTERACTIVE ENDOSCOPY FOR INTRAOPERATIVE VIRTUAL ANNOTATION IN VATS AND MINIMALLY INVASIVE SURGERY
A controller (522) for live annotation of interventional imagery includes a memory (52220) that stores software instructions and a processor (52210) that executes the software instructions. When executed by the processor (52210), the software instructions cause the controller (522) to implement a process that includes receiving (S210) interventional imagery during an intraoperative intervention and automatically analyzing (S220) the interventional imagery for detectable features. The process executed when the processor (52210) executes the software instructions also includes detecting (S230) a detectable feature and determining (S240) whether to add an annotation to the interventional imagery for the detectable feature. The process further includes identifying (S250) a location for the annotation as an identified location in the interventional imagery and adding (S260) the annotation to the interventional imagery at the identified location to correspond to the detectable feature. During the intraoperative intervention, a video is output (S270) as video output based on the interventional imagery and the annotation, including the annotation overlaid on the interventional imagery at the identified location.
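The step sequence S220 through S260 can be sketched as a pipeline of pluggable stages. All of the callables here are hypothetical stand-ins; the patent does not prescribe these interfaces.

```python
def annotate(imagery, detect, should_annotate, locate, overlay):
    """Apply the annotation process to one unit of interventional
    imagery: analyze/detect features (S220/S230), decide whether to
    annotate each (S240), identify a location (S250), and overlay the
    annotation at that location (S260). Returns the annotated imagery,
    which forms the basis of the video output (S270).
    """
    annotated = imagery
    for feature in detect(imagery):                    # S220 / S230
        if should_annotate(feature):                   # S240
            loc = locate(feature, imagery)             # S250
            annotated = overlay(annotated, feature, loc)  # S260
    return annotated
```

For testing, the stages can be stubbed with simple functions, e.g. an overlay that records (feature, location) pairs.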
IMAGE PROCESSING APPARATUS, OBSERVATION SYSTEM, AND OBSERVATION METHOD
An image processing apparatus includes a processor including hardware, the processor being configured to: determine whether or not an overlapping portion is present in imaging areas included in a plurality of images captured by a plurality of imagers, respectively, the plurality of imagers being inserted into a subject to capture images of an observation target at different positions from each other; determine whether or not each of the plurality of imagers is inserted to a focal point position at which the observation target is in focus; and generate a composite image that is composed of the plurality of images when it is determined that each of the plurality of imagers is inserted to the focal point position and that the overlapping portion is present in the imaging areas.
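The two conditions gating composite-image generation can be checked with a simple predicate. Representing each imaging area as an axis-aligned (x0, y0, x1, y1) box is an illustrative assumption; the patent does not specify a geometry.

```python
def can_composite(areas, in_focus_flags):
    """Return True when a composite image may be generated: every
    imager is at its focal point position, and at least one pair of
    imaging areas overlaps.
    """
    if not all(in_focus_flags):
        return False

    def overlaps(a, b):
        # Two axis-aligned boxes overlap iff they intersect on both axes.
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    return any(overlaps(a, b)
               for i, a in enumerate(areas)
               for b in areas[i + 1:])
```

Only when this predicate holds would the apparatus proceed to stitch the overlapping images into the composite.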