Patent classifications
A61B1/0005
METHOD, APPARATUS AND SYSTEM FOR CONTROLLING AN IMAGE CAPTURE DEVICE DURING SURGERY
A system for controlling a medical image capture device during surgery, the system including: circuitry configured to receive a first image of the surgical scene, captured by the medical image capture device from a first viewpoint, and additional information of the scene; determine, for the medical image capture device, in accordance with the additional information and previous viewpoint information of surgical scenes, one or more candidate viewpoints from which to obtain an image of the surgical scene; provide, in accordance with the first image of the surgical scene, for each of the one or more candidate viewpoints, a simulated image of the surgical scene from the candidate viewpoint; and control the medical image capture device to obtain an image of the surgical scene from the candidate viewpoint corresponding to a selection of one of the one or more simulated images of the surgical scene.
Speckle removal in a pulsed fluorescence imaging system
Speckle removal in a pulsed fluorescence imaging system is described. A system includes a coherent light source for emitting pulses of coherent light, a fiber optic bundle connected to the coherent light source, and a vibrating mechanism attached to the fiber optic bundle. The system also includes an image sensor comprising a pixel array for sensing reflected electromagnetic radiation. The system is such that at least a portion of the pulses of coherent light emitted by the coherent light source comprises electromagnetic radiation having a wavelength from about 770 nm to about 790 nm.
SYSTEM AND METHOD OF PROVIDING REAL-TIME DYNAMIC IMAGERY OF A MEDICAL PROCEDURE SITE USING MULTIPLE MODALITIES
A system and method of providing composite real-time dynamic imagery of a medical procedure site from multiple modalities which continuously and immediately depicts the current state and condition of the medical procedure site synchronously with respect to each modality and without undue latency is disclosed. The composite real-time dynamic imagery may be provided by spatially registering multiple real-time dynamic video streams from the multiple modalities to each other. Spatially registering the multiple real-time dynamic video streams to each other may provide a continuous and immediate depiction of the medical procedure site with an unobstructed and detailed view of a region of interest at the medical procedure site at multiple depths. A user may thereby view a single, accurate, and current composite real-time dynamic imagery of a region of interest at the medical procedure site as the user performs a medical procedure.
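The core operation described above is a per-frame spatial registration that maps each incoming frame of one modality into the coordinate frame of the other before compositing. A minimal sketch, assuming a fixed 2D affine calibration obtained offline (the parameter names and dict-based frame representation are illustrative, not from the patent):

```python
# Sketch of per-frame spatial registration between two imaging modalities.
# Assumes a precomputed 2D affine calibration (a, b, tx, c, d, ty) that
# maps modality B pixel coordinates into modality A's frame.

def register_point(x, y, affine):
    """Map a pixel (x, y) from modality B into modality A's frame."""
    a, b, tx, c, d, ty = affine
    return (a * x + b * y + tx, c * x + d * y + ty)

def composite_frames(frame_a, frame_b, affine, blend=0.5):
    """Overlay registered samples of frame_b onto frame_a.

    frame_a, frame_b: dicts mapping integer (x, y) -> intensity.
    Running the registration on every frame keeps the two dynamic
    streams synchronous, as the abstract requires.
    """
    out = dict(frame_a)
    for (x, y), v in frame_b.items():
        xr, yr = register_point(x, y, affine)
        key = (round(xr), round(yr))
        if key in out:
            out[key] = (1 - blend) * out[key] + blend * v
        else:
            out[key] = v
    return out
```

In practice the calibration would come from tracked instrument poses or feature matching rather than a hard-coded affine, and interpolation would replace the nearest-pixel rounding used here.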
METHOD OF ALERTING A USER TO OFF-SCREEN EVENTS DURING SURGERY
An image of a field of view is captured by the endoscope and a portion of the image is displayed on a display. A popup image is generated and displayed upon image analysis identification of an occurrence of bleeding or motion of edges of an incision that is within the field of view, but outside the portion of the field of view displayed on the display.
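The alerting step amounts to restricting change detection to the part of the captured field of view that is not shown on the display. A minimal frame-differencing sketch (the threshold logic is a stand-in for the patent's image-analysis identification of bleeding or edge motion):

```python
# Sketch: flag motion in the captured field of view that falls outside
# the displayed crop, as a trigger for a popup alert.

def offscreen_motion_alert(prev, curr, crop, threshold=10.0):
    """Return True if the mean absolute intensity change outside the
    displayed crop exceeds the threshold.

    prev, curr: 2D lists of pixel intensities (full field of view).
    crop: (x0, y0, x1, y1) bounds of the on-screen region, with
    exclusive upper bounds.
    """
    x0, y0, x1, y1 = crop
    total, n = 0.0, 0
    for y, (row_p, row_c) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(row_p, row_c)):
            if x0 <= x < x1 and y0 <= y < y1:
                continue  # on-screen pixels are visible to the surgeon
            total += abs(c - p)
            n += 1
    return n > 0 and total / n > threshold
```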
ENDOSCOPIC SYSTEM AND METHOD FOR DISPLAYING AN ADAPTIVE OVERLAY
The present disclosure is directed towards a video display system and an endoscopic system configured to provide the surgeon with an optimized video image based upon user preference. The video display system includes a first video image displaying a white light video of a surgical site. The first video image has a first predetermined area. A second video image is overlaid on the first video image. The second video image is displayed as a fluorescent light video. The second video image is centered on the surgical site and includes a second predetermined area. The second predetermined area is smaller than the first predetermined area so as to define a boundary of white light video. Thus, the video display system provides the surgeon with the ability to reference the location of the fluorescent light video with respect to anatomical features of the surgical site.
STEREOSCOPIC ENDOSCOPE WITH CRITICAL STRUCTURE DEPTH ESTIMATION
A surgical visualization system comprises: (a) an endoscope comprising: (i) a shaft comprising a distal end, wherein the distal end is configured to be inserted into a cavity of a patient, (ii) a camera positioned at the distal end of the shaft for visualizing a first structure below a tissue surface within the cavity when the distal end is inserted into the cavity, wherein the camera defines a line of sight, wherein the camera is configured to be swept over the first structure; and (b) a processor in operative communication with the camera of the endoscope, wherein the processor is configured to: (i) monitor at least one sweep parameter when the camera is swept over the first structure, and (ii) estimate a depth of the first structure below the tissue surface based on the monitored at least one sweep parameter.
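The depth estimation described above exploits motion parallax: as the camera sweeps, nearer structures shift more in the image than distant ones. Treating the sweep endpoints as a stereo pair, the standard disparity relation gives a depth estimate. A minimal sketch (the function name and linear model are assumptions, not the patent's method):

```python
# Sketch: depth from parallax observed during a camera sweep, using the
# standard stereo relation depth = focal * baseline / disparity.
# Treating the sweep endpoints as a stereo pair is our simplifying
# assumption; the patent monitors unspecified "sweep parameters".

def estimate_depth(sweep_distance_mm, image_shift_px, focal_px):
    """Estimate structure depth from the camera translation during the
    sweep (the baseline) and the apparent image shift (the disparity).
    """
    if image_shift_px == 0:
        raise ValueError("no parallax observed; structure is effectively at infinity")
    return focal_px * sweep_distance_mm / image_shift_px
```

The depth below the tissue surface would then be the difference between the estimated depth of the buried structure and that of the surface itself, each computed this way.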
SCENE ADAPTIVE ENDOSCOPIC HYPERSPECTRAL IMAGING SYSTEM
A method of operating a surgical visualization system includes illuminating an anatomical field of a patient using a waveform transmitted by an emitter. The method also includes capturing an image of the anatomical field based on the waveform using a receiver. The emitter and the receiver are configured for multispectral imaging or hyperspectral imaging. The method also includes determining an adjustment to at least one operating parameter of the surgical visualization system based on at least one environmental scene parameter. The method also includes automatically implementing the adjustment to the at least one operating parameter to aid in identification of at least one anatomical structure in the anatomical field.
ENDOSCOPE WITH SOURCE AND PIXEL LEVEL IMAGE MODULATION FOR MULTISPECTRAL IMAGING
A system may be provided which comprises an illumination source adapted to simultaneously illuminate a surgical site with spectral light comprising a first wavelength of light and a second wavelength of light, a first set of sensors comprising sensors adapted to detect visible light, and a second set of sensors comprising sensors adapted to detect the first wavelength of light and sensors adapted to detect the second wavelength of light. In such a case, the system may also comprise a display coupled to a processor configured to display an enhanced image of the surgical site comprising a visible light image from data detected by the first set of sensors and an overlay identifying a target structure based on light detected by the second set of sensors while the surgical site is illuminated by spectral light comprising the first and second wavelengths of light.
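The display step above composites a target-structure overlay, derived from the two spectral channels, onto the visible-light image. A minimal sketch, assuming the spectral detections have already been reduced to boolean per-pixel masks (the mask representation and highlight color are illustrative):

```python
# Sketch: composite a target-structure overlay from two spectral-channel
# masks onto a visible-light RGB image. The boolean masks stand in for
# the patent's target-structure identification logic.

def spectral_overlay(visible, mask_w1, mask_w2, color=(0, 255, 0)):
    """Return a new RGB frame where pixels flagged by either spectral
    channel are painted with the highlight color.

    visible: 2D list of (r, g, b) tuples.
    mask_w1, mask_w2: 2D lists of booleans, one per wavelength channel.
    """
    out = []
    for row_v, row1, row2 in zip(visible, mask_w1, mask_w2):
        out.append([color if (m1 or m2) else px
                    for px, m1, m2 in zip(row_v, row1, row2)])
    return out
```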
ENDOSCOPE WITH SYNTHETIC APERTURE MULTISPECTRAL CAMERA ARRAY
A method which may effectively provide an endoscope or other surgical instrument with a synthetic multi-camera array may comprise capturing, using one or more cameras located at a distal tip of the surgical instrument, a set of images comprising first and second images. Each image in such set of images may be captured by a corresponding camera from the one or more cameras, and may be captured when the distal tip of the instrument is located at a corresponding point in space. Such a method may also comprise generating a three dimensional image based on compositing representations of a structure in the first and second images after applying a non-rigid transformation to one or more of those representations.
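The compositing step above warps one representation of the structure with a non-rigid transformation (to account for tissue deformation between captures) before fusing it with the other. A minimal point-set sketch (a per-point displacement field is one simple non-rigid model; the function names are assumptions):

```python
# Sketch: non-rigid warp of one structure representation, then fusion
# with a second representation by averaging corresponding points.
# A per-point displacement field is an illustrative non-rigid model.

def apply_displacement(points, field):
    """Non-rigid warp: move each 2D point by its displacement vector."""
    return [(x + dx, y + dy) for (x, y), (dx, dy) in zip(points, field)]

def composite(points_a, points_b_warped):
    """Fuse corresponding structure points from two captures by
    averaging, yielding a single composited representation."""
    return [((xa + xb) / 2, (ya + yb) / 2)
            for (xa, ya), (xb, yb) in zip(points_a, points_b_warped)]
```

A real pipeline would estimate the displacement field from image correspondences and lift the fused points to 3D using the known capture positions; both steps are omitted here.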
Endoscope with integrated measurement of distance to objects of interest
The present specification describes a method for determining the distance of an object from the tip of an endoscope during an endoscopic procedure, wherein at least one lens is configured to converge light from outside the tip onto a sensor that includes a plurality of photodiodes, a portion of which are adjacent pairs of photodiodes configured to be phase detection pixels. The method includes receiving light into each adjacent pair of photodiodes, wherein said light is reflected off a surface of said object; determining a first response curve to said light for a first photodiode of said adjacent pair of photodiodes and a second response curve to said light for a second photodiode of said adjacent pair of photodiodes; identifying an intersection between the first response curve and the second response curve; and using data derived from said intersection to determine said distance to the object.
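The intersection-finding step can be sketched as locating the crossing of the two sampled response curves, with linear interpolation between samples, and then mapping that crossing position to a distance. A minimal sketch, assuming a linear distance calibration (the calibration constants and function names are hypothetical, not from the specification):

```python
# Sketch: find where the two phase-detection response curves cross,
# then map the crossing position to object distance via a linear
# calibration. The calibration constants are hypothetical.

def curve_intersection(curve_left, curve_right):
    """Return the fractional sample index where the two sampled
    response curves cross, interpolating linearly between samples.
    Returns None if the curves never cross."""
    for i in range(1, len(curve_left)):
        d0 = curve_left[i - 1] - curve_right[i - 1]
        d1 = curve_left[i] - curve_right[i]
        if d0 == 0:
            return float(i - 1)   # curves touch exactly at a sample
        if d1 == 0:
            return float(i)
        if d0 * d1 < 0:           # sign change: crossing between i-1 and i
            return (i - 1) + d0 / (d0 - d1)
    return None

def distance_from_intersection(index, mm_per_step, offset_mm):
    """Map the crossing position to object distance assuming a linear
    calibration (mm_per_step, offset_mm determined at manufacture)."""
    return offset_mm + mm_per_step * index
```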