A61B1/00193

Imaging apparatus for use in a robotic surgery system

A stereoscopic imaging apparatus for use in a robotic surgery system is disclosed and includes an elongate sheath having a bore. First and second image sensors are adjacently mounted at a distal end of the sheath to capture high-definition images from different perspective viewpoints for generating three-dimensional image information. The image sensors produce unprocessed digital data signals representing the captured images. A wired signal line transmits the unprocessed digital data signals along the sheath to processing circuitry at a proximal end. The processing circuitry is configured to perform processing operations on the unprocessed digital data signals to produce respective video signals suitable for transmission to a host system or for driving a 3D display. A secondary camera is also disclosed and includes an elongate strip of circuit substrate sized for insertion through a narrow conduit, the strip of circuit substrate connecting an image sensor to a processing circuit substrate.
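The two adjacently mounted sensors enable depth recovery by triangulation. A minimal sketch of the pinhole stereo relation, with illustrative focal length, baseline, and disparity values that are assumptions and not taken from the patent:

```python
# Hypothetical sketch: recovering depth from the disparity between two
# adjacently mounted image sensors. All numeric values are illustrative.

def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Pinhole stereo model: depth = focal length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# Example: 800 px focal length, 4 mm baseline, 16 px disparity -> 200 mm depth.
print(depth_from_disparity(800.0, 4.0, 16.0))  # 200.0
```

In practice this computation runs per pixel after rectification; the sketch only shows the geometric core.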

SYSTEMS AND METHODS FOR CONTROLLING A SURGICAL ROBOTIC ASSEMBLY IN AN INTERNAL BODY CAVITY

Methods and systems for performing surgery within an internal cavity of a subject are provided herein. An example method for controlling a robotic assembly of a surgical robotic system includes, while at least a portion of the robotic assembly is disposed in an internal cavity of a subject, receiving a first control mode selection input from an operator and changing a current control mode of the surgical robotic system to a first control mode in response to the first control mode selection input; while the surgical robotic system is in the first control mode, receiving a first control input from hand controllers; and, in response to receiving the first control input, changing a position and/or an orientation of at least a portion of the camera assembly, of at least a portion of the robotic arm assembly, or of both, while maintaining a stationary position of instrument tips of the end effectors disposed at distal ends of the robotic arms.
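The key behavior is that a mode-selection input redirects subsequent hand-controller inputs while the instrument tips stay fixed. A minimal sketch of that mode switch, with class and attribute names that are assumptions for illustration, not terms from the claims:

```python
# Illustrative sketch (names assumed): in "camera" mode a hand-controller
# delta moves the camera assembly while recorded instrument-tip positions
# remain stationary; otherwise the delta moves the instrument tips.

class RobotState:
    def __init__(self):
        self.mode = "instrument"
        self.camera_pose = [0.0, 0.0, 0.0]
        self.tip_positions = [[10.0, 0.0, 0.0], [-10.0, 0.0, 0.0]]

    def select_mode(self, mode: str):
        # Operator's control mode selection input.
        self.mode = mode

    def handle_input(self, delta):
        if self.mode == "camera":
            # Reposition the camera assembly; instrument tips are untouched.
            self.camera_pose = [p + d for p, d in zip(self.camera_pose, delta)]
        else:
            self.tip_positions = [[p + d for p, d in zip(tip, delta)]
                                  for tip in self.tip_positions]

state = RobotState()
state.select_mode("camera")
tips_before = [list(t) for t in state.tip_positions]
state.handle_input([1.0, 2.0, 0.5])
assert state.tip_positions == tips_before   # tips held stationary
print(state.camera_pose)                    # [1.0, 2.0, 0.5]
```

A real system would express poses as full transforms and solve inverse kinematics to hold the tips; the sketch only captures the mode-routing logic.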

Method and apparatus for quantitative and depth resolved hyperspectral fluorescence and reflectance imaging for surgical guidance

An imaging system, such as a surgical microscope, laparoscope, or endoscope, or one integrated with these devices, includes an illuminator providing patterned white light and/or fluorescent stimulus light. The system receives and images light hyperspectrally, in embodiments using a hyperspectral imaging array and/or narrowband tunable filters that pass filtered received light to an imager. Embodiments may construct a 3-D surface model from stereo images and estimate optical properties of the target using images taken in patterned light or approximations obtained from white-light exposures. Hyperspectral images taken under stimulus light are displayed as fluorescence images and corrected for the optical properties of tissue to provide quantitative maps of fluorophore concentration. Spectral information from the hyperspectral images is processed to provide the depth of fluorophore below the tissue surface. Quantitative images of fluorescence at depth are also prepared. The images are displayed to a surgeon for use in surgery.
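The correction step divides out tissue attenuation so that measured brightness tracks fluorophore concentration rather than tissue optics. A heavily simplified sketch of that idea, using a reflectance ratio as the attenuation proxy (the actual method fits measured optical properties; this ratio form is an assumption for illustration):

```python
# Simplified sketch of quantitative-fluorescence correction: divide the raw
# fluorescence image by an attenuation estimate derived from the white-light
# reflectance, so tissue optics no longer masquerade as fluorophore
# concentration. Values and the ratio form are illustrative assumptions.

def corrected_fluorescence(raw_fluor, reflectance, eps=1e-6):
    """Pixelwise ratio correction (a Born-normalization-style approximation)."""
    return [f / max(r, eps) for f, r in zip(raw_fluor, reflectance)]

# Two pixels with equal true fluorophore but different tissue attenuation:
raw = [0.2, 0.1]    # measured fluorescence
refl = [0.8, 0.4]   # white-light reflectance (attenuation proxy)
print(corrected_fluorescence(raw, refl))  # [0.25, 0.25] -> equal after correction
```

The point of the example: before correction the second pixel looks half as fluorescent; after correction both report the same concentration-proportional value.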

Endoscope defogging

An endoscope includes a light source coupled to emit light, and a lens disposed proximate to a distal tip of the endoscope tube and structured to absorb at least some of the light. A controller is coupled to the light source and includes logic that, when executed by the controller, causes the endoscope to perform operations including adjusting an emission profile of the light source to heat the lens with the light, where heating the lens mitigates formation of fog on the lens.
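The controller logic amounts to a small thermal control loop: drive more light into the absorbing lens when its temperature approaches fogging conditions. A hypothetical sketch, with the dew-point margin and proportional gain as assumed parameters not taken from the patent:

```python
# Hypothetical controller sketch (parameters assumed): compute a 0..1 duty
# factor for the lens-heating component of the emission profile, increasing
# it as the lens temperature approaches the dew point where fog forms.

def heating_duty(lens_temp_c: float, dew_point_c: float,
                 margin_c: float = 2.0, gain: float = 0.25) -> float:
    """Proportional control clamped to [0, 1]."""
    deficit = (dew_point_c + margin_c) - lens_temp_c
    return min(1.0, max(0.0, gain * deficit))

print(heating_duty(30.0, 24.0))  # 0.0 -> lens safely warm, no extra heating
print(heating_duty(24.0, 24.0))  # 0.5 -> at dew point, moderate heating
```

A real implementation would read a temperature sensor or a fog-detection signal each control cycle and feed the duty factor into the light source's emission profile.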

Surgical system with combination of sensor-based navigation and endoscopy

A set of pre-operative images of an anatomical structure may be captured using an endoscopic camera. Each captured image is associated with the position and orientation of the camera at the moment of capture using image-guided surgery (IGS) techniques. This image data and position data may be used to create a navigation map of captured images. During a surgical procedure on the anatomical structure, a real-time endoscopic view may be captured and displayed to a surgeon. The IGS navigation system may determine the position and orientation of the real-time image and select an appropriate pre-operative image from the navigation map to display to the surgeon in addition to the real-time image.
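One natural reading of the selection step is a nearest-pose lookup over the navigation map. A sketch under that assumption (the map structure and distance criterion are illustrative; the abstract also associates orientation with each image, which is omitted here for brevity):

```python
# Sketch of a navigation-map lookup (structure assumed, not specified in the
# abstract): each pre-operative image is stored with the camera position at
# capture; the entry nearest the tracked live-camera position is selected
# for display alongside the real-time view. Orientation matching is omitted.

import math

def nearest_preop_image(nav_map, live_position):
    """nav_map: list of (position_xyz, image_id); returns the best image_id."""
    return min(nav_map, key=lambda entry: math.dist(entry[0], live_position))[1]

nav_map = [((0, 0, 0), "img_A"), ((5, 0, 0), "img_B"), ((0, 5, 5), "img_C")]
print(nearest_preop_image(nav_map, (4.2, 0.3, 0.0)))  # img_B
```

A production system would combine position and orientation distance (e.g. weighting angular deviation) so the selected pre-operative image shares the live camera's viewpoint, not just its location.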

3-AXIS SIDE-VIEW CONFOCAL FLUORESCENCE ENDOMICROSCOPE
20220369933 · 2022-11-24

An optical probe assembly as a confocal endomicroscope includes an optical focusing stage that focuses an output beam onto a sample and a mirror scanning stage that is movable for scanning the output beam in both a lateral two-dimensional plane and an axial direction, using a side-view configuration. The side-view configuration allows for output-beam illumination and fluorescent imaging of the sample with greater imaging resolution and improved access to hard-to-reach tissue within a subject.

Surgical port feature

A surgical port feature may include a funnel portion, a tongue, a waist portion, and surgical instrument channels. The waist portion may be located between the funnel portion and the tongue. The surgical instrument channels may extend from the funnel portion through the waist portion. The surgical port feature may further include a second tongue, with the waist portion being located between the funnel portion, the tongue, and the second tongue.

Plenoptic endoscope with fiber bundle
11503987 · 2022-11-22

A plenoptic endoscope includes a fiber bundle with a distal end configured to receive light from a target imaging region, a sensor end disposed opposite the distal end, and a plurality of fiber optic strands each extending from the distal end to the sensor end. The plenoptic endoscope also includes an image sensor coupled to the sensor end of the fiber bundle, and a plurality of microlens elements disposed between the image sensor and the sensor end of the fiber bundle, the plurality of microlens elements forming an array that receives light from one or more of the plurality of fiber optic strands of the fiber bundle and directs the light onto the image sensor. The plurality of microlens elements and the image sensor together form a plenoptic camera configured to capture information about a light field emanating from the target imaging region.

SYSTEMS AND METHODS FOR MITIGATING COLLISION OF A ROBOTIC SYSTEM
20220366594 · 2022-11-17

Systems and methods are provided to mitigate potential collisions between a person and a robotic system. In various embodiments, a robotic surgical system includes a robotic linkage including joints, an endoscope coupled to a distal portion of the robotic linkage and configured to capture stereoscopic images, and a controller in communication with the endoscope. The controller executes instructions to analyze the stereoscopic images from the endoscope to identify a human-held tool in the stereoscopic images and to estimate a type and/or pose of the human-held tool, infer a position of the person holding the human-held tool based on the type and/or pose of the human-held tool, determine a spatial relationship between the person and the robotic linkage based on the inferred position of the person, and generate a warning of potential collision between the person and the robotic linkage based on the determined spatial relationship.
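The final two steps reduce to a proximity test between the inferred person position and the linkage geometry. A sketch of that spatial-relationship check, with the safety margin and joint coordinates as illustrative assumptions:

```python
# Illustrative distance check (threshold and geometry assumed): once a
# person's position has been inferred from the estimated tool pose, warn
# when any robot-linkage joint comes within a safety margin of it.

import math

def collision_warning(joint_positions, person_position, margin_mm=150.0):
    """True if any joint is closer to the inferred person than the margin."""
    return any(math.dist(j, person_position) < margin_mm
               for j in joint_positions)

joints = [(0, 0, 0), (200, 0, 0), (400, 100, 0)]
print(collision_warning(joints, (450, 120, 0)))   # True  (~54 mm from a joint)
print(collision_warning(joints, (1000, 0, 0)))    # False
```

A fuller implementation would test point-to-segment distance along each link rather than joint positions only, and could scale the margin by estimated velocities.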

Systems and methods for projecting an endoscopic image to a three-dimensional volume

A method comprises obtaining an endoscopic image dataset of a patient anatomy from an endoscopic imaging system and retrieving an anatomic model dataset of the patient anatomy obtained by an anatomic imaging system. The method also comprises mapping the endoscopic image dataset to the anatomic model dataset and displaying a first vantage point image using the mapped endoscopic image dataset. The first vantage point image is presented from a first vantage point at a distal end of the endoscopic imaging system. The method also comprises displaying a second vantage point image using at least a portion of the mapped endoscopic image dataset. The second vantage point image is presented from a second vantage point, different from the first vantage point.
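Once the endoscopic data is mapped onto the anatomic model, the same mapped surface point can be rendered from either vantage point. A toy sketch of that idea using a pinhole projection (camera parameters and coordinates are assumptions for illustration):

```python
# Toy sketch: a 3-D anatomic-model point carrying mapped endoscopic texture
# projects to different image coordinates for the distal-tip vantage point
# and for a second, offset vantage point. All values are illustrative.

def project(point, cam_origin, focal=1.0):
    """Pinhole projection along +z from cam_origin; returns (u, v)."""
    x, y, z = (p - c for p, c in zip(point, cam_origin))
    return (focal * x / z, focal * y / z)

surface_point = (10.0, 5.0, 50.0)           # mapped model point (mm)
print(project(surface_point, (0, 0, 0)))    # distal-tip vantage: (0.2, 0.1)
print(project(surface_point, (20, 0, 0)))   # second vantage: (-0.2, 0.1)
```

Rendering a full second-vantage image repeats this projection for every mapped point, filling pixels from the endoscopic texture; regions the endoscope never saw remain unmapped and would be drawn from the anatomic model alone.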