AUGMENTED-REALITY ENDOSCOPIC VESSEL HARVESTING
20230053189 · 2023-02-16
Inventors
- Joseph Mark Geric (Livonia, MI, US)
- Takeshi Tsubouchi (Dexter, MI, US)
- Randal James Kadykowski (South Lyon, MI, US)
- Tatsunori Fujii (Bear, DE, US)
CPC classification
A61B2017/00216 (HUMAN NECESSITIES)
A61B2034/2063 (HUMAN NECESSITIES)
A61B5/065 (HUMAN NECESSITIES)
A61B17/320016 (HUMAN NECESSITIES)
A61B2018/00404 (HUMAN NECESSITIES)
A61B18/1445 (HUMAN NECESSITIES)
A61B2090/365 (HUMAN NECESSITIES)
A61B34/20 (HUMAN NECESSITIES)
A61B18/1482 (HUMAN NECESSITIES)
A61B90/37 (HUMAN NECESSITIES)
A61B2090/367 (HUMAN NECESSITIES)
A61B90/30 (HUMAN NECESSITIES)
A61B1/0005 (HUMAN NECESSITIES)
International classification
A61B90/00 (HUMAN NECESSITIES)
A61B1/00 (HUMAN NECESSITIES)
A61B34/20 (HUMAN NECESSITIES)
A61B5/00 (HUMAN NECESSITIES)
G06T19/00 (PHYSICS)
Abstract
An endoscopic vessel harvesting system for surgical removal of a blood vessel to be used for coronary bypass uses endoscopic instruments for isolating and severing the vessel. An endoscopic camera in the endoscopic instruments captures images from a distal tip of the instrument within a dissected tunnel around the vessel. An image processor assembles a three-dimensional model of the tunnel from a series of images captured by the endoscopic camera. An augmented-reality display coupled to the image processor renders (e.g., visibly displays to the user in their field of view) a consolidated map representing the three-dimensional model along with a marker in association with the map indicating a current location of the distal tip.
Claims
1. A vessel harvesting system, comprising: an endoscopic camera of an endoscopic instrument capturing images from a distal tip of the instrument within a dissected tunnel around a vessel to be harvested; an image processor assembling a three-dimensional model of the tunnel from a series of images captured by the endoscopic camera; and a display coupled to the image processor rendering 1) a consolidated map representing the three-dimensional model, and 2) a marker in association with the map indicating a current location of the distal tip.
2. The system of claim 1 wherein the display is comprised of an augmented-reality display.
3. The system of claim 1 wherein the image processor is configured for feature point extraction using a comparison of changing positions of detected features within the series of images to determine estimated distances between the detected features within the three-dimensional model.
4. The system of claim 1 wherein the display further renders at least one size marker in association with the map on the display to indicate a corresponding length of at least a portion of the three-dimensional model.
5. The system of claim 1 wherein the display further renders an instantaneous endoscopic image using the captured images from the image processor.
6. The system of claim 5 wherein the display further renders a depth indicator of at least one visible structure in the instantaneous endoscopic image.
7. The system of claim 6 wherein a spatial distance to the visible structure is determined based on stereoscopic images of the visible structure captured by the camera from two different imaging locations.
8. The system of claim 6 further comprising a ranging sensor for determining a distance to the visible structure, wherein the depth indicator corresponds to the determined distance.
9. The system of claim 8 wherein the ranging sensor is comprised of a 3-D camera capturing simultaneous stereoscopic images.
10. The system of claim 6 wherein the depth indicator specifies a relative depth relationship between visible structures, wherein one visible structure is indicated as a foreground structure, and wherein another visible structure is indicated as a background structure.
11. The system of claim 5 wherein the display further renders an actual size indicator of at least one visible structure in the instantaneous endoscopic image.
12. The system of claim 5 wherein the display further renders a structure identification indicator of at least one visible structure in the instantaneous endoscopic image.
13. The system of claim 12 wherein the image processor identifies a side branch using image analysis, and wherein the structure identification indicator is comprised of a side branch indicator to highlight the identified side branch.
14. The system of claim 1 wherein the display is comprised of an augmented-reality display, and wherein the system further comprises: a selector coupled to the image processor for receiving a screen update command generated by a wearer of the augmented-reality display while the wearer holds the endoscopic instrument, wherein the image processor selects data to display on the augmented-reality display in response to the screen update command.
15. The system of claim 1 wherein the image processor is configured to receive a pre-mapping representation of the vessel to be harvested for use in assembling the three-dimensional model, and wherein the pre-mapping representation is obtained by transcutaneous sensing.
16. The system of claim 15 wherein the transcutaneous sensing is comprised of ultrasonic imaging.
17. The system of claim 15 wherein the pre-mapping representation defines a dissection path, wherein the image processor compares the current location of the distal tip with the dissection path, and wherein the display renders a warning to a user of the vessel harvesting system when a deviation between the current location of the distal tip and the dissection path exceeds a predetermined threshold.
18. A method of guiding endoscopic vessel harvesting performed by a user, comprising the steps of: capturing images from an endoscopic camera at a distal tip of an endoscopic instrument within a dissected tunnel around a vessel to be harvested; assembling a three-dimensional model of the tunnel from a series of images captured by the endoscopic camera; rendering a consolidated map representing the three-dimensional model on a display visible to the user; and rendering a marker on the display in association with the map indicating a current location of the distal tip.
19. The method of claim 18 wherein the display is comprised of an augmented-reality display.
20. The method of claim 18 wherein the step of assembling the three-dimensional model is comprised of: locating the distal tip at a starting location in the tunnel; analyzing a corresponding captured image to extract at least a first feature; moving the distal tip along the tunnel away from the starting location; tracking a changing position of the first feature in subsequent images of the series of images; extracting a second feature in one of the subsequent images and tracking a changing position of the second feature in additional subsequent images; and linking the extracted features according to the changing positions to define the three-dimensional model.
21. The method of claim 18 further comprising the steps of: pre-mapping a dissection path of the vessel to be harvested using transcutaneous sensing of the vessel; performing a dissection of the tunnel around the vessel; comparing a current location of the distal tip with the dissection path; and presenting a warning to the user when a deviation between the current location of the distal tip and the dissection path exceeds a predetermined threshold.
22. The method of claim 18 further comprising the step of: rendering at least one size marker in association with the map on the display to indicate a corresponding length of at least a portion of the three-dimensional model.
23. The method of claim 18 further comprising the steps of: rendering an instantaneous endoscopic image on the display; and rendering an actual size indicator of at least one visible structure in the instantaneous endoscopic image.
24. The method of claim 18 further comprising the steps of: rendering an instantaneous endoscopic image on the display; and rendering a depth indicator of at least one visible structure in the instantaneous endoscopic image.
25. The method of claim 24 wherein the depth indicator specifies a relative depth relationship between at least two visible structures, wherein one visible structure is indicated as a foreground structure, and wherein another visible structure is indicated as a background structure.
26. The method of claim 18 further comprising the steps of: rendering an instantaneous endoscopic image on the display; identifying a side branch to the vessel using image analysis; and rendering a side branch indicator to highlight the identified side branch on the display of the instantaneous endoscopic image.
27. The method of claim 18 wherein the display is comprised of an augmented-reality display, and wherein the method further comprises the steps of: a wearer of the augmented-reality display taking an action to indicate a screen update command while continuously holding the endoscopic instrument; sensing the screen update command; and selecting data to display on the augmented-reality display in response to the screen update command.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0036] Referring to
[0037] A known dissector unit 16 is shown in
[0038] After initial blunt dissection around the vessel, a harvester cutting unit 22 as shown in
[0039] In some embodiments, cutting and cauterizing may be accomplished using a pair of scissor-like jaws instead of a V-cutter. The jaws may have electrodes or other energizable devices on inner surfaces that are clamped onto a side branch for being cut.
[0041] Dissector unit 36 has a tubular main body portion comprising a hollow longitudinal rod 37 within which endoscope 32 is to be inserted. Endoscope 32 is inserted into, or removed from, longitudinal rod 37 through a handle portion 38. The material of longitudinal rod 37 may be comprised of fluoropolymers. The most preferred material for constituting the outer surface of longitudinal rod 37 is polytetrafluoroethylene (PTFE). The use of a fluoropolymer reduces the friction caused by moving rod 37 through connective tissue, thereby reducing the force required to perform a dissection.
[0042] A blunt dissector tip 39 is disposed at the distal end of longitudinal rod 37. Tip 39 has a conical shape and comprises a transparent synthetic resin material to facilitate viewing through tip 39 using endoscope 32. Trocar 40 guides dissector unit 36 into the incision site. An outer surface of trocar 40 includes a projection to engage with living tissue and a holding portion 41 to hold trocar 40 onto the living tissue 43 (e.g., patient's skin). Since the inserting direction of dissector 36 is along the direction of a target blood vessel 45 being dissected, the operator gradually inserts the dissector so as to dissect peripheral tissue 46 from blood vessel 45 (creating a working tunnel 44) while viewing the endoscope image on a display 48 which is connected to endoscope 32 by cables 47.
[0043] After dissecting a working tunnel along the target vessel, a dissector instrument may be removed and a cutting instrument may be inserted into the working tunnel to sever the target vessel from any side branches and from any connective tissue that has not been dissected.
[0045] Augmented-reality display 60 may be comprised of a head-worn display, sometimes also referred to as “smart glass” or “smart glasses”, among other names. For example, display 60 can take the form of a pair of glasses, a visor, an open area, or a face-shield that a user (e.g., a surgical technician or physician's assistant) wears on their head or face. Display 60 includes a viewfield through which a user can view physical objects in their field of view, which is sometimes referred to as “non-occluded” or a “non-occluded heads-up display (HUD)”, among other names. For example, there may be a clear portion of glass, plastic, or similar transparent material through which light emitted from physical objects passes into the user's eye. In some embodiments, display 60 may include solid or opaque portions that completely or partially occlude the user's view, sometimes referred to as “occluded” or an “occluded HUD”, among other names. The viewfield can include one or more screens (e.g., Light Emitting Diode or LED screens) along with one or more cameras that capture video data of the user's point-of-view. Video is then rendered on the screens, providing the user with a viewfield that is similar to a clear view of the physical environment.
[0046] In another example, display 60 can include a retinal projector configured to project an image directly onto the wearer's eye or eyes. In some cases, the retinal projector can include a clear portion of glass, plastic, or similar transparent material through which light emitted from physical objects passes into the user's eye. In some cases, a display 60 with a retinal projector can include one or more cameras that capture video data of the user's point-of-view. Video is then rendered and projected onto the user's eye, or eyes, providing the user with a viewfield that is similar to a clear view of the physical environment. In some implementations, display 60 can be configured to account for vision difficulties of the user. For example, a retinal projector can be configured to provide a projection to a user with a cloudy cornea or cataracts in a way that is clear to such a user.
[0047] In yet another example, display 60 can include a half-mirrored portion of glass, plastic, or similar transparent material through which light emitted from physical objects passes into the user's eye, while light is emitted onto the half-mirrored viewfield to render glyphs and other overlays.
[0048] Augmented-reality display 60 is configured to render glyphs (e.g., text, symbols, colored overlays, etc.) and to render video in the viewfield. For example, light emitters can emit light into a transparent viewfield so that the user is shown a reflection of the light. In another example, where screens are used to show video from the user's point-of-view, the glyphs and video can be shown superimposed over the point-of-view video. In any case, display 60 shows a presentation of the glyphs and the video as an overlay to the view of the physical objects.
[0049] Display 60 can include other features as well. For example, a microphone and earphone may be included for connecting to an intercom, cellular phone, or other telecommunication device. This can allow the operator to communicate, via the microphone and earphone, with people in the same facility or more distant.
[0050] As discussed in more detail below, many different types of glyphs and video images can be displayed to the user. Selector 59 enables the user to generate a screen update command in order to modify the contents of display 60 (e.g., selecting different glyphs, scrolling through monitored physiologic parameters of the patient, selecting different image sources, or altering characteristics of the displayed items such as zooming in on a region of the images). Since it is desirable for the user (e.g., wearer of display 60) to maintain their hand grip on the harvesting instrument, selector 59 is configured to receive commands while the user continues to hold the instrument. Selector 59 may be comprised of a manual control mounted on the instrument in its gripping area. Otherwise, selector 59 may be comprised of a hands-free device which senses other actions by the user. For example, selector 59 may include an eye-tracking camera which detects specified eye movements of the user which have been designated to trigger a corresponding update command. Alternatively, selector 59 may include either 1) a microphone and a voice recognition system so that the user can generate the screen update command as a spoken command, 2) a motion sensor responsive to predetermined movements of the user, or 3) a foot pedal (e.g., coupled to image processor 57 via a Bluetooth® connection) with one or more switches to generate a desired update command.
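As an illustrative sketch only (not the patent's implementation), the mapping from any selector modality onto a common set of screen update commands might be organized as below. The event names, page list, and zoom behavior are all hypothetical; the point is that every input path (grip button, eye gesture, voice, foot pedal) drives the same display state, so the wearer never releases the instrument.

```python
from enum import Enum, auto

class SelectorEvent(Enum):
    # Hypothetical input events; names are illustrative, not from the patent.
    GRIP_BUTTON = auto()
    EYE_GESTURE = auto()
    VOICE_COMMAND = auto()
    FOOT_PEDAL = auto()

class ScreenState:
    """Tracks which overlay page the augmented-reality display is showing."""
    PAGES = ["consolidated_map", "physiologic_parameters", "endoscopic_image"]

    def __init__(self):
        self.page_index = 0
        self.zoom = 1.0

    def handle(self, event):
        # Any selector modality maps onto the same screen update commands,
        # so the wearer keeps their grip on the harvesting instrument.
        if event in (SelectorEvent.GRIP_BUTTON, SelectorEvent.FOOT_PEDAL):
            self.page_index = (self.page_index + 1) % len(self.PAGES)
        elif event is SelectorEvent.EYE_GESTURE:
            self.zoom = min(4.0, self.zoom * 2)   # zoom in on a region
        elif event is SelectorEvent.VOICE_COMMAND:
            self.zoom = 1.0                        # reset the view
        return self.PAGES[self.page_index]
```

In this sketch, a single handler receives events from whichever sensing hardware is installed, which is one way to keep manual, eye-tracking, voice, and pedal selectors interchangeable.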
[0052] In some embodiments, an image processor (e.g., a local or a remote computer-based unit in communication with the endoscopic instrument and which performs at least a portion of an image processing task using images captured by the endoscopic camera) assembles a three-dimensional model of the tunnel and vessel structures from a series of images captured over time by the endoscopic camera. The image processor may be configured for feature point extraction using a comparison of changing positions of detected features within the series of images to determine estimated distances between the detected features within the three-dimensional model. A process for assembling a 3-D model may include an automated stitching together of image data from overlapping and/or adjacent viewfields, as is done when creating panoramic images.
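A highly simplified sketch of the feature-linking step is shown below. It assumes, hypothetically, that the tip advance between frames is already known (standing in for the parallax-based distance estimation described above), and it omits feature detection itself; a feature's tunnel coordinate is taken from the frame in which it first entered the camera's view.

```python
def assemble_tunnel_model(observations, advance_per_frame_mm):
    """Link features detected in a series of endoscopic images into a
    simple one-dimensional tunnel model (an illustrative stand-in for
    the three-dimensional model described in the text).

    observations: list of sets of feature ids, one set per captured frame,
        in the order the frames were captured as the distal tip advanced.
    advance_per_frame_mm: assumed tip travel between frames (hypothetical;
        in practice this would come from the parallax-based estimate).
    Returns a list of (feature_id, estimated_position_mm) tuples ordered
    along the tunnel, i.e. the linked chain of extracted features.
    """
    first_seen = {}
    for frame_idx, features in enumerate(observations):
        for fid in features:
            # Record only the first frame in which each feature appears.
            first_seen.setdefault(fid, frame_idx)
    return sorted(
        ((fid, frame * advance_per_frame_mm) for fid, frame in first_seen.items()),
        key=lambda item: item[1],
    )
```

For example, features first seen in successive frames, with an assumed 5 mm advance per frame, are placed 5 mm apart along the reconstructed tunnel.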
[0056] In some embodiments, a user can be assisted in identifying particular anatomical objects (e.g., specific vessels, side branches, or connective tissue) based on pattern recognition. As shown in
[0057] In some embodiments, a depth of visible structures or objects (e.g., a distance from the distal tip of the endoscopic instrument to the object) is included as an overlay. Depth can be estimated or measured. For measuring depth, a ranging sensor can be provided at the distal tip of the endoscopic instrument.
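For the stereoscopic case (two images of a structure captured from two imaging locations), depth can be estimated with the standard pinhole-stereo relation, depth = f x B / d. The sketch below is a minimal illustration assuming a calibrated focal length in pixels and a known baseline between the two capture positions; it also labels the nearest of several measured structures as foreground, per the relative-depth overlay described in the claims.

```python
def stereo_depth_mm(disparity_px, focal_length_px, baseline_mm):
    """Estimate distance to a visible structure from the pixel disparity
    between two images captured at laterally offset positions.
    Standard pinhole-stereo relation: depth = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("structure must show positive disparity")
    return focal_length_px * baseline_mm / disparity_px

def relative_depth_labels(depths_mm):
    """Label the nearest structure 'foreground' and the rest 'background'."""
    nearest = min(depths_mm, key=depths_mm.get)
    return {name: ("foreground" if name == nearest else "background")
            for name in depths_mm}
```

A 3-D camera capturing simultaneous stereoscopic images, or a separate ranging sensor, could supply the same depth values directly.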
[0058] Using the depth information, overlays can be created to assist a user in approaching and treating target tissues (e.g., connective tissue and side branches).
[0059] To further assist a user in understanding the spatial arrangement and sizes of various features in an endoscopic image, a harvesting instrument of the present invention may include absolute reference markings that can be visualized by the user via the endoscopic camera view.
[0060] Depth information also facilitates automated determination of the sizes of selected (and/or automatically recognized) structures in an image. As shown in
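Given a depth measurement, a structure's physical size can be recovered from its on-screen extent using the pinhole camera model. This is an illustrative sketch, assuming a calibrated focal length in pixels; it is not drawn from the patent text itself.

```python
def actual_size_mm(pixel_extent_px, depth_mm, focal_length_px):
    """Convert the on-screen extent of a structure to a physical size
    using its measured depth (pinhole model): size = pixels * depth / f."""
    return pixel_extent_px * depth_mm / focal_length_px
```

For instance, a structure spanning 50 pixels at a measured depth of 20 mm, with a 500-pixel focal length, corresponds to an actual size of 2 mm, which could then be rendered as the actual-size indicator.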
[0061] In some embodiments, a pre-mapping representation of the target vessel to be harvested is obtained prior to dissecting the endoscopic tunnel. The pre-mapping representation can be used in assembling the three-dimensional model and/or guiding a dissector during the tunnel dissection. The pre-mapping representation can be obtained by transcutaneous sensing of the location of the target vessel. The transcutaneous sensing can be comprised of ultrasonic imaging. The pre-mapping representation can be used to define a dissection path. During dissection, the image processor can compare a current location of the distal tip with the dissection path. Whenever a deviation between the current location of the distal tip and the dissection path exceeds a predetermined threshold, the augmented-reality display can render a warning to a user. A sound or other warning can also be generated. The warning rendered on the display may also include an instruction for correcting the deviation.
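One way to realize the deviation check, sketched here under the assumption that the pre-mapped dissection path is available as a polyline of sampled 3-D points, is to take the minimum distance from the current tip location to those samples. The function names and threshold handling are illustrative, not from the patent.

```python
import math

def deviation_mm(tip_xyz, path_points):
    """Minimum distance from the distal-tip location to the pre-mapped
    dissection path, approximated as a polyline of sampled points."""
    return min(math.dist(tip_xyz, p) for p in path_points)

def check_path(tip_xyz, path_points, threshold_mm):
    """Return a warning string when the deviation exceeds the threshold,
    or None while the tip stays within the allowed corridor."""
    d = deviation_mm(tip_xyz, path_points)
    if d <= threshold_mm:
        return None
    return f"deviation {d:.1f} mm exceeds {threshold_mm} mm threshold"
```

A denser sampling of the path gives a tighter approximation to the true point-to-curve distance; segment-wise projection would be more exact but is omitted for brevity.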
[0063] The pre-mapping representation can be used to guide a dissection as shown in
[0065] Using the captured images, the controller estimates a location of the head (distal end) of the dissector in step 153 based on the portion of the dissector that remains visible outside the patient, for example. The estimated head location is compared to the pre-mapped path in step 154. A check is performed in step 155 to determine whether the head location is within a selected boundary (e.g., within a threshold distance) of the pre-mapped path. If within the boundary, then a return is made to step 152 to continue monitoring progress. If not within the desired boundaries, then a warning is provided to the user in step 156.
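The loop of steps 152 through 156 might be sketched as follows, with `estimate_head_location` and `warn` as injected stand-ins (both hypothetical) for the image-based head-location estimator and the display warning described above.

```python
import math

def monitor_dissection(frames, estimate_head_location, premapped_path,
                       boundary_mm, warn):
    """Sketch of the monitoring loop (steps 152-156): for each captured
    frame, estimate the dissector head location, compare it to the
    pre-mapped path, and warn when it leaves the selected boundary."""
    for frame in frames:                       # step 152: capture image
        head = estimate_head_location(frame)   # step 153: estimate location
        deviation = min(math.dist(head, p)     # step 154: compare to path
                        for p in premapped_path)
        if deviation > boundary_mm:            # step 155: boundary check
            warn(head, deviation)              # step 156: warn the user
        # otherwise, loop back to step 152 to continue monitoring
```

In a real system the frames would arrive continuously and the estimator would use, for example, the externally visible portion of the dissector; here both are abstracted so the control flow of the figure can be seen in isolation.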