METHOD OF SPATIALLY LOCATING POINTS OF INTEREST DURING A SURGICAL PROCEDURE
20210186460 · 2021-06-24
Inventors
CPC classification
A61B8/12
HUMAN NECESSITIES
A61B8/463
HUMAN NECESSITIES
A61B90/37
HUMAN NECESSITIES
A61B2017/00216
HUMAN NECESSITIES
A61B2090/3784
HUMAN NECESSITIES
A61B8/085
HUMAN NECESSITIES
A61B2090/3782
HUMAN NECESSITIES
A61B2090/364
HUMAN NECESSITIES
A61B8/4245
HUMAN NECESSITIES
International classification
A61B8/00
HUMAN NECESSITIES
A61B8/12
HUMAN NECESSITIES
Abstract
A method of visualizing a surgical site includes scanning a surgical site with an ultrasound system, marking a first area or point of interest within a cross-sectional view of the surgical site with a first tag, viewing the surgical site with a camera, and showing an image of the surgical site captured by the camera on a second display. The second display displays a first indicia representative of the first tag on the image of the surgical site captured by the camera.
Claims
1. A method of visualizing a surgical site, the method comprising: scanning a surgical site with an ultrasound system including a first display showing a cross-sectional view of the surgical site, including recording cross-sectional views of the surgical site, each of the recorded cross-sectional views associated with a position of a probe of the ultrasound system within the surgical site when the respective cross-sectional view is recorded; viewing the surgical site with a camera on a second display; and identifying a first area of interest on the second display such that a recorded cross-sectional view of the surgical site associated with the first area of interest on the second display is displayed on the first display.
2. The method according to claim 1, wherein scanning the surgical site with the ultrasound system includes inserting an ultrasound probe into a body cavity of a patient.
3. The method according to claim 1, further comprising marking a second area of interest on the second display with a first tag including information relative to the second area of interest.
4. The method according to claim 3, further comprising toggling the first tag to display information relevant to the second area of interest on the second display.
5. The method according to claim 3, wherein marking the second area of interest includes identifying the second area of interest within the first area of interest.
6. The method according to claim 1, further comprising locating a first tag within images captured by the camera based on a position of a previous area of interest during a prior surgical procedure.
7. The method according to claim 6, wherein displaying the first tag representative of the previous area of interest includes displaying information relevant to the previous area of interest on the second display.
8. The method according to claim 7, further comprising toggling the first tag to display information relevant to the previous area of interest on the second display.
9. The method according to claim 6, wherein locating the first tag within images captured by the camera includes determining a depth of the first tag within the surgical site from multiple images captured by the camera.
10. The method according to claim 6, wherein locating the first tag within images captured by the camera includes using pixel-based identification of images from the camera to determine the location of the first tag within the images captured by the camera.
11. The method according to claim 1, wherein viewing the surgical site with the camera on the second display includes removing distortion from images of the surgical site captured with the camera before displaying the images of the surgical site on the second display.
12. The method according to claim 1, further comprising: marking a third area of interest within a cross-sectional view of the surgical site on the first display with a second tag; and viewing a third tag on the second display representative of the position of the probe of the ultrasound system within images captured by the camera when the third area of interest was identified.
13. The method according to claim 12, wherein viewing the third tag representative of the second tag includes displaying information relevant to the third area of interest on the second display.
14. The method according to claim 13, further comprising toggling the third tag to display information relevant to the third area of interest on the second display.
15. The method according to claim 14, further comprising toggling the first tag to display information relevant to the first area of interest on the second display independent of toggling the third tag.
16. A surgical system comprising: an ultrasound system including: an ultrasound probe configured to capture a cross-sectional view of a surgical site; and an ultrasound display configured to display the cross-sectional view of the surgical site captured by the ultrasound probe; an endoscopic system including: an endoscope having a camera configured to capture images of the surgical site; an endoscope display configured to display the images of the surgical site captured by the camera; and a processing unit configured to receive a location of a first area of interest within a captured image of the surgical site from the endoscope display and to display a cross-sectional view of the surgical site at the location on the endoscope display.
17. The surgical system according to claim 16, wherein the endoscope display is a touchscreen display configured to receive a tag indicative of the location of the first area of interest within the images of the surgical site.
18. The surgical system according to claim 16, wherein the processing unit is configured to remove distortion from images of the surgical site captured with the camera before displaying the images of the surgical site on the endoscope display.
19. The surgical system according to claim 16, wherein the processing unit is configured to locate a second area of interest within images captured by the camera using pixel-based identification of images from the camera, the second area of interest positioned based on a location of the ultrasound probe within the images of the surgical site when the second area of interest is identified on the ultrasound display.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] Various aspects of the present disclosure are described hereinbelow with reference to the drawings, which are incorporated in and constitute a part of this specification, wherein:
DETAILED DESCRIPTION
[0017] Embodiments of the present disclosure are now described in detail with reference to the drawings in which like reference numerals designate identical or corresponding elements in each of the several views. As used herein, the term “clinician” refers to a doctor, a nurse, or any other care provider and may include support personnel. Throughout this description, the term “proximal” refers to the portion of the device or component thereof that is closest to the clinician and the term “distal” refers to the portion of the device or component thereof that is farthest from the clinician.
[0018] Referring now to
[0019] The ultrasound imaging system 10 is configured to provide 2D cross-sectional views or 2D image slices of a region of interest within a body cavity of a patient “P” on the ultrasound display 18. A clinician may interact with the ultrasound imaging system 10 and an endoscope 36, which may include a camera, to visualize surface and subsurface portions of a surgical site “S” of the patient “P” during a surgical procedure as detailed below.
[0020] The ultrasound probe 20 is configured to generate 2D cross-sectional views of the surgical site “S” from a surface of a body cavity of the patient “P” and/or may be inserted through an opening, either a natural opening or an incision, to be within the body cavity adjacent the surgical site “S”. The processing unit 11 receives the 2D cross-sectional views of the surgical site “S” and displays a representation of the 2D cross-sectional views on the ultrasound display 18.
[0021] The endoscopic system 30 includes a control unit 31, an endoscope 36, and an endoscope display 38. With additional reference to
[0022] Referring to
[0023] When the endoscope 36 views the surgical site “S”, the camera 33 of the endoscope 36 captures real-time images of the surgical site “S” for viewing on the endoscope display 38. After the surgical site “S” is scanned with the ultrasound probe 20, other surgical instruments, e.g., a surgical instrument in the form of a grasper or retractor 46, may be inserted through the same or a different opening from the endoscope 36 to access the surgical site “S” to perform a surgical procedure at the surgical site “S”.
[0024] As detailed below, the 2D cross-sectional views of the surgical site “S” recorded during the scan of the surgical site “S” are available for viewing by the clinician during the surgical procedure. As the camera 33 captures real-time images, the images are displayed on the endoscope display 38. The clinician may select an area or point of interest of the surgical site “S” to review on the endoscope display 38. When an area or point of interest is selected on the endoscope display 38, the control unit 31 determines the position of the area or point of interest within the surgical site “S” and sends a signal to the processing unit 11. The processing unit 11 receives the signal from the control unit 31 and displays a recorded 2D cross-sectional view taken when the ultrasound probe 20 was positioned at or near the area or point of interest during the scan of the surgical site “S”. The recorded 2D cross-sectional view can be a fixed image or can be a video clip of the area or point of interest.
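One way to realize this recall step can be sketched as a nearest-neighbor lookup, assuming each recorded slice is stored with the probe's 3D position in a common surgical coordinate frame. This is an illustrative sketch, not the disclosed implementation; all names and data are hypothetical.

```python
import math

# Each recorded 2D slice is stored as (probe_position, slice_data),
# where probe_position is an (x, y, z) tuple recorded when the slice
# was captured. Selecting a point of interest recalls the slice whose
# probe position lies nearest that point.

def nearest_recorded_slice(recorded_slices, point):
    """Return the (position, slice) pair recorded closest to `point`."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return min(recorded_slices, key=lambda rec: dist(rec[0], point))

slices = [((0, 0, 0), "slice_A"), ((10, 0, 0), "slice_B"), ((20, 5, 0), "slice_C")]
print(nearest_recorded_slice(slices, (9, 1, 0))[1])  # slice_B
```

A real system would also need to register the camera and ultrasound coordinate frames; the sketch assumes that registration has already been done.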
[0025] When the recorded 2D cross-sectional view is a video clip of the area or point of interest, the video clip may have a duration of about 1 second to about 10 seconds. The duration of the video clip may be preset or may be selected by the clinician before or during a surgical procedure. It is envisioned that the video clip may be looped such that it continually repeats.
[0026] To indicate the area or point of interest on the endoscope display 38, the clinician may electronically or visually “mark” or “tag” the area or point of interest in the image on the endoscope display 38. To electronically or visually mark the area or point of interest in the image on the endoscope display 38, the clinician may use any known means including, but not limited to, touching the display with a finger or stylus; using a mouse, track pad, or similar pointing device to move an indicator on the endoscope display 38; using a voice recognition system; using an eye tracking system; typing on a keyboard; and/or a combination thereof.
[0027] To determine the position of the area or point of interest within the surgical site “S”, the control unit 31 processes the real-time images from the camera 33. The control unit 31 may remove distortion from the real-time images to improve accuracy of determining the position of the area or point of interest. It is envisioned that the control unit 31 may utilize a pixel-based identification of the real-time images from the camera 33 to identify the location of the area or point of interest within the real-time images from the camera 33. Additionally or alternatively, the location of the area or point of interest may be estimated from multiple real-time images from the camera 33. Specifically, multiple camera images captured during movement of the endoscope 36 about the surgical site “S” can be used to estimate a depth of an area or point of interest within the surgical site “S”.
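The pixel-based identification mentioned above can be illustrated with a generic normalized cross-correlation search, which relocates a previously tagged image patch inside a new camera frame. This is a minimal sketch, not the disclosed implementation; the function names and toy images are hypothetical.

```python
import numpy as np

def locate_patch(frame, patch):
    """Return the (row, col) top-left corner where `patch` best matches
    `frame`, by exhaustive normalized cross-correlation."""
    fh, fw = frame.shape
    ph, pw = patch.shape
    p = patch - patch.mean()
    best, best_rc = -np.inf, (0, 0)
    for r in range(fh - ph + 1):
        for c in range(fw - pw + 1):
            win = frame[r:r + ph, c:c + pw]
            w = win - win.mean()
            denom = np.sqrt((w * w).sum() * (p * p).sum()) or 1.0
            score = (w * p).sum() / denom  # NCC score in [-1, 1]
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

frame = np.zeros((8, 8))
frame[3:5, 4:6] = np.array([[1.0, 2.0], [3.0, 4.0]])  # distinctive structure
patch = frame[3:5, 4:6].copy()                        # previously tagged patch
print(locate_patch(frame, patch))  # (3, 4)
```

Production systems would typically use an optimized matcher and contend with tissue deformation and lighting changes, which this brute-force sketch ignores.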
[0028] In embodiments, a stereoendoscope can be used to determine a depth of structures within the surgical site “S” based on the depth imaging capability of the stereoendoscope. The depth of the structures can be used to more accurately estimate the location of the area or point of interest in the images from the camera 33.
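The stereoendoscope depth estimate can be illustrated with the standard pinhole triangulation relationship, depth = f·B/d. The focal length, baseline, and disparity values below are purely illustrative and not taken from the disclosure.

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Pinhole stereo model: depth = focal length * baseline / disparity.
    focal_px: focal length in pixels; baseline_mm: distance between the
    two camera centers; disparity_px: horizontal pixel shift of the same
    feature between the left and right views."""
    if disparity_px <= 0:
        raise ValueError("feature must be visible in both views")
    return focal_px * baseline_mm / disparity_px

# A feature shifted 40 px between the two views of a stereoendoscope with
# a 4 mm baseline and an 800 px focal length lies about 80 mm deep.
print(depth_from_disparity(800, 4.0, 40))  # 80.0
```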
[0029] With the location of the area or point of interest of the surgical site “S” determined, the processing unit 11 displays a 2D cross-sectional view, recorded during the scan of the surgical site “S” detailed above, that is associated with the identified location of the area or point of interest. The clinician can observe the 2D cross-sectional view to visualize subsurface structures at the area or point of interest. By visualizing the subsurface structures at the area or point of interest, the clinician's situational awareness of the area or point of interest is improved without the need for rescanning the area or point of interest with the ultrasound probe 20.
[0030] Additionally or alternatively, during a surgical procedure, a clinician may rescan an area or point of interest within the surgical site “S” with the ultrasound probe 20 to visualize a change effected by the surgical procedure. It is envisioned that the clinician may visualize the change on the ultrasound display 18 by comparing the real-time 2D cross-sectional views with the recorded 2D cross-sectional views at the area or point of interest. To visualize the changes on the ultrasound display 18, the clinician may overlay either the real-time or recorded 2D cross-sectional view with the other.
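The overlay comparison described above can be sketched as a simple weighted blend of the two slices; the alpha value and array sizes are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def overlay(live, recorded, alpha=0.5):
    """Alpha-blend a recorded 2D slice with the real-time slice.
    alpha=1.0 shows only the live view; alpha=0.0 only the recording."""
    live = np.asarray(live, dtype=float)
    recorded = np.asarray(recorded, dtype=float)
    return alpha * live + (1.0 - alpha) * recorded

live = np.array([[0.0, 1.0]])
recorded = np.array([[1.0, 1.0]])
print(overlay(live, recorded))  # [[0.5 1. ]]
```

An even split makes differences between the two slices show up as mid-gray regions; the clinician could instead adjust alpha interactively to fade between the views.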
[0031] Before, during, or after viewing 2D cross-sectional views, the clinician may “tag” areas or points of interest within images on the endoscope display 38, as represented by tags 62, 64, 66 in
[0032] Additionally, while viewing the ultrasound display 18, the clinician may identify an area or point of interest at or adjacent the surgical site “S”. When the clinician identifies an area or point of interest on the display 18, the clinician may electronically or visually “mark” or “tag” the area or point of interest in the image on the display 18 as represented by tag 68 in
[0033] When an area or point of interest is tagged on ultrasound display 18, e.g., tag 68, the location of the ultrasound probe 20 within the surgical site “S” is marked on the endoscope display 38 with a tag, e.g., tag 68′, to represent the tag on the ultrasound display 18.
[0034] Providing tags 62, 64, 66, 68′ with information of areas or points of interest at or adjacent a surgical site during a surgical procedure without requiring a clinician to pause a procedure may increase a clinician's situational awareness during a surgical procedure and/or may decrease a clinician's cognitive loading during a surgical procedure. Increasing a clinician's situational awareness and/or decreasing a clinician's cognitive loading may improve surgical outcomes for patients.
[0035] As shown, the tags 62, 64, 66, 68′ can be displayed in a variety of shapes including a sphere, a cube, a diamond, or an exclamation point. The shape of the tags 62, 64, 66, 68′ may be indicative of the type of information pertinent to the associated tags 62, 64, 66, 68′. In addition, the tags 62, 64, 66, 68′ may have a color indicative of the information contained in the tag. For example, the tag 62 may be blue when the information of the tag is pertinent to a blood vessel or may be yellow when the information of the tag is pertinent to tissue.
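This shape-and-color convention could be expressed as a simple lookup table. The blood-vessel (blue) and tissue (yellow) entries follow the example in the text; the "caution" entry and the default style are assumptions added for illustration.

```python
# Tag display styles keyed by the type of information the tag carries.
# "blood_vessel" and "tissue" mirror the example in the text; the
# "caution" entry and the fallback style are hypothetical.
TAG_STYLES = {
    "blood_vessel": {"shape": "sphere", "color": "blue"},
    "tissue": {"shape": "cube", "color": "yellow"},
    "caution": {"shape": "exclamation", "color": "red"},
}

def style_for(info_type):
    """Return the display style for a tag, falling back to a default."""
    return TAG_STYLES.get(info_type, {"shape": "diamond", "color": "white"})

print(style_for("blood_vessel")["color"])  # blue
```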
[0036] It is contemplated that the tags 62, 64, 66, 68′ may be saved for subsequent surgical procedures. Before a surgical procedure on a patient, a clinician can load a profile of the patient, including tags from a previous procedure, into the processing unit 11 and/or the control unit 31. As the camera 33 of the endoscope 36 captures real-time images, the control unit 31 identifies structures within the surgical site “S” to locate and place tags, e.g., tags 62, 64, 66, 68′, from previous surgical procedures. When similar structures are identified within the surgical site “S”, the control unit 31 places a tag within the image on the endoscope display 38 to provide the clinician with additional information about, and/or 2D cross-sectional views of, the area or point of interest from the previous surgical procedure in a similar manner as detailed above.
[0037] As detailed above and with reference back to
[0038] While several embodiments of the disclosure have been shown in the drawings, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise. Any combination of the above embodiments is also envisioned and is within the scope of the appended claims. Therefore, the above description should not be construed as limiting, but merely as exemplifications of particular embodiments. Those skilled in the art will envision other modifications within the scope of the claims appended hereto.