GRAPHICAL USER INTERFACE FOR A SURGICAL NAVIGATION SYSTEM AND METHOD FOR PROVIDING AN AUGMENTED REALITY IMAGE DURING OPERATION
20210267698 · 2021-09-02
CPC classification
A61B2034/2068 · HUMAN NECESSITIES
G02B2027/0196 · PHYSICS
A61B2090/3983 · HUMAN NECESSITIES
A61B2017/00216 · HUMAN NECESSITIES
A61B2034/2063 · HUMAN NECESSITIES
A61B2034/107 · HUMAN NECESSITIES
A61B2034/102 · HUMAN NECESSITIES
A61B90/36 · HUMAN NECESSITIES
A61B34/10 · HUMAN NECESSITIES
A61B2090/368 · HUMAN NECESSITIES
A61B2090/365 · HUMAN NECESSITIES
A61B34/20 · HUMAN NECESSITIES
A61B90/37 · HUMAN NECESSITIES
G06F18/2148 · PHYSICS
G02B2027/0187 · PHYSICS
A61B2090/367 · HUMAN NECESSITIES
A61B2034/105 · HUMAN NECESSITIES
International classification
A61B34/00 · HUMAN NECESSITIES
A61B34/10 · HUMAN NECESSITIES
A61B5/00 · HUMAN NECESSITIES
A61B90/00 · HUMAN NECESSITIES
G06T19/00 · PHYSICS
Abstract
A surgical navigation system includes: a 3D display system with a see-through visor; a tracking system comprising means for real-time tracking of a surgeon's head, the see-through visor, a patient anatomy and a surgical instrument to provide current position and orientation data; a source of an operative plan, patient anatomy data and a virtual surgical instrument model; a surgical navigation image generator configured to generate a surgical navigation image with a three-dimensional image simultaneously representing a virtual image of the surgical instrument corresponding to the current position and orientation of the surgical instrument and a virtual image of the surgical instrument indicating the suggested position and orientation of the surgical instrument according to the operative plan data, based on the current relative position and orientation of the surgeon's head, the see-through visor, the patient anatomy and the surgical instrument; wherein the 3D display system is configured to show the surgical navigation image at the see-through visor, such that an augmented reality image collocated with the patient anatomy in the surgical field underneath the see-through visor is visible to a viewer looking from above the see-through visor towards the surgical field.
Claims
1.-14. (canceled)
15. An apparatus, comprising: a memory; a processor operatively coupled to the memory, the processor configured to: determine, based on data associated with an operative plan of a surgical procedure, a suggested position and orientation of a medical device in an anatomy of a patient, the medical device configured to be used in the surgical procedure; determine, based on tracking data associated with the medical device, an actual position and orientation of the medical device; and generate a surgical navigation image including a three-dimensional (3D) image including (1) a virtual object indicative of the suggested position and orientation of the medical device, (2) a virtual representation of the medical device, and (3) a virtual representation of a portion of the anatomy of the patient; and a display system configured to display the surgical navigation image on a surface positionable between a head of an operator and a surgical field including the patient anatomy such that the virtual representation of the portion of the anatomy is collocated with the anatomy and the virtual representation of the medical device is overlaid on a portion of the medical device in a field of view of the operator.
16. The apparatus of claim 15, wherein the virtual representation of the medical device is a first virtual representation of the medical device, and the virtual object is a second virtual representation of the medical device having the suggested position and orientation of the medical device.
17. The apparatus of claim 15, wherein the processor is configured to generate the surgical navigation image to further include a set of images depicting different orthogonal or arbitrary planes of the portion of the anatomy.
18. The apparatus of claim 17, wherein the display system is configured to display the surgical navigation image on the surface such that the set of images is displayed at a location next to the 3D image.
19. The apparatus of claim 17, wherein the display system is configured to adjust the location of the set of images based on a location of the 3D image.
20. The apparatus of claim 17, wherein the display system is configured to display the surgical navigation image on the surface such that the 3D image occupies a larger area of the field of view of the operator than the set of images.
21. The apparatus of claim 17, wherein the processor is further configured to adapt the set of images of the surgical navigation image based on changes to the actual position and orientation of the medical device.
22. The apparatus of claim 15, wherein the processor is further configured to adapt the 3D image based on a position and orientation of the head of the operator.
23. The apparatus of claim 15, wherein the processor is further configured to generate the surgical navigation image to include virtual guidance indicating whether a placement of the medical device is correct based on the suggested position and orientation of the medical device.
24. The apparatus of claim 23, wherein the virtual guidance includes an animation.
25. The apparatus of claim 15, wherein the surface is a surface of a see-through mirror, the see-through mirror positionable a first distance from the head of the operator and a second distance from the surgical field, the first distance being less than the second distance.
26. The apparatus of claim 25, wherein the display system is configured to move such that a location of the see-through mirror can move relative to the head of the operator.
27. The apparatus of claim 25, wherein the display system includes a 3D display and an arm extending from the 3D display, the arm configured to support the see-through mirror such that the see-through mirror can be positioned between the head of the operator and the surgical field when the 3D display is positioned above the head of the operator.
28. The apparatus of claim 15, wherein the processor is further configured to: determine, based on tracking data associated with the head of the operator, a position and orientation of the head of the operator, the processor configured to generate the surgical navigation image according to a perspective of the operator based on the position and orientation of the head of the operator.
29. A method, comprising: determining, based on data associated with an operative plan of a surgical procedure, a suggested position and a suggested orientation of a medical device in an anatomy of a patient, the medical device configured to be used in the surgical procedure; receiving, from a tracking system, tracking data associated with the medical device; determining, based on the tracking data associated with the medical device, an actual position and orientation of the medical device; generating a three-dimensional (3D) image including (1) a first virtual guidance clue for indicating the suggested position of the medical device, (2) a second virtual guidance clue for indicating the suggested orientation of the medical device, and (3) a virtual representation of the medical device associated with the actual position and orientation of the medical device; and displaying the 3D image in a field of view of an operator such that the virtual representation of the medical device is overlaid on a portion of the medical device in the field of view and the first and second virtual guidance clues show the operator whether the medical device has been placed in the suggested position and the suggested orientation.
30. The method of claim 29, wherein the 3D image further includes a virtual representation of a portion of the anatomy of the patient, and the 3D image is displayed in the field of view further such that the virtual representation of the portion of the anatomy is collocated with the anatomy.
31. The method of claim 29, further comprising: generating a set of images depicting different orthogonal or arbitrary planes of the portion of the anatomy; and displaying the set of images in the field of view of the operator such that the set of images is shown next to the 3D image.
32. The method of claim 31, wherein at least one of the set of images includes one or more additional guidance clues associated with the suggested position or the suggested orientation of the medical device.
33. The method of claim 31, further comprising adapting the set of images based on changes to the actual position and orientation of the medical device.
34. The method of claim 29, further comprising: receiving, from the tracking system, tracking data associated with a head of the operator; determining, based on the tracking data associated with the head of the operator, a position and orientation of the head of the operator; and adapting the 3D image based on the position and orientation of the head of the operator.
Description
BRIEF DESCRIPTION OF FIGURES
[0032] The surgical navigation system and method are presented herein by means of non-limiting example embodiments shown in the drawings.
DETAILED DESCRIPTION
[0050] The following detailed description is of the best currently contemplated modes of carrying out the invention. The description is not to be taken in a limiting sense, but is made merely for the purpose of illustrating the general principles of the invention.
[0051] The system presented herein comprises a 3D display system 140 and is intended to be used directly in real surgical applications in a surgical room, as shown in the drawings.
[0052] The surgical room typically comprises a floor 101 on which an operating table 104 is positioned. A patient 105 lies on the operating table 104 while being operated on by a surgeon 106 with the use of various surgical instruments 107. The surgical navigation system as described in detail below can have its components, in particular the 3D display system 140, mounted to a ceiling 102, or alternatively to the floor 101 or a side wall 103 of the operating room. Furthermore, the components, in particular the 3D display system 140, can be mounted to an adjustable and/or movable floor-supported structure (such as a tripod). Components other than the 3D display system 140, such as the surgical navigation image generator 131, can be implemented in a dedicated computing device 109, such as a stand-alone PC computer, which may have its own input controllers and display(s) 110.
[0053] In general, the system is designed for use in a configuration wherein the distance d1 between the surgeon's eyes and the see-through mirror 141 is shorter than the distance d2 between the see-through mirror 141 and the operative field at the patient anatomy 105 being operated on.
[0055] The surgical navigation system comprises a tracking system for tracking in real time the position and/or orientation of various entities to provide current position and/or orientation data. For example, the system may comprise a plurality of arranged fiducial markers, which are trackable by a fiducial marker tracker 125. Any known type of tracking system can be used; for example, in the case of a marker tracking system, 4-point marker arrays are tracked by a three-camera sensor to provide movement along six degrees of freedom. A head position marker array 121 can be attached to the surgeon's head for tracking of the position and orientation of the surgeon and the direction of gaze of the surgeon; for example, the head position marker array 121 can be integrated with the wearable 3D glasses 151 or can be attached to a strip worn over the surgeon's head.
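Conceptually, each tracked marker array yields a six-degree-of-freedom pose (position plus orientation). The following minimal sketch in Python with NumPy shows one way such poses could be represented and composed into relative coordinate frames; the Pose structure and function names are illustrative assumptions, not the disclosed implementation.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Pose:
        position: np.ndarray  # 3-vector in tracker coordinates (e.g. millimetres)
        rotation: np.ndarray  # 3x3 rotation matrix giving the orientation

    def as_homogeneous(pose: Pose) -> np.ndarray:
        """Pack a tracked pose into a 4x4 transform for composing frames."""
        t = np.eye(4)
        t[:3, :3] = pose.rotation
        t[:3, 3] = pose.position
        return t

    def instrument_in_anatomy_frame(instrument: Pose, anatomy: Pose) -> np.ndarray:
        """Express the instrument pose relative to the patient-anatomy frame,
        so the virtual instrument stays registered when either entity moves."""
        return np.linalg.inv(as_homogeneous(anatomy)) @ as_homogeneous(instrument)

Composing poses this way is what allows the virtual content to remain locked to the patient even when the patient, the mirror or the surgeon moves.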
[0056] A display marker array 122 can be attached to the see-through mirror 141 of the 3D display system 140 for tracking its position and orientation, as the see-through mirror 141 is movable and can be placed according to the current needs of the operative setup.
[0057] A patient anatomy marker array 123 can be attached at a particular position and orientation of the anatomy of the patient.
[0058] A surgical instrument marker array 124 can be attached to the instrument whose position and orientation shall be tracked.
[0059] Preferably, the markers in at least one of the marker arrays 121-124 are not coplanar, which helps to improve the accuracy of the tracking system.
[0060] Therefore, the tracking system comprises means for real-time tracking of the position and orientation of at least one of: the head of the surgeon 106, the 3D display 142, the patient anatomy 105 and the surgical instruments 107. Preferably, all of these elements are tracked by a fiducial marker tracker 125.
[0061] A surgical navigation image generator 131 is configured to generate an image to be viewed via the see-through mirror 141 of the 3D display system. It generates a surgical navigation image 142A comprising data representing simultaneously a virtual image 164B of the surgical instrument corresponding to the current position and orientation of the surgical instrument and a virtual image 164A of the surgical instrument indicating the suggested position and orientation of the surgical instrument according to the operative plan data 161, 162, based on the current relative position and orientation of the surgeon's head 106, the see-through visor 141, 141B, 141D, the patient anatomy 105 and the surgical instrument 107. It may further comprise data representing the patient anatomy scan 163 (which can be generated before the operation or live during the operation).
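The generated 3D image thus contains three virtual objects at once: the anatomy scan, the instrument at its tracked pose, and a second, ghosted copy of the instrument at the pose suggested by the operative plan. A minimal sketch of such scene assembly, assuming a simple scene-graph structure (the Node and Scene names are illustrative):

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Node:
        mesh: str              # identifier of a 3D model (anatomy scan or instrument)
        transform: np.ndarray  # 4x4 pose of the model in a common frame
        style: str = "solid"   # "solid" for the actual pose, "ghost" for the suggested one

    @dataclass
    class Scene:
        nodes: list = field(default_factory=list)

    def build_navigation_scene(anatomy_t, actual_instrument_t, suggested_instrument_t):
        """Assemble the three virtual objects making up the 3D part of image 142A."""
        return Scene(nodes=[
            Node("patient_anatomy_scan", anatomy_t),
            Node("instrument_model", actual_instrument_t, style="solid"),
            Node("instrument_model", suggested_instrument_t, style="ghost"),
        ])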
[0062] The surgical navigation image generator 131, as well as other components of the system, can be controlled by a user (i.e. a surgeon or support staff) via one or more user interfaces 132, such as foot-operable pedals (which are convenient for the surgeon to operate), a keyboard, a mouse, a joystick, a button, a switch, an audio interface (such as a microphone), a gesture interface, a gaze-detecting interface, etc. The input interface(s) are for inputting instructions and/or commands.
[0063] All system components are controlled by one or more computers, which are controlled by an operating system and one or more software applications. The computer may be equipped with a suitable memory which may store a computer program or programs executed by the computer in order to execute the steps of the methods utilized in the system. Computer programs are preferably stored on a non-transitory medium. An example of a non-transitory medium is a non-volatile memory, for example a flash memory, while an example of a volatile memory is RAM. The computer instructions are executed by a processor. These memories are exemplary recording media for storing computer programs comprising computer-executable instructions performing all the steps of the computer-implemented method according to the technical concept presented herein. The computer(s) can be placed within the operating room or outside the operating room. Communication between the computers and the components of the system may be performed by wire or wirelessly, according to known communication means.
[0064] The aim of the system is to generate, via the see-through visor 141, an augmented reality image such as shown in the examples in the drawings.
[0070] If the 3D display 142 is stereoscopic, the surgeon shall use a pair of 3D glasses 151 to view the augmented reality image 141A. However, if the 3D display 142 is autostereoscopic, it may not be necessary for the surgeon to use the 3D glasses 151 to view the augmented reality image 141A.
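For a stereoscopic display, the scene is rendered twice, once from each eye position. A minimal sketch of deriving the two eye positions from the tracked head pose; the interpupillary distance value and the axis convention are assumptions:

    import numpy as np

    IPD_MM = 63.0  # assumed average interpupillary distance, adjustable per user

    def eye_positions(head_position: np.ndarray, head_rotation: np.ndarray):
        """Return (left, right) eye positions in tracker coordinates.
        Convention assumed: column 0 of head_rotation is the head's right axis."""
        right_axis = head_rotation[:, 0]
        offset = 0.5 * IPD_MM * right_axis
        return head_position - offset, head_position + offset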
[0071] Preferably, the images of the orthogonal planes 172, 173, 174 are displayed in an area next to (preferably above) the area of the 3D image 171, as shown in the drawings.
[0072] The location of the images of the orthogonal planes 172, 173, 174 may be adjusted in real time depending on the location of the 3D image 171, as the surgeon changes the position of the head during the operation, so as not to interfere with the 3D image 171.
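A minimal sketch of such a placement rule, assuming rectangular screen regions with a y axis growing downward (the coordinate convention is an assumption):

    def layout_plane_views(screen_h, img3d_y, img3d_h, panel_h, margin=10):
        """Return the y-origin for the row of plane views 172-174: preferably
        just above the 3D image 171, otherwise in a row below it."""
        above = img3d_y - margin - panel_h
        if above >= 0:
            return above
        below = img3d_y + img3d_h + margin
        return min(below, screen_h - panel_h)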
[0073] Therefore, in general, the anatomical information of the patient is shown in two different layouts that merge into an augmented and mixed reality view. The first layout is the anatomical information that is projected in 3D in the surgical field. The second layout is in the orthogonal planes.
[0074] The surgical navigation image 142A is generated by the image generator 131 in accordance with the tracking data provided by the fiducial marker tracker 125, in order to superimpose the anatomy images and the instrument images exactly over the real objects, in accordance with the position and orientation of the surgeon's head. The markers are tracked in real time and the image is generated in real time. Therefore, the surgical navigation image generator 131 provides graphics rendering of the virtual objects (patient anatomy, surgical plan and instruments) collocated with the real objects according to the surgeon's perspective.
[0075] For example, surgical guidance may relate to suggestions (virtual guidance clues 164) for placement of a pedicle screw in spine surgery or the ideal orientation of an acetabular component in hip arthroplasty surgery. These suggestions may take the form of animations that show the surgeon whether the placement is correct. The suggestions may be displayed both on the 3D holographic display and on the orthogonal planes. The surgeon may use the system to plan these orientations before or during the surgical procedure.
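For instance, the correctness of a pedicle screw trajectory could be quantified by comparing the tracked instrument pose against the planned one. A minimal sketch with illustrative, not clinically validated, tolerance values:

    import numpy as np

    ANGLE_TOL_DEG = 3.0  # assumed angular tolerance
    ENTRY_TOL_MM = 2.0   # assumed entry-point tolerance

    def trajectory_deviation(actual_tip, actual_dir, planned_tip, planned_dir):
        """Return (angular error in degrees, entry-point error in mm)."""
        cos_a = np.clip(
            np.dot(actual_dir, planned_dir)
            / (np.linalg.norm(actual_dir) * np.linalg.norm(planned_dir)),
            -1.0, 1.0)
        angle_err = np.degrees(np.arccos(cos_a))
        entry_err = np.linalg.norm(np.asarray(actual_tip) - np.asarray(planned_tip))
        return angle_err, entry_err

    def placement_ok(angle_err, entry_err):
        """Drive the guidance clue: e.g. show 'correct' when within both tolerances."""
        return angle_err <= ANGLE_TOL_DEG and entry_err <= ENTRY_TOL_MM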
[0076] In particular, the 3D image 171 is adapted in real time to the position and orientation of the surgeon's head. The display of the different orthogonal planes 172, 173, 174 may be adapted according to the current position and orientation of the surgical instruments used.
[0077] Aligning the surgeon's line of sight through the see-through mirror with the patient anatomy underneath the see-through mirror, including the scaling and orientation of the image, can be realized based on known solutions in the field of computer graphics processing, in particular for virtual reality, including virtual scene generation, using well-known mathematical formulas and algorithms related to viewer-centered perspective. For example, such solutions are known from various tutorials and textbooks (such as "The Future of the CAVE" by T. A. DeFanti et al., Central European Journal of Engineering, 2010, DOI: 10.2478/s13531-010-0002-5).
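A minimal sketch of such a viewer-centered (off-axis) projection, following the well-known generalized perspective projection formulation used in CAVE-type systems; pa, pb and pc are the lower-left, lower-right and upper-left corners of the visor image plane in tracker coordinates, pe is the tracked eye position, and all names are illustrative:

    import numpy as np

    def frustum(l, r, b, t, n, f):
        """Standard off-center perspective frustum (OpenGL convention)."""
        return np.array([
            [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
            [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
            [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
            [0.0,       0.0,       -1.0,          0.0]])

    def off_axis_projection(pa, pb, pc, pe, n=0.1, f=10.0):
        """Projection that locks rendered content to the physical image plane."""
        vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right
        vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up
        vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal (towards eye)
        va, vb, vc = pa - pe, pb - pe, pc - pe
        d = -np.dot(va, vn)                              # eye-to-plane distance
        l = np.dot(vr, va) * n / d
        r = np.dot(vr, vb) * n / d
        b = np.dot(vu, va) * n / d
        t = np.dot(vu, vc) * n / d
        m = np.eye(4); m[:3, :3] = np.vstack([vr, vu, vn])  # world -> screen basis
        tr = np.eye(4); tr[:3, 3] = -pe                     # move eye to origin
        return frustum(l, r, b, t, n, f) @ m @ tr

Recomputing this matrix from the tracked eye position every frame is what keeps the reflected image collocated with the anatomy as the surgeon's head moves.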
[0082] The see-through mirror (also called a half-silvered mirror) 141 is at least partially transparent and partially reflective, such that the viewer can see the real world behind the mirror but the mirror also reflects the surgical navigation image generated by the display apparatus located above it.
[0083] For example, a see-through mirror as commonly used in teleprompters can be used. For example, the see-through mirror 141 can have a reflectance/transmittance ratio of 50R/50T, but other ratios can be used as well.
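As a rough optical model, an approximation rather than part of the disclosure, the image reaching the viewer's eye is a linear blend of the scene transmitted through the mirror and the navigation image reflected off it:

    def perceived_luminance(scene, display, reflectance=0.5, transmittance=0.5):
        """Approximate blend at the eye; 0.5/0.5 matches the 50R/50T example."""
        return transmittance * scene + reflectance * display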
[0084] The surgical navigation image is emitted from above the see-through mirror 141 by the 3D display 142.
[0086] The 3D display 142 comprises a 3D projector 143, such as a DLP projector, that is configured to generate an image, as shown in the drawings.
[0087] The see-through mirror 141 is held at a predefined position with respect to the 3D display 142, in particular with respect to the 3D projector 143, by an arm 147, which may have a first portion 147A fixed to the casing of the 3D display 142 and a second portion 147B detachably fixed to the first portion 147A. The first portion 147A may have a protective sleeve overlaid on it. The second portion 147B, together with the see-through mirror 141, may be disposable in order to maintain the sterility of the operating room, as it is relatively close to the operating field and may be contaminated during the operation. The arm can also be foldable upwards to free up the work space when the arm and augmented reality are not needed.
[0092] In alternative embodiments, the surgical navigation image can be presented on a see-through screen 141B or a see-through display 141D instead of being reflected by the see-through mirror 141. Therefore, the see-through screen 141B, the see-through display 141D and the see-through mirror 141 can be commonly called a see-through visor.
[0093] If a need arises to adapt the position of the augmented reality screen with respect to the surgeon's head (for example, to accommodate the position depending on the height of the particular surgeon), the position of the whole 3D display system 140 can be changed, for example by manipulating an adjustable holder (a surgical boom) 149, as shown in the drawings.
[0094] An eye tracker 148 module can be installed at the casing of the 3D display 142, at the see-through visor 141 or at the wearable glasses 151, to track the position and orientation of the eyes of the surgeon and input that as commands via the gaze input interface to control the display parameters at the surgical navigation image generator 131, for example to activate different functions based on the location that is being looked at, as shown in the drawings.
[0095] For example, the eye tracker 148 may use infrared light to illuminate the eyes of the user without affecting the visibility of the user, wherein the reflection and refraction of the patterns on the eyes are utilized to determine the gaze vector (i.e., the direction in which the eye is pointing). The gaze vector, along with the position and orientation of the user's head, is used to interact with the graphical user interface. However, other eye-tracking techniques can be used as well.
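A minimal sketch of turning the gaze vector into a cursor on the visor plane by ray-plane intersection; the plane parameters and function names are assumptions:

    import numpy as np

    def gaze_cursor(eye_pos, gaze_dir, plane_point, plane_normal):
        """Return the 3D point where the gaze ray hits the visor plane,
        or None when the user is looking away from it."""
        denom = np.dot(gaze_dir, plane_normal)
        if abs(denom) < 1e-6:
            return None
        s = np.dot(plane_point - eye_pos, plane_normal) / denom
        return None if s < 0 else eye_pos + s * np.asarray(gaze_dir)

The resulting point can then be mapped to the graphical user interface element under the cursor to trigger the corresponding function.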
[0096] It is particularly useful to use the eye tracker 148 along with the pedals 132 as the input interface, wherein the surgeon may navigate the system by moving a cursor with eye gaze and inputting commands (such as select or cancel) with the pedals.
[0097] While the invention has been described with respect to a limited number of embodiments, it will be appreciated that many variations, modifications and other applications of the invention may be made. Therefore, the claimed invention as recited in the claims that follow is not limited to the embodiments described herein.