Graphical user interface for displaying automatically segmented individual parts of anatomy in a surgical navigation system

11622818 · 2023-04-11

Abstract

A surgical navigation system includes a source of patient anatomy data, wherein the patient anatomy data comprises a three-dimensional reconstruction of a segmented model comprising at least two sections representing parts of the anatomy. A surgical navigation image generator is configured to generate a surgical navigation image comprising the patient anatomy. A 3D display system is configured to show the surgical navigation image, wherein the display of the patient anatomy is selectively configurable such that at least one section of the anatomy is displayed and at least one other section of the anatomy is not displayed.

Claims

1. A surgical navigation system, comprising: a source of patient anatomy data, the patient anatomy data including image data of patient spinal anatomy and segmentation data of the patient spinal anatomy that provides a three-dimensional (3D) reconstruction of a segmented model of the patient spinal anatomy including a set of sections each representing different parts of the patient spinal anatomy; a processor configured to generate a set of surgical navigation images including different subsets of sections of the set of sections using the patient anatomy data; and a 3D display system including a non-head mounted 3D display and a see-through mirror, the 3D display configured to selectively project images from the set of surgical navigation images onto the see-through mirror such that the see-through mirror displays at least: a first surgical navigation image of the set of surgical navigation images in which a first section of the set of sections is displayed together with a second section of the set of sections; and a second surgical navigation image of the set of surgical navigation images in which the second section is displayed with the first section being hidden such that the first section does not interfere with the display of the second section.

2. The system of claim 1, further comprising: a tracking system configured to track a head of a surgeon operating on the patient spinal anatomy, one or more components of the 3D display system, and a physical location of the patient spinal anatomy to provide position or orientation data to the processor, the processor configured to generate the set of surgical navigation images based on the position or orientation data provided by the tracking system.

3. The system of claim 2, further comprising: a source of at least one of operative plan data of an operative plan and a virtual surgical instrument model of a surgical instrument, wherein the tracking system is further configured to track the surgical instrument; wherein at least one surgical navigation image from the set of surgical navigation images further includes a virtual image of the surgical instrument.

4. The system of claim 3, wherein the virtual image of the surgical instrument is configured to indicate a suggested position or orientation of the surgical instrument according to the operative plan data.

5. The system of claim 4, wherein the at least one surgical navigation image further includes a graphical cue indicating a required change of a position or orientation of the surgical instrument to match the suggested position or orientation according to the operative plan data.

6. The system of claim 1, wherein the set of surgical navigation images further includes a set of orthogonal or arbitrary planes of the patient spinal anatomy.

7. The system of claim 1, wherein the see-through mirror is positioned between a head of a surgeon operating on the patient spinal anatomy and a surgical field including the patient spinal anatomy such that the set of surgical navigation images are collocated with the patient spinal anatomy in the surgical field to form an augmented reality image that is visible to the surgeon looking from above the see-through mirror towards the surgical field.

8. The system of claim 1, wherein the patient anatomy data includes output data of a semantic segmentation process of a set of two-dimensional (2D) images of the patient spinal anatomy.

9. The system of claim 8, further comprising a convolutional neural network (CNN) system configured to perform the semantic segmentation process to generate the patient anatomy data.

10. The system of claim 9, wherein the processor is a first processor, and the CNN system includes: at least one non-transitory processor-readable storage medium that stores at least one of processor-executable instructions or data; and at least one second processor communicably coupled to the at least one non-transitory processor-readable storage medium, wherein the at least one second processor is configured to: receive segmentation learning data comprising a plurality of labeled anatomical image sets, each labeled anatomical image set from the plurality of labeled anatomical image sets including: (1) image data including a series of 2D images of the patient spinal anatomy, and (2) at least one label that identifies at least one part of one or more vertebrae of the patient spinal anatomy depicted in each 2D image of the image set, wherein the at least one label identifies one class of a plurality of classes indicating different parts of the one or more vertebrae of the patient spinal anatomy; train a segmentation CNN model to segment semantically the patient spinal anatomy utilizing the received segmentation learning data; and store the trained segmentation CNN model in the at least one non-transitory processor-readable storage medium.

11. The system of claim 10, wherein the at least one second processor is further configured to: receive denoising learning data comprising a plurality of high quality medical images and low quality medical images, wherein the high quality medical images have a lower noise level than the low quality medical images; train a denoising CNN model to denoise an image utilizing the received denoising learning data; and store the trained denoising CNN model in the at least one non-transitory processor-readable storage medium.

12. The system of claim 11, wherein the at least one second processor is further configured to operate the trained denoising CNN model to process the set of 2D images of the patient spinal anatomy to generate a set of output denoised images of the patient spinal anatomy.

13. The system of claim 12, wherein the set of 2D images of the patient spinal anatomy includes low quality medical images.

14. The system of claim 10, wherein the at least one second processor is further configured to operate the trained segmentation CNN model to process the set of 2D images of the patient spinal anatomy to generate a set of output segmented images of the patient spinal anatomy.

15. The system of claim 1, wherein the processor is configured to generate the set of surgical navigation images including the different subsets of sections by processing the patient anatomy data using a set of predefined templates, each predefined template of the set of predefined templates defining a different combination of one or more parts of the patient spinal anatomy.

16. The surgical navigation system of claim 1, wherein each of the set of sections represents a different part of one or more vertebrae of the spine including: a pedicle, a spinous process, a lamina, an articular process, a transverse process, or a vertebral body.

17. A method for providing an augmented reality image during an operation, comprising: obtaining a source of patient anatomy data, the patient anatomy data including image data of patient spinal anatomy and segmentation data of the patient spinal anatomy that provides a three-dimensional (3D) reconstruction of a segmented model of the patient spinal anatomy including a set of sections each representing different parts of the patient spinal anatomy; generating, by a processor, a set of surgical navigation images including different subsets of sections of the set of sections using the patient anatomy data; receiving a selection of a subset of sections; and displaying, via a 3D display system including a non-head mounted 3D display and a see-through mirror, a surgical navigation image from the set of surgical navigation images including only the subset of sections of the selection while the remaining sections of the set of sections not in the selection are hidden such that the subset of sections is displayed without interference from the remaining sections of the set of sections.

18. The method of claim 17, wherein generating the set of surgical navigation images includes processing the patient anatomy data using a set of predefined templates, each predefined template of the set of predefined templates defining a different combination of one or more parts of the patient spinal anatomy.

19. The method of claim 17, further comprising: receiving a sequence of additional selections of subsets of sections; displaying, via the 3D display system, additional surgical navigation images from the set of surgical navigation images that include the subsets of sections of the additional selections according to the sequence.

20. The method of claim 17, wherein the surgical navigation image displayed via the 3D display system further includes a virtual image of a surgical instrument.

21. A method, comprising: receiving patient anatomy data, the patient anatomy data including image data of patient spinal anatomy and segmentation data of the patient spinal anatomy that provides a three-dimensional (3D) reconstruction of a segmented model of the patient spinal anatomy including a set of sections each representing different parts of the patient spinal anatomy; generating a graphical user interface including a set of display templates, each display template from the set of display templates defining a different combination of one or more parts of the patient spinal anatomy; in response to a user selection of a first display template from the set of display templates, displaying, using a 3D display system, a first surgical navigation image in which a first section of the set of sections is displayed together with a second section of the set of sections; and in response to a user selection of a second display template from the set of display templates, displaying, using the 3D display system, a second surgical navigation image of the set of surgical navigation images in which the second section is displayed with the first section being hidden such that the first section does not interfere with the display of the second section.

Description

BRIEF DESCRIPTION OF FIGURES

(1) Various embodiments are herein described, by way of example only, with reference to the accompanying drawings, wherein:

(2) FIG. 1A shows a layout of a surgical room employing the surgical navigation system in accordance with an embodiment of the invention;

(3) FIG. 1B shows a layout of a surgical room employing the surgical navigation system in accordance with an embodiment of the invention;

(4) FIG. 1C shows a layout of a surgical room employing the surgical navigation system in accordance with an embodiment of the invention;

(5) FIG. 2A shows the connections between the different components that interact in accordance with an embodiment of the invention;

(6) FIG. 2B shows components of the surgical navigation system in accordance with an embodiment of the invention;

(7) FIG. 3A shows an example of an augmented reality display in accordance with an embodiment of the invention;

(8) FIG. 3B shows an example of an augmented reality display in accordance with an embodiment of the invention;

(9) FIG. 3C shows an example of an augmented reality display in accordance with an embodiment of the invention;

(10) FIG. 3D shows an example of an augmented reality display in accordance with an embodiment of the invention;

(11) FIG. 3E shows an example of an augmented reality display in accordance with an embodiment of the invention;

(12) FIG. 3F shows an example of an augmented reality display in accordance with an embodiment of the invention;

(13) FIG. 3G shows an example of an augmented reality display in accordance with an embodiment of the invention;

(14) FIG. 3H shows an example of an augmented reality display in accordance with an embodiment of the invention;

(15) FIG. 3I shows an example of an augmented reality display in accordance with an embodiment of the invention;

(16) FIG. 4A shows a different embodiment of a 3D display system;

(17) FIG. 4B shows another embodiment of a 3D display system;

(18) FIG. 4C shows another embodiment of a 3D display system;

(19) FIG. 4D shows another embodiment of a 3D display system;

(20) FIG. 4E shows another embodiment of a 3D display system;

(21) FIG. 5A shows eye tracking in accordance with an embodiment of the invention;

(22) FIG. 5B shows eye tracking in accordance with an embodiment of the invention;

(23) FIG. 6 shows a 3D representation of the results of the semantic segmentation on one vertebra in accordance with an embodiment of the invention;

(24) FIG. 7A shows an example of a CT image of a spine;

(25) FIG. 7B shows another example of a CT image of a spine;

(26) FIG. 7C shows another example of a CT image of a spine;

(27) FIG. 7D shows another example of a CT image of a spine;

(28) FIG. 7E shows another example of a CT image of a spine;

(29) FIG. 7F shows a semantic segmented image corresponding to the CT image of FIG. 7A, in accordance with an embodiment of the invention;

(30) FIG. 7G shows a semantic segmented image corresponding to the CT image of FIG. 7B, in accordance with an embodiment of the invention;

(31) FIG. 7H shows a semantic segmented image corresponding to the CT image of FIG. 7C, in accordance with an embodiment of the invention;

(32) FIG. 7I shows a semantic segmented image corresponding to the CT image of FIG. 7D, in accordance with an embodiment of the invention;

(33) FIG. 7J shows a semantic segmented image corresponding to the CT image of FIG. 7E, in accordance with an embodiment of the invention;

(34) FIG. 8A shows an enlarged view of a LDCT scan;

(35) FIG. 8B shows an enlarged view of a HDCT scan;

(36) FIG. 8C shows a low power magnetic resonance scan of a neck portion;

(37) FIG. 8D shows a higher power magnetic resonance scan of the same neck portion as FIG. 8C;

(38) FIG. 9 shows a denoising CNN architecture in accordance with an embodiment of the invention;

(39) FIG. 10 shows a segmentation CNN architecture in accordance with an embodiment of the invention;

(40) FIG. 11 shows a flowchart of a training process in accordance with an embodiment of the invention;

(41) FIG. 12 shows a flowchart of an inference process for the denoising CNN in accordance with an embodiment of the invention;

(42) FIG. 13 shows a flowchart of an inference process for the segmentation CNN in accordance with an embodiment of the invention;

(43) FIG. 14A shows a sample image of a CT spine scan;

(44) FIG. 14B shows a sample image of the segmentation of the sample image of FIG. 14A in accordance with an embodiment of the invention;

(45) FIG. 15 shows a schematic of a system for implementing the segmentation CNN in accordance with an embodiment of the invention.

DETAILED DESCRIPTION OF THE INVENTION

(46) The following detailed description is of the best currently contemplated modes of carrying out the invention. The description is not to be taken in a limiting sense, but is made merely for the purpose of illustrating the general principles of the invention.

(47) The system presented herein, in accordance with one embodiment, comprises a 3D display system 140 intended for use directly in real surgical applications in a surgical room as shown in FIGS. 1A-1C. The 3D display system 140 as shown in the embodiment of FIGS. 1A-1C comprises a 3D display 142 for emitting a surgical navigation image 142A towards a see-through mirror 141 that is partially transparent and partially reflective, such that an augmented reality image 141A collocated with the patient anatomy in the surgical field 108 underneath the see-through mirror 141 is visible to a viewer looking from above the see-through mirror 141 towards the surgical field 108.

(48) The surgical room typically comprises a floor 101 on which an operating table 104 is positioned. A patient 105 lies on the operating table 104 while being operated on by a surgeon 106 with the use of various surgical instruments 107. The surgical navigation system, as described in detail below, can have its components, in particular the 3D display system 140, mounted to a ceiling 102, or alternatively to the floor 101 or a side wall 103 of the operating room. Furthermore, the components, in particular the 3D display system 140, can be mounted to an adjustable and/or movable floor-supported structure (such as a tripod). Components other than the 3D display system 140, such as the surgical image generator 131, can be implemented in a dedicated computing device 109, such as a stand-alone PC computer, which may have its own input controllers and display(s) 110.

(49) In general, the system is designed for use in a configuration wherein the distance d1 between the surgeon's eyes and the see-through mirror 141 is shorter than the distance d2 between the see-through mirror 141 and the operative field at the patient anatomy 105 being operated on.

(50) FIG. 2A shows a functional schematic presenting connections between the components of the surgical navigation system and FIG. 2B shows examples of physical embodiments of various components.

(51) The surgical navigation system comprises a tracking system for tracking in real time the position and/or orientation of various entities to provide current position and/or orientation data. For example, the system may comprise a plurality of fiducial markers, which are trackable by a fiducial marker tracker 125. Any known type of tracking system can be used; for example, in the case of a marker tracking system, 4-point marker arrays are tracked by a three-camera sensor to track movement along six degrees of freedom. A head position marker array 121 can be attached to the surgeon's head for tracking of the position and orientation of the surgeon and the direction of gaze of the surgeon—for example, the head position marker array 121 can be integrated with the wearable 3D glasses 151 or can be attached to a strip worn over the surgeon's head.
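
For illustration only, the following is a minimal Python sketch (not taken from the patent) of how a six-degree-of-freedom pose could be recovered from the tracked 3D positions of a 4-point marker array using the Kabsch (SVD) method; the patent does not specify the algorithm, and the function name, example coordinates and use of NumPy are assumptions.

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    """Rigid transform (R, t) mapping a marker array's model points onto the
    3D positions reported by the camera sensor (Kabsch/SVD method).
    model_pts, observed_pts: (N, 3) arrays of corresponding markers, N >= 3."""
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)     # centroids
    H = (model_pts - mc).T @ (observed_pts - oc)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t                                                    # 6-DOF pose

# Example: a 4-point (non-coplanar) marker array displaced by a pure translation.
model_pts = np.array([[0., 0., 0.], [50., 0., 0.], [0., 50., 0.], [0., 0., 50.]])
observed = model_pts + np.array([10.0, 20.0, 5.0])
R, t = estimate_pose(model_pts, observed)        # R ~ identity, t ~ [10, 20, 5]
```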

(52) A display marker array 122 can be attached to the see-through mirror 141 of the 3D display system 140 for tracking its position and orientation, as the see-through mirror 141 is movable and can be placed according to the current needs of the operative setup.

(53) A patient anatomy marker array 123 can be attached at a particular position and orientation of the anatomy of the patient.

(54) A surgical instrument marker array 124 can be attached to the instrument whose position and orientation shall be tracked.

(55) Preferably, the markers in at least one of the marker arrays 121-124 are not coplanar, which helps to improve the accuracy of the tracking system.

(56) Therefore, the tracking system comprises means for real-time tracking of the position and orientation of at least one of: a surgeon's head 106, a 3D display 142, a patient anatomy 105, and surgical instruments 107. Preferably, all of these elements are tracked by a fiducial marker tracker 125.

(57) A surgical navigation image generator 131 is configured to generate an image to be viewed via the see-through mirror 141 of the 3D display system. It generates a surgical navigation image 142A comprising data of at least one of: the pre-operative plan 161 (which is generated and stored in a database before the operation), data of the intra-operative plan 162 (which can be generated live during the operation), data of the patient anatomy scan 163 (which can be generated before the operation or live during the operation) and virtual images 164 of surgical instruments used during the operation (which are stored as 3D models in a database).

(58) The surgical navigation image generator 131, as well as other components of the system, can be controlled by a user (i.e. a surgeon or support staff) by one or more user interfaces 132, such as foot-operable pedals (which are convenient to be operated by the surgeon), a keyboard, a mouse, a joystick, a button, a switch, an audio interface (such as a microphone), a gesture interface, a gaze detecting interface etc. The input interface(s) are for inputting instructions and/or commands.

(59) All system components are controlled by one or more computers, which are controlled by an operating system and one or more software applications. The computer may be equipped with a suitable memory which may store a computer program or programs executed by the computer in order to execute the steps of the methods utilized in the system. Computer programs are preferably stored on a non-transitory medium. An example of a non-transitory medium is a non-volatile memory, for example a flash memory, while an example of a volatile memory is RAM. The computer instructions are executed by a processor. These memories are exemplary recording media for storing computer programs comprising computer-executable instructions performing all the steps of the computer-implemented method according to the technical concept presented herein. The computer(s) can be placed within the operating room or outside the operating room. Communication between the computers and the components of the system may be performed by wire or wirelessly, according to known communication means.

(60) The aim of the system is to generate, via the 3D display system 140, an augmented reality image such as shown in the examples of FIGS. 3F-3I and also possibly 3A-3E. When the surgeon looks via the 3D display system 140, the surgeon sees the augmented reality image 141A which comprises: the real world image, i.e. the patient anatomy, the surgeon's hands and the instrument currently in use (which may be partially inserted into the patient's body and hidden under the skin); and a computer-generated surgical navigation image 142A comprising the patient anatomy 163, configurable such that at least one section of the anatomy 163A-163F is displayed and at least one other section of the anatomy 163A-163F is not displayed.

(61) Furthermore, the surgical navigation image may further comprise: a 3D image 171 representing at least one of the virtual image of the instrument 164 or surgical guidance indicating the suggested (ideal) trajectory and placement of surgical instruments 107, according to the pre-operative plan 161 (as shown in FIG. 3C); preferably, three different orthogonal planes of the patient anatomy data 163: coronal 174, sagittal 173 and axial 172; and preferably, a menu 175 for controlling the system operation.

(62) If the 3D display 142 is stereoscopic, the surgeon shall use a pair of 3D glasses 151 to view the augmented reality image 141A. However, if the 3D display 142 is autostereoscopic, it may not be necessary for the surgeon to use the 3D glasses 151 to view the augmented reality image 141A.

(63) The virtual image of the patient anatomy 163 is generated based on data representing a three-dimensional segmented model comprising at least two sections representing parts of the anatomy. The anatomy can be for example a bone structure, such as a spine, skull, pelvis, long bones, shoulder joint, hip joint, knee joint etc. This description presents examples related particularly to a spine, but a skilled person will realize how to adapt the embodiments to be applicable to the other bony structures or other anatomy parts as well.

(64) For example, the model can represent a spine, as shown in FIG. 6, with the following sections: spinous process 163A, lamina 163B, articular process 163C, transverse process 163D, pedicles 163E, vertebral body 163F.

(65) The model can be generated based on a pre-operative scan of the patient and then segmented manually by a user or automatically by a computer, using dedicated algorithms and/or neural networks, or in a hybrid approach including a computer-assisted manual segmentation. For example, a convolutional neural network such as explained with reference to FIGS. 7-14 can be employed.

(66) Preferably, the images of the orthogonal planes 172, 173, 174 are displayed in an area next to (preferably, above) the area of the 3D image 171, as shown in FIG. 3A, wherein the 3D image 171 occupies more than 50% of the area of the see-through visor 141.

(67) The location of the images of the orthogonal planes 172, 173, 174 may be adjusted in real time depending on the location of the 3D image 171, when the surgeon changes the position of the head during the operation, so as not to interfere with the 3D image 171.

(68) Therefore, in general, the anatomical information is shown to the user in two different layouts that merge to provide an augmented and mixed reality experience. The first layout is the anatomical information that is projected in 3D in the surgical field. The second layout is in the orthogonal planes.

(69) The surgical navigation image 142A is generated by the image generator 131 in accordance with the tracking data provided by the fiducial marker tracker 125, in order to superimpose the anatomy images and the instrument images exactly over the real objects, in accordance with the position and orientation of the surgeon's head. The markers are tracked in real time and the image is generated in real time. Therefore, the surgical navigation image generator 131 provides graphics rendering of the virtual objects (patient anatomy, surgical plan and instruments) collocated to the real objects according to the surgeon's perspective.
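
As an informal sketch of the collocation step described above (not the patented implementation), the tracked poses can be chained into a single transform that expresses the patient anatomy in the surgeon's head frame; the matrix names and example values below are assumptions.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and a translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Placeholder tracker readings; real values come from the fiducial marker tracker 125.
R_head, t_head = np.eye(3), np.array([0.0, 0.40, 0.60])         # head marker array 121
R_patient, t_patient = np.eye(3), np.array([0.0, 0.00, 1.20])   # patient marker array 123

T_tracker_head = to_homogeneous(R_head, t_head)
T_tracker_patient = to_homogeneous(R_patient, t_patient)

# Anatomy pose expressed in the surgeon's (head) frame: chaining the inverted head
# pose with the patient pose keeps the rendered model collocated with the real
# anatomy as the surgeon's head moves.
T_head_patient = np.linalg.inv(T_tracker_head) @ T_tracker_patient
```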

(70) For example, surgical guidance may relate to suggestions (virtual guidance clues 164) for placement of a pedicle screw in spine surgery or the ideal orientation of an acetabular component in hip arthroplasty surgery. These suggestions may take the form of animations that show the surgeon whether the placement is correct. The suggestions may be displayed both on the 3D holographic display and on the orthogonal planes. The surgeon may use the system to plan these orientations before or during the surgical procedure.

(71) In particular, the 3D image 171 is adapted in real time to the position and orientation of the surgeon's head. The display of the different orthogonal planes 172, 173, 174 may be adapted according to the current position and orientation of the surgical instruments used.

(72) FIG. 3B shows an example indicating collocation of the virtual image of the patient anatomy 163 and the real anatomy 105.

(73) For example, as shown in FIG. 3C, the 3D image 171 may demonstrate a mismatch between a supposed/suggested position of the instrument according to the pre-operative plan 161, displayed as a first virtual image of the instrument 164A located at its supposed/suggested position, and an actual position of the instrument, visible as the real instrument through the see-through mirror and/or as a second virtual image of the instrument 164B overlaid on the current position of the instrument. Additionally, graphical guiding cues, such as arrows 165 indicating the direction of the supposed change of position, can be displayed.

(74) FIG. 3D shows a situation wherein the tip of the supposed position of the instrument, displayed as the first virtual image 164A according to the pre-operative plan 161, matches the tip of the real surgical instrument visible or displayed as the second virtual image 164B. However, the rest of the instrument does not match; therefore, the graphical cues 165 still indicate the need to change its position. The surgical instrument is close to the correct position and the system may provide information on how close the surgical instrument is to the planned position.

(75) FIG. 3E shows a situation wherein the position of the real surgical instrument matches the supposed position of the instrument according to the pre-operative plan 161, i.e. the correct position for surgery. In this situation the graphical cues 165 are no longer displayed, but the virtual images 164A, 164B may be changed to indicate the correct position, e.g. by highlighting or blinking.
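
By way of illustration, the sketch below shows one way the mismatch between the planned trajectory (164A) and the tracked instrument (164B) might be quantified to decide whether the guidance cues 165 remain visible; the tolerances, coordinates and function name are illustrative assumptions, not values from the patent.

```python
import numpy as np

def instrument_deviation(tip_plan, dir_plan, tip_real, dir_real):
    """Tip offset (mm) and angular error (degrees) between the planned
    trajectory (164A) and the tracked instrument (164B)."""
    tip_err = np.linalg.norm(tip_plan - tip_real)
    cos_a = np.clip(np.dot(dir_plan, dir_real)
                    / (np.linalg.norm(dir_plan) * np.linalg.norm(dir_real)), -1.0, 1.0)
    return tip_err, np.degrees(np.arccos(cos_a))

tip_err, ang_err = instrument_deviation(
    np.array([10.0, 5.0, 30.0]), np.array([0.0, 0.0, -1.0]),     # planned tip and direction
    np.array([10.2, 5.1, 30.0]), np.array([0.05, 0.0, -1.0]))    # tracked tip and direction

# Illustrative tolerances: keep showing the arrows 165 until both errors are small.
show_cues = tip_err > 1.0 or ang_err > 2.0
```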

(76) In some situations, the image of the full patient anatomy 163, as shown in FIG. 3A, may be obstructive. To solve this problem, the system allows a selective display of the parts of the anatomy 163, such that at least one part of the anatomy is shown and at least one other part of the anatomy is not shown.

(77) For example, the surgeon may only want to see isolated parts of the spinal anatomy during spine surgery (only the vertebral body or only the pedicle). Each part of the spinal anatomy is displayed at the request of the surgeon. For example, the surgeon may only want to see the virtual representation of the pedicle during placement of bony anchors. This would be advantageous, as there would not be any visual interference from the surrounding anatomical structures.

(78) Therefore, a single part of the anatomy may be displayed, for example only the vertebral body 163F (FIG. 3F) or only the pedicles 163E (FIG. 3G). Alternatively, two parts of the anatomy may be displayed, for example the vertebral body 163F and the pedicles 163E (FIG. 3H); or a larger group of anatomy parts may be displayed, such as the top parts of 163A-D of the spine (FIG. 3I).

(79) The user may select the parts that are to be displayed via the input interface 132.

(80) For example, the GUI may comprise a set of predefined display templates, each template defining a particular part of the anatomy to be displayed (such as FIG. 3F, 3G) or a plurality of parts of the anatomy to be displayed (such as FIG. 3H, 3I). The user may then use a dedicated touch-screen button, keyboard key, pedal or other user interface navigation element to select a particular template to be displayed or to switch between consecutive templates.
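
For example, the predefined display templates could be represented as named sets of anatomy sections keyed by the reference numerals 163A-163F; the sketch below is a hypothetical Python illustration of this idea (the template names and data structures are assumptions, not the patented GUI).

```python
# Hypothetical mapping of reference numerals to anatomy sections (cf. FIG. 6).
SECTIONS = {
    "163A": "spinous process", "163B": "lamina", "163C": "articular process",
    "163D": "transverse process", "163E": "pedicles", "163F": "vertebral body",
}

# Predefined display templates (cf. FIGS. 3F-3I): each selects the sections to
# render; all other sections stay hidden.
TEMPLATES = {
    "vertebral_body_only": {"163F"},                           # FIG. 3F
    "pedicles_only":       {"163E"},                           # FIG. 3G
    "body_and_pedicles":   {"163F", "163E"},                   # FIG. 3H
    "posterior_elements":  {"163A", "163B", "163C", "163D"},   # FIG. 3I
}

def visible_sections(template_name):
    """Sections to display for the selected template; the rest are hidden."""
    return {ref: SECTIONS[ref] for ref in TEMPLATES[template_name]}

print(visible_sections("body_and_pedicles"))   # vertebral body and pedicles only
```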

(81) Alternatively, the GUI may display a list of available parts of anatomy to be displayed and the user may select the parts to be displayed.

(82) The GUI interface for configuring the parts that are to be displayed can be configured to be operated directly by the surgeon or by an assistant person.

(83) The following description provides examples of a 3D display 142 with a see-through mirror 141, which is particularly effective for providing the surgical navigation data. However, other 3D display systems can be used as well to show the automatically segmented parts of anatomy, such as 3D head-mounted displays.

(84) The see-through mirror (also called a half-silvered mirror) 141 is at least partially transparent and partially reflective, such that the viewer can see the real world behind the mirror but the mirror also reflects the surgical navigation image generated by the display apparatus located above it.

(85) For example, a see-through mirror as commonly used in teleprompters can be used. For example, the see-through mirror 141 can have a reflectance/transmittance ratio of 50R/50T, but other ratios can be used as well.

(86) The surgical navigation image is emitted from above the see-through mirror 141 by the 3D display 142.

(87) In an example embodiment as shown in FIGS. 4A and 4B, a special design of the 3D display 142 is provided that is compact in size to facilitate its mounting within a limited space in the operating room. That design allows generating images of relatively large size, taking into account the small distance between the 3D display 142 and the see-through mirror 141, without the need to use a wide-angle lens that could distort the image.

(88) The 3D display 142 comprises a 3D projector 143, such as a DLP projector, that is configured to generate an image, as shown in FIG. 4B (the dashed lines showing image projection and the solid lines showing images generated on particular reflective planes). The image from the 3D projector 143 is first reflected by an opaque top mirror 144, then reflected by an opaque vertical mirror 145, and subsequently projected at the correct dimensions onto a projection screen 146 (which can be simply a glass panel). The projection screen 146 works as a rear-projection screen or a small bright 3D display. The image displayed at the projection screen 146 is reflected by the see-through mirror 141, which works as an augmented reality visor. Such a configuration of the mirrors 144, 145 allows the image generated by the 3D projector 143 to be shown with an appropriate size at the projection screen 146. The fact that the projection screen 146 emits an enlarged image generated by the 3D projector 143 makes the emitted surgical navigation image bright, and therefore well visible when reflected at the see-through mirror 141. Reference 141A indicates the augmented reality image as perceived by the surgeon when looking at the see-through mirror 141.

(89) The see-through mirror 141 is held at a predefined position with respect to the 3D display 142, in particular with respect to the 3D projector 143, by an arm 147, which may have a first portion 147A fixed to the casing of the 3D display 142 and a second portion 147B detachably fixed to the first portion 147A. The first portion 147A may have a protective sleeve overlaid on it. The second portion 147B, together with the see-through mirror 141, may be disposable in order to keep sterility of the operating room, as it is relatively close to the operating field and may be contaminated during the operation. The arm can also be folded upwards to free up the work space when the arm and the augmented reality function are not needed.

(90) In alternative embodiments, as shown for example in FIGS. 4C, 4D, 4E, alternative devices may be used in the 3D display system 140 in place of the see-through mirror 141 and the 3D display 142.

(91) As shown in FIG. 4C, a 3D monitor 146A can be used directly in place of the projection screen 146.

(92) As shown in FIG. 4D, a 3D projector 143 can be used instead of the 3D display 142 of FIG. 4A, to project the surgical navigation image onto a see-through projection screen 141B, which is partially transparent and partially reflective, for showing the surgical navigation image 142A and allowing the surgeon to see the surgical field 108. A lens 141C can be used to provide appropriate focal position of the surgical navigation image.

(93) As shown in FIG. 4E, the surgical navigation image can be displayed at a three-dimensional see-through screen 141D and viewed by the user via a lens 141C used to provide appropriate focal position of the surgical navigation image.

(94) Therefore, the see-through screen 141B, the see-through screen 141D and the see-through mirror 141 can be commonly called a see-through visor.

(95) If a need arises to adapt the position of the augmented reality screen with respect to the surgeon's head (for example, to accommodate the height of the particular surgeon), the position of the whole 3D display system 140 can be changed, for example by manipulating an adjustable holder (a surgical boom) 149, shown in FIG. 1A, by which the 3D display 142 is attachable to an operating room structure, such as a ceiling, a wall or a floor.

(96) An eye tracker 148 module can be installed at the casing of the 3D display 142, at the see-through visor 141 or at the wearable glasses 151 to track the position and orientation of the surgeon's eyes and to input these as commands via the gaze input interface to control the display parameters at the surgical navigation image generator 131, for example to activate different functions based on the location being looked at, as shown in FIGS. 5A and 5B.

(97) For example, the eye tracker 148 may use infrared light to illuminate the eyes of the user without affecting the visibility of the user, wherein the reflection and refraction of the patterns on the eyes are utilized to determine the gaze vector (i.e., the direction in which the eye is pointing). The gaze vector, along with the position and orientation of the user's head, is used to interact with the graphical user interface. However, other eye tracking techniques can be used as well.

(98) It is particularly useful to use the eye tracker 148 along with the pedals 132 as the input interface, wherein the surgeon may navigate the system by moving a cursor by eyesight and inputting commands (such as select or cancel) by pedals.
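
A minimal sketch of this interaction, assuming the cursor is obtained by intersecting the gaze ray with the visor plane and that a pedal press confirms the selection; the geometry, coordinates and variable names are assumptions for illustration only.

```python
import numpy as np

def gaze_cursor(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect the gaze ray with the visor plane to place a cursor.
    Returns the 3D intersection point, or None if the surgeon looks away."""
    denom = np.dot(gaze_dir, plane_normal)
    if abs(denom) < 1e-6:
        return None                                  # gaze parallel to the visor
    s = np.dot(plane_point - eye_pos, plane_normal) / denom
    return eye_pos + s * gaze_dir if s > 0 else None

cursor = gaze_cursor(np.array([0.0, 0.45, 0.60]), np.array([0.0, -0.40, -0.90]),
                     np.array([0.0, 0.20, 0.30]), np.array([0.0, 1.0, 0.0]))
pedal_pressed = True                                 # e.g. "select" pedal of interface 132
if cursor is not None and pedal_pressed:
    pass                                             # activate the GUI element under the cursor
```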

(99) FIGS. 7-14 show an example of a convolutional neural network (CNN) that can be used to automatically segment the bone structure to provide anatomy section data for the selective display as described above.

(100) The CNN can be used to process images of a bony structure, such as a spine, skull, pelvis, long bones, shoulder joint, hip joint, knee joint etc. The following description presents examples related mostly to a spine, but a skilled person will realize how to adapt the embodiments to be applicable to the other bony structures as well.

(101) Moreover, the CNN may include, before segmentation, pre-processing of lower quality images to improve their quality. For example, the lower quality images may be low dose computed tomography (LDCT) images or magnetic resonance images captured with a relatively low power scanner, which can be denoised. The following description presents examples related to computed tomography (CT) images, but a skilled person will realize how to adapt the embodiments to be applicable to other image types, such as magnetic resonance images.

(102) FIGS. 7A-7E show examples of various CT images of a spine. FIGS. 7F-7J show their corresponding segmented images obtained by the method presented herein.

(103) FIGS. 8A and 8B show an enlarged view of a CT scan, wherein FIG. 8A is an image with a high noise level (such as a low dose (LDCT) image) and FIG. 8B is an image with a low noise level (such as a high dose (HDCT) image or a LDCT image denoised according to the method presented herein).

(104) FIG. 8C shows a low strength magnetic resonance scan of a neck portion and FIG. 8D shows a higher strength magnetic resonance scan of the same neck portion (wherein FIG. 8D is also the type of image that is expected to be obtained by performing denoising of the image of FIG. 8C).

(105) Therefore, in the present CNN, low-dose medical imagery (such as shown in FIGS. 8A, 8C) is pre-processed to improve its quality to the quality level of high-dose or high quality medical imagery (such as shown in FIGS. 8B, 8D), without the need to expose the patient to a high dose of radiation.

(106) For the purposes of this disclosure, the LDCT image is understood as an image which is taken with an effective dose of X-ray radiation lower than the effective dose for the HDCT image, such that the lower dose of X-ray radiation causes a higher amount of noise to appear on the LDCT image than on the HDCT image. LDCT images are commonly captured during intra-operative scans to limit the exposure of the patient to X-ray radiation.

(107) As seen by comparing FIGS. 8A and 8B, the LDCT image is quite noisy and is difficult for a computer to process automatically to identify the components of the anatomical structure.

(108) The system and method disclosed below use a neural network and deep-learning based approach. In order for any neural network to work, it must first be trained. The training process is supervised (i.e., the network is provided with a set of input samples and a set of corresponding desired output samples). The network learns the relations that enable it to extract the output sample from the input sample. Given enough training examples, the expected results can be obtained.

(109) In the presented system and method, a set of samples is generated first, wherein LDCT images and HDCT images of the same object (such as an artificial phantom or a lumbar spine) are captured using the computed tomography device. Next, the LDCT images are used as input and their corresponding HDCT images are used as the desired output to train the neural network to denoise the images. Since the CT scanner noise is not totally random (there are some components that are characteristic for certain devices or types of scanners), the network learns which noise component is added to the LDCT images, recognizes it as noise, and is able to eliminate it during subsequent operation, when a new LDCT image is provided as input to the network.

(110) By denoising the LDCT images, the presented system and method may be used for intra-operative tasks, to provide high segmentation quality for images obtained from intra-operative scanners on low radiation dose setting.

(111) FIG. 9 shows a convolutional neural network (CNN) architecture 300, hereinafter called the denoising CNN, which is utilized in the present method for denoising. The network comprises convolution layers 301 (with ReLU activation attached) and deconvolution layers 302 (with ReLU activation attached). The use of a neural network in place of standard de-noising techniques provides improved noise removal capabilities. Moreover, since machine learning is involved, the network can be tuned to the specific noise characteristics of the imaging device to further improve the performance. This is done during training. The architecture is general, in the sense that adapting it to images of different sizes is possible by adjusting the size (resolution) of the layers. The number of layers and the number of filters within layers is also subject to change, depending on the requirements of the application. Deeper networks with more filters typically give results of better quality. However, there is a point at which increasing the number of layers/filters does not result in significant improvement, but significantly increases the computation time, making such a large network impractical.
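
As a rough, hedged illustration of such a conv/deconv denoiser (not the patented architecture 300 itself), the following PyTorch sketch stacks strided convolution layers with ReLU and transpose-convolution layers with ReLU; the layer count, filter widths and input size are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Minimal conv/deconv denoiser in the spirit of architecture 300.
    Layer counts and filter widths are illustrative only."""
    def __init__(self, channels=1, filters=64, depth=3):
        super().__init__()
        enc, dec = [], []
        in_ch = channels
        for _ in range(depth):                       # convolution layers (cf. 301)
            enc += [nn.Conv2d(in_ch, filters, 3, stride=2, padding=1), nn.ReLU(inplace=True)]
            in_ch = filters
        for _ in range(depth - 1):                   # deconvolution layers (cf. 302)
            dec += [nn.ConvTranspose2d(filters, filters, 4, stride=2, padding=1), nn.ReLU(inplace=True)]
        dec += [nn.ConvTranspose2d(filters, channels, 4, stride=2, padding=1)]
        self.net = nn.Sequential(*enc, *dec)

    def forward(self, x):                            # x: noisy slice -> denoised slice
        return self.net(x)

denoiser = DenoisingCNN()
out = denoiser(torch.randn(1, 1, 128, 128))          # output shape (1, 1, 128, 128)
```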

(112) FIG. 10 shows a convolutional neural network (CNN) architecture 400, hereinafter called the segmentation CNN, which is utilized in the present method for segmentation (both semantic and binary). The network performs pixel-wise class assignment using an encoder-decoder architecture, using as input the raw images or the images denoised with the denoising CNN. The left side of the network is a contracting path, which includes convolution layers 401 and pooling layers 402, and the right side is an expanding path, which includes upsampling or transpose convolution layers 403, convolutional layers 404 and the output layer 405.

(113) One or more images can be presented to the input layer of the network to learn reasoning from a single slice image, or from a series of images fused to form a local volume representation.

(114) The convolution layers 401 can be of a standard kind, the dilated kind, or a combination thereof, with ReLU or leaky ReLU activation attached.

(115) The upsampling or deconvolution layers 403 can be of a standard kind, the dilated kind, or a combination thereof, with ReLU or leaky ReLU activation attached.

(116) The output layer 405 denotes a densely connected layer with one or more hidden layers and a softmax or sigmoid stage connected as the output.

(117) The encoding-decoding flow is supplemented with additional skip connections between layers with corresponding sizes (resolutions), which improves performance through information merging. It enables either the use of max-pooling indices from the corresponding encoder stage, or learning of the deconvolution filters, to upsample.

(118) The architecture is general, in the sense that adapting it to images of different sizes is possible by adjusting the size (resolution) of the layers. The number of layers and the number of filters within a layer are also subject to change, depending on the requirements of the application.

(119) Deeper networks typically give results of better quality. However, there is a point at which increasing the number of layers/filters does not result in significant improvement, but significantly increases the computation time and decreases the network's capability to generalize, making such a large network impractical.

(120) The final layer for binary segmentation recognizes two classes (bone and no-bone). The semantic segmentation is capable of recognizing multiple classes, each representing a part of the anatomy. For example, for the vertebra, this includes vertebral body, pedicles, processes etc.
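
For illustration, the sketch below is one possible encoder-decoder with skip connections and a per-pixel multi-class output in the spirit of architecture 400; it is not the patented network, and the depth, channel counts and class count (background plus six vertebra parts) are assumptions.

```python
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class SegmentationCNN(nn.Module):
    """Encoder-decoder with skip connections producing per-pixel class scores.
    num_classes=7: background plus the six vertebra parts (cf. 11-16)."""
    def __init__(self, channels=1, num_classes=7):
        super().__init__()
        self.enc1, self.enc2 = block(channels, 32), block(32, 64)   # contracting path (401)
        self.pool = nn.MaxPool2d(2)                                 # pooling layers (402)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)         # upsampling layers (403)
        self.dec2 = block(128, 64)                                  # convolutional layers (404)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.out = nn.Conv2d(32, num_classes, 1)                    # output layer (405)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))         # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))        # skip connection
        return self.out(d1)       # logits; apply softmax per pixel for class probabilities

model = SegmentationCNN()
logits = model(torch.randn(1, 1, 128, 128))     # output shape (1, 7, 128, 128)
```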

(121) FIG. 11 shows a flowchart of a training process, which can be used to train both the denoising CNN 300 and the segmentation CNN 400.

(122) The objective of the training for the denoising CNN 300 is to tune the parameters of the denoising CNN 300 such that the network is able to reduce noise in a high noise image, such as shown in FIG. 8A, to obtain a reduced noise image, such as shown in FIG. 8B.

(123) The objective of the training for the segmentation CNN 400 is to tune the parameters of the segmentation CNN 400 such that the network is able to recognize segments in a raw or denoised image (such as shown in FIGS. 7A-7E) to obtain a segmented image (such as shown in FIGS. 7F-7J), wherein a plurality of such segmented images can then be combined into a 3D segmented image such as shown in FIG. 6.

(124) The training database may be split into a training set used to train the model, a validation set used to quantify the quality of the model, and a test set.

(125) The training starts at 501. At 502, batches of training images are read from the training set, one batch at a time. For the denoising CNN, LDCT images represent input, and HDCT images represent desired output. For the segmentation CNN, denoised images represent input, and pre-segmented (by a human) images represent output.

(126) At 503 the images can be augmented. Data augmentation is performed on these images to make the training set more diverse. The input/output image pair is subjected to the same combination of transformations from the following set: rotation, scaling, movement, horizontal flip, additive noise of Gaussian and/or Poisson distribution and Gaussian blur, etc.
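
A minimal sketch of such paired augmentation, assuming SciPy is available: the same randomly drawn geometric transform (rotation, shift, flip) is applied to the input image and to its desired output, while in this sketch the additive Gaussian noise is applied to the input only; the parameter ranges are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def augment_pair(image, target, rng):
    """Apply one randomly drawn combination of transforms identically to the
    input image and its desired output (HDCT image or segmentation mask)."""
    angle = rng.uniform(-10, 10)                     # rotation (degrees)
    shift = rng.uniform(-5, 5, size=2)               # movement (pixels)
    flip = rng.random() < 0.5                        # horizontal flip

    def geom(a, order):
        a = ndimage.rotate(a, angle, reshape=False, order=order)
        a = ndimage.shift(a, shift, order=order)
        return np.fliplr(a) if flip else a

    image = geom(image, order=1)                     # bilinear interpolation for the image
    target = geom(target, order=0)                   # nearest-neighbour for a label mask
    image = image + rng.normal(0.0, 0.01, image.shape)   # additive Gaussian noise (input only)
    return image, target

rng = np.random.default_rng(0)
img_aug, tgt_aug = augment_pair(np.zeros((64, 64)), np.zeros((64, 64), dtype=np.int64), rng)
```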

(127) At 504, the images and generated augmented images are then passed through the layers of the CNN in a standard forward pass. The forward pass returns the results, which are then used to calculate at 505 the value of the loss function—the difference between the desired output and the actual, computed output. The difference can be expressed using a similarity metric, e.g.: mean squared error, mean average error, categorical cross-entropy or another metric.

(128) At 506, weights are updated as per the specified optimizer and optimizer learning rate. The loss may be calculated using a per-pixel cross-entropy loss function and the Adam update rule.

(129) The loss is also back-propagated through the network, and the gradients are computed. Based on the gradient values, the network's weights are updated. The process (beginning with the image batch read) is repeated continuously until an end of the training session is reached at 507.
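
The per-iteration logic of steps 504-506 might look like the following sketch (a hedged PyTorch illustration, not the actual implementation); the stand-in model, batch shapes and learning rate are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 7, kernel_size=3, padding=1)           # stand-in for the CNN being trained
criterion = nn.CrossEntropyLoss()                            # per-pixel cross-entropy loss (505)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)    # Adam update rule (506)

def training_step(batch_images, batch_labels):
    """One batch: forward pass, loss, back-propagation, weight update."""
    optimizer.zero_grad()
    logits = model(batch_images)                             # forward pass (504)
    loss = criterion(logits, batch_labels)                   # desired vs. computed output (505)
    loss.backward()                                          # back-propagate the gradients
    optimizer.step()                                         # update the network's weights (506)
    return loss.item()

loss_value = training_step(torch.randn(2, 1, 128, 128),
                           torch.randint(0, 7, (2, 128, 128)))
```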

(130) Then, at 508, the performance metrics are calculated using a validation dataset, which is not explicitly used in the training set. This is done in order to check at 509 whether or not the model has improved. If it has not, the early stop counter is incremented at 514 and it is checked at 515 whether its value has reached a predefined number of epochs. If so, the training process is complete at 516, since the model has not improved for many sessions.

(131) If the model has improved, the model is saved at 510 for further use and the early stop counter is reset at 511. As the final step in a session, learning rate scheduling can be applied. The sessions at which the rate is to be changed are predefined. Once one of these session numbers is reached at 512, the learning rate is set to the value associated with that specific session number at 513.
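
A hedged sketch of this end-of-session logic of steps 508-516 (validation check, checkpointing, early-stop counter and stepwise learning-rate schedule); the patience value, schedule and file name are illustrative assumptions.

```python
import torch

best_metric = float("inf")                 # e.g. validation loss; lower is better
early_stop_counter, patience = 0, 10
lr_schedule = {50: 1e-4, 80: 1e-5}         # session number -> new learning rate (512-513)

def end_of_session(session, val_metric, model, optimizer):
    """Returns True when training should stop (early stopping)."""
    global best_metric, early_stop_counter
    if val_metric < best_metric:                               # model improved (509)
        best_metric = val_metric
        torch.save(model.state_dict(), "best_model.pt")        # save for further use (510)
        early_stop_counter = 0                                 # reset the counter (511)
    else:
        early_stop_counter += 1                                # increment the counter (514)
        if early_stop_counter >= patience:                     # predefined number reached (515)
            return True                                        # training complete (516)
    if session in lr_schedule:                                 # learning rate scheduling (512)
        for group in optimizer.param_groups:
            group["lr"] = lr_schedule[session]                 # set the scheduled rate (513)
    return False
```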

(132) Once the training is complete, the network can be used for inference, i.e. utilizing a trained model for prediction on new data.

(133) FIG. 12 shows a flowchart of an inference process for the denoising CNN 300.

(134) After inference is invoked at 601, a set of scans (LDCT, not denoised) is loaded at 602 and the denoising CNN 300 and its weights are loaded at 603.

(135) At 604, one batch of images at a time is processed by the inference server. At 605, a forward pass through the denoising CNN 300 is computed.

(136) At 606, if not all batches have been processed, a new batch is added to the processing pipeline until inference has been performed on all input noisy LDCT images.

(137) Finally, at 607, the denoised scans are saved.

(138) FIG. 13 shows a flowchart of an inference process for the segmentation CNN 400.

(139) After inference is invoked at 701, a set of scans (denoised images obtained from noisy LDCT images) is loaded at 702 and the segmentation CNN 400 and its weights are loaded at 703.

(140) At 704, one batch of images at a time is processed by the inference server.

(141) At 705, the images are preprocessed (e.g., normalized, cropped) using the same parameters that were utilized during training, as discussed above. In at least some implementations, inference-time distortions are applied and the average inference result is taken over, for example, 10 distorted copies of each input image. This feature creates inference results that are robust to small variations in brightness, contrast, orientation, etc.
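
A minimal sketch of such inference-time averaging (test-time augmentation), limited here to small brightness/contrast jitter over 10 copies; the stand-in model and jitter magnitudes are assumptions rather than the actual implementation.

```python
import torch

def tta_inference(model, image, n_copies=10):
    """Average class probabilities over randomly distorted copies of one slice."""
    probs = None
    with torch.no_grad():
        for _ in range(n_copies):
            gain = 1.0 + 0.05 * (2 * torch.rand(1) - 1)        # contrast jitter
            bias = 0.05 * (2 * torch.rand(1) - 1)              # brightness jitter
            out = torch.softmax(model(image * gain + bias), dim=1)
            probs = out if probs is None else probs + out
    return probs / n_copies                                    # averaged inference result

model = torch.nn.Conv2d(1, 7, kernel_size=3, padding=1)        # stand-in for the trained CNN 400
avg = tta_inference(model, torch.randn(1, 1, 128, 128))        # shape (1, 7, 128, 128)
```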

(142) At 706, a forward pass through the segmentation CNN 400 is computed.

(143) At 707, the system may perform post-processing such as linear filtering (e.g. Gaussian filtering), or nonlinear filtering, such as median filtering and morphological opening or closing.

(144) At 708, if not all batches have been processed, a new batch is added to the processing pipeline until inference has been performed on all input images.

(145) Finally, at 709, the inference results are saved and can be combined into a segmented 3D model. The model can be further converted to a polygonal mesh representation for the purpose of visualization on the display. The volume and/or mesh representation parameters can be adjusted in terms of color, opacity and mesh decimation, depending on the needs of the operator.
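
As a hedged illustration of combining the per-slice results and extracting a displayable surface, the sketch below stacks segmentation masks into a volume and runs marching cubes on one anatomy class; the class label, volume contents and use of scikit-image are assumptions.

```python
import numpy as np
from skimage import measure

# Stack per-slice segmentation masks (class labels) into a 3D volume.
slices = [np.zeros((128, 128), dtype=np.uint8) for _ in range(64)]   # placeholder results
volume = np.stack(slices, axis=0)
volume[20:40, 40:80, 40:80] = 15               # pretend class 15 (pedicles) was found here

binary = (volume == 15).astype(np.float32)     # isolate the selected part of the anatomy

# Extract a polygonal mesh of that part for visualization; the mesh can then be
# decimated or re-colored depending on the needs of the operator.
verts, faces, normals, values = measure.marching_cubes(binary, level=0.5)
```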

(146) FIG. 14A shows a sample image of a CT spine scan and FIG. 14B shows a sample image of its segmentation. Every class (anatomical part of the vertebrae) can be denoted with its specific color. The segmented image comprises spinous process 11, lamina 12, articular process 13, transverse process 14, pedicles 15, vertebral body 16.

(147) FIG. 6 shows a sample of the segmented images displaying all the parts of the vertebrae (11-16) obtained after the semantic segmentation combined into a 3D model.

(148) The functionality described herein can be implemented in a computer system. The system may include at least one non-transitory processor-readable storage medium that stores at least one of processor-executable instructions or data and at least one processor communicably coupled to that at least one non-transitory processor-readable storage medium. That at least one processor is configured to perform the steps of the methods presented herein.

(149) FIG. 15 shows a schematic illustration of a computer-implemented system 900, for example a machine learning system, in accordance with one embodiment of the invention, for implementing the segmentation CNN. The system 900 may include at least one non-transitory processor-readable storage medium 910 that stores at least one of processor-executable instructions 915 or data; and at least one processor 920 communicably coupled to the at least one non-transitory processor-readable storage medium 910. The at least one processor 920 may be configured to (by executing the instructions 915) receive segmentation learning data comprising a plurality of batches of labeled anatomical image sets, each image set comprising image data representative of a series of slices of a three-dimensional bony structure, and each image set including at least one label which identifies the region of a particular part of the bony structure depicted in each image of the image set, wherein the label indicates one of a plurality of classes indicating parts of the bone anatomy. The at least one processor 920 may also be configured to (by executing the instructions 915) train a segmentation CNN, that is, a fully convolutional neural network model with layer skip connections, to segment into a plurality of classes at least one part of the bony structure utilizing the received segmentation learning data. The at least one processor 920 may also be configured to (by executing the instructions 915) store the trained segmentation CNN in the at least one non-transitory processor-readable storage medium 910 of the machine learning system.

(150) While the invention has been described with respect to a limited number of embodiments, it will be appreciated that many variations, modifications and other applications of the invention may be made. Therefore, the claimed invention as recited in the claims that follow is not limited to the embodiments described herein.