MRI-BASED AUGMENTED REALITY ASSISTED REAL-TIME SURGERY SIMULATION AND NAVIGATION

20230114385 · 2023-04-13

    Abstract

    An MRI-based surgical navigation method of providing a personalized augmented reality of targeted internal organs with real-time intraoperative tracking is provided. Briefly, two-dimensional MRI images of targeted internal organs are segmented into a plurality of segmented data and recombined thereof to generate a three-dimensional volumetric model of the targeted internal organs. An augmented reality-based three-dimensional simulation model including anatomical features and spatial information of the targeted internal organs is obtained to be overlaid with the three-dimensional volumetric model while collecting real-time feedback of surgical operations. The anatomical features and spatial information data of the targeted internal organs are processed to generate robust and accurate navigation coordinates, which will be outputted to an augmented reality-based three-dimensional simulation and real-time anatomical model capture and display device for assisting medical practitioners to visualize surgical paths and specific anatomical features of an individual receiving said surgical operations.

    Claims

    1. An MRI-based surgical navigation method of providing a personalized augmented reality of targeted internal organs of a subject with real-time intraoperative tracking, comprising: obtaining a plurality of two-dimensional magnetic resonance imaging images of targeted internal organs by a magnetic resonance imaging device; segmenting one or more of the two-dimensional magnetic resonance imaging images into a plurality of segmented data and recombining thereof to generate a three-dimensional volumetric model of the targeted internal organs; providing an augmented reality-based three-dimensional simulation to obtain an augmented reality-based three-dimensional simulation model including anatomical features and spatial information of the targeted internal organs, and overlaying thereof with the three-dimensional volumetric model of the targeted internal organs while gaining a real-time feedback of one or more surgical operations carried out on the targeted internal organs; processing data of the anatomical features and spatial information of the targeted internal organs to generate a plurality of robust and accurate navigation coordinates; and outputting the plurality of robust and accurate navigation coordinates to an augmented reality-based three-dimensional simulation and real-time anatomical model capture and display device for assisting medical practitioners of the one or more surgical operations to visualize at least a surgical path and specific anatomical features of an individual receiving said surgical operations; wherein the plurality of robust and accurate navigation coordinates is calculated and generated by a combined optical and electromagnetic tracking system with the following steps: marking optical markers on the targeted internal organs based on fiducial markers attached to the surgical area; tracking the optical markers by the combined optical and electromagnetic tracking system and generating a set of tracking data; and feeding the set of tracking data to a filter that transforms the tracking data through a non-linear function to generate the navigation coordinates.

    2. The method of claim 1, wherein the segmenting and recombining are carried out by a deep neural network to generate the three-dimensional volumetric model of the targeted internal organs with unique identification of non-specific and specific anatomical features.

    3. The method of claim 1, wherein the augmented reality-based three-dimensional simulation and real-time anatomical model capture and display device collects at least one body appearance feature of the subject from a user's direction of view, wherein the at least one body appearance feature is in registration with a human appearance characteristics image database.

    4. The method of claim 3, wherein the plurality of robust and accurate navigation coordinates is calculated based on the plurality of two-dimensional magnetic resonance imaging images and the at least one body appearance feature of the subject.

    5. The method of claim 1, wherein the filter is an unscented Kalman filter for transforming the data points through the non-linear function in order to obtain a deep learning-based data forecast model.

    6. The method of claim 5, wherein the unscented Kalman filter is cascaded with a deep neural network for predicting the surgical path of the medical instrument in the targeted internal organs.

    7. A surgical navigating system for providing patient-specific and surgical environment-specific pre-operative planning and intraoperative navigation, comprising: a magnetic resonance imaging (MRI) device for capturing a plurality of two-dimensional magnetic resonance imaging images of targeted internal organs; a deep neural network for segmenting the plurality of the two-dimensional magnetic resonance imaging images of the targeted internal organs to obtain segmented data of the two-dimensional magnetic resonance imaging images and recombining the segmented data to generate a three-dimensional volumetric model of the targeted internal organs; a combined optical and electromagnetic tracking system for acquiring data of optical markers' locations at the targeted internal organs and transforming the data points through a non-linear function; and an augmented reality-based three-dimensional simulation and real-time anatomical model capture and display device for creating a three-dimensional anatomical simulation model during a simulated surgical operation based on the three-dimensional volumetric shape of the targeted human body part, collecting at least one body appearance feature, gathering real-time feedback of one or more surgical operations, overlaying the three-dimensional volumetric shape of the targeted human body part with the three-dimensional anatomical simulation model, and displaying a predicted surgical path of a medical instrument obtained during a surgery simulation process, including navigation coordinates of medical instruments and specific anatomical features of an individual receiving the surgical operation.

    8. The system of claim 7, wherein the augmented reality-based three-dimensional simulation and real-time anatomical model capture and display device collects at least one body appearance feature of the subject from a user's direction of view, wherein the at least one body appearance feature is in registration with a human appearance characteristics image database.

    9. The system of claim 7, wherein the combined optical and electromagnetic tracking system processes data of the anatomical features and spatial information of the targeted internal organs to generate a plurality of robust and accurate navigation coordinates.

    10. The system of claim 9, wherein the plurality of robust and accurate navigation coordinates is calculated based on the plurality of two-dimensional magnetic resonance imaging images and the at least one body appearance feature of the subject.

    11. The system of claim 7, wherein the combined optical and electromagnetic tracking system comprises a filter for transforming the data points through the non-linear function.

    12. The system of claim 11, wherein the filter is an unscented Kalman filter cascaded with the deep neural network for predicting the surgical path of the medical instrument in the targeted internal organs.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0029] The patent application file contains at least one drawing executed in color. Copies of this patent application publication with color drawings will be provided by the Office upon request and payment of the necessary fee.

    [0030] The above and other objects, features and advantages of the present invention will become more apparent to those of ordinary skill in the art by describing in detail exemplary embodiments thereof with reference to the accompanying drawings, in which:

    [0031] FIGS. 1A-1C show the transition from an MRI image to a volumetric model generated by a deep neural network; FIG. 1A depicts a T1 MRI on the axial plane; FIG. 1B depicts a deep learning-based segmentation mask; and FIG. 1C shows a 3D structure of the anatomy from the MRI;

    [0032] FIGS. 2A-2B exhibit a 3D holographic superimposition of anatomical AR model on a dummy head;

    [0033] FIG. 3 depicts a workflow of the anatomical AR Visualization; and

    [0034] FIG. 4 shows the AR-based system block diagram.

    DETAILED DESCRIPTION

    [0035] The MRI-based surgical navigation method and system for providing a personalized augmented reality of targeted internal organs of a subject with real-time intraoperative tracking are described in detail below. The invention is described in relation to neurosurgery involving the brain; however, it is understood that the method and system are generally applicable to surgery in other parts of the body. Turning to the drawings in detail, FIG. 4 depicts a surgical system 100 according to an embodiment. System 100 includes deep neural network 10 and MRI image repository 20. In order to improve the visualization from 2D MRI images to a 3D volumetric shape, a deep neural network-based segmentation technique is applied by network 10 to every MRI image; segmented image parts are recombined to form a volumetric shape with unique identification of white matter, gray matter and any abnormalities. This deep neural network 10 employs a Bayesian approach in which a small portion of pretrained data is sufficient to train the entire network. The processed data is fed to registration 70 for matching with a human appearance/organ characteristics image database. Segmentation and volume rendering of the anatomical model using such a technique is a novel approach, and this information is sent to an augmented reality display system.
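    By way of illustration only (not part of the claimed method), the recombination step, i.e. stacking per-slice segmentation masks into a volumetric model, can be sketched in Python with NumPy. The function name, label scheme, and slice thickness below are hypothetical; a real pipeline would operate on DICOM-derived arrays with scanner-provided spacing:

    ```python
    import numpy as np

    def recombine_masks(mask_slices, slice_thickness_mm=1.0):
        """Stack per-slice 2D segmentation masks into a labeled 3D volume.

        Each mask is an integer array whose voxel labels identify a tissue
        class (hypothetical scheme: 0 = background, 1 = gray matter,
        2 = white matter, 3 = abnormality). Returns the volume and its
        physical extent along the slice axis.
        """
        volume = np.stack(mask_slices, axis=0)            # shape: (slices, H, W)
        extent_mm = volume.shape[0] * slice_thickness_mm  # physical depth
        return volume, extent_mm

    # Toy example: three 4x4 slices, with an "abnormality" label (3)
    # appearing only in the middle slice.
    slices = [np.zeros((4, 4), dtype=np.int32) for _ in range(3)]
    slices[1][1:3, 1:3] = 3
    volume, extent = recombine_masks(slices, slice_thickness_mm=2.0)
    ```

    Keeping the labels in a single integer volume is what allows the later rendering stage to distinguish white matter, gray matter, and abnormalities when generating the 3D model.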

    [0036] The surgical system 100 further includes a head-mounted AR display system 30, which may be selected from commercially available head-mounted AR display systems. The surgical procedure is simulated on the 3D anatomical model displayed by display system 30 with real-time feedback of the surgical procedure. Surgical simulation using AR headsets is an example of the AR-based 3D simulation and real-time anatomical model capture and display device. Finally, real-time tracking of surgical navigation uses the AR anatomical 3D model, which is designed based on a fusion of algorithms. The surgical system uses an optical tracking system 40 and an electromagnetic tracking system 50, which feed their information to an Unscented Kalman filter 60 for transforming the data points through the non-linear function in order to obtain a robust and accurate navigation coordinate, subsequently transmitted to registration 70. The two data streams, respectively processed by deep neural network 10 and unscented Kalman filter 60, are combined and transmitted to AR display system 30, so that real-time tracking information is included during the surgical simulation.
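    The core idea of fusing the optical and electromagnetic readings can be illustrated, in a simplified form, by inverse-covariance weighting of two independent position measurements of the same marker. This is only a sketch of the fusion principle, not the unscented Kalman filter actually used by the system, and all names and values are hypothetical:

    ```python
    import numpy as np

    def fuse_measurements(z_opt, R_opt, z_em, R_em):
        """Fuse two independent measurements of the same marker position
        (optical and electromagnetic) by inverse-covariance weighting:
        the sensor with the smaller noise covariance gets the larger weight."""
        W_opt, W_em = np.linalg.inv(R_opt), np.linalg.inv(R_em)
        P = np.linalg.inv(W_opt + W_em)          # fused covariance
        z = P @ (W_opt @ z_opt + W_em @ z_em)    # fused position estimate
        return z, P

    # Toy case: the optical reading is 4x more certain than the EM one,
    # so the fused estimate lies closer to the optical measurement.
    z_opt, R_opt = np.array([1.0, 0.0, 0.0]), 0.01 * np.eye(3)
    z_em,  R_em  = np.array([1.2, 0.0, 0.0]), 0.04 * np.eye(3)
    z, P = fuse_measurements(z_opt, R_opt, z_em, R_em)
    ```

    The fused covariance P is smaller than either sensor's covariance alone, which is the motivation for combining the two tracking modalities: each compensates for the other's failure mode (line-of-sight loss for optical, field distortion for electromagnetic).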

    [0037] Tracking system 50 typically includes a surgical probe which may be mounted to a surgical instrument such as a catheter or be manually inserted into a surgical field. The surgical probe includes an image tracking element that provides images of anatomy in the vicinity of the probe. This imaging may be displayed as three images from three mutually orthogonal directions. Tracking of a surgical probe may be accomplished using electromagnetic, ultrasonic, mechanical, or optical techniques.
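    The three mutually orthogonal views mentioned above can be produced by indexing the reconstructed volume along each axis at the probe position. A minimal sketch with hypothetical names (axis conventions depend on how the volume was acquired):

    ```python
    import numpy as np

    def orthogonal_views(volume, probe_ijk):
        """Extract the three mutually orthogonal slices (e.g. axial,
        coronal, sagittal) of a 3D volume passing through the probe
        position given as integer voxel indices (i, j, k)."""
        i, j, k = probe_ijk
        return volume[i, :, :], volume[:, j, :], volume[:, :, k]

    vol = np.arange(24).reshape(2, 3, 4)   # tiny stand-in volume
    axial, coronal, sagittal = orthogonal_views(vol, (1, 2, 3))
    ```

    All three slices intersect at the probe voxel, so the displayed triplet always shows the anatomy immediately surrounding the probe tip.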

    [0038] Functionalities of the system 100 are classified into three main principles: segmentation and 3D reconstruction of an anatomical model from patient-specific MRI scans; enhancement of the anatomical visualization using an augmented reality-based 3D model; and real-time tracking of the surgical incision in the AR-based anatomical 3D model. Three major applications of the present invention are: (1) preoperative planning, (2) surgery simulation and (3) intraoperative navigation.

    [0039] (1) The preoperative planning system provides an in-depth 3D visualization of the anatomical model, which is derived from the patient-specific MRI scans. The present system can help surgeons set the trajectory of the surgical path. In a conventional setup, this is done based on 2D MRI scans and involves human intervention to select suitable scans for reconstruction into a 3D anatomical model.

    [0040] (2) Surgical simulation based on the preoperative planning can provide a preliminary view of the whole surgical process. These simulation results can help surgeons deal with unexpected situations that might arise during surgery. Most importantly, surgical simulation can help medical students and professionals practice any specific surgical method repeatedly and conveniently.

    [0041] (3) The intraoperative surgical system is the major focus of the present invention. The present system combines the tracking data from the optical and electromagnetic tracking systems to provide robust and accurate tracking data of the surgical incision. Fused with the 3D anatomical model, these surgical incision tracking coordinates can give a clear picture of the whole surgery in an AR environment, significantly improving the safety of the whole operation.

    [0042] In a preoperative condition, three-dimensional anatomical models are used as a guidance system to map the surgical procedures. Visualizing the anatomy and the related abnormalities in three-dimensions provides better accuracy than the conventional methods, hence, the quality of preplanning improves severalfold.

    [0043] Further details relating to the construction and operation of surgical system 100 are discussed in connection with the Example, below.

    EXAMPLE

    [0044] In order to create the three-dimensional anatomical model, open-source MRI datasets were employed. An augmented reality headset system 30, a HoloLens 2 (Microsoft Corporation), and an optical tracking system 40, an OptiTrack V120 Trio (NaturalPoint, Inc., USA), were used in this example for tracking an optical marker's location and, at the same time, for creating the electromagnetic tracking system 50.

    [0045] The data acquired from both tracking systems 40 and 50 are fed into the Unscented Kalman Filter 60 to obtain a robust and accurate navigation coordinate which can be supplied to AR display system 30. The Unscented Kalman filter is a derivative of the original Kalman Filter that transforms the data points through a non-linear function via the unscented transformation. As a result, the data-point approximation is more accurate and less prone to line-of-sight and magnetic-field interruption errors. Data correction and approximation using the Unscented Kalman filter is a novel approach in this field. For better accuracy of data forecasting, a deep learning-based data forecast model is placed in line with the unscented Kalman filter.
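    The unscented transformation at the heart of this filter can be sketched as follows: sigma points of the input Gaussian are propagated through the non-linear function, then re-weighted to recover the transformed mean and covariance. This is a generic textbook formulation with the simple kappa-weighting scheme, not the specific filter configuration of this example:

    ```python
    import numpy as np

    def unscented_transform(mean, cov, f, kappa=0.0):
        """Propagate a Gaussian (mean, cov) through a non-linear function f
        using 2n + 1 sigma points; return the transformed mean/covariance."""
        n = mean.size
        S = np.linalg.cholesky((n + kappa) * cov)   # scaled matrix square root
        sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                       + [mean - S[:, i] for i in range(n)]
        w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
        w[0] = kappa / (n + kappa)                  # weight of the central point
        y = np.array([f(s) for s in sigma])
        y_mean = w @ y
        y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean)
                    for wi, yi in zip(w, y))
        return y_mean, y_cov

    # Sanity check on a linear map, for which the transform is exact.
    m, C = unscented_transform(np.array([1.0, 2.0]), 0.25 * np.eye(2),
                               lambda x: 2.0 * x)
    ```

    For a linear map the unscented transform reproduces the exact propagated mean and covariance; its value over simple linearization appears when f is genuinely non-linear, as with the tracking-coordinate transformations described here.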

    [0046] The augmented reality headset 30 is used to visualize the tracking data supplied from registration 70, which collects the processed data from neural network 10 and unscented Kalman filter 60. Also, the HoloLens AR display system 30 is equipped with a depth camera system, allowing the system to collect point cloud data of the patient's body appearance features and register it with a human appearance characteristics image database for holographic superimposition.

    [0047] Turning to FIGS. 1A-1C, the conversion of the T1 MRI image (FIG. 1A) to a segmented mask (FIG. 1B) and a volumetric model (FIG. 1C) using deep learning models is demonstrated. FIG. 1A depicts a T1 MRI on an axial plane. In FIG. 1B, a deep learning-based segmentation mask is applied to the 2D image of FIG. 1A. Finally, FIG. 1C shows a 3D structure of the anatomy derived from a number of MRI images. As seen in the 3D image of FIG. 1C, the volumetric model enhances the visualization compared to traditional 2D MRI images. Further, because the volumetric image is generated from many MRI images taken throughout the organ, the interior of the image also displays the unique features of the patient's anatomy, including the targets of the surgical intervention such as anatomical abnormalities or objects such as tumors.

    [0048] Using custom hand gestures and virtual surgical equipment, surgeons can simulate the surgery on the 3D anatomical AR model based on the preplanning data. Simulating surgical procedures in this way improves the accuracy of the subsequent intraoperative procedures.

    [0049] During the intraoperative stage, the 3D model of the targeted anatomical part is superimposed on the surgical region of interest at a 1:1 ratio using the augmented reality glasses. The superimposed augmented reality 3D model on the surgical region of interest unfolds the view of the inner structure of the anatomy, which is not visible to the naked eye. Inner structural details of the anatomy with the guided view of augmented reality reveal more information regarding the patient's anatomy.

    [0050] The system 100 was tested by targeting a neurosurgical procedure on a dummy model. As shown in FIGS. 2A-2B, which illustrate the superimposition-based AR surgical tracking, a 3D reconstructed ventricle is successfully and accurately superimposed on the dummy head from all viewing angles. The superimposition of the 3D anatomical model onto a dummy allows surgeons or any participants in medical planning to see the inner structures of the brain.

    [0051] From the MRI data processing to the intraoperative real-time surgical tracking system with the patient-specific anatomical 3D AR models, a pipeline of refined algorithms is provided. FIG. 3 exhibits the workflow of the actions from MRI data to intraoperative 3D anatomical AR-based surgical tracking. As seen in FIG. 3, Phase 1 involves deep learning-based DICOM segmentation and volumetric reconstruction. This phase is used with the annotation and registration Phase 2, which involves a physical tracking device used in a surgical procedure. Clinical imaging data is typically provided in the DICOM (Digital Imaging and Communications in Medicine) format. DICOM is a tag-based format in which objects in a file are encapsulated in a tag that describes the object and its size. Affine transformation involves a linear mapping that preserves points, lines, and planes, and can correct distortions in images that were taken from less-than-ideal image capture perspectives. These transformed images undergo deformable registration and are applied to a U-Net convolutional network, an architecture that has been widely used for biomedical image segmentation. The processed image data undergoes neuroanatomical segmentation and is routed to be processed with surface nets, Laplacian smoothing and mesh decimation, after which it can be used in the formation of a 3D reconstructed anatomical image; in this case, in FIG. 3, a 3D reconstructed brain model.
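    The affine step described above can be illustrated in a few lines: an affine map x -> Ax + t preserves collinearity, which is what makes it suitable for correcting scale, rotation, and shear distortions in acquired image geometry. The matrices and points below are illustrative values only:

    ```python
    import numpy as np

    def apply_affine(points, A, t):
        """Apply the affine map x -> A x + t to an (N, 2) array of points.
        Affine maps preserve points, lines, and planes, so collinear
        input points remain collinear after the correction."""
        return points @ A.T + t

    pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # collinear points
    A = np.array([[2.0, 0.5], [0.0, 1.5]])                # scale + shear
    t = np.array([1.0, -1.0])
    out = apply_affine(pts, A, t)
    ```

    Deformable registration, by contrast, allows locally varying displacements, which is why the pipeline applies it after this global affine alignment.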

    [0052] The 3D brain model is used in combination with a tracking device (following probe calibration) to undergo registration in Phase 2. In parallel, the neuroanatomical segmented data undergo 3D geometrical measurements and multi-planar reconstruction (MPR). MPR converts data from an imaging modality, acquired in an axial plane, into another plane. This converted data undergoes axis calibration, pivot calibration, and quaternion transformation. This information is applied in Phase 2 to create a surgical navigation annotation module, along with a fiducial registration module and, for the particular optical system used as an exemplary embodiment, an OpenIGTLink module. Additionally, the transformed data undergoes singular value decomposition (point-based) surface matching, which is also used in the various modules described above.
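    Singular value decomposition (point-based) surface matching is commonly realized as rigid point-set registration (the Kabsch/Horn method). A minimal sketch, under the assumption of known point correspondences and hypothetical test values:

    ```python
    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rotation R and translation t with dst ~ R @ src + t,
        estimated from corresponding 3D points via singular value
        decomposition of the cross-covariance matrix."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)                  # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard vs. reflection
        R = Vt.T @ D @ U.T
        t = c_dst - R @ c_src
        return R, t

    # Recover a known 90-degree rotation about z plus a translation.
    src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
    R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
    t_true = np.array([1.0, 2.0, 3.0])
    dst = src @ R_true.T + t_true
    R, t = rigid_register(src, dst)
    ```

    In a fiducial registration module, src would be the fiducial positions in model (MRI) coordinates and dst their positions as reported by the tracking device; the recovered (R, t) maps the anatomical model into tracker space.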

    [0053] Finally in Phase 3, real-time navigation applies the processed data from Phase 1 and Phase 2 in order to provide real-time tracking visualization in Augmented Reality.

    INDUSTRIAL APPLICABILITY

    [0054] The present invention leverages visualization techniques using current AR technology combined with real-time tracking. Currently, there is no AR-based surgical tracking technique or product available on the market. Most AR-based surgical navigation approaches from research can be classified into two areas: first, using a head-mounted display (HMD) to superimpose the augmented anatomical structure onto the real patient without any tracking; second, displaying the tracking data of surgical equipment (a probe) on an anatomical model. The common approaches in ongoing research focus on visualization and enhancement techniques instead of providing the spatial information needed to visualize the 3D structure of the anatomy, tracking efficiency, data processing, and a user-end platform.

    [0055] The following are some key distinctive advantages of the present invention compared with currently available commercial products and research: [0056] 1. MRI data are segmented and processed to generate the volumetric structure of the anatomy without human intervention. Manual segmentation of an MRI set, as used in the conventional system and method, is very time-consuming and labor-intensive. The present invention improves the efficiency of segmentation and volume rendering by using a deep-learning-based segmentation technique through a deep neural network. [0057] 2. The present invention resolves the problem of incorporating AR techniques in displaying a 3D model while preserving the spatial information after transforming from one coordinate system to another. [0058] 3. To improve the efficiency of tracking, a combined optical and electromagnetic tracking system is provided, and the data generated therefrom are validated through an Unscented Kalman Filter cascaded with a deep neural network for data forecast before superimposition. The Unscented Kalman filter is very robust in comparing and calculating the coordinate data generated from the combined optical and electromagnetic tracking system. Using the deep neural network along with the Unscented Kalman filter to predict the path of movement of surgical equipment improves the accuracy of the prediction. The network is trained on collected data of surgeons' hand movements. One advantage of cascading the Kalman filter with the deep neural network is the accuracy level and successful path prediction, which reduces risk, especially in complex organ surgeries such as spine surgery. [0059] 4. One of the major drawbacks of existing surgical navigation systems is the usability cost. Normally, tracking data cannot be visualized without using proprietary software or devices. However, the present invention provides an application platform independent of these, which can be provided with mobile support, giving surgeons and medical professionals easy accessibility and thereby reducing their overall operational cost. [0060] 5. The augmented reality 3D anatomy based on patient-specific magnetic resonance imaging (MRI) images can apply to surgeon training in medical schools as an advance over traditional cadaver-based teaching, due to its accessibility, mobility and usability.

    [0061] Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the specification, and following claims.