METHOD OF DESIGNING A SKULL PROSTHESIS, AND NAVIGATION SYSTEM
20220000555 · 2022-01-06
Assignee
- INSTITUTE OF SCIENCE AND TECHNOLOGY AUSTRIA (Klosterneuburg, AT)
- Medizinische Universität Wien (Wien, AT)
Inventors
Cpc classification
G06T19/20
PHYSICS
A61F2002/4633
HUMAN NECESSITIES
G16H20/40
PHYSICS
G16H50/20
PHYSICS
A61B2034/104
HUMAN NECESSITIES
A61B2034/108
HUMAN NECESSITIES
A61B2034/105
HUMAN NECESSITIES
A61B34/10
HUMAN NECESSITIES
International classification
A61B34/10
HUMAN NECESSITIES
G06T19/00
PHYSICS
G06T19/20
PHYSICS
G16H20/40
PHYSICS
Abstract
Methods and apparatus for designing a skull prosthesis are disclosed. In one arrangement, imaging data from a medical imaging process is received. The imaging data represents the shape of at least a portion of a skull. The imaging data is used to display on a display device a first virtual representation of at least a portion of the skull. User input defining a cutting line in the first virtual representation is received. A surgical operation of cutting through the skull along at least a portion of the defined cutting line to at least partially disconnect a target portion of the skull from the rest of the skull is simulated. Output data is provided based on the simulation. The output data represents a simulated shape of at least a portion of the skull with the target portion at least partially disconnected from the rest of the skull, thereby defining the shape of an implantation site for a skull prosthesis to be manufactured.
Claims
1. A computer-implemented method of designing a skull prosthesis, comprising: receiving imaging data from a medical imaging process, the imaging data representing the shape of at least a portion of a skull; using the imaging data to display on a display device a first virtual representation of at least a portion of the skull; receiving user input defining a cutting line in the first virtual representation; simulating a surgical operation of cutting through the skull along at least a portion of the defined cutting line to at least partially disconnect a target portion of the skull from the rest of the skull; providing output data based on the simulation, the output data representing a simulated shape of at least a portion of the skull with the target portion at least partially disconnected from the rest of the skull, thereby defining the shape of an implantation site for a skull prosthesis to be manufactured.
2. The method of claim 1, wherein the output data comprises a modified version of the received imaging data.
3. The method of claim 2, wherein the modification comprises changing only data in the received imaging data that represents the target portion of the skull to be disconnected.
4. The method of claim 3, wherein the modification comprises changing voxels of the imaging data identified as containing skull material but falling within the target portion of the skull to be distinguishable from voxels identified as containing skull material that are outside of the target portion of the skull.
5. The method of claim 1, wherein the user input defining the cutting line comprises specification of a position of each of a plurality of first reference points defining the cutting line.
6. The method of claim 1, wherein the user input further defines an angle of the simulated cutting, defined as a deviation from a normal to the surface of the skull when viewed along the cutting line, at one or more positions along the cutting line.
7. The method of claim 6, wherein the user input defines an angle of the simulated cutting, defined as a deviation from a normal to the surface of the skull when viewed along the cutting line, that varies as a function of position along the cutting line, for at least a portion of the cutting line.
8. The method of claim 6, wherein the angle of the simulated cutting is defined by specifying the position of one or more second reference points and requiring that, for each of one or more positions along the cutting line, a cutting direction through the skull is parallel to a line extending from a selected one of the second reference points to the respective position along the cutting line.
9. The method of claim 8, wherein the angle of the simulated cutting is defined relative to different second reference points for at least two different positions along the cutting line.
10. The method of claim 1, wherein the user input defining the cutting line comprises specifying a line along an outer surface of the skull.
11. The method of claim 1, wherein the medical imaging process comprises a neuroradiological diagnostic method.
12. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 1.
13. A computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of claim 1.
14. A method of manufacturing a skull prosthesis, comprising: performing the method of claim 1; and manufacturing a skull prosthesis using the output data.
15. The method of claim 14, wherein the skull prosthesis comprises a polyaryletherketone.
16. A method of preparing for implantation of a skull prosthesis, comprising: performing the method of claim 1; and using a navigation system to mark the location of a line in real space that corresponds to the cutting line defined in the first virtual representation, wherein a position in real space of a reference instrument manipulated by a user of the navigation system is monitored and correlated with a position in a virtual environment displayed by the navigation system, wherein the virtual environment comprises a second virtual representation of at least a portion of the skull, and wherein the second virtual representation is formed using the output data representing the simulated shape of at least a portion of the skull with the target portion at least partially disconnected from the rest of the skull.
17. The method of claim 16, further comprising: receiving an updated version of the imaging data from the medical imaging process, wherein: the second virtual representation is generated by combining the updated version of the imaging data and the output data.
18. A method of replacing a target portion of a skull with a skull prosthesis, comprising: performing the method of claim 14; using a navigation system to mark the location of a line in real space that corresponds to the cutting line defined in the first virtual representation, wherein a position in real space of a reference instrument manipulated by a user of the navigation system is monitored and correlated with a position in a virtual environment displayed by the navigation system, wherein the virtual environment comprises a second virtual representation of at least a portion of the skull, and wherein the second virtual representation is formed using the output data representing the simulated shape of at least a portion of the skull with the target portion at least partially disconnected from the rest of the skull; cutting through the skull along the marked line; removing the target portion of the skull; and implanting the manufactured skull prosthesis.
19. The method of claim 18, wherein: the user input defines an angle of the simulated cutting, defined as a deviation from a normal to the surface of the skull when viewed along the cutting line, at one or more positions along the cutting line; and during the cutting through the skull, an angle of the cutting is controlled for each position along the marked line by reference to the user input defined angle of the simulated cutting.
20. A navigation system, comprising: a display device configured to display a virtual environment containing a virtual representation of at least a portion of a skull; and a reference instrument configured to indicate how a cutting line to be followed in a surgical operation can be marked, wherein: the navigation system is configured to monitor a position of the reference instrument in real space and correlate the monitored position with a position in the virtual environment, such that the position of the reference instrument relative to the skull in real space corresponds to a position of a reference instrument relative to the virtual representation of the skull in the virtual environment; and the virtual representation comprises an indication of a cutting line on the skull in the virtual environment.
Description
[0011] The invention will now be further described, by way of example, with reference to the accompanying drawings.
[0027] Various methods of the present disclosure are computer-implemented. Each step of such methods may therefore be performed by a computer. The computer may comprise various combinations of computer hardware, including for example CPUs, RAM, SSDs, motherboards, network connections, firmware, software, and/or other elements known in the art that allow the computer hardware to perform the required computing operations. The required computing operations may be defined by one or more computer programs. The one or more computer programs may be provided in the form of media, optionally non-transitory media, storing computer readable instructions. When the computer readable instructions are read by the computer, the computer performs the required method steps. The computer may consist of a self-contained unit, such as a general-purpose desktop computer, laptop, tablet, mobile telephone, smart device (e.g. smart TV), etc. Alternatively, the computer may consist of a distributed computing system having plural different computers connected to each other via a network such as the internet or an intranet.
[0029] In step 101, imaging data is received. The imaging data is derived from a medical imaging process. The imaging data may be provided in the Digital Imaging and Communications in Medicine (DICOM) format. In an embodiment, the medical imaging process comprises a neuroradiological diagnostic method, such as a computed tomography (CT) scan or a magnetic resonance imaging (MRI) scan. The imaging data represents a shape of at least a portion of a skull 2.
[0030] In an embodiment, the target portion 4 contains a tumor to be removed by resection.
[0031] In step 102, the imaging data is used by a computer to generate a first virtual representation of at least a portion of the skull 2 on a display device, for example a three-dimensional perspective visualization of the cranial anatomy of a portion of the skull 2. In an embodiment, the computer processes the imaging data to perform bone segmentation (i.e. to identify voxels in the imaging data that contain bone) to display the cranial anatomy. The computer may additionally process the imaging data to remove non-skull objects, such as equipment used to obtain the imaging data. The computer is programmed to allow a user to provide user input that defines a cutting line 6 (e.g. along the surface of the skull 2) in the first virtual representation. The user may, for example, interact with the displayed first virtual representation using a suitable interface (e.g. a mouse or touch sensitive interface) to define the cutting line 6. An example of a first virtual representation 10 and a cutting line 6 defined by a user are shown in
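The bone segmentation mentioned above can be sketched with a simple intensity threshold over the CT volume. This is a minimal illustration only: the Hounsfield-unit cutoff of 300 and the function name are assumptions for the sake of the example, since the disclosure does not prescribe a particular segmentation algorithm.

```python
import numpy as np

# Hypothetical Hounsfield-unit threshold for bone; real pipelines may use
# more sophisticated segmentation than a single global threshold.
BONE_HU_THRESHOLD = 300

def segment_bone(volume_hu: np.ndarray) -> np.ndarray:
    """Return a boolean mask of voxels identified as containing bone.

    `volume_hu` is an array of CT intensities in Hounsfield units.
    """
    return volume_hu >= BONE_HU_THRESHOLD
```

The resulting mask can then be surface-rendered to produce the three-dimensional visualization of the cranial anatomy with which the user interacts.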
[0032] In step 103, the surgical operation (craniotomy) of cutting through the skull 2 along the defined cutting line 6 is simulated. The simulated cutting through the skull at least partially (optionally fully) disconnects a target portion 4 of the skull 2 from the rest of the skull 2, creating an interface between the target portion 4 and the rest of the skull 2 where disconnection occurs. Output data representing a simulated shape of at least a portion of the skull 2 with the target portion 4 at least partially (optionally fully) disconnected from the rest of the skull 2 is generated (using the simulation). The output data thus defines the shape of an implantation site for a skull prosthesis to be manufactured (i.e. the gap that would be created if the target portion 4 that has been at least partially disconnected is removed from the skull 2 after being fully disconnected). Partial disconnection may comprise simulated drilling of closely spaced holes along the cutting line or simulated cutting along a large proportion of the cutting line but not all of the cutting line. In an embodiment, the simulation is used to modify the first virtual representation to represent a simulated state of the skull 2 after the simulated craniotomy and the output data is generated based on the modified first virtual representation. In an embodiment, the output data comprises a modified version of the imaging data received in step 101, which may be referred to as modified imaging data. The modification may comprise exclusively modifying a subset of the voxels of the received imaging data. In an embodiment, the modified imaging data represents a shape of the skull 2 after the simulated craniotomy. The modified imaging data thus defines the shape of the skull prosthesis to be manufactured (by defining the gap to be filled). The simulation identifies an interface surface within the thickness of the skull 2 that will be exposed by the cutting operation.
The interface surface is an interface between the target portion 4 of the skull 2 when present and the rest of the skull 2. The skull prosthesis should be shaped to have an engagement surface that fits against the interface surface of the skull 2 and an outer surface that is shaped to conform with the nearby geometry of the outer surface of the skull 2 (e.g. to resemble the natural curvature of the skull 2). The skull prosthesis may be deliberately manufactured to be slightly smaller than the hole to be left by the cutting procedure (e.g. by about 1 mm around the peripheral extremity of the skull prosthesis) to ensure that the skull prosthesis can be inserted during the actual surgical operation.
[0033] In an embodiment, the generation of the modified imaging data comprises processing the received imaging data to change only data that represents the target portion 4 of the skull 2 to be disconnected by the simulated craniotomy. For example, voxels identified as containing skull material but falling within the target portion of the skull to be disconnected by the simulated craniotomy may be modified (e.g. by assigning to them voxel values corresponding to air or some other values different from those of skull tissue) so as to be distinguishable from voxels identified as containing skull material that are outside of the target portion. In an embodiment, the format of the modified imaging data output in step 103 is the same as the format of the imaging data received in step 101. For example, the modified imaging data and the received imaging data may both be in the DICOM format. This is desirable because it means the modified imaging data and the received imaging data may both be processed using standard visualization software and navigation systems such as neuronavigation systems.
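The voxel modification described above can be sketched as follows. The air value of −1000 HU and the function name are illustrative assumptions; any value distinguishable from skull tissue would serve.

```python
import numpy as np

AIR_HU = -1000  # assumed voxel value corresponding to air

def mark_target_removed(volume_hu: np.ndarray, target_mask: np.ndarray) -> np.ndarray:
    """Return modified imaging data in which voxels inside the target
    portion are set to an air-like value, leaving all other voxels
    untouched (i.e. only data representing the target portion is changed)."""
    modified = volume_hu.copy()
    modified[target_mask] = AIR_HU
    return modified
```

Because only voxel values change, the modified array can be written back into the original DICOM series unchanged in format, which is what allows standard visualization and navigation software to consume it directly.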
[0034] In an embodiment, the user input defining the cutting line 6 is provided by the user specifying a position of each of a plurality of first reference points 8 defining the cutting line 6. This is illustrated schematically in
[0035] In an embodiment, as depicted schematically in
[0036] In an embodiment, the angle of the simulated cutting defined by the user is greater than 10 degrees, optionally greater than 20 degrees, optionally greater than 30 degrees, optionally greater than 40 degrees, optionally greater than 50 degrees, optionally greater than 60 degrees, for at least a portion of the cutting line 6. In an embodiment, the angle of simulated cutting is defined as zero for at least a portion of the cutting line. In other embodiments, the angle of simulated cutting is defined as zero for all of the cutting line.
[0037] In an embodiment, as depicted schematically in
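The geometric construction of claims 6 and 8 (a cutting direction parallel to the line from a second reference point to a position on the cutting line, with the cutting angle defined as the deviation from the surface normal) can be sketched as follows. The function names and the use of unit vectors are assumptions made for illustration, not part of the disclosure.

```python
import numpy as np

def cutting_direction(second_ref: np.ndarray, line_point: np.ndarray) -> np.ndarray:
    """Unit vector from a second reference point to a position on the
    cutting line; the simulated cut passes through the skull parallel
    to this vector (cf. claim 8)."""
    d = line_point - second_ref
    return d / np.linalg.norm(d)

def cutting_angle_deg(direction: np.ndarray, surface_normal: np.ndarray) -> float:
    """Deviation of the cutting direction from the (unit) surface normal,
    in degrees (cf. claim 6)."""
    n = surface_normal / np.linalg.norm(surface_normal)
    cos_a = np.clip(abs(np.dot(direction, n)), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_a)))
```

Defining the angle relative to different second reference points for different positions along the cutting line (claim 9) then simply amounts to selecting a different `second_ref` for each position.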
[0038] In step 104, the modified imaging data output by step 103 is used to manufacture the skull prosthesis. Where the modified imaging data is in the DICOM format (with a synthetically modified portion corresponding to the simulated craniotomy), for example, any of the various known techniques for manufacturing a skull prosthesis based on DICOM data may be used. In an embodiment, the skull prosthesis is manufactured from a polyaryletherketone such as polyetheretherketone (PEEK) or polyetherketoneketone (PEKK). The manufactured skull prosthesis may thus comprise, consist essentially of, or consist of, a polyaryletherketone such as PEEK or PEKK. Other example materials include acrylics such as PMMA, hydroxyapatite, silicone, ceramics, Cortoss™, and metals such as titanium.
[0040] In step 105, modified imaging data is received. The modified imaging data may be output data provided by step 103 of the method of designing a skull prosthesis described above with reference to
[0041] In step 106, a second virtual representation of at least a portion of the skull is displayed using a navigation system 30. An example of a navigation system 30 is depicted schematically in
[0042] The navigation system 30 may be implemented by providing suitable input to any of the various neuronavigation systems known in the art (e.g. by providing DICOM data allowing generation of the second virtual representation of the skull by the neuronavigation system). Neuronavigation systems are designed to support navigation of surgical instruments within the brain during brain surgery and have not previously been used to mark a cutting line 6 on a surface of a skull for removing a target portion of the skull according to a work flow of the type disclosed herein. However, the inventors have recognised that the functionality developed for brain surgery can support such a work flow with minimal modification, particularly where data in a standard format such as DICOM is used.
[0043] In an embodiment, the second virtual representation is formed using the output data provided by step 103, thereby indicating the cutting line 6 to be followed. In an embodiment, the output data comprises modified imaging data that represents a shape of the skull after the simulated surgical operation has been completed. To take account of possible changes in the subject occurring between the receiving of the imaging data in step 101 and the surgical operation (which may involve delays of weeks to months), an updated version of the imaging data may be obtained, for example by repeating the medical imaging process used to provide the imaging data received in step 101 (e.g. a CT scan), a short time before the surgical operation. In such an embodiment, the second virtual representation may be provided by combining the updated version of the imaging data with the modified imaging data. Thus, updated DICOM data representing the skull just before the surgical operation may be merged with DICOM data representing the same skull after a simulated surgical operation. The resulting second virtual representation thus indicates to an operator in the displayed virtual environment where the cutting line 6 used as the basis for the simulated operation is located. The operator of the navigation system may thus mark a location of a line (e.g. as a series of dots or as a continuous line) on the skull (or on an overlay material or tissue on top of the skull) that follows (e.g. lies on top of) the cutting line 6 as indicated in the virtual environment. The marking may be performed using any of various known instruments for providing visible markings on a surface, for example by staining the surface or depositing a visible substance on the surface, for example using a marker pen or the like, by projecting a pattern of light (e.g. laser light) onto the surface, or by scratching into the surface. The marking may be performed using methylene blue for example. 
The second virtual representation of the skull is in registration with the skull in the real world, so the line marked on the skull will be in the same position relative to the rest of the skull as the cutting line 6 forming the basis for the simulated operation. The inventors have found that the registration and the surgical operation can be achieved with high accuracy. Thus, when a method of replacing a target portion of the skull 2 is performed in practice, including cutting through the skull 2 along the marked line provided using the above method and removing the target portion of the skull 2, the inventors have found that a subsequent step of implanting a skull prosthesis manufactured on the basis of the simulation provides an excellent fit without any manual shaping or adjustment being needed, and without new CT scan data being obtained and sent away to industrial manufacturers to provide the skull prosthesis. A methodology is thus provided which allows a skull prosthesis to be fitted during the same surgical operation as the removal of the target portion of the skull in significantly less time than is currently possible (due to the absence of manual shaping). Additionally, consistently high-quality fitting is provided which makes it easier to ensure high quality aesthetic appearance.
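The combination of an updated scan with the modified imaging data can be sketched as follows. This sketch assumes the two volumes are already registered voxel-for-voxel and that removed bone is encoded as an air value (−1000 HU); both are illustrative assumptions, as the disclosure does not fix a particular encoding or merging procedure.

```python
import numpy as np

AIR_HU = -1000  # assumed voxel value marking bone removed in the simulation

def merge_for_navigation(updated_hu: np.ndarray, modified_hu: np.ndarray) -> np.ndarray:
    """Combine an updated pre-operative scan with the simulated (modified)
    imaging data: voxels that the simulation marked as removed are carried
    over into the updated volume, so the displayed second virtual
    representation shows the simulated craniotomy gap and hence the
    cutting line to be marked."""
    merged = updated_hu.copy()
    removed = modified_hu == AIR_HU
    merged[removed] = AIR_HU
    return merged
```

The merged volume remains ordinary voxel data, so it can be exported in DICOM format and loaded into a standard neuronavigation system unchanged.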
[0044] In the case where, prior to the simulation, user input is received defining an angle of the simulated cutting at one or more positions along the cutting line 6, an angle of the cutting may be controlled by the surgeon for each position along the marked line by reference to the user input defined angle of the simulated cutting (e.g. so as to match the user input defined angle of the simulated cutting as closely as possible).
Specific Examples and Experimental Validation
[0047] To validate both software and operative workflow ahead of clinical implementation, a cadaveric feasibility study was performed to assess accuracy and precision. Methods and results of this study are described below.
Example Methods
[0048] 3D volume-rendering models (3D-VR) of 43 patients treated for skull-infiltrating pathologies between 2013 and 2015 by means of combined resection and large single-stage cranioplastic reconstruction (PMMA≥40 g) were reconstructed based on pre- and postoperative CT scans. Ten representative cases were selected as reference models (patient characteristics detailed in
[0049] The location of an intended craniotomy was defined by interaction with a computer-generated first virtual representation of a skull 2, as described above with reference to step 102 of
[0050] The imaging data provided to step 101 and modified imaging data output from step 103 as output data were provided in the standard DICOM format. The modified imaging data thus appeared to a manufacturer of the skull prostheses as functionally identical to what would have been provided had imaging data been generated by performing a CT scan on a real skull after a real craniotomy had been performed (e.g. as in the work flow of
[0051] The original imaging data (in DICOM format, as received in step 101 of
[0052] Surgical times of the work flow described above were compared with surgical times of traditional, free-hand-molded acrylic techniques in ten clinical reference cases shown in Table 1. Once the postoperative CT scans documented the PEEK implantation results, the specimens were returned to the laboratory, the PEEK alloplastics explanted and the same osteoclastic defect reconstructed with PMMA (Palacos®).
TABLE 1
Ref. | Sex | Age | Histology | Location | Strategy | Material | Total surgical time (min) | Cadaver specimen
01 | f | 49 | meningothel. meningioma | convexity, left frontotemporal | primary | PMMA (90 g) | 350 | 01
02 | f | 47 | meningothel. meningioma | convexity, right frontoparietal | primary | PMMA (40 g) | 190 |
03 | f | 59 | meningothel. meningioma | right frontoparietal | primary | PMMA (50 g) | 175 | 02
04 | m | 54 | metastasis of esophageal carcinoma | left occipital | primary | PMMA (40 g) | 200 |
05 | f | 55 | meningothel. meningioma | left frontal | primary | PMMA (40 g) | 230 | 03
06 | f | 22 | fibrous dysplasia | right petroso-mastoideal | primary | PMMA (120 g) | 290 |
07 | f | 59 | transitional meningioma | superior sagittal sinus | primary | PMMA (120 g) | 475 | 04
08 | m | 61 | esthesioneuroblastoma | bifrontal | primary and secondary (after PMMA infection) | 1. PMMA (80 g); 2. PEEK | 315 |
09 | f | 73 | metastasis of squamous cell carcinoma | bilateral parieto-occipital | primary | PMMA (80 g) | 145 | 05
10 | f | 60 | microcytic meningioma | right frontal | primary | PMMA (40 g) | 180 |
[0054] The performance of methods according to the present disclosure was evaluated through comparison of pre- and postoperative imaging data. The pre-operative imaging data was obtained using a Siemens Emotion 16 CT scanner; scans were acquired at 130 kV (peak) and a mean X-ray tube current of 98 mA, with a field of view of 25 cm and a slice thickness of 0.6 mm. The post-operative imaging data was obtained using a Siemens Somatom Sensation 64 CT scanner; scans were acquired at 120 kV and a mean X-ray tube current of 288 mA, with a field of view of 23 cm and a slice thickness of 0.6 mm. Since the outline of the bone flap on the outside of the skull is the most descriptive geometrical feature of a craniotomy, these outlines were compared to determine the closeness of the match between the virtual and real craniotomy. The most relevant comparison metrics for two such outlines are their location, size, and shape, where the first describes the accuracy of the method and the latter two its precision. A special-purpose software program was developed as a MATLAB script (MathWorks Inc., Natick, Mass., US) to compute different distance measurements between bone flap outlines. The input data was obtained by marking 3D points (ROIs) on the outline of the craniotomy in both the pre- and postoperative imaging data with the help of a conventional DICOM viewer. Operating solely on these points, the program performs several steps: (i) the points are reordered along a one-dimensional outline curve, which (ii) is fitted with a continuous curve, from which (iii) a dense set of points on the outline is generated. Subsequently, (iv) the dense point sets of the pre- and postoperative data are matched to each other via a rigid transformation (i.e., a rotation and translation). Finally, (v) the closest preoperative point is determined for each postoperative point on the outline, and the respective distances are collected into a histogram from which the mean and standard deviation are computed.
The transformation that is the output of step (iv) determines the accuracy since it describes the alignment between the virtual and real craniotomy. The precision is given by the output of step (v), since the mean and standard deviation describe how closely the shapes of the virtual and real craniotomies match when they are overlaid.
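Steps (i)-(v) can be sketched as follows. For brevity, this sketch assumes the two dense point sets are already in point-to-point correspondence and uses the Kabsch algorithm for the rigid match of step (iv); the actual MATLAB script may use a different matching procedure (e.g. iterative closest point). The accuracy is taken here as the magnitude of the translation between the outline centroids, and the precision as the mean and standard deviation of the closest-point distances of step (v).

```python
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Optimal rotation aligning centered point set P onto centered
    point set Q (both N x 3), computed via SVD (Kabsch algorithm)."""
    H = P.T @ Q
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def compare_outlines(pre_pts: np.ndarray, post_pts: np.ndarray):
    """Rigidly match the pre- and postoperative outline point sets
    (step iv) and collect closest-point distances (step v).
    Returns (global_offset, mean_distance, std_distance)."""
    cp, cq = pre_pts.mean(axis=0), post_pts.mean(axis=0)
    R = kabsch(pre_pts - cp, post_pts - cq)
    aligned_pre = (pre_pts - cp) @ R.T + cq
    # Accuracy: translation between the two outlines.
    global_offset = float(np.linalg.norm(cq - cp))
    # Precision: distance from each postoperative point to the closest
    # aligned preoperative point.
    diffs = post_pts[:, None, :] - aligned_pre[None, :, :]
    dmin = np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)
    return global_offset, float(dmin.mean()), float(dmin.std())
```

With perfectly matching outlines the mean and standard deviation vanish, so nonzero values directly quantify how far the real craniotomy shape departs from the simulated one.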
[0054] Statistical calculations included descriptive analyses (means±standard deviations). Differences between groups were evaluated by the Mann-Whitney U test and by the Wilcoxon test for paired samples. Two-sided p values below 0.05 were considered statistically significant. SPSS 23.0 software (SPSS Inc., Chicago, Ill., USA) was used for data administration and statistical calculations.
Example Results
[0055] The shape and size of the virtual approaches corresponded to the clinical reference cases as outlined in
[0056] The CAD-generated imaging data were compatible with the neuronavigation system used in all ten experimental cases. The point-merged anatomic/surface-merged registration and methylene blue marking of skin incisions lasted 2.2±0.7 min (range 1.3-3.2 min). The measured time for image-guided marking of the craniotomy delineation on the bone surface averaged 3.1±1.3 min (range 1.4-5.5 min). All ten prefabricated PEEK implants were successfully implanted, with a mean implantation time of 4.2±2.1 min (range 1.4-8.5 min). Osteosynthesis of the first four implants was performed with sutures, as the titanium microfixation materials only became available at a later stage of the experiment (cases 05-10). The more representative implantation times with microplate osteosynthesis, as performed for the last six cases, therefore averaged 3.0±1.2 min (range 1.4-4.5 min). These time measurements included minor adjustments to the craniotomy edges to optimize the fitting of the implants in 7 cases, with an average correction time of 1.4±0.9 min (range 0.3-2.6 min). Major corrections, such as recraniotomy, were not required.
[0057] For the sake of comparability, the same defects were reconstructed with PMMA. The implantation time of the hand-molded grafts averaged 31.1±3.8 min (range 24.4-35.8 min). Here, the most time-consuming step of the reconstruction was the PMMA polymerization phase, lasting up to 16.0 min.
[0058] Time differences became particularly apparent when both techniques were directly compared. The methodology of embodiments of the present disclosure resulted in significantly shorter reconstruction times (p=0.005, Wilcoxon test for paired samples per cranioplasty; p<0.001, overall comparison of both groups by the Mann-Whitney U test) than the traditional PMMA technique, with an average time saving of 26.8±2.3 min.
[0059] The navigational accuracy of the performed craniotomy and the surgical precision (the degree of matching between the shapes of the virtual and real craniotomies when virtually overlaid) were evaluated independently. The surgical precision is reflected in a mean local distance between the virtual and real craniotomy of 1.1±0.29 mm (range 0.7-1.6 mm). Submillimetric precision was achieved in 50% of the cadaveric cases.
[0060] The assessment of the global offset between virtual and actual craniotomy—as a measure of the navigational accuracy—revealed an average shift of 4.5±3.6 mm (range 1.1-13.1 mm). There were shift differences in relation to the implantation sites or head positioning. While the six cases operated upon in a “standard” supine position (cases 01/02/03/05/08/10) demonstrated more accurate results, with a mean global offset of 3.0±1.1 mm (range 1.5-4.7 mm), the four cases operated on in a prone position (cases 04/06/07/09) fared slightly worse, with a mean global offset of 6.2±5.2 mm (range 1.1-13.1 mm). Although these location-related observations did not reach statistical significance (p=0.173), the results seem to demonstrate a trend towards a reduced navigational accuracy for (sub)occipitally located implantation sites.