Computer-implemented technique for determining a coordinate transformation for surgical navigation
09901407 · 2018-02-27
Assignee
Inventors
CPC classification
A61B2090/367 (HUMAN NECESSITIES)
A61B34/20 (HUMAN NECESSITIES)
A61B2090/364 (HUMAN NECESSITIES)
International classification
Abstract
A technique for determining a transformation between a navigation reference coordinate system (302) for navigation of a surgical device (150) relative to patient image data and an image coordinate system (304) in which the patient image data define a shape of a patient surface is provided. A computer-implemented method implementation of that technique comprises receiving multiple picture data sets that have been taken from different perspectives of the patient surface. Feature coordinates of multiple features (170) identifiable in the picture data sets are determined from the picture data sets and in the navigation reference coordinate system (302). From the feature coordinates, a shape model of the patient surface in the navigation reference coordinate system (302) is determined. Surface matching between the shape model and the shape of the patient surface defined by the patient image data is then applied to determine the transformation (T1) between the navigation reference coordinate system (302) and the image coordinate system (304).
Claims
1. A system for determining a transformation between a navigation reference coordinate system for navigation of a surgical device relative to patient image data and an image coordinate system in which the patient image data define a shape of a patient surface of a patient, the system comprising: a first camera movable relative to the patient upon taking picture data sets from different perspectives; an interface adapted to receive the picture data sets from the first camera, wherein the first camera is a video camera and the picture data sets are received from the video camera in the form of a video data stream; and a processor adapted to determine, from the picture data sets and in the navigation reference coordinate system, feature coordinates of multiple features identifiable in the picture data sets, to determine, from the feature coordinates, a shape model of the patient surface in the navigation reference coordinate system, and to determine a transformation between the navigation reference coordinate system and the image coordinate system using surface matching between the shape model and the shape of the patient surface defined by the patient image data.
2. A device for determining a transformation between a navigation reference coordinate system for navigation of a surgical device relative to patient image data and an image coordinate system in which the patient image data define a shape of a patient surface, the device comprising: an interface adapted to receive multiple picture data sets from a first camera movable relative to the patient upon taking the picture data sets from different perspectives of the patient surface, wherein the first camera is a video camera and the picture data sets are received from the video camera in the form of a video data stream; and a processor adapted to determine, from the picture data sets and in the navigation reference coordinate system, feature coordinates of multiple features identifiable in the picture data sets, to determine, from the feature coordinates, a shape model of the patient surface in the navigation reference coordinate system, and to determine a transformation between the navigation reference coordinate system and the image coordinate system using surface matching between the shape model and the shape of the patient surface defined by the patient image data.
3. A method of determining a transformation between a navigation reference coordinate system for navigation of a surgical device relative to patient image data and an image coordinate system in which the patient image data define a shape of a patient surface, the method comprising: receiving multiple picture data sets from a first camera movable relative to the patient upon taking the picture data sets from different perspectives of the patient surface, wherein the first camera is a video camera and the picture data sets are received from the video camera in the form of a video data stream; determining, from the picture data sets and in the navigation reference coordinate system, feature coordinates of multiple features identifiable in the picture data sets; determining, from the feature coordinates, a shape model of the patient surface in the navigation reference coordinate system; and determining a transformation between the navigation reference coordinate system and the image coordinate system using surface matching between the shape model and the shape of the patient surface defined by the patient image data.
4. The method of claim 3, wherein the first camera is at least one of a handheld camera and attachable to the surgical device.
5. The method of claim 3, wherein several of the features identifiable in at least one picture data set are grouped to form a feature group, wherein at least one of a position and orientation is attributable to each feature group.
6. The method of claim 3, wherein at least one of the feature coordinates and the shape model is determined using one or more of a structure-from-motion technique, a simultaneous localization and mapping technique, and a pose estimation technique.
7. The method of claim 6, wherein at least one of the feature coordinates and the shape model is determined using a simultaneous localization and mapping technique; and wherein the simultaneous localization and mapping technique is applied to the feature groups.
8. The method of claim 7, wherein the structure-from-motion technique builds feature tracks for individual features identifiable in the picture data sets from different perspectives and triangulation based on different perspectives is applied to individual feature tracks.
9. The method of claim 3, wherein the shape model is represented by a point cloud.
10. The method of claim 3, further comprising determining the navigation reference coordinate system on the basis of at least some of the features identified in the picture data sets.
11. The method of claim 3, wherein the feature coordinates are determined for one or more tracker features of a patient tracking device for use during surgical navigation, wherein the patient tracking device is at least partially identifiable in the picture data sets and has a fixed position relative to the patient.
12. The method of claim 3, wherein the feature coordinates are determined for one or more tracker features of a patient tracking device for use during surgical navigation, wherein the patient tracking device is at least partially identifiable in the picture data sets and has a fixed position relative to the patient; and wherein the tracker features at least partially define the navigation reference coordinate system.
13. The method of claim 3, wherein the feature coordinates are determined for one or more anatomic patient features identifiable in the picture data sets.
14. The method of claim 13, further comprising identifying the one or more anatomic patient features in the picture data sets using generic knowledge about anatomic features.
15. The method of claim 13, wherein several of the features identifiable in at least one picture data set are grouped to form a feature group, wherein at least one of a position and orientation is attributable to each feature group; wherein at least one of the feature coordinates and the shape model is determined using a simultaneous localization and mapping technique; wherein simultaneous localization and mapping is applied to the feature groups; and wherein the navigation reference coordinate system is at least partially determined from the anatomic patient features.
16. The method of claim 13, wherein the shape model is at least partially determined from the anatomic patient features.
17. The method of claim 3, wherein the feature coordinates are determined for one or more patch features of a feature patch applied to the patient and at least partially identifiable in the picture data sets.
18. The method of claim 17, wherein the method further comprises determining the navigation reference coordinate system on the basis of at least some of the features identified in the picture data sets; and wherein the navigation reference coordinate system is at least partially determined from the patch features.
19. The method of claim 17, wherein the feature patch conforms to the patient surface and wherein the shape model is at least partially determined from the patch features.
20. The method of claim 3, further comprising deriving a scaling factor from the surface matching, and wherein the navigation reference coordinate system is determined also from the scaling factor.
21. The method of claim 3, wherein in the picture data sets scaling features of a scaling reference are identifiable, and wherein the navigation reference coordinate system is determined also from a scaling factor derived from the scaling features.
22. The method of claim 3, further comprising tracking or calculating, during navigation, a position of the surgical device, or a portion thereof, relative to the navigation reference coordinate system, which has been determined from one or more of the features.
23. The method of claim 22, wherein the tracking or calculating is performed based on at least one of one or more patient features and one or more tracker features of the patient tracking device, wherein the patient tracking device is different from a feature patch applied to the patient.
24. The method of claim 22, further comprising visualizing the surgical device or a portion thereof relative to the patient image, wherein the visualization is adapted in accordance with the tracking or calculating.
25. The method of claim 22, wherein the picture data sets are received from a first camera and wherein the tracking or calculating is performed based on picture information provided by a second camera different from the first camera.
26. The method of claim 25, wherein the second camera is maintained at an essentially fixed location in an operating room during surgery.
27. The method of claim 22, wherein the picture data sets are received from a first camera and wherein the tracking or calculating is also performed based on the picture data sets received from the first camera.
28. The method of claim 3, wherein two or more of the features identifiable in the picture data sets are coded according to a pre-defined coding scheme so as to be differentiable from each other in the picture data sets.
29. The method of claim 3, further comprising identifying one or more of the features in the picture data sets based on pattern recognition.
30. The method of claim 3, further comprising receiving the patient image data, the patient image data being provided in the image coordinate system; and extracting the shape of the patient surface from the patient image data.
31. The method of claim 30, wherein the patient image data do not show any registration marker.
32. The method of claim 30, wherein the patient image data are generated pre-operatively.
33. The method of claim 3, wherein the transformation is determined prior to navigation and additionally one or more times during navigation to verify or correct the transformation determined prior to navigation.
34. The method of claim 33, wherein the transformation is determined anew based on each picture data set received during navigation.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Further aspects, details and advantages of the present disclosure will become apparent from the following description of exemplary embodiments in conjunction with the accompanying drawings, wherein:
DETAILED DESCRIPTION
(9) In the following description of exemplary embodiments, for purposes of explanation and not limitation, specific details are set forth, such as particular methods, functions and procedures, in order to provide a thorough understanding of the technique presented herein. It will be apparent to one skilled in the art that this technique may be practiced in other embodiments that depart from these specific details. For example, while the following embodiments will primarily be described on the basis of registration and navigation scenarios pertaining to ENT (ear, nose, throat) surgery and neurosurgery, it will be evident that the technique presented herein could also be implemented with respect to other regions of a patient's body, for example for spinal surgery.
(10) Moreover, those skilled in the art will appreciate that the methods, functions and steps explained herein may be implemented using software functioning in conjunction with a programmed microprocessor, an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP) or a general purpose computer. It will also be appreciated that while the following embodiments will primarily be described in the context of methods, systems and devices, the present disclosure may also be embodied in a computer program product which can be loaded to run on a computing device or a distributed computer system comprising one or more processors and one or more memories functioning as storage, wherein the one or more memories are configured to store one or more computer programs that control the one or more processors to perform the methods, functions and steps disclosed herein.
(12) The system 100 may also comprise at least one user-operable input device such as one or more buttons, a keyboard, a mouse or a trackball (not shown) for generating (or triggering the generation of) user interaction signals. The user interaction signals may control the operation of the system 100. The input device and the display device 120 may be integrated into a touchscreen. The touchscreen, in turn, may be part of a tablet computer.
(13) The system 100 further includes a surgical device 150 (e.g., a surgical tool) for use in a surgical procedure. As understood herein, diagnostic and therapeutic treatments of a patient are also regarded as surgical procedures. The surgical device 150 may comprise the input device (e.g., in the form of one or more buttons).
(14) The surgical device 150 can be a free-hand operable device or a guided device. In the latter case, the surgical device 150 may be operated by a surgical robot (e.g., fully automatically or semi-automatically). In other variants, a mechanical guidance may be present that constrains a movement of the surgical device 150 by a surgeon. In some of the following embodiments, the surgical device 150 is configured as a biopsy needle or an endoscope.
(15) The display device 120 is configured to visualize patient image data. The patient image data have been taken by the imaging device 140 prior to or during the surgical procedure. The display device 120 is further configured to visualize computer-assisted guidance for navigating the surgical device 150 relative to the patient. Such visualization may include superimposing the current position (optionally including the orientation) of the surgical device 150 or a portion thereof on a patient image derived from the image data. It should be noted that such guidance could additionally, or alternatively, be provided via acoustic or haptic feedback.
(17) In one variant, the camera 160 is rigidly mounted to the surgical device 150 such that the camera 160 can be moved together with the surgical device 150. In another variant, the camera 160 can be operated independently from the surgical device 150. In such a variant, the camera 160 may be incorporated in a smartphone, tablet computer or any other mobile user equipment.
(18) Optionally, at least one further camera 160A may be provided. In one implementation, the further camera 160A is rigidly mounted to the surgical device 150 to be used for tracking during surgical navigation (e.g., as described in US 2008/0208041 A1), whereas the other camera 160 can be manipulated independently from the surgical device 150 in connection with a registration procedure in which the coordinate system transformation is determined as described herein. In another implementation, the camera 160 is rigidly mounted to the surgical device 150 and used for both registration and navigation (i.e., tracking) purposes. In a further implementation, both cameras 160, 160A are mounted to the surgical device 150, wherein the camera 160 is used for registration purposes and the camera 160A is used for guided navigation purposes. In a still further implementation, the camera 160A is used for tracking during surgical navigation and attached to an operating room wall, an operating room light or a cart (not shown).
(19) When mounted to the surgical device 150, any of the cameras 160, 160A may have a field of view that includes a patient surface targeted at by the surgical device 150. As an example, when the surgical device 150 has a longitudinal axis in use directed towards the patient, the field of view may extend along the longitudinal axis of the surgical device 150.
(20) The feature set 170 comprises multiple features that are identifiable at least in the picture data sets taken by the camera 160 (and optionally, the camera 160A). For such identification purposes, pattern recognition capabilities can be provided by the computing device 110. In this regard, the system 100 may or may not have a priori knowledge of the arrangement, coding or other characteristics of the features to be detected. One or more of the features may be active markings (e.g., emitting radiation to be detected by the camera 160). Additionally, or in the alternative, one or more of the features may be passive markings. Passive markings may have reflecting or non-reflecting properties. Passive markings may be realized (e.g., by printing) on any rigid (e.g., planar) or flexible substrate, such as any of the patient and tool tracking devices presented herein, or be painted on the patient's skin. One or more of the features may also be realized by characteristic anatomic patient features that can, but need not, comprise any additional marking.
(22) As to the anatomic features, the system 100 will generally have no dedicated a priori knowledge, but may use generic models to identify them. Examples of (typically two-dimensional) anatomic skin features include freckles, birth marks and pores. Other (typically three-dimensional) anatomic features include, for example, the patient's eyes or the tip of the nose.
(24) The internal storage 116 or the external storage 130, or both, may be configured to store image data of a patient image taken by the imaging device 140. Alternatively, or in addition, such image data may also be received (e.g., downloaded) via the computer network 180. The external storage 130 may, for example, at least partially be realized in the imaging device 140 for being read by the computing device 110.
(25) Moreover, the internal storage 116 or the external storage 130, or both, may be configured to store various items of calibration data. Such calibration data constitute a priori knowledge of the system 100, and various calibration data examples will be described below in more detail. As will be appreciated, the a priori knowledge of the system 100 may alternatively, or in addition, comprise other items of information.
(26) The internal storage 116 or the external storage 130, or both, may additionally be configured to store picture data sets received from the camera 160 and, if present, from the camera 160A. As mentioned above, those picture data sets may be received in the form of a video data stream that is at least temporarily stored for being processed by the processor 114. Such processing may, for example, include pattern recognition to identify (e.g., locate and, optionally, decode) one or more of the features in the received picture data sets.
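Purely as an illustration of such pattern recognition, coded features of the kind described herein can be located and decoded with standard marker-detection routines; the sketch below uses OpenCV's ArUco module. The dictionary choice, the function name and the assumption that the features are ArUco-style markers are illustrative only, and the ArUco API differs slightly between OpenCV versions.

    import cv2

    def identify_features(frame):
        """Locate and decode coded features in one picture data set (e.g., a
        single video frame) using ArUco marker detection."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        corners, ids, _rejected = cv2.aruco.detectMarkers(gray, aruco_dict)
        features = {}
        if ids is not None:
            for marker_corners, marker_id in zip(corners, ids.flatten()):
                # Use the marker centre as the two-dimensional feature key point.
                features[int(marker_id)] = marker_corners[0].mean(axis=0)
        return features   # {feature code: (u, v) pixel coordinates}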
(27) In the following, exemplary modes of operation of the system 100 will be described in more detail.
(30) As illustrated in the flow diagram 200, the method embodiment comprises a first step 202 in which the computing device 110 receives, via the interface 112, multiple picture data sets from one of the camera 160 and the camera 160A.
(31) In a following step 204, the processor 114 processes the picture data sets in the storage 116. Using pattern recognition technologies, the processor 114 first identifies (e.g., locates) multiple features in the picture data sets and determines their coordinates (e.g., in the form of their key point coordinates) in a navigation reference coordinate system. In this regard, the processor 114 may also determine the navigation reference coordinate system based on a plurality of the identified features. The processor 114 may have a priori knowledge of the particular features in the picture data sets that span the navigation reference coordinate system, or may simply designate, or select, suitable ones of the identified features to span the navigation reference system.
(32) In a further step 206, the processor 114 determines, from the feature coordinates, a shape model of the patient surface in the navigation reference coordinate system. The shape model may be represented by a point cloud defined by the feature coordinates of features supposed to lie on the patient's skin. The point cloud defining the shape model may typically comprise more than 30 points and may, in certain implementations, comprise several hundred points.
(33) Then, in step 208, the processor 114 determines a transformation (i.e., a set of transformation parameters) between the navigation reference coordinate system and the image coordinate system. That transformation is determined by surface matching between the shape model (e.g., the surface point cloud) determined in step 206 on the one hand and, on the other hand, the shape of the patient surface defined by the patient image data acquired by the imaging device 140. For this purpose, the processor 114 may, in a preceding or parallel step not shown in the flow diagram 200, extract the shape of the patient surface from the patient image data.
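By way of illustration only, the surface matching of step 208 may be realized with a basic point-to-point Iterative Closest Points loop as sketched below in Python. The sketch assumes the shape model and the image-derived surface are each available as N×3 arrays of points; the function and variable names, the fixed iteration count and the use of NumPy/SciPy are illustrative assumptions rather than requirements of the technique.

    import numpy as np
    from scipy.spatial import cKDTree

    def surface_match(model_points, image_surface_points, iterations=50):
        """Minimal point-to-point ICP sketch: estimate the rigid 4x4 transform
        that maps the shape model (navigation reference coordinate system)
        onto the patient surface (image coordinate system)."""
        src = np.asarray(model_points, dtype=float)
        dst = np.asarray(image_surface_points, dtype=float)
        tree = cKDTree(dst)
        T1 = np.eye(4)
        for _ in range(iterations):
            moved = src @ T1[:3, :3].T + T1[:3, 3]
            _, idx = tree.query(moved)            # closest-point correspondences
            matched = dst[idx]
            # Kabsch step: best rigid transform for the current correspondences.
            mu_s, mu_d = moved.mean(axis=0), matched.mean(axis=0)
            H = (moved - mu_s).T @ (matched - mu_d)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:              # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_d - R @ mu_s
            step = np.eye(4)
            step[:3, :3], step[:3, 3] = R, t
            T1 = step @ T1
        return T1

The returned 4×4 matrix plays the role of the transformation T1 between the navigation reference coordinate system 302 and the image coordinate system 304.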
(38) The transformation parameters of the second transformation T2 may be stored as calibration data (e.g., in the internal storage 116 of the computing device 110).
(39) The transformation parameters underlying a particular projection model may be provided by the respective camera manufacturer or by a distributor of the system 100. They could also be estimated with an on-site calibration fixture or be standardized for a particular camera type. In certain implementations, the transformation parameters may be provided via a suitable interface by the respective camera 160, 160A itself (e.g., in real-time dependent on a currently selected zoom level).
(40) Also provided as calibration data, for example in the internal storage 116 of the computing device 110, are the feature coordinates M_j,cal of (e.g., key points of) the features relative to the navigation reference coordinate system 302, which are used in the back-projection described next.
(41) The transformation parameters of the third transformation T3 for the camera 160 are calculated by solving the following equation system for each individual feature j:
M_j,160 = T4 · T3^-1 · M_j,cal,
wherein M_j,160 is the imaged feature j in a picture of the picture data set (e.g., a video frame) of the camera 160 with coordinates relative to its image coordinate system, M_j,cal is provided as calibration data and indicative of (e.g., a key point of) the feature j with coordinates relative to the navigation reference coordinate system 302, and a fourth transformation T4 designates the transformation parameters between the camera 160 and its associated image coordinate system.
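Merely as a numerical illustration, the relation above can be evaluated in the forward direction as sketched below; solving it for T3 over several features amounts to the camera pose estimation mentioned in the next paragraph. The sketch assumes that T3 is given as a 4×4 homogeneous transform from the camera coordinate system to the navigation reference coordinate system 302 and that T4 is a simple pinhole projection with an intrinsic matrix K; all numeric values are placeholders and not calibration data of any actual camera.

    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],    # placeholder intrinsics standing in for T4
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    def back_project(M_cal, T3):
        """Evaluate M_j,160 = T4 . T3^-1 . M_j,cal for one feature key point.
        M_cal: feature key point in the navigation reference coordinate system.
        T3:    4x4 transform, camera coordinate system -> navigation reference."""
        M_cam = np.linalg.inv(T3) @ np.append(M_cal, 1.0)   # into camera coordinates
        u, v, w = K @ M_cam[:3]                             # pinhole projection (T4)
        return np.array([u / w, v / w])                     # pixel coordinates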
(42) In a similar manner, the transformation parameters of transformation T3A can be calculated for the tracking camera 160A. It should be noted that the perspective back-projection described above is sometimes also referred to as camera pose estimation, or performed in connection with camera pose estimation.
(49) The substrate of the feature patch 330 is flexible, so that the feature patch 330 conforms to the patient surface when attached.
(50) The relative positions of individual features as well as their coding scheme (which allows individual features to be differentiated) may be stored as calibration data (i.e., might be known a priori). Also the distance of an individual feature (or feature key point) to the skin of the patient (i.e., the thickness of the feature patch 330) might be stored as calibration data.
(52) In the present embodiment, the navigation reference coordinate system 302 is defined, or spanned, by features (in the form of combinations of black and white areas) provided on the two-dimensional surface of the patient tracking device 320.
(53) Generally, the above statements regarding the features of the feature patch 330 also apply to the features of the patient tracking device 320.
(54) In the following, further embodiments for determining a transformation between the navigation reference coordinate system 302 and the image coordinate system 304 will be described. Those embodiments are, with certain modifications that will be discussed in more detail, derived from the general scenario described above.
(55) In preparation for the following embodiments, the concept of feature groups, also called mini trackers herein, and certain computer vision concepts such as pose estimation, SLAM (Simultaneous Localization And Mapping) and SfM (Structure from Motion) will be discussed.
(57) The feature group defining the mini tracker permits associating a well-defined point in space (and, optionally, orientation) with the mini tracker. This can be illustrated by a local coordinate system assigned to the mini tracker.
(60) The transformations T11 and T10 of the surface points relative to the camera coordinate system 306 can be determined by pose estimation for the associated mini trackers.
(61) SLAM in the present realization models the shape of the patient surface to be reconstructed with local planar areas (defined by the mini trackers), so that the patient surface can locally be reconstructed using planar pose estimation. The pose may in certain embodiments be calculated, for example estimated, relative to a patient tracking device such as the patient tracking device 320.
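For illustration, the pose of a mini tracker (or of the patient tracking device 320) relative to the camera coordinate system 306 may be estimated from at least four imaged feature key points and their known model coordinates, for instance with OpenCV's solvePnP as sketched below; the function name estimate_pose, the intrinsic matrix K and the zero distortion coefficients are illustrative assumptions.

    import cv2
    import numpy as np

    def estimate_pose(object_points, image_points, K, dist_coeffs=None):
        """Estimate the pose of a feature group (mini tracker) relative to the
        camera from >= 4 feature key points.
        object_points: (N, 3) key points in the mini-tracker coordinate system.
        image_points:  (N, 2) corresponding pixel coordinates in one picture data set."""
        if dist_coeffs is None:
            dist_coeffs = np.zeros(5)
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(object_points, dtype=np.float64),
            np.asarray(image_points, dtype=np.float64),
            K, dist_coeffs)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)          # rotation vector -> rotation matrix
        T = np.eye(4)                       # camera <- mini-tracker transform
        T[:3, :3], T[:3, 3] = R, tvec.ravel()
        return T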
(62) As an alternative to SLAM, a combination of pose estimation and SfM technologies may be used to derive the feature coordinates for the shape model (steps 204 and 206 of the flow diagram 200).
(63) Then, SfM is applied to derive the three-dimensional feature coordinates (i.e., to reconstruct the patient surface and generate the shape model). SfM builds two-dimensional feature tracks for individual features as the registration camera 160 is moved relative to the patient. From the feature tracks, the feature coordinates are derived in the navigation reference coordinate system (e.g., in the coordinate system 302 of the patient tracking device 320). In this regard, the pose of the registration camera 160 relative to the patient tracking device 320 may be exploited for the picture data sets from which the feature tracks are built. The feature tracks are thus used for three-dimensional feature coordinate reconstruction.
(64) Triangulation and, optionally, bundle adjustment can be applied for the three-dimensional feature coordinate reconstruction and shape model generation. In one variant, triangulation determines for each feature track the two picture data sets (e.g., video frames) with the greatest angular distance in camera poses (e.g., relative to the patient tracking device 320). The two-dimensional feature information is then derived from those two picture data sets to get an initial three-dimensional reconstruction of the feature coordinates in the navigation reference coordinate system. Then, the initial reconstructions for all feature tracks, together with many or all of the associated picture data sets (and associated camera poses), are used to perform a bundle adjustment. Bundle adjustment is an optimization procedure to reduce the reprojection error. Also in the present case, the resulting shape model is represented by a point cloud of three-dimensional feature coordinates.
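The triangulation of one feature track from the two selected picture data sets may, purely as an example, use a standard linear (DLT) triangulation as sketched below; P1 and P2 denote 3×4 camera projection matrices assumed to combine the intrinsic calibration with the estimated camera poses for the two frames (these names and the approach are an illustrative assumption, not the prescribed procedure).

    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Linear (DLT) triangulation of one feature observed at pixel positions
        uv1 and uv2 in two picture data sets with projection matrices P1 and P2.
        Returns the three-dimensional feature coordinates in the coordinate
        system in which P1 and P2 are expressed (e.g., the navigation reference
        coordinate system 302)."""
        A = np.vstack([
            uv1[0] * P1[2] - P1[0],
            uv1[1] * P1[2] - P1[1],
            uv2[0] * P2[2] - P2[0],
            uv2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                 # de-homogenise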
(65) It will be appreciated that SfM can also be performed without explicit camera pose estimation relative to a patient tracking device. The respective camera pose may in such a case be estimated and iteratively optimized. A related process is described by Klein et al., Parallel Tracking and Mapping for Small AR Workspaces, Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Pages 1 to 10, 13-16 Nov. 2007.
(66) Based on the above explanations of the tracker concept, pose estimation, SLAM and SfM, more detailed embodiments will now be described.
(69) In an initial step 502, the patient region of interest (i.e., the region that is to be surgically treated) is scanned pre- or intra-operatively. As mentioned above, no specific fiducials or any other markers need to be attached to the patient's anatomy. The resulting patient image data, typically a volume data set, is imported into the computing device 110.
(70) In a next step 504, the shape of a patient surface of interest is extracted from the image data. The extracted shape representation may describe the skin surface of the anatomic region of interest of the patient.
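As one purely illustrative way of carrying out step 504, an iso-surface can be extracted from a CT-like volume data set with a marching-cubes routine, as sketched below; the iso-value, the voxel spacing and the function name are assumptions made for the example only.

    import numpy as np
    from skimage import measure

    def extract_skin_surface(volume, iso_value=-300.0, spacing=(1.0, 1.0, 1.0)):
        """Extract a skin-like iso-surface from a volume data set.
        Returns the surface vertices (in the image coordinate system 304,
        assuming 'spacing' gives the voxel size) and the triangle faces."""
        verts, faces, normals, values = measure.marching_cubes(
            np.asarray(volume, dtype=float), level=iso_value, spacing=spacing)
        return verts, faces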
(71) It will be appreciated that steps 502 and 504 can be performed days or even weeks prior to surgery. In certain cases, both steps could also be performed during (i.e., concurrently with) a surgical treatment.
(72) Immediately before a surgical treatment, the skin mask-type feature patch 330 with the coded features thereon is attached to the skin surface of the patient (step 506). As explained above, the coded features are grouped to form (likewise coded) mini trackers. Due to the adhesive on the side of the feature patch 330 facing the patient, the attached feature patch 330 will conform to the patient surface.
(73) At the same time, the patient tracking device 320 is attached to the patient (step 508). The patient tracking device is attached such that it can be guaranteed that it will not move relative to the patient's anatomy during registration and navigation.
(74) The actual registration procedure is started in step 510 with recording, in the internal storage 116 of the computing device 110, a video data stream taken by the registration camera 160 from multiple perspectives of the patient surface carrying the feature patch 330 and the patient tracking device 320.
(75) Then, in step 512, for each picture data set in which at least four robust features of the patient tracking device 320 can be identified (e.g., detected), the position of each mini tracker on the feature patch 330 that can also be identified in that picture data set is determined in three dimensions relative to the patient tracking device 320 in the navigation reference coordinate system 302 (e.g., using pose estimation as discussed above).
(76) As such, step 512 includes estimating the position and orientation (i.e., the pose) of the patient tracking device 320 and the positions of the mini trackers relative to the registration camera coordinate system 306 (in a similar manner as discussed above).
(77) Consequently, by processing the video data stream received from the registration camera 160, the feature coordinates of multiple mini trackers (i.e., of the associated surface points) are determined in the navigation reference coordinate system 302 and collected in a point cloud.
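Conceptually, each mini-tracker position found in the registration camera coordinate system 306 is re-expressed in the navigation reference coordinate system 302 by chaining it with the pose of the patient tracking device 320 estimated for the same picture data set; a minimal sketch with illustrative names follows.

    import numpy as np

    def to_navigation_reference(T_cam_from_tracker, p_cam):
        """Re-express a mini-tracker position p_cam (registration camera
        coordinate system 306) in the navigation reference coordinate system 302
        spanned by the patient tracking device 320.
        T_cam_from_tracker: 4x4 pose of the tracking device relative to the camera."""
        T_tracker_from_cam = np.linalg.inv(T_cam_from_tracker)
        return (T_tracker_from_cam @ np.append(p_cam, 1.0))[:3]

    # Accumulating such positions over many picture data sets yields the point
    # cloud that represents the shape model of the patient surface.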
(78) One or more of the above calculations can be done while recording the video data stream and providing visual or other feedback to a user operating the registration camera 160. Such feedback may comprise one or more of a rendering of the video data stream acquired by the registration camera 160 on the display device 120, information pertaining to whether or not the patient tracking device 320 can be recognized in the picture data sets, and information pertaining to the status of individual features of the feature patch 330 (e.g., one or more of a detection status, quality information pertaining to the estimated position of the feature, etc.).
(79) After the point cloud indicative of the feature coordinates for the feature patch 330 has been determined in step 512, the method proceeds to step 514. In step 514, surface matching is performed to match the point cloud of feature coordinates in the navigation reference coordinate system 302 as derived in step 512 to the patient surface in the image coordinate system 304 as extracted from the image data in step 504. Surface matching can be performed using Iterative Closest Points (ICP) or any other suitable technology. The result of the matching in step 514 is a registration transformation matrix (i.e., transformation parameters) for the transformation T1 from the navigation reference coordinate system 302 (which, in the present embodiment, coincides with the patient tracking device coordinate system) to the image coordinate system 304, or vice versa.
(80) Then, in step 516 and based on the registration transformation matrix, the surgical device 150 with the attached tracking camera 160A can be navigated in the patient image volume data set when the tracking camera 160A can identify at least four features known in the navigation reference coordinate system 302 (e.g., features of the patient tracking device 320), and the pose of the tracking camera 160A relative to the patient tracking device 320 can be calculated using pose estimation techniques.
(82) In step 562, an SfM technique is used to determine the feature coordinates of the features of the feature patch 330. As stated above, SfM refers to the process of estimating three-dimensional structures from two-dimensional sequences of picture data sets. Thus, a three-dimensional surface can be recovered from a (projected) two-dimensional motion field of a moving scene taken by the registration camera 160. In this regard, the individual features of the feature patch 330 are tracked in the sequence of picture data sets (e.g., by optical flow algorithms) from picture data set to picture data set. By knowing the (e.g., estimated) camera pose relative to the patient tracking device 320 for each picture data set and applying SfM, the three-dimensional coordinates of the identified features in the navigation reference coordinate system 302 (i.e., the patient tracking device coordinate system) can be calculated. The result of those calculations will be a point cloud of coordinates of the features of the feature patch 330 in the navigation reference coordinate system 302.
(84) The corresponding processing steps are illustrated in the flow diagram 600.
(85) Since no dedicated patient tracking device is attached to the patient, the video data stream recorded in step 608 is only indicative of features of the feature patch 330.
(86) In step 610, for each picture data set, the pose (i.e., position and orientation) of each identified mini tracker of the feature patch 330 relative to the registration camera coordinate system 306 is estimated. From those poses, the relative transformations between the identified mini trackers can be calculated.
(87) In step 612, the transformations calculated in step 610 for various feature combinations are collected and, optionally, filtered (e.g., by forming the mean of the transformations for each mini tracker that have been calculated from different perspectives).
(88) Then, in step 614, an arbitrary coordinate system is built from the positions (i.e., coordinates) and/or transformations derived for the identified mini trackers. The feature coordinates of the individual identified mini trackers in the arbitrary feature patch coordinate system again form a point cloud (in that coordinate system) representative of a surface model of the patient surface to which the feature patch 330 has been applied. Additionally, multiple ones of the identified mini trackers could be designated for later tracking purposes (via the tracking camera 160A) during surgical navigation. As such, the arbitrary feature patch coordinate system is defined to constitute the navigation reference coordinate system 302 that replaces the patient tracking device coordinate system utilized for the same purpose in connection with the method embodiments described above.
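One simple way of building such an arbitrary coordinate system, given purely as an illustration, is to derive an orthonormal frame from three non-collinear mini-tracker positions, as sketched below (the construction and the names are assumptions for the example).

    import numpy as np

    def build_reference_frame(p0, p1, p2):
        """Construct a 4x4 reference frame from three non-collinear mini-tracker
        positions: p0 becomes the origin, p0->p1 defines the x axis and p2 fixes
        the x-y plane."""
        x = p1 - p0
        x = x / np.linalg.norm(x)
        z = np.cross(x, p2 - p0)
        z = z / np.linalg.norm(z)
        y = np.cross(z, x)
        T = np.eye(4)
        T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p0
        return T   # maps frame coordinates into the coordinates of p0, p1, p2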
(89) Accordingly, the mini trackers may also be used for tracking during navigation to determine the position of the navigation camera relative to the patient (and the surgical device) in step 618. This fact explains the expression mini trackers. It will be appreciated that the mini trackers may likewise be used for such purposes in other embodiments in which a dedicated patient tracking device is present.
(92) The associated method embodiment is illustrated in the flow diagram 700.
(93) In contrast to step 610, which is performed based, inter alia, on pose estimation, in step 710 SfM is used to calculate a point cloud of the identified (i.e., detected and, optionally, decoded) individual features (not necessarily feature groups). The point cloud is scaled in step 712 by a scaling factor determined from the scaling reference features identified in the picture data sets and the a priori knowledge of the relative positions of the scaling features in space. As will be appreciated, such a scaling is not required for the pose estimation technique utilized in step 610. In step 714, a patient tracking device and an associated coordinate system are built from at least four (individual) features of the feature patch 330. Those at least four features will then be used for estimating the camera pose for navigation purposes in step 718.
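The scaling of step 712 can be pictured as comparing a distance that is known a priori for the scaling reference 190 with the same distance as reconstructed by SfM; a minimal sketch follows (all names and units are illustrative assumptions).

    import numpy as np

    def scale_point_cloud(points, reconstructed_ref_a, reconstructed_ref_b,
                          known_distance_mm):
        """Scale an SfM point cloud to metric units using two scaling-reference
        features whose true mutual distance (in millimetres) is known a priori."""
        reconstructed_distance = np.linalg.norm(
            np.asarray(reconstructed_ref_a) - np.asarray(reconstructed_ref_b))
        scale = known_distance_mm / reconstructed_distance
        return np.asarray(points) * scale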
(95) In step 806, the scaling reference 190 is placed in the viewing direction (i.e., in the field of view) of the registration camera 160. Then, in step 808, a video data stream of the patient area of interest (here: the patient's face) is recorded with the registration camera 160 from multiple perspectives, or viewing angles, such that also the scaling reference 190 can be seen. In a further step 810, anatomic patient features are identified and, optionally, classified for being tracked in the video data stream, and SfM is applied to the detected features to calculate a three-dimensional point cloud as explained above.
(96) In connection with step 810 (i.e., in parallel), pattern recognition is applied in step 812 to identify additional anatomic patient features (so-called landmarks) which are used to define picture areas where anatomic patient features are expected. This approach may help to prevent detecting features not lying on the patient surface. As will be appreciated, the anatomic patient features will also be utilized to define the patient tracking device for surgical navigation in step 820. As such, neither a dedicated feature patch nor a dedicated patient tracking device is required in this embodiment.
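Restricting feature detection to picture areas where anatomic patient features are expected could, for example, be approximated by first locating the face with a generic detector and then searching for skin features only inside that region; the sketch below combines OpenCV's Haar cascade face detector with a corner detector, and the cascade file name as well as the parameter values are assumptions made for the example.

    import cv2

    def detect_face_features(frame, cascade_path="haarcascade_frontalface_default.xml"):
        """Detect candidate anatomic skin features (e.g., freckles, pores) only
        inside a generically detected face region, so that features that do not
        lie on the patient surface are largely excluded."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        face_detector = cv2.CascadeClassifier(cascade_path)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        features = []
        for (x, y, w, h) in faces:
            roi = gray[y:y + h, x:x + w]
            corners = cv2.goodFeaturesToTrack(roi, maxCorners=200,
                                              qualityLevel=0.01, minDistance=5)
            if corners is not None:
                features.extend([(x + c[0][0], y + c[0][1]) for c in corners])
        return features   # pixel coordinates of candidate anatomic features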
(99) As has become apparent from the above embodiments, the present disclosure provides a surgical navigation technique with innovative registration approaches. The navigation system can be provided at low cost since, in the simplest variant, a single camera (e.g., a webcam coupled to a computing device via an interface) is sufficient. The registration procedure is easy and intuitive, and does not require any particular patient treatment for the acquisition of the patient image data for surgical navigation.
(100) In the foregoing, principles, embodiments and various modes of implementing the technique disclosed herein have exemplarily been described. The present invention should not be construed as being limited to the particular principles, embodiments and modes discussed herein. Rather, it will be appreciated that various changes and modifications may be made by a person skilled in the art without departing from the scope of the present invention as defined in the claims that follow.