HYBRID MULTI-CAMERA TRACKING FOR COMPUTER-GUIDED SURGICAL NAVIGATION
20220409287 · 2022-12-29
CPC classification
A61B2090/365
HUMAN NECESSITIES
A61B34/20
HUMAN NECESSITIES
G06T7/80
PHYSICS
A61B2560/0223
HUMAN NECESSITIES
A61B2090/3983
HUMAN NECESSITIES
A61B2090/367
HUMAN NECESSITIES
A61B2090/364
HUMAN NECESSITIES
International classification
A61B34/20
HUMAN NECESSITIES
A61B90/00
HUMAN NECESSITIES
Abstract
The invention relates to a camera system for surgical navigation systems comprising a plurality of cameras mounted in a room. At least three cameras are mounted in the room and are operated in at least two different modes. In a first mode, at least a subset of the cameras is operated to determine the positions of markers; in a second mode, at least a subset of the cameras is operated to determine the positions of surfaces of the room.
Claims
1-25. (canceled)
26. A camera system for surgical navigation systems comprising: a plurality of cameras mounted in a room; wherein at least three cameras are mounted in the room which are operated in at least two different modes; and wherein in a first mode at least a subset of the cameras is operated to determine the position of markers and in a second mode at least a subset of the cameras is operated to determine the position of surfaces of the room.
27. The camera system as claimed in claim 26, wherein in the first mode at least a subset of the cameras is operated with settings which are more suitable with regard to the position determination of markers and in the second mode at least a subset of the cameras is operated with settings which are more suitable with regard to the position determination of surfaces of the room, in each case with reference to the other of the two modes.
28. The camera system as claimed in claim 26, wherein at least one subset of cameras, comprising at least two cameras, is always operated in the first mode with the same or different composition of cameras over time.
29. The camera system as claimed in claim 26, wherein at least one subset of cameras, comprising at least two cameras, is always operated in the second mode with the same or different composition over time.
30. The camera system as claimed in claim 26, wherein the first mode and the second mode are each operated with a subset of cameras, wherein the composition of the subsets changes over time.
31. The camera system as claimed in claim 26, wherein a subset comprising at least two cameras of the same or different composition over time is operated permanently in one of the two modes, wherein the other mode is operated in time windows with pauses in between.
32. The camera system as claimed in claim 26, wherein all cameras are operated in a single one of the two modes in a time window and in between all cameras are operated in the respective other mode or in between simultaneously a respective subset of the cameras is operated in a respective one of the two modes.
33. The camera system as claimed in claim 26, wherein the camera system is operated according to at least one of the following variants:
in a first variant, three cameras are mounted in the room, wherein a subset of two cameras is always in marker mode, which subset is composed differently over time, and wherein always or in time windows with pauses in between one of the cameras is in image mode, wherein which of the cameras is in image mode changes over time;
in a second variant, at least four cameras are mounted in the room, wherein a subset of at least two cameras is always in marker mode, which subset is composed differently over time, wherein always or in time windows with pauses in between at least one of the cameras is in image mode, wherein which of the cameras is in image mode changes over time, or always or in time windows with pauses in between at least two of the cameras are in image mode, wherein which of the cameras is in image mode changes over time;
in a third variant, at least four cameras are mounted in the room, wherein a subset of at least two cameras is always in marker mode, which subset is composed unchanged over time, and wherein at least two of the cameras are always in image mode, wherein which of the cameras are in image mode does not change over time; and
in a fourth variant, at least four cameras are mounted in the room, wherein a subset of at least two cameras is always in marker mode, which subset is composed unchanged over time, and wherein the cameras of the remaining subset, comprising at least two cameras, alternate between marker mode and image mode.
34. The camera system as claimed in claim 26, wherein the system comprises infrared light sources, wherein the infrared light sources are operated at different intensities in the at least two different modes.
35. The camera system as claimed in claim 26, wherein: the cameras are equipped with an optical filter which allows light in the infrared range to pass and attenuates or eliminates light of other wavelengths; and the optical filter is active in the first mode and is not active in the second mode.
36. The camera system as claimed in claim 26, wherein: images from the cameras of the individual modes are processed differently; the images in the first mode are used for 3D reconstruction of positions of markers in the room; and the images from the cameras in the second mode are used for 3D point cloud calculations.
37. The camera system as claimed in claim 26, wherein in at least one of the two modes a cross-validation of the cameras is performed in order to detect camera displacements.
38. The camera system as claimed in claim 37, wherein the cross-validation of the cameras is performed in the first mode.
39. The camera system as claimed in claim 37, wherein in the event of a camera displacement, the affected camera is recalibrated in the first mode.
40. The camera system as claimed in claim 26, wherein: camera calibration is performed in the first mode; and intrinsic and extrinsic parameters determined thereby are used to create an image mask for the second mode which aligns at least one of image intensity, grey values, color values, or brightness of associated pixels on images of different cameras in order to obtain the highest possible match of the images.
41. The camera system as claimed in claim 26, wherein: a validation of extrinsic parameters of the cameras is performed in the first mode; at one point in time first images of a first subset of cameras are used for a first determination of at least one of at least one marker position and an object position, and second images of a second subset of cameras produced at the same point in time are used independently thereof for a second determination of the at least one marker position and/or object position; and a verification is performed as to whether the determined marker positions and/or object positions of the first and second determinations correspond to one another or to stored values.
42. The camera system as claimed in claim 26, wherein: at least three cameras are mounted in the room, which are operated in at least two different modes; a determination of extrinsic and intrinsic parameters of the cameras is performed in the first mode on the basis of a position determination of an arrangement, known to the system, of at least three infrared markers on an object; and in a second mode surfaces of the room are determined by means of point cloud calculations from the image data of the cameras with inclusion of the extrinsic and intrinsic parameters of the cameras determined in the first mode.
43. The camera system as claimed in claim 42, wherein a displacement of cameras is detected in at least one of the first and the second mode and thereupon in the first mode the extrinsic parameters of a displaced camera are determined again and based on these newly determined parameters a calculation model of the point cloud calculations is updated.
44. The camera system as claimed in claim 43, wherein the intrinsic parameters of the displaced camera are determined and based on these newly determined parameters the calculation model of the point cloud calculations is updated.
45. A method for detecting positional displacements of cameras of a camera system for surgical navigation systems, comprising:
mounting a plurality of cameras in a room, wherein the system is operated in at least a first mode, in which positions of infrared markers are determined from image information of the cameras, and wherein the system is operated in at least a second mode, in which surfaces of the room are determined by means of point cloud calculations from the image information of the cameras including extrinsic and intrinsic parameters of the cameras determined in the first mode;
wherein at least one first object or instrument with at least three infrared markers is present in the room, a spatial arrangement of which relative to one another is stored in the system or can be calculated by the system;
wherein a number of at least three cameras are mounted in the room, whose image information is used for a main calculation of a first position of the first object or instrument, or at least four cameras are mounted in the room, whose image information is used for a main calculation of a second position of a second object in the room;
wherein furthermore comparison calculations are performed, for which comparison calculations only image information of a subset of cameras is used;
wherein in the comparison calculations the first position of the first object or instrument or the second object in the room or the spatial arrangement of at least two of said three infrared markers with respect to one another is calculated;
wherein the number of cameras of the subset is at least two and at most the total number of cameras minus one, wherein a number of different comparison calculations are performed for subsets with different compositions, which number of different comparison calculations is at least equal to the total number of cameras minus one;
determining which results of the comparison calculations deviate from the stored arrangement of said markers or from the other comparison calculations; and
further determining which of the cameras is involved in all deviating results or in those comparison calculations whose results deviate from all other comparison calculations.
46. The method as claimed in claim 45, wherein: at least one object or instrument with at least three infrared markers is present in the room, the spatial arrangement of which relative to one another is stored in the system, wherein at least three cameras are mounted in the room, whose image information of the first mode is used for a main calculation of the object or instrument positions in the room, wherein comparison calculations are performed additionally, for which comparison calculations only the image information of a subset of cameras is used; in the comparison calculations the spatial arrangement of at least two of said three infrared markers with respect to one another is calculated; the number of cameras of the subset is at least two and at most the total number of cameras minus one, wherein a number of different comparison calculations are made for subsets with different compositions, which number of different comparison calculations is at least equal to the total number of cameras minus one, and determining which results of the comparison calculations deviate from the stored arrangement of said markers and further determining which of the cameras is involved in all deviating results.
47. The method as claimed in claim 45, wherein: at least one object or instrument with at least three infrared markers is present in the room, the spatial arrangement of which relative to one another can be calculated by the system; a number of at least four cameras is mounted in the room, the image information of which is used together for a main calculation of an object position of the first or a second object in the room; in addition comparison calculations are performed, for which comparison calculations only the image information of a subset of cameras is used and wherein the object position of the first or second object in the room is also calculated in the comparison calculations; the number of cameras of the subset is at least two and at most the total number of cameras minus one; and a number of different comparison calculations are performed for subsets with different compositions, which number of different comparison calculations is at least equal to the total number of cameras, and determining which results of the comparison calculations differ from other comparison calculations and further determining which of the cameras is involved in those comparison calculations whose results differ from all other comparison calculations.
48. The method as claimed in claim 45, wherein: the displaced camera involved in all deviating results is excluded from the main calculation; and the main calculation is further performed with a reduced number of cameras.
49. The method as claimed in claim 45, wherein for the displaced camera involved in all deviating results, at least the extrinsic parameters are recalculated and stored based on the spatial arrangement of said markers determined by the remaining cameras in the first mode, so that the spatial arrangement of said three markers determined using the image information of the displaced camera is brought into agreement with the spatial arrangement of said three markers calculated by the remaining cameras.
50. The method as claimed in claim 45, wherein in the first mode the main calculation of the remaining cameras uniquely identifies three markers of any object or instrument and calculates the spatial arrangement of at least two of the three markers, and wherein for the displaced camera, which is involved in all the deviating results, at least the extrinsic parameters are recalculated and stored based on the calculated spatial arrangement of said two markers with respect to one another and the 2D representations of the arrangement of said two markers from at least one of the remaining cameras and the displaced camera, such that the spatial arrangement of said markers determined using the image information of the deviating camera is brought into agreement with the main calculation of the spatial arrangement of said markers.
51. The method as claimed in claim 49, wherein the recalculated and stored extrinsic parameters of the displaced camera are transferred to the calculation model of the main calculation with all cameras and to the models of the comparison calculations with involvement of the displaced camera, and the main calculation is subsequently performed again with all cameras.
52. The method as claimed in claim 49, wherein the recalculated and stored extrinsic parameters of the displaced camera are transferred to the calculation model of the second mode.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0073] The invention is illustrated with reference to the drawings.
DETAILED DESCRIPTION
[0081] According to the invention, the individual images are captured in at least two different modes, which differ at least in the exposure settings of the cameras 1 or the infrared light sources 70.
[0083] Alternatively, an initial image analysis can already be performed in the camera 1 or in a device located between the camera 1 and the data processing system, so that only the coordinates of the markers 4 in the individual images are sent to the data processing system, which reduces the amount of data to be transmitted. The amount of data transmitted between the cameras 1 and the data processing system is therefore smaller in the first mode than in the second mode. Likewise, the computational effort required to determine the marker positions in the first mode is lower than the computational effort associated with point cloud computing in the second mode.
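A minimal sketch of such on-camera marker extraction, assuming 8-bit infrared frames in which the markers 4 appear as bright blobs; the threshold, minimum blob area, and function name are illustrative and not taken from the description:

```python
import cv2
import numpy as np

def extract_marker_centroids(ir_frame, threshold=200, min_area=4):
    """Return 2D centroids (x, y) of bright marker blobs in an IR frame."""
    _, binary = cv2.threshold(ir_frame, threshold, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Label 0 is the background; keep only blobs above a minimal pixel area.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

# A few coordinates per marker replace a full-resolution image, which is
# why the first mode needs far less transmission bandwidth than the second.
frame = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(frame, (320, 240), 5, 255, -1)   # synthetic marker blob
print(extract_marker_centroids(frame))      # -> approximately [(320.0, 240.0)]
```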
[0084] In one embodiment, the cameras themselves each have a computing unit that applies the camera settings and/or performs the image processing depending on the respective mode. In this case, the camera receives instructions indicating in which mode to capture images at the respective point in time, or instructions to change the mode.
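One possible shape of this per-camera mode handling is sketched below; the command structure, setting names, and numeric values are assumptions for illustration only, since the description merely requires that each camera can be instructed which mode to use:

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    MARKER = 1   # first mode: short exposure, IR filter active, centroids only
    IMAGE = 2    # second mode: longer exposure, full spectrum, full frames

@dataclass
class CameraSettings:
    exposure_ms: float
    ir_filter_active: bool
    send_full_frames: bool

SETTINGS = {
    Mode.MARKER: CameraSettings(0.5, True, False),
    Mode.IMAGE: CameraSettings(8.0, False, True),
}

class Camera:
    def __init__(self, cam_id):
        self.cam_id = cam_id
        self.mode = Mode.MARKER

    def set_mode(self, mode):
        """Apply the settings bundle belonging to the requested mode."""
        self.mode = mode
        s = SETTINGS[mode]
        # A real computing unit would program the sensor exposure, move the
        # optical filter, and select the on-camera processing pipeline here.
        print(f"camera {self.cam_id}: {mode.name}, exposure {s.exposure_ms} ms")

Camera(1).set_mode(Mode.IMAGE)
```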
[0085] The position of the markers 4 in the room can be determined from the coordinates of the markers 4 in the individual images from at least two cameras 1 with known arrangement and orientation in the room. Given a known arrangement of the markers 4 on the object 2, the position and orientation of the object 2 in the room can be determined from the marker positions.
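The determination of a marker position from two calibrated views can be illustrated with a standard two-view triangulation; all matrices and poses below are made-up example values, not parameters from the description:

```python
import cv2
import numpy as np

def projection_matrix(K, R, t):
    """3x4 projection matrix P = K [R | t] from intrinsics and extrinsics."""
    return K @ np.hstack([R, t.reshape(3, 1)])

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = projection_matrix(K, np.eye(3), np.zeros(3))            # camera 1 pose
R2, _ = cv2.Rodrigues(np.array([0., np.deg2rad(10.), 0.]))   # camera 2 pose
P2 = projection_matrix(K, R2, np.array([-0.5, 0., 0.]))

# Known 3D test point, projected into both views to simulate marker pixels.
X = np.array([0.1, -0.05, 2.0, 1.0])
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]

# cv2.triangulatePoints takes 2xN pixel arrays and returns homogeneous 4xN.
Xh = cv2.triangulatePoints(P1, P2, x1.reshape(2, 1), x2.reshape(2, 1))
print((Xh[:3] / Xh[3]).ravel())   # -> approximately [0.1, -0.05, 2.0]
```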
[0087] The optical filter is preferably pivoted away from the region in front of the lens 60 so that the entire light spectrum reaches the lens 60 without attenuation by the optical filter.
[0088] In the second mode, associated pixels of surfaces are identified by image analysis and their positions are compared in individual images from at least two cameras 1. With the arrangement and orientation of the cameras 1 in the room known, the position and orientation of all surfaces in the room recorded by the cameras can be determined. For example, the object 3 which has no markers 4 becomes visible in the image mode, as does the surface 5, at least as far as it lies within the capture area of at least two cameras 1.
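A compact sketch of this second mode, assuming a rectified camera pair: associated pixels are found by stereo matching and reprojected to 3D surface points. The matcher parameters and the reprojection matrix Q are illustrative placeholders:

```python
import cv2
import numpy as np

def surface_point_cloud(rect_left, rect_right, Q):
    """Disparity-based 3D surface points from one rectified camera pair."""
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=7)
    disp = matcher.compute(rect_left, rect_right).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disp, Q)   # HxWx3 room coordinates
    return points[disp > 0]                    # keep valid matches only

# Q encodes focal length and baseline, here with invented values; in the
# described system they would come from the marker-mode calibration.
Q = np.float32([[1, 0, 0, -320], [0, 1, 0, -240],
                [0, 0, 0, 800], [0, 0, 1.0 / 0.12, 0]])
left = np.random.randint(0, 255, (480, 640), np.uint8)
right = np.roll(left, -8, axis=1)              # crude synthetic disparity
print(surface_point_cloud(left, right, Q).shape)
```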
[0089] The object 2 with the markers 4 is likewise visible in image mode, including the markers 4 themselves. The markers 4 can be 3D objects, preferably in the form of spheres. Two-dimensional markers can also be detected on the object in image mode, provided they are distinguishable in visible light from the surface to which they are attached.
[0090] In the second mode, good contrast between all the different surfaces present in the room is preferred in the images, which is best achieved when the full light spectrum is available. The camera images in the second mode are grey-scale images and/or preferably color images of the room, which are transmitted in this form to a data processing system.
[0091] The transmitted data in both modes contain, in addition to the image data or marker coordinates, information on the time of recording, such as a digital time stamp, so that the images taken at one point in time can be processed together.
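The grouping of simultaneously captured data can be sketched as follows; the packet layout and the tolerance value are assumptions for illustration:

```python
from collections import defaultdict

def group_by_timestamp(packets, tolerance_us=500):
    """packets: iterable of (camera_id, timestamp_us, payload) tuples."""
    groups = defaultdict(dict)
    for cam_id, ts, payload in packets:
        key = round(ts / tolerance_us)   # quantise timestamps to tolerance
        groups[key][cam_id] = payload
    # Only groups seen by at least two cameras can be processed together.
    return [g for g in groups.values() if len(g) >= 2]

packets = [(1, 1000, "markers cam 1"), (2, 1200, "markers cam 2"),
           (3, 5400, "markers cam 3")]
print(group_by_timestamp(packets))   # cameras 1 and 2 form one group
```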
[0093] Images from the cameras 1 in the second mode, the image mode, are fed by the switch 7 in the form of 2D individual images 10 to a point cloud processing 11, which calculates a 3D point cloud (three-dimensional point cloud) from the individual images 10.
[0096] Once the calibration is complete, the 2D marker data stream 8 is released for processing in the 3D reconstruction 9. Both the extrinsic and intrinsic parameters are used in the 3D reconstruction 9 to determine the 3D marker positions from the 2D marker data stream 8. From the 3D marker positions, the marked objects are recognised by an instrument recognition 19 and markers and/or object positions are determined as a result.
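One common way to implement such an instrument recognition is to compare the sorted inter-marker distances of the reconstructed 3D marker positions against stored rigid geometries; the geometry table, instrument names, and tolerance below are invented for the example:

```python
import itertools
import numpy as np

KNOWN_GEOMETRIES = {  # instrument name -> sorted pairwise distances in mm
    "pointer": np.array([50.0, 120.0, 130.0]),
    "reference": np.array([60.0, 95.0, 110.0]),
}

def pairwise_distances(points):
    return np.sort([np.linalg.norm(a - b)
                    for a, b in itertools.combinations(points, 2)])

def recognise_instrument(markers_3d, tol_mm=1.0):
    """Return the name of the stored instrument whose geometry matches."""
    d = pairwise_distances(markers_3d)
    for name, ref in KNOWN_GEOMETRIES.items():
        if len(ref) == len(d) and np.all(np.abs(d - ref) < tol_mm):
            return name
    return None

markers = np.array([[0., 0., 0.], [50., 0., 0.], [0., 120., 0.]])
print(recognise_instrument(markers))   # -> "pointer"
```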
[0097] The extrinsic parameters and preferably also the intrinsic parameters are used in point cloud processing 11 to determine the surfaces of objects and of the room. Because the extrinsic and intrinsic parameters can be determined accurately during calibration in marker mode, the 3D point cloud resulting from point cloud processing 11 is more accurate than if these parameters were determined in image mode.
[0098] The intrinsic parameters are preferably used to apply a lens distortion correction 16 to the 2D individual images 10, which can already be done prior to the point cloud processing 11.
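A minimal sketch of the lens distortion correction 16, assuming a camera matrix and distortion coefficients obtained from the marker-mode calibration; the numeric values are placeholders:

```python
import cv2
import numpy as np

K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

def undistort_frame(frame):
    """Remove radial and tangential lens distortion from one camera image."""
    return cv2.undistort(frame, K, dist)

corrected = undistort_frame(np.zeros((480, 640), dtype=np.uint8))
print(corrected.shape)   # same size, distortion-corrected pixels
```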
[0099] The extrinsic parameters and/or data from the 2D individual images 10 or the already corrected 2D individual images can be used for the auto-exposure calculation 17. The exposure calculations can be used for marker capture settings 12 and/or image capture settings 13. The exposure calculations can be used for a brightness correction 18 of the 2D individual images 10.
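The auto-exposure calculation 17 could, for example, derive a new exposure value from the mean brightness of a corrected individual image; the target value and clamping limits below are assumptions:

```python
import numpy as np

def next_exposure(frame, exposure_ms, target_mean=110.0, lo=0.1, hi=20.0):
    """Proportional exposure update towards a target mean grey value."""
    mean = max(float(frame.mean()), 1.0)   # guard against division by zero
    return float(np.clip(exposure_ms * target_mean / mean, lo, hi))

frame = np.full((480, 640), 55, dtype=np.uint8)   # image too dark by half
print(next_exposure(frame, exposure_ms=4.0))      # -> 8.0
```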
[0101] Sub-instrument recognitions 26 are made from the 3D sub-reconstructions 20, 21, 22, wherein a camera displacement detection 27 determines whether the sub-instrument recognitions 26 from the sub-reconstructions 20, 21, 22 of different camera pairs or camera subsets differ from one another. In doing so, it is determined, taking the 2D marker data stream 8 into account, which camera 1 exhibits deviations from the other sub-instrument recognitions 26. The camera displacement detection 27 deactivates the displaced camera 1 or interrupts the 2D marker data stream of the displaced camera 1. In the event of deactivation, the 2D individual images 10 of the displaced camera 1 are also no longer fed to the point cloud processing 11. Alternatively, only the feeding of the individual images 10 of the displaced camera 1 to the point cloud processing 11 can be interrupted.
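The logic of the camera displacement detection 27 can be sketched as a leave-one-out comparison over camera pairs: the camera involved in every deviating sub-reconstruction is flagged as displaced. The triangulate callback and the tolerance stand in for the 3D sub-reconstructions and are assumptions:

```python
import itertools
from collections import Counter

def find_displaced_camera(cameras, triangulate, reference, tol=1.0):
    """cameras: ids; triangulate(pair) -> 3D point; reference: expected point."""
    deviating_pairs = []
    for pair in itertools.combinations(cameras, 2):
        p = triangulate(pair)
        error = sum((a - b) ** 2 for a, b in zip(p, reference)) ** 0.5
        if error > tol:
            deviating_pairs.append(pair)
    if not deviating_pairs:
        return None
    counts = Counter(cam for pair in deviating_pairs for cam in pair)
    # The displaced camera is the one involved in every deviating pair.
    suspects = [c for c, n in counts.items() if n == len(deviating_pairs)]
    return suspects[0] if len(suspects) == 1 else None

# Toy example: camera 3 shifts every reconstruction it takes part in.
good, bad = (0.0, 0.0, 2.0), (0.0, 0.0, 5.0)
results = {(1, 2): good, (1, 3): bad, (2, 3): bad}
print(find_displaced_camera([1, 2, 3], lambda p: results[p], good))   # -> 3
```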
[0102] Preferably, the camera displacement detection 27 starts a camera calibration routine which can be executed by the camera calibration 14. In at least one recalibration process 23, 24, 25, the extrinsic and/or intrinsic parameters are recalculated for the displaced camera 1 in marker mode or from the 2D marker data stream 8. The recalculated extrinsic and/or intrinsic parameters are transferred to all affected sub-models of the 3D sub-reconstructions 20, 21, 22 and the main model of the 3D reconstruction 9. Furthermore, the recalculated extrinsic and/or intrinsic parameters are transferred to the model of the point cloud processing 11. The recalibration of cameras 1 of the point cloud processing 11 is thus advantageously performed in marker mode.
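Re-determining the extrinsic parameters of a displaced camera from known 3D marker positions amounts to a pose (PnP) estimate from its own 2D marker observations; the synthetic marker coordinates and the intrinsics below are illustrative:

```python
import cv2
import numpy as np

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])

def recalibrate_extrinsics(markers_3d, markers_2d):
    """Re-estimate R, t of a displaced camera from known 3D marker positions."""
    ok, rvec, tvec = cv2.solvePnP(markers_3d, markers_2d, K, None)
    if not ok:
        raise RuntimeError("PnP solve failed")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

# Synthetic check: project known markers with a ground-truth pose, then
# recover that pose from the 2D observations alone.
pts3d = np.array([[0., 0., 2.], [0.1, 0., 2.], [0., 0.1, 2.],
                  [0.1, 0.1, 2.2], [-0.1, 0., 1.9], [0., -0.1, 2.1]])
rvec_true = np.array([0.0, 0.05, 0.0])
tvec_true = np.array([0.02, -0.01, 0.0])
pts2d, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, None)
R, t = recalibrate_extrinsics(pts3d, pts2d.reshape(-1, 2))
print(t.ravel())   # -> approximately [0.02, -0.01, 0.0]
```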
[0103] Both the 3D sub-reconstructions 20, 21, 22 and, if necessary, the recalibration processes 23, 24, 25 are performed in the first mode with a lower sampling rate than the 3D reconstruction 9 from the data of all cameras or of all non-deactivated cameras 1. Preferably, the sampling rate of the 3D reconstruction 9 is greater than or equal to 100 Hz, more preferably greater than or equal to 150 Hz, for example 180 Hz. Preferably, the sampling rate of each of the 3D sub-reconstructions 20, 21, 22 is less than or equal to 10 Hz, more preferably less than or equal to 5 Hz, for example 1 Hz. Preferably, all 3D sub-reconstructions 20, 21, 22 use as a basis for the sub-instrument recognitions 26 and the camera displacement detection 27 the data of the cameras that were captured at the same point in time, wherein the 3D sub-reconstructions 20, 21, 22 can be calculated in parallel to one another. Preferably, the sampling rate of the individual images 10 for the point cloud processing 11 is greater than or equal to 1 Hz, in particular greater than or equal to 10 Hz, in particular greater than 100 Hz, more preferably greater than or equal to 150 Hz, for example 180 Hz.
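One way to interleave these rates is a simple frame-counter schedule in which the main reconstruction runs on every frame while the sub-reconstructions run on every n-th frame; the rates mirror the examples above, but the counter logic itself is an assumption:

```python
MAIN_RATE_HZ = 180   # example rate of the 3D reconstruction 9
SUB_RATE_HZ = 1      # example rate of the 3D sub-reconstructions 20, 21, 22

def tasks_for_frame(frame_index):
    """Work items to execute for one synchronised camera frame."""
    tasks = ["3D reconstruction (all non-deactivated cameras)"]
    if frame_index % (MAIN_RATE_HZ // SUB_RATE_HZ) == 0:
        tasks.append("3D sub-reconstructions + displacement check")
    return tasks

for frame in (0, 90, 180):
    print(frame, tasks_for_frame(frame))   # sub-check once per second
```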