COMPUTER IMPLEMENTED METHOD, A COMPUTING DEVICE AND A SYSTEM FOR ASSISTING BENDING OF A REINFORCING ROD FOR ATTACHMENT TO A PLURALITY OF CHIRURGICAL IMPLANTS

20230245589 · 2023-08-03

Abstract

A computer implemented method of assisting bending of a reinforcing rod (10) includes the steps of receiving spatial positions (P.sub.1-n) of chirurgical implants (20.sub.1-n), in particular pedicle screws, captured by a camera-based positioning device (50), the chirurgical implants (20.sub.1-n) configured to attach to the reinforcing rod (10); calculating a rod shape (10c) corresponding to the spatial positions (P.sub.1-n), allowing the chirurgical implants (20.sub.1-n) to be attached to the reinforcing rod (10); based on the calculated rod shape (10c), calculating a sequence of bending parameter set(s); and generating tool operation guidance for bending tool(s) (70) based on the bending parameter set(s), the tool operation guidance indicating a sequence of prescribed operation steps (S.sub.1-n), wherein the sequence of prescribed operation steps (S.sub.1-n) is determined such that, when carried out using the bending tool(s) (70), it causes the bending tool(s) (70) to shape the reinforcing rod (10) corresponding to the calculated rod shape (10c).

Claims

1. A computer implemented method of assisting bending of a reinforcing rod (10), the method comprising the steps carried out by a computing device (100): a. receiving spatial positions (P.sub.1-n) of a plurality of chirurgical implants (20.sub.1-n) captured by a camera-based positioning device (50); b. calculating a rod shape (10c) corresponding to the spatial positions (P.sub.1-n) of the plurality of chirurgical implants (20.sub.1-n), allowing the plurality of chirurgical implants (20.sub.1-n) to be attached to the reinforcing rod (10); c. based on the calculated rod shape (10c), calculating a sequence of bending parameter set(s); and d. generating tool operation guidance for bending tool(s) (70) based on the bending parameter set(s), the tool operation guidance indicating a sequence of prescribed operation steps (S.sub.1-n) of the bending tool(s) (70), wherein the sequence of prescribed operation steps (S.sub.1-n) is determined such that, when carried out using the bending tool(s) (70), it causes the bending tool(s) (70) to shape the reinforcing rod (10) from an initial shape (10i) to a shaped form (10s) corresponding to the calculated rod shape (10c).

2. The method according to claim 1, further comprising generating augmented reality data based on the tool operation guidance, the augmented reality data comprising overlay(s) representing one or more of the prescribed operation steps (S.sub.1-n) of the tool operation guidance (OG.sub.1-n).

3. The method according to claim 2, further comprising controlling the camera-based positioning device (50) to display the overlay(s) representing the tool operation guidance (OG.sub.1-n) on a display device (52) comprising a see-through display device (52) of the camera-based positioning device (50), the overlay(s) (OG.sub.1-n) being superimposed on a user’s view of the bending tool(s) (70) through the display device (52) and/or on the display device (52).

4. The method according to claim 1, further comprising: a. controlling the camera-based positioning device (50) to track a progress of a user’s operation of the bending tool(s) (70) in accordance with the sequence of prescribed operation steps (S.sub.1-n) of the bending tool(s) (70); and b. controlling the display device (52) to display the overlay (OG.sub.1-n) representing a particular operation step (Sx) of the sequence of prescribed operation steps (S.sub.1-n) of the tool operation guidance (OG.sub.1-n) according to the tracking.

5. The method according to claim 1, further comprising generating augmented reality data based on the calculated rod shape (10c), the augmented reality data comprising an overlay representing the calculated rod shape (O10c).

6. The method according to claim 5, further comprising controlling the camera-based positioning device (50) such as to display the overlay representing the calculated rod shape (O10c), on a display device (52) comprising a see-through display device (52) of the camera-based positioning device (50), the overlay representing the calculated rod shape (O10c) being superimposed on a user’s view of the reinforcing rod (10) through the display device (52) and/or on the display device (52).

7. The method according to claim 1, wherein the tool operation guidance is generated based on at least one of: i. parameter(s) of the bending tool(s) (70); ii. parameter(s) of the reinforcing rod (10); and iii. the positions of fixation points (22) of the plurality of chirurgical implants (20.sub.1-n).

8. The method according to claim 1, further comprising: a. controlling the camera-based positioning device (50) to capture image(s) of the plurality of chirurgical implants (20.sub.1-n); and b. determining the positions of the plurality of chirurgical implants (20.sub.1-n) based on the captured images.

9. The method according to claim 8, wherein the computing device controls the camera-based positioning device (50) to capture stereo images of the plurality of chirurgical implants (20.sub.1-n) using image capture sensors (54L, 54R) of the camera-based positioning device (50), and wherein the computing device determines the spatial positions (P.sub.1-n) of the plurality of chirurgical implants (20.sub.1-n) by processing the stereo images.

10. The method according to claim 9, further comprising: a. controlling the camera-based positioning device (50) to capture a stream of stereo images of the plurality of chirurgical implants (20.sub.1-n); and b. iteratively refining the determined spatial positions (P.sub.1-n) of the plurality of chirurgical implants (20.sub.1-n) by processing the stream of stereo images.

11. The method according to claim 9, wherein the computing device determines the spatial positions (P.sub.1-n) of the plurality of chirurgical implants (20.sub.1-n) by processing the stereo images using a stereo neuronal network, the neuronal network having been trained with a dataset of stereo images of chirurgical implants and corresponding annotations indicative of spatial positions of chirurgical implants.

12. The method according to claim 1, wherein a bending parameter set comprises at least one of: i. a rod distance (dARP.sub.1-n); ii. an axial reorientation angle (α.sub.1-n); iii. a rod bending angle (β.sub.1-n); and iv. a bending radius (R.sub.1-n).

13. The method according to claim 1, further comprising: a. controlling the camera-based positioning device (50) to capture image(s) of the bending tool(s) (70); and i. identifying the bending tool(s) (70) using the captured images and retrieving parameter(s) of the bending tool(s) (70) based on the identification; or ii. determining parameter(s) of the bending tool(s) (70) using the captured images.

14. A computing device (100) comprising a processing unit (120) and a memory unit (130) comprising instructions, which, when executed by the processing unit (120) cause the computing device (100) to carry out the method according to claim 1.

15. A system (1) for assisting bending of a reinforcing rod (10), the system (1) comprising: a. a computing device (100) according to claim 14; and b. a camera-based positioning device (50) communicatively connected to the computing device (100), the camera-based positioning device (50) comprising two or more image capture sensors (54L, 54R) and a display device (52) for the display of overlay(s) superimposed on a user’s view.

16. A computer program product, comprising instructions, which, when carried out by a processing unit (120) of a computing device (100), cause the computing device (100) to carry out the method according to claim 1.

17. The method of claim 1 wherein the plurality of chirurgical implants (20.sub.1-n) comprise pedicle screws.

Description

BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS

[0035] The herein described invention will be more fully understood from the detailed description given herein below and the accompanying drawings, which should not be considered limiting to the invention described in the appended claims. The drawings show:

[0036] FIG. 1 shows a flowchart illustrating steps of an embodiment of the computer implemented method of assisting bending of a reinforcing rod, according to the present disclosure;

[0037] FIG. 2 shows a flowchart illustrating steps of a further embodiment of the computer implemented method of assisting bending of a reinforcing rod, according to the present disclosure, comprising tracking of the bending operation;

[0038] FIG. 3 shows a flowchart illustrating steps of an embodiment of determining the spatial positions of the chirurgical implants;

[0039] FIG. 4 shows a schematic perspective view, illustrating capturing images of a plurality of chirurgical implants implanted into a patient, using a camera-based positioning device;

[0040] FIG. 5 shows a schematic perspective view, illustrating the spatial positions of a plurality of chirurgical implants, as determined based on the images of the plurality of chirurgical implants captured by the camera-based positioning device;

[0041] FIG. 6 shows a schematic perspective view, illustrating a calculated rod shape corresponding to the spatial positions of the plurality of chirurgical implants;

[0042] FIG. 7A shows a schematic perspective view of a bending tool receiving a reinforcing rod, illustrating parameters of a bending parameter set and an overlay representing the tool operation guidance;

[0043] FIG. 7B shows a schematic perspective view of an overlay of a calculated rod shape superimposed on a user’s view of the reinforcing rod in its current shape;

[0044] FIG. 8 shows a highly schematic block diagram of a computing device according to the present disclosure;

[0045] FIG. 9 shows a schematic view, illustrating display of a particular prescribed operation step of the tool operation guidance, as an overlay superimposed on a user’s view of a bending tool through a see-through display of the camera-based positioning device;

[0046] FIG. 10 shows a schematic perspective view of a bending tool receiving a reinforcing rod, illustrating superimposition of an overlay representing the calculated rod shape over a user’s view of the reinforcing rod while operating a bending bench;

[0047] FIG. 11 shows a schematic visualization of a stereo neural network used in determining the spatial positions of the chirurgical implants;

[0048] FIG. 12A shows a schematic representation of processing of stereo images to determine the spatial positions of the chirurgical implants; and

[0049] FIG. 12B shows an illustration of bounding boxes defined as part of the processing of the stereo images to determine the spatial positions of the chirurgical implants.

DESCRIPTION OF PREFERRED EMBODIMENTS

[0050] Reference will now be made in detail to certain embodiments, examples of which are illustrated in the accompanying drawings, in which some, but not all features are shown. Indeed, embodiments disclosed herein may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Whenever possible, like reference numbers will be used to refer to like components or parts.

[0051] FIG. 1 shows a flowchart illustrating steps of an embodiment of the computer implemented method of assisting bending of a reinforcing rod 10 attachable to a plurality of chirurgical implants 20.sub.1-n. In step S10, a stream of stereo images, each capturing one or more of the chirurgical implants 20.sub.1-n, is streamed from a head-mounted camera-based positioning device 50 comprising two spaced apart image capture sensors 54L, 54R. Thereafter, the spatial positions P.sub.1-n of the plurality of chirurgical implants 20.sub.1-n are determined in three-dimensional space using a positioning algorithm, described in detail with reference to FIGS. 11, 12A and 12B, based on the stereo images.

[0052] In a particular embodiment, the determined spatial positions P.sub.1-n of the plurality of chirurgical implants 20.sub.1-n are iteratively refined using the sequence of stereo images.

[0053] In step S20, the spatial positions P.sub.1-n of the chirurgical implants 20.sub.1-n are received by the computing device 100. According to embodiments, the computing device 100 itself determines the spatial positions P.sub.1-n, in which case the term “receive” relates to data generated, transmitted and received within the computing device 100, in particular within logical, functional or structural modules thereof (e.g., image processing module, position determination module, rod shape calculation module).

[0054] In subsequent step S30, a calculated rod shape 10c is determined corresponding to the spatial positions P.sub.1-n of the plurality of chirurgical implants 20.sub.1-n, such as to allow the plurality of chirurgical implants 20.sub.1-n to be attached to the reinforcing rod 10. In particular, the calculated rod shape 10c is determined such as to allow fixation points 22 (tulips) at the end of the chirurgical implants 20.sub.1-n to receive corresponding sections of the reinforcing rod 10. Due to the very tight fit between the fixation points 22 and the reinforcing rod 10, the calculated rod shape 10c is determined such that the reinforcing rod 10 comprises straight sections where the reinforcing rod 10 is to be attached to the fixation points 22 of the chirurgical implants 20.sub.1-n. Determination of the calculated rod shape 10c is described in detail with reference to FIGS. 11, 12A and 12B.

[0055] Having determined the calculated rod shape 10c, in step S40, a sequence of bending parameter set(s) is calculated based on the calculated rod shape 10c. The sequence of bending parameter set(s) defines a series of parameters descriptive of a reinforcing rod 10 having the calculated rod shape 10c. In particular, the sequence of bending parameter set(s) describes a sequence of sections (rod section distance dARP.sub.1-n) of the reinforcing rod 10 as well as bends between subsequent sections, each bend being described by an axial reorientation angle α.sub.1-n and a rod bending angle β.sub.1-n, and optionally by a bending radius R.sub.1-n (see FIG. 7A).
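For illustration only, the following is a minimal sketch of how one element of the sequence of bending parameter set(s) could be represented in software; the class and field names are hypothetical and not part of the disclosure, and the example values are arbitrary.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BendingParameterSet:
    """One element of the sequence of bending parameter sets (hypothetical representation).

    Angles in degrees, distances in millimetres.
    """
    d_arp: float                     # rod section distance dARP between consecutive bends
    alpha: float                     # axial reorientation angle around the rod's longitudinal axis
    beta: float                      # rod bending angle of the bend
    radius: Optional[float] = None   # optional bending radius R

# Example: a short sequence of two bends describing part of a calculated rod shape
sequence = [
    BendingParameterSet(d_arp=35.0, alpha=12.0, beta=18.5, radius=20.0),
    BendingParameterSet(d_arp=42.0, alpha=-8.0, beta=10.0),
]
```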

[0056] According to particular embodiments, shown in FIG. 1 with dashed lines, in a step S50, parameter(s) of the bending tool(s) 70 is/are determined. In a first substep of step S50, the camera-based positioning device 50 is controlled to capture image(s) of the bending tool(s) 70. Thereafter, the bending tool(s) 70 are identified using the captured images (e.g., by means of an identifier tag associated with each bending tool) and parameter(s) of the bending tool(s) 70 are retrieved (e.g., from a database stored within or communicatively connected to the computing device 100) based on the identification.

[0057] In a subsequent step S60, tool operation guidance is generated specifically for the bending tool(s) 70 based on the bending parameter sets and, depending on the embodiment, further based on parameter(s) of the bending tool(s) 70. The tool operation guidance indicates a sequence of prescribed operation steps S.sub.1-n of the bending tool(s) 70. In the embodiments described in detail and illustrated in the figures, the bending tools comprise a bending bench 70 for applying a bending force onto a section of the reinforcing rod 10 and a retaining tool for holding the reinforcing rod 10 during bending. Correspondingly, each of the sequence of prescribed operation steps S.sub.1-n comprises an indication of how to operate the bending bench 70 and the retaining tool such as to shape the reinforcing rod 10 from an initial shape 10i to a shaped form 10s corresponding to the calculated rod shape 10c. For example, each of the sequence of prescribed operation steps S.sub.1-n indicates a rod section distance dARP.sub.1-n, an axial rod rotation α.sub.1-n to be applied using the retaining tool and a lever angle (of bending tool 70) θ.sub.1-n for displacing the bending bench 70 from a resting (or neutral) position thereof. When a user, in particular a surgeon, applies the sequence of prescribed operation steps S.sub.1-n using the retaining tool and the bending bench 70, by sequentially displacing the reinforcing rod 10 by distances dARP.sub.1-n, rotating the reinforcing rod 10 around its longitudinal axis by axial reorientation angles α.sub.1-n and displacing a lever arm 71 of the bending bench 70 by the lever angles θ.sub.1-n, the reinforcing rod 10 will be shaped step by step from an initial form 10i into a shaped form 10s matching the calculated rod shape 10c within a defined tolerance.

[0058] Having generated the tool operation guidance, in a subsequent step S70, augmented reality data is generated based on the tool operation guidance. The augmented reality data comprises a sequence of overlays, each overlay representing a prescribed operation step S.sub.1-n of the tool operation guidance OG.sub.1-n.

[0059] The assisted bending process is performed (by a user) as follows:

[0060] In a preparatory step, an overlay OG.sub.1 of a retaining tool to control axial rotation of the reinforcing rod 10 is displayed. The displayed retaining tool is for example a rod gripper instrument (such as a forceps) specifically designed for holding reinforcing rods 10 without any slippage. The surgeon fixes the real-world retaining tool to the end of the reinforcing rod 10 and aligns the position and orientation of the retaining tool with the augmented-reality overlays OG.sub.1-n.

[0061] For the k.sup.th bending step:

[0062] Axial reorientation: the axial reorientation α.sub.k of the reinforcing rod 10 is accomplished by aligning the retaining tool axially with the presented overlay OG.sub.k. Axial displacement: to guarantee the bending of the reinforcing rod 10 at the correct position, the reinforcing rod 10 is shifted axially by dARP.sub.k. This step is navigated using the same overlay OG.sub.k of the retaining tool as for the axial reorientation. Again, the actual retaining tool, which is rigidly connected to the reinforcing rod 10, needs to coincide with the presented overlay OG.sub.k.

[0063] Lever movement: The navigation of the bending is achieved by showing the start and end positions of the lever 71 of the bending bench 70, as illustrated in light and dark gray in FIG. 7A, respectively. While the start position is fixed, the end position of the lever 71 is displayed according to OG.sub.k.

[0064] Inspection: An overlay of the calculated rod shape 10c is presented to the surgeon, superimposed on a view of the current shape of the reinforcing rod 10, as illustrated in dark and light grey in FIG. 7B, respectively. The current shape of the reinforcing rod 10 can be verified by visual inspection and adjustments can be made if necessary.

[0065] FIG. 2 shows a flowchart of a particular embodiment, wherein, in step S80, the progress of the sequence of prescribed operation steps S.sub.1-n of the bending tool(s) 70 is tracked using the camera-based positioning device 50.

[0066] Tracking of the progress of the sequence of prescribed operation steps S.sub.1-n is performed by processing a stream of stereo images from the camera-based positioning device 50 in order to identify a current shape 10t of the reinforcing rod 10 and comparing the current shape 10t of the reinforcing rod 10 with the calculated rod shape 10c. Based on the comparison, the progress of the execution of the prescribed operation steps S.sub.1-n can be determined, e.g., as the last executed step. Alternatively, or additionally, the tracking of the progress of the sequence of prescribed operation steps S.sub.1-n is performed by processing a stream of stereo images from the camera-based positioning device 50 in order to identify the current/latest of the prescribed operation steps S.sub.1-n carried out by the surgeon using the bending tools 70.
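As an illustration of the shape-based tracking described above, the following sketch estimates progress by comparing the bend angles of the current rod shape 10t with those of the calculated rod shape 10c; the point correspondence, the tolerance value and the function names are assumptions made for this sketch only.

```python
import numpy as np

def estimate_progress(current_points: np.ndarray,
                      target_points: np.ndarray,
                      tol_deg: float = 3.0) -> int:
    """Estimate the last executed bending step by comparing bend angles.

    current_points, target_points: (N, 3) arrays of corresponding sample points
    along the current rod shape 10t and the calculated rod shape 10c.
    Returns the number of leading bends already within tolerance of the target.
    """
    def bend_angles(pts: np.ndarray) -> np.ndarray:
        # angle at each interior point between the two adjacent segments
        v1 = pts[:-2] - pts[1:-1]
        v2 = pts[2:] - pts[1:-1]
        cosang = np.einsum("ij,ij->i", v1, v2) / (
            np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    done = np.abs(bend_angles(current_points) - bend_angles(target_points)) < tol_deg
    progress = 0
    for match in done:            # count consecutive completed bends from the start
        if not match:
            break
        progress += 1
    return progress
```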

[0067] In a subsequent step S90, as long as the bending is not completed (based on the tracking of the progress), a display device 52 of the camera-based positioning device 50 is controlled by the computing device 100 such as to superimpose the current overlay OG.sub.1-n in accordance with the determined progress (e.g., the first prescribed operation step S.sub.1-n not yet completed) onto a user’s view of the bending tool(s) 70.

[0068] According to embodiments, in addition to displaying an overlay OG.sub.1-n of the current operation step of the tool operation guidance, as long as the bending is not completed (based on the tracking of the progress), in a step S95 an overlay representing the calculated rod shape O10c is superimposed on a user’s view of the reinforcing rod 10 through the display device 52, as illustrated in FIG. 7B. This enables the user to visually compare the current shape 10t of the reinforcing rod 10 with the calculated rod shape 10c.

[0069] FIG. 3 shows the steps of determining the spatial positions of the chirurgical implants 20.sub.1-n. In substep S12, stereo images are continuously streamed from two front-facing environmental cameras 54L, 54R of a head-mounted augmented reality headset, such as a Microsoft HoloLens.

[0070] According to particular embodiments, depth information captured by the camera-based positioning device 50 (e.g., by a LIDAR and/or a time-of-flight (ToF) depth sensor) is also used in determining the spatial positions of the chirurgical implants 20.sub.1-n.

[0071] In a subsequent substep S14, the stream of stereo images is processed to determine the relative positions of the chirurgical implants 20.sub.1-n. The stereo images are fed to two branches of a stereo neuronal network, the neuronal network having been trained with a dataset of stereo images of chirurgical implants and corresponding annotations indicative of spatial positions P.sub.1-n of the chirurgical implants captured by the dataset of stereo images. A particular implementation of the stereo neuronal network is described in detail with reference to FIGS. 11, 12A and 12B.

[0072] FIG. 4 shows a schematic perspective view, illustrating capturing images of a plurality of chirurgical implants 20.sub.1-n implanted into a patient, using a camera-based positioning device 50. The camera-based positioning device 50 is a head-mounted imaging and display device comprising two spaced apart image capture sensors 54L, 54R and a display device 52 for the display of overlays superimposed on a user’s view therethrough. The images of the chirurgical implants 20.sub.1-n are captured as a stream of stereo images recorded while a user, in particular a surgeon or an assistant, is positioned with an (at least partial) view of the respective anatomical part 200 of the patient.

[0073] FIG. 5 shows a schematic perspective view, illustrating the spatial positions P.sub.1-n of a plurality of chirurgical implants 20.sub.1-n, as determined based on the stereo images of the plurality of chirurgical implants 20.sub.1-n captured by the camera-based positioning device 50. The spatial positions P.sub.1-n of the chirurgical implants 20.sub.1-n are determined in three-dimensional space using a positioning algorithm, described in detail with reference to FIGS. 11, 12A and 12B.

[0074] FIG. 6 shows a schematic perspective view, illustrating a calculated rod shape 10c corresponding to the spatial positions P.sub.1-n of the plurality of chirurgical implants. The calculated rod shape 10c is determined such as to allow the plurality of chirurgical implants 20.sub.1-n to be attached to the reinforcing rod 10. Details of a particular embodiment of determining the calculated rod shape 10c are described with reference to FIGS. 11, 12A and 12B.

[0075] FIG. 7A illustrates the bending parameters of one of the sequence of bending parameter set(s) 1-n relative to a bending bench 70. Each bending parameter set defines a series of parameters descriptive of a reinforcing rod 10 having the calculated rod shape 10c. As illustrated, the bending parameter sets each comprise a rod section distance dARP.sub.1-n between consecutive bends of the reinforcing rod 10; an axial reorientation angle α.sub.1-n indicative of an angle by which the reinforcing rod needs to be rotated around its longitudinal axis between consecutive bends; and a rod bending angle β.sub.1-n indicative of an angle of a respective bend of the reinforcing rod 10 in a radial direction with respect to the reinforcing rod 10.

[0076] Details of a particular embodiment of determining the bending parameters are described with reference to FIGS. 11, 12A and 12B.

[0077] FIG. 8 shows a highly schematic block diagram of a computing device 100 comprising a processing unit 120 and a memory unit 130. The memory unit 130 comprises instructions, which, when executed by the processing unit 120 cause the computing device 100 to carry out the method of assisting bending of a reinforcing rod 10 according to one of the embodiments disclosed herein.

[0078] FIG. 9 shows a schematic view, illustrating display of a particular prescribed operation step S.sub.1-n of the tool operation guidance, as an overlay OG.sub.1-n superimposed on a user’s view of the bending tool 70 through a see-through display 52 of the camera-based positioning device 50. As illustrated in FIG. 9, the overlay is generated as a 3D overlay OG and the camera-based positioning device 50 is controlled by the computing device 100 such that the 3D overlay OG is projected in the field of view of the user, through a see-through display device 52 of a head-mounted augmented reality headset. The 3D overlay OG is projected in the field of view of the user while the user views the bending bench 70 through the display device 52, the 3D overlay OG being projected at a corresponding position with respect to the bending bench 70.

[0079] As shown, the overlay OG represents a specific operation step S.sub.1-n as a visualization of the bending bench 70 in an actuated position (e.g., rotational displacement of a lever arm 71) which causes the reinforcing rod 10 to be bent according to the corresponding bending parameter set.

[0080] As illustrated in FIG. 10, in order to further improve the user experience and to allow the user to verify the progress of the sequence of prescribed operation steps S.sub.1-n and/or the current shape 10t of the reinforcing rod 10, augmented reality data is generated based on the calculated rod shape 10c, comprising an overlay representing the calculated rod shape O10c as a 3D overlay. The camera-based positioning device 50 is controlled by the computing device 100 to display the overlay representing the calculated rod shape O10c on the display device 52 such that the overlay is superimposed on a user’s view of the reinforcing rod 10.

[0081] Turning now to FIGS. 11, 12A and 12B, a particular embodiment of determining the spatial positions of the chirurgical implants 20.sub.1-n shall be described in detail. As illustrated in FIG. 11, both input branches of the neuronal network are configured identically. Each branch consists of three convolutional blocks composed of a series of convolutional layers with 3 × 3 filters which are completed by a max-pooling and a dropout layer. The activations of each convolutional layer are post-processed by a batch normalization layer. The number of filters is doubled for each convolutional block and the weights of the convolutional layers are shared by both network branches. This strategy enables the generation of consistent feature maps for the left and the right input image, respectively. In the next part of the network, the two feature maps are concatenated before a convolutional layer reduces the dimensionality of the resulting tensor using 1 × 1 filters. Then, four fully connected layers with 1024 units each regress the final output tensor, which is brought into the desired shape by a reshape layer. The activation function for all convolutional and dense layers is a leaky Rectified Linear Unit (ReLU), e.g., with a slope of 0.1, except for the last dense layer, which uses a linear mapping.
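The following is a non-authoritative TensorFlow/Keras sketch of such a stereo branch network. The shared 3 × 3 convolutional blocks, batch normalization after each activation, doubling filter counts, 1 × 1 reduction, dense regression layers, leaky ReLU with slope 0.1 and the 13 × 13 × 9 output follow the description above; the input resolution, base filter count, number of convolutions per block, dropout rate and the size of the last dense layer (which must emit 13 × 13 × 9 values before reshaping) are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_stereo_network(input_shape=(128, 128, 3), base_filters=32,
                         blocks=3, convs_per_block=2, dropout=0.25):
    """Sketch of the weight-sharing stereo detection network (assumed hyperparameters)."""
    act = lambda: layers.LeakyReLU(0.1)

    # Shared backbone: the same layer objects process both images,
    # so the convolutional weights are shared between the branches.
    shared = []
    filters = base_filters
    for _ in range(blocks):
        for _ in range(convs_per_block):
            shared.append(layers.Conv2D(filters, 3, padding="same"))
            shared.append(act())
            shared.append(layers.BatchNormalization())  # post-processes the activations
        shared.append(layers.MaxPooling2D())
        shared.append(layers.Dropout(dropout))
        filters *= 2  # number of filters doubles with each block

    def branch(x):
        for layer in shared:
            x = layer(x)
        return x

    left_in = layers.Input(shape=input_shape, name="left_image")
    right_in = layers.Input(shape=input_shape, name="right_image")

    merged = layers.Concatenate()([branch(left_in), branch(right_in)])
    merged = layers.Conv2D(filters // 2, 1, padding="same")(merged)  # 1x1 reduction
    merged = act()(merged)

    x = layers.Flatten()(merged)
    for _ in range(3):
        x = layers.Dense(1024)(x)
        x = act()(x)
    # final regression layer with linear activation, reshaped to the 13x13x9 grid
    x = layers.Dense(13 * 13 * 9, activation="linear")(x)
    out = layers.Reshape((13, 13, 9))(x)
    return Model([left_in, right_in], out)
```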

[0082] All bounding boxes in both stereo images are reconstructed from the output tensor of the network, which is of shape 13 × 13 × 9 and contains the encoded information. According to a particular embodiment, correspondences between detected objects in the left and the right images are identified by superimposing the left and right input images (of the stereo images) to create a union bounding box for each pedicle screw 20.sub.1-n which contains both the bounding box of the left and of the right image, respectively. This concept is illustrated for a single pedicle screw head 22 in FIG. 12A. The output tensor of the network can be interpreted as a 13 × 13 grid that divides the image into 169 cells, each consisting of nine regressed values. An output grid size of 13 × 13 was heuristically determined as a good trade-off that allows the detection of multiple smaller neighboring objects, such as screws, while keeping the dimensionality low. Each cell is responsible for detecting union bounding boxes whose center is located within the bounds of the cell. Each detection γ.sub.i (i = 1, ..., n, where n denotes the number of detected pedicle screws 20.sub.1-n), and consequently each union bounding box found on each stereo image pair in the 13 × 13 grid, is encoded by a total of nine regressed parameters organized into three groups:

[00001] $\gamma_i = \big(\underbrace{ts_i}_{\text{presence}},\ \underbrace{tx_i,\, ty_i,\, tw_i,\, th_i}_{\text{union box}},\ \underbrace{{}_{L}t\Delta x_i,\, {}_{L}t\Delta w_i,\, {}_{R}t\Delta x_i,\, {}_{R}t\Delta w_i}_{\text{stereo correction}}\big)$

The first parameter ts.sub.i indicates whether a pedicle screw 20.sub.1-n, and consequently the center of a union bounding box, is located in the respective grid cell. This parameter is a binary variable for training but needs to exceed an experimentally determined value of 0.5 to suggest screw presence during inference. The following four entries tx.sub.i, ty.sub.i, tw.sub.i, th.sub.i define the precise location of a union bounding box as well as its width and height. It is assumed that each cell in the 13 × 13 grid has unit width and height and that the top-left location of a grid cell can be described by the two values cx.sub.i, cy.sub.i, as depicted in FIG. 12B. Instead of regressing the union bounding box location in global pixel coordinates, all refinements are described relative to the cell location cx.sub.i, cy.sub.i within the 13 × 13 grid, where the detection occurred. The union bounding box parameters bx.sub.i, by.sub.i, bw.sub.i, bh.sub.i are then obtained by:

[00002] $bx_i = \sigma(tx_i) + cx_i$

[00003] $by_i = \sigma(ty_i) + cy_i$

[00004] $bw_i = a_w\, e^{tw_i}$

[00005] $bh_i = a_h\, e^{th_i}$

where σ denotes the sigmoid function. The parameters a.sub.w and a.sub.h are anchor values that introduce prior knowledge about the union bounding boxes. This prior information is obtained by averaging over manually labeled ground-truth union bounding boxes to provide an initial estimate which is corrected by the exponential terms. The anchors are of predefined size and do not depend on the current detection. The sigmoid function σ is required to map the regressed parameters tx.sub.i and ty.sub.i into the range [0, 1] to ensure that the bounding box center will remain in the predicted grid cell.

[0083] The third group of parameters refers to the stereo correction that determines the offsets from the union bounding box to the respective bounding boxes in the left and right image. Assuming rectified cameras 54L, 54R, only the horizontal offset and the width correction for the left and right image need to be regressed (see FIG. 12A). This results in the final four parameters of the detection descriptor .sub.LtΔx.sub.i, .sub.LtΔw.sub.i, .sub.RtΔx.sub.i, .sub.RtΔw.sub.i. A prescript .sub.• indicates hereinafter that a term can be applied to the left and right image, respectively. The detection descriptor is converted to absolute offsets as follows:

[00006] ${}_{\bullet}\Delta x_i = {}_{\bullet}a_{\Delta x}\, e^{{}_{\bullet}t\Delta x_i}$

[00007] ${}_{\bullet}\Delta w_i = {}_{\bullet}a_{\Delta w}\, e^{{}_{\bullet}t\Delta w_i}$

[0084] The parameters .sub.•aΔx and .sub.•aΔw are anchor values which were found by averaging over observed horizontal offsets and width corrections in the ground-truth data, similar to the anchor values a.sub.w and a.sub.h above. With this representation, each grid cell can detect exactly one screw head 22. The bounding boxes .sub.•(x.sub.i, y.sub.i, w.sub.i, h.sub.i) in the stereo image pairs are eventually found as follows:

[00008] ${}_{L}x_i = bx_i + {}_{L}\Delta x_i$

[00009] ${}_{R}x_i = bx_i - {}_{R}\Delta x_i$

[00010] ${}_{L}y_i = by_i$

[00011] ${}_{R}y_i = by_i$

[00012] ${}_{L}w_i = bw_i - {}_{L}\Delta w_i$

[00013] ${}_{R}w_i = bw_i - {}_{R}\Delta w_i$

[00014] ${}_{L}h_i = bh_i$

[00015] ${}_{R}h_i = bh_i$

[0085] The final point detections in pixel-space .sub.•(ui, vi) are found by transforming the center of the bounding boxes from the grid-space to pixel-space. This transformation consists of dividing .sub.•(xi, yi) by 13, resulting in normalized coordinates and successive multiplication by the original image width and height, respectively.
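A minimal decoding sketch tying the above equations together is given below; it converts the 13 × 13 × 9 output tensor and the anchor values into per-image detections in pixel space. The handling of the presence score, the sign convention of the left/right offsets and the anchor dictionary keys follow the reconstruction above and should be treated as assumptions.

```python
import numpy as np

def decode_detections(output, anchors, img_w, img_h, thresh=0.5):
    """Decode a (13, 13, 9) output tensor into left/right center detections (u, v).

    output:  regressed values [ts, tx, ty, tw, th, L_tdx, L_tdw, R_tdx, R_tdw] per cell
    anchors: dict with keys 'aw', 'ah', 'L_adx', 'L_adw', 'R_adx', 'R_adw'
             (anchor values averaged from ground-truth union boxes / offsets)
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    grid = output.shape[0]                       # 13
    detections = []
    for cy in range(grid):
        for cx in range(grid):
            ts, tx, ty, tw, th, ltdx, ltdw, rtdx, rtdw = output[cy, cx]
            if ts < thresh:                      # presence score below the 0.5 threshold
                continue
            # union bounding box in grid coordinates (cell-relative refinement)
            bx = sigmoid(tx) + cx
            by = sigmoid(ty) + cy
            bw = anchors['aw'] * np.exp(tw)
            bh = anchors['ah'] * np.exp(th)
            # stereo correction: horizontal offset and width correction per image
            ldx = anchors['L_adx'] * np.exp(ltdx)
            rdx = anchors['R_adx'] * np.exp(rtdx)
            # box centers per image (widths/heights omitted for the point detections)
            to_px = lambda x, y: (x / grid * img_w, y / grid * img_h)
            detections.append({
                "left_uv": to_px(bx + ldx, by),
                "right_uv": to_px(bx - rdx, by),
            })
    return detections
```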

[0086] The network according to this particular embodiment was implemented in TensorFlow and trained on a dataset obtained from ex-vivo experiments. To homogenize the dataset, all images were resized to a specific resolution and normalized. After random weight initialization, the stereo neural network was trained for 1000 epochs with a batch size of 16. The learning rate was initially set to 10.sup.-3 and was reduced to 10.sup.-4 after 750 epochs and to 10.sup.-5 for the last 100 epochs. To better generalize to unseen data, the stereo images were augmented on-the-fly for training. According to various embodiments, different augmentation strategies can be applied with varying combinations of augmentation techniques such as brightness and contrast changes, blurring, histogram equalization, scaling, vertical flipping of the image, and vertical translation. A combination of vertical translation at a probability of 50% with subsequent scaling or contrast adaptation is particularly advantageous.
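A short sketch of the stated learning-rate schedule is shown below; the optimizer, loss function and dataset objects are placeholders and not part of the disclosure.

```python
import tensorflow as tf

# Learning-rate schedule from the description: 1e-3 initially, 1e-4 after
# epoch 750, and 1e-5 for the last 100 of the 1000 epochs.
def lr_schedule(epoch, lr):
    if epoch < 750:
        return 1e-3
    if epoch < 900:
        return 1e-4
    return 1e-5

# Hypothetical usage; 'model', 'detection_loss' and the training data are placeholders.
# model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss=detection_loss)
# model.fit(train_images, train_targets, epochs=1000, batch_size=16,
#           callbacks=[tf.keras.callbacks.LearningRateScheduler(lr_schedule)])
```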

[0087] Given i corresponding detections .sub.•(u.sub.i, v.sub.i) in a stereo image pair, the 3D position of the i.sup.th screw candidate sc.sub.i can be determined using the vector midpoint method as follows. Let .sub.Ln.sub.i and .sub.Rn.sub.i be the normalized direction vectors of the rays from the respective camera centers .sub.LC.sub.i and .sub.RC.sub.i to the detections .sub.L(u.sub.i, v.sub.i) and .sub.R(u.sub.i, v.sub.i). As the rays typically do not intersect, the points .sub.Lsc.sub.i and .sub.Rsc.sub.i on the left and the right ray which are closest to each other are determined:

[00016] ${}_{L}\vec{sc}_i = {}_{L}\lambda_i\, {}_{L}\vec{n}_i$

[00017] ${}_{R}\vec{sc}_i = \left({}_{R}\vec{C}_i - {}_{L}\vec{C}_i\right) + {}_{R}\lambda_i\, {}_{R}\vec{n}_i$

[0088] The following equations can be stated by taking into account that, to ensure the shortest distance, .sub.Lsc.sub.i − .sub.Rsc.sub.i has to be perpendicular to both rays. Projecting both rays onto each other and given that .sub.Lsc.sub.i and .sub.Rsc.sub.i coincide in this projected situation yields:

[00018] ${}_{L}\lambda_i\, \left({}_{L}\vec{n}_i \cdot {}_{L}\vec{n}_i\right) = \left({}_{R}\vec{C}_i - {}_{L}\vec{C}_i\right) \cdot {}_{L}\vec{n}_i + {}_{R}\lambda_i\, \left({}_{R}\vec{n}_i \cdot {}_{L}\vec{n}_i\right)$

[00019] ${}_{L}\lambda_i\, \left({}_{L}\vec{n}_i \cdot {}_{R}\vec{n}_i\right) = \left({}_{R}\vec{C}_i - {}_{L}\vec{C}_i\right) \cdot {}_{R}\vec{n}_i + {}_{R}\lambda_i\, \left({}_{R}\vec{n}_i \cdot {}_{R}\vec{n}_i\right)$

[0089] Solving for .sub.Lλ.sub.i and .sub.Rλ.sub.i results in:

[00020] ${}_{L}\lambda_i = \dfrac{\left({}_{R}\vec{C}_i - {}_{L}\vec{C}_i\right) \cdot {}_{L}\vec{n}_i - \big(\left({}_{R}\vec{C}_i - {}_{L}\vec{C}_i\right) \cdot {}_{R}\vec{n}_i\big)\left({}_{L}\vec{n}_i \cdot {}_{R}\vec{n}_i\right)}{1 - \left({}_{L}\vec{n}_i \cdot {}_{R}\vec{n}_i\right)^2}$

[00021] ${}_{R}\lambda_i = \dfrac{\big(\left({}_{R}\vec{C}_i - {}_{L}\vec{C}_i\right) \cdot {}_{L}\vec{n}_i\big)\left({}_{L}\vec{n}_i \cdot {}_{R}\vec{n}_i\right) - \left({}_{R}\vec{C}_i - {}_{L}\vec{C}_i\right) \cdot {}_{R}\vec{n}_i}{1 - \left({}_{L}\vec{n}_i \cdot {}_{R}\vec{n}_i\right)^2}$

[0090] Finally, sc.sub.i is determined by linear interpolation of .sub.Lsc.sub.i and .sub.Rsc.sub.i and placed in the point set of triangulated points P.sub.tri. Each processed stereo image pair provides N.sub.det potential screw candidates which are expressed in a 3D world coordinate frame and stored in the point set P.sub.tri. The goal of the subsequent clustering routine is to condense incoming screw candidate locations sc.sub.i from P.sub.tri on-the-fly into a set of clustered screw positions P.sub.final based on the preoperatively known number of desired screws N.sub.screws. The clustering algorithm works as follows. The first incoming screw candidate p.sub.curr from P.sub.tri is stored in a new point set which is added to the set of cluster candidates P.sub.cand. Each incoming point p.sub.curr after that is appended to the closest existing cluster P.sub.closest ∈ P.sub.cand if the distance to the nearest cluster center is smaller than an empirically determined distance d.sub.thresh, e.g., of 2.0 cm. Otherwise, the point p.sub.curr is the seed of a new point set P.sub.iclu which is added to P.sub.cand. The procedure terminates as soon as N.sub.screws clusters are found which are each supported by, e.g., 100 individual points. The final clusters are determined by finding the center of each point set in P.sub.cand and storing it in P.sub.final. The final estimates are presented to the surgeon for visual confirmation. In case of incorrect detections, the screw detection is restarted.
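The vector midpoint method described by equations [00016] to [00021] can be sketched as follows; expressing both rays in a common world frame and the numerical tolerance for near-parallel rays are assumptions of this sketch.

```python
import numpy as np

def triangulate_midpoint(l_center, l_dir, r_center, r_dir):
    """Vector midpoint method: 3D screw candidate from two (non-intersecting) rays.

    l_center, r_center: camera centers LC, RC (3-vectors)
    l_dir, r_dir:       normalized ray directions Ln, Rn towards the detections
    Returns the midpoint of the shortest connecting segment between the rays.
    """
    a = l_dir / np.linalg.norm(l_dir)
    b = r_dir / np.linalg.norm(r_dir)
    d = np.asarray(r_center) - np.asarray(l_center)   # vector between camera centers
    ab = a @ b
    denom = 1.0 - ab ** 2
    if denom < 1e-9:                                   # near-parallel rays: unstable
        raise ValueError("rays are (almost) parallel")
    l_lambda = ((d @ a) - (d @ b) * ab) / denom        # equation [00020]
    r_lambda = ((d @ a) * ab - (d @ b)) / denom        # equation [00021]
    p_left = np.asarray(l_center) + l_lambda * a       # closest point on the left ray
    p_right = np.asarray(r_center) + r_lambda * b      # closest point on the right ray
    return 0.5 * (p_left + p_right)                    # screw candidate sc_i
```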

[0091] This algorithm not only reduces a potentially noisy set of screw point candidates into a distinct number of estimates, but also efficiently removes outliers due to missing support of other candidate points. Finally, Principal Component Analysis is applied to separate all points in P.sub.final into a point set of anatomically left points .sub.LP.sub.final and right points .sub.RP.sub.final, respectively, and to sort all points from cranial (j = 1) to caudal (j = N.sub.screws/2).
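The following sketch illustrates the on-the-fly clustering and the PCA-based left/right separation; the distance units, the assignment of principal axes to the cranio-caudal and left/right directions, and the handling of surplus clusters are assumptions made for illustration only.

```python
import numpy as np

def cluster_candidates(candidate_stream, n_screws, d_thresh=0.02, min_support=100):
    """Condense incoming 3D screw candidates into n_screws cluster centers.

    A candidate joins the nearest cluster if it lies within d_thresh (e.g. 2.0 cm,
    here in metres) of that cluster's center; otherwise it seeds a new cluster.
    Terminates once n_screws clusters each have at least min_support points.
    """
    clusters = []                                     # list of lists of points
    for p in candidate_stream:
        p = np.asarray(p, dtype=float)
        if clusters:
            centers = np.array([np.mean(c, axis=0) for c in clusters])
            dists = np.linalg.norm(centers - p, axis=1)
            nearest = int(np.argmin(dists))
            if dists[nearest] < d_thresh:
                clusters[nearest].append(p)
            else:
                clusters.append([p])
        else:
            clusters.append([p])
        supported = [c for c in clusters if len(c) >= min_support]
        if len(supported) >= n_screws:
            # weakly supported clusters (outliers) are implicitly discarded
            return np.array([np.mean(c, axis=0) for c in supported[:n_screws]])
    raise RuntimeError("not enough supported clusters found")

def split_left_right(p_final):
    """Separate clustered screw positions into anatomically left/right sets via PCA
    and sort each set along the main (assumed cranio-caudal) axis."""
    centered = p_final - p_final.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis_main, axis_lr = vt[0], vt[1]                 # principal axes of the point cloud
    side = centered @ axis_lr
    left, right = p_final[side < 0], p_final[side >= 0]
    left = left[np.argsort(left @ axis_main)]
    right = right[np.argsort(right @ axis_main)]
    return left, right
```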

[0092] According to particular embodiments, to allow interactive rates, some of the calculations need to be performed on a computing device such as a high-end workstation rather than onboard the camera-based positioning device 50. To this end, the stereo image data is streamed from the camera-based positioning device 50 to the computing device 100. The spatial positions of the pedicle screws 20.sub.1-n are calculated by the computing device 100 and are eventually sent back to the camera-based positioning device 50 for display and verification.

[0093] Once the spatial positions P.sub.1-n of the pedicle screws pj ∈ .sub.•Pfinal have been obtained, the tool operation guidance is generated to eventually guide surgeons.

[0094] Each bending step is characterized by a set of bending parameters, as depicted in FIG. 7A. To adjust the position and orientation of the reinforcing rod 10, the reinforcing rod 10 needs to be shifted along its main axis by dARP and rotated by α. The bending angle β of the reinforcing rod is proportional to the angular displacement of the lever of the bending bench 70.

[0095] The pedicle screw head tulips 22, where the reinforcing rod 10 will eventually be mounted, have an opening that is only marginally (e.g. 0.1 mm) wider than the diameter of the reinforcing rod 10 to guarantee a strong, rigid postoperative connection. This implies that the reinforcing rod 10 has to be straight in the positions where it will be mounted into the pedicle screw heads 22. To ensure straight rod segments between the screw heads 22, each screw head 22 p.sub.j in .sub.•P.sub.final is replaced by two equidistant control points which are added to the respective set of control points .sub.•P.sub.control:

[00022] ${}_{\bullet}P_{control} \ni \vec{p}_j \pm \mu\, \dfrac{\vec{p}_{j+1} - \vec{p}_{j-1}}{\lVert \vec{p}_{j+1} - \vec{p}_{j-1} \rVert} \quad \forall\, \vec{p}_j;\ j = 2, \ldots, \lvert {}_{\bullet}P_{final} \rvert - 1$

where μ is a heuristically determined parameter, e.g., set to 7.5 mm.
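A minimal sketch of this control point generation is given below; the treatment of the first and last screw head positions (kept unchanged) is an assumption of the sketch.

```python
import numpy as np

def control_points(p_final, mu=7.5):
    """Replace each interior screw head position p_j by the two control points
    p_j ± mu * (p_{j+1} - p_{j-1}) / ||p_{j+1} - p_{j-1}||   (mu e.g. 7.5 mm),
    so that the rod stays straight where it is seated in the tulips."""
    pts = [np.asarray(p_final[0], dtype=float)]        # end point kept as-is (assumption)
    for j in range(1, len(p_final) - 1):                # interior screw heads
        direction = np.asarray(p_final[j + 1], dtype=float) - np.asarray(p_final[j - 1], dtype=float)
        direction /= np.linalg.norm(direction)
        pts.append(np.asarray(p_final[j], dtype=float) - mu * direction)
        pts.append(np.asarray(p_final[j], dtype=float) + mu * direction)
    pts.append(np.asarray(p_final[-1], dtype=float))    # end point kept as-is (assumption)
    return np.array(pts)
```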

[0096] From this set of control points, all bending parameters can be calculated for each bending step S.sub.1-n. The k.sup.th bend is characterized by the required bending angle β.sub.k of the reinforcing rod, the axial reorientation angle α.sub.k and the distance dARP.sub.k by which the reinforcing rod 10 needs to be advanced, as illustrated in FIG. 7A. Each bending angle β.sub.k is found by iterating over .sub.•P.sub.control to determine the angles using the dot product:

[00023] $\beta_k = \arccos\!\left(\dfrac{\vec{p}_{k+1} - \vec{p}_k}{\lVert \vec{p}_{k+1} - \vec{p}_k \rVert} \cdot \dfrac{\vec{p}_{k-1} - \vec{p}_k}{\lVert \vec{p}_{k-1} - \vec{p}_k \rVert}\right) \quad \forall\, \vec{p}_k;\ k = 2, \ldots, \lvert {}_{\bullet}P_{control} \rvert - 1$

[0097] The axial reorientation angle α.sub.k for the k.sup.th bend is calculated by taking four control points into account and initially generating the following three vectors:

[00024] ${}_{L}\vec{n}_k = \dfrac{\vec{p}_{k-1} - \vec{p}_k}{\lVert \vec{p}_{k-1} - \vec{p}_k \rVert}, \quad {}_{C}\vec{n}_k = \dfrac{\vec{p}_{k+1} - \vec{p}_k}{\lVert \vec{p}_{k+1} - \vec{p}_k \rVert}, \quad {}_{R}\vec{n}_k = \dfrac{\vec{p}_{k+2} - \vec{p}_{k+1}}{\lVert \vec{p}_{k+2} - \vec{p}_{k+1} \rVert} \quad \forall\, \vec{p}_k;\ k = 2, \ldots, \lvert {}_{\bullet}P_{control} \rvert - 2$

[0098] In a next step, the vectors .sub.Ln.sub.k and .sub.Rn.sub.k are projected onto the plane defined by the normal vector .sub.Cn.sub.k, resulting in the projected vectors .sub.Lñ.sub.k and .sub.Rñ.sub.k. Lastly, the axial reorientation angle for the k.sup.th bending step is found by:

[00025] $\alpha_k = \arccos\!\left({}_{L}\tilde{n}_k \cdot {}_{R}\tilde{n}_k\right)$

[0099] The distance dARP.sub.k by which the reinforcing rod 10 needs to be displaced in the k.sup.th bending step S.sub.k is determined by the Euclidean distance between the last and the current control point.
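The computation of β.sub.k, α.sub.k and dARP.sub.k from the control points can be sketched as follows; index handling is simplified (in the text β.sub.k is defined for k = 2, ..., |P.sub.control|−1 and α.sub.k for k = 2, ..., |P.sub.control|−2) and angles are returned in radians.

```python
import numpy as np

def bending_parameters(p_control):
    """Per bend: bending angle beta_k, axial reorientation alpha_k, advance dARP_k."""
    p = np.asarray(p_control, dtype=float)
    unit = lambda v: v / np.linalg.norm(v)
    params = []
    for k in range(1, len(p) - 2):                      # simplified interior index range
        # bending angle from the dot product of the adjacent segment directions [00023]
        beta = np.arccos(np.clip(unit(p[k + 1] - p[k]) @ unit(p[k - 1] - p[k]), -1, 1))
        # axial reorientation: project the neighbouring directions onto the plane
        # normal to the central segment direction and measure the angle [00024]/[00025]
        ln = unit(p[k - 1] - p[k])
        cn = unit(p[k + 1] - p[k])
        rn = unit(p[k + 2] - p[k + 1])
        ln_p = unit(ln - (ln @ cn) * cn)
        rn_p = unit(rn - (rn @ cn) * cn)
        alpha = np.arccos(np.clip(ln_p @ rn_p, -1, 1))
        # advance distance between the previous and the current control point
        d_arp = np.linalg.norm(p[k] - p[k - 1])
        params.append({"beta": beta, "alpha": alpha, "d_arp": d_arp})
    return params
```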

[0100] The lever angle θ.sub.k of the bending bench 70 depends on the desired rod bending angle β.sub.k. Any bending can be considered a combination of elastic and plastic deformation of the reinforcing rod 10. Small lever angles result in no permanent deformation of the rod 10 due to elastic deformation. The relationship between any desired rod angle β and any applied lever angle θ, however, can be approximated as linear as soon as plastic deformation starts to occur. This results in a transfer function of the form β = f(θ) = m θ + t, where β is the desired bending angle of the reinforcing rod, θ corresponds to the difference in lever angle from start to end position of the bend, and m and t denote the slope and offset of the linear model, respectively. Since the end of the lever 71 describes a circular movement with respect to the center of rotation, the relationship between the straight distance d.sub.Lever traveled by the tip of the lever and the resulting difference in lever angle θ is given by the equation of a chord:

[00026] $d_{Lever} = 2\, r\, \sin\!\left(\frac{\theta}{2}\right)$

where r denotes the straight-line distance from the lever base to the lever tip, as depicted in FIG. 7A. The respective resulting bending angles β of the reinforcing rod may be determined from a CT scan to estimate the aforementioned linear transfer function in a least-squares sense.
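For illustration, the linear transfer function and the chord relation can be combined as sketched below; the slope m, offset t and lever length r are calibration values for a particular bending bench and the numbers shown are arbitrary.

```python
import numpy as np

def lever_angle(beta_deg, m, t):
    """Invert the (assumed calibrated) linear transfer function beta = m*theta + t
    to obtain the lever angle theta needed for a desired rod bending angle beta."""
    return (beta_deg - t) / m

def lever_tip_travel(theta_deg, r):
    """Straight-line distance travelled by the lever tip (chord of the circular arc)
    for a lever angle theta and lever length r:  d_Lever = 2*r*sin(theta/2)."""
    return 2.0 * r * np.sin(np.radians(theta_deg) / 2.0)

# Illustrative values only; m, t and r must be estimated for the actual bench.
theta = lever_angle(beta_deg=15.0, m=0.8, t=-2.0)
d_lever = lever_tip_travel(theta, r=180.0)  # r in mm
```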