METHOD FOR DETERMINING THE IMAGE POSITION OF A MARKER POINT IN AN IMAGE OF AN IMAGE SEQUENCE

20200394805 · 2020-12-17

Abstract

A method for determining the image position of a marker point (3) in an image of an image sequence including the method steps of: setting (S2) a marker point in a first image (1) of the image sequence, determining (S4) a transformation at least between corresponding portions of the first image (1) and a second image (4) of the image sequence, transforming (S5) at least the portion of the first image (1) or the portion of the second image (4) on the basis of the transformation determined, localizing (S6) the marker point (3) in the transformed portion of the image (4), and mapping (S7) the localized marker point into the second image (4) on the basis of the determined transformation.

Claims

1. A method for determining an image position of a marker point (3) in an image of an image sequence, comprising the following steps: setting (S2) a marker point in a first image (1) of the image sequence, determining (S4) a transformation at least between corresponding portions of the first image (1) and a second image (4) of the image sequence, transforming (S5) at least a portion of the first image (1) or a portion of the second image (4) based on the determined transformation to form a transformed portion of the image (4), localizing (S6) the marker point (3) in the transformed portion of the image (4), and mapping (S7) the localized marker point into the second image (4) based on the determined transformation.

2. The method as claimed in claim 1, further comprising at least one of pictorially representing (S8) of the mapped marker point (3) in the second image (4) or outputting of the image position of the mapped marker point (3).

3. The method as claimed in claim 2, wherein the outputting of the image position of the mapped marker point (3) comprises outputting coordinates.

4. The method as claimed in claim 1, wherein the marker point (3) is set manually.

5. The method as claimed in claim 1, wherein the marker point (3) is set in a still of the image sequence.

6. The method as claimed in claim 1, further comprising using a geometric transformation with a plurality of degrees of freedom for determining (S4) the transformation.

7. The method as claimed in claim 6, wherein the geometric transformation has eight degrees of freedom.

8. The method as claimed in claim 1, further comprising using an algorithm for at least one of object tracking or feature detection for localizing (S6) the marker point (3).

9. The method as claimed in claim 1, wherein the transformation is performed on the second image (4).

10. The method as claimed in claim 1, further comprising, prior to the localization (S6), carrying out a check (S12) as to whether the image position of the marker point (3) is located within the second image (4).

11. The method of claim 10, wherein the image position of the marker point (3) is initially transferred (S11) into the transformed second image (4) and mapped into the original second image (4) using the transformation.

12. The method of claim 10, further comprising, prior to the localization (S6), carrying out a check (S9) as to whether the transformation has been found.

13. The method as claimed in claim 1, further comprising checking (S14) whether the similarity with which a corresponding pixel is found is sufficient.

14. The method of claim 13, wherein the checking (S14) is carried out during localization (S6), and a threshold for the similarity is defined.

15. The method as claimed in claim 1, wherein the first image (1) does not change from image to image but remains constant for a certain number of successive images, such that each of the second images (4) is compared with the first image (1).

16. The method as claimed in claim 1, wherein a plurality of geometrically related marker points (3) are used.

17. An apparatus for image processing, comprising a processor configured for carrying out the method as claimed in claim 1.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0041] The invention is explained in more detail below on the basis of an advantageous exemplary embodiment with reference to the appended drawings.

[0042] In the figures:

[0043] FIG. 1: shows a flowchart of a method according to the invention,

[0044] FIG. 2: shows a flowchart of a method according to the invention with error detection,

[0045] FIG. 3: shows a first image with a marker point,

[0046] FIG. 4: shows a second image,

[0047] FIG. 5: shows the transformed second image with a localized marker point,

[0048] FIG. 6: shows the second image with a mapped marker point,

[0049] FIG. 7: shows a second image showing a different scene,

[0050] FIG. 8: shows a second image, in which the marker point is located outside of the visual range, and

[0051] FIG. 9: shows a second image, in which the marker point is concealed.

DETAILED DESCRIPTION

[0052] FIG. 1 shows a flowchart of a method according to the invention. By way of example, the method can be carried out in a video controller of an endoscope or in any other image processing unit, in particular an FPGA.

[0053] A first image is read in a first step S1. By way of example, a first image 1 is shown in FIG. 3. By way of example, the first image 1 shows various tissue structures 2. In a marking step S2, a marker point 3, which should currently be tracked, is set in the first image 1. Naturally, a plurality of marker points could also be set. For reasons of simplicity, only one marker point is shown below in each case.

[0054] Now, an n-th image is read in a step S3. FIG. 4 shows such an n-th image 4. This n-th image 4 is rotated in relation to the first image 1. Here, the relative position of the first image 1 is illustrated using dashed lines.

[0055] Now, a transformation that maps the rotated image 4 onto the first image 1 is determined in a further step S4. By way of example, a matrix transformation with a plurality of unknowns, in particular eight unknowns, can be used to find a suitable transformation. By solving the transformation equations, it is thus possible to take account of, e.g., rotation, translation, scaling, and perspective changes.
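For illustration, the transformation with eight unknowns described above can be determined by a direct linear solve from four point correspondences. The following Python/NumPy function is a minimal sketch of such a solve, not part of the patent disclosure; the function name and calling convention are assumptions.

```python
import numpy as np

def estimate_homography(src, dst):
    # Build the 8x8 linear system: each of the four correspondences
    # contributes two equations in the eight unknowns h11..h32 (h33 = 1).
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y])
        b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y])
        b.append(yp)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    # Append h33 = 1 and reshape into the 3x3 transformation matrix.
    return np.append(h, 1.0).reshape(3, 3)
```

Because the system has eight unknowns, it can account for rotation, translation, scaling, and perspective changes, as stated above; practical systems would estimate the transformation robustly from many automatically matched features rather than four hand-picked points.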

[0056] After the transformation has been determined, the n-th image 4 is transformed with the aid of this transformation in a next step S5. The result of the transformation is shown in exemplary fashion in FIG. 5. The transformed image 4 now corresponds to the first image 1 in terms of alignment and scaling.
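Step S5 can be illustrated by inverse warping: for every pixel of the output image, the inverse transformation is applied to its coordinate and the source image is sampled there. The sketch below uses nearest-neighbour sampling for brevity; a real implementation would interpolate and vectorize, and the function name is an illustrative assumption.

```python
import numpy as np

def warp_image(image, H, out_shape):
    # Inverse warping: apply H^-1 to each output coordinate and sample
    # the source image (nearest neighbour); out-of-range pixels stay 0.
    Hinv = np.linalg.inv(H)
    out = np.zeros(out_shape, dtype=image.dtype)
    for y in range(out_shape[0]):
        for x in range(out_shape[1]):
            v = Hinv @ np.array([x, y, 1.0])
            xs, ys = v[0] / v[2], v[1] / v[2]
            xi, yi = int(round(xs)), int(round(ys))
            if 0 <= yi < image.shape[0] and 0 <= xi < image.shape[1]:
                out[y, x] = image[yi, xi]
    return out
```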

[0057] Now, the marker point 3 is localized in the transformed image 4 in a localization step S6. This localization can be implemented with the aid of known search algorithms, for example for object identification.

[0058] Here, the search can be limited to a restricted search region 5, which is defined around the position of the marker point 3 in the first image 1.
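One simple way to realize the localization within such a restricted search region is normalized cross-correlation template matching, sketched below; this is an illustrative assumption, since the description leaves the choice of search algorithm open.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equally sized patches.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def localize(template, image, guess, radius):
    # Scan top-left positions within `radius` pixels of `guess` (the
    # restricted search region) and keep the most similar patch.
    th, tw = template.shape
    gy, gx = guess
    best_score, best_pos = -2.0, None
    for y in range(max(0, gy - radius), min(image.shape[0] - th, gy + radius) + 1):
        for x in range(max(0, gx - radius), min(image.shape[1] - tw, gx + radius) + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score
```

The returned score can also serve the similarity check described later for the error handling.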

[0059] Now, the found marker point 3 is transformed into the n-th image 4 in a mapping step S7.

[0060] Finally, the marker point 3 can be presented in the n-th image 4 in a presentation step S8, or the coordinates, for example, could be output. As a result, the marker point 3 in the n-th image 4 is located exactly at the point originally defined in the first image 1, as shown in FIG. 6.

[0061] Then, the procedure is repeated with the (n+1)-th image. As a rule, the images are taken from a video sequence. In particular, the processing of the image signals is preferably implemented so quickly that the marker can be tracked in the live video signal.

[0062] It is particularly expedient for the first image to be kept for the running image sequence such that all further images of the image sequence are respectively related to said first image.

[0063] Alternatively, the first image could also, in principle, be reset after a certain amount of time and/or after a certain number of elapsed images or be set to the last second image of the interval or to the last image with the visible marker point.

[0064] Various error situations which make it impossible to track the marker point may arise when applying the method, for example in endoscopy. FIG. 2 shows a flowchart of a method according to the invention with corresponding error detection and error handling. The method is based on the method in FIG. 1, with some of the steps not being shown here for reasons of simplicity.

[0065] Here, too, a first image 1 is initially loaded in step S1. A check is carried out in a transformation monitoring step S9 as to whether a transformation that maps an n-th image 4 into the first image 1 has been found.

[0066] If not, a scene error of the camera is identified in a scene error step S10. This is the case, in particular, if there was a significant change in the camera position. By way of example, such a situation is illustrated in FIG. 7. Here, the image shows different tissue structures, which is why a transformation is not possible. A corresponding error message is presented in a message step S16. This may also comprise the superimposition of an error symbol 6, as shown in exemplary fashion in FIG. 7. Moreover, an error message can be displayed in plain text.

[0067] If so, the marker points 3 marked in the first image 1 are transferred into the transformed n-th image 4 and mapped into the n-th image 4 with the aid of the transformation in a transfer step S11.

[0068] In a plausibility step S12, a check is carried out as to whether the marker points 3 transferred thus are located within a valid image region, in particular within the n-th image 4.

[0069] If not, it is identified that at least one marker point 3 is located outside of the image region of the n-th image 4 in an edge error step S13. The message step S16 also follows in this case. FIG. 8 shows such a situation, in which the marker point 3 is located outside of the visible image region.

[0070] If so, the localization step S6 follows here. Now, a check is carried out in a similarity test step S14 as to whether the similarity to the first image 1 is sufficiently large for the ascertained point. A threshold for similarity could be defined in this case.

[0071] If not, it is identified that although the marker point is located in the valid image region it is nevertheless not visible, in particular because it is concealed, in a point error step S15. FIG. 9 shows such a situation, in which the marker point 3 is concealed by a medical device 7, for example. The message step S16 follows.

[0072] If so, this is followed by the mapping step S7 and the presentation step S8.
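The decision chain of FIG. 2 can be summarized as three successive checks. The following sketch is an illustrative reading of steps S9, S12, and S14; the function name, the message strings, and the threshold value 0.7 are assumptions, not part of the disclosure.

```python
def check_and_map(H_found, pt, img_shape, match_score, threshold=0.7):
    # S9:  was a transformation found at all?
    if not H_found:
        return None, "scene error (S10): no transformation found"
    # S12: is the transferred marker point inside the valid image region?
    x, y = pt
    h, w = img_shape
    if not (0 <= x < w and 0 <= y < h):
        return None, "edge error (S13): marker outside the image region"
    # S14: is the similarity of the localized point sufficient?
    if match_score < threshold:
        return None, "point error (S15): marker concealed or not visible"
    # Otherwise the marker can be mapped and presented (S7, S8).
    return (x, y), "ok"
```

In each error branch, the method of FIG. 2 proceeds to the message step S16 instead of presenting the marker.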

LIST OF REFERENCE SIGNS

[0073] 1 First image
[0074] 2 Tissue structures
[0075] 3 Marker point
[0076] 3 Transformed marker point
[0077] 4 n-th image
[0078] 4 Transformed n-th image
[0079] 5 Search region
[0080] 6 Error symbol
[0081] 7 Medical device
[0082] S1-S16 Method steps