METHOD FOR MARKING AN IMAGE REGION IN AN IMAGE OF AN IMAGE SEQUENCE
20200394757 · 2020-12-17
Inventors
- Martin Bohning (Herzogsweiler, DE)
- Frank Hassenpflug (Villingen-Schwenningen, DE)
- Andreas Hille (Villingen-Schwenningen, DE)
- Michael Kronenthaler (Villingen-Schwenningen, DE)
CPC classification
G06T7/246
Abstract
A method for marking an image region in an image of an image sequence includes the steps of: identifying (S2) an image region (3) in a first image (1) of the image sequence, determining (S4) a transformation between the first image (1) and a second image (4) of the image sequence, transforming (S5) the image region (3) on the basis of the determined transformation, and presenting (S6) the transformed image region (3) in the second image (4).
Claims
1. A method for marking an image region in an image of an image sequence, comprising the steps of: identifying (S2) an image region (3) in a first image (1) of the image sequence, determining (S4) a transformation between the first image (1) and a second image (4) of the image sequence, transforming (S5) the image region (3) based on the determined transformation, and presenting (S6) the transformed image region (3) in the second image (4).
2. The method as claimed in claim 1, wherein the steps of determining (S4), transforming (S5), and presenting (S6) are repeated for further images in the image sequence.
3. The method as claimed in claim 1, wherein the presenting (S6) of the transformed image region (3) comprises only a presentation of a perimeter of the image region (3).
4. The method as claimed in claim 1, wherein the transformed image region (3) is presented utilizing a superposition of the image region (3) in the second image (4).
5. The method as claimed in claim 4, wherein the superposition in the second image (4) is implemented by alpha blending.
6. The method as claimed in claim 1, wherein the transformation is determined based on correspondences (5) between image content of the first image (1) and of the second image (4).
7. The method as claimed in claim 6, wherein the image content is located outside of the identified region (3).
8. The method as claimed in claim 1, wherein a geometric transformation with a plurality of degrees of freedom is used for determining the transformation.
9. The method as claimed in claim 8, wherein the geometric transformation is a matrix transformation with eight degrees of freedom.
10. The method as claimed in claim 1, wherein the identification in the first image (1) is implemented based on a colored marker or any other marker (2) in the image region (3).
11. The method as claimed in claim 1, wherein the image region (3) is marked (2) in color by fluorescence.
12. The method as claimed in claim 11, wherein the fluorescence is provided by addition of fluorochromes.
13. The method as claimed in claim 1, wherein the identification is implemented in automated form by recording an image using a different illumination source.
14. The method as claimed in claim 1, wherein the identification comprises buffering of the image region (3).
15. The method as claimed in claim 1, wherein the identifying (S2) of the image region (3) in the first image (1) of the image sequence is carried out by segmentation.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] The invention is explained in more detail below on the basis of an exemplary embodiment with reference to the appended drawings.
[0037] In the figures:
DETAILED DESCRIPTION
[0045] A first image 1 of an image sequence is provided in a first step S1. Such a first image 1 is shown in exemplary fashion in
[0046] In this case,
[0047] An image region 3 which should be tracked, i.e., kept marked, in subsequent images is now identified in the first image 1 in an identification step S2. In the example, the identification can be based on the colored marking, for example by segmentation. This can be automated by recording an image under UV light, in which only the fluorescent image region is visible.
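The identification step S2 is not spelled out in code in this disclosure; a minimal illustrative sketch in Python/NumPy follows, assuming a simple green-channel threshold stands in for the fluorescence segmentation (the helper name and the threshold values are hypothetical, not taken from the patent):

```python
import numpy as np

def segment_marked_region(image, threshold=200):
    """Identify the marked image region (3) by simple color
    thresholding: pixels whose green channel is bright and clearly
    dominates red and blue are treated as the fluorescent marking (2).
    `image` is an (H, W, 3) uint8 RGB array; returns a boolean mask.
    """
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    return (g > threshold) & (g > r + 40) & (g > b + 40)

# Synthetic first image (1): grey background with a bright green patch
# standing in for the piece of tissue marked by color (2).
first_image = np.full((120, 160, 3), 90, dtype=np.uint8)
first_image[40:70, 60:100] = (30, 230, 30)

mask = segment_marked_region(first_image)
```

In practice the segmentation would operate on the specially illuminated (e.g. UV) recording rather than a plain RGB frame; the thresholding above merely illustrates the principle.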
[0048] The identified image region 3 (see
[0049] A second n-th image of the image sequence is loaded in an image loading step S3. Such a second image 4 is shown in exemplary fashion in
[0050] Now, a geometric transformation that maps the first image 1 onto the second image 4 is determined in a determination step S4. Here, correspondences 5 between image content of the two images are sought, the image content preferably being located outside of the identified image region.
[0051] By way of example, the correspondences 5 could be image features which can be easily and reliably identified by an algorithm. By way of example, to this end, known object tracking algorithms or algorithms for feature detection, for instance for edge detection, can be applied. With the aid of a matrix transformation, for example with eight degrees of freedom, the transformation can be determined on the basis of the relative position of the corresponding image content.
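A matrix transformation with eight degrees of freedom corresponds to a planar homography. The following is a minimal sketch, assuming the point correspondences have already been delivered by a feature detector; the direct linear transform used here is one standard way to solve for such a matrix, not a method prescribed by the patent:

```python
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the 3x3 matrix transformation with eight degrees of
    freedom from at least four point correspondences (5) via the
    direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest
    # singular value of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale: 9 entries, 8 degrees of freedom

# Four correspondences related by a pure shift of (5, 3) pixels,
# e.g. caused by a small camera movement between the two images.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(5.0, 3.0), (6.0, 3.0), (5.0, 4.0), (6.0, 4.0)]
H = estimate_homography(src, dst)
```

With noisy real correspondences one would use many more than four points and a robust estimator (e.g. RANSAC) around this least-squares core.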
[0052] Now, the image region 3 is transformed in a transformation step S5 using the previously determined transformation. The result is shown in exemplary fashion in
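The transformation step S5 can be sketched as mapping the perimeter points of the identified image region through the estimated matrix in homogeneous coordinates (the shift matrix below is a hypothetical example transformation):

```python
import numpy as np

def transform_region(H, points):
    """Apply the determined transformation H (step S5) to the perimeter
    points of the identified image region (3), mapping them into the
    coordinates of the second image (4)."""
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    mapped = pts @ H.T                      # homogeneous coordinates
    return mapped[:, :2] / mapped[:, 2:3]   # back to pixel coordinates

# Hypothetical transformation: shift by (5, 3) pixels.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
region = transform_region(H, [[60.0, 40.0], [100.0, 40.0],
                              [100.0, 70.0], [60.0, 70.0]])
```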
[0053] Finally, the transformed image region 3 is presented in superposed fashion in the second image 4 in a presentation step S6. By way of example, an alpha blending method can be applied to this end.
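A minimal sketch of the alpha-blending presentation in step S6, assuming the transformed region is available as a boolean mask in the coordinates of the second image (the mask construction and colour are illustrative):

```python
import numpy as np

def alpha_blend(frame, mask, color, alpha=0.4):
    """Present the transformed image region (step S6) by superposing a
    translucent colour on the second image (4) wherever `mask` is True:
    out = (1 - alpha) * frame + alpha * color."""
    out = frame.astype(float)
    out[mask] = (1.0 - alpha) * out[mask] + alpha * np.asarray(color, float)
    return out.astype(np.uint8)

# Tiny second image (4) with a one-pixel region mask, for illustration.
second_image = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 1] = True
blended = alpha_blend(second_image, mask, color=(0, 255, 0), alpha=0.5)
```

Per claim 3, the mask could equally cover only the perimeter of the region rather than its full area.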
[0054] The result of the presentation S6 is shown in exemplary fashion in
[0055] Therefore, the method according to the invention makes it possible to keep an image region, once identified, visible in a running video signal or in an image sequence even though the image content changes continuously, for example on account of camera movements. Even discontinuous changes pose no problem as long as a geometric transformation between the images can be determined.
[0056] Then, the method is continued with the provision of the next image in the image loading step S3.
[0057] In particular, it is advantageous here if the method always determines the transformation from the first image and not from possibly preceding images, so that registration errors do not accumulate over the sequence.
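The repeated steps S3 to S6 can be sketched as a loop in which every n-th image is registered directly against the first image. The two helper callables are placeholders for the correspondence search and the matrix estimation sketched earlier; they are not named in the patent:

```python
import numpy as np

def track_region(first_image, frames, region_points,
                 find_correspondences, estimate_homography):
    """Repeat steps S3-S6 for every further image: the transformation
    is always determined between the first image (1) and the current
    image, never chained through intermediate frames, so errors do not
    accumulate."""
    for frame in frames:                                      # S3
        src, dst = find_correspondences(first_image, frame)   # S4
        H = estimate_homography(src, dst)                     # S4
        pts = np.hstack([region_points,
                         np.ones((len(region_points), 1))])
        mapped = pts @ H.T                                    # S5
        yield mapped[:, :2] / mapped[:, 2:3]                  # S6 input

# Stubbed demonstration: every frame is shifted by (5, 3) pixels.
shift = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, 3.0], [0.0, 0.0, 1.0]])
tracked = list(track_region(None, [object(), object()],
                            np.array([[0.0, 0.0], [10.0, 10.0]]),
                            lambda ref, frame: (None, None),
                            lambda src, dst: shift))
```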
LIST OF REFERENCE SIGNS
[0058] 1 First image
[0059] 2 Piece of tissue marked by color
[0060] 3 Identified image region
[0061] 3 Transformed, marked image region
[0062] 4 Second image
[0063] 5 Correspondence
[0064] S1-S6 Method steps