TRANSFER OF ADDITIONAL INFORMATION AMONG CAMERA SYSTEMS

20210329219 · 2021-10-21

    Abstract

    A method for enriching a target image, which a target camera system had recorded of a scene, with additional information with which at least one source image, which a source camera system had recorded of the same scene from a different perspective, has already been enriched. The method includes: assigning, to source pixels of the source image, 3D locations in three-dimensional space that correspond to the positions of the source pixels in the source image; assigning additional information that is assigned to source pixels to the respective associated 3D locations; assigning to the 3D locations those target pixels of the target image whose positions in the target image correspond to the 3D locations; and assigning additional information that is assigned to 3D locations to the associated target pixels. A method for training an AI module is also described.

    Claims

    1-10. (canceled)

    11. A method for enriching a target image, which a target camera system had recorded of a scene, with additional information which has already been used to enrich at least one source image that a source camera system had recorded of the same scene from a different perspective, the method comprising the following steps: assigning, to source pixels of the source image, 3D locations in a three-dimensional space which correspond to positions of the source pixels in the source image; assigning additional information, which is assigned to the source pixels, to the respective assigned 3D locations; assigning to the 3D locations those target pixels of the target image whose positions in the target image correspond to the 3D locations; and assigning the additional information, which is assigned to the 3D locations, to the assigned target pixels.

    12. The method as recited in claim 11, wherein, for at least one source pixel, the assigned 3D location is determined from a time program in accordance with which at least one source camera of the source camera system moves in space.

    13. The method as recited in claim 11, wherein the source camera system has at least two source cameras.

    14. The method as recited in claim 13, wherein, for at least one source pixel, the assigned 3D location is determined by stereoscopic evaluation of the source images recorded by the two source cameras.

    15. The method as recited in claim 13, wherein source pixels from source images recorded by both source cameras are merged in order to assign additional information to more target pixels of the target image.

    16. The method as recited in claim 11, wherein the source image and the target image were recorded simultaneously.

    17. The method as recited in claim 11, wherein the source camera system and the target camera system are selected such that they are mounted on the same vehicle in a fixed orientation relative to each other.

    18. A method for training an AI module, which assigns additional information to an image recorded by a camera system and/or to pixels of the image through processing in an internal processing chain, performance of the internal processing chain being defined by parameters, the method comprising the following steps: inputting a learning image into the AI module; comparing additional information output by the AI module to additional learning information assigned to the learning image; and using a result of the comparison to adapt the parameters; wherein the additional learning information is assigned at least partially to pixels of the learning image as target pixels, by: assigning, to source pixels of a source image, 3D locations in a three-dimensional space which correspond to positions of the source pixels in the source image, assigning additional information, which is assigned to the source pixels, to the respective assigned 3D locations, assigning to the 3D locations those target pixels of the learning image whose positions in the learning image correspond to the 3D locations, and assigning the additional information, which is assigned to the 3D locations, to the assigned target pixels.

    19. The method as recited in claim 18, wherein a semantic classification of image pixels is selected as the additional information.

    20. A non-transitory machine-readable storage medium on which is stored a computer program including machine-readable instructions for enriching a target image, which a target camera system had recorded of a scene, with additional information which has already been used to enrich at least one source image that a source camera system had recorded of the same scene from a different perspective, the machine-readable instructions, when executed by a computer or control unit, causing the computer or the control unit to perform the following steps: assigning, to source pixels of the source image, 3D locations in a three-dimensional space which correspond to positions of the source pixels in the source image; assigning additional information, which is assigned to the source pixels, to the respective assigned 3D locations; assigning to the 3D locations those target pixels of the target image whose positions in the target image correspond to the 3D locations; and assigning the additional information, which is assigned to the 3D locations, to the assigned target pixels.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0031] FIG. 1 shows an exemplary embodiment of method 100, in accordance with the present invention.

    [0032] FIG. 2 shows an exemplary source image 21.

    [0033] FIG. 3 shows an exemplary transformation of source image 21 into a point cloud in the three-dimensional space.

    [0034] FIG. 4 shows an exemplary target image 31 including additional information 4, 41, 42 transferred from source image 21, in accordance with an example embodiment of the present invention.

    [0035] FIG. 5 shows an exemplary configuration of a source camera system 2 and of a target camera system 3 on a vehicle 6, in accordance with an example embodiment of the present invention.

    [0036] FIG. 6 shows an exemplary embodiment of method 200, in accordance with the present invention.

    DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

    [0037] In accordance with FIG. 1, in step 110 of method 100, 3D locations 5 in three-dimensional space are assigned to source pixels 21a of a source image 21. In accordance with block 111, the 3D location 5 associated with at least one source pixel 21a may be determined from a time program in accordance with which at least one source camera of source camera system 2 moves in space. In accordance with block 112, alternatively or in combination therewith, the associated 3D location 5 may be determined for at least one source pixel 21a by stereoscopically evaluating source images 21 recorded by two source cameras.
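
    By way of a purely illustrative Python sketch (not part of the present invention), block 112 may, for example, be realized for a rectified stereo pair as follows: each source pixel 21a is unprojected to a 3D location 5 using the depth recovered from the stereo disparity. The focal length f, stereo baseline b, and principal point (cx, cy) are assumed pinhole calibration parameters and do not appear in the patent itself.

        import numpy as np

        def unproject_stereo(disparity, f, b, cx, cy):
            """Map a dense disparity image (H x W) to one 3D location per
            source pixel, expressed in the source camera frame."""
            h, w = disparity.shape
            u, v = np.meshgrid(np.arange(w), np.arange(h))
            z = f * b / np.maximum(disparity, 1e-6)   # depth from disparity
            x = (u - cx) * z / f
            y = (v - cy) * z / f
            return np.stack([x, y, z], axis=-1)       # shape (H, W, 3)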

    [0038] The latter option presupposes that a source camera system having at least two source cameras was selected in step 105. Moreover, in accordance with optional step 106, a source image 21 and a target image 31 may be selected that have been recorded simultaneously. Furthermore, in accordance with optional step 107, a source camera system 2 and a target camera system 3 may be selected which are mounted on one and the same vehicle 6 in a fixed orientation 61 relative to each other.

    [0039] In step 120, additional information 4, 41, 42, which is assigned to source pixels 21a of source image 21, is assigned to the respective associated 3D locations 5. In step 130, those target pixels 31a of target image 31 whose positions in target image 31 correspond to 3D locations 5 are assigned to the 3D locations. In step 140, additional information 4, 41, 42, which is assigned to 3D locations 5, is assigned to the associated target pixels 31a.
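
    Steps 120 through 140 may be illustrated by the following hedged Python sketch, which continues the stereo example above: the labeled 3D locations 5 are projected into the target camera using assumed pinhole intrinsics K_t and an assumed rigid source-to-target transform (R, t); each projected point then passes its additional information on to the target pixel 31a it lands on. Occlusion handling is omitted for brevity.

        import numpy as np

        def transfer_labels(points_3d, labels, R, t, K_t, h_t, w_t):
            """Steps 120-140: carry per-pixel additional information from the
            source image into the target image via the 3D point cloud; -1
            marks target pixels that receive no information."""
            p = points_3d.reshape(-1, 3) @ R.T + t       # into target camera frame
            lab = labels.reshape(-1)
            front = p[:, 2] > 0                          # keep points in front of camera
            p, lab = p[front], lab[front]
            uv = p @ K_t.T                               # homogeneous pixel coordinates
            u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
            v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
            ok = (u >= 0) & (u < w_t) & (v >= 0) & (v < h_t)
            out = np.full((h_t, w_t), -1, dtype=int)
            out[v[ok], u[ok]] = lab[ok]                  # step 140: assign information
            return out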

    [0040] This process is explained in greater detail in FIGS. 2 through 4.

    [0041] FIG. 2 shows a two-dimensional source image 21, having coordinate directions x and y, that a source camera system 2 recorded of a scene 1. Source image 21 was semantically segmented. In the example shown in FIG. 2, the additional information 4, 41 that this partial area belongs to a vehicle 11 present in scene 1 has thus been acquired for one partial area of source image 21. The additional information 4, 42 that these partial areas belong to lane markings 12 present in scene 1 has been acquired for other partial areas of source image 21. An individual pixel 21a of source image 21 is marked in FIG. 2 by way of example.

    [0042] In FIG. 3, source pixels 21a are transformed into 3D locations 5 in three-dimensional space; this is denoted by reference numeral 5 for the source pixel 21a marked in FIG. 2. Where the additional information 4, 41 that a source pixel 21a belongs to a vehicle 11 was stored for that source pixel 21a, this additional information 4, 41 was also assigned to the corresponding 3D location 5. Where the additional information 4, 42 that a source pixel 21a belongs to a lane marking 12 was stored for that source pixel 21a, this additional information 4, 42 was also assigned to the corresponding 3D location 5. This is illustrated by the different symbols which represent the respective 3D locations 5 in the point cloud shown in FIG. 3.

    [0043] FIG. 3 contains only as many 3D locations 5 as there are source pixels 21a in source image 21. For that reason, the three-dimensional space in FIG. 3 is not completely filled, but is only sparsely occupied by the point cloud. In particular, only the rear section of vehicle 11 is shown, since only this section is visible in FIG. 2.

    [0044] Also indicated in FIG. 3 is that source image 21 shown in FIG. 2 was recorded from perspective A. As a purely illustrative example with no claim to real applicability, target image 31 is recorded from perspective B drawn in FIG. 3.

    [0045] This exemplary target image 31 is shown in FIG. 4. It is marked here by way of example that source pixel 21a was ultimately assigned indirectly, via the associated 3D location 5, to target pixel 31a. In the same way, the additional information 4, 41, 42 is assigned indirectly, via the associated 3D locations 5, to all target pixels 31a for which FIG. 2 contains an associated source pixel 21a with stored additional information 4, 41, 42. The work invested in the semantic segmentation of source image 21 is thus reused in full.

    [0046] As indicated in FIG. 4, more of vehicle 11 is visible in the perspective B shown here than in perspective A of the source image. However, the additional information 4, 41 that source pixels 21a belong to vehicle 11 was only recorded for the rear section of vehicle 11 visible in FIG. 2. The front section of vehicle 11, drawn with dashed lines in FIG. 4, is therefore not provided with this additional information 4, 41. This deliberately contrived example shows that it is advantageous to combine source images 21 from a plurality of source cameras in order to provide as many target pixels 31a of target image 31 as possible with additional information 4, 41, 42.
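
    The merging of source images from a plurality of source cameras, as in claim 15, can be sketched in the same illustrative style (again assuming the -1 convention for "no information" and NumPy label maps from the transfer sketch above):

        def merge_label_maps(map_a, map_b):
            """Combine label maps transferred from two source cameras;
            wherever the first camera contributed nothing (-1), take the
            additional information from the second."""
            merged = map_a.copy()
            missing = merged == -1
            merged[missing] = map_b[missing]
            return merged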

    [0047] FIG. 5 shows an exemplary configuration of a source camera system 2 and a target camera system 3, which are both mounted on the same vehicle 6 in a fixed orientation 61 relative to each other. In the example shown in FIG. 5, a rigid test carrier defines this fixed relative orientation 61.

    [0048] Source camera system 2 observes scene 1 from a first perspective A′. Target camera system 3 observes the same scene 1 from a second perspective B′. The described method 100 makes it possible for additional information 4, 41, 42 acquired in connection with source camera system 2 to be utilized in the context of target camera system 3.
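
    Because orientation 61 is fixed, the source-to-target transform (R, t) used in the transfer sketch above may be computed once from a calibration of the rigid test carrier and reused for every frame. A minimal sketch, assuming camera-to-vehicle poses (R_s, t_s) and (R_t, t_t) obtained by such a calibration (these names are illustrative, not taken from the patent):

        import numpy as np

        def source_to_target(R_s, t_s, R_t, t_t):
            """Compose the constant transform from the source camera frame to
            the target camera frame out of two camera-to-vehicle poses."""
            R = R_t.T @ R_s                    # rotation: source cam -> target cam
            t = R_t.T @ (t_s - t_t)            # translation in the target frame
            return R, t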

    [0049] FIG. 6 shows an exemplary embodiment of method 200 for training an AI module 50. AI module 50 includes an internal processing chain 51, whose performance is defined by parameters 52.

    [0050] In step 210 of method 200, learning images 53 having pixels 53a are input into AI module 50. AI module 50 provides additional information 4, 41, 42, such as a semantic segmentation, for these learning images. Learning data 54, which indicate which additional information 4, 41, 42 is to be expected for a given learning image 53, are transferred in accordance with step 215 by method 100 into the perspective from which learning image 53 was recorded.

    [0051] In step 220, the additional information 4, 41, 42 actually provided by AI module 50 is compared with the additional learning information 54. Result 220a of this comparison 220 is used in step 230 to optimize parameters 52 of internal processing chain 51 of AI module 50.
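
    One training iteration of method 200 might look as follows in PyTorch. This is a sketch under assumptions: AI module 50 is taken to be an arbitrary semantic segmentation network, additional learning information 54 arrives as the label maps produced by the transfer sketch above (with -1 for pixels without information), and the loss and optimizer choices are illustrative rather than prescribed by the patent.

        import torch.nn.functional as F

        def training_step(model, optimizer, learning_image, labels_54):
            """Steps 210-230: forward pass, comparison 220, adaptation 230.
            Target pixels without transferred labels (-1) are ignored."""
            optimizer.zero_grad()
            logits = model(learning_image)             # step 210
            loss = F.cross_entropy(logits, labels_54,
                                   ignore_index=-1)    # comparison 220
            loss.backward()                            # result 220a
            optimizer.step()                           # step 230: adapt parameters 52
            return loss.item()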