TRANSFER OF ADDITIONAL INFORMATION AMONG CAMERA SYSTEMS
20210329219 · 2021-10-21
Inventors
- Dirk Raproeger (Wunstorf, DE)
- Lidia Rosario Torres Lopez (Hildesheim, DE)
- Paul Robert Herzog (Hildesheim, DE)
- Paul-Sebastian Lauer (Hannover, DE)
- Uwe Brosch (Algermissen, DE)
CPC classification
H04N13/239
ELECTRICITY
H04N7/181
ELECTRICITY
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
H04N13/275
ELECTRICITY
B60R2300/304
PERFORMING OPERATIONS; TRANSPORTING
International classification
H04N13/275
ELECTRICITY
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A method for enriching a target image, which a target camera system had recorded of a scene, with additional information with which at least one source image, which a source camera system had recorded of the same scene from a different perspective, has already been enriched. The method includes: assigning, to source pixels of the source image, 3D locations in three-dimensional space that correspond to the positions of the source pixels in the source image; assigning the additional information that is assigned to the source pixels to the respective associated 3D locations; assigning, to the 3D locations, those target pixels of the target image whose positions in the target image correspond to the 3D locations; and assigning the additional information that is assigned to the 3D locations to the associated target pixels. A method for training a KI module is also described.
Claims
1-10. (canceled)
11. A method for enriching a target image, which a target camera system had recorded of a scene, with additional information, which has already been used to enrich at least one source image that a source camera system had recorded of the same scene from a different perspective, the method comprising the following steps: assigning 3D locations in a three-dimensional space to source pixels of the source image, which correspond to positions of the source pixels in the source image; assigning additional information, which is assigned to the source pixels, to the respective, assigned 3D locations; assigning to the 3D locations those target pixels of the target image, whose positions in the target image correspond to the 3D locations; and assigning the additional information, which is assigned to the 3D locations, to the assigned target pixels.
12. The method as recited in claim 11, wherein, for at least one source pixel, the assigned 3D location is determined from a time program in accordance with which at least one source camera of the source camera system moves in the space.
13. The method as recited in claim 11, wherein the source camera system has at least two source cameras.
14. The method as recited in claim 13, wherein, for at least one source pixel, the assigned 3D location is determined by stereoscopic evaluation of the source images recorded by the two source cameras.
15. The method as recited in claim 13, wherein source pixels from source images recorded by both source cameras are merged in order to assign additional information to more target pixels of the target image.
16. The method as recited in claim 11, wherein the source image and the target image were recorded simultaneously.
17. The method as recited in claim 11, wherein a source camera system and a target camera system are selected which are mounted on the same vehicle in a fixed orientation relative to each other.
18. A method for training a KI module, which assigns additional information to an image recorded by a camera system and/or to pixels of the image through processing in an internal processing chain, performance of the internal processing chain being defined by parameters, the method comprising the following steps: inputting a learning image into the KI module; comparing additional information output by the KI module to additional learning information assigned to the learning image; and using a result of the comparison to adapt the parameters; wherein the additional learning information is assigned at least partially to pixels of the learning image as target pixels, by: assigning 3D locations in a three-dimensional space to source pixels of a source image, which correspond to positions of the source pixels in the source image, assigning additional information, which is assigned to the source pixels, to the respective, assigned 3D locations, assigning to the 3D locations those target pixels of the learning image, whose positions in the learning image correspond to the 3D locations, and assigning the additional information, which is assigned to the 3D locations, to the assigned target pixels.
19. The method as recited in claim 18, wherein a semantic classification of image pixels is selected as the additional information.
20. A non-transitory machine-readable storage medium on which is stored a computer program including machine-readable instructions for enriching a target image, which a target camera system had recorded of a scene, with additional information, which has already been used to enrich at least one source image that a source camera system had recorded of the same scene from a different perspective, the machine-readable instructions, when executed by a computer or control unit, causing the computer or the control unit to perform the following steps: assigning 3D locations in a three-dimensional space to source pixels of the source image, which correspond to positions of the source pixels in the source image; assigning additional information, which is assigned to the source pixels, to the respective, assigned 3D locations; assigning to the 3D locations those target pixels of the target image, whose positions in the target image correspond to the 3D locations; and assigning the additional information, which is assigned to the 3D locations, to the assigned target pixels.
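Claims 13 and 14 determine the 3D location assigned to a source pixel by stereoscopic evaluation of the images recorded by two source cameras. As an illustrative sketch only (not part of the claimed subject matter): in a rectified stereo pair, depth follows from pixel disparity as Z = f·B/d, after which the pixel can be back-projected to a 3D camera-frame point. All function names and parameter values below are hypothetical.

```python
# Sketch: 3D location of a source pixel from stereoscopic evaluation
# of a rectified stereo pair (hypothetical pinhole camera model).

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in metres: Z = f * B / d for horizontal disparity d (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def location_3d(u, v, depth, focal_px, cx, cy):
    """Back-project pixel (u, v) at the given depth to a camera-frame 3D point."""
    x = (u - cx) * depth / focal_px
    y = (v - cy) * depth / focal_px
    return (x, y, depth)
```

For example, with an assumed 800 px focal length and a 0.5 m baseline, a 20 px disparity corresponds to a depth of 20 m; the principal-point pixel back-projects onto the optical axis.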
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0037] In accordance with
[0038] The latter option presupposes that a source camera system having at least two source cameras was selected in step 105. Moreover, in accordance with optional step 106, a source image 21 and a target image 31 may be selected that have been recorded simultaneously. Furthermore, in accordance with optional step 107, a source camera system 2 and a target camera system 3 may be selected, which are mounted on one and the same vehicle 6 in a fixed orientation 61 relative to each other.
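Because the two camera systems are mounted on the same vehicle 6 in a fixed orientation 61 relative to each other, the source-to-target transform is constant and can be computed once from the two mounting poses. A minimal sketch, assuming hypothetical 4×4 homogeneous mounting matrices that map camera-frame points into the vehicle frame:

```python
import numpy as np

def relative_pose(T_vehicle_src, T_vehicle_tgt):
    """Fixed mounting: return the constant transform that maps points from
    the source-camera frame into the target-camera frame.

    T_vehicle_src / T_vehicle_tgt are 4x4 homogeneous matrices taking
    camera-frame points to the common vehicle frame."""
    return np.linalg.inv(T_vehicle_tgt) @ T_vehicle_src
```

For two cameras offset purely by translation, the resulting transform is simply the difference of their mounting offsets, which matches the intuition that a rigid rig needs only one calibration.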
[0039] In step 120, additional information 4, 41, 42, which is assigned to source pixels 21a of source image 21, is assigned to respective, associated 3D locations 5. In step 130, those target pixels 31a of target image 31, whose positions in target image 31 correspond to 3D locations 5, are assigned to the 3D locations. In step 140, additional information 4, 41, 42, which is assigned to 3D locations 5, is assigned to associated target pixels 31a.
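The assignment chain in steps 120 to 140 amounts to reprojecting per-pixel additional information through 3D space from the source view into the target view. The following is an illustrative sketch under assumed pinhole camera models; the intrinsic matrices, the source-to-target transform, and all function names are hypothetical and not taken from the disclosure.

```python
import numpy as np

def project(points_cam, K):
    """Pinhole projection of Nx3 camera-frame points to pixel coordinates."""
    uv = (K @ points_cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def transfer_labels(src_labels, src_depth, K_src, K_tgt, T_tgt_src, tgt_shape):
    """Sketch of steps 120-140: lift labelled source pixels to 3D locations,
    reproject them into the target view, and copy each label to the
    corresponding target pixel. Unassigned target pixels stay at -1."""
    h, w = src_labels.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])  # homogeneous pixels
    # steps 110/120: 3D locations of the source pixels, carrying their labels
    pts_src = (np.linalg.inv(K_src) @ pix) * src_depth.ravel()
    # express the 3D locations in the target-camera frame
    pts_tgt = (T_tgt_src[:3, :3] @ pts_src + T_tgt_src[:3, 3:4]).T
    # step 130: target pixels whose positions correspond to the 3D locations
    uv = project(pts_tgt, K_tgt)
    out = np.full(tgt_shape, -1, dtype=src_labels.dtype)
    ui = np.round(uv[:, 0]).astype(int)
    vi = np.round(uv[:, 1]).astype(int)
    ok = ((ui >= 0) & (ui < tgt_shape[1]) &
          (vi >= 0) & (vi < tgt_shape[0]) & (pts_tgt[:, 2] > 0))
    # step 140: assign the additional information to the assigned target pixels
    out[vi[ok], ui[ok]] = src_labels.ravel()[ok]
    return out
```

In the degenerate case where both cameras share the same pose and intrinsics, every label lands back on its own pixel, which is a useful sanity check for an implementation of this kind.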
[0040] This process is explained in greater detail in
[0042] In
[0043] In
[0044] Also indicated in
[0045] This exemplary target image 31 is shown in
[0046] As indicated in
[0048] Source camera system 2 observes scene 1 from a first perspective A′. Target camera system 3 observes same scene 1 from a second perspective B′. Described method 100 makes it possible for additional information 4, 41, 42, acquired in connection with source camera system 2, to be utilized in the context of target camera system 3.
[0049]
[0050] In step 210 of method 200, learning images 53 having pixels 53a are input into KI module 50. KI module 50 provides additional information 4, 41, 42, such as a semantic segmentation, for these learning images. Additional learning information 54, which indicates the additional information 4, 41, 42 to be expected for a given learning image 53, is transferred in accordance with step 215 by method 100 into the perspective from which learning image 53 was recorded.
[0051] In step 220, additional information 4, 41, 42 actually provided by KI module 50 is compared with additional learning information 54. Result 220a of this comparison is used in step 230 to optimize parameters 52 of internal processing chain 51 of KI module 50.
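The training loop of steps 210 to 230 can be sketched abstractly: run the module on a learning image, compare its output with the reprojected learning information, and adapt the parameters from the comparison result. The toy module below (a single scalar parameter and an elementwise "processing chain") is a hypothetical stand-in, not the disclosed architecture.

```python
import numpy as np

def train_step(params, learning_image, learning_labels, lr=0.1):
    """Sketch of steps 210-230 with a toy one-parameter processing chain.

    Returns the adapted parameter and the per-pixel accuracy of the
    thresholded output against the learning information."""
    # step 210: module output for the learning image (toy internal chain)
    logits = learning_image * params
    pred = (logits > 0.5).astype(int)
    # step 220: compare the output with the learning information
    error = learning_labels - logits
    # step 230: use the comparison result to adapt the parameter
    params = params + lr * np.mean(error * learning_image)
    return params, np.mean(pred == learning_labels)
```

Repeating the step drives the parameter toward the value that reproduces the learning information, which is the role parameters 52 play in the internal processing chain 51.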