SYNCHRONIZED CAMERA SYSTEM HAVING TWO DIFFERENT CAMERAS

20220038646 · 2022-02-03

Abstract

The invention relates to a camera system, and to a method for controlling the camera system, for capturing images of the surroundings of a vehicle for a driver assistance system of the vehicle. The camera system comprises a first rolling shutter camera (1) having a first aperture angle α, a second rolling shutter camera (2) having a second aperture angle β, and control electronics. The first camera (1) is suitable for generating a wide-angle camera image, that is, the first aperture angle α is greater than the second aperture angle β of the second camera (2) which is suitable for generating a tele camera image. The two cameras (1, 2) are designed in such a way that both camera images have an overlap region.

The control electronics is configured to synchronize the two cameras (1, 2).

The geometric arrangement of the two cameras (1, 2) with respect to one another, and the position of the overlap region (10) in the wide-angle image and in the tele camera image, are determined by means of continuous estimation.

The determined geometric arrangement and position are taken into consideration during synchronization of the first camera (1) and the second camera (2) of the camera system.

Claims

1-11. (canceled)

12. A camera system for capturing images of the surroundings of a vehicle for a driver assistance system of the vehicle, the camera system comprising: a first rolling shutter camera having a first aperture angle; a second rolling shutter camera having a second aperture angle; wherein the first camera is configured to generate a wide-angle camera image and the second camera is configured to generate a tele camera image such that an overlap region exists between the two camera images; and control electronics configured to synchronize the two cameras; wherein a geometric arrangement of the two cameras with respect to one another and a position of the overlap region are determined by continuous estimation; and wherein the determined geometric arrangement and position are taken into consideration during synchronization.

13. The camera system according to claim 12, wherein the control electronics is configured to produce the synchronization in such a manner that the greater line geometry of the first camera predefines the beginning of the synchronization and the clock frequency of the second camera, such that the exposure of each line of the first camera is begun synchronously with the relevant line of the second camera.

14. The camera system according to claim 13, wherein the first camera captures each line n.sub.w with a readout clock Δt.sub.l and the second camera captures each line n.sub.T with the readout clock Δt.sub.l/k, wherein k indicates the number of the lines n.sub.T, n.sub.T+1, . . . , n.sub.T+(k−1) of the second camera which are contained in one line n.sub.w of the first camera.

15. The camera system according to claim 14, wherein two pixels of the second camera are predefined inside the overlap region and are exposed synchronously to the relevant pixels of the first camera.

16. The camera system according to claim 15, wherein a start pixel of the first line and an end pixel of the last line of the second camera are each exposed, inside the overlap region for each line of the first camera, synchronously to a relevant start and end pixel of the line of the first camera.

17. The camera system according to claim 12, wherein the image sensor of the first camera exposes with an exposure time b1 and the image sensor of the second camera exposes with an exposure time b2, wherein b2>b1, and the start of exposure of the first camera is delayed by (b2−b1)/2 with respect to the start of exposure of the second camera.

18. The camera system according to claim 17, wherein the start of capturing of the second camera is delayed by half a line clock Δt.sub.l of the first camera with respect to the first camera inside the overlap region.

19. The camera system according to claim 12, wherein the image sensors of the first camera and the second camera have a comparable number of pixels.

20. The camera system according to claim 12, wherein the resolution of the second camera is n times the resolution of the first camera in the overlap region, wherein n is a natural number.

21. The camera system according to claim 12, wherein the geometric arrangement of the two cameras with respect to one another and the position of the overlap region are continuously estimated by utilizing a fundamental matrix estimation.

22. A method for controlling a camera system for capturing images of the surroundings of a vehicle for a driver assistance system of the vehicle, wherein a first rolling shutter camera has a first aperture angle and a second rolling shutter camera has a second aperture angle, wherein the first camera is configured to generate a wide-angle camera image and the second camera is configured to generate a tele camera image such that an overlap region exists between the two camera images, said method comprising: determining the geometric arrangement of the two cameras with respect to one another, and the position of the overlap region in the wide-angle image and in the tele camera image, by continuous estimation; and synchronizing the two cameras including considering the determined geometric arrangement and position.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0036] Exemplary embodiments are described and explained in greater detail below on the basis of figures, wherein:

[0037] FIG. 1 shows an exemplary embodiment of a method for controlling the camera system (represented schematically);

[0038] FIG. 2 shows a schematic representation of the fields of vision of the two cameras of the camera system and a projection of the field of vision of the one camera into the image of the other camera;

[0039] FIG. 3 shows the relation in the image plane between the projection of the field of view of the second camera and the image of the first camera;

[0040] FIG. 4 schematically shows lines of the image sensor of the first camera and of the second camera in an image of the first camera and relevant time intervals;

[0041] FIG. 5 schematically shows an overlap region of the two cameras having the respective pixels and lines;

[0042] FIG. 6 shows a vehicle having a camera system;

[0043] FIG. 7 shows a first course of the time offset delta T over the line numbers; and

[0044] FIG. 8 shows a second course of the time offset delta T over the line numbers.

DETAILED DESCRIPTION

[0045] An exemplary embodiment of a camera system is represented in FIG. 6. The camera system or camera device includes two cameras (1) and (2) which are arranged in space with a loose mechanical coupling. The two cameras (1) and (2) can be utilized in this configuration in an ADAS system. As represented in FIG. 6, the first camera (1) can be mounted on the right, viewed from the interior, and the second camera (2) to the left of the rearview mirror on the windshield of a vehicle (15). The first camera (1) has a wide-angle lens and the second camera (2) has a telephoto lens.

[0046] FIG. 2 schematically represents the relationships between the spatial geometry of the fields of vision of the two cameras (1, 2) and a projection of the field of vision of the second camera (2) into the image of the first camera (1). A spatial point (5) lies in the field of view (8) of the first camera (1) and in the field of view (7) of the second camera (2).

[0047] A projection of the spatial point (5) onto a field of view plane of the second camera (2) produces the image (9) of the second camera (2).

[0048] In the general case, the projection (9) of the field of view of the second camera (2), intersected with the image of the first camera (1), is a distorted four-sided figure (6). Thanks to suitable production processes during the installation of the cameras (1, 2) in the vehicle, it can, however, be assumed that the cameras (1, 2) are aligned virtually parallel.

[0049] FIG. 3 serves to illustrate the relation in the image plane between the projection of the field of view (7) of the second camera (2) and the image (8) of the first camera (1).

[0050] In this case, the four-sided figure (9) has virtually straight sides and is approximately axially parallel to the image axes of the image of the first camera (1). This is indicated by the dashed lines (9) in FIG. 3. The rectangle (10) is now to be interpreted as follows: if a spatial point (5) is located in the field of view of both cameras (1, 2), its image in the first camera (1) lies in the region (10). The rectangle (10) thus effectively represents the overlap region of the fields of view (7, 8) of both cameras (1, 2). The aim of the synchronization is that the line containing the image of each spatial point (5) in the first camera (1) is exposed at the same time as the relevant line in the image of the second camera (2). Strictly speaking, this requirement is only sensible for a parallel camera system without optical distortions. In this case, the respective epipolar lines are parallel.

[0051] An exemplary embodiment of a method for controlling the camera system is schematically represented in FIG. 1. A spatial point (5) lies in the overlap region of the first camera (1) and the second camera (2). By an initially roughly adjusted exposure control or imager control (for instance time-synchronous start of the imagers/image sensors), the stereo geometry of the system is determined in an evaluation unit (3) on the basis of the image data acquired by the two cameras (1, 2). This is effected with standard methods for estimating the stereo geometry (for instance fundamental matrix estimation).
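The estimation step can be illustrated by the following minimal sketch in Python, assuming the OpenCV library; the feature detector, the matching strategy, and all names outside the cv2 API are illustrative choices, not part of the described system.

    # Sketch of the stereo-geometry estimation in the evaluation unit (3):
    # match features between the wide-angle and tele images and estimate
    # the fundamental matrix with RANSAC (tolerant of outlier matches).
    import cv2
    import numpy as np

    def estimate_stereo_geometry(img_wide, img_tele):
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img_wide, None)
        kp2, des2 = orb.detectAndCompute(img_tele, None)

        # Brute-force Hamming matching suits binary ORB descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # The inlier mask identifies the correspondences consistent with
        # the estimated stereo geometry.
        F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        return F, inliers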

[0052] The relationship represented in FIGS. 2 and 3 can be estimated by geometric calculations. The calibration information regarding the determined overlap region (10) is forwarded to an imager control (4) and is used there for synchronizing the exposure of both cameras (1, 2). That is, exposure data can be transferred to each of the two cameras (1, 2). The synchronized images acquired by the first camera (1) and second camera (2) are passed to a method (16) for evaluating stereo images. The stereo method (16) determines distance or depth information from the disparity (image displacement) of image features in the overlap region in simultaneously acquired images of the first camera (1) and second camera (2). A semi-global matching (SGM) method can be used as a stereo method (16).
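As an illustration of the stereo method (16), the following sketch computes a disparity map with OpenCV's semi-global block matching; all parameter values are illustrative and assume rectified, equally sized crops of the overlap region (10).

    # Sketch of the SGM-based stereo method (16) on the overlap region.
    import cv2

    def disparity_in_overlap(wide_crop, tele_crop):
        sgm = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=128,   # search range; must be divisible by 16
            blockSize=5,
            P1=8 * 5 * 5,         # penalty for small disparity changes
            P2=32 * 5 * 5,        # penalty for large disparity changes
            uniquenessRatio=10,
        )
        # OpenCV returns disparities as fixed-point values scaled by 16.
        return sgm.compute(wide_crop, tele_crop).astype("float32") / 16.0

Distance or depth information then follows from the disparity together with the estimated camera geometry.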

[0053] Relationships on the basis of which an exemplary embodiment of the imager control (4) is explained in greater detail are represented in FIG. 4. It is assumed that the exposure start clock (the points in time of the start of exposure of each pixel) of the imager of the first camera (1) is known. In FIG. 4, this is represented by the left scale (outside of the framework), which runs from 0 to 6. The exposure clock of the first camera (1), that is to say the time interval Δt.sub.l between the start of exposure of a line (e.g., line 3, top dashed line (12)) and the following line (line 4, bottom dashed line (11)), is likewise known. Knowledge of the position of the first line of the second camera (2) in the image of the first camera (1) produces the temporal offset Δt.sub.s by which the second camera (2) should subsequently start the exposure process. The solid rectangle (10) symbolizes the overlap region of the two cameras (1, 2). The lines of the second camera (2) are symbolized by the right scale (inside the framework), which runs from 0 to 4. This makes it possible to start the exposure of the first line of the second camera (2) at the same time as the relevant, corresponding line of the first camera (1). For the following lines of the second camera (2), the imager control (4) is now adjusted as follows. Assuming that the exposure of the line n has been started in a time-synchronous manner with a rectangular region in the image of the second camera (2), k lines of the second camera (2) fall into the time Δt.sub.l between the start of the exposure of the line n and the line n+1 of the first camera (1). In the example of FIG. 4, k=4; that is, four lines of the second camera (2) correspond to one line of the first camera (1).

[0054] The difference in the start of exposure of subsequent lines, the exposure start clock, of the second camera (2) is advantageously reduced to the fractional value Δt=Δt.sub.l/k. The exposure start clock corresponds to the readout clock of a line. The second camera (2) therefore has, in total, a shorter exposure start clock. As a result, the k lines of the second camera (2) start their exposure within the time Δt.sub.l. Consequently, the line n+1 of the first camera (1) in turn starts simultaneously with the corresponding line of the second camera (2).
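The line timing described in the last two paragraphs can be summarized in the following sketch, assuming the quantities of FIG. 4 (readout clock Δt.sub.l, line ratio k, start offset Δt.sub.s); the variable names are illustrative.

    # Sketch of the exposure start schedule for both rolling shutter imagers.
    def exposure_start_times(num_wide_lines, k, dt_l, dt_s):
        # First camera: line n starts its exposure at n * dt_l.
        wide_starts = [n * dt_l for n in range(num_wide_lines)]
        # Second camera: starts dt_s later and uses the fractional clock
        # dt_l / k, so that k tele lines fall into one wide line.
        tele_starts = [dt_s + m * (dt_l / k) for m in range(num_wide_lines * k)]
        return wide_starts, tele_starts

    # Example with k = 4 as in FIG. 4: tele lines 0..3 start within wide
    # line 0, and tele line 4 starts exactly with wide line 1 (for dt_s = 0).
    wide, tele = exposure_start_times(num_wide_lines=3, k=4, dt_l=1.0, dt_s=0.0)
    assert tele[4] == wide[1]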

[0055] The optimized imager control (4) can now be used in order to repeat the method according to FIG. 1 cyclically. The camera system geometry and, consequently, the imager control (4) can therefore be further adjusted and improved.

[0056] An exemplary overlap region of the two cameras (1, 2) is represented schematically in FIG. 5. A pixel of the first camera (1) in a line (14) n.sub.w of the “wide-angle” image sensor “includes” a group of pixels (17) of the second camera (2), as represented in the upper row of lines (13). In this example, a pixel of the first camera (1) corresponds to 3×3 pixels of the second camera (2). The line n.sub.w, having four pixels of the first camera (1), thus corresponds to three lines n.sub.T, n.sub.T+1, n.sub.T+2 of the “tele” image sensor of the second camera (2), each having 12 pixels.
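This pixel correspondence can be expressed by the following sketch, assuming an n×n resolution ratio as in the 3×3 example of FIG. 5; the function name and coordinate convention are illustrative.

    # Sketch: map a wide-angle pixel (row, col) inside the overlap region to
    # the corresponding block of tele pixels for an n x n resolution ratio.
    def tele_block(row_w, col_w, n=3):
        rows = range(row_w * n, row_w * n + n)
        cols = range(col_w * n, col_w * n + n)
        return [(r, c) for r in rows for c in cols]

    # One wide pixel corresponds to a 3 x 3 group of tele pixels, so the
    # 4-pixel wide line n_w covers 3 tele lines of 12 pixels each.
    assert len(tele_block(0, 0)) == 9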

[0057] Inside the overlap region, the start pixel (17) of the first line n.sub.T and the end pixel (18) of the last line n.sub.T+2 of the second camera (2) are now each exposed synchronously to the relevant start or end pixel (that is, the first or the fourth pixel of the lower line (14) in FIG. 5) of the line n.sub.w of the first camera (1). An optimal timing for the synchronization is one in which the reading out of the respective first pixels (top line of the tele camera) is started simultaneously, and the reading out of the respective last pixels (lowest line of the tele camera) is stopped simultaneously.

[0058] As a general rule, the second camera (2) has more pixels per degree. It therefore collects significantly less light per pixel and must generally be exposed for longer than a pixel of the first camera (1). Consequently, the second camera (2) exposes object movements over a longer period of time than the first camera (1). This results in different (more pronounced) blurring in the image of the second camera (2), in particular due to motion blur. In order to reduce the motion blur effect for the imaging of the stereo image, the following synchronization can be used in a preferred embodiment. If, for example, the image sensor of the second camera (2) requires an exposure time of b2=10 ms and the image sensor of the first camera (1) only requires b1=1 ms, the image sensor of the first camera (1) can be started with a delay of (b2−b1)/2=4.5 ms. The motion blur of the image sensor of the second camera (2) is thereby distributed more uniformly over the relevant regions of the image sensor of the first camera (1), and the error in the calculation of the stereo image is reduced.
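This centering rule is a one-line calculation; the sketch below merely restates the (b2−b1)/2 example with illustrative names.

    # Sketch: delay the start of the shorter exposure so both exposure
    # windows are centered on the same point in time, which distributes
    # the motion blur mismatch symmetrically.
    def center_delay(b_long, b_short):
        return (b_long - b_short) / 2.0

    assert center_delay(10.0, 1.0) == 4.5  # b2 = 10 ms, b1 = 1 ms example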

[0059] Consequently, the respective imager control (4) allows a suitable synchronization for two cameras (1, 2) having different fields of view.

[0060] Results from camera calibration methods are utilized in a targeted manner in order to specify a temporal exposure schedule which seeks to minimize the temporal differences in the exposure of corresponding world points (5).

[0061] It can be ensured with the method described above that the exposure takes place simultaneously at the start and end of the overlap region (first and last pixel or line). Moreover, it is ensured that, at the beginning of the exposure of a line of the wide-angle camera, there is synchronicity with the first relevant line of the tele camera. The further lines of the tele camera, which also correspond to the relevant line of the wide-angle camera, drift out of synchronicity and then converge again at the beginning of the next line of the wide-angle camera. This is illustrated by FIG. 5 and by the very schematic representation of the temporal offset delta T over the image sensor lines n.sub.i in FIG. 7. The exposure of each line n.sub.w, n.sub.w+1, n.sub.w+2, . . . of the first camera (1) is begun synchronously with the relevant line n.sub.T, n.sub.T+x, n.sub.T+2x, . . . of the second camera (2). The variable x is intended to indicate that an integral number k of lines of the second camera (2) does not always have to fall within one line of the first camera (1).

[0062] A further optional improvement in the synchronicity of the capturing consists of again delaying the start of capturing of the tele camera with respect to the wide-angle camera by half the line clock of the wide-angle camera, as a result of which the maximum asynchronicity delta T max is again halved. The very schematic representation in FIG. 8 illustrates this.
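The courses of FIGS. 7 and 8 can be reproduced with the following sketch, assuming an ideal fractional line clock; the half-line-clock shift of the tele camera halves the maximum magnitude of the offset delta T.

    # Sketch of the time offset delta T of each tele line relative to the
    # start of its corresponding wide-angle line (FIG. 7 without shift,
    # FIG. 8 with the optional half-line-clock shift).
    def time_offsets(num_wide_lines, k, dt_l, half_clock_shift=False):
        shift = dt_l / 2.0 if half_clock_shift else 0.0
        return [(m % k) * (dt_l / k) - shift
                for m in range(num_wide_lines * k)]

    # Without the shift, delta T ramps from 0 up to (k - 1) / k * dt_l and
    # resets at each new wide line (sawtooth, FIG. 7); with the shift it
    # runs from -dt_l / 2 to just below +dt_l / 2, so the maximum
    # asynchronicity |delta T| is approximately halved (FIG. 8).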