METHOD AND DEVICE FOR ASCERTAINING AN IMAGE OF THE SURROUNDINGS OF A VEHICLE
20170251195 · 2017-08-31
CPC Classification
H04N13/221 (ELECTRICITY)
H04N25/60 (ELECTRICITY)
G06V20/56 (PHYSICS)
International Classification
H04N13/00 (ELECTRICITY)
Abstract
A method for ascertaining an image of the surroundings of a vehicle. The method includes reading in first image data and at least second image data, the first image data representing image data of a first image recording area of a camera in or on a vehicle and the second image data representing image data from a second image recording area of the camera differing from the first image recording area, and the second image data having been recorded chronologically after the first image data. The method further includes processing the second image data using a vehicle parameter and/or driving parameter, to obtain processed second image data. Finally, the method includes combining the first image data with the processed second image data to obtain the image of the surroundings of the vehicle.
Claims
1. A method for ascertaining an image of surroundings of a vehicle, the method comprising: reading in first image data and at least second image data, the first image data representing image data of a first image recording area of a camera in or on a vehicle and the second image data representing image data from a second image recording area of the camera, which differs from the first image recording area, and the second image data having been recorded chronologically after the first image data; processing the second image data using a vehicle parameter and/or driving parameter to obtain processed second image data; and combining the first image data with the processed second image data to obtain the image of the surroundings of the vehicle.
2. The method of claim 1, wherein in the processing, the second image data are processed using a driving speed of the vehicle or of the camera as the driving parameter and/or an installation height of the camera in or on the vehicle as the vehicle parameter.
3. The method of claim 1, wherein, in the reading in, image data read in as the first image data and/or as the second image data represent an image of the camera scanned by a scan row or a scan column, or represent a pixel of the camera.
4. The method of claim 1, wherein in the processing, the second image data are processed to ascertain image data as processed second image data in an image recording area outside the second image recording area.
5. The method of claim 1, wherein in the processing, a structure of an object in the surroundings of the vehicle detected in the first and/or second image data is used to ascertain the processed second image data, the detected structure, in particular, being compared to a comparison structure stored in a memory.
6. The method of claim 1, wherein in the reading in, at least third image data are read in, which represent image data from a third image recording area of the camera differing from the first image recording area and from the second image recording area, and the third image data having been recorded chronologically after the first image data and the second image data, the third image data being processed in the processing using the vehicle parameter and/or driving parameter, to obtain processed third image data, and the first image data being combined in the combining with the processed second image data and with the processed third image data to obtain the image of the surroundings of the vehicle.
7. The method of claim 1, wherein in the processing, the processed second image data are ascertained using a system of linear differential equations.
8. A method for detecting an object in surroundings of a vehicle, the method comprising: reading in an image; and evaluating the read-in image using at least one pattern recognition algorithm to identify the object in the surroundings of the vehicle; wherein the image is of surroundings of a vehicle, and wherein the image is ascertained by performing the following: reading in first image data and at least second image data, the first image data representing image data of a first image recording area of a camera in or on a vehicle and the second image data representing image data from a second image recording area of the camera, which differs from the first image recording area, and the second image data having been recorded chronologically after the first image data; processing the second image data using a vehicle parameter and/or driving parameter to obtain processed second image data; and combining the first image data with the processed second image data to obtain the image of the surroundings of the vehicle.
9. A device for ascertaining an image of surroundings of a vehicle, comprising: an image ascertainment device to ascertain the image by performing the following: reading in first image data and at least second image data, the first image data representing image data of a first image recording area of a camera in or on a vehicle and the second image data representing image data from a second image recording area of the camera, which differs from the first image recording area, and the second image data having been recorded chronologically after the first image data; processing the second image data using a vehicle parameter and/or driving parameter to obtain processed second image data; and combining the first image data with the processed second image data to obtain the image of the surroundings of the vehicle.
10. A computer readable medium having a computer program, which is executable by a processor, comprising: a program code arrangement having program code for ascertaining an image of surroundings of a vehicle, by performing the following: reading in first image data and at least second image data, the first image data representing image data of a first image recording area of a camera in or on a vehicle and the second image data representing image data from a second image recording area of the camera, which differs from the first image recording area, and the second image data having been recorded chronologically after the first image data; processing the second image data using a vehicle parameter and/or driving parameter to obtain processed second image data; and combining the first image data with the processed second image data to obtain the image of the surroundings of the vehicle.
11. The computer readable medium of claim 10, wherein in the processing, the second image data are processed using a driving speed of the vehicle or of the camera as the driving parameter and/or an installation height of the camera in or on the vehicle as the vehicle parameter.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0039] To ensure that cost-efficient components, such as camera 115 in the form of a rolling shutter camera, may be used in the vehicle segment, device 105 for ascertaining image 107 of surroundings 110 of vehicle 100 is used. In device 105, the chronologically later second image data 125 are corrected, so that the processed, i.e., corrected, second image data 125′ may then be combined with first image data 120 to form image 107, which may in turn be made available, for example, to driver assistance system 137.
[0040] To provide this functionality, device 105 includes a read-in interface 140, via which first image data 120 and second image data 125 are read in and subsequently forwarded to a processing unit 145. Second image data 125 are then processed in processing unit 145 using a vehicle parameter and/or driving parameter 147, which is loaded, for example, from a memory unit 149 of vehicle 100 into processing unit 145. Vehicle parameter and/or driving parameter 147 may, for example, represent driving speed 135 of vehicle 100 and/or an installation height 150 of camera 115 in vehicle 100, for example, in relation to the plane of the roadway. For processing second image data 125, processing unit 145 may also use a time period, which corresponds, for example, to the time difference between the recording of first image data 120 and the recording of second image data 125 by the image recording sensor in camera 115. It then becomes possible to process second image data 125 in such a way that processed second image data 125′ are obtained, which have been “calculated back” to the point in time at which first image data 120 were recorded. Processed second image data 125′ and first image data 120 are then forwarded to a combining unit 155, which ascertains image 107 from them; image 107 thus reflects surroundings 110 in their entirety, closely approximated to the point in time of the recording of first image data 120 by the image recording sensor of camera 115.
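The following Python sketch illustrates one possible form of this back-calculation under simplifying assumptions: a pinhole camera with a horizontal optical axis, observed points lying on the roadway plane, and purely longitudinal motion of the vehicle. All names and numeric values (compensate_row_y, focal_px, and so on) are illustrative and not part of the specification.

```python
# A minimal sketch of the per-row back-calculation, assuming a pinhole
# camera with a horizontal optical axis, observed points on the roadway
# plane, and purely longitudinal motion. All names and values are
# illustrative, not from the patent.

def compensate_row_y(y_px: float, focal_px: float, cam_height_m: float,
                     speed_mps: float, delta_t_s: float) -> float:
    """Map the row coordinate y_px (measured downward from the principal
    point) of a ground-plane point recorded delta_t_s after the reference
    row back to the reference recording time."""
    if y_px <= 0.0:
        raise ValueError("only rows below the horizon map to the ground plane")
    # Ground distance of the observed point at its actual recording time.
    dist_m = focal_px * cam_height_m / y_px
    # At the earlier reference time the vehicle had not yet advanced by
    # speed * delta_t, so the point was farther ahead by that amount.
    dist_ref_m = dist_m + speed_mps * delta_t_s
    return focal_px * cam_height_m / dist_ref_m

# Example: a row read out 10 ms after the first row, vehicle at 20 m/s,
# camera 1.2 m above the roadway, focal length 1000 px.
y_ref = compensate_row_y(y_px=120.0, focal_px=1000.0, cam_height_m=1.2,
                         speed_mps=20.0, delta_t_s=0.010)
print(f"corrected row coordinate: {y_ref:.1f} px")
```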
[0041] This image 107 may then be further processed, for example, in driver assistance system 137 in order to detect objects 157 in image 107, such as a road marking or the traffic lane ahead of vehicle 100, with the aid of a pattern recognition algorithm 158. This enables driver assistance system 137 to assist in or automatically perform the steering of vehicle 100 via steering unit 160 and/or to activate a passenger protection arrangement, for example a driver airbag 165, in the event that a directly imminent collision with object 157 is detected from image 107.
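By way of illustration only, since the specification does not prescribe a particular pattern recognition algorithm 158, a lane-marking detector based on edge detection and the probabilistic Hough transform could look as follows; the OpenCV-based sketch assumes the compensated image 107 is available as a grayscale array.

```python
# Illustrative only: one common pattern recognition approach for lane
# markings, using OpenCV's Canny edge detector and probabilistic Hough
# transform. image_gray is assumed to be image 107 as a grayscale array.
import cv2
import numpy as np

def detect_lane_markings(image_gray: np.ndarray):
    edges = cv2.Canny(image_gray, 50, 150)
    # Returns line segments as (x1, y1, x2, y2) in pixel coordinates.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    return [] if lines is None else [tuple(seg[0]) for seg in lines]
```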
[0042] According to another exemplary embodiment of the approach presented herein, camera 115 may also provide third image data 170 from an image recording area 172 differing from first image recording area 130 and from second image recording area 132, which are then read in by read-in interface 140 and forwarded to processing unit 145. Third image data 170 were recorded or provided by camera 115, for example as a third read-out row or a third read-out pixel, chronologically after first image data 120 and second image data 125. Third image data 170 are then processed in processing unit 145 using vehicle parameter and/or driving parameter 147 in order to obtain processed third image data 170′, which are then conveyed to combining unit 155. First image data 120 are then combined in combining unit 155 with processed second image data 125′ and processed third image data 170′ in order to obtain image 107 of surroundings 110 of vehicle 100. In this way, it is possible to achieve an even better compensation of the distortions in the available image data caused by the chronological delay, which results, for example, from the delayed read-out of the rolling shutter, as sketched below.
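Continuing the earlier sketch (compensate_row_y as defined above), a third read-out row simply enters the same correction with its own, larger time offset; the per-row read-out delay used here is an assumed value, not one from the specification.

```python
# Continuing the earlier sketch (compensate_row_y as defined above): a third
# read-out row enters the same correction with its own, larger time offset.
# The per-row read-out delay t_row_s is an assumed value.
t_row_s = 30e-6                          # assumed read-out delay per row
for n, label in ((1, "second"), (2, "third")):
    y_ref = compensate_row_y(y_px=120.0, focal_px=1000.0, cam_height_m=1.2,
                             speed_mps=20.0, delta_t_s=n * t_row_s)
    print(f"{label} read-out row: corrected coordinate {y_ref:.2f} px")
```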
[0043] One aspect of the approach presented herein lies in the efficient modelling of the camera egomotion using quaternions and in the simultaneous estimation of the underlying camera egomotion for each scan row or, if necessary, even for each pixel, together with the point-by-point 3D scene geometry. This modelling leads to a system of linear differential equations for the rotation and the translation of camera 115, which may be solved robustly and with a high convergence speed using suitable numerical methods. In contrast, classical approaches based on Euler angles result in non-linear differential equations with singular points, which impair the convergence behavior of the numerical integration. The approach presented herein offers, for example, a point-by-point 3D reconstruction of the observed scene of surroundings 110 without systematic residual errors due to the rolling shutter distortions of the single images or partial images of the image sequence contained in image data 120 and 125. In addition, the approach presented herein enables a smooth transition from the use of global shutter cameras to rolling shutter cameras in vehicles and provides an efficient method for the rectification or rolling shutter compensation of the individual images or partial images in image data 120 and 125, so that subsequent processing methods in the signal flow, such as in driver assistance system 137, may be implemented without further consideration of rolling shutter effects.
[0045] In this case, the rotation equation of camera 115 may be specified as
$q = \exp\left(\dot{\varphi}\,(t' - t)\,n\right)$
and the translation equation of camera 115 as
$s' = v'\,(t' - t)$
[0046] With the variables introduced above, this results in the system of differential equations

$\dot{q} = \tfrac{1}{2}\,\Omega\,q$

$\dot{t}' = v'$

with the unit quaternion $q$ describing the camera rotation and the translation vector $t'$, it being assumed that the angular speed, in terms of its absolute value ($\dot{\varphi}$) and direction ($n$), and the translation speed vector ($v'$) are constant in the observed time interval. Under these assumptions, the skew-symmetric (3×3) matrix $\Omega$ of the rotation rates measured in the camera coordinate system is also constant, so that the given system of differential equations may be solved analytically and implemented as indicated above.
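A minimal numerical sketch of this closed-form solution follows, assuming a constant angular rate of magnitude $\dot{\varphi}$ about the unit axis $n$ and a constant translation speed $v'$; the function name and numeric values are illustrative.

```python
# A minimal sketch of the analytic solution of the linear system
# dq/dt = (1/2) Omega q, dt'/dt = v', under a constant angular rate of
# magnitude phi_dot about the unit axis n and a constant translation
# speed v'. Function name and values are illustrative.
import numpy as np

def egomotion(phi_dot: float, n: np.ndarray, v: np.ndarray, dt: float):
    """Rotation quaternion (w, x, y, z) and translation over time step dt."""
    axis = n / np.linalg.norm(n)
    half_angle = 0.5 * phi_dot * dt      # quaternion covers half the angle
    q = np.concatenate(([np.cos(half_angle)], np.sin(half_angle) * axis))
    s = v * dt                           # translation equation s' = v' dt
    return q, s

# Example: slight yaw at 0.1 rad/s while driving at 20 m/s, over 10 ms.
q, s = egomotion(phi_dot=0.1, n=np.array([0.0, 0.0, 1.0]),
                 v=np.array([20.0, 0.0, 0.0]), dt=0.01)
```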
[0047] With this modelling, it is possible to easily transfer the egomotion of a global shutter camera to the individual scan rows or pixels of a rolling shutter camera, using the row-wise and pixel-wise time differences

$\delta t(n, n') = t'(n') - t(n)$

$\delta t(n, m, n', m') = t'(n', m') - t(n, m)$
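The following sketch shows how such row-wise and pixel-wise recording times, and thus the time differences above, may be computed for a rolling shutter; the frame period, row interval, and pixel interval values are assumptions for illustration only.

```python
# Sketch of the recording times behind the differences above, assuming a
# rolling shutter that starts frame k at k * T_f, reads rows at interval
# T_row, and pixels within a row at interval T_pix. All timing values are
# assumptions for illustration.
T_f = 33e-3        # assumed frame period
T_row = 30e-6      # assumed row read-out interval
T_pix = 20e-9      # assumed pixel read-out interval within a row

def t_row(k: int, n: int) -> float:
    """Recording time of scan row n in frame k."""
    return k * T_f + n * T_row

def t_pixel(k: int, n: int, m: int) -> float:
    """Recording time of pixel m in scan row n of frame k."""
    return t_row(k, n) + m * T_pix

# Row-wise and pixel-wise time differences within one frame.
delta_t_rows = t_row(0, 125) - t_row(0, 5)
delta_t_pixels = t_pixel(0, 125, 640) - t_pixel(0, 5, 0)
```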
[0048] The application of these models of the camera egomotion to the SfM-based 3D reconstruction of the observed scene results in a point-by-point 3D reconstruction which, however, initially exists only with respect to the respective time-varying camera coordinate system. For the subsequent processing steps, however, it is advantageous to relate the point-by-point 3D reconstruction to a shared reference camera coordinate system having a fixed reference time stamp. Provided for this purpose are, in particular, the reference time stamps $k \times T_f$.
[0049] For the 3D reconstruction with respect to the reference camera coordinate system, it is sufficient to utilize the specified model of the camera egomotion for the back projection of a 3D point $P$ at point in time $t$ to the reference point in time $t_{\mathrm{ref}} = k \times T_f$, the aforementioned time differences merely being replaced by $\delta t = t - t_{\mathrm{ref}}$. With the corresponding rotation matrix $R = R(\delta t)$ and the associated translation vector $s' = s'(\delta t)$, this initially leads to
$P(t) = R(\delta t)\,P(t_{\mathrm{ref}}) + s'(\delta t)$
and ultimately to the targeted back projection
$P(t_{\mathrm{ref}}) = R^{T}(\delta t)\,\bigl(P(t) - s'(\delta t)\bigr)$
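A minimal sketch of this back projection follows, assuming the rotation is given as a unit quaternion (for example, from the egomotion sketch above); the quaternion-to-matrix conversion is the standard one, and all names are illustrative.

```python
# A minimal sketch of the back projection, assuming the rotation is given
# as a unit quaternion (w, x, y, z), for example from the egomotion sketch
# above. quat_to_matrix is the standard conversion; back_project inverts
# P(t) = R P(t_ref) + s'.
import numpy as np

def quat_to_matrix(q: np.ndarray) -> np.ndarray:
    """Rotation matrix for a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)],
        [2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)],
    ])

def back_project(P_t: np.ndarray, q: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Recover P(t_ref) from P(t) = R P(t_ref) + s'."""
    R = quat_to_matrix(q)
    return R.T @ (P_t - s)
```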
[0055] If an exemplary embodiment includes an “and/or” linkage between a first feature and a second feature, this is to be read to mean that the exemplary embodiment according to one specific embodiment includes both the first feature and the second feature and, according to another specific embodiment, includes either only the first feature or only the second feature.