Method and device for image-based visibility range estimation

09536173 · 2017-01-03


Abstract

A method for image-based visibility range estimation for a vehicle includes: ascertaining a depiction of an object of the surroundings in an image of an image detection device of the vehicle, the object having an extension in the direction of travel of the vehicle, the image showing a depiction of surroundings ahead of the vehicle; segmenting the depiction of the object in order to obtain first and second object ranges of the object having respective first and second distances to the image detection device within respective tolerance ranges; determining a first object luminance of the first object range and a second object luminance of the second object range; and determining an atmospheric extinction coefficient correlated to the visibility range, using the first and second object luminances, and the first and second distances.

Claims

1. A method for an image-based visibility range estimation, comprising: obtaining, by processing circuitry, an image captured by an image detection device and that depicts surroundings of the image detection device; processing, by the processing circuitry, the image, wherein the processing includes: ascertaining, by processing circuitry, a depiction of an object in the image; and segmenting, by the processing circuitry, the depiction of the object in order to (i) obtain a first object range of the object having an equal first distance to the image detection device within a tolerance range, (ii) obtain a second object range of the object having an equal second distance to the image detection device within a tolerance range, (iii) determine a first object luminance for the first object range, and (iv) determine a second object luminance for the second object range; and determining, by the processing circuitry, an atmospheric extinction coefficient using (a) the first object luminance, the second object luminance, the first distance, and the second distance with (b) at least one of (i) a one-dimensional equation and (ii) a model for light transmission by atmospheric aerosols, wherein the atmospheric extinction coefficient is in direct correlation to the visibility range.

2. The method as recited in claim 1, wherein in the step of determining, the extinction coefficient is determined using an estimation method from the one-dimensional equation.

3. The method as recited in claim 1, wherein in the step of determining, the extinction coefficient is determined using an iterative Newton's method.

4. The method as recited in claim 1, wherein: in the step of segmenting, the depiction of the object is segmented in order to (i) further obtain at least one third object range having an equal third distance to the image detection device within a tolerance range, and (ii) further determine a third object luminance for the third object range; and the atmospheric extinction coefficient is determined additionally using the third object luminance and the third distance.

5. The method as recited in claim 1, further comprising: a step of detecting the image using the image detection device, the image showing a depiction of an object of the surroundings in the image, the object having an extension at least along a direction of travel of the vehicle.

6. The method as recited in claim 1, wherein the steps of ascertaining and segmenting are carried out for an additional image, the atmospheric extinction coefficient being determined in the step of determining using at least one third object luminance and a third distance assigned to the third object luminance.

7. A device for an image-based visibility range estimation, comprising: a control unit including a processor configured to perform the following: ascertaining a depiction of an object in an image, the image showing a depiction of surroundings of an image detection device; segmenting the depiction of the object in order to (i) obtain a first object range of the object having an equal first distance to the image detection device within a tolerance range, (ii) obtain a second object range of the object having an equal second distance to the image detection device within a tolerance range, (iii) determine a first object luminance for the first object range, and (iv) determine a second object luminance for the second object range; and determining an atmospheric extinction coefficient using (a) the first object luminance, the second object luminance, the first distance, and the second distance with (b) at least one of (i) a one-dimensional equation and (ii) a model for light transmission by atmospheric aerosols, wherein the atmospheric extinction coefficient is in direct correlation to the visibility range.

8. A non-transitory, computer-readable data storage medium storing a computer program having program codes which, when executed on a computer, perform a method for an image-based visibility range estimation, the method comprising: ascertaining a depiction of an object in an image, the image showing a depiction of surroundings of an image detection device; segmenting the depiction of the object in order to (i) obtain a first object range of the object having an equal first distance to the image detection device within a tolerance range, (ii) obtain a second object range of the object having an equal second distance to the image detection device within a tolerance range, (iii) determine a first object luminance for the first object range, and (iv) determine a second object luminance for the second object range; and determining an atmospheric extinction coefficient using (a) the first object luminance, the second object luminance, the first distance, and the second distance with (b) at least one of (i) a one-dimensional equation and (ii) a model for light transmission by atmospheric aerosols, wherein the atmospheric extinction coefficient is in direct correlation to the visibility range.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows a block diagram of a device for the image-based visibility range estimation for a vehicle according to an exemplary embodiment of the present invention.

(2) FIG. 2 shows a flow chart of a method for the image-based visibility range estimation according to an exemplary embodiment of the present invention.

(3) FIG. 3 shows a schematic representation of a meteorological visibility range according to an exemplary embodiment of the present invention.

(4) FIG. 4 shows a schematic representation of a correlation between object light and ambient light scattered in according to one exemplary embodiment of the present invention.

(5) FIG. 5 shows a schematic representation of a correlation between object light and ambient light scattered in according to one exemplary embodiment of the present invention.

(6) FIG. 6 shows an illustration of measured luminances plotted over a distance according to one exemplary embodiment of the present invention.

(7) FIG. 7 shows a schematic representation of an object at different distances according to one exemplary embodiment of the present invention.

(8) FIG. 8 shows an illustration of surroundings of a vehicle including a segmentation of a road surface according to one exemplary embodiment of the present invention.

(9) FIG. 9 shows a block diagram of a signal curve according to one exemplary embodiment of the present invention.

(10) FIG. 10 shows an illustration of estimated visibility ranges as a function of a distance according to one exemplary embodiment of the present invention.

(11) FIG. 11 shows a block diagram of a device for the image-based visibility range estimation for a vehicle according to one exemplary embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

(12) In the following description of advantageous exemplary embodiments of the present invention, identical or similar reference numerals are used for elements which are shown in the various figures and act in a similar way, a repeated description of these elements being omitted.

(13) FIG. 1 shows a block diagram of a device 100 for the image-based visibility range estimation for a vehicle 102 according to one exemplary embodiment of the present invention. Device 100 includes a device 104 for ascertaining a depiction 106 of an object of the surroundings in an image 108 of an image detection device 110 of vehicle 102, and a device 112 for segmenting a first object range of the object having an equal first distance d.sub.1 within a tolerance range to image detection device 110 and a second object range of the object having an equal second distance d.sub.2 within a tolerance range to image detection device 110. Device 112 is furthermore designed to determine a first object luminance L.sub.1 for the first object range and a second object luminance L.sub.2 for the second object range. Furthermore, device 100 includes a determination device 114 for determining an atmospheric extinction coefficient K using first object luminance L.sub.1, second object luminance L.sub.2, first distance d.sub.1 and second distance d.sub.2, atmospheric extinction coefficient K being directly correlated to visibility range d.sub.met. The correlation between atmospheric extinction coefficient K and visibility range d.sub.met is shown in equation (1).

(14) In an alternative exemplary embodiment, image detection device 110 and device 100 for the image-based visibility range estimation are used independently of a vehicle.

(15) Image 108 represents a depiction 106 of surroundings ahead of vehicle 102 according to this exemplary embodiment. The object which is segmented in device 112 for segmenting has an extension in the direction of travel of vehicle 102.

(16) In an alternative exemplary embodiment, the object has an extension in distance or depth to the vehicle. In an alternative exemplary embodiment, the object has an extension in one viewing direction of the image detection device. For example, the road detected by the image detection device describes a curve, or the image detection device has a detection direction which is different from the direction of travel of the vehicle, and the object extending in the depth is, for example, a parking lot or a runway situated next to the vehicle.

(17) Furthermore, in the exemplary embodiment shown in FIG. 1, device 100 includes an interface 116 for reading in image 108. Image 108 is detected using image detection device 110. Image 108 shows an item from the surroundings of image detection device 110 as an object, distances d.sub.1, d.sub.2 each representing a distance between image detection device 110 and a segmented object range.

(18) FIG. 2 shows a flow chart of a method for the image-based visibility range estimation according to one exemplary embodiment of the present invention. The method may be carried out on the device shown and described with reference to FIG. 1. The method for the image-based visibility range estimation includes a step 220 of ascertaining a depiction of an object in an image of an image detection device, the object having an extension in a viewing direction of the image detection device, a step 222 in which, on the one hand, the depiction is segmented in order to obtain one first object range of the object having an equal first distance d.sub.1 within a tolerance range to the image detection device, and a second object range of the object having an equal second distance d.sub.2 within a tolerance range to the image detection device, and, on the other hand, a first object luminance L.sub.1 is determined for the first object range and a second object luminance L.sub.2 is determined for the second object range. The method furthermore includes a step 224 for determining an atmospheric extinction coefficient K using first object luminance L.sub.1, second object luminance L.sub.2, first distance d.sub.1, and second distance d.sub.2, atmospheric extinction coefficient K being in direct correlation to visibility range d.sub.met. The correlation between atmospheric extinction coefficient K and visibility range d.sub.met is shown in equation (1).

(19) In one exemplary embodiment, in step 224 of determining, extinction coefficient K is determined using a one-dimensional equation and additionally or alternatively a model of horizontal view.

(20) In one exemplary embodiment, extinction coefficient K is determined in step 224 of determining using an estimation method from the one-dimensional equation.

(21) Optionally, extinction coefficient K is determined in step 224 of determining using an iterative Newton's method.

(22) In step 222 of segmenting, at least one third object range having an equal third distance within a tolerance range to the image detection device is optionally segmented and a third object luminance for the third object range is determined. In the step of determining, atmospheric extinction coefficient K is determined using the third object luminance and the third distance.

(23) The method optionally includes a step of detecting the image using an image detection device of the vehicle, the image showing a depiction of an object of the surroundings in the image, the object having an extension in the direction of travel of the vehicle.

(24) In one exemplary embodiment, the steps of ascertaining 220 and segmenting 222 are carried out for an additional image, atmospheric extinction coefficient K being determined in step 224 of determining using at least third object luminance L.sub.3 and a third distance d.sub.3 assigned to third object luminance L.sub.3. During the process, at least one third object luminance L.sub.3 and assigned third distance d.sub.3 are ascertained during the execution of the steps of ascertaining 220 and segmenting 222 for the additional image.

(25) As one aspect, this method segments, where possible, the road area in the middle of the camera image (or other surfaces having a z-extension/depth extension). In the segmented area, distances to the road are determined, for example via stereo vision (stereo camera), via structure from motion (mainly for a mono camera), via knowledge about the road surface (a flat-earth assumption is also possible) and the orientation of the camera to the road (mainly for a mono camera), or via other sensors such as radar and LIDAR (which, however, requires a good extrinsic calibration of the entire system). Road areas of approximately the same distance are combined (for example, line by line for a camera which is hardly rotated) and a luminance is estimated for each combined area (for example, as an average or median luminance). N luminance-distance value pairs (L.sub.1; d.sub.1), . . . , (L.sub.N; d.sub.N) thus result. If an approximately constant reflection of the road is presumed and the road is assumed to be a Lambertian surface (realistic assumptions), extinction coefficient K may be determined from the measurements by adapting a fog model (at least in parameters L, d and K) to the measured values. It is advantageous that this method does not depend on the road being segmentable completely up to the horizon. This is particularly useful in the case of preceding vehicles or an otherwise geometrically blocked view of the course of the road.

(26) One exemplary embodiment of the method described here uses only one single image, not an image sequence including a trackable object. Tracking an object over a long time is replaced here or supplemented by the partial segmentation of the (road) surface.

(27) In one exemplary embodiment, additional surroundings information (e.g., surroundings lightness, object information, . . . ) is taken into account in step 224 of determining. This information may be obtained from the image and/or from additional sensors and/or from the context, among others.

(28) FIG. 3 shows a schematic representation of a meteorological visibility range d.sub.met according to one exemplary embodiment of the present invention. Meteorological visibility range d.sub.met results from the distance at which an object is still perceivable with 5% of its original contrast. FIG. 3 thus shows a silhouette of a vehicle 300 in five views next to one another, the contrast varying, from an original contrast referred to as 100% to a contrast referred to as 2%. Between them, the silhouette is shown with 50%, 20% and 5%. The threshold of perception lies at a contrast of 5%.
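Under Koschmieder's theory, the 5% perception threshold described above links extinction coefficient K directly to the meteorological visibility range via d.sub.met = −ln(0.05)/K ≈ 3/K. A minimal sketch of this standard relation (the function name is illustrative, not from the patent):

```python
import math

# Meteorological visibility range: the distance at which an object
# retains 5% of its original contrast (Koschmieder relation).
CONTRAST_THRESHOLD = 0.05

def met_visibility_range(K):
    """Visibility range d_met [m] from the atmospheric extinction
    coefficient K [1/m]: contrast decays as e^(-K*d), so solve
    e^(-K*d_met) = 0.05 for d_met."""
    return -math.log(CONTRAST_THRESHOLD) / K
```

For example, K = 0.03 1/m corresponds to a visibility range of roughly 100 m; a larger extinction coefficient (denser fog) yields a shorter visibility range.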

(29) FIG. 4 shows a schematic representation of a correlation between object light 430 and ambient light 432 scattered in according to one exemplary embodiment of the present invention. Object light 430 is attenuated on the path from object 434 or item 434 to viewer 436 and enhanced by ambient light 432 scattered in.

(30) The approach described here is based on the tracking of objects, parts of surfaces or points across two or more frames or images of a camera. When these tracked entities (here denoted with reference numeral 434) move in their distance relative to the camera, the luminance or object luminance is modified by the fog. Luminance here means not only the classic photometric luminance; the term is to represent any arbitrary (but, over the course of the embodiments, constant) spectral weighting of radiation density. Luminance may here in particular also represent the spectral weighting according to the sensitivity curve of individual pixels of the camera imager or of the image detection device.

(31) This correlation between luminance and object distance is described in more detail by Koschmieder's theory of horizontal visual range, for example:
L=e.sup.-Kd L.sub.0+(1-e.sup.-Kd) L.sub.air  (2)
parameters L.sub.0 and L.sub.air representing the luminance of the object and the ambient light, and d [m] representing the distance between object and viewer. L is the object light perceived by the viewer, which is composed according to equation (2) of attenuated object light L.sub.0 and ambient light L.sub.air scattered in.
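Equation (2) can be sketched directly as a small function; the names are illustrative:

```python
import math

def perceived_luminance(d, L0, Lair, K):
    """Koschmieder's model, equation (2): object light L0 is attenuated
    by the factor e^(-K*d) over distance d, while ambient (air) light
    Lair is scattered into the optical path in its place."""
    t = math.exp(-K * d)          # atmospheric transmission over distance d
    return t * L0 + (1.0 - t) * Lair
```

At d = 0 the object is seen unattenuated (L = L0); with growing distance the perceived luminance fades exponentially toward the ambient light Lair, matching the two curves of FIG. 5.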

(32) According to one exemplary embodiment, when a road sign is tracked by a front camera while the vehicle passes it in fog, the luminance (perceived lightness) decreases when approaching the road sign, since less ambient light is scattered into the optical path and the light reflected by the object is weaker than the ambient light. A curve of decreasing luminances therefore results during tracking. If the distances to the tracked object are measured simultaneously, the luminance curve may also be plotted over the distance. The distances to the tracked object may, for example, be measured via structure from motion for a mono camera, via stereo vision, or via other sensors, for example LIDAR. One example of a luminance curve over the distance is shown in FIG. 5.

(33) The distances may also be smoothed, interpolated and extrapolated during the process in order to also obtain distances in areas of the track at which a distance estimation is difficult or impossible, but the object may already be tracked. Information about the distance traveled between the recording points in time (for example, using ESP, GPS, ego-motion estimation, . . . ) may be taken into account during the process.

(34) FIG. 4 shows how object light 430 is attenuated on the path from the object to the viewer and enhanced by ambient light 432 scattered in. FIG. 5 shows how the Koschmieder model described here according to equation (2) describes the behavior shown in FIG. 4 by exponentially mixing the light.

(35) FIG. 5 shows a schematic representation of a correlation between object light 430 and ambient light 432 scattered in according to one exemplary embodiment of the present invention. Object light 430 is attenuated on the path from object 434 or item 434 to viewer 436 in a vehicle 102 and enhanced by ambient light 432 scattered in. The diagram in FIG. 5 shows a situation similar to the diagram in FIG. 4. Object 434 is a point or an area of a road surface. A luminance of object light 430 is shown in a Cartesian coordinate system 537, the luminance being plotted in percent over distance d. One first curve 538 shows an exponentially decreasing luminance over distance d of the object light, starting at 100%. Second curve 539 shows an exponentially increasing luminance over distance d of the ambient light, starting at 0%. The ambient light is also referred to as air light.

(36) In a favorable case, the road surface has a constant albedo within a narrow tolerance range, meaning that the road surface has a constant reflectivity and is diffusely reflecting. In one favorable exemplary embodiment, the road surface follows Lambert's Law, also known as Lambert's cosine law. The radiation density thus remains the same from all viewer angles. For the viewer, this results in a luminance independent of the viewer angle, the luminance being the photometric equivalent to the radiation density.

(37) FIG. 6 shows an illustration of measured luminances plotted over a distance according to one exemplary embodiment of the present invention. In a Cartesian coordinate system, a distance d is plotted on the abscissa and a luminance L of an object is plotted on the ordinate. Actual measuring points are plotted as dots, i.e., measured luminances of a tracked object, plotted over the distance. An adapted model curve corresponding to Koschmieder's model of horizontal visual range according to equation (2) is shown as a solid line 600.

(38) One way of estimating extinction coefficient K from individual frames or images of a front camera or image detection device in daylight is based on the extraction of the so-called Road Surface Luminance Curve, also abbreviated as RSLC. Here, an area of road and sky is segmented in the camera image and a line-by-line median of the segmented area is plotted as a curve. It has been found that the position of the inflection point of this curve may be correlated to extinction coefficient K via models.

(39) FIG. 7 shows a schematic representation of an object 434 at different distances according to one exemplary embodiment of the present invention. Luminances of an object are detected at different distances. An object may be understood to mean a depiction of a real item in an image. The luminance is referred to as L, the distance as d. The indices refer to the point in time; L.sub.1 represents the luminance at a first point in time, L.sub.2 represents the luminance at a second point in time. An image is assigned to each point in time.

(40) Thus the value pairs (L.sub.1; d.sub.1), . . . , (L.sub.N; d.sub.N) result for an object, N being the number of frames or images in which the object was able to be tracked. In order to infer the underlying extinction coefficient K using Koschmieder's model according to equation (2), or also other models, the value pairs should correspond to the given model as closely as possible; for Koschmieder:

(41) L_1 = e^(-K·d_1)·L_0 + (1 - e^(-K·d_1))·L_air, . . . , L_N = e^(-K·d_N)·L_0 + (1 - e^(-K·d_N))·L_air  (3)

(42) Since equation system (3) generally cannot be solved exactly for N>3 on noisy real data, the parameters (K; L.sub.0; L.sub.air) are estimated in such a way that equations (3) are fulfilled as well as possible in the sense of least error squares:

(43) Σ_{n=1}^{N} ([e^(-K·d_n)·L_0 + (1 - e^(-K·d_n))·L_air] - L_n)² → min  (4)

(44) Extinction coefficient K, and in particular also meteorological visibility range d.sub.met, may thus be estimated from image sequences.

(45) FIG. 8 shows an illustration of surroundings of the vehicle having a segmentation of a road surface according to one exemplary embodiment of the present invention. The diagram may be an image 108 of an image detection device of a vehicle. Such an image detection device is shown, for example, in FIG. 1. A depiction 106 of the road was ascertained and is marked. The road extends essentially vertically in the image. Parts of the road surface are, due to fog, detectable only to a limited extent and are not taken into further account during the segmenting. This applies to a road section near the horizon. Such limitations may also be caused by blocking objects, for example vehicles, or by curves.

(46) As described with reference to FIG. 1 or 2, the depiction of the road is segmented and value pairs of object luminance and distance are ascertained in order to determine an atmospheric extinction coefficient K and thus a visibility range d.sub.met.

(47) FIG. 9 shows a block diagram of a signal curve according to one exemplary embodiment of the present invention. In a first block 740, a partial segmentation of a surface having z-extension/depth extension, for example, a road, is carried out in an image 108 from an image detection device in order to obtain at least one segmented image area. In a following second block 742, the at least one segmented image area as well as distances d.sub.1, . . . , d.sub.N are read in and a classification/binning into representative luminance-distance-pairs is carried out, for example, line by line. The corresponding results (L.sub.1, d.sub.1), . . . , (L.sub.N, d.sub.N) are transmitted to a third block 744 in which a rapid Koschmieder model fit is carried out in order to determine a visibility range d.sub.met or an extinction coefficient K.

(48) One camera is arranged in such a way that it is oriented along a surface having a z-extension/depth extension (for example, a road). This means that multiple visible points exist on the surface which are at different distances to the camera. The surface is preferably Lambertian, has a preferably constant albedo, and preferably extends over a large area for stability reasons. The scene is furthermore preferably evenly lit (this is the case, for example, during the day). The camera supplies an image (signal) in which a preferably large area of the surface is segmented. This means that it is decided for each pixel of the image whether this pixel is part of the surface or not. In one exemplary embodiment, this is not carried out exactly; in particular, large parts of the surface may be classified as not belonging to the surface.

(49) Distances are estimated in the segmented area. Additional information from external sensors and/or other image processing devices and/or assumptions about the surroundings may thereby also be considered. At points of unknown distance, distances are interpolated and/or supplemented using assumptions about the surface.

(50) In order to reduce the effect of noise in the image or in the distances and to render the data manageable for the visibility range estimation, the segmented area is combined into luminance-distance pairs (L.sub.1; d.sub.1), . . . , (L.sub.N; d.sub.N). This may take place line by line, for example, under the assumption that the surface is hardly rotated with reference to the camera and that the intersection of an image line with the surface is therefore approximately at a constant distance to the image detection device.
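The line-by-line combination described above can be sketched as follows; this is a minimal illustration, assuming each image row of the segmented area corresponds to an approximately constant distance, with illustrative names throughout:

```python
from statistics import median

def bin_rows(segmented_rows):
    """Combine a segmented road area into luminance-distance pairs
    (L_n, d_n), one pair per image row: the row's luminance is the
    median over its segmented pixels (robust to outliers), the row's
    distance is the median of the per-pixel distance estimates.

    segmented_rows: list of image rows; each row is a list of
    (luminance, distance) tuples for the pixels segmented as road.
    Rows without any segmented road pixel are skipped.
    """
    pairs = []
    for row in segmented_rows:
        if not row:                      # no road pixels in this row
            continue
        L = median(p[0] for p in row)    # representative row luminance
        d = median(p[1] for p in row)    # representative row distance
        pairs.append((L, d))
    return pairs
```

The median is one of the aggregations the patent names ("average or median luminance"); it keeps isolated mis-segmented pixels from skewing a row's value pair.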

(51) Parameters K, L.sub.air, L.sub.0 in Koschmieder's model according to equation (2) are adapted in the next step to the luminance-distance pairs. This is carried out in the sense of least error squares, so that the functional

(52) F: (K, L_air, L_0) → Σ_{n=1}^{N} ([e^(-K·d_n)·L_0 + (1 - e^(-K·d_n))·L_air] - L_n)²  (5)
is minimized. K or d.sub.met is thus estimable from the data. Since this is generally a computationally very intensive step, a special method is used in order to carry out the model adaptation to the surroundings data in a computationally cost-effective manner:

(53) Minimizing functional F (see equation (5)) requires great effort with conventional methods (gradient descent, Newton's method, Levenberg-Marquardt algorithm, . . . ), depending on N (and, for multiple objects, on the number M of objects and the object track lengths N.sub.m). This minimization could be integrated into a real-time system only with great difficulty and would use many resources there. In this exemplary embodiment, a system is described which, instead of a minimization of functional F, carries out an equivalent solution of a one-dimensional equation f(K)=0. Depending on the effort for the calculation of one-dimensional equation f(K), this is a much more cost-effective problem. For solving f(K)=0, the iterative Newton's method K := K - f(K)/f′(K) may be used, for example. For the one-dimensional equation f presented below, few iterations suffice with starting value K := 0 for a sufficient precision on all forms of (simulated) data sets.
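The Newton iteration named here, K := K − f(K)/f′(K), can be sketched generically; the stand-in example below finds the root of x² − 2 rather than the patent's f of equation (6), purely to illustrate the iteration:

```python
def newton_root(f, df, x0, iterations=20, tol=1e-12):
    """Iterative Newton's method for a one-dimensional equation f(x) = 0.

    f:  the scalar function whose root is sought
    df: its derivative
    x0: starting value (the patent suggests K := 0 for its f)
    """
    x = x0
    for _ in range(iterations):
        fx = f(x)
        if abs(fx) < tol:      # close enough to a root
            break
        x = x - fx / df(x)     # Newton update step
    return x
```

Applied to f(x) = x² − 2 with x0 = 1, a handful of iterations suffice to reach sqrt(2) to machine precision, mirroring the "few iterations suffice" observation in the text.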

(54) A one-dimensional equation f which fulfils the required property may be calculated as follows:

(55) f(K) = L_air²·(S_eed - S_ed) + L_air·(L_0·S_ed - 2·L_0·S_eed + S_Led) + L_0²·S_eed - L_0·S_Led,  (6)
where
L_0 = (S_Le + L_air·(S_ee - S_e)) / S_ee,  (7)
and
L_air = (S_L·S_ee - S_e·S_Le) / (S_1·S_ee - S_e·S_e),  (8)
and the following abbreviated notation is used:

(56) S_1 := Σ_{n=1}^{N} 1 = N,
S_e := Σ_{n=1}^{N} e^(-K·d_n), S_ee := Σ_{n=1}^{N} e^(-2·K·d_n), S_L := Σ_{n=1}^{N} L_n, S_Le := Σ_{n=1}^{N} L_n·e^(-K·d_n), S_Led := Σ_{n=1}^{N} L_n·d_n·e^(-K·d_n), S_ed := Σ_{n=1}^{N} d_n·e^(-K·d_n), S_eed := Σ_{n=1}^{N} d_n·e^(-2·K·d_n).  (9)

(57) Moreover, the derivative of one-dimensional equation f with respect to K (required for Newton's method) is trivially determinable.
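A minimal sketch of the fitting idea behind equations (5)-(9): for a fixed K, the optimal L_0 and L_air follow in closed form from (7) and (8) via the sums of (9), so the minimization of (5) reduces to a one-dimensional search over K. The patent proposes solving f(K) = 0 with Newton's method; the sketch below uses a plain grid search over K instead, for simplicity, and recovers the parameters from synthetic noise-free data (all names are illustrative):

```python
import math

def profile_fit(L, d, K_grid):
    """Fit Koschmieder's model to luminance-distance pairs (L_n, d_n).

    For each candidate K, L0 and Lair are computed in closed form
    (equations (7) and (8), using the sums of equation (9)); the K
    with the smallest residual sum of squares (equation (5)) wins.
    """
    best = None
    for K in K_grid:
        e = [math.exp(-K * dn) for dn in d]
        S1 = len(d)                                       # S_1
        Se = sum(e)                                       # S_e
        See = sum(en * en for en in e)                    # S_ee
        SL = sum(L)                                       # S_L
        SLe = sum(Ln * en for Ln, en in zip(L, e))        # S_Le
        # Closed-form least-squares solution for this fixed K:
        Lair = (SL * See - Se * SLe) / (S1 * See - Se * Se)
        L0 = (SLe + Lair * (See - Se)) / See
        # Residual sum of squares of functional (5):
        sse = sum((en * L0 + (1 - en) * Lair - Ln) ** 2
                  for en, Ln in zip(e, L))
        if best is None or sse < best[0]:
            best = (sse, K, L0, Lair)
    return best[1], best[2], best[3]
```

On data generated exactly from the model, the residual vanishes at the true K, so the grid search recovers (K, L_0, L_air); in the patent's scheme the same profiled structure is exploited, but the search over K is replaced by the cheaper Newton iteration on f(K).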

(58) In one exemplary embodiment, measuring uncertainties may be taken into account in the functional and thus in the parameter estimation. For measuring uncertainties which are expressed as standard deviations σ_1, . . . , σ_N in the measurements L.sub.1, . . . , L.sub.N, the following maximum-likelihood objective functional results for an underlying normally distributed random process, for which an f for the more rapid minimization exists similarly to equations (6), (7), (8) and (9):

(59) F: (K, L_air, L_0) → Σ_{n=1}^{N} (1/σ_n²)·([e^(-K·d_n)·L_0 + (1 - e^(-K·d_n))·L_air] - L_n)²  (10)

(60) Since the determination of the luminance from the image intensity is only possible using an exact radiometric or photometric calibration of the camera, luminance L may here also represent an (approximately) linear representation of the luminance, i.e., L = a·luminance + b for constants a and b.

(61) Saturation and quantization effects, as well as other inaccuracies in the linear camera model, represent no problem. On the one hand, a linearly transformed representation of the luminance poses no problem for the estimation of extinction coefficient K using the above-mentioned method. On the other hand, the relatively small inaccuracies due to quantization and similar effects do not result in any important distortion of the least-squares estimation result. A saturation may furthermore be detected, and saturated measured luminances may be ignored during the extinction coefficient estimation or K estimation.

(62) In one alternative exemplary embodiment, the method is expanded to multiple surfaces. During the minimization of F, parameters or barriers for parameters may also be predefined as prior knowledge or inserted into the functional using additional penalty terms (for example, in the form of (L_air - L_air^given)²). The estimation is thus advantageously stabilized.

(63) FIG. 10 shows an illustration of estimated visibility ranges as a function of a distance according to one exemplary embodiment of the present invention. FIG. 10 thus shows an estimated visibility range curve 1000 in a Cartesian coordinate system as a function of the maximum distance to which the road could be segmented. In this real example, the visibility range estimation becomes unstable below 50 m (however, initially still within acceptable error bounds). A maximum distance of a segmentation of the road is shown on the abscissa; a meteorological visibility range d.sub.met is shown on the ordinate. A local minimum of the curve is situated at a distance of 20 meters, the curve subsequently rising until a distance of 50 to 60 meters and then representing a nearly constant visibility range. As a function of the quality of the measured data, very different curve progressions are possible here.

(64) FIG. 11 shows a block diagram of a device 100 for the image-based visibility range estimation for a vehicle 102 according to one exemplary embodiment of the present invention. Device 100 may be an exemplary embodiment of device 100 shown in FIG. 1. Device 100 is expanded by the processing of at least one second image 1150 of image detection device 110. Second image 1150 shows a depiction of the surroundings of vehicle 102 at a second point in time, which differs from the first point in time at which first image 108 was detected. In the exemplary embodiment shown, the second point in time sequentially follows the first point in time. FIG. 11 thus shows a device 100 for the combined image-based and tracking-based visibility range estimation: a method for the image-based visibility range estimation, as described with reference to FIG. 2, is combined with a method for the tracking-based visibility range estimation.

(65) The diagram of device 100 essentially corresponds to the diagram and description of device 100 for the image-based visibility range estimation in FIG. 1, with the difference that the image detection device detects at least two images sequentially and provides them to device 100 via interface 116 for reading in images.

(66) Device 104 for ascertaining is designed to track an object in first image 108 detected at the first point in time and in second image 1150 detected at the second point in time. Device 112 for segmenting is designed to ascertain, in addition to the value pairs of first object luminance L.sub.1, first distance d.sub.1, second object luminance L.sub.2, and second distance d.sub.2 at the first point in time, at least one third object luminance L.sub.3 of the object and a third distance d.sub.3 to the object at the second point in time. Determination device 114 is designed to ascertain atmospheric extinction coefficient K using first object luminance L.sub.1, second object luminance L.sub.2, at least third object luminance L.sub.3, first distance d.sub.1, second distance d.sub.2, and third distance d.sub.3, atmospheric extinction coefficient K being directly correlated to visibility range d.sub.met.

(67) In one optional exemplary embodiment, a plurality of value pairs of object luminance L and distance d are ascertained per image and provided to determination device 114. Furthermore, in one optional exemplary embodiment, a plurality of images are sequentially detected at a plurality of points in time and analyzed. For example, a sequence of 10, 30 or 50 images is analyzed.
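The pooling of such value pairs across an image sequence could look as follows; the data layout (one list of (luminance, distance) pairs per frame) is a hypothetical choice for illustration:

```python
import numpy as np

def collect_value_pairs(frames):
    """Flatten (luminance, distance) value pairs from a sequence of
    frames (for example 10, 30 or 50 images) into two arrays for the
    joint estimation of extinction coefficient K."""
    L = np.array([pair[0] for frame in frames for pair in frame], dtype=float)
    d = np.array([pair[1] for frame in frames for pair in frame], dtype=float)
    return L, d
```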

(68) FIG. 11 shows a device which estimates K (during daylight) particularly rapidly, i.e., in a real-time-capable manner, from a partially segmented surface (for example, a road) and distance data. In one exemplary embodiment, this is combined with a method for the visibility range estimation which estimates extinction coefficient K from individual frames or images of a front camera or an image detection device during daylight. One method with which the method for visibility range estimation presented here may be combined is based on the extraction of the so-called Road Surface Luminance Curve (RSLC). Here, an area of road and sky is segmented in the camera image, and a line-by-line median of the segmented area is plotted as a curve. It has been found that the position of the turning point of this curve may be correlated to K using models.
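The RSLC extraction described above might be sketched as follows. The per-row median and the localization of the turning point via the row of steepest slope (the inflection point of a sigmoid-like curve) are illustrative simplifications, and the function name is hypothetical:

```python
import numpy as np

def rslc_turning_row(image, mask):
    """Road Surface Luminance Curve sketch: compute the per-row median
    luminance of the segmented road/sky region, then locate the turning
    point as the row of steepest change (for a sigmoid-like curve, the
    inflection point coincides with the extremal first difference).

    image: 2-D grayscale array; mask: boolean array marking the
    segmented region. Illustrative, not the patent's implementation.
    """
    rows, curve = [], []
    for r in range(image.shape[0]):
        vals = image[r][mask[r]]
        if vals.size:                        # skip rows with no segmented pixels
            rows.append(r)
            curve.append(np.median(vals))
    curve = np.asarray(curve)
    slope = np.diff(curve)                   # discrete first derivative
    idx = int(np.argmax(np.abs(slope)))      # steepest row ~ turning point
    return rows[idx], curve
```

The row index of the turning point can then be mapped to K using the models mentioned above.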

(69) In one exemplary embodiment, the method presented here, or the device, is combined with an additional model-based approach to estimating the visibility range. This approach is based on the tracking of objects across multiple frames. The measured luminances L and distances d of the objects are approximated using a model which includes extinction coefficient K as a parameter. In this way, extinction coefficient K is estimated as the most likely parameter given the observed measurements. A typical model for this is the model of horizontal visual range by Koschmieder according to equation (2), which is combined with a method for rapid estimation.

(70) Parameters L.sub.0 and L.sub.air represent the luminance of the object and the ambient light, respectively, and d [m] represents the distance between object and viewer. L is the object light perceived by the viewer, which is accordingly composed of the attenuated object light and the scattered-in ambient light.
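The direct correlation between K and the meteorological visibility range d.sub.met can be made concrete with the common 5% contrast threshold, d.sub.met = −ln(0.05)/K ≈ 3/K. The threshold value is the usual CIE convention and is an assumption here, since the patent only states that the two quantities are directly correlated:

```python
import math

def visibility_from_extinction(K):
    """Meteorological visibility range d_met [m] from the atmospheric
    extinction coefficient K [1/m], using the common 5 % contrast
    threshold (CIE convention): d_met = -ln(0.05) / K."""
    return -math.log(0.05) / K
```

For example, K = 0.05 1/m corresponds to a visibility range of roughly 60 m, consistent with the order of magnitude shown in FIG. 10.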

(71) In one variant, the method described here is expanded to multiple surfaces. During the minimization of F, parameters or bounds for parameters are also predefined as prior knowledge or inserted into the functional using additional penalty terms.

(72) The exemplary embodiments described here and illustrated in the figures are selected only as examples. Different exemplary embodiments may be combined with each other completely or in regard to individual features. One exemplary embodiment may also be supplemented by features of another exemplary embodiment.

(73) Furthermore, the method steps presented here may also be repeated or carried out in a sequence different from the sequence described.

(74) If one exemplary embodiment includes an and/or link between a first feature and a second feature, this should be read in such a way that the exemplary embodiment according to one specific embodiment includes both the first feature and the second feature, and according to an additional specific embodiment includes either only the first feature or only the second feature.