Method and device for tracking-based visibility range estimation

09727792 · 2017-08-08

Abstract

A method is provided for tracking-based visibility range estimation for a vehicle, the method including a step of tracking an object detected in a first image at a first point in time and in a second image at a second point in time, a step of ascertaining a first object luminance of the object and a first distance to the object at the first point in time and also ascertaining a second object luminance of the object and a second distance to the object at the second point in time, and also a step of determining an atmospheric extinction coefficient using the first object luminance, the second object luminance, the first distance, and the second distance, the atmospheric extinction coefficient being in direct correlation to visibility range.

Claims

1. A method for supporting safe driving of a vehicle, the method comprising: tracking, by a driver assistance system of the vehicle, an object depicted in a first image detected at a first point in time and in at least one second image detected at a second point in time; ascertaining, by the driver assistance system, a first object luminance of the object and a first distance to the object at the first point in time; ascertaining, by the driver assistance system, a second object luminance of the object and a second distance to the object at the second point in time; determining, by the driver assistance system, an atmospheric extinction coefficient using the first object luminance, the second object luminance, the first distance, and the second distance, the atmospheric extinction coefficient being in direct correlation to a visibility range; estimating, by the driver assistance system, the visibility range using the determined atmospheric extinction coefficient; and warning a driver or adapting a driving of the vehicle, by the driver assistance system, based on the estimated visibility range; wherein the atmospheric extinction coefficient is determined using a one-dimensional equation of: F(K) = L_air^2 (S_eed − S_ed) + L_air (Σ_{m=1}^{M} L_0^m S_ed^m − 2 Σ_{m=1}^{M} L_0^m S_eed^m + S_Led) + Σ_{m=1}^{M} (L_0^m)^2 S_eed^m − Σ_{m=1}^{M} L_0^m S_Led^m.

2. A method for supporting safe driving of a vehicle, the method comprising: tracking, by a driver assistance system of the vehicle, an object depicted in a first image detected at a first point in time and in at least one second image detected at a second point in time; ascertaining, by the driver assistance system, a value of a first overall luminance corresponding to the object and a value of a first distance to the object at the first point in time; ascertaining, by the driver assistance system, a value of a second overall luminance corresponding to the object and a value of a second distance to the object at the second point in time; fitting, by regression, the value of the first overall luminance, the value of the second overall luminance, the value of the first distance, and the value of the second distance to a model that models overall luminance against parameters that include an air luminance, a luminance of the object, object distance, and an atmospheric extinction coefficient, thereby obtaining an estimate of the air luminance, the luminance of the object, and the atmospheric extinction coefficient; estimating, by the driver assistance system, the visibility range using the estimated atmospheric extinction coefficient; and warning a driver or adapting a driving of the vehicle, by the driver assistance system, based on the estimated visibility range.

3. The method as recited in claim 1, wherein the atmospheric extinction coefficient is determined using an estimation method from the one-dimensional equation.

4. The method as recited in claim 1, wherein the atmospheric extinction coefficient is determined using an iterative Newton's method.

5. The method as recited in claim 1, further comprising: detecting the first image at the first point in time and the second image at a second point in time following the first point in time, wherein: in the step of detecting, the first and second images are detected using at least one image detection device, the first and second images represent an item as the object from surroundings of the at least one image detection device, and a distance is present between the at least one image detection device and the item.

6. The method as recited in claim 1, wherein: at least one additional object is tracked, an additional first object luminance of the additional object is ascertained, an additional first distance to the at least one additional object at the first point in time is ascertained, an additional second object luminance of the additional object is ascertained, an additional second distance to the at least one additional object at the second point in time is ascertained, and in the step of determining, the atmospheric extinction coefficient is determined using the additional first object luminance, the additional second object luminance, the additional first distance, and the additional second distance.

7. The method as recited in claim 1, wherein: the object is tracked in at least a third image detected at a third point in time, a third object luminance of the object is ascertained, a third distance to the object at the third point in time is ascertained, and the atmospheric extinction coefficient is determined using the third object luminance and the third distance.

8. A device for supporting safe driving of a vehicle, the device comprising: a processing circuitry that is configured for: tracking an object depicted in a first image detected at a first point in time and in at least one second image detected at a second point in time; ascertaining a first object luminance of the object and a first distance to the object at the first point in time; ascertaining a second object luminance of the object and a second distance to the object at the second point in time; determining an atmospheric extinction coefficient using the first object luminance, the second object luminance, the first distance, and the second distance, the atmospheric extinction coefficient being in direct correlation to a visibility range; estimating the visibility range using the determined atmospheric extinction coefficient; and warning a driver or adapting a driving of the vehicle based on the estimated visibility range; wherein the atmospheric extinction coefficient is determined using a one-dimensional equation of: F(K) = L_air^2 (S_eed − S_ed) + L_air (Σ_{m=1}^{M} L_0^m S_ed^m − 2 Σ_{m=1}^{M} L_0^m S_eed^m + S_Led) + Σ_{m=1}^{M} (L_0^m)^2 S_eed^m − Σ_{m=1}^{M} L_0^m S_Led^m.

9. A non-transitory machine-readable memory medium having a computer program that is executable by a processor of a driver assistance system of a vehicle and that, when executed by the processor, causes the processor to carry out a method for supporting safe driving of the vehicle, the method comprising: tracking an object depicted in a first image detected at a first point in time and in at least one second image detected at a second point in time; ascertaining a first object luminance of the object and a first distance to the object at the first point in time; ascertaining a second object luminance of the object and a second distance to the object at the second point in time; determining an atmospheric extinction coefficient using the first object luminance, the second object luminance, the first distance, and the second distance, the atmospheric extinction coefficient being in direct correlation to a visibility range; estimating the visibility range using the determined atmospheric extinction coefficient; and warning a driver or adapting a driving of the vehicle based on the estimated visibility range; wherein the atmospheric extinction coefficient is determined using a one-dimensional equation of: F(K) = L_air^2 (S_eed − S_ed) + L_air (Σ_{m=1}^{M} L_0^m S_ed^m − 2 Σ_{m=1}^{M} L_0^m S_eed^m + S_Led) + Σ_{m=1}^{M} (L_0^m)^2 S_eed^m − Σ_{m=1}^{M} L_0^m S_Led^m.

10. The method as recited in claim 2, wherein the fitting using the regression additionally yields an estimate for the air luminance.

11. The method as recited in claim 2, wherein: the model is L_N = e^{−K d_N} L_o + (1 − e^{−K d_N}) L_air or its algebraic equivalent; L_N is the value of the overall luminance for point in time N; L_o is the value of the luminance for object o; L_air is the value of the air luminance; d_N is the value of the distance of the object for point in time N; and K is the atmospheric extinction coefficient.

12. The method as recited in claim 11, wherein the fitting using the regression yields respective estimates for each of L.sub.o and L.sub.air by the fitting of each of the values of d.sub.N and L.sub.N of each of the first and second points in time to the model.

13. The method as recited in claim 2, wherein least squared error is used as the regression.

14. The method as recited in claim 2, wherein the model models light transmission by an atmospheric aerosol.

15. The method as recited in claim 2, wherein: the regression includes a plurality of fitting iterations; the model includes a first plurality of variables, the first plurality of variables including a first variable representing the first and second overall luminance and into which the values of the first overall luminance and the second overall luminance are plugged in each of the iterations, and a second variable representing the first and second distances and into which the values of the first and second distances are plugged in each of the iterations; the model includes a second plurality of variables, the values of which are modified over the course of the plurality of fitting iterations; and the second plurality of variables includes a third variable representing the atmospheric extinction coefficient.

16. The method as recited in claim 15, wherein the second plurality of variables further include a fourth variable representing the air luminance and a fifth variable representing the luminance of the object.

17. The method as recited in claim 15, wherein: the model is L_N = e^{−K d_N} L_o + (1 − e^{−K d_N}) L_air or its algebraic equivalent; L_N is the first variable, into which are plugged values of the overall luminance per point in time N; L_air is a fourth variable representing an unknown value of the air luminance; L_o is a fifth variable representing the luminance for object o; d_N is the second variable, into which are plugged values of the distance of the object per point in time N; and K is the third variable representing the atmospheric extinction coefficient.

18. The method as recited in claim 17, wherein the fitting using the regression yields respective estimates for each of L.sub.o and L.sub.air by the fitting of each of the values of d.sub.N and L.sub.N of each of the first and second points in time to the model.

19. The method as recited in claim 1, wherein: K represents the extinction coefficient; M is a number of objects considered; L_0^m is a current estimate of intrinsic luminance of object m; L_air is a current estimate of air luminance; S_ed^m is Σ_{n=1}^{N_m} d_n^m e^{−K d_n^m}; N_m is a number of measurements for object m; e is Euler's number; d_n^m is distance to object m for measurement n; S_ed is Σ_{m=1}^{M} S_ed^m; S_eed^m is Σ_{n=1}^{N_m} d_n^m e^{−K d_n^m} e^{−K d_n^m}; S_eed is Σ_{m=1}^{M} S_eed^m; S_Led^m is Σ_{n=1}^{N_m} L_n^m d_n^m e^{−K d_n^m}; L_n^m is a current estimate of luminance of object m for measurement n; and S_Led is Σ_{m=1}^{M} S_Led^m.

20. The method as recited in claim 19, wherein the atmospheric extinction coefficient is determined by finding a value for the extinction coefficient at which F(K) = 0.

21. The method as recited in claim 19, wherein: L_0^m is defined as (S_Le^m + L_air (S_ee^m − S_e^m)) / S_ee^m; S_Le^m is Σ_{n=1}^{N_m} L_n^m e^{−K d_n^m}; S_ee^m is Σ_{n=1}^{N_m} e^{−K d_n^m} e^{−K d_n^m}; and S_e^m is Σ_{n=1}^{N_m} e^{−K d_n^m}.

22. The method as recited in claim 21, wherein: L_air is defined as (S_L − Σ_{m=1}^{M} S_e^m S_Le^m / S_ee^m) / (S_1 − Σ_{m=1}^{M} S_e^m S_e^m / S_ee^m); S_L is Σ_{m=1}^{M} S_L^m; S_1 is Σ_{m=1}^{M} S_1^m; S_L^m is Σ_{n=1}^{N_m} L_n^m; and S_1^m is Σ_{n=1}^{N_m} 1.

23. The method as recited in claim 19, wherein: L_air is defined as (S_L − Σ_{m=1}^{M} S_e^m S_Le^m / S_ee^m) / (S_1 − Σ_{m=1}^{M} S_e^m S_e^m / S_ee^m); S_L is Σ_{m=1}^{M} S_L^m; S_1 is Σ_{m=1}^{M} S_1^m; S_L^m is Σ_{n=1}^{N_m} L_n^m; S_1^m is Σ_{n=1}^{N_m} 1; S_Le^m is Σ_{n=1}^{N_m} L_n^m e^{−K d_n^m}; S_ee^m is Σ_{n=1}^{N_m} e^{−K d_n^m} e^{−K d_n^m}; and S_e^m is Σ_{n=1}^{N_m} e^{−K d_n^m}.

24. The device as recited in claim 8, wherein: K represents the extinction coefficient; M is a number of objects considered; L_0^m is a current estimate of intrinsic luminance of object m; L_air is a current estimate of air luminance; S_ed^m is Σ_{n=1}^{N_m} d_n^m e^{−K d_n^m}; N_m is a number of measurements for object m; e is Euler's number; d_n^m is distance to object m for measurement n; S_ed is Σ_{m=1}^{M} S_ed^m; S_eed^m is Σ_{n=1}^{N_m} d_n^m e^{−K d_n^m} e^{−K d_n^m}; S_eed is Σ_{m=1}^{M} S_eed^m; S_Led^m is Σ_{n=1}^{N_m} L_n^m d_n^m e^{−K d_n^m}; L_n^m is a current estimate of luminance of object m for measurement n; and S_Led is Σ_{m=1}^{M} S_Led^m.

25. The device as recited in claim 24, wherein the atmospheric extinction coefficient is determined by finding a value for the extinction coefficient at which F(K) = 0.

26. The non-transitory machine-readable memory medium as recited in claim 9, wherein: K represents the extinction coefficient; M is a number of objects considered; L_0^m is a current estimate of intrinsic luminance of object m; L_air is a current estimate of air luminance; S_ed^m is Σ_{n=1}^{N_m} d_n^m e^{−K d_n^m}; N_m is a number of measurements for object m; e is Euler's number; d_n^m is distance to object m for measurement n; S_ed is Σ_{m=1}^{M} S_ed^m; S_eed^m is Σ_{n=1}^{N_m} d_n^m e^{−K d_n^m} e^{−K d_n^m}; S_eed is Σ_{m=1}^{M} S_eed^m; S_Led^m is Σ_{n=1}^{N_m} L_n^m d_n^m e^{−K d_n^m}; L_n^m is a current estimate of luminance of object m for measurement n; and S_Led is Σ_{m=1}^{M} S_Led^m.

27. The non-transitory machine-readable memory medium as recited in claim 26, wherein the atmospheric extinction coefficient is determined by finding a value for the extinction coefficient at which F(K) = 0.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows a block diagram of a device for tracking-based visibility range estimation for a vehicle according to one exemplary embodiment of the present invention.

(2) FIG. 2 shows a flow chart of a method for tracking-based visibility range estimation according to one exemplary embodiment of the present invention.

(3) FIG. 3 shows a schematic representation of a meteorological visibility range according to one exemplary embodiment of the present invention.

(4) FIG. 4 shows a schematic representation of a relationship of object light and scattered ambient light according to one exemplary embodiment of the present invention.

(5) FIG. 5 shows a representation of measured luminances plotted over a distance according to one exemplary embodiment of the present invention.

(6) FIG. 6 shows a schematic representation of an object at different distances according to one exemplary embodiment of the present invention.

(7) FIG. 7 shows a block diagram of a signal curve according to one exemplary embodiment of the present invention.

DETAILED DESCRIPTION

(8) In the following description of advantageous exemplary embodiments of the present invention, identical or similar reference numerals are used for similarly acting elements represented in the various figures, a repeated description of these elements being omitted.

(9) FIG. 1 shows a block diagram of a device 100 for tracking-based visibility range estimation for a vehicle 102 according to one exemplary embodiment of the present invention. Device 100 includes a tracking device 104 for tracking a depiction of an object in a first image 106 detected at a first point in time and in a second image 108 detected at a second point in time, an ascertainment device 110 for ascertaining a first object luminance L.sub.1 of the object and a first distance d.sub.1 to the object at the first point in time and also ascertaining a second object luminance L.sub.2 of the object and a second distance d.sub.2 to the object at the second point in time, and also a determination device 112 for determining an atmospheric extinction coefficient K using first object luminance L.sub.1, second object luminance L.sub.2, first distance d.sub.1, and second distance d.sub.2, atmospheric extinction coefficient K being directly correlated to visibility range d.sub.met. The correlation between atmospheric extinction coefficient K and visibility range d.sub.met is represented in equation (1). The first point in time is thereby chronologically prior to the second point in time.

(10) Furthermore, in the exemplary embodiment shown in FIG. 1, device 100 has an interface 114 for detecting first image 106 at the first point in time and second image 108 at the second point in time following the first point in time. Images 106, 108 are detected using an image detection device 116. Images 106, 108 represent an object, for example an item from the surroundings of image detection device 116, distances d.sub.1, d.sub.2 each representing a distance between image detection device 116 and the item.

(11) FIG. 2 shows a flow chart of a method for tracking-based visibility range estimation according to one exemplary embodiment of the present invention. The method includes a step 220 of tracking an object, a step 222 of ascertaining, and a step 224 of determining an atmospheric extinction coefficient K. In step 220 of tracking, a depiction of an object is tracked in a first image detected at a first point in time and in a second image detected at a second point in time. The object is thus recognized and a position and extent of the depiction of the object in the image is ascertained in the image in order to be able to further examine the area according to the depiction of the object in the following steps. In step 222 of ascertaining, an object luminance and a distance value are ascertained for the object and provided as a pair of variables. Thus, a first object luminance of the object is ascertained in the image recorded at the first point in time and a second object luminance of the object is ascertained in the image recorded at the second point in time, for example by using the respective images. The assignable first distance and second distance between the object and an image detection device are ascertained, depending on the exemplary embodiment, from the image, from additional data from the image detection device, or from another sensor. In step 224 of determining, atmospheric extinction coefficient K is determined using the first object luminance, the second object luminance, the first distance, and the second distance, the atmospheric extinction coefficient being directly correlated to the visibility range. The correlation between atmospheric extinction coefficient K and visibility range d.sub.met is represented in equation (1).

(12) In one exemplary embodiment, in step 224 of determining, extinction coefficient K is determined using a one-dimensional equation and additionally or alternatively a model of horizontal visual range.

(13) In one exemplary embodiment, extinction coefficient K is determined in step 224 of determining using an estimation method from the one-dimensional equation.

(14) In one exemplary embodiment, extinction coefficient K is determined in step 224 of determining using an iterative Newton's method.

(15) In one exemplary embodiment, the method includes an optional step of detecting the first image at the first point in time and the second image at the second point in time following the first point in time. In the step of detecting, the images are detected using an image detection device, the images representing an item from the surroundings of the image detection device as an object, the distance representing a distance between the image detection device and the item.

(16) In one exemplary embodiment, at least one additional object is tracked in step 220 of tracking. Then in step 222 of ascertaining, an additional first object luminance of the additional object and an additional first distance to the at least one additional object at the first point in time is ascertained and also an additional second object luminance of the additional object and an additional second distance to the at least one additional object at the second point in time is ascertained. In step 224 of determining, the atmospheric extinction coefficient is determined using the additional first object luminance, the additional second object luminance, the additional first distance, and the additional second distance.

(17) In one exemplary embodiment, a depiction of the object is tracked in step 220 of tracking in a third image detected at a third point in time, whereby in step 222 of ascertaining, a third object luminance of the object and a third distance to the object at the third point in time are ascertained, and in step 224 of determining, the atmospheric extinction coefficient is determined using the third object luminance and the third distance.

(18) In one exemplary embodiment, additional pieces of surroundings information (for example surroundings luminance, object knowledge, . . . ) are also incorporated into step 224 of determining. These may be obtained from the image and/or from additional sensors and/or from the context, among others.

(19) FIG. 3 shows a schematic representation of a meteorological visibility range d.sub.met according to one exemplary embodiment of the present invention. Meteorological visibility range d.sub.met is the distance at which an object is still perceivable with 5% of its original contrast. FIG. 3 thus shows a silhouette of a vehicle in five views next to one another, the contrast varying from an original contrast referred to as 100% to a contrast referred to as 2%. In between, the silhouette is shown with 50%, 20%, and 5% contrast. The threshold of perception lies at a contrast of 5%.
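The 5% contrast threshold ties the extinction coefficient directly to the visibility range: solving e^{−K·d_met} = 0.05 for d_met gives d_met = −ln(0.05)/K ≈ 3/K, which is the standard Koschmieder relation and presumably the correlation referred to as equation (1), not reproduced in this excerpt. A minimal sketch (function names are illustrative, not from the patent):

```python
import math

# Koschmieder relation between the atmospheric extinction coefficient
# K [1/m] and the meteorological visibility range d_met [m]: an object
# is still perceivable at 5 % of its original contrast, so the
# visibility range satisfies e^(-K * d_met) = 0.05.
def visibility_range(K):
    """Meteorological visibility range for extinction coefficient K."""
    return -math.log(0.05) / K  # approximately 3 / K

def extinction_coefficient(d_met):
    """Inverse relation: extinction coefficient for a given visibility."""
    return -math.log(0.05) / d_met

# Example: K = 0.03 1/m corresponds to roughly 100 m visibility (dense fog).
```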

(20) FIG. 4 shows a schematic representation of a correlation between object light 430 and ambient light 432 scattered in according to one exemplary embodiment of the present invention. Object light 430 is attenuated on the path from object 434, for example an item, to viewer 436 and enhanced by ambient light 432 scattered in.

(21) The approach described here is based on the tracking of objects, parts of surfaces, or points across two or more frames or images of a camera. When these tracked entities—here denoted with reference numeral 434—change their distance relative to the camera, the luminance or object luminance is modified by the fog. Luminance here does not mean only the classic photometric luminance; the term represents any arbitrary (but constant over the course of the embodiments) spectral weighting of the radiation density. Luminance may here in particular also represent the spectral weighting according to the sensitivity curve of individual pixels of the camera imager or of the image detection device.

(22) This correlation between luminance and object distance is described in more detail by Koschmieder's model of horizontal visual range, for example:
L = e^{−Kd} L_0 + (1 − e^{−Kd}) L_air  (2)
parameters L_0 and L_air representing the luminance of the object and the ambient light, and d [m] representing the distance between object and viewer. L is the object light perceived by the viewer, which according to equation (2) is composed of attenuated object light L_0 and ambient light L_air scattered in.
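Equation (2) can be evaluated directly as a forward model. A small sketch (the function name is my own choice, not the patent's):

```python
import math

def perceived_luminance(L0, L_air, K, d):
    """Koschmieder's model of horizontal visual range, equation (2):
    object light attenuated over distance d plus scattered-in air light."""
    t = math.exp(-K * d)  # atmospheric transmission over distance d
    return t * L0 + (1.0 - t) * L_air

# At d = 0 the object's intrinsic luminance L0 is seen; with growing
# distance the perceived luminance converges toward the air light L_air.
```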

(23) According to one exemplary embodiment, when a road sign is tracked by a front camera while driving past it in fog, the luminance (perceived lightness) decreases when approaching the road sign, since less ambient light is scattered into the optical path and the light reflected by the object is weaker than the ambient light. A curve of decreasing luminances therefore results during tracking. If the distances to the tracked object are measured simultaneously, the luminance curve may also be plotted over the distance. The distances to the tracked object may, for example, be determined via "structure from motion" with a mono camera, via stereo vision, or via other sensors, for example LIDAR. One example of a luminance curve over distance is shown in FIG. 5.

(24) The distances may also be smoothed, interpolated, and extrapolated during the process in order to also obtain distances in areas of the track in which a distance estimation is difficult or impossible but the object can already be tracked. Information about the distance traveled between the recording points in time (for example, from ESP, GPS, ego-motion estimation, . . . ) may be taken into account during the process.

(25) FIG. 5 shows a representation of measured luminances plotted over distance according to one exemplary embodiment of the present invention. In a Cartesian coordinate system, a distance d is plotted on the abscissa and a luminance L of an object is plotted on the ordinate. Actual measuring points are plotted as dots, i.e., measured luminances of a tracked object are plotted over distance. A fitted model curve according to Koschmieder's model of horizontal visual range, corresponding to equation (2), is represented as a continuous line.

(26) FIG. 6 shows a schematic representation of an object 434 at different distances according to one exemplary embodiment of the present invention. The luminances of an object are detected at different distances. The object represented in an image may be understood, for example, as a depiction of a real item. The luminance is referred to as L, the distance as d. The indices refer to the point in time; L.sub.1 represents the luminance at a first point in time, L.sub.2 represents the luminance at a second point in time. An image is assigned to each point in time.

(27) Thus the value pairs (L_1; d_1), . . . , (L_N; d_N) result for an object, N being the number of frames or images in which the object could be tracked. In order to draw an inference about the underlying extinction coefficient K using Koschmieder's model according to equation (2), or also other models, the value pairs should correspond as closely as possible to the predefined model; for Koschmieder:

(28) L_1 = e^{−K d_1} L_0 + (1 − e^{−K d_1}) L_air,
⋮
L_N = e^{−K d_N} L_0 + (1 − e^{−K d_N}) L_air.  (3)

(29) Since equation system (3) generally cannot be solved exactly for N > 3 on the basis of noisy real data, the parameters (K; L_0; L_air) are estimated in such a way that equations (3) are fulfilled as well as possible in the least-squares sense:

(30) Σ_{n=1}^{N} ([e^{−K d_n} L_0 + (1 − e^{−K d_n}) L_air] − L_n)² → min  (4)
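One way to carry out this least-squares minimization for a single object track: for a fixed K the model is linear in (L_0, L_air), so those two parameters follow in closed form from the 2×2 normal equations, and K can then be found by a one-dimensional search. This is only an illustrative sketch under my own naming, not the patent's implementation (second sub block 748 below replaces the search with a Newton iteration on a one-dimensional equation):

```python
import math

def fit_koschmieder(Ls, ds):
    """Least-squares fit of (K, L0, L_air) to luminance/distance pairs.
    For each candidate K, L0 and L_air are solved in closed form from the
    normal equations of the then-linear model; K is chosen by grid search."""
    best = None
    for i in range(1, 5001):
        K = i * 1e-4                        # candidate K in (0, 0.5]
        a = [math.exp(-K * d) for d in ds]  # basis weight of L0
        b = [1.0 - ai for ai in a]          # basis weight of L_air
        # 2x2 normal equations for min ||[a b] @ (L0, L_air) - Ls||^2
        aa = sum(x * x for x in a)
        ab = sum(x * y for x, y in zip(a, b))
        bb = sum(y * y for y in b)
        aL = sum(x * L for x, L in zip(a, Ls))
        bL = sum(y * L for y, L in zip(b, Ls))
        det = aa * bb - ab * ab
        if abs(det) < 1e-12:
            continue  # degenerate geometry for this K
        L0 = (aL * bb - bL * ab) / det
        L_air = (bL * aa - aL * ab) / det
        err = sum((x * L0 + y * L_air - L) ** 2 for x, y, L in zip(a, b, Ls))
        if best is None or err < best[0]:
            best = (err, K, L0, L_air)
    return best[1], best[2], best[3]
```

With noisy measurements the fitted curve corresponds to the continuous line in FIG. 5.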

(31) Extinction coefficient K, and in particular also meteorological visibility range d_met, may thus be estimated from image sequences.

(32) FIG. 7 shows a block diagram of a signal curve according to one exemplary embodiment of the present invention. In a first block 740, the signals to be processed are read in or recorded; in a second block 742, the signals are processed; and in a third block 744, a visibility range signal d_met or an extinction coefficient signal K is provided. Block 742 includes two sub blocks 746, 748.

(33) In block 740, the value pairs (L.sub.1; d.sub.1), . . . , (L.sub.N; d.sub.N) are read in or recorded for one or a plurality of object(s). In first sub block 746 of block 742, a model fit for estimating extinction coefficient K is carried out, and second sub block 748 of block 742 provides means suitable for this purpose.

(34) First sub block 746 represents a system which uses measured data of tracked object luminances and distances together with a model for estimating atmospheric extinction coefficient K or other fog properties.

(35) Second sub block 748 represents a system which uses a method, described below, for minimizing the concrete Koschmieder model functional (which may be used in first sub block 746), and thus becomes real-time capable.

(36) Advantages of the general approach in first sub block 746 are that, due to its novelty, it is able to estimate extinction coefficients K or visibility range d.sub.met independently of previously existing methods, and thus may be used alone or in combination (for validation). It is independent of a road traffic scenario and could be used in principle in every system which estimates luminances and associated distances.

(37) A further advantage is the ease of integrability into an existing system. Thus, arbitrary tracked objects, street signs, tracked flows, etc. might be incorporated into the visibility range estimation without additional complexity. The actually very expensive estimation in first sub block 746 (minimization of a complex functional in ≥3 parameters) becomes very cost-efficient due to second sub block 748.

(38) First sub block 746 is described in the following in greater detail. Given are object luminances (or almost-linear representations of object luminances)

(39) L_1^1, …, L_{N_1}^1, …, L_1^M, …, L_{N_M}^M of M ≥ 1 object(s) and associated distances d_1^1, …, d_{N_1}^1, …, d_1^M, …, d_{N_M}^M, where N_m is the length of the object track m, for all m ∈ {1, …, M}. A model-based estimation of extinction coefficient K is carried out based on these data. For the Koschmieder model from equation (2), this means that equation system:

(40) obj. 1:
L_1^1 = L_air + (L_0^1 − L_air) e^{−K d_1^1}
⋮
L_{N_1}^1 = L_air + (L_0^1 − L_air) e^{−K d_{N_1}^1}
⋮
obj. M:
L_1^M = L_air + (L_0^M − L_air) e^{−K d_1^M}
⋮
L_{N_M}^M = L_air + (L_0^M − L_air) e^{−K d_{N_M}^M}  (5)
may be solved optimally in the model parameters K, L_air, L_0^1, …, L_0^M by minimizing functional ℱ.

(41) ℱ: (K, L_air, L_0^1, …, L_0^M) ↦ Σ_{m=1}^{M} Σ_{n=1}^{N_m} ([e^{−K d_n^m} L_0^m + (1 − e^{−K d_n^m}) L_air] − L_n^m)²  (6)

(42) Since each object has its own intrinsic luminance L.sub.0, an additional parameter is added per object, resulting in a total of M+2 parameters.

(43) Alternatively, the model may also be very simple, roughly, “objects which move toward me become darker, which is indicative of fog”. Even trained fog recognition algorithms or machine learning algorithms, which specifically determine a trained visibility range, could be constructed on observations of this type.

(44) In one exemplary embodiment, measuring uncertainties may be incorporated into the functional and thus into the parameter estimation. For measuring uncertainties expressed as standard deviations σ_n^m of the measurements L_n^m, the following maximum-likelihood target functional results from an underlying normally distributed random process:

(45) ℱ: (K, L_air, L_0^1, …, L_0^M) ↦ Σ_{m=1}^{M} Σ_{n=1}^{N_m} (1/(σ_n^m)²) ([e^{−K d_n^m} L_0^m + (1 − e^{−K d_n^m}) L_air] − L_n^m)²  (7)
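A sketch of evaluating this weighted functional; the data layout (a list of (L, d, σ) triples per object) is my own choice, not specified in the patent:

```python
import math

def weighted_functional(K, L_air, L0s, tracks):
    """Maximum-likelihood functional of equation (7): squared model
    residuals, each weighted by 1/sigma^2 of the corresponding luminance
    measurement. tracks[m] is a list of (L, d, sigma) triples for object m;
    L0s[m] is the intrinsic-luminance parameter of object m."""
    total = 0.0
    for L0, track in zip(L0s, tracks):
        for L, d, sigma in track:
            t = math.exp(-K * d)  # atmospheric transmission
            residual = (t * L0 + (1.0 - t) * L_air) - L
            total += (residual / sigma) ** 2
    return total
```

Uncertain measurements (large σ) thus contribute less to the parameter estimate.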

(46) Second sub block 748 is described in greater detail in the following. A large expense is necessary (depending on the number M of objects and the lengths of the object tracks N_m) in order to minimize functional ℱ with the aid of conventional methods (gradient descent, Newton's method, Levenberg-Marquardt, . . . ). This minimization could be integrated into a real-time system only with difficulty and would demand many resources there. In this exemplary embodiment, a system is described which, instead of a minimization of the functional, carries out an equivalent resolution of a one-dimensional equation f(K)=0. Owing to the low expense of evaluating the one-dimensional equation f(K), this is a much more cost-efficient problem. For example, the iterative Newton's method, K := K − f(K)/f′(K), may be used for the resolution of f(K)=0. For the one-dimensional equation f presented below, few (precisely 3) iterations with starting value K := 0 suffice for sufficient precision on all forms of (simulated) data sets.

(47) A one-dimensional equation f which meets the required property may be calculated as follows:

(48)
$$f(K) = L_{\mathrm{air}}^2 \left(S_{eed} - S_{ed}\right) + L_{\mathrm{air}} \left( \sum_{m=1}^{M} L_0^m S_{ed}^m - 2 \sum_{m=1}^{M} L_0^m S_{eed}^m + S_{Led} \right) + \sum_{m=1}^{M} L_0^m L_0^m S_{eed}^m - \sum_{m=1}^{M} L_0^m S_{Led}^m, \qquad (8)$$

where for all $j \in \{1, \ldots, M\}$:

$$L_0^j = \frac{S_{Le}^j + L_{\mathrm{air}}\left(S_{ee}^j - S_e^j\right)}{S_{ee}^j}, \qquad (9)$$

and

$$L_{\mathrm{air}} = \frac{S_L - \sum_{m=1}^{M} \dfrac{S_e^m S_{Le}^m}{S_{ee}^m}}{S_1 - \sum_{m=1}^{M} \dfrac{S_e^m S_e^m}{S_{ee}^m}}, \qquad (10)$$

and also an abbreviated notation is used:

(49)
$$S_1^j := \sum_{n=1}^{N_j} 1, \quad S_e^j := \sum_{n=1}^{N_j} e^{-K d_n^j}, \quad S_{ee}^j := \sum_{n=1}^{N_j} e^{-K d_n^j} e^{-K d_n^j}, \quad S_L^j := \sum_{n=1}^{N_j} L_n^j, \quad S_{Le}^j := \sum_{n=1}^{N_j} L_n^j e^{-K d_n^j},$$
$$S_{ed}^j := \sum_{n=1}^{N_j} d_n^j e^{-K d_n^j}, \quad S_{eed}^j := \sum_{n=1}^{N_j} d_n^j e^{-K d_n^j} e^{-K d_n^j}, \quad S_{Led}^j := \sum_{n=1}^{N_j} L_n^j d_n^j e^{-K d_n^j}, \quad S_* := \sum_{m=1}^{M} S_*^m. \qquad (11)$$
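The closed-form evaluation of f(K) through equations (8)–(11) can be sketched as follows. This is a sketch under stated assumptions, not the patented implementation: uniform weights σ.sub.n.sup.m = 1, and the function and variable names (`f_of_K`, `tracks`, the per-object sum dictionary) are illustrative. The ambient luminance and the intrinsic luminances are eliminated via equations (10) and (9) before f(K) is assembled per equation (8):

```python
import math

def f_of_K(K, tracks):
    """Evaluate the one-dimensional equation f(K) of Eq. (8).

    tracks[m] is a list of (L_n^m, d_n^m) pairs for tracked object m.
    Uniform measurement weights are assumed (sigma = 1).
    """
    # Per-object sums from Eq. (11)
    sums = []
    for track in tracks:
        s = dict(S1=0.0, Se=0.0, See=0.0, SL=0.0, SLe=0.0,
                 Sed=0.0, Seed=0.0, SLed=0.0)
        for L, d in track:
            e = math.exp(-K * d)
            s["S1"] += 1.0;      s["Se"] += e;          s["See"] += e * e
            s["SL"] += L;        s["SLe"] += L * e
            s["Sed"] += d * e;   s["Seed"] += d * e * e
            s["SLed"] += L * d * e
        sums.append(s)

    # Ambient luminance L_air from Eq. (10)
    L_air = ((sum(s["SL"] for s in sums)
              - sum(s["Se"] * s["SLe"] / s["See"] for s in sums))
             / (sum(s["S1"] for s in sums)
                - sum(s["Se"] * s["Se"] / s["See"] for s in sums)))

    # Intrinsic luminances L_0^j from Eq. (9)
    L0 = [(s["SLe"] + L_air * (s["See"] - s["Se"])) / s["See"] for s in sums]

    # f(K) from Eq. (8)
    return (L_air ** 2 * (sum(s["Seed"] for s in sums)
                          - sum(s["Sed"] for s in sums))
            + L_air * (sum(L0[m] * s["Sed"] for m, s in enumerate(sums))
                       - 2.0 * sum(L0[m] * s["Seed"] for m, s in enumerate(sums))
                       + sum(s["SLed"] for s in sums))
            + sum(L0[m] ** 2 * s["Seed"] for m, s in enumerate(sums))
            - sum(L0[m] * s["SLed"] for m, s in enumerate(sums)))
```

For data generated exactly by the Koschmieder model, f vanishes at the true extinction coefficient, so a one-dimensional root search over K recovers it without minimizing the full functional.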

(50) The derivative of the one-dimensional equation f with respect to extinction coefficient K (required for Newton's method) is moreover straightforward to determine.

(51) Since determining the luminance from the image intensity is only possible with an exact radiometric or photometric calibration of the camera, luminance L may here also stand for an (approximately) linear representation of the luminance, i.e., L = α·luminance + β.

(52) Saturation and quantization effects, as well as other inaccuracies in the linear camera model, present no problem. On the one hand, a linearly transformed representation of the luminance poses no problem for the estimation of extinction coefficient K using the above-mentioned method. On the other hand, the relatively small inaccuracies due to quantization and similar effects do not cause any significant distortion of the result of the least-squares estimation. A saturation may furthermore be detected, and saturated measured luminances may be ignored during the extinction coefficient estimation (K estimation).
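Ignoring saturated measurements, as suggested above, amounts to a simple filtering step before the K estimation. A minimal sketch (the helper name `drop_saturated` and the 8-bit saturation level are illustrative assumptions; the actual sensor range may differ):

```python
def drop_saturated(track, saturation_level=255.0):
    """Discard saturated luminance measurements from an object track
    before the extinction coefficient estimation.

    track is a list of (L_n, d_n) pairs; an 8-bit saturation level is
    assumed here for illustration.
    """
    return [(L, d) for (L, d) in track if L < saturation_level]
```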

(53) Further embodiment variants: In order to improve the estimation of extinction coefficient K with the aid of a model, it may be useful to constrain the other model parameters using additional measurements. This may be implemented by fixing a parameter (it is then no longer estimated), by constraints on a parameter (it is only estimated within the constraints), or by an additional penalty term (deviations of the parameter from the specification are penalized).

(54) For example, the parameter for the luminance of the ambient light L.sub.air in the model could also be estimated independently from the image according to equation (2) (for the luminance of the ambient light L.sub.air, it is useful to observe the luminances of the visible horizon and, for example, to average them). Functional F may then be minimized with the luminance of ambient light L.sub.air held fixed or constrained. If the uncertainty of the estimation of the luminance of the ambient light (in short, the L.sub.air estimate) is known, a weighted penalty term may be added to functional F, for example, for the estimate {circumflex over (L)}.sub.air and a weight λ, which describes the reliability of the estimation:
$$F_{\lambda,\hat{L}_{\mathrm{air}}}\colon (K, L_{\mathrm{air}}, L_0^1, \ldots, L_0^M) \mapsto F(K, L_{\mathrm{air}}, L_0^1, \ldots, L_0^M) + \lambda\left(L_{\mathrm{air}} - \hat{L}_{\mathrm{air}}\right)^2 \qquad (12)$$
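The penalized functional of equation (12) can be sketched as follows, again as a self-contained illustration with hypothetical names (`penalized_functional`, `lam`, `L_air_hat`); it adds the weighted penalty term to the weighted least-squares functional of equation (7):

```python
import math

def penalized_functional(K, L_air, L0, tracks, lam, L_air_hat):
    """Eq. (12): the target functional plus a penalty tying L_air to
    an independent horizon-based estimate L_air_hat.

    lam weights the reliability of that estimate; tracks[m] is a list
    of (L_n, d_n, sigma_n) tuples for object m. Names are illustrative.
    """
    F = 0.0
    for m, track in enumerate(tracks):
        for L_n, d_n, sigma_n in track:
            e = math.exp(-K * d_n)
            # Weighted squared residual of the Koschmieder model
            F += ((e * L0[m] + (1.0 - e) * L_air - L_n) / sigma_n) ** 2
    return F + lam * (L_air - L_air_hat) ** 2
```

Setting lam = 0 recovers the unpenalized functional; a large lam effectively fixes L.sub.air to the independent estimate.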

(55) In this variant as well, a function f.sub.λ,{circumflex over (L)}.sub.air may be found such that f.sub.λ,{circumflex over (L)}.sub.air(K)=0 holds exactly when the other parameters can be chosen such that F.sub.λ,{circumflex over (L)}.sub.air is minimized for this parameter selection.

(56) The step carried out in second sub block 748 thereby proves advantageous for the implementation of (A) (=minimization of F) in a real-time system.

(57) The exemplary embodiments described here and illustrated in the figures are selected only as examples. Different exemplary embodiments may be combined with each other completely or in regard to individual features. One exemplary embodiment may also be supplemented by features of another exemplary embodiment.

(58) Furthermore, the method steps presented here may also be repeated or carried out in a sequence different from the sequence described.

(59) If one exemplary embodiment includes an “and/or” link between a first feature and a second feature, this should be read in such a way that the exemplary embodiment according to one specific embodiment includes both the first feature and the second feature, and according to an additional specific embodiment includes either only the first feature or only the second feature.