Method and device for image-based visibility range estimation
09536173 · 2017-01-03
Assignee
Inventors
CPC classification
G06V20/56
PHYSICS
International classification
Abstract
A method for image-based visibility range estimation for a vehicle includes: ascertaining a depiction of an object of the surroundings in an image of an image detection device of the vehicle, the object having an extension in the direction of travel of the vehicle, the image showing a depiction of surroundings ahead of the vehicle; segmenting the depiction of the object in order to obtain first and second object ranges of the object having respective first and second distances to the image detection device within respective tolerance ranges; determining a first object luminance of the first object range and a second object luminance of the second object range; and determining an atmospheric extinction coefficient correlated to the visibility range, using the first and second object luminances, and the first and second distances.
Claims
1. A method for an image-based visibility range estimation, comprising: obtaining, by processing circuitry, an image captured by an image detection device and that depicts surroundings of the image detection device; processing, by the processing circuitry, the image, wherein the processing includes: ascertaining, by processing circuitry, a depiction of an object in the image; and segmenting, by the processing circuitry, the depiction of the object in order to (i) obtain a first object range of the object having an equal first distance to the image detection device within a tolerance range, (ii) obtain a second object range of the object having an equal second distance to the image detection device within a tolerance range, (iii) determine a first object luminance for the first object range, and (iv) determine a second object luminance for the second object range; and determining, by the processing circuitry, an atmospheric extinction coefficient using (a) the first object luminance, the second object luminance, the first distance, and the second distance with (b) at least one of (i) a one-dimensional equation and (ii) a model for light transmission by atmospheric aerosols, wherein the atmospheric extinction coefficient is in direct correlation to the visibility range.
2. The method as recited in claim 1, wherein in the step of determining, the extinction coefficient is determined using an estimation method from the one-dimensional equation.
3. The method as recited in claim 1, wherein in the step of determining, the extinction coefficient is determined using an iterative Newton's method.
4. The method as recited in claim 1, wherein: in the step of segmenting, the depiction of the object is segmented in order to (i) further obtain at least one third object range having an equal third distance to the image detection device within a tolerance range, and (ii) further determine a third object luminance for the third object range; and the atmospheric extinction coefficient is determined additionally using the third object luminance and the third distance.
5. The method as recited in claim 1, further comprising: a step of detecting the image using the image detection device, the image showing a depiction of an object of the surroundings in the image, the object having an extension at least along a direction of travel of the vehicle.
6. The method as recited in claim 1, wherein the steps of ascertaining and segmenting are carried out for an additional image, the atmospheric extinction coefficient being determined in the step of determining using at least one third object luminance and a third distance assigned to the third object luminance.
7. A device for an image-based visibility range estimation, comprising: a control unit including a processor configured to perform the following: ascertaining a depiction of an object in an image, the image showing a depiction of surroundings of an image detection device; segmenting the depiction of the object in order to (i) obtain a first object range of the object having an equal first distance to the image detection device within a tolerance range, (ii) obtain a second object range of the object having an equal second distance to the image detection device within a tolerance range, (iii) determine a first object luminance for the first object range, and (iv) determine a second object luminance for the second object range; and determining an atmospheric extinction coefficient using (a) the first object luminance, the second object luminance, the first distance, and the second distance with (b) at least one of (i) a one-dimensional equation and (ii) a model for light transmission by atmospheric aerosols, wherein the atmospheric extinction coefficient is in direct correlation to the visibility range.
8. A non-transitory, computer-readable data storage medium storing a computer program having program codes which, when executed on a computer, perform a method for an image-based visibility range estimation, the method comprising: ascertaining a depiction of an object in an image, the image showing a depiction of surroundings of an image detection device; segmenting the depiction of the object in order to (i) obtain a first object range of the object having an equal first distance to the image detection device within a tolerance range, (ii) obtain a second object range of the object having an equal second distance to the image detection device within a tolerance range, (iii) determine a first object luminance for the first object range, and (iv) determine a second object luminance for the second object range; and determining an atmospheric extinction coefficient using (a) the first object luminance, the second object luminance, the first distance, and the second distance with (b) at least one of (i) a one-dimensional equation and (ii) a model for light transmission by atmospheric aerosols, wherein the atmospheric extinction coefficient is in direct correlation to the visibility range.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
(12) In the following description of advantageous exemplary embodiments of the present invention, identical or similar reference numerals are used for similarly acting elements represented in the various figures, and a repeated description of these elements is omitted.
(14) In an alternative exemplary embodiment, image detection device 110 and device 100 for the image-based visibility range estimation are used independently of a vehicle.
(15) Image 108 represents a depiction 106 of surroundings ahead of vehicle 102 according to this exemplary embodiment. The object which is segmented in device 112 for segmenting has an extension in the direction of travel of vehicle 102.
(16) In an alternative exemplary embodiment, the object has an extension in distance or depth to the vehicle. In an alternative exemplary embodiment, the object has an extension in one viewing direction of the image detection device. For example, the road detected by the image detection device describes a curve, or the image detection device has a detection direction which is different from the direction of travel of the vehicle, and the object extending in the depth is, for example, a parking lot or a runway situated next to the vehicle.
(17) Furthermore, in the exemplary embodiment shown in
(19) In one exemplary embodiment, in step 224 of determining, extinction coefficient K is determined using a one-dimensional equation and additionally or alternatively a model of horizontal view.
(20) In one exemplary embodiment, extinction coefficient K is determined in step 224 of determining using an estimation method from the one-dimensional equation.
(21) Optionally, extinction coefficient K is determined in step 224 of determining using an iterative Newton's method.
(22) In step 222 of segmenting, at least one third object range having an equal third distance to the image detection device within a tolerance range is optionally segmented, and a third object luminance for the third object range is determined. In the step of determining, atmospheric extinction coefficient K is then determined additionally using the third object luminance and the third distance.
(23) The method optionally includes a step of detecting the image using an image detection device of the vehicle, the image showing a depiction of an object of the surroundings in the image, the object having an extension in the direction of travel of the vehicle.
(24) In one exemplary embodiment, steps 220 of ascertaining and 222 of segmenting are carried out for an additional image, atmospheric extinction coefficient K being determined in step 224 of determining using at least one third object luminance L.sub.3 and a third distance d.sub.3 assigned to third object luminance L.sub.3. The at least one third object luminance L.sub.3 and the assigned third distance d.sub.3 are ascertained during the execution of the steps of ascertaining 220 and segmenting 222 for the additional image.
(25) As one aspect, this method segments the road area in the middle of the camera image if possible (or other surfaces with a z-extension/depth extension). In the segmented area, distances to the road are determined, for example, via stereo (stereo camera), via structure from motion (mainly for a mono camera), via knowledge about the road surface (a flat-earth assumption is also possible) and the orientation of the camera to the road (mainly for a mono camera), or via other sensors such as radar and LIDAR (which, however, requires a good extrinsic calibration of the entire system). Road areas of approximately the same distance are combined (for example, line by line for a camera which is hardly rotated) and a luminance for the combined areas is estimated (for example, as an average or median luminance). N luminance-distance value pairs (L.sub.1; d.sub.1), . . . , (L.sub.N; d.sub.N) thus result. If an approximately constant reflection of the road is presumed and the road is assumed to be a Lambertian surface (realistic assumptions), extinction coefficient K may be determined from the measurements by adapting a fog model (at least in parameters L, d and K) to the measured values. It is advantageous that this method does not depend on the possibility of segmenting the road completely to the horizon. This is particularly useful in the case of preceding vehicles or an otherwise geometrically blocked view of the course of the road.
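As a rough illustration of the combination step described above, the following Python sketch forms luminance-distance value pairs row by row from a segmented road area. The function name, the boolean-mask representation, and the choice of per-row median luminance and mean distance are illustrative assumptions, not the patent's implementation:

```python
from statistics import median

def luminance_distance_pairs(luminance, distance, road_mask, min_pixels=3):
    """Combine road pixels of approximately equal distance (here: per image
    row) into (L_i, d_i) pairs, using the median luminance of each row and
    the mean of the corresponding distance estimates."""
    pairs = []
    for lum_row, dist_row, mask_row in zip(luminance, distance, road_mask):
        lums = [l for l, m in zip(lum_row, mask_row) if m]
        dists = [d for d, m in zip(dist_row, mask_row) if m]
        if len(lums) >= min_pixels:    # skip rows with too few road pixels
            pairs.append((median(lums), sum(dists) / len(dists)))
    return pairs
```

The resulting list of pairs is exactly the input needed for the model fit described later in the text.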
(26) One exemplary embodiment of the method described here uses only one single image, not an image sequence including a trackable object. Tracking an object over a long time is replaced here or supplemented by the partial segmentation of the (road) surface.
(27) In one exemplary embodiment, additional surroundings information (e.g., surroundings lightness, object information, . . . ) are taken into account in step 224 of determining. These may be obtained from the image and/or from additional sensors and/or from the context, among others.
(30) The approach described here is based on the tracking of objects, parts of surfaces or points across two or more frames or images of a camera. When these tracked entities, here denoted with reference numeral 434, move in their distance relative to the camera, the luminance or object luminance is modified by the fog. Luminance here is to mean not only the classic photometric luminance; the term is to represent any arbitrary (but, over the course of the embodiments, constant) spectral weighting of the radiation density. Luminance may here in particular also represent the spectral weighting according to the sensitivity curve of individual pixels of the camera imager or of the image detection device.
(31) This correlation between luminance and object distance is described in more detail by Koschmieder's theory of horizontal visual range, for example:
L=e.sup.−Kd·L.sub.0+(1−e.sup.−Kd)·L.sub.air  (2)
parameters L.sub.0 and L.sub.air representing the luminance of the object and the ambient light, and d [m] representing the distance between object and viewer. L is the object light perceived by the viewer, which is composed according to equation (2) of attenuated object light L.sub.0 and ambient light L.sub.air scattered in.
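Equation (2) translates directly into code. The following sketch evaluates the perceived luminance for a given distance; the function name is illustrative:

```python
import math

def koschmieder_luminance(d, K, L0, L_air):
    """Perceived luminance at distance d per Koschmieder's model (Eq. 2):
    attenuated object light plus scattered-in ambient light."""
    t = math.exp(-K * d)          # atmospheric transmission over distance d
    return t * L0 + (1.0 - t) * L_air
```

At d = 0 the object luminance L.sub.0 is seen unchanged; for large d the perceived luminance approaches the ambient light L.sub.air.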
(32) According to one exemplary embodiment, when a road sign is tracked during passing by a front camera during fog, the luminance (perceived lightness) decreases when approaching the road sign since less ambient light is scattered into the optical path and the light reflected by the object is weaker than the ambient light. Therefore, a curve of decreasing luminances results during tracking. If the distances to the tracked object are measured simultaneously, the luminance curve may also be plotted over the distance. The distances or spaces to the tracked object may, for example, be measured and determined via structure from motion in a mono camera, via stereo vision or via other sensors, for example, LIDAR. One example for a luminance curve over the distance or the space is shown in
(33) The distances may also be smoothed, interpolated and extrapolated during the process in order to also obtain distances in areas of the track at which a distance estimation is difficult or impossible but at which the object can already be tracked. Information about the distance traveled between the recording points in time (for example, using ESP, GPS, ego-motion estimation, . . . ) may be taken into account during the process.
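The interpolation and extrapolation of distances along a track might, in a simple form, look as follows. Linear interpolation with nearest-value extrapolation at the track ends is only one possible choice, and the `None` encoding of missing measurements is an assumption:

```python
def fill_distances(dists):
    """Linearly interpolate missing (None) distance measurements along a
    track; endpoints are extended with the nearest known value."""
    known = [(i, d) for i, d in enumerate(dists) if d is not None]
    out = list(dists)
    # interpolate between consecutive known measurements
    for j in range(len(known) - 1):
        (i0, d0), (i1, d1) = known[j], known[j + 1]
        for i in range(i0 + 1, i1):
            out[i] = d0 + (d1 - d0) * (i - i0) / (i1 - i0)
    # extrapolate the ends with the nearest known value
    first_i, first_d = known[0]
    last_i, last_d = known[-1]
    for i in range(first_i):
        out[i] = first_d
    for i in range(last_i + 1, len(out)):
        out[i] = last_d
    return out
```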
(36) In a favorable case, the road surface has a constant albedo within a narrow tolerance range, meaning that the road surface has a constant reflectivity and reflects diffusely. In one favorable exemplary embodiment, the road surface follows Lambert's law, also known as Lambert's cosine law. The radiation density thus remains the same from all viewing angles. For the viewer, this results in a luminance independent of the viewing angle, the luminance being the photometric equivalent of the radiation density.
(38) One way of estimating extinction coefficient K from individual frames or images of a front camera or image detection device during daylight is based on the extraction of the so-called Road Surface Luminance Curve, abbreviated RSLC. Here, an area of road and sky is segmented in the camera image and a line-by-line median of the segmented area is plotted as a curve. It has been found that the position of the turning point of this curve may be correlated to extinction coefficient K via models.
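A minimal sketch of the RSLC extraction, assuming the segmented area is given as per-row lists of luminances. The sign-change test on the discrete second difference is a simple stand-in for the model-based evaluation of the turning point mentioned above:

```python
from statistics import median

def rslc_turning_point(rows):
    """Compute the line-by-line median luminance (the RSLC) and return it
    together with the approximate row index of its turning point, found
    where the discrete second difference changes sign."""
    curve = [median(r) for r in rows]
    second = [curve[i - 1] - 2 * curve[i] + curve[i + 1]
              for i in range(1, len(curve) - 1)]
    for i in range(len(second) - 1):
        if second[i] == 0 or second[i] * second[i + 1] < 0:
            return curve, i + 1   # approximate row of the inflection
    return curve, None
```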
(40) Thus, for an object, the value pairs (L.sub.1; d.sub.1), . . . , (L.sub.N; d.sub.N) result, N being the number of the frames or images in which the object was able to be tracked. In order to make an inference regarding the underlying extinction coefficient K using Koschmieder's model according to equation (2), or also other models, the value pairs should correspond to the given model as closely as possible; for Koschmieder:
L.sub.i=e.sup.−Kd.sub.i·L.sub.0+(1−e.sup.−Kd.sub.i)·L.sub.air, i=1, . . . , N  (3)
(42) Since equation system (3) for N>3 generally cannot be solved accurately based on noisy real data, the parameters (K; L.sub.0; L.sub.air) are estimated in such a way that equations (3) are fulfilled as well as possible in the least-squares sense:
(K; L.sub.0; L.sub.air)=argmin Σ.sub.i=1 . . . N(e.sup.−Kd.sub.i·L.sub.0+(1−e.sup.−Kd.sub.i)·L.sub.air−L.sub.i).sup.2
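The least-squares estimation of (K; L.sub.0; L.sub.air) can be sketched as follows. For a fixed K the model is linear in L.sub.0 and L.sub.air, so these two follow from a 2×2 normal-equation system; the grid search over K used here is an illustrative stand-in for the faster one-dimensional solution described later in the document:

```python
import math

def fit_koschmieder(pairs, k_grid):
    """Least-squares fit of (K, L0, L_air) to luminance-distance pairs.
    For each candidate K, the model L = t*L0 + (1-t)*L_air with
    t = exp(-K*d) is linear in (L0, L_air), so the optimal pair follows
    from a 2x2 normal-equation system; the K with the smallest residual
    wins."""
    best = None
    for K in k_grid:
        a11 = a12 = a22 = b1 = b2 = 0.0
        for L, d in pairs:
            t = math.exp(-K * d)
            a11 += t * t
            a12 += t * (1.0 - t)
            a22 += (1.0 - t) ** 2
            b1 += t * L
            b2 += (1.0 - t) * L
        det = a11 * a22 - a12 * a12
        if abs(det) < 1e-12:      # degenerate: all distances nearly equal
            continue
        L0 = (b1 * a22 - b2 * a12) / det
        L_air = (a11 * b2 - a12 * b1) / det
        err = sum(
            (math.exp(-K * d) * L0 + (1.0 - math.exp(-K * d)) * L_air - L) ** 2
            for L, d in pairs
        )
        if best is None or err < best[0]:
            best = (err, K, L0, L_air)
    return best[1:]               # (K, L0, L_air)
```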
(44) Extinction coefficient K, and in particular also meteorological visibility range d.sub.met, may thus be estimated from image sequences.
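The direct correlation between K and the meteorological visibility range can be made explicit. The conventional 5% contrast threshold, giving d.sub.met = −ln(0.05)/K ≈ 3/K, is a standard convention not spelled out in the text above:

```python
import math

def meteorological_visibility(K):
    """Meteorological visibility range from the extinction coefficient,
    using the standard 5% contrast threshold: d_met = -ln(0.05)/K ~ 3/K."""
    return -math.log(0.05) / K
```

Doubling the extinction coefficient halves the visibility range.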
(46) As described with reference to
(48) One camera is arranged in such a way that it is oriented along a surface having a z-extension/depth extension (for example, a road). This means that multiple visible points exist at the surface which are at different distances to the camera. The surface is preferably Lambertian, has a preferably constant albedo and, for stability reasons, preferably extends over a large area. The scene is furthermore preferably evenly lit (this is the case, for example, during the day). The camera supplies an image (signal) in which a preferably large area of the surface is segmented. This means that it is decided for each pixel of the image whether this pixel is part of the surface or not. In one exemplary embodiment, this segmentation is not carried out exactly; in particular, large parts of the surface may be classified as not belonging to the surface.
(49) Distances are estimated in the segmented area. Additional information from external sensors and/or other image processing devices and/or assumptions about the surroundings may thereby also be considered. At points of unknown distance, values are interpolated and/or distances are supplemented using assumptions about the surface.
(50) In order to suppress noise in the image or in the distances and to render the data manageable for the visibility range estimation, the segmented area is combined into luminance-distance pairs (L.sub.1; d.sub.1), . . . , (L.sub.N; d.sub.N). This may take place line by line, for example, with the assumption that the surface is hardly rotated with reference to the camera and that the intersection of an image line with the surface is approximately at an equal distance to the image detection device.
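One common way to assign a distance to an image row under the flat-earth assumption mentioned earlier is the pinhole relation d = f·h/(y − y.sub.0), where y.sub.0 is the horizon row, f the focal length in pixels and h the camera height. The parameter names below are illustrative:

```python
def row_distance(y, horizon_y, focal_px, cam_height):
    """Flat-earth distance of the ground point imaged in row y (pinhole
    camera, optical axis parallel to a flat road): d = f * h / (y - y0).
    Rows closer to the horizon map to larger distances."""
    if y <= horizon_y:
        raise ValueError("row lies at or above the horizon")
    return focal_px * cam_height / (y - horizon_y)
```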
(51) Parameters K; L.sub.air; L.sub.0 in Koschmieder's model according to equation (2) are adapted in the next step to the luminance-distance pairs. This is carried out in the least-squares sense, so that the functional

F(K; L.sub.air; L.sub.0)=Σ.sub.i=1 . . . N(e.sup.−Kd.sub.i·L.sub.0+(1−e.sup.−Kd.sub.i)·L.sub.air−L.sub.i).sup.2  (6)

is minimized. K or d.sub.met is thus estimable from the data. Since this is generally a computationally very intensive step, a special method is used in order to carry out the model adaptation to the surroundings data in a computationally cost-effective manner:
(53) Minimizing the functional (see equation (6)) requires great effort with conventional methods (gradient descent, Newton's method, Levenberg-Marquardt algorithm, . . . ), depending on N, the number M of objects and the object track lengths N.sub.m. This minimization would be integratable into a real-time system only with great difficulty and would use many resources there. In this exemplary embodiment, a system is described which, instead of minimizing the functional, equivalently solves a one-dimensional equation f(K)=0. Depending on the effort for evaluating one-dimensional equation f(K), this is a much more cost-effective problem. For solving f(K)=0, the iterative Newton's method K:=K−f(K)/f′(K) may be used, for example. For the one-dimensional equation f presented below, few iterations with starting value K:=0 suffice for sufficient precision on all forms of (simulated) data sets.
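The Newton iteration can be sketched generically. The patent's closed-form f and its derivative are not reproduced in this text, so this sketch uses a numeric derivative and an arbitrary stand-in equation; with the actual f, the analytic derivative and starting value K := 0 would be used instead:

```python
def newton_1d(f, x0=0.0, iters=20, h=1e-7):
    """Iterative Newton's method x := x - f(x)/f'(x) for a one-dimensional
    equation f(x) = 0; the derivative is approximated numerically here."""
    x = x0
    for _ in range(iters):
        fx = f(x)
        dfx = (f(x + h) - fx) / h   # forward-difference derivative
        if dfx == 0:
            break
        x = x - fx / dfx
    return x
```

For well-behaved f, only a handful of iterations are needed for high precision.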
(54) A one-dimensional equation f which fulfils the required property may be calculated as follows:
(55)
where the following abbreviated notation is used:
(56)
(57) Moreover, the derivative of one-dimensional equation f (required for Newton's method) is trivially determinable.
(58) In one exemplary embodiment, measuring uncertainties may be taken into account in the functional and thus in the parameter estimation. For measuring uncertainties which are expressed as standard deviations σ.sub.1, . . . , σ.sub.N in the measurements L.sub.1, . . . , L.sub.N, the following maximum likelihood objective functional results for an underlying normally distributed random process, for which an f for the more rapid minimization exists similarly to equations (6), (7), (8) and (9):

F(K; L.sub.air; L.sub.0)=Σ.sub.i=1 . . . N(1/σ.sub.i.sup.2)·(e.sup.−Kd.sub.i·L.sub.0+(1−e.sup.−Kd.sub.i)·L.sub.air−L.sub.i).sup.2
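A direct transcription of this weighted objective, assuming the measurements are given as (L.sub.i, d.sub.i, σ.sub.i) triples; the function name is illustrative:

```python
import math

def weighted_functional(K, L0, L_air, measurements):
    """Maximum-likelihood objective for normally distributed luminance
    noise: squared residuals of Koschmieder's model weighted by
    1/sigma_i**2 (measurements: iterable of (L_i, d_i, sigma_i))."""
    total = 0.0
    for L, d, sigma in measurements:
        t = math.exp(-K * d)
        model = t * L0 + (1.0 - t) * L_air
        total += ((model - L) / sigma) ** 2
    return total
```

Measurements with larger uncertainty contribute less to the objective and thus influence the estimated parameters less.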
(60) Since the determination of the luminance from the image intensity is only possible using an exact radiometric or photometric calibration of the camera, luminance L may here also represent an (approximately) linear representation of the luminance, i.e., L=a·luminance+b for constants a and b.
(61) Saturation and quantization effects, as well as other inaccuracies in the linear camera model, present no problem. On the one hand, a linearly transformed representation of the luminance poses no problem for the estimation of extinction coefficient K using the above-mentioned method. On the other hand, the relatively small inaccuracies due to quantization and similar effects do not result in any significant distortion of the least-squares estimate. A saturation may furthermore be detected, and saturated measured luminances may be ignored during the extinction coefficient estimation or K estimation.
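Ignoring saturated luminances before the K estimation can be as simple as the following filter; the 8-bit saturation level of 255 is an assumed example:

```python
def drop_saturated(pairs, max_value=255):
    """Discard luminance-distance pairs whose luminance is at (or beyond)
    the sensor's saturation level before estimating K."""
    return [(L, d) for L, d in pairs if L < max_value]
```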
(62) In one alternative exemplary embodiment, the method is expanded to multiple surfaces. During the minimization of F, parameters or barriers for parameters are also predefined or inserted into the functional as prior knowledge using additional penalty terms (for example, in the form of (L.sub.air−L.sub.air.sup.given).sup.2). The estimation is thus advantageously stabilized.
(65) The diagram of device 100 essentially corresponds to the diagram and description of device 100 for the image-based visibility range detection in
(66) Device 104 for ascertaining is designed to track an object in first image 108 detected at the first point in time and in second image 1050 detected at the second point in time. Device 112 for segmenting is designed to ascertain at least one third object luminance L.sub.3 of the object and a third distance d.sub.3 to the object at the second point in time in addition to the value pairs of first object luminance L.sub.1, first distance d.sub.1, second object luminance L.sub.2 and second distance d.sub.2 at the first point in time. Determination device 114 is designed to ascertain atmospheric extinction coefficient K using first object luminance L.sub.1, second object luminance L.sub.2, at least one third object luminance L.sub.3, first distance d.sub.1, second distance d.sub.2 and third distance d.sub.3, atmospheric extinction coefficient K being directly correlated to visibility range d.sub.met.
(67) In one optional exemplary embodiment, a plurality of value pairs of object luminance L and distance d are ascertained per image and provided to determination device 114. Furthermore, in one optional exemplary embodiment, a plurality of images are sequentially detected at a plurality of points in time and analyzed. For example, a sequence of 10, 30 or 50 images is analyzed.
(69) In one exemplary embodiment, the method presented here, or the device, is combined with an additional model-based approach to estimating the visibility range. This is based on the tracking of objects across multiple frames. The measured luminances L and distances d of the objects are approximated using a model which includes extinction coefficient K as a parameter. In this way, extinction coefficient K is estimated as the most likely parameter given the monitored measurements. One model for this is typically Koschmieder's model of horizontal visual range according to equation (2), which is combined with a method for rapid estimation.
(70) Parameters L.sub.0 and L.sub.air represent the luminance of the object and the ambient light and d [m] represents the distance between object and viewer. L is the object light perceived by the viewer, which is composed accordingly of attenuated object light and ambient light scattered in.
(71) In one variant, the method described here is expanded to multiple surfaces. During the minimization of F, parameters or barriers for parameters are also predefined or inserted into the functional using additional penalty terms as prior knowledge.
(72) The exemplary embodiments described here and illustrated in the figures are selected only as examples. Different exemplary embodiments may be combined with each other completely or in regard to individual features. One exemplary embodiment may also be supplemented by features of another exemplary embodiment.
(73) Furthermore, the method steps presented here may also be repeated or carried out in a sequence different from the sequence described.
(74) If one exemplary embodiment includes an and/or link between a first feature and a second feature, this should be read in such a way that the exemplary embodiment according to one specific embodiment includes both the first feature and the second feature, and according to an additional specific embodiment includes either only the first feature or only the second feature.