Estimating distance to an object using a sequence of images recorded by a monocular camera

11348266 · 2022-05-31

Abstract

A method for monitoring headway to an object, performable in a computerized system including a camera mounted in a moving vehicle. The camera acquires in real time multiple image frames including respectively multiple images of the object within a field of view of the camera. An edge is detected in the images of the object. A smoothed measurement is performed of a dimension of the edge. Range to the object is calculated in real time, based on the smoothed measurement.

Claims

1. A system for determining a distance from a moving vehicle to a pedestrian ahead of the moving vehicle, comprising: at least one processing device comprising circuitry and a memory, wherein the memory includes instructions that when executed by the circuitry cause the at least one processing device to: receive a plurality of images captured by an image capturing device mounted on the moving vehicle, the plurality of images including representations of the pedestrian; identify in a first image of the plurality of images, a bottom edge of the pedestrian, a top edge of the pedestrian, and a horizon represented in the first image; determine based on the bottom edge and the top edge, a first measured height of the pedestrian represented in the first image; determine a second measured height of the pedestrian based on a difference between a vertical position of the bottom edge and a vertical position of the horizon represented in the first image; and determine a distance from the moving vehicle to the pedestrian based on the first measured height and the second measured height.

2. The system of claim 1, wherein the second measured height is further determined based on a height of the image capturing device.

3. The system of claim 1, wherein the memory includes instructions that when executed cause the at least one processing device to determine, based on a third measured height from a second image of the plurality of images, a smoothed height of the pedestrian, and wherein the distance from the moving vehicle to the pedestrian is determined based on the smoothed height and the second measured height.

4. The system of claim 1, wherein the at least one processing device is further configured to determine a refined vertical position of the horizon.

5. The system of claim 4, wherein determining the refined vertical position comprises detecting a change in position of the moving vehicle.

6. The system of claim 5, wherein the change in the position is detected based on a pitch angle estimation.

7. The system of claim 6, wherein the pitch angle estimation is performed based on at least one of an inertial sensor or an analysis of the plurality of images.

8. The system of claim 5, wherein detecting the change in position of the moving vehicle comprises: detecting a road lane based on an analysis of one or more of the plurality of images; determining a vanishing point of the road lane; and detecting relative motion of the vanishing point.

9. The system of claim 1, wherein determining the distance from the moving vehicle to the pedestrian comprises subtracting a hood range of the moving vehicle.

10. The system of claim 1, wherein the at least one processing device is further configured to determine a refined vertical position of the bottom edge of the pedestrian.

11. The system of claim 10, wherein the refined vertical position of the bottom edge is determined based on mapping pixels in the plurality of images.

12. The system of claim 11, wherein the memory includes instructions that when executed cause the at least one processing device to determine a refined vertical position of the horizon.

13. The system of claim 11, wherein determining the distance from the moving vehicle to the pedestrian comprises subtracting a hood range of the moving vehicle.

14. The system of claim 11, wherein the memory includes instructions that when executed cause the at least one processing device to determine a refined vertical position of the bottom edge of the pedestrian.

15. The system of claim 11, wherein the at least one processing device is further configured to provide a warning to a driver of the moving vehicle based on the distance.

16. The system of claim 1, wherein the at least one processing device is further configured to provide a warning to a driver of the moving vehicle based on the distance.

17. The system of claim 16, wherein determining the refined vertical position comprises detecting a change in position of the moving vehicle.

18. A method for determining a distance from a moving vehicle to a pedestrian ahead of the moving vehicle, the method comprising: receiving a plurality of images captured by an image capturing device mounted on the moving vehicle, the plurality of images including representations of the pedestrian; identifying in a first image of the plurality of images, a bottom edge of the pedestrian, a top edge of the pedestrian, and a horizon represented in the first image; determining based on the bottom edge and the top edge, a first measured height of the pedestrian represented in the first image; determining a second measured height of the pedestrian based on a difference between a vertical position of the bottom edge and a vertical position of the horizon represented in the first image; and determining a distance from the moving vehicle to the pedestrian based on the first measured height and the second measured height.

19. The method of claim 18, wherein the second measured height is further determined based on a height of the image capturing device.

20. The method of claim 19, wherein the pitch angle estimation is performed based on at least one of an inertial sensor or an analysis of the plurality of images.

21. The method of claim 18, wherein the method further comprises determining, based on a third measured height from a second image of the plurality of images, a smoothed height of the pedestrian, and wherein the distance from the moving vehicle to the pedestrian is determined based on the smoothed height and the second measured height.

22. The method of claim 18, wherein the change in the position is detected based on a pitch angle estimation.

23. The method of claim 22, wherein the refined vertical position of the bottom edge is determined based on mapping pixels in the plurality of images.

24. The method of claim 18, wherein detecting the change in position of the moving vehicle comprises: detecting a road lane based on an analysis of one or more of the plurality of images; determining a vanishing point of the road lane; and detecting relative motion of the vanishing point.

25. A non-transitory computer-readable medium storing instructions that when executed by at least one processing device perform a method for determining a distance from a moving vehicle to a pedestrian ahead of the moving vehicle, the method comprising: receiving a plurality of images captured by an image capturing device mounted on the moving vehicle, the plurality of images including representations of the pedestrian; identifying in a first image of the plurality of images, a bottom edge of the pedestrian, a top edge of the pedestrian, and a horizon represented in the first image; determining based on the bottom edge and the top edge, a first measured height of the pedestrian represented in the first image; determining a second measured height of the pedestrian based on a difference between a vertical position of the bottom edge and a vertical position of the horizon represented in the first image; and determining a distance from the moving vehicle to the pedestrian based on the first measured height and the second measured height.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) The present invention will become fully understood from the detailed description given herein below and the accompanying drawings, which are given by way of illustration and example only and thus not limitative of the present invention:

(2) FIG. 1 (prior art) illustrates a vehicle with distance measuring apparatus, including a camera and a computer useful for practicing embodiments of the present invention;

(3) FIG. 2 (prior art) further shows the vehicle of FIG. 1 having a distance measuring apparatus, operative to provide a distance measurement to a leading vehicle; wherein measurement errors of distance Z are illustrated as known in the prior art;

(4) FIG. 2a (prior art) is a view of an image on an image plane of camera 32;

(5) FIGS. 3a-3d schematically illustrate image processing frames used to accurately measure distance to the “lead” vehicle, in accordance with embodiments of the present invention;

(6) FIG. 4 is a flow diagram which illustrates an algorithm for determining distance, in accordance with embodiments of the present invention;

(7) FIG. 5 schematically illustrates a pitch angle error in the system of FIG. 1, which is compensated for using embodiments of the present invention;

(8) FIGS. 6a and 6b schematically illustrate compensation of the pitch angle error illustrated in FIG. 5, as performed according to embodiments of the present invention; and

(9) FIG. 7 schematically illustrates an accurate distance measurement to a pedestrian in front of a vehicle, as performed in accordance with an embodiment of the present invention.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

(10) The present disclosure is of a system and method of processing image frames of an obstacle as viewed in real time from a camera mounted in a vehicle. Specifically, the system and method processes images of the obstacle to obtain an obstacle dimension, typically an actual width of the obstacle which does not vary in time. The range to the object in each frame is then computed using the instantaneous measured image width (in pixels) in the frame and the smoothed physical width of the object (e.g. in meters). A smoothed width of the obstacle is preferably determined recursively over a number of frames using, for example, a Kalman filter. The range to the obstacle is then determined by comparing the instantaneous measured width in each frame to the smoothed width of the obstacle. Various embodiments of the present invention optionally include other refinements which improve the range measurement, particularly by reducing error in the range estimation due to changes in the pitch angle of the camera, e.g. slope of road surface 20, and multiple methods of locating the horizon position from frame to frame.

(11) The principles and operation of a system and method for obtaining an accurate range to an obstacle, according to features of the present invention, may be better understood with reference to the drawings and the accompanying description.

(12) It should be noted that, although the discussion herein relates to a forward moving vehicle 10 equipped with camera 32 pointing forward in the direction of motion toward lead vehicle 11 also moving forward, the present invention may, in a different embodiment and by way of non-limiting example, alternatively be configured with camera 32 pointing backward toward a following vehicle 11 and equivalently measure the range thereto. The present invention in different embodiments may be similarly configured to measure range to oncoming obstacles such as vehicle 11 traveling towards vehicle 10. All such configurations are considered equivalent and within the scope of the present invention.

(13) Before explaining embodiments of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings.

(14) Embodiments of the present invention are preferably implemented using instrumentation well known in the art of image capture and processing, typically including an image capturing device, e.g. camera 32, and an image processor 34, capable of buffering and processing images in real time. Moreover, according to actual instrumentation and equipment of embodiments of the method and system of the present invention, several selected steps could be implemented by hardware, firmware or by software on any operating system, or by a combination thereof. For example, as hardware, selected steps of the invention could be implemented as a chip or a circuit. As software, selected steps of the invention could be implemented as a plurality of software instructions being executed by a computer using any suitable operating system. In any case, selected steps of the method and system of the invention could be described as being performed by a processor, such as a computing platform for executing a plurality of instructions.

(15) Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The methods, and examples provided herein are illustrative only and not intended to be limiting.

(16) By way of introduction, principal intentions of the present invention are: (1) to provide a preferably recursive algorithm over multiple frames to refine the range measurement to the obstacle; (2) to provide accurate determination of the "bottom edge" of a target vehicle or other obstacle and tracking of the "bottom edge" over multiple image frames; and (3) to provide a mechanism to reduce pitch error of the camera, e.g. error due to road surface, and other errors in estimating the location of the horizon.

(17) The present invention, in some embodiments, may use a mechanism which determines the pitch angle of camera 32. The pitch error determination mechanism may be of any such mechanisms known in the art such as based on the images themselves or based on gravity sensors, e.g. plumb line or inertial sensors.

(18) It should be further noted that the principles of the present invention are applicable in Collision Warning Systems, such as Forward Collision Warning (FCW) systems based on scale change computations, and other applications such as headway monitoring and Adaptive Cruise Control (ACC) which require knowing the actual distance to the vehicle ahead. Another application is Lane Change Assist (LCA), where camera 32 is attached to or integrated into the side mirror, facing backwards. In the LCA application, a following vehicle is detected when entering a zone at a specific distance (e.g. 17 meters), and thus a decision is made as to whether it is safe to change lanes.

(19) Reference is now made to FIGS. 3a-3d, which illustrate processing of an image frame 40 formed on image plane 31 of camera 32, according to an embodiment of the present invention. Image frame 40 includes an image 41 of vehicle 11. Image 47 of shadow 23, as cast by vehicle 11 on road surface 20, starts at row H.sub.i of image frame 40. Item 47 differs between day and night: during daytime, item 47 is often the shadow, or simply a darker region, below leading vehicle 11; at night, item 47 might be a light line where the road illuminated by the headlights of leading vehicle 11 is visible under leading vehicle 11.

(20) Step 401 in the image processing method, according to the present invention, includes horizontal edge detection. An image processing unit, according to embodiments of the present invention, determines, for example, the horizontal edge of bottom 50, the underside edge of bumper 51, the top edge of bumper 52, the bottom edge of window 53 and roof edge 54, as illustrated in FIG. 3b. In step 403, vertical edges 55 and 56 of the sides of vehicle 11 are detected, as illustrated in FIG. 3c, preferably based on the endpoints of horizontal features 50, 51, 52, 53 and/or 54. Further processing in step 405 of image frame 40 yields the information needed to calculate the range Z of vehicle 11 from vehicle 10; specifically, bottom edge 50 and vertical edges 55 and 56 are determined, as shown in FIG. 3d. Bottom edge 50 best represents location 25 where a vertical plane, tangent to the back of lead vehicle 11, meets road surface 20. Imprecise positioning of bottom edge 50 results in error Z.sub.e in calculating distance Z. Thus, in accordance with an embodiment of the present invention, in order to determine the range at a given time t, the image data is processed to determine the height (y position relative to horizon 60) of bottom edge 50 in each image frame 40. Error Z.sub.e derives primarily from errors in image position y, which in turn come from errors in locating bottom edge 50 of imaged vehicle 41 in each image frame 40, errors in locating horizon projection 60, and deviations from the planar road assumption of road surface 20. According to different embodiments of the present invention, these errors are minimized by: (a) multiple frame estimation of distance Z; and (b) refinement of the locations of bottom 50 and horizon 60.
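The single-frame range computation described above follows from the planar-road pinhole relation Z = f·H.sub.c/y, which is made explicit later in equations (13) and (14). The following is a minimal sketch, with an illustrative helper name and the focal length assumed to be expressed in pixels:

```python
def range_from_bottom_edge(f_pixels, cam_height_m, y_bottom, y_horizon):
    """Range to the point where an obstacle meets a planar road.

    Applies Z = f * Hc / y, where y is the image distance (in pixels)
    between the obstacle's bottom edge and the horizon row, f is the
    focal length in pixels, and Hc is the camera height in meters.
    """
    y = y_bottom - y_horizon  # rows below the horizon (must be positive)
    if y <= 0:
        raise ValueError("bottom edge must lie below the horizon")
    return f_pixels * cam_height_m / y
```

For example, with a 740-pixel focal length, a camera 1.2 m above the road, and a bottom edge 37 rows below the horizon, the helper returns 888/37 = 24 m. Note how the same pixel error in y causes a much larger range error at long range, which motivates the multiple-frame refinements below.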

Multiple Frame Estimate of Distance Z

(21) Since the scene changes dynamically and range Z changes continuously over time, it does not make sense to reduce error Z.sub.e by averaging Z over time or otherwise directly "smoothing" the distance Z. A key step, according to an embodiment of the present invention, is to estimate a width W.sub.v of an obstacle such as lead vehicle 11, which remains constant over time. Image data related to width W.sub.v is "smoothed" in time, preferably using a Kalman filter, thereby processing the non-linear data and reducing random errors or "noise" in image frames 40. Smoothed width W.sub.v is then used to refine the distance Z estimation.

(22) An aspect of the present invention is to determine smoothed width W.sub.v of vehicle 11. Each of image frames 40 is processed to determine image positions for bottom horizontal edge 50. Two vertical edges 55 and 56 extending upwards from each end of bottom edge 50 and the imaged width w.sub.i between two vertical edges 55 and 56 are used to derive actual width W(t) of vehicle 11.

(23) The single frame measurement of width W(t) is given by:

(24) W(t) = w.sub.i(t)·H.sub.c/y(t)  (2)

or

W(t)/w.sub.i(t) = H.sub.c/y(t)  (3)
where w.sub.i is the width of the image of vehicle 11, typically represented by the difference in pixels between vertical edges 55 and 56 multiplied by the width of the pixels. The smoothed vehicle width W.sub.v is obtained by recursively applying equation (2) over multiple frames preferably using a Kalman filter. The parameters of the Kalman filter are adjusted so that initially the convergence is fast and then, after tracking the target vehicle 11 for a while, e.g. a few seconds, only very slow changes in the smoothed width W.sub.v estimate are allowed. Given the smoothed vehicle width W.sub.v and the width in the image w.sub.i in a subsequent image frame 40, we can then compute the corrected range Z:

(25) W.sub.v(t.sub.n) = F({W(t.sub.j) | 0≤j≤n})  (4a)

Z = -f·W.sub.v/w.sub.i  (4b)
FIG. 4 is a flow diagram which illustrates and exemplifies an algorithm 100 that distance measuring apparatus 30 uses to determine the distance Z, in accordance with embodiments of the present invention. In step 101, camera 32 acquires an image at time t.sub.n. In step 102, processor 34 determines a value for the y position, in image coordinates relative to horizon 60, of bottom edge 50. In step 103, a value is determined for the width w.sub.i(t.sub.n) of the image of lead vehicle 11. In step 104, a "sample" real size W(t.sub.n) of the width of target vehicle 11 is determined from w.sub.i(t.sub.n) in accordance with equation (2). In step 105, a "smoothed width" W.sub.v(t.sub.n) is determined as a function F of the sample real sizes W(t.sub.j) determined for the (n+1) images in the sequence of images acquired by camera 32, in accordance with the expression shown in equation (4a). F represents any suitable function of the set of values W(t.sub.j) determined for the (n+1) images acquired at times t.sub.j for 0≤j≤n. In step 106, a "smoothed distance" Z(t.sub.n) is determined in accordance with the expression shown in equation (4b).
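The per-frame loop of algorithm 100 can be sketched as follows. This is an illustrative simplification: the smoothing function F is a plain running mean here rather than the Kalman filter the patent prefers, sign conventions are dropped (magnitudes only), and all names are hypothetical:

```python
def track_range(frames, f_pixels, cam_height_m):
    """Sketch of algorithm 100: width sample, smoothed width, range.

    Each frame supplies (y, w_i): the bottom edge's offset below the
    horizon and the imaged vehicle width, both in pixels.  The smoothing
    function F is a plain running mean here for brevity.
    """
    samples, ranges = [], []
    for y, w_i in frames:
        w_sample = w_i * cam_height_m / y      # eq. (2): sample width W(t)
        samples.append(w_sample)
        w_v = sum(samples) / len(samples)      # eq. (4a): F over all samples
        ranges.append(f_pixels * w_v / w_i)    # eq. (4b): corrected range Z
    return ranges
```

With perfectly consistent inputs the loop reproduces the true range; in practice the value of the scheme is that noise in y enters only through the smoothed width, not directly through each frame's range.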

(26) In a typical scenario, an image frame 40 of the vehicle 11 is acquired at a large distance Z and hence its dimensions in image frame 40 are small. This implies that both y and w.sub.i estimates contain large errors. As vehicle 11 gets closer and its corresponding image 41 grows larger, we obtain more accurate estimates of these parameters.

(27) Thus the measurement noise (R) and process noise (Q) matrices of the Kalman filter are both time dependent and image size dependent.
R(t) = R(0) + α·t  (5)
Q(t) = β·w + γ·max(0, T.sub.max - t)  (6)
where t is the age of the object, T.sub.max is a parameter defining the age at which we expect the width to have converged (values of T.sub.max of 10-30 typically work well), and alpha (α), beta (β) and gamma (γ) are parameters tuned empirically from acquired data. Then, in an embodiment of the present invention, the classic Kalman correction/prediction iteration is used:
Correction:

(28) K = P/(R+Q)  (7)

x = x + K·(z - x)  (8)

P = (1 - K)·P  (9)
Prediction:
P=P+Q  (10)
where P is the state covariance, z is the measurement from equation (2) (in our case W(t)) and x is the state (in our case the smoothed width W.sub.v).

(29) In equations (7)-(10), the operator "=" represents an assignment (as in the C programming language), not an equality; for instance, in equation (10) the value of P is replaced with P+Q. First, equation (7) is applied to compute the Kalman gain K. Then equations (8) and (9) are applied, in either order, to update the state x and the covariance P. Then equation (10) is applied to update P for the next iteration.
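The correction/prediction iteration of equations (5)-(10), applied to the scalar width state, might look like the sketch below. The parameter values (r0, alpha, beta, gamma, t_max) are placeholders, not the empirically tuned values the patent refers to:

```python
def smooth_width(measurements, w_image_sizes=None, r0=1.0, alpha=0.05,
                 beta=0.01, gamma=0.1, t_max=20.0):
    """Scalar Kalman smoothing of vehicle width per equations (5)-(10).

    measurements: single-frame width samples W(t) from equation (2).
    The measurement noise R grows with object age t (eq. 5) and the
    process noise Q keeps the state adaptive while t < T_max (eq. 6),
    so convergence is fast at first and slow later.
    """
    x = measurements[0]              # state: smoothed width W_v
    p = 1.0                          # state covariance
    for t, z in enumerate(measurements[1:], start=1):
        w = w_image_sizes[t] if w_image_sizes else 1.0  # image size term
        r = r0 + alpha * t                           # eq. (5)
        q = beta * w + gamma * max(0.0, t_max - t)   # eq. (6)
        k = p / (r + q)                              # eq. (7): Kalman gain
        x = x + k * (z - x)                          # eq. (8): correction
        p = (1.0 - k) * p                            # eq. (9)
        p = p + q                                    # eq. (10): prediction
    return x
```

Since each update is a convex combination of the previous state and the new sample, the smoothed width always stays within the range of the observed samples.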

Refinement of Location of Bottom Edge

(30) As described above, much of the error in estimating W(t) using equation (2) comes from errors in locating the bottom 50 of vehicle 11. Thus, according to the present invention, a few refining measurements are taken to obtain a more accurate estimate of the location of bottom 50 of vehicle 11.

(31) In order to get good measurements of y from horizon projection 60 to a horizontal feature (e.g. bottom 50), the horizontal feature needs to be accurately identified in each image frame 40. For instance, bottom 50 can be represented by the perceived points of contact between vehicle 11 and road surface 20 at location 25 or, better, by the corresponding edge between shadow 47 of vehicle 11 and road surface 20 at location 25 when the sun is positioned right above vehicle 11. It should be noted that the sun does not need to be directly overhead: even if the angle were 45 degrees, the error in range would only be equal to the distance between the bottom of vehicle 11 and road surface 20 (typically 30-40 cm). At every frame 40, a single frame estimate of the points of contact is obtained along with a confidence measure associated with the estimate. The single frame estimate and confidence measure are used for locating bottom 50, and the location is preferably adjusted using a non-linear smoothing filter. In order to estimate the location of bottom 50, a preliminary refinement is optionally performed on a single frame 40, according to embodiments of the present invention. Pixels are classified, or mapped, as belonging or not belonging to road surface 20, preferably using pixel grayscale values I and gradients (I.sub.x,I.sub.y) in image coordinates. The image of the area on road surface 20 in the immediate vicinity of vehicle 11 is used to derive the typical values of I and gradients (I.sub.x,I.sub.y) for road surface 20, provided that the area is clear of other objects such as other vehicles. In single frame 40, a horizontal edge in the pixel mapping, in proximity to the estimate of bottom 50 of vehicle 11 in the previous frame, is the new single frame estimate of bottom 50.
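The pixel mapping step could be sketched as below, assuming a reference patch of clear road just ahead of the host vehicle. The within-k-standard-deviations rule and the constant k are illustrative choices, not taken from the patent:

```python
import numpy as np

def road_mask(gray, road_patch, k=2.5):
    """Classify pixels as road-like by intensity and gradient statistics.

    `road_patch` is a 2-D slice of `gray` known to contain clear road
    (the area just ahead of the host vehicle).  A pixel is road-like
    when its grayscale value I and gradients (Ix, Iy) all fall within
    k standard deviations of the patch statistics.
    """
    g = gray.astype(float)
    gy, gx = np.gradient(g)          # row- and column-wise gradients
    p = road_patch.astype(float)
    py, px = np.gradient(p)
    mask = np.ones(g.shape, dtype=bool)
    for chan, ref in ((g, p), (gx, px), (gy, py)):
        mu, sigma = ref.mean(), ref.std() + 1e-6  # guard against sigma=0
        mask &= np.abs(chan - mu) < k * sigma
    return mask
```

A horizontal boundary of the resulting mask near the previous frame's bottom-edge estimate then serves as the new single-frame estimate.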

(32) Optionally, a second refinement step is also performed on a single frame 40. Typically, under vehicle 11 during the day there is a dark patch 47, such as a sun shadow, or a light patch at night caused by self illumination near vehicle 11. Other features discernible at night include a horizontal line 52 indicating the bumper of vehicle 11 and/or spots of light from lamps or reflections. Dark wheels are also optionally used to determine location of image 41. These and other discernible features are combined into a confidence score.

(33) Optionally, a third refinement step is performed using multiple frames 40. In order to avoid jumps in the image location of bottom edge 50 from frame 40 to frame 40 in response to every mark or shadow on the road, the location of bottom 50 must be consistent with previous frames 40, with camera 32 pitch angle, and with changes in scale of target vehicle image 41.

(34) These three steps can be combined, by way of example, into the following algorithm: Vertical edges 55, 56 and bottom edge 50 represent a rectangular region which is the current best estimate of the location of vehicle image 41 in image frame 40. The rectangular region typically extends between the middle row of vehicle image 41, such as an imaged row of the hood of vehicle 11, and the bottom valid row of image 41. For each column in the rectangular region, the absolute values of the horizontal component I.sub.x of the grayscale gradient are summed. Any column with a sum above a threshold T.sub.1 is excluded from further processing, as are the neighboring columns to the left and right of the excluded column. If more than 50% of the columns are excluded in a single image frame 40, the image frame 40 is discarded as being of poor quality due to shadows or other image artifacts. For the non-excluded columns of image frame 40, the grayscale image values I are summed along the rows, thereby producing a single column of summed grayscale values. The single column, or vector, is differentiated, looking for significant local maxima in absolute value. Typically, the local maxima are found at sub-pixel resolution. In daytime, bottom 50 of vehicle image 41 is expected to be darker (shadow 47) than road 20; thus, to each peak representing a dark-above-light transition, a factor F.sub.1 is added. At night-time, on more distant vehicles 11, there is a bright line which is a patch of road illuminated by the headlights of vehicle 11 and visible under vehicle 11; therefore, a factor F.sub.2 is added at night for light-above-dark transitions. Both during day and night, bottom edge 50 is terminated by the wheels of vehicle 11. A factor F.sub.3 is added for peaks which come from lines which terminate symmetrically inwards of the vehicle edge.
In order to ensure consistency between different frames 40, the absolute values of the modified column (or vector) are multiplied by a Gaussian function centered around the expected location of bottom 50, derived from the tracking of vehicle 11 and the pitch angle estimate of camera 32. The Gaussian function preferably has a half width (sigma) which is an inverse function of the tracking confidence. The largest peak in the column vector that is above a threshold T.sub.2 is selected, and the sub-pixel location of the peak is taken as the new location of vehicle bottom 50.
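The combined algorithm of the last two paragraphs might be sketched as follows. The daytime/night factors F.sub.1-F.sub.3 and sub-pixel peak localization are omitted for brevity, and all thresholds and names are placeholders:

```python
import numpy as np

def locate_bottom_edge(gray, t1, expected_row, sigma, t2=0.0):
    """Sketch of the bottom-edge search over a candidate region.

    gray: 2-D grayscale region between mid-vehicle and the lowest valid
    row.  Columns whose summed |Ix| exceeds t1 (threshold T1) are
    excluded; remaining columns are summed per row, the profile is
    differentiated, and transitions are weighted by a Gaussian prior
    centred on the row predicted by tracking.
    """
    g = gray.astype(float)
    ix = np.abs(np.gradient(g, axis=1)).sum(axis=0)  # per-column |Ix| sums
    keep = ix <= t1                                  # exclude busy columns
    if keep.sum() < 0.5 * keep.size:
        return None                                  # poor-quality frame
    profile = g[:, keep].sum(axis=1)                 # row-wise grayscale sums
    d = np.abs(np.diff(profile))                     # transition strengths
    rows = np.arange(d.size)
    prior = np.exp(-0.5 * ((rows - expected_row) / sigma) ** 2)
    scores = d * prior                               # Gaussian-weighted peaks
    best = int(np.argmax(scores))
    return best if scores[best] > t2 else None       # threshold T2
```

On a synthetic region that is bright road above a dark shadow band, the strongest weighted transition lands at the brightness step, as expected.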

Refinement of Location of Horizon

(35) Another contribution to the above-mentioned errors in estimating W(t) using equation (2) comes from errors in locating horizon 60. Thus, according to embodiments of the present invention, refinements are performed to acquire an accurate estimate of the location of horizon 60. The initial horizon location 60 is obtained by the calibration procedure when the system is installed. Horizon refinement is based, among other things, on tracking the horizon (pitch/yaw), on the ego motion of vehicle 10, and on detecting the vanishing point and shape of the road lanes. Road lane shape also gives an indication as to the planarity of the road. A system known in the art for detecting the road lanes and their vanishing point is assumed. Such a system is described in now allowed U.S. patent application Ser. No. 09/834,736 (US patent publication 2003/0040864) to Stein et al., the disclosure of which is incorporated herein by reference for all purposes as if entirely set forth herein. Horizon 60 is derived from the ordinate y.sub.p, in image coordinates, of the vanishing point. A pitch angle θ estimation system, based for instance on image measurements and/or an inertial sensor as known in the art, is assumed. FIG. 5 depicts a scene where vehicle 10 encounters a bump 27 which causes a change in the optical axis, from direction 33′, parallel to road surface 20, to angle 33. Vehicle 10 is equipped with camera 32 and an image processing system 64, according to embodiments of the present invention. Reference is now made to FIGS. 6a and 6b, which illustrate compensation of the pitch error (of FIG. 5) in image 40. Bottom 50 is shown in FIG. 6a at time t−1 at position y′ from horizon position 60 and at row H.sub.i′ from the image bottom. In FIG. 6b, at time t, upon compensation for the pitch error, bottom 50 is shown also at row H.sub.i′ instead of the "would-be" row H.sub.i without compensation for the pitch error.
The location 60 of the horizon is not affected by change in scale but by the pitch error. It can be derived from the images or by an inertial sensor as known in the art.

(36) After eliminating the vehicle pitch, another estimate of the horizon (y.sub.m) can be given by an equation representing the relation of the motion of image points on the road, for example A(x.sub.1,y.sub.1) and B(x.sub.2,y.sub.2), and vehicle speed:

(37) y.sub.1 - y.sub.m = f·H.sub.c/Z.sub.1  (11)

y.sub.2 - y.sub.m = f·H.sub.c/Z.sub.2  (12)
Therefore:

(38) Z.sub.1 = f·H.sub.c/(y.sub.1 - y.sub.m)  (13)

Z.sub.2 = f·H.sub.c/(y.sub.2 - y.sub.m)  (14)
The vehicle motion (d.sub.Z=Z.sub.1−Z.sub.2) is known from the vehicle speed. We then write the equation:

(39) d.sub.Z = f·H.sub.c/(y.sub.1 - y.sub.m) - f·H.sub.c/(y.sub.2 - y.sub.m)  (15)
which can be solved for y.sub.m, which is the newly estimated horizon location.
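Writing c = f·H.sub.c, equation (15) rearranges to the quadratic (y.sub.1 - y.sub.m)(y.sub.2 - y.sub.m) = c·(y.sub.2 - y.sub.1)/d.sub.Z, which has a closed-form solution; since both road points lie below the horizon, the smaller root (in row coordinates) is the horizon estimate. A sketch with illustrative names:

```python
import math

def estimate_horizon(y1, y2, f_pixels, cam_height_m, dz):
    """Solve equation (15) for the horizon row y_m in closed form.

    y1, y2: image rows of a road point in two frames; dz = Z1 - Z2 is
    the vehicle motion between the frames, known from vehicle speed.
    Equation (15) rearranges to (y1 - ym)(y2 - ym) = c*(y2 - y1)/dz
    with c = f*Hc, i.e. ym**2 - (y1 + y2)*ym + (y1*y2 - rhs) = 0.
    """
    c = f_pixels * cam_height_m
    rhs = c * (y2 - y1) / dz
    disc = (y1 + y2) ** 2 - 4.0 * (y1 * y2 - rhs)
    # The horizon lies above both road points, so take the smaller root.
    return ((y1 + y2) - math.sqrt(disc)) / 2.0
```

For instance, with f·H.sub.c = 888 pixel-meters and a road point imaged at rows 235.52 and 244.4 while the vehicle advances from 25 m to 20 m range, the recovered horizon row is 200.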
The horizon 60 estimates from different calculations can be combined using least squares. Horizon estimate y.sub.0 is calculated from the initial calibration, y.sub.p is calculated from tracking lane marking of the road and determining the vanishing point, y.sub.m is calculated from the relative motion of image points on the road 20, d.sub.y denotes the interframe pitch change and y.sub.−1 is a calculation based on pitch angle measurement or estimation. Horizon (y) is preferably calculated to minimize the error (E) as follows:
E=L.sub.1(y−y.sub.0).sup.2+L.sub.2(y−y.sub.p).sup.2+L.sub.3(y−y.sub.m).sup.2+L.sub.4(y−(y.sub.−1+d.sub.y)).sup.2  (16)
where the factor L.sub.1 is constant, L.sub.2 depends on the confidence of the lane detection, L.sub.3 depends on the number of road points tracked and the confidence of the tracking, and L.sub.4 depends on the confidence of the pitch estimation. Using least squares normalization means that the solution for (y) can be found using linear methods. The least squares (or ℓ.sub.2 norm) in equation (16) can be replaced by the ℓ.sub.1 norm, or any other metric, and solved numerically using standard non-linear optimization. Some, usually large, changes in horizon location 60 can indicate a bumpy road, in which case the confidence in the y estimate is reduced.
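Because E in equation (16) is quadratic in y, setting dE/dy = 0 gives a closed-form weighted mean of the four candidate estimates, which is what solving "using linear methods" amounts to. A sketch with illustrative names:

```python
def fuse_horizon(estimates, weights):
    """Minimise E in equation (16) by weighted least squares.

    `estimates` are the candidate horizon rows (y0, yp, ym, y_prev + dy)
    and `weights` the confidences L1..L4.  Setting dE/dy = 0 yields
    y = sum(Li * yi) / sum(Li), the weighted mean.
    """
    assert len(estimates) == len(weights)
    return sum(L * y for L, y in zip(weights, estimates)) / sum(weights)
```

With equal confidences the fused horizon is simply the average of the four candidates; lowering one weight (e.g. L.sub.2 after a poor lane detection) pulls the result toward the remaining estimates.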

(40) Referring back to FIG. 2, the distance Z.sub.h between camera image plane 31 and the front bumper of vehicle 10 is subtracted from the computed range Z from image plane 31 to lead vehicle 11, as we want to measure the distance from the front bumper of vehicle 10 to the obstacle and not from the actual position of image plane 31 in vehicle 10.

(41) It should be noted that not only the width of lead vehicle 11 remains constant over time, but also other dimensions, such as height, or the distances between various pairs of horizontal edges such as roof edge 54, the bottom of rear window 53, the top of bumper 52 or the bottom of bumper 51. We can then estimate a smoothed selected vertical feature H.sub.v of lead vehicle 11, which remains constant over time, preferably using the Kalman filter to process the non-linear data and reduce the influence of noise in frames 40. It is also possible to use more than one smoothed constant dimension of lead vehicle 11, such as its width, height, etc., to refine the estimation of range Z to said lead vehicle 11.

(42) To summarize, determining the bottom location 50 and the horizon location 60, and integrating information over time, are essential for the accurate measurement of the distance Z from a vehicle 10 to a target vehicle 11 or obstacle in front of or behind it.

(43) In another embodiment of the present invention, a range Z is determined from host vehicle 10 to another object, such as a pedestrian 70. Referring to FIG. 7, the width of pedestrian 70, as viewed by camera 32, is dynamic and may change considerably, for instance as the pedestrian turns around, but height Hp remains generally constant over time. Rectangle 71, which represents the body of pedestrian 70, changes in width as pedestrian 70 moves, but its height remains generally constant over time. Hence, the estimated range Z can be refined using the smoothed height H.sub.v of pedestrian 70, which remains generally constant over time, preferably using the Kalman filter to process the non-linear data and reduce the influence of noise in frames 40.
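Analogously to equation (4b), with the smoothed pedestrian height in place of the smoothed vehicle width, the range might be computed as in this illustrative helper (function name and values are assumptions, and magnitudes only are used):

```python
def pedestrian_range(f_pixels, smoothed_height_m, h_image_px):
    """Range from the smoothed pedestrian height.

    Mirrors equation (4b) with height in place of width:
    Z = f * H / h_i, where H is the smoothed physical height and h_i
    the imaged height of rectangle 71 in pixels.
    """
    return f_pixels * smoothed_height_m / h_image_px
```

For example, a 1.7 m smoothed height imaged at 62.9 pixels with a 740-pixel focal length gives a range of about 20 m; the same formula is insensitive to the pedestrian turning, since only the height enters.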

(44) The invention being thus described in terms of several embodiments and examples, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.