Pedestrian collision warning system
10940818 · 2021-03-09
Assignee
Inventors
- Dan Rosenbaum (Jerusalem, IL)
- Amiad Gurman (Elkana, IL)
- Yonatan Samet (Jerusalem, IL)
- Gideon P. Stein (Jerusalem, IL)
- David Aloni (Jerusalem, IL)
CPC classification
B60W30/0956
PERFORMING OPERATIONS; TRANSPORTING
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
G06V20/58
PHYSICS
B60R21/013
PERFORMING OPERATIONS; TRANSPORTING
International classification
B60R21/013
PERFORMING OPERATIONS; TRANSPORTING
B60W30/095
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A method is provided for preventing a collision between a motor vehicle and a pedestrian. The method uses a camera and a processor mountable in the motor vehicle. A candidate image is detected. Based on a change of scale of the candidate image, it may be determined that the motor vehicle and the pedestrian are expected to collide, thereby producing a potential collision warning. Further information from the image frames may be used to validate the potential collision warning. The validation may include an analysis of the optical flow of the candidate image, a determination that lane markings predict a straight road, a calculation of the lateral motion of the pedestrian, whether the pedestrian is crossing a lane mark or curb, and/or whether the vehicle is changing lanes.
Claims
1. A system for determining a risk of collision between a motor vehicle and a pedestrian, comprising: storage comprising instructions; and at least one processor, which when configured by the instructions, is to access image data corresponding to a plurality of image frames from a camera mountable in the motor vehicle, wherein the at least one processor is further configured by the instructions to: detect a candidate image of the pedestrian in at least one of the image frames; determine if the pedestrian is on a far side of a curb or lane mark and determine whether the pedestrian is crossing the curb or the lane mark representing a possible risk of collision between the motor vehicle and the pedestrian using the image data; validate the determination of the possible risk of collision between the motor vehicle and the pedestrian using radar; and in response to the possible risk of collision between the motor vehicle and the pedestrian being validated using radar, provide a collision warning or a collision prevention control signal.
2. The system of claim 1, wherein the collision prevention control signal causes a braking system to initiate braking in the motor vehicle without driver intervention.
3. The system of claim 1, wherein the collision prevention control signal causes a braking system to initiate braking in the motor vehicle with driver intervention.
4. The system of claim 1, wherein the collision warning includes an audible signal to a driver of the motor vehicle.
5. A system for determining a risk of collision between a motor vehicle and a pedestrian, comprising: a camera mountable in the motor vehicle; a braking system coupled to the motor vehicle to brake the motor vehicle; and a processor operably connected to the camera and configured to: access image data corresponding to a plurality of image frames from the camera; detect a candidate image of the pedestrian in at least one of the image frames; determine if the pedestrian is on a far side of a curb or lane mark and determine whether the pedestrian is crossing the curb or the lane mark representing a possible risk of collision between the motor vehicle and the pedestrian using the image data; validate the determination of the possible risk of collision between the motor vehicle and the pedestrian using radar; and based on a determination that the pedestrian is a possible risk of collision with the motor vehicle, in response to the possible risk of collision between the motor vehicle and the pedestrian being validated using radar, provide a collision warning or a collision prevention control signal.
6. The system of claim 5, wherein the collision prevention control signal is transmitted to the braking system to initiate braking in the motor vehicle without driver intervention.
7. The system of claim 5, wherein the collision prevention control signal is transmitted to the braking system to initiate braking in the motor vehicle with driver intervention.
8. The system of claim 1, wherein the at least one processor is further configured to: detect an image patch in the plurality of image frames, wherein the image patch includes a candidate image of the pedestrian in the field of view of the camera; and validate the determination of the possible risk of collision based on optical flow of the image patch.
9. The system of claim 8, wherein to validate the determination of the possible risk of collision based on optical flow of the image patch, the at least one processor is configured to: track optical flow between the plurality of image frames of an image point of the pedestrian; compute physical lateral motion of the pedestrian from horizontal image motion of the tracked point of the pedestrian; and estimate a lateral position of the pedestrian from the physical lateral motion at a time-to-collision.
10. The system of claim 5, wherein the at least one processor is further configured to: detect an image patch in the plurality of image frames, wherein the image patch includes a candidate image of the pedestrian in the field of view of the camera; and validate the determination of the possible risk of collision based on optical flow of the image patch.
11. The system of claim 10, wherein to validate the determination of the possible risk of collision based on optical flow of the image patch, the at least one processor is configured to: track optical flow between the plurality of image frames of an image point of the pedestrian; compute physical lateral motion of the pedestrian from horizontal image motion of the tracked point of the pedestrian; and estimate a lateral position of the pedestrian from the physical lateral motion at a time-to-collision.
12. A system comprising: an image sensor to capture a plurality of image frames; and an image processor to: access image data corresponding to the plurality of image frames; detect a candidate image of the pedestrian in at least one of the image frames; determine a possible risk of collision between the motor vehicle and the pedestrian using the image data; validate the determination of the possible risk of collision between the motor vehicle and the pedestrian using radar; and based on a determination that the pedestrian is a possible risk of collision with the motor vehicle, provide a collision warning or a collision prevention control signal.
13. The system of claim 12, wherein the image processor is to: detect an image patch in the plurality of image frames, wherein the image patch includes a candidate image of the pedestrian in the field of view of the camera; and validate the determination of the possible risk of collision based on optical flow of the image patch.
14. The system of claim 13, wherein to validate the determination of the possible risk of collision based on optical flow of the image patch, the image processor is to: track optical flow between the plurality of image frames of an image point of the pedestrian; compute physical lateral motion of the pedestrian from horizontal image motion of the tracked point of the pedestrian; and estimate a lateral position of the pedestrian from the physical lateral motion at a time-to-collision.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The invention is herein described, by way of example only, with reference to the accompanying drawings.
DETAILED DESCRIPTION
(25) Reference will now be made in detail to features of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The features are described below to explain the present invention by referring to the figures.
(26) Before explaining features of the invention in detail, it is to be understood that the invention is not limited in its application to the details of design and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is capable of other features or of being practiced or carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.
(27) By way of introduction, embodiments of the present invention are directed to a forward collision warning (FCW) system. According to U.S. Pat. No. 7,113,867, an image of a lead vehicle is recognized. The width of the vehicle may be used to detect a change in scale, or relative scale S, between image frames, and the relative scale is used for determining time to contact. Specifically, a dimension of the lead vehicle, for example its width, has a length (as measured, for example, in pixels or millimeters) in the first and second images represented by w(t1) and w(t2) respectively. Then, optionally, the relative scale is S(t)=w(t2)/w(t1).
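To make the scale-change computation concrete, here is a minimal Python sketch (function names and the sample widths are illustrative, not from the patent); it assumes a constant closing speed between the two frames:

```python
def relative_scale(w1, w2):
    """Relative scale S(t) = w(t2) / w(t1) from two width measurements."""
    return w2 / w1

def ttc_from_scale(s, dt):
    """Time-to-contact from the scale change over frame interval dt.

    With image width w proportional to 1/Z, S = Z1/Z2; assuming a
    constant closing speed, TTC measured from t2 is dt / (S - 1).
    """
    return dt / (s - 1.0)

# A lead object's image width grows from 100 px to 105 px over 0.1 s.
s = relative_scale(100.0, 105.0)   # S = 1.05
ttc = ttc_from_scale(s, 0.1)       # about 2.0 s to contact
```

A larger scale change over the same interval yields a smaller TTC, which is the looming cue the rest of the description builds on.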
(28) According to the teachings of U.S. Pat. No. 7,113,867, the forward collision warning (FCW) system depends upon recognition of an image of an obstruction or object, e.g. a lead vehicle, as recognized in the image frames. In the forward collision warning system, as disclosed in U.S. Pat. No. 7,113,867, the scale change of a dimension, e.g. the width, of the detected object, e.g. a vehicle, is used to compute time-to-contact (TTC). However, the object is first detected and segmented from the surrounding scene. The present application describes a system in which optical flow is used to determine the time-to-collision (TTC) and/or likelihood of collision and issue an FCW warning if required. Optical flow gives rise to the looming phenomenon, in which objects appear larger in the image as they get closer. Object detection and/or recognition may be performed, or object detection and/or recognition may be avoided, according to different features of the present invention.
(29) The looming phenomenon has been widely studied in biological systems. Looming appears to be a very low level visual attention mechanism in humans and can trigger instinctive reactions. There have been various attempts in computer vision to detect looming and there was even a silicon sensor design for detection of looming in the pure translation case.
(30) Looming detection may be performed in real world environments with changing lighting conditions, complex scenes including multiple objects, and host vehicle motion which includes both translation and rotation.
(31) The term relative scale or change of scale as used herein refers to the relative size increase (or decrease) of an image patch in an image frame and a corresponding image patch in a subsequent image frame.
(32) Reference is now made to
(33) A feature of the present invention is illustrated in
(34) Reference is now made to
(35) Reference is now made to
(36) Alternatively, step 213 may also include the detection of a candidate image in the image frames 15. The candidate image may be a pedestrian or a vertical line of a vertical object such as a lamppost, for example. In either case of a pedestrian or a vertical line, patch 32 may be selected so as to include the candidate image. Once patch 32 has been selected, it may then be possible to perform a verification that the candidate image is an image of an upright pedestrian and/or a vertical line. The verification may confirm that the candidate image is not an object in the road surface when the best fit model is the vertical surface model.
(37) Referring back to
(38) TTC=(t.sub.2−t.sub.1)/(S−1)(1)
(39) If the vehicle 18 speed v is known (for instance v=4.8 m/s), the distance Z to the target can also be computed using equation 2 below:
(40) Z=v*TTC(2)
(41) Two motion models may be fit to the vertical motion of points tracked between the image frames: one for an upright (vertical) surface and one for the road surface. For an upright surface, all the points are at the same distance Z, so over a frame interval the vertical image motion δy of a point at image height y is given by equation 3:
(42) δy=(y−y.sub.0)*δZ/Z(3)
(43) Equation (3) is a linear model relating y and δy and has effectively two variables. Two points may be used to solve for the two variables.
(44) For vertical surfaces the motion is zero at the horizon (y.sub.0) and changes linearly with image position since all the points are at equal distance as in the graph shown in
(45) For points on the road surface, the distance Z decreases with image position according to equation 4:
Z=f*H/(y−y.sub.0)(4)
where f is the camera focal length and H is the camera height, and so the image motion (δy) increases at more than a linear rate, as shown in equation 5 below and in the graph of
(46) δy=(y−y.sub.0).sup.2*v*δt/(f*H)(5)
(47) Equation (5) is a restricted second order equation with effectively two variables.
(48) Again, two points may be used to solve for the two variables.
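The two-point fits for the upright-surface and road-surface models can be sketched as follows (a Python sketch with illustrative names and synthetic values; the road model is solved here using the horizon y0 recovered from the upright fit, as the description later suggests for three-point fits):

```python
def fit_upright(p1, p2):
    """Fit the vertical-surface model dy = a*(y - y0) (equation 3)
    from two (y, dy) points; returns the slope a and the horizon y0."""
    (y1, d1), (y2, d2) = p1, p2
    a = (d2 - d1) / (y2 - y1)
    y0 = y1 - d1 / a
    return a, y0

def fit_road(y0, p):
    """Fit the road-surface model dy = c*(y - y0)**2 (equation 5)
    using the horizon y0 and one more (y, dy) point; returns c."""
    y, d = p
    return d / (y - y0) ** 2

# Synthetic points consistent with a horizon at y0 = 100:
a, y0 = fit_upright((150.0, 1.0), (200.0, 2.0))   # a = 0.02, y0 = 100.0
c = fit_road(y0, (150.0, 0.5))                    # c = 0.0002
```

The upright model is linear in y while the road model is quadratic in (y − y0), which is what lets the two hypotheses be distinguished from the same tracked points.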
(49) Reference is now made to
(50) Reference is now made to
(51) Reference is now made to
(52) Estimation of the Motion Model and Time to Collision (TTC)
(53) The estimation of the motion model and time to contact (TTC) (step 215) assumes we are provided a region 32, e.g. a rectangular region in image frame 15. Examples of rectangular regions are rectangles 32a and 32b shown in
(54) These rectangles may be selected based on detected objects such as pedestrians or based on the host vehicle 18 motion.
(55) 1. Tracking Points (Step 211):
(56) (a) A rectangular region 32 may be tessellated into a 5×20 grid of sub-rectangles.
(57) (b) For each sub-rectangle, an algorithm may be performed to find a corner of the image, for instance using the method of Harris and Stephens, and this point may be tracked. For the best Harris point, the eigenvalues of the matrix below may be considered,
(58) ( ΣI.sub.x.sup.2 ΣI.sub.xI.sub.y ; ΣI.sub.xI.sub.y ΣI.sub.y.sup.2 )(6)
and we look for two strong eigenvalues.
(59) (c) Tracking may be performed by exhaustive search for the best sum of squared differences (SSD) match in a rectangular search region of width W and height H. The exhaustive search at the start is important since it means that a motion prior is not introduced and the measurements from all the sub-rectangles are more statistically independent. The search is followed by fine tuning using an optical flow estimation, for instance the method of Lucas and Kanade. The Lucas-Kanade method allows for sub-pixel motion.
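The exhaustive SSD search described above can be sketched as follows (a minimal Python implementation; the patch and search-window sizes are illustrative, not taken from the patent, and the Lucas-Kanade sub-pixel refinement is omitted):

```python
import numpy as np

def ssd_search(prev, curr, pt, patch=5, search_w=11, search_h=7):
    """Exhaustive search for the best sum-of-squared-differences (SSD)
    match: a patch around pt=(x, y) in `prev` is compared against all
    integer offsets in a search_w x search_h window of `curr`."""
    x, y = pt
    r = patch // 2
    tmpl = prev[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    best_ssd, best_off = None, (0, 0)
    for dy in range(-(search_h // 2), search_h // 2 + 1):
        for dx in range(-(search_w // 2), search_w // 2 + 1):
            cand = curr[y + dy - r:y + dy + r + 1,
                        x + dx - r:x + dx + r + 1].astype(float)
            ssd = ((tmpl - cand) ** 2).sum()
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_off = ssd, (dx, dy)
    return best_off  # integer motion; Lucas-Kanade would refine to sub-pixel

# A synthetic frame shifted right by 2 px and down by 1 px is recovered:
rng = np.random.default_rng(0)
prev = rng.random((40, 40))
curr = np.roll(prev, shift=(1, 2), axis=(0, 1))
offset = ssd_search(prev, curr, (20, 20))  # (2, 1)
```

Because every offset in the window is scored, no motion prior biases the result, matching the statistical-independence argument in step (c).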
(60) 2. Robust Model Fitting (Step 213):
(61) (a) Pick two or three points randomly from the 100 tracked points.
(62) (b) The number of pairs (N.sub.pairs) picked depends on the vehicle speed (v) and is given for instance by:
N.sub.pairs=min(40,max(5,50))(7)
(63) where v is in meters/second. The number of triplets (N.sub.triplets) is given by:
N.sub.triplets=50−N.sub.pairs(8)
(64) (c) For two points, two models may be fit (step 213). One model assumes the points are on an upright object. The second model assumes they are both on the road.
(65) (d) For three points two models may also be fit. One model assumes the top two points are on an upright object and the third (lowest) point is on the road. The second model assumes the upper point is on an upright object and the lower two are on the road.
(66) Two models may be solved for three points by using two points to solve for the first model (equation 3) and then using the resulting y.sub.0 and the third point to solve for the second model (equation 5).
(67) (e) Each model in (d) gives a time-to-collision TTC value (step 215). Each model also gets a score based on how well the 98 other points fit the model. The score is given by the Sum of the Clipped Square of the Distance (SCSD) between the y motion of each point and the predicted model motion. The SCSD value is converted into a probability-like function:
(68) where N is the number of points (N=98).
(69)
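The SCSD scoring can be sketched as follows (a Python sketch; the exact probability-like mapping is not reproduced in this text, so the exponential form, the clip value, and sigma below are assumptions):

```python
import math

def scsd(residuals, clip=1.0):
    """Sum of the Clipped Square of the Distance between each point's
    measured vertical motion and the motion predicted by the model."""
    return sum(min(r * r, clip * clip) for r in residuals)

def model_score(residuals, sigma=0.5, clip=1.0):
    """Convert SCSD into a probability-like score in (0, 1].
    exp(-SCSD / (N * sigma**2)) is one plausible mapping, with sigma
    an assumed pixel-noise scale; the patent does not give the formula."""
    n = len(residuals)
    return math.exp(-scsd(residuals, clip) / (n * sigma ** 2))

perfect = model_score([0.0] * 98)  # 1.0 when all 98 points fit exactly
```

Clipping the squared residual bounds the influence of outlier points, so a few badly tracked points cannot veto an otherwise well-fitting model.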
(70) (f) Based on the TTC value, the vehicle 18 speed v, and assuming the points are on stationary objects, the distance to the points Z=v*TTC may be computed. From the x image coordinate of each image point and its distance, the lateral position in world coordinates may be computed:
(71) X=x*Z/f
(72) (g) The lateral position at time TTC is computed thus. A binary Lateral Score requires that at least one of the points from the pair or triplet must be in the vehicle 18 path.
(73) 3. Multiframe Scores: At each frame 15 new models may be generated, each with its associated TTC and score. The 200 best (highest scoring) models may be kept from the past 4 frames 15 where the scores are weighted:
(74) score(n)=α.sup.n*score(12)
where n=0 . . . 3 is the age of the score and α=0.95.
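The age weighting of equation (12) can be sketched as follows (a Python sketch with illustrative names):

```python
def weighted_scores(models, alpha=0.95):
    """Apply the age weighting score(n) = alpha**n * score (equation 12)
    to (age_in_frames, raw_score) pairs from the past 4 frames (n = 0..3)."""
    return [(alpha ** n) * s for n, s in models]

def keep_best(models, k=200, alpha=0.95):
    """Keep the k best models by age-weighted score."""
    return sorted(weighted_scores(models, alpha), reverse=True)[:k]

w = weighted_scores([(0, 1.0), (3, 1.0)])  # [1.0, 0.857375]
```

An identical raw score from three frames ago is discounted to 0.95³ ≈ 0.857 of a current one, so recent evidence dominates the multiframe decision.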
(75) 4. FCW Decision: the actual FCW warning is given if any of the following three conditions occurs:
(76) (a) The TTC for the model with the highest score is below the TTC threshold and the score is greater than 0.75 and
(77)
(78) (b) The TTC for the model with the highest score is below the TTC threshold and
(79)
(80) (c)
(81)
(82)
(83) FCW Trap for General Stationary Objects
(84) Reference is now made to
(85)
(86) where v is the vehicle 18 speed, H is the height of camera 12, and w and y are the rectangle width and vertical position in the image, respectively.
(87) The rectangular region is an example of an FCW trap. If an object falls into this rectangular region, the FCW trap may generate a warning if the TTC is less than a threshold.
(88) Improving Performance Using Multiple Traps:
(89) In order to increase the detection rate, the FCW trap may be replicated into 5 regions with 50% overlap creating a total trap zone of 3 m width.
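As a quick check of the overlap arithmetic (the 1 m single-trap width below is inferred from the stated 5 traps, 50% overlap, and 3 m total; it is not given explicitly):

```python
def trap_zone_width(n_traps=5, trap_w=1.0, overlap=0.5):
    """Total width covered by n_traps traps of width trap_w, each
    overlapping its neighbor by the given fraction of its width."""
    return trap_w + (n_traps - 1) * trap_w * (1.0 - overlap)

total = trap_zone_width()  # 3.0 m, matching the 3 m trap zone in the text
```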
(90) The dynamic position of the FCW trap may be selected (step 605) based on yaw rate: the trap region 32 may be shifted laterally based on the vehicle 18 path determined from a yaw rate sensor, the vehicle 18 speed, and a dynamical model of the host vehicle 18.
(91) The FCW trap 601 concept can be extended to objects consisting mainly of vertical (or horizontal) lines. A possible problem with using the point based techniques on such objects is that the good Harris (corner) points are most often created by the vertical lines on the edge of the object intersecting horizontal lines on the distant background. The vertical motion of these points will be like that of the distant road surface.
(92)
(93) FCW Trap for Validating Collision Warning Signals with Pedestrians
(94) Special classes of objects such as vehicles and pedestrians can be detected in image 15 using pattern recognition techniques. According to the teachings of U.S. Pat. No. 7,113,867, these objects are then tracked over time and an FCW 22 signal can be generated using the change in scale. However, before giving a warning it is important to validate the FCW 22 signal using an independent technique, for instance using method 209.
(95) Object (e.g. pedestrian, lead vehicle) detection is not the issue. A very high detection rate can be achieved with a very low false rate. A feature of the present invention is to generate a reliable FCW signal without too many false alarms that will irritate the driver, or worse, cause the driver to brake unnecessarily. A particular challenge for pedestrian FCW systems is avoiding false forward collision warnings, since the number of pedestrians in the scene is large while the number of true forward collision situations is very small. Even a 5% false rate would mean the driver would get frequent false alarms and probably never experience a true warning.
(96) Pedestrian targets are particularly challenging for FCW systems because the targets are non-rigid, making tracking (according to the teachings of U.S. Pat. No. 7,113,867) difficult, and the scale change in particular is very noisy. Thus the robust model (method 209) may be used to validate the forward collision warning on pedestrians. The rectangular zone 32 may be determined by a pedestrian detection system 20. An FCW signal may be generated only if target tracking performed by FCW 22, according to U.S. Pat. No. 7,113,867, and the robust FCW (method 209) both give a TTC smaller than one or more threshold values, which may or may not be previously determined. Forward collision warning FCW 22 may have a different threshold value from the threshold used in the robust model (method 209).
(97) One of the factors that can add to the number of false warnings is that pedestrians typically appear on less structured roads where the driver's driving pattern can be quite erratic, including sharp turns and lane changes. Thus some further constraints may need to be included on issuing a warning:
(98) When a curb or lane mark is detected the FCW signal is inhibited if the pedestrian is on the far side of the curb or lane and neither of the following conditions occur:
(99) 1. The pedestrian is crossing the lane mark or curb (or approaching very fast). For this it may be important to detect the pedestrian's feet.
(100) 2. The host vehicle 18 is crossing the lane mark or curb (as detected by an LDW 21 system for example).
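The inhibition rule reduces to a small predicate (a Python sketch, reading the two conditions as the pedestrian crossing or the host vehicle crossing the detected curb or lane mark):

```python
def inhibit_fcw(ped_on_far_side, ped_crossing, vehicle_crossing):
    """Inhibit the FCW signal when the pedestrian is beyond a detected
    curb or lane mark and neither the pedestrian nor the host vehicle
    is crossing it."""
    return ped_on_far_side and not (ped_crossing or vehicle_crossing)

# Pedestrian safely beyond the curb: warning inhibited.
# Pedestrian stepping over the curb: warning allowed.
```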
(101) The driver's intentions are difficult to predict. If the driver is driving straight, has not activated the turn signals, and there are no lane markings predicting otherwise, it is reasonable to assume that the driver will continue straight ahead. Thus, if there is a pedestrian in the path and the TTC is below threshold, an FCW signal can be given. However, if the driver is in a turn it is equally likely that he/she will continue in the turn or break out of the turn and straighten out. Thus, when yaw rate is detected, an FCW signal may only be given if the pedestrian is in the path assuming the vehicle 18 continues at the same yaw rate and the pedestrian is also in the path if the vehicle straightens out.
(102) According to a feature of the present invention, likely paths of vehicle 18 are predicted. The likely paths may include proceeding straight ahead, continue on a curve, following a lane marking or curb, avoiding going up on a curb and/or following a preceding vehicle.
(103) In order to avoid false positive collision warnings with pedestrians, the FCW signal may be inhibited if there is a likely path for the motor vehicle which does not include a pedestrian with the TTC to the pedestrian less than the threshold.
Pedestrian Lateral Motion
(104) The pedestrian typically moves slowly compared to the host vehicle 18 and therefore the longitudinal motion of the pedestrian can be ignored. The lateral motion of the pedestrian, whether into the host vehicle 18 path or away from the host vehicle 18 path is critical.
(105) As the longitudinal distance to the pedestrian decreases there will be outward image motion (optical flow):
(106)
where v is the vehicle 18 speed, T=Z/v corresponds to the vehicle 18 longitudinal motion, and x.sub.0 is the x coordinate of the focus of expansion (FOE):
(107)
where w.sub.y is the yaw rate. This is a simplified motion model that assumes no lateral slippage of the host vehicle 18.
(108) So the lateral motion Ẋ of a tracked point on the pedestrian can be computed from the lateral image motion δx:
(109)
(110) The current lateral position of the pedestrian (X.sub.T=0), or simply X, is given by:
(111)
(112) The lateral position of the pedestrian at T=TTC is given by:
X.sub.T=TTC=X+Ẋ*TTC
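The extrapolation to the collision time can be sketched as follows (a Python sketch; the sample numbers are illustrative):

```python
def lateral_at_ttc(x_now, x_dot, ttc):
    """Predicted lateral position X_{T=TTC} = X + Xdot * TTC."""
    return x_now + x_dot * ttc

# Pedestrian 2.0 m to the side, moving toward the vehicle path at
# 0.8 m/s, with a TTC of 1.5 s: predicted 0.8 m offset at collision time.
x_pred = lateral_at_ttc(2.0, -0.8, 1.5)
```

Since the pedestrian's longitudinal motion is ignored, this single linear extrapolation of the lateral state is what decides whether the pedestrian will be inside a warning zone at T=TTC.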
Estimating Collision Path with a Pedestrian
(113) To determine whether the host vehicle 18 is on a collision course with the pedestrian and a warning should be issued, two warning zones may be defined in world coordinate space. Warning zone 1 is shown in
(114) 1. Warning zone 1: The intersection of a rectangular region spanning 1 m to the left and right of vehicle 18 and of length v*TTC with a curved region 62a spanning 1 m to the left and right of the vehicle, assuming vehicle 18 continues on a path predicted by the yaw rate. 2. Warning zone 2: The intersection of a rectangular region spanning 0.1 m to the left and right of vehicle 18 and of length v*TTC with a curved region 62b spanning 0.1 m to the left and right of vehicle 18, assuming vehicle 18 continues on a path predicted by the yaw rate. Region 62b is further intersected with the region to the right of the left lane mark 64 or curb if detected and with the region to the left of the right lane mark or curb if detected.
Triggering a Pedestrian FCW Warning
(115) A pedestrian FCW warning may be given if the following hold: 1. A pedestrian has been detected with high confidence. 2. TTC.sub.1, based on distance and vehicle speed, is below a threshold T.sub.1:
(116)
|TTC.sub.1−TTC.sub.2|<T.sub.3 6. The predicted pedestrian position (X.sub.T=TTC, Z.sub.T=TTC) is inside warning zone 2.
(117) Reference is now made to
(118) Reference is now made to
(119) If a best fit is found in decision box 637, then in decision box 639a if a collision is expected, that is the best fit corresponds to an upright pedestrian or another vertical object, then a time-to-collision may be determined (step 639b). If a collision is not expected, for instance because the object is not upright, then a candidate image may be detected again in step 623.
(120) Reference is now made to
(121) A second time-to-collision may be calculated based on information 655 from image frames 15, and a comparison made between the first and second times-to-collision. The collision warning signal or the collision control signal provided in step 645 may be given when the absolute value of the difference between the first and second times-to-collision is less than a threshold.
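The dual-TTC validation can be sketched as follows (a Python sketch; the names and sample thresholds are illustrative, not from the patent):

```python
def validated_warning(ttc_scale, ttc_flow, ttc_threshold, agree_threshold):
    """Issue a warning only when the scale-change TTC and the optical-flow
    TTC are both below the TTC threshold and agree to within a tolerance."""
    return (ttc_scale < ttc_threshold
            and ttc_flow < ttc_threshold
            and abs(ttc_scale - ttc_flow) < agree_threshold)

ok = validated_warning(1.8, 2.0, 2.5, 0.5)   # both low and in agreement
bad = validated_warning(1.0, 2.4, 2.5, 0.5)  # estimates disagree by 1.4 s
```

Requiring agreement between two independently derived TTC estimates is what suppresses the false alarms that a single noisy scale-change estimate would otherwise produce.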
(122) Reference is now made to
(123) Validation step 649 may include an analysis of an optical flow of the candidate image. In the case of a pedestrian the optical flow may be the lateral motion of the pedestrian. Validation step 649 may also include a determination that lane markings predict a straight road. A straight road may indicate that a pedestrian is more likely to cross the road, as opposed to a curved road, which may give the impression that a pedestrian is already in the road. Further validation 649 may also include a determination that a pedestrian is crossing a lane mark or curb. Avoidance of the provision of a collision warning signal or collision control signal (step 645) may take into consideration that motor vehicle 18 is changing lanes and therefore the pedestrian will not be in the path of motor vehicle 18.
(124) Alternatively, a collision warning and/or collision control may be provided (or not inhibited) without necessarily determining a change in scale in the candidate image (step 645). A candidate image of a pedestrian is detected. The location of the point of contact between the pedestrian and the road, e.g. a foot of the pedestrian, is detected. If the feet are determined to be in one or more likely vehicle paths, then the collision warning and/or collision control may be provided or not inhibited in a validation step 649. The predicted vehicle path may be projected onto the image, and if the pedestrian's feet overlap the predicted path then the collision warning and/or collision control may be provided or not inhibited in a validation step 649. Alternatively, the vehicle path may be predicted in world coordinates, and the known feet location and the known camera perspective projection are used to locate the pedestrian in world coordinates.
(125) The term collision control signal as used herein may include but is not limited to a control signal which initiates braking of the vehicle with or without driver intervention.
(126) The term collision warning as used herein is a signal, typically audible, to the driver of the motor vehicle that driver intervention may be required to prevent a collision.
(127) The indefinite articles "a" and "an", as used herein, such as "an image" or "a rectangular region", have the meaning of "one or more", that is, one or more images or one or more rectangular regions.
(128) The terms "validation" and "verification" are used herein interchangeably.
(129) Although selected features of the present invention have been shown and described, it is to be understood the present invention is not limited to the described features. Instead, it is to be appreciated that changes may be made to these features without departing from the principles and spirit of the invention, the scope of which is defined by the claims and the equivalents thereof.