Multiple-parts based vehicle detection integrated with lane detection for improved computational efficiency and robustness
10576974 · 2020-03-03
Assignee
Inventors
CPC classification
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
B60W30/0956
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/303
PERFORMING OPERATIONS; TRANSPORTING
G06F18/254
PHYSICS
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
G06V20/58
PHYSICS
B60R2300/307
PERFORMING OPERATIONS; TRANSPORTING
G06V20/588
PHYSICS
G06V10/446
PHYSICS
G06V10/809
PHYSICS
International classification
B60W30/095
PERFORMING OPERATIONS; TRANSPORTING
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
Detecting the presence of target vehicles in front of a host vehicle by obtaining, using one or more visual sensors, an image of a field of view in front of a host vehicle and detecting, individually, a plurality of parts of one or more target vehicles in the obtained image. The detected plurality of parts, of the one or more target vehicles, are extracted from the obtained image and paired to form a complete target vehicle from the plurality of parts. The pairing is only performed on selective individual parts that overlap and have similar sizes indicating that they belong to the same target vehicle. A complete target vehicle is detected in response to forming a substantially complete target vehicle.
Claims
1. A method comprising: obtaining, using one or more visual sensors, an image of a field of view in front of a host vehicle; assigning a first set of classifier windows to a left portion of the obtained image and a second set of classifier windows to a right portion of the obtained image, the first set of classifier windows associated with a first part from a left side of the target vehicle, and the second set of classifier windows associated with a second part from a right side of the target vehicle; applying a threshold to generate a left blob that includes a first group of overlapping classifier windows from the first set of classifier windows and a right blob that includes a second group of overlapping classifier windows from the second set of classifier windows, the threshold corresponding to a minimum quantity and a maximum quantity of overlapping classifier windows forming each of the left blob and the right blob; determining that the left blob and the right blob form a pair of symmetrical blobs based at least on the left blob being adjacent to the right blob and a difference in size between the left blob and the right blob not exceeding a threshold value, and the left blob and the right blob further being determined to form the pair of symmetrical blobs based on the left blob and the right blob being symmetrical relative to a central axis; and detecting, based at least on the target vehicle including more than a threshold quantity of pairs of symmetrical blobs, the presence of a complete vehicle.
2. The method of claim 1, further comprising: detecting the first part from the left side of the target vehicle using Haar-Adaboost cascades; and detecting the second part from the right side of the target vehicle using Haar-Adaboost cascades, where the Haar-Adaboost cascades are trained using a modified active learning methodology.
3. The method of claim 1, further comprising: generating a first voting map for the first set of classifier windows and a second voting map for the second set of classifier windows, an iterative voting technique being applied to each of the first voting map and the second voting map in order to generate the left blob and the right blob.
4. The method of claim 1, further comprising: applying a symmetry regression model to determine a symmetry score for the left blob and the right blob.
5. The method of claim 3, wherein the threshold is applied to each of the first voting map and the second voting map to generate the first group of overlapping classifier windows comprising the left blob and the second group of overlapping classifier windows comprising the right blob.
6. The method of claim 1, further comprising: detecting, in the obtained image, lane features of a road between the host vehicle and the complete vehicle.
7. The method of claim 6, wherein the detecting of the lane features is performed by analyzing a plurality of horizontal bands disposed in the obtained image between the host vehicle and the complete vehicle.
8. A system, comprising: one or more visual sensors configured to obtain an image of a field of view in front of a host vehicle; one or more data processors; a memory storing computer-readable instructions, which when executed by the one or more data processors, cause the one or more data processors to perform one or more operations comprising: assigning a first set of classifier windows to a left portion of the obtained image and a second set of classifier windows to a right portion of the obtained image, the first set of classifier windows associated with a first part from a left side of the target vehicle, and the second set of classifier windows associated with a second part from a right side of the target vehicle; applying a threshold to generate a left blob that includes a first group of overlapping classifier windows from the first set of classifier windows and a right blob that includes a second group of overlapping classifier windows from the second set of classifier windows, the threshold corresponding to a minimum quantity and a maximum quantity of overlapping classifier windows forming each of the left blob and the right blob; determining that the left blob and the right blob form a pair of symmetrical blobs based at least on the left blob being adjacent to the right blob and a difference in size between the left blob and the right blob not exceeding a threshold value, and the left blob and the right blob further being determined to form the pair of symmetrical blobs based on the left blob and the right blob being symmetrical relative to a central axis; and detecting, based at least on the target vehicle including more than a threshold quantity of pairs of symmetrical blobs, the presence of a complete vehicle.
9. The system of claim 8, wherein the operations further comprise: detecting the first part from the left side of the target vehicle using Haar-Adaboost cascades; and detecting the second part from the right side of the target vehicle using Haar-Adaboost cascades, where the Haar-Adaboost cascades are trained using a modified active learning methodology.
10. The system of claim 8, wherein the operations further comprise: generating a first voting map for the first set of classifier windows and a second voting map for the second set of classifier windows, an iterative voting technique being applied to each of the first voting map and the second voting map in order to generate the left blob and the right blob.
11. The system of claim 8, wherein the operations further comprise: applying a symmetry regression model to determine a symmetry score for the left blob and the right blob.
12. The system of claim 8, wherein the threshold is applied to each of the first voting map and the second voting map to generate the first group of overlapping classifier windows comprising the left blob and the second group of overlapping classifier windows comprising the right blob.
13. The system of claim 8, wherein the operations further comprise at least: detecting, in the obtained image, lane features of a road between the host vehicle and the complete vehicle.
14. The system of claim 13, wherein the detecting of the lane features is performed by analyzing a plurality of horizontal bands disposed in the obtained image between the host vehicle and the complete vehicle.
Description
DESCRIPTION OF THE DRAWINGS
(1) The accompanying drawings, which are incorporated in and constitute a part of this specification, show certain aspects of the subject matter disclosed herein and, together with the description, help explain some of the principles associated with the disclosed implementations.
DETAILED DESCRIPTION
(12) On-road vehicle detection and lane detection are tasks that may be used in vision-based active safety systems for vehicles. With emerging hybrid and electric vehicles that rely on battery power, advanced driver assistance systems may be designed such that they are power efficient. Therefore, the computational efficiency of the underlying algorithms is a key consideration in their design.
(13) The subject matter disclosed herein includes a vehicle detection algorithm integrated with lane detection such that there may be, in some implementations, improved computational efficiency without compromising the robustness of lane and/or vehicle detection. This computational efficiency may enable embedded realization of such systems for in-vehicle electronic systems.
(14) The subject matter disclosed herein facilitates the detection of vehicles with a low false detection rate and acceptable levels of true detection rate. The computational efficiency may be realized through a reduction in the number of false positives detected by the on-road vehicle detection systems. A false positive occurs when the on-road vehicle detection systems determine that there is a vehicle in the path and/or vicinity of the host vehicle when no such vehicle is present.
(16) In some variations, the on-road vehicle detection system 102 can be configured to detect the presence of a rear portion 108 of a target vehicle 106 in front of the host vehicle 104. The on-road vehicle detection system 102 can be configured to separately detect a plurality of parts of the rear portion 108 of the target vehicle 106. For example, the on-road vehicle detection system 102 can be configured to separately detect a right rear portion 110 and a left rear portion 112 of the target vehicle 106.
(17) In the event that the detected target vehicle 106 does not include a right rear portion 110 and a left rear portion 112, the on-road vehicle detection system 102 can be configured to flag the detected target vehicle 106 as a false detection. A target vehicle 106 that is in front of the host vehicle 104 is likely to always include, unless obstructed, a right rear portion 110 and a left rear portion 112.
(18) In some variations, the on-road vehicle detection system 102 can be configured to detect the multiple portions of the target vehicle 106 using a sliding window approach as described below. Individual windows can be extracted to verify the existence of a portion of the target vehicle 106 within the extracted window.
(19) The on-road vehicle detection system 102 can include a visual sensor. A visual sensor can include a camera, a CCD, CMOS, NMOS, or Live MOS sensor, an infrared sensor, LIDAR, or the like. The visual sensor can be configured to detect visual data associated with the field of view of the on-road vehicle detection system 102 in front of the host vehicle 104 to form images of the field of view in front of the host vehicle 104. The on-road vehicle detection system 102 can be configured to perform image analysis techniques on the formed image of the field of view in front of the host vehicle 104.
(20) The on-road vehicle detection system 102 can be configured to generate a plurality of virtual classifier windows 114 within the formed image.
(21) Most of the image 200 contains no information that is useful in determining whether a vehicle is in front of the host vehicle 104. The image 200 can be split into multiple windows. Only some of those windows will contain information that is pertinent to the detection of a target vehicle 106 in front of the host vehicle 104. The size of these windows must be determined. When a window is too big, too much of the image is captured and a processor associated with the on-road vehicle detection system 102 must analyze a much larger portion of the image 200 than is necessary. Large windows are not needed for vehicles that appear far away from the host vehicle 104 in the image 200. In collision avoidance systems, vehicles that are far away need not be analyzed because the likelihood of a collision with those vehicles is low. A window that is too small will not contain enough features for the on-road vehicle detection system 102 to determine the content of the window. Consequently, small windows need not be processed in the image 200 to detect a target vehicle 106 that is close to the host vehicle 104.
(22) The window sizes can be determined for the image 200 by applying inverse perspective mapping (IPM) to the image to compute a look-up table (LUT) that contains possible window sizes for image 200. The window sizes can be based on the specific camera calibration information of the image sensor used to obtain the image 200. The LUT generation step can be a one-time offline step based on the parameters of the camera used by the on-road vehicle detection system 102 to obtain the image 200. The parameters of the camera can be used to generate camera calibration parameters. The camera calibration parameters can reflect the calibration needed to correlate a point within the image 200 with a distance in front of the host vehicle 104.
(23) In some variations, the LUT generation step uses the lane calibration information to generate a look-up table (LUT) that holds the different window sizes for each y-coordinate in the image domain. This generation of the LUT eliminates the so-called blind-folded approach of applying unwarranted multiple scaled windows to the entire image. Instead, the window scales are taken from the LUT, which is defined such that only the necessary scaled windows are processed. This process may, in some implementations, reduce the number of windows to process by, for example, one or more orders of magnitude.
(24) The on-road vehicle detection system 102 can be configured to generate a homography matrix H. The on-road vehicle detection system 102 can use the camera calibration parameters to convert the camera coordinate system (CCS), or image domain, into the world coordinate system (WCS), or top view. For example, given an input image 200, the IPM image 202 is generated using the homography matrix H. Consequently, every point P(x,y) in the image 200 is transformed to the point P.sub.w in image 202 using the homography matrix H. A mathematical representation of the transformation is:
[x.sub.w y.sub.w 1].sup.T=kH[x y 1].sup.T
where k is a calibration constant based on the camera calibration parameters. From this, the locations of the minima and maxima in the IPM image, image 202, can be determined to yield the corresponding positions in the image domain, i.e., in the image 200. The positions in the image 200 that correspond to the maxima and minima in the IPM image can be denoted as points P.sub.1, P.sub.2, P.sub.3 and P.sub.4 within the image 200. For each row in image 202, point P.sub.3w can be determined. P.sub.3w can be determined using the equation P.sub.3w-P.sub.1w=w.sub.V.sup.W, where w.sub.V.sup.W is the width of a vehicle in WCS, i.e., as seen from the top view. Most consumer vehicles have similar axle lengths, and the length of the axle on most consumer vehicles is approximately the same as the width of the vehicle. Consequently, by fixing the axle length w.sub.V.sup.W at an estimated value, P.sub.3w can be determined.
(25) Given P.sub.1w and P.sub.3w in the IPM domain, the inverse of H, i.e., H.sup.-1, can be used to determine the corresponding points P.sub.1 and P.sub.3 in the image domain. For each row index y in image 200, the LUT has the following list of values:
w.sub.V(y)=x.sub.3-x.sub.1
(26) where w.sub.V(y) is the width of the virtual window that should be used for vehicle detection in the y-th row of image 200, and x.sub.1 and x.sub.3 are the x-coordinates of points P.sub.1 and P.sub.3.
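By way of a non-limiting illustration, the per-row window-width LUT described above can be sketched as follows. The homography H, the image dimensions, and the assumed vehicle width are placeholders, and the code is a simplified outline under those assumptions rather than the exact implementation:

import numpy as np

def build_window_width_lut(H, image_height, image_width, vehicle_width_m=1.8):
    """Sketch: per-row virtual window widths from an image-to-top-view homography H.

    For each image row y, a point P1 on that row is projected to the top view,
    shifted horizontally by the assumed vehicle width, and projected back with
    the inverse homography; the horizontal distance between the two image
    points gives w_V(y).
    """
    H_inv = np.linalg.inv(H)
    lut = np.zeros(image_height)
    x_center = image_width / 2.0

    for y in range(image_height):
        # P1 in the image domain (homogeneous coordinates), projected to the top view.
        p1 = H @ np.array([x_center, y, 1.0])
        p1 /= p1[2]
        # P3 in the top view: same row, offset by the assumed vehicle width.
        p3w = np.array([p1[0] + vehicle_width_m, p1[1], 1.0])
        # Back-project P3 into the image domain.
        p3 = H_inv @ p3w
        p3 /= p3[2]
        lut[y] = abs(p3[0] - x_center)  # w_V(y) = x_3 - x_1
    return lut

The resulting lut[y] corresponds to w.sub.V(y) and, in this simplified form, could be computed once offline for a given camera calibration.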
(28) Multiple sets of virtual classifier windows 114 can be generated, one set for each of the multiple parts of the vehicles to be detected. For example, a set of classifier windows 114, B.sup.P1, can be generated for the left part 302 of the image 300 of the target vehicle 106, and a set of classifier windows 114, B.sup.P2, can be generated for the right part 304 of the image 300 of the target vehicle 106. For every window B.sub.i.sup.P1 or B.sub.i.sup.P2, a width difference
Δ.sub.w=|w.sub.V(y)-2w.sub.i|
can be computed, where w.sub.i is the width of the classifier window 114 and w.sub.V(y) is the expected vehicle width from the LUT at the window's row y.
(29) In some variations, where Δ.sub.w is determined to be sufficiently small for a particular classifier window 114, the classifier window 114 can be considered for further processing. This may occur, for example, in the event that Δ.sub.w<10.
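As one hedged illustration of this filtering step, assuming windows are represented as (x, y, w, h) tuples and lut is the per-row width table from the previous sketch:

def filter_windows_by_expected_width(windows, lut, max_delta=10):
    """Sketch: keep classifier windows whose width roughly matches half of the
    expected full-vehicle width w_V(y) at the window's row, since each window
    is intended to cover only one part (one half) of the vehicle rear."""
    kept = []
    for (x, y, w, h) in windows:
        delta_w = abs(lut[y] - 2 * w)
        if delta_w < max_delta:
            kept.append((x, y, w, h))
    return kept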
(31) Following the generation of the classifier windows 114 based on the calibration parameters associated with the visual sensor that obtained the images, the on-road vehicle detection system 102 can be configured to determine the combinations of classifier windows 114 that each contain a part of a vehicle and that together form a whole vehicle.
(32) An iterative window search algorithm with a symmetry regression model may be used to identify pairs of left and right vehicle parts that can be combined to confirm the presence of a vehicle.
(33) The on-road vehicle detection system 102 can be configured to generate a window voting map for each part of the vehicle. For example, when the on-road vehicle detection system 102 is configured to determine the right rear portion and a left rear portion of a vehicle, the on-road vehicle detection system 102 can be configured to generate a window voting map for the sets of windows B.sup.P1 for the left rear portion of the vehicle, and a window voting map for the sets of windows B.sup.P2 for the right rear portion of the vehicle. The sets of windows B.sup.P1 and B.sup.P2 contain the filtered set of classifier windows 114.
(34) In some variations, for an image of the field of view in front of the on-road vehicle detection system 102 of the host vehicle 104, such as image 200, having a size denoted by m columns and n rows, the window voting maps can be denoted by M.sup.P1 and M.sup.P2, where
M.sup.P1(x, y)=n, if the point (x, y) lies within n of the B.sub.i.sup.P1 windows
M.sup.P2(x, y)=n, if the point (x, y) lies within n of the B.sub.i.sup.P2 windows
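A minimal sketch of such a window voting map, assuming each classifier window is an (x, y, w, h) tuple in image coordinates:

import numpy as np

def build_voting_map(windows, image_shape):
    """Sketch: accumulate a window voting map in which each pixel counts how
    many classifier windows cover it."""
    voting_map = np.zeros(image_shape[:2], dtype=np.int32)
    for (x, y, w, h) in windows:
        voting_map[y:y + h, x:x + w] += 1
    return voting_map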
(36) The on-road vehicle detection system 102 can be configured to apply thresholds to the window voting maps M.sup.P1 and M.sup.P2. Applying a threshold value to the window voting maps can yield blobs that correspond to regions covered by at least the minimum number of windows that overlap in the image domain. In some variations, windows can be searched iteratively to determine the windows in the M.sup.P1 voting map that have corresponding windows in the M.sup.P2 voting map. A blob is a group of classification or detection windows that are clustered around the same part of a target vehicle. When a part of a target vehicle is detected at a pixel, a window can be generated around it. When there is a part of a vehicle in an area of the image, multiple windows will cluster on and around that part. That group of windows can be referred to as a blob.
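One possible, simplified way to turn a thresholded voting map into blobs with bounding boxes is connected-component labeling; the use of SciPy here is an illustrative choice, not a requirement of the approach described above:

import numpy as np
from scipy import ndimage

def voting_map_to_blobs(voting_map, threshold):
    """Sketch: threshold a voting map and group connected over-threshold pixels
    into blobs, returning each blob's bounding box as (x, y, w, h)."""
    mask = voting_map >= threshold
    labels, num_blobs = ndimage.label(mask)
    blobs = []
    for slc in ndimage.find_objects(labels):
        y, x = slc[0].start, slc[1].start
        h, w = slc[0].stop - y, slc[1].stop - x
        blobs.append((x, y, w, h))
    return blobs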
(37) To pair a blob from B.sub.B.sup.P1 with a blob from B.sub.B.sup.P2, both blobs must be of similar size and placed adjacent to each other, for example:
||(x.sub.B.sup.P1, y.sub.B.sup.P1)-(x.sub.B.sup.P2, y.sub.B.sup.P2)||<d.sub.max, |w.sub.B.sup.P1-w.sub.B.sup.P2|<d.sub.w, |h.sub.B.sup.P1-h.sub.B.sup.P2|<d.sub.h
If such pairs of blobs are found, window pairs (B.sub.i.sup.P1, B.sub.j.sup.P2) can be formed, such that windows B.sub.i.sup.P1 and B.sub.j.sup.P2 satisfy the following conditions:
(38) 1) Overlap with the blob from which B.sub.i.sup.P1 and B.sub.j.sup.P2 are chosen, is high.
(39) 2) Symmetry condition is met.
(40) The first condition ensures that the windows B.sub.i.sup.P1 and B.sub.j.sup.P2 are the windows forming the selected blob, i.e., smaller windows may also be present within a blob obtained using a lower threshold. Consequently, evaluating the size of the windows B.sub.i.sup.P1 and B.sub.j.sup.P2 with respect to the size of the blobs ensures that the windows B.sub.i.sup.P1 and B.sub.j.sup.P2 overlap the entire blob. The relationship ||(x.sub.B.sup.P1, y.sub.B.sup.P1)-(x.sub.B.sup.P2, y.sub.B.sup.P2)||<d.sub.max can be used to assess the overlap of the windows B.sub.i.sup.P1 and B.sub.j.sup.P2 with the blob.
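A rough sketch of the adjacency and size checks used to pair a left blob with a right blob, assuming blobs are (x, y, w, h) bounding boxes and the tolerances d_max, d_w and d_h are tuning parameters; the symmetry condition is checked separately, as described below:

def blobs_form_pair(blob_left, blob_right, d_max, d_w, d_h):
    """Sketch: test whether a left blob and a right blob are adjacent and of
    similar size, per the pairing conditions above."""
    (xl, yl, wl, hl) = blob_left
    (xr, yr, wr, hr) = blob_right
    # Distance between blob centers must be small enough.
    cl = (xl + wl / 2.0, yl + hl / 2.0)
    cr = (xr + wr / 2.0, yr + hr / 2.0)
    center_dist = ((cl[0] - cr[0]) ** 2 + (cl[1] - cr[1]) ** 2) ** 0.5
    # Width and height differences must also stay within tolerance.
    return center_dist < d_max and abs(wl - wr) < d_w and abs(hl - hr) < d_h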
(41) The second condition is a symmetry condition. The symmetry condition is premised on the fact that most fully visible on-road vehicles have a symmetric rear profile. This premise is used to determine how well B.sub.i.sup.P1 matches B.sub.j.sup.P2 in appearance. For example, the left rear portion 302 and the right rear portion 304 of the vehicle illustrated in the accompanying drawings are substantially mirror images of each other about the vertical central axis 306.
(42) The on-road vehicle detection system 102 can be configured to check the symmetry of windows B.sub.i.sup.P1 and B.sub.j.sup.P2 by generating a bounding box B.sup.F. The bounding box can be defined by B.sup.F=[x.sup.F, y.sup.F, w.sup.F, h.sup.F] and can include both windows B.sub.i.sup.P1 and B.sub.j.sup.P2 in it. The image patch I.sup.F corresponding to bounding box B.sup.F is extracted from image 200 and divided into two equal parts along the vertical central axis 306. This results in I.sub.F.sup.P1 and I.sub.F.sup.P2, corresponding to the two parts of the bounding box B.sup.F, being extracted.
(43) The on-road vehicle detection system 102 can be configured to divide I.sub.F.sup.P1 and I.sub.F.sup.P2 into grids. For example, I.sub.F.sup.P1 and I.sub.F.sup.P2 can each be divided into 8×8 blocks. Any incomplete blocks can be padded. Each block from the left part of the bounding box, i.e., I.sub.F.sup.P1, is flipped, and its corresponding symmetrically opposite block from the right part of the bounding box, i.e., I.sub.F.sup.P2, is selected.
(44) The on-road vehicle detection system 102 can be configured to generate scaled histograms of gradient angles h.sup.P1 and h.sup.P2 for the two selected blocks, one block selected and flipped from I.sub.F.sup.P1 and the other block selected from I.sub.F.sup.P2. While the description describes a block from I.sub.F.sup.P1 being selected and flipped, the on-road vehicle detection system 102 can equally be configured to select and flip a block from I.sub.F.sup.P2.
(45) The scaling can be performed using the gradient magnitudes at each pixel coordinate, whose gradient angle is used to generate the histogram of gradient angles. h.sup.P1 and h.sup.P2 can be normalized and a dot product can be taken to determine the symmetry score S.sub.b(p,q) for that pair of symmetrically opposite blocks in I.sup.F, i.e.:
(46) S.sub.b(p,q)=h.sup.P1·h.sup.P2
In other words, S.sub.b(p,q) is the dot product between the two normalized histogram vectors. These scores, S.sub.b(p,q), are generated by the on-road vehicle detection system 102 for each pair of symmetrically opposite blocks within the extracted image I.sup.F. The total score for the bounding box B.sup.F that encloses the chosen windows B.sub.i.sup.P1 and B.sub.j.sup.P2 is then determined by the on-road vehicle detection system 102 by aggregating these block scores.
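A hedged sketch of this per-block symmetry measure, assuming grayscale image blocks as NumPy arrays and a simple gradient-magnitude-weighted angle histogram; the bin count and the gradient operator are illustrative choices rather than specified parameters:

import numpy as np

def block_symmetry_score(block_left, block_right, num_bins=9):
    """Sketch: symmetry score for one pair of symmetrically opposite blocks.
    The left block is flipped horizontally; gradient-magnitude-weighted
    histograms of gradient angles are built for both blocks, normalized, and
    compared with a dot product (S_b(p, q))."""
    def grad_hist(block):
        gy, gx = np.gradient(block.astype(np.float64))
        mag = np.hypot(gx, gy)
        ang = np.arctan2(gy, gx)
        hist, _ = np.histogram(ang, bins=num_bins, range=(-np.pi, np.pi), weights=mag)
        norm = np.linalg.norm(hist)
        return hist / norm if norm > 0 else hist

    h_p1 = grad_hist(np.fliplr(block_left))  # flip the left block
    h_p2 = grad_hist(block_right)
    return float(np.dot(h_p1, h_p2))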
(48) The symmetry score for I.sup.F, denoted S.sub.I.sup.F, can be evaluated using a symmetry regression model that predicts an expected symmetry score as a function of the width w and the height h of the bounding box B.sup.F:
S.sub.w.sup.p=α.sub.0+α.sub.1w+α.sub.2w.sup.2
S.sub.h.sup.p=β.sub.0+β.sub.1h+β.sub.2h.sup.2
(49) The coefficients α.sub.i and β.sub.i in the above relationships can be learned using positive training annotations, examples of which are described below.
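One simple way such quadratic regression coefficients might be learned from annotated positive samples, sketched with NumPy; the exact fitting procedure is not specified above, so least-squares polynomial fitting is an assumption:

import numpy as np

def fit_symmetry_regression(widths, heights, scores):
    """Sketch: learn quadratic coefficients (alpha_i, beta_i) from positive
    training annotations, i.e., the widths, heights and measured symmetry
    scores of annotated vehicles."""
    alpha = np.polyfit(widths, scores, deg=2)   # quadratic model S_w^p as a function of w
    beta = np.polyfit(heights, scores, deg=2)   # quadratic model S_h^p as a function of h
    return alpha, beta

def predicted_scores(alpha, beta, w, h):
    """Evaluate the learned models at a candidate window's width and height."""
    return np.polyval(alpha, w), np.polyval(beta, h)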
(50) For a given I.sup.F, the symmetry score is computed as S.sub.I.sup.F and compared with the scores S.sub.w.sup.p and S.sub.h.sup.p predicted by the regression models; when the computed score is consistent with the predicted scores, the symmetry condition is considered met and the window pair is accepted as a detection.
(51) If a final detection is found, all the blobs that lie within the finally detected bounding box, B.sup.F, are eliminated from further processing. In subsequent iterations, only the remaining blobs are further thresholded and checked for the presence of vehicles. The on-road vehicle detection system 102 can be configured to terminate the iteration when either both the left and right voting maps have exhausted their threshold options or there are no more blobs left in either of the parts to further process.
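The overall iterative search might be organized along the following lines; this sketch reuses voting_map_to_blobs from the earlier sketch and assumes a pair_and_verify callable that applies the pairing and symmetry checks and returns confirmed vehicle bounding boxes:

def iterative_blob_search(map_left, map_right, thresholds, pair_and_verify):
    """Sketch of the iterative blob search described above. Blobs are generated
    at successive thresholds, verified pairs are reported as detections, and
    blobs lying inside an existing detection are excluded from further
    processing."""
    detections = []
    for t in thresholds:
        left_blobs = [b for b in voting_map_to_blobs(map_left, t)
                      if not any(inside(b, d) for d in detections)]
        right_blobs = [b for b in voting_map_to_blobs(map_right, t)
                       if not any(inside(b, d) for d in detections)]
        if not left_blobs or not right_blobs:
            continue  # nothing left to pair at this threshold
        detections.extend(pair_and_verify(left_blobs, right_blobs))
    return detections

def inside(blob, box):
    """True if a blob's bounding box lies entirely within a detection box."""
    bx, by, bw, bh = blob
    x, y, w, h = box
    return bx >= x and by >= y and bx + bw <= x + w and by + bh <= y + h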
(53) At 702, an image can be obtained by an on-road vehicle detection system, such as on-road vehicle detection system 102. The image can be of a field of view in the direction of travel of a vehicle, such as a host vehicle. For example, when the vehicle is travelling forwards, an image can be taken of a field of view in front of the vehicle. The angle of the field of view may be determined based on a likelihood of an object causing an obstruction for the motion of the vehicle. The image may not contain information about the areas ninety degrees to the left or the right of a moving vehicle because it is unlikely that the moving vehicle would strike an object in those areas when travelling forwards.
(54) At 704, classifiers for the two parts are applied to the image obtained at 702 to get a set of detection windows. The sets of classifier windows are then used to generate blobs. The sets of classifier windows can include a left-hand set of classifier windows and a right-hand set of classifier windows. The left-hand set of classifier windows can be assigned to the left-hand portion of a vehicle detected in the image, and the right-hand set of classifier windows can be assigned to a right-hand portion of a vehicle detected in the image.
(55) At 706, window voting maps can be generated for each set of classifier windows, examples of which are shown in the accompanying drawings.
(56) At 708, thresholds can be applied to each voting map. Applying a threshold value to the window voting maps can yield blobs that correspond to regions covered by at least the minimum number of windows that overlap in the image domain.
(57) At 710, blobs can be generated for each threshold of each voting map, for both right-hand voting maps and left-hand voting maps. Each of the blobs can have an associated bounding box.
(58) At 712, right-hand blobs and left-hand blobs can be matched based on a right-hand blob and a left-hand blob being next to each other and having a similar size. For example, a blob from B.sub.B.sup.P1 is paired with a blob from B.sub.B.sup.P2 when both blobs are of similar size and placed adjacent to each other.
(59) At 714, a determination of symmetry between a matched right-hand blob and a left-hand blob can be made. Symmetry can be determined by dividing the blobs into equal numbers of blocks. Blocks from a blob on one side, either the left or the right, can be flipped, and a determination of symmetry between a flipped block and a corresponding block on the other side can be made.
(60) At 716, when the matched blobs are within a threshold number of units of being symmetrical, it can be determined that a vehicle has been detected. For example, the on-road vehicle detection system 102 may be configured to determine that the bounding box B.sup.F has sufficient symmetry to conclude that it bounds a whole target vehicle 106 when the symmetry score S.sub.I.sup.F is consistent with the scores predicted by the symmetry regression model.
(62) At 802, the right-hand side of the vehicle and the left-hand side of the vehicle can be detected using Haar-Adaboost cascades, for example, cascades trained using the two-step training methodology described below.
(63) At 804, iterative voting techniques and symmetry regression models can be used to associate left-hand sides of the vehicles with right-hand sides of the vehicles, for example, using the window voting maps and blob pairing described above.
(64) At 806, a determination can be made as to whether left-hand parts of a vehicle and right-hand parts of a vehicle have been matched. For example, a blob from B.sub.B.sup.P1 is paired with a blob from B.sub.B.sup.P2 when both blobs are of similar size and placed adjacent to each other.
(65) At 808, in response to a match at 806, the on-road vehicle detection system can determine that a vehicle has been detected.
(66) In response to a determination at 806 that there is no match, it can be determined that there is no vehicle in that window and the on-road vehicle detection system can, at 810, be configured to analyze the next window.
(68) The on-road vehicle detection system can be trained to detect vehicles using a two-step training methodology. During a typical cascade training process, positive training samples can be annotated from images, and negative training samples can be sampled randomly from images that do not contain any positive instances. The proposed training method, however, does not discard the remainder of the input image from which the positive samples are collected.
(69) An input image 902 can be provided to the training framework. As illustrated, in the first training step 906, positive samples for the left part V.sup.P1 and the right part V.sup.P2 of the vehicle are annotated and, together with randomly sampled negatives, are used to train first-stage Haar-Adaboost cascade classifiers C.sub.1.sup.P1 and C.sub.1.sup.P2.
(70) In the second training step 908, the AdaBoost cascades are trained again but with more informed and selective positive and negative samples. A multi-scale sliding window approach is applied to the training image set to generate Haar-like features, which are then classified using the classifiers from the first step, i.e., C.sub.1.sup.P1 and C.sub.1.sup.P2. C.sub.1.sup.P1 and C.sub.1.sup.P2 are applied to the Haar-like features generated from input images that were previously annotated with positive vehicle samples. It should be noted that the entire training image, and not just the previously annotated window, is sent for classification in the second training step 908. This classification results in two sets of windows W.sub.1.sup.P1 and W.sub.1.sup.P2 for V.sup.P1 and V.sup.P2. These windows are then split into true positives and false positives using the annotations for positive samples that were previously used for training in the first training step 906.
(71) The positive and negative windows selected using the above method are then used to extract Haar-like features, which are in turn used to train two AdaBoost cascade classifiers C.sub.2.sup.P1 and C.sub.2.sup.P2. During the testing phase, C.sub.2.sup.P1 and C.sub.2.sup.P2 are used to generate two sets of hypothesis windows B.sup.P1 and B.sup.P2 for the next stage of hypothesis verification, where
B.sup.P1={B.sub.1.sup.P1, B.sub.2.sup.P1, . . . , B.sub.N.sub.1.sup.P1}
B.sup.P2={B.sub.1.sup.P2, B.sub.2.sup.P2, . . . , B.sub.N.sub.2.sup.P2}
(72) In the above equations, B.sub.i.sup.P1 and B.sub.i.sup.P2 denote the i-th hypothesis windows for the left part and the right part, respectively, and N.sub.1 and N.sub.2 denote the number of hypothesis windows generated for each part.
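As a rough, non-limiting sketch of the sample selection performed in the second training step, assuming hypothetical helper routines sliding_window_detect (a multi-scale sliding window plus first-stage classification) and per-image annotation boxes; the IoU threshold is an illustrative choice:

def mine_second_stage_samples(images, annotations, first_stage_classifier,
                              iou_threshold=0.5):
    """Sketch of the second training step: run the first-stage classifier over
    full training images with a sliding window, then split its detections into
    positives and hard negatives using the existing positive annotations."""
    positives, negatives = [], []
    for image, boxes in zip(images, annotations):
        for window in sliding_window_detect(image, first_stage_classifier):
            if any(iou(window, box) >= iou_threshold for box in boxes):
                positives.append(window)   # true positive: reuse as a positive sample
            else:
                negatives.append(window)   # false positive: hard negative sample
    return positives, negatives

def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

The mined positives and hard negatives would then be used to extract Haar-like features for training C.sub.2.sup.P1 and C.sub.2.sup.P2, as described above.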
(73) The subject matter disclosed herein may bring together both lane detection and vehicle detection in an integrated approach to not only increase the robustness of each of the tasks but also reduce the computational complexity of both techniques. The subject matter disclosed herein may also fuse the two tasks at an algorithmic level to address both the robustness and the computational efficiency of the entire lane and vehicle detection system. The subject matter disclosed herein may also provide a process for detecting parts of vehicles. Moreover, an iterative window search algorithm and/or a symmetry regression model may be used to detect vehicles.
(74) The division of the vehicle into two parts, a left part and a right part, may provide, in some implementations, improved robustness and computational efficiency. In terms of robustness, the detection of independent parts may occur in the sequence described above and may result in a reduced number of false alarms.
(75) The vehicle detections from the above processes may then be used to detect the lane features in an informed and controlled manner.
(76) The vehicle positions are used by ELVIS to determine the maximum band positions for lane feature extraction. Lane features can be extracted using the lane detection algorithm only in those specific bands that are unobstructed by vehicles, resulting in improved computational efficiency.
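A minimal sketch of how lane feature extraction might be restricted to horizontal bands that are not covered by detected vehicles; the band height and the (x, y, w, h) box representation are assumptions made for illustration:

def unobstructed_bands(image_height, vehicle_boxes, band_height=20):
    """Sketch: choose horizontal bands, working upward from the image bottom
    (nearest to the host vehicle), and keep only bands whose rows are not
    covered by any detected vehicle, so lane features are extracted only where
    the road surface is visible."""
    bands = []
    for top in range(image_height - band_height, 0, -band_height):
        band = (top, top + band_height)
        covered = any(y < band[1] and y + h > band[0]
                      for (x, y, w, h) in vehicle_boxes)
        if not covered:
            bands.append(band)
    return bands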
(77) In some example implementations, the subject matter disclosed herein may be used in active safety systems and advanced driver assistance systems, such as collision avoidance systems, lane change assist systems, and the like.
(78) One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
(79) These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the term machine-readable medium refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term machine-readable signal refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively, or additionally, store such machine instructions in a transient manner, such as for example, as would a processor cache or other random access memory associated with one or more physical processor cores.
(80) To provide for interaction with a user, one or more aspects or features of the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT) or a liquid crystal display (LCD) or a light emitting diode (LED) monitor for displaying information to the user and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user may provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as for example visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including, but not limited to, acoustic, speech, or tactile input. Other possible input devices include, but are not limited to, touch screens or other touch-sensitive devices such as single or multi-point resistive or capacitive track pads, voice recognition hardware and software, optical scanners, optical pointers, digital image capture devices and associated interpretation software, and the like.
(81) The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.
(82) Appendix A includes a paper titled On-road Vehicle Detection using Multiple Parts, pages 1-10, which is incorporated herein in its entirety.