ADVANCED DRIVER ASSISTANCE SYSTEM AND METHOD
20200143176 · 2020-05-07
CPC classification
G06V10/446 (Physics)
G06V20/588 (Physics)
Abstract
An advanced driver assistance system is configured to detect lane markings in a perspective image of a road in front of a vehicle. The perspective image of the road is separated into horizontal stripes corresponding to different road portions at different average distances from the vehicle. Features are extracted from the plurality of horizontal stripes using a plurality of kernels.
Claims
1. An advanced driver assistance system for a vehicle, the advanced driver assistance system being configured to detect lane markings in a perspective image of a road in front of the vehicle, wherein the advanced driver assistance system comprises: a feature extractor configured to separate the perspective image of the road into a plurality of horizontal stripes, wherein each horizontal stripe of the perspective image corresponds to a different road portion at a different average distance from the vehicle, wherein the feature extractor is further configured to extract features from the plurality of horizontal stripes using a plurality of kernels, each kernel being associated with a kernel width, by processing a first horizontal stripe corresponding to a first road portion at a first average distance using a first kernel associated with a first kernel width, a second horizontal stripe corresponding to a second road portion at a second average distance using a second kernel associated with a second kernel width and a third horizontal stripe corresponding to a third road portion at a third average distance using a third kernel associated with a third kernel width, wherein the first average distance is smaller than the second average distance and the second average distance is smaller than the third average distance and wherein the ratio of the first kernel width to the second kernel width is larger than the ratio of the second kernel width to the third kernel width.
2. The system of claim 1, wherein the first horizontal stripe is adjacent to the second horizontal stripe and the second horizontal stripe is adjacent to the third horizontal stripe.
3. The system of claim 1, wherein each kernel of the plurality of kernels is defined by a plurality of kernel weights and wherein each kernel comprises left and right outer kernel portions, left and right intermediate kernel portions and a central kernel portion, including left and right central kernel portions, wherein for each kernel the associated kernel width is the width of the whole kernel.
4. The system of claim 3, wherein for detecting a feature the feature extractor is further configured to determine for each horizontal stripe a respective average intensity in the left and right central kernel portions, the left and right intermediate kernel portions and the left and right outer kernel portions using a respective convolution operation and to compare a respective result of the respective convolution operation with a respective threshold value.
5. The system of claim 1, wherein for a currently processed horizontal stripe identified by a stripe index r the feature extractor is configured to determine the width of the central kernel portion d.sub.C(r), the widths of the left and right intermediate kernel portions d.sub.B(r) and the widths of the left and right outer kernel portions d.sub.A(r) on the basis of the following equations:
d.sub.A(r)=L.sub.x(r); d.sub.B(r)=L.sub.y(r); d.sub.C(r)=d.sub.A(r)−d.sub.B(r)+1; d.sub.C1(r)=d.sub.C2(r)=d.sub.C(r)/2;
Kr(r)=d.sub.B(r)=L.sub.y(r); d.sub.C(r)≥1, wherein L.sub.x(r) denotes a distorted expected width of the lane marking, L.sub.y(r) denotes a height of the currently processed horizontal stripe, d.sub.C1(r) denotes a width of the left central kernel portion, d.sub.C2(r) denotes a width of the right central kernel portion and Kr(r) denotes the height of the currently processed horizontal stripe.
6. The system of claim 5, wherein for the currently processed horizontal stripe identified by the stripe index r the feature extractor is configured to determine the plurality of kernel weights on the basis of the following equations:
7. The system of claim 5, wherein for the currently processed horizontal stripe identified by the stripe index r the feature extractor is configured to determine the plurality of kernel weights on the basis of the following equations:
8. The system of claim 5, wherein for the currently processed horizontal stripe identified by the stripe index r the feature extractor is configured to determine the plurality of kernel weights on the basis of the following equations:
9. The system of claim 5, wherein for the currently processed horizontal stripe identified by the stripe index r the feature extractor is configured to determine the plurality of kernel weights on the basis of the following equations:
10. The system of claim 5, wherein for the currently processed horizontal stripe identified by the stripe index r the feature extractor is configured to determine the plurality of kernel weights on the basis of the distorted expected width of the lane marking L.sub.x(r) and the height of the currently processed horizontal stripe L.sub.y(r).
11. The system of claim 5, wherein for the currently processed horizontal stripe identified by the stripe index r the feature extractor is configured to determine the width of the central kernel portion d.sub.C(r), the widths of the left and right intermediate kernel portions d.sub.B(r) and the widths of the left and right outer kernel portions d.sub.A(r) on the basis of the distorted expected width of the lane marking L.sub.x(r) and the height of the currently processed horizontal stripe L.sub.y(r) and to determine the plurality of kernel weights on the basis of the width of the central kernel portion d.sub.C(r), the widths of the left and right intermediate kernel portions d.sub.B(r) and the widths of the left and right outer kernel portions d.sub.A(r).
12. The system of claim 1, wherein the system further comprises a stereo camera configured to provide the perspective image of the road in front of the vehicle as a stereo image having a first channel and a second channel.
13. The system of claim 12, wherein the feature extractor is configured to independently extract features from the first channel of the stereo image and the second channel of the stereo image and wherein the system further comprises a unit configured to determine those features, which have been extracted from both the first channel and the second channel of the stereo image.
14. A method of operating an advanced driver assistance system for a vehicle, the advanced driver assistance system being configured to detect lane markings in a perspective image of a road in front of the vehicle, wherein the method comprises: separating the perspective image of the road into a plurality of horizontal stripes, wherein each horizontal stripe of the perspective image corresponds to a different road portion at a different average distance from the vehicle; and extracting features from the plurality of horizontal stripes using a plurality of kernels, each kernel being associated with a kernel width, by processing a first horizontal stripe corresponding to a first road portion at a first average distance using a first kernel associated with a first kernel width, a second horizontal stripe corresponding to a second road portion at a second average distance using a second kernel associated with a second kernel width and a third horizontal stripe corresponding to a third road portion at a third average distance using a third kernel associated with a third kernel width, wherein the first average distance is smaller than the second average distance and the second average distance is smaller than the third average distance and wherein the ratio of the first kernel width to the second kernel width is larger than the ratio of the second kernel width to the third kernel width.
15. A non-transitory computer-readable medium comprising program code which, when executed by a processor, causes the method of claim 14 to be performed.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0035] Further embodiments of the invention will be described with respect to the following figures, wherein:
[0044] In the various figures, identical reference signs will be used for identical or at least functionally equivalent features.
DETAILED DESCRIPTION OF EMBODIMENTS
[0045] In the following description, reference is made to the accompanying drawings, which form part of the disclosure, and in which are shown, by way of illustration, specific aspects in which the invention may be placed. It is understood that other aspects may be utilized and structural or logical changes may be made without departing from the scope of the invention. The following detailed description, therefore, is not to be taken in a limiting sense, as the scope of the invention is defined by the appended claims.
[0046] For instance, it is understood that a disclosure in connection with a described method may also hold true for a corresponding device or system configured to perform the method and vice versa. For example, if a specific method step is described, a corresponding device may include a unit to perform the described method step, even if such unit is not explicitly described or illustrated in the figures. Further, it is understood that the features of the various exemplary aspects described herein may be combined with each other, unless specifically noted otherwise.
[0048] In the embodiment shown in
[0049] As illustrated in
[0050] As illustrated in
[0051] As will be described in more detail further below, the feature extractor 101 is configured to decrease the kernel width at a lower rate than, for instance, the kernel height, in order to take into account the increased contribution of the camera sensor noise as the feature sizes get smaller. Differently put, the feature extractor 101 is configured to extract features from the plurality of horizontal stripes on the basis of the plurality of kernels by processing a first horizontal stripe corresponding to a first road portion at a first average distance from the vehicle using a first kernel associated with a first kernel width, a second horizontal stripe corresponding to a second road portion at a second average distance from the vehicle using a second kernel associated with a second kernel width and a third horizontal stripe corresponding to a third road portion at a third average distance from the vehicle using a third kernel associated with a third kernel width, wherein the first average distance is smaller than the second average distance and the second average distance is smaller than the third average distance and wherein the ratio of the first kernel width to the second kernel width is larger than the ratio of the second kernel width to the third kernel width. As will be appreciated, for the conventional linear variation of the kernel width the ratio of the first kernel width to the second kernel width would be equal to the ratio of the second kernel width to the third kernel width, i.e. constant. Thus, the feature extractor 101 of the ADAS 100 can be regarded as varying the kernel width on the basis of a dependency that varies more strongly than a linear dependency.
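The non-linear width variation described above can be illustrated with a short sketch. The inverse-distance schedule below is a hypothetical example, not the patent's actual formula; it merely demonstrates a width sequence satisfying the claimed ratio property, i.e. the near-to-mid width ratio exceeding the mid-to-far ratio.

```python
# Illustrative sketch (not from the patent text): one possible stronger-than-linear
# kernel-width schedule over three stripes, assuming the width shrinks with the
# inverse of the stripe's average distance from the vehicle.

def kernel_width(avg_distance_m: float, base_width_px: int = 61) -> int:
    """Hypothetical schedule: width inversely proportional to distance,
    clamped to an odd minimum so the kernel stays centered."""
    w = max(3, round(base_width_px / avg_distance_m))
    return w if w % 2 == 1 else w + 1  # keep the width odd

# Three stripes at increasing average distance from the vehicle.
d1, d2, d3 = 5.0, 15.0, 45.0
w1, w2, w3 = (kernel_width(d) for d in (d1, d2, d3))

# The claimed property: the near-to-mid width ratio exceeds the mid-to-far ratio
# (for a linear width variation the two ratios could coincide).
print(w1, w2, w3, w1 / w2 > w2 / w3)
```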
[0052] In an embodiment, the feature extractor 101 is further configured to perform convolution operations and to compare the respective result of a respective convolution operation with a respective threshold value for extracting the features, in particular coordinates of the lane markings. Mathematically, such a convolution operation can be described by the following equation for a 2-D discrete convolution:
O(i,j)=Σ.sub.nΣ.sub.m K(n,m)·I(i−n,j−m),
wherein the kernel K is a matrix of the size (Kr×Kc) (i.e. kernel rows or height by kernel columns or width), n and m run over the Kr kernel rows and the Kc kernel columns, respectively, and I(i,j) and O(i,j) denote the respective arrays of input and output image intensity values. The feature extractor 101 of the ADAS 100 can be configured to perform feature extraction on the basis of a horizontal 1-D kernel K, i.e. a kernel with a kernel matrix depending only on m (i.e. the horizontal direction) but not on n (i.e. the vertical direction).
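A row-wise horizontal 1-D convolution of the kind described above can be sketched as follows; the kernel values here are placeholders, not the patent's kernel weights.

```python
import numpy as np

# Minimal sketch of the horizontal 1-D convolution described above: the kernel
# depends only on the horizontal index, so each image row is filtered
# independently of its neighbours.

def convolve_rows(image: np.ndarray, kernel_1d: np.ndarray) -> np.ndarray:
    """Apply a horizontal 1-D kernel to every row of a 2-D intensity array."""
    return np.stack([np.convolve(row, kernel_1d, mode="same") for row in image])

# A toy horizontal stripe: a bright 3-pixel-wide "marking" on a dark background.
stripe = np.array([[0, 0, 10, 10, 10, 0, 0],
                   [0, 0, 10, 10, 10, 0, 0]], dtype=float)
kernel = np.array([-1.0, 2.0, -1.0])  # placeholder edge-sensitive kernel

response = convolve_rows(stripe, kernel)
print(response[0])
```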
[0053] In the exemplary embodiment shown in
[0054] As illustrated in
[0055] The geometrical transformation from the bird's eye view, i.e. the non-distorted view, to the perspective image view, i.e. the distorted view 200, is feasible through a transformation matrix H, which maps each point of the distorted domain into a corresponding point of the non-distorted domain and vice versa, as the transformation operation is invertible.
[0056] L.sub.x and L.sub.y are the non-distorted expected width of the lane marking and the non-distorted sampling step, respectively. They may be obtained as functions of the camera projection parameters, the expected physical width of the lane marking and the expected physical gap between the markings of a dashed line.
[0057] Each horizontal stripe of index r in the image view has the height of a distorted sampling step L.sub.y(r) which corresponds to the non-distorted sampling step, i.e. L.sub.y.
[0058] The expected width of lane marking at stripe r is denoted by a distorted expected width L.sub.x(r) which corresponds to the non-distorted expected width of lane marking L.sub.x. The geometrical transformation from the distorted domain (original image) to the non-distorted domain (bird's eye view) is feasible through a transformation matrix H which maps each point of the distorted domain into a corresponding point of the non-distorted domain. The operation is invertible.
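The invertible homography mapping between the two domains can be sketched as follows; the matrix H below is an arbitrary placeholder standing in for a calibrated camera homography.

```python
import numpy as np

# Sketch of the invertible mapping between the distorted (perspective) and
# non-distorted (bird's eye) domains via a 3x3 transformation matrix H.
# In practice H would come from camera calibration; this one is a placeholder.

def apply_homography(H: np.ndarray, point_xy: tuple) -> tuple:
    """Map a 2-D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([point_xy[0], point_xy[1], 1.0])
    return (x / w, y / w)

H = np.array([[1.0, 0.2, 5.0],
              [0.0, 1.5, 0.0],
              [0.0, 0.001, 1.0]])  # placeholder homography

p = (100.0, 200.0)
q = apply_homography(H, p)
# The operation is invertible: mapping back through the inverse recovers p.
p_back = apply_homography(np.linalg.inv(H), q)
print(q, p_back)
```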
[0059] The filtering is done block-wise and row-wise, where the kernel height corresponds to the stripe height and the kernel width is adjusted based on the parameters L.sub.y(r) and L.sub.x(r). Since these parameters are constant for each stripe, the kernel size will also be constant for a given stripe. As will be described later, the kernel width can be divided into several regions or sections.
[0060] As illustrated in the perspective image view 200 of
[0061] As will be appreciated and as illustrated in
[0062]
[0063] In an embodiment, for a currently processed horizontal stripe identified by a stripe index r the feature extractor 101 of the ADAS 100 is configured to determine the width of the central kernel portion d.sub.C(r), the widths of the left and right intermediate kernel portions d.sub.B(r) and the widths of the left and right outer kernel portions d.sub.A(r) on the basis of the following equations:
d.sub.A(r)=L.sub.x(r); d.sub.B(r)=L.sub.y(r); d.sub.C(r)=d.sub.A(r)−d.sub.B(r)+1; d.sub.C1(r)=d.sub.C2(r)=d.sub.C(r)/2;
Kr(r)=d.sub.B(r)=L.sub.y(r); d.sub.C(r)≥1,
wherein L.sub.x(r) denotes a distorted expected width of the lane marking, L.sub.y(r) denotes a height of the currently processed horizontal stripe, d.sub.C1(r) denotes a width of the left central kernel portion, d.sub.C2(r) denotes a width of the right central kernel portion and Kr(r) denotes the height of the currently processed horizontal stripe.
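The stripe-dependent kernel geometry can be sketched as below. Note that the source equation for d.sub.C(r) is garbled in this extraction; the relation d.sub.C(r)=d.sub.A(r)−d.sub.B(r)+1 used here is an assumed reading, and the example values of L.sub.x(r) and L.sub.y(r) are illustrative only.

```python
# Sketch of the per-stripe kernel geometry, under the ASSUMED reading
# d_C(r) = d_A(r) - d_B(r) + 1 of the garbled source equation.

def kernel_geometry(L_x: int, L_y: int) -> dict:
    """Return the widths of the kernel portions for one horizontal stripe,
    given the distorted expected marking width L_x(r) and stripe height L_y(r)."""
    d_A = L_x                 # left/right outer portions
    d_B = L_y                 # left/right intermediate portions
    d_C = d_A - d_B + 1       # central portion (assumed reading)
    return {
        "d_A": d_A,
        "d_B": d_B,
        "d_C": d_C,
        "d_C1": d_C / 2,      # left half of the central portion
        "d_C2": d_C / 2,      # right half of the central portion
        "Kr": d_B,            # kernel height equals the stripe height
        "total_width": 2 * d_A + 2 * d_B + d_C,
    }

g = kernel_geometry(L_x=15, L_y=4)  # illustrative stripe parameters
print(g)
```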
[0064] The respective width of the left and right outer kernel portions d.sub.A(r) can be based on the smallest expected gap between closely spaced lane markings. In the embodiment above, it is assumed that d.sub.A(r) equals L.sub.x(r). In another embodiment, d.sub.A(r) can be a fraction of L.sub.x(r), for instance L.sub.x(r)/2.
[0065] In the embodiment above, the respective widths of the left and right intermediate kernel portions d.sub.B(r) are equal to L.sub.y(r). In a further embodiment, d.sub.B(r) can be equal to the product of L.sub.y(r) and the tangent of an angle, as illustrated in
[0066] In an embodiment, the feature extractor 101 is configured to use kernel #1 shown in
wherein w.sub.A1(r) denotes the kernel weight of the left outer kernel portion, w.sub.A2(r) denotes the kernel weight of the right outer kernel portion, w.sub.B(r) denotes the kernel weight of the left and right intermediate kernel portions, w.sub.C1(r) denotes the kernel weight of the left central kernel portion and w.sub.C2(r) denotes the kernel weight of the right central kernel portion. Kernel #1 is especially suited for detecting the difference of the average intensity between the lane marking and its surroundings.
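Since the weight equations themselves are not reproduced above, the sketch below assembles a five-portion horizontal kernel from assumed placeholder weights: positive central portions over the expected marking, negative outer portions over the surrounding road surface, and zero-weight intermediate portions as a transition buffer. The zero sum of the example weights makes the response insensitive to uniform brightness, which suits the difference-of-average-intensity detection described for kernel #1.

```python
import numpy as np

# Assemble a horizontal 1-D kernel from its five portions. The portion widths
# and all weight values below are assumed placeholders, not the patent's
# actual weight equations.

def build_kernel(d_A: int, d_B: int, d_C1: int, d_C2: int,
                 w_A1: float, w_A2: float, w_B: float,
                 w_C1: float, w_C2: float) -> np.ndarray:
    """Concatenate the five kernel portions into one horizontal 1-D kernel."""
    return np.concatenate([
        np.full(d_A, w_A1),   # left outer portion
        np.full(d_B, w_B),    # left intermediate portion
        np.full(d_C1, w_C1),  # left central portion
        np.full(d_C2, w_C2),  # right central portion
        np.full(d_B, w_B),    # right intermediate portion
        np.full(d_A, w_A2),   # right outer portion
    ])

k = build_kernel(d_A=3, d_B=1, d_C1=2, d_C2=2,
                 w_A1=-1.0, w_A2=-1.0, w_B=0.0, w_C1=1.5, w_C2=1.5)
print(len(k), float(k.sum()))
```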
[0067] Alternatively or additionally, the feature extractor 101 can be configured to use kernel #2 shown in
[0068] Kernel #2 is especially suited for detecting the uniformity of intensity in the region of the lane marking.
[0069] Alternatively or additionally, the feature extractor 101 can be configured to use kernel #3 shown in
[0070] Kernel #3 is especially suited for detecting the difference between the mean intensity of the lane and road surface to the left of the lane markers.
[0071] Alternatively or additionally, the feature extractor 101 can be configured to use kernel #4 shown in
[0072] Kernel #4 is especially suited for detecting the difference between the mean intensity of the lane and road surface to the right of the lane markers.
[0077] While a particular feature or aspect of the disclosure may have been disclosed with respect to only one of several implementations or embodiments, such a feature or aspect may be combined with one or more further features or aspects of the other implementations or embodiments as may be desired or advantageous for any given or particular application. Furthermore, to the extent that the terms "include", "have", "with", or other variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprise". Also, the terms "exemplary", "for example" and "e.g." are merely meant as an example, rather than the best or optimal. The terms "coupled" and "connected", along with derivatives thereof, may have been used. It should be understood that these terms may have been used to indicate that two elements cooperate or interact with each other regardless of whether they are in direct physical or electrical contact, or they are not in direct contact with each other.
[0078] Although specific aspects have been illustrated and described herein, it will be appreciated that a variety of alternate and/or equivalent implementations may be substituted for the specific aspects shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the specific aspects discussed herein.
[0079] Although the elements in the following claims are recited in a particular sequence with corresponding labeling, unless the claim recitations otherwise imply a particular sequence for implementing some or all of those elements, those elements are not necessarily intended to be limited to being implemented in that particular sequence.
[0080] Many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the above teachings. Of course, those skilled in the art readily recognize that there are numerous applications of the invention beyond those described herein. While the invention has been described with reference to one or more particular embodiments, those skilled in the art recognize that many changes may be made thereto without departing from the scope of the invention. It is therefore to be understood that within the scope of the appended claims and their equivalents, the invention may be practiced otherwise than as specifically described herein.