Method and device for detecting lanes, driver assistance system and vehicle

11417117 · 2022-08-16

Abstract

A method of detecting lanes includes the steps: capturing (S1) a camera image (K) of a vehicle environment by a camera device (2) of a vehicle (5); determining (S2) feature points (P1 to P15) in the camera image (K), which feature points correspond to regions of possible lane boundaries (M1, M2); generating (S3) image portions of the captured camera image (K) respectively around the feature points (P1 to P15); analyzing (S4) the image portions using a neural network to classify the feature points (P1 to P15); and determining (S5) lanes in the vehicle environment taking account of the classified feature points (P1 to P15).

Claims

1. A method of detecting a lane on a roadway, comprising the steps: with a camera of a vehicle, capturing a camera image of a vehicle environment including a roadway on which the vehicle is driving; determining feature points in the camera image by analyzing the camera image with image processing that does not use a neural network, wherein the feature points correspond to points on possible lane boundaries of at least one lane on the roadway; selecting a respective image portion of the camera image having a respective predefined size around each respective one of the feature points; analyzing the image portions using a neural network to classify the feature points thereof regarding whether or not the feature points represent actual lane boundaries; and determining the at least one lane on the roadway in the vehicle environment based on the feature points that have been classified as representing the actual lane boundaries.

2. The method according to claim 1, further comprising operating a driver assistance system of the vehicle in response to and dependent on the at least one lane that has been determined, to autonomously or semi-autonomously control a driving operation of the vehicle with respect to guidance thereof relative to a respective lane of the at least one lane that has been determined.

3. The method according to claim 1, further comprising operating a driver assistance system of the vehicle in response to and dependent on the at least one lane that has been determined, to output a warning signal to a driver of the vehicle when the vehicle leaves a respective lane of the at least one lane that has been determined.

4. The method according to claim 1, wherein the respective predefined size of the respective image portion is defined based on a predefined total number of pixels of the respective image portion, predefined pixel dimensions of the respective image portion, or a predefined physical size of the respective image portion.

5. The method according to claim 1, wherein the feature points are respective feature point pixels in the camera image, and the selecting of the respective image portion comprises defining the respective image portion having the predefined size around a respective one of the feature point pixels so that the respective feature point pixel is located within the respective image portion.

6. The method according to claim 1, wherein the feature points are respective feature point pixels in the camera image, and the selecting of the respective image portion comprises defining the respective image portion having the predefined size around a respective one of the feature point pixels so that the respective feature point pixel is located at a center of the respective image portion.

7. The method according to claim 1, wherein the image processing for the determining of the feature points uses edge detection algorithms.

8. The method according to claim 7, wherein the edge detection algorithms use Sobel filters.

9. The method according to claim 1, wherein the possible lane boundaries comprise lane markings.

10. The method according to claim 1, wherein the neural network is a convolutional neural network.

11. The method according to claim 1, further comprising teaching the neural network using predefined training data from a database.

12. The method according to claim 11, wherein the training data comprise illustrations of example lane boundaries and illustrations of example non-boundary structures that do not constitute lane boundaries.

13. The method according to claim 12, wherein the illustrations have been generated at least partially with different brightnesses and/or different times of day and/or different weather conditions.

14. The method according to claim 1, further comprising determining courses of the actual lane boundaries by interpolating between neighboring ones of the feature points that have been classified as representing the actual lane boundaries, and wherein the determining of the at least one lane is further based on the courses of the actual lane boundaries.

15. A device for detecting a lane on a roadway, comprising: an interface configured to receive, from a camera device of a vehicle, a camera image of a vehicle environment including a roadway on which the vehicle is driving; and a computing apparatus which is configured: to determine feature points in the camera image by analyzing the camera image with image processing that does not use a neural network, wherein the feature points correspond to points on possible lane boundaries of at least one lane on the roadway; to select a respective image portion of the camera image having a respective predefined size around each respective one of the feature points; to analyze the image portions using a neural network to classify the feature points thereof regarding whether or not the feature points represent actual lane boundaries; and to determine the at least one lane on the roadway in the vehicle environment based on the feature points that have been classified as representing the actual lane boundaries.

16. The device according to claim 15, wherein the computing apparatus is configured to perform the image processing for determining the feature points by using edge detection algorithms.

17. The device according to claim 16, wherein the computing apparatus is configured to use Sobel filters for the edge detection algorithms.

18. The device according to claim 15, wherein the computing apparatus is further configured to determine courses of the actual lane boundaries by interpolating between neighboring ones of the feature points that have been classified as representing the actual lane boundaries, and to determine the at least one lane further based on the courses of the actual lane boundaries.

19. A driver assistance system for a vehicle, comprising: a device for detecting a lane on a roadway according to claim 15, and the camera device configured to capture the camera image of the vehicle environment of the vehicle.

20. A vehicle comprising the driver assistance system according to claim 19 and a vehicle body.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) The present invention is explained in greater detail below with reference to the embodiment examples indicated in the schematic figures of the drawings.

(2) Therein:

(3) FIG. 1 shows a schematic block diagram of a device for detecting lanes according to an embodiment of the invention;

(4) FIG. 2 shows a schematic camera image captured by a camera device;

(5) FIG. 3 shows illustrations as training data for a neural network;

(6) FIG. 4 shows lane boundaries detected in a camera image;

(7) FIG. 5 shows a schematic block diagram of a driver assistance system;

(8) FIG. 6 shows a schematic block diagram of a vehicle; and

(9) FIG. 7 shows a flow chart of a method for detecting lanes.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS OF THE INVENTION

(10) Further possible configurations, further developments and implementations of the invention also comprise combinations of features of the invention described above or below with respect to the embodiment examples, which are not explicitly indicated.

(11) The appended drawings are intended to convey a further understanding of the embodiments of the invention. They illustrate embodiments and, in connection with the description, serve to explain the principles and concepts of the invention. Other embodiments and many of the indicated advantages are set out with respect to the drawings. The same reference numerals indicate the same or similarly acting components.

(12) FIG. 1 shows a schematic block diagram of a device 1 for detecting lanes.

(13) The device 1 comprises an interface 11 which is configured to receive and output data wirelessly or via a wired connection. In particular, the interface 11 receives camera data and transfers said data to a computing apparatus 12 of the device 1. The camera data comprise at least one camera image which has been generated by a camera device 2 of a vehicle. The camera image can also be combined from multiple individual images of a vehicle camera of the camera device 2 or from multiple images of a plurality of vehicle cameras of the camera device 2.

(14) The computing apparatus 12 analyzes the camera image by means of image detection methods in order to extract feature points which correspond to regions having lane boundaries in the vehicle environment. The computing apparatus 12 comprises at least one microprocessor in order to perform the calculation steps.

(15) The computing apparatus 12 generates a respective image portion around each of the feature points. This image portion serves as an input variable for a neural network which assesses the image portion. The neural network is preferably a convolutional neural network. A probability with which the image portion illustrates a lane boundary is calculated by means of the neural network. If the probability exceeds a predefined threshold, the computing apparatus 12 detects that the feature point of the image portion corresponds to a lane boundary.

(16) The feature points classified in such away are further evaluated by the computing apparatus 12, in order to determine lanes in the environment of the vehicle. Thus, the computing apparatus 12 can determine the course of the lane boundaries in the camera image by interpolating feature points neighboring each other, which correspond to lane boundaries. The regions running between lane boundaries can be identified as lanes, and the computing apparatus 12 can generate an environment model.

(17) Individual aspects of the device 1 are depicted more precisely below on the basis of FIGS. 2 to 4.

(18) Thus, FIG. 2 shows a camera image K captured by a camera device 2. The illustrated objects comprise a right lane marking 51, a middle lane marking 52, a guardrail 53 located at the edge of the right lane and a vehicle 54 driving on a parallel lane.

(19) The computing apparatus 12 analyzes the pixels of the camera image K by means of traditional edge detection methods. In particular, the computing apparatus 12 can apply a Sobel filter to each pixel in order to detect an edge on or in the surroundings of the pixel. The Sobel filter can take account of 3×3 pixels in the surroundings of the pixel to be examined, but it can also allow for a larger surrounding region of the pixel.

(20) The computing apparatus 12 can establish for each pixel whether the pixel is located at or in the proximity of an edge. In particular, the computing apparatus 12 can compare the value calculated by means of the Sobel filter with a predefined threshold. If the threshold is exceeded, the computing apparatus 12 establishes that the pixel is a feature point which corresponds to a possible lane boundary.
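The edge test described in paragraphs (19) and (20) can be sketched as follows. This is a minimal illustration only, assuming the camera image is available as a grayscale NumPy array; the function names and the fixed 3×3 kernel size are assumptions, not the patented implementation.

```python
import numpy as np

# 3x3 Sobel kernels for horizontal and vertical intensity gradients
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(gray):
    """Gradient magnitude of a 2-D grayscale image via 3x3 Sobel kernels."""
    h, w = gray.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(SOBEL_X * patch)
            gy = np.sum(SOBEL_Y * patch)
            out[y, x] = np.hypot(gx, gy)
    return out

def feature_points(gray, threshold):
    """Pixels whose Sobel response exceeds the threshold become
    candidate feature points for possible lane boundaries."""
    mag = sobel_magnitude(gray)
    ys, xs = np.nonzero(mag > threshold)
    return list(zip(xs.tolist(), ys.tolist()))
```

A pixel on a sharp vertical edge (e.g. the side of a lane marking) produces a large horizontal gradient and is therefore retained as a feature point.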

(21) In the camera image K shown in FIG. 2, the computing apparatus 12 determines a total of 15 feature points P1 to P15. This is only to be understood by way of example. In general, a larger number of feature points is generated.

(22) The computing apparatus 12 generates an image portion B1 to B3 for each feature point P1 to P15. For the sake of simplicity, only the image portions for the first three feature points P1 to P3 are marked in FIG. 2. The image portions B1 to B3 can have a predefined size of, for example, 128×128 pixels. The feature point P1 to P15 is, in each case, preferably arranged in the center of the respective image portion B1 to B3.
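The selection of an image portion of predefined size centered on a feature point, as in paragraph (22), can be sketched as below. The zero-padding at the image border and the function name are assumptions made for a self-contained example.

```python
import numpy as np

def extract_portion(image, point, size=128):
    """Crop a size x size portion centered on the feature point (x, y).
    Regions falling outside the image are zero-padded so that the
    output always has the predefined size."""
    x, y = point
    half = size // 2
    h, w = image.shape[:2]
    out = np.zeros((size, size) + image.shape[2:], dtype=image.dtype)
    # Intersection of the desired window with the image bounds
    y0, y1 = max(0, y - half), min(h, y - half + size)
    x0, x1 = max(0, x - half), min(w, x - half + size)
    out[y0 - (y - half):y1 - (y - half),
        x0 - (x - half):x1 - (x - half)] = image[y0:y1, x0:x1]
    return out
```

The fixed output size matters because the neural network that classifies the portions expects inputs of a constant shape.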

(23) The generation of the neural network used to further analyze the image portions B1 to B3 is explained in greater detail in FIG. 3. Accordingly, a database DB with training data is first produced. The training data comprise illustrations which have been captured by means of a vehicle camera. These are preferably manually classified into two groups. A first group Ta1 to Ta4 comprises illustrations which show lane boundaries. The illustrations can exclusively comprise images of lane markings. However, according to further embodiments, the illustrations can also show curbsides or further lane boundaries. In order to make the classification robust, the illustrations can be produced under various brightnesses or weather conditions. The second group comprises illustrations Tb1 to Tb4 which show objects with edges which are not, however, lane boundaries. These can be illustrations of vehicles Tb1, Tb3, guardrails Tb2 or bridges Tb4.

(24) The neural network is then trained in such a way that the illustrations of the first group Ta1 to Ta4 are classified as illustrations of lane boundaries, while the illustrations of the second group Tb1 to Tb4 are classified as illustrations which do not show lane boundaries. Following the training phase, the computing apparatus 12 can classify any image portions B1 to B3 by means of the neural network. To this end, a probability that the image portion B1 to B3 is an illustration of a lane boundary can first be output by means of the neural network. If the calculated probability exceeds a predefined threshold, for example 0.5, the computing apparatus 12 classifies the image portion B1 to B3 as corresponding to a lane boundary.
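The accept/reject decision of paragraph (24) can be sketched independently of any particular network architecture: the trained network is abstracted as a callable that returns a probability, and the threshold of 0.5 is applied to it. The stand-in `dummy_net` below is purely illustrative and is not a convolutional network.

```python
def classify_feature_points(points, portions, net, threshold=0.5):
    """Keep only those feature points whose image portion the network
    scores as a lane boundary with probability above the threshold."""
    kept = []
    for point, portion in zip(points, portions):
        if net(portion) > threshold:
            kept.append(point)
    return kept

# Illustrative stand-in for a trained convolutional network: here the
# "probability" is simply the mean value of the portion. A real system
# would use the trained network's output instead.
def dummy_net(portion):
    return sum(portion) / len(portion)
```

Feature points whose portions fall below the threshold are discarded as misdetections, as with the guardrail and vehicle points in FIG. 2.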

(25) For the feature points P1 to P15 of the camera image K, the computing apparatus 12 detects, for example, that the feature points P2, P5 to P8 of the middle lane marking 52 and the feature points P1, P9 to P13 of the right lane marking 51 are feature points which correspond to lane boundaries. Conversely, the feature points P3, P4, P14, P15 of the guardrail 53 and of the vehicle 54 are discarded as misdetections, since the illustrated objects are not lane boundaries.

(26) In order to determine the lanes, the computing apparatus 12 preferably includes only those feature points P1 to P15 which have been detected as corresponding to lane boundaries.

(27) The computing apparatus 12 can then determine the corresponding lane boundaries by interpolating the neighboring remaining feature points P1 to P15 or pixels.
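The interpolation of paragraphs (26) and (27) can be sketched as a least-squares fit through the remaining feature points of one boundary. Fitting the image column x as a polynomial in the image row y is one common choice and is an assumption here, as is the polynomial degree.

```python
import numpy as np

def boundary_course(points, degree=2):
    """Fit the course of one lane boundary through its classified
    feature points (x, y) by interpolating x over the image row y.
    Returns a callable giving the boundary's x position for any row."""
    pts = sorted(points, key=lambda p: p[1])          # order by row
    xs = np.array([p[0] for p in pts], dtype=float)
    ys = np.array([p[1] for p in pts], dtype=float)
    coeffs = np.polyfit(ys, xs, deg=min(degree, len(pts) - 1))
    return lambda y: float(np.polyval(coeffs, y))
```

Two such fitted courses, e.g. for the boundaries M1 and M2, then delimit the lane F running between them.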

(28) As shown in FIG. 4, the computing apparatus 12 detects, for example, a first lane boundary M1 and a second lane boundary M2 for the camera image K illustrated in FIG. 2. The computing apparatus 12 accordingly determines that a lane F runs between the lane boundaries M1, M2.

(29) The described detection of the lanes is preferably performed iteratively, wherein the lane boundaries and lanes already detected are updated.

(30) A block diagram of a driver assistance system 4 for a vehicle according to an embodiment of the invention is depicted in FIG. 5. The driver assistance system 4 comprises a camera device 2 which has one or a plurality of vehicle cameras which are arranged or arrangeable on the vehicle.

(31) The driver assistance system 4 additionally comprises the device 1 for detecting lanes described above. The device 1 comprises the interface 11 described above, which receives the camera images captured by the camera device 2, as well as the computing apparatus 12, which determines lanes F on the basis of the camera images.

(32) The driver assistance system 4 can comprise a control device 3 which can control specific driving functions of the vehicle. Thus, the control device 3 can control the vehicle as a function of the detected lanes in such a way that the vehicle is accelerated, braked or steered. The driver assistance system 4 can, as a result, make possible a semi-autonomous or autonomous control of the vehicle. The control device 3 can additionally be configured to output a warning signal if the vehicle leaves the detected lane F in order to warn the driver against an unintentional departure from the lane F.
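The warning condition of paragraph (32) reduces to a lateral position check against the detected boundaries. The coordinate convention and the optional safety margin below are assumptions for illustration, not the claimed control logic.

```python
def lane_departure_warning(vehicle_x, left_boundary_x, right_boundary_x, margin=0.0):
    """Return True when the vehicle's lateral position lies outside the
    detected lane boundaries (optionally widened by a safety margin),
    i.e. when a departure warning should be output to the driver."""
    return (vehicle_x < left_boundary_x - margin
            or vehicle_x > right_boundary_x + margin)
```

A control device could evaluate this condition each cycle against the most recently updated lane F and trigger the warning signal, or a steering intervention, accordingly.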

(33) A block diagram of a vehicle 5 according to an embodiment of the invention is depicted in FIG. 6. The vehicle 5 can, for instance, be a car, a truck or a motorcycle. The vehicle 5 comprises a driver assistance system 4, which is described above, comprising a device 1 for detecting lanes F in the surroundings of the vehicle 5.

(34) FIG. 7 shows a flow chart of a method for detecting lanes F according to an embodiment of the invention.

(35) In a method step S1, a camera image of a vehicle environment is captured by means of a camera device 2. To this end, multiple individual images can also be combined.

(36) In the further method step S2, the individual pixels of the camera image are evaluated by means of an edge detection method, in order to determine feature points P1 to P15. To this end, Sobel filters can for example be used in order to detect edges. If the values calculated by means of the Sobel filter exceed a predefined threshold, the pixels are identified as feature points P1 to P15 which can correspond to lane boundaries.

(37) An image portion B1 to B3 is generated around each feature point P1 to P15 in a method step S3. The feature point P1 to P15 can preferably be located in a center of a square image portion B1 to B3. The size of the image portion can, for example, be 128×128 pixels. However, the invention is not restricted to this. Thus, the image portion does not necessarily have to have a square or rectangular configuration. The form of the image portion can be selected, for example, as a function of the perspective representation of the camera device 2.

(38) In a method step S4, the image portions are analyzed using a neural network. To this end, the neural network is produced or taught on the basis of training data from a database. The training data comprise the illustrations of lane boundaries and of lane surroundings without lane boundaries described in connection with FIG. 3. Following the training phase, the neural network is configured to analyze and classify any image portions. For each image portion, it is detected whether the illustrated region of the vehicle environment shows a lane boundary or not, and the feature points are classified accordingly. Those feature points which are classified as a lane boundary by means of the neural network are further evaluated, while the remaining feature points are discarded.

(39) On the basis of the remaining feature points, the courses of lane boundaries M1, M2 are determined in a method step S5. On the basis of the courses of the lane boundaries M1, M2, lanes F which are usable by the vehicle 5 are detected.

(40) On the basis of the detected lanes F, warning signals can additionally be output or a semi-autonomous or autonomous control of the vehicle 5 can be performed.

REFERENCE NUMERALS

(41) 1 Device for detecting lanes 2 Camera device 3 Control device 4 Driver assistance system 5 Vehicle 11 Interface 12 Computing apparatus 51 Right lane marking 52 Middle lane marking 53 Guardrail 54 Vehicle F Lane M1, M2 Lane boundaries P1 to P15 Feature points B1 to B3 Image portions Ta1 to Ta4 Illustrations of lane boundaries Tb1 to Tb4 Illustrations of objects which are not lane boundaries