METHODS AND SYSTEMS FOR ESTIMATING LANES FOR A VEHICLE

20230118134 · 2023-04-20

    Abstract

    A computer implemented method for estimating lanes for a vehicle may include the following steps carried out by computer hardware components: determining measurement data at a location of the vehicle using a sensor mounted at the vehicle; transforming the measurement data of the sensor into a global coordinate system to obtain transformed measurement data; and estimating lanes at the location for the vehicle based on the transformed measurement data.

    Claims

    1. Computer implemented method for estimating lanes for a vehicle, the method comprising the following steps carried out by computer hardware components: determining measurement data at a location of the vehicle using a sensor mounted at the vehicle; transforming the measurement data of the sensor into a global coordinate system to obtain transformed measurement data; and estimating lanes at the location for the vehicle based on the transformed measurement data.

    2. The method of claim 1, wherein the measurement data comprises estimates for lane markings.

    3. The method of claim 1, wherein the measurement data comprises estimates for trails of objects.

    4. The method of claim 1, wherein the measurement data are determined from several drives of the vehicle and/or from several drives of a plurality of recording vehicles.

    5. The method of claim 1, wherein the sensor comprises a radar sensor and/or a camera.

    6. The method of claim 1, further comprising the following step carried out by the computer hardware components: determining the location of the vehicle.

    7. The method of claim 6, wherein the determining of the location of the vehicle is based on simultaneous localization and mapping and/or a GPS system and/or a dGPS system and/or an inertial measurement unit.

    8. The method of claim 1, further comprising the following step carried out by the computer hardware components: checking a plausibility of the lanes, wherein the plausibility is based on physical assumptions regarding a driving behavior of the vehicle and/or physical assumptions regarding a driving behavior of other vehicles.

    9. The method of claim 8, wherein the physical assumptions regarding the driving behavior of the vehicle comprise assumptions regarding a velocity of the vehicle and/or assumptions regarding a yaw-rate of the vehicle.

    10. The method of claim 8, wherein the physical assumptions regarding the driving behavior of the other vehicles comprise an acceleration assumption of the other vehicles and/or a braking assumption of the other vehicles and/or a yaw-rate assumption of the other vehicles.

    11. The method of claim 1, further comprising the following step carried out by the computer hardware components: estimating uncertainties of the transformed measurement data.

    12. The method of claim 1, wherein the lanes are estimated further based on weights with a confidence value.

    13. Computer system, the computer system comprising a plurality of computer hardware components configured to carry out the steps of the computer implemented method of claim 1.

    14. Vehicle, comprising the computer system of claim 13 and the sensor, wherein the measurement data is determined based on an output of the sensor.

    15. Non-transitory computer readable medium comprising instructions for carrying out the computer implemented method of claim 1.

    Description

    DRAWINGS

    [0039] Exemplary embodiments and functions of the present disclosure are described herein in conjunction with the following drawings, showing schematically:

    [0040] FIG. 1A a flow diagram illustrating a method for estimating lanes for a vehicle according to various embodiments;

    [0041] FIG. 1B a flow diagram illustrating a method for estimating lanes for a vehicle comparing a first preliminary estimate of lanes and a second preliminary estimate of lanes;

    [0042] FIG. 2 a flow diagram illustrating a comparison of a first preliminary estimate of lanes and a second preliminary estimate of lanes;

    [0043] FIGS. 3A, 3B a lane estimation based on estimated lane markings and estimated trails;

    [0044] FIGS. 3C, 3D a lane estimation based on estimated trails;

    [0045] FIG. 3E a lane estimation based on estimated lane markings and estimated trails, considering drivability of lanes;

    [0046] FIG. 3F a lane estimation based on estimated trails, considering an object detection based on the estimated trails;

    [0047] FIG. 3G a lane estimation based on estimated lane markings and estimated trails, considering a variance of estimated trails;

    [0048] FIG. 4 a flow diagram illustrating a method for estimating lanes for a vehicle based on transforming measurement data into a global coordinate system;

    [0049] FIG. 5 a flow diagram illustrating a method for estimating lanes for a vehicle according to various embodiments;

    [0050] FIG. 6 a flow diagram illustrating a method for estimating lanes for a vehicle according to various embodiments; and

    [0051] FIG. 7 a computer system with a plurality of computer hardware components configured to carry out steps of a computer implemented method for estimating lanes for a vehicle according to various embodiments.

    DETAILED DESCRIPTION

    [0052] Lane estimation may be based on different sensor data (e.g., camera, radar sensors and/or LiDAR sensors) and may use neural networks or classical methods not based on machine learning. High Definition (HD) lane maps may be generated for a given region or a location of a vehicle based on sensor data of many recording drives of recording vehicles using a method for Lane Map Aggregation (LMA). This may require a global localization of the recording vehicles in a global coordinate system (GCS), since the lanes must be given in the GCS as well. The (recording) vehicles may record various sensor data, e.g., data from a camera, a LiDAR sensor and/or a radar sensor. Based on this data, lane estimates may be derived from different detections, e.g., from camera-based or LiDAR-based lane marking detections and/or from LiDAR-based or radar-based object detections and tracking, yielding trails of other vehicles. Thus, multiple lane estimates may be obtained for estimating a true lane. The multiple estimates of lanes - from multiple recording drives and/or multiple and different detections - may be used to get a reliable estimate of lanes and their position by aggregating the multiple estimates of lanes.

    [0053] Therefore, lane map aggregation may be a process of combining multiple object detections or landmark detections from multiple recording drives of the same location into a single, more robust representation.

    [0054] The process may include determining a first preliminary estimate of lanes based on a plurality of lane markings at a location of the vehicle by aggregating the plurality of lane markings detected from several drives of the vehicle and/or from several drives of a plurality of recording vehicles. Further, the process may include determining a second preliminary estimate of lanes based on a plurality of trails of objects at the location of the vehicle by aggregating the plurality of trails of objects detected from several drives of the vehicle and/or from several drives of a plurality of recording vehicles. Then the (aggregated) preliminary estimates of lanes, together with the information on which data they are based (lane markings or trails), may be evaluated to get a final estimate of lanes with a confidence score. The aggregation may be for instance a geometric mean of the plurality of lane markings or trails or an arithmetic mean of the plurality of lane markings or trails. The confidence score may be an output of a lane function l(t) or l(t, m), which may take as input a vector containing the (aggregated) trails information or the (aggregated) trails information and (aggregated) lane markings information and may output a confidence value for the lane estimate. Thus, the confidence score may be derived from criteria of the aggregated lane markings and trails and additional information, for example, how many trails are used in the aggregation and from how many recordings the trails are estimated. The confidence of the final lane estimation may give information on whether the specified lane is drivable, i.e. non-blocked. The evaluation may include a comparison of the first preliminary estimate of lanes and the second preliminary estimate of lanes and a determination of a final estimate of lanes at the location of the vehicle based on the comparison.
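    As an illustration, the arithmetic-mean aggregation mentioned above may be sketched as follows, assuming a simplified representation in which each drive contributes a list of lateral offsets of the same lane marking sampled at common longitudinal positions (the function name and data layout are hypothetical):

```python
from statistics import mean

def aggregate_detections(detections):
    # One inner list per drive; each list holds lateral offsets of the
    # same lane marking sampled at identical longitudinal positions.
    # The arithmetic mean across drives is taken at each sample position.
    return [mean(samples) for samples in zip(*detections)]

# Three drives observing the same marking with lateral noise:
drives = [
    [2.0, 2.0, 2.0],
    [1.5, 2.5, 2.0],
    [2.5, 1.5, 2.0],
]
print(aggregate_detections(drives))  # -> [2.0, 2.0, 2.0]
```

    The per-drive noise cancels out in the aggregate, which is the effect the text attributes to combining several recording drives of the same location.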

    [0055] In other words, the process describes a robust map-aggregated estimation of lanes based on the combination of two subsystems. One subsystem may be based on lane markings detection, wherein data obtained by a LiDAR sensor or a camera may be used. The plurality of thereby estimated preliminary lane markings from possibly several recording drives of the vehicle and/or several recording drives of other recording vehicles may then be aggregated into a first preliminary estimate of lanes. The other subsystem may follow a different approach and may estimate lanes based on object trails. These object trails may become available through an object detection method (e.g., LiDAR and/or camera and/or radar based). The trails of other road users may then be aggregated to describe another set of estimated lanes, a second preliminary estimate of lanes. Subsequently, both sets of estimated lanes for an area or a location around the vehicle from the two subsystems may be compared, such that each estimation method may benefit from the results of the other. This may result in a more reliable final estimate of lanes at the location of the vehicle than when using only a single lane detection method, and may even allow additional information to be inferred, such as whether a lane is blocked or drivable.

    [0056] FIG. 1A shows a flowchart illustrating a method for estimating lanes for a vehicle according to various embodiments. At 102, sensor data 122, 124, 126 may be determined using a first sensor and a second sensor. At 112, using a localization system, a position 136 may be estimated in a world coordinate system based on sensor data 126. At 115, lanes from lane markings may be estimated based on the sensor data 122 and the position estimates 136. At 117, lanes from trails may be estimated based on the sensor data 124 and the position estimates 136. At 118, the estimated lanes 138 from lane markings and the estimated lanes 140 from trails may be compared. At 120, the final estimate of lanes may be determined based on the comparing of step 118. A detailed description of the steps will follow below.

    [0057] According to one embodiment, the method of estimating lanes for a vehicle may be based on determining map-aggregated road lanes by cross-checking two lane detection and estimation systems. The method may comprise the following steps carried out by computer hardware components: running a lane marking detection method to detect lanes as one subsystem; running an object detection method to detect other road users as another subsystem; aggregating the lane markings of the lane marking subsystem in a global map (using a localization system) to obtain estimates for where the lanes are; aggregating trails of other road users obtained from the object detection subsystem in a global map (using a localization system) to get a separate, independent estimate for where the lanes are; and cross-checking the lanes coming from both subsystems to obtain information on which lanes are actually drivable, non-blocked, and/or obstacle-free.

    [0058] According to one embodiment, the method of estimating lanes for a vehicle based on two subsystems is described in the following detailed description of FIG. 1B. FIG. 1B shows a flow diagram 101 illustrating a method for estimating lanes for a vehicle comparing a first preliminary estimate of lanes and a second preliminary estimate of lanes. At 102, sensor data 122, 124, 126 may be determined using a first sensor and a second sensor. The first sensor and/or the second sensor may be mounted at the vehicle or may be mounted at other recording vehicles that may be different from the vehicle. The vehicle and/or the other vehicles may be part of a vehicle fleet, for example a vehicle fleet of a company. The sensor data 122, 124, 126 may be determined from several drives of the vehicle and/or from several drives of a plurality of recording vehicles. At 104, a position and a type of lane markings may be determined based on the sensor data 122. For this purpose, an appropriate method may be used, e.g., an image recognition method with neural networks or a classical method not based on machine learning. The sensor data 122 may be determined using a camera and/or a LiDAR sensor, or any other suitable sensor. At 108, estimates of lane markings 128 obtained from a plurality of sensor data 122 may be tracked using a tracker. The tracker may identify an object (for example lane markings or another road user) over multiple frames. A plurality of lane markings 132 may include uncertainty estimates of those lane markings (e.g. determined by a standard deviation method), wherein the uncertainty estimates for the lane markings may be determined, for example, by the tracker. The tracker may provide an uncertainty value; for example, a tracker using a Kalman filter or a particle filter may provide uncertainty information directly. Otherwise, the uncertainty values may also be defined separately.

    [0059] At 106, objects around the vehicle, preferably other road users, may be determined based on the sensor data 124. The objects may be other vehicles or bicycles or the like. The sensor data 124 may be determined using a radar sensor and/or a LiDAR sensor, or any other suitable sensor. At 110, the object estimates 130 determined from a plurality of sensor data 124 may be tracked using a tracker. Thus, trajectories or trails of the other road users, preferably other vehicles, may be determined. A plurality of trails 134 may include uncertainty estimates of those trails (e.g. determined by a standard deviation method), wherein the uncertainty estimates for the trails may be determined, for example, by the tracker.

    [0060] At 112, a position and a pose of the vehicle may be determined in a world coordinate system based on sensor data 126. To determine the position and/or the pose of the vehicle, hardware such as a dGPS system, or an appropriate method such as an elaborate simultaneous localization and mapping (SLAM) method, may be used. SLAM systems may be based on camera sensors, LiDAR sensors, radar sensors, ordinary GPS sensors or a combination of those sensors. Additionally, inertial measurement unit (IMU) sensors may be used for better performance.

    [0061] At 114, the plurality of estimated lane markings 132 may be aggregated, wherein uncertainties may be considered. The uncertainties of the estimates may be used as a weight in the aggregation, wherein the aggregation may be for instance a weighted average (or weighted mean) of the plurality of estimated lane markings 132. In other words, the plurality of estimated lane markings 132 may be combined from several drives and/or from several drives of multiple recording vehicles recorded at the same position 136 to determine a combined, more accurate estimate of the lane markings at this position 136. The combined or aggregated lane markings may be used to determine a first preliminary estimate of lanes 138.

    [0062] At 116, the plurality of estimated trails 134 may be aggregated, wherein uncertainties may be considered. The uncertainties of the estimates may be used as a weight in the aggregation, wherein the aggregation may be for instance a weighted average (or weighted mean) of the plurality of estimated trails 134. In other words, the plurality of estimated trails 134 may be combined from several drives and/or from several drives of multiple recording vehicles recorded at the same position 136 to determine a combined, more accurate estimate of where other road users may have driven at this position 136. The combined or aggregated trails may be used to determine a second preliminary estimate of lanes 140 based on a distribution of the trails.
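    The uncertainty-weighted aggregation described in the two steps above may be sketched, for a single sample position, as an inverse-variance weighted mean. This concrete weighting is an assumption for illustration; the text only states that the uncertainties act as weights in a weighted average:

```python
def inverse_variance_mean(values, sigmas):
    # Each detection is weighted by the inverse of its variance, so
    # uncertain detections contribute less to the aggregated estimate.
    weights = [1.0 / (s * s) for s in sigmas]
    weighted_sum = sum(w * v for w, v in zip(weights, values))
    return weighted_sum / sum(weights)

# Two precise detections at 3.0 m and one noisy detection at 3.6 m:
print(round(inverse_variance_mean([3.0, 3.0, 3.6], [0.1, 0.1, 0.5]), 3))  # -> 3.012
```

    The noisy outlier pulls the aggregate only slightly, which matches the intent of letting uncertain lane marking or trail estimates count less.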

    [0063] At 118, the first preliminary estimated lanes 138 from lane markings and the second preliminary estimated lanes 140 from trails may be compared. The first preliminary estimated lanes 138 may indicate where lanes are according to the available information about lane markings. Lane markings may not necessarily coincide with drivable lanes. In many situations, lane markings could be visible, but the lane would still not be drivable because there is an obstacle, a construction area, or a prohibition to use the lane. The second preliminary estimated lanes 140 may give an indication on where other road users have driven and thereby may give a hint on which lane might actually be usable.

    [0064] At 120, the final estimate of lanes may be determined based on the comparing of step 118. By combining the two methods, i.e. estimating lanes based on a plurality of lane markings and estimating lanes based on a plurality of trails, an accurate position of lanes (from lane markings) may be obtained together with the information whether these lanes may actually be used. In addition, the lane estimates may be more robust by combining a lane marking detection method based on one sensor with a trail detection method based on another sensor with another working principle. For example, the lane marking detection may be based on data observed by a camera while the trail detection may be based on data observed by a LiDAR sensor. Those sensors may have different failure modes. When these two methods are combined as described herein, reliable lanes may be estimated in most circumstances.

    [0065] FIG. 2 shows a flow diagram 200 illustrating a comparison of the first preliminary estimate of lanes and the second preliminary estimate of lanes. At 202, it may be checked whether the second preliminary estimated lanes based on a plurality of trails are available. This may be the case if a minimum number of trails, i.e. trajectories of tracked vehicles, with respect to the number of recording drives is available. Depending on the environment, the number of trails might be different. For example, on highways there is in general more traffic. Thus, the minimum number of trails may be, e.g., 8 trails from 10 recordings of a specific location on highways. For a sub-urban environment, the minimum number of trails may be, e.g., 5 trails from 10 recordings, as such an environment may have less traffic. Adapting the threshold to the location of the vehicle may ensure that a sufficient number of trails is available. The minimum number of trails may be a predetermined trail threshold.
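    The environment-dependent trail threshold may be sketched as follows, using the example numbers from the text (8 of 10 recordings on highways, 5 of 10 in a sub-urban environment); the dictionary and function names are illustrative:

```python
# Minimum number of trails per 10 recordings, per environment type
# (example values from the text; other environments would need tuning).
MIN_TRAILS_PER_10_RECORDINGS = {"highway": 8, "suburban": 5}

def enough_trails(n_trails, n_recordings, environment):
    # Scale the per-10-recordings threshold to the actual recording count.
    required = MIN_TRAILS_PER_10_RECORDINGS[environment] * n_recordings / 10
    return n_trails >= required

print(enough_trails(8, 10, "highway"))   # -> True
print(enough_trails(6, 10, "highway"))   # -> False
print(enough_trails(6, 10, "suburban"))  # -> True
```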

    [0066] At 204, if there are no second estimated lanes based on a plurality of trails available or the number of second estimated lanes based on a plurality of trails is below the predetermined trail threshold, the process of estimating lanes as described herein may be terminated. More recordings of the same location may be needed for estimating lanes for the vehicle.

    [0067] At 206, if second estimated lanes based on a plurality of trails are available and the number of second estimated lanes based on a plurality of trails is above the predetermined trail threshold, it may be checked whether first preliminary estimates of lanes based on a plurality of lane markings are available. That may be the case for a position or location or area of the vehicle if, e.g., at least 80% or at least 90% of the recording drives contain lane markings detections. Because some environments may not contain lane markings, the definition of a predetermined number of estimated lanes based on lane markings may ensure that lane markings are really available. Furthermore, the predetermined lane marking threshold may avoid relying on lane marking detections that are only false positives, e.g., if a lane marking detection is given in only 1 of 10 recordings.

    [0068] At 208, if no first preliminary estimates of lanes based on a plurality of lane markings are available or the number of first preliminary estimates of lanes based on a plurality of lane markings is below the predetermined lane marking threshold, the determination of the final estimate of lanes 212 may only be based on the second preliminary estimation of lanes based on the trails by a basic lane function l(t), wherein t may describe a dependency of estimated lanes based on trails. Mathematically l(t) may be a function

    [00001] l(t): ℝⁿ → [0, 1]

    which may take as input a vector containing the (aggregated) trail information and may output a confidence value for the lane estimate.
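    A minimal sketch of such a basic lane function l(t) is given below; the text only fixes the signature (a vector of aggregated trail information mapped to a confidence in [0, 1]), so the concrete scoring rule here is an assumption:

```python
def lane_confidence_from_trails(n_trails, n_recordings, mean_sigma):
    # Support: fraction of recordings in which a trail was observed.
    support = min(n_trails / max(n_recordings, 1), 1.0)
    # Precision: penalize lanes aggregated from uncertain trails.
    precision = 1.0 / (1.0 + mean_sigma)
    return support * precision  # confidence in [0, 1]

print(round(lane_confidence_from_trails(8, 10, 0.25), 2))  # -> 0.64
```

    Any rule of this shape would satisfy the stated requirements: more supporting trails and lower trail uncertainty increase the confidence, and the output stays within [0, 1].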

    [0069] At 210, if first preliminary estimates of lanes based on a plurality of lane markings are available and the number of first preliminary estimates of lanes based on a plurality of lane markings is above the predetermined lane marking threshold, a comparison of the first preliminary estimate of lanes and the second preliminary estimate of lanes may be performed to determine the final estimated lanes 212. The comparison may be divided into multiple scenarios, which may be expressed with a lane function l(t, m), wherein t may describe a dependency of estimated lanes based on trails and m may describe a dependency of estimated lanes based on lane markings. Mathematically l(t, m) may be a function

    [00002] l(t, m): ℝⁱ → [0, 1]

    which may take as input a vector containing the (aggregated) trail information and lane markings information and may output a confidence value for the lane estimate. In other words, the lane function l(t, m) may take input information from the corresponding lane detection, e.g., the number of lanes with respect to the number of recordings and the estimated uncertainties from the estimated lanes based on lane markings and the estimated lanes based on trails. For example, if there are lane estimates from trails and lane estimates from lane markings with high confidences, but both lanes intersect, then the true lane may be blocked. If the lane estimates from trails and the lane estimates from lane markings do not intersect, have a logical distance to each other and are parallel, then the lane estimate may be confident. The lane function l(t, m) may be deterministic, identifying predefined scenarios (as shown in FIGS. 3A to 3G), or may be based on an artificial intelligence (AI) model such as a neural network.
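    The deterministic variant of l(t, m) may, for instance, implement the intersect/parallel scenarios described above. In the following simplified sketch each lane estimate is reduced to a single straight 2-D segment, and the confidence values 0.9 and 0.1 are illustrative assumptions:

```python
def segments_intersect(p1, p2, q1, q2):
    # Standard 2-D segment intersection test via signed-area orientations.
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(q1, q2, p1); d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1); d4 = cross(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def compare_lanes(trail_lane, marking_lane):
    # Hypothetical deterministic l(t, m): low confidence when the two
    # estimates intersect (the true lane may then be blocked), high
    # confidence when they are consistent.
    if segments_intersect(*trail_lane, *marking_lane):
        return 0.1  # contradictory estimates -> lane possibly blocked
    return 0.9      # consistent estimates -> confident lane estimate

parallel = compare_lanes(((0, 0), (100, 0)), ((0, 3.5), (100, 3.5)))
crossing = compare_lanes(((0, 0), (100, 0)), ((0, -2), (100, 2)))
print(parallel, crossing)  # -> 0.9 0.1
```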

    [0070] The output of steps 208 or 210 may be a final estimate of lanes 212 at the location of the vehicle with a confidence value, wherein the confidence value may consider whether the final estimate of lanes may be drivable lanes, i.e., for example, not blocked by an object or due to a traffic jam. Every lane estimate output by the system may provide a confidence value, which may depend, for example, on the number of recordings.

    [0071] The method described herein may lead to a reliable lane estimation due to the combination of different methods (lane markings detection method and object detection method), which may use different sensors (e.g., camera, radar sensors and/or LiDAR sensors). It has been found that the aggregation of lanes obtained from multiple drives along the same route using an accurate localization system may lead to a reliable lane estimation for the same location. Also, additional information about the drivability of lanes by combining the information about the behavior of other road users with accurate lane information may provide a reliable lane estimation.

    [0072] FIGS. 3A to 3G depict some scenarios where a lane cross-check, i.e. a comparison of estimated lanes based on lane markings and estimated lanes based on trails, yields more reliable final estimated lanes for a vehicle 302 than not using the cross-check, i.e. only using a single lane detection method. FIG. 3A and FIG. 3B show a lane estimation for a vehicle 302 based on estimated lane markings 308 and estimated trails of moving vehicles 304. Lane markings 308 indicate the lanes, and trails of the moving vehicles 304 indicate the lanes as well, but the lanes with driving direction 310 are provided by the trails of moving vehicles 304 only. FIGS. 3C and 3D show a lane estimation based on estimated trails of moving vehicles 304, if no lane markings 308 are available. Therefore, the final estimated lanes are based on trails of the moving vehicles 304, which also provide information about the driving direction. FIG. 3E shows a lane estimation based on estimated lane markings 308 and estimated trails of moving vehicles 304, considering a drivability of the estimated lanes. Based on the preliminary estimated lanes based on lane markings 308, there are two lanes finally estimated. But the trails indicate that only the left lane is actually drivable, since the right lane is blocked by static vehicles 306. This scenario may be typical for cities, for example, when parked vehicles may block a lane or there is a traffic jam on a turning lane. FIG. 3F shows a lane estimation based on estimated trails of moving vehicles 304, considering detecting an obstacle 312 based on the estimated trails of the moving vehicles 304. Based on the preliminary estimated lanes based on the trails of the moving vehicles 304, the obstacle 312 may be detected, and the final estimated lanes may be adjusted. FIG. 3G shows a lane estimation based on estimated lane markings 308 and estimated trails of moving vehicles 304, considering a variance of estimated trails.
In the case that the more dynamic trails have a high variance such that the lane may not easily be extractable, the more static preliminary lane estimations based on lane markings 308 may be used to adjust the preliminary lane estimations based on the trails for the final estimation of lanes.

    [0073] The lane cross-check may also include a logical sanity check to identify and handle an implausible constellation of multiple lane detections. The road system may follow specified rules that may be defined by convention or even legislation, which may be considered. In a mathematical sense, on the other hand, the lanes and lane markings as well as the trails may follow geometric relations and rules. Therefore, situations may be detected where the geometric description may not match typical rules of a road system, and this information may be used to discard implausible lanes. For example, for a road consisting of two lanes, the lanes need to be in parallel and are required to have a specific distance to each other such that vehicles can drive on both lanes without crashing into each other. If there is a lane estimation based on trails from a vehicle changing the lane, then the estimated lanes would contradict this logic. Furthermore, a lane may not be able to intersect with lane markings (excluding crossroads). Finally, it may be assumed that, depending on the location and the speed limit, estimated lanes will be within certain boundaries of curvature.
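    The two-lane parallelism and distance rule may be sketched as follows; the angle and lane-width thresholds are illustrative assumptions, not values from the text:

```python
import math

def plausible_lane_pair(dir1, dir2, lateral_gap,
                        min_gap=2.5, max_gap=4.5, max_angle_deg=5.0):
    # Hypothetical sanity check for two neighbouring lanes: their
    # direction vectors must be near-parallel and the lateral gap must
    # lie within typical lane widths (thresholds are illustrative).
    ang1 = math.atan2(dir1[1], dir1[0])
    ang2 = math.atan2(dir2[1], dir2[0])
    angle = abs(math.degrees(ang1 - ang2))
    return angle <= max_angle_deg and min_gap <= lateral_gap <= max_gap

print(plausible_lane_pair((1, 0), (1, 0.02), 3.5))  # -> True
print(plausible_lane_pair((1, 0), (1, 0.3), 3.5))   # -> False: not parallel
print(plausible_lane_pair((1, 0), (1, 0), 1.0))     # -> False: lanes too close
```

    A lane estimate failing such a check, e.g. one derived from the trail of a lane-changing vehicle, could then be discarded as described above.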

    [0074] It will be understood that while it is described herein that cross-checks are applied for road parts by explicitly comparing the lanes obtained from lane markings with those obtained from trails (as shown in FIGS. 3A to 3G), these cross-checks may also be applied to more complex situations such as crossroads. However, these cross-checks may not focus on, for example, whether lane changes are actually allowed, mandated, recommended or otherwise regulated, e.g., in a highway situation where three lanes are narrowed down to two lanes.

    [0075] In order to get accurate and reliable lanes, the Lane Map Aggregation (LMA) may use an accurate localization system like a differential Global Positioning System (dGPS) as well as accurate lane detections, e.g., from trail-based LiDAR sensor data object detection, as mentioned above. In other words, an accurate localization system may determine the position and pose of the recording vehicle with high accuracy in order to aggregate lanes from detected lane markings or trails into a global map and benefit from the increase in robustness from doing so. Series production vehicles may not be equipped with LiDAR sensors and dGPS, and both sensors may be costly. For example, a dGPS may be able to localize the vehicle up to a resolution of several centimeters but may also be one of the most expensive components in the vehicle. Besides the costs, such sensors may also require extensive training and time of skilled engineers to operate. To overcome those deficiencies, it may also be possible according to another embodiment to replace these sensors by low-cost (in other words: cost-efficient) sensors, still retaining accurate lane estimations.

    [0076] According to one embodiment, the method of estimating lanes for a vehicle may be based on determining map-aggregated road lanes using a low-cost sensor system. Besides estimations of lanes based on camera data or LiDAR data, estimations of lanes based on data from, e.g., a radar sensor may provide a new source for estimating lanes. This may enable not only using different sensor sources for the estimation of lanes when aggregating a plurality of lanes but may also substantially increase the number of measurements to average over. This may result in more reliable lanes even when using a less sophisticated, low-cost sensor system. In essence, more and different lane estimates may allow mitigating localization errors of the low-cost system simply by gathering more statistics.
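    The statistical effect alluded to here can be quantified by the standard error of the mean: assuming independent, zero-mean localization jitter per drive, aggregating N drives shrinks the jitter of the aggregate roughly by a factor 1/√N (the numeric values below are illustrative):

```python
import math

def aggregate_sigma(sigma_single, n):
    # Standard error of the mean: independent localization jitter of
    # sigma_single per drive averages out with 1/sqrt(n) over n drives.
    return sigma_single / math.sqrt(n)

# An assumed 0.5 m jitter per drive shrinks to ~0.05 m after 100 drives:
print(round(aggregate_sigma(0.5, 100), 3))  # -> 0.05
```

    This is the sense in which more measurements from a low-cost localization system may compensate for its lower per-drive accuracy.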

    [0077] Accurate HD maps of lanes in the world may be costly to obtain and maintain. Ideally, such maps should be cost-efficient and easy to maintain. Creating such maps may make it necessary to aggregate the lane detections from various methods in a global map. This, in turn, may require a localization system which is as accurate as possible. Equipping a vehicle with an accurate localization system may be expensive. For example, a good off-the-shelf dGPS system with an inertial measurement unit (IMU) may be very expensive. Furthermore, these systems may need to be installed and maintained by specifically trained engineers. This cost may scale with the number of vehicles in the recording fleet that is used to generate the map. The statistics over which lane detections may be aggregated, and thereby the accuracy of the resulting map, may be limited by the number of recording vehicles. Thus, if the number of vehicles in the fleet is limited due to cost, this directly influences the accuracy.

    [0078] It may also be possible to equip a vehicle with mostly low-cost sensors to perform the lane estimation, and a low-cost localization system, for example a regular GPS and simultaneous localization and mapping (SLAM), or a regular GPS and a low-cost inertial measurement unit (IMU), to aggregate them in a map. Combining these low-cost sensors with extracting lane information also from lane markings and/or trails of other road users may lead to accurate lane estimations. The trails may be obtained from object detection methods running inside the vehicle. These object detection methods may operate on low-cost sensors, such as radar sensors and cameras, instead of more expensive sensors like LiDAR sensors or sophisticated surround camera systems.

    [0079] The lanes obtained may be cross-checked by physical sanity checks, for example of the trails of other road users. Jitter introduced by the sub-optimal localization system may be filtered out when aggregating the trails in the map. This may be done by making reasonable physical assumptions about the driving behavior of other vehicles, such as maximum accelerations, braking maneuvers, yaw rates and similar. Additionally, the trails of other vehicles coming out of the detection method may first be transformed into the global coordinate system (using the simple localization info available) and may then be tracked in this coordinate frame using a tracker. The use of the tracker after the data is transformed into the global map coordinate system may smooth the data and reduce the introduced jitter.
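    The transformation of detections into the global coordinate system using the localization output may be sketched as a standard 2-D rigid-body transform; the pose representation (x, y, heading in radians) is an assumption for illustration:

```python
import math

def to_global(points_vehicle, pose):
    # Rotate each vehicle-frame point by the heading and translate by
    # the vehicle's global position to obtain global coordinates.
    x0, y0, yaw = pose
    c, s = math.cos(yaw), math.sin(yaw)
    return [(x0 + c * px - s * py, y0 + s * px + c * py)
            for px, py in points_vehicle]

# A detection 10 m ahead of a vehicle at (100, 50) heading 90 degrees:
print(to_global([(10.0, 0.0)], (100.0, 50.0, math.pi / 2)))  # -> [(100.0, 60.0)]
```

    Tracking would then operate on these global-frame points, as described above, so that jitter in consecutive poses is smoothed in the map frame rather than in the vehicle frame.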

    [0080] Similar to the above, physical sanity checks may be applied to the lanes obtained from a lane marking detection. Jitter from the localization system may be filtered out when aggregating the lane markings in the global coordinate system. This may be done by making reasonable physical assumptions about the driving behavior of the vehicle (using information such as the vehicle velocity and the yaw rate of the vehicle). This may allow propagating the lane markings using a tracker to the expected next position and thereby reduce the introduced jitter.
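The propagation of lane markings to their expected next position from the ego velocity and yaw rate can be sketched as follows; the constant-turn-rate motion assumption and the function name are illustrative choices, not requirements of the method.

```python
import math


def propagate_marking(points, v, yaw_rate, dt):
    """Predict where static lane-marking points (given in the vehicle
    frame) should appear after the ego vehicle drives for `dt` seconds
    at speed `v` with yaw rate `yaw_rate` (constant turn rate)."""
    dtheta = yaw_rate * dt
    # Ego displacement expressed in the previous vehicle frame.
    if abs(yaw_rate) < 1e-9:
        dx, dy = v * dt, 0.0  # straight-line limit
    else:
        dx = v / yaw_rate * math.sin(dtheta)
        dy = v / yaw_rate * (1.0 - math.cos(dtheta))
    cos_t, sin_t = math.cos(dtheta), math.sin(dtheta)
    # Re-express each static point in the new (moved, rotated) frame.
    return [((x - dx) * cos_t + (y - dy) * sin_t,
             -(x - dx) * sin_t + (y - dy) * cos_t)
            for x, y in points]
```

A tracker may compare the detections of the next frame against this prediction, so that deviations caused by localization jitter can be smoothed out.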

    [0081] A fusion between the lanes obtained from trails and the lanes obtained from lane markings as described above may be different from a simple tracking, as these physical constraints are applied to the ensemble of trails rather than to individual trajectories. This may be different from just applying a tracker with an underlying physical model for each tracked object. The same may apply to the aggregation of lane markings, where again physical sanity checks may be applied to the ensemble of lane markings to aggregate, in addition to any tracking of individual lane markings.

    [0082] The method described herein may be applied not only with specially equipped test vehicles, but also to series production vehicles which may feature a lane detection system and an object detection system. In this way, potentially more statistics about lanes may be gathered, the accuracy may be increased and the costs may be further reduced.

    [0083] FIG. 4 shows a flow diagram 400 illustrating a method for estimating lanes for a vehicle based on transforming measurement data into a global coordinate system. The lane estimation may work by carrying out the left side of FIG. 4 alone, comprising steps 402, 404, 407, 408, 412 and 414. It may also be possible to determine estimated lanes by carrying out the right side of FIG. 4 alone, comprising steps 402, 406, 409, 410, 412 and 416. Another possibility may be to carry out all steps shown in FIG. 4, in accordance with the method described above, to determine final estimated lanes (step 420 in FIG. 4) at the location of the vehicle based on comparing (step 418 in FIG. 4) a first preliminary estimate of lanes (steps 402, 404, 407, 408, 412 and 414 in FIG. 4) and a second preliminary estimate of lanes (steps 402, 406, 409, 410, 412 and 416 in FIG. 4). Each of these possibilities is described in detail below.

    [0084] Starting with the left side of FIG. 4, in step 402, sensor data 422, 426 may be determined using a sensor or a plurality of sensors. The sensor, preferably a low-cost sensor, may be mounted at the vehicle or may be mounted at other recording vehicles that may be different from the vehicle. The vehicle and/or the other vehicles may be part of a fleet. The sensor data 422, 426 may be determined from several drives of the vehicle and/or from several drives of a plurality of recording vehicles. At 404, a position and a type of lane markings may be determined based on the sensor data 422. For this, an appropriate method may be used, e.g. an image recognition method with neural networks or a classical method not based on machine learning. The sensor data 422 may be determined using a camera and/or a forward-looking LiDAR sensor, or any other suitable sensor. At 412, a position and a pose of the vehicle may be determined in a world coordinate system based on sensor data 426. To determine the position and/or the pose of the vehicle, low-cost hardware such as a series GPS system (as may be equipped in almost all cars), or a combination of a series GPS system with an appropriate method such as an elaborate simultaneous localization and mapping (SLAM), may be used. SLAM systems may be based on camera sensors, LiDAR sensors, radar sensors, ordinary GPS sensors or a combination of those sensors. Alternatively, a combination of a series GPS system with inertial measurement unit (IMU) sensors, or a combination of a series GPS system with SLAM and an IMU, may be used for a better performance. The selection of the sensor may depend on what is equipped in the vehicle. At 407, the lane markings estimate 428 may be transformed into a global coordinate system using position estimates 436. At 408, the lane markings estimate in global coordinates 431, obtained from a plurality of sensor data 422 and transformed into a global coordinate system, may be tracked using a tracker.
In comparison with the method described above, the lane markings estimates are first transformed into the global coordinate system before doing the tracking. In this way, jitter from the low-cost localization system may be smoothed out. A plurality of lane markings described in global coordinates 432 may include uncertainty estimates of those lane markings (e.g. determined by a standard deviation method), wherein the uncertainty estimates for the lane markings may be determined, for example, by the tracker. At 414, the plurality of lane markings in global coordinates 432 may be aggregated, wherein the estimated uncertainties may be considered. In other words, the plurality of lane markings in global coordinates 432 may be combined from several drives of the vehicle and/or from several drives of multiple recording vehicles recorded at the same position 436 to determine a combined, more accurate estimate of the lane markings in global coordinates 432 at this position 436. The combined or aggregated lane markings may be used to estimate lanes 438 at the location for the vehicle based on the transformed measurement data.
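The uncertainty-aware aggregation over several drives can be illustrated by an inverse-variance weighted mean, a standard fusion rule; the one-dimensional formulation and the function name are simplifying assumptions for this sketch.

```python
def aggregate_estimates(samples):
    """Fuse repeated estimates of the same lane-marking position into
    one estimate, weighting each sample by its uncertainty.

    `samples` is a list of (position, variance) pairs, e.g. from
    several drives; less certain samples contribute less, and the
    fused variance shrinks as more samples are aggregated."""
    weights = [1.0 / var for _, var in samples]
    fused = sum(w * p for (p, _), w in zip(samples, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var
```

Two equally certain samples are simply averaged, while a sample with very high variance (e.g. one recorded during a localization glitch) barely shifts the result.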

    [0085] Alternatively, the lane estimation may also be based on trails of other road users, mostly other vehicles around the vehicle. For this, at 402, sensor data 424, 426 may be determined using a sensor or a plurality of sensors. The sensor, preferably a low-cost sensor, may be mounted at the vehicle or may be mounted at other recording vehicles that may be different from the vehicle. The vehicle and/or the other vehicles may be part of a fleet. The sensor data 424, 426 may be determined from several drives of the vehicle and/or from several drives of a plurality of recording vehicles. At 406, objects, preferably other road users, which may be other vehicles or bicycles or the like, around the vehicle may be determined based on the sensor data 424. The sensor data 424 may be determined using a low-cost sensor, for example a radar sensor and/or a forward-looking LiDAR sensor, or any other suitable sensor. At 412, a position and a pose of the vehicle may be determined in a world coordinate system based on sensor data 426. To determine the position and/or the pose of the vehicle, low-cost hardware such as a series GPS system (as may be equipped in almost all cars), or a combination of a series GPS system with an appropriate method such as an elaborate simultaneous localization and mapping (SLAM), may be used. SLAM systems may be based on camera sensors, LiDAR sensors, radar sensors, ordinary GPS sensors or a combination of those sensors. Alternatively, a combination of a series GPS system with inertial measurement unit (IMU) sensors, or a combination of a series GPS system with SLAM and an IMU, may be used for a better performance. The selection of the sensor may depend on what is equipped in the vehicle. At 409, the object estimates 430 may be transformed into a global coordinate system using position estimates 436.
At 410, the object estimates in global coordinates 433, determined from a plurality of sensor data 424 and transformed into a global coordinate system, may be tracked using a tracker. Thus, object trajectories or trails of the other road users, preferably other vehicles, may be determined. In comparison with the method described above, the trail estimates are first transformed into the global coordinate system before doing the tracking. In this way, jitter from the low-cost localization system may be smoothed out. A plurality of trails in global coordinates 434 may include uncertainty estimates of those trails (e.g. determined by a standard deviation method), wherein the uncertainty estimates for the trails may be determined, for example, by the tracker. At 416, the plurality of trails in global coordinates 434 may be aggregated, wherein the estimated uncertainties may be considered. In other words, the plurality of trails in global coordinates 434 may be combined from several drives of the vehicle and/or from several drives of multiple recording vehicles recorded at the same position 436 to determine a combined, more accurate estimate of where other road users have driven at this position 436. The combined or aggregated trails may be used to estimate lanes 440 at the location for the vehicle based on the transformed measurement data.
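As one possible illustration of aggregating trails in global coordinates, the sketch below bins trail points along the driving direction and averages the lateral coordinate per bin; the binning scheme, the `bin_size` parameter, and the straight-road simplification (driving direction along x) are assumptions made only for this example.

```python
from collections import defaultdict
from statistics import mean


def centerline_from_trails(points, bin_size=1.0):
    """Aggregate global-frame trail points (x, y) from many drives
    into a lane centerline: group points into longitudinal bins of
    width `bin_size` along x, then average the lateral coordinate y
    in each bin."""
    bins = defaultdict(list)
    for x, y in points:
        bins[round(x / bin_size)].append(y)
    # One (longitudinal station, mean lateral position) pair per bin.
    return sorted((b * bin_size, mean(ys)) for b, ys in bins.items())
```

With many trails per bin, individual localization jitter averages out, which is why the accuracy of the aggregated lane improves with the number of recorded drives.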

    [0086] Combining the aforementioned estimations of lanes based on lane markings and based on trails to determine final estimated lanes 420 at the location of the vehicle may lead to a more robust estimation of lanes as described above. For this, at 418, the estimates of lanes 438 from lane markings and the estimates of lanes 440 from trails may be compared. The estimates of lanes 438 may indicate where lanes are according to the available information about lane markings. The estimates of lanes 440 from trails may give an indication of where other road users have driven and thereby may give a hint as to which lane might actually be usable. By combining the two methods, i.e. combining estimates of lanes based on a plurality of lane markings and estimates of lanes based on a plurality of trails, an accurate position of lanes (from lane markings) together with the information whether these lanes can actually be used may be obtained. Combining the estimations of lanes based on lane markings and estimations of lanes based on trails may be done by cross checks as described above.
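The cross-check between marking-based and trail-based lane estimates can be sketched as a simple matching of lane centers; representing each lane by a scalar lateral offset and the matching tolerance `tol` are illustrative assumptions, not part of the disclosure.

```python
def fuse_lane_estimates(marking_lanes, trail_lanes, tol=0.5):
    """Cross-check lanes derived from lane markings against lanes
    derived from trails.

    Each lane is represented here by its lateral center offset in
    meters. A marking-derived lane is confirmed as 'usable' only when
    some trail-derived lane center lies within `tol` meters of it,
    i.e. other road users have actually driven there."""
    usable, unconfirmed = [], []
    for lane in marking_lanes:
        if any(abs(lane - t) <= tol for t in trail_lanes):
            usable.append(lane)
        else:
            unconfirmed.append(lane)
    return usable, unconfirmed
```

Unconfirmed lanes (for example, a marked lane blocked by an obstacle that no trail passes through) can then be flagged or assigned a lower confidence in the final estimate.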

    [0087] With the method described herein, it may be possible to use a low-cost sensor setup, especially for the localization, in recording vehicles and still obtain reliable estimates of lanes by extracting lane markings and/or lanes from trails, e.g. based on radar detections, and performing various cross checks. Low-cost sensors may be off-the-shelf sensors. For example, it may be possible to use even the sensor setup of series production vehicles for the given purpose.

    [0088] FIG. 5 shows a flow diagram 500 illustrating a method for estimating lanes for a vehicle according to various embodiments. At 502, a first preliminary estimate of lanes may be determined based on a plurality of lane markings at a location of the vehicle. At 504, a second preliminary estimate of lanes may be determined based on a plurality of trails of objects at the location of the vehicle. At 506, the first preliminary estimate of lanes and the second preliminary estimate of lanes may be compared. At 508, a final estimate of lanes at the location of the vehicle may be determined based on the comparing.

    [0089] According to an embodiment, the plurality of trails of the objects may be determined based on second sensor data, wherein the second sensor data may be determined using a second sensor, wherein preferably the second sensor may comprise a camera or a radar sensor or a LiDAR sensor.

    [0090] According to an embodiment, the method may further comprise the following step carried out by the computer hardware components: determining the location of the vehicle.

    [0091] According to an embodiment, the location of the vehicle may be determined based on simultaneous localization and mapping and/or a GPS system and/or a dGPS system and/or an inertial measurement unit.

    [0092] According to an embodiment, the method may further comprise the following step carried out by the computer hardware components: detecting a pose of the vehicle in a world coordinate system.

    [0093] According to an embodiment, the method may further comprise the following step carried out by the computer hardware components: estimating uncertainties of the first preliminary estimate of lanes and/or of the second preliminary estimate of lanes.

    [0094] According to an embodiment, the plurality of lane markings may be determined from several drives of the vehicle and/or from several drives of a plurality of recording vehicles.

    [0095] According to an embodiment, the plurality of trails may be determined from several drives of the vehicle and/or from several drives of a plurality of recording vehicles.

    [0096] According to an embodiment, the method may further comprise the following step carried out by the computer hardware components: checking a first plausibility of the first preliminary estimate of lanes and/or checking a second plausibility of the second preliminary estimate of lanes, wherein the first plausibility and/or the second plausibility may be based on geometric relations and/or rules.

    [0097] According to an embodiment, a number of trails in the plurality of trails of objects may be above a predetermined trail threshold.

    [0098] According to an embodiment, a number of lane markings in the plurality of lane markings may be above a predetermined lane marking threshold.

    [0099] Each of the steps 502, 504, 506, 508, and the further steps described above may be performed by computer hardware components.

    [0100] FIG. 6 shows a flow diagram 600 illustrating a method for estimating lanes for a vehicle according to various embodiments. At 602, measurement data may be determined at a location of the vehicle using a sensor mounted at the vehicle. At 604, the measurement data of the sensor may be transformed into a global coordinate system to obtain transformed measurement data. At 606, lanes may be estimated at the location of the vehicle based on the transformed measurement data.
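Step 604 corresponds to a rigid-body transform from the vehicle frame into the global frame. A minimal two-dimensional sketch, assuming the ego pose from the localization system is given as (x, y, heading) and altitude is ignored, could look as follows; the function name is hypothetical.

```python
import math


def to_global(points, ego_pose):
    """Transform measurement points from the vehicle frame into the
    global frame using the ego pose (x, y, heading in radians)."""
    ex, ey, theta = ego_pose
    c, s = math.cos(theta), math.sin(theta)
    # Rotate each point by the ego heading, then translate by the
    # ego position.
    return [(ex + c * px - s * py, ey + s * px + c * py)
            for px, py in points]
```

Detections from different drives, once expressed in this common global frame, can be aggregated and tracked as described for FIG. 4.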

    [0101] According to an embodiment, the measurement data may comprise estimates for lane markings.

    [0102] According to an embodiment, the measurement data may comprise estimates for trails of objects.

    [0103] According to an embodiment, the measurement data may be determined from several drives of the vehicle and/or from several drives of a plurality of recording vehicles.

    [0104] According to an embodiment, the sensor may comprise a radar sensor and/or a camera.

    [0105] According to an embodiment, the method may further comprise the following step carried out by the computer hardware components: determining the location of the vehicle.

    [0106] According to an embodiment, the determining of the location of the vehicle may be based on simultaneous localization and mapping and/or a GPS system and/or a dGPS system and/or an inertial measurement unit.

    [0107] According to an embodiment, the method may further comprise the following step carried out by the computer hardware components: checking a plausibility of the lanes, wherein the plausibility may be based on physical assumptions regarding a driving behavior of the vehicle and/or physical assumptions regarding a driving behavior of other vehicles.

    [0108] According to an embodiment, the physical assumptions regarding the driving behavior of the vehicle may comprise assumptions regarding a velocity of the vehicle and/or assumptions regarding a yaw-rate of the vehicle.

    [0109] According to an embodiment, the physical assumptions regarding the driving behavior of the other vehicles may comprise an acceleration assumption of the other vehicles and/or a braking assumption of the other vehicles and/or a yaw-rate assumption of the other vehicles.

    [0110] According to an embodiment, the method may further comprise the following step carried out by the computer hardware components: estimating uncertainties of the transformed measurement data.

    [0111] According to an embodiment, the lanes may be estimated further based on weights with a confidence value.

    [0112] Each of the steps 602, 604, 606, and the further steps described above may be performed by computer hardware components.

    [0113] FIG. 7 shows a computer system 700 with a plurality of computer hardware components configured to carry out steps of a computer implemented method for estimating lanes for a vehicle according to various embodiments. The computer system 700 may include a processor 702, a memory 704, and a non-transitory data storage 706. A camera 708 and/or a distance sensor 710 (for example a radar sensor and/or a LiDAR sensor) may be provided as part of the computer system 700 (as illustrated in FIG. 7), or may be provided external to the computer system 700.

    [0114] The processor 702 may carry out instructions provided in the memory 704. The non-transitory data storage 706 may store a computer program, including the instructions that may be transferred to the memory 704 and then executed by the processor 702. The camera 708 and/or the distance sensor 710 may be used to determine input data, for example measurement data and/or sensor data that is provided to the methods described herein.

    [0115] The processor 702, the memory 704, and the non-transitory data storage 706 may be coupled with each other, e.g. via an electrical connection 712, such as a cable or a computer bus, or via any other suitable electrical connection, to exchange electrical signals. The camera 708 and/or the distance sensor 710 may be coupled to the computer system 700, for example via an external interface, or may be provided as parts of the computer system (in other words: internal to the computer system, for example coupled via the electrical connection 712).

    [0116] The terms “coupling” or “connection” are intended to include a direct “coupling” (for example via a physical link) or direct “connection” as well as an indirect “coupling” or indirect “connection” (for example via a logical link), respectively.

    [0117] It will be understood that what has been described for one of the methods above may analogously hold true for the computer system 700.

    Reference Numeral List

    [0118] TABLE-US-00001
    100 flow diagram illustrating a method for estimating lanes for a vehicle according to various embodiments
    101 flow diagram illustrating a method for estimating lanes for a vehicle comparing a first preliminary estimate of lanes and a second preliminary estimate of lanes
    102 step of determining sensor data
    104 step of determining lane markings
    106 step of determining objects
    108 step of tracking lane markings
    110 step of tracking trails of the objects
    112 step of determining position and pose of the vehicle
    114 step of aggregating the lane markings
    115 step of estimating lanes from lane markings
    116 step of aggregating the trails
    117 step of estimating lanes from trails
    118 step of comparing lanes from lane markings and lanes from trails
    120 step of determining a final estimate of lanes
    122 sensor data
    124 sensor data
    126 sensor data
    128 lane markings estimate
    130 object estimates
    132 plurality of lane markings
    134 plurality of trails
    136 position estimates
    138 first preliminary estimate of lanes
    140 second preliminary estimate of lanes
    200 flow diagram illustrating a comparison of a first preliminary estimate of lanes and a second preliminary estimate of lanes
    202 request for second preliminary estimate of lanes based on trails
    204 step of termination
    206 request for first preliminary estimate of lanes based on lane markings
    208 step of determining a final estimate of lanes based on second preliminary estimate of lanes
    210 step of determining a final estimate of lanes based on first preliminary estimate of lanes and second preliminary estimate of lanes
    212 final estimated lanes with confidence value
    302 vehicle
    304 moving vehicle for trails
    306 static vehicle
    308 lane marking
    310 lane with direction
    312 obstacle
    400 flow diagram illustrating a method for estimating lanes for a vehicle based on transforming measurement data into a global coordinate system
    402 step of determining measurement data
    404 step of determining lane markings
    406 step of determining objects
    407 step of transforming data into a global coordinate system
    408 step of tracking lane markings
    409 step of transforming data into a global coordinate system
    410 step of tracking trails of the objects
    412 step of determining position and pose of the vehicle
    414 step of aggregating the lane markings
    416 step of aggregating the trails
    418 step of comparing lanes from lane markings and lanes from trails
    420 step of determining a final estimate of lanes
    422 measurement data
    424 measurement data
    426 measurement data
    428 lane markings estimate
    430 object estimates
    431 lane markings estimate in global coordinates
    432 plurality of lane markings in global coordinates
    433 object estimate in global coordinates
    434 plurality of trails in global coordinates
    436 position estimates
    438 estimates of lanes
    440 estimates of lanes
    500 flow diagram illustrating a method for estimating lanes for a vehicle according to various embodiments
    502 step of determining a first preliminary estimate of lanes based on a plurality of lane markings at a location of the vehicle
    504 step of determining a second preliminary estimate of lanes based on a plurality of trails of objects at a location of the vehicle
    506 step of comparing the first preliminary estimate of lanes and the second preliminary estimate of lanes
    508 step of determining a final estimate of lanes at the location of the vehicle based on the comparison
    600 flow diagram illustrating a method for estimating lanes for a vehicle according to various embodiments
    602 step of determining measurement data at a location of the vehicle using a sensor mounted at the vehicle
    604 step of transforming the measurement data of the sensor into a global coordinate system to obtain transformed measurement data
    606 step of estimating lanes at the location of the vehicle based on the transformed measurement data
    700 computer system according to various embodiments
    702 processor
    704 memory
    706 non-transitory data storage
    708 camera
    710 distance sensor
    712 connection