Lidar-Based Road Condition Estimation for Passenger Vehicles
20250171028 · 2025-05-29
Assignee
Inventors
CPC classification
B60W10/22
PERFORMING OPERATIONS; TRANSPORTING
G06V10/26
PHYSICS
B60W2552/00
PERFORMING OPERATIONS; TRANSPORTING
B60W2420/503
PERFORMING OPERATIONS; TRANSPORTING
B60W10/20
PERFORMING OPERATIONS; TRANSPORTING
B60W40/12
PERFORMING OPERATIONS; TRANSPORTING
International classification
B60W40/12
PERFORMING OPERATIONS; TRANSPORTING
B60W10/22
PERFORMING OPERATIONS; TRANSPORTING
B60W10/20
PERFORMING OPERATIONS; TRANSPORTING
G06V20/56
PHYSICS
Abstract
A method performed by a vehicle system for enabling estimation of a condition of a road comprising obtaining, from a sensor mounted on a vehicle, first data comprising a first plurality of data points, each comprising longitudinal, lateral and vertical coordinates representing dimensions of a part of the road at a first time, obtaining a second plurality of data points, each comprising longitudinal, lateral and vertical coordinates representing dimensions of the part of the road at a second time, wherein the first time is more recent than the second time, segmenting the first plurality of data points into a plurality of segmented data areas based on longitudinal and lateral coordinates of the first plurality of data points, and estimating a respective vertical position in each segmented data area based on vertical coordinates of the second plurality of data points and a motion state of the vehicle at the second time.
Claims
1. A method performed by a vehicle system for enabling estimation of a condition of a road, the method comprising:
obtaining, from a sensor mounted on a vehicle, first data comprising a first plurality of data points, wherein each of the first plurality of data points comprises longitudinal, lateral and vertical coordinates representing respective dimensions of a part of the road at a first time;
obtaining a second plurality of data points, wherein each of the second plurality of data points comprises longitudinal, lateral and vertical coordinates representing respective dimensions of the part of the road at a second time, wherein the first time is more recent than the second time;
segmenting the first plurality of data points into a plurality of segmented data areas based on the longitudinal and lateral coordinates of the first plurality of data points by dividing a representation of the part of the road into the plurality of segmented data areas and assigning each of the first plurality of data points to a segmented data area based on the longitudinal and lateral coordinates of the first plurality of data points; and
estimating a respective vertical position in each segmented data area based on the vertical coordinates of the second plurality of data points and a motion state of the vehicle at the second time, wherein the motion state of the vehicle comprises at least one of a longitudinal velocity, a lateral velocity, a vertical velocity, a roll angle, a pitch angle and a yaw angle of the vehicle.
2. The method according to claim 1, further comprising estimating the condition of the road based on the estimated respective vertical positions.
3. The method according to claim 2, further comprising calculating a mean of the vertical coordinates of the first plurality of data points for each segmented data area based on vertical coordinates of the first plurality of data points and wherein the condition of the road is estimated based on the calculated means.
4. The method according to claim 1, wherein the sensor comprises one or more Light Detection and Ranging, LIDAR, sensors.
5. The method according to claim 1, wherein the motion state of the vehicle is estimated based on sensor data received from one or more motion tracking sensors and/or rotational wheel speed sensors of the vehicle.
6. The method according to claim 1, wherein the sensor data is received from an inertial measurement unit, IMU, and/or from a wheel speed sensor, WSS.
7. The method according to claim 2, wherein the condition of the road comprises at least one of a curvature of the road, an inclination of the road, a banking of the road, an anomaly of the road, and a smoothness of the road, wherein the anomaly of the road comprises at least one of a road bump, an undulation, a pothole and a manhole cover.
8. The method according to claim 2, further comprising adjusting a regenerative braking force of the vehicle based on the estimated condition and/or adjusting a speed profile based on the estimated condition.
9. The method according to claim 2, further comprising adjusting a steering, a suspension and/or a speed of the vehicle based on the estimated condition.
10. The method according to claim 2, further comprising obtaining second data indicative of a steering angle to estimate segmented data areas among the plurality of segmented data areas where the vehicle might travel and wherein estimating the condition of the road is further based on the estimated segmented areas.
11. A vehicle system comprising a memory, and a controller configured to:
obtain, from a sensor mounted on a vehicle, first data comprising a first plurality of data points, wherein each of the first plurality of data points comprises longitudinal, lateral and vertical coordinates representing respective dimensions of a part of the road at a first time;
obtain a second plurality of data points, wherein each of the second plurality of data points comprises longitudinal, lateral and vertical coordinates representing respective dimensions of the part of the road at a second time, wherein the first time is more recent than the second time;
segment the first plurality of data points into a plurality of segmented data areas based on the longitudinal and lateral coordinates of the first plurality of data points by dividing a representation of the part of the road into the plurality of segmented data areas and assigning each of the first plurality of data points to a segmented data area based on the longitudinal and lateral coordinates of the first plurality of data points; and
estimate a respective vertical position in each segmented data area based on the vertical coordinates of the second plurality of data points and a motion state of the vehicle at the second time, wherein the motion state of the vehicle comprises at least one of a longitudinal velocity, a lateral velocity, a vertical velocity, a roll angle, a pitch angle and a yaw angle of the vehicle.
12. The vehicle system according to claim 11, wherein the controller is further configured to estimate the condition of the road based on the estimated respective vertical positions.
13. The vehicle system according to claim 12, wherein the controller is further configured to calculate a mean of the vertical coordinates of the first plurality of data points for each segmented data area based on vertical coordinates of the first plurality of data points and wherein the condition of the road is estimated based on the calculated means.
14. The vehicle system according to claim 11, wherein the sensor comprises one or more Light Detection and Ranging, LIDAR, sensors.
15. The vehicle system according to claim 11, wherein the motion state of the vehicle is estimated based on sensor data received from one or more motion tracking sensors and/or rotational wheel speed sensors of the vehicle.
16. The vehicle system according to claim 11, wherein the sensor data is received from an inertial measurement unit, IMU, and/or from a wheel speed sensor, WSS.
17. The vehicle system according to claim 11, wherein the condition of the road comprises at least one of a curvature of the road, an inclination of the road, a banking of the road, an anomaly of the road, and a smoothness of the road, wherein the anomaly of the road comprises at least one of a road bump, an undulation, a pothole and a manhole cover.
18. The vehicle system according to claim 11, wherein the controller is further configured to adjust a regenerative braking force of the vehicle based on the estimated condition and/or adjust a steering, a suspension and/or a speed of the vehicle based on the estimated condition and/or adjust a speed profile based on the estimated condition.
19. The vehicle system according to claim 11, wherein the controller is further configured to obtain second data indicative of a steering angle to estimate segmented data areas among the plurality of segmented data areas where the vehicle might travel and wherein estimating the condition of the road is further based on the estimated segmented areas.
20. A vehicle comprising the vehicle system according to claim 11.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
[0035] In step 202 of the method, the vehicle system obtains, from a sensor mounted on a vehicle, first data comprising a first plurality of data points. Each of the first plurality of data points comprises longitudinal, lateral and vertical coordinates representing respective dimensions of a part of the road at a first time. The sensor may be a three-dimensional (3D) sensor. For instance, the 3D sensor may be a lidar sensor which is part of the sensor unit 108. The lidar sensor may scan the road extending in front of the vehicle and generate the first plurality of data points.
[0036] In step 204 of the method, the vehicle system obtains a second plurality of data points, wherein each of the second plurality of data points comprises longitudinal, lateral and vertical coordinates representing respective dimensions of the part of the road at a second time, and wherein the first time is more recent than the second time. The second plurality of data points may be stored in a memory or in a database that the vehicle system can access.
[0037] In step 206, the first plurality of data points is segmented into a plurality of segmented data areas or grids based on the longitudinal and lateral coordinates of the first plurality of data points.
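The following non-limiting Python sketch illustrates one possible way to perform such a segmentation; the grid extent, cell size and NumPy array representation are illustrative assumptions and not part of the claimed method.

```python
import numpy as np

def segment_into_grid(points, cell_size=0.5, x_range=(0.0, 20.0), y_range=(-5.0, 5.0)):
    """Assign each (x, y, z) road point to a segmented data area (grid cell).

    points: (N, 3) array of longitudinal (x), lateral (y) and vertical (z)
            coordinates of the first plurality of data points.
    Returns a dict mapping (row, col) cell indices to the z values that fall
    inside that cell.  All ranges and the cell size are illustrative.
    """
    cells = {}
    for x, y, z in points:
        if not (x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]):
            continue  # outside the considered representation of the part of the road
        row = int((x - x_range[0]) // cell_size)
        col = int((y - y_range[0]) // cell_size)
        cells.setdefault((row, col), []).append(z)
    return cells

# Example with three synthetic lidar returns
pts = np.array([[2.3, -0.4, 0.02], [2.4, -0.3, 0.03], [7.8, 1.2, -0.05]])
grid = segment_into_grid(pts)
print({cell: np.mean(zs) for cell, zs in grid.items()})  # mean height per occupied cell
```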
[0038] In step 208, a respective vertical position is estimated in each segmented data area based on the vertical coordinates of the second plurality of data points and a motion state of the vehicle at the second time. The motion state of the vehicle may be estimated based on sensor data received from one or more motion tracking sensors and/or rotational wheel speed sensors of the vehicle. The sensor data may be received from the IMU and/or from the WSS of the sensor unit 108. The motion state of the vehicle may comprise at least one of a longitudinal velocity, a lateral velocity, a vertical velocity, a roll angle, a pitch angle and a yaw angle of the vehicle.
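As a purely illustrative, non-limiting sketch, the motion state used in step 208 may be represented as a simple data structure such as the one below; the field names and units are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class MotionState:
    """Vehicle motion state at a given lidar time stamp (illustrative fields)."""
    vx: float     # longitudinal velocity [m/s]
    vy: float     # lateral velocity [m/s]
    vz: float     # vertical velocity [m/s]
    roll: float   # roll angle [rad]
    pitch: float  # pitch angle [rad]
    yaw: float    # yaw angle [rad]

# e.g. fused from IMU and wheel speed sensor data at the second time t-1
state_t_minus_1 = MotionState(vx=12.0, vy=0.1, vz=0.0, roll=0.005, pitch=0.01, yaw=0.0)
print(state_t_minus_1)
```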
[0039] The method may further comprise estimating the condition of the road based on the estimated respective vertical positions.
[0040] The condition of the road may comprise at least one of a curvature of the road, an inclination of the road, a banking of the road, an anomaly of the road, and a smoothness of the road, wherein the anomaly of the road may comprise at least one of a road bump, an undulation, a pothole and a manhole cover.
[0041] In an alternative embodiment, the method of
[0042] The method may further comprise adjusting a regenerative braking force of the vehicle based on the estimated condition, adjusting a speed profile based on the estimated condition, and/or adjusting a steering, a suspension and/or a speed of the vehicle based on the estimated condition.
[0043] Furthermore, the method may also comprise obtaining second data indicative of a steering angle to estimate segmented data areas among the plurality of segmented data areas where the vehicle might travel and wherein estimating the condition of the road is further based on the estimated segmented areas.
[0044] The step 206 of segmenting may comprise dividing a representation of the part of the road into the plurality of segmented data areas, for example a grid of areas, and assigning each of the first plurality of data points to a segmented data area based on its longitudinal and lateral coordinates.
[0046] The vertical coordinate can be used to determine whether a data point received from the lidar at a time t belongs to the road surface and not to a static or a dynamic object around or over the road surface. The data points with the lowest vertical coordinate values, below a certain threshold, have the highest probability of belonging to the road and will form the first plurality of data points. The longitudinal and lateral coordinates of the first plurality of data points can then be used to assign each of the first plurality of data points to one of the grids 610.
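A non-limiting sketch of such a road-point selection is given below; the threshold value and array layout are illustrative assumptions.

```python
import numpy as np

def extract_road_points(points, z_threshold=0.2):
    """Keep only the data points likely to belong to the road surface.

    points: (N, 3) array of (x, y, z) lidar returns at time t.
    A point is treated as a road-surface candidate when its vertical
    coordinate is below z_threshold (the threshold value is illustrative).
    """
    z = points[:, 2]
    return points[z < z_threshold]

raw = np.array([[3.0, 0.5, 0.04],     # road surface
                [3.1, 0.6, 1.45],     # e.g. an object above the road surface
                [9.0, -1.0, -0.02]])  # road surface
road_points = extract_road_points(raw)
print(road_points)  # only the two low points remain and are then gridded
```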
[0047] A second plurality of data points is obtained, wherein each of the second plurality of data points comprises longitudinal, lateral and vertical coordinates representing respective dimensions of the part of the road at a second time t-1, wherein the first time is more recent than the second time. The grids comprising data points that belong to both the first time t and the second time t-1 are indicated as 702.
[0048] As said, the motion states of the vehicle (such as the longitudinal velocity, lateral velocity, vertical velocity, roll angle 430, pitch angle 330 and yaw angle) for the previous, second time t-1 (i.e. the lidar measurement time stamp 100 milliseconds earlier) are calculated by processing IMU and Wheel Speed Sensor (WSS) data.
[0049] In the following, $Z_{t-1}$ is a measured mean value of the vertical coordinates of the second plurality of data points in each grid 610, $Z_t$ is the measured mean value of the vertical coordinates of the first plurality of data points in each grid 610, $\hat{Z}_{t-1|t-1}$ is the updated grid value of the vertical coordinates of the second plurality of points in each grid 610, $\hat{Z}_{t|t}$ is the updated grid value of the vertical coordinates of the first plurality of points in each grid 610, and $\hat{Z}_{t|t-1}$ is the estimated grid value of the vertical coordinate of the first plurality of points in each grid 610.
[0050] First, $Z_t$ is calculated as the mean of the measured values of a segmented grid area at time t. Then $\hat{Z}_{t|t-1}$ for a segmented area is calculated from the updated value $\hat{Z}_{t-1|t-1}$ of the second time step t-1 according to Equation 1 below. The method then proceeds to calculate an updated grid value $\hat{Z}_{t|t}$ as an average of the values $\hat{Z}_{t|t-1}$ and $Z_t$ according to Equation 2 below.
[0051] As said, $\hat{Z}_{t|t-1}$ is calculated as follows:

$$\begin{bmatrix} \hat{x}_{t|t-1} \\ \hat{y}_{t|t-1} \\ \hat{Z}_{t|t-1} \\ 1 \end{bmatrix} = \begin{bmatrix} R_x(\phi)\,R_y(\theta)\,R_z(\psi) & T \\ N & 1 \end{bmatrix} \begin{bmatrix} x_{t-1} \\ y_{t-1} \\ \hat{Z}_{t-1|t-1} \\ 1 \end{bmatrix}, \qquad T = 0.1\begin{bmatrix} v^x_{t-1} \\ v^y_{t-1} \\ v^z_{t-1} \end{bmatrix} \tag{1}$$

[0052] where $T$ is the translation vector of size 3×1 along the x, y and z axes, $N = [0\ 0\ 0]$ is a zero vector of size 1×3, 0.1 is the constant difference in seconds between t and t-1 and may have any other value, and $v^x_{t-1}$, $v^y_{t-1}$, $v^z_{t-1}$ are respectively the following motion states of the vehicle at the second time t-1: the vehicle's longitudinal velocity along the x axis, the vehicle's lateral velocity along the y axis and the vehicle's vertical velocity along the z axis.

[0053] The velocities $v^x_{t-1}$, $v^y_{t-1}$, $v^z_{t-1}$ may be obtained, for instance, by fusion of IMU and WSS sensor data. Furthermore, $R_x(\phi)$ is the rotation matrix about the x axis, $\phi$ is the vehicle roll angle, $R_y(\theta)$ is the rotation matrix about the y axis, $\theta$ is the vehicle pitch angle, $R_z(\psi)$ is the rotation matrix about the z axis, $\psi$ is the vehicle yaw angle, and $\phi$, $\theta$ and $\psi$ may be obtained from a gyroscope of the vehicle.
[0054] A measured mean value $Z_t$, i.e. the mean of the vertical coordinates of the first plurality of data points for each segmented data area, can be calculated based on the vertical coordinates of the first plurality of data points. The measured mean $Z_{t-1}$ of the vertical coordinates of the second plurality of data points for each segmented data area 610 is obtained by calculating the mean of all the data points in said segmented data area 610 that belong to the second plurality of data points.
[0055] Then the grid's estimated respective vertical position $\hat{Z}_{t|t-1}$ can be combined or fused with the measured mean value $Z_t$. This combination or fusion gives the grid's updated height value $\hat{Z}_{t|t}$. Equation 2 shows the simplest possible fusion or combination of $\hat{Z}_{t|t-1}$ and $Z_t$ by averaging them:

$$\hat{Z}_{t|t} = \frac{\hat{Z}_{t|t-1} + Z_t}{2} \tag{2}$$
[0056] Other popular fusion and filtering techniques can also be applied here, e.g. a Kalman filter.
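As a non-limiting sketch of such an alternative, a one-dimensional Kalman-style update per grid cell could look as follows; the noise variances are illustrative assumptions. With equal predicted and measured variances this update reduces to the simple average of Equation 2.

```python
def kalman_update(z_pred, p_pred, z_meas, r_meas):
    """One-dimensional Kalman update of a grid cell height.

    z_pred, p_pred: predicted height Z_hat_{t|t-1} and its variance.
    z_meas, r_meas: measured mean height Z_t of the cell and its variance.
    Returns the updated height Z_hat_{t|t} and its variance.
    """
    k = p_pred / (p_pred + r_meas)          # Kalman gain
    z_upd = z_pred + k * (z_meas - z_pred)  # fused height
    p_upd = (1.0 - k) * p_pred              # reduced uncertainty after the update
    return z_upd, p_upd

print(kalman_update(z_pred=0.030, p_pred=1e-4, z_meas=0.034, r_meas=1e-4))
```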
[0057] A steering wheel sensor may provide second data indicative of a steering angle to estimate the segmented data areas among the plurality of segmented data areas where the vehicle might travel, and calculating the condition of the road may then be further based on the updated values of those segmented areas.
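One possible, non-limiting way to select the segmented data areas that the vehicle might traverse from the steering angle is sketched below, using a simple kinematic bicycle-model approximation; the model, constants and function names are illustrative assumptions.

```python
import math

def cells_along_path(steering_angle, wheel_base=2.8, speed=10.0,
                     cell_size=0.5, horizon=10.0, dt=0.1):
    """Return the (row, col) grid cells the vehicle is likely to traverse.

    A simple kinematic bicycle model is assumed: the yaw rate is
    speed * tan(steering_angle) / wheel_base.
    """
    cells, x, y, yaw = set(), 0.0, 0.0, 0.0
    yaw_rate = speed * math.tan(steering_angle) / wheel_base
    while x < horizon:
        x += speed * math.cos(yaw) * dt
        y += speed * math.sin(yaw) * dt
        yaw += yaw_rate * dt
        cells.add((int(x // cell_size), int(y // cell_size)))
    return cells

print(sorted(cells_along_path(steering_angle=0.05)))
```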
[0058] A predicted height of the road ahead, $h_{\text{road ahead}}$, or estimated inclination of the road, can be obtained using Equations 3-5.
[0060] The lateral inclination $b_{\text{road ahead}}$, also called the forward road bank measurement or the height difference in the lateral direction of the vehicle, is calculated in a similar way using Equations 6-8, where $b_{\text{road ahead}}$ is the lateral inclination, $b_{\text{observed grids}}$ is a bank difference component based only on lidar measurements, $b_{\text{vehicle}}$ 410 is a compensation due to the current roll angle ($\phi_{\text{vehicle}}$) 430 of the vehicle, obtained by integration of the IMU gyroscope roll rate, and the front wheel base 420 is the distance between the centres of the front tires.
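Equations 3-8 are not reproduced here. As a rough, non-limiting sketch consistent only with the parameter descriptions above, the forward road bank could be composed of the lidar-observed lateral height difference plus a roll-angle compensation term; the exact form used below (the front wheel base times the tangent of the roll angle) is an assumption and is not asserted to be Equations 6-8.

```python
import math

def forward_bank(b_observed_grids, roll_angle, front_wheel_base=1.6):
    """Illustrative composition of the forward road bank b_road_ahead.

    b_observed_grids: lateral height difference measured from the lidar grids.
    roll_angle: current vehicle roll angle (e.g. integrated gyroscope roll rate).
    front_wheel_base: distance between the centres of the front tires.
    The roll compensation is modelled here as front_wheel_base * tan(roll_angle);
    this is an assumption, not the patent's Equations 6-8.
    """
    b_vehicle = front_wheel_base * math.tan(roll_angle)
    return b_observed_grids + b_vehicle

print(forward_bank(b_observed_grids=0.02, roll_angle=0.01))
```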
[0061] If a grid's mean and variance values are observed to deviate beyond a certain threshold from those of its adjacent grids, this indicates a road anomaly, either a bump 502 or a pothole 500, along the driving direction.
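A non-limiting sketch of such an anomaly check is given below; the grid layout, threshold and classification rule are illustrative assumptions.

```python
import numpy as np

def detect_anomalies(grid_means, threshold=0.05):
    """Flag grid cells whose mean height deviates from their neighbours.

    grid_means: 2D array of updated mean heights per segmented data area.
    Returns a same-shaped array: +1 for a bump, -1 for a pothole, 0 otherwise.
    """
    flags = np.zeros_like(grid_means, dtype=int)
    rows, cols = grid_means.shape
    for r in range(rows):
        for c in range(cols):
            neighbours = [grid_means[rr, cc]
                          for rr in (r - 1, r, r + 1) for cc in (c - 1, c, c + 1)
                          if (rr, cc) != (r, c) and 0 <= rr < rows and 0 <= cc < cols]
            diff = grid_means[r, c] - np.mean(neighbours)
            if diff > threshold:
                flags[r, c] = 1      # bump: cell sits above its surroundings
            elif diff < -threshold:
                flags[r, c] = -1     # pothole: cell sits below its surroundings
    return flags

heights = np.zeros((5, 5))
heights[2, 2] = 0.12                 # a synthetic bump
print(detect_anomalies(heights))
```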
[0062] Those skilled in the art will appreciate that the methods, systems and components described herein may comprise, in whole or in part, a combination of analogue and digital circuits and/or one or more appropriately programmed processors (e.g., one or more microprocessors including central processing units (CPU)) and associated memory, which may include stored operating system software, firmware and/or application software executable by the processor(s) for controlling operation thereof and/or for performing the particular algorithms represented by the various functions and/or operations described herein, including interaction between and/or cooperation with each other as well as transmitters and receivers. One or more of such processors, as well as other digital hardware, may be included in a single ASIC (Application-Specific Integrated Circuitry), or several processors and various digital hardware may be distributed among several separate components, whether individually packaged or assembled into a SoC (System-on-a-Chip).
[0063] Furthermore, the systems, methods and components described, and/or any other arrangement, unit, system, device or module described herein may for instance be implemented in one or several arbitrary nodes comprised in the host vehicle and/or one or more separate devices. In that regard, such a node may comprise an electronic control unit (ECU) or any suitable electronic device, which may be a main or central node. It should also be noted that these may further comprise or be arranged or configured to cooperate with any type of storage device or storage arrangement known in the art, which may for example be used for storing input or output data associated with the functions and/or operations described herein. The systems, components and methods described herein may further comprise any computer hardware and software and/or electrical hardware known in the art configured to enable communication therebetween.
[0064] While the disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the disclosure. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the disclosure without departing from the essential scope thereof. Therefore, it is intended that the disclosure not be limited to the particular embodiments disclosed, but that the disclosure will include all embodiments falling within the scope of the appended claims.