Method and Device for Estimating a Velocity of an Object
20220334238 · 2022-10-20
Inventors
CPC classification
G01S17/58
PHYSICS
G01S13/4427
PHYSICS
G01S17/42
PHYSICS
G01S17/66
PHYSICS
International classification
Abstract
A method is provided for estimating a velocity of an object located in the environment of a vehicle. Detections of a range, an azimuth angle and a range rate of the object are acquired for at least two different points in time via a sensor. A cost function is generated which depends on a first source and a second source. The first source is based on a range rate velocity profile which depends on the range rate and the azimuth angle, and the first source depends on an estimated accuracy for the first source. The second source is based on a position difference which depends on the range and the azimuth angle for the at least two different points in time, and the second source depends on an estimated accuracy for the second source. By minimizing the cost function, a velocity estimate is determined for the object.
Claims
1. A method comprising: acquiring, via a sensor, detections of a range, an azimuth angle, and a range rate of an object in an environment of a vehicle for at least two different points in time; via a processing unit: generating a cost function that depends on a first source and a second source, the first source based on a range rate velocity profile that depends on the range rate and the azimuth angle and the first source depends on an estimated accuracy for the first source, the second source based on a position difference that depends on the range and the azimuth angle for the at least two different points in time and the second source depends on an estimated accuracy for the second source; and determining a velocity estimate for the object by minimizing the cost function.
2. The method of claim 1, wherein: a plurality of detections of the range, the azimuth angle, and the range rate are acquired for the object for each of the at least two points in time; a respective standard deviation is estimated for the range, the azimuth angle, and the range rate based on the plurality of detections; the range rate velocity profile depends on the plurality of detections of the range rate and the azimuth angle; the position difference depends on the plurality of detections of the range and the azimuth angle for the at least two different points in time; the estimated accuracy of the first source depends on the standard deviation of the range rate and the standard deviation of the azimuth angle; and the estimated accuracy of the second source depends on the standard deviation of the range and the standard deviation of the azimuth angle.
3. The method of claim 2, wherein: different standard deviations of the range, the azimuth angle, and the range rate are estimated for at least one of: each of the at least two points in time; or each detection of the range, the azimuth angle, and the range rate.
4. The method of claim 2, wherein: the first source and the second source are based on a normalized estimation error squared (NEES) that includes the respective standard deviations of the range, the azimuth angle, and the range rate.
5. The method of claim 2, wherein: the cost function comprises a first contribution based on a normalized estimation error squared (NEES) related to the first source and a second contribution based on a normalized estimation error squared (NEES) related to the second source.
6. The method of claim 5, wherein: the first contribution and the second contribution each comprise a sum of elements over the plurality of detections and each element is estimated as a normalized estimation error squared (NEES) for the respective detection.
7. The method of claim 6, wherein: the elements of the first contribution are based on a range rate equation and on the standard deviation of the range rate.
8. The method of claim 6, wherein: the elements of the second contribution are based on: the position difference for the respective detection, a velocity covariance matrix estimated based on the standard deviations of the range and the azimuth angle, and a time interval between the at least two different points in time for which the range and the azimuth angle are acquired by the sensor.
9. The method of claim 6, wherein: the cost function is generated as an average of the first contribution and the second contribution.
10. The method of claim 1, wherein: a component of the velocity is estimated by setting a derivative of the cost function with respect to a velocity component to zero.
11. The method of claim 1, wherein the cost function and the velocity estimate are determined by assuming a constant velocity of the object (23) in order to initialize a Kalman filter state estimation of the velocity.
12. A device comprising: a sensor configured to provide data for acquiring detections of a range, an azimuth angle, and a range rate of an object in a field of view of the sensor for at least two different points in time; and a processing unit configured to: generate a cost function that depends on a first source and a second source, the first source based on a range rate velocity profile that depends on the range rate and the azimuth angle and the first source depends on an estimated accuracy for the first source, the second source based on a position difference which depends on the range and the azimuth angle for the at least two different points in time and the second source depends on an estimated accuracy for the second source; and determine a velocity estimate for the object by minimizing the cost function.
13. The device of claim 12, wherein the sensor includes at least one of: a radar sensor or a Lidar sensor.
14. The device of claim 12, wherein the device comprises a vehicle.
15. The device of claim 12, wherein: a plurality of detections of the range, the azimuth angle, and the range rate are acquired for the object for each of the at least two points in time; a respective standard deviation is estimated for the range, the azimuth angle, and the range rate based on the plurality of detections; the range rate velocity profile depends on the plurality of detections of the range rate and the azimuth angle; the position difference depends on the plurality of detections of the range and the azimuth angle for the at least two different points in time; the estimated accuracy of the first source depends on the standard deviation of the range rate and the standard deviation of the azimuth angle; and the estimated accuracy of the second source depends on the standard deviation of the range and the standard deviation of the azimuth angle.
16. The device of claim 15, wherein: the first source and the second source are based on a normalized estimation error squared (NEES) that includes the respective standard deviations of the range, the azimuth angle, and the range rate.
17. The device of claim 16, wherein: the first contribution and the second contribution each comprise a sum of elements over the plurality of detections, wherein each element is estimated as a normalized estimation error squared (NEES) for the respective detection.
18. The device of claim 17, wherein: the elements of the first contribution are based on a range rate equation and on the standard deviation of the range rate.
19. The device of claim 17, wherein: the elements of the second contribution are based on: the position difference for the respective detection, a velocity covariance matrix estimated based on the standard deviations of the range and the azimuth angle, and a time interval between the at least two different points in time for which the range and the azimuth angle are acquired by the sensor.
20. A computer-readable storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to: receive, via a sensor, data for acquiring detections of a range, an azimuth angle, and a range rate of an object in a field of view of the sensor for at least two different points in time; generate a cost function that depends on a first source and a second source, the first source based on a range rate velocity profile that depends on the range rate and the azimuth angle and the first source depends on an estimated accuracy for the first source, the second source based on a position difference which depends on the range and the azimuth angle for the at least two different points in time and the second source depends on an estimated accuracy for the second source; and determine a velocity estimate for the object by minimizing the cost function.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0035] Exemplary embodiments and functions of the present disclosure are described herein in conjunction with the following drawings, showing schematically:
[0036]
[0037]
[0038]
[0039]
[0040]
DETAILED DESCRIPTION
[0041]
[0042]
[0043] The device 11 is provided for detecting objects in the environment of the vehicle 10 such as a target object 23 shown in
[0044] In addition, the raw detections are captured for a plurality of points in time (e.g. at least two). The detections for a certain point in time are also regarded as a radar scan, and the detections belonging to the same object are regarded as a cloud.
[0045] For each radar scan (or measurement instance), the radar sensor 13 captures m raw detections from the target object 23. The number m is typically 3 to 5. Each raw detection is indexed by i and described by the following parameters expressed in the vehicle or sensor coordinate system 17:
[0046] r.sub.i: range (or radial distance)
[0047] θ.sub.i: azimuth angle
[0048] {dot over (r)}.sub.i: raw range rate (or radial velocity), wherein i=1, . . . , m.
[0049] In addition, the accuracy of each of these detection attributes is assumed to be known (from the accuracy of the radar sensor 13) and is represented by the respective standard deviations σ.sub.r, σ.sub.θ and σ.sub.{dot over (r)}.
[0050] The range rate equation for a single raw detection i is given as follows:
{dot over (r)}.sub.i+V.sub.s.sup.x cos θ.sub.i+V.sub.s.sup.y sin θ.sub.i=V.sub.t,i.sup.x cos θ.sub.i+V.sub.t,i.sup.y sin θ.sub.i (1)
[0051] wherein V.sub.s denotes the current velocity of the host vehicle 10 and V.sub.t denotes the velocity to be determined for the target object 23. The components of the velocity vector of the target object 23, e.g. V.sub.x and V.sub.y, are defined along the x-axis 19 and the y-axis 21, respectively, of the vehicle coordinate system 17 (see
{dot over (r)}.sub.i,cmp={dot over (r)}.sub.i+V.sub.s.sup.x cos θ.sub.i+V.sub.s.sup.y sin θ.sub.i (2)
[0052] wherein {dot over (r)}.sub.i,cmp is the compensated range rate of the i-th raw detection.
[0053] Then the above equation can be reduced to:
{dot over (r)}.sub.i,cmp=V.sub.t,i.sup.x cos θ.sub.i+V.sub.t,i.sup.y sin θ.sub.i (3)
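Purely as an illustrative sketch (not part of the original disclosure), the ego-motion compensation of equation (2) can be written as follows; the function name and the example numbers are chosen for illustration only:

```python
import math

def compensated_range_rate(r_dot, theta, v_s_x, v_s_y):
    """Compensate a raw range rate for the host vehicle's own motion,
    cf. equation (2): r_dot_cmp = r_dot + Vs_x*cos(theta) + Vs_y*sin(theta)."""
    return r_dot + v_s_x * math.cos(theta) + v_s_y * math.sin(theta)

# A host moving at 10 m/s along x observes a stationary target straight
# ahead as a raw range rate of -10 m/s, which compensates to 0 m/s.
print(compensated_range_rate(-10.0, 0.0, 10.0, 0.0))  # 0.0
```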
[0054] The range rate equation is represented in vector form as:
[{dot over (r)}.sub.1,cmp . . . {dot over (r)}.sub.m,cmp].sup.T=[cos θ.sub.1 sin θ.sub.1; . . . ; cos θ.sub.m sin θ.sub.m][V.sub.t.sup.x V.sub.t.sup.y].sup.T (4)
[0055] An estimate 45 (see the drawings) for the velocity of the target object 23 is determined based on the following two sources:
[0056] 1. A velocity profile which is determined based on the range rate equation:
{dot over (r)}.sub.i,cmp=V.sub.t.sup.x cos θ.sub.i+V.sub.t.sup.y sin θ.sub.i, i=1, . . . , m (5)
[0057] 2. A position difference which is defined as follows:
V.sub.x,j=(r.sub.j.sup.B cos θ.sub.j.sup.B−r.sub.j.sup.A cos θ.sub.j.sup.A)/dt
V.sub.y,j=(r.sub.j.sup.B sin θ.sub.j.sup.B−r.sub.j.sup.A sin θ.sub.j.sup.A)/dt (6)
[0058] For the velocity profile (which may also be referred to as range rate velocity profile), it is assumed that all m detections belonging to the “distributed” target object 23 have the same absolute value for their velocity, but different azimuth angles. Therefore, the velocity vectors belonging to detections of the target object form the velocity profile. For the position difference, dt denotes the time difference between two points in time for which two detections are acquired.
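The position difference source can be sketched as follows (an illustrative, hypothetical helper, not part of the original disclosure), converting the polar detections of the two scans A and B to Cartesian coordinates and dividing by dt, cf. equation (6):

```python
import math

def position_difference_velocity(r_a, theta_a, r_b, theta_b, dt):
    """Velocity estimate for one detection from the Cartesian position
    difference between two scans A and B separated by the time interval dt."""
    v_x = (r_b * math.cos(theta_b) - r_a * math.cos(theta_a)) / dt
    v_y = (r_b * math.sin(theta_b) - r_a * math.sin(theta_a)) / dt
    return v_x, v_y

# A detection moving from (10, 0) m to (11, 0) m within dt = 0.1 s
# corresponds to a velocity of approximately (10, 0) m/s.
print(position_difference_velocity(10.0, 0.0, 11.0, 0.0, 0.1))
```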
[0059] Both sources of velocity have to be combined for determining the estimate 45 for the velocity, e.g. both sources need to be weighted utilizing estimated accuracies or standard deviations of the detections. For weighting the two sources according to equations (5) and (6), a Normalized Estimation Error Squared (NEES) can be used which is generally defined as follows:
e.sub.k=(X.sub.k−{circumflex over (x)}).sup.T{circumflex over (P)}.sub.k.sup.−1(X.sub.k−{circumflex over (x)}) (7)
[0060] {circumflex over (P)}.sub.k.sup.−1 denotes the inverse of the covariance matrix, and {circumflex over (x)} denotes an estimate based on measurements X.sub.k. The Normalized Estimation Error Squared (NEES) is usually employed for a consistency check of a variance estimation. For the present disclosure, the NEES is used to define a cost function Q for the two sources as defined above for the velocity estimation:
Q=1/2(NEES.sub.VP+NEES.sub.PD) (8)
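For a two-dimensional state, the general NEES of equation (7) can be sketched without a matrix library (an illustrative helper, not from the original disclosure; the 2x2 inverse is written out explicitly):

```python
def nees_2d(x_meas, x_est, p):
    """Normalized Estimation Error Squared e = d^T P^-1 d (equation (7))
    for a 2-D state, with d = x_meas - x_est and P a 2x2 covariance matrix."""
    dx = x_meas[0] - x_est[0]
    dy = x_meas[1] - x_est[1]
    (a, b), (c, d) = p
    det = a * d - b * c  # 2x2 inverse: (1/det) * [[d, -b], [-c, a]]
    ix = (d * dx - b * dy) / det
    iy = (-c * dx + a * dy) / det
    return dx * ix + dy * iy

# With an identity covariance, the NEES reduces to the squared error norm.
print(nees_2d((1.0, 2.0), (0.0, 0.0), ((1.0, 0.0), (0.0, 1.0))))  # 5.0
```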
[0061] The first term or contribution is regarded as Velocity Profile NEES and is calculated as follows:
NEES.sub.VP=Σ.sub.i({dot over (r)}.sub.i,cmp−V.sub.x cos θ.sub.i−V.sub.y sin θ.sub.i).sup.2/σ.sub.{dot over (r)}.sup.2 (9)
[0062] The second term or contribution is regarded as Position Difference NEES and can be calculated as follows:
NEES.sub.PD=Σ.sub.jV.sub.diff,j(σ.sub.V.sub.j.sup.2).sup.−1V.sub.diff,j.sup.T (10)
wherein:
V.sub.diff,j=V−V.sub.j (11)
[0063] and V=[V.sub.x V.sub.y] is the “true” velocity of the target object 23 which is to be estimated. Here and in the following, j denotes the index of the detection within the Position Difference NEES, e.g. in the same manner as described above for the index i.
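Following the description of claim 7 (the elements of the first contribution are based on the range rate equation and on the standard deviation of the range rate), the Velocity Profile NEES can be sketched as below; this is an illustrative reading, not the verbatim formula of the specification:

```python
import math

def nees_velocity_profile(detections, v_x, v_y, sigma_r_dot):
    """Sum over all detections of the squared residual of the range rate
    equation (3), normalized by the range-rate variance."""
    total = 0.0
    for r_dot_cmp, theta in detections:
        residual = r_dot_cmp - v_x * math.cos(theta) - v_y * math.sin(theta)
        total += residual ** 2 / sigma_r_dot ** 2
    return total

# Detections exactly consistent with (v_x, v_y) = (5, 0) m/s contribute
# (numerically almost) zero cost.
detections = [(5.0, 0.0), (0.0, math.pi / 2)]
print(nees_velocity_profile(detections, 5.0, 0.0, 0.1))
```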
[0064] The respective vector V.sub.j includes both components of the velocity based on the position difference and given by equation (6):
V.sub.j=[V.sub.x,jV.sub.y,j] (12)
[0065] The velocity covariance matrix of equation (10) is defined as:
σ.sub.V.sub.j.sup.2=(σ.sub.P.sup.2(A)+σ.sub.P.sup.2(B))/dt.sup.2 (13)
[0066] which may also be written as:
σ.sub.P.sup.2(A)=[σ.sub.x.sup.2(A) σ.sub.xy(A); σ.sub.xy(A) σ.sub.y.sup.2(A)] (14)
[0067] wherein A and B denote two different points in time (separated by a time interval dt) for which the respective covariance matrix is determined, and wherein:
σ.sub.x.sup.2(A)=cos.sup.2 θ.sup.A σ.sub.r.sup.2+(r.sup.A).sup.2 sin.sup.2 θ.sup.A σ.sub.θ.sup.2
σ.sub.y.sup.2(A)=sin.sup.2 θ.sup.A σ.sub.r.sup.2+(r.sup.A).sup.2 cos.sup.2 θ.sup.A σ.sub.θ.sup.2
σ.sub.xy(A)=sin θ.sup.A cos θ.sup.A(σ.sub.r.sup.2−(r.sup.A).sup.2 σ.sub.θ.sup.2) (15)
[0068] Hence, the velocity variance finally depends on θ and on the standard deviations of r and θ. It is noted that the above definitions are also valid accordingly for the second point in time denoted by B.
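Paragraph [0068] states that the velocity variance depends on θ and on the standard deviations of r and θ. Assuming standard first-order error propagation (an assumption for illustration; the exact formulas of the specification are not fully reproduced in this text), this can be sketched as:

```python
import math

def cartesian_position_variances(r, theta, sigma_r, sigma_theta):
    """First-order propagation of the polar accuracies (sigma_r, sigma_theta)
    to Cartesian position variances for x = r*cos(theta), y = r*sin(theta)."""
    var_x = (math.cos(theta) * sigma_r) ** 2 + (r * math.sin(theta) * sigma_theta) ** 2
    var_y = (math.sin(theta) * sigma_r) ** 2 + (r * math.cos(theta) * sigma_theta) ** 2
    return var_x, var_y

def velocity_variance(var_pos_a, var_pos_b, dt):
    """Variance of a position-difference velocity component: the position
    variances of the two points in time A and B add and scale with 1/dt^2."""
    return (var_pos_a + var_pos_b) / dt ** 2

var_x_a, var_y_a = cartesian_position_variances(10.0, 0.0, 0.15, 0.005)
print(velocity_variance(var_x_a, var_x_a, 0.1))  # x-velocity variance, two equally accurate scans
```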
[0069] The term of Position Difference NEES may be simplified:
NEES.sub.PD=Σ.sub.j[σ.sub.Vy,j.sup.2(V.sub.x−V.sub.x,j).sup.2−2σ.sub.Vxy,j(V.sub.x−V.sub.x,j)(V.sub.y−V.sub.y,j)+σ.sub.Vx,j.sup.2(V.sub.y−V.sub.y,j).sup.2]/|σ.sub.V.sub.j.sup.2| (16)
[0070] wherein:
|σ.sub.V.sub.j.sup.2|=σ.sub.Vx,j.sup.2σ.sub.Vy,j.sup.2−σ.sub.Vxy,j.sup.2 (17)
[0071] Finally, the entire or total NEES cost function Q is obtained by inserting the Velocity Profile NEES and the Position Difference NEES into equation (8).
[0072] To find V.sub.x and V.sub.y, the entire NEES cost function Q is minimized analytically by calculating the first derivatives ∂Q/∂V.sub.x and ∂Q/∂V.sub.y in order to find the global minimum.
[0073] For determining the minimum of the NEES cost function, these derivatives are set equal to zero: ∂Q/∂V.sub.x=0 and ∂Q/∂V.sub.y=0.
[0074] Some reorganization of the above equations can be performed as follows:
[0075] Finally, this leads to two equations depending on the two variables V.sub.x and V.sub.y:
S.sub.xxV.sub.x−S.sub.xyV.sub.y=S.sub.x
−S.sub.yxV.sub.x+S.sub.yyV.sub.y=S.sub.y (21)
wherein S.sub.xy=S.sub.yx
[0076] By defining the following abbreviations:
W=S.sub.xxS.sub.yy−S.sub.xyS.sub.yx
W.sub.x=S.sub.xS.sub.yy+S.sub.xyS.sub.y
W.sub.y=S.sub.xxS.sub.y+S.sub.xS.sub.yx (22)
the velocity of the target object 23 is estimated as follows:
V.sub.x=W.sub.x/W
V.sub.y=W.sub.y/W (23)
[0077] In summary, the two components V.sub.x and V.sub.y of the velocity vector of the target object 23 (see
[0078] The velocity vector of the target object 23 is therefore determined analytically based on the input data and based on the NEES cost function. The NEES includes a respective first and second contribution for each of the two sources for estimating the velocity, e.g. for the velocity profile which is based on the range rate equation and for the position difference which depends on the range and the azimuth angle for each detection. When combining these two contributions, the velocity profile NEES and the position difference NEES are weighted by the respective standard deviations which reflect the accuracy of the sensor measurements.
[0079] Since two sources for estimating the velocity are considered and the cost function is minimized, the accuracy of the velocity estimate is improved in comparison to methods which rely on only one of these sources. Due to the analytical expression as shown explicitly above, performing the method requires low computational effort and minimal time. Hence, the method is easy to embed in automotive systems.
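The closed-form solution of the two linear equations can be sketched as follows, assuming the sums S.sub.xx, S.sub.yy, S.sub.xy, S.sub.x and S.sub.y have already been accumulated from the detections (an illustrative sketch with arbitrary example values, not part of the original disclosure):

```python
def solve_velocity(s_xx, s_yy, s_xy, s_x, s_y):
    """Solve the two linear equations for V_x and V_y via Cramer's rule,
    using the abbreviations of equation (22): V_x = W_x / W, V_y = W_y / W."""
    w = s_xx * s_yy - s_xy * s_xy  # determinant; S_xy = S_yx
    w_x = s_x * s_yy + s_xy * s_y
    w_y = s_xx * s_y + s_x * s_xy
    return w_x / w, w_y / w

# For a diagonal system (S_xy = 0) the two components decouple:
print(solve_velocity(2.0, 2.0, 0.0, 4.0, 6.0))  # (2.0, 3.0)
```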
[0080] For verifying the method according to the disclosure, two different scenarios are considered in
[0081] The first scenario is shown in
[0082] The velocity estimation is performed with time steps of dt=100 ms, and single point detections are generated from the front left corner of the target object 23 for each time step or point in time. Furthermore, constant detection accuracies (standard deviations) are assumed to be: σ.sub.θ=0.3 deg, σ.sub.r=0.15 m and
[0083] In
[0084] In
[0085] In
[0086] As can be seen in
[0087] For
[0088] This is also reflected in the statistics as shown in
[0089] This is also confirmed by the results as shown in
[0090] In
Single point detections are generated again from the left corner of the target object 23 for each time step.
[0091] The results as shown in
(−13.89, −13.89) m/s, which is the relative velocity with respect to the vehicle 10 for the second scenario. In other words, the velocity of the vehicle 10 is compensated for the reference 35 in
[0092] As can be seen in
[0093] As shown in
[0094]
[0095] In
[0096] This is also confirmed by the lines 61, 63 and 65 for the root mean square for the deviations of the heading (
[0097] In summary, the results for both scenarios (see