Method for calibrating the alignment of a moving object sensor

11486988 · 2022-11-01

Abstract

This disclosure relates, e.g., to a method for calibrating the alignment of a moving object sensor that comprises the steps of: detecting the movement of the object sensor; repeatedly detecting at least one static object by the moving object sensor at different positions of the object sensor; calculating the relative positions of the static object with respect to the corresponding positions of the object sensor; calculating anticipated positions of the static object from the relative positions while assuming an alignment error of the object sensor; calculating an error parameter from the anticipated positions; and minimizing the error parameter by adapting the alignment error of the object sensor.

Claims

1. A method for calibrating the alignment of a moving object sensor that comprises: detecting the movement of the object sensor; repeatedly detecting at least one static object by the moving object sensor in different positions of the object sensor; calculating the relative positions of the static object with reference to the corresponding positions of the object sensor; calculating anticipated positions of the static object from the relative positions using an assumed alignment error of the object sensor; calculating an error parameter from the anticipated positions; minimizing the error parameter by adapting the alignment error of the object sensor; and determining an actual alignment error of the object sensor from the adapted alignment error at which the error parameter is substantially minimized.

2. The method of claim 1, wherein the object sensor detects the at least one static object in a plane.

3. The method of claim 2, wherein the at least one static object is an elongated object that appears basically punctiform in the detection plane.

4. The method of claim 3, wherein the alignment error of the object sensor comprises an azimuth angle error.

5. The method of claim 3, wherein the error parameter is calculated from the Helmert point error.

6. The method of claim 2, wherein the alignment error of the object sensor comprises an azimuth angle error.

7. The method of claim 2, wherein the error parameter is calculated from the Helmert point error.

8. The method of claim 1, wherein the alignment error of the object sensor comprises an azimuth angle error.

9. The method of claim 8, wherein the error parameter is iteratively minimized by changing the azimuth angle error.

10. The method of claim 9, wherein the error parameter is calculated from the Helmert point error.

11. The method of claim 8, wherein the error parameter is minimized by an optimization method.

12. The method of claim 8, wherein the error parameter is calculated from the Helmert point error.

13. The method of claim 1, wherein the error parameter is calculated from the Helmert point error.

14. The method of claim 1, wherein more than one static object is detected.

15. The method of claim 1, wherein the absolute positions of the object sensor are also detected.

16. The method of claim 1, wherein the positions of the static object are calculated from the positions of the object sensor by polar appending.

17. The method of claim 16, wherein an error-corrected position of the static object is calculated.

18. The method of claim 17, wherein an absolute position of the object sensor is determined from the error-corrected position of the static object and a known absolute position of the static object.

19. The method of claim 1, wherein the object sensor comprises at least one of a radar sensor, an ultrasonic sensor, a laser sensor, and an image sensor.

20. The method of claim 1, wherein the object sensor is installed in a vehicle.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) IN THE FIGS.:

(2) FIG. 1 shows a schematic representation of a sensor misalignment;

(3) FIG. 2 shows the detection of an object by means of an object sensor mounted on a moving vehicle;

(4) FIG. 3 shows the geometric situation of the relative position of an object to the sensor with an assumed sensor misalignment.

DETAILED DESCRIPTION

(5) Specific embodiments of the invention are described in detail below. In the following description of embodiments of the invention, specific details are given in order to provide a thorough understanding of the invention. However, it will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.

(6) A first exemplary aspect relates to a method for calibrating the alignment of a moving object sensor that comprises the steps of: detecting the movement of the object sensor; repeatedly detecting at least one static object by the moving object sensor at different positions of the object sensor; calculating the relative positions of the static object with respect to the corresponding positions of the object sensor; calculating anticipated positions of the static object from the relative positions while assuming an alignment error of the object sensor; calculating an error parameter from the anticipated positions; and minimizing the error parameter by adapting the alignment error of the object sensor.

(7) The present aspect accordingly proposes to provide a sensor calibration using an alignment error of the object sensor. The associated reduction of the data detection and data processing required for sensor calibration to a few robustly detectable features increases both the robustness of ascertaining a potential alignment error and the calculation speed during operation.

(8) The method according to the present aspect is based on the consideration that, for an error-free detection of a static object, the calculated relative position of the static object with respect to the corresponding position of the object sensor depends only on the movement of the object sensor. When the movement of the object sensor is taken into account, the position of the static object should therefore not change. Given an alignment error of the object sensor, however, the detected relative static object positions with respect to the corresponding object sensor positions deviate from the anticipated static object positions. It is therefore proposed in some embodiments to calculate the anticipated positions while assuming an alignment error of the object sensor. If the assumed alignment error corresponds to the actual alignment error of the object sensor, the anticipated positions of the static object should not change when the movement of the object sensor is taken into account. If the assumed alignment error does not correspond to the actual alignment error, deviations between the individual anticipated positions arise. An error parameter for the anticipated position can be ascertained from these deviations. The assumed alignment error is then adapted so that the error parameter is minimized. The alignment error at which the error parameter is minimal then corresponds to the actual alignment error of the object sensor.
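
By way of illustration, a minimal Python sketch of this idea follows. The data layout, the function names, and the angle convention (azimuth measured from the sensor's assumed forward axis, with the sensor heading added in) are assumptions of this sketch, not taken from the disclosure:

```python
import math

def anticipated_positions(sensor_poses, detections, delta_alpha):
    """For each measurement, compute where the static object would lie if the
    sensor's azimuth were off by the assumed error delta_alpha (radians).
    sensor_poses: list of (x, y, heading); detections: list of (alpha, d)."""
    positions = []
    for (x_s, y_s, heading), (alpha, d) in zip(sensor_poses, detections):
        bearing = heading + alpha + delta_alpha
        positions.append((x_s + d * math.sin(bearing),
                          y_s + d * math.cos(bearing)))
    return positions

def position_spread(positions):
    """Simple error parameter: scatter of the anticipated positions around
    their mean; it vanishes when the assumed error matches the actual one."""
    n = len(positions)
    mx = sum(x for x, _ in positions) / n
    my = sum(y for _, y in positions) / n
    return math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2
                         for x, y in positions) / n)
```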

(9) The method in some embodiments may be realized by detecting a wide range of different static objects. For example, flat objects may be detected, in which case it can be checked how well the surfaces detected at different times can be brought into congruence with each other while taking an alignment error into account. Individual features that are processed further in the method may also be extracted from flat objects, for example by means of suitable image processing such as edge detection.

(10) To reduce the amount of accumulating data, it is proposed that the object sensor detect the at least one static object in a plane; in some embodiments, the method is therefore conducted two-dimensionally. Even if the data provided by the object sensor are three-dimensional, a two-dimensional plane can always be extracted from the three-dimensional data set for further processing in some embodiments.

(11) In some embodiments, robust data for sensor calibration result when, in two-dimensional detection, the at least one static object is an elongated object that appears substantially punctiform in the detection plane. Even when the data are subject to interference, punctiform detections of the static objects used for sensor calibration can be obtained very reliably and tracked effectively while the vehicle is moving. Suitable elongated, rod-shaped objects that are frequently located in a vehicle's surroundings are, for example, traffic signs, stoplights, streetlights, guideposts, etc. In three-dimensional detection, lane markers and surfaces such as building façades, fences, guardrails and the like can also be used for data evaluation.

(12) In some embodiments, the alignment error of the object sensor comprises an azimuth angle error. In sensor calibration, the azimuth angle error generally represents the error with the greatest effect on the positioning of static objects, since its effect grows as the distance of the object from the sensor increases. In some embodiments that work in a two-dimensional plane, it can be taken into account that alignment errors in the other solid angle directions can also contribute, to a lesser extent, to the detected azimuth angle error. If the evaluation is reduced to the azimuth angle error alone, the error correction in fact refers to a “pseudo-azimuth angle”, since it is assumed that the entire error is attributable to the azimuth angle. Depending on how strongly the other angle errors influence the azimuth angle, the pseudo-azimuth angle determined by the method according to the present embodiments cannot always be used as the correction angle for correcting or further processing other sensor data, since the detected pseudo-azimuth angle would then not correspond to the actual azimuth angle error. In these cases, the method in some embodiments can at least serve as an indication of an alignment error of the object sensor and be used to generate a corresponding warning message.

(13) In some embodiments, the error parameter is iteratively minimized by changing the azimuth angle error. In this case, for example, a greatest possible azimuth angle error is assumed first, and the position of the static object is ascertained while taking this greatest possible error into account. By systematically varying the azimuth angle error, recalculating the object positions and determining the error parameter, the actual azimuth angle error may then be ascertained by means of a suitable stop criterion. The stop criterion may, for example, entail that the error parameter no longer changes significantly given increasingly smaller changes in the azimuth angle error.

(14) In some embodiments, the error parameter can be minimized by an optimization method. For example, an optimization problem may be formulated to minimize the error parameter with respect to the azimuth angle error. Suitable optimization methods are, for example, curve-fitting methods based on Gauss-Markov models, which are known per se.

(15) A wide range of values can be used as a suitable error parameter that is to be minimized. For punctiform static objects, the averages of the anticipated positions of the static object may be used as the error parameter, for example.

(16) A suitable error parameter may, for example, also be derived from the Helmert point error. In two dimensions, the Helmert point error of an object is the square root of the sum of the squared standard deviations of the object's X- and Y-coordinates:

$$s_{P_i}^H = \sqrt{s_{x_i}^2 + s_{y_i}^2}$$

where $s_{P_i}^H$ is the Helmert point error of the i-th object, and $s_{x_i}$ and $s_{y_i}$ are the standard deviations of the X- and Y-coordinates of the i-th object, respectively.

(17) In some embodiments, more than one static object is detected. In these cases, for punctiform static objects in two dimensions, the sum of the Helmert point errors $s_{\mathrm{sum}}^H$ may be used as the error parameter and minimized by suitably varying the azimuth angle error:

(18) $$s_{\mathrm{sum}}^H = \sum_{i=1}^{n} s_{P_i}^H$$
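
As a sketch, these two formulas might be computed as follows, assuming each object's anticipated positions are collected as (x, y) tuples and sample standard deviations are used (the function names are illustrative):

```python
import math

def helmert_point_error(positions):
    """Helmert point error of one object: sqrt(s_x^2 + s_y^2), with s_x and
    s_y the standard deviations of the anticipated X and Y coordinates.
    Assumes at least two anticipated positions per object."""
    n = len(positions)
    mx = sum(x for x, _ in positions) / n
    my = sum(y for _, y in positions) / n
    s_x_sq = sum((x - mx) ** 2 for x, _ in positions) / (n - 1)
    s_y_sq = sum((y - my) ** 2 for _, y in positions) / (n - 1)
    return math.sqrt(s_x_sq + s_y_sq)

def helmert_sum(objects):
    """Error parameter over several objects: sum of their Helmert point errors."""
    return sum(helmert_point_error(obj) for obj in objects)
```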

(19) The Helmert point error is also suitable as a measure of the compactness, or quality, of the assignment of the detections to the repeatedly detected objects. If several objects are detected several times in sequential measurements, these repeatedly detected objects can be assigned to each other with the assistance of a tracking method. This assignment frequently already takes place within the sensor's own data processing.
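
The disclosure does not prescribe a particular tracking method; as one simple, hypothetical stand-in, detections can be assigned to existing tracks by greedy nearest-neighbour matching:

```python
import math

def assign_detections(tracks, detections, gate=1.0):
    """Greedily assign each track (last known (x, y)) the nearest unused
    detection within a gating distance (same units as the coordinates);
    returns a list of (track_index, detection_index) pairs."""
    pairs, used = [], set()
    for ti, (tx, ty) in enumerate(tracks):
        best_di, best_dist = None, gate
        for di, (dx, dy) in enumerate(detections):
            dist = math.hypot(dx - tx, dy - ty)
            if di not in used and dist < best_dist:
                best_di, best_dist = di, dist
        if best_di is not None:
            used.add(best_di)
            pairs.append((ti, best_di))
    return pairs
```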

(20) If several static objects are detected, in some embodiments the movement of the object sensor itself can already be calculated from the tracking of the static objects during sensor data processing, under the prerequisite that the relative positions of the static objects remain unchanged.

(21) In some embodiments, the vehicle position and the driving direction at the time of each detection are detected separately, for example using suitable additional sensors on the object sensor, or on the vehicle in which the object sensor is installed, such as speed sensors, steering angle sensors from which the driving direction can be derived, and the like.

(22) Frequently, satellite-supported navigation sensors may also be available in some embodiments, with which the absolute position of the object sensor can be detected at each detection. To increase the precision of sensor movement detection, absolute positioning methods such as satellite navigation methods can be combined with relative movement information.

(23) Independent of whether absolute positions of the object sensor are used, or relative positions of the object sensor at the individual detection times, the relative or absolute positions of the static object may be calculated by polar appending from the positions of the object sensor in some embodiments. These embodiments are beneficial because, with polar appending, the direction error is directly included in the ascertained position of the static object.

(24) Once the azimuth angle error has been ascertained, in some embodiments an error-corrected actual position of the static object may also be calculated from the measured position of the static object. These embodiments are therefore also suitable for ascertaining error-corrected absolute positions of static objects when the absolute position of the object sensor is known, which can be used for cartography tasks, for example.

(25) Conversely, in some embodiments the error-corrected absolute position of the static object may also be compared with an absolute position of the static object that is known, for example, from map data, to thereby ascertain the absolute position of the object sensor even without aids such as satellite-supported navigation.
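
As a sketch of this inverse step, using the polar-appending convention of the formulas in the detailed description below (the function name is illustrative):

```python
import math

def sensor_position_from_map(x_o, y_o, alpha, d, delta_alpha):
    """Inverse polar appending: absolute sensor position from a known absolute
    object position (x_o, y_o), the measured azimuth alpha, the distance d,
    and the ascertained azimuth angle error delta_alpha (radians)."""
    return (x_o - d * math.sin(alpha + delta_alpha),
            y_o - d * math.cos(alpha + delta_alpha))
```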

(26) A wide variety of sensors may be used as the object sensor, e.g., sensors that enable a measurement of angle and distance, such as radar sensors, ultrasonic sensors, laser sensors (laser scanners) or image sensors.

(27) In some embodiments, the object sensor is mounted on a vehicle such as a motor vehicle, or is installed therein. The vehicle may for example be an autonomously driving or partially autonomously driving vehicle.

(28) Reference will now be made to the drawings in which the various elements of embodiments will be given numerical designations and in which further embodiments will be discussed.

(29) Specific references to components, process steps, and other elements are not intended to be limiting. Further, it is understood that like parts bear the same or similar reference numerals when referring to alternate figures. It is further noted that the figures are schematic and provided for guidance to the skilled reader and are not necessarily drawn to scale. Rather, the various drawing scales, aspect ratios, and numbers of components shown in the figures may be purposely distorted to make certain features or relationships easier to understand.

(30) FIG. 1 shows an object sensor S that can detect objects in the surroundings of the sensor. When evaluating the detected sensor data, it is assumed that the sensor is aligned along arrow 10 in the direction of movement of the sensor. The actual sensor orientation, however, deviates laterally from the assumed sensor alignment 10 and is shown by the arrow 11. The angle Δα between the assumed sensor alignment 10 and the actual sensor alignment 11 accordingly represents the azimuth angle error. In the depicted example, the object data are detected in the plane of the drawing; the azimuth angle error Δα is therefore an error in the lateral alignment of the sensor S.

(31) FIG. 2 shows a schematic representation of the detection of an object O by a moving vehicle 12 on which the sensor S is installed. In the first measurement, the sensor is in position S_1, and the object O appears at an azimuth angle α_1. Due to the sensor error Δα, however, the object appears at the angle α_1 + Δα, i.e., at the apparent position O_1.

(32) At a second point in time, the vehicle 12 has moved to the middle position shown in FIG. 2. The static object O is still located at the same location, but appears at a greater angle α_2 from sensor position S_2. Due to the azimuth angle error Δα, however, the object seemingly appears to be at location O_2, at an azimuth angle α_2 + Δα.

(33) In the third measurement, the vehicle 12 has moved further to the left, and the sensor is situated at location S_3. The actual object position O appears here at an even greater angle α_3. The apparent position O_3 in the third measurement appears at an angle α_3 + Δα.

(34) FIG. 3 shows how the apparent object position O_i results from polar appending from the sensor position S_i. When the distance d between the sensor S_i and the object O is known, which can be ascertained for example by time-of-flight measurements of ultrasonic or radar pulses, or by stereoscopic image evaluation, the offset between the sensor and the apparent object position in Cartesian coordinates results as

$$\Delta x_i = d \cdot \sin(\alpha_i + \Delta\alpha), \qquad \Delta y_i = d \cdot \cos(\alpha_i + \Delta\alpha).$$

(35) If the sensor position S_i corresponding to the vehicle movement is taken into account, the position of the object O results from polar appending to the position of the sensor S:

$$x_O = x_S + d \cdot \sin(\alpha + \Delta\alpha), \qquad y_O = y_S + d \cdot \cos(\alpha + \Delta\alpha).$$
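
These two formulas translate directly into a small helper function (a sketch; the convention with the sine on the X-axis follows the equations above):

```python
import math

def polar_append(x_s, y_s, alpha, d, delta_alpha=0.0):
    """Object position by polar appending from the sensor position (x_s, y_s),
    measured azimuth alpha, distance d, and assumed azimuth error delta_alpha
    (all angles in radians)."""
    return (x_s + d * math.sin(alpha + delta_alpha),
            y_s + d * math.cos(alpha + delta_alpha))
```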

(36) If the object position O is ascertained from the detected apparent positions O_i while taking into account the actual azimuth angle error Δα, the position of a static object should not change while the sensor is moving. If the assumed azimuth angle error Δα does not correspond to the actual azimuth angle error, the calculated object positions will deviate more or less strongly from the actual object position O, i.e., the object positions derived from the apparent positions O_i will differ from one another. The actual azimuth angle error Δα can then be ascertained through suitable minimization/optimization methods from an error parameter derived from these deviations, such as the sum of the Helmert point errors for several detected objects O.

(37) Accordingly, by systematically varying the angle Δα, recalculating the object positions and determining the sum of the Helmert point errors, a pseudo-azimuth angle can be determined with the assistance of a suitable stop criterion. The stop criterion can, for example, entail that the sum of the Helmert point errors no longer changes significantly given increasingly smaller changes of Δα. For example, assuming a greatest possible azimuth angle error of 2°, the sum of the Helmert point errors can be calculated for the values Δα=2°, Δα=0° and Δα=−2°. In the next iteration step, the sum of the Helmert point errors is calculated for the values Δα=1° and Δα=−1°. If the sum of the Helmert point errors is smaller for Δα=1°, the calculation is continued in the next iteration step with Δα=0.5° and Δα=1.5°. This is continued until the sum of the Helmert point errors no longer changes significantly, and the iteration accordingly approaches the correct pseudo-azimuth angle.
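
A Python sketch of this interval-halving search follows. Here `error_of` stands for a user-supplied function that rebuilds the object positions for a trial Δα (in degrees) and returns the summed Helmert point error, e.g. built from the helpers sketched earlier; a fixed step tolerance stands in for the "no significant change" stop criterion:

```python
def find_pseudo_azimuth(error_of, max_err_deg=2.0, tol_deg=1e-3):
    """Iteratively halve the search step around the best angle so far,
    starting from the greatest possible azimuth angle error."""
    best = 0.0
    step = max_err_deg
    while step > tol_deg:
        # Evaluate the error parameter left of, at, and right of the current best.
        best = min((best - step, best, best + step), key=error_of)
        step /= 2.0
    return best
```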

(38) Alternatively, an optimization problem can be formulated in which Δα is sought such that $s_{\mathrm{sum}}^H$ is minimal. Curve-fitting calculation methods are suitable for this (such as Gauss-Markov models).
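
As one hedged alternative to the Gauss-Markov curve fitting named above, a standard bounded scalar minimizer can search for Δα directly; the cost function below is a self-contained dummy standing in for the hypothetical summed Helmert point error:

```python
from scipy.optimize import minimize_scalar

def error_of(delta_alpha_deg):
    """Hypothetical cost: rebuild all anticipated object positions for this
    trial azimuth error and return the summed Helmert point error. A dummy
    quadratic stands in here so the sketch runs on its own."""
    return (delta_alpha_deg - 0.7) ** 2  # dummy with its minimum at 0.7°

# Bounded scalar minimization over the assumed azimuth error (degrees).
result = minimize_scalar(error_of, bounds=(-2.0, 2.0), method="bounded")
pseudo_azimuth_deg = result.x  # ≈ 0.7 in this dummy example
```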

(39) It is also possible for the portion of the error that is actually caused by the other angles to vary at different locations in the sensor coordinate system. This means that, theoretically, a pseudo-azimuth angle could be determined for different areas of the sensor coordinate system, for example in the form of an arbitrarily sized matrix, which would remain more precise and stable.

(40) A benefit of the method is that it can be used while driving without great effort. It is moreover a fast method that requires few resources, which is not the case with many known methods. The object positions calculated using the “pseudo-azimuth angle” can be used for mapping as well as for positioning with the assistance of a map. In its simplest embodiment, the method does not offer an absolute calibration, but it may be used, for example, to at least give a warning that the calibration in use is inappropriate. This could cause downstream applications that need precise sensor data to transition to a restricted or non-functional mode.

LIST OF REFERENCE NUMERALS

(41)

10 Arrow, direction of movement of the sensor, assumed sensor orientation
11 Arrow, actual sensor orientation
12 Vehicle
S Sensor
S_i Different sensor positions (i = 1, 2, 3)
O Object
O_i Different apparent object positions
Δα Azimuth angle error
α_i Azimuth angles at different sensor positions (i = 1, 2, 3)

(42) The invention has been described in the preceding using various exemplary embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor, module or other unit or device may fulfil the functions of several items recited in the claims.

(43) The mere fact that certain measures are recited in mutually different dependent claims or embodiments does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.