Method for determining a position of at least two sensors, and sensor network

10782402 · 2020-09-22

Abstract

In a method for determining a position of at least two sensors in relation to one another, a moving object is sensed by the at least two sensors, and at least one movement variable for the moving object is determined by one of the sensors, at least said movement variable or variables derived therefrom being used jointly for determining a position of the sensors in relation to one another. The sensor network is designed to carry out a method of said kind.

Claims

1. A method for determining a relative arrangement of at least two sensors in relation to one another, the method comprising: detecting a moving object by the at least two sensors; establishing a velocity of the moving object relative to each sensor of the at least two sensors by the respective sensor of the at least two sensors, wherein the velocity of the moving object relative to each sensor is a velocity of the moving object along a direction of a line connecting the respective sensor and the moving object; and determining the relative arrangement of the at least two sensors in relation to one another by using the velocity of the moving object relative to each sensor of the at least two sensors or variables derived from the velocity of the moving object relative to each sensor of the at least two sensors.

2. The method as claimed in claim 1, wherein the relative arrangement of the sensors comprises at least a relative orientation of the sensors in relation to one another.

3. The method as claimed in claim 1, wherein an angle between movement directions of the moving object, established by the at least two sensors, is determined and used for determining the relative arrangement of the at least two sensors in relation to one another.

4. The method as claimed in claim 1, wherein, by each sensor of the at least two sensors, a direction of the moving object as observed by the respective sensor is established and used for determining the relative arrangement of the at least two sensors in relation to one another.

5. The method as claimed in claim 4, wherein, by each sensor of the at least two sensors, the direction of the moving object as observed by the respective sensor is established and detected in a manner dependent on a velocity of the moving object along a direction of a line connecting the respective sensor of the at least two sensors and the moving object, and wherein directions in which the established velocity vanishes are used for determining the relative arrangement of the at least two sensors in relation to one another.

6. The method as claimed in claim 1, wherein, by each sensor of the at least two sensors, a direction of the moving object as observed by each sensor of the at least two sensors is established and detected in a manner dependent on the velocity of the moving object along the direction of the line connecting the respective sensor of the at least two sensors and the moving object, and wherein directions in which the established velocity vanishes are used for determining the relative arrangement of the at least two sensors in relation to one another.

7. The method as claimed in claim 1, wherein, by one of the at least two sensors, a distance of the moving object from the sensor is established and used for determining the relative arrangement of the at least two sensors in relation to one another.

8. The method as claimed in claim 1, wherein an offset of the at least two sensors in relation to one another is determined.

9. A sensor network comprising: at least two sensors; and an evaluation device configured to establish a velocity of a moving object from data of each sensor of the at least two sensors, to link the velocity of the moving object or variables derived from the velocity of the moving object, and to use the velocity of the moving object or the variables derived from the velocity of the moving object for determining a relative arrangement of the at least two sensors in relation to one another, wherein the velocity of the moving object is configured to be established, relative to each sensor of the at least two sensors, by the respective sensor of the at least two sensors, and wherein the velocity of the moving object relative to each sensor of the at least two sensors is a velocity of the moving object along a direction of a line connecting the respective sensor and the moving object.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows, in a plan view of a schematic diagram, a sensor network according to an embodiment with two radar sensors when carrying out a method according to the present embodiments for determining an arrangement of the radar sensors in relation to one another in the situation of simultaneous detection of a moving object.

(2) FIG. 2 shows, in a plan view of a schematic diagram, the sensor network in accordance with FIG. 1 when carrying out the method according to the embodiment in accordance with FIG. 1 in the situation of a non-simultaneous detection of a moving object.

(3) FIG. 3 shows, in a diagram, a taking account of the plurality of measured values for each radar sensor of the sensor network according to the embodiment in accordance with FIG. 1 when carrying out the method according to the present embodiments.

(4) FIG. 4 shows, in a plan view of a schematic diagram, a further exemplary embodiment of a sensor network with two radar sensors when carrying out a further exemplary embodiment of the method according to the present embodiments in the situation of simultaneous detection of a moving object.

(5) FIG. 5 shows, in a plan view of a schematic diagram, a further exemplary embodiment of the sensor network with two radar sensors when carrying out a further exemplary embodiment of a method according to the present embodiments in the situation of simultaneous detection of a moving object.

DETAILED DESCRIPTION

(6) The sensor network 5 depicted in FIG. 1 includes a first radar sensor S.sub.0 and a second radar sensor S.sub.1.

(7) The first and second radar sensors S.sub.0 and S.sub.1 are each provided to measure the distance R from an object O. The measurement of the distance R to the object O (equivalently, the measurement of the distance R from the object O to the respective radar sensor S.sub.0, S.sub.1) is carried out, in a known manner, by a time-of-flight measurement of the radar signal emitted by the radar sensor S.sub.0, S.sub.1, reflected by the object O, and received again by the radar sensor S.sub.0, S.sub.1. The signals are evaluated by evaluation electronics (not depicted explicitly) for establishing the time-of-flight.

(8) Furthermore, the first and second radar sensors S.sub.0, S.sub.1 are each provided to measure the direction in which the object O is situated as viewed from the radar sensor S.sub.0, S.sub.1, and consequently to measure the angle α at which the object O appears, relative to a reference direction fixed in relation to the radar sensor S.sub.0, S.sub.1, as viewed from the respective radar sensor. The first and second radar sensors S.sub.0 and S.sub.1 each have a group antenna (not explicitly depicted here), which is provided for a phase-sensitive reception of the radar signal reflected by the object O.

(9) Consequently, the first and second radar sensors S.sub.0 and S.sub.1 are provided to detect the distance R and the angle α specifying the direction of the object O relative to the radar sensor S.sub.0, S.sub.1. The position of the object O is detected in polar coordinates (R, α) of the respective radar sensor S.sub.0, S.sub.1, in which the radar sensor S.sub.0, S.sub.1 lies at the origin.

(10) In the depicted exemplary embodiment, the angle α may specify the angle within a plane parallel to the ground, said plane being parallel to the plane of the drawing in the illustrations. Consequently, the angle α forms an aspect angle.

(11) In the depicted exemplary embodiments, the object O is a moving object (e.g., a land vehicle, as depicted). In further embodiments not expressly depicted, the object O may also be any other moving object (e.g., a person, an aircraft, or a machine part). The exemplary embodiments are explained first using a uniformly moving object O (e.g., the direction and magnitude of the velocity of the object O are constant in time in the reference system in which the radar sensors S.sub.0, S.sub.1 of the sensor network 5 are at rest); extended method acts in the case of a non-uniform movement of the object O are explained in the context of another exemplary embodiment.

(12) The position of the object O is detected in polar coordinates at a known time interval with a time duration Δt by each one of the radar sensors S.sub.0, S.sub.1. The detection of the position of the object O in polar coordinates is carried out in a time-synchronous manner by the two radar sensors S.sub.0, S.sub.1. Hence, a movement variable, namely the change in position of the object O during the time duration Δt, is detected by each one of the radar sensors S.sub.0, S.sub.1. The change in position over time of the object O corresponds to the velocity u of the object O, and this may be expressed as a vector in Cartesian coordinates x, y relative to each radar sensor S.sub.0, S.sub.1:

(13) u = [ẋ, ẏ], (3)

(14) In the case of the uniform movement of the object O, the velocity u of the object O corresponds to the change in position of the object O during the time duration Δt (in the case of deviations of the movement of the object O from the uniform movement, the time duration Δt is selected to be sufficiently small such that the instantaneous velocity u is detectable with a sufficiently accurate approximation by the change of position over time of the object O during the time duration Δt, as specified above).

(15) The direction of the velocity u is an inherent property of the movement of the object O and is therefore independent of the absolute position of the target and of the relative distance of the radar sensors S.sub.0, S.sub.1 from one another.

(16) The magnitude of the velocity u detected by the radar sensors S.sub.0, S.sub.1 is likewise independent of the positions and orientations of the radar sensors S.sub.0, S.sub.1.

(17) However, the measured movement direction depends directly on the direction in which the object O is situated as seen from the radar sensor S.sub.0, S.sub.1 (e.g., on the angle α at which the object O appears relative to a reference direction fixed in relation to the radar sensor S.sub.0, S.sub.1), and consequently the movement direction depends directly on the orientation of the radar sensor S.sub.0, S.sub.1.

(18) If the radar sensor S.sub.0, S.sub.1 is rotated about a direction perpendicular to the ground, the angle α changes by precisely the negative of the rotation angle. The angle of intersection of the velocity directions detected by two radar sensors at the respectively equivalent instant therefore directly supplies the relative orientation of the radar sensors S.sub.0, S.sub.1 with respect to one another:

(19) α.sub.01 = arccos((u.sub.0 · u.sub.1)/(|u.sub.0| |u.sub.1|)) (4)

(20) where u.sub.0 denotes the velocity vector of the object O measured by the radar sensor S.sub.0 and u.sub.1 denotes the velocity of the object O measured by the radar sensor S.sub.1 (FIG. 1). In the case of a plurality of radar sensors, the relative orientation of the radar sensors S.sub.n, S.sub.k in relation to one another is given by:

(21) α.sub.nk = arccos((u.sub.n · u.sub.k)/(|u.sub.n| |u.sub.k|)) (5)

(22) where the velocity vector u.sub.n denotes the velocity of the object O measured by the radar sensor S.sub.n and the velocity vector u.sub.k denotes the velocity of the object O measured by the radar sensor S.sub.k.
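Equations (4) and (5) amount to the angle between two measured velocity vectors. The following is a minimal sketch in Python with NumPy; the function name and the synthetic 30-degree example are illustrative assumptions, not part of the patent:

```python
import numpy as np

def relative_orientation(u_n, u_k):
    """Equation (5): alpha_nk = arccos(u_n . u_k / (|u_n| |u_k|)) --
    the unsigned angle between the velocity vectors of the same object
    as measured by two sensors."""
    u_n = np.asarray(u_n, dtype=float)
    u_k = np.asarray(u_k, dtype=float)
    cos_a = np.dot(u_n, u_k) / (np.linalg.norm(u_n) * np.linalg.norm(u_k))
    # clip guards against rounding pushing |cos_a| slightly above 1
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

# Object moving along +x as seen by sensor S0; the same motion appears
# rotated by 30 degrees in the frame of sensor S1.
u0 = np.array([2.0, 0.0])
theta = np.deg2rad(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
u1 = rot @ u0
print(np.rad2deg(relative_orientation(u0, u1)))  # → approximately 30.0
```

Note that arccos yields an unsigned angle in [0, π]; if the sign of the relative orientation is needed, it can be recovered from the cross product of the two vectors.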

(23) If the target is not simultaneously observable by both sensors, a trajectory up to the temporal intersection is extrapolated based on a movement model or previous measured values (FIG. 2).

(24) In the case of synchronous detection of the positions of the object O by the radar sensors S.sub.0, S.sub.1, the method is moreover robust in relation to changes in the velocity vector u (e.g., changes in the magnitude or the direction of the velocity u).

(25) It suffices to detect the velocities u.sub.0 and u.sub.1 of the object O once by each radar sensor S.sub.0, S.sub.1. However, in the depicted exemplary embodiment, measurement inaccuracies are reduced with an increasing number of detected velocities by suitable estimation methods or filtering (e.g., a least squares method, FIG. 3).
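The reduction of measurement inaccuracies over many detections can be illustrated, for example, by averaging the signed angle between paired velocity measurements. This is one possible estimator under the "suitable estimation methods or filtering" the text allows; the function name and synthetic data are illustrative assumptions:

```python
import numpy as np

def orientation_estimate(us0, us1):
    """Average the signed angle between paired velocity measurements
    (an illustrative estimator, not one prescribed by the text)."""
    us0 = np.asarray(us0, dtype=float)
    us1 = np.asarray(us1, dtype=float)
    cross = us0[:, 0] * us1[:, 1] - us0[:, 1] * us1[:, 0]
    dot = np.einsum('ij,ij->i', us0, us1)
    return np.mean(np.arctan2(cross, dot))   # signed, unlike plain arccos

# Synthetic data: 200 noisy velocity pairs, true relative orientation 30 deg.
rng = np.random.default_rng(0)
theta = np.deg2rad(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
headings = rng.uniform(0.0, 2.0 * np.pi, 200)
speeds = rng.uniform(1.0, 2.0, 200)
us0 = np.stack([speeds * np.cos(headings), speeds * np.sin(headings)], axis=1)
us1 = us0 @ rot.T + 0.01 * rng.normal(size=(200, 2))
print(np.rad2deg(orientation_estimate(us0, us1)))  # close to 30
```

Averaging the per-pair angles suppresses the measurement noise roughly with the square root of the number of detections, which is the effect FIG. 3 illustrates.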

(26) In order to completely determine the arrangement of the radar sensors S.sub.0, S.sub.1 in relation to one another, the offset of the radar sensors S.sub.0, S.sub.1 in relation to one another is determined.

(27) The offset is calculated as a translation vector between the radar sensors S.sub.0, S.sub.1 with the aid of the relative orientation α.sub.01 and the positions p.sub.k, p.sub.n of the object O.

(28) If the relative orientation α.sub.01 between two radar sensors S.sub.0, S.sub.1 is known (e.g., by carrying out the method according to the present embodiments as described above), the offset of the radar sensors S.sub.0, S.sub.1 may be determined directly.

(29) The offset may be determined in accordance with:

(30) d.sub.nk = p.sub.n − [[cos α.sub.kn, sin α.sub.kn], [−sin α.sub.kn, cos α.sub.kn]] p.sub.k, (6)

(31) wherein the position vector p.sub.k of the object O relative to the radar sensor S.sub.k, established by the radar sensor S.sub.k, is rotated with the relative orientation α.sub.kn between the radar sensors S.sub.k and S.sub.n (e.g., by a passive rotation matrix), or with the relative orientation α.sub.nk between the radar sensors S.sub.n and S.sub.k (e.g., by an active rotation matrix), and it is displaced by the position vector p.sub.n of the object O relative to the radar sensor S.sub.n, which is established by the radar sensor S.sub.n in a temporally synchronous manner to the establishment of the position vector p.sub.k.

(32) In order to carry out this method, one establishment of the position vector p.sub.k, p.sub.n per radar sensor S.sub.k, S.sub.n is sufficient. However, measurement inaccuracies are reducible, as described above with reference to FIG. 3, by seeking an error minimization (e.g., a best-possible superposition of a plurality of measured values) instead of determining the offset directly from a single measurement.
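Equation (6) amounts to rotating p.sub.k into the frame of the other sensor and subtracting. A sketch in Python follows; whether α.sub.kn enters with a plus or minus sign depends on the passive/active rotation convention mentioned above, and the function name and synthetic geometry are illustrative assumptions:

```python
import numpy as np

def sensor_offset(p_n, p_k, alpha_kn):
    """Equation (6): d_nk = p_n - R(alpha_kn) @ p_k, with the rotation
    matrix R(a) = [[cos a, sin a], [-sin a, cos a]]."""
    a = alpha_kn
    rot = np.array([[np.cos(a),  np.sin(a)],
                    [-np.sin(a), np.cos(a)]])
    return np.asarray(p_n, dtype=float) - rot @ np.asarray(p_k, dtype=float)

# Synthetic check: sensor S_k sits at d = (5, 2) in the frame of S_n and
# is rotated by beta = 30 degrees; the object is at q = (3, 7) in S_n's frame.
beta = np.deg2rad(30.0)
d = np.array([5.0, 2.0])
q = np.array([3.0, 7.0])
passive = np.array([[np.cos(beta),  np.sin(beta)],    # expresses q - d in S_k's frame
                    [-np.sin(beta), np.cos(beta)]])
p_k = passive @ (q - d)                               # object position as S_k measures it
print(sensor_offset(q, p_k, -beta))                   # → approximately [5. 2.]
```

With this sign convention the rotation in equation (6) exactly undoes the frame rotation of S.sub.k, so the subtraction recovers the translation vector d.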

(33) This method is also robust in relation to changes in velocity or direction of the object O in the case of simultaneous position measurements.

(34) As an alternative or in addition to the aforementioned determination of the relative orientation α.sub.01 of the radar sensors S.sub.0, S.sub.1 in relation to one another, it is also possible to estimate the orientation from a nonlinear regression.

(35) To this end, in this further exemplary embodiment of the method, use is made of radar sensors S.sub.0, S.sub.1 that are provided at least to establish the direction in which the object O is situated as seen from the radar sensor, and to establish at least the velocity of the object O along the direction of a path in each case connecting the radar sensor and the object. Here, as in the exemplary embodiments described above, the direction of the object O is expressed by the angle α. The velocity of the object O along the direction of a path connecting the radar sensor and the object O is denoted by v.sub.r.

(36) In the case of a constant velocity vector u of the object O, the time evolution of the variables α, v.sub.r describes a circle K in the Cartesian space defined by:
v.sub.x = v.sub.r cos α
v.sub.y = v.sub.r sin α. (7)

(37) The center point of this circle K is:

(38) [v.sub.x/2, v.sub.y/2], (8)

(39) That is, half the component-wise velocity of the object in the x- and y-direction from the respective view of the radar sensor S.sub.0, S.sub.1.

(40) The radius of the circle K is |v|/2.

(41) The radius thus represents half the magnitude of the velocity u of the object.

(42) Because v = √(v.sub.x.sup.2 + v.sub.y.sup.2) applies, this circle passes through the origin of the (v.sub.x, v.sub.y) space, specifically precisely when the object O reaches the point of closest approach to the radar sensor (e.g., when it reaches the tangential point of the tangential movement). At the tangential point, v.sub.r = 0 m/s is known, and the measurement reduces to the measurement of the angle α at this tangential point.

(43) As described above in relation to FIGS. 1 and 2, the relative orientation is therefore given, analogously to equation (4), by the angle of intersection of the tangent lines at the origin (FIG. 4).

(44) In many cases, an object O will never reach the tangential point (e.g., not simultaneously from the view of two spatially offset radar sensors). Therefore, for the purposes of determining the tangent line, the trajectory for each radar sensor is approximated or extrapolated by nonlinear regressions or other suitable methods. Constraints for this optimization problem are that the origin is a fixed point and, in the exemplary case of a circle K, that the radius is equal for all circles K. Therefore, the regression may be carried out from the first measured value onward. A deviation in the angle α leads to a displacement of the measured value along the circular arc; a full circulation corresponds to 2π. Errors in the velocity v.sub.r of the object O along the direction of the path connecting the radar sensor S.sub.0, S.sub.1 and the object O, by contrast, influence the estimated radius of the circle K.

(45) A change in the movement direction (e.g., a rotation of the velocity coordinates) displaces all circle centers equally and therefore does not influence the angle of intersection of the tangent at the origin. If the velocity v of the object O is not constant, this is expressed in a variable radius, and so a suitable piecewise approximation for variable circle radii or generic curve fitting is used instead of a fixed circular trajectory.
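The circle construction of equations (7) and (8) lends itself to a constrained least-squares fit: a circle through the origin satisfies x² + y² = 2ax + 2by, which is linear in the center (a, b). The following Python sketch (function names and the synthetic trajectory are illustrative assumptions) fits such a circle to (α, v.sub.r) samples and recovers the object velocity from the center:

```python
import numpy as np

def fit_circle_through_origin(vx, vy):
    """Least-squares circle constrained to pass through the origin:
    x^2 + y^2 = 2*a*x + 2*b*y, linear in the center (a, b)."""
    A = 2.0 * np.column_stack([vx, vy])
    rhs = vx**2 + vy**2
    (a, b), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b

def doppler_samples(p0, u, ts):
    """(v_x, v_y) = (v_r cos(alpha), v_r sin(alpha)) samples per equation (7),
    for an object on the uniform trajectory p(t) = p0 + u*t as seen from a
    sensor at the origin."""
    p = p0[None, :] + ts[:, None] * u[None, :]
    r = np.linalg.norm(p, axis=1)
    v_r = (p @ u) / r                       # radial velocity along the line of sight
    alpha = np.arctan2(p[:, 1], p[:, 0])    # bearing angle of the object
    return v_r * np.cos(alpha), v_r * np.sin(alpha)

# Object velocity u; the fitted center should be u/2 and the radius |u|/2,
# matching paragraphs (37)-(41).
u = np.array([1.5, 0.8])
vx, vy = doppler_samples(np.array([-10.0, 4.0]), u, np.linspace(0.0, 8.0, 40))
a, b = fit_circle_through_origin(vx, vy)
print(2.0 * a, 2.0 * b)  # → approximately 1.5 0.8
```

The tangent to the fitted circle at the origin is perpendicular to the center direction atan2(b, a); comparing the tangent directions obtained by two sensors then yields their relative orientation, as described above.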

(46) As an alternative to estimating the translation vector (e.g., the offset of the radar sensors S.sub.0, S.sub.1 in relation to one another) directly, the movement vectors may also be fitted into α-curves (angle trajectories).

(47) If, as assumed below, only measured values of the angle α are available, the distance R of the object O from each radar sensor S.sub.0, S.sub.1 may nevertheless be estimated.

(48) If the movement of the object O (e.g., the velocity vector) is represented in polar coordinates, the following emerges with the time difference Δt between the measured values:

(49) Δx = v.sub.x Δt = R.sub.i cos α.sub.i − R.sub.j cos α.sub.j
Δy = v.sub.y Δt = R.sub.i sin α.sub.i − R.sub.j sin α.sub.j. (9)

(50) In conjunction with the angles α.sub.i, α.sub.j, the movement vector and its distance from the radar system are unique in a global coordinate system. In matrix form, equation (9) is directly solvable according to:

(51) [R.sub.i, R.sub.j]ᵀ = [[cos α.sub.i, −cos α.sub.j], [sin α.sub.i, −sin α.sub.j]]⁻¹ [v.sub.x, v.sub.y]ᵀ Δt. (10)

(52) The calculated distances R.sub.i and measured angles α.sub.i are transformed into Cartesian coordinates for radar sensors n and k, whereupon the positions P.sub.i,n = [x.sub.i,n, y.sub.i,n] and P.sub.i,k of the object O are known and the relative translation vector is given in accordance with equation (6). Alternatively, equations (9) and (10) may also be formulated directly in Cartesian coordinates. Here, one measured value per radar sensor is enough to calculate the translation vector. The robustness of the method is determined, firstly, by the accuracy of the velocity vector and, secondly, by the angle range |α.sub.i − α.sub.j| that has been passed through (which should be as large as possible).
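Equation (10) is a 2×2 linear solve. A minimal sketch in Python (function name and synthetic values are illustrative assumptions):

```python
import numpy as np

def ranges_from_bearings(alpha_i, alpha_j, v, dt):
    """Equation (10): recover the distances R_i, R_j from two bearing
    angles and the known displacement v*dt of the object."""
    M = np.array([[np.cos(alpha_i), -np.cos(alpha_j)],
                  [np.sin(alpha_i), -np.sin(alpha_j)]])
    return np.linalg.solve(M, np.asarray(v, dtype=float) * dt)

# Synthetic check: object first seen at (R_j, alpha_j), later at (R_i, alpha_i);
# per equation (9) the displacement v*dt equals p_i - p_j.
R_i, alpha_i = 12.0, 0.7
R_j, alpha_j = 9.0, 0.2
p_i = R_i * np.array([np.cos(alpha_i), np.sin(alpha_i)])
p_j = R_j * np.array([np.cos(alpha_j), np.sin(alpha_j)])
print(ranges_from_bearings(alpha_i, alpha_j, p_i - p_j, 1.0))  # → approximately [12. 9.]
```

The matrix becomes singular as |α.sub.i − α.sub.j| approaches zero, which matches the remark that the traversed angle range should be as large as possible.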

(53) In addition to the methods presented here, further evaluation methods are conceivable, depending on the curve of the movement variables (e.g., on the movement model of the object O). To this end, use may be made, for example, of an acceleration of the object O, a defined curve trajectory of the object O, and other movement variables characteristic of a movement model. Such movement variables make it possible to describe the trajectory of the movement variable as an arbitrary mathematical function. By the inclusion of a further angle (e.g., an elevation angle), all exemplary embodiments listed above are also extendable to the three-dimensional case.

(54) As in the exemplary embodiments explained above and depicted in FIGS. 1 to 5, the method for determining an arrangement is decomposable into two partial methods: determining the relative rotation; and determining the relative translation between the two radar sensors of the respective radar sensor pair.

(55) The application to all pairs of radar sensors therefore enables a global sensor map to be constructed. Here, the assumption is made either that the fields of view of two radar sensors overlap in the region of the target measurement or that the target follows from a known or sufficiently predictable movement model in the case of non-overlapping regions.

(56) By way of example, a rotation may be determined from gradients of the position trajectory of the object O, followed by a translation from positions of the object O in the (R, α)-space; alternatively, the rotation may be determined from nonlinear regressions in the (α, v.sub.r)-space (referred to as the Angle-Doppler space below), optionally followed by a determination of the translation as described above, or by fitting estimated movement vectors into angle trajectories.

(57) The method may be used equally for radar sensors and for optical sensors, sonar sensors, lidar sensors, or any other wireless sensors, provided that corresponding measurement variables are available. If a cost-effective system is sought, the determination purely in the Angle-Doppler space offers clear advantages: a simple continuous-wave sensor is sufficient for measuring the Doppler frequency, and the angle measurement may already take place by way of a phase comparison of only two reception channels. In general, the use of Doppler frequencies offers a metrological advantage, as the achievable measurement accuracy, resolution, and measurement range are coupled less to the frequency or bandwidth approval and instead to the available measurement time. Moreover, all methods described above may be combined or carried out in parallel, and it is therefore possible to increase the reliability and robustness of the position estimation. Depending on the expected/measured trajectory of the object O and fluctuations of the measured values, there is also the option of switching between the methods automatically (e.g., based on plausibility evaluations). Likewise, it is possible and expedient to embed the pairwise self-localization into a global, continuously learning model in order to minimize estimation errors.

(58) All depicted methods are formulated and solvable in a closed, deterministic manner, which facilitates implementability and reproducibility at all times.

(59) The elements and features recited in the appended claims may be combined in different ways to produce new claims that likewise fall within the scope of the present invention. Thus, whereas the dependent claims appended below depend from only a single independent or dependent claim, it is to be understood that these dependent claims may, alternatively, be made to depend in the alternative from any preceding or following claim, whether independent or dependent. Such new combinations are to be understood as forming a part of the present specification.

(60) While the present invention has been described above by reference to various embodiments, it should be understood that many changes and modifications can be made to the described embodiments. It is therefore intended that the foregoing description be regarded as illustrative rather than limiting, and that it be understood that all equivalents and/or combinations of embodiments are intended to be included in this description.