Device, system and method for localization of a target in a scene
11762085 · 2023-09-19
Assignee
Inventors
CPC classification: G01S13/50; G01S2013/468; G01S13/4418; G01S13/878; G01S2013/466 (all in section G, Physics)
International classification
Abstract
A device comprising circuitry configured to: obtain radar signal measurements simultaneously acquired by two or more radar sensors having overlapping fields of view, derive range information of one or more potential targets from samples of radar signal measurements of said two or more radar sensors acquired at the same time or during the same time interval, the range information of a single sample representing a ring segment of potential positions of a potential target at a particular range from the respective radar sensor in its field of view, determine intersection points of ring segments of the derived range information, determine a region of the scene having one of the highest densities of intersection points, select a ring segment per sensor that goes through the selected region, and determine the most likely target position of the potential target from the derived range information of the selected ring segments.
Claims
1. A device for localization of a target in a scene, the device comprising: circuitry configured to: obtain radar signal measurements acquired by two or more radar sensors arranged at different locations, the two or more radar sensors having overlapping fields of view; derive range information of one or more potential targets from samples of the radar signal measurements which are simultaneously acquired or acquired during a same time interval, wherein the range information of a single sample represents a ring segment of potential positions of a potential target at a particular range from a respective radar sensor and in a field of view of the respective radar sensor; determine intersection points of ring segments of the range information; determine a first region of the scene having a density of intersection points higher than other regions in the scene; identify all ring segments crossing through a confidence region which surrounds the first region; select, from the identified ring segments, a ring segment per sensor that goes through the first region; and then determine a most likely target position of the potential target from the range information of the ring segment selected for each sensor.
2. The device as claimed in claim 1, wherein the circuitry is further configured to iteratively determine the most likely target position from different combinations of ring segments.
3. The device as claimed in claim 2, wherein the circuitry is further configured to determine the most likely target position from different combinations of ring segments by finding a position with a least squared radial distance that minimizes a minimization function.
4. The device as claimed in claim 3, wherein the circuitry is further configured to use as the minimization function a sum of squared radial distances between an estimated target position and respective range rings of a respective combination.
5. The device as claimed in claim 1, wherein the circuitry is further configured to determine a velocity of the potential target.
6. The device as claimed in claim 1, wherein the circuitry is further configured to determine a direction of movement of the potential target.
7. The device as claimed in claim 1, wherein the circuitry is further configured to determine a velocity and/or a direction of movement of the potential target using an angle between positions of the sensors and the most likely target position and/or using relative velocities measured by the sensors.
8. The device as claimed in claim 1, wherein the circuitry is further configured to determine a velocity and/or a direction of movement of the potential target by minimization of a sum of squared errors of relative velocities.
9. The device as claimed in claim 1, wherein the circuitry is further configured to use relative velocities measured by the sensors to improve determination of the most likely target position.
10. A radar system, comprising: the device as claimed in claim 1; and the two or more radar sensors arranged at the different locations and having the overlapping fields of view, wherein the two or more radar sensors are configured to acquire radar signal measurements from the scene including one or more targets.
11. A method for localization of a target in a scene, the method comprising: obtaining radar signal measurements acquired by two or more radar sensors arranged at different locations, the two or more radar sensors having overlapping fields of view of the scene; deriving range information of one or more potential targets from samples of the radar signal measurements which are simultaneously acquired or acquired during a same time interval, wherein the range information of a single sample represents a ring segment of potential positions of a potential target at a particular range from a respective radar sensor and in a field of view of the respective sensor; determining intersection points of ring segments of the range information; determining a first region of the scene having a density of intersection points higher than other regions in the scene; identifying all ring segments crossing through a confidence region which surrounds the first region; selecting, from the identified ring segments, a ring segment per sensor that goes through the first region; and then determining a most likely target position of the potential target from the range information of the ring segment selected for each sensor.
12. The method as claimed in claim 11, further comprising: iteratively determining the most likely target position from different combinations of ring segments.
13. The method as claimed in claim 12, further comprising determining the most likely target position from different combinations of ring segments by finding a position with a least squared radial distance that minimizes a minimization function.
14. The method as claimed in claim 13, further comprising using as the minimization function a sum of squared radial distances between an estimated target position and respective range rings of a respective combination.
15. The method as claimed in claim 11, further comprising determining a velocity of the potential target.
16. The method as claimed in claim 11, further comprising determining a direction of movement of the potential target.
17. The method as claimed in claim 11, further comprising determining a velocity and/or a direction of movement of the potential target using an angle between positions of the sensors and the most likely target position and/or using relative velocities measured by the sensors.
18. The method as claimed in claim 11, further comprising determining a velocity and/or a direction of movement of the potential target by minimization of a sum of squared errors of relative velocities.
19. The method as claimed in claim 11, further comprising using relative velocities measured by the sensors to improve determination of the most likely target position.
Description
BRIEF DESCRIPTION OF THE DRAWING
(1) A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
DETAILED DESCRIPTION OF THE EMBODIMENTS
(21) Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views,
(22) Every single sensor performs radar measurements independent of the other sensors, so that no direct phase synchronization between the sensors is necessary. The exact time of a measurement may be determined by an external trigger signal or may be otherwise known with high accuracy. The control and configuration may be carried out by a central unit 20, as shown in
(24) The signal processing utilizes a multilateration approach, which uses the measured distance, optionally in combination with an approach that uses the measured relative velocity between the sensors and the target(s), to estimate the position (angle and distance relative to the sensors) of the target(s). The relative velocity can be estimated by each sensor from the Doppler frequency shift of the reflected signal. This information, as well as the target's distance, is different for each particular sensor for a common target. This enables the derivation of a target's angle in relation to the sensor baseline, due to the correlation of the different relative velocities and ranges between a target and each particular sensor. In addition, the estimation of a target's movement within a single measurement cycle is possible by virtue of the possibly large spacings between the sensors, which cover a common target.
(25) Basically, four different scenarios are conceivable: there is no movement in the scene; only the sensor platform is moving; only single targets move within the scene; or the sensor platform and single targets both have an arbitrary movement.
(27) 1. Data acquisition and pre-processing 100: The data of at least three radar sensors, which cover a common field of view, are sampled simultaneously (S100). In case of chirp-sequence radars, this data set consists of the time-domain samples, from which the range and velocity information of radar reflections within the field of view can be estimated by e.g. two Fourier transformations (S101). A subsequent target extraction algorithm (CFAR—constant false alarm rate) can be used to reduce the amount of data to be transferred to a certain number of radar targets (S102).
(28) 2. Localization algorithm 110:
(29) a. In a first step (S110), the detected ranges of all single sensors are linked together by bilateration. The range information of each radar target results in a ring of ambiguous positions around the particular sensor position; the intersections of two rings of different sensors result in candidates for the actual target position with reduced ambiguity. Additional intersections of range rings of different targets lead to intersection points at wrong positions.
(30) b. These pairwise intersection points of all range rings are accumulated (S111) into a common grid to determine clusters with high densities of intersection points. To this end, copies of the intersection matrices are shifted against each other and accumulated.
(31) c. Subsequent to the grid-based accumulation, the cell with the highest intersection density is searched (S112), and all range rings that cross through a certain confidence region around the maximum density cell are selected for further processing.
(32) d. The most likely target position is iteratively searched (S113, S115) considering all possible combinations of range rings of the involved sensors. To this end, the range information is supplemented with the velocity information related to each range ring, and the most likely target position is evaluated (S114).
(33) e. After localization has succeeded, the range rings related to a target's position are removed from the dataset (S116), and the dataset is fed back to step c. Here, the new density maximum of the intersection point distribution is selected and further target positions are extracted iteratively.
(34) 3. Output (120): The algorithm stops after all possible targets are found (S120). Hence, the position, the velocity and the direction of movement for each target may be estimated.
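The pre-processing stage (S100-S102) described in step 1 can be sketched in a few lines of Python. This is only a minimal illustration under stated assumptions, not the disclosed implementation: the chirp-sequence frame layout, the window-free FFTs and the strongly simplified one-dimensional cell-averaging CFAR (the names `range_doppler_map` and `ca_cfar_1d` are hypothetical) are choices of this sketch.

```python
import numpy as np

def range_doppler_map(time_samples):
    """Two Fourier transformations over a chirp-sequence frame of shape
    (n_chirps, n_samples_per_chirp): fast time -> range, slow time -> Doppler."""
    rd = np.fft.fft(time_samples, axis=1)                 # range FFT per chirp
    rd = np.fft.fftshift(np.fft.fft(rd, axis=0), axes=0)  # Doppler FFT, zero-centred
    return np.abs(rd)

def ca_cfar_1d(power, guard=2, train=8, scale=4.0):
    """Simplified 1-D cell-averaging CFAR: a cell is a detection when it
    exceeds `scale` times the mean of the surrounding training cells."""
    n = len(power)
    hits = []
    for i in range(train + guard, n - train - guard):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + 1 + train]
        noise = np.mean(np.concatenate([left, right]))
        if power[i] > scale * noise:
            hits.append(i)
    return hits
```

Applying `ca_cfar_1d` to each Doppler slice of the map reduces the raw data to a short list of detected range/velocity cells per sensor node, as in step S102.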
(35) In comparison to single radar sensors based on phased-array antennas, the use of distributed sensors within the described concept improves the localization accuracy due to the large possible spacing. The actual scenario directly affects the localization accuracy. In particular, the number of targets, the relative velocities, and the directions of movement have an impact on the performance.
(36) Failures of single or multiple sensors do not necessarily lead to a total failure of the system, but merely to a degradation of the performance regarding the localization accuracy, detection probability or limitations to the field of view.
(37) The measured relative velocities between a target and each sensor differ according to the possibly wide distribution of the sensors. This allows improving the localization by correlating the velocities and the range information and enables the determination of a target's velocity and direction of movement within a single measurement cycle. Hence, in contrast to single sensors with array antennas, no tracking of targets over multiple measurement cycles is necessary with this concept.
(38) In the following more details of the steps of the disclosed method and of embodiments of the disclosed device, system and method will be provided.
(39) According to an embodiment, a network of non-coherent single-channel radar sensor nodes is utilized to estimate the position and motion of multiple targets. Therefore, simultaneous snapshot measurements of sensor nodes covering a common field of view are evaluated for a single algorithm run. Every single sensor performs radar measurements, independent of the other sensors, so that no direct phase synchronization between the sensors is necessary. The exact time of a measurement is either determined by an external trigger signal or otherwise known with high accuracy. The control and configuration may be carried out by a central unit. The obtained raw data of every single sensor is directly, or after preprocessing, transferred to a central processing unit.
(40) Automotive radar scenarios exhibit large numbers of targets distributed over the complete field of view. Hence, ambiguities arise for localization approaches based only on the radial range information. An example for this is given in
(41) Moving objects in a scenario result in a Doppler shift in the frequency domain, which is measured by a radar. This Doppler shift corresponds to the velocity relative to the radar. Automotive scenarios can be split up into three different cases regarding their movement:
(42) 1. Sensors are moving with velocity v.sub.ego>0 and targets are stationary.
(43) 2. Sensors are stationary and targets are moving with velocity v.sub.tar>0.
(44) 3. Sensors and targets are moving with velocities v.sub.ego>0 and v.sub.tar>0.
(45) These cases are considered in the following.
(46) First, the case of moving sensors shall be considered. The proper motion of a vehicle with mounted sensors leads to relative velocities that are measured by a radar sensor. The measured relative velocity of a stationary target depends on the angle at which the target appears relative to the direction of motion. These relative velocities differ between sensors, as the angle between the common target position and each respective sensor differs due to the spatial distribution of the sensors. The relationship between the target-sensor angles, the relative velocities and the actual movement needs to fulfil Thales' theorem. Hence, it can be illustrated by a circle whose diameter is determined by the actual velocity, as depicted in
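The angle dependence of the measured relative velocity for a stationary target and a moving platform can be checked numerically. The sketch below uses a hypothetical ego speed of 20 m/s and verifies that the tips of the relative-velocity vectors lie on a Thales circle of diameter equal to the ego velocity:

```python
import math

def relative_velocity(v_ego, theta):
    """Radial (Doppler) velocity a sensor moving at v_ego measures for a
    stationary target appearing at angle theta to the direction of motion."""
    return v_ego * math.cos(theta)

# Illustrative ego speed; spatially separated sensors observe the same
# stationary target under different angles and therefore measure
# different relative velocities.
v_ego = 20.0  # m/s (hypothetical)
for theta_deg in (0.0, 30.0, 60.0, 90.0):
    theta = math.radians(theta_deg)
    v_r = relative_velocity(v_ego, theta)
    # Thales' theorem: the tip of the vector (v_r*cos(theta), v_r*sin(theta))
    # lies on a circle of diameter v_ego centred at (v_ego / 2, 0).
    x, y = v_r * math.cos(theta), v_r * math.sin(theta)
    assert abs(math.hypot(x - v_ego / 2, y) - v_ego / 2) < 1e-9
```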
(47) This principle is also depicted in
(49) In
(50) Next, the second case of moving targets shall be considered. In contrast to the first case, here the sensors are assumed to be stationary while the targets are moving. The velocity relation between the measured relative velocity and the target movement is depicted in
(51) An exemplary scenario with three stationary sensors and three moving targets is depicted in
(52) Next, the third case of moving targets and moving sensors shall be considered. The third case comprises a movement of the sensors and a movement of the targets, which are superimposed in the measurement. An exemplary depiction of this behavior is given in
(53) The actual target movement (direction and velocity) can be determined by additionally using the ego-motion of the sensors. This information might be available from other systems or sensors built into a car, like wheel speed sensors or the odometer. It can also be derived from stationary targets, as they provide a common velocity behavior reflecting the actual motion of the car. Such a method is related to the second case explained above with reference to
(54) The disclosed concept may utilize multiple spatially distributed radar sensor nodes for mid-range sensing application in scenarios that involve a relative movement between sensors and targets. In particular, at least two radar sensors spatially distributed and loosely coupled may be utilized. Each sensor independently performs a radar measurement resulting in range and velocity information of detected targets. Simultaneous measurements are assumed so that all sensors observe a target at the same time.
(55) The technique of multilateration enables the localization of targets by exploiting the range information measured by several sensor nodes. Hereby, a common scattering point is assumed and the intersection of all range rings is required. However, in real scenarios, targets are likely extended, which leads to multiple scattering points distributed over a target's contour instead of a common scattering point. Therefore, not more than two range circles traverse a single intersection point. This behavior changes with varying spatial distances between the sensor nodes due to different angles of incidence at the target.
(56) An exemplary scenario with non-ideal range measurements around a single target T is depicted in
(57) The number of intersection points per target,

$$n_t = 2\binom{M}{2} = M(M-1), \tag{1.2}$$

is determined by the number of sensor nodes M, assuming a single reflection per sensor at the target. Additional intersections occur at different positions, where probably no target is present.
(58) Therefore, the number of targets T determines the total number of intersection points

(59)

$$n_{ges} = T^2\,M(M-1). \tag{1.3}$$
(60) In scenarios with many more targets than sensors, the number of intersection points not representing a target position becomes predominant, which results in ambiguous target positions in the form of clusters of intersection points.
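The growth of ambiguous intersections with the number of targets can be tallied with a short sketch. It assumes the counting described above: each pair of range circles contributes up to two intersection points, with one reflection per sensor and target; the exact closed-form expressions are an assumption of this illustration, not quoted from the text.

```python
from math import comb

def intersections_per_target(M):
    # Each of the C(M, 2) sensor pairs contributes up to two
    # intersection points for one common target.
    return 2 * comb(M, 2)            # = M * (M - 1)

def total_intersections(M, T):
    # Every sensor pair has T rings per sensor, hence up to 2 * T^2
    # crossings per pair; only T * M * (M - 1) of them lie on real targets.
    return 2 * T * T * comb(M, 2)    # = T^2 * M * (M - 1)

# With M = 3 sensors and T = 4 targets, only 4 * 6 = 24 of the 96
# intersection points correspond to actual target positions.
```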
(61) An embodiment of the disclosed algorithm utilizes the range and relative velocity information that is gathered by the sensor nodes to estimate the position, absolute velocity and direction of movement. The flowchart shown in
(62) The sensor nodes may operate with the chirp-sequence modulation scheme, which allows the measurement of ranges and of the RF signal's Doppler shift. The time domain data is processed by a two-dimensional Fourier transform, resulting in range and velocity data. Targets are extracted from this data by CFAR algorithms, so that a list of detected targets with their corresponding relative velocities is available for each sensor node.
(63) The following description is given in two-dimensional space. Considering a single sensor's data, the detected target ranges are ambiguous on a circle around the sensor position with a radius equal to the detected range. In the first step (S110 "Range Circle Intersection") of the joint data processing (110), a lateration technique is used. Thereby, the pairwise intersection points

$$\vec{S}_{1,2} = S_{1/2,x}\,\vec{e}_x + S_{1/2,y}\,\vec{e}_y \tag{1.4}$$

are calculated between two circles with the different center points

$$\vec{P}_i = P_{i,x}\,\vec{e}_x + P_{i,y}\,\vec{e}_y \tag{1.5}$$

for the ranges $r_i$ and $r_j$. Therefore, the distance $|P_iP_j|$ between two sensor nodes can be calculated as

$$|P_iP_j| = \sqrt{(P_{j,x}-P_{i,x})^2 + (P_{j,y}-P_{i,y})^2} \tag{1.6}$$

(64) and, via the law of cosines, the angle

$$\gamma = \arccos\!\left(\frac{r_i^2 + |P_iP_j|^2 - r_j^2}{2\,r_i\,|P_iP_j|}\right) \tag{1.7}$$

between the node connecting line and an intersection. With

(65)

$$\alpha_{1,2} = \operatorname{atan2}\bigl(P_{j,y}-P_{i,y},\,P_{j,x}-P_{i,x}\bigr) \pm \gamma, \tag{1.8}$$

the two points are calculated as

$$x_{1,2} = P_{i,x} + r_i\cos(\alpha_{1,2}) \tag{1.9}$$

$$y_{1,2} = P_{i,y} + r_i\sin(\alpha_{1,2}). \tag{1.10}$$
(66) Two distinct intersection points exist for overlapping range circles, while two tangent circles result in a single point of intersection.
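The pairwise range-circle intersection described above can be sketched as follows. This is an illustrative implementation of the geometry (two points for overlapping circles, a single point for tangent circles, none otherwise), not code from the disclosure:

```python
import math

def circle_intersections(p_i, r_i, p_j, r_j):
    """Intersection points of two range circles around sensor positions
    p_i and p_j with detected ranges r_i and r_j."""
    dx, dy = p_j[0] - p_i[0], p_j[1] - p_i[1]
    d = math.hypot(dx, dy)                       # distance between the nodes
    if d == 0 or d > r_i + r_j or d < abs(r_i - r_j):
        return []                                # no intersection
    # angle between the node connecting line and an intersection
    # (law of cosines), clamped against rounding errors
    cos_g = (r_i**2 + d**2 - r_j**2) / (2 * r_i * d)
    gamma = math.acos(max(-1.0, min(1.0, cos_g)))
    beta = math.atan2(dy, dx)                    # orientation of the base line
    pts = []
    for a in {beta + gamma, beta - gamma}:       # one element when tangent
        x = p_i[0] + r_i * math.cos(a)
        y = p_i[1] + r_i * math.sin(a)
        pts.append((x, y))
    return pts
```

For example, circles of radius 5 around (0, 0) and (8, 0) intersect at (4, 3) and (4, -3), while unit circles around (0, 0) and (2, 0) touch in the single point (1, 0).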
(67) The number of intersection points per target n.sub.t is given by equation (1.2) if the target is detected by M sensor nodes. Therefore, the n.sub.t intersection points with the most probable relation to the same target need to be found as a starting point for the iterative algorithm. For this reason, the two-dimensional spatial density distribution of pairwise intersection points is determined.
(68) This can for example be done by an accumulation (step S111) of the intersection points into multiple grids with spatial offset, which are merged afterwards. The size of the grid cells has to be chosen considerably larger than the range resolution of a sensor node. To circumvent the limitation that the accumulation considers only points lying within the borders of a grid cell, the accumulation can be accomplished on multiple grids that are spatially shifted in the x and y dimensions by half of the grid size.
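The shifted-grid accumulation can be sketched as follows. The four-grid variant (unshifted plus half-cell shifts in x, y and both) and the name `best_cell` are assumptions of this illustration; the point is that a cluster split by a cell border of one grid falls into a single cell of a shifted grid:

```python
def best_cell(points, cell):
    """Return (count, centre) of the densest cell over four grids that
    are shifted against each other by half a cell in x and/or y."""
    best = (0, None)
    for ox in (0.0, cell / 2):
        for oy in (0.0, cell / 2):
            counts = {}
            for x, y in points:
                key = (int((x + ox) // cell), int((y + oy) // cell))
                counts[key] = counts.get(key, 0) + 1
            kx, ky = max(counts, key=counts.get)
            if counts[(kx, ky)] > best[0]:
                # centre of the winning cell, mapped back to scene coordinates
                centre = ((kx + 0.5) * cell - ox, (ky + 0.5) * cell - oy)
                best = (counts[(kx, ky)], centre)
    return best
```

A cluster straddling x = 5 and y = 5 is split across four cells of the unshifted unit grid but collected in one cell of the doubly shifted grid.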
(70) A target detection that is exclusively based on a constant false alarm rate (CFAR) or peak detection could lead to an erroneous estimation of a target's position. For a more robust localization of moving targets, the proposed algorithm is divided into a coarse position estimation, followed by an iterative error minimization. The coarse estimation step aims at the selection (S112) of all range rings that probably belong to a single target. This is achieved by the following steps:
(71) a) estimation of the highest intersection point density and
(72) b) evaluation of all range rings related to the intersection points in the picked area, with respect to the least error of a mapping between the calculated centroid of the n.sub.t intersection points and the related velocity vectors.
(73) Regarding the first step a), the highest density in the actual density map is evaluated. In a single target scenario with M≥3 sensors, the appropriate grid area at the target position has in any case the highest density, while ambiguous intersection points occur as less dense areas. In multi target scenarios, a single grid cell could consist either of intersection points related to a single target located in that grid area, multiple targets located in that grid area, or combinations of target(s) and ambiguous intersections of targets located in other grid areas.
(74) For the coarse estimation, the highest density grid cell is considered and the distances
$$d_{S_i} = \sqrt{\bigl(C_{pos,x} - S_{i,x}\bigr)^2 + \bigl(C_{pos,y} - S_{i,y}\bigr)^2} \tag{1.11}$$
are calculated for every node. An exemplary accumulation grid is depicted in
Subsequently, all range rings whose radial deviation from the maximum density cell satisfies

$$\bigl|\,|\vec{S_iZ_l}| - d_{S_i}\bigr| \le B \tag{1.12}$$

are selected for further processing.
(75) This behavior is shown in
(76) For a too small observation area O (i.e., a too small radius B), not all range rings belonging to the same target are included in the further processing stages. A too large radius B leads to a high number of range rings to be considered, whereby the required computation time increases. An adaptive adjustment of B during the runtime of the algorithm is possible.
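The selection of range rings crossing the confidence region of radius B around the maximum density cell can be sketched as follows; the data layout (a mapping from sensor name to position and detected ranges) is an assumption of this illustration:

```python
import math

def select_rings(cell_centre, sensors, B):
    """For each sensor, keep the detected ranges whose ring passes within
    radius B of the maximum-density cell (using the sensor-to-cell
    distance, cf. d_Si above)."""
    cx, cy = cell_centre
    selected = {}
    for name, ((sx, sy), ranges) in sensors.items():
        d = math.hypot(cx - sx, cy - sy)        # distance sensor -> cell
        keep = [r for r in ranges if abs(r - d) <= B]
        if keep:
            selected[name] = keep
    return selected
```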
(77) The accurate estimation of the target location and velocity is described in the following. After a target is found and the corresponding range ring set was removed, a new coarse estimation step is executed.
(78) In a next step (S113), a target position is estimated for all combinations of the different range rings, crossing through the circular area with radius B. A subset of the possible combinations is depicted in
(79) The point with the least squared radial distance to the treated node-range combinations minimizes the function

(80)

$$f(X,Y) = \sum_{i=1}^{M}\left(\sqrt{(X - P_{i,x})^2 + (Y - P_{i,y})^2} - r_i\right)^2. \tag{1.13}$$
(81) Hence, this is the most likely target position. The solution of the minimization problem can be found by utilization of the gradient method.
(82) The least squared radial distances are error distances between the estimated target position and the corresponding range measurements of each sensor. (1.13) denotes the corresponding error function: the sum of the squared distances between an estimated target position P(X,Y) and the respective range rings of the combination. In other words, the range rings of the measurements need to be increased by these values to intersect at the common point P(X,Y).
(83) For each range ring set with n sensors, this function is evaluated and the set with the lowest error is used.
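Minimizing the sum of squared radial distances by the gradient method, as described above, can be sketched as follows. The step size and iteration count are illustrative choices, not values from the disclosure:

```python
import math

def localize(sensors, ranges, seed, lr=0.05, iters=2000):
    """Gradient descent on f(X, Y) = sum_i (|| (X, Y) - P_i || - r_i)^2,
    started from a seed position (e.g. the densest grid cell)."""
    x, y = seed
    for _ in range(iters):
        gx = gy = 0.0
        for (px, py), r in zip(sensors, ranges):
            d = math.hypot(x - px, y - py)
            if d == 0.0:
                continue
            e = d - r                       # radial error to this range ring
            gx += 2 * e * (x - px) / d      # df/dX
            gy += 2 * e * (y - py) / d      # df/dY
        x -= lr * gx
        y -= lr * gy
    return x, y

def radial_error(sensors, ranges, pos):
    """Value of the error function f at a candidate position."""
    return sum((math.hypot(pos[0] - px, pos[1] - py) - r) ** 2
               for (px, py), r in zip(sensors, ranges))
```

In the iterative search, `radial_error` is evaluated per range-ring combination and the combination with the smallest residual is kept.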
(84) As described before, the measurement of each sensor also gives range and velocity (Doppler) information, including the relative direction of motion. The relative velocity between a target and distributed sensors differs (cf.
(85) In the following, the estimation of a target's velocity and direction of movement (S114) is described in detail. (1.14) denotes a function used to calculate the x and y components of these velocities for a probable target position. (1.15) gives the expected relative velocity which would be measured for a certain target position. The relative velocity of this expectation is also separated into x and y components. This allows the comparison of the expectation and the measurement by an error value that is computed in (1.17).
(86) (1.17) calculates the error between an expected value from (1.16) and the measured velocity. Here, the Euclidean distance is used. This is the square root of the sum of the squared differences between expectation and measurement. Finally, (1.18) denotes the sum of the squared velocity differences of all sensor positions which represent the error function g.
(87) The expected relative velocity of a stationary target at a certain angle can be calculated with knowledge of the sensor/vehicle movement and the possible target position (e.g., the angle).
(88) In detail, the estimation of a target's velocity and direction of movement (S114) can be done on the basis of suitable estimations of a target's position. The error of the proposed velocity estimation is also used as a criterion for choosing a set of range rings for a target position. As described above, the target motion $\vec{V}_Z$ at the true target position is composed of the relative velocities $\vec{V}_{rel,S_i,Z_l}$ measured by the spatially distributed sensors. These velocities can be resolved into x- and y-components with knowledge of the angle $\Phi_{\vec{S_iZ_l}}$ between sensor $S_i$ and the target position $Z_l$:

(89)

$$\vec{V}_{rel,S_i} = \operatorname{sgn}(v_{r,S_i})\,\lvert v_{r,S_i}\rvert \left(\cos\bigl(\Phi_{\vec{S_iZ_l}}\bigr)\,\vec{e}_x + \sin\bigl(\Phi_{\vec{S_iZ_l}}\bigr)\,\vec{e}_y\right), \tag{1.14}$$

where the sign $\operatorname{sgn}(\cdot)_{S_i}$ indicates whether the target moves towards or away from sensor $S_i$. For an assumed target motion, the expected relative velocity measured at sensor $S_i$ is

(90)

$$\hat{v}_{rel,S_i} = \lvert\vec{V}_Z\rvert \cos\bigl(\Phi_{\vec{V}_Z} - \Phi_{\vec{S_iZ_l}}\bigr), \tag{1.15}$$

where $\Phi_{\vec{V}_Z}$ denotes the direction of movement of the target. Resolving this expectation into x- and y-components analogously to (1.14) yields the expected relative velocity vector

(91)

$$\hat{V}_{rel,S_i} = \hat{v}_{rel,S_i}\left(\cos\bigl(\Phi_{\vec{S_iZ_l}}\bigr)\,\vec{e}_x + \sin\bigl(\Phi_{\vec{S_iZ_l}}\bigr)\,\vec{e}_y\right). \tag{1.16}$$
(92) These calculated relative velocities and the measured relative velocities can be compared in Cartesian coordinates by calculation of the velocity deviation

$$\Delta v_i = \sqrt{\left(\vec{V}_{rel,S_i,x} - \hat{V}_{rel,S_i,x}\right)^2 + \left(\vec{V}_{rel,S_i,y} - \hat{V}_{rel,S_i,y}\right)^2}. \tag{1.17}$$

(93) The summation of the squared errors of all relative velocities leads to the function

(94)

$$g\bigl(\lvert\vec{V}_Z\rvert, \Phi_{\vec{V}_Z}\bigr) = \sum_{i=1}^{M} \Delta v_i^2, \tag{1.18}$$

(95) which is minimal for a target velocity $\lvert\vec{V}_Z\rvert$ and direction of movement $\Phi_{\vec{V}_Z}$ that best match the measured relative velocities.
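The estimation of a target's velocity and direction of movement from the measured relative velocities can be sketched as follows. Note that this reformulates the minimization as a linear least-squares fit of the Cartesian velocity components (each sensor measures the projection of the target motion onto its line of sight), whereas the text parameterizes by speed and heading; the closed-form normal equations are an assumption of this sketch:

```python
import math

def target_velocity(sensors, target, v_rel):
    """Least-squares fit of the target velocity (vx, vy) from radial
    velocities measured by spatially distributed sensors; returns
    (speed, heading)."""
    # normal equations A^T A v = A^T b, rows of A = line-of-sight unit vectors
    sxx = sxy = syy = bx = by = 0.0
    tx, ty = target
    for (px, py), v in zip(sensors, v_rel):
        d = math.hypot(tx - px, ty - py)
        ux, uy = (tx - px) / d, (ty - py) / d   # unit vector sensor -> target
        sxx += ux * ux; sxy += ux * uy; syy += uy * uy
        bx += ux * v;  by += uy * v
    det = sxx * syy - sxy * sxy                 # non-zero for non-collinear geometry
    vx = (syy * bx - sxy * by) / det
    vy = (sxx * by - sxy * bx) / det
    return math.hypot(vx, vy), math.atan2(vy, vx)
```

With consistent measurements from three non-collinear sensors, the fit recovers speed and heading exactly, within floating-point precision.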
(96) The previously discussed approach is divided into two parts: first, the estimation of a possible target position and, second, the estimation of a matching target movement. In contrast to that, the information provided by the measured relative velocities can also be utilized to improve the estimation of the target position. This is achieved by combining the functions from equations (1.18) and (1.13) into a single function

(97)

$$h\bigl(X, Y, \lvert\vec{V}_Z\rvert, \Phi_{\vec{V}_Z}\bigr) = f(X,Y) + g\bigl(\lvert\vec{V}_Z\rvert, \Phi_{\vec{V}_Z}\bigr), \tag{1.19}$$

which expresses a 4-dimensional optimization problem. Normalization with the maximum measured range $S_R$ and the maximum measured velocity $\lvert S_V\rvert$ and adjustment of the weighting with the squared range resolution $\Delta R_{min}^2$ and the squared velocity resolution $\Delta v_r^2$ results in

(98)

$$h'\bigl(X, Y, \lvert\vec{V}_Z\rvert, \Phi_{\vec{V}_Z}\bigr) = \frac{f(X,Y)}{S_R^2\,\Delta R_{min}^2} + \frac{g\bigl(\lvert\vec{V}_Z\rvert, \Phi_{\vec{V}_Z}\bigr)}{\lvert S_V\rvert^2\,\Delta v_r^2}. \tag{1.20}$$
(99) The results from equations (1.18) and (1.13) need to be used as seeds to solve this multi-modal optimization problem.
(100) Both the range information and the velocity information of a measurement can be used to calculate ambiguous target locations. As both are coupled to the target location, combining the error functions from (1.18) and (1.13) into a single function enables simultaneous minimization of the errors of range and velocity measurements. This leads to improved target localization.
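A combined cost over position and velocity, in the spirit of merging the two error functions, can be sketched as follows. The scalar weights `w_r` and `w_v` stand in for the normalization by maximum range/velocity and resolutions described above and are illustrative assumptions:

```python
import math

def combined_cost(pos, vel, sensors, ranges, v_rel, w_r=1.0, w_v=1.0):
    """Joint cost over (X, Y, vx, vy): weighted squared range errors plus
    weighted squared radial-velocity errors, summed over all sensors."""
    x, y = pos
    vx, vy = vel
    c = 0.0
    for (px, py), r, v in zip(sensors, ranges, v_rel):
        d = math.hypot(x - px, y - py)
        ux, uy = (x - px) / d, (y - py) / d     # line-of-sight unit vector
        c += w_r * (d - r) ** 2                 # range-ring error term
        c += w_v * (ux * vx + uy * vy - v) ** 2 # radial-velocity error term
    return c
```

Minimizing this cost jointly, seeded with the separate position and velocity estimates, refines both at once; at the true position and velocity the cost of consistent measurements vanishes.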
(101) As depicted in
(102) The foregoing discussion discloses and describes merely exemplary embodiments of the present disclosure. As will be understood by those skilled in the art, the present disclosure may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure of the present disclosure is intended to be illustrative, but not limiting of the scope of the disclosure, as well as other claims. The disclosure, including any readily discernible variants of the teachings herein, defines, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.
(103) In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single element or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
(104) In so far as embodiments of the disclosure have been described as being implemented, at least in part, by software-controlled data processing apparatus, it will be appreciated that a non-transitory machine-readable medium carrying such software, such as an optical disk, a magnetic disk, semiconductor memory or the like, is also considered to represent an embodiment of the present disclosure. Further, such software may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
(105) The elements of the disclosed devices, apparatus and systems may be implemented by corresponding hardware and/or software elements, for instance appropriate circuits. A circuit is a structural assemblage of electronic components including conventional circuit elements, integrated circuits including application specific integrated circuits, standard integrated circuits, application specific standard products, and field programmable gate arrays. Further, a circuit includes central processing units, graphics processing units, and microprocessors which are programmed or configured according to software code. A circuit does not include pure software, although a circuit includes the above-described hardware executing software.
(106) In the following, a list of further embodiments of the disclosed subject matter is given:
(107) 1. Device for localization of a target in a scene, said device comprising circuitry configured to: obtain radar signal measurements simultaneously acquired by two or more radar sensors arranged at different locations, said two or more radar sensors having overlapping fields of view, derive range information of one or more potential targets from samples of radar signal measurements of said two or more radar sensors acquired at the same time or during the same time interval, the range information of a single sample representing a ring segment of potential positions of a potential target at a particular range from the respective radar sensor in its field of view, determine intersection points of ring segments of the derived range information, determine a region of the scene having one of the highest densities of intersection points, select a ring segment per sensor that goes through the selected region, and determine the most likely target position of the potential target from the derived range information of the selected ring segments.
(108) 2. Device as defined in embodiment 1,
(109) wherein the circuitry is further configured to iteratively determine the most likely target position from different combinations of ring segments, wherein a combination includes one ring segment per sensor that goes through the selected region and each combination comprises one or more ring segments different from one or more ring segments of other combinations.
(110) 3. Device as defined in embodiment 2,
(111) wherein the circuitry is further configured to determine the most likely target position from different combinations of ring segments by finding the position with the least squared radial distance that minimizes a minimization function.
(112) 4. Device as defined in embodiment 3,
(113) wherein the circuitry is further configured to use as minimization function a sum of the squared radial distances between an estimated target position and the respective range rings of the respective combination.
(114) 5. Device as defined in any preceding embodiment,
(115) wherein the circuitry is further configured to determine the velocity of the potential target.
(116) 6. Device as defined in any preceding embodiment, wherein the circuitry is further configured to determine the direction of movement of the potential target.
(117) 7. Device as defined in any preceding embodiment,
(118) wherein the circuitry is further configured to determine the velocity and/or direction of movement of the potential target by use of the angle between the positions of the sensors and the most likely target position and/or by use of relative velocities measured by the sensors.
(119) 8. Device as defined in any preceding embodiment,
(120) wherein the circuitry is further configured to determine the velocity and/or direction of movement of the potential target by minimization of a sum of the squared errors of the relative velocities.
(121) 9. Device as defined in any preceding embodiment,
(122) wherein the circuitry is further configured to use relative velocities measured by the sensors for improving the determination of the most likely target position.
(123) 10. Radar system comprising two or more radar sensors arranged at different locations and having overlapping fields of view of a scene, said radar sensors being configured to simultaneously acquire radar signal measurements from the scene including one or more targets, and a device as in any one of embodiments 1-9 for localization of a target in the scene based on the acquired radar signal measurements.
(124) 11. Method for localization of a target in a scene, said method comprising: obtaining radar signal measurements simultaneously acquired by two or more radar sensors arranged at different locations, said two or more radar sensors having overlapping fields of view, deriving range information of one or more potential targets from samples of radar signal measurements of said two or more radar sensors acquired at the same time or during the same time interval, the range information of a single sample representing a ring segment of potential positions of a potential target at a particular range from the respective radar sensor in its field of view, determining intersection points of ring segments of the derived range information, determining a region of the scene having one of the highest densities of intersection points, selecting a ring segment per sensor that goes through the selected region, and determining the most likely target position of the potential target from the derived range information of the selected ring segments.
(125) 12. A non-transitory computer-readable recording medium that stores therein a computer program product, which, when executed by a processor, causes the method according to embodiment 11 to be performed.
(126) 13. A computer program comprising program code means for causing a computer to perform the steps of said method according to embodiment 11 when said computer program is carried out on a computer.