Object tracking based on multiple measurement hypotheses

11455736 · 2022-09-27

Assignee

Inventors

CPC classification

International classification

Abstract

A method and system for integrating multiple measurement hypotheses in an efficient labeled multi-Bernoulli (LMB) filter. The LMB filter estimates a plurality of tracks for a plurality of objects, each track having a unique label, a probability, and a state, wherein each track is associated to an object of a plurality of objects to be tracked, each object having an object state. The method receives one or more measurement hypotheses of the multiple measurement hypotheses for each object; updates each track based on the respective track and the one or more measurement hypotheses; determines, for each combination of track and measurement hypothesis, a likelihood η_i(j, k); samples, for each iteration of a plurality of iterations, an update hypothesis γ^(t) based on an association of each track to one of: a measurement hypothesis, a missed-detection event, or a track-death event; determines the state of each track based on its respective associations in the update hypotheses γ^(t); extracts, for each track, an existence probability; predicts the object state of each object with respect to a next measurement time; determines whether another update is to be performed; and, if another update is to be performed, repeats the method steps from and including updating each track.

Claims

1. A method for integrating multiple measurement hypotheses in an efficient labeled multi-Bernoulli (LMB) filter, the LMB filter estimating a plurality of tracks for a plurality of objects, each track (i) of the plurality of tracks having a unique label (ID), a probability (r), and a state, wherein each track (i) of the plurality of tracks is associated to an object (j) of a plurality of objects to be tracked, each object (j) having an object state, the method comprising: receiving one or more measurement hypotheses ((j, k)) of the multiple measurement hypotheses for each object (j) of the plurality of objects; updating each track (i) of the plurality of tracks based on the respective track (i) and the one or more measurement hypotheses ((j, k)) of the multiple measurement hypotheses; determining, for each combination of track (i) of the plurality of tracks and measurement hypothesis ((j, k)), a likelihood (η_i(j, k)); sampling, for each iteration (t) of a plurality of iterations (T), an update hypothesis γ^(t) based on an association of each track (i) of the plurality of tracks to one of: a measurement hypothesis ((j, k)), a missed-detection event, or a track-death event; determining the state of each track (i) of the plurality of tracks based on its respective associations in the update hypotheses (γ); extracting, for each track (i) of the plurality of tracks, an existence probability; predicting the object state of each object (j) of the plurality of objects with respect to a next measurement time; determining whether another update is to be performed; and if another update is to be performed, repeating the method steps from and including updating each track (i) of the plurality of tracks.

2. The method of claim 1, wherein the sampling further comprises: determining a weight (p_G) of an update hypothesis based on a likelihood of the contained associations and/or events; and erasing duplicate update hypotheses (γ) out of the vector of hypotheses (γ^(1 . . . T)).

3. The method of claim 2, wherein the sampling is configured to create a hypotheses vector (γ^(1 . . . T)) containing the respective associations; and determining the state of each track (i) of the plurality of tracks is further based on the weight (p_G) of the update hypotheses (γ).

4. The method of claim 2, wherein extracting for each track (i) an existence probability is based on the weight (p.sub.G) of the update hypotheses (γ), which confirms the respective track (i) by either a measurement update or a missed detection.

5. The method of claim 1, wherein, on a first update, updating each track (i) of the plurality of tracks is based on the respective track (i) and all measurement hypotheses ((j, k)) of the multiple measurement hypotheses.

6. The method of claim 1, further comprising: pre-gating each track (i) of the plurality of tracks in order to determine measurements relevant to the respective track (i) and to update each track (i) of the plurality of tracks based on relevant measurements only; optionally, determining relevant measurements is based on a distance between the respective track (i) of the plurality of tracks and the respective measurement, thereby discarding measurements exceeding a predetermined maximum distance.

7. The method of claim 6, wherein determining each measurement hypothesis (k) of the multiple measurement hypotheses is based on a Gaussian mixture

Z_j = \sum_{k=1}^{K_j} w_j^k \, \mathcal{N}(z; z_j^k, R_j^k)

wherein w_j^k is the probability of each hypothesis k = 1 . . . K_j and z_j^k, R_j^k are the measurement vector and measurement covariance of the hypothesis.

8. The method of claim 7, further comprising: discarding one or more measurement hypotheses ((j, k)) of the multiple measurement hypotheses based on position-based gating, optionally wherein the discarding is based on an association likelihood g(z_j^k | x_i).

9. The method of claim 1, wherein determining, for each track (i) of the plurality of tracks, a likelihood (η_i(j, k)) is based on

\eta_i(j,k) = \begin{cases} 1 - r_i, & \text{died } (\gamma_i = -1) \\ (1 - p_D(x_i))\, r_i, & \text{missed } (\gamma_i = 0) \\ \dfrac{p_D(x_i)\, g(z_j^k \mid x_i)}{\kappa(z^{(k,j)})}\, w_j^k\, r_i, & \text{updated } (\gamma_i = (k,j)) \end{cases}

wherein p_D(x_i) is a detection rate assumed to depend only on the track (i) and κ denotes the Poisson distributed spatial clutter intensity.

10. The method of claim 1, wherein sampling, for each iteration t of a number of iterations T, an update hypothesis γ.sup.(t) is based on a Gibbs Sampling Algorithm.

11. The method of claim 10, wherein the Gibbs Sampling Algorithm receives as input the number of iterations T, a likelihood table η, and a look-up table Λ holding associated measurement hypothesis labels (j_{1 . . . L}, k_{1 . . . L}).

12. The method of claim 11, wherein the likelihood table η has a size P×(L+2) and is constructed based on

\eta = \begin{bmatrix} \eta_1(-1) & \eta_1(0) & \eta_1(j_1,k_1) & \cdots & \eta_1(j_L,k_L) \\ \vdots & \vdots & \vdots & & \vdots \\ \eta_P(-1) & \eta_P(0) & \eta_P(j_1,k_1) & \cdots & \eta_P(j_L,k_L) \end{bmatrix}

wherein L denotes the number of most likely measurement hypotheses retained; optionally, L is chosen such that at least all hypotheses of close-by measurements are included.

13. The method of claim 11, wherein the look-up table Λ is constructed based on

\Lambda = \begin{bmatrix} -1 & 0 & (j_1,k_1)_1 & \cdots & (j_L,k_L)_1 \\ \vdots & \vdots & \vdots & & \vdots \\ -1 & 0 & (j_1,k_1)_P & \cdots & (j_L,k_L)_P \end{bmatrix}.

14. The method of claim 1, further comprising: generating one or more measurement hypotheses ((j, k)) of the multiple measurement hypotheses for each object (j) of the plurality of objects; optionally wherein the one or more measurement hypotheses ((j, k)) of the multiple measurement hypotheses are provided with equal weight or with different weights based on a measurement of one or more sensors.

15. The method of claim 14, wherein at least one of: the one or more sensors include one or more of: a radar sensor, an optical sensor, in particular a camera, an ultrasonic sensor, a lidar sensor; the one or more measurement hypotheses ((j, k)) include one or more extracted boxes out of images representing a single real world object; the one or more measurement hypotheses ((j, k)) include multiple boxes, ellipses, or other similar geometries representing a single real world object; and the one or more measurement hypotheses ((j, k)) are based on a single radar measurement, where multiple Doppler velocity profiles of extended real world objects are generated.

16. The method of claim 1, wherein sampling further comprises associating not more than a single measurement hypothesis ((j, k)) of the multiple measurement hypotheses of a single object (j) of the plurality of objects to a respective track (i) of the plurality of tracks.

17. A system for integrating multiple measurement hypotheses in an efficient labeled multi-Bernoulli (LMB) filter, the system comprising: a control unit configured to: receive one or more measurement hypotheses ((j, k)) of the multiple measurement hypotheses for each object (j) of the plurality of objects; update each track (i) of the plurality of tracks based on the respective track (i) and the one or more measurement hypotheses ((j, k)) of the multiple measurement hypotheses; determine, for each combination of track (i) of the plurality of tracks and measurement hypothesis ((j, k)), a likelihood (η_i(j, k)); sample, for each iteration (t) of a plurality of iterations (T), an update hypothesis γ^(t) based on an association of each track (i) of the plurality of tracks to one of: a measurement hypothesis ((j, k)), a missed-detection event, or a track-death event; determine the state of each track (i) of the plurality of tracks based on its respective associations in the update hypotheses (γ); extract, for each track (i) of the plurality of tracks, an existence probability; predict the object state of each object (j) of the plurality of objects with respect to a next measurement time; determine whether another update is to be performed; and if another update is to be performed, repeat the method steps from and including updating each track (i) of the plurality of tracks.

18. A vehicle comprising the system according to claim 17.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows an example configuration illustrating three objects and two measurements and corresponding hypotheses in accordance with embodiments of the present invention.

(2) FIG. 2 shows a diagram illustrating how a number of update hypotheses and the weight of the most likely association of a first track depend on the number of Gibbs iterations T in accordance with embodiments of the present invention.

(3) FIG. 3 shows a flow chart of an exemplary method for integrating multiple measurement hypotheses in accordance with embodiments of the present invention.

DETAILED DESCRIPTION OF THE DRAWINGS

(4) The LMB filter estimates a set of statistically independent tracks, each of which has a unique label (ID), an existence probability r and a state representation. According to embodiments of the present invention, a Gaussian mixture implementation is chosen for the state, where each track is described by one Gaussian mixture having multiple mixture components, each with a state vector, a covariance and a weight w^c. The weights over all mixture components of a single track sum to one.

(5) In accordance with embodiments of the present invention, multiple measurement hypotheses are considered, which affects only the update step. This step is therefore described in detail.

(6) Multiple Measurement Hypothesis Definition

(7) The multiple measurement hypotheses are defined by a set of measurement hypotheses originating from a single object j using a Gaussian mixture

(8) Z_j = \sum_{k=1}^{K_j} w_j^k \, \mathcal{N}(z; z_j^k, R_j^k) \quad (1)

where w_j^k is the probability of each hypothesis k = 1 . . . K_j and z_j^k, R_j^k are the measurement vector and measurement covariance of a particular hypothesis. A single hypothesis, referring to a single component of a multiple-measurement-hypotheses Gaussian mixture, can be uniquely identified by the tuple (j, k). In general, only one of the hypotheses is the correct measurement for the object, and it is unknown which one it is.

(9) For a single measurement frame, the total received measurements can then be defined as

(10) Z = C \cup \left[ \bigcup_{j=1}^{N_z} Z_j \right] \quad (2)

where N_z is the number of objects generating measurement hypotheses and C is the set of additional clutter measurement hypotheses. Traditionally, clustering or detection algorithms are used to generate such hypothesis sets for a single object, and a choice of the best measurement in such a set is made via some criterion, such as the highest score. According to embodiments of the present invention, a choice is not made explicitly after the clustering of hypotheses; instead, all hypotheses of a measurement cluster Z_j are given to the LMB filter, which resolves to the best solution over time. The hypothesis probability w_j^k then directly represents the confidence in each hypothesis as produced by such clustering or detection algorithms.
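As an illustration of equation (1), a hypothesis set Z_j can be represented as a list of weighted Gaussian components. The following Python sketch is not part of the patent; the names `MeasurementHypothesis` and `make_cluster` are illustrative assumptions:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class MeasurementHypothesis:
    """One component (j, k) of an object's measurement mixture, eq. (1)."""
    j: int          # object / measurement-cluster id
    k: int          # hypothesis index within the cluster
    w: float        # hypothesis probability w_j^k
    z: np.ndarray   # measurement vector z_j^k
    R: np.ndarray   # measurement covariance R_j^k


def make_cluster(j, weights, means, covs):
    """Build the hypothesis set Z_j; the weights w_j^k must sum to one."""
    assert abs(sum(weights) - 1.0) < 1e-9, "w_j^k must sum to 1"
    return [MeasurementHypothesis(j, k + 1, w, np.asarray(z), np.asarray(R))
            for k, (w, z, R) in enumerate(zip(weights, means, covs))]


# Example: object j = 2 with three equally weighted position hypotheses.
Z2 = make_cluster(2, [1/3, 1/3, 1/3],
                  [[0.0, 0.0], [0.5, 0.1], [1.0, -0.2]],
                  [np.eye(2)] * 3)
```

A clutter hypothesis set C could be represented the same way, with its own cluster id.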
LMB Update

(11) The input to the update step is a set of predicted tracks (a priori states) and new measurements Z. At the beginning of the update, the existence probability of each track is multiplied by a survival probability. The set is then extended with new tracks from the birth model, after which there are i = 1 . . . P tracks with existence probabilities r_i. First, all tracks are updated by all measurement hypotheses. To increase performance, position-based gating is performed to save the computation time of very unlikely associations. An association is specified by γ_i = (j, k) and refers to an association of track i with measurement hypothesis (j, k). For each likely association, the track is updated with the measurement hypothesis. Since multiple mixture components may be present in one track, each component is updated and its weight is adapted accordingly. The result is the a posteriori Gaussian mixture, which is stored and used at the end to build up the a posteriori track. During the update, the association likelihood g(z_j^k | x_i) is calculated.

(12) Additionally, the likelihoods for a dying track γ_i = −1 and a missed detection γ_i = 0 are calculated. The detection rate p_D(x_i) is assumed to depend only on the track, and κ denotes the Poisson distributed spatial clutter intensity:

(13) \eta_i(j,k) = \begin{cases} 1 - r_i, & \text{died } (\gamma_i = -1) \\ (1 - p_D(x_i))\, r_i, & \text{missed } (\gamma_i = 0) \\ \dfrac{p_D(x_i)\, g(z_j^k \mid x_i)}{\kappa(z^{(k,j)})}\, w_j^k\, r_i, & \text{updated } (\gamma_i = (k,j)) \end{cases} \quad (3)

(14) A likelihood table η with size P×(L+2) is constructed, containing the likelihoods for dying, missed detection and the L most likely measurement hypotheses:

(15) \eta = \begin{bmatrix} \eta_1(-1) & \eta_1(0) & \eta_1(j_1,k_1) & \cdots & \eta_1(j_L,k_L) \\ \vdots & \vdots & \vdots & & \vdots \\ \eta_P(-1) & \eta_P(0) & \eta_P(j_1,k_1) & \cdots & \eta_P(j_L,k_L) \end{bmatrix} \quad (4)

where L should be chosen such that at least all hypotheses of the close-by measurements are included. To retain the relation of the tracks to the measurements, a look-up table Λ holding the associated measurement hypothesis labels (j_{1 . . . L}, k_{1 . . . L}) is constructed:

(16) \Lambda = \begin{bmatrix} -1 & 0 & (j_1,k_1)_1 & \cdots & (j_L,k_L)_1 \\ \vdots & \vdots & \vdots & & \vdots \\ -1 & 0 & (j_1,k_1)_P & \cdots & (j_L,k_L)_P \end{bmatrix} \quad (5)
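A minimal sketch of constructing the tables η and Λ, assuming per-track candidate associations whose likelihoods were already computed via equation (3). The function name `build_tables` and the input layout are illustrative assumptions, not from the patent:

```python
import numpy as np


def build_tables(r, pD, candidates, L):
    """Build the P x (L+2) likelihood table eta and the label table Lam.

    r          : existence probabilities r_i, length P
    pD         : detection probabilities p_D(x_i), length P
    candidates : per track, a list of ((j, k), eta_value) updated associations
    Columns: [died, missed, L most likely hypotheses]; unused hypothesis
    columns keep likelihood 0 and the placeholder label (-1,).
    """
    P = len(r)
    eta = np.zeros((P, L + 2))
    Lam = [[(-1,)] * (L + 2) for _ in range(P)]
    for i in range(P):
        eta[i, 0] = 1.0 - r[i]             # died, gamma_i = -1
        eta[i, 1] = (1.0 - pD[i]) * r[i]   # missed, gamma_i = 0
        Lam[i][0], Lam[i][1] = (-1,), (0,)
        best = sorted(candidates[i], key=lambda c: c[1], reverse=True)[:L]
        for l, ((j, k), val) in enumerate(best):
            eta[i, 2 + l] = val
            Lam[i][2 + l] = (j, k)
    return eta, Lam


# Track 1 of the later example: r = 0.7, p_D = 0.9, eq. (3) likelihoods.
cands = [[((1, 1), 0.63), ((2, 1), 0.042), ((2, 2), 0.021), ((2, 3), 0.042)]]
eta, Lam = build_tables([0.7], [0.9], cands, L=3)
```

The top-L selection per row mirrors how only close-by hypotheses survive into η and Λ.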

(17) One combination t of associations for all tracks is called an update hypothesis and is denoted as a P-tuple γ^(t) = (γ_1, . . . , γ_P). In one update hypothesis, each measurement j can only be associated to one track.

(18) For each update hypothesis γ^(t), the corresponding likelihood w_G^(t) can be estimated by multiplying the likelihoods of the included track associations:

(19) 0 \le w_G^{(t)} = \prod_{i=1}^{P} \eta_i(\gamma_i^{(t)}) \quad (6)

(20) It is intractable to consider all update hypotheses, since there are many combinations of tracks surviving, not being detected, or being updated by some measurement hypothesis. A tractable way to generate the update hypotheses is Gibbs sampling.

(21) Gibbs Sampling

(22) The proposed Gibbs sampling is shown in Algorithm 1 (see below). For each Gibbs iteration t from 1 . . . T, an update hypothesis γ^(t) is sampled. An array J holds the measurement ids j of the currently associated measurements. For the extraction of the measurement label j from a measurement hypothesis (j, k), the following function is defined:

(23) \lambda(\gamma_i) = \begin{cases} -1, & \gamma_i = -1 \text{ (died)} \\ 0, & \gamma_i = 0 \text{ (missed detection)} \\ j, & \gamma_i = (j,k) \text{ (updated by } (j,k)\text{)} \end{cases} \quad (7)

(24) In one Gibbs iteration, there is a loop over all tracks. For each track, first the associated measurement of the previous frame is released from J, and second a new association is sampled. It is sampled from all available measurement hypotheses ((j, k) ∀ j = 1 . . . N_z, j ∉ J) and the two other cases, died or missed detection. After sampling T update hypotheses, all duplicates are erased and the remaining association likelihoods are normalized to 1, resulting in the update hypothesis probabilities p_G. The complexity of the Gibbs sampling is linear in the number of tracks. The two parameters L (maximum number of associations per object) and T (maximum number of Gibbs iterations) provide a trade-off between accuracy and speed.

(25) In some embodiments, optionally, a Hungarian association is used as an initial update hypothesis γ^(1). However, this is not required. Generally, it is assumed that all tracks die (γ_i = −1) as an initial solution (see FIG. 2). An alternative option would be that all tracks have a missed detection, meaning they are still alive but have not generated any measurement in the current frame.

(26) Based on the example datasets, it is shown that this does not yield a significant improvement when a sufficient number of Gibbs iterations is used. The Hungarian association requires a cost matrix as input (see, e.g., Reuter et al.), which must additionally be calculated. Furthermore, the standard Hungarian algorithm cannot differentiate between multiple measurement hypotheses, so the solution could contain multiple hypotheses of one measurement j, which is undesired.

(27) ALGORITHM 1: Modified Gibbs sampling
Input: T, η, Λ
Output: γ^(1) . . . γ^(T) and p_G^(1) . . . p_G^(T)
 1: c := [1 : L+2]
 2: J = [−1, . . . , −1]
 3: η̃ = η
 4: for t = 1 : T do
 5:   γ^(t) = [ ]
 6:   w_G^(t) = 1.0
 7:   for i = 1 : P do
 8:     J[i] = −1
 9:     for l = 3 : (L+2) do
10:       if Λ_i(l) in J then
11:         η̃_i(l) = 0
12:       else
13:         η̃_i(l) = η_i(l)
14:       end if
15:     end for
16:     γ_i^(t) = Categorical(Λ_i(c), η̃_i)
17:     w_G^(t) *= η_i(γ_i^(t))
18:     if λ(γ_i^(t)) > 0 then
19:       J[i] = λ(γ_i^(t))
20:     end if
21:   end for
22: end for
23: ClearDuplicates(γ, w_G)
24: p_G = Normalize(w_G)
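Algorithm 1 can be sketched in Python as follows. This is a hedged illustration, not the patent's reference implementation: `gibbs_sample` and `lam_label` are assumed names, and the demo input reuses the values of Tables II and III from the later example (associations are tuples: (-1,) died, (0,) missed, (j, k) updated):

```python
import numpy as np

rng = np.random.default_rng(0)


def lam_label(a):
    """Measurement id per eq. (7): -1 died, 0 missed, j for (j, k)."""
    return a[0]


def gibbs_sample(T, eta, Lam, rng=rng):
    """Sample T update hypotheses; return unique hypotheses and weights p_G."""
    P = eta.shape[0]
    weights = {}
    J = [-1] * P                         # measurement ids currently claimed
    for t in range(T):
        gamma, w = [], 1.0
        for i in range(P):
            J[i] = -1                    # release track i's previous association
            row = eta[i].copy()
            for l in range(2, eta.shape[1]):
                if lam_label(Lam[i][l]) in J:
                    row[l] = 0.0         # measurement already claimed elsewhere
            row = row / row.sum()
            l = rng.choice(len(row), p=row)   # categorical draw, line 16
            a = Lam[i][l]
            gamma.append(a)
            w *= eta[i][l]
            if lam_label(a) > 0:
                J[i] = lam_label(a)
        weights.setdefault(tuple(gamma), w)   # ClearDuplicates: keep first hit
    total = sum(weights.values())
    hyps = list(weights)
    p_G = [weights[k] / total for k in hyps]  # Normalize
    return hyps, p_G


# Demo input: Tables II and III of the example scenario (P = 3, L = 3).
eta = np.array([[0.30, 0.07, 0.63, 0.04, 0.04],
                [0.30, 0.07, 0.25, 0.25, 0.21],
                [0.60, 0.04, 0.18, 0.18, 0.01]])
Lam = [[(-1,), (0,), (1, 1), (2, 1), (2, 3)],
       [(-1,), (0,), (2, 1), (2, 2), (2, 3)],
       [(-1,), (0,), (2, 3), (1, 1), (2, 1)]]
hyps, p_G = gibbs_sample(2000, eta, Lam)
```

As in the listing, J persists across iterations, so each track's slot is released just before resampling while the other tracks keep their current claims.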
LMB Reconstruction

(28) In order to close the recursion, the a posteriori LMB tracks are built up by approximating the multi-object posterior density by an LMB density with parameters \pi_{Z_+}(X) = \{ r^{(\ell_+)}, p^{(\ell_+)} \}_{\ell_+ \in \mathbb{L}_+}, where (equations (19), (20) and (21) referring to Reuter et al.):

(29) r^{(\ell_+)} = \sum_{(I_+, \theta_+) \in \mathcal{F}(\mathbb{L}_+) \times \Theta_+} \bar{w}_{Z_+}^{(I_+, \theta_+)}\, 1_{I_+}(\ell_+), \quad (19)

p^{(\ell_+)}(x_+) = \frac{1}{r^{(\ell_+)}} \sum_{(I_+, \theta_+) \in \mathcal{F}(\mathbb{L}_+) \times \Theta_+} \bar{w}_{Z_+}^{(I_+, \theta_+)}\, 1_{I_+}(\ell_+)\, p_{Z_+}^{(\theta_+)}(x_+, \ell_+), \quad (20)

\bar{w}_{Z_+}^{(I_+, \theta_+)} = \frac{ w_{Z_+}^{(I_+, \theta_+)} }{ \sum_{(I_+, \theta_+) \in \mathcal{F}(\mathbb{L}_+) \times \Theta_+} w_{Z_+}^{(I_+, \theta_+)} } \quad (21)

(30) The approximation exactly matches the first moment of the posterior δ-GLMB distribution, i.e. the spatial distribution of the tracks as well as the mean cardinality, including hypotheses that contradict the received measurement set (e.g. a hypothesis modeling the disappearance of a track which obtains a precise measurement).

(31) The multi-object posterior density is equivalent to the update hypotheses with corresponding probabilities p_G. A single for-loop over all update hypotheses is sufficient. When an association γ_i = (j, k) appears for the first time, the updated mixture components are included in the a posteriori track as mixture components, and their weights w_i^c(j, k) are multiplied by the update hypothesis probability p_G. The a posteriori Gaussian mixture was stored during the update (see above, "LMB Update"), when the update likelihood g(z_j^k | x_i) was calculated. If the association γ_i is already included, only the weights w_i^c(j, k) of the corresponding mixture components are increased by the original component weight multiplied by p_G. The complexity of this transformation is linear in the number of tracks and the number of unique update hypotheses (at most T) and does not depend on the number of measurements.

(32) Finally, the existence probability of each track is calculated by summing up the weights of its mixture components. Since the mixture weights are normalized to one before the update step, the existence probability of the track is equivalent to the sum of the probabilities p_G of all update hypotheses in which the track was included with λ(γ_i) ≥ 0:

(33) r_i = \sum_{t \in 1 \ldots T,\ \lambda(\gamma_i^{(t)}) \ge 0} p_G^{(t)} \quad (8)

(34) The weight of a mixture component is equivalent to the sum of the probabilities of all update hypotheses in which its association is included:

(35) w_i^c(j,k) = \sum_{t \in 1 \ldots T,\ \gamma_i^{(t)} = (j,k)} p_G^{(t)} \quad (9)

and the weights are normalized to one.
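Equations (8) and (9) can be sketched as a short reconstruction routine. The tuple encoding of associations ((-1,) died, (0,) missed, (j, k) updated) and the function name `reconstruct` are assumptions for illustration:

```python
def reconstruct(hyps, p_G, P):
    """Existence probabilities r_i (eq. 8) and mixture weights w_i^c (eq. 9).

    hyps : list of update hypotheses, each a P-tuple of associations
    p_G  : normalized update hypothesis probabilities
    """
    r = [0.0] * P
    w = [dict() for _ in range(P)]          # association -> accumulated weight
    for gamma, p in zip(hyps, p_G):
        for i, a in enumerate(gamma):
            if a[0] >= 0:                   # track alive: missed or updated
                r[i] += p                   # eq. (8), lambda(gamma_i) >= 0
                w[i][a] = w[i].get(a, 0.0) + p   # eq. (9)
    for i in range(P):                      # normalize mixture weights to one
        if r[i] > 0:
            w[i] = {a: v / r[i] for a, v in w[i].items()}
    return r, w


# Two toy update hypotheses over three tracks.
hyps = [((1, 1), (-1,), (0,)), ((0,), (2, 1), (-1,))]
p_G = [0.6, 0.4]
r, w = reconstruct(hyps, p_G, 3)
```

Note that a missed detection (0,) counts toward a track's existence, matching the λ(γ_i) ≥ 0 condition of equation (8).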

(36) A further optimization could be to find subsets of measurements and tracks, e.g. using clustering techniques, for which the update steps can be assumed to be independent.

(37) It has been shown that the existence probability r_i of an a posteriori track remains constant when multiple measurement hypotheses are added. The existence probability directly depends on the update hypothesis probabilities p_G^(t) in which the track is included. Therefore, it is sufficient to show that the un-normalized probability w_G^(t) is constant. To retain the existence probability, all measurement hypotheses must have the same association likelihood g(z_j | x_i). By including K_j measurement hypotheses, a single update hypothesis γ^(t) splits up into K_j update hypotheses, which are summed up to determine the existence probability. In the following, the measurement to which track i = 1 is associated is split up into K_j measurement hypotheses:

(38) w_G = \left[ \sum_{k=1}^{K_j} \eta_1(j,k) \right] \prod_{i=2}^{P} \eta_i(\gamma_i) \quad (10)

where the sum is equal to the likelihood without measurement hypotheses, η_1(j):

(39) \sum_{k=1}^{K_j} \eta_1(j,k) \overset{(3)}{=} \frac{p_D(x_1)\, g(z_j \mid x_1)}{\kappa(z)}\, r_1 \sum_{k=1}^{K_j} w_j^k = \frac{p_D(x_1)\, g(z_j \mid x_1)}{\kappa(z)}\, r_1 = \eta_1(j) \quad (11)
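The invariance in equations (10) and (11) can be checked numerically: splitting a measurement into K_j equally likely hypotheses whose weights w_j^k sum to one leaves the un-normalized hypothesis weight unchanged. The numbers below are arbitrary illustrative values, not from the patent:

```python
# eta_1(j) without hypothesis splitting, eq. (3) updated case.
p_D, g, kappa, r1 = 0.9, 1.0, 1.0, 0.7
eta_unsplit = p_D * g / kappa * r1

# Split into K_j hypotheses with identical likelihood g and weights summing to 1.
K = 3
hyp_weights = [1 / K] * K
eta_split = sum(p_D * g / kappa * w * r1 for w in hyp_weights)

# Product over the remaining tracks i = 2..P (arbitrary example values).
rest = 0.25 * 0.18
w_G_unsplit = eta_unsplit * rest   # eq. (6) without splitting
w_G_split = eta_split * rest       # eq. (10) after splitting
```

Up to floating-point rounding, both hypothesis weights agree, as equation (11) asserts.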
Object Extraction

(40) In most cases, instead of the complete mixture, a simplified output containing only a single state and covariance is provided per track. This can be achieved, e.g., by calculating the weighted mean of all mixture components, the weight being the component weight w_i^c. Nevertheless, since multiple measurement hypotheses with systematic errors cause a multi-modal mixture distribution, it is advisable not to use the weighted mean value: if the systematic errors of the hypotheses are biased, the extracted state would be biased as well. Instead, the mixture component with the highest weight should be extracted, so that the influence of wrong hypotheses is inhibited. Another option would be to extract all mixture components in close proximity with the highest common weight.
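The extraction rule above (highest-weight component rather than weighted mean) can be sketched as follows; the name `extract_state` and the one-dimensional component means are illustrative assumptions:

```python
def extract_state(components):
    """components: list of (weight, mean) mixture components of one track.

    Returns the mean of the highest-weight component, avoiding the bias a
    weighted average would inherit from systematically wrong hypotheses.
    """
    w, mean = max(components, key=lambda c: c[0])
    return mean


# Bimodal mixture: a weighted mean would land between the two modes.
comps = [(0.6, 1.0), (0.3, 5.0), (0.1, 5.2)]
best = extract_state(comps)
weighted = sum(w * m for w, m in comps)
```

Here `best` stays on the dominant mode, while `weighted` is pulled toward the minority hypotheses around 5.0.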

Example Scenario

(41) FIG. 1 shows an example configuration illustrating three objects 120-1, 120-2, and 120-3 (ID1, ID2, and ID3) and two measurements 140, 150 and corresponding hypotheses in accordance with embodiments of the present invention.

(42) The theoretical derivation of the last section is illustrated by the simple example shown in FIG. 1. In this scenario, there are three tracks and two measurements, where measurement j = 2 consists of three hypotheses. The tracks are already predicted and their existence probabilities have been multiplied by the survival probability, so that only the update step is considered. For simplicity, every track consists of only a single mixture component.

(43) FIG. 1 shows an overview of the example configuration with three objects 120-1, 120-2, and 120-3 (ID1, ID2, and ID3) and two measurements (stars) with one hypothesis 140-1 for measurement 1 and three hypotheses 150-1, 150-2, and 150-3 for measurement 2.

(44) In a first step, all tracks are updated with all measurement hypotheses (see above, "LMB Update"). The association likelihood g(z_j^k | x_i), which considers the distance between predicted track and measurement as well as the covariance matrix, is given in Table I:

(45) TABLE I: ASSUMPTION OF ASSOCIATION LIKELIHOODS g(z_j^k | x_i)

  (j, k)    (1, 1)   (2, 1)   (2, 2)   (2, 3)
  track 1   1.0      0.2      0.1      0.2
  track 2   0.1      1.2      1.2      1.0
  track 3   0.5      0.1      0.05     1.5

(46) Next, the likelihoods η_i are calculated based on equation (3). To prepare the input for the Gibbs sampling (Algorithm 1), the likelihood table η based on equation (4) and the corresponding mapping Λ based on equation (5) are constructed (see Tables II and III).

(47) TABLE II: LIKELIHOODS OF TRACK DIED, PREDICTION AND L = 3 MOST LIKELY ASSOCIATIONS

  η, column l   −1     0      1      2      3
  track i = 1   0.30   0.07   0.63   0.04   0.04
  track i = 2   0.30   0.07   0.25   0.25   0.21
  track i = 3   0.60   0.04   0.18   0.18   0.01

(48) TABLE III: ASSOCIATED MEASUREMENT HYPOTHESES Λ FOR ALL ENTRIES OF η

  column l   −1    0    1             2             3
  i = 1      −1    0    1 (λ(1, 1))   2 (λ(2, 1))   2 (λ(2, 3))
  i = 2      −1    0    2 (λ(2, 1))   2 (λ(2, 2))   2 (λ(2, 3))
  i = 3      −1    0    2 (λ(2, 3))   1 (λ(1, 1))   2 (λ(2, 1))

(49) A constant detection probability p_D of 0.9 and a clutter rate κ of 1 are assumed, and L = 3 is used. All hypotheses of measurement j = 2 are assumed to be equally weighted:

(50) w_2^k = 1/3.
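With these assumptions, Table II can be reproduced from Table I via equation (3); the a priori existence probabilities are taken from the "before" row of Table IV. The sketch below recomputes the died, missed and updated likelihoods prior to the L = 3 selection; it is an illustration, not the patent's code:

```python
import numpy as np

# Table I association likelihoods g(z_j^k | x_i); columns (1,1), (2,1), (2,2), (2,3).
g = np.array([[1.0, 0.2, 0.1, 0.2],
              [0.1, 1.2, 1.2, 1.0],
              [0.5, 0.1, 0.05, 1.5]])
r = np.array([0.7, 0.7, 0.4])            # a priori existence probabilities
w_hyp = np.array([1.0, 1/3, 1/3, 1/3])   # w_j^k: measurement 1 has one hypothesis
p_D, kappa = 0.9, 1.0                    # detection probability and clutter rate

eta_died = 1.0 - r                       # eq. (3), died case
eta_missed = (1.0 - p_D) * r             # eq. (3), missed case
eta_upd = p_D * g / kappa * w_hyp * r[:, None]   # eq. (3), updated case
```

Rounding to two decimals recovers Table II, e.g. η_1(1,1) = 0.63 and η_3(2,3) = 0.18.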

(51) If the measurement hypotheses were treated independently (w_2^k = 1), the association likelihoods of all combinations with (2, k) would increase by a factor of 3 in Table II. The measurement labels in Λ (Table III) would be 1 to 4, i.e. hypotheses (1 . . . 4, 1). Update hypotheses like γ = [(1,1), (2,1), (2,3)] would be possible. This solution is also evaluated in the following as Standard LMB.

(52) After running the Gibbs sampling (see above, "Gibbs Sampling"), the weights of all update hypotheses are normalized to 1. The most likely update hypothesis is γ^(1) = [(1,1), −1, −1] with probability p_G = 0.11. The next three hypotheses differ only in γ_2, with the associated hypotheses (2,1), (2,2), (2,3) and probabilities 0.09, 0.09, 0.08. The following three hypotheses have tracks 1 and 3 dying and track i = 2 associated with each hypothesis of measurement j = 2 (probabilities 0.05, 0.05, 0.04).

(53) FIG. 2 shows diagram 200 illustrating how the number of update hypotheses and the weight of the most likely association of the first track depend on the number of Gibbs iterations T. Diagram 200 shows results from a Monte Carlo simulation over the number of Gibbs iterations: the influence on the weight of the most likely track-to-measurement association for the first track with the Hungarian method as initial solution (graph 202) and without it (graph 204), and the influence on the number of update hypotheses (graph 206). The number of update hypotheses is the number of unique hypotheses extracted out of the T sampled hypotheses. The Hungarian solution as starting update hypothesis overestimates the weight, since the most likely association is always included, whereas starting with dying tracks (γ_i = −1) yields an underestimation. With more than 50 Gibbs iterations, however, both approaches show identical results. Since the weight converges after approximately 1000 iterations, the Hungarian initialization has no benefit in this example (see above).

(54) The last step is the conversion of the update hypotheses to a posteriori tracks (see above, "LMB Reconstruction"). The existence probability r of a track, identified by its unique label, is calculated by summing up the probabilities p_G of all update hypotheses in which the track is included, based on equation (8). The results are shown in Table IV:

(55) TABLE IV: EXISTENCE PROBABILITY OF A POSTERIORI TRACKS

  track i =          1      2      3
  before             0.70   0.70   0.40
  Multi. Hyp. LMB    0.67   0.66   0.27
  Standard LMB       0.75   0.87   0.55

(56) For track 1, the existence probability is 0.67. It is mainly composed of the update hypothesis γ_1 = (1,1) (p_G = 0.58) and the missed detection hypothesis (p_G = 0.07). The existence probability of track 3 decreases, since all measurements are on average closer to other tracks. Tracks 1 and 2 have a similar probability, slightly decreased due to the non-perfect measurements. Even though the sums of the likelihoods in Table I are identical for tracks 1 and 2, track 1 has a higher probability. The reason is that track 3 has a higher likelihood with measurement 2, which is more likely associated with track 2.

(57) When the measurement hypotheses are treated independently (Standard LMB), the existence probabilities of the tracks increase significantly, since more update hypotheses are feasible and more measurements than tracks are available. In particular, the combination of track 3 with (2,3) and track 2 with (2,1) or (2,2) becomes possible. The probability of track 3 increases significantly to 0.55 instead of decreasing.

(58) The state of the a posteriori track is a Gaussian mixture containing all feasible track-to-measurement associations (at most L) as mixture components. The weights of the components are calculated based on equation (9) and are shown in Table V:

(59) TABLE V: WEIGHT OF THE MIXTURE COMPONENTS w_i^c(j, k) OF THE PREDICTED (0) OR UPDATED TRACK (BASED ON A MEASUREMENT (j, k))

  w_i^c(·)      0      (1, 1)   (2, 1)   (2, 2)   (2, 3)
  track i = 1   0.11   0.85     0.02     0.0      0.02
  track i = 2   0.13   0.0      0.30     0.30     0.26
  track i = 3   0.19   0.33     0.19     0.0      0.28

(60) One solution for a single output state would be to extract the mixture component with the highest weight. For track 1, this would be the mixture component from the association with measurement hypothesis (1,1). For track 2, the components from the associations with (2,1) and (2,2) have the highest weight.

(61) The cardinality distributions before and after the LMB update are shown in Table VI:

(62) TABLE VI: LMB CARDINALITY (EQUIVALENT TO GLMB CARDINALITY)

  cardinality        0      1      2      3      mean
  before             0.05   0.29   0.45   0.2    1.79
  Multi. Hyp. LMB    0.06   0.35   0.54   0.06   1.59
  Standard LMB       0.01   0.15   0.47   0.36   2.18

(63) The probability of a cardinality of 1 or 2 is increased, whereas that of a cardinality of 0 or 3 is decreased. The average cardinality slightly decreases, since track 3 is not confirmed and the measurements for tracks 1 and 2 have a larger displacement.

(64) If all hypotheses are treated as independent measurements, the average cardinality increases significantly to 2.18, and the probability of a cardinality of 3 in particular rises to 0.36.

(65) FIG. 3 shows a flow chart of an exemplary method 300 for integrating multiple measurement hypotheses in an efficient labeled multi-Bernoulli (LMB) filter in accordance with embodiments of the present invention. The LMB filter estimates a plurality of tracks i for a plurality of objects j. Each track i of the plurality of tracks has a unique label (ID), a probability r, and a state. Further, each track i of the plurality of tracks is associated to an object j of a plurality of objects to be tracked. Each object j has an object state. The method starts at step 301.

(66) In step 304, one or more measurement hypotheses (j, k) of the multiple measurement hypotheses are received for each object j of the plurality of objects. It can also happen that a sensor does not detect an object, so that no measurement hypothesis (j, k) is received for it. In general, it is assumed that at least one object j is detected.

(67) In step 306, each track i of the plurality of tracks is updated based on the respective track i and the one or more measurement hypotheses (j, k) of the multiple measurement hypotheses. On the first update only, all measurements are updated with all tracks. This may be required to calculate the measurement likelihood. In later iterations, only reasonable measurement hypotheses (i.e. those having a high likelihood) are processed by the LMB, and unlikely measurement-track updates are withdrawn during the Gibbs sampling (see step 310). In further detail, only measurement-track updates which have a high likelihood are used in the Gibbs sampling (see the likelihood table η and the look-up table Λ described above). In an example, all measurement-track updates are performed and the likelihood is shown in Table I (see above). In this example, only the three most-likely measurement-track updates are used for the Gibbs sampling (see Table III). For the first track, λ(2,2) is missing, meaning that it is very unlikely that track 1 is associated with measurement hypothesis (2,2). Consequently, the Gibbs sampling can “ignore” this update. In another practical example, if 100 tracks and 500 measurement hypotheses exist, it would be rather cumbersome to perform this update for all combinations, so a pre-gating is performed (see above). For example, in autonomous driving, usually only measurements which lie within a certain area (e.g. 5 m) around the track are considered. For all other measurements the likelihood may be set to zero (equivalently, the association cost may be set to infinity) in order to disregard such measurements.
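The pre-gating mentioned above can be sketched as follows. This is an assumption-laden illustration (the helper names and positions are invented): only hypotheses whose measured position lies within a gate radius, e.g. 5 m, of a track's predicted position are kept for the update.

```python
import math

# Sketch of pre-gating: keep only measurement hypotheses within a gate
# radius (e.g. 5 m) of the track's predicted position; all other pairings
# are excluded up front. Helper names and values are illustrative.

GATE_RADIUS_M = 5.0

def gate(track_pos, hypotheses, radius=GATE_RADIUS_M):
    """Return only the hypotheses whose position lies within the gate."""
    return [(jk, pos) for jk, pos in hypotheses
            if math.dist(track_pos, pos) <= radius]

hyps = [((1, 1), (2.0, 1.0)), ((2, 1), (40.0, 3.0)), ((2, 2), (4.0, 2.0))]
print(gate((0.0, 0.0), hyps))  # keeps (1,1) and (2,2), drops the far (2,1)
```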

(68) In step 308, for each combination of track (i) of the plurality of tracks and measurement hypothesis ((j, k)), a likelihood (η.sub.i(j, k)) is determined.

(69) In step 310, for each iteration t of a plurality of iterations T, an update hypothesis γ.sup.(t) is sampled, based on an association of each track i of the plurality of tracks to one of: a measurement hypothesis (j, k), a missed-detection event, or a track-death event. It is noted, however, that for each measurement j, only a single hypothesis (j, k) is associated.

(70) If, for example, measurement hypothesis (5,3) is associated to track 1, all other measurement hypotheses (5,1),(5,2),(5, . . . ) of measurement 5 cannot be associated in this update hypothesis t to any other track i.
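The exclusivity constraint described above (each measurement contributes at most one of its hypotheses per sampled update hypothesis, and each hypothesis is assigned to at most one track) can be expressed as a simple validity check. This is a sketch under the stated assumptions, not the patent's sampler itself:

```python
# Sketch: within one sampled update hypothesis, each measurement j may
# contribute at most one of its hypotheses (j, k), and each (j, k) may be
# assigned to at most one track. None marks missed detection / track death.

def is_valid(assoc):
    """assoc: dict track_id -> (j, k) or None."""
    pairs = [jk for jk in assoc.values() if jk is not None]
    measurements = [j for j, _ in pairs]
    # no (j, k) reused, and no measurement j used via two different k
    return len(set(pairs)) == len(pairs) and \
           len(set(measurements)) == len(measurements)

print(is_valid({1: (5, 3), 2: (2, 1), 3: None}))  # -> True
print(is_valid({1: (5, 3), 2: (5, 1), 3: None}))  # -> False: measurement 5 used twice
```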

(71) In step 312, the state of each track i of the plurality of tracks is determined based on its respective associations in the update hypotheses γ.sup.(t). In general, “object state” denotes the state of the object j (e.g. the true velocity of a vehicle), whereas the state of a track denotes the estimated state (e.g. the estimated velocity). There are multiple ways to describe the track state if an object is updated with multiple hypotheses. According to embodiments of the present invention, a Gaussian mixture is used. This means that each feasible track-measurement update in γ results in a Gaussian component with a corresponding weight (see above).
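The Gaussian-mixture representation of a track state can be sketched as below. Scalar means and variances are used for brevity, and the unnormalized weights are illustrative; the point is that each feasible track-measurement update contributes one weighted component, with weights normalized to sum to 1.

```python
# Sketch: a track state as a Gaussian mixture, one (weight, mean, variance)
# component per feasible track-measurement update, weights normalized.

def normalize(components):
    total = sum(w for w, _, _ in components)
    return [(w / total, m, v) for w, m, v in components]

# unnormalized weight, mean, variance per associated hypothesis
track_state = normalize([(0.5, 10.1, 0.2), (0.3, 10.4, 0.3)])
print(track_state)  # weights become ~0.625 and ~0.375
```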

(72) In step 314, an existence probability is extracted for each track i.
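One common way to extract an existence probability from sampled update hypotheses is the fraction of samples in which the track is not marked as dead. This estimator is an assumption made for illustration, not a quote from the patent:

```python
# Sketch (assumed estimator): existence probability of track i as the
# fraction of the T sampled update hypotheses in which the track is not
# associated with the track-death event.

def existence_probability(samples, track_id):
    alive = sum(1 for s in samples if s[track_id] != "dead")
    return alive / len(samples)

samples = [{1: (1, 1)}, {1: (1, 1)}, {1: "missed"}, {1: "dead"}]
print(existence_probability(samples, 1))  # -> 0.75
```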

(73) In step 316, the object state of each object (j) of the plurality of objects is predicted with respect to the next measurement time. Typically, if the next measurement is received 100 ms later, all states are predicted to this time. For example, if the track has an x-position of 10 m and a velocity of 5 m/s, the predicted x-position at the next measurement is 10.5 m (10 m+100 ms·5 m/s), whereas the velocity stays constant. In some embodiments in accordance with the present invention, other models may be employed, for example assuming that the velocity changes and the acceleration is constant.
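The constant-velocity prediction in the example above can be written out directly:

```python
# Sketch of the constant-velocity prediction: x-position 10 m, velocity
# 5 m/s, predicted 100 ms (0.1 s) ahead; velocity is unchanged.

def predict_cv(x, v, dt):
    """Constant-velocity model: position advances, velocity stays constant."""
    return x + v * dt, v

x_next, v_next = predict_cv(10.0, 5.0, 0.1)
print(x_next, v_next)  # -> 10.5 5.0
```

A constant-acceleration model, as mentioned above, would additionally propagate the velocity by a·dt.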

(74) Subsequently, it is determined whether another update is to be performed. If so, the method steps are repeated from and including step 306 of updating each track i of the plurality of tracks. Otherwise, the method ends at step 318.

(75) Optionally, the method 300 further comprises a step 302, in which one or more measurement hypotheses (j, k) of the multiple measurement hypotheses are generated for each object j of the plurality of objects. These generated one or more measurement hypotheses (j, k) are then received in step 304. Optionally, the one or more measurement hypotheses (j, k) of the multiple measurement hypotheses are provided with equal weight or with different weights based on a measurement of one or more sensors.