Method of estimating a navigation state constrained in terms of observability

10295348 · 2019-05-21

Abstract

There is proposed a method of estimating a navigation state with several variables of a mobile carrier according to the extended Kalman filter method, comprising the steps of: acquisition of measurements of at least one of the variables; extended Kalman filtering (400) producing a current estimated state and a covariance matrix delimiting, in the space of the navigation state, a region of errors, with the help of a previous estimated state, of an observation matrix, of a transition matrix and of the measurements acquired. The method is characterized in that it comprises a step (310, 330) of adjustment of the transition matrix and of the observation matrix before their use in the extended Kalman filtering, in such a way that the adjusted matrices satisfy an observability condition which depends on at least one of the variables of the state of the carrier, the observability condition being adjusted so as to prevent the Kalman filter from reducing the dimension of the region along at least one non-observable axis of the state space. The observability condition to be satisfied by the adjusted transition and observation matrices is the nullity of the kernel of an observability matrix associated therewith, and the adjustment comprises the steps of: calculation (301) of at least one primary basis of non-observable vectors with the help of the previous estimated state; for each matrix to be adjusted, calculation (306, 308) of at least one matrix deviation associated with the matrix with the help of the primary basis of vectors; shifting (330) of each matrix to be adjusted according to the matrix deviation associated therewith so as to satisfy the observability condition.

Claims

1. A method for estimating a navigation state of a mobile carrier using an extended Kalman filter, wherein the navigation state comprises variables, wherein the method comprises steps of: acquiring measurements of at least one of the variables, using the extended Kalman filter to produce a current estimation of the navigation state and a covariance matrix delimiting in a space of the navigation state a region of errors, from a previous estimation of the navigation state, an observation matrix, a transition matrix and acquired measurements, wherein the method further comprises adjusting the transition matrix and the observation matrix prior to their use by the extended Kalman filter, such that the adjusted matrices fulfill an observability condition, which depends on at least one of the variables of the state of the carrier, wherein the observability condition is adapted to prevent the Kalman filter from reducing the dimension of the region along at least one non-observable axis of the space, wherein the observability condition to be fulfilled by the adjusted transition and observation matrices is the nullity of the kernel of an observability matrix, which is associated with said matrices, and wherein adjusting the transition matrix and the observation matrix comprises: calculating at least one primary base of non-observable vectors from the previous estimation of the navigation state, for each matrix to be adjusted, calculating at least one matrix difference associated with the matrix from the primary base of vectors, for each matrix to be adjusted, offsetting the matrix to be adjusted according to the matrix difference the matrix is associated with, so as to fulfill the observability condition.

2. The method as claimed in claim 1, in which using the extended Kalman filter comprises: propagating the preceding estimated state into a predicted state using the adjusted transition matrix, linearising in the predicted state a non-linear model so as to produce the observation matrix prior to the adjustment step, adjusting the observation matrix produced by linearising.

3. The method as claimed in claim 1, also comprising orthogonalizing the primary base into a secondary base of vectors, wherein the matrix difference associated with the observation matrix is calculated from the secondary base of vectors.

4. The method as claimed in claim 1, in which the matrix difference associated with the observation matrix is a sum of elementary matrix differences, wherein each elementary matrix difference is calculated from a vector of the secondary base associated to the elementary matrix difference.

5. The method as claimed in claim 1 whereof the steps are repeated in successive cycles, and in which a current cycle comprises steps of: storing the secondary base of vectors and storing orthogonalization coefficients which are also produced from the primary base when orthogonalizing the primary base into the secondary base of vectors, transforming the primary base of vectors into a tertiary base of vectors using previous orthogonalization coefficients stored during a preceding cycle, wherein calculating the matrix difference associated with the transition matrix takes as input a secondary base stored during the preceding cycle and the tertiary base calculated during the current cycle.

6. The method as claimed in claim 5, wherein the matrix difference associated with the transition matrix is a sum of elementary matrix differences, wherein each elementary matrix difference is calculated from a vector of the secondary base stored during the preceding cycle and specific to the elementary matrix difference, and from a vector of the tertiary base calculated during the current cycle and specific to the elementary matrix difference.

7. An inertial unit comprising a plurality of sensors and at least one processor configured to estimate a navigation state of the inertial unit by executing the method as claimed in claim 1.

8. A computer program product comprising program code instructions for execution of the steps of the process as claimed in claim 1, when this program is executed by at least one processor.

Description

DESCRIPTION OF FIGURES

(1) Other characteristics, aims and advantages of the invention will emerge from the following description, which is purely illustrative and non-limiting and must be considered in conjunction with the attached diagrams, in which:

(2) FIGS. 1 and 2, already discussed, illustrate two evolutions of 3 sigma envelopes over time, during deployment of an extended Kalman filter.

(3) FIG. 3 schematically illustrates a carrier embedding an inertial unit according to an embodiment of the invention.

(4) FIG. 4 is the functional diagram of an estimator according to an embodiment.

(5) FIG. 5 is the functional diagram of a linearisation and adjustment block included in the estimator of FIG. 4.

(6) FIG. 6 is the functional diagram of a sub-block shown in FIG. 5.

(7) FIG. 7 illustrates geometric justification of the adjustment of matrices used by the Kalman filter.

(8) FIG. 8 illustrates a data flow corresponding to performing the steps illustrated in FIG. 3.

(9) In all figures similar elements bear identical reference numerals.

DETAILED DESCRIPTION OF THE INVENTION

(10) In reference to FIG. 3, an inertial unit IN is embedded on a mobile carrier device P, such as a land vehicle, a helicopter, an aircraft, etc.

(11) The inertial unit IN comprises several parts: inertial sensors CI, complementary sensors CC, and means E for performing estimation calculations. These parts can be physically separated from each other.

(12) The inertial sensors CI are typically accelerometers and/or gyrometers, respectively measuring the specific forces and the rotation speeds undergone by the carrier relative to an inertial reference frame. The specific force corresponds to the acceleration of non-gravitational origin. When these sensors are fixed relative to the carrier, the unit is said to be of the strap-down type.

(13) The complementary sensors CC vary according to the type of carrier, its dynamics, and the specific application. Inertial units typically use a GNSS receiver (GPS for example). For a land vehicle this can also be one or more odometers. For a boat it can be a log, giving the speed of the boat relative to the water or to the seabed. Cameras are another example of such sensors.

(14) The navigation estimation means E will be referred to hereinbelow as the estimator. The means E typically comprise one or more processors.

(15) The output data of the estimator E are the navigation state of the carrier and optionally internal states of the inertial unit.

(16) The estimator E comprises in particular an extended Kalman filter EKF configured to combine the data provided by the complementary sensors and the inertial sensors so as to provide an optimal estimation of the navigation information.

(17) The merging follows a continuous-time dynamic system serving as a model: the state is predicted at any time by means of a non-linear propagation function f, and observed by means of an observation function h, which can also be non-linear (see annex 5, which recalls a few principles on dynamic systems). This observation function depends on the type of sensor CC used:

(18) $$\begin{cases} \dot{X}(t) = f(X(t), u(t)) \\ z(t) = h(X(t)) \end{cases}$$

(19) wherein u(t) denotes an input command constituted by the specific force and the angular speed. These two magnitudes are measured by the sensors CI discretely over time with a period $T_1$.

(20) The overall state X(t) comprises inter alia the position coordinates, the speed, and the orientation of the carrier in the form of one or more rotations, each represented for example by a matrix or an attitude quaternion. The estimator E exploits a discrete version of these continuous-time equations, obtained according to the rules of the prior art:

(21) $$\begin{cases} X_k = g(X_{k-1}, u_{k-1}) \\ z_k = h(X_k) \end{cases}$$

(22) The EKF filter functions iteratively according to a prediction step taking into account the measurements of sensors CI and an updating step taking into account the measurements of sensors CC:

(23) The prediction step comprises predicting an overall estimated state and predicting an associated covariance.

Prediction of the estimated overall state: $\hat{X}_{k|k-1} = g(\hat{X}_{k-1|k-1}, u_{k-1})$

Prediction of the associated covariance: $\hat{P}_{k|k-1} = \Phi_{k-1\to k}\, \hat{P}_{k-1|k-1}\, \Phi_{k-1\to k}^T + Q_{k-1}$

where $\Phi_{k-1\to k}$ corresponds to the transition matrix, dependent on the propagation function f, and Q to the matrix of model noises.
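As an illustration only, the two prediction formulas above can be sketched as follows. This is a minimal NumPy sketch, not the estimator E of the invention; the function name `ekf_predict` and the small state dimension used below are assumptions for the example.

```python
import numpy as np

def ekf_predict(x_prev, P_prev, g, u_prev, Phi, Q):
    """Prediction step: propagate the estimated state through the model g
    and the covariance through the transition matrix Phi."""
    x_pred = g(x_prev, u_prev)           # X_{k|k-1} = g(X_{k-1|k-1}, u_{k-1})
    P_pred = Phi @ P_prev @ Phi.T + Q    # P_{k|k-1} = Phi P Phi^T + Q
    return x_pred, P_pred
```

With an identity transition matrix and an additive toy model `g(x, u) = x + u`, the covariance simply grows by the model noise Q at each cycle.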

(24) The updating step utilises the following variables:

Innovation: $\tilde{y}_k = z_k - h(\hat{X}_{k|k-1})$

Innovation covariance: $S_k = H_k\, \hat{P}_{k|k-1}\, H_k^T + R_k$

where $H_k$ corresponds to the observation matrix.

Kalman gain: $K_k = \hat{P}_{k|k-1}\, H_k^T\, S_k^{-1}$

(25) Updating is performed as follows:

Estimated overall state updating: $\hat{X}_{k|k} = \hat{X}_{k|k-1} + K_k\, \tilde{y}_k$

Associated covariance updating: $\hat{P}_{k|k} = (I - K_k H_k)\, \hat{P}_{k|k-1}$
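The updating step can likewise be sketched as a small NumPy function; again this is an illustrative sketch (the name `ekf_update` and the toy observation used below are assumptions), not the cadenced estimator described further on.

```python
import numpy as np

def ekf_update(x_pred, P_pred, z, h, H, R):
    """Update step: innovation, innovation covariance, Kalman gain, then
    state and covariance updates."""
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_upd = x_pred + K @ y
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_upd, P_upd
```

Note how the covariance only shrinks along the observed direction: the unobserved component of P is left untouched by the gain.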

(26) Theoretical transition matrix:

$$\Phi_{k-1\to k} = \exp\left(\int_{(k-1)T}^{kT} F(\tau)\, d\tau\right) \quad\text{with}\quad F(t) = \left.\frac{\partial f}{\partial X}\right|_{X(t),\, u(t)}$$

Observation matrix:

$$H_k = \left.\frac{\partial h}{\partial X}\right|_{\hat{X}_{k|k-1}}$$

(27) Although close to this algorithm, the estimator E differs from it. Updating of the overall state cannot be done fully additively, in particular in order to retain the properties of rotation matrices, and also to accommodate the rapid cadence of the measurements of the sensors CI. Prediction of the estimated overall state can be done at a faster cadence than the other operations, and the difference in cadence can depend on the dynamics of the carrier. Other differences are possible; for example, an approximation can be made on the transition matrix.

(28) FIG. 4 shows an estimator E according to an embodiment comprising different functional blocks corresponding to respective steps of its execution.

(29) Estimation is performed in successive iterations, each iteration being identified by an index k.

(30) Predicting the estimated overall state at a cadence 1 is done in step 100. A one-cycle memory and a time-synchronization step operate at the same cadence. This time-synchronization step takes into account data updated at a slower cadence 2.

(31) All other processing operates at cadence 2. Step 200 converts states at cadence 1 into states at cadence 2. At cadence 2, the states $\hat{X}_{k/k-1}$, $\hat{x}_{k/k-1}$, $\hat{X}_{k/k}$ and $\hat{x}_{k/k}$, this last state coming from the last update of the Kalman filter 400, represent in order the predicted overall state, the predicted linearised state, the updated overall state and the updated linearised state. The state $\hat{x}_{k/k-1}$ can in some cases remain equal to 0.

(32) The transition and observation matrices are calculated in the linearisation step 300. A sub-adjustment step, specific to the invention and detailed later, is added to this step 300.

(33) The innovation is calculated from the predicted overall state and from the observations of the sensors CC, as shown in FIG. 4.

(34) The other steps of the EKF are combined into step 400, which performs the matrix-based estimations and constitutes the Kalman filter proper, these estimations referring to linearised states and covariances. A particular feature is that step 400 contains no prediction of the linearised state; that prediction can optionally be incorporated into step 100 at cadence 1.

(35) Step 401 comprises predicting the covariance associated with the predicted overall state and exploits the transition matrix. Step 402 comprises calculation of the covariance of the innovation, Kalman gains, updating of the covariance of the overall state, and updating of the linearised state. This step exploits the observation matrix.

(36) The linearisation points in step 300 can be calculated as a function of $\hat{X}_{k/k-1}$, $\hat{x}_{k/k-1}$, $\hat{X}_{k/k}$, $\hat{x}_{k/k}$, this latter state originating from the last update of the Kalman filter 400. If $\hat{x}_{k/k-1} = \hat{x}_{k/k} = 0$, then $\hat{X}_{\Phi,k-1} = \hat{X}_{k-1/k-1}$ and $\hat{X}_{H,k} = \hat{X}_{k/k-1}$.

(37) In step 300, a first linearisation point $\hat{X}_{\Phi,k-1}$ is used to calculate the transition matrix $\Phi_{k-1\to k}(\hat{X}_{\Phi,k-1})$ at the Kalman period corresponding to cadence 2. This linearisation point corresponds to the overall state estimated after the last time synchronization.

(38) A second linearisation point $\hat{X}_{H,k}$ is used to calculate the observation matrix $H_k(\hat{X}_{H,k})$ at the Kalman cycle k in step 300. This linearisation point is obtained from the calculations of step 100. It corresponds to the last state predicted prior to the next time synchronization.

(39) To simplify notations, the transition matrix $\Phi_{k-1\to k}(\hat{X}_{\Phi,k-1})$ will be noted from now on as $\Phi_{k-1\to k}$, and the observation matrix $H_k(\hat{X}_{H,k})$ as $H_k$. $\Phi_{k-1\to k}$ and $H_k$ are therefore each expressed as a function of an overall state corresponding to their respective linearisation points $\hat{X}_{\Phi,k-1}$ and $\hat{X}_{H,k}$.

(40) As said previously, the steps are repeated recursively at the following iteration, at instant k+1, at cadence 2, the cadence of the Kalman filter.

(41) Adjustment of Transition and Observation Matrices

(42) The linearisation step performed in each Kalman cycle constitutes a weak point of the estimator E.

(43) Therefore, the estimation process comprises in step 300 an additional adjustment step, governed by an observability condition, which adjusts the matrices generated by the linearisation step before their use by the Kalman filter: on the one hand the transition matrix (which is used to propagate the covariance matrix), and on the other hand the observation matrix.

(44) The observability condition consists of a constraint imposed on the kernel of the observability matrix (the definition of this matrix is given in annex 6) via the adjustment of the transition and observation matrices, so as to include a predetermined model of the non-observable vector subspace in the kernel. This non-observability model consists of an equation, in the state space, of a vector subspace represented by a base, as a function of an overall state, this subspace being updated at each Kalman cycle. Including this subspace in the kernel of the observability matrix makes it non-observable and diminishes the risk of incoherence, provided the model is pertinent.

(45) The overall state from which the subspace made non-observable is calculated corresponds to the linearisation point of the observation function (see annex 6). A non-observable vector indeed has a zero image under the observation matrix, and such a vector corresponds to a linearised state close to the overall state at the linearisation point of the observation function.

(46) During the adjustment step, a corresponding matrix difference is calculated for each of the two transition and observation matrices.

(47) By convention, starred notations will be used hereinbelow to designate the transition and observation matrices output by the adjustment step.

(48) The data flow for adjustment calculation is shown schematically in FIG. 8, revealing the fact that adjustment of the observation matrix depends on the linearisation point of the observation function in the current Kalman cycle, and that adjustment of the transition matrix depends on the linearisation point of the observation function in the current cycle and in the preceding cycle.

(49) It is therefore necessary to exploit a base $B_k$ generating the non-observable subspace at cycle k, and a base $B_{k-1}$ generating the non-observable subspace at cycle k−1.

(50) On a Kalman cycle it is possible to store the linearisation point and generate the non-observable base twice per cycle, as shown in figure X. It is equally possible to store the non-observable base on a Kalman cycle so as to generate it only once per cycle. This second option corresponds to the principle diagram of FIG. 6, which details the adjustment functions described in FIG. 5.

(51) FIG. 6 details the calculations resulting in obtaining the two matrix differences relative to the transition and observation matrices at the Kalman cycle of instant k.

(52) The transition matrix from Kalman cycle k−1 to cycle k and the observation matrix at cycle k are adjusted according to a non-observability model using the overall state vectors $\hat{X}_{H,k-1}$ and $\hat{X}_{H,k}$, the linearisation points of the observation function at cycles k−1 and k. In a step 301, this model estimates the non-observable space of linearised states in the form of a primary non-observable base $B_k^1$. Such a base can be calculated at each Kalman cycle using an analytical function obtained during the design of the filter, following the method presented in annex 6 for the continuous-time dynamic system, then applied to the discrete-time dynamic system used by the navigation unit.

(53) Two examples of situations at risk, which can be treated according to the process of the invention are described in annex 1. A model of non-observable base specific to the static alignment is developed in annex 2.

(54) Simplifying assumptions are necessarily made when working out this model during the design phase, as no equation containing a limited number of variables can represent the real world. It is possible that the true error cloud shrinks slowly in all directions of the state space but that, owing to non-linearities and estimation noise, the EKF estimates a faster decrease along some axes, which creates incoherence. The slow decrease along such an axis can be modelled as a zero decrease over time, and the axis considered non-observable so as to impose a restriction on it. This simplification slightly reduces precision but guarantees the coherence of the estimations. And since it is no longer necessary to inject model noise to prevent the estimated 3-sigma envelope from shrinking along the non-observable axes, a net precision gain results.

(55) The static alignment described in annex 1 is an example of a situation in which the model does not consider minimal imperceptible movements, which have no effect in the short term but can be observed in the long term, causing a slow decrease of the true 3-sigma envelope along some axes. In the interest of simplifying the model, it is proposed to model the non-observable base by considering these small movements to be zero, which is an example of such a simplification. In the second example of annex 1, the slow movement can likewise be overlooked when modelling the non-observable base.

(56) If the base $B_k^1$ is orthogonal, there is a simple solution, described in annex 3, for adjusting the matrices. But this condition is generally not satisfied. The proposed solution consists of orthogonalising the base, then making the adjustment. The processing described below is justified by the final paragraph of annex 3, which deals with the non-orthogonal case.

(57) In an orthogonalisation step 302, the primary base $B_k^1$ is orthogonalised so as to obtain a secondary base $B_k^2$ according to a variant of the Gram-Schmidt method, in which the vectors are not made unitary, since normalisation is included implicitly in the adjustment formulae of the matrices presented hereinbelow. This operation generates a matrix

(58) $T_k = [a_{i,j,k}]_{1 \le i \le q,\; 1 \le j \le q}$ of coefficients used for the orthogonalisation.

(59) Let q be the number of vectors of the primary non-observable base.

(60) $B_k^1 = [\hat{v}_{1,1,k}\;\; \hat{v}_{1,2,k}\; \ldots\; \hat{v}_{1,q,k}]$, and

(61) $B_k^2 = [\hat{v}_{2,1,k}\;\; \hat{v}_{2,2,k}\; \ldots\; \hat{v}_{2,q,k}]$ are put forward.

(62) In the cycle k, the orthogonalisation step can make the following calculation:

(63) $\hat{v}_{2,1,k} = \hat{v}_{1,1,k}$, and $T_k$ is initialised to $[0]_{1 \le i \le q,\; 1 \le j \le q}$. Then, for all $i$ such that $2 \le i \le q$:

$$\hat{v}_{2,i,k} = \hat{v}_{1,i,k} - \sum_{j=1}^{i-1} a_{i,j,k}\, \hat{v}_{2,j,k}$$

with

$$a_{i,j,k} = \begin{cases} \dfrac{\hat{v}_{2,j,k}^T\, \hat{v}_{1,i,k}}{\hat{v}_{2,j,k}^T\, \hat{v}_{2,j,k}} & \text{if } 2 \le i \le q \text{ and } 1 \le j < i \\[1ex] 0 & \text{otherwise} \end{cases}$$

the coefficients being grouped in $T_k$.
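The orthogonalisation of step 302 can be sketched as follows, assuming the bases are stored as NumPy arrays whose columns are the vectors; the function name and data layout are illustrative choices, not part of the patent.

```python
import numpy as np

def orthogonalise_base(B1):
    """Variant of Gram-Schmidt without normalisation: returns the secondary
    base B2 (orthogonal, non-unitary columns) and the coefficient matrix T_k
    that is stored for reconstructing a tertiary base at the next cycle."""
    q = B1.shape[1]
    B2 = np.zeros_like(B1, dtype=float)
    T = np.zeros((q, q))
    B2[:, 0] = B1[:, 0]
    for i in range(1, q):
        v = B1[:, i].astype(float).copy()
        for j in range(i):
            a = (B2[:, j] @ B1[:, i]) / (B2[:, j] @ B2[:, j])
            T[i, j] = a                 # coefficient a_{i,j,k}
            v -= a * B2[:, j]
        B2[:, i] = v
    return B2, T
```

Because the vectors are not normalised, the denominator $\hat{v}_{2,j,k}^T \hat{v}_{2,j,k}$ stays explicit, exactly as in the coefficient formula above.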

(64) This base B.sub.k.sup.2 and the orthogonalisation coefficients T.sub.k are stored in step 303 during a Kalman cycle.

(65) In a step 308, the base $B_k^2$ is used to calculate a matrix difference $\Delta H_k$ relative to the observation matrix $H_k$, without any matrix inversion, which reduces the calculation load. This step can be omitted if the observation matrix is independent of the estimated state; in that case the matrix difference is zero.

(66) It is possible to decompose the matrix difference $\Delta H_k$ relative to the observation matrix $H_k$ into q elementary matrix differences:

(67) $$\Delta H_k = \sum_{i=1}^{q} \Delta H_{i,k}$$

(68) This decomposition parallelises the calculation of each elementary matrix difference and shortens the calculation time of step 308.

(69) According to annex 3, each elementary matrix difference can be calculated as follows: For i between 1 and q,

(70) $$\Delta H_{i,k} = -\frac{1}{\hat{v}_{2,i,k}^T\, \hat{v}_{2,i,k}}\, (H_k\, \hat{v}_{2,i,k})\, \hat{v}_{2,i,k}^T$$
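Summing the q elementary differences can be sketched as below; this is an illustrative NumPy sketch (function name and toy dimensions assumed). For an orthogonal base, the adjusted matrix maps every base vector to zero, which is the point of step 308.

```python
import numpy as np

def observation_matrix_difference(H, B2):
    """Sum of the q elementary differences Delta-H_{i,k}: each rank-one term
    removes from H its action on one orthogonal non-observable vector, so
    that the adjusted matrix (H + Delta-H) maps the whole base B2 to zero."""
    dH = np.zeros_like(H, dtype=float)
    for i in range(B2.shape[1]):
        v = B2[:, i]
        dH -= np.outer(H @ v, v) / (v @ v)   # -(H v) v^T / (v^T v)
    return dH
```

Each term is an outer product, so no matrix inversion is needed, and the q terms can be computed independently (hence the parallelisation mentioned above).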

(71) Also, in the cycle k, the base $B_k^1$ and the coefficients $T_{k-1}$ (stored during the preceding Kalman cycle k−1) are used to reconstruct a non-observable base $B_{k/k-1}^3$ in a reconstruction step referenced 304. This base, reconstructed at instant k, depends on the primary base at instant k but also on the orthogonalisation coefficients at instant k−1; its index is therefore noted k/k−1, corresponding to instant k given the estimations at instant k−1. The bases $B_{k-1}^2$ and $B_{k/k-1}^3$ are then used to calculate the matrix difference $\Delta\Phi_{k-1\to k}$ without matrix inversion.

(72) $B_{k/k-1}^3 = [\hat{v}_{3,1,k/k-1}\;\; \hat{v}_{3,2,k/k-1}\; \ldots\; \hat{v}_{3,q,k/k-1}]$ is put forward.

(73) In the cycle k, reconstruction step 304 can make the following calculation:

(74) The coefficients $a_{i,j,k-1}$ are extracted from $T_{k-1} = [a_{i,j,k-1}]_{1 \le i \le q,\; 1 \le j \le q}$, and $\hat{v}_{3,1,k/k-1} = \hat{v}_{1,1,k}$. Then, for all $i$ such that $2 \le i \le q$:

$$\hat{v}_{3,i,k/k-1} = \hat{v}_{1,i,k} - \sum_{j=1}^{i-1} a_{i,j,k-1}\, \hat{v}_{3,j,k/k-1}$$
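Step 304 can be sketched as a mirror of the orthogonalisation, but reusing the coefficients stored at cycle k−1 instead of recomputing them; names and array layout are assumptions for the example.

```python
import numpy as np

def reconstruct_tertiary_base(B1_k, T_prev):
    """Rebuild the tertiary base B3_{k/k-1} from the current primary base
    B1_k and the orthogonalisation coefficients T_prev stored at cycle k-1."""
    q = B1_k.shape[1]
    B3 = np.zeros_like(B1_k, dtype=float)
    B3[:, 0] = B1_k[:, 0]
    for i in range(1, q):
        v = B1_k[:, i].astype(float).copy()
        for j in range(i):
            v -= T_prev[i, j] * B3[:, j]   # a_{i,j,k-1} from the stored T
        B3[:, i] = v
    return B3
```

When the primary base has not changed between the two cycles, the reconstruction reproduces the stored secondary base, which is the consistency property the scheme relies on.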

(75) In a step referenced 306, a matrix difference relative to the transition matrix is calculated.

(76) It is possible to decompose the matrix difference $\Delta\Phi_{k-1\to k}$ relative to the transition matrix $\Phi_{k-1\to k}$ into q elementary matrix differences:

(77) $$\Delta\Phi_{k-1\to k} = \sum_{i=1}^{q} \Delta\Phi_{i,k-1\to k}$$

(78) According to annex 3, each elementary matrix difference can be calculated as follows:

(79) For i between 1 and q,

(80) $$\Delta\Phi_{i,k-1\to k} = \frac{1}{\hat{v}_{2,i,k-1}^T\, \hat{v}_{2,i,k-1}}\, \left(\hat{v}_{3,i,k/k-1} - \Phi_{k-1\to k}\, \hat{v}_{2,i,k-1}\right) \hat{v}_{2,i,k-1}^T$$
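The sum of these elementary terms can be sketched as follows (illustrative names and toy dimensions); for an orthogonal base $B_{k-1}^2$, the adjusted matrix sends each stored vector onto the corresponding tertiary vector, which is what keeps the non-observable subspace in the kernel of the observability matrix.

```python
import numpy as np

def transition_matrix_difference(Phi, B2_prev, B3):
    """Sum of the q elementary differences Delta-Phi_{i,k-1->k}: the adjusted
    matrix (Phi + Delta-Phi) maps each vector of B2_{k-1} onto the matching
    vector of the tertiary base B3_{k/k-1}."""
    dPhi = np.zeros_like(Phi, dtype=float)
    for i in range(B2_prev.shape[1]):
        v2 = B2_prev[:, i]
        # (v3 - Phi v2) v2^T / (v2^T v2): rank-one correction, no inversion
        dPhi += np.outer(B3[:, i] - Phi @ v2, v2) / (v2 @ v2)
    return dPhi
```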

(81) This decomposition parallelises the calculation of each elementary matrix difference and can shorten the calculation time of step 306.

(82) To reduce memory usage and calculations, a single sub-matrix of the transition matrix can be considered: the one corresponding to the non-zero values of the non-observable vectors.

(83) The two resulting matrix differences satisfy an important observability condition, detailed in annex 7.

(84) Also, annex 3 explains the origin of the calculation method, and annex 8 gives a mathematical demonstration of it.

(85) Calculation step 306 also produces a metric representative of an amplitude of adjustment caused by the matrix difference relative to the transition matrix.

(86) The calculation step 308 also produces a metric representative of an amplitude of adjustment caused by the matrix difference relative to the observation matrix.

(87) The same type of metric calculation can be performed for the transition matrix and the observation matrix. For example, for the transition matrix, the squared norm of the matrix difference and that of the transition matrix are calculated separately. The metric corresponds to the ratio of these two magnitudes, or to the square root of this ratio.

(88) The norm optionally uses weighting coefficients to standardise the magnitudes occurring in the state vector. For example, a position error can be of the order of a few metres, whereas an attitude error can be of the order of a few milliradians. A matrix norm can correspond to the norm of the column vector formed by stacking all the terms of the matrix (Frobenius norm).
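The metric of paragraphs (85)–(88) can be sketched as a weighted Frobenius-norm ratio; the function name and the per-row weighting scheme below are assumptions for the example (annex 4 gives the patent's own weighting treatment).

```python
import numpy as np

def adjustment_metric(delta_M, M, weights=None):
    """Ratio of squared Frobenius norms of the matrix difference and of the
    matrix itself; optional per-row weights standardise heterogeneous
    magnitudes (metres, m/s, milliradians...) before taking the norm."""
    if weights is not None:
        W = np.diag(weights)
        delta_M = W @ delta_M
        M = W @ M
    return np.sum(delta_M ** 2) / np.sum(M ** 2)
```

A zero difference yields a zero metric, consistent with the remark that the metric need not be computed when a matrix is independent of the estimated state.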

(89) Annex 4 explains how to exploit weighting coefficients in the calculation of metrics.

(90) The calculation of metrics optionally employs weighting functions $g_\Phi(\cdot)$ and $g_H(\cdot)$. If the observation matrix $H_k$ is independent of the estimated state, the matrix difference $\Delta H_k$ is zero, and so is its metric; it is not useful to calculate it. Likewise, if the transition matrix is independent of the estimated state, calculation of the associated metric can be omitted.

(91) Referring again to FIG. 5, in an offsetting step 330, the transition matrix and its associated matrix difference are summed so as to obtain an adjusted transition matrix; likewise, the observation matrix and its associated matrix difference are summed so as to obtain an adjusted observation matrix.

(92) The sequence 310 illustrated in FIG. 6, producing a pair of matrix differences and a pair of metrics, can be conducted n times in parallel.

(93) It is assumed that, for n predetermined situations in the continuous domain, a non-observable base model is known. This model comes from the resolution of a system of equations representative of a respective observation type and optionally of a respective trajectory of the state vector; a method for this is given in annex 6.

(94) In a decision step 320, one of the n pairs of matrix differences is selected (for example the pair obtained by the adjustment corresponding to the situation i), and the offsetting step is carried out only with this pair of matrices to produce the adjusted matrices.

(95) The decision step 320 runs a probability test on each pair of matrix differences by inspecting the value of the corresponding metrics. This test decides which situation is recognised.

(96) The selected matrix difference is the candidate associated with the metric representative of the smallest adjustment amplitude, so that the adjustment is made with the most probable matrix differences. The probability criterion can refer to the adjustment of the transition matrix and/or of the observation matrix.

(97) Also, the offsetting is carried out only if the metric of the selected matrix difference is less than a predetermined threshold: it can indeed be preferable not to offset the transition and observation matrices if the offsetting is not considered reliable enough and would add noise to the system rather than correct it.
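The selection and threshold logic of steps 320 and 330 can be sketched as below; the candidate representation (a list of dictionaries with a "metric" field) is an assumption for the example, not the patent's data structure.

```python
def select_adjustment(candidates, threshold):
    """Decision step: keep the candidate pair of matrix differences with the
    smallest metric (the most probable situation); return None when even
    that metric exceeds the threshold, in which case no offsetting is done."""
    best = min(candidates, key=lambda c: c["metric"])
    return best if best["metric"] < threshold else None
```

Returning `None` models the case where no offsetting is performed because even the best candidate is not considered reliable enough.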

(98) The foregoing estimation process can be performed by a computer program executed by the calculation means of the inertial unit IN, i.e. a computer program product comprising program code instructions for execution of the steps of the described process when this program is executed by the at least one processor.

(99) Annex 1: Examples of Situations at Risk of an Inertial Navigation Unit Exploiting an EKF Filter

(100) A first example relates to an inertial navigation unit in a static alignment phase on the ground. In this phase, the local vertical is estimated by measuring the gravity vector by means of the accelerometers, and the orientation of the unit is estimated by measuring the angular speed vector of the terrestrial rotation by means of the gyrometers. The observation used in this phase is typically a ZUPT (Zero velocity Update), an observation from a virtual sensor indicating that the speed of the carrier relative to the earth is zero. The measurement error is modelled by white noise. In practice, however, the inertial sensors record small non-random movements due, for example, to wind or to thermal dilation effects. This corresponds to a deterministic measurement error not taken into account in the non-inertial sensor model producing the ZUPT observation.

(101) There is then an incoherence between the true dynamic system and the modelled dynamic system. Because the duration of the static alignment is short, this incoherence is negligible. By contrast, many applications need an alignment of several hours, or even several days, weeks or months. In such cases the incoherence can become preponderant.

(102) A second example relates to an inertial navigation unit mounted on a sailing boat, exploiting a speed observation projected in the boat frame. This non-inertial measurement is generally taken by means of a log sensor, and it can also be a measurement relative to the water. The fact that it is done by projection in the boat frame makes the measurement dependent on the attitude of the boat. Assuming that this observation alone is used, an incoherence can appear between the estimations of the EKF filter and the real world, and this incoherence can produce substantial errors after several hours if the boat is sailing at a constant heading.

(103) Annex 2: Model of Non-Observable Axes in Static Alignment

(104) This annex presents the modelling of the non-observable base resolving the incoherence of the first example of annex 1 by means of the proposed method.

(105) Hypotheses: the Kalman filter exploits the PHI error model and the mechanisation functions in free azimuth.

(106) The following notations are used:

(107) Notations:
[t]: terrestrial frame (x = axis of rotation)
[m]: measurement frame
[v]: platform frame, in free azimuth
T_vt: change-of-basis matrix from [t] to [v]
T_vm: change-of-basis matrix from [m] to [v]
α: mechanisation angle
Ω_t: terrestrial rotation speed
g: gravity modulus
g_e: gravity modulus at the equator
(·)_0, K_1, K_2: constants linked to the WGS84 geodesic constants
R_x: radius of curvature of the reference ellipsoid in the direction x of [v]
R_y: radius of curvature of the reference ellipsoid in the direction y of [v]
R_N: radius of curvature of the reference ellipsoid in the north direction
R_E: radius of curvature of the reference ellipsoid in the east direction
τ: geodesic torsion on the reference ellipsoid
L: latitude
z: altitude
δx, δy, δz: position errors along the axes x, y, z of [v]
δV_x, δV_y, δV_z: speed errors along the axes x, y, z of [v]
φ_x, φ_y, φ_z: rotation errors along the axes x, y, z of [v]
δb_xm, δb_ym, δb_zm: accelerometer bias errors along the axes x, y, z of [m]
δK_xm, δK_ym, δK_zm: accelerometer scale factor errors along the axes x, y, z of [m]
δd_xm, δd_ym, δd_zm: gyro drift errors along the axes x, y, z of [m]

(108) The components of states occurring in the equations are presented in an arbitrary order, which is the following:

(109) δN = [ [δx δy δz]  [φ_x φ_y φ_z]  [δb_xm δb_ym δb_zm]  [δK_xm δK_ym δK_zm]  [δd_xm δd_ym δd_zm]  δα ]ᵀ, the other components being zero, with:
[δx δy δz]: position errors in the platform frame [v]
[φ_x φ_y φ_z]: attitude errors in the platform frame [v]
[δb_xm δb_ym δb_zm]: accelerometer bias errors in the measurement frame [m]
[δK_xm δK_ym δK_zm]: accelerometer scale factor errors in the measurement frame [m]
[δd_xm δd_ym δd_zm]: gyro drift errors in the measurement frame [m]
δα: mechanisation angle error in free azimuth

(110) The acquired measurements are speeds. The carrier is assumed to be perfectly immobile in translation and in rotation. The sensor defects are assumed to be constant over time.

(111) Accelerometers:
d/dt [δb_xm δb_ym δb_zm]ᵀ = [0 0 0]ᵀ
d/dt [δK_xm δK_ym δK_zm]ᵀ = [0 0 0]ᵀ
Gyrometers:
d/dt [δd_xm δd_ym δd_zm]ᵀ = [0 0 0]ᵀ

(112) These equations form a stationary system of the form:

(113)
d/dt [δx δy δz]ᵀ = [δV_x δV_y δV_z]ᵀ

d/dt [δV_x δV_y δV_z]ᵀ = T_vm [-δb_xm -δb_ym -δb_zm]ᵀ + T_vm diag(-δK_xm, -δK_ym, -δK_zm) T_mv [0 0 g]ᵀ - [0 -g 0 ; g 0 0 ; 0 0 0] [φ_x φ_y φ_z]ᵀ + [0 -m_21 -m_31 ; m_21 0 -m_32 ; m_31 m_32 0] [δV_x δV_y δV_z]ᵀ + [0 0 0 ; 0 0 0 ; p_31 p_32 p_33] [δx δy δz]ᵀ + [q_1 q_2 q_3]ᵀ

d/dt [φ_x φ_y φ_z]ᵀ = T_vm [δd_xm δd_ym δd_zm]ᵀ + A([c_1 c_2 c_3]) [φ_x φ_y φ_z]ᵀ + [τ -1/R_x 0]ᵀ δV_x + [1/R_y -τ 0]ᵀ δV_y + [a_1 a_2 a_3]ᵀ δx + [b_1 b_2 b_3]ᵀ δy + [-T_vt21 T_vt11 0]ᵀ δα

where A([c_1 c_2 c_3]) is the skew-symmetric cross-product matrix associated with [c_1 c_2 c_3]ᵀ.

(114) Method of searching for the non-observable vectors:

(115) Searching for

(116) [δV_x δV_y δV_z]ᵀ, [δb_xm δb_ym δb_zm]ᵀ, [δK_xm δK_ym δK_zm]ᵀ, [δd_xm δd_ym δd_zm]ᵀ
such that:

(117) ∀ n ≥ 0,  dⁿ/dtⁿ [δV_x δV_y δV_z]ᵀ = [0 0 0]ᵀ

(118) The non-observable axes are the following:

(119) With the components ordered as ( [δx δy δz], [δV_x δV_y δV_z], [φ_x φ_y φ_z], [δb_xm δb_ym δb_zm], [δK_xm δK_ym δK_zm], [δd_xm δd_ym δd_zm], δα ):

Axis 1: v_1 = ( [0 0 0], [0 0 0], [1 0 0], T_mv [0 -g 0]ᵀ, [0 0 0], T_mv [0 -c_3 c_2]ᵀ, 0 )
Axis 2: v_2 = ( [0 0 0], [0 0 0], [0 1 0], T_mv [g 0 0]ᵀ, [0 0 0], T_mv [c_3 0 -c_1]ᵀ, 0 )
Axis 3: v_3 = ( [0 0 0], [0 0 0], [0 0 1], [0 0 0], [0 0 0], T_mv [-c_2 c_1 0]ᵀ, 0 )
Axis 4: v_4 = ( [0 0 0], [0 0 0], [0 0 0], [0 0 -g·T_mv33], [0 0 1], [0 0 0], 0 )
Axis 5: v_5 = ( [1 0 0], [0 0 0], [0 0 0], T_mv [0 0 p_31]ᵀ, [0 0 0], T_mv [-a_1 -a_2 -a_3]ᵀ, 0 )
Axis 6: v_6 = ( [0 1 0], [0 0 0], [0 0 0], T_mv [0 0 p_32]ᵀ, [0 0 0], T_mv [-b_1 -b_2 -b_3]ᵀ, 0 )
Axis 7: v_7 = ( [0 0 1], [0 0 0], [0 0 0], T_mv [0 0 p_33]ᵀ, [0 0 0], [0 0 0], 0 )
Axis 8: v_8 = ( [0 0 0], [0 0 0], [0 0 0], [-g·T_mv13 0 0], [1 0 0], [0 0 0], 0 )
Axis 9: v_9 = ( [0 0 0], [0 0 0], [0 0 0], [0 -g·T_mv23 0], [0 1 0], [0 0 0], 0 )
Axis 10: v_10 = ( [0 0 0], [0 0 0], [0 0 0], [0 0 0], [0 0 0], T_mv [T_vt21 -T_vt11 0]ᵀ, 1 )

(120) Annex 3: Restriction of the Filter

(121) Matrix differences are applied to the transition and observation matrices at each Kalman cycle, as per a model of q non-observable axes v̂_1,k(X̂_H,k), v̂_2,k(X̂_H,k), … v̂_q,k(X̂_H,k) obtained at each cycle k as functions of the overall state X̂_H,k corresponding to the linearisation point of the observation matrix in the Kalman period. Φ_k→k+1(X̂_Φ,k) is replaced by Φ*_k→k+1(X̂_Φ,k, X̂_H,k, X̂_H,k+1) = Φ_k→k+1(X̂_Φ,k) + δΦ_k→k+1(X̂_Φ,k, X̂_H,k, X̂_H,k+1), close to Φ_k→k+1(X̂_Φ,k), and such that for i between 1 and q:

Equation 1: v̂_i,k+1(X̂_H,k+1) = Φ*_k→k+1(X̂_Φ,k, X̂_H,k, X̂_H,k+1) v̂_i,k(X̂_H,k), with ‖δΦ_k→k+1(X̂_Φ,k, X̂_H,k, X̂_H,k+1)‖ minimum vis-à-vis all possible solutions.

(122) H_k(X̂_H,k) is replaced by H*_k(X̂_H,k) = H_k(X̂_H,k) + δH_k(X̂_H,k), close to H_k(X̂_H,k), and such that for i between 1 and q:

(123) Equation 2: H*_k(X̂_H,k) v̂_i,k(X̂_H,k) = 0, with ‖δH_k(X̂_H,k)‖ minimum vis-à-vis all possible solutions.

(124) The matrix norm in question is the Frobenius norm.

(125) The matrices Φ*_k→k+1(X̂_Φ,k, X̂_H,k, X̂_H,k+1) and H*_k(X̂_H,k) are used in the propagation step of the covariance matrix and in the calculation of the Kalman gain.

(126) The following notations will be adopted from now on: Φ*_k→k+1(X̂_Φ,k, X̂_H,k, X̂_H,k+1) is noted Φ*_k→k+1; δΦ_k→k+1(X̂_Φ,k, X̂_H,k, X̂_H,k+1) is noted δΦ_k→k+1; H*_k(X̂_H,k) is noted H*_k; δH_k(X̂_H,k) is noted δH_k.

(127) Resolution of equations 1 and 2 in the case where the modelled non-observable vectors v̂_1,k, v̂_2,k, … v̂_q,k are orthogonal to each other at any time k:

(128) The total matrix difference of the transition matrix corresponds to the sum of q independent matrix differences. The same applies for the matrix difference of the observation matrix.

(129) δΦ_k-1→k = Σ_{i=1}^{q} δΦ_i,k-1→k  and  δH_k = Σ_{i=1}^{q} δH_i,k
And, for i between 1 and q:

(130) δΦ_i,k-1→k = (1 / (v̂ᵀ_i,k-1 v̂_i,k-1)) (v̂_i,k - Φ_k-1→k v̂_i,k-1) v̂ᵀ_i,k-1
δH_i,k = -(1 / (v̂ᵀ_i,k v̂_i,k)) (H_k v̂_i,k) v̂ᵀ_i,k
The demonstration of these relationships is given in annex 8 according to a geometric method.
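These rank-one differences can be checked numerically. The sketch below uses toy dimensions and random matrices (illustrative values, not taken from the unit): it builds δΦ and δH for a single non-observable vector and verifies that the adjusted matrices map the vector at cycle k-1 onto its model at cycle k and send it to zero in observation.

```python
import numpy as np

def phi_adjustment(phi, v_prev, v_curr):
    """Minimum-norm rank-one update such that (phi + d_phi) @ v_prev == v_curr."""
    return np.outer(v_curr - phi @ v_prev, v_prev) / (v_prev @ v_prev)

def h_adjustment(h, v):
    """Minimum-norm rank-one update such that (h + d_h) @ v == 0."""
    return -np.outer(h @ v, v) / (v @ v)

rng = np.random.default_rng(0)
phi = rng.standard_normal((4, 4))       # toy transition matrix
h = rng.standard_normal((2, 4))         # toy observation matrix
v_prev = np.array([1.0, 0.0, 2.0, 0.0])  # modelled non-observable vector at k-1
v_curr = np.array([1.0, 1.0, 2.0, 0.0])  # its model at cycle k

d_phi = phi_adjustment(phi, v_prev, v_curr)
d_h = h_adjustment(h, v_curr)

# Equation 1 and Equation 2 are satisfied by the adjusted matrices.
assert np.allclose((phi + d_phi) @ v_prev, v_curr)
assert np.allclose((h + d_h) @ v_curr, 0.0)
```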
Resolution of equations 1 and 2 for a non-orthogonal base, and for any q:

(131) This base defines a model of the non-observable vectorial space: any linear combination of the base vectors is non-observable.

(132) The image, by the adjusted observation matrix, of any linear combination of these vectors must therefore be zero. It is therefore possible to orthogonalise the base, then apply the preceding method to calculate the matrix difference of the observation matrix.

(133) On the other hand, the image by the adjusted transition matrix of a linear combination of non-observable vectors corresponds to this same linear combination applied to the images, by the same transition matrix, of each of the non-observable starting vectors. It is therefore possible to determine the linear combination orthogonalising the base of non-observable vectors at cycle k-1, and to use this same combination to transform the model of the non-observable base at cycle k. The preceding method then applies for calculating the matrix difference of the transition matrix.
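As a sketch of this orthogonalisation route (toy data; a QR factorisation stands in here for any orthogonalisation of the base), the adjustment below maps a non-orthogonal base modelled at cycle k-1 onto its model at cycle k:

```python
import numpy as np

def adjust_transition(phi, w_prev, w_curr):
    """Adjustment of phi mapping each column of w_prev to the matching column
    of w_curr, for a possibly non-orthogonal base: the base at k-1 is
    orthogonalised (QR), and the same linear combination is applied to the
    base modelled at cycle k before using the rank-one formulas."""
    q, r = np.linalg.qr(w_prev)          # w_prev = q @ r, q has orthonormal columns
    targets = w_curr @ np.linalg.inv(r)  # same combination applied at cycle k
    d_phi = np.zeros_like(phi)
    for i in range(q.shape[1]):
        u, t = q[:, i], targets[:, i]
        d_phi += np.outer(t - phi @ u, u) / (u @ u)
    return d_phi

rng = np.random.default_rng(1)
phi = rng.standard_normal((5, 5))
w_prev = rng.standard_normal((5, 3))     # non-orthogonal non-observable base at k-1
w_curr = rng.standard_normal((5, 3))     # its model at cycle k

d_phi = adjust_transition(phi, w_prev, w_curr)
# Each base vector is propagated exactly onto its model at cycle k.
assert np.allclose((phi + d_phi) @ w_prev, w_curr)
```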

(134) Annex 4: Weighted Metrics

(135) Scalars m_Φ and m_H, called metrics, accompany the calculated adjustment matrices and give an indication of the relative amplitude of the adjustment, the aim being to modify the transition and observation matrices as little as possible.

(136) Different methods are possible to compare δΦ_k-1→k to Φ_k-1→k, and δH_k to H_k. The aim is to have a function increasing with

(137) ‖δΦ_k-1→k‖ / ‖Φ_k-1→k‖  and  ‖δH_k‖ / ‖H_k‖
or with the elementary constituents

(138) ‖δΦ_i,k-1→k‖ / ‖Φ_k-1→k‖  and  ‖δH_i,k‖ / ‖H_k‖,
these norms being computable in the same way as that of a vector, by grouping all the coefficients of the matrix in a single column.
Examples are given below:

(139) m_Φ = ‖δΦ_k-1→k‖ / ‖Φ_k-1→k‖  or  m_Φ = (‖δΦ_k-1→k‖ / ‖Φ_k-1→k‖)²  or  m_Φ = Σ_{i=1}^{N} ‖δΦ_i,k-1→k‖ / ‖Φ_k-1→k‖
m_H = ‖δH_k‖ / ‖H_k‖  or  m_H = (‖δH_k‖ / ‖H_k‖)²  or  m_H = Σ_{i=1}^{N} ‖δH_i,k‖ / ‖H_k‖
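A minimal sketch of the first pair of metrics, using Frobenius norms on toy matrices:

```python
import numpy as np

def metrics(phi, d_phi, h, d_h):
    """Relative-amplitude metrics of the adjustments (Frobenius norms)."""
    m_phi = np.linalg.norm(d_phi) / np.linalg.norm(phi)
    m_h = np.linalg.norm(d_h) / np.linalg.norm(h)
    return m_phi, m_h

# Toy matrices chosen so the expected ratios are easy to read off.
phi = np.eye(3)
d_phi = 0.01 * np.eye(3)
h = np.array([[1.0, 0.0, 0.0]])
d_h = np.array([[0.0, 0.02, 0.0]])

m_phi, m_h = metrics(phi, d_phi, h, d_h)
assert np.isclose(m_phi, 0.01)   # ||0.01 I|| / ||I|| = 0.01
assert np.isclose(m_h, 0.02)     # ||d_h|| / ||h|| = 0.02
```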

(140) Variants are possible, using weighting functions g_Φ and g_H to balance the weight of the different components in the calculation of the matrix norms. The same examples as above can be used again with these functions.

(141) For example:

(142) m_Φ = ‖g_Φ(δΦ_k-1→k)‖ / ‖g_Φ(Φ_k-1→k)‖  and  m_H = ‖g_H(δH_k)‖ / ‖g_H(H_k)‖

(143) A possible realisation of these weighting functions consists of first constructing a weighting function f on a state vector, then deducing g_Φ and g_H from it.

(144) So, by this method, a weighting function f associates with the vector U = [u_1 u_2 … u_L]ᵀ the vector f(U) = [a_1u_1 a_2u_2 … a_Lu_L]ᵀ, where the real coefficients a_1, a_2, … a_L form the weighting coefficients. The latter weight the transition matrix as follows:

(145) g_Φ([t_ij]) = [(a_i/a_j) t_ij], that is:
g_Φ( [t_11 t_12 … t_1L ; t_21 t_22 … t_2L ; … ; t_L1 t_L2 … t_LL] ) = [ (a_1/a_1)t_11 (a_1/a_2)t_12 … (a_1/a_L)t_1L ; (a_2/a_1)t_21 (a_2/a_2)t_22 … (a_2/a_L)t_2L ; … ; (a_L/a_1)t_L1 (a_L/a_2)t_L2 … (a_L/a_L)t_LL ]

(146) Indeed, if V = ΦU then f(V) = g_Φ(Φ)f(U), and vice versa. These are two equivalent ways of writing the relationship between U and V, the second calculating the norms in a weighted way.

(147) In fact, by placing

(148) Φ = [t_11 t_12 … t_1L ; t_21 t_22 … t_2L ; … ; t_L1 t_L2 … t_LL], U = [u_1 u_2 … u_L]ᵀ and V = [v_1 v_2 … v_L]ᵀ,
this gives

(149) g_Φ(Φ) f(U) = [ (a_1/a_1)t_11 (a_1/a_2)t_12 … (a_1/a_L)t_1L ; … ; (a_L/a_1)t_L1 (a_L/a_2)t_L2 … (a_L/a_L)t_LL ] [a_1u_1 a_2u_2 … a_Lu_L]ᵀ = [ a_1t_11 a_1t_12 … a_1t_1L ; … ; a_Lt_L1 a_Lt_L2 … a_Lt_LL ] [u_1 u_2 … u_L]ᵀ
That is

(150) g_Φ(Φ) f(U) = diag(a_1, a_2, … a_L) Φ [u_1 u_2 … u_L]ᵀ = diag(a_1, a_2, … a_L) V
Therefore g_Φ(Φ) f(U) = [a_1v_1 a_2v_2 … a_Lv_L]ᵀ = f(V)
Therefore f(V) = g_Φ(Φ) f(U)

(151) Similarly, g_H([t_ij]) = [t_ij / a_j]:
g_H( [t_11 t_12 … t_1L ; t_21 t_22 … t_2L ; …] ) = [ (1/a_1)t_11 (1/a_2)t_12 … (1/a_L)t_1L ; (1/a_1)t_21 (1/a_2)t_22 … (1/a_L)t_2L ; … ]
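The identities f(ΦU) = g_Φ(Φ)f(U) and HU = g_H(H)f(U) can be checked on toy data; the coefficient values below are illustrative only:

```python
import numpy as np

a = np.array([2.0, 0.5, 4.0])             # weighting coefficients a_1..a_L

def f(u):                                  # weighted state vector f(U)
    return a * u

def g_phi(t):                              # entries (a_i / a_j) * t_ij
    return np.outer(a, 1.0 / a) * t

def g_h(t):                                # entries t_ij / a_j
    return t / a

rng = np.random.default_rng(2)
phi = rng.standard_normal((3, 3))
h_mat = rng.standard_normal((2, 3))
u = rng.standard_normal(3)

# If V = phi @ U then f(V) = g_phi(phi) @ f(U): two equivalent writings.
assert np.allclose(f(phi @ u), g_phi(phi) @ f(u))
# The observation itself is not weighted: H @ U = g_H(H) @ f(U).
assert np.allclose(h_mat @ u, g_h(h_mat) @ f(u))
```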

(152) Annex 5: Notion of Dynamic System and Application to Navigation

(153) A dynamic system is a deterministic system of equations comprising an evolution equation and an observation equation. The evolution equation describes a state vector as a function of its former state and of a command. The observation equation provides a scalar or a vector dependent on the state vector. These equations can be non-linear. There are continuous-time systems and discrete-time systems.

(154) Continuous-time system:
Ẋ(t) = f(X(t), u(t))
z(t) = h(X(t))
Discrete-time system:
x_k+1 = f(x_k, u_k)
z_k = h(x_k)

(155) where X(t) and x_k designate the state of the system, u(t) and u_k designate an input command, and z(t) and z_k designate the observation at a continuous instant t or a discrete instant k, according to the continuous or discrete mode of representation.

(156) The equations of classic Newtonian mechanics describe the state of a mobile carrier as a function of its acceleration and its angular speed at any time. These equations constitute an example of a continuous evolution function, the state vector combining information on position, speed and orientation of the mobile carrier, and the command vector comprising the acceleration and the angular speed.

(157) In an inertial navigation unit, calculations can be made in discrete time only. The result is that this continuous evolution function is modelled in the form of a discrete evolution function. Different known methods convert the continuous evolution equations into discrete evolution equations.
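One common conversion method, shown here as a sketch (the unit's actual discretisation scheme is not specified in this annex), truncates the exponential series Φ = exp(F·Δt) of a linear continuous evolution dX/dt = F·X:

```python
import numpy as np

def discretise(f_mat, dt, order=2):
    """Truncated matrix-exponential series for the transition matrix:
    Phi = I + F dt + (F dt)^2 / 2! + ... up to the requested order."""
    phi = np.eye(f_mat.shape[0])
    term = np.eye(f_mat.shape[0])
    for k in range(1, order + 1):
        term = term @ (f_mat * dt) / k   # term = (F dt)^k / k!
        phi = phi + term
    return phi

# Toy constant-velocity model: position rate = velocity.
f_mat = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
phi = discretise(f_mat, dt=0.1)
# F is nilpotent here (F @ F = 0), so the order-2 series is exact.
assert np.allclose(phi, np.array([[1.0, 0.1],
                                  [0.0, 1.0]]))
```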

(158) The observation function modelled in the discrete-time dynamic system of the inertial unit describes the measurements taken by a particular non-inertial sensor. Several observation functions can be used successively over time, as a function of the sensors used.

(159) The discrete-time dynamic system used in the inertial navigation unit is therefore a model, as faithful as possible, of another dynamic system, with continuous time, corresponding to the real world. The observability properties of these two systems may therefore differ.

(160) The observability properties of a dynamic system characterise the information on the state of the system at a given instant which can be obtained by taking into account the observations made later, together with the evolution and observation functions. These properties depend in particular on the command, that is, on the movements of the carrier in the case of the inertial navigation unit.

(161) Annex 6: Observability of a System Having Continuous Time

(162) Consider a continuous non-linear dynamic system which, at the instant 0, is in a state X(0). At subsequent instants, the propagation function and the input command drive the system into states X(t), defining a particular trajectory in the state space.

(163) This trajectory being known, it is now assumed that the initial state differs from X(0) by a small value δX(0), called the linearised state, and that the input commands are the same as in the preceding situation. The following states are X(t) + δX(t). At each step, the observation function collects information on the current state. The problem here is to know whether this information, from start to finish of the relevant scenario, makes it possible to know δX(0). The space of the states linearised at the instant 0 is composed of an observable sub-space and a non-observable sub-space. The non-observable space at the instant 0, characterised by the state X(0), can be deduced from a new dynamic system describing the linearised states, knowing the trajectory X(t):

(164)
δẊ(t) = (∂f/∂X)(X(t), u(t)) δX(t) = F(X(t), u(t)) δX(t)
δz(t) = (dh/dX)(X(t)) δX(t) = H(X(t)) δX(t)
where F and H are time-varying matrices corresponding to the evolution matrix and the observation matrix of the dynamic system linearised along the given trajectory.
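F and H can be approximated numerically by finite differences; the sketch below uses toy evolution and observation functions chosen purely for illustration (a pendulum-like f and a quadratic h):

```python
import numpy as np

def jacobian(func, x, eps=1e-6):
    """Central finite-difference Jacobian of func at x."""
    fx = np.atleast_1d(func(x))
    jac = np.zeros((fx.size, x.size))
    for j in range(x.size):
        step = np.zeros(x.size)
        step[j] = eps
        jac[:, j] = (np.atleast_1d(func(x + step))
                     - np.atleast_1d(func(x - step))) / (2 * eps)
    return jac

def f(x):   # toy non-linear evolution function dX/dt = f(X)
    return np.array([x[1], -np.sin(x[0])])

def h(x):   # toy non-linear observation function z = h(X)
    return np.array([x[0] ** 2])

x0 = np.array([0.3, -0.2])
f_mat = jacobian(f, x0)   # evolution matrix F at this trajectory point
h_mat = jacobian(h, x0)   # observation matrix H at this trajectory point

assert np.allclose(f_mat, [[0.0, 1.0], [-np.cos(0.3), 0.0]], atol=1e-6)
assert np.allclose(h_mat, [[2 * 0.3, 0.0]], atol=1e-6)
```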

(165) The non-observable space comprises the linearised states δX(t) such that

(166) dᵏδz(t)/dtᵏ = 0
for k = 1, 2, 3, … and t ≥ 0

(167) The solutions form a vectorial space of linearised states. There is a base B(0) of this space formed of non-observable vectors B(0) = (δX_1(0), δX_2(0), … δX_q(0)) at the instant 0.

(168) Annex 7: Observability of a System with Discrete Time

(169) Consider a discrete non-linear dynamic system which, at the instant 0, is in a state X_0. At successive discrete instants, the propagation function and the input command drive the system into states X_1, X_2, … These successive states being known, it is assumed that the initial state now differs from X_0 by a small value δX_0, called the linearised state, and that the input commands are the same as in the preceding situation. The following successive states are X_1 + δX_1, X_2 + δX_2, … At each step, the observation function collects information on the current state. The problem is to ascertain whether or not this information, from start to finish of the scenario in question, makes it possible to know δX_0. The space of the states linearised at the instant 0 is composed of an observable sub-space and a non-observable sub-space. An observer operating on the principle of linearisation, such as the EKF, can never estimate an error lying in the non-observable space of linearised states.

(170) The non-observable space at the instant 0, characterised by the state X_0, can be deduced by considering the dynamic system of the linearised states δX_k, built from the evolution function and the observation function linearised at the different states X_0, X_1, X_2, … The evolution function linearised at the instant k is called the transition matrix, noted Φ_k→k+1(X_k). The observation function linearised at the instant k is noted H_k(X_k).

(171) A first observation of the difference δX_0 about X_0 is made by means of the observation matrix H_0(X_0), corresponding to the observation function linearised in X_0. A prediction of the difference δX_1 about X_1 is made from the difference δX_0 about X_0 by means of the transition matrix Φ_0→1(X_0), corresponding to the evolution function linearised in X_0. A second observation of the difference δX_1 about X_1 is made by means of the observation matrix H_1(X_1), and so on.

(172) It is known that the non-observable space of the system describing the perturbation is the kernel of the matrix:

(173) M(X_0) =
[ H_0(X_0)
  H_1(X_1) Φ_0→1(X_0)
  H_2(X_2) Φ_1→2(X_1) Φ_0→1(X_0)
  … ]
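The kernel of this stacked matrix can be computed numerically, for instance by SVD. The toy system below (illustrative, not the unit's model) observes only two of the three states of a static carrier, leaving exactly one non-observable axis:

```python
import numpy as np

def observability_matrix(h_list, phi_list):
    """Stack H_0, H_1 Phi_0->1, H_2 Phi_1->2 Phi_0->1, ..."""
    blocks = [h_list[0]]
    prod = np.eye(phi_list[0].shape[0])
    for h, phi in zip(h_list[1:], phi_list):
        prod = phi @ prod                 # product of transitions back to instant 0
        blocks.append(h @ prod)
    return np.vstack(blocks)

def kernel(m, tol=1e-10):
    """Orthonormal basis of the kernel of m, via SVD."""
    _, s, vt = np.linalg.svd(m)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

phi = np.eye(3)                           # static carrier: identity transition
h = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])           # third state never observed
m = observability_matrix([h, h, h], [phi, phi])
ker = kernel(m)
assert ker.shape[1] == 1                  # one non-observable axis
assert np.allclose(m @ ker, 0.0)          # it lies in the kernel of M
```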

(174) The non-observable space and the observable space therefore depend on the initial reference state X_0, on the input command (inertial data in the case of inertial units), and on the evolution and observation functions.

(175) Suppose that the vector v_0(X_0) forms part of the kernel of M(X_0).

(176) Setting:
v_1(X_1) = Φ_0→1(X_0) v_0(X_0)
v_2(X_2) = Φ_1→2(X_1) v_1(X_1)
v_3(X_3) = Φ_2→3(X_2) v_2(X_2)
…
gives:
H_0(X_0) v_0(X_0) = 0
H_1(X_1) v_1(X_1) = 0
H_2(X_2) v_2(X_2) = 0
…

(177) At a given time, an observability condition is formed for a non-observable direction by the system of equations:

(178) v_k+1(X_k+1) = Φ_k→k+1(X_k) v_k(X_k)
H_k+1(X_k+1) v_k+1(X_k+1) = 0
where v_k(X_k) is a particular non-observable vector at the discrete instant k.

(179) This condition applies at any time and is linked to the trajectory X_0, X_1, X_2, …

(180) In the EKF, this trajectory is noised by the estimation errors. This noise tends to reduce the kernel of the observability matrix, and therefore to reduce the dimension of the non-observable space.

(181) Also, the transition and observation functions are not linearised at the same point. The set X_0, X_1, X_2, … is replaced by a set X̂_H,0, X̂_Φ,0, X̂_H,1, X̂_Φ,1, X̂_H,2, X̂_Φ,2, …

(182) And:
v_1(X̂_H,1) = Φ_0→1(X̂_Φ,0) v_0(X̂_H,0)
v_2(X̂_H,2) = Φ_1→2(X̂_Φ,1) v_1(X̂_H,1)
v_3(X̂_H,3) = Φ_2→3(X̂_Φ,2) v_2(X̂_H,2)
…
with:
H_0(X̂_H,0) v_0(X̂_H,0) = 0
H_1(X̂_H,1) v_1(X̂_H,1) = 0
H_2(X̂_H,2) v_2(X̂_H,2) = 0
…

(183) The observability condition is written:

(184) v_k+1(X̂_H,k+1) = Φ_k→k+1(X̂_Φ,k) v_k(X̂_H,k)
H_k+1(X̂_H,k+1) v_k+1(X̂_H,k+1) = 0

(185) Annex 8: Demonstration of the Adjustment Formula of Matrices for an Orthogonal Base of Vectors

(186) Hypothesis: let q column matrices U_i be such that:
∀(i,j) such that i ≠ j, U_iᵀ U_j = 0   (1)

(187) Let V_i be q column matrices of the same dimension as the U_i.

(188) Let A be a matrix.

(189) Problem: the aim here is to find a matrix ΔA having a minimum norm such that:
(A + ΔA)(U_1 U_2 … U_q) = (V_1 V_2 … V_q)

(190) The matrix A + ΔA can be written in the form of p row matrices L_k + ΔL_k stacked in a column, and each vector V_l in the form of p scalars v_kl in a column:

(191) [ L_1 + ΔL_1 ; L_2 + ΔL_2 ; … ; L_p + ΔL_p ] (U_1 U_2 … U_q) = [ v_11 v_12 … v_1q ; v_21 v_22 … v_2q ; … ; v_p1 v_p2 … v_pq ]

(192) The aim is therefore to find the ΔL_k having a minimum norm such that:
∀(k,l) ∈ {1, 2, … p} × {1, 2, … q}, (L_k + ΔL_k) U_l = v_kl   (2)

(193) Geometric interpretation of the problem: the row and column matrices are associated with vectors in a vectorial space of dimension greater than q. The same notations are kept for these vectors, the matrix products becoming scalar products.

(194) The condition (1) leads to: ∀(i,j) such that i ≠ j, U_i ⊥ U_j

(195) The condition (2) is expressed with the following scalar product:
∀(k,l) ∈ {1, 2, … p} × {1, 2, … q}, (L_k + ΔL_k)·U_l = v_kl

(196) The projection of the vector L_k on the oriented axis defined by the unit vector U_l/‖U_l‖ has

(197) L_k·U_l / ‖U_l‖
for abscissa on this axis. This is the component of L_k along this axis.

(198) The aim here is to find ΔL_k such that the projection of L_k + ΔL_k on this axis is

(199) (L_k + ΔL_k)·U_l / ‖U_l‖ = v_kl / ‖U_l‖

(200) ΔL_k can be decomposed according to the vectors U_l, which generate a vectorial space of dimension q, and according to a residual component ε_k orthogonal to this space:
ΔL_k = Σ_{l=1}^{q} ΔL_kl + ε_k

(201) And therefore

(202) (L_k + ΔL_kl)·U_l / ‖U_l‖ = v_kl / ‖U_l‖

(203) FIG. 7 shows that there is an infinite number of solutions ΔL_kl. The minimal solution corresponds to a vector ΔL_kl collinear to U_l.

(204) According to the figure this gives:

(205) ΔL_kl = (v_kl / ‖U_l‖) U_l/‖U_l‖ - (L_k·U_l / ‖U_l‖) U_l/‖U_l‖
ΔL_kl = ((v_kl - L_k·U_l) / ‖U_l‖) U_l/‖U_l‖
ΔL_kl = (1/‖U_l‖²) (v_kl - L_k·U_l) U_l

(206) The minimum-norm solution therefore gives ε_k = 0, and:

(207) ΔL_k = Σ_{l=1}^{q} (1/‖U_l‖²) (v_kl - L_k·U_l) U_l

(208) Matrix notation therefore gives again:

(209) ΔL_k = Σ_{l=1}^{q} (1 / (U_lᵀ U_l)) (v_kl - L_k U_l) U_lᵀ

(210) Reforming the matrices A and ΔA then gives, after some manipulations, the relationship:

(211) ΔA = Σ_{l=1}^{q} (1 / (U_lᵀ U_l)) (V_l - A U_l) U_lᵀ
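This final relationship can be checked numerically. In the sketch below (toy matrices), the columns U_l are made mutually orthogonal as required by hypothesis (1), and the resulting A + ΔA is verified to map each U_l onto V_l:

```python
import numpy as np

def delta_a(a, u_cols, v_cols):
    """Sum of rank-one terms (1/(U_l^T U_l)) (V_l - A U_l) U_l^T, giving
    (A + Delta A) U_l = V_l for mutually orthogonal columns U_l."""
    d = np.zeros_like(a)
    for u, v in zip(u_cols.T, v_cols.T):
        d += np.outer(v - a @ u, u) / (u @ u)
    return d

rng = np.random.default_rng(3)
a = rng.standard_normal((4, 4))
# Orthogonal (not necessarily unit-norm) columns U_l, via QR then scaling.
q, _ = np.linalg.qr(rng.standard_normal((4, 3)))
u_cols = q * np.array([1.0, 2.0, 0.5])   # column scaling keeps orthogonality
v_cols = rng.standard_normal((4, 3))     # arbitrary targets V_l

d = delta_a(a, u_cols, v_cols)
assert np.allclose((a + d) @ u_cols, v_cols)
```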