Method and system for combining sensor data

11815355 · 2023-11-14

Abstract

A method and system for combining data obtained by sensors, having particular application in the field of navigation systems, are disclosed. The techniques provide significant improvement over state-of-the-art Markovian methods that use statistical noise filters such as Kalman filters to filter data by comparing instantaneous data with the corresponding instantaneous estimates from a model. In contrast, the techniques disclosed herein use multiple time periods of various lengths to process multiple sensor data streams, in order to combine sensor measurements with motion models at a given time epoch with greater confidence and accuracy than is possible with traditional “single epoch” methods. The techniques provide particular benefit when the first and/or second sensors are low-cost sensors (for example as seen in smart phones) which are typically of low quality and have large inherent biases.

Claims

1. A system comprising: at least one sensor, mounted on a platform, configured to make measurements from which platform position or platform movement may be determined, in order to determine an evolution of platform position; and a processor adapted to perform, for each of a plurality of time instances, the steps of: obtaining, in a first time period, first data from at least one sensor mounted on the platform, where the at least one sensor makes measurements from which platform position or movement is determined; obtaining, in a second time period, second data from the at least one sensor mounted on the platform; comparing the first and second data to each other and to a motion model representing expected motion of the platform over at least a part of the respective first and second time periods to obtain corrected first data and corrected second data; and determining the evolution of the position of the platform, where the evolution is constrained by at least one of the corrected first data and corrected second data.

2. The system of claim 1, wherein the at least one sensor comprises at least one of: an accelerometer, a gyroscope, a magnetometer, a barometer, a GNSS unit, a radio frequency receiver, a pedometer, a camera, a light sensor, a pressure sensor, a strain sensor, a proximity sensor, a RADAR, and a LIDAR.

3. The system of claim 1, wherein the platform is an electronic user device.

4. The system of claim 3, wherein the electronic user device is a smart phone or a wearable device.

5. A method of determining an evolution of position of a platform over time as the platform is being transported on a vehicle or a human user, comprising: obtaining, in a first time period, first data from at least one sensor mounted on the platform, where the at least one sensor makes measurements from which platform position or platform movement is determined; obtaining, in a second time period, second data from the at least one sensor mounted on the platform; comparing the first and second data to each other and to a motion model representing expected motion of the platform over at least a part of the respective first and second time periods to obtain corrected first data and corrected second data; and determining the evolution of the position of the platform, where the evolution is constrained by at least one of the corrected first data and corrected second data.

6. The method of claim 5, further comprising analyzing the first and second data over at least a part of the respective first and second time periods in order to assess the reliability and/or accuracy of the first and second data.

7. The method of claim 6, wherein analyzing the first and second data comprises performing self-consistency checks on the data across the corresponding time period.

8. The method of claim 6, wherein the duration of the first time period, the second time period and/or their amount of overlap relative to each other and the respective time instance is dynamically adjusted in response to the assessed reliability and/or accuracy of the associated sensor data.

9. The method of claim 5, wherein the first data and/or second data are iteratively analyzed forwards and/or backwards in time.

10. The method of claim 9, wherein a first analysis of the first and/or second data is based on at least one confidence threshold and the at least one confidence threshold is modified after the first analysis; wherein a subsequent analysis of said first and/or second data is based on the modified confidence threshold.

11. The method of claim 5, wherein the motion model comprises at least one of a motion context component comprising a motion context of the platform, and a position context component comprising a position context of the platform.

12. The method of claim 5, wherein the motion model comprises at least one parameter that quantitatively describes an aspect of the motion, and/or at least one function that may be used to determine a parameter that quantitatively describes an aspect of the motion.

13. The method of claim 12, wherein the at least one parameter is one of: a pedestrian step length, a pedestrian speed, a pedestrian height, a pedestrian leg length, a stair rise, a stair run, or a compass-heading to direction-of-motion offset.

14. The method of claim 12, wherein the at least one function is a function determining the pedestrian step length or speed.

15. The method of claim 5, wherein at least one component of the motion model is automatically determined from an analysis of the first and/or second data, or wherein at least one component of the motion model is automatically determined from an analysis of data obtained from the at least one sensor during at least one previous determination of an evolution of platform position.

16. The method of claim 5, wherein the constraining of the evolution comprises determining a measurement bias of the at least one sensor and correcting for said bias.

17. The method of claim 5, wherein at least one of the first and second time periods extends into the future with respect to a present time instance in which the evolution is being determined.

18. The method of claim 5, wherein the at least one sensor comprises at least one of: an accelerometer, a gyroscope, a magnetometer, a barometer, a GNSS unit, a radio frequency receiver, a pedometer, a camera, a light sensor, a pressure sensor, a strain sensor, a proximity sensor, a RADAR, and a LIDAR.

19. The method of claim 5, wherein the platform comprises an electronic user device.

20. The method of claim 5 wherein the at least one sensor used to obtain the first data is different from the at least one sensor used to obtain the second data.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) Examples of the present techniques introduced herein will now be described with reference to the attached drawings, in which:

(2) FIG. 1 is a schematic logical diagram of a portable device that may implement the techniques introduced herein;

(3) FIG. 2 schematically represents data that has been obtained by sensors in first and second time periods;

(4) FIG. 3 schematically illustrates example position data obtained by a GNSS sensor between two time instances;

(5) FIG. 4 schematically illustrates the evolution of the position of a smart phone as determined between time instances;

(6) FIG. 5 schematically represents data that has been obtained over first and second time periods;

(7) FIG. 6 is a flow diagram outlining the main steps of one embodiment of the techniques introduced herein;

(8) FIG. 7 is a flow diagram outlining the main steps of a further embodiment of the techniques introduced herein; and

(9) FIG. 8 is a schematic overview of how data from at least one sensor may be used to determine the evolution of a metric of interest.

DETAILED DESCRIPTION

(10) FIG. 1 is a schematic logical diagram of a portable device 100 that may implement the techniques introduced herein. In this example, the portable device 100 is a smart phone, but it will be appreciated that the techniques introduced herein may be implemented on a range of devices such as wearable devices (e.g. smart watches and other jewellery), tablets, laptop computers etc.

(11) FIG. 1 shows the relevant components of the smart phone 100, which includes a processor 1, communications module 2, memory 3, screen 4, local storage 5 (non-volatile memory) and a battery 7. The communications module 2 comprises components necessary for wireless communication, such as a receiver, transmitter, antenna, local oscillator and signal processor.

(12) The device 100 also comprises an inertial measurement unit (IMU) 10, which here includes an accelerometer 12, a gyroscope 14 and a magnetometer 16. The accelerometer is configured to measure the acceleration of the device; the gyroscope is configured to measure the rotation rate of the device; and the magnetometer is configured to measure the strength and direction of the local magnetic field, and hence the compass heading of the device 100. The accelerometer, gyroscope and magnetometer may be MEMS devices, which are typically tri-axial, each orthogonal axis comprising a separate sensor. Other inertial sensors may be included in such an IMU.

(13) The device 100 further comprises a GNSS sensor 20, such as a GPS or GLONASS sensor (or both), and a light sensor 30. In other embodiments, other sensors may be used, examples of which have been discussed above.

(14) Each of the communications module 2, memory 3, screen 4, local storage 5, battery 7, IMU 10, GNSS sensor 20 and light sensor 30 is in logical communication with the processor 1. The processor 1 is also in logical communication with Navigation Solution Unit (NSU) 40, which is operable to obtain data from the device sensors and determine from said data, amongst other metrics, the position and velocity of the device. For simplicity, in the following description, the evolution of the metric of interest is the position of the device 100, i.e. its trajectory over time.

(15) The NSU 40 may be implemented in hardware, firmware, software, or a combination thereof.

(16) Conventionally, if it is desired to track the evolution of a device's position between two time instances T.sub.1 and T.sub.2, as schematically illustrated in FIG. 2, a solution may be provided by performing, at each time instance in the time interval T.sub.1 to T.sub.2, single integration of the data from the rate gyroscope 14 to determine device attitude and double integration (while correcting for gravity) of the data obtained from the accelerometer sensor 12 in order to obtain position data. However, as has been discussed above and as is well known in the art, such solutions based on consumer-grade inertial sensors are typically subject to large accumulated attitude and position errors after only a matter of seconds due to the numerical integration of measurements from noisy and often poorly calibrated sensors. Similarly, although GNSS sensors may provide accurate position data when the device has a clear line of sight to the satellite constellation, position data from such sensors is subject to large errors (or may not be available at all) when no direct line of sight is available, such as when the user is inside a building or travelling through an “urban canyon”.
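
The scale of the drift problem described above can be illustrated with a minimal sketch (Python, purely illustrative; the function name and numbers are assumptions, not from the patent): double-integrating a consumer-grade accelerometer's constant bias turns it into a position error that grows quadratically with time.

```python
# Minimal 1D dead-reckoning sketch: double-integrate accelerometer
# samples (gravity assumed already removed) to obtain position, and
# show how a constant sensor bias grows quadratically into position error.

def dead_reckon_1d(accel_samples, dt, v0=0.0, p0=0.0):
    """Simple Euler integration: acceleration -> velocity -> position."""
    v, p = v0, p0
    positions = []
    for a in accel_samples:
        v += a * dt          # first integration: velocity
        p += v * dt          # second integration: position
        positions.append(p)
    return positions

# A stationary device whose accelerometer carries a 0.1 m/s^2 bias:
dt = 0.01                     # 100 Hz sampling
biased = [0.1] * 1000         # 10 seconds of pure bias, no real motion
drift = dead_reckon_1d(biased, dt)
# After only 10 s the apparent position error is already ~5 m
# (roughly 0.5 * bias * t^2), despite the device never moving.
```

This is the accumulated error the multi-epoch techniques are designed to bound.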

(17) FIG. 2 schematically illustrates how the techniques introduced herein overcome these problems in order to provide an accurate and reliable trajectory of the device between the time instances T.sub.1 and T.sub.2.

(18) Box 210 in FIG. 2 schematically represents data obtained by the sensors of the IMU 10, GNSS sensor 20 and light sensor 30 over the time period between T.sub.0 (for example when the user turned the smart phone on in the morning) to time T.sub.3, which in the current example is the present time. For the purposes of this discussion, this time period between T.sub.0 and T.sub.3 will be referred to as the second time period 210.

(19) Similarly, box 220 schematically represents data obtained by the sensors of the IMU 10, GNSS sensor 20 and light sensor 30 over the time period between T.sub.1 and T.sub.2 which, for the purposes of this discussion, will be referred to as the first time period. In this example, the evolution of the metric of interest (position) is being measured between time instances T.sub.1 and T.sub.2 corresponding to the first time period, although this is not necessarily the case. For example, the evolution of the position may be desired to be determined between the time instances T.sub.0 and T.sub.2.

(20) The data obtained in the first time period (the first data) and the data obtained in the second time period (the second data) are provided to the NSU 40 which calculates the desired solution, in this case the trajectory of the device between time instances T.sub.1 and T.sub.2. The solution is constrained by both the first data and the second data, and a motion model of the device between time instances T.sub.1 and T.sub.2.

(21) The motion model represents the general motion of the device, and preferably comprises three components: a position context, a motion context and at least one parameter that quantitatively represents the motion of the device. The position context is the context in which the smart phone is being supported, and may comprise the attitude of the device, for example a heading to direction-of-motion offset. More generally, this can be used to infer the mode in which the device is being carried, which in the case of a smart phone may be in a pocket, beside the ear, in the user's hand swinging by his side, being held in front of the user's face etc. The motion context may generally describe the mode by which the user of the smart phone (and hence the smart phone itself) is moving, for example walking, running, crawling, cycling etc. Finally, the at least one parameter quantitatively describes an aspect of the motion, for example, a step length or speed of the user.
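
The three-component motion model described above can be pictured as a small data structure (an illustrative sketch; the class name, field names and speed values are assumptions, not taken from the patent):

```python
# Illustrative container for a motion model: a motion context, a
# position context, and quantitative parameters such as step length.

from dataclasses import dataclass, field

@dataclass
class MotionModel:
    motion_context: str                   # e.g. "walking", "running"
    position_context: str                 # e.g. "in pocket", "in hand"
    parameters: dict = field(default_factory=dict)

    def max_plausible_speed(self):
        """Illustrative plausible top speed per motion context (m/s)."""
        return {"walking": 2.5, "running": 7.0, "cycling": 15.0}.get(
            self.motion_context, 50.0)

model = MotionModel("walking", "in pocket", {"step_length_m": 0.8})
```

A model of this shape supplies the "allowed motion" constraints referred to in the following paragraphs.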

(22) As the solution is constrained by the first data, second data and the motion model, this means that erroneous data obtained from the sensors may be corrected for in providing the position solution. For example, if the motion model comprises a motion context of “walking” and a position context of “in a user's pocket”, then any data obtained by, say, the GNSS sensor that does not align with the motion model (for example a spurious data event that would not be possible by walking, such as a sudden and large change in position) can be corrected for or removed when providing the position solution.
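
A minimal sketch of this kind of gating might look as follows (illustrative only; the function, the one-dimensional positions and the 2.5 m/s walking speed limit are assumed for the example):

```python
# Sketch of gating spurious GNSS fixes against a "walking" motion model:
# any fix implying a speed above the model's plausible maximum is
# excluded from the position solution.

def gate_gnss_fixes(fixes, max_speed):
    """fixes: list of (t, x) one-dimensional position fixes.
    Returns only the fixes consistent with the speed constraint."""
    accepted = [fixes[0]]
    for t, x in fixes[1:]:
        t_prev, x_prev = accepted[-1]
        speed = abs(x - x_prev) / (t - t_prev)
        if speed <= max_speed:
            accepted.append((t, x))
        # else: a spurious jump (e.g. multipath error); drop the fix
    return accepted

# Fix at t=3 implies a 50 m/s jump, impossible while walking:
fixes = [(0, 0.0), (1, 1.3), (2, 2.6), (3, 52.6), (4, 3.9)]
good = gate_gnss_fixes(fixes, max_speed=2.5)
```

Note that the fix at t=4 is judged against the last accepted fix, so the track recovers after the spurious event.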

(23) The motion model may be pre-selected by a user of the smart phone, for example through interaction with an appropriate graphical user interface (GUI) presented to the user via screen 4. At time T.sub.1 for example, the user may decide to go for a run, and select a motion model that comprises a motion context of “running”. When determining the evolution of the smart phone's position between the time instances T.sub.1 and T.sub.2, the solution provided by the NSU 40 is constrained by the first data, second data and the “running” motion model.

(24) Alternatively, the first and/or second data may be used by an analysis module 42 of the NSU 40 to automatically select a motion model for use in generating the position solution. The second data may be analysed by the analysis module 42, with the data from the accelerometer and gyroscope sensors used to determine a current motion and/or position context. For example, data from the accelerometer 12 may be used to determine a step cadence of the user, from which it is determined that the motion context is “walking” and, furthermore, that the step length of the user is 0.8 m. Data obtained from the gyroscope sensor 14 may likewise be used to determine a position context; in addition, if the light sensor 30 detects minimal or no light during the time period between T.sub.0 and T.sub.3, and the time is during daylight hours, it can be inferred that the smart phone is in an enclosed space such as a pocket or a bag. The second time period can therefore be thought of as providing a context under which the data from the first time period is processed to provide the location solution. The first data may also be used to select a motion model automatically.
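
A cadence-based context inference of the sort described above might be sketched as follows (illustrative; the threshold, cadence bands and synthetic signal are assumptions, not values from the patent):

```python
# Sketch of inferring a motion context from accelerometer magnitude:
# count upward threshold crossings as a rough step detector, then map
# the resulting cadence to a context.

import math

def estimate_cadence(accel_mag, dt, threshold=10.5):
    """Steps per second, counted as upward crossings of a threshold."""
    steps = sum(1 for a, b in zip(accel_mag, accel_mag[1:])
                if a < threshold <= b)
    duration = dt * (len(accel_mag) - 1)
    return steps / duration

def infer_motion_context(cadence):
    if 1.2 <= cadence <= 2.5:
        return "walking"
    if cadence > 2.5:
        return "running"
    return "stationary"

# Synthetic 2 Hz step signal superposed on gravity, sampled at 100 Hz:
dt = 0.01
signal = [9.81 + 2.0 * math.sin(2 * math.pi * 2.0 * dt * i)
          for i in range(1000)]
context = infer_motion_context(estimate_cadence(signal, dt))
```

A real detector would filter the signal first; this only illustrates the shape of the inference.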

(25) In particularly advantageous embodiments, the techniques introduced herein take advantage of the fact that a user of a portable device such as a smart phone 100 is likely to make numerous such journeys. Therefore, the motion model parameters for a particular user and device/sensor combination may be automatically learned by machine learning algorithms and refined over a plurality of such journeys spanning a wide range of motion and position contexts. For example, it may be determined that after a plurality of journeys, a reliable step length for the user of a smart phone is 0.75 m rather than the initially-determined 0.8 m, thereby providing more accurate navigation solutions. The motion models and their various parameters may be stored in the local storage 5 on the device, or by other addressable storage means (e.g. through use of a client-server system), and indexed by position context and/or motion context. Subsequently, a stored motion model and its corresponding parameters may be automatically selected from the addressable storage when a particular motion and/or position context is determined.
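
One simple way to realise the gradual refinement described above is an exponentially weighted running estimate keyed by user, device and context (an illustrative sketch; the patent does not prescribe this particular update rule, and the class and keys are invented for the example):

```python
# Sketch of refining a per-user step-length parameter over journeys,
# indexed by (user, device, context) as the paragraph above suggests.

class ParameterStore:
    def __init__(self, alpha=0.3):
        self.alpha = alpha       # weight given to new evidence
        self.values = {}         # (user, device, context) -> value

    def update(self, key, observed):
        if key not in self.values:
            self.values[key] = observed
        else:
            old = self.values[key]
            self.values[key] = (1 - self.alpha) * old + self.alpha * observed
        return self.values[key]

store = ParameterStore()
key = ("alice", "phone-1", "walking")
store.update(key, 0.80)             # initial estimate from journey 1
refined = store.update(key, 0.70)   # journey 2 suggests shorter steps
# The stored step length drifts toward repeated observations.
```

In practice a machine-learning model could replace the fixed-weight update, but the indexing and persistence pattern is the same.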

(26) A user may use more than one device, for example a smart phone and a wearable device such as a smart watch. In other embodiments the motion model and associated parameters may be further indexed by device and/or user, such that, in general, a motion model appropriate for a device and its current user may be selected automatically.

(27) In preferred embodiments, the data obtained during the first 220 and second 210 time periods is analysed by the analysis module 42 of the NSU 40 during the determination of the device trajectory between time instances T.sub.1 and T.sub.2. Such analysis advantageously allows for the assessment of the reliability and accuracy of the data obtained during the first and second time periods, in order to provide improved navigation solutions as compared to state-of-the-art systems that instead rely on instantaneous estimates. As seen in FIG. 2 for example, the second time instance T.sub.2 is in fact in the past with respect to the present time, which here is T.sub.3. Instantaneous position data is not always required, and indeed in many cases users of a device would be willing to forego instantaneous results for more accurate position data. The time lag between T.sub.2 and T.sub.3 may be of the order of 1 s, or longer.

(28) Such analysis as performed by the analysis module 42 may comprise self-consistency analysis (analysing the data obtained by a sensor across the respective time period) and cross-reference analysis, where the data from one sensor is compared with the data from another sensor and/or the motion model. The analysis performed by the analysis module may comprise analysing the obtained data forward and/or backwards in time, or as a batch process.

(29) Take, for example, a situation where the motion model is “walking”, but on analysis of the data obtained from the GNSS sensor, a data event is observed that does not fit with such a walking motion model (e.g. a sudden shift in position by 50 m). This data event can be flagged as unreliable and hence not used in the final position solution provided by the NSU 40.

(30) The analysis performed by the analysis module 42 may comprise processing the obtained data forwards and backwards in time, which allows any asymmetries in the data to be used in determining their reliability and accuracy. FIG. 3 schematically illustrates one-dimensional position data obtained by the GNSS sensor 20. In this example the plotted change in position between the time instances T.sub.A and T.sub.B is caused by erroneous position data. When processing the data forwards in time, the gradual change in position over the time period A may initially be seen as a possible “allowed” motion of the device (the allowed motions being constrained, for example, by the current motion model and/or measurements from the IMU). In such circumstances the erroneous GNSS position data from time period A is used to generate the navigation solution. The rapid change in position over time period B, however, falls outside the allowed range of motion of the device, indicating that there is a potential problem with the GNSS position data.

(31) However, with just the forward-in-time processing it is unclear whether the data before or after time instance T.sub.B is in error. In a naïve system that only uses forwards-in-time analysis, some or all of the “accurate” position data after time instance T.sub.B (illustrated here at “C”) may be rejected, as these data are now inconsistent with a navigation solution that has been corrupted by the erroneous data from time period A. In extreme cases the system might never recover the true navigation solution. By applying backwards-in-time processing to the same data (i.e. from T.sub.B to T.sub.A), the almost discontinuous jump in position over time period B is detected as being inconsistent with the allowed motion of the device, and consequently the position data is ignored (or assigned lower confidence) when combining data to obtain the navigation solution. This continues until such time as the position data is deemed consistent with the current navigation solution and the “allowed” motion of the device from the motion model. For example, in FIG. 3 all or most of the erroneous position data between times T.sub.B and T.sub.A would be correctly ignored (or given lower confidence) by the NSU 40 during backwards-in-time processing.

(32) The analysis may comprise iteratively processing the obtained data forwards and/or backwards in time. For example, confidence thresholds applied to an initial pass of the data may be such that all of the obtained data is allowed to be used for the navigation solution provided by the NSU 40. Of course, these data may include erroneous results such as the GNSS data obtained between time instances T.sub.A and T.sub.B depicted in FIG. 3; however, it is important not to lose data that may represent the actual motion taken by the device. On subsequent passes, data from particular time periods (such as between T.sub.A and T.sub.B) may not meet a new confidence threshold determined after the first pass, and are therefore ignored or corrected for. The benefit of processing data backwards in time, rather than just using multiple passes forwards in time, is that the uncertainty on any derived parameters increases over time in the absence of measurements. Parameters can therefore be better estimated overall in regions such as that between T.sub.A and T.sub.B by processing the data either side of that region in both directions and combining the estimates provided by these two passes via a weighted mean or similar calculation.
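
The forward/backward combination via a weighted mean can be sketched as follows (illustrative; the estimates and variances are invented for the example):

```python
# Sketch of combining a forward-pass and a backward-pass estimate in a
# measurement gap by an inverse-variance weighted mean: whichever pass
# is closer to its last measurement (smaller variance) dominates.

def fuse(est_fwd, var_fwd, est_bwd, var_bwd):
    """Inverse-variance weighted mean of two independent estimates."""
    w_f, w_b = 1.0 / var_fwd, 1.0 / var_bwd
    return (w_f * est_fwd + w_b * est_bwd) / (w_f + w_b)

# Mid-gap, both passes are equally uncertain, so the fused estimate is
# the plain average; near the gap's trailing edge the backward pass,
# fresh from a measurement, is weighted much more heavily.
mid = fuse(10.0, 4.0, 12.0, 4.0)
edge = fuse(10.0, 9.0, 12.0, 1.0)
```

This is the standard fusion rule for independent Gaussian estimates, used here as a stand-in for the "weighted mean or similar calculation" mentioned above.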

(33) Confidence thresholds may be set initially according to the known statistical noise properties of the device sensors, or from information derived from other sensors. For example, the accelerometer 12 may provide acceleration information in order to set thresholds on the expected variation in the GNSS frequency measurements during dynamic motions. On a subsequent pass through the data, the biases on the accelerometer may be known more accurately (for example, zero velocity update analysis may be performed during an initial analysis which reveals bias information). This means that the acceleration data is more accurate and reliable and therefore the confidence threshold on the expected frequency variation can be tighter (i.e. smaller variation). As a result, any error in frequency measurement that was allowed during the first pass of the data may now be ignored on the second pass due to the tighter (more confident) confidence thresholds.

(34) Referring back to FIG. 3, as discussed above, on the first pass of the data, the initial confidence thresholds may be such that the GNSS data obtained between time instances T.sub.A and T.sub.B are seen as being consistent with the “allowed” motion of the device from the motion model. During the course of the first-pass processing, better estimates of the IMU 10 biases are determined, which can be used on subsequent passes of the data to provide more accurate IMU data and, consequently, tighter confidence thresholds for other sensor data. For example, on a subsequent pass, the GNSS position data between time instances T.sub.A and T.sub.B might now be found to be inconsistent with the more tightly constrained allowed motion of the device, and, as a result, be filtered out of (or assigned lower confidence in) the final trajectory solution.
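
The two-pass tightening of confidence thresholds might be sketched as follows (illustrative residuals and thresholds; the gating function is an assumption, not the patent's method):

```python
# Sketch of multi-pass processing with a tightening confidence gate:
# a loose first pass keeps all data while biases are estimated; the
# tightened second pass then rejects the outlier.

def pass_filter(residuals, threshold):
    """Return indices of samples whose residual is within the gate."""
    return [i for i, r in enumerate(residuals) if abs(r) <= threshold]

residuals = [0.2, -0.1, 3.0, 0.15, -0.25]     # sample 2 is an outlier
first = pass_filter(residuals, threshold=5.0)  # loose gate: keeps all 5
# Suppose the first pass reveals the sensor bias, so residuals are now
# trusted more and the gate can be tightened:
second = pass_filter(residuals, threshold=1.0)  # rejects sample 2
```

The key point mirrored here is that nothing is discarded until the tighter, better-informed threshold justifies it.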

(35) Confidence thresholds and the related analyses forwards and backwards in time may be applied to sensor data as described, and also to derived data and metrics of interest, including platform states, pose, and derived or latent variables such as sensor biases and calibration parameters.

(36) In FIG. 2, the second time period 210 is illustrated as extending between time instances T.sub.0 and T.sub.3, and the first time period 220 as extending between T.sub.1 and T.sub.2. Advantageously, the durations of the first and second time periods are chosen so as to allow for optimal assessment by the analysis module 42 of the reliability and accuracy of the data obtained from the device sensors. Furthermore, in some embodiments, the durations of the first and/or second time periods may be dynamically changed in response to the assessment by the analysis module. For example, the second time period may be extended because of the erroneous data between time instances T.sub.A and T.sub.B, such that the extended time period allows for a more reliable interpretation of the data (for example so as to automatically select a motion model).

(37) FIG. 4 schematically illustrates the evolution of the position of the smart phone 100 as determined between time instances T.sub.1 and T.sub.2. The hatched boxes 100a, 100b, 100c, 100d, 100e, 100f represent time periods in which the absolute position of the device 100 has been determined with high confidence (“confident sections”), for example because of the availability of high quality GNSS data. The orientations of the confident sections also indicate the orientation of the device (e.g. position context) in each time period. The line 400 between the confident sections illustrates the potential trajectory of the device (i.e. the evolution of its position) as determined by the NSU 40. The interval between time instances T.sub.A and T.sub.B is illustratively shown as a time period where the GNSS data were deemed too unreliable for determining the absolute position of the device, as has been explained above in relation to FIG. 3. Other metrics may be used to invoke the rejection (or reduction in confidence) of GNSS data, for example the number of satellites in use being fewer than a predetermined number, the signal strength of the GNSS data being below a predetermined threshold, etc.

(38) Typically, when the GNSS data are analysed, if an event that is deemed to be unreliable is found (such as region B in FIG. 3), then the GNSS data may be deemed unreliable for a certain time period before and after the “flagged” event, as the error in the data from the GNSS sensor may increase gradually rather than abruptly. For example, again referring to FIG. 3, the GNSS trace would be flagged as unreliable from time instant T.sub.A rather than from the start of time period T.sub.B.
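
Widening the unreliable window around a flagged event can be sketched as a simple dilation of a boolean flag array (illustrative; the guard intervals are assumptions):

```python
# Sketch of extending an "unreliable" flag to samples before and after
# a detected GNSS event, since the underlying error may grow gradually
# before the visible jump.

def dilate_flags(flags, before, after):
    """Extend each True flag `before` samples earlier and `after` later."""
    n = len(flags)
    out = [False] * n
    for i, f in enumerate(flags):
        if f:
            for j in range(max(0, i - before), min(n, i + after + 1)):
                out[j] = True
    return out

flags = [False, False, False, False, True, False, False, False]
widened = dilate_flags(flags, before=2, after=1)
# Samples 2 through 5 are now flagged unreliable, not just sample 4.
```

In a deployed system the guard interval would reflect the sensor's known error growth rate rather than a fixed sample count.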

(39) In order to determine the trajectory of the device between the confident sections, the data obtained in the confident sections are analysed and used to determine biases in the accelerometer 12, gyroscope 14 and magnetometer 16 sensors, calculate step length parameters of the motion model, and determine the direction-of-motion to heading offset, such that the data obtained from the IMU sensors, together with the motion model, may be used to determine the trajectory of the device accurately and reliably between the confident hatched regions.

(40) Alternatively or in addition, data obtained from the accelerometer 12, gyroscope 14 and magnetometer 16 during the time periods between the confident sections may be used in combination with multiple estimates of one or more parameters to determine a plurality of possible trajectories, and the trajectory that best aligns with the confident sections may be used. For example, a plurality of direction-of-motion to heading offset estimates may be used to generate trial trajectories during the low-confidence time periods, with the optimal trajectory being chosen as that which best aligns with the end-points of the trajectory in the high-confidence time periods. The trajectory between the confident sections (as well as the data and system parameters contributing to its construction) may also be constrained using prior information obtained from a map to help determine which of the plurality of trajectories is the most likely. For example, if one of the possible trajectories involves crossing a river where no bridge is located, that particular trajectory can be ruled out; or if the map data indicates that a pedestrian would most likely travel along a straight path while the navigation solution shows a curved trajectory, this can be used to identify and correct a yaw-axis gyroscope bias.
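
The multi-hypothesis heading-offset search described above can be sketched as follows (illustrative; the step data, candidate offsets and helper functions are invented for the example):

```python
# Sketch of trying several direction-of-motion to heading offsets:
# dead-reckon a trial trajectory for each candidate offset and keep the
# one whose end point best matches the next high-confidence GNSS fix.

import math

def dead_reckon(start, steps, heading_offset):
    """steps: list of (heading_rad, length_m). Returns the end point."""
    x, y = start
    for heading, length in steps:
        x += length * math.cos(heading + heading_offset)
        y += length * math.sin(heading + heading_offset)
    return (x, y)

def best_offset(start, steps, end_fix, candidates):
    def endpoint_error(off):
        ex, ey = dead_reckon(start, steps, off)
        return math.hypot(ex - end_fix[0], ey - end_fix[1])
    return min(candidates, key=endpoint_error)

steps = [(0.0, 0.8)] * 10          # ten 0.8 m steps "due east" by compass
end_fix = (0.0, 8.0)               # but the confident fix lies due north
offset = best_offset((0.0, 0.0), steps, end_fix,
                     candidates=[0.0, math.pi / 2, math.pi])
# The pi/2 offset best reconciles the compass headings with the fix.
```

The same pattern extends to searching over several parameters at once (e.g. step length and offset jointly).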

(41) In further embodiments, a least-squares fit of the entire trajectory (or piecewise fitting of sections of the trajectory) constrained by the data from the confident sections may be carried out.
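
As a minimal illustration of such fitting, an ordinary closed-form linear least-squares fit of one-dimensional position against time through confident anchor points might look like this (the anchors and helper function are invented for the example; a real trajectory fit would be multi-dimensional and piecewise):

```python
# Sketch of least-squares fitting a trajectory segment constrained by
# high-confidence fixes: closed-form linear fit of position vs. time.

def linear_lsq(points):
    """points: list of (t, x). Returns (slope, intercept) minimizing
    the sum of squared residuals."""
    n = len(points)
    st = sum(t for t, _ in points)
    sx = sum(x for _, x in points)
    stt = sum(t * t for t, _ in points)
    stx = sum(t * x for t, x in points)
    slope = (n * stx - st * sx) / (n * stt - st * st)
    intercept = (sx - slope * st) / n
    return slope, intercept

anchors = [(0.0, 0.0), (1.0, 1.1), (2.0, 1.9), (3.0, 3.0)]
slope, intercept = linear_lsq(anchors)
# slope ~ 1 m/s: the speed implied by the confident anchor points.
```

Piecewise fitting would apply the same idea independently between each pair of confident sections.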

(42) As is schematically seen in FIG. 2, the second time period may extend into the future with respect to the first time period (and the second time instance T.sub.2). In other words, when determining the evolution of a metric of interest between two time instances T.sub.1 and T.sub.2, data from the future with respect to T.sub.2 is used to constrain the solution. This provides a more accurate determination of the evolution of the metric of interest, as constraints at and around the second time instance T.sub.2 may be interpolated from data on either side of it, rather than simply extrapolated from past data. In other words, the extension of the second time period into the future with respect to the second time instance T.sub.2 allows for a more accurate context, or “overall picture”, of the data from which the evolution of the metric of interest is to be determined.

(43) However, there are some situations, for example navigation, where real-time information is demanded by a user of such a device. In these cases, the second time instance T.sub.2 may be at, or within a small interval behind, present time, as schematically illustrated in FIG. 5. The relative positions of the second time period 210 and first time period 220 are also shown, with both time periods ending at time T.sub.2 and only extending into the past with respect to T.sub.2.

(44) Therefore, the trajectory of the device may be provided to the user in near-real-time, with the solution for the present time only constrained by past data. However, advantageously, position solutions for previous times may be continually updated as new data are obtained. For example, at time instance T.sub.2, both the first and second time periods extend into the future with respect to T.sub.1, and therefore at T.sub.2 a more accurate and reliable position of the device at T.sub.1 may be determined as compared to the solution determined at (or within a small interval behind) the time instance T.sub.1 itself. In this manner, the techniques introduced herein provide for near-real-time determination of the evolution of a metric of interest, as well as an improved determination of said evolution over time as more data are obtained.

(45) FIG. 6 is a flow diagram 600 outlining the main steps of one embodiment of the techniques introduced herein. At step 601, a user of the device pre-selects a motion model, typically by interaction with a suitable GUI via a screen of the device. As discussed above, a motion model may comprise a motion context, a position context and at least one quantitative parameter describing the motion, and the user may pre-select at least one of these components. For example, the user may simply select a motion model having a motion context of “running”.

(46) At step 602, the NSU 40 obtains data from the first time period, and at step 603 obtains data from the second time period. Although these are set out in method 600 as separate steps, it will be appreciated that the NSU 40 may obtain the data from the first and second time periods substantially simultaneously. In other embodiments the data from the second time period may be obtained before the data from the first time period.

(47) At step 604, the first and second data are analysed by the analysis module 42, as described above, and at step 605, the NSU 40 determines the evolution of the metric of interest between first and second time instances. In the series of steps set out in method 600, the motion model is shown as being pre-selected before the obtaining of data. However, it will be appreciated that the motion model may be selected by the user after the obtaining of the data.
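The steps of method 600 can be sketched as a single pipeline (the helpers `read` and `analyse`, and the use of a simple difference as the "evolution", are illustrative assumptions, not details from the patent):

```python
def determine_evolution(read, analyse, first_period, second_period, motion_model):
    """Steps 602-605 of method 600. `read` obtains sensor data for a time
    period, and `analyse` stands in for the analysis module 42, returning
    corrected first and second data."""
    first_data = read(first_period)      # step 602
    second_data = read(second_period)    # step 603 (may run concurrently)
    corrected_first, corrected_second = analyse(
        first_data, second_data, motion_model)  # step 604
    # Step 605: evolution of the metric between the two time instances,
    # here taken simply as the change in the corrected metric.
    return corrected_second - corrected_first

# Toy stand-ins: each period yields ten units of the metric per period index.
evolution = determine_evolution(
    read=lambda period: period * 10.0,
    analyse=lambda a, b, model: (a, b),  # identity "analysis"
    first_period=1, second_period=2, motion_model="running")
print(evolution)  # 10.0
```

Because the two reads are independent, they may indeed be performed simultaneously or in reverse order, as the paragraph above notes.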

(48) FIG. 7 is a flow diagram 700 outlining the main steps of a further embodiment of the techniques introduced herein, in this case where the motion model is automatically selected from analysis of the first and second data. At step 701, the NSU 40 obtains data from the first time period, and at step 702 obtains data from the second time period. In the same manner as described above in relation to FIG. 6, steps 701 and 702 may in some cases occur substantially simultaneously or in reverse order.

(49) At step 703 the first and second data are analysed by the analysis module 42.

(50) At step 704, from the analysis performed at step 703, the motion and/or position context of the device is determined.

(51) At decision step 705, it is determined whether or not a motion model parameter corresponding to the current user, the device, and the position and/or motion context determined in step 704 is stored in addressable storage (for example local storage 5). If such a motion model parameter does exist, it is recalled (step 708) and at step 709 the NSU 40 determines the evolution of the metric of interest between the first and second time instances, with the evolution constrained by the first data, the second data and the motion model including the at least one parameter recalled from local storage. The quantitative motion model parameter may comprise, for example, an estimate of the step length of the user. As has been discussed above, advantageously such a parameter may have been refined during previous analyses of data obtained by the device for the user.

(52) If, at step 705, it is determined that no corresponding motion model parameter exists, then the method moves to optional step 706 where the determined motion and/or position context is used to determine at least one parameter associated with the motion of the device. Optionally, this may be stored in addressable storage (step 707), indexed by the determined motion and/or position context and the identity of the device. Subsequently, at step 709 the NSU 40 determines the evolution of the metric of interest between time instances T.sub.1 and T.sub.2, with the evolution constrained by the first data, second data and motion model.
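Decision steps 705 to 708 amount to a cache lookup keyed by user, device and context (a minimal sketch; the key structure, the `derive_default` helper and the step-length values are assumptions for illustration):

```python
def get_motion_model_parameter(storage, user, device, context, derive_default):
    """Recall a stored motion model parameter for this user/device/context
    (steps 705 and 708); if none exists, derive one from the context
    (step 706) and store it indexed by context and identity (step 707)."""
    key = (user, device, context)
    if key in storage:
        return storage[key]              # step 708: recall stored parameter
    parameter = derive_default(context)  # step 706
    storage[key] = parameter             # step 707
    return parameter

storage = {}
# First run: no stored parameter, so a default step length (m) is derived.
first = get_motion_model_parameter(
    storage, "alice", "phone-1", "running", lambda ctx: 0.7)
# Second run: the stored value is recalled; the new default is ignored.
second = get_motion_model_parameter(
    storage, "alice", "phone-1", "running", lambda ctx: 0.9)
print(first, second)  # 0.7 0.7
```

In the actual system, the stored value would additionally be refined over time as more data for that user and context are analysed.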

(53) FIG. 8 is a schematic overview of how data from at least one sensor may be used to determine a navigation solution, for example for the smart phone 100 described above. Box 801 schematically represents the navigation solution generated by the NSU 40, at time instance T.sub.1, which here is at, or within a small interval behind, the present time. The navigation solution of the NSU is based on data from the IMU, the sensors of which may have sampling rates in the region of ˜100-1000 samples per second, and so the time period of box 801 may be of the order of ˜1-10 ms. In conventional solutions, this “instantaneous” data obtained from an IMU would simply be used to output the navigation solution for that instant in time, and would therefore be subject to large error drift.

(54) As schematically shown in FIG. 8, the techniques introduced herein may utilize data obtained from past and future time periods with respect to the NSU/IMU time instance T.sub.1 in order to constrain the navigation solution at that time instance. Information from any of the boxes 801 to 804 may be analysed intra- and inter-box using a variety of methods in order to optimally filter and combine all of the available data, thus resulting in an improved navigation solution. In this example, box 802, labelled “ZUPT” (zero velocity update), represents data obtained by the IMU which has been analysed to determine whether or not the IMU was stationary at time instances during the time interval represented by box 802. Periods of detected zero velocity may be used, for example, to reset the velocity measurement being tracked by the NSU, or to determine biases in the inertial sensors of the IMU which may then be used to constrain the NSU solution at time instance T.sub.1.
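Zero-velocity detection and the resulting velocity reset can be sketched as follows (the gravity-magnitude test and the threshold value are common illustrative choices, not details taken from the patent):

```python
def is_stationary(accel_magnitudes, gravity=9.81, threshold=0.05):
    """Flag an interval as zero-velocity (ZUPT) when the measured
    acceleration magnitude (m/s^2) stays close to gravity, i.e. there
    is no dynamic acceleration."""
    return all(abs(a - gravity) < threshold for a in accel_magnitudes)

def apply_zupt(tracked_velocity, stationary):
    """A confidently detected zero-velocity period resets the velocity
    being tracked by the NSU, bounding inertial drift."""
    return 0.0 if stationary else tracked_velocity

quiet = [9.80, 9.82, 9.81, 9.79]   # device at rest
moving = [9.80, 11.3, 8.1, 10.2]   # device being carried

print(apply_zupt(0.3, is_stationary(quiet)))   # 0.0 (drifted velocity reset)
print(apply_zupt(0.3, is_stationary(moving)))  # 0.3 (left untouched)
```

A production detector would typically use a windowed variance test and also feed the detected bias back into the inertial sensor model, as the paragraph above describes.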

(55) Box 803 may represent data obtained from a GNSS sensor that can be used to provide position and velocity data to further constrain the NSU solution. Additionally, information from box 802 may be used to filter out GNSS data in box 803 that is in violation of a confidently determined zero velocity condition. Attitude data may be obtained from an accelerometer and/or a gyroscope and/or magnetometer sensor.
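The cross-check between boxes 802 and 803 can be sketched as a filter that drops GNSS fixes contradicting a detected zero-velocity interval (the fix representation and the speed tolerance are illustrative assumptions):

```python
def filter_gnss(fixes, zupt_intervals, speed_tolerance=0.2):
    """Drop GNSS fixes whose reported speed (m/s) violates a confidently
    determined zero-velocity condition. Each fix is (time, speed); each
    ZUPT interval is (start, end)."""
    def within_zupt(t):
        return any(start <= t <= end for start, end in zupt_intervals)
    return [(t, v) for (t, v) in fixes
            if not (within_zupt(t) and v > speed_tolerance)]

fixes = [(0.0, 1.5), (5.0, 0.1), (5.5, 1.8), (10.0, 2.0)]
kept = filter_gnss(fixes, zupt_intervals=[(4.0, 6.0)])
print(kept)  # [(0.0, 1.5), (5.0, 0.1), (10.0, 2.0)] -- the 5.5 s fix is dropped
```

The fix at 5.5 s reports 1.8 m/s during a period the IMU confidently determined to be stationary, so it is rejected as a GNSS outlier (e.g. multipath).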

(56) A motion model schematically illustrated at box 804 may have been pre-selected or automatically determined (for example through analysis of the raw IMU data, or from the position, velocity and attitude data). The motion model may include one or more parametric models describing some aspect of the user's motion. Analyses of sensor data contained in one or more of the boxes 801 to 804 may be performed to extract values subsequently used as input to such parametric models. For example, stepping cadence may be extracted from accelerometer data, and subsequently input to a function modelling pedestrian step length. As has been described above, the motion model at box 804 may be used to aid filtering of input sensor data and constrain the navigation solution.
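The cadence-to-step-length example can be sketched with a simple linear parametric model (the linear form and the coefficients are illustrative; in the system they would be the per-user parameters refined over time):

```python
def step_length(cadence_hz, a=0.25, b=0.30):
    """Hypothetical linear pedestrian step-length model:
    length (m) = a + b * cadence (steps/s)."""
    return a + b * cadence_hz

def distance_walked(step_count, cadence_hz):
    """Pedestrian dead-reckoning distance from step count and cadence,
    both of which may be extracted from accelerometer data."""
    return step_count * step_length(cadence_hz)

# Cadence extracted from accelerometer data, e.g. 2 steps per second.
print(round(step_length(2.0), 3))          # 0.85
print(round(distance_walked(10, 2.0), 2))  # 8.5
```

Such a model converts a quantity that is easy to observe inertially (cadence) into a constraint on the navigation solution (distance travelled).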

(57) FIG. 8 illustrates boxes 802, 803 and 804 as having different sizes (i.e. time periods), but this does not necessarily have to be the case and is for the purposes of illustration only. However, advantageously, the data obtained and used to constrain the NSU solution at time instance T.sub.1 have been obtained over longer time periods, and extend into both the past and future, with respect to time instance T.sub.1. The evolution of the metric of interest (here positioning data determined from the NSU) may be determined between first and second time instances by combining the positioning solution obtained at a plurality of such time instances.

(58) As will be appreciated, the diagram of FIG. 8 is for illustrative purposes only, and the boxes may represent different sensor data streams and analyses.

(59) The main example used in the detailed description has been that of a smart phone, where the measurement sensors are all positioned on a common platform (i.e. the smart phone itself). However, in other embodiments, data from sensors positioned on two (or more) platforms may be used in order to determine the evolution of a metric of interest. For example, in some embodiments, a first platform may be a smart phone carried by a user and the second platform may be a smart watch worn by the user. In other words, both platforms here can be thought of as freely-moving sensor sets that provide independent measurements about the user. By determining the evolution of a metric of interest for both platforms (e.g. position data for both the smart phone and smart watch), the evolution of that metric of interest can be reliably inferred for the user (i.e. the trajectory of the user). Furthermore, on analysis of the sensor data, comparisons between the data obtained on the different platforms may be performed. For example, data obtained from an inertial sensor of the smart watch may be cross-referenced with data obtained from a GNSS sensor of the smart phone.

(60) In another example, a vehicle has sensors mounted onto its chassis (the first platform) which measure the vehicle's acceleration and rate of turning. The driver carries a smartphone in a pocket (the second platform) which has a built-in GNSS receiver, as well as sensors measuring acceleration, rate of turning and so on. Data obtained from all of these sensors are analysed together using the methods outlined above to determine the trajectory of the driver's pocket and/or the vehicle with the lowest uncertainty.