Method and system for combining sensor data
11815355 · 2023-11-14
Assignee
Inventors
- Ramsey Faragher (Cambridge, GB)
- Mark Crockett (Hertfordshire, GB)
- Peter Duffett-Smith (Huntingdon, GB)
CPC classification
- G01C21/005 (PHYSICS)
- G01C21/188 (PHYSICS)
- G01S19/48 (PHYSICS)
- G01C21/12 (PHYSICS)
- G01S19/485 (PHYSICS)
- G01S19/396 (PHYSICS)
International classification
- G01C21/00 (PHYSICS)
- G01C21/06 (PHYSICS)
- G01C21/16 (PHYSICS)
- G01C22/00 (PHYSICS)
Abstract
A method and system for combining data obtained by sensors, having particular application in the field of navigation systems, are disclosed. The techniques provide significant improvement over state-of-the-art Markovian methods that use statistical noise filters such as Kalman filters to filter data by comparing instantaneous data with the corresponding instantaneous estimates from a model. In contrast, the techniques disclosed herein use multiple time periods of various lengths to process multiple sensor data streams, in order to combine sensor measurements with motion models at a given time epoch with greater confidence and accuracy than is possible with traditional “single epoch” methods. The techniques provide particular benefit when the first and/or second sensors are low-cost sensors (for example as seen in smart phones) which are typically of low quality and have large inherent biases.
Claims
1. A system comprising: at least one sensor, mounted on a platform, configured to make measurements from which platform position or platform movement may be determined, in order to determine an evolution of platform position; and a processor adapted to perform, for each of a plurality of time instances, the steps of: obtaining, in a first time period, first data from at least one sensor mounted on the platform, where the at least one sensor makes measurements from which platform position or movement is determined; obtaining, in a second time period, second data from the at least one sensor mounted on the platform; comparing the first and second data to each other and to a motion model representing expected motion of the platform over at least a part of the respective first and second time periods to obtain corrected first data and corrected second data; and determining the evolution of the position of the platform, where the evolution is constrained by at least one of the corrected first data and corrected second data.
2. The system of claim 1, wherein the at least one sensor comprises at least one of: an accelerometer, a gyroscope, a magnetometer, a barometer, a GNSS unit, a radio frequency receiver, a pedometer, a camera, a light sensor, a pressure sensor, a strain sensor, a proximity sensor, a RADAR, and a LIDAR.
3. The system of claim 1, wherein the platform is an electronic user device.
4. The system of claim 3, wherein the electronic user device is a smart phone or a wearable device.
5. A method of determining an evolution of position of a platform over time as the platform is being transported on a vehicle or a human user, comprising: obtaining, in a first time period, first data from at least one sensor mounted on the platform, where the at least one sensor makes measurements from which platform position or platform movement is determined; obtaining, in a second time period, second data from the at least one sensor mounted on the platform; comparing the first and second data to each other and to a motion model representing expected motion of the platform over at least a part of the respective first and second time periods to obtain corrected first data and corrected second data; and determining the evolution of the position of the platform, where the evolution is constrained by at least one of the corrected first data and corrected second data.
6. The method of claim 5, further comprising analyzing the first and second data over at least a part of the respective first and second time periods in order to assess the reliability and/or accuracy of the first and second data.
7. The method of claim 6, wherein analyzing the first and second data comprises performing self-consistency checks on the data across the corresponding time period.
8. The method of claim 6, wherein the duration of the first time period, the second time period and/or their amount of overlap relative to each other and the respective time instance is dynamically adjusted in response to the assessed reliability and/or accuracy of the associated sensor data.
9. The method of claim 5, wherein the first data and/or second data are iteratively analyzed forwards and/or backwards in time.
10. The method of claim 9, wherein a first analysis of the first and/or second data is based on at least one confidence threshold and the at least one confidence threshold is modified after the first analysis; wherein a subsequent analysis of said first and/or second data is based on the modified confidence threshold.
11. The method of claim 5, wherein the motion model comprises at least one of a motion context component comprising a motion context of the platform, and a position context component comprising a position context of the platform.
12. The method of claim 5, wherein the motion model comprises at least one parameter that quantitatively describes an aspect of the motion, and/or at least one function that may be used to determine a parameter that quantitatively describes an aspect of the motion.
13. The method of claim 12, wherein the at least one parameter is one of: a pedestrian step length, a pedestrian speed, a pedestrian height, a pedestrian leg length, a stair rise, a stair run or a compass-heading to direction-of-motion offset.
14. The method of claim 12, wherein the at least one function is a function determining the pedestrian step length or speed.
15. The method of claim 5, wherein at least one component of the motion model is automatically determined from an analysis of the first and/or second data, or wherein at least one component of the motion model is automatically determined from an analysis of data obtained from the at least one sensor during at least one previous determination of an evolution of platform position.
16. The method of claim 5, wherein the constraining of the evolution comprises determining a measurement bias of the at least one sensor and correcting for said bias.
17. The method of claim 5, wherein at least one of the first and second time periods extends into the future with respect to a present time instance in which the evolution is being determined.
18. The method of claim 5, wherein the at least one sensor comprises at least one of: an accelerometer, a gyroscope, a magnetometer, a barometer, a GNSS unit, a radio frequency receiver, a pedometer, a camera, a light sensor, a pressure sensor, a strain sensor, a proximity sensor, a RADAR, and a LIDAR.
19. The method of claim 5, wherein the platform comprises an electronic user device.
20. The method of claim 5, wherein the at least one sensor used to obtain the first data is different from the at least one sensor used to obtain the second data.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Examples of the techniques introduced herein will now be described with reference to the attached drawings, in which:
DETAILED DESCRIPTION
(12) The device 100 also comprises an inertial measurement unit (IMU) 10, which here includes an accelerometer 12, a gyroscope 14 and a magnetometer 16. The accelerometer is configured to measure the acceleration of the device; the gyroscope is configured to measure the rotation rate of the device; and the magnetometer is configured to measure the strength and direction of the local magnetic field, and hence the compass heading of the device 100. The accelerometer, gyroscope and magnetometer may be MEMS devices, which are typically tri-axial, with a separate sensor for each orthogonal axis. Other inertial sensors may be included in such an IMU.
(13) The device 100 further comprises a GNSS sensor 20, such as a GPS or GLONASS sensor (or both), and a light sensor 30. In other embodiments, other sensors may be used, examples of which have been discussed above.
(14) Each of the communications module 2, memory 3, screen 4, local storage 5, battery 7, IMU 10, GNSS sensor 20 and light sensor 30 is in logical communication with the processor 1. The processor 1 is also in logical communication with Navigation Solution Unit (NSU) 40, which is operable to obtain data from the device sensors and determine from said data, amongst other metrics, the position and velocity of the device. For simplicity, in the following description, the evolution of the metric of interest is the position of the device 100, i.e. its trajectory over time.
(15) The NSU 40 may be implemented in hardware, firmware, software, or a combination thereof.
(16) Conventionally, if it is desired to track the evolution of a device's position between two time instances T.sub.1 and T.sub.2, as schematically illustrated in
(18) Box 210 in
(19) Similarly, box 220 schematically represents data obtained by the sensors of the IMU 10, GNSS sensor 20 and light sensor 30 over the time period between T.sub.1 and T.sub.2 which, for the purposes of this discussion, will be referred to as the first time period. In this example, the evolution of the metric of interest (position) is being measured between time instances T.sub.1 and T.sub.2 corresponding to the first time period, although this is not necessarily the case. For example, the evolution of the position may be desired to be determined between the time instances T.sub.0 and T.sub.2.
(20) The data obtained in the first time period (the first data) and the data obtained in the second time period (the second data) are provided to the NSU 40 which calculates the desired solution, in this case the trajectory of the device between time instances T.sub.1 and T.sub.2. The solution is constrained by both the first data and the second data, and a motion model of the device between time instances T.sub.1 and T.sub.2.
(21) The motion model represents the general motion of the device, and preferably comprises three components: a position context, a motion context and at least one parameter that quantitatively represents the motion of the device. The position context describes how the smart phone is being supported, and may comprise the attitude of the device, for example a compass-heading to direction-of-motion offset. More generally, this can be used to infer the mode in which the device is being carried, which in the case of a smart phone may be in a pocket, beside the ear, in the user's hand swinging by their side, held in front of the user's face, etc. The motion context generally describes the mode by which the user of the smart phone (and hence the smart phone itself) is moving, for example walking, running, crawling or cycling. Finally, the at least one parameter quantitatively describes an aspect of the motion, for example a step length or speed of the user.
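The patent does not prescribe any data structure for this three-component model; as an illustrative sketch only, with field names and example values of our own choosing, it might be represented as:

```python
from dataclasses import dataclass

# Illustrative only: the patent describes a motion model with a position
# context, a motion context and at least one quantitative parameter, but
# does not specify how it is represented. The max_speed_mps bound is an
# assumption added to show how a context constrains plausible motion.
@dataclass
class MotionModel:
    position_context: str   # e.g. "in pocket", "in hand", "beside ear"
    motion_context: str     # e.g. "walking", "running", "cycling"
    step_length_m: float    # quantitative parameter, e.g. pedestrian step length
    max_speed_mps: float    # plausibility bound implied by the motion context

# Example instance for the "walking, in a pocket" scenario used in the text.
WALKING_IN_POCKET = MotionModel("in pocket", "walking", 0.8, 3.0)
```

A model such as this can then be consulted to reject sensor events, such as a sudden large change in position, that are implausible under the declared motion context.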
(22) As the solution is constrained by the first data, second data and the motion model, this means that erroneous data obtained from the sensors may be corrected for in providing the position solution. For example, if the motion model comprises a motion context of “walking” and a position context of “in a user's pocket”, then any data obtained by, say, the GNSS sensor that does not align with the motion model (for example a spurious data event that would not be possible by walking, such as a sudden and large change in position) can be corrected for or removed when providing the position solution.
(23) The motion model may be pre-selected by a user of the smart phone, for example through interaction with an appropriate graphical user interface (GUI) presented to the user via screen 4. At time T.sub.1 for example, the user may decide to go for a run, and select a motion model that comprises a motion context of “running”. When determining the evolution of the smart phone's position between the time instances T.sub.1 and T.sub.2, the solution provided by the NSU 40 is constrained by the first data, second data and the “running” motion model.
(24) Alternatively, the first and/or second data may be used by an analysis module 42 of the NSU 40 to automatically select a motion model for use in generating the position solution. For example, the second data may be analysed by the analysis module 42, with the data from the accelerometer and gyroscope sensors used to determine a current motion and/or position context. Data from the accelerometer 12 may be used to determine a step cadence of the user, from which it may be determined that the motion context is “walking” and that the step length of the user is 0.8 m. In addition to data obtained from the gyroscope sensor 14 that may be used to determine a position context, the light sensor 30 may detect minimal or no light during the time period between T.sub.0 and T.sub.3; if this occurs during daylight hours, it can be inferred that the smart phone is in an enclosed space such as a pocket or a bag. The second time period can therefore be thought of as providing a context under which the data from the first time period is processed to provide the location solution. The first data may also be used to select a motion model automatically.
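The cadence-and-light inference just described can be sketched as a simple classifier. All numeric thresholds below are illustrative guesses; the patent specifies no values.

```python
def infer_contexts(step_cadence_hz, light_lux, is_daytime):
    """Infer (motion context, position context) from accelerometer-derived
    step cadence and the light sensor. Thresholds are assumptions made
    for illustration, not values from the patent."""
    if step_cadence_hz == 0:
        motion = "stationary"
    elif step_cadence_hz < 2.5:
        motion = "walking"
    else:
        motion = "running"
    # Minimal light during daylight hours suggests an enclosed space
    # such as a pocket or a bag, as described in the text.
    if light_lux < 5 and is_daytime:
        position = "enclosed (pocket/bag)"
    else:
        position = "exposed"
    return motion, position
```

For example, a cadence of 1.8 Hz with a dark light reading during the day would yield the "walking, enclosed" combination used in the worked example above.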
(25) In particularly advantageous embodiments, the techniques introduced herein take advantage of the fact that a user of a portable device such as a smart phone 100 is likely to make numerous such journeys. Therefore, the motion model parameters for a particular user and device/sensor combination may be automatically learned by machine learning algorithms and refined over a plurality of such journeys spanning a wide range of motion and position contexts. For example, it may be determined that after a plurality of journeys, a reliable step length for the user of a smart phone is 0.75 m rather than the initially-determined 0.8 m, thereby providing more accurate navigation solutions. The motion models and their various parameters may be stored in the local storage 5 on the device, or by other addressable storage means (e.g. through use of a client-server system), and indexed by position context and/or motion context. Subsequently, a stored motion model and its corresponding parameters may be automatically selected from the addressable storage when a particular motion and/or position context is determined.
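The refinement across journeys is described only at the level of "machine learning algorithms". As a deliberately simple stand-in, a weighted running mean over per-journey step-length estimates reproduces the behaviour of the example in the text (an initial 0.8 m estimate refined towards 0.75 m):

```python
def refine_step_length(prior_m, prior_weight, new_estimates_m):
    """Weighted running mean: an assumed, minimal stand-in for the
    machine-learning refinement of a user's step length over journeys."""
    total = prior_m * prior_weight
    count = prior_weight
    for estimate in new_estimates_m:
        total += estimate
        count += 1
    return total / count

# An initial 0.8 m estimate (weight 1), refined by four journeys each
# suggesting 0.75 m, moves towards the more reliable value.
refined = refine_step_length(0.8, 1, [0.75, 0.75, 0.75, 0.75])
```

The refined value would then be stored in the addressable storage, indexed by position and/or motion context, for automatic recall on later journeys.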
(26) A user may use more than one device, for example a smart phone and a wearable device such as a smart watch. In other embodiments the motion model and associated parameters may be further indexed by device and/or user, such that, in general, a motion model appropriate for a device and its current user may be selected automatically.
(27) In preferred embodiments, the data obtained during the first 220 and second 210 time periods is analysed by the analysis module 42 of the NSU 40 during the determination of the device trajectory between time instances T.sub.1 and T.sub.2. Such analysis advantageously allows for the assessment of the reliability and accuracy of the data obtained during the first and second time periods, in order to provide improved navigation solutions as compared to state-of-the-art systems that instead rely on instantaneous estimates. As seen in
(28) Such analysis as performed by the analysis module 42 may comprise self-consistency analysis (analysing the data obtained by a sensor across the respective time period) and cross-reference analysis, where the data from one sensor is compared with the data from another sensor and/or the motion model. The analysis performed by the analysis module may comprise analysing the obtained data forward and/or backwards in time, or as a batch process.
(29) Take, for example, a situation where the motion model is “walking”, but on analysis of the data obtained from the GNSS sensor, a data event is observed that does not fit with such a walking motion model (e.g. a sudden shift in position by 50 m). This data event can be flagged as unreliable and hence not used in the final position solution provided by the NSU 40.
(30) The analysis performed by the analysis module 42 may comprise processing the obtained data forwards and backwards in time, which allows any asymmetries in them to be used in determining their reliability and accuracy.
(31) However, with just the forward-in-time processing it is unclear whether the data before or after time instance T.sub.B is in error. In a naïve system that only uses forwards-in-time analysis, some or all of the “accurate” position data after time instance T.sub.B (illustrated here at “C”) may be rejected, as these are now inconsistent with the navigation solution that has been corrupted by the erroneous data from time period A. In extreme cases the system might never recover the true navigation solution. By applying backwards-in-time processing to the same data (i.e. from T.sub.B to T.sub.A), the almost discontinuous jump in position over the time period B is detected as being inconsistent with the allowed motion of the device, and consequently the position data is ignored (or assigned lower confidence) when combining data to obtain the navigation solution. This continues until such time as the position data is deemed consistent with the current navigation solution and the “allowed” motion of the device from the motion model. For example, in
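The asymmetry argument above can be made concrete with a small sketch (not the patent's method: the anchor-based consistency test, 1-D positions and all values are our own simplification). A single-direction pass flags fixes inconsistent with the maximum speed allowed by the motion model; rejecting only fixes flagged in both directions isolates the erroneous jump while the good data after it ("region C") survives.

```python
def directional_flags(positions, dt_s, max_speed_mps):
    """One directional pass: flag fixes whose displacement from the last
    trusted fix exceeds the motion model's allowed speed."""
    flags = [False] * len(positions)
    trusted = 0
    for i in range(1, len(positions)):
        allowed_m = max_speed_mps * dt_s * (i - trusted)
        if abs(positions[i] - positions[trusted]) > allowed_m:
            flags[i] = True        # inconsistent with allowed motion
        else:
            trusted = i            # fix accepted; becomes new anchor

    return flags

def flag_bidirectional(positions, dt_s, max_speed_mps):
    """Reject only fixes implausible when processed both forwards and
    backwards in time, so accurate data after a jump is not discarded."""
    fwd = directional_flags(positions, dt_s, max_speed_mps)
    bwd = directional_flags(positions[::-1], dt_s, max_speed_mps)[::-1]
    return [f and b for f, b in zip(fwd, bwd)]

# A walking track (max ~3 m/s) with a spurious ~50 m jump in the middle:
# only the three jump samples are flagged, not the recovery afterwards.
flags = flag_bidirectional([0, 1, 2, 52, 53, 54, 5, 6, 7], 1.0, 3.0)
```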
(32) The analysis may comprise iteratively processing the obtained data forward and/or backwards in time. For example, confidence thresholds applied to an initial pass of the data may be such that all of the obtained data is allowed to be used for the navigation solution provided by the NSU 40. Of course, these data may include erroneous results such as the GNSS data obtained between time instance T.sub.A and T.sub.B depicted in
(33) Confidence thresholds may be set initially according to the known statistical noise properties of the device sensors, or from information derived from other sensors. For example, the accelerometer 12 may provide acceleration information in order to set thresholds on the expected variation in the GNSS frequency measurements during dynamic motions. On a subsequent pass through the data, the biases on the accelerometer may be known more accurately (for example, zero velocity update analysis may be performed during an initial analysis which reveals bias information). This means that the acceleration data is more accurate and reliable and therefore the confidence threshold on the expected frequency variation can be tighter (i.e. smaller variation). As a result, any error in frequency measurement that was allowed during the first pass of the data may now be ignored on the second pass due to the tighter (more confident) confidence thresholds.
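The iterative tightening described above can be sketched as re-filtering the same data on successive passes with progressively smaller thresholds. The median-based acceptance test below is an assumed stand-in for the sensor-specific noise models in the text.

```python
import statistics

def multipass_filter(values, thresholds):
    """Re-process the same data on successive passes, each with a tighter
    confidence threshold (maximum deviation from the median of the data
    accepted so far). Thresholds per pass are supplied by the caller."""
    accepted = list(values)
    for threshold in thresholds:          # one entry per analysis pass
        centre = statistics.median(accepted)
        accepted = [v for v in values if abs(v - centre) <= threshold]
    return accepted

# A loose first pass accepts everything (including the 55.0 outlier);
# the tighter second pass, centred on the now better-known statistics,
# rejects it.
cleaned = multipass_filter([10.0, 10.2, 9.8, 55.0], [100.0, 1.0])
```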
(34) Referring back to
(35) Confidence thresholds and the related analyses forwards and backwards in time may be applied to sensor data as described, and also to derived data and metrics of interest, including platform states, pose, and derived or latent variables such as sensor biases and calibration parameters.
(36) In
(38) Typically, when the GNSS data are analysed, if an event that is deemed to be unreliable is found (such as region B in
(39) In order to determine the trajectory of the device between the confident sections, the data obtained in the confident sections are analysed and used to determine biases in the accelerometer 12, gyroscope 14 and magnetometer 16 sensors, calculate step length parameters of the motion model, and determine the direction-of-motion to heading offset, such that the data obtained from the IMU sensors, together with the motion model, may be used to determine the trajectory of the device accurately and reliably between the confident hatched regions.
(40) Alternatively or in addition, data obtained from the accelerometer 12, gyroscope 14 and magnetometer 16 during the time periods between the confident sections may be used in combination with multiple estimates of one or more parameters to determine a plurality of possible trajectories, and the trajectory that best aligns with the confident sections may be used. For example, a plurality of direction-of-motion to heading-offset estimates may be used to generate trial trajectories during the low-confidence time periods, with the optimal trajectory being chosen as that which best aligns with the end-points of the trajectory in the high-confidence time periods. The trajectory between the confident sections (as well as the data and system parameters contributing to its construction) may also be constrained using prior information obtained from a map to help determine which of the plurality of trajectories is the most likely. For example, if one of the possible trajectories involved crossing a river where no bridge was located, that particular trajectory could be ruled out; or if the map data indicates that a pedestrian would most likely travel along a straight path while the navigation solution shows a curved trajectory, this could be used to identify and correct a yaw-axis gyroscope bias.
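The trial-trajectory idea can be sketched as a grid search over candidate heading offsets: each candidate produces a dead-reckoned path, and the one whose end-point best matches the confident section wins. This formulation (1-D grid search, fixed step length) is our simplification of the patent's multiple-estimate approach, not its stated algorithm.

```python
import math

def dead_reckon(start, step_length_m, headings_rad, offset_rad):
    """Integrate pedestrian steps using compass headings corrected by a
    trial compass-heading to direction-of-motion offset."""
    x, y = start
    for heading in headings_rad:
        x += step_length_m * math.sin(heading + offset_rad)
        y += step_length_m * math.cos(heading + offset_rad)
    return x, y

def best_offset(start, end, step_length_m, headings_rad, trial_offsets_rad):
    """Choose the trial offset whose trajectory best aligns with the
    confident end-point of the high-confidence section."""
    def miss(offset):
        x, y = dead_reckon(start, step_length_m, headings_rad, offset)
        return math.hypot(x - end[0], y - end[1])
    return min(trial_offsets_rad, key=miss)
```

For instance, four 1 m steps with the compass reading north but the confident fixes showing the user 4 m to the east are best explained by a 90° heading-to-direction-of-motion offset.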
(41) In further embodiments, a least-squares fit of the entire trajectory (or piecewise fitting of sections of the trajectory) constrained by the data from the confident sections may be carried out.
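As one concrete (non-patent) instance of such a fit, a closed-form 1-D least-squares line through the confident fixes gives both an interpolated trajectory and a speed estimate; the patent leaves the fitting formulation unspecified.

```python
def least_squares_line(times_s, positions_m):
    """Closed-form 1-D least-squares line fit through confidently known
    fixes; a minimal illustration of fitting a trajectory section
    constrained by confident data."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_p = sum(positions_m) / n
    num = sum((t - mean_t) * (p - mean_p) for t, p in zip(times_s, positions_m))
    den = sum((t - mean_t) ** 2 for t in times_s)
    slope = num / den                    # estimated speed (m/s)
    intercept = mean_p - slope * mean_t  # position at t = 0
    return slope, intercept
```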
(42) As is schematically seen in
(43) However, there are some situations, for example navigation, where real-time information is demanded by a user of such a device. In these cases, the second time instance T.sub.2 may be at, or within a small interval behind, present time, as schematically illustrated in
(44) Therefore, the trajectory of the device may be provided to the user in near-real-time, with the solution for the present time only constrained by past data. However, advantageously, position solutions for previous times may be continually updated as new data are obtained. For example, at time instance T.sub.2, both the first and second time periods extend into the future with respect to T.sub.1, and therefore at T.sub.2 a more accurate and reliable position of the device at T.sub.1 may be determined, as compared to the solution that was determined at the time instance T.sub.1 itself. In this manner, the techniques introduced herein provide for near-real-time determination of the evolution of a metric of interest, as well as an improved determination of said evolution over time as more data are obtained.
(46) At step 602, the NSU 40 obtains data from the first time period, and at step 603 obtains data from the second time period. Although these are set out in method 600 as separate steps, it will be appreciated that the NSU 40 may obtain the data from the first and second time periods substantially simultaneously. In other embodiments the data from the second time period may be obtained before the data from the first time period.
(47) At step 604, the first and second data are analysed by the analysis module 42, as described above, and at step 605, the NSU 40 determines the evolution of the metric of interest between first and second time instances. In the series of steps set out in method 600, the motion model is shown as being pre-selected before the obtaining of data. However, it will be appreciated that the motion model may be selected by the user after the obtaining of the data.
(49) At step 703 the first and second data are analysed by the analysis module 42.
(50) At step 704, from the analysis performed at step 703, the motion and/or position context of the device is determined.
(51) At decision step 705, it is determined whether or not a motion model parameter that corresponds to the current user, device, and the position and/or motion context that was determined in step 704, is stored in addressable storage (for example local storage 5). If such a parameter does exist, it is recalled (step 708) and at step 709 the NSU 40 determines the evolution of the metric of interest between the first and second time instances, with the evolution constrained by the first data, second data and the motion model including the at least one parameter recalled from local storage. The quantitative motion model parameter may comprise an estimate of the step length of the user, for example. As has been discussed above, advantageously such a parameter may have been refined during previous analyses of data obtained by the device for the user.
(52) If, at step 705, it is determined that no corresponding motion model parameter exists, then the method moves to optional step 706 where the determined motion and/or position context is used to determine at least one parameter associated with the motion of the device. Optionally, this may be stored in addressable storage (step 707), indexed by the determined motion and/or position context and the identity of the device. Subsequently, at step 709 the NSU 40 determines the evolution of the metric of interest between time instances T.sub.1 and T.sub.2, with the evolution constrained by the first data, second data and motion model.
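Decision steps 705–708 amount to a keyed cache lookup with a derive-and-store fallback. The sketch below paraphrases that flow; the storage scheme, key structure and names are our assumptions.

```python
def get_motion_model_param(store, user, device, context, derive):
    """Recall a stored parameter for this user/device/context, or derive
    and cache it. Paraphrases decision steps 705-708; the dict-based
    store and key layout are illustrative assumptions."""
    key = (user, device, context)
    if key not in store:              # step 705: no stored parameter found
        store[key] = derive(context)  # steps 706-707: determine and store
    return store[key]                 # step 708: recall for use at step 709
```

On a second call for the same user, device and context, the stored (and possibly refined) value is returned and the derivation is skipped.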
(54) As schematically shown in
(55) Box 803 may represent data obtained from a GNSS sensor that can be used to provide position and velocity data to further constrain the NSU solution. Additionally, information from box 802 may be used to filter out GNSS data in box 803 that is in violation of a confidently determined zero velocity condition. Attitude data may be obtained from an accelerometer and/or a gyroscope and/or magnetometer sensor.
(56) A motion model schematically illustrated at box 804 may have been pre-selected or automatically determined (for example through analysis of the raw IMU data, or from the position, velocity and attitude data). The motion model may include one or more parametric models describing some aspect of the user's motion. Analyses of sensor data contained in one or more of the boxes 801 to 804 may be performed to extract values subsequently used as input to such parametric models. For example, stepping cadence may be extracted from accelerometer data, and subsequently input to a function modelling pedestrian step length. As has been described above, the motion model at box 804 may be used to aid filtering of input sensor data and constrain the navigation solution.
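The cadence-to-step-length function mentioned above is not specified in the patent; as a purely illustrative sketch, a linear model driven by stepping cadence might look like:

```python
def step_length_from_cadence(cadence_hz, base_m=0.3, gain_m_per_hz=0.15):
    """Illustrative linear pedestrian step-length model driven by stepping
    cadence extracted from accelerometer data. The linear form and both
    coefficients are assumptions; the patent only states that cadence is
    input to a function modelling step length."""
    return base_m + gain_m_per_hz * cadence_hz
```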
(58) As will be appreciated, the diagram of
(59) The main example used in the detailed description has been that of a smart phone, where the measurement sensors are all positioned on a common platform (i.e. the smart phone itself). However, in other embodiments, data from sensors positioned on two (or more) platforms may be used in order to determine the evolution of a metric of interest. For example, in some embodiments, a first platform may be a smart phone carried by a user and the second platform may be a smart watch worn by the user. In other words, both platforms here can be thought of as freely-moving sensor sets that provide independent measurements about the user. By determining the evolution of a metric of interest for both platforms (e.g. position data for both the smart phone and smart watch), the evolution of that metric of interest can be reliably inferred for the user (i.e. the trajectory of the user). Furthermore, on analysis of the sensor data, comparisons between the data obtained on the different platforms may be performed. For example, data obtained from an inertial sensor of the smart watch may be cross-referenced with data obtained from a GNSS sensor of the smart phone.
(60) In another example, a vehicle has sensors mounted onto its chassis (the first platform) which measure the vehicle's acceleration and rate-of-turning. The driver carries a smartphone in his pocket (the second platform) which has a built-in GNSS receiver, as well as sensors measuring acceleration, rate of turning etc. Data obtained from all of these sensors are analysed together using the methods outlined above to determine the trajectory of the driver's pocket and/or the vehicle with the lowest uncertainty.