Optical tracking vehicle control system and method
RE048527 · 2021-04-20
Assignee
Inventors
- David R. Reeve (Chapel Hill, AU)
- Andrew John Macdonald (Graceville, AU)
- Campbell Robert Morrison (Corinda, AU)
CPC classification
B62D15/025
PERFORMING OPERATIONS; TRANSPORTING
G01S19/48
PHYSICS
G05D1/027
PHYSICS
G05D1/0253
PHYSICS
G05D1/0272
PHYSICS
G01S19/43
PHYSICS
International classification
G05D1/00
PHYSICS
B62D15/00
PERFORMING OPERATIONS; TRANSPORTING
B62D15/02
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A vehicle control system having a controller and a spatial database adapted to provide spatial data to the controller at control speed. The spatial data provided from the spatial database to the controller includes images collected from an optical sensor subsystem in addition to other data collected by a variety of sensor types, including a GNSS or inertial measurement system. The spatial data received by the controller from the database forms at least part of the control inputs that the controller operates on to control the vehicle. An advantage provided by the present invention is that it allows the control system to "think" directly in terms of spatial location. A vehicle control system in accordance with one particular embodiment of the invention comprises a task path generator, a spatial database, at least one external spatial data receiver, a vehicle attitude compensation module, a position error generator, a controller, and actuators to control the vehicle.
Claims
.[.1. A system for controlling a vehicle, the vehicle including an automatic steering system and roll, pitch and yaw axes, and the control system comprising: a spatial database containing spatial data corresponding to GPS-defined positions in the region; a controller mounted on said vehicle and adapted for computing guidance signals, to receive spatial data from the spatial database at control speed, and to control the steering of the vehicle; a guidance subsystem mounted on said vehicle and connected to said controller, said guidance subsystem being adapted for receiving said guidance signals from said controller and utilizing said guidance signals for guiding said vehicle; external spatial data sources mounted on said vehicle, comprising at least an optical movement sensor subsystem adapted for optically sensing movement of said vehicle relative to a surface over which said vehicle is traveling; said optical movement sensor subsystem including an optical movement sensor connected to said controller and adapted for providing optically-sensed vehicle movement signals thereto corresponding to optically-sensed relative vehicle movement; said optical movement sensor subsystem including an optical movement sensor and an optimal estimator providing a statistically optimal estimate of the position and attitude information received from the optical movement sensor; said optimal estimator including algorithms that receive the position and attitude information from the optical movement sensor and converts said information into a calculated or determined position and attitude of said vehicle producing a statistically optimal estimate of the calculated or determined position and attitude of said vehicle; said controller being adapted for computing said guidance signals utilizing said vehicle movement signals; the controller correlating images from said optical movement sensor subsystem to obtain data relating to the vehicle's motion; a vehicle reference point located at an 
intersection of the vehicle roll, pitch and yaw axes; and the spatial database being adapted to receive updated spatial data from the controller and the external spatial data sources as the vehicle traverses the region..].
.[.2. The system for controlling a vehicle according to claim 1, further comprising: a global navigation satellite system (GNSS) positioning subsystem mounted on said vehicle and adapted for providing GNSS-derived position signals to said controller; said controller using said GNSS-derived position signals for computing said guidance signals; said GNSS positioning subsystem including a pair of antennas mounted on said vehicle; and said antennas receiving GNSS ranging signals corresponding to their respective geo-reference locations..].
.[.3. The system for controlling a vehicle according to claim 2, further comprising: said processor being adapted for computing an attitude of said vehicle using ranging differences between the GNSS signals received by said antennas; and said GNSS antennas being mounted on said vehicle in transversely-spaced relation..].
.[.4. The system for controlling a vehicle according to claim 3, further comprising: said vehicle including a motive component and an implement connected to said motive component; a GNSS antenna mounted on said implement and connected to said GNSS receiver; and said guidance subsystem being adapted for automatically steering said vehicle utilizing said positioning signals to accommodate an offset between said tractor and implement and correct relative positioning of said tractor and implement to maintain said implement on a guide path..].
.[.5. The system for controlling a vehicle according to claim 4, further comprising: said guidance subsystem including an hydraulic steering valve block connected to said controller and to a steering mechanism of said vehicle; and said guidance subsystem including a graphic user interface (GUI) adapted for displaying a guide path of said vehicle..].
.[.6. The system for controlling a vehicle according to claim 5, further comprising: a GNSS base station including a radio transmitter and a radio receiver; said vehicle including an RF receiver adapted to receive RF transmissions from said base station; and a real-time kinematic (RTK) correction subsystem using carrier phase satellite transmissions with said vehicle in motion..].
.[.7. The system for controlling a vehicle according to claim 1 wherein said optical movement sensor subsystem includes: a pair of said optical movement sensors fixedly mounted in spaced relation on said vehicle..].
.[.8. The system for controlling a vehicle according to claim 1, wherein said external spatial data sources mounted on the vehicle further comprise: a GNSS system including an antenna and a receiver; an inertial navigation system (INS) including a gyroscope and an accelerometer; and a tilt sensor..].
.[.9. A control system as claimed in claim 8, wherein the controller uses the GPS system, the inertial navigation system, the gyroscope, the accelerometer and the tilt sensor to generate a control signal for controlling the vehicle..].
.[.10. A system for controlling an agricultural vehicle, the vehicle including an automatic steering system and roll, pitch and yaw axes, and the control system comprising: a spatial database containing spatial data corresponding to GPS-defined positions in the region; a controller mounted on said vehicle and adapted for computing guidance signals, to receive spatial data from the spatial database at control speed, and to control the steering of the vehicle; a guidance subsystem mounted on said vehicle and connected to said controller, said guidance subsystem being adapted for receiving said guidance signals from said controller and utilizing said guidance signals for guiding said vehicle; external spatial data sources mounted on said vehicle, comprising at least an optical movement sensor subsystem adapted for optically sensing movement of said vehicle relative to a surface over which said vehicle is traveling; said optical movement sensor subsystem including an optical movement sensor connected to said controller and adapted for providing optically-sensed vehicle movement signals thereto corresponding to optically-sensed relative vehicle movement; said optical movement sensor subsystem including an optical movement sensor and an optimal estimator providing a statistically optimal estimate of the position and attitude information received from the optical movement sensor; said optimal estimator including algorithms that receive the position and attitude information from the optical movement sensor and converts said information into a calculated or determined position and attitude of said vehicle producing a statistically optimal estimate of the calculated or determined position and attitude of said vehicle; said controller being adapted for computing said guidance signals utilizing said vehicle movement signals; the controller correlating images from said optical movement sensor subsystem to obtain data relating to the vehicle's motion; a vehicle reference point 
located at an intersection of the vehicle roll, pitch and yaw axes; the spatial database being adapted to receive updated spatial data from the controller and the external spatial data sources as the vehicle traverses the region; a global navigation satellite system (GNSS) positioning subsystem mounted on said vehicle and adapted for providing GNSS-derived position signals to said controller; said controller using said GNSS-derived position signals for computing said guidance signals; said GNSS positioning subsystem including a pair of antennas mounted on said vehicle; said antennas receiving GNSS ranging signals corresponding to their respective geo-reference locations; said processor being adapted for computing an attitude of said vehicle using ranging differences between the GNSS signals received by said antennas; said GNSS antennas being mounted on said vehicle in transversely-spaced relation; said vehicle including a motive component and an implement connected to said motive component; a GNSS antenna mounted on said implement and connected to said GNSS receiver; said guidance subsystem being adapted for automatically steering said vehicle utilizing said positioning signals to accommodate an offset between said tractor and implement and correct relative positioning of said tractor and implement to maintain said implement on a guide path; said guidance subsystem including an hydraulic steering valve block connected to said controller and to a steering mechanism of said vehicle; said guidance subsystem including a graphic user interface (GUI) adapted for displaying a guide path of said vehicle; a GNSS base station including a radio transmitter and a radio receiver; said vehicle including an RF receiver adapted to receive RF transmissions from said base station; and a real-time kinematic (RTK) correction subsystem using carrier phase satellite transmissions with said vehicle in motion..].
.[.11. A method for controlling a vehicle within a region to be traversed, the vehicle including an automatic steering system and roll, pitch and yaw axes, the method comprising the steps: providing a spatial database; populating said database with spatial data corresponding to GPS-defined positions in the region; providing a position error generator; providing a controller; mounting said controller to said vehicle; traversing the region with said vehicle; receiving spatial data with said controller from the spatial database at control speed; controlling the steering of the vehicle with the controller as the vehicle traverses the region; providing the controller with a task path generator; receiving data from the spatial database with the controller and controller task path generator; providing the controller with a vehicle attitude compensation module; mounting external spatial data sources, including at least an optical movement sensor subsystem, on said vehicle and optically sensing movement of said vehicle relative to a surface over which said vehicle is traveling; said optical movement sensor subsystem including an optimal estimator providing a statistically optimal estimate of the position and attitude information received from the optical movement sensor; providing said optimal estimator with algorithms that receive the position and attitude information from the optical movement sensor and convert said information into a calculated or determined position and attitude of said vehicle producing a statistically optimal estimate of the calculated or determined position and attitude of said vehicle; populating said spatial database with ground images from said optical movement sensor subsystem; inputting said ground images to the controller; correlating the images with said controller to obtain data relating to the vehicle's motion; designating and locating a vehicle reference point at an intersection of the vehicle roll, pitch, and yaw axes; and updating said 
spatial database with spatial data from the controller and said external spatial data sources as the vehicle traverses the region..].
.[.12. The method for controlling a vehicle according to claim 11, further comprising the steps: providing a global navigation satellite system (GNSS) positioning subsystem mounted on said vehicle and providing GNSS-derived position signals to said controller; providing said GNSS positioning subsystem with a pair of antennas mounted on said vehicle; receiving with said antennas GNSS ranging signals corresponding to their respective geo-reference locations; and computing with said processor an attitude of said vehicle using ranging differences between the GNSS signals received by said antennas..].
.[.13. The method for controlling a vehicle according to claim 12, further comprising the steps: mounting said GNSS antennas on said vehicle in transversely-spaced relation..].
.[.14. The method for controlling a vehicle according to claim 12, further comprising the steps: providing said vehicle with a motive component and an implement connected to said motive component; mounting a GNSS antenna on said implement and connecting said implement-mounted GNSS antennas to said GNSS receiver; and said guidance subsystem automatically steering said vehicle utilizing said positioning signals to accommodate an offset between said tractor and said implement and to maintain said implement on a guide path..].
.[.15. The method according to claim 11, which includes the additional steps of: providing said optical movement sensor subsystem with a pair of optical movement sensors; and fixedly mounting said optical movement sensors in spaced relation on said vehicle..].
.[.16. The method for controlling a vehicle according to claim 11, wherein said external spatial data sources mounted on the vehicle further comprise: a GNSS system including an antenna and a receiver; an inertial navigation system (INS) including a gyroscope and an accelerometer; and a tilt sensor..].
.[.17. The method for controlling a vehicle according to claim 16, wherein the controller uses the GPS system, the inertial navigation system, the gyroscope, the accelerometer and the tilt sensor to generate a control signal for controlling the vehicle..].
.Iadd.18. An apparatus for controlling a vehicle, the apparatus comprising: a spatial database containing spatial data corresponding to absolute positions in a region; and a controller, in communication with a single gimbal-mounted optical movement sensor of the vehicle or plural optical movement sensors mounted on the vehicle in transversely-spaced relation, the controller configured to: convert position and attitude information from the single optical movement sensor or the plural optical movement sensors into a calculated position and attitude of the vehicle, wherein the calculated attitude defines a roll, yaw, and pitch of the vehicle, steer the vehicle using the calculated position and attitude of the vehicle and the spatial data from the spatial database, and update the spatial database with updated spatial data as the vehicle traverses the region..Iaddend.
.Iadd.19. The apparatus of claim 18, further comprising an optimal estimator to calculate the calculated position and attitude of the vehicle by calculating a statistically optimal estimate of the position and attitude information received from the single optical movement sensor or the plural optical movement sensors..Iaddend.
.Iadd.20. The apparatus of claim 19, wherein the optimal estimator includes algorithms that receive the position and attitude information from the single optical movement sensor or the plural optical movement sensors and convert the information into the calculated position and attitude of the vehicle by calculating the statistically optimal estimate..Iaddend.
.Iadd.21. A method of controlling a vehicle having a single gimbal-mounted optical movement sensor mounted thereon or plural optical movement sensors mounted thereon in transversely-spaced relation, the method comprising: converting position and attitude information from the single optical movement sensor or the plural optical movement sensors into a calculated position and attitude of the vehicle, wherein the calculated attitude defines a roll, yaw, and pitch of the vehicle; steering the vehicle using the calculated position and attitude of the vehicle and spatial data corresponding to absolute positions in a region; wherein the spatial data is from a database, and the method further comprises updating the database with updated spatial data as the vehicle traverses the region..Iaddend.
.Iadd.22. The method of claim 21, further comprising calculating a statistically optimal estimate of the position and attitude information received from the single optical movement sensor or the plural optical movement sensors..Iaddend.
.Iadd.23. The method of claim 22, further comprising: receiving the position and attitude information from the single optical movement sensor or the plural optical movement sensors; and converting the received information into the calculated position and attitude of the vehicle by calculating the statistically optimal estimate..Iaddend.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Certain embodiments, aspects and features of the invention will now be described and explained by way of example and with reference to the drawings. However, it will be clearly appreciated that these descriptions and examples are provided to assist in understanding the invention only, and the invention is not limited to or by any of the embodiments, aspects or features described or exemplified.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(27) 1. Introduction and Environment
(28) As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present invention in virtually any appropriately detailed structure.
(29) Certain terminology will be used in the following description for convenience in reference only and will not be limiting. For example, up, down, front, back, right and left refer to the invention as oriented in the view being referred to. The words inwardly and outwardly refer to directions toward and away from, respectively, the geometric center of the embodiment being described and designated parts thereof. Global navigation satellite systems .[.(GNSS).]. are broadly defined to include GPS (U.S.), Galileo (Europe, proposed), .[.GLONASS.]. .Iadd.Globalnaya Navigazionnaya Sputnikovaya Sistema (GLONASS) .Iaddend.(Russia), Beidou (China), Compass (China, proposed), .[.IRNSS.]. .Iadd.Indian Regional Navigation Satellite System (IRNSS) .Iaddend.(India, proposed), .[.QZSS.]. .Iadd.Quasi-Zenith Satellite System (QZSS) .Iaddend.(Japan, proposed) and other current and future positioning technology using signals from satellites, using single or multiple antennae, with or without augmentation from terrestrial sources. Inertial navigation systems (INS) include gyroscopic (gyro) sensors, accelerometers and similar technologies for providing output corresponding to the inertia of moving components in all axes, i.e. through six degrees of freedom (positive and negative directions along longitudinal X, transverse Y and vertical Z axes). Yaw, pitch and roll refer to moving component rotation about the Z, Y and X axes respectively. Said terminology will include the words specifically mentioned, derivatives thereof and words of similar meaning.
(30) 2. Optical Vehicle Control System 2
(32) The tractor 10 is fitted with a steering control system. The steering control system includes .Iadd.a GNSS receiver 13 connected to antennas 20, .Iaddend.a controller 14 and a steering valve block 15. The controller 14 suitably includes a computer memory that is capable of having an initial path of travel entered therein. The computer memory is also adapted to store or generate a desired path of travel. The controller 14 receives position and attitude signals from one or more sensors (to be described later) and the data received from the sensors are used by the controller 14 to determine or calculate the position and attitude of the tractor. The controller 14 then compares the position and attitude of the tractor with the desired position and attitude of the tractor. If the determined or calculated position and attitude of the tractor deviates from the desired position and attitude of the tractor, the controller 14 issues a steering correction signal that interacts with a steering control mechanism. In response to the steering correction signal, the steering control mechanism makes adjustments to the angle of steering of the tractor, to thereby assist in moving the tractor back towards the desired path of travel. The steering control mechanism may comprise one or more mechanical or electrical controllers or devices that can automatically adjust the steering angle of the vehicle. These devices may act upon the steering pump, the steering column or steering linkages.
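The compare-and-correct loop described above can be sketched in a few lines (a hypothetical illustration only: the function name, the gains k_xte and k_hdg, and the sign conventions are assumptions, not taken from the patent):

```python
import math

def steering_correction(actual_xy, actual_heading, desired_xy, desired_heading,
                        k_xte=0.8, k_hdg=1.5):
    """Return a steering correction from position and attitude error.

    The gains k_xte and k_hdg are illustrative placeholders; the patent
    does not specify a particular control law.
    """
    # Cross-track error: signed perpendicular offset from the desired path
    # point, measured relative to the desired heading direction.
    dx = actual_xy[0] - desired_xy[0]
    dy = actual_xy[1] - desired_xy[1]
    xte = -dx * math.sin(desired_heading) + dy * math.cos(desired_heading)
    # Heading error, wrapped to [-pi, pi].
    hdg_err = (actual_heading - desired_heading + math.pi) % (2 * math.pi) - math.pi
    # Negative feedback: steer against both error terms.
    return -(k_xte * xte + k_hdg * hdg_err)
```

A zero correction results when the vehicle sits on the desired path with the desired heading; any offset or misalignment produces a proportional correction signal.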
(33) In one embodiment of the present invention, the steering control algorithm may be similar to that described in our U.S. Pat. No. 6,876,920, which is incorporated herein by reference and discloses a steering control algorithm, which involves entering an initial path of travel (often referred to as a wayline). The computer in the controller 14 then determines or calculates the desired path of travel, for example, by determining the offset of the implement being towed by the tractor and generating a series of parallel paths spaced apart from each other by the offset of the implement. This ensures that an optimal working of the field is obtained. The vehicle then commences moving along the desired path of travel. One or more sensors provide position and attitude signals to the controller and the controller uses those position and attitude signals to determine or calculate the position and attitude of the vehicle. This position and attitude is then compared with the desired position and attitude of the vehicle. If the vehicle is spaced away from the desired path of travel, or is pointing away from the desired path, the controller generates a steering correction signal. The steering correction signal may be generated, for example, by using the difference between the determined position and attitude of the vehicle and the desired position and attitude of the vehicle to generate an error signal, with the magnitude of the error signal being dependent upon the difference between the determined position and attitude and the desired position and attitude of the vehicle. The error signal may take the form of a curvature demand signal that acts to steer the vehicle back onto the desired path of travel. Steering angle sensors in the steering control mechanism may monitor the angle of the steering wheels of the tractor and send the data back to the controller to thereby allow the controller to correct for understeering or oversteering.
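The parallel-swath generation and curvature-demand steps can be illustrated as follows (a sketch; the function names, heading convention, and gain are assumptions, and U.S. Pat. No. 6,876,920 should be consulted for the actual algorithm):

```python
import math

def parallel_waylines(x0, y0, heading, offset, count):
    """Generate `count` waylines parallel to an initial A-B line, spaced by
    the implement offset (names and conventions are illustrative)."""
    # Unit vector perpendicular (to the right of) the wayline heading,
    # with heading measured from the +x axis.
    px, py = math.sin(heading), -math.cos(heading)
    return [(x0 + i * offset * px, y0 + i * offset * py, heading)
            for i in range(count)]

def curvature_demand(cross_track_error, k=0.05):
    """Map the position error to a curvature demand signal that steers the
    vehicle back onto the path (gain k is an assumed placeholder)."""
    return -k * cross_track_error
```

Each successive wayline is displaced by one implement width, so working the set of waylines covers the field without gaps or overlaps.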
(34) In an alternative embodiment, the error signal may result in generation of a steering guidance arrow on a visual display unit to thereby enable the driver of the vehicle to properly steer the vehicle back onto the desired path of travel. This manual control indicator may also be provided in conjunction with the steering controls as described in paragraph above.
(35) It will be appreciated that the invention is by no means limited to the particular algorithm described, and that a wide variety of other steering control algorithms may also be used.
(36) In general terms, most, if not all, steering control algorithms operate by comparing a determined or calculated position and attitude of the vehicle with a desired position and attitude of the vehicle. The desired position and attitude of the vehicle is typically determined from the path of travel that is entered into, or stored in, or generated by, the controller. The determined or calculated position and attitude of the vehicle is, in most, if not all, cases determined by having input data from one or more sensors being used to determine or calculate the position and attitude of the vehicle. In U.S. Pat. No. 6,876,920, GNSS sensors, accelerometers, wheel angle sensors and gyroscopes are used as the sensors in preferred embodiments of that patent.
(37) Returning now to
(38) The actual details of the controller will be readily understood by persons skilled in the art and need not be described further.
(39) The tractor 10 shown in
(40) In the embodiment shown in
(41) The optical tracking movement sensor 16 may comprise the operative part of an optical computer mouse. Optical computer mice incorporate an optoelectronics sensor that takes successive pictures of the surface on which the mouse operates. Most optical computer mice use a light source to illuminate the surface that is being tracked. Changes between one frame and the next are processed by an image processing part of a chip embedded in the mouse and this translates the movement of the mouse into movement on two axes using a digital correlation algorithm. The optical movement sensor 16 may include an illumination source for emitting light therefrom. The illumination source may comprise one or more LEDs. The optical movement sensor may also include an illumination detector for detecting light reflected from the ground or the surface over which the vehicle is travelling. Appropriate optical components, such as a lens (preferably a telecentric lens), may be utilized to properly focus the emitted or detected light. A cleaning system, such as a stream of air or other cleaning fluid, may be used to keep the optical path clean. The optical movement sensor 16 may comprise a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor. The optical movement sensor 16 may also include an integrated chip that can rapidly determine the relative movement along an axis of the vehicle and the relative movement across an axis of the vehicle by analysing successive frames captured by the illumination detector. The optical movement sensor can complete hundreds to thousands of calculations per second.
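The frame-to-frame digital correlation idea can be demonstrated with a brute-force search over candidate shifts (an illustrative sketch only; real sensor chips use fast on-chip approximations rather than this exhaustive search):

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the integer-pixel (dy, dx) shift between two successive
    ground frames by evaluating the correlation of candidate shifts."""
    # Remove the mean so uniform brightness does not bias the correlation.
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    best, best_shift = -np.inf, (0, 0)
    # Search a small window of candidate shifts; keep the best correlation.
    for dy in range(-3, 4):
        for dx in range(-3, 4):
            shifted = np.roll(np.roll(b, -dy, axis=0), -dx, axis=1)
            score = float((a * shifted).sum())
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```

The shift that maximizes the correlation is taken as the movement between the two frames, exactly as in the mouse-sensor chips described above, which perform this at hundreds to thousands of frames per second.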
(42) The optical movement sensor 16 generates signals that are indicative of the relative movement of the vehicle along the vehicle's axis and the relative movement of the vehicle across the vehicle's axis. The signals are sent to the controller 14. The signals received by the controller 14 are used to progressively calculate or determine changes in the position and attitude of the vehicle. In the embodiment shown in
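Converting the two relative-movement signals into progressive position updates amounts to rotating each body-frame displacement into the world frame and accumulating it (an illustrative dead-reckoning sketch; the patent leaves the integration details to the controller):

```python
import math

def dead_reckon(pose, along, across, dheading=0.0):
    """Advance a (x, y, heading) pose by optically sensed movement along
    and across the vehicle axis (conventions here are assumptions)."""
    x, y, h = pose
    # Rotate the body-frame displacement into the world frame.
    x += along * math.cos(h) - across * math.sin(h)
    y += along * math.sin(h) + across * math.cos(h)
    return (x, y, h + dheading)
```

Calling this once per sensor report accumulates the relative movements into a progressively updated position and attitude estimate.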
(43) Only one optical movement sensor 16 is illustrated in
(44) The alternative embodiment shown in
(45) 3. Alternative Embodiment Optical Control System 102
(47) The embodiment shown in
(48) The GNSS receiver on the tractor 110 receives GNSS signals from the constellation of GNSS satellites via GNSS antenna 120 mounted on the tractor 110. The signals are sent to controller 114. The signals received from GNSS receiver(s) 113 on tractor 110 are corrected by the error correction signal sent from the transmitter .[.138.]. .Iadd.136.Iaddend.. Thus, an accurate determination of position of the tractor can be obtained from the differential GNSS system.
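The differential principle can be sketched in the position domain (a simplification for illustration: operational DGNSS applies corrections per-satellite in the range domain, not to the final fix, and the function name is invented):

```python
def dgnss_correct(rover_fix, base_fix, base_truth):
    """Apply a differential correction: the base station compares its GNSS
    fix with its surveyed position and broadcasts the error, which the
    rover subtracts from its own fix."""
    # Error common to base and rover (atmospheric delay, clock drift, etc.).
    err = tuple(m - t for m, t in zip(base_fix, base_truth))
    return tuple(r - e for r, e in zip(rover_fix, err))
```

Because the base and rover see largely the same error sources over short baselines, subtracting the base-station error removes most of the bias from the rover's position.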
(49) The controller 114 also receives position signals from the optical movement sensor 116. As described above with reference to the embodiment in
(50) 4. Alternative Embodiment Optical Control System 202
(52) The embodiment shown in
(53) The sensor assembly 221 provides relative position and attitude information to the controller 214. Similarly, the optical movement sensor 216 also provides relative position and attitude information to controller 214. The controller uses both sets of information to obtain a more accurate determination of the position and attitude of the vehicle. This will be described in greater detail hereunder. Also, as described above with reference to the embodiments in
(54) 5. Alternative Embodiment Optical Control System 302
(56) The embodiment shown in
(58) In
(59) The optical movement sensor(s) 16 of
(61) In state space representations, the variables or parameters used to mathematically model the motion of the vehicle, or aspects of its operation, are referred to as states x.sub.i. In the present case, the states may include the vehicle's position (x,y), velocity, heading h, radius of curvature r, etc. Hence the states may include x.sub.1=x, x.sub.2=y, x.sub.3=h, x.sub.4=.[.h.]..Iadd.r.Iaddend., etc. However, it will be appreciated that the choice of states is never unique, and the meaning and implications of this will be well understood by those skilled in the art.
(64) The values for the individual states at a given time are represented as the individual entries in an n×1 state vector:
X(t)=[x.sub.1(t) x.sub.2(t) x.sub.3(t) x.sub.4(t) . . . x.sub.n(t)].sup.T
where n is the number of states.
(65) In general, the mathematical model used to model the vehicle's motion and aspects of its operation will comprise a series of differential equations. The number of equations will be the same as the number of states. In some cases, the differential equations will be linear in terms of the states, whereas in other situations the equations may be nonlinear in which case they must generally be linearized about a point in the state space. Linearization techniques that may be used to do this will be well known to those skilled in this area.
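Linearization about a point in the state space can be performed numerically, for example by central finite differences (one standard technique among several; the unicycle model below is an invented example, not the patent's vehicle model):

```python
import numpy as np

def jacobian(f, x0, eps=1e-6):
    """Linearize a nonlinear state function dX/dt = f(X) about x0 by
    central finite differences, giving the A matrix of the linearized
    model."""
    n = len(x0)
    A = np.zeros((n, n))
    for j in range(n):
        d = np.zeros(n)
        d[j] = eps
        A[:, j] = (f(x0 + d) - f(x0 - d)) / (2 * eps)
    return A

# Example: constant-speed unicycle model with states (x, y, heading).
def unicycle(s, v=5.0, omega=0.1):
    return np.array([v * np.cos(s[2]), v * np.sin(s[2]), omega])
```

The resulting A matrix is valid near the linearization point only, so in practice the model is re-linearized as the state evolves.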
(66) Next, by noting that any j.sup.th order linear differential equations can be re-written equivalently as a set .Iadd.of .Iaddend.j first order linear differential equations, the linear (or linearized) equations that represent the model can be expressed using the following state equation:
(67) {dot over (X)}(t)=AX(t)+BU(t)+Ew(t)
where: A is an n×n matrix linking the state time derivatives to the states themselves, U(t) is an m×1 matrix containing the external forcing inputs in the mathematical model, B is an n×m matrix linking the state derivatives to the inputs, m is the number of inputs, and Ew(t) is a quantity (represented by an n×1 vector) called the process noise. The process noise represents errors in the model and vehicle dynamics which exist in the actual vehicle but which are not accounted for in the model. As Ew(t) represents an unknown quantity, its contents are not known. However, for reasons that will be understood by those skilled in this area, in order to allow statistically optimized signal processing and state estimation Ew(t) is generally assumed to be Gaussian, white, have zero mean and to act directly on the state derivatives. It is also assumed that the process noise element associated with each individual state is uncorrelated with the process noise element of the other states.
(68) The quantities that are desired to be known about the vehicle (the real values for which are generally also measured from the vehicle itself, if possible) are the outputs y, from the model. Each of the outputs generated by the linear (or linearized) model comprises a linear combination of the states x, and inputs u, and so the outputs can be defined by the output or measurement equation:
Y(t)=CX(t)+DU(t)+Mv(t)
where C is a j×n matrix linking the outputs to the states, D is a j×m matrix linking the outputs to the inputs, j is the number of outputs, and Mv(t) is a quantity (represented by a j×1 vector) called the measurement noise. The measurement noise represents errors and noise that invariably exist in measurements taken from the actual vehicle. Like Ew(t) above, Mv(t) is assumed to be Gaussian, white, have zero mean, to act directly on the outputs and to be uncorrelated with the process noise or itself.
(69) Next, it will be noted that both the state equation and the measurement equation defined above are continuous functions of time. However, continuous time functions do not often lend themselves to easy digital implementation (such as will generally be required in implementing the present invention) because digital control systems generally operate as recursively repeating algorithms. Therefore, for the purpose of implementing the equations digitally, the continuous time equations may be converted into the following recursive discrete time equations by making the substitutions set out below and noting that (according to the principle of superposition) the overall response of a linear system is the sum of the free (unforced) response of that system and the responses of that system due to forcing/driving inputs. The recursive discrete time equations are:
X.sub.k+1=FX.sub.k+GU.sub.k+1+Lw.sub.k+1
Y.sub.k+1=ZX.sub.k+1+JU.sub.k+1+Nv.sub.k+1
where k+1 is the time step occurring immediately after time step k, Z=C, J=D and Nv is the discrete time analog of the continuous time measurement noise Mv(t). F is a transition matrix which governs the free response of the system. F is given by:
F=e.sup.At
GU.sub.k+1 is the forced response of the system, i.e. the system's response due to the driving inputs. It is defined by the convolution integral as follows:
(70) GU.sub.k+1=∫e.sup.A(t.sup.k+1.sup.−τ)BU(τ)dτ, where the integral is taken over the time step from t.sub.k to t.sub.k+1.
(71) Similarly, the quantity Lw.sub.k+1 is the (forced) response of the system due to the random error inputs that make up the process noise. Hence, conceptually this quantity may be defined as: Lw.sub.k+1=∫e.sup.A(t.sup.k+1.sup.−τ)Ew(τ)dτ, taken over the same interval.
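As an illustration of the discretization step described above, the following sketch computes F and G for a simple two-state double-integrator model by truncating the matrix-exponential series. The model, the step size and the helper functions are illustrative assumptions, not the patent's vehicle model.

```python
# Sketch: discretizing X'(t) = A X(t) + B U(t) under a zero-order-hold input.
# F = e^(A*dt) and G = (integral of e^(A*s) ds from 0 to dt) * B, both
# approximated by a truncated power series.

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def mat_add(P, Q):
    return [[P[i][j] + Q[i][j] for j in range(len(P[0]))] for i in range(len(P))]

def mat_scale(P, s):
    return [[P[i][j] * s for j in range(len(P[0]))] for i in range(len(P))]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def discretise(A, B, dt, terms=20):
    """Return (F, G) where F = sum A^k dt^k/k! and
    G = (sum A^k dt^(k+1)/(k+1)!) * B."""
    n = len(A)
    F = identity(n)
    S = mat_scale(identity(n), dt)   # running integral term, k = 0 part
    term = identity(n)
    coeff = 1.0
    for k in range(1, terms):
        term = mat_mul(term, A)      # term = A^k
        coeff *= dt / k              # coeff = dt^k / k!
        F = mat_add(F, mat_scale(term, coeff))
        S = mat_add(S, mat_scale(term, coeff * dt / (k + 1)))
    return F, mat_mul(S, B)

# Double integrator: states [position, velocity], input = acceleration.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
F, G = discretise(A, B, dt=0.1)
# For this model F = [[1, dt], [0, 1]] and G = [[dt^2/2], [dt]] exactly.
```

Because A is nilpotent for the double integrator, the truncated series is exact here; for a general vehicle model a library matrix exponential would normally be used instead.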
(72) However, as noted above, the quantity Ew(t) is not deterministic and so the integral defining Lw.sub.k+1 cannot be performed (even numerically). It is for this reason that it is preferable to use statistical filtering techniques such as a Kalman filter to statistically optimize the states estimated by the mathematical model. The optimal estimator shown in
(73) In general, a Kalman filter operates as a predictor-corrector algorithm. Hence, the algorithm operates by first using the mathematical model to predict the value of each of the states at time step k+1 based on the known inputs at time step k+1 and the known value of the states from the previous time step k. It then corrects the predicted value using actual measurements taken from the vehicle at time step k+1 and the optimized statistical properties of the model. In summary, the Kalman filter comprises the following equations each of which is computed in the following order for each time step:
(74) {circumflex over (X)}.sub.k+1|k=F{circumflex over (X)}.sub.k|k+GU.sub.k+1
P.sub.k+1|k=FP.sub.k|kF.sup.T+Q
K.sub.k+1=P.sub.k+1|kZ.sup.T(ZP.sub.k+1|kZ.sup.T+R).sup.−1
{circumflex over (X)}.sub.k+1|k+1={circumflex over (X)}.sub.k+1|k+K.sub.k+1(Y.sub.k+1−Z{circumflex over (X)}.sub.k+1|k−JU.sub.k+1)
P.sub.k+1|k+1=(I−K.sub.k+1Z)P.sub.k+1|k
where the notation k+1|k means the value of the quantity in question at time step k+1 given information from time step k. Similarly, k+1|k+1 means the value of the quantity at time step k+1 given updated information from time step k+1. P is the co-variance in the difference between the estimated and actual value of X. Q is the co-variance in the process noise. K is the Kalman gain, which is a matrix of computed coefficients used to optimally correct the initial state estimate. R is the co-variance in the measurement noise. Y is a vector containing measurement values taken from the actual vehicle. The innovation is the difference between the measured values actually taken from the vehicle and the values for the corresponding quantities estimated by the model. .Iadd.T is the transpose operator..Iaddend. .Iadd.I is the identity matrix..Iaddend.
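The predictor-corrector cycle summarized above can be sketched for a one-state, one-output system, where F, G, Z, J, Q and R collapse to scalars. All numeric values and the constant-position model below are illustrative assumptions, not the patent's vehicle model.

```python
# Sketch: one step of a scalar Kalman filter, predict then correct.

def kalman_step(x_est, P, u, y_meas, F, G, Z, J, Q, R):
    # Predict: propagate the state estimate and its covariance with the model.
    x_pred = F * x_est + G * u                  # X(k+1|k)
    P_pred = F * P * F + Q                      # P(k+1|k)
    # Correct: weight the innovation by the Kalman gain.
    K = P_pred * Z / (Z * P_pred * Z + R)       # Kalman gain
    innovation = y_meas - (Z * x_pred + J * u)  # measured minus predicted output
    x_new = x_pred + K * innovation             # X(k+1|k+1)
    P_new = (1.0 - K * Z) * P_pred              # P(k+1|k+1)
    return x_new, P_new

# Track a constant position from noisy measurements.
x_est, P = 0.0, 1.0
for y in [1.2, 0.9, 1.1, 1.0, 0.95]:
    x_est, P = kalman_step(x_est, P, u=0.0, y_meas=y,
                           F=1.0, G=0.0, Z=1.0, J=0.0, Q=0.001, R=0.1)
```

After a few measurements the estimate settles near the underlying value and the covariance P shrinks, which is the convergence behaviour the optimal estimator relies on.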
(75) The operation of the discrete time Kalman filter which may be used in the optimal estimator of the present invention is schematically illustrated in
(76) Returning now to
(77) The error calculation module 62 uses the statistically optimal estimate of the position and attitude of the tractor obtained from the optimal estimator 60 and the desired position and attitude of the tractor determined from the required control path to calculate the error in position and attitude of the tractor. This may be calculated as an error in the x-coordinate, an error in the y-coordinate and an error in the heading of the tractor. These error values are represented as Ex, Ey and Eh in
(78)
(79) In cases where a GNSS outage occurs, the optical movement sensor continues to provide position and attitude data to the optimal estimator. In such circumstances, control of the vehicle can be effected by the information received from the optical movement sensor alone.
(80) As a further benefit arising from the system shown in
(81)
(82)
(83) 6. Alternative Embodiment Optical Control System 402
(84)
(85) The embodiments shown in
(86)
(87) The optical movement chip 506 sends signals to the optimal estimator, as shown in
(88) The present invention provides control systems that can be used to control the movement of the vehicle or an implement associated with the vehicle. The control system includes an optical movement sensor that may be the operative part of an optical computer mouse. These optical movement sensors are relatively inexpensive, provide a high processing rate and utilize proven technology. Due to the high processing rate of such optical movement sensors, the control system has a high clock speed and therefore a high frequency of updating of the determined or calculated position of the vehicle or implement. The optical movement sensor may be used by itself or it may be used in conjunction with a GNSS system, one or more inertial sensors, or one or more vehicle based sensors. The optical movement sensor can be used to augment the accuracy of inertial and/or other sensors. In particular, the optical movement sensor can be used to debias yaw drift that is often inherent in inertial sensors.
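One way the optically-sensed heading might be used to debias inertial yaw drift, as described above, is a simple complementary filter that slowly learns the gyroscope's bias from its disagreement with the optical fix. The bias model, the gains and the function name below are illustrative assumptions, not the patent's method.

```python
# Sketch: debiasing a drifting gyro yaw rate with optical heading fixes.

def debias_yaw(gyro_rates, optical_headings, dt, gain=0.5):
    """Integrate gyro yaw rate while estimating its (assumed constant) bias
    from the running difference against the optically-sensed heading."""
    heading, bias = optical_headings[0], 0.0
    for rate, opt in zip(gyro_rates, optical_headings):
        heading += (rate - bias) * dt        # inertial prediction
        error = opt - heading                # optical correction term
        heading += gain * error              # pull heading toward optical fix
        bias -= gain * error / dt * 0.1      # slowly adapt the bias estimate
    return heading, bias

# A stationary vehicle: true yaw rate 0, but the gyro reports a 0.1 rad/s bias.
heading, bias = debias_yaw([0.1] * 100, [0.0] * 100, dt=0.1)
```

In this example the estimated bias converges toward the injected 0.1 rad/s and the fused heading stays near zero, illustrating how the optical sensor can anchor an otherwise drifting inertial yaw estimate.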
(89) 7. Alternative Embodiment Vehicle Control System 600
(90) As described in the background section above, one of the problems with existing vehicle control systems is that they are inherently one-dimensional or linear in nature. The inherent linear nature of existing control systems is illustrated schematically in
(91) Next,
(92) The components of the control system in the particular embodiment shown in
(93) The main control unit 603 receives GPS signals from the GPS antenna 604, and it uses these (typically in combination with feedback and/or other external spatial data signals) to generate a control signal for steering the vehicle. The control signal will typically be made up of a number of components or streams of data relating to the different parameters of the vehicle being controlled, for example the vehicle's cross-track error, heading error, curvature error, etc. These parameters will be described further below. The control signal is amplified using suitable signal amplifiers (not shown) to create a signal that is sufficiently strong to drive the actuators 605. The actuators 605 are interconnected with the vehicle's steering mechanism (not shown) such that the actuators operate to steer the vehicle as directed by the control signal.
(94) In some embodiments, further actuators (not shown) may also be provided which are interconnected with the vehicle's accelerator and/or braking mechanisms, and the control signal may incorporate components or signal streams relating to the vehicle's forward progress (i.e. its forward speed, acceleration, deceleration, etc.). In these embodiments, the component(s) of the control signal relating to the vehicle's forward progress may also be amplified by amplifiers (not shown) sufficiently to cause the actuators which are interconnected with the accelerator/braking mechanism to control the vehicle's acceleration/deceleration in response to the control signal.
(95) The vehicle 601 may also be optionally provided with one or more optical sensors 606, one or more inertial sensors .Iadd.(IS) .Iaddend.607 and a user terminal .Iadd.(UT) .Iaddend.608. One form of optical sensor 606 that may be used may operate by receiving images of the ground beneath the vehicle, preferably in rapid succession, and correlating the data pertaining to respective successive images to obtain information relating to the vehicle's motion. Other forms of optical sensor may also be used including LIDAR (Light Detection and Ranging) or sensors which operate using machine vision and/or image analysis. If present, the one or more inertial sensors 607 will typically include at least one gyroscope (e.g., a rate gyroscope), although the inertial sensors 607 could also comprise a number of sensors and components (such as accelerometers, tilt sensors and the like) which together form a sophisticated inertial navigation system (INS). The vehicle may be further provided with additional sensors (not shown) such as sensors which receive information regarding the location of the vehicle relative to a fixed point of known location in or near the field, magnetometers, ultrasonic range and direction finding and the like. The data generated by these additional sensors may be fed into the database and used by the control system to control the vehicle as described below.
(96) In embodiments where the main control unit 603 comprises an industrial PC or the like, the user terminal 608 may comprise a full computer keyboard and separate screen to enable the user to utilize the full functionality of the computer. However, in embodiments where the main control unit is a purpose-built unit containing only hardware relating to the vehicle's control system, the terminal 608 may comprise, for example, a single combined unit having a display and such controls as may be necessary for the user to operate the vehicle's control system. Any kind of controls known by those skilled in this area to be suitable may be used on the main control unit, including keypads, joysticks, touch screens and the like.
(97) In
(98) In order to control the steering of the vehicle, there are three parameters that should be controlled. These are the cross-track error, the heading error and the curvature error. The physical meaning of these parameters can be understood with reference to
Heading Error=H−h
Those skilled in the art will recognize that both h and H are inherently directional quantities.
(99) Finally, the curvature error is the difference between the actual instantaneous radius of curvature r of the vehicle's motion and the desired instantaneous radius of curvature R. The curvature error is given by:
Curvature Error=1/R−1/r
(100) It will also be clearly appreciated that there may be many other vehicle variables or parameters which also need to be controlled if, for example, acceleration/deceleration or the vehicle's mode of equipment operation are also to be controlled.
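The three error quantities defined above can be computed as in the following sketch, which assumes a locally straight desired path through a known point; the geometry helpers and the function name are illustrative assumptions.

```python
import math

# Sketch: cross-track, heading and curvature errors for steering control.

def guidance_errors(pos, heading, curvature,
                    path_point, path_heading, path_curvature):
    """Return (cross_track, heading_error, curvature_error) relative to a
    locally straight path through path_point with direction path_heading
    (radians).  Curvatures are 1/radius, so curvature error is 1/R - 1/r."""
    dx = pos[0] - path_point[0]
    dy = pos[1] - path_point[1]
    # Signed lateral offset: component of the offset perpendicular to the path.
    cross_track = -dx * math.sin(path_heading) + dy * math.cos(path_heading)
    # Heading error H - h, wrapped into (-pi, pi] since both are directional.
    heading_error = (path_heading - heading + math.pi) % (2 * math.pi) - math.pi
    curvature_error = path_curvature - curvature
    return cross_track, heading_error, curvature_error

# Vehicle 2 m left of a path along +x, heading 0.1 rad off, driving straight.
ct, he, ce = guidance_errors((5.0, 2.0), 0.1, 0.0, (0.0, 0.0), 0.0, 0.0)
```

Expressing curvature error directly as a difference of curvatures (rather than radii) keeps the quantity well behaved when either path segment is straight, i.e. when a radius would be infinite.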
(101) Referring next to
(102) In the overall operation of the control system, the desired path trajectory for the vehicle is first entered into the control system by the user via the user terminal 608. The task path generator then interprets this user-defined path definition and converts it into a series of points of sufficient spatial density to adequately represent the desired path to the requisite level of precision. The task path generator typically also defines the vehicle's desired trajectory along the user-defined path, for example, by generating a desired vehicle position, a desired heading H and a desired instantaneous radius of curvature R for each point on the path. This information is then loaded into the spatial database. The way in which this and other spatial information is stored within the database in representative embodiments, and in particular the way in which pieces of data are given memory allocations according to their spatial location, is described further below.
(103) As the vehicle moves along the user-defined path, it will invariably experience various perturbations in its position and orientation due to, for example, bumps, potholes, subsidence beneath the vehicle's wheels, vehicle wheel-spin, over/under-steer, etc. Those skilled in this area will recognize that a huge range of other similar factors can also influence the instantaneous position and orientation of the vehicle as it moves. One of the purposes of the present control system is to automatically correct for these perturbations in position and orientation to maintain the vehicle on the desired path (or as close to it as possible).
(104) As the vehicle moves, the control system progressively receives updated information regarding spatial location from the external spatial data sources. The external spatial data sources will typically include GPS. However, a range of other spatial data sources may also be used in addition to, or in substitute for GPS. For example, the inertial navigation systems (INS), visual navigation systems, etc. described above may also be used as external data sources in the present control system.
(105) Those skilled in the art will recognize that the spatial data collected by the external spatial data sources actually pertains to the specific location of the external spatial data receivers, not necessarily the vehicle/implement reference location itself (which is what is controlled by the control system). In
(106) In addition to this, changes in the vehicle's attitude will also influence the spatial position readings received by the different receivers. For example, if one of the vehicle's wheels passes over, or is pushed sideways by a bump, this may cause the vehicle to rotate about at least one (and possibly two or three) of the axes shown in
(107) In order to compensate for the difference in position between the vehicle's reference point and the location of the spatial data receiver(s), and also to account for changes in the vehicle's orientation, a vehicle attitude compensation module is provided. This is shown in
(108) Those skilled in the art will recognize that the one or more external spatial data sources will progressively receive updated data readings in rapid succession (e.g., in real time or as close as possible to it). These readings are then converted by the vehicle attitude compensation module and fed into the spatial database. The readings may also be filtered as described above. Therefore, whilst each reading from each spatial data source is received, converted (ideally filtered) and entered into the spatial database individually, nevertheless the rapid successive way in which these readings (possibly from multiple parallel data sources) are received, converted and entered effectively creates a stream of incoming spatial data pertaining to the vehicle's continuously changing instantaneous location and orientation. In order to provide sufficient bandwidth, successive readings from each external spatial data source should be received and converted with a frequency of the same order as the clock speed (or at least one of the clock speeds) of the controller, typically 3 Hz-12 Hz or higher.
(109) Referring again to
(110) The position error generator then uses this information to calculate an instantaneous error term for the vehicle. The error term incorporates the vehicle's instantaneous cross-track error, heading error and curvature error (as described above). The error term is then fed into the controller. The controller is shown in greater detail in
(111) From
(112) In
(113) The external obstacle detection input may comprise any form of vision based, sound based or other obstacle detection means, and the obstacle detection data may be converted by the vehicle attitude compensation module (just like the other sources of external data discussed above) and then fed into the spatial database. Where the control system incorporates obstacle detection, it is then necessary for the task path generator to be able to receive updated information from the spatial database. This is so that if an obstacle is detected on the desired path, an alternative path that avoids the obstacle can be calculated by the task path generator and re-entered into the database. The ability of the task path generator to also receive data from the spatial database is indicated by the additional arrow from the spatial database to the task path generator in
(114) FIGS. .[.4-6.]. .Iadd.16-18 .Iaddend.graphically represent the operation of the control system. .[.However, it is also useful to consider the way in which the vehicle's parameters and dynamics are represented for the purposes of implementing the control system. Those skilled in the art will recognize that a range of methods may be used for this purpose. However, it is considered that one method is to represent the parameters and dynamics in state space form..].
(115) .[.In state space representations, the variables or parameters used to mathematically model the motion of the vehicle, or aspects of its operation, are referred to as states xi. In the present case, the states may include the vehicle's position (x,y), velocity.]..[.
(116)
.].
.[.heading h, radius of curvature r etc. Hence the states may include xi=x,.]..[.
(117)
.].
.[.Etc. However, it will be appreciated that the choice of states is never unique, and the meaning and implications of this will be well understood by those skilled in the art..].
(118) .[.The values for the individual states at a given time are represented as the individual entries in an n1 state vector:.].
.[.X(t)=[x.sub.1(t)x.sub.2(t)x.sub.3(t)x.sub.4(t). . . x.sub.n(t)].sup.T.].
.[.where n is the number of states..].
(119) .[.In general, the mathematical model used to model the vehicle's motion and aspects of its operation will comprise a series of differential equations. The number of equations will be the same as the number of states. In some cases, the differential equations will be linear in terms of the states, whereas in other situations the equations may be nonlinear in which case they must generally be linearised about a point in the state space. Linearisation techniques that may be used to do this will be well known to those skilled in this area..].
(120) .[.Next, by noting that any j.sup.th order linear differential equations can be re-written equivalently as a set j first order linear differential equations, the linear (or linearized) equations that represent the model can be expressed using the following state equation:.]..[.
(121)
.].
.[.Where:.]. .[.A is an nn matrix linking the state time derivatives to the states themselves,.]. .[.U(t) is an m1 matrix containing the external forcing inputs in the mathematical model,.]. .[.B is an nm matrix linking the state derivatives to the inputs,.]. .[.m is the number of inputs,.]. .[.Ew(t) is a quantity (represented by an n1 vector) called the process noise. The process noise represents errors in the model and vehicle dynamics which exist in the actual vehicle but which are not accounted for in the model. As Ew(t) represents an unknown quantity, its contents are not known. However, for reasons that will be understood by those skilled in this area, in order to allow statistically optimised signal processing and state estimation Ew(t) is generally assumed to be Gaussian, white, have zero mean and to act directly on the state derivatives. It is also assumed that the process noise element associated with each individual state is uncorrelated with the process noise element of the other states..].
(122) .[.The process noise represents errors in the model and vehicle dynamics which exist in the actual vehicle but which are not accounted for in the model. As Ew(t) represents an unknown quantity, its contents are not known. However, for reasons that will be understood by those skilled in this area, in order to allow statistically optimized signal processing and state estimation Ew(t) is generally assumed to be Gaussian, white, have zero mean and to act directly on the state derivatives. It is also assumed that the process noise element associated with each individual state is uncorrelated with the process noise element of the other states..].
(123) .[.The quantities that are desired to be known about the vehicle (the real values for which are generally also measured from the vehicle itself, if possible) are the outputs y1 from the model. Each of the outputs generated by the linear (or linearized) model comprises a linear combination of the states xi and inputs ui, and so the outputs can be defined by the output or measurement equation:.].
.[.Y(t)=CX(t)+DU(t)Mv(t).]. .[.Where C is a jn matrix linking the outputs to the states,.]. .[.D is a jm matrix linking the outputs to the inputs,.]. .[.j is the number of outputs, and.]. .[.M v(t) is a quantity (represented by an n1 vector) called the measurement noise. The measurement noise represents errors and noise that invariably exist in measurements taken from the actual vehicle. Like Ew(t) above, M v(t) is assumed to be Gaussian, white, have zero mean, to act directly on the state derivatives and to be uncorrelated with the process noise or itself..].
(124) .[.Next, it will be noted that both the state equation and the measurement equation defined above are continuous functions of time. However, continuous time functions do not often lend themselves to easy digital implementation (such as will generally be required in implementing the present invention) because digital control systems generally operate as recursively repeating algorithms. Therefore, for the purpose of implementing the equations digitally, the continuous time equations may be converted into the following recursive discrete time equations by making the substitutions set out below and noting that (according to the principle of superposition) the overall response of a linear system is the sum of the free (unforced) response of that system and the responses of that system due to forcing/driving inputs. The recursive discrete time equations are:.].
.[.Xk+1=FXk+GUk+1+Lwk+1.].
.[.Yk+1=ZXk+JUk+1+Nvk+1.].
.[.where k+1 is the time step occurring immediately after time step k, Z=C, J=D and Nv is the discrete time analog of the continuous time measurement noise Mv(t). F is a transition matrix which governs the free response of the system. F is given by:.].
.[.F=eA.].
.[.GU.sub.k+1 is the forced response of the system, i.e. the system's response due to the driving inputs. It is defined by the convolution integral as follows:.]..[.
(125)
.].
(126) .[.Similarly, the quantity Lw.sub.k+1 is the (forced) response of the system due to the random error inputs that make up the process noise. Hence, conceptually this quantity may be defined as:.]..[.
(127)
.].
(128) .[.However, as noted above, the quantity Ew(t) is not deterministic and so the integral defining Lw.sub.k+1 cannot be performed (even numerically). It is for this reason that it is preferable to use statistical filtering techniques such as a Kalman Filter to statistically optimize the states estimated by the mathematical model..].
(129) .[.In general, a Kalman Filter operates as a predictor-corrector algorithm. Hence, the algorithm operates by first using the mathematical model to predict the value of each of the states at time step k+1 based on the known inputs at time step k+1 and the known value of the states from the previous time step k. It then corrects the predicted value using actual measurements taken from the vehicle at time step k+1 and the optimized statistical properties of the model. In summary, the Kalman Filter comprises the following equations each of which is computed in the following order for each time step:.]..[.
(130)
.].
.[.where the notation k+1|k means the value of the quantity in question at time step k+1 given information from time step k. Similarly, k+1|k+1 means the value of the quantity at time step k+1 given updated information from time step k+1. [0135]P is the co-variance in the difference between the estimated and actual value of X. [0136]Q is the co-variance in the process noise. [0137]K is the Kalman gain which is a matrix of computed coefficients used to optimally correct the initial state estimate. [0138]R is the co-variance in the measurement noise. [0139] is a vector containing measurement values taken from the actual vehicle..].
(131) .[.The operation of the discrete time state space equations outlined above, including the Kalman gain and the overall feedback closed loop control structure, are represented graphically in
(132) In relation to the spatial database, it is mentioned above that a wide range of methods are known for arranging data within databases. One commonly used technique is to provide a hash table. The hash table typically operates as a form of index allowing the computer (in this case the control system CPU) to look up a particular piece of data in the database (i.e. to look up the location of that piece of data in memory). In the context of the present invention, pieces of data pertaining to particular locations along the vehicle's path are assigned different hash keys based on the spatial location to which they relate. The hash table then lists a corresponding memory location for each hash key. Thus, the CPU is able to look up data pertaining to a particular location by looking up the hash key for that location in the hash table which then gives the corresponding location for the particular piece of data in memory. In order to increase the speed with which these queries can be carried out, the hash keys for different pieces of spatial data can be assigned in such a way that locality is maintained. In other words, points which are close to each other in the real world should be given closely related indices in the hash table (i.e. closely related hash keys).
(133) The spatial hash algorithm used to generate hash keys for different spatial locations in representative embodiments of the present invention may be most easily explained by way of a series of examples. To begin, it is useful to consider the hypothetical vehicle path trajectory shown in
(134) As outlined above, in the present invention all data is stored within the spatial database with reference to spatial location. Therefore, it is necessary to assign indices or hash keys to each piece of data based on the spatial location to which each said piece of data relates. However, it will be recalled that the hash table must operate by listing the hash key for each particular spatial location together with the corresponding memory location for data pertaining to that spatial location. Therefore, the hash table is inherently one-dimensional, and yet it must be used to link hash keys to corresponding memory allocations for data that inherently pertains to two-dimensional space.
(135) One simple way of overcoming this problem would be to simply assign hash keys to each spatial location based only on, say, the Y coordinate at each location. The hash keys generated in this way for each point on the vehicle path in
(136) TABLE-US-00001
TABLE 1
Spatial Hash Key Generated Using only the Y Coordinate

  (X, Y)        Hash key        Hash key
  coordinates   (hexadecimal)   (decimal)
  (0, 0)        0x0             0
  (1, 0)        0x0             0
  (2, 0)        0x0             0
  (3, 0)        0x0             0
  (4, 0)        0x0             0
  (0, 1)        0x1             1
  (1, 1)        0x1             1
  (2, 1)        0x1             1
  (3, 1)        0x1             1
  (4, 1)        0x1             1
  (0, 2)        0x2             2
  (1, 2)        0x2             2
  (2, 2)        0x2             2
  (3, 2)        0x2             2
  (4, 2)        0x2             2
  (0, 3)        0x3             3
  (1, 3)        0x3             3
  (2, 3)        0x3             3
  (3, 3)        0x3             3
  (4, 3)        0x3             3
  (0, 4)        0x4             4
  (1, 4)        0x4             4
  (2, 4)        0x4             4
  (3, 4)        0x4             4
  (4, 4)        0x4             4
(137) The prefix 0x indicates that the numbers in question are expressed in hexadecimal format. This is a conventional notation.
(138) Those skilled in the art will recognize that the above method for generating hash keys is far from optimal because there are five distinct spatial locations assigned to each different hash key. Furthermore, in many instances, this method assigns the same hash key to spatial locations which are physically remote from each other. For instance, the point (0,1) is distant from the point (4,1), and yet both locations are assigned the same hash key. An identically ineffective result would be obtained by generating a hash key based on only the X coordinate.
(139) An alternative method would be to generate hash keys by concatenating the X and Y coordinates for each location. The hash keys generated using this method for each point on the vehicle path in
(141) TABLE-US-00003
TABLE 2
Hash Keys Generated by Concatenating the X and Y Coordinates

  (X, Y)        Hash key        Hash key
  coordinates   (hexadecimal)   (decimal)
  (0, 0)        0x0             0
  (1, 0)        0x100           256
  (2, 0)        0x200           512
  (3, 0)        0x300           768
  (4, 0)        0x400           1024
  (0, 1)        0x1             1
  (1, 1)        0x101           257
  (2, 1)        0x201           513
  (3, 1)        0x301           769
  (4, 1)        0x401           1025
  (0, 2)        0x2             2
  (1, 2)        0x102           258
  (2, 2)        0x202           514
  (3, 2)        0x302           770
  (4, 2)        0x402           1026
  (0, 3)        0x3             3
  (1, 3)        0x103           259
  (2, 3)        0x203           515
  (3, 3)        0x303           771
  (4, 3)        0x403           1027
  (0, 4)        0x4             4
  (1, 4)        0x104           260
  (2, 4)        0x204           516
  (3, 4)        0x304           772
  (4, 4)        0x404           1028
(142) In order to understand how the numbers listed in Table 2 above were arrived at, it is necessary to recognize that in the digital implementation of the present control system, all coordinates will be represented in binary. For the purposes of the present example which relates to the simplified integer based coordinate system in
(143) Hence, to illustrate the operation of the spatial hash key algorithm used to generate the numbers in Table 2, consider the point (3,3). Those skilled in the art will understand that the decimal number 3 may be written as 11 in binary notation. Therefore, the location (3,3) may be rewritten in 8-bit binary array notation as (00000011,00000011). Concatenating these binary coordinates then gives the single 16-bit binary hash key 0000001100000011 which can equivalently be written as the hexadecimal number 0x303 or the decimal number 771. The process of converting between decimal, binary and hexadecimal representations should be well known to those skilled in the art and need not be explained.
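The concatenation scheme just illustrated amounts to placing the 8-bit X coordinate in the high byte of a 16-bit key and the 8-bit Y coordinate in the low byte; the function name in this sketch is illustrative.

```python
# Sketch: the concatenation hash of Table 2 as bit operations.

def concat_hash(x, y):
    """Concatenate two 8-bit coordinates into one 16-bit key (X high, Y low)."""
    assert 0 <= x < 256 and 0 <= y < 256
    return (x << 8) | y

# Reproducing entries from Table 2:
#   concat_hash(3, 3) -> 0x303 (decimal 771)
#   concat_hash(1, 0) -> 0x100 (decimal 256), though (1, 0) adjoins (0, 0)
```

The second example shows the locality problem discussed below: physically adjacent points receive keys 256 apart.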
(144) It will be noted from Table 2 above that concatenating the X and Y coordinates leads to unique hash keys (in this example) for each spatial location. However, the hash keys generated in this way are still somewhat sub-optimal because points which are located close to each other are often assigned vastly differing hash keys. For example, consider the points (0,0) and (1,0). These are adjacent points in the real world. However, the hash keys assigned to these points using this method (written in decimal notation) are 0 and 256 respectively. In contrast, the point (0,4) is much further away from (0,0) and yet it is assigned the much closer hash key 4. Therefore, this algorithm does not maintain locality, and an alternative algorithm would be preferable.
(145) Yet a further method for generating hash keys is to use a technique which shall hereinafter be referred to as bitwise interleaving. As for the previous example, the first step in this technique is to represent the (X,Y) coordinates in binary form. Hence, using the 8-bit binary array representation discussed above, the point (X,Y) may be re-written in 8-bit binary array notation as (X1X2X3X4X5X6X7X8, Y1Y2Y3Y4Y5Y6Y7Y8). Next, rather than concatenating the X and Y coordinates to arrive at a single 16-bit binary hash key, the successive bits from the X and Y binary coordinates are alternatingly interleaved to give the following 16-bit binary hash key X1Y1X2Y2X3Y3X4Y4X5Y5X6Y6X7Y7X8Y8. The hash keys generated using this method for each point on the vehicle path in
(146) TABLE-US-00004
TABLE 3
Hash Keys Generated by Bitwise Interleaving the X and Y Coordinates

  (X, Y)        Hash key        Hash key
  coordinates   (hexadecimal)   (decimal)
  (0, 0)        0x0             0
  (1, 0)        0x2             2
  (2, 0)        0x8             8
  (3, 0)        0xa             10
  (4, 0)        0x20            32
  (0, 1)        0x1             1
  (1, 1)        0x3             3
  (2, 1)        0x9             9
  (3, 1)        0xb             11
  (4, 1)        0x21            33
  (0, 2)        0x4             4
  (1, 2)        0x6             6
  (2, 2)        0xc             12
  (3, 2)        0xe             14
  (4, 2)        0x24            36
  (0, 3)        0x5             5
  (1, 3)        0x7             7
  (2, 3)        0xd             13
  (3, 3)        0xf             15
  (4, 3)        0x25            37
  (0, 4)        0x10            16
  (1, 4)        0x12            18
  (2, 4)        0x18            24
  (3, 4)        0x1a            26
  (4, 4)        0x30            48
(147) To further illustrate the operation of the spatial hash algorithm used to generate the numbers in Table 3, consider the point (3,4). As noted above, the decimal number 3 may be written as 11 in binary notation. Similarly, decimal number 4 is written as 100 in binary. Therefore, the location (3,4) may be rewritten in 8-bit binary array notation as (00000011,00000100). Bitwise interleaving these binary coordinates then gives the single 16-bit binary hash key 0000000000011010, which can equivalently be written as the hexadecimal number 0x1a or the decimal number 26.
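The bitwise interleaving described above can be sketched in Python as follows (this fragment is illustrative only and does not form part of the patent disclosure; the function name interleave_hash is arbitrary):

```python
def interleave_hash(x: int, y: int) -> int:
    """Bitwise-interleave two 8-bit coordinates into a 16-bit hash key.
    Following the X1Y1X2Y2...X8Y8 convention above, each X bit sits
    one position above the corresponding Y bit in the result."""
    key = 0
    for i in range(8):
        key |= ((x >> i) & 1) << (2 * i + 1)  # X bits: odd positions
        key |= ((y >> i) & 1) << (2 * i)      # Y bits: even positions
    return key

print(hex(interleave_hash(3, 4)))  # 0x1a (decimal 26), matching the worked example
```

This construction is sometimes known outside the patent literature as a Morton code or Z-order curve.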
(148) From Table 3 it will be seen that generating hash keys by bitwise interleaving the X and Y coordinates leads to unique hash keys (in this example) for each spatial location. Also, the hash keys generated in this way satisfy the requirement that points which are close together in the real world are assigned closely related hash keys. For example, consider again the points (0,0) and (1,0). The hash keys now assigned to these points by bitwise interleaving (when written in decimal notation) are 0 and 2 respectively. Furthermore, the point (0,1), which is also nearby, is assigned the closely related hash key 1. Conversely, points which are separated by a considerable distance in the real world are given considerably differing hash keys; for example, the hash key for (4,3) is 37.
(149) From the example described with reference to Table 3, it can be seen that generating hash keys by bitwise interleaving the binary X and Y coordinates preserves locality. This example therefore conceptually illustrates the operation of the bitwise interleaving spatial hash algorithm that may be used with representative embodiments of the present invention. However, the above example is based on the simplified integer-based coordinate system shown in
(150) One complexity is the fact that GPS and other similar systems which describe spatial location typically do so using IEEE double-precision floating-point numbers (not simple integers). For instance, GPS supplies coordinates in the form of (X,Y) coordinates where X corresponds to longitude, and Y corresponds to latitude. Both X and Y are given in units of decimal degrees.
(151) Another complexity is the fact that certain spatial locations have negative coordinate values when described using GPS and other similar coordinate systems. For example, using the WGS84 datum used by current GPS, the coordinates (153.00341, −27.47988) correspond to a location in Queensland, Australia (the negative latitude value indicates the southern hemisphere).
(152) A further complexity arises from the way numbers are represented in accordance with the IEEE double-precision floating-point standard.
(154) A double-precision floating-point number represented in accordance with the IEEE 754 standard comprises a string of 64 binary characters (64 bits) as shown in
(155) Hence, actual exponent value = written exponent value − exponent bias.
(156) The exponent bias is 0x3ff = 1023. Consequently, the maximum true exponent value that can be represented (written in decimal notation) is 1023, and the minimum true exponent value that can be represented is −1022.
(157) Finally, the remaining 52 bits form the mantissa. However, as all non-zero numbers must necessarily have a leading 1 when written in binary notation, an implicit 1 followed by a binary point is assumed to exist at the front of the mantissa. In other words, the leading 1 and the binary point which must necessarily exist for all non-zero binary numbers is simply omitted from the actual written mantissa in the IEEE 64-bit standard format. This is so that an additional bit may be used to represent the number with greater precision. However, when interpreting numbers which are represented in accordance with the IEEE standard, it is important to remember that this leading 1 and the binary point implicitly exist even though they are not written.
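The decomposition of an IEEE 754 double into its sign, exponent and mantissa fields, including de-biasing the exponent and resurrecting the implicit leading 1, can be sketched in Python as follows (illustrative only, not part of the patent disclosure; the function name decompose_double is arbitrary):

```python
import struct

def decompose_double(value: float):
    """Split an IEEE 754 double into its sign bit, de-biased exponent,
    and 53-bit mantissa (with the implicit leading 1 resurrected)."""
    # Reinterpret the 64-bit double as an unsigned 64-bit integer.
    (bits,) = struct.unpack(">Q", struct.pack(">d", value))
    sign = bits >> 63                            # 1 sign bit
    written_exponent = (bits >> 52) & 0x7FF      # 11 exponent bits
    mantissa = bits & ((1 << 52) - 1)            # 52 written mantissa bits
    actual_exponent = written_exponent - 1023    # remove the exponent bias
    full_mantissa = (1 << 52) | mantissa         # prepend the implicit leading 1
    return sign, actual_exponent, full_mantissa

# 153.0 = 1.1953125 x 2**7, so the de-biased exponent is 7.
sign, exponent, mantissa = decompose_double(153.0)
print(sign, exponent)  # 0 7
```

Note that this sketch assumes a normal (non-zero, non-denormal) input, since only then does the implicit leading 1 exist.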
(158) Bearing these issues in mind, it is possible to understand the actual spatial hash algorithm used in representative implementations of the present control system. A worked example illustrating the operation of the spatial hash algorithm to generate a hash key based on the coordinates (153.0000, 27.0000) is given in the form of a flow diagram in
(159) From
(160) After normalising the coordinates, the next step is to convert the respective coordinates from their representations in decimal degrees into binary IEEE double-precision floating-point number format. This is shown as step 3) in
(161) Next, the binary representations of the two coordinates are split into their respective exponent (11 bits) and mantissa (52 bits) portions. This is step 4) in
(162) After de-biasing the exponents, the resulting exponents are then adjusted by a selected offset. The size of the offset is selected depending on the desired granularity of the resulting fixed-point number. In the particular example shown in step 6) of
(163) After adjusting the exponent, the next step is to resurrect the leading 1 and the binary point which implicitly exist in the mantissa but which are left off when the mantissa is actually written (see above). Hence, the leading 1 and the binary point are simply prepended to the mantissa of each of the coordinates. This is step 7) in
(164) The mantissa for each coordinate is then right-shifted by the number of bits in the corresponding exponent. The exponents for each coordinate are then prepended to their corresponding mantissas forming a single character string for each coordinate. There is then an optional step of discarding the high-order byte for each of the two bit fields. This may be done simply to save memory if required, but is not necessary. Finally, the resultant bit fields for each coordinate are bitwise interleaved to obtain a single hash key corresponding to the original coordinates. In the example shown in
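The sequence of steps described above can be sketched end-to-end in Python. This is an illustrative approximation only, not the patent's definitive implementation: the normalisation step and the exact exponent offset depend on figures not reproduced here, so the offset is left as a free parameter, negative coordinates are not handled, and the optional high-order-byte discard is omitted. All function names are arbitrary.

```python
import struct

def coordinate_bits(coord: float, exponent_offset: int = 0) -> int:
    """Build a per-coordinate bit field along the lines of steps 3) to 8):
    split the IEEE 754 double, de-bias and offset the exponent, resurrect
    the implicit leading 1, right-shift the mantissa by the 11 exponent
    bits, and prepend the exponent to the shifted mantissa."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", coord))
    written_exponent = (bits >> 52) & 0x7FF
    mantissa = bits & ((1 << 52) - 1)
    # De-bias the exponent, apply the offset, and keep it to 11 bits.
    exponent = (written_exponent - 1023 + exponent_offset) & 0x7FF
    full_mantissa = (1 << 52) | mantissa   # resurrect the implicit leading 1
    shifted = full_mantissa >> 11          # right-shift by the 11 exponent bits
    return (exponent << 42) | shifted      # prepend exponent: 53-bit field

def spatial_hash(x: float, y: float, exponent_offset: int = 0) -> int:
    """Bitwise-interleave the two coordinate bit fields into one hash key."""
    fx = coordinate_bits(x, exponent_offset)
    fy = coordinate_bits(y, exponent_offset)
    key = 0
    for i in range(53):
        key |= ((fx >> i) & 1) << (2 * i + 1)  # X bits: odd positions
        key |= ((fy >> i) & 1) << (2 * i)      # Y bits: even positions
    return key

# Locality check: nearby coordinates differ only in low-order key bits,
# so their XOR distance is far smaller than that of distant coordinates.
near = spatial_hash(153.0000, 27.0000) ^ spatial_hash(153.0001, 27.0001)
far = spatial_hash(153.0000, 27.0000) ^ spatial_hash(10.0, 80.0)
assert near < far
```

Because the exponent occupies the high-order bits of each coordinate field, coordinates of similar magnitude produce keys that agree in their high-order interleaved bits, which is the locality property the algorithm is designed to preserve.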
(165) Those skilled in the art will recognize that various other alterations and modifications may be made to the particular embodiments, aspects and features of the invention described without departing from the spirit and scope of the invention.