Method and system for logging vehicle behavior

11321970 · 2022-05-03

Assignee

Inventors

CPC classification

International classification

Abstract

A mobile telecommunications device configured to log driving information associated with a vehicle is described. The mobile telecommunications device comprises: a sensor set comprising at least one of an image sensor, an audio sensor, an accelerometer and a positioning module, or a combination thereof; a processor; and a memory; the mobile telecommunications device being configured to: determine, based at least in part on sensor data from the device's sensor set, a start of a driving period during which the mobile device is present in the vehicle and the vehicle is in use, process the sensor data from the sensor set during the driving period to derive driving information associated with how the vehicle is driven, the mobile telecommunications device being configured to process the sensor data automatically, using a neural network provided in the mobile device, to determine whether the driving information represents an acceptable or unacceptable driving pattern; and store at least some of the driving information to the memory.

Claims

1. A mobile telecommunications device configured to log driving information associated with a vehicle, the mobile telecommunications device comprising: a sensor set comprising at least one of an image sensor, an audio sensor, an accelerometer and a positioning module, or a combination thereof; a processor; and a memory; the mobile telecommunications device being configured to: determine, based at least in part on sensor data from the device's sensor set, a start of a driving period during which the mobile device is present in the vehicle and the vehicle is in use, process the sensor data from the sensor set during the driving period to derive driving information associated with how the vehicle is driven, the mobile telecommunications device being configured to process the sensor data automatically, using a neural network provided in the mobile device, to determine whether the driving information represents an acceptable or unacceptable driving pattern; store at least some of the driving information to the memory; and, wherein the mobile telecommunications device is configured to have a training period during which sensor data from the device's sensor set is used to determine a benchmark for a particular driver and to use the benchmark during the driving period.

2. A mobile telecommunications device of claim 1, wherein the driving information is derived without data from the vehicle sensors.

3. A mobile telecommunications device of claim 1, wherein the mobile telecommunications device is configured to detect the occurrence of a predetermined event and in response to take at least one predetermined action.

4. A mobile telecommunications device of claim 1, wherein the mobile telecommunications device is configured to process the driving information to generate a driving score.

5. A mobile telecommunications device of claim 4, wherein the mobile telecommunications device is configured to use the driving score to define an insurance premium for a driver of the vehicle.

6. A mobile telecommunications device of claim 5, wherein the mobile telecommunications device is removably affixed to the vehicle during the driving period.

7. A mobile telecommunications device of claim 1, further comprising a user interface and wherein the mobile telecommunications device is configured to determine, based at least in part on the inputs received by the user interface, the start of the driving period.

8. A mobile telecommunications device of claim 1, wherein the mobile telecommunications device is a smartphone.

9. A mobile telecommunications device of claim 8, wherein the mobile telecommunications device is controlled by a downloaded application to use the neural network to determine whether the driving information represents an acceptable or unacceptable driving pattern.

10. A mobile telecommunications device according to claim 1, wherein the mobile telecommunications device is configured to detect predetermined driving events, the predetermined driving events comprising at least one of a vehicle acceleration event, a vehicle braking event, a vehicle cornering event, a vehicle orientation event and a vehicle swerving event.

11. A mobile telecommunications device according to claim 1, wherein the mobile telecommunications device comprises an event detector, configured to detect the occurrence of a predetermined driving event using an event indication model provided in the mobile telecommunications device.

12. A mobile telecommunications device according to claim 11, wherein the event indication model comprises a pattern of predetermined data values and the event detector is configured to detect the occurrence of an event by carrying out pattern recognition by matching sensor data values to the pattern of predetermined data values of the event indication model.

13. A mobile telecommunications device of claim 1, wherein the mobile telecommunications device comprises an event detector, configured to detect the occurrence of a predetermined driving event using an event indication model provided in the mobile telecommunications device and wherein the mobile telecommunications device is configured to modify the event indication model in response to determining the benchmark.

14. A mobile telecommunications device configured to log driving information associated with a vehicle, the mobile telecommunications device comprising: a sensor set comprising at least one of an image sensor, an audio sensor, an accelerometer and a positioning module, or a combination thereof; a processor; and a memory; the mobile telecommunications device being configured to: determine, based at least in part on sensor data from the device's sensor set, a start of a driving period during which the mobile device is present in the vehicle and the vehicle is in use, process the sensor data from the sensor set during the driving period to derive driving information associated with how the vehicle is driven, the mobile telecommunications device being configured to process the sensor data automatically, using a neural network provided in the mobile device, to determine whether the driving information represents an acceptable or unacceptable driving pattern; store at least some of the driving information to the memory; and, wherein the mobile telecommunications device is configured to control the mobile device to selectively disable functions of the mobile telecommunications device during the driving period in dependence on the driving information.

15. The mobile telecommunications device of claim 14, wherein the mobile telecommunications device is configured to control the mobile device to selectively disable and/or divert incoming voice calls during the driving period.

16. The mobile telecommunications device of claim 14, wherein the mobile telecommunications device comprises a wireless telecommunications module operable to download a controlling application and the processor is configurable by the controlling application.

17. A data-logging system for logging driving information comprising: a database for storing a plurality of accounts, each account having a unique identifier and the database being arranged to store driving information associated with at least one of a vehicle and a driver; a communications interface arranged to communicate with a remote mobile telecommunications device according to claim 1, and receive therefrom: a unique identifier for association of the mobile device with a corresponding one of the plurality of accounts; and driving information to be logged to that corresponding account.

18. A mobile telecommunications device configured to process driving information associated with a vehicle, the mobile telecommunications device comprising: a sensor set comprising at least one of an image sensor, an audio sensor, an accelerometer and a positioning module, or a combination thereof; and a processor, the processor being configurable as a neural network; the mobile telecommunications device being configured to process sensor data from the sensor set during a driving period, during which the mobile device is present in the vehicle and the vehicle is in use, to derive driving information associated with how the vehicle is driven, the mobile telecommunications device being configured to process the sensor data automatically, using the neural network, to determine whether the driving information represents an acceptable or unacceptable driving pattern; and, wherein the mobile telecommunications device is configured to determine, based at least in part on the sensor data from the device's sensor set, a start of the driving period.

19. A mobile telecommunications device of claim 18, further comprising a user interface and wherein the mobile telecommunications device is configured to determine, based at least in part on the inputs received by the user interface, the start of the driving period.

20. The mobile telecommunications device of claim 18, wherein the mobile telecommunications device is configured to detect the occurrence of a predetermined event and in response to take at least one predetermined action.

21. The mobile telecommunications device of claim 18, wherein the mobile telecommunications device is configured to control the mobile device to selectively disable functions of the mobile telecommunications device during the driving period in dependence on the driving information.

22. The mobile telecommunications device of claim 21, wherein the mobile telecommunications device is configured to control the mobile device to selectively disable and/or divert incoming voice calls during the driving period.

23. The mobile telecommunications device of claim 18, wherein the mobile telecommunications device comprises a wireless telecommunications module operable to download a controlling application and the processor is configurable by the controlling application.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) In order that the invention may be more readily understood, reference will now be made, by way of example, to the accompanying drawings in which:

(2) FIG. 1 is a schematic illustration of a system within which the mobile device of various embodiments of the present invention may be used;

(3) FIG. 2 is an illustration of the automobile of FIG. 1 configured with a mobile device in accordance with various embodiments of the present invention;

(4) FIG. 3 is a schematic illustration of the functional components of the mobile device of FIG. 2;

(5) FIG. 4a is a process flow chart, illustrating method steps executable by the mobile device of FIGS. 2 and 3;

(6) FIG. 4b is a more detailed process flow chart, providing an example of how a driving incident or event may be detected in accordance with embodiments of the present invention; and

(7) FIGS. 5 to 24 illustrate a graphical user interface of the mobile device according to FIGS. 1 to 4b.

DETAILED DESCRIPTION OF THE EMBODIMENTS

(8) Specific embodiments are now described with reference to the appended figures.

(9) A preferred embodiment of the present invention relates to a mobile telecommunications device for recording events associated with a vehicle, such as a car. In particular, the mobile telecommunications device is loaded with an application—a ‘mobile app’—which is arranged to record and document the events surrounding an incident involving the vehicle (a ‘driving incident’), such as a vehicle collision. The mobile application may be referred to as the ‘Witness’ application in the ensuing description.

(10) FIG. 1 illustrates a system 1 within which a vehicle, such as an automobile 3 configured with a mobile telecommunications device executing the Witness application, may communicate with one or more remotely located devices or systems. Such devices may relate to one or more servers, such as an insurance server 5, an emergency services server 7, a content provider server 9, and a personal computer 11. Communication between the automobile 3 and the one or more remotely located devices 5, 7, 9, 11 may be effected via a shared communication network 13. The shared communication network may relate to a wide area network (WAN) such as the Internet, and may also comprise telecommunications networks. For example, the mobile telecommunications device of the present embodiments may be arranged to communicate with any one of the remotely located devices 5, 7, 9, 11 via a mobile telecommunications base station 15, which is coupled to the shared communication network 13.

(11) A main purpose of the mobile telecommunications device, when arranged within a vehicle in motion, such as the illustrated automobile 3, is to monitor and process sensor data captured by the mobile device, to derive driving information associated with the moving automobile 3. As mentioned previously, this driving information is subsequently used by the mobile device to determine if a driving incident or event has occurred. The term “driving incident” as used within the present context may relate to an event such as a collision, a near-miss, dangerous driving, erratic driving or similar. Similarly, the driving information may be used as a means for monitoring a driver's driving performance, and determining a driving characteristic, or user profile for the subject driver. In other words, the sensor data and/or the driving information may be used to determine for example, if the driver is a cautious driver, or a reckless driver. This information may be used, for example by an insurance provider, to conduct a risk assessment and to tailor an insurance premium to the subject driver on the basis of a determined driving characteristic. Further details of this alternative embodiment are described below.

(12) The mobile telecommunication device's functionality is provided by the Witness application, which is installed in the device's local non-volatile storage, and which is executed by the device's native processor. The application may be downloaded to the mobile device via the shared communications network 13 and telephone base station 15, from the content provider server 9.

(13) In use (once the application has been configured for execution on the mobile telecommunications device), when a driving incident has been detected by the mobile telecommunications device, data, comprising one or more of mobile device sensor data, derived driving information, and captured image data, may be automatically forwarded to one or more of the remotely located devices 5, 7, 9, 11. For example, the mobile device may be configured such that when a driving incident is detected, such as a collision involving an automobile 3, driving information (including image data captured by the mobile device) derived from the sensor data is automatically forwarded to the insurance server 5, via the base station 15 and shared communications network 13. In this way, data relating to the driving incident (a collision in this example), is automatically forwarded to the automobile user's insurance provider, for use in claim settlement, and/or to determine culpability.

(14) Similarly, once a driving incident has been detected, data may also be automatically forwarded to the emergency services server 7. For example, such data may comprise the position information of the automobile 3, along with an automated S.O.S. message requesting assistance from the emergency services. Forwarded data may also comprise sensor data, and any relevant derived driving information, such as speed of impact, the g-forces the vehicle was subjected to during the collision, or any other information wherefrom the severity of the collision may be determined, and which may be used to assist the emergency services in coordinating the appropriate level of response.

(15) Optionally, an electronic message may be sent to a personal contact preselected by the user. For example, an automated message, such as an e-mail, may be forwarded to the PC 11 of the user-nominated personal contact, informing the personal contact that the user of the vehicle 3 has been involved in a driving incident. Similarly, and due to the mobile device's telecommunications functionality, an electronic text message, such as an SMS (Short Message Service) message, may be forwarded to the telephone of the user-selected personal contact, informing the contact of the driving incident. The mobile device may equally be arranged to communicate with a personal contact via any known electronic messaging service and/or instant messaging service, for example via Apple's® iMessage, or RIM's® BlackBerry Messenger (BBM) service.

(16) This functionality of forwarding a message to a nominated contact may also be of particular benefit for managing large fleets of vehicles, for example those of a car hire service. In this way, if any of the vehicles comprised in the fleet is involved in a driving incident, the car hire service may be automatically notified of the identity of the specific vehicle involved in the driving incident. In such embodiments, the nominated personal contact would be preselected by the vehicle fleet manager. The option of providing a second user-selected personal contact is also envisaged. In this way, a message may be forwarded both to the car hire service, for example, and to the driver's selected personal contact.

(17) In preferred embodiments the mobile telecommunications device relates to a mobile telephone having native processing functionality, and preferably relates to a smartphone. Preferably, the mobile telephone comprises a camera arranged to enable image capture, and preferably arranged to enable a sequence of images to be captured in temporal succession. In other words, in preferred embodiments the mobile telephone comprises a camera configured to enable video footage of a sequence of events to be captured.

(18) Alternatively, the mobile telecommunications device may relate to a PDA, a notepad such as an iPad®, or any other mobile device comprising local processing means and means for communicating with a telecommunications network.

(19) FIG. 2 provides a more detailed view of the automobile 3 of FIG. 1, and illustrates a preferred arrangement of the mobile telecommunications device 17 within the automobile 3. In the illustrated embodiment, the mobile telecommunications device 17 relates to a smartphone configured with an image capture device, such as a camera. Preferably, the device 17 is arranged within the vehicle such that the camera has a clear line of sight of the road in the principal direction of travel of the automobile 3. For example, the device 17 may be attached to the windshield 19 of the automobile 3 in an adapter 21. The adapter 21 may comprise a flexible telescopic arm, configured with a suction cup at one end for affixing the adapter to the windshield 19, and a dock arranged at the opposite end for securely holding the telecommunications device 17 in place. The telescopic arm enables the position of the device 17 to be varied, such that a clear line of sight in the direction of travel may be obtained.

(20) The details of the adapter 21 are irrelevant for present purposes, with the exception that it must enable a clear line of sight in the principal direction of travel of the vehicle to be obtained, and it must firmly affix the mobile device to the vehicle. Affixing the mobile telecommunications device 17 to the automobile 3 ensures that the mobile device 17 is able to accurately capture the automobile's state of motion. By principal direction of travel is meant the main direction of travel of the vehicle when operated in a conventional way, indicated by the arrow A in FIG. 2; in other words, the forward direction of travel. The skilled reader will appreciate that whilst most vehicles, such as an automobile, may have more than one direction of travel (e.g. travelling backwards when in reverse gear), the majority of vehicles have a primary direction of travel, which is the intended direction of travel for any transit of substantial length and/or distance. Arranging the telecommunications device 17 relative to the principal direction of travel ensures that the camera (not shown) of the telecommunications device 17 is well placed to capture any image data which may be pertinent to a subsequently detected driving incident, such as a collision.

(21) FIG. 3 is a schematic of the modular components of the mobile telecommunications device 17 of FIG. 2. Preferably the mobile telecommunications device 17 comprises: an image capture module 21, an accelerometer 23, a GPS receiver 25, an audio capture module 27, a communications module 29, a processor module 33 and a storage module 34. The image capture module 21, accelerometer 23, GPS receiver 25, and audio capture module 27 form a sensor set and are generically referred to as data capture modules in the ensuing description, and are differentiated from the communications module 29, processor module 33 and storage module 34, in that they comprise sensors for sampling physical data.

(22) This sampled physical data, which is also referred to as sensor data, is subsequently processed by the processor module 33 and stored in the storage module 34.

(23) The image capture module 21 may relate to any image capture device, such as an integrated camera commonly found in smartphones or similar mobile devices. As mentioned previously, the image capture module 21 is preferably configured to capture a plurality of images taken in temporal succession, such as provided by a video camera.

(24) The accelerometer 23 is arranged to provide information regarding the motion of the automobile 3 along all three dimensional axes. For example, the accelerometer 23 provides information regarding the pitch, yaw and roll of the automobile 3. Sensor data captured from the accelerometer 23 may be used to determine the g-forces the automobile has been subjected to. This is particularly useful in determining the severity of a collision. In general, the greater the g-forces experienced in a collision, the greater the risk of serious injury to the passengers of the vehicle. This information may assist the emergency services in forming an initial assessment of the severity of a collision. Furthermore, this type of data may also assist the emergency services and/or insurance provider to obtain a better understanding of the driving incident. For example, in the event of a collision, this data may assist the emergency services and/or insurance provider to obtain a better understanding of how the collision occurred. This information may subsequently be used for dispute resolution, and/or for determining culpability.
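By way of a hypothetical illustration only, the derivation of g-force driving information from raw three-axis accelerometer samples might be sketched as follows; the function names and the units assumed (metres per second squared in, multiples of standard gravity out) are choices of this sketch, not features of the described embodiments:

```python
import math

G = 9.80665  # standard gravity, m/s^2


def g_force(ax: float, ay: float, az: float) -> float:
    """Magnitude of acceleration across all three axes, expressed in g."""
    return math.sqrt(ax**2 + ay**2 + az**2) / G


def peak_g(samples) -> float:
    """Peak g-force over a window of (ax, ay, az) samples given in m/s^2."""
    return max(g_force(*s) for s in samples)
```

For a window of samples spanning an impact, the peak value gives a simple indication of collision severity of the kind on which the emergency services response described above might be based.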

(25) The GPS receiver 25 is arranged to provide location information, such as positional coordinates, and can also provide velocity information. When combined with the accelerometer sensor data, the GPS receiver data can be used to provide a very accurate model of a driving incident, such as a collision. In particular, the GPS sensor data provides velocity data, such as the velocity of impact. The velocity data enables one to determine if the vehicle was being driven at speeds greater than the legally permitted maximum speed.
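As a minimal sketch of the speed-limit check just described (the sample format and function name are illustrative assumptions, not part of the embodiments):

```python
def speeding_events(gps_samples, limit_kmh: float):
    """Return the (timestamp, speed) pairs whose GPS speed exceeds the limit.

    gps_samples: iterable of (timestamp, speed_kmh) tuples.
    """
    return [(t, v) for t, v in gps_samples if v > limit_kmh]
```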

(26) The audio capture module 27 provides means, such as a microphone, for recording audio data which might be generated by a driving incident. This includes any sounds generated externally to the vehicle, for example the sound of an impact, or the sound of the crumple zone being crushed. Additionally, sounds generated internally to the vehicle are also recorded. Such audio data may also help to recreate a driving incident, and to understand the causes of the incident.

(27) The communication module 29 provides the mobile telecommunications device 17 with functionality for communicating with the remotely located devices 5, 7, 9, 11 of FIG. 1. The communication module 29 comprises a wireless telecommunications sub-module 31 enabling communication over a telecommunications network. An optional wi-fi communication sub-module is also envisaged. Similarly, the presence of wired communication modules is also envisaged, such as a USB port and/or an IEEE 1394 interface (more commonly known as FireWire™), to support wired communication with a remote device, such as a personal computer or similar. Such a connection may be useful for the purposes of side-loading the application to the mobile device.

(28) As mentioned previously, sensor data captured from any one of the aforementioned data capture modules 21, 23, 25, 27 is processed by the processor module 33, to generate driving information. By driving information is intended any data which may be derived from raw sensor data captured by any one of the aforementioned modules 21, 23, 25, 27. For example, g-force data is driving information which is derived from the sensor data sampled by the accelerometer 23. The skilled reader will be familiar with the plurality of types of driving information that may be derived from sensor data sampled by the aforementioned modules, and accordingly a complete list of the different types of driving information that may be derived from sampled sensor data is omitted for brevity.

(29) The processor module 33 is also configured to analyse sampled sensor data and generated driving information to determine if a driving incident has occurred (described in more detail below).

(30) Sampled sensor data, along with any generated driving information, is stored in storage 34, which is operatively coupled to the processor module 33. The storage 34 is preferably configured with a FIFO (First In First Out) storage buffer 34a, and a permanent storage component 34b. In preferred embodiments, the data capture modules are configured to sample data at periodic intervals. Preferably, these intervals are sufficiently small, of the order of milliseconds, such that for practical purposes the data capture modules may be considered to sample data continuously. The sampled data, along with any derived driving information, is preferably stored in the storage buffer 34a, unless a driving incident has been identified, in which case the associated sensor data and driving information is stored in the permanent storage component 34b to avoid undesirable overwriting.

(31) In preferred embodiments, the FIFO storage buffer 34a is provided with a finite amount of storage space, although said storage space may be predefined by the user, as will be described below. Once this storage space has been exhausted, the oldest recorded data is overwritten by newly sampled data, and this cycle of overwriting older data with newly sampled data is carried out continuously during operation of the telecommunications device 17. If a driving incident has been detected, however, all data related to the driving incident is instead stored in the long-term protected storage 34b to safeguard it from being overwritten by newer data.
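The rolling-buffer behaviour described above can be sketched as follows; this is an illustrative in-memory model only (a deque standing in for the device's actual file storage 34a/34b), and the class and method names are assumptions of the sketch:

```python
from collections import deque


class IncidentBuffer:
    """Rolling FIFO buffer: once capacity is exhausted the oldest entries
    are overwritten by new ones, unless an incident is detected, in which
    case the buffered window is moved to protected storage and kept."""

    def __init__(self, capacity: int):
        self.buffer = deque(maxlen=capacity)  # oldest entries drop off automatically
        self.protected = []                   # long-term storage, never overwritten

    def append(self, data_file):
        self.buffer.append(data_file)

    def protect_all(self):
        """On incident detection, move the buffered window to protected storage."""
        self.protected.extend(self.buffer)
        self.buffer.clear()
```

A usage sketch: with capacity 3, appending files 1 to 5 leaves only the three newest in the buffer; calling `protect_all` on incident detection preserves that window permanently.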

(32) In preferred embodiments, the mobile telecommunications device 17 may be configured with a data recording strategy by the user. This might define the frequency with which sensor data is sampled; in other words, how many measurements are made per unit of time. Furthermore, the recording strategy also defines how data is recorded. In preferred embodiments, sampled sensor data is stored in data files in the buffer 34a. Each data file represents a plurality of sequentially sampled sensor data, captured over a defined period of time, which will be referred to as a ‘data file period’. This is best illustrated by considering captured image data, such as video footage. A video period may be defined, which period defines the unit of physical time covered by captured video footage comprised in a single video data file—this is the data file period for image data. The video data file is subsequently stored in the buffer 34a. For example, a five minute video period setting instructs the processor 33 to store all image data sequentially captured by the image capture module 21 over a period of five minutes in a separate video data file. It is to be understood that whilst the image capture module 21 is continuously sampling image data (in other words, it is continuously capturing image data), this plurality of captured image data is grouped together for storage in video data files, each data file representing a five minute data file period.
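The grouping of continuously sampled data into fixed-period data files might be modelled as below; timestamps in seconds and the function name are assumptions of this sketch:

```python
def group_into_files(samples, period_s: float):
    """Group (timestamp, value) samples into consecutive data files,
    each spanning `period_s` seconds (the 'data file period')."""
    files = {}
    for t, v in samples:
        files.setdefault(int(t // period_s), []).append((t, v))
    return [files[k] for k in sorted(files)]
```

With a 300-second (five-minute) period, all samples whose timestamps fall in the same five-minute window end up in the same data file, mirroring the video-period example above.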

(33) Sampled sensor data and derived driving information are also continuously analysed by the processor module 33 for the purposes of detecting a driving incident. As soon as a driving incident is identified, the associated sensor data and derived driving information is stored in a data file in accordance with the data recording strategy. Returning to the example described in the preceding paragraph, this entails combining the video footage captured within a temporal window of five minutes leading up to the detected driving incident, in a single video file, and storing this data file in protected storage 34b. Whilst each different type of sensor data may be recorded in separate data files, in preferred embodiments all the different types of sensor data sampled by the mobile telecommunications device 17 are stored together in a single data file in accordance with the user selected data recording strategy. This means that the five minute data file referred to above preferably also comprises GPS data, accelerometer data, and audio data sampled over the five minute time period.

(34) Data files are stored in the buffer 34a, unless they are associated with a driving incident, in which case they are then stored in protected storage 34b, which cannot be overwritten. Once the storage space comprised in the buffer 34a has been exhausted, the oldest data file is overwritten by a newer data file.

(35) Data compression methods may also be used in conjunction with the present invention to improve the use of storage. For example, data comprised in data files which have not been associated with a driving incident may be compressed using compression techniques, which techniques will be known to the skilled reader. Similarly, within a data file associated with a driving incident, sensor data captured at time coordinates which are distant from the determined driving incident may be compressed. In this way, the resolution of sensor data which is directly pertinent to a driving incident is maintained, whilst sensor data which may be less relevant to the driving incident is maintained at a lower resolution.
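One simple way to realise the variable-resolution retention described above is to downsample samples that lie outside a window around the incident, as in this illustrative sketch; the window size and retention ratio are arbitrary example parameters, not values from the embodiments:

```python
def tiered_retention(samples, incident_t: float,
                     window_s: float = 30.0, keep_every: int = 10):
    """Keep full resolution within `window_s` seconds of the incident;
    outside that window, retain only every `keep_every`-th sample."""
    kept, skipped = [], 0
    for t, v in samples:
        if abs(t - incident_t) <= window_s:
            kept.append((t, v))   # pertinent to the incident: keep everything
            skipped = 0
        else:
            if skipped % keep_every == 0:
                kept.append((t, v))  # distant from the incident: thin out
            skipped += 1
    return kept
```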

(36) Since the sensors of the mobile telecommunications device 17 are continuously recording sensor data, even when a driving incident is detected, the device 17 may be configured to include, in the same data file, sensor data and/or driving information recorded or derived shortly after the driving incident, since this data may also be relevant for understanding a driving incident. Furthermore, this also enables the mobile telecommunications device to record multiple driving incidents, for example a multiple collision.

(37) The processor module 33 may be configured with encryption means, enabling stored data files to be encrypted to prevent data tampering. Envisaged encryption means may comprise both software solutions and hardware solutions. For example the processor module may comprise a cryptoprocessor, or the processor may be configured with code to carry out a cryptographic process.
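As an illustrative sketch of protecting stored data files against tampering: the embodiments refer to encryption generally, whereas the fragment below shows only keyed-digest tamper detection using the Python standard library; a real implementation would typically combine this with, or replace it by, authenticated encryption, and the function names here are assumptions of the sketch:

```python
import hashlib
import hmac


def seal(data: bytes, key: bytes) -> bytes:
    """Compute a keyed digest over a stored data file, so that any later
    modification of the file can be detected."""
    return hmac.new(key, data, hashlib.sha256).digest()


def verify(data: bytes, key: bytes, tag: bytes) -> bool:
    """Check a data file against its digest using a constant-time compare."""
    return hmac.compare_digest(seal(data, key), tag)
```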

(38) FIG. 4a is a flow chart outlining the method carried out by the mobile telecommunications device 17, to determine if a driving incident has occurred, in accordance with a preferred embodiment. An application is downloaded from the content provider server 9 of FIG. 1, onto the mobile telecommunications device 17, at step 36, as previously described. This may be done over a telecommunications network. The application provides the mobile telecommunications device 17 with the previously described functionality, when executed on the device. The mobile telecommunications device is configured within the vehicle, at step 38. This may comprise affixing the mobile telecommunications device 17 to the vehicle via an adapter, as described previously.

(39) The recording strategy is specified at step 40. As mentioned previously, this comprises setting the period of time that each recorded data file represents. Furthermore, it may also comprise defining the size of the buffer 34a and/or the number of data files that are to be stored within the buffer. Preferably, the recording strategy is specified only once, upon first use of the mobile telecommunications device 17. Where the recording strategy has already been defined, method step 40 is skipped, and the method continues with step 42.

(40) The start of a driving period is registered, at step 42. The start of the driving period determines when the recording of sampled sensor data begins. The start of a driving period may be manually entered by the user via the application's graphical user interface (GUI). Alternatively, the start of the driving period may be automated. For example, the mobile device 17 may be configured to initiate the driving period once sensor data above a minimum threshold value is recorded, indicative of the vehicle being in motion. For example, once a velocity greater than 20 kilometres per hour is detected.
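The automated start-of-driving detection described above may be sketched as follows. The 20 km/h figure is taken from the example in the text; the function name is an assumption made for illustration.

```python
def driving_period_started(speed_samples_kph, threshold_kph=20.0):
    """Return True once any sampled speed exceeds the minimum threshold,
    indicating the vehicle is in motion and a driving period should begin."""
    return any(v > threshold_kph for v in speed_samples_kph)
```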

(41) Alternatively, the start of the driving period may be initiated once the application determines that the mobile telecommunications device 17 has been affixed to the vehicle. For example, the adapter 21 may comprise a registration module (not shown) arranged to register the installation and fixation of the mobile device to the vehicle and/or adapter. The registration module may comprise an NFC device. When the mobile device is brought into close proximity with the registration module, a driving period is initiated.

(42) Once the driving period has been initiated, sensor data is sampled and recorded in storage 34, at step 44. Additionally, the sampled sensor data is used to generate driving information by the processor module 33. The sampled sensor data and the driving information are continuously analysed by the processor module 33, at step 46.

(43) The processor module 33 determines if a driving incident has been detected, at step 48. This is determined on the basis of the analysis carried out at step 46. If a driving incident has been detected, all the sensor data and driving information associated with the data file period are stored in a data file in protected storage 34b, at step 50.

(44) There are several ways in which a driving incident may be detected. Preferably, this is an automated process, wherein the processor module 33 determines a driving incident has occurred on the basis of an observed marked variation in sensor data and/or driving information. The term ‘marked variation’ is to be understood to relate to a significant change in sensor data and/or driving information occurring over a very short time period. In other words, a detected impulse in recorded sensor data and/or driving information, indicative of a significant change in the state of motion of the vehicle occurring over a short time period. For example, a sudden increase in the g-forces the vehicle is subjected to, may be indicative of a collision and if observed, result in a driving incident being determined to have occurred by the processor module 33.

(45) Predefined threshold values may also be used to automate the detection of a driving incident. For example, each data type and/or derived driving information (e.g. acceleration, velocity, g-force, pitch, roll, yaw etc.) may be associated with a threshold value. When any one of these threshold values is exceeded, the processor module 33 may be configured to determine that a driving incident has occurred. Similarly, the automated detection of a driving incident may require that a threshold condition associated with a combination of predefined threshold values, each threshold value being associated with a different data type and/or type of driving information, must be exceeded, in order for the processor module 33 to automatically determine that a driving incident has occurred. For example, in the event of a collision, it is reasonable to expect to observe a marked variation in g-force, accompanied by a marked variation in velocity. Accordingly, the threshold condition may require that in order for a collision to be automatically detected, both a marked variation in g-force and a marked variation in speed, in excess of predefined threshold values must be observed.
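The combined threshold condition described above, requiring that several data types exceed their thresholds together before a collision is flagged, may be sketched as follows. The specific threshold values are illustrative assumptions, not values recited in the patent.

```python
# Illustrative thresholds (assumed values, one per data type)
THRESHOLDS = {"g_force": 4.0, "delta_speed_kph": 30.0}

def collision_detected(reading):
    """Flag a collision only when BOTH the marked variation in g-force AND
    the marked variation in speed exceed their respective thresholds,
    implementing the combined threshold condition."""
    return (reading["g_force"] > THRESHOLDS["g_force"]
            and reading["delta_speed_kph"] > THRESHOLDS["delta_speed_kph"])
```

Requiring the conjunction of several exceeded thresholds, rather than any single one, reduces false positives from isolated sensor spikes.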

(46) Similarly, the occurrence of a driving incident may also be recorded manually by the user via the application GUI. This may be beneficial for the purposes of documenting data associated with a low velocity collision—colloquially referred to as a ‘fender-bender’—which may not result in any marked variations in sampled sensor data, and therefore may not be automatically detected.

(47) Once a driving incident, such as a collision, has been detected, and the data file comprising the associated sensor data and driving information has been securely stored in protected storage 34b, the data file is transmitted to the insurance server 5 of FIG. 1, at step 52. The data file informs the insurance provider that a driving incident has occurred and provides the insurance provider with the sensor data and derived driving information. This data assists the insurance provider in better understanding the collision, in addition to assisting with determining culpability, where more than one vehicle is involved.

(48) Similarly, once a driving incident has been detected, an S.O.S. message may be automatically forwarded from the mobile telecommunications device 17 to the emergency services server 7, at step 54. The S.O.S. message may also comprise sensor data and derived driving information, which may assist the emergency services in coordinating their response.

(49) As mentioned previously, the mobile telecommunications device 17 will continue to sample and record sensor data even once a driving incident has been detected, unless the mobile telecommunications device 17 detects that the driving period has terminated, at step 56. If the driving period has terminated, the present method is terminated and the device stops recording sensor data. The end of a driving period may be automatically detected by the processor module 33, if certain conditions are met. For example, if the measured velocity of the vehicle remains zero for a predetermined period of time, the processor module 33 may infer that the vehicle is stationary and no longer being driven, and accordingly ceases recording sensor data, and the method is terminated. If instead the processor determines that the driving period has not yet terminated, a new data file period is initiated, at step 58, and steps 44 through 62 are repeated.
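The automated end-of-driving-period inference described above, based on the vehicle remaining stationary for a predetermined period, may be sketched as follows. The function name, sampling period and stationary duration are illustrative assumptions.

```python
def driving_period_ended(velocity_samples, sample_period_s=1.0, stationary_s=300.0):
    """Infer that the driving period has terminated when the measured
    velocity has remained zero for at least `stationary_s` seconds."""
    needed = int(stationary_s / sample_period_s)
    if len(velocity_samples) < needed:
        return False
    return all(v == 0 for v in velocity_samples[-needed:])
```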

(50) Where a driving incident is not detected at step 48, the processor module 33 will determine if the data file period has expired, at step 60. If the data file period has expired, then the sensor data and the derived driving information generated during the data file period is combined and stored in a single data file, for storage in the buffer 34a, at step 62. The processor module 33 then determines, at step 56, if the driving period has terminated. If the driving period has not terminated, a new data file period is initiated, at step 58. The mobile telecommunications device 17 continues to sample sensor data, to derive driving information, and steps 44 through 62 are repeated, until the driving period is determined to have terminated.

(51) FIG. 4b provides more detail regarding how a driving incident may be automatically detected (i.e. steps 46 and 48 of FIG. 4a) in a preferred embodiment, where the mobile telecommunications device 17 is configured to sample audio data, accelerometer data, and GPS data only. Each one of these types of data is analysed in turn, at steps 64, 66 and 68. The analysis may comprise comparing measured sensor data and/or derived driving information with a data model. The model may comprise defined threshold values for different data types. Sampled sensor data and/or driving information may be compared with the data model to determine if a driving incident has occurred.

(52) For example, the analysis of sampled audio data, at step 64, may comprise comparing the recorded audio data with predetermined audio data models representing specific sounds. Such sounds may relate to the sound of tyre squeals, the sound of deforming metal, the sound of breaking glass, passenger screaming and/or shouting, the sound of airbag deployment, and any other sound which may be associated with a driving incident. Effectively, this may be considered audio fingerprinting, which serves to identify specific sounds associated with a driving incident by carrying out a signal profile analysis of the audio signal captured by the audio capture module 27. To achieve this, the storage device 34 may comprise a database of prestored audio sound files. The audio sound files represent sounds associated with a driving incident. The captured audio signal is compared with the sound files comprised in the database, to identify matches between the captured audio signal and the database of sound files. This facilitates the audio fingerprinting of specific sounds, indicative of a driving incident, present within the audio signal captured by the audio capture module 27.
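A greatly simplified stand-in for the audio fingerprinting described above is sketched below, using normalised correlation between the captured frame and each prestored template. Production audio fingerprinting would typically operate on spectral features rather than raw samples; the function and template names are assumptions.

```python
import math

def _normalise(sig):
    """Zero-mean, unit-norm version of a signal (for correlation scoring)."""
    mean = sum(sig) / len(sig)
    centred = [s - mean for s in sig]
    norm = math.sqrt(sum(c * c for c in centred)) or 1.0
    return [c / norm for c in centred]

def matches_incident_sound(captured, templates, threshold=0.8):
    """Compare a captured audio frame against each prestored template;
    return the name of the first template whose normalised correlation
    score meets the threshold, or None if no template matches."""
    a = _normalise(captured)
    for name, template in templates.items():
        if len(template) != len(captured):
            continue  # only compare frames of equal length in this sketch
        b = _normalise(template)
        score = sum(x * y for x, y in zip(a, b))
        if score >= threshold:
            return name
    return None
```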

(53) Similarly, the analysis of sampled accelerometer data, at step 66, may comprise comparing the sampled data with predetermined accelerometer data models. The data models represent specific states of motion of the vehicle. For example, this might comprise defining threshold values for yaw, pitch and roll, which if exceeded, indicate a state of motion of the vehicle indicative of a driving incident. For example, a measured yaw value above a predetermined threshold value may be indicative of the vehicle having lost traction and fishtailing and/or skidding. Similarly, a roll and/or pitch value above a predetermined threshold value may be indicative of the vehicle having overturned.

(54) Accelerometer sensor data is also used for deriving driving information such as g-forces. Analysis of g-force data is also used to determine if a driving incident has occurred. For example, approximate g-force values for detecting certain driving incidents are as follows:

(55) Harsh braking—a deceleration of greater than 2.5 m/s² or a forward G-force of greater than 0.7 G for more than 400 msec.

(56) Harsh acceleration—from stationary, an acceleration greater than 2.5 m/s² or a backward G-force of greater than 0.7 G for more than 800 msec.

(57) Harsh swerving—lateral G-forces greater than 0.7 G for more than 400 msec.
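The three harsh-event rules above share a common pattern: a G-force magnitude exceeding a threshold continuously for a minimum duration. A sketch of such a detector follows; the 100 ms sampling period and the dictionary-based sample format are assumptions made for illustration.

```python
def harsh_event(samples, axis, g_threshold=0.7, min_duration_ms=400,
                sample_period_ms=100):
    """Return True if the G-force on the given axis exceeds `g_threshold`
    continuously for at least `min_duration_ms`, per the thresholds in
    paragraphs (55)-(57). `samples` is a list of per-axis G-force dicts."""
    needed = min_duration_ms // sample_period_ms
    run = 0
    for sample in samples:
        if abs(sample[axis]) > g_threshold:
            run += 1
            if run >= needed:
                return True
        else:
            run = 0  # the exceedance must be continuous
    return False
```

Harsh braking and swerving would use a 400 ms minimum duration on the forward and lateral axes respectively, and harsh acceleration an 800 ms minimum duration, as set out above.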

(58) The data models are preferably preconfigured and are comprised within the application executed on the mobile telecommunications device. Different data models are used depending on the type of vehicle the telecommunications device is being used with. Different vehicle types will experience different states of motion during regular operation, which must be considered in determining if a driving incident has occurred. For example, a motorcycle will display more roll and/or pitch than an automobile during regular operation. Accordingly, different data models and/or threshold values must be used to automate the identification of a driving incident for different vehicle types. The specific data models used may be selected during an initial configuration of the mobile telecommunications device, by indicating the type of vehicle the device is being used with.

(59) On the basis of the audio data analysis and the accelerometer data analysis, the mobile telecommunications device determines, at step 72, if a driving incident has occurred. If it is determined that a driving incident has occurred, then the telecommunications device proceeds with step 50 of FIG. 4a. If instead a driving incident is not detected, the telecommunications device proceeds with step 60 of FIG. 4a.

(60) The GPS data analysis, at step 68, comprises analysing positional data and velocity data for any anomalous readings. For example, a sudden deceleration followed by a zero-velocity reading lasting a predetermined period of time, may be indicative of a collision. If such a zero-velocity reading is observed at step 70, in conjunction with anomalous audio and/or accelerometer sensor data at step 72, then a driving incident is determined and the mobile telecommunications device proceeds with step 50 of FIG. 4a. This is a further example of a threshold condition, discussed previously.

(61) The previously described mobile telecommunications device and method may also be used to monitor and generate a driver profile. The driver profile may be indicative of the type of driver a user is. For example, this may comprise determining if a user is a calm and patient driver, or an aggressive driver. Also, this may comprise determining whether a user regularly flouts speed limits, and/or ignores safety distances. This type of information may be used by an insurance provider to conduct a personalised risk assessment for individual users. Insurance premium rates may then be tailored to the specific user on the basis of the personalised risk assessment.

(62) For example, analysis of captured image data, such as video footage, may be used to determine if a user regularly flouts safety distances. In preferred embodiments, the mobile telecommunications device is arranged within the subject vehicle to have a clear line of sight of the road in the direction of principal motion. Accordingly, the number plate of any vehicle preceding the subject vehicle will be captured by the image capture module. Since the physical dimensions of number plates are standardised and known in each country, they may be used as a metric to scale the captured image. When combined with the known optical characteristics of the image capture module, this enables the distance of the image capture module from the image object (i.e. the preceding vehicle) to be determined at the time of image capture. This information may then be used to determine whether a user adheres to recommended safety distances. A user that is observed to regularly flout recommended safety distances, may be considered to represent a greater risk, and accordingly may be required to pay a greater insurance premium than a user who regularly respects recommended safety distances.
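The distance estimation described above follows the pinhole camera model: distance equals the focal length (in pixels) times the known physical plate width divided by its apparent width in pixels. The sketch below assumes a 1000-pixel focal length and a 520 mm plate width (the standard UK dimension); the two-second rule used as the "recommended safety distance" is an illustrative assumption, as the patent does not specify a particular rule.

```python
def distance_to_preceding_vehicle(plate_width_px, focal_length_px=1000.0,
                                  plate_width_m=0.52):
    """Pinhole-model distance estimate using the standardised number-plate
    width as the scaling metric."""
    return focal_length_px * plate_width_m / plate_width_px

def flouts_safety_distance(distance_m, speed_kph, rule_s=2.0):
    """Illustrative two-second rule: the safe gap is the distance travelled
    in `rule_s` seconds at the current speed."""
    safe_gap_m = speed_kph / 3.6 * rule_s
    return distance_m < safe_gap_m
```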

(63) Image analysis can also be used to determine driving conditions and the driving environment. For example, image processing can detect road signs, such as stop signs or traffic lights. Furthermore, driving conditions, as affected by the weather, can be determined. For example, if the windscreen wipers are detected to be in motion, it can be inferred to be raining. Once the driving conditions are so determined, an assessment of the driving performance of a user can be made by determining whether the user reacts or behaves appropriately to the driving conditions. For example, if the driver is seen to be jumping red lights, or driving dangerously in rain or snow, then a higher risk profile may be assigned to that driver.

(64) Similarly, accelerometer and g-force data may be used to determine if a user has an erratic driving style. For example, a user that generates sharp variations in g-force data and accelerometer data during regular operation of a vehicle, may be considered to drive erratically, and therefore at greater risk of being involved in an accident, and insurance premium rates for the user may be tailored accordingly.

(65) The mobile telecommunications device may also be configured to interface and communicate directly with a vehicle's native telemetry systems. For example, the majority of modern cars have inbuilt electronic control systems or engine management systems, arranged to collect data regarding vehicle system performance. This data may be communicated to the mobile telecommunications device either via a wired physical connection, such as USB (Universal Serial Bus), or via a wireless communication protocol such as Bluetooth®. This data may subsequently be used by the mobile telecommunications device to complement sensor data captured directly by the telecommunications device. In this way a more accurate model of a driving incident, and/or of a user profile may be generated. For example, native vehicle telemetry systems may comprise electronic tyre pressure sensors, and are able to detect if a tyre is under- and/or over-inflated. This information may be communicated to the mobile telecommunications device and may help to explain the causes of a driving incident, such as a sudden loss of traction resulting from a burst tyre.

(66) Further features of the Witness application are set out below.

(67) Benchmarking

(68) It will be appreciated that different sensor types, phones, mounting positions, vehicles, drivers and road conditions may generate differing outputs for driving behaviour that is ‘safe’. To account for this, the Witness application may have the following functionality:

(69) During a ‘training mode’ (e.g. the first week of enabling the Witness application) the input from the sensors is used to build up a ‘benchmark’ for a particular driver's typical driving conditions.

(70) Assuming an accident does not occur during this training mode, the benchmark data can subsequently be used to assess the occurrence of a driving incident.

(71) There is preferably an option to notify the Witness application of a change in parameters (e.g. different driver, driving off-road etc). Thus a number of ‘profiles’ may be set up. Each profile may require an independent training mode period.

(72) If the Witness application incorrectly detects that a driving incident has occurred, it can receive feedback from the user to modify its sensitivities, for example where a more aggressive driver is actually driving.
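The benchmarking functionality above may be sketched as follows. The z-score test is an illustrative stand-in for the patent's unspecified comparison against the benchmark, and the class and method names are assumptions; per-profile instances would support the multiple driver/road profiles mentioned above.

```python
import statistics

class DriverBenchmark:
    """Builds a per-driver baseline from readings collected during the
    training mode; afterwards, readings far from that baseline (by more
    than `z_limit` standard deviations) are flagged as anomalous."""

    def __init__(self, z_limit=3.0):
        self.training = []
        self.mean = None
        self.stdev = None
        self.z_limit = z_limit

    def train(self, reading):
        """Collect a reading during the training mode."""
        self.training.append(reading)

    def finish_training(self):
        """Freeze the benchmark once the training period ends."""
        self.mean = statistics.mean(self.training)
        self.stdev = statistics.stdev(self.training) or 1.0

    def is_anomalous(self, reading):
        """Assess a reading against the driver's own benchmark."""
        return abs(reading - self.mean) / self.stdev > self.z_limit
```

User feedback on false detections, as described above, could be modelled as raising `z_limit` for the active profile.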

(73) Crash/Collision Management

(74) On detection of a crash (or other driving incident) the Witness application is arranged to take one or more of the following actions:

(75) Announce that it has detected an accident (audio/screen prompt)

(76) Call the emergency services (with option to cancel)—e.g: Audio/screen prompt: ‘Witness has detected that you have been involved in a (serious) accident and will call the emergency services. If this is not the case, please cancel within 10 seconds.’

(77) Provide reassurance

(78) Provide the user with a checklist of things to do:

(79) Take photographs of vehicles involved in the incident (including number plates)

(80) Take down the name, address, insurance details etc. of 3rd parties involved in the incident

(81) Communicate to the insurance company that an incident has been detected (e.g. low bandwidth data or text message)

(82) Protect the high quality recorded data so it is not overwritten

(83) If appropriate—or in response to a request from a communication to the phone from the insurance company—transmit all or selected portions of the recorded data associated with the incident.

(84) N.B. Use of the communication channels is controlled by the Witness application to prioritise essential communications such as calls to the emergency services.

(85) Note that data transmission from the mobile telephone to the insurance company may be automatic (for example, triggered by a detected incident) or manual (for example, in response to a request from the insurance company). In the latter case, the Witness application may include functionality to allow the insurance company to browse through the data files available on the mobile telephone so as to select one or more for transmission to the insurance company.

(86) Data Processing for Generating a User Profile

(87) Further features of the user profile generation embodiment are summarised:

(88) If the Witness application incorrectly detects an event or incident of significance, and receives feedback that the vehicle was not involved in an accident (but was in a near miss)—this could alter the risk profile of the driver. For example—if this happens frequently, but no accident occurs over a given period, this could be an indicator that the driver is good at reacting to potential hazards.

(89) Erratic driving (e.g. jerky steering or braking detected by G-force sensor).

(90) Driving faster than the speed limit allocated to a given road (detected by GPS).

(91) Further details regarding the features and functionality of the Witness application, in particular the graphical user interface of the Witness application, are now described.

(92) Referring to FIG. 5, a first page 80 of the Witness application user manual is shown in which an image of the Main Menu (the top-level menu) 82 is displayed. The Main Menu is displayed when a user first runs the application on a smart-phone, such as the iPhone® 4. The main menu includes the following user-selectable buttons:

(93) Recording Screen 84

(94) File Management 86

(95) Settings 88

(96) Accident Management 90

(97) Information 92 (displays a manual, as shown in FIGS. 5 to 14).

(98) Selecting the Recording Screen button 84 opens the Recording Screen 94—the second image shown in FIG. 5. The Recording Screen 94 contains a video feed 96 from the camera of the smart-phone, which occupies the majority of the visible screen area. Overlaid on to the video feed is the detected speed of the vehicle 98 (e.g. 0 mph), heading 100 (e.g. south by south-west) and the elapsed recorded time 102 (e.g. 00:00). Displayed in a left column of the Recording Screen are additional user selectable buttons:

(99) Keep current video 104 (pressing this button will automatically copy the current video—and the previous video segment—to the protected storage 34b, and prevents that information from being overwritten. The user is advised to press this button in the event of a driving incident that needs to be recorded).

(100) Take photo 106 (captures a still photograph).

(101) Exit Recorder 108 (returns to main screen).

(102) Start/Stop Recording 110 (starts recording video footage—and other data).

(103) Referring to FIG. 6, a second page 112 of the Witness application user manual is shown, in which an image of the Recording Screen 114 is shown and described operating in a map-displaying mode rather than a video-feed mode.

(104) During recording, (i.e. when the ‘Start/Stop Recording’ button 110 is pressed) the Exit Recorder button in the Recording Screen is substituted with a Map Display button 116. Pressing it will toggle between the modes showing the video feed and a map of the current location.

(105) Referring to FIG. 7, a third page 118 of the Witness application user manual is shown, in which the File Management Screen 120 is shown and described. The File Management Screen 120 can be accessed by pressing the File Management Button 86 in the Main Menu.

(106) The File Management Screen 120 displays video and associated data (e.g. telemetry data) that has been previously recorded. The stored data is contained in either a protected area of storage or in a “Recording Stack”. Data files in the protected area are saved and so not overwritten as part of a Recording Strategy. In contrast, data files in the Recording Stack may be overwritten as part of the Recording Strategy.

(107) As illustrated in FIG. 7, the bottom section of the list represents the “Kept” data files 122, whereas the top section of the list represents the “Recording Stack” 124. “Kept” data files 122 can be edited and deleted from the File Management Screen 120, whereas “Recording Stack” data files cannot. Editing “Kept” data files can involve renaming them.

(108) The recorded data files are listed on screen along with the time and date of the recording and the electronic size of the data file. Accordingly, the user is provided with feedback about how big the data files are, and so if the smart-phone is low in storage, the user can elect to delete certain “Kept” data files. “Recording Stack” data files will be automatically overwritten by the Recording Strategy.

(109) The controls on the toolbar at the bottom of the recordings screen allow a user to change the selection mode of the video stack and includes:

(110) Video 126 (when highlighted, if a data file is selected, a video recording playback screen will be displayed).

(111) Map 128 (when highlighted, if a data file is selected, then a map will be displayed showing the area over which the recording was made).

(112) Export 130 (when highlighted, a selected data file will be passed to an Export Screen where export options will be provided).

(113) Keep 132 (when highlighted, if a “Recording Stack” data file is selected, then a user will be prompted to name it, and then it will be stored as a “Kept” data file).

(114) The icons in the File Management Screen change in dependence on the selected mode. For example, FIG. 20 shows the icons 134 displayed when the Video selection mode is highlighted; FIG. 21 shows the icons 136 displayed when the Map selection mode is highlighted; FIG. 22 shows the icons 138 displayed when the Export selection mode is highlighted and FIG. 23 shows the icons 140 displayed when the Keep selection mode is highlighted. Advantageously, this provides improved feedback to a user about what selection mode is highlighted and so what action is likely from the selection of a data file.

(115) Referring to FIG. 8, a fourth page 142 of the Witness application user manual is shown, in which an image of a Recording Playback screen 144 is shown and described. The Recording Playback screen is invoked by highlighting the Video Mode 126 in the File Management Screen 120 and selecting an appropriate data file.

(116) In the Recording Playback screen 144 it is possible to play back a pre-recorded video feed associated with a chosen data file. As well as playing back the recorded video feed, the Recording Playback screen also displays other associated data such as telemetry data 146. For example, date, time, speed, forward/backward G-forces, latitude, longitude and heading information is overlaid onto the video feed. As the video is played back, these values will typically change depending on the behaviour of the vehicle, as recorded by the mobile device.

(117) Forward and backward G-forces are those that correspond to the forward and backward movement of the vehicle and are the primary metric for determining an event such as a crash. However, other G-forces (e.g. up/down and side-to-side) may also be measured by the device—even if they are not necessarily displayed on the mobile screen.

(118) As recited in FIG. 8, the Recording Playback screen provides further user controls in the form of:

(119) Playback Position Scrub Bar 148 (horizontal bar at bottom of screen)

(120) Playback Speed Bar 150 (vertical bar at left side of screen)

(121) Pause 152 (pauses playback)

(122) Eject 154 (return to previous screen)

(123) Loop 156 (plays the data file continuously)

(124) The Playback Position Scrub Bar 148 allows a user to jump to different positions within the recording without necessarily needing to watch all of the recorded footage. Advantageously, this allows a user to more quickly locate a desired item of footage within a given data file. Also, the Playback Speed Bar 150 can be used to speed up or slow down the playback of the data file. This allows a desired item of footage to be found more quickly through sped-up playback, and also allows an item of footage to be more carefully analysed through slowed-down playback. Furthermore, it is possible to zoom in and out of a region of the video file using a ‘pinch and zoom’ movement as is standard with most multi-touch touch-screen devices.

(125) Referring to FIG. 9, a fifth page 158 of the Witness application user manual is shown, in which a different type of Recording Playback screen 160 is shown and described. Specifically, the image represents playback of a pre-recorded data file as can be invoked by highlighting the Map Mode 128 in the File Management Screen 120 and selecting an appropriate data file.

(126) In contrast with the Video Mode playback, this Map Mode playback shows a map of the area 162 where the recording took place. A blue breadcrumb trail 164 is overlaid on to the map showing the extent of movement of the vehicle during the recorded period. A scrub slider 166 is shown, which is user-operable to adjust the time within the recorded period. As the slider is adjusted, a pin 168 moves along the blue breadcrumb 164 to show the position of the vehicle at the time as specified by the slider 166. Tapping the pin 168 displays associated telemetry data 170 at that position and time.

(127) Referring to FIG. 10, a sixth page 172 of the Witness application user manual is shown, in which an image of a Video Export Screen 174 is described. The Video Export Screen 174 may be invoked by highlighting the Export button 130 in the File Management Screen 120 and selecting an appropriate data file.

(128) The Video Export Screen 174 allows the quality of the video associated with the selected data file to be adjusted, before that video is exported. Advantageously, this can allow the user to control the size of the data to be exported. Telemetry data (i.e. sensor data) is also exported, embedded within the video file. Video data can be exported with or without sound, depending on the permissions of the user. Export typically involves copying data files from the mobile device to a local computer (e.g. via a data cable) or a remote server (e.g. via a wireless connection).

(129) Referring to FIG. 11, a seventh page 176 of the Witness application user manual is shown, in which an image of the Settings Screen 178 is displayed and described. The Settings Screen 178 allows the operation of the Recording Strategy to be controlled. Specifically, the Settings Screen allows a user to select the number of video segments that the application should store, and the size of those segments (i.e. a data file period). Accordingly, the user is able to set a limit to the storage usage of the Recording Stack (i.e. the size of the buffer 34a) part of the program. The user can also control the storage usage of the Recording Stack through the use of the video quality buttons.

(130) Note that the Witness application may be arranged to calculate the remaining memory available on the smart-phone and suggest the appropriate settings automatically.

(131) The Recording Strategy involves maintaining a user-controlled number of video segments. When a new video recording is initialised—instantiating a new segment—this is written over the oldest video segment. Thus only the most recently recorded videos are maintained in the Recording Stack.
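The Recording Strategy above behaves like a fixed-size ring buffer: a new segment always displaces the oldest, while segments explicitly kept by the user are protected from overwriting. A sketch under assumed class and method names follows.

```python
from collections import deque

class RecordingStack:
    """Fixed-size stack of video segments. Instantiating a new segment
    overwrites the oldest, so only the most recent recordings remain;
    segments moved to `kept` are protected from overwriting."""

    def __init__(self, max_segments=5):
        self.stack = deque(maxlen=max_segments)  # oldest dropped automatically
        self.kept = []

    def new_segment(self, segment):
        self.stack.append(segment)

    def keep(self, index, name):
        """Name a segment and protect it, as with 'Kept' data files."""
        self.kept.append((name, self.stack[index]))
```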

(132) Another setting that can be controlled in the Settings Screen 178 is the G-force threshold at which the Witness application will assume that a crash has taken place. It is expected that different vehicles and driving styles will need different G-force thresholds to be set to ensure a reasonable sensitivity to crash forces whilst also preventing crash detection false positives. It should be noted that although the manual recites “Raise the sensitivity if you find that crash detection is triggered during normal driving . . . ” it is the sensitivity threshold that is to be raised, and not the sensitivity itself. A slider 180 allows the sensitivity threshold to be set via the touch-sensitive screen.

(133) The Settings Screen 178 also has a button 182 to allow a user to define more settings via a More Settings Screen.

(134) Referring to FIG. 12, an eighth page 184 of the Witness application user manual is shown, in which an image of the More Settings Screen 186 is displayed and described. Here, it is possible for the user to select speed units 188 and also select whether the map should be displayed during recording 190, and at which speed it should be displayed in favour of the video feed. This is a safety feature of the Witness application that hides the video feed during recording when the vehicle is detected as travelling above a predetermined speed. The video feed is replaced by a map of the location of the vehicle—as is typical with in-vehicle GPS devices. Note that although the on-screen video feed is replaced with a map, video recording continues in the background.

(135) In alternatives, the application may be arranged to detect the vehicle speed, and at a particular speed, switch off the screen entirely. It should be understood that the device will continue to record video, telemetry and other information even when the screen is switched off. Entirely switching off the screen of the device is advantageous as it significantly reduces the drain on the battery of the mobile device.
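The speed-dependent display behaviour of paragraphs (134) and (135) amounts to selecting one of three screen states while recording continues in the background. The following sketch is a hypothetical illustration; the function name, return values and speed thresholds are invented for this example:

```python
def display_mode(speed_kmh, map_switch_speed_kmh, screen_off_speed_kmh=None):
    """Illustrative selection of what the screen shows while recording:
    the live video feed at low speed, a map above the user-set switch
    speed, or (in the alternative described above) a fully switched-off
    screen above a second threshold to save battery."""
    if screen_off_speed_kmh is not None and speed_kmh >= screen_off_speed_kmh:
        return "screen_off"   # video and telemetry recording still continue
    if speed_kmh >= map_switch_speed_kmh:
        return "map"          # video feed hidden for safety, map shown
    return "video_feed"

assert display_mode(5, map_switch_speed_kmh=10) == "video_feed"
assert display_mode(50, map_switch_speed_kmh=10) == "map"
assert display_mode(120, map_switch_speed_kmh=10, screen_off_speed_kmh=100) == "screen_off"
```
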

(136) Note that the Witness application is also arranged to interface with the phone to detect low-light conditions and in response change the brightness of the screen to prevent the user/driver being dazzled. This can also save battery life.

(137) The More Settings Screen 186 also includes a Personal Details button 192 which, when pressed, invokes a Personal Details Screen.

(138) Referring to FIG. 13, a ninth page 194 of the Witness application user manual is shown, in which an image of the Personal Details Screen 196 is displayed and described.

(139) Here, the name 198, vehicle registration number 200, mobile number 202 and email address 204 to be used in an emergency, can be specified by the user. In the event of a detected incident, these details, along with information regarding the detected incident (e.g. time of incident, location and optionally telemetry data) will be emailed to the specified email address automatically. This can ensure that the chosen recipient of that email will be informed immediately about the occurrence and nature of the detected incident.

(140) This Personal Details Screen 196 also allows the user to set whether it is possible for the video stack to be deleted 206. A security PIN protects the changing of this option so that a first user having the PIN (for example, a parent) can review the driving style and behaviour of a second user not having the PIN (for example, their son or daughter) by reviewing the Recording Stack—as the Recording Stack cannot be deleted by that second user. As shown in FIG. 24, the PIN can be set for the first time by entering it twice into the appropriate PIN fields 208.

(141) Referring to FIG. 14, a tenth page 210 of the Witness application user manual is shown, in which images of an Accident Management Screen 212 and Witness Details Screen 214 are displayed and described.

(142) The Accident Management Screen 212 can be invoked by the user selecting the accident management button 90 on the Main Menu, or can be automatically switched to after the Witness application has detected that there has been an incident (e.g. via G-forces exceeding a threshold level). Similarly, data can automatically be permanently stored as “Kept” and/or sent if high G-forces are detected.

(143) As can be seen on the Accident Management Screen 212, there are the following user selectable buttons:

(144) Accident Advice/What to do 214

(145) This option provides guidance as to what to do during an accident—see FIGS. 15 and 16 for the displayed Accident Advice Screens.

(146) Witnesses 216

(147) This option invokes functionality to allow the collection of information from or about witnesses to an incident. An image of the Witnesses Detail information summary screen 214 is shown in FIG. 14. FIG. 17 shows the Witness Detail information collection screen 218. Voice notes can also be taken via the microphone of the smart-phone.

(148) Photos 220

(149) This option enables the camera to allow a user to capture images associated with an incident.

(150) Drivers 222

(151) This option invokes functionality to allow the collection of information about the other drivers involved in an incident, such as their vehicle and insurance details. The Drivers Detail information collection screen 224 is shown in FIG. 18.

(152) Your Details 226

(153) This option provides a readily available store of the details of the user—as to be provided to other drivers—and can contain name, address and insurance details. The ‘Your Details’ screen 228 is shown in FIG. 19.

(154) Details recorded via the Accident Management Screen 212 are electronically authenticated and feature a time-stamp. In alternatives, this data may be encrypted to ensure the integrity of the stored data.

(155) Furthermore, in alternatives, the exchange of the details of the drivers and witnesses can be conducted, at least in part, via Bluetooth® (for example, between mobile devices)—and/or via email. Relevant information associated with the driver is pre-stored on the Witness application in a format that can be sent readily, via a communication channel such as Bluetooth® or email.

(156) Finally, note that in each Screen or Menu, there is a button 230 provided to go back to the previous menu. Also, when certain actions are performed or selected by a user, the Witness application is arranged to provide an audible feedback signal (for example, a beep). For example, this could be in response to starting a recording, keeping a video segment and/or a stopped recording.

(157) Alternatives and Extensions

(158) The Witness application can be extended to interface with remote users to allow them to control the operation of the application as well as view information logged by the application.

(159) In particular, the present application can be extended to a scenario where the Witness application is under the control of an insurance company. Such an insurance company may provide insurance to the user, and so may have an interest in the behaviour of that user, and moreover, a vehicle under the control of that user, as described previously.

(160) In such a scenario, the insurance company will maintain a device monitoring system that can interface with a mobile device running the Witness application. In fact, the device monitoring system may interface with a large number of mobile devices, each running an individual copy of the Witness application, logging respective user/vehicle behaviours. This could be the insurance provider server 5 of FIG. 1.

(161) With such a monitoring system, it is impractical to actively monitor each and every mobile device, and receiving and handling the sheer quantity of data from every device would also be very difficult.

(162) Consequently, the device monitoring system and the mobile devices are advantageously arranged to automatically and intelligently interface with one another in a way that minimises the computational and bandwidth burden on the device monitoring system.

(163) In particular, each mobile device running the Witness application is arranged to make a determination as to whether there is a need to transfer data to the device monitoring system. For example, the Witness application may be arranged to automatically send to the device monitoring system only data that has high G-force activity associated with it, as this may be indicative of a crash or another driving incident. Alternatively, the Witness application may be arranged to send detailed or high-bandwidth information if a user indicates that an accident has occurred.

(164) In either case, the video segments and associated sensor and/or derived driving information data associated with the relevant event will be sent. However, no other data (for example, from another date) will be sent.
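The on-device decision of paragraphs (163) and (164) could be sketched as a simple predicate over a logged event. This is a hypothetical illustration; the event fields, the function name and the 2.5 g default are invented for this example rather than taken from the application:

```python
def should_upload(event, g_threshold=2.5):
    """Illustrative on-device decision: only high G-force events or
    user-reported accidents justify transmitting data, minimising the
    bandwidth burden on the device monitoring system."""
    return (event.get("peak_g", 0.0) >= g_threshold
            or event.get("user_reported_accident", False))

# Routine driving stays on the device; a suspected crash is uploaded:
assert not should_upload({"peak_g": 0.8})
assert should_upload({"peak_g": 4.2})
assert should_upload({"peak_g": 0.5, "user_reported_accident": True})
```
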

(165) Alternatively, less computationally or bandwidth intensive information may be sent on a periodic basis—for example, every day, week or month. Such general information may be sent to establish a profile of a particular user. For example, many high G-force related activities may indicate that the driver is driving aggressively. A determination about the profile can be made automatically by requesting and automatically analysing further data. For example, the location and speed information can be used to determine whether a vehicle is found to frequently break the speed limit. If this is the case, then the profile of the user can be set accordingly.

(166) Such general and periodically obtained information can also be used to remotely adjust the settings of the application. For example, many high G-force related activities may instead indicate that the set up of the car is such that the mobile device is subjected to high G forces (rather than the driver driving aggressively). If the driving is determined to be safe, but the mobile device is setting off many false positives, then the device monitoring system can automatically adjust the settings of the application. For example, the G-force sensitivity threshold may be increased remotely.

(167) Thus, the initial data that is automatically transmitted from the mobile device to the vehicle monitoring system is limited in bandwidth so as not to overload the vehicle monitoring system as a whole. However, after processing or analysis is performed on the initial data, further information may be requested. For example, further information may be requested automatically by the system or manually by a controller acting for the insurance company. Such further information may include high resolution video logs and sensor data and derived driving information such as G-force measurements.

(168) It will be understood that the mobile device may also automatically keep certain data at random. Furthermore, the reason for keeping certain data may also be logged (e.g. logged as a result of a manual request by the user, or in response to a high G-force event).

(169) The mobile device and device monitoring system may also be arranged to highlight detected transgressions. For example, driving during curfew hours, or driving at locations where insurance is not valid (e.g. other countries, off-road, race tracks etc).

(170) As will be appreciated, the interaction of the mobile device and the device monitoring system has the advantage of actively improving user driving behaviour. That is, if the user knows that their logged driving behaviour may adversely affect their insurance policy, then it is likely that the user will be dissuaded from driving recklessly. In view of this, an insurance company may be able to incentivise users through lower insurance premiums.

(171) However, so that users do not wrongfully benefit from such incentives, it is necessary to enforce the correct and consistent use of the Witness application.

(172) It would not be beneficial from the perspective of the insurance company if the user could choose when to enable the application. For example, if a user could choose to disable the application when speeding then the effectiveness of the application would be reduced.

(173) Accordingly, the application may include measures to guarantee that the application is enabled whenever a given insured vehicle is being driven.

(174) Such measures may involve matching data recorded by the Witness application with that recorded independently by the vehicle. For example, the Witness application records the distance travelled during every journey. To ensure the summed distances of all journeys tracked by the Witness application tally with the total travelled distance of the vehicle, the user may be prompted to enter the odometer mileage periodically.

(175) If the distance recorded by the Witness application does not correlate with the difference between odometer readings, then the discrepancy will be flagged to the user and/or the insurance company. A substantial discrepancy will typically indicate that the Witness application has not been monitoring all vehicle journeys and the appropriate action can be taken (e.g. the user can be warned, insurance premium may be raised etc).
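The tallying exercise of paragraphs (174) and (175) amounts to comparing the summed distances of app-logged journeys against the difference between two odometer readings. The following sketch is hypothetical; the function name and the tolerance value are invented for illustration:

```python
def mileage_discrepancy(app_journey_distances_km, odometer_start_km,
                        odometer_end_km, tolerance_km=10.0):
    """Illustrative tally of app-logged journey distances against the
    vehicle's odometer readings. A gap beyond the tolerance suggests
    some journeys were not monitored by the application."""
    logged = sum(app_journey_distances_km)
    travelled = odometer_end_km - odometer_start_km
    gap = travelled - logged
    return gap, abs(gap) > tolerance_km

# 215.5 km logged by the app, but the odometer advanced 400 km:
gap, flagged = mileage_discrepancy([120.0, 35.5, 60.0], 10000.0, 10400.0)
# gap of 184.5 km unaccounted for -> flagged to the user and/or insurer
```
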

(176) It will be appreciated that such a tallying exercise depends on the Witness application being used every time one particular vehicle is driven. However, in alternatives, if the Witness application is used with different vehicles, the Witness application may be arranged to register the different vehicles so that their respective odometer readings can be tallied with distance recordings associated with each respective vehicle.

(177) Other measures can be implemented in conjunction with the Witness application to guarantee that the application is enabled whenever a given vehicle is being driven. For example, the mobile device on which the Witness application is installed may comprise an NFC device, such as an RFID tag. The NFC device may be compatible with a complementary device on a so-called ‘smart-holster’ into which the mobile device may be fitted during operation.

(178) The smart-holster interacts with the NFC device on the smart-phone to determine whether or not the smart-phone is inserted into the smart-holster. The smart-holster can then be interfaced with the engine management system of the vehicle so that the vehicle may be activated and driven only when the mobile device is in place within the smart-holster.

(179) It will be appreciated that the mobile device has so far been described in the context of a smart-phone. However, it will be appreciated by a person skilled in the art that other devices may also be suitable for performing the functions described in relation to the Witness application. For example, the Witness application may be adapted to run on a general purpose tablet-style computing device, such as an iPad®.

(180) Furthermore, it will be understood that features, advantages and functionality of the different embodiments described herein may be combined where context allows. In addition, a skilled person will appreciate that the functionality described above may be implemented using the mobile device suitably programmed.

(181) Having described several exemplary embodiments of the present invention and the implementation of different functions of the device in detail, it is to be appreciated that the skilled addressee will readily be able to adapt the basic configuration of the device to carry out the described functionality without requiring detailed explanation of how this would be achieved. Therefore, in the present specification several functions of the device have been described in different places without an explanation of the required detailed implementation, as this is not necessary given the ability of the skilled addressee to code such functionality into the device.