METHOD FOR PRODUCING A MODEL FOR AUTOMATED PREDICTION OF INTERACTIONS OF A USER WITH A USER INTERFACE OF A MOTOR VEHICLE
20230146013 · 2023-05-11
Inventors
- David Bethge (Stuttgart-Feuerbach, DE)
- Jannik Wolf (Filderstadt, DE)
- Marco Wiedner (Karlsruhe, DE)
- Mohamed Kari (Essen, DE)
CPC classification
B60W50/14
PERFORMING OPERATIONS; TRANSPORTING
G06F3/017
PHYSICS
B60W40/08
PERFORMING OPERATIONS; TRANSPORTING
G06N5/01
PHYSICS
B60W2050/0083
PERFORMING OPERATIONS; TRANSPORTING
G06F3/016
PHYSICS
B60W50/0097
PERFORMING OPERATIONS; TRANSPORTING
International classification
B60W50/00
PERFORMING OPERATIONS; TRANSPORTING
B60W50/14
PERFORMING OPERATIONS; TRANSPORTING
B60W40/08
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A method for producing a model (15) for automated prediction of interactions of a user with a user interface of a motor vehicle. Vehicle operating logs (11, 12, 13) are provided and each includes a record of a time sequence of user interactions with the user interface. Each of the user interactions recorded in the vehicle operating logs (11, 12, 13) is assigned context information (21, 22) that includes a functional category (21) of the user interaction and a driving state (22) of the motor vehicle at the time of the user interaction. Training data (14) are generated based on the vehicle operating logs (11, 12, 13) and the associated context information (21, 22). A context-sensitive interaction model (15) is trained by machine learning on the basis of the training data (14) to make a prediction about a future user interaction based on a time sequence of past user interactions.
Claims
1. A method (10) for producing a model (15) for automated prediction of interactions of a user with a user interface of a motor vehicle, the method comprising: providing vehicle operating logs (11, 12, 13), wherein each vehicle operating log (11, 12, 13) includes a record of a time sequence of user interactions with the user interface; assigning context information (21, 22) to each of the user interactions recorded in the vehicle operating logs (11, 12, 13), the context information (21, 22) including a functional category (21) of the user interaction and a driving state (22) of the motor vehicle at the time of the user interaction, wherein training data (14) are generated based on the vehicle operating logs (11, 12, 13) and the associated context information (21, 22); and training a context-sensitive interaction model (15) by machine learning based on the training data (14) to make a prediction about a future user interaction based on a time sequence of past user interactions.
2. The method (10) of claim 1, wherein the functional category (21) for assignment to the respective user interaction is selected from predetermined functional categories that include: navigation, vehicle information, settings, telephony, multimedia, tuner, network connection, digital address book, and digital vehicle operating manual.
3. The method of claim 1, wherein the driving state (22) for assignment to the respective user interaction is selected from predetermined driving states, and the selection is made on the basis of a driving speed of the motor vehicle at the time of the user interaction.
4. The method of claim 3, wherein the driving state (22) is selected from two predetermined driving states, wherein a first driving state is selected when the driving speed of the motor vehicle is above a predetermined threshold value and a second driving state is selected otherwise.
5. The method of claim 1, further comprising selecting a classifier for the context-sensitive interaction model (15) based on the training data (14), the selecting being carried out after the assigning of the context information (21, 22) and before the training of the context-sensitive interaction model (15), wherein the selection of the classifier is carried out by means of a grid search with cross-validation on the training data (14), and the classifier is selected from: an Extra Trees classifier, a Random Forest classifier, an AdaBoost classifier, a Gradient Boosting classifier, a support-vector machine, and a decision tree.
6. The method of claim 1, further comprising establishing a classifier of the context-sensitive interaction model (15), the classifier being a decision tree having a maximum tree depth of 8.
7. A method for automated prediction of interactions of a user with a user interface of a motor vehicle, comprising using an interaction model (15) produced by the method (10) of claim 1 to make, in a prediction step, a prediction about a future user interaction based on a time sequence of past user interactions, wherein the prediction is made about a future user interaction which in terms of time immediately follows a most recently carried out user interaction.
8. The method of claim 7, further comprising making a prediction of an input mode of the future user interaction, wherein the input mode is one of: touchscreen, hardkey, speech.
9. The method of claim 7, further comprising adjusting a control panel and/or a display field of the user interface based on the prediction about the future user interaction, wherein the adjusting includes a scaling and/or shifting of a displayed content or a visual highlighting of a control element and/or a display element.
10. The method of claim 7, further comprising reading vehicle operating data (31, 32, 33) from a data network of the motor vehicle and/or of one or more other motor vehicles for providing the vehicle operating logs (11, 12, 13), and reading the vehicle operating data (31, 32, 33) from a data network of the motor vehicle for recording the time sequence of past user interactions, wherein the vehicle operating data (31, 32, 33) are read from a CAN bus.
11. A data processing unit for a motor vehicle, characterized in that the data processing unit is configured to carry out the method of claim 7.
12. A motor vehicle comprising a data network and a data processing unit, the motor vehicle being configured to carry out the method (10) of claim 1, wherein the data processing unit is configured to read vehicle operating data (31, 32, 33) from the data network and to provide, based on the vehicle operating data (31, 32, 33), vehicle operating logs (11, 12, 13), and wherein the data processing unit is configured to carry out the assigning of the context information (21, 22) to each of the user interactions recorded in the vehicle operating logs (11, 12, 13) and to carry out the training of the context-sensitive interaction model (15).
Description
BRIEF DESCRIPTION OF THE DRAWING
[0027] The drawing shows a flow chart of a method 10 for producing a model for automated prediction of interactions of a user with a user interface of a motor vehicle.
DETAILED DESCRIPTION
[0028] At the outset, it should be understood that the elements and functions described herein and shown in the drawing may be implemented in various forms of hardware, software, or combinations thereof.
[0029] Those skilled in the art will appreciate that the blocks shown in the flow chart of the drawing may be implemented by hardware, by software executed by a data processing unit, or by a combination thereof.
[0030] The functions illustrated schematically in the flow chart of the drawing are described in detail below.
[0031] In a providing step 1, vehicle operating logs 11, 12, 13 are provided, each of which includes a record of a time sequence of user interactions of a user with a user interface (HMI) of a motor vehicle.
[0032] In a subsequent assigning step 2, each of the user interactions recorded in the vehicle operating logs 11, 12, 13 is assigned context information in the form of a functional category 21 of the user interaction and a driving state 22 of the motor vehicle at the time of the user interaction. The functional category 21 records which functionality of the HMI or the motor vehicle is associated with the respective user interaction. The interaction can be assigned to the fields of navigation, vehicle information, settings, telephony, multimedia, tuner, network connection, digital address book, vehicle operating manual, for example. The possible functional categories are numbered, and the number is converted to a bit sequence for further processing by a one-hot encoding. The driving state 22 is characterized by a binary variable that assumes the value one if the vehicle is moving faster than 8 km/h and otherwise assumes the value zero. A complete stop of the vehicle and slow rolling (for example in front of a traffic light) are thus combined and jointly distinguished from a moving state. The interaction logs 11, 12, 13 paired with the context information 21, 22 then serve as the training set 14 on which the creation of the model is based.
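The context encoding described above can be sketched as follows. This is a minimal illustrative sketch, not code from the patent: the function names, the category identifiers, and the exact feature layout are assumptions; only the one-hot encoding of the functional category and the 8 km/h driving-state threshold come from the paragraph above.

```python
# Sketch: encode the context information of a single user interaction.
# Category list and the 8 km/h threshold follow the example in the text;
# all names here are illustrative, not taken from the patent.

CATEGORIES = [
    "navigation", "vehicle_information", "settings", "telephony",
    "multimedia", "tuner", "network_connection", "digital_address_book",
    "operating_manual",
]

def one_hot(category: str) -> list[int]:
    """Convert a numbered functional category to a one-hot bit sequence."""
    bits = [0] * len(CATEGORIES)
    bits[CATEGORIES.index(category)] = 1
    return bits

def driving_state(speed_kmh: float) -> int:
    """Binary driving state: 1 if moving faster than 8 km/h, else 0."""
    return 1 if speed_kmh > 8.0 else 0

def encode_interaction(category: str, speed_kmh: float) -> list[int]:
    """Feature vector: one-hot functional category plus binary driving state."""
    return one_hot(category) + [driving_state(speed_kmh)]
```

With this encoding, a complete stop and slow rolling in front of a traffic light both yield driving state 0 and are thus jointly distinguished from the moving state.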
[0034] Lastly, in the training step 3, a context-sensitive interaction model 15 is trained by machine learning on the basis of the training data 14 to make a prediction about a future user interaction on the basis of a time sequence of past user interactions. The classifier of the model 15 is a decision tree having a maximum tree depth of 8. For example, the decision tree can be trained to predict whether the next input will be via the touchscreen (category “touch”), by actuating a hardkey (category “hardkey”), or by a voice input (category “speech”). The model 15 is therefore trained to answer the following question: “If the user were to interact with the HMI at the present time, what would the input mode of that interaction be?” The prediction of the model 15 can then be used to prepare or initiate the interaction. Two applications are possible, for example. First, based on the prediction, the user's attention can be directed visually. In low ambient light conditions, a display may remain dimmed when “hardkey” or “speech” is predicted, for example, but be illuminated when “touch” is predicted. A second application is to reduce interactions with the touchscreen, in particular when the driver is stressed by the current driving situation. The interaction model 15 can, for instance, be used to predict the next input (for example via a one- or multi-finger gesture) on the touchscreen and to make the display adjustment that the input would trigger without the input actually taking place.
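The training step above can be sketched as follows. The patent names no library, so the use of scikit-learn's `DecisionTreeClassifier` is an assumption, as are the function name and the toy feature layout; only the maximum tree depth of 8 and the “touch”/“hardkey”/“speech” input-mode labels come from the paragraph above.

```python
# Sketch of training step 3: a decision tree (maximum depth 8) that predicts
# the input mode of the next user interaction from encoded context features.
# scikit-learn is an assumption; the patent does not name a library.
from sklearn.tree import DecisionTreeClassifier

def train_interaction_model(features, input_modes):
    """Fit the context-sensitive interaction model on encoded training data."""
    model = DecisionTreeClassifier(max_depth=8, random_state=0)
    model.fit(features, input_modes)
    return model

# Toy training set: each row is [category bit, driving-state bit],
# each label the input mode of the interaction that followed.
X = [[1, 0], [1, 1], [0, 0], [0, 1]]
y = ["touch", "speech", "touch", "hardkey"]
model = train_interaction_model(X, y)
prediction = model.predict([[1, 1]])[0]
```

In an application such as the display-dimming example, `prediction` would then decide whether the display is illuminated (“touch”) or remains dimmed (“hardkey” or “speech”).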
[0035] It is to be appreciated that the various features shown and described are interchangeable, that is, a feature shown or described in one embodiment may be incorporated into another embodiment. It is further to be appreciated that the methods, functions, algorithms, etc. described above may be implemented by any single device and/or combinations of devices forming a system, including but not limited to storage devices, processors, memories, FPGAs, DSPs, etc.
[0035] While non-limiting embodiments are disclosed herein, many variations are possible which remain within the concept and scope of the present disclosure. Such variations would become clear to one of ordinary skill in the art after inspection of the specification, drawings and claims herein. The present disclosure therefore is not to be restricted except within the spirit and scope of the appended claims.
[0036] Furthermore, although the foregoing text sets forth a detailed description of numerous embodiments, it should be understood that the legal scope of the present disclosure is defined by the words of the claims set forth below. The detailed description is to be construed as exemplary only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. One could implement numerous alternate embodiments, using either current technology or technology developed after the filing date of this document, which would still fall within the scope of the claims.