METHOD AND DEVICE FOR SUPPORTING THE DRIVER OF A MOTOR VEHICLE

20200231169 · 2020-07-23

    Abstract

    The invention relates to a method and a device for supporting the driver of a motor vehicle. A device which can be arranged in the motor vehicle or is fitted in the motor vehicle is connected to the smartphone of the driver or passenger, and data from the smartphone and additional received data is processed and transmitted to the driver acoustically and/or visually in a prepared manner according to specified criteria. The device according to the invention has a display (3), a speech input unit (4a), and a gesture sensor (8).

    Claims

    1. A method for supporting the driver of a motor vehicle, wherein a device which can be fitted or arranged in the motor vehicle is connected as an assistant to the smartphone of the driver or passenger, and data from the smartphone as well as further received data are processed electronically according to predetermined program sequences and transmitted to the driver, systematically prepared, in acoustic or visual form.

    2. The method according to claim 1, wherein processing and execution of functions of the assistant, such as language processing and dialog management, as well as processing of received data and smartphone data, take place on the smartphone, so that a powerful and up-to-date software and hardware platform for carrying out these functions is guaranteed independently of the vehicle and the device.

    3. The method according to claim 1, wherein the smartphone data are telephone numbers, contact data, address data, incoming e-mails or messages, internet data, SMS messages, MMS messages, audio files, MP3 audio files, audio playlists, map data, navigation details or personal data of the driver or passenger, and the received data are navigation data, traffic information, radio or TV transmissions, digital audio streams, congestion reports, WhatsApp messages, Facebook Messenger messages or weather data.

    4. The method according to claim 1, wherein the speech recognition takes place either on the smartphone in the smartphone application (local speech recognition) or online via the cloud interface using a cloud-based speech recognition service (online speech recognition); or the speech of the user is recorded on the device via a microphone and the recording is then transmitted wirelessly to the smartphone via Bluetooth (HFP), where speech recognition takes place continuously; or smartphone data are used in the speech processing to improve its recognition performance; or dialog management of the speech processing takes place locally on the smartphone but is constantly updated via a cloud interface using a script file; or the speech processing is optimized continuously by analyzing past speech inputs, thereby improving recognition performance.

    5. The method according to claim 1, wherein, in addition to the speech processing, functions of the assistant are also operated via gesture control or input devices of the vehicle, and functions of the assistant are carried out multimodally, complementarily or redundantly depending on context.

    6. The method according to claim 1, wherein the data to be processed and transmitted are selected depending on the driving situation, traffic situation, time of day, weather, user preference, historical use of the data by the user, preferences of other users or historical use of the data by other users, and the data processing is designed as a self-learning system.

    7. The method according to claim 1, wherein the data to be processed and transmitted are selected according to the following method steps: recording vehicle and traffic situation data such as speed, navigation data, traffic volume, road geometry, road width, number of lanes, frequency of accidents, speed restrictions, general traffic reports and special situations such as construction sites; recording further context data such as current weather, visibility, temperature, time of day and light conditions; retrieving user preferences and historical usage behavior of the user; determining the complexity of the traffic situation according to a complexity index of traffic data and context data; determining the priority of transmission and provision of data from preferences of the user, historical usage behavior of the user, preferences of other users and historical usage behavior of other users; correlating the recorded data with predetermined program sequence patterns; and outputting, or temporarily or permanently suppressing, such prioritized data for use by the driver or passenger.
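    The selection steps of this claim can be illustrated with a small sketch. The following Python fragment is not part of the patent: the weighting factors, thresholds and field names are invented purely to show how a complexity index over traffic and context data could gate prioritized outputs.

```python
# Hypothetical sketch of the claim-7 selection steps: a weighted complexity
# index over traffic/context data decides whether prioritized items are
# output now, deferred, or suppressed. All weights/thresholds are invented.

def complexity_index(traffic, context):
    """Combine traffic and context factors into a 0..1 complexity score."""
    score = 0.0
    score += min(traffic.get("speed_kmh", 0) / 130, 1.0) * 0.3
    score += {"low": 0.0, "medium": 0.15, "high": 0.3}[traffic.get("volume", "low")]
    score += 0.2 if traffic.get("construction_site") else 0.0
    score += 0.2 if context.get("visibility") == "poor" else 0.0
    return min(score, 1.0)

def select_outputs(items, traffic, context):
    """items: list of (name, priority 0..1) from user preferences/history."""
    c = complexity_index(traffic, context)
    decisions = {}
    for name, priority in items:
        if priority >= c:
            decisions[name] = "output"      # important enough for now
        elif c > 0.7:
            decisions[name] = "suppress"    # complex situation: drop it
        else:
            decisions[name] = "defer"       # hold until situation eases
    return decisions
```

    In a calm situation everything is passed through; in a complex one, low-priority items are suppressed.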

    8. A device for supporting the driver of a motor vehicle by a device which can be arranged in a motor vehicle, and can be connected to a smartphone of the driver or of a passenger as an assistant, wherein the device has a display, a speech input unit, a speech output unit and a gesture sensor.

    9. The device according to claim 8, wherein the device has an electronic circuit arranged on a printed circuit board, with a data storage device, a data processing device and Bluetooth components.

    10. The device according to claim 8, wherein the device is an auxiliary device with a display arranged in a housing, with a speech input unit and a speech output unit, and with a fixing device for fixing to the dashboard or to the windshield of the motor vehicle.

    11. The device according to claim 10, wherein the housing has a circular cross-section and the following further constituents: a disk, a ring, a printed circuit board, a loudspeaker, a housing magnet, a rear housing part, contact pins and a main magnet.

    12. The device according to claim 10, wherein, for detachable fixing of the device to a holder, fixing regions are arranged both on the device and on the holder, wherein the fixing region of the device has at least one magnet and two fixing sectors arranged in mirror inversion on the rear of the device, which sectors are provided with electrical contacts arranged in mirror inversion, and the fixing region of the holder has at least one magnet and a fixing sector with electrical contacts, wherein, in the assembled state, the magnets hold the device to the holder and the electrical contacts of the holder's fixing sector are in contact with the electrical contacts of one or the other fixing sector of the device, such that, depending on which fixing sector is contacted, a positioning of the device rotated by 180° is realized.

    13. The device according to claim 12, wherein the fixing sectors of the device are semicircular recesses and the fixing sector of the holder is a semicircular bar, wherein the recesses and the bar correspond to one another in the assembled state, or wherein the electrical energy supply, data exchange and the transmission of radio signals and optionally further received data are realized via the contact between the electrical contacts of the device and the electrical contacts of the holder.

    14. The device according to claim 12, wherein the holder has a suction cup for fixing or a USB terminal.

    15. The device according to claim 10, wherein the housing has a rear housing part and a ring connected thereto with openings in its front region, wherein the display and a printed circuit board with microphones are arranged within the ring.

    16. The method according to claim 2, wherein the smartphone data are telephone numbers, contact data, address data, incoming e-mails or messages, internet data, SMS messages, MMS messages, audio files, MP3 audio files, audio playlists, map data, navigation details or personal data of the driver or passenger, and the received data are navigation data, traffic information, radio or TV transmissions, digital audio streams, congestion reports, WhatsApp messages, Facebook Messenger messages or weather data.

    17. The method according to claim 2, wherein the speech recognition takes place either on the smartphone in the smartphone application (local speech recognition) or online via the cloud interface using a cloud-based speech recognition service (online speech recognition); or the speech of the user is recorded on the device via a microphone and the recording is then transmitted wirelessly to the smartphone via Bluetooth (HFP), where speech recognition takes place continuously; or smartphone data are used in the speech processing to improve its recognition performance; or dialog management of the speech processing takes place locally on the smartphone but is constantly updated via a cloud interface using a script file; or the speech processing is optimized continuously by analyzing past speech inputs, thereby improving recognition performance.

    18. The method according to claim 3, wherein the speech recognition takes place either on the smartphone in the smartphone application (local speech recognition) or online via the cloud interface using a cloud-based speech recognition service (online speech recognition); or the speech of the user is recorded on the device via a microphone and the recording is then transmitted wirelessly to the smartphone via Bluetooth (HFP), where speech recognition takes place continuously; or smartphone data are used in the speech processing to improve its recognition performance; or dialog management of the speech processing takes place locally on the smartphone but is constantly updated via a cloud interface using a script file; or the speech processing is optimized continuously by analyzing past speech inputs, thereby improving recognition performance.

    19. The method according to claim 2, wherein, in addition to the speech processing, functions of the assistant are also operated via gesture control or input devices of the vehicle, and functions of the assistant are carried out multimodally, complementarily or redundantly depending on context.

    20. The method according to claim 3, wherein, in addition to the speech processing, functions of the assistant are also operated via gesture control or input devices of the vehicle, and functions of the assistant are carried out multimodally, complementarily or redundantly depending on context.

    Description

    [0040] There are shown in:

    [0041] FIG. 1 an overview of the system components

    [0042] FIG. 2 an exploded view of the device components of an auxiliary device

    [0043] FIG. 3 a sectional view with device components of an auxiliary device

    [0044] FIG. 4 a representation of the communication connections

    [0045] FIG. 5 an overview of the whole system

    [0046] FIG. 6 the rear of the device according to the invention, with fixing region in top view and lateral top view

    [0047] FIG. 7 the fixing region of the holder in top view and lateral top view

    [0048] FIG. 8 the device mounted on the holder

    [0049] FIG. 9 a sectional view of the principle structure of the device

    [0050] FIG. 10 a detailed view of the arrangement of openings and microphones

    [0051] FIG. 11 a front view of the device with the openings in the ring

    [0052] Device Structure

    [0053] As shown in FIG. 1, the device 1 according to the invention is electronically connected to a smartphone 1a of the driver or of a passenger, as well as, in the present embodiment example, to a car radio 1b, which can be designed with or without navigation system. The device 1 can also be connected to the audio system of the motor vehicle and to elements of the on-board electronics. In the present embodiment example, the electronic connection between smartphone 1a and device 1 is a Bluetooth connection 1c.

    [0054] The car radio 1b can be connected to the device via a cable 1d, a Bluetooth connection or via an FM transmitter 10.

    [0055] Smartphone 1a and device 1 can be connected to a USB power supply unit 15 which in the present embodiment example has two USB terminals.

    [0056] Inside the housing 2, the device 1 has a printed circuit board 5 with an electronic circuit 5a with data storage unit 6 and data processing unit 7. In the present embodiment example, the electronic circuit 5a is furthermore connected to an amplifier 12 connected to a speech output unit 4b designed as a loudspeaker, a graphics driver 13, an FM receiver 17, a gesture sensor 8, an energy management group 14 connected to a power supply unit 11, the Bluetooth components 16, and a switch-off device 18 connected to a speech input unit 4a designed as a microphone, which switches between waking and sleeping states for reasons of energy conservation.

    [0057] Via the Bluetooth connection 1c, when the device is put into operation, a data transfer takes place between the smartphone 1a and the device 1, with which all essential data are transferred internally.

    [0058] FIGS. 2 and 3 show the detailed structure of the device 1 and of the housing 2 of the device 1 designed as an auxiliary device. The disk 9a, ring 9c, printed circuit board 5, power supply unit 11 designed as a battery, loudspeaker 4b, housing magnet 9d, rear housing part 9e, main magnet 9g and textile cover 9h, which can be arranged as an alternative to rear housing part 9e, are essential constituents. In addition to making the device 1 more aesthetically appealing, the textile cover 9h has the advantage that the effect of strong solar radiation on the device 1 is reduced.

    [0059] The disk 9a is made of glass, the ring 9c of metal, preferably aluminum, and the rear housing part 9e of plastic. The display 3 and the printed circuit board 5 are installed in the ring 9c, which is connected to the rear housing part 9e; the components of the housing 2 are thus assembled via the ring 9c.

    [0060] Due to the small dimensions of the device 1 and the part-spherical shape, the structure of the housing 2 is substantially different from that of square devices.

    [0061] As can be seen from FIG. 3, a substantially three-part design is realized with the central ring 9c as the base, as well as the housing 2 and the display 3. The printed circuit board 5 is installed in the ring 9c.

    [0062] The preferred methods of connection between smartphone 1a, device 1 and car radio 1b to the loudspeaker 4b are shown in FIG. 4. In the present embodiment example, the smartphone 1a is connected to the device 1 via Bluetooth 16. In the present embodiment example, the device 1 is connected to the car radio 1b via Bluetooth 16 or via a jack connection 1d or via VHF, i.e. FM transmitter and FM receiver.

    [0063] If the device 1 is designed as an auxiliary device, it is fixed in the field of vision of the driver, preferably to the dashboard or to the windshield, by means of a holder.

    [0064] Connection between Device, Smartphone and Vehicle

    [0065] The device 1 or the auxiliary device connects, automatically and wirelessly, to the smartphone 1a of the driver or the smartphone 1a of a passenger (see FIG. 1, 1c) and to the audio system of the motor vehicle (FIG. 1, 1d). Naturally, an acoustic data output is also possible, directly from the auxiliary device, by means of the loudspeaker. Connection to the audio system of the motor vehicle can be wireless via Bluetooth 1c or via an installed VHF transmitter, or wired via the motor vehicle electronics or a jack connection. The connection to the smartphone 1a is via Bluetooth 1c and Bluetooth Low Energy (BLE). The following Bluetooth profiles are used: [0066] 1) Hands-free Profile (HFP) (a) for transmitting speech data from the device 1 to the smartphone 1a and from the smartphone 1a to the device 1, and (b) for making telephone calls in hands-free mode via the installed loudspeaker or the connected vehicle loudspeaker. [0067] 2) Advanced Audio Distribution Profile (A2DP) for transmitting high-quality stereo audio data from the smartphone 1a to the device 1, and from the device 1 to the audio system of the vehicle, if this is connected to the device 1 via Bluetooth 1c. [0068] 3) Phonebook Access Profile (PBAP), in order to be able to access the call lists of the smartphone 1a. [0069] 4) Audio/Video Remote Control Profile (AVRCP) for transmitting commands from the motor vehicle to the device 1.

    [0070] Non-auditory data, such as for example the graphical user interface, gesture control and other control signals, as well as data transmitted from the device 1 to the smartphone 1a (working state, states of other applications), are transmitted via Bluetooth Low Energy (BLE).
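    The transport assignments above can be restated as a routing table. The sketch below is illustrative only; the payload and function names are invented, and the table simply summarizes which Bluetooth profile carries which kind of data.

```python
# Illustrative routing table summarizing the profile assignments above.
# Payload-type names are invented for the sketch.

PROFILE_FOR_PAYLOAD = {
    "speech_audio":   "HFP",    # mic/speech data, device <-> smartphone
    "stereo_audio":   "A2DP",   # high-quality audio to device/vehicle
    "phonebook":      "PBAP",   # call lists of the smartphone
    "remote_command": "AVRCP",  # commands from the vehicle to the device
    "ui_state":       "BLE",    # graphics, gestures, control signals
}

def transport_for(payload_type):
    # Non-auditory data defaults to Bluetooth Low Energy.
    return PROFILE_FOR_PAYLOAD.get(payload_type, "BLE")
```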

    [0071] Smartphone Application

    [0072] An associated smartphone application which communicates with the device 1 or auxiliary device (FIG. 5) via the Bluetooth interface 1c is located on the smartphone 1a. Communication between the device 1 and the smartphone application is controlled via the Chris driver module (FIG. 5, 32) in the smartphone application. Via this module, the smartphone application can communicate with different devices or auxiliary devices. Numerous essential processes take place in the smartphone application, above all (a) speech processing of speech signals (FIGS. 5, 33 and 39), dialog management (FIG. 5, 34) and speech output (FIG. 5, 35), (b) tethering of interfaces to smartphone functions (telephony, address book, messages, etc., FIG. 5, 36) and software modules (navigation, streaming music services, etc., FIG. 5, 37), (c) tethering of interfaces to cloud-based services (messaging, logging & analytics, etc., FIG. 5, 38), (d) control and algorithms for functions such as adaptive speech output and context processing (FIG. 5, 39).

    [0073] Speech Processing

    [0074] The speech processing chain consists of the following constituents: [0075] 1) Speech-based activation of speech recognition (Wake Word Detection) [0076] 2) Speech recognition (Automated Speech Recognition, ASR) [0077] 3) Interpretation of speech input (Natural Language Understanding, NLU) [0078] 4) Dialog Management (DM) [0079] 5) Generating speech outputs (Natural Language Generation, NLG) [0080] 6) Speech synthesis (Text to Speech, TTS)
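    The six constituents above form a linear chain. The following Python sketch wires stand-in stubs into that chain; the stub logic (keyword matching, string splitting) is invented purely to show the data flow and is no substitute for real ASR, NLU or TTS engines.

```python
# Minimal sketch of the six-stage speech processing chain; each stage is a
# stand-in stub for an external engine.

def wake_word(audio):              # 1) Wake Word Detection
    return "hey chris" in audio

def asr(audio):                    # 2) Automated Speech Recognition
    return audio.replace("hey chris ", "")

def nlu(text):                     # 3) Natural Language Understanding
    intent, _, slot = text.partition(" ")
    return {"intent": intent, "slot": slot}

def dialog_manager(parse):         # 4) Dialog Management
    if parse["intent"] == "call":
        return f"calling {parse['slot']}"
    return "sorry, I did not understand"

def nlg(action):                   # 5) Natural Language Generation
    return action.capitalize() + "."

def tts(text):                     # 6) Text to Speech (stub: pass-through)
    return text

def process(audio):
    if not wake_word(audio):
        return None                # stay silent until the keyword is heard
    return tts(nlg(dialog_manager(nlu(asr(audio)))))
```

    For example, `process("hey chris call anna")` runs the whole chain, while input without the wake word is ignored.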

    [0081] Speech-based activation of speech recognition takes place locally on the device 1 or the auxiliary device. Communication in the vehicle is continuously analyzed until the keyword for activating speech recognition is detected, at which point the transmission of speech data from the microphone of the device 1 or auxiliary device to the speech recognition system in the smartphone application is started. Speech is recognized via a hybrid system which can carry out speech recognition both on the smartphone 1a in the smartphone application and via the cloud interface online via an online speech recognition service. Consequently, the whole process chain from speech recognition via interpretation and dialog management to speech synthesis and output can be carried out locally on the smartphone 1a in the smartphone application.

    [0082] As a rule, the local speech recognition in the smartphone application is used, as here the recognition can take place more quickly and robustly and is also more suitable from a data protection point of view. Purely internet-based speech recognition is unsuitable for motor vehicles, as these are repeatedly in areas with no or only inadequate mobile phone data connection. For special applications, such as for example recognition of addresses or points of interest (POIs), the local speech recognition in the smartphone application is supplemented by cloud-based speech recognition.
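    The routing between local and cloud recognition described here can be summarized in a small decision function. This is an assumed sketch; the function name and domain labels are illustrative, not taken from the patent.

```python
# Hedged sketch of the hybrid recognizer choice: local ASR is the default;
# cloud ASR supplements it for addresses/POIs, but only when a data
# connection exists. Names and domains are illustrative.

def choose_recognizer(domain, online):
    """Return which recognizer(s) to use for a given input domain."""
    if domain in ("address", "poi") and online:
        return ["local", "cloud"]   # supplement local with cloud results
    return ["local"]                # fast, robust, privacy-friendly default
```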

    [0083] For the purposes of speech recognition, the microphone signal is transmitted to the smartphone 1a via the Bluetooth connection by means of the Bluetooth Handsfree Profile (HFP). To increase the recognition performance, special microphones for speech recognition are used which are independent of the microphone used for hands-free telephony, and which, via beamforming and echo cancellation, transmit a clear speech signal of the driver to the smartphone 1a and its associated smartphone application. The speech signal is transmitted from the device 1 or auxiliary device to the smartphone 1a at a bit rate optimized for the speech recognition system.

    [0084] The recognition performance of the speech recognition is also increased considerably vis-a-vis other speech processing systems in motor vehicles by (a) user data such as the address book of the user, previous destinations in the navigation system or metadata of audio files being used as grammars, allowing the system for example to improve its ability to recognize names which the user has in his smartphone address book, and (b) continuously improved speech recognition models which can be imported into the smartphone application via the cloud interface.
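    Point (a), using user data such as the address book as grammars, can be sketched as follows. The scoring scheme and boost value below are invented for the example; a real recognizer would integrate such phrase lists internally.

```python
# Illustrative sketch: user data (contacts, previous destinations, audio
# metadata) is compiled into a set of boosted phrases, and recognition
# hypotheses matching that set are rescored upward. Boost value is invented.

def build_grammar(contacts, destinations, track_titles):
    grammar = set()
    for source in (contacts, destinations, track_titles):
        for entry in source:
            grammar.add(entry.lower())
    return grammar

def rescore(hypotheses, grammar, boost=0.2):
    """hypotheses: list of (text, score). Prefer phrases in the grammar."""
    rescored = [(t, s + boost if t.lower() in grammar else s)
                for t, s in hypotheses]
    return max(rescored, key=lambda pair: pair[1])[0]
```

    A name present in the user's address book can thus win over an acoustically similar but unknown alternative.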

    [0085] Interpretation of Speech Inputs and Dialog Management

    Speech inputs are interpreted in the NLU module of the smartphone application (FIG. 5, 39), wherein the speech input is broken down into intention and further information (for example name, title, etc.) and then transferred to the dialog management module (DM module, FIG. 5, 34). There, depending on these inputs, the next step in the system interaction is determined and output via the speech generation and speech synthesis. Speech is output again via the device 1 or auxiliary device, or via the audio system of the motor vehicle when this is connected to the device 1 or auxiliary device.

    [0086] Dialog is managed by the dialog management module (DM module, FIG. 5, 34) in the smartphone application. An essential feature of this module is that the actual dialog execution, i.e. the input/response rules, is stored in a script file which can be updated regularly via the cloud interface without the smartphone application itself needing to be updated. Thus, continuously optimized dialog management can be made possible for the user.
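    A minimal sketch of this script-file approach: the input/response rules live in data, so replacing the script from the cloud changes the dialog without shipping a new application. The JSON rule format below is an assumption made for the example, not taken from the patent.

```python
# Sketch: dialog rules as data. Swapping RULES_SCRIPT for a newer script
# downloaded via the cloud interface changes behavior without an app update.
import json

RULES_SCRIPT = json.loads("""
{
  "rules": [
    {"intent": "play_music", "response": "Starting playback."},
    {"intent": "call",       "response": "Whom should I call?"}
  ],
  "fallback": "Sorry, I did not understand."
}
""")

def respond(intent, script=RULES_SCRIPT):
    for rule in script["rules"]:
        if rule["intent"] == intent:
            return rule["response"]
    return script["fallback"]
```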

    [0087] Continuous Optimization of Speech Recognition

    [0088] If approved by the user, the speech inputs are stored and transmitted via the cloud interface to a server-based logging and analysis system, where they are indexed by keyword, semi-automatically, and then used for further analysis with the aim of optimizing speech recognition performance.

    [0089] Interaction Via Gesture Control

    [0090] Since specific interactions with the system either cannot be covered via speech input or can only be covered under certain circumstances, the device 1 or auxiliary device also has a gesture sensor (FIG. 1, 8). The gesture sensor recognizes the following gestures: [0091] 1) Swiping gesture from right to left (left) [0092] 2) Swiping gesture from left to right (right) [0093] 3) Swiping gesture from bottom to top (up) [0094] 4) Swiping gesture from top to bottom (down) [0095] 5) Held hand (High 5) [0096] 6) Moving the hand away from the device (far) [0097] 7) Moving the hand towards the device (near)

    [0098] Inter alia, the following functions can be reproduced via this set of gestures: [0099] a) Scrolling from one list entry to the next list entry, such as for example in a list of contacts, songs or messages or from one menu entry to the next menu entry (left gesture, context-dependent) [0100] b) Scrolling from one list entry to the previous list entry or from one menu entry to the previous menu entry (right gesture, context-dependent) [0101] c) Cancelling actions (down gesture, context-dependent) [0102] d) Returning to the previous step (down gesture, context-dependent) [0103] e) Selecting menu or list entry (up gesture or High 5 gesture, context-dependent) [0104] f) Going back up to the next menu (down gesture, context-dependent) [0105] g) Increasing volume, for example in a telephone call or music playback (far gesture, context-dependent) [0106] h) Decreasing volume (near gesture, context-dependent) [0107] i) Pausing, starting or picking up music playback (High 5 gesture, context-dependent) [0108] j) Muting (Mute) sound during a telephone call (High 5, context-dependent)
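    The context dependence noted for each gesture can be pictured as a lookup keyed on the pair (context, gesture). The table below shows only a few of the listed pairs, and the context names are invented for the sketch.

```python
# Sketch of context-dependent gesture dispatch: the same gesture maps to
# different functions depending on the active context. Context names and
# the subset of entries are assumptions for illustration.

GESTURE_ACTIONS = {
    ("list",  "left"):  "next_entry",        # a) scroll forward
    ("list",  "right"): "previous_entry",    # b) scroll back
    ("list",  "up"):    "select_entry",      # e) select
    ("call",  "high5"): "mute",              # j) mute during a call
    ("music", "high5"): "toggle_playback",   # i) pause/start playback
    ("music", "far"):   "volume_up",         # g) increase volume
    ("music", "near"):  "volume_down",       # h) decrease volume
}

def handle_gesture(context, gesture):
    return GESTURE_ACTIONS.get((context, gesture), "ignore")
```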

    [0109] Interaction Via Input Devices of the Vehicle

    [0110] As a third input mechanism, the device can recognize and use standardized commands via the Bluetooth AVRCP profile, via a Bluetooth connection 1c to the car, or to the infotainment system or car radio 1b of the car. For example, where supported by the vehicle, the control keys of a multifunction steering wheel can be used in this way as an input mechanism. Bluetooth AVRCP is a profile for remotely controlling audio or video devices. It supports commands such as next song (forward), previous song (backward), pause and playback, and louder or softer. These are used in the following manner in the Chris assistant: [0111] a) Scrolling from one list entry to the next list entry, such as for example in a list of contacts, songs or messages, or from one menu entry to the next menu entry (fast forward and forward AVRCP command) [0112] b) Scrolling from one list entry to the previous list entry or from one menu entry to the previous menu entry (backward AVRCP command) [0113] c) Cancelling actions (exit AVRCP command) [0114] d) Returning to the previous step (exit AVRCP command) [0115] e) Selecting a menu or list entry (select AVRCP command, context-dependent) [0116] f) Going back up to the next menu (exit AVRCP command, context-dependent) [0117] g) Increasing volume, for example in a telephone call or music playback (volume up AVRCP command) [0118] h) Decreasing volume (volume down AVRCP command, context-dependent) [0119] i) Pausing, starting or resuming music playback (play, pause and stop AVRCP command, context-dependent) [0120] j) Muting (Mute) sound during a telephone call (mute AVRCP command, context-dependent)

    [0121] AVRCP commands are handled via the Bluetooth connection 1c of the device 1 or auxiliary device, with the vehicle identifying the device 1 or auxiliary device as an audio source that supports AVRCP commands. The AVRCP commands are received in the device 1 or auxiliary device, converted into pure data events and then forwarded to the associated smartphone app, where they are processed, context-dependently, according to the above scheme. In this way, input possibilities of the vehicle can be used not only for audio playback but also for control in menus and dialogs.
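    The conversion of received AVRCP commands into plain data events forwarded to the smartphone app can be sketched as follows. The event names and the forwarding callback are illustrative assumptions; the real device firmware and BLE transport are not shown.

```python
# Sketch: AVRCP commands received over Bluetooth are mapped to neutral data
# events and forwarded to the smartphone app, where context decides the
# final function. Event names and callback are invented for the example.

AVRCP_EVENTS = {
    "forward":     "next_entry",
    "backward":    "previous_entry",
    "play":        "select_or_play",
    "volume_up":   "volume_up",
    "volume_down": "volume_down",
    "mute":        "mute",
}

def on_avrcp_command(command, send_to_app):
    event = AVRCP_EVENTS.get(command)
    if event is not None:
        # Forward the neutral event (e.g. over BLE) to the smartphone app.
        send_to_app({"source": "avrcp", "event": event})
```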

    [0122] Redundant and Complementary Multi-Modal Interaction

    [0123] A further essential feature of the Chris assistant is that the three input mechanisms, speech input, gesture control and control via input devices of the vehicle, can additionally be used multimodally. Thus, for example, operation in a list of entries (such as for example contacts, songs or messages) can take place as follows: [0124] A speech command (for example forward) or gesture (left, context-dependent) or vehicle input mechanism (skip next) selects the next entry in the list [0125] A speech command (for example back) or gesture (right, context-dependent) or vehicle input mechanism (skip back) selects the previous entry in the list

    [0126] Where expedient, as many interactions as possible are provided, depending on context, either redundantly or complementarily, so that the driver can choose the mode best suited to his preferences in the respective situation.
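    The redundant mapping of the three modalities onto shared intents, as in the list example above, can be pictured as a normalization table. The entries below are assumptions based on that example.

```python
# Sketch of multimodal normalization: speech, gesture, and vehicle inputs
# all resolve to the same list-navigation intents. Entries are assumptions.

INTENT_MAP = {
    ("speech",  "forward"):   "next_entry",
    ("gesture", "left"):      "next_entry",
    ("vehicle", "skip_next"): "next_entry",
    ("speech",  "back"):      "previous_entry",
    ("gesture", "right"):     "previous_entry",
    ("vehicle", "skip_back"): "previous_entry",
}

def normalize(modality, command):
    """Return the shared intent, or None for unknown input."""
    return INTENT_MAP.get((modality, command))
```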

    [0127] Speech and Display Output

    [0128] A further advantage of the invention is secure data output which takes place in the form of acoustic data output as speech output and in the form of visual data output as display representation. Depending on context, information can be provided either only as speech output (for example during an enquiry in dialog processing), only as a display (for example displaying a music album during music playback) or as combined display and speech output (for example displaying a contact and reading out the name).
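    A simple selector for these three output modes can be sketched as follows; the context labels are invented for the example and stand in for the dialog state of the assistant.

```python
# Sketch of context-dependent output mode selection, following the three
# examples above. Context labels are assumptions for illustration.

def output_modes(context):
    if context == "dialog_enquiry":
        return {"speech"}             # enquiry during dialog processing
    if context == "music_playback":
        return {"display"}            # e.g. show the music album only
    return {"speech", "display"}      # e.g. show contact and read out name
```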

    [0129] Detachable Fixing of Devices to a Holder

    [0130] As can be seen from FIG. 6, the device 1 has a fixing region 20 on its rear 22, said fixing region being divided into two fixing sectors 23 and 24, which are arranged in mirror inversion in a recess and are provided with electrical contacts 25 positioned in mirror inversion. In the present embodiment example, five electrical contacts 25 are arranged in each of the two fixing sectors 23, 24. Naturally, it is also possible to arrange a greater or smaller number of electrical contacts. A magnet 9g is arranged centrally in the fixing region 20. Openings 30 for loudspeakers are located laterally adjacent to the fixing region 20.

    [0131] FIG. 7 shows the design of the fixing region 21 at the holder 19. In contrast to the fixing region 20 of the device 1, the fixing region 21 has only one fixing sector 26. The fixing sector 26 is designed as a bar with a semicircular shape. In the present embodiment example, a total of five electrical contacts 27 are arranged in the fixing sector 26, which correspond to the electrical contacts 25 in the fixing sectors 23 and 24 of the device 1. A magnet 9g is arranged centrally in the fixing region 21.

    [0132] A suction cup 28 for fixing the holder 19 to a smooth surface is located at the holder 19.

    [0133] FIG. 8 shows the completely installed unit of device 1 and holder 19. In the assembled state, the semicircular bar of the fixing sector 26 and the blind bar 31, designed as elevations, have engaged in the fixing sectors 23, 24 designed as recesses and have established the electrical connection via the electrical contacts 25 and 27, which have been brought into contact. As the fixing sectors 23, 24 are designed symmetrically on the device 1, the device 1 can also be rotated by 180° and then connected to the holder 19, in the opposite installation position, via the electrical contacts 25 and 27.

    [0134] The mechanical connection is realized by the cooperation between the magnets 9d and 9g.

    [0135] In the present embodiment example, a USB terminal 29 is arranged at the holder 19.

    [0136] As can be seen from FIG. 9, the housing 2 has a rear housing part 9e, a ring 9c and a display 3 arranged in the ring 9c, as well as a printed circuit board 5. As can be seen, the ring 9c forms a central component of the overall housing structure. The ring 9c connects to the rear housing part 9e and receives the printed circuit board 5 and the display 3 in its internal region.

    [0137] Microphone Ring

    [0138] FIG. 10 shows the enlarged detailed view of the opening 40 and of a microphone 4a opposite the opening 40, connected by the sound channel 41.

    [0139] A joint 42, in which the openings 40 are located, almost invisible to the observer, is arranged in the front region 40a of the ring 9c. Through the openings 40 in the joint 42, the sound reaches, via the sound channel 41, the microphone 4a, which in the present embodiment example is designed as a directional microphone.

    [0140] FIG. 11 shows a front view of the display 3 and the ring 9c. In its front region 40a, the ring 9c has the joint 42 in which the openings 40 are arranged. In the present embodiment example, a total of four openings 40 are arranged which are positioned opposite the microphone 4a.

    [0141] The invention is not limited to the embodiment examples represented here. Instead, it is possible, by combining the means and features, to realize further embodiments without going beyond the scope of the invention.

    LIST OF REFERENCES

    [0142] 1 Device

    [0143] 1a Smartphone

    [0144] 1b Car radio

    [0145] 1c Bluetooth connection

    [0146] 1d AUX (jack) connection

    [0147] 2 Housing

    [0148] 3 Display

    [0149] 4a Microphone, speech input unit

    [0150] 4b Loudspeaker, speech output unit

    [0151] 5 Printed circuit board

    [0152] 5a Circuit

    [0153] 6 Data storage unit

    [0154] 7 Data processing unit

    [0155] 8 Gesture sensor

    [0156] 9a Disk

    [0157] 9c Ring

    [0158] 9d Magnet, housing magnet

    [0159] 9e Rear housing part

    [0160] 9g Magnet, main magnet

    [0161] 9h Textile cover

    [0162] 10 FM transmitter

    [0163] 11 Power supply unit

    [0164] 12 Amplifier

    [0165] 13 Graphics driver

    [0166] 14 Energy management group

    [0167] 15 USB power supply unit

    [0168] 16 Bluetooth components

    [0169] 17 FM receiver

    [0170] 18 Switch-off device

    [0171] 19 Holder

    [0172] 20 Fixing region

    [0173] 21 Fixing region

    [0174] 22 Rear

    [0175] 23 Fixing sector

    [0176] 24 Fixing sector

    [0177] 25 Electrical contact

    [0178] 26 Fixing sector

    [0179] 27 Electrical contact

    [0180] 28 Suction cup

    [0181] 29 USB terminal

    [0182] 30 Loudspeaker opening

    [0183] 31 Blind bar

    [0184] 32 Chris driver module

    [0185] 33 Speech processing (ASR module)

    [0186] 34 Dialog management (DM module)

    [0187] 35 Speech output (TTS module)

    [0188] 36 Tethering of interfaces to smartphone functions

    [0189] 37 Tethering of interfaces to software functions

    [0190] 38 Tethering of interfaces to cloud based services

    [0191] 39 Control and algorithms for functions such as adaptive speech output and context processing (NLU module)

    [0192] 40 Openings

    [0193] 40a Front region

    [0194] 41 Sound channel

    [0195] 42 Joint