METHOD FOR OPERATING A VIRTUAL ASSISTANT FOR A MOTOR VEHICLE AND CORRESPONDING BACKEND SYSTEM

20200079215 · 2020-03-12

Abstract

An avatar interface of a virtual assistant is presented to a user in a motor vehicle and a predefined set of accessible elements is provided to be selected by the user. The accessible elements may be operating functions and/or information data. At least one user statement of the user is received via the avatar interface; a question-answering logic is operated in the virtual assistant for determining at least one of the accessible elements that the user requests by the at least one user statement; and the at least one identified accessible element is made available to the user.

Claims

1. A method for operating a virtual assistant for a motor vehicle, comprising: presenting an avatar interface of the virtual assistant to a user in the motor vehicle; storing a predefined set of accessible elements for the user, the set of accessible elements including at least one of operating functions and information data; receiving at least one user statement from the user via the avatar interface; processing the at least one user statement, by a question-answering logic implemented using a machine learning engine in the virtual assistant, to determine at least one of the accessible elements requested by the at least one user statement; and producing for the user the at least one accessible element requested by the user.

2. The method according to claim 1, further comprising personalizing the question-answering logic with respect to the user by the machine learning engine.

3. The method according to claim 2, wherein the personalizing of the question-answering logic uses reinforcement learning.

4. The method according to claim 3, wherein the accessible elements include at least one operating function implemented outside the motor vehicle.

5. The method according to claim 4, wherein the at least one operating function is implemented in at least one of a home automation cloud, a smart city platform and an internet-of-things controlling system.

6. The method according to claim 5, further comprising: receiving, by the virtual assistant, planning data from the user, the planning data describing at least one of an activity and a service that the virtual assistant is to perform during a future time interval; and performing, by the virtual assistant in the motor vehicle, the at least one of the activity and the service according to the planning data.

7. The method according to claim 6, wherein the receiving of the at least one user statement by the avatar interface uses at least one of natural language recognition, gesture recognition, facial pose recognition, a virtual reality presentation and an augmented reality presentation.

8. The method according to claim 7, further comprising configuring the virtual assistant for one of a plurality of different motor vehicles.

9. The method according to claim 8, further comprising continuing, by the virtual assistant, a task begun in the motor vehicle in another motor vehicle.

10. The method according to claim 1, wherein the accessible elements include at least one operating function implemented outside the motor vehicle.

11. The method according to claim 10, wherein the at least one operating function is implemented in at least one of a home automation cloud, a smart city platform and an internet-of-things controlling system.

12. The method according to claim 1, further comprising: receiving, by the virtual assistant, planning data from the user, the planning data describing at least one of an activity and a service that the virtual assistant is to perform during a future time interval; and performing, by the virtual assistant in the motor vehicle, the at least one of the activity and the service according to the planning data.

13. The method according to claim 1, wherein the receiving of the at least one user statement by the avatar interface uses at least one of natural language recognition, gesture recognition, facial pose recognition, a virtual reality presentation and an augmented reality presentation.

14. The method according to claim 1, further comprising configuring the virtual assistant for one of a plurality of different motor vehicles.

15. The method according to claim 1, further comprising continuing, by the virtual assistant, a task begun in the motor vehicle in another motor vehicle.

16. A backend system for a virtual assistant of a motor vehicle, comprising: a memory storing a predefined set of accessible elements, including at least one of operating functions and information data; and at least one processing unit configured to: present an avatar interface of the virtual assistant to a user in the motor vehicle; receive at least one user statement from the user via the avatar interface; process the at least one user statement, by a question-answering logic implemented using a machine learning engine in the virtual assistant, to determine at least one of the accessible elements requested by the at least one user statement; and produce for the user the at least one accessible element requested by the user.

17. The backend system according to claim 16, wherein the at least one processing unit is further configured to personalize the question-answering logic with respect to the user by reinforcement learning in the machine learning engine.

18. The backend system according to claim 16, wherein the at least one processing unit is in at least one of a home automation cloud, a smart city platform and an internet-of-things controlling system.

19. The backend system according to claim 16, wherein the at least one processing unit is further configured to receive, by the virtual assistant, planning data from the user, the planning data describing at least one of an activity and a service that the virtual assistant is to perform during a future time interval, and perform, by the virtual assistant in the motor vehicle, the at least one of the activity and the service according to the planning data.

20. The backend system according to claim 16, wherein the at least one processing unit is further configured to receive the at least one user statement by the avatar interface using at least one of natural language recognition, gesture recognition, facial pose recognition, a virtual reality presentation and an augmented reality presentation.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0026] These and other aspects and advantages will become more apparent and more readily appreciated from the following description of the exemplary embodiment, taken in conjunction with the accompanying drawing.

[0027] The single drawing illustrates a backend system linked to at least one motor vehicle for presenting a virtual assistant in each motor vehicle.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

[0028] In the embodiment explained in the following, the described components each represent individual features which are to be considered independently of one another and which are therefore also to be regarded as components individually or in a combination other than the combination described. Furthermore, the described embodiment can also be supplemented by further features that have already been described.

[0029] In the FIGURE identical reference symbols indicate elements that provide the same function.

[0030] The FIGURE illustrates a backend system 10 that may be based on at least one internet server, a cloud server or a fog computing structure. The backend system 10 may be operated in the internet 11. The FIGURE also shows a vehicle 12, which can be a motor vehicle, e.g. a passenger vehicle or a truck. The backend system 10 may run software for providing a virtual assistant 13. An avatar interface 14 for interacting with a user 15 in vehicle 12 may be controlled or operated by the virtual assistant 13. For example, the avatar interface 14 may be implemented on the basis of a display 16 on which a virtual character may be presented or animated as the avatar interface 14. For operating the avatar interface 14, an electronic control unit 17 of vehicle 12 may be provided. The avatar interface 14 and the virtual assistant 13 in backend system 10 may be interconnected by a communication link 18, which may be based on a wireless link 19. The wireless link 19 may be provided by a communication unit 20 of vehicle 12 and a communication device 21. Communication unit 20 can be based on a mobile radio device (e.g. 4G or 5G) and/or a WIFI module. Accordingly, the communication device 21 may include a radio network and/or a WIFI network.

[0031] Using the virtual assistant 13, the user 15 may control a vehicle component 22 and/or an external component 23. Additionally or alternatively, user 15 may access an information source 24 via the virtual assistant 13. The vehicle component 22 may be, for example, an infotainment system and/or an acclimatization system and/or a media playback system. The external component 23 can be part of, e.g., a home automation cloud and/or a smart city platform and/or an internet of things. The information source 24 can include at least one data server of internet 11 and/or a database. The vehicle component 22, the external component 23 and the information source 24 each constitute an accessible element 25 which is accessible to the user 15 via the virtual assistant 13.
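The grouping of the three element types into one set of accessible elements 25 can be sketched as a simple registry, here as an illustrative Python fragment; all names, categories and the `activate` behavior are assumptions for illustration and not part of the disclosure:

```python
from dataclasses import dataclass


@dataclass
class AccessibleElement:
    """One accessible element 25 reachable via the virtual assistant."""
    name: str
    kind: str  # "vehicle_component", "external_component" or "information_source"

    def activate(self) -> str:
        # Stand-in for triggering the operating function / fetching the data.
        return f"{self.name} activated"


# Predefined set of accessible elements stored in the backend (hypothetical entries).
REGISTRY = {
    "heating": AccessibleElement("heating", "vehicle_component"),
    "home_ac": AccessibleElement("home_ac", "external_component"),
    "weather": AccessibleElement("weather", "information_source"),
}


def lookup(name: str) -> AccessibleElement:
    """Resolve a requested element by name; raises KeyError if unavailable."""
    return REGISTRY[name]
```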

[0032] For accessing such an accessible element 25, user 15 may direct at least one statement 26, e.g. a question or a command, to the avatar interface 14. The at least one statement 26 may be stated to the avatar interface 14 in the course of a dialogue 27, which may be coordinated by the virtual assistant 13. The virtual assistant 13 may operate a question-answering logic 28 for coordinating or leading the dialogue 27. The question-answering logic 28 may be based on a machine learning engine 29, e.g. an artificial neural network. The question-answering logic 28 enables user 15 to find out which accessible elements 25 are available and/or to indirectly describe what the virtual assistant 13 shall do for user 15, and the virtual assistant 13 will then select an appropriate or matching at least one accessible element 25. For example, the user may make a first statement 26: "I feel uncomfortable." The virtual assistant 13 may then continue dialogue 27 by asking, for example: "Is it the temperature or the shape of the seat?" The user may then be prompted to utter another statement 26: "I feel cold." The virtual assistant 13 may then derive that activating a vehicle component 22 for heating the vehicle 12 may be an appropriate solution, i.e. the accessible element that should be activated for user 15 considering the at least one statement 26 of user 15.
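The clarification dialogue above can be illustrated with a minimal rule-based question-answering loop; this is a simplified stand-in for the machine learning engine 29, and the keyword rules and element names are hypothetical:

```python
# Map keywords in user statements 26 either to a follow-up question
# or to a concrete accessible element 25 to activate.
RULES = {
    "uncomfortable": ("ask", "Is it the temperature or the shape of the seat?"),
    "cold": ("activate", "heating"),
    "seat": ("activate", "seat_adjustment"),
}


def answer(statement: str):
    """Return ('ask', question) or ('activate', element) for one statement."""
    text = statement.lower()
    for keyword, action in RULES.items():
        if keyword in text:
            return action
    # No rule matched: keep the dialogue going with a generic question.
    return ("ask", "Could you describe the problem in more detail?")
```

Feeding the example dialogue through this loop, "I feel uncomfortable" yields the follow-up question, and "I feel cold" resolves to the heating component.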

[0033] The virtual assistant 13 may also be linked to further vehicles 30. The vehicles 30 may be equipped in the same way as vehicle 12. When user 15 changes from vehicle 12 to one of vehicles 30, the same avatar interface 14 may be presented to user 15. User 15 may even be enabled to continue a specific task of finding a specific accessible element 25. In other words, user 15 may start stating at least one statement 26 in vehicle 12 and may then continue with stating at least one statement 26 in another vehicle 30 and the virtual assistant 13 may still be able to consider all statements 26 for finding or selecting an appropriate accessible element 25.
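Because the dialogue state lives in the backend system 10 rather than in any single vehicle, the continuity across vehicles 12 and 30 can be sketched as a session keyed by the user instead of by the vehicle; identifiers and the data layout are illustrative assumptions:

```python
class BackendSessions:
    """Keeps per-user dialogue history in the backend, independent of the vehicle."""

    def __init__(self):
        self._history = {}  # user id -> list of (vehicle id, statement)

    def add_statement(self, user: str, vehicle: str, statement: str):
        self._history.setdefault(user, []).append((vehicle, statement))

    def statements(self, user: str):
        """All statements 26 of the user, regardless of which vehicle received them."""
        return [s for _, s in self._history.get(user, [])]


sessions = BackendSessions()
sessions.add_statement("user15", "vehicle12", "I feel uncomfortable")
sessions.add_statement("user15", "vehicle30", "I feel cold")  # user changed vehicles
```

When the user changes vehicles mid-task, `statements()` still returns the full dialogue, so the question-answering logic can select an accessible element based on all statements made so far.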

[0034] The dialogue 27 may be designed to use, e.g., predefined questions that enable the question-answering logic 28 to derive at least one accessible element 25 matching the at least one statement 26 that user 15 made in order to identify the accessible element he is looking for.

[0035] An embodiment is described below.

[0036] The underlying idea therefore is to introduce a virtual assistant within the car (motor vehicle) and to base the virtual assistant on VR/AR (Virtual Reality/Augmented Reality), so that the car user interacts with the assistant to accomplish at least one of the following tasks: [0037] access and run any function of the vehicle; [0038] have a personalized interaction in an arbitrary way (language, accent, gesture), have an arbitrary/customizable representation of the assistant, and interact about an arbitrary topic (e.g. question asking); [0039] perform an allowed function of a system connected to the vehicle/backend.

[0040] The rationale for adding the virtual assistant is that automotive technologies would be too complex to be presented with the current interaction technologies. The virtual assistant is able to guide the user in selecting appropriate functions/sub-functions or other accessible elements (e.g. performing an online software update).

[0041] Virtual assistants can provide additional value to luxury cars in the era of automated vehicles, because the users of luxury cars would be able to enjoy the services of the assistants.

[0042] Virtual assistants are able to learn from the users based on machine learning. Based on this, the users would be able to have personalized interaction with the assistants in arbitrary language and gesture. This would provide natural interaction with the users as well as the possibility to add emotions.

[0043] The user is able to plan a set of activities/services that may be performed within the car during his travel (similar, e.g., to the services of flight attendants). For example, the assistant asks the user to take medicine at a particular time, the assistant takes photographs while driving through the Alps, or the assistant wakes up the user at 6 AM. The user may configure the representation of the virtual assistant based on his interests.
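The planning feature above can be sketched as a simple time-indexed schedule that the assistant checks during travel; the times and activity names are invented for illustration:

```python
import datetime

# Planning data: future time -> activity/service the assistant is to perform.
plan = {
    datetime.time(6, 0): "wake the user",
    datetime.time(9, 30): "remind the user to take medicine",
    datetime.time(11, 0): "take photographs of the scenery",
}


def due_activities(now: datetime.time):
    """Activities whose scheduled time has been reached, in schedule order."""
    return [task for t, task in sorted(plan.items()) if t <= now]
```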

[0044] The user may be provided access to related systems such as smart homes and smart cities to perform appropriate actions from his car. For example, sitting in the car, he can ask the virtual assistant to start the AC at home 5 minutes before arrival.
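The smart-home example above (switching on the AC shortly before arrival) can be sketched as an ETA-based trigger; the threshold and command string are assumptions for illustration:

```python
def commands_for_eta(eta_minutes: float, threshold_minutes: float = 5.0):
    """Issue the smart-home command once the estimated time of arrival
    drops to the configured threshold (5 minutes in the example)."""
    if eta_minutes <= threshold_minutes:
        return ["home_automation: start_ac"]
    return []
```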

[0045] By collecting data over time on a user as well as on a set of users, the performance and the quality of the virtual assistant improve with time. Based on the virtualization, the physical components for the user interaction will be reduced significantly.

[0046] The technical realization of the approach includes the following additions in the vehicle and the automotive backend environment:

[0047] A VR- and AR-based control device is added to the vehicle to present the virtualization. A virtual agent/virtual assistant engine and a machine learning engine are added to control the virtual assistant, its inputs and outputs as well as its learning. A camera-based system and a speech engine are added to identify user inputs. A comprehensive deep learning engine is provided in the backend to optimize and update the functionality of the virtual assistant.

[0048] The automotive backend adds integrations to the backends of the related systems such as smart homes, smart cities and hospitals.

[0049] Overall the idea is to add AI (artificial intelligence, machine learning) in a virtual assistant. The following capabilities may then be provided:

[0050] 1. Question answering (interacting about an arbitrary topic, e.g. question asking). A car user may ask an arbitrary question to the virtual assistant. The assistant would use machine learning/AI techniques to answer the question by accessing data from the related environments such as the internet and smart homes.

[0051] In a vehicle, a data center may be made accessible without the need to explain all accessible elements to the user. The vehicle itself is related to many other systems such as the automotive backend, the internet and smart homes. Without such AI-guided concepts, it would be difficult to present large-scale data to humans.

[0052] Using AI, the interaction between the human and the virtual agent may be improved/optimized at the level of the individual person.

[0053] The IOT/AI approach of the virtual assistant makes it possible to monitor and operate functions remotely in the connected systems such as smart homes (switching on the AC at home, reporting the status of a relative at the hospital).

[0054] In addition to the above, the approach also provides the following advantages: [0055] It adds flexibility to the inputs and outputs. For example, input may be received in an arbitrary way (e.g. natural language, gesture and facial pose, VR) and output may likewise be provided in an arbitrary way (e.g. natural language, gesture and facial pose, VR) by the virtual assistant.

[0056] Furthermore, based on the backend system and the personalization, 1) the same virtual agent and 2) the learning continuity may be maintained in an arbitrary vehicle (i.e. across several vehicles).

[0057] If the driving is automated, the user of a vehicle gets the possibility to use the travel time for performing operations on the related systems. The interaction would be completely different.

[0058] Overall, the example shows how an in-car virtual assistant may be provided by the invention.

[0059] A description has been provided with particular reference to preferred embodiments thereof and examples, but it will be understood that variations and modifications can be effected within the spirit and scope of the claims which may include the phrase at least one of A, B and C as an alternative expression that means one or more of A, B and C may be used, contrary to the holding in Superguide v. DIRECTV, 358 F3d 870, 69 USPQ2d 1865 (Fed. Cir. 2004).