SYSTEM AND METHOD FOR PROVIDING AUTOMATED DIGITAL ASSISTANT IN SELF-DRIVING VEHICLES

20200223352 · 2020-07-16

Abstract

Method and system for displaying a digital human-like avatar in self-driving vehicles (SDV) including detecting a set of environmental data/characteristics in the vehicle surrounding area by sensors/cameras generating signals to be input for traditional machine learning classifiers on a computer vision module (CVM); receiving control system inputs from the sensors and processing actions to be performed; after receiving outputs from the control system, executing autonomous driving actions by an actuator system; and based on inputs from the CVM and a personalization module, combined with actions and car status information from the control system, generating a digital avatar performing human-like reactions/expressions/gestures to properly communicate/indicate the current and future actions of the SDV for external people on an SDV display device.

Claims

1. A system for providing automated digital assistant in a self-driving vehicle comprising: a computer vision module employing computer vision techniques using information obtained from cameras/sensors installed in the vehicle for understanding the environment around the autonomous vehicle; a personalization module for customizing the digital assistant/avatar according to the vehicle owner's preference and considering external conditions detected by sensors/cameras; and a digital assistant/avatar generator module generating an avatar based on the inputs from the computer vision module and personalization module, combined with the plan/set of actions and vehicle status information from a control system of the vehicle.

2. The system, according to claim 1, wherein the digital assistant/avatar generator module is able to generate: a digital assistant/avatar able to perform a plurality of human-like reactions, expressions, gestures and signs to properly communicate/indicate the current actions and the future actions of the self-driving vehicle for external people; and a plurality of messages to provide additional information about actions and status of the self-driving car, and acknowledgement of pedestrian presence and actions.

3. A method for providing automated digital assistant in a self-driving vehicle comprising the steps of: detecting a set of environmental data/characteristics in a surrounding area of the vehicle by a plurality of sensors/cameras generating signals to be input for traditional machine learning classifiers on a computer vision module; receiving, by a control system, the inputs from the sensors and processing a set of actions to be performed; after receiving outputs from the control system, executing a set of autonomous driving actions by an actuator system; based on inputs from the computer vision module and a personalization module, combined with the plan/set of actions and car status information from the control system, generating by an avatar generator module a digital avatar performing a plurality of human-like reactions/expressions/gestures to properly communicate/indicate the current actions and the future actions of the self-driving vehicle for external people on a transparent display device of the vehicle.

4. The method, according to claim 3, wherein the machine learning classifiers include support vector machines, random forest, neural networks and nearest neighbors.

5. The method according to claim 3, wherein the avatar communicates via text and/or images to present additional information and, if necessary/allowed, some self-driving vehicle status.

6. The method, according to claim 3, wherein avatar generation can be implemented using computer graphics, Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) know-how, facial mapping/scanning/rendering and machine learning.

7. The method, according to claim 3, wherein avatar is displayed on a vehicle windshield, side window, rear window or any external display.

8. The method, according to claim 3, further comprising customization of the avatar by means of a personalization module according to the user preference.

9. The method, according to claim 3, wherein the digital avatar is permanently presented on the display during the vehicle trip/ride.

10. The method, according to claim 3, wherein the digital avatar alternatively disappears when no action/communication is necessary and reappears whenever the computer vision module detects the presence of an external person or object.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0049] The objectives and advantages of the current invention will become clearer through the following detailed description of the exemplary and non-limitative figures presented at the end of this document.

[0050] FIG. 1A discloses an example of a preferred embodiment of the invention, displaying the avatar and additional information in the self-driving car windshield or another front display.

[0051] FIG. 1B discloses another example of a preferred embodiment of the invention, displaying the avatar and additional information in the rear window or another rear display of the self-driving car.

[0052] FIG. 2 discloses that the proposed avatar relies on existing modules of the self-driving car (Sensor Systems and Control System) to determine a proper human-like reaction/gesture and present additional information to external people, based on a set of actions (established by the Control System and sent to the Actuator Systems).

[0053] FIG. 3 discloses the proposed avatar comprising Computer vision module, Personalization module and Avatar generator module.

[0054] FIG. 4A discloses an example in which, when the self-driving car detects a pedestrian, the proposed avatar starts to communicate with him/her in order to inform the next planned actions. In this example, the avatar informs that the self-driving car is aware of the pedestrian's presence and that the next action will be to slow down.

[0055] FIG. 4B discloses that the avatar may also be displayed on rear window and communicate to the driver at the car behind, warning about the next planned actions. In this example, the avatar informs that a pedestrian is crossing ahead, and the self-driving car is slowing down.

[0056] FIG. 4C discloses that the avatar keeps providing/updating status/feedbacks to make the pedestrian feel comfortable and safer.

[0057] FIG. 4D discloses that the avatar may also be displayed on rear window, providing/updating status/feedbacks to the driver at the car behind.

[0058] FIG. 4E discloses an example in which as the self-driving car stops before the pedestrian, the avatar changes its expression/gestures, updates the info/status and recommends the pedestrian to cross.

[0059] FIG. 4F discloses that the avatar may also be displayed on rear window, informing to the driver at the car behind that pedestrian will cross the street, and the self-driving car stopped (0 mph).

[0060] FIG. 4G discloses an example in which while the pedestrian is crossing the street, the avatar changes the expression to indicate that the self-driving car is waiting.

[0061] FIG. 4H discloses that the avatar may also be displayed on the rear window, communicating to the driver at the car behind that the pedestrian is still crossing, and the self-driving car stopped (0 mph).

[0062] FIG. 4I discloses an example in which, after the pedestrian crosses the street, the gesture/expression of the avatar may be changed again (e.g.: a standard expression), acknowledging the conclusion of the pedestrian's action, informing the next actions and updating the info/status of the self-driving car.

[0063] FIG. 4J discloses that the avatar may also be displayed on rear window, presenting a thankful message to the driver at the car behind, and informing that the self-driving car will continue to ride (3 mph).

[0064] FIG. 4L discloses the 5 situations/steps of the example (use case) for the avatar displayed in the windshield or another front display (communication to the pedestrian). The avatar gestures/expressions and the information change according to the situation/environment.

[0065] FIG. 4M discloses the 5 situations/steps of the example (use case) for the avatar displayed in the rear window or another rear display (communication to the car behind). The avatar gestures/expressions and the information change according to the situation/environment.

DETAILED DESCRIPTION OF THE INVENTION

[0066] Considering the human behavior related to self-driving cars (i.e., people want to be somehow notified when they are seen by an automated vehicle), the present invention proposes a digital avatar to communicate the current actions and the future actions (intentions/plans) of the said self-driving car for external people.

[0067] FIG. 1A discloses an exemplary embodiment of the proposed solution. A self-driving car comprises at least one external display 10, wherein the proposed avatar 20 is displayed. The avatar 20 is a digital assistant that virtually and visually represents a human being (for instance, a representation of the car owner, one of the passengers, etc.) or an animated character able to reproduce human form, expression and gestures.

[0068] In order to better signal to the external people (pedestrians, cyclists, other drivers), the said avatar 20 may be displayed on the self-driving car windshield. Alternatively, or complementarily, the proposed avatar may also be displayed on the side and rear windows (FIG. 1B). It is necessary to equip the self-driving car with external displays, for instance replacing the window glass, in order to present the proposed avatar to the external people. One possibility, among others, could be installing a curved/convex/semispheric display on top of the car roof, which could provide 360-degree avatar visibility.

[0069] According to FIG. 2, the proposed avatar 20 relies on all other existing technologies and modules/systems that enable driverless/autonomous/self-driving vehicles: sensor systems 30 to sense/detect/recognize a set of environmental data/characteristics in the surrounding area of the vehicle (e.g.: temperature, lane marks, other vehicles, pedestrians, etc.); control system 40 comprising processors to receive inputs from sensors, prepare/establish a plan/set of actions and provide outputs to actuators; actuator systems 50 to execute a set of autonomous driving actions (e.g.: acceleration, braking, steering/swerve, lights, honk, etc.); navigation systems to establish geolocation; etc.

[0070] Based on the plan/set of actions (established by the control system to control/command actuator systems), the avatar 20 may determine a plurality of human-like reactions/expressions/gestures (e.g.: hand waves, head nods, many other gestures that indicate intentions) to properly communicate/indicate the current actions and the future actions (intentions/plans) of the said self-driving car for external people.

[0071] Besides displaying the avatar 20 gestures, the system can also communicate via text and/or images to present additional information and, if necessary/allowed, some self-driving car status (e.g.: accelerating, braking, stopped, current speed, etc.).

[0072] FIG. 3 discloses a more detailed view about the proposed avatar 20, comprising: computer vision module 60, personalization module 80 and avatar generator module 70.

[0073] Computer vision module 60: For understanding the environment around the vehicle many existing approaches for autonomous vehicles rely on Artificial Intelligence and machine learning techniques, some of them including analysis of images/videos obtained by cameras available in the vehicle. According to the present invention, in order to provide more humanized communications with external people, computer vision techniques using information obtained from cameras installed in the vehicle are employed.

[0074] The proposed computer vision module 60 is able to identify/detect: [0075] a plurality of pedestrian poses (e.g., standing up, arms up, pointing to the car, etc.); [0076] a plurality of human actions (e.g., walking, standing up, running, biking, using phone, etc.); [0077] a plurality of gestures (e.g., hand waving, head nod, thumbs up, okay gesture, left/right turn signal with arms, stop gesture with hand, etc.); [0078] gaze (e.g., looking to the car, looking to smartphone, distracted or looking somewhere else); [0079] face expressions (e.g., happy, worried, sad, neutral, etc.); [0080] gender recognition (male, female); [0081] age estimation (e.g.: kid, elder, etc.); and [0082] fashion/clothes recognition.

[0083] All these elements may contribute to increase the quality of human-like interactions of the self-driving car (i.e., the avatar) with the pedestrian/cyclist/driver, in different traffic situations and conditions.

[0084] For the pedestrian/cyclist detection algorithm, the proposed computer vision module 60 can be implemented using different approaches: [0085] One approach is to extract features from images/videos and use these features (hand-crafted descriptors) as input for traditional machine learning classifiers, including, but not limited to, support vector machines (SVM), random forest, neural networks, nearest neighbors, etc. The features can be based on histograms of oriented gradients (HOG), local binary patterns (LBP), color histograms, bags of visual words, etc. [0086] A second approach is implemented by part-based methods, including Deformable Part Models (DPM) (reference is made to Felzenszwalb, P. F., Girshick, R. B., McAllester, D., & Ramanan, D. (2010). Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9), 1627-1645). [0087] Another approach refers to integrating feature extraction/learning with object detectors (classifiers) trained for pedestrian/cyclist detection, including, but not limited to, known techniques such as Fast R-CNN, Faster R-CNN, YOLO (You Only Look Once), SSD (Single Shot Multibox Detector) and other deep learning techniques for real-time object detection.
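As a purely illustrative, non-limiting sketch of the first approach (hand-crafted descriptors fed to a traditional classifier), the fragment below classifies toy feature vectors with a nearest-neighbor rule; the feature values and labels are hypothetical placeholders, and a real deployment would use actual HOG/LBP descriptors and a trained SVM or random forest:

```python
import math

def nearest_neighbor(query, training_set):
    """Classify a feature vector by the label of its closest training
    example (Euclidean distance); stands in for SVM/random forest/etc."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training_set, key=lambda ex: dist(ex[0], query))[1]

# Hypothetical hand-crafted descriptors (e.g. condensed HOG/LBP histograms).
training = [
    ((0.9, 0.1, 0.2), "pedestrian"),
    ((0.1, 0.8, 0.7), "cyclist"),
    ((0.2, 0.2, 0.1), "background"),
]

print(nearest_neighbor((0.85, 0.15, 0.25), training))  # -> pedestrian
```

The same feature-then-classifier pipeline applies to the pose, gesture and expression recognizers listed in paragraph [0074].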

[0088] In all these approaches, the computer vision module 60 needs to be trained with the desired functionality. For instance, if the car should recognize human actions, a classifier for action recognition should be trained in order to deploy it to the vehicle afterwards. The same applies for gaze estimation, pose estimation, gesture recognition, face expression recognition, gender recognition, age estimation, etc.

[0089] Personalization module 80: Car owners or passengers/riders can personalize the avatar 20 according to their preferences, like choosing male or female character, hair style, skin color, accessories, etc. Alternatively, the avatar 20 could also be an animated character of user preference (e.g.: a famous cartoon character, etc.).

[0090] The avatar 20 is also personalized depending on external conditions, including, for instance, weather, traffic, time of the day, etc. In cold weather, for instance, the avatar 20 could use a cap and gloves; on sunny days, the avatar could use sunglasses. Also depending on external conditions, the messages could be changed/personalized ("Good morning!", "Have a great evening", "Stay tuned, traffic is heavy", etc.).

[0091] For example, when the Computer Vision module 60 detects an elderly pedestrian, the avatar 20 may present more formal/respectful messages. In case of recognizing a kid, the avatar 20 could change appearance to a cartoon character and present more informal/relaxed messages. The same personalization could be done regarding the pedestrian gender (e.g.: "Dear lady, please cross", "Hello, sir. I saw you!", etc.). Considering the Computer Vision module 60 has means for fashion/clothes recognition, this could improve the Advertisement/Service feature, suggesting shopping options based on the pedestrian's clothes and accessories.
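The age-based message personalization described above can be sketched as a simple rule table; the age-group names and message wordings below are hypothetical examples, not prescribed by the invention:

```python
def personalize_message(age_group, base_message):
    """Map a detected age group to a message tone, following the
    elderly (formal) / kid (informal) examples in the description."""
    if age_group == "elder":
        return "Dear pedestrian, " + base_message      # formal/respectful
    if age_group == "kid":
        return "Hey there! " + base_message + " :)"    # informal/relaxed
    return base_message                                # default tone

print(personalize_message("elder", "please cross."))
```

In practice such rules would be combined with the gender, gaze and fashion/clothes recognition outputs of the Computer Vision module 60.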

[0092] Avatar generator module 70: Based on the inputs from Computer vision module 60 and Personalization module 80, combined with the plan/set of actions and car status information from the Control System, this avatar generator module 70 generates (or updates) a digital avatar 20 to be shown on the display 10. This avatar generation can be implemented using computer graphics, facial mapping/scanning/rendering, machine learning, etc. In fact, it can operate in a similar manner to the current AR Emoji creation procedure of Samsung. A library containing pre-set expressions/gestures gives good flexibility in the characterization of several situations using the avatar 20.

[0093] Also based on inputs from Computer vision module 60 and Personalization module 80, combined with the plan/set of actions and car status info from the Control System, the Avatar generator module 70 may (optionally) prepare text messages to reinforce communication with external people, for instance, a message to confirm presence detection, present the car status (e.g.: accelerating, braking, stopped, current speed, etc.), current actions and the future actions (intentions/plans, e.g.: I will stop), etc.
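One possible shape for the pre-set expression/gesture library and its lookup by the Avatar generator module 70 is sketched below; the action keys, expression names and message strings are illustrative assumptions, not part of the claimed method:

```python
# Hypothetical pre-set expression/gesture library keyed by the planned
# action established by the Control System.
EXPRESSION_LIBRARY = {
    "slow_down": ("greeting_wave", "Slowing down for you to cross"),
    "stop":      ("thumbs_up",     "I stopped! You are now safe to cross."),
    "wait":      ("patient_smile", "I am waiting for you to cross."),
    "continue":  ("neutral",       "Continuing"),
}

def generate_avatar_frame(planned_action, speed_mph):
    """Combine the planned action and car status into one display frame."""
    expression, message = EXPRESSION_LIBRARY.get(planned_action, ("neutral", ""))
    return {"expression": expression,
            "message": message,
            "status": f"current speed, {speed_mph} mph"}
```

A call such as `generate_avatar_frame("stop", 0)` would yield the stopped-car expression, message and status line shown in FIG. 4E.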

[0094] There are some possibilities regarding the visibility/presence of the avatar 20. According to a preferred embodiment of the invention, the digital avatar 20 is permanently presented on display 10 during the vehicle trip/ride. During the car movement/trip, the always-on digital avatar 20 is simply modified/customized based on one or more (combined) external conditions.

[0095] Alternatively, another possibility is to keep the avatar 20 disabled (not visible) when no action/communication needs to be displayed. According to one or more (combined) external conditions, whenever the avatar needs to communicate any information/intention/action of the self-driving car, the avatar appears in the windshield or any external display 10 available.

[0096] Some of the main external conditions that change the avatar status (i.e., when the avatar detects/identifies one or more of these conditions, the avatar becomes enabled/visible, if it was previously disabled/invisible, or changes appearance to simulate a reaction of acknowledgement, if it was already enabled/visible) are listed below: [0097] Pedestrian/cyclist/car/object detection (e.g.: person, car, animal, etc.); [0098] Presence of a traffic authority, emergency/rescue or police car; [0099] Surrounding environment (e.g. rainy, sunny, snow, day, night, etc.); [0100] Geolocation (e.g.: crowded street, village road, highway, off road, etc.); [0101] Self-driving car condition/status (e.g.: current speed, number of passengers, previous history, etc.); [0102] Some specific self-driving car movements (e.g.: parking, starting/moving after a stop position, changing road lane, braking, significant change of speed, turning right/left, reverse gear, etc.).
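The visibility logic described in paragraphs [0094]-[0102] can be sketched as a small state machine; the condition names below are illustrative stand-ins for the detections listed above:

```python
# Hypothetical subset of the trigger conditions listed in [0097]-[0102].
TRIGGER_CONDITIONS = {
    "pedestrian_detected", "cyclist_detected", "emergency_vehicle",
    "parking", "lane_change", "braking",
}

def update_avatar_state(current_state, detected_conditions):
    """Appear when a trigger condition is detected, acknowledge if
    already visible, and hide when nothing needs to be communicated."""
    triggered = bool(TRIGGER_CONDITIONS & detected_conditions)
    if not triggered:
        return "hidden"
    if current_state == "hidden":
        return "visible"
    return "acknowledge"   # already visible: change appearance

print(update_avatar_state("hidden", {"pedestrian_detected"}))  # -> visible
```

The always-on embodiment of paragraph [0094] would simply replace the `"hidden"` branch with a default/idle appearance.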

Detailed Example of Practical Usage

[0103] Suppose a pedestrian suddenly appears in the corner of the street. Using its multiple sensors (e.g.: LiDAR, 3D camera, ultrasonic and/or infrared detectors, etc.), the self-driving car (via Sensor System) detects the pedestrian. In parallel, and according to its multiple sensors, the self-driving car (Control System) also realizes that it is safe to slow down the speed and stop before the crossing (for example, the car behind is at a long, safer distance), so that the pedestrian can safely cross the street. Therefore, the self-driving car (Control System) establishes a plan/set of actions to command the actuators (in this case, braking the car until it stops within a safe distance).

[0104] Based on this plan/set of actions, the proposed invention determines expressions and reactions of the human-like avatar, and some additional info to clearly communicate the self-driving car intentions (current and next actions). According to FIG. 4A, at a first moment, a greeting avatar is displayed, which communicates to the pedestrian: [0105] that the self-driving car is aware of his/her presence (e.g.: Hi! I see you!); [0106] the next planned actions (e.g.: Slowing down for you to cross); [0107] the self-driving car status (e.g.: current speed, 12 mph).

[0108] Complementarily, as shown in FIG. 4B, the avatar may be displayed on rear window and communicate to the driver at the car behind: [0109] detection of a new fact that will demand further actions (e.g.: Hi! Pedestrian ahead!); [0110] the next planned actions (e.g.: Slowing down to stop); [0111] the self-driving car status (e.g.: current speed, 12 mph).

[0112] In the following moments, shown in FIG. 4C, as the self-driving car approaches the crossing, the avatar keeps providing status/feedbacks to make the pedestrian feel comfortable and safer. In this example, the gesture/expression of the avatar can change (not greeting anymore) and communicate to the pedestrian: [0113] a recommendation (e.g.: Please wait!); [0114] reinforce next planned actions (e.g.: Slowing down for you to cross); [0115] update the self-driving car status (e.g.: current speed, 4 mph).

[0116] Complementarily, in FIG. 4D, the avatar is displayed on rear window and communicates to the driver at the car behind: [0117] a recommendation (e.g.: Attention, please!); [0118] reinforce next planned actions (e.g.: Slowing down to stop); [0119] update the self-driving car status (e.g.: current speed, 4 mph).

[0120] After a few moments, in FIG. 4E, the self-driving car stops before the pedestrian. The gesture/expression of the avatar may be changed again (e.g.: a positive sign/gesture to indicate completion of action), and the avatar communicates to the pedestrian: [0121] the completion of the action (e.g.: I stopped!); [0122] a recommendation (e.g.: You are now safe to cross.); [0123] update the self-driving car status (e.g.: current speed, 0 mph).

[0124] Complementarily, in FIG. 4F the avatar may be displayed on rear window and communicates to the driver at the car behind: [0125] next actions (e.g.: Pedestrian will cross.); [0126] update the self-driving car status (e.g.: current speed, 0 mph).

[0127] While the pedestrian is crossing the street in FIG. 4G, the gesture/expression of the avatar may be changed again (e.g.: a sign/gesture to indicate the self-driving car is waiting), and the avatar communicates to the pedestrian: [0128] the current action (e.g.: I am waiting for you to cross.); [0129] the self-driving car status (e.g.: current speed, 0 mph).

[0130] Complementarily, in FIG. 4H the avatar may be displayed on rear window and communicates to the driver at the car behind: [0131] the current action (e.g.: Pedestrian crossing.); the self-driving car status (e.g.: current speed, 0 mph).

[0132] After the pedestrian crosses the street in FIG. 4I, the gesture/expression of the avatar may be changed again (e.g.: a standard expression), and the avatar communicates to the pedestrian: [0133] the recognition that the pedestrian has concluded his/her action (e.g.: You crossed. Bye!); [0134] the next action (e.g.: Continuing); update the self-driving car status (e.g.: current speed, 3 mph).

[0135] Complementarily in FIG. 4J, the avatar may be displayed on rear window and communicates to the driver at the car behind: [0136] a message to inform that the pedestrian has concluded his/her action (e.g.: Thank you for waiting.); [0137] the next action (e.g.: Continuing); [0138] update the self-driving car status (e.g.: current speed, 3 mph).
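For clarity, the front-display and rear-display messages of FIGS. 4A-4J can be summarized as a lookup keyed by the phase of the maneuver; the phase names and exact strings below are illustrative:

```python
# Illustrative front/rear message pairs for the five phases of the
# use case (FIGS. 4A-4J); the phase names are hypothetical labels.
SEQUENCE = {
    "detect":   ("Hi! I see you! Slowing down for you to cross",
                 "Hi! Pedestrian ahead! Slowing down to stop"),
    "approach": ("Please wait! Slowing down for you to cross",
                 "Attention, please! Slowing down to stop"),
    "stopped":  ("I stopped! You are now safe to cross.",
                 "Pedestrian will cross."),
    "waiting":  ("I am waiting for you to cross.",
                 "Pedestrian crossing."),
    "resume":   ("You crossed. Bye! Continuing",
                 "Thank you for waiting. Continuing"),
}

def messages_for(phase):
    """Return the (front display, rear display) pair for a phase."""
    front, rear = SEQUENCE[phase]
    return front, rear
```

Driving the windshield and rear-window displays from one shared phase variable keeps the two audiences (pedestrian and driver behind) consistent at each step.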

[0139] FIG. 4L shows a synthesis to facilitate the understanding of the above example, in the case of the avatar displayed on the windshield (front view: communication to the pedestrian).

[0140] FIG. 4M shows a synthesis to facilitate the understanding of the above example, in the case of the avatar displayed on the rear window (back view: communication to the driver in the car behind).

[0141] The exemplary situation detailed above is also valid when the self-driving car detects a person unexpectedly crossing the street (e.g. a drunken person, a person who tries to cross the street while using a smartphone, or any other situation in which a person tries to cross without proper attention). As explained above, the usual/traditional or existing sensing systems of self-driving vehicles already consider this kind of unexpected situation, so the proposed avatar receives the information from the car sensor systems and control systems to provide the adequate response/reaction for this situation (e.g. present a warning message to the pedestrian and to surrounding cars, while slowing down/stopping the car).

Complementary Outputs:

[0142] Besides the avatar itself (which may be displayed/presented in the windshield, rear window and other possible external displays of the car), other existing car elements/actuators can be used in combination with the avatar (e.g. headlight, turn signal light, tail-lamp, horn). The usage of speakers to audibly interact with the pedestrians could also be considered: that could be useful for an emergency scenario or even to communicate with visually impaired people.

[0143] As currently there is no standardization for autonomous vehicles signaling, the present invention proposes mapping some car elements/actuators with the criticality level of the avatar communications. For instance, when the car stops for pedestrians to cross the street, besides showing the avatar for this condition, the car can blink the front headlight.

[0144] When the car faces an urgent/critical situation, for instance, a hard brake to avoid running over a pedestrian, the car can also use the horn in combination with the avatar. When no critical situation is detected, no car element needs to be used (the avatar can still be displayed, or become invisible).
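The proposed mapping between communication criticality and complementary car elements can be sketched as follows; the level names are hypothetical labels for the situations described above:

```python
def actuators_for_criticality(level):
    """Map the criticality of the avatar communication to the car
    elements used alongside it, per the examples in [0143]-[0144]."""
    if level == "critical":      # e.g. hard brake to avoid running over
        return {"horn", "headlight_blink"}
    if level == "routine":       # e.g. stopping for a pedestrian to cross
        return {"headlight_blink"}
    return set()                 # no critical situation: avatar only
```

Such a table would be one concrete way to standardize the signaling the description notes is currently missing for autonomous vehicles.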

Other Use Cases/Examples/Scenarios:

[0145] Self-driving car stops, but the pedestrian is still waiting to cross:

[0146] If, for example, the car has already indicated that it will stop, or the car has already stopped but the pedestrian is still waiting to cross (i.e., the computer vision module did not recognize the action of walking or running by the pedestrian), the avatar may change the message or even provide another type of alert (e.g., sound, flash light) highlighting that the pedestrian can cross the street. In addition, if the car recognizes that the pedestrian will not cross the street, the avatar can indicate that the car will accelerate again, and the pedestrian should then wait to cross.

Multiple Pedestrians:

[0147] In the case of multiple pedestrians, the computer vision module can identify groups of pedestrians walking (i.e., action recognition) and distracted pedestrians (e.g., talking to each other, using a phone) and personalize the message in such cases. In the case of multiple pedestrians crossing the street, the car can determine that it will wait for some more people to cross (i.e., this requires the computer vision module to count the number of people crossing), present a message indicating that it will accelerate again in some seconds and then accelerate, for instance. The avatar could also alert the pedestrians that it will wait only X more seconds or for Y more pedestrians before accelerating again.

[0148] In such a case of multiple pedestrians, in a preferred embodiment of the present invention, the proposed avatar sends a single, common message to the whole group of pedestrians (and not a specific message to every single pedestrian). Sending differentiated messages to each specific pedestrian could be too complex (managing each group, selecting specific messages, etc.) in a very short time/period of response (e.g. a fraction of a second). Also, presenting multiple messages on the display could be confusing for the pedestrians. However, if it happens that, for instance, the group of people finishes crossing the street but an elderly pedestrian is still crossing, the avatar could personalize the message to that specific person (e.g. "I see you are still crossing. Take your time, I will wait").
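The group-versus-individual messaging policy can be sketched as a simple function of how many pedestrians are still crossing; the wordings follow the examples above and are illustrative:

```python
def group_message(pedestrians_crossing, seconds_remaining):
    """Single common message to a group, with an individual fallback
    when only one pedestrian (e.g. an elderly person) remains."""
    if pedestrians_crossing > 1:
        return f"I will wait {seconds_remaining} more seconds."
    if pedestrians_crossing == 1:
        return "I see you are still crossing. Take your time, I will wait."
    return "Continuing"
```

The pedestrian count would come from the counting capability of the computer vision module noted in paragraph [0147].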

Direct Vehicle-To-Vehicle Communication:

[0149] Initially, it is considered that the solution (avatar) runs locally in the car, making decisions according to the environment detected by the sensors of the car itself. The avatar and the messages are then presented at one or more displays/windows of the self-driving car, so that people (pedestrians, cyclists, human drivers, etc.) can see it from outside.

[0150] Vehicle-to-vehicle (V2V) communication is the communication standard used in the method and system of the present invention, which is a wireless protocol similar to Wi-Fi (or cellular technologies, like LTE). In this scenario, vehicles are dedicated short-range communications (DSRC) devices, constituting the nodes of a vehicular ad-hoc network (VANET). V2V communication allows vehicles to broadcast and receive omni-directional messages (with a range of about 300 meters), creating a 360-degree awareness of other vehicles in proximity (the main exchanged information is: speed, location, and direction/heading). Vehicles equipped with this technology can use the messages from surrounding vehicles to determine potential crash threats as they develop. The technology can then employ visual, tactile, and audible alerts, or a combination of these alerts, to warn drivers. These alerts give drivers the ability to act to avoid crashes.
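A minimal sketch of the V2V payload carrying the information named above (speed, location, direction/heading) is shown below, serialized as JSON purely for illustration; actual DSRC messages follow a standardized binary format (e.g. SAE J2735 basic safety messages), and the `intent` field is a hypothetical extension for the avatar use case:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class V2VMessage:
    """Core fields exchanged over V2V per the description; the schema
    and field names here are illustrative, not the DSRC wire format."""
    speed_mph: float
    latitude: float
    longitude: float
    heading_deg: float
    intent: str  # hypothetical avatar extension, e.g. "braking"

msg = V2VMessage(12.0, -23.5505, -46.6333, 90.0, "braking")
payload = json.dumps(asdict(msg))  # broadcast to vehicles within ~300 m
```

A receiving human-driven car could render such a payload on its entertainment system, as suggested in the following section.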

[0151] Taking advantage of the V2V protocol, the present invention can communicate with other vehicles (autonomous or human-driving cars), sending messages with necessary information. In this case, the proposed invention uses the existing V2V protocol as a standard platform. It is not the scope of the invention to propose a novel V2V communication system.

[0152] Additionally, since vehicle-to-vehicle communication is not the main purpose of this invention, an alternative solution could be implemented through a future server/cloud central system to exchange traffic information. This way, both self-driving and human-driven cars will be able to exchange messages.

Communication with the Driver at Another Car:

[0153] A human driver in a human-driven car can see and notice the avatar in the self-driving car, in the same way that a pedestrian can (i.e. by viewing the avatar and the message in the self-driving car's display). In the case of the car communicating with another car that is conducted by a human (i.e., not a self-driving car), the avatar can provide personalized messages for some situations. For instance, if there is a sudden stop by a human-driven car and a self-driving car is coming behind, the avatar in the self-driving car indicates that it has already detected the sudden stop and that it is slowing down, preventing the human driver from thinking that the car behind will not stop.

[0154] Additionally, based on V2V communication described above, the self-driving car (avatar) also transmits a message/info to be displayed on the entertainment system of the human-driving car.

Advertisement/Service:

[0155] The avatar informs the car actions/status, as explained in the examples above, and includes a personalized ad for the pedestrian, for instance. Considering the Computer Vision module has means for fashion/clothes recognition, the avatar could suggest store options based on the pedestrian's clothes and accessories.

[0156] For instance, if the computer vision module detects that there is a pedestrian wearing glasses, the avatar shows car status/actions and suggests shopping options related to new glasses. If it is raining and the computer vision module detects that some pedestrians do not have umbrellas, the avatar can show car status/actions and suggest nearby stores that sell umbrellas.

[0157] Advertisement can also be shown regardless of the recognition of the pedestrians. For instance, if the car detects hot weather, the avatar can show, besides car status/actions, nearby ice cream stores or air-conditioning shopping options, etc.

[0158] In view of all that has been described in this document, the proposed method and system contribute to increasing the confidence and comfort of external people (pedestrians, cyclists, drivers in other cars) when interacting with a self-driving car.

[0159] Although the present disclosure has been described in connection with certain preferred embodiments, it should be understood that it is not intended to limit the disclosure to those particular embodiments. Rather, it is intended to cover all alternatives, modifications and equivalents possible within the spirit and scope of the disclosure as defined by the appended claims.