USER INTERFACE AND METHOD FOR THE INPUT AND OUTPUT OF INFORMATION IN A VEHICLE
20200055397 · 2020-02-20
CPC classification: B60K2360/146, G06F3/017, B60K2360/149, G06F3/048, G06F2203/0381, G06F3/0416, B60K35/211, B60K35/00
Abstract
A user interface and a method for the input and output of information in a vehicle include an image generation unit which generates a projected image. A means for gesture recognition is arranged as a means for detecting an input, and a means for view detection and following is arranged such that the viewing direction of the driver is recognized and associated with an area in the vehicle. Information about this area is generated in the viewing direction in the form of a projected image and is displayed floating over the area. Gestures of the driver are recognized. Upon a coincidence of a position of a hand of the driver, detected by the gesture recognition, with the projected image or a component of the projected image, a signal is generated and outputted by the central control unit.
Claims
1. A user interface which comprises an arrangement for the generation of images, the displaying of images and information and a means for detecting a gesture of a user, wherein the arrangement for the generation of images and the means for detecting a gesture are connected to a central control unit, characterized in that an image generating unit which generates a projected image is arranged as an arrangement for the generation of images, that a means for gesture recognition is arranged as a means for the detection of an input, and that a means for view detection and following is arranged.
2. The user interface according to claim 1, characterized in that the image generation unit, the means for gesture recognition and the means for view detection and following are arranged in the interior of a vehicle.
3. The user interface according to claim 1, characterized in that the image generation unit is a laser projector.
4. The user interface according to claim 1, characterized in that the means for gesture recognition is a 3-D camera, an infrared camera or a time-of-flight (ToF) camera.
5. The user interface according to claim 1, characterized in that the means for view detection and following is a 3-D camera.
6. The user interface according to claim 1, characterized in that a heads-up display (HUD) unit is arranged as another means for displaying information in the vehicle.
7. A method for the input and output of information in a vehicle, in which information is outputted by an arrangement for the generation of images and in which inputs of a user are detected by a means for the detection of a gesture, wherein the output of information and the detection of inputs are controlled by a central control unit, characterized in that a recognition of a viewing direction of a driver takes place, that an association of the viewing direction of the driver with an area in the vehicle is carried out, that information is generated for this area in the viewing direction of the driver in the form of a projected image and displayed over the area, that a recognition of gestures of the driver is carried out and that upon a coincidence of a position of a hand of the driver detected by the gesture recognition with the projected image or a component of the projected image a signal is generated and outputted by the central control unit.
8. The method according to claim 7, characterized in that the projected image or its components contain text characters, special characters, symbols, plane or spatial geometric figures in different colors, or images.
9. The method according to claim 7, characterized in that an area in the vehicle is associated with a structural group or a system in the vehicle.
10. The method according to claim 7, characterized in that the projected image is displayed adapted to the shape of the area so that the border of the projected image coincides with the boundaries of the area.
11. The method according to claim 7, characterized in that the information displayed in the viewing direction of the driver in the projected image is context-related information.
12. The method according to claim 7, characterized in that the information in the projected image displayed in the viewing direction of the driver is checked for plausibility before the display by an image generation unit.
13. The method according to claim 7, characterized in that the gesture recognition is carried out by a means for gesture recognition by a time-of-flight (run-time) method or by an infrared method.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0037] Other details, features and advantages of embodiments of the invention result from the following description of exemplary embodiments with reference made to the attached drawings.
[0042] The present disclosure may have various modifications and alternative forms, and some representative embodiments are shown by way of example in the drawings and will be described in detail herein. Novel aspects of this disclosure are not limited to the particular forms illustrated in the above-enumerated drawings. Rather, the disclosure is to cover modifications, equivalents, and combinations falling within the scope of the disclosure as encompassed by the appended claims.
DETAILED DESCRIPTION
[0043] Those having ordinary skill in the art will recognize that terms such as above, below, upward, downward, top, bottom, etc., are used descriptively for the figures, and do not represent limitations on the scope of the disclosure, as defined by the appended claims. Furthermore, the teachings may be described herein in terms of functional and/or logical block components and/or various processing steps. It should be realized that such block components may be comprised of any number of hardware, software, and/or firmware components configured to perform the specified functions.
[0045] In step 3 the generation of a projected image 9 is started, for example, by a laser-based image generation unit 10. The display of the projected image 9 on the area being viewed by the driver 11 takes place in step 4. In the example, a colored surface with the inscription "Engage" or "Turn on" can be displayed over the air outlet opening of the air conditioning system. Green can be selected as the color for the projected surface in order to signal to the driver that the displayed selection is possible.
[0046] The selection of the colors can take place using a customary characterization of dangerous states with a red color, suggestion messages with a yellow color and the available options in a green color, a characterization which is also widely used in vehicles. There is no limitation to this selection of colors; if, for example, the optical system of the displays in the dashboard uses a blue design, a coordination or adaptation to the existing color tone can advantageously improve the overall impression.
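The color convention described above can be sketched as a simple lookup with an optional vehicle-specific theme override. This is a minimal illustrative sketch; the category names, the fallback color and the function name are assumptions for illustration and are not taken from the patent.

```python
from typing import Optional

# Customary characterization described above: red for dangerous states,
# yellow for suggestion messages, green for available options.
DEFAULT_COLORS = {
    "danger": "red",
    "suggestion": "yellow",
    "option": "green",
}

def select_color(category: str, theme_override: Optional[dict] = None) -> str:
    """Return the display color for a message category, allowing a
    vehicle-specific design (e.g. a blue dashboard theme) to adapt it."""
    colors = dict(DEFAULT_COLORS)
    if theme_override:
        colors.update(theme_override)
    return colors.get(category, "white")  # assumed neutral fallback
```

Keeping the defaults in one table makes the coordination with an existing dashboard color scheme a pure configuration change.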
[0047] In addition to colored surfaces in different geometrical variations such as, for example, a rectangle, square, circle, ellipse, trapezoid or a triangle, any symbols and characters can be displayed. A display of an image is also possible. Three-dimensional displays can also be generated.
[0048] It is provided that the display of the projected image 9 over the area being viewed by the driver 11 takes place at the moment at which the driver 11 directs his view into this area. Alternatively, the display of the projected image 9 can be started with a set time delay in order to exclude undesired displays which distract the driver 11, for example when the driver 11 merely lets his view sweep across the dashboard in order to look into the right outside mirror.
[0049] The display of the projected image 9 on a selected area can take place until the driver 11 has made an input or selection. Alternatively, the display can be ended without an input or selection having taken place if the driver 11 changes his direction of view and looks, for example, again in the direction of travel through the windshield 12 at his surroundings 13. It is advantageous for the ending of the display to take place in a time-delayed manner, since in this way the projected image 9 remains over the selected area if the driver 11 briefly changes his direction of view and subsequently returns to the selected area.
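The delayed starting and delayed ending of the display described in the two paragraphs above can be sketched as a small dwell-time state machine: the projection over an area begins only after the gaze has rested on it, and ends only after the gaze has been elsewhere for a longer delay, so that a brief glance away does not remove the image. The class name, the delay values and the area identifiers are illustrative assumptions, not part of the patent.

```python
class ProjectionTimer:
    """Dwell-time logic for showing/hiding a projected image over an area."""

    def __init__(self, on_delay: float = 0.5, off_delay: float = 1.5):
        self.on_delay = on_delay    # gaze must rest this long to start
        self.off_delay = off_delay  # gaze must be away this long to end
        self.area = None            # area currently projected onto
        self._gaze_area = None      # area currently looked at
        self._since = 0.0           # time the gaze has stayed in _gaze_area

    def update(self, gaze_area, dt: float):
        """Feed the currently viewed area and the elapsed time; returns the
        area the projected image should currently be shown over (or None)."""
        if gaze_area == self._gaze_area:
            self._since += dt
        else:
            self._gaze_area = gaze_area
            self._since = dt
        if gaze_area is not None and gaze_area != self.area:
            if self._since >= self.on_delay:
                self.area = gaze_area   # start projection after the dwell time
        elif gaze_area != self.area:
            if self._since >= self.off_delay:
                self.area = None        # end projection after the off delay
        return self.area
```

Because the off delay is longer than the on delay, a quick glance toward the mirror neither triggers a spurious display nor removes an already selected one.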
[0050] If, for example, a colored surface with the inscription "Turn on" is shown over the air outlet opening of the air conditioning system, a recognition of the gestures of the driver 11 takes place in step 5 by the means for gesture recognition 16. If the driver 11 moves his hand, for example, to the surface shown in green with the inscription "Turn on" in order to touch it, so to speak, with his hand 15 or with a finger of the hand 15, this gesture is recognized by the means for gesture recognition 16 and a corresponding signal is generated for the central control and evaluation unit. This control and evaluation unit has information about the projected image 9, with its position and its selection possibilities, as well as about the recognized gesture, and is therefore capable in step 6 of checking whether the gesture can be correctly associated with the projected image 9 or a component of the projected image 9, such as a button or a key. In the example, a check is therefore made whether the driver 11 has touched, so to speak, the surface shown in green with the inscription "Turn on" with his hand 15.
[0051] Here, a check is made for a coincidence of the position of the projected image 9 with the recognized position of the hand 15 or of the fingers of the driver 11. In this case as well, a coincidence of the positions does not result in the generation of a corresponding signal characterizing the coincidence until a waiting time has elapsed.
[0052] If such a coincidence is recognized, the selected function is activated in step 7; in the example shown, the air conditioning system of the vehicle is turned on. If no coincidence, or no clear coincidence, can be recognized, a corresponding error message is generated in step 8, outputted in step 3 to the image generation unit 10 and displayed in step 4. Such an error message can be, for example, a red surface with the inscription "Mistake" or "Error".
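The coincidence check of steps 5 to 7 can be sketched as a hit test of the reported hand position against the bounds of the projected image, with the waiting time realized as a required number of consecutive coinciding samples. The rectangle representation, the sample count and the return values are illustrative assumptions rather than the patent's implementation.

```python
def hit_test(hand_pos, image_rect):
    """True if the hand position (x, y) lies inside the projected
    image rectangle given as (x, y, width, height)."""
    x, y, w, h = image_rect
    px, py = hand_pos
    return x <= px <= x + w and y <= py <= y + h

def check_selection(hand_positions, image_rect, wait_steps: int = 3):
    """Return 'activate' if the hand coincides with the projected image
    for wait_steps consecutive samples (the waiting time), else 'error'."""
    streak = 0
    for pos in hand_positions:
        streak = streak + 1 if hit_test(pos, image_rect) else 0
        if streak >= wait_steps:
            return "activate"   # step 7: activate the selected function
    return "error"              # step 8: generate an error message
```

Requiring several consecutive coinciding samples avoids triggering on a hand that merely sweeps through the projected surface.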
[0053] As already shown in the example, an adaptation to the language of the driver or to his preferences, for example for a color or a shape, can be carried out.
[0055] After having recognized the direction of the view of the driver 11, who is only indicated in
[0056] For example, a means for gesture recognition 16 is arranged adjacent to the image generation unit 10 and can readily detect the area of the driver 11. The driver 11 can turn on the air conditioning system by a suitable gesture in which he brings the position of his hand 15, or of a finger of his hand 15, into coincidence with the position of the projected image 9, as is shown in
[0057] In this example the air conditioning system was turned off when the means for view detection and following 14 detected the driver's view on the area of the air outlet opening of the air conditioning system. Therefore, only the context-related possibility of turning on the air conditioning system was displayed by the projected image 9. In another case, in which the air conditioning system is already turned on, the possibility of turning it off is displayed by the projected image 9. This method makes it possible to keep the distraction of the driver 11 to a minimum, so that he can optimally concentrate on what is happening on the stretch course 18 of the road in front of him and on the surroundings 13.
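The context-related display logic described above can be sketched as a mapping from a viewed area and the current system state to the single option worth showing, e.g. "Turn on" when the air conditioning is off and "Turn off" when it is already running. The area identifiers, state keys and labels are illustrative assumptions, not part of the patent.

```python
def context_option(area: str, vehicle_state: dict) -> str:
    """Return the context-related inscription to project over an area,
    depending on the state of the associated vehicle system."""
    if area == "air_vent":
        # Only the currently possible action is displayed.
        return "Turn off" if vehicle_state.get("ac_on") else "Turn on"
    if area == "glove_box":
        return ("Close glove box" if vehicle_state.get("glove_box_open")
                else "Open glove box")
    return ""  # no option for areas without an associated system
```

Showing only the action that is actually possible keeps the projected image small and minimizes the driver's distraction.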
[0059] The first selection possibility, which is shown by the image generation unit 10 above an area of an air outlet opening of the air conditioning system, shows a surface, green for example, with the inscription "Engage", "Turn on" or "Open air vent". The second selection possibility, which is shown in an area above a lid of a glove box, shows a green surface with the inscription "Open glove box".
[0060] In this case the driver 11 can either turn on the air conditioning system or open the lid of the glove box by a suitable gesture which is detected by the means for gesture recognition 16. An alternative embodiment can provide that the driver 11 selects both selection possibilities, turning on the air conditioning system and subsequently also opening the lid of the glove box, wherein his selections can take place in any sequence. There is no limitation to the two alternatives shown in this example.
[0062] As an alternative, if a critical vehicle state is recognized, a warning message with the inscription "Warning!" can be displayed in the visible range of the driver 11 in the form of a projected image 9 in a red color. This state can occur, for example, if the view of the driver 11 is directed away from the traffic in front of the vehicle and onto an area in the vehicle for a rather long time, and this is recognized by the means for view detection and following 14.
[0063] Alternatively or additionally, information about too small a distance from a vehicle driving ahead or the recognition of a curve in the road can be used to initiate a warning message.
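The warning triggers named in the two paragraphs above can be combined into a single predicate: a gaze held away from the road for too long, too small a distance to the vehicle ahead, or a recognized curve. The threshold values and parameter names are illustrative assumptions only.

```python
def warning_needed(gaze_off_road_s: float, distance_m: float,
                   curve_ahead: bool,
                   max_gaze_off_s: float = 2.0,     # assumed threshold
                   min_distance_m: float = 20.0) -> bool:
    """True if the red 'Warning!' projection should be displayed."""
    return (gaze_off_road_s > max_gaze_off_s   # view away from traffic too long
            or distance_m < min_distance_m     # too close to the vehicle ahead
            or curve_ahead)                    # curve recognized in the road
```

Each condition alone suffices to trigger the warning, matching the "alternatively or additionally" wording above.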
[0064] In addition to the inscription in the projected image 9, in the case of a recognized left curve a display of an arrow facing left as in
LIST OF REFERENCE NUMERALS
[0065] 1 start view detection and following
[0066] 2 determination of the context-based, direction-dependent information
[0067] 3 start of the laser projection
[0068] 4 display of the projected image
[0069] 5 gesture recognition
[0070] 6 check gesture selection correct
[0071] 7 activation of the selected function
[0072] 8 generation of an error message
[0073] 9 projected image
[0074] 10 image generation unit
[0075] 11 driver
[0076] 12 windshield
[0077] 13 surroundings
[0078] 14 means for view detection and following, gesture recognition
[0079] 15 hand
[0080] 16 means for gesture recognition
[0081] 17 steering wheel
[0082] 18 stretch course
[0083] The detailed description and the drawings or figures are supportive and descriptive of the disclosure, but the scope of the disclosure is defined solely by the claims. While other embodiments for carrying out the claimed teachings have been described in detail, various alternative designs and embodiments exist for practicing the disclosure defined in the appended claims.