USER INTERFACE AND METHODS FOR INPUTTING AND OUTPUTTING INFORMATION IN A VEHICLE

20200057546 · 2020-02-20

Abstract

A user interface and method for inputting and outputting information in a vehicle provides a user interface with a three-dimensional operating element. A laser projection unit generates at least one virtual three-dimensional operating element, and a means for gesture recognition serves as a means for detecting an input; both are arranged in the interior of a vehicle. At least one virtual three-dimensional operating element is projected into the visual range of a driver by means of a laser projection arrangement. A gesture of the driver is detected by the gesture recognition means. When a position of a hand of the driver detected by means of the gesture recognition coincides with an area of the virtual operating element, a signal for controlling a vehicle system or a function of a vehicle system is generated by the central control and evaluation unit and is output to the corresponding vehicle system.

Claims

1. A user interface comprising an image generating unit for the representation of images and information, and a means for detection of an input, wherein the image generating unit and the means are connected to a central control and evaluation unit, characterized in that a laser projection unit generating at least one virtual three-dimensional operating element, as the image generating unit, and a means for gesture recognition, as the means for detection of an input, are arranged in the interior of a vehicle.

2. The user interface according to claim 1, characterized in that the virtual three-dimensional operating element has the form of a cube, a cuboid, a sphere, a pyramid or a cylinder having a round, oval or n-gonal base and top surface.

3. The user interface according to claim 1, characterized in that the virtual three-dimensional operating element has multiple areas, an area being arranged on a surface or part of a surface.

4. The user interface according to claim 1, characterized in that a means for gaze detection and eye tracking is arranged in the interior of a vehicle.

5. The user interface according to claim 1, characterized in that the means for gesture recognition and/or the means for gaze detection and eye tracking is a 3D camera or a time-of-flight (ToF) camera.

6. The user interface according to claim 1, characterized in that a head-up display (HUD) unit is arranged as a further means for displaying information in the vehicle.

7. A method for inputting and outputting information in a vehicle in which information is output by means of an arrangement for image generation and in which inputs of a driver are detected by a means for detecting an input, the output of the information and the detection of inputs being controlled by a central control unit, characterized in that at least one virtual three-dimensional operating element is projected into the visual range of a driver in the interior of a vehicle by means of a laser projection arrangement in such a way that a gesture of the driver is detected by a gesture recognition means, and in that, when a position of a hand of the driver detected by means of the gesture recognition coincides with the virtual operating element or an area of the virtual operating element, a signal for controlling a vehicle system or a function of a vehicle system is generated by the central control and evaluation unit and is output to the corresponding vehicle system.

8. The method according to claim 7, characterized in that the virtual operating element is projected with multiple areas, the areas being surfaces of the three-dimensional operating element or sections of an area of the three-dimensional operating element.

9. The method according to claim 8, characterized in that in the areas or sections information is represented in the form of text characters, special characters, symbols, plane or spatial geometric figures in different colors, or images.

10. The method according to claim 7, characterized in that the information represented in the areas or sections is contextual information and/or plausibility-checked information.

11. The method according to claim 7, characterized in that a detection of the viewing direction of the driver takes place by means of a means for gaze detection and eye tracking.

12. The method according to claim 7, characterized in that the gesture recognition is carried out by means of a run-time (time-of-flight) method or an infrared method.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0048] Further details, features and advantages of embodiments of the invention will be apparent from the following description of exemplary embodiments with reference to the accompanying drawings.

[0049] FIG. 1 shows a schematic diagram of a user interface according to the invention,

[0050] FIGS. 2a, 2b show in each case a representation of an alternative option for positioning elements of the user interface according to FIG. 1,

[0051] FIGS. 3a, 3b show in each case a representation of an alternative image generating unit for generating virtual three-dimensional operating elements,

[0052] FIG. 4 shows a representation of an exemplary application of the invention with three-dimensional operating elements in case of an incoming call,

[0053] FIG. 5 shows a representation of a further exemplary application of the invention with a three-dimensional operating element in controlling a volume of a sound system, and

[0054] FIG. 6 shows a representation of a user interface according to the invention with a three-dimensional operating element for controlling various vehicle systems.

[0055] The present disclosure may have various modifications and alternative forms, and some representative embodiments are shown by way of example in the drawings and will be described in detail herein. Novel aspects of this disclosure are not limited to the particular forms illustrated in the above-enumerated drawings. Rather, the disclosure is to cover modifications, equivalents, and combinations falling within the scope of the disclosure as encompassed by the appended claims.

DETAILED DESCRIPTION

[0056] Those having ordinary skill in the art will recognize that terms such as above, below, upward, downward, top, bottom, etc., are used descriptively for the figures, and do not represent limitations on the scope of the disclosure, as defined by the appended claims. Furthermore, the teachings may be described herein in terms of functional and/or logical block components and/or various processing steps. It should be realized that such block components may be comprised of any number of hardware, software, and/or firmware components configured to perform the specified functions.

[0057] FIG. 1 depicts a schematic diagram of a user interface 1 according to the invention. An image generating unit 2 for generating a three-dimensional virtual operating element 3 projects a representation of, for example, a cube-like three-dimensional virtual operating element 3 into the visual range of a driver 4. This projection is carried out in the interior of the vehicle. In the example of FIG. 1, the three-dimensional virtual operating element 3, hereinafter referred to in short as operating element 3, is generated in a zone in front of the driver 4 in the area of the steering wheel 9, that is, in an area between the represented hands 8 of the driver 4, and is therefore shown in FIG. 1 in a slightly obscured manner.

[0058] While driving the vehicle, the driver 4 can, in his viewing direction, which, for example, is directed substantially forwardly in the direction of travel of the vehicle, perceive both his environment 6 in front of his vehicle through the windshield 5 and, at the same time, the operating element 3 projected in his visual range. In FIG. 1, the environment 6 is only shown symbolically by a wavy line, but comprises, for example, roads, paths, vegetation, buildings, people, traffic signs and more.

[0059] For the implementation of the method according to the invention, a means 7 for gesture recognition is arranged in the interior of the vehicle. This means 7 is preferably directed to an area in front of the driver 4 and configured to enable a determination that a movement of a hand 8 or finger of the driver 4 is a gesture. In this context, a gesture is a movement of body parts such as arms, hands or fingers, through which something specific is expressed such as a selection of an offered alternative.

[0060] By a directed movement of a finger of his hand 8 to the represented virtual operating element 3, the driver 4 can effect the recognition of a touch of the operating element 3, as a result of which a control signal characterizing the touch is generated. When the operating element 3 is represented, for example, as a switch-on key of a sound system, a quasi-touch of this operating element 3 with the finger of the driver 4 leads to the generation of a control signal which switches on the sound system. For this purpose, a control and evaluation unit (not shown) is arranged in the vehicle. This control and evaluation unit is connected to the image generating unit 2 and controls the representation of the virtual operating element 3. The control and evaluation unit is also connected to the means 7 for gesture recognition and evaluates or processes the sensor signals of the means 7 for gesture recognition.

[0061] By connecting the control and evaluation unit to the image generating unit 2 and the means 7 for gesture recognition, it is possible to recognize or detect a quasi touch of the operating element 3 with a finger and to generate a corresponding control signal. This control signal is output and transmitted to the vehicle system to be controlled in order to control a function in this vehicle system. Such control can be switching on or off the vehicle system or a change in the volume, in the intensity of the lighting, a track change or station change and much more.
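The quasi-touch recognition described above amounts to a containment test between the tracked fingertip position and the volume occupied by the projected element. The following minimal Python sketch illustrates this; the element name, cabin coordinates and the generic "toggle" action are hypothetical and not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class OperatingElement:
    """Axis-aligned bounding box of a projected virtual operating element (metres, cabin frame)."""
    name: str
    min_corner: tuple  # (x, y, z)
    max_corner: tuple

    def contains(self, point):
        """True if the tracked fingertip position lies inside the element's volume."""
        return all(lo <= p <= hi
                   for p, lo, hi in zip(point, self.min_corner, self.max_corner))

def coincidence_signal(fingertip, elements):
    """Return a control signal for the first element the fingertip coincides with, else None."""
    for element in elements:
        if element.contains(fingertip):
            return {"target": element.name, "action": "toggle"}
    return None

# Hypothetical switch-on key of a sound system, floating in front of the driver
power_key = OperatingElement("sound_system_power", (0.10, 0.30, 0.50), (0.18, 0.38, 0.58))
print(coincidence_signal((0.14, 0.34, 0.54), [power_key]))
```

A real implementation would receive the fingertip position from the means 7 for gesture recognition and route the returned signal to the vehicle system via the central control and evaluation unit.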

[0062] In order for the driver to be able to recognize which vehicle system or which function of a vehicle system is currently being offered for selection by the virtual three-dimensional operating element 3, it is provided, for example, to represent a symbol or an inscription on a surface of the operating element 3 which enables the driver 4 to recognize the association. Thus, a loudspeaker symbol in conjunction with a plus sign (+) can represent the option of increasing the volume of the sound system, while a loudspeaker symbol in conjunction with a minus sign (−) represents decreasing the volume.

[0063] In FIG. 1, a means 10 for gaze detection and eye tracking is optionally provided in an area above the windshield 5. This means 10 for gaze detection and eye tracking can, for example, be a camera and is directed at the driver 4. Thus, on the one hand, it can be determined whether the viewing direction of the driver 4 is directed to the outside through the windshield 5 or to an area within the vehicle, such as the dashboard. On the other hand, it can be recognized at which area of the dashboard or vehicle the driver's 4 gaze is currently directed.

[0064] Thus, for example, an area of the openings for a ventilation system, an area for a display, an area for the control of gear functions or settings, and an area of a flap over a glove compartment can be distinguished. To recognize the viewing direction, the means 10 for gaze detection and eye tracking is connected to a central control and evaluation unit (not shown), which is controlled by means of suitable software and has the necessary information relating to the corresponding vehicle equipment. Such information can be stored in a database by model by the vehicle manufacturer and is available for a suitable method for recognizing the viewing direction and the association of the vehicle areas within the vehicle.

[0065] This makes it possible to configure the projection of a virtual operating element 3 dependent on the viewing direction of the driver 4. While, for example, in the immediate visual range in front of the driver 4 in the vicinity of the steering wheel 9, the projection can be completely independent of the viewing direction of the driver 4, in other areas of the vehicle, such as an air outlet of an air conditioning system arranged in the center of the dashboard, a projection of the operating element 3 is carried out depending on the viewing direction of the driver 4.

[0066] Thus, in an exemplary case, an option for switching on or switching off the air conditioning system can be projected in a floating manner by a projection of a virtual operating element 3 in an area above the openings for the air outlet. In another case, a control option for the temperature and/or ventilation can be offered by a representation of another suitable operating element 3 above the same area.
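The gaze-dependent projection described in the preceding paragraphs reduces, in its simplest form, to a lookup from the recognized gaze region to the operating element to project. A minimal sketch follows; the region and element names are hypothetical placeholders for the manufacturer's per-model database mentioned above:

```python
# Hypothetical mapping of recognized gaze regions to the operating element to project.
# In a real system this would come from the manufacturer's per-model equipment database.
GAZE_REGION_TO_ELEMENT = {
    "air_outlet_center": "hvac_on_off_element",
    "volume_control_zone": "volume_wheel_element",
}

def element_to_project(gaze_region):
    """Return the operating element for the recognized gaze region, or None.

    The zone directly in front of the driver near the steering wheel is projected
    independently of gaze, so it is handled outside this lookup.
    """
    return GAZE_REGION_TO_ELEMENT.get(gaze_region)

print(element_to_project("air_outlet_center"))
```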

[0067] In this case too, the driver can make a selection by a movement of his hand 8 or his finger away from the steering wheel 9 towards the area of the virtual operating element 3 which corresponds to his desired function. For example, a virtual operating element 3 with the inscription "ON" could be provided to switch on the air conditioner.

[0068] This selection of the driver 4 is registered by the means 7 for gesture recognition and a corresponding control signal is generated by the central control and evaluation unit, by means of which the air conditioning system is controlled in such a way that it switches on.

[0069] In addition, it is provided to restrict the choices offered on an operating element 3 in such a way that, prior to the projection of the operating element 3, it is checked whether the choices are currently available in the current operating state of the vehicle or of the corresponding system. If restrictions are present, the projection is adapted accordingly, so that only plausible choices are made available. For example, a function of an automatic speed control can be offered only above a minimum speed. An option to switch on a vehicle system can, for example, be offered only if the corresponding vehicle system is currently switched off. This context-related representation of choices reduces the information which the driver 4 must perceive in addition to driving the vehicle.
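The plausibility check described above can be sketched as filtering each candidate choice through a precondition on the current vehicle state. The choice labels, state keys and the 30 km/h threshold below are hypothetical examples, not values from the patent:

```python
def plausible_choices(choices, vehicle_state):
    """Keep only choices whose precondition holds in the current vehicle state."""
    return [choice["label"] for choice in choices
            if choice["precondition"](vehicle_state)]

# Hypothetical choices mirroring the examples in the text:
choices = [
    # automatic speed control offered only above a minimum speed
    {"label": "cruise_control_on", "precondition": lambda s: s["speed_kmh"] >= 30},
    # switch-on option offered only while the system is off
    {"label": "sound_system_on", "precondition": lambda s: not s["sound_on"]},
]

print(plausible_choices(choices, {"speed_kmh": 20, "sound_on": False}))
```

Only the surviving labels would then be rendered on the projected operating element 3.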

[0070] Furthermore, a subdivision of a surface of a virtual operating element 3 into multiple areas on this surface is also provided. In each of these areas, a choice can then be made available by the representation of a corresponding symbol or corresponding text. In one example, an operating element 3 which enables switching multiple represented vehicle systems or functions on or off could be projected for the driver 4. In another example, an operating element 3 which provides both a change in volume and a sound setting for a sound system could be projected for the driver 4.

[0071] The image generating unit 2 represented in FIG. 1 can, for example, have a laser module 11, a phase arrangement 12 (phase SLM device) for generation of a hologram and a lens 13.

[0072] FIGS. 2a and 2b each represent an alternative option for positioning elements of the user interface according to FIG. 1. A user interface with an image generating unit 2 is shown in each alternative. In addition, the virtual operating elements 3 generated by the image generating unit 2 are represented. In addition, a driver 4 with his hands 8 on the steering wheel 9 and a windshield 5 of a vehicle are shown in each case.

[0073] FIG. 2a shows a variant in which the means 10 for gaze detection and eye tracking is arranged in the upper area of the windshield 5 and is oriented at an angle of about 45 degrees to the driver 4. In this representation, the means 7 for gesture recognition is also arranged in the upper area of the windshield 5. The means 7 for gesture recognition can be a 3D camera, which realizes three-dimensional image recording. The means 7 can also be configured as a so-called ToF (time-of-flight) camera, which measures distances by means of a run-time method. Alternatively, a system consisting of a 2D camera for recording two-dimensional images and a 3D camera can be utilized. The utilization of a camera operating in the infrared range can also be provided. The means 7 for gesture recognition is directed approximately perpendicularly to the area in front of the driver 4.
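The run-time (time-of-flight) method mentioned above derives distance from the round-trip travel time of emitted light, d = c·t/2. A minimal illustrative sketch, not tied to any particular camera hardware:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds):
    """Distance to the reflecting surface from the measured light round-trip time.

    The factor 1/2 accounts for the light travelling to the hand and back.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A hand about 0.6 m from the camera produces a round trip of roughly 4 ns
print(round(tof_distance(4e-9), 3))
```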

[0074] FIG. 2b shows a variant in which the means 10 for gaze detection and eye tracking is arranged in the area in front of the driver 4 and is directed almost horizontally or slightly upwards at the driver 4. In this representation, the means 7 for gesture recognition is also arranged in the area in front of the driver 4 and directed towards the latter, at the area of the steering wheel 9.

[0075] A ToF camera, which is connected to a corresponding central control and evaluation unit, for example, can be used as a means 7 for gesture recognition.

[0076] The alternatives represented in FIGS. 2a and 2b are only two exemplary embodiments and do not limit the arrangement according to the invention to these represented options. Further alternatives in which both a gaze recognition and a gesture recognition are ensured are conceivable.

[0077] FIGS. 3a and 3b each represent an alternative image generating unit 2 for generating virtual operating elements 3. In FIG. 3a, a unit consisting of a laser module 11, a phase arrangement 12 as a so-called SLM unit (spatial light modulator/LCoS, LC, AOM) for generating a hologram, and a lens 13 is utilized for generating the virtual operating element 3. The image generating unit 2 of FIG. 3b has a unit with a laser background illumination 14, or a unit in MEMS (micro-electro-mechanical system) technology in which varicolored laser beams deflected by a mirror system generate an image, together with a nanostructure unit 15 (nanostructured static hologram/engineered micro-pixel) and a diffuser unit 16. The invention is not limited to these options of image generation; only exemplary embodiments are shown.

[0078] The laser module 11 advantageously includes a coherent light source such as an RGB laser or a monochrome laser source. The phase arrangement 12 for spatial light modulation (SLM) can be implemented as an LC device, LCoS device, DLP device, an AOM or EOM.

[0079] In FIG. 4, a further exemplary application of the invention is shown in the case of an incoming call. In this example, a mobile telephone of the driver is connected to the central control and evaluation unit in the vehicle. Such a connection may be effected utilizing data transmission according to the USB or Bluetooth standards and is intended to enable the driver to control the telephone by input means present in the vehicle. In addition, it is common for a sound system present in the vehicle to be used for the acoustic reproduction and for recording or inputting the voice of the driver 4.

[0080] The example in FIG. 4 shows a representation generated by means of a HUD unit in the area of the windshield 5. Information regarding an incoming call is displayed with the exemplary inscription "Eingehender Anruf" ("Incoming call") and a choice "Annehmen?" ("Accept?") to answer or reject the call. In addition, the name of the caller, in this case John Smith, can also be displayed.

[0081] This output generated by the HUD unit is merely an additional pictorial representation and is not necessary for the method according to the invention; it offers no input option or choice to the driver 4.

[0082] An input option or choice is provided by the proposed method and the associated arrangement. For this purpose, two virtual operating elements 3 in the form of two cubes or cuboids are represented in an area in front of the steering wheel 9 by the image generating unit 2. This representation is preferably carried out as a three-dimensional representation of the operating elements 3 in such a way that the first operating element 3 is provided with the inscription "Yes" or "Ja", and the second operating element 3 with the inscription "No" or "Nein", in one of its areas, such as a side. Alternatively, the areas of the operating elements 3 can be provided with the symbols of a check mark (✓) for answering the call and a cross (X) for declining it. Thus, the driver 4 is offered a selection to answer the telephone call by means of the first operating element 3 shown on the left and to decline the telephone call by means of the second operating element 3 shown on the right.

[0083] The means 7 for gesture recognition is used to recognize the selection the driver 4 is making between the two operating elements 3, and depending on this recognized selection by means of the central control and evaluation unit, the incoming call is answered or declined. After recognizing a selection made, the generation of the three-dimensional operating elements 3, that is the representation of the two cubes or cuboids, is terminated. The generation of the graphical representation by the HUD unit is also terminated. An exemplary additional representation of the route 18 by the HUD unit is maintained while performing the method according to the invention for inputting and outputting information in a vehicle.
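The accept/decline flow of FIG. 4 can be sketched as a small decision function; the element identifiers and command names below are hypothetical, chosen only to illustrate the sequence described above:

```python
def handle_call_gesture(selected_element):
    """Map a recognized selection to a telephone command and tear down the projection.

    Returns None while no coincidence with either cube has been recognized,
    so the projection continues.
    """
    if selected_element == "accept_cube":
        command = "answer_call"
    elif selected_element == "decline_cube":
        command = "reject_call"
    else:
        return None
    # After a recognized selection, both the 3D cubes and the HUD call prompt
    # are terminated; the HUD route display is left untouched.
    return {"telephone": command,
            "stop_projection": True,
            "keep_route_display": True}

print(handle_call_gesture("accept_cube"))
```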

[0084] FIG. 5 shows a further exemplary application of the invention in controlling a volume of a sound system arranged in the vehicle.

[0085] In contrast to FIG. 4, an inscription with the text "Music Volume +/−" or "Musik Lautstärke +/−" is optionally displayed in addition to a representation of the further route 18 by the HUD unit. The image generating unit 2 generates a virtual operating element 3 in the form of a three-dimensional wheel, which is provided, for example, with a double arrow and the sign "+" for an increase in volume of the sound system and the sign "−" for a decrease in volume.

[0086] A projection of this choice for changing the volume can take place, for example, if a viewing direction of the driver 4 to an area with a volume control of a sound system is recognized by the means 10 for gaze detection and eye tracking. Alternatively, the projection can take place as a result of a recognized voice command or a prior selection on a previously projected operating element 3.

[0087] The representation of the virtual operating element 3 takes place again in an area in front of the steering wheel 9 and can be reached very easily by the driver 4. The driver 4 can make a selection, for example, in such a way that he touches the virtual operating element 3 on its right half to increase the volume. An increase in volume can, for example, take place by a fixed amount in case of a coincidence recognized using the means 7 for gesture recognition. Alternatively, the volume can be increased for as long as a coincidence between the right half of the operating element 3 and the hand 8 or a finger of the driver 4 is recognized.

[0088] In the event that a coincidence between the left half of the operating element 3 and the hand 8 is recognized, a decrease in volume by a fixed amount takes place or as long as the coincidence is recognized.
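The two volume-change variants described above — a fixed increment per recognized coincidence, or a change that continues for as long as the coincidence persists — can be sketched in one function. The step size, limits, and the frame-count parameter are hypothetical:

```python
def adjust_volume(volume, side, held_frames=1, step=2, vmin=0, vmax=40):
    """Adjust the volume after a coincidence with one half of the wheel.

    side        -- "right" increases, "left" decreases (as in FIG. 5)
    held_frames -- 1 models the fixed-amount variant; larger values model
                   a coincidence held over several recognition frames
    """
    delta = step * held_frames if side == "right" else -step * held_frames
    return max(vmin, min(vmax, volume + delta))

print(adjust_volume(10, "right"))           # fixed-amount increase
print(adjust_volume(10, "left", held_frames=3))  # held coincidence, decrease
```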

[0089] In a particular embodiment, it is provided that the represented virtual operating element 3 is configured to be rotatable like a knob-shaped volume controller and, depending on the direction of rotation, a decrease or increase in volume is performed. Such a rotary movement can be triggered by the driver 4 by stroking along an edge of the wheel and thereby setting it into rotation.

[0090] After setting the volume and a lapse of a fixed waiting time, the projection of the virtual operating element 3 configured as a rotary knob and the representation of the inscription by the HUD unit are terminated. In this example too, the additional representation of the route 18 by the HUD unit is not affected.

[0091] As shown in FIG. 6, the virtual operating element 3 can also be represented in the form of a three-dimensional cube which displays setting options or choices on its sides. For example, the sides or areas of the operating element 3 could depict functions of different vehicle systems or functions of one vehicle system, such as a sound system. In the case of a sound system, for example, choices for volume, radio stations, sound sources, sound settings and the like are represented on the sides of the projected cube. The driver 4 can rotate the virtual cube-like operating element 3 about one or more axes, in doing so bring the desired function to the front of the cube, and select it by tapping. When the driver 4 has selected, for example, the volume setting, the virtual operating element 3 in the form of a small wheel for volume setting, already described above with respect to FIG. 5, is represented. In addition, an inscription with respect to the current front of the operating element 3, such as "Hauptmenü" or "Main menu", can be displayed by the HUD unit.
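The rotate-and-tap cube menu of FIG. 6 can be sketched as a cyclic list of faces in which index 0 is the face currently shown at the front; the face labels are hypothetical sound-system functions taken from the example above:

```python
class CubeMenu:
    """Cube whose faces cycle to the front as the driver's stroke gesture rotates it."""

    def __init__(self, faces):
        self.faces = list(faces)  # index 0 is the face currently at the front

    def rotate(self, steps=1):
        """Rotate the cube so that the face `steps` positions away comes to the front."""
        steps %= len(self.faces)
        self.faces = self.faces[steps:] + self.faces[:steps]

    def tap(self):
        """Tapping selects the function on the current front face."""
        return self.faces[0]

menu = CubeMenu(["volume", "stations", "sources", "sound_settings"])
menu.rotate()
print(menu.tap())
```

Selecting "volume" in this sketch would then trigger the projection of the volume wheel already described with respect to FIG. 5.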

LIST OF REFERENCE NUMERALS

[0092] 1 User interface
[0093] 2 Image generating unit, laser projection unit
[0094] 3 Three-dimensional virtual operating element
[0095] 4 Driver
[0096] 5 Windshield
[0097] 6 Environment
[0098] 7 Means for gesture recognition
[0099] 8 Hand
[0100] 9 Steering wheel
[0101] 10 Means for gaze detection and eye tracking
[0102] 11 Laser module
[0103] 12 Phase arrangement
[0104] 13 Lens
[0105] 14 Laser background lighting/MEMS
[0106] 15 Nanostructure unit
[0107] 16 Diffuser unit
[0108] 17 Activating the selected function
[0109] 18 Route

[0110] The detailed description and the drawings or figures are supportive and descriptive of the disclosure, but the scope of the disclosure is defined solely by the claims. While other embodiments for carrying out the claimed teachings have been described in detail, various alternative designs and embodiments exist for practicing the disclosure defined in the appended claims.