Method and device for a graphical user interface in a vehicle with a display that adapts to the relative position and operating intention of the user

10592078 · 2020-03-17

Abstract

In a method for providing a graphical user interface in a vehicle, at least one object for representing a first subset of total information is graphically displayed on a display area in at least one first display mode. An operating intention of a user is detected, and a relative position of the user with respect to the display area is ascertained. When the operating intention of the user has been detected, the object is transferred into a second display mode, in which the object is displayed perspectively or three-dimensionally pivoted about an axis at a pivoting angle in the direction of the relative position. A corresponding device for providing a graphical user interface is also described.

Claims

1. A method for providing a graphical user interface in a vehicle, comprising: graphically displaying, on a display area in at least one first display mode, at least one object for representing a first subset of total information; detecting an operating intention of a user; ascertaining a relative position of the user with respect to the display area; identifying a position of one of a head and eyes of the user; ascertaining a viewing angle, under which the user gazes at the display area from the relative position; wherein the viewing angle includes a vertical component including an angle component of the viewing angle by which the gaze direction of the user deviates in a downward direction from a point on the display area to a plane situated horizontally in the vehicle longitudinal direction that intersects the eyes of the user; and in response to detecting the operating intention of the user, transferring the object into a second display mode, in which the object is displayed perspectively or three-dimensionally pivoted about an axis extending horizontally through the object as a function of the vertical component of the viewing angle, and about an axis extending vertically through the object as a function of a preset value in the direction of the relative position.

2. The method according to claim 1, further comprising: ascertaining whether a first or a second user has the operating intention; and determining a first relative position for the first user and a second relative position for the second user.

3. The method according to claim 1, further comprising detecting, as an operating intention, an approach of the display area by an operating object by the user.

4. The method according to claim 1, wherein in the second display mode, a second subset of the total information is shown adjacent to the first subset of the total information in the object.

5. The method according to claim 4, wherein in the second display mode, the first subset of the total information is shown in highlighted form in comparison with the second subset of the total information.

6. The method according to claim 4, wherein in the second display mode, the second subset of the total information abuts the first subset of the total information perspectively or three-dimensionally in the front and/or in the back.

7. The method according to claim 4, wherein in the second display mode, the first subset of the total information is displayed on the display area in a region that is delimited by a first boundary and a second boundary, and the first subset of the total information is shown in highlighted form in comparison with the second subset of the total information; and wherein the first subset of the total information and the second subset of the total information are shifted in their positions on the display area by an operating action of the user, a first portion of the first subset of the total information being shifted out of the region, while a second portion of the second subset of the total information is shifted into the region, so that the first portion is no longer shown in highlighted form and the second portion is displayed in highlighted form.

8. The method according to claim 1, wherein a bar having a marker is displayed, the bar representing a total quantity of the total information, and a position of the marker on the bar being shifted as a function of the operating action, the position of the marker on the bar representing the position of the first subset of the total information within the total quantity of the total information.

9. The method according to claim 1, wherein the viewing angle is ascertained in response to the detection of the operating intention.

10. A device for providing a graphical user interface in a vehicle, comprising: a processor; a memory; and a display device including a display area; wherein the processor is configured to: graphically display at least one object, on the display area, for representation of a first subset of total information; detect an operating intention of a user, a position of a head or eyes of the user, and a viewing angle, under which the user gazes at the display area from a relative position, in response to the detection of the operating intention; ascertain a vertical component including an angle component of the viewing angle by which the gaze direction of the user deviates in a downward direction from a point on the display area to a plane situated horizontally in the vehicle longitudinal direction that intersects the eyes of the user; ascertain the relative position of the user with respect to the display area; and generate graphical data that represent the object in a first display mode, and by which the object can be transferred into a second display mode, in which the object is displayable pivoted about an axis at a pivoting angle in a direction of the relative position and as a function of a horizontal and/or the vertical component of the viewing angle in response to detection of the operating intention.

11. The device according to claim 10, wherein the processor is further configured to detect an approach by an operating object toward the display area.

12. The device according to claim 10, wherein the processor is further configured to ascertain whether a first user or a second user has the operating intention, the first user being assigned a first relative position and the second user being assigned a second relative position.

13. A vehicle, comprising the device recited in claim 10.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows a device according to an exemplary embodiment of the present invention.

(2) FIG. 2 schematically shows a view from above into the vehicle interior.

(3) FIG. 3 shows a placement of the device inside a vehicle.

(4) FIGS. 4 and 5 show displays as they are generated by a first exemplary embodiment of the method.

(5) FIG. 6 shows a flow chart for a first exemplary embodiment of the method.

(6) FIG. 7 shows a further display as it is able to be generated by the first exemplary embodiment of the method.

(7) FIG. 8 shows a display as it is generated by a second exemplary embodiment of the method on the display area.

DETAILED DESCRIPTION

(8) A device 1 according to an example embodiment of the present invention and a placement of device 1 in a vehicle 2 will be explained with reference to FIGS. 1, 2 and 3.

(9) Device 1 is used for providing a user interface in a vehicle 2. It includes a display device 3 having a display area 4. Display area 4 may be made available by a display of any type. Display device 3 is connected to a control unit 5, which generates graphical data that are visibly reproduced for the vehicle passengers via display area 4 in the interior of vehicle 2. In particular, operating objects and display objects are able to be shown for the user interface. These operating and display objects assist user N1 in controlling devices of the vehicle; the display objects additionally convey information.

(10) Furthermore, a proximity detection device 6 is provided, by which an approach of an operating object 7, such as a hand of a user N1, toward display area 4 is able to be detected in a detection space 8. Detection space 8 is situated in front of display area 4. Proximity detection device 6 is part of an input device for the user interface. User N1 may execute gestures in detection space 8 in order to control the display on display area 4.

(11) Proximity detection device 6, for example, may include infrared-light sources and infrared-light detectors. As an alternative, proximity detection device 6 may be equipped with an optical system including a camera, which records the gesture executed in detection space 8. In addition, the optical system could encompass a light-emitting diode, which, for instance, emits rectangular-wave, amplitude-modulated light. This light is reflected at the hand of user N1 executing the gesture in detection space 8 and reaches a photodiode of proximity detection device 6 after being reflected. An additional light-emitting diode likewise emits rectangular-wave, amplitude-modulated light to the photodiode, which, however, is phase-shifted by 180°. The two light signals superpose at the photodiode and cancel each other out if they have exactly the same amplitude. If the signals do not cancel each other out at the photodiode, a control loop regulates the light emission of the second diode such that the total received signal adds up to zero again. If the position of the hand of user N1 in detection space 8 changes, the light component that arrives at the photodiode from the first light-emitting diode via the reflection at the hand changes as well. This induces the control loop to adapt the intensity of the second light-emitting diode, which means that the control signal is a measure of the reflection, at the hand of user N1 executing the gesture, of the light emitted by the first diode. A signal that is characteristic of the position of the hand of user N1 can thereby be derived from the control loop.
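The compensation principle described above can be sketched as a simplified simulation; all function and variable names here are illustrative and not taken from the patent:

```python
def reflection_measure(hand_reflectance: float) -> float:
    """Simulate the two-LED compensation loop of the proximity detector.

    LED 1 emits amplitude-modulated light that reaches the photodiode via
    reflection at the user's hand; LED 2 emits the same modulation shifted
    by 180 degrees directly onto the photodiode.  A control loop adjusts
    the intensity of LED 2 until both signals cancel, so the control value
    itself becomes a measure of the reflection at the hand.
    """
    led2_intensity = 0.0
    gain = 0.5  # control-loop gain (illustrative value)
    for _ in range(200):  # iterate until the loop settles
        received = hand_reflectance - led2_intensity  # superposition of the two signals
        led2_intensity += gain * received  # drive the summed signal toward zero
    return led2_intensity  # ~ equal to the reflected light component
```

Because the loop drives the received sum to zero, the returned control value tracks the reflectance, and hence the position, of the hand.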

(12) In addition, device 1 includes an ascertainment device 9, which can detect whether the operating intention in detection space 8 is executed by a first user N1 or a second user N2. First user N1 is the driver of vehicle 2, in particular, and second user N2, in particular, is the passenger. Moreover, ascertainment device 9 is able to determine a first relative position P1 of first user N1 with regard to display area 4, and a second relative position P2 of second user N2 with regard to display area 4.

(13) The ascertainment of the relative position of a user with regard to display area 4 will be explained by way of example using relative position P1 of first user N1; however, relative position P2 of second user N2 is ascertained in an analogous manner.

(14) As illustrated in FIG. 1, an electrode array 10 is located in seat 11 of first user N1. This electrode array 10 may be used for capacitively coupling an identification code into the body of first user N1. The identification code is able to identify relative position P1 of first user N1 with regard to display area 4, the seating position of first user N1, and also first user N1 himself. The identification code is transmitted via the body of first user N1 and capacitively coupled out at the fingertip of first user N1, so that it can be transmitted to a receiving device accommodated in display device 3.

(15) The receiving device is connected to control unit 5, which in turn is capacitively coupled to electrode array 10. An electric field having a very limited range of several centimeters or decimeters, for example, is used in the capacitive couplings between electrode array 10 and first user N1 on the one hand, and between first user N1 and the receiving device in display device 3 on the other. The range of this field substantially corresponds to the size of detection space 8. Relatively low carrier frequencies of several hundred kHz, which lead to quasi-static fields, are employed for the signal transmission, i.e., fields for which the physical principles that apply to static fields hold for the most part. As far as further details of this signal transmission are concerned, reference is made to German Published Patent Application No. 10 2004 048 956 and the additional literature cited therein, which is hereby incorporated into the present application by reference. In particular, the circuit devices used in German Published Patent Application No. 10 2004 048 956 may be used.

(16) The manner in which a viewing angle of first user N1 and/or of second user N2 in relation to display area 4 is able to be ascertained will be explained with reference to FIGS. 2 and 3.

(17) Once again, this will be done by way of example for first user N1, but takes place in a similar manner for user N2.

(18) When an operating intention exists, the head and/or eye position of first user N1 is detected to begin with and then compared to a reference image. The reference image may be an image of first user N1 himself, in which case the image then includes the head and/or eye position of first user N1 when gazing at the road. This makes it possible to determine how the head and/or eye position of first user N1 changes when the user gazes at display area 4. As an alternative, the reference image may also be a pattern image.

(19) A deviation to the left or right of the head and/or eye position of first user N1 from a vertical plane 12 extending in parallel with vehicle longitudinal direction B will then be ascertained. Vertical plane 12 intersects center point 14 between the eyes of first user N1, in particular. In this case, the head and/or eye position describes especially the rotation of the head about an axis that lies in plane 12.

(20) A vertical component is ascertained via an upward or downward deviation of the head and/or eye position of first user N1 from a horizontal plane 13 which is situated at a right angle to vertical plane 12. Horizontal plane 13, in particular, intersects the eyes of first user N1. In this case, the head and/or eye position describes the rotation of the head about an axis that lies in plane 13, in particular.
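The two angle components described above can be sketched as simple vector geometry. The coordinate convention below (x along vehicle longitudinal direction B, y lateral, z vertical) and all names are assumptions for illustration, not part of the patent:

```python
import math

def viewing_angle_components(eye, point_on_display):
    """Compute horizontal and vertical components of the viewing angle.

    The horizontal component is the left/right deviation of the gaze
    direction from vertical plane 12 through the user's eyes; the vertical
    component is the downward deviation from horizontal plane 13 through
    the eyes.  Coordinates are (x, y, z) tuples in meters (illustrative).
    """
    dx = point_on_display[0] - eye[0]  # forward distance to the display
    dy = point_on_display[1] - eye[1]  # lateral offset
    dz = point_on_display[2] - eye[2]  # height offset (negative = below the eyes)
    horizontal = math.degrees(math.atan2(dy, dx))
    vertical = math.degrees(math.atan2(-dz, math.hypot(dx, dy)))  # positive = downward gaze
    return horizontal, vertical
```

A display mounted below eye level thus yields a positive vertical component, matching the downward deviation described in claim 1.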

(21) The method according to an example embodiment of the present invention will be described with reference to FIGS. 4 through 6.

(22) FIG. 4 shows a display on the display area before the start of the method, on which an object 15 is displayed in a first display mode.

(23) Object 15 initially includes a first subset of total information, which is displayed in multiple graphical objects 15.1 through 15.6. The various graphical objects 15.1 through 15.6 may include information from different categories. For example, graphical object 15.1 includes navigation information, whereas no displayable information has currently been assigned to graphical object 15.2. Like graphical object 15.6 and graphical object 15.5, graphical object 15.3 includes information related to the weather or climate. Graphical object 15.4 displays the music album that is currently playing.

(24) In step 21 of method 20, an operating intention on the part of first user N1, i.e. the driver, is detected. First user N1 has moved an operating object 7, such as his hand, into detection space 8 for this purpose.

(25) In step 22, the horizontal component of the viewing angle of first user N1 with regard to display area 4 is ascertained.

(26) In step 23, display area 4 is controlled such that object 15 is transferred into a second display mode. Object 15 is shown pivoted at a pivoting angle about an axis 16, which extends vertically through object 15 and is situated in the center of object 15, for instance. In terms of its value, the pivoting angle corresponds to the horizontal component of the viewing angle of first user N1 and amounts to 10°, for example. The pivoting angle should not exceed a value of 55°, because a projection of the perspectively displayed object 15 onto display area 4 may otherwise become too small.
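The mapping from the horizontal component of the viewing angle to the pivoting angle, including the 55° cap, might be sketched as follows (illustrative names and sign convention):

```python
import math

MAX_PIVOT_DEG = 55.0  # beyond this, the projection of object 15 becomes too small

def pivoting_angle(viewing_angle_horizontal_deg: float) -> float:
    """Pivoting angle about vertical axis 16: equal in magnitude to the
    horizontal component of the viewing angle, but capped at 55 degrees.
    The sign is kept so the object pivots toward the user's position."""
    magnitude = min(abs(viewing_angle_horizontal_deg), MAX_PIVOT_DEG)
    return math.copysign(magnitude, viewing_angle_horizontal_deg)
```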

(27) In particular, it is also possible to take the vertical component of the viewing angle into account in the pivoting angle. Object 15 would then be pivoted not only about vertical axis 16, but about a horizontal axis as well.

(28) Graphical objects 15.1 through 15.6, in particular, are also pivoted about axis 16 at the pivoting angle. In this manner, the total information included in graphical objects 15.1 through 15.6 is retained in object 15 as a whole.

(29) Following step 23, the object is therefore shown perspectively, and the size of object 15 decreases from the right front to the left rear. This can be understood when looking at the side lengths of object 15, in particular. The first subset of the total information displayed in graphical objects 15.1 through 15.6 also decreases in size from the right front to the left rear. The size characteristic, i.e. the measure of the decrease, of object 15 and of the first subset of the total information 15.1 through 15.6 is a function of the size of the pivoting angle, in particular. The greater the pivoting angle, the more extreme the size difference of object 15 between the right front and the left rear.

(30) Moreover, further graphical objects 15.7 through 15.10, which belong to a second subset of the total information, are shown perspectively abutting graphical objects 15.6 and 15.1 of the first subset of the total information in the rear and abutting graphical objects 15.4 and 15.3 in the front. They adjoin graphical objects 15.1 through 15.6 without a gap. This means that the two graphical objects 15.7 and 15.8, with the information displayed therein, are the largest of graphical objects 15.1 through 15.10, whereas graphical objects 15.9 and 15.10 are the smallest.

(31) Furthermore, graphical objects 15.1 through 15.6 of the first subset are shown within two boundary lines 17 and 18. These boundary lines 17 and 18 delimit a region 19 on display area 4 in the second display mode of object 15. Any information located in this region, as well as information entering it, is shown in highlighted form. For example, the first subset of the total information, i.e. graphical objects 15.1 through 15.6, may be displayed in color, while the second subset of the total information, i.e. graphical objects 15.7 through 15.10, is shown in gray.

(32) In other words, following step 23, object 15 displays a larger quantity of the total information, indicating to first user N1 which information may be displayed to him through further operating actions.

(33) First user N1 executes a wiping gesture directed to the left in front of display area 4 in detection space 8. This results in image scrolling, in which the graphical objects move in sequence: graphical objects including information disappear from display area 4 one after the other, while other graphical objects appear on display area 4. That is to say, if first user N1 executes a wiping gesture to the left, graphical objects 15.9 and 15.10 on the left disappear from display area 4 first, graphical objects 15.1 and 15.6 leave region 19 at least partially toward the left, graphical objects 15.7 and 15.8 on the right enter region 19 at least partially, and the graphical objects to the right of graphical objects 15.7 and 15.8 are at least partially displayed on display area 4.

(34) In step 24, the gesture is detected by proximity detection device 6 and is converted into a control signal, which actuates display area 4 such that all graphical objects 15.1 through 15.10 displayed in object 15 are shifted to the left rear.

(35) In so doing, graphical objects 15.7 and 15.8 enter region 19 as a function of the extent of the shift. Upon entering region 19, graphical objects 15.7 and 15.8, previously shown in gray, are displayed in color to the extent that they have entered region 19.

(36) On the side of boundary line 17, graphical objects 15.6 and 15.1 simultaneously leave region 19. They are then shown in gray to the extent that they have left region 19, once again as a function of the magnitude of the shift.
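The proportional highlighting of objects entering and leaving region 19 amounts to an overlap fraction between an object and the region. A minimal sketch, with illustrative names and one-dimensional screen coordinates as an assumption:

```python
def highlight_fraction(obj_left: float, obj_right: float,
                       region_left: float, region_right: float) -> float:
    """Fraction of a graphical object lying inside region 19.

    Objects are shown in color (highlighted) to the extent that they lie
    between boundary lines 17 and 18, and in gray outside the region.
    Arguments are horizontal screen coordinates (illustrative units).
    """
    overlap = min(obj_right, region_right) - max(obj_left, region_left)
    width = obj_right - obj_left
    return max(0.0, overlap) / width if width > 0 else 0.0
```

During a wiping gesture, recomputing this fraction for each object at every shift step yields the gradual gray-to-color transition described above.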

(37) When the operating action, i.e. the wiping gesture, has ended, which is detected when operating object 7, i.e. the hand of first user N1, leaves detection space 8, object 15 changes back to the first display mode. Graphical objects 15.2 through 15.5, 15.7 and 15.8, which are located within region 19 in the second display mode, are now displayed as the first subset of the total information.

(38) In step 23, it is additionally possible to display a bar 31 underneath object 15, which is shown in FIG. 7. Bar 31 represents the total quantity of the total information. In addition, bar 31 has a marker 30, and the position of marker 30 on bar 31 informs first user N1 of where the first subset of the total information is arranged within the total quantity of the total information. This gives user N1 an indication of the quantity of information that is still available to him. The length of marker 30, in particular, indicates the size of the total quantity: the shorter marker 30, the larger the total quantity of the total information.

(39) If a wiping gesture is executed, marker 30 on bar 31 is shifted in the direction in which the wiping gesture was executed.
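Marker 30 on bar 31 thus behaves like a scrollbar thumb. Its position and length could be derived as follows; the parameter names and the proportional model are assumptions for illustration:

```python
def marker_geometry(first_index: int, visible_count: int,
                    total_count: int, bar_length: float):
    """Position and length of marker 30 on bar 31.

    The marker length shrinks as the total quantity of information grows,
    and its position reflects where the first (visible) subset lies within
    the total information.  Lengths are in display units (illustrative).
    """
    length = bar_length * visible_count / total_count
    position = bar_length * first_index / total_count
    return position, length
```

A wiping gesture that shifts the visible subset by n objects simply increases `first_index` by n, moving the marker in the gesture direction.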

(40) A further exemplary embodiment of the method will be discussed in the following text with reference to FIGS. 4 and 8.

(41) Once again, the display shown in FIG. 4 forms the starting point of the method.

(42) Second user N2 moves operating object 7, i.e. his hand, into detection space 8, which is detected by proximity detection device 6. Via electrode array 10 and ascertainment unit 9, it is determined, in particular, that the user having the operating intention is second user N2, to whom relative position P2 has been allocated. Relative position P2 is thus determined via the identification of the user having the operating intention, i.e. user N2.

(43) The display area is controlled such that object 15 is pivoted in the direction of relative position P2, i.e. in the direction of the passenger.

(44) The pivoting angle can be set in advance, so that only the direction in which object 15 is pivoted is determined via relative positions P1 and P2. Object 15 would then be pivoted in the direction of first relative position P1 at the same pivoting angle as in the direction of relative position P2.
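With a preset pivoting angle, only the sign of the pivot depends on which user has the operating intention. A minimal sketch; the preset value and the sign convention (negative toward the driver on the left) are assumptions:

```python
PRESET_PIVOT_DEG = 30.0  # preset pivoting-angle magnitude (illustrative value)

def pivot_toward(relative_position: str) -> float:
    """Signed pivoting angle about vertical axis 16.

    Negative pivots object 15 toward the driver at relative position P1,
    positive toward the passenger at relative position P2 (assumed
    left-hand-drive layout and sign convention).
    """
    return -PRESET_PIVOT_DEG if relative_position == "P1" else PRESET_PIVOT_DEG
```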

(45) In the second exemplary embodiment of the method, an active distinction is made between driver and passenger, while in the first exemplary embodiment, a position or a viewing angle of the user having the operating intention is determined.

(46) Both methods are also easily combinable with one another. For example, it is possible to ascertain relative positions P1 and P2 via both methods simultaneously. For instance, the pivoting of object 15 about an axis extending horizontally through the object may then depend on the vertical component of the viewing angle, while the pivoting angle about axis 16 extending vertically through object 15 is ascertained via a preset value.

(47) If display area 4 includes a touch-sensitive surface, then operating actions may also be executed directly on the surface of display area 4.

(48) In particular, object 15 may also be displayed in a three-dimensional view instead of a perspective view for both alternatives of the method.

LIST OF REFERENCE CHARACTERS

(49)
1 device
2 vehicle
3 display device
4 display area
5 control unit
6 detection device; proximity detection device
7 operating object
8 detection space
9 ascertainment unit
10 electrode array
11 vehicle seat
12 vertical plane
13 horizontal plane
14 center point between the eyes of a user
15 object
15.1-15.6 graphical objects; first subset of the total information
15.7-15.10 graphical objects; second subset of the total information
16 axis
17, 18 boundaries
19 region
20 method
21-24 method steps
30 marker
31 bar
N1, N2 first user, second user
horizontal component of a viewing angle
vertical component of a viewing angle
pivoting angle
B vehicle longitudinal direction