User interface and method for signaling a 3D-position of an input means in the detection of gestures

09956878 · 2018-05-01

Abstract

In a user interface and a method for signaling a position of an input mechanism with respect to an area for 3D-gesture detection, the method includes the following steps: an input mechanism of a user is detected in the area for 3D-gesture detection, and the position of the input mechanism is signaled by an indicator in the area of the edges of a display unit of the user interface.

Claims

1. A method for signaling a position of an input mechanism with regard to a region for 3D-detection of gestures for a user interface, comprising: detecting the input mechanism of a user in the region for 3D-gesture detection; and signaling the position of the input mechanism by an item of information in a region of edges of a display unit of the user interface.

2. The method according to claim 1, further comprising: detecting a change in a position of the input mechanism with respect to a first coordinate, and in response thereto: modifying the position of the information in a region of horizontal edges of the display unit.

3. The method according to claim 1, further comprising: detecting a change in a position of the input mechanism with respect to a second coordinate, and in response thereto: modifying the position of the information in a region of vertical edges of the display unit.

4. The method according to claim 1, further comprising: detecting a change in a position of the input mechanism with respect to a third coordinate, and in response thereto: modifying a design of the information.

5. The method according to claim 4, wherein the modification of the design of the information relates to transparency, an extension along a respective edge, edge focus, and/or color.

6. The method according to claim 1, wherein the position of the information with respect to boundaries of the edges of the display unit substantially corresponds to a position of the input mechanism with respect to boundaries of the region for 3D-gesture detection.

7. The method according to claim 1, wherein the item of information is displayed on the display unit of the user interface.

8. The method according to claim 1, wherein the item of information is displayed on separate illumination devices next to the display unit of the user interface.

9. The method according to claim 1, further comprising: detecting an input mechanism of a second user in the region for 3D-gesture detection, and in response thereto: modifying a design of the item of information.

10. A user interface, comprising: a display unit; a sensor system; and an evaluation unit; wherein the sensor system is adapted to detect an input mechanism for 3D-gesture control; and wherein the evaluation unit is adapted to signal a position of the input mechanism by an item of information in a region of edges of the display unit.

11. The user interface according to claim 10, further comprising additional illumination devices and/or bar-type illumination elements, located along respective edges of the display unit, adapted to signal the item of information.

12. The user interface according to claim 10, wherein the user interface is adapted to perform the method recited in claim 1.

13. A non-transitory computer-readable medium storing a set of instructions that when executed by an evaluation unit of a user interface that includes a display unit, a sensor system, and the evaluation unit, the sensor system adapted to detect an input mechanism for 3D-gesture control, the evaluation unit adapted to signal a position of the input mechanism by an item of information in a region of edges of the display unit, perform the method recited in claim 1.

14. A user terminal, comprising the user interface recited in claim 10.

15. The user terminal according to claim 14, wherein the user terminal is arranged as a wireless communications device, a tablet PC, and/or a smartphone.

16. A transportation device, comprising the user interface recited in claim 10.

17. The transportation device according to claim 16, wherein the transportation device is arranged as a vehicle.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a schematic view of components of a user interface in a transportation device according to an example embodiment of the present invention.

(2) FIG. 2 is a schematic view of components of a user interface of a user terminal according to an example embodiment of the present invention.

(3) FIG. 3 is a schematic view of a hand of a user in a 3D-gesture detection region.

(4) FIG. 4 is an alternative schematic view of a hand of a user in a 3D-gesture detection region.

(5) FIGS. 5 through 10 illustrate operating steps when operating a user interface according to an example embodiment of the present invention.

(6) FIG. 11 illustrates a user interface according to an example embodiment of the present invention.

(7) FIG. 12 is a flow chart illustrating steps of a method according to an example embodiment of the present invention.

DETAILED DESCRIPTION

(8) FIG. 1 shows a passenger car 10 as a transportation device, in which a user interface 1 is installed. A screen 2 as display unit is installed in the instrument panel of passenger car 10 and communicatively connected to an electronic control unit 3 as an evaluation unit. Electronic control unit 3 in turn is connected to a data memory 5, in which references for classified 3D-gestures intended for operating functionalities of user interface 1 are stored. Mounted underneath screen 2 is an infrared LED strip 4 as a sensor system, which defines a 3D-detection region 6. Illumination elements 12a, 12b, 12c, 12d are situated along the edges of screen 2, by which electronic control unit 3 is able to output, in the region of the edges of screen 2, information on the current hand position of the user in order to indicate the position of an input mechanism (such as a hand of the user).

(9) FIG. 2 shows a schematic view of a tablet PC 20 as the user terminal, in which a user interface 1 is integrated. An optical camera 4 as a sensor system for the detection of 3D-gestures is communicatively connected to a programmable processor 3. Below the (partially cut open) screen 2 as the display unit there is also a data memory 5, which is communicatively connected to processor 3. The functions of the illustrated components substantially correspond to the functions described in connection with FIG. 1.

(10) FIG. 3 shows hand 7 of a user, which is located in a 3D-detection region 6. The 3D-detection region is defined by a sensor system 4, which is integrated into center console 8 of a transportation device. Horizontal boundaries X_min, X_max and vertical boundaries Y_min, Y_max delimit the region within which hand 7 can be detected by the sensor. The current position P of hand 7 within these boundaries is given by X_h, Y_h. As illustrated, hand 7 is detected in its entirety and can be resolved quite well by the sensor technology.

(11) FIG. 4 shows a lateral view of the system illustrated in FIG. 3. Detection region 6 substantially has the cross-sectional area of an inverted triangle, whose lower corner coincides with sensor 4 integrated into center console 8. The lower boundary Z_min of detection region 6 is located here as well. The upper boundary Z_max of detection region 6 is situated at a distance of approximately 30 to 40 cm from sensor 4. Hand 7 is disposed between the two boundaries Z_min, Z_max at a current height Z_h.
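
The mapping implied by FIGS. 3 and 4, from a detected hand position to relative coordinates within the detection-region boundaries, can be sketched in a few lines. The patent itself contains no code; the following Python sketch, including all function and parameter names, is an illustrative assumption rather than part of the disclosure:

    # Illustrative sketch (not from the patent): normalize a detected hand
    # position (x_h, y_h, z_h) against the detection-region boundaries
    # described for FIGS. 3 and 4.
    def normalize_position(x_h, y_h, z_h,
                           x_min, x_max, y_min, y_max, z_min, z_max):
        """Return the hand position as relative coordinates in [0, 1]^3."""
        def rel(value, lower, upper):
            # Clamp to the boundaries so a hand at the edge of the region
            # saturates at 0 or 1 instead of leaving the unit interval.
            value = max(lower, min(upper, value))
            return (value - lower) / (upper - lower)
        return (rel(x_h, x_min, x_max),
                rel(y_h, y_min, y_max),
                rel(z_h, z_min, z_max))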

(12) In partial Figure a), FIG. 5 shows a view of a screen 2 of a user interface, on which a screen view 11 for the selection of a radio station is displayed. Information in the form of blue bars 9a, 9b, 9c, 9d is shown in the edge regions of screen 2; the boundaries of the bars become increasingly transparent and diffuse toward the corners of screen 2. Partial Figure b) shows a view of a hand 7 of a user in detection region 6 of employed sensor 4, corresponding to the view of FIG. 3. Position P of hand 7 lies in the center of detection region 6, and the distance from sensor 4 is such that hand 7 fills detection region 6 well without extending beyond its boundaries.

(13) FIG. 6 shows an operating step that is modified in comparison with the situation shown in FIG. 5 by a change in position P of hand 7: information 9a, 9c disposed along the horizontal edges of screen 2 has migrated to the left in accordance with position P of hand 7. In contrast, the position of references 9b, 9d along the vertical edges of screen 2 has remained unchanged, as has the distance of hand 7 from sensor 4.

(14) In contrast to the illustration shown in FIG. 6, FIG. 7 shows an additional shift of hand 7 of the user in the upward direction or in the driving direction. Accordingly, references 9b, 9d have also moved along the vertical edges of screen 2 in the direction of the upper edge of screen 2.
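
The behavior described for FIGS. 6 and 7, horizontal hand movement shifting bars 9a, 9c and vertical movement shifting bars 9b, 9d, amounts to a proportional mapping of the normalized hand position onto the screen edges. A minimal sketch, with all names and the pixel conventions assumed rather than taken from the patent:

    # Illustrative sketch (not from the patent): position the bar-type
    # indicators along the screen edges in proportion to the normalized
    # hand position, per the behavior described for FIGS. 6 and 7.
    def place_indicators(rel_x, rel_y, screen_w, screen_h, bar_len):
        # Bars 9a/9c on the horizontal edges track the horizontal coordinate.
        x_px = rel_x * (screen_w - bar_len)
        # Bars 9b/9d on the vertical edges track the vertical coordinate;
        # pixel rows are assumed to grow downward, hence the inversion.
        y_px = (1.0 - rel_y) * (screen_h - bar_len)
        return {"top": x_px, "bottom": x_px, "left": y_px, "right": y_px}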

(15) FIG. 8 once again shows the situation illustrated in FIG. 5, as the starting point for illustrating the effect of a modified distance between hand 7 of the user and sensor 4.

(16) FIG. 9 depicts hand 7 of the user, which is located in an upper edge area of 3D-detection region 6. Hand 7 has been moved in the direction of the viewer, and information 9a, 9b, 9c, 9d in the edge regions of screen 2 is displayed with greater transparency or reduced saturation and luminosity. This informs the user that he should move closer to sensor 4 in order to improve the sensor resolution of his gestures.

(17) Starting from FIG. 8, FIG. 10 illustrates the result of a disadvantageously close approach of hand 7 to sensor 4. From the point of view of the viewer, hand 7 has therefore moved into the drawing plane and is shown at a reduced size for this reason. Leaving the sweet spot in the direction of sensor 4 leads, as in the situation illustrated in FIG. 9, to increased transparency or reduced luminosity and saturation of information 9a, 9b, 9c, 9d. This signals to the user that he should increase the distance between his hand 7 and employed sensor 4 in order to improve the resolution of his gestures.
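
FIGS. 8 through 10 describe the same visual response, increased transparency with reduced luminosity and saturation, whenever the hand leaves the sweet spot in either direction along the distance axis. One plausible realization is a piecewise-linear fade; the sweet-spot limits and minimum opacity below are assumptions, not values from the patent:

    # Illustrative sketch (not from the patent): fade the indicators as the
    # hand leaves the sweet spot along the distance axis (FIGS. 9 and 10).
    def indicator_alpha(rel_z, sweet_lo=0.4, sweet_hi=0.6, min_alpha=0.2):
        """Full opacity inside the sweet spot, fading linearly outside it."""
        if sweet_lo <= rel_z <= sweet_hi:
            return 1.0
        if rel_z < sweet_lo:   # hand too close to the sensor (FIG. 10)
            overshoot = (sweet_lo - rel_z) / sweet_lo
        else:                  # hand too far from the sensor (FIG. 9)
            overshoot = (rel_z - sweet_hi) / (1.0 - sweet_hi)
        return max(min_alpha, 1.0 - overshoot * (1.0 - min_alpha))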

(18) FIG. 11 shows a view of a screen 2, which is surrounded by four additional illumination devices 12a, 12b, 12c, 12d. Two elongated illumination devices 12a, 12c are disposed in the regions of the horizontal edges of screen 2. Correspondingly, vertically oriented illumination devices 12b, 12d are situated in the regions of the vertical edges of the screen. Illumination devices 12a, 12b, 12c, 12d are arranged as strip-type illumination elements. To indicate the position of a hand of a user, illumination devices 12a, 12b, 12c, 12d are designed to display information 9a, 9b, 9c, 9d in the manner described above. Screen 2 may therefore be used in its entirety for the display of other optional content.

(19) FIG. 12 shows a flow chart that illustrates steps of a method according to an example embodiment of the present invention. In step 100, the hand of a user as input mechanism is detected in a region for 3D-gesture detection. In order to inform the user about the current position of the input mechanism with regard to the edges of the region for 3D-gesture detection, in step 200, the position of the input mechanism is signaled by an item of information in the region of the edges of a display unit of the user interface. The information may substantially be displayed as bar-type or line-type elements, which together symbolize, in the edge regions of the display, a crosshair indicating the position of the input mechanism. In step 300, the user modifies the position of the input mechanism with regard to three coordinates (horizontal, vertical, and a third coordinate perpendicular to the two aforementioned coordinates). In response, in step 400, the position of the information in the region of the edges of the display unit as well as the visual design of the information are modified. For movements perpendicular to the axis of the distance between the hand of the user and the sensor, the position of the information in the edge regions of the display unit is modified accordingly. For changes along that axis, i.e., of the distance between the hand of the user and the sensor, the visual design of the information is modified. When the hand leaves the sweet spot, the transparency of the information is increased, which results in reduced luminosity and/or saturation of the information in comparison with the other screen contents. As an alternative, it is also possible to render the edges of the references more diffuse, which indicates to the user that a correction of his hand position is advisable. In step 500, a change of user is detected in that the driver of the transportation device removes his hand from the detection region while the hand of the front-seat passenger enters the 3D-detection region. This change is acknowledged in step 600 by a change in the visual design of the information. More specifically, the color and the form of the information are modified. In addition, an icon is superimposed in order to indicate which user is currently taken into account by the system. Optionally, previously defined alternative design parameters or data sets may be used for the new user. For example, different display durations for the information may be provided as a function of the proficiency of the particular user: the driver will usually operate the user interface more frequently, so that the time for the information output may be shorter than for the passenger. Corresponding parameters can be configured as desired in a settings menu.
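
Steps 100 through 600 can be read as one event loop: detect the hand, normalize its position, update the indicator positions and their visual design, and restyle everything when the active user changes. The sketch below ties the earlier helper functions together; the sensor and display interfaces are assumed for illustration and do not appear in the patent:

    # Illustrative sketch (not from the patent): an event loop covering
    # steps 100-600; `sensor` and `display` are assumed interfaces.
    def run_signaling_loop(sensor, display):
        current_user = None
        while True:
            hand = sensor.detect_hand()                   # step 100
            if hand is None:
                display.hide_indicators()
                continue
            if hand.user != current_user:                 # step 500
                current_user = hand.user
                display.apply_user_style(current_user)    # step 600
            # Steps 200-400: map the hand position to indicator positions
            # and visual design; `boundaries` is assumed to be the 6-tuple
            # (x_min, x_max, y_min, y_max, z_min, z_max).
            rel_x, rel_y, rel_z = normalize_position(
                hand.x, hand.y, hand.z, *sensor.boundaries)
            positions = place_indicators(rel_x, rel_y,
                                         display.width, display.height,
                                         display.bar_length)
            display.draw_indicators(positions, indicator_alpha(rel_z))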

LIST OF REFERENCE CHARACTERS

(20)
1 user interface
2 screen
3 evaluation unit
4 sensor system
5 data memory
6 detection region
7 hand of the user
8 center console
9a, 9b, 9c, 9d information
10 passenger car
11 screen content
12a, 12b, 12c, 12d additional illumination devices
20 tablet PC
100-600 method steps
P (X_h, Y_h, Z_h) position of the hand
X_min, Y_min, Z_min, X_max, Y_max, Z_max boundaries of the detection region