Gesture interaction with a driver information system of a vehicle
10585487 · 2020-03-10
CPC classification
G06F2203/04806, B60K2360/146, G06F3/017, G06F3/011, B60K35/00, B60K35/10, G06F3/04845
International classification
B60K35/00, G06F3/0484
Abstract
A control system is provided for moving and/or magnifying display content, wherein the control system includes: a display unit having display content; at least one camera, which is designed to record a sensing region in front of the display unit; a gesture-recognition unit, which is coupled to the at least one camera and is designed to recognize a predetermined gesture performed with a hand and a current position of the gesture in the sensing region; and a display-content adapting unit, which is designed to adapt the display content in accordance with a change of the current position of the gesture, in particular to move the display content accordingly in the event of a change of the position in a plane parallel to the display unit and/or to enlarge or reduce the size of the display content in the event of a change of the position toward or away from the display unit.
Claims
1. A control system for moving and/or magnifying display content, comprising: a display unit having the display content; at least one camera which is designed to record a sensing region in front of the display unit; a gesture recognition unit which is coupled to the at least one camera and is designed to recognize a predetermined gesture performed with a hand and a current position of the gesture in the sensing region; and a display content adapting unit which is designed to adapt the display content in accordance with a change of the current position of the gesture, including moving the display content accordingly in the event of a change of the position in a plane parallel to the display unit and/or enlarging or reducing the size of the display content in the event of a change of the position toward or away from, respectively, the display unit; wherein the display content adapting unit is designed to move the display content, in horizontal and vertical directions, distances equal to distances by which the hand is moved in the horizontal and vertical directions; wherein the display content adapting unit is designed, as soon as a last position of the gesture has been acquired, to allow the display content to continue to run for a predetermined amount of time at a last sensed speed at which the position of the gesture has changed, and thereafter to slow down continuously until the display content comes to a standstill; and wherein the gesture recognition unit calculates a corresponding position of the at least one camera in each of x, y, and z directions in accordance with relationships including:
P_VCnx = P_VC1x * 2^((G_nx - G_1x) / f_x),
P_VCny = P_VC(n-1)y + P_VCnx * (G_ny - G_(n-1)y) * f_y, and
P_VCnz = P_VC(n-1)z + P_VCnx * (G_nz - G_(n-1)z) * f_z, where P_VCnx, P_VCny, and P_VCnz are the corresponding positions, G is the gesture, f_x, f_y, and f_z are sensitivity parameters, n is an index for the corresponding positions sensed during an interaction, and 1 < n ≤ k, where k is a number of position coordinates.
2. The control system as claimed in claim 1, wherein two cameras are provided for recording the sensing region in front of the display unit, and the gesture recognition unit determines the current position of the gesture in the sensing region by stereoscopy.
3. The control system as claimed in claim 1, wherein the at least one camera is an infrared camera, a time-of-flight camera and/or a structured-light camera.
4. The control system as claimed in claim 2, wherein the at least one camera is an infrared camera, a time-of-flight camera and/or a structured-light camera.
5. The control system as claimed in claim 1, wherein the gesture recognition unit and/or the display content adapting unit are designed to move and/or to magnify the display content as long as the predetermined gesture is recognized.
6. The control system as claimed in claim 1, wherein the sensing region is a predetermined space in front of the display unit.
7. The control system as claimed in claim 1, wherein the display content adapting unit is designed to enlarge or to reduce the size of the display content in the case of a change of the position toward or away from the display unit by a factor, the factor corresponding to a power of base 2 with a change of the distance between the position of the gesture and the display unit as exponent.
8. The control system as claimed in claim 1, wherein the display content adapting unit is designed to move the display content with a change in the position in a plane parallel to the display unit correspondingly such that a length of a movement of the display content corresponds to a distance of the movement of the position of the gesture in the plane parallel to the display unit.
9. The control system as claimed in claim 1, wherein the display content is a map section of a map which is stored in a database of a navigation system, and the display content adapting unit adapts the map section in accordance with the change in the current position of the gesture.
10. A vehicle, comprising: a driver information system which has at least one display unit for displaying graphical data; and a control system for moving and/or magnifying display content, including: a display unit having the display content; at least one camera which is designed to record a sensing region in front of the display unit; a gesture recognition unit which is coupled to the at least one camera and is designed to recognize a predetermined gesture performed with a hand and a current position of the gesture in the sensing region; and a display content adapting unit which is designed to adapt the display content in accordance with a change of the current position of the gesture, wherein the adaption includes moving the display content accordingly in the event of a change of the position in a plane parallel to the display unit and/or enlarging or reducing the size of the display content in the event of a change of the position toward or away from, respectively, the display unit; wherein the display content adapting unit is designed to move the display content, in horizontal and vertical directions, distances equal to distances by which the hand is moved in the horizontal and vertical directions; wherein the display content adapting unit is designed, as soon as a last position of the gesture has been acquired, to allow the display content to continue to run for a predetermined amount of time at a last sensed speed at which the position of the gesture has changed, and thereafter to slow down continuously until the display content comes to a standstill; and wherein the gesture recognition unit calculates a corresponding position of the at least one camera in each of x, y, and z directions in accordance with relationships including:
P_VCnx = P_VC1x * 2^((G_nx - G_1x) / f_x),
P_VCny = P_VC(n-1)y + P_VCnx * (G_ny - G_(n-1)y) * f_y, and
P_VCnz = P_VC(n-1)z + P_VCnx * (G_nz - G_(n-1)z) * f_z, where P_VCnx, P_VCny, and P_VCnz are the corresponding positions, G is the gesture, f_x, f_y, and f_z are sensitivity parameters, n is an index for the corresponding positions sensed during an interaction, and 1 < n ≤ k, where k is a number of position coordinates.
11. The vehicle as claimed in claim 10, wherein the vehicle is a motor vehicle.
12. The vehicle as claimed in claim 10, wherein the graphical data is graphical data of a navigation system.
13. The vehicle as claimed in claim 10, wherein the graphical data is graphical data of an Internet browser.
Description
DETAILED DESCRIPTION OF THE DRAWINGS
(3) The following description explains the proposed control of the display content of a display unit using the example of controlling a displayed map section of a navigation system. Naturally, the control described here can in principle be transferred correspondingly to other display contents of a display unit of a driver information system. As a further example, an Internet browser integrated into the driver information system may be mentioned here, in which small contents can then be focused, as in the case of the map section, with a short and fast gesture.
(5) The map section 7 can be displaced correspondingly by a change of the position P of the camera 3 in the y and/or z direction. By a change of the position P of the camera 3 in the x direction, the map section 7 can be zoomed. That is, when the position P is moved in the x direction toward the map 1, the map section 7 is correspondingly reduced in size. Since the display area of the display unit remains the same, a smaller part of the map 1 is then displayed on the display area, which means that, as a result, the view has zoomed into the map 1. If the position P is moved away from the map 1 in the x direction, the map section 7 is correspondingly enlarged. Since the display area of the display unit remains the same, a larger part of the map 1 is then displayed on the display area, which means the view has zoomed out of the map 1.
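The patent does not specify a camera model, but the relationship described in paragraph (5) can be illustrated with a minimal pinhole-camera sketch in Python (the function name and the 60 degree field of view are assumptions for illustration, not taken from the patent): the visible map extent grows linearly with the camera height, while the display area stays fixed.

```python
import math

def visible_map_extent(camera_height: float, fov_deg: float = 60.0) -> float:
    """Width of the map strip seen by a virtual pinhole camera at the given
    height above the map. The display area is fixed, so a lower camera shows
    a smaller map section (zoom in) and a higher camera a larger one (zoom
    out). The 60 degree field of view is an illustrative assumption."""
    return 2.0 * camera_height * math.tan(math.radians(fov_deg) / 2.0)

# Halving the camera height halves the visible map extent: zoom in by factor 2.
print(visible_map_extent(1000.0))  # ~1154.7 map units visible
print(visible_map_extent(500.0))   # ~577.4 map units visible
```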
(6) According to the first aspect, the preceding basic principle is utilized for controlling a currently displayed map section 7 on the display unit 5 of a navigation system by way of a gesture control in free space. In this context, the position of a predetermined gesture G in the form of a particular hand/finger position is recognized and recalculated into a position P of the virtual camera 3.
(9) The gesture recognition has the advantage that the intention of the user is clearly recognizable and the gesture can be distinguished from other hand movements. As a result, other hand movements within the area of recognition of the cameras 12.1, 12.2 do not lead to any (unintended) adaptation of the map section 7. That is, it is only when the predetermined gesture G is recognized that the map section adapting unit 16 is activated.
(10) A gesture recognition unit 14 with the cameras 12.1, 12.2 can be a component of a control system 10 of the navigation system, provided as separate hardware connected to it, or implemented by software routines of the navigation system that are present in any case. The gesture recognition unit 14 is designed both for gesture recognition and for determining the current position P of the gesture G in free space in front of the display unit 5. The gesture recognition is implemented in a known manner using established methods of image processing and pattern recognition. The current position P of the gesture G is essentially determined from the known positions of the cameras 12.1, 12.2 and the current angles in space of the lines of sight 12.1r, 12.2r from the respective camera 12.1, 12.2 toward the position P of the gesture G. By this means, the gesture recognition unit 14 can determine the coordinates of position P with sufficient accuracy using known stereoscopic calculations.
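The patent leaves these stereoscopic calculations to known methods. One standard approach is triangulation: since two measured sight rays rarely intersect exactly, the position is taken as the midpoint of their closest approach. The following sketch assumes calibrated camera positions and unit sight-line direction vectors; the function name is hypothetical.

```python
import numpy as np

def triangulate_gesture(p1, d1, p2, d2):
    """Estimate the 3D position P of the gesture as the midpoint of the
    closest approach of the two sight rays.

    p1, p2: known positions of the cameras 12.1 and 12.2
    d1, d2: unit direction vectors of the sight lines toward the gesture
    """
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    r = p1 - p2
    a, b, e = d1 @ d1, d1 @ d2, d2 @ d2
    c, f = d1 @ r, d2 @ r
    denom = a * e - b * b          # approaches 0 for parallel sight lines
    if abs(denom) < 1e-9:
        raise ValueError("sight lines are parallel: no stereo fix possible")
    s = (b * f - c * e) / denom    # ray parameter for camera 12.1
    t = (a * f - b * c) / denom    # ray parameter for camera 12.2
    return (p1 + s * d1 + p2 + t * d2) / 2.0
```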
(11) The 3D coordinates of position P of the gesture G are preferably acquired in short time intervals, e.g. every 20 ms, and provided to the map section adapting unit 16 in order to ensure a fluid adaptation of the map section 7 (movement and/or magnification).
(12) As soon as the gesture recognition unit 14 has recognized the predetermined gesture G, the coordinates of the (gesture) position P are acquired by the gesture recognition unit 14 for as long as the predetermined hand/finger position for the gesture G is retained by the user. The coordinates (G_nx, G_ny, G_nz) form the basis on which the map section adapting unit 16 adapts the map section 7. The entire map 1, of which the map section 7 is displayed, is stored in a database of the navigation system.
(13) Coordinates of the position acquired during this process can be marked with a running index n and stored, the index n running from 1 to k. The last coordinates (G_kx, G_ky, G_kz) correspond to the last position P_k at which the hand H performed the predetermined gesture G or left the technically required sensing region in front of the display unit 5. That is, during an interaction, the gesture recognition unit 14 acquires k position coordinates:
(14) x coordinate: G_1x, G_2x, G_3x, G_4x, G_5x, ..., G_kx
(15) y coordinate: G_1y, G_2y, G_3y, G_4y, G_5y, ..., G_ky
(16) z coordinate: G_1z, G_2z, G_3z, G_4z, G_5z, ..., G_kz
(17) A particular space above or in front of the display unit 5 can be defined as a recognition area. Coordinates G_nxyz of position P are calculated by the gesture recognition unit 14 as control input to the map section adapting unit 16 only when the defined gesture G is recognized and the current position P of gesture G is within the recognition area. If a position P outside the recognition area is determined or reached during an active interaction, or if the defined hand position is left, the interaction is aborted. That is, there will then not be any further adaptation of the map section 7.
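As a minimal sketch of this gating, assuming an axis-aligned box as the recognition area (the class and function names are hypothetical, not from the patent):

```python
from dataclasses import dataclass

@dataclass
class RecognitionArea:
    """Axis-aligned box in front of the display unit 5 (hypothetical helper)."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float
    z_max: float

    def contains(self, p) -> bool:
        x, y, z = p
        return (self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max
                and self.z_min <= z <= self.z_max)

def control_input(gesture_recognized: bool, p, area: RecognitionArea):
    """Forward position P to the map section adapting unit 16 only while the
    predetermined gesture is held inside the recognition area; otherwise the
    interaction is aborted (None)."""
    return p if gesture_recognized and area.contains(p) else None
```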
(18) In order to arrive at the representation of a particular map section 7 on the display unit 5, the current position P of gesture G is recalculated into a position of a virtual camera (corresponding to the camera 3 described above).
(19) From the currently sensed position P of gesture G with the coordinates (G_nx, G_ny, G_nz), the gesture recognition unit 14 calculates the corresponding position (P_VCnx, P_VCny, P_VCnz) of the virtual camera, preferably in accordance with the following relationships, expressed as formulae:
P_VCnx = P_VC1x * 2^((G_nx - G_1x) / f_x)   (1.1)
P_VCny = P_VC(n-1)y + P_VCnx * (G_ny - G_(n-1)y) * f_y   (1.2)
P_VCnz = P_VC(n-1)z + P_VCnx * (G_nz - G_(n-1)z) * f_z   (1.3)
(20) where f_x, f_y, f_z are sensitivity parameters and n is the index for the positions P sensed during an interaction, where, as explained above, 1 < n ≤ k applies.
(21) The respective x coordinate corresponds to the current distance of gesture G from the display unit 5 or, respectively, to the height of the virtual camera above the map 1 of which the map section 7 is displayed on the display unit 5. The relationship between the camera x coordinate P_VCnx and the x coordinate G_nx of the gesture position is used for reducing or enlarging (zooming) the map section. Using a power of base 2 has the result that a defined movement of the gesture G or hand H in the x direction doubles or halves the displayed map extent. By this means, a single hand movement can sweep through a very wide range of zoom factors.
(22) Furthermore, the y and z coordinates P_VCny, P_VCnz of the virtual camera depend on the x coordinate P_VCnx in such a manner that, with a corresponding adjustment of the sensitivity parameters f_y and f_z, the length of a map movement on the display unit 5 corresponds to the distance of the movement of the gesture G or hand H in the y and/or z direction. For example, a hand movement of 10 cm to the right results in a map movement of 10 cm to the right. This yields a direct mental model, as a result of which the control is intuitive for the user.
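Relationships (1.1) to (1.3) translate directly into a per-frame update step. The following sketch assumes positions given as (x, y, z) tuples in the coordinate convention of paragraph (19); the sensitivity values are placeholders rather than values from the patent:

```python
def update_virtual_camera(p_prev, p_vc1_x, g_n, g_prev, g_1,
                          f_x=0.1, f_y=1.0, f_z=1.0):
    """One update of the virtual camera per relationships (1.1) to (1.3).

    p_prev:  previous camera position (x, y, z)
    p_vc1_x: camera height (x) at the start of the interaction
    g_n, g_prev, g_1: current, previous, and first gesture positions (x, y, z)
    f_x, f_y, f_z: sensitivity parameters (placeholder values)
    """
    # (1.1) zooming: every f_x of hand travel toward/away from the display
    # halves/doubles the camera height, i.e. the displayed map extent
    x = p_vc1_x * 2.0 ** ((g_n[0] - g_1[0]) / f_x)
    # (1.2)/(1.3) panning: the y/z step scales with the current height x, so
    # with suitably tuned f_y, f_z a 10 cm hand movement moves the map 10 cm
    # on the display at any zoom level
    y = p_prev[1] + x * (g_n[1] - g_prev[1]) * f_y
    z = p_prev[2] + x * (g_n[2] - g_prev[2]) * f_z
    return (x, y, z)
```

Because the pan step in (1.2)/(1.3) is multiplied by the current camera height, the same hand movement produces the same on-screen displacement at every zoom level, which is the 1:1 behavior described in paragraph (22).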
(23) According to a further advantageous aspect of the display content control, the adaptation of the display content can take into consideration the motion dynamics of the gesture G or hand H during the input. For this purpose, the map section adapting unit 16 is designed, after the last coordinates G_k of position P of gesture G have been sent, to allow the map section 7 to continue to run over the map 1 at the last sensed speed of the hand H or gesture G, the movement slowing down continuously until it comes to a standstill. This characteristic imparts to the user the feeling of being able to nudge the map. It can be mapped with the following relationships and settings, expressed as formulae:
last_vel_xyz = (G_kxyz - G_(k-1)xyz) / steplength   (2.1)
S_nxyz = -0.5 * b * (steplength * (n - k))^2 + last_vel_xyz * steplength * (n - k)   (2.2)
(for last_vel_xyz > 0)
S_nxyz = 0.5 * b * (steplength * (n - k))^2 + last_vel_xyz * steplength * (n - k)   (2.3)
(for last_vel_xyz < 0)
P_VCnx = P_VC1x * 2^((G_kx + S_nx) / f_x)   (2.4)
P_VCny = P_VC(n-1)y + P_VCnx * (S_ny - S_(n-1)y) * f_y   (2.5)
P_VCnz = P_VC(n-1)z + P_VCnx * (S_nz - S_(n-1)z) * f_z   (2.6)
(24) where n > k means that the map section 7 continues to run after the last recognized gesture position, b is a preset braking factor (a deceleration, e.g. 0.3 m/s²), steplength is a step length in accordance with the current frame rate of the cameras 12.1, 12.2, e.g. 20 ms at 50 frames/s, and last_vel_xyz is the last speed of the movement of the gesture G or hand H, i.e. (G_kxyz - G_(k-1)xyz) / steplength per relationship (2.1). In relationships (2.2) and (2.3), the braking term opposes the sign of last_vel_xyz, so the continued movement always decelerates.
(25) The described continuation of the map section ends as soon as S_nxyz reaches a defined minimum or a new series of coordinates of a position P of a recognized gesture G is sent.
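Relations (2.1) to (2.3) describe constant-deceleration kinematics per axis. The sketch below assumes, consistently with the braking behavior described, that the 0.5*b term opposes the direction of last_vel; the generator name and the details of the stop criterion are illustrative.

```python
def continuation_offsets(g_k, g_km1, steplength=0.02, b=0.3, max_steps=500):
    """Per-axis overshoot offsets S_n for n > k per relations (2.1) to (2.3).

    g_k, g_km1: last and second-to-last gesture coordinate on this axis
    steplength: frame interval, e.g. 20 ms at 50 frames/s
    b:          preset braking factor (deceleration), e.g. 0.3
    Yields S_n until braking has brought the continued movement to rest.
    """
    last_vel = (g_k - g_km1) / steplength         # (2.1)
    if last_vel == 0.0:
        return
    brake = -b if last_vel > 0 else b             # braking opposes the motion
    for m in range(1, max_steps + 1):             # m = n - k
        t = steplength * m
        vel = last_vel + brake * t                # remaining speed at step m
        if vel * last_vel <= 0:                   # sign flipped: at standstill
            break
        yield 0.5 * brake * t * t + last_vel * t  # (2.2) / (2.3)
```

The yielded offsets S_n would then be substituted into relations (2.4) to (2.6) in place of further gesture coordinates, letting the map glide on and brake smoothly to a standstill.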
(26) The foregoing disclosure has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the appended claims and equivalents thereof.