System for gaze interaction
10013053 · 2018-07-03
Assignee
Inventors
- Markus Cederlund (Sollentuna, SE)
- Robert Gavelin (Åkersberga, SE)
- Anders Vennström (Stockholm, SE)
- Anders Kaplan (Uppsala, SE)
- Anders Olsson (Stockholm, SE)
- Mårten Skogö (Danderyd, SE)
CPC classification
G06F3/0488
PHYSICS
G06F3/017
PHYSICS
G06F2203/0381
PHYSICS
G02B2027/0187
PHYSICS
B60K35/235
PERFORMING OPERATIONS; TRANSPORTING
International classification
Abstract
A method and system for assisting a user when interacting with a graphical user interface combine gaze based input with gesture based user commands. A user of a computer system without a traditional touchscreen can interact with graphical user interfaces in a touchscreen like manner using a combination of gaze based input and gesture based user commands. The invention also offers touchscreen like interaction using gaze input and gesture based input as a complement or an alternative to touchscreen interactions with a computer device having a touchscreen. Combined gaze and gesture based interaction with graphical user interfaces can thus provide a touchscreen like environment in computer systems without a traditional touchscreen, in computer systems having a touchscreen arranged ergonomically unfavorably for the user, or in computer systems where the touchscreen is arranged such that it is more comfortable for the user to use gesture and gaze for the interaction than the touchscreen.
Claims
1. A control module for executing object selection commands during user interaction with an information presentation area displaying an object, wherein the control module is configured to: acquire user gaze data signals from a gaze tracking module, the user gaze signals indicating a gaze point on the information presentation area; receive, from an input device that includes a physical touch sensitive area, first user gesture data corresponding to a press of a finger on the physical touch sensitive area; generate a gaze point area on the information presentation area based on the user gaze data signals and on the first user gesture data, the gaze point area being a local area that includes the gaze point; receive, from the input device, second user gesture data corresponding to a movement of the finger on the physical touch sensitive area, the press of the finger followed next by the movement of the finger; determine, based on the gaze point area, that the second user gesture data is associated with a fine tuning command to update the gaze point area; resize, based on the fine tuning command, the gaze point area according to the second user gesture data to generate an updated gaze point area, the updated gaze point area including the object and overlapping with the gaze point area; receive, from the input device, third user gesture data indicating a release of the finger from the physical touch sensitive area, the movement of the finger followed next by the release of the finger; determine, based on the object being included in the updated gaze point area, that the third user gesture data is associated with a select command to select the object; and select, based on the select command, the object in the updated gaze point area.
2. A method for executing object selection commands during user interaction with an information presentation area associated with a computer device, said method comprising: acquiring gaze data signals from a gaze tracking module, the user gaze signals indicating a gaze point on the information presentation area; receiving, from an input device that includes a physical touch sensitive area, first user gesture data corresponding to a press of a finger on the physical touch sensitive area; generating a gaze point area on the information presentation area based on the user gaze data signals and on the first user gesture data, the gaze point area being a local area that includes the gaze point; receiving, from the input device, second user gesture data corresponding to a movement of the finger on the physical touch sensitive area, the press of the finger followed next by the movement of the finger; determining, based on the gaze point area, that the second user gesture data is associated with a fine tuning command to update the gaze point area; resizing, based on the fine tuning command, the gaze point area according to the second user gesture data to generate an updated gaze point area, the updated gaze point area including an object and overlapping with the gaze point area; receiving, from the input device, third user gesture data indicating a release of the finger from the physical touch sensitive area, the movement of the finger followed next by the release of the finger; determining, based on the object being included in the updated gaze point area, that the third user gesture data is associated with a select command to select the object; and selecting, based on the select command, the object in the updated gaze point area.
3. A system for user interaction with an information presentation area, said system comprising: an input device comprising a physical touch sensitive area, said user input device being adapted to detect user gestures in a form of user finger touch on, movement on and release from the physical touch sensitive area; a gaze tracking module adapted to detect gaze data of a user of said information presentation area; a control module configured to: acquire user gaze data signals from the gaze tracking module, the user gaze signals indicating a gaze point on the information presentation area; receive, from the input device, first user gesture data corresponding to a press of a finger on the physical touch sensitive area; generate a gaze point area on the information presentation area based on the user gaze data signals and on the first user gesture data, the gaze point area being a local area that includes the gaze point; receive, from the input device, second user gesture data corresponding to a movement of the finger on the physical touch sensitive area, the press of the finger followed next by the movement of the finger; determine, based on the gaze point area, that the second user gesture data is associated with a fine tuning command to update the gaze point area; resize, based on the fine tuning command, the gaze point area according to the second user gesture data to generate an updated gaze point area, the updated gaze point area including the object and overlapping with the gaze point area; receive, from the input device, third user gesture data indicating a release of the finger from the physical touch sensitive area, the movement of the finger followed next by the release of the finger; determine, based on the object being included in the updated gaze point area, that the third user gesture data is associated with a select command to select the object; and select, based on the select command, the object in the updated gaze point area.
4. A computer device associated with an information presentation area, said computer device comprising: an input device comprising a physical touch sensitive area, said user input device being adapted to detect user gestures in a form of user finger touch on, movement on and release from the physical touch sensitive area; a gaze tracking module adapted to detect gaze data of a user of said information presentation area; a control module configured to: acquire user gaze data signals from the gaze tracking module, the user gaze signals indicating a gaze point on the information presentation area; receive, from the input device, first user gesture data corresponding to a press of a finger on the physical touch sensitive area; generate a gaze point area on the information presentation area based on the user gaze data signals and on the first user gesture data, the gaze point area being a local area that includes the gaze point; receive, from the input device, second user gesture data corresponding to a movement of the finger on the physical touch sensitive area, the press of the finger followed next by the movement of the finger; determine, based on the gaze point area, that the second user gesture data is associated with a fine tuning command to update the gaze point area; resize, based on the fine tuning command, the gaze point area according to the second user gesture data to generate an updated gaze point area, the updated gaze point area including the object and overlapping with the gaze point area; receive, from the input device, third user gesture data indicating a release of the finger from the physical touch sensitive area, the movement of the finger followed next by the release of the finger; determine, based on the object being included in the updated gaze point area, that the third user gesture data is associated with a select command to select the object; and select, based on the select command, the object in the updated gaze point area.
5. A handheld portable device including an information presentation area and comprising an input device comprising a physical touch sensitive area, said user input device being adapted to detect user gestures in a form of user finger touch on, movement on and release from the physical touch sensitive area, and a gaze tracking module adapted to detect gaze data of a user of said information presentation area, said handheld portable device further comprising a control module configured to: acquire user gaze data signals from the gaze tracking module, the user gaze signals indicating a gaze point on the information presentation area; receive, from the input device, first user gesture data corresponding to a press of a finger on the physical touch sensitive area; generate a gaze point area on the information presentation area based on the user gaze data signals and on the first user gesture data, the gaze point area being a local area that includes the gaze point; receive, from the input device, second user gesture data corresponding to a movement of the finger on the physical touch sensitive area, the press of the finger followed next by the movement of the finger; determine, based on the gaze point area, that the second user gesture data is associated with a fine tuning command to update the gaze point area; resize, based on the fine tuning command, the gaze point area according to the second user gesture data to generate an updated gaze point area, the updated gaze point area including the object and overlapping with the gaze point area; receive, from the input device, third user gesture data indicating a release of the finger from the physical touch sensitive area, the movement of the finger followed next by the release of the finger; determine, based on the object being included in the updated gaze point area, that the third user gesture data is associated with a select command to select the object; and select, based on the select command, the object in the updated gaze point area.
6. A system for user interaction with an information presentation area, said system comprising: an input device adapted to detect user gestures, said input device comprising at least one physical touch sensitive area arranged on a steering device of a vehicle or adapted to be integrated in a steering device of a vehicle, said touchpad being adapted to detect user gestures in a form of user finger touch on, movement on and release from the physical touch sensitive area; a gaze tracking module adapted to detect gaze data of a user of said information presentation area; a control module configured to: acquire user gaze data signals from the gaze tracking module, the user gaze signals indicating a gaze point on the information presentation area; receive, from the input device, first user gesture data corresponding to a press of a finger on the physical touch sensitive area; generate a gaze point area on the information presentation area based on the user gaze data signals and on the first user gesture data, the gaze point area being a local area that includes the gaze point; receive, from the input device, second user gesture data corresponding to a movement of the finger on the physical touch sensitive area, the press of the finger followed next by the movement of the finger; determine, based on the gaze point area, that the second user gesture data is associated with a fine tuning command to update the gaze point area; resize, based on the fine tuning command, the gaze point area according to the second user gesture data to generate an updated gaze point area, the updated gaze point area including the object and overlapping with the gaze point area; receive, from the input device, third user gesture data indicating a release of the finger from the physical touch sensitive area, the movement of the finger followed next by the release of the finger; determine, based on the object being included in the updated gaze point area, that the third user gesture data is associated with a select command to select the object; and select, based on the select command, the object in the updated gaze point area.
7. A wireless transmit/receive unit, WTRU, associated with an information presentation area and comprising an input device comprising a touch sensitive area and being adapted to detect user gestures in a form of user finger touch on, movement on and release from the touch sensitive area, and a gaze tracking module adapted to detect gaze data of a viewer of said information presentation area, said WTRU further comprising a control module configured to: acquire user gaze data signals from the gaze tracking module, the user gaze signals indicating a gaze point on the information presentation area; receive, from the input device, first user gesture data corresponding to a press of a finger on the physical touch sensitive area; generate a gaze point area on the information presentation area based on the user gaze data signals and on the first user gesture data, the gaze point area being a local area that includes the gaze point; receive, from the input device, second user gesture data corresponding to a movement of the finger on the physical touch sensitive area, the press of the finger followed next by the movement of the finger; determine, based on the gaze point area, that the second user gesture data is associated with a fine tuning command to update the gaze point area; resize, based on the fine tuning command, the gaze point area according to the second user gesture data to generate an updated gaze point area, the updated gaze point area including the object and overlapping with the gaze point area; receive, from the input device, third user gesture data indicating a release of the finger from the physical touch sensitive area, the movement of the finger followed next by the release of the finger; determine, based on the object being included in the updated gaze point area, that the third user gesture data is associated with a select command to select the object; and select, based on the select command, the object in the updated gaze point area.
8. The control module of claim 1 further configured to: receive, from the input device, fourth user gesture data indicating a first finger press on the physical touch sensitive area; present, at the gaze point, an icon illustrating a location of the gaze point on the information presentation area; receive, from the input device, fifth user gesture data indicating a finger movement on the physical touch sensitive area, the first finger press followed next by the finger movement; determine, based on the icon, that the fifth gesture data is associated with a relocation command to relocate the gaze point to a new location on the information presentation area; relocate, based on the relocation command, the gaze point from the location to the new location; receive, based on the relocating, sixth user gesture data indicating a second finger press concurrent with the first finger press, the finger movement followed next by the second finger press; and manipulate the object based on the sixth user gesture data and the new location of the gaze point.
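The independent claims all recite the same three-step gesture sequence: a finger press generates a gaze point area around the gaze point, finger movement acts as a fine tuning command that resizes the area, and releasing the finger selects an object included in the updated area. The Python sketch below is a minimal, non-authoritative illustration of that sequence; the Rect helper, the area sizes and the grow-by-movement resize rule are assumptions made for the example and are not taken from the claims.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Axis-aligned rectangle used for the gaze point area and for objects."""
    x: float
    y: float
    w: float
    h: float

    def contains(self, other: "Rect") -> bool:
        # True if `other` lies entirely inside this rectangle.
        return (self.x <= other.x and self.y <= other.y and
                other.x + other.w <= self.x + self.w and
                other.y + other.h <= self.y + self.h)


class SelectionController:
    """Illustrative press -> move (fine tune) -> release selection flow."""

    def __init__(self, objects):
        self.objects = objects      # list of (name, Rect) on the presentation area
        self.gaze_area = None       # current gaze point area
        self.selected = None

    def on_press(self, gaze_point):
        # First gesture: generate a local gaze point area around the gaze point.
        gx, gy = gaze_point
        self.gaze_area = Rect(gx - 50, gy - 50, 100, 100)

    def on_move(self, dx, dy):
        # Second gesture: treat finger movement as a fine tuning command that
        # resizes the area (here it simply grows), so the updated area still
        # overlaps the original one.
        if self.gaze_area is not None:
            self.gaze_area = Rect(self.gaze_area.x, self.gaze_area.y,
                                  self.gaze_area.w + abs(dx),
                                  self.gaze_area.h + abs(dy))

    def on_release(self):
        # Third gesture: on release, select an object included in the updated area.
        if self.gaze_area is None:
            return None
        for name, rect in self.objects:
            if self.gaze_area.contains(rect):
                self.selected = name
                break
        self.gaze_area = None
        return self.selected


# Usage: an object near the gaze point is selected by press, slight move, release.
ctrl = SelectionController([("button_ok", Rect(120, 90, 40, 20))])
ctrl.on_press((100, 100))
ctrl.on_move(15, 5)
print(ctrl.on_release())   # -> "button_ok"
```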
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The drawings are not necessarily drawn to scale and illustrate generally, by way of example, but not by way of limitation, various embodiments of the present invention. Thus, exemplifying embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements. It should be noted that references to "an" or "one" embodiment in this discussion are not necessarily to the same embodiment, and such references mean at least one.
DETAILED DESCRIPTION OF THE INVENTION
(29) As used herein, the term module refers to an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software programs, a combinational logic circuit, or other suitable components that provide the described functionality. The term module further refers to a specific form of software necessary to practice the methods described herein and particularly the functions described in connection with each specific module. It is believed that the particular form of software will be determined primarily by the particular system architecture employed in the system and by the particular methodologies employed by the system according to the present invention.
(30) The following is a description of exemplifying embodiments in accordance with the present invention. This description is not to be taken in a limiting sense, but is made merely for the purpose of describing the general principles of the invention. It is to be understood that other embodiments may be utilized and structural and logical changes may be made without departing from the scope of the present invention.
(31) With reference first to
(32) In the context of the present invention, as mentioned above, the term touchpad (or the term trackpad) refers to a pointing device featuring a tactile sensor, a specialized surface that can translate the motion and position of a user's fingers to a relative position on a screen (information presentation area). Touchpads are a common feature of laptop computers, and are also used as a substitute for a mouse where desk space is scarce. Because they vary in size, they can also be found on personal digital assistants (PDAs) and some portable media players. Wireless touchpads are also available as detached accessories. Touchpads operate in one of several ways, including capacitive sensing and conductance sensing. The most common technology used today entails sensing the capacitive virtual ground effect of a finger, or the capacitance between sensors. While touchpads, like touchscreens, are able to sense absolute position, resolution is limited by their size. For common use as a pointing device, the dragging motion of a finger is translated into a finer, relative motion of the cursor on the screen, analogous to the handling of a mouse that is lifted and put back on a surface. Hardware buttons equivalent to a standard mouse's left and right buttons are positioned below, above, or beside the touchpad; netbooks sometimes employ the last arrangement to save space. Some touchpads and associated device driver software may interpret tapping the pad as a click, and a tap followed by a continuous pointing motion (a click-and-a-half) can indicate dragging. Tactile touchpads allow for clicking and dragging by incorporating button functionality into the surface of the touchpad itself. To select, one presses down on the touchpad instead of a physical button. To drag, instead of performing the click-and-a-half technique, one presses down while on the object, drags without releasing pressure and lets go when done. Touchpad drivers can also allow the use of multiple fingers to facilitate the other mouse buttons (commonly two-finger tapping for the center button). Some touchpads have hotspots, locations on the touchpad used for functionality beyond a mouse. For example, on certain touchpads, moving the finger along an edge of the touchpad will act as a scroll wheel, controlling the scrollbar and scrolling the window that has the focus vertically or horizontally. Apple uses two-finger dragging for scrolling on its trackpads. Also, some touchpad drivers support tap zones, regions where a tap will execute a function, for example, pausing a media player or launching an application. All of these functions are implemented in the touchpad device driver software, and can be disabled. Touchpads are primarily used in self-contained portable laptop computers and do not require a flat surface near the machine. The touchpad is close to the keyboard, and only very short finger movements are required to move the cursor across the display screen; while advantageous, this also makes it possible for a user's thumb to move the mouse cursor accidentally while typing. Touchpad functionality is available for desktop computers in keyboards with built-in touchpads.
(33) Examples of touchpads include one-dimensional touchpads used as the primary control interface for menu navigation on second-generation and later iPod Classic portable music players, where they are referred to as click wheels, since they only sense motion along one axis, which is wrapped around like a wheel. In another implementation, the second-generation Microsoft Zune product line (the Zune 80/120 and Zune 4/8) uses a touch-sensitive control called the Zune Pad. Apple's PowerBook 500 series was its first laptop to carry such a device, which Apple refers to as a trackpad. Apple's more recent laptops feature trackpads that can sense up to five fingers simultaneously, providing more options for input, such as the ability to bring up the context menu by tapping two fingers. In late 2008, Apple's revisions of the MacBook and MacBook Pro incorporated a tactile touchpad design with button functionality incorporated into the tracking surface.
(34) The present invention provides a solution enabling a user of a computer system without a traditional touchscreen to interact with graphical user interfaces in a touchscreen like manner using a combination of gaze based input and gesture based user commands. Furthermore, the present invention offers a solution for touchscreen like interaction using gaze input and gesture based input as a complement or an alternative to touchscreen interactions with a computer device having a touchscreen.
(35) The display 20 may hence be any type of known computer screen or monitor, as well as combinations of two or more separate displays. For example, the display 20 may constitute a regular computer screen, a stereoscopic screen, a heads-up display (HUD) in a vehicle, or at least one head-mounted display (HMD).
(36) The computer 30 may, for example, be any one from the group of a personal computer, computer workstation, mainframe computer, a processor in a vehicle, or a handheld device such as a cell phone, portable music player (e.g. an iPod), laptop computer, gaming device, electronic book reader or other similar device. The present invention may also be implemented in an intelligent environment where, for example, objects presented on multiple displays can be selected and activated.
(37) In order to produce the gaze tracking signal D.sub.EYE, a gaze tracker unit 40 is included in the display 20, or is associated with the display 20. A suitable gaze tracker is described in U.S. Pat. No. 7,572,008, titled "Method and Installation for Detecting and Following an Eye and the Gaze Direction Thereof," by the same applicant, which is hereby incorporated by reference in its entirety.
(38) The software program or software implemented instructions associated with the gaze tracking module 40 may be included within the gaze tracking module 40. The specific example shown in
(39) The computer system 10 comprises a computer device 30, a gaze tracking module 40, a display 20, a control module 36, 36 and user input means 50, 50 as shown in
(40) The user input means 50, 50 comprises elements that are sensitive to pressure, physical contact, gestures, or other manual control by the user, for example, a touchpad 51. Further, the user input means 50, 50 may also include a computer keyboard, a mouse, a trackball, or any other device, for example an IR sensor, voice activated input means, or a device for detecting body gestures or proximity based input. However, in the specific embodiments shown in
(41) An input module 32, which may be a software module included solely in a control module 36 or in the user input means 50 or as a module separate from the control module and the input means 50, is configured to receive signals from the touchpad 51 reflecting a user's gestures. Further, the input module 32 is also adapted to interpret the received signals and provide, based on the interpreted signals, gesture based control commands, for example, a tap command to activate an object, a swipe command or a slide command.
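As one illustration of how such an input module might turn raw touchpad signals into gesture based control commands, the sketch below classifies a finger stroke as a tap, swipe or slide command from its duration and travelled distance. The (t, x, y) sample format and the thresholds are assumptions made for the example, not values from the description.

```python
import math

def classify_gesture(samples, tap_max_s=0.2, tap_max_px=10, swipe_min_px_per_s=300):
    """Classify a finger stroke into a gesture based control command.

    `samples` is a list of (t, x, y) tuples from touch-down to touch-up.
    Thresholds are illustrative only.
    """
    (t0, x0, y0), (t1, x1, y1) = samples[0], samples[-1]
    duration = max(t1 - t0, 1e-6)
    distance = math.hypot(x1 - x0, y1 - y0)

    if duration <= tap_max_s and distance <= tap_max_px:
        return "tap"      # e.g. activate the object at the gaze point
    if distance / duration >= swipe_min_px_per_s:
        return "swipe"    # fast stroke
    return "slide"        # slower dragging movement


print(classify_gesture([(0.00, 5, 5), (0.10, 6, 6)]))    # tap
print(classify_gesture([(0.00, 0, 0), (0.15, 80, 0)]))   # swipe
print(classify_gesture([(0.00, 0, 0), (0.80, 60, 10)]))  # slide
```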
(42) If the input module 32 is included in the input means 50, gesture based control commands are provided to the control module 36, see
(43) The control module 36, 36 is further configured to acquire gaze data signals from the gaze tracking module 40. Further, the control module 36, 36 is configured to determine a gaze point area 120 on the information presentation area 20 where the user's gaze point is located based on the gaze data signals. The gaze point area 120 is preferably, as illustrated in
(44) Moreover, the control module 36, 36 is configured to execute at least one user action manipulating a view presented on the graphical information presentation area 20 based on the determined gaze point area and the at least one user generated gesture based control command, wherein the user action is executed with the determined gaze point area as a starting point. The control module 36, 36 may be integrated in the computer device 30 or may be associated or coupled to the computer device 30.
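A minimal sketch of this dispatch step is given below: a gesture based control command is executed with the determined gaze point area as the starting point of the action. The dictionary-based view state and the concrete effects are illustrative stand-ins, not the control module's actual behavior.

```python
def execute_action(command, gaze_area, view):
    """Execute a user action manipulating the presented view, using the
    determined gaze point area as the starting point of the action."""
    # Use the centre of the gaze point area as the starting point.
    cx = gaze_area["x"] + gaze_area["w"] / 2
    cy = gaze_area["y"] + gaze_area["h"] / 2

    if command == "tap":
        view["activated_at"] = (cx, cy)   # activate the object at the gaze point area
    elif command == "slide":
        view["drag_origin"] = (cx, cy)    # drag or slide the view from the gaze area
    elif command == "swipe":
        view["scroll_origin"] = (cx, cy)  # scroll the view, anchored at the gaze area
    return view


view = execute_action("tap", {"x": 300, "y": 200, "w": 100, "h": 100}, {})
print(view)   # {'activated_at': (350.0, 250.0)}
```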
(45) Hence, the present invention allows a user to interact with a computer device 30 in a touchscreen like manner, e.g. manipulate objects presented on the information presentation area 20, using gaze and gestures, e.g. by moving at least one finger on a touchpad 51.
(46) Preferably, when the user touches the touchpad 51, the location of the initial gaze point is indicated by a visual feedback, such as a crosshairs or similar sign. This initial location can be adjusted by moving the finger on the touchpad 51. Thereafter, the user can, in a touchscreen like manner, interact with the information presentation area 20 using different gestures and the gaze. In the embodiment including a touchpad, the gestures are finger movements relative to the touchpad 51 and each gesture is associated with or corresponds to a particular gesture based user command resulting in a user action.
(47) Below, a non-exhaustive set of examples of user actions that can be executed using a combination of gestures and gaze will be discussed with regard to
By pressing the finger harder on the touchpad, i.e. increasing the pressure of a finger touching the touchpad, a sliding mode can be initiated. For example, by gazing at an object, touching the touchpad, increasing the pressure on the touchpad and moving the finger or fingers over the touchpad, the object can be moved or dragged over the information presentation area. When the user removes the finger from the touchpad 51, the touchscreen like session is finished. The user may thereafter start a new touchscreen like session by gazing at the information presentation area 20 and placing the finger on the touchpad 51.
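The touchscreen like session described above can be pictured as a small state machine: touching the touchpad anchors the point of action at the gaze point, finger movement fine tunes it, a harder press enters the sliding (drag) mode, and lifting the finger ends the session. The sketch below is one such reading; the pressure threshold and the coordinate handling are assumptions for the example, not values from the description.

```python
class TouchLikeSession:
    """Sketch of a touchscreen like session driven by gaze and touchpad input."""

    HARD_PRESS = 0.6   # illustrative normalized pressure threshold for sliding mode

    def __init__(self):
        self.active = False
        self.sliding = False
        self.point = None   # current point of action (crosshair position)

    def on_touch_down(self, gaze_point):
        self.active = True
        self.point = gaze_point          # crosshair shown at the initial gaze point

    def on_finger_move(self, dx, dy, pressure):
        if not self.active:
            return
        x, y = self.point
        self.point = (x + dx, y + dy)    # fine tune the point of action
        if pressure >= self.HARD_PRESS:
            self.sliding = True          # harder press: drag the object under the point

    def on_touch_up(self):
        self.active = False              # lifting the finger ends the session
        self.sliding = False


session = TouchLikeSession()
session.on_touch_down((400, 300))
session.on_finger_move(10, -5, pressure=0.8)
print(session.point, session.sliding)    # (410, 295) True
session.on_touch_up()
```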
(48) As mentioned, the gesture and gaze initiated actions discussed above are only exemplary, and a large number of further gestures in combination with a gaze point, each resulting in an action, are conceivable. With appropriate input means many of these gestures can be detected on a touchpad, on a predefined area of a touch screen, in air without physically touching the input means, or by an input means worn on a finger or a hand of the user. Some further examples are described below (a sketch of a gaze-centered zoom follows the list):
- Selection of an object or object part can be made by gazing at that object or object part, pressing a finger (e.g. a thumb), fine tuning by moving the finger, and releasing the pressure applied by the finger to select that object or object part.
- Selection of an object or object part can be made by gazing at that object or object part, pressing a finger (e.g. a thumb), fine tuning by moving the finger, and using another finger (e.g. the other thumb) to tap for selecting that object or object part. In addition, a double tap may be used for a double click action and a quick downward movement may be used for a right click.
- By gazing at a zoomable object or object part presented on the information presentation area while moving a finger (e.g. one of the thumbs) in a circular motion, it is possible to zoom in or out of said object using the gaze point as the zoom center point, where a clockwise motion performs a zoom in command and a counterclockwise motion performs a zoom out command, or vice versa.
- By gazing at a zoomable object or object part presented on the information presentation area while holding one finger (e.g. one of the thumbs) still and moving another finger (e.g. the other thumb) upwards and downwards, it is possible to zoom in or out of said object using the gaze point as the zoom center point, where an upwards motion performs a zoom in command and a downwards motion performs a zoom out command, or vice versa.
- By gazing at a zoomable object or object part presented on the information presentation area while pressing hard on a pressure-sensitive touchpad with one finger (e.g. one of the thumbs), it is possible to zoom in or out on said object using the gaze point as the zoom center point, where each hard press toggles between different zoom levels.
- By gazing at a zoomable object or object part presented on the information presentation area while double-tapping on a touchpad with one finger (e.g. one of the thumbs), it is possible to zoom in or out of said object using the gaze point as the zoom center point, where each double-tap toggles between different zoom levels.
- By gazing at a zoomable object or object part presented on the information presentation area while sliding two fingers (e.g. the two thumbs) simultaneously in opposite horizontal directions, it is possible to zoom that object or object part.
- By gazing at a zoomable object and holding one finger (e.g. one thumb) still on the touchscreen while moving another finger (e.g. the other thumb) in a circular motion, it is possible to zoom that object or object part.
- By gazing at an object or object part presented on the information presentation area and holding a finger (e.g. one of the thumbs) still on the touchscreen while sliding another finger (e.g. the other thumb), it is possible to slide or drag the view presented by the information presentation area.
- By gazing at an object or object part presented on the information presentation area while tapping or double-tapping with a finger (e.g. one of the thumbs), an automatic panning function can be activated so that the presentation area is continuously slid from one of the edges of the screen towards the center while the gaze point is near the edge of the information presentation area, until a second user input is received.
- By gazing at an object or object part presented on the information presentation area while tapping or double-tapping with a finger (e.g. one of the thumbs), the presentation area is instantly slid according to the gaze point (e.g. the gaze point is used to indicate the center of where the information presentation area should be slid).
- By gazing at a rotatable object or object part presented on the information presentation area while sliding two fingers (e.g. the two thumbs) simultaneously in opposite vertical directions, it is possible to rotate that object or object part.
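Several of the examples above zoom with the gaze point as the zoom center point. A minimal sketch of such a gaze-centered zoom is given below; the view representation (top-left offset and scale) is an assumption for the example, and a detected clockwise or counterclockwise circular motion would simply supply a factor above or below 1.

```python
def zoom_about_gaze(view, gaze_point, factor):
    """Zoom the view in or out about the gaze point, keeping the content point
    currently under the gaze point fixed on screen. `view` holds the top-left
    offset and scale of the presented content; the field names are illustrative."""
    gx, gy = gaze_point
    # Content coordinates of the point currently under the gaze point.
    content_x = (gx - view["offset_x"]) / view["scale"]
    content_y = (gy - view["offset_y"]) / view["scale"]
    # Apply the new scale and re-anchor so that point stays under the gaze point.
    view["scale"] *= factor
    view["offset_x"] = gx - content_x * view["scale"]
    view["offset_y"] = gy - content_y * view["scale"]
    return view


view = {"offset_x": 0.0, "offset_y": 0.0, "scale": 1.0}
print(zoom_about_gaze(view, gaze_point=(500, 400), factor=1.5))
# {'offset_x': -250.0, 'offset_y': -200.0, 'scale': 1.5}
```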
(49) Before the two-finger gesture is performed, one of the fingers can be used to fine-tune the point of action. For example, a user feedback symbol like a virtual finger can be shown at the gaze point when the user touches the touchscreen. The first finger can be used to slide around to adjust the point of action relative to the original point. When the user touches the screen with the second finger, the point of action is fixed and the second finger is used for clicking on the point of action or for performing two-finger gestures like the rotate, drag and zoom examples above.
(50) In embodiments of the present invention, the touchscreen like session can be maintained even though the user has removed the finger or fingers from the touchpad if, for example, a specific or dedicated button or keyboard key is held down or pressed. Thereby, it is possible for the user to perform actions requiring multiple touches on the touchpad. For example, an object can be moved or dragged across the entire information presentation area by means of multiple dragging movements on the touchpad.
(51) With reference now to
(52) The present invention provides a solution enabling a user of a device 100 with a touchscreen 151 to interact with a graphical user interface using gaze as direct input and gesture based user commands as relative input. Thereby, it is possible, for example, to hold the device 100 with both hands and interact with a graphical user interface 180 presented on the touchscreen with gaze and the thumbs 161 and 162 as shown in
(53) In an alternative embodiment, one or more touchpads 168 can be arranged on the backside of the device 100, i.e. on the side of the device that the user normally does not look at during use. This embodiment is illustrated in
(54) The software program or software implemented instructions associated with the gaze tracking module 140 may be included within the gaze tracking module 140.
(55) The device 100 comprises a gaze tracking module 140, user input means 150 including the touchscreen 151 and an input module 132, and a control module 136 as shown in
(56) The input module 132, which may be a software module included solely in a control module or in the user input means 150, is configured to receive signals from the touchscreen 151 reflecting a user's gestures. Further, the input module 132 is also adapted to interpret the received signals and provide, based on the interpreted signals, gesture based control commands, for example, a tap command to activate an object, a swipe command or a slide command.
(57) The control module 136 is configured to acquire gaze data signals from the gaze tracking module 140 and gesture based control commands from the input module 132. Further, the control module 136 is configured to determine a gaze point area 180 on the information presentation area, i.e. the touchscreen 151, where the user's gaze point is located based on the gaze data signals. The gaze point area 180 is preferably, as illustrated in
(58) Moreover, the control module 136 is configured to execute at least one user action manipulating a view presented on the touchscreen 151 based on the determined gaze point area and the at least one user generated gesture based control command, wherein the user action is executed with the determined gaze point area as a starting point. All user actions described in the context of this application may also be executed with this embodiment of the present invention.
(59) In a possible further embodiment, when the user touches the touchscreen 151, the location of the initial gaze point is indicated by a visual feedback, such as a crosshairs or similar sign. This initial location can be adjusted by moving the finger on the touchscreen 151, for example, using a thumb 161 or 162. Thereafter, the user can interact with the touchscreen 151 using different gestures and the gaze, where the gaze is the direct indicator of the user's interest and the gestures are relative to the touchscreen 151. In the embodiment including a touchscreen, the gestures are finger movements relative to the touchscreen 151 and each gesture is associated with or corresponds to a particular gesture based user command resulting in a user action.
(60) With reference now to
(61) According to an embodiment of the present invention shown in
(62) Further, the input module 232 is configured to determine at least one user generated gesture based control command based on the input signal. For this purpose, the input module 232 further comprises a gesture determining module 220 communicating with the data acquisition module 210. The gesture determining module 220 may also communicate with the gaze data analyzing module 240. The gesture determining module 220 may be configured to check whether the input signal corresponds to a predefined or predetermined relative gesture and optionally use gaze input signals to interpret the input signal. For example, the control module 200 may comprise a gesture storage unit (not shown) storing a library or list of predefined gestures, each predefined gesture corresponding to a specific input signal. Thus, the gesture determining module 220 is adapted to interpret the received signals and provide, based on the interpreted signals, gesture based control commands, for example, a tap command to activate an object, a swipe command or a slide command.
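As an illustration of the gesture determining module checking an input signal against a library of predefined gestures, the sketch below keys a small gesture library on a coarse signature of the stroke. The signature encoding, thresholds and command names are invented for the example and are not the gesture storage unit's actual format.

```python
# Stand-in for the gesture storage unit: each predefined gesture is keyed by a
# coarse signature derived from the input signal.
GESTURE_LIBRARY = {
    ("short", "still"): "tap",
    ("short", "moving"): "swipe",
    ("long", "moving"): "slide",
}

def determine_gesture(duration_s, path_px):
    """Map an input signal (stroke duration and path length) to a predefined
    gesture based control command, or None if nothing in the library matches."""
    signature = (
        "short" if duration_s < 0.3 else "long",
        "moving" if path_px > 10 else "still",
    )
    return GESTURE_LIBRARY.get(signature)


print(determine_gesture(0.1, 2))     # tap
print(determine_gesture(0.2, 120))   # swipe
print(determine_gesture(0.9, 120))   # slide
```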
(63) A gaze data analyzing module 240 is configured to determine a gaze point area on the information presentation area 201 including the user's gaze point based on at least the gaze data signals from the gaze tracking module 235. The information presentation area 201 may be a display of any type of known computer screen or monitor, as well as combinations of two or more separate displays, which will depend on the specific device or system in which the control module is implemented. For example, the display 201 may constitute a regular computer screen, a stereoscopic screen, a heads-up display (HUD) in a vehicle, or at least one head-mounted display (HMD). Then, a processing module 250 may be configured to execute at least one user action manipulating a view presented on the information presentation area 201 based on the determined gaze point area and at least one user generated gesture based control command, wherein the user action is executed with the determined gaze point area as a starting point. Hence, the user is able to control a device or system at least partly based on an eye-tracking signal which describes the user's point of regard (x, y) on the information presentation area or display 201, and based on user generated gestures, i.e. a detected movement of at least one body part of the user, generating gesture based control commands via user input means 205 such as a touchpad.
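One way such a gaze data analyzing module could derive a gaze point area from the gaze data signals is to smooth recent gaze samples and place a fixed-size area around the smoothed point, as sketched below. The window length and area size are illustrative choices only, not values from the description.

```python
from collections import deque

class GazeAreaEstimator:
    """Sketch of a gaze data analyzing step: average recent gaze samples and
    return a gaze point area centred on the smoothed gaze point."""

    def __init__(self, window=10, area_size=120):
        self.samples = deque(maxlen=window)   # most recent (x, y) gaze samples
        self.area_size = area_size            # side length of the gaze point area

    def add_sample(self, x, y):
        self.samples.append((x, y))

    def gaze_point_area(self):
        if not self.samples:
            return None
        cx = sum(x for x, _ in self.samples) / len(self.samples)
        cy = sum(y for _, y in self.samples) / len(self.samples)
        half = self.area_size / 2
        return (cx - half, cy - half, self.area_size, self.area_size)


est = GazeAreaEstimator()
for x, y in [(502, 399), (498, 401), (505, 397)]:
    est.add_sample(x, y)
print(est.gaze_point_area())   # area centred near (501.7, 399.0)
```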
(64) According to another embodiment a control module according to the present invention shown in
(65) With reference to
(66) With reference to
(67) With reference to
(68) The computer device or handheld device 400a is connectable to an information presentation area 401a (e.g. an external display or a heads-up display (HUD), or at least one head-mounted display (HMD)), as shown in
(69) With reference now to
(70) With reference first to
(71) With reference to
(72) With reference to
(73) With reference to
(74) With reference to
(75) With reference to
(76) With reference to
(77) Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.