METHOD OF REALIZING TOUCH FOR HEAD-UP DISPLAY
20220187600 · 2022-06-16
Inventors
CPC classification
G06F3/017 (PHYSICS)
B60K35/00 (PERFORMING OPERATIONS; TRANSPORTING)
G02B2027/0196 (PHYSICS)
International classification
Abstract
A head up display system for a motor vehicle includes a light field emitter emitting a light field that is reflected off of a windshield of the motor vehicle and that is visible to a human driver of the motor vehicle as a virtual image disposed outside of the windshield. The virtual image includes a plurality of graphical elements. A hand sensor detects a position of a hand of the human driver in space. An electronic processor is communicatively coupled to the light field emitter and to the hand sensor. The electronic processor receives a signal from the hand sensor indicative of the position of a hand of the human driver in space, and determines which one of the graphical elements in the virtual image is aligned with an eye location of the human driver and the detected position of the hand of the human driver in space.
Claims
1. A head up display system for a motor vehicle, the system comprising: a light field emitter configured to emit a light field that is reflected off of a windshield of the motor vehicle and that is visible to a human driver of the motor vehicle as a virtual image disposed outside of the windshield, the virtual image including a plurality of graphical elements; a hand sensor configured to detect a position of a hand of the human driver in space; and an electronic processor communicatively coupled to the light field emitter and to the hand sensor, the electronic processor being configured to: receive a signal from the hand sensor indicative of the position of a hand of the human driver in space; and determine which one of the graphical elements in the virtual image is aligned with an eye location of the human driver and the detected position of the hand of the human driver in space.
2. The system of claim 1 wherein the hand sensor is a light-based sensor.
3. The system of claim 1 wherein the hand sensor comprises a light sensor strip.
4. The system of claim 1 wherein the hand sensor is configured to detect a position of a hand of the human driver within a touch area disposed above a steering wheel of the motor vehicle.
5. The system of claim 1 wherein the electronic processor is configured to respond to the determination of one of the graphical elements in the virtual image being aligned with an eye location of the human driver and the detected position of the hand of the human driver in space by performing a function associated with the one graphical element.
6. The system of claim 1 further comprising an eye sensor communicatively coupled to the electronic processor and configured to detect the eye location of the human driver.
7. The system of claim 6 wherein the eye sensor comprises a driver monitoring system.
8. A head up display method for a motor vehicle, the method comprising: emitting a light field that is reflected off of a windshield of the motor vehicle and that is visible to a human driver of the motor vehicle as a virtual image disposed outside of the windshield, the virtual image including a plurality of graphical elements; detecting a position of a hand of the human driver in space; and determining which one of the graphical elements in the virtual image is aligned with an eye location of the human driver and the detected position of the hand of the human driver in space.
9. The method of claim 8 wherein the detecting of the position of the hand is performed by a light-based sensor.
10. The method of claim 8 wherein the detecting of the position of the hand is performed by a light sensor strip.
11. The method of claim 8 wherein the detecting step includes detecting a position of a hand of the human driver in space within a touch area disposed above a steering wheel of the motor vehicle.
12. The method of claim 8 further comprising performing a function associated with the one graphical element in response to the determining of the one of the graphical elements in the virtual image that is aligned with an eye location of the human driver and the detected position of the hand of the human driver in space.
13. The method of claim 8 further comprising using an eye sensor to detect the eye location of the human driver.
14. The method of claim 13 wherein the eye sensor is included in a driver monitoring system.
15. A head up display system for a motor vehicle, the system comprising: a light field emitter configured to emit a light field that is reflected off of a windshield of the motor vehicle and that is visible to a human driver of the motor vehicle as a virtual image disposed outside of the windshield, the virtual image including a plurality of graphical elements; a hand sensor configured to detect a position of a hand of the human driver in space; an eye sensor configured to detect a position of an eye of the human driver in space; and an electronic processor communicatively coupled to the light field emitter, the hand sensor and the eye sensor, the electronic processor being configured to: receive a first signal from the hand sensor indicative of the position of a hand of the human driver in space; receive a second signal from the eye sensor indicative of the position of an eye of the human driver in space; and determine which one of the graphical elements in the virtual image is aligned with the detected position of the eye of the human driver and the detected position of the hand of the human driver in space.
16. The system of claim 15 wherein the hand sensor is a light-based sensor.
17. The system of claim 15 wherein the hand sensor comprises a light sensor strip.
18. The system of claim 15 wherein the hand sensor is configured to detect a position of a hand of the human driver within a touch area disposed above a steering wheel of the motor vehicle.
19. The system of claim 15 wherein the electronic processor is configured to respond to the determination of one of the graphical elements in the virtual image being aligned with an eye location of the human driver and the detected position of the hand of the human driver in space by performing a function associated with the one graphical element.
20. The system of claim 15 wherein the eye sensor comprises a driver monitoring system.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The above-mentioned and other features and objects of this invention, and the manner of attaining them, will become more apparent and the invention itself will be better understood by reference to the following description of embodiments of the invention taken in conjunction with the accompanying drawings.
DETAILED DESCRIPTION
[0024] The embodiments hereinafter disclosed are not intended to be exhaustive or to limit the invention to the precise forms disclosed in the following description. Rather, the embodiments are chosen and described so that others skilled in the art may utilize their teachings.
[0028] With the possible touch area defined, one (typically a car manufacturer) can decide the actual touch area in 3D space. The touch area does not necessarily have to cover the entire possible touch area. For instance, the touch area can be just a small area that the user can easily reach, such as somewhere over the steering wheel. This is illustrated in the accompanying drawings.
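By way of illustration only, the following is a minimal sketch of one way such a manufacturer-chosen touch area could be represented and tested, assuming an axis-aligned box in a vehicle coordinate frame (x right, y up, z forward, in meters). The class name, coordinate conventions, and numeric bounds are hypothetical, not taken from this disclosure.

```python
# A minimal sketch, not the patented implementation: a touch area as an
# axis-aligned box in a hypothetical vehicle coordinate frame.
from dataclasses import dataclass

@dataclass
class TouchArea:
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float
    z_max: float

    def contains(self, point):
        """Return True if a detected 3D hand point lies inside the box."""
        x, y, z = point
        return (self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max
                and self.z_min <= z <= self.z_max)

# Example: a small, easily reached region just above the steering wheel.
touch_area = TouchArea(-0.20, 0.20, 0.75, 0.95, 0.45, 0.60)
print(touch_area.contains((0.05, 0.85, 0.50)))  # True: hand is inside the touch area
```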
[0031] As disclosed hereinabove, the eye location plays a crucial role in the touch experience for the HUD. Thus, eye tracking capability in the car to determine the eye location (preferably its 3D location in space) is needed to provide the most precise and least restricted touch experience to the HUD user.
[0033] In the cases where the eye location information is needed for the HUD touch experience, due to the unpredictable eye position over time, the decision about which of the graphic elements the user intended to touch must be calculated in real time based on the locations of the touchable graphic elements, the touched point(s) within the defined touch area, and the eye position. The best and most precise results can be obtained by determining all of those values in 3D (x, y, z) space. Since the VID, the virtual image display area, and the touch area are fixed/predefined and can be made available to the car system, it is possible to obtain 3D values for the touched point and the graphic elements. A driver monitoring system may determine the eye location in three-dimensional space. Whether determination of 3D values or 2D values of eye locations is called for may depend on the use case. For instance, if the touch experience of the HUD is limited to the determination of gestures (e.g., swipe, pinch, zoom, etc.) only, then only 2D locations of the touch point(s) may be called for.
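By way of illustration, the following is a minimal sketch of the gesture-only case just mentioned, classifying a swipe from a time-ordered sequence of 2D touch points within the touch area. The function name and distance threshold are hypothetical and would be tuned per vehicle.

```python
# A minimal sketch, assuming the 2D-only gesture use case described above.
def classify_swipe(points, min_dist=0.05):
    """points: [(x, y), ...] in meters; returns a swipe label or None."""
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]  # net horizontal motion
    dy = points[-1][1] - points[0][1]  # net vertical motion
    if abs(dx) < min_dist and abs(dy) < min_dist:
        return None  # movement too small to count as a swipe
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_up" if dy > 0 else "swipe_down"

print(classify_swipe([(0.00, 0.80), (0.04, 0.81), (0.09, 0.81)]))  # swipe_right
```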
[0036] As mentioned above, depending on the HUD touch use cases, not all of the inputs described above are needed, and some may be needed only as 2D rather than 3D values.
[0037] Disclosed herein is a method of enabling a touch experience for a HUD display and application wherein the graphic area (e.g., the virtual image in the case of a HUD) is not reachable by the user. The present invention may provide a method of defining a touchable area for a HUD application based on the locations of the eye box, the virtual image, and the windshield. The present invention may also provide a method of using a touch sensor that can enable touch for the defined touchable area of a HUD application (which is likely to have its touch area in the air), such as (but not limited to) a light-based touch sensor system. The present invention may further provide a method of using information about (1) the locations of the touchable graphic elements, (2) the touched point within the defined touch area, and (3) the eye position to make the touch decision, which enables the most precise touch experience without any touch use case limitation. However, the requirement of 3D input values for a proper HUD touch decision can be relaxed depending on the touch use cases, at the cost of reduced touch precision and more limited touch use cases and touch user experiences.
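By way of illustration only, the following is a minimal sketch of deriving the corners of a possible touch area at a chosen reach depth, by intersecting rays from the eye location through the virtual image corners with a plane within the driver's reach. The planar-image simplification, the coordinate frame, and all numeric values are assumptions for illustration, not the patented method.

```python
# A minimal sketch: the region where the driver's hand can line up with
# the virtual image, cut at a hypothetical reach depth z = reach_z.
import numpy as np

def possible_touch_corners(eye, image_corners, reach_z):
    """Intersect rays from the eye through each image corner with z = reach_z."""
    eye = np.asarray(eye, dtype=float)
    corners = []
    for c in np.asarray(image_corners, dtype=float):
        t = (reach_z - eye[2]) / (c[2] - eye[2])  # ray parameter at the plane
        corners.append(eye + t * (c - eye))
    return np.array(corners)

eye = (0.0, 1.2, 0.0)                                # driver eye location (m)
image = [(-0.5, 1.0, 10.0), (0.5, 1.0, 10.0),
         (0.5, 1.4, 10.0), (-0.5, 1.4, 10.0)]        # virtual image ~10 m ahead
print(possible_touch_corners(eye, image, reach_z=0.5))
```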
[0040] Next, in step 1020, a position of a hand of the human driver in space is detected. For example, hand sensor (e.g., a light sensor strip) 924 may detect the position of the hand of driver 914 in three-dimensional space.
[0041] In a final step 1030, it is determined which one of the graphical elements in the virtual image is aligned with an eye location of the human driver and the detected position of the hand of the human driver in space. For example, simple geometry calculations can be applied to decide which element(s) and/or point(s) in virtual image 710 are touched when 3D location information for (1) touchable graphic elements 718.sub.1, 718.sub.2, . . . , 718.sub.N, (2) touched point(s) 730, and (3) the eye position 714 is available. That is, it can be determined which one of the graphical elements 718.sub.1, 718.sub.2, . . . , 718.sub.N in the virtual image 710 is aligned with an eye location of the human driver, as ascertained by driver monitoring system 944, and the position of the hand of the human driver in space, as ascertained by hand sensor 924.
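By way of illustration, the following is a minimal sketch of such a geometry calculation, assuming a planar virtual image at a known distance: the ray from the eye location through the touched point is extended to the image plane, and the nearest touchable graphic element within a tolerance is selected. The element names, positions, and tolerance are hypothetical, not taken from the patent figures.

```python
# A minimal sketch of the alignment decision under a planar-image assumption.
import numpy as np

def touched_element(eye, touch_point, elements, image_z, tol=0.05):
    """Extend the eye->touch ray to the virtual image plane (z = image_z)
    and return the nearest graphic element within tol meters, if any."""
    eye = np.asarray(eye, dtype=float)
    touch = np.asarray(touch_point, dtype=float)
    t = (image_z - eye[2]) / (touch[2] - eye[2])  # ray parameter at the image plane
    hit = eye + t * (touch - eye)                 # point the driver lines up with
    best, best_d = None, tol
    for name, pos in elements.items():
        d = np.linalg.norm(hit[:2] - np.asarray(pos, dtype=float))
        if d < best_d:
            best, best_d = name, d
    return best

# Elements given as hypothetical (x, y) positions on the virtual image plane.
elements = {"volume": (-0.3, 1.1), "nav": (0.0, 1.2), "phone": (0.3, 1.1)}
print(touched_element((0.0, 1.2, 0.0), (0.0, 1.2, 0.5), elements, image_z=10.0))  # "nav"
```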
[0042] While this invention has been described as having an exemplary design, the present invention may be further modified within the spirit and scope of this disclosure. This application is therefore intended to cover any variations, uses, or adaptations of the invention using its general principles. Further, this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains.