OPTIMIZE POWER CONSUMPTION OF DISPLAY AND PROJECTION DEVICES BY TRACING PASSENGER'S TRAJECTORY IN CAR CABIN
20210397247 · 2021-12-23
Inventors
CPC classification
G06F3/017
PHYSICS
G06F3/011
PHYSICS
G06F2203/0381
PHYSICS
G09G3/001
PHYSICS
G06V20/59
PHYSICS
G09G2320/0686
PHYSICS
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
G06F3/14
PHYSICS
G09G3/342
PHYSICS
International classification
G06F3/14
PHYSICS
Abstract
A method of operating one or more output means for a motor vehicle includes providing one or more sensors and the one or more output means, detecting, using the one or more sensors, an input based on at least one of a position and a movement of a passenger body part, and controlling power consumption of the one or more output means based on the detected input. The detecting includes detecting one or more inputs, the one or more inputs comprising at least one of a proximity of the body part relative to the one or more output means and a trajectory of the body part relative to the one or more output means.
Claims
1. A method of operating one or more output means for a motor vehicle, the method comprising: providing one or more sensors and the one or more output means; detecting, using the one or more sensors, an input based on at least one of a position and a movement of a passenger body part; and controlling a power consumption of the one or more output means based on the detected input, wherein the detecting comprises detecting one or more inputs, the one or more inputs comprising at least one of: i) a proximity of the body part relative to the one or more output means and ii) a trajectory of the body part relative to the one or more output means.
2. The method of claim 1, wherein the one or more inputs comprise at least one of an eye trajectory of the passenger, a hand trajectory of the passenger, and a gesture of the passenger.
3. The method of claim 1, wherein the one or more inputs comprise at least two inputs.
4. The method of claim 1, wherein the one or more sensors comprise at least one of an infrared sensor, a time-of-flight sensor, an eye tracker, a camera, a 3D camera, a ranging device, or a stereo camera.
5. The method of claim 1, wherein the one or more output means comprises at least one of a display and a projector, a display region and a projector region, a plurality of displays or projectors, a plurality of display regions, or a plurality of projection regions.
6. The method of claim 1, wherein the controlling of the power consumption comprises turning off or reducing power consumption of at least one of a plurality of displays, display regions, projectors or projection regions being comprised by the one or more output means.
7. The method of claim 6, wherein the one or more output means comprises at least one microphone for pure audio signals.
8. The method of claim 7, wherein the controlling of the power consumption comprises turning on or enhancing power consumption of the at least one microphone.
9. The method of claim 8, wherein the controlling of the power consumption comprises turning on or enhancing power consumption of the at least one microphone after turning off or reducing power consumption of the at least one display, display region, projector or projection region.
10. The method of claim 1, wherein the controlling of the power consumption is in response to detecting that the passenger is not looking at or not moving near the one or more output means or at least one region of the one or more output means.
11. The method of claim 1, wherein the controlling of the power consumption comprises at least one power consumption reducing step which comprises at least one of: reducing brightness of a backlight or reducing luminance of an entire area or region of the one or more displays or projectors; reducing sharpness and resolution of displayed information on an entire area or region of the one or more displays or projectors; at least one of pausing or freezing an image frame on an entire area or region of the one or more displays or projectors; or reducing a refresh rate of the one or more displays or projectors.
12. The method of claim 11, wherein at least two of the power consumption reducing steps are applied simultaneously.
13. The method of claim 1, wherein the one or more output means is adapted to provide at least one of an audio output and a video output.
14. The method of claim 1, wherein the body part of the passenger comprises at least one of a head, a face, a mouth, an eye, a nose, a neck, a torso, a shoulder, an arm, a hand, or a finger.
15. The method of claim 14, wherein the detecting comprises detecting at least one of: a head of the passenger, comprising at least one of a line of vision, an eye position, an iris position, a pupil position, a nose position, a posture of the head, a head position, a head orientation, or a facial expression of the passenger; a torso of the passenger, comprising at least one of a bodily posture, a body position, a body orientation, a shoulder position, or a shoulder orientation of the passenger; a hand or a finger of the passenger, comprising at least one of a gesture such as skimming past, approaching, moving away, splaying of fingers, bending of fingers, touching of fingers, or making a fist, a finger or hand position, or a finger or hand orientation; or a mouth of the passenger, comprising at least one of a movement of the lips, a noise, or a voice command.
16. The method of claim 1, wherein the one or more output means comprises at least one of displays, monitors, projectors, projection screens, head-up displays, flexible OLED displays, liquid crystal displays, light-transmitting fabrics, or light-transmitting films.
17. The method of claim 1, wherein the one or more output means are arranged in at least one of an instrument panel, a windshield, a headliner, a central console, a ceiling, or in a further interior trim part of the motor vehicle.
18. A method of operating one or more output means for a motor vehicle, the method comprising: providing one or more sensors and the one or more output means; detecting, using the one or more sensors, an input based on at least one of a position and a movement of a passenger body part; and controlling a power consumption of the one or more output means based on the detected input, wherein the detecting comprises detecting one or more inputs, the one or more inputs comprising at least one of: i) a proximity of the body part relative to the one or more output means and ii) a trajectory of the body part relative to the one or more output means.
19. The method of claim 18, wherein the one or more output means comprises at least one of: a display and a projector, a display region and a projector region, a plurality of displays or projectors, a plurality of display regions, or a plurality of projection regions.
20. The method of claim 18, wherein the one or more sensors comprise at least one of an infrared sensor, a time-of-flight sensor, an eye tracker, a camera, a 3D camera, a ranging device, or a stereo camera.
21. The method of claim 18, wherein the controlling of the power consumption comprises turning off or reducing power consumption of at least one of a plurality of displays, display regions, projectors or projection regions being comprised by the one or more output means.
22. The method of claim 21, wherein the one or more output means comprises at least one microphone for pure audio signals.
23. The method of claim 22, wherein the controlling of the power consumption comprises turning on or enhancing power consumption of the at least one microphone.
24. The method of claim 23, wherein the controlling of the power consumption comprises turning on or enhancing power consumption of the at least one microphone after turning off or reducing power consumption of the at least one display, display region, projector or projection region.
25. The method of claim 18, wherein the controlling of the power consumption is in response to detecting that the passenger is not looking at or not moving near the one or more output means or at least one region of the one or more output means.
26. The method of claim 18, wherein the controlling of the power consumption comprises at least one power consumption reducing step which comprises at least one of: reducing brightness of a backlight or reducing luminance of an entire area or region of the one or more displays or projectors; reducing sharpness and resolution of displayed information on an entire area or region of the one or more displays or projectors; at least one of pausing or freezing an image frame on an entire area or region of the one or more displays or projectors; or reducing a refresh rate of the one or more displays or projectors.
27. The method of claim 26, wherein at least two of the power consumption reducing steps are applied simultaneously.
28. The method of claim 18, wherein the body part of the passenger comprises at least one of a head, a face, a mouth, an eye, a nose, a neck, a torso, a shoulder, an arm, a hand, or a finger.
29. The method of claim 28, wherein the detecting comprises detecting at least one of: a head of the passenger, comprising at least one of a line of vision, an eye position, an iris position, a pupil position, a nose position, a posture of the head, a head position, a head orientation, or a facial expression of the passenger; a torso of the passenger, comprising at least one of a bodily posture, a body position, a body orientation, a shoulder position, or a shoulder orientation of the passenger; a hand or a finger of the passenger, comprising at least one of a gesture such as skimming past, approaching, moving away, splaying of fingers, bending of fingers, touching of fingers, or making a fist, a finger or hand position, or a finger or hand orientation; or a mouth of the passenger, comprising at least one of a movement of the lips, a noise, or a voice command.
30. The method of claim 18, wherein the one or more output means comprises at least one of displays, monitors, projectors, projection screens, head-up displays, flexible OLED displays, liquid crystal displays, light-transmitting fabrics, or light-transmitting films.
31. The method of claim 30, wherein the one or more output means are arranged in at least one of an instrument panel, a windshield, a headliner, a central console, a ceiling, or in a further interior trim part of the motor vehicle.
32. A motor vehicle, comprising: one or more sensors; one or more output means; a controller comprising electrical circuitry configured to perform a method, comprising: detecting, using the one or more sensors, an input based on at least one of a position and a movement of a passenger body part; and controlling a power consumption of the one or more output means based on the detected input, wherein the detecting comprises detecting one or more inputs, the one or more inputs comprising at least one of: i) a proximity of the body part relative to the one or more output means and ii) a trajectory of the body part relative to the one or more output means.
33. A motor vehicle, comprising: one or more sensors; one or more output means; a controller comprising electrical circuitry configured to perform a method, comprising: detecting, using the one or more sensors, an input based on at least one of a position and a movement of a passenger body part; and controlling a power consumption of the one or more output means based on the detected input, wherein the controlling of the power consumption comprises turning off or reducing power consumption of at least one of a plurality of displays, display regions, projectors or projection regions being comprised by the one or more output means.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0043] Preferred embodiments of the present invention are explained in greater detail below by way of example with reference to the drawings, in which
[0044]
[0045]
[0046]
[0047]
[0048]
[0049]
DETAILED DESCRIPTION
[0050] It is to be understood that the disclosure is not limited in its application to the details of construction and to the arrangements of the components set forth in the following description or illustrated in the drawings. The Figures and written description are provided to teach those skilled in the art to make and use the inventions for which patent protection is sought. The disclosure is capable of other embodiments and of being practiced and carried out in various ways. Those skilled in the art will appreciate that not all features of a commercial embodiment are shown for the sake of clarity and understanding.
[0051] In addition, it is to be understood that the phraseology and terminology employed herein are for the purpose of describing the present disclosure and should not be regarded as limiting. For example, the use of a singular term, such as, “a” is not intended as limiting of the number of items. Also, the use of relational terms, such as but not limited to, “top,” “bottom,” “left,” “right,” “upper,” “lower,” “down,” “up,” “side,” are used in the description for clarity in specific reference to the Figures and are not intended to limit the scope of the present disclosure. Further, it should be understood that any one of the features may be used separately or in combination with other features. Other systems, methods, features, and advantages will be or become apparent to those skilled in the art upon examination of the Figures and the description. The term “driver” is used throughout this disclosure but is not limited to a person who is operating or controlling the vehicle; it may refer to any vehicle occupant, person, passenger, or user inside the vehicle, or, in certain circumstances, a person who is outside the vehicle but controlling the vehicle or interested in movement of the vehicle. It is intended that all such additional systems, methods, features, and advantages be included within this description, and be within the scope of the present disclosure.
[0052] A motor vehicle which is designated in its entirety with 10 includes a first camera 12 which is arranged in an instrument panel 14, and a second camera 16 which is installed at the transition between a windshield 18 and the roof 20 of the motor vehicle 10. The camera 16 can, for example, also be integrated into an internal rearview mirror of the motor vehicle 10.
[0053] Furthermore, the motor vehicle 10 includes a plurality of display devices (not represented in
[0054] In order to make possible a non-contact control of the display devices, e.g. the position of a head 24 of a vehicle occupant 26, of the driver of the motor vehicle 10 in the example shown, is observed with the camera 16. The camera 16 detects, for example, both the eyes 28 of the vehicle occupant 26 and his entire head 24. The position of the eyes 28 can be monitored by image recognition of the eyes 28 as an entirety. However, a more precise analysis can also be performed, in which the position of the pupils or the iris of the eyes 28 is observed. In order to determine the position and orientation of the head 24, the camera 16 can observe parts of the head which are particularly easy to recognize such as, for example, the nose 30 of the vehicle occupant 26.
[0055] Furthermore, the further camera 12 in the instrument panel 14 records the position and movement of a hand 32 of the vehicle occupant 26.
[0056] The cameras 12, 16 are preferably depth image cameras, particularly preferably so-called time-of-flight cameras which also supply distance information for each pixel so that the image recognition becomes particularly precise.
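The per-pixel distance information supplied by such time-of-flight cameras lends itself to a simple proximity estimate. The sketch below, with hypothetical names and toy data, shows how the minimum depth inside a hand's bounding box could stand in for the hand's distance from an output device; it is an illustration, not the patented implementation.

```python
# Illustrative sketch only: estimate hand proximity from a time-of-flight
# depth map by taking the minimum depth value inside the hand's bounding
# box. All names, data, and thresholds are assumptions.

def hand_proximity(depth_map, bbox):
    """Minimum distance (in metres) within the hand's bounding box.

    depth_map: 2D list of per-pixel distances, as supplied by a
               time-of-flight camera.
    bbox:      (x0, y0, x1, y1) half-open bounding box of the hand.
    """
    x0, y0, x1, y1 = bbox
    return min(depth_map[y][x] for y in range(y0, y1) for x in range(x0, x1))

# Toy 3x3 depth map: the hand occupies the upper-right region.
depth = [[1.2, 1.2, 0.40],
         [1.2, 0.35, 0.40],
         [1.1, 1.1, 1.00]]
proximity = hand_proximity(depth, (1, 0, 3, 2))  # minimum depth in the box
```

A controller could then compare `proximity` against a threshold to decide whether the hand counts as "near" a display or projection device.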
[0057] The combination of the detection of the head position, eye position and gestures of the vehicle occupant 26 makes possible, as explained below with reference to
[0058]
[0059]
[0060] If the vehicle occupant 26 then wishes to carry out an operating action, he will first look, for example, at one of the two display areas 36, 38 which is to be the target of his operating action. He can thus, for example, look at the display area 36, and his line of vision is detected from his eye position by the camera 16. At the same time, it is checked whether the bodily posture of the vehicle occupant 26 coincides with his line of vision in order to verify the recognition of the display area 36 to be selected; to this end, the camera 16 also detects the body orientation. If the vehicle occupant 26 then executes a gesture with his hand 32 which can be detected by the camera 12, in the example shown a lateral wiping movement between the display areas 36, 38, the display element 40 selected by the look and verified by the body orientation is moved from the selected display area 36 to the other display area 38.
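The gaze-plus-posture verification and gesture handling described above can be sketched in a few lines. Everything here (names, data structures, the two-area layout) is a hypothetical illustration of the logic, not code from the patent:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DisplayArea:
    name: str
    elements: List[str]

def resolve_source(gaze_area: str, body_area: str) -> Optional[str]:
    """A display area counts as selected only when the line of vision
    and the body orientation point at the same area."""
    return gaze_area if gaze_area == body_area else None

def handle_swipe(areas, gaze_area, body_area, element):
    """On a lateral wiping gesture, move `element` from the verified
    source area to the other display area."""
    source = resolve_source(gaze_area, body_area)
    if source is None:
        return  # gaze not confirmed by posture: ignore the gesture
    target = next(name for name in areas if name != source)
    areas[source].elements.remove(element)
    areas[target].elements.append(element)

areas = {"36": DisplayArea("36", ["element_40"]),
         "38": DisplayArea("38", [])}
handle_swipe(areas, gaze_area="36", body_area="36", element="element_40")
```

The posture check acts as a second, independent confirmation of the gaze-based selection, so a stray glance without a matching body orientation does not move any element.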
[0061]
[0062] To this end, further non-contact inputs can also be detected and evaluated. Additional voice recognition, or recognition of the user's brain activity in the sense of a brain-machine interface, is conceivable, for example.
[0063]
[0064] In an example, the power consumption of the areas, devices, displays, or projection devices 50 could be reduced. Ways of reducing power consumption may include one or more of: reducing the brightness of a display backlight or reducing the luminance of a projector; reducing the sharpness and resolution of displayed information on the complete area of the display or projection; reducing the sharpness and resolution of displayed information on a partial area of the display or projection; pausing and/or freezing the image frame; and/or reducing the refresh rate of the display. The system may apply any of these power optimization methods alone or in combination to achieve the optimum power efficiency. As a result, not only is power use optimized but the lifetime of the display or projection device 50 is also increased.
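As a sketch of how such steps might be combined (all identifiers and parameter values are assumptions, not from the patent), each power-saving measure can be modelled as a transformation of the display settings, applied alone or chained:

```python
# Hypothetical model: each power-saving step maps display settings to
# reduced-power settings; steps may be applied alone or in combination.

def reduce_backlight(s, factor=0.5):
    """Dim the backlight by a fixed factor."""
    s = dict(s); s["brightness"] *= factor; return s

def reduce_refresh(s, rate_hz=30):
    """Cap the refresh rate at a lower frequency."""
    s = dict(s); s["refresh_hz"] = min(s["refresh_hz"], rate_hz); return s

def freeze_frame(s):
    """Pause/freeze the current image frame."""
    s = dict(s); s["frozen"] = True; return s

def apply_power_saving(settings, steps):
    """Apply any combination of power-saving steps in order."""
    for step in steps:
        settings = step(settings)
    return settings

saved = apply_power_saving(
    {"brightness": 1.0, "refresh_hz": 60, "frozen": False},
    [reduce_backlight, reduce_refresh],  # two steps applied together
)
```

Composing the steps as independent functions mirrors the document's point that the measures may be applied singly or simultaneously.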
[0065] In a preferred example, the one or more sensors 52 may detect more than one input and, in particular, may detect more than just the eye trajectory of a passenger 56. The one or more sensors 52 may also detect a gesture or other movement of the passenger 56, such as a facial gesture, a neck gesture, an arm gesture, a torso gesture, a finger gesture, or any other body-part gesture. The one or more sensors 52 may also detect how far away one or more body parts of a passenger 56 are from one or more displays or projection devices 50. Based on these detected gestures or movements, the system may apply any one or more of the power optimization methods described above. Further, in a preferred example, the power savings of a device with at least two output means for providing information, such as an audio output means (including, for example, microphones for pure audio signals, which require less power than video signals) as well as a video output means (including, for example, one or more displays or projections), may be selectively controlled depending on their respective power consumption. That is, based on the detection by the one or more sensors 52, as described throughout this application, the audio output of a display or projection device may remain unaffected while the power consumption of the video output is optimized (such as by reducing brightness or luminance, or by turning off the video output altogether). The opposite configuration is also possible; that is, the video output may remain unaffected while the power consumption of the audio output is optimized.
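The selective audio/video behaviour can be summarised in a short decision function; the identifiers below are hypothetical and only illustrate the rule described above:

```python
def control_outputs(passenger_looking: bool, hand_near: bool) -> dict:
    """Per-device power decision: keep the low-power audio path alive and
    power down the video output when the passenger neither looks at nor
    moves near the device; otherwise keep both outputs on."""
    if passenger_looking or hand_near:
        return {"video": "on", "audio": "on"}
    # No gaze and no approaching hand: audio stays on, video powers down.
    return {"video": "off", "audio": "on"}
```

The reverse policy mentioned in the paragraph (video unaffected, audio optimised) would simply swap the roles of the two keys.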
[0066]
[0067] The features of the invention disclosed in the above description, in the drawings and in the claims can be material, both individually and in any combination, for the realization of the invention in its various embodiments.
LIST OF REFERENCE NUMERALS
[0068] Motor vehicle 10
[0069] Camera 12
[0070] Instrument panel 14
[0071] Camera 16
[0072] Windshield 18
[0073] Roof 20
[0074] Headliner 22
[0075] Head 24
[0076] Vehicle occupant 26
[0077] Eye 28
[0078] Nose 30
[0079] Hand 32
[0080] Display device 34
[0081] Display area 36
[0082] Display area 38
[0083] Display element 40
[0084] Display element 42
[0085] Display element 44
[0086] Projection/display 50
[0087] Sensor(s) 52
[0088] Tracking area 54
[0089] Passenger 56
[0090] Steps 60, 62, 64, 66, 68, 70