Automated Activation of a Vision Support System

20190361533 · 2019-11-28

    Abstract

    A method for the automated activation of a vision support system of a vehicle, in particular a motor vehicle, improves automatic activation in particular with regard to the prevention of false-negative and false-positive activations. The method detects an activation gesture formed by a movement of the head and/or upper body of a vehicle user, in particular a driver; determines, on the basis of the detected activation gesture, a field of view desired by the vehicle user; and activates the part of the vision support system which images the desired field of view.

    Claims

    1. A method for automated activation of a vision support system of a vehicle, comprising the steps of: detecting an activation gesture formed by a movement of a head and/or upper body of a vehicle user; determining, on the basis of the detected activation gesture, a field of view desired by the vehicle user; and activating that part of the vision support system which images the desired field of view.

    2. The method according to claim 1, wherein the vehicle user is a vehicle driver.

    3. The method according to claim 1, wherein the movement forming the activation gesture, at least with regard to a movement direction, substantially corresponds to a movement of the head and/or upper body of the vehicle user which is suitable for observing the desired field of view in a manner not supported by the vision support system.

    4. The method according to claim 1, wherein the step of determining, on the basis of the detected activation gesture, the field of view desired by the vehicle user comprises: determining a pattern of the movement of the head and/or upper body forming the activation gesture; assigning the pattern to a comparison pattern stored beforehand in a database; and determining a field of view assigned to the comparison pattern in the database.

    5. The method according to claim 1, wherein the step of activating that part of the vision support system which images the desired field of view comprises: activating an image capture unit, which at least partly captures the desired field of view; and displaying the image captured by the image capture unit on a display unit of the vehicle.

    6. The method according to claim 5, wherein the image capture unit is a vehicle camera.

    7. The method according to claim 5, wherein the step of activating that part of the vision support system that images the desired field of view is carried out depending on an additional condition.

    8. The method according to claim 7, wherein the additional condition is a value of a vehicle state parameter.

    9. The method according to claim 1, wherein the activation gesture is formed by one of: a) a lateral rotation of the head and a forward directed movement of the upper body, wherein a rotation angle of the head is less than a first predetermined rotation angle, b) a lateral rotation of the head and/or of the upper body, wherein a rotation angle of the head is greater than a second predetermined rotation angle, wherein the first and second predetermined rotation angles are preferably identical, c) a movement of the head and/or of the upper body upward and in a direction of a rearview mirror, d) a movement of the head and/or of the upper body downward and in a direction of a windshield, and e) a lateral movement of the head and/or of the upper body.

    10. The method according to claim 7, wherein the additional condition comprises: a) an instantaneous speed below a first predetermined threshold value of the instantaneous speed, b) an active state of a direction indicator of the vehicle, c) a positive occupancy signal of a seat occupancy recognition system of the vehicle, d) an instantaneous speed below a second predetermined threshold value of the instantaneous speed, or e) an absolute value of a steering angle above a first predetermined threshold value of the steering angle.
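    The additional conditions enumerated in claim 10 can be illustrated as a simple predicate over the vehicle state. The following is a minimal sketch; all field names and threshold values are illustrative assumptions, since the claims deliberately leave the thresholds unspecified:

```python
from dataclasses import dataclass


@dataclass
class VehicleState:
    speed_kmh: float           # instantaneous speed
    indicator_active: bool     # direction indicator state (claim 10 b)
    seat_occupied: bool        # seat occupancy signal (claim 10 c)
    steering_angle_deg: float  # signed steering angle (claim 10 e)


# Illustrative threshold values; the claims do not fix these.
SPEED_LIMIT_KMH = 20.0
STEERING_LIMIT_DEG = 30.0


def additional_condition_met(state: VehicleState) -> bool:
    """Return True if at least one condition of claim 10 holds."""
    return (
        state.speed_kmh < SPEED_LIMIT_KMH
        or state.indicator_active
        or state.seat_occupied
        or abs(state.steering_angle_deg) > STEERING_LIMIT_DEG
    )
```

    In a real controller such a predicate would gate step 22 (activation of camera and display), so that, for example, the side view is only shown at low speed or with the turn signal active.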

    11. A vision support system for a vehicle, comprising: a detection unit for detecting an activation gesture formed by a movement of a head and/or upper body of a vehicle user; a determining unit for determining, on the basis of the detected activation gesture, a field of view desired by the vehicle user; an image capture unit for at least partly capturing the desired field of view; and a display unit for displaying the image captured by the image capture unit.

    12. The vision support system according to claim 11, wherein the vehicle user is a vehicle driver.

    13. The vision support system according to claim 11, wherein the image capture unit is a vehicle camera.

    14. The vision support system according to claim 11, wherein a control unit is operatively configured to execute processing for: detecting, via the detection unit, the activation gesture formed by a movement of a head and/or upper body of the vehicle user, determining, via the determining unit, on the basis of the detected activation gesture, the field of view desired by the vehicle user, and activating the image capture unit and the display unit to at least partly capture the desired field of view and display the image captured by the image capture unit.

    15. A vehicle comprising a vision support system according to claim 14.

    16. The vehicle according to claim 15, wherein the vehicle is a motor vehicle.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0063] FIG. 1 is a schematic illustration of one embodiment of the invention.

    [0064] FIG. 2 is a flow diagram of one embodiment of the method according to the invention.

    DETAILED DESCRIPTION OF THE DRAWINGS

    [0065] In the figures, identical reference signs identify identical features of the illustrated embodiments of the invention. It is pointed out that the illustrated figures and the associated description merely involve exemplary embodiments of the invention. In particular, illustrations of combinations of features in the figures and/or the description of the figures should not be interpreted to the effect that the invention necessarily requires the realization of all features mentioned. Other embodiments of the invention may contain fewer, more and/or other features. The scope of protection and the disclosure of the invention are evident from the accompanying patent claims and the complete description. Moreover, it is pointed out that the illustrations are basic illustrations of embodiments of the invention. The arrangement of the individual illustrated elements with respect to one another has been chosen merely by way of example and may be chosen differently in other embodiments of the invention. Furthermore, the illustration is not necessarily true to scale. Individual features illustrated may be illustrated in an enlarged or reduced manner for the purpose of better elucidation.

    [0066] FIG. 1 shows a schematic plan view of a motor vehicle 10 comprising a vision support system 1. An interior camera 14 is arranged in the vehicle 10 such that it captures the region of the head and of the upper body of a driver 2 of the vehicle 10. The vehicle 10 has two exterior cameras 15-l, 15-r, which are arranged respectively on the left and right in the fenders (not designated separately) of the vehicle 10. The cameras 15-l, 15-r respectively capture a field of view 16-l, 16-r laterally with respect to the vehicle 10, the limits of which field of view are indicated schematically by dashed lines in FIG. 1. Furthermore, the vehicle 10 has a head-up display 12 and a central display 13 arranged in a center console. The interior camera 14, the exterior cameras 15-l, 15-r and also the displays 12, 13 are connected to a control unit 11 of the vehicle 10 in each case via a data bus system 17.
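    The arrangement of FIG. 1 connects the interior camera 14, the exterior cameras 15-l, 15-r and the displays 12, 13 to the control unit 11 via the data bus 17. As a rough software analogue, the bus can be modeled as a publish/subscribe registry; this modeling choice and all names below are assumptions for illustration only:

```python
class DataBus:
    """Toy stand-in for the data bus 17: topics map to handler lists."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        # Register a handler (e.g. the control unit 11) for a topic.
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        # Deliver a message (e.g. an interior-camera frame) to all handlers.
        for handler in self._subscribers.get(topic, []):
            handler(message)
```

    The control unit would subscribe to the interior-camera topic and publish activation commands for the exterior cameras and displays on topics of its own.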

    [0067] Referring to FIG. 2, the sequence of the method will now be outlined on the basis of an exemplary traffic situation. The vehicle 10 is situated on an access road that joins a road at right angles. The intersection between the access road and the road is poorly visible on account of parked automobiles.

    [0068] The driver 2 of the vehicle 10 cautiously drives the vehicle 10 to the edge of the road, where the vehicle initially comes to a standstill. Before the driver 2 turns onto the road, he/she would like to see the cross traffic. For this purpose, the driver bends his/her upper body forward and turns his/her head toward the left in order to be able to see road users coming from there.

    [0069] The movements of the head and of the upper body of the driver 2 are captured by the interior camera 14. The captured image data are continuously transmitted via the data bus 17 to the control unit 11 and are evaluated there. The activation gesture formed by the movement of the head and of the upper body is detected in this way in step 20.
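    A greatly simplified sketch of the detection in step 20 follows: the lean-and-turn gesture from the example (upper body forward, head rotated to one side) is recognized from a stream of head/torso pose samples. The pose representation and the threshold angles are illustrative assumptions, not taken from the disclosure:

```python
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class PoseSample:
    head_yaw_deg: float     # positive = head turned to the left
    torso_pitch_deg: float  # positive = upper body leaning forward


# Illustrative thresholds for recognizing the gesture.
YAW_THRESHOLD_DEG = 25.0
LEAN_THRESHOLD_DEG = 10.0


def detect_activation_gesture(samples: Sequence[PoseSample]) -> Optional[str]:
    """Step 20 (sketch): return 'left' or 'right' if a forward lean
    combined with a sufficient head rotation is observed, else None."""
    for s in samples:
        if s.torso_pitch_deg > LEAN_THRESHOLD_DEG:
            if s.head_yaw_deg > YAW_THRESHOLD_DEG:
                return "left"
            if s.head_yaw_deg < -YAW_THRESHOLD_DEG:
                return "right"
    return None
```

    A production system would instead run a trained pose-estimation and gesture-classification pipeline on the camera frames; the sketch only shows the control flow from pose samples to a detected gesture.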

    [0070] In step 21-1, the control unit 11 evaluates the movement using algorithms for pattern classification and thus determines a pattern of the movement forming the activation gesture.

    [0071] In step 21-2, the control unit 11 searches a database having comparison patterns stored beforehand and assigns the previously determined pattern to one of the comparison patterns.

    [0072] In step 21-3, a field of view assigned to the comparison pattern in the database is determined. If the side view system of the vehicle 10 is configured such that both fields of view 16-l and 16-r on the left and right of the vehicle are displayed simultaneously, then these fields of view 16-l, 16-r can be assigned to the comparison pattern in the database as a joint field of view. By contrast, if the two sides can be displayed separately, then two separate entries may be present in the database. The comparison patterns of these entries then differ in the direction of rotation of the head, and exclusively the corresponding field of view 16-l (direction of rotation left) or 16-r (direction of rotation right) is assigned to each entry.

    [0073] In the present example, in step 21-1, the direction of rotation of the head toward the left is also determined as part of the pattern. The field of view 16-l assigned to the pattern is thus determined in step 21-3.
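    Steps 21-1 to 21-3 can be sketched as a nearest-neighbor match of the observed movement pattern against the stored comparison patterns, followed by the database lookup of the assigned field of view. The feature representation (mean head yaw, mean torso pitch) and all values are illustrative assumptions:

```python
import math

# Stored comparison patterns as short feature vectors
# (mean head yaw in degrees, mean torso pitch in degrees).
COMPARISON_PATTERNS = {
    "lean_forward_turn_left":  (30.0, 15.0),
    "lean_forward_turn_right": (-30.0, 15.0),
}

# Separate database entries per direction of rotation, each assigned
# exclusively the corresponding field of view (cf. paragraph [0072]).
FIELD_OF_VIEW_DB = {
    "lean_forward_turn_left":  "16-l",
    "lean_forward_turn_right": "16-r",
}


def desired_field_of_view(features):
    """Steps 21-2/21-3 (sketch): assign the determined movement pattern
    to the nearest stored comparison pattern, then look up the field of
    view assigned to that comparison pattern."""
    best = min(
        COMPARISON_PATTERNS,
        key=lambda name: math.dist(features, COMPARISON_PATTERNS[name]),
    )
    return FIELD_OF_VIEW_DB[best]
```

    With the direction of rotation encoded in the sign of the yaw feature, the left-turn pattern of the example resolves to the field of view 16-l.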

    [0074] In step 22-1, the vehicle camera 15-l that captures the desired field of view 16-l is activated. Finally, in step 22-2, the image of the field of view 16-l as captured by the camera 15-l is displayed on the head-up display 12 and/or on the central display 13.
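    Steps 22-1 and 22-2 amount to selecting and activating the camera that covers the desired field of view and routing its image to a display. A minimal sketch, with all class and method names chosen here for illustration:

```python
class Camera:
    """Stand-in for an exterior camera such as 15-l or 15-r."""

    def __init__(self, name, field_of_view):
        self.name = name
        self.field_of_view = field_of_view
        self.active = False

    def activate(self):
        self.active = True

    def capture(self):
        return f"image of {self.field_of_view}"


class Display:
    """Stand-in for the head-up display 12 or central display 13."""

    def __init__(self):
        self.shown = None

    def show(self, image):
        self.shown = image


def activate_view(desired_fov, cameras, display):
    """Steps 22-1/22-2 (sketch): activate the camera covering the
    desired field of view and show its image on the display."""
    for cam in cameras:
        if cam.field_of_view == desired_fov:
            cam.activate()
            display.show(cam.capture())
            return cam
    return None
```

    In the example, the desired field of view 16-l selects camera 15-l, and its image is routed to the display.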

    [0075] The driver 2 has thus activated the vision support system 1 by an entirely intuitive action and can effortlessly see the desired field of view 16-l with the aid of said vision support system.

    LIST OF REFERENCE SIGNS

    [0076] 1 Vision support system
    [0077] 2 Vehicle driver
    [0078] 10 Motor vehicle
    [0079] 11 Control unit
    [0080] 12 Head-up display
    [0081] 13 Central display
    [0082] 14 Interior camera
    [0083] 15 Exterior camera
    [0084] 16 Field of view
    [0085] 17 Data bus
    [0086] 20-25 Method steps

    [0087] The foregoing disclosure has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the appended claims and equivalents thereof.