Method, device, and system for influencing at least one driver assistance system of a motor vehicle
11708079 · 2023-07-25
Assignee
Inventors
CPC classification
B60W50/02
PERFORMING OPERATIONS; TRANSPORTING
B60W2050/0075
PERFORMING OPERATIONS; TRANSPORTING
G06V20/597
PHYSICS
B60W2540/229
PERFORMING OPERATIONS; TRANSPORTING
B60W2040/0818
PERFORMING OPERATIONS; TRANSPORTING
B60W40/08
PERFORMING OPERATIONS; TRANSPORTING
B60W2050/0297
PERFORMING OPERATIONS; TRANSPORTING
B60W50/038
PERFORMING OPERATIONS; TRANSPORTING
B60W2050/0062
PERFORMING OPERATIONS; TRANSPORTING
B60K28/066
PERFORMING OPERATIONS; TRANSPORTING
B60W50/029
PERFORMING OPERATIONS; TRANSPORTING
International classification
G08B23/00
PHYSICS
B60W40/08
PERFORMING OPERATIONS; TRANSPORTING
Abstract
The present disclosure relates to a method for controlling at least one drive assistance system of a motor vehicle, a device for carrying out the steps of this method, and a system including such a device. The disclosure also relates to a motor vehicle including such a device or such a system.
Claims
1. A method for controlling a motor vehicle by a drive assistance system comprising: receiving, from a sensor, a signal value, wherein the sensor comprises at least one of a time-of-flight (TOF) sensor, an eye tracking sensor or a camera provided with its own lighting unit; determining, by the drive assistance system, two or more attention characteristics of a motor vehicle occupant based on the signal value; determining, by the drive assistance system, a movement pattern of the motor vehicle occupant from the two or more attention characteristics, wherein the movement pattern of said motor vehicle occupant is determined by a combination of a head position or a head angle and an upper body movement or rotation, wherein the upper body movement or rotation is a shoulder movement or rotation, and wherein the movement pattern corresponds to a first attention state; determining whether the first attention state corresponds to an inattentive state, wherein the inattentive state is evaluated by artificial intelligence from a plurality of stored images, wherein the plurality of stored images is at least one of simulated, learned, or self-learned images; and controlling the motor vehicle by the drive assistance system in a first way when the first attention state corresponds to the inattentive state.
2. The method according to claim 1, wherein the sensor is supported by at least one of a separate or distant lighting unit to actively illuminate an area in front of the sensor, and wherein the at least one of the separate or distant lighting unit emits electromagnetic radiation outside of a range visible to the motor vehicle occupant.
3. The method according to claim 1, wherein at least one of: an illumination of different areas in an interior of the motor vehicle depends on a condition of the motor vehicle occupant, the motor vehicle and an environment of the motor vehicle, or the illumination of different areas in the interior of the motor vehicle depends on a distance of the motor vehicle occupant to the sensor.
4. The method according to claim 2, wherein one or more of the sensor or the at least one of the separate or the distant lighting unit is attached to or integrated in at least one of a dashboard, a center console, a retractable or a movable center console, a windscreen, a roof, a headlining, a grab handle, an A-pillar, a B-pillar, a C-pillar, a door component, above a door, a dome-shaped housing in the region of the center of the motor vehicle on the roof or headlining, a display device, a vehicle occupant seat, a head portion of the vehicle occupant seat, a foot portion of the vehicle occupant seat, an armrest of the vehicle occupant seat, a restraint system for the vehicle occupant, a positioning mechanism, a trim or a mobile device.
5. The method according to claim 1, further comprising the step of: in response to determining that the attention state does not correspond to the first attention state, controlling the drive assistance system in a second way.
6. A device for controlling a drive assistance system of a motor vehicle, comprising a processor unit which is configured to carry out the steps of the method according to claim 1.
7. A system for controlling at least one drive assistance system of a motor vehicle, comprising: at least one device according to claim 6; at least one sensor; and at least one of a separate or a distant lighting unit.
8. The system according to claim 7, wherein the sensor further comprises at least one of a body tracking sensor or at least one surround view system.
9. A motor vehicle comprising at least one device according to claim 6.
10. A motor vehicle comprising at least one system according to claim 7.
11. A method for controlling a drive assistance system of a motor vehicle, comprising: receiving, from a sensor, at least two signal values representing two or more attention characteristics of a motor vehicle occupant, wherein the sensor comprises at least one of a time-of-flight (TOF) sensor, an eye tracking sensor or a camera provided with its own lighting unit; weighting the at least two signal values depending on the two or more attention characteristics; determining, based on weighting of the at least two signal values, an attention state of the motor vehicle occupant; determining whether the attention state of the motor vehicle occupant corresponds to a first attention state or a second attention state; in response to determining that the attention state corresponds to the first attention state, controlling the drive assistance system in a first way; and in response to determining that the attention state corresponds to the second attention state, controlling the drive assistance system in a second way, wherein the first attention state or the second attention state of said motor vehicle occupant is determined by a combination of a head position or angle and an upper body movement or rotation, and wherein the upper body movement or rotation is a shoulder movement or rotation.
12. The method of claim 11, wherein the second attention state of the motor vehicle occupant is also determined based on an eye tracking.
13. The method of claim 11, wherein the second way includes a brake assistance of the drive assistance system.
14. The method of claim 11, wherein at least one of the head position or angle or the upper body movement or rotation is determined based on a comparison with a plurality of stored images.
15. The method of claim 12, wherein at least one of the head angle position, the upper body angle position, or the eye tracking is determined based on a comparison with a plurality of stored images.
16. The method of claim 14, wherein the plurality of stored images is at least one of simulated, learned, or self-learned through artificial intelligence.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Further features and advantages of the disclosure result from the following description, in which preferred embodiments of the disclosure are explained by means of schematic drawings and diagrams.
DETAILED DESCRIPTION
(13) The rearview device 1 includes an eye-tracking sensor 3, which follows the eye movements of the vehicle occupant and thus makes it possible to determine the direction of vision of the vehicle occupant, especially if the direction of vision deviates from the direction of travel of the vehicle. In addition, the same eye-tracking sensor 3 makes it possible to determine the period of time during which the vehicle occupant's eyes are not participating in road traffic, in particular when the vehicle occupant's eyes are closed and/or the direction of vision of the vehicle occupant is in a direction different from the direction of travel of the vehicle. From the eye-tracking sensor 3 the signal values representing the two attention characteristics can be received accordingly.
(14) The sensor 3 has a detection range 5, within which it can track the eyes of a vehicle occupant. Preferably, not only the area of the driver's seat is covered by the detection range 5, but also at least the area of the center console. This means that even a motor vehicle driver who bends to the side still moves within the detection range 5 and can thus be reliably followed by the eye-tracking sensor 3.
(15) The rearview device 1 also includes a Time-Of-Flight (TOF) sensor 7 with an additional infrared illumination unit. The TOF sensor 7 can detect the objects located within a detection range 9. In the acquired image, the TOF sensor 7 can calculate a distance value for each pixel in relation to a reference point, for example the rearview device 1. The TOF sensor 7 can thus detect and evaluate the head and body posture of the vehicle occupant.
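The per-pixel distance calculation described above can be sketched as follows. This is a minimal illustration of the time-of-flight principle, not the patent's implementation; the function names and sample values are assumptions.

```python
# Sketch (assumption): converting a per-pixel round-trip time measured by a
# TOF sensor into a distance relative to a reference point such as the
# rearview device 1. The light travels to the object and back, so the
# measured path is divided by two.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_to_distance(round_trip_seconds):
    """Distance of the reflecting surface from the sensor in metres."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A tiny 2 x 2 "depth image": one round-trip time per pixel (illustrative).
times = [[4.0e-9, 4.2e-9],
         [5.0e-9, 5.2e-9]]
depth = [[tof_to_distance(t) for t in row] for row in times]
```

From such a depth map, the head and body posture of the occupant can be segmented and evaluated, since nearer and farther body parts produce distinct distance values per pixel.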
(16) This makes it possible to determine the length of time during which the motor vehicle occupant does not participate in road traffic because of his head position and/or during which the head position corresponds to a head position that is twisted, in particular sideways, downwards and/or upwards. It is also possible to determine the head position, in particular a head position in which the occupant is not participating in road traffic and/or a head position twisted sideways, downwards and/or upwards.
(17) The TOF sensor 7 also makes it possible to determine the length of time during which the motor vehicle occupant is not participating in road traffic due to his/her posture and/or during which the posture corresponds to a posture bent, especially sideways and/or downwards. It is also possible to determine the posture, in particular a posture in which the motor vehicle occupant does not participate in road traffic and/or a posture bent to the side and/or downward.
(18) From the TOF sensor 7, the signal values representing four attentional characteristics can be received accordingly.
(20) In a step 11 of the method, the six signal values representing attention characteristics (direction of gaze, duration of averted gaze, head posture, duration of averted head posture, body posture, duration of averted body posture) of the vehicle occupant are received from the eye-tracking sensor 3 and the TOF sensor 7.
(21) In a step 13 at least one attention state of the vehicle occupant is determined on the basis of the signal values.
(22) In a step 15 it is determined whether the attention state of the vehicle occupant corresponds to a first attention state, in particular inattention.
(23) In a step 17, in response to the determination that the attention state corresponds to the first attention state, the drive assistance system is controlled in a first way.
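Steps 11 to 17 can be sketched in code. This is an illustrative simplification under the assumption that each characteristic is reported as a binary value (1 = attentive, 0 = averted); the patent does not prescribe this particular decision rule.

```python
# Minimal sketch (not the claimed implementation) of steps 11-17: receive the
# six signal values, determine an attention state, test whether it is the
# first attention state (inattention), and control the assistance system.

def determine_attention_state(signals):
    """Step 13: derive an attention state from the signal values.
    Simplification: attentive only if every characteristic reports 1."""
    return "attentive" if all(v == 1 for v in signals.values()) else "inattentive"

def run_cycle(signals, control_drive_assistance):
    state = determine_attention_state(signals)   # step 13
    if state == "inattentive":                   # step 15: first state?
        control_drive_assistance("first way")    # step 17
    return state

actions = []
signals = {"gaze_direction": 1, "gaze_duration": 1,   # step 11: six values
           "head_posture": 0, "head_duration": 1,
           "body_posture": 1, "body_duration": 1}
state = run_cycle(signals, actions.append)
```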
(25) Accordingly, in this first situation in step 11, a signal value of one (1) is received from each of the eye-tracking sensor 3 and the TOF sensor 7 with respect to the direction of gaze, head position and body posture, corresponding to maximum attention. Correspondingly, a state of attention other than the first one is determined in steps 13 and 15. Thus, the drive assistance system is not controlled in the first way.
(27) Whether the first state of attention is determined in steps 13 and 15 depends on the signal value regarding the attentional characteristic concerning the duration of the averted gaze. Only when the motor vehicle occupant's gaze is averted from the road ahead for more than a short time is the first attentional state determined in step 13 and established in step 15, and the drive assistance system controlled in the first way in step 17. This is because in this case it can be assumed that the vehicle occupant is distracted.
(29) As in the first and second situation, the upper body 21 is in the correct position. However, in step 11, the eye-tracking sensor 3 receives a signal value of zero (0) with respect to the direction of gaze and the TOF sensor 7, due to the laterally twisted head 19, also receives a signal value of zero (0) with respect to the head posture. In steps 13 and 15, the first state of attention is determined on this basis. Then, in step 17, the drive assistance system is controlled in the first way. In this case, the brake assistant is activated and at least one triggering threshold for intervention by the brake assistant is reduced, since the third situation is classified as particularly critical.
(31) Due to the weighting of the individual signal values, the first state of attention is only determined in steps 13 and 15 once the head 19 and the upper body 21 have been turned away for a certain period of time. Before that, the first attention state is not determined and established.
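The duration-dependent weighting just described can be sketched as follows. The sampling period and threshold are illustrative assumptions; the patent leaves the concrete weighting open.

```python
# Sketch (assumption): weighting the signal values over time so that a briefly
# averted head or upper body does not trigger the first attention state, but a
# sustained rotation does.

def weighted_state(samples, threshold_s, sample_period_s=0.1):
    """samples: sequence of 0/1 attentiveness values over time.
    The first attention state is only established once the occupant has been
    inattentive (value 0) for at least threshold_s seconds in a row."""
    needed = int(threshold_s / sample_period_s)
    run = 0
    for v in samples:
        run = run + 1 if v == 0 else 0
        if run >= needed:
            return "first"   # inattention confirmed
    return "other"

# 0.3 s of inattention against a 1.0 s threshold: not yet the first state.
brief = [1, 1, 0, 0, 0, 1, 1]
# 1.2 s of sustained inattention: the first state is determined.
long_turn = [1] + [0] * 12
```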
(32) Although situations one to four of
(33) In monitoring a vehicle occupant, in particular the driver, with at least one TOF sensor, not only can inattentiveness of the driver be recognized, but also certain movement patterns that suggest what the driver is doing, or is about to do, alongside driving or perhaps even instead of driving, as illustrated with respect to
(34) Depending on the recognized movement pattern the inattentiveness can be determined qualitatively to optimize the selection of the appropriate drive assistance system(s) to be initiated.
(35) Typical movement patterns can be recognized through head angle position and upper body movement, as described above with reference to
(36) In an example, an image obtained of the upper body can be a single-channel image such that the image input is provided in greyscale. The head may be selected as a reference object and a rectangle may be provided around a detected upper body in an image. A factor for rescaling the image from one layer to another may be applied, for example with steps of a 10% reduction within upper and lower limits of object sizes. If the object size is larger than the upper limit or smaller than the lower limit, the object may be ignored by the model.
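The layered rescaling with 10% steps and the size limits can be sketched as follows. The concrete limits and helper names are assumptions for illustration only.

```python
# Illustrative sketch of the described preprocessing: an image pyramid with
# 10% reduction steps, and a filter that ignores detected objects outside
# upper and lower size limits. Limits are assumed, not taken from the patent.

def pyramid_scales(factor=0.9, min_scale=0.3):
    """Rescaling factors from one layer to the next: 1.0, 0.9, 0.81, ..."""
    s = 1.0
    scales = []
    while s >= min_scale:
        scales.append(round(s, 4))
        s *= factor
    return scales

def accept_object(width_px, lower=40, upper=400):
    """Ignore objects larger than the upper or smaller than the lower limit."""
    return lower <= width_px <= upper
```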
(37) After extracting the upper body from the image, the shoulder positions can be determined. Three points A, B and C may be placed in a plane E as shown in
(38) Turning to
(39) As shown in
(40) After determining the shoulder positions at points C and B, a straight line g→ is formed between said points. If the upper body is recognized, the shoulder angle γ between the vectors g→ and n→ is calculated, as well as the distances of the points C and B from a sensor, in particular a TOF camera, used for determining the attentional state of the driver.
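This geometry can be sketched in a few lines. The 2-D simplification, the sensor at the origin, and the reference vector n→ along the x-axis are assumptions for illustration.

```python
# Sketch (illustrative): the shoulder angle between the shoulder line g
# (from point C to point B) and a reference vector n, plus the distances of
# C and B from a TOF camera assumed to sit at the origin.
import math

def angle_deg(u, v):
    """Angle between two 2-D vectors in degrees, clamped for safety."""
    dot = u[0] * v[0] + u[1] * v[1]
    cos_a = dot / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def shoulder_geometry(C, B, n=(1.0, 0.0)):
    g = (B[0] - C[0], B[1] - C[1])   # shoulder line C -> B
    gamma = angle_deg(g, n)          # shoulder angle
    dist_C = math.hypot(*C)          # distance of C from the sensor
    dist_B = math.hypot(*B)          # distance of B from the sensor
    return gamma, dist_C, dist_B
```

When the shoulders are level with the reference direction, the angle is zero; a rotated upper body yields a non-zero γ and asymmetric distances.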
(41) Still further, face recognition can be performed. Even if only half of the face is visible to the TOF camera, using a well-known shape predictor for nose, mouth and eyes allows, for example, 68 points of the face to be extracted for evaluation purposes. For example, depending on the rotation of the head, points from the ear to the chin on the side seen by the TOF camera can be extracted and set in relation to the windshield to determine the rotation of the head.
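As a rough illustration of how landmark points can indicate head rotation: 68-point shape predictors (such as the one shipped with the dlib library) provide jaw, nose, mouth and eye coordinates, and the offset of the nose tip from the jaw midpoint gives a crude rotation measure. The specific landmark choice below is an assumption, not the patent's method.

```python
# Sketch (assumption): estimating the sign and rough magnitude of head
# rotation from facial landmark x-coordinates. A frontal face has the nose
# tip near the midpoint of the two jaw endpoints; rotation shifts it.

def head_rotation_ratio(jaw_left_x, jaw_right_x, nose_x):
    """Ratio of the nose offset from the jaw midpoint to half the jaw width.
    ~0: frontal; positive: turned toward one side; negative: the other."""
    mid = (jaw_left_x + jaw_right_x) / 2.0
    half_width = (jaw_right_x - jaw_left_x) / 2.0
    return (nose_x - mid) / half_width
```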
(42) Turning to
(43) For a shoulder rotation determination, for example to the right as indicated with arrow 1* in
(44) Each one of the arrows 1*, 2*, 3* and 4* also defines a rotational range for determining movement patterns, with different movement patterns being listed in the following table for left-hand-drive motor vehicles:
(45) TABLE

Opening glove box
Shoulder rotation angle: movement to the right until the straight line g through left shoulder/middle/right shoulder (points A, B, C) is tilted such that the right shoulder is more than 10° below the left shoulder.
Head rotation angle: between 20° and 60° in short frequencies of permanent movement to the right, until out of range 2*.
Additional eye tracking: movement to the right until out of range 1*.

Writing text message on cell phone
Shoulder rotation angle: between 0° and 10°.
Head rotation angle: rotation with y = ±10° to the left and to the right (2*/3*); head tilting downwards with x = −(10° to 60°) (4*); head tilting downwards with high frequency, alternating with x = −5° to +10°.
Additional eye tracking: movement downwards; high-frequency alternation with eye tracking 0° ± 5°.

Conversation with co-driver
Shoulder rotation angle: between 0° and 10°.
Head rotation angle: high frequency with y = 50° to 90°, or out of range 2*/3*.
Additional eye tracking: high frequency, out of range 2*/3*.

Looking at children on back seats
Shoulder rotation angle: between 30° and 90°.
Head rotation angle: up to 90° to the left or to the right.
Additional eye tracking: out of range 2*/3*, eyes no longer visible.
(46) The above table describes the determination of four typical movement patterns, namely opening the glove box, writing a text message on a cell phone, conversation with the co-driver, and looking at children on the back seats, for left-hand driving, with the respective parameters defined via a rotation and tilting of the head and the shoulders of a driver.
(48) Different movement patterns can be trained with the help of artificial intelligence to further improve a drive assistance system selection.
(49) Artificial intelligence is particularly useful when identical shoulder rotation angles and/or head rotation angles can be the result of different movements. For example, the respective angles might be identical for opening a glove box and putting a hand on the knee of a co-driver. In order to distinguish the two movements from each other, a plurality of stored images can be evaluated based on artificial intelligence.
(50) Further, the determination of a movement pattern can comprise a simulation of drivers of different sizes and the like.
(51) Accordingly, the information listed in the above table alone might not be sufficient to define an exact movement pattern, but it already allows an appropriate drive assistance to be selected. The fine tuning of the drive assistance can be achieved by making use of artificial intelligence with a sufficient amount of images of different movements of people of different sizes, and of simulations.
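One way to picture the disambiguation of movements with identical angles is a comparison against stored, labelled examples. The nearest-neighbour rule and the third "hand position" feature below are illustrative stand-ins for the artificial-intelligence evaluation, not the patent's method.

```python
# Sketch (assumption): two movements share the same shoulder/head angles
# (40 deg, 30 deg) but differ in an additional feature derived from the
# stored images - here an assumed normalized hand-height value. A minimal
# nearest-neighbour comparison picks the closest stored example.

def nearest_label(feature, stored):
    """stored: list of (feature_vector, label) pairs.
    Returns the label of the closest example by squared Euclidean distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(stored, key=lambda item: sq_dist(feature, item[0]))[1]

stored = [((40.0, 30.0, 0.9), "opening glove box"),
          ((40.0, 30.0, 0.2), "reaching toward co-driver")]
label = nearest_label((40.0, 30.0, 0.85), stored)
```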
(52) A light unit 102 suitable for use in a method, system and motor vehicle according to the disclosure is shown in
(53) The light unit 102 comprises the light sources 103′ to 103″″ shown in the exploded view in
(54) As indicated in the side view in
(55) Alternatively, at least one of the light sources 103″ and 103″″ can also be used to illuminate the area or a partial area outside the illumination range of the light source 103″″ in the far field F. By using several light sources, the required spatial illumination can be composed from their individual contributions.
(56) It goes without saying that light propagation always takes place in three-dimensional space and not, as shown here as an example, in a two-dimensional plane.
(57) Even if the arrangement of the light sources 103′ to 103″″ in
(59) In the motor vehicle 106, a TOF sensor 107 is integrated into a rearview device in the form of an interior mirror. However, the sensor 107 can also be placed in other positions, such as the dashboard, so the positioning in
(60) Several lighting units are arranged in the vehicle 106. In order to achieve the best possible illumination of a vehicle occupant 109 for the sensor 107, especially partially from the side, a lighting unit is integrated into the vehicle occupant seat 113. In addition or as an alternative, a lighting unit, as shown in
(61) Furthermore, at least one lighting unit can be arranged on the roof liner, for example in the roof liner area 115. This allows good illumination of the vehicle occupant 109 also from above. This positioning also makes it possible to illuminate the central area of the vehicle interior particularly well. Advantageously, the lighting unit is housed inside a dome-shaped housing, from where it can illuminate up to 360° in a vertical plane and up to 180° in a horizontal plane. This can be achieved with several permanently installed lighting units, or the installed lighting unit can be moved to change the direction of light propagation.
(62) Other suitable positions for lighting units are parts of the A-pillar, the B-pillar and/or the C-pillar, areas of door components such as doors, door frames, windows, window frames and/or corresponding covers, in particular paneling.
(63) The features disclosed in this description, the claims and the figures form the basis for the claimed disclosure, both individually and in any combination with each other, for the respective different embodiments.
REFERENCE SIGN LIST
(64) 1 rearview device 3 eye-tracking sensor 5 detection range 7 TOF sensor 9 detection range 11, 13, 15, 17 step 19 head 21 upper body 23 steering wheel 25 eyes 102 lighting unit 103, 103′, 103″, 103′″, 103″″ light source 104 optical system 105, 105′, 105″, 105′″, 105″″ optical element 106 motor vehicle 107 sensor 109 vehicle occupant 111, 111′, 111″ handle 113 vehicle occupant seat 115 roof liner area 190 head 210 upper body 211, 212 shoulder 250 eyes A, A′, B, B′ illumination boundary N near field F far field 1* shoulder rotation to the right 2* head rotation to the left with an angle +y° 3* head rotation to the right with an angle −y° 4* head tilting down with an angle x°