Method, device, and system for influencing at least one driver assistance system of a motor vehicle

11708079 · 2023-07-25

Abstract

The present disclosure relates to a method for controlling at least one drive assistance system of a motor vehicle, a device for carrying out the steps of this method, and a system including such a device. The disclosure also relates to a motor vehicle including such a device or such a system.

Claims

1. A method for controlling a motor vehicle by a drive assistance system comprising: receiving, from a sensor, a signal value, wherein the sensor comprises at least one of a time-of-flight (TOF) sensor, an eye tracking sensor or a camera provided with its own lighting unit; determining, by the drive assistance system, two or more attention characteristics of a motor vehicle occupant based on the signal value; determining, by the drive assistance system, a movement pattern of the motor vehicle occupant from the two or more attention characteristics, wherein the movement pattern of said motor vehicle occupant is determined by a combination of a head position or a head angle and an upper body movement or rotation, wherein the upper body movement or rotation is a shoulder movement or rotation, and wherein the movement pattern corresponds to a first attention state; determining whether the first attention state corresponds to an inattentive state, wherein the inattentive state is evaluated by artificial intelligence from a plurality of stored images, wherein the plurality of stored images is at least one of simulated, learned, or self-learned images; and controlling the motor vehicle by the drive assistance system in a first way when the first attention state corresponds to the inattentive state.

2. The method according to claim 1, wherein the sensor is supported by at least one of a separate or distant lighting unit to actively illuminate an area in front of the sensor, and wherein the at least one of the separate or distant lighting unit emits electromagnetic radiation outside of a range visible to the motor vehicle occupant.

3. The method according to claim 1, wherein at least one of: an illumination of different areas in an interior of the motor vehicle depends on a condition of the motor vehicle occupant, the motor vehicle and an environment of the motor vehicle, or the illumination of different areas in the interior of the motor vehicle depends on the distance of the motor vehicle occupant to the sensor.

4. The method according to claim 2, wherein one or more of the sensor or the at least one of the separate or the distant lighting unit is attached to or integrated in at least one of a dashboard, a center console, a retractable or a movable center console, a windscreen, a roof, a headlining, a grab handle, an A-pillar, a B-pillar, a C-pillar, a door component, above a door, a dome-shaped housing in the region of the center of the motor vehicle on the roof or headlining, a display device, a vehicle occupant seat, a head portion of the vehicle occupant seat, a foot portion of the vehicle occupant seat, an armrest of the vehicle occupant seat, a restraint system for the vehicle occupant, a positioning mechanism, a trim or a mobile device.

5. The method according to claim 1, further comprising the step of: in response to determining that the attention state does not correspond to the first attention state, controlling the drive assistance system in a second way.

6. A device for controlling a drive assistance system of a motor vehicle, comprising a processor unit which is configured to carry out the steps of the method according to claim 1.

7. A system for controlling at least one drive assistance system of a motor vehicle, comprising: at least one device according to claim 6; at least one sensor; and at least one of a separate or a distant lighting unit.

8. The system according to claim 7, wherein the sensor further comprises at least one of a body tracking sensor or at least one surround view system.

9. A motor vehicle comprising at least one device according to claim 6.

10. A motor vehicle comprising at least one system according to claim 7.

11. A method for controlling a drive assistance system of a motor vehicle, comprising: receiving, from a sensor, at least two signal values representing two or more attention characteristics of a motor vehicle occupant, wherein the sensor comprises at least one of a time-of-flight (TOF) sensor, an eye tracking sensor or a camera provided with its own lighting unit; weighting the at least two signal values depending on the two or more attention characteristics; determining, based on weighting of the at least two signal values, an attention state of the motor vehicle occupant; determining whether the attention state of the motor vehicle occupant corresponds to a first attention state or a second attention state; in response to determining that the attention state corresponds to the first attention state, controlling the drive assistance system in a first way; and in response to determining that the attention state corresponds to the second attention state, controlling the drive assistance system in a second way, wherein the first attention state or the second attention state of said motor vehicle occupant is determined by a combination of a head position or angle and an upper body movement or rotation, and wherein the upper body movement or rotation is a shoulder movement or rotation.

12. The method of claim 11, wherein the second attention state of the motor vehicle occupant is also determined based on an eye tracking.

13. The method of claim 11, wherein the second way includes a brake assistance of the drive assistance system.

14. The method of claim 11, wherein at least one of the head position or angle or the upper body movement or rotation is determined based on a comparison with a plurality of stored images.

15. The method of claim 12, wherein at least one of the head angle position, the upper body angle position, or the eye tracking is determined based on a comparison with a plurality of stored images.

16. The method of claim 14, wherein the plurality of stored images is at least one of simulated, learned, or self-learned through artificial intelligence.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) Further features and advantages of the disclosure result from the following description, in which preferred embodiments of the disclosure are explained by means of schematic drawings and diagrams.

(2) FIG. 1 shows an arrangement of sensors within a rearview device;

(3) FIG. 2 shows a flow chart of a method according to a first aspect of the disclosure;

(4) FIG. 3 shows a first situation of a motor vehicle occupant;

(5) FIG. 4 shows a second situation of a motor vehicle occupant;

(6) FIG. 5 shows a third situation of a motor vehicle occupant;

(7) FIG. 6 shows a fourth situation of a motor vehicle occupant;

(8) FIG. 7 shows a shoulder plane of a motor vehicle occupant for illustrating the determination of a movement pattern;

(9) FIG. 8 shows a motor vehicle occupant with a head rotation as well as tilting and a shoulder rotation for illustrating the determination of a movement pattern;

(10) FIGS. 9a and 9b show an exploded view and a side view of an exemplary additional lighting unit, respectively; and

(11) FIG. 10 shows a schematic top view of a motor vehicle with additional lighting units.

DETAILED DESCRIPTION

(12) FIG. 1 shows a rearview device 1 in the form of a rearview mirror, arranged in the interior of an otherwise unspecified motor vehicle, from the perspective of a vehicle occupant, in particular the driver.

(13) The rearview device 1 includes an eye-tracking sensor 3, which follows the eye movements of the vehicle occupant and thus makes it possible to determine the direction of vision of the vehicle occupant, especially if the direction of vision deviates from the direction of travel of the vehicle. In addition, the same eye-tracking sensor 3 makes it possible to determine the period of time during which the vehicle occupant's eyes are not participating in road traffic, in particular when the vehicle occupant's eyes are closed and/or the direction of vision of the vehicle occupant is in a direction different from the direction of travel of the vehicle. Accordingly, the signal values representing these two attention characteristics can be received from the eye-tracking sensor 3.

(14) The sensor 3 has a detection range 5, within which it can track the eyes of a vehicle occupant. Preferably, not only the area of the driver's seat is covered by the detection range 5, but also at least the area of the center console. This means that even a motor vehicle driver who bends to the side still moves within the detection range 5 and can thus be reliably followed by the eye-tracking sensor 3.

(15) The rearview device 1 also includes a Time-Of-Flight (TOF) sensor 7 with an additional infrared illumination unit. The TOF sensor 7 can detect the objects located within a detection range 9. In the acquired image, the TOF sensor 7 can calculate a distance value for each pixel in relation to a reference point, for example the rearview device 1. The TOF sensor 7 can thus detect and evaluate the head and body posture of the vehicle occupant.
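To illustrate the per-pixel distance calculation, the following Python sketch (an illustration only; the disclosure does not specify an implementation, and the sample values are assumptions) converts a map of measured round-trip times into distances relative to a reference point such as the rearview device 1:

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance_map(round_trip_times_s: np.ndarray) -> np.ndarray:
    """Convert per-pixel round-trip times (seconds) into distances (meters).

    The emitted light travels to the object and back, so the one-way
    distance for each pixel is half the measured round-trip path.
    """
    return SPEED_OF_LIGHT * round_trip_times_s / 2.0

# Example: a 2x3 patch of round-trip times, given in nanoseconds
times_s = np.array([[4.0, 4.1, 4.2],
                    [4.3, 4.5, 4.6]]) * 1e-9
print(tof_distance_map(times_s))  # per-pixel distances of roughly 0.6-0.7 m
```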

(16) This makes it possible to determine the length of time during which the motor vehicle occupant does not participate in road traffic because of his head position and/or during which the head position corresponds to a head position that is twisted, in particular sideways, downwards and/or upwards. It is also possible to determine the head position, in particular a head position in which the occupant is not participating in road traffic and/or a head position twisted sideways, downwards and/or upwards.

(17) The TOF sensor 7 also makes it possible to determine the length of time during which the motor vehicle occupant is not participating in road traffic due to his/her posture and/or during which the posture corresponds to a posture bent, especially sideways and/or downwards. It is also possible to determine the posture, in particular a posture in which the motor vehicle occupant does not participate in road traffic and/or a posture bent to the side and/or downward.

(18) Accordingly, the signal values representing four attention characteristics can be received from the TOF sensor 7.

(19) FIG. 2 shows a flow chart comprising method steps according to the first aspect of the disclosure:

(20) In a step 11 of the method, the six signal values representing attention characteristics (direction of gaze, duration of averted gaze, head posture, duration of averted head posture, body posture, duration of averted body posture) of the vehicle occupant are received from the eye-tracking sensor 3 and the TOF sensor 7.

(21) In a step 13 at least one attention state of the vehicle occupant is determined on the basis of the signal values.

(22) In a step 15 it is determined whether the attention state of the vehicle occupant corresponds to a first attention state, in particular inattention.

(23) In a step 17, in response to the determination that the attention state corresponds to the first attention state, the drive assistance system is controlled in a first way.
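As a minimal sketch of steps 11 to 17 (not the patented implementation; the weights, the threshold, and the drive assistance interface are assumptions chosen for illustration), the six signal values could be combined as follows:

```python
# Signal values lie in [0, 1]: 1 = maximum attention, 0 = minimum attention.
# Order: gaze direction, duration of averted gaze, head posture,
# duration of averted head posture, body posture, duration of averted body posture.
WEIGHTS = (0.3, 0.2, 0.15, 0.1, 0.15, 0.1)  # assumed weighting
THRESHOLD = 0.5                             # assumed decision threshold

def determine_attention_state(signals):
    """Steps 13 and 15: derive the attention state from the signal values."""
    score = sum(w * s for w, s in zip(WEIGHTS, signals)) / sum(WEIGHTS)
    return "first" if score < THRESHOLD else "other"

def control_drive_assistance(system, signals):
    """Step 17: control the drive assistance system in a first way
    when the first attention state (inattention) is determined."""
    if determine_attention_state(signals) == "first":
        system.control_first_way()  # hypothetical interface

# Example with the six signal values received in step 11:
print(determine_attention_state((1.0, 1.0, 0.0, 0.0, 1.0, 1.0)))  # "other"
```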

(24) FIG. 3 shows a first situation of a vehicle occupant, in particular the driver, in the otherwise unspecified vehicle, including the rearview device 1 of FIG. 1. The motor vehicle occupant, comprising a head 19 and an upper body 21, is located behind the steering wheel 23 of the motor vehicle. The eyes 25 of the vehicle occupant look straight ahead in the direction of travel. Both the head 19 and the upper body 21 are in a correct posture, i.e. a posture in which the vehicle occupant, in particular the driver, can be expected to follow the traffic with maximum attention.

(25) Accordingly, in this first situation, a signal value of one (1) is received in step 11 from each of the eye-tracking sensor 3 and the TOF sensor 7 with respect to the direction of gaze, head position and body posture, corresponding to maximum attention. Correspondingly, a state of attention other than the first one is determined in steps 13 and 15. Thus, the drive assistance system is not controlled in the first way.

(26) FIG. 4 shows a second situation of the vehicle occupant. As in the first situation, both the head 19 and the upper body 21 are in a correct posture. However, in step 11, a signal value of zero (0) is received from the eye-tracking sensor 3 with respect to the direction of gaze, which represents minimal attention with respect to this attention characteristic. The reason for this is the vehicle occupant's gaze towards the glove compartment at the bottom right, from his or her point of view.

(27) Whether the first attention state is determined in steps 13 and 15 depends on the signal value for the attention characteristic concerning the duration of the averted gaze. Only when the motor vehicle occupant's gaze remains averted from the road ahead for more than a short time is the first attention state determined in step 13 and established in step 15, and the drive assistance system controlled in the first way in step 17. This is because in this case it can be assumed that the vehicle occupant is distracted.
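The duration criterion can be pictured as a simple dwell timer; the sketch below is illustrative, with the sampling interval and the "short time" threshold chosen as assumptions:

```python
class GazeAversionTimer:
    """Tracks how long the gaze signal has indicated an averted gaze."""

    def __init__(self, threshold_s: float = 2.0):  # assumed "short time"
        self.threshold_s = threshold_s
        self.averted_s = 0.0

    def update(self, gaze_signal: float, dt_s: float) -> bool:
        """Feed one sample; return True once the aversion has persisted."""
        if gaze_signal == 0.0:          # gaze averted from the road
            self.averted_s += dt_s
        else:                           # gaze returned to the road ahead
            self.averted_s = 0.0
        return self.averted_s >= self.threshold_s

# Sampling at 10 Hz: a brief glance does not trigger, a long aversion does.
timer = GazeAversionTimer()
distracted = any(timer.update(0.0, 0.1) for _ in range(25))  # 2.5 s averted
```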

(28) FIG. 5 shows a third situation of the vehicle occupant. This third situation represents the typical situation of a conversation between the vehicle occupant and other vehicle occupants, especially the co-driver.

(29) As in the first and second situations, the upper body 21 is in the correct position. However, in step 11, a signal value of zero (0) is received from the eye-tracking sensor 3 with respect to the direction of gaze and, due to the laterally twisted head 19, a signal value of zero (0) is also received from the TOF sensor 7 with respect to the head posture. In steps 13 and 15, the first attention state is determined on this basis. Then, in step 17, the drive assistance system is controlled in the first way. In this case, the brake assistant is activated and at least one triggering threshold for intervention by the brake assistant is reduced, since the third situation is classified as particularly critical.

(30) FIG. 6 shows a fourth situation of the vehicle occupant. This fourth situation represents the typical situation in which the vehicle occupant bends sideways towards the glove compartment. In step 11, a signal value of one (1) is received from the eye-tracking sensor 3 regarding the viewing direction, since the gaze is still directed at the roadway and maximum attention is therefore paid in this respect. However, the signal values received from the TOF sensor 7 are now zero (0) with respect to both the head position and the body position and represent minimum attention with respect to these attention characteristics.

(31) Due to the weighting of the individual signal values, the first attention state is only determined in steps 13 and 15 when the head 19 and the upper body 21 have been turned away for a certain period of time. Until then, the first attention state is neither determined nor established.

(32) Although situations one to four of FIGS. 3 to 6 have been described here with signal values of either zero (0) or one (1), the person skilled in the art understands that the signal can assume any value between the lower and upper limits. For example, if the upper body 21 is bent out of the correct driving posture, the signal value received from the TOF sensor 7 could change from one towards zero over time.
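One plausible way to obtain such a continuously varying signal value (a sketch; the linear mapping and the 60° limit are assumptions) is to map the measured posture deviation into the interval between the limits:

```python
def posture_signal(deviation_deg: float, max_deviation_deg: float = 60.0) -> float:
    """Map an upper-body deviation angle onto an attention signal in [0, 1].

    0 deg deviation (correct driving posture) yields 1.0 (maximum attention);
    deviations at or beyond max_deviation_deg yield 0.0 (minimum attention).
    """
    value = 1.0 - deviation_deg / max_deviation_deg
    return min(1.0, max(0.0, value))

# As the occupant slowly bends over, the signal drifts from one towards zero:
for angle in (0.0, 15.0, 30.0, 45.0, 60.0):
    print(angle, posture_signal(angle))  # 1.0, 0.75, 0.5, 0.25, 0.0
```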

(33) In monitoring a vehicle occupant, in particular the driver, with at least one TOF sensor, not only can inattentiveness of the driver be recognized, but also certain movement patterns that suggest what the driver is doing, or what he or she is about to do, simultaneously with driving or perhaps even instead of driving, as illustrated with respect to FIGS. 3 to 6.

(34) Depending on the recognized movement pattern, the inattentiveness can be determined qualitatively in order to optimize the selection of the appropriate drive assistance system(s) to be initiated.

(35) Typical movement patterns can be recognized through head angle position and upper body movement, as described above with reference to FIGS. 3 to 6. The determination of movement patterns may, however, be made in many different ways, for example using shoulder recognition and face recognition as described further in the following.

(36) In an example, an image obtained of the upper body can be a single-channel image, such that the image input is provided as a greyscale model. The head may be selected as a reference object, and a rectangle may be provided around a detected upper body in an image. A factor for rescaling the image from one layer to another may be applied, for example in steps of a 10% reduction within upper and lower limits of object sizes. If the object size is larger than the upper limit or smaller than the lower limit, the object may be ignored by the model.
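The rescaling across layers can be sketched as a simple image pyramid; the snippet below is illustrative, assuming OpenCV for resizing, with the pixel size limits chosen as assumptions:

```python
import cv2
import numpy as np

SCALE_STEP = 0.9     # 10% reduction per layer, as described above
MIN_OBJECT_PX = 40   # assumed lower object-size limit in pixels
MAX_OBJECT_PX = 400  # assumed upper object-size limit in pixels

def pyramid_layers(gray: np.ndarray):
    """Yield successively smaller greyscale layers for upper-body detection."""
    layer = gray
    while min(layer.shape[:2]) >= MIN_OBJECT_PX:
        yield layer
        h, w = layer.shape[:2]
        layer = cv2.resize(layer, (int(w * SCALE_STEP), int(h * SCALE_STEP)))

def object_size_in_range(width_px: int, height_px: int) -> bool:
    """Detections outside the size limits are ignored by the model."""
    return MIN_OBJECT_PX <= max(width_px, height_px) <= MAX_OBJECT_PX
```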

(37) After extracting the upper body from the image, shoulder positions can be determined. Three points A, B, C may be placed in a plane E, as shown in FIG. 7, or on a straight line g running from the left shoulder to the right shoulder, as shown in FIG. 8. Said three points move when the upper body is moved, in particular turned to the right or the left. Moved points can be presented differently, for example with different colors.

(38) Turning to FIG. 7, the shoulder recognition may be performed with the help of points C and B characterizing the two shoulders and point A being arranged in between. The three points A, B and C define the plane E, which is set at an angle with respect to the plane of a windshield (not shown).

(39) As shown in FIG. 7, a reference vector n, being a normal vector of the plane E and pointing in the correct viewing direction of the driver, is obtained from said three shoulder points A, B, C. In detail, a first vector AB may be calculated by subtracting the point A from the point B, and a second vector AC may be calculated by subtracting the point A from the point C, such that the vector n can be determined as the cross product of the first and second vectors. The vector n is therefore perpendicular to the first and second vectors and, thus, perpendicular to the plane E.

(40) After determining the shoulder positions at points C and B, a straight line g is formed between said points. If the upper body is recognized, the shoulder angle y between the line g and the reference vector n is calculated, as well as the distances of the points C and B from a sensor, in particular a TOF camera, used for determining the attention state of the driver.
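In code, the construction of FIG. 7 might look as follows; this is a numpy sketch in which the reference vector n is computed once from a calibration frame with correct posture, so that the angle between the current shoulder line g and n is 90° for an unrotated upper body and deviates as the shoulders turn (the coordinates are assumptions):

```python
import numpy as np

def plane_normal(A: np.ndarray, B: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Unit normal vector n of the plane E spanned by points A, B and C."""
    n = np.cross(B - A, C - A)  # cross product of the vectors AB and AC
    return n / np.linalg.norm(n)

def shoulder_angle_deg(B: np.ndarray, C: np.ndarray, n_ref: np.ndarray) -> float:
    """Angle y between the current shoulder line g and the reference vector n."""
    g = C - B
    g = g / np.linalg.norm(g)
    return float(np.degrees(np.arccos(np.clip(np.dot(g, n_ref), -1.0, 1.0))))

# Calibration frame with a correct posture (coordinates in meters, assumed):
A0 = np.array([0.0, 0.10, 0.0])   # point between the shoulders
B0 = np.array([-0.20, 0.0, 0.0])  # left shoulder
C0 = np.array([0.20, 0.0, 0.0])   # right shoulder
n_ref = plane_normal(A0, B0, C0)

print(shoulder_angle_deg(B0, C0, n_ref))  # 90.0 for the unrotated posture
```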

(41) Still further, a face recognition can be performed. Even if only half of the face is visible to the TOF camera, using a well-known shape predictor for nose, mouth and eyes allows, for example, 68 points of the face to be extracted for evaluation purposes. For example, depending on the rotation of the head, points from the ear to the chin on the side seen by the TOF camera can be extracted and set in relation to the windshield to determine the rotation of the head.
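One widely used predictor of this kind is dlib's pre-trained 68-point facial landmark model. The following sketch (assuming dlib and its model file are available; the yaw proxy is a simplification for illustration, not the patented method) extracts the 68 points and derives a rough head-rotation indicator from the jawline:

```python
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_landmarks(gray_image):
    """Return the 68 landmark points of the first detected face, or None."""
    faces = detector(gray_image)
    if not faces:
        return None
    shape = predictor(gray_image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]

def yaw_proxy(points) -> float:
    """Rough head-rotation indicator in [-1, 1].

    Compares the horizontal distance from the nose tip (point 30) to the
    two jawline ends (points 0 and 16); the ratio becomes asymmetric as
    the head rotates. A full pose estimate would set the ear-to-chin
    points in relation to the windshield, as described above.
    """
    nose, jaw_left, jaw_right = points[30], points[0], points[16]
    d_left = abs(nose[0] - jaw_left[0])
    d_right = abs(jaw_right[0] - nose[0])
    return (d_right - d_left) / max(d_right + d_left, 1)
```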

(42) FIG. 8 shows a head 190 with two eyes 250 as well as an upper body 210 with two shoulders 211, 212. The points A (middle), B (left shoulder) and C (right shoulder) are arranged on a straight line g. In addition, a vector can be obtained from points from the ear (not shown) to the chin (not shown), such that a normal vector thereto points vertically forward in the vehicle. Said normal vector can be designated as 0° y, with head rotations to the left or right being defined by +y° or −y°, respectively, and indicated by the arrows 2* and 3*. A pitch angle determination is performed relative to the same normal vector and allows head angle calculation for a head tilt up and down. This results in a +x° or −x° rotation, with a downward tilt being shown with the arrow 4* in FIG. 8.

(43) For a shoulder rotation determination, for example to the right as indicated with arrow 1* in FIG. 8, a straight line is drawn through the three points C, A, B in order to construct a normal vector (not shown) pointing to the front.

(44) Each of the arrows 1*, 2*, 3* and 4* also defines a rotational range for determining movement patterns, with different movement patterns being listed in the following table for left-hand-drive motor vehicles:

(45) TABLE

Movement pattern: Opening glove box
Shoulder rotation angle: Movement to the right; the straight line g through left shoulder/middle/right shoulder (points A, B, C) is tilted such that the right shoulder is more than 10° below the left shoulder
Head rotation angle: Head rotation between 20° and 60° in short frequencies of permanent movement to the right, until out of range 2*
Additional eye tracking: Movement to the right until out of range 1*

Movement pattern: Writing text message on cell phone
Shoulder rotation angle: Shoulder rotation between 0° and 10°
Head rotation angle: Head rotation with y = ±10° to the left and to the right 2*/3*; head tilting downwards with x = −(10°-60°) 4*; head tilting downwards with high frequency, alternating with x = −5° to +10°
Additional eye tracking: Movement downwards; high-frequency alternation with eye tracking 0° ± 5°

Movement pattern: Conversation with co-driver
Shoulder rotation angle: Shoulder rotation between 0° and 10°
Head rotation angle: Head rotation with high frequency with y = 50°-90°, or out of range 2*/3*
Additional eye tracking: High frequency with out of range 2*/3*

Movement pattern: Looking at children on back seats
Shoulder rotation angle: Shoulder rotation between 30° and 90°
Head rotation angle: Head rotation with up to 90° to the left or to the right
Additional eye tracking: Out of range 2*/3*, eyes no longer visible

(46) The above table describes the determination of four typical movement patterns, namely opening the glove box, writing a text message on a cell phone, conversation with the co-driver, and looking at children on the back seats, for left-hand driving, with the respective parameters defined via a rotation and tilting of the head and the shoulders of a driver.
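Read as rules, the table could be sketched as follows; the thresholds are taken from the table for left-hand driving, using the sign conventions of FIG. 8 (+y° left, −y° right, −x° downward tilt), and the function is an illustration rather than the claimed determination:

```python
def classify_movement_pattern(shoulder_deg: float, head_yaw_deg: float,
                              head_pitch_deg: float, eyes_visible: bool,
                              eyes_moving_down: bool) -> str:
    """Map measured angles to one of the four movement patterns of the table."""
    # Looking at children on back seats: large shoulder rotation,
    # head rotated up to 90°, eyes no longer visible.
    if 30 <= shoulder_deg <= 90 and not eyes_visible:
        return "looking at children on back seats"
    # Conversation with co-driver: small shoulder rotation,
    # high-frequency head rotation with y = 50°-90°.
    if shoulder_deg <= 10 and 50 <= abs(head_yaw_deg) <= 90:
        return "conversation with co-driver"
    # Writing text message: small shoulder rotation, head roughly straight
    # (y = +/-10°), tilted downwards (x = -(10°-60°)), eyes moving down.
    if (shoulder_deg <= 10 and abs(head_yaw_deg) <= 10
            and -60 <= head_pitch_deg <= -10 and eyes_moving_down):
        return "writing text message on cell phone"
    # Opening glove box: shoulder line tilted to the right,
    # head rotated 20°-60° to the right (negative y).
    if shoulder_deg > 10 and -60 <= head_yaw_deg <= -20:
        return "opening glove box"
    return "unknown"

print(classify_movement_pattern(5, -60, 0, True, False))  # conversation with co-driver
```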

(48) Different movement patterns can be trained with the help of artificial intelligence to further improve a drive assistance system selection.

(49) Artificial intelligence is particularly useful when identical shoulder rotation angles and/or head rotation angles can result from different movements. For example, the respective angles might be identical for opening a glove box and putting a hand on the knee of a co-driver. In order to distinguish the two movements from each other, a plurality of stored images can be evaluated based on artificial intelligence.

(50) Further, the determination of a movement pattern can comprise a simulation of drivers of different sizes and the like.

(51) Accordingly, the information listed in the above table alone might not be sufficient to define an exact movement pattern, but it already allows an appropriate drive assistance to be selected. The fine-tuning of the drive assistance can be achieved by making use of artificial intelligence with a sufficient amount of images of different movements of different people of different sizes, and of simulations.
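As a stand-in for such a trained model (the disclosure does not prescribe a particular artificial intelligence; the nearest-neighbour classifier and the placeholder data below are assumptions), a simple classifier over flattened stored images could look like this:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# X: flattened stored images (simulated, learned or self-learned),
# one row per image; y: movement-pattern labels (here the four patterns).
rng = np.random.default_rng(0)
X = rng.random((200, 64 * 64))    # placeholder greyscale image data
y = rng.integers(0, 4, size=200)  # placeholder labels for four patterns

model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

new_frame = rng.random((1, 64 * 64))    # a new flattened sensor image
pattern = model.predict(new_frame)[0]   # predicted movement pattern
```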

(52) A lighting unit 102 suitable for use in a method, system and motor vehicle according to the disclosure is shown in FIGS. 9a and 9b. The lighting unit 102 has a matrix of light-emitting diodes (LEDs) 103′ to 103″″ and thus a matrix optic, as in a lens array.

(53) The lighting unit 102 comprises the light sources 103′ to 103″″ shown in the exploded view in FIG. 9a, together with optical elements 105′ to 105″″ of an optical system 104, with each of the optical elements 105′ to 105″″ being associated with one of the light sources 103′ to 103″″ and connected downstream of the respective light source 103′ to 103″″. Thus, for example, a near field can be illuminated by activating the light source 103′ and a far field can be illuminated by activating the light source 103′″.

(54) As indicated in the side view in FIG. 9b, the light beams widen with increasing distance from the lighting unit 102. In the shown example, the area illuminated by the light source 103′ is limited to the area between two illumination boundaries A and A′, while the area illuminated by the light source 103′″ is limited to the area between the illumination boundaries B and B′. Because the available light intensity is distributed over a larger spatial field, the illumination depth decreases as the illuminated field size increases. Thus, although a large spatial field perpendicular to the direction of light propagation in the near field N can be illuminated with the light source 103′, the light intensity is no longer sufficient to illuminate the depth range in the far field F in order to be able to perform object recognition. Conversely, although the light source 103′″ can be used to illuminate the depth range in the far field F, the illuminated range in the near field N is smaller than with the light source 103′, so that a near object may not be completely detected. The light sources 103″ and 103″″ can be used at medium distances to achieve increased illumination.

(55) Alternatively, at least one of the light sources 103″ and 103″″ can also be used to illuminate the area, or a partial area, outside the illumination range of the light source 103′″ in the far field F. In this way, the required spatial illumination can be composed of several light sources.
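The composition of the required illumination from several light sources can be pictured as a simple selection rule; the distance thresholds below are assumptions for illustration:

```python
def select_light_sources(occupant_distance_m: float) -> list[str]:
    """Choose which LEDs of the matrix to activate for a given distance.

    Near objects need the wide near-field source 103'; distant objects
    need the narrow but deep far-field source 103'''; medium distances
    use 103'' and 103'''' for increased illumination.
    """
    if occupant_distance_m < 0.8:    # assumed near-field limit
        return ["103'"]
    if occupant_distance_m < 1.5:    # assumed medium-distance range
        return ["103''", "103''''"]
    return ["103'''"]

print(select_light_sources(1.2))  # ["103''", "103''''"]
```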

(56) It goes without saying that light propagation always takes place in three-dimensional space and not, as shown here as an example, in a two-dimensional plane.

(57) Even if the arrangement of the light sources 103′ to 103″″ in FIGS. 9a and 9b is shown lying in a plane and in a regular pattern, a curved or otherwise shaped surface may be provided to accommodate the light sources 103′ to 103″″. Thus, the direction of the light emission and the distance of the respective light source 103′ to 103″″ to the respective optical element 105′ to 105″″ can be preset. The number of light sources can also be increased or decreased, depending on the area to be illuminated and the available space. The optical elements 105′ to 105″″ of the optical system 104 can also be arranged on a curved or otherwise shaped surface to optimally illuminate the area to be illuminated.

(58) FIG. 10 shows exemplary positions where corresponding light units can be arranged within a motor vehicle 106 to achieve the best possible illumination.

(59) In the motor vehicle 106, a TOF sensor 107 is integrated into a rearview device in the form of an interior mirror. However, the sensor 107 can also be placed in other positions, such as the dashboard, so the positioning in FIG. 10 is only exemplary.

(60) Several lighting units are arranged in the vehicle 106. In order to achieve the best possible illumination of a vehicle occupant 109 for the sensor 107, especially partially from the side, a lighting unit is integrated into the vehicle occupant seat 113. In addition or as an alternative, a lighting unit, as shown in FIG. 10, can be installed in a grab handle 111, for example above the driver's door. Such a lighting unit can also be provided as an alternative or supplement in further grab handles 111′ on the vehicle occupant side or in the rear. In addition, it may be intended to install additional grab handles 111″ in a vehicle interior in order to facilitate the movement and/or securing of vehicle occupants in an at least partially autonomous motor vehicle, which can then of course also be equipped with corresponding lighting units.

(61) Furthermore, at least one lighting unit can be arranged on the roof liner, for example in the roof liner area 115. This allows a good illumination of the vehicle occupant 109 also from above. This positioning also makes it possible to illuminate the central area of the vehicle interior particularly well. Advantageously, the lighting unit is housed inside a dome-shaped housing, from where it can illuminate up to 360° in a vertical plane and up to 180° in a horizontal plane. This can be achieved with several permanently installed lighting units, or the installed lighting unit can perform a movement to change the direction of light propagation.

(62) Other suitable positions for lighting units are parts of the A-pillar, the B-pillar and/or the C-pillar, areas of door components such as doors, door frames, windows, window frames and/or corresponding covers, in particular paneling.

(63) The features disclosed in this description, the claims and the figures form the basis for the claimed disclosure, both individually and in any combination with each other, for the respective different embodiments.

REFERENCE SIGN LIST

(64)
1 rearview device
3 eye-tracking sensor
5 detection range
7 TOF sensor
9 detection range
11, 13, 15, 17 step
19 head
21 upper body
23 steering wheel
25 eyes
102 lighting unit
103, 103′, 103″, 103′″, 103″″ light source
104 optical system
105, 105′, 105″, 105′″, 105″″ optical element
106 motor vehicle
107 sensor
109 vehicle occupant
111, 111′, 111″ grab handle
113 vehicle occupant seat
115 roof liner area
190 head
210 upper body
211, 212 shoulder
250 eyes
A, A′, B, B′ illumination boundary
N near field
F far field
1* shoulder rotation to the right
2* head rotation to the left with an angle +y°
3* head rotation to the right with an angle −y°
4* head tilting down with an angle x°