METHOD FOR CONTROLLING A LIGHTING DEVICE FOR EMITTING A NON-DAZZLING BEAM FOR LIGHTING THE ROAD
20210354619 · 2021-11-18
CPC classification
B60Q2300/056 (PERFORMING OPERATIONS; TRANSPORTING)
B60Q1/143 (PERFORMING OPERATIONS; TRANSPORTING)
Abstract
A method for controlling a lighting device of a host motor vehicle in order to emit a beam for lighting the road that is non-dazzling to a target object on this road. The method includes acquiring, with a sensor system of the host motor vehicle, the position (x.sub.target_t0, y.sub.target_t0) of the target object on the road at a given time (t0), and predicting, with a predicting unit, the position (x.sub.target_t1, y.sub.target_t1) of the target object on the road at a time (t1) that is in the future with respect to the given time. Also included is correcting, with the predicting unit, the acquired position of the target object at the given time depending on the predicted position at the future time, and generating, with the lighting device, a non-dazzling zone (Z) in a beam for lighting the road, which beam the device emits depending on the corrected position (P.sub.G, P.sub.D) of the target object.
Claims
1. A method for commanding a lighting device of a host motor vehicle to emit a road-illuminating beam that illuminates the road but that does not cause glare to a target object on this road; the method comprising the following steps: acquiring, by means of a sensor system of the host motor vehicle, the position (x.sub.cible_t0, y.sub.cible_t0) of the target object on the road at a given time (t.sub.0); predicting, by means of a predicting unit, the position (x.sub.cible_t1, y.sub.cible_t1) of the target object on the road at a time (t.sub.1) that is in the future with respect to the given time; correcting, by means of the predicting unit, the target-object position acquired at the given time depending on the predicted position at the future time; and generating, by means of the lighting device, in a beam for illuminating the road that said device emits, a region that does not cause glare, depending on the corrected position (P.sub.G, P.sub.D) of the target object.
2. The method as claimed in claim 1, wherein the acquiring step comprises a first sub-step of detecting the target object, and a second sub-step of acquiring the position (x.sub.cible_t0, y.sub.cible_t0) and the type of the target object.
3. The method as claimed in claim 2, wherein the step of predicting the position of the target object is dependent on the target object being a target motor vehicle traveling in the same direction as the host vehicle and at a speed higher than a threshold speed.
4. The method as claimed in claim 1, wherein the future time (t.sub.1) of the predicting step is offset with respect to the given time (t.sub.0) of the acquiring step by a computed duration depending on the latency of the acquiring, predicting, correcting and generating steps.
5. The method as claimed in claim 1, wherein the predicting step comprises a sub-step of modeling a path of the host vehicle.
6. The method as claimed in claim 5, wherein the path of the host vehicle is modeled from its position (x.sub.hôte, y.sub.hôte) at the given time (t.sub.0) of the acquiring step, to the position (x.sub.cible_t0, y.sub.cible_t0) of the target object acquired in said acquiring step.
7. The method as claimed in claim 6, wherein said path is modeled by a third-degree polynomial the coefficients (c.sub.0, c.sub.1, c.sub.2, c.sub.3) of which are determined depending on the curvature of the road, and on the speeds and positions of the host vehicle and target object.
8. The method as claimed in claim 7, wherein the coefficient of the term of degree 2 (c.sub.2) is computed depending on the curvature of the road and on the wheelbase of the host vehicle.
9. The method as claimed in claim 8, wherein the coefficient of the term of degree 1 (c.sub.1) is zero.
10. The method as claimed in claim 9, wherein the coefficient of the term of degree 3 (c.sub.3) is computed depending on the yaw and position (x.sub.cible_t0, y.sub.cible_t0) of the target object.
11. The method as claimed in claim 5, wherein the predicting step comprises a sub-step of extrapolating the modeled path up to the future time (t.sub.1) in order to obtain the predicted target-object position (x.sub.cible_t1, y.sub.cible_t1) at this future time.
12. The method as claimed in claim 11, the correcting step comprising replacing the acquired position (x.sub.cible_t0, y.sub.cible_t0) with a pair of positions (P.sub.G, P.sub.D) corresponding to the positions of the left and right edges of the target object in the predicted position.
13. The method as claimed in claim 5, wherein the predicting step comprises a sub-step of extrapolating the modeled path up to a first future time (t.sub.1) and up to a second future time (t.sub.2), in order to obtain first and second predicted positions (x.sub.cible_t1, y.sub.cible_t1), (x.sub.cible_t2, y.sub.cible_t2) of the target object at these future times, the first and second future times corresponding to an offset, from the given time, by the minimum and maximum latencies of the acquiring, predicting and correcting steps, respectively.
14. The method as claimed in claim 13, wherein the correcting step comprises replacing the acquired position (x.sub.cible_t0, y.sub.cible_t0) with a pair of positions (P.sub.G, P.sub.D) in which: one of the positions (P.sub.G) corresponds to the leftmost position of the positions (P.sub.G1, P.sub.G2) of the left edges in the first and second predicted positions (x.sub.cible_t1, y.sub.cible_t1), (x.sub.cible_t2, y.sub.cible_t2); the other of the positions (P.sub.D) corresponds to the rightmost position of the positions (P.sub.D1, P.sub.D2) of the right edges in the first and second predicted positions.
15. The method as claimed in claim 1, wherein the road-illuminating beam emitted by the lighting device is a pixelated beam and wherein the step of generating a region that does not cause glare consists in switching on and/or switching off and/or modifying the intensity of a plurality of pixels of this pixelated beam depending on the corrected position (P.sub.G, P.sub.D).
16. A motor-vehicle lighting system, comprising: a sensor system able to acquire the position (x.sub.cible_t0, y.sub.cible_t0) of a target object on the road at a given time (t.sub.0); a predicting unit able to predict the position (x.sub.cible_t1, y.sub.cible_t1) of the target object on the road at a time (t.sub.1) that is in the future with respect to the given time, and to correct the acquired position depending on the predicted position; a lighting device able to emit a road-illuminating beam and to generate, in this beam, a region that does not cause glare, depending on the corrected position (P.sub.G, P.sub.D) of the target object.
17. A data-storage means storing one or more programs the execution of which permits the method as claimed in claim 1 to be implemented.
18. A computer program on a data-storage means, comprising one or more sequences of instructions that are executable by a microprocessor and/or a computer, the execution of said sequences of instructions permitting the method as claimed in claim 1 to be implemented.
19. The method as claimed in claim 2, wherein the future time (t.sub.1) of the predicting step is offset with respect to the given time (t.sub.0) of the acquiring step by a computed duration depending on the latency of the acquiring, predicting, correcting and generating steps.
20. The method as claimed in claim 2, wherein the predicting step comprises a sub-step of modeling a path of the host vehicle.
Description
[0086] Unless specified otherwise, technical features that are described in detail for one given embodiment may be combined with the technical features that are described in the context of other embodiments described by way of example and non-limitingly.
[0088] The host vehicle 1 is equipped with a lighting system comprising headlamps capable of emitting a pixelated road-illuminating beam FP, a control unit capable of switching on and/or switching off and/or modifying the light intensity of each of the pixels of the beam, and a sensor system for detecting a target object on the road to which glare is not to be caused and for measuring the position of this object.
[0089] In order to optimally illuminate the road without causing glare to other road users, the lighting system implements the known method shown in
[0090] Step A1 corresponds to a step of acquiring, using the sensor system, an image of the road scene at a time t.sub.0, and step A2 corresponds to a step of detecting the target vehicle 2 in this road scene. In step A3, the position (X.sub.cible, Y.sub.cible) of target vehicle 2 at time t.sub.0 is computed and, subsequently, in step A4, positions P.sub.G and P.sub.D corresponding to the positions of the left and right edges of the target vehicle 2 at the time t0 are determined. These positions of the left and right edges are then delivered, in a step A5, to the control unit, which switches on and off the pixels of the illuminating beam so as to generate, in the beam between these two positions P.sub.G and P.sub.D, a region Z that does not cause glare. It may easily be seen, from
[0091] Now, in the case where the target vehicle 2 is moving at a very high rate, as in the case of
[0092] Thus,
[0093] To this end, the host vehicle 1 comprises a lighting system 11 comprising a sensor system that is able to acquire the position of the target vehicle 2 on the road at a given time t0. This sensor system is, for example, a camera that films the road scene in an acquiring step E1 and that is associated with a unit for processing the images that it delivers, with a view to detecting the presence of the target vehicle 2 in a detecting step E2, and to estimating the position (X.sub.cible_t0, Y.sub.cible_t0) thereof at the given time t0 in an acquiring step E3.
[0094] For example, the processing unit may implement, in step E2, methods for detecting the outline of the target vehicle, and, in step E3, methods for determining the centroid of this outline. As a variant, the processing unit may implement, in step E2, methods for detecting light sources belonging to the target vehicle, and, in step E3, methods for determining the centroid of the position of these light sources. In these two methods, the position of the centroid is considered to be the position (X.sub.cible_t0, Y.sub.cible_t0) of the target vehicle at the given time t0.
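As an illustration, the centroid computation of step E3 may be sketched as follows. The function name and the detected points are hypothetical, not taken from the text; the same routine applies whether the points come from the detected outline or from the detected light sources.

```python
# Illustrative sketch of step E3 (hypothetical names and values): the target
# position is taken as the centroid of the detected points.

def centroid(points):
    """Return the centroid (x, y) of a list of (x, y) points."""
    n = len(points)
    x = sum(p[0] for p in points) / n
    y = sum(p[1] for p in points) / n
    return (x, y)

# e.g. two rear-lamp positions of the target vehicle detected at time t0
lamps = [(40.0, 1.2), (40.0, -0.6)]
x_cible_t0, y_cible_t0 = centroid(lamps)
```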
[0095] The lighting system 11 is also provided with a predicting unit that is able to predict the position of the target vehicle 2 on the road at a future time.
[0096] In order to predict the position of the target vehicle 2, the predicting unit is able to model, in a modeling step E4, a path TP of the host vehicle 1.
[0097] Specifically, the target vehicle 2 moves over the road in the same way as the host vehicle 1. Therefore, the target vehicle 2 was, a few moments before the time t0, in the same position as the host vehicle 1. It will therefore be understood that the path of the host vehicle 1 should result in the latter ending up, at a later time, in the position of the target vehicle 2. Therefore, modeling the path TP of the host vehicle 1 allows the path of the target vehicle 2 to be modeled, so as to be able to predict its position at a future time t1 or t2.
[0098] One method for modeling the path TP according to the invention will be detailed below with reference to
[0099] Moreover, the predicting unit is able to extrapolate, in an extrapolating step E5, the modeled path TP to one or more future times in order to obtain the predicted position of the target vehicle 2 at each such future time. This extrapolation may for example consist of an extension of the modeled path beyond the position of the target vehicle 2 at the acquisition time t0, to obtain its position at said future time. For example, the position of the target vehicle 2 along the X-axis at said future time may be estimated using its position and its speed at the acquisition time t0 (this speed may for example be obtained by differentiating the positions of the target vehicle 2 that are successively acquired by the acquiring unit). This position along the X-axis may thus be obtained via the equation: x.sub.cible_futur=x.sub.cible+V.sub.cible·t.sub.futur. This position along the X-axis then allows the position along the Y-axis of the target vehicle 2 to be obtained, using the equation modeling the path TP.
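A minimal sketch of this extrapolating step, assuming the path TP has already been modeled by a third-degree polynomial y = c0 + c2·x² + c3·x³ as described elsewhere in the text. The coefficient and speed values are illustrative only.

```python
# Sketch of extrapolating step E5 (illustrative names and values):
# the target's x is advanced using its speed over the future-time offset,
# then its y is read off the modeled path y = c0 + c2*x**2 + c3*x**3.

def extrapolate(x_cible, v_cible, t_futur, c0, c2, c3):
    x_futur = x_cible + v_cible * t_futur   # x_cible_futur = x_cible + V_cible * t_futur
    y_futur = c0 + c2 * x_futur**2 + c3 * x_futur**3
    return x_futur, y_futur

# e.g. target 40 m ahead moving at 25 m/s, extrapolated over 0.1 s of latency
x1, y1 = extrapolate(40.0, 25.0, 0.1, c0=0.5, c2=1e-4, c3=-1e-6)
```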
[0100] In the method presented in
[0101] Furthermore, the predicting unit replaces, in a correcting step E6A, the acquired position (X.sub.cible_t0, Y.sub.cible_t0) with a pair of positions P.sub.G1 and P.sub.D1, which correspond to the positions of the left and right edges of target vehicle 2 (as seen from the host vehicle 1) in the predicted position (X.sub.cible_t1, Y.sub.cible_t1).
[0102] These positions of the left and right edges are obtained in this example using the size of the target vehicle 2, which will have been estimated beforehand in step E2 (via the detection of the outline or of the light sources of the target vehicle 2), and using the yaw of this target vehicle 2, this yaw being for example obtained via the following equation:
where V.sub.x and V.sub.y are the speeds of the target vehicle 2 along the X- and Y-axes, and V.sub.hôte is the speed of the host vehicle 1, this speed for example being obtained by means of a navigation system of the host vehicle.
[0103] The predicting unit extrapolates, in a step E5B, the modeled path TP up to a second future time t2, which corresponds to the maximum latency of these same steps E1 to E7, to obtain the predicted position (X.sub.cible_t2, Y.sub.cible_t2) of the target vehicle at this future time t2. The predicting unit also replaces, in a correcting step E6B, the acquired position (X.sub.cible_t0, Y.sub.cible_t0) with a pair of positions P.sub.G2 and P.sub.D2, which correspond to the positions of the left and right edges of the target vehicle 2 in the predicted position (X.sub.cible_t2, Y.sub.cible_t2).
[0104] Since the latency of the system is not a constant value, two pairs of positions corresponding to the minimum and maximum positions at which the target vehicle 2 will possibly be found are thus obtained.
[0105] In order to guarantee that the illuminating beam FP is emitted with a region Z that does not cause glare to the target vehicle 2, whatever the latency of the lighting system at the time t0, the predicting unit then computes, in a correcting step E7A, the leftmost position P.sub.G between the positions of the left edges P.sub.G1 and P.sub.G2: min(P.sub.G1, P.sub.G2), and, in a correcting step E7B, the rightmost position P.sub.D between the positions of the right edges P.sub.D1 and P.sub.D2: max(P.sub.D1, P.sub.D2). It will therefore be understood that a region is obtained that is bounded on the right and left by extreme positions that the target vehicle 2 may take, depending on the variance of the latency of the lighting system.
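The two corrections of steps E7A and E7B can be sketched as follows; edge positions are represented here as signed horizontal angles (negative to the left), which is an assumption of this sketch rather than a convention stated in the text.

```python
# Sketch of correcting steps E7A/E7B: keep the leftmost of the two predicted
# left edges and the rightmost of the two predicted right edges, so that the
# glare-free region covers the whole latency window. Illustrative names only.

def bound_glare_free_region(p_g1, p_g2, p_d1, p_d2):
    """Edge positions as horizontal angles in degrees, negative to the left."""
    p_g = min(p_g1, p_g2)   # leftmost left edge:  min(P_G1, P_G2), step E7A
    p_d = max(p_d1, p_d2)   # rightmost right edge: max(P_D1, P_D2), step E7B
    return p_g, p_d

p_g, p_d = bound_glare_free_region(-1.8, -2.1, 0.9, 1.4)
```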
[0106] In another embodiment (not shown), the predicting unit could solely extrapolate the modeled path TP up to a single future time, corresponding to the average latency of the lighting system, to obtain the predicted position of the target vehicle at this future time, and replace this position with a pair of positions corresponding to the positions of the left and right edges of the target vehicle in this predicted position.
[0107] Finally, the lighting system 11 is also provided with a lighting device able to emit the pixelated road-illuminating beam FP and to generate a region Z that does not cause glare in this beam depending on the corrected position of the target object, namely on the pair of positions P.sub.G and P.sub.D. For this purpose the lighting device comprises one or more luminous modules that are arranged to emit said pixelated beam, and a control unit capable of switching on and/or switching off and/or modifying the light intensity of each of the pixels of the pixelated beam.
[0108] In a generating step E8, the control unit of the lighting device switches off a plurality of pixels of the pixelated beam FP in order to generate the region Z that does not cause glare between the positions P.sub.G and P.sub.D. This generating step E8 is implemented at a time that is subsequent to the acquisition time t0, and that corresponds to this time t0 shifted by the latency of the lighting system 11.
[0109] Thus, thanks to the lighting system 11 and to the method according to the embodiment shown, the region Z that does not cause glare in the road-illuminating beam FP is thus positioned not in the position (X.sub.cible_t0, Y.sub.cible_t0) determined at the acquisition time of the image of the road, but rather in a position that has been corrected depending on a prediction of its position at a subsequent time, corresponding to the time at which said road-illuminating beam FP is emitted with the region Z that does not cause glare.
[0111] It will also be noted that the processing unit implements a step E31 of storing the successively acquired positions (X.sub.cible_t0, Y.sub.cible_t0) and a step E32 of differentiating these acquired positions in order to obtain the speeds (V.sub.X, V.sub.Y) of the target object. These speeds in particular make it possible to identify whether the target object 2 is traveling in the same direction as, or in a direction opposite to, the direction of travel of the host vehicle 1.
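Steps E31 and E32 can be sketched as below; a simple finite difference between the two most recent samples is assumed here, the text not specifying the differentiation scheme, and the sample values are hypothetical.

```python
# Sketch of steps E31/E32: the successively acquired positions are stored,
# then differentiated to estimate the target speeds (V_X, V_Y).
# Assumption: a two-point finite difference over a fixed sampling period dt.

def estimate_speed(positions, dt):
    """positions: list of (x, y) samples spaced dt seconds apart."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (x1 - x0) / dt, (y1 - y0) / dt

history = [(40.0, 0.5), (40.5, 0.52)]   # two acquisitions 0.04 s apart
v_x, v_y = estimate_speed(history, dt=0.04)
same_direction = v_x > 0                 # receding target: same direction of travel
```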
[0112] In the step E4 of modeling the path TP, it is sought to define a polynomial y=f(x), x being a coordinate of the host vehicle 1 traveling on the path TP along a steering X-axis of the host vehicle and y being a coordinate of the host vehicle 1 traveling on the path TP along a Y-axis normal to the X-axis. In the present case, the polynomial is a third-degree polynomial the coefficients of which are c.sub.0, c.sub.1, c.sub.2 and c.sub.3.
[0113] In order to facilitate the computation of these coefficients, the predicting unit checks, in a step E41, whether the target object 2 is a motor vehicle traveling in the same direction as the host vehicle 1 and at a speed V.sub.cible higher than a threshold speed, 80 km/h in the present case.
[0114] If these conditions are met, the predicting unit implements a first step E42 of determining the coefficient c.sub.2 of the term of degree 2. This coefficient is for example determined by means of the angle θ.sub.volant of the steering wheel of the host vehicle 1 at the acquisition time t0, of the gear-ratio factor of the steering of the host vehicle 1 and of the wheelbase E of the host vehicle, via the equation:
As a variant, in the case where the host vehicle is provided with a navigation unit capable of determining the speed V.sub.hôte and the yaw {dot over (θ)}.sub.hôte of the host vehicle, the coefficient of the term of degree 2 may be computed by means of the following equation:
[0115] The predicting unit then implements a second step E43 of determining the coefficient of the term of degree 3. This coefficient is for example computed depending on the position of the target vehicle 2, on its yaw and on the coefficient of the term of degree 2, by means of the following equation:
[0116] Lastly, the predicting unit implements a third step E44 of determining the coefficient of the term of degree 0. This coefficient is for example computed depending on the position of the target vehicle 2 and on the coefficients of the terms of degree 2 and 3, by means of the following equation:
c.sub.0=y.sub.cible−c.sub.2x.sub.cible.sup.2−c.sub.3x.sub.cible.sup.3.
[0117] At the end of step E44, the coefficient of the term of degree 1 being zero (the host vehicle 1 being assumed to be moving collinearly to the road), an equation modeling the path TP of the host vehicle 1 between its position (X.sub.hôte, Y.sub.hôte) and the position (X.sub.cible_t0, Y.sub.cible_t0) of the target vehicle 2 at the time t0 is thus obtained: y=c.sub.0+c.sub.2x.sup.2+c.sub.3x.sup.3.
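The assembly of the path model can be sketched as follows. The coefficients c2 and c3 are taken as already-computed inputs (their determining equations appear in steps E42 and E43); only the c0 relation of paragraph [0116], c0 = y_cible − c2·x_cible² − c3·x_cible³, is reproduced, and the numeric values are illustrative.

```python
# Sketch of assembling the path model y = c0 + c2*x**2 + c3*x**3 (c1 = 0,
# the host vehicle being assumed collinear to the road). Only the c0
# relation c0 = y_cible - c2*x_cible**2 - c3*x_cible**3 is from the text;
# c2 and c3 are treated here as given inputs.

def path_model(x_cible, y_cible, c2, c3):
    c0 = y_cible - c2 * x_cible**2 - c3 * x_cible**3
    def y(x):
        return c0 + c2 * x**2 + c3 * x**3
    return c0, y

c0, y = path_model(x_cible=40.0, y_cible=0.5, c2=1e-4, c3=-1e-6)
# By construction the modeled path passes through the target position at t0.
```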
[0118]
[0119] The lighting system 100 comprises a sensor system 200, which for example comprises a camera 210 aimed at the road on which the motor vehicle is being driven, in order to implement step E1 of
[0120] The lighting system 100 also comprises a predicting unit 300, which is for example integrated into the same microcontroller as the processing unit 220 or, as a variant, integrated in another microcontroller, and which receives, from the processing unit 220, the position of the target vehicle at the given time, and implements steps E4, E5A, E5B, E6A, E6B, E7A and E7B of
[0121] The lighting system 100 furthermore comprises a lighting device 400 comprising first and second luminous modules 2 and 3 that are each able to project one pixelated beam, these two beams together forming the pixelated road-illuminating beam FP.
[0122] Each module comprises: [0123] a pixelated light source 21, 31 comprising 900 elementary emitters that are arranged in a matrix array of 20 rows by 45 columns, each of the elementary emitters being able to be activated selectively so as to emit one elementary light beam; and [0124] a projecting optical element 22, 32 that is associated with said light source with a view to projecting each of said elementary light beams in the form of a pixel having a width and a length of 0.3°.
[0125] In the described embodiment, the light source 21 comprises a monolithic matrix array of electroluminescent elements, such as described above.
[0126] Provision may be made to replace the light source 21 with any other type of pixelated light source described above, such as for example a matrix array of light-emitting diodes or a light source associated with a matrix array of optoelectronic elements, such as micromirrors.
[0127] Each luminous module 2 and 3 may comprise elements other than those described above. These elements will not be described in the context of the present invention since they do not interact functionally with the arrangements according to the invention.
[0128] Lastly, the lighting device 400 comprises a control unit 4 that is able to selectively control, depending on control instructions that it receives, the light intensity of each of the pixels of the pixelated beams emitted by the modules 2 and 3, for example by selectively switching on and off the elementary emitters of the light sources 21 and 31, or else by increasing or decreasing the electrical power supplied to each of these elementary emitters. The control unit 4, which is for example integrated into a microcontroller, receives, from the predicting unit 300, the position predicted for the future time and implements step E8 of
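Step E8 can be sketched by mapping the corrected edge positions to pixel columns of one 20-row by 45-column emitter matrix with 0.3° pixels. The beam is assumed here to be centered on the optical axis with columns indexed left to right; this layout is an assumption of the sketch, not specified in the text.

```python
# Sketch of step E8: translate the corrected edge angles (P_G, P_D) into the
# columns of a 20 x 45 pixel matrix (0.3 deg pixels) to be switched off.
# Assumptions: beam centered on the optical axis, columns indexed left to right.

COLS, PIXEL_DEG = 45, 0.3
HALF_SPAN = COLS * PIXEL_DEG / 2.0      # 6.75 deg on each side of the axis

def columns_to_switch_off(p_g_deg, p_d_deg):
    """Return the column indices covering the angular span [p_g_deg, p_d_deg]."""
    first = int((p_g_deg + HALF_SPAN) // PIXEL_DEG)
    last = int((p_d_deg + HALF_SPAN) // PIXEL_DEG)
    return list(range(max(first, 0), min(last, COLS - 1) + 1))

cols = columns_to_switch_off(-2.0, 1.4)  # glare-free region Z between P_G and P_D
```

The control unit would then switch off, in every row, the elementary emitters of these columns (or reduce their supply power), leaving the rest of the beam FP unchanged.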
[0129] The above description clearly explains how the invention allows the objectives that were set therefor to be achieved, and especially how it provides a solution that allows, in an illuminating beam, a region to be generated that does not cause glare to a target object, even when that target object is moving at a high rate. The method and the lighting system according to the invention allow the region that does not cause glare to be generated in the road-illuminating beam not in the target-object position determined at the acquisition time of the image of the road, but rather in a position that has been corrected depending on a prediction of its position at a subsequent time, corresponding to the time at which said road-illuminating beam is emitted with the region that does not cause glare.
[0130] The invention is not limited to the embodiments specifically given in this document by way of nonlimiting examples, and extends in particular to all equivalent means and to any technically workable combination of these means. Thus, the features, variants and various embodiments of the invention may be combined with one another, in various combinations, provided that they are not mutually incompatible or exclusive.