METHOD FOR CONTROLLING A MOTOR VEHICLE LIGHTING SYSTEM

20240135666 · 2024-04-25

Abstract

A method for controlling a lighting system for a motor vehicle having a system for detecting objects includes defining at least one set of detectable types of objects and acquiring, by means of the detection system, a set of data relating to the position of a plurality of objects of types belonging to the set. Also included is determining a lighting model which is associated with said set and defines at least one zone referred to as the initial detection zone, and a light pattern referred to as the initial light pattern, of a light beam intended to be emitted in the initial detection zone. The lighting system is controlled in order to emit a light beam having the initial light pattern in the initial detection zone of said lighting model.

Claims

1. A method for controlling a lighting system for a motor vehicle equipped with an object detection system comprising a system for acquiring images of all or part of the environment of the vehicle, the method comprising the following steps: a. Defining at least one set of types of objects intended to be detected by the detection system of the motor vehicle, b. The detection system acquiring a dataset relating to the position, in the environment of the vehicle, of a plurality of objects of types belonging to said set, c. Determining, based on the dataset, a lighting model associated with said set defining at least one zone, called initial detection zone, associated with this set of types of objects and able to be addressed by the lighting system, and a photometry, called initial photometry, of a light beam intended to be emitted by the lighting system in the initial detection zone associated with this set, d. Controlling the lighting system on the basis of the determined lighting model so as to emit a light beam having the initial photometry in the initial detection zone of this lighting model.

2. The method as claimed in claim 1, wherein the dataset relating to the position of the objects, acquired in the acquisition step, comprises, for each object, the position, called initial position, of this object at the time when it was detected by the detection system.

3. The method as claimed in claim 2, wherein the step of determining said model comprises, for each type of object of said set, a step of modeling, based on the dataset, a zone, called first detection zone of said type of object, encompassing all of the initial positions of the objects of said type of object, and wherein said initial detection zone is determined based on the first detection zones of all of the types of objects of said set.

4. The method as claimed in claim 3, wherein each step of modeling the first detection zone of a type of object implements a machine learning algorithm, making it possible to determine the first detection zone based on the initial positions of the objects of said type of object.

5. The method as claimed in claim 1, wherein, in the step of determining said model, said initial photometry of the light beam is determined on the basis of at least one of the types of objects of the set of types of objects.

6. The method as claimed in claim 5, the method comprising a step of providing at least one range of values of a parameter relating to the behavior of the motor vehicle or to the environment, and wherein the step of determining the lighting model associated with said set is a step of determining a lighting model, associated with said set, that is variable on the basis of said values of the parameter.

7. The method as claimed in claim 1, wherein: a. the definition step comprises defining at least three sets of types of objects including a first set comprising at least objects of ground marking type, a second set comprising at least objects of road user type and a third set comprising at least objects of traffic sign type, b. the determination step comprises determining three lighting models each associated with one of the sets, including a first lighting model associated with the first set, a second lighting model associated with the second set and a third lighting model associated with the third set; c. and the step of controlling the lighting system comprises controlling the lighting system on the basis of the determined lighting models so as to emit a first light beam having the initial photometry of the first lighting model in the initial detection zone of this first model, a second light beam having the initial photometry of the second lighting model in the initial detection zone of this second model and a third light beam having the initial photometry of the third lighting model in the initial detection zone of this third model.

8. The method as claimed in claim 1, the method furthermore comprising the following steps: a. The object detection system of the vehicle detecting an object of a given type from among said set of types of objects, b. Controlling the lighting system so as to modify the light beam on the basis of the type of the detected object.

9. The method as claimed in claim 8, wherein the step of controlling the lighting system comprises a step of generating a zone in the light beam level with the detected object, the zone having a photometry adapted to the type of the detected object, and a step of moving said zone on the basis of the movement of the detected object in the reference frame of the image acquisition system.

10. The method as claimed in claim 1, the motor vehicle being equipped with a system for partially or fully autonomous driving, wherein the implementation of the step of controlling the lighting system is conditional on the activation of the autonomous driving system, and the method comprises the following steps: a. An occupant of the vehicle receiving an instruction to take back manual control of the motor vehicle, b. Controlling the lighting system so as to emit at least one predetermined regulatory lighting and/or signaling beam.

11. A motor vehicle comprising an object detection system comprising a system for acquiring images of all or part of the environment of the vehicle, a lighting system, a system for partially or fully autonomous driving, and a controller for the lighting system, the controller being designed to implement the control step of the method according to the invention.

12. The method as claimed in claim 2, wherein, in the step of determining said model, said initial photometry of the light beam is determined on the basis of at least one of the types of objects of the set of types of objects.

13. The method as claimed in claim 2, wherein: a. the definition step comprises defining at least three sets of types of objects including a first set comprising at least objects of ground marking type, a second set comprising at least objects of road user type and a third set comprising at least objects of traffic sign type, b. the determination step comprises determining three lighting models each associated with one of the sets, including a first lighting model associated with the first set, a second lighting model associated with the second set and a third lighting model associated with the third set; c. and the step of controlling the lighting system comprises controlling the lighting system on the basis of the determined lighting models so as to emit a first light beam having the initial photometry of the first lighting model in the initial detection zone of this first model, a second light beam having the initial photometry of the second lighting model in the initial detection zone of this second model and a third light beam having the initial photometry of the third lighting model in the initial detection zone of this third model.

14. The method as claimed in claim 2, the method furthermore comprising the following steps: a. The object detection system of the vehicle detecting an object of a given type from among said set of types of objects, b. Controlling the lighting system so as to modify the light beam on the basis of the type of the detected object.

15. The method as claimed in claim 2, the motor vehicle being equipped with a system for partially or fully autonomous driving, wherein the implementation of the step of controlling the lighting system is conditional on the activation of the autonomous driving system, and the method comprises the following steps: a. An occupant of the vehicle receiving an instruction to take back manual control of the motor vehicle, b. Controlling the lighting system so as to emit at least one predetermined regulatory lighting and/or signaling beam.

16. The method as claimed in claim 3, wherein, in the step of determining said model, said initial photometry of the light beam is determined on the basis of at least one of the types of objects of the set of types of objects.

17. The method as claimed in claim 3, wherein: a. the definition step comprises defining at least three sets of types of objects including a first set comprising at least objects of ground marking type, a second set comprising at least objects of road user type and a third set comprising at least objects of traffic sign type, b. the determination step comprises determining three lighting models each associated with one of the sets, including a first lighting model associated with the first set, a second lighting model associated with the second set and a third lighting model associated with the third set; c. and the step of controlling the lighting system comprises controlling the lighting system on the basis of the determined lighting models so as to emit a first light beam having the initial photometry of the first lighting model in the initial detection zone of this first model, a second light beam having the initial photometry of the second lighting model in the initial detection zone of this second model and a third light beam having the initial photometry of the third lighting model in the initial detection zone of this third model.

18. The method as claimed in claim 3, the method furthermore comprising the following steps: a. The object detection system of the vehicle detecting an object of a given type from among said set of types of objects, b. Controlling the lighting system so as to modify the light beam on the basis of the type of the detected object.

19. The method as claimed in claim 3, the motor vehicle being equipped with a system for partially or fully autonomous driving, wherein the implementation of the step of controlling the lighting system is conditional on the activation of the autonomous driving system, and the method comprises the following steps: a. An occupant of the vehicle receiving an instruction to take back manual control of the motor vehicle, b. Controlling the lighting system so as to emit at least one predetermined regulatory lighting and/or signaling beam.

20. The method as claimed in claim 4, wherein, in the step of determining said model, said initial photometry of the light beam is determined on the basis of at least one of the types of objects of the set of types of objects.

Description

[0052] The present invention is now described using examples that are only illustrative and in no way limit the scope of the invention, and with reference to the appended drawings, in which:

[0053] FIG. 1 schematically and partially shows a method for controlling a lighting system for a motor vehicle according to one embodiment of the invention;

[0054] FIG. 2 schematically and partially shows a motor vehicle according to one exemplary embodiment of the invention;

[0055] FIG. 3 schematically and partially shows datasets for implementing the method of FIG. 1;

[0056] FIG. 4 schematically and partially shows the implementation of a step of the method of FIG. 1;

[0057] FIG. 5 schematically and partially shows the implementation of a step of the method of FIG. 1;

[0058] FIG. 6 schematically and partially shows the implementation of a step of the method of FIG. 1;

[0059] FIG. 7 schematically and partially shows the implementation of a step of the method of FIG. 1; and

[0060] FIG. 8 schematically and partially shows the implementation of a step of the method of FIG. 1.

[0061] In the following description, elements that are identical in structure or in function and appear in various figures keep the same reference sign, unless otherwise stated.

[0062] [FIG. 1] describes a method for controlling a lighting system 3 for a motor vehicle 1 according to one embodiment of the invention.

[0063] The motor vehicle 1, shown in [FIG. 2], comprises an object detection system 2. This detection system 2 comprises an image acquisition system 21.

[0064] This system 21 comprises a camera able to acquire images of the road scene all around the motor vehicle 1. The detection system 2 also comprises a processing unit (not shown) designed to implement image processing algorithms on the images acquired by the camera 21 in order to detect objects in said images.

[0065] The motor vehicle 1 comprises a lighting system 3, comprising a plurality of lighting modules 31 to 36, each able to emit a pixelated light beam in a given direction, the lighting system 3 thus being able to illuminate the road all around the motor vehicle 1.

[0066] The motor vehicle 1 comprises a controller for the lighting system 3, able to selectively control each of the lighting modules 31 to 36 and to selectively control each of the pixels of the pixelated light beams able to be emitted by these lighting modules 31 to 36.

[0067] The motor vehicle 1 comprises a system for fully autonomous driving that is designed, when the motor vehicle is in an autonomous driving mode, to control the steering components, the braking components and the engine or transmission components of the motor vehicle, in particular on the basis of the objects detected by the processing unit of the detection system 2 in the images acquired by the camera 21.

[0068] In the remainder of the description, the method of [FIG. 1] will be a method for controlling the lighting modules 31 and 32, and will be described in conjunction with [FIG. 3] to [FIG. 8], which each show a road scene ahead of the vehicle, as may be seen by the camera 21 and as may be illuminated by the lighting modules 31 and 32, it being understood that the method is also implemented for road scenes to the side of and behind the vehicle by controlling the lighting modules 33 to 36.

[0069] In a step E1, a plurality of sets of types of objects G.sub.1 to G.sub.N is defined beforehand, each set G.sub.i grouping together one or more types of objects T.sub.i,j. In the example described, this step E1 is simplified by defining a first set G.sub.1 of types of objects T.sub.1,1 grouping together traffic signs, a second set G.sub.2 of types of objects T.sub.2,1 and T.sub.2,2 grouping together pedestrians and vehicles, respectively, and a third set G.sub.3 of types of objects T.sub.3,1 grouping together ground markings and obstacles likely to be reached by the vehicle in a time less than two seconds. In the figures, objects of the type T.sub.1,1 will be represented by squares, objects of the type T.sub.2,1 will be represented by circles, objects of the type T.sub.2,2 will be represented by triangles and objects of the type T.sub.3,1 will be represented by stars.
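The grouping performed in step E1 can be sketched as a simple lookup, purely for illustration; the container and type names below (OBJECT_SETS, set_of, the string labels) are assumptions, not terms from the patent:

```python
# Step E1 sketch: grouping detectable object types into sets G_1 to G_3.
# All names and labels are illustrative assumptions.

OBJECT_SETS = {
    "G1": ["traffic_sign"],                     # type T_1,1
    "G2": ["pedestrian", "vehicle"],            # types T_2,1 and T_2,2
    "G3": ["ground_marking", "near_obstacle"],  # type T_3,1 (markings and
                                                # obstacles reachable in < 2 s)
}

def set_of(object_type: str) -> str:
    """Return the set G_i to which a detected object type belongs."""
    for set_name, types in OBJECT_SETS.items():
        if object_type in types:
            return set_name
    raise KeyError(f"unknown object type: {object_type}")
```

A detected "pedestrian" thus maps to the set "G2" and will later be lit with that set's photometry.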

[0070] In a step E2, a plurality of datasets S.sub.1 to S.sub.N is acquired. Each datum P.sub.i,j,k of a dataset S.sub.i represents a set of positions of an object O.sub.i,j,k of a type T.sub.i,j belonging to a set G.sub.i, estimated by a detection system of a motor vehicle, similar to the detection system 2 and comprising a camera similar to the camera 21. This set of positions P.sub.i,j,k groups together all of the positions of this object O.sub.i,j,k from an initial position P.sub.i,j,k(0) of this object, estimated at the time when it was detected by the detection system in the field of the camera, up to a final position, estimated at the last time before the disappearance of the object from the field of the camera.
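The data acquired in step E2 can be sketched as follows; the ObjectTrack structure and its field names are illustrative assumptions standing in for a datum P.sub.i,j,k and the associated ego speed V.sub.i,j,k:

```python
# Step E2 sketch: one datum P_i,j,k is the full track of an object O_i,j,k,
# from its initial position P_i,j,k(0) to its last position before it leaves
# the camera field, together with the ego speed at acquisition.
# Structure and field names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ObjectTrack:
    object_type: str           # type T_i,j of the object
    positions: list            # [(x, y), ...] successive estimated positions
    vehicle_speed_kmh: float   # V_i,j,k: ego speed when the track was recorded

    def initial_position(self):
        """P_i,j,k(0): position at the time of first detection."""
        return self.positions[0]

track = ObjectTrack("pedestrian", [(4.0, 1.5), (3.6, 1.4), (3.1, 1.2)], 62.0)
```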

[0071] [FIG. 3] shows a simplified example of the datasets S.sub.1 to S.sub.3, relating to the sets G.sub.1 to G.sub.3, the initial positions P.sub.i,j,k(0) of the data of these datasets being projected onto a road scene ahead of a motor vehicle.

[0072] Each dataset S.sub.i furthermore comprises, for each datum P.sub.i,j,k of this set representing a set of positions of an object, the speed V.sub.i,j,k of the motor vehicle when the set of positions of this object was estimated.

[0073] In a preliminary step E1′, carried out in parallel with the definition step E1, multiple speed ranges ΔV.sub.1 to ΔV.sub.M are defined.

[0074] In a step E3, each of the datasets S.sub.1 to S.sub.N is split into a plurality of sub-datasets S.sub.1,1 to S.sub.N,M, each datum P.sub.i,j,k of a dataset S.sub.i being assigned to a subset S.sub.i,l if the speed V.sub.i,j,k(0) of the motor vehicle, at the time of acquisition of the initial position P.sub.i,j,k(0) of the object O.sub.i,j,k, is within the range ΔV.sub.l. In other words, the subset S.sub.i,l contains all of the initial positions P.sub.i,j,k(0) of the objects O.sub.i,j,k whose type T.sub.i,j belongs to the set G.sub.i and whose initial speed V.sub.i,j,k(0) is within the range ΔV.sub.l.
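Step E3 amounts to binning the tracks by the ego speed recorded at the initial position. A minimal sketch, using the 0-50, 50-90 and 90-130 km/h intervals that appear later in the description (the function and key names are assumptions):

```python
# Step E3 sketch: assign each track to a sub-dataset S_i,l according to the
# ego speed at the time of the initial position P_i,j,k(0). Bin edges mirror
# the ranges used with [FIG. 4] to [FIG. 6]; names are illustrative.

SPEED_RANGES = [(0, 50), (50, 90), (90, 130)]   # delta-V_1 .. delta-V_3, km/h

def speed_bin(speed_kmh: float) -> int:
    """Index l of the range delta-V_l containing the given ego speed."""
    for l, (lo, hi) in enumerate(SPEED_RANGES):
        if lo <= speed_kmh < hi:
            return l
    raise ValueError(f"speed out of modeled range: {speed_kmh}")

def split_by_speed(tracks):
    """Group tracks (dicts with a 'speed' key) by speed-range index."""
    subsets = {l: [] for l in range(len(SPEED_RANGES))}
    for t in tracks:
        subsets[speed_bin(t["speed"])].append(t)
    return subsets

subsets = split_by_speed([{"speed": 62.0}, {"speed": 110.0}, {"speed": 30.0}])
```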

[0075] In a step E4, for each type of object T.sub.i,j of each set G.sub.i and for each speed range ΔV.sub.l, a zone Z.sub.i,j,l, called first detection zone of this type of object, is modeled. This zone Z.sub.i,j,l encompasses all of the initial positions P.sub.i,j,k(0) of the objects O.sub.i,j,k of the type of object T.sub.i,j whose initial speed V.sub.i,j,k(0) is within the range ΔV.sub.l.

[0076] For these purposes, a support vector machine has been trained beforehand to determine, with supervision and based on a plurality of points labeled with different labels and positioned in a space, for each label, a border of a zone such that the number of points labeled with this label and present in this zone is greater than a given threshold and such that the number of points labeled with a label other than this label and present in this zone is less than a given threshold.

[0077] In step E4, each of the sub-datasets S.sub.i,l for one and the same range ΔV.sub.l is then provided at input of the previously trained support vector machine, along with thresholds for each type of object and for each range, so as to determine the first detection zones Z.sub.i,j,l of the objects of type T.sub.i,j. Each zone Z.sub.i,j,l thus encompasses the initial positions P.sub.i,j,k(0) of the objects O.sub.i,j,k of the type of object T.sub.i,j whose initial speed V.sub.i,j,k(0) is within the range ΔV.sub.l. It is furthermore noted that each zone Z.sub.i,j,l is thus modeled by the support vector machine such that the probability of an object O.sub.i,j,k of the type of object T.sub.i,j being detected therein, when the initial speed V.sub.i,j,k(0) is within the range ΔV.sub.l, is at a maximum, and such that the probability of an object O.sub.i,j,k of a type other than said type of object T.sub.i,j being detected therein, when the initial speed V.sub.i,j,k(0) is within the range ΔV.sub.l, is at a minimum.

[0078] In a step E51, an initial detection zone A.sub.i,l is determined by combining the first detection zones Z.sub.i,j,l of the objects of type T.sub.i,j belonging to one and the same set G.sub.i.
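The patent delimits each zone Z.sub.i,j,l with the trained support vector machine; as a deliberately simplified stand-in (an assumption for illustration, not the claimed method), steps E4 and E51 can be sketched with axis-aligned envelopes:

```python
# Steps E4 and E51 sketch. Here a first detection zone Z_i,j,l is simply the
# axis-aligned box enclosing the initial positions of one type, and the
# initial detection zone A_i,l is the envelope of the boxes of all types of
# one set G_i. This replaces the patented SVM modeling with a toy geometry.

def first_detection_zone(initial_positions):
    """Box (xmin, ymin, xmax, ymax) enclosing all initial positions P(0)."""
    xs = [p[0] for p in initial_positions]
    ys = [p[1] for p in initial_positions]
    return (min(xs), min(ys), max(xs), max(ys))

def initial_detection_zone(zones):
    """Envelope of the first detection zones of the types of one set."""
    return (min(z[0] for z in zones), min(z[1] for z in zones),
            max(z[2] for z in zones), max(z[3] for z in zones))

z_pedestrian = first_detection_zone([(1.0, 2.0), (3.0, 1.0)])   # Z_2,1,l
z_vehicle = first_detection_zone([(0.5, 0.8), (4.0, 2.5)])      # Z_2,2,l
a_2_l = initial_detection_zone([z_pedestrian, z_vehicle])       # A_2,l
```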

[0079] [FIG. 4] thus shows the sub-datasets S.sub.1,1, S.sub.2,1 and S.sub.3,1 for initial speeds between 90 and 130 km/h. [FIG. 4] also shows the zones Z.sub.2,1,1, Z.sub.2,2,1 and Z.sub.3,1,1, associated respectively with the types T.sub.2,1, T.sub.2,2 and T.sub.3,1, determined at the end of step E4, and the zones A.sub.1,1, A.sub.2,1 and A.sub.3,1 determined at the end of step E51.

[0080] [FIG. 5] also shows the sub-datasets S.sub.1,2, S.sub.2,2 and S.sub.3,2 for initial speeds between 50 and 90 km/h. [FIG. 5] also shows the zones Z.sub.2,1,2, Z.sub.2,2,2 and Z.sub.3,1,2, associated respectively with the types T.sub.2,1, T.sub.2,2 and T.sub.3,1, determined at the end of step E4, and the zones A.sub.1,2, A.sub.2,2 and A.sub.3,2 determined at the end of step E51.

[0081] [FIG. 6] also shows the sub-datasets S.sub.1,3, S.sub.2,3 and S.sub.3,3 for initial speeds between 0 and 50 km/h. [FIG. 6] also shows the zones Z.sub.2,1,3, Z.sub.2,2,3 and Z.sub.3,1,3, associated respectively with the types T.sub.2,1, T.sub.2,2 and T.sub.3,1, determined at the end of step E4, and the zones A.sub.1,3, A.sub.2,3 and A.sub.3,3 determined at the end of step E51.

[0082] The zones A.sub.1,1, A.sub.1,2 and A.sub.1,3 associated with the set G.sub.1 of traffic signs are zones located more in the upper part of the road scene, the zones A.sub.2,1, A.sub.2,2 and A.sub.2,3 associated with the set G.sub.2 of road users are zones located more in the center of the road scene, and the zones A.sub.3,1, A.sub.3,2 and A.sub.3,3 associated with the set G.sub.3 of objects in the immediate navigable space of the vehicle are zones located more in the lower part of the road scene. It may be seen that the shape, the dimensions and the positions in space of the initial detection zones A.sub.i,l associated with one and the same set G.sub.i vary on the basis of the initial speed.

[0083] Each initial detection zone A.sub.i,l is a zone of space in which the probability of an object, of a type T.sub.i,j belonging to the set G.sub.i associated with this zone, being able to be detected by the detection system 2 based on an image acquired by the camera 21 is particularly high.

[0084] In a step E52, for each initial detection zone A.sub.i,l of objects of type T.sub.i,j belonging to one and the same set G.sub.i, an initial photometry P.sub.i,l is determined that makes it possible to improve the detection performance of the detection system 2 taking into account the types of objects of this set G.sub.i. Determining this initial photometry P.sub.i,l may comprise determining a minimum, average and/or maximum light intensity of a light beam intended to be emitted by the lighting system 3 in the initial detection zone A.sub.i,l or else determining a light intensity for a plurality of pixels, for a plurality of groups of pixels or even for all of the pixels of a light beam intended to be emitted by the lighting system 3 in the initial detection zone A.sub.i,l.

[0085] For example, for the zones A.sub.3,1, A.sub.3,2 and A.sub.3,3, the lighting emitted by the lighting modules 31 and 32 is substantially parallel to the ground. The back-reflection of this lighting to the camera 21 will therefore not be very intense, and so it is necessary for the average light intensity of a light beam emitted in these zones to be high in order to allow the detection of a marking or an obstacle in these zones. For the zones A.sub.2,1, A.sub.2,2 and A.sub.2,3, the lighting emitted by the lighting modules 31 and 32 will be substantially perpendicular to a road user. This lighting will therefore be reflected satisfactorily to the camera 21, such that the average light intensity of a light beam emitted in these zones may be lower than that of a beam emitted in the zones A.sub.3,1, A.sub.3,2 and A.sub.3,3. For the zones A.sub.1,1, A.sub.1,2 and A.sub.1,3, the lighting emitted by the lighting modules 31 and 32 will be substantially perpendicular to a traffic sign. Since a traffic sign is generally provided with a reflective coating, this lighting will be reflected back in amplified form. It is therefore necessary for the average light intensity of a light beam emitted in these zones to be low so as not to saturate the sensors of the camera 21.
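The intensity ordering argued above (high on the road, medium on road users, low on retroreflective signs) can be captured in a small table of illustrative photometries; the 0-255 values are assumptions on a pixel intensity scale, not figures from the patent:

```python
# Step E52 sketch: one initial photometry P_i,l per set of object types.
# Values are illustrative assumptions on a 0-255 pixel intensity scale.

INITIAL_PHOTOMETRY = {
    "G1": 60,    # traffic signs: low, to avoid saturating the camera sensor
    "G2": 140,   # road users: medium, diffuse reflection returns enough light
    "G3": 230,   # ground markings/obstacles: high, grazing light reflects weakly
}

def beam_intensity(set_name: str) -> int:
    """Initial photometry of the default beam emitted in zone A_i,l."""
    return INITIAL_PHOTOMETRY[set_name]
```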

[0086] At the end of step E52, the set of initial detection zones A.sub.i,l and initial photometries P.sub.i,l, for all of the ranges ΔV.sub.1 to ΔV.sub.M and for one and the same set G.sub.i, forms a lighting model M.sub.i associated with this set G.sub.i.

[0087] It should be noted that steps E1 to E52 for determining these lighting models M.sub.1 to M.sub.N, for the sets G.sub.1 to G.sub.N, are carried out by a computer unit comprising a memory, storing the sets G.sub.1 to G.sub.N and the speed ranges ΔV.sub.1 to ΔV.sub.M defined in steps E1 and E1′, along with the datasets S.sub.1 to S.sub.N, and a processor able to implement these steps. The computer unit is separate from the motor vehicle 1, steps E1 to E52 thus being carried out prior to the following steps. At the end of step E52, the models M.sub.1 to M.sub.N are loaded into a memory of the controller for the lighting system 3, for example in the form of images in which each pixel represents a pixel of a pixelated light beam intended to be emitted by the modules 31 and 32, the grayscale level of the pixel of the image representing a light intensity setpoint for an elementary light beam able to be emitted by these modules 31 and 32 so as to form the pixel of the pixelated light beam.

[0088] In a step E6, when the motor vehicle 1 is in an autonomous driving mode, the lighting modules 31 and 32 of the lighting system 3 are controlled by the controller so as to emit, ahead of the vehicle, an overall light beam F formed of multiple light beams F.sub.1 to F.sub.N, each conforming to one of the models M.sub.1 to M.sub.N. When the speed of the motor vehicle is within one of the ranges ΔV.sub.l, each light beam F.sub.i is emitted in the initial detection zone A.sub.i,l with the initial photometry P.sub.i,l. These light beams F.sub.1 to F.sub.N are light beams that are emitted by default, in the absence of detection of an object on the road.
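At run time, step E6 reduces to looking up, for the current ego speed, the zone A.sub.i,l and photometry P.sub.i,l of each model; a minimal sketch with illustrative zone coordinates and container names:

```python
# Step E6 sketch: select the default beam F_i for the current speed range.
# Zone coordinates, photometry values and structure names are illustrative
# assumptions; a real controller would read these from the stored model M_i.

LIGHTING_MODELS = {
    # set -> list indexed by speed-range l of (zone A_i,l, photometry P_i,l)
    "G3": [((0, 0, 8, 2), 230), ((0, 0, 12, 2), 230), ((0, 0, 18, 2), 230)],
}

SPEED_RANGES = [(0, 50), (50, 90), (90, 130)]   # delta-V_1 .. delta-V_3, km/h

def default_beam(set_name: str, speed_kmh: float):
    """Return (zone A_i,l, photometry P_i,l) for the current speed range."""
    for l, (lo, hi) in enumerate(SPEED_RANGES):
        if lo <= speed_kmh < hi:
            return LIGHTING_MODELS[set_name][l]
    raise ValueError("speed outside modeled ranges")

zone, photometry = default_beam("G3", 70.0)
```

Note that the zone widens as the modeled speed range increases, echoing the observation that the zones A.sub.i,l vary with the initial speed.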

[0089] [FIG. 7] shows a road scene, illuminated by way of the beams F.sub.1, F.sub.2 and F.sub.3, emitted simultaneously by the lighting modules 31 and 32, so as together to form a segmented overall light beam F. In the example of [FIG. 7], the motor vehicle is traveling at a speed between 50 and 90 km/h.

[0090] Steps E7 and E8, which will now be described, relate to the adaptation of the segmented overall beam F carried out following the detection of an object O, while step E9 relates to the vehicle switching from an autonomous driving mode to a manual driving mode.

[0091] In a step E7, an object O.sub.1 is detected by the detection system 2, and is classified by this detection system 2 as being of a type T.sub.2,2 belonging to a set G.sub.2. Another object O.sub.2 is detected by the detection system 2, and is classified by this detection system 2 as being of a type T.sub.2,1 belonging to this set G.sub.2. As shown in [FIG. 7], the object O.sub.1 is a motor vehicle and the object O.sub.2 is a pedestrian, these objects being located in the initial detection zone A.sub.2,2. The objects O.sub.1 and O.sub.2 are thus illuminated by the beam F.sub.2, the photometry P.sub.2,2 of which makes it possible to improve the detection performance of these types of objects by the detection system 2.

[0092] In a step E8, following the detection of an object O, the controller controls the lighting system 3 so as to generate a zone B in the light beam, centered on the object O and having a photometry adapted to the type of this object O. In the example described, following the detection of the objects O.sub.1 and O.sub.2, the controller controls the modules 31 and 32 so as to generate, in the beam F.sub.2, a lower-intensity zone B.sub.1, centered on the object O.sub.1, and an over-intensified zone B.sub.2, centered on the object O.sub.2. The zone B.sub.1 allows the detection system 2 to continue to detect the vehicle O.sub.1 while both it and the vehicle 1 are moving, without however dazzling a possible driver of this vehicle. The zone B.sub.2 allows the detection system 2 to continue to detect the pedestrian O.sub.2 while the vehicle 1 is moving. The zones B.sub.1 and B.sub.2 thus remain centered on these objects O.sub.1 and O.sub.2 while they are moving in the field of the camera 21, the estimation of the position of these objects O.sub.1 and O.sub.2 at a given time allowing the controller to move the zones B.sub.1 and B.sub.2 at the next time, as shown in [FIG. 8], until the objects O.sub.1 and O.sub.2 leave the field of the camera. At the end of this step E8, the controller for the lighting system then controls the modules 31 and 32 so that the light beam F.sub.2 conforms to the default lighting model M.sub.2.
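Step E8 can be sketched as a local rescaling of pixel intensities in the pixelated beam around the detected object; the 2-D grid, window size and scale factors below are illustrative assumptions:

```python
# Step E8 sketch: generate a zone B in the pixelated beam, centered on a
# detected object, dimmed for an oncoming vehicle (anti-glare zone B_1) or
# boosted for a pedestrian (over-intensified zone B_2), and re-applied at
# each new estimated position as the object moves. Grid and factors are
# illustrative assumptions.

def apply_object_zone(beam, center, half_width, factor):
    """Scale pixel intensities in a square window of the beam around center.

    beam: 2-D list of intensities (0-255); center: (row, col);
    factor < 1 dims (zone B_1), factor > 1 boosts (zone B_2)."""
    rows, cols = len(beam), len(beam[0])
    r0, c0 = center
    out = [row[:] for row in beam]   # leave the default beam untouched
    for r in range(max(0, r0 - half_width), min(rows, r0 + half_width + 1)):
        for c in range(max(0, c0 - half_width), min(cols, c0 + half_width + 1)):
            out[r][c] = min(255, int(out[r][c] * factor))
    return out

beam = [[140] * 5 for _ in range(5)]                 # default beam F_2
dimmed = apply_object_zone(beam, (2, 2), 1, 0.2)     # zone B_1 on vehicle O_1
```

Re-invoking the function with the object's next estimated position moves the zone with the object, and simply reverting to the stored default beam restores the model M.sub.2 once the object leaves the camera field.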

[0093] In a step E9, when the autonomous driving system receives an instruction I to take back manual control of the motor vehicle 1, the controller controls the lighting system, and in particular the lighting modules 31 and 32, to gradually transform the overall light beam F into a regulatory dipped beam LB. If the autonomous driving system receives an instruction to switch the motor vehicle 1 to an autonomous mode, the controller then controls the lighting system 3 so as to emit F.sub.1, F.sub.2 and F.sub.3, conforming to the models M.sub.1, M.sub.2 and M.sub.3, respectively, using the lighting modules 31 and 32.

[0094] The above description clearly explains how the invention makes it possible to achieve the objectives that it set itself, and in particular by proposing a method for controlling a lighting system for a motor vehicle, wherein data relating to the position of objects, classified according to their types, make it possible to describe at least one zone in which any new object, belonging to one of these types, will be likely to be present, and wherein a photometry is defined that makes it possible to maximize the probability of an object of this type actually being detected by a detection system of the motor vehicle. By virtue of the invention, the light beams emitted by the lighting system are thus intended entirely to support the image acquisition system of the detection system.

[0095] In any event, the invention should not be regarded as being limited to the embodiments specifically described in this document, and extends, in particular, to any equivalent means and to any technically feasible combination of these means. It is possible in particular to envisage types of detection system other than the one described, and in particular systems combining an image acquisition system with other types of sensors, the position of objects on the road being detected and estimated for example through multi-sensor data fusion. It is also possible to envisage types of objects other than those described. It is also possible to envisage other examples of methods for modeling first detection zones, and in particular types of machine learning algorithm other than the one described. It is also possible to envisage modeling first detection zones on the basis of parameters other than the speed of the vehicle.