METHOD FOR DETECTING A SCREENING OF A SENSOR DEVICE OF A MOTOR VEHICLE BY AN OBJECT, COMPUTING DEVICE, DRIVER-ASSISTANCE SYSTEM AND MOTOR VEHICLE
20170343649 · 2017-11-30
Inventors
- Alexander Suhre (Bietigheim-Bissingen, DE)
- Youssef-Aziz Ghaly (Bietigheim-Bissingen, DE)
- Urs Luebbert (Bietigheim-Bissingen, DE)
CPC classification
G01S7/4039
PHYSICS
G01S2007/4975
PHYSICS
Abstract
The invention relates to a method for detecting a screening of a sensor device (4) of a motor vehicle (1) by an object (8), in which at least one echo signal, captured by the sensor device (4), that characterizes a spacing between the sensor device (4) and the object (8) is received (S1) by means of a computing device (3), a capture region (E) for the sensor device (4) is determined, and on the basis of the at least one received echo signal it is checked whether the capture region (E) of the sensor device (4) is being screened by the object (8), at least in some regions, wherein the at least one echo signal is assigned by means of the computing device (3) to a discrete spacing value (B1, B2, B3) from a plurality of discrete spacing values (B1, B2, B3), for each of the discrete spacing values (B1, B2, B3) a power value (P) is determined (S2) on the basis of the echo signal, and on the basis of the power values (P) a decision is made by means of a classifier as to whether at least a predetermined proportion of the capture region (E) of the sensor device (4) is being screened (S6) by the object (8).
Claims
1. A method for detecting a screening of a sensor device of a motor vehicle by an object, the method comprising: receiving, by a computing device, at least one echo signal, captured by the sensor device, that characterizes a spacing between the sensor device and the object; determining a capture region for the sensor device; on the basis of the at least one received echo signal, checking whether the capture region of the sensor device is being screened by the object at least in some regions; assigning, by the computing device, the at least one echo signal to a discrete spacing value from a plurality of discrete spacing values; determining, for each of the discrete spacing values, a power value on the basis of the echo signal; and deciding, by a classifier and on the basis of the power values for the plurality of discrete spacing values, whether at least a predetermined proportion of the capture region of the sensor device is being screened by the object.
2. The method according to claim 1, wherein the power values for the plurality of discrete spacing values are assigned to a vector, and the vector is compared with a predetermined decision boundary by the classifier.
3. The method according to claim 2, wherein the predetermined decision boundary is predetermined during a training phase of the classifier.
4. The method according to claim 2, wherein the predetermined decision boundary is checked during a test phase of the classifier.
5. The method according to claim 1, wherein a plurality of echo signals are received by the computing device, each of the echo signals being received by the sensor device during a measuring cycle.
6. The method according to claim 1, wherein a relative location between the sensor device and the object is determined by the computing device on the basis of the respective power values for the discrete spacing values.
7. The method according to claim 6, wherein a first partial region of the capture region, which has been screened by the object, and a second partial region of the capture region, in which at least one further object is captured by the sensor device, are determined by the computing device on the basis of the determined relative location between the sensor device and the object.
8. The method according to claim 1, wherein, on the basis of the discrete spacing values, the computing device checks whether, proceeding from the sensor device, a further object, arranged behind the object at least in some regions, is able to be captured in the capture region by the sensor device.
9. The method according to claim 1, wherein the classifier is a support-vector machine, a Parzen-window classifier and/or a discriminant-analysis classifier.
10. A computing device for a driver-assistance system of a motor vehicle, configured to implement a method according to claim 1.
11. A driver-assistance system for a motor vehicle comprising: a computing device according to claim 10; and at least one sensor device.
12. The driver-assistance system according to claim 11, wherein the at least one sensor device comprises one selected from the group consisting of: a radar sensor, an ultrasonic sensor, a laser sensor and a camera.
13. A motor vehicle with a driver-assistance system according to claim 11.
Description
[0024] The invention will now be elucidated in more detail on the basis of preferred exemplary embodiments and also with reference to the appended drawings.
[0025] Shown in these drawings are:
[0026]-[0032] (individual figure descriptions not reproduced)
[0033] The driver-assistance system 2 includes at least one sensor device 4, by means of which an object 8 in an ambient region 7 of motor vehicle 1 can be captured. The ambient region 7 completely surrounds motor vehicle 1. In the present case, an object 8 which is arranged in the ambient region 7 behind motor vehicle 1 can be captured with the at least one sensor device 4. The sensor device 4 has been designed to emit a transmit signal which is reflected from object 8. The reflected transmit signal arrives back at the sensor device 4 as an echo signal. On the basis of the time-delay, the spacing between the sensor device 4 and object 8 can be determined. The sensor device 4 may in principle take the form of an ultrasonic sensor, a radar sensor or a laser sensor. The sensor device may have been arranged in a front region 6 and/or in a rear region 5 of motor vehicle 1.
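The time-of-flight relationship described above can be sketched as follows. This is a minimal illustration, not taken from the patent; the function name and the choice of radar propagation speed are assumptions.

```python
# Illustrative sketch: converting a round-trip echo delay into a spacing
# between sensor device and object (names are assumptions, not from the
# patent). For a radar sensor, the signal propagates at the speed of light.

SPEED_OF_LIGHT_M_S = 299_792_458.0  # propagation speed of a radar signal

def spacing_from_delay(delay_s, propagation_speed=SPEED_OF_LIGHT_M_S):
    """Round-trip echo delay -> one-way spacing: d = v * t / 2."""
    return propagation_speed * delay_s / 2.0

# An echo whose round trip takes 2 m / c corresponds to a 1 m spacing.
one_metre_delay = 2.0 / SPEED_OF_LIGHT_M_S
print(spacing_from_delay(one_metre_delay))
```

For an ultrasonic sensor, the same formula would apply with the speed of sound substituted for the propagation speed.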
[0034] In the present exemplary embodiment, motor vehicle 1—or, to be more exact, the driver-assistance system 2—includes two spacing sensors 4 which take the form of radar sensors and which have been arranged in the rear region 5 of the motor vehicle. The spacing sensors may, in particular, take the form of continuous-wave radar sensors. The spacing sensors 4 may, for example, have been arranged in concealed manner behind a bumper of motor vehicle 1. Motor vehicle 1—or, to be more exact, the driver-assistance system 2—exhibits, in addition, a computing device 3. The computing device 3 may, for example, be constituted by a computer, by a digital signal processor or such like. The computing device 3 may also be an electronic control unit (ECU) of motor vehicle 1.
[0035] In the present case, it is to be checked whether one of the sensor devices 4 has been screened by an object 8. This is represented in exemplary manner in the figures.
[0036] The capture region E in the present case is assumed to have the shape of a sector of a circle. The capture region E is consequently divided into a first partial region 10, which has been screened by object 8, and into a second partial region 11 in which, where appropriate, further objects 9 can be detected by means of the sensor device 4. The second partial region 11 exhibits the beam angle α. In the present case, only a part of the further object 9 can be captured by means of the sensor device 4. Consequently the lateral spacing W and the longitudinal spacing L can, for example, be determined by means of the sensor device. The lateral spacing W may be, for example, a spacing at which a warning signal is output if an object 9 is located there.
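The sector geometry described above can be illustrated with a short sketch. The function names and the decomposition into lateral and longitudinal components are illustrative assumptions, not taken from the patent.

```python
import math

# Illustrative sketch of the sector-shaped capture region E: the screened
# proportion follows from the remaining free beam angle (alpha) of the
# unscreened partial region; a measured spacing can be decomposed into a
# lateral component W and a longitudinal component L.

def screened_proportion(capture_angle_rad, free_beam_angle_rad):
    """Fraction of the capture region E that is screened, given the beam
    angle (alpha) of the unscreened second partial region."""
    return 1.0 - free_beam_angle_rad / capture_angle_rad

def lateral_longitudinal(spacing_m, bearing_rad):
    """Decompose a measured spacing into lateral (W) and longitudinal (L)
    components relative to the sensor axis."""
    return spacing_m * math.sin(bearing_rad), spacing_m * math.cos(bearing_rad)
```

For example, a 120-degree capture region with a remaining free beam angle of 60 degrees would be half screened.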
[0037]
[0038] In a step S2, the echo signal is processed further by means of the computing device 3. The echo signal, which describes a spacing between the sensor device 4 and object 8, can now be assigned to a discrete spacing value B1, B2, B3. For each of the spacing values B1, B2, B3, a power value P can then be determined by means of the computing device. The power value P can be determined for each of the discrete spacing values B1, B2, B3 on the basis of the signal power of the echo signal. The respective power values P for each of the discrete spacing values B1, B2, B3 are assigned to a vector.
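The assignment of echo signals to discrete spacing values B1, B2, B3 and the accumulation of a power value P per spacing value, as described in step S2, might be sketched as follows. The sample data and all names are illustrative, not from the patent.

```python
# Illustrative sketch of step S2: binning (spacing, amplitude) echo samples
# into discrete spacing values and summing a power value per bin.

def power_vector(echoes, bin_edges):
    """Map (spacing_m, amplitude) echo samples from one measuring cycle to
    per-bin power values (amplitude squared, summed per bin).

    bin_edges: ascending edges defining the discrete spacing values
               B1, B2, ... (len(bin_edges) - 1 bins).
    """
    power = [0.0] * (len(bin_edges) - 1)
    for spacing, amplitude in echoes:
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= spacing < bin_edges[i + 1]:
                power[i] += amplitude ** 2
                break
    return power

# A screening object close to the sensor concentrates power in the nearest
# bin, while the farther bins stay near the noise floor.
echoes = [(0.4, 2.0), (0.5, 1.5), (2.5, 0.5)]
print(power_vector(echoes, [0.0, 1.0, 2.0, 3.0]))  # [6.25, 0.0, 0.25]
```

The resulting list corresponds to the vector of power values P that is subsequently handed to the classifier.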
[0039] In a step S3, the vector is compared with a predetermined decision boundary by means of a classifier. The classifier can be made available by an appropriate computer on which an appropriate classification method is implemented. The classifier can also be made available by the computing device 3 itself. In the present case it will be assumed that, in the case of a screening, the screening object 8 is situated at a spacing remote from motor vehicle 1—or, to be more exact, from the spacing sensor 4. In this case, the power value P for whichever of the discrete spacing values B1, B2, B3 is closest to that spacing will be highest. By contrast, the power values P for the other spacing values B1, B2, B3 will be very much smaller. These power values may be at the level of noise, for example. On the basis of the power values P for the discrete spacing values B1, B2, B3, on the one hand it can now be ascertained whether a screening object 8 is arranged in the capture region E of the spacing sensor 4. Furthermore, it can be determined whether the screening is so strong that the power values P for regions behind object 8 are sufficiently small. Consequently it can be inferred that the spacing sensor 4 can ‘see’ nothing more behind object 8, and the field of view has consequently been impaired.
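The comparison of the power vector with a linear decision boundary in step S3 could look like this in outline. The weights shown are placeholders chosen for illustration; in the method described, the boundary would come from the training phase.

```python
# Illustrative sketch of step S3: comparing the vector of power values
# with a linear decision boundary (a hyperplane). Weights and bias here
# are placeholder assumptions, not values from the patent.

def is_screened(power_values, weights, bias):
    """Linear decision boundary: w . p + b > 0 -> 'screened' class."""
    score = sum(w * p for w, p in zip(weights, power_values)) + bias
    return score > 0.0

# A screening object concentrates power in the nearest bin, so a boundary
# weighting the near bin against the far bins can separate the two classes.
print(is_screened([6.25, 0.0, 0.25], [1.0, -1.0, -1.0], -3.0))  # True
print(is_screened([1.0, 1.2, 0.9], [1.0, -1.0, -1.0], -3.0))    # False
```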
[0040] In the unscreened case, the power values P for the respective discrete spacing values B1, B2, B3 should exhibit similar values. This is represented in the figures.
[0041] For the purpose of determining the decision boundary, the classifier can firstly be operated in a training phase according to step S4. For this purpose, a reference object, for example, can be positioned at a predetermined spacing from the spacing sensor. Subsequently it can be decided to which class—‘screened’ or ‘not screened’—this spacing is to pertain. Consequently a ground-truth label can be defined. Subsequently the vector that comprises the discrete spacing values B1, B2, B3 and the associated power values P can be determined. This can be carried out for different spacings between the spacing sensor 4 and the reference object, and also for different reference objects. On the basis of the measured vectors with their ground-truth label, the classifier can then determine the decision boundary which is, for example, a line, a hyperplane or a probability density function, depending upon the classifier being used.
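The training phase of step S4—determining a decision boundary from ground-truth-labelled power vectors—can be sketched with a simple perceptron-style fit. This is a stand-in for the support-vector machine, Parzen-window or discriminant-analysis classifiers named in the claims; the training data and all names are illustrative assumptions.

```python
# Illustrative sketch of step S4: fitting a linear decision boundary
# (hyperplane w, b) from labelled power vectors with a perceptron-style
# update rule. Label +1 = 'screened', -1 = 'not screened'.

def train_boundary(samples, labels, epochs=100, lr=0.1):
    """Return (weights, bias) separating the two classes, if separable."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if (1 if score > 0.0 else -1) != y:  # misclassified -> update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Illustrative ground truth: 'screened' vectors concentrate power in the
# near bin; 'not screened' vectors spread power across all bins.
samples = [[6.0, 0.1, 0.1], [5.0, 0.2, 0.3], [1.0, 1.2, 0.9], [0.8, 1.0, 1.1]]
labels = [1, 1, -1, -1]
w, b = train_boundary(samples, labels)
```

A support-vector machine would additionally maximize the margin around the boundary, but the principle of learning the boundary from labelled reference measurements is the same.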
[0042] In the present exemplary embodiment, provision is likewise made, according to step S5, for the decision boundary to be checked in a test phase of the classifier. For this purpose, an object can be positioned at a predetermined spacing from the spacing sensor 4.
[0043] Subsequently the vector can be determined. In addition, the vector can be compared with the decision boundary, and it can be decided whether this object is to be assigned to the ‘screened’ or ‘not screened’ class. This has, for example, been made clear in connection with the figures.
[0044] In step S3, the ascertained power values P for the discrete spacing values B1 and B2 are now compared against the decision boundary. In the present case, points 12 have been assigned to the ‘screened’ class, and points 13 have been assigned to the ‘not screened’ class. It should be noted that only two-dimensional input data were used in this illustration.
[0045] Consequently, in a step S6 it can be decided by means of the classifier whether the vector that comprises the respective power values P for the discrete spacing values B1, B2, B3 will be assigned to a ‘screened’ class or to a ‘not screened’ class. Consequently it can be determined in straightforward manner whether or not object 8 is screening the capture region E of the spacing sensor. In addition, it can be determined to what extent object 8 is screening the capture region E of the spacing sensor 4.