METHOD AND APPARATUS FOR TESTING A DRIVER ASSISTANCE SYSTEM

20180093674 · 2018-04-05


Abstract

The disclosure relates to a method and an apparatus for testing a driver assistance system. In a method for testing a driver assistance system in a vehicle, the vehicle includes at least one visual sensor, and the driver assistance system initiates a vehicle reaction based on input data provided by the visual sensor. A provision of the input data is modified using at least one virtual object. In this case, the modification is carried out by enriching either an image captured by the visual sensor, or the scenery itself before it is captured by the visual sensor, using the at least one virtual object.

Claims

1. A method for testing a driver assistance system in a vehicle comprising: initiating a vehicle reaction using the driver assistance system based on input data provided by a visual sensor disposed on the vehicle indicative of an image or scenery captured by the visual sensor; and modifying a provision of the input data using at least one virtual object by enriching the input data using the at least one virtual object.

2. The method as claimed in claim 1, wherein enriching the input data includes placing the at least one virtual object in a field-of-view of the visual sensor.

3. The method as claimed in claim 1 further comprising linking actual vehicle environment data to the input data after enriching.

4. The method as claimed in claim 1, wherein the at least one virtual object is indicative of a virtual road marking.

5. The method as claimed in claim 1, wherein the at least one virtual object is indicative of a virtual third-party vehicle.

6. The method as claimed in claim 1, wherein the at least one virtual object is indicative of a virtual traffic sign.

7. A driver assistance system for a vehicle, comprising: at least one visual sensor configured to capture an image indicative of scenery; and a control module configured to, in response to an enrichment of the image with at least one virtual object that generates a modified provision of the image, initiate a vehicle reaction using the modified provision.

8. The driver assistance system as claimed in claim 7, wherein the modified provision places the at least one virtual object in a field-of-view of the visual sensor.

9. The driver assistance system as claimed in claim 8, wherein the control module is further configured to link an environment from the scenery within the field-of-view to the modified provision.

10. The driver assistance system as claimed in claim 7, wherein the at least one virtual object is indicative of a road marking.

11. The driver assistance system as claimed in claim 7, wherein the at least one virtual object is indicative of a third-party vehicle.

12. The driver assistance system as claimed in claim 7, wherein the at least one virtual object is indicative of a traffic sign.

13. A vehicle comprising: a visual sensor disposed on an exterior of the vehicle, the visual sensor being configured to generate input data indicative of an image of scenery within an environment of the vehicle; and a control module configured to, in response to a provision of the input data being modified with a virtual object that generates enriched input data, initiate a reaction to the enriched input data.

14. The vehicle as claimed in claim 13, wherein the provision places the virtual object in a field-of-view of the visual sensor.

15. The vehicle as claimed in claim 14, wherein the control module is further configured to link the environment from the image within the field-of-view to the provision.

16. The vehicle as claimed in claim 15, wherein the virtual object is indicative of a road marking.

17. The vehicle as claimed in claim 16 further comprising a second virtual object indicative of a third-party vehicle, wherein the control module is further configured to modify the provision of the input data with the virtual objects.

18. The vehicle as claimed in claim 17 further comprising a third virtual object indicative of a traffic sign, wherein the control module is further configured to modify the provision of the input data with the virtual objects.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0020] FIG. 1 shows, in a merely schematic illustration, an overview of components of the apparatus according to the disclosure in accordance with one embodiment; and

[0021] FIG. 2 shows a diagram that explains a method of operation of the apparatus, and possible signal flow when carrying out the method according to the disclosure.

DETAILED DESCRIPTION

[0022] As required, detailed embodiments of the present disclosure are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the disclosure that may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present disclosure.

[0023] In this case, it is assumed that a vehicle designated 10 in FIGS. 1 and 2 is equipped with an active driver assistance system, which may be, for example, a lane departure assistant, a traffic sign recognition device or an emergency braking system. The input data for this driver assistance system are provided, inter alia, by at least one visual sensor 35, the field-of-view of which is designated 20 in FIG. 1.

[0024] An essential element of the apparatus according to the disclosure is a combination unit (combiner) 50, which, according to FIG. 2, combines or links at least one virtual object 30 or the data describing such an object 30 with data 5 describing an actual vehicle environment (real world) and, as a result of this linking, generates an accordingly enriched or augmented image 55, which is supplied to the visual sensor 35, or the camera 35.
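The enrichment performed by the combiner 50 can be illustrated with a minimal image-compositing sketch. All function and variable names below are illustrative assumptions chosen for this sketch and are not part of the disclosure; the combiner 50 is simply modeled as a masked overlay of a virtual object onto a real camera frame:

```python
import numpy as np

def enrich_image(real_frame, virtual_object, mask, top_left):
    """Overlay a virtual object (e.g., a virtual traffic sign) onto a
    real camera frame, producing an enriched image analogous to the
    augmented image 55 supplied to the visual sensor 35."""
    enriched = real_frame.copy()
    h, w = virtual_object.shape[:2]
    y, x = top_left
    region = enriched[y:y + h, x:x + w]
    # Blend only where the object's mask is set (mask values in [0, 1]).
    enriched[y:y + h, x:x + w] = (mask[..., None] * virtual_object
                                  + (1.0 - mask[..., None]) * region)
    return enriched

# Example: place a bright virtual marker into a uniform gray scene.
frame = np.full((120, 160, 3), 80.0, dtype=np.float32)
sign = np.full((20, 20, 3), 255.0, dtype=np.float32)
mask = np.ones((20, 20), dtype=np.float32)
enriched = enrich_image(frame, sign, mask, (10, 10))
```

The real frame is left untouched; only the returned copy carries the virtual object, so the same captured scenery can be enriched with different virtual objects in successive test runs.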

[0025] The at least one virtual object 30 may be, for example, a virtual traffic sign, a virtual third-party vehicle, a virtual lane or any desired other virtual object.

[0026] The active driver assistance system 60 is controlled on the basis of the data provided by the visual sensor 35; the driver assistance system generates corresponding control signals (for example, to brake, accelerate and/or steer the vehicle 10) and transmits these control signals to the respective actuators of the vehicle 10.

[0027] In FIGS. 1 and 2, 40 is used to designate an apparatus (for example, a computer-based apparatus) configured to provide data describing the at least one virtual object 30, which provision is typically carried out via simulation. Numeral 15 is used to designate current vehicle data (such as vehicle position, vehicle speed and vehicle acceleration) that are supplied both to the apparatus 40 and (in conjunction with the data 5 describing the actual vehicle environment, or real world) to the combiner 50.
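How the current vehicle data 15 may be used to place a virtual object 30 in the sensor image can be sketched with a simple pinhole-camera projection. All names, the camera model, and its parameters (focal length, image size, camera at the ego position looking along the heading) are illustrative assumptions for this sketch and are not part of the disclosure:

```python
import numpy as np

def project_virtual_object(obj_world, ego_pos, ego_yaw,
                           focal=800.0, image_size=(1280, 720)):
    """Project a virtual object's world position into pixel coordinates
    of the visual sensor 35, given the current vehicle position and
    heading (part of the vehicle data 15)."""
    dx, dy, dz = (np.asarray(obj_world, dtype=float)
                  - np.asarray(ego_pos, dtype=float))
    c, s = np.cos(ego_yaw), np.sin(ego_yaw)
    forward = c * dx + s * dy      # distance ahead of the vehicle
    lateral = -s * dx + c * dy     # offset to the vehicle's left
    if forward <= 0:
        return None                # object is behind the sensor
    u = image_size[0] / 2 - focal * lateral / forward
    v = image_size[1] / 2 - focal * dz / forward
    return u, v

# A virtual third-party vehicle 10 m straight ahead projects to the
# image center; raising it above the road moves it up in the image.
center = project_virtual_object((10.0, 0.0, 0.0), (0.0, 0.0, 0.0), 0.0)
# → (640.0, 360.0)
```

Because the projection is recomputed from the vehicle data 15 each frame, the virtual object's image position updates consistently as the vehicle 10 moves, which is what allows the simulation in the apparatus 40 to feed the combiner 50 with per-frame object data.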

[0028] The data describing the at least one virtual object 30 can be linked to the data 5 describing the actual vehicle environment (real world) according to the disclosure in different ways. For example, a virtual object image can be placed in the field-of-view of the visual sensor 35; alternatively, a see-through display or an optical combiner can be used.

[0029] While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms of the disclosure. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the disclosure. Additionally, the features of various implementing embodiments may be combined to form further embodiments of the disclosure.