Method for evaluating sensor data, including expanded object recognition

11900691 · 2024-02-13

Abstract

A method for evaluating sensor data. The sensor data are ascertained by scanning a surrounding area, using at least one sensor. On the basis of the sensor data, object detection is carried out for determining objects from the sensor data. Object filtering is carried out. Surface characteristics of at least one object are identified, and/or the surface characteristics of at least one object are ascertained with the aid of access to a database. A control unit is also described.

Claims

1. A method for evaluating sensor data on a road, the method comprising: ascertaining sensor data by scanning a surrounding area, using at least one sensor; based on the sensor data, carrying out object detection for determining objects from the sensor data; carrying out object filtering; and identifying surface characteristics of at least one object, and/or ascertaining the surface characteristics of the at least one object by accessing a database; wherein the road has lanes which are set apart from one another by a roadway divider having specular characteristics, and wherein at least one phantom object in the sensor data is removed, or a position thereof is corrected by triangulation, based on the ascertained surface characteristics of the at least one object.

2. The method as recited in claim 1, wherein the surface characteristics of at least one recognized object are ascertained concurrently with object detection.

3. The method as recited in claim 2, wherein the surface characteristics identified and/or ascertained are used in the object filtering.

4. The method as recited in claim 1, wherein during the object filtering, a comparison with a database is carried out to ascertain surface characteristics of at least one detected object.

5. The method as recited in claim 1, wherein the database is in the form of a map having stored surface characteristics.

6. The method as recited in claim 1, wherein the surface characteristics of at least one recognized object are identified or classified, using machine learning.

7. The method as recited in claim 1, wherein: (i) the surface characteristics of at least one recognized object are ascertained in the course of sensor-data fusion, using object filtering, based on the access to the database, or (ii) previously detected surface characteristics of at least one recognized object are used by object filtering in the course of sensor data fusion.

8. An apparatus for coupling to at least one sensor, comprising: a control unit configured for retrieving sensor data, captured on a road, from the at least one sensor, and for evaluating the sensor data by performing the following: ascertaining the sensor data by scanning a surrounding area, using the at least one sensor; based on the sensor data, carrying out object detection for determining objects from the sensor data; carrying out object filtering; and identifying surface characteristics of at least one object, and/or ascertaining the surface characteristics of the at least one object by accessing a database; wherein the road has lanes which are set apart from one another by a roadway divider having specular characteristics, and wherein at least one phantom object in the sensor data is removed, or a position thereof is corrected by triangulation, based on the ascertained surface characteristics of the at least one object.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows the surroundings of a vehicle having sensors, in order to illustrate a method according to one specific embodiment of the present invention.

(2) FIG. 2 shows a schematic flow chart for illustrating the method according to a further specific embodiment of the present invention.

(3) FIG. 3 shows a schematic flow chart for illustrating the method according to a further specific embodiment of the present invention, which includes an object filter that accesses a database.

(4) FIG. 4 shows a schematic flow chart for illustrating the method according to a further specific embodiment of the present invention, which includes sensor-data fusion; and

(5) FIG. 5 shows a schematic flow chart for illustrating the method according to a further specific embodiment of the present invention, which includes an object filter that accesses a database in the course of sensor-data fusion.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

(6) The surroundings of a vehicle 1 having sensors 2, 4 are represented in FIG. 1, in order to illustrate a method 6 according to one specific embodiment of the present invention. In particular, the figure shows an example of a traffic situation including two lanes in one direction.

(7) Sensors 2, 4 may be, for example, a lidar sensor 2 and a camera 4.

(8) Sensors 2, 4 are connected to a control unit 5. Control unit 5 is used for retrieving the sensor data and for executing method 6.

(9) In the surroundings, the lanes are set apart from each other by a roadway divider 8. Roadway divider 8 has specular surface characteristics.

(10) Vehicle 1 recognizes a vehicle 10 traveling ahead; a further vehicle 12, concealed by vehicle 10 traveling ahead, is detected only as a reflected phantom vehicle 12a. Using method 6, the specular surface of roadway divider 8 may either be detected directly or ascertained from a map. Using this information, phantom object 12a may either be removed or placed at the correct position of further vehicle 12 by triangulation.
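The triangulation-based position correction described above can be sketched in a few lines. The following Python snippet is illustrative only and not part of the claimed method: it assumes a bird's-eye-view coordinate frame in which roadway divider 8 is modeled as the straight line y = divider_y, and all names (`Point`, `reflect_across_divider`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Point:
    """Object position in a bird's-eye-view coordinate frame (meters)."""
    x: float
    y: float

def reflect_across_divider(phantom: Point, divider_y: float) -> Point:
    """Mirror a phantom detection across a specular roadway divider
    modeled as the horizontal line y = divider_y, recovering the
    approximate true position of the concealed vehicle."""
    return Point(phantom.x, 2.0 * divider_y - phantom.y)
```

For example, a phantom detected at (10, 5) with the divider at y = 3 is mapped back to (10, 1).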

(11) FIG. 2 shows a schematic flow chart for illustrating the method 6 according to a further specific embodiment. In particular, a sensor architecture is represented. In this context, object detection 14 takes place with the aid of recorded sensor data 13 from the detector of sensor 2, 4. Material/surface classification 16 is carried out concurrently with object detection 14. Using surface classification 16 and the object data, a downstream object filter 18 may reduce the false-positive rate and/or increase the true-positive rate. Surface classification 16 may be implemented, for example, by a previously trained neural network.

(12) A schematic flow chart for illustrating the method 6 according to a further specific embodiment is shown in FIG. 3, the further specific embodiment including an object filter 18 that accesses a database 20. In this connection, in particular, the information regarding the surface characteristic is read out of a map 20, with object filter 18 assuming the same function as the surface classification 16 shown in FIG. 2. Consequently, recognized objects 8, 10, 12, 12a are assigned their surface characteristics by object filter 18. Subsequently, the results of object filter 18 may be processed further.
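The map-backed assignment of surface characteristics can be sketched as a lookup keyed by a coarse grid cell. This is a hypothetical illustration, not the patent's implementation: the grid-cell keying, the `surface_map` contents, and all names are assumptions.

```python
def annotate_with_surface(objects, surface_map, default="unknown"):
    """Object filter with database access (cf. FIG. 3): assign each
    recognized object the surface characteristic stored in the map
    for the grid cell containing its position."""
    annotated = []
    for obj in objects:
        cell = (round(obj["x"]), round(obj["y"]))  # coarse map key
        annotated.append({**obj, "surface": surface_map.get(cell, default)})
    return annotated
```

A divider segment stored at cell (0, 3) as "specular" would then be matched by any object whose position rounds to that cell.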

(13) A schematic flow chart for illustrating the method 6 according to a further specific embodiment, which includes sensor-data fusion 22, is represented in FIG. 4. In this connection, object filter 18 is applied on the fusion level. Here, in comparison with filtering in the course of the sensor-data processing, a higher false-positive rate of sensors 2, 4 may be permitted, since object filtering 18 occurs later. In comparison with object filtering on the level of the sensor-data processing, one advantage is that the surface estimation of one sensor modality may influence the object recognition of another modality; for example, the surface estimation of video camera 4 may improve the object recognition of lidar sensor 2. The processed sensor data 13 from object filtering element 18 and sensor-data fusion element 22, which include potential objects and surface characteristics, may subsequently be processed further, for example to generate or to update a model of the surroundings.
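The cross-modality benefit described above can be sketched as follows: the camera's surface estimate of the divider decides how lidar returns beyond it are handled at the fusion level. All names and the dictionary-based object representation are hypothetical; this is a sketch under the same bird's-eye-view assumption as before, not the patent's implementation.

```python
def fuse_and_filter(lidar_objects, camera_surface_of_divider, divider_y):
    """Fusion-level object filter (cf. FIG. 4): if the camera classifies
    the divider as specular, lidar returns beyond it are treated as
    phantoms and mirrored back to the estimated true position rather
    than discarded."""
    specular = camera_surface_of_divider == "specular"
    fused = []
    for obj in lidar_objects:
        if specular and obj["y"] > divider_y:
            # Correct the phantom position by reflection across the divider.
            obj = {**obj, "y": 2.0 * divider_y - obj["y"], "corrected": True}
        fused.append(dict(obj))
    return fused
```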

(14) FIG. 5 shows a schematic flow chart for illustrating the method 6 according to a further specific embodiment, which includes an object filter 18 that accesses a database 20 in the course of sensor-data fusion 22. In particular, a fusion architecture is shown, in which the object filter 18 on the level of the sensor-data fusion accesses a database 20, in order to carry out identification of surface characteristics on the basis of sensor data 13 of a plurality of sensors 2, 4. In this connection, the information regarding the surface characteristic is read out of a map 20, for example.