METHOD AND PROCESSOR CIRCUIT FOR OPERATING AN AUTOMATED DRIVING FUNCTION WITH OBJECT CLASSIFIER IN A MOTOR VEHICLE, AS WELL AS THE MOTOR VEHICLE
20230033314 · 2023-02-02
CPC classification
G06V10/25
PHYSICS
Abstract
An automated driving function in a motor vehicle comprises: a processor circuit of the motor vehicle recognizes respective individual objects in individual images of an environment of the motor vehicle from sensor data of at least one sensor of the motor vehicle by means of at least one object classifier. At least one relational classifier using the object data for at least some of the individual objects additionally recognizes a respective pairwise object relation with the aid of predetermined relation features of the individual objects in the respective individual image determined from the sensor data, which relation is described by relational data, and an aggregation module is used to aggregate the relational data throughout multiple consecutive individual images to produce aggregation data, which describe aggregated object relations.
Claims
1. A method for operating an automated driving function in a motor vehicle, comprising: recognizing, by a processor circuit of the motor vehicle, from sensor data of at least one sensor of the motor vehicle, respective individual objects in individual images of an environment of the motor vehicle described by the sensor data by at least one object classifier, wherein at least one recognized object characteristic of the individual object is indicated by object data; recognizing, by at least one relational classifier using the object data for at least some of the individual objects, a respective pairwise object relation with the aid of predetermined relation features of the individual objects in the respective individual image determined from the sensor data, which is described by relational data; and aggregating, by an aggregation module, the relational data throughout multiple consecutive individual images to produce aggregation data, which describe aggregated object relations, and the aggregation data are provided to a tracking module for an object tracking and/or to the automated driving function for a trajectory planning.
2. The method according to claim 1, wherein the aggregation module, in dependence on a frequency of repetition and/or quality of a recognition of the particular object relation, determines a weighting value for a particular aggregated object relation and by this describes the aggregation data, and/or the aggregated object relations which produce a closed relational graph of object relations for several of the individual objects describe these individual objects as a related object group.
3. The method according to claim 1, wherein respective individual images are received as respective sensor data from multiple sensors, a separate relational classifier operates for each sensor, and the aggregation module performs a relational data fusion for the relational data of the multiple relational classifiers.
4. The method according to claim 1, wherein the object classifier uses the object data as respective object characteristics to indicate a bounding box and/or an object type.
5. The method according to claim 1, wherein the aggregation data are used in the tracking module to perform an object tracking of an individual object hidden in at least one individual image of multiple consecutive individual images and/or the relational data are formed in the aggregation module by tracking data from the tracking module throughout multiple consecutive individual images by identifying a hidden individual object through the tracking module.
6. The method according to claim 1, wherein the relational classifier signals, as a pairwise object relation, a relative arrangement of the particular individual objects by a directional relation statement, especially adjacent, consecutive, predecessor, successor, and/or a nondirectional relation statement.
7. The method according to claim 1, wherein the relational classifier performs the recognition of the particular object relation independently of a subsequently generated environment map of the driving function and/or without information about planned trajectories of the driving function.
8. The method according to claim 1, wherein the driving function comprises a situation recognition for at least one driving situation and the respective driving situation is described as a combination of individual objects and their aggregated object relations.
9. The method according to claim 8, wherein the driving situation detected is an approach to an intersection, wherein aggregation data on object relations between stationary infrastructure objects, especially lane boundaries and/or lane arrows, and/or object relations between infrastructure objects and vehicles and/or object relations between vehicles are combined to form route hypotheses in regard to available routes, wherein possible routes are matched up with groups of traffic lights through a relation recognition from aggregated object relations between the traffic lights and the individual objects describing a route.
10. The method according to claim 8, wherein one driving situation which is detected is a route with no road marking, wherein the individual objects detected are vehicles traveling on the route in a consecutive series one after the other, and the aggregated object relations are used to recognize the series of consecutively traveling vehicles and the geometrical trend of the series is signaled as the route.
11. The method according to claim 8, wherein one driving situation which is detected is a road boundary formed from discrete, similar individual objects, especially in a construction site and/or on a country road, wherein the trend of the boundary is determined through the aggregated object relations of the individual objects, especially with the aid of an aggregated object relation indicating that the particular individual object is located behind a particular predecessor object.
12. The method according to claim 1, wherein the driving function comprises an object recognition for at least one predetermined environment object and the particular environment object is described as a combination of individual components and their aggregated object relations.
13. The method according to claim 1, wherein, for the generating and/or the updating of a digital road map by the motor vehicle and by at least one further motor vehicle, the respective motor vehicle by its own processor circuit determines aggregation data of aggregated object relations according to the method and sends the aggregation data to a vehicle-external server by a predetermined communication method, and the server determines a confidence value of the aggregated object relations, which is dependent on how many motor vehicles have respectively reported the object relation, and if the confidence value of a particular object relation is greater than a predefined threshold value, the particular object relation is entered into the digital road map, and/or, for an initialization of the aggregation module, initial object relations already present in the road map are read out and aggregated with sensor data.
14. A processor circuit for a motor vehicle, wherein the processor circuit is adapted to carry out a method for operating an automated driving function in the motor vehicle, the method comprising: recognizing, by a processor circuit of the motor vehicle, from sensor data of at least one sensor of the motor vehicle, respective individual objects in individual images of an environment of the motor vehicle described by the sensor data by at least one object classifier, wherein at least one recognized object characteristic of the individual object is indicated by object data; recognizing, by at least one relational classifier using the object data for at least some of the individual objects, a respective pairwise object relation with the aid of predetermined relation features of the individual objects in the respective individual image determined from the sensor data, which is described by relational data; and aggregating, by an aggregation module, the relational data throughout multiple consecutive individual images to produce aggregation data, which describe aggregated object relations, and the aggregation data are provided to a tracking module for an object tracking and/or to the automated driving function for a trajectory planning.
15. A motor vehicle having a processor circuit adapted to carry out a method for operating an automated driving function in the motor vehicle, the method comprising: recognizing, by a processor circuit of the motor vehicle, from sensor data of at least one sensor of the motor vehicle, respective individual objects in individual images of an environment of the motor vehicle described by the sensor data by at least one object classifier, wherein at least one recognized object characteristic of the individual object is indicated by object data; recognizing, by at least one relational classifier using the object data for at least some of the individual objects, a respective pairwise object relation with the aid of predetermined relation features of the individual objects in the respective individual image determined from the sensor data, which is described by relational data; and aggregating, by an aggregation module, the relational data throughout multiple consecutive individual images to produce aggregation data, which describe aggregated object relations, and the aggregation data are provided to a tracking module for an object tracking and/or to the automated driving function for a trajectory planning.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
DETAILED DESCRIPTION
[0047] In the embodiments described herein, the components described each represent individual features to be viewed independently of each other, which may also develop additional embodiments further independently of each other. Therefore the disclosure should also encompass other than the represented combinations of features of the embodiments. Furthermore, the embodiments described can also be supplemented with other of the features already described.
[0048] In the figures, elements having the same or similar function may be given the same reference numbers.
[0050] The automated driving function 12 can receive signals from the environment U by means of at least one sensor 15 for a detection of objects in the environment U. Examples of sensors 15 are: a camera for visible light, an infrared camera, a radar, a lidar, an ultrasound sensor, to mention only a few examples. The sensors 15 can each generate sensor data 16, which can be received by the processor circuit 11. The sensor data 16 can be generated in succession in individual measurement cycles, so that the sensor data 16 are continually updated. The sensor data 16 of one measurement cycle then produce a respective individual image 17 of the environment U, i.e., a color image or a black-and-white image or a radar image or a lidar image or an image of ultrasound reflections, to mention only a few examples. The processor circuit 11 can perform an object detection of individual objects 18 in the environment U in the manner described hereafter, which is known as such from the prior art. A feature extraction 19 can be implemented, for example separately or individually for each sensor 15, on the basis of an artificial neural network 20. The artificial neural network 20 is represented here by symbolic feature matrices of the feature extraction 19. On the basis of the feature extraction 19, image features such as edge lines and/or depth information and/or segmented surfaces which fulfill a homogeneity criterion (for example, uniform color or the same pattern) can be detected and marked in the individual images 17. On the basis of the individual images 17 and/or the feature extraction 19, one or more object classifiers 21 can perform an object detection for the detecting of the individual objects 18 in individual sub-regions of the individual images 17. Such an object classifier 21 can be implemented in known manner on the basis of a machine learning algorithm, in particular an artificial neural network.
The object detection of the object classifiers 21 can result in object data 22, which indicate where and which individual object 18 has been detected in the respective individual image 17. On the basis of the object data 22 of the object classifiers 21, a tracking module 23 can perform an object tracking 24, which follows or tracks the position of the individual objects 18 over multiple individual images 17, that is, over multiple measurement cycles. This can be done individually for each sensor 15, or in a sensor fusion the object data 22 of multiple sensors 15 can be combined and the tracking of the individual objects 18 can be done by the tracking module 23 in a combined object tracking 24. Corresponding tracking modules 23 are known in the prior art. The tracking can result in tracking data 25, which can be provided to the automated driving function 12 for the driving task (longitudinal control and/or transverse control), that is, for generating the control commands 14. The automated driving function 12 may comprise multiple sub-functions 26, indicated here for example as sub-functions F1, F2, . . . , Fn. One sub-function 26 may be the recognition of an approach to a traffic light 27 or the recognition of a construction site barrier, as will be further explained in the following.
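The association step of such a tracking module 23 can be illustrated by a minimal sketch, assuming a simple nearest-neighbor matching of bounding-box centers over consecutive individual images; the function names (`associate`, `center`) and the distance threshold are illustrative assumptions, not the implementation described here.

```python
# Minimal sketch of per-frame object tracking by nearest-neighbor
# association of bounding-box centers; greedy and without exclusivity,
# purely for illustration of the tracking step.
import math

def center(box):
    # box = (x_min, y_min, x_max, y_max)
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def associate(tracks, detections, max_dist=50.0):
    """Assign each detection to the closest existing track, or open a new one.

    tracks: dict track_id -> last bounding box of that track
    detections: list of bounding boxes from the current individual image
    Returns the updated tracks dict and the detection-index -> track_id mapping.
    """
    next_id = max(tracks, default=-1) + 1
    assignment = {}
    for i, det in enumerate(detections):
        cx, cy = center(det)
        best_id, best_d = None, max_dist
        for tid, box in tracks.items():
            tx, ty = center(box)
            d = math.hypot(cx - tx, cy - ty)
            if d < best_d:
                best_id, best_d = tid, d
        if best_id is None:  # no track nearby: start a new track
            best_id = next_id
            next_id += 1
        tracks[best_id] = det
        assignment[i] = best_id
    return tracks, assignment
```

A real tracking module would additionally handle track termination, occlusion, and motion prediction, which are omitted here.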
[0051] Another sub-function 26 may be the generating of an environment map or situation map 28, in which the position of the detected individual objects 18 can be mapped on the basis of the tracking data 25 in the situation map 28 relative to the motor vehicle 10 (in the situation map 28 the front of the vehicle is represented symbolically from a bird's eye view). On the basis of such a situation map 28, another sub-function 26 can be a trajectory planning, for example, to plan a driving trajectory 29 by which or through which the control commands 14 can then be generated.
[0052] In this signal flow for the automated driving function 12, one relational classifier 30 can be implemented for example individually for each sensor 15, not depending on either the situation map 28 or the planned driving trajectory 29. The relational classifier 30 can work on the sensor data 16 and/or the object data 22 for the individual images 17 or process them. A respective relational classifier 30 can be implemented on the basis of an algorithm for machine learning, for example an artificial neural network, and/or on the basis of an algorithmic evaluation of geometrical relations. The person skilled in the art can determine a suitable geometrical criterion with the aid of individual images from test drives.
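As one possible realization of the algorithmic (non-learned) evaluation of geometrical relations mentioned above, a pairwise criterion over bounding boxes could look as follows; the relation labels mirror those used in the text ("next to," "predecessor," "successor"), while the function name, coordinate convention, and tolerance are illustrative assumptions.

```python
# Hedged sketch of a geometric pairwise relation criterion, one possible
# algorithmic realization of a relational classifier 30.
def pairwise_relation(box_a, box_b, lateral_tol=0.5):
    """Classify the relative arrangement of two boxes (x_min, y_min, x_max, y_max).

    In image coordinates with y growing downward, an object lower in the
    image is treated as nearer, i.e., the predecessor of one higher up;
    comparable vertical positions yield "next_to".
    """
    cay = (box_a[1] + box_a[3]) / 2.0   # vertical center of box A
    cby = (box_b[1] + box_b[3]) / 2.0   # vertical center of box B
    height_a = box_a[3] - box_a[1]
    if abs(cay - cby) < lateral_tol * height_a:
        return "next_to"
    return "predecessor" if cay > cby else "successor"
```

As the text notes, a suitable geometrical criterion of this kind would in practice be tuned with the aid of individual images from test drives.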
[0053] With the relational classifier 30, relational data 31 can be generated or signaled regarding the individual images 17 for the individual objects 18, describing the object relations of the detected individual objects 18 in the individual images 17. The tracking module shown in
[0054] The generation of the relational data 31 and the aggregation data 33 is explained in the following with the aid of
[0057] The object types 67 (such as object type “traffic light”) and the object attributes 66 (such as “traffic light facing south”) and object states (such as “traffic light is now red”) constitute the described object characteristics.
[0058] The visual feature extraction 70 from the image data 60 as well as the spatial feature extraction 63, the context feature extraction 64 and/or the semantic feature extraction 65 can be used as input data for an artificial neural network 72, which carries out the actual function of the relational classifier VRC or the relational classifier 30. In this artificial neural network 72, the aggregation module 32 can also be realized. In place of or in addition to an artificial neural network, another machine learning algorithm and/or a procedural program can be provided for the evaluation. The output data provided can then be the relational data 31 or the aggregation data 33, which can indicate for example pairwise object relations 74 between every two individual objects 18, represented here as a directed graph 75 pointing from one individual object 18 (represented by its bounding box 62) as the subject to another individual object 18 (represented by its bounding box 62) as the object. Each object relation 74 can be described by hypothesis values or probability values 77, which indicate the classification result or recognition result for the possible relation types 76 (for example, "next to," "direct neighbor," "predecessor," "successor"). In the described manner, the relational data 31 or the aggregation data 33 can then be passed on to the tracking module 23 and/or (if the aggregation module 32 is connected to it) to the aggregation module 32.
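The role of the aggregation module 32 can be sketched as follows: per-image relation hypotheses are merged over consecutive individual images, and each aggregated object relation receives a weighting value derived from the frequency of repetition and the quality (probability) of its recognition, in the sense of the weighting described above. The data layout and function name are assumptions for illustration.

```python
# Sketch of an aggregation module: merge pairwise relation hypotheses
# from consecutive individual images into weighted aggregated relations.
from collections import defaultdict

def aggregate_relations(frames):
    """frames: list of per-image relation lists.

    Each relation is (subject_id, object_id, relation_type, probability).
    Returns {(subject, object, type): weight}, where the weight is the
    mean recognition probability scaled by the repetition frequency,
    i.e., relations seen often and confidently get a high weight.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for relations in frames:
        for subj, obj, rtype, prob in relations:
            key = (subj, obj, rtype)
            sums[key] += prob
            counts[key] += 1
    n_frames = len(frames)
    return {key: (sums[key] / counts[key]) * (counts[key] / n_frames)
            for key in sums}
```

A relation recognized in only one of many images thus receives a low weight, while a relation confirmed in every image keeps its full mean probability.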
[0064] Existing sub-functions of an automated driving function, based on recognized object/entity information, thus acquire further objects/entities and their attributes from the fusion. Functions which employ the relational graph furthermore obtain relations and relation attributes. Recognized relations are passed on in the aggregation/fusion to merged objects/entities, and possible contradictions between fusion sources are resolved.
[0065] The basic idea extends the object detection with additional information on the object relations, which can be determined in a real-time system by reusing learned object features.
[0066] The relation graph offers a compact yet comprehensive representation of how an object is to be interpreted in its context, which simplifies the functions built on top of it. Further, it allows arranging objects in independent or overlapping groups, so that different semantic connections can be derived for different driving functions. Due to their common purpose, a benefit in computing expense also results.
[0067] The object relations can easily be tracked together with the corresponding objects, making it possible to pass context information recognized at the sensor on to a time-consistent environment model. Likewise, object relations can be merged, even between different sensor sources. Moreover, thanks to the recognition of relations in individual sensor measurements, there is a latency benefit as compared to determining relations in a module or processing unit downstream of the tracking.
[0068] For static objects, such as occur primarily in electronic maps, the object relations can be determined independently of image processing methods and can be created for example by manual labels and included in the map data. In this way, on the one hand, it is possible to merge map data with the data determined in real time, in order to deduce any information not available due to concealment, and on the other hand the relations from the maps allow an easier matching up of observed objects and map objects, since not only the object but also its relationship can be used for the matching up.
[0069] Specific applications which can be realized with the aid of the object relation graphs are the following:
[0070] A function for recognizing the situation at approaches to an intersection can be organized such that it can distinguish multiple intersections in the sensor data, in order to provide, for example, a description of the upcoming traffic light systems at road forks for the possible directions. The function can output hypotheses on the number of available roads, their possible starting directions, and the states of the relevant light signals. For the road hypotheses, relation information between lane boundaries, arrows, and vehicles can be used, and the cases can be considered in which each of these road indications can or cannot be used in a specific situation (not available, concealed, faded out, not recognized, etc.). Road hypotheses for "discrete" objects, i.e., arrows and vehicles, are constructed through recognized relations. The matching up of possible roads with groups of traffic lights can be done through the relation recognition between the traffic lights and the objects describing a road. The hypothesis as to whether driving is allowed on a recognized road in a possible destination direction is formed after an object tracking, in order to aggregate timing information on traffic light states.
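The matching up of possible roads with groups of traffic lights via recognized relations can be sketched as follows, assuming road hypotheses given as sets of describing object identifiers and light-to-object relations as pairs; all names and data structures are illustrative assumptions.

```python
# Sketch: assign traffic lights to route hypotheses via recognized
# relations between the lights and the objects describing each route
# (arrows, lane boundaries, vehicles).
def match_lights_to_routes(routes, light_relations):
    """routes: dict route_id -> set of object ids describing that route.
    light_relations: list of (light_id, object_id) relation pairs.
    Returns dict route_id -> set of traffic lights matched to it.
    """
    matched = {rid: set() for rid in routes}
    for light, obj in light_relations:
        for rid, objects in routes.items():
            if obj in objects:
                matched[rid].add(light)
    return matched
```

The subsequent hypothesis on whether driving is allowed would then combine such a matching with tracked traffic light states over time, which is not shown here.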
[0071] A function for constructing hypotheses on road limitations consisting of discrete, similar objects can be provided. These occur for example at construction sites or on country roads and make it possible to perceive the route even with heavily soiled or snow-covered traffic lanes. The trend of the limitation is determined via the 3D positioning of the individual objects by a tracking or an object fusion. Compared to describing the trend of the objects by a curve, the approach using object relations offers the benefit that it functions in the same way even for very short road sections limited in this manner.
[0072] In the example of a case where guide beacons serve as the road limitation, these objects as well as their recognized series relations can be used as lane information, without a separate lane marking, for the construction site situation.
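Chaining such series relations into the trend of the limitation can be sketched as follows: the aggregated "behind a predecessor" relations between the beacons are followed from the front of the series to its end, and the positions of the ordered objects trace the boundary. The function name, the relation layout, and the position dictionary are illustrative assumptions.

```python
# Sketch: derive the trend of a road boundary formed from discrete,
# similar objects (e.g., guide beacons) by chaining aggregated
# predecessor relations into an ordered series.
def order_boundary(relations, positions):
    """relations: list of (obj, predecessor_of_obj) pairs.
    positions: dict obj -> (x, y) position of the object.
    Returns the boundary as an ordered list of positions, front to back.
    """
    successor_of = {pred: obj for obj, pred in relations}
    has_pred = {obj for obj, _ in relations}
    # The front of the series is the object that is nobody's successor.
    start = next(o for o in positions if o not in has_pred)
    series = [start]
    while series[-1] in successor_of:
        series.append(successor_of[series[-1]])
    return [positions[o] for o in series]
```

Because only pairwise relations are chained, the same procedure works for very short sections with only two or three boundary objects, matching the benefit described above.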
[0073] A third possible application would be to use the above described method in the context of crowd sourcing for the generating and/or the updating of maps. Example: a vehicle which is part of a fleet drives through an intersection. Here, the vehicle calculates, by means of the above described method, an object relation between, for example, traffic lanes and traffic lights. The vehicle sends this object relation to an external server by means of a Car2X method, for example. A second vehicle then also drives through the intersection, likewise calculates an object relation, and sends this to the server. The server can then assign a "confidence score" to the object relation, which is dependent on how many vehicles have reported the object relation. If the confidence score is above a predefined threshold, the object relation can be used in a map (in order to update it, for example).
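The server-side part of this crowd-sourcing scheme can be sketched as follows: each reported object relation accumulates a confidence score from the number of distinct reporting vehicles, and enters the digital road map once a predefined threshold is reached. The class and method names are assumptions for illustration.

```python
# Sketch: server-side confidence scoring for crowd-sourced object
# relations, thresholded before entry into the digital road map.
from collections import defaultdict

class RelationMapServer:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.reports = defaultdict(set)   # relation -> ids of reporting vehicles
        self.road_map = set()             # relations confirmed for the map

    def report(self, vehicle_id, relation):
        """relation: hashable tuple, e.g. (lane_id, 'controlled_by', light_id)."""
        self.reports[relation].add(vehicle_id)
        if len(self.reports[relation]) >= self.threshold:
            self.road_map.add(relation)

    def confidence(self, relation):
        # Confidence here is simply the count of distinct reporting vehicles.
        return len(self.reports[relation])
```

Counting distinct vehicle identifiers rather than raw reports prevents a single vehicle that repeatedly passes the intersection from pushing a relation over the threshold on its own.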
[0074] On the whole, the examples show how the object relations extracted from individual images of an environment camera can be provided as additional input information for an environment observation.
[0075] German patent application no. 10 2021 119871.2, filed Jul. 30, 2021, to which this application claims priority, is hereby incorporated herein by reference, in its entirety. Aspects of the various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled.