Image-Based Method for Simplifying a Vehicle-External Takeover of Control of a Motor Vehicle, Assistance Device, and Motor Vehicle
20240103548 · 2024-03-28
Inventors
CPC classification
G05D1/2247
PHYSICS
G05D1/80
PHYSICS
G06V20/70
PHYSICS
G06V10/26
PHYSICS
G06V20/56
PHYSICS
G05D1/0061
PHYSICS
G05D1/0038
PHYSICS
International classification
G05D1/80
PHYSICS
G05D1/224
PHYSICS
G06V10/26
PHYSICS
G06V20/56
PHYSICS
Abstract
A method is provided for simplifying a takeover of control of a motor vehicle by a vehicle-external operator. In the method, images of the surroundings of the vehicle are captured from the vehicle and semantically segmented. Errors in a corresponding segmentation model are predicted on the basis of at least one such image each. If a corresponding error prediction triggering a request for the takeover of control is made, an image-based visualization is automatically generated in which exactly one region corresponding to the error prediction is visually highlighted. The request and the visualization are then sent to the vehicle-external operator.
Claims
1.-10. (canceled)
11. A method for simplifying a takeover of control of a motor vehicle by a vehicle-external operator, the method comprising: in a conditionally automated operation of the motor vehicle, acquiring and semantically segmenting images of an environment of the motor vehicle by way of a predetermined trained segmentation model, based on at least one of the images in each case, predicting errors of the segmentation model, for an error prediction, which, according to a predetermined criterion, triggers an automatic output of a request for the takeover of control by the vehicle-external operator, automatically generating an image-based visualization in which an area corresponding to the error prediction is visually highlighted, and sending the request and the visualization to the vehicle-external operator.
12. The method according to claim 11, wherein: the errors of the segmentation model are predicted pixel by pixel, a number of the predicted errors and/or an average error is determined based on the errors of the segmentation model for the respective image, and it is checked as the predetermined criterion whether the number of the errors and/or the average error is greater than a predetermined error threshold value.
13. The method according to claim 11, wherein: the errors of the segmentation model are predicted pixel by pixel, a size of a coherent area of error pixels is determined, and it is checked as the predetermined criterion whether the size corresponds at least to a predetermined size threshold value.
14. The method according to claim 11, wherein: by way of a predetermined reconstruction model, from a semantic segmentation, the image underlying the semantic segmentation is approximated by generating a corresponding reconstruction image and the respective visualization is generated based on the reconstruction image.
15. The method according to claim 14, wherein: the reconstruction model comprises generative adversarial networks.
16. The method according to claim 14, wherein: to predict the errors, the reconstruction image is compared to the respective underlying acquired image and the errors are predicted based on detected differences.
17. The method according to claim 11, wherein: the visualization is generated in a form of a heat map.
18. The method according to claim 11, further comprising: determining which functionality is affected by the errors, and sending the functionality with the request to the vehicle-external operator.
19. An assistance unit for the motor vehicle, the assistance unit comprising: an input interface for acquiring the images, a data storage unit, a processor unit, and an output interface for outputting the request for the takeover of control by the vehicle-external operator and the visualization, wherein the assistance unit is configured to carry out the method according to claim 11.
20. A motor vehicle comprising: a camera for recording the images, the assistance unit according to claim 19, wherein the assistance unit is connected to the camera, and a communication unit for wirelessly sending the request for the takeover of control and the visualization and for wirelessly receiving control signals for control of the motor vehicle.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0032]
[0033]
DETAILED DESCRIPTION OF THE DRAWINGS
[0034] Attempts are presently being made to increasingly automate vehicles; however, completely safe autonomous operation is not yet possible in every situation. A possible scenario is therefore that a vehicle is temporarily autonomously underway, but in individual situations which the vehicle cannot manage autonomously, control of the vehicle is taken over by an operator, in particular one external to the vehicle. However, the problem arises here that such a takeover of control can take a significant time, for example up to 30 seconds. Moreover, it can be difficult for the vehicle-external teleoperator, to whom, for example, multiple views of an environment of the respective vehicle recorded from different perspectives are provided, to grasp the respective environment and driving situation and to react appropriately as quickly as possible, thus to control the respective vehicle safely.
[0035] To counter these difficulties, a method is proposed in the present case, which is illustrated hereinafter.
[0036] In a conditionally automated operation of the motor vehicle 12, images 10 of the environment of the motor vehicle 12 are recorded therefrom, of which one is shown here by way of example and schematically. For this purpose, the motor vehicle 12 can be equipped with at least one camera 40. In the image 10 shown here, a traffic scene along a road 14, on which the motor vehicle 12 is moving, is depicted by way of example. The road 14 is laterally delimited therein by buildings 16 and is spanned by a bridge 18. In addition, the sky 20 is also depicted in some areas. Furthermore, an external vehicle 22 is shown here by way of example as representative of other road users. An obstacle, in the present case in the form of multiple traffic cones 24, which block a lane of the road 14 traveled by the motor vehicle 12, is located in the travel direction in front of the motor vehicle 12.
[0037] The image 10 is transmitted to an assistance unit 42 of the motor vehicle 12 and is acquired thereby via an input interface 44. The assistance unit 42 comprises a data memory 46 and a processor 48, for example a microchip, microprocessor, microcontroller, or the like, for processing the image 10. A semantic segmentation 26 is thus generated from the image 10. Various areas and objects corresponding to a present understanding of a segmentation model used for this purpose, which can be stored in the data memory 46, for example, are classified in this semantic segmentation 26. In the present case, the segmentation model has assigned a vehicle classification 28 to at least some areas of the front hood of the motor vehicle 12 recognizable in the image 10 and to the external vehicle 22, but also, incorrectly, to the obstacle 24, thus the traffic cones. Both the actual buildings 16 and, likewise incorrectly, parts of the bridge 18 were assigned a building classification 30. Both the sky 20 and, likewise incorrectly, other parts of the bridge 18 were assigned a sky classification 32. This means that the segmentation model has made multiple errors in the semantic segmentation of the image 10.
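The pixel-wise semantic segmentation described above can be sketched as follows. This is a minimal illustration, not part of the patent disclosure: the class set, the toy logits, and the `segment` helper are assumptions standing in for the trained segmentation model.

```python
import numpy as np

# Hypothetical class set; a real model would use a richer scheme
# (e.g. road, vehicle, building, sky, obstacle, ...).
CLASSES = ["road", "vehicle", "building", "sky"]

def segment(logits: np.ndarray) -> np.ndarray:
    """Pixel-wise semantic segmentation: pick the most probable class.

    `logits` has shape (H, W, C); the result is an (H, W) map of class ids,
    corresponding to the segmentation 26 in the description.
    """
    return np.argmax(logits, axis=-1)

# Example: a 2x2 "image" whose logits favor road / vehicle / building / sky.
logits = np.array([[[9, 1, 0, 0], [1, 8, 0, 0]],
                   [[0, 0, 7, 1], [0, 1, 2, 9]]], dtype=float)
seg = segment(logits)
print(seg)  # class-id map, one id per pixel
```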
[0038] By way of a reconstruction model, which is also stored in the data memory 46, for example, a reconstruction image 34 is generated on the basis of the semantic segmentation 26. This reconstruction image 34 represents the most realistic possible approximation or reconstruction of the image 10 underlying the respective semantic segmentation 26.
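A crude stand-in for the reconstruction model can be sketched as a class-to-color rendering; a real reconstruction model would be learned, for example as a generative adversarial network as in claim 15. The palette values below are illustrative assumptions, not from the patent.

```python
import numpy as np

# Illustrative per-class colors; a learned generator would instead
# synthesize a photorealistic approximation of the camera image.
PALETTE = np.array([
    [90, 90, 90],     # class 0: road -> gray
    [200, 0, 0],      # class 1: vehicle -> red
    [160, 120, 80],   # class 2: building -> brown
    [120, 180, 255],  # class 3: sky -> light blue
], dtype=np.uint8)

def reconstruct(seg: np.ndarray) -> np.ndarray:
    """Map an (H, W) class-id map to an (H, W, 3) RGB reconstruction image 34."""
    return PALETTE[seg]

seg = np.array([[0, 1], [2, 3]])
recon = reconstruct(seg)
print(recon.shape)  # one RGB triple per pixel
```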
[0039] The assistance unit 42 then forms a difference between the original image 10 and the reconstruction image 34. The image 10 and the reconstruction image 34 are thus compared to one another here, wherein an average deviation from one another can be calculated. Anomalies can be detected on the basis of the difference or deviation between the image 10 and the reconstruction image 34, such as in this case areas of the obstacle 24 and the bridge 18.
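The difference formation between the original image and the reconstruction image can be sketched as a per-pixel absolute deviation plus the average deviation over the frame. This is a minimal sketch under the assumption of simple channel-averaged absolute differences; the patent does not fix a particular distance measure.

```python
import numpy as np

def anomaly_map(image: np.ndarray, recon: np.ndarray):
    """Per-pixel absolute deviation between the image 10 and the
    reconstruction image 34, plus the average deviation over the frame.
    Large local deviations mark likely segmentation errors (anomalies)."""
    diff = np.abs(image.astype(float) - recon.astype(float)).mean(axis=-1)
    return diff, float(diff.mean())

# Three pixels match the reconstruction; one deviates strongly,
# analogous to the misclassified obstacle 24 or bridge 18 areas.
image = np.array([[[90, 90, 90], [200, 0, 0]],
                  [[160, 120, 80], [10, 10, 10]]], dtype=np.uint8)
recon = np.array([[[90, 90, 90], [200, 0, 0]],
                  [[160, 120, 80], [120, 180, 255]]], dtype=np.uint8)
diff, avg = anomaly_map(image, recon)
print(diff)
```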
[0040] If no significant anomalies are detected in this case, this indicates that the assistance unit 42 correctly interprets the respective situation. Accordingly, based on this interpretation, thus in particular based on the semantic segmentation 26, the motor vehicle 12 or at least a vehicle unit 50 of the motor vehicle 12 can be automatically or autonomously controlled.
[0041] In contrast, if the detected anomalies are sufficiently large, thus meet a predetermined threshold value criterion, for example, a request for a takeover of control by an operator can be generated by the assistance unit 42. This request can be output, for example, via an output interface 52 of the assistance unit 42, for example in the form of a wirelessly emitted request signal 54, which is schematically indicated here. This request signal 54 can be sent to a vehicle-external teleoperator 56. This teleoperator can thereupon send control signals 58, also schematically indicated here, to the motor vehicle 12 in order to control it remotely and wirelessly.
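The threshold value criterion deciding between continued autonomous operation and a takeover request can be sketched in the spirit of claim 12, by counting the pixels predicted as erroneous. The concrete threshold values here are illustrative assumptions.

```python
import numpy as np

def takeover_required(diff: np.ndarray,
                      pixel_threshold: float = 50.0,
                      count_threshold: int = 1) -> bool:
    """Predetermined criterion (cf. claim 12): count the error pixels in the
    deviation map and compare against a predetermined error threshold value.
    Both thresholds are assumed example values."""
    error_pixels = int((diff > pixel_threshold).sum())
    return error_pixels >= count_threshold

# One strongly deviating pixel is enough to trigger the request here.
diff = np.array([[0.0, 0.0], [0.0, 175.0]])
if takeover_required(diff):
    # In the vehicle this would trigger emitting the request signal 54
    # to the teleoperator 56; here we only report the decision.
    print("request takeover")
```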
[0042] To facilitate an acquisition of the respective situation in which the motor vehicle 12 is located for the teleoperator 56, moreover a visualization 36 is generated on the basis of the anomalies or segmentation errors which have been detected, in particular with pixel accuracy. Therein, at least probable or suspected incorrect classifications, thus error areas 38 corresponding to the anomalies, are visually highlighted. This visualization 36 can also be sent as part of the request signal 54 to the teleoperator 56. It can be indicated in an intuitively comprehensible manner to the teleoperator 56 by the visualization 36 having the highlighted error areas 38 where a cause for the respective request to take over control is located. The teleoperator 56 can thus particularly quickly and effectively recognize the areas most relevant for a safe control of the motor vehicle 12 and accordingly react quickly without having to initially search the entire image 10 for possible problem points.
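Generating the visualization as a heat map (cf. claim 17) can be sketched as blending a red intensity layer, proportional to the normalized deviation, onto the camera image. The blending scheme and the `alpha` parameter are assumptions for illustration.

```python
import numpy as np

def heatmap_overlay(image: np.ndarray, diff: np.ndarray,
                    alpha: float = 0.5) -> np.ndarray:
    """Blend a red heat map of the normalized deviation onto the image,
    visually highlighting the error areas 38 for the teleoperator 56."""
    norm = diff / diff.max() if diff.max() > 0 else diff
    heat = np.zeros_like(image, dtype=float)
    heat[..., 0] = norm * 255.0  # red channel encodes error intensity
    out = (1 - alpha * norm[..., None]) * image + alpha * norm[..., None] * heat
    return out.astype(np.uint8)

# A uniform dark frame with one strongly anomalous pixel: only that
# pixel is tinted red, the rest of the image is left unchanged.
image = np.full((2, 2, 3), 10, dtype=np.uint8)
diff = np.array([[0.0, 0.0], [0.0, 175.0]])
overlay = heatmap_overlay(image, diff)
print(overlay[1, 1])  # the highlighted error pixel
```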
[0043] Overall, the described examples thus show how detecting and visualizing areas that are problematic, from the perspective of the respective vehicle, for its autonomous operation can contribute to an improved situation acquisition and situation comprehension by a vehicle-external operator.
LIST OF REFERENCE NUMERALS
[0044] 10 image
[0045] 12 motor vehicle
[0046] 14 road
[0047] 16 buildings
[0048] 18 bridge
[0049] 20 sky
[0050] 22 external vehicle
[0051] 24 obstacle
[0052] 26 segmentation
[0053] 28 vehicle classification
[0054] 30 building classification
[0055] 32 sky classification
[0056] 34 reconstruction image
[0057] 36 visualization
[0058] 38 error area
[0059] 40 camera
[0060] 42 assistance unit
[0061] 44 input interface
[0062] 46 data memory
[0063] 48 processor
[0064] 50 vehicle unit
[0065] 52 output interface
[0066] 54 request signal
[0067] 56 teleoperator
[0068] 58 control signal