METHOD AND SYSTEM FOR THE CONTROL OF A VEHICLE BY AN OPERATOR

20230036840 · 2023-02-02

    Abstract

    A method for the control of a vehicle by an operator. The method includes: using a predictive map to control the vehicle by: detecting a situation and/or location reference of the vehicle, transmitting data of a defined set of sensors, fusing and processing the data of the defined set of sensors; displaying the fused and processed data for the operator; creating/updating the predictive map by: recognizing a problematic situation and/or a problematic location by observation of the operator and/or marking by the operator, storing the problematic situation and/or the problematic location in a first database for storing problematic situations and locations, and training a model for selecting the defined set of sensors and fusing the data of the defined set of sensors by machine learning.

    Claims

    1. A method for control of a vehicle by an operator, comprising the following steps: using a predictive map to control the vehicle by: detecting a situation and/or location reference of the vehicle, transmitting data of a defined set of sensors, fusing and processing the data of the defined set of sensors, displaying the fused and processed data for the operator; and creating/updating the predictive map by: recognizing a problematic situation and/or a problematic location by: observation of the operator and/or marking by the operator, storing the problematic situation and/or the problematic location in a first database for storing problematic situations and locations, and training a model for selecting the defined set of sensors and fusing the data of the defined set of sensors by machine learning.

    2. The method as recited in claim 1, wherein the observation of the operator is carried out by detecting: stress level of the operator, and/or viewing direction of the operator, and/or behavior of the operator.

    3. The method as recited in claim 1, further comprising the following steps: retrieving parameters for upcoming routes and/or areas from a second database for storing situation-related and/or location-related detection, fusion, and display parameters; adapting the defined set of sensors, whose data are transmitted; adapting the fusion of the data of the defined set of sensors; adapting the display for the operator.

    4. The method as recited in claim 1, wherein the fusion of the data of the defined set of sensors is allocated onto multiple partial fusions.

    5. The method as recited in claim 1, further comprising: searching for recognized situations and/or locations in the first database; evaluating the recognized situations and/or locations; generating situation-adapted and/or location-adapted detection, fusion, and display parameters; storing the situation-adapted and/or location-adapted detection, fusion, and display parameters in the second database.

    6. A system for control of a vehicle by an operator, comprising: a vehicle which permits teleoperation; an operator who controls the vehicle without direct line of sight based on pieces of vehicle and surroundings information; sensors, which enable a comprehensive surroundings model of the vehicle for the operator; a predictive map to select the defined set of sensors and fuse the data of the defined set of sensors, which is configured to indicate whether and how data of individual sensors of the defined set of sensors are fused with one another; a wireless network configured to transmit data of the sensors; a control center configured to control the vehicle; and a training system configured to train the predictive map to select the defined set of sensors and use the defined set of sensors as a function of location, and/or situation, and/or preferences of the operator.

    7. The system as recited in claim 6, further comprising: a backend, in which the data of the defined set of sensors are processed between the wireless network and the control center.

    8. The system as recited in claim 7, wherein the backend is a part of the control center or is separate from the control center.

    9. The system as recited in claim 6, wherein the fusion of the data of the individual sensors takes place at arbitrary points of the system.

    10. A non-transitory machine-readable memory medium on which is stored a computer program for control of a vehicle by an operator, the computer program, when executed by a computer, causing the computer to perform the following steps: using a predictive map to control the vehicle by: detecting a situation and/or location reference of the vehicle, transmitting data of a defined set of sensors, fusing and processing the data of the defined set of sensors, displaying the fused and processed data for the operator; and creating/updating the predictive map by: recognizing a problematic situation and/or a problematic location by: observation of the operator and/or marking by the operator, storing the problematic situation and/or the problematic location in a first database for storing problematic situations and locations, training a model for selecting the defined set of sensors and fusing the data of the defined set of sensors by machine learning.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0064] Specific embodiments of the present invention are explained in greater detail on the basis of the figures and the following description.

    [0065] FIG. 1 shows a schematic representation of the system according to the present invention for the control of a vehicle by an operator.

    [0066] FIG. 2 shows a sequence of a data fusion of different sensors.

    [0067] FIG. 3 shows a first camera image, in which the objects detected from the data of a LIDAR sensor are shown as a bounding box.

    [0068] FIG. 4 shows a second camera image, in which data of a LIDAR sensor are shown as a LIDAR point cloud.

    [0069] FIG. 5.1 shows a third camera image.

    [0070] FIG. 5.2 shows a fusion image, in which data of a LIDAR sensor are shown as LIDAR point cloud in the third camera image.

    [0071] FIG. 6 shows a sequence of the method according to the present invention.

    DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

    [0072] In the following description of the specific embodiments of the present invention, identical or similar elements are identified by identical reference numerals, a repeated description of these elements being omitted in individual cases. The figures only schematically represent the subject matter of the present invention.

    [0073] FIG. 1 schematically shows a system 100 according to the present invention for the control of a vehicle 10 by an operator 36. Vehicle 10 is an automated or semiautomated vehicle 10, which also permits teleoperation.

    [0074] It may be seen from FIG. 1 that vehicle 10 to be controlled by operator 36 is equipped with two sensors 12, specifically a LIDAR sensor 14 and a camera sensor 16. Vehicle 10 may include further sensors 12, for example, radar sensor, ultrasonic sensor, and infrared camera. In the present case, camera sensor 16 is the standard sensor. However, there is an array of situations in which camera sensor 16 does not supply an adequate surroundings model. For example, in bad weather or in darkness, the fusion of the camera image with the data of LIDAR sensor 14 is capable of better displaying the vehicle surroundings to operator 36. In addition, pieces of information from infrastructure units 20 or from other vehicles 10 may also be incorporated into the surroundings model via V2X. In the present case in FIG. 1, infrastructure unit 20 shown is formed as a traffic sign 22, which is equipped with a sensor 12, namely a camera sensor 16.

    [0075] Situations in which data of multiple sensors 12 have to be fused with one another are defined not only by bad weather or temporal aspects, but are often also dependent on the local conditions.

    [0076] It is apparent from the representation according to FIG. 1 that system 100 according to the present invention, in addition to various sensors 12, furthermore includes a control unit 30, which includes a backend 32 and a control center 34, where operator 36 is located. System 100 furthermore includes a wireless network 40, which is designed, for example, as a mobile network or a WLAN and permits teleoperation of vehicle 10 from a distance. Operator 36 controls vehicle 10 from control center 34 without direct line of sight based on pieces of vehicle and surroundings information which are detected above all by sensors 12 of vehicle 10, and directly or indirectly controls the actuators of vehicle 10 via wireless network 40.

    [0077] Backend 32 is designed here as a data processing center, in which the data of sensors 12 are processed between wireless network 40 and control center 34. In the present case in FIG. 1, backend 32 is separate from control center 34. Alternatively, backend 32 may also be part of control center 34. Backend 32 serves here above all to carry out computing-intensive fusions of data from different sources, for example, different vehicles 10 or infrastructure units 20, and then to transfer the preprocessed data via a data connection 38 to control center 34.

    [0078] The fusion may take place at arbitrary points of system 100. The fusion may also be allocated onto multiple partial fusions at different points of system 100, for example, to preprocess and reduce data prior to the wireless transmission and to process and finally fuse data after the wireless transmission.
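
    Such a partial fusion can be illustrated, purely by way of example, by the following Python sketch, in which the LIDAR data are reduced in the vehicle before the wireless transmission and the fusion is completed afterwards, for example, in backend 32. The voxel size, array shapes, and function names are illustrative assumptions and are not specified in the present disclosure.

        import numpy as np

        def partial_fusion_in_vehicle(lidar_points, voxel_size=0.5):
            # Preprocessing before the wireless transmission: voxel down-sampling
            # reduces the amount of LIDAR data sent over wireless network 40.
            voxel_keys = np.floor(lidar_points / voxel_size).astype(int)
            _, keep_idx = np.unique(voxel_keys, axis=0, return_index=True)
            return lidar_points[keep_idx]

        def final_fusion_after_transmission(reduced_points, camera_image):
            # Final fusion after the wireless transmission, e.g., in backend 32.
            return {"points": reduced_points, "image": camera_image}

        lidar_points = np.random.uniform(-20.0, 20.0, size=(5000, 3))   # hypothetical scan
        camera_image = np.zeros((480, 640, 3), dtype=np.uint8)          # hypothetical frame
        reduced = partial_fusion_in_vehicle(lidar_points)
        fused = final_fusion_after_transmission(reduced, camera_image)
        print(len(lidar_points), "points reduced to", len(reduced), "before transmission")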

    [0079] Thus, for example, an in-vehicle fusion of data of LIDAR sensor 14 and camera sensor 16 of vehicle 10 may be carried out. The fusion result is transmitted via wireless network 40 to backend 32.

    [0080] Optionally, infrastructure unit 20, in the present case traffic sign 22, may be configured to fuse data of various sensors 12. The fused data are also transmitted to backend 32.

    [0081] Backend 32 may be configured to receive the data sent from vehicle 10 and infrastructure unit 20, fuse them, and transmit the data fused there onward to control center 34.

    [0082] Control center 34 may also be configured to fuse the received data. The data thus fused are provided directly to operator 36, for example, via audiovisual or haptic devices.

    [0083] FIG. 2 shows a sequence 200 of a data fusion of different sensors 12. The sequence of a fusion of data of a LIDAR sensor 14 and a camera sensor 16 is shown by way of example in FIG. 2.

    [0084] Initially, data of a LIDAR sensor 14 are detected in a first step 201 and data of a camera sensor 16 are detected in a second step 202. Subsequently, the data of LIDAR sensor 14 and camera sensor 16 are brought together in a third step 203.

    [0085] The data of LIDAR sensor 14 and camera sensor 16 are then fused with one another. There is not only one fusion possibility for a combination of two sensors 12. Two possibilities 210, 220 for fusing data of LIDAR sensor 14 and camera sensor 16 are shown in FIG. 2. In a first possibility 210, in a fourth step 204, the data of LIDAR sensor 14 are augmented as a LIDAR point cloud 408 (see FIGS. 4 and 5.2) in the camera image, while in a second possibility 220, initially, in a fifth step 205, an object detection is carried out from the data of LIDAR sensor 14, which is subsequently shown in a sixth step 206 as a bounding box 306 (see FIG. 3) in the camera image.

    [0086] Finally, in a seventh step 207, the fused data of LIDAR sensor 14 and camera sensor 16 are displayed to operator 36.
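
    Sequence 200 of FIG. 2 can be summarized, purely schematically, in the following Python sketch. The data shapes and the rudimentary object detection are assumptions made only to keep the example self-contained; they do not restrict the present invention.

        import numpy as np

        def detect_lidar():                        # step 201: detect LIDAR data
            return np.random.uniform(-10.0, 10.0, size=(1000, 3))

        def detect_camera():                       # step 202: detect camera data
            return np.zeros((480, 640, 3), dtype=np.uint8)

        def fuse_as_point_cloud(image, points):    # possibility 210, step 204
            # Augment the LIDAR data as point cloud 408 in the camera image
            # (the geometric projection itself is omitted in this sketch).
            return {"image": image, "point_cloud": points}

        def fuse_as_bounding_boxes(image, points): # possibility 220, steps 205 and 206
            # Rudimentary object detection: a single box around all LIDAR points.
            box = (points[:, 0].min(), points[:, 1].min(),
                   points[:, 0].max(), points[:, 1].max())
            return {"image": image, "bounding_boxes": [box]}

        def display_to_operator(result):           # step 207: display the fused data
            print("fused result contains:", sorted(result))

        lidar_data, camera_data = detect_lidar(), detect_camera()   # steps 201 and 202
        display_to_operator(fuse_as_point_cloud(camera_data, lidar_data))
        display_to_operator(fuse_as_bounding_boxes(camera_data, lidar_data))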

    [0087] FIG. 3 shows a first camera image 300, in which the surroundings of vehicle 10 are shown. Road users 302 are captured by a camera sensor 16. However, camera sensor 16 is disturbed by sunlight in an area 304, so that road users 302 are not clearly recognizable in area 304.

    [0088] In this situation, the data of camera sensor 16 are fused with the data of a LIDAR sensor 14 of vehicle 10. The fusion is carried out on the basis of a weighting of the particular sensors 12. In the present case, a weighting of camera sensor 16 of 0.5 and a weighting of LIDAR sensor 14 of 0.5 are selected for area 304, which is problematic for camera sensor 16. A weighting of camera sensor 16 of 1 is selected outside area 304.
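
    Such a weighting may be realized, for example, as a per-pixel blending of a camera frame and a LIDAR rendering, as the following Python sketch indicates. The image sizes, the mask, and the function name are illustrative assumptions.

        import numpy as np

        def fuse_weighted(camera_image, lidar_image, problem_mask,
                          w_camera_inside=0.5, w_lidar_inside=0.5):
            # Inside problematic area 304 camera and LIDAR each receive a weighting
            # of 0.5; outside the area the camera keeps a weighting of 1.
            w_camera = np.where(problem_mask, w_camera_inside, 1.0)
            w_lidar = np.where(problem_mask, w_lidar_inside, 0.0)
            return w_camera[..., None] * camera_image + w_lidar[..., None] * lidar_image

        camera_image = np.random.rand(480, 640, 3)      # hypothetical camera frame
        lidar_image = np.random.rand(480, 640, 3)       # hypothetical LIDAR rendering
        problem_mask = np.zeros((480, 640), dtype=bool)
        problem_mask[100:300, 200:500] = True           # area disturbed by sunlight
        fused_image = fuse_weighted(camera_image, lidar_image, problem_mask)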

    [0089] Initially an object detection is carried out from the LIDAR data. The detected objects are subsequently shown to operator 36 as a bounding box 306 in first camera image 300.

    [0090] FIG. 4 shows a second camera image 400, in which a stop sign 402, a person 404, and an obstacle 406 are shown. The data of a LIDAR sensor 14 are shown as LIDAR point cloud 408 in second camera image 400. Only the closest LIDAR points are visualized.

    [0091] FIG. 5.1 shows a third camera image 502, in which a motorcycle rider 512, a pedestrian 514, a bicycle rider 516, and multiple streetlights 518 are recognized, while FIG. 5.2 shows a fusion image 504 in which the data of a LIDAR sensor 14 are shown as LIDAR point cloud 408 in third camera image 502. The distance is represented in the present case by different densities of points and thus different gray levels. The distance may also be represented by different color tones.
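
    The projection of a LIDAR point cloud into a camera image with the distance coded as gray level can be sketched as follows in Python. The camera intrinsics, the maximum range, and the coordinate convention are assumptions chosen solely for illustration.

        import numpy as np

        def project_points(points, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
            # Pinhole projection of LIDAR points (x right, y down, z forward);
            # the intrinsic parameters are assumed values.
            z = points[:, 2]
            valid = z > 0.5
            u = (fx * points[valid, 0] / z[valid] + cx).astype(int)
            v = (fy * points[valid, 1] / z[valid] + cy).astype(int)
            return u, v, z[valid]

        def overlay_point_cloud(image, points, max_range=50.0):
            u, v, depth = project_points(points)
            inside = (u >= 0) & (u < image.shape[1]) & (v >= 0) & (v < image.shape[0])
            # Distance coded as gray level: near points bright, distant points dark.
            gray = (255 * (1.0 - np.clip(depth[inside] / max_range, 0.0, 1.0))).astype(np.uint8)
            result = image.copy()
            result[v[inside], u[inside]] = gray[:, None]
            return result

        camera_image = np.zeros((480, 640, 3), dtype=np.uint8)
        lidar_points = np.random.uniform([-10, -2, 1], [10, 2, 40], size=(2000, 3))
        fusion_image = overlay_point_cloud(camera_image, lidar_points)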

    [0092] FIG. 6 shows an exemplary method sequence 600 for the control of a vehicle 10 by an operator 36.

    [0093] In a first method step 601, the method according to the present invention is started. Vehicle 10 is controlled by operator 36. In a second method step 602, a situation and/or location reference of vehicle 10 is detected. Subsequently, data of a defined set of sensors 12 are transmitted in a third method step 603. The transmitted data are then fused in a fourth method step 604. The fused and processed data are then displayed to operator 36 in a fifth method step 605.

    [0094] With the aid of method steps 602 through 605, a predictive map is used which may be updated during the control by operator 36. It is checked whether a problematic situation and/or a problematic location was recognized. A problematic situation and/or a problematic location may be recognized in a sixth method step 606 by observation of operator 36. A problematic situation and/or a problematic location may also, however, be recognized by marking by operator 36 in a seventh method step 607.
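
    A simple decision rule for recognizing a problematic situation in method steps 606 and 607 can be sketched as follows in Python. The stress and gaze thresholds as well as the data fields are purely illustrative assumptions and do not restrict the present invention.

        from dataclasses import dataclass

        @dataclass
        class OperatorObservation:
            stress_level: float   # e.g., 0..1, derived from biometric sensing
            gaze_changes: int     # viewing-direction changes per minute
            manual_mark: bool     # explicit marking by operator 36 (step 607)

        def is_problematic(obs, stress_threshold=0.7, gaze_threshold=30):
            # Step 606: observation of the operator; step 607: marking by the operator.
            return (obs.manual_mark
                    or obs.stress_level > stress_threshold
                    or obs.gaze_changes > gaze_threshold)

        print(is_problematic(OperatorObservation(stress_level=0.8,
                                                 gaze_changes=12,
                                                 manual_mark=False)))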

    [0095] If a problematic situation and/or a problematic location is recognized, it is stored in an eighth method step 608 in a first database 630 for storing problematic situations and locations.

    [0096] In a ninth method step 609, it is checked whether the trip is ended. If the trip is ended, the method is ended in a tenth method step 610. If vehicle 10 drives further, method steps 602 through 609 repeat.
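
    Method steps 601 through 610 can be summarized, purely schematically, in the following Python sketch. The stub objects, helper callables, and termination criterion are assumptions made only to keep the example self-contained.

        class VehicleStub:
            """Minimal stand-in for vehicle 10 so that the loop below runs."""
            def __init__(self):
                self.remaining_cycles = 3
            def trip_ended(self):                        # checked in step 609
                self.remaining_cycles -= 1
                return self.remaining_cycles < 0
            def detect_context(self):                    # step 602
                return {"location": (48.1, 11.5), "weather": "rain"}
            def transmit(self, sensors):                 # step 603
                return {name: f"data_from_{name}" for name in sensors}

        def control_loop(vehicle, select_sensors, fuse, display, observe, first_database):
            while not vehicle.trip_ended():              # steps 609 and 610
                context = vehicle.detect_context()       # step 602
                data = vehicle.transmit(select_sensors(context))  # step 603
                display(fuse(data))                      # steps 604 and 605
                if observe(context):                     # steps 606 and 607
                    first_database.append(context)       # step 608

        first_database_630 = []
        control_loop(VehicleStub(),
                     select_sensors=lambda ctx: ["camera", "lidar"] if ctx["weather"] == "rain" else ["camera"],
                     fuse=lambda data: " + ".join(data.values()),
                     display=print,
                     observe=lambda ctx: ctx["weather"] == "rain",
                     first_database=first_database_630)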

    [0097] In the creation of the predictive map, the detection, fusion, and display parameters are adapted if parameters are already present for the situation and/or location reference detected in second method step 602.

    [0098] In an eleventh method step 611, the parameters for upcoming routes and/or areas are retrieved from a second database 640 for storing situation-related and/or location-related detection, fusion, and display parameters, which the predictive map represents. Subsequently, the defined set of sensors 12, whose data are transmitted, is adapted in a twelfth method step 612. The fusing of the data of sensors 12 is adapted in a thirteenth method step 613, and the display for operator 36 is adapted in a fourteenth method step 614.
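
    One conceivable representation of second database 640 and of the retrieval in method step 611 is shown in the following Python sketch. The keys, parameter fields, and default values are illustrative assumptions.

        # Second database 640 modeled as a dictionary keyed by route or area; the
        # layout of the parameter sets is an assumption for illustration only.
        second_database_640 = {
            ("tunnel_entrance", "night"): {
                "sensors": ["camera", "lidar", "infrared"],   # adapted in step 612
                "fusion": {"camera": 0.5, "lidar": 0.5},      # adapted in step 613
                "display": "point_cloud_overlay",             # adapted in step 614
            }
        }

        DEFAULT_PARAMETERS = {
            "sensors": ["camera"],
            "fusion": {"camera": 1.0},
            "display": "camera_only",
        }

        def retrieve_parameters(route_key):
            # Step 611: retrieve parameters for upcoming routes and/or areas.
            return second_database_640.get(route_key, DEFAULT_PARAMETERS)

        parameters = retrieve_parameters(("tunnel_entrance", "night"))
        print(parameters["sensors"], parameters["fusion"], parameters["display"])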

    [0099] If a problematic situation and/or a problematic location are recognized, an aggregation of data is carried out in parallel. The aggregation is started in a fifteenth method step 615 if a problematic situation and/or a problematic location are recognized.

    [0100] A search is made for the recognized situations and/or locations in first database 630 in a sixteenth method step 616. Subsequently, in a seventeenth method step 617, an evaluation of identical situations and/or locations is compiled. In an eighteenth method step 618, it is then checked whether the recognized situation and/or the recognized location is permanently critical. If not, method steps 616 through 618 are repeated. If the recognized situation and/or the recognized location is permanently critical, it is furthermore checked in a nineteenth method step 619 whether parameters are already present. If the parameters are already present, they are retrieved in a twentieth method step 620 from second database 640 and taken into consideration when evaluating the recognized situation and/or the recognized location in a twenty-first method step 621. Subsequently, situation-adapted and/or location-adapted detection, fusion, and display parameters are generated in a twenty-second method step 622, which are stored in a twenty-third method step 623 in second database 640. After the storage of the adapted parameters, the aggregation of data is ended in a twenty-fourth method step 624.
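
    The aggregation in method steps 615 through 624 can be sketched as follows in Python. The count threshold used here as the criterion for "permanently critical" and the way the adapted parameters are derived are assumptions chosen solely for illustration and are not specified in the present disclosure.

        from collections import Counter

        def derive_parameters(existing):
            # Steps 621 and 622: placeholder generation of situation-adapted and/or
            # location-adapted parameters, here simply adding the LIDAR sensor.
            base = existing or {"sensors": ["camera"], "fusion": {"camera": 1.0}}
            return {**base, "sensors": sorted(set(base["sensors"]) | {"lidar"})}

        def aggregate(problem_reports, second_database, min_count=3):
            counts = Counter(problem_reports)                  # steps 616 and 617
            for location, count in counts.items():
                if count < min_count:                          # step 618: permanently critical?
                    continue
                existing = second_database.get(location)       # steps 619 and 620
                second_database[location] = derive_parameters(existing)  # steps 621 to 623

        second_database_640 = {}
        aggregate(["bridge_A"] * 4 + ["crossing_B"], second_database_640)
        print(second_database_640)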

    [0101] However, as already stated above in general, this selected sequence for carrying out the method according to the present invention in FIG. 6 is not the only one possible, since the fusion may also take place at any other position of system 100.

    [0102] The present invention is not restricted to the exemplary embodiments described here and the aspects highlighted therein. Rather, a variety of modifications are possible within the scope of the present invention, which are within the expertise of those skilled in the art.