Method and Device for Operating a Robot with Improved Object Detection

20220119007 · 2022-04-21

    Abstract

    A method and device are disclosed for improved object detection in an area surrounding a robot. In the method, first and second sensing data are obtained, which can be assigned to a first or second sensing means of the robot, respectively, and which contain at least one portion of the area surrounding the robot. An object detection of an object in the area surrounding the robot is carried out using a fusion of at least the first and the second sensing data. An item of redundancy information is generated, which is assigned to the object detection and at least indicates whether the detected object has been detected using only the first or only the second sensing data or whether the detected object or at least one or more sections of same has been detected redundantly using both the first and the second sensing data.

    Claims

    1. A method for operating a robot, the method comprising: obtaining first detection data that can be associated with a first detection device of the robot and includes at least a subsection of an environment of the robot; obtaining second detection data that can be associated with a second detection device of the robot and includes at least a subsection of the environment of the robot; performing an object detection of an object in the environment of the robot using a fusion of at least the first detection data and the second detection data; and generating redundancy information associated with the object detection, the redundancy information at least indicating whether one of (i) the detected object has been detected based only on the first detection data, (ii) the detected object has been detected based only on the second detection data, and (iii) at least one section of the detected object has been detected redundantly based on both of the first detection data and the second detection data.

    2. The method as claimed in claim 1 further comprising: controlling the robot at least partially automatically based on the object detection and the redundancy information.

    3. The method as claimed in claim 1, further comprising: generating, based on the object detection and the redundancy information, control data for at least partially automated control of the robot, wherein the control data include one of (i) first control data associated with a first control maneuver in response to the object not having been detected redundantly; and (ii) second control data associated with a second control maneuver in response to the object having been detected redundantly, the second control maneuver being different than the first control maneuver.

    4. The method as claimed in claim 3 further comprising: deferring control of the robot, in response to the object not having been detected redundantly, until an update of the redundancy information to decide whether the object has been detected erroneously.

    5. The method as claimed in claim 3, further comprising, in response to the object not having been detected redundantly: initially controlling the first control maneuver, the first control maneuver causing a motion-dynamically weaker reaction of the robot than the second control maneuver; and after an update of the redundancy information, additionally controlling the second control maneuver in response to the object being detected redundantly in the updated redundancy information.

    6. The method as claimed in claim 3, further comprising, in response to the object not having been detected redundantly: controlling the second control maneuver, the second control maneuver causing a motion-dynamically stronger reaction of the robot than the first control maneuver, without prior control of the first control maneuver in response to a hazard to the environment of the robot due to the second control maneuver being excluded based on at least one of the first detection data, the second detection data, and further detection data.

    7. The method as claimed in claim 3 further comprising: controlling the second control maneuver in response to determining that the object could have been detected based on both of the first detection data and the second detection data even though the redundancy information indicates that the object was not detected redundantly.

    8. The method as claimed in claim 1, the performing the object detection further comprising: approximating a total object outline based on the fusion of at least the first detection data and the second detection data.

    9. The method as claimed in claim 1, the performing the object detection further comprising: approximating (i) at least a first partial object outline based on the first detection data and (ii) at least a second partial object outline based on the second detection data, wherein the redundancy information indicates that the object is detected redundantly if both the first partial object outline and the second partial object outline can be associated with the object.

    10. The method as claimed in claim 9 further comprising: associating the first partial object outline and the second partial object outline with the object based on their respective feature vectors, which can be associated with an object class.

    11. A device for operating an at least partly autonomous robot, the device configured to: obtain first detection data that can be associated with a first detection device of the robot and includes at least a subsection of an environment of the robot; obtain second detection data that can be associated with a second detection device of the robot and includes at least a subsection of the environment of the robot; perform an object detection of an object in the environment of the robot using a fusion of at least the first detection data and the second detection data; and generate redundancy information associated with the object detection, the redundancy information at least indicating whether one of (i) the detected object has been detected based only on the first detection data, (ii) the detected object has been detected based only on the second detection data, and (iii) at least one section of the detected object has been detected redundantly based on both of the first detection data and the second detection data.

    12. The method as claimed in claim 1, wherein the method is carried out by executing commands of a computer program with a computer.

    13. A machine-readable memory medium configured to store a computer program for operating a robot that, when executed by a computer, causes the computer to: obtain first detection data that can be associated with a first detection device of the robot and includes at least a subsection of an environment of the robot; obtain second detection data that can be associated with a second detection device of the robot and includes at least a subsection of the environment of the robot; perform an object detection of an object in the environment of the robot using a fusion of at least the first detection data and the second detection data; and generate redundancy information associated with the object detection, the redundancy information at least indicating whether one of (i) the detected object has been detected based only on the first detection data, (ii) the detected object has been detected based only on the second detection data, and (iii) at least one section of the detected object has been detected redundantly based on both of the first detection data and the second detection data.

    Description

    BRIEF DESCRIPTION OF THE FIGURES

    [0033] Advantageous exemplary embodiments of the invention are described in detail below with reference to the accompanying figures. In the figures:

    [0034] FIG. 1 shows a robot, which is in the form of a vehicle here by way of example, with a device set up for the detection of an object in the environment of the robot,

    [0035] FIG. 2 shows a block diagram illustrating a method for detecting an object in the environment of the robot.

    [0036] The figures are only schematic and not true to scale. In the figures, identical, equivalent or similar elements are provided with the same reference characters.

    EMBODIMENTS OF THE INVENTION

    [0037] FIG. 1 shows an at least partially autonomous robot 100, which is in the form of a vehicle here only by way of example and is hereinafter referred to as such. Here, by way of example, the vehicle 100 is an at least partially autonomous, but in particular a highly or fully autonomous, motor vehicle. Accordingly, the vehicle 100 has actuators and a vehicle drive (not referenced in detail), which can be electronically activated for automated driving control of the vehicle 100, for example for accelerating, braking, steering, etc. Alternatively, the at least partly autonomous robot may also be another mobile robot (not shown), for example one that moves by flying, floating, submerging or walking. The mobile robot, for example, may also be an at least partly autonomous lawnmower or an at least partly autonomous cleaning robot. Also in these cases one or more actuators, for example a drive and/or the steering of the mobile robot, may be activated electronically in such a way that the robot moves at least partly autonomously.

    [0038] The vehicle 100 further has a device 110 which is set up for the detection of objects in the environment of the robot, i.e. in the environment of the vehicle in relation to the vehicle 100, and in particular for the detection of at least one object 200 in the environment of the vehicle. The device 110 has a data processing device 120, for example in the form of a computer or an electronic control unit, which may also be set up to activate the actuators and the vehicle drive. This activation can be carried out by means of corresponding control signals, which are generated and output by the device 110 or the data processing device 120 and which are received and processed by the actuators and the vehicle drive. The data processing device 120 has a processor 121 and a data memory 122 for storing program instructions or a computer program for operating the vehicle 100 and/or for the detection of objects in the environment of the vehicle. For example, a machine learning system in the form of one or more artificial neural networks (ANN) may be implemented in the data processing device 120. In addition, the vehicle 100 has a plurality of detection devices or sensors 130, such as optical sensors, for example cameras, ultrasonic sensors, radar sensors, lidar sensors, etc., which monitor or detect the environment of the vehicle 100. Detection data of the sensors 130 are made available to the data processing device 120 or the device 110, which is set up to plan a driving strategy, which may include for example one or more control maneuvers, i.e. driving maneuvers related to the vehicle 100, on the basis of the detection data, and to activate the vehicle actuators and/or the vehicle drive accordingly. Accordingly, the data processing device 120 or the device 110 is set up to receive, for example, the different detection data of the sensors 130 as input data, to process, in particular to fuse, these data and possibly additionally supplied and/or generated (intermediate) data, and to provide output data based on the processing and/or obtained therefrom to one or more vehicle systems, such as the actuators and the vehicle drive. The input data and/or output data can be supplied and provided as signals for electronic data processing.
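
    Purely by way of illustration, and not as part of the disclosed embodiment, the data flow just described may be sketched in Python as follows; the class and signal names (DetectionData, ControlSignal, DataProcessingDevice) are hypothetical placeholders for the detection data of the sensors 130, the control signals and the data processing device 120.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class DetectionData:
            sensor_id: str                      # e.g. the lidar 130-1 or the camera 130-2
            points: List[Tuple[float, float]]   # measured points of the vehicle environment

        @dataclass
        class ControlSignal:
            actuator: str                       # e.g. "brake", "steering", "drive"
            value: float                        # commanded actuation value

        class DataProcessingDevice:
            """Hypothetical stand-in for the data processing device 120."""

            def __init__(self) -> None:
                self.latest: List[DetectionData] = []

            def receive(self, detections: List[DetectionData]) -> None:
                # Detection data of the sensors 130 are received as input signals.
                self.latest = detections

            def plan(self) -> List[ControlSignal]:
                # Planning would use the fused object detection and the redundancy
                # information (see the sketches further below); a neutral command
                # is returned here as a placeholder.
                return [ControlSignal("brake", 0.0)]

        device = DataProcessingDevice()
        device.receive([DetectionData("lidar-130-1", [(10.0, 0.5)]),
                        DetectionData("camera-130-2", [(10.2, 0.4)])])
        print(device.plan())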

    [0039] FIG. 2 shows in a block diagram how the detection of the object 200 in the environment of the vehicle and the coordinated planning of a driving strategy appropriate to the situation can be carried out by means of the device 110 or the data processing device 120. The object 200, which is another road user for example, is located in front of the vehicle 100 in the direction of travel of the vehicle 100 only by way of example here. It will be understood that the object 200 can also be located relative to the vehicle 100 at other positions of the environment of the vehicle and consequently can be detected by another combination or fusion of the sensors 130.

    [0040] As indicated in FIG. 2 by the reference characters 130-1, 130-2, the object 200 is detected here by a plurality of the sensors 130; in this exemplary embodiment, two sensors 130-1 and 130-2 of different types are used by way of example. Only by way of example, the sensor 130-1 is a lidar sensor assembly and the sensor 130-2 is a camera, wherein other combinations of the sensors 130 are possible. According to FIG. 2, the object 200 or a subsection of the same is detected, for example, by the sensor 130-1, and one or more (here by way of example two) further subsections of the object 200 are detected, for example, by the sensor 130-2. The first and second detection data derived therefrom are supplied to the device 110 or the data processing device 120, which is represented in FIG. 2 by a dashed rectangle. The first and second detection data of the sensors 130-1, 130-2 are fused in a block designated B1 in order to generate an object detection 201 from these fused detection data. In this case, one or more object detection methods and/or object assessment methods may be applied, which may also include feature extraction, determination of the optical flow, semantic segmentation, object classification and the like.
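
    Purely by way of illustration, the fusion in block B1 and a greatly simplified stand-in for the object detection 201 may be sketched as follows; the grouping of points by their spacing and the example coordinates are assumptions of this sketch and not prescribed by the embodiment.

        from typing import List, Tuple

        Point = Tuple[float, float]

        def fuse(first: List[Point], second: List[Point]) -> List[Point]:
            # Block B1: merge the first and the second detection data into one
            # common representation of the environment of the vehicle.
            return list(first) + list(second)

        def detect_objects(fused: List[Point], max_gap: float = 1.0) -> List[List[Point]]:
            # Greatly simplified object detection: points whose x-coordinates
            # lie close together are grouped into one detected object.
            groups: List[List[Point]] = []
            for p in sorted(fused):
                if groups and p[0] - groups[-1][-1][0] <= max_gap:
                    groups[-1].append(p)
                else:
                    groups.append([p])
            return groups

        lidar_data = [(10.0, 0.5), (10.3, 1.4)]    # first detection data (sensor 130-1)
        camera_data = [(10.1, 0.6), (10.4, 1.5)]   # second detection data (sensor 130-2)
        print(detect_objects(fuse(lidar_data, camera_data)))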

    [0041] The device 110 or the data processing device 120 is set up to approximate, from the detection data of the first sensor 130-1, a first object outline 200-1 (which may be a partial object outline or a total object outline), which at least approximately describes the object 200 detected by the first sensor 130-1 in the form of an associated outline. The first object outline 200-1 is described here as a polygon.

    [0042] In addition, the device 110 or the data processing device 120 is set up to approximate a second partial object outline 200-2 and a third partial object outline 200-3 from the detection data of the second sensor 130-2, which at least approximately describe the subsections of the object 200 detected by the second sensor 130-2 in the form of associated outlines. The second and third partial object outlines 200-2, 200-3 are described here as polygons. It should be noted that, alternatively, the first object outline 200-1 can be approximated from a detection by means of one of the sensors 130, and each of the partial object outlines 200-2, 200-3 can be approximated by means of two others of the sensors 130.

    [0043] The device 110 or the data processing device 120 is optionally also set up to approximate a bounding box 200-4, as indicated in FIG. 2 by a rectangle.

    [0044] Furthermore, the device 110 or the data processing device 120 is set up to approximate a total object outline of the object 200 from the fused detection data of the first sensor 130-1 and the second sensor 130-2, which at least substantially corresponds to the first object outline 200-1 here. The object outline or the total object outline 200-1 is described here as a polygon only by way of example and approximates the object 200 with high accuracy based on the fused detection data.
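
    Purely by way of illustration, the approximation of the object outlines 200-1 to 200-3, of a total object outline from the fused detection data and of the bounding box 200-4 may be sketched as follows; the use of a convex hull as the polygon approximation and the example coordinates are assumptions of this sketch.

        from typing import List, Tuple

        Point = Tuple[float, float]

        def cross(o: Point, a: Point, b: Point) -> float:
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def convex_hull(points: List[Point]) -> List[Point]:
            # Monotone-chain convex hull: one simple way of approximating an
            # object outline as a polygon from measured points.
            pts = sorted(set(points))
            if len(pts) <= 2:
                return pts
            lower: List[Point] = []
            upper: List[Point] = []
            for p in pts:
                while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                    lower.pop()
                lower.append(p)
            for p in reversed(pts):
                while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                    upper.pop()
                upper.append(p)
            return lower[:-1] + upper[:-1]

        def bounding_box(points: List[Point]) -> Tuple[Point, Point]:
            xs, ys = zip(*points)
            return (min(xs), min(ys)), (max(xs), max(ys))

        lidar_points = [(9.8, 0.2), (9.9, 1.8), (12.0, 0.3), (12.1, 1.7)]   # sensor 130-1
        camera_front = [(9.8, 0.3), (9.9, 1.7), (10.4, 1.0)]                # subsection seen by 130-2
        camera_rear = [(11.6, 0.4), (12.0, 1.6), (11.5, 1.0)]               # further subsection seen by 130-2

        outline_200_1 = convex_hull(lidar_points)                                 # (total) object outline 200-1
        outline_200_2 = convex_hull(camera_front)                                 # partial object outline 200-2
        outline_200_3 = convex_hull(camera_rear)                                  # partial object outline 200-3
        total_outline = convex_hull(lidar_points + camera_front + camera_rear)    # from fused data
        print(total_outline)
        print(bounding_box(lidar_points + camera_front + camera_rear))            # bounding box 200-4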

    [0045] In this exemplary embodiment, the total object outline 200-1 includes the first and second partial object outlines 200-2, 200-3, which, for example on the basis of their respective feature vector, which may include for example a respective velocity relative to the vehicle 100 or other suitable features, are to be associated with the total object outline 200-1 and thus with the object 200 or the object detection 201. In other words, the object 200 has been detected redundantly by means of the first and second sensors 130-1, 130-2, namely in each case by the detection of a subsection of the object 200.

    [0046] As indicated in FIG. 2, the object 200 is not only described based on the object detection 201 or the total object outline 200-1, but additionally also by redundancy information 202, which here contains by way of example an indication of whether the detected object 200 has been detected on the basis of only the first detection data or only the second detection data, or whether the detected object 200 or at least one or more subsections of the same has been detected redundantly on the basis of both the first and the second detection data. In this exemplary embodiment, the redundancy information 202, which may be a data field or similar for example, includes information about the first and second partial object outlines 200-2, 200-3. Thus, the object 200 is described by the object detection 201 or the total object outline 200-1 and the redundancy information 202, wherein the latter shows that the object 200 has been detected redundantly by both the first sensor 130-1 and the second sensor 130-2 in this exemplary embodiment. In addition, the redundancy information 202 also contains here information about which subsections, which are described for example by the partial object outlines 200-2, 200-3, have been detected redundantly.
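
    Purely by way of illustration, the association of the partial object outlines 200-2, 200-3 with the total object outline 200-1 on the basis of their feature vectors (here simply a velocity relative to the vehicle 100) and the resulting redundancy information 202 may be sketched as follows; the threshold value and the layout of the data field are assumptions of this sketch.

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Outline:
            label: str              # e.g. "200-1", "200-2", "200-3"
            sensor_id: str          # sensor that produced the outline
            rel_velocity: float     # feature: velocity relative to the vehicle 100

        @dataclass
        class RedundancyInfo:
            detected_by: List[str] = field(default_factory=list)          # sensors that detected the object
            redundant_sections: List[str] = field(default_factory=list)   # redundantly detected subsections

            @property
            def redundant(self) -> bool:
                return len(set(self.detected_by)) >= 2

        def associate(total: Outline, partials: List[Outline],
                      max_dv: float = 0.5) -> RedundancyInfo:
            # A partial outline is associated with the total outline (and thus
            # with the object 200) if its feature vector matches, here simply
            # a sufficiently similar relative velocity.
            info = RedundancyInfo(detected_by=[total.sensor_id])
            for part in partials:
                if abs(part.rel_velocity - total.rel_velocity) <= max_dv:
                    info.detected_by.append(part.sensor_id)
                    info.redundant_sections.append(part.label)
            return info

        total_200_1 = Outline("200-1", "lidar-130-1", rel_velocity=-3.0)
        partials = [Outline("200-2", "camera-130-2", -3.1),
                    Outline("200-3", "camera-130-2", -2.9)]
        info_202 = associate(total_200_1, partials)
        print(info_202.redundant, info_202.redundant_sections)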

    [0047] As indicated in FIG. 2 with the block B2, the device 110 or the data processing device 120 is set up to plan a driving strategy depending on the object outline 200-1 and the redundancy information 202, wherein in this exemplary embodiment, only by way of example, a first driving maneuver M1 and a second driving maneuver M2 are distinguished. Further, the device 110 or the data processing device 120 is set up to generate, based on the object detection 201 or the total object outline 200-1 and the redundancy information 202, control data for at least partially automated control of the vehicle 100. The control data may be provided, for example, as output signals and are supplied to the actuators and/or the vehicle drive as input signals. The control data include either first control data, which are associated with the first driving maneuver M1 if the object 200 has not been detected redundantly (which, for better illustration, is not the case in this exemplary embodiment), or second control data, which are associated with the second driving maneuver M2, different from the first driving maneuver M1, if the object 200 has been detected redundantly. Only by way of example, the first driving maneuver M1 is driving-dynamically weaker than the second driving maneuver M2, which in practice can mean for example a braking maneuver with lower deceleration or similar.
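
    Purely by way of illustration, the distinction made in block B2 between the first driving maneuver M1 and the second driving maneuver M2 may be sketched as follows; the deceleration values and function names are assumptions of this sketch and not taken from the disclosure.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Maneuver:
            name: str
            deceleration: float   # m/s^2; assumed example values

        M1 = Maneuver("M1", deceleration=2.0)   # driving-dynamically weaker reaction
        M2 = Maneuver("M2", deceleration=9.0)   # driving-dynamically stronger reaction

        def plan_maneuver(object_detected: bool, detected_redundantly: bool) -> Optional[Maneuver]:
            # Block B2: choose control data depending on the object detection 201
            # and the redundancy information 202.
            if not object_detected:
                return None
            return M2 if detected_redundantly else M1

        print(plan_maneuver(object_detected=True, detected_redundantly=True).name)   # -> "M2"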

    [0048] Since the object 200 has been captured or detected redundantly here, the device 110 or the data processing device 120 can decide, for example statistically or by another suitable method, that the object 200 is not an erroneous detection, such as a ghost image due to a shadow formation or similar, and can activate the driving-dynamically stronger driving maneuver M2. The second driving maneuver M2 corresponds, for example, to a braking maneuver with comparatively strong deceleration, such as full braking with maximum brake pressure, and/or a steering maneuver for evasion to prevent a collision with the object 200.

    [0049] The sensors 130 provide current detection data of the environment of the vehicle continuously, for example cyclically. In some exemplary embodiments, the device 110 or the data processing device 120 is set up to defer control of the vehicle 100 until an update of the redundancy information 202 on the basis of the constantly updated detection data, in order to decide after a time delay whether the object 200 has been detected erroneously. This can be particularly useful if the object 200 has not been detected redundantly (at least not yet).

    [0050] In some exemplary embodiments, the device 110 or the data processing device 120 is further set up, if the object 200 has not been detected redundantly, to cause the first driving maneuver M1, which causes a driving-dynamically weaker reaction of the vehicle 100 than the second driving maneuver M2. After the update of the redundancy information 202 described above, the second driving maneuver M2 is then additionally caused if appropriate, provided that on the basis of the updated redundancy information 202 it can be concluded that the object detection 201 is not an erroneous detection and the second driving maneuver M2 is therefore necessary to react adequately to the traffic situation.

    [0051] In some exemplary embodiments, the device 110 or the data processing device 120 is further set up, if the object 200 has not been detected redundantly, to immediately cause the second driving maneuver M2 without prior control of the first driving maneuver M1 if, on the basis of the detection data of one or more of the sensors 130, a hazard to the environment of the vehicle due to the second driving maneuver M2 can be excluded, for example because a following vehicle is at a long distance from the vehicle 100 or a neighboring lane is free.

    [0052] In some exemplary embodiments, the device 110 or the data processing device 120 is further set up to immediately cause the second driving maneuver M2 if it is determined that the object 200 could actually have been detected based on both the first and the second detection data, although the redundancy information 202 indicates the object 200 as not redundantly detected.
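
    Purely by way of illustration, the reaction variants described in paragraphs [0049] to [0052] for the case of an object that has not (yet) been detected redundantly may be combined in a single decision routine as follows; the function and parameter names are hypothetical and the ordering of the checks is merely one possible choice.

        from typing import List

        def react_to_detection(detected_redundantly: bool,
                               hazard_to_environment_excluded: bool,
                               detectable_by_both_sensors: bool,
                               defer_until_update: bool) -> List[str]:
            # Returns the sequence of driving maneuvers to be caused.
            if detected_redundantly:
                return ["M2"]   # redundant detection: stronger reaction is justified
            if detectable_by_both_sensors:
                return ["M2"]   # [0052]: object should have been seen by both sensors
            if hazard_to_environment_excluded:
                return ["M2"]   # [0051]: immediate stronger reaction endangers nobody
            if defer_until_update:
                return []       # [0049]: wait for the updated redundancy information
            return ["M1"]       # [0050]: weaker reaction first, M2 possibly added later

        print(react_to_detection(False, False, False, defer_until_update=False))   # -> ['M1']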