Method for Controlling a Flight Movement of an Aerial Vehicle for Landing or for Dropping a Cargo, and Aerial Vehicle
20220397913 · 2022-12-15
Inventors
CPC classification
G05D1/106
PHYSICS
B64U70/00
PERFORMING OPERATIONS; TRANSPORTING
B64U2201/10
PERFORMING OPERATIONS; TRANSPORTING
B64U2101/60
PERFORMING OPERATIONS; TRANSPORTING
B64U2101/30
PERFORMING OPERATIONS; TRANSPORTING
B64C39/024
PERFORMING OPERATIONS; TRANSPORTING
International classification
G05D1/10
PHYSICS
Abstract
The preferred embodiments relate to a method for controlling a flight movement of an aerial vehicle for landing the aerial vehicle, including: recording of first image data by means of a first camera device, which is provided on an aerial vehicle, and is configured to record an area of ground, wherein the first image data is indicative of a first sequence of first camera images. The method also includes recording of second image data by means of a second camera device, which is provided on the aerial vehicle, and is configured to record the area of ground, wherein the second image data is indicative of a second sequence of second camera images.
Claims
1. A method for controlling a flight movement of an aerial vehicle for landing the aerial vehicle, comprising: recording of first image data by means of a first camera device, which is provided on the aerial vehicle, and is configured to record an area of ground, wherein the first image data is indicative of a first sequence of first camera images; recording of second image data by means of a second camera device, which is provided on the aerial vehicle, and is configured to record the area of ground, wherein the second image data is indicative of a second sequence of second camera images; processing the first and the second image data by means of an evaluation device, comprising: performing a first image analysis based on artificial intelligence, for the first image data, wherein at least a first landing zone is hereby determined in the recorded area of ground; performing a second image analysis, which is conducted free of use of artificial intelligence, for the second image data, wherein at least a second landing zone is hereby determined in the recorded area of ground; determining position coordinates for a clear landing zone, which is comprised by the first and second landing zones, if a comparison reveals that the first and second landing zones overlap in the area of ground, at least in the clear landing zone; receiving the position coordinates for a target landing site by a control device of the aerial vehicle; determining release characteristics, if a comparison of the position coordinates for the clear landing zone and the position coordinates for the target landing site reveals a match; and transmitting the release characteristics, which are indicative of the availability of the target landing site for a landing, to the control device of the aerial vehicle; and controlling of a flight movement of the aerial vehicle by the control device, for landing on the target landing site.
2. A method for controlling a flight movement of an aerial vehicle, for dropping a cargo by the aerial vehicle, comprising: recording of first image data by means of a first camera device, which is provided on the aerial vehicle, and is configured to record an area of ground, wherein the first image data is indicative of a first sequence of first camera images; recording of second image data by means of a second camera device, which is provided on the aerial vehicle, and is configured to record the area of ground, wherein the second image data is indicative of a second sequence of second camera images; processing the first and the second image data by means of an evaluation device, comprising: performing a first image analysis based on artificial intelligence, for the first image data, wherein at least a first drop zone is hereby determined in the recorded area of ground, for dropping a cargo of the aerial vehicle; performing a second image analysis, which is conducted free of use of artificial intelligence, for the second image data, wherein at least a second drop zone is hereby determined in the recorded area of ground, for dropping a cargo of the aerial vehicle; determining position coordinates for a clear drop zone, which is comprised by the first and the second drop zones, if a comparison reveals that the first and the second drop zones overlap in the area of ground, at least in the clear drop zone; receiving position coordinates for a target drop site by a control device of the aerial vehicle; determining release characteristics, if a comparison of the position coordinates for the clear drop zone and the position coordinates for the target drop site reveals a match; and transmitting the release characteristics, which are indicative of the availability of the target drop site for the dropping of the cargo, to the control device of the aerial vehicle; and controlling a flight movement of the aerial vehicle by the control device, for dropping the cargo on the target drop site.
3. The method according to claim 1, wherein the performing the first image analysis, and/or the second image analysis, further comprises the following: providing obstacle characteristics, which are indicative of at least one category of ground obstacle; determining a landing zone as unsuitable for the landing or the dropping of the cargo, if, with the first and/or second image analyses, using the obstacle characteristics, it is determined that a ground obstacle is arranged in the landing/drop zone, which can be assigned to the at least one category of ground obstacle; and continuing with processing of the first and the second image data by means of the first and second image analyses until the clear landing/drop zone is determined.
4. The method according to claim 3, wherein obstacle characteristics are provided, which are indicative of one or a plurality of the following categories of ground obstacle: moving ground obstacle, stationary ground obstacle, human, plant, and animal.
5. The method according to claim 1, wherein the first and the second image data are recorded by means of at least two different camera devices from the following group: a visual imaging camera, a thermal imaging camera, a radar camera, an event camera, and an infrared camera.
6. The method according to claim 1, wherein furthermore the following is provided: performing a visual analysis for the first/second camera device, wherein it is hereby determined whether a particular recorded field of view of the first/second camera device is clear of any field of view blockage.
7. The method according to claim 6, wherein in the course of the visual analysis it is determined to what extent the recorded field of view of the first camera device, and/or of the second camera device, is blocked, and field of view characteristics are provided in the evaluation device and transmitted to the control device, which characteristics register the extent of the blocking of the recorded field of view.
8. The method according to claim 7, wherein the control device checks the field of view characteristics, and controls the flight movement of the aerial vehicle in accordance with emergency landing control signals, if the field of view characteristics register an extent of blockage of the recorded field of view for the first camera device and/or the second camera device which exceeds a threshold value, wherein the emergency landing control signals are set up to effect an emergency landing of the aerial vehicle.
9. An aerial vehicle, comprising: a sensor device, which has a first and a second camera device; an evaluation device, which has one or a plurality of processors; and a control device, which is configured to control an operation of the aerial vehicle; wherein the aerial vehicle, for performing a landing, is configured for the following: recording first image data by means of a first camera device, which is provided on the aerial vehicle, and is configured to record an area of ground, wherein the first image data is indicative of a first sequence of first camera images; recording second image data by means of a second camera device, which is provided on the aerial vehicle, and is configured to record the area of ground, wherein the second image data is indicative of a second sequence of second camera images; processing of the first and the second image data by means of the evaluation device, comprising: performing a first image analysis based on artificial intelligence, for the first image data, wherein at least a first landing zone is hereby determined in the recorded area of ground; performing a second image analysis, which is conducted free of use of artificial intelligence, for the second image data, wherein at least a second landing zone is hereby determined in the recorded area of ground; determining position coordinates for a clear landing zone, which is comprised by the first and second landing zones, if a comparison reveals that the first and second landing zones overlap in the area of ground, at least in the clear landing zone; receiving position coordinates for a target landing site by the control device of the aerial vehicle; determining release characteristics, if a comparison of the position coordinates for the clear landing zone and the position coordinates for the target landing site reveals a match; and transmitting the release characteristics, which are indicative of the availability of the target landing site for landing, to the control device of the aerial vehicle; and controlling of a flight movement of the aerial vehicle by the control device, for landing on the target landing site.
10. An aerial vehicle, comprising: a sensor device, which has a first and a second camera device; an evaluation device, which has one or a plurality of processors; and a control device, which is configured to control an operation of the aerial vehicle; wherein the aerial vehicle, for dropping a cargo by the aerial vehicle, is configured for the following: recording first image data by means of a first camera device, which is provided on the aerial vehicle, and is configured to record an area of ground, wherein the first image data is indicative of a first sequence of first camera images; recording second image data by means of a second camera device, which is provided on the aerial vehicle, and is configured to record the area of ground, wherein the second image data is indicative of a second sequence of second camera images; processing the first and the second image data by means of the evaluation device, comprising: performing a first image analysis based on artificial intelligence, for the first image data, wherein at least a first drop zone is hereby determined, for dropping a cargo of the aerial vehicle in the recorded area of ground; performing a second image analysis, which is performed free of use of artificial intelligence, for the second image data, wherein at least a second drop zone is hereby determined, for dropping a cargo of the aerial vehicle in the recorded area of ground; determining position coordinates for a clear drop zone comprised by the first and second drop zones, if a comparison reveals that the first and second drop zones at least partially overlap in the area of ground; receiving position coordinates for a target drop site by the control device of the aerial vehicle; determining release characteristics, if a comparison of the position coordinates for the clear drop zone and the position coordinates for the target drop site reveals a match; and transmitting the release characteristics, which are indicative of the availability of the target drop site for the dropping of the cargo, to the control device of the aerial vehicle; and controlling of a flight movement of the aerial vehicle by the control device, for dropping the cargo on the target drop site.
11. The aerial vehicle according to claim 9, designed as an unmanned aerial vehicle.
12. The aerial vehicle according to claim 10, designed as an unmanned aerial vehicle.
13. The method according to claim 2, wherein the performing the first image analysis, and/or the second image analysis, further comprises the following: providing obstacle characteristics, which are indicative of at least one category of ground obstacle; determining a landing zone as unsuitable for the landing or the dropping of the cargo, if, with the first and/or second image analyses, using the obstacle characteristics, it is determined that a ground obstacle is arranged in the landing/drop zone, which can be assigned to the at least one category of ground obstacle; and continuing with processing of the first and the second image data by means of the first and second image analyses until the clear landing/drop zone is determined.
14. The method according to claim 13, wherein obstacle characteristics are provided, which are indicative of one or a plurality of the following categories of ground obstacle: moving ground obstacle, stationary ground obstacle, human, plant, and animal.
15. The method according to claim 13, wherein the first and the second image data are recorded by means of at least two different camera devices from the following group: a visual imaging camera, a thermal imaging camera, a radar camera, an event camera, and an infrared camera.
16. The method according to claim 13, wherein furthermore the following is provided: performing a visual analysis for the first/second camera device, wherein it is hereby determined whether a particular recorded field of view of the first/second camera device is clear of any field of view blockage.
17. The method according to claim 16, wherein in the course of the visual analysis it is determined to what extent the recorded field of view of the first camera device, and/or of the second camera device, is blocked, and field of view characteristics are provided in the evaluation device and transmitted to the control device, which characteristics register the extent of the blocking of the recorded field of view.
18. The method according to claim 17, wherein the control device checks the field of view characteristics, and controls the flight movement of the aerial vehicle in accordance with emergency landing control signals, if the field of view characteristics register an extent of blockage of the recorded field of view for the first camera device and/or the second camera device which exceeds a threshold value, wherein the emergency landing control signals are set up to effect an emergency landing of the aerial vehicle.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0027] In what follows, further embodiments are explained in more detail with reference to the figures. Here:
[0028]
[0029]
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0030]
[0031] In accordance with
[0032] The image data recorded in a step 20 are transmitted from the sensor device 1 to the evaluation device 2, where the camera images are processed. A redundant image analysis is executed for the image data in the evaluation device 2 by means of data processing.
[0033] Prior to image analysis for the determination of at least one clear area of ground for landing and/or dropping a cargo, data pre-processing is provided, in the example of embodiment, in steps 21 and 22.
[0034] Here, in step 21, first image data, for example RGB image data (from a visual imaging camera), of a first camera device 1.1 are preprocessed in the evaluation device 2.
[0035] In step 22, second image data, for example thermal imaging camera image data, of a second camera device 1.2, are independently preprocessed in the evaluation device 2.
[0036] Then, in a first image analysis (step 23), the first image data is analyzed, using an analysis algorithm based on artificial intelligence. In one example of embodiment, the first image analysis is executed by means of a neural network, for example a convolutional neural network (CNN) or a vision transformer. For this purpose, the pixel data are processed directly as RGB values by the neural network and modified by means of different layers. By means of the neural network, such as a CNN, a pixel-by-pixel classification can be executed, for example one class per image pixel: building, vegetation, human, animal, water, or the like. The output of the neural network is directly in the form of the characteristics in image coordinates. The neural network has previously been trained using training data, as is of known art in various forms of embodiment.
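The pixel-by-pixel classification of paragraph [0036] can be sketched as follows. The class list matches the categories named above; the `classify_pixel` rule is a toy stand-in for the trained neural network, introduced purely for illustration, and the function and class names are assumptions, not part of the disclosure.

```python
# Illustrative sketch of per-pixel classification as in [0036].
# classify_pixel is a hypothetical stand-in for the trained CNN;
# a real system would run inference with a neural network.
CLASSES = ["building", "vegetation", "human", "animal", "water", "clear"]

def classify_pixel(rgb):
    """Toy stand-in for the CNN: maps one RGB pixel to a class label."""
    r, g, b = rgb
    if g > r and g > b:
        return "vegetation"   # green-dominant pixels
    if b > r and b > g:
        return "water"        # blue-dominant pixels
    return "clear"

def segment(image):
    """Pixel-by-pixel classification: one class label per image pixel,
    returned in image coordinates (row-major), as in the description."""
    return [[classify_pixel(px) for px in row] for row in image]

image = [
    [(10, 200, 30), (10, 20, 220)],
    [(120, 120, 120), (30, 180, 40)],
]
labels = segment(image)
# labels[0][0] == "vegetation", labels[0][1] == "water"
```

The essential point is only the output format: one class label per pixel, in image coordinates, from which candidate landing zones can later be derived.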
[0037] Independently of the first image analysis based on artificial intelligence, the second image data are analyzed by the evaluation device 2 in a second image analysis (step 24), wherein the second image analysis is executed by means of a classical analysis algorithm that does not use artificial intelligence. In the second image analysis, for example, the image data from at least a second camera device 1.2 can be evaluated. Here, a threshold filter can be applied to the sensor signals from the thermal imaging camera, which register the heat on a pixel-by-pixel basis. The image data from the thermal imaging camera are suitable for detecting and determining, for example, people, animals, and/or vehicles by means of deterministic analysis (for example threshold filters). The image data of the event-based camera can also be evaluated alternatively or additionally for this purpose, since these register moving objects. A threshold filter can also be used for this image data analysis.
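A minimal sketch of the deterministic threshold filter described in [0037] is given below. The threshold value and the per-pixel heat values are illustrative assumptions; the disclosure specifies only that heat is evaluated pixel by pixel against a threshold.

```python
# Sketch of the AI-free second image analysis from [0037]: a simple
# threshold filter over the per-pixel heat map of the thermal imaging
# camera. The threshold value is an illustrative assumption.
HEAT_THRESHOLD = 30.0  # e.g. degrees Celsius; assumed value

def hot_pixels(thermal_image, threshold=HEAT_THRESHOLD):
    """Return coordinates of pixels whose heat value exceeds the
    threshold; clusters of such pixels may indicate people, animals,
    or vehicles in the area of ground."""
    return [
        (row, col)
        for row, line in enumerate(thermal_image)
        for col, heat in enumerate(line)
        if heat > threshold
    ]

thermal = [
    [18.0, 19.5, 36.5],   # 36.5: plausibly a person or animal
    [17.0, 18.2, 19.0],
]
detections = hot_pixels(thermal)
# detections == [(0, 2)]
```

The same kind of filter could be applied to the event stream of an event-based camera, with per-pixel event counts in place of heat values.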
[0038] Both image analyses, which are executed independently of each other, serve to check whether the image data register a clear area of ground that is currently available for a landing and/or for dropping a cargo.
[0039] In step 25, the results of the two image analyses are combined. Position coordinates for a clear area of ground (landing zone/drop zone) are determined if the clear area of ground lies in an overlap zone of a first area of ground, which was determined in the context of the first image analysis, and a second area of ground, which was determined in the context of the second image analysis, independently of the first image analysis. Here, position coordinates for a target zone/location are received in the evaluation device 2 from the control device 3 of the aerial vehicle; these register a desired target location for landing the aerial vehicle, and/or for dropping a cargo by the aerial vehicle. Release characteristics are then determined in the evaluation device 2 if a comparison of the position coordinates for the clear area of ground that has been found, and the position coordinates for the target zone on the ground, reveals a match, so that the desired target location is available. Otherwise, the search for a clear area of ground can be continued in accordance with the method described above. Alternatively, the control device 3 of the aerial vehicle can provide modified position coordinates for an alternative target zone/location, whereupon the comparison with the position coordinates for the clear area of ground that has been found can be executed for the modified position coordinates.
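The combination step of [0039] can be sketched as a geometric intersection followed by a coordinate comparison. Modeling the zones as axis-aligned rectangles `(x_min, y_min, x_max, y_max)` in ground coordinates, and the optional tolerance, are assumptions for illustration; the disclosure only requires an overlap test and a match check.

```python
# Sketch of step 25 from [0039]: intersect the zones found by the two
# independent analyses, then compare against the target coordinates
# received from the control device. The rectangle model and the
# tolerance parameter are illustrative assumptions.
def intersect(zone_a, zone_b):
    """Overlap of two rectangular zones, or None if they do not overlap."""
    x0 = max(zone_a[0], zone_b[0]); y0 = max(zone_a[1], zone_b[1])
    x1 = min(zone_a[2], zone_b[2]); y1 = min(zone_a[3], zone_b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

def contains(zone, point, tolerance=0.0):
    """True if the target point lies in the zone, within a tolerance."""
    x, y = point
    return (zone[0] - tolerance <= x <= zone[2] + tolerance
            and zone[1] - tolerance <= y <= zone[3] + tolerance)

first_zone = (0.0, 0.0, 10.0, 10.0)   # from the AI-based analysis
second_zone = (5.0, 5.0, 15.0, 15.0)  # from the deterministic analysis
clear_zone = intersect(first_zone, second_zone)  # (5.0, 5.0, 10.0, 10.0)

target_site = (7.5, 8.0)  # received from the control device 3
release = clear_zone is not None and contains(clear_zone, target_site)
```

If `release` is false, the search continues or the control device supplies modified target coordinates, exactly as described above.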
[0040] The release characteristics are then provided and transmitted to an interface in step 26 for transmission to the control device 3 of the aerial vehicle, whereupon the aerial vehicle is controlled by the control device 3 for landing and/or dropping the cargo accordingly (step 28).
[0041] In order to further increase the reliability of the determination of the clear or suitable area of ground, provision is optionally made in the example of embodiment shown in step 28 to execute a visual analysis for one or a plurality of the camera devices 1.1, . . . , 1.n before determining the position coordinates for the clear area of ground, and/or before transmitting the release characteristics to the control device 3. The visual analysis is used to check whether a particular assigned recorded field of view of the camera devices 1.1, . . . , 1.n may at least partially be blocked. If the recorded field of view is blocked, the assigned camera device may no longer be able to fulfil its sensor task reliably. Here provision can be made for field of view characteristics to be generated and transmitted to the control device 3, wherein the field of view characteristics register whether and, if so, to what extent, the recorded field of view of one or more of the camera devices 1.1, . . . , 1.n is blocked. If the extent of a blockage for at least one recorded field of view exceeds a threshold value, the evaluation device 2 can, for example, reject the previous determination of the clear area of ground, and transmit no release characteristics to the control device 3, despite the fact that the clear area of ground has been found. Alternatively or additionally, an emergency landing of the aerial vehicle can be initiated.
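The gating logic of [0041] can be sketched as a per-camera blockage check against a threshold. The 20% threshold, the camera names, and the blockage fractions are illustrative assumptions; the disclosure specifies only that a threshold comparison withholds the release.

```python
# Sketch of the optional visual analysis from [0041]: estimate what
# fraction of each camera's recorded field of view is blocked and gate
# the release characteristics on a threshold. Threshold and values
# are illustrative assumptions.
BLOCKAGE_THRESHOLD = 0.2  # assumed: at most 20% of the field of view

def release_allowed(blockage_by_camera, threshold=BLOCKAGE_THRESHOLD):
    """Withhold the release (and possibly trigger an emergency landing)
    if any camera's field of view is blocked beyond the threshold."""
    return all(frac <= threshold for frac in blockage_by_camera.values())

field_of_view = {"camera_1.1": 0.05, "camera_1.2": 0.35}  # 35% blocked
ok = release_allowed(field_of_view)
# ok is False: camera 1.2 exceeds the threshold, so no release
# characteristics are transmitted despite a clear zone being found.
```

A rejected release can then branch either to a renewed search for a clear area of ground or to the emergency-landing path.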
[0042] In addition to the visual analysis, or as an alternative to the latter (step 29), provision can be made for one or a plurality of further checks to be executed for the sensor device 1 and/or the evaluation device 2. For this purpose, a system status, for example, of the power supply, and/or the utilisation of a processor unit, the current runtime of the artificial intelligence, and/or a time synchronisation of the camera devices 1.1, . . . , 1.n, is continuously monitored, either in terms of individual characteristics, or in groups, for example at regular time intervals.
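The further checks of [0042] amount to periodic range checks on a few system-status characteristics. The monitored quantities below follow the paragraph (power supply, processor utilisation, AI runtime, camera time synchronisation), but the concrete names, units, and limits are assumptions for illustration.

```python
# Sketch of the further checks from [0042]: sample system-status
# characteristics at regular intervals and flag any outside their
# allowed range. Names, units, and limits are illustrative.
LIMITS = {
    "supply_voltage_v": (11.0, 13.0),      # power supply
    "cpu_utilisation": (0.0, 0.9),         # processor unit load
    "inference_time_ms": (0.0, 50.0),      # runtime of the AI analysis
    "camera_sync_offset_ms": (0.0, 5.0),   # time synchronisation
}

def failed_checks(status, limits=LIMITS):
    """Return the names of all monitored characteristics outside limits."""
    return [
        name for name, (low, high) in limits.items()
        if not (low <= status.get(name, low) <= high)
    ]

status = {"supply_voltage_v": 12.1, "cpu_utilisation": 0.95,
          "inference_time_ms": 22.0, "camera_sync_offset_ms": 1.2}
faults = failed_checks(status)
# faults == ["cpu_utilisation"]
```

Such checks can run individually or in groups at regular time intervals, as the paragraph describes, feeding the same gating logic as the visual analysis.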
[0043] Based on the release characteristics received from the evaluation device 2, the control device 3 then controls the aerial vehicle such that a landing of the aerial vehicle, and/or a dropping of a cargo by the aerial vehicle in the target zone (target landing site/location, target drop site/location) are executed.
[0044] The features disclosed in the above description, in the claims, as well as in the figures, can be of importance both individually and in any combination for the implementation of the various embodiments.