APPARATUS AND METHOD FOR AREA MAPPING
20240044645 · 2024-02-08
Assignee
Inventors
CPC classification: G01S19/01; G01C11/02; G06T3/4038 (Physics)
International classification: G01C11/02; G03B37/04; G01S19/01; G06T3/40 (Physics)
Abstract
The invention relates to an apparatus (1) and a method for area mapping. The apparatus (1) for area mapping includes a framework having a main camera (2) and/or a main sensor attached to the framework, and at least two auxiliary cameras (3) and/or auxiliary sensors attached to the framework. The main camera (2) and/or main sensor as well as the at least two auxiliary cameras (3) and/or auxiliary sensors each have a defined camera footprint (8, 9) on the ground and a defined ground image resolution. The camera footprint (8) of the auxiliary cameras (3) and/or auxiliary sensors on the ground is smaller than the camera footprint (9) of the main camera (2) and/or main sensor on the ground and lies at least partially within it, and the ground image resolution of the auxiliary cameras (3) and/or auxiliary sensors is greater than that of the main camera (2) and/or main sensor. The auxiliary cameras (3) and/or auxiliary sensors are aligned such that the images they generate at least partially overlap. The apparatus (1) further has at least one GNSS receiver and a trigger mechanism, to which the GNSS receiver, the main camera (2) and/or main sensor, and the at least two auxiliary cameras (3) and/or auxiliary sensors are coupled. This enables quick creation of high-resolution, georeferenced orthomosaic images of areas with high area coverage.
Claims
1.-34. (canceled)
35. An apparatus for area and weed mapping, including: a framework: a main sensor attached to the framework, and at least two auxiliary sensors attached to the framework, wherein the main sensor and the at least two auxiliary sensors each have a defined camera footprint on the ground and a defined ground image resolution, wherein the camera footprint of the auxiliary sensors on the ground is smaller than the camera footprint of the main sensor on the ground and lies at least partially within the same, and the ground image resolution of the auxiliary sensors is greater than the ground image resolution of the main sensor, and the auxiliary sensors are aligned such that images generated by the auxiliary sensors at least partially overlap, and the apparatus has at least one GNSS receiver and a trigger mechanism, wherein the GNSS receiver, the main sensor and the at least two auxiliary sensors are coupled to the trigger mechanism, and wherein the apparatus includes at least one data processing module configured to preprocess the images generated by the main sensor and the at least two auxiliary sensors for generating georeferenced orthomosaic images and/or to generate the georeferenced orthomosaic images directly in the apparatus.
36. The apparatus for area and weed mapping according to claim 35, wherein the auxiliary sensors are aligned in such a way that an image generated by an auxiliary sensor overlaps at at least one edge with an image generated by another auxiliary sensor and at an edge, at which it does not overlap with the generated image of another auxiliary sensor, adjoins an image generated by another auxiliary sensor, or adjoins an edge of the main sensor; and/or the overlap of the generated images is less than 10% and/or greater than 1%.
37. The apparatus for area and weed mapping according to claim 35, wherein the camera footprint of the main sensor on the ground is greater than 5% of the sum of the camera footprints of the auxiliary sensors on the ground.
38. The apparatus for area and weed mapping according to claim 35, wherein the trigger mechanism triggers the main sensor and the at least two auxiliary sensors simultaneously or with a defined time offset; and/or simultaneously or with a defined time offset determines and documents an exact position of the apparatus via the at least one GNSS receiver and/or data on the rotation of the apparatus via at least one IMU sensor.
39. The apparatus for area and weed mapping according to claim 35, further comprising at least one attachment means for attachment to a manned or unmanned flying object; and/or at least one movable connecting element, wherein the movable connecting element connects the framework and at least one attachment means for attachment to a manned or unmanned flying object.
40. The apparatus for area and weed mapping according to claim 35, further comprising a second main sensor, wherein the second main sensor is aligned in such a way that an image area generated by it at least partially overlaps with an image area generated by the first main sensor and has a different viewing angle.
41. The apparatus for area and weed mapping according to claim 35, further comprising at least one storage module and/or at least one transmission device; and/or the at least one data processing module is a Single Board Computer or a Single Board Computer for Edge Computing.
42. The apparatus for area and weed mapping according to claim 41, wherein the at least one data processing module is configured to identify and map weed using an algorithm.
43. A method for area and weed mapping using an apparatus for area and weed mapping, the apparatus including a main sensor and at least two auxiliary sensors, wherein the main sensor and the at least two auxiliary sensors each have a defined camera footprint on the ground and a defined ground image resolution, wherein the camera footprint of the auxiliary sensors on the ground is smaller than the camera footprint of the main sensor on the ground and lies at least partially within the same, and the ground image resolution of the auxiliary sensors is greater than the ground image resolution of the main sensor, and the auxiliary sensors are aligned such that images generated by the auxiliary sensors at least partially overlap, and the apparatus has at least one GNSS receiver and a trigger mechanism, wherein the GNSS receiver, the main sensor and the at least two auxiliary sensors are coupled to the trigger mechanism, and wherein the method comprises: performing, by a data processing module of the apparatus, a preprocessing of the images generated by the main sensor and the at least two auxiliary sensors for generating georeferenced orthomosaic images; and/or generating, by the data processing module of the apparatus, the georeferenced orthomosaic images directly in the apparatus.
44. The method for area and weed mapping according to claim 43, wherein the trigger mechanism is triggered several times in succession and thus several images are generated successively by at least one sensor; and/or the trigger mechanism is triggered several times such that the generated images of the main sensor overlap in a movement direction of the apparatus; and/or the apparatus is tilted, rotated and/or pivoted relative to the movement direction during movement in the movement direction.
45. The method for area and weed mapping according to claim 43, wherein, if a laterally offset image has already been previously generated with the main sensor, the subsequently generated image of the main sensor at least partially overlaps therewith, wherein the overlap of the subsequently generated image of the main sensor with the laterally offset generated image of the main sensor is more than 20%.
46. The method for area and weed mapping according to claim 43, wherein the orthomosaic images are generated from the images of the main sensor via photogrammetric calculations.
47. The method for area and weed mapping according to claim 43, wherein an orthomosaic image of the main sensor is georeferenced using position data; and an orthomosaic image of the auxiliary sensors is georeferenced by comparison with the georeferenced orthomosaic image generated by the main sensor.
48. The method for area and weed mapping according to claim 43, wherein the georeferencing of the generated images of the auxiliary sensors is performed based on a calibration of an orientation of the auxiliary sensors relative to the main sensor and the georeferenced images of the main sensor and/or the georeferenced orthomosaic image of the main sensor, wherein the calibration is performed by measuring the camera footprints of the aligned auxiliary sensors in the camera footprint of the main sensor.
49. The method for area and weed mapping according to claim 43, wherein the at least one data processing module identifies and maps weed using an algorithm.
Description
DRAWINGS
[0042] Preferred embodiments of the invention are shown in the drawings and are explained in more detail below.
DESCRIPTION OF EMBODIMENTS
[0048] Preferably, in the illustrated embodiment, a movement direction of the apparatus 1 is orthogonal to the long side of the camera footprint 9 of the main camera 2, with the lateral overlap 7 of the camera footprints 8 of the auxiliary cameras 3 shown in the drawings.
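The relationship between focal length, ground distance 6, camera footprint, and ground image resolution that underlies the main/auxiliary split can be illustrated with a short sketch. This is not part of the patent; all camera parameters are hypothetical example values, and only the basic pinhole geometry is assumed:

```python
# Illustrative sketch (not from the patent): ground footprint and ground
# image resolution (ground sample distance, GSD) of a nadir-pointing
# camera, from simple pinhole geometry. All numbers are hypothetical.

def footprint_and_gsd(sensor_w_mm, sensor_h_mm, focal_mm, pixels_w, height_m):
    """Return (footprint_width_m, footprint_height_m, gsd_m_per_px)."""
    scale = height_m / focal_mm        # ground metres per sensor millimetre
    fw = sensor_w_mm * scale           # footprint width on the ground
    fh = sensor_h_mm * scale           # footprint height on the ground
    gsd = fw / pixels_w                # ground distance covered by one pixel
    return fw, fh, gsd

# Main camera: short focal length -> large footprint, coarse resolution
main = footprint_and_gsd(36.0, 24.0, 35.0, 8000, 100.0)
# Auxiliary camera: same sensor, long focal length -> small footprint,
# fine resolution, as required of the auxiliary cameras 3 in the patent
aux = footprint_and_gsd(36.0, 24.0, 140.0, 8000, 100.0)
```

With these example values the auxiliary footprint (about 26 m wide) lies well within the main footprint (about 103 m wide), while the auxiliary GSD (about 3 mm) is roughly four times finer than the main GSD (about 13 mm), matching the footprint and resolution relations stated in claim 35.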
[0049] For example, if the apparatus 1 according to the invention is moved in the preferred movement direction and the trigger mechanism is triggered several times in succession, images are generated which reproduce the camera footprint 9 of the main camera 2, wherein the offset of the images, and thus their overlap in the movement direction, depends on how quickly the apparatus 1 is moved. If the position and rotation of the apparatus 1 are determined in parallel with the triggering of the main camera 2, a georeferenced orthomosaic image can be created via photogrammetric calculations from several captured images of the main camera 2 and the position data of the individual images; at a correspondingly large ground distance 6, however, this orthomosaic does not have a high ground image resolution. If the auxiliary cameras 3 are triggered at the same time as the main camera 2, or with a time delay, and high-resolution images with a significantly smaller camera footprint 8 are taken, these high-resolution images of the auxiliary cameras 3 can also be georeferenced on the basis of the georeferenced orthomosaic image generated with the main camera 2, provided the camera footprints 8 of the auxiliary cameras 3 were previously correlated with the camera footprint 9 of the main camera 2. For mapping a large area, for example, a drone could carry the apparatus 1 along a pre-programmed path over the area so that the area is scanned. By using the apparatus 1 in the method according to the invention, a large area can be imaged and georeferenced at high resolution in significantly less time than would be possible with the current state of the art.
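The indirect georeferencing described above, locating the calibrated auxiliary footprints 8 inside the georeferenced orthomosaic of the main camera 2, can be sketched as follows. This is illustrative only: the "origin plus pixel size" geotransform convention and all coordinate values are assumptions, not taken from the patent:

```python
# Illustrative sketch (not from the patent): once an auxiliary camera's
# footprint has been calibrated as a region inside the main camera's
# footprint, pixel positions in the main camera's *georeferenced*
# orthomosaic can be mapped to world coordinates, which indirectly
# georeferences the auxiliary image. All values below are hypothetical.

def pixel_to_world(px, py, origin_e, origin_n, px_size):
    """Map a pixel in the georeferenced main orthomosaic to (easting, northing)."""
    return origin_e + px * px_size, origin_n - py * px_size

# Assumed geotransform of the main orthomosaic (from GNSS + photogrammetry):
# upper-left corner in metres (e.g. a UTM-like grid) and pixel size in m/px
ORIGIN_E, ORIGIN_N, PX_SIZE = 500_000.0, 5_700_000.0, 0.013

# Calibrated corners of one auxiliary footprint, as pixels in the main
# orthomosaic (hypothetical calibration result)
aux_corners_px = [(1200, 800), (3200, 800), (3200, 2100), (1200, 2100)]

# Georeference the auxiliary footprint by mapping its corners to world
# coordinates; the auxiliary image can then be warped to these corners
aux_corners_world = [pixel_to_world(x, y, ORIGIN_E, ORIGIN_N, PX_SIZE)
                     for x, y in aux_corners_px]
```

In practice the warp from auxiliary image to the four world-coordinate corners would be an affine or projective transform; the sketch only shows the corner mapping that the calibration of claim 48 makes possible.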
By using multiple auxiliary cameras 3 with small camera footprints 8 on the ground but high ground image resolution, the overall resolution of the georeferenced orthomosaic images can be increased up to the maximum resolution capability of the individual auxiliary cameras 3, and a trade-off between ground image resolution and area performance or duration is no longer necessary, thereby significantly reducing the overall time required to create high-resolution, georeferenced area images. A further increase in performance can be achieved, for example, by the simultaneous use of multiple apparatuses 1 arranged in parallel. For example, weeds could thus be identified and mapped by an algorithm in high-resolution, georeferenced area images of agricultural land, in order to subsequently generate herbicide application maps. In order to compensate for the flight speed and to avoid strongly blurred images during recording, in an embodiment of the method according to the invention the recording apparatus could, for motion compensation, be tilted, for example, so that the viewing direction 4 of the main camera 2 and the viewing directions 5 of the auxiliary cameras 3 change in such a way that the camera footprint 9 of the main camera 2 on the ground and the camera footprints 8 of the auxiliary cameras 3 do not change. In addition, it would also be possible to perform the motion compensation using digital methods, such as Time Delay Integration (TDI) or Forward Motion Compensation (FMC).
[0050]
[0051] All of the features shown here can be essential to the invention either individually or in any combination with each other.
LIST OF REFERENCE SIGNS
[0052] 1 apparatus
[0053] 2 main camera
[0054] 3 auxiliary camera
[0055] 4 viewing direction (main camera)
[0056] 5 viewing direction (auxiliary camera)
[0057] 6 ground distance
[0058] 7 overlap
[0059] 8 camera footprint (auxiliary camera)
[0060] 9 camera footprint (main camera)