METHOD AND MOBILE DETECTION UNIT FOR DETECTING ELEMENTS OF INFRASTRUCTURE OF AN UNDERGROUND LINE NETWORK

20220282967 · 2022-09-08


    Abstract

    A method for the positionally correct capture of exposed infrastructure elements arranged underground, in an open excavation, by means of a mobile capture apparatus including: by a 3D reconstruction device, image data and/or depth data of a scene containing at least one exposed infrastructure element arranged underground are captured and a 3D point cloud having a plurality of points is generated on the basis of these image data and/or depth data; by one or more receivers, signals of one or more global navigation satellite systems are received and a first position indication of the position of the capture apparatus in a global reference system is determined; and a plurality of second position indications of the position of the capture apparatus in a local reference system and a plurality of orientation indications of the orientation of the capture apparatus in the respective local reference system are determined.

    Claims

    1. A method for the positionally correct capture of exposed infrastructure elements having a diameter less than 30 cm and which are arranged underground in an open excavation, by means of a mobile capture apparatus, wherein: a) by means of a 3D reconstruction device of the mobile capture apparatus, capturing image data and depth data of a scene containing at least one exposed infrastructure element arranged underground, and generating a 3D point cloud having a plurality of points on a basis of the image data and the depth data; b) by means of one or more receivers of the mobile capture apparatus, receiving signals of one or more global navigation satellite systems, and determining a first position indication of a position of the mobile capture apparatus in a global reference system; and c) determining a plurality of second position indications of the position of the mobile capture apparatus in a local reference system and a plurality of orientation indications of the orientation of the mobile capture apparatus in the respective local reference system, i) wherein the determining of one of the plurality of second position indications and of one of the plurality of orientation indications is effected by means of an inertial measurement unit of the mobile capture apparatus, which captures linear accelerations of the mobile capture apparatus in three mutually orthogonal principal axes of the local reference system and angular velocities of a rotation of the mobile capture apparatus about the three mutually orthogonal principal axes, and ii) wherein the 3D reconstruction device comprises more than one 2D camera, by means of which the image data and the depth data of the scene are captured and the determination of the one of the plurality of second position indications and of the one of the plurality of orientation indications is effected by means of visual odometry on the basis of the image data and the depth data; and iii) wherein the 3D reconstruction device comprises a LIDAR measuring device, by means of which the depth data of the scene are captured and the determination of the one of the plurality of second position indications and of the one of the plurality of orientation indications is effected by means of the visual odometry on the basis of the depth data; d) allocating a respective georeference to the points of the 3D point cloud on the basis of the first position indication and a plurality of the second position indications and also a plurality of the orientation indications, e) wherein the mobile capture apparatus is configured to be carried by a person and held by one or both hands of the person, the mobile capture apparatus has a housing having a largest edge length which is less than 50 cm, and wherein the one or more receivers, the inertial measurement unit, and the 3D reconstruction device are arranged in the housing.

    2. (canceled)

    3. (canceled)

    4. (canceled)

    5. The method as claimed in claim 1, wherein the image data and the depth data of a plurality of frames of the scene are captured and the 3D point cloud is generated.

    6. The method as claimed in claim 1, wherein the one or more receivers are configured to receive reference or correction signals from land-based reference stations.

    7. The method as claimed in claim 1, wherein a LIDAR measuring device of the 3D reconstruction device is configured as solid-state LIDAR.

    8. (canceled)

    9. (canceled)

    10. The method as claimed in claim 1, wherein the following are stored in a temporally synchronized manner in a storage unit of the mobile capture apparatus: a) the first position indication of the position in the global reference system and/or raw data assigned to the first position indication; and b) the one or more second position indications; and c) the one or more second orientation indications; and d) the captured image data and/or the captured depth data and/or the captured linear accelerations of the mobile capture apparatus in the three mutually orthogonal axes of the local reference system and also the angular velocities of the rotation of the mobile capture apparatus about the three mutually orthogonal axes.

    11. (canceled)

    12. The method as claimed in claim 1, wherein allocating the respective georeference to the points of the 3D point cloud is effected by means of sensor data fusion, wherein a factor graph as a graphical model is applied for optimization purposes, wherein the sensor data fusion is based on a nonlinear equation system, on a basis of which an estimation of the position and of the orientation of the mobile capture apparatus is effected, and wherein on the basis of the image data and/or depth data captured by the 3D reconstruction device, at least one infrastructure element, such as a line or a connection element, is detected and classified and the estimation of the position and of the orientation of the mobile capture apparatus on the basis of the nonlinear equation system is additionally effected on the basis of the results of the detection and classification of the infrastructure element.

    13. (canceled)

    14. (canceled)

    15. The method as claimed in claim 1, wherein by means of the one or more receivers, signals from a maximum of three navigation satellites of a global navigation satellite system are received, wherein the respective georeference is allocated to the points of the 3D point cloud with an accuracy in a range of less than 10 cm, less than 5 cm, or less than 3 cm.

    16. The method as claimed in claim 1, wherein the second position indications of the position of the mobile capture apparatus and/or the orientation indications of the mobile capture apparatus assist, as prior information, a resolution of ambiguities of differential measurements of carrier phases in order to georeference infrastructure elements even if the one or more receivers report a failure or provide a usable signal only for a short time, the second position indications and/or orientation indications being determined by means of the inertial measurement unit.

    17. The method as claimed in claim 12, wherein, with the aid of the sensor data fusion, regions of infrastructure elements recorded multiple times or at different times are recognized and reduced to a temporally most recent captured region of the infrastructure elements.

    18. (canceled)

    19. The method as claimed in claim 1, wherein a plausibility of a temporal sequence of first position indications of the position of the capture apparatus in the global reference system is checked by determining a first velocity indication on the basis of the temporal sequence of first position indications, calculating a second velocity indication on the basis of the captured linear accelerations and angular velocities, and comparing the second velocity indication with the first velocity indication.

    20. The method as claimed in claim 1, wherein on the basis of the 3D point cloud and/or on the basis of the image data, at least one infrastructure element is detected and classified, and wherein at least one histogram of color and/or grayscale value information, and/or saturation value information and/or brightness value information and/or of an electromagnetic wave spectrum of a plurality of points of the 3D point cloud is generated for the detection, classification and/or segmentation.

    21. (canceled)

    22. (canceled)

    23. The method as claimed in claim 20, wherein in the histogram or histograms local maxima are detected and, among the local maxima, those maxima with the smallest separations with respect to a predefined color, saturation and brightness threshold value of an infrastructure element are detected.

    24. The method as claimed in claim 23, wherein a group of points, whose points do not exceed a predefined separation threshold value with respect to the color information of the detected local maxima, is extended iteratively by further points which do not exceed a defined geometric and color separation with respect to the points of the group, in order to form a locally continuous region of an infrastructure element with similar color information.

    25. (canceled)

    26. The method as claimed in claim 20, wherein for the detection, classification and/or segmentation of the infrastructure elements, color or grayscale value information of the captured image data and/or the captured depth data and associated label information are fed to an artificial neural network for training purposes.

    27. The method as claimed in claim 1, wherein for each detected infrastructure element, an associated 3D object is generated on the basis of the 3D point cloud.

    28. The method as claimed in claim 1, wherein an optical vacancy between two 3D objects is recognized and a connection 3D object in the form of a 3D spline is generated for closing the optical vacancy.

    29. The method as claimed in claim 28, wherein, for recognizing the optical vacancy, a feature of a first end of a first 3D object and the same feature of a second end of a second 3D object are determined, wherein the first and second features are compared with one another, and the first and second features are a diameter or a color or an orientation or a georeference.

    30. The method as claimed in claim 29, wherein the mobile capture apparatus is put into an optical vacancy mode and is moved proceeding from the first end to the second end.

    31. (canceled)

    32. (canceled)

    33. (canceled)

    34. The method as claimed in claim 1, wherein by means of a display device of the mobile capture apparatus, one or more of the following are displayed: i) a representation of the 3D point cloud; ii) a textured mesh model generated on the basis of the 3D point cloud and the image data of the more than one 2D camera; iii) 3D objects corresponding to infrastructure elements; iv) a 2D location plan; v) a parts list of infrastructure elements; vi) a superposition of image data of a 2D camera of the capture apparatus with a projection of one or more 3D objects corresponding to an infrastructure element; vii) a superposition of image data of a 2D camera of the capture apparatus with a projection of a plurality of points of the 3D point cloud.

    35. A mobile capture apparatus for the positionally correct capture of exposed infrastructure elements having a diameter less than 30 cm and which are arranged underground in an open excavation, comprising: a 3D reconstruction device for capturing image data and depth data of a scene containing at least one exposed infrastructure element arranged underground, and for generating a 3D point cloud having a plurality of points on the basis of the image data and the depth data; one or more receivers for receiving signals of one or more global navigation satellite systems and for determining a first position indication of the position of the capture apparatus in a global reference system; an inertial measurement unit for determining a second position indication of the position of the capture apparatus in a local reference system and an orientation indication of the orientation of the capture apparatus in the local reference system, wherein the inertial measurement unit is designed to capture linear accelerations of the mobile capture apparatus in three mutually orthogonal principal axes of the local reference system and angular velocities of the rotation of the mobile capture apparatus about the three mutually orthogonal principal axes; wherein the 3D reconstruction device comprises more than one 2D camera, by means of which the image data and the depth data of the scene are capturable, wherein a second position indication of the position of the capture apparatus in the local reference system and the orientation indication are determinable by means of visual odometry on the basis of the image data and the depth data; wherein the 3D reconstruction device comprises a LIDAR measuring device, by means of which depth data of the scene are capturable, wherein a second position indication of the position of the capture apparatus in the local reference system and the orientation indication are determinable by means of visual odometry on the basis of the depth data;
wherein the capture apparatus is configured to allocate a respective georeference to the points of the 3D point cloud, on the basis of the first position indication and a plurality of the second position indications and also a plurality of the orientation indications; wherein the mobile capture apparatus is able to be carried by a person, wherein the mobile capture apparatus is able to be held by both hands of a person, preferably by one hand of a person, and has a housing, the largest edge length of which is less than 50 cm, wherein the receiver(s), the inertial measurement unit and the 3D reconstruction device are arranged in the housing.

    36. (canceled)

    37. (canceled)

    38. (canceled)

    39. (canceled)

    40. (canceled)

    41. (canceled)

    42. (canceled)

    43. (canceled)

    44. (canceled)

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0092] Further details and advantages of the invention are explained below on the basis of the exemplary embodiments shown in the figures, in which:

    [0093] FIG. 1 shows one exemplary embodiment of a mobile capture apparatus according to the invention in a schematic block illustration;

    [0094] FIG. 2 shows one exemplary embodiment of a method according to the invention for capturing exposed infrastructure elements situated underground in a flow diagram;

    [0095] FIG. 3 shows one exemplary projection of a 3D point cloud;

    [0096] FIG. 4 shows one exemplary representation of a scene;

    [0097] FIGS. 5, 6 show representations of construction projects in which the invention can be used;

    [0098] FIG. 7 shows a block diagram for elucidating the processes when allocating the georeference to the points of the 3D point cloud;

    [0099] FIG. 8 shows a schematic representation of a plurality of scenes;

    [0100] FIG. 9a shows a plan view of an excavation with a plurality of at least partly optically concealed infrastructure elements; and

    [0101] FIG. 9b shows a plan view of the excavation in accordance with FIG. 9a with a recognized and closed optical vacancy.

    DETAILED DESCRIPTION

    [0102] FIG. 1 illustrates a block diagram of one exemplary embodiment of a mobile capture apparatus 1 for capturing exposed infrastructure elements situated underground, in particular in an open excavation. The mobile capture apparatus 1 comprises, inter alia, one or more receivers 2, each comprising a receiving installation for receiving and processing signals of one or more global navigation satellite systems and for determining a first position of the capture apparatus in the global reference system on the basis of time-of-flight measurements of the satellite signals. The receiver 2, in particular the receiving installation of the receiver 2, can be connected to one or more antennas, preferably arranged outside the housing 9 of the mobile capture apparatus 1, particularly preferably on an outer contour of the housing 9. Alternatively, the antenna can be arranged within the housing 9. This first position of the capture apparatus 1 in the global reference system can be improved in particular by means of a reference station or the service of a reference network. The mobile capture apparatus 1 also contains a 3D reconstruction device 4 for capturing image data and/or depth data of a scene, in particular of a frame of a scene containing exposed infrastructure elements situated underground. Furthermore, the mobile capture apparatus 1 comprises an inertial measurement unit 3 for measuring the accelerations along the principal axes and the angular velocities of the rotations of the mobile capture apparatus 1. Furthermore, a plurality of second position indications of the position of the capture apparatus are estimated by means of visual odometry on the basis of the image data and/or depth data and by means of the inertial measurement unit 3 through simultaneous localization and mapping (SLAM).
In particular, the plurality of second position indications of the position of the capture apparatus 1 in a local reference system and the plurality of orientation indications of the orientation of the capture apparatus 1 in the respective local reference system are determined, [0103] a. wherein the determination of one of the second position indications and of one of the orientation indications is effected by means of an inertial measurement unit 3 of the mobile capture apparatus 1, which captures linear accelerations of the mobile capture apparatus 1 in three mutually orthogonal principal axes of the local reference system and angular velocities of the rotation of the mobile capture apparatus 1 about these principal axes, and/or [0104] b. wherein the 3D reconstruction device 4 comprises one or more 2D cameras, by means of which the image data and/or the depth data of the scene are captured and the determination of one of the second position indications and of one of the orientation indications is effected by means of visual odometry on the basis of the image data and/or the depth data; and/or [0105] c. wherein the 3D reconstruction device 4 comprises a LIDAR measuring device, by means of which the depth data of the scene are captured and the determination of one of the second position indications and of one of the orientation indications is effected by means of visual odometry on the basis of the depth data.

    [0106] The receiver(s) 2, the inertial measurement unit 3 and the 3D reconstruction device 4 are arranged in a common housing 9.

    [0107] The housing 9 has dimensions that allow the mobile capture apparatus 1 to be held by a user with both hands, preferably in a single hand. The housing 9 has a largest edge length that is less than 50 cm, preferably less than 40 cm, particularly preferably less than 30 cm, for example less than 20 cm.

    [0108] Further components of the mobile capture apparatus 1 that are likewise arranged in the housing 9 are a laser pointer 5, a data processing device 6, a storage unit 7, a communication device 10 and a display device 8.

    [0109] The laser pointer 5 can be used for the optical marking of infrastructure elements and/or for supplementary distance measurement, and is arranged in the housing or frame 9 in such a way that it can generate a laser beam pointing in the direction of the scene captured by the 3D reconstruction device 4, for example at the center of that scene.

    [0110] The data processing device 6 is connected to the receiver(s) 2, the inertial measurement unit 3 and the 3D reconstruction device 4, such that the individual measured and estimated data and also the image data can be fed to the data processing device 6. Furthermore, the laser pointer 5, the storage unit 7 and the display device 8 are connected to the data processing device 6.

    [0111] The capture apparatus 1 contains a communication device 10, configured in particular for wireless communication, for example by means of Bluetooth, WLAN or mobile radio.

    [0112] The display device 8 serves for visualizing the infrastructure elements captured by means of the capture apparatus 1. The display device 8 is preferably embodied as a combined display and operator control device, for example in the manner of a touch-sensitive screen (touchscreen).

    [0113] The mobile capture apparatus 1 shown in FIG. 1 can be used in a method for capturing exposed infrastructure elements situated underground. One exemplary embodiment of such a method 100 shall be explained below with reference to the illustration in FIG. 2.

    [0114] In the method 100 for capturing infrastructure elements of an underground line network in an open excavation by means of a mobile capture apparatus 1, in a capturing step 101, by means of one or more receivers 2 of the mobile capture apparatus 1, signals of one or more global navigation satellite systems are received and processed and also one or more position indications of the position of the capture apparatus 1 in the global reference system are determined. At the same time, by means of a 2D camera of the mobile capture apparatus 1, said 2D camera being provided as part of the 3D reconstruction device 4, image data of a scene containing exposed infrastructure elements situated underground are captured. A LIDAR measuring device of the 3D reconstruction device captures image data and/or depth data of the scene. Furthermore, a plurality of second position indications of the position of the capture apparatus are estimated by means of visual odometry on the basis of the image data and/or depth data and by means of the inertial measurement unit 3 through simultaneous localization and mapping (SLAM). The inertial measurement unit 3 is designed to capture linear accelerations of the mobile capture apparatus 1 in three mutually orthogonal principal axes of the local reference system and angular velocities of the rotation of the mobile capture apparatus 1 about these principal axes. The capture apparatus 1 is carried by a person, preferably by both hands of a person, particularly preferably by one hand of a person.

    [0115] The estimated second position indications in the local reference system, the estimated orientation indications in the local reference system, the measured first position in the global reference system, the measured accelerations along the principal axes and the measured angular velocities of the rotations of the mobile capture apparatus 1 about the principal axes, and the captured image data are stored in a synchronized manner in the storage unit 7 of the capture apparatus 1. The user can move with the capture apparatus 1 during the capturing step 101, for example along an exposed infrastructure element. The synchronized storage of these data ensures that the data can be processed correctly in the subsequent method steps. In a subsequent reconstruction step 102, the image data captured by the 3D reconstruction device are conditioned in such a way that a 3D point cloud having a plurality of points and color information for the points is generated; the result is therefore referred to here as a colored 3D point cloud.
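    The temporally synchronized storage in the storage unit 7 can be sketched as a timestamp-ordered record store from which later processing steps retrieve the sample of a given sensor closest to a frame's capture time. The class and method names below are illustrative assumptions, not the patented design:

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class SynchronizedStore:
    """Illustrative sketch: every sensor sample is stored with its capture
    timestamp so that GNSS fixes, IMU samples and camera frames can later
    be aligned in time."""
    records: list = field(default_factory=list)  # kept sorted as (t, sensor, payload)

    def store(self, t, sensor, payload):
        # Insert while keeping the record list sorted by timestamp.
        bisect.insort(self.records, (t, sensor, payload))

    def nearest(self, t, sensor):
        """Return the payload of `sensor` captured closest to time t."""
        candidates = [(abs(rt - t), p) for rt, s, p in self.records if s == sensor]
        return min(candidates)[1]

store = SynchronizedStore()
store.store(0.00, "gnss", (500.0, 200.0))
store.store(0.02, "imu", (0.1, 0.0, 9.8))
store.store(0.03, "gnss", (500.1, 200.0))
fix = store.nearest(0.025, "gnss")  # the GNSS fix nearest to a frame at t=0.025
```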

    [0116] In a georeferencing step 103, a first position indication in a geodetic reference system, for example an officially recognized coordinate system, is then allocated to the points of the 3D point cloud on the basis of the estimated second position indications of the 3D reconstruction device 4 in the local reference system, the estimated orientations of the 3D reconstruction device 4 in the local reference system and the measured first positions of the mobile capture apparatus 1 in the global reference system and the measured accelerations of the mobile capture apparatus 1 along the principal axes and the measured angular velocities of the rotations of the mobile capture apparatus 1 about the principal axes of the mobile capture apparatus 1. In this respect, after the georeferencing step 103 a colored, georeferenced 3D point cloud is calculated and provided.
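    Concretely, the georeferencing of step 103 can be pictured, for each frame, as a rigid-body transform of the locally reconstructed points into the global frame using the estimated pose. The sketch below assumes a yaw-only rotation for brevity (a full implementation would use the complete three-axis orientation estimate); all function names are illustrative:

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation about the vertical axis (yaw-only simplification)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def georeference(points_local, position_global, yaw):
    """Map local 3D points into the global frame: p_global = R @ p_local + t."""
    R = yaw_rotation(yaw)
    return points_local @ R.T + position_global

# A point 1 m ahead of the device; device at global (500, 200, 0), rotated 90 deg.
pts = np.array([[1.0, 0.0, 0.0]])
geo = georeference(pts, np.array([500.0, 200.0, 0.0]), np.pi / 2)
```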

    [0117] Afterward, in a recognition step 104, infrastructure elements are detected on the basis of the color information of the data. For the detection, classification and/or segmentation of the infrastructure elements, color information of the captured image data is compared with predefined color information. Alternatively or additionally, a marking of the infrastructure elements may have been effected by the user during the capture of the scene by means of the laser pointer 5. The marking by the laser pointer 5 can be detected in the image data and used for detecting the infrastructure elements. As a result of the recognition step 104, a plurality of image points of the image data, in particular a plurality of points of the colored, georeferenced 3D point cloud, are allocated in each case to a common infrastructure element, for example a line element or a line connection element. The illustration in FIG. 3 shows one exemplary image representation of a recognized infrastructure element in a 2D projection.
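    The comparison of captured color information with predefined color information in recognition step 104 can be sketched as a hue-threshold test on the colored point cloud. The reference hues and tolerance below are assumed values for illustration, not values from the patent:

```python
import numpy as np

# Assumed reference hues (0..1 hue circle) for common line colors;
# real mappings depend on local marking conventions.
REFERENCE_HUES = {"gas_yellow": 0.15, "water_blue": 0.60, "telecom_green": 0.33}

def detect_by_color(point_hues, reference_hue, tolerance=0.05):
    """Boolean mask of points whose hue lies within `tolerance` of the
    predefined reference hue, accounting for hue wrap-around."""
    diff = np.abs(point_hues - reference_hue)
    diff = np.minimum(diff, 1.0 - diff)  # shortest distance on the hue circle
    return diff <= tolerance

hues = np.array([0.14, 0.61, 0.90, 0.33])   # hue of each point in the cloud
mask = detect_by_color(hues, REFERENCE_HUES["telecom_green"])
```

In practice such a mask would seed the iterative region growing of claim 24, extending the matched points by geometrically and chromatically similar neighbors.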

    [0118] In a subsequent data conditioning step 105, the data generated in the recognition step are conditioned and the infrastructure elements contained therein are extracted. The conditioning can be effected by means of the data processing device 6. In this case, various types of conditioning are possible, which can be carried out alternatively or cumulatively: in the data conditioning step 105, 3D objects corresponding to the captured infrastructure elements can be generated, such that a 3D model of the underground line network is produced. Furthermore, a projection of the 3D point cloud can be calculated. It is possible to generate a 2D location plan in which the detected infrastructure elements are reproduced. Furthermore, a parts list of the recognized infrastructure elements can be generated.

    [0119] In a visualization step 106, by means of the display device 8 of the mobile capture apparatus 1, [0120] a representation of the 3D point cloud and/or [0121] a 2D location plan and/or [0122] a parts list of infrastructure elements and/or [0123] a superposition of image data of a 2D camera of the capture apparatus with a projection of one or more 3D objects corresponding to an infrastructure element and/or [0124] a superposition of image data of a 2D camera of the capture apparatus with a projection of a plurality of points of the 3D point cloud
    can then be displayed.

    [0125] FIG. 4 visualizes an application of the method according to the invention and of the apparatus according to the invention. A plurality of frames of a recorded scene containing a multiplicity of infrastructure elements 200, 200′ of a distribution network are illustrated. The infrastructure elements 200, 200′ are fiber-optic cables and telecommunication cables, which are laid in a common excavation, in some instances without any spacing between them. The diameter of these infrastructure elements 200, 200′ is less than 30 cm, in some instances less than 20 cm. Some infrastructure elements 200′ have a diameter of less than 10 cm. A person 201 is standing in the open excavation and using a mobile capture apparatus 1 (not visible in FIG. 4) for capturing the exposed infrastructure elements 200, 200′ by means of the method according to the invention.

    [0126] The representations in FIGS. 5 and 6 show typical construction sites for laying infrastructure elements of underground distribution networks in a town/city environment. These construction sites are situated in a town/city road area and are distinguished by excavations having a depth of 30 cm to 2 m. Around the excavation the space available is restricted and accessibility to the excavation is limited in part by parked automobiles and/or constant road traffic. The town/city environment of the excavation is often characterized by shading of the GNSS signals and of mobile radio reception.

    [0127] FIG. 7 shows a block diagram illustrating the data flow for generating the 3D point cloud and allocating the georeferences to the points of the point cloud. As data sources or sensors, the mobile capture apparatus 1 comprises the inertial measurement unit 3, the receiver 2 for the signals of the global navigation satellite system including a mobile radio interface 302, a LIDAR measuring device 303—embodied here as a solid-state LIDAR measuring device—of the 3D reconstruction device 4 and also a first 2D camera 304 of the 3D reconstruction device 4 and optionally a second 2D camera 305 of the 3D reconstruction device 4.

    [0128] The data provided by these data sources or sensors are stored in a synchronized manner in a storage unit 7 of the mobile capture apparatus (step 306). That means that [0129] the first position indication of the position in the global reference system and/or raw data assigned to this position indication; and [0130] the one or more second position indications; and [0131] the one or more second orientation indications; and [0132] the captured image data and/or the captured depth data and/or the captured linear accelerations of the mobile capture apparatus 1 in three mutually orthogonal axes of the local reference system and also the angular velocities of the rotation of the mobile capture apparatus 1 about these axes;
    are stored in a temporally synchronized manner in the storage unit 7 of the capture apparatus 1.

    [0133] By means of the LIDAR measuring device 303, the depth data of the scene are captured and one of the second position indications and one of the orientation indications are determined by means of visual odometry on the basis of the depth data. On the basis of the image data and/or depth data determined by the LIDAR measuring device 303, a local 3D point cloud having a plurality of points is generated, cf. block 307.

    [0134] By means of the first 2D camera 304 and optionally the second 2D camera 305, the image data and/or the depth data of the scene 350 are captured and one of the second position indications and one of the orientation indications are in each case determined by means of visual odometry on the basis of the respective image data and/or the depth data of the 2D camera 304 and optionally 305. For this purpose, feature points are extracted, cf. block 308 and optionally 309.

    [0135] Furthermore, on the basis of image data and/or depth data captured by the 3D reconstruction device 4, at least one infrastructure element, in particular a line or a connection element, is detected and classified and optionally segmented, cf. block 310. In this case, one or more of the following items of information are obtained: color of an infrastructure element, diameter of an infrastructure element, course of an infrastructure element, bending radius of an infrastructure element, first and second position indications of the mobile capture apparatus. The detection, classification and optionally segmentation can be effected by means of an artificial neural network configured as part of a data processing device of the mobile capture apparatus, in particular as software and/or hardware.

    [0136] The mobile capture apparatus can optionally comprise a device for voice control. Auditory information used for detecting and classifying the infrastructure elements and/or for allocating the georeference to the points of the 3D cloud can be captured via the device for voice control.

    [0137] The output data of blocks 307, 308, 309 and 310, present as local 2D data, are first transformed into 3D data (block 311), in particular by back projection.

    [0138] The data of a plurality of frames 350, 351, 352 of a scene that have been transformed in this way are then fed to sensor data fusion 312, which carries out an estimation of the position and of the orientation of the mobile capture apparatus 1 on the basis of a nonlinear equation system. A factor graph is preferably used for the sensor data fusion 312, which factor graph represents the relationships between the different variables and factors. In this context, the motion information (angular velocities, orientation indications, etc.) added sequentially for each frame can be fused with carrier phase observations (GNSS factors) in a bundle adjustment. In this case, the GNSS factors represent direct observations of the georeferenced position of a frame, whereas the relative pose factors yield information about the changes in pose between the frames, and feature point factors link the local location references (e.g. recognizable structures and/or objects) detected in the image recordings and establish the spatial reference to the surroundings. Furthermore, the results of the detection, classification and/or segmentation of infrastructure elements (color information, geometric application-specific features such as e.g. diameter, course, bending radii, first and second position indications of the mobile capture apparatus, etc.) can concomitantly influence the sensor data fusion. The result of the sensor data fusion 312 is a continuous, globally aligned 3D point cloud of all frames of a scene, on the basis of which all infrastructure elements can be extracted three-dimensionally and in a georeferenced manner with an absolute accuracy of a few centimeters.
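
    The interplay of absolute GNSS factors and relative pose factors can be illustrated by a drastically simplified, assumed sketch: for one-dimensional frame positions both factor types are linear, so they can be stacked into a single weighted least-squares system (a real factor-graph bundle adjustment, e.g. over 6-DoF poses, is nonlinear and iterative):

```python
import numpy as np

def fuse(gnss, rel, w_gnss=1.0, w_rel=10.0):
    """Fuse absolute (GNSS-like) observations of n frame positions
    with n-1 relative pose observations in one linear least-squares
    problem. Illustrative 1-D sketch of factor-graph fusion."""
    n = len(gnss)
    rows, b = [], []
    for i, z in enumerate(gnss):          # absolute GNSS factors
        r = np.zeros(n); r[i] = w_gnss
        rows.append(r); b.append(w_gnss * z)
    for i, d in enumerate(rel):           # relative pose factors
        r = np.zeros(n); r[i] = -w_rel; r[i + 1] = w_rel
        rows.append(r); b.append(w_rel * d)
    A = np.vstack(rows)
    x, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    return x
```

    With noisy GNSS fixes and accurate relative odometry (high w_rel), the fused trajectory keeps the smooth relative geometry while being anchored to the georeferenced positions; this is the same trade-off the factor graph of block 312 resolves at full scale.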

    [0139] The illustration in FIG. 8 shows a plan view of a portion of a distribution network with a plurality of infrastructure elements 200 that were captured by means of the method according to the invention and the apparatus according to the invention. Regions that were captured as part of a common scene, i.e. as part of a continuous sequence of a plurality of frames, are marked by a box 360. The scenes are recorded in temporal succession, for example whenever the respective section of the distribution network is exposed. As a result of overlap, some overlap regions 361 are contained in two different scenes and are thus recorded twice. The temporal sequence of the scenes may extend over a number of days. These scenes are combined in the context of the sensor data fusion, with the result that a single, common 3D point cloud of the distribution network is generated which contains no doubly recorded regions. In this case, it is advantageous if, with the aid of the sensor data fusion, regions of infrastructure elements recorded multiply or at different times, such as overlaps between two recordings, are recognized and reduced to the most recently captured regions of the infrastructure elements.
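
    Reducing multiply recorded overlap regions 361 to the most recent capture could, under assumed simplifications (point-wise nearest-neighbour comparison, one timestamp per scene), be sketched as follows; the patent itself leaves the reduction method open:

```python
import numpy as np

def dedup_overlaps(scenes, radius=0.05):
    """scenes: list of (timestamp, Nx3 point array). In overlap
    regions, keep only points from the most recent scene.
    Illustrative sketch, not the patented method."""
    scenes = sorted(scenes, key=lambda s: s[0], reverse=True)  # newest first
    kept = []
    for t, pts in scenes:
        if not kept:
            kept.append(pts)
            continue
        existing = np.vstack(kept)
        # drop older points closer than `radius` to any newer point
        d = np.linalg.norm(pts[:, None, :] - existing[None, :, :], axis=2)
        kept.append(pts[d.min(axis=1) > radius])
    return np.vstack(kept)
```

    A production implementation would use a spatial index (e.g. a k-d tree) instead of the dense distance matrix, since scenes may contain millions of points.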

    [0140] FIG. 9a shows a plan view of a part of a distribution network which was laid partly in a closed manner of construction, e.g. by means of press drilling. During the capture of this part of the distribution network, a part of the infrastructure elements 200 arranged underground is not optically capturable by the mobile capture apparatus 1 on account of concealment, cf. concealed region 400. A total of four such partly concealed infrastructure elements are illustrated in FIG. 9a. An optical vacancy thus arises in the 3D point cloud or in the network defined by the 3D objects. In accordance with one configuration of the present invention, the optical vacancy between two 3D objects 401, 402 corresponding to a first infrastructure element 200 is recognized, and a connection 3D object 403, in particular a 3D spline, is generated for closing the optical vacancy, cf. FIG. 9b. For recognizing the optical vacancy, one or more features of a first end of a first 3D object 401 and the same feature(s) of a second end of a second 3D object 402 are determined, and the features of the two ends are compared with one another. The features can be, for example, the diameter and/or the color and/or the orientation and/or position indications. Alternatively, provision can be made for the user of the mobile capture apparatus to put the latter into an optical vacancy mode, for example by activating an operator control element of the mobile capture apparatus. In the optical vacancy mode, the operator can move the mobile capture apparatus above the concealed infrastructure element, proceeding from the end of the infrastructure element corresponding to the first end of the first 3D object 401, along an optical vacancy trajectory as far as the end of the infrastructure element 200 corresponding to the second end of the second 3D object 402. The mobile capture apparatus 1 can then generate a connection 3D object 403 connecting the first end of the first 3D object 401 to the second end of the second 3D object 402, said connection 3D object being illustrated in FIG. 9b.
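
    Matching the two 3D-object ends and bridging them with a connection 3D object might be sketched as follows; the feature tolerances and the choice of a cubic Hermite curve for the 3D spline are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def ends_match(end_a, end_b, tol_d=0.01, max_gap=5.0):
    """Compare features of two 3D-object ends. Each end is a dict with
    'pos' (3,), unit 'dir' (3,) and 'diameter'. Tolerances assumed."""
    if abs(end_a['diameter'] - end_b['diameter']) > tol_d:
        return False
    gap = end_b['pos'] - end_a['pos']
    dist = np.linalg.norm(gap)
    if dist > max_gap:
        return False
    # the first end's direction should roughly point toward the second
    return float(np.dot(end_a['dir'], gap / dist)) > 0.7

def hermite_bridge(end_a, end_b, n=20):
    """Cubic Hermite curve closing the optical vacancy between two
    matched ends, honoring the end directions as tangents."""
    p0, p1 = end_a['pos'], end_b['pos']
    L = np.linalg.norm(p1 - p0)
    m0, m1 = end_a['dir'] * L, end_b['dir'] * L
    t = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

    Because the tangents are taken from the end orientations, the generated connection 3D object joins both 3D objects smoothly, which is the behavior one would want for a pipe run laid by press drilling.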

    REFERENCE SIGNS

    [0141] 1 Mobile capture apparatus
    [0142] 2 One or more receivers
    [0143] 3 Inertial measurement unit
    [0144] 4 3D reconstruction device
    [0145] 5 Laser pointer
    [0146] 6 Data processing device
    [0147] 7 Storage unit
    [0148] 8 Display device
    [0149] 9 Housing
    [0150] 10 Communication device
    [0151] 100 Method
    [0152] 101 Data capturing step
    [0153] 102 Reconstruction step
    [0154] 103 Georeferencing step
    [0155] 104 Recognition step
    [0156] 105 Data conditioning step
    [0157] 106 Visualization step
    [0158] 200, 200′, 200″ Infrastructure element
    [0159] 201 Person
    [0160] 302 Mobile radio interface
    [0161] 303 LIDAR measuring device
    [0162] 304 2D camera
    [0163] 305 2D camera
    [0164] 306 Synchronization
    [0165] 307 Generation of local 3D point cloud
    [0166] 308 Extraction of feature points
    [0167] 309 Extraction of feature points
    [0168] 310 Detection and classification
    [0169] 311 Back projection
    [0170] 312 Sensor data fusion
    [0171] 350, 351, 352 Frame
    [0172] 360 Scene
    [0173] 361 Overlap region
    [0174] 400 Optically concealed region
    [0175] 401, 402 3D object
    [0176] 403 Connection 3D object