DIGITAL REALITY PLATFORM PROVIDING DATA FUSION FOR GENERATING A THREE-DIMENSIONAL MODEL OF THE ENVIRONMENT
20230042369 · 2023-02-09
Assignee
- Leica Geosystems AG (Heerbrugg, CH)
- HEXAGON GEOSYSTEMS SERVICES AG (Heerbrugg, CH)
- LUCIAD NV (Leuven, BE)
Inventors
- Burkhard BÖCKEM (Jonen, CH)
- Jürgen DOLD (Sempach, CH)
- Pascal STRUPLER (Ennetbaden, CH)
- Joris SCHOUTEDEN (Kessel-Lo, BE)
- Daniel BALOG (Merchtem, BE)
CPC classification
- G06F30/12
- G06F3/04815
- G06T19/20
- G01C15/00
- G06F30/13
International classification
- G01C15/00
- G06F3/04815
- G06F30/13
- G06T17/20
Abstract
The present invention relates to three-dimensional reality capturing of an environment, wherein data of various kinds of measurement devices are fused to generate a three-dimensional model of the environment. In particular, the invention relates to a computer-implemented method for registration and visualization of a 3D model provided by various types of reality capture devices and/or by various surveying tasks.
Claims
1-85. (canceled)
86. A computer-implemented method, comprising: reading input data providing a translocal 3D mesh of an environment and a local 3D mesh of an item within the environment, generating on an electronic graphical display a 3D environment visualization of the translocal 3D mesh, inserting a 3D item visualization of the local 3D mesh into the 3D environment visualization, wherein the 3D item visualization is moveable within the 3D environment visualization by means of touchscreen input or mouse input, such that a pre-final placement of the 3D item visualization within the 3D environment visualization is settable by user-input, using the pre-final placement to automatically incorporate the local 3D mesh into the translocal 3D mesh to form a combined 3D mesh, wherefore: a section of the local 3D mesh corresponding to a spatial border part, particularly a border line, of the 3D item visualization, considered in the pre-final placement, is compared to a section of the translocal 3D mesh corresponding to an adjacent part, particularly an adjacent line, of the 3D environment visualization, the adjacent part being adjacent to the spatial border part, and based on said comparison, a snapping-in is carried out such that a final placement of the 3D item visualization within the 3D environment visualization, and accordingly a final incorporation of the local 3D mesh into the translocal 3D mesh, is automatically set by refining the pre-final placement in such a way that a spatial discrepancy between the spatial border part of the 3D item visualization and the adjacent part of the 3D environment visualization is minimized.
87. The method according to claim 86, wherein the snapping-in is initiated in case an absolute vertical offset of at least part of the 3D item visualization with respect to a defined horizontal reference level within the 3D environment visualization falls below a defined snapping-in offset.
88. The method according to claim 87, wherein the snapping-in offset and/or the reference level is/are settable by user-input.
89. The method according to claim 86, the method further comprising at least one of the following features: for the snapping-in a correspondence between the local 3D mesh and the translocal 3D mesh is automatically determined by means of a feature matching algorithm identifying corresponding features within the local 3D mesh and a section of the translocal 3D mesh corresponding to a current overlap area between the 3D item visualization and the 3D environment visualization; for the snapping-in a geometric distortion between the local 3D mesh and the translocal 3D mesh is corrected; and as output of the snapping-in the local 3D mesh is automatically incorporated into the translocal 3D mesh.
90. The method according to claim 86, wherein: the final placement is used for identification of a fraction of the translocal 3D mesh representing a surface area which has a surface type corresponding to a surface area represented by a fraction of the local 3D mesh, the translocal 3D mesh is processed such that a texture characteristic provided by the translocal 3D mesh for the surface area represented by the fraction of the translocal 3D mesh is changed based on the fraction of the local 3D mesh.
91. The method according to claim 86, wherein: a visualization fusion is carried out such that, considered in a current placement of the 3D item visualization, in a section of the 3D environment visualization which corresponds to the 3D item visualization the 3D environment visualization is replaced by the 3D item visualization, and in a surrounding section, which is adjacent to the 3D item visualization and extends away from the 3D item visualization, the 3D environment visualization is replaced by a replacement visualization based on synthetic data, the replacement visualization providing a gapless transition between the 3D item visualization and the remainder of the 3D environment visualization.
92. A computer-implemented method, comprising: reading input data providing: a translocal 3D model of an environment providing for a textured 3D representation of the environment, a local 3D model of a subarea within the environment providing for a textured 3D representation of the subarea, and referencing information providing a position of the subarea within the environment, an identification of a fraction of the translocal 3D model representing a surface area which has a surface type corresponding to a surface area represented by a fraction of the local 3D model, wherein the identification takes into account the referencing information, and a processing of the translocal 3D model such that a texture characteristic provided by the translocal 3D model for the surface area represented by the fraction of the translocal 3D model is changed based on the fraction of the local 3D model.
93. The method according to claim 92, wherein the identification involves: an assignment of different surfaces within the local 3D model and of different surfaces within the translocal 3D model, respectively, into different surface classes by semantic and/or geometric classification, and a comparison of the local 3D model with the translocal 3D model in order to match surfaces assigned to corresponding classes.
94. The method according to claim 92, wherein the identification is based on at least one of: analyzing the local 3D model and the translocal 3D model with respect to at least a part of the subarea which is represented both by the local 3D model and the translocal 3D model; and analyzing a part of the local 3D model corresponding to an inside part of the subarea and a part of the translocal 3D model corresponding to an outside part of the subarea, wherein the inside part and the outside part immediately adjoin each other.
95. The method according to claim 92, wherein in the course of the processing of the translocal 3D model: data of the translocal 3D model are replaced or complemented by data of the local 3D model, and/or data of the translocal 3D model are replaced or complemented by synthetic data.
96. A computer-implemented method, comprising: reading input data providing a translocal 3D model of an environment and a local 3D model of an item within the environment, generating on an electronic graphical display a 3D environment visualization of the translocal 3D model, inserting a 3D item visualization of the local 3D model into the 3D environment visualization, wherein the 3D item visualization is moveable within the 3D environment visualization by means of touchscreen input or mouse input, such that a placement of the 3D item visualization within the 3D environment visualization is settable by user-input, wherefore a visualization fusion is carried out such that, considered in a current placement of the 3D item visualization, in a section of the 3D environment visualization which corresponds to the 3D item visualization the 3D environment visualization is replaced by the 3D item visualization, and in a surrounding section, which is adjacent to the 3D item visualization and extends away from the 3D item visualization, the 3D environment visualization is replaced by a replacement visualization based on synthetic data, the replacement visualization providing a gapless transition between the 3D item visualization and the remainder of the 3D environment visualization.
97. The method according to claim 96, wherein the replacement visualization provides within the surrounding section a flat, and particularly horizontal, ground section and a transition section providing a connection between the ground section and the remainder of the 3D environment visualization, particularly wherein the transition section provides a linear height profile transition.
98. The method according to claim 97, wherein the ground section connects to the 3D item visualization, particularly to the lowest vertical point of the 3D item visualization.
99. The method according to claim 96, wherein the visualization fusion comprises that, dynamically considered in each placement of the 3D item visualization, the 3D environment visualization corresponding to a fusion section comprising the respective area of the 3D item visualization, and particularly the respective area of the surrounding section, is: temporarily replaced by a visualization providing a, particularly horizontal, flat plane based on synthetic data, or temporarily replaced by a projection visualization providing a vertical projection of the 3D environment visualization onto a, particularly horizontal, flat plane, wherein the flat plane extends over the entire fusion section.
100. The method according to claim 99, wherein: the level of the flat plane is fixed, particularly wherein the flat plane is at a defined ground level, or the level of the flat plane dynamically corresponds to the respective level of the 3D item visualization within the 3D environment visualization, wherein the level of the flat plane or the correspondence between the level of the flat plane and the level of the 3D item visualization is settable by user-input.
101. The method according to claim 96, wherein the visualization fusion is initiated in case an absolute vertical offset of at least part of the 3D item visualization with respect to a defined horizontal reference level within the 3D environment visualization falls below a defined fusion offset.
102. The method according to claim 101, wherein the fusion offset and/or the reference level is/are settable by user-input.
103. A computer program product comprising program code, which, when executed by a computer, causes the computer to carry out the method according to claim 86.
104. The method according to claim 86, wherein the input data providing the translocal 3D model comprise at least one of: aerial surveying data of a surveying device specifically intended to be carried by at least one of an aircraft, a satellite, and a surveying balloon, and a 3D terrain and/or city model, particularly in the form of a 3D point cloud or a 3D vector file model, particularly a 3D mesh.
105. The method according to claim 86, wherein the input data providing the local 3D model comprise at least one of: data provided by a surveying station specifically intended to be stationary during data acquisition and comprising at least one of an imager and a laser-based ranging device, and data provided by a portable surveying device specifically intended to be carried by a human operator or a robotic vehicle, particularly an automated guided vehicle or an unmanned aerial vehicle, and to be moved during data acquisition, the portable surveying device comprising at least one of an imager and a laser-based ranging device.
Description
[0157] Different aspects relating to the computer-implemented methods according to the invention are described or explained in more detail below, purely by way of example, with reference to working examples shown schematically in the drawings. Identical elements are labelled with the same reference numerals in the figures. The described embodiments are generally not shown true to scale and they are also not to be interpreted as limiting the invention.
[0167] For example, the translocal 3D model is a city model based on aerial surveying by a Leica CityMapper-2, and the local 3D model is a building model based on (already registered) data of a Leica Pegasus:Backpack and a Leica RTC360 for the outside and a Leica BLK2GO for the inside. Typically, the level of detail of the local 3D model is higher than that of the overall city model. Here, the 3D models are provided in the form of 3D meshes. Alternatively, the models may be provided in the form of point clouds or any kind of vector file model.
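Purely as an illustration of how such mesh inputs differ, the following Python sketch computes the mean triangle area of a mesh as a crude level-of-detail proxy; the function name and the vertex/face array layout are assumptions for this example, not part of the patent.

```python
import numpy as np

def mean_triangle_area(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Mean face area as a crude level-of-detail proxy: a fine building-level
    mesh has much smaller average triangles than a coarse city-wide mesh
    covering the same ground. `vertices` is (N, 3); `faces` is (M, 3) indices."""
    a = vertices[faces[:, 0]]
    b = vertices[faces[:, 1]]
    c = vertices[faces[:, 2]]
    # Triangle area = half the norm of the cross product of two edge vectors.
    return float((np.linalg.norm(np.cross(b - a, c - a), axis=1) / 2.0).mean())
```

Under such a measure, the building-level mesh would typically yield a far smaller value than the city-wide mesh.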
[0168] Matching the local building model and the translocal city model typically requires information about a rough alignment and/or orientation of the two models with respect to each other, so that, for example, an automatic feature extraction and matching algorithm can precisely align the two models, e.g. to generate a common 3D model wherein the data of the translocal 3D model corresponding to the area 3 represented by the local 3D model are replaced by data of the local 3D model.
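The patent does not prescribe a particular matching algorithm; a conventional choice for the fine alignment step is iterative closest point (ICP) refinement seeded with the rough alignment. The sketch below is a minimal numpy/scipy version under that assumption, using the Kabsch algorithm for the per-iteration rigid fit; all function names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Kabsch algorithm: rotation R and translation t minimizing the
    squared distance between R @ src_i + t and dst_i."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp_refine(local_pts, translocal_pts, R0, t0, iterations=30):
    """Refine a rough alignment (R0, t0) of the local model against the
    translocal model by alternating nearest-neighbour matching and Kabsch."""
    tree = cKDTree(translocal_pts)
    R, t = R0, t0
    for _ in range(iterations):
        moved = local_pts @ R.T + t
        _, nearest = tree.query(moved)   # closest translocal point per point
        R, t = best_rigid_transform(local_pts, translocal_pts[nearest])
    return R, t
```

The rough alignment produced by the manual placement described below would serve as the initial transform (R0, t0).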
[0170] The 3D item visualization 2 is moveable within the 3D environment visualization 1 by means of touchscreen input or mouse input, wherein different input modes are provided to position and orient the 3D item visualization 2 within the 3D environment visualization 1. By way of example, each input mode restricts movement of the 3D item visualization 2 to exactly two degrees of freedom.
[0171] For example, as depicted from top to bottom of the figure, the 3D item visualization 2 has already been rotated into its correct orientation, wherein for finally arranging the represented building it is switched between two different input modes 4A, 4B, each restricting movement of the 3D item visualization 2 to a different subset of translational degrees of freedom. By way of example,

[0172] in a first input mode 4A, movement of the 3D item visualization 2 is restricted to translations along horizontal (orthogonal) x and y axes, wherein any rotation and the height 5 above ground level are kept fixed, and

[0173] in a second input mode 4B, movement of the 3D item visualization 2 is restricted to adapting the height 5 (along a z axis orthogonal to the x and y axes).
[0174] For example, the switch between input modes 4A, 4B may be based on a keystroke combination or a multi-touch gesture, such as sweeping with one finger for x-y movement and sweeping with two fingers for the height adjustment.
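As a sketch of how such input modes might be realized, the following maps drag input onto the translational degrees of freedom each mode leaves free; the mode keys mirror 4A and 4B above, while the gesture mapping and function names are assumptions for illustration.

```python
import numpy as np

# Translation masks per input mode: 4A moves in the horizontal x-y plane
# (rotation and height 5 stay fixed), 4B only adapts the height along z.
INPUT_MODES = {
    "4A": np.array([1.0, 1.0, 0.0]),
    "4B": np.array([0.0, 0.0, 1.0]),
}

def apply_drag(position: np.ndarray, drag_delta: np.ndarray, mode: str) -> np.ndarray:
    """Move the 3D item visualization, masking out the translational degrees
    of freedom that the active input mode locks."""
    return position + INPUT_MODES[mode] * drag_delta

def mode_for_gesture(num_fingers: int) -> str:
    # One-finger sweep -> x-y movement, two-finger sweep -> height adjustment,
    # following the example gesture mapping above.
    return "4A" if num_fingers == 1 else "4B"
```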
[0175] As soon as the 3D item visualization 2 is close to the ground, a snapping-in may be automatically carried out, wherein the pre-final placement is refined in such a way that a spatial discrepancy between the spatial border part of the 3D item visualization 2 and an adjacent part of the 3D environment visualization is minimized.
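A minimal sketch of such a snapping-in, assuming point samples along the item's spatial border and the adjacent environment part: the snap triggers once the vertical offset falls below a threshold (cf. the snapping-in offset of the claims), and the refinement is the least-squares translation towards the matched environment points. Thresholds and names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def snap_in(border_pts, adjacent_env_pts, offset,
            ground_level=0.0, snap_offset=0.5):
    """Refine the pre-final placement `offset` (a 3-vector translation of the
    item) once its lowest border point comes within `snap_offset` of the
    reference level: the mean residual to the matched environment points is
    the translation minimizing the mean squared discrepancy."""
    lowest = border_pts[:, 2].min() + offset[2]
    if abs(lowest - ground_level) > snap_offset:
        return offset                           # not close enough: no snap yet
    tree = cKDTree(adjacent_env_pts)
    _, nearest = tree.query(border_pts + offset)
    correction = (adjacent_env_pts[nearest] - (border_pts + offset)).mean(axis=0)
    return offset + correction
```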
[0176] After placing the 3D item visualization 2 at its end position 3, the relative configuration 6 between the 3D environment visualization 1 and the 3D item visualization 2 is locked and used, e.g. by an automatic feature extraction and matching algorithm, to precisely align the two models in order to generate a common 3D model, visualized in the bottom frame of the figure.
[0178] Here, reference information providing an alignment of the local 3D model and the translocal 3D model is used to support the identification of matching surfaces associated with similar texture characteristics, e.g. water surfaces, roof areas 7, or particular vegetation, e.g. similar kinds of trees 8. By way of example, the reference information is inherently provided in the case of registered 3D models. Alternatively, the reference information may be based on additional information, e.g. positioning information provided by metadata of the respective 3D models.
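The classification into surface classes could, for instance, combine face orientation with texture colour. The sketch below is one such heuristic; the class labels and thresholds are chosen purely for illustration, and a production system would more likely use a trained semantic segmentation model.

```python
import numpy as np

def classify_faces(face_normals: np.ndarray, face_colors: np.ndarray):
    """Assign each face a coarse surface class from its unit normal and
    its RGB texture colour (heuristic stand-in for the classification step)."""
    labels = []
    for n, (r, g, b) in zip(face_normals, face_colors):
        if n[2] > 0.9 and b > max(r, g):
            labels.append("water")        # horizontal and predominantly blue
        elif g > max(r, b):
            labels.append("vegetation")   # predominantly green, e.g. trees 8
        elif n[2] > 0.3:
            labels.append("roof")         # upward-facing, e.g. roof areas 7
        else:
            labels.append("facade")       # near-vertical surfaces
    return np.array(labels)

def matching_surfaces(local_labels, translocal_labels, surface_class):
    """Face indices of both models assigned to the same surface class, e.g.
    to transfer texture characteristics between matched roof or tree areas."""
    return (np.where(local_labels == surface_class)[0],
            np.where(translocal_labels == surface_class)[0])
```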
[0179] The top part of the figure shows a 3D environment visualization 1 based on the translocal 3D model and a 3D item visualization 2 based on the local 3D model to be inserted into the 3D environment visualization 1, and the bottom part shows a visualization of a common 3D model 9, wherein the extent of the subarea corresponding to the local 3D model is indicated by a dashed line 10.
[0180] By way of example, the identification is based on finding connecting surface types which immediately adjoin each other at the transition between the area 10 associated with the local 3D model and the neighboring translocal 3D model, e.g. as indicated for one of the tree areas 8. Furthermore, the reference information may also provide for the identification of matching surface types in an extended neighborhood around the area 10 associated with the local 3D model, e.g. as indicated for the roof area 7.
[0182] In particular, the text section 15 comprises a project name 16, the type 17 of surveying device(s) underlying the corresponding data set, and the date 18 of data acquisition and/or model creation. The text section 15 may comprise further information, e.g. an indication 19 that a data set is already registered or still unregistered.
[0183] Instead of simply rotating, the 3D thumbnail visualizations 11, 12 may move within the task list such that, more generally, a viewing state is changed, e.g. a change of orientation of the represented item, zooming in or out, or a lateral movement of the represented item.
[0184] By way of example, the viewing state of each of the 3D thumbnail visualizations 11, 12 is settable by touchscreen input or mouse input, e.g. wherein the viewing state can be set by moving a mouse cursor or a touchscreen input means along a longitudinal axis 20 of the associated list entry.
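One plausible mapping from the cursor position along the longitudinal axis 20 to a thumbnail viewing state is sketched below; the choice of yaw plus zoom as the controlled state, and the particular scaling, are assumptions for illustration.

```python
def viewing_state_from_cursor(cursor_x: float, entry_left: float,
                              entry_width: float):
    """Map the cursor position along the longitudinal axis of a list entry
    to a viewing state of its 3D thumbnail: here a yaw angle and a zoom."""
    u = min(max((cursor_x - entry_left) / entry_width, 0.0), 1.0)
    yaw_degrees = 360.0 * u      # full turn of the represented item per entry
    zoom = 1.0 + 0.5 * u         # gentle zoom-in towards the right edge
    return yaw_degrees, zoom
```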
[0186] The 3D environment visualization provides for setting different viewing perspectives, e.g. a virtual tour through the captured environment to examine the inside and outside of buildings with a customized level of detail. The accessible level of detail of visualized areas and buildings varies, e.g. depending on a user category of the current user or on a defined initialization setting of the translocal 3D visualization, e.g. wherein the 3D environment visualization is based only on data of certain surveying devices. In other words, some of the areas and buildings are visualized with a reduced level of detail compared to the level of detail inherently provided by the corresponding surveying data sets.
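A level-of-detail gate of this kind could be as simple as the following sketch; the user categories and decimation ratios are invented for illustration.

```python
# Illustrative access policy: which fraction of a data set's inherent detail
# a given user category may see (names and ratios are assumptions).
LOD_BY_CATEGORY = {"guest": 0.05, "member": 0.25, "owner": 1.0}

def accessible_detail(face_count: int, user_category: str) -> int:
    """Number of mesh faces shown to this user: full inherent detail for
    owners, a reduced level of detail for other user categories."""
    return int(face_count * LOD_BY_CATEGORY.get(user_category, 0.05))
```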
[0187] However, as indicated in the figure, the 3D environment visualization may indicate areas or buildings for which additional surveying data are available.
[0188] Further information may be an indication of the user who acquired the additional data, e.g. together with an indication of a skill level of this user, an indication of conditions during the data acquisition, and criteria for getting access to the data.
[0190] However, the computer program product is configured to cause a comparison of the local 3D models across the different user assignments in order to determine deviation information with regard to a current visualization of a building and available visualizations assigned to other users. Thus, a current user can be informed that deviating information is available for a certain currently visualized building 26. Furthermore, based on positioning of a mouse cursor or of a touchscreen input means 13 over the respective building visualization 26, a text bubble provides a comparison between current data characteristics 27 and additional data characteristics 28, e.g. a comparison of a quality parameter 25, the respectively used devices 23, and the respective creation dates 24.
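The text-bubble content could be assembled as in the following sketch, which reports only the fields that deviate between the current and the additional data set; the field names and example values are illustrative, not from the patent.

```python
def comparison_info(current: dict, additional: dict) -> dict:
    """Build the comparison between current data characteristics 27 and
    additional data characteristics 28, keeping only deviating fields."""
    fields = ("quality", "device", "created")
    return {f: {"current": current.get(f), "additional": additional.get(f)}
            for f in fields if current.get(f) != additional.get(f)}

# Example: informs the user that a newer, higher-quality scan exists.
info = comparison_info(
    {"quality": "medium", "device": "Leica RTC360", "created": "2020-05-01"},
    {"quality": "high", "device": "Leica BLK2GO", "created": "2022-11-15"},
)
```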
[0191] Thus, even though a user has no immediate access to the additional data set or the additional local 3D model, he is still able to assess the value of his current building visualization. For example, the computer program product may further provide a search and/or filtering of external data (not assigned to the current user), e.g. with regard to a quality parameter or a specific surveying device, wherein found data, particularly corresponding deviations, are specifically indicated in the 3D map view.
[0192] By way of example, the computer program product may also provide a functionality by which a user can mark deficient sections or items in his assigned data, wherein, even though he has no access to external data, the external data are searched for candidates to correct or fix the deficiency. Based on this search, the computer program product then initiates display of a preview for improvement, wherein the external data are visualized with a reduced level of detail compared to that inherently provided by the external local 3D model, i.e. potential “replacement” data are not fully displayed in the initial 3D visualization for the current user.
[0194] By way of example, as depicted by the top part of the figure, the 3D item visualization 2 is moved within the 3D environment visualization 1 towards its intended placement.
[0195] During visualization fusion, as depicted by the bottom part of the figure, a flat, particularly horizontal, ground section 32 is generated around the 3D item visualization 2.
[0196] By way of example, in order to make room for the 3D item visualization 2 and the ground section 32, the corresponding 3D environment visualization is vertically projected onto a horizontal flat plane 33. In other words, the 3D environment visualization is “pushed down” (as shown in the figure) and/or “pulled up” (not shown) to the horizontal flat plane 33.
[0197] In a transition section 34, the ground section 32 connects to the remainder 35 of the 3D environment visualization, e.g. in such a fashion, as depicted in the figure, that a linear height profile transition is provided between the flat ground section 32 and the remainder 35 of the 3D environment visualization.
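On a heightfield representation of the 3D environment visualization, the push-down onto the flat plane 33 and the linear height profile of the transition section 34 could be sketched as follows; parameterizing the fusion section by a radius and the transition section by a band width is an assumption for illustration.

```python
import numpy as np

def fuse_heights(env_z: np.ndarray, dist_to_item: np.ndarray, plane_z: float,
                 fusion_radius: float, transition_width: float) -> np.ndarray:
    """Heightfield sketch of the visualization fusion: inside the fusion
    section the environment is projected onto the flat plane 33; across the
    transition section 34 a linear height profile blends back to the
    remainder 35 of the environment. `dist_to_item` holds, per grid cell,
    the horizontal distance to the 3D item visualization."""
    z = env_z.copy()
    inside = dist_to_item <= fusion_radius
    z[inside] = plane_z                                  # ground section 32
    band = ((dist_to_item > fusion_radius)
            & (dist_to_item <= fusion_radius + transition_width))
    w = (dist_to_item[band] - fusion_radius) / transition_width  # 0 -> 1
    z[band] = (1.0 - w) * plane_z + w * env_z[band]      # linear transition
    return z
```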
[0200] Although the invention is illustrated above, partly with reference to some preferred embodiments, it must be understood that numerous modifications and combinations of different features of the embodiments can be made. All of these modifications lie within the scope of the appended claims.