CREATION OF A 3D CITY MODEL FROM OBLIQUE IMAGING AND LIDAR DATA
20170277951 · 2017-09-28
Assignee
Inventors
- Rüdiger WAGNER (Oberriet, CH)
- Anders EKELUND (Jonkoping, SE)
- Andreas Axelsson (Bankeryd, SE)
- Patrick STEINMANN (Heerbrugg, CH)
CPC classification
G01S17/86
PHYSICS
H04N13/282
ELECTRICITY
International classification
G01S17/02
PHYSICS
Abstract
A method and a hybrid 3D-imaging device for surveying of a city scape for creation of a 3D city model. According to the invention, lidar data is acquired simultaneously with the acquisition of imaging data for stereoscopic imaging, i.e. acquisition of imaging and lidar data in one go during the same measuring process. The lidar data is combined with the imaging data for generating a 3D point cloud for extraction of a 3D city model, wherein the lidar data is used for compensating and addressing particular problem areas of generic stereoscopic image processing, in particular areas with unfavourable lighting conditions and areas where the accuracy and efficiency of stereoscopic point matching and point extraction is strongly reduced.
Claims
1. A method for surveying of a city scape and for creation of a 3D city model of the surveyed city scape, the method comprising: acquiring imaging data of an area of the city scape, the imaging data being adapted for generation of 3D information, and simultaneous acquisition of lidar data for the area of the city scape; generating a 3D point cloud of the area of the city scape by processing of the imaging data regarding generation of 3D information; performing a quality assessment of the imaging data based on data classification, wherein at least part of the imaging data is assigned to a first class being defined by at least one of: a defined first classification criterion for the imaging data, and a defined second classification criterion within a first auxiliary 3D point cloud solely based on the imaging data; combining the imaging data of the first class with lidar data corresponding to a critical area comprising a fraction of the area of the city scape defined by the imaging data of the first class; and generating a 3D city model with automated building model extraction based on the 3D point cloud.
2. A method according to claim 1, wherein the generation of the 3D point cloud is additionally based on a quality assessment of the lidar data, wherein at least part of the lidar data is assigned to a second class being defined by at least one of: a defined third classification criterion within a second auxiliary 3D point cloud solely based on the lidar data, and a defined first comparison criterion for comparing the first auxiliary 3D point cloud solely based on the imaging data with a third auxiliary point cloud based on a combination of the imaging and the lidar data, wherein the imaging data of the first class is only combined with lidar data of the critical area where the critical area is overlapping the fraction of the area of the city scape defined by the lidar data of the second class.
3. A method according to claim 1, wherein the generation of the 3D point cloud is also based on a quality assessment of the lidar data based on data classification, wherein at least part of the lidar data is assigned to a third class being defined by at least one of: a defined fourth classification criterion within the second auxiliary 3D point cloud solely based on the lidar data, and a defined second comparison criterion for comparing the first auxiliary 3D point cloud solely based on the imaging data with the third auxiliary point cloud based on a combination of the imaging and the lidar data, wherein the imaging data corresponding to the fraction of the area of the city scape defined by the lidar data of the third class is combined with the lidar data of the third class.
4. A method according to claim 3, wherein at least one of the first to fourth classification criteria is based on a semantic classification, wherein the semantic classification comprises semantic classifiers defining at least one of: shadowing, a region with an occlusion, a region with vegetation, and a region with a homogeneous surface.
5. A method according to claim 1, wherein at least one of the first and second comparison criteria is based on at least one of: a signal-to-noise threshold, a resolution threshold, and a systematic error threshold.
6. A method according to claim 1, wherein the imaging data and the lidar data are acquired by one single hybrid 3D-imaging device, the lidar data being acquired for a selected region of the area within the surveyed city scape based on at least one of: an a-priori model of the surveyed city scape, and an analysis of the imaging data, or wherein the generation of the 3D point cloud is based on a photogrammetric method, wherein the photogrammetric method is adapted for processing at least one of: nadir and/or oblique images, multispectral images, normalized difference vegetation index images, building footprints, and a reference model comprising at least one of a digital terrain model, a digital elevation model, and a digital surface model.
7. A hybrid 3D-imaging device for surveying of a city scape to create a 3D city model of the surveyed city scape, the hybrid 3D-imaging device comprising: an imaging device for generating imaging data for an area of the city scape, the imaging data being adapted for generation of 3D information; a lidar device for generating lidar data for the area of the city scape simultaneously with the generation of the imaging data; and a control and processing unit adapted for: controlling the imaging device and the lidar device, generating a 3D point cloud for the area of the city scape based on the imaging and the lidar data, and generating a 3D city model with automated building model extraction based on the 3D point cloud, wherein the control and processing unit is further adapted to: process the imaging data regarding generation of 3D information, assess a quality of the imaging data based on data classification, wherein at least part of the imaging data is assigned to a first class being defined by at least one of: a defined first classification criterion for the imaging data, and a defined second classification criterion within a first auxiliary 3D point cloud solely based on the imaging data, and combine the imaging data of the first class with lidar data corresponding to a critical area comprising a fraction of the area of the city scape defined by the imaging data of the first class.
8. The hybrid 3D-imaging device according to claim 7, wherein the control and processing unit is adapted for generating the 3D point cloud with a quality assessment of the lidar data, wherein at least part of the lidar data is assigned to a second class being defined by at least one of: a defined third classification criterion within a second auxiliary 3D point cloud solely based on the lidar data, and a defined first comparison criterion for comparing the first auxiliary 3D point cloud solely based on the imaging data with a third auxiliary point cloud based on a combination of the imaging and the lidar data, wherein the imaging data of the first class is only combined with lidar data of the critical area where the critical area is overlapping the fraction of the area of the city scape defined by the lidar data of the second class.
9. The hybrid 3D-imaging device according to claim 7, wherein the control and processing unit is adapted for generating the 3D point cloud with a quality assessment of the lidar data based on data classification, wherein at least part of the lidar data is assigned to a third class being defined by at least one of: a defined fourth classification criterion within the second auxiliary 3D point cloud solely based on the lidar data, and a defined second comparison criterion for comparing the first auxiliary 3D point cloud solely based on the imaging data with the third auxiliary point cloud based on a combination of the imaging and the lidar data, wherein the imaging data corresponding to the fraction of the area of the city scape defined by the lidar data of the third class is combined with the lidar data of the third class.
10. The hybrid 3D-imaging device according to claim 9, wherein at least one of the first to fourth classification criteria is based on a semantic classification, in particular wherein the semantic classification comprises semantic classifiers defining at least one of: shadowing, a region with an occlusion, a region with vegetation, and a region with a homogeneous surface.
11. The hybrid 3D-imaging device according to claim 7, wherein at least one of the first and second comparison criteria is based on at least one of: a signal-to-noise threshold, a resolution threshold, and a systematic error threshold.
12. The hybrid 3D-imaging device according to claim 7, wherein: the hybrid 3D-imaging device is built as one single hybrid 3D-imaging device, the single hybrid 3D-imaging device being adapted for acquiring lidar data for a selected region of the area within the surveyed city scape based on at least one of: an a-priori model of the surveyed city scape, and an analysis of the imaging data, and the control and processing unit is adapted for generating the 3D point cloud with a photogrammetric method and for processing at least one of: nadir and/or oblique images, multispectral images, normalized difference vegetation index images, building footprints, and a reference model comprising at least one of a digital terrain model, a digital elevation model, and a digital surface model.
13. A computer program product for generating a 3D city model of a surveyed city scape, wherein the computer program product is stored on a control and processing unit and comprises program code configured for: automatically communicating with a database comprising imaging and lidar data of a surveyed city scape, and generating a 3D point cloud of the area of the city scape based on the imaging and lidar data, wherein the generation of the 3D point cloud is further based on: processing of the imaging data regarding generation of 3D information, a quality assessment of the imaging data based on data classification, wherein at least part of the imaging data is assigned to a first class being defined by at least one of: a defined first classification criterion for the imaging data, and a defined second classification criterion within a first auxiliary 3D point cloud solely based on the imaging data, and combining the imaging data of the first class with lidar data corresponding to a critical area comprising a fraction of the area of the city scape defined by the imaging data of the first class.
14. A hybrid 3D-imaging device for aerial surveying of a city scape, the hybrid 3D-imaging device comprising: one single sensor platform supporting: a nadir imaging camera, an oblique imaging camera, and a lidar device, wherein the nadir and oblique imaging cameras are arranged on the sensor platform on a circumferential area around the lidar device.
15. The hybrid 3D-imaging device according to claim 14, wherein the sensor platform supports: a nadir imaging camera, four oblique imaging cameras with oblique angles of 30-45 degrees with respect to the sensor platform, and a lidar device, wherein the four oblique imaging cameras all have different viewing directions from each other and wherein the four oblique imaging cameras and the nadir camera are placed circumferentially around the lidar device.
Description
DETAILED DESCRIPTION
[0090] Devices, methods, setups, and computer programs according to the invention are described or explained in more detail below, purely by way of example, with reference to working examples shown schematically in the drawing.
[0094] The diagrams of the figures should not be considered as being drawn to scale. Where appropriate, the same reference signs are used for the same features or for features with similar functionalities.
[0099] Point clouds from lidar data generally do not have such issues. However, due to the lower point density, their meshes have far less detail and are often not textured. On the other hand, lidar data does not depend on lighting conditions and provides 1st, 2nd, and 3rd returns, allowing it to see through vegetation. Therefore, according to the invention, generic (stereoscopic) imaging data is combined with lidar data, in particular for compensating and addressing particular problem areas of generic stereoscopic image processing where the accuracy and efficiency of point matching and point extraction is below average.
[0100] According to the invention, lidar data is used, for example, to provide ground reference in low-lighting areas and occlusions, and to improve point matching for vegetation areas and homogeneous surface areas such as water.
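The compensation scheme described in the two paragraphs above can be illustrated with a minimal sketch that substitutes lidar depth for photogrammetric depth in "critical areas". This is not part of the original disclosure: the simple intensity-threshold shadow criterion, function name, and array layout are hypothetical simplifications of the claimed classification criteria.

```python
import numpy as np

def fuse_point_clouds(image_depth, image_intensity, lidar_depth,
                      shadow_threshold=0.15):
    """Replace photogrammetric depth in critical areas with lidar depth.

    image_depth / lidar_depth: 2D arrays of per-pixel depth estimates.
    image_intensity: 2D array in [0, 1]; low values mark shadowed pixels.
    A mere intensity threshold stands in here for the 'first
    classification criterion'; semantic classifiers could be used instead.
    """
    # First class: imaging data in unfavourable lighting (shadow) areas.
    first_class = image_intensity < shadow_threshold
    # Keep image-derived depth elsewhere; fall back to lidar in shadow.
    fused = np.where(first_class, lidar_depth, image_depth)
    return fused, first_class

# Tiny demo: a 2x2 tile with one shadowed pixel (bottom-left).
img_d = np.array([[10.0, 11.0], [12.0, 13.0]])
img_i = np.array([[0.8, 0.9], [0.05, 0.7]])
lid_d = np.array([[10.2, 11.1], [12.5, 13.2]])
fused, mask = fuse_point_clouds(img_d, img_i, lid_d)
```

The same masking pattern extends to the other claimed criteria (occlusion, vegetation, homogeneous surfaces) by widening how `first_class` is computed.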
[0101] In particular, according to the invention, lidar data is acquired simultaneously with the acquisition of imaging data for stereoscopic imaging. Here, simultaneous acquisition means acquisition of the imaging and lidar data during the same measuring process, i.e. in one go when surveying a city scape. By combining the best of the two worlds of stereoscopic imaging and lidar, i.e. the high-resolution information of stereoscopic imaging and the lighting-independent information of lidar, the generation of a 3D city model is strongly improved.
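Because "simultaneous" here means acquisition within the same measuring process rather than on a shared clock tick, a downstream processor still has to associate individual lidar shots with image frames. One common approach, sketched below purely as an illustration (the function name and timestamps are hypothetical; frame times are assumed sorted), is nearest-timestamp matching:

```python
import bisect

def match_lidar_to_frames(lidar_times, frame_times):
    """Pair each lidar shot with the image frame closest in time.

    frame_times must be sorted in ascending order.
    """
    pairs = []
    for t in lidar_times:
        i = bisect.bisect_left(frame_times, t)
        # The nearest frame is either the one just before or just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(frame_times)]
        best = min(candidates, key=lambda j: abs(frame_times[j] - t))
        pairs.append((t, frame_times[best]))
    return pairs

# Two lidar shots matched against three frame timestamps (seconds).
print(match_lidar_to_frames([0.4, 1.6], [0.0, 1.0, 2.0]))
# → [(0.4, 0.0), (1.6, 2.0)]
```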
[0103] Here, the hybrid 3D-imaging device 11 comprises one single sensor platform 13 supporting exactly one nadir camera 14, particularly with multispectral bands, exactly four oblique RGB or RGBN cameras 15, in particular with oblique angles of 30-45 degrees, and exactly one lidar device 16, in particular wherein the lidar device is adapted for providing a Palmer scan. The four oblique imaging cameras 15 all have different viewing directions from each other, and the four oblique imaging cameras 15 and the nadir camera 14 are placed circumferentially around the lidar device 16, in particular with a mostly uniform angular separation and with a common distance from the center.
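The camera layout described above, a common distance from the central lidar device with mostly uniform angular separation, can be generated as follows. This is an illustrative sketch only; the radius, units, and function name are hypothetical and not taken from the disclosure.

```python
import math

def camera_positions(n_cameras=5, radius=0.3):
    """Place n_cameras on a circle around a central lidar at the origin.

    Returns (x, y) platform coordinates with uniform angular separation
    and a common distance 'radius' from the centre, matching the layout
    of four oblique cameras plus one nadir camera around the lidar.
    """
    positions = []
    for k in range(n_cameras):
        theta = 2 * math.pi * k / n_cameras  # uniform angular spacing
        positions.append((radius * math.cos(theta),
                          radius * math.sin(theta)))
    return positions
```

Each camera's oblique viewing direction (e.g. tilted 30-45 degrees outward along its own azimuth) would be assigned separately; this sketch covers only the circumferential placement.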
[0104] Depending on the acquisition area, a variety of sensor platform configurations may be possible to best support the simplified generation of the model. Thus, the hybrid sensor setup may vary depending on the application and the area to be surveyed.