SYSTEM AND METHOD FOR REFERENCING A DISPLAYING DEVICE RELATIVE TO A SURVEYING INSTRUMENT

20170337743 · 2017-11-23

Assignee

Inventors

CPC classification

International classification

Abstract

The invention relates to a method and a system for interrelating a displaying device relative to a surveying instrument, said displaying device and surveying instrument being spatially separated from each other and comprising communication means to communicate with each other, wherein the method comprises providing a first image from the surveying instrument and providing a second image from the displaying device, wherein the first image and the second image at least in part cover the same scenery, detecting a plurality of corresponding features in the first image and in the second image with a processing unit, deriving a set of transition parameters based at least in part on the corresponding features with said processing unit, and referencing the displaying device relative to the surveying instrument regarding position and orientation based at least in part on the set of transition parameters with said processing unit.

Claims

1. A method for interrelating a displaying device to a surveying instrument, said displaying device and surveying instrument being spatially separated from each other and each comprising a communication means to communicate with each other, the method comprising: providing a first image from the surveying instrument and providing a second image from the displaying device, wherein the first image and the second image at least in part cover the same scenery; detecting a plurality of corresponding features in the first image and in the second image with a processing unit; deriving a set of transition parameters based at least in part on the plurality of corresponding features with said processing unit; and referencing the displaying device relative to the surveying instrument regarding position and orientation based at least in part on the set of transition parameters using the processing unit.

2. The method according to claim 1, wherein detecting the plurality of corresponding features comprises using a feature matching algorithm using the processing unit.

3. The method according to claim 1, wherein the surveying instrument comprises a means for capturing photos.

4. The method according to claim 1, wherein the surveying instrument comprises a means for providing a three-dimensional point cloud or a plurality of single points.

5. The method according to claim 1, wherein the displaying device comprises a means for capturing photos or videos.

6. The method according to claim 5, wherein the displaying device comprises a screen or a projector for displaying one or more of: the first image, the second image, a live image captured by the means for capturing photos or videos comprised by the surveying instrument or the displaying device, a three-dimensional point cloud, or a plurality of single points.

7. The method according to claim 6, further comprising the step of displaying, or overlaying the live image with information related to the three-dimensional point cloud or the plurality of single points.

8. The method according to claim 7, wherein the information comprises symbols representing geodata, wherein the geodata comprise the three-dimensional point cloud, the plurality of single points, a plurality of points to be staked-out, and supplementary data with georeference.

9. The method according to claim 1, wherein at least one of the displaying device or the surveying instrument comprise a sensor unit providing sensor data regarding position or orientation of the displaying device relative to the surveying instrument for improving the set of transition parameters, and wherein referencing the displaying device relative to the surveying instrument is further based on said sensor data.

10. The method according to claim 9, wherein the sensor unit comprises one or more of: an inertial measurement unit, a depth sensor, a distance sensor, a Global Navigation Satellite System (GNSS) receiver, or a compass.

11. The method according to claim 1, further comprising: receiving control commands with the displaying device, and with the control commands, controlling the surveying instrument.

12. The method according to claim 1, wherein the surveying instrument is one of: a total station, a laser scanner, a GNSS surveying pole, a mobile mapping system, and wherein the displaying device is one of: a tablet computer, a smart phone, a field controller, augmented, mixed or virtual reality glasses, and a helmet with head-up-display.

13. The method according to claim 1, wherein the displaying device comprises an eye tracker.

14. The method according to claim 1, wherein at least one of referencing the displaying device relative to the surveying instrument or providing a first image and a second image are performed by making use of the principle of: image resection, structure from motion, or simultaneous localisation and mapping (SLAM).

15. A surveying system comprising: a surveying instrument; a displaying device, said displaying device and surveying instrument being spatially separated from each other and each comprising a communication means to communicate with each other; and a processing unit configured to: provide a first image from the surveying instrument and provide a second image from the displaying device, wherein the first image and the second image at least in part cover the same scenery; detect a plurality of corresponding features in the first image and in the second image; derive a set of transition parameters based at least in part on the plurality of corresponding features; and reference the displaying device relative to the surveying instrument regarding position and orientation based at least in part on the set of transition parameters.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0091] In the following, the invention will be described in detail by referring to exemplary embodiments that are accompanied by figures, in which:

[0092] FIG. 1: shows an embodiment of a system according to the invention and a snapshot of a situation in which a method according to the invention is carried out;

[0093] FIG. 2: shows another embodiment of a system according to the invention in which a method according to the invention is carried out;

[0094] FIG. 3: shows another embodiment of a system according to the invention and a snapshot of a situation in which a method according to the invention is carried out;

[0095] FIG. 4: shows yet another embodiment of a system according to the invention and a snapshot of a situation in which a method according to the invention is carried out;

[0096] FIG. 5: shows one embodiment of a system according to the invention and a snapshot of a situation in which a method according to the invention is carried out;

[0097] FIG. 6: shows an embodiment of a system according to the invention and a snapshot of a situation in which a method according to the invention is carried out;

[0098] FIG. 7: shows a further embodiment of a system according to the invention and a snapshot of a situation in which a method according to the invention is carried out;

[0099] FIG. 8: shows one embodiment of a system according to the invention and a snapshot of a situation in which a method according to the invention is carried out.

DETAILED DESCRIPTION

[0100] As shown in FIG. 1, a set of homologous points is identified within the first and the second image. In the shown example, the corner points of a window have been identified, observed both in the first image of the surveying instrument (denoted F1, F2, F3 and F4) and in the second image of the camera associated with the displaying device (denoted F1′, F2′, F3′ and F4′). The first image may be at least a part of a 3D point cloud, or a photo or part of a video captured by a camera mounted on or in the surveying instrument.

[0101] Referencing the displaying device (an optical head-mounted display typically worn like a pair of eyeglasses) to the surveying instrument (a total station) is based on calculating the difference in position and orientation recognized in the first and the second image by making use of the homologous points and the perspective distortion of their locations.
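The alignment principle above can be sketched in simplified form. Assuming, purely for illustration, that the homologous points have already been reduced to two matched 2D point sets, a least-squares rigid alignment (Kabsch/Procrustes method) recovers the rotation and translation between them; the actual referencing operates on full 6-DoF perspective geometry, but the derivation of transition parameters from corresponding points follows the same idea (the function name is hypothetical):

```python
import numpy as np

def estimate_rigid_transform(pts_a, pts_b):
    """Estimate rotation R and translation t so that R @ a + t ~ b,
    from corresponding 2D points (Kabsch / Procrustes method)."""
    a = np.asarray(pts_a, dtype=float)
    b = np.asarray(pts_b, dtype=float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)    # centroids of both point sets
    H = (a - ca).T @ (b - cb)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

With the four window corners F1..F4 and F1′..F4′ as input, R and t would correspond to the orientation and position offset in this simplified 2D setting.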

[0102] The internal coordinate system of the surveying instrument and the coordinate system of the displaying device may be interrelated to each other. They also may be put into correlation with regard to an external absolute reference system, such as WGS84 for example. An algorithm may determine the position offset (3 translations) and/or the orientation offset (3 rotations) in order to register the perspective of the displaying device in context of the coordinate frame defined by the surveying system (or vice versa).

[0103] To optionally refine the calculation of the relative pose, it is also possible to additionally make use of information provided by one or more additional sensors being part of the method and/or system. Examples of such sensors are:

[0104] for information on position: GNSS sensor, accelerometer,

[0105] for information on tilt angles: accelerometer, gyroscope, compass,

[0106] for information on distances: distance and depth sensors.

[0107] In this case, sensor fusion enables the provision of accurate values for the pose and the position of the device. Furthermore, only some of the degrees of freedom may need to be provided by the referencing method described above, while the remaining degrees of freedom are measured directly by the device with the assigned separate sensors.
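As a minimal illustration of such sensor fusion, two independent estimates of the same quantity (say, one coordinate of the position: one from the image-based referencing, one from a GNSS sensor) can be combined by inverse-variance weighting. The function name and the scalar setting are illustrative simplifications of a full filter (e.g. a Kalman filter):

```python
def fuse_estimates(x_img, var_img, x_sens, var_sens):
    """Inverse-variance weighted fusion of two independent estimates
    of the same scalar quantity (e.g. one position coordinate).
    Returns the fused value and its (reduced) variance."""
    w_img = 1.0 / var_img
    w_sens = 1.0 / var_sens
    x = (w_img * x_img + w_sens * x_sens) / (w_img + w_sens)
    var = 1.0 / (w_img + w_sens)   # fused variance is smaller than either input
    return x, var
```

For equally uncertain inputs the result is their mean with half the variance; a more precise sensor pulls the fused value towards its own estimate.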

[0108] The referencing calculation may be carried out by a processing unit which may be comprised by one of the following components: the displaying device, the surveying instrument, a separate local computer, a remote server, a cloud computer; in this understanding, the processing unit is a tangible single unit. However alternatively, two or more of said components may comprise tangible sub-processing units, said sub-processing units being comprised by the processing unit; in this understanding, the processing unit is a collective of sub-processing units.

[0109] For such purposes data can be communicated and shared between the displaying device, the surveying instrument and any further involved devices by making use of communication channels (such as Wi-Fi, Bluetooth, data cable, etc.).

[0110] The determination of the position and orientation offset is at least in part based on the identification of homologous points in the image taken with the displaying device, i.e. a first set of points, and the image taken with the surveying instrument, i.e. a second corresponding set of points.

[0111] The determination of the pose of the glasses may further be based on 3D points of a SLAM (Simultaneous Localization and Mapping) point cloud. These SLAM points may be generated by forward intersection from images acquired with the camera of a GNSS pole (surveying instrument), see FIG. 2. For the determination of the pose of the glasses (displaying device) an image with the camera of the glasses is taken. Homologous points in the image from the glasses and the images from the GNSS pole are identified, e.g. by feature matching (SIFT, SURF, etc.). The pose of the glasses is computed based on known 3D coordinates corresponding to the homologous points, e.g. by resection, which is shown in FIG. 3.
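The feature-matching step mentioned above (SIFT, SURF, etc.) can be sketched, under the assumption that descriptor vectors have already been extracted from both images, as nearest-neighbour matching with Lowe's ratio test; the arrays and the function name are illustrative:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.
    desc_a, desc_b: (N, D) arrays of feature descriptors.
    Returns a list of (i, j) index pairs of accepted matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # accept only unambiguous matches: best clearly closer than runner-up
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The accepted index pairs would correspond to the homologous points used for the resection step.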

[0112] In case only image data but no 3D information are available, e.g. when matching a panorama image acquired with a terrestrial laser scanner with an image acquired with glasses or a tablet, the pose may only be determined approximately. In this configuration, only the direction of the position offset can be determined, but not its magnitude. In cases where the position of the displaying device (e.g. glasses, tablet) is close to the surveying instrument, the position offset can be ignored completely. This is especially true when no relevant objects are located close to the surveying instrument (and the displaying device).

[0113] In other words, the reprojection error in the image of the displaying device for 3D points given in the coordinate system of the surveying system is small when the distance between the surveying instrument and displaying device is small and the 3D points are far away.
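This relationship can be quantified with a small worked example: the worst-case angular shift of a 3D point between the two viewpoints is roughly the baseline divided by the object distance. The helper below is hypothetical and for illustration only:

```python
import math

def parallax_deg(baseline_m, distance_m):
    """Worst-case angular shift (in degrees) of a 3D point when the
    camera is displaced by baseline_m and the point is distance_m away."""
    return math.degrees(math.atan2(baseline_m, distance_m))

# E.g. a displaying device 0.5 m from the instrument and an object 50 m
# away: ignoring the position offset causes well under one degree of error.
```

The error grows quickly as the object distance shrinks, which is why nearby objects make the approximation invalid.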

[0114] Alternatively, estimated values for the object distance and/or the position offset magnitude can be used to further reduce the image reprojection error for the display device. A further alternative is a consideration of values from a depth- or distance-sensor in order to reduce or eliminate this image reprojection error, said sensor being integrated in the displaying device.

[0115] However, if scale is introduced, the entire pose (including a scaled position offset) can be determined. In practice, this can be achieved by measuring the 3D coordinates of points identified in the images with the surveying instrument or if 3D information can be derived from a given, already measured point cloud, or from a CAD object which is identified in the images.
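Introducing scale in this way can be sketched as follows: a single distance measured by the surveying instrument between two identified points fixes the scale factor of an otherwise up-to-scale reconstruction (the function name and inputs are illustrative):

```python
import math

def recover_scale(p_model, q_model, measured_dist):
    """Scale factor mapping an up-to-scale reconstruction into metric
    units, given one distance measured by the surveying instrument
    between the 3D points corresponding to p_model and q_model."""
    model_dist = math.dist(p_model, q_model)   # distance in model units
    return measured_dist / model_dist
```

Multiplying all model coordinates (and the position offset) by this factor yields the fully scaled pose.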

[0116] According to the invention, the first image may be continuously updated by the surveying instrument. When the environment changes (for example, a car drives off or a building crane changes its position), real-time referencing between the surveying instrument and the displaying device is thus ensured without disturbances; otherwise, features affected by such changes in the scene could not be matched.

[0117] In another embodiment of the invention, instead of homologous points, line features or image patches are used to match the images of the displaying device and the surveying instrument. In yet another embodiment of the invention, instead of homologous points, vanishing points are used to determine only the orientation difference between the displaying device and the surveying instrument. The position offset is then assumed to be zero.

[0118] According to one aspect of the invention, stakeout points may be projected on the displaying device and be overlaid on the display screen with the live view (see FIG. 4). This allows for highlighting those locations where a surveying pole needs to be placed by a user.

[0119] According to another aspect of the invention, the coordinates of already measured single points, or point clouds, or an already measured part of a point cloud which is currently being generated could be, apart from being utilised for the referencing, displayed on the displaying device. The surveying instrument optionally also has a camera for capturing images, which may likewise be used for referencing the instrument relative to the device, and for displaying the captured images, or part of them, on the displaying device in a spatially referenced manner. Such an augmented reality image allows the user, e.g., to perform a visual progress check or completeness check of a surveying task while the task is being carried out or after it has been completed (see FIG. 6). In the case of a point cloud scanned by a laser scanner, this could give the user an early visual indication of gaps in the point cloud (indicated in FIG. 6 by the exclamation mark), allowing the user to refine or redo the scanning.

[0120] According to another aspect of this invention, an image of a scanned point cloud and/or single point measurements, or a model thereof indicating e.g. building and vegetation layers or additional data, may be projected onto the displaying device or displayed as an overlay on the screen with the live view. Some examples are:

[0121] visualization of building plans to allow for a comparison "planned vs. as-built", e.g. for constructors, architects, etc.;

[0122] visualization of hidden structures, e.g. pipes or electrical wiring inside walls.

[0123] According to another aspect of the invention, instead of or in addition to measured points, labels or additional pieces of information on the surveying task may also be displayed on the displaying device (see FIG. 8). Some examples are:

[0124] information such as name, identification number or type;

[0125] previously measured values of marker or measurement points;

[0126] information on the operation status of the surveying instruments and the scanning process; and

[0127] information on the automatically assigned feature class, for a check by the user whether scanned objects have been correctly classified.

[0128] According to another aspect of the invention, for a given surveying task a user could walk with augmented reality glasses containing an eye tracker through the measurement site and "view" the objects of particular interest. Using the described referencing method, the pose of the user's glasses can be calculated along this trajectory, as well as, using information from the eye tracker, the rough 3D positions of the objects of interest. An unmanned aerial vehicle (UAV) or unmanned ground vehicle (UGV) can then be used to perform a detailed, high-resolution scan of those objects of interest. Such a process allows for reduced (even hands-free) on-site work for the user and efficient flight planning in the case of a UAV or terrestrial path planning in the case of a UGV. Alternatively, a tap by the user on a tablet PC (see FIG. 5), or the targeting of a feature or region with the viewing direction of the user, can command the UAV or UGV to perform the detailed scan of the object tapped on the display (see FIG. 7).

[0129] According to yet another aspect of the invention, given the pose of the glasses with respect to the coordinate system of e.g. the laser scanner, either the angles of the displaying device itself or the viewing direction of the user as determined by an eye tracking sensor located in the glasses may be used to steer the surveying instrument. When controlled in such a way, the instrument would point, e.g., towards the same object the user is looking at or heading towards, respectively.
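Steering the instrument from the viewing direction can be sketched as a frame transformation: the gaze direction in the glasses frame is rotated into the instrument frame using the rotation obtained from the referencing step, then converted to azimuth and elevation. The function name and angle conventions below are assumptions for illustration:

```python
import math
import numpy as np

def gaze_to_instrument_angles(gaze_dir_glasses, R_glasses_to_instr):
    """Convert a viewing direction given in the glasses frame into
    azimuth/elevation angles (degrees) for steering the surveying
    instrument, using the rotation from the referencing step."""
    d = R_glasses_to_instr @ np.asarray(gaze_dir_glasses, dtype=float)
    d = d / np.linalg.norm(d)                     # normalise to a unit vector
    az = math.degrees(math.atan2(d[1], d[0]))     # azimuth in the x-y plane
    el = math.degrees(math.asin(d[2]))            # elevation above the plane
    return az, el
```

The instrument controller would then drive its axes to the returned azimuth and elevation so that it points at the same object as the user.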

[0130] According to yet another aspect of the invention, a user may steer the surveying process by contactless hand gestures. These gestures may be recorded by the camera of the displaying device and translated into meaningful commands for the surveying instrument by a processing unit, which is located for example in the displaying device or in the surveying instrument. By doing so, the user may define the scanning window by pointing with his fingers to the upper left and the lower right corner of the area to be scanned, or point at the coordinates to be measured as single points.
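The two pointed-at corners can be turned into a well-formed scanning window by a simple normalisation; the angular representation (azimuth/elevation in degrees) and the function name are assumptions for illustration:

```python
def scan_window(corner_upper_left, corner_lower_right):
    """Normalise a scanning window from the two pointed-at corners,
    each given as (azimuth, elevation) in degrees. Returns
    (az_min, az_max, el_min, el_max), tolerant of swapped corners."""
    az1, el1 = corner_upper_left
    az2, el2 = corner_lower_right
    return (min(az1, az2), max(az1, az2), min(el1, el2), max(el1, el2))
```

The instrument would then restrict its scan to the returned angular ranges.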

[0131] According to another aspect of the invention, the previously mentioned workflows and applications can be combined in order to provide the user with a combination of different augmented reality views. Here, the user may select the augmented image to be visualized by providing input:

[0132] manually on the displaying device, on the surveying instrument or on a peripheral system (e.g. on a keyboard, touch pad, mouse), or

[0133] by hand gestures detected and interpreted by the displaying device or by the surveying instrument, or

[0134] by his viewing direction or head direction detected by the displaying device or by the surveying instrument.

[0135] According to another aspect of the invention, the user may face (in the case of glasses) a surveying instrument, or direct another displaying device (e.g. a tablet PC) at the surveying instrument, whereupon the displaying device visualizes the scanner's operating status and instructions. When looking or pointing in the direction of objects to be scanned or already scanned, the displaying device may instead display the scanned point cloud, optionally along with text labels indicating e.g. the objects' class.

[0136] Although the invention is illustrated above, partly with reference to some preferred embodiments, it must be understood that numerous modifications and combinations of different features of the embodiments can be made. All of these modifications lie within the scope of the appended claims.