METHOD AND SYSTEM FOR GENERATING SCAN DATA OF AN AREA OF INTEREST

20240264606 · 2024-08-08

Abstract

A system and a method for generating three-dimensional scan data of areas of interest, the method comprising a user defining the areas of interest using a mobile device in the environment, and a scanning device performing a scanning procedure at each defined area of interest to generate the scan data of the respective area of interest, wherein defining the areas of interest comprises, for each area of interest, generating identification data, wherein generating the identification data at least comprises generating image data of the respective area of interest, and the scanning procedure at each defined area of interest is performed by a mobile robot comprising the scanning device and being configured for autonomously performing a scan of a surrounding area using the scanning device, the mobile robot having a SLAM functionality for simultaneous localization and mapping and being configured to autonomously move through the environment using the SLAM functionality.

Claims

1. A method for generating three-dimensional scan data of one or more areas of interest in an environment, the method comprising: a user defining the one or more areas of interest using a mobile device in the environment; and a scanning device performing a scanning procedure at each defined area of interest to generate the scan data of the respective area of interest, wherein defining the areas of interest comprises, for each area of interest, generating identification data, wherein generating the identification data at least comprises generating image data of the respective area of interest; and the scanning procedure at each defined area of interest is performed by a mobile robot comprising the scanning device and being configured for autonomously performing a scan of a surrounding area using the scanning device, the mobile robot having a SLAM functionality for simultaneous localization and mapping and being configured to autonomously move through the environment using the SLAM functionality, wherein the identification data is provided to the mobile robot, and, in the course of each scanning procedure, the mobile robot: navigates to the respective area of interest using the identification data; detects the respective area of interest using the identification data; and uses the scanning device to scan the respective area of interest to generate the three-dimensional scan data.

2. The method according to claim 1, wherein generating the identification data comprises generating position data related to a determined position of the mobile device at the respective area of interest, wherein the position is a position of the mobile device while capturing the image data, wherein the position of the mobile device is determined: relative to the environment, particularly relative to a local coordinate system of the environment, and/or relative to a global coordinate system, particularly using GNSS data of a global navigation satellite system receiver of the mobile device.

3. The method according to claim 1, wherein generating the identification data comprises generating pose data related to a pose of the mobile device while capturing the image data, particularly using IMU data of an inertial measuring unit of the mobile device, the pose comprising at least the attitude in three degrees-of-freedom.

4. The method according to claim 1, wherein the mobile device tracks its path in the environment and generating the identification data comprises generating path data related to the path of the mobile device, wherein: navigating to the respective area of interest comprises using the path data; and/or the mobile robot generates, based on the path information, a route for the mobile robot through the environment.

5. The method according to claim 4, wherein tracking the path comprises using a SLAM functionality of the mobile device, wherein for tracking the path the SLAM functionality of the mobile device uses at least one of: IMU data of an inertial measuring unit of the mobile device, and image data continuously captured by at least one camera of the mobile device.

6. The method according to claim 1, wherein the identification data is generated using environment data comprising at least one of image data, 2D data or 3D data of the environment, particularly wherein the environment data: is retrieved from an external data source; and/or is used for determining a position of an area of interest based on the image data, wherein the image data comprises depth information.

7. The method according to claim 1, wherein the mobile robot has access to environment data comprising 3D data of the environment, wherein the mobile robot: moves through the environment using the environment data and its SLAM functionality; navigates to the respective area of interest using the identification data and the environment data; and/or detects the areas of interest based on the identification data and the environment data, wherein the 3D data of the environment has a lower resolution than the scan data of the areas of interest.

8. The method according to claim 1, wherein the mobile device comprises a display, at least one camera and an image-capturing functionality for generating, upon a trigger by the user of the mobile device and using the at least one camera, the image data, wherein also position data and/or pose data is generated upon the trigger.

9. The method according to claim 1, wherein: the identification data is generated and provided to the mobile robot directly after generating the image data; and/or the mobile robot starts a scanning procedure upon receiving the identification data.

10. The method according to claim 1, wherein the image data comprises depth information, particularly wherein the mobile device comprises at least one time-of-flight camera and/or a 3D camera arrangement, wherein: the identification data is generated using environment data comprising 3D data of the environment, wherein the environment data is used for determining a position of an area of interest based on the depth information; and/or the mobile robot detects the respective area of interest based on the depth information, particularly wherein the mobile robot comprises at least one time-of-flight camera and/or a 3D camera arrangement.

11. A system for generating three-dimensional scan data of one or more areas of interest in an environment, the system comprising a mobile device and a mobile robot, wherein: the mobile device comprises a camera for capturing images of the one or more areas of interest and for generating image data, and the mobile robot has a SLAM functionality for simultaneous localization and mapping and a scanning device for performing a scan at the one or more areas of interest and generating the scan data of the one or more areas of interest, wherein the system is configured to generate, using at least the image data, identification data for each of the one or more areas of interest, the identification data allowing identifying the respective area of interest, and to provide the identification data to the mobile robot, wherein the mobile robot is configured to autonomously: move through the environment using the SLAM functionality; navigate to the areas of interest using the identification data; detect the areas of interest based on the identification data; and perform a scan at each of the one or more areas of interest to generate the three-dimensional scan data.

12. The system according to claim 11, wherein the scanning device comprises: at least one laser scanner, at least one structured-light scanner, and/or at least one time-of-flight camera; and/or wherein the mobile robot is configured as a legged robot, comprising actuated legs for moving through the environment, a wheeled robot, comprising actuated wheels for moving through the environment, and/or an unmanned aerial vehicle or a quadcopter, comprising actuated rotors for moving through the environment.

13. The system according to claim 11, wherein the mobile device comprises a display, at least one camera and an image-capturing functionality for generating, upon a trigger by the user of the mobile device and using the at least one camera, the image data.

14. The system according to claim 13, wherein: the image-capturing functionality is provided by a software application installed on the mobile device, wherein the display is configured as a touchscreen and the software application allows the user to mark an area in an image displayed on the display to define as an area of interest; the mobile device comprises an inertial measuring unit, a compass and/or a GNSS receiver; the at least one camera is configured as a time-of-flight camera and the image data comprises depth information; the mobile device is configured for detecting a position of the mobile device while capturing the image data, and the system is configured to generate the identification data using position data related to the detected position; the mobile device is configured for detecting a pose of the mobile device while capturing the image data, and the system is configured to generate the identification data using pose data related to the detected pose; and/or the mobile device is configured for tracking a path through the environment, particularly using a SLAM functionality of the mobile device, IMU data of an inertial measuring unit of the mobile device, and/or image data continuously captured by the at least one camera, and the system is configured to generate the identification data using path data related to the path.

15. The system according to claim 11, wherein the mobile robot is configured to receive environment data comprising 3D data of the environment, and is configured to autonomously: move through the environment using the environment data and the SLAM functionality; navigate to the areas of interest using the environment data and the determined positions; and/or detect the areas of interest based on the image data and the 3D data, particularly wherein the image data comprises depth information.

16. The system according to claim 11, wherein the mobile device comprises a SLAM functionality for simultaneous localization and mapping of the mobile device and is configured to track the path using the SLAM functionality, wherein for tracking the path the SLAM functionality uses: IMU data of an inertial measuring unit of the mobile device, and/or image data continuously captured by at least one camera of the mobile device.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0037] The disclosure in the following will be described in detail by referring to exemplary embodiments that are accompanied by figures, in which:

[0038] FIG. 1 shows an exemplary environment with three areas of interest that a user wants to be scanned;

[0039] FIG. 2 shows the user defining an area of interest by capturing image data thereof using a mobile device;

[0040] FIG. 3 shows an exemplary embodiment of the mobile device;

[0041] FIG. 4 shows a path of the user through the environment to capture image data of each of the three areas of interest;

[0042] FIG. 5 shows a path of a mobile robot through the environment to scan each of the three areas of interest;

[0043] FIG. 6 shows the mobile robot scanning the defined area of interest of FIG. 2;

[0044] FIG. 7 illustrates the data generated and used during an exemplary embodiment of a method; and

[0045] FIG. 8 shows a flowchart illustrating an exemplary embodiment of a method.

DETAILED DESCRIPTION

[0046] FIG. 1 shows a layout of an apartment. The apartment is an example of an environment 1, in which there are areas of interest that a person would like to have scanned. In the shown example, there are three areas of interest 11, 12, 13. A first area of interest 11 is situated in a bedroom and comprises a wall including a window. A second area of interest 12 is situated in a bathroom and comprises appliances including a bathtub. A third area of interest 13 is situated in a combined kitchen and living room and comprises a built-in kitchen unit including a stove.

[0047] For instance, the window, the bathtub and the kitchen unit have recently been installed in the apartment, and an existing 3D model of the apartment needs to be updated with new 3D data of these areas. In this case, the person who would like to have the scans performed at the areas of interest may be, for instance, a contractor or craftsman who has carried out the installations, an owner of the apartment, or an architect who has commissioned the installations. Alternatively, there is no 3D model of the environment 1, and only 3D data of the areas of interest 11, 12, 13 may be needed.

[0048] Conventionally, the person who would like to have the scans performed would haul a scanner through the apartment and place it at each area of interest to perform the scans. Alternatively, the person could define the areas of interest and then have someone else perform the scanning.

[0049] In the approach suggested by the present application, the areas of interest are specifically defined by a user of a mobile device. Data comprising identification information that allows identifying the defined areas is made available to a mobile robot that will perform the scans.

[0050] FIG. 2 shows the user 3 of an exemplary embodiment of the mobile device 30 at the third area of interest 13 of the environment of FIG. 1. The mobile device 30 has a camera 36 and is used by the user 3 to capture an image 33 of the area of interest 13.

[0051] FIG. 3 shows the front side of the mobile device 30 of FIG. 2. It comprises a display 35 and an image capturing functionality to capture images 33 of the areas of interest, thereby generating digital image data that may be provided to the mobile robot. The image may be captured upon receiving a trigger by the user. For instance, the trigger may comprise the user pushing a button of the mobile device or a digital button on the touch-sensitive display 35. Also a position of the mobile device may be determined and position data may be generated upon receiving the trigger. The identification data to be provided to the mobile robot may comprise the image data and the position data or be generated based on the image data and the position data.

[0052] Optionally, the user may define an area 37 in the image (e.g. using the touch-sensitive display 35) as the area of interest. Then, for instance, this information is included in the identification data. Alternatively, only the image data related to the user-defined area 37 in the image is included in generating the identification data.

[0053] The identification data needs to include data that allows the mobile robot to determine at least a rough position of each area of interest within the environment 1. For instance, an absolute or relative position of the mobile device may be determined while capturing the image or a path to that position may be tracked. The identification data further needs to include data that allows the mobile robot to detect the area of interest at this rough position. In particular, this information may include the image data of the area of interest and/or pose data regarding a pose of the mobile device while capturing the image data. Alternatively, a precise position of the area of interest may be derived by comparing the image data and existing environment data.
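The requirements of paragraph [0053] can be summarized in a simple data record. The following is purely an illustrative sketch, not part of the disclosed method; all names and field choices are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IdentificationData:
    """Hypothetical container for the data that lets the mobile robot
    (a) reach a rough position of an area of interest and
    (b) detect the area of interest once there."""
    image: bytes                                       # image data of the area of interest
    rough_position: tuple                              # approximate (x, y) in an environment frame
    pose: Optional[tuple] = None                       # device attitude while capturing, e.g. (roll, pitch, yaw)
    path: list = field(default_factory=list)           # optionally tracked path to the position

# Example record for one area of interest (values invented)
aoi = IdentificationData(
    image=b"...",
    rough_position=(4.2, 1.7),
    pose=(0.0, 0.1, 1.57),
    path=[(0.0, 0.0), (2.0, 0.5), (4.2, 1.7)],
)
```

A real system could equally serialize such a record as JSON or a protocol buffer before sending it to the robot.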

[0054] FIG. 4 shows an exemplary path of the user of the mobile device through the environment 1. The user captures a first image 31 of the first area of interest from a first position 21, then moves along a path 20 to a second position 22 to capture an image 32 of the second area of interest and finally moves along the path 20 to a third position 23 to capture an image 33 of the third area of interest.

[0055] While the user moves along the path 20, the mobile device may track this path and generate path information. For instance, tracking the path 20 may involve using one or more cameras and/or an inertial measuring unit (IMU) and a simultaneous localization and mapping (SLAM) functionality of the mobile device. Also, the mobile device may comprise a compass and/or a global navigation satellite system (GNSS) receiver that may be involved in tracking the path 20. The path information may be part of the identification data or used to generate the identification data for an area of interest, particularly path information relating to the path 20 from the previous area of interest.
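The path tracking described in paragraph [0055] can be pictured as integrating successive odometry increments, such as a SLAM/IMU pipeline might report. The following dead-reckoning sketch is only illustrative (a real SLAM system also corrects drift); the increment values are invented:

```python
import math

def track_path(start, increments):
    """Integrate (step_length, heading_radians) odometry increments
    into a 2D path, starting from the given (x, y) position."""
    x, y = start
    path = [(x, y)]
    for step, heading in increments:
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((round(x, 3), round(y, 3)))
    return path

# Two 1 m steps heading east, then one 1 m step heading north
path = track_path((0.0, 0.0), [(1.0, 0.0), (1.0, 0.0), (1.0, math.pi / 2)])
```

The resulting list of positions would correspond to the path data mentioned above, e.g. the segment of path 20 between two areas of interest.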

[0056] The identification data of each area of interest may be generated and sent to the mobile robot directly after capturing the respective image. Alternatively, generating and/or sending the identification data may require a further user input, e.g. on the mobile device.

[0057] FIG. 5 shows the scanning by the mobile robot 2 in the environment 1. The mobile robot 2 has a SLAM functionality for simultaneous localization and mapping that allows the mobile robot to autonomously move through the environment 1. The mobile robot uses the received identification data to autonomously navigate to the user-defined areas of interest.

[0058] In the shown example, having received the identification data of the three areas of interest, the mobile robot moves to a first scanning position 41 at the first area of interest and performs a first scan. Then, the mobile robot moves along a path 40 to a second scanning position 42 and to a third scanning position to perform a second and third scan.

[0059] The scanning positions are selected based on the received identification data. It is not necessary that the scanning position is the same as the position at which the image of the respective area of interest has been captured. Sometimes, it may even be necessary to use a different position for the scanning than for capturing the image.


[0060] By also using the information of the user's path 20, the robot can quickly navigate between interest points, as the planning required basically consists only of local obstacle avoidance. Furthermore, because of the user's selections, the robot is aware of what is important and does not spend time on places in which the user is not interested. Hence, the time-efficiency of the robot is also increased and can approach that of a teach-and-repeat workflow (where the planning is entirely up to the operator).
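As a hypothetical illustration of paragraph [0060], using the user's trajectory as a planning prior could be as simple as projecting each area of interest onto the nearest point of the recorded path; the robot then only needs local obstacle avoidance between consecutive route points. All names and values below are invented:

```python
def plan_route(user_path, areas_of_interest):
    """For each area of interest, pick the closest point on the user's
    recorded path as an approach waypoint.

    user_path:         list of (x, y) positions recorded by the mobile device
    areas_of_interest: list of rough (x, y) positions of the areas
    Returns one waypoint per area of interest.
    """
    route = []
    for aoi in areas_of_interest:
        best = min(user_path,
                   key=lambda p: (p[0] - aoi[0]) ** 2 + (p[1] - aoi[1]) ** 2)
        route.append(best)
    return route

user_path = [(0, 0), (1, 0), (2, 0), (2, 1)]
route = plan_route(user_path, [(2.1, 1.2), (0.9, -0.1)])
```

Because the waypoints lie on a path a human has already walked, the prior strongly suggests they are reachable, which is what brings the workflow close to teach-and-repeat.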

[0061] Another possibility is that if an environment model, e.g. a CAD model of the environment, is available, the robot can try to localize itself with respect to this model and do the same operations as if the CAD model was a previously recorded scan.

[0062] FIG. 6 shows an exemplary embodiment of the mobile robot 2 at the third area of interest 13 of the environment of FIG. 1. The mobile robot has a scanning device 26 to capture the 3D data of the area of interest 13.

[0063] For moving through the environment, the mobile robot may use different kinds of locomotion, each having its own advantages and disadvantages depending on the kind of environment. For instance, as shown here, the mobile robot 2 may be embodied as a legged robot, e.g. comprising four actuated legs. Alternatively, the robot 2 may be configured as a wheeled robot, i.e. comprising actuated wheels (and/or tracks), or as an unmanned aerial vehicle (UAV), particularly a quadcopter comprising actuated rotors.

[0064] The scanning device 26 of the mobile robot may comprise any suitable scanning means, in particular at least one laser scanner, at least one structured-light scanner, and/or at least one time-of-flight camera.

[0065] FIG. 7 illustrates the generation, use and flow of data within an exemplary embodiment of a system while performing an exemplary embodiment of a method. The mobile device 30 captures image data 51 of the area of interest and optionally further data, such as position data 52 and pose data 53 related to a position and pose of the device 30 while capturing the image data 51. The mobile device 30 may also generate path data 54 from tracking a path to the area of interest. Also, the image data 51 may comprise RGB and depth information.

[0066] This data 51, 52, 53, 54 captured by the mobile device 30 is used to generate identification data 50 that will be provided to the mobile robot 2. Generating the identification data 50 may be done on the mobile device 30 or on an external computing unit of the system. It may comprise using existing environment data 55 of the environment. This may comprise 2D, 3D or image data of the environment.
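The data flow of paragraph [0066] can be sketched as a function that bundles the captured data 51-54 into the identification data 50, optionally consulting environment data 55. This is a minimal illustrative sketch with invented names; in particular, the position refinement step is only indicated by a flag here:

```python
def generate_identification_data(image, position=None, pose=None, path=None,
                                 environment=None):
    """Bundle captured data into identification data for the mobile robot.

    image:       image data (51) of the area of interest
    position:    optional position data (52) of the device while capturing
    pose:        optional pose data (53) of the device while capturing
    path:        optional path data (54) tracked on the way to the position
    environment: optional existing environment data (55); if supplied, a
                 real system could refine the rough position against it.
    """
    ident = {"image": image}
    if position is not None:
        ident["position"] = position
    if pose is not None:
        ident["pose"] = pose
    if path is not None:
        ident["path"] = path
    # Placeholder: record whether environment data was available for refinement
    ident["position_refined"] = environment is not None
    return ident

ident = generate_identification_data(b"img-bytes", position=(1.0, 2.0))
```

As stated above, such a function could run either on the mobile device itself or on an external computing unit of the system.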

[0067] The mobile robot 2 receives the identification data 50, identifies the area of interest and generates the scan data 60 of the area of interest. For facilitating identification of the area of interest, the mobile robot optionally may use existing environment data 55 of the environment.

[0068] FIG. 8 shows a flow chart illustrating an exemplary embodiment of a method 100. The approach of the described method 100 allows the robot to operate efficiently and not spend time on areas that are not of particular interest. It also includes the advantage of a teach-and-repeat workflow, which allows the robot to quickly navigate as it has a strong prior on the path it can take (i.e. the user's trajectory). The approach can be seen as a sort of teach-and-repeat workflow with additional information added, namely the areas of interest defined by the user. Therefore, it can be placed between a fully autonomous exploration and a simple path-following algorithm.

[0069] As shown here, the approach consists of two stages. In a first stage (definition stage 110), a user, using a mobile device, takes images of areas of interest, e.g. those places that should be scanned thoroughly. The mobile device captures the images, thus generating 112 image data of the areas of interest. Optionally, the mobile device also captures other data, e.g. regarding a position or pose of the device while capturing the image. For instance, the device may record the user's motion by means of an odometry system (e.g. ARKit, ARCore) to determine a trajectory between two areas of interest. Based on the image data and the other data, identification data is generated 114 and provided 116 to the mobile robot.

[0070] In a second stage (scanning stage 120), the mobile robot identifies the areas of interest based on the images taken by the user and references itself relative to the position of the mobile device when taking the image.

[0071] The scanning stage 120 comprises the robot using the identification data 50 to autonomously navigate 122 towards the respective area of interest and to detect 124 the respective area of interest. Optionally, in order to efficiently navigate between the areas of interest, the robot may use the user's trajectory as a basis for its own path planning. Then, the robot uses its scanning device to scan 126 the area of interest to generate the three-dimensional scan data.
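The scanning stage of paragraph [0071] can be summarized as a loop over the received identification data. The sketch below is purely illustrative: the robot interface and the stub implementation are invented so the loop can run without hardware, and the step numbers refer to FIG. 8:

```python
def scanning_stage(robot, identification_data_list):
    """Hypothetical top-level loop of the scanning stage 120:
    navigate (122), detect (124) and scan (126) each area of interest."""
    scans = []
    for ident in identification_data_list:
        robot.navigate_to(ident)          # 122: SLAM + identification data
        area = robot.detect(ident)        # 124: match identification data on site
        scans.append(robot.scan(area))    # 126: generate the 3D scan data
    return scans

class StubRobot:
    """Minimal stand-in for the mobile robot, for demonstration only."""
    def navigate_to(self, ident):
        pass  # a real robot would plan and drive here
    def detect(self, ident):
        return ident["label"]
    def scan(self, area):
        return f"scan of {area}"

scans = scanning_stage(StubRobot(),
                       [{"label": "kitchen unit"}, {"label": "bathtub"}])
```

The loop structure mirrors the claim language: navigation, detection and scanning are performed per area of interest, in the order the identification data is provided.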

[0072] During the first stage 110, the user can use a large variety of lightweight devices (almost any modern smartphone or tablet), which allows the user to quickly go through the scene and select (define) the areas of interest. A software application (app) may be installed on the mobile device that automatically provides the captured data (or identification data that is generated based on the captured data) to the mobile robot, either directly or via a server computer. The app may also automatically track the user's movement between two areas of interest. Optionally, the app may also receive a user input, e.g. on a touchscreen of the mobile device, to define the area of interest more precisely in the captured image.

[0073] Furthermore, the robot performing the scanning does not need to be on-site at the time of capturing the data. Hence, one can make efficient use of resources, e.g. the expert can visit multiple sites in a short time and is not bound to having the infrastructure (i.e. the robot) in place at the time of the visit.

[0074] Although aspects are illustrated above, partly with reference to some preferred embodiments, it must be understood that numerous modifications and combinations of different features of the embodiments can be made. All of these modifications lie within the scope of the appended claims.