MOBILE APPARATUS, METHOD FOR DETERMINING POSITION, AND NON-TRANSITORY RECORDING MEDIUM
20240427343 · 2024-12-26
Inventors
CPC classification: G05D1/242 (PHYSICS); G05D1/243 (PHYSICS); G05D2111/52 (PHYSICS)
International classification: G05D1/246 (PHYSICS); G05D1/242 (PHYSICS)
Abstract
A mobile apparatus includes circuitry to control the mobile apparatus to perform teaching travel and autonomous travel; generate, for each of nodes on a travel route independently for external sensors, calculation information for calculating a deviation between a node passed in the teaching travel and a point passed in the autonomous travel; store in a memory the calculation information in association with the node and the external sensor; calculate, for each node independently for the external sensors, the deviation based on the calculation information and a sensor value of the external sensor obtained in the autonomous travel; determine, for each node independently for the external sensors, the calculated deviation as a position and posture of the node with reference to the position and posture of the mobile apparatus; integrate the positions and postures of the node determined independently for the external sensors; and control the mobile apparatus to autonomously travel.
Claims
1. A mobile apparatus comprising circuitry configured to: control the mobile apparatus to perform teaching travel and autonomous travel on a travel route, the teaching travel in which the mobile apparatus stores in a memory a position and posture of the mobile apparatus traveling on the travel route under control by a manual operation, the autonomous travel in which the mobile apparatus autonomously travels on the travel route; generate, for each of nodes on the travel route independently for each of multiple external sensors, calculation information used for calculating a deviation between a node passed in the teaching travel and a point passed in the autonomous travel; store in the memory the calculation information in association with the node and the external sensor; calculate, for each node independently for each of the multiple external sensors, the deviation based on the calculation information and a sensor value of the external sensor obtained in the autonomous travel; determine, for each node independently for each of the multiple external sensors, the calculated deviation as a position and posture of the node on the travel route with reference to the position and posture of the mobile apparatus; integrate, for each node, the positions and postures of the node determined independently for each of the multiple external sensors; and control the mobile apparatus to autonomously travel on the travel route based on the integrated position and posture of each node on the travel route.
2. The mobile apparatus according to claim 1, further comprising: one or more internal sensors to detect a travel amount and a travel direction of the mobile apparatus, wherein the circuitry is configured to use a sensor value of the one or more internal sensors in generating the calculation information for each node on the travel route independently for each of the multiple external sensors.
3. The mobile apparatus according to claim 1, wherein, in integrating the positions and postures of each node with reference to the position and posture of the mobile apparatus determined independently for each of the multiple external sensors, the circuitry is configured to weight the positions and postures of each node based on reliability of the calculation information and reliability of respective sensor values of the multiple external sensors acquired in the autonomous travel.
4. The mobile apparatus according to claim 1, wherein the nodes on the travel route include a first node and a second node subsequent to the first node, wherein, in a case where the circuitry integrates the positions and postures of the first node determined independently for each of the multiple external sensors, the circuitry is configured to: calculate a position and posture of the second node based on the integrated position and posture of the first node and relative position information between the first node and the second node included in the calculation information; and calculate a travel route whose origin is the position and posture of the mobile apparatus at the first node.
5. The mobile apparatus according to claim 1, further comprising the multiple external sensors, wherein the multiple external sensors are two or more of a global navigation satellite system, a two-dimensional light detection and ranging sensor, and a camera that obtains range information.
6. The mobile apparatus according to claim 5, wherein the multiple external sensors include the GNSS as a first external sensor and the two-dimensional light detection and ranging sensor as a second external sensor, wherein the circuitry is configured to: generate a position and posture in a GNSS coordinate system as the calculation information based on a sensor value of the first external sensor and generate a two-dimensional occupancy grid map as the calculation information based on sensor values of the second external sensor; determine a deviation of the position and posture in the GNSS coordinate system based on the sensor value of the first external sensor in the autonomous travel from the position and posture of the GNSS coordinate system included in the calculation information as a first position and posture of the node with reference to the position and posture of the mobile apparatus; compare the two-dimensional occupancy grid map with a scanning point cloud based on the sensor values of the second external sensor in the autonomous travel; determine a deviation of the position and posture of the mobile apparatus from an origin of the two-dimensional occupancy grid map as a second position and posture of the node with reference to the position and posture of the mobile apparatus; and integrate the first position and posture and the second position and posture.
7. The mobile apparatus according to claim 1, further comprising the multiple external sensors, wherein the multiple external sensors are two or more of a GNSS, a three-dimensional light detection and ranging sensor, and a camera that obtains range information.
8. The mobile apparatus according to claim 7, wherein the multiple external sensors include the GNSS as a first external sensor and the three-dimensional light detection and ranging sensor as a second external sensor, wherein the circuitry is configured to: generate a position and posture in a GNSS coordinate system as the calculation information based on a sensor value of the first external sensor and generate a three-dimensional point cloud map as the calculation information based on sensor values of the second external sensor; determine a deviation of the position and posture in the GNSS coordinate system based on the sensor value of the first external sensor in the autonomous travel from the position and posture of the GNSS coordinate system included in the calculation information as a first position and posture of the node with reference to the position and posture of the mobile apparatus; compare the three-dimensional point cloud map with a scanning point cloud based on the sensor values of the second external sensor in the autonomous travel; determine a deviation of the position and posture of the mobile apparatus from an origin of the three-dimensional point cloud map as a second position and posture of the node with reference to the position and posture of the mobile apparatus; and integrate the first position and posture and the second position and posture.
9. A method for determining a position of a mobile apparatus, the method comprising: controlling the mobile apparatus to perform teaching travel in which the mobile apparatus stores in a memory a position and posture of the mobile apparatus traveling on a travel route under control by a manual operation; generating, for each of nodes on the travel route independently for each of multiple external sensors, calculation information used for calculating a deviation between a node passed in the teaching travel and a point passed in autonomous travel, the autonomous travel in which the mobile apparatus autonomously travels on the travel route; storing in the memory the calculation information in association with the node and the external sensor; calculating, for each node independently for each of the multiple external sensors, the deviation based on the calculation information and a sensor value of the external sensor obtained in the autonomous travel; determining, for each node independently for each of the multiple external sensors, the calculated deviation as a position and posture of the node on the travel route with reference to the position and posture of the mobile apparatus; integrating, for each node, the positions and postures of the node determined independently for each of the multiple external sensors; and controlling the mobile apparatus to autonomously travel on the travel route based on the integrated position and posture of each node on the travel route.
10. A non-transitory recording medium storing a plurality of program codes which, when executed by one or more processors, causes the one or more processors to perform a method, the method comprising: controlling a mobile apparatus to perform teaching travel in which the mobile apparatus stores in a memory a position and posture of the mobile apparatus traveling on a travel route under control by a manual operation; generating, for each of nodes on the travel route independently for each of multiple external sensors, calculation information used for calculating a deviation between a node passed in the teaching travel and a point passed in autonomous travel, the autonomous travel in which the mobile apparatus autonomously travels on the travel route; storing in the memory the calculation information in association with the node and the external sensor; calculating, for each node independently for each of the multiple external sensors, the deviation based on the calculation information and a sensor value of the external sensor obtained in the autonomous travel; determining, for each node independently for each of the multiple external sensors, the calculated deviation as a position and posture of the node on the travel route with reference to the position and posture of the mobile apparatus; integrating, for each node, the positions and postures of the node determined independently for each of the multiple external sensors; and controlling the mobile apparatus to autonomously travel on the travel route based on the integrated position and posture of each node on the travel route.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] A more complete appreciation of embodiments of the present disclosure and many of the attendant advantages and features thereof can be readily obtained and understood from the following detailed description with reference to the accompanying drawings, wherein:
[0026] FIG. 17 is a flowchart of an example process of autonomous travel performed by the mobile robot.
[0035] The accompanying drawings are intended to depict embodiments of the present disclosure and should not be interpreted to limit the scope thereof. The accompanying drawings are not to be considered as drawn to scale unless explicitly noted. Also, identical or similar reference numerals designate identical or similar components throughout the several views.
DETAILED DESCRIPTION
[0036] In describing embodiments illustrated in the drawings, specific terminology is employed for the sake of clarity. However, the disclosure of this specification is not intended to be limited to the specific terminology so selected and it is to be understood that each specific element includes all technical equivalents that have a similar function, operate in a similar manner, and achieve a similar result.
[0037] Referring now to the drawings, embodiments of the present disclosure are described below. As used herein, the singular forms a, an, and the are intended to include the plural forms as well, unless the context clearly indicates otherwise.
[0038] A description is given below of a mobile robot (mobile apparatus) and a position estimation method performed by the mobile apparatus according to example embodiments of the present disclosure, with reference to the attached drawings.
First Embodiment
Supplemental Description about Positioning Technology
[0039] Autonomous travel technologies (e.g., SLAM technologies) that use a map generated with the GNSS positioning technology and the LiDAR technology have been developed. To generate the map, the GNSS positioning technology is used in an open outdoor environment, and a range sensor or a camera is used in a relatively narrow indoor environment.
[0040] If a map is generated in an environment in which the GNSS positioning accuracy is continuously reduced, the generated map may be distorted. In such a case, the accuracy of the position and posture estimation of the mobile robot in autonomous travel is reduced. For example, multiple sensors are used for map generation and self-localization in order to estimate the position and posture with high accuracy even in a vast place where the GNSS positioning accuracy is not necessarily high.
[0041] In order to estimate the position and posture with high accuracy, some technologies have been proposed to achieve accurate environment map generation and self-localization using multiple sensors such as a GNSS, a LiDAR sensor, and a camera. Many of such technologies adopt a method of reconstructing the entire environment into one two-dimensional or three-dimensional map and determining the location of a mobile robot in the map.
[0042] However, even with multiple sensors, it is still difficult to accurately reconstruct the environment without distortion, and manual intervention is often required to, for example, correct the map. On an inaccurate map that is, for example, distorted, deviations may occur among the self-localization results of the respective sensors, which leads to instability of traveling.
[0043] A description is given below of an example of estimation (determination) of the position and posture using the GNSS and LiDAR.
[0044] If the positioning accuracy of the GNSS is satisfactory, the distortion in the map generated using LiDAR can be corrected to match the GNSS coordinate system, thereby generating an accurate map. However, when the GNSS positioning accuracy is continuously low, the distortion in the map generated using LiDAR is not appropriately corrected, and that portion of the map remains inaccurate.
[0045] Further, if GNSS positioning is accurate by chance at a place where the map is generated using LiDAR in autonomous travel, there arises a gap between the self-localization result obtained by matching scan data of LiDAR with the map and the GNSS positioning result. In that case, the mobile robot autonomously travels using, as a final estimation result, any one of these self-localization results or an intermediate result, which may lead to inaccurate traveling. Alternatively, an operator manually corrects the distorted portion of the map to match the GNSS coordinates to prevent incorrect traveling.
[0046] In the present embodiment, even in a vast place where the GNSS positioning accuracy is not necessarily high, a mobile apparatus can autonomously travel with high accuracy without manual correction of the map. Further, this enables the mobile apparatus to autonomously travel on a travel route on which the mobile apparatus is controlled to travel by a manual operation by an operator who does not have expertise.
[0047]
[0048] As illustrated in
[0049]
[0050]
[0051] The mobile robot 10 integrates the deviation amount (that is, the position of the waypoint 8 viewed from the mobile robot 10) while weighting the position and posture by the GNSS and the position and posture by the SLAM in accordance with the positioning accuracy of the GNSS and the positioning accuracy of the LiDAR.
[0052] As described above, the mobile robot 10 according to the present embodiment holds local maps that are independent for the multiple sensors in a topological map format. The mobile robot 10 calculates the reliability of its positions and postures estimated independently for each of the multiple sensors, based on the sensor values obtained in the teaching travel and the autonomous travel. The mobile robot 10 then integrates the self-positions and self-postures into a final position and posture with weighting according to the reliability. This obviates the necessity to manually correct the map to make the respective position estimation results of the sensors consistent. Accordingly, even if the positioning accuracy of any of the sensors is reduced in the route teaching phase, highly accurate autonomous travel can be achieved.
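As a concrete illustration of the reliability-weighted integration described above, the following Python sketch fuses per-sensor estimates of a waypoint position and posture (x, y, yaw) into a single pose. The function name and the weighting scheme are illustrative assumptions, not the exact method of the embodiment.

```python
import numpy as np

def integrate_waypoint_poses(poses, weights):
    """Fuse per-sensor estimates of a waypoint pose (x, y, yaw), each viewed
    from the mobile robot, into one pose using reliability weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize reliabilities
    xy = np.array([[p[0], p[1]] for p in poses])
    fused_x, fused_y = (w[:, None] * xy).sum(axis=0)
    # Average headings on the unit circle to avoid angle wrap-around.
    yaw = np.array([p[2] for p in poses])
    fused_yaw = np.arctan2((w * np.sin(yaw)).sum(), (w * np.cos(yaw)).sum())
    return fused_x, fused_y, fused_yaw

# Example: a GNSS-based estimate weighted 0.7 and a LiDAR/SLAM estimate weighted 0.3.
print(integrate_waypoint_poses([(1.2, 0.4, 0.10), (1.0, 0.5, 0.05)], [0.7, 0.3]))
```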
Terms
[0053] Teaching travel refers to the travel, under the manual operation of a controller by the operator, of the mobile robot 10 on the travel route on which the mobile robot 10 is to autonomously travel so that the mobile robot 10 stores the positions and postures while traveling the travel route. Autonomous travel refers to automatic travel of the mobile robot 10 on the travel route or along the travel route learned in the teaching travel.
[0054] An external sensor is an external environmental sensor that measures the environment in which the mobile robot 10 is placed (that is, outside the mobile robot 10). Examples of the external sensor include a GNSS antenna, a LiDAR sensor, and a camera that acquires distance information. An internal sensor is a sensor that measures the state of the mobile robot 10 (that is, the inside of the mobile robot 10). Examples of the internal sensor include a speed sensor, an accelerometer, an angular velocity sensor, and a direction sensor.
[0055] Nodes are points that are parts of the travel route. In the present embodiment, the nodes are also referred to as waypoints.
[0056] A position is information specifying the location of the mobile robot 10 with reference to the origin, and a posture is information specifying the direction of the mobile robot 10 with reference to a reference direction. The posture may be a posture in a three-dimensional space or a two-dimensional space.
[0057] Calculation information is information that enables calculation of a deviation between a node passed in the teaching travel and a point passed in the autonomous travel. The calculation information may differ depending on the external sensor. For the GNSS, the calculation information is the position and posture in the GNSS coordinate system. For LiDAR, the calculation information is a two-dimensional occupancy grid map. In the present embodiment, the calculation information is also described by the term local map.
[0058] The local map is a map of at least a part of the travel route and does not cover the entire travel route. The local map may be a map of a range measured by the external sensor in the environment with the waypoint 8 as the origin.
[0059] Integration refers to merging two or more things into one.
[0060] The position and posture with reference to the mobile robot 10 are the position and posture relative to the position and posture of the mobile robot 10. In the present embodiment, the expression viewed from the mobile robot 10 may be used.
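A minimal sketch of how the calculation information (local map) defined in the terms above might be organized per waypoint and per external sensor is shown below in Python. The class and field names are hypothetical and only illustrate that each node holds independent calculation information for each sensor.

```python
from dataclasses import dataclass, field

@dataclass
class LocalMap:
    """Calculation information for one waypoint and one external sensor."""
    sensor: str          # e.g., "gnss" or "lidar_2d"
    taught_pose: tuple   # position and posture (x, y, yaw) stored in teaching travel
    payload: object      # GNSS: pose in the GNSS coordinate system; LiDAR: 2-D occupancy grid

@dataclass
class Waypoint:
    node_id: int
    local_maps: dict = field(default_factory=dict)   # sensor name -> LocalMap

    def add_local_map(self, local_map: LocalMap) -> None:
        # Calculation information is stored independently for each external sensor.
        self.local_maps[local_map.sensor] = local_map
```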
System Configuration
[0061]
[0062] The communication system 1 includes the mobile robot 10 located at the operation site and a display apparatus 60. The mobile robot 10 and the display apparatus 60 of the communication system 1 communicate with each other through a communication network 100. The communication network 100 includes, for example, the Internet, a mobile communication network, a local area network (LAN), or combinations thereof. The communication network 100 may include, in addition to a wired network, a wireless network in compliance with a standard such as 3rd Generation (3G), 4th Generation (4G), 5th Generation (5G), Wireless Fidelity (WI-FI), Worldwide Interoperability for Microwave Access (WiMAX), or Long Term Evolution (LTE).
[0063] The mobile robot 10 is a robot installed in the operation site to autonomously travel from one location to another location at the operation site. The autonomous travel includes operation of autonomously travelling the operation site using the result of imitation learning (machine learning) of the routes traveled at the operation site in the past. The autonomous travel may be operation of autonomously travelling the operation site on a travel route set in advance, or operation of autonomously travelling the operation site using a technology such as line tracing. The mode in which the mobile robot 10 travels by autonomous travel may be called an autonomous travel mode in the following description. Further, the mobile robot 10 may travel under the manual remote control by an operator at a remote location. The mode in which the mobile robot 10 travels under the remote control by an operator may be called a manual operation mode in the following description. In other words, the mobile robot 10 may travel in the operation site while switching between the autonomous travel and the travel under the manual control of the operator. The mobile robot 10 may execute a preset task such as inspection, maintenance, transportation, or light work, while travelling the operation site. In this disclosure, the mobile robot 10 is a robot in a broad sense and may be any robot that can perform both travel under the remote control by an operator and autonomous travel. Examples of the mobile robot 10 include an automobile that can travel while switching between automatic driving and manual driving according to an operation by an operator at a remote location. Examples of the mobile robot 10 further include flying objects such as a drone, a multicopter, and an unmanned flying object.
[0064] The operation site where the mobile robot 10 is installed is an area (also referred to as a monitored area, a target site, or simply a site) in which the mobile robot 10 operates. Examples of the monitored area include an outdoor area such as a business place, a factory, a chemical plant, a construction site, a substation, a farm, a field, a cultivated land, or a disaster site; and an indoor area such as an office, a school, a factory, a warehouse, a commercial facility, a hospital, or a nursing facility. In other words, the operation site may be any location where there is a need for the mobile robot 10 to perform tasks that have been manually performed by humans.
[0065] The display apparatus 60 may be implemented by a computer such as a laptop personal computer (PC), operated by an operator residing at a control site different from the operation site, to perform predetermined operations to the mobile robot 10.
[0066] The operator performs operations, for example, to control the mobile robot 10 to move or execute a predetermined task via an operation screen displayed on the display apparatus 60 at the control site such as an office.
[0067] For example, the operator remotely controls the mobile robot 10 while viewing an image of the operation site displayed at the display apparatus 60.
[0068] Although one mobile robot 10 and one display apparatus 60 are connected to each other via the communication network 100 in
Configuration of Mobile Robot
[0069]
[0070] The crawler traveling bodies 10a and 10b are each a unit as traveling means for the mobile robot 10. The crawler traveling bodies 10a and 10b each use a metallic or rubber belt. Compared with a traveling body that travels with tires, such as an automobile, the crawler traveling body has a wider contact area with the ground, so that the travel is more stable even in an environment with bad footing, for example. While the traveling body that travels with the tires requires space to make a turn, the mobile apparatus with the crawler traveling body can perform a so-called spin turn, so that the mobile apparatus can smoothly turn even in a limited space.
[0071] The main body 50 supports the crawler traveling bodies 10a and 10b to travel and contains a controller that controls the driving of the mobile robot 10. The main body 50 further includes a battery to supply electric power for driving the crawler traveling bodies 10a and 10b.
[0072]
[0073] The state indicator lamps 144 are each an example of a notification means to indicate the state of the mobile robot 10. When the state of the mobile robot 10 changes, the state indicator lamps 144 light up to notify a person nearby of the state change of the mobile robot 10. The state change is, for example, a decrease in the remaining battery level. The state indicator lamps 144 light up also in response to, for example, the detection of possibility of an abnormality such as the detection of an obstacle that obstructs the traveling of the mobile robot 10. Although the mobile robot 10 in
[0074] The lid 146 is disposed on an upper face of the main body 50 and seals the inside of the main body 50. The lid 146 has a ventilation part 35a having an air vent through which air flows from or into the main body 50.
[0075] The two crawler traveling bodies 10a and 10b are installed such that the mobile robot 10 can travel with the main body 50 interposed therebetween. The number of crawler traveling bodies is not limited to two and may be three or more. For example, the mobile robot 10 may include three crawler traveling bodies arranged in parallel such that the mobile robot 10 can travel. Alternatively, the mobile robot 10 may include, for example, four crawler traveling bodies arranged on the front, rear, right, and left sides like the tires of an automobile.
[0076]
[0077] As illustrated in
[0078] The range sensor 112 irradiates an object with laser light, measures the time for the laser light to be reflected back from the object, and calculates the distance to the object and the direction in which the object is present based on the measured time.
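The time-of-flight calculation described above can be illustrated with a short sketch; the function names are hypothetical and the speed of light in air is approximated by its vacuum value.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s, vacuum value used as an approximation

def tof_to_range(round_trip_seconds: float) -> float:
    """One-way distance to the object: the laser travels out and back,
    so the measured time covers twice the distance."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def polar_to_point(range_m: float, bearing_rad: float) -> tuple:
    """Combine the range with the emission direction to locate the object
    in the sensor frame."""
    return (range_m * math.cos(bearing_rad), range_m * math.sin(bearing_rad))
```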
[0079] The mobile robot 10 further includes a range sensor 113 on the front face of the main body 50 in the traveling direction as illustrated in
[0080] The range sensor 113 irradiates an object (e.g., an obstacle on the road surface) with laser light, measures the time for the laser light to be reflected back from the object, and calculates the distance to the object and the direction in which the object is present based on the measured time.
[0081] An appropriate installation position of the range sensor 113 depends on the widths and lengths of the crawlers of the crawler traveling bodies 10a and 10b and the sizes, widths, depths, and heights of objects to be detected.
[0082] The mobile robot 10 illustrated in
Hardware Configuration
[0083] Referring to
[0084] As illustrated in
[0085] The CPU 101 controls the entire operation of the mobile robot 10. The memory 102 is a temporary storage area for the CPU 101 to execute programs such as a travel program.
[0086] The camera 111 includes a general image-capturing device, such as a digital single-lens reflex camera or a compact digital camera, which acquires planar images. The camera 111 further includes a special image-capturing device, such as a spherical camera, a stereo camera, or an infrared camera, which acquires special images. The mobile robot 10 may include multiple cameras 111 and may include both the general image-capturing device to acquire planar images and the special image-capturing device to acquire special images. The range sensor 112 for horizontal direction detection and the range sensor 113 for oblique direction detection are, for example, LiDAR sensors.
[0087] The navigation satellite system 114 includes an antenna to receive radio waves from a GNSS satellite and measures the position of the mobile robot 10 on the earth based on the received result. The navigation satellite system 114 is also referred to as a GNSS.
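For reference, a GNSS fix given as latitude and longitude can be converted to planar coordinates around a reference point, for example with the equirectangular approximation sketched below. The projection and constants here are illustrative assumptions, not part of the described embodiment.

```python
import math

EARTH_RADIUS = 6_378_137.0  # m, WGS84 equatorial radius

def latlon_to_local_xy(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Approximate east/north offsets (in metres) of a GNSS fix from a
    reference point; adequate for the short distances between waypoints."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    ref_lat, ref_lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    east = EARTH_RADIUS * (lon - ref_lon) * math.cos(ref_lat)
    north = EARTH_RADIUS * (lat - ref_lat)
    return east, north
```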
[0088] The IMU 115 is an inertial measurement sensor and estimates the position and posture (translational motion and rotational motion in orthogonal triaxial directions) of the mobile robot 10 by an accelerometer and an angular speed sensor. The IMU 115 is an example of an internal sensor.
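A simple planar dead-reckoning step using internal-sensor readings is sketched below; the signal names (forward speed and yaw rate) and the single-step Euler integration are illustrative assumptions.

```python
import math

def dead_reckon(pose, forward_speed, yaw_rate, dt):
    """Propagate a planar pose (x, y, yaw) by one time step dt using the
    travel amount and travel direction change measured by internal sensors."""
    x, y, yaw = pose
    x += forward_speed * dt * math.cos(yaw)
    y += forward_speed * dt * math.sin(yaw)
    yaw += yaw_rate * dt
    return (x, y, yaw)
```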
[0089] The battery 121 is a power source for the mobile robot 10 to travel. The start switch 142 is a switch for starting the mobile robot 10. The emergency stop switch 143 is a switch for stopping the mobile robot 10 as desired.
[0090] The motor drivers 122a and 122b are drivers for the travel motors 132a and 132b included in the two crawler traveling bodies 10a and 10b, respectively. The brake drivers 123a and 123b are drivers for the brake motors 133a and 133b included in the two crawler traveling bodies 10a and 10b, respectively. The power switch 141 is a switch to turn on or off the power of the mobile robot 10.
[0091] The travel program to be executed on the mobile robot 10 in the present embodiment may be recorded, in a file format installable or executable by a computer, on a computer-readable recording medium, such as a compact disc-read only memory (CD-ROM), a flexible disk (FD), a compact disc-recordable (CD-R), or a digital versatile disk (DVD).
[0092] The travel program to be executed on the mobile robot 10 according to the present embodiment can be stored on a computer connected to a network, such as the Internet, to be downloaded from the computer via the network. Alternatively, the travel program executed on the mobile robot 10 according to the present embodiment can be provided or distributed via a network such as the Internet. The travel program executed on the mobile robot 10 according to the present embodiment may be prestored in, for example, a ROM.
Display Apparatus
[0093]
[0094] As illustrated in
[0095] The CPU 501 controls the entire operation of the computer 500. The ROM 502 stores programs, such as an initial program loader (IPL), for driving the CPU 501. The RAM 503 is used as a work area for the CPU 501. The HD 504 stores various kinds of data such as a program. The HDD controller 505 controls an HDD to read or write various types of data from or to the HD 504 under the control of the CPU 501. The display 506 displays various information such as a cursor, a menu, a window, a character, or an image. The external device connection I/F 508 is an interface for connecting various external devices. Examples of the external device include, but are not limited to, a universal serial bus (USB) memory and a printer.
[0096] The network I/F 509 is an interface for performing data communication via a network. The bus line 510 is, for example, an address bus or a data bus for electrically connecting the components such as the CPU 501 illustrated in
[0097] The keyboard 511 is a kind of input device including multiple keys used for inputting, for example, characters, numerical values, and various instructions. The pointing device 512 is a kind of input device used to, for example, select various instructions, execute various instructions, select a target for processing, and move a cursor. The DVD-RW drive 514 controls the reading or writing of various types of data to or from a DVD-RW 513, which is an example of a removable recording medium. The DVD-RW drive 514 is not limited to the drive for DVD-RW, and may be, for example, a drive for a digital versatile disc-recordable (DVD-R). The medium I/F 516 controls the reading or writing (storing) of data from or to a recording medium 515 such as a flash memory.
Functional Configuration
[0098] Referring to
Functional Configuration of Mobile Robot (Controller)
[0099] The mobile robot 10 includes a transmission-reception unit 31, a determination unit 32, an image-capture control unit 33, a state detection unit 34, a location-information acquisition unit 35, a destination-candidate acquisition unit 36, a route-information generation unit 37, a route-information management unit 38, a destination setting unit 39, a travel control unit 40, an image recognition unit 41, a mode setting unit 42, an autonomous travel unit 43, a manual-operation processing unit 44, a task execution unit 45, a map-information generation unit 46 (information generation unit), an image processing unit 47, a learning unit 48, and a storing-reading unit 49. These are units of functions or means implemented or caused to function by one or more of the hardware elements illustrated in
[0100] The transmission-reception unit 31 has the function of transmitting and receiving various types of data (or information) to and from other apparatus or communication terminal via the communication network 100.
[0101] The determination unit 32 is implemented by processing of the CPU 101 and executes various determinations. The image-capture control unit 33 is implemented by processing of the CPU 101 for, for example, the camera 111 and controls imaging processing of the camera 111.
[0102] The state detection unit 34 is implemented by the processing of the CPU 101 for, for example, the camera 111 and the range sensors 112 and 113. The state detection unit 34 detects the state of the mobile robot 10 or the surroundings of the mobile robot 10 using various sensors. For example, the state detection unit 34 measures the distance to an object (obstacle) present around the mobile robot 10 and outputs the measured distance as distance data. Further, the state detection unit 34 may use the distance data to acquire data indicating the location of the mobile robot 10 based on a match with an environment map by applying SLAM. SLAM is a technology that allows simultaneous processing of self-localization and environment map generation. The state detection unit 34 further detects the direction to which the mobile robot 10 is heading or faces using, for example, the IMU 115.
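One simple way to match LiDAR distance data against an environment map, in the spirit of the SLAM-based localization mentioned above, is a correlative score that counts how many scan points land on occupied grid cells for a candidate pose. The sketch below is a naive illustration, not the matching algorithm actually used by the state detection unit 34.

```python
import numpy as np

def match_score(grid, resolution, origin, scan_xy, pose):
    """Fraction of scan points (sensor frame) that fall on occupied cells of a
    2-D occupancy grid when transformed by the candidate pose (x, y, yaw)."""
    x, y, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    pts = scan_xy @ np.array([[c, s], [-s, c]]) + np.array([x, y])  # rotate, then translate
    ij = np.floor((pts - np.asarray(origin)) / resolution).astype(int)
    inside = (ij[:, 0] >= 0) & (ij[:, 0] < grid.shape[1]) & \
             (ij[:, 1] >= 0) & (ij[:, 1] < grid.shape[0])
    hits = grid[ij[inside, 1], ij[inside, 0]] > 0.5   # cells marked occupied
    return hits.sum() / max(len(scan_xy), 1)
```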
[0103] The location-information acquisition unit 35 is implemented by the processing of the CPU 101 for the navigation satellite system 114 and acquires location information indicating the current location of the mobile robot 10 using the navigation satellite system 114. For example, the location-information acquisition unit 35 acquires coordinate information indicating the latitude and longitude of the current location of the mobile robot 10 using the navigation satellite system 114.
[0104] The destination-candidate acquisition unit 36 is implemented by, for example, the processing of the CPU 101 and acquires an image of a candidate of the destination to which the mobile robot 10 travels (may be referred to as a destination candidate image in the following description). Specifically, the destination-candidate acquisition unit 36 acquires, as a destination candidate image, an image captured under the control of the image-capture control unit 33. For example, the captured image is an image of a partial area of the site where the mobile robot 10 is installed.
[0105] The route-information generation unit 37 is implemented by, for example, the processing of the CPU 101 and generates route information indicating a travel route of the mobile robot 10. The route-information generation unit 37 generates route information indicating a route from the current location to the final destination (that is, travel destination) based on the location of the destination candidate selected by the operator of the mobile robot 10. Example methods of generating the route information include a method of connecting the waypoints 8 from the current location to the final destination with a straight line, and a method of minimizing the travel time by avoiding an obstacle using a captured image or information of the obstacle obtained by the state detection unit 34.
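The first route-generation method mentioned above (connecting the waypoints 8 with straight lines) can be sketched as follows; the sampling step and function name are illustrative assumptions.

```python
import math

def straight_line_route(waypoints, step=0.5):
    """Sample a route that connects consecutive waypoints (x, y) with straight
    segments, one point roughly every `step` metres."""
    route = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        length = math.hypot(x1 - x0, y1 - y0)
        n = max(int(length / step), 1)
        for i in range(n):
            t = i / n
            route.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    route.append(waypoints[-1])
    return route
```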
[0106] The route-information management unit 38 is implemented by, for example, the processing of the CPU 101 and stores the route information generated by the route-information generation unit 37 for management in a route-information management DB 3002.
[0107] The destination setting unit 39 is implemented by, for example, the processing of the CPU 101 and sets a travel destination of the mobile robot 10. For example, based on the current location of the mobile robot 10 acquired by the location-information acquisition unit 35 and the route information generated by the route-information generation unit 37, the destination setting unit 39 sets one of destination candidates selected by the operator of the mobile robot 10, as the travel destination to which the mobile robot 10 heads next.
[0108] The travel control unit 40 is implemented by, for example, the processing of the CPU 101 for the motor drivers 122a and 122b and controls the travel of the mobile robot 10 by driving the travel motors 132a and 132b. The travel control unit 40 controls the mobile robot 10 to travel, for example, according to a drive instruction from the autonomous travel unit 43 or the manual-operation processing unit 44.
[0109] The image recognition unit 41 is implemented by, for example, the processing of the CPU 101 and performs image recognition on a captured image acquired by the image-capture control unit 33. For example, the image recognition unit 41 performs image recognition to determine whether a specific subject is captured in the acquired captured image. The specific subject is, for example, an obstacle on the travel route or around the travel route of the mobile robot 10, an intersection such as a crossroad or an L-shaped road, or a sign or a signal at the site.
[0110] The mode setting unit 42 is implemented by, for example, the processing of the CPU 101 and sets an operation mode of the travel of the mobile robot 10. The mode setting unit 42 sets either an autonomous travel mode in which the mobile robot 10 autonomously travels or a manual operation mode in which the mobile robot 10 travels under the control operations manually made by the operator.
[0111] The autonomous travel unit 43 is implemented by, for example, the processing of the CPU 101 and controls the autonomous travel of the mobile robot 10. For example, the autonomous travel unit 43 outputs an instruction for driving the mobile robot 10 to the travel control unit 40, such that the mobile robot 10 travels on the travel route indicated by the route information generated by the route-information generation unit 37.
[0112] The autonomous travel unit 43 includes a calculation unit 43a, a deviation calculation unit 43b, a position estimation unit 43c, and an integration unit 43d. The calculation unit 43a calculates the position and posture of the mobile robot 10 based on the measurement values of the multiple external sensors and the internal sensors using a known method such as a Kalman filter in the autonomous travel.
[0113] The deviation calculation unit 43b calculates the deviation between the local map and the sensor value of the external sensor for each of the multiple external sensors. The position estimation unit 43c determines the calculated deviation as the position and posture of the waypoint 8 as viewed from the mobile robot 10. The integration unit 43d integrates the position and posture of the waypoint 8 determined for each of the multiple external sensors.
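The deviation determined by the position estimation unit 43c can be pictured as the taught waypoint pose expressed in the current robot frame. A minimal planar sketch, assuming (x, y, yaw) poses, is given below; the actual per-sensor computation (a GNSS pose difference or scan matching against the local map) is more involved.

```python
import math

def waypoint_viewed_from_robot(current_pose, taught_pose):
    """Express the taught waypoint pose (x, y, yaw) relative to the current
    robot pose, i.e. the position and posture of the node viewed from the robot."""
    cx, cy, cyaw = current_pose
    tx, ty, tyaw = taught_pose
    dx, dy = tx - cx, ty - cy
    # Rotate the world-frame offset into the robot frame.
    rel_x = math.cos(cyaw) * dx + math.sin(cyaw) * dy
    rel_y = -math.sin(cyaw) * dx + math.cos(cyaw) * dy
    rel_yaw = math.atan2(math.sin(tyaw - cyaw), math.cos(tyaw - cyaw))
    return (rel_x, rel_y, rel_yaw)
```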
[0114] The manual-operation processing unit 44 is implemented by, for example, the processing of the CPU 101 and controls the processing instructed by the manual operation for the mobile robot 10. For example, the manual-operation processing unit 44 outputs, to the travel control unit 40, an instruction for driving the mobile robot 10 according to a manual-operation command transmitted from the display apparatus 60.
[0115] The task execution unit 45 is implemented by, for example, the processing of the CPU 101 and causes the mobile robot 10 to execute a preset task according to a request from the operator. Examples of the preset task executed by the task execution unit 45 include processing of capturing images for inspection of equipment at the site and performing light work using a movable arm.
[0116] The map-information generation unit 46 is implemented by, for example, the processing of the CPU 101 and manages map information representing an environment map of the operation site where the mobile robot 10 is installed, using a map-information management DB 3003. For example, the map-information generation unit 46 generates multiple local maps based on the multiple external sensors, respectively, and manages the local maps in association with the external sensors. The local maps may be stored in association with the external sensors, respectively, in a DB of table format in a memory.
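The table-format storage mentioned above, associating each local map with a waypoint and an external sensor, might look like the following in-memory sketch; the class name and keys are hypothetical.

```python
class MapInformationDB:
    """Table-style store keyed by (waypoint id, sensor name)."""

    def __init__(self):
        self._rows = {}

    def put(self, node_id, sensor, local_map):
        self._rows[(node_id, sensor)] = local_map

    def get(self, node_id, sensor):
        return self._rows.get((node_id, sensor))

    def maps_for_node(self, node_id):
        # All local maps generated independently for the external sensors at one node.
        return {s: m for (n, s), m in self._rows.items() if n == node_id}
```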
[0117] The image processing unit 47 is implemented by, for example, the processing of the CPU 101 and generates a display image (screen image) to be displayed on the display apparatus 60. For example, the image processing unit 47 performs processing on the captured image acquired by the image-capture control unit 33 to generate a display image to be displayed on the display apparatus 60.
[0118] The learning unit 48 is implemented by, for example, the processing of the CPU 101 and learns travel routes to be used for autonomous travel of the mobile robot 10. For example, the learning unit 48 performs imitation learning (machine learning) of the travel routes to be used for autonomous travel based on the captured images acquired through the travel operation in the manual operation mode by the manual-operation processing unit 44 and the detection data obtained by the state detection unit 34. The autonomous travel unit 43 causes the mobile robot 10 to autonomously travel based on the learning data, which is the result of imitation learning by the learning unit 48.
[0119] The storing-reading unit 49 is implemented by, for example, the processing of the CPU 101 and stores various data or information in the storage unit 3000 or reads various data or information from the storage unit 3000.
Destination-Candidate Management Table
[0120]
[0121] The destination-candidate management table stores destination candidate data for each site identifier (ID) for identifying the site where the mobile robot 10 is installed. In the destination candidate data, a candidate ID for identifying a destination candidate, location information indicating the location of the destination candidate, and captured image data obtained by capturing a specific area of the site as the destination candidate are associated with one another. The location information is coordinate information including the latitude and longitude that indicate the location of the destination candidate at the site. In a case where the mobile robot 10 is a flying object such as a drone, the location information includes information such as the speed, the posture or altitude of the flying object in addition to the coordinate information indicating the latitude and longitude. In
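A minimal sketch of one row of the destination-candidate management table is shown below; the field names are illustrative, and additional fields (speed, posture, altitude) would be added for a flying object.

```python
from dataclasses import dataclass

@dataclass
class DestinationCandidate:
    candidate_id: str     # identifies the destination candidate
    latitude: float       # location information at the site
    longitude: float
    image_file: str       # captured image of the area registered as the candidate

# The table is held per site ID, e.g. {"site-001": [DestinationCandidate(...), ...]}.
destination_candidates = {
    "site-001": [DestinationCandidate("c001", 35.0001, 139.0001, "c001.jpg")],
}
```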
Route-Information Management Table
[0122]
[0123] The route-information management table stores, for each site ID identifying the site where the mobile robot 10 is installed, a route ID for identifying a travel route of the mobile robot 10 and route information indicating the travel route of the mobile robot 10 in association with one another. The route information indicates the travel route of the mobile robot 10 for reaching next destinations one by one in order. The route information is generated by the route-information generation unit 37, for example, when the mobile robot 10 starts the traveling.
[0124] In the present embodiment, the travel route connecting the multiple destinations and the travel route stored in the teaching travel are described. However, a part or the whole of the former travel route is the latter travel route, and the two are not strictly distinguished in the description.
Map-Information Management Table
[0125]
[0126] The map-information management table of
[0127] The map-information management table may further include a local map obtained by a camera. The camera may be a stereo camera or a monocular camera that obtains distance information. The local map by the camera represents information similar to that represented by a local map obtained by SLAM.
Learning-Data Management DB
[0128] A description is given below of a learning-data management DB 3004 stored in the storage unit 3000. The learning-data management DB 3004 stores learning data, which is the result of learning by the learning unit 48 of autonomous travel routes of the mobile robot 10. The learning-data management DB 3004 stores, for example, captured image data obtained by the mobile robot 10, various types of detection data, and learning data as the result of imitation learning (machine learning) for each site or for each mobile robot 10. The mobile robot 10 performs autonomous travel based on the learning data stored in the learning-data management DB 3004. The captured image data captured by the special image-capturing device or the general image-capturing device of the camera 111 may include Pan-Tilt-Zoom (PTZ) parameters for specifying an imaging direction of the special image-capturing device or the general image-capturing device. The captured image data including the PTZ parameters is stored (saved) in the storage unit 3000 (in particular, the learning-data management DB 3004) of the mobile robot 10. Alternatively, the mobile robot 10 may store the PTZ parameters in the storage unit 3000 in association with the location information of the destination candidate and further store the coordinate data (x, y, θ) indicating the posture of the mobile robot 10 at the time of acquisition of the captured image data of the destination candidate in the storage unit 3000. This enables the correction of the posture of the mobile robot 10 using the PTZ parameters and the coordinate data (x, y, θ) when the actual location where the mobile robot 10 stops deviates from the location of the destination.
[0129] Some data, such as the data of the autonomous travel route (GNSS positioning history) of the mobile robot 10 and the captured image data of destination candidates used for display on the display apparatus 60, may be accumulated on a cloud computing service such as AMAZON WEB SERVICES (AWS).
Functional Configuration of Display Apparatus
[0130] Referring back to
[0131] The transmission-reception unit 51 is implemented by, for example, the processing of the CPU 501 in relation to the network I/F 509 and transmits or receives various types of data or information to or from another device or communication terminal.
[0132] The operation reception unit 52 is implemented by, for example, the processing of the CPU 501 in relation to the keyboard 511 or the pointing device 512 and receives various selections or inputs from the operator.
[0133] The display control unit 53 is implemented by, for example, the processing of the CPU 501 and controls the display 506 to display various screens. The determination unit 54 is implemented by the processing of the CPU 501 and performs various determinations.
[0134] The manual-operation command generation unit 55 is implemented by, for example, the processing of the CPU 501 and generates a manual-operation command (instruction) for moving the mobile robot 10 by a manual operation in response to an input operation performed by the operator. The autonomous-travel request generation unit 56 is implemented by, for example, the processing of the CPU 501 and generates autonomous-travel request information for causing the mobile robot 10 to autonomously travel. The autonomous-travel request generation unit 56 generates the autonomous-travel request information to the mobile robot 10, for example, based on information on the destination candidate selected by the operator.
[0135] The image processing unit 57 is implemented by, for example, the processing of the CPU 501 and generates a display image to be displayed on a display such as the display 506. The image processing unit 57 performs processing, for example, on the captured image acquired by the mobile robot 10 to generate a display image to be displayed on the display apparatus 60. The communication system 1 may be provided with the function of at least one of the image processing unit 47 included in the mobile robot 10 and the image processing unit 57 included in the display apparatus 60.
[0136] The storing-reading unit 59 is implemented by, for example, the processing of the CPU 501 and stores various data or information in the storage unit 5000 or reads out various data or information from the storage unit 5000.
Processing or Operation according to Embodiment
Destination-Candidate Registration Process
[0137] First, referring to
[0138] The display apparatus 60 starts operating the mobile robot 10 in response to a predetermined input operation by the operator (S11). The transmission-reception unit 51 transmits an operation start request to the mobile robot 10 (S12). Accordingly, the transmission-reception unit 31 of the mobile robot 10 receives the operation start request transmitted from the display apparatus 60.
[0139] Subsequently, the image-capture control unit 33 starts image capturing using the special image-capturing device and the general image-capturing device of the camera 111 (S13). The image-capture control unit 33 acquires data of captured images captured by the special image-capturing device and the general image-capturing device. In the present embodiment, moving images are acquired by the image capturing in S13. The transmission-reception unit 31 transmits the data of captured images acquired in S13 to the display apparatus 60 (S14). Thus, the transmission-reception unit 51 of the display apparatus 60 receives the captured image data transmitted from the mobile robot 10.
[0140] Subsequently, the display control unit 53 of the display apparatus 60 displays an operation screen 200 including the captured image data received in S14 on the display such as the display 506 (S15).
[0141]
The operation screen 200 includes a site display area 210 and a site display area 230. In the site display area 210, captured image data (a planar image) captured by the general image-capturing device of the camera 111 and received in S14 is displayed. In the site display area 230, captured image data (special image) captured by the special image-capturing device of the camera 111 and received in S14 is displayed. The special image displayed in the site display area 230 is an omnidirectional image of the site captured by the special image-capturing device of the camera 111 as described above. Examples of the omnidirectional image include a spherical image, a wide-angle view image, or a hemispherical image. Alternatively, the special image-capturing device may combine the images captured by the general image-capturing device while rotating, so as to obtain an omnidirectional image as the special image. In the site display area 230, further, a mobile robot display image 235 indicating the presence of the mobile robot 10 is superimposed on the special image. In the operation screen 200, further, coordinate information represented by latitude (Lat) and longitude (Lon) is displayed as the location information indicating the current location of the mobile robot 10. In a case where the mobile robot 10 is a flying object such as a drone, the operation screen 200 may display other information such as the speed and the posture (position) or altitude of the flying object, in addition to the coordinate information represented by the latitude and longitude. The operation screen 200 may display the captured images obtained by the special image-capturing device and the general image-capturing device, as live streaming images that are distributed in real time through a computer network such as the Internet.
[0143] The operation screen 200 further includes an operation icon 250 for allowing the operator to remotely control the mobile robot 10. The operation icon 250 includes multiple direction instruction buttons each of which is pressed to request movement of the mobile robot 10 in a certain horizontal direction (e.g., forward, backward, right rotation, or left rotation). The operator can remotely control the mobile robot 10 by selecting the direction instruction button on the operation icon 250 while viewing the planar image displayed in the site display area 210 and the special image such as the spherical image displayed in the site display area 230.
[0144] In the present embodiment, the travel of the mobile robot 10 is remotely controlled by receiving the selection on the operation icon 250 displayed on the operation screen 200. Alternatively, the travel of the mobile robot 10 may be controlled by a dedicated controller, such as a keyboard or a game pad including a joystick.
[0145] The operation screen 200 further includes a destination-candidate registration button 270, which is pressed to register a destination candidate using the planar image displayed in the site display area 210 or the special image displayed in the site display area 230. In the following description, a destination candidate is registered in response to the pressing of the destination-candidate registration button 270 by the operator viewing the special image displayed in the site display area 230. Similar processing is performed also in a case where a destination candidate is registered in response to the pressing of the destination-candidate registration button 270 by the operator viewing the planar image displayed in the site display area 210. The operation screen 200 further includes a destination setting button 290, which is pressed to set a destination of the mobile robot 10.
[0146] Subsequently, the display apparatus 60 performs a manual operation processing for the mobile robot 10 using the operation screen 200 displayed in S15 (S16). Details of processing of S16 will be described later.
[0147] When the operator presses the destination-candidate registration button 270 on the operation screen 200, the operation reception unit 52 receives a request for registering, as a destination candidate, an area included in the special image displayed in the site display area 230 (S17). Then, the transmission-reception unit 51 transmits a destination-candidate registration request to the mobile robot 10 (S18). The destination candidate registration request is, for example, a request for capturing an image of the area included in the special image displayed in the site display area 230. Accordingly, the transmission-reception unit 31 of the mobile robot 10 receives the destination-candidate registration request transmitted from the display apparatus 60.
[0148] Subsequently, the location-information acquisition unit 35 of the mobile robot 10 acquires location information indicating the current location of the mobile robot 10 using the navigation satellite system 114 (S19). Specifically, the location-information acquisition unit 35 acquires coordinate information including the latitude and longitude of the current location of the mobile robot 10. In a case where the mobile robot 10 is a flying object such as a drone, the location-information acquisition unit 35 acquires, as the location information, information such as the speed and the posture or altitude of the flying object in addition to the coordinate information including the latitude and longitude. The destination-candidate acquisition unit 36 acquires a captured image captured by the special image-capturing device at the current location of the mobile robot 10 as a destination candidate image (S20). In the present embodiment, as the destination candidate image, a still image is acquired by the destination-candidate acquisition unit 36.
[0149] Then, the storing-reading unit 49 stores the destination candidate data including the location information acquired in S19 and the captured image acquired in S20 in the destination-candidate management DB 3001 (destination-candidate management table) illustrated in
[0150] As described above, the communication system 1 displays the captured image captured by the mobile robot 10 at the display apparatus 60 operated by the operator who remotely controls the mobile robot 10. This allows the operator to remotely control the mobile robot 10 while visually checking the surroundings of the mobile robot 10 in real time. Further, the communication system 1 registers in advance a captured image obtained by capturing a specific area of the site, as a candidate of the destination for the autonomous travel of the mobile robot 10, in response to an input operation of the operator made in the manual operation for the mobile robot 10.
Manual Operation Process
[0151] Subsequently, referring to
[0152] First, the operation reception unit 52 of the display apparatus 60 receives selection on the operation icon 250 on the operation screen 200 displayed in S15, according to an input operation of the operator (S31).
[0153] The manual-operation command generation unit 55 generates a manual-operation command according to the selection on the operation icon 250 selected in S31 (S32). In the example of
[0154] Subsequently, the mode setting unit 42 sets the mobile robot 10 to operate in the manual operation mode (S34). Then, the manual-operation processing unit 44 outputs an instruction to drive to the travel control unit 40 based on the manual-operation command received in S33. The travel control unit 40 controls the mobile robot 10 to travel according to the drive instruction from the manual-operation processing unit 44 (S35). Further, the learning unit 48 performs imitation learning (machine learning) of the travel routes traveled by the processing of the manual-operation processing unit 44, according to the manual operation (S36). The learning unit 48 performs imitation learning of the travel routes, for example, based on the captured images acquired through the travel operation in the manual operation mode by the manual-operation processing unit 44 and the detection data obtained by the state detection unit 34. The learning unit 48 may perform imitation learning of travel routes using only the captured images acquired in manual operation. Alternatively, the learning unit 48 may perform imitation learning of travel routes using both the captured images and the detection data obtained by the state detection unit 34. The captured images used for the imitation learning by the learning unit 48 may be the captured images acquired by the autonomous travel unit 43 during the autonomous travel in the autonomous travel mode.
[0155] The content of the learning of the travel routes by the learning unit 48 includes the position and posture of the mobile robot 10.
[0156] Then, the mobile robot 10 executes registration of the destination candidate in the travel operation (S37).
[0157] First, the determination unit 32 determines whether a preset task has been executed by the task execution unit 45 (S51). Specifically, the task execution unit 45 causes the mobile robot 10 to execute a preset task according to, for example, a task execution request from the operator or a schedule set in advance. Then, the determination unit 32 determines whether the preset task is executable under the control of the task execution unit 45. Examples of the preset task include capturing images for inspection of equipment at the site and performing light work using the movable arm.
[0158] The mobile robot 10 performs inspection work of an inspected object such as a meter or a valve, for example, when entering an inspection area while traveling the site. At this time, the mobile robot 10 stops traveling to perform the operation to capture an image of the inspected object. Such operation may be used to trigger the registration of the destination candidate.
[0159] When determining that the preset task has been executed by the task execution unit 45 (YES in S51), the determination unit 32 proceeds the process to S56. By contrast, when determining that the preset task is not executed by the task execution unit 45 (NO in S51), the determination unit 32 proceeds the process to S52.
[0160] Subsequently, the determination unit 32 determines whether the mobile robot 10 has stopped moving (S52). Specifically, when determining that the drive control of the travel motors 132a and 132b by the travel control unit 40 is stopped, the determination unit 32 determines that the mobile robot 10 stops moving. When the determination unit 32 determines that the mobile robot 10 has stopped moving (YES in S52), the determination unit 32 proceeds the process to S56. By contrast, when the determination unit 32 determines that the mobile robot 10 has not stopped moving (i.e., the mobile robot 10 is moving) (NO in S52), the determination unit 32 proceeds the process to S53.
[0161] The determination unit 32 then determines whether an intersection is detected near the mobile robot 10 (S53). Specifically, the image recognition unit 41 performs image recognition on a captured image acquired by the special image-capturing device or the general image-capturing device. Then, the determination unit 32 determines whether an intersection has been detected in the captured image as a result of the processing of the image recognition unit 41. By acquiring a special image captured omnidirectionally at a distinctive location such as an intersection, the mobile robot 10 can acquire, at one time, images as viewed from different directions on the travel route regardless of the travel direction. The intersection is an example of a specific subject detected by the image recognition unit 41. The specific subject is not limited to an intersection but may be, for example, an obstacle on or around the travel route of the mobile robot 10, or a sign or a signal at the operation site. The specific subject differs depending on the type of the mobile robot 10; for example, it differs between the case where the mobile robot 10 travels on a road surface and the case where the mobile robot 10 flies like a drone. The mobile robot 10 is preliminarily set with information on the specific subject to be detected.
[0162] When determining that an intersection has been detected near the mobile robot 10 (YES in S53), the determination unit 32 proceeds the process to S56. By contrast, when determining that an intersection has not been detected near the mobile robot 10 (NO in S53), the determination unit 32 proceeds the process to S54.
[0163] Subsequently, the determination unit 32 determines whether the current location of the mobile robot 10 is close to the destination candidate registered in the destination-candidate management DB 3001 (S54). Specifically, the determination unit 32 refers to the location information of the destination candidates stored in the destination-candidate management DB 3001 and determines whether there is a destination candidate close to the current location of the mobile robot 10 acquired by the location-information acquisition unit 35. The determination of whether the current location is close to the destination candidate is made based on, for example, the difference in value between the location information of the destination candidate preliminarily set and the location information indicating the current location of the mobile robot 10. Alternatively, the determination of whether the current location is close to the destination candidate may be made by performing image recognition on a captured image of the destination candidate (destination candidate image) and a captured image at the current location of the mobile robot 10.
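For illustration only, the distance comparison described above can be sketched in Python as follows. The 5-meter threshold, the haversine distance, and the field names (candidate_id, lat, lon) are assumptions of this sketch and are not specified in the present disclosure.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two latitude/longitude points.
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_near_registered_candidate(current, candidates, threshold_m=5.0):
    # current: (lat, lon); candidates: list of dicts with "candidate_id", "lat", "lon".
    # Returns True when any registered destination candidate lies within threshold_m.
    return any(
        haversine_m(current[0], current[1], c["lat"], c["lon"]) <= threshold_m
        for c in candidates
    )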
[0164] When determining that the current location of the mobile robot 10 is close to the destination candidate registered in the destination-candidate management DB 3001 (YES in S54), the determination unit 32 proceeds the process to S56. By contrast, when determining that the current location of the mobile robot 10 is not close to the destination candidate registered in the destination-candidate management DB 3001 (NO in S54), the determination unit 32 proceeds the process to S55.
[0165] Subsequently, the determination unit 32 determines whether the direction instructed by the manual-operation command transmitted from the display apparatus 60 has been changed (S55). Specifically, the determination unit 32 determines whether the travel direction instructed by the manual-operation command has been changed, based on the manual-operation command received in S33. For example, when the manual-operation command instructs a characteristic operation, the determination unit 32 determines that the travel direction has been changed. Examples of the characteristic operation include steering by a certain amount or more, accelerating, and braking, which are performed, for example, when the mobile robot 10 travels through an intersection or a curve.
[0166] When it is determined that the travel direction instructed by the manual-operation command transmitted from the display apparatus 60 has been changed (YES in S55), the determination unit 32 proceeds the process to S56. By contrast, when the determination unit 32 determines that the travel direction instructed by the manual-operation command transmitted from the display apparatus 60 is not changed (NO in S55), the determination unit 32 ends the process.
[0167] In S56, the location-information acquisition unit 35 acquires the location information indicating the current location of the mobile robot 10 using, for example, the navigation satellite system 114. In addition, the destination-candidate acquisition unit 36 acquires a captured image captured by the special image-capturing device at the current location of the mobile robot 10 as a destination candidate image indicating a destination candidate of the mobile robot 10 (S57).
[0168] Then, the storing-reading unit 49 stores, in the destination-candidate management DB 3001 (destination-candidate management table), the destination candidate data including the location information acquired in S56 and the destination candidate image acquired in S57.
[0169] As described above, the mobile robot 10 automatically captures an image of the surroundings of the mobile robot 10 using the camera 111 mounted on the mobile robot 10, based on a predetermined determination criterion in accordance with the travel state of the mobile robot 10, and registers in advance the captured image indicating a candidate of the travel destination of the mobile robot 10. The determination criterion used by the determination unit 32 is not limited to those described above referring to S51 to S55 but is appropriately set in accordance with details of the manual operation of the mobile robot 10 and the surroundings of the mobile robot 10.
[0170] The determination criterion by the determination unit 32 may be, for example, a state change of the mobile robot 10 to be recognized by the operator when the operator selects the destination of the mobile robot 10, the condition of the operation site, or an environmental change at the operation site.
Destination Setting
[0171] Subsequently, referring to
[0172] First, the operation reception unit 52 of the display apparatus 60 receives selection of the destination setting button 290 by an input operation of the operator on the operation screen 200 (S71).
[0173] Subsequently, the transmission-reception unit 51 transmits a destination candidate acquisition request for requesting data indicating the destination candidate, to the mobile robot 10 (S72). The destination candidate acquisition request includes the site ID for identifying the site where the mobile robot 10 is installed. The transmission-reception unit 31 of the mobile robot 10 receives the destination candidate acquisition request transmitted from the display apparatus 60.
[0174] Subsequently, the storing-reading unit 49 searches the destination-candidate management DB 3001 (destination-candidate management table) illustrated in
[0175] Then, the transmission-reception unit 31 transmits the destination candidate data retrieved in S73 to the display apparatus 60 that is the source of the request (S74). The destination candidate data retrieved in S73 includes, for each of multiple destination candidates, the candidate ID, the captured image, and the location information. Thus, the transmission-reception unit 51 of the display apparatus 60 receives and acquires the destination candidate data transmitted from the mobile robot 10.
[0176] Subsequently, the display control unit 53 of the display apparatus 60 displays a selection screen 400 including the destination candidate data received in S74 on the display such as the display 506 (S75). Specifically, the image processing unit 57 generates the selection screen 400 based on the received destination candidate data. Then, the display control unit 53 displays the selection screen 400 generated by the image processing unit 57. Alternatively, the image processing unit 47 of the mobile robot 10 may generate the selection screen 400. In this case, the image processing unit 47 generates the selection screen 400 based on the destination candidate data retrieved in S73. In S74, the transmission-reception unit 31 transmits screen data of the selection screen 400 including the destination candidate data generated by the image processing unit 47 to the display apparatus 60.
[0177]
[0178] The selection screen 400 includes an image display area 410, which displays multiple captured images 415a to 417f (may be collectively referred to as the captured images 415) selectably by the operator. The captured images 415a to 417f are the destination candidate images included in the destination candidate data received in S74. The selection screen 400 further includes an OK button 430 to be pressed to complete the selection and a cancel button 435 to be pressed to cancel the selection.
[0179] The operator selects one or more captured images 415 displayed in the image display area 410 using an input means such as the pointing device 512.
[0180] When the operator selects some of the captured images 415 displayed in the image display area 410, the operation reception unit 52 of the display apparatus 60 receives the selection of destination candidate image, which indicates an area to be set as the destination (S76). In the selection screen 400 illustrated in
[0181] As described above, the display apparatus 60 selects an area indicated by the captured image 415 as the travel destination of the mobile robot 10 using the captured image 415 (destination candidate image) indicating a travel destination candidate of the mobile robot 10, which has been captured in advance by the mobile robot 10. This increases the operability in selecting the travel destination of the mobile robot 10, for example, as compared to the method of allowing the operator to input a character string indicating location information or a keyword.
[0182] Subsequently, when the selection of the destination candidate image is received in S76 and the operator presses the OK button 430, the autonomous-travel request generation unit 56 of the display apparatus 60 generates autonomous-travel request information (S77). The autonomous-travel request information includes one or more candidate IDs respectively associated with one or more captured images 415 having been selected in S76 and information indicating the order in which the selection of the captured images is received in S76.
[0183] Subsequently, the transmission-reception unit 51 transmits the autonomous-travel request information generated in S77 to the mobile robot 10 (S78). The transmission-reception unit 31 of the mobile robot 10 receives the autonomous-travel request information transmitted from the display apparatus 60.
[0184] Then, the mobile robot 10 starts autonomous travel based on the autonomous-travel request information received in S78 (S79).
Autonomous Travel
[0185] Subsequently, referring to
[0186] First, when the autonomous-travel request information is received in S78, the mode setting unit 42 of the mobile robot 10 sets the mobile robot 10 to operate in the autonomous travel mode (S91).
[0187] Subsequently, the route-information generation unit 37 generates route information indicating an autonomous travel route of the mobile robot 10 based on the autonomous-travel request information received in S78 (S92). Specifically, based on the candidate IDs and the order of selecting the captured images 415 specified by the candidate IDs, which are indicated by the received autonomous-travel request information, the route-information generation unit 37 generates a travel route such that the mobile robot 10 autonomously travels the areas indicated by the captured images 415, in the order in which the captured images 415 are selected by the operator. The travel route generated by the route-information generation unit 37 is not limited to the route in the order of selecting of the captured images 415. Alternatively, the route-information generation unit 37 may generate route information based on locations where the captured images 415 are captured, which are stored in the destination-candidate management DB 3001, such that the distance or travel time of the travel route connecting the locations indicated by the selected captured images 415 becomes shorter.
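The two route-generation policies described above can be sketched as follows. This is a minimal illustration that assumes planar x-y coordinates and a candidate_id field for each candidate; the greedy nearest-neighbor heuristic is only one possible way to make the connecting route shorter and is not prescribed by the present disclosure.

import math

def order_by_selection(candidates, selected_ids):
    # Visit the destination candidates in the order the operator selected them.
    by_id = {c["candidate_id"]: c for c in candidates}
    return [by_id[cid] for cid in selected_ids]

def order_by_nearest_neighbor(start, candidates):
    # Greedy alternative: repeatedly visit the closest remaining candidate so that the
    # total travel distance tends to become shorter (not guaranteed to be optimal).
    remaining = list(candidates)
    route, current = [], start
    while remaining:
        nxt = min(remaining, key=lambda c: math.hypot(c["x"] - current[0], c["y"] - current[1]))
        remaining.remove(nxt)
        route.append(nxt)
        current = (nxt["x"], nxt["y"])
    return route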
[0188] Then, the route-information management unit 38 stores the route information generated in S92 in the route-information management DB 3002 (route-information management table) illustrated in
[0189] Subsequently, the location-information acquisition unit 35 acquires location information indicating the current location of the mobile robot 10 (S94).
[0190] The destination setting unit 39 sets the travel destination of the mobile robot 10 based on the current location of the mobile robot 10 acquired by the location-information acquisition unit 35 and the route information generated in S92 (S95). Specifically, the destination setting unit 39 sets, as the travel destination, for example, the location of a destination to which the mobile robot 10 should go next, from among multiple destinations each specified by the candidate ID in the generated route information. For example, in a case where autonomous travel just starts, the destination setting unit 39 sets, as the travel destination, the location of the destination specified by the first candidate ID in the route information. Then, the destination setting unit 39 generates a travel route from the acquired current location of the mobile robot 10 to the travel destination that is thus set. Example methods for the destination setting unit 39 to generate the travel route include a method of connecting the current location and the destination with a straight line, and a method of minimizing the travel time by avoiding an obstacle using a captured image or information of the obstacle obtained by the state detection unit 34.
[0191] The method for the destination setting unit 39 to generate the travel route is not limited to generating the travel route using the location information of travel destination candidates registered in the destination-candidate management DB 3001. Alternatively, the following method may be adopted. Based on the result of image recognition by the image processing unit 47 on the captured image captured by the camera 111, the destination setting unit 39 may identify a location in the captured image 415 set as the travel destination.
[0192] Then, the travel control unit 40 controls the mobile robot 10 to travel to the set travel destination through the travel route generated in S92. In this case, the travel control unit 40 controls the mobile robot 10 to autonomously travel according to a drive command from the autonomous travel unit 43 (step S96). The autonomous travel process performed by the autonomous travel unit 43 will be described in detail later.
[0193] When the mobile robot 10 has reached the final destination (YES in S97), the travel control unit 40 ends the process. By contrast, when the mobile robot 10 has not reached the final destination (NO in S97), the travel control unit 40 repeats the process from S94 and continues the autonomous travel until the mobile robot 10 reaches the final destination.
[0194] In this manner, the mobile robot 10 autonomously travels to the location indicated in the captured image 415 selected by the operator as the destination of the mobile robot 10. In the autonomous travel mode, the mobile robot 10 autonomously travels according to the generated route information or using the learning data learned in the manual operation mode.
Integration of Deviation from Travel Route using Local Map
[0195] A description is given below of a process of calculating deviations from the travel route by comparing the multiple local maps associated with the waypoints 8 in the teaching travel with the sensor values of the multiple external sensors in the autonomous travel, and of integrating the deviations calculated for the respective external sensors. First, a topological map will be described.
[0196]
[0197] The map-information generation unit 46 stores a local map (calculation information) generated using the sensor values of the external sensors and the internal sensors for the node v.sub.i, which is an element of the node set V. In the present embodiment, information that enables calculation of a deviation (position and posture) between a point passed in the teaching travel and a point passed in the autonomous travel is referred to as a map. For example, it is assumed that a two-dimensional grid map (an H×W image in which 1 is stored in a grid in which an obstacle is present and 0 is stored in a grid in which no obstacle is present) generated by the LiDAR during the teaching travel and a scanning point cloud of the LiDAR in the autonomous travel are given. In this case, the position and posture of the origin (=a point passed in the teaching travel=a node) of the two-dimensional grid map viewed from the mobile robot 10 in the autonomous travel can be calculated by, for example, optimization calculation. Accordingly, when the external sensor called LiDAR is used, the two-dimensional grid map can be called a map. By contrast, it is assumed that the position (latitude and longitude) and the azimuth observed by the GNSS in the teaching travel and the position (latitude and longitude) and the azimuth observed by the GNSS in the autonomous travel are given. In this case, since the position and posture of the point (node) passed in the teaching travel, as viewed from the mobile robot 10, can be calculated, the position (latitude and longitude) and the azimuth observed by the GNSS in the teaching travel are also called a map in the present embodiment.
[0198] The autonomous travel of the mobile robot 10 includes two phases, a route teaching phase and an autonomous travel phase. The route teaching phase corresponds to the teaching travel. In this phase, the operator manually operates the mobile robot 10 with the controller to travel on a travel route on which the mobile robot 10 is expected to autonomously travel. The autonomous travel phase corresponds to autonomous travel. In this phase, the mobile robot 10 autonomously travels so as to trace the travel route (move on or along the travel route) traveled in the route teaching phase.
[0199]
[0200] (1) First, the storage device stores the first local map and the second local map generated in the route teaching phase by the sensor values output from the first external sensor and the second external sensor. The first external sensor and the second external sensor are, for example, two of the navigation satellite system (GNSS) 114, the range sensor (LiDAR) 112 or 113, and the camera 111.
[0201] (2) In the autonomous travel phase, the first external sensor and the second external sensor each detect a sensor value. The internal sensor (e.g., the IMU 115) also detects a sensor value. The sensor value of the internal sensor is integrated with the sensor value of the first external sensor or the second external sensor to calculate the position and posture.
[0202] (3) The mobile robot 10 compares the first local map with the sensor value of the first external sensor and calculates the deviation from the travel route using the first external sensor (a deviation from the node). The mobile robot 10 compares the second local map with the sensor value of the second external sensor and calculates the deviation from the travel route using the second external sensor (a deviation from the node). As will be described later, the deviation from the node represents the position and posture of the waypoint 8 as viewed from the mobile robot 10. The mobile robot 10 calculates the reliability of the first external sensor based on the first local map and the sensor value of the first external sensor, and calculates the reliability of the second external sensor based on the second local map and the sensor value of the second external sensor.
[0203] (4) The mobile robot 10 weights the position and posture detected by the mobile robot 10 with the reliability, and integrates the position and posture detected by the first external sensor and the position and posture detected by the second external sensor. Since the positions and postures to be integrated are the deviations from the travel route, integrating the positions and postures is equivalent to integrating the deviations from the travel route.
[0204] (5) The mobile robot 10 controls the travel using the integrated position and posture in which the positions and postures detected by the multiple external sensors are integrated.
Processing in Route Teaching Phase
[0205] The route teaching phase will be described in detail with reference to
[0206] First, the map-information generation unit 46 initializes a topological map G by setting both the node set V and the edge set E to be empty (S101). The map-information generation unit 46 sets a k-th sensor value of the external sensor or a k-th local map generated using the external sensor as m.sup.k. The map-information generation unit 46 adds the first node v.sub.0 to the node set V at the start of the route teaching phase. All m.sup.k (=:m.sub.0.sup.k) at that time are stored in the node v.sub.0.
[0207] The transmission-reception unit 31 receives a manual-operation command of the manual control of the mobile robot 10 by the operator (S102). The travel control unit 40 controls the mobile robot 10 to travel according to a drive instruction from the manual-operation processing unit 44 (S103).
[0208] The determination unit 32 periodically determines whether the mobile robot 10 has moved by a certain distance or more or turned by a certain angle or more (S104). The certain distance is set in accordance with, for example, the processing load, the memory usage, and the accuracy of the position and the route, and may be, for example, 1 meter to several meters.
[0209] When it is determined YES in step S104, the map-information generation unit 46 adds a new node v.sub.1 to the node set V and adds a new edge e.sub.0,1 to the edge set E (S105). In the edge e.sub.0,1, the relative travel amount and the relative turning amount acquired by the internal sensor from the time of the addition of the initial node v.sub.0 are stored. The position of the node v.sub.i is referred to as the waypoint 8 in
[0210] For example, until the route teaching phase is finished by the operation of the operator (NO in S107), the map-information generation unit 46 adds the node v.sub.i and the edge e.sub.i-1,i in a similar manner each time the mobile robot 10 moves by the certain distance or more or turns by the certain angle or more.
[0211] The node set V and the edge set E can be expressed as follows:
G={V, E}
V={v.sub.0, . . . , v.sub.n}
E={e.sub.0,1, . . . , e.sub.n-1,n}
v.sub.i={m.sub.i.sup.0, . . . , m.sub.i.sup.N}
e.sub.i-1,i={x.sub.i-1,i, y.sub.i-1,i, θ.sub.i-1,i}
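A minimal sketch of the teaching-phase data structure and the S104 determination is given below. The 1-meter and 30-degree thresholds and the dictionary layout are assumptions for illustration; the actual values are set as described for S104 above.

import math

class TopologicalMap:
    # Minimal sketch of the teaching-phase map G = {V, E} described above.
    def __init__(self):
        self.nodes = []   # V: node v_i stores its local maps m_i^0 ... m_i^N
        self.edges = []   # E: edge e_{i-1,i} stores (dx, dy, dtheta) from the internal sensors

    def add_first_node(self, local_maps):
        # S101: the first node v_0 stores all local maps available at the start.
        self.nodes.append({"maps": local_maps})

    def add_node(self, local_maps, relative_motion):
        # S105: relative_motion is the travel and turning amount accumulated since the
        # previous node, as measured by the internal sensors (odometry).
        self.edges.append(relative_motion)
        self.nodes.append({"maps": local_maps})

def should_add_node(dx, dy, dtheta, dist_th=1.0, angle_th=math.radians(30)):
    # S104: add a new waypoint after moving dist_th meters or turning angle_th radians.
    return math.hypot(dx, dy) >= dist_th or abs(dtheta) >= angle_th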
Processing in Autonomous Travel Phase
[0212]
[0213] Before the start of autonomous travel, the mobile robot 10 is placed at the start point. The k-th sensor value of the external sensor in the autonomous travel phase is referred to as a sensor value s.sup.k. A specific calculation method of steps S201 to S203 and S206 will be described later.
[0214] S201: The deviation calculation unit 43b calculates the deviation of the position and posture of the node v.sub.i viewed from the mobile robot 10 based on the k-th sensor value s.sup.k of the external sensor in the autonomous travel of the mobile robot 10 and the k-th local map m.sub.i.sup.k stored in the certain node v.sub.i in the teaching travel of the mobile robot 10. Since the position and posture of the node v.sub.i as viewed from the mobile robot 10 are the position and posture of the node v.sub.i from the position and posture of the mobile robot 10 being the origin, the position and posture of the node v.sub.i with reference to the position and posture of the mobile robot 10 are the deviation of the position and posture of the node v.sub.i. The position estimation unit 43c estimates (determines) the calculated deviation as the position and posture x{circumflex over ()}.sub.i.sup.k={x{circumflex over ()}.sub.i.sup.k, y{circumflex over ()}.sub.i.sup.k, θ{circumflex over ()}.sub.i.sup.k} of the node v.sub.i with reference to the position and posture of the mobile robot 10. Note that although {circumflex over ()} represents a hat, {circumflex over ()} is not directly above the symbol but at the upper right due to font restrictions. An arithmetic device for performing this operation is represented as an arithmetic device F.sub.k:(s.sup.k, m.sub.i.sup.k).fwdarw.x{circumflex over ()}.sub.i.sup.k. By contrast, since the accuracy of the local map m.sub.i.sup.k and the k-th sensor value s.sup.k of the external sensor is unknown, x{circumflex over ()}.sub.i.sup.k is integrated according to the reliability as described below.
[0215] S202: The integration unit 43d calculates the estimated reliability b.sub.i.sup.k for the arithmetic device F.sub.k. An arithmetic device that performs this operation is represented as an arithmetic device B.sub.k:(s.sup.k, m.sub.i.sup.k).fwdarw.b.sub.i.sup.k.
[0216] S203: The integration unit 43d weights the position and posture x{circumflex over ()}.sub.i.sup.k of the node v.sub.i estimated by each external sensor based on the reliability and integrates the weighted positions and postures. An arithmetic device that performs this arithmetic operation is represented as an arithmetic device I:(x{circumflex over ()}.sub.i.sup.0, . . . , x{circumflex over ()}.sub.i.sup.N, b.sub.i.sup.0, . . . , b.sub.i.sup.N).fwdarw.x{circumflex over ()}.sub.i. Since the position and posture x{circumflex over ()}.sub.i.sup.k of the node v.sub.i are deviations of the position and posture of the mobile robot 10 from the origin (node) of the local map m.sub.i.sup.k, the integration of the positions and postures x{circumflex over ()}.sub.i.sup.k of the node v.sub.i includes integration of the deviations.
[0217] S204: The autonomous travel unit 43 calculates a taught travel route (that is, the position and posture of each node v.sub.j) in the mobile robot coordinate system from the position and posture x{circumflex over ()}.sub.i of the node v.sub.i and the relative position {x.sub.j-1,j, y.sub.j-1,j, θ.sub.j-1,j} between the nodes stored in each edge. An arithmetic device that performs this operation is represented as an arithmetic device T:(x{circumflex over ()}.sub.i, e.sub.0,1, . . . , e.sub.n-1,n).fwdarw.(x{circumflex over ()}.sub.0, . . . , x{circumflex over ()}.sub.n). The calculated travel route is a travel route passing through the node v.sub.i and the subsequent nodes with the position and posture of the node v.sub.i as the origin. As a result, every time the mobile robot 10 passes through the node v.sub.i, the travel route after the node v.sub.i in which the deviation at the node v.sub.i is reduced is recalculated. Accordingly, the mobile robot 10 can travel on the travel route with a reduced deviation from the taught travel route.
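The chaining performed by the arithmetic device T can be sketched as follows, assuming standard SE(2) pose composition; the function names are illustrative only.

import math

def compose(pose, delta):
    # Compose a relative motion delta = (dx, dy, dtheta), expressed in the frame of pose,
    # onto pose = (x, y, theta).
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def route_in_robot_frame(node_pose_i, edges_after_i):
    # Start from the integrated pose of node v_i as seen from the robot and chain the
    # stored edges e_{j-1,j} to obtain the remaining taught route in the robot frame.
    route = [node_pose_i]
    for e in edges_after_i:   # e = (x_{j-1,j}, y_{j-1,j}, theta_{j-1,j})
        route.append(compose(route[-1], e))
    return route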
[0218] S205: The calculation unit 43a calculates the current position and posture of the mobile robot 10 by applying, for example, a Kalman filter to the position and posture obtained by accumulating the sensor values of the internal sensor with the integrated position and posture of the node v.sub.i as the origin, and the position and posture based on the sensor values s.sup.k of the multiple external sensors. The autonomous travel unit 43 controls the mobile robot 10 to travel on the taught travel route (x{circumflex over ()}.sub.0, . . . , x{circumflex over ()}.sub.n) in the mobile robot coordinate system calculated in step S204, based on the current position and posture of the mobile robot 10.
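Because the present disclosure only names the Kalman filter as an example, the following is a minimal sketch of one linear Kalman update for the pose state (x, y, theta), where the prediction comes from propagating the internal-sensor odometry and the measurement is the pose derived from the integrated node deviation. The identity observation model and the covariance handling are assumptions of this sketch.

import numpy as np

def kalman_pose_update(x_pred, P_pred, z, R):
    # x_pred, P_pred: predicted pose [x, y, theta] and covariance from odometry propagation.
    # z, R: pose measurement derived from the integrated node deviation and its covariance.
    H = np.eye(3)                                   # the measurement observes the pose directly
    y = z - H @ x_pred                              # innovation
    y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi     # wrap the heading residual into [-pi, pi)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new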
[0219] S206: The autonomous travel unit 43 determines whether to advance the node number i of the position and posture estimation target to the next node number, depending on whether the position and posture of the mobile robot 10 is within a threshold distance from the position and posture x{circumflex over ()}.sub.i of the node v.sub.i on the taught travel route (x{circumflex over ()}.sub.0, . . . , x{circumflex over ()}.sub.n). A determination device that performs this determination is represented as a determination device J:(x{circumflex over ()}.sub.i, (x{circumflex over ()}.sub.0, . . . , x{circumflex over ()}.sub.n)).fwdarw.0 (i is not advanced) or 1 (i is advanced).
[0220] S207: The travel control unit 40 sets i to 0 at the start of the autonomous travel and increments i by one when the determination device J determines in S206 that i is to be advanced.
[0221] S208: The travel control unit 40 repeats the process of
[0222] In steps S201 to S204 in
Example Processing Using GNSS and 2D LiDAR
[0223] Detailed examples of steps S201 to S203 and S206 in the autonomous travel phase will be described.
[0224] The reference sign m.sub.i.sup.0 represents the position and posture estimated from a GNSS measurement history and odometry by the internal sensor, and a measurement state (good/bad). The reference sign m.sub.i.sup.1 is a two-dimensional occupancy grid map of the mobile robot coordinate system generated from a history of scanning point cloud of a 2D LiDAR and the odometry by the internal sensor. The reference sign s.sup.0 represents the position and posture estimated from a GNSS measurement history and odometry by the internal sensor, and a measurement state. The reference sign s.sup.1 represents a scanning point cloud of the 2D LiDAR.
[0225] S201: The arithmetic device F.sub.0 compares the positions and the postures by the GNSS position measurement results of m.sub.i.sup.0 and s.sup.0 to calculate the position and posture x{circumflex over ()}.sub.i.sup.0 of the node v.sub.i as viewed from the mobile robot 10. It is assumed that m.sub.i.sup.0 represents the position and posture (x.sub.i,GNSS, y.sub.i,GNSS, θ.sub.i,GNSS) of the node v.sub.i in the GNSS coordinate system, and s.sup.0 represents the position and posture (x.sub.GNSS, y.sub.GNSS, θ.sub.GNSS) of the mobile robot 10 in the GNSS coordinate system in the autonomous travel. At this time, the position and posture of the node v.sub.i as viewed from the mobile robot 10 are calculated by Expressions 1 and 2. The calculation by Expressions 1 and 2 corresponds to the arithmetic device F.sub.0.
As is clear from Expressions (1) and (2), the position and posture of the node v.sub.i as viewed from the mobile robot 10 are the deviations of the position and posture of the mobile robot 10 in the GNSS coordinate system from the position and posture (x.sub.i,GNSS, y.sub.i,GNSS, θ.sub.i,GNSS) of the node v.sub.i in the GNSS coordinate system (an example of first position and posture).
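Expressions 1 and 2 are not reproduced here; the following sketch shows the relative-pose computation they are understood to perform, assuming the GNSS latitude, longitude, and azimuth have already been converted to a local metric frame (x, y, theta).

import math

def node_pose_from_robot_gnss(node_gnss, robot_gnss):
    # node_gnss = (x_i, y_i, theta_i) stored in the teaching travel, robot_gnss = (x, y, theta)
    # measured in the autonomous travel, both in the same metric GNSS-based frame.
    xi, yi, ti = node_gnss
    xr, yr, tr = robot_gnss
    dx, dy = xi - xr, yi - yr
    # Rotate the world-frame difference into the robot frame.
    x_rel = math.cos(tr) * dx + math.sin(tr) * dy
    y_rel = -math.sin(tr) * dx + math.cos(tr) * dy
    t_rel = (ti - tr + math.pi) % (2 * math.pi) - math.pi
    return (x_rel, y_rel, t_rel)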
[0226] Subsequently, the arithmetic device F.sub.1 calculates the position and posture x{circumflex over ()}.sub.i.sup.1 of the node v.sub.i as viewed from the mobile robot 10 by matching calculation between the two-dimensional occupancy grid map of m.sub.i.sup.1 and the scanning point cloud of 2D LiDAR of s.sup.1 (an example of second position and posture). It is assumed that m.sub.i.sup.1 represents a two-dimensional occupancy grid map with the node v.sub.i as the origin, and s.sup.1 represents a LiDAR scanning point cloud {(x.sub.j, y.sub.j), j=1, . . . , N} (N is the number of scan points) of the mobile robot 10 in autonomous travel. At this time, the autonomous travel unit 43 finds a way of moving the two-dimensional occupancy grid map in the xy plane so that the sum of all the values of the grids corresponding to the points of the scanning point cloud (the grid having an obstacle has the value of 1) is the largest (that is, the scanning point cloud matches the map). Thus, the origin of m.sub.i.sup.1 as viewed from the mobile robot 10 (i.e., the position and posture of the node v.sub.i in the two-dimensional occupancy grid map) can be obtained. This series of operations corresponds to the arithmetic device F.sub.1.
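A brute-force sketch of the arithmetic device F.sub.1 is given below. The candidate-pose search set, grid resolution, and origin handling are assumptions for illustration; a practical implementation would use a coarse-to-fine search or an existing scan-matching library. The best candidate is the robot pose in the map frame, and its inverse gives the node origin as viewed from the robot.

import numpy as np

def match_scan_to_grid(grid, resolution, origin, scan_xy, candidate_poses):
    # grid: H x W array of 0/1 occupancy values; origin: world coordinate of cell (0, 0);
    # scan_xy: (N, 2) scan points in the robot frame; candidate_poses: iterable of (dx, dy, dth).
    best_score, best_pose = -1, (0.0, 0.0, 0.0)
    for dx, dy, dth in candidate_poses:
        c, s = np.cos(dth), np.sin(dth)
        xs = c * scan_xy[:, 0] - s * scan_xy[:, 1] + dx
        ys = s * scan_xy[:, 0] + c * scan_xy[:, 1] + dy
        ix = ((xs - origin[0]) / resolution).astype(int)
        iy = ((ys - origin[1]) / resolution).astype(int)
        ok = (ix >= 0) & (ix < grid.shape[1]) & (iy >= 0) & (iy < grid.shape[0])
        score = grid[iy[ok], ix[ok]].sum()   # number of scan points landing on occupied cells
        if score > best_score:
            best_score, best_pose = score, (dx, dy, dth)
    return best_pose  # robot pose in the map frame

def invert_pose(x, y, th):
    # SE(2) inverse: converts the robot pose in the map frame into the position and
    # posture of the map origin (the node) as viewed from the robot.
    c, s = np.cos(th), np.sin(th)
    return (-(c * x + s * y), -(-s * x + c * y), -th)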
[0227]
[0228] In
[0229] S202: The estimated reliability b.sub.0 is set to 1 only when the GNSS position measurements of m.sub.i.sup.0 and s.sup.0 are both good, and is set to 0 otherwise. The estimated reliability b.sub.1 is set to 1 when the amount of obstacles in the two-dimensional occupancy grid map of the m.sub.i.sup.1 is sufficiently large and the number of scanning points of the scanning point cloud 94 of s.sup.1 is sufficiently large, and is set to 0 otherwise. The values of the estimated reliabilities b.sub.0 and b.sub.1 are not limited to 1 or 0 and may be any suitable values according to the accuracy of the local map and the sensor value. For example, the GNSS position measurement states of m.sub.i.sup.0 and s.sup.0 are evaluated in N stages, and the total is set as the estimated reliability b.sub.0. The amount of obstacles in the two-dimensional occupancy grid map and the number of scanning points of the scanning point cloud 94 are evaluated in N stages, and the total is set as the estimated reliability b.sub.1.
[0230] S203: When either the estimated reliability b.sub.i.sup.0 or b.sub.i.sup.1 is not 0, I(x{circumflex over ()}.sub.i.sup.0, x{circumflex over ()}.sub.i.sup.1, b.sub.i.sup.0, b.sub.i.sup.1)=(b.sub.i.sup.0x{circumflex over ()}.sub.i.sup.0+b.sub.i.sup.1x{circumflex over ()}.sub.i.sup.1)/(b.sub.i.sup.0+b.sub.i.sup.1).
[0231] When the estimated reliability b.sub.i.sup.0=b.sub.i.sup.1=0, the position and posture based on the previous estimation result of the position and posture of the node v.sub.i and the odometry by the internal sensor is set as the estimation result (dead reckoning).
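The integration of S203, including the dead-reckoning fallback, can be sketched as follows. Directly averaging the heading component assumes the two estimates are close; that simplification is an assumption of this sketch.

def integrate_node_pose(x0, x1, b0, b1, dead_reckoned):
    # Arithmetic device I for two external sensors: reliability-weighted average of the node
    # poses estimated from GNSS (x0) and LiDAR (x1), each given as an (x, y, theta) tuple.
    # When both reliabilities are zero, fall back to the dead-reckoned pose.
    if b0 == 0 and b1 == 0:
        return dead_reckoned
    w = b0 + b1
    return tuple((b0 * a + b1 * b) / w for a, b in zip(x0, x1))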
[0232] S206: When the distance between the current location of the mobile robot 10 calculated by the calculation unit 43a and the position x{circumflex over ()}.sub.i={x{circumflex over ()}.sub.i, y{circumflex over ()}.sub.i, θ{circumflex over ()}.sub.i} integrated in step S203 is larger than a threshold value, the determination value is set to 0, and when the distance is smaller than the threshold value, the determination value is set to 1.
Example Processing Using GNSS and 3D LiDAR
[0233] A description is given of processing using a scanning point cloud obtained by a 3D LiDAR instead of the scanning point cloud obtained by a 2D LiDAR, with reference to the flowchart of
[0234] The reference sign m.sub.i.sup.0 represents the position and posture estimated from a GNSS measurement history and odometry by the internal sensor, and a measurement state (good/bad). The reference sign m.sub.i.sup.1 is a three-dimensional point cloud map of the mobile robot coordinate system generated from a history of scanning point cloud of a 3D LiDAR and the odometry by the internal sensor. The reference sign s.sup.0 represents the position and posture estimated from a GNSS measurement history and odometry by the internal sensor, and a measurement state. The reference sign s.sup.1 represents a scanning point cloud of the 3D LiDAR.
[0235] S201: The arithmetic device F.sub.0 compares the positions and the postures by the GNSS position measurement results of m.sub.i.sup.0 and s.sup.0 to calculate the position and posture x{circumflex over ()}.sub.i.sup.0 of the node v.sub.i as viewed from the mobile robot 10. It is assumed that m.sub.i.sup.0 represents the position and posture (x.sub.i,GNSS, y.sub.i,GNSS, θ.sub.i,GNSS) of the node v.sub.i in the GNSS coordinate system, and s.sup.0 represents the position and posture (x.sub.GNSS, y.sub.GNSS, θ.sub.GNSS) of the mobile robot 10 in the GNSS coordinate system in the autonomous travel. At this time, the position and posture of the node v.sub.i as viewed from the mobile robot 10 are calculated by the above Expressions 1 and 2. The calculation by Expressions 1 and 2 corresponds to the arithmetic device F.sub.0.
[0236] As is clear from Expressions (1) and (2), the position and posture of the node v.sub.i as viewed from the mobile robot 10 are the deviations of the position and posture of the mobile robot 10 in the GNSS coordinate system from the position and posture (x.sub.i,GNSS, y.sub.i,GNSS, θ.sub.i,GNSS) of the node v.sub.i in the GNSS coordinate system (an example of first position and posture).
[0237] Subsequently, the arithmetic device F.sub.1 calculates the position and posture x{circumflex over ()}.sub.i.sup.1 of the node v.sub.i as viewed from the mobile robot 10 by matching calculation between the three-dimensional point cloud map of m.sub.i.sup.1 and the scanning point cloud of 3D LiDAR of s.sup.1 (an example of second position and posture). It is assumed that m.sub.i.sup.1 represents a three-dimensional point cloud map with the node v.sub.i as the origin, and s.sup.1 represents a 3D LiDAR scanning point cloud {(x.sub.j, y.sub.j, z.sub.j), j=1, . . . , N} (N is the number of scan points) of the mobile robot 10 in autonomous travel. At this time, the autonomous travel unit 43 finds a way of moving the three-dimensional point cloud map to minimize an error from the scanning point cloud by, for example, NDT matching (see P. Biber & W. Strasser. The normal distributions transform: a new approach to laser scan matching, Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453)). Thus, the origin of m.sub.i.sup.1 as viewed from the mobile robot 10 (i.e., the position and posture of the node v.sub.i, which is the origin of the three-dimensional point cloud map) can be obtained. This series of operations corresponds to the arithmetic device F.sub.1.
[0238] S202: The estimated reliability b.sub.0 is set to 1 only when the GNSS position measurements of m.sub.i.sup.0 and s.sup.0 are both good, and is set to 0 otherwise. The estimated reliability b.sub.1 is set to 1 when the amount of obstacles in the field of view of the 3D LiDAR on the three-dimensional point cloud map of the m.sub.i.sup.1 is sufficiently large, and is set to 0 otherwise. However, the autonomous travel unit 43 performs the following processing when calculating the amount of obstacles.
[0239]
[0240] The autonomous travel unit 43 calculates the relative position and posture of the mobile robot 10 in the two-dimensional occupancy grid map in the autonomous travel (step S4). The autonomous travel unit 43 calculates the amount of obstacles in the field of view of the 3D LiDAR based on the number of occupied grids (step S5). The autonomous travel unit 43 normalizes the amount of obstacles by the outer circumferential length of the two-dimensional occupancy grid map (step S6). The autonomous travel unit 43 evaluates the estimated reliability b.sub.1 by the normalized amount of obstacles (step S7). The estimated reliability b.sub.1 is set to 1 when the normalized amount of obstacles is larger than the threshold, and otherwise, the estimated reliability b.sub.1 is set to 0. The reason why the amount of obstacles is normalized is to eliminate the dependency on the range (size) of the point cloud map. The values of the estimated reliabilities b.sub.0 and b.sub.1 are not limited to 1 or 0 and may be any suitable values according to the accuracy of the local map and the sensor value. For example, the GNSS position measurement states of m.sub.i.sup.0 and s.sup.0 are evaluated in N stages, and the total is set as the estimated reliability b.sub.0. The amount of obstacles on the three-dimensional point cloud map and the number of scanning points of the scanning point cloud 94 are evaluated in N stages, and the total is set as the estimated reliability b.sub.1.
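Steps S5 to S7 can be sketched as follows, omitting the field-of-view restriction of the 3D LiDAR for brevity; the 0.5 threshold is an assumed value for illustration.

import numpy as np

def normalized_obstacle_amount(grid, resolution):
    # Count occupied cells in the two-dimensional occupancy grid derived from the 3D point
    # cloud map and normalize by the map's outer perimeter so that the value does not
    # depend on the map size.
    occupied = int(grid.sum())                       # number of cells holding 1
    h, w = grid.shape
    perimeter = 2 * (h + w) * resolution             # outer circumferential length in meters
    return occupied / perimeter

def lidar_reliability(grid, resolution, threshold=0.5):
    # Reliability 1 when enough obstacles are visible, 0 otherwise.
    return 1 if normalized_obstacle_amount(grid, resolution) > threshold else 0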
[0241] S203: When either the estimated reliability b.sub.i.sup.0 or b.sub.i.sup.1 is not 0, I(x{circumflex over ()}.sub.i.sup.0, x{circumflex over ()}.sub.i.sup.1, b.sub.i.sup.0, b.sub.i.sup.1)=(b.sub.i.sup.0x{circumflex over ()}.sub.i.sup.0+b.sub.i.sup.1x{circumflex over ()}.sub.i.sup.1)/(b.sub.i.sup.0+b.sub.i.sup.1).
[0242] When the estimated reliability b.sub.i.sup.0=b.sub.i.sup.1=0, the position and posture based on the previous estimation result of the position and posture of the node v.sub.i and the odometry by the internal sensor is set as the estimation result (dead reckoning).
[0243] S206: When the distance between the current location of the mobile robot 10 calculated by the calculation unit 43a and the position x{circumflex over ()}.sub.i={x{circumflex over ()}.sub.i, y{circumflex over ()}.sub.i, θ{circumflex over ()}.sub.i} integrated in step S203 is larger than a threshold value, the determination value is set to 0, and when the distance is smaller than the threshold value, the determination value is set to 1.
[0244] The mobile robot 10 according to the present embodiment has a local map in a topological map format independently for each sensor. The mobile robot 10 calculates the reliability of estimation of self-position and self-posture, which are independently estimated for each independent local map, based on the sensor values obtained in teaching travel and autonomous travel. The mobile robot 10 then integrates the self-positions and self-postures into the final position and posture with weighting according to the reliability. This obviates the necessity to manually correct the map to make the respective position estimation results by the sensors consistent. Accordingly, even if the positioning accuracy of any of the sensors is reduced in the route teaching phase, highly accurate autonomous travel can be achieved.
[0245] The reliability calculation when a 3D LiDAR is used has the following two advantages.
[0246] Since the map point cloud is used instead of the real-time point cloud for the calculation of the reliability (the amount of obstacles around the mobile robot 10) for the map matching, the pre-processing for the reliability calculation can be executed in advance.
[0247] In other words, the real-time calculation load is small.
[0248] When the reliability is calculated by viewing the real-time point cloud, the reliability is increased by an obstacle (such as a vehicle or a person) temporarily passing by the mobile robot 10, and an erroneous localization result may be adopted. However, when the map point cloud is used, there is no such concern.
Second Embodiment
[0249] A description is given of a second embodiment in which the display apparatus 60 is a head mounted display (HMD). The HMD is a kind of display apparatus that is worn on the head so as to cover both eyes with two displays in a goggle-like shape. The HMD displays parallax images to the left and right eyes to provide stereoscopic view. Providing stereoscopic view is not a requisite.
[0250] In addition to HMDs that simply provide stereoscopic view of videos, there are various types of HMDs, such as a virtual reality (VR) headset (goggles), an augmented reality (AR) headset, and a mixed reality (MR) headset. A VR headset displays an artificial virtual world. An AR headset reads the real world stereoscopically and displays virtual information superimposed on the real world. An MR headset is an extension of the AR headset and displays an object that is not actually present in the real world in a three-dimensional manner. In the present embodiment, since the operator browses the image data captured by the mobile robot 10 with the HMD, a simple HMD or an AR headset may be used.
[0251]
[0252] The CPU 201 controls the entire operation of the HMD. The ROM 202 stores a program such as an initial program loader (IPL) to boot the CPU 201. The RAM 203 is used as a work area for the CPU 201.
[0253] The external device connection I/F 205 is an interface for connecting various external devices. The external device in this case is, for example, a communication management server 3 or an earphone 6 having a microphone.
[0254] The display 207 is a display apparatus such as a liquid crystal display or an organic EL display that displays various images.
[0255] The operation device 208 is an input means operated by an operator to, for example, select or execute various instructions, select a target for processing, or move a cursor being displayed. Examples of the input means include various operation buttons, a power switch, a physical button, and a line-of-sight operation circuit that operates in response to detection of the line of sight of the operator.
[0256] The medium I/F 209 controls the reading or writing (storing) of data from or to a recording medium 209m such as a flash memory. Examples of the recording medium 209m include a DVD and a BLU-RAY DISC.
[0257] The speaker 212 is a circuit that converts an electric signal into physical vibration to generate sound such as music or voice. The electronic compass 218 calculates the direction of the HMD from the Earth's magnetism and outputs direction information. The gyro sensor 219 detects a change in angle (roll, pitch, and yaw) of the HMD as the HMD moves.
[0258] The accelerometer 220 is a sensor that detects acceleration in triaxial directions. Examples of the bus line 211 include an address bus and a data bus, which electrically connect the components including the CPU 201 to one another.
[0259] The display apparatus 60 according to the present embodiment can detect the posture of the display apparatus 60 by the signal of the gyro sensor 219 and thus can identify the direction in which the operator is viewing. The camera 111 can capture a spherical image, a wide-angle image, or a hemispherical image. Accordingly, the display apparatus 60 can display the direction in which the operator is viewing on the display 207 of the HMD. This makes it easier for the operator to view various directions compared with operating the area displayed in the site display area 210 with the pointing device 512 such as a mouse.
[0260] The position of the mobile robot 10 when the operator is viewing in a certain direction with the HMD is identified by the state detection unit 34. Accordingly, the display apparatus 60 can display annotation of an object such as a building present in the direction of view of the operator from the position of the mobile robot 10 in a manner superimposed on the image data transmitted from the mobile robot 10. In this case, for example, the system administrator sets annotation of the building using the environment map generated by the mobile robot 10.
[0261]
[0262] The mobile robot 10 of the present embodiment provides effects similar to those of the first embodiment or the second embodiment. Further, the use of an HMD as the display apparatus 60 is advantageous in that the operator can easily view the image data captured by the mobile robot 10.
[0263] Although the example embodiments of the present disclosure are described above, the above-described embodiments are not intended to limit the scope of the present invention. Thus, numerous modifications and replacements of elements are possible within the scope of the present invention.
[0264] For example, in the above-described embodiments, two positions and postures obtained using two external sensors are integrated, but three or more positions and postures obtained using three or more external sensors may be integrated.
[0265] The technology according to the present disclosure is particularly effective in a vast place where the GNSS positioning accuracy is not necessarily high, but can also be suitably applied to a narrow indoor environment and a place where the GNSS positioning accuracy is high.
[0266] Further, position information by UWB (Ultra-Wideband) as an example of the external sensor may be used. In a UWB system, a communication tag is attached to the mobile robot 10, and sensors are installed at intervals of 30 m to 40 m in the travel environment of the mobile robot 10. The location of the communication tag can be identified by two or more sensors receiving the radio waves transmitted by the communication tag.
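As an illustration of how such a UWB system can identify the tag location, the following sketch solves the range equations by linear least squares. It assumes three or more sensors (anchors) with known planar positions, which is a simplification of the "two or more sensors" described above, and is not an implementation prescribed by the present disclosure.

import numpy as np

def locate_tag(anchors, distances):
    # anchors: (N, 2) array of fixed UWB sensor positions (N >= 3).
    # distances: measured ranges from the communication tag to each sensor.
    # Subtracting the first circle equation from the others linearizes the problem.
    a0, d0 = anchors[0], distances[0]
    A = 2 * (anchors[1:] - a0)
    b = (d0 ** 2 - distances[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy  # estimated (x, y) position of the tag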
[0267] In the configuration illustrated in, for example,
[0268] Each of the functions of the above-described embodiments may be implemented by one or more processing circuits or circuitry. The processing circuit or circuitry in the present specification includes a programmed processor to execute each function by software, such as a processor implemented by an electronic circuit, and devices, such as an application specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA), and/or combinations thereof which are configured or programmed, using one or more programs stored in one or more memories to perform the disclosed functionality.
[0269] The embodiments of the present disclosure can provide significant improvements in computer capability and functionality. These improvements allow operators to take advantage of computers that provide more efficient and robust interaction with tables, which are a way to store and present information on information processing apparatuses. In addition, the embodiments of the present disclosure can provide a better operator experience through the use of a more efficient, powerful, and robust user interface. Such a user interface provides a better interaction between a human and a machine.
Aspect 1
[0270] A mobile apparatus performs teaching travel in which the mobile apparatus stores a position and posture while traveling on a travel route under control by a manual operation of an operator, and autonomous travel in which the mobile apparatus autonomously travels so as to pass the stored position in the stored posture. The mobile apparatus includes an information generation unit, a deviation calculation unit, a position estimation unit, an integration unit, and an autonomous travel unit.
[0271] The information generation unit generates calculation information used for calculating a deviation between a node passed in the teaching travel and a point passed in the autonomous travel. The information generation unit generates the calculation information for each node on the travel route independently for multiple external sensors, in association with the node and the external sensor.
[0272] The deviation calculation unit calculates, for each node independently for the multiple external sensors, the deviation based on the calculation information and a sensor value of the external sensor obtained in the autonomous travel.
[0273] The position estimation unit determines, for each node independently for the multiple external sensors, the calculated deviation as a position and posture of the node on the travel route with reference to the position and posture of the mobile apparatus.
[0274] The integration unit integrates, for each node, the positions and postures of the node on the travel route determined independently for the multiple external sensors.
[0275] The autonomous travel unit controls the mobile apparatus to autonomously travel on the travel route based on the position and the posture of the node on the travel route integrated by the integration unit.
Aspect 2
[0276] The mobile apparatus of Aspect 1 further includes one or more internal sensors to detect a travel amount and a travel direction of the mobile apparatus.
[0277] The information generation unit uses a sensor value of the one or more internal sensors in generating the calculation information, and, in the teaching travel, stores the calculation information in the storage unit in association with the external sensor for each node on the travel route.
Aspect 3
[0278] In the mobile apparatus of Aspect 1 or 2, in integrating the positions and postures of the node with reference to the position and posture of the mobile apparatus determined independently for the multiple external sensors, the integration unit weights the positions and postures of the node based on reliability of the calculation information and reliability of respective sensor values of the multiple external sensors in the autonomous travel.
Aspect 4
[0279] In the mobile apparatus of any one of Aspects 1 to 3, when the integration unit integrates the positions and postures of a first node determined independently for the multiple external sensors, the autonomous travel unit calculates a position and posture of a second node subsequent to the first node based on the integrated position and posture of the first node and relative position information between the first node and the second node included in the calculation information, and calculates a travel route whose origin is the position and posture of the mobile apparatus at the first node.
Aspect 5
[0280] In the mobile apparatus of any one of Aspects 1 to 4, the multiple external sensors are two or more of a GNSS, a 2D LiDAR, and a camera to obtain range information.
Aspect 6
[0281] In the mobile apparatus of Aspect 5, the calculation information is a position and posture in a GNSS coordinate system when a first external sensor of the multiple external sensors is a GNSS, and the calculation information is a two-dimensional occupancy grid map when a second external sensor of the multiple external sensors is a 2D LiDAR.
[0282] The position estimation unit determines, for the first external sensor, a deviation of the position and posture in the GNSS coordinate system indicated by a sensor value of the first external sensor in the autonomous travel from the position and posture in the GNSS coordinate system included in the calculation information, as a first position and posture of the node with reference to the position and posture of the mobile apparatus.
[0283] The position estimation unit compares the two-dimensional occupancy grid map with a scanning point cloud indicated by sensor values of the second external sensor in the autonomous travel, and determines, for the second external sensor, a deviation of the position and posture of the mobile apparatus from the origin of the two-dimensional occupancy grid map as a second position and posture of the node with reference to the position and posture of the mobile apparatus.
[0284] The integration unit integrates the first position and posture and the second position and posture.
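A hedged sketch of the two deviations of Aspect 6 follows; the centroid-based alignment below is only a crude stand-in for real scan matching against the occupancy grid map (for example, ICP), and all names and numbers are illustrative.

```python
# Illustrative only: centroid alignment stands in for real 2D scan matching.
import numpy as np


def gnss_deviation(taught_pose, observed_pose):
    """First external sensor (GNSS): deviation of the observed pose in the GNSS
    coordinate system from the taught pose stored as calculation information."""
    return np.asarray(taught_pose, dtype=float) - np.asarray(observed_pose, dtype=float)


def lidar_deviation(grid_map, resolution, scan_xy):
    """Second external sensor (2D LiDAR): crude translation-only deviation of the
    apparatus from the occupancy grid map origin, via centroid alignment of the
    scanning point cloud with occupied cells."""
    occupied = np.argwhere(grid_map > 0.5) * resolution      # occupied cells as metric points
    shift = occupied.mean(axis=0) - np.asarray(scan_xy, dtype=float).mean(axis=0)
    return np.array([shift[0], shift[1], 0.0])               # posture deviation omitted here


taught_gnss = (10.0, 20.0, 0.30)      # stored during teaching travel
observed_gnss = (9.6, 20.2, 0.27)     # sensor value during autonomous travel
grid = np.zeros((10, 10))
grid[5, 5] = 1.0                      # toy two-dimensional occupancy grid map
scan = np.array([[0.4, 0.4], [0.6, 0.6]])                    # toy scanning point cloud

first_pose = gnss_deviation(taught_gnss, observed_gnss)
second_pose = lidar_deviation(grid, 0.1, scan)
print((first_pose + second_pose) / 2.0)                      # simple integration of the two results
```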
Aspect 7
[0285] In the mobile apparatus of any one of Aspects 1 to 4, the multiple external sensors are two or more of a GNSS, a 3D LiDAR, and a camera to obtain range information.
Aspect 8
[0286] In the mobile apparatus of Aspect 7, the calculation information is a position and posture in a GNSS coordinate system when a first external sensor of the multiple external sensors is a GNSS, and the calculation information is a three-dimensional point cloud map when a second external sensor of the multiple external sensors is a 3D LiDAR.
[0287] The position estimation unit determines, for the first external sensor, a deviation of the position and posture in the GNSS coordinate system indicated by a sensor value of the first external sensor in the autonomous travel from the position and posture in the GNSS coordinate system included in the calculation information, as a first position and posture of the node with reference to the position and posture of the mobile apparatus.
[0288] The position estimation unit compares the three-dimensional point cloud map with a scanning point cloud indicated by sensor values of the second external sensor in the autonomous travel, and determines, for the second external sensor, a deviation of the position and posture of the mobile apparatus from the origin of the three-dimensional point cloud map as a second position and posture of the node with reference to the position and posture of the mobile apparatus.
[0289] The integration unit integrates the first position and posture and the second position and posture.
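For the three-dimensional case of Aspect 8, a single Kabsch/SVD alignment step can stand in for scan matching against the point cloud map; point correspondences are assumed known in this sketch, whereas a real registration loop (such as ICP) would estimate them iteratively.

```python
# Illustrative single alignment step; correspondences are assumed known.
import numpy as np


def rigid_deviation(map_points: np.ndarray, scan_points: np.ndarray):
    """Rotation R and translation t such that R @ scan + t approximates map, i.e. the
    deviation of the apparatus from the origin of the three-dimensional point cloud map."""
    mu_m, mu_s = map_points.mean(axis=0), scan_points.mean(axis=0)
    H = (scan_points - mu_s).T @ (map_points - mu_m)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_m - R @ mu_s
    return R, t


map_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
scan_pts = map_pts - np.array([0.2, -0.1, 0.0])    # scan offset by a small translation
R, t = rigid_deviation(map_pts, scan_pts)
print(np.round(t, 3))                              # recovered deviation, approximately [0.2, -0.1, 0.0]
```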
[0290] The above-described embodiments are illustrative and do not limit the present invention. Thus, numerous additional modifications and variations are possible in light of the above teachings. For example, elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other within the scope of the present invention. Any one of the above-described operations may be performed in various other ways, for example, in an order different from the one described above.
[0291] The functionality of the elements disclosed herein may be implemented using circuitry or processing circuitry which includes general purpose processors, special purpose processors, integrated circuits, application specific integrated circuits (ASICs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), and/or combinations thereof which are configured or programmed, using one or more programs stored in one or more memories, to perform the disclosed functionality. Processors are considered processing circuitry or circuitry as they include transistors and other circuitry therein. In the disclosure, the circuitry, units, or means are hardware that carry out or are programmed to perform the recited functionality. The hardware may be any hardware disclosed herein or otherwise known which is programmed or configured to carry out the recited functionality.
[0292] There is a memory that stores a computer program which includes computer instructions. These computer instructions provide the logic and routines that enable the hardware (e.g., processing circuitry or circuitry) to perform the method disclosed herein. This computer program can be implemented in known formats as a computer-readable storage medium, a computer program product, a memory device, a recording medium such as a CD-ROM or a DVD, and/or the memory of an FPGA or an ASIC.