Cloud-based robotic control systems and methods
11460861 · 2022-10-04
Assignee
Inventors
CPC classification
B25J9/161
PERFORMING OPERATIONS; TRANSPORTING
H04L67/125
ELECTRICITY
B60W2556/45
PERFORMING OPERATIONS; TRANSPORTING
G05D1/0088
PHYSICS
H04L67/565
ELECTRICITY
B60W60/001
PERFORMING OPERATIONS; TRANSPORTING
H04L67/1001
ELECTRICITY
International classification
G05D1/00
PHYSICS
B60W60/00
PERFORMING OPERATIONS; TRANSPORTING
H04L67/1001
ELECTRICITY
H04L67/565
ELECTRICITY
H04L67/125
ELECTRICITY
Abstract
A method of controlling an operation of a robot (102) is provided using a cluster of nodes (104) in a network (106). The method includes receiving, using a gateway cloud driver (108), robot state information from the robot (102) via the network (106), and converting, using the gateway cloud driver (108), the robot state information into at least one message. The method further includes transmitting, using a message broker (110), the at least one message to the cluster of nodes (104) via the network (106). The method further includes processing, using the cluster of nodes (104) in the network (106), the at least one message by parallel computing, and generating, using the cluster of nodes (104) in the network (106), a robot command to control the operation of the robot (102).
Claims
1. A method of operating in parallel a plurality of heterogeneous robots using a cluster of nodes in a network, the method comprising: receiving, using a gateway cloud driver, robot state information from a given robot of the plurality of heterogeneous robots via the network; converting, using the gateway cloud driver, the robot state information into at least one message; transmitting, using a message broker, the at least one message to the cluster of nodes via the network; processing, using the cluster of nodes in the network, the at least one message by parallel computing, the processing being carried out in parallel with the processing of at least one message for at least one other robot of the plurality of heterogeneous robots; and generating, using the cluster of nodes in the network, a robot command to control the operation of the given robot.
2. The method of claim 1, further comprising including, in the robot state information, at least one of: odometry information, scan information, and pose information associated with the robot.
3. The method of claim 1, further comprising converting the robot state information into one or more custom-defined data types that can be processed by the message broker and the cluster of nodes.
4. The method of claim 1, further comprising defining, using the gateway cloud driver, at least one cloud channel between the gateway cloud driver and the message broker for transmitting the at least one message to the cluster of nodes.
5. The method of claim 4, wherein defining the at least one cloud channel comprises creating the at least one cloud channel based on a type of the robot state information.
6. The method of claim 5, wherein at least one message pertaining to robot state information of the same type for the at least one other robot of the plurality of robots is also transmitted over the cloud channel.
7. The method of claim 6, wherein each message pertaining to the robot state information of the same type is tagged with an identifier of the robot associated with the message.
8. The method of claim 1, wherein the at least one message of the given robot and the at least one message for the at least one other robot is processed by parallel computing in real time using the cluster of nodes in the network.
9. The method of claim 8, wherein performing the processing of the at least one message by parallel computing in real time comprises performing a scan matching process associated with the robot in parallel.
10. The method of claim 1, further comprising including, in the robot command, a motion command generated based on velocity obstacle information.
11. The method of claim 10, wherein including the motion command generated based on the velocity obstacle information comprises publishing the motion command having at least one of: velocity information and pose information.
12. The method of claim 1, further comprising converting the robot command into a robot operating system (ROS) message, and transmitting the ROS message to the robot using the gateway cloud driver to control the operation of the robot.
13. The method of claim 1, wherein the robot state information is generated by a device of the given robot and is in a format native to the device; wherein the gateway cloud driver applies a device driver to convert the robot state information in the native format to the at least one message.
14. The method of claim 13, wherein the converted message is processable by the cluster node independently of the format of the robot state information native to the device.
15. The method of claim 14, wherein the heterogeneous robots form a swarm of robots.
16. The method of claim 15, wherein the robot state information comprises at least one of odometry information, scan information, and pose information associated with the robot; and wherein the at least one message for the given robot is processed to avoid collision with any other one of the robots forming the swarm.
17. The method of claim 1, wherein the robot state information is generated by a first device of the given robot; wherein the gateway cloud driver applies a first device driver to convert the robot state information; and wherein the at least one message for the at least one other robot is generated by converting robot state information received from a second device of the at least one other robot using a second device driver of the gateway cloud driver, the first device being different from the second device and the first device driver being different from the second device driver.
18. The method of claim 1, wherein the robot state information is generated by a first device of the given robot; wherein the gateway cloud driver applies a device driver to convert the robot state information; and wherein the at least one message for the at least one other robot is generated by converting robot state information received from a second device of the at least one other robot, the first device being of the same type as the second device, and the device driver converting both the robot state information from the given robot and the robot state information received from the at least one other robot.
19. A system of controlling operation of a plurality of heterogeneous robots using a cluster of nodes in a network, comprising: a gateway cloud driver configured to receive robot state information from a given robot of the plurality of heterogeneous robots via the network, and configured to convert the robot state information into at least one message; a message broker configured to communicate the at least one message between the cluster of nodes and the gateway cloud driver via the network; and wherein the cluster of nodes in the network is configured to: process the at least one message by parallel computing, the processing being carried out in parallel with the processing of at least one message for at least one other robot of the plurality of heterogeneous robots; and generate a robot command to control the operation of the given robot.
20. The system of claim 19, wherein the robot state information includes, at least one of: odometry information, scan information, and pose information associated with the robot.
21. The system of claim 19, wherein the gateway cloud driver is configured to convert the robot state information into one or more custom-defined data types that can be processed by the message broker and the cluster of nodes.
22. The system of claim 19, wherein the gateway cloud driver is configured to define at least one cloud channel between the gateway cloud driver and the message broker for transmitting the at least one message to the cluster of nodes.
23. The system of claim 22, wherein the gateway cloud driver is configured to create the at least one cloud channel based on a type of the robot state information.
24. The system of claim 23, wherein at least one message pertaining to robot state information of the same type for the at least one other robot of the plurality of robots is also transmitted by the gateway cloud driver over the cloud channel.
25. The system of claim 24, wherein each message pertaining to the robot state information of the same type is tagged with an identifier of the robot associated with the message.
26. The system of claim 19, wherein the cluster of nodes in the network is configured to perform the processing of the at least one message by parallel computing in real time.
27. The system of claim 26, wherein the cluster of nodes in the network is configured to perform a scan matching process associated with the robot in parallel.
28. The system of claim 19, wherein the cluster of nodes in the network is configured to generate a motion command, as a part of the robot command, based on velocity obstacle information.
29. The system of claim 28, wherein the cluster of nodes in the network is configured to publish the motion command having at least one of: velocity information and pose information.
30. The system of claim 19, wherein the gateway cloud driver is configured to convert the robot command into a robot operating system (ROS) message, and transmit the ROS message to the robot to control the operation of the robot.
31. The system of claim 19, wherein the robot state information is generated by a device of the given robot and is in a format native to the device; wherein the gateway cloud driver applies a device driver to convert the robot state information in the native format to the at least one message.
32. The system of claim 31, wherein the converted message is processable by the cluster node independently of the format of the robot state information native to the device.
33. The system of claim 32, wherein the heterogeneous robots form a swarm of robots.
34. The system of claim 33, wherein the robot state information comprises at least one of odometry information, scan information, and pose information associated with the robot; and wherein the at least one message for the given robot is processed to avoid collision with any other one of the robots forming the swarm.
35. The system of claim 19, wherein the robot state information is generated by a first device of the given robot; wherein the gateway cloud driver applies a first device driver to convert the robot state information; and wherein the at least one message for the at least one other robot is generated by converting robot state information received from a second device of the at least one other robot using a second device driver of the gateway cloud driver, the first device being different from the second device and the first device driver being different from the second device driver.
36. The system of claim 19, wherein the robot state information is generated by a first device of the given robot; wherein the gateway cloud driver applies a device driver to convert the robot state information; and wherein the at least one message for the at least one other robot is generated by converting robot state information received from a second device of the at least one other robot, the first device being of the same type as the second device, and the device driver converting both the robot state information from the given robot and the robot state information received from the at least one other robot.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The embodiments will be more readily understood in view of the following description when accompanied by the below figures and wherein like reference numerals represent like elements, wherein:
(25) Corresponding reference characters indicate corresponding parts throughout the several views. Although the drawings represent embodiments of the present disclosure, the drawings are not necessarily to scale and certain features may be exaggerated in order to better illustrate and explain the present disclosure. The exemplifications set out herein illustrate an exemplary embodiment of the disclosure, in one form, and such exemplifications are not to be construed as limiting the scope of the disclosure in any manner.
DETAILED DESCRIPTION OF EMBODIMENTS
(26) Embodiments of the present disclosure are described below by way of example only, with reference to the accompanying drawings. Further, the following description is merely exemplary in nature and is in no way intended to limit the disclosure, its application, or uses. As used herein, the term “module” or “unit” may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor or microprocessor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. Thus, while this disclosure includes particular examples and arrangements of the modules, the scope of the present system should not be so limited since other modifications will become apparent to the skilled practitioner.
(27) Local Collision Avoidance for Non-Holonomic Robots
(28) Referring now to
(29) Current local collision avoidance methods are mainly based on the Velocity Obstacle (VO) theory. VOs are regions in velocity space; if a robot's velocity points into one of these regions, the robot will collide with another robot after some time. A diagram of VO is shown in
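The VO collision test just described can be reduced to a simple geometric check: a relative velocity lies inside the VO if the straight-line relative motion passes within the combined radius of the two robots. The sketch below is a minimal illustration of that idea for circular footprints only; class and method names are assumptions, not from the present disclosure.

```java
// Minimal Velocity Obstacle membership test for circular robots.
public class VOCheck {
    /** Returns true if relative velocity (vx, vy) lies inside the VO
     *  induced by an obstacle at relative position (px, py) with
     *  combined radius r, i.e. straight-line motion would collide. */
    static boolean inVelocityObstacle(double vx, double vy,
                                      double px, double py, double r) {
        double v2 = vx * vx + vy * vy;
        if (v2 == 0) return false;              // standing still never enters the cone
        double t = (px * vx + py * vy) / v2;    // ray parameter of closest approach
        if (t <= 0) return false;               // moving away from the obstacle
        double cx = t * vx - px, cy = t * vy - py;
        return cx * cx + cy * cy < r * r;       // closest approach inside combined disk
    }

    public static void main(String[] args) {
        // Obstacle 5 m ahead on the x-axis, combined radius 1 m.
        System.out.println(inVelocityObstacle(1, 0, 5, 0, 1));   // heading straight at it
        System.out.println(inVelocityObstacle(0, 1, 5, 0, 1));   // heading perpendicular
    }
}
```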
(30) All these methods assume that the robot can reach any velocity in the velocity space, that localization information is one hundred percent accurate, and that the robot footprint is circular. However, a real robot cannot satisfy such prerequisites, so further constraints need to be attached to those VOs. These include kinematic constraints such as acceleration and maximum velocity limits, non-holonomic constraints for differential-drive robots, and localization uncertainty. When considering localization uncertainty, the robot footprint needs to be expanded so that it covers the uncertainty from localization and guarantees that the calculated velocity is valid even if localization is not accurate. Simply using a circular footprint with an extended radius may exclude possible valid velocities, so a convex hull footprint calculated from the Minkowski sum of robots and obstacles is introduced into the VO calculation. The calculation of the convex hull footprint for a robot is highly computation-intensive and may take around 50 percent of the total computation time.
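As a hedged sketch of the footprint expansion just described: for two convex polygons, the Minkowski sum can be computed naively as the convex hull of all pairwise vertex sums. The present disclosure does not give its implementation, so all names below are illustrative, and the hull routine is the standard monotone-chain algorithm.

```java
// Sketch: expanded footprint as the Minkowski sum of two convex polygons,
// computed as the convex hull of all pairwise vertex sums.
import java.util.*;

public class MinkowskiFootprint {
    static double[][] minkowskiSum(double[][] a, double[][] b) {
        List<double[]> pts = new ArrayList<>();
        for (double[] p : a)
            for (double[] q : b)
                pts.add(new double[]{p[0] + q[0], p[1] + q[1]});
        return convexHull(pts);
    }

    /** Andrew's monotone-chain convex hull (collinear points dropped). */
    static double[][] convexHull(List<double[]> pts) {
        pts.sort((p, q) -> p[0] != q[0] ? Double.compare(p[0], q[0])
                                        : Double.compare(p[1], q[1]));
        int n = pts.size();
        double[][] h = new double[2 * n][];
        int k = 0;
        for (int i = 0; i < n; i++) {                  // lower hull
            while (k >= 2 && cross(h[k - 2], h[k - 1], pts.get(i)) <= 0) k--;
            h[k++] = pts.get(i);
        }
        for (int i = n - 2, t = k + 1; i >= 0; i--) {  // upper hull
            while (k >= t && cross(h[k - 2], h[k - 1], pts.get(i)) <= 0) k--;
            h[k++] = pts.get(i);
        }
        return Arrays.copyOf(h, k - 1);                // last point repeats the first
    }

    static double cross(double[] o, double[] a, double[] b) {
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0]);
    }

    public static void main(String[] args) {
        double[][] square = {{0, 0}, {1, 0}, {1, 1}, {0, 1}};
        // Sum of a unit square with itself is a 2x2 square: 4 hull vertices.
        System.out.println(minkowskiSum(square, square).length);
    }
}
```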
(31) Once all VOs are obtained from the velocity space, an optimization algorithm is needed to select the optimal velocity from the areas outside all VOs. There are three key methods for collision-free velocity selection: the Optimal Reciprocal Collision Avoidance (ORCA) method, the Clear Path method, and the sampling-based method. The Clear Path method has relatively better overall performance in real-world experiments.
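Of the three selection methods listed above, the sampling-based one is the simplest to illustrate: sample candidate velocities and keep the admissible one closest to the preferred velocity. The sketch below uses that method rather than Clear Path, which the disclosure actually adopts; the VO test is abstracted as a predicate and all names are illustrative.

```java
// Sampling-based collision-free velocity selection (a stand-in for
// Clear Path); "forbidden" abstracts membership in the union of VOs.
import java.util.*;
import java.util.function.BiPredicate;

public class VelocitySelect {
    /** Returns the sampled velocity closest to (prefX, prefY) for which
     *  forbidden.test(vx, vy) is false, or null if every sample is forbidden. */
    static double[] pickVelocity(double prefX, double prefY, double vMax,
                                 int samples, BiPredicate<Double, Double> forbidden) {
        double[] best = null;
        double bestDist = Double.MAX_VALUE;
        Random rng = new Random(42);            // fixed seed keeps the sketch repeatable
        for (int i = 0; i < samples; i++) {
            double ang = rng.nextDouble() * 2 * Math.PI;
            double mag = rng.nextDouble() * vMax;
            double vx = mag * Math.cos(ang), vy = mag * Math.sin(ang);
            if (forbidden.test(vx, vy)) continue;   // inside some VO
            double d = Math.hypot(vx - prefX, vy - prefY);
            if (d < bestDist) { bestDist = d; best = new double[]{vx, vy}; }
        }
        return best;
    }

    public static void main(String[] args) {
        // Forbid everything moving in +x: the chosen velocity must have vx <= 0.
        double[] v = pickVelocity(1.0, 0.0, 1.0, 1000, (vx, vy) -> vx > 0);
        System.out.println(v[0] <= 0);
    }
}
```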
(32) Taking all the considerations above into account, an algorithm is developed that can control robots in the real world to avoid each other more effectively. However, such an algorithm requires at least a laptop to run. In swarm robotics the number of robots can reach about one hundred or more, and equipping each robot with a laptop greatly increases both the cost and the size of the robot; moreover, the power consumption of a laptop reduces robot running time.
(33) To utilize the algorithm while reducing these side effects, one effective approach is to offload the algorithm's computation into a Cloud environment and connect the robot through a wireless network. In the present disclosure, an algorithm is implemented which uses a convex hull footprint for VO calculation, considers all of the aforementioned constraints, and utilizes the Clear Path method for optimal velocity computation. The following paragraphs give details about the Cloud platform and illustrate the implementation of the algorithm.
(34) IoT Cloud Architecture
(35) Referring now to
(36) The Front-end Gateway Layer is responsible for connecting devices or robots 102 with the Message broker 110. As the IoT Cloud is designed to serve heterogeneous devices, it needs a component to record specific information about the devices and the mapping between message broker channels and native device data channels. Such a component is the Gateway. All devices are connected through the Gateways, and below the Gateways are IoT Cloud device drivers which convert device data into messages that Cloud services can process. The Cloud device drivers first get data from the devices and then call IoT Cloud APIs to send the converted data into the Cloud. The Gateway maintains connections between devices and the Cloud. At the same time, a Gateway master coordinates multiple Gateways and registers connection information, such as the data channel mapping between the message broker and the devices, so that the Middle layer can discover devices and provide service entries.
(37) The Stream Processing layer handles real-time data processing. This layer uses Apache Storm as the computation engine. Storm is a distributed real-time stream processing system that can process up to one million Tuples (the data type processed in Storm) per second. It is very suitable for processing stream data from numerous smart devices. Storm gets source data from one of its components, called a Spout, and then sends the data to a processing component called a Bolt. Spouts and Bolts are arranged as nodes in a graph and are connected by streams which resemble the edges of the graph. Such a stream processing workflow is called a Topology. To use the IoT Cloud service, an application Topology should be developed first. Since the data input and output of the application Topology are closely related to devices, the IoT Cloud platform provides APIs to build custom input Spouts and output Bolts. As mentioned before, the Gateway layer is responsible for maintaining the connections of Spouts and Bolts to the message broker by writing connection information to Zookeeper. To use the real-time stream processing service in this layer, data from devices should be sent to the correct message broker channels, which are connected to the input Spouts of an application Topology via IoT Cloud device drivers. Then, by subscribing to the channels that connect to the output Bolts of the application Topology, results can be fetched in real time. Such a data processing paradigm is very suitable for robot control. Most of the work in the present disclosure focuses on designing and implementing an application Topology and its corresponding IoT Cloud driver for robot collision avoidance. The bulk of the computation required for the collision avoidance algorithm is shifted to this layer. Once an application is deployed into the IoT Cloud, it can provide services to a large number of devices as long as they have the correct IoT Cloud drivers. By deploying multiple instances of the application or increasing the number of computation nodes for the application, data processing capacity can be scaled accordingly.
(38) The Batch/Storage Layer stores data from Stream Processing Middle Layer and provides Batch Processing/Data Mining services for the static data from various distributed databases. Since the present disclosure mainly works on real-time data processing, this layer will not be used.
(39) Implementation of the Collision Avoidance Algorithm
(40) Referring now to
(41) The IoT Cloud driver 108 subscribes to ROS topics to get the robot state, such as odometry, laser scans and so on, and then converts the data into messages that can be transmitted through a broker 110 to the Cloud. After the data processing procedure, the IoT Cloud 104 sends back velocity commands through the message broker. The IoT Cloud driver converts the velocity command message into a ROS message, which is then sent to the correct ROS topic so that the robot can be controlled. The present disclosure uses ROS as a device driver just for demonstration. Besides ROS, other device-specific drivers can also be used, as long as they provide APIs for data retrieval.
(42) The IoT Cloud computation engine, together with the message broker servers, is deployed on the FutureGrid Cloud platform. The complicated collision avoidance algorithm is implemented as a Storm Topology running in the computation engine. The message broker of the IoT Cloud relays data from the IoT Cloud driver and feeds it into the Spouts of the control Topology. Once the velocity command is calculated, the Topology sends the command through an output Bolt to the message broker, which then relays the message back to the Cloud driver, and then to the robot.
(43) IoT Cloud Driver for Collision Avoidance
(44) Referring now to
(45) The next thing that an IoT Cloud driver 108 needs to do is define IoT Cloud channels for those ROS topics. For an IoT Cloud application Topology, each input Spout or output Bolt is connected to a predefined IoT Cloud channel according to the application and the message broker. All data are transmitted through these channels. While the number of Spouts and Bolts in an application Topology cannot be changed, the number of robots connected to the Cloud may vary from time to time. So IoT Cloud channels should be defined according to the robot information types rather than the robot entities. Thus, the IoT Cloud driver 108 creates an IoT Cloud channel for each information type and publishes converted messages to the corresponding Cloud channel. To distinguish the messages sent from different robots 102, a unique robot ID generated by the Cloud driver 108 is attached to each message. The application Topology gets the correct robot state according to the ID. However, the Bolt of the application Topology that publishes velocity commands back to the robots 102 publishes all commands for the different robots into one IoT Cloud channel. It is the Gateway that creates a command queue for each Cloud driver instance and sends each message to the correct queue according to the robot ID attached to the message.
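The tagging and demultiplexing scheme just described can be sketched in plain Java: commands arrive on a single channel, each tagged with a robot ID, and the Gateway routes them to per-robot queues. The class and field names below are illustrative assumptions, not the disclosure's actual implementation.

```java
// Sketch: one command channel in, per-robot command queues out,
// routed by the robot ID attached to each message.
import java.util.*;

public class GatewayDemux {
    static class Message {
        final String robotId, type, payload;
        Message(String robotId, String type, String payload) {
            this.robotId = robotId; this.type = type; this.payload = payload;
        }
    }

    // One command queue per connected Cloud-driver instance, keyed by robot ID.
    final Map<String, Deque<Message>> commandQueues = new HashMap<>();

    void registerRobot(String robotId) {
        commandQueues.put(robotId, new ArrayDeque<>());
    }

    /** All commands arrive on one channel; route by the attached robot ID. */
    void onCommand(Message m) {
        Deque<Message> q = commandQueues.get(m.robotId);
        if (q != null) q.add(m);                 // unknown IDs are dropped here
    }

    public static void main(String[] args) {
        GatewayDemux gw = new GatewayDemux();
        gw.registerRobot("r1");
        gw.registerRobot("r2");
        gw.onCommand(new Message("r2", "velocity", "vx=0.3,w=0.1"));
        System.out.println(gw.commandQueues.get("r2").size()); // 1
        System.out.println(gw.commandQueues.get("r1").size()); // 0
    }
}
```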
(46) Topology Design
(47) Referring now to
(48) The algorithm runs as a control loop which executes periodically. A single loop starts by collecting the robot's newest state information: getting obstacles from the laser scan, calculating the convex hull footprint from the pose array, getting neighbors from pose share messages, and extracting the velocity and pose of the robot from the odometry. The next step is to calculate the preferred velocity from a global plan. The present disclosure implements a very simple global planner that generates a straight path consisting of a number of way points from the start to the goal. If the robot has already reached the goal position and only needs to adjust its heading, a stopping or rotating velocity command is sent to the robot directly. Otherwise, the algorithm updates the robot's position and its neighbors' positions according to the predefined control period. With all the information up to date, VO lines from different aspects are calculated, including those from neighbors, obstacles and various constraints, such as the aforementioned kinematic and non-holonomic constraints. Once all VO lines are obtained, the optimal velocity closest to the preferred velocity is selected using the Clear Path algorithm. Finally, the application checks the validity of the newly computed velocity. If it is valid, the velocity is sent to the robot; otherwise the application tries the next way point and calculates a new velocity.
(49) To implement the whole application as a Storm Topology, the Spouts and Bolts that connect the Topology to the message broker should be designed first. As there are five types of information that need to be uploaded into the Topology, five Spouts need to be defined: an odometry receiver Spout, a scan receiver Spout, a pose array receiver Spout, a configuration Spout and a pose share receiver Spout. The first three Spouts are used to get robot state information; the configuration Spout receives basic parameters of the robot, such as control frequency, acceleration limits, maximum velocity, start pose and goal pose; and the pose share receiver Spout is responsible for feeding information about all neighbors into the Topology. Two Bolts are required for publishing the computed velocity command and the pose share messages to the message broker, respectively. As the algorithm needs the neighbors' information, all robots in the scene should publish their state to a common IoT Cloud channel periodically so that they can share their newest state with each other. All of these Spouts and Bolts are defined in a configuration file, and the IoT Cloud platform automatically generates them according to this file.
(50) Referring now to
(51) All three Topologies are implemented in Java. Topology A integrates all components into one Bolt 600. As all messages are fed into this single Bolt, the Bolt is so busy dealing with new messages that the overall delay of the velocity command is high. Topology C implements each component as a separate Bolt, and even calculates the different types of VO lines in parallel. However, robot state information has to be transmitted between several Bolts, and the serialization/deserialization process, along with the communication delay between computation nodes, consumes much more time than is saved by parallel computing. So the overall delay of Topology C is also very high. Reviewing the performance metrics of Topologies A and C shows that the Bolts which process input messages from Spouts require much less computation than the Bolts that calculate VO lines and velocity commands. So, in Topology B, all components that process robot state information are combined into one Bolt and the other components that calculate VO lines and velocities are wrapped into another Bolt. Such a design reduces the delays caused by data transmission between Bolts and, at the same time, isolates message processing from the main collision avoidance algorithm.
(52) Besides those Spouts and Bolts that interact with the message broker, there are five more components in Topology B. To utilize the collision avoidance control service, a robot should first send its parameters and its start and goal poses to the Global Planner Bolt through the Configuration Spout. The Global Planner Bolt will then do the following jobs:
(53) a) Make a global path plan according to the start and goal Poses.
(54) b) Spawn a custom-defined Java Object called Agent that contains the robot parameters, some algorithm-related state variables, and the global plan generated before, and send it to the Velocity Compute Bolt.
(55) c) Spawn a Control-Publish Time State Object that contains the control period and the pose share period, as well as two variables to record the last time that the robot was controlled/published, respectively. In addition, there are two Boolean variables that record whether the Topology is currently calculating a velocity or publishing a pose share message. This Object is sent to the Dispatcher Bolt, which triggers the control or pose share processes according to the given period.
(56) d) Spawn a Pose Share Message Object that contains basic information to be shared. This object is sent to the Agent State Bolt for robot pose sharing.
(57) Each of the three Objects spawned by the Global Planner Bolt is stored in a <robot ID, Object> Hash Map in the destination Bolt.
(58) The Dispatcher Bolt checks the Control-Publish Time State Objects stored in the Bolt instance for controlling or pose sharing. Every 10 ms it receives a Tuple from the Timer Spout, which triggers the Dispatcher Bolt to check whether it needs to emit a new Tuple to the Agent State Bolt to start a new controlling/publishing loop.
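The 10 ms dispatch check above can be sketched as follows. This is a plain-Java stand-in for the Storm Bolt: emitted Tuples are replaced by a list of strings, and the class and field names are illustrative assumptions.

```java
// Sketch of the Dispatcher's timing check: on every timer tick,
// compare elapsed time against each robot's control and pose-share
// periods and record which loops should be triggered.
import java.util.*;

public class Dispatcher {
    static class TimeState {
        final long controlPeriodMs, sharePeriodMs;
        long lastControlMs, lastShareMs;        // last trigger times, initially 0
        TimeState(long c, long s) { controlPeriodMs = c; sharePeriodMs = s; }
    }

    final Map<String, TimeState> states = new HashMap<>();
    final List<String> triggered = new ArrayList<>();  // stand-in for emitted Tuples

    /** Called on every timer Tuple; nowMs is the current time. */
    void onTick(long nowMs) {
        for (Map.Entry<String, TimeState> e : states.entrySet()) {
            TimeState st = e.getValue();
            if (nowMs - st.lastControlMs >= st.controlPeriodMs) {
                st.lastControlMs = nowMs;
                triggered.add(e.getKey() + ":control");
            }
            if (nowMs - st.lastShareMs >= st.sharePeriodMs) {
                st.lastShareMs = nowMs;
                triggered.add(e.getKey() + ":poseShare");
            }
        }
    }

    public static void main(String[] args) {
        Dispatcher d = new Dispatcher();
        d.states.put("r1", new TimeState(100, 200));  // 100 ms control, 200 ms share
        for (long t = 10; t <= 200; t += 10) d.onTick(t); // 10 ms timer Tuples
        System.out.println(d.triggered);
    }
}
```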
(59) The Agent State Bolt implements modules that collect up-to-date robot information as shown in
(60) The Velocity Compute Bolt contains all other modules for velocity command calculation. After a new Agent State Object is received, this Bolt will select the correct Agent Object from the Hash Map that stores Agent Objects from Global Planner Bolt and then execute step 2 to step 6 in
(61) As mentioned before, some of the Bolts may cache runtime information about a robot. However, each Bolt can run multiple instances in parallel, and making sure the proper Tuple is sent to the Bolt instance that caches the right robot's information is very important to the processing logic. In Apache Storm, the organization of connections between instances of different connected Bolts is called Grouping. Here, each Tuple, except the one emitted from the Timer Spout, is attached with a robot ID Field, and Field Grouping based on the ID Field is used. However, all Dispatcher Bolt instances need the Timer Spout's periodic output as a time reference. Also, the Agent State Bolt instances need to cache every robot's pose share information so that they can extract all neighbors for each robot. As such, these two components use All Grouping.
(62) The Topology shown in
(63) Experiment and Results
(64) Referring now to
(65) Apache Storm and the RabbitMQ message broker are deployed in the FutureGrid Cloud Platform, while the IoT Cloud Gateway is deployed with the Cloud driver, ROS and the simulator on a local desktop computer. The simulator publishes the information of each robot to ROS, and the IoT Cloud driver then converts those ROS messages into the custom-defined Java Objects used in the Topology. The configuration of the system is presented in Table 1.
(66) TABLE 1: Hardware Configuration of the System

                       VMs in Cloud           Local host
  CPU Model            Intel Core i7 9xx      Intel Core i7-2620M
  CPU Frequency/MHz    2931.436               2701
  Cores                4                      2
  Threads per core     1                      2
  Memory/MB            8192                   7900
  OS                   Ubuntu 12.04.4         Ubuntu 12.04.5
                       (Linux 3.2.0)          (Linux 3.13.0)
  Hypervisor           KVM                    None
(67) The main task of this experiment is to test the availability of the application for large-scale robot control. Since no SLAM (Simultaneous Localization and Mapping) module is developed in Simbad and a localization pose array cannot be generated, Gaussian noise is added to the robot pose to create a synthetic localization pose array for the test. The pose array is published at a frequency of 10 Hz and the other information is published at a frequency of 20 Hz.
(68) Local tests show that the most computation-intensive component is the Velocity Compute Bolt, so velocity command delays are measured for different numbers of robots with different parallelism hints for the Velocity Compute Bolt. As shown in Table 1, there are 5 computation nodes with 20 cores in the cluster. To make sure each Bolt instance runs in parallel, the maximum number of parallel instances for the Velocity Compute Bolt is limited to 5, while for the Agent State Bolt it is set to 2, to see whether increased parallelization of the Agent State Bolt brings better performance. The other components in the application Topology have one instance each. Also, to make sure the computation load is evenly distributed between the instances, the Field Grouping strategy is replaced by a custom-defined Mod Grouping, and a sequential index value from 1 to the maximum number of robots is assigned to each robot. This index is attached to all messages, and Mod Grouping uses the index modulo the number of target Bolt instances to select which instance or task a message is sent to.
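The Mod Grouping rule above amounts to a single modulo operation; the short sketch below (with illustrative names) shows that sequential robot indices spread evenly over the Bolt instances.

```java
// Sketch of Mod Grouping: target Bolt instance = robot index mod
// number of instances, giving an even load distribution.
public class ModGrouping {
    static int targetInstance(int robotIndex, int numInstances) {
        return robotIndex % numInstances;
    }

    public static void main(String[] args) {
        int[] load = new int[5];                  // 5 Velocity Compute instances
        for (int robot = 1; robot <= 50; robot++) // robots indexed 1..50
            load[targetInstance(robot, 5)]++;
        for (int l : load) System.out.print(l + " "); // 10 robots per instance
    }
}
```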
(69) Referring now to
(71) NPC is the parallelism (number of parallel instances) of the Velocity Compute Bolt and NPS is the parallelism of the Get Robot State Bolt. The first column of the figures shows the command delays and collision times of robots with NPC=5, NPS varying from 1 to 2, and the maximum NR (Number of Robots) in the test set to 50. The rest of the figures show test results with NPC varying from 1 to 4, NPS varying from 1 to 2, and the maximum NR in the test being 30.
(73) All results in
(74) Performance Test
(75) Referring now to
(76) Robots are placed in a square array, all facing the origin of the coordinate system, and the antipodal position is set as the goal. If the number of robots in a row/column is odd, all robots are shifted slightly so that the start position and the goal position are not the same for any robot. Additionally, to keep the scenario dense throughout the test, the control command is not executed by the robots, so the robots do not move during the test.
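The benchmark scenario above might be generated as follows. This is a sketch under stated assumptions: the grid is centered at the origin, each goal is the negation of the start, and the odd-count shift is a uniform quarter-spacing offset (the exact offset is not specified in the text).

```java
// Sketch of the dense benchmark scenario: n*n robots on a grid centered at
// the origin, each with the antipodal point as its goal. For odd n, every
// robot is shifted by an assumed quarter-spacing so start and goal never
// coincide (otherwise the center robot would already be at its goal).
public class SquareScenario {
    // Returns [robot][0]=start {x,y}, [robot][1]=goal {x,y}.
    public static double[][][] startsAndGoals(int n, double spacing) {
        double shift = (n % 2 == 1) ? spacing / 4.0 : 0.0;
        double[][][] result = new double[n * n][2][2];
        int idx = 0;
        for (int r = 0; r < n; r++) {
            for (int c = 0; c < n; c++) {
                double x = (c - (n - 1) / 2.0) * spacing + shift;
                double y = (r - (n - 1) / 2.0) * spacing + shift;
                result[idx][0] = new double[]{x, y};    // start position
                result[idx][1] = new double[]{-x, -y};  // antipodal goal
                idx++;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        for (double[][] sg : startsAndGoals(3, 1.0)) {
            System.out.printf("start (%.2f, %.2f) -> goal (%.2f, %.2f)%n",
                    sg[0][0], sg[0][1], sg[1][0], sg[1][1]);
        }
    }
}
```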
(77) Previous results show that, except for the Velocity Compute Bolt, the components in the Topology each carry a load of less than 5%, so there are still many computation resources available in the cluster, and increasing NPC to a reasonable number larger than 5 will not affect the test. Here NPC is set to 6, 8 and 10, with NPS varying from 1 to 2.
(78) Referring now to
(79) Referring now to
(80) In the pose share loop as shown in
(81) Referring now to
(82) Since the number of parallel instances for the Velocity Compute Bolt dominates the overall control delay for each robot, it is important to analyze the relationship between NPC and the maximum number of robots that the application can serve while keeping command latency close to the preset control period. To do so, the relationship between the overall control latency and the number of running robots should be analyzed first. Then, the maximum number of robots that can be served with control latency around the preset control period can be determined.
(83) t=k*NR+b, t>T+ΔT (1)
(84) where t is the overall control delay, NR is the number of robots, k and b are coefficients of the linear function, and ΔT is the execution period of the Time Spout. The reason for selecting t>T+ΔT is that only under this condition does the instance run at full load, which reflects the computation capacity of the instance.
(85) Using linear regression analysis, k and b can be calculated, and the fitted curve is shown as the blue dotted line in
k*NR+b≤T (2)
(86) TABLE 2. Maximum Number of Robots that the Application can Serve with Different NPCs

  NPC   NPS     k        b      n*
   6     1    5.09    −68.2     23
   6     2    5.29    −74.57    23
   8     1    4.02    −61.07    27
   8     2    4.17    −62.11    26
  10     1    3.46    −57.04    30
  10     2    3.47    −57.43    30
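The n* column can be reproduced from the fitted coefficients: the maximum number of robots is the largest NR satisfying equation (2), i.e. ⌊(T − b)/k⌋. The preset control period T is not restated in this excerpt; assuming T = 50 ms is consistent with every row of Table 2, as the sketch below shows.

```java
// Sketch: maximum number of robots n* from the linear latency model,
// i.e. the largest NR with k*NR + b <= T (equation (2)).
// The control period T = 50 ms is an assumption consistent with Table 2.
public class MaxRobots {
    public static int maxRobots(double k, double b, double T) {
        return (int) Math.floor((T - b) / k);
    }

    public static void main(String[] args) {
        double[][] rows = { {5.09, -68.2}, {5.29, -74.57}, {4.02, -61.07},
                            {4.17, -62.11}, {3.46, -57.04}, {3.47, -57.43} };
        for (double[] row : rows) {
            System.out.println(maxRobots(row[0], row[1], 50.0));
        }
    }
}
```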
(87) In applications and platforms that involve wide-area network communication, the data transmission latency is an overhead that cannot be ignored. This has been demonstrated in
(88) Lastly, data transmission delay within the Cloud can also be tricky. In Storm, tasks are currently distributed by the Nimbus node and cannot be assigned by program or manually by commands. Therefore, connected tasks distributed across different machines will suffer longer communication delays than those placed on the same machine. These are problems that need to be explored in the future.
(89) The present disclosure provides a novel Cloud-based computation framework for Swarm robotics. To demonstrate the viability of the framework for real-time control of a large number of robots, a local collision avoidance algorithm is implemented as a Cloud application. Unlike other research that tries to minimize computation cost by ignoring important real-world factors, our setup adopts a complex algorithm that reflects more details of the real-world scenario. These computation-intensive tasks are transferred to the Cloud, so that robots or other intelligent entities can be simplified in both hardware and software. By offloading computation to the IoT Cloud, more complex entities can be studied in Swarm Robotics/Intelligence without trimming off the details of the entity. Such a precise implementation of the entity model can lead to deeper insight into swarm characteristics.
(90) By implementing and testing the collision avoidance algorithm on the IoT Cloud platform, the scaling and real-time control abilities of the system are verified. The results demonstrate that by simply scaling the computation resources, one IoT Cloud application can provide service for more robots. Such features will greatly facilitate extending the population in Swarm robotics and also provide support for large-scale Swarm systems.
(93) IoTCloud Framework
(94) Referring now to
(95) In general, a real-time application running in a DSPF can be modeled as a directed graph with streams defining the edges and processing tasks defining the nodes. A stream is an unbounded sequence of events flowing through the edges of the graph and each such event consists of a chunk of data. The processing tasks at the nodes consume input streams and produce output streams. A DSPF provides the necessary API and infrastructure to develop and execute applications on a cluster of nodes. In Storm these tasks are called Spouts and Bolts. To connect a device to the cloud services, a user develops a gateway application that connects to the device's data stream. Once an application is deployed in an IoTCloud gateway 108, the cloud applications discover those applications and connect to them for data processing using the discovery service.
(96) Design of Robot Applications
(97) Referring now to
(98) RBPF SLAM Algorithm
(99) Assume a series of laser readings z.sub.1:t=(z.sub.1, . . . , z.sub.t) over time, as well as a set of odometry measurements u.sub.1:t-1=(u.sub.1, . . . , u.sub.t-1) from the robot. Our goal is to estimate both a map m of the environment and the trajectory of the robot, x.sub.1:t=(x.sub.1, . . . , x.sub.t). For any time t, samples can be drawn from the posterior probability,
p(x.sub.1:t,m|z.sub.1:t,u.sub.1:t-1)=p(x.sub.1:t|z.sub.1:t,u.sub.1:t-1)p(m|x.sub.1:t,z.sub.1:t), (1)
(100) by sampling from the first term on the right-hand side to produce an estimate of the robot's trajectory given just the observable variables, and then sampling from the second term to produce an estimate of the map using that sampled trajectory. The particle filter maintains a set of particles, each including a possible map of the environment and a possible trajectory of the robot, along with a weight which can be thought of as a confidence measure. A standard implementation of the algorithm executes the following steps for each particle i:
(101) 1. Make an initial estimate of the position of the robot at time t, using the estimated position at time t−1 and odometry measurements, i.e.
x′.sub.t.sup.i=x′.sub.t-1.sup.i⊕u.sub.t-1
where ⊕ is a pose compounding operator. The algorithm also incorporates the motion model of the robot when computing this estimate.
2. Use the ScanMatching algorithm shown in Algorithm 1 with cutoff of ∞ to refine x′.sub.t.sup.i using the map m.sub.t-1.sup.i and laser reading z.sub.t. If the ScanMatching fails, use the previous estimate.
3. Update the weight of the particle.
4. The map m.sub.t.sup.i of the particle is updated with the new position x.sub.t.sup.i and z.sub.t.
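The pose-compounding operator ⊕ used in step 1 can be written out explicitly for planar poses (x, y, θ): the odometry increment, expressed in the robot frame, is rotated into the world frame and added, and the headings accumulate. A minimal sketch (the motion-model noise mentioned above is omitted):

```java
// Sketch of the pose compounding operator p ⊕ u for planar poses:
// rotate the body-frame increment u into the world frame of p, then add;
// headings simply accumulate, wrapped back into (-pi, pi].
public class PoseCompound {
    public static double[] compound(double[] p, double[] u) {
        double c = Math.cos(p[2]), s = Math.sin(p[2]);
        double x = p[0] + c * u[0] - s * u[1];
        double y = p[1] + s * u[0] + c * u[1];
        double th = Math.atan2(Math.sin(p[2] + u[2]), Math.cos(p[2] + u[2]));
        return new double[]{x, y, th};
    }

    public static void main(String[] args) {
        // A robot at the origin facing +y, driven 1 m forward, ends at (0, 1).
        double[] r = compound(new double[]{0, 0, Math.PI / 2}, new double[]{1, 0, 0});
        System.out.printf("(%.3f, %.3f, %.3f)%n", r[0], r[1], r[2]);
    }
}
```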
(102) Algorithm 1: Scan Matching
  input:  pose u and laser reading z
  output: bestPose and l
   1  steps ← 0; l ← −∞; bestPose ← u; delta ← InitDelta;
   2  currentL ← likelihood(u, z);
   3  for i ← 1 to nRefinements do
   4      delta ← delta/2;
   5      repeat
   6          pose ← bestPose; l ← currentL;
   7          for d ← 1 to K do
   8              xd ← deterministicsample(pose, delta);
   9              localL ← likelihood(xd, z);
  10              steps += 1;
  11              if currentL < localL then
  12                  currentL ← localL; bestPose ← xd;
  13              end
  14          end
  15      until l < currentL and steps < cutoff;
  16  end
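Structurally, Algorithm 1 is a hill climb over pose space with a halving step size. The generic sketch below captures that structure against an arbitrary likelihood function; the deterministic sampler here simply probes ± delta along each pose dimension, which is one simple choice among several and an assumption of this sketch.

```java
import java.util.function.ToDoubleFunction;

// Sketch of Algorithm 1's structure: hill-climb a likelihood function around
// an initial pose, halving the step size nRefinements times. The inner loop
// repeats while any +/- delta probe along some dimension improves the
// likelihood, up to a total probe budget (cutoff).
public class ScanMatchSketch {
    public static double[] match(ToDoubleFunction<double[]> likelihood,
                                 double[] init, double initDelta,
                                 int nRefinements, int cutoff) {
        double[] bestPose = init.clone();
        double currentL = likelihood.applyAsDouble(bestPose);
        double delta = initDelta;
        int steps = 0;
        for (int i = 0; i < nRefinements; i++) {
            delta /= 2;
            double l;
            do {
                l = currentL;
                double[] pose = bestPose.clone();
                for (int d = 0; d < pose.length; d++) {
                    for (int sign = -1; sign <= 1; sign += 2) {
                        double[] cand = pose.clone();
                        cand[d] += sign * delta;   // deterministic +/- probe
                        double localL = likelihood.applyAsDouble(cand);
                        steps++;
                        if (localL > currentL) { currentL = localL; bestPose = cand; }
                    }
                }
            } while (currentL > l && steps < cutoff);
        }
        return bestPose;
    }
}
```

In the real algorithm the likelihood scores a candidate pose against the map and laser reading; here any function of the pose can stand in for it.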
(103) After updating each particle, the algorithm normalizes the weights of all particles based on the total sum of squared weights, and then resamples by drawing particles with replacement with probability proportional to the weights. Resampled particles are used with the next reading. At each reading the algorithm takes the map associated with the particle of highest weight as the correct map. The computation time of the algorithm depends on the number of particles and the number of points in the distance reading. In general, the accuracy of the algorithm improves if more particles are used.
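The resampling step can be sketched as a cumulative-sum draw with replacement. This is a generic multinomial resampler, not the disclosed code; the uniform draws are passed in explicitly so the sketch stays deterministic, and the normalization shown is plain division by the weight sum.

```java
// Sketch of weight normalization and multinomial resampling: each uniform
// draw u in [0,1) selects the first particle whose cumulative normalized
// weight exceeds u, so high-weight particles tend to be duplicated and
// low-weight particles tend to be dropped.
public class Resampler {
    public static double[] normalize(double[] w) {
        double sum = 0;
        for (double v : w) sum += v;
        double[] out = new double[w.length];
        for (int i = 0; i < w.length; i++) out[i] = w[i] / sum;
        return out;
    }

    // Returns the index of the particle selected by each uniform draw.
    public static int[] resample(double[] weights, double[] uniforms) {
        double[] w = normalize(weights);
        double[] cum = new double[w.length];
        double acc = 0;
        for (int i = 0; i < w.length; i++) { acc += w[i]; cum[i] = acc; }
        int[] idx = new int[uniforms.length];
        for (int j = 0; j < uniforms.length; j++) {
            int i = 0;
            while (i < cum.length - 1 && uniforms[j] >= cum[i]) i++;
            idx[j] = i;
        }
        return idx;
    }
}
```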
(104) Streaming Parallel Algorithm Design
(105) Referring now to
(106) Our stream workflow of the algorithm is shown in
(107) A key idea behind our implementation is to distribute the particles across a set of tasks running in parallel. This particle-specific code is encapsulated in the ScanMatcher bolt, so the present disclosure can control the parallelism of the algorithm by changing the number of ScanMatcher bolt instances. The Resampling bolt must wait until it receives the results of the ScanMatcher bolts. After a resampling happens, the algorithm removes some existing particles and duplicate others, so the assignments of particles to ScanMatcher tasks have to be rearranged. The directed communication required among the parallel ScanMatcher tasks to do the reassignment is not well supported by Apache Storm, so the present disclosure uses an external RabbitMQ message broker. All the data flowing through the various communication channels are in a byte format serialized by Kryo. The steps for a single reading as shown in
(108) 1. LaserScan spout receives laser and odometer readings via the message broker.
(109) 2. The reading is sent to a Dispatcher, which broadcasts it to the parallel tasks.
(110) 3. Each ScanMatcher task receives the laser reading, updates its assigned particles, and sends the updated values to the Resampling bolt.
(111) 4. After resampling, the Resampling bolt calculates new particle assignments for the ScanMatchers, using the Hungarian algorithm to consider relocation costs. The new particle assignment is broadcast to all the ScanMatchers.
(112) 5. In parallel to Step 4, the Resampling bolt sends the resampled particle values to their new destinations according to the assignment.
(113) 6. After ScanMatchers receive new assignments, they distribute the maps associated with the resampled particles to the correct destinations, using RabbitMQ queues to send messages directly to tasks.
(114) 7. The ScanMatcher with the best particle outputs its values and the map.
(115) 8. ScanMatcher bolts send messages to the dispatcher, indicating their willingness to accept the next reading.
(116) Our implementation exploits the algorithm's tolerance for lost readings by dropping messages that arrive at the Dispatcher bolt while a computation is ongoing, to avoid memory overflow. Owing to the design of the GMapping algorithm, only a few resampling steps are needed during map building. If resampling does not happen, steps 5 and 6 are omitted. Although an open-source serial version of the algorithm in C++ is available through OpenSlam, it is not suitable for our platform, which requires Java. The algorithm described above was implemented in Java and is available on GitHub along with steps to run the experiments.
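The drop-while-busy behavior of the Dispatcher can be sketched with an atomic busy flag: a reading is accepted only when no computation is in flight, otherwise it is silently discarded, which bounds memory use. The class and method names below are illustrative, not those of the disclosed implementation.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the Dispatcher's back-pressure policy: accept one reading at a
// time, drop any reading that arrives while a computation is in flight.
public class DroppingDispatcher {
    private final AtomicBoolean busy = new AtomicBoolean(false);

    // Returns true if the reading was accepted for processing; false if it
    // was dropped because a previous reading is still being processed.
    public boolean tryAccept() {
        return busy.compareAndSet(false, true);
    }

    // Called when the ScanMatchers signal readiness for the next reading
    // (step 8 of the workflow above).
    public void done() {
        busy.set(false);
    }
}
```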
(117) Results and Discussion
(118) Referring now to
(120) TABLE 3. Serial average time (in ms) for different datasets and numbers of particles.

                                     Particle count
  Data set                        20        60        100
  Simbad                       987.8    2778.7    4633.84
  Simbad, ScanMatching
    Cutoff = 140               792.86   2391.4    4008.7
  ACES                         180       537       927.2
(121) The parallel speedups gained for the ACES and Simbad datasets are shown in
(122) Referring now to
(123) Another factor that affects parallel speedup is the difference in computation times among parallel tasks. Assume we have n ScanMatcher tasks taking t.sub.1, . . . , t.sub.m, . . . , t.sub.n seconds, where t.sub.m is the maximum among these times. In the serial case, the total time for ScanMatching is T.sub.s=t.sub.1+ . . . +t.sub.m+ . . . +t.sub.n, while in the parallel case it is at least t.sub.m because Resampling has to wait for all parallel tasks to complete. The overhead introduced by this time imbalance is t.sub.overhead=t.sub.m−T.sub.s/n, which is 0 in the ideal case when all tasks take the same time.
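The imbalance overhead above is directly computable; for example, three tasks taking 2, 3, and 5 s give T.sub.s=10 s serially, while the parallel stage takes at least t.sub.m=5 s, so t.sub.overhead=5−10/3≈1.67 s. A one-line sketch:

```java
// Sketch of the imbalance overhead t_overhead = t_m - T_s / n, where t_m is
// the slowest task time and T_s the sum of all task times; the overhead is 0
// when every parallel task takes the same time.
public class ImbalanceOverhead {
    public static double overhead(double[] taskTimes) {
        double sum = 0, max = 0;
        for (double t : taskTimes) { sum += t; max = Math.max(max, t); }
        return max - sum / taskTimes.length;
    }

    public static void main(String[] args) {
        System.out.println(overhead(new double[]{2, 3, 5}));
    }
}
```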
(126) To further investigate the behavior of the algorithm, we plotted the computation times for each reading, as shown in
(127) Referring now to
(130) The present disclosure has shown how to offload robotics data processing to the cloud through a generic real-time parallel computation framework, and our results show significant performance gains. Because the algorithm runs on a distributed cloud, it has access to potentially unbounded memory and CPU power. This allows the system to scale well, and for example could build maps in large, complex environments needing a large number of particles or dense laser readings.
(131) There are many possible enhancements to our system. The present disclosure addressed the problem of imbalances in particle computation times by simply discarding particles that exceed a hard computation limit, which works well for this particular algorithm but may not for others. Also, we have observed fluctuations in processing time caused by virtualization, multi-stream interference, and garbage collection. In the future we would like to address these fluctuations with a generic approach such as running duplicate computation tasks. It is also observed that result broadcast and gathering in the streaming tasks take considerable time, so reducing I/O would significantly improve performance.
(132) Reducing programming complexity is also an interesting direction. Modern distributed stream processing engines expose low-level APIs, making developing intricate IoT applications quite complex. Future work should propose higher-level APIs to handle complex interactions by abstracting out the details. Distributing state between parallel workers currently requires a third node, such as an external broker or a streaming task acting as an intermediary. A group communication API between the parallel tasks would be a worthy addition to DSPF. Extending our work to abstract out a generic API to quickly develop any particle filtering algorithm would also be interesting.
(133) The above detailed description and the examples described therein have been presented for the purposes of illustration and description only and not for limitation. For example, the operations described can be done in any suitable manner. The methods can be performed in any suitable order while still providing the described operation and results. It is therefore contemplated that the present embodiments cover any and all modifications, variations, or equivalents that fall within the scope of the basic underlying principles disclosed above and claimed herein. Furthermore, while the above description describes hardware in the form of a processor executing code, hardware in the form of a state machine, or dedicated logic capable of producing the same effect, other structures are also contemplated.