DISINFECTION ROBOTS
20230126179 · 2023-04-27
Assignee
Inventors
- Ning Xi (Hong Kong, CN)
- Ye Ma (Hong Kong, CN)
- Siyu Wang (Hong Kong, CN)
- Qingyang Wang (Hong Kong, CN)
- Jiajie Ye (Hong Kong, CN)
- Chun Kwok (Kowloon, CN)
- Sheng Bi (Guangzhou, CN)
CPC Classification
B25J9/1633
PERFORMING OPERATIONS; TRANSPORTING
A61L2202/14
HUMAN NECESSITIES
B25J9/1664
PERFORMING OPERATIONS; TRANSPORTING
B25J13/006
PERFORMING OPERATIONS; TRANSPORTING
B25J13/089
PERFORMING OPERATIONS; TRANSPORTING
B25J9/1607
PERFORMING OPERATIONS; TRANSPORTING
A61L2/24
HUMAN NECESSITIES
B25J9/1684
PERFORMING OPERATIONS; TRANSPORTING
A61L2202/16
HUMAN NECESSITIES
International Classification
A61L2/24
HUMAN NECESSITIES
B25J11/00
PERFORMING OPERATIONS; TRANSPORTING
B25J13/08
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A UV-based surface disinfection system consisting of a UV light source, a robot arm, and an omni-directional mobile base. The mobile robot can be programmed to operate autonomously and bring the UV light source to within centimeters of surfaces to achieve effective and efficient surface disinfection. The mobile robot can navigate autonomously in a complicated environment to perform disinfection operations over a large area.
Claims
1. A disinfection robot comprising: a mobile base with omni-directional wheels; a universally positionable manipulator with an end effector (E.E.); a UV lamp module attached to the end effector of the manipulator that includes a UV disinfection lamp and a ranging module using sensors to detect the distance of the lamp from a surface to be disinfected; and an electronic control system for planning a path for the UV lamp to be moved over the surface, based on information from the sensors, by controlling the omni-directional wheels of the base and the movement of the manipulator.
2. The disinfection robot according to claim 1 wherein the ranging module includes a multi-line laser scanner and a 360-degree camera.
3. The disinfection robot according to claim 2 wherein the ranging module is a LiDAR system.
4. The disinfection robot according to claim 1 further including a remote processor with a transceiver, and a transceiver connected to the electronic control circuit, so that the remote processor and the electronic control circuit can communicate over a transmission link and a user can control the robot remotely.
5. The disinfection robot according to claim 4 wherein the remote processor and the electronic control circuit communicate over a WiFi transmission link.
6. The disinfection robot according to claim 1 wherein the universally positionable manipulator has three rotational joints, which are located in three XYZ joint frames, and the mobile base is located in a separate XYZ frame.
7. The disinfection robot according to claim 1 wherein the electronic control system acquires, in task space and from the sensors, the velocity, acceleration, position and time according to the known initial and end points, and generates a trajectory plan to control the end-effector so that it follows this trajectory.
8. The disinfection robot according to claim 7 wherein the trajectory is divided into three sections: third-order, fifth-order and third-order trajectories; and after performing the third-order, fifth-order and third-order trajectories in the planner, the pose of each point in task space is transformed into joint space to control the robot's joints through inverse kinematics and a Jacobian matrix, so that commands can be published to control the joints based on position and velocity, where the velocity and position correspond to a specific time.
9. The disinfection robot according to claim 1 wherein the electronic control system contains a feedforward part, a feedback part and a controller for the manipulator, wherein the first part (the feedforward) is acquired through a dynamics model to compute torque in the controller and the second part (the feedback) is the position and velocity; wherein, based on the position and velocity error, a position loop and a velocity loop are constructed; wherein the feedforward torque is combined with the position loop and velocity loop to calculate the decoupling output, the controller output being a torque whose value satisfies the requirement of the entire robot system in order to realize a target, including the mutual influence between each joint; and wherein the torque output is provided to a driver for the manipulator, and the driver generates current to make the manipulator motors operate according to the corresponding torque.
10. The disinfection robot according to claim 1 wherein the electronic control system plans a path according to an object perceptive local planner method comprising the steps of: segmenting X(t) segments from a real-time point cloud; segmenting X_ref(s) segments from a global map as the end effector's desired path; using the selected segments X(t) with a point cloud sliced Wasserstein based perceptive motion local planner to compare the desired UV path to the actual UV path; updating the comparison; and updating the speed profile of the end effector along the geodesics path, whereby the optimal transport path from X(t) to X_ref(s) is established.
11. The disinfection robot according to claim 10 wherein the comparison is updated at a rate of 10 Hz and the speed profile is updated at a rate of 50 Hz.
12. The disinfection robot according to claim 1 further including a time-of-flight (ToF) camera attached to the UV light module at the end-effector and a Light Detection and Ranging (LiDAR) device on the robot base, and wherein the method of planning the path comprises the steps of: receiving a present image I(t), a present point cloud X(t) and camera_intrinsics.yaml & camera_tof_If.launch signals from the robot at a present point cloud segmentation processor; producing at the present point cloud segmentation processor a visual cone in the local frame and X_seg(t), which is delivered to a sliced perceptive object tracker; receiving UV_desired.txt, Map.ply and p_num(s) at a reference point cloud generator, which in turn generates X′_seg(s), which is applied to the sliced perceptive object tracker along with an E.E. state (velocity or acceleration) and E.E. constraints, wherein the E.E. constraints come from joint dynamic constraints and environmental information (e.g., friction) after passing through a dynamics module; causing the sliced perceptive object tracker to engage in trajectory planning with dynamic constraints on a geodesics path and Wasserstein barycenter as intersections from X to X′, wherein the tracker produces as an output the point cloud registration (based on sliced Wasserstein distance); passing the point cloud registration to a point cloud sliced Wasserstein based perceptive motion local planner, wherein the planner has its output tf: from /UV to /UV_desired directed to the reference point cloud generator as an input along with X′_seg(s) and LiDAR_FoV.yaml; applying a second output of the planner to a calculation block which engages in e_calculation (known /map to /UV), which provides the input p_num(s) to the reference point cloud generator; and applying an output p(T) from the tracker to a kinematics unit to create q(t), which is directed to the robot, and which in turn provides the present image and present point cloud applied to the present point cloud segmentation unit, whereby global localization is used as an initial condition, then the method mainly compares the reference point cloud and the present point cloud in the end effector frame (E.E.F.) of the mobile manipulator based on Wasserstein distance, and then outputs the rigid transformation between them at a low frequency and the desired velocity along the optimal path of the end effector at a high frequency.
13. The disinfection robot according to claim 1 for conducting a disinfection task with respect to a movable object, further including a 3D camera on the end effector of the robot, and wherein the electronic control system includes a disinfection motion planning module with a Local Reference Frame (R_(L.R.F)) attached to the movable object that carries out the steps of: obtaining a reference point cloud in advance; collecting a target point cloud with the 3D camera; utilizing a vector space with a point cloud registration algorithm to output a transformation matrix in the vector space based on the reference point cloud and the target point cloud, wherein the transformation matrix is acquired for converting from the reference point cloud to the target point cloud; transferring the original motion planning that is attached to the movable object to the target object according to the output transformation matrix; and causing the robot end effector to follow a given path and trajectory to finish the disinfection of the movable object without being required to perform additional motion planning for the entire path and trajectory.
14. The disinfection robot according to claim 13 wherein the point cloud registration algorithm comprises the steps of: calculating a Point Feature Histogram (PFH) of the reference point cloud and the target point cloud, wherein the PFH features are related to the three-dimensional data of the coordinate axes and the surface normals; and calculating the transformation relationship between the two different point clouds with their PFH features using the Iterative Closest Point (ICP) algorithm, which outputs a result with a score value, where a lower score means a more accurate and confident result.
15. The disinfection robot according to claim 1 for conducting a disinfection task with respect to a movable object, further including a 3D camera on the end effector of the robot, and wherein the electronic control system includes a disinfection motion planning module using a movable object point set observed in the E.E. frame directly as the system state, which directly ensures the relative spatial relationship from the end effector to the object; wherein the motion plan is a series of continuously evolving sets or "tube" in non-vector space generated by (a) expert demonstration in the E.E. frame by joystick E.E. control or (b) conversion from a vector space trajectory in the W.R.F., and the module comprises the steps of: having the 3D scanner on the end effector simultaneously collect a series of point clouds of the object to be disinfected while the E.E. is controlled by an expert in a disinfection demonstration process (a); converting the motion plan from vectors described in the W.R.F. to sets observed in the E.E. frame as the non-vector space tube while a virtual end effector with a ToF LiDAR FoV culling point cloud collector travels along a described trajectory in a simulation environment (b); having the 3D scanner at the robot end effector obtain a segmented point set K(t) of the movable object in real-time; using a designed non-vector space controller to converge the Wasserstein distance, as the control error from K(t) to K̂, with theoretically and experimentally proven exponential stability by calculating the appropriate velocity of the end effector; and causing the robot end effector to maintain a certain relative spatial relationship to the movable object and to impose an effective UVC dosage without being required to perform additional motion planning for the entire path and trajectory.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] This patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
[0009] The foregoing and other objects and advantages of the present invention will become more apparent when considered in connection with the following detailed description and appended drawings in which like designations denote like elements in the various views, and wherein:
DETAILED DESCRIPTION OF THE INVENTION
[0034] As shown in the drawings, the disinfection robot comprises an omni-directional mobile base 12 with wheels 11, a manipulator 14 mounted on the base, and a UVC lamp 16 carried on the end-effector 15 of the manipulator.
[0035] The manipulator includes a LiDAR laser ranging module in the UVC lamp 16 at the end of the manipulator arm. It measures the distance between the UVC lamp and the surface being disinfected. In this way, the lamp is able to get much closer to the surface, improving the sterilization efficiency.
[0036] An electronic control system 20 for the robot is mounted on the base 12 and mainly consists of one processing board, e.g., an NVIDIA Jetson, which provides multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing. This processing board is used in the present invention for sensor information processing and task planning. The system also includes a microprocessor for robot control, e.g., an STMicroelectronics STM32. The on-board sensors include a multi-line laser scanner and a 360-degree camera. The system communicates over the Internet as well as cellular service. The architecture of the electronic system is illustrated in the drawings.
[0037] As shown in the architecture block diagram, signals from the on-board sensors are first processed by the processing board.
[0038] An ARM Cortex processor 28 receives the processed signals and converts them into drive signals for the wheels 11 and for the movement of the joints on the manipulator 14. In particular, the wheel drive signals pass to a universal asynchronous receiver-transmitter (UART) 26, which in turn divides the input into output signals for the motor drivers and motors for omni wheels 11A, 11B and 11C on the robot base 12. The signals for the joints are passed through a pulse-width modulation circuit 27 and then to the motor drivers 29 and arm motors for joints 1, 2 and 3.
[0039] As shown in the drawings, the forward kinematics of the combined mobile platform and manipulator are determined as follows:
[0040] where S_z and C_z equal sin(θ_z) and cos(θ_z), respectively; θ_z is the current orientation of the mobile platform; S_1 and C_1 equal sin(q_1) and cos(q_1), respectively; q_1 is the angle value of joint 1; S_12 and C_12 equal sin(q_1+q_2) and cos(q_1+q_2), respectively; q_2 is the angle value of joint 2; S_123 and C_123 equal sin(q_1+q_2+q_3) and cos(q_1+q_2+q_3), respectively; q_3 is the angle value of joint 3; and L_1, L_2 and L_3 are the lengths of the links on the robot arm.
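The forward kinematics equation itself did not survive extraction. A minimal reconstruction consistent with the definitions above, assuming the three joints form a planar chain carried on the mobile platform at pose (P_x, P_y, θ_z), would be:

$$\begin{aligned} x_0 &= P_x + C_z\,(L_1C_1 + L_2C_{12} + L_3C_{123}) - S_z\,(L_1S_1 + L_2S_{12} + L_3S_{123}),\\ y_0 &= P_y + S_z\,(L_1C_1 + L_2C_{12} + L_3C_{123}) + C_z\,(L_1S_1 + L_2S_{12} + L_3S_{123}). \end{aligned}$$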
[0041] The inverse kinematics are determined as follows:
[0042] where x_ee is the distance from the end-effector 15 to the arm platform frame along the X axis of the arm platform frame; y_ee is the distance from the end-effector to the arm platform along the Y axis of the arm platform frame; z_ee is the distance from the end-effector to the arm platform along the Z axis of the arm platform frame; n_x and n_y belong to the rotation matrix; x_0 and y_0 are the position of the end-effector in the world reference frame (W.R.F.); P_x, P_y and θ_z are the pose of the mobile platform; θ_front, θ_left and θ_right are the angles of the wheels 11 of the mobile platform 12; L is the distance between the center of the mobile platform and a wheel 11; R is the radius of a wheel 11; and $_0^2T$ is a transformation matrix from the no. 2 frame to the no. 0 frame.
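The published wheel-kinematics relation was likewise lost. For a base with three omni wheels spaced 120° apart (an assumption consistent with wheels 11A-11C), the standard mapping from platform velocity to wheel angular rates is:

$$\dot\theta_i = \frac{1}{R}\Bigl(-\sin(\theta_z+\alpha_i)\,\dot P_x + \cos(\theta_z+\alpha_i)\,\dot P_y + L\,\dot\theta_z\Bigr),\qquad \alpha_i\in\{0^\circ,120^\circ,240^\circ\},$$

for i ∈ {front, left, right}.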
[0043] The Jacobian matrix is defined as:
where $\dot{\theta}_a$ is the vector of joint velocities and $\dot{\theta}_b$ is the vector of wheel velocities.
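The matrix itself is missing; in the standard form implied by these definitions, the end-effector velocity decomposes into arm and base contributions:

$$\dot x_{ee} = J_a\,\dot{\theta}_a + J_b\,\dot{\theta}_b,$$

where J_a and J_b are the Jacobian blocks associated with the manipulator joints and the wheels, respectively.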
[0044] Dynamics modeling is achieved by the following equations:
[0045] where D(q) is the inertia matrix; C(q, q̇) is the Coriolis matrix; g(q) is the gravitational potential energy term; and J_v is the corresponding velocity Jacobian.
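The dynamics equations did not survive extraction, but the definitions match the standard rigid-body form:

$$D(q)\,\ddot q + C(q,\dot q)\,\dot q + g(q) = \tau.$$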
[0046] During motion planning, the velocity, acceleration, position and time corresponding to the known initial and end points are acquired. Based on this information, a plan is generated to control the end-effector so that it follows a desired trajectory. The planned trajectory is divided into three sections: a third-order, a fifth-order and a third-order trajectory, as shown in the following formulas.
$$\begin{aligned}
t_0 \to t_1:&\quad x(t) = a_{13}t^3 + a_{12}t^2 + a_{11}t + a_{10}\\
t_1 \to t_2:&\quad x(t) = a_{25}t^5 + a_{24}t^4 + a_{23}t^3 + a_{22}t^2 + a_{21}t + a_{20}\\
t_2 \to t_f:&\quad x(t) = a_{33}t^3 + a_{32}t^2 + a_{31}t + a_{30}
\end{aligned}$$
[0047] After performing the 3-5-3 order trajectories in the planner, the pose of each point in task space can be transformed into joint space to control the robot's joints through the inverse kinematics and Jacobian matrix shown above. The pose of a robot provides information on the location of the robot in either two or three dimensions and also its orientation. For mobile robots, the pose can be a simple three-element vector, but for arm-based robots, a matrix is often used to describe the pose. Therefore, in joint space, the joint controller can publish commands to control each joint based on position and velocity, where the velocity and position correspond to a specific time.
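As an illustration of the 3-5-3 planning step, the sketch below solves for segment coefficients from boundary conditions. This is a minimal reconstruction only; the function names, junction times and boundary values are editorial placeholders, not the patent's actual implementation.

```python
import numpy as np

def cubic_coeffs(t0, t1, x0, x1, v0, v1):
    """Coefficients (a3, a2, a1, a0) of a third-order segment matching
    position and velocity at both ends."""
    A = np.array([
        [t0**3, t0**2, t0, 1],    # x(t0) = x0
        [t1**3, t1**2, t1, 1],    # x(t1) = x1
        [3*t0**2, 2*t0, 1, 0],    # x'(t0) = v0
        [3*t1**2, 2*t1, 1, 0],    # x'(t1) = v1
    ], dtype=float)
    return np.linalg.solve(A, np.array([x0, x1, v0, v1], dtype=float))

def quintic_coeffs(t1, t2, x1, x2, v1, v2, acc1, acc2):
    """Coefficients (a5..a0) of the fifth-order middle segment matching
    position, velocity and acceleration at both junctions."""
    pos = [[t**5, t**4, t**3, t**2, t, 1] for t in (t1, t2)]
    vel = [[5*t**4, 4*t**3, 3*t**2, 2*t, 1, 0] for t in (t1, t2)]
    acc = [[20*t**3, 12*t**2, 6*t, 2, 0, 0] for t in (t1, t2)]
    A = np.array(pos + vel + acc, dtype=float)
    b = np.array([x1, x2, v1, v2, acc1, acc2], dtype=float)
    return np.linalg.solve(A, b)

# Example: a 3-5-3 profile from x=0 to x=1 with smooth junctions.
c1 = cubic_coeffs(0.0, 1.0, 0.0, 0.3, 0.0, 0.5)
c2 = quintic_coeffs(1.0, 2.0, 0.3, 0.8, 0.5, 0.4, 0.0, 0.0)
c3 = cubic_coeffs(2.0, 3.0, 0.8, 1.0, 0.4, 0.0)
```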
[0048] In the controller system, the input contains two parts: feedforward and feedback. The first part (feedforward) is acquired through the dynamics model indicated above to compute the torque in the controller; the formula for the torque is set forth below. The second part (feedback) is the position and velocity. Based on the position and velocity error, the position loop and velocity loop are constructed. The feedforward torque is then combined with the position loop and velocity loop to calculate the decoupling output. The controller output is a torque whose value satisfies the requirement of the entire robot system in order to realize a target, including the mutual influence between each joint. Finally, the torque output is provided to the driver for the manipulator, and the driver generates current to make the manipulator motors operate according to the corresponding torque. The formulas for the torque are
[0049] where τ_1, τ_2 and τ_3 are the torques of joint 1, joint 2 and joint 3, respectively; τ_right, τ_front and τ_left are the torques of the wheels of the mobile platform; d_ij is an element of the inertia matrix; c_ijk is the Coriolis coefficient of the k-th joint; g(q) is the gravitational potential energy; and F_x, F_y and M_z are the driving forces and moment applied to the mobile platform.
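With the definitions above, the classical computed-torque law captures the described combination of dynamics feedforward with position and velocity loops; the gain matrices K_p and K_v are editorial symbols, not taken from the patent:

$$\tau = D(q)\bigl(\ddot q_d + K_v(\dot q_d - \dot q) + K_p(q_d - q)\bigr) + C(q,\dot q)\,\dot q + g(q).$$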
[0050] It is vital for the robot to have robustness and flexibility when executing mobile manipulation tasks, such as indoor disinfection. However, mobile manipulation tasks in unstructured environments present difficulties. To be more specific, there are three main challenges in indoor mobile manipulation.
[0053] The "Object Perceptive Local Planner" method of the present invention is shown in the drawings.
[0054] One key to this process is that the motion planning and controlling steps utilize both spatial and shape information about an object of interest in the global frame to adapt to unexpected disturbances in the robot local frame. The "Object Perceptive Local Planner" takes advantage of a time-of-flight (ToF) camera 54 attached to the UV light 16 at the end-effector (E.E.) and the Light Detection and Ranging (LiDAR) device 17 on the robot base, as shown in the drawings.
[0055] The overall framework of the algorithm is shown in the drawings.
[0056] The sliced perceptive object tracker 88 engages in trajectory planning with dynamic constraints on the geodesics path and Wasserstein barycenter as intersections from X to X′. Tracker 88 produces as an output the point cloud registration (based on sliced Wasserstein distance) at a rate of 10 Hz, which is passed to the point cloud sliced Wasserstein based perceptive motion local planner 85 (which is equivalent to unit 64 in the drawings).
[0057] In brief, the framework uses global localization as an initial condition; it then mainly compares the reference point cloud and the present point cloud in the end effector frame (E.E.F.) of the mobile manipulator based on Wasserstein distance, and outputs rigid transformation information between them at low frequency and desired velocity information along the optimal path of the end effector at high frequency.
[0058] In the E.E.F., the Wasserstein distance (X(t), X_ref(s)) leads to a rigid transform and an optimal transport plan with a geodesics path between them. Directly estimating the rigid transform of two point clouds is time consuming. However, with the Wasserstein barycenter as the intersection of the two point clouds, the transform can be rapidly obtained and is guaranteed to be on the shortest path, in terms of optimal transport, from the source point cloud to the destination. The main contribution of this part is to integrate the perception and control modules together to provide a "spatial perceptive ability" for the motion in real time.
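To make the sliced Wasserstein idea concrete, the following is a minimal sketch (not the patent's implementation): each random projection reduces the optimal transport between the two point clouds to sorting on a line.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=64, seed=0):
    """Approximate sliced Wasserstein-2 distance between point clouds
    X and Y, given as (n, 3) arrays with equal point counts."""
    assert X.shape == Y.shape
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_proj, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    total = 0.0
    for d in dirs:
        # Sorting the 1D projections yields the optimal 1D coupling.
        px, py = np.sort(X @ d), np.sort(Y @ d)
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_proj)
```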
[0059] The UV-C light module is shown in the drawings.
[0061] A block diagram of the control system of the UVC lamp is shown in the drawings.
[0062] A schematic of the control system of the UVC lamp is shown in the drawings.
[0063] Tests were conducted with the disinfection robot of the present invention; a photograph of the tests is included in the drawings.
[0066] As noted above, a disinfection motion plan, such as the trajectory of the robot end-effector (E.E.) UVC light 16, can be described in the World Reference Frame (W.R.F.) established in the global point cloud map. Such a disinfection plan is valid based on the assumption that the pose of the object being disinfected is static in the Global Reference Frame (G.R.F.), where T_W^L is static. Static objects include buttons on the walls, fixed bookshelves and desks. Then, with a pose estimation of the robot end effector in the global map (T_W^EE), the pose control error can be calculated and gradually eliminated. However, this planning and control framework suffers in two respects.
[0067] First, when encountering movable objects like chairs, the static object pose assumption cannot be valid, since their pose (T_W^L) can vary from time to time in the W.R.F. See the drawings.
[0068] When considering the disinfection task with respect to a movable object, it becomes clear that the relative spatial relationship from the end effector (R_(E.E.)) to the latest object pose (R_(L.R.F)) is vital. Therefore, to meet the challenges of disinfecting movable objects, it is proposed as part of the present invention that the disinfection motion planning include a Local Reference Frame (R_(L.R.F)) which is attached to the movable object. The advantage of this is that it avoids motion re-planning each time the object pose is changed.
[0069] Additionally, the pose estimation reference is no longer the whole environment but just the object, which can reject unreliable pose estimation caused by environmental changes. Two frameworks, one in vector space and one in non-vector space, are considered for accomplishing these innovative ideas, including reference frame conversion of the motion plan, error estimation and control in the local reference frame (R_(L.R.F)).
[0070] For a movable object, the vector space method uses a point cloud registration algorithm to output the transformation matrix in vector space based on two different point clouds: one is the reference point cloud, and the other is the currently observed point cloud, called the "target point cloud." Both the reference and target point clouds are described in the Local Reference Frame (L.R.F.). The transformation matrix is acquired for converting from the reference point cloud to the target point cloud. The robot can then follow the given path and trajectory to finish the disinfection task without performing the motion planning again.
[0071] The vector space method needs the reference point cloud in advance. A relatively complete reference point cloud is usually collected, described in the L.R.F. The end effector of the robot is provided with a 3D camera, which collects the target point cloud; the target point cloud is described in the same coordinate frame as the reference point cloud. When both a reference point cloud and a target point cloud are available, the point cloud registration algorithm is used to calculate the transform matrix. First, the Point Feature Histograms (PFH) of the reference point cloud and the target point cloud are calculated. PFH features are related not only to the three-dimensional coordinate data but also to the surface normals. Then, the Iterative Closest Point (ICP) algorithm is used to calculate the transformation relationship between these two point clouds with their PFH features. The ICP algorithm outputs a result with a score value; a lower score means a more accurate and confident result.
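A hedged sketch of this registration pipeline using the Open3D library is shown below. Note that it substitutes FPFH (Open3D's fast variant of the PFH feature) for the PFH named in the text, and all parameter values are illustrative assumptions rather than the patent's settings.

```python
import open3d as o3d

def register(reference, target, voxel=0.01):
    """Estimate the reference-to-target transform: FPFH features give a
    coarse global alignment, then point-to-point ICP refines it. The
    returned inlier RMSE plays the role of the 'score' (lower is better)."""
    ref = reference.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    for pc in (ref, tgt):
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=30))
    feats = [o3d.pipelines.registration.compute_fpfh_feature(
                 pc, o3d.geometry.KDTreeSearchParamHybrid(
                     radius=10 * voxel, max_nn=100))
             for pc in (ref, tgt)]
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        ref, tgt, feats[0], feats[1], True, 2 * voxel,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    fine = o3d.pipelines.registration.registration_icp(
        ref, tgt, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation, fine.inlier_rmse
```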
[0072] After obtaining the transform relationship, the original motion planning that is attached to the movable object can be transferred to the target object by the output transform relationship. Therefore, the robot just needs to follow the given path and trajectory to disinfect the movable object without re-planning the entire path and trajectory. The path of the motion planning is shown in the drawings.
[0073] The idea of the non-vector space planning and control framework is to use the movable object point set observed in the E.E. frame directly as the system state, which directly ensures the relative spatial relationship from the end effector to the object and largely omits the feature extraction and vectorization computation cost of vector-space algorithms, such as localization.
[0074] In motion plan conversion, the original vector space motion plan is a series of UVC light (E.E.) trajectories (time-varying pose, velocity, and acceleration vectors) in the W.R.F. The motion plan, on the other hand, is described by a series of continuously evolving sets, called a "tube," in non-vector space. By having a virtual end effector ideally travel along the mentioned trajectory in a simulation environment (or the real end effector move in the real world), the 3D scanner on the end effector simultaneously collects a series of point clouds of the object to be disinfected. The motion plan is thereby converted from vectors described in the W.R.F. to sets observed in the E.E. frame. A motion plan conversion example from a vector-space line trajectory in the W.R.F. is shown in the drawings.
[0075] As for the control process, the 3D scanner at the robot end effector obtains a segmented point set K(t) of the movable object in real-time. A designed non-vector space controller then converges the Wasserstein distance, as the control error from K(t) to K̂, by calculating the appropriate velocity of the end effector. During the control process, the movable object can still be moved, and the robot end effector will robustly adapt to such unexpected changes with the system shown in the drawings.
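As a toy illustration only (the patent's controller contracts the full Wasserstein distance with proven exponential stability; this sketch uses the barycenter offset as a crude proxy for that error):

```python
import numpy as np

def ee_velocity_command(K_t, K_ref, gain=1.0):
    """Velocity command that exponentially decays the offset between the
    observed point set K(t) and the reference set, both (n, 3) arrays."""
    err = K_t.mean(axis=0) - K_ref.mean(axis=0)  # barycenter offset
    return -gain * err
```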
[0076] In order to cover a larger object surface, a more complicated disinfection path can be expressed by more than one tube in non-vector space. The drawings show snapshots in the E.E. frame during a complete chair disinfection in Wasserstein non-vector space; the directions of the three reference tubes are shown by arrows on one side, the front, the other side and the top.
[0078] The vector space framework realizes high position control accuracy because it estimates the Euclidean transformation from the previously scanned object to the presently observed object once, by point cloud registration algorithms. During the disinfection period, the end effector can approach the object closely with both an updated motion plan in R_(G.R.F) and closed-loop control in R_(G.R.F). However, if the shape of the observed object differs from the reference object's shape, the point cloud registration will fail. Also, if the object is moved during the control process, the point cloud registration must be performed again.
[0079] The non-vector space framework shows more robust performance when there are only partial observations of an object of interest, because these observations do not rely on the vector space transform precision of point cloud registration, and it tolerates unexpected movements of the object thanks to its closed-loop control in R_(E.E.) throughout the whole disinfection process, without a feature extraction step. Also, one disinfection plan can be used on multiple objects with similar shapes, because the set error is represented by the Wasserstein distance, which can describe not only isometric transforms but also shape differences. The drawback of non-vector space is that both the storage size and the computation cost of creating the sets are large. This is overcome by a point cloud voxel down-sampling filter, and could be addressed more efficiently with tools like compressive sensing.
[0080] At the disinfection action level, there remain many situations to handle, such as the object being occupied, an occupied object becoming free, an empty container and a full container, as shown in the drawings.
[0081] In facing the challenge of performing actions that depend on the situation, the robot needs to detect the object state while it is performing disinfection tasks. After an occupied or empty state is detected, the system can rearrange its actions to skip or postpone disinfection. Because the system uses event-based detection in its framework, and event-based motion planning allows rearrangement of actions without redoing the entire motion plan, its operation can be very efficient.
[0082] The most important task in this embodiment is to detect and classify the different situations, which is realized by an algorithm called the "action online planner." According to the information from the RGB camera and the 3D camera, the situations can be classified. Through the RGB camera, the You Only Look Once (YOLO) algorithm can find different objects, such as chairs, sofas and humans, as shown in the drawings.
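A sketch of how such detection might feed the action online planner, using the ultralytics YOLO package; the model file, class names and co-occurrence rule are editorial assumptions, not the patent's implementation.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small COCO-pretrained detector (assumed choice)

def classify_situation(rgb_frame):
    """Label the scene for the action online planner: a detected person
    together with a chair/sofa is treated as 'occupied', so disinfection
    of that object can be skipped or postponed."""
    result = model(rgb_frame)[0]
    names = {result.names[int(c)] for c in result.boxes.cls}
    if "person" in names and names & {"chair", "couch"}:
        return "occupied"
    if names & {"chair", "couch"}:
        return "free"
    return "unknown"
```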
[0083] While the invention is explained in relation to certain embodiments, it is to be understood that various modifications thereof will become apparent to those skilled in the art upon reading the specification. Therefore, it is to be understood that the invention disclosed herein is intended to cover such modifications as fall within the scope of the appended claims.