Local Autonomy-Based Haptic Robot Interaction with Dual Proxy Model
20230339121 · 2023-10-26
Abstract
Haptic-robot control based on local autonomy and a dual-proxy model is provided. The dual proxy guarantees the generation of safe and consistent commands for two local controllers, which ensure the compliance and stability of the systems on both sides. A Force-Space Particle Filter enables autonomous modeling and rendering of the task contact geometry from the robot state and sensory data. The method suppresses the instability issues caused by the transfer of power variables through a network with communication delays in conventional haptic-robot controllers. The results demonstrated the transparency and high fidelity of the method, and its robustness to communication delays. While the conventional method failed for communication delays higher than 150 milliseconds, the dual-proxy method maintained high performance for delays up to one and a half seconds. The local autonomy-based haptic control of robots with the dual-proxy model enables applications in areas such as medical, underwater and space robotics.
Claims
1. A method for controlling a robot via a haptic interface, comprising: (a) having a robot controlled by a local, independent robot controller, wherein the robot interacts with an environment, and wherein the robot controller distinguishes a unified force and motion controller, parametrized autonomously; (b) having a haptic device controlled by a local, independent haptic controller, wherein the haptic controller distinguishes a unified force and motion controller, parametrized autonomously; (c) having an environment estimator near or within the robot controller estimating single or multi-dimensional geometric constraints from the robot interactions with the environment, wherein the geometric constraints are defined in orientation and position; and (d) having a dual proxy model separately connecting the robot controller and the haptic controller, wherein the dual proxy model calculates outputs defined as (i) a parametrization for the robot and haptic unified controllers, (ii) a force command and a motion command for the robot unified force and motion controller, (iii) a force feedback command and a motion command for the unified force and motion haptic controller, wherein the outputs are calculated by the dual proxy model receiving inputs defined as (j) position information from the haptic device, (jj) the geometric constraints from the environment estimator, and (jjj) position information from the robot, and wherein the calculated outputs are consistent with the robot and haptic device capabilities, and with a safe haptic-robot interaction.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
Framework for Haptic-Robot Interaction
[0023] In this invention, the inventors rethought haptic teleoperation through a local autonomy-based control strategy. The idea is to implement local controllers on the robot and haptic device sides, to respectively monitor the task actions and provide proper perception to the user, while exchanging only high-level passive variables between the two robots. This decoupled approach is based on the introduction of the dual-proxy model: a smart bridge between the two local loops which encodes the haptic-robot interactions.
Local Autonomy-Based Approach
[0024] The state of the art indicates that a compromise must be made between stability and fidelity once power variables are transmitted through a global feedback loop between the haptic device and the remote robot. In a position-position controller, intrinsic stability and robustness to time delay can be achieved by removing force/velocity non-collocation in the global bilateral architecture. To this end, the approach presented herein minimizes non-collocations by only closing the feedback loop locally at the device and robot sides, and exchanging high-level information between the two local controllers. A dual-proxy model computes setpoints for the two local loops with respect to the task-controlled modalities and exchanged data.
[0025] The concept of the haptic control framework is shown in
[0026] Using autonomous local controllers makes this approach very modular and general, because they can easily be changed and tuned to adapt to different tasks and different hardware without having to redesign the whole system.
Dual-Proxy Model
[0027] The idea of a proxy in haptics is to create a point, the proxy, in the virtual environment that kinematically follows the user input but is subject to the constraints of the virtual environment. This proxy is then used to generate a haptic spring force that tries to bring the haptic device back to the position of the proxy. It is a very effective way to render a virtual environment haptically.
[0028] The idea of the dual-proxy model is to define two proxies, one for the robot and one for the haptic device, that will be used to generate commands for the local controllers. The proxies are, therefore, virtual objects which represent the behavior of the two robots in the workspace of their counterparts. With this proxy-based approach, only position information has to be transmitted over the network, which cannot jeopardize the stability of the teleoperation framework. The dual-proxy model uses knowledge of the robot environment, which can be known a priori or computed with the online perception algorithm described infra, and knowledge of the robot and haptic device states to generate inputs to the local controllers. The dual-proxy model handles the communication between the robot and haptic device and prevents non-collocation issues. It is described in
Computing the Haptic and Robot Proxies
[0029] The first role of the dual-proxy model is to handle the communication between the two robots and to generate a proxy for each. The proxy captures the behavior of the robot and of the haptic device in the frame of the other. To this end, the dual-proxy model copies each robot's position data and transforms it through a proper workspace mapping. The most commonly used mapping relates the two workspaces via a translation, a rotation and a constant scaling factor.
[0030] The haptic device current position x.sub.h and orientation R.sub.h are measured in the device frame. They encode the human's desired pose for the robot. This position and orientation are sent to the dual-proxy model, where they are mapped onto the robot frame, using the rotation matrix from the device frame to the robot frame R.sub.h-r, the translation between the robot frame and the desired center of the robot workspace in position p.sub.r-cw and orientation R.sub.r-cw, and the scaling factor s, to obtain the robot proxy position x.sub.p,r and orientation R.sub.p,r:
x.sub.p,r=p.sub.r-cw+sR.sub.h-r.sup.Tx.sub.h
R.sub.p,r=R.sub.h-r.sup.TR.sub.hR.sub.h-rR.sub.r-cw (1)
[0031] The reverse transformation is done to compute the haptic proxy x.sub.p,h, R.sub.p,h from the robot position x.sub.r and orientation R.sub.r:
x.sub.p,h=R.sub.h-r(x.sub.r−p.sub.r-cw)/s
R.sub.p,h=R.sub.h-rR.sub.rR.sub.r-cw.sup.TR.sub.h-r.sup.T (2)
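As a concrete sketch, the workspace mapping of Equations 1 and 2 can be written directly in code. The function and variable names below are hypothetical illustration choices, not part of the patented implementation:

```python
import numpy as np

def robot_proxy(x_h, R_h, R_hr, p_rcw, R_rcw, s):
    """Map the haptic device pose into the robot frame (Eq. 1)."""
    x_pr = p_rcw + s * (R_hr.T @ x_h)
    R_pr = R_hr.T @ R_h @ R_hr @ R_rcw
    return x_pr, R_pr

def haptic_proxy(x_r, R_r, R_hr, p_rcw, R_rcw, s):
    """Inverse mapping: robot pose into the haptic device frame (Eq. 2)."""
    x_ph = R_hr @ (x_r - p_rcw) / s
    R_ph = R_hr @ R_r @ R_rcw.T @ R_hr.T
    return x_ph, R_ph
```

Applying the two mappings in sequence recovers the original haptic pose, which is a quick consistency check of the two equations.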
[0032] It is worth noting that the same workspace mapping transformation is applied to any data passed through the network from one robot to the other, to maintain the physical consistency between all remote information. In particular, the model of the robot environment geometry needs to be mapped to the haptic device to generate correct force feedback.
[0033] In general purpose teleoperation applications, the communication network is often slower than the desired control frequency and rarely ensures the hard-real-time constraints of haptic-robot interaction. In addition, communication delays and packet losses generate many issues in conventional teleoperation systems that use one global feedback loop. This is particularly true when there is a significant distance between the haptic device and the robot, for example when teleoperating a robot in space.
[0034] The dual-proxy model removes these communication issues. Indeed, since the dual-proxy model and the two local controllers are independently implemented, they can run at different frequencies. Typically, the local control loops will run at 1 kHz or more to get the best possible performance, whereas the communication channel can run at a lower frequency, which will mostly depend on the chosen communication technology and will typically be between 10-100 Hz. In addition, this frequency difference enables the computation of smooth proxies by averaging, in the fast controllers, position data between two consecutive communication messages. Finally, communication delay and latency spikes can be monitored by the dual-proxy model in many ways without compromising the performance of the local control loops, as it runs independently.
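One hypothetical realization of the smoothing between consecutive network messages is a linear interpolation in the fast control loop; the class below is a sketch assuming a 1 kHz control loop and a 50 Hz communication channel (the patent does not specify the exact averaging scheme):

```python
import numpy as np

class ProxySmoother:
    """Smooth proxy positions between two consecutive network messages
    (illustrative sketch; the exact averaging scheme is an assumption)."""
    def __init__(self, ctrl_freq=1000.0, comm_freq=50.0):
        self.steps = int(ctrl_freq / comm_freq)  # control ticks per network message
        self.prev = None
        self.target = None
        self.i = 0

    def on_message(self, x_new):
        """Called when a new proxy position arrives from the network."""
        self.prev = self.target if self.target is not None else x_new
        self.target = x_new
        self.i = 0

    def tick(self):
        """Called every control cycle; returns the smoothed proxy position."""
        self.i = min(self.i + 1, self.steps)     # hold the target once reached
        a = self.i / self.steps
        return (1.0 - a) * self.prev + a * self.target
```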
Task-Related Proxy Projection
[0035] The second role of the dual proxy is to use the proxies, robot configurations and robot environment information to generate motion and force control inputs. A broad spectrum of physical interaction is involved when performing tasks in haptic teleoperation. Remote minimally-invasive surgical operations require high accuracy in motion, while assembly tasks or polishing work need precise perception and regulation of the contact forces with the environment. This range of interaction implies different objectives for the local control loops. To achieve high fidelity in both motion and contact tasks, the dual-proxy model generates the control inputs such that they properly reflect the real-world physical interaction. The strategy consists of computing either a motion command, if the robot is moving in some direction, or a force command, when the robot is in contact with the environment.
[0036] The computation of the control inputs is based on a projection of the robot proxy and haptic proxy onto motion and force spaces, defined by the task physical interaction, whether the robot is in contact with the environment or not. The environment geometry is described locally by the directions of constraints and directions of free motion. These are encoded by selection matrices for the robot force space Ω.sub.f and the robot motion space Ω.sub.m. These matrices are 3D projection matrices, and their rank corresponds to the number of degrees of freedom in the force and motion space respectively. Besides, the force space and motion space need to be orthogonal so we have Ω.sub.f+Ω.sub.m=I.sub.3×3. A similar selection process projects the end-effector rotational motion and applied torques onto rotation and moment spaces, respectively through Ω.sub.rot and Ω.sub.τ, with Ω.sub.rot+Ω.sub.τ=I.sub.3×3. Note that because of these relationships, only the force and moment selection matrices need to be transferred from the robot to the haptic device. However, they need to be mapped to the haptic device workspace. The force and moment selection matrices for the haptic device are therefore:
Ω.sub.f,h=R.sub.h-rΩ.sub.fR.sub.h-r.sup.T
Ω.sub.τ,h=R.sub.h-rΩ.sub.τR.sub.h-r.sup.T (3)
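The selection matrices and their mapping to the haptic workspace (Equation 3) can be illustrated in a few lines; `selection_matrices` and its argument convention (a list of orthonormal contact normals) are assumptions for this sketch:

```python
import numpy as np

def selection_matrices(normals):
    """Force/motion selection matrices from a list of orthonormal contact normals."""
    if len(normals) == 0:
        Omega_f = np.zeros((3, 3))       # free space: no constrained directions
    else:
        N = np.column_stack(normals)     # 3 x d_fs matrix of normals
        Omega_f = N @ N.T                # orthogonal projector onto the force space
    Omega_m = np.eye(3) - Omega_f        # complement, so Omega_f + Omega_m = I
    return Omega_f, Omega_m

def map_to_haptic(Omega_f, R_hr):
    """Express a selection matrix in the haptic device frame (Eq. 3)."""
    return R_hr @ Omega_f @ R_hr.T
```

Note that the similarity transform in `map_to_haptic` preserves the projector property (the mapped matrix still squares to itself), as required for a valid selection matrix.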
[0037] On the robot side, the dual-proxy model projects the proxy error onto the motion and force spaces to compute the motion displacement Δx.sub.m and the desired contact force F.sub.r:
Δx.sub.m=Ω.sub.m(x.sub.p,r−x.sub.r)
F.sub.r=k.sub.vir,rΩ.sub.f(x.sub.p,r−x.sub.r) (4)
where k.sub.vir,r is the stiffness of the virtual spring on the robot side. Monitoring these task-based control modalities at the robot level gives an intuitive and explicit control of the task interaction, whether it involves precise motions or contact forces. The robot local controller regulates the end-effector dynamic behavior to reach the desired control objectives and properly perform the remote task. On the haptic side, the controller must reflect the task interaction to the human operator. The haptic control objective is, therefore, to produce a high-fidelity remote sensing of the contact forces without any constraint in free space. The haptic control inputs reflect the contact force onto the force space while ensuring free motion of the operator onto the motion space. The projection-based approach ensures a good free space transparency as the desired control force is set to zero in motion space. In the force space, the feedback force is generated as:
F.sub.h=k.sub.vir,hΩ.sub.f,h(x.sub.p,h−x.sub.h) (5)
[0038] In addition to avoiding the exchange of force signals, computing the desired haptic and robot force through a virtual-spring model is a convenient tool to adjust the haptic feedback, with respect to the device maximum stiffness, and to adapt the robot behavior in contact. For example, one could reduce the spring stiffness to bring compliance in the force control input and to increase safety of the interaction. If the same virtual-spring model is implemented on both sides, the same (opposite) desired forces, up to the scaling factor, are input to the robot and haptic local loops.
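Equations 4 and 5 amount to a few lines of linear algebra. The following is a minimal sketch (function and argument names are hypothetical):

```python
import numpy as np

def dual_proxy_commands(x_pr, x_r, x_ph, x_h, Omega_f, Omega_m, Omega_f_h,
                        k_vir_r, k_vir_h):
    """Project the proxy errors onto motion/force spaces (Eqs. 4-5)."""
    dx_m = Omega_m @ (x_pr - x_r)                 # motion command for the robot
    F_r = k_vir_r * (Omega_f @ (x_pr - x_r))      # desired contact force (robot side)
    F_h = k_vir_h * (Omega_f_h @ (x_ph - x_h))    # haptic feedback force
    return dx_m, F_r, F_h
```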
Adjusting the Local Control Inputs
[0039] The last component of the dual-proxy model aims at maintaining the safety and adequacy of the control inputs for the local loops. The desired force and motion, computed from the proxy projection, are adjusted according to high-level exchanged information, such as the control and communication frequencies, the robot and haptic device limits, or task-related constraints. Therefore, each proxy-based input not only reflects the interaction, but is also adapted to the controlled robot and can be modified to increase task performance.
[0040] The first step is to perform safety checks on the control inputs with respect to hardware limitations, which ensures physical consistency of the desired force and motion for the two robots. The desired haptic feedback is maintained under the haptic device maximum force, and damping can be added in the force space to prevent undesired oscillations. On the robot side, if the motion input goes outside the range of motion, it is projected back onto the robot workspace. The robot control force can also be saturated to a maximum value to preserve the safety of the robot/environment contact. Finally, the monitoring of communication delays and possible latency spikes enables the dual-proxy model to reject potentially dangerous or outdated commands.
[0041] An additional safety stage is implemented on the robot side to prevent the commanded motion from going outside the robot kinematic and dynamic operational range. The motion command is interpolated according to the robot maximum velocity and acceleration in the operational space. The desired robot motion is, therefore, safely reachable, even if the human motion input is too fast or involves high acceleration changes. In addition, the frequency difference between the communication loop and the control loops might result in undesired step commands in the force space. Therefore, the desired forces on both sides are also interpolated to avoid step commands that would be uncomfortable for the human on the haptic device and potentially dangerous on the robot side.
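One possible realization of the velocity/acceleration-limited motion interpolation is a trapezoidal-style stepper. The patent does not specify the exact interpolation scheme, so the limiter below is an assumption sketched for illustration:

```python
import numpy as np

def interpolate_step(x_cur, x_goal, v_cur, v_max, a_max, dt):
    """One control step toward x_goal under velocity and acceleration limits
    (hypothetical trapezoidal-style limiter)."""
    err = x_goal - x_cur
    dist = np.linalg.norm(err)
    if dist < 1e-9:
        return x_goal.copy(), np.zeros(3)
    # highest velocity that still allows braking to a stop at the goal
    v_brake = np.sqrt(2.0 * a_max * dist)
    v_des = min(v_max, v_brake) * (err / dist)
    # acceleration-limited step toward the desired velocity
    dv = v_des - v_cur
    dv_norm = np.linalg.norm(dv)
    dv_max = a_max * dt
    if dv_norm > dv_max:
        dv *= dv_max / dv_norm
    v_new = v_cur + dv
    return x_cur + v_new * dt, v_new
```

Iterating this step produces a motion command that ramps up, cruises at the velocity limit, and decelerates near the goal, so a fast human input is tracked without exceeding the robot's operational limits.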
[0042] These safety measures enable the generation of safe commands for the robot and haptic device, regardless of what is happening on the other side of the communication network, or the time it took for the data to be transmitted.
Online Perception of Robot Environment for Task-Level Autonomy
[0043] Within this framework, proper knowledge and communication of the robot environment geometry, that is to say the force space and motion space, is crucial to haptic force fidelity. In certain tasks (in manufacturing for example), it is possible to know the location and direction of the constraints in advance. However, in general use cases, one would want the robot to autonomously detect the directions of constraints, in real time, to infer the force and motion spaces and properly parameterize the proxy projection. Here, we design a particle filter called the Force-Space Particle Filter (FSPF). The FSPF states represent the likely contact normals at the robot end effector and allow one to find the force and motion spaces. For an exemplary embodiment, the FSPF was developed and implemented for linear force and motion spaces, but its extension to angular force/motion spaces is immediate.
The Force-Space Particle Filter
[0044] The idea behind the FSPF is the following: given a robot in contact that is moving, one should be able to find the contact normal by combining contact force and velocity data. In particular, the contact normal is going to be more or less in the direction of the sensed force, and orthogonal to the robot velocity.
[0045] The FSPF state space is the 3D unit sphere plus its center. The sphere represents all the possible contact normals at the robot end effector, while its center encodes the belief that there is no contact on the robot end effector. Let P.sub.k be the list of particles at the k.sub.th iteration of the filter, and n.sub.p the number of particles. We will use the symbol ρ to refer to individual particles. Let us also note F.sub.s, v.sub.s, F.sub.m, F.sub.f respectively the end effector sensed force, the sensed velocity, the control force in motion space and the control force in force space. These four quantities are required in the update and resampling steps.
[0046] The update step involves creating a list of augmented particles {tilde over (P)}.sub.k from the current list of particles and the control and sensor inputs. {tilde over (P)}.sub.k contains more than n.sub.p particles. The resampling step first assigns a weight w(ρ, F.sub.s, v.sub.s)∈[0, 1] to each particle ρ∈{tilde over (P)}.sub.k that measures the likeliness of that particle given the sensor measurements, and then resamples n.sub.p particles from {tilde over (P)}.sub.k by prioritizing the particles with high weights in order to obtain the next list of particles P.sub.k+1.
Step 1: Update
[0047] At any point during robot operation, three things can happen: [0048] the contact directions can change, [0049] a contact direction can appear, or [0050] a contact direction can disappear.
[0051] The update step needs to accommodate these three possibilities. The augmented particle list is created starting with an empty list {tilde over (P)}.sub.k=∅ and using the procedure described in Algorithm 1. In one example, a parameter 0<ε<<1 is used to numerically determine if a vector is at the origin. The algorithm is explained next: [0052] 1. First, one takes each particle in P.sub.k and, if it is not at the center, moves it randomly following a normal distribution N(α, Σ), where α is the mean and Σ the covariance of the distribution. The obtained particle is added to {tilde over (P)}.sub.k. This allows the filter to take into account a possible rotation of the force space. [0053] 2. Second, one looks at the direction of the motion control force F.sub.m and creates a tentative particle ρ.sub.tent, assuming that if a contact appears, it will be in the direction towards which the robot is moving. To decide whether this tentative particle should be added, one evaluates its likeliness using the same weight function as the resampling step (described later), w(ρ.sub.tent, F.sub.s, v.sub.s). One then creates new particles between the force control F.sub.f direction and the tentative particle; the higher the likeliness of ρ.sub.tent, the more particles are added. This step allows the filter to take into account a new direction of contact. [0054] 3. Third, one adds one particle at the center in order to take into account a possible contact loss. [0055] 4. Finally, one normalizes all the particles that are not at the center to project them back onto the sphere.
TABLE-US-00001
Algorithm 1: FSPF Update Step
Result: a list of augmented particles {tilde over (P)}.sub.k
/* start with an empty list */
{tilde over (P)}.sub.k = [ ];
/* move the existing particles */
for ρ ∈ P.sub.k do
  if ∥ρ∥ > ε then
    d ~ N(α, Σ);
  else
    d = 0;
  end
  {tilde over (P)}.sub.k.append(ρ + d);
end
/* add particles in the direction of motion control */
if ∥F.sub.m∥ > ε then
  ρ.sub.tent = F.sub.m/∥F.sub.m∥;
  if ∥F.sub.f∥ > ε then
    u.sub.f = F.sub.f/∥F.sub.f∥;
  else
    u.sub.f = 0;
  end
  n.sub.add = w(ρ.sub.tent, F.sub.s, v.sub.s) * n.sub.p;
  for i ∈ [1, n.sub.add] do
    s = (i − 0.5)/n.sub.add;
    {tilde over (P)}.sub.k.append(s * u.sub.f + (1 − s) * ρ.sub.tent);
  end
end
/* add a particle at the center */
{tilde over (P)}.sub.k.append(0);
/* renormalize all particles */
for ρ ∈ {tilde over (P)}.sub.k do
  if ∥ρ∥ > ε then
    ρ = ρ/∥ρ∥;
  end
end
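Algorithm 1 translates almost line for line into Python. In this sketch, `weight` stands for the resampling weight function w(ρ, F.sub.s, v.sub.s), and the module-level random generator and default noise parameters are illustration choices:

```python
import numpy as np

_rng = np.random.default_rng(0)  # module-level generator (illustration choice)

def fspf_update(particles, F_m, F_f, F_s, v_s, weight,
                alpha=0.0, sigma=0.05, eps=1e-6):
    """FSPF update step (Algorithm 1): build the augmented particle list."""
    n_p = len(particles)
    augmented = []
    # 1. jitter existing particles; the center particle stays at the center
    for rho in particles:
        if np.linalg.norm(rho) > eps:
            augmented.append(rho + _rng.normal(alpha, sigma, size=3))
        else:
            augmented.append(np.zeros(3))
    # 2. seed particles toward the motion-control direction (possible new contact)
    if np.linalg.norm(F_m) > eps:
        rho_tent = F_m / np.linalg.norm(F_m)
        u_f = F_f / np.linalg.norm(F_f) if np.linalg.norm(F_f) > eps else np.zeros(3)
        n_add = int(weight(rho_tent, F_s, v_s) * n_p)
        for i in range(1, n_add + 1):
            s = (i - 0.5) / n_add
            augmented.append(s * u_f + (1.0 - s) * rho_tent)
    # 3. one particle at the center accounts for a possible contact loss
    augmented.append(np.zeros(3))
    # 4. project non-center particles back onto the unit sphere
    out = []
    for rho in augmented:
        n = np.linalg.norm(rho)
        out.append(rho / n if n > eps else np.zeros(3))
    return out
```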
TABLE-US-00002
Algorithm 2: FSPF Resampling Step
Result: a list of resampled particles P.sub.k+1
/* compute the list of cumulative weights */
w.sub.list = [ ];
w.sub.cumul = 0;
for ρ ∈ {tilde over (P)}.sub.k do
  w.sub.cumul = w.sub.cumul + w(ρ, F.sub.s, v.sub.s);
  w.sub.list.append(w.sub.cumul);
end
/* normalize the cumulative weights */
for l ∈ [1, length(w.sub.list)] do
  w.sub.list[l] = w.sub.list[l]/w.sub.cumul;
end
/* low variance resampling */
P.sub.k+1 = [ ];
r ~ U(0, 1/n.sub.p);
l = 1;
for i ∈ [1, n.sub.p] do
  while w.sub.list[l] < r do
    l = l + 1;
  end
  P.sub.k+1.append({tilde over (P)}.sub.k[l]);
  r = r + 1/n.sub.p;
end
Step 2: Resampling
[0056] One defines a force weight function w.sub.f and a velocity weight function w.sub.v for a given particle ρ and the value of the measured force F.sub.s and velocity v.sub.s. These functions represent a credence value for the particle based on the force sensor measurement and the velocity measurement. For the force weight function and particles on the sphere, one will consider that if the sensed force is higher than a threshold f.sub.h in the direction of the particle, then it is highly likely, and if it is lower than a threshold f.sub.l, it is not likely at all. For particles at the center of the sphere, one would consider them likely if there is no sensed force, and not likely if the sensed force is high. Equation 6 is defined as:
where <., .> represents the Euclidean scalar product. For the velocity weight function and particles on the sphere, one will consider that a particle is not very likely if there is a significant velocity in its direction or its opposite direction, and that it is likely otherwise. For particles at the center, the velocity gives no information (as the robot could be moving in free space or in contact), so one will assign a velocity weight of 0.5. Equation 7 is defined as:
where v.sub.h and v.sub.l are high and low velocity thresholds.
[0057] Using the previously defined functions, one can perform the resampling step. The weight of each particle w(ρ,F.sub.s,v.sub.s) is defined as the product of the force and velocity weights for that particle:
w(ρ,F.sub.s,v.sub.s)=w.sub.f(ρ,F.sub.s)w.sub.v(ρ,v.sub.s) (8)
[0058] One can see that w(ρ,F.sub.s,v.sub.s) is always between 0 and 1. Besides, the product means that as soon as one of the sensor modalities deems the particle unlikely, then it will be excluded. For the resampling itself, one could use low variance resampling to prevent the loss of particles that are isolated but have a high probability. The resampling step is implemented using Algorithm 2.
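The weight functions and low-variance resampling can be sketched as follows. Since the bodies of Equations 6 and 7 are not reproduced in this text, the piecewise-linear forms between the low and high thresholds below are assumptions that match the qualitative description above, and the default threshold values are hypothetical:

```python
import numpy as np

def w_force(rho, F_s, f_l=1.0, f_h=5.0, eps=1e-6):
    """Force weight (Eq. 6, assumed piecewise-linear form)."""
    if np.linalg.norm(rho) < eps:
        # center particle: likely when there is no sensed force
        return float(np.clip((f_h - np.linalg.norm(F_s)) / (f_h - f_l), 0.0, 1.0))
    # sphere particle: likely when the force along the particle exceeds f_h
    f = max(float(np.dot(F_s, rho)), 0.0)
    return float(np.clip((f - f_l) / (f_h - f_l), 0.0, 1.0))

def w_vel(rho, v_s, v_l=0.01, v_h=0.1, eps=1e-6):
    """Velocity weight (Eq. 7, assumed form); center particles are uninformative."""
    if np.linalg.norm(rho) < eps:
        return 0.5
    v = abs(float(np.dot(v_s, rho)))
    return float(np.clip((v_h - v) / (v_h - v_l), 0.0, 1.0))

def particle_weight(rho, F_s, v_s):
    """Combined weight (Eq. 8): product of force and velocity weights."""
    return w_force(rho, F_s) * w_vel(rho, v_s)

def low_variance_resample(particles, weights, n_out, rng):
    """Low-variance resampling (Algorithm 2)."""
    c = np.cumsum(weights)
    if c[-1] <= 0:
        return [np.zeros(3) for _ in range(n_out)]  # degenerate: all at the center
    r = rng.uniform(0, c[-1] / n_out)
    out, l = [], 0
    for i in range(n_out):
        u = r + i * c[-1] / n_out
        while c[l] < u:
            l += 1
        out.append(particles[l])
    return out
```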
Finding the Force and Motion Spaces
[0059] The output of the FSPF, described supra, is a list of likely particles at the current time. One would need to extract a likely force space and motion space from this particle list. Intuitively, the distribution of the particles will give enough information to estimate the force and motion directions. In particular: [0060] When most particles are at the center, one expects the robot to be in free space, [0061] When most particles are grouped somewhere on the sphere, one expects the force space to be of dimension one, and the location of the particles gives its expected direction. [0062] When most particles are along an arc of circle, one expects the force space to be of dimension two, and its direction is given by the plane that contains the arc of circle. [0063] When the particles are scattered in a 3d pattern on the sphere, one expects the force space to be of dimension three.
[0064] To find the dimension and direction of the force space, one performs an SVD of the particles. We define X.sub.p, a matrix of dimension 3×1.5n.sub.p, such that the first n.sub.p columns of X.sub.p are the coordinates of the n.sub.p particles of the current state of the particle filter P.sub.k, and the last 0.5n.sub.p columns are zero (representing particles at the center). The left SVD of X.sub.p will give a 3D ellipsoid that best explains the points, as depicted in
[0065] Let us note d.sub.fs the dimension of the force space, which can be between 0 and 3. Let us also note U the 3×3 matrix of the left singular vectors of X.sub.p, and U=[U.sub.1 U.sub.2 U.sub.3] its columns. Let us finally note S=[σ.sub.0, σ.sub.1, σ.sub.2] the vector whose elements are the singular values of X.sub.p ordered in decreasing order (σ.sub.0≥σ.sub.1≥σ.sub.2). One applies Algorithm 3 to find the selection matrices Ω.sub.f and Ω.sub.m. The algorithm is also described here: [0066] The norm of S gives an indication of the number of particles that are not at the center. Indeed, since all particles have norm 1 or 0, if m is the number of particles not at the center, we have ∥S∥=√m. In general, one will consider that the robot is in free space unless at least half of the particles are not at the center. Therefore, we have the following relationship:
if ∥S∥<{square root over (n.sub.p/2)}, d.sub.fs=0 (9) [0067] When d.sub.fs>0, two thresholds are defined, 0<α.sub.add<1 and 0<α.sub.remove≤α.sub.add, that are used to add or remove dimensions in the force space. When σ.sub.1>α.sub.addσ.sub.0 (respectively σ.sub.2>α.sub.addσ.sub.0), its direction is added to the force space. When σ.sub.1<α.sub.removeσ.sub.0 (respectively σ.sub.2<α.sub.removeσ.sub.0), its direction is removed from the force space. Choosing α.sub.add>α.sub.remove generates some hysteresis in the value of d.sub.fs, which helps stabilize the transitions. [0068] With the value of d.sub.fs, we can compute the projection matrices Ω.sub.f and Ω.sub.m from the columns of U.
[0069] The three algorithms presented in this section show the full implementation of the FSPF. The particle filter is used to provide an added layer of local autonomy. The controllers on the robot and haptic side can self-parameterize autonomously in real time using sensory observations. The FSPF and the dual-proxy model are the key innovations in the local autonomy-based approach. For the robot and haptic local controllers, many choices can be made.
TABLE-US-00003
Algorithm 3: FSPF Force and Motion Space Matrices Computation
Result: d.sub.fs, Ω.sub.f and Ω.sub.m
/* compute the SVD of the particles */
X.sub.p = Matrix.Zeros(3, 1.5n.sub.p);
i = 0;
for ρ ∈ P.sub.k do
  X.sub.p[:, i] = ρ;
  i = i + 1;
end
U, S, V = SVD(X.sub.p); // U is 3×3, S is 3×1 and the coefficients are in decreasing order
/* compute the dimension of the force space and the projection matrices */
if ∥S∥ < {square root over (n.sub.p/2)} then
  d.sub.fs = 0;
  Ω.sub.f = 0;
  Ω.sub.m = I.sub.3;
else
  /* compute the dimension using the lower and higher thresholds */
  d.sub.lb = 3 − (σ.sub.1 < α.sub.removeσ.sub.0) − (σ.sub.2 < α.sub.removeσ.sub.0);
  d.sub.hb = 1 + (σ.sub.1 > α.sub.addσ.sub.0) + (σ.sub.2 > α.sub.addσ.sub.0);
  if d.sub.fs < d.sub.lb then
    d.sub.fs = d.sub.lb;
  end
  if d.sub.fs > d.sub.hb then
    d.sub.fs = d.sub.hb;
  end
  /* compute the projection matrices */
  if d.sub.fs == 1 then
    Ω.sub.f = U.sub.1U.sub.1.sup.T;
    Ω.sub.m = I.sub.3 − Ω.sub.f;
  else if d.sub.fs == 2 then
    Ω.sub.m = U.sub.3U.sub.3.sup.T;
    Ω.sub.f = I.sub.3 − Ω.sub.m;
  else
    Ω.sub.f = I.sub.3;
    Ω.sub.m = 0;
  end
end
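Using NumPy's SVD, Algorithm 3 reduces to a short function. In this sketch, `d_fs_prev` carries the previous force-space dimension so the add/remove thresholds implement the hysteresis described above:

```python
import numpy as np

def force_motion_spaces(particles, n_p, d_fs_prev, a_add=0.3, a_rem=0.1):
    """Algorithm 3: force-space dimension and projection matrices via SVD."""
    X = np.zeros((3, int(1.5 * n_p)))
    for i, rho in enumerate(particles):
        X[:, i] = rho                      # remaining columns stay zero (center)
    U, S, _ = np.linalg.svd(X)             # S is sorted in decreasing order
    if np.linalg.norm(S) < np.sqrt(n_p / 2.0):
        return 0, np.zeros((3, 3)), np.eye(3)   # free space
    d_lb = 3 - int(S[1] < a_rem * S[0]) - int(S[2] < a_rem * S[0])
    d_hb = 1 + int(S[1] > a_add * S[0]) + int(S[2] > a_add * S[0])
    d_fs = max(d_fs_prev, d_lb)            # hysteresis: raise to the lower bound...
    d_fs = min(d_fs, d_hb)                 # ...and cap at the upper bound
    if d_fs == 1:
        Om_f = np.outer(U[:, 0], U[:, 0])
        Om_m = np.eye(3) - Om_f
    elif d_fs == 2:
        Om_m = np.outer(U[:, 2], U[:, 2])
        Om_f = np.eye(3) - Om_m
    else:
        Om_f, Om_m = np.eye(3), np.zeros((3, 3))
    return d_fs, Om_f, Om_m
```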
Implementation of the Local Controllers
[0070] The local autonomy-based dual-proxy method allows one to choose the robot and haptic device controllers independently. The dual-proxy model and FSPF together ensure the safety and physical consistency of the local control inputs, as well as the correct parameterization of the force and motion spaces. The robot local controller needs to be able to regulate both free space motions and physical interactions in a stable way. Two good choices are an impedance controller or a unified force/motion controller. The latter requires a re-parameterization of the controller when the force space changes, but it enables a completely decoupled dynamic response in force space and in motion space. In haptic applications, the choice of stiffness in force space will mostly depend on the maximum stiffness that the haptic device can render and the compliance needed for the task. One would want to be able to select it independently of the free-space motion tracking performance. For this reason, the inventors advocate the use of operational space control on the robot side. On the haptic side, the controller will mostly depend on which haptic device is used. When using a low-inertia haptic device with isotropic force capabilities, an open-loop force controller can be implemented to monitor the force feedback. Note that even if one used impedance control on the robot side, the force/motion-space projection would still be required to compute consistent haptic control inputs.
Local Robot and Haptic Device Controllers
[0071] Here the inventors used the unified force/motion controller in operational space to locally control both the motion and force modalities on the robot. With this decoupling technique, and assuming perfect estimates of the dynamic components, the motion/force loops regulate the desired behavior of the decoupled end effector, equivalent to a unit mass system in motion space and a force source in force space. This unified control strategy provides good performance in both motion and contact tasks. For completeness, the robot controller is described here. Consider a robot arm in contact with the environment at its end effector. The dynamic equations of the robot, expressed in joint space, are:
A(q){umlaut over (q)}+b(q,{dot over (q)})+g(q)+J.sup.TF.sub.c=Γ (10)
where A is the robot mass matrix, b is the vector of centrifugal and coriolis forces, g is the joint gravity vector, J is the task jacobian at the end effector, F.sub.c is the vector of contact forces resolved at the task point and Γ is the joint torque vector. The operational space dynamics of the robot at the end effector is obtained by multiplying the previous equation by the transpose of the dynamically consistent inverse of the Jacobian:
Λ{umlaut over (x)}+μ+p+F.sub.c=F (11) [0072] where
Λ=(JA.sup.−1J.sup.T).sup.−1
μ={overline (J)}.sup.Tb−Λ{dot over (J)}{dot over (q)}
p={overline (J)}.sup.Tg
are respectively the task space inertia matrix, the task space centrifugal and Coriolis forces, and the task space gravity, with {overline (J)}=A.sup.−1J.sup.TΛ the dynamically consistent inverse of the Jacobian. F is the task force and x is the vector of task space coordinates.
[0073] With the robot in contact, its workspace is separated into the force space (directions of constraints) and the motion space (directions of free motion). The force-space and motion-space selection matrices, respectively Ω.sub.f and Ω.sub.m, are obtained via the FSPF. By projecting the robot-proxy interaction, the dual-proxy model computes two setpoints for the robot local controller: the displacement Δx.sub.m in motion space and the desired contact force F.sub.r in force space. With the chosen unified force/motion control approach, two distinct controllers are then designed to monitor the robot behavior in the force and motion directions.
[0074] The control force in motion space F.sub.m is generated by an operational space proportional-derivative (PD) controller:
F.sub.m={circumflex over (Λ)}(K.sub.pmΔx.sub.m−K.sub.vmΩ.sub.m{dot over (x)}.sub.r) (12)
where the estimate of the task space inertia {circumflex over (Λ)} is used to dynamically decouple the system. K.sub.pm is the proportional gain and K.sub.vm is the derivative gain.
[0075] The control force in force space F.sub.f is computed to follow the desired robot force F.sub.r computed by the dual-proxy model thanks to a proportional integral (PI) controller with feedforward force compensation and a velocity-based damping term. The proportional and integral gains are respectively K.sub.pf and K.sub.if and the damping gain is K.sub.vf. The sensed task force F.sub.s is used as feedback to the PI loop.
F.sub.f=F.sub.r−K.sub.pf(F.sub.s−F.sub.r)−K.sub.if∫(F.sub.s−F.sub.r)−K.sub.vfΩ.sub.f{dot over (x)}.sub.r (13)
[0076] Finally, the two controllers are composed together, with two additional gravity and Coriolis/centrifugal compensation terms, to produce the dynamically decoupled and unified force/motion control force F.
F=F.sub.f+F.sub.m+{circumflex over (μ)}+{circumflex over (p)} (14)
{circumflex over (μ)} and {circumflex over (p)} respectively represent estimates of the task space Coriolis and centrifugal forces, and of the gravity. The control torques in joint space are then simply computed as:
Γ=J.sup.TF (15)
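Equations 12 through 15 compose into a single control-torque computation. The sketch below is a direct transcription; the caller is assumed to maintain the running force-error integral `F_int`, and the function name is hypothetical:

```python
import numpy as np

def unified_force_motion_torques(J, Lambda_hat, mu_hat, p_hat,
                                 Omega_f, Omega_m, dx_m, xdot,
                                 F_r, F_s, F_int,
                                 Kp_m, Kv_m, Kp_f, Ki_f, Kv_f):
    """Unified force/motion control law (Eqs. 12-15).
    F_int is the running integral of (F_s - F_r), maintained by the caller."""
    F_m = Lambda_hat @ (Kp_m * dx_m - Kv_m * (Omega_m @ xdot))   # Eq. 12
    F_f = (F_r - Kp_f * (F_s - F_r) - Ki_f * F_int
           - Kv_f * (Omega_f @ xdot))                            # Eq. 13
    F = F_f + F_m + mu_hat + p_hat                               # Eq. 14
    return J.T @ F                                               # Eq. 15
```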
[0077] If the robot is redundant with respect to the task, the redundancy can be controlled thanks to a control torque vector Γ.sub.0 projected into the dynamically consistent null space of the task N such that:
Γ=J.sup.TF+N.sup.TΓ.sub.0 (16)
where
N=I.sub.n×n−{overline (J)}J (17)
with {overline (J)}=A.sup.−1J.sup.TΛ the dynamically consistent inverse of the Jacobian.
[0078] Such a unified force/motion closed loop at the robot level yields the expected end-effector dynamic behavior to perform the remote task. It monitors the task-consistent control inputs, generated by the dual-proxy model to produce the haptic-robot interactions.
[0079] On the haptic side, the haptic local closed loop must reflect the task interaction to the human operator. The desired haptic feedback F.sub.h computed by the dual-proxy model is sent as an input to the local controller. This force command lies exclusively within the force space, such that the motion feels fully unconstrained in the motion space. The haptic controller also needs to compensate for parasitic dynamic components of the haptic device, which can reasonably be reduced to the operational space estimated gravity {circumflex over (p)}.sub.h when using a low-inertia, low-friction and parallel haptic device. For such devices, which rarely have a force sensor, open-loop force control is usually satisfactory. Coriolis and centrifugal effects are also neglected as they are small and depend on the human arm dynamics. The haptic control force is computed, with a velocity-based damping term of gain K.sub.vf,h, by:
F.sub.f,h=F.sub.h+{circumflex over (p)}.sub.h−K.sub.vf,hΩ.sub.f,h{dot over (x)}.sub.h (18)
[0080] Overall, the full implementation of the local controllers is depicted in
[0081] This application claims the benefit of U.S. Provisional Application 63/116,418, filed Nov. 20, 2020, which is hereby incorporated by reference for everything it teaches.