Robot control apparatus, robot control method, program, recording medium and robot system

09616573 · 2017-04-11

Abstract

An end effector can be moved to a target position and calibration loads can be reduced, even if there is an error in a kinematic operation in a robot main body and/or cameras. A positional deviation integral torque obtained by integrating a value corresponding to a positional deviation is applied to joints while being superimposed with torques based on angular deviations. If movements of the joints stop or are about to stop before the end effector reaches a target position due to an error in a kinematic operation, the positional deviation integral torque increases with time, to move said joints and move the end effector to the target position. Thus, the end effector can be reliably moved to the target position by the positional deviation integral torque, and calibration loads can be reduced.

Claims

1. A robot control apparatus that controls a robot configured to include a freely displaceable joint and move an end effector according to a displacement of the joint, comprising: an imager to image a reference position provided on the robot while capturing the reference position and a destination of the reference position in a field of view; a displacement amount detector to detect a displacement amount of the joint; a positional deviation acquirer to acquire a positional deviation from the destination for the reference position based on an imaging result of the imager; an application amount calculator to calculate a first application amount from a result of acquiring a value of integral acquired by performing an integral operation on a value corresponding to the positional deviation for the reference position and a second application amount by performing a proportional operation on a value corresponding to a displacement amount deviation between a detected displacement amount by the displacement amount detector and a target displacement amount which is the displacement amount of the joint when the reference position coincides with the destination, each of said first and second application amount being a force or a torque applied to the joint to displace the joint; and a drive controller to move the reference position to the destination thereof by applying the first application amount and second application amount to the joint.

2. The robot control apparatus according to claim 1, controlling the robot to move the end effector according to a rotation of the joint, wherein: the imager images the reference position provided on the end effector while capturing the reference position and the destination of the reference position; the displacement amount detector is an angle detector to detect a rotation angle of the joint; and the application amount calculator is a torque calculator to calculate a first torque as the first application amount by performing the integral operation on the value corresponding to the positional deviation and a second torque as the second application amount by performing the proportional operation on a value corresponding to an angular deviation between a detected angle by the angle detector and a target angle as the target displacement amount which is the rotation angle of the joint when the reference position is at the destination; and the drive controller controls a joint drive mechanism to drive the joint so as to apply the first and second torques to the joint.

3. The robot control apparatus according to claim 2, wherein: the application amount calculator calculates a third torque by performing a proportional operation on the value corresponding to the positional deviation; and the drive controller controls the joint drive mechanisms so that the third torque is further applied to the joint.

4. The robot control apparatus according to claim 2, wherein the application amount calculator calculates the first torque by performing the integral operation on the positional deviation multiplied by a transposed Jacobian matrix.

5. The robot control apparatus according to claim 1, controlling the robot in which the end effector is attached to one end of an arm that has a plurality of the joints and has six or more degrees of freedom and to which a plurality of the reference positions including three or more reference positions are set, wherein the imager images the three or more reference positions while capturing the destination of each of the three or more reference positions that include at least two positions set on the end effector side from a tip joint closest to one end side out of the joints in the field of view; the positional deviation acquirer acquires the positional deviation for each reference position; the application amount calculator calculates the first application amount and the second application amount for each joint; and the drive controller controls a location and a posture of the end effector in three dimensions by applying the first application amount and second application amount to the plurality of the joints to move the three or more reference positions toward the destinations thereof.

6. The robot control apparatus according to claim 5, wherein the drive controller moves each of the three or more reference positions including a specific reference position that is at a position other than the at least two positions and on the tip joint or on an other end side opposite to the one end side from the tip joint toward the destination thereof.

7. The robot control apparatus according to claim 6, wherein the specific reference position is set on a specific joint or on the other end side from the specific joint which is a third joint from the one end side out of the plurality of the joints.

8. The robot control apparatus according to claim 7, wherein the specific reference position is set on the one end side from the joint which is the third joint from the other end out of the plurality of the joints.

9. The robot control apparatus according to claim 7, wherein the application amount calculator calculates the first application amount by adding a result of multiplying the value of integral for the specific reference position by a first weight coefficient and a result of multiplying the value of integral for the reference positions other than the specific reference position by a second weight coefficient, and the first weight coefficient is larger than the second weight coefficient.

10. The robot control apparatus according to claim 5, wherein the application amount calculator calculates a third application amount from a result of acquiring a proportional value, which is acquired by performing a proportional operation on the value corresponding to the positional deviation, for each reference position and the drive controller further applies the third application amount to the plurality of the joints.

11. The robot control apparatus according to claim 5, wherein the application amount calculator calculates the first application amount by acquiring the value of integral by performing an integral operation on the positional deviation multiplied by a transposed Jacobian matrix.

12. A robot control method that controls a robot configured to include a freely displaceable joint and move an end effector according to a displacement of the joint, comprising: a step of imaging a reference position provided on the robot while capturing the reference position and a destination of the reference position in a field of view; a step of acquiring a positional deviation from the destination for the reference position based on an imaging result of the reference position; a step of acquiring a detected displacement amount by detecting a displacement amount of the joint; a step of calculating a first application amount from a result of acquiring a value of integral acquired by performing an integral operation on a value corresponding to the positional deviation for the reference position and a second application amount by performing a proportional operation on a value corresponding to a displacement amount deviation between the detected displacement amount and a target displacement amount which is the displacement amount of the joint when the reference position coincides with the destination, each of said first and second application amount being a force or a torque applied to the joint to displace the joint; and a step of moving the reference position to the destination thereof by applying the first application amount and second application amount to the joint.

13. A non-transitory computer-readable recording medium having recorded thereon a robot control program that causes a computer to control a robot configured to include a freely displaceable joint and move an end effector according to a displacement of the joint, causing the computer to perform: a step of imaging a reference position provided on the robot while capturing the reference position and a destination of the reference position in a field of view; a step of acquiring a positional deviation from the destination for the reference position based on an imaging result of the reference position; a step of acquiring a detected displacement amount by detecting a displacement amount of the joint; a step of calculating a first application amount from a result of acquiring a value of integral acquired by performing an integral operation on a value corresponding to the positional deviation for the reference position and a second application amount by performing a proportional operation on a value corresponding to a displacement amount deviation between the detected displacement amount and a target displacement amount which is the displacement amount of the joint when the reference position coincides with the destination, each of said first and second application amount being a force or a torque applied to the joint to displace the joint; and a step of moving the reference position to the destination thereof by applying the first application amount and second application amount to the joint.

14. A robot system, comprising: a robot configured to include a freely displaceable joint and move an end effector according to a displacement of the joint; an imager to image a reference position provided on the robot while capturing the reference position and a destination of the reference position in a field of view; a displacement amount detector to detect a displacement amount of the joint; a positional deviation acquirer to acquire a positional deviation from the destination for the reference position based on an imaging result of the imager; an application amount calculator to calculate a first application amount from a result of acquiring a value of integral acquired by performing an integral operation on a value corresponding to the positional deviation for the reference position and a second application amount by performing a proportional operation on a value corresponding to a displacement amount deviation between a detected displacement amount by the displacement amount detector and a target displacement amount which is the displacement amount of the joint when the reference position coincides with the destination, each of said first and second application amount being a force or a torque applied to the joint to displace the joint; and a drive controller to move the reference position to the destination thereof by applying the first application amount and second application amount to the joint.

Description

BRIEF DESCRIPTION OF DRAWINGS

(1) FIG. 1 is a diagram showing an example of a robot system according to a first embodiment of the invention.

(2) FIG. 2 is a block diagram showing an electrical configuration for executing the position control of the end effector 4 in the robot system of FIG. 1.

(3) FIG. 3 is a block diagram showing an example of the position control of the end effector executed in the robot system.

(4) FIG. 4 is a diagram showing an example of a robot system according to a second embodiment of the invention.

(5) FIG. 5 is a diagram showing an example of states of torques acting on the end effector in the process of moving the reference points to the destinations thereof.

(6) FIG. 6 is a graph showing experimental results in Example 1.

(7) FIG. 7 is a graph showing experimental results in Example 1.

(8) FIG. 8 is a graph showing experimental results in Example 2.

(9) FIG. 9 is a graph showing experimental results in Example 2.

(10) FIG. 10 is a graph showing experimental results in Example 2.

(11) FIG. 11 is a graph showing experimental results in Example 2.

(12) FIG. 12 is a graph showing an experimental result in Example 3.

(13) FIG. 13 is a graph showing an experimental result in Example 3.

(14) FIG. 14 is a table showing set values of gains in Example 4.

(15) FIG. 15 is a graph showing a simulation result in Example 4.

(16) FIG. 16 is a table showing the set values of the gains in Example 5.

(17) FIG. 17 is a graph showing a simulation result in Example 5.

(18) FIG. 18 is a table showing the set values of the gains in Example 6.

(19) FIG. 19 is a graph showing a simulation result in Example 6.

(20) FIG. 20 is a table showing the set values of the gains in Example 7.

(21) FIG. 21 is a graph showing a simulation result in Example 7.

(22) FIG. 22 is a table showing the set values of the gains in Example 8.

(23) FIG. 23 is a graph showing a simulation result in Example 8.

DESCRIPTION OF EMBODIMENTS

(24) First Embodiment

(25) FIG. 1 is a diagram showing an example of a robot system according to a first embodiment of the invention. As shown in FIG. 1, the robot system 1 includes a robot 2 and two cameras C1, C2. Note that, in FIG. 1, the robot 2 is schematically shown by symbol notation of joints Q1 to Q3, links L1 to L3 and an end effector 4 and the cameras C1, C2 are schematically shown by being represented by image planes IM1, IM2 thereof.

(26) The robot 2 has such a schematic configuration that the end effector 4 (tool) is attached to the tip of an arm 3 which includes freely rotatable joints Q1 to Q3 and moves according to the rotation of the joints Q1 to Q3. Specifically, the arm 3 is formed by attaching the link L1 between the joints Q1, Q2, attaching the link L2 between the joints Q2, Q3 and attaching one end of the link L3 to the joint Q3. The end effector 4 is attached to the other end of the link L3 (tip of the arm 3). The joint Q1 rotates around a vertical axis, thereby rotating the members on the side of the end effector 4 around the vertical axis. The joint Q2 rotates to change angles between the link L1 and the members on the side of the end effector 4. The joint Q3 rotates to change angles between the members on the side of the end effector 4 and the link L2.
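As a concrete illustration of how the position of the end effector follows the rotation angles q1 to q3 (the robot kinematics), the following sketch computes the end effector position for an assumed 3R geometry; the link lengths and the zero-angle pose are illustrative assumptions, not values taken from this document.

```python
import numpy as np

def forward_kinematics(q, l1=0.3, l2=0.25, l3=0.2):
    """Position p of the end effector for joint angles q = (q1, q2, q3).

    Illustrative 3R geometry (link lengths l1..l3 are assumptions):
    q1 rotates about the vertical Z axis, q2 and q3 pitch the links
    within the vertical plane selected by q1.
    """
    q1, q2, q3 = q
    # radial reach and height in the vertical plane selected by q1
    r = l2 * np.cos(q2) + l3 * np.cos(q2 + q3)
    z = l1 + l2 * np.sin(q2) + l3 * np.sin(q2 + q3)
    return np.array([r * np.cos(q1), r * np.sin(q1), z])
```

With all angles at zero this assumed arm is stretched horizontally, so the position p changes smoothly as any of q1 to q3 is varied.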

(27) In the thus configured robot 2, the end effector 4 can be moved by changing rotation angles q1 to q3 of the joints Q1 to Q3. Particularly, this robot system 1 controls a position p of the end effector 4 by adjusting the rotation angles q1 to q3 of the joints Q1 to Q3 based on detection results of the position p of the end effector 4 (specifically, representative point such as a TCP (Tool Center Point)) by the cameras C1, C2. On this occasion, a mark such as an LED (Light Emitting Diode) may be attached to the representative point to improve visibility in the cameras C1, C2.
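The detection of the representative point (e.g. an LED mark at the TCP) in a camera image can be sketched as a simple bright-blob centroid; the threshold value and the assumption that the mark is the brightest region are illustrative, not from this document.

```python
import numpy as np

def marker_centroid(image, threshold=200):
    """Pixel coordinates (u, v) of a bright mark in a grayscale image.

    Minimal sketch: the TCP mark is assumed to be the only region at or
    above `threshold` (an assumed value).
    """
    v_idx, u_idx = np.nonzero(image >= threshold)
    if u_idx.size == 0:
        return None  # mark not in the field of view
    return float(u_idx.mean()), float(v_idx.mean())

img = np.zeros((8, 8))
img[2:4, 5:7] = 255  # synthetic 2x2 bright mark
assert marker_centroid(img) == (5.5, 2.5)
```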

(28) That is, the cameras C1, C2 are respectively positioned to include the target position pd as the destination of the end effector 4 in the fields of view thereof, and the positional deviation Δp between the position p of the end effector 4 and the target position pd is captured in mutually different planes. Specifically, the cameras C1, C2 are, for example, so arranged that the image planes IM1, IM2 are perpendicular to each other, a YZ plane of a task coordinate system is imaged by the camera C1 and a ZX plane of the task coordinate system is imaged by the camera C2. The position control of the end effector 4 is executed by adjusting the rotation angles q1 to q3 of the joints Q1 to Q3 to reduce the positional deviation Δp between the target position pd and the position p of the end effector 4 detected from the imaging results of the cameras C1, C2 (visual feedback).

(29) On this occasion, it may be configured so that an imaging result of the robot 2 at a singular point is not fed back to the position control of the end effector 4. In a specific example, the cameras C1, C2 may be so arranged that the robot 2 at the singular point is outside the fields of view of the cameras C1, C2. In this way, it is possible to prevent the visual feedback from being performed based on an imaging result of the robot 2 at the singular point, which could make the control of the robot 2 unstable.
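One common way to test proximity to a singular configuration, not stated in this document but consistent with its purpose, is Yoshikawa's manipulability measure, which collapses to zero at a singularity; the tolerance below is an assumed value.

```python
import numpy as np

def near_singularity(J, eps=1e-3):
    """True if the arm is close to a singular point.

    Uses the manipulability measure sqrt(det(J J^T)); eps is an assumed
    tolerance. The max() guards against tiny negative determinants
    caused by floating-point round-off.
    """
    w = np.sqrt(max(np.linalg.det(J @ J.T), 0.0))
    return w < eps

# a rank-deficient Jacobian (two identical columns) is singular
J_sing = np.array([[1.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.0, 0.0, 0.0]])
assert near_singularity(J_sing)
```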

(30) Here, the notation of coordinate systems and each control amount used in the position control of the end effector 4 is described. As shown in FIG. 1, the three-dimensional task coordinate system configured by X, Y and Z axes perpendicular to each other with the Z axis as a vertical axis is defined for a task space where the end effector 4 operates. Thus, the position p of the end effector 4 is given by a three-dimensional vector (px, py, pz). Similarly, the target position pd of the end effector 4 is also given by a three-dimensional vector (pdx, pdy, pdz).

(31) A three-dimensional camera coordinate system configured by Ui, Vi and Wi axes perpendicular to each other is defined for each camera Ci. Here, i is a number for distinguishing the camera (i=1, 2), the Ui axis is a horizontal axis of the i-th camera Ci, the Vi axis is a vertical axis of the i-th camera Ci and the Wi axis is a depth axis of the i-th camera Ci. Further, in FIG. 1, the image planes IMi of the cameras Ci in the UiVi planes are shown, together with coordinates iξ as the projection of the position p (reference point) on the image planes IMi (i.e. coordinates of the position p in the coordinate systems of the cameras Ci) and coordinates iξd as the projection of the target position pd (destination) on the image planes IMi (i.e. coordinates of the target position pd in the coordinate systems of the cameras Ci). These coordinates iξ, iξd are specifically as follows.

(32) ${}^i\xi = \begin{bmatrix} {}^iu \\ {}^iv \end{bmatrix}$: coordinate as projection of reference point p on image plane IMi; ${}^i\xi_d = \begin{bmatrix} {}^iu_d \\ {}^iv_d \end{bmatrix}$: coordinate as projection of destination pd on image plane IMi [Equation 1]

(33) The rotation angles q of the joints Q of the robot 2 are expressed by a vector (q1, q2, q3) including the rotation angle qn of the joint Qn as each component. Here, the notation of joints Q is the collective notation of the joints Q1 to Q3 and n is a number for distinguishing the joint (n=1, 2, 3). Further, target angles qd (=qd1, qd2, qd3) are the rotation angles q of the joints Q when the end effector 4 is at the target position pd. Furthermore, torques τ applied to the joints Q of the robot 2 are expressed by a vector (τ1, τ2, τ3) including a torque τn acting on the joint Qn as each component.

(34) The above is description of the notation of the coordinate systems and the control amounts. Next, the position control of the end effector 4 is described in detail. FIG. 2 is a block diagram showing an electrical configuration for executing the position control of the end effector 4 in the robot system of FIG. 1. As shown in FIG. 2, the robot system 1 includes a controller 5 in charge of an arithmetic function and configured by a CPU (Central Processing Unit), a memory and the like. This controller 5 executes the position control of the end effector 4 in accordance with a program 7 recorded in a recording medium 6. Note that various media such as CDs (Compact Discs), DVDs (Digital Versatile Discs), USB (Universal Serial Bus) memories can be used as the recording medium 6. Further, in the robot system 1, a motor Mn for driving the joint Qn is provided for each of the joints Q1 to Q3, and an encoder En for detecting a rotational position of the motor Mn is provided for each of the motors M1 to M3.

(35) The controller 5 adjusts the rotation angles q of the joints Q of the robot 2 by controlling each of the motors M1 to M3. On this occasion, to perform the aforementioned visual feedback, the controller 5 detects the positional deviation Δp (= pd − p) of the end effector 4 from the imaging results of the end effector 4 by the cameras Ci (external sensors), while controlling panning/tilting of the cameras Ci so that the target position pd coincides with or is proximate to the origins of the coordinate systems of the cameras Ci (centers of the image planes IMi).

(36) Further, in parallel with the detection of the positional deviation Δp of the end effector 4, the controller 5 detects angular deviations Δq (= qd − q) of the joints Q from the outputs of the encoders E1 to E3 (internal sensors). Then, the controller 5 calculates the torques τ based on the positional deviation Δp (= pd − p) and the angular deviations Δq (= qd − q). Then, the motors M1 to M3 apply the torques τ to the joints Q, thereby adjusting the rotation angles q of the joints Q. As just described, the detection results of the cameras Ci and the encoders En are fed back to the torques τ to control the position of the end effector 4 in this embodiment.

(37) FIG. 3 is a block diagram showing an example of the position control of the end effector executed in the robot system. In FIG. 3 and the following description, a symbol ~ (tilde) indicates that the quantity to which it is attached possibly includes an error, a symbol T on the upper right corner of a matrix indicates the transposed matrix of that matrix, a symbol −1 on the upper right corner of a matrix indicates the inverse matrix of that matrix, a symbol s is a variable of Laplace transform (Laplace variable) and a dot indicates a time derivative. Further, in FIG. 3 and the following equations, the symbol d attached to the target position pd, the target angles qd and the like is written as a subscript as appropriate.

(38) When the torques τ are applied to the joints Q of the robot 2, the robot 2 moves in accordance with robot dynamics 201 and the joints Q of the robot 2 have the rotation angles q. Here, the robot dynamics 201 specify a relationship between the torques acting on the mechanism of the robot 2 and the acceleration created by these torques. As a result, the end effector 4 moves to the position p corresponding to the rotation angles q in accordance with robot kinematics 202. Further, to execute the aforementioned feedback control, the controller 5 has an external loop Lx to feed the positional deviation Δp back to the torques τ and an internal loop Lq to feed the angular deviations Δq back to the torques τ.

(39) In the external loop Lx, the position p of the end effector 4 is detected by the two cameras C1, C2. In other words, the position p of the end effector 4 in the task coordinate system is transformed into coordinates 1ξ, 2ξ of the coordinate systems of the respective cameras C1, C2 by a coordinate transform 203. Then, the cameras C1, C2 output values (1ξd − 1ξ), (2ξd − 2ξ) indicating the positional deviation Δp in the respective coordinate systems. Specifically, the value (iξd − iξ) indicating the positional deviation Δp in the coordinate system of the camera Ci is as follows.

(40) $\begin{bmatrix} {}^iu_d - {}^iu \\ {}^iv_d - {}^iv \end{bmatrix}$: value (iξd − iξ) indicating positional deviation Δp in coordinate system of camera Ci [Equation 2]

(41) Then, the torques τ based on the positional deviation Δp in the task coordinate system are calculated from the positional deviations (iξd − iξ) expressed in the coordinate systems of the respective cameras C1, C2. In this calculation of the torques τ, a relationship established between the positional deviations in the camera coordinate systems and the positional deviation in the task coordinate system can be used.

(42) That is, the following relationship is established between the coordinate systems of the cameras Ci and the task coordinate system.

(43)
$$ {}^il \begin{bmatrix} {}^iu \\ {}^iv \\ 1 \end{bmatrix} = {}^iA \begin{bmatrix} {}^iR^T & -{}^iR^T\,{}^it \end{bmatrix} \begin{bmatrix} p_x \\ p_y \\ p_z \\ 1 \end{bmatrix} = {}^iP \begin{bmatrix} p_x \\ p_y \\ p_z \\ 1 \end{bmatrix} \quad [\text{Equation 3}] $$
${}^il$: distance between image plane IMi and position p in the depth-axis direction of the i-th camera Ci (depth distance)
${}^iA$: 3×3 matrix expressing the internal parameter A of the i-th camera Ci
${}^iR$: 3×3 rotation matrix transforming coordinates from the coordinate system of the i-th camera Ci to the task coordinate system
${}^it$: vector from the origin of the coordinate system of the i-th camera Ci to the origin of the task coordinate system
${}^iP = \begin{bmatrix} {}^iP_{11} & {}^iP_{12} & {}^iP_{13} & {}^iP_{14} \\ {}^iP_{21} & {}^iP_{22} & {}^iP_{23} & {}^iP_{24} \\ {}^iP_{31} & {}^iP_{32} & {}^iP_{33} & {}^iP_{34} \end{bmatrix}$: 3×4 perspective projection matrix of the i-th camera Ci

(44) Note that a matrix expressing an internal parameter A of the camera is specifically given by the following equation.

(45)
$$ A = \begin{bmatrix} f k_u & -f k_u \cot\theta & u_0 \\ 0 & f k_v / \sin\theta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \quad [\text{Equation 4}] $$
f: focal length of camera
k_u: scale in horizontal-axis direction of camera
k_v: scale in vertical-axis direction of camera
θ: angle between horizontal-axis direction and vertical-axis direction of camera
u_0: horizontal component of the point where the optical axis intersects the image plane
v_0: vertical component of the point where the optical axis intersects the image plane
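The internal-parameter matrix of Equation 4 and the projection of Equation 3 can be sketched as follows; the extrinsic quantities R and t are used exactly as they appear in Equation 3, and all numeric values in the usage are illustrative assumptions.

```python
import numpy as np

def intrinsic_matrix(f, ku, kv, theta, u0, v0):
    """Internal-parameter matrix A of Equation 4."""
    return np.array([[f * ku, -f * ku / np.tan(theta), u0],
                     [0.0,    f * kv / np.sin(theta),  v0],
                     [0.0,    0.0,                     1.0]])

def project(A, R, t, p):
    """Project a task-space point p onto the image plane per Equation 3.

    R and t enter the 3x4 projection matrix as A [R^T, -R^T t], as
    written in Equation 3. Returns image coordinates (u, v) and the
    depth distance l.
    """
    P = A @ np.hstack([R.T, (-R.T @ t).reshape(3, 1)])  # 3x4 matrix iP
    uvw = P @ np.append(p, 1.0)
    return uvw[:2] / uvw[2], uvw[2]
```

For a camera with square pixels (theta = π/2, so the skew term vanishes) aligned with the task frame, a point on the optical axis projects to the principal point (u0, v0) and the depth distance equals its distance along the axis.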

(46) Since the relationship of Equation 3 is established, the following relationship is established between the positional deviations (iξd − iξ) in the camera coordinate systems and the positional deviation (pd − p) in the task coordinate system.

(47)
$$ \begin{bmatrix} {}^iu_d - {}^iu \\ {}^iv_d - {}^iv \\ 0 \end{bmatrix} = \frac{1}{{}^il_d}\,{}^iP \begin{bmatrix} p_{dx} \\ p_{dy} \\ p_{dz} \\ 1 \end{bmatrix} - \frac{1}{{}^il}\,{}^iP \begin{bmatrix} p_x \\ p_y \\ p_z \\ 1 \end{bmatrix} = \frac{1}{{}^il_d}\,{}^iA\,{}^iR^T \begin{bmatrix} \Delta p_x \\ \Delta p_y \\ \Delta p_z \end{bmatrix} + \left( \frac{1}{{}^il_d} - \frac{1}{{}^il} \right) {}^iP \begin{bmatrix} p_x \\ p_y \\ p_z \\ 1 \end{bmatrix} = \frac{1}{{}^il_d}\,{}^iA\,{}^iR^T \begin{bmatrix} \Delta p_x \\ \Delta p_y \\ \Delta p_z \end{bmatrix} + \frac{{}^il - {}^il_d}{{}^il_d} \begin{bmatrix} {}^iu \\ {}^iv \\ 1 \end{bmatrix} \quad [\text{Equation 5}] $$
${}^il_d$: distance between image plane IMi and destination pd in the depth-axis direction of the i-th camera Ci (depth distance)

(48) Further, the following relationship is also established.

(49)
$$ {}^il_d - {}^il = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \left\{ {}^iP \begin{bmatrix} p_{dx} \\ p_{dy} \\ p_{dz} \\ 1 \end{bmatrix} - {}^iP \begin{bmatrix} p_x \\ p_y \\ p_z \\ 1 \end{bmatrix} \right\} = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} {}^iA\,{}^iR^T \begin{bmatrix} \Delta p_x \\ \Delta p_y \\ \Delta p_z \end{bmatrix} = {}^iP_{31} \Delta p_x + {}^iP_{32} \Delta p_y + {}^iP_{33} \Delta p_z \quad [\text{Equation 6}] $$

(50) Then, the following equation is obtained from Equations 5 and 6.

(51)
$$ \begin{bmatrix} {}^iu_d - {}^iu \\ {}^iv_d - {}^iv \\ 0 \end{bmatrix} = \frac{1}{{}^il_d} \left[ {}^iA\,{}^iR^T - {}^i\Pi \right] \begin{bmatrix} \Delta p_x \\ \Delta p_y \\ \Delta p_z \end{bmatrix}, \qquad {}^i\Pi = \begin{bmatrix} {}^iP_{31}\,{}^iu & {}^iP_{32}\,{}^iu & {}^iP_{33}\,{}^iu \\ {}^iP_{31}\,{}^iv & {}^iP_{32}\,{}^iv & {}^iP_{33}\,{}^iv \\ {}^iP_{31} & {}^iP_{32} & {}^iP_{33} \end{bmatrix} \quad [\text{Equation 7}] $$

(52) Here, a coefficient matrix iΞ relating the positional deviations (iξd − iξ) expressed in the camera coordinate systems and the positional deviation (pd − p) expressed in the task coordinate system is defined by the following equation.

(53)
$$ {}^1\Xi = \left[ {}^1\tilde{A}\,{}^1\tilde{R}^T \right]^{-1} I_0\,{}^1\tilde{E}^{-1}, \qquad {}^2\Xi = \left[ {}^2\tilde{A}\,{}^2\tilde{R}^T \right]^{-1} I_0\,{}^2\tilde{E}^{-1}, \qquad I_0 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix} \quad [\text{Equation 8}] $$
${}^iE$: 2×2 matrix expressing the lens distortion E of the i-th camera Ci

(54) Note that a matrix expressing a lens distortion E of the camera specifically satisfies the following equation.

(55)
$$ \begin{bmatrix} {}^iu_{dt} - {}^iu_t \\ {}^iv_{dt} - {}^iv_t \end{bmatrix} = {}^iE \begin{bmatrix} {}^iu_d - {}^iu \\ {}^iv_d - {}^iv \end{bmatrix} \quad [\text{Equation 9}] $$
$\begin{bmatrix} {}^iu_{dt} - {}^iu_t \\ {}^iv_{dt} - {}^iv_t \end{bmatrix}$: value expressing the positional deviation Δp in the coordinate system of the camera Ci with the effect of the lens distortion eliminated

(56) The following equation relating the positional deviations (1ξd − 1ξ), (2ξd − 2ξ) expressed in the camera coordinate systems of the two cameras C1, C2 and the positional deviation (pd − p) expressed in the task coordinate system is obtained from Equations 7 and 8.

(57)
$$ {}^1\tilde{l}_d\,{}^1\Xi \begin{bmatrix} {}^1u_d - {}^1u \\ {}^1v_d - {}^1v \end{bmatrix} + {}^2\tilde{l}_d\,{}^2\Xi \begin{bmatrix} {}^2u_d - {}^2u \\ {}^2v_d - {}^2v \end{bmatrix} = \left[ \frac{{}^1\tilde{l}_d}{{}^1l_d} \left[ {}^1\tilde{A}\,{}^1\tilde{R}^T \right]^{-1} I_0\,{}^1\tilde{E}^{-1}\,{}^1E\,I_0^T \left[ {}^1A\,{}^1R^T - {}^1\Pi \right] + \frac{{}^2\tilde{l}_d}{{}^2l_d} \left[ {}^2\tilde{A}\,{}^2\tilde{R}^T \right]^{-1} I_0\,{}^2\tilde{E}^{-1}\,{}^2E\,I_0^T \left[ {}^2A\,{}^2R^T - {}^2\Pi \right] \right] [p_d - p] \quad [\text{Equation 10}] $$
${}^i\tilde{l}_d$: estimate of the depth distance ${}^il_d$ to the destination pd

(58) Note that an estimate of the depth distance to the target position pd is used in Equation 10. This estimate can be an appropriate constant. Specifically, if the task coordinate system and the coordinate system of the camera Ci are sufficiently distant, a distance between the origins of these coordinate systems may be set as this estimate. Alternatively, both the camera C1 and the target position pd may be captured in the field of view of the camera C2, the estimate for the camera C1 may be obtained from the imaging result of the camera C2, and the estimate for the camera C2 may similarly be obtained from the imaging result of the camera C1.

(59) Further, a correction matrix Λ0 is defined by the following equation.

(60)
$$ \Lambda_0 = \left[ {}^1\tilde{A}\,{}^1\tilde{R}^T \right]^{-1} I_0\,I_0^T \left[ {}^1\tilde{A}\,{}^1\tilde{R}^T - {}^1\tilde{\Pi} \right] + \left[ {}^2\tilde{A}\,{}^2\tilde{R}^T \right]^{-1} I_0\,I_0^T \left[ {}^2\tilde{A}\,{}^2\tilde{R}^T - {}^2\tilde{\Pi} \right] \quad [\text{Equation 11}] $$
$\Lambda_0$: correction matrix

(61) An inverse matrix of this correction matrix Λ0 has a function of normalizing the coefficient multiplied to the positional deviation (pd − p) on the right side of Equation 10. Accordingly, the product of the inverse matrix of the correction matrix Λ0 and this coefficient would ideally be equal to an identity matrix. However, since each parameter has an error, this product includes, strictly speaking, a component corresponding to the position p and is expressed as follows using a matrix Λ̃(p). Here, Λ̃(p) is a matrix having a component corresponding to the position p and substantially equal to the identity matrix.

(62)
$$ \Lambda_0^{-1} \left[ \frac{{}^1\tilde{l}_d}{{}^1l_d} \left[ {}^1\tilde{A}\,{}^1\tilde{R}^T \right]^{-1} I_0\,{}^1\tilde{E}^{-1}\,{}^1E\,I_0^T \left[ {}^1A\,{}^1R^T - {}^1\Pi \right] + \frac{{}^2\tilde{l}_d}{{}^2l_d} \left[ {}^2\tilde{A}\,{}^2\tilde{R}^T \right]^{-1} I_0\,{}^2\tilde{E}^{-1}\,{}^2E\,I_0^T \left[ {}^2A\,{}^2R^T - {}^2\Pi \right] \right] = \tilde{\Lambda}(p) \quad [\text{Equation 12}] $$

(63) As a result, the following relationship is obtained from the relationships of Equations 10 and 12.

(64)
$$ \Lambda_0^{-1}\,{}^1\tilde{l}_d\,{}^1\Xi \begin{bmatrix} {}^1u_d - {}^1u \\ {}^1v_d - {}^1v \end{bmatrix} + \Lambda_0^{-1}\,{}^2\tilde{l}_d\,{}^2\Xi \begin{bmatrix} {}^2u_d - {}^2u \\ {}^2v_d - {}^2v \end{bmatrix} = \tilde{\Lambda}(p)\,[p_d - p] \quad [\text{Equation 13}] $$

(65) As just described, a positional deviation detection amount (= Λ̃(p)(pd − p)) corresponding to the positional deviation (pd − p) expressed in the task coordinate system can be obtained from the positional deviations (1ξd − 1ξ), (2ξd − 2ξ) expressed in the respective coordinate systems of the two cameras C1, C2. Accordingly, in the external loop Lx, the values obtained by applying operations 204, 205 respectively to the positional deviations (1ξd − 1ξ), (2ξd − 2ξ) in the coordinate systems of the cameras C1, C2 are added to calculate the positional deviation detection amount (= Λ̃(p)(pd − p)) in the task coordinate system as shown in FIG. 3.
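Operations 204, 205 amount to multiplying each camera's image-plane deviation by Λ0⁻¹ il̃d iΞ and summing, per Equation 13. A minimal sketch, assuming the matrices have already been estimated (the numeric values in the usage below are made-up placeholders):

```python
import numpy as np

def positional_deviation_detection(dxi1, dxi2, Xi1, Xi2, ld1, ld2, Lam0):
    """Positional deviation detection amount of Equation 13.

    dxi1, dxi2: 2-vectors of image-plane deviations (iu_d - iu,
    iv_d - iv) from cameras C1, C2; Xi1, Xi2: 3x2 coefficient matrices
    of Equation 8; ld1, ld2: estimated depth distances to the
    destination; Lam0: correction matrix of Equation 11. Returns the
    estimate of (pd - p).
    """
    Lam0_inv = np.linalg.inv(Lam0)
    return Lam0_inv @ (ld1 * (Xi1 @ dxi1)) + Lam0_inv @ (ld2 * (Xi2 @ dxi2))
```

Each camera contributes the deviation components it can observe in its own image plane; the correction matrix normalizes their sum toward the task-space deviation.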

(66) Note that, as described later, this embodiment has an advantage of being able to precisely move the end effector 4 to the target position pd without requiring highly accurate calibration. Thus, the correction matrix Λ0 and the coefficient matrices 1Ξ, 2Ξ may include an error in the operations 204, 205. Corresponding to this, the symbol ~ is attached to these in the block diagram of FIG. 3.

(67) An operation 206 is performed to multiply the positional deviation detection amount by a transposed Jacobian matrix (transposed matrix of a Jacobian matrix) in the controller 5. Further, a proportional operation and an integral operation are performed on the positional deviation detection amount multiplied by the transposed Jacobian matrix in the controller 5. Specifically, the proportional operation is performed to multiply the positional deviation detection amount multiplied by the transposed Jacobian matrix by a proportional gain G_Pc (operation 207). Further, the integral operation with respect to time is performed on the positional deviation detection amount multiplied by the transposed Jacobian matrix (operation 208). Furthermore, the integral operation value is multiplied by an integral gain G_I (operation 209). Then, the results of these operations 207, 209 are fed back to the input side of the robot dynamics 201 (i.e. the torques τ). The above is the operation of the external loop Lx.

(68) In the internal loop Lq, an angle detection 210 is performed to detect the rotation angle q of the joint Q. Note that the rotation angle q detected by this angle detection possibly includes an error. Corresponding to this, in the angle detection 210 of FIG. 3, the detection value of the rotation angle q is expressed as an output value of an error function c(q). This error function can be expressed by a linear function of (a true value of) the rotation angle q such as c(q) = c1·q + c2. Further, in the internal loop Lq, a derivative operation and a proportional operation are performed on the detected value c(q) of the rotation angle q. Specifically, the derivative operation with respect to time is performed on the detected value c(q) (operation 211). Further, the derivative operation value is multiplied by a derivative gain G_D (operation 212). Furthermore, the proportional operation 213 is performed to multiply the deviation Δq from the target angle qd by a proportional gain G_P. The results of these operations are fed back to the input side of the robot dynamics 201 (i.e. the torques τ). The above is the operation of the internal loop Lq.

(69) The controller 5 determines the torques τ by adding a gravitational force compensation term g(q) to the operation results of these external and internal loops Lx, Lq. Here, the gravitational force compensation term g(q) is equivalent to the torque necessary to make the robot 2 stationary against the gravitational force. As a result, the torques τ to be applied to the joints Q are given by the following equation.

(70)

\tau = \tilde{g}(\tilde{q}_d) + G_P(\tilde{q}_d - \tilde{q}) - G_D\dot{\tilde{q}} + G_{Pc}\,\tilde{J}^T(\tilde{q})\,\tilde{\Pi}(\tilde{p})\,[p_d - p] + G_I \int_{t_0}^{t} \tilde{J}^T(\tilde{q})\,\tilde{\Pi}(\tilde{p})\,[p_d - p]\,dt   [Equation 14]

G_P: proportional gain for the angular deviation Δq
G_D: derivative gain for the angle q
G_Pc: proportional gain for the positional deviation Δp
G_I: integral gain for the positional deviation Δp
\tilde{J}^T: matrix containing an estimate of the transposed Jacobian matrix (transposed matrix of the Jacobian matrix)
\tilde{\Pi}(\tilde{p}): matrix determined by the position p
t: time
t_0: control start time

(71) In Equation 14, the first term is the gravitational force compensation term, the second term is a term for executing a proportional control on the angular deviation Δq, the third term is a term for executing a derivative control on the rotation angles q, the fourth term is a term for executing a proportional control on the positional deviation Δp, and the fifth term is a term for executing an integral control on the positional deviation Δp. Note that although the integral interval is the time from a control start time t0 in Equation 14, the integral interval is not limited to this and can be changed as appropriate. A specific example is as follows. If the end effector 4 starts moving from a position outside the fields of view of the cameras C1, C2, the time at which the end effector 4 enters the fields of view of both cameras C1, C2 and the visual feedback starts properly functioning may be set as the starting point of the integral interval.

(72) As described above, in this embodiment, the end effector 4 is imaged while the target position pd of the end effector 4 is included in the fields of view, and the rotation angles q of the joints Q are detected (first step). Then, the torques based on the positional deviation Δp of the end effector 4 (torques given by the fifth term of Equation 14) are calculated based on the imaging results, and the torques based on the angular deviations Δq of the joints Q (torques given by the second term of Equation 14) are calculated (second step). Then, the motors M1 to M3 are controlled to apply these torques to the joints Q (third step). In this way, these torques are superimposed and applied to the joints Q. In such a configuration, the rotation angles q of the joints Q can be detected even if the positional deviation Δp of the end effector 4 cannot be obtained from the imaging results. Thus, the end effector 4 is brought closer to the target position pd by rotating the joints Q by the torques based on the angular deviations Δq. However, in a configuration where the torques based on the angular deviations Δq and the torques based on the positional deviation Δp are superimposed and applied to the joints Q, if there is an error in a kinematic operation, the rotation of the joints Q may stop before the end effector 4 reaches the target position pd and the end effector 4 may stop at a position deviated from the target position pd.

(73) Contrary to this, in this embodiment, the torque based on the positional deviation Δp to be applied to the joints Q while being superimposed with the torques based on the angular deviations Δq is a positional deviation integral torque obtained by performing the integral operation on the value corresponding to the positional deviation Δp. Thus, if the rotation of the joints Q stops or is about to stop before the end effector 4 reaches the target position pd because of an error in the kinematic operation, the positional deviation integral torque increases with time and rotates the joints Q that have stopped or are about to stop. Then, this positional deviation integral torque keeps the joints Q rotating to move the end effector 4 to the target position pd until the positional deviation is finally eliminated. Thus, in this embodiment, even if there is an error in the kinematic operation, the end effector 4 can be reliably moved to the target position pd by the function of the positional deviation integral torque. As a result, highly accurate calibration is not required and the loads of calibration can be reduced.
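The effect described above can be reproduced in a minimal one-joint simulation. Everything below (the gains, the deliberately mis-calibrated target angle, the first-order joint dynamics) is an illustrative assumption, not the patent's control law: with only proportional terms the joint settles where the two torques balance, short of the target, while adding the positional deviation integral torque drives the remaining deviation to zero.

```python
def run(Gi, steps=20000, dt=0.01):
    """1-DOF sketch: the joint angle q directly equals the position p.
    A calibration error makes the target angle estimate qd_est = 0.8
    disagree with the true target position pd = 1.0, so the angular and
    positional proportional torques pull in different directions."""
    pd, qd_est = 1.0, 0.8
    Gq, Gpc = 1.0, 1.0          # proportional gains (angle / position)
    q, integral = 0.0, 0.0
    for _ in range(steps):
        dp = pd - q                              # positional deviation
        integral += dp * dt                      # positional deviation integral
        tau = Gq * (qd_est - q) + Gpc * dp + Gi * integral
        q += tau * dt                            # overdamped joint dynamics
    return q

q_p  = run(Gi=0.0)   # proportional only: stops where the torques balance
q_pi = run(Gi=0.5)   # with the integral torque: reaches the target
```

In this toy model the proportional-only run settles at (Gq·qd_est + Gpc·pd)/(Gq + Gpc) = 0.9, short of the target, whereas the run with the integral term converges to pd = 1.0.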

(74) Particularly, in this embodiment, the positional deviation integral torque is calculated by the kinematic operation of performing the integral operation on the positional deviation Δp multiplied by the transposed Jacobian matrix. Even so, according to this embodiment, the end effector 4 can be reliably moved to the target position pd even if there is an error in the kinematic operation. Thus, even if the transposed Jacobian matrix is uncertain and there is an error in the kinematic operation for obtaining the positional deviation integral torque, the end effector 4 can be reliably moved to the target position pd. Therefore, it is not particularly necessary to carry out highly accurate calibration to precisely obtain the transposed Jacobian matrix, and the loads of calibration can be reduced.

(75) The above technical content may be understood as follows. That is, in the configuration where the torques based on the angular deviations Δq and the torques based on the positional deviation Δp are superimposed and applied to the joints Q, the two torques balance out and stop the rotation of the joints Q if there is an error in the kinematic operation. This is thought to be because, if there is an error in the kinematic operation, the potential distribution in the task space has a minimum at a position different from the target position pd and the end effector 4 falls to this minimum. Contrary to this, in the case of applying the above positional deviation integral torque to the joints Q, the end effector 4 can be moved to the target position pd by the action of the positional deviation integral torque that increases with time. Thus, even in the case where the minimum is located at a position different from the target position pd due to an error in the kinematic operation, the end effector 4 can be reliably moved to the target position pd. As a result, highly accurate calibration is not required and the loads of calibration can be reduced.

(76) According to this embodiment, if the end effector 4 has a positional deviation Δp, the positional deviation integral torque acts on the joints Q and the end effector 4 can be moved to the target position pd. In the above description, an error in the kinematic operation is cited as a cause of this positional deviation Δp. However, in this embodiment, regardless of the cause of the positional deviation Δp of the end effector 4, the end effector 4 is moved to the target position pd to eliminate the positional deviation Δp if there is any positional deviation Δp. That is, positional deviations Δp due to the uncertainty of the term for gravitational force compensation and of the values of the target angles qd, the detected rotation angles q or the like can be eliminated by the action of the positional deviation integral torque. Thus, as long as the positional deviation (Δp = pd − p) of the end effector 4 is obtained, it is possible to move the end effector 4 to the target position pd. As a result, there are few parameters required to be accurate; therefore, the loads of calibration are very light and calibration can be omitted in some cases.

(77) As just described, in the first embodiment, the robot system 1 corresponds to an example of a robot system of the invention, the robot 2 corresponds to an example of a robot of the invention, the controller 5, the cameras Ci and the encoders En function in cooperation as an example of a robot control apparatus of the invention, the controller 5 corresponds to an example of a computer of the invention, the recording medium 6 corresponds to an example of a recording medium of the invention, the program 7 corresponds to an example of a program of the invention, the encoders En correspond to an example of an angle detector of the invention, the controller 5 functions as examples of a torque calculator and a drive controller of the invention, and the motors Mn correspond to an example of a joint drive mechanism of the invention. Further, in Equation 14, the torque given by the fifth term corresponds to an example of a first application amount or a first torque of the invention, the torque given by the second term corresponds to an example of a second application amount or a second torque of the invention and the torque given by the third term corresponds to an example of a third application amount or a third torque of the invention.

(78) Note that the invention is not limited to the first embodiment described above and various changes other than the aforementioned ones can be made without departing from the gist of the invention. For example, in the above first embodiment, the mark attached to the end effector 4 is recognized as the position of the end effector 4. However, a characteristic part (e.g. tip, hole) of the end effector 4 or a characteristic part of a target (e.g. bolt) gripped by the end effector 4 may be recognized as the position of the end effector 4.

(79) Further, a case where the end effector 4 is moved to one target position pd is described in the above first embodiment. However, in the case of moving the end effector 4 in consideration of the posture thereof, three representative points may be set for the end effector 4 and the three points may be moved to target positions thereof. Specifically, when the positions of the three points set on the end effector 4 are denoted by x, y and z and the target positions thereof are denoted by xd, yd and zd, the positional deviations of the three points are given by Δx (= xd − x), Δy (= yd − y) and Δz (= zd − z). A positional deviation integral torque similar to the above one may be calculated for each of the positional deviations Δx, Δy and Δz and applied to the joints Q.

(80) Further, the control law expressed by Equation 14 can also be changed as appropriate. In a specific example, a change may be made to omit the first term for gravitational force compensation, the third term for executing the derivative control on the rotation angles q or the fourth term for executing the proportional control on the positional deviation Δp.

(81) Further, panning/tilting of the cameras Ci is controlled to bring the target position pd into coincidence with or proximity to the origins of the coordinate systems of the cameras Ci (centers of the image planes IMi). However, the coincidence of the target position pd and the origins of the coordinate systems of the cameras Ci is not always necessary.

(82) Further, it is not necessary to constantly execute the above control to superimpose the external loop Lx and the internal loop Lq and feed them back to the torques. For example, if the end effector 4 is outside the fields of view of the cameras Ci, a feedback amount from the external loop Lx may be set at zero and the external loop Lx may not be performed. After the end effector 4 enters the fields of view of the cameras Ci, the external loop may be performed.

(83) On this occasion, an operation until the end effector 4 enters the fields of view of the cameras Ci may be taught to the robot 2. Such teaching suffices to be rough since it is sufficient to move the end effector 4 into the fields of view of the cameras Ci. However, it is, of course, all right to execute a control to track a trajectory of the end effector 4 until entry into the fields of view of the cameras Ci and locate the end effector 4 in the fields of view of the cameras Ci.

(84) Alternatively, it is also possible not to perform the external loop Lx and to use only the internal loop Lq while the end effector 4 is outside a predetermined range from the target position pd, even after the end effector 4 enters the fields of view of the cameras Ci. In this case, the external loop Lx may be performed after the end effector 4 enters the predetermined range from the target position pd. In such a configuration, the end effector 4 can be highly accurately positioned with the external loop Lx at the final stage of the movement to the target position pd.

(85) Second Embodiment

(86) FIG. 4 is a diagram showing an example of a robot system according to a second embodiment of the invention. As shown in FIG. 4, the robot system 1 includes a robot 2 and two cameras C1, C2. Note that, in FIG. 4, the robot 2 is schematically shown by symbol notation of joints Q1 to Q6, links 30 and an end effector 4, and the cameras C1, C2 are schematically shown by being represented by the image planes IM1, IM2 thereof.

(87) The robot 2 has such a schematic configuration that the end effector 4 (tool) is attached to the tip of an arm 3 which includes freely rotatable joints Q1 to Q6 and moves according to the rotation of the joints Q1 to Q6. Specifically, the joints Q1 to Q6 are coupled in this order via the links 30 from a base side (other end side) toward a tip side (one end side) of the robot 2 and one link 30 further projects toward the tip side from the joint Q6. The arm 3 is configured by the joints Q1 to Q6 and the links 30 in this way. Then, the end effector 4 is attached to the tip of the arm 3 (tip of the link 30 projecting toward the one end side from the joint Q6). Note that the joint Q6 closest to the tip side out of six joints Q1 to Q6 coupled to each other via the links 30 is appropriately called a tip joint.

(88) Each of the six joints Q1 to Q6 can rotate with one degree of freedom. Accordingly, the arm 3 can move the end effector 4 attached to the tip with six degrees of freedom by changing the rotation angles q1 to q6 of the joints Q1 to Q6. Particularly, in this robot system 1, a reference point p1 is set on the tip joint Q6 or in a range on the base end side from the tip joint Q6, whereas reference points p2, p3 are set in a range on the end effector 4 side from the tip joint Q6. More specifically, in the example shown in FIG. 4, the reference point p1 is set on the link 30 between the joints Q3 and Q4, whereas the reference points p2, p3 are set on the end effector 4. The rotation angles q1 to q6 of the joints Q1 to Q6 are adjusted based on the detection results of the reference points p1 to p3 by the cameras C1, C2, and the location and posture of the end effector 4 in three dimensions are controlled. On this occasion, marks such as LEDs (Light Emitting Diodes) may be attached to the respective reference points p1 to p3 to improve their visibility to the cameras C1, C2.

(89) That is, the cameras C1, C2 are positioned to capture the destinations pd1 to pd3 of the reference points p1 to p3 in the fields of view thereof and capture the positional deviations Δp1 to Δp3 between the reference points p1 to p3 and the destinations pd1 to pd3 in mutually different planes (Δp1 = pd1 − p1, Δp2 = pd2 − p2, Δp3 = pd3 − p3). Specifically, the cameras C1, C2 are, for example, so arranged that the image planes IM1, IM2 are perpendicular to each other, a YZ plane of a task coordinate system XYZ is imaged by the camera C1 and a ZX plane of the task coordinate system XYZ is imaged by the camera C2. The location and posture of the end effector 4 in three dimensions are controlled by adjusting the rotation angles q1 to q6 of the joints Q1 to Q6 to reduce the positional deviations Δp1 to Δp3 between the reference points p1 to p3 and the destinations pd1 to pd3 detected from the imaging results of the cameras C1, C2 (visual feedback).
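One conceivable way to assemble a three-dimensional deviation from the two perpendicular image planes is sketched below. The averaging of the doubly observed Z component and all names are assumptions made for illustration, not the patent's method.

```python
import numpy as np

def deviation_from_cameras(dp_yz, dp_zx):
    """Combine the planar deviations seen by the two cameras.

    dp_yz: (dy, dz) observed by camera C1 on the YZ image plane
    dp_zx: (dz, dx) observed by camera C2 on the ZX image plane
    The Z component is seen by both cameras and is averaged here.
    """
    dy, dz1 = dp_yz
    dz2, dx = dp_zx
    return np.array([dx, dy, 0.5 * (dz1 + dz2)])
```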

(90) In this visual feedback, the position control can be executed by using different degrees of freedom for the reference point p1 set on the link 30 between the joints Q3 and Q4 and for the reference points p2, p3 set on the end effector 4. As described above, the reference point p1 is set on the link 30 between the joints Q3 and Q4. In other words, the set position of the reference point p1 (specific reference position) is on the base side from the joint Q4 (specific joint), at which the total count is three when counting the degree of freedom of the joint in order from the tip side, and on the tip side from the joint Q3, at which the total count is three when counting the degree of freedom of the joint in order from the base side. Thus, the position of the reference point p1 can be controlled with three degrees of freedom realized by the joints Q1 to Q3, whereas the positions of the reference points p2, p3 can be controlled with three degrees of freedom realized by the joints Q4 to Q6. In this way, it is possible to execute a control to separately use the degrees of freedom for the position control of the reference point p1 (three degrees of freedom by the joints Q1 to Q3) and the degrees of freedom for the position control of the reference points p2, p3 (three degrees of freedom by the joints Q4 to Q6).
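The separation of degrees of freedom follows from the structure of the Jacobians: the joints beyond the reference point p1 cannot move it, so the corresponding Jacobian columns are zero and the visual-feedback torque for p1 acts only on the joints Q1 to Q3. The numbers below are made up for illustration.

```python
import numpy as np

# Jacobian of p1 (3 coordinates x 6 joints): the columns for q4..q6 are
# zero because the joints Q4-Q6 lie beyond the link carrying p1.
J1 = np.array([[0.3, 0.1, 0.2, 0.0, 0.0, 0.0],
               [0.1, 0.4, 0.1, 0.0, 0.0, 0.0],
               [0.2, 0.1, 0.5, 0.0, 0.0, 0.0]])
dp1 = np.array([0.1, -0.2, 0.05])   # positional deviation of p1

tau1 = J1.T @ dp1   # visual-feedback torque contribution of p1
# tau1[3:] is identically zero: p1 is positioned by Q1-Q3 alone,
# leaving Q4-Q6 free to position p2, p3 on the end effector.
```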

(91) Further, in this visual feedback, it is possible to adopt such a configuration that an imaging result of the robot 2 at a singular point is not fed back to the position control of the end effector 4. In a specific example, the cameras C1, C2 may be so arranged that the robot 2 at the singular point is outside the fields of view of the cameras C1, C2. In this way, it can be suppressed that the visual feedback is performed based on the imaging result of the robot 2 at the singular point and the control of the robot 2 becomes unstable.

(92) Here, the notation of the coordinate systems and each control amount used in the position control of the end effector 4 is described. As shown in FIG. 4, the three-dimensional task coordinate system configured by X, Y and Z axes perpendicular to each other with the Z axis as a vertical axis is defined for the task space where the end effector 4 operates. Thus, the positions of the reference points p1 to p3 are respectively given by three-dimensional vectors (p1x, p1y, p1z), (p2x, p2y, p2z) and (p3x, p3y, p3z). Similarly, the destinations pd1 to pd3 are respectively given by three-dimensional vectors (pd1x, pd1y, pd1z), (pd2x, pd2y, pd2z) and (pd3x, pd3y, pd3z). Further, the positional deviations Δp1 to Δp3 are respectively given by three-dimensional vectors (Δp1x, Δp1y, Δp1z), (Δp2x, Δp2y, Δp2z) and (Δp3x, Δp3y, Δp3z).

(93) Note that the representative notation of the three reference points p1 to p3 by the reference point p without distinction, the representative notation of the three destinations pd1 to pd3 by the destination pd without distinction and the representative notation of the three positional deviations Δp1 to Δp3 by the positional deviation Δp without distinction are used as appropriate below. On this occasion, the positional deviation Δp is given by the equation Δp = pd − p using the reference point p and the destination pd. Further, corresponding to this, the position of the reference point p is expressed by a three-dimensional vector (px, py, pz), the position of the destination pd is expressed by a three-dimensional vector (pdx, pdy, pdz) and the deviation Δp is expressed by a three-dimensional vector (Δpx, Δpy, Δpz).

(94) The rotation angles q of the joints Q of the robot 2 are expressed by a vector (q1, q2, q3, q4, q5, q6) including the rotation angle qn of the joint Qn as each component. Here, the notation of the joints Q is the collective notation of the joints Q1 to Q6 and n is a number for distinguishing the joints (n = 1, 2, 3, 4, 5, 6). Further, the target angles qd (= qd1, qd2, qd3, qd4, qd5, qd6) are the rotation angles q of the joints Q when all the reference points p1 to p3 coincide with the corresponding destinations pd1 to pd3 thereof. Furthermore, the torques τ applied to the joints Q of the robot 2 are expressed by a vector (τ1, τ2, τ3, τ4, τ5, τ6) including the torque τn acting on the joint Qn as each component.

(95) The above is description of the notation of the coordinate systems and the control amounts. Next, the position control of the end effector 4 is described in detail. The robot system 1 according to the second embodiment also has the electrical configuration shown in FIG. 2. A position control similar to the one for converging the position p of the end effector 4 to the target position pd in the first embodiment is executed for each of the reference points p1 to p3 (p) and the reference points p1 to p3 (p) are converged toward the destinations pd1 to pd3 (pd) thereof. The following description is made, centering on a configuration different from the first embodiment with the description of the configuration common to the first embodiment omitted as appropriate.

(96) In the robot system 1, a motor Mn for driving the joint Qn is provided for each of the joints Q1 to Q6, and an encoder En for detecting a rotational position of the motor Mn is provided for each of the motors M1 to M6. The controller 5 adjusts the rotation angles q of the joints Q of the robot 2 by controlling each of the motors M1 to M6. Particularly, to perform the aforementioned visual feedback, the controller 5 detects the positional deviations Δp (= pd − p) of the reference points p from the imaging results of the reference points p by the cameras Ci (external sensors). On this occasion, the controller 5 detects the positional deviations Δp while controlling panning/tilting of the cameras Ci so that the destinations pd1 to pd3 are captured in the fields of view, e.g. while bringing the geometric centers of gravity of the destinations pd1 to pd3 into coincidence with or proximity to the origins of the coordinate systems of the cameras Ci (centers of the image planes IMi). Note that the set positions of the reference points p on the robot 2 are stored in advance in a memory of the controller 5 (storage).

(97) Further, in parallel with the detection of the positional deviations Δp of the reference points p, the controller 5 detects angular deviations Δq (= qd − q) of the joints Q from outputs of the encoders E1 to E6 (internal sensors). Then, the controller 5 calculates the torques τ based on the positional deviations Δp (= pd − p) and the angular deviations Δq (= qd − q). Then, the motors M1 to M6 apply the torques τ to the joints Q, thereby adjusting the rotation angles q of the joints Q. As just described, the detection results of the cameras Ci and the encoders En are fed back to the torques τ to control the position of the end effector 4 in this embodiment.

(98) Also in the second embodiment, the position control of the end effector 4 is executed in accordance with the block diagram of FIG. 3. Accordingly, the torques τ to be applied to the joints Q are determined based on the control law of Equation 14 derived from Equations 1 to 13. Note that, as described above, three reference points p1 to p3 are provided for the robot 2. Thus, the term for executing the proportional control on the positional deviations Δp (fourth term of Equation 14) is given by a linear combination of terms for executing a proportional control on each positional deviation Δp1 to Δp3, and the term for executing the integral control on the positional deviations Δp (fifth term of Equation 14) is given by a linear combination of terms for executing an integral control on each positional deviation Δp1 to Δp3. As a result, the equation for obtaining the torques τ applied to the joints Q can be rewritten from Equation 14 into the following equation.

(99)

\tau = \tilde{g}(\tilde{q}_d) + G_P(\tilde{q}_d - \tilde{q}) - G_D\dot{\tilde{q}} + G_{Pc}\sum_{j=1}^{3}\alpha_j\,\tilde{J}_j^T\,\tilde{\Pi}_j\,(p_{dj} - p_j) + G_I \int_{t_0}^{t} \sum_{j=1}^{3}\alpha_j\,\tilde{J}_j^T\,\tilde{\Pi}_j\,(p_{dj} - p_j)\,dt, \qquad \sum_{j=1}^{3}\alpha_j = 1   [Equation 15]

j = 1, 2, 3
p_j: reference point p1, p2, p3
p_dj: destination pd1, pd2, pd3
G_Pc: positional feedback gain matrix
G_I: integral feedback gain matrix
α_j > 0: weight coefficient for the reference point p_j
\tilde{J}_j^T: transposed Jacobian matrix (transposed matrix of the Jacobian matrix) containing an error, for the reference point p_j
\tilde{\Pi}_j: matrix \tilde{\Pi}(p) for the reference point p_j

(100) Note that the fourth and fifth terms of Equation 15 are given by weighted averages of the results of the (proportional/integral) operations on each positional deviation Δp1 to Δp3. The weight coefficients α1 to α3 of these weighted averages have a positive value greater than zero when the corresponding reference points p1 to p3 are in the fields of view of both cameras C1, C2, and are zero when the corresponding reference points p1 to p3 are outside the field of view of either one of the cameras C1, C2, in which case the visual feedback for the reference points outside the field(s) of view does not work.
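The weighted fourth and fifth terms of Equation 15 can be sketched as follows; the data structure, the equal sharing of weights among visible points and all names are illustrative assumptions.

```python
import numpy as np

def weighted_visual_torque(points, Gpc=1.0, Gi=0.1, dt=0.01):
    """Weighted proportional + integral visual-feedback torque.

    Each entry of `points` holds a transposed-Jacobian estimate J_T,
    the deviation dp = pd - p, a running integral and flags saying
    whether the point is in each camera's field of view.  A point
    outside either field of view gets weight zero; the visible points
    share weights alpha_j that sum to one.
    """
    tau = np.zeros(points[0]["J_T"].shape[0])
    visible = [p for p in points if p["in_c1"] and p["in_c2"]]
    if not visible:
        return tau                       # visual feedback inactive
    alpha = 1.0 / len(visible)
    for p in visible:
        jd = p["J_T"] @ p["dp"]          # map deviation into joint space
        p["integral"] = p["integral"] + jd * dt
        tau += alpha * (Gpc * jd + Gi * p["integral"])
    return tau
```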

(101) As described above, in this embodiment, the position of the end effector 4 is controlled by moving three or more reference points p1 to p3, which are set for the robot 2 and include at least two points p2, p3 set on the end effector 4 side from the tip joint Q6, toward the respective destinations pd1 to pd3. Specifically, the reference points p1 to p3 are imaged in a state where the destinations pd1 to pd3 of the respective reference points p1 to p3 are captured in the fields of view, and the positional deviations Δp1 to Δp3 between the reference points p1 to p3 and the destinations pd1 to pd3 are obtained. In parallel with this, the rotation angles q1 to q6 of the joints Q1 to Q6 are detected and the angular deviations Δq between the detected angles q and the target angles qd are obtained. Then, the torques based on the positional deviations Δp1 to Δp3 (torques given by the fifth term of Equation 14 or 15) are calculated and the torques based on the angular deviations Δq (torques given by the second term of Equation 14 or 15) are calculated. Then, these torques are superimposed and applied to the joints Q. In such a configuration, even if the positional deviations Δp1 to Δp3 of the reference points p1 to p3 cannot be obtained from the imaging results, the rotation angles q of the joints Q can be detected. Thus, the joints Q are rotated by the torques based on the angular deviations Δq to move the reference points p1 to p3 toward the destinations pd1 to pd3; as a result, the end effector 4 can be moved toward the target position.

(102) However, as described above, in the configuration where the torques based on the positional deviations Δp and the torques based on the angular deviations Δq are superimposed and applied to the joints Q, unless the kinematic operation is certain, the torques based on the positional deviations Δp may act against the torques based on the angular deviations Δq in the opposite direction and stop the rotation of the joints Q. In this case, the end effector 4 stops with the positional deviations Δp left.

(103) Contrary to this, in this embodiment, the torques to be superimposed with the torques based on the angular deviations Δq are positional deviation integral torques obtained by performing the integral operation on values corresponding to the positional deviations Δp of the reference points p. Accordingly, if the rotation of the joints Q stops or is about to stop before the reference points p reach the destinations pd due to an error in the kinematic operation, the positional deviation integral torques increase with time while the positional deviations Δp remain, and rotate the joints Q that have stopped or are about to stop. These positional deviation integral torques keep the joints Q rotating to move the reference points p to the destinations pd until the positional deviations Δp are finally eliminated. As a result, the end effector 4 can be moved to the target position. Thus, in this embodiment, even if there is an error in the kinematic operation, the end effector 4 can be reliably moved to the target position by the function of the torques based on the positional deviations Δp. As a result, highly accurate calibration is not required and the loads of calibration can be reduced.

(104) Particularly, in this embodiment, the positional deviation integral torques are calculated by the kinematic operation of performing the integral operation on the positional deviations Δp multiplied by the transposed Jacobian matrix. Even so, according to this embodiment, the end effector 4 can be reliably moved to the target position even if there is an error in the kinematic operation. Thus, even if the transposed Jacobian matrix is uncertain and there is an error in the kinematic operation for obtaining the positional deviation integral torques, the end effector 4 can be reliably moved to the target position. Therefore, it is not particularly necessary to carry out highly accurate calibration to precisely obtain the transposed Jacobian matrix, and the loads of calibration can be reduced.

(105) The above technical content can be understood as follows. That is, in the configuration where the torques based on the angular deviations Δq and the torques based on the positional deviations Δp are superimposed and applied to the joints Q, the two torques balance out and stop the rotation of the joints Q if there is an error in the kinematic operation. This is thought to be because, if there is an error in the kinematic operation, the potential distribution in the task space has a minimum at a position different from the target position and the end effector 4 falls to this minimum. Contrary to this, in the case of applying the above positional deviation integral torques to the joints Q, the end effector 4 can be moved to the target position by the action of the positional deviation integral torques that increase with time. Thus, even in the case where the minimum is located at a position different from the target position due to an error in the kinematic operation, the end effector 4 can be reliably moved to the target position. As a result, highly accurate calibration is not required and the loads of calibration can be reduced.

(106) According to this embodiment, if there is a positional deviation Δp between the reference point p and the destination pd, the positional deviation integral torque acts on the joints Q and the end effector 4 can be moved to the target position. In the above description, an error in the kinematic operation is cited as a cause of these positional deviations Δp. However, in this embodiment, regardless of the cause of a positional deviation Δp, the reference point p is moved to the destination pd to eliminate the positional deviation Δp if there is any positional deviation Δp. That is, positional deviations Δp due to the uncertainty of the term for gravitational force compensation and of the values of the target angles qd, the detected rotation angles q or the like can be eliminated by the action of the positional deviation integral torque. Thus, as long as the positional deviation (Δp = pd − p) of the reference point p is obtained, the end effector 4 can be moved to the target position by converging the reference point p toward the destination pd. As a result, there are few parameters required to be accurate; therefore, the loads of calibration are very light and calibration can be omitted in some cases.

(107) Further, in this embodiment, the position of the end effector 4 is controlled by moving three or more reference points p1 to p3, which are set for the robot 2 and include at least two points p2, p3 set on the end effector 4, toward the respective destinations pd1 to pd3. As a result, it is possible to properly control the end effector 4 to the target location and posture in three dimensions.

(108) In the configuration where three or more reference points p1 to p3 are moved to the destinations pd1 to pd3, it may be difficult to converge all the reference points p1 to p3 to the respective destinations pd1 to pd3, depending on the positional relationship between the set positions of the reference points p1 to p3 and the destinations pd1 to pd3. Particularly, such a problem is likely to occur when all the reference points p1 to p3 are set on the end effector 4 side from the tip joint Q6. This point is described with a specific example.

(109) FIG. 5 is a diagram showing an example of the states of torques acting on the end effector in the process of moving the reference points to their destinations. That is, FIG. 5 illustrates a positional relationship between the reference points p1 to p3 and the destinations pd1 to pd3 that can arise in the process of moving the reference points p1 to p3 to the destinations pd1 to pd3. If the state of FIG. 5 is reached, a torque T1 generated based on the positional deviation Δp1 and directed from the reference point p1 to the destination pd1, a torque T2 generated based on the positional deviation Δp2 and directed from the reference point p2 to the destination pd2 and a torque T3 generated based on the positional deviation Δp3 and directed from the reference point p3 to the destination pd3 balance each other out, and therefore the end effector 4 stops. In other words, the end effector 4 falls into a local minimum of a potential distribution of the task space with the positional deviations Δp1 to Δp3 left. Moreover, in the configuration where all the reference points p1 to p3 are set on the end effector 4 side of the tip joint Q6, the degrees of freedom for moving the reference points p1 to p3 are common to all of them; it has therefore been difficult to move the end effector 4 out of the local minimum.
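The torque balance of FIG. 5 can be illustrated numerically. In the following sketch (the Jacobian and the deviations are hypothetical stand-ins), all reference points share one Jacobian J, i.e. common degrees of freedom, and three nonzero deviations spaced 120° apart produce torques that cancel exactly:

```python
import numpy as np

# All reference points set on the end effector side share the same
# degrees of freedom, modeled here by one common (hypothetical) 2x2
# Jacobian J. Three nonzero deviations spaced 120 degrees apart give
# torques tau_j = J^T dp_j whose sum vanishes: a local minimum.
J = np.array([[1.0, 0.3],
              [0.2, 1.0]])
angles = [0.0, 2.0 * np.pi / 3.0, 4.0 * np.pi / 3.0]
dps = [np.array([np.cos(a), np.sin(a)]) for a in angles]
tau = sum(J.T @ dp for dp in dps)

print(np.allclose(tau, 0.0, atol=1e-9))             # True: torques balance out
print(all(np.linalg.norm(dp) > 0.9 for dp in dps))  # True: deviations remain
```

Because the three torques pass through the same J, any set of deviations summing to zero stalls the motion, which is exactly the situation a reference point with its own degrees of freedom (the specific reference point of the next paragraph) is meant to break.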

(110) Contrary to this, in this embodiment, the specific reference point p1 is set on the tip joint Q6 or on the other end side (base side) of the tip joint Q6. By providing some reference point (the specific reference point p1) on the tip joint Q6 or on the base side of the tip joint Q6, it becomes easy to break the balance of the torques T1 to T3 described above, move the end effector 4 out of the local minimum and converge all the reference points p1 to p3 to their destinations pd1 to pd3.

(111) Particularly, in this embodiment, the specific reference point p1 is set on the specific joint Q4 or on the other end side (base side) of the specific joint Q4, the specific joint Q4 being the joint at which the total count becomes equal to or more than three when counting the degrees of freedom of the joints in order from the one end side (tip side). In such a configuration, the specific reference point p1 can be properly converged to the destination pd1 with the degrees of freedom on the other end side of the specific joint Q4 (the degrees of freedom provided by the joints Q1 to Q3). Further, the reference points p2, p3 set on the end effector 4 side of the specific reference point p1 can be properly converged to the destinations pd2, pd3 with the three or more degrees of freedom on the specific joint Q4 and on the one end side of the specific joint Q4 (the degrees of freedom provided by the joints Q4 to Q6). That is, a control using different degrees of freedom for the specific reference point p1 and for the other reference points p2, p3 can be executed. As a result, even if the end effector 4 falls into the local minimum as described above, the balance of the torques T1 to T3 can easily be broken by displacing the reference point p1 and the reference points p2, p3 with respectively different degrees of freedom. It thus becomes easy to move the end effector 4 out of the local minimum and properly converge the respective reference points p1 to p3 to their destinations.

(112) Further, in this embodiment, the specific reference point p1 is set on the one end side (tip side) of the joint Q3, the joint Q3 being the joint at which the total count becomes equal to or more than three when counting the degrees of freedom of the joints in order from the other end side (base side). This enables the specific reference point p1 to be more reliably converged to the destination pd1 with three or more degrees of freedom. Particularly, in such a configuration, the reference point p1 and the reference points p2, p3 can each be displaced relatively freely with different sets of three or more degrees of freedom. As a result, the balance of the torques T1 to T3 can easily be broken, and it becomes easy to move the end effector 4 out of the local minimum and properly converge the respective reference points p1 to p3 to their destinations.

(113) Note that, in this specification, the expression "the joint at which the total count is equal to or more than N when counting the degrees of freedom of the joints in order" is used as appropriate. This expression indicates the first joint at which the total count becomes equal to or more than N when counting the degrees of freedom of the joints in order. Thus, as described above, the joint at which the total count is equal to or more than three when counting the degrees of freedom of the joints in order from the one end side is only the joint Q4, and the other joints Q1 to Q3 are not regarded as such. Similarly, the joint having a successive total of three or more degrees of freedom of the joints from the other end side is only the joint Q3, and the other joints Q4 to Q6 are not regarded as such.

(114) As shown in Equation 15, the positional deviation integral torques are given by a linear combination of the values of integral on the positional deviations Δp1 to Δp3 of the respective reference points p1 to p3, specifically by a weighted average multiplied by the weight coefficients αj (j = 1, 2, 3). On this occasion, the weight coefficient α1 of the value of integral on the specific reference point p1 (the value of integral on the positional deviation Δp1) may be set larger than the weight coefficients α2, α3 of the values of integral on the reference points p2, p3 (the values of integral on the positional deviations Δp2, Δp3), i.e. α1 > α2 and α1 > α3. In such a configuration, after the specific reference point p1 corresponding to the larger weight coefficient α1 is more quickly converged to the destination pd1, the reference points p2, p3 set on the end effector 4 side of the tip joint Q6 can be converged to the destinations pd2, pd3. Thus, the degrees of freedom on the specific joint Q4 and on the one end side of the specific joint Q4, i.e. the degrees of freedom on the end effector 4 side of the specific reference point p1, can be used substantially only to move the reference points p2, p3. Therefore, these reference points p2, p3 can be reliably converged to the destinations pd2, pd3 with sufficient degrees of freedom.
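The weighted integral term described above can be sketched as follows. The weight symbols αj and every number here are illustrative assumptions, and the Jacobians are identity stand-ins:

```python
import numpy as np

# Weighted positional-deviation integral torque: the integral of
# sum_j alpha_j * J_j^T * dp_j, scaled by an integral gain g_i.
alphas = [0.6, 0.2, 0.2]                    # alpha1 > alpha2, alpha3
Jts = [np.eye(3) for _ in range(3)]         # stand-in J_j^T matrices
dps = [np.array([0.1, 0.0, 0.0]),           # deviations held constant here
       np.array([0.0, 0.2, 0.0]),
       np.array([0.0, 0.0, 0.2])]
g_i, dt = 0.5, 0.01
integ = np.zeros(3)
taus = []
for _ in range(100):
    integ += sum(a * Jt @ dp for a, Jt, dp in zip(alphas, Jts, dps)) * dt
    taus.append(g_i * integ.copy())

# While the deviations persist, the integral torque grows linearly in time,
# and the heavily weighted alpha1 component dominates its own axis.
print(np.allclose(taus[99], 2 * taus[49]))        # True
print(np.allclose(taus[-1], [0.03, 0.02, 0.02]))  # True
```

With α1 three times the other weights, the deviation of the specific reference point contributes proportionally more torque per unit of deviation, which is what drives it to converge first.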

(115) As described above, in the second embodiment, the robot system 1 corresponds to an example of the robot system of the invention, the robot 2 corresponds to an example of the robot of the invention, the controller 5, the cameras Ci and the encoders En function in cooperation as an example of the robot control apparatus of the invention, the controller 5 corresponds to an example of the computer of the invention, the recording medium 6 corresponds to an example of the recording medium of the invention, and the program 7 corresponds to an example of the program of the invention. Further, the cameras Ci correspond to an example of an imager of the invention, the encoders En correspond to an example of a displacement amount detector of the invention, and the controller 5 functions as examples of a positional deviation acquirer, an application amount calculator and a drive controller of the invention. Further, in Equations 14 and 15, the torques given by the fifth term correspond to an example of the first application amount of the invention, the torques given by the second term correspond to an example of the second application amount of the invention and the torques given by the third term correspond to an example of the third application amount of the invention. Further, the reference points p1 to p3 correspond to an example of reference positions of the invention, the specific reference point p1 corresponds to a specific reference position of the invention, the tip joint Q6 corresponds to an example of a tip joint of the invention, and the specific joint Q4 corresponds to an example of a specific joint of the invention.

(116) As described above, the respective reference points p are reliably converged to the destinations pd by superimposing the torques obtained by performing the integral operation on the positional deviations Δp and the torques obtained by performing the proportional operation on the angular deviations Δq. Note that the invention is not limited to the second embodiment described above, and various changes other than the aforementioned ones can be made without departing from the gist of the invention. For example, in the above second embodiment, the marks such as LEDs attached to the robot 2 are set as the reference points p1 to p3. However, characteristic parts (e.g. tips, holes) of the robot 2 may be set as the reference points p1 to p3.

(117) Further, the control law expressed by Equation 15 can also be appropriately changed. In a specific example, a change can be made to omit the first term for gravitational force compensation, the third term for executing the derivative control on the rotation angles q, or the fourth term for executing the proportional control on the positional deviations Δp.

(118) Further, the specific set values of the weight coefficients α1 to α3 of the weighted addition in Equation 15 can also be appropriately changed. Accordingly, the weight coefficients α2, α3 for the other reference points p2, p3 may be set larger than the weight coefficient α1 for the specific reference point p1, or all the weight coefficients α1 to α3 may be set at the same value.

(119) Further, the set position of the specific reference point p1 on the robot 2 can also be appropriately changed. Thus, the specific reference point p1 can be set at a suitable position on the tip joint Q6 or on the base end side from the tip joint Q6 on the robot 2. For example, the specific reference point p1 may be set on the tip joint Q6 or on the specific joint Q4.

(120) Further, the set positions of the reference points p2, p3 on the robot 2 can also be appropriately changed. Thus, the reference point p2 or p3 may be provided on the link 30 projecting toward the tip side from the tip joint Q6.

(121) Furthermore, it is not always necessary to provide the specific reference point p1 as described above. Thus, for example, all the reference points p1 to p3 may be provided on the end effector 4.

(122) Further, the number of the reference points p is not limited to three as in the above description. Thus, four or more reference points p may be set for the robot 2.

(123) Further, the number of degrees of freedom of the robot 2 is also not limited to six as in the above description. Thus, the invention can be applied also to a robot 2 having seven or more degrees of freedom.

(124) Further, in the above second embodiment, each of the joints Q1 to Q6 has one degree of freedom. However, the invention can also be applied to a robot 2 configured using joints Q having a plurality of degrees of freedom.

(125) Further, in the above second embodiment, panning/tilting of the cameras Ci is controlled to bring, for example, the geometric centers of gravity of the destinations pd1 to pd3 into coincidence with or proximity to the origins of the coordinate systems of the cameras Ci (the centers of the image planes IMi). However, a control mode of the positional relationship between the destinations pd1 to pd3 and the coordinate systems of the cameras Ci is not limited to this.

(126) Further, in the above embodiments, if the reference point p is outside the fields of view, the feedback amount from the external loop Lx becomes zero and the external loop Lx for the corresponding reference point p is not performed. After the corresponding reference point enters the fields of view of the cameras Ci, the external loop Lx is performed. On this occasion, an operation until the reference point p enters the fields of view of the cameras Ci may be taught to the robot 2 in advance. Such teaching suffices to be rough, since it is sufficient to move the reference point p into the fields of view of the cameras Ci. However, it is, of course, also possible to execute a control to track a trajectory of the end effector 4 until entry into the fields of view of the cameras Ci and to locate the end effector 4 in the fields of view of the cameras Ci.

(127) Alternatively, it is also possible not to perform the external loop Lx while the reference point p is outside a predetermined range from the destination pd and to use only the internal loop Lq even if the reference point p enters the fields of view of the cameras Ci. In this case, the external loop Lx may be performed after the reference point p enters the predetermined range from the destination pd. In such a configuration, the reference point p can be highly accurately positioned with the external loop Lx at a final stage of a movement to the destination pd.
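The gating described in this and the preceding paragraph can be sketched as follows (function names, gains and the threshold are illustrative, not from the embodiment):

```python
def joint_torque(dq, dq_dot, g_p=1.0, g_d=0.1):
    """Internal loop Lq: proportional-derivative term on the angular deviation."""
    return g_p * dq - g_d * dq_dot

def total_torque(dq, dq_dot, dp, in_view, dist, thresh, g_pc=0.5):
    """Superimpose the external loop Lx only when the reference point is
    imaged and already within `thresh` of its destination."""
    tau = joint_torque(dq, dq_dot)
    if in_view and dist <= thresh:
        tau += g_pc * dp          # external-loop feedback on the deviation
    return tau

# Outside the fields of view (or far from pd): internal loop only.
far = total_torque(0.2, 0.0, 0.5, in_view=False, dist=1.0, thresh=0.3)
# In view and within the predetermined range: external loop is added.
near = total_torque(0.2, 0.0, 0.5, in_view=True, dist=0.2, thresh=0.3)
print(far, near)  # 0.2 0.45
```

Keeping the external loop off until the final stage, as the paragraph describes, means the vision-based correction only acts where its high positioning accuracy matters.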

(128) Stability

(129) Next, the stability of the control law given by Equation 15 is studied. Robot dynamics are generally given by the following equation.
R(q)\ddot{q} + \left(\tfrac{1}{2}\dot{R}(q) + S(q,\dot{q})\right)\dot{q} + g(q) = \tau   [Equation 16]
R(q): (3\times 3) inertia matrix
S(q,\dot{q}): (3\times 3) skew-symmetric (alternating) matrix of the nonlinear term
g(q): (3\times 1) vector showing the gravity term
\tau: (3\times 1) vector showing the joint torques
q: (3\times 1) vector showing the rotation angles of the joints

(130) Here, a variable is defined as follows.

(131) \varepsilon = \frac{\tilde{g}(\tilde{q}_d) - g(q_d) + G_p(\tilde{q}_d - q_d)}{g_I} + \int_{t_0}^{t} \sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T} \tilde{\Lambda}_j (p_{dj} - p_j)\,dt   [Equation 17]
Defined as G_I = g_I I;  I: identity matrix

(132) The following equation is obtained by substituting Equation 17 into Equation 16.

(133) R(q)\ddot{q} = -\left(\tfrac{1}{2}\dot{R}(q) + S(q,\dot{q})\right)\dot{q} - g(q) + g(q_d) + G_p(q_d - q) - G_D\dot{q} + G_{pc}\sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T}\tilde{\Lambda}_j (p_{dj} - p_j) + g_I\varepsilon   [Equation 18]

(134) Note that it is assumed that the following conditions constantly hold.

(135) \|p_{dj} - p_j\| \le c_1,\quad \|K\| \le c_2,\quad G_p = g_p I,\quad G_D = g_D I,\quad G_{pc} = g_{pc} I,\quad G_I = g_I I,\quad \operatorname{rank}\sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T}\tilde{\Lambda}_j = 6   [Equation 19]
c_1, c_2: positive constants;  K = \tilde{J}_j^{T}\tilde{\Lambda}_j;  \|\cdot\|: Euclidean norm

(136) Here, the following function V is considered as a Lyapunov function candidate.

(137) V = \tfrac{1}{2}(q_d - q)^{T} G_p (q_d - q) + \sum_{j=1}^{3} \tfrac{1}{2}(p_{dj} - p_j)^{T}(\alpha_j G_{pc} + \alpha_j G_D)(p_{dj} - p_j) + \tfrac{1}{2}\dot{q}^{T} R(q)\dot{q} + \tfrac{1}{2} g_I \varepsilon^{T}\varepsilon - \sum_{j=1}^{3} \alpha_j (p_{dj} - p_j)^{T}\tilde{\Lambda}_j^{T}\tilde{J}_j R(q)\dot{q} + g_I (q_d - q)^{T}\varepsilon   [Equation 20]

(138) V > 0 holds if the weight coefficients \alpha_j are properly small and g_I is sufficiently smaller than g_p.

(139) Further, a time derivative of the function V is given by the following equation.

(140) \dot{V} = -\dot{q}^{T} G_p (q_d - q) - \sum_{j=1}^{3} \dot{p}_j^{T}(\alpha_j G_{pc} + \alpha_j G_D)(p_{dj} - p_j) + \dot{q}^{T} R(q)\ddot{q} + \tfrac{1}{2}\dot{q}^{T}\dot{R}(q)\dot{q} + g_I \dot{\varepsilon}^{T}\varepsilon - \sum_{j=1}^{3} \alpha_j (p_{dj} - p_j)^{T}\tilde{\Lambda}_j^{T}\tilde{J}_j R(q)\ddot{q} + \sum_{j=1}^{3} \alpha_j \dot{p}_j^{T}\tilde{\Lambda}_j^{T}\tilde{J}_j R(q)\dot{q} - \sum_{j=1}^{3} \alpha_j (p_{dj} - p_j)^{T}\frac{d}{dt}\!\left(\tilde{\Lambda}_j^{T}\tilde{J}_j R(q)\right)\dot{q} - g_I \dot{q}^{T}\varepsilon + g_I (q_d - q)^{T}\dot{\varepsilon}   [Equation 21]

(141) By transforming Equation 21, the time derivative of the function V is given by the following equation.

(142) \dot{V} = -g_D \dot{q}^{T}\dot{q} - g_{pc}\xi^{T}\xi - (g_p - g_I)(q_d - q)^{T}\xi + g_{pc}\dot{q}^{T}\sum_{j=1}^{3} \alpha_j \left(\tilde{J}_j^{T}\tilde{\Lambda}_j - J_j^{T}\right)(p_{dj} - p_j) + g_D \dot{q}^{T}\sum_{j=1}^{3} \alpha_j \left(\tilde{J}_j^{T}\tilde{\Lambda}_j - J_j^{T}\right)(p_{dj} - p_j) + \dot{q}^{T}\left(g(q_d) - g(q)\right) - \xi^{T}\left(g(q_d) - g(q)\right) + \xi^{T}\left(\tfrac{1}{2}\dot{R} + S\right)\dot{q} + \sum_{j=1}^{3} \alpha_j \dot{p}_j^{T}\tilde{\Lambda}_j^{T}\tilde{J}_j R(q)\dot{q} - \sum_{j=1}^{3} \alpha_j (p_{dj} - p_j)^{T}\frac{d}{dt}\!\left(\tilde{\Lambda}_j^{T}\tilde{J}_j R(q)\right)\dot{q}   [Equation 22]

(143) Here, a variable is defined as follows.

(144) \xi = \sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T}\tilde{\Lambda}_j (p_{dj} - p_j)   [Equation 23]

(145) On this occasion, the following inequality is obtained for the time derivative of the function V.

(146) \dot{V} \le -g_D \dot{q}^{T}\dot{q} - g_{pc}\xi^{T}\xi + g_{pc}\sum_{j=1}^{3} \alpha_j \dot{q}^{T}\left(\tilde{J}_j^{T}\tilde{\Lambda}_j - J_j^{T}\right)(p_{dj} - p_j) + g_D \sum_{j=1}^{3} \alpha_j \dot{q}^{T}\left(\tilde{J}_j^{T}\tilde{\Lambda}_j - J_j^{T}\right)(p_{dj} - p_j) - (g_p - g_I)(q_d - q)^{T}\xi + (c_1 + c_3)\|\dot{q}\|^{2} + (c_2 + c_4)\|q_d - q\|^{2}   [Equation 24]

(147) The following relationship is utilized in obtaining the above inequality.

(148) \dot{q}^{T}\left(g(q_d) - g(q)\right) \le c_1 \|\dot{q}\|^{2} + c_2 \|q_d - q\|^{2}
\xi^{T}\left(\tfrac{1}{2}\dot{R}(q) + S(q,\dot{q})\right)\dot{q} + \sum_{j=1}^{3} \alpha_j \dot{p}_j^{T}\tilde{\Lambda}_j^{T}\tilde{J}_j R(q)\dot{q} - \sum_{j=1}^{3} \alpha_j (p_{dj} - p_j)^{T}\frac{d}{dt}\!\left(\tilde{\Lambda}_j^{T}\tilde{J}_j R(q)\right)\dot{q} \le c_3 \|\dot{q}\|^{2}
\xi^{T}\left(g(q_d) - g(q)\right) \le c_4 \|q_d - q\|^{2}   [Equation 25]
c_1, c_2, c_3, c_4: positive constants

(149) The following equation is obtained by Taylor expansion about the target angle qd.

(150) p_j(q) = p_j(q_d) + \frac{\partial p_j}{\partial q}(q - q_d) + \Gamma_j (q - q_d)   [Equation 26]

(151) Further, the following equation is obtained from Equation 26.
p_{dj} - p_j = (J_{dj} + \Gamma_j)(q_d - q)
J_{dj} = J_j(q_d)   [Equation 27]
\Gamma_j: higher-order coefficient of the Taylor expansion

(152) Further, it is assumed that the following equation holds.
\|J_{dj} + \Gamma_j\| \le c_5
\|\tilde{J}_j^{T}\tilde{\Lambda}_j - J_j^{T}\| \le c_6   [Equation 28]
c_5, c_6: positive constants

(153) On this occasion, the following inequality is obtained for the second term of Equation 24 for giving the time derivative of the function V.

(154) g_{pc}\sum_{j=1}^{3} \alpha_j \dot{q}^{T}\left(\tilde{J}_j^{T}\tilde{\Lambda}_j - J_j^{T}\right)(p_{dj} - p_j) = g_{pc}\sum_{j=1}^{3} \alpha_j \dot{q}^{T}\left(\tilde{J}_j^{T}\tilde{\Lambda}_j - J_j^{T}\right)(J_{dj} + \Gamma_j)(q_d - q) \le \tfrac{3}{2}\, g_{pc}\, c_6 c_5 \left(\|\dot{q}\|^{2} + \|q_d - q\|^{2}\right)   [Equation 29]

(155) Further, the following inequality is obtained for the third term of Equation 24 for giving the time derivative of the function V.

(156) \sum_{j=1}^{3} \alpha_j \dot{q}^{T}\left(G_D \tilde{J}_j^{T}\tilde{\Lambda}_j - J_j^{T} G_D\right)(p_{dj} - p_j) \le \tfrac{3}{2}\, g_D\, c_6 c_5 \left(\|\dot{q}\|^{2} + \|q_d - q\|^{2}\right)   [Equation 30]

(157) On this occasion, in order for the time derivative of the function V in Equation 24 to become negative, it is important that the following inequality holds.
(q_d - q)^{T}\xi \ge 0   [Equation 31]

(158) Here, the following equation holds.

(159) (q_d - q)^{T}\xi = (q_d - q)^{T}\sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T}\tilde{\Lambda}_j (p_{dj} - p_j) = (q_d - q)^{T}\sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T}\tilde{\Lambda}_j (J_{dj} + \Gamma_j)(q_d - q)   [Equation 32]

(160) Thus, a condition of Equation 31 can be transformed into the following condition.

(161) \sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T}\tilde{\Lambda}_j (J_{dj} + \Gamma_j) > 0   [Equation 33]

(162) Finally, the following inequality is obtained for the time derivative of the Lyapunov function.

(163) \dot{V} \le -g_{pc}\|\xi\|^{2} - \left[\left(1 - \tfrac{3}{2} c_6 c_5\right) g_D - c_1 - c_3 - \tfrac{3}{2} g_{pc} c_6 c_5\right]\|\dot{q}\|^{2} - \left[(g_p - g_I)\,\lambda_{\min}\!\left(\tfrac{\Phi + \Phi^{T}}{2}\right) - c_2 - c_4 - \tfrac{3}{2}(g_{pc} + g_D) c_6 c_5\right]\|q_d - q\|^{2}   [Equation 34]
Defined as \Phi = \sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T}\tilde{\Lambda}_j (J_{dj} + \Gamma_j)
\lambda_{\min} A: minimal eigenvalue of the matrix A

(164) On this occasion, the following equation is given as in Equation 33.

(165) \sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T}\tilde{\Lambda}_j (J_{dj} + \Gamma_j) > 0   [Equation 35]

(166) Accordingly, if the left side of Equation 35 is a positive definite matrix, i.e. if Equation 35 is satisfied, the time derivative of the function V can be negative and the respective reference points p1 to p3 are stable at their destinations pd1 to pd3 under the control law given by Equation 15. Thus, the manner of setting the reference points p1 to p3 and the resulting stability are considered next.
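The positive definiteness test behind Equation 35 can be sketched numerically. Since the matrix is not symmetric in general, the check below uses the minimal eigenvalue of its symmetric part, matching the λ_min term of Equation 34; all matrices are hypothetical stand-ins:

```python
import numpy as np

def stability_condition_holds(alphas, Jts, Lams, JdGs):
    """Equation 35: sum_j alpha_j * J~_j^T * Lam~_j * (Jd_j + Gamma_j) > 0,
    tested via the minimal eigenvalue of the symmetric part."""
    Phi = sum(a * Jt @ L @ JG for a, Jt, L, JG in zip(alphas, Jts, Lams, JdGs))
    return float(np.linalg.eigvalsh(0.5 * (Phi + Phi.T)).min()) > 0.0

I3 = np.eye(3)
# Identity stand-ins satisfy the condition; flipping the sign of the
# (Jd_j + Gamma_j) factors violates it.
ok = stability_condition_holds([1/3, 1/3, 1/3], [I3] * 3, [I3] * 3, [I3] * 3)
bad = stability_condition_holds([1/3, 1/3, 1/3], [I3] * 3, [I3] * 3, [-I3] * 3)
print(ok, bad)  # True False
```

Checking the symmetric part is sufficient because x^T M x = x^T ((M + M^T)/2) x for any square M, which is exactly the quadratic form appearing in Equation 32.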

(167) In the above second embodiment, the reference point p1 is set between the joints Q3 and Q4 and the reference points p2, p3 are set on the end effector 4 for the robot 2 having six degrees of freedom by coupling the joints Q1 to Q6. Since the joints Q4 to Q6 are not used to move the reference point p1 in this case, the following equation is obtained.
\dot{p}_1 = J_1 \dot{q} = [A_1\ \ O]\,\dot{q}   [Equation 36]
A_1: 3\times 3 matrix;  O: 3\times 3 matrix with all its components being zero;  [A_1\ \ O]: 3\times 6 matrix

(168) Accordingly, the left side of Equation 35 can be given by the following equation.

(169) \sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T}\tilde{\Lambda}_j (J_{dj} + \Gamma_j) = \alpha_1 \begin{bmatrix}\tilde{A}_1^{T}\\ O\end{bmatrix}\tilde{\Lambda}_1 \begin{bmatrix}A_1 + \Gamma_1 & O\end{bmatrix} + \alpha_2 \begin{bmatrix}\tilde{A}_2^{T}\\ \tilde{B}_2^{T}\end{bmatrix}\tilde{\Lambda}_2 \begin{bmatrix}A_2 + \Gamma_2 & B_2 + \Gamma_2\end{bmatrix} + \alpha_3 \begin{bmatrix}\tilde{A}_3^{T}\\ \tilde{B}_3^{T}\end{bmatrix}\tilde{\Lambda}_3 \begin{bmatrix}A_3 + \Gamma_3 & B_3 + \Gamma_3\end{bmatrix}   [Equation 37]

(170) Here, it is assumed that the following equation is satisfied.
\tilde{\Lambda}_j \approx I
\Gamma_j \approx O   [Equation 38]

(171) On this occasion, Equation 37 is equal to the following equation.

(172) \begin{bmatrix} \alpha_1 \tilde{A}_1^{T}A_1 + \alpha_2 \tilde{A}_2^{T}A_2 + \alpha_3 \tilde{A}_3^{T}A_3 & \alpha_2 \tilde{A}_2^{T}B_2 + \alpha_3 \tilde{A}_3^{T}B_3 \\ \alpha_2 \tilde{B}_2^{T}A_2 + \alpha_3 \tilde{B}_3^{T}A_3 & \alpha_2 \tilde{B}_2^{T}B_2 + \alpha_3 \tilde{B}_3^{T}B_3 \end{bmatrix}   [Equation 39]

(173) Here, if the weight coefficient α1 is set sufficiently larger than the other weight coefficients α2, α3, the following equation holds. That is, the respective reference points p1 to p3 are stable at their destinations pd1 to pd3.

(174) \sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T}\tilde{\Lambda}_j (J_{dj} + \Gamma_j) > 0   [Equation 40]

(175) It is assumed that the following equation is satisfied.
\tilde{A}_1^{T} A_1 > 0
\tilde{B}_2^{T} B_2 + \tilde{B}_3^{T} B_3 > 0   [Equation 41]
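A numeric sketch of this dominance argument, using deliberately contrived hypothetical 3×3 blocks: the conditions of Equation 41 hold, but a mismatched Jacobian estimate makes the Ã2ᵀA2 block indefinite, so only a sufficiently large α1 renders the Equation-39 matrix positive definite (A3 blocks are taken as zero for brevity):

```python
import numpy as np

I = np.eye(3)
# Equation 41 holds: A~1^T A1 = I > 0 and B~2^T B2 + B~3^T B3 = 2I > 0.
A1t_A1 = I
A2t_A2 = np.diag([-1.0, 1.0, 1.0])  # indefinite: mismatched estimate A~2
A2t_B2 = np.diag([-1.0, 1.0, 1.0])
B2t_A2 = I
BtB = 2.0 * I

def eq39_matrix(a1, a2=1.0):
    """Assemble the Equation-39 block matrix for given weights (A3 = O)."""
    top = np.hstack([a1 * A1t_A1 + a2 * A2t_A2, a2 * A2t_B2])
    bot = np.hstack([a2 * B2t_A2, BtB])
    return np.vstack([top, bot])

def min_eig(M):
    return float(np.linalg.eigvalsh(0.5 * (M + M.T)).min())

print(min_eig(eq39_matrix(0.1)) < 0)  # True: small alpha1, not positive definite
print(min_eig(eq39_matrix(5.0)) > 0)  # True: dominant alpha1, Equation 40 holds
```

With α1 = 0.1 the indefinite Ã2ᵀA2 block dominates the top-left corner, while with α1 = 5.0 the positive definite α1Ã1ᵀA1 term outweighs it, mirroring the statement that the specific reference point's weight must be set sufficiently large.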

(176) Further, if all the three reference points p1 to p3 are set on the end effector 4, the following equation holds instead of Equation 39. Thus, if the matrix on the right side of the following equation is a positive definite matrix, the respective reference points p1 to p3 are stable at their destinations pd1 to pd3.

(177) \sum_{j=1}^{3} \alpha_j \tilde{J}_j^{T}\tilde{\Lambda}_j (J_{dj} + \Gamma_j) = \begin{bmatrix} \alpha_1 \tilde{A}_1^{T}A_1 + \alpha_2 \tilde{A}_2^{T}A_2 + \alpha_3 \tilde{A}_3^{T}A_3 & \alpha_1 \tilde{A}_1^{T}B_1 + \alpha_2 \tilde{A}_2^{T}B_2 + \alpha_3 \tilde{A}_3^{T}B_3 \\ \alpha_1 \tilde{B}_1^{T}A_1 + \alpha_2 \tilde{B}_2^{T}A_2 + \alpha_3 \tilde{B}_3^{T}A_3 & \alpha_1 \tilde{B}_1^{T}B_1 + \alpha_2 \tilde{B}_2^{T}B_2 + \alpha_3 \tilde{B}_3^{T}B_3 \end{bmatrix}   [Equation 42]

(178) Alternatively, assuming that the position of the reference point p1 is controlled with three degrees of freedom realized by the joints Q1 to Q3 and the other reference points p2, p3 are mainly controlled with three degrees of freedom realized by the joints Q4 to Q6, the following equation using a partial Jacobian matrix can also be assumed.
\dot{p}_1 = J_1 \dot{q} = [A_1\ \ O]\,\dot{q}
\tilde{J}_2 = [O\ \ \tilde{B}_2]
\tilde{J}_3 = [O\ \ \tilde{B}_3]   [Equation 43]
B_2: 3\times 3 matrix;  B_3: 3\times 3 matrix;  [O\ \ \tilde{B}_2]: 3\times 6 matrix;  [O\ \ \tilde{B}_3]: 3\times 6 matrix

(179) If Equation 35 is satisfied in this assumption, the respective reference points p1 to p3 are stable at the destinations pd1 to pd3 thereof in the control law given by Equation 15.

(180) Others

(181) Note that the invention is not limited to the second embodiment described above, and various changes other than the aforementioned ones can be made without departing from the gist of the invention. For example, in the above embodiment, an estimate of a rotation matrix R of a coordinate system is used in the torque calculation process. In the above description, a specific technique for obtaining the estimate of the rotation matrix R is not particularly mentioned. However, various techniques can be employed. In a specific example, azimuth sensors, posture sensors or gyro sensors may be attached to the robot 2 and the cameras Ci, and the positional relationship between the robot 2 and the cameras Ci may be calculated from the respective sensor outputs to obtain the estimate of the rotation matrix R. In such a configuration, the azimuth angles of the robot 2 and the cameras Ci are obtained regardless of their arrangement. Thus, even if the robot 2 and the cameras Ci are arranged freely according to user operability and the operability of the robot 2, the estimate of the rotation matrix R can be obtained from their azimuth angles. Further, the positional relationship between the robot 2 and the cameras Ci may be, for example, automatically calculated by the program 7. Note that, in the case of using gyro sensors, the azimuth angles may be obtained from the integrated values of the respective sensors after the directions of the respective sensors of the robot 2 and the cameras Ci are aligned in the same direction.
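One way to realize the sensor-based estimate described in this paragraph can be sketched as follows. For brevity the sketch uses yaw-only (azimuth) rotations; real posture sensors would supply full 3-D orientations, and all angle values are hypothetical:

```python
import math

def Rz(theta):
    """Rotation about the Z axis (yaw); stands in for a full posture here."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def relative_rotation(R_cam, R_robot):
    """R_cam^T @ R_robot: orientation of the robot frame seen from the
    camera frame, composed from each device's absolute sensor reading."""
    return [[sum(R_cam[k][i] * R_robot[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

# Hypothetical absolute azimuths: camera at 30 deg, robot base at 75 deg.
R_est = relative_rotation(Rz(math.radians(30)), Rz(math.radians(75)))
# R_est equals a pure 45-degree yaw, the relative azimuth of the two frames.
```

Because each sensor reports an absolute orientation, the relative rotation follows from composition alone, which is why the arrangement of the robot 2 and the cameras Ci can be chosen freely.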

(182) Further, in the above embodiment, the imaging result of the robot 2 at the singular point is not fed back to the position control of the end effector by arranging the cameras C1, C2 so that the robot 2 at the singular point is located outside the fields of view of the cameras C1, C2. However, such a configuration is not essential and the robot 2 at the singular point may be located in the fields of view of the cameras C1, C2.

(183) Further, if the robot 2 at the singular point is located in the fields of view of the cameras C1, C2, feeding the imaging result of the robot 2 at the singular point back to the position control of the end effector may be prevented by changing the control mode of the controller 5. This suppresses the control of the robot 2 from becoming unstable due to the singular point.

(184) Further, a specific transformation mode from the camera coordinate systems to the task coordinate system is not limited to the aforementioned contents and can be appropriately changed.

(185) Further, the arrangement and number of the cameras Ci are not limited to the aforementioned contents and can be appropriately changed.

(186) Further, the numbers, dimensions and operating directions of the links L1 to L3 and the joints Q constituting the robot 2 can also be appropriately changed from those shown in the above embodiments. Thus, prismatic joints can also be used as the joints Q. Incidentally, in applying the invention to the prismatic joints Q, displacement amounts in directions of linear movement may be detected instead of the angles and forces may be calculated instead of the torques to control movements of the prismatic joints Q.

EXAMPLES

(187) Next, examples of the invention are shown. However, the invention is not limited by the following examples. Thus, the invention can be, of course, carried out while being appropriately changed within a range capable of conforming to the gist described above and below and any of such changes is included in the technical scope of the invention.

(188) The result of an experiment of moving the end effector 4 to the target position pd using a 3DOF (Three Degrees Of Freedom) robot unit produced by MMSE (Man-Machine Synergy Effectors) and having a configuration similar to that of the robot 2 of FIG. 1 is shown below. Specifically, it was confirmed by this experiment that the end effector 4 could be moved to the target position pd in a state where calibration was carried out for none of internal parameters of the cameras C1, C2, external parameters of the cameras C1, C2, the positions of the cameras C1, C2 and a relationship between the robot 2 and the cameras C1, C2.

(189) Note that the controller 5 for controlling the robot 2 was configured by an Intel Core i7 960 (CPU) and a DRAM (Dynamic Random Access Memory) of 4 GB, and cameras with 2 million pixels were used as the cameras C1, C2. The position p and the target position pd of the end effector 4 were detected by applying template matching to extraction results of edges of markers provided at the respective positions. The calibration of the robot 2 was allowed to have an error. Specifically, the values of the rotation angles q have an error of 5° with respect to 90°, and the lengths of the links L1 to L3 have an error within 100 [mm].

Example 1

(190) An equation expressing a control law of Example 1 is as follows.

(191) \tau = \tilde{g}(\tilde{q}_d) + G_P(\tilde{q}_d - q) - G_D \dot{q} + \tilde{J}_0^{T}\left[ G_{Pc} \begin{bmatrix} u_{d1} - u_1 \\ u_{d2} - u_2 \\ \dfrac{(v_{d1} - v_1) + (v_{d2} - v_2)}{2} \end{bmatrix} + G_I \int_{t_0}^{t} \begin{bmatrix} u_{d1} - u_1 \\ u_{d2} - u_2 \\ \dfrac{(v_{d1} - v_1) + (v_{d2} - v_2)}{2} \end{bmatrix} dt \right]   [Equation 44]
\tilde{J}_0: Jacobian matrix having constant components

(192) As shown in Equation 44, in Example 1, the U1 axis component of the positional deviation in the coordinate system of the camera C1 is employed as the X axis component of the positional deviation Δp in the task coordinate system, the U2 axis component of the positional deviation in the coordinate system of the camera C2 is employed as the Y axis component of the positional deviation Δp in the task coordinate system, and the average of the V1, V2 axis components of the positional deviations in the coordinate systems of the cameras C1, C2 is employed as the Z axis component of the positional deviation Δp in the task coordinate system. Further, since a Jacobian matrix is not required to be accurate according to the invention as described above, each component of the Jacobian matrix suffices to be a suitable constant. Accordingly, the value of each component of the Jacobian matrix at the target angle qd can be employed.
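The mapping of Equation 44's image-plane deviations onto the task coordinate axes can be sketched as follows (the function name and argument names are illustrative):

```python
def task_deviation(ud1, u1, vd1, v1, ud2, u2, vd2, v2):
    """Compose the task-frame deviation from the two camera images:
    X from camera C1's U axis, Y from camera C2's U axis, and Z from
    the average of both cameras' V-axis deviations (as in Equation 44)."""
    dx = ud1 - u1
    dy = ud2 - u2
    dz = ((vd1 - v1) + (vd2 - v2)) / 2.0
    return dx, dy, dz

# Hypothetical pixel coordinates for the two cameras.
print(task_deviation(10.0, 4.0, 8.0, 2.0, 7.0, 3.0, 6.0, 4.0))  # (6.0, 4.0, 4.0)
```

Averaging the two V-axis deviations uses both cameras' redundant views of the vertical direction, while each camera's U axis supplies one horizontal task axis on its own.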

(193) FIGS. 6 and 7 are graphs showing experimental results in Example 1. FIGS. 6 and 7 each show, in an upper part, a graph having a horizontal axis expressing time and a vertical axis expressing the positional deviation Δp and, in a lower part, a graph having a horizontal axis expressing time and a vertical axis expressing the target angle and the rotation angle. The X, Y and Z axis components are shown for the positional deviation Δp. The difference between FIGS. 6 and 7 is the feedback gains. Particularly, the integral gain is set larger in the experiment of FIG. 6 than in the experiment of FIG. 7.

(194) Each feedback gain in the experiment of FIG. 6 is as follows.
G_P = \mathrm{diag}(0.18, 0.24, 0.15)
G_D = \mathrm{diag}(2.5, 3.5, 2.0)
G_{Pc} = \mathrm{diag}(0.010, 0.010, 0.010)
G_I = \mathrm{diag}(0.00030, 0.00030, 0.00030)   [Equation 45]
diag: diagonal matrix having the components in parentheses as its diagonal elements

(195) Each feedback gain in the experiment of FIG. 7 is as follows.
G_P = \mathrm{diag}(0.40, 0.60, 0.10)
G_D = \mathrm{diag}(1.2, 1.8, 0.8)
G_{Pc} = \mathrm{diag}(0.050, 0.050, 0.050)
G_I = \mathrm{diag}(0.00010, 0.00010, 0.00010)   [Equation 46]

(196) As shown in the upper graphs of FIGS. 6 and 7, the positional deviation Δp converges to about 1 pixel in both experimental results; in other words, the position of the end effector 4 can be controlled with an accuracy of about 0.3 [mm]. That is, the end effector 4 can be moved to the target position pd even though calibration is carried out for none of the internal parameters of the cameras C1, C2, the external parameters of the cameras C1, C2, the positions of the cameras C1, C2 and the relationship between the robot 2 and the cameras C1, C2.

Example 2

(197) FIGS. 8 to 11 are graphs showing experimental results in Example 2. The notation in FIGS. 8 and 10 corresponds to that in the upper parts of FIGS. 6 and 7, and the notation in FIGS. 9 and 11 corresponds to that in the lower parts of FIGS. 6 and 7. Time 0 in each figure is the time at which the control of the robot 2 was started. Two experimental results having different positional relationships of the cameras C1, C2 and different paths of the end effector 4 are shown in this Example 2. As in Example 1, calibration is omitted.

(198) In an experiment shown in FIGS. 8 and 9, the cameras C1, C2 were so arranged that a range of 30 [cm] from the target position pd was substantially captured in the fields of view of the cameras C1, C2 and a movement of the end effector 4 was started inside the fields of view of the cameras C1, C2. In an experiment shown in FIGS. 10 and 11, the cameras C1, C2 were so arranged that a range of 5 [cm] from the target position pd was substantially captured in the fields of view of the cameras C1, C2 and a movement of the end effector 4 was started outside the fields of view of the cameras C1, C2.

(199) As shown in FIGS. 8 and 10, the positional deviation Δp can be converged regardless of the distances between the cameras C1, C2 and the target position pd. Further, the positional deviation Δp can be converged regardless of whether the movement of the end effector 4 is started inside or outside the fields of view of the cameras C1, C2. From these results, it is found that the positional deviation can be converged without being substantially affected by the positional relationships of the cameras C1, C2 or the paths of the end effector 4, and that rough settings of these suffice.

Example 3

(200) FIGS. 12 and 13 are graphs showing an experimental result in Example 3. Note that the notation in FIG. 12 corresponds to that in the upper parts of FIGS. 6 and 7, and the notation in FIG. 13 corresponds to that in the lower parts of FIGS. 6 and 7. Time 0 in each figure is the time at which the control of the robot 2 was started. In this Example 3, the proportional gain for the positional deviation Δp is set at zero and the proportional control is not executed on the positional deviation Δp. Note that the cameras C1, C2 were so arranged that a range of 5 [cm] from the target position pd was substantially captured in the fields of view of the cameras C1, C2, and the movement of the end effector 4 was started outside the fields of view of the cameras C1, C2. As in Example 1, calibration is omitted. As shown in FIG. 12, it is found that the positional deviation Δp can be converged by the integral control on the positional deviation Δp even if the proportional control is not executed on the positional deviation Δp.

(201) Examples 1 to 3 described above show the results of simulations using a robot configured similarly to the robot 2 of FIG. 1. Next, the result of a simulation of moving the reference points p set on the robot 2 to the destinations pd using a robot configured similarly to the robot 2 of FIG. 4 is described. Specifically, it was confirmed by this simulation that the end effector 4 could be moved to the target position pd by converging the reference points p to the destinations pd in a state where calibration was carried out for none of: the internal parameters of the cameras, the external parameters of the cameras, the positions of the cameras, and the relationship between the robot 2 and the cameras.

(202) Further, in carrying out this simulation, an Intel Core i7 960 (CPU) and 4 GB of DRAM (Dynamic Random Access Memory) were used as the controller 5 for controlling the robot 2, and cameras with 2 million pixels were used as the cameras C1, C2. Calibration of the robot 2 was allowed to have an error. Specifically, an error of 2.0% was introduced to the joint angles and joint angular velocities, and an error of 5.0% was introduced to the lengths of the links. Note that in this Example, p in the above embodiment is changed to x and j in the above embodiment is changed to i.
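As a hedged sketch of how such calibration errors could be injected into a simulation (the paragraph states only the magnitudes; the uniform random sampling scheme and every name and value below are assumptions), nominal kinematic parameters can be scaled by bounded relative errors:

```python
# Hypothetical error-injection sketch: scale nominal values by a random
# relative error within +/- fraction, mirroring the stated 2.0% error on
# joint states and 5.0% error on link lengths.
import random

def perturb(value, fraction):
    """Scale a nominal value by a random relative error within +/- fraction."""
    return value * (1.0 + random.uniform(-fraction, fraction))

random.seed(0)                                   # reproducible illustration
link_lengths = [0.30, 0.25, 0.15]                # nominal lengths (assumed)
noisy_links = [perturb(L, 0.05) for L in link_lengths]    # 5.0% error
joint_angles = [0.1, 0.5, -0.3]                  # nominal angles (assumed)
noisy_angles = [perturb(q, 0.02) for q in joint_angles]   # 2.0% error

# Every perturbed value stays within the stated relative error bound.
assert all(abs(n / L - 1.0) <= 0.05 for n, L in zip(noisy_links, link_lengths))
```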

Example 4

(203) As in the above embodiment, in Example 4, the reference point p1 (specific reference point) is set between the joints Q3 and Q4 and the reference points p2, p3 are set on the end effector 4. An equation expressing a control law of Example 4 is as follows.

(204)

$$\tau = \tilde{g}(\tilde{q}_d) + G_P(\tilde{q}_d - \tilde{q}) - G_D\,\dot{\tilde{q}} + \sum_{i=1}^{3}\left\{ G_{Pci}\,\tilde{J}_i^{T}(\tilde{q})\,\tilde{\Omega}_i(\tilde{x}_i)\,[x_{di} - x_i] + G_{Ici}\int_{t_0}^{t}\tilde{J}_i^{T}(\tilde{q})\,\tilde{\Omega}_i(\tilde{x}_i)\,[x_{di} - x_i]\,dt \right\} \qquad [\text{Equation 47}]$$

(205) Each of the gains in Equation 47 is a diagonal matrix, and its diagonal components are as shown in the table of FIG. 14. Here, FIG. 14 is a table showing the set values of the gains in Example 4. The diagonal components of each gain G_P, . . . corresponding to the joint angles q1 to q6 are shown in the table of FIG. 14. Here, the gain G_D is set large to suppress vibration of the fifth joint Q5. Further, the proportional gain (=0.20) for the positional deviation p1 is set larger than the proportional gains (=0.10) for the positional deviations p2, p3, and the integral gain (=0.006) for the positional deviation p1 is set larger than the integral gains (=0.004) for the positional deviations p2, p3.
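One control step of the Equation 47 law can be sketched as follows (a hypothetical NumPy illustration; the array shapes, the function and parameter names, and the identity camera-mapping matrices are all assumptions, not the patent's implementation). Joint-space PD terms around the target angles are summed with, for each reference point, a proportional and an integral term on the camera-observed positional deviation mapped through the transposed Jacobian:

```python
# Sketch of one Equation 47 control step (assumed shapes and names):
# tau = g(q_d) + G_P (q_d - q) - G_D dq + sum_i [ G_Pci J_i^T Om_i e_i
#                                               + G_Ici integral of the same ]
import numpy as np

def control_torque(q, q_dot, q_d, g, jacobians, omegas, devs, integrals,
                   G_P, G_D, G_Pc, G_Ic, dt):
    tau = g(q_d) + G_P @ (q_d - q) - G_D @ q_dot       # joint-space terms
    for i, (J, Om, e) in enumerate(zip(jacobians, omegas, devs)):
        mapped = J.T @ (Om @ e)        # camera deviation -> joint torques
        integrals[i] += mapped * dt    # running integral of the mapped term
        tau += G_Pc[i] @ mapped + G_Ic[i] @ integrals[i]
    return tau

# Hypothetical one-point example with diagonal gains (values from FIG. 14
# are reused for illustration only).
n = 6
I6 = np.eye(n)
q, q_dot, q_d = np.zeros(n), np.zeros(n), np.full(n, 0.5)
integrals = [np.zeros(n)]
tau = control_torque(q, q_dot, q_d, lambda qd: np.zeros(n),
                     [np.eye(3, n)], [np.eye(3)], [np.ones(3)],
                     integrals, 0.1 * I6, 0.05 * I6,
                     [0.2 * I6], [0.006 * I6], 0.001)
print(tau.shape)  # → (6,)
```

Because every gain is diagonal, each joint's torque contribution can be tuned independently, which is how the table of FIG. 14 assigns per-joint values.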

(206) FIG. 15 is a graph showing a simulation result in Example 4. As shown in FIG. 15, each of the positional deviations p1 to p3 converges to zero and the end effector 4 reaches the target position. Further, since each gain for the positional deviation p1 is set larger than the corresponding gain for the positional deviations p2, p3, the positional deviation p1 converges more quickly than the positional deviations p2, p3.

Example 5

(207) The set positions of the reference points p1 to p3 in Example 5 are the same as in Example 4. On the other hand, the control law of Example 5 does not include a proportional control term for the positional deviations p and is equivalent to Equation 47 with the proportional gains for the positional deviations p set at zero. Specific set values of the gains are as follows.

(208) FIG. 16 is a table showing the set values of the gains in Example 5. The diagonal components of each gain G_P, . . . corresponding to the joint angles q1 to q6 are shown in the table of FIG. 16. The integral gain (=0.032) for the positional deviation p2 is set larger than the integral gains (=0.02) for the positional deviations p1, p3.

(209) FIG. 17 is a graph showing a simulation result in Example 5. As shown in FIG. 17, each of the positional deviations p1 to p3 converges to zero and the end effector 4 reaches the target position.

Example 6

(210) In Example 6, all the reference points p1 to p3 are set on the end effector 4. A control law of Example 6 is similar to Equation 47 shown in Example 4. Specific set values of the gains are as follows.

(211) FIG. 18 is a table showing the set values of the gains in Example 6. The diagonal components of each gain G_P, . . . corresponding to the joint angles q1 to q6 are shown in the table of FIG. 18. Here, the gain G_D is set large to suppress vibration of the fifth joint Q5.

(212) FIG. 19 is a graph showing a simulation result in Example 6. As shown in FIG. 19, each of the positional deviations p1 to p3 converges to zero and the end effector 4 reaches the target position.

Example 7

(213) In Example 7, all the reference points p1 to p3 are set on the end effector 4. The control law of Example 7 is similar to Equation 47. However, the reference points p and the destinations pd are detected by three cameras, and a coordinate transformation from the coordinate systems of the cameras to the task coordinate system is carried out based on the following equation.

(214)

$$\tilde{\Omega}_i(\tilde{x}_i)\,[x_{di} - x_i] = \begin{bmatrix} \bigl((u_{di}^{1} - u_i^{1}) + (u_{di}^{3} - u_i^{3})\bigr)/2 \\ \bigl((u_{di}^{2} - u_i^{2}) + (v_{di}^{3} - v_i^{3})\bigr)/2 \\ \bigl((v_{di}^{1} - v_i^{1}) + (v_{di}^{2} - v_i^{2})\bigr)/2 \end{bmatrix} \quad (i = 1, 2, 3) \qquad [\text{Equation 48}]$$
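The Equation 48 transformation can be sketched as follows (a hypothetical illustration; the function name, the dictionary representation of the image deviations, and the reading that each task-space axis is observed by exactly two cameras are assumptions based on the equation's rows). Each of the three cameras reports an image-plane deviation (Δu, Δv) for a reference point, and each task-space component is recovered as the average of the two image axes that observe it:

```python
# Sketch of the Equation 48 mapping (hypothetical names): average the two
# camera image axes that observe each task-space axis.
def task_space_deviation(du, dv):
    """du[k], dv[k] are image-plane deviations from camera k (k = 1, 2, 3)."""
    x = (du[1] + du[3]) / 2.0   # first row: u axes of cameras 1 and 3
    y = (du[2] + dv[3]) / 2.0   # second row: u axis of camera 2, v axis of 3
    z = (dv[1] + dv[2]) / 2.0   # third row: v axes of cameras 1 and 2
    return (x, y, z)

# Example: every camera reports a 4-unit deviation on each observed axis.
du = {1: 4.0, 2: 4.0, 3: 4.0}
dv = {1: 4.0, 2: 4.0, 3: 4.0}
print(task_space_deviation(du, dv))  # → (4.0, 4.0, 4.0)
```

Averaging two independent observations of each axis reduces the effect of per-camera detection noise without requiring calibrated camera parameters.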

(215) FIG. 20 is a table showing the set values of the gains in Example 7. The diagonal components of each gain G_P, . . . corresponding to the joint angles q1 to q6 are shown in the table of FIG. 20. FIG. 21 is a graph showing a simulation result in Example 7. As shown in FIG. 21, each of the positional deviations p1 to p3 converges to zero and the end effector 4 reaches the target position.

Example 8

(216) In Example 8, five reference points p1 to p5 are all set on the end effector 4. A control law of Example 8 is as follows.

(217)

$$\tau = \tilde{g}(\tilde{q}_d) + G_P(\tilde{q}_d - \tilde{q}) - G_D\,\dot{\tilde{q}} + \sum_{i=1}^{5}\left\{ G_{Pci}\,\tilde{J}_i^{T}(\tilde{q})\,\tilde{\Omega}_i(\tilde{x}_i)\,[x_{di} - x_i] + G_{Ici}\int_{t_0}^{t}\tilde{J}_i^{T}(\tilde{q})\,\tilde{\Omega}_i(\tilde{x}_i)\,[x_{di} - x_i]\,dt \right\} \qquad [\text{Equation 49}]$$

(218) Further, the reference points p and the destinations pd are detected by three cameras, and a coordinate transformation from the coordinate systems of the cameras to the task coordinate system is carried out based on the following equation.

(219)

$$\tilde{\Omega}_i(\tilde{x}_i)\,[x_{di} - x_i] = \begin{bmatrix} \bigl((u_{di}^{1} - u_i^{1}) + (u_{di}^{3} - u_i^{3})\bigr)/2 \\ \bigl((u_{di}^{2} - u_i^{2}) + (v_{di}^{3} - v_i^{3})\bigr)/2 \\ \bigl((v_{di}^{1} - v_i^{1}) + (v_{di}^{2} - v_i^{2})\bigr)/2 \end{bmatrix} \quad (i = 1, 2, 3, 4, 5) \qquad [\text{Equation 50}]$$

(220) FIG. 22 is a table showing the set values of the gains in Example 8. The diagonal components of each gain G_P, . . . corresponding to the joint angles q1 to q6 are shown in the table of FIG. 22. FIG. 23 is a graph showing a simulation result in Example 8. As shown in FIG. 23, each of the positional deviations p1 to p5 of the five reference points converges to zero and the end effector 4 reaches the target position.

INDUSTRIAL APPLICABILITY

(221) This invention can be suitably applied to technologies concerning a robot configured to include a freely rotatable joint and move an end effector according to the rotation of the joint. For example, the invention can be applied to assembling parts or packing boxes by use of the robot.

REFERENCE SIGNS LIST

(222) 1 . . . robot system, 2 . . . robot, 3 . . . arm, 30 . . . link, 4 . . . end effector, 5 . . . controller, 6 . . . recording medium, 7 . . . program, C1 . . . camera, C2 . . . camera, Ci . . . camera, L1 . . . link, L2 . . . link, L3 . . . link, Q1 to Q6 . . . joint, Q . . . joint, Qn . . . joint, En . . . encoder, p . . . position (reference point), pd . . . target position (destination), Δp . . . positional deviation, q . . . rotation angle, qd . . . target angle, Δq . . . angular deviation, τ . . . torque, Lq . . . internal loop, Lx . . . external loop