ROBOT APPARATUS HAVING LEARNING FUNCTION
20170205802 · 2017-07-20
CPC classification
G05B2219/39352
G05B19/404
Abstract
A robot apparatus includes a robot mechanism; a sensor, provided in a portion of the robot mechanism whose position is to be controlled, for detecting a physical quantity to obtain positional information of the portion; and a robot controller having an operation control unit for controlling the operation of the robot mechanism. The robot controller includes a learning control unit for calculating a learning correction value to improve a specific operation of the robot mechanism, based on the physical quantity detected with the sensor while the operation control unit makes the robot mechanism perform the specific operation; and a learning extension unit for obtaining the relationship between the learning correction value calculated by the learning control unit and information about the learned specific operation, and for calculating, without the sensor, another learning correction value to improve a new operation by applying the obtained relationship to information about the new operation.
Claims
1. A robot apparatus comprising: a robot mechanism; a sensor, provided in a portion of the robot mechanism the position of which is to be controlled, for detecting a physical quantity to directly or indirectly obtain positional information of the portion; and a robot controller having an operation control unit for controlling the operation of the robot mechanism, wherein the robot controller includes: a learning control unit for calculating a learning correction value to improve a specific operation of the robot mechanism, based on the physical quantity detected with the sensor while the operation control unit makes the robot mechanism perform the specific operation; and a learning extension unit for obtaining the relationship between the learning correction value calculated by the learning control unit and information about the learned specific operation, and for calculating, without the sensor, another learning correction value to improve a new operation that is different from the specific operation of the robot mechanism, by applying the obtained relationship to information about the new operation.
2. The robot apparatus according to claim 1, wherein the learning extension unit obtains a transfer function between the learning correction value and the operation information from spectrograms of the learning correction value calculated by the learning control unit and the information about the specific operation of the robot mechanism, and calculates the learning correction value for the new operation based on the transfer function by input of the information about the new operation.
3. The robot apparatus according to claim 1, wherein the operation information of the robot mechanism includes at least one of position, operation velocity, acceleration, and inertia.
4. The robot apparatus according to claim 1, wherein the robot controller makes the operation control unit operate the robot mechanism at a maximum speed or a maximum acceleration allowed by the robot mechanism or in a simulation mode to obtain the operation information on the robot mechanism, and the learning extension unit calculates the learning correction value for the new operation based on the operation information.
5. The robot apparatus according to claim 1, wherein the specific operation is an operation used in an operation program, or an operation in or about an X axis, a Y axis, or a Z axis automatically generated in a specified operation range.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The objects, features, and advantages of the present invention will be more apparent from the following description of an embodiment in conjunction with the attached drawings.
DETAILED DESCRIPTION OF THE INVENTION
[0018] A robot apparatus according to an embodiment of the present invention will be described below with reference to the drawings.
[0019] The sensor 2 is provided in a portion, the position of which is to be controlled, of the robot mechanism 1. The sensor 2 detects a physical quantity to directly or indirectly obtain positional information of the portion. For example, an acceleration sensor is usable as the sensor 2. The portion, the position of which is to be controlled, of the robot mechanism 1 is, for example, a spot welding gun provided in a spot welding robot. However, the portion of the robot apparatus is not limited thereto, but may be another portion.
[0020] The robot controller 4 has an operation control unit 3 for controlling the operation of the robot mechanism 1. The operation control unit 3 drives the robot mechanism 1 in response to an operation command received from a host control device (not-shown) or the like, and also controls the operation of the robot mechanism 1 based on feedback data from the robot mechanism 1. As a feedback control, at least one of a position feedback control, a velocity feedback control, and a current feedback control is available. By the feedback control, the position or velocity of the robot mechanism 1 is controlled so as to coincide with a command position or a command velocity.
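As a rough illustration of the feedback control described above, the sketch below closes a proportional position loop around a first-order plant. The gain, time step, and integrator plant model are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of position feedback control: each control cycle, the position
# error between command and measurement is turned into a velocity command.
# The proportional gain, time step, and integrator plant are assumptions.

def position_feedback_step(command_pos: float, measured_pos: float, kp: float = 2.0) -> float:
    """One control cycle: velocity command proportional to the position error."""
    return kp * (command_pos - measured_pos)

pos = 0.0   # measured position of the controlled portion
dt = 0.01   # control period in seconds (assumed)
for _ in range(1000):
    v_cmd = position_feedback_step(1.0, pos)   # command position = 1.0
    pos += v_cmd * dt                          # simple integrator plant

# pos now tracks the command position closely
```

Under this loop the measured position converges toward the command position, which is the "coincide with a command position" behavior the paragraph describes.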
[0021] The robot controller 4 further includes a learning control unit 5 and a learning extension unit 6. The learning control unit 5 calculates a learning correction value to improve a specific operation of the robot mechanism 1, based on a physical quantity detected with the sensor 2 while the operation control unit 3 makes the robot mechanism 1 perform the specific operation. The specific operation performed by the robot mechanism 1 includes, for example, spot welding operations in an arrow direction (Y direction) as shown in the drawings.
[0022] When an acceleration sensor is used as the sensor 2, the sensor 2 detects the acceleration of the portion, the position of which is to be controlled, of the robot mechanism 1, as the physical quantity. The learning correction value calculated by the learning control unit 5 is added to a command value from the operation control unit 3 in an adder 7, and the corrected command value is inputted to the robot mechanism 1.
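The role of the adder 7 can be shown in a few lines; the command and correction samples below are stand-in values, assuming per-sample position commands.

```python
import numpy as np

# The adder 7 sums the command value from the operation control unit 3 and the
# learning correction value; the corrected command goes to the robot mechanism 1.
command = np.array([0.00, 0.10, 0.20, 0.30])      # command values (stand-in)
correction = np.array([0.00, 0.01, -0.02, 0.00])  # learning correction values (stand-in)

corrected_command = command + correction
```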
[0023] The present invention proposes a method and apparatus for calculating, without a sensor, another learning correction value for an unlearned operation, based on the operation information of the learned operation and the generated learning correction value. To be more specific, while the robot performs the specific operation, the correction value is learned to improve the operation of the robot. Next, a transfer function between the operation information and the learning correction value is obtained based on spectrograms of the information about the specific operation and the learning correction value. Using this transfer function, another learning correction value is calculated from information about an unlearned operation, without a sensor.
[0024] The learning extension unit 6 obtains the relationship between the learning correction value calculated by the learning control unit 5 and the information about the learned specific operation. The learning extension unit 6 then calculates, without the sensor, another learning correction value by applying the obtained relationship to information about a new operation that is different from the specific operation of the robot mechanism 1, to improve the new operation.
[0025] To be more specific, after the robot mechanism 1 performs the specific operation and the learning correction value is obtained, the robot mechanism 1 is operated based on a new operation command. During the shutdown period or during the operation, the learning extension unit 6 calculates the transfer function that represents the relationship between the operation information and the learning correction value stored in the memory 8. The new operation information obtained from the operation command is inputted to the learning extension unit 6. The learning extension unit 6 calculates the learning correction value for the new operation based on the above transfer function. The learning correction value is added to the command value outputted from the operation control unit 3.
[0026] As described above, the learning extension unit 6 obtains the transfer function between the learning correction value and the operation information from the spectrograms of the learning correction value calculated by the learning control unit 5 and the specific operation information of the robot mechanism 1, and calculates the other learning correction value for the new operation based on the transfer function by input of the information about the new operation.
[0027] In the robot controller 4, the operation control unit 3 may make the robot mechanism 1 operate at a maximum speed or a maximum acceleration allowable by the robot mechanism 1 or in a simulation mode, to obtain the operation information on the robot mechanism 1.
[0028] Note that the learning control unit 5 and the learning extension unit 6 are not operated at the same time. The learning control unit 5 works only during learning and does not work during reproduction (operation) after the completion of the learning. Conversely, the learning extension unit 6 works only during reproduction and does not work during learning.
[0029] Next, a method for calculating the transfer function will be described.
[0031] Next, in step S2, spectrograms of the operation information (for example, velocity data) and the learning correction value are calculated in each section by a short-time Fourier transform, as follows.
Y.sub.i(m,f)=Σ.sub.k=1.sup.n w(k)y.sub.i(j.sub.m−n+k)e.sup.−2πjfk/n (1)
[0032] Here, y.sub.i represents a velocity, Y.sub.i represents the obtained spectrogram of the velocity, w(k) represents a window function, k represents an index, m represents a section number, j.sub.m represents an index at the end time of the section m, f represents an index of a section of a frequency, n represents the number of data of a predetermined operation, and a numerical subscript i represents an axial number that indicates the i-th axis of a plurality of axes. For example, in a six-axis articulated robot, i is 1 to 6.
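Step S2 can be sketched with an off-the-shelf short-time Fourier transform; the sample rate, window length, and stand-in velocity signal below are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0    # control-loop sample rate in Hz (assumed)
nperseg = 64   # samples per section, i.e. the window length n (assumed)

# Stand-in velocity data y_i for one axis: a 5 Hz oscillation.
t = np.arange(2000) / fs
y_i = np.sin(2 * np.pi * 5.0 * t)

# Y_i(m, f): rows are frequency bins f, columns are time sections m.
f, m, Y_i = stft(y_i, fs=fs, window="hann", nperseg=nperseg)
```

The same call applied to the learning correction value `x_i` yields `X_i`, the spectrogram used in equations (4) and (5).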
[0033] When STFT represents the short-time Fourier transform, the above equation (1) is represented as follows.
Y.sub.i=STFT{y.sub.i} (2)
[0034] When Y.sub.i is already known, a process of obtaining y.sub.i is represented as follows using an inverse transform ISTFT.
y.sub.i=ISTFT{Y.sub.i} (3)
[0035] The spectrogram of the learning correction value and an inverse transform thereof are calculated in a like manner.
X.sub.i=STFT{x.sub.i} (4)
x.sub.i=ISTFT{X.sub.i} (5)
[0036] Here, x.sub.i represents the learning correction value, X.sub.i represents the obtained spectrogram of the learning correction value, and a numerical subscript i represents an axial number.
[0037] Next, in step S3, the transfer function between the velocity and the learning correction value is obtained in each section as follows.
C.sub.i(m,f)=X.sub.i(m,f)/Y.sub.i(m,f) (6)
[0038] Here, C.sub.i represents the transfer function between the velocity and the learning correction value, m represents a section number, and the numerical subscript i represents an axial number.
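With both spectrograms in hand, the transfer function can be computed bin by bin. The stand-in signals and the small regularizing constant below are assumptions; the correction here is simply half the velocity, so the recovered transfer function should come out flat.

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
y_i = rng.standard_normal(1024)   # velocity of axis i (stand-in)
x_i = 0.5 * y_i                   # learning correction value (stand-in: half the velocity)

_, _, Y_i = stft(y_i, nperseg=64)
_, _, X_i = stft(x_i, nperseg=64)

# Bin-by-bin transfer function between velocity and correction; a tiny epsilon
# guards against division by near-zero bins (an assumption, not in the source).
eps = 1e-12
C_i = X_i / (Y_i + eps)           # C_i(m, f) ~= 0.5 for this stand-in data
```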
[0040] P.sub.t represents the position of the portion to be controlled at a certain time obtained from an operation command.
[0041] First, in step S11, the new operation position is obtained from the operation command. The average distance between the new operation position and each section is calculated, and the section having the shortest average distance is selected. The distance between each interpolated point of a learned position and P.sub.t (x.sub.pt, y.sub.pt, z.sub.pt) is calculated as follows.
d.sub.k={square root over ((x.sub.pk−x.sub.pt).sup.2+(y.sub.pk−y.sub.pt).sup.2+(z.sub.pk−z.sub.pt).sup.2)} (7)
[0042] Here, P.sub.k represents a position (x.sub.pk, y.sub.pk, z.sub.pk) at a time k.
[0043] The average distance S.sub.m between P.sub.t and the points P.sub.k is calculated over each section as follows, and the section having the shortest average distance to P.sub.t is selected. Alternatively, a section having the closest inertia may be selected.
S.sub.m=(1/n)Σ.sub.k=1.sup.n d.sub.k (8)
[0044] Here, m represents a section number and n represents the number of data in the section m.
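The nearest-section selection of step S11 can be sketched directly; the section paths and the target point P_t below are stand-in coordinates.

```python
import numpy as np

# Learned positions P_k grouped into sections m (stand-in 3-D coordinates).
sections = {
    1: np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
    2: np.array([[5.0, 0.0, 0.0], [5.0, 1.0, 0.0]]),
}
P_t = np.array([0.1, 0.5, 0.0])   # new operation position from the command

def select_section(sections, P_t):
    """Return the section number m whose mean distance S_m to P_t is smallest."""
    S = {m: float(np.mean(np.linalg.norm(P - P_t, axis=1))) for m, P in sections.items()}
    return min(S, key=S.get)

m_best = select_section(sections, P_t)   # section 1 lies closest to P_t here
```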
[0045] Next, in step S12, a new learning correction value for P.sub.t is calculated from velocity information based on a transfer function of the selected section.
{dot over (P)}.sub.joint=STFT{{dot over (p)}.sub.joint} (9)
Q.sub.i=C.sub.i(m,f){dot over (P)}.sub.joint (10)
q.sub.i=ISTFT{Q.sub.i} (11)
[0046] Here, p.sub.joint represents the position of each axis, on the selected section, corresponding to P.sub.t, {dot over (p)}.sub.joint represents the velocity, i.e., the differentiation of p.sub.joint, {dot over (P)}.sub.joint represents the spectrogram of {dot over (p)}.sub.joint, q.sub.i represents the learning correction value for the position P.sub.t, Q.sub.i represents the spectrogram of q.sub.i, and the numerical subscript i represents an axial number.
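Equations (9) to (11) amount to filtering the new joint velocity through the selected section's transfer function in the spectrogram domain. Everything below (signals, sample rate, window length, the flat transfer function) is a stand-in assumption chosen so the round trip is easy to check.

```python
import numpy as np
from scipy.signal import stft, istft

fs, nperseg = 1000.0, 64          # sample rate and window length (assumed)
rng = np.random.default_rng(1)

# Learned data: the correction x_i is 0.3 times the velocity y_i (stand-in).
y_i = rng.standard_normal(4096)
x_i = 0.3 * y_i
_, _, Y_i = stft(y_i, fs=fs, nperseg=nperseg)
_, _, X_i = stft(x_i, fs=fs, nperseg=nperseg)
C_i = X_i / (Y_i + 1e-12)         # transfer function of the selected section

# New operation: joint velocity derived from the operation command (stand-in).
p_dot_joint = rng.standard_normal(4096)

_, _, P_dot = stft(p_dot_joint, fs=fs, nperseg=nperseg)   # eq. (9)
Q_i = C_i * P_dot                                          # eq. (10)
_, q_i = istft(Q_i, fs=fs, nperseg=nperseg)                # eq. (11)
# q_i is the sensorless learning correction value for the new operation.
```

Because the stand-in transfer function is flat at 0.3, the recovered correction is simply 0.3 times the new velocity, which is what a correct STFT/ISTFT round trip should produce.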
[0047] Next, in step S13, the learning correction value q.sub.i calculated in step S12 is added to the operation command from the operation control unit 3, and the result is outputted to the robot mechanism 1.
[0048] The velocity data is used as the operation information in the above description, but is not limited thereto, and position, acceleration, inertia, or the like may be used as the operation information instead.
[0049] As described above, the present invention allows calculating the transfer function between the operation information and the learning correction value on a robot that performs the specific operation. The use of the transfer function then allows calculating another learning correction value for an unlearned operation, without using a sensor. This eliminates the need for setting up the sensor and the burden of relearning, thus reducing the setup time for learning. The present invention is also applicable to systems that detect a workpiece in an unfixed position using a vision sensor or the like.
[0050] According to the robot apparatus of the present invention, obtaining the relationship between the learning correction value and the operation information allows calculating another learning correction value for an unlearned new operation, thus eliminating the need for a relearning operation performed with a sensor. The present invention is also applicable to systems that perform a tracking operation using a vision sensor or the like, to which conventional techniques cannot be applied.