Technique for Parameter Conversion Between a Robotic Device and a Controller for the Robotic Device
20220219322 · 2022-07-14
CPC classification
G05B2219/39295 (PHYSICS)
G05B2219/39056 (PHYSICS)
Abstract
An apparatus for parameter transformation between a robotic device and a controller for the robotic device is presented. The apparatus is configured to receive at least one non-transformed parameter indicative of at least one of a position and a velocity of or for the robotic device, wherein the non-transformed parameter is received from the robotic device or the controller, respectively. The apparatus is further configured to transform the received non-transformed parameter to a transformed parameter using a transformation between an ideal parameter domain of the controller and a real parameter domain of the robotic device, the ideal parameter domain including ideal parameters processable by the controller and the real parameter domain including real parameters measurable at the robotic device and capable of deviating from the associated ideal parameters. The apparatus is further configured to transmit the transformed parameter to the other one of the robotic device and the controller. A method, a system, a computer-program product and a cloud computing system for parameter transformation are also presented.
Claims
1. An apparatus for parameter transformation between a robotic device and a controller for the robotic device, the apparatus being configured to: receive at least one non-transformed parameter indicative of at least one of a position and a velocity of or for the robotic device, wherein the non-transformed parameter is received from the robotic device or the controller, respectively; transform the received non-transformed parameter to a transformed parameter using a transformation between an ideal parameter domain of the controller and a real parameter domain of the robotic device, the ideal parameter domain including ideal parameters processable by the controller and the real parameter domain including real parameters measurable at the robotic device and capable of deviating from the associated ideal parameters; and transmit the transformed parameter to the other one of the robotic device and the controller.
2. The apparatus according to claim 1, wherein the transformation is obtained from a machine learning algorithm.
3. The apparatus according to claim 2, wherein an input for training the machine learning algorithm to obtain the transformation is at least one of a transformation previously obtained from the machine learning algorithm and a deviation between a real parameter and an associated ideal parameter.
4. The apparatus according to claim 2, wherein an initial real parameter for training the machine learning algorithm is derived by moving the robotic device to a predefined position and wherein an initial ideal parameter for training the machine learning algorithm is derived by instructing the controller to move the robotic device to the same predefined position.
5. The apparatus according to claim 1, wherein the transformation is representative of an adjustment of a deviation between a real parameter and an associated ideal parameter.
6. The apparatus according to claim 5, wherein the transformation is configured to adjust at least one of a deviation between an ideal position and an associated real position and a deviation between an ideal velocity and a real velocity.
7. The apparatus according to claim 5, wherein the transformation depends on at least one of an ideal parameter, a real parameter and an update time of the controller.
8. The apparatus according to claim 1, wherein the received non-transformed parameter is one of a real parameter received from the robotic device and an ideal parameter received from the controller.
9. The apparatus according to claim 1, wherein the transmitted transformed parameter is one of a real parameter transmitted to the robotic device and an ideal parameter transmitted to the controller.
10. The apparatus according to claim 1, wherein the non-transformed parameter is contained in one of a command message for the robotic device received from the controller and a status message for the controller received from the robotic device.
11. The apparatus according to claim 1, wherein the transformed parameter is contained in one of a command message transmitted to the robotic device from the controller and a status message transmitted to the controller from the robotic device.
12. The apparatus according to claim 1, wherein the transformation is configured to be updated responsive to a reconfiguration of the robotic device.
13. The apparatus according to claim 12, wherein the reconfiguration comprises at least one of: a change in a movement path performed by the robotic device; a change in a degree of precision of a movement path performed by the robotic device; and a change in a location of the robotic device.
14. A network analytics system configured to be located between the robotic device and the controller for the robotic device, the network analytics system comprising the apparatus of claim 1.
15. The network analytics system of claim 14, configured to: monitor operational performance of the robotic device; and obtain a new transformation in case of a decrease in the operational performance.
16. A parameter transformation system comprising the apparatus according to claim 2 and a computing cloud-based training module for the machine learning algorithm.
17. The parameter transformation system according to claim 16, configured to train the machine learning algorithm in the cloud-based training module.
18. The parameter transformation system according to claim 16, configured to transmit the transformation from the computing cloud-based training module to the apparatus.
19. (canceled)
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] Further aspects, details and advantages of the present disclosure will become apparent from the detailed description of exemplary embodiments below and from the drawings.
DETAILED DESCRIPTION
[0037] In the following description, for purposes of explanation and not limitation, specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details.
[0038] While, for example, the following description focuses on specific radio access network types, such as 5G networks, for a wireless communication between a robotic device and its controller, the present disclosure can also be implemented in connection with other wired or wireless communication techniques. Moreover, while certain aspects in the following description will exemplarily be described in connection with cellular networks, particularly as standardized by the 3rd Generation Partnership Project (3GPP), the present disclosure is not restricted to any specific cellular access type.
[0039] Those skilled in the art will further appreciate that the steps, services and functions explained herein may be implemented using individual hardware circuitry, using software functioning in conjunction with a programmed microprocessor or general purpose computer, using one or more Application Specific Integrated Circuits (ASICs) and/or using one or more Digital Signal Processors (DSPs). It will also be appreciated that when the present disclosure is described in terms of a method, it may also be embodied in one or more processors and one or more memories coupled to the one or more processors, wherein the one or more memories store one or more programs that perform the steps, services and functions disclosed herein when executed by the one or more processors.
[0040] In the following description of exemplary embodiments, the same reference numerals denote the same components.
[0042] The robot cell domain 100A comprises at least one robot cell 101 with multiple robotic devices 102. The robotic devices 102 can be or include various actuators such as robot arms movable within various degrees of freedom. The various robotic devices 102 within the robot cell 101 may collaboratively work on the same task (e.g., on the same work product).
[0043] The wireless access domain 100B may be a cellular or non-cellular network, such as a cellular 5G network as specified by 3GPP. The wireless access domain 100B may comprise a base station or a wireless access point (not shown) that enables a wireless communication between components of the robot cell 101 on the one hand and the cloud computing domain 100C on the other via the wireless access domain 100B. Further, it is to be understood that the wireless access domain 100B is only depicted for exemplary purposes. The technique presented herein is not limited to wireless signal transmission between the robot cell domain 100A and the cloud computing domain 100C. Any other way of transmitting data between two domains of a network can likewise be used with the presented technique.
[0044] As illustrated in the drawing, the robotic devices 102 transmit status messages indicative of their current operational status to the cloud computing domain 100C via the wireless access domain 100B.
[0045] The cloud computing domain 100C comprises a controller 106 for the robotic device 102. The controller 106 is composed of cloud computing resources. The controller 106 is configured to receive the status messages from the robotic devices 102 via the wireless access domain 100B. The controller 106 is further configured to process these status messages and, in some variants, to forward the processed status messages as new command messages to the wireless access domain 100B for distribution to the individual robotic devices 102.
[0046] For controlling the robotic devices 102, the controller 106 assumes an ideal deployment of the robotic devices 102, based on a calibration model of the robotic device 102. In other words, the controller 106 processes ideal parameters, i.e., parameters representing an assumed ideal status of the robotic device 102. On the other hand, the robotic devices 102 generate status messages containing real parameters, i.e., parameters representing the actual status of a particular robotic device 102. These real parameters in the robot cell domain 100A of the robotic device 102 may deviate from the associated ideal parameters processed by the controller 106 in the cloud computing domain 100C. The deviations may have various causes, such as unintended positional shifts of the robotic devices 102, intended reconfiguration of the robotic device 102, and so on.
[0047] Moreover, the cloud computing domain 100C further comprises a transformation apparatus 108 for parameter transformation between a robotic device 102 and the controller 106. In other variants, the transformation apparatus 108 may also be located in the wireless access domain 100B or in a core network associated with the wireless access domain 100B. The transformation apparatus 108 will in the following also be referred to simply as “apparatus” 108.
[0048] The apparatus 108 is configured to transform the real parameters received from the robotic device 102 to ideal parameters processable by the controller 106. Conversely, the apparatus 108 transforms ideal parameters received from the controller 106 to real parameters for the robotic device 102.
[0049] The cloud computing domain 100C further comprises a training module 110, in which the transformation between the ideal parameter domain and the real parameter domain is obtained.
[0050] In the training module 110, a machine learning algorithm, such as a deep neural network, is trained to generate and, if necessary, update the transformation. The transformation is transmitted from the training module 110 to the apparatus 108, as depicted by the dashed arrow in the drawing.
[0051] The transformation apparatus 108 depicted in the network system 100 is described in more detail below.
[0053] The processor 202 is generally configured to transform any received non-transformed parameters to transformed parameters and transmit the transformed parameters to the robotic device 102 and the controller 106, respectively. The transformation process will be explained in greater detail below with reference to
[0054] In more detail, the apparatus 108 of
[0057] The purpose of the network analytics system 112 is to ensure a specific quality of service (QoS), or quality of control (QoC), with regard to the robot cell domain 100A. Therefore, the network analytics system 112 is configured to monitor the operation of the robotic devices 102 and to perform specific actions if a decrease in QoS or QoC of the robotic device 102 is detected. Such actions may include triggering the training module 110 to update the transformation applied by the transformation apparatus 108 or to generate a new transformation. Of course, the training module 110 will initially generate a transformation regardless of any QoS or QoC measurement.
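The monitoring behavior described above can be sketched as follows. This is an illustrative sketch only: the helper names `measure_qoc` and `trigger_training` and the threshold value are assumptions for illustration, not part of the disclosure.

```python
# Sketch of the network analytics system's QoS/QoC monitoring.
# measure_qoc and trigger_training are hypothetical helpers; the
# threshold value is an assumed minimum acceptable quality of control.

QOC_THRESHOLD = 0.95


def monitor(robotic_devices, measure_qoc, trigger_training):
    """Check each robotic device and request a new transformation
    from the training module when its QoC drops below the threshold."""
    retrained = []
    for device in robotic_devices:
        qoc = measure_qoc(device)  # e.g., a tracking-error-based metric
        if qoc < QOC_THRESHOLD:
            # Ask the training module to obtain a new transformation.
            trigger_training(device)
            retrained.append(device)
    return retrained
```

In this sketch, the decision of *what* metric constitutes QoC (e.g., positional tracking error versus message latency) is left to the deployment.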
[0058] In the training module 110, the machine learning algorithm may be trained with previously obtained transformations and with deviations between real parameters and associated ideal parameters. For example, the real position (in arbitrary units) of the robotic device 102 may have changed from (0.1; 0.1; 0.1) to (0.2; 0.2; 0.2), and the machine learning algorithm may output a transformation adjusting for the resulting deviation.
[0059] As a further example, the machine learning algorithm may also be trained with former transformations which are weighted with different weighting factors. For example, the real position (in arbitrary units) of the robotic device 102 may have changed from (0.1; 0.1; 0.1) to (0.2; 0.2; 0.2), as in the above example. However, in this case, the machine learning algorithm may output a transformation determined from three different formerly used transformations, which are also weighted differently. The first transformation may be the one used in the case when the real position (in arbitrary units) of the robotic device 102 has changed from (0; 0; 0) to (0.1; 0.1; 0.1). Said transformation may be weighted with a first weighting factor such as 0.5. The second transformation may be the one used in the case when the real position (in arbitrary units) of the robotic device 102 has changed from (0; 0; 0) to (0.2; 0.2; 0.2). Said transformation may be weighted with a second weighting factor such as 0.2. The third transformation may be the one used in the case when the real position (in arbitrary units) of the robotic device 102 has changed from (0.1; 0.1; 0.1) to (0.15; 0.15; 0.15). Said transformation may be weighted with a third weighting factor such as 0.3.
[0060] The weighting factors may be derived based on different criteria. For example, the greater the time difference between a training sample (i.e., a previously obtained transformation and/or a deviation between a real parameter and an associated ideal parameter used to train the machine learning algorithm) and the actual sample (i.e., the most recently obtained transformation and/or the most recent deviation between a real parameter and an associated ideal parameter), the lower the weighting factor of the training sample may be. As a further example, the smaller the mathematical difference between a training sample and the actual sample, the higher the weighting factor may be. Moreover, if a training sample and the actual sample are identical, the weighting factor of said training sample may be the highest of all weighting factors (e.g., 0.9 or 0.99).
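The described weighting of training samples can be illustrated with a short sketch. The exponential decay in sample age and sample distance is an assumed concrete choice; the description only requires that older and more dissimilar samples receive lower weights and that a sample identical to the actual sample receives the highest weight. The function name `sample_weights` and both scale parameters are illustrative.

```python
import math


def sample_weights(training_samples, actual_sample, age_scale=10.0, dist_scale=1.0):
    """Assign a normalized weighting factor to each former training sample.

    Each sample is an (age, deviation_vector) pair. Older samples and
    samples whose deviation differs more from the actual one get lower
    weights; an identical sample gets the highest weight.
    """
    actual_age, actual_dev = actual_sample
    weights = []
    for age, dev in training_samples:
        time_diff = abs(age - actual_age)
        dist = math.dist(dev, actual_dev)  # mathematical difference of deviations
        weights.append(math.exp(-time_diff / age_scale) * math.exp(-dist / dist_scale))
    total = sum(weights)
    return [w / total for w in weights]
```

Normalizing the weights to sum to one is a design choice of the sketch; the disclosed example weights (0.5, 0.2, 0.3) happen to be normalized in the same way.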
[0061] As the amount of input data for training the machine learning algorithm increases with every training step, the difference between the adjustment predicted by the machine-learning-obtained transformation for a real parameter and an associated ideal parameter and the actual deviation between said parameters may decrease with every training step. In other words, the precision of the machine learning algorithm may increase with every training step. Moreover, the machine learning algorithm may not necessarily provide an optimal parameter transformation for every single data transmission between the robotic device 102 and the controller 106, but may instead provide an optimal overall precision (i.e., it optimizes the parameter transformation across many data transmissions as a whole).
[0063] The method starts in step S402 with the apparatus 108 receiving, via the receiving module 208 or the interface 206, a non-transformed parameter from either the robotic device 102 or the controller 106. The received non-transformed parameter is one of a real parameter contained in a status message from the robotic device 102 and an ideal parameter contained in a command message from the controller 106, as described above.
[0064] The received non-transformed parameter is transformed to a transformed parameter in a further step S404. For this purpose, a transformation between an ideal parameter domain of the controller 106 and a real parameter domain of the robotic device 102 is used. The transformation is performed by the transforming module 210 or the processor 202 of the apparatus 108. The transformation step S404 will be explained in more detail below.
[0065] Optionally, the method may comprise the step S406 (depicted in dashed lines) of storing at least one non-transformed parameter, at least one associated transformed parameter and a corresponding transformation. The stored parameters and/or stored transformation may be used to train the machine learning algorithm, e.g. in the training module 110, to obtain subsequent transformations.
[0066] As indicated in step S408, the method continues with transmitting, by the transmitting module 212 or the interface 206, the transformed parameter to either the controller 106 (when the non-transformed parameter is received from the robotic device 102) or the robotic device 102 (when the non-transformed parameter is received from the controller 106).
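Steps S402 to S408 can be summarized in the following sketch. The function `handle_parameter`, the `transformation` callable and the `history` list are hypothetical stand-ins for the interface 206, the transforming module 210 and the memory 204; none of these names appear in the disclosure.

```python
def handle_parameter(source, parameter, transformation, history=None):
    """Sketch of steps S402-S408: receive, transform, optionally store,
    and route a parameter to the opposite endpoint.

    source is "robot" (real parameter from a status message) or
    "controller" (ideal parameter from a command message); transformation
    maps between the real and ideal parameter domains.
    """
    # S402: the parameter has been received from either endpoint.
    # S404: transform between the ideal and real parameter domains.
    transformed = transformation(source, parameter)
    # S406 (optional): store the pair for later training of the ML algorithm.
    if history is not None:
        history.append((parameter, transformed))
    # S408: transmit to the other one of robotic device and controller.
    destination = "controller" if source == "robot" else "robot"
    return destination, transformed
```

The routing rule in the last step mirrors claim 1: the transformed parameter always goes to the endpoint that did not send the non-transformed parameter.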
[0067] A flow diagram 500 depicts a more detailed variant of the presented technique, including the training of the machine learning algorithm and the individual parameter transformations.
[0068] The method starts with step S502.
[0069] In step S504, an initial real parameter for training the machine learning algorithm by the training module 110 is derived. For this purpose, the robotic device 102 (i.e., a reference point of a tool center of the robotic device 102) is moved to a predefined position. For example, when the robotic device 102 is calibrated to work on a plane surface such as a table, the predefined position may be an edge of an upper surface of the table, represented by the positional coordinates (0; 0; 0) in a real coordinate system as used in the robot cell domain 100A. For the sake of simplicity, only three-dimensional parameters are used herein. However, six-dimensional parameters, consisting of three positional coordinates and three angular coordinates, may also be used.
[0070] An initial ideal parameter for training the machine learning algorithm is derived in step S506. To this end, the controller 106 is instructed (e.g., by a user or a calibration program) to move the robotic device 102 to the same predefined position as in step S504 and use the corresponding position coordinates as reference coordinates for an ideal coordinate system used to control the robotic device 102. As described above, the ideal parameters used by the controller 106 may deviate from the real parameters used by the robotic device 102. Again, the initial ideal parameters may be either three-dimensional parameters or six-dimensional parameters.
[0071] Based on multiple pairs of initial real parameters and associated ideal parameters derived in steps S504 and S506, the machine learning algorithm is trained by the training module 110 to output a transformation (e.g., a position correction function deltaQ) in step S508. In the following, the position correction function deltaQ is used as an example for explaining the transformation. However, it is to be understood that the presented technique is not limited to a position correction function and could, for example, equally be used to generate a velocity correction function or any other correction function.
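Step S508 can be illustrated with a deliberately simplified stand-in for the machine learning algorithm: instead of the deep neural network mentioned above, a per-axis linear least-squares fit learns the position correction function deltaQ from the calibration pairs of steps S504 and S506. The function name `fit_delta_q` and the linear model are illustrative assumptions, not the claimed method.

```python
# Minimal stand-in for step S508: learn a position correction function
# deltaQ from pairs of (ideal, real) calibration positions, such that
# q_real ~= q_ideal + deltaQ(q_ideal). A per-axis linear fit replaces
# the deep neural network of the disclosure for illustration only.


def fit_delta_q(ideal_positions, real_positions):
    """Fit deltaQ(q) = a*q + b independently per coordinate axis using
    ordinary least squares on the observed deviations real - ideal."""
    dims = len(ideal_positions[0])
    coeffs = []
    for axis in range(dims):
        xs = [q[axis] for q in ideal_positions]
        ys = [r[axis] - q[axis] for q, r in zip(ideal_positions, real_positions)]
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        var = sum((x - mean_x) ** 2 for x in xs)
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        a = cov / var if var else 0.0
        b = mean_y - a * mean_x
        coeffs.append((a, b))

    def delta_q(q):
        return tuple(a * q[i] + b for i, (a, b) in enumerate(coeffs))

    return delta_q
```

With multiple calibration pairs, the returned `delta_q` interpolates the deviation at unseen positions, which is exactly the role the trained model plays in the transformations below.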
[0072] In step S510, the position correction function deltaQ is transmitted from the training module 110 to the transformation apparatus 108. The position correction function deltaQ may then be used by the apparatus 108 as a transformation for any non-transformed parameters as generated by the controller 106 or the robotic device 102.
[0073] In step S512, the apparatus 108 receives a non-transformed parameter from the robotic device 102 or from the controller 106.
[0074] In case the received non-transformed parameter is an ideal position as processable by (e.g., as generated by) the controller 106, the ideal position is transformed, in step S514, to a real position of the robotic device 102 using the position correction function deltaQ. The transformation may have the exemplary form
q_real = q_ideal + deltaQ(q_ideal).
[0075] Herein, q_real indicates a real position (either three-dimensional or six-dimensional) measurable at the robotic device 102, q_ideal indicates an ideal position (either three-dimensional or six-dimensional) processable by the controller 106 and deltaQ indicates the position correction function. As is apparent from the above formula, the transformation (i.e., the position correction function deltaQ) is representative of an adjustment of a deviation between the real position q_real and the ideal position q_ideal.
[0076] Given the case that the received non-transformed parameter is a real position q_real measurable (e.g., as actually measured) at the robotic device 102, an ideal position q_ideal processable by the controller 106 (e.g., in a feedback loop or for inverse kinematics) is obtained in step S516 using the following example transformation:
q_ideal = q_real - deltaQ(q_real).
[0077] In case a real position is transformed into an ideal position, the accuracy of the transformation can be improved by iterative refinement, as depicted in step S518. In other words, the transformation is repeated several times, wherein during every iteration step the following example transformation is applied
q_ideal(n+1) = q_real - deltaQ(q_ideal(n)).
[0078] Herein, q_ideal(n+1) denotes the ideal position obtained in the (n+1)st iteration step and q_ideal(n) denotes the ideal position obtained in the nth iteration step. q_real and deltaQ denote the same parameters as used in steps S514 and S516.
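The forward transformation of step S514 and the iteratively refined inverse transformation of steps S516/S518 can be sketched as follows. Seeding the iteration with q_ideal(0) = q_real is an assumption, as the description does not specify the initial value; the function names are illustrative.

```python
def real_from_ideal(q_ideal, delta_q):
    """Forward transform (step S514): q_real = q_ideal + deltaQ(q_ideal)."""
    return tuple(q + d for q, d in zip(q_ideal, delta_q(q_ideal)))


def ideal_from_real(q_real, delta_q, iterations=5):
    """Inverse transform with iterative refinement (steps S516/S518):
    q_ideal(n+1) = q_real - deltaQ(q_ideal(n)), seeded with q_ideal(0) = q_real."""
    q_ideal = q_real
    for _ in range(iterations):
        q_ideal = tuple(r - d for r, d in zip(q_real, delta_q(q_ideal)))
    return q_ideal
```

Because deltaQ is evaluated at the current ideal estimate rather than at the real position, each iteration reduces the residual error of the single-shot inverse, which is the point of step S518.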
[0079] A real velocity v_real (either three-dimensional or six-dimensional) measurable at the robotic device 102 is obtained in step S520 using the following example transformation:
v_real = v_ideal + [deltaQ(q_ideal + v_ideal*T) - deltaQ(q_ideal)]/T.
[0080] Herein, T denotes an update time of the controller 106, such as the time between two consecutive command messages (e.g., as received from the controller 106 at the transformation apparatus 108). Said update time T may thus be measured by the apparatus 108 or may alternatively be obtained as additional information in the command messages received from the controller 106. Further, q_ideal denotes an ideal position and v_ideal denotes an ideal velocity. q_ideal and v_ideal may either be received from the controller 106 in a command message or may be previously stored in the memory 204 of the apparatus 108 according to step S406.
[0081] In step S522, an ideal velocity v_ideal (either three-dimensional or six-dimensional) processable by the controller 106 is obtained with respect to a real position q_real and a real velocity v_real, as well as an update time T of the controller 106, as defined above with reference to step S520. Similar to step S520, q_real and v_real may either be received from the robotic device 102 in a status message or may be previously stored in the memory 204 of the apparatus 108 according to step S406. For example, the following transformation may be used:
v_ideal = v_real - [deltaQ(q_real + v_real*T) - deltaQ(q_real)]/T.
[0082] Similar to step S518, the accuracy of the transformation between a real velocity v_real and an ideal velocity v_ideal may be improved in step S524 by iterative refinement.
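The velocity transformations of steps S520 and S522 can be sketched in the same style, assuming the sign convention of the position transformations (addition toward the real domain, subtraction toward the ideal domain) and a finite-difference evaluation of deltaQ over the update time T. The function names are illustrative.

```python
def real_velocity(q_ideal, v_ideal, delta_q, t):
    """Step S520: v_real = v_ideal + [deltaQ(q_ideal + v_ideal*T) - deltaQ(q_ideal)]/T."""
    ahead = tuple(q + v * t for q, v in zip(q_ideal, v_ideal))
    d1, d0 = delta_q(ahead), delta_q(q_ideal)
    return tuple(v + (a - b) / t for v, a, b in zip(v_ideal, d1, d0))


def ideal_velocity(q_real, v_real, delta_q, t):
    """Step S522: v_ideal = v_real - [deltaQ(q_real + v_real*T) - deltaQ(q_real)]/T."""
    ahead = tuple(q + v * t for q, v in zip(q_real, v_real))
    d1, d0 = delta_q(ahead), delta_q(q_real)
    return tuple(v - (a - b) / t for v, a, b in zip(v_real, d1, d0))
```

The bracketed finite difference approximates the time derivative of deltaQ along the current motion, so the velocity correction is consistent with differentiating the position transformations with respect to time.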
[0083] After the transformations according to steps S514 to S524 have been performed, the corresponding transformed parameters are transmitted to the controller 106 or the robotic device 102, respectively, in step S526. In particular, a transformed ideal position q_ideal and a transformed ideal velocity v_ideal are transmitted to the controller 106, and a transformed real position q_real and a transformed real velocity v_real are transmitted to the robotic device 102.
[0084] The transformation technique presented herein may be used in a variety of different applications, preferably in scenarios requiring a frequent reconfiguration of the robotic device 102 (e.g., in factories where the robotic device 102 has to perform multiple tasks over a short period of time such as days or weeks). For example, the reconfiguration may be required for producing different products or for collaboration between different robotic devices. The required reconfiguration of the robotic device 102 may comprise a change in a movement path performed by the robotic device 102 (e.g., for producing different products), a change in a degree of precision of a movement path performed by the robotic device 102 (e.g., for reducing the failure rate of the robotic device 102) and/or a change in a location of the robotic device 102 (e.g., when working cooperatively with other robotic devices on the same product). In each of these cases, the transformation may be updated by the training module 110 responsive to the reconfiguration.
[0085] As has become apparent from the above description of exemplary embodiments, the technique proposed herein helps to accelerate reconfiguration procedures of robotic devices 102. As the machine learning algorithm may specifically be trained in response to every reconfiguration, a suitable transformation between the ideal parameter domain and the real parameter domain can be output faster (e.g., in real time) and more reliably after every reconfiguration. Therefore, the required reconfiguration time is consistently reduced while at the same time the precision of the calibration process is increased. Moreover, as the presented technique also corrects a possible deviation between real parameters and ideal parameters, neither the configuration of the robotic device nor the configuration of the robot controller has to be adjusted manually during operation of the robotic devices 102.
[0086] It will be appreciated that the present disclosure has been described on the basis of exemplary embodiments and can be modified in many ways. As such, the invention is intended to be limited only by the claims appended hereto.