METHOD FOR USING A MULTI-LINK ACTUATED MECHANISM, PREFERABLY A ROBOT, PARTICULARLY PREFERABLY AN ARTICULATED ROBOT, BY A USER BY MEANS OF A MOBILE DISPLAY APPARATUS
20210170603 · 2021-06-10
Assignee
Inventors
CPC classification
G05B2219/40125
PHYSICS
G05B2219/39449
PHYSICS
B25J9/1666
PERFORMING OPERATIONS; TRANSPORTING
G05B2219/36167
PHYSICS
B25J13/006
PERFORMING OPERATIONS; TRANSPORTING
International classification
Abstract
A method at least including the steps of orienting an image capturing element of a mobile display apparatus towards a multi-link actuated mechanism by a user, capturing at least the multi-link actuated mechanism by means of the image capturing element of the mobile display apparatus, identifying the multi-link actuated mechanism in the captured image data of the image capturing element of the mobile display apparatus, indicating the multi-link actuated mechanism in three dimensions on the basis of the captured image data together with the depth information, and overlaying the virtual representation of the multi-link actuated mechanism on the multi-link actuated mechanism in the display element of the mobile display apparatus, wherein the overlaying is carried out while taking account of the geometric relationships of the multi-link actuated mechanism.
Claims
1. Method for the use, by a user, of a multi-link actuated mechanism, preferably a robot, particularly preferably an articulated robot, by means of a mobile display apparatus, wherein the multi-link actuated mechanism comprises at least: a plurality of links interconnected by actuated joints, and an end effector connected to at least one link, wherein the mobile display apparatus comprises at least: at least one display element designed to display to the user at least one real-world representation of the multi-link actuated mechanism, preferably together with the surroundings thereof, and at least one image capturing element designed to capture the multi-link actuated mechanism, preferably together with the surroundings thereof, as image data together with depth information, wherein the display element is further configured to overlay, for the user, at least one virtual representation of the multi-link actuated mechanism on the real-world representation of the multi-link actuated mechanism, and preferably in the surroundings thereof, comprising at least the steps of: orienting, by the user, the image capturing element of the mobile display apparatus towards the multi-link actuated mechanism, preferably together with the surroundings thereof, capturing at least the multi-link actuated mechanism, preferably together with the surroundings thereof, by means of the image capturing element of the mobile display apparatus, identifying the multi-link actuated mechanism, and preferably the surroundings thereof, in the captured image data of the image capturing element of the mobile display apparatus, indicating, in three dimensions, the multi-link actuated mechanism, and preferably in the surroundings thereof, on the basis of the captured image data together with the depth information, and overlaying the virtual representation of the multi-link actuated mechanism, and preferably in the surroundings thereof, on the multi-link actuated mechanism in the display element of the 
mobile display apparatus, wherein the overlaying is carried out while taking account of the geometric relationships of the multi-link actuated mechanism, and preferably the surroundings thereof.
2. Method according to claim 1, characterized by at least the further step of: indicating, by the user, a first point, preferably a first pose, by means of the mobile display apparatus, a virtual representation of the first point, preferably the first pose, being overlaid for the user in the display element of the mobile display apparatus, preferably comprising at least the further step of: indicating, by the user, a second point, preferably a second pose, by means of the mobile display apparatus, a virtual representation of the second point, preferably the second pose being overlaid for the user in the display element of the mobile display apparatus.
3. Method according to claim 1, characterized by at least the further step of: selecting, by the user, a first object by means of the mobile display apparatus, a virtual representation of the selection of the first object being overlaid for the user in the display element of the mobile display apparatus, preferably comprising at least the further step of: selecting, by the user, a second object by means of the mobile display apparatus, a virtual representation of the selection of the second object being overlaid for the user in the display element of the mobile display apparatus.
4. Method according to claim 3, characterized by at least the sub-steps of selecting: orienting, by the user, the image capturing element of the mobile display apparatus towards the first object or towards the second object, capturing the first object or the second object by means of the image capturing element of the mobile display apparatus, and marking the first object or the second object in the display element of the mobile display apparatus, preferably also confirming, by the user, that the first object or the second object is to be selected.
5. Method according to claim 2, characterized by at least the further steps of: creating at least one trajectory between a start pose and a target pose, the start pose being the current pose of the end effector of the multi-link actuated mechanism, and the target pose being the first point, preferably the first pose, and/or the start pose being the first point, preferably the first pose, and the target pose being the second point, preferably the second pose, or vice versa, or the start pose being the current pose of the end effector of the multi-link actuated mechanism, and the target pose being the first object, and/or the start pose being the first object and the target pose being the second object, or vice versa, and travelling along the trajectory by means of the virtual representation of the multi-link actuated mechanism.
6. Method according to claim 5, characterized by at least the further step of: identifying a collision of the virtual representation of the multi-link actuated mechanism with a real-world collision object by comparing the captured surroundings with the movement of the virtual representation of the multi-link actuated mechanism, a virtual representation of the collision being overlaid for the user in the display element of the mobile display apparatus, and preferably, in response to an identified collision, stopping the movement of the virtual representation of the multi-link actuated mechanism.
7. Method according to claim 6, characterized by at least the further step of: in response to an identified collision, marking at least one portion of the virtual representation of the multi-link actuated mechanism, preferably marking the virtual representation of the multi-link actuated mechanism in portions where the collision has occurred.
8. Method according to claim 6, characterized by at least the further steps of: creating at least one alternative trajectory between at least the start pose and the target pose, and travelling along the trajectory by means of the virtual representation of the multi-link actuated mechanism.
9. Method according to claim 6, characterized by at least the further steps of: indicating, by the user, a further point, preferably a further pose, by means of the mobile display apparatus, a virtual representation of the further point, preferably the further pose, being overlaid for the user in the display element, creating at least one alternative trajectory between a start pose and a target pose while taking account of the further point, preferably the further pose, and travelling along the trajectory by means of the virtual representation of the multi-link actuated mechanism.
10. Method according to claim 5, characterized by at least the further step of: travelling along the trajectory by means of the multi-link actuated mechanism.
11. Method according to claim 1, characterized by the steps, prior to the overlaying, of at least: initializing the method, preferably by at least the sub-steps of: creating the virtual representation of the multi-link actuated mechanism orienting the virtual representation of the multi-link actuated mechanism on the basis of the poses of the links and/or the actuated joints and/or the end effector of the multi-link actuated mechanism, capturing the multi-link actuated mechanism and/or a reference indication of the multi-link actuated mechanism, and referencing the virtual representation of the multi-link actuated mechanism to the multi-link actuated mechanism on the basis of the captured multi-link actuated mechanism or on the basis of the reference indication.
12. Method according to claim 1, characterized in that the indicating, the selecting and/or the confirming by the user are carried out by means of at least one operator input by the user, the operator input of the user preferably being overlaid in the display element as a virtual representation, the operator input of the user preferably being a gesture that is captured by the image capturing element of the mobile display apparatus or a touch that is captured by the display element of the mobile display apparatus.
13. Method according to claim 1, characterized in that the multi-link actuated mechanism further comprises at least one image capturing unit, which is arranged and oriented so as to capture at least the surroundings in front of the end effector, the image capturing unit preferably being arranged and oriented on the end effector or on an end-effector unit so as to capture the surroundings immediately in front of the end effector, the method being carried out while also taking account of the image data of the image capturing unit of the multi-link actuated mechanism.
14. Method according to claim 1, characterized by at least one virtual representation of at least one piece of information that is overlaid in the display element of the mobile display apparatus, the virtual representation preferably comprising at least: a control element for interaction with the user, preferably by means of at least one operator input, and/or a coordinate system of the end effector, and/or a coordinate system of at least one point, preferably of at least one pose, and/or a trajectory, and/or a duration of a trajectory, and/or a total length of a trajectory, and/or the energy requirement for a trajectory, and/or the image capturing range of an image capturing unit of the multi-link actuated mechanism, and/or a singularity of the multi-link actuated mechanism, and/or a boundary of the working space of the multi-link actuated mechanism, and/or a boundary of the articulation space of the multi-link actuated mechanism, and/or a predetermined limit of the multi-link actuated mechanism, and/or an instruction to the user.
15. System for the use, by a user, of a multi-link actuated mechanism, preferably a robot, particularly preferably an articulated robot, by means of a mobile display apparatus, wherein the multi-link actuated mechanism comprises at least: a plurality of links interconnected by actuated joints, and an end effector connected to at least one link, wherein the mobile display apparatus comprises at least: at least one display element designed to display to the user at least one real-world representation of the multi-link actuated mechanism, preferably together with the surroundings thereof, and at least one image capturing element designed to capture the multi-link actuated mechanism, preferably together with the surroundings thereof, as image data together with depth information, wherein the display element is further configured to overlay, for the user, at least one virtual representation of the multi-link actuated mechanism on the real-world representation of the multi-link actuated mechanism, and preferably in the surroundings thereof, wherein the system, preferably the multi-link actuated mechanism and/or the mobile display apparatus, is configured to carry out a method according to claim 1, wherein the multi-link actuated mechanism preferably further comprises at least one image capturing unit, which is arranged and oriented so as to capture at least the surroundings in front of the end effector, wherein the image capturing unit is preferably arranged and oriented on the end effector or on an end-effector unit so as to capture the surroundings immediately in front of the end effector.
16. Mobile display apparatus for use in a system according to claim 15, comprising at least one display element, which is designed to display to the user at least one real-world representation of the multi-link actuated mechanism, preferably together with the surroundings thereof, and comprising at least one image capturing element, which is designed to capture the multi-link actuated mechanism, preferably together with the surroundings thereof, as image data together with depth information, wherein the display element is further configured to overlay, for the user, at least one virtual representation of the multi-link actuated mechanism on the real-world representation of the multi-link actuated mechanism, and preferably in the surroundings thereof.
17. Multi-link actuated mechanism for use in a system according to claim 15, comprising a plurality of links interconnected by actuated joints, and comprising an end effector connected to at least one link, wherein the multi-link actuated mechanism preferably further comprises at least one image capturing unit, which is arranged and oriented so as to capture at least the surroundings in front of the end effector, wherein the image capturing unit is preferably arranged and oriented on the end effector or on an end-effector unit so as to capture the surroundings immediately in front of the end effector.
18. Computer program product comprising a program code stored on a computer-readable medium, for carrying out a method according to claim 1.
Description
[0134] Two embodiments and further advantages of the invention will be explained below in relation to the following drawings, in which:
[0139] The above-mentioned figures are viewed in Cartesian coordinates. There is a longitudinal direction X, which can also be referred to as the depth X. Perpendicular to the longitudinal direction X is a transverse direction Y, which can also be referred to as the width Y. Perpendicular to both the longitudinal direction X and the transverse direction Y is a vertical direction Z, which can also be referred to as the height Z.
[0141] On the foundation surface 30, a first object 31 is arranged in the form of an item 31 that can be gripped by an articulated robot 1 using its end effector 14 and set down on a second object 32 in the form of a first set-down surface 32. For this purpose, the articulated robot 1 can travel towards the item along a first trajectory e1, grip the item, and move it along a second trajectory e2 towards the first set-down surface 32, where it sets the item down.
[0142] To program this “picking and placing” application, a user 2 uses a mobile display apparatus 4 in the form of a tablet 4. The tablet 4 has a holder element 40 in the form of a casing 40, which encloses the edges and underside of the tablet 4. The user 2 can hold the tablet 4 on the side with at least one hand via the casing 40. On its top face, the tablet 4 has a display element 41 in the form of a screen facing the user 2.
[0143] On the opposite side of the tablet 4, at the upper edge of the rim of the casing 40, the tablet 4 further comprises an image capturing element 42 in the form of a stereoscopic area scan camera. Using the image capturing element 42, images, in this case of the articulated robot 1 and its surroundings, can be captured that also contain depth information owing to the stereoscopic design of the image capturing element 42. The captured image data can also be displayed to the user 2 by the display element 41, such that the user sees thereon an image of whatever they have pointed the tablet 4, or rather its image capturing element 42, at. In addition to the captured real-world image data, which can simply be rendered by the display element 41, additional virtual representations can be displayed, as will be described in more detail below.
[0146] The user 2 orients 000 the image capturing element 42 of the mobile display apparatus 4 towards the articulated robot 1 together with the surroundings thereof; see
[0147] The articulated robot 1 together with the surroundings thereof is captured 030 by means of the image capturing element 42 of the mobile display apparatus 4, images being captured of the detail of the surroundings that at that moment can be captured by the image capturing element 42 due to the orientation carried out by the user 2.
[0148] The articulated robot 1 and the surroundings thereof are identified 050 in the captured image data of the image capturing element 42 of the mobile display apparatus 4. This can be done by means of known image processing and pattern detection methods.
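The identification 050 relies on known image processing and pattern detection methods, which the disclosure does not prescribe in detail. As a minimal, hypothetical sketch (the function name `match_template`, the use of NumPy, and the correlation-based approach are illustrative assumptions, not part of the disclosure), a known template of the robot or of a marker can be located in a grayscale image by normalized cross-correlation:

```python
import numpy as np

def match_template(image, template):
    """Locate a known grayscale template in an image by exhaustive
    normalized cross-correlation; returns (row, col) and the best score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

In practice, a production system would use an optimized detector rather than this exhaustive search; the sketch only illustrates the principle of matching captured image data against a known pattern.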
[0149] The articulated robot 1 and the surroundings thereof are indicated 070, in three dimensions, on the basis of the captured image data together with the depth information. In this case, the depth information is made available by the image capturing element 42 in the form of a stereoscopic area scan camera. The indicating 070 of the articulated robot 1 and of objects 31-34 in the surroundings thereof in three dimensions (see
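The three-dimensional indication 070 combines each captured pixel with its depth value. Under a standard pinhole camera model (the intrinsic parameters fx, fy, cx, cy and the function name are illustrative assumptions; the disclosure only requires image data together with depth information), a depth image can be back-projected to camera-frame 3-D points:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres per pixel) to an (h, w, 3) array
    of camera-frame points using the pinhole model."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```

The resulting point set is one possible basis for the three-dimensional surroundings map referred to in the overlaying 200.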
[0150] The method is initialized 100; this has to be carried out just once before the method is used for the current operation, and preferably comprises a plurality of sub-steps. The virtual representation of the articulated robot 1 is thus created 110 on the basis of a kinematic model corresponding to the design of the corresponding real-world articulated robot 1. The virtual representation of the articulated robot 1′ is oriented 130 on the basis of the poses of the links 11 or the actuated joints 12 and the end effector 14 of the articulated robot 1 such that the real-world articulated robot 1 and its virtual representation match. In the process, for example, the angular positions of the joints 12 of the real-world articulated robot 1 are taken into account; they are captured by sensor and thus made available.
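Orienting 130 the virtual representation from the sensed joint angles amounts to evaluating the kinematic model, i.e. forward kinematics. As an illustrative simplification (a planar chain of revolute joints; the real articulated robot 1 would use its full spatial kinematic model, and the function name is an assumption), this can be sketched as:

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar forward kinematics: accumulate joint angles, then advance
    along each link; returns the (x, y, theta) pose after every link."""
    x = y = theta = 0.0
    poses = [(x, y, theta)]
    for a, l in zip(joint_angles, link_lengths):
        theta += a                  # revolute joint adds to the heading
        x += l * math.cos(theta)    # advance along the link
        y += l * math.sin(theta)
        poses.append((x, y, theta))
    return poses
```

The final entry corresponds to the end-effector pose, from which the virtual links can be drawn so that the real-world articulated robot 1 and its virtual representation match.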
[0151] A reference indication 35 of the articulated robot 1 is captured 150 in the form of an optical marker 35, which is arranged in the immediate vicinity of the base 10 of the articulated robot 1 on the foundation surface 30 of the foundation 3 and is located, when the image capturing element 42 of the mobile display apparatus 4 is in this orientation, in the image capturing range of the image capturing element 42 thereof. Alternatively, the articulated robot 1 itself could also be identified, but the capturing and identification of an optical marker 35 may be simpler to implement.
[0152] The virtual representation of the articulated robot 1′ is referenced 170 to the real-world articulated robot 1 on the basis of the captured optical marker 35. In other words, the virtual representation of the articulated robot 1′ is displaced onto the real-world articulated robot 1 such that they correspond to each other; the links 11, joints 12 and end effector 14 have already been oriented relative to one another in the initial orientation 130.
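The referencing 170 via a shared marker can be expressed as a frame-chaining computation: if both the camera and the robot base know their transform to the marker 35, the camera-to-robot transform follows by matrix inversion. The following planar sketch (function names and the 2-D simplification are assumptions for illustration) shows the idea:

```python
import numpy as np

def planar_transform(tx, ty, theta):
    """Homogeneous 2-D rigid transform (rotation theta, translation tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def camera_to_robot(T_cam_marker, T_robot_marker):
    """Both frames observe the same marker, so
    T_cam_robot = T_cam_marker @ inv(T_robot_marker)."""
    return T_cam_marker @ np.linalg.inv(T_robot_marker)
```

With T_cam_robot known, every virtual element defined in the robot frame can be drawn at the correct location in the captured image.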
[0153] The virtual representation of the articulated robot 1′, and in the surroundings thereof, is overlaid 200 on the real-world articulated robot 1 in the display element 41 of the mobile display apparatus 4. In other words, the data on the virtual surroundings and the real-world surroundings are merged together or superimposed on one another. In the process, the overlaying 200 is carried out while taking account of the geometric relationships of the articulated robot 1 and the surroundings thereof. As a result, the depth information of the three-dimensional surroundings map can be incorporated into the overlaying such that the articulated robot 1 and other objects 31-34 can be displayed in the correct position and the correct orientation. This can prevent virtually displayed bodies from obscuring real-world bodies and can make the augmented reality thus created more comprehensible for the user 2. In particular, this can make commissioning and programming more intuitive for the user.
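The statement that virtually displayed bodies should not obscure real-world bodies corresponds to per-pixel depth testing (z-buffering) between the captured depth image and the rendered depth of the virtual representation. A minimal sketch (array shapes and the function name are assumptions):

```python
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Depth-aware overlay: a virtual pixel is shown only where the virtual
    surface lies in front of the real scene at that pixel."""
    nearer = (virt_depth < real_depth)[..., None]  # broadcast over channels
    return np.where(nearer, virt_rgb, real_rgb)
```

This is what allows the articulated robot 1 and the objects 31-34 to be displayed in the correct position and orientation relative to one another.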
[0154] A first pose is indicated 300a by the user 2 by means of the mobile display apparatus 4, a virtual representation of the first pose D1 being overlaid for the user 2 in the display element 41 of the mobile display apparatus 4; see e.g.
[0155] A trajectory e1, e2 is created 500 between the current pose of the end effector 14 (as the start pose) and the first pose D1 (as the target pose) via the second pose D2 (as the intermediate pose), this overall trajectory being divided into a first (sub-)trajectory e1 between the current pose of the end effector 14 and the second pose D2, and into a second (sub-)trajectory e2 between the second pose D2 and the first pose D1; see e.g.
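The creating 500 of an overall trajectory through an intermediate pose can, in its simplest form, be sketched as two piecewise-linear legs (the disclosure does not fix an interpolation scheme; linear interpolation and the function names are illustrative assumptions):

```python
def lerp(p, q, steps):
    """Linearly interpolated poses from p to q, both endpoints included."""
    return [tuple(a + (b - a) * t / steps for a, b in zip(p, q))
            for t in range(steps + 1)]

def make_trajectory(start, via, target, steps=10):
    """Overall trajectory: leg e1 (start -> via) plus leg e2 (via -> target);
    the shared via pose is kept only once."""
    e1 = lerp(start, via, steps)
    e2 = lerp(via, target, steps)
    return e1 + e2[1:]
```

A real motion planner would interpolate in joint space or use splines, but the division into sub-trajectories e1 and e2 is the same.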
[0156] The trajectory e1, e2 is travelled along 550 by means of the virtual representation of the articulated robot 1′; see e.g.
[0157] Since this trajectory e1, e2 is visually followed by the user 2 and, in the absence of any collision, is assessed as being permitted, the trajectory e1, e2 can then be travelled along 900 by means of the real-world articulated robot 1. A virtual representation of a duration F1 and of the total length F2 of the trajectory e1, e2 is displayed to the user 2 by the display element 41 of the mobile display apparatus 4. The programming of this movement has thus been successfully completed.
[0158] Alternatively, the user 2 selects 300b a first object 31 and selects 400b a second object 32 by means of the mobile display apparatus 4 in that the user 2 orients 310b; 410b the image capturing element 42 of the mobile display apparatus 4 towards the first object 31 or towards the second object 32, respectively; see e.g.
[0159] In this case too, at least one trajectory e1, e2 is now created 500 between a start pose and a target pose, said trajectory running from the current pose of the end effector 14 (represented by pose C thereof) to a third pose D3 via a first pose D1 and a second pose D2; the (sub-)trajectories e1, e2 run between the first pose D1 and the second pose D2 and between the second pose D2 and the third pose D3, respectively; see e.g.
[0160] The trajectory e1, e2 is travelled along 550 by means of the virtual representation of the articulated robot 1′, but in this case a collision object 36 is located along the first trajectory e1. A collision of the virtual representation of the articulated robot 1′ with the real-world collision object 36 is thus identified 600 by comparing the captured surroundings with the movement of the virtual representation of the articulated robot 1′, a virtual representation of the collision H being overlaid for the user 2 in the display element 41 of the mobile display apparatus 4. Furthermore, in response to the identified collision, the movement of the virtual representation of the articulated robot 1′ is stopped 610. Moreover, in response to the identified collision, the end effector 14 of the virtual representation of the articulated robot 1′ is marked 630 since the collision occurred in this portion of the virtual representation of the articulated robot 1′.
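The identification 600 of a collision by comparing the captured surroundings with the virtual motion can be reduced to a proximity test between the waypoints of the virtual representation and the captured obstacle points. A hedged sketch (point-based robot geometry, a scalar clearance, and the function name are simplifying assumptions; a real system would test the full link geometry):

```python
import numpy as np

def first_collision(waypoints, obstacle_points, clearance):
    """Return the index of the first waypoint that comes closer than
    `clearance` to any captured obstacle point, or None if the motion
    is collision-free."""
    obstacle_points = np.asarray(obstacle_points, dtype=float)
    for i, p in enumerate(waypoints):
        dists = np.linalg.norm(obstacle_points - np.asarray(p, dtype=float),
                               axis=1)
        if (dists < clearance).any():
            return i
    return None
```

The returned index also identifies where along the trajectory the stopping 610 and the marking 630 should take effect.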
[0161] On one hand, at least one alternative trajectory e1, e2 can now be created 700 between at least the start pose and the target pose and can be executed automatically by the articulated robot 1. For instance, a further pose can be added to the trajectory e1 to bypass the real-world collision object 36.
[0162] The alternative trajectory e1, e2 is then travelled along 550 by means of the virtual representation of the articulated robot 1′. If this movement is collision-free, the trajectory e1, e2 can then be travelled along 900 by means of the real-world articulated robot 1. If this is successful, the programming of this movement has thus been successfully completed.
[0163] On the other hand, the user 2 can indicate 800 a further pose by means of the mobile display apparatus 4, a virtual representation of the further pose being overlaid for the user 2 in the display element 41. This further pose can also be added to the trajectory e1 to bypass the real-world collision object 36. On the basis of this further pose, at least one alternative trajectory e1, e2 can be created 500 between a start pose and a target pose while taking account of the further pose. If this movement is collision-free, the trajectory e1, e2 can then be travelled along 900 by means of the real-world articulated robot 1. If this is successful, the programming of this movement has thus been successfully completed.
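The re-planning loop of paragraphs [0161] and [0163], i.e. creating an alternative trajectory through a user-indicated further pose and re-checking it, can be sketched end to end as follows (straight-line legs, point obstacles, and all function names are illustrative assumptions):

```python
import numpy as np

def lerp(p, q, steps=20):
    """Linear leg from p to q as an inclusive list of points."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return [p + (q - p) * t / steps for t in range(steps + 1)]

def collides(waypoints, obstacles, clearance):
    """True if any waypoint comes closer than `clearance` to an obstacle."""
    obstacles = np.asarray(obstacles, float)
    return any((np.linalg.norm(obstacles - w, axis=1) < clearance).any()
               for w in waypoints)

def replan(start, target, via, obstacles, clearance=0.2):
    """Try the direct leg first; if it collides, route through the
    user-indicated via pose and report whether the detour is clear."""
    direct = lerp(start, target)
    if not collides(direct, obstacles, clearance):
        return direct, True
    detour = lerp(start, via) + lerp(via, target)[1:]
    return detour, not collides(detour, obstacles, clearance)
```

Only once the virtual travel 550 along the detour is reported collision-free would the real-world travel 900 be released, matching the sequence described above.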
LIST OF REFERENCE SIGNS (part of description)
[0164] a Image capturing range of the image capturing unit 15 of the mechanism 1
[0165] A Virtual representation of the image capturing range of the image capturing unit 15 of the mechanism 1
[0166] b Gesture by the user 2
[0167] B Virtual representation of a gesture by the user 2
[0168] C Virtual representation of a coordinate system of the end effector 14
[0169] D1 Virtual representation (of a coordinate system) of a first point or a first pose
[0170] D2 Virtual representation (of a coordinate system) of a further/second point or a further/second pose
[0171] D3 Virtual representation (of a coordinate system) of a further/third point or a further/third pose
[0172] e1 First trajectory
[0173] E1 Virtual representation of a first trajectory e1
[0174] e2 Second trajectory
[0175] E2 Virtual representation of a second trajectory e2
[0176] F1 Virtual representation of a duration of a trajectory e1, e2
[0177] F2 Virtual representation of a total length of a trajectory e1, e2
[0178] G1 Virtual representation of the selection of the first object 31
[0179] G2 Virtual representation of the selection of the second object 32
[0180] H Virtual representation of a collision
[0181] X Longitudinal direction; depth
[0182] Y Transverse direction; width
[0183] Z Vertical direction; height
[0184] 1 Multi-link actuated mechanism; (articulated) robot
[0185] 1′ Virtual representation of the multi-link actuated mechanism 1
[0186] 10 Base
[0187] 11 Links
[0188] 12 Actuated (pivot) joints
[0189] 13 End-effector unit
[0190] 14 End effector; gripper
[0191] 15 Image capturing unit
[0192] 16 Control unit; arithmetic unit; main computer; motion control system
[0193] 2 User
[0194] 3 Foundation
[0195] 30 Foundation surface
[0196] 31 First object; item
[0197] 32 Second object; first set-down surface
[0198] 33 Third object; second set-down surface
[0199] 34 Fourth object; third set-down surface
[0200] 35 Reference indication; optical marker
[0201] 36 Collision object
[0202] 4 Mobile display apparatus; mixed-reality glasses; augmented reality glasses; HoloLens; contact lens; handheld device; tablet; smartphone
[0203] 40 Holder element; casing; temple
[0204] 41 Display element
[0205] 42 Image capturing element
[0206] 000 Orienting image capturing element 42 towards the mechanism 1 by the user 2
[0207] 030 Capturing mechanism 1 by means of the image capturing element 42
[0208] 050 Identifying mechanism 1 in the captured image data
[0209] 070 Indicating mechanism 1 in three dimensions on the basis of the captured image data together with depth information
[0210] 100 Initializing method
[0211] 110 Creating virtual representation of the mechanism 1′
[0212] 130 Orienting virtual representation of the mechanism 1′
[0213] 150 Capturing mechanism 1 and/or reference indication 35
[0214] 170 Referencing virtual representation of the mechanism 1′ to the mechanism 1
[0215] 200 Overlaying virtual representation of the mechanism 1′ on the mechanism 1 in the display element 41
[0216] 300a Indicating, by the user 2, a first point or first pose by means of the mobile display apparatus 4
[0217] 300b Selecting, by the user 2, a first object 31 by means of the mobile display apparatus 4
[0218] 310b Orienting, by the user 2, the image capturing element 42 towards the first object 31
[0219] 330b Capturing first object 31 by means of the image capturing element 42
[0220] 350b Marking first object 31 in the display element 41
[0221] 370a Confirming, by the user 2, that the first object 31 is to be selected
[0222] 400a Indicating, by the user 2, a second point or second pose by means of the mobile display apparatus 4
[0223] 400b Selecting, by the user 2, a second object 32 by means of the mobile display apparatus 4
[0224] 410b Orienting, by the user 2, the image capturing element 42 towards the second object 32
[0225] 430b Capturing second object 32 by means of the image capturing element 42
[0226] 450b Marking second object 32 in the display element 41
[0227] 470a Confirming, by the user 2, that the second object 32 is to be selected
[0228] 500 Creating trajectory e1, e2 between the start pose and target pose
[0229] 550 Travelling along trajectory e1, e2 by means of the virtual representation of the mechanism 1′
[0230] 600 Identifying collision of the virtual representation of the mechanism 1′ with a real-world collision object 36
[0231] 610 Stopping movement of the virtual representation of the mechanism 1′ in response to an identified collision
[0232] 630 Marking portion of the virtual representation of the mechanism 1′ in response to an identified collision
[0233] 700 Creating alternative trajectory e1, e2 between the start pose and target pose
[0234] 800 Indicating, by the user 2, a further point or further pose by means of the mobile display apparatus 4
[0235] 900 Travelling along trajectory e1, e2 by means of the mechanism 1