PROGRAM EDITING DEVICE
20230182292 · 2023-06-15
Assignee
Inventors
CPC classification
B25J9/1656
PERFORMING OPERATIONS; TRANSPORTING
B25J9/1658
PERFORMING OPERATIONS; TRANSPORTING
International classification
Abstract
The present invention makes it possible to create a vision detection program without difficulty, even for an operator who is accustomed to robot programming but not to vision detection programming. Provided is a program editing device for editing a motion program for a robot, the program editing device including: a program editing unit which receives common input operations with respect to a first type of icon that corresponds to commands relating to control of the robot and a second type of icon that corresponds to commands relating to image capture by a visual sensor and to processing of captured images; and a program generation unit which generates the motion program in accordance with the first type of icon and second type of icon subjected to editing.
Claims
1. A program editing device for editing a motion program of a robot, the device comprising: a program editing unit configured to receive a shared editing operation on a first type of icons corresponding to commands related to control of the robot and a second type of icons corresponding to commands related to imaging with a visual sensor and processing of captured images, and a program generation unit configured to generate the motion program in accordance with the edited first type of icons and second type of icons.
2. The program editing device according to claim 1, wherein the program editing unit comprises: an editing screen display unit configured to display a program creation area for generating a control program of the robot by arranging the first type of icons, and an operation input reception unit configured to receive an operation input for arranging the first type of icons in the program creation area, wherein the operation input reception unit is further configured so as to receive an operation input for arranging the second type of icons in the program creation area, and the program generation unit generates the motion program in accordance with the first type of icons and the second type of icons arranged in the program creation area.
3. The program editing device according to claim 2, wherein the editing screen display unit further displays an icon display area for displaying a list of the first type of icons and the second type of icons, and the operation input reception unit receives an operation input for selecting the first type of icons and the second type of icons from the icon display area and arranging the selected first type of icons and second type of icons in the program creation area.
4. The program editing device according to claim 1, wherein the program editing unit comprises: an editing screen display unit configured to display a first program creation area for generating a control program of the robot by arranging the first type of icons, and an operation input reception unit configured to receive an operation input for arranging the first type of icons in the first program creation area, the editing screen display unit further displays a second program creation area for generating a program related to imaging with the visual sensor and processing of captured images by arranging the second type of icons, and the program generation unit generates the control program of the robot in accordance with the first type of icons arranged in the first program creation area and generates a program related to imaging with the visual sensor and processing of captured images in accordance with the second type of icons arranged in the second program creation area.
5. The program editing device according to claim 4, wherein a first icon display area for displaying a list of the first type of icons and a second icon display area for displaying a list of the second type of icons are further displayed, and the operation input reception unit receives an operation input for selecting the first type of icons from the first icon display area and arranging the selected first type of icons in the first program creation area, and receives an operation input for selecting the second type of icons from the second icon display area and arranging the selected second type of icons in the second program creation area.
Description
BRIEF DESCRIPTION OF DRAWINGS
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
[0020] Next, the embodiments of the present disclosure will be described with reference to the drawings. In the referenced drawings, identical constituent portions or functional portions are assigned the same reference signs. In order to facilitate understanding, the scales of the drawings have been appropriately modified. Furthermore, the forms shown in the drawings are merely examples for carrying out the present invention. The present invention is not limited to the illustrated forms.
[0022] The visual sensor controller 40 has a function for controlling the visual sensor 41 and a function for performing image processing on the images captured by the visual sensor 41. The visual sensor controller 40 detects the position of the object 1 from the image captured by the visual sensor 41, and supplies the detected position of the object 1 to the robot controller 50. As a result, the robot controller 50 can execute correction of the teaching positions, picking up of the object 1, and so on. Below, the function of detecting the position of the object from the image captured by the visual sensor may be referred to as vision detection, and the function of correcting the teaching position based on the position detection by the visual sensor may be referred to as vision correction.
[0023] The visual sensor 41 may be a camera which captures grayscale images or color images, or a stereo camera or a three-dimensional sensor which can capture distance images or three-dimensional point groups. A plurality of visual sensors may be arranged in the robot system 100. The visual sensor controller 40 retains model patterns of objects, and executes image processing for detecting an object by pattern matching between an image of an object in the captured image and a model pattern.
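The pattern matching mentioned above can be illustrated with a minimal sketch. The pure-Python normalized cross-correlation search below is a deliberate simplification, not the visual sensor controller's actual algorithm; the function names, the nested-list image representation, and the 0.9 threshold are all hypothetical:

```python
def match_score(image, template, ox, oy):
    """Normalized correlation between the template and the image patch at (ox, oy)."""
    num = den_i = den_t = 0.0
    for ty, row in enumerate(template):
        for tx, t in enumerate(row):
            i = image[oy + ty][ox + tx]
            num += i * t
            den_i += i * i
            den_t += t * t
    return num / ((den_i * den_t) ** 0.5 or 1.0)  # guard against an all-zero patch

def find_pattern(image, template, threshold=0.9):
    """Scan every offset; return (x, y, score) of the best match above threshold, or None."""
    th, tw = len(template), len(template[0])
    best = None
    for oy in range(len(image) - th + 1):
        for ox in range(len(image[0]) - tw + 1):
            s = match_score(image, template, ox, oy)
            if s >= threshold and (best is None or s > best[2]):
                best = (ox, oy, s)
    return best

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
best = find_pattern(image, template)
```

A production system would instead match a trained model pattern against edge or feature data, but the thresholded best-match search has the same shape.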
[0024] The program editing device 10 is used to create a motion program for the robot 30 for executing handling of the object 1. The program editing device 10 is, for example, a teaching device (teach pendant, tablet terminal, etc.) connected to the robot controller 50. The program editing device 10 may have a configuration as a general computer having a CPU, ROM, RAM, a storage device, an input/output interface, a network interface, etc. The program editing device 10 may be a so-called “programming device” (PC or the like) for performing programming offline.
[0025] As will be described in detail below, the program editing device 10 can be used for programming by means of icons related to both commands used in control of the robot 30 and commands related to imaging with the visual sensor and processing (vision detection) of captured images. Below, when icons of commands used in control of the robot 30 and icons representing commands used in vision detection are distinguished, the former are referred to as the first type of icons and the latter are referred to as the second type of icons.
[0027] Next, the functional blocks of the program editing device 10 will be described.
[0029] The editing operation will be described using, as an example, the case where a mouse is used as the input device for editing the motion program.
[0030] In the editing screen 400, the following icons corresponding to robot control commands are displayed in the icon display area 200:
[0031] Icon 201: command to close hand and grasp object
[0032] Icon 202: command to open hand
[0033] Icon 203: command to move tip of robot arm in straight line trajectory
[0034] Icon 204: command to move tip of robot arm in arcuate trajectory
[0035] Icon 205: command to add a waypoint to the route
[0036] Icon 206: command to rotate hand
[0037] Icon 207: command to stop hand
[0038] Icon 208: palletizing (loading) execution command
[0039] Icon 209: de-palletizing (unloading) execution command
[0040] The upper program creation area 300 in the editing screen 400 is an area for creating a motion program by arranging icons in the order of operation. In the editing screen 400, icons are dragged and dropped from the icon display area 200 to the program creation area 300 by operating a mouse. The operation input reception unit 13 arranges a copy of the selected icon in the program creation area in response to such a drag-and-drop operation. By such an operation, the operator can create a motion program by selecting icons from the icon display area 200 and arranging them in the desired positions in the program creation area. In the program creation area 300, the icons selected from the icon display area 200 are arranged from left to right in the order of operation.
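The behaviour of the program generation unit described above, in which the left-to-right icon order becomes the command order, can be sketched as a simple lookup. The icon identifiers and command strings below are hypothetical, since the actual command set is product-specific:

```python
# Hypothetical mapping from icon identifiers to command text; the real
# command set of the program editing device is product-specific.
ICON_COMMANDS = {
    "vision_detect": "vision detection 'A'",
    "linear_move": "linear position[1] 2000 mm/sec positioning",
    "hand_close": "call HAND_CLOSE",
    "hand_open": "call HAND_OPEN",
}

def generate_program(icon_sequence):
    """Emit one numbered command line per icon, in the arranged (left-to-right) order."""
    return [f"{n}: {ICON_COMMANDS[icon]};"
            for n, icon in enumerate(icon_sequence, start=1)]

program = generate_program(["vision_detect", "linear_move", "hand_close"])
```

The key design point is that the generator is indifferent to whether an icon is of the first type (robot control) or the second type (vision): both map through the same table, which is what makes the common editing operation possible.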
[0041] When an icon arranged in the program creation area 300 is selected and a detail tab 262 is selected, the lower area of the editing screen 400 becomes a parameter setting screen (not illustrated) for setting the detailed operation of the command of the icon. The operator can set detailed parameters related to the operation command of the selected icon via the parameter setting screen. As an example, when the icon 203 (straight line movement) arranged in the program creation area 300 is selected, the icon 203 is highlighted. If the operator selects the detail tab 262 in this state, the setting screen for the command (straight line movement) of the icon 203 is displayed in the lower area of the editing screen 400. In this case, the contents of the detailed settings include the following setting items (target position/posture and moving speed). The operator inputs, for example, the following numerical data in each setting item.
(Parameter Setting Items)
[0042] Target position/posture: X: 345.6, Y: 456.7, Z: 567.8
[0043] W: 345.6, P: 456.7, R: 567.8
[0044] Target speed: 750 mm/sec
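The setting items above map naturally onto a small parameter record; the following is a sketch with hypothetical field names, assuming the W/P/R components are the posture angles:

```python
from dataclasses import dataclass

@dataclass
class LinearMoveParams:
    # Hypothetical record for the setting items above: target position (mm),
    # posture angles W/P/R (deg), and target speed (mm/sec).
    x: float
    y: float
    z: float
    w: float
    p: float
    r: float
    speed_mm_per_sec: float

# The numerical data from the example setting items above.
params = LinearMoveParams(345.6, 456.7, 567.8, 345.6, 456.7, 567.8, 750.0)
```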
[0045] In the example of the editing screen 400, a “view and pick up” icon 251, which is one of the second type of icons, is also displayed in the icon display area 200.
[0046] When the operator selects the “view and pick up” icon 251 to incorporate it into the motion program, the icon 251 can be arranged in the program creation area 300 by the same drag-and-drop operation used for the robot control icons 201 to 209.
[0047] When the detail tab 262 is selected in a state in which the icon 251 arranged in the program creation area 300 is selected, a setting screen for the vision detection program associated with the icon 251 is displayed in the lower area of the editing screen 400.
[0048] If a vision detection program has already been registered in the program editing device 10, when the detail tab 262 is selected in a state in which the icon 251 is selected on the editing screen 400, the registered vision detection program can be selected and associated with the icon 251.
[0050] As an example, in the program creation area 300A of the editing screen 400A, icons corresponding to the following commands are arranged:
[0051] SNAP1: capture image 1
[0052] SNAP2: capture image 2
[0053] FIND_SHAPE: find contours from images
[0054] FIND_BLOB: find blob from images
[0055] CALC_OFFSET: calculate offset value
[0056] In this case, the operator selects each icon arranged in the program creation area 300A, displays the parameter setting screen at the bottom of the editing screen 400A, and sets detailed parameters. The parameter setting of the image capture icon 252 (command SNAP) includes the following setting items.
[0057] Exposure time
[0058] Whether or not the LED lighting is on
[0059] Image reduction rate
[0060] The following setting items are set in the parameter setting of the detection icon 253 (command FIND_SHAPE or FIND_BLOB). In the following setting items, the “matching threshold” and the “contrast threshold” are parameters related to thresholds in the image processing for object detection.
[0061] Image to be used (image by SNAP1 or image by SNAP2)
[0062] Shape to be found
[0063] Matching threshold
[0064] Contrast threshold
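The two thresholds above gate whether a candidate detection is kept; a minimal sketch of that acceptance test, with illustrative (not product) default values:

```python
def accept_detection(match_score, contrast,
                     matching_threshold=0.7, contrast_threshold=30.0):
    """Keep a detection candidate only if it clears both thresholds.

    The default threshold values here are illustrative, not product defaults.
    """
    return match_score >= matching_threshold and contrast >= contrast_threshold
```

Raising the matching threshold makes shape matching stricter; raising the contrast threshold rejects candidates found in low-contrast regions of the captured image.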
[0065] In the correction calculation icon 254 (command CALC_OFFSET), for example, the position of the object in the image is obtained based on the detection results of the two detection icons 253 (command FIND_SHAPE and FIND_BLOB), and by converting the position in the image into three-dimensional coordinates in the robot coordinate system, an offset amount for correcting the teaching position of the robot is obtained.
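The correction calculation can be sketched as a pixel-to-robot conversion followed by a subtraction against the taught position. The linear scale-and-origin calibration below is a hypothetical toy model, far simpler than a real hand-eye calibration, and the parameter values are invented for illustration:

```python
def pixel_to_robot(px, py, scale_mm_per_px=0.5, origin=(100.0, 200.0)):
    """Toy linear calibration mapping an image position to robot X/Y (mm).

    scale_mm_per_px and origin are hypothetical calibration values.
    """
    return (origin[0] + px * scale_mm_per_px,
            origin[1] + py * scale_mm_per_px)

def calc_offset(detected_px, taught_xy):
    """Offset to add to the taught position so the robot reaches the detected object."""
    rx, ry = pixel_to_robot(*detected_px)
    return (rx - taught_xy[0], ry - taught_xy[1])

offset = calc_offset((40, 60), (110.0, 220.0))
```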
[0068] An example of a vision program created in the program creation area 300 or 300A will be described below.
[0069] First, the vision program 602 will be described.
[0070] In the vision program 602, the vision correction is applied in the two linear movement commands. In the vision program 602, for example, there are a plurality of objects in the field of view (captured image) of the visual sensor, and an operation for picking up the plurality of objects found in the captured image in order while applying vision correction is realized.
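The pick-all-found-objects behaviour described here can be sketched as a loop that applies a per-object vision correction before each pick. The callback names below are illustrative placeholders for the corresponding icon commands, and the retreat motion is simplified:

```python
def pick_all(offsets, taught_xy, move, close_hand, open_hand):
    """Pick every detected object in order, applying its vision-correction offset.

    `offsets` are per-object (dx, dy) corrections; `move`, `close_hand` and
    `open_hand` stand in for the corresponding icon commands.
    """
    picked = 0
    for dx, dy in offsets:
        move((taught_xy[0] + dx, taught_xy[1] + dy))  # approach the corrected position
        close_hand()                                  # grasp the object
        move(taught_xy)                               # retreat to the taught position (simplified)
        open_hand()                                   # release
        picked += 1
    return picked
```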
[0072] In the vision program 603, two cameras are used, and the following processing (A1) to (A3) is performed:
[0073] (A1) The first camera detects one end of the object (auxiliary icon 256 on the left side).
[0074] (A2) The second camera detects the other end of the object (auxiliary icon 256 on the right side).
[0075] (A3) In the correction calculation icon 254, the position of the midpoint of the object is obtained from each of the detected positions, this position is set as the position of the object, and the position of the object in the robot coordinate system is obtained. As a result, the position where the teaching point should be corrected is obtained.
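Steps (A1) to (A3) can be sketched numerically: the object position is taken as the midpoint of the two detected ends, assuming both detections have already been converted into a common coordinate system (which a real system would do via calibration):

```python
def object_position_from_ends(end_a, end_b):
    """Object position taken as the midpoint of the two detected end positions."""
    return tuple((a + b) / 2.0 for a, b in zip(end_a, end_b))

# Hypothetical end positions (mm) reported by the two cameras.
pos = object_position_from_ends((100.0, 50.0, 0.0), (140.0, 70.0, 0.0))
```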
[0078] In the present embodiment as described above, the first type of icons for robot programming and the second type of icons for the vision program are similarly listed in the icon display area in the editing screen. Furthermore, in the present embodiment, the operation of arranging the icons of the vision program in the program creation area can be performed by the same operations as the operation of arranging the robot programming icons in the program creation area.
[0079] Specifically, according to the present embodiment, a motion program can be created by editing the first type of icons and the second type of icons with a common editing operation. Thus, according to the present embodiment, even an operator who is accustomed to robot programs but who is not accustomed to vision programs can create the vision program without difficulty.
[0080] Though typical embodiments have been used above to describe the present invention, a person skilled in the art would understand that changes and other various modifications, omissions, and additions can be made to the embodiments described above without deviating from the scope of the present invention.
[0081] In the embodiments described above, configuration examples in which the display method and operation method of the icons were standardized when the operation commands of the robot and the operation commands of the processing of images by the visual sensor were expressed as icons were described. In the creation of motion programs, the method of standardizing the command display method and input operation method in robot control programming and vision programming can also be realized by text-based programming.
[0082] As an example, it is assumed that the following text-based motion program (hereinafter referred to as motion program F) is input using the program editing device. In the program list below, the leftmost number on each line is the line number. In motion program F, the object is detected from the image captured by the first camera (camera A), the object is handled while correcting the position of the robot (line numbers 1 to 10), and next, the object is detected from the image captured by the second camera (camera B), and the object is handled while correcting the position of the robot (line numbers 12 to 19).
[0083] (Motion Program F)
[0084] 1: linear position[1] 2000 mm/sec positioning;
[0085] 2: ;
[0086] 3: vision detection ‘A’;
[0087] 4: vision correction data acquisition ‘A’ vision register[1] jump label[100];
[0088] 5: ;
[0089] 6: !Handling;
[0090] 7: linear position[2] 2000 mm/sec smooth100 vision correction, vision register[1] tool correction, position register[1];
[0091] 8: linear position[2] 500 mm/sec positioning vision correction, vision register[1];
[0092] 9: call HAND_CLOSE;
[0093] 10: linear position[2] 2000 mm/sec smooth100 vision correction, vision register[1] tool correction, position register[1];
[0094] 11: ;
[0095] 12: vision detection ‘B’;
[0096] 13: vision correction data acquisition ‘B’ vision register[2] jump label[100];
[0097] 14: ;
[0098] 15: !Handling;
[0099] 16: linear position[2] 2000 mm/sec smooth100 vision correction, vision register[2] tool correction, position register[1];
[0100] 17: linear position[2] 500 mm/sec positioning vision correction, vision register[2];
[0101] 18: call HAND_CLOSE;
[0102] 19: linear position[2] 2000 mm/sec smooth100 vision correction, vision register[1] tool correction, position register[1];
[0103] 20:
[0104] In motion program F above, the commands “linear position[ ]” and “call HAND_CLOSE” are commands belonging to robot control. Specifically, the command “linear position[ ]” is a command to move the tip of the arm of the robot, and the command “call HAND_CLOSE” is a command to call the process for closing the hand. In motion program F, the commands “vision detection” and “vision correction data acquisition” are commands belonging to the vision program. Specifically, the command “vision detection” is a command for capturing an image with a camera, and the command “vision correction data acquisition” is a command for detecting an object and determining the position for correction.
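The split between robot-control commands and vision commands in motion program F can be sketched as a keyword classifier over the command names listed above; the function and the keyword tuples are illustrative, not part of the product:

```python
# Keyword tuples derived from the commands of motion program F (illustrative).
VISION_KEYWORDS = ("vision detection", "vision correction data acquisition")
ROBOT_KEYWORDS = ("linear position", "call HAND_")

def classify(line):
    """Return 'vision', 'robot', or 'other' for one motion-program line."""
    body = line.split(":", 1)[-1].strip()
    if any(body.startswith(k) for k in VISION_KEYWORDS):
        return "vision"
    if any(k in body for k in ROBOT_KEYWORDS):
        return "robot"
    return "other"
```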
[0105] The program editing device displays a list of instructions, for example, in a pop-up menu on the editing screen where the instructions of the motion program F are input. The pop-up menu includes, for example, a list of instructions as shown in Table 1 below.
TABLE 1
  No.  Instruction
  1    Linear position[ ]
  2    Call HAND_OPEN
  3    Call HAND_CLOSE
  4    Vision detection
  5    Vision correction data acquisition
[0106] The program editing device displays the instruction pop-up menu when the operation to display the instruction pop-up menu is performed. The program editing device then inserts the instruction in the pop-up menu selected by the selection operation (mouse click, touch operation on the touch panel) using the input device into the line where the cursor is present in the editing screen. The operator repeats the operation of selecting and inserting instructions to create the motion program F.
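The insert-at-cursor behaviour can be sketched as a list insertion, with the menu contents taken from Table 1 and the cursor modeled as a 0-based line index:

```python
# Menu contents follow Table 1; the trailing semicolon matches the program syntax.
MENU = ["linear position[ ]", "call HAND_OPEN", "call HAND_CLOSE",
        "vision detection", "vision correction data acquisition"]

def insert_instruction(program_lines, cursor_line, menu_index):
    """Insert the selected menu instruction at the cursor line (0-based index)."""
    program_lines.insert(cursor_line, MENU[menu_index] + ";")
    return program_lines

# Insert "vision detection" (menu entry No. 4) before an existing line.
prog = insert_instruction(["call HAND_OPEN;"], 0, 3)
```

As with the icon-based editing, robot-control instructions and vision instructions share the same menu and the same insertion operation, which is the point of the embodiment.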
[0107] Even in the creation and editing of such text-based motion programs, instructions regarding robot control and instructions of the vision program are displayed by the same display method, and the operator can insert the instructions of the vision program into the motion program by the same operation method as selecting the instructions of the robot control and inserting them into the program. Thus, even in the case of such a configuration, the same effect as in the case of programming using icons in the embodiments described above can be achieved.
[0108] In the embodiments described above, as a specific example of realizing a predetermined function by capturing an image with a visual sensor and processing the captured image, an example of detecting the position of an object and using the detected position for correcting the operation of the robot has been described. In addition to the above examples, the functions realized by capturing an image with a visual sensor and processing the captured image include various functions, such as inspection and barcode reading, that can be realized by using a visual sensor.
[0109] For example, consider the case in which a function using the visual sensor 41 other than vision correction is added to the robot system 100.
[0110] When the function of reading a barcode using the visual sensor 41 is added to the robot system 100, the command for barcode reading can likewise be expressed as a second type of icon and edited with the same operations as the first type of icons.
REFERENCE SIGNS LIST
[0111] 10 program editing device
11 program editing unit
12 editing screen display unit
13 operation input reception unit
14 program generation unit
18 input device
19 display device
30 robot
31 hand
40 visual sensor controller
41 visual sensor
50 robot controller
100 robot system
200, 200A icon display area
300, 300A program creation area
400, 400A editing screen