Robot vision supervisor for hybrid homing, positioning and workspace UFO detection enabling industrial robot use for consumer applications
10589423 · 2020-03-17
Inventors
CPC classification
B25J15/0009
PERFORMING OPERATIONS; TRANSPORTING
B25J11/0045
PERFORMING OPERATIONS; TRANSPORTING
B25J9/1676
PERFORMING OPERATIONS; TRANSPORTING
B25J9/1666
PERFORMING OPERATIONS; TRANSPORTING
A47J36/00
HUMAN NECESSITIES
B25J9/04
PERFORMING OPERATIONS; TRANSPORTING
International classification
G05B19/04
PHYSICS
G05B19/18
PHYSICS
B25J15/00
PERFORMING OPERATIONS; TRANSPORTING
B25J11/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A robot vision supervisor system for use with industrial robots placed with average home consumers or at restaurants, capable of guiding the robot to its homing position for initial homing or when a loss of reference has occurred. Further, the robot vision supervisor is able to detect the presence of unidentified foreign objects in the workspace and guide the robot to navigate around them to prevent collisions. Image processing algorithms are used with primary, orthogonal and cross camera arrays to increase the reliability and accuracy of the robot vision supervisor. Lookup tables of images are captured and stored by the controller for discretized positions in the full robot workspace while the robot is in known good calibration, for comparison with images during regular operation to detect anomalies.
Claims
1. A robot vision supervisor comprising: a robot with an end effector moving along a plurality of axes with reference to a frame; a robot controller directing said end effector to commanded positions with reference to said frame; a plurality of cameras attached to said frame; at least one of said plurality of cameras viewing said end effector and said robot controller computing a current position of said end effector along one of said plurality of axes; wherein said robot controller compares said current position determined by said camera to said commanded position to detect any loss of referencing; and wherein said cameras use image processing to extract boundaries of the robot workspace and compare them with the boundaries of the robot moving parts and said end effector to prevent collisions and guide said robot to its home position.
2. A robot vision supervisor as in claim 1, wherein said cameras are able to detect any unidentified foreign objects other than known objects in the workspace, to prevent collisions with the robot moving parts, and to notify said controller of their presence.
3. A robot vision supervisor as in claim 1, wherein said cameras are able to compare the position of said end effector against reference features on said frame.
4. A robot vision supervisor as in claim 1, wherein said cameras are able to compare the position of said end effector against features on said frame.
5. A robot vision supervisor as in claim 1, wherein said cameras are arranged in arrays and employ image stitching algorithms to improve positioning accuracy by joining the viewed images to those of adjacent said cameras.
6. A robot vision supervisor as in claim 1, wherein an image lookup table is generated against all position and orientation combinations, captured when said robot controller is known to work accurately, for use in comparison when said robot is in regular operation later.
7. A robot vision supervisor as in claim 1, further comprising: a plurality of primary cameras; and a plurality of orthogonal cameras viewing in a direction perpendicular to that of said primary cameras.
8. A robot vision supervisor as in claim 5, further comprising: a plurality of cross cameras viewing in a direction mutually perpendicular to those of said primary cameras and said orthogonal cameras.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
(1) The following is a description, by way of example only, of different embodiments of the mechanism, its variations, derivations and reductions.
DETAILED DESCRIPTION OF THE INVENTION
(13) Now referring to the drawings, wherein like numerals designate like components:
(14) A robot vision supervisor is comprised of one or plural primary direction cameras 13, 14, 15 fixed to frame 1, viewing along the x-axis; one or plural orthogonal direction cameras 15, 16, 17, 18 mounted on frame 1, viewing in a direction perpendicular to the primary direction cameras along the y-axis; and one or plural cross cameras 20, 21, 22 mounted on the roof (mounting structure not shown for clarity), also fixed with reference to frame 1, viewing in a direction mutually perpendicular to the viewing directions of the primary and orthogonal direction cameras. The robot vision supervisor is also assisted by one or plural fixed primary reference marks 23, 24, 25 on frame 1, one or plural secondary reference marks 26, 27, 28 on frame 1, a cutout feature 29, and a corner feature 30 on frame 1.
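With mutually perpendicular camera arrays, each array can resolve two of the three coordinates, so a 3-D end-effector position can be assembled by fusing the per-array measurements. The following is a minimal sketch only; the axis assignment and the averaging of redundant coordinates are illustrative assumptions, not taken from the patent.

```python
def fuse_position(primary_yz, orthogonal_xz, cross_xy):
    """Fuse 2-D measurements from three mutually perpendicular camera arrays
    into one 3-D position. Assumed (illustrative) axis assignment:
      primary cameras look along x and measure (y, z),
      orthogonal cameras look along y and measure (x, z),
      cross cameras look along z and measure (x, y).
    Each coordinate is seen by two arrays; averaging the redundant
    measurements also damps per-camera noise."""
    x = (orthogonal_xz[0] + cross_xy[0]) / 2.0
    y = (primary_yz[0] + cross_xy[1]) / 2.0
    z = (primary_yz[1] + orthogonal_xz[1]) / 2.0
    return (x, y, z)
```

For example, if the primary array reports (y, z) = (10, 5), the orthogonal array (x, z) = (20, 5) and the cross array (x, y) = (20, 10), the fused position is (20, 10, 5).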
(15) The 4-axis robot is operated using a robot motion controller directing it to go to different positions and orientations as needed to perform its tasks. Each axis may be comprised of motion transmission elements including motors, encoders, belts, pulleys and clutches, as is known in the art. When the robot is first started, a homing sequence is run in which each robot axis is guided to a home position where a sensor trips, letting the controller know it has reached a designated origin or home position. As seen in
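The homing sequence described above can be sketched as follows. This is a minimal illustration; the I/O helpers `jog_axis` and `home_sensor_tripped` are hypothetical placeholders for the controller's actual axis and sensor interfaces.

```python
def home_axis(jog_axis, home_sensor_tripped, step=-0.5, max_steps=10000):
    """Jog one axis toward its home sensor until the sensor trips, then report
    how many jog steps were taken.

    jog_axis(delta)       -- hypothetical call that moves the axis by delta
    home_sensor_tripped() -- hypothetical call that reads the home sensor
    Raises if the sensor never trips within max_steps.
    """
    for steps in range(max_steps):
        if home_sensor_tripped():
            return steps  # axis is at its designated origin (home) position
        jog_axis(step)
    raise RuntimeError("home sensor not reached; check travel limits or wiring")


def home_robot(axes):
    """Home each axis in turn; `axes` is a list of
    (jog_axis, home_sensor_tripped) pairs, one per robot axis."""
    return [home_axis(jog, sensor) for jog, sensor in axes]
```

After `home_robot` completes, the controller can treat each axis's current position as the zero reference for subsequent commanded moves.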
(16) Once the robot is homed it can successfully move around the workspace performing tasks. It can also go around known workspace objects or obstacles, as the programmers would be aware of such obstacles and program the motion path around them. This relies on the condition that the robot is faithfully following all motion commands and that no errors or drifts are building up. Sometimes an event such as a power failure can erase the current positions in controller memory, as can be seen in
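The supervisor's basic check against such drift or loss of reference — comparing the camera-derived end-effector position with the controller's commanded position — can be sketched as below. The function name, tolerance value and re-homing policy are illustrative assumptions, not specified in the patent.

```python
def check_referencing(commanded_xyz, observed_xyz, tol=2.0):
    """Compare the commanded end-effector position with the position computed
    from camera images. A deviation beyond `tol` (same units as the
    coordinates, e.g. millimetres) on any axis signals a possible loss of
    reference, after which the controller could re-run the homing sequence.

    Returns (ok, per_axis_deviations)."""
    deviations = [abs(c - o) for c, o in zip(commanded_xyz, observed_xyz)]
    return all(d <= tol for d in deviations), deviations
```

A supervisor loop would call this after each commanded move and halt or re-home the robot when the check fails, rather than letting an unreferenced robot continue toward a collision.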
(19) As can be seen in camera viewed images
(20) Even though a single primary camera can successfully compute all positions, a plurality of primary cameras is used to cover the large robot motion range, because of the limited field of view of a single camera and image distortions that reduce accuracy. Image stitching algorithms are employed to improve positioning accuracy between the primary and orthogonal camera arrays by joining the viewed images to those of adjacent cameras, creating panoramic views. Further, an image library of positions, or a lookup table of images, is stored while the robot is in good calibration. This image library position lookup table is compared against live images to compute positioning errors of the traditional controls and to detect anomalies.
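One way to realise the image lookup table above is to store a reference image per discretized pose while the robot is in known good calibration, then compare live frames against the stored reference at the same commanded pose. This sketch uses a mean-absolute-pixel-difference score; the threshold, the pose keys and the `capture` interface are illustrative assumptions.

```python
def build_lookup_table(poses, capture):
    """Capture and store a reference image for each discretized pose while the
    robot is known to be in good calibration. `capture(pose)` is a hypothetical
    camera call returning the image as a 2-D list of grayscale pixel values."""
    return {pose: capture(pose) for pose in poses}


def anomaly_score(reference, live):
    """Mean absolute pixel difference between the stored reference image and a
    live image taken at the same commanded pose."""
    diffs = [abs(r - l)
             for r_row, l_row in zip(reference, live)
             for r, l in zip(r_row, l_row)]
    return sum(diffs) / len(diffs)


def detect_anomaly(table, pose, live, threshold=10.0):
    """Flag an anomaly (drift, loss of reference, or an unidentified foreign
    object in view) when the live image deviates too much from the stored one."""
    return anomaly_score(table[pose], live) > threshold
```

In practice the comparison would be preceded by normalisation for lighting changes, but the lookup-then-compare structure is the core of the anomaly check described above.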
(21) Although the invention has been described herein in connection with various preferred embodiments, there is no intention to limit the invention to those embodiments. It should be understood that various changes and modifications to the preferred embodiments will be apparent to those skilled in the art. Such changes and modifications may be made without departing from the spirit and scope of the present invention and without diminishing its attendant advantages. Therefore, the appended claims are intended to cover such changes and modifications.