Patent classifications
G05B2219/39057
SYSTEM, DEVICE AND METHOD FOR DETERMINING ERROR IN ROBOTIC MANIPULATOR-TO-CAMERA CALIBRATION
Disclosed herein are a device, system, and method for determining error in robotic manipulator-to-camera calibration. The method includes detecting a 3D test object with a camera coupled to a robotic manipulator. One or more test points are identified on the test object based on a CAD model and pre-defined contact points corresponding to the test object. Arm poses for the robotic manipulator to reach the test points on the 3D test object are determined using the current robotic manipulator-to-camera calibration. While an end effector of the robotic manipulator is driven through the arm poses, any contact of the end effector with the 3D test object is recorded upon receiving feedback from the 3D test object. An error in the current robotic manipulator-to-camera calibration is determined based on the current position of the end effector relative to the one or more test points on the 3D test object.
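The error determination described in the abstract can be sketched as a per-point comparison between the test points predicted by the current calibration and the recorded contact positions; the point values below are hypothetical illustration data, not from the patent:

```python
import numpy as np

def calibration_error(test_points, contact_points):
    """Per-point Euclidean error between the test points predicted by the
    current manipulator-to-camera calibration and the positions where the
    end effector actually contacted the 3D test object."""
    test_points = np.asarray(test_points, dtype=float)
    contact_points = np.asarray(contact_points, dtype=float)
    errors = np.linalg.norm(test_points - contact_points, axis=1)
    return errors.mean(), errors.max()

# Hypothetical data: three predicted test points (metres) and the
# end-effector positions recorded at contact.
predicted = [[0.50, 0.10, 0.20], [0.55, 0.12, 0.20], [0.60, 0.10, 0.25]]
recorded = [[0.502, 0.101, 0.199], [0.553, 0.118, 0.201], [0.601, 0.100, 0.252]]
mean_err, max_err = calibration_error(predicted, recorded)
```

A calibration would be flagged as degraded when `max_err` exceeds the tolerance of the task at hand.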
SYSTEM AND METHOD FOR THREE-DIMENSIONAL CALIBRATION OF A VISION SYSTEM
This invention provides a system and method for calibrating a 3D vision system using a multi-layer 3D calibration target, removing the requirement for accurate pre-calibration of the target. The system and method acquire images of the multi-layer 3D calibration target at different spatial locations and at different times, and compute the orientation difference of the 3D calibration target between the two acquisitions. The technique can be used to perform vision-based single-plane orientation repeatability inspection and monitoring. Applied to an assembly working plane, it enables vision-based assembly working plane orientation repeatability inspection and monitoring. Combined with a moving robot end effector, it provides vision-based robot end-effector orientation repeatability inspection and monitoring. Vision-guided adjustment of two planes to achieve parallelism is also possible. The system and method operate to perform precise vision-guided robot setup to achieve parallelism of the robot's end effector and the assembly working plane.
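The orientation-difference computation between two acquisitions can be illustrated by fitting a plane normal to each measured point cloud and taking the angle between the normals, which is also the quantity a parallelism check drives to zero. This numpy sketch assumes each acquisition reduces to a 3D point cloud of one target layer; the data are synthetic:

```python
import numpy as np

def plane_normal(points):
    """Least-squares plane normal (unit vector) of an Nx3 point cloud."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # The right singular vector with the smallest singular value spans
    # the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

def orientation_difference_deg(points_a, points_b):
    """Angle in degrees between the planes fitted to two acquisitions."""
    na, nb = plane_normal(points_a), plane_normal(points_b)
    cosang = abs(np.dot(na, nb))  # sign-invariant: n and -n are the same plane
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Synthetic check: a flat grid vs the same grid tilted 5 degrees about x.
grid = [(x, y, 0.0) for x in range(3) for y in range(3)]
t = np.radians(5.0)
tilted = [(x, y * np.cos(t), y * np.sin(t)) for x, y, _ in grid]
angle = orientation_difference_deg(grid, tilted)
```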
IMAGE VOLUME FOR OBJECT POSE ESTIMATION
Apparatuses, systems, and techniques estimate a pose of an object based on images generated from a combined image volume. In at least one embodiment, the combined image volume is obtained from a plurality of image volumes generated based on a plurality of images of an object.
METHOD OF TEACHING ROBOT AND ROBOT SYSTEM
A robot system includes a robot, a vision sensor, and a controller. The vision sensor is configured to be detachably attached to the robot. The controller is configured to measure a reference object by using the vision sensor and calibrate a relative relationship between a sensor portion of the vision sensor and an engagement portion of the vision sensor, and teach the robot by referring to the relative relationship and by using the vision sensor, after the vision sensor is attached to the robot.
IMAGE PROCESSING APPARATUS THAT PROCESSES IMAGE PICKED UP BY IMAGE PICKUP APPARATUS ATTACHED TO ROBOT, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR
An image processing apparatus capable of simplifying operations for determining an image pickup posture of an image pickup apparatus attached to a robot. The image processing apparatus processes an image picked up by the image pickup apparatus attached to the robot. It includes a memory device that stores a set of instructions, and at least one processor that executes the set of instructions to: specify a working area of the robot based on teaching point information showing a plurality of designated teaching points; specify an image pickup area of the image pickup apparatus so as to include the specified working area; and determine an image pickup posture of the robot based on the specified image pickup area.
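Specifying an image pickup area that contains the working area can be sketched as a margin-padded axis-aligned bounding box over the teaching points; the margin value and the points below are purely illustrative assumptions:

```python
import numpy as np

def working_area(teaching_points, margin=0.05):
    """Axis-aligned bounding box of the designated teaching points,
    expanded by a margin so the image pickup area fully contains the
    working area (units assumed to be metres)."""
    pts = np.asarray(teaching_points, dtype=float)
    lo = pts.min(axis=0) - margin
    hi = pts.max(axis=0) + margin
    return lo, hi

# Hypothetical teaching points in the robot base frame.
pts = [[0.4, -0.1, 0.0], [0.6, 0.2, 0.05], [0.5, 0.0, 0.1]]
lo, hi = working_area(pts)
```

An image pickup posture would then be chosen so that the camera frustum covers the box from `lo` to `hi`.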
Method of Automated Calibration for In-Hand Object Location System
A method of automated in-hand calibration including providing at least one robotic hand having a plurality of grippers connected to a body, and providing at least one camera disposed on a periphery surface of the plurality of grippers. The method also includes providing at least one tactile sensor disposed in the at least one illumination surface and actuating the plurality of grippers to grasp an object. The method further includes locating a position of the object with respect to the at least one robotic hand and calibrating a distance parameter via the at least one camera. The method also includes calibrating the at least one tactile sensor with the at least one camera and generating instructions to grip and manipulate an orientation of the object via an image feed from the at least one camera for visualization of the object.
VISION SENSOR SYSTEM, CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM
Based on a measured shape, which is a three-dimensional shape of a subject measured by a measuring unit using an image captured with an image capturing unit disposed in a first position and a first attitude, a movement control unit determines a second position and a second attitude for capturing an image of the subject again, and sends an instruction to a movement mechanism. The three-dimensional shape is represented by height information from a reference surface. The movement control unit extracts, from the measured shape, a deficient region lacking height information, and determines the second position and the second attitude based on the height information around the deficient region of the measured shape. The position and attitude of the image capturing unit can thus be determined in a way that makes it easy to eliminate the effects of shadows.
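Extracting a deficient region and the height information around it can be sketched with a NaN mask over the height map and the valid 4-neighbourhood of that mask; the height map below is made up, and `np.roll` wraps at image edges, which is acceptable for this small illustration but would need border handling in real use:

```python
import numpy as np

def deficient_mask_and_context(height_map):
    """Return the mask of pixels lacking height data (NaN) and the mean
    height of the valid pixels bordering the deficient region, which a
    viewpoint planner could use to choose the second position/attitude."""
    h = np.asarray(height_map, dtype=float)
    missing = np.isnan(h)
    border = np.zeros_like(missing)
    # 4-neighbour dilation of the missing mask (np.roll wraps at edges).
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        border |= np.roll(missing, shift, axis=axis)
    border &= ~missing  # keep only valid pixels around the hole
    context = h[border].mean() if border.any() else float("nan")
    return missing, context

# Hypothetical height map with one shadowed (deficient) pixel.
h = [[1.0, 1.0, 1.0],
     [1.0, np.nan, 1.0],
     [2.0, 2.0, 2.0]]
missing, context = deficient_mask_and_context(h)
```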
METHOD AND APPARATUS FOR MANAGING ROBOT SYSTEM
Embodiments of the present disclosure provide methods for managing a robot system. In the method, orientations for links in the robot system may be obtained when the links are arranged in at least one posture, where each of the orientations indicates a direction pointed to by one of the links. At least one image of an object placed in the robot system may be obtained from a vision device equipped on one of the links. Based on the orientations and the at least one image, a first mapping may be determined between a vision coordinate system of the vision device and a link coordinate system of the link. Further, embodiments of the present disclosure provide apparatuses, systems, and computer-readable media for managing a robot system. The vision device may be calibrated by the first mapping and may be used to manage operations of the robot system.
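The rotation part of such a link-to-camera mapping can be illustrated with the classic hand-eye observation used by Tsai-Lenz-style methods: for paired relative motions satisfying A_i X = X B_i, the rotation axes obey axis(A_i) = R_X axis(B_i), so R_X follows from an orthogonal Procrustes fit. This is a generic numpy sketch, not the patent's own algorithm, and the test data are synthetic:

```python
import numpy as np

def rot(axis, deg):
    """Rodrigues rotation matrix about a (not necessarily unit) axis."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    t = np.radians(deg)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

def rotation_axis(R):
    """Unit rotation axis of a rotation matrix (angle assumed in (0, pi))."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def handeye_rotation(link_motions, camera_motions):
    """Rotation part R_X of the link-to-camera mapping from paired relative
    motions: align the camera-motion rotation axes to the link-motion
    rotation axes with a Kabsch/orthogonal-Procrustes fit."""
    A = np.stack([rotation_axis(R) for R in link_motions])
    B = np.stack([rotation_axis(R) for R in camera_motions])
    U, _, Vt = np.linalg.svd(B.T @ A)  # sum of outer products b_i a_i^T
    V = Vt.T
    d = np.sign(np.linalg.det(V @ U.T))  # keep a proper rotation
    return V @ np.diag([1.0, 1.0, d]) @ U.T

# Synthetic check: recover a known mapping from two relative motions.
R_X = rot([0, 0, 1], 30)
B_motions = [rot([1, 0, 0], 40), rot([0, 1, 0], 50)]
A_motions = [R_X @ B @ R_X.T for B in B_motions]
R_rec = handeye_rotation(A_motions, B_motions)
```

At least two motions with non-parallel rotation axes are needed for the fit to be well determined; the translation part is solved afterwards from the rotation.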
Simultaneous kinematic and hand-eye calibration
Described are machine vision systems and methods for simultaneous kinematic and hand-eye calibration. A machine vision system includes a robot and a 3D sensor in communication with a control system. The control system is configured to move the robot to a set of poses and, for each pose, capture a 3D image of calibration target features and record the robot joint angles. The control system is configured to obtain initial values for robot calibration parameters, and to determine initial values for hand-eye calibration parameters based on the initial values for the robot calibration parameters, the 3D images, and the joint angles. The control system then determines final values for the hand-eye calibration parameters and robot calibration parameters by refining both sets of parameters to minimize a cost function.
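The joint refinement step can be illustrated with a minimal planar model of the cost function: residuals between the target features observed by the sensor and the features predicted through the kinematic chain and the hand-eye offset. The 2-link arm, the parameterization, and all numbers here are illustrative assumptions, not the patent's formulation:

```python
import numpy as np

def fk(joints, lengths):
    """Forward kinematics of a planar 2-link arm: flange (x, y, heading)."""
    t1, t2 = joints
    l1, l2 = lengths
    return (l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
            l1 * np.sin(t1) + l2 * np.sin(t1 + t2),
            t1 + t2)

def observe(params, joints, target=(1.0, 0.5)):
    """Predicted target position in the camera frame, given joint angles,
    kinematic parameters (l1, l2) and hand-eye offset (ox, oy, otheta)."""
    l1, l2, ox, oy, ot = params
    x, y, h = fk(joints, (l1, l2))
    ch = h + ot                               # camera heading
    cx = x + ox * np.cos(h) - oy * np.sin(h)  # camera position in base frame
    cy = y + ox * np.sin(h) + oy * np.cos(h)
    dx, dy = target[0] - cx, target[1] - cy
    return np.array([np.cos(ch) * dx + np.sin(ch) * dy,
                     -np.sin(ch) * dx + np.cos(ch) * dy])

def cost(params, joint_samples, observations):
    """Sum of squared residuals, minimized jointly over kinematic and
    hand-eye parameters in the refinement step."""
    r = [observe(params, j) - o for j, o in zip(joint_samples, observations)]
    return float(sum(v @ v for v in r))

# Hypothetical ground truth and three measurement poses.
true_params = (0.5, 0.4, 0.05, 0.02, 0.1)
joint_samples = [(0.2, 0.3), (0.7, -0.4), (1.0, 0.5)]
observations = [observe(true_params, j) for j in joint_samples]
```

In practice `params` would be refined with a nonlinear least-squares routine such as `scipy.optimize.least_squares`; the cost is zero at the true parameters and grows as either the kinematic or hand-eye parameters drift.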