Patent classifications
G05B2219/39039
Positioning system using robot
A positioning system using a robot, capable of eliminating robot error factors such as thermal expansion and backlash, and of positioning the robot with accuracy higher than the robot's inherent positioning accuracy. The positioning system has a robot with a movable arm, visual feature portions provided on a robot hand, and vision sensors fixed at positions outside the robot and configured to capture the feature portions. The hand is configured to grip an object on which the feature portions are formed, and the vision sensors are positioned and configured to capture the respective feature portions.
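The abstract above describes cancelling robot error factors by observing hand-mounted feature portions with fixed vision sensors. A minimal sketch of that idea (not the patented method), assuming a hypothetical `measure_fn` that returns the hand position the sensors actually observe for a given command:

```python
import numpy as np

def correct_position(target_xyz, measure_fn, gain=0.8, tol=1e-3, max_iter=20):
    """Iteratively adjust the commanded position until the position the
    vision sensors actually observe coincides with the target, cancelling
    systematic errors such as thermal expansion or backlash."""
    target = np.asarray(target_xyz, float)
    command = target.copy()
    for _ in range(max_iter):
        observed = measure_fn(command)      # hand pose seen by the sensors
        error = target - observed
        if np.linalg.norm(error) < tol:
            break
        command = command + gain * error    # nudge the command to cancel it
    return command

# Toy model: the robot systematically lands 0.5 mm off in x (e.g. thermal drift).
measure = lambda cmd: np.asarray(cmd, float) + np.array([0.5, 0.0, 0.0])
corrected = correct_position([100.0, 50.0, 30.0], measure)
```

Because the loop closes on what the cameras see rather than on the robot's own kinematic model, the final accuracy is set by the sensors, not by the robot's inherent positioning accuracy.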
Robot apparatus and method for controlling robot apparatus
A robot system includes a fixed camera that obtains first measurement data by detecting a plurality of features positioned within a detection range, the detection range including at least part of a range in which a robot arm is movable, a hand camera movable with the robot arm, and a control apparatus that controls the robot arm. The control apparatus computes a calibration function that relates a value obtained as part of the first measurement data to a command value provided to the robot arm at each of a plurality of positions and orientations at which the hand camera obtains second measurement data by detecting each mark.
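A hedged sketch of the kind of calibration function described above, assuming it can be approximated as an affine map fitted by least squares over the sampled positions and orientations (the names `fit_calibration`, `commands`, and `measurements` are illustrative, not from the patent):

```python
import numpy as np

def fit_calibration(commands, measurements):
    """Fit an affine calibration y = A @ x + b relating robot command
    values to the values measured by the fixed camera, from samples taken
    at a plurality of positions and orientations."""
    X = np.asarray(commands, float)
    Y = np.asarray(measurements, float)
    Xh = np.hstack([X, np.ones((len(X), 1))])    # homogeneous column for b
    W, *_ = np.linalg.lstsq(Xh, Y, rcond=None)   # solves Xh @ W = Y in L2
    return W[:-1].T, W[-1]                       # A, b

# Synthetic check: measurements distorted by a known affine map.
rng = np.random.default_rng(0)
cmds = rng.uniform(-1.0, 1.0, size=(30, 3))
A_true = np.array([[1.01, 0.0, 0.0], [0.0, 0.99, 0.02], [0.0, 0.0, 1.0]])
b_true = np.array([0.5, -0.2, 0.1])
A, b = fit_calibration(cmds, cmds @ A_true.T + b_true)
```

Once fitted, the map can be inverted to turn a desired measured position into the command value that produces it.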
System and method for tying together machine vision coordinate spaces in a guided assembly environment
This invention provides a system and method that ties the coordinate spaces at the two locations together during calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces each with a different pose, extracting and accumulating such features at each location; and then using the accumulated features to tie the two coordinate spaces.
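Tying two coordinate spaces together from matched workpiece features can be sketched as a rigid point-set registration; below is a minimal Kabsch/SVD version, assuming 3D feature coordinates have already been extracted at both locations (function name illustrative, not from the patent):

```python
import numpy as np

def tie_coordinate_spaces(pts_a, pts_b):
    """Kabsch/SVD rigid registration: find R, t with pts_b = pts_a @ R.T + t,
    i.e. the transform carrying features seen at location A onto the same
    features seen at location B."""
    A, B = np.asarray(pts_a, float), np.asarray(pts_b, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                  # cross-covariance of features
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

Accumulating features from multiple runtime workpieces, each in a different pose, simply adds rows to `pts_a`/`pts_b`, which conditions the estimate better, in line with the accumulation strategy the abstract describes.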
END EFFECTOR CALIBRATION ASSEMBLIES, SYSTEMS, AND METHODS
An end effector calibration assembly includes an electronic controller, a first camera assembly communicatively coupled to the electronic controller, and a second camera assembly communicatively coupled to the electronic controller. A first image capture path of the first camera assembly intersects a second image capture path of the second camera assembly. The electronic controller receives image data from the first camera assembly, receives image data from the second camera assembly, and calibrates a position of the robot end effector based on the image data received from the first camera assembly and the second camera assembly.
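With two camera assemblies whose image capture paths intersect, a 3D point can be recovered as the point of closest approach of the two viewing rays. A minimal sketch, assuming each camera assembly yields a ray (origin plus direction) in a common frame, which is one plausible way such intersecting paths could be used:

```python
import numpy as np

def ray_midpoint(o1, d1, o2, d2):
    """Midpoint of closest approach between two viewing rays p(s) = o + s*d."""
    o1, o2 = np.asarray(o1, float), np.asarray(o2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    b = o2 - o1
    # Normal equations for min over s, t of |(o1 + s*d1) - (o2 + t*d2)|^2
    M = np.array([[1.0, -(d1 @ d2)],
                  [d1 @ d2, -1.0]])
    s, t = np.linalg.solve(M, np.array([b @ d1, b @ d2]))
    return ((o1 + s * d1) + (o2 + t * d2)) / 2.0
```

With noiseless intersecting rays the two closest points coincide; with real image data the midpoint gives a least-squares position of the end effector feature being calibrated.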
Robot Device Configured to Determine an Interaction Machine Position of at Least One Element of a Predetermined Interaction Machine, and Method
A robot device includes an optical detection device configured to detect a surrounding area image of an area surrounding the robot device. The robot device further includes a control device storing a predetermined reference marking and a predetermined reference position of the reference marking. The control device is configured to: detect, in the surrounding area image, an image detail that shows the reference marking of the interaction machine; detect the predetermined reference marking in the image detail; determine a distortion of the predetermined reference marking in the image detail; determine a spatial position of the reference marking; determine, from the spatial position of the reference marking, an interaction machine position of at least one element of the interaction machine with respect to the robot device; and subject the robot device to closed-loop control and/or open-loop control.
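For a planar reference marking, determining its distortion in the image detail amounts to estimating a homography. A minimal Direct Linear Transform sketch, assuming point correspondences between the stored marking and its image are available (decomposing the homography into a spatial position additionally requires camera intrinsics, which are omitted here):

```python
import numpy as np

def estimate_homography(marker_pts, image_pts):
    """DLT: homography H mapping stored marking coordinates onto their
    (distorted) image coordinates, from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(marker_pts, image_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)        # null vector of the DLT system
    return H / H[2, 2]

def project(H, p):
    """Apply H to a 2D point in homogeneous form."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

The perspective terms of `H` encode exactly the distortion the abstract refers to: a fronto-parallel marking yields an affine `H`, while tilt relative to the optical detection device introduces the projective row.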
AUTOMATIC ROBOTIC ARM CALIBRATION TO CAMERA SYSTEM USING A LASER
A system for calibration of a robot includes an imaging system (136) including two or more cameras (132). A registration device (120) is configured to align positions of a light spot (140) on a reference platform as detected by the two or more cameras with robot positions corresponding with the light spot positions to register an imaging system coordinate system (156) with a robot coordinate system (150).
AUTOMATING ROBOT OPERATIONS
A method to control operation of a robot includes generating at least one virtual image by an optical 3D measurement system and with respect to a 3D measurement coordinate system, the at least one virtual image capturing a surface region of a component. The method further includes converting a plurality of point coordinates of the virtual image into point coordinates with respect to a robot coordinate system by a transformation instruction and controlling a tool element of the robot using the point coordinates with respect to the robot coordinate system so as to implement the operation.
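Converting point coordinates from the 3D measurement coordinate system into the robot coordinate system can be sketched as applying a homogeneous transformation; here a hypothetical hard-coded matrix stands in for the patent's transformation instruction:

```python
import numpy as np

def convert_points(T_robot_from_meas, pts_meas):
    """Apply a 4x4 homogeneous transform (measurement frame -> robot frame)
    to the point coordinates of the virtual image."""
    P = np.asarray(pts_meas, float)
    Ph = np.hstack([P, np.ones((len(P), 1))])    # to homogeneous coordinates
    return (Ph @ T_robot_from_meas.T)[:, :3]     # back to Cartesian

# Example instruction: rotate 90 degrees about z, then translate (0.1, 0, 0.5).
T = np.array([[0.0, -1.0, 0.0, 0.1],
              [1.0,  0.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 0.5],
              [0.0,  0.0, 0.0, 1.0]])
surface_pts = convert_points(T, [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
```

The converted points can then be fed directly to the tool element's motion commands, since they are now expressed in the robot coordinate system.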
Controlling method of robot system, program, recording medium, and robot system
A controlling method of a robot system is provided that determines an origin offset at individual joints with high accuracy, even with a small number of cameras. A controlling unit 08 controls a robot 01 and a camera 04 to perform a photographing step for each of pivotal joints 021, 031 and 051 to acquire photographed data, and subsequently performs computational control. The photographing step assigns predetermined coordinate angles to multiple joints of the robot 01 so that the joints take predetermined positions and orientations, and then causes the camera 04 to photograph a mark 03 while the robot 01 rotates one of the multiple joints from the predetermined position and orientation. The computational control identifies, among the multiple joints of the robot 01, the joint causing a rotational axis offset, based on the photographed data acquired by trajectory acquiring control.
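The computational control step can be illustrated with a circle fit: while a single joint rotates, the photographed mark traces an arc whose centre lies on that joint's actual rotational axis, so comparing the fitted centre with the nominal axis position exposes the origin offset. A minimal algebraic (Kasa) fit, assuming the camera views the trajectory as a circle in image coordinates (a fronto-parallel view; a tilted view would produce an ellipse instead):

```python
import numpy as np

def fit_circle(xy):
    """Kasa fit: solve x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    for the centre (cx, cy) and radius r by linear least squares."""
    P = np.asarray(xy, float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])   # columns: 2x, 2y, 1
    b = (P ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return np.array([cx, cy]), r
```

Fitting one circle per photographed joint and checking which fitted centre deviates from its expected location is one plausible way to single out the joint causing the rotational axis offset.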