Patent classifications
G05B2219/39039
End effector calibration assemblies, systems, and methods
An end effector calibration assembly includes an electronic controller, a first camera assembly communicatively coupled to the electronic controller, and a second camera assembly communicatively coupled to the electronic controller. A first image capture path of the first camera assembly intersects a second image capture path of the second camera assembly. The electronic controller receives image data from the first camera assembly, receives image data from the second camera assembly, and calibrates a position of a robot end effector based on the image data received from the first camera assembly and the second camera assembly.
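The two intersecting image capture paths suggest a standard two-view localization: each camera contributes a ray toward the end effector, and the point nearest to both rays estimates its 3D position. A minimal sketch of that midpoint triangulation follows; the function name and ray parameterization are illustrative, not from the patent.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Return the midpoint of the shortest segment between two rays.

    Each ray is given by an origin c and a direction d. The point
    closest to both rays approximates where the two image capture
    paths intersect at the end effector.
    """
    c1, d1, c2, d2 = (np.asarray(v, dtype=float) for v in (c1, d1, c2, d2))
    # Solve for ray parameters t1, t2 minimising |(c1+t1*d1) - (c2+t2*d2)|
    r = c2 - c1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b               # zero only for parallel rays
    t1 = (c * (d1 @ r) - b * (d2 @ r)) / denom
    t2 = (b * (d1 @ r) - a * (d2 @ r)) / denom
    p1 = c1 + t1 * d1                   # closest point on ray 1
    p2 = c2 + t2 * d2                   # closest point on ray 2
    return (p1 + p2) / 2.0
```

With noisy camera rays the two closest points no longer coincide, and the gap between them doubles as a sanity check on the calibration.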
POSITIONING SYSTEM USING ROBOT
A positioning system using a robot, capable of eliminating error factors of the robot such as thermal expansion or backlash, and of positioning the robot with accuracy higher than the robot's inherent positioning accuracy. The positioning system has a robot with a movable arm, visual feature portions provided on a robot hand, and vision sensors positioned at a fixed position outside the robot and configured to capture the feature portions. The hand is configured to grip an object on which the feature portions are formed, and the vision sensors are positioned and configured to capture the respective feature portions.
Automatic robotic arm calibration to camera system using a laser
A system for calibration of a robot includes an imaging system (136) including two or more cameras (132). A registration device (120) is configured to align positions of a light spot (140) on a reference platform as detected by the two or more cameras with robot positions corresponding with the light spot positions to register an imaging system coordinate system (156) with a robot coordinate system (150).
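Aligning light-spot positions seen by the cameras with the corresponding robot positions is a point-set registration problem: find the rigid transform (rotation and translation) mapping the imaging coordinate system onto the robot coordinate system. A common way to solve it from paired correspondences is the Kabsch (SVD) algorithm, sketched below; the function name and array layout are assumptions for illustration.

```python
import numpy as np

def register_rigid(points_cam, points_robot):
    """Fit R, t such that R @ p_cam + t ~= p_robot (Kabsch algorithm).

    points_*: (N, 3) arrays of corresponding light-spot positions
    expressed in the camera and robot coordinate systems.
    """
    P = np.asarray(points_cam, dtype=float)
    Q = np.asarray(points_robot, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

At least three non-collinear correspondences are needed; in practice more spots are collected and the least-squares fit averages out detection noise.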
Vision sensor system, control method, and non-transitory computer readable storage medium
On the basis of a measured shape, which is a three-dimensional shape of a subject measured by means of a measuring unit using an image captured when an image capturing unit is disposed in a first position and a first attitude, a movement control unit determines a second position and a second attitude for capturing an image of the subject again, and sends an instruction to a movement mechanism. The three-dimensional shape is represented by means of height information from a reference surface. The movement control unit extracts, from the measured shape, a deficient region having deficient height information, and determines the second position and the second attitude on the basis of the height information around the deficient region of the measured shape. The position and attitude of the image capturing unit can be determined in such a way as to make it easy to eliminate the effects of shadows.
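The key data structure here is a height map with gaps: pixels where the first view yielded no valid height (shadowed or occluded regions). A minimal sketch of extracting the deficient region and summarizing the surrounding heights, which a planner could use to choose the second position and attitude, is shown below; the NaN encoding of missing heights and the function name are assumptions.

```python
import numpy as np

def deficient_region_hint(height_map):
    """Locate pixels with missing height (NaN) and summarise the valid
    heights immediately bordering the deficient region.

    Returns the boolean deficiency mask and the mean height of the
    valid pixels surrounding it.
    """
    h = np.asarray(height_map, dtype=float)
    missing = np.isnan(h)
    # Dilate the missing mask by one pixel in the four axis directions
    border = np.zeros_like(missing)
    border[1:, :] |= missing[:-1, :]
    border[:-1, :] |= missing[1:, :]
    border[:, 1:] |= missing[:, :-1]
    border[:, :-1] |= missing[:, 1:]
    border &= ~missing                  # keep only valid neighbours
    return missing, float(h[border].mean())
```

The border heights indicate roughly how tall the occluding geometry around the gap is, which constrains from which directions the gap can be seen without shadowing.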
Information processing apparatus, information processing method, and information processing system
There is provided an information processing apparatus to estimate a position of a distal end of a movable unit with a reduced processing load, the information processing apparatus including a position computer that computes, on the basis of first positional information obtained from reading of a projected marker by a first visual sensor and second positional information obtained from reading of the marker by a second visual sensor that moves relative to the first visual sensor, a position of a movable unit in which the second visual sensor is disposed. This makes it possible to estimate the position of the distal end of the movable unit with a reduced processing load.
System and method for tying together machine vision coordinate spaces in a guided assembly environment
This invention provides a system and method that ties the coordinate spaces at the two locations together during calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured workpiece rendition available); and wherein the first location containing a motion stage has been calibrated to the motion stage using hand-eye calibration and the second location is hand-eye calibrated to the same motion stage by transferring the runtime part back and forth between locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces, each with a different pose, extracting and accumulating such features at each location, and then using the accumulated features to tie the two coordinate spaces together.
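Once feature correspondences have been accumulated across multiple workpiece poses, tying the two (planar) coordinate spaces reduces to a least-squares fit of a 2D affine transform between the paired feature positions. A minimal sketch under those assumptions follows; the function name and the choice of an affine (rather than strictly rigid) model are illustrative.

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of accumulated workpiece feature positions
    observed at the two locations. Returns a 2x3 matrix A such that
    A @ [x, y, 1] ~= [x', y'].
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])          # (N, 3) homogeneous source points
    # Solve X @ A.T ~= dst in the least-squares sense
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T
```

Accumulating features from several poses, as the abstract suggests, makes the design matrix better conditioned and averages out per-image detection error.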
Automating robot operations
A method to control operation of a robot includes generating at least one virtual image by an optical 3D measurement system and with respect to a 3D measurement coordinate system, the at least one virtual image capturing a surface region of a component. The method further includes converting a plurality of point coordinates of the virtual image into point coordinates with respect to a robot coordinate system by a transformation instruction and controlling a tool element of the robot using the point coordinates with respect to the robot coordinate system so as to implement the operation.
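The conversion step described here, taking point coordinates from the 3D measurement coordinate system into the robot coordinate system via a transformation instruction, is the standard application of a 4x4 homogeneous transform to a batch of points. A minimal sketch, with an assumed name for the transform matrix, is:

```python
import numpy as np

def to_robot_frame(T_robot_meas, points_meas):
    """Convert points from the 3D measurement coordinate system to the
    robot coordinate system using a 4x4 homogeneous transform.

    T_robot_meas: 4x4 matrix mapping measurement frame -> robot frame.
    points_meas:  (N, 3) point coordinates in the measurement frame.
    """
    P = np.asarray(points_meas, dtype=float)
    homo = np.hstack([P, np.ones((len(P), 1))])   # (N, 4) homogeneous
    return (T_robot_meas @ homo.T).T[:, :3]
```

The matrix itself would come from a prior registration between the measurement system and the robot, such as the light-spot or feature-based calibrations described above.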