Patent classifications
G05B2219/39046
Calibration system and calibration method calibrating mechanical parameters of wrist part of robot
A calibration system able to calibrate the mechanical parameters of a wrist part in a simpler manner is provided. The calibration system uses a target fastened at a predetermined position relative to the joint closest to the hand of a robot, together with an imaging device set up around the robot, to calibrate the parameters of a mechanical model representing the wrist part. The posture of the target is changed from a predetermined initial position to generate a plurality of preliminary positions. Using these preliminary positions as starting points, the system calculates the end-point position of the robot at which the target, in the image captured by the imaging device, reaches a predetermined positional relationship with respect to that device. The calculated end-point positions are then used as the basis for calibrating the mechanical parameters of the wrist part.
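As a hedged illustration of the underlying idea (not the patent's actual wrist model), the sketch below estimates a single mechanical parameter, a link length `d`, by least squares from end-point positions observed at several target postures. The planar model `p(theta) = d * (cos theta, sin theta)` and all names are assumptions for the example.

```python
import math

def calibrate_link_length(angles, observed):
    """Closed-form least-squares estimate of d minimizing
    sum_i || p_i - d * (cos t_i, sin t_i) ||^2.
    Since each direction vector is unit length, the minimizer is
    d = (1/N) * sum_i p_i . u_i."""
    acc = 0.0
    for (px, py), t in zip(observed, angles):
        acc += px * math.cos(t) + py * math.sin(t)  # projection onto u_i
    return acc / len(angles)
```

With more parameters (offsets, joint angles, full kinematics) the same idea becomes a nonlinear least-squares fit over all observed end-point positions.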
GENERATING A MODEL FOR AN OBJECT ENCOUNTERED BY A ROBOT
Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object.
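A minimal sketch of fusing captures from multiple vantages into one object model, here as a voxel-occupancy set. The planar pose format, voxel size, and function names are illustrative assumptions, not the patent's method.

```python
import math

def accumulate_view(model, points, pose, voxel=0.05):
    """Transform `points` (x, y) from the sensor frame into the world
    frame using `pose` = (x, y, heading), then record which voxels
    the object occupies in the shared model set."""
    px, py, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    for x, y in points:
        wx = px + c * x - s * y   # rigid-body transform to world frame
        wy = py + s * x + c * y
        model.add((round(wx / voxel), round(wy / voxel)))
    return model
```

Observations of the same surface taken from different vantages land in the same voxels, so the model grows only where new parts of the object come into view.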
Integrated obstacle detection and payload centering sensor system
Disclosed are various embodiments for an integrated obstacle detection and payload centering sensor system. A robotic drive unit (RDU) captures, with a downward-facing camera mounted to itself, an image of a fiducial located on the ground. The RDU then positions itself over the fiducial and subsequently rotates. As it rotates, the RDU captures a point cloud of the surrounding vicinity with a forward-facing three-dimensional camera mounted to itself. The RDU then identifies in the point cloud at least two legs of a storage unit positioned over the robotic drive unit. Subsequently, the RDU determines a location for each of the at least two legs relative to the fiducial and triangulates the center of the storage unit based at least in part on the location of each of the at least two legs. The RDU then centers itself underneath the storage unit.
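The centering step can be sketched as follows. Assuming the detected legs are diagonally opposite corners of a rectangular storage unit (an illustrative assumption), the unit's center is simply the centroid of the detected leg positions expressed relative to the fiducial.

```python
def storage_unit_center(legs):
    """Return the centroid of detected leg positions (x, y) relative to
    the fiducial; for two diagonally opposite legs of a rectangular
    unit, this centroid is the unit's center."""
    xs = [x for x, _ in legs]
    ys = [y for _, y in legs]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

The RDU would then drive from the fiducial to this computed center before lifting the payload.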
APPARATUS AND METHOD OF CONTROLLING ROBOT ARM
An apparatus for controlling a robot arm includes: the robot arm; a calibration board on which calibration marks for self-diagnosis are shown; a distance sensor mounted on the robot arm and configured to measure a distance; an image sensor mounted on the robot arm and configured to obtain an image; and a processor configured to move the robot arm to a position for the self-diagnosis, measure a distance from a predetermined part of the robot arm to the calibration board by using the distance sensor, obtain an image of the calibration board by using the image sensor, and output a signal indicating a malfunction of the robot arm in response to the measured distance being outside a distance error range, and an image measurement value of the obtained image being outside an image error range.
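The self-diagnosis decision reduces to two range checks. The sketch below is a hedged illustration; the thresholds and parameter names are assumptions, not values from the patent.

```python
def diagnose(measured_dist, nominal_dist, dist_tol,
             image_value, nominal_image, image_tol):
    """Flag a malfunction when either the distance-sensor reading or
    the image measurement value falls outside its error range."""
    dist_ok = abs(measured_dist - nominal_dist) <= dist_tol
    image_ok = abs(image_value - nominal_image) <= image_tol
    return not (dist_ok and image_ok)  # True -> output malfunction signal
```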
MECHANICAL ARM POSITIONING METHOD AND SYSTEM ADOPTING THE SAME
A mechanical arm positioning method configured to position a mechanical arm at a fixed point. The method includes capturing a positioning pattern through an image-capturing module disposed on the mechanical arm to generate an image containing a positioning image corresponding to the positioning pattern. It is then determined whether the center of the positioning image is located at the center of the captured image. If not, the position of the mechanical arm is adjusted in parallel with the plane where the positioning pattern is located until the center of the positioning image coincides with the center of the captured image. It is then determined whether the area of the positioning image is substantially equal to a predetermined area. If not, the distance between the mechanical arm and the positioning pattern is adjusted perpendicular to that plane until the area of the positioning image is substantially equal to the predetermined area.
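The two-stage procedure above can be sketched as a simple visual-servoing loop. The pinhole camera model, gains, and area constant below are illustrative assumptions used only to make the example self-contained and runnable.

```python
def position_arm(arm, step=0.5, area_target=100.0, tol=1e-3):
    """Move `arm` (dict with x, y, z relative to the pattern) until the
    pattern is centered in the image and its apparent area matches."""
    def image_center():
        # pinhole projection: image offset of the pattern center
        return arm["x"] / arm["z"], arm["y"] / arm["z"]

    def image_area():
        # apparent area shrinks with the square of the distance
        return 400.0 / (arm["z"] ** 2)

    # Stage 1: translate parallel to the pattern plane until centered.
    cx, cy = image_center()
    while abs(cx) > tol or abs(cy) > tol:
        arm["x"] -= step * cx * arm["z"]   # proportional correction
        arm["y"] -= step * cy * arm["z"]
        cx, cy = image_center()

    # Stage 2: move perpendicular to the plane until the area matches.
    while abs(image_area() - area_target) > tol:
        z_goal = (400.0 / area_target) ** 0.5
        arm["z"] += step * (z_goal - arm["z"])
    return arm
```

Separating lateral centering from depth adjustment mirrors the patent's two sequential checks: centering first means the subsequent area measurement is taken with the pattern on the optical axis.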