Patent classifications
G05B2219/39509
Deep machine learning methods and apparatus for robotic grasping
Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot are described. Some implementations relate to training a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end effector. Some implementations are directed to utilization of the trained deep neural network to servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector. For example, the trained deep neural network may be utilized in the iterative updating of motion control commands for one or more actuators of a robot that control the pose of a grasping end effector of the robot, and to determine when to generate grasping control commands to effectuate an attempted grasp by the grasping end effector.
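The servo loop described above can be sketched minimally in Python. The callables `predict_success`, `sample_motion`, and `execute_motion` stand in for the trained network, a candidate-motion sampler, and the robot interface; all three are assumed interfaces for illustration, not anything specified in the patent:

```python
def servo_to_grasp(predict_success, sample_motion, execute_motion,
                   steps=50, candidates=16, grasp_threshold=0.9):
    """Iteratively score candidate end-effector motions with a trained
    model and issue a grasp command once predicted success clears the
    threshold (hypothetical control loop sketching the abstract)."""
    image = execute_motion(None)  # initial observation, no motion yet
    for _ in range(steps):
        motions = [sample_motion() for _ in range(candidates)]
        scores = [predict_success(image, m) for m in motions]
        best = max(range(candidates), key=scores.__getitem__)
        if scores[best] >= grasp_threshold:
            return ("grasp", motions[best])     # effectuate attempted grasp
        image = execute_motion(motions[best])   # servo toward best candidate
    return ("abort", None)
```

In a real system the candidate motions would be Cartesian end-effector displacements and the predictor a convolutional network over the camera image; here they are opaque values so the control flow stays visible.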
Robot hand controller, robot system, and robot hand control method
A robot hand controller includes an air supply unit configured to supply air into the fingers of a robot hand and to discharge air from the fingers, and a controller configured to control the air supply unit. The air supply unit includes two or more air passages connected to different fingers, each capable of supplying air into, and discharging air from, its finger independently of the others. The controller controls the supply and discharge of air through each of the two or more air passages in accordance with the shape of the workpiece and any object in the vicinity of the workpiece's transport destination.
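A rough illustration of this per-passage control might map a workpiece shape and a nearby obstacle to supply/discharge commands. The two-finger layout and the specific rules below are assumptions invented for the sketch, not details taken from the patent:

```python
def air_commands(workpiece_shape, obstacle_side=None):
    """Hypothetical rule set for a two-passage hand: inflate both fingers
    for a wide workpiece, only one for a narrow workpiece, and deflate
    the finger on the side of an object near the transport destination."""
    cmds = {"left": "supply", "right": "supply"}
    if workpiece_shape == "narrow":
        cmds["right"] = "discharge"      # one conformed finger suffices
    if obstacle_side in cmds:
        cmds[obstacle_side] = "discharge"  # clear the nearby object
    return cmds
```

The point of the sketch is that each passage receives an independent command, matching the claim that the passages can supply and discharge air independently of each other.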
Robot Hand, Robot Hand Control Method, and Program
A robot hand, and a robot hand control method and program, are provided that can perform an assembly task at high speed while alleviating shock between a gripped object (20) and an assembly target object (22). A robot hand (100) includes a hand (12), a displacement sensor (14), an estimation section, and a controller (16). The hand (12) includes an anti-slip mechanism at its contact portion with the gripped object (20) and a mechanism capable of anisotropic movement in three degrees of freedom under external force. The displacement sensor (14) detects the displacement of the hand (12) when an external force is applied to the hand (12) from the state of mechanical equilibrium existing before the force was applied. Based on the detected displacement, the estimation section estimates the position/orientation displacement of the gripped object (20) while the gripped object (20) is being assembled to the assembly target object (22). The controller (16) controls the hand (12) based on the position/orientation displacement estimated by the estimation section so as to assemble the gripped object (20) to the assembly target object (22).
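The estimate-then-correct scheme can be sketched as two small functions. The linear compliance model (object displacement proportional to hand displacement in each of the three compliant degrees of freedom) is an assumption made for the sketch; the patent does not specify the estimation model:

```python
def estimate_object_displacement(hand_disp, compliance_gain=1.0):
    """Hypothetical linear model: the gripped object's position/orientation
    displacement is proportional to the hand's measured displacement in
    each of the three compliant degrees of freedom."""
    return tuple(compliance_gain * d for d in hand_disp)

def correction_command(target_pose, object_disp):
    """Offset the commanded assembly pose by the estimated displacement so
    the gripped object seats on the target without shock."""
    return tuple(t - d for t, d in zip(target_pose, object_disp))
```

In practice the mapping from hand displacement to object displacement would be calibrated per gripper, but the proportional form keeps the data flow of the abstract visible.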
Robotic device and gripping method
A robotic device includes an end effector device, a first sensor, and a controller. The end effector device includes two fingers for gripping a workpiece. The first sensor detects a pressure distribution on a gripping position on the workpiece by the two fingers. The controller performs, based on a temporal variation in the pressure distribution when the workpiece is lifted, posture control including rotation of the end effector device.
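One way to picture posture control from a temporal variation in pressure is to track how the centre of pressure on the finger shifts after the workpiece is lifted, and rotate to counter the shift. The 1-D pressure distribution and the proportional control law below are assumptions for the sketch, not details from the patent:

```python
def center_of_pressure(pressure_map):
    """Pressure-weighted mean position along a 1-D distribution on the finger."""
    total = sum(pressure_map)
    return sum(i * p for i, p in enumerate(pressure_map)) / total

def rotation_command(p_before, p_after, gain=0.1):
    """If the centre of pressure shifts after lifting, the workpiece is
    tilting in the grip; rotate the end effector against the shift
    (hypothetical proportional law)."""
    shift = center_of_pressure(p_after) - center_of_pressure(p_before)
    return -gain * shift
```

A real sensor would produce a 2-D tactile image per finger, but the one-dimensional case is enough to show the before/after comparison driving the rotation.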
Method and computing system for performing motion planning based on image information generated by a camera
A system and method for motion planning is presented. The system is configured, when an object is or has been in a camera field of view of a camera, to receive first image information that is generated when the camera has a first camera pose. The system is further configured to determine, based on the first image information, a first estimate of the object structure, and to identify, based on the first estimate of the object structure or based on the first image information, an object corner. The system is further configured to cause an end effector apparatus to move the camera to a second camera pose, and to receive second image information for representing the object's structure. The system is configured to determine a second estimate of the object's structure based on the second image information, and to generate a motion plan based on at least the second estimate.
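The two-view pipeline in the abstract reduces to a fixed sequence of steps, which can be written as a skeleton whose callables (`capture`, `estimate_structure`, `find_corner`, `move_camera_to`, `make_plan`) are assumed interfaces standing in for the system's components:

```python
def plan_from_two_views(capture, estimate_structure, find_corner,
                        move_camera_to, make_plan):
    """Pipeline from the abstract: first image -> first structure estimate
    -> object corner -> second camera pose -> second image -> refined
    structure estimate -> motion plan."""
    img1 = capture()                       # image at the first camera pose
    est1 = estimate_structure(img1)        # first estimate of object structure
    corner = find_corner(est1, img1)       # corner from estimate or image
    move_camera_to(corner)                 # end effector moves camera to pose 2
    img2 = capture()                       # image at the second camera pose
    est2 = estimate_structure(img2)        # second, refined estimate
    return make_plan(est2)                 # motion plan from the refined estimate
```

The value of the second pose is that it points the camera at a corner identified from the first estimate, so the refined estimate covers structure the first view could not see.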
METHOD AND COMPUTING SYSTEM FOR PERFORMING OBJECT DETECTION OR ROBOT INTERACTION PLANNING BASED ON IMAGE INFORMATION GENERATED BY A CAMERA
A method and computing system for performing object detection are presented. The computing system may be configured to: receive first image information that represents at least a first portion of an object structure of an object in a camera's field of view, wherein the first image information is associated with a first camera pose; generate or update, based on the first image information, sensed structure information representing the object structure; identify an object corner associated with the object structure; cause the robot arm to move the camera to a second camera pose in which the camera is pointed at the object corner; receive second image information associated with the second camera pose; update the sensed structure information based on the second image information; determine, based on the updated sensed structure information, an object type associated with the object; and determine one or more robot interaction locations based on the object type.
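The final two steps, matching the sensed structure to an object type and looking up that type's interaction locations, can be sketched as a lookup against a type registry. The closest-bounding-box-dimensions matching rule and the registry layout are assumptions for the sketch; the patent does not specify how the type determination is made:

```python
def interaction_locations(sensed_dims, type_registry):
    """Pick the registered object type whose stored dimensions best match
    the sensed structure (closest-dimensions matching is an assumed rule),
    then return that type's stored robot interaction locations."""
    def mismatch(entry):
        return sum(abs(a - b) for a, b in zip(entry["dims"], sensed_dims))
    best = min(type_registry, key=mismatch)
    return best["name"], best["locations"]
```

Storing interaction locations per object type is what lets the second camera view pay off: once the refined structure pins down the type, the grasp or placement points come from the registry rather than from per-instance reasoning.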