Patent classification: G05B2219/39543
Method for gripping an object and suction gripper
The invention relates to a method for gripping an object by a handling system including: a robot with at least one robot arm; a gripping device which is connected to the robot arm and has a pneumatically operated suction gripper with an elastically deformable contact portion for contact with an outer surface of the object to be gripped; an identifier for identifying the outer surface of the object to be gripped; and a controller which interacts with the identifier and is designed to control the robot.
Multistep Visual Assistance for Automated Inspection
Illustrative embodiments provide a method by which artificial intelligence, in combination with vision systems or cameras, cooperates with a robot to automate a process for inspecting a workpiece. An illustrative method includes providing a set of cameras to image a set of workpieces that are randomly disposed in a storage area. A controller employing a neural network trained to identify workpieces then processes images from the set of cameras to identify each workpiece, and uses the workpiece identity to customize the operation of an inspection system.
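The classify-then-customize flow this abstract describes can be sketched as follows. This is a minimal illustration, not the patented method: the classifier is a brightness-based stub standing in for a trained neural network, and the recipe names and data shapes are invented for the example.

```python
# Hypothetical sketch: classify each camera image, then pick an
# inspection recipe keyed by the predicted workpiece identity.
from typing import Dict, List

# Inspection "recipes" customized per workpiece identity (assumed names).
RECIPES: Dict[str, str] = {
    "bracket": "check_hole_diameters",
    "housing": "check_surface_finish",
    "unknown": "flag_for_manual_inspection",
}

def classify(image: List[List[int]]) -> str:
    """Stub classifier: a real system would run a trained neural
    network here. This stand-in thresholds mean pixel brightness."""
    flat = [px for row in image for px in row]
    mean = sum(flat) / len(flat)
    if mean > 128:
        return "bracket"
    if mean > 64:
        return "housing"
    return "unknown"

def plan_inspection(images: List[List[List[int]]]) -> List[str]:
    """Map each camera image to the recipe for its predicted identity."""
    return [RECIPES[classify(img)] for img in images]
```

The key point is the dispatch step: workpiece identity, however obtained, selects the downstream inspection behavior.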
SYSTEMS AND METHODS FOR CONTROL OF ROBOTIC MANIPULATION
A robot system is provided that includes a base, an articulable arm, a visual acquisition unit, and at least one processor. The articulable arm extends from the base and is configured to be moved toward a target. The visual acquisition unit is mounted to the arm or the base, and acquires environmental information. The at least one processor is operably coupled to the arm and the visual acquisition unit, the at least one processor configured to: generate an environmental model using the environmental information; select, from a plurality of planning schemes, using the environmental model, at least one planning scheme to translate the arm toward the target; plan movement of the arm toward the target using the selected at least one planning scheme; and control movement of the arm toward the target using the at least one selected planning scheme.
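The model-then-select-then-plan loop can be sketched in a few lines. Everything here is an assumption made for illustration: the scheme names, the clutter metric used to score them, and the plans they emit are stand-ins, not the patent's actual planners.

```python
# Hypothetical sketch: score candidate planning schemes against an
# environmental model and plan arm motion with the best-scoring one.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class EnvModel:
    """Environmental model built from the visual acquisition unit."""
    obstacle_count: int
    free_volume: float  # fraction of workspace that is unobstructed

def straight_line_plan(model: EnvModel, target: str) -> List[str]:
    return ["move_linear", target]

def sampling_based_plan(model: EnvModel, target: str) -> List[str]:
    return ["sample_waypoints", "move_through_waypoints", target]

# Each scheme paired with a suitability score for a given environment
# (scores are illustrative: open space favors a direct move).
SCHEMES: Dict[str, Tuple[Callable, Callable[[EnvModel], float]]] = {
    "straight_line": (straight_line_plan, lambda m: m.free_volume),
    "sampling": (sampling_based_plan, lambda m: 1.0 - m.free_volume),
}

def select_and_plan(model: EnvModel, target: str) -> List[str]:
    """Select a planning scheme using the model, then plan with it."""
    name = max(SCHEMES, key=lambda k: SCHEMES[k][1](model))
    planner = SCHEMES[name][0]
    return planner(model, target)
```

The design point is that scheme selection is data-driven: the same target yields different motion plans depending on what the environmental model reports.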
GENERATING A MODEL FOR AN OBJECT ENCOUNTERED BY A ROBOT
Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object.
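A minimal sketch of the flow, under the simplifying assumption that an object "model" is just a fused point set: per-view point clouds (already in a common frame) are merged into a new model, and the model is registered only when the robot cannot recognize the object with its existing models. All names and the exact-match recognition check are illustrative.

```python
# Hypothetical sketch: fuse vision-sensor data captured from multiple
# vantages into one model, and register it only if the object is unknown.
from typing import Dict, List, Tuple

Point = Tuple[float, float, float]

def build_model(views: List[List[Point]]) -> List[Point]:
    """Fuse per-vantage point clouds, removing duplicate points."""
    return sorted({p for view in views for p in view})

def register_if_unknown(obs_views: List[List[Point]],
                        models: Dict[str, List[Point]],
                        name: str) -> bool:
    """Add a new model only when no existing model matches."""
    candidate = build_model(obs_views)
    if candidate in models.values():
        return False            # robot already recognizes this object
    models[name] = candidate    # new model available for detection/pose
    return True
```

A real system would align the views (e.g. by registration) and match models tolerantly rather than exactly; the sketch only shows the accumulate-and-register structure.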
MACHINE LEARNING CONTROL OF OBJECT HANDOVERS
A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
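The non-interference constraint can be sketched as a clearance filter: given points on the human hand and candidate grasp points (stand-ins for what the deep network would propose from the point cloud), keep only grasps that stay clear of every hand point. The clearance value and all names are assumptions for the sketch.

```python
# Hypothetical sketch: reject candidate grasp points that would pinch
# or touch the human hand holding the object.
from typing import List, Tuple

Point = Tuple[float, float, float]

def dist(a: Point, b: Point) -> float:
    """Euclidean distance between two 3-D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def safe_grasps(candidates: List[Point], hand_points: List[Point],
                clearance: float = 0.05) -> List[Point]:
    """Keep grasp points at least `clearance` metres from every
    hand point observed in the depth camera's point cloud."""
    return [g for g in candidates
            if all(dist(g, h) >= clearance for h in hand_points)]
```

In the abstract's pipeline this check is implicit in the trained network's output; the sketch just makes the safety constraint explicit.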
Robot, robot system, control device, and control method
A robot includes: a hand; and a control unit that operates the hand, in which the control unit generates three-dimensional point group information for a partial image forming part of a captured image obtained by an imaging unit, and causes the hand to hold an object included in the partial image.
Hand control apparatus and hand control system
A hand control apparatus including: an extracting unit that extracts, from a storage unit storing and associating shapes of plural types of objects with grip patterns, the grip pattern of the object whose shape is closest to that acquired by a shape acquiring unit; a position and posture calculating unit that calculates a gripping position and posture of the hand in accordance with the extracted grip pattern; a hand driving unit that causes the hand to grip the object based on the calculated gripping position and posture; a determining unit that determines whether the gripped state of the object is appropriate based on information acquired by at least one of the shape acquiring unit, a force sensor, and a tactile sensor; and a gripped state correcting unit that corrects at least one of the gripping position and the posture when the gripped state is determined to be inappropriate.
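The extracting unit's nearest-shape lookup can be sketched directly. The shape descriptor (a small feature vector such as width/height/depth), the stored entries, and the pattern names are all assumptions for illustration, not the patent's representation.

```python
# Hypothetical sketch: pick the grip pattern associated with the stored
# shape closest to the shape acquired by the shape acquiring unit.
from typing import Dict, Tuple

Shape = Tuple[float, ...]   # e.g. (width, height, depth) in metres

# Storage unit: shape descriptor -> grip pattern (illustrative entries).
STORED: Dict[Shape, str] = {
    (0.05, 0.05, 0.12): "cylinder_side_grip",
    (0.20, 0.15, 0.02): "plate_edge_grip",
}

def extract_grip_pattern(acquired: Shape) -> str:
    """Return the grip pattern of the nearest stored shape."""
    def sq_dist(a: Shape, b: Shape) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(STORED, key=lambda s: sq_dist(s, acquired))
    return STORED[nearest]
```

The later determine-and-correct loop would then check sensor feedback after gripping and adjust position or posture; only the lookup step is shown here.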
Supervised Autonomous Grasping
A computer-implemented method, executed by data processing hardware of a robot, includes receiving a three-dimensional point cloud of sensor data for a space within an environment about the robot. The method includes receiving a selection input indicating a user-selection of a target object represented in an image corresponding to the space. The target object is for grasping by an end-effector of a robotic manipulator of the robot. The method includes generating a grasp region for the end-effector of the robotic manipulator by projecting a plurality of rays from the selected target object of the image onto the three-dimensional point cloud of sensor data. The method includes determining a grasp geometry for the robotic manipulator to grasp the target object within the grasp region. The method includes instructing the end-effector of the robotic manipulator to grasp the target object within the grasp region based on the grasp geometry.
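The ray-projection step that turns a 2-D selection into a 3-D grasp region can be sketched under a pinhole-camera assumption: cast a ray through each selected pixel and keep point-cloud points lying close to any ray. The intrinsics and distance threshold below are made up for the example.

```python
# Hypothetical sketch: project rays from selected image pixels onto a
# 3-D point cloud to delimit a grasp region for the end-effector.
from typing import List, Tuple

Point = Tuple[float, float, float]

def pixel_ray(u: float, v: float, fx: float = 500.0, fy: float = 500.0,
              cx: float = 320.0, cy: float = 240.0) -> Point:
    """Unit direction of the ray through pixel (u, v), camera at origin
    (assumed pinhole intrinsics)."""
    d = ((u - cx) / fx, (v - cy) / fy, 1.0)
    n = sum(c * c for c in d) ** 0.5
    return tuple(c / n for c in d)

def point_ray_dist(p: Point, ray: Point) -> float:
    """Distance from point p to the ray from the origin along `ray`."""
    t = sum(a * b for a, b in zip(p, ray))       # projection length
    closest = tuple(t * c for c in ray)
    return sum((a - b) ** 2 for a, b in zip(p, closest)) ** 0.5

def grasp_region(pixels: List[Tuple[float, float]],
                 cloud: List[Point], tol: float = 0.05) -> List[Point]:
    """Keep cloud points within `tol` of any ray through a selected pixel."""
    rays = [pixel_ray(u, v) for u, v in pixels]
    return [p for p in cloud
            if any(point_ray_dist(p, r) <= tol for r in rays)]
```

The surviving points bound the region within which a grasp geometry is then computed.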
Gripping system with machine learning
A gripping system includes a hand that grips a workpiece, a robot that supports the hand and changes at least one of a position and a posture of the hand, and an image sensor that acquires image information from a viewpoint interlocked with at least one of the position and the posture of the hand. Additionally, the gripping system includes a construction module that constructs a model by machine learning based on collection data. The model corresponds to at least a part of a process of specifying an operation command of the robot based on the image information acquired by the image sensor and hand position information representing at least one of the position and the posture of the hand. An operation module derives the operation command of the robot based on the image information, the hand position information, and the model, and a robot control module operates the robot based on the operation command derived by the operation module.
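The three-module structure can be sketched with a deliberately trivial learner: the construction module "learns" by memorizing collection data (1-nearest-neighbour), the operation module maps current image information plus hand position to an operation command, and a control module would execute that command. Feature shapes and command names are assumptions for the sketch.

```python
# Hypothetical sketch: construct a model from collection data, then
# derive operation commands from image info + hand position.
from typing import Callable, List, Tuple

# (features = image info concatenated with hand pose, command)
Sample = Tuple[Tuple[float, ...], str]

def construct_model(collection: List[Sample]) -> Callable[[Tuple[float, ...]], str]:
    """Construction module stand-in: memorize the collection data and
    answer queries by nearest neighbour (a real system would train
    a learned model here)."""
    def model(features: Tuple[float, ...]) -> str:
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(collection, key=lambda s: sq_dist(s[0], features))[1]
    return model

def operate(model: Callable[[Tuple[float, ...]], str],
            image_info: Tuple[float, ...],
            hand_pose: Tuple[float, ...]) -> str:
    """Operation module: image + hand position info -> operation command."""
    return model(tuple(image_info) + tuple(hand_pose))

# Illustrative collection data gathered during operation.
collection = [
    ((0.9, 0.1, 0.0, 0.0), "close_gripper"),
    ((0.1, 0.9, 0.5, 0.5), "approach_target"),
]
model = construct_model(collection)
```

The separation matters: the model only covers part of the command-specification process, so the operation and control modules remain distinct stages around it.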