Patent classifications
G05B2219/39474
OBJECT MANIPULATION
A robot for object manipulation may include sensors, a robot appendage, actuators configured to drive joints of the robot appendage, a planner, and a controller. Object path planning may include determining poses. Object trajectory optimization may include assigning a set of timestamps to the poses, optimizing a cost function, which may be a finger-sliding cost function that penalizes a sliding distance, a change in a desired normal direction, and a wrench error associated with sliding a robot finger, and generating an object trajectory based on the optimized cost function. Grasp sequence planning may be model-based or based on a deep reinforcement learning (DRL) policy. The controller may execute the object trajectory and the grasp sequence via the robot appendage and actuators.
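A minimal sketch of how such a finger-sliding cost term could be assembled, assuming NumPy vectors for the contact normals and the wrench error; the function name sliding_cost and the weights w_dist, w_normal, and w_wrench are illustrative and not taken from the patent.

    import numpy as np

    def sliding_cost(slide_dist, normal_prev, normal_des, wrench_err,
                     w_dist=1.0, w_normal=1.0, w_wrench=1.0):
        # Penalty for the distance the finger slides on the object surface.
        dist_term = w_dist * slide_dist ** 2
        # Penalty for the change in the desired contact normal direction
        # (1 - cos(angle) between unit normals).
        normal_term = w_normal * (1.0 - float(np.dot(normal_prev, normal_des)))
        # Penalty for the wrench error associated with the slide.
        wrench_term = w_wrench * float(np.linalg.norm(wrench_err)) ** 2
        return dist_term + normal_term + wrench_term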
OBJECT MANIPULATION
A robot for object manipulation may include sensors, a robot appendage, actuators configured to drive joints of the robot appendage, a planner, and a controller. Object path planning may include determining poses. Object trajectory optimization may include assigning a set of timestamps to the poses, optimizing a cost function based on an inverse kinematics (IK) error, a difference between an estimated required wrench and an actual wrench, and a grasp efficiency, and generating a reference object trajectory based on the optimized cost function. Grasp sequence planning may be model-based or based on a deep reinforcement learning (DRL) policy. The controller may implement the reference object trajectory and the grasp sequence via the robot appendage and actuators.
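A minimal sketch of the described trajectory cost, assuming one IK-error, wrench, and grasp-efficiency sample per timestamped pose; trajectory_cost, its weight parameters, and the per-term forms (squared norms, a subtracted efficiency reward) are assumptions rather than the patent's formulation.

    import numpy as np

    def trajectory_cost(ik_errors, required_wrenches, actual_wrenches,
                        grasp_efficiencies, w_ik=1.0, w_wrench=1.0, w_eff=1.0):
        # One entry per timestamped pose along the candidate trajectory.
        ik_term = w_ik * np.sum(np.asarray(ik_errors) ** 2)
        # Penalize mismatch between the estimated required wrench and the
        # actual wrench at each pose.
        wrench_term = w_wrench * np.sum(
            np.linalg.norm(np.asarray(required_wrenches)
                           - np.asarray(actual_wrenches), axis=1) ** 2)
        # Reward grasp efficiency by subtracting it from the cost.
        eff_term = -w_eff * np.sum(grasp_efficiencies)
        return ik_term + wrench_term + eff_term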
Determining final grasp pose of robot end effector after traversing to pre-grasp pose
Grasping of an object by an end effector of a robot based on a final grasp pose of the end effector that is determined after the end effector has been traversed to a pre-grasp pose. An end effector vision component can be utilized to capture instance(s) of end effector vision data after the end effector has been traversed to the pre-grasp pose, and the final grasp pose can be determined based on the end effector vision data. For example, the final grasp pose can be determined by selecting instance(s) of pre-stored visual feature(s) that satisfy similarity condition(s) relative to current visual features of the instance(s) of end effector vision data, and determining the final grasp pose based on grasp criteria stored in association with the selected instance(s) of pre-stored visual feature(s).
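A minimal sketch of the feature-matching step, assuming feature vectors and cosine similarity as the similarity condition; final_grasp_pose, the layout of stored_entries, and the threshold value are all assumptions for illustration.

    import numpy as np

    def final_grasp_pose(current_features, stored_entries, threshold=0.9):
        # stored_entries: list of (feature_vector, grasp_criteria) pairs,
        # where grasp_criteria encodes a grasp pose for the end effector.
        best_score, best_criteria = -1.0, None
        for features, criteria in stored_entries:
            # Cosine similarity as one possible similarity condition.
            score = float(np.dot(current_features, features)
                          / (np.linalg.norm(current_features)
                             * np.linalg.norm(features)))
            if score >= threshold and score > best_score:
                best_score, best_criteria = score, criteria
        return best_criteria  # None if no stored instance satisfies the condition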
SYSTEMS AND METHODS FOR DETERMINING A TYPE OF GRASP FOR A ROBOTIC END-EFFECTOR
Substantially as described and illustrated herein including devices, methods of operation for the systems or devices, articles of manufacture including processor-executable instructions, and a system including a robot.
Systems, devices, articles, and methods for prehension of items
A system and method of determining a grasp type of an end-effector of a robot when interacting with an item, wherein a plurality of velocity values of the end-effector at various positions along its movement are collected and used to determine the grasp type when a given velocity value is below a predetermined threshold.
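A minimal sketch of the velocity-threshold logic, assuming sampled (position, speed) pairs collected along the movement; the threshold value and the placeholder mapping from position to grasp type are illustrative only.

    def determine_grasp_type(samples, threshold=0.05):
        # samples: list of (position, speed) pairs collected at various
        # positions of the end-effector's movement.
        for position, speed in samples:
            if speed < threshold:
                # The grasp type is decided once the end-effector has
                # slowed below the predetermined threshold; this mapping
                # from position to grasp type is a placeholder.
                return "power" if position[2] < 0.1 else "pinch"
        return None  # the end-effector never slowed below the threshold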
SYSTEMS, DEVICES, ARTICLES, AND METHODS FOR PREHENSION OF ITEMS
Substantially as described and illustrated herein including devices, methods of operation for the systems or devices, articles of manufacture including processor-executable instructions, and a system including a robot.
Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation
A controller is provided for performing dynamic grasping of a target object using visual sensory inputs. The controller includes a robotic interface connected to a robotic arm that includes links connected by joints having actuators and encoders; a gripper at the end-effector of the robotic arm configured to grasp the target object in response to robot control signals; and a vision sensor configured to continuously provide visual observations for tracking poses of the target object in a workspace and computing grasp poses, the vision sensor being mounted on a distal end of the robotic arm adjacent to the gripper. The controller trains the Eye-on-Hand reinforcement learner policy, tracks the poses of the target object, and generates robot control signals to follow the target object while keeping it in the field of view of the vision sensor and to grasp the target object in the workspace.
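A minimal control-loop sketch of the eye-on-hand setup, assuming a trained policy object and simple tracker, arm, and gripper interfaces; none of these class or method names come from the patent.

    class EyeOnHandController:
        # policy:  trained eye-on-hand reinforcement-learning policy
        # tracker: estimates the target object's pose from wrist-camera images
        # arm:     sends joint commands to the actuators
        # gripper: opens/closes the end-effector gripper
        def __init__(self, policy, tracker, arm, gripper):
            self.policy, self.tracker = policy, tracker
            self.arm, self.gripper = arm, gripper

        def step(self, wrist_image, joint_state):
            # Actively estimate the target pose from the eye-on-hand sensor.
            target_pose = self.tracker.estimate(wrist_image)
            # The policy maps the observation to control signals that follow
            # the target while keeping it in the camera's field of view.
            action = self.policy.act({"target_pose": target_pose,
                                      "joints": joint_state})
            self.arm.command(action["joint_velocities"])
            # Close the gripper when the policy decides to grasp.
            if action["grasp"]:
                self.gripper.close()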