G05B2219/39473

Robotic grasping prediction using neural networks and geometry aware object representation

Deep machine learning methods and apparatus, some of which are related to determining a grasp outcome prediction for a candidate grasp pose of an end effector of a robot. Some implementations are directed to training and utilization of both a geometry network and a grasp outcome prediction network. The trained geometry network can be utilized to generate, based on two-dimensional or two-and-a-half-dimensional image(s), geometry output(s) that are geometry-aware and that represent (e.g., high-dimensionally) three-dimensional features captured by the image(s). In some implementations, the geometry output(s) include at least an encoding that is generated based on a trained encoding neural network trained to generate encodings that represent three-dimensional features (e.g., shape). The trained grasp outcome prediction network can be utilized to generate, based on applying the geometry output(s) and additional data as input(s) to the network, a grasp outcome prediction for a candidate grasp pose.
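The two-network pipeline can be sketched as follows. This is a toy illustration, not the patented implementation: the "encoder" here is a hand-written patch summary standing in for a learned CNN, and the "prediction network" is a single logistic unit over the concatenated encoding and candidate pose. All function names and weights are hypothetical.

```python
import math

def encode_geometry(depth_patch):
    """Toy stand-in for the trained encoding network: summarizes a
    2.5-D depth patch into a small geometry-aware feature vector.
    (A real implementation would be a learned encoder.)"""
    flat = [v for row in depth_patch for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    return [mean, var, min(flat), max(flat)]

def predict_grasp_outcome(encoding, grasp_pose, weights, bias):
    """Toy stand-in for the grasp outcome prediction network: a single
    logistic unit over the concatenated encoding and candidate pose."""
    features = encoding + list(grasp_pose)
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # predicted success probability

depth_patch = [[0.50, 0.52], [0.51, 0.55]]
enc = encode_geometry(depth_patch)
p = predict_grasp_outcome(enc, (0.1, 0.2, 0.3), weights=[0.5] * 7, bias=-0.4)
print(p)  # a probability in (0, 1)
```

Candidate poses can then be ranked by predicted success probability and the best one executed.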

System and method for robotic bin picking

A method and computing system comprising identifying one or more candidate objects for selection by a robot. A path to the one or more candidate objects may be determined based upon, at least in part, a robotic environment and at least one robotic constraint. A feasibility of grasping a first candidate object of the one or more candidate objects may be validated. If the feasibility is validated, the robot may be controlled to physically select the first candidate object. If the feasibility is not validated, at least one of a different grasping point of the first candidate object, a second path, or a second candidate object may be selected.
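The fallback logic described above (try a grasp point, and on failed validation try a different grasp point, path, or candidate object) can be sketched as a nested search. This is a minimal, assumed control flow; `plan_path`, `grasp_points`, and `is_feasible` are hypothetical callables standing in for the planner and validator.

```python
def pick_candidate(candidates, grasp_points, plan_path, is_feasible):
    """Try each candidate object; for each, try its grasp points in turn.
    Returns (object, grasp_point, path) for the first feasible grasp,
    or None if every combination fails validation."""
    for obj in candidates:
        path = plan_path(obj)  # path under environment + robot constraints
        if path is None:
            continue  # no valid path: fall through to the next candidate
        for gp in grasp_points(obj):
            if is_feasible(obj, gp, path):
                return obj, gp, path
    return None

# Toy scenario: every grasp on object "A" fails validation, so the
# search falls back to object "B" and its second grasp point.
plan = lambda obj: ["waypoint"]
gps = lambda obj: [0, 1]
feas = lambda obj, gp, path: (obj, gp) == ("B", 1)
result = pick_candidate(["A", "B"], gps, plan, feas)
print(result)
```

Only on a fully validated combination would the robot be commanded to physically select the object.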

METHOD FOR PICKING UP AN OBJECT BY MEANS OF A ROBOTIC DEVICE

A method for picking up an object by means of a robotic device. The method includes obtaining at least one depth image of the object; determining, for each of a plurality of points of the object, the value of a measure of the scattering of surface normal vectors in an area around the point of the object; supplying the determined values to a neural network configured to output, in response to an input containing measured scattering values, an indication of object locations for pick-up; determining a location of the object for pick-up from an output which the neural network outputs in response to the supply of the determined values; and controlling the robotic device to pick up the object at the determined location.
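One plausible scattering measure for surface normals in a neighbourhood is `1 - |mean of the unit normals|`, which is near zero where normals agree (flat, graspable regions) and grows where they disagree. The patent does not specify the measure, so this is an illustrative choice.

```python
import math

def normal_scatter(normals):
    """Scattering of unit surface normals: 1 - |mean normal|.
    ~0 on flat regions (normals agree); larger on curved or cluttered
    regions (normals disagree). Input: list of (x, y, z) unit vectors."""
    n = len(normals)
    mx = sum(v[0] for v in normals) / n
    my = sum(v[1] for v in normals) / n
    mz = sum(v[2] for v in normals) / n
    return 1.0 - math.sqrt(mx * mx + my * my + mz * mz)

flat = [(0.0, 0.0, 1.0)] * 4                      # all normals aligned
curved = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
          (-1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]      # normals point everywhere
print(normal_scatter(flat))    # 0.0
print(normal_scatter(curved))  # well above 0
```

Per the method, these per-point values would then be fed to the trained network, which outputs the pick-up location.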

LARGE OBJECT ROBOTIC FRONT LOADING ALGORITHM

A method and system are herein disclosed wherein a robot handles objects that are large, unwieldy, highly-deformable, or otherwise difficult to contain and carry. The robot is operated to navigate an environment and detect and classify objects using a sensing system. The robot determines the type, size and location of objects and classifies the objects based on detected attributes. Grabber pad arms and grabber pads move other objects out of the way and move the target object onto the shovel to be carried. The robot maneuvers objects into and out of a containment area comprising the shovel and grabber pad arms following a process optimized for the type of object to be transported. Large, unwieldy, highly deformable, or otherwise difficult to maneuver objects may be managed by the method disclosed herein.
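The classify-then-dispatch step can be sketched as a mapping from detected attributes to a handling procedure. The categories, threshold, and strategy names below are illustrative placeholders, not taken from the patent.

```python
def handling_strategy(obj_type, size_m, deformable):
    """Map detected object attributes (type, size, deformability) to a
    containment/carry procedure. Thresholds and names are hypothetical."""
    if deformable:
        return "scoop-and-pin"      # pin with grabber pads while scooping
    if size_m > 0.5:                # large rigid object
        return "sweep-then-shovel"  # clear neighbours, then load the shovel
    return "direct-shovel"

print(handling_strategy("pillow", 0.6, deformable=True))
print(handling_strategy("box", 0.8, deformable=False))
print(handling_strategy("ball", 0.2, deformable=False))
```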

COLLISION HANDLING METHODS IN GRASP GENERATION
20230166398 · 2023-06-01

A robotic grasp generation technique for part picking applications. Part and gripper geometry are provided as inputs, typically from CAD files. Gripper kinematics are also defined as an input. A set of candidate grasps is provided using any known preliminary grasp generation tool. A point model of the part and a model of the gripper contact surfaces with a clearance margin are used in an optimization computation applied to each of the candidate grasps, resulting in an adjusted grasp database. The adjusted grasps optimize grasp quality using a virtual gripper surface, which positions the actual gripper surface a small distance away from the part. A signed distance field calculation is then performed on each of the adjusted grasps, and those with any collision between the gripper and the part are discarded. The resulting grasp database includes high quality collision-free grasps for use in a robotic part pick-and-place operation.
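The final filtering step, i.e. discarding grasps with any gripper/part collision via a signed distance field, can be sketched as below. For brevity the part's SDF is an analytic sphere; a real part model would use a precomputed SDF grid from the CAD geometry, and `gripper_points` is a hypothetical sampler of gripper contact-surface points for a grasp.

```python
import math

def sphere_sdf(point, center, radius):
    """Signed distance from `point` to a sphere surface (negative inside).
    Stands in for the part's precomputed signed distance field."""
    return math.dist(point, center) - radius

def filter_collision_free(grasps, gripper_points, part_center, part_radius):
    """Keep only grasps whose sampled gripper surface points all lie
    outside the part (SDF > 0 everywhere); discard the rest."""
    kept = []
    for g in grasps:
        if all(sphere_sdf(p, part_center, part_radius) > 0
               for p in gripper_points(g)):
            kept.append(g)
    return kept

# Two candidate grasps, parameterized by finger offset from the part
# center: one keeps clearance, one penetrates the part.
fingers = lambda offset: [(offset, 0.0, 0.0), (-offset, 0.0, 0.0)]
kept = filter_collision_free([0.15, 0.05], fingers, (0.0, 0.0, 0.0), 0.1)
print(kept)  # only the clearance-respecting grasp survives
```

The surviving grasps form the collision-free database used for pick-and-place.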

Depth perception modeling for grasping objects
11185978 · 2021-11-30

Methods, grasping systems, and computer-readable mediums storing computer executable code for grasping an object are provided. In an example, a depth image of the object may be obtained by a grasping system. A potential grasp point of the object may be determined by the grasping system based on the depth image. A tactile output corresponding to the potential grasp point may be estimated by the grasping system based on data from the depth image. The grasping system may be controlled to grasp the object at the potential grasp point based on the estimated tactile output.
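The depth-to-tactile step can be sketched with a crude proxy: local depth variance around the potential grasp point as a stand-in for predicted contact roughness. The patent's estimator would be a model learned from paired depth and tactile data; everything below is an illustrative assumption.

```python
def estimate_tactile(depth_patch):
    """Toy tactile proxy: variance of the depth values in a local patch
    around the grasp point, standing in for predicted contact roughness."""
    flat = [v for row in depth_patch for v in row]
    mean = sum(flat) / len(flat)
    return sum((v - mean) ** 2 for v in flat) / len(flat)

def should_grasp(depth_patch, roughness_limit=0.01):
    """Grasp only where the estimated tactile response is acceptably smooth
    (threshold is an arbitrary illustrative value)."""
    return estimate_tactile(depth_patch) < roughness_limit

smooth = [[0.40, 0.40], [0.40, 0.41]]  # nearly planar surface
bumpy = [[0.40, 0.70], [0.10, 0.55]]   # large depth discontinuities
print(should_grasp(smooth), should_grasp(bumpy))
```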

Hybrid machine learning-based systems and methods for training an object picking robot with real and simulated performance data

For training an object picking robot with real and simulated grasp performance data, grasp locations on an object are assigned based on object physical properties. A simulation experiment for robot grasping is performed using a first set of assigned locations. Based on simulation data from the simulation, a simulated object grasp quality of the robot is evaluated for each of the assigned locations. A first set of candidate grasp locations on the object is determined based on data representative of simulated grasp quality from the evaluation. Based on sensor data from an actual experiment for the robot grasping using each of the candidate grasp locations, an actual object grasp quality is evaluated for each of the candidate locations.
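The two-stage sim-then-real pipeline can be sketched as a rank-and-filter step followed by real-world evaluation. Location names and quality scores below are invented for illustration.

```python
def select_candidates(locations, sim_quality, top_k=3):
    """Stage 1: rank assigned grasp locations by simulated grasp quality
    and keep the best top_k as the candidate set for real experiments."""
    return sorted(locations, key=sim_quality, reverse=True)[:top_k]

def evaluate_real(candidates, real_quality):
    """Stage 2: score each surviving candidate from actual sensor data."""
    return {c: real_quality(c) for c in candidates}

locations = ["rim", "handle", "base", "side"]
sim = {"rim": 0.4, "handle": 0.9, "base": 0.7, "side": 0.6}.get
candidates = select_candidates(locations, sim, top_k=2)
print(candidates)  # best two by simulated quality
print(evaluate_real(candidates, lambda c: 0.8 if c == "handle" else 0.5))
```

The design point is that cheap simulated trials prune the search space before expensive physical trials score the remainder.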

CONTROL DEVICE, ROBOT SYSTEM, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM

A control device includes a target position setting part, a trajectory estimation part, and a target position selector. The target position setting part determines a target position of an actor of a robot based on a form of an object located in an operating environment of the robot. The trajectory estimation part estimates a predicted trajectory of the actor based on motion of the actor up to the present, and estimates a trajectory of the actor from a current position to the target position as an approach trajectory using a predetermined function. The target position selector selects one target position based on a degree of similarity between the predicted trajectory and each of the approach trajectories.
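The selection step can be sketched as picking the target whose approach trajectory best matches the trajectory predicted from the actor's motion so far. The similarity metric is unspecified in the abstract; mean pointwise Euclidean distance over equally sampled paths is used here as one simple choice.

```python
import math

def trajectory_distance(a, b):
    """Mean pointwise Euclidean distance between two equally sampled
    trajectories (one illustrative similarity measure)."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def select_target(predicted, approaches):
    """Pick the target whose approach trajectory is most similar to the
    trajectory predicted from the actor's motion up to the present."""
    return min(approaches,
               key=lambda t: trajectory_distance(predicted, approaches[t]))

predicted = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2)]
approaches = {
    "cup":  [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],  # nearly the same heading
    "bowl": [(0.0, 0.0), (0.5, 1.0), (1.0, 2.0)],  # veers away
}
print(select_target(predicted, approaches))  # "cup"
```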

SYSTEM AND METHOD FOR HEIGHT-MAP-BASED GRASP EXECUTION
20210187741 · 2021-06-24

Systems and methods for grasp execution using height maps.
