Patent classifications
G05B2219/39536
Industrial robotics systems and methods for continuous and automated learning
In an aspect of the disclosure, a method, a computer-readable medium, and an apparatus are provided. The apparatus may maintain a first dataset configured to select pick points for objects. The apparatus may receive, from a user device, a user dataset including a user-selected pick point associated with at least one first object and a first image of the at least one first object. The apparatus may generate a second dataset based at least in part on the first dataset and the user dataset. The apparatus may receive a second image of a second object. The apparatus may select a pick point for the second object using the second dataset and the second image of the second object. The apparatus may send information associated with the pick point selected for the second object to a robotics device for picking up the second object.
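A minimal sketch of this flow in Python, with a nearest-neighbour lookup standing in for the trained pick-point selector; the names (PickDataset, add_example, select_pick_point) and the crude quadrant feature are hypothetical, not from the patent:

import numpy as np

class PickDataset:
    """Maps coarse image features to known-good pick points (u, v)."""
    def __init__(self):
        self.features, self.pick_points = [], []

    def add_example(self, image, pick_point):
        self.features.append(self._featurize(image))
        self.pick_points.append(pick_point)

    @staticmethod
    def _featurize(image):
        # Crude global feature: mean intensity of each image quadrant.
        h, w = image.shape[:2]
        return np.array([image[:h//2, :w//2].mean(), image[:h//2, w//2:].mean(),
                         image[h//2:, :w//2].mean(), image[h//2:, w//2:].mean()])

    def select_pick_point(self, image):
        # Return the pick point of the most similar stored example.
        f = self._featurize(image)
        dists = [np.linalg.norm(f - g) for g in self.features]
        return self.pick_points[int(np.argmin(dists))]

# The first dataset plus a user-labelled example yields the "second dataset".
dataset = PickDataset()
dataset.add_example(np.zeros((64, 64)), (10, 20))          # first dataset
dataset.add_example(np.ones((64, 64)), (32, 40))           # user dataset
pick = dataset.select_pick_point(np.full((64, 64), 0.9))   # second image
print("pick point to send to robot:", pick)                # -> (32, 40)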
Robotic grasping prediction using neural networks and geometry aware object representation
Deep machine learning methods and apparatus, some of which are related to determining a grasp outcome prediction for a candidate grasp pose of an end effector of a robot. Some implementations are directed to training and utilization of both a geometry network and a grasp outcome prediction network. The trained geometry network can be utilized to generate, based on two-dimensional or two-and-a-half-dimensional image(s), geometry output(s) that are geometry-aware and that represent (e.g., high-dimensionally) three-dimensional features captured by the image(s). In some implementations, the geometry output(s) include at least an encoding that is generated based on a trained encoding neural network trained to generate encodings that represent three-dimensional features (e.g., shape). The trained grasp outcome prediction network can be utilized to generate, based on applying the geometry output(s) and additional data as input(s) to the network, a grasp outcome prediction for a candidate grasp pose.
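A hedged sketch of the two-network arrangement in PyTorch; the layer sizes and the names GeometryEncoder and GraspOutcomeNet are illustrative assumptions, not the patent's architecture:

import torch
import torch.nn as nn

class GeometryEncoder(nn.Module):
    """Encodes a 2.5D (depth) image into a geometry-aware latent vector."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * 16 * 16, latent_dim))

    def forward(self, depth_image):
        return self.net(depth_image)

class GraspOutcomeNet(nn.Module):
    """Predicts grasp success from the encoding plus a candidate 6-DOF pose."""
    def __init__(self, latent_dim=64, pose_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + pose_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, encoding, candidate_pose):
        return self.net(torch.cat([encoding, candidate_pose], dim=-1))

encoder, predictor = GeometryEncoder(), GraspOutcomeNet()
depth = torch.rand(1, 1, 64, 64)            # 2.5D input image
pose = torch.rand(1, 6)                     # candidate grasp pose
p_success = predictor(encoder(depth), pose) # grasp outcome prediction
print(p_success.item())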
SYSTEM AND METHOD FOR DETERMINING A GRASPING HAND MODEL
Method for determining a grasping hand model suitable for grasping an object by receiving an image including at least one object; obtaining an object model estimating a pose and shape of the object from the image of the object; selecting a grasp class from a set of grasp classes by means of a neural network trained with a cross-entropy loss, thus obtaining a set of parameters defining a coarse grasping hand model; refining the coarse grasping hand model by minimizing loss functions over the parameters of the hand model to obtain an operable grasping hand model, while minimizing the distance between the fingers of the hand model and the surface of the object and preventing interpenetration; and obtaining a mesh of the hand represented by the refined set of parameters.
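The refinement stage can be sketched as gradient descent on the hand parameters. The example below assumes the object is a unit sphere (so its signed distance function is known in closed form) and optimizes fingertip positions directly in place of a full hand model; the two loss terms mirror the description (finger-to-surface distance plus an interpenetration penalty) but are not the patent's exact formulation:

import torch

def sphere_sdf(points, radius=1.0):
    # Signed distance to a sphere at the origin: < 0 means penetration.
    return points.norm(dim=-1) - radius

# Stand-in "coarse" fingertip positions to be refined.
fingertips = torch.tensor([[1.5, 0.2, 0.0],
                           [1.4, -0.3, 0.1],
                           [1.6, 0.0, -0.2]], requires_grad=True)
opt = torch.optim.Adam([fingertips], lr=0.05)

for step in range(200):
    sdf = sphere_sdf(fingertips)
    contact_loss = sdf.clamp(min=0).pow(2).sum()      # pull fingers to surface
    penetration_loss = sdf.clamp(max=0).pow(2).sum()  # keep them outside
    loss = contact_loss + 10.0 * penetration_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print(sphere_sdf(fingertips).detach())  # ~0 for all fingertips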
TASK-ORIENTED GRASPING OF OBJECTS
A computer-implemented method includes obtaining a collection of object models for a plurality of different types of objects belonging to a same object category; generating a canonical representation for objects belonging to the object category; performing a plurality of downstream tasks using a plurality of different robot grasps on instances of objects belonging to the category and evaluating each grasp according to success or failure of the downstream task; and generating one or more category-level grasping areas for the canonical representation for objects belonging to the object category, including aggregating the evaluations of grasps according to the downstream task.
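The aggregation step might look like the following sketch, which assumes grasps have already been mapped onto a canonical mesh, treats a vertex index as a stand-in for a grasping area, and uses an arbitrary 0.5 success threshold:

from collections import defaultdict

# (canonical_vertex_id, downstream_task, success) tuples from grasp trials.
trials = [
    (12, "pour", True), (12, "pour", True), (12, "pour", False),
    (47, "pour", False), (47, "pour", False),
    (12, "handover", True), (47, "handover", True),
]

stats = defaultdict(lambda: [0, 0])   # (task, vertex) -> [successes, total]
for vertex, task, ok in trials:
    stats[(task, vertex)][0] += ok
    stats[(task, vertex)][1] += 1

# Category-level grasping areas: vertices whose per-task success rate
# clears the threshold after aggregating all trials.
for (task, vertex), (wins, total) in sorted(stats.items()):
    rate = wins / total
    label = "grasping area" if rate >= 0.5 else "avoid"
    print(f"task={task:9s} vertex={vertex:3d} success={rate:.2f} -> {label}")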
GRASP LEARNING USING MODULARIZED NEURAL NETWORKS
A method for modularizing high-dimensional neural networks into neural networks of lower input dimensions. The method is suited to generating full-DOF robot grasping actions based on images of parts to be picked. In one example, a first network encodes the grasp's positional dimensions and a second network encodes its rotational dimensions. The first network is trained to predict the position at which grasp quality is maximized over all values of the grasp rotations. The second network is trained to identify the maximum grasp quality while searching only at the position found by the first network. Thus, the two networks collectively identify an optimal grasp while each network's search space is reduced. Many grasp positions and rotations can be evaluated at a search cost equal to the sum of the number of evaluated positions and rotations, rather than their product. Dimensions may be separated in any suitable fashion, including into three neural networks in some applications.
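A toy illustration of the sum-versus-product saving, with a synthetic grasp-quality function in place of the two trained networks; the 1-D position and rotation axes are stand-ins for the full 3-D quantities:

import numpy as np

positions = np.linspace(0.0, 1.0, 50)      # 1-D stand-in for 3-D position
rotations = np.linspace(0.0, np.pi, 50)    # 1-D stand-in for 3-D rotation

def quality(p, r):
    # Synthetic grasp quality peaked at p = 0.3, r = 1.0.
    return np.exp(-((p - 0.3) ** 2) * 20) * np.cos(r - 1.0) ** 2

# "First network" stage: score each position by its best quality over
# rotations. Here the score is brute-forced; the patent trains a network
# to predict it, so each position would cost one network evaluation.
pos_scores = [max(quality(p, r) for r in rotations) for p in positions]
best_p = positions[int(np.argmax(pos_scores))]

# "Second network" stage: search rotations only at the chosen position,
# so the network-evaluation budget is 50 + 50 rather than 50 * 50.
rot_scores = [quality(best_p, r) for r in rotations]
best_r = rotations[int(np.argmax(rot_scores))]

print(f"best grasp: position={best_p:.2f}, rotation={best_r:.2f}")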
Handling device, control device, and holding method
A handling device according to an embodiment has an arm, a holder, a detector, a storage, and a controller. The arm includes at least one joint. The holder is attached to the arm and is configured to hold an object. The detector is configured to detect information about the object. The storage stores a function map including at least one of information about holdable positions of the holder and information about possible postures of the holder. The controller is configured to generate holdable candidate points on the basis of the information detected by the detector, to search the function map for a position in the environment in which the object is present, the position being associated with the generated holdable candidate points, and to determine a holding posture of the holder on the basis of the retrieved position. The function map associates a manipulability with each position in the environment in which the object is present. The manipulability is a parameter calculated from at least one joint angle of the holder.
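A minimal sketch of the function-map lookup, assuming the map is a dictionary from discretized environment positions to precomputed manipulability values; the grid cell size and all values are illustrative:

# Function map: discretized (x, y) position -> precomputed manipulability.
function_map = {
    (0, 0): 0.12, (0, 1): 0.45, (1, 0): 0.80, (1, 1): 0.67,
}

def to_grid(point, cell=0.1):
    # Discretize a metric (x, y) position onto the map's grid.
    return (round(point[0] / cell), round(point[1] / cell))

# Holdable candidate points generated from detector output (stand-in values).
candidates = [(0.02, 0.11), (0.09, 0.01), (0.11, 0.12)]

# Choose the candidate whose map cell has the highest manipulability; the
# holding posture would then be derived from this position.
best = max(candidates, key=lambda p: function_map.get(to_grid(p), 0.0))
print("holding position:", best, "manipulability:",
      function_map[to_grid(best)])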
Generating simulated training examples for training of machine learning model used for robot control
Implementations are directed to generating simulated training examples for training of a machine learning model, training the machine learning model based at least in part on the simulated training examples, and/or using the trained machine learning model in control of at least one real-world physical robot. Implementations are additionally or alternatively directed to performing one or more iterations of quantifying a “reality gap” for a robotic simulator and adapting parameter(s) for the robotic simulator based on the determined reality gap. The robotic simulator with the adapted parameter(s) can further be utilized to generate simulated training examples when the reality gap of one or more iterations satisfies one or more criteria.
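The iterate-and-adapt loop might be sketched as follows, with a one-parameter simulator (a friction coefficient) and a least-mean-squares update standing in for the patent's unspecified gap metric and adaptation rule:

import random

def run_real(action):              # placeholder for a real-robot rollout
    return 1.0 * action + 0.1      # "true" dynamics the simulator must match

def run_sim(action, friction):     # simulator parameterized by friction
    return friction * action

friction, lr = 0.5, 0.2
for i in range(100):
    a = random.uniform(0.5, 1.5)
    gap = run_real(a) - run_sim(a, friction)   # quantified "reality gap"
    friction += lr * gap * a                   # adapt simulator parameter
    if abs(gap) < 1e-3:                        # gap satisfies the criterion:
        break                                  # sim can now generate
                                               # simulated training examples
print(f"adapted friction={friction:.3f} after {i + 1} iterations, gap={gap:.4f}")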
SETTINGS SUPPORT DEVICE, SETTINGS SUPPORT METHOD, AND PROGRAM
A technique enables efficient registration of an accurate gripping position of a robot hand, aided by an auxiliary view rendered on a screen in accordance with the selected hand. A user selects a hand type to be used in gripping a gripping target and designates an auxiliary view to be rendered in accordance with the hand. In response to a two-finger hand being selected (step S11), a plane (step S13), a cylinder (step S14), or a rectangular prism (step S15) is rendered based on the view designated by the user (step S12). In response to a suction hand being selected, a plane is rendered (step S16).
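Steps S11 through S16 amount to a dispatch from the selected hand type (and, for a two-finger hand, the designated view) to a rendering routine. A sketch, with placeholder render functions in place of actual on-screen drawing:

def render_plane(): print("rendering plane")
def render_cylinder(): print("rendering cylinder")
def render_rectangular_prism(): print("rendering rectangular prism")

AUX_VIEWS = {
    ("two_finger", "plane"): render_plane,                           # S13
    ("two_finger", "cylinder"): render_cylinder,                     # S14
    ("two_finger", "rectangular_prism"): render_rectangular_prism,   # S15
    ("suction", None): render_plane,                                 # S16
}

def show_auxiliary_view(hand_type, designated_view=None):   # S11 / S12
    # Suction hands ignore the designated view and always get a plane.
    key = (hand_type, designated_view if hand_type == "two_finger" else None)
    AUX_VIEWS[key]()

show_auxiliary_view("two_finger", "cylinder")   # -> rendering cylinder
show_auxiliary_view("suction")                  # -> rendering plane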
Machine learning methods and apparatus for automated robotic placement of secured object in appropriate location
Training and/or use of a machine learning model for placement of an object secured by an end effector of a robot. A trained machine learning model can be used to process: (1) a current image, captured by a vision component of the robot, that captures the end effector securing an object; (2) a candidate end effector action that defines a candidate motion of the end effector; and (3) a target placement input that indicates a target placement location for the object. Based on the processing, a prediction can be generated that indicates the likelihood of successful placement of the object in the target placement location with application of the motion defined by the candidate end effector action. At each of multiple iterations, the candidate end effector action with the highest probability is selected and control commands are provided to cause the end effector to move in conformance with the corresponding end effector action. When at least one release criterion is satisfied, control commands can be provided to cause the end effector to release the object, thereby leading to the object being placed in the target placement location.
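A hedged sketch of that control loop, with a geometric stand-in for the trained placement model (candidate motions ending nearer the target get a higher predicted success) and an assumed distance-based release criterion:

import numpy as np

np.random.seed(0)                  # reproducible run
target = np.array([0.5, 0.5])      # target placement location

def predicted_success(effector_pos, action):
    # Stand-in for the trained model: closer to target -> higher probability.
    return np.exp(-np.linalg.norm(effector_pos + action - target))

pos = np.array([0.0, 0.0])
for step in range(50):
    # Sample candidate end effector actions and keep the most promising.
    candidates = [np.random.uniform(-0.1, 0.1, size=2) for _ in range(16)]
    best = max(candidates, key=lambda a: predicted_success(pos, a))
    pos = pos + best                           # move per the selected action
    if np.linalg.norm(pos - target) < 0.05:    # release criterion satisfied
        print(f"release object at {pos.round(3)} after {step + 1} steps")
        break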
OBJECT MANIPULATION
A robot for object manipulation may include sensors, a robot appendage, actuators configured to drive joints of the robot appendage, a planner, and a controller. Object path planning may include determining poses. Object trajectory optimization may include assigning a set of timestamps to the poses, optimizing a cost function, which may be a finger-sliding cost based on a penalty for sliding distance, a change in desired normal direction, and a wrench error associated with sliding a robot finger, and generating an object trajectory based on the optimized cost function. Grasp sequence planning may be model-based or deep reinforcement learning (DRL) policy based. The controller may execute the object trajectory and the grasp sequence via the robot appendage and the actuators.
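The finger-sliding cost could be sketched as a weighted sum of the three named terms; the quadratic form and the weights below are assumptions, not the patent's formulation:

import numpy as np

def sliding_cost(slide_dist, normal_change_rad, wrench_error,
                 w_dist=1.0, w_normal=0.5, w_wrench=2.0):
    """Penalty for a finger that slides on the object surface: combines
    sliding distance, change in desired normal direction, and wrench error."""
    return (w_dist * slide_dist ** 2
            + w_normal * normal_change_rad ** 2
            + w_wrench * np.linalg.norm(wrench_error) ** 2)

# Cost of one candidate contact transition along the object trajectory.
cost = sliding_cost(slide_dist=0.02,
                    normal_change_rad=0.1,
                    wrench_error=np.array([0.05, 0.0, 0.01]))
print(f"sliding penalty: {cost:.5f}")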