Patent classifications
G05B2219/40309
DATA GENERATION METHOD AND APPARATUS, AND STORAGE MEDIUM
The present disclosure discloses a data generation method and apparatus, and a computer-readable storage medium, the method including: importing a robot model into a game engine; simulating a Red-Green-Blue-Depth (RGBD) camera with a scene capture component of the game engine; using a joint control module in the game engine to move a human hand of the imported robot model within the field of view of the RGBD camera; acquiring RGBD image data with the RGBD camera; and generating an annotated data set with coordinates of 21 key points from the RGBD image data and the 3D pose coordinate information of the 21 key points.
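A minimal sketch of the final annotation step described above, not the patent's implementation: given the 21 simulated 3D hand key points in the camera frame and the virtual camera's pinhole intrinsics, project each point into the image to build one annotation record per captured RGBD frame. All names and parameter values (fx, fy, cx, cy, the file paths) are illustrative assumptions.

```python
# Illustrative sketch: pair a simulated RGBD frame with projected annotations
# for its 21 hand key points. Intrinsics and record layout are assumptions.

def project_keypoints(keypoints_3d, fx, fy, cx, cy):
    """Project camera-frame 3D points (x, y, z in meters) to pixel (u, v),
    keeping the original 3D coordinates alongside the 2D projection."""
    annotations = []
    for x, y, z in keypoints_3d:
        u = fx * x / z + cx  # standard pinhole projection
        v = fy * y / z + cy
        annotations.append({"uv": (u, v), "xyz": (x, y, z)})
    return annotations

def make_record(frame_id, rgb_path, depth_path, keypoints_3d, intrinsics):
    """Bundle one captured RGBD frame with its 21 annotated key points."""
    return {
        "frame": frame_id,
        "rgb": rgb_path,
        "depth": depth_path,
        "keypoints": project_keypoints(keypoints_3d, *intrinsics),
    }
```

Because the key points come from the simulator's own joint transforms, the 3D ground truth is exact; no manual labeling is needed.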
METHOD FOR DETERMINING A GRASPING HAND MODEL
Method for determining a grasping hand model suitable for grasping an object by: obtaining a first RGB image including at least one object; obtaining an object model estimating the pose and shape of said object from the first image; selecting a grasp taxonomy from a set of grasp taxonomies by means of a Convolutional Neural Network trained with a cross-entropy loss, thus obtaining a set of parameters defining a coarse grasping hand model; refining the coarse grasping hand model by minimizing loss functions over the hand model parameters to obtain an operable grasping hand model, while minimizing the distance between the fingers of the hand model and the surface of the object and preventing interpenetration; and obtaining a mesh of the hand represented by the refined set of parameters.
DUAL ARM ROBOT TEACHING FROM DUAL HAND HUMAN DEMONSTRATION
A method for dual-arm robot teaching from dual-hand detection in human demonstration. A camera image of the demonstrator's hands and workpieces is provided to a first neural network, which identifies the left and right hands in the image and provides cropped sub-images of the identified hands. The cropped sub-images are provided to a second neural network, which detects the poses of both the left and right hands from the images. The dual-hand pose data for an entire operation is converted to robot gripper pose data and used to teach two robot arms to perform the operation on the workpieces, with each hand's motion assigned to one robot arm. Edge detection from camera images may be used to refine robot motions to improve part localization for tasks requiring precision, such as inserting a part into an aperture.
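The hand-to-gripper conversion and per-arm assignment can be sketched as follows. This is an assumed mapping, not the patent's exact conversion: the gripper position follows the palm center, the gripper opening is taken from the thumb-to-index fingertip distance (clamped to a hypothetical maximum), and each hand's trajectory is routed to its own robot arm.

```python
# Illustrative conversion of detected hand poses to gripper commands.
# Pose keys ('palm', 'thumb_tip', 'index_tip') and max_opening are assumptions.
import math

def hand_to_gripper(hand_pose, max_opening=0.085):
    """Map one detected hand pose to a gripper position and opening width."""
    opening = min(
        math.dist(hand_pose["thumb_tip"], hand_pose["index_tip"]),
        max_opening,  # clamp to the gripper's physical range
    )
    return {"position": hand_pose["palm"], "opening": opening}

def assign_to_arms(left_hand_poses, right_hand_poses):
    """Each hand's motion sequence teaches one robot arm."""
    return {
        "arm_1": [hand_to_gripper(p) for p in left_hand_poses],
        "arm_2": [hand_to_gripper(p) for p in right_hand_poses],
    }
```

A full teaching pipeline would also carry the hand orientation into the gripper pose; the sketch keeps only position and opening for brevity.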
SYSTEMS AND METHODS FOR COLLISION-FREE TRAJECTORY PLANNING IN HUMAN-ROBOT INTERACTION THROUGH HAND MOVEMENT PREDICTION FROM VISION
Various embodiments of systems and methods for collision-free trajectory planning in human-robot interaction through hand movement prediction from vision are disclosed.