Patent classifications
G05B2219/39542
Image processing device, image processing method, and computer program
A workpiece model is created from three-dimensionally measured data obtained by performing a three-dimensional measurement of a working space in which workpieces are loaded in bulk. Holding data, including a holding position on the workpiece model and a posture of a holding unit of a robot, is set by setting a model coordinate system. The setting of the model coordinate system and of the holding data is repeated while the posture of the workpiece is changed sequentially, and the resulting pieces of holding data are stored in correspondence with the workpiece models. A three-dimensional search process is then performed on newly obtained three-dimensionally measured data, and the operation of the holding unit of the robot is controlled based on the holding data set for the workpiece model.
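The storage step described above — multiple pieces of holding data registered against a workpiece model, one per workpiece posture — might be sketched as follows. This is an illustrative data structure only; the class and field names (`HoldingData`, `WorkpieceModel`, `add_holding`) are assumptions, not the patent's terminology.

```python
from dataclasses import dataclass, field

@dataclass
class HoldingData:
    position: tuple  # holding position in the model coordinate system
    posture: tuple   # posture of the robot's holding unit (e.g. Euler angles, deg)

@dataclass
class WorkpieceModel:
    name: str
    holdings: list = field(default_factory=list)

    def add_holding(self, position, posture):
        # Called once per workpiece posture: after the model coordinate
        # system is set, the corresponding holding data is registered.
        self.holdings.append(HoldingData(position, posture))

model = WorkpieceModel("bolt")
model.add_holding((0.0, 0.0, 0.01), (0.0, 180.0, 0.0))  # top-down grasp
model.add_holding((0.0, 0.01, 0.0), (90.0, 0.0, 0.0))   # side grasp
print(len(model.holdings))  # two pieces of holding data stored for this model
```

After a three-dimensional search matches a model in the scene, any of the stored holding entries could be transformed into the scene frame to drive the holding unit.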
RUNTIME ASSESSMENT OF SUCTION GRASP FEASIBILITY
An autonomous system can detect out-of-distribution (OOD) data in robotic grasping systems by evaluating the image inputs of those systems. The system then makes decisions based on the detected OOD data so as to avoid inefficient or hazardous situations and other negative consequences (e.g., damage to products). For example, the system can determine whether a suction-based gripper is optimal for grasping objects in a given scene, based at least in part on determining whether an image defines OOD data.
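A minimal sketch of this kind of OOD gate, not the patented method: score an image embedding by its normalized deviation from the training distribution and only approve the suction grasp when the score is in-distribution. The scoring function and threshold are assumptions for illustration.

```python
def ood_score(embedding, train_mean, train_std):
    # Mean normalized deviation from the training distribution, per dimension.
    return sum(abs(e - m) / s
               for e, m, s in zip(embedding, train_mean, train_std)) / len(embedding)

def suction_feasible(embedding, train_mean, train_std, threshold=3.0):
    # If the image looks out-of-distribution, decline the suction grasp
    # rather than risk an inefficient or hazardous attempt.
    return ood_score(embedding, train_mean, train_std) < threshold

train_mean, train_std = [0.0, 0.0], [1.0, 1.0]
print(suction_feasible([0.5, -0.4], train_mean, train_std))  # in-distribution -> True
print(suction_feasible([9.0, -8.0], train_mean, train_std))  # far outside -> False
```

In a real system the embedding would come from the grasping network's feature extractor, and a `False` result would trigger a fallback such as a different gripper or a human hand-off.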
SYSTEM AND METHOD FOR OPTIMIZING BODY AND OBJECT INTERACTIONS
Systems and methods for optimizing body and object interactions are provided. Friction force maps can be determined from contact pressure maps and coefficient of friction (COF) maps obtained at a contact interface where at least a portion of a body is in physical contact with a surface of an object; these friction force maps can then be used to optimize body and object interactions.
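The map combination can be illustrated cell by cell with a Coulomb friction model, where each cell's available friction force is the local COF times the local normal pressure times the cell area. The numbers and the per-cell formulation are assumptions for the sketch, not values from the abstract.

```python
def friction_force_map(pressure_map, cof_map, cell_area):
    # Per-cell Coulomb limit: f = mu * p * A, combining the pressure map
    # and the COF map over the contact interface.
    return [[mu * p * cell_area
             for p, mu in zip(p_row, mu_row)]
            for p_row, mu_row in zip(pressure_map, cof_map)]

pressure = [[1000.0, 2000.0],   # contact pressure map (Pa)
            [1500.0,    0.0]]
cof      = [[0.6, 0.6],         # COF map at the same interface cells
            [0.4, 0.4]]
forces = friction_force_map(pressure, cof, cell_area=1e-4)  # 1 cm^2 cells
print(forces[0][0])  # 0.6 * 1000 Pa * 1e-4 m^2, i.e. about 0.06 N
```

Summing such a map over the interface gives the total tangential force the contact can resist, which is the quantity an optimizer would act on.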
Information processing device and information processing method
An information processor calculates, for a robot hand including a plurality of fingers, a gripping pose at which the robot hand grips a target object. The information processor includes a candidate single-finger placement position detector that detects candidate placement positions for each of the fingers, based on three-dimensional measurement data obtained through three-dimensional measurement of the target object and hand shape data describing the shape of the robot hand; a multi-finger combination searcher that searches, among the candidate placement positions for each finger, for a combination of candidate placement positions that allows gripping of the target object; and a gripping pose calculator that calculates, based on that combination, the gripping pose at which the robot hand grips the target object.
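The two-stage structure — per-finger candidates, then a combinatorial search — might be sketched as below for a two-finger hand. The gripping constraint used here (finger spacing within the hand's opening range) is a stand-in assumption; the patent's actual feasibility test would involve the measured object geometry and the hand shape data.

```python
from itertools import product

def search_combinations(candidates_per_finger, min_open, max_open):
    # Search the Cartesian product of per-finger candidate placement
    # positions for combinations that the hand can physically realize.
    feasible = []
    for combo in product(*candidates_per_finger):
        spacing = abs(combo[0][0] - combo[1][0])  # simplistic 1-D spacing check
        if min_open <= spacing <= max_open:
            feasible.append(combo)
    return feasible

finger_a = [(0.00, 0.0), (0.01, 0.0)]   # candidate (x, y) positions, metres
finger_b = [(0.05, 0.0), (0.20, 0.0)]
combos = search_combinations([finger_a, finger_b], min_open=0.02, max_open=0.08)
print(len(combos))  # combinations whose spacing fits the 2-8 cm opening range
```

Each surviving combination would then be handed to the gripping pose calculator to derive a full hand pose.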
Method for the localization of gripping points of objects
The invention relates to a method for the localization of gripping points of objects. The objects are scanned by means of a 3D sensor and are illuminated by means of at least one first illumination unit while they are detected by means of a camera; the relative positions of the 3D sensor, the first illumination unit, and the camera with respect to one another are known, and the three are arranged in fixed positions relative to one another. The boundary of the objects is determined from a two-dimensional image generated by the camera, a spatial position is determined from the distance information detected by the 3D sensor together with the two-dimensional image, and the gripping points for the objects are determined from the boundaries and the spatial position of the objects.
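The final step — combining a 2-D boundary with the 3D sensor's distance information to place a gripping point — can be sketched very simply: lift the boundary centroid to 3-D using the measured depth. The centroid choice and the pixel-to-metre scaling are illustrative assumptions, not the method's actual geometry.

```python
def gripping_point(boundary_px, depth_m, px_to_m):
    # Centroid of the object's 2-D boundary, in pixel coordinates.
    cx = sum(x for x, y in boundary_px) / len(boundary_px)
    cy = sum(y for x, y in boundary_px) / len(boundary_px)
    # Spatial position: scale the image coordinates to metres and attach
    # the depth measured by the 3D sensor.
    return (cx * px_to_m, cy * px_to_m, depth_m)

boundary = [(100, 50), (140, 50), (140, 90), (100, 90)]  # rectangular outline
print(gripping_point(boundary, depth_m=0.42, px_to_m=0.001))
```

Because the relative poses of camera and 3D sensor are known and fixed, this fusion needs no per-frame calibration.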
Method and computing system for performing motion planning based on image information generated by a camera
A system and method for motion planning is presented. When an object is or has been in a camera's field of view, the system receives first image information generated while the camera has a first camera pose. Based on the first image information, the system determines a first estimate of the object's structure, and identifies an object corner based on that estimate or on the first image information. The system then causes an end effector apparatus to move the camera to a second camera pose and receives second image information representing the object's structure. It determines a second estimate of the object's structure based on the second image information and generates a motion plan based on at least the second estimate.
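A simplified, assumed sketch of the two-view refinement loop: estimate the object's structure from the first view (here, a crude bounding box), pick a corner to drive the second camera pose, then refine the estimate once the second view adds points. None of these helper names come from the patent.

```python
def estimate_structure(points):
    # Axis-aligned bounding box as a stand-in "object structure" estimate.
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def pick_corner(box):
    # Identify an object corner from the first estimate: here, the top
    # corner nearest the origin, used to aim the second camera pose.
    (x0, y0, z0), (x1, y1, z1) = box
    return (x0, y0, z1)

view1 = [(0.0, 0.0, 0.0), (0.3, 0.2, 0.1)]    # first image information
box1 = estimate_structure(view1)               # first estimate
corner = pick_corner(box1)
view2 = view1 + [(0.05, 0.25, 0.12)]           # second pose reveals more points
box2 = estimate_structure(view2)               # refined second estimate
print(box2[1])  # the motion plan would be generated from this estimate
```

The point of the second pose is that the refined estimate, not the first guess, feeds the motion planner.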
Systems and methods for planning a robot grasp that can withstand task disturbances
In one embodiment, a system and method for planning a robot grasp involve measuring interaction forces imposed on an object by an environment while a task is demonstrated using the object to obtain a disturbance distribution dataset, modeling a task requirement based upon the disturbance distribution dataset, identifying robot grasp types that can be used to satisfy the task requirement, calculating a grasp wrench space for each identified robot grasp, and calculating a grasp quality of each grasp.
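The last step — scoring each grasp against the demonstrated disturbances — can be illustrated with a 1-D simplification of the grasp wrench space idea: quality is the fraction of measured disturbances a grasp's wrench capacity covers. The capacities, disturbance values, and this scalar reduction are all assumptions for the sketch.

```python
def grasp_quality(max_wrench, disturbances):
    # Fraction of the demonstrated task disturbances that fall inside the
    # grasp's wrench capacity (a scalar stand-in for the wrench space).
    covered = sum(1 for d in disturbances if abs(d) <= max_wrench)
    return covered / len(disturbances)

# Interaction forces measured while the task was demonstrated (N).
disturbances = [0.5, 1.2, -0.8, 2.5, -0.3]

for grasp, capacity in [("pinch", 1.0), ("power", 3.0)]:
    print(grasp, grasp_quality(capacity, disturbances))
```

In the full method the comparison happens in six-dimensional wrench space (forces and torques), but the ranking logic is the same: prefer the grasp type whose wrench space contains more of the disturbance distribution.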