Patent classifications: G05B2219/39536
Information processing apparatus, control method for information processing apparatus, and recording medium
An information processing apparatus comprises an obtaining unit configured to obtain sensing data on an area including an installation target component; a detection unit configured to detect a position/orientation of the installation target component as a component position/orientation based on the sensing data; a setting unit configured to set, based on the component position/orientation and shape data on the installation target component, a candidate gripped portion of the installation target component to be gripped by a hand mechanism; a calculation unit configured to calculate, as a candidate hand position/orientation, a position/orientation of the hand mechanism in which the candidate gripped portion can be gripped; and a generation unit configured to generate candidate teaching data for a gripping operation by the hand mechanism by associating the candidate gripped portion and the candidate hand position/orientation with each other.
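As a rough illustration of how these units could fit together, here is a minimal Python sketch; detect_pose, list_portions, and solve_hand_pose are hypothetical callables standing in for the detection, setting, and calculation units, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class TeachingCandidate:
    gripped_portion: str  # which portion of the component to grip
    hand_pose: tuple      # (x, y, z, roll, pitch, yaw) of the hand mechanism

def generate_teaching_data(sensing_data, shape_data,
                           detect_pose, list_portions, solve_hand_pose):
    """Pair each grippable portion with a hand pose that can grip it."""
    component_pose = detect_pose(sensing_data)                 # detection unit
    candidates = []
    for portion in list_portions(component_pose, shape_data):  # setting unit
        hand_pose = solve_hand_pose(portion, component_pose)   # calculation unit
        if hand_pose is not None:                              # keep grippable pairs only
            candidates.append(TeachingCandidate(portion, hand_pose))  # generation unit
    return candidates
```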
ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF
An electronic apparatus includes a camera; a gripper configured to grip a grip target object; a memory configured to store a neural network model; and a processor configured to obtain movement information and rotation information of the gripper by inputting at least one image captured by the camera to the neural network model, and to control the gripper based on the movement information and the rotation information. The at least one image includes at least a part of the gripper and at least a part of the grip target object. Based on the at least one image, the neural network model outputs the movement information and the rotation information for positioning the gripper adjacent to the grip target object. The movement information includes one of a first direction movement, a second direction movement, or a movement stop of the gripper, and the rotation information includes one of a first direction rotation, a second direction rotation, or a non-rotation of the gripper.
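A minimal sketch of the resulting control loop, assuming the discrete command sets described in the abstract; fake_model, capture_image, apply_move, and apply_rotation are placeholders, not the patent's network or robot interface.

```python
import random

MOVES = ("forward", "backward", "stop")               # first/second direction, stop
ROTS = ("clockwise", "counterclockwise", "none")      # first/second rotation, none

def fake_model(image):
    # stand-in for the neural network: one movement and one rotation command
    return random.choice(MOVES), random.choice(ROTS)

def servo_gripper(model, capture_image, apply_move, apply_rotation, max_steps=100):
    for _ in range(max_steps):
        move, rot = model(capture_image())            # image shows gripper + target
        if move == "stop" and rot == "none":
            break                                     # gripper is adjacent to target
        if move != "stop":
            apply_move(move)
        if rot != "none":
            apply_rotation(rot)
```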
PARAMETER ADJUSTMENT DEVICE, PARAMETER ADJUSTMENT METHOD, AND PROGRAM
Three-dimensional measurement data is obtained (step S1). The poses of bulk parts are calculated (step S2). The gripping poses of a hand relative to the bulk parts are calculated (step S3). Individual evaluation indices are calculated (step S4). Overall evaluation indices are calculated (step S5). The gripping poses are sorted using the overall evaluation indices (step S6). The calculated gripping poses are displayed on a screen (step S7). The gripping poses are sorted by a user into the user's intended order (step S8). It is then determined whether the difference between the results of the sorting in step S6 and in step S8 is small (step S9). In response to the difference being sufficiently small, the parameters used in the calculation of the overall evaluation indices are stored (step S11). In response to the difference not being sufficiently small, the parameters are updated (step S10) and the processing returns to step S5.
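The abstract fixes the loop structure but not the index formula, the difference measure, or the update rule. The sketch below fills those gaps with assumptions: a weighted sum for the overall index, pairwise ordering disagreements for the difference, and simple hill climbing for the parameter update.

```python
import numpy as np

def disagreements(order_a, order_b):
    """Count grasp pairs ranked in opposite order by the two sortings."""
    pos = {g: i for i, g in enumerate(order_b)}
    count = 0
    for i in range(len(order_a)):
        for j in range(i + 1, len(order_a)):
            if pos[order_a[i]] > pos[order_a[j]]:
                count += 1
    return count

def tune_weights(indices, user_order, tol=1, iters=1000, seed=0):
    """indices: (n_grasps, n_indices) individual evaluation indices.
    user_order: grasp ids 0..n_grasps-1 in the user's intended order (S8)."""
    indices = np.asarray(indices, float)
    rng = np.random.default_rng(seed)
    w = np.ones(indices.shape[1])                    # initial parameters
    best = None
    for _ in range(iters):
        overall = indices @ w                        # step S5: overall indices
        order = list(np.argsort(-overall))           # step S6: sort, best first
        d = disagreements(order, user_order)         # step S9: compare orders
        if best is None or d < best[0]:
            best = (d, w.copy())
        if d <= tol:
            return w                                 # step S11: store parameters
        w = best[1] + rng.normal(scale=0.1, size=w.size)  # step S10: update
    return best[1]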
GRASPING POSITION AND ORIENTATION REGISTRATION DEVICE, GRASPING POSITION AND ORIENTATION REGISTRATION METHOD, AND PROGRAM
The rotation center is calculated (step S11). A gripping pattern is determined (step S12). The rotation axis of a tool coordinate system is set to the X-axis (step S13) in response to the gripping pattern being a fan-shaped pattern, to the Y-axis (step S14) in response to it being a cylinder pattern, and to the Z-axis (step S15) in response to it being a circle pattern. A start angle θ_start is set as the rotation angle θ (step S16). The pose of a tool is calculated (step S17). The current pose is registered (step S18). The angle θ is changed by a predetermined increment Δθ (step S19). Steps S17 to S19 are repeated until the angle θ reaches an end angle θ_end (step S20).
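Concretely, the sweep could look like the following sketch, which assumes 4x4 homogeneous transforms for poses and a rotation of a base tool pose about the chosen axis through the rotation center; those representation choices are illustrative, not from the patent.

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# fan -> X-axis (S13), cylinder -> Y-axis (S14), circle -> Z-axis (S15)
ROT_FOR_PATTERN = {"fan": rot_x, "cylinder": rot_y, "circle": rot_z}

def register_grasp_poses(base_pose, center, pattern,
                         theta_start, theta_end, dtheta):
    """Collect tool poses at each angle from theta_start to theta_end (S16-S20)."""
    rot = ROT_FOR_PATTERN[pattern]
    center = np.asarray(center, float)
    poses = []
    theta = theta_start                               # step S16
    while theta <= theta_end + 1e-12:                 # step S20: stop at end angle
        R = rot(theta)
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = center - R @ center                # rotate about the center (S11)
        poses.append(T @ base_pose)                   # steps S17-S18: compute, register
        theta += dtheta                               # step S19
    return poses
```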
SYSTEMS AND METHODS FOR DISTRIBUTED TRAINING AND MANAGEMENT OF AI-POWERED ROBOTS USING TELEOPERATION VIA VIRTUAL SPACES
In some aspects, a system comprises a computer hardware processor and a non-transitory computer-readable storage medium storing processor-executable instructions for receiving, from one or more sensors, sensor data relating to a robot; generating, using a statistical model, based on the sensor data, first control information for the robot to accomplish a task; transmitting, to the robot, the first control information for execution of the task; and receiving, from the robot, a result of execution of the task.
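Stripped to its data flow, the claimed cycle is: sense, infer, transmit, observe. A hypothetical sketch, with all names as stand-ins:

```python
def control_cycle(receive_sensor_data, model, send_to_robot, receive_result, task):
    sensor_data = receive_sensor_data()      # sensor data relating to the robot
    control_info = model(sensor_data, task)  # statistical model generates control info
    send_to_robot(control_info)              # transmit for execution of the task
    return receive_result()                  # result of execution, back from the robot
```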
Grasping error correction method, grasping error correction apparatus, and grasping error correction program
A grasping error correction method includes a position information acquisition step of acquiring position information of a plurality of areas of a lower component 2 at the time of reproduction, a grasping error value calculation step of calculating a grasping error value based on that position information and the position information of the plurality of areas of the lower component 2 at the time of teaching, and an arm control step of controlling an operation of a multi-axis articulated arm 11a so as to correct the grasping error value. Further, in the grasping error value calculation step, the grasping error value is calculated so that the grasping error in a processing-nearby area, the one of the plurality of areas of the lower component 2 that is closest to the processing area, is preferentially eliminated over the errors in the other areas.
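One way to realize this preferential elimination is to weight each area's error by its inverse distance to the processing area, as in the sketch below; the weighting scheme is an assumption, since the abstract does not give the formula.

```python
import numpy as np

def grasping_error(taught, reproduced, processing_point):
    """Weighted mean displacement over areas; the area closest to the
    processing point dominates, so its error is corrected preferentially."""
    taught = np.asarray(taught, float)         # (n_areas, 3) positions at teaching
    reproduced = np.asarray(reproduced, float) # (n_areas, 3) positions at reproduction
    d = np.linalg.norm(taught - np.asarray(processing_point, float), axis=1)
    w = 1.0 / (d + 1e-9)                       # closest area gets the largest weight
    w /= w.sum()
    errors = reproduced - taught               # per-area grasping error vectors
    return (w[:, None] * errors).sum(axis=0)   # value for the arm to correct
```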
Determining grasping parameters for grasping of an object by a robot grasping end effector
Methods and apparatus related to training and/or utilizing a convolutional neural network to generate grasping parameters for an object. The grasping parameters can be used by a robot control system to position a robot grasping end effector to grasp the object. The trained convolutional neural network provides a direct regression from image data to grasping parameters; for example, it may be trained to enable generation of grasping parameters in a single regression through the network. In some implementations, the grasping parameters may define at least a "reference point" for positioning the grasping end effector for the grasp and an orientation of the grasping end effector for the grasp.
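For illustration, a single-pass regression network could look like the following PyTorch sketch; the architecture and the five-parameter output (reference point, orientation encoded as sine/cosine, gripper width) are assumptions, since the patent does not fix them.

```python
import torch
import torch.nn as nn

class GraspRegressor(nn.Module):
    """Direct regression: image in, grasp parameters out, one forward pass."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # 5 outputs: reference point (x, y), orientation (sin, cos), width
        self.head = nn.Linear(32 * 4 * 4, 5)

    def forward(self, image):
        x = self.features(image)
        return self.head(x.flatten(1))   # single regression to grasp parameters

# usage: params = GraspRegressor()(torch.randn(1, 3, 224, 224))
```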
EFFICIENT ROBOT CONTROL BASED ON INPUTS FROM REMOTE CLIENT DEVICES
Utilization of user interface inputs, from remote client devices, in controlling robot(s) in an environment. Implementations relate to generating training instances based on object manipulation parameters defined by instances of user interface input(s), and training machine learning model(s) to predict those object manipulation parameter(s). Those implementations can subsequently utilize the trained machine learning model(s) to reduce the number of instances in which input(s) from remote client device(s) are solicited in performing a given set of robotic manipulations, and/or to reduce the extent of such input(s). Implementations are additionally or alternatively related to mitigating idle time of robot(s) through the utilization of vision data that captures object(s) to be manipulated by a robot before the object(s) are transported to a robot workspace within which the robot can reach and manipulate them.
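The input-reduction idea can be sketched as a confidence gate: solicit remote input only when the trained model is unsure, and turn every solicited input into a training instance. The names and the confidence threshold below are hypothetical, not from the patent.

```python
training_instances = []  # (vision_data, manipulation parameters) pairs

def manipulation_parameters(vision_data, model, ask_operator, threshold=0.9):
    params, confidence = model(vision_data)
    if confidence >= threshold:
        return params                          # no remote input solicited
    params = ask_operator(vision_data)         # fall back to remote UI input
    training_instances.append((vision_data, params))  # future training instance
    return params
```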
Determining final grasp pose of robot end effector after traversing to pre-grasp pose
Grasping of an object, by an end effector of a robot, based on a final grasp pose of the end effector that is determined after the end effector has been traversed to a pre-grasp pose. An end effector vision component can be utilized to capture instance(s) of end effector vision data after the end effector has been traversed to the pre-grasp pose, and the final grasp pose can be determined based on that vision data. For example, the final grasp pose can be determined by selecting instance(s) of pre-stored visual feature(s) that satisfy similarity condition(s) relative to current visual features of the instance(s) of end effector vision data, and determining the final grasp pose based on pre-stored grasp criteria stored in association with the selected instance(s) of pre-stored visual feature(s).
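A minimal sketch of the selection step, assuming fixed-length feature vectors compared by cosine similarity and stored grasp criteria that map directly to poses; both assumptions go beyond the abstract.

```python
import numpy as np

def final_grasp_pose(current_features, stored_features, stored_poses, threshold=0.8):
    """Pick the grasp pose stored with the most similar pre-stored features."""
    f = np.asarray(current_features, float)
    f = f / np.linalg.norm(f)
    S = np.asarray(stored_features, float)
    sims = S @ f / np.linalg.norm(S, axis=1)      # cosine similarities
    matches = np.flatnonzero(sims >= threshold)   # similarity condition satisfied
    if matches.size == 0:
        return None                               # no match: stay at pre-grasp pose
    best = matches[np.argmax(sims[matches])]
    return stored_poses[best]                     # grasp criteria of best match
```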
REGION-BASED GRASP GENERATION
A region-based robotic grasp generation technique for machine tending or bin picking applications. Part and gripper geometry are provided as inputs, typically from CAD files, along with gripper kinematics. A human user defines one or more target grasp regions on the part, using a graphical user interface displaying the part geometry. The target grasp regions are identified by the user based on the user's knowledge of how the part may be grasped to ensure that the part can be subsequently placed in a proper destination pose. For each of the target grasp regions, an optimization solver is used to compute a plurality of quality grasps with stable surface contact between the part and the gripper, and no part-gripper interference. The computed grasps for each target grasp region are placed in a grasp database which is used by a robot in actual bin picking operations.
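At a high level, the pipeline reduces to the loop below; solve_grasps and in_collision are stand-ins for the optimization solver and the part-gripper interference check, and regions are assumed to be hashable identifiers.

```python
def build_grasp_database(part, gripper, target_regions, solve_grasps,
                         in_collision, grasps_per_region=50):
    """Compute quality grasps per user-defined region; keep collision-free ones."""
    database = {}
    for region in target_regions:    # regions the user marked on the part geometry
        grasps = solve_grasps(part, gripper, region, grasps_per_region)
        database[region] = [g for g in grasps
                            if not in_collision(part, gripper, g)]  # no interference
    return database                  # consumed by the robot during bin picking
```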