Patent classifications
G05B2219/39395
Robot device controller for controlling position of robot
A first characteristic portion of a first workpiece and a second characteristic portion of a second workpiece are determined in advance. A characteristic amount detection unit detects a first characteristic amount related to the position of the first characteristic portion and a second characteristic amount related to the position of the second characteristic portion in an image captured by a camera. A calculation unit calculates, as a relative position amount, the difference between the first characteristic amount and the second characteristic amount. A command generation unit generates a movement command for operating a robot based on the relative position amount in the image captured by the camera and the relative position amount in a predetermined reference image.
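The controller above reduces to a simple image-space loop: compute the difference of the two feature positions in the current image, compare it against the same difference in the reference image, and turn the mismatch into a movement command. A minimal sketch, assuming pixel-coordinate features and a hypothetical proportional gain (the patent does not specify the control law):

```python
import numpy as np

def relative_position_amount(p_first, p_second):
    """Difference between the two characteristic-portion positions (pixels)."""
    return np.asarray(p_second, dtype=float) - np.asarray(p_first, dtype=float)

def movement_command(rel_current, rel_reference, gain=0.5):
    """Proportional image-space correction toward the reference relative amount.
    The proportional law and gain are illustrative assumptions."""
    error = np.asarray(rel_reference, dtype=float) - np.asarray(rel_current, dtype=float)
    return gain * error

# Features detected in the current image vs. the predetermined reference image
rel_cur = relative_position_amount((120.0, 80.0), (200.0, 90.0))  # [80, 10]
rel_ref = relative_position_amount((100.0, 80.0), (200.0, 80.0))  # [100, 0]
cmd = movement_command(rel_cur, rel_ref)  # [10, -5]
```

Because both quantities are *relative* amounts, the command is insensitive to a common translation of the two workpieces in the image, which is the point of the scheme.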
Action prediction networks for robotic grasping
Deep machine learning methods and apparatus related to the manipulation of an object by an end effector of a robot are described herein. Some implementations relate to training an action prediction network to predict, given an input image, a probability density over candidate actions that lead to successful grasps by the end effector. Some implementations are directed to utilizing an action prediction network to visually servo a grasping end effector of a robot to achieve a successful grasp of an object.
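The core idea is a network that maps an image to a distribution over candidate grasp actions, from which the servoing loop picks the most promising one. A minimal sketch with a stand-in scorer in place of the trained network (the scoring heuristic, candidate count, and softmax normalization are all illustrative assumptions, not the patent's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def score_candidates(image_feat, candidates):
    """Stand-in for the trained network: unnormalized score per candidate action.
    Here we simply prefer actions near a feature point extracted from the image."""
    return -np.linalg.norm(candidates - image_feat, axis=1)

def grasp_probabilities(image_feat, candidates):
    """Softmax over candidate actions approximates the predicted density."""
    s = score_candidates(image_feat, candidates)
    e = np.exp(s - s.max())
    return e / e.sum()

candidates = rng.uniform(0, 1, size=(16, 2))          # candidate (x, y) grasp points
probs = grasp_probabilities(np.array([0.5, 0.5]), candidates)
best = candidates[np.argmax(probs)]                   # action to servo toward
```

In a real visual-servoing loop this prediction would be re-run on each new camera frame, so the chosen action is continually refined as the end effector approaches the object.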
Information processing apparatus, control method, robot system, and storage medium
The information processing apparatus includes an estimation unit to estimate information indicating a holding success possibility from an image of a plurality of target objects by using a pre-trained model that estimates the information indicating the holding success possibility in one or more partial regions, a determination unit to determine a holding region for the robot to hold the target object among the partial regions based on the information indicating the holding success possibility, and a control unit to move the robot based on the holding region and cause the robot to perform a holding operation on the target object. In a case where the holding operation performed on the target object by the robot fails, the determination unit determines a partial region, among the partial regions, satisfying a predetermined condition as the next holding region based on the information indicating the holding success possibility.
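The retry logic in the determination unit can be sketched as a selection over scored partial regions: pick the highest-scoring region, and on failure fall back to the best remaining region that still satisfies the predetermined condition. The threshold form of that condition is an illustrative assumption; the patent leaves it abstract:

```python
def choose_holding_region(regions, failed_ids=frozenset(), threshold=0.3):
    """regions: list of (region_id, success_possibility) from the pre-trained model.
    Returns the untried region with the highest success possibility that
    satisfies the predetermined condition (assumed here: possibility >= threshold),
    or None when no viable region remains."""
    viable = [(rid, p) for rid, p in regions
              if rid not in failed_ids and p >= threshold]
    if not viable:
        return None
    return max(viable, key=lambda rp: rp[1])[0]

regions = [("A", 0.9), ("B", 0.6), ("C", 0.2)]
first = choose_holding_region(regions)              # "A"
after_fail = choose_holding_region(regions, {"A"})  # "B"
```

Excluding already-failed regions while keeping the condition ensures the robot neither repeats a failed grasp nor wastes an attempt on a region the model considers hopeless.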
Enhanced system and method for control of robotic devices
A point cloud system has two separate sets of points, each set captured from a different point of view, so the combined data may contain occluded points. An accelerated close-sister-point approach determines which occluded points can be removed: looking out from an assumed non-occluded point, finding the closest point in the other set, then looking back into the first set (or jumping to the closest non-occluded point and looking back); if this second sister is close to the initial point, the pair are close sisters.
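The look-out/look-back test amounts to a mutual-nearest-neighbour round trip between the two point sets: if the trip returns near where it started, the point has a counterpart in the other view and is kept as non-occluded. A brute-force sketch of that round trip (the tolerance value and the nearest-neighbour search strategy are illustrative assumptions; a production system would use a spatial index):

```python
import numpy as np

def close_sisters(set_a, set_b, tol=0.05):
    """For each point in set_a: look out to its nearest neighbour in set_b,
    then look back to that neighbour's nearest point in set_a. If the round
    trip returns within `tol` of the start, the pair are close sisters and
    the point is treated as non-occluded."""
    a = np.asarray(set_a, dtype=float)
    b = np.asarray(set_b, dtype=float)
    keep = np.zeros(len(a), dtype=bool)
    for i, p in enumerate(a):
        j = np.argmin(np.linalg.norm(b - p, axis=1))      # look out into set_b
        k = np.argmin(np.linalg.norm(a - b[j], axis=1))   # look back into set_a
        keep[i] = np.linalg.norm(a[k] - p) <= tol
    return keep

set_a = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
set_b = np.array([[0.01, 0.0], [1.02, 0.0]])
mask = close_sisters(set_a, set_b)
# mask[2] is False: the (5, 5) point has no counterpart in set_b
```

Points whose round trip lands elsewhere (here the isolated `(5, 5)` point) are visible in only one view, which is the signature the method uses to flag occlusion.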