
ROBOT HAND CONTROLLER, ROBOT SYSTEM, AND ROBOT HAND CONTROL METHOD
20210023701 · 2021-01-28

A robot hand controller includes an air supply unit configured to supply air into fingers of a robot hand and to discharge the air from the fingers, and a controller configured to control the air supply unit. The air supply unit includes two or more air passages respectively connected to different fingers, the air passages being capable of supplying air into the fingers and discharging air from the fingers independently of each other. The controller controls supply and discharge of the air through each of the two or more air passages in accordance with a shape of a workpiece and with an object in a vicinity of a transport destination of the workpiece.
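The per-finger pneumatic scheme described in this abstract can be sketched as follows. All class and method names here are illustrative assumptions, not the patent's implementation: each finger gets its own air passage that can supply or discharge independently, and the controller drives each passage toward a pressure profile chosen for the workpiece and its surroundings.

```python
# Hypothetical sketch: independent supply/discharge per finger.

class AirPassage:
    """One air passage connected to a single finger."""
    def __init__(self):
        self.pressure = 0.0  # relative internal pressure of the finger

    def supply(self, amount):
        self.pressure += amount

    def discharge(self, amount):
        self.pressure = max(0.0, self.pressure - amount)


class RobotHandController:
    """Controls two or more passages independently, using a pressure
    profile chosen for the workpiece shape and nearby obstacles."""
    def __init__(self, num_fingers):
        self.passages = [AirPassage() for _ in range(num_fingers)]

    def grip(self, target_pressures):
        # Supply or discharge each finger independently toward its target.
        for passage, target in zip(self.passages, target_pressures):
            delta = target - passage.pressure
            if delta > 0:
                passage.supply(delta)
            else:
                passage.discharge(-delta)
        return [p.pressure for p in self.passages]


controller = RobotHandController(num_fingers=3)
# e.g. a thin workpiece near an obstacle: inflate two fingers fully,
# keep the third slack so it does not press toward the obstacle.
pressures = controller.grip([1.0, 1.0, 0.2])
```

The point of the independent passages is visible in the last call: two fingers are commanded to full pressure while the third is held slack, something a single shared air line could not do.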

Control device and machine learning device
10864630 · 2020-12-15

A control device and a machine learning device enable control for gripping an object that produces only a small reaction force. The machine learning device included in the control device includes: a state observation unit that observes gripping-object shape data, related to a shape of the gripping object, as a state variable representing a current state of an environment; a label data acquisition unit that acquires gripping width data, which represents a width of the hand of the robot when gripping the object, as label data; and a learning unit that performs learning by using the state variable and the label data so as to associate the gripping-object shape data with the gripping width data.
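The learning scheme described here pairs an observed state variable (shape data) with label data (measured grip width) and learns the association. A minimal sketch, assuming a 1-nearest-neighbour table as a stand-in for the learning unit; the two-value feature layout and all names are assumptions:

```python
# Toy stand-in for the described state-observation / label / learning units.

class GripWidthLearner:
    def __init__(self):
        self.examples = []  # list of (shape_features, grip_width)

    def observe(self, shape_features, grip_width):
        # State observation + label acquisition, stored as one example.
        self.examples.append((tuple(shape_features), grip_width))

    def predict(self, shape_features):
        # Return the label of the closest stored shape (squared distance).
        def dist(example):
            return sum((a - b) ** 2
                       for a, b in zip(example[0], shape_features))
        return min(self.examples, key=dist)[1]


learner = GripWidthLearner()
learner.observe([30.0, 10.0], grip_width=28.0)  # e.g. (height, diameter), mm
learner.observe([60.0, 40.0], grip_width=55.0)
width = learner.predict([32.0, 12.0])  # closest to the first example
```

A real implementation would presumably use a trained regression model rather than a lookup table, but the data flow (shape in, grip width out) is the same.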

ROBOTIC DEVICE AND GRIPPING METHOD
20200368902 · 2020-11-26

A robotic device includes an end effector device, a first sensor, and a controller. The end effector device includes two fingers for gripping a workpiece. The first sensor detects a pressure distribution on a gripping position on the workpiece by the two fingers. The controller performs, based on a temporal variation in the pressure distribution when the workpiece is lifted, posture control including rotation of the end effector device.
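The posture control described here reacts to how the pressure distribution changes over time during the lift. A minimal sketch, assuming the two fingers each report a scalar pressure sum and that a simple differential heuristic stands in for the real controller; the function name, gain, and sign convention are assumptions:

```python
# Toy posture correction from the temporal change in finger pressures.

def posture_correction(pressure_before, pressure_after, gain=1.0):
    """pressure_* are (left, right) pressure sums on the two fingers.
    Returns a rotation command in arbitrary units: positive rotates
    the end effector toward the left finger, negative toward the right."""
    d_left = pressure_after[0] - pressure_before[0]
    d_right = pressure_after[1] - pressure_before[1]
    # If the left side is losing contact faster, tilt left to re-seat it.
    return gain * (d_right - d_left)


# Pressure shifts from balanced (5, 5) to right-heavy (3, 7) during lift,
# suggesting the workpiece is tilting; rotate toward the left finger.
command = posture_correction((5.0, 5.0), (3.0, 7.0))
```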

DEEP MACHINE LEARNING METHODS AND APPARATUS FOR ROBOTIC GRASPING
20190283245 · 2019-09-19

Deep machine learning methods and apparatus related to manipulation of an object by an end effector of a robot. Some implementations relate to training a deep neural network to predict a measure that candidate motion data for an end effector of a robot will result in a successful grasp of one or more objects by the end effector. Some implementations are directed to utilization of the trained deep neural network to servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector. For example, the trained deep neural network may be utilized in the iterative updating of motion control commands for one or more actuators of a robot that control the pose of a grasping end effector of the robot, and to determine when to generate grasping control commands to effectuate an attempted grasp by the grasping end effector.
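The servoing idea in this abstract can be sketched as a loop: sample candidate end-effector motions, score each with a network that predicts grasp success, apply the best motion, and issue the grasp command once no candidate improves on holding still. In this toy version a simple scoring function stands in for the trained deep network, and the pose is one-dimensional; all names and the sampling scheme are assumptions:

```python
# Toy servo loop driven by a (stubbed) grasp-success predictor.
import random


def predict_success(pose, motion):
    # Stub for the trained network: here, closer to the origin is "better".
    new_pose = pose + motion
    return 1.0 / (1.0 + abs(new_pose))


def servo_to_grasp(pose, steps=20, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        # Sample candidate motion commands and score each one.
        candidates = [rng.uniform(-0.5, 0.5) for _ in range(16)]
        best = max(candidates, key=lambda m: predict_success(pose, m))
        if predict_success(pose, best) <= predict_success(pose, 0.0):
            break  # no candidate beats holding still: trigger the grasp
        pose += best
    return pose


final_pose = servo_to_grasp(pose=2.0)  # servos toward the stub's optimum
```

The break condition mirrors the abstract's "when to generate grasping control commands": the grasp is attempted once further motion no longer increases the predicted success.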

METHOD AND COMPUTING SYSTEM FOR PERFORMING OBJECT DETECTION OR ROBOT INTERACTION PLANNING BASED ON IMAGE INFORMATION GENERATED BY A CAMERA
20240165820 · 2024-05-23

A method and computing system for object detection are presented. The computing system is configured to: receive first image information representing at least a first portion of an object structure of an object in a camera's field of view, wherein the first image information is associated with a first camera pose; generate or update, based on the first image information, sensed structure information representing the object structure; identify an object corner associated with the object structure; cause a robot arm to move the camera to a second camera pose pointed at the object corner; receive second image information associated with the second camera pose; update the sensed structure information based on the second image information; determine, based on the updated sensed structure information, an object type associated with the object; and determine one or more robot interaction locations based on the object type.
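The two-view pipeline this abstract describes can be sketched end to end: build sensed structure information from the first view, pick a salient object corner, take a second view aimed at that corner, refine the structure, then classify and choose an interaction location. The geometry is 2-D points and the type rule is a toy footprint-area threshold; every function and rule here is an assumption standing in for the real system:

```python
# Toy two-view object-detection and interaction-planning pipeline.

def identify_corner(points):
    # Choose the point furthest from the centroid as the salient corner.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return max(points, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)


def detect_and_plan(first_view, second_view_fn, type_rules):
    sensed = set(first_view)                # sensed structure information
    corner = identify_corner(list(sensed))  # object corner from first view
    sensed |= set(second_view_fn(corner))   # refine with second camera pose
    # Classify by footprint area of the sensed structure (toy rule).
    xs = [p[0] for p in sensed]
    ys = [p[1] for p in sensed]
    area = (max(xs) - min(xs)) * (max(ys) - min(ys))
    obj_type = next(t for t, max_area in type_rules if area <= max_area)
    # Interaction location: top-centre of the sensed bounding box.
    grasp = ((max(xs) + min(xs)) / 2, max(ys))
    return obj_type, grasp


first = [(0, 0), (4, 0), (0, 3)]          # partial structure, first view
second = lambda corner: [corner, (4, 3)]  # second view fills the far corner
obj_type, grasp = detect_and_plan(
    first, second, [("small_box", 6.0), ("large_box", 100.0)])
```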

CONTROL DEVICE AND MACHINE LEARNING DEVICE
20190152055 · 2019-05-23

A control device and a machine learning device enable control for gripping an object that produces only a small reaction force. The machine learning device included in the control device includes: a state observation unit that observes gripping-object shape data, related to a shape of the gripping object, as a state variable representing a current state of an environment; a label data acquisition unit that acquires gripping width data, which represents a width of the hand of the robot when gripping the object, as label data; and a learning unit that performs learning by using the state variable and the label data so as to associate the gripping-object shape data with the gripping width data.

Method and computing system for performing motion planning based on image information generated by a camera

A system and method for motion planning are presented. The system is configured, when an object is or has been in a field of view of a camera, to receive first image information that is generated while the camera has a first camera pose. The system is further configured to determine, based on the first image information, a first estimate of the object's structure, and to identify, based on the first estimate or on the first image information, an object corner. The system is further configured to cause an end effector apparatus to move the camera to a second camera pose, and to receive second image information representing the object's structure. The system then determines a second estimate of the object's structure based on the second image information and generates a motion plan based on at least the second estimate.
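The final step described here, generating a motion plan from the refined structure estimate, can be sketched as follows. The refined estimate is reduced to an axis-aligned bounding box, and the plan is a short sequence of end-effector waypoints; the clearance value, waypoint layout, and function name are all assumptions:

```python
# Toy motion plan from a refined bounding-box estimate of the object.

def plan_motion(box_min, box_max, clearance=0.10):
    """box_min/box_max: (x, y, z) corners of the refined object estimate.
    Returns waypoints: approach above the object, then descend to its top."""
    cx = (box_min[0] + box_max[0]) / 2
    cy = (box_min[1] + box_max[1]) / 2
    top = box_max[2]
    approach = (cx, cy, top + clearance)  # hover above with clearance
    contact = (cx, cy, top)               # descend straight down
    return [approach, contact]


# Refined estimate: a 2 x 4 x 1 box at the origin, 0.5 units of clearance.
plan = plan_motion((0.0, 0.0, 0.0), (2.0, 4.0, 1.0), clearance=0.5)
```

A real planner would also check the path for collisions; this sketch only shows how the structure estimate shapes the waypoints.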

Stable grasp point selection for robotic grippers with machine vision and ultrasound beam forming

Technologies are generally described for grasp point selection for robotic grippers through machine vision and ultrasound-beam-based deformation measurement. The grasp point selection aims to increase the probability that the grasp points on an object behave in a substantially similar way when a robotic gripper executes a corresponding grasp on the object. According to some examples, an outline of an object may be extracted from a three-dimensional (3D) image of the object, and a set of points may be selected from the outline as candidate grasp points based on their potential to achieve force closure. One or more ultrasound transducers may be used to exert a local force on the candidate grasp points through an ultrasound beam, and the resulting local deformations may be recorded. Final grasp points may be selected based on having similar responses to the force applied by the ultrasound transducers.
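The selection rule at the end of this abstract can be sketched as a pairwise comparison: among candidate grasp points (already filtered for force closure), keep the pair whose recorded deformations are most similar, so both contact points behave alike during the real grasp. The deformation values here are supplied data standing in for the ultrasound-beam probing, and all names are assumptions:

```python
# Toy grasp-pair selection by similarity of deformation response.

def select_grasp_pair(candidates, deformation):
    """candidates: list of point ids; deformation: id -> recorded local
    deformation under the probing force. Returns the pair of points with
    the smallest difference in deformation response."""
    best_pair, best_diff = None, float("inf")
    for i, a in enumerate(candidates):
        for b in candidates[i + 1:]:
            diff = abs(deformation[a] - deformation[b])
            if diff < best_diff:
                best_pair, best_diff = (a, b), diff
    return best_pair


# Recorded deformations (arbitrary units) at four candidate points:
pair = select_grasp_pair(["p1", "p2", "p3", "p4"],
                         {"p1": 0.30, "p2": 0.90, "p3": 0.32, "p4": 0.70})
```

Here "p1" and "p3" deform almost identically under the probe, so they are chosen even though other pairs might offer wider separation on the outline.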