Patent classifications
G05B2219/39271
MACHINE LEARNING CONTROL OF OBJECT HANDOVERS
A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
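The grasp-selection step described above can be sketched as a clearance filter over candidate grasp poses, with the trained deep network reduced to a stub scoring function. This is a minimal illustration only: the candidate representation (3D grasp centers), the clearance threshold, and all names are assumptions, not the patent's actual method.

```python
import math

def select_grasp(candidates, hand_points, clearance=0.03, score_fn=None):
    """Return the highest-scoring candidate grasp center that keeps a
    clearance margin (in meters) from every point on the human hand."""
    if score_fn is None:
        score_fn = lambda c: 0.0  # stand-in for the trained deep network
    safe = [
        c for c in candidates
        if all(math.dist(c, h) >= clearance for h in hand_points)
    ]
    if not safe:
        return None  # no grasp avoids the hand; ask for a regrasp instead
    return max(safe, key=score_fn)

# Two hand points near the origin; only the distant candidate survives.
hand = [(0.0, 0.0, 0.0), (0.01, 0.0, 0.0)]
cands = [(0.005, 0.0, 0.0), (0.10, 0.0, 0.0)]
print(select_grasp(cands, hand))
```

In a real system the point cloud, not hand-labeled points, would feed the network, and the network itself would propose full 6-DoF gripper poses rather than scoring centers.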
Method and apparatus for manipulating a tool to control in-grasp sliding of an object held by the tool
A tool control system may include: a tactile sensor configured to, when a tool holds a target object and slides the target object downward across the tool, obtain tactile sensing data from the tool; one or more memories configured to store a target velocity and computer-readable instructions; and one or more processors configured to execute the computer-readable instructions to: receive the tactile sensing data from the tactile sensor; estimate a velocity of the target object based on the tactile sensing data, by using one or more neural networks that are trained based on a training image of a sample object captured while the sample object is sliding down; and generate a control parameter of the tool based on the estimated velocity and the target velocity.
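The final step above, generating a control parameter from the estimated and target velocities, can be sketched as a simple proportional law: tighten the grip when the object slides faster than desired. The gains, the base force, and the function name are illustrative assumptions; the abstract does not specify the control law.

```python
def grip_force_command(v_estimated, v_target, base_force=1.0, gain=0.5):
    """Compute a grip-force control parameter from the velocity error.

    v_estimated would come from the trained neural networks operating on
    tactile sensing data; v_target is the stored target velocity.
    """
    error = v_estimated - v_target          # positive: sliding too fast
    return max(0.0, base_force + gain * error)  # never command negative force

# Object sliding at 0.08 m/s against a 0.02 m/s target: force increases.
print(grip_force_command(0.08, 0.02))
```

A real controller would likely add integral action or a learned mapping, but the structure, velocity error in, actuation parameter out, is the same.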
METHOD AND SYSTEM FOR OPTIMIZING REINFORCEMENT-LEARNING-BASED AUTONOMOUS DRIVING ACCORDING TO USER PREFERENCES
A method for optimizing autonomous driving includes applying different autonomous driving parameters to a plurality of robot agents in a simulation, either automatically by the system or directly by a manager, so that the robot agents learn autonomous driving; and optimizing the autonomous driving parameters by using preference data for the autonomous driving parameters.
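The preference-based optimization step can be illustrated with a toy tally over pairwise preference data: each parameter set is scored by how often users preferred it, and the most preferred set is kept. The parameter names and preference data below are invented for illustration; the patent's actual optimization may be far more sophisticated (e.g., fitting a preference model rather than counting wins).

```python
from collections import Counter

def most_preferred(preferences):
    """preferences: list of (winner_id, loser_id) pairs from user comparisons."""
    wins = Counter(winner for winner, _ in preferences)
    return wins.most_common(1)[0][0]

# Hypothetical driving-parameter sets applied to simulated robot agents.
params = {
    "cautious": {"max_speed": 20, "follow_gap": 2.0},
    "sporty": {"max_speed": 40, "follow_gap": 0.8},
}
prefs = [("cautious", "sporty"), ("cautious", "sporty"), ("sporty", "cautious")]
print(most_preferred(prefs), params[most_preferred(prefs)])
```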
ROBOT NAVIGATION USING A HIGH-LEVEL POLICY MODEL AND A TRAINED LOW-LEVEL POLICY MODEL
Training and/or using both a high-level policy model and a low-level policy model for mobile robot navigation. High-level output generated using the high-level policy model at each iteration indicates a corresponding high-level action for robot movement in navigating to the navigation target. The low-level output generated at each iteration is based on the determined corresponding high-level action for that iteration, and on observation(s) for that iteration. The low-level policy model is trained to generate low-level output defining low-level action(s) that specify robot movement more granularly than the high-level action, and that avoid obstacles and/or are efficient (e.g., in distance and/or time).
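The two-level structure, a high-level action chosen per iteration and then refined into granular motion by a low-level policy conditioned on both the action and the observation, can be sketched with hand-written stubs in place of the trained models. The discrete action set, observation format, and wheel-velocity outputs are all assumptions for illustration.

```python
HIGH_LEVEL_ACTIONS = ("forward", "turn_left", "turn_right")

def high_level_policy(observation):
    """Stub for the trained high-level policy model."""
    return "turn_left" if observation.get("obstacle_ahead") else "forward"

def low_level_policy(action, observation):
    """Stub for the trained low-level policy: emits (left, right) wheel
    velocities, more granular than the discrete high-level action."""
    speed = 0.2 if observation.get("obstacle_ahead") else 1.0  # slow near obstacles
    if action == "forward":
        return (speed, speed)
    if action == "turn_left":
        return (-speed, speed)
    return (speed, -speed)

obs = {"obstacle_ahead": True}
action = high_level_policy(obs)
print(action, low_level_policy(action, obs))  # turn_left (-0.2, 0.2)
```

The point of the hierarchy is that the low-level model can learn obstacle avoidance and efficiency once, while the high-level model handles navigation toward the target.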
Toolpath adjustments based on 3-dimensional scan data of physically manufactured parts
A computing system may include a data access engine and a toolpath adjustment engine. The data access engine may be configured to access a computer-aided design (CAD) model of a part design and a computer-aided manufacturing (CAM) setup for the part design. The CAM setup may include a nominal toolpath specified through the CAD model for performing a finishing operation for the part design. The data access engine may also be configured to obtain 3-dimensional (3D) scan data for a physical part manufactured from the part design. The toolpath adjustment engine may be configured to extract, from the 3D scan data, a manufactured geometry of the physical part manufactured from the part design and generate an adjusted toolpath for the physical part to account for the manufactured geometry extracted from the 3D scan data.
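The adjustment idea, offset the nominal toolpath by the deviation between the CAD geometry and the scanned part, can be sketched in one function. The deviation lookup below is an assumed stand-in for the real scan-fitting step, and the z-only offset is a simplification; an actual toolpath adjustment engine would offset along the surface normal.

```python
def adjust_toolpath(nominal_path, deviation_at):
    """Shift each (x, y, z) toolpath point along z by the deviation
    measured between the CAD surface and the 3D-scanned part there."""
    return [(x, y, z + deviation_at(x, y)) for (x, y, z) in nominal_path]

# Hypothetical case: the physical part came out 0.1 mm thick everywhere,
# so the finishing pass must ride 0.1 mm higher than the nominal path.
adjusted = adjust_toolpath([(0, 0, 5.0), (1, 0, 5.0)], lambda x, y: 0.1)
print(adjusted)
```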
ROBOTIC SYSTEMS USING LEARNING TO PROVIDE REAL-TIME VIBRATION-SUPPRESSING CONTROL
A robot control method, and associated robot controllers and robots operating with such methods and controllers, providing real-time vibration suppression. The control method involves learning to support real-time, vibration-suppressing control, using state-of-the-art machine learning techniques in conjunction with a differentiable dynamics simulator to yield fast and accurate vibration suppression. Offline simulation approaches, which can be computationally expensive, may be used to create training data for the controller, which may be provided by a variety of neural network configurations. In other cases, sensory feedback from sensors onboard the robot being controlled can be used to provide training data to account for wear of the robot's components.
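The offline stage, using a dynamics model to search for low-vibration commands and recording the results as training data, can be illustrated with a toy undamped oscillator. The two-impulse command parameterization (classic input shaping) is an assumed example standing in for whatever command structure the patent's optimizer uses; for two equal impulses spaced by a delay T, the residual vibration is proportional to |1 + e^(iωT)|/2, which vanishes at half the vibration period.

```python
import math

def residual_amplitude(omega, delay):
    """Residual oscillation after two equal impulses separated by `delay`,
    for an undamped mode of natural frequency `omega` (rad/s)."""
    return abs(complex(1.0)
               + complex(math.cos(omega * delay), math.sin(omega * delay))) / 2.0

def best_delay(omega, candidates=None):
    """Grid-search the impulse spacing that minimizes residual vibration.

    (omega, best_delay) pairs like this could form training data for a
    controller network covering a range of plant parameters.
    """
    if candidates is None:
        candidates = [i * 0.001 for i in range(1, 1000)]  # up to 0.999 s
    return min(candidates, key=lambda d: residual_amplitude(omega, d))

omega = 2.0 * math.pi  # a 1 Hz vibration mode
print(round(best_delay(omega), 3))  # 0.5: half the 1 s vibration period
```

In the patent's setting the differentiable simulator would replace both the analytic residual formula and the grid search with gradient-based optimization.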
NEURAL NETWORK ADAPTIVE TRACKING CONTROL METHOD FOR JOINT ROBOTS
The present disclosure discloses a neural network adaptive tracking control method for joint robots, which proposes two schemes, robust adaptive control and neural adaptive control, comprising the following steps: 1) establishing a joint robot system model; 2) establishing a state space expression and an error definition, taking into consideration both drive failure and actuator saturation of the joint robot system; 3) designing a PID controller and updating algorithms for the joint robot system; and 4) using the designed PID controller and updating algorithms to realize control of the trajectory motion of the joint robot. The present disclosure may solve the following technical problems simultaneously: drive saturation and coupling effects in the joint system, handling of parameter uncertainty and non-parametric uncertainty, actuator failure during system operation, compensation for non-vanishing disturbances, and the like.
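The structure of step 3) and 4), a PID controller whose gains are adjusted online by an updating algorithm, can be sketched on a toy first-order plant. The adaptation law here (proportional gain grown with the squared tracking error) is an illustrative stand-in for the disclosure's neural/robust adaptive schemes, and all numeric values are assumptions.

```python
def simulate(steps=2000, dt=0.01, target=1.0):
    """Track a constant target on the toy plant x' = -x + u with an
    adaptive PID controller; returns the final plant state."""
    x = 0.0
    kp, ki, kd = 2.0, 1.0, 0.0   # initial gains (assumed values)
    integral, prev_error = 0.0, 0.0
    for _ in range(steps):
        error = target - x
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        kp += 0.1 * error * error * dt   # simple adaptive gain update
        x += dt * (-x + u)               # Euler step of the plant
        prev_error = error
    return x

print(simulate())  # settles close to the target of 1.0
```

The integral term drives the steady-state error to zero; the adaptive update stands in for the neural network weight updates that, in the disclosure, compensate uncertainty and disturbances.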
Method to optimize robot motion planning using deep learning
Methods and systems are provided for high-speed constrained motion planning. In one embodiment, a method includes computing, with a neural network trained on trajectories generated by a non-convex optimizer, a trajectory from one or more initial states of an autonomous system to one or more final states of the autonomous system, updating, with the non-convex optimizer, the trajectory according to kinematic limits and dynamic limits of the autonomous system to obtain a final trajectory, and automatically controlling the autonomous system from an initial state of the one or more initial states to a final state of the one or more final states according to the final trajectory. In this way, efficient and smooth trajectories can be rapidly computed for effective real-time control while accounting for obstacles and physical constraints of an autonomous system.
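The two-stage pattern, a fast network proposal refined by an optimizer that enforces the system's limits, can be sketched in one dimension. The straight-line "network" and the waypoint-insertion "refinement" below are crude stand-ins chosen so the example stays self-contained; the real non-convex optimizer handles full kinematic and dynamic constraints and obstacles.

```python
def coarse_trajectory(start, goal, n=5):
    """Stand-in for the trained network: straight-line interpolation."""
    return [start + (goal - start) * i / (n - 1) for i in range(n)]

def refine(traj, max_step=0.3):
    """Insert intermediate waypoints so no step exceeds the limit,
    a crude stand-in for retiming under kinematic/dynamic limits."""
    out = [traj[0]]
    for point in traj[1:]:
        while abs(point - out[-1]) > max_step:
            direction = 1.0 if point > out[-1] else -1.0
            out.append(out[-1] + direction * max_step)
        out.append(point)
    return out

# Coarse plan takes 0.5-unit steps; refinement caps every step at 0.3
# while still reaching the final state.
traj = refine(coarse_trajectory(0.0, 2.0))
print(traj)
```

The division of labor is the point: the network makes the warm start cheap, and the optimizer only has to polish a near-feasible trajectory rather than solve from scratch.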
Action prediction networks for robotic grasping
Deep machine learning methods and apparatus related to the manipulation of an object by an end effector of a robot are described herein. Some implementations relate to training an action prediction network to predict, given an input image, a probability density over candidate actions that lead to successful grasps by the end effector. Some implementations are directed to utilization of an action prediction network to visually servo a grasping end effector of a robot to achieve a successful grasp of an object by the grasping end effector.
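The visual-servoing loop can be sketched as: at each step, sample candidate motion commands, score them with the action prediction network, and execute the best one. Here the network is replaced by a stub that favors actions moving a 1-D "gripper" toward a target; the sampling scheme, step counts, and score function are all assumptions.

```python
import random

def predicted_success(action, gripper, target=0.8):
    """Stub for the action prediction network: actions that move the
    gripper toward the target score higher."""
    return -abs((gripper + action) - target)

def servo(gripper=0.0, steps=20, seed=0):
    """Iteratively pick and execute the best of 32 sampled commands."""
    rng = random.Random(seed)
    for _ in range(steps):
        candidates = [rng.uniform(-0.1, 0.1) for _ in range(32)]
        best = max(candidates, key=lambda a: predicted_success(a, gripper))
        gripper += best
    return gripper

print(servo())  # converges close to the target at 0.8
```

Because the network is re-queried every step with a fresh observation, the loop corrects for execution errors, which is what distinguishes servoing from open-loop grasp planning.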
TOOL SELECTION METHOD, DEVICE, AND TOOL PATH GENERATION METHOD
This tool selection method comprises: a step of calculating, for each of a plurality of known workpieces, feature amounts based on the shapes of a plurality of machining surfaces, where each of the known workpieces has been allocated one main tool, preselected from a tool list that includes a plurality of tools, as suitable for machining the plurality of machining surfaces; a step of executing machine learning on the plurality of known workpieces, taking the feature amounts as inputs and the main tools as outputs; a step of calculating a feature amount for a target workpiece; and a step of selecting a main tool for the target workpiece from the tool list on the basis of the machine learning result obtained by using the feature amount of the target workpiece as an input.
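The selection step can be illustrated with a nearest-neighbor rule in feature space standing in for the machine-learning result. The toy feature (fraction of curved machining surfaces), the tool names, and their feature-space prototypes are all invented for illustration.

```python
# Hypothetical tool list with a feature-space prototype per tool,
# playing the role of the learned feature-amount -> main-tool mapping.
TOOL_LIST = {"ball_end_6mm": (0.9, 0.2), "flat_end_10mm": (0.1, 0.8)}

def feature_amount(workpiece_surfaces):
    """Toy feature: (fraction curved, fraction flat) over the surfaces."""
    n = len(workpiece_surfaces)
    curved = sum(s["curved"] for s in workpiece_surfaces) / n
    return (curved, 1.0 - curved)

def select_main_tool(workpiece_surfaces):
    """Pick the tool whose prototype is nearest the workpiece's features."""
    f = feature_amount(workpiece_surfaces)
    return min(
        TOOL_LIST,
        key=lambda tool: sum((a - b) ** 2 for a, b in zip(TOOL_LIST[tool], f)),
    )

# A mostly curved target workpiece maps to the ball-end tool.
surfaces = [{"curved": 1}, {"curved": 1}, {"curved": 0}]
print(select_main_tool(surfaces))
```

In the method as claimed, the mapping is learned from the known workpieces and their preselected main tools rather than hand-specified prototypes.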