Patent classifications
G05B2219/39289
Trajectory planning with droppable objects
Example implementations may relate to methods and systems for determining a safe trajectory for movement of an object by a robotic system. According to these various implementations, the robotic system may determine at least first and second candidate trajectories for moving the object. For at least a first point along the first candidate trajectory, the robotic system may determine a predicted cost of dropping the object at that point, and for at least a second point along the second candidate trajectory, it may determine a predicted cost of dropping the object at that point. Based on these determined predicted costs, the robotic system may select between the first and second candidate trajectories and may then move the object along the selected trajectory.
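A minimal sketch of the selection step described in this abstract: each candidate trajectory is scored by the predicted cost of dropping the object at sampled points along it, and the lower-cost trajectory is chosen for execution. The cost model, hazard location, and sample points are illustrative assumptions, not details from the patent.

```python
def predicted_drop_cost(point):
    """Toy cost model (assumed): cost grows with drop height and with
    proximity to a hazard on the floor."""
    x, y, z = point
    hazard = (2.0, 2.0)  # assumed hazard location
    dist_to_hazard = ((x - hazard[0]) ** 2 + (y - hazard[1]) ** 2) ** 0.5
    return z + max(0.0, 3.0 - dist_to_hazard)

def trajectory_cost(trajectory):
    """Aggregate predicted drop cost over sampled points of a trajectory."""
    return sum(predicted_drop_cost(p) for p in trajectory) / len(trajectory)

def select_trajectory(candidates):
    """Pick the candidate trajectory whose predicted drop cost is lowest."""
    return min(candidates, key=trajectory_cost)

low = [(0.0, 0.0, 0.5), (1.0, 0.0, 0.5)]   # low path away from the hazard
high = [(2.0, 2.0, 2.0), (2.0, 2.5, 2.0)]  # high path over the hazard
assert select_trajectory([low, high]) == low
```

In a real system the cost would come from a learned or physics-based model of where and how the object could land; the structure of the comparison, however, is as simple as the `min` over candidates shown here.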
MACHINE LEARNING METHODS AND APPARATUS RELATED TO PREDICTING MOTION(S) OF OBJECT(S) IN A ROBOT'S ENVIRONMENT BASED ON IMAGE(S) CAPTURING THE OBJECT(S) AND BASED ON PARAMETER(S) FOR FUTURE ROBOT MOVEMENT IN THE ENVIRONMENT
Some implementations of this specification are directed generally to deep machine learning methods and apparatus related to predicting motion(s) (if any) that will occur to object(s) in an environment of a robot in response to particular movement of the robot in the environment. Some implementations are directed to training a deep neural network model to predict at least one transformation (if any), of an image of a robot's environment, that will occur as a result of implementing at least a portion of a particular movement of the robot in the environment. The trained deep neural network model may predict the transformation based on input that includes the image and a group of robot movement parameters that define the portion of the particular movement.
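The prediction target described above can be illustrated with a stand-in model: given an image and robot movement parameters, the model outputs a transformation (here a simple pixel shift) that is applied to the image to predict the next frame. The linear "model" and camera scale below are assumptions for illustration, not the trained deep network of the patent.

```python
def predict_transformation(movement_params):
    """Stand-in for the trained model: map a robot displacement in meters
    to an image-space pixel shift (dx, dy)."""
    dx_m, dy_m = movement_params
    pixels_per_meter = 10  # assumed camera scale
    return (round(dx_m * pixels_per_meter), round(dy_m * pixels_per_meter))

def apply_shift(image, shift):
    """Apply the predicted shift to a 2-D list image, zero-filling edges."""
    dx, dy = shift
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            rr, cc = r + dy, c + dx
            if 0 <= rr < h and 0 <= cc < w:
                out[rr][cc] = image[r][c]
    return out

image = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # single bright pixel
shift = predict_transformation((0.1, 0.0))  # robot moves 0.1 m along x
assert shift == (1, 0)
assert apply_shift(image, shift)[1][2] == 9
```

The deep model in the abstract learns this mapping from data and predicts far richer transformations, but the interface is the same: image plus movement parameters in, image transformation out.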
ROBOT NAVIGATION USING A HIGH-LEVEL POLICY MODEL AND A TRAINED LOW-LEVEL POLICY MODEL
Training and/or using both a high-level policy model and a low-level policy model for mobile robot navigation. High-level output generated using the high-level policy model at each iteration indicates a corresponding high-level action for robot movement in navigating to the navigation target. The low-level output generated at each iteration is based on the determined corresponding high-level action for that iteration, and is based on observation(s) for that iteration. The low-level policy model is trained to generate low-level output that defines low-level action(s) that define robot movement more granularly than the high-level action—and to generate low-level action(s) that avoid obstacles and/or that are efficient (e.g., in distance and/or time).
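A hedged sketch of the two-level structure described above: a high-level policy picks a coarse action each iteration, and a low-level policy turns that action plus the current observation into more granular motor commands, scaling down near obstacles. The action sets, policies, and thresholds are placeholders, not the trained models.

```python
def high_level_policy(observation, target):
    """Toy high-level policy (assumed): turn toward the target bearing,
    otherwise go forward."""
    bearing = target - observation["heading"]
    if abs(bearing) < 0.1:
        return "go_forward"
    return "turn_left" if bearing > 0 else "turn_right"

def low_level_policy(high_action, observation):
    """Toy low-level policy (assumed): map the coarse action to left/right
    wheel velocities, slowing down when an obstacle is near."""
    scale = 0.2 if observation["obstacle_dist"] < 0.5 else 1.0
    commands = {
        "go_forward": (1.0, 1.0),
        "turn_left": (-0.5, 0.5),
        "turn_right": (0.5, -0.5),
    }
    left, right = commands[high_action]
    return (left * scale, right * scale)

obs = {"heading": 0.0, "obstacle_dist": 2.0}
action = high_level_policy(obs, target=0.05)
assert action == "go_forward"
assert low_level_policy(action, obs) == (1.0, 1.0)
```

In the patent both levels are learned models; the point of the sketch is only the interface: the low-level output depends on both the high-level action and the iteration's observation.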
APPARATUS AND METHODS FOR OPERATING ROBOTIC DEVICES USING SELECTIVE STATE SPACE TRAINING
Apparatus and methods for training and controlling, e.g., robotic devices. In one implementation, a robot may be utilized to perform a target task characterized by a target trajectory. The robot may be trained by a user using supervised learning. The user may interface with the robot, such as via a control apparatus configured to provide a teaching signal to the robot. The robot may comprise an adaptive controller comprising a neuron network, which may be configured to generate actuator control commands based on the user input and output of the learning process. During one or more learning trials, the controller may be trained to navigate a portion of the target trajectory. Individual trajectory portions may be trained during separate training trials. Some portions may be associated with the robot executing complex actions and may require additional training trials and/or denser training input compared to simpler trajectory actions.
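The selective-training idea above can be sketched as a planning step: the target trajectory is split into portions, and portions involving complex actions receive more training trials and denser teaching input. The portion names, complexity labels, and trial counts below are illustrative assumptions.

```python
def plan_training(portions, base_trials=2, complex_trials=6):
    """Assign a trial count and teaching-input density to each trajectory
    portion based on whether it involves complex actions (assumed scheme)."""
    plan = []
    for name, is_complex in portions:
        plan.append({
            "portion": name,
            "trials": complex_trials if is_complex else base_trials,
            "input_density": "dense" if is_complex else "sparse",
        })
    return plan

portions = [("approach", False), ("grasp", True), ("retract", False)]
plan = plan_training(portions)
assert plan[1]["trials"] == 6 and plan[1]["input_density"] == "dense"
assert plan[0]["trials"] == 2
```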
Gripping system with machine learning
A gripping system includes a hand that grips a workpiece, a robot that supports the hand and changes at least one of a position and a posture of the hand, and an image sensor that acquires image information from a viewpoint interlocked with at least one of the position and the posture of the hand. Additionally, the gripping system includes a construction module that constructs a model by machine learning based on collection data. The model corresponds to at least a part of a process of specifying an operation command of the robot based on the image information acquired by the image sensor and hand position information representing at least one of the position and the posture of the hand. An operation module generates the operation command of the robot based on the image information, the hand position information, and the model, and a robot control module operates the robot based on the operation command generated by the operation module.
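An illustrative sketch of the data flow described above: image information and hand position information feed a learned model that proposes a gripper command, which the operation module passes on for execution. The feature encoding and "model" here are stand-ins, not the machine-learned model of the patent.

```python
def extract_features(image, hand_pose):
    """Toy stand-in for the learned model's input encoding: mean image
    brightness plus the hand's height (assumed features)."""
    brightness = sum(sum(row) for row in image) / (len(image) * len(image[0]))
    return (brightness, hand_pose["z"])

def grip_model(features):
    """Toy stand-in model: close the gripper when the workpiece is visible
    (bright image) and the hand is low enough; otherwise descend."""
    brightness, z = features
    return "close_gripper" if brightness > 0.5 and z < 0.1 else "move_down"

def operation_module(image, hand_pose):
    """Operation module: produce the robot's operation command from the
    image information, hand position information, and model."""
    return grip_model(extract_features(image, hand_pose))

image = [[1, 1], [1, 1]]
assert operation_module(image, {"z": 0.05}) == "close_gripper"
assert operation_module(image, {"z": 0.5}) == "move_down"
```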
Predictive robotic controller apparatus and methods
Robotic devices may be trained by a user guiding the robot along a target action trajectory using an input signal. A robotic device may comprise an adaptive controller configured to generate a control signal based on one or more of the user guidance, sensory input, performance measure, and/or other information. Training may comprise a plurality of trials, wherein for a given context the user and the robot's controller may collaborate to develop an association between the context and the target action. Upon developing the association, the adaptive controller may be capable of generating the control signal and/or an action indication prior to and/or in lieu of user input. The predictive control functionality attained by the controller may enable autonomous operation of robotic devices, obviating the need for continued user guidance.
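The context-to-action association described above can be sketched as a simple counting scheme: over repeated trials the controller records which action the user issued in each context, and once an association is strong enough it predicts the action without waiting for user input. The class, threshold, and context names are assumptions for illustration.

```python
from collections import Counter, defaultdict

class AdaptiveController:
    """Toy controller that learns context -> action associations over
    training trials (an assumed stand-in for the adaptive controller)."""

    def __init__(self, confidence_threshold=3):
        self.associations = defaultdict(Counter)
        self.threshold = confidence_threshold

    def observe_trial(self, context, user_action):
        """Record one training trial pairing a context with a user action."""
        self.associations[context][user_action] += 1

    def predict(self, context):
        """Return the associated action once it has been seen often enough;
        otherwise return None, meaning user guidance is still required."""
        counts = self.associations[context]
        if not counts:
            return None
        action, n = counts.most_common(1)[0]
        return action if n >= self.threshold else None

ctrl = AdaptiveController()
for _ in range(3):
    ctrl.observe_trial("obstacle_ahead", "turn_left")
assert ctrl.predict("obstacle_ahead") == "turn_left"
assert ctrl.predict("clear_path") is None
```

Once `predict` returns an action, the controller can act prior to, or in lieu of, user input, which is the predictive behavior the abstract describes.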
Configuring a system which interacts with an environment
A system is described for configuring another system, e.g., a robotics system. The other system interacts with an environment according to a deterministic policy by repeatedly obtaining, from a sensor, sensor data indicative of a state of the environment, determining a current action, and providing, to an actuator, actuator data causing the actuator to effect the current action in the environment. To configure the other system, the system optimizes a loss function based on an accumulated reward distribution with respect to a set of parameters of the policy. The accumulated reward distribution includes an action probability of an action of a previous interaction log being performed according to the current set of parameters. The action probability is approximated using a probability distribution defined by an action selected by the deterministic policy according to the current set of parameters.
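A minimal sketch, under stated assumptions, of the optimization described above: a loss over a previous interaction log is weighted by an approximated probability that each logged action would be performed under the current deterministic policy. Here the approximation is a Gaussian kernel centered on the action the policy selects, the policy is a one-parameter linear rule, and the optimizer is a crude grid search; all three are illustrative choices, not the patent's.

```python
import math

def policy(state, theta):
    """Assumed deterministic linear policy: action = theta * state."""
    return theta * state

def action_probability(logged_action, state, theta, bandwidth=0.5):
    """Approximate the probability of the logged action with a Gaussian
    kernel centered on the action selected by the current policy."""
    diff = logged_action - policy(state, theta)
    return math.exp(-0.5 * (diff / bandwidth) ** 2)

def loss(log, theta):
    """Negative importance-weighted accumulated reward over the log of
    (state, action, reward) tuples."""
    return -sum(action_probability(a, s, theta) * r for s, a, r in log)

# Interaction log from a previous policy: (state, action, reward) tuples.
log = [(1.0, 2.0, 1.0), (2.0, 4.0, 1.0), (1.0, -1.0, 0.0)]

# Grid search over theta: theta = 2 reproduces the high-reward logged
# actions, so it minimizes the loss.
best_theta = min((t / 10 for t in range(-30, 31)), key=lambda t: loss(log, t))
assert abs(best_theta - 2.0) < 1e-9
```

A practical system would optimize the loss with gradients over many policy parameters; the sketch only shows how the accumulated reward is reweighted by the action probability under the current parameters.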
Mitigating reality gap through simulating compliant control and/or compliant contact in robotic simulator
Mitigating the reality gap through utilization of technique(s) that enable compliant robotic control and/or compliant robotic contact to be simulated effectively by a robotic simulator. The technique(s) can include, for example: (1) utilizing a compliant end effector model in simulated episodes of the robotic simulator; (2) using, during the simulated episodes, a soft constraint for a contact constraint of a simulated contact model of the robotic simulator; and/or (3) using proportional derivative (PD) control in generating joint control forces, for simulated joints of the simulated robot, during the simulated episodes. Implementations additionally or alternatively relate to determining parameter(s) for use in one or more of the techniques that enable effective simulation of compliant robotic control and/or compliant robotic contact.
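The proportional derivative (PD) joint control mentioned in technique (3) can be sketched as follows: the simulated joint force is proportional to the position error plus a damping term on the velocity error, integrated over a toy unit-mass joint. The gains, mass, and integration scheme are assumptions for illustration, not parameters from the simulator.

```python
def pd_force(target_pos, pos, target_vel, vel, kp=50.0, kd=5.0):
    """PD control law: force = kp * (position error) + kd * (velocity error).
    Gains kp and kd are assumed values."""
    return kp * (target_pos - pos) + kd * (target_vel - vel)

def simulate_joint(target_pos, steps=200, dt=0.01, mass=1.0):
    """Integrate a unit-mass joint under PD control toward target_pos using
    semi-implicit Euler steps (a toy stand-in for the simulator)."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        force = pd_force(target_pos, pos, 0.0, vel)
        vel += (force / mass) * dt
        pos += vel * dt
    return pos

final = simulate_joint(1.0)
assert abs(final - 1.0) < 0.05  # joint settles near the target
```

Because the PD gains set how stiffly the joint tracks its target, tuning them is one way such a simulator can trade tracking accuracy against compliance, which is the kind of parameter determination the abstract mentions.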