
SYSTEM AND METHODS FOR ROBOTIC PROCESS AUTOMATION

There is disclosed a method of training an RPA robot to use a GUI. The method comprises capturing video of the GUI as an operator uses the GUI to carry out a process; capturing a sequence of events triggered as the operator uses the GUI to carry out said process; and analyzing said video and said sequence of events to thereby generate a workflow. The workflow, when executed by an RPA robot, causes the RPA robot to carry out said process using the GUI.
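The capture-analyze-replay loop described above can be sketched as follows. This is a minimal illustration, not the patented method: the names `GuiEvent`, `build_workflow`, and `execute` are invented here, and the "analysis" step is reduced to de-duplicating consecutive identical events, where a real system would correlate the event stream with the captured video frames.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GuiEvent:
    """One operator action captured alongside the screen video."""
    action: str      # e.g. "click", "type"
    target: str      # UI element identified from the video frames
    payload: str = ""

def build_workflow(events: list[GuiEvent]) -> list[GuiEvent]:
    """Analyze the captured event stream into a replayable workflow.
    Here the 'analysis' only drops consecutive duplicate events; a
    real system would align events with the video to name targets."""
    workflow: list[GuiEvent] = []
    for ev in events:
        if not workflow or workflow[-1] != ev:
            workflow.append(ev)
    return workflow

def execute(workflow: list[GuiEvent], driver: Callable[[GuiEvent], None]) -> None:
    """The RPA robot replays each workflow step through a GUI driver."""
    for step in workflow:
        driver(step)
```

A driver here is any callable that performs one step against the GUI; in a test it can simply be `log.append`.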

Apparatus and methods for control of robot actions based on corrective user inputs

Robots have the capacity to perform a broad range of useful tasks, such as factory automation, cleaning, delivery, assistive care, environmental monitoring and entertainment. Enabling a robot to perform a new task in a new environment typically requires a large amount of new software to be written, often by a team of experts. It would be valuable if future technology could empower people, who may have limited or no understanding of software coding, to train robots to perform custom tasks. Some implementations of the present invention provide methods and systems that respond to users' corrective commands to generate and refine a policy for determining appropriate actions based on sensor-data input. Upon completion of learning, the system can generate control commands by deriving them from the sensory data. Using the learned control policy, the robot can behave autonomously.
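The idea of refining a control policy from corrective commands can be illustrated with a deliberately tiny policy. The patent does not specify a policy form; here the policy is a single gain on one sensor value, and each correction nudges that gain toward the action the user indicated (all names and the learning rule are illustrative assumptions).

```python
class CorrectivePolicy:
    """Toy policy: action = gain * sensor_value.

    A corrective command supplies the action the user wanted for a
    given sensor reading; the policy shifts its gain toward the gain
    that would have produced that action."""

    def __init__(self, gain: float = 0.0, lr: float = 0.5):
        self.gain = gain
        self.lr = lr  # how strongly each correction moves the policy

    def act(self, sensor: float) -> float:
        """Derive a control command from sensory data."""
        return self.gain * sensor

    def correct(self, sensor: float, corrected_action: float) -> None:
        """Refine the policy from a user's corrective command."""
        target_gain = corrected_action / sensor
        self.gain += self.lr * (target_gain - self.gain)
```

After enough corrections the policy reproduces the user's intent autonomously, which is the "upon completion of learning" behavior the abstract describes.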

ROBOTIC MANIPULATION METHODS AND SYSTEMS FOR EXECUTING A DOMAIN-SPECIFIC APPLICATION IN AN INSTRUMENTED ENVIRONMENT WITH ELECTRONIC MINIMANIPULATION LIBRARIES
20220305648 · 2022-09-29 ·

Embodiments of the present disclosure are directed to methods, computer program products, and computer systems of a robotic apparatus with robotic instructions replicating a food preparation recipe. In one embodiment, a robotic control platform comprises one or more sensors; a mechanical robotic structure including one or more end effectors and one or more robotic arms; an electronic library database of minimanipulations; a robotic planning module configured for real-time planning and adjustment, based at least in part on sensor data received from the one or more sensors, of an electronic multi-stage process recipe file that includes a sequence of minimanipulations and associated timing data; a robotic interpreter module configured for reading the minimanipulation steps from the minimanipulation library and converting them to machine code; and a robotic execution module configured for executing the minimanipulation steps by the robotic platform to accomplish a functional result.
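The library/interpreter/executor pipeline can be sketched in a few lines. Everything below is illustrative: the library entries, the `(command, duration)` "machine code", and the timing-only executor stand in for the patent's far richer minimanipulation records and real-time planner.

```python
from dataclasses import dataclass, field

@dataclass
class Minimanipulation:
    """One pre-tested library entry: a named action with timing data."""
    name: str
    duration_s: float
    params: dict = field(default_factory=dict)

# Illustrative electronic library of minimanipulations, keyed by step name.
LIBRARY = {
    "grasp_whisk": Minimanipulation("grasp_whisk", 1.5, {"width_mm": 30}),
    "stir": Minimanipulation("stir", 10.0, {"rpm": 60}),
}

def interpret(recipe_steps: list[str]) -> list[tuple[str, float]]:
    """Interpreter module: read steps from the library and 'compile'
    them into low-level (command, duration) pairs."""
    return [(LIBRARY[s].name.upper(), LIBRARY[s].duration_s)
            for s in recipe_steps]

def execute(program: list[tuple[str, float]]) -> float:
    """Execution module: run each compiled step in sequence and
    return the total elapsed time (here, just accumulated)."""
    elapsed = 0.0
    for _cmd, duration in program:
        elapsed += duration
    return elapsed
```

A recipe file in this sketch is simply an ordered list of library step names with timing carried by the library entries.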

SYSTEMS, APPARATUS, AND METHODS FOR ROBOTIC LEARNING AND EXECUTION OF SKILLS

Systems, apparatus, and methods are described for robotic learning and execution of skills. A robotic apparatus can include a memory, a processor, sensors, and one or more movable components (e.g., a manipulating element and/or a transport element). The processor can be operatively coupled to the memory, the movable elements, and the sensors, and configured to obtain information of an environment, including one or more objects located within the environment. In some embodiments, the processor can be configured to learn skills through demonstration, exploration, user inputs, etc. In some embodiments, the processor can be configured to execute skills and/or arbitrate between different behaviors and/or actions. In some embodiments, the processor can be configured to learn an environmental constraint. In some embodiments, the processor can be configured to learn using a general model of a skill.

System for configuring a robotic manipulator

Described are techniques for storing and retrieving items using a robotic manipulator. Images depicting a human interacting with an item, sensor data from sensors instrumenting the human or item, data regarding physical characteristics of the item, and constraint data relating to the robotic manipulator or the item may be used to generate one or more configurations for the robotic manipulator. The configurations may include points of contact and force vectors for contacting the item using the robotic manipulator.
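A configuration of the kind described, points of contact plus force vectors, might be represented as below. The schema and the antipodal-pinch heuristic are assumptions for illustration; the patent derives configurations from images, sensor data, and constraint data rather than from a single width measurement.

```python
from dataclasses import dataclass

@dataclass
class GraspConfiguration:
    """One candidate configuration for the robotic manipulator."""
    contact_points: list   # (x, y, z) points on the item surface
    force_vectors: list    # (fx, fy, fz) applied at each contact

def configure(item_width_m: float, max_force_n: float) -> GraspConfiguration:
    """Derive an antipodal two-finger pinch from the item's measured
    width, clamping squeeze force to the manipulator's constraint.
    The 5 N default squeeze is an assumed placeholder value."""
    half = item_width_m / 2.0
    squeeze = min(max_force_n, 5.0)
    return GraspConfiguration(
        contact_points=[(-half, 0.0, 0.0), (half, 0.0, 0.0)],
        force_vectors=[(squeeze, 0.0, 0.0), (-squeeze, 0.0, 0.0)],
    )
```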

Robot teaching programming method, apparatus and system, and computer-readable medium
11235468 · 2022-02-01 ·

In robot teaching programming, a robot teaching programming method, apparatus and system, and a computer-readable medium can make programming a robot simple and are not restricted to particular robot types. A robot teaching programming system includes a movable apparatus for imitating movement of an end effector of a robot in the robot's working space, and a robot teaching programming apparatus for recording first movement information of the movable apparatus in a first coordinate system, converting it to second movement information in a second coordinate system of the robot, and then programming the robot according to the second movement information. Using a movable apparatus to simulate the robot's end effector is easy to operate and imposes no restrictions on robot type. Teaching programming is accomplished through a simple coordinate transformation, so no advanced programming skills are needed.
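The coordinate conversion at the heart of this scheme can be illustrated with a planar rigid transform. A real system would use a full 3D calibration between the teaching device's frame and the robot's frame; this 2D version (rotation by `theta`, then translation by `(tx, ty)`) is only a sketch, and all names are illustrative.

```python
import math

def to_robot_frame(point, theta, tx, ty):
    """Convert a point recorded in the movable apparatus's coordinate
    system (the 'first' frame) into the robot's coordinate system
    (the 'second' frame) via a planar rigid transform."""
    x, y = point
    xr = math.cos(theta) * x - math.sin(theta) * y + tx
    yr = math.sin(theta) * x + math.cos(theta) * y + ty
    return (xr, yr)

def convert_trajectory(points, theta, tx, ty):
    """Apply the same calibration to every recorded teaching point,
    yielding the trajectory the robot should be programmed with."""
    return [to_robot_frame(p, theta, tx, ty) for p in points]
```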

GENERATING A ROBOT CONTROL POLICY FROM DEMONSTRATIONS
20220040861 · 2022-02-10 ·

Learning to effectively imitate human teleoperators, even in unseen, dynamic environments is a promising path to greater autonomy, enabling robots to steadily acquire complex skills from supervision. Various motion generation techniques are described herein that are rooted in contraction theory and sum-of-squares programming for learning a dynamical systems control policy in the form of a polynomial vector field from a given set of demonstrations. Notably, this vector field is provably optimal for the problem of minimizing imitation loss while providing certain continuous-time guarantees on the induced imitation behavior. Techniques herein generalize to new initial and goal poses of the robot and can adapt in real time to dynamic obstacles during execution, with convergence to teleoperator behavior within a well-defined safety tube.
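The flavor of a contracting dynamical-systems policy can be shown with the simplest possible contracting vector field. The patent learns a polynomial vector field with sum-of-squares contraction certificates; the linear field below, xdot = -k(x - goal), is trivially contracting (any two trajectories shrink toward each other), so it illustrates, without any learning, why such a policy generalizes to new initial and goal poses.

```python
def rollout(x0, goal, k=2.0, dt=0.01, steps=500):
    """Forward-Euler integration of the contracting vector field
    xdot = -k * (x - goal). From any initial state the trajectory
    converges exponentially to the goal pose."""
    x = list(x0)
    for _ in range(steps):
        x = [xi + dt * (-k * (xi - gi)) for xi, gi in zip(x, goal)]
    return x
```

Changing `x0` or `goal` requires no re-learning; convergence is a property of the field itself, which is the kind of continuous-time guarantee the abstract refers to.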

VERBAL-BASED FOCUS-OF-ATTENTION TASK MODEL ENCODER

Traditionally, robots may learn to perform tasks by observation in clean or sterile environments. However, robots are unable to accurately learn tasks by observation in real environments (e.g., cluttered, noisy, chaotic environments). Methods and systems are provided for teaching robots to learn tasks in real environments based on input (e.g., verbal or textual cues). In particular, a verbal-based Focus-of-Attention (FOA) model receives input and parses it to recognize at least a task and a target object name. This information is used to spatio-temporally filter a demonstration of the task to allow the robot to focus on the target object and movements associated with the target object within a real environment. In this way, using the verbal-based FOA, a robot is able to recognize "where and when" to pay attention to the demonstration of the task, thereby enabling the robot to learn the task by observation in a real environment.
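The parse-then-filter pattern can be sketched as below. This is a stand-in, not the patented encoder: keyword matching replaces the verbal FOA model, and a demonstration is reduced to a list of per-frame object sets so the "spatio-temporal filter" becomes frame selection.

```python
def parse_instruction(text, known_tasks, known_objects):
    """Tiny stand-in for the verbal FOA encoder: pick out a task verb
    and a target object name from the utterance."""
    words = text.lower().split()
    task = next((t for t in known_tasks if t in words), None)
    target = next((o for o in known_objects if o in words), None)
    return task, target

def focus_frames(frames, target):
    """Spatio-temporal filter (temporal part only): keep the indices
    of demonstration frames in which the target object is visible,
    so learning attends to 'where and when' the target appears."""
    return [i for i, objects in enumerate(frames) if target in objects]
```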

Systems, Methods, and Computer-Readable Media for Task-Oriented Motion Mapping on Machines, Robots, Agents and Virtual Embodiments Thereof Using Body Role Division

Systems, methods, and computer-readable media are disclosed for task-oriented motion mapping on an agent using body role division. One method includes: receiving task demonstration information of a particular task; receiving a set of instructions for the particular task; receiving a configuration of an agent to perform the particular task, the configuration of the agent including a plurality of joints, each joint belonging to one or more of a configurational group, a positional group, and an orientational group; mapping the configurational group of the agent based on the task demonstration information; changing values in the orientational group based on one or more of the task demonstration information and the set of instructions; changing values in the positional group based on the set of instructions; and producing a task-oriented motion mapping based on the mapped configurational group, the changed values in the orientational group, and the changed values in the positional group.
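The role-based routing of joint values can be sketched directly from the method steps. The role names follow the abstract, but the per-role rules here (demo passthrough, a 50/50 blend, instruction passthrough) are illustrative assumptions; the patent leaves the exact combination open.

```python
def map_motion(joints, demo_values, instruction_values):
    """Body-role-division sketch: each joint carries a role tag, and
    its target value is drawn from the demonstration, from the
    instruction set, or from both, depending on that role."""
    targets = {}
    for name, role in joints.items():
        if role == "configurational":
            # mapped from the task demonstration information
            targets[name] = demo_values[name]
        elif role == "orientational":
            # based on both demonstration and instructions (assumed blend)
            targets[name] = 0.5 * (demo_values[name] + instruction_values[name])
        else:  # positional
            # changed based on the set of instructions
            targets[name] = instruction_values[name]
    return targets
```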
