Patent classifications
G05B2219/40391
ROBOTIC MANIPULATION METHODS AND SYSTEMS FOR EXECUTING A DOMAIN-SPECIFIC APPLICATION IN AN INSTRUMENTED ENVIRONMENT WITH ELECTRONIC MINIMANIPULATION LIBRARIES
Embodiments of the present disclosure are directed to methods, computer program products, and computer systems of a robotic apparatus with robotic instructions replicating a food preparation recipe. In one embodiment, a robotic control platform comprises one or more sensors; a mechanical robotic structure including one or more end effectors and one or more robotic arms; an electronic library database of minimanipulations; a robotic planning module configured for real-time planning and adjustment, based at least in part on the sensor data received from the one or more sensors and on an electronic multi-stage process recipe file, the electronic multi-stage process recipe file including a sequence of minimanipulations and associated timing data; a robotic interpreter module configured for reading the minimanipulation steps from the minimanipulation library and converting them to machine code; and a robotic execution module configured for executing the minimanipulation steps by the robotic platform to accomplish a functional result.
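The interpreter/executor split described above can be illustrated with a minimal sketch. The library contents, step names, and command strings below are hypothetical stand-ins, not from the patent; the point is only the pipeline shape: recipe steps are looked up in a minimanipulation library, expanded to machine-level commands, and dispatched by an execution module.

```python
# Hypothetical minimanipulation library keyed by step name; each entry expands
# to a list of low-level "machine code" commands for the robotic platform.
MINIMANIPULATION_LIBRARY = {
    "grasp_spoon": ["move_arm(0.2, 0.1, 0.3)", "close_gripper()"],
    "stir_pot": ["rotate_wrist(90)", "rotate_wrist(-90)"],
}

def interpret(recipe_steps):
    """Interpreter module: read minimanipulation steps, emit machine code."""
    machine_code = []
    for step in recipe_steps:
        machine_code.extend(MINIMANIPULATION_LIBRARY[step])
    return machine_code

def execute(machine_code, send_command):
    """Execution module: dispatch each command to the robotic platform."""
    for command in machine_code:
        send_command(command)

recipe = ["grasp_spoon", "stir_pot"]   # sequence from a multi-stage recipe file
code = interpret(recipe)

executed = []                          # stand-in for the hardware interface
execute(code, send_command=executed.append)
```

In a real system `send_command` would talk to the robot controller, and the planning module would reorder or re-time steps using sensor feedback before execution.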
METHOD FOR LEARNING ROBOT TASK AND ROBOT SYSTEM USING THE SAME
The present invention relates to methods for learning a robot task and robot systems using the same. A robot system may include a robot configured to perform a task and detect force information related to the task; a haptic controller configured to be manipulatable for teaching the robot, the haptic controller configured to output haptic feedback based on the force information while teaching of the task to the robot is performed; a sensor configured to sense first information related to a task environment of the robot and second information related to a driving state of the robot while the teaching is performed by the haptic controller outputting the haptic feedback; and a computer configured to learn a motion of the robot related to the task, by using the first information and the second information, such that the robot autonomously performs the task.
Methods and systems for food preparation in a robotic cooking kitchen
The present disclosure is directed to methods, computer program products, and computer systems for instructing a robot to prepare a food dish by replicating the human chef's movements and actions. Monitoring a human chef is carried out in an instrumented application-specific setting, a standardized robotic kitchen in this instance, and involves using sensors and computers to watch, monitor, record, and interpret the motions and actions of the human chef, in order to develop a robot-executable set of commands robust to variations and changes in the environment, capable of allowing a robotic or automated system in a robotic kitchen to prepare the same dish to the same standards and quality as the dish prepared by the human chef.
SYSTEMS, DEVICES, ARTICLES, AND METHODS FOR USING TRAINED ROBOTS
Robotic systems, methods of operation of robotic systems, and storage media including processor-executable instructions are disclosed herein. The system may include a robot, at least one processor in communication with the robot, and an operator interface in communication with the robot and the at least one processor. The method may include executing a first set of autonomous robot control instructions which causes a robot to autonomously perform the at least one task in an autonomous mode, and generating a second set of autonomous robot control instructions from the first set of autonomous robot control instructions and a first set of environmental sensor data received from a sensor. The second set of autonomous robot control instructions when executed causes the robot to autonomously perform the at least one task. The method may include producing at least one signal that represents the second set of autonomous robot control instructions.
Generating robotic trajectories with motion harmonics
Aspects of the generation of new robotic motion trajectories are described. In one embodiment, a new robot motion trajectory may be generated by gathering demonstrated motion trajectories, adapting the demonstrated motion trajectories into robot-reachable motion trajectories based on a joint space of a robot model, for example, and generating motion harmonics with reference to the motion trajectories. Further, one or more constraints may be specified for a new goal. The weights of the motion harmonics may then be searched to identify or generate a new motion trajectory for a robot, where the new motion trajectory minimizes discrepancy from the demonstrated motion trajectories and the error due to the at least one constraint. In the new motion trajectory, the degree to which the constraints are satisfied may be tuned using a weight. According to the embodiments, new motion variants may be generated without the need to learn or review new demonstrated trajectories.
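One common way to realize "motion harmonics" is as principal components of the demonstrated trajectories, with the harmonic weights chosen to trade off discrepancy from the demonstrations against constraint error. The sketch below assumes that interpretation (the synthetic demonstrations, the end-point constraint, and the closed-form weight solve are illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic demonstrations: 20 noisy variants of a 1-D joint-angle trajectory.
t = np.linspace(0, 1, 50)
demos = np.array([np.sin(2 * np.pi * t + rng.normal(0, 0.1))
                  + rng.normal(0, 0.02, t.size) for _ in range(20)])

mean = demos.mean(axis=0)
# Motion harmonics: top principal components of the demonstrated trajectories.
_, _, vt = np.linalg.svd(demos - mean, full_matrices=False)
harmonics = vt[:3]

# Constraint for a new goal: the trajectory should end at angle 0.5.
# lam is the tuning weight on constraint satisfaction.
goal, lam = 0.5, 10.0
A = harmonics[:, -1]            # harmonic values at the final time step

# Minimize |w|^2 (discrepancy from the mean motion) + lam * (end error)^2;
# this quadratic has the closed-form solution below.
w = lam * A * (goal - mean[-1]) / (1.0 + lam * A @ A)
new_traj = mean + w @ harmonics
```

Raising `lam` pulls the end point closer to the goal; a small `lam` keeps the new trajectory closer to the demonstrated motions, matching the tunable-weight behavior the abstract describes.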
GENERATING A ROBOT CONTROL POLICY FROM DEMONSTRATIONS
Learning to effectively imitate human teleoperators, even in unseen, dynamic environments is a promising path to greater autonomy, enabling robots to steadily acquire complex skills from supervision. Various motion generation techniques are described herein that are rooted in contraction theory and sum-of-squares programming for learning a dynamical systems control policy in the form of a polynomial vector field from a given set of demonstrations. Notably, this vector field is provably optimal for the problem of minimizing imitation loss while providing certain continuous-time guarantees on the induced imitation behavior. Techniques herein generalize to new initial and goal poses of the robot and can adapt in real time to dynamic obstacles during execution, with convergence to teleoperator behavior within a well-defined safety tube.
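Stripped of the contraction-theory and sum-of-squares certificates, the core of such a method is fitting a polynomial vector field to demonstrated state/velocity pairs by minimizing imitation loss. The sketch below shows only that least-squares core on toy teleoperator demonstrations (the dynamics, degree-1 features, and data are illustrative assumptions; the stability guarantees from the abstract are omitted):

```python
import numpy as np

# Two toy teleoperator demonstrations of dx/dt = -x (converging to a goal at
# the origin), generated by Euler integration.
dt = 0.01
def demo(x0, steps=200):
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-xs[-1]))
    return np.array(xs)

trajs = [demo([1.0, -0.5]), demo([-0.3, 0.8])]
X = np.vstack([tr[:-1] for tr in trajs])
Xdot = np.vstack([(tr[1:] - tr[:-1]) / dt for tr in trajs])

# Fit a degree-1 polynomial vector field f(x) = [x, 1] @ W by least squares
# on the imitation loss ||f(x_i) - xdot_i||^2.
F = np.hstack([X, np.ones((X.shape[0], 1))])
W, *_ = np.linalg.lstsq(F, Xdot, rcond=None)

def policy(x):
    """Learned dynamical-systems control policy: commanded velocity at x."""
    return np.append(x, 1.0) @ W
```

Because the policy is a vector field over the state space rather than a fixed trajectory, it generalizes to new initial poses for free; the real-time obstacle adaptation and safety-tube convergence in the abstract require the omitted certificates.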
VERBAL-BASED FOCUS-OF-ATTENTION TASK MODEL ENCODER
Traditionally, robots may learn to perform tasks by observation in clean or sterile environments. However, robots are unable to accurately learn tasks by observation in real environments (e.g., cluttered, noisy, chaotic environments). Methods and systems are provided for teaching robots to learn tasks in real environments based on input (e.g., verbal or textual cues). In particular, a verbal-based Focus-of-Attention (FOA) model receives input, parses the input to recognize at least a task and a target object name. This information is used to spatio-temporally filter a demonstration of the task to allow the robot to focus on the target object and movements associated with the target object within a real environment. In this way, using the verbal-based FOA, a robot is able to recognize “where and when” to pay attention to the demonstration of the task, thereby enabling the robot to learn the task by observation in a real environment.
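The two stages described above — parsing the verbal cue for a task and target object, then spatio-temporally filtering the demonstration — can be sketched as follows. The keyword parser, frame format, and motion-based relevance test are hypothetical simplifications, not the patent's model:

```python
def parse_input(utterance, known_tasks, known_objects):
    """Recognize a task and a target object name in a verbal/textual cue."""
    words = utterance.lower().split()
    task = next(w for w in words if w in known_tasks)
    target = next(w for w in words if w in known_objects)
    return task, target

def filter_demonstration(frames, target):
    """Spatio-temporal filter: keep frames where the target object moves,
    ignoring motion of clutter objects."""
    kept = []
    for prev, cur in zip(frames, frames[1:]):
        if cur["objects"].get(target) != prev["objects"].get(target):
            kept.append(cur)
    return kept

# Toy demonstration frames: object name -> 2-D position.
frames = [
    {"objects": {"cup": (0, 0), "ball": (5, 5)}},
    {"objects": {"cup": (1, 0), "ball": (5, 5)}},  # cup moves: relevant
    {"objects": {"cup": (1, 0), "ball": (6, 5)}},  # only clutter moves
]
task, target = parse_input("Pick up the cup", {"pick"}, {"cup", "ball"})
relevant = filter_demonstration(frames, target)
```

The filter encodes "where and when" to attend: only frames in which the named target changes state survive, so clutter motion in the real environment is discarded before learning.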
Systems, Methods, and Computer-Readable Media for Task-Oriented Motion Mapping on Machines, Robots, Agents and Virtual Embodiments Thereof Using Body Role Division
Systems, methods, and computer-readable media are disclosed for task-oriented motion mapping on an agent using body role division. One method includes: receiving task demonstration information of a particular task; receiving a set of instructions for the particular task; receiving a configuration of an agent to perform the particular task, the configuration of the agent including a plurality of joints, each joint belonging to one or more of a configurational group, a positional group, and an orientational group; mapping the configurational group of the agent based on the task demonstration information; changing values in the orientational group based on one or more of the task demonstration information and the set of instructions; changing values in the positional group based on the set of instructions; and producing a task-oriented motion mapping based on the mapped configurational group, changed values in the orientational group, and changed values in the positional group.
METHOD AND DEVICE FOR OPERATING A MACHINE
A device for and method of operating a machine. The method includes providing a sequence of skills of the machine for executing a task, selecting a sequence of states from a plurality of sequences of states, depending on a likelihood, wherein the likelihood is determined depending on a transition probability from a final state of a first sub-sequence of states of the sequence of states for a first skill in the sequence of skills to an initial state of a second sub-sequence of states of the sequence of states for a second skill in the sequence of skills.
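Concretely, the selection step above scores each candidate combination of per-skill state sub-sequences by the transition probabilities that chain them together. The sketch below assumes a toy two-skill task with enumerable candidates (skill names, states, and probabilities are illustrative, not from the patent):

```python
import itertools

# Candidate state sub-sequences per skill, in execution order.
candidates = {
    "reach": [["s0", "s1"], ["s0", "s2"]],
    "grasp": [["s3", "s4"], ["s5", "s4"]],
}
# Transition probability from the final state of one skill's sub-sequence
# to the initial state of the next skill's sub-sequence.
transition_prob = {("s1", "s3"): 0.7, ("s1", "s5"): 0.1,
                   ("s2", "s3"): 0.2, ("s2", "s5"): 0.6}

def best_sequence(skill_order):
    """Select the state sequence with the highest chained likelihood."""
    best, best_likelihood = None, -1.0
    for combo in itertools.product(*(candidates[s] for s in skill_order)):
        likelihood = 1.0
        for a, b in zip(combo, combo[1:]):
            likelihood *= transition_prob.get((a[-1], b[0]), 0.0)
        if likelihood > best_likelihood:
            best, best_likelihood = combo, likelihood
    return [state for sub in best for state in sub], best_likelihood

seq, lik = best_sequence(["reach", "grasp"])
```

Here exhaustive enumeration stands in for whatever search the method uses; with many skills a dynamic-programming (Viterbi-style) pass over the transition probabilities would replace the product loop.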
SYSTEMS, APPARATUS, AND METHODS FOR ROBOTIC LEARNING AND EXECUTION OF SKILLS
Systems, apparatus, and methods are described for robotic learning and execution of skills. A robotic apparatus can include a memory, a processor, sensors, and one or more movable components (e.g., a manipulating element and/or a transport element). The processor can be operatively coupled to the memory, the movable elements, and the sensors, and configured to obtain information of an environment, including one or more objects located within the environment. In some embodiments, the processor can be configured to learn skills through demonstration, exploration, user inputs, etc. In some embodiments, the processor can be configured to execute skills and/or arbitrate between different behaviors and/or actions. In some embodiments, the processor can be configured to learn an environmental constraint. In some embodiments, the processor can be configured to learn using a general model of a skill.