Patent classifications
B25J9/1656
Distributed robotic demonstration learning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed robotic demonstration learning. One of the methods includes receiving a skill template to be trained to cause a robot to perform a particular skill having a plurality of subtasks. One or more demonstration subtasks defined by the skill template are identified, wherein each demonstration subtask is an action to be refined using local demonstration data. An online execution system uploads sets of local demonstration data to a cloud-based training system. The cloud-based training system generates respective trained model parameters for each set of local demonstration data. The skill template is executed on the robot using the trained model parameters generated by the cloud-based training system.
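The flow the abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation: the class names, the `needs_demonstration` flag, and the `cloud_train` stand-in are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch: a skill template lists subtasks, some flagged as
# demonstration subtasks to be refined with local demonstration data.
@dataclass
class Subtask:
    name: str
    needs_demonstration: bool = False

@dataclass
class SkillTemplate:
    skill: str
    subtasks: list

def demonstration_subtasks(template):
    """Identify the subtasks the template marks for local demonstration."""
    return [s for s in template.subtasks if s.needs_demonstration]

def cloud_train(demo_data):
    """Stand-in for the cloud-based training system: returns trained model
    parameters for each uploaded set of local demonstration data."""
    return {name: {"weights": f"trained-on-{len(samples)}-samples"}
            for name, samples in demo_data.items()}

template = SkillTemplate("insert_connector", [
    Subtask("move_to_part"),
    Subtask("grasp", needs_demonstration=True),
    Subtask("insert", needs_demonstration=True),
])

# The online execution system uploads local demonstrations per subtask...
local_demos = {s.name: [f"demo_{i}" for i in range(3)]
               for s in demonstration_subtasks(template)}
params = cloud_train(local_demos)
# ...and the robot would then execute the template using these parameters.
```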
Robot execution system
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for rule execution in an online robotics system. One of the systems includes an execution engine subsystem and an execution memory subsystem. The execution engine receives rules having types and subtypes that represent a particular entity in an operating environment of a robot, provides subscription requests to the execution memory subsystem, and receives events emitted by the execution memory subsystem. The execution memory subsystem receives subscription requests from the execution engine subsystem, receives new observations, converts the new observations into fact updates, performs pattern matching with the fact updates against the patterns of the subscription requests, and emits events to the execution engine subsystem for patterns that have been matched by the fact updates.
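The subscribe/observe/match/emit cycle described above can be sketched in a few lines. This is a hypothetical toy, assuming patterns are simple predicates over observations and events are delivered via callbacks; the real system's rule language and entity model are not specified here.

```python
# Hypothetical sketch of the claimed split: an execution engine subscribes to
# patterns; an execution memory converts observations into fact updates,
# matches them against the subscriptions, and emits events on matches.
class ExecutionMemory:
    def __init__(self):
        self.subscriptions = []   # list of (pattern predicate, callback)
        self.facts = {}           # latest fact per entity

    def subscribe(self, pattern, callback):
        """Register a subscription request from the execution engine."""
        self.subscriptions.append((pattern, callback))

    def observe(self, observation):
        # Convert the new observation into a fact update.
        self.facts[observation["entity"]] = observation["value"]
        # Pattern-match the update and emit events for matched patterns.
        for pattern, callback in self.subscriptions:
            if pattern(observation):
                callback(observation)

events = []
memory = ExecutionMemory()
# The engine subscribes to a pattern over an entity (type, subtype) pair.
memory.subscribe(lambda obs: obs["entity"] == ("gripper", "left"),
                 events.append)
memory.observe({"entity": ("gripper", "left"), "value": "closed"})
memory.observe({"entity": ("camera", "wrist"), "value": "frame_17"})
```

Only the first observation matches the subscription, so exactly one event reaches the engine.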
TRAINING AND/OR UTILIZING MACHINE LEARNING MODEL(S) FOR USE IN NATURAL LANGUAGE BASED ROBOTIC CONTROL
Techniques are disclosed that enable training a goal-conditioned policy based on multiple data sets, where each of the data sets describes a robot task in a different way. For example, the multiple data sets can include: a goal image data set, where the task is captured in the goal image; a natural language instruction data set, where the task is described in the natural language instruction; a task ID data set, where the task is described by the task ID, etc. In various implementations, each of the multiple data sets has a corresponding encoder, where the encoders are trained to generate a shared latent space representation of the corresponding task description. Additional or alternative techniques are disclosed that enable control of a robot using a goal-conditioned policy network. For example, the robot can be controlled, using the goal-conditioned policy network, based on free-form natural language input describing robot task(s).
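The key idea, that per-modality encoders map different task descriptions into one shared latent space consumed by a single policy, can be sketched as below. Everything here is a toy stand-in: the lookup-table "latent space", the keyword-matching language encoder, and the threshold policy are hypothetical, not the disclosed trained models.

```python
# Hypothetical sketch: each data-set modality (task ID, natural language
# instruction, goal image, ...) has its own encoder, trained so that
# descriptions of the same task land near each other in a shared latent
# space. Here the "latent space" is a toy 2-D lookup keyed by task name.
TASK_LATENTS = {"pick_block": [0.9, 0.1], "open_drawer": [0.1, 0.9]}

def encode_task_id(task_id):
    return TASK_LATENTS[task_id]

def encode_language(instruction):
    # A trained language encoder would embed free-form text here; this
    # stand-in uses keyword matching only.
    if "block" in instruction:
        return TASK_LATENTS["pick_block"]
    return TASK_LATENTS["open_drawer"]

def policy(observation, task_latent):
    """Goal-conditioned policy: the action depends on the shared latent,
    regardless of which encoder produced it."""
    return "reach_and_grasp" if task_latent[0] > 0.5 else "pull_handle"

# The same policy is driven by a task ID or by free-form language input.
a1 = policy({}, encode_task_id("pick_block"))
a2 = policy({}, encode_language("pick up the red block"))
```

Because both encoders map the task into the same latent region, the policy produces the same action for either description.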
PROGRAM EDITING DEVICE
The present invention enables an operator who is accustomed to robot programming but not to vision detection programming to create a vision detection program without discomfort. Provided is a program editing device for editing a motion program for a robot, the program editing device including: a program editing unit which receives common input operations with respect to a first type of icon that corresponds to commands relating to control of the robot and a second type of icon that corresponds to commands relating to image capture by a visual sensor and to processing of captured images; and a program generation unit which generates the motion program in accordance with the first type of icon and second type of icon subjected to editing.
Calibration and programming of robots
Methods are disclosed for calibrating robots without the use of external measurement equipment and for copying working programs between uncalibrated robots. Both methods utilize the properties of a closed chain and the relative position of the links in the chain in order to update the kinematic models of the robots.
SMART HAND TOOLS WITH SENSING AND WIRELESS CAPABILITIES, AND SYSTEMS AND METHODS FOR USING SAME
A smart hand tool includes a tool body having a distal working end and an opposite handle end configured to be gripped by a user, at least one sensor coupled to the tool body, and a wireless transmitter in communication with the at least one sensor and configured to transmit information from the at least one sensor to a remote device. The at least one sensor is configured to measure one or more physical parameters when the hand tool is in use, such as movement of the tool body working end, rotation of the tool body working end, torque at the tool body working end, pressure at the tool body working end, and temperature at the tool body working end.
METHOD FOR CONTROLLING ROBOT, ROBOT, AND RECORDING MEDIUM
A robot detects, through a sensor, the location and movement direction of a user and an object near the user, sets a nearby ground area in front at the feet of the user according to the detected location and movement direction of the user, controls an illumination device in the robot to irradiate the nearby ground area with light while driving at least one pair of legs or wheels of the robot to cause the robot to accompany the user, specifies the type and the location of the detected object, and if the object is a dangerous object and is located ahead of the user, controls the illumination device to irradiate a danger area including at least a portion of the dangerous object with light in addition to irradiating the nearby ground area with light.
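The illumination decision described above reduces to a simple rule: always light the ground area near the user's feet, and additionally light a danger area when a detected dangerous object lies ahead of the user. A one-dimensional sketch (positions and the "ahead" test are simplified assumptions, not the disclosed method):

```python
# Hypothetical sketch of the claimed control flow, on a 1-D line:
# user_dir is +1 or -1; an object is "ahead" if it lies in that direction.
def illumination_targets(user_pos, user_dir, detected_objects):
    targets = [("ground_area", user_pos)]   # always light the feet area
    for obj in detected_objects:
        ahead = (obj["pos"] - user_pos) * user_dir > 0
        if obj["dangerous"] and ahead:
            targets.append(("danger_area", obj["pos"]))
    return targets

targets = illumination_targets(
    user_pos=0.0, user_dir=1.0,
    detected_objects=[
        {"pos": 3.0, "dangerous": True},    # dangerous object ahead: lit
        {"pos": -2.0, "dangerous": True},   # behind the user: ignored
        {"pos": 1.0, "dangerous": False},   # harmless object: ignored
    ])
```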
PROGRAM GENERATION DEVICE AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM STORING PROGRAM
A program generation device generating an operation program causing a robot to execute a task is provided. The program generation device includes: a display control unit displaying an input screen including an operation block display area where an operation block relating to an operation of the robot is displayed, an operation block arrangement area where the operation block selected from the operation block display area is arranged to generate the operation program, and a text display area where the operation program is displayed in a text format; and a text editing unit editing the operation program in the text format and displaying the edited operation program in the text display area.
Optimization of robot control programs in physics-based simulated environment
A disclosed system includes a physically plausible virtual runtime environment to simulate a real-life environment for a simulated robot and a test planning and testing component to define a robotic task and generate virtual test cases for the robotic task. The test planning and testing component is further operable to determine a control strategy for executing the virtual test cases and create the physics-based simulated environment. The system further includes a robot controller operable to execute the virtual test cases in parallel in the physics-based simulated environment, measure a success of the execution, and store training and validation data to a historical database to train a machine learning algorithm. The robot controller may continuously execute the virtual test cases and use the machine learning algorithm to adjust parameters of the control strategy until optimal test cases are determined.
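The execute/measure/store/adjust loop the abstract describes can be sketched as below. This is a deliberately toy version: the "physics" is a pass/fail threshold, the "learning" step is a fixed parameter decrement, and all names are hypothetical.

```python
# Hypothetical sketch of the claimed loop: execute virtual test cases in a
# simulated environment, measure success, log results to a historical
# store, and let a stand-in learning step adjust a control-strategy
# parameter until all test cases pass.
def simulate(test_case, speed):
    """Toy physics stand-in: a case succeeds when the commanded speed is
    at or below that case's limit."""
    return speed <= test_case["speed_limit"]

def optimize(test_cases, speed=10, step=1, max_iters=50):
    history = []   # stands in for the historical training/validation database
    for _ in range(max_iters):
        results = [simulate(tc, speed) for tc in test_cases]
        history.append((speed, results))
        if all(results):
            return speed, history
        speed -= step          # "learning" update: back off after a failure
    return speed, history

test_cases = [{"speed_limit": 7}, {"speed_limit": 6}]
best_speed, history = optimize(test_cases)
```

A real system would replace the fixed decrement with a model trained on the accumulated history, but the loop structure is the same.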
DETERMINING AND UTILIZING CORRECTIONS TO ROBOT ACTIONS
Methods, apparatus, and computer-readable media for determining and utilizing human corrections to robot actions. In some implementations, in response to determining a human correction of a robot action, a correction instance is generated that includes sensor data, captured by one or more sensors of the robot, that is relevant to the corrected action. The correction instance can further include determined incorrect parameter(s) utilized in performing the robot action and/or correction information that is based on the human correction. The correction instance can be utilized to generate training example(s) for training one or model(s), such as neural network model(s), corresponding to those used in determining the incorrect parameter(s). In various implementations, the training is based on correction instances from multiple robots. After a revised version of a model is generated, the revised version can thereafter be utilized by one or more of the multiple robots.