G05B2219/36442

WEARABLE ROBOT DATA COLLECTION SYSTEM WITH HUMAN-MACHINE OPERATION INTERFACE

A data collection system that performs data collection of human-driven robot actions for robot learning. The data collection system includes: i) a wearable computation subsystem that is worn by a human data collector and that controls the data collection process and ii) a human-machine operation interface subsystem that allows the human data collector to use the human-machine operation interface to operate an attached robotic gripper to perform one or more actions. A user interface subsystem receives instructions from the wearable computation subsystem that direct the human data collector to perform the one or more actions using the human-machine operation interface subsystem. A visual sensing subsystem includes one or more cameras that collect raw visual data related to the pose and movement of the robotic gripper while performing the one or more actions. A data collection subsystem receives collected data related to the one or more actions.
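The data flow the abstract describes can be sketched as a simple loop: the wearable computation subsystem issues an instruction, the visual sensing subsystem captures frames of the gripper, and the data collection subsystem stores the paired record. All class and function names below are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class CollectionRecord:
    instruction: str   # action the human data collector was directed to perform
    frames: list       # raw visual data of gripper pose/movement for that action

@dataclass
class DataCollectionSubsystem:
    records: list = field(default_factory=list)

    def collect(self, instruction, frames):
        self.records.append(CollectionRecord(instruction, frames))

def run_collection(instructions, capture_frames, sink):
    """Direct the collector through each action and log the visual data."""
    for instruction in instructions:
        frames = capture_frames(instruction)   # visual sensing subsystem
        sink.collect(instruction, frames)      # data collection subsystem

sink = DataCollectionSubsystem()
run_collection(["grasp cup", "place cup"],
               lambda a: [f"{a}-frame-{i}" for i in range(2)],
               sink)
```

The key design point is that each stored record pairs the instruction with the visual data captured while it was performed, which is the association a downstream robot-learning pipeline would train on.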

Facilitating robotic control using a virtual reality interface

A method of deriving autonomous control information involves receiving one or more sets of associated environment sensor information and device control instructions. Each set of associated environment sensor information and device control instructions includes environment sensor information representing an environment associated with an operator controllable device and associated device control instructions configured to cause the operator controllable device to simulate at least one action taken by at least one operator experiencing a representation of the environment generated from the environment sensor information. The method also involves deriving autonomous control information from the one or more sets of associated environment sensor information and device control instructions, the autonomous control information configured to facilitate generating autonomous device control signals from autonomous environment sensor information representing an environment associated with an autonomous device, the autonomous device control signals configured to cause the autonomous device to take at least one autonomous action.
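A minimal sketch of the derivation step, under the assumption that the autonomous control information is a mapping from sensor information to control instructions: here a nearest-neighbour lookup stands in for whatever learned model the patent contemplates. All names are illustrative.

```python
def derive_control_info(pairs):
    """pairs: list of (sensor_vector, control_instruction) from operator sessions.
    Here the "autonomous control information" is simply the stored pairs."""
    return list(pairs)

def autonomous_control_signal(control_info, sensor_vector):
    """Generate a control signal from autonomous environment sensor information
    by finding the most similar recorded operator situation."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, instruction = min(control_info, key=lambda p: sq_dist(p[0], sensor_vector))
    return instruction

info = derive_control_info([((0.0, 0.0), "stop"), ((1.0, 0.0), "forward")])
signal = autonomous_control_signal(info, (0.9, 0.1))  # closest to (1.0, 0.0)
```

A regression or policy-learning model would replace the lookup in practice, but the interface is the same: operator data in, autonomous control signals out.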

Automatic analysis of real time conditions in an activity space

Efficient and effective workspace condition analysis systems and methods are presented. In one embodiment, a method comprises: accessing information associated with an activity space, including information on a newly discovered previously unmodeled entity; analyzing the activity information, including activity information associated with the previously unmodeled entity; forwarding feedback on the results of the analysis, including analysis results for the updated modeled information; and utilizing the feedback in a coordinated path plan check process. In one exemplary implementation the coordinated path plan check process comprises: creating a solid/CAD model including updated modeled information; simulating an activity including the updated modeled information; generating a coordinated path plan for entities in the activity space; and testing the coordinated path plan. The coordinated path plan check process can complete successfully. The analyzing can include automatic identification of potential collision points for a first actor, including potential collision points with the newly discovered object. The newly discovered previously unmodeled entity can prevent an actor from performing an activity. The newly discovered object can be a portion of a tool component of a product.
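The check process above (update the model, generate a coordinated plan, test it) can be sketched in one dimension, with grid cells standing in for the solid/CAD model and obstacle cells standing in for the newly discovered entity. This is an illustrative reduction, not the patented implementation.

```python
def generate_path(start, goal):
    """1-D illustrative path: step one cell at a time toward the goal."""
    step = 1 if goal >= start else -1
    return list(range(start, goal + step, step))

def check_plan(paths, obstacle_cells):
    """Test the coordinated path plan: fail if any actor enters an
    obstacle cell (the unmodeled entity) or two actors share a cell."""
    for t in range(max(len(p) for p in paths)):
        positions = [p[min(t, len(p) - 1)] for p in paths]
        if any(pos in obstacle_cells for pos in positions):
            return False                      # collision with discovered entity
        if len(set(positions)) < len(positions):
            return False                      # actor-actor collision
    return True

paths = [generate_path(0, 3), generate_path(5, 2)]
ok_without_obstacle = check_plan(paths, obstacle_cells=set())
ok_with_obstacle = check_plan(paths, obstacle_cells={3})
```

The success/failure result of `check_plan` corresponds to the feedback that the abstract says is forwarded back into the analysis.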

Skill template distribution for robotic demonstration learning

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing skill templates for robotic demonstration learning. One of the methods includes receiving, from a user device by a skill template distribution system, a selection of an available skill template. The skill template distribution system provides a skill template, wherein the skill template comprises information representing a state machine of one or more tasks, and wherein the skill template specifies which of the one or more tasks are demonstration subtasks requiring local demonstration data. The skill template distribution system trains a machine learning model for each demonstration subtask using the local demonstration data to generate learned parameter values.

Distributed robotic demonstration learning

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed robotic demonstration learning. One of the methods includes receiving a skill template to be trained to cause a robot to perform a particular skill having a plurality of subtasks. One or more demonstration subtasks defined by the skill template are identified, wherein each demonstration subtask is an action to be refined using local demonstration data. An online execution system uploads sets of local demonstration data to a cloud-based training system. The cloud-based training system generates respective trained model parameters for each set of local demonstration data. The skill template is executed on the robot using the trained model parameters generated by the cloud-based training system.
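The distributed flow in these two abstracts can be sketched end to end: the template marks which subtasks need local demonstration data, a stand-in "cloud" trainer turns each uploaded demonstration set into parameters (here simply the mean demonstrated value), and the template is executed with those parameters. Every name below is an assumption for illustration.

```python
skill_template = {
    "tasks": ["approach", "insert", "retract"],
    "demonstration_subtasks": {"insert"},     # to be refined with local data
}

def cloud_train(demonstrations):
    """Stand-in for cloud training: trained model parameters from one
    set of local demonstration data (here, the mean demonstrated value)."""
    return sum(demonstrations) / len(demonstrations)

def execute_template(template, trained_params):
    trace = []
    for task in template["tasks"]:
        if task in template["demonstration_subtasks"]:
            trace.append((task, trained_params[task]))  # learned parameters
        else:
            trace.append((task, None))                  # scripted, no learning
    return trace

uploaded = {"insert": [0.9, 1.1, 1.0]}                  # local demonstrations
params = {task: cloud_train(demos) for task, demos in uploaded.items()}
trace = execute_template(skill_template, params)
```

The separation mirrors the claims: only demonstration subtasks consume local data, and only the training step needs the cloud.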

Method for the surface treatment of an article

A method for the surface treatment of an article (2) by means of a robotic device (3) comprising a robotic arm (5) and a spraying head (4) fitted on the robotic arm (5); the method comprises a learning step, during which the operator moves the spraying head (4) by means of a handling device (9) and the movements made by the spraying head (4) are stored by a storage unit (8); and a reproduction step, which is subsequent to the learning step and during which the robotic arm (5) is operated so that the spraying head (4) repeats the movements stored by the storage unit (8).
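The learning and reproduction steps amount to record-and-replay, which can be sketched directly; the pose format and class names are illustrative, not the patent's.

```python
class StorageUnit:
    """Stand-in for the storage unit (8) that records head movements."""
    def __init__(self):
        self.poses = []

    def store(self, pose):
        self.poses.append(pose)

def learning_step(hand_guided_poses, storage):
    for pose in hand_guided_poses:           # operator moves the spraying head
        storage.store(pose)

def reproduction_step(storage, move_arm):
    for pose in storage.poses:               # robotic arm repeats the movements
        move_arm(pose)

storage = StorageUnit()
learning_step([(0, 0), (10, 5), (20, 5)], storage)
replayed = []
reproduction_step(storage, replayed.append)
```

In a real system `move_arm` would command the robotic arm (5); here it simply logs the poses so the replay can be inspected.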

System for configuring a robotic manipulator

Described are techniques for storing and retrieving items using a robotic manipulator. Images depicting a human interacting with an item, sensor data from sensors instrumenting the human or item, data regarding physical characteristics of the item, and constraint data relating to the robotic manipulator or the item may be used to generate one or more configurations for the robotic manipulator. The configurations may include points of contact and force vectors for contacting the item using the robotic manipulator.

VERBAL-BASED FOCUS-OF-ATTENTION TASK MODEL ENCODER

Traditionally, robots may learn to perform tasks by observation in clean or sterile environments. However, robots are unable to accurately learn tasks by observation in real environments (e.g., cluttered, noisy, chaotic environments). Methods and systems are provided for teaching robots to learn tasks in real environments based on input (e.g., verbal or textual cues). In particular, a verbal-based Focus-of-Attention (FOA) model receives input, parses the input to recognize at least a task and a target object name. This information is used to spatio-temporally filter a demonstration of the task to allow the robot to focus on the target object and movements associated with the target object within a real environment. In this way, using the verbal-based FOA, a robot is able to recognize “where and when” to pay attention to the demonstration of the task, thereby enabling the robot to learn the task by observation in a real environment.
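The "where and when" filtering described above can be sketched in two steps: parse the cue for a task and a target object name, then keep only the demonstration frames in which that object appears ("when") and only that object's detections within each frame ("where"). The parsing and frame format here are simplifying assumptions.

```python
def parse_cue(cue, known_tasks, known_objects):
    """Recognize at least a task and a target object name from the input."""
    words = cue.lower().split()
    task = next(w for w in words if w in known_tasks)
    target = next(w for w in words if w in known_objects)
    return task, target

def focus_demonstration(frames, target):
    """Spatio-temporal filter: keep frames and detections matching the target."""
    focused = []
    for t, detections in frames:
        hits = [d for d in detections if d["name"] == target]
        if hits:
            focused.append((t, hits))
    return focused

task, target = parse_cue("please grasp the mug on the table",
                         known_tasks={"grasp", "push"},
                         known_objects={"mug", "table"})
frames = [
    (0, [{"name": "hand"}]),
    (1, [{"name": "mug"}, {"name": "table"}]),
]
focused = focus_demonstration(frames, target)
```

A real FOA encoder would use learned language and perception models, but the contract is the same: verbal input in, a spatio-temporally filtered demonstration out.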

MANIPULATOR AND METHOD FOR CONTROLLING THEREOF

A manipulator and a method for controlling the manipulator are disclosed. The manipulator includes: a plurality of links respectively corresponding to a user’s upper arm, forearm, and hand, a plurality of motors rotating the plurality of links, a communication interface comprising communication circuitry, a memory storing at least one instruction, and a processor configured to execute the at least one instruction, wherein the processor is configured to: based on first rotation angle information for motors corresponding to the upper arm and the forearm among the plurality of motors, obtain information for a body frame of a link corresponding to the forearm, obtain equilibrium angle information that positions the body frame in equilibrium with a specified reference frame, based on receiving a sensing value indicating the posture of the hand from an external sensor through the communication interface, obtain second rotation angle information for motors corresponding to the hand among the plurality of motors based on the sensing value and the equilibrium angle information, and control the motors corresponding to the hand based on the second rotation angle information.
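A planar sketch of the claimed control flow (an illustrative simplification, not the patented implementation): sum the upper-arm and forearm motor angles to get the forearm body-frame orientation, compute the equilibrium angle that aligns that frame with a reference frame, then offset the sensed hand posture by the equilibrium angle to obtain the hand-motor command.

```python
def forearm_frame_angle(upper_arm_deg, forearm_deg):
    """Body-frame orientation from the first rotation angle information."""
    return upper_arm_deg + forearm_deg

def equilibrium_angle(frame_deg, reference_deg=0.0):
    """Rotation that positions the body frame in equilibrium with the
    specified reference frame."""
    return reference_deg - frame_deg

def hand_motor_angle(sensed_hand_deg, equilibrium_deg):
    """Second rotation angle information for the hand motors, from the
    external sensing value and the equilibrium angle information."""
    return sensed_hand_deg + equilibrium_deg

frame = forearm_frame_angle(30.0, 45.0)      # forearm frame at 75 degrees
eq = equilibrium_angle(frame)                # -75 degrees to reach reference
command = hand_motor_angle(20.0, eq)         # hand command in the body frame
```

In three dimensions each angle would become a rotation matrix or quaternion, but the composition order is the same.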