Patent classifications
G05B2219/40116
WEARABLE ROBOT DATA COLLECTION SYSTEM WITH HUMAN-MACHINE OPERATION INTERFACE
A data collection system that performs data collection of human-driven robot actions for robot learning. The data collection system includes: i) a wearable computation subsystem that is worn by a human data collector and that controls the data collection process and ii) a human-machine operation interface subsystem that allows the human data collector to use the human-machine operation interface to operate an attached robotic gripper to perform one or more actions. A user interface subsystem receives instructions from the wearable computation subsystem that direct the human data collector to perform the one or more actions using the human-machine operation interface subsystem. A visual sensing subsystem includes one or more cameras that collect raw visual data related to the pose and movement of the robotic gripper while performing the one or more actions. A data collection subsystem receives collected data related to the one or more actions.
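The subsystems named above can be pictured as a simple collection loop. The following is a minimal sketch, not taken from the patent; all class, field, and method names (`DataCollector`, `Frame`, `direct`, `record`) are hypothetical stand-ins for the wearable computation, user interface, and visual sensing subsystems:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    """One raw visual sample: a timestamp plus the gripper pose (illustrative fields)."""
    timestamp: float
    gripper_pose: tuple  # e.g. (x, y, z, roll, pitch, yaw)

@dataclass
class DataCollector:
    """Sketch of the wearable computation subsystem coordinating the collection."""
    instructions: list = field(default_factory=list)
    frames: list = field(default_factory=list)

    def direct(self, action: str) -> str:
        # User interface subsystem: relay an instruction to the human collector.
        self.instructions.append(action)
        return f"Perform action: {action}"

    def record(self, timestamp: float, gripper_pose: tuple) -> None:
        # Visual sensing subsystem: store raw pose/movement data for the action.
        self.frames.append(Frame(timestamp, gripper_pose))

collector = DataCollector()
collector.direct("grasp the cup")
collector.record(0.0, (0.10, 0.20, 0.30, 0.0, 0.0, 0.0))
collector.record(0.1, (0.10, 0.20, 0.40, 0.0, 0.0, 0.0))
```

In this sketch the data collection subsystem would simply consume `collector.frames` after the action completes.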
Robot apparatus, robot system, robot control method, and storage medium
A robot apparatus includes a storage that stores first instructional information which serves as a guide to a first work operation; an acquirer that acquires, from a different apparatus having it, second instructional information which serves as a guide to a second work operation that is similar or related to the first work operation; and a work controller that performs the first work operation based on the first instructional information stored in the storage and the second work operation based on the second instructional information acquired by the acquirer.
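The storage/acquirer/work-controller split can be sketched in a few lines. This is an illustrative reading, not the patent's implementation; the class and key names are hypothetical:

```python
class RobotApparatus:
    """Sketch: 'storage' holds the first instructional information; the acquirer
    fetches related guidance from a different apparatus (all names illustrative)."""

    def __init__(self, first_info: str):
        self.storage = {"first": first_info}

    def acquire(self, other_apparatus: dict, key: str = "second") -> None:
        # Acquirer: obtain instructional information for a similar or related
        # work operation from a different apparatus that already has it.
        self.storage[key] = other_apparatus[key]

    def perform(self, key: str) -> str:
        # Work controller: carry out a work operation guided by stored information.
        return f"performing: {self.storage[key]}"

bot = RobotApparatus("assemble the part")
bot.acquire({"second": "polish the part"})
```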
Electronic device and method for determining task including plural actions
Provided is an electronic device. The electronic device may include: a user interface; a processor operatively connected to the user interface; and a memory operatively connected to the processor, wherein the memory may store instructions that, when executed, cause the processor to control the electronic device to: receive an input via the user interface; determine a task including plural actions based on the input; execute a first action among the plural actions of the determined task; obtain context information related to the task while executing the first action; determine at least one first threshold associated with the first action based at least in part on the obtained context information; and determine the result of the first action based on whether execution of the first action is completed according to the at least one first threshold.
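The key idea is that the completion threshold is not fixed but derived from context gathered during execution. A toy sketch, with an entirely hypothetical rule (grip force, fragility flag) standing in for whatever context and threshold the device actually uses:

```python
def first_threshold(context: dict) -> float:
    """Hypothetical rule: derive a grip-force threshold from context,
    tightening it when the handled object is fragile."""
    base = 10.0
    if context.get("fragile"):
        base *= 0.5  # fragile object: require less force to count as done
    return base

def action_result(measured_force: float, context: dict) -> bool:
    """Judge completion of the first action against the context-derived threshold."""
    return measured_force >= first_threshold(context)
```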
DEVICE AND METHOD FOR CONTROLLING A ROBOT
A method for controlling a robot. The method includes performing demonstrations of handling an object and acquiring descriptor images for the demonstrations from the robot's point of view of the object; selecting a set of feature points, wherein the feature points are selected by searching an optimum of an objective function which rewards selected feature points being visible in the descriptor images; training a robot control model using the demonstrations; and controlling the robot for a control scene with the object by determining a descriptor image of the object, locating the selected set of feature points in the descriptor image of the object, determining Euclidean coordinates of the located feature points, estimating a pose from the determined Euclidean coordinates, and controlling the robot to handle the object by means of the robot control model with the estimated pose.
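Two of the steps lend themselves to a compact sketch: selecting feature points by a visibility-rewarding objective, and forming a pose estimate from located 3-D points. Both functions below are simplified stand-ins (a raw visibility count for the objective, a centroid for the pose), not the patent's actual formulation:

```python
def select_feature_points(visibility, k):
    """Pick the k candidate feature points visible in the most descriptor images.

    visibility[i][j] is True when candidate point i is visible in image j; the
    visibility count stands in for the objective function described above.
    """
    scores = sorted(((sum(row), i) for i, row in enumerate(visibility)), reverse=True)
    return sorted(i for _, i in scores[:k])

def centroid(points):
    """Toy pose estimate: the centroid of the located 3-D feature points."""
    n = len(points)
    return tuple(sum(p[d] for p in points) / n for d in range(3))
```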
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing device includes: a joint detection unit that detects a joint of a person striking a pose to imitate a pose of a robot device including a joint; a human body joint angle estimation unit that estimates an angle of the joint of the person; and a mapping learning unit that learns mapping between the angle of the joint of the person and an angle of the joint of the robot device in the pose.
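The mapping-learning unit learns a function from human joint angles to robot joint angles. As a minimal stand-in, one could fit a per-joint linear map by ordinary least squares; this is an illustrative sketch, not the patent's learning method:

```python
def fit_linear_map(human_angles, robot_angles):
    """Fit robot_angle ~ a * human_angle + b by ordinary least squares,
    a 1-D per-joint stand-in for the mapping-learning unit."""
    n = len(human_angles)
    mx = sum(human_angles) / n
    my = sum(robot_angles) / n
    # Slope from the covariance/variance ratio, intercept from the means.
    cov = sum((x - mx) * (y - my) for x, y in zip(human_angles, robot_angles))
    var = sum((x - mx) ** 2 for x in human_angles)
    a = cov / var
    b = my - a * mx
    return a, b
```

Given estimated human joint angles from the pose-imitation step, the fitted map would translate them into robot joint targets.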
HUMAN ROBOT COLLABORATION FOR FLEXIBLE AND ADAPTIVE ROBOT LEARNING
Example implementations described herein involve systems and methods for training and managing machine learning models in an industrial setting. Specifically, by leveraging the similarity across certain production areas, example implementations can group these areas together to efficiently train models that use human pose data to predict human activities or the specific task(s) the workers are engaged in. The example implementations do away with previous methods of independent model construction for each production area and take advantage of the commonality amongst different environments.
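The grouping step can be sketched as bucketing production areas by a shared similarity key, so that one activity model is trained per group rather than per area. The "signature" key here is an illustrative stand-in for whatever similarity measure the implementation actually uses:

```python
from collections import defaultdict

def group_areas(areas):
    """Group production areas that share a layout signature, so that one
    activity-recognition model can be trained per group rather than one
    per area ('signature' is a hypothetical similarity key)."""
    groups = defaultdict(list)
    for name, signature in areas:
        groups[signature].append(name)
    return dict(groups)
```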
Robot Training System
A method for configuring an electromechanical system to perform a first task includes accepting a specification of the first task, accepting first user input from an operator related to the first task, the first user input including a representation of user-referenced points, and forming control data for causing the system to perform the first task based on the specification of the first task and the first user input.
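The control-data-forming step amounts to merging the task specification with the operator's referenced points. A rough sketch with illustrative field names (`name`, `speed`, `waypoints` are assumptions, not from the patent):

```python
def form_control_data(task_spec: dict, user_points: list) -> dict:
    """Combine the task specification with operator-referenced points into an
    executable waypoint list (all field names are illustrative)."""
    return {
        "task": task_spec["name"],
        "waypoints": [
            {"point": p, "speed": task_spec.get("speed", 1.0)}
            for p in user_points
        ],
    }
```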
Skill template distribution for robotic demonstration learning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributing skill templates for robotic demonstration learning. One of the methods includes receiving, from the user device by a skill template distribution system, a selection of an available skill template. The skill template distribution system provides the selected skill template, wherein the skill template comprises information representing a state machine of one or more tasks, and wherein the skill template specifies which of the one or more tasks are demonstration subtasks requiring local demonstration data. The skill template distribution system trains a machine learning model for the demonstration subtask using local demonstration data to generate learned parameter values.
Distributed robotic demonstration learning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for distributed robotic demonstration learning. One of the methods includes receiving a skill template to be trained to cause a robot to perform a particular skill having a plurality of subtasks. One or more demonstration subtasks defined by the skill template are identified, wherein each demonstration subtask is an action to be refined using local demonstration data. An online execution system uploads sets of local demonstration data to a cloud-based training system. The cloud-based training system generates respective trained model parameters for each set of local demonstration data. The skill template is executed on the robot using the trained model parameters generated by the cloud-based training system.
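The flow in these two abstracts — a template that marks which subtasks need demonstrations, local demonstration sets handed off for training, trained parameters returned per subtask — can be sketched as follows. The dictionary layout and the mean-as-training stand-in are illustrative assumptions, not the patents' actual representation:

```python
def train_skill_template(template: dict, local_demos: dict) -> dict:
    """Sketch of the distributed flow: for each demonstration subtask named in
    the template, 'upload' its local demonstration set and receive trained
    parameters back (the mean is a toy stand-in for cloud training)."""
    trained = {}
    for subtask in template["subtasks"]:
        if subtask["needs_demonstration"]:
            demos = local_demos[subtask["name"]]
            trained[subtask["name"]] = sum(demos) / len(demos)
    return trained
```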
SYSTEMS, APPARATUS, AND METHODS FOR ROBOTIC LEARNING AND EXECUTION OF SKILLS
Systems, apparatus, and methods are described for robotic learning and execution of skills. A robotic apparatus can include a memory, a processor, sensors, and one or more movable components (e.g., a manipulating element and/or a transport element). The processor can be operatively coupled to the memory, the movable components, and the sensors, and configured to obtain information about an environment, including one or more objects located within the environment. In some embodiments, the processor can be configured to learn skills through demonstration, exploration, user inputs, etc. In some embodiments, the processor can be configured to execute skills and/or arbitrate between different behaviors and/or actions. In some embodiments, the processor can be configured to learn an environmental constraint. In some embodiments, the processor can be configured to learn using a general model of a skill.
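Arbitration between behaviors, mentioned in the abstract above, is commonly done by priority among currently applicable behaviors. A simple hedged sketch of that idea (the field names and priority scheme are assumptions, not from the patent):

```python
def arbitrate(behaviors: list):
    """Pick the highest-priority applicable behavior, a simple stand-in for the
    arbitration the processor performs between competing behaviors/actions."""
    applicable = [b for b in behaviors if b["applicable"]]
    if not applicable:
        return None
    return max(applicable, key=lambda b: b["priority"])["name"]
```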