Patent classifications
G05B2219/40391
TEACHING TOOL, AND TEACHING DEVICE FOR USING OPERATOR'S HAND TO SET TEACHING POINT
A robot control device is provided with a camera that images a teaching tool including a characteristic area, and a characteristic position detection unit that detects the position of the characteristic area. The robot control device includes a movement instruction generating unit that, when an operator moves the teaching tool, changes the position and orientation of the robot so that the camera follows the characteristic area. It further includes a calculation unit that, on the basis of the position of the characteristic area, calculates the position and orientation of an auxiliary coordinate system set for the teaching tool, and a setting unit that, on the basis of the position and orientation of the auxiliary coordinate system, sets the position of the teaching point and the orientation of the robot at the teaching point.
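The abstract above describes a follow-and-record loop: keep the camera on the characteristic area as the operator moves the tool, then derive a teaching point from the auxiliary coordinate frame. A minimal sketch of that loop, with all names and the proportional-control rule being illustrative assumptions rather than anything stated in the patent:

```python
# Hypothetical sketch: move the camera toward the detected characteristic
# area, and record a teaching point from the auxiliary coordinate system.

def follow_step(feature_pos, camera_pos, gain=0.5):
    """Proportional step of the camera toward the detected feature position."""
    return tuple(c + gain * (f - c) for f, c in zip(feature_pos, camera_pos))

def teaching_point(aux_origin, aux_orientation):
    # The teaching point adopts the auxiliary frame's position; the robot's
    # orientation at the point is taken from the frame's orientation.
    return {"position": aux_origin, "orientation": aux_orientation}

cam = (0.0, 0.0, 0.0)
cam = follow_step((1.0, 0.0, 0.0), cam)  # camera closes half the gap
assert cam == (0.5, 0.0, 0.0)
```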
Engagement detection and attention estimation for human-robot interaction
A method includes receiving, from a camera disposed on a robotic device, a two-dimensional (2D) image of a body of an actor and determining, for each respective keypoint of a first subset of a plurality of keypoints, 2D coordinates of the respective keypoint within the 2D image. The plurality of keypoints represent body locations. Each respective keypoint of the first subset is visible in the 2D image. The method also includes determining a second subset of the plurality of keypoints. Each respective keypoint of the second subset is not visible in the 2D image. The method further includes determining, by way of a machine learning model, an extent of engagement of the actor with the robotic device based on (i) the 2D coordinates of keypoints of the first subset and (ii) for each respective keypoint of the second subset, an indicator that the respective keypoint is not visible.
Machine learning device, robot controller, robot system, and machine learning method for learning action pattern of human
A machine learning device for a robot that allows a human and the robot to work cooperatively, the machine learning device including a state observation unit that observes a state variable representing a state of the robot during a period in which the human and the robot work cooperatively; a determination data obtaining unit that obtains determination data for at least one of a level of burden on the human and a working efficiency; and a learning unit that learns a training data set for setting an action of the robot, based on the state variable and the determination data.
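The data flow described here, with a state observation unit, a determination data obtaining unit, and a learning unit that accumulates training pairs, can be sketched as below. All class and field names are illustrative, and the learning unit is a stub that only collects the training set rather than updating a policy:

```python
# Hypothetical sketch of the three units named in the abstract.

class LearningUnit:
    def __init__(self):
        self.training_set = []

    def learn(self, state_variable, determination_data):
        # A real learning unit would update an action policy here; this
        # stub just accumulates (state, determination) training pairs.
        self.training_set.append((state_variable, determination_data))

def observe_state(robot):
    """State observation unit: sample the robot's state during cooperation."""
    return {"joints": robot["joints"], "speed": robot["speed"]}

def obtain_determination(burden_level, efficiency):
    """Determination data: level of burden on the human and working efficiency."""
    return {"burden": burden_level, "efficiency": efficiency}

unit = LearningUnit()
unit.learn(observe_state({"joints": [0.1, 0.2], "speed": 0.5}),
           obtain_determination(burden_level=0.2, efficiency=0.9))
assert len(unit.training_set) == 1
```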
ROBOTIC SYSTEM
The present disclosure relates generally to robotic systems, and more specifically to systems and methods for a robotic platform comprising an on-demand intelligence component. An exemplary computer-enabled method for operating a robot comprises obtaining an instruction for the robot, wherein the instruction is associated with a first user; identifying, based on the instruction, a task; transmitting the task to the robot; receiving, from the robot, a request associated with the task; determining whether the request can be solved by one or more trained machine-learning algorithms; if the request cannot be solved by the one or more trained machine-learning algorithms, transmitting a query to a second user's electronic device; receiving a response to the query from the second user; and causing the task to be performed by the robot based on the response.
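The routing logic in this method, where the platform first tries its trained models and escalates to a human only when none can handle the request, can be sketched as follows. The function names and the `(can_solve, solve)` pair representation are assumptions made for illustration:

```python
# Hypothetical sketch of on-demand intelligence routing: try trained
# models first, then fall back to querying a human (the "second user").

def handle_request(request, models, ask_human):
    """Return a response for the robot's request.

    models: list of (can_solve, solve) callable pairs.
    ask_human: fallback callable that queries a human operator.
    """
    for can_solve, solve in models:
        if can_solve(request):
            return solve(request)   # solvable by a trained model
    return ask_human(request)       # escalate to the second user

# Toy usage: a model that only knows how to handle "pick" requests.
models = [(lambda r: r == "pick", lambda r: "grasp-plan")]
assert handle_request("pick", models, ask_human=lambda r: "ask-operator") == "grasp-plan"
assert handle_request("open-door", models, ask_human=lambda r: "ask-operator") == "ask-operator"
```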
SYSTEM AND PROCESS FOR DECONFOUNDED IMITATION LEARNING
A processor-implemented method includes observing an environment via one or more sensors associated with a robotic device. The processor-implemented method also includes generating, via an inference model, a belief of the environment based on data associated with prior actions of the robotic device in the environment. The processor-implemented method further includes controlling the robotic device to perform an action in the environment based on generating the belief.
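The observe, infer-belief, act loop described above can be sketched as below. The inference model is a stub that merely pairs the observation with the action history; in a real deconfounded-imitation system the belief would come from a learned model. All names are illustrative:

```python
# Hypothetical sketch: the belief combines the current observation with a
# record of the agent's own prior actions, which is what lets the policy
# account for its past influence on the environment.

def infer_belief(observation, action_history):
    return {"obs": observation, "history": tuple(action_history)}

def control_step(env_observation, action_history, policy):
    """One loop iteration: infer a belief, act on it, record the action."""
    belief = infer_belief(env_observation, action_history)
    action = policy(belief)
    action_history.append(action)
    return action

# Toy policy whose action depends on how much it has already acted.
policy = lambda belief: len(belief["history"])
history = []
assert control_step("obs", history, policy) == 0
assert control_step("obs", history, policy) == 1
```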
SYSTEMS, DEVICES, ARTICLES, AND METHODS FOR USING TRAINED ROBOTS
Robotic systems, methods of operation of robotic systems, and storage media including processor-executable instructions are disclosed herein. The system may include a robot, at least one processor in communication with the robot, and an operator interface in communication with the robot and the at least one processor. The method may include executing a first set of autonomous robot control instructions, which causes a robot to autonomously perform at least one task in an autonomous mode, and generating a second set of autonomous robot control instructions from the first set of autonomous robot control instructions and a first set of environmental sensor data received from a sensor. Execution of the second set of autonomous robot control instructions causes the robot to autonomously perform the at least one task. The method may include producing at least one signal that represents the second set of autonomous robot control instructions.
Robotic System Having Shuttle
A robotic system includes a robot having a picking arm to grasp an inventory item and a shuttle. The shuttle includes a platform adapted to receive the inventory item from the picking arm of the robot. The platform is moveable in at least a two-dimensional horizontal plane between a pick-up location located substantially adjacent to the robot and an end location spaced a distance apart from the pick-up location. The system improves efficiency as transportation of the item from the pick-up location to the end location is divided between the robot and the shuttle.
Skill templates for robotic demonstration learning
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using skill templates for robotic demonstration learning. One of the methods includes receiving a skill template for a task to be performed by a robot, wherein the skill template defines a state machine having a plurality of subtasks and one or more respective transition conditions between one or more of the subtasks. Local demonstration data for a demonstration subtask of the skill template is received, where the local demonstration data is generated from a user demonstrating how to perform the demonstration subtask with the robot. A machine learning model is refined for the demonstration subtask and the skill template is executed on the robot, causing the robot to transition through the state machine defined by the skill template to perform the task.
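The skill template described here is a state machine of subtasks with transition conditions between them. A minimal sketch of executing such a template, with the class shape, subtask names, and condition representation all being illustrative assumptions:

```python
# Hypothetical sketch: a skill template as a state machine whose
# transitions fire based on each subtask's result.

class SkillTemplate:
    def __init__(self, start, transitions):
        # transitions: {subtask: [(condition, next_subtask), ...]}
        self.start = start
        self.transitions = transitions

    def execute(self, run_subtask):
        """Run subtasks, following the first transition whose condition
        holds on the subtask's result; returns the visited sequence."""
        current, visited = self.start, []
        while current is not None:
            visited.append(current)
            result = run_subtask(current)
            current = next(
                (nxt for cond, nxt in self.transitions.get(current, [])
                 if cond(result)),
                None,  # no transition fires: terminal subtask
            )
        return visited

template = SkillTemplate(
    "move_to_part",
    {"move_to_part": [(lambda r: r == "ok", "insert")],
     "insert": [(lambda r: r == "ok", "release")]},
)
assert template.execute(lambda s: "ok") == ["move_to_part", "insert", "release"]
```

In the patent's framing, a subtask such as `insert` would be a demonstration subtask backed by a machine learning model refined from the user's local demonstration data; here `run_subtask` is just a callable standing in for that execution.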
ROBOTIC DEMONSTRATION LEARNING
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using simulated local demonstration data for robotic demonstration learning. One of the methods includes receiving perceptual data of a workcell of a robot to be configured to execute a task according to a skill template, wherein the skill template specifies one or more subtasks required to perform the skill, wherein at least one of the subtasks is a demonstration subtask that relies on learning visual characteristics of the workcell. A virtual model is generated of a portion of the workcell. A training system generates simulated local demonstration data from the virtual model of the portion of the workcell and tunes a base control policy for the demonstration subtask using the simulated local demonstration data generated from the virtual model of the portion of the workcell.
ROBOT REMOTE OPERATION CONTROL DEVICE, ROBOT REMOTE OPERATION CONTROL SYSTEM, ROBOT REMOTE OPERATION CONTROL METHOD, AND PROGRAM
A robot remote operation control device includes, in robot remote operation control for an operator to remotely operate a robot capable of gripping an object, an information acquisition unit that acquires operator state information on a state of the operator who operates the robot, an intention estimation unit that estimates a motion intention of the operator who causes the robot to perform a motion, on the basis of the operator state information, and a gripping method determination unit that determines a gripping method for the object on the basis of the estimated motion intention of the operator.
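The pipeline above runs operator state through intention estimation and then gripping-method determination. A toy sketch of that two-stage flow, where the rule and the lookup table are stand-ins for the patent's estimation and determination units and every name is an assumption:

```python
# Hypothetical sketch: operator state -> estimated motion intention ->
# gripping method for the object.

def estimate_intention(operator_state):
    # A real intention-estimation unit might use gaze, hand pose, or arm
    # motion; a single rule stands in for it here.
    if operator_state.get("hand_open"):
        return "place"
    return "grasp"

GRIP_FOR_INTENTION = {
    "grasp": "precision_pinch",
    "place": "release",
}

def determine_grip(operator_state):
    """Gripping method determination unit: map the estimated intention
    to a gripping method."""
    return GRIP_FOR_INTENTION[estimate_intention(operator_state)]

assert determine_grip({"hand_open": False}) == "precision_pinch"
```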