Y10S901/04

System and method for flexible human-machine collaboration

Methods and systems for enabling human-machine collaborations include a generalizable framework that supports dynamic adaptation and reuse of robotic capability representations and human-machine collaborative behaviors. Specifically, a method of feedback-enabled user-robot collaboration includes obtaining a robot capability that models a robot's functionality for performing task actions, specializing the robot capability with an information kernel that encapsulates task-related parameters associated with the task actions, and providing an instance of the specialized robot capability as a robot capability element that controls the robot's functionality based on the task-related parameters. The method also includes obtaining, based on the robot capability element's user interaction requirements, user interaction capability elements via which the robot capability element receives user input and provides user feedback; controlling, based on the task-related parameters, the robot's functionality to perform the task actions in collaboration with the user input; and providing the user feedback, including task-related information generated by the robot capability element in association with the task actions.
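The capability/kernel pattern this abstract describes can be sketched in a few lines. This is a minimal illustrative model; the class and method names (`InformationKernel`, `RobotCapability`, `specialize`, `execute`) are assumptions, not terms defined by the patent.

```python
class InformationKernel:
    """Encapsulates task-related parameters (e.g. grip force, speed limits)."""
    def __init__(self, **params):
        self.params = dict(params)

class RobotCapability:
    """Models a generic robot functionality (e.g. 'grasp')."""
    def __init__(self, name):
        self.name = name

    def specialize(self, kernel):
        """Bind this capability to task-related parameters."""
        return RobotCapabilityElement(self, kernel)

class RobotCapabilityElement:
    """An instance of a specialized capability: it controls the robot using
    the kernel's parameters plus live user input, and records feedback."""
    def __init__(self, capability, kernel):
        self.capability = capability
        self.kernel = kernel
        self.feedback = []

    def execute(self, user_input):
        # Task parameters and user input are merged into one command;
        # feedback about the executed action is retained for the user.
        command = {**self.kernel.params, **user_input}
        self.feedback.append(f"{self.capability.name} executed with {command}")
        return command

grasp = RobotCapability("grasp").specialize(InformationKernel(grip_force=5.0))
cmd = grasp.execute({"target": "part_A"})
```

Specializing the same `RobotCapability` with a different `InformationKernel` yields a reusable element for a new task, which is the reuse the framework claims.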

Industrial robot, controller, and method thereof

An industrial robot having high operability for a user is provided. The industrial robot includes a manipulator, a controller that controls operation of the manipulator, and a detection device that is attached to the manipulator and detects a gesture input. The controller executes a process corresponding to the detected gesture input.
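The gesture-to-process mapping above amounts to a classifier plus a dispatch table. The toy classifier below (tap detection from acceleration spikes), the thresholds, and the handler names are all illustrative assumptions, not from the patent.

```python
def detect_gesture(accel_samples):
    """Toy classifier: tap gestures appear as acceleration spikes on the
    detection device mounted on the manipulator."""
    spikes = sum(1 for a in accel_samples if abs(a) > 2.0)  # assumed threshold
    if spikes >= 2:
        return "double_tap"
    if spikes == 1:
        return "single_tap"
    return None

# Dispatch table: each detected gesture maps to a controller process.
HANDLERS = {
    "single_tap": lambda: "pause_motion",
    "double_tap": lambda: "record_waypoint",
}

def on_sensor_data(accel_samples):
    """Controller-side hook: execute the process mapped to the gesture."""
    gesture = detect_gesture(accel_samples)
    if gesture in HANDLERS:
        return HANDLERS[gesture]()
    return None

result = on_sensor_data([0.1, 3.2, 0.0, 2.8])  # two spikes -> double tap
```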

System and calibration, registration, and training methods
10723022 · 2020-07-28

One variation of a method for manipulating a multi-link robotic arm includes: accessing a virtual model of the target object; extracting an object feature representing the target object from the virtual model; at the robotic arm, scanning a field of view of an optical sensor for the object feature, the optical sensor arranged on a distal end of the robotic arm proximal an end effector; in response to detecting the object feature in the field of view of the optical sensor, calculating a physical offset between the target object and the end effector based on a position of the object feature in the field of view of the optical sensor and a known offset between the optical sensor and the end effector; and driving a set of actuators in the robotic arm to reduce the physical offset.
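The claimed loop — compute the physical offset from the feature's position in the sensor's field of view plus the known sensor-to-effector offset, then drive the actuators to reduce it — is a proportional visual servo. The 2-D sketch below simulates it; the gain, offsets, and target values are assumptions for illustration.

```python
MOUNT_OFFSET = (0.0, 0.05)  # known offset: optical sensor relative to end effector (m)
GAIN = 0.5                  # assumed proportional step toward the target

def physical_offset(feature_in_image):
    """Offset from end effector to target, per the claim: the feature's
    position in the field of view plus the known sensor-to-effector offset."""
    return (feature_in_image[0] + MOUNT_OFFSET[0],
            feature_in_image[1] + MOUNT_OFFSET[1])

def servo_step(effector, offset):
    """Drive the actuators one step to reduce the physical offset."""
    return (effector[0] + GAIN * offset[0], effector[1] + GAIN * offset[1])

# Simulated scene: a fixed target, the camera rigidly mounted on the effector.
TARGET = (0.10, 0.20)
effector = (0.0, 0.0)
for _ in range(20):
    camera = (effector[0] + MOUNT_OFFSET[0], effector[1] + MOUNT_OFFSET[1])
    feature = (TARGET[0] - camera[0], TARGET[1] - camera[1])  # feature in sensor frame
    effector = servo_step(effector, physical_offset(feature))
```

With a gain below 1 the offset shrinks geometrically, so the effector converges onto the target without ever needing the target's absolute position, only its appearance in the eye-in-hand sensor.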

Generating a robot control policy from demonstrations collected via kinesthetic teaching of a robot
11872699 · 2024-01-16

Generating a robot control policy that regulates both motion control and interaction with an environment and/or includes a learned potential function and/or dissipative field. Some implementations relate to resampling temporally distributed data points to generate spatially distributed data points, and generating the control policy using the spatially distributed data points. Some implementations additionally or alternatively relate to automatically determining a potential gradient for data points, and generating the control policy using the automatically determined potential gradient. Some implementations additionally or alternatively relate to determining and assigning a prior weight to each of the data points of multiple groups, and generating the control policy using the weights. Some implementations additionally or alternatively relate to defining and using non-uniform smoothness parameters at each data point, defining and using d parameters for stiffness and/or damping at each data point, and/or obviating the need to utilize virtual data points in generating the control policy.
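The temporal-to-spatial resampling idea can be shown concretely: kinesthetic demonstrations are sampled at a fixed rate, so slow segments contribute many near-identical points, and resampling at equal arc-length spacing evens this out. The sketch below uses linear interpolation on a 2-D path; the spacing value and demonstration data are assumptions.

```python
import math

def resample_spatially(points, spacing):
    """Resample a 2-D path at (approximately) equal arc-length spacing,
    turning temporally distributed samples into spatially distributed ones."""
    out = [points[0]]
    dist_accum = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        while dist_accum + seg >= spacing:
            # Walk along the segment to the next spacing boundary.
            t = (spacing - dist_accum) / seg
            x0, y0 = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            seg -= spacing - dist_accum
            dist_accum = 0.0
            out.append((x0, y0))
        dist_accum += seg
    return out

# A demonstration that lingers near the start: many samples, little motion.
demo = [(0.0, 0.0), (0.01, 0.0), (0.02, 0.0), (0.03, 0.0), (1.0, 0.0), (2.2, 0.0)]
spatial = resample_spatially(demo, spacing=0.5)
```

The four crowded samples near the origin collapse into a single region of the resampled path, so a policy fit to `spatial` is no longer biased toward where the human moved slowly.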

System and calibration, registration, and training methods
10596700 · 2020-03-24

A method for manipulating a multi-link robotic arm includes: at a first time, recording a first optical image through an optical sensor arranged proximal a distal end of the robotic arm proximal an end effector; detecting a global reference feature in a first position in the first optical image; virtually locating a global reference frame based on the first position of the global reference feature in the first optical image; calculating a first pose of the end effector within the global reference frame at approximately the first time based on the first position of the global reference feature in the first optical image; and driving a set of actuators within the robotic arm to move the end effector from the first pose toward an object keypoint, the object keypoint defined within the global reference frame and representing an estimated location of a target object within range of the end effector.
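The geometry in this claim — locate a global reference frame from an observed reference feature, express the end-effector pose in that frame, then drive toward a keypoint defined in the same frame — can be sketched in 2-D with translation only. The mount offset, step size, and all numeric values are illustrative assumptions.

```python
MOUNT_OFFSET = (0.02, 0.0)  # optical sensor relative to end effector (m), assumed known

def end_effector_pose_in_global(feature_in_image):
    """The reference feature marks the global origin. Its position in the
    image gives the camera's position relative to that origin, and the
    known mount offset carries this back to the end effector."""
    cam_in_global = (-feature_in_image[0], -feature_in_image[1])
    return (cam_in_global[0] - MOUNT_OFFSET[0],
            cam_in_global[1] - MOUNT_OFFSET[1])

def step_toward(pose, keypoint, step=0.25):
    """One actuator step moving the end effector toward an object keypoint
    defined within the global reference frame."""
    return (pose[0] + step * (keypoint[0] - pose[0]),
            pose[1] + step * (keypoint[1] - pose[1]))

pose = end_effector_pose_in_global(feature_in_image=(0.30, 0.10))
keypoint = (0.0, 0.0)  # estimated target location, global frame
for _ in range(30):
    pose = step_toward(pose, keypoint)
```

The point of anchoring everything to the global reference frame is that the keypoint stays valid even as the camera's view changes while the arm moves.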

SYSTEM AND METHOD FOR REINFORCING PROGRAMMING EDUCATION THROUGH ROBOTIC FEEDBACK
20190375096 · 2019-12-12

A method for toy robot programming, the toy robot including a set of sensors, the method including, at a user device remote from the toy robot: receiving sensor measurements from the toy robot during physical robot manipulation; in response to detecting a programming trigger event, automatically converting the sensor measurements into a series of puppeted programming inputs; and displaying graphical representations of the set of puppeted programming inputs on a programming interface application on the user device.
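The conversion step — turning raw sensor measurements recorded during physical manipulation into a series of discrete "puppeted" programming inputs — can be sketched as a stream mapper. The sensor names, thresholds, and block vocabulary below are hypothetical, not taken from the patent.

```python
def to_programming_inputs(measurements):
    """Map a stream of (sensor, value) samples, recorded while the child
    physically moves the toy robot, to discrete program blocks."""
    blocks = []
    for sensor, value in measurements:
        if sensor == "wheel" and value > 0:
            block = ("drive_forward", value)
        elif sensor == "wheel" and value < 0:
            block = ("drive_backward", -value)
        elif sensor == "button":
            block = ("play_sound", value)
        else:
            continue  # ignore sensors with no mapped block
        # Merge consecutive samples of the same kind into one block, so
        # a sustained push becomes a single programming input.
        if not blocks or blocks[-1][0] != block[0]:
            blocks.append(block)
    return blocks

recorded = [("wheel", 10), ("wheel", 12), ("button", "beep"), ("wheel", -8)]
program = to_programming_inputs(recorded)
```

The resulting block list is what the programming interface application would render graphically on the user device.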

Robot system

Provided is a robot system including a robot; a control device configured to control the robot; a portable teach pendant connected to the control device; and a teaching handle attached to the robot and connected to the control device, where the teach pendant is provided with a first enable switch configured to permit operation of the robot by the teach pendant, the teaching handle is provided with a second enable switch configured to permit operation of the robot by the teaching handle, and the control device enables operation of the robot by the teaching handle only when the first enable switch is in an off state and the second enable switch is switched to the on state, and enables operation of the robot by the teach pendant only when the second enable switch is in an off state and the first enable switch is switched to the on state.
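The mutual-exclusion rule between the two enable switches is essentially an interlock protocol, sketched below as a toy state machine. This is an illustrative model of the described behavior, not the patent's implementation; the class and method names are assumptions.

```python
class Controller:
    """Interlock: each device's enable request is granted only while the
    other device's enable switch is off."""
    def __init__(self):
        self.pendant_enable = False  # first enable switch (teach pendant)
        self.handle_enable = False   # second enable switch (teaching handle)

    def set_pendant_enable(self, on):
        if on and self.handle_enable:
            return False  # refused: handle currently has control
        self.pendant_enable = on
        return True

    def set_handle_enable(self, on):
        if on and self.pendant_enable:
            return False  # refused: pendant currently has control
        self.handle_enable = on
        return True

    def active_device(self):
        if self.pendant_enable:
            return "teach_pendant"
        if self.handle_enable:
            return "teaching_handle"
        return None

c = Controller()
c.set_handle_enable(True)            # handle gains control
denied = c.set_pendant_enable(True)  # refused while handle is enabled
```

Because at most one enable flag can be set at a time, the robot can never receive simultaneous operation commands from both the pendant and the handle, which is the safety property the claim encodes.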

GENERATING A ROBOT CONTROL POLICY FROM DEMONSTRATIONS COLLECTED VIA KINESTHETIC TEACHING OF A ROBOT
20190344439 · 2019-11-14

Generating a robot control policy that regulates both motion control and interaction with an environment and/or includes a learned potential function and/or dissipative field. Some implementations relate to resampling temporally distributed data points to generate spatially distributed data points, and generating the control policy using the spatially distributed data points. Some implementations additionally or alternatively relate to automatically determining a potential gradient for data points, and generating the control policy using the automatically determined potential gradient. Some implementations additionally or alternatively relate to determining and assigning a prior weight to each of the data points of multiple groups, and generating the control policy using the weights. Some implementations additionally or alternatively relate to defining and using non-uniform smoothness parameters at each data point, defining and using d parameters for stiffness and/or damping at each data point, and/or obviating the need to utilize virtual data points in generating the control policy.

System and method for reinforcing programming education through robotic feedback
10427295 · 2019-10-01

A method for toy robot programming, the toy robot including a set of sensors, the method including, at a user device remote from the toy robot: receiving sensor measurements from the toy robot during physical robot manipulation; in response to detecting a programming trigger event, automatically converting the sensor measurements into a series of puppeted programming inputs; and displaying graphical representations of the set of puppeted programming inputs on a programming interface application on the user device.

Generating a robot control policy from demonstrations collected via kinesthetic teaching of a robot
10391632 · 2019-08-27

Generating a robot control policy that regulates both motion control and interaction with an environment and/or includes a learned potential function and/or dissipative field. Some implementations relate to resampling temporally distributed data points to generate spatially distributed data points, and generating the control policy using the spatially distributed data points. Some implementations additionally or alternatively relate to automatically determining a potential gradient for data points, and generating the control policy using the automatically determined potential gradient. Some implementations additionally or alternatively relate to determining and assigning a prior weight to each of the data points of multiple groups, and generating the control policy using the weights. Some implementations additionally or alternatively relate to defining and using non-uniform smoothness parameters at each data point, defining and using d parameters for stiffness and/or damping at each data point, and/or obviating the need to utilize virtual data points in generating the control policy.