G05B2219/40424

Self-locating robots

A method and apparatus for a robot self-locating on a movement surface. The method may comprise moving a first robot across the movement surface and relative to a workpiece, in which the movement surface faces the workpiece. The method may also form sensor data using a first number of sensors on the first robot as the first robot moves across the movement surface, in which the sensor data represents identifying characteristics of a portion of the movement surface. The method may also determine a location of the first robot on the movement surface using the sensor data. The method may further determine a location of a functional component of the first robot relative to the workpiece using the location of the first robot on the movement surface.
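The self-locating method above can be sketched as two steps: match the sensed identifying characteristics of the surface patch under the robot against a stored map to get the robot's location, then use that location (plus a known tool offset) to place the functional component relative to the workpiece. The feature map, cell size, and offsets below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Hypothetical surface "fingerprint" map: grid cell -> feature vector that
# identifies that patch of the movement surface (e.g., texture statistics).
SURFACE_MAP = {
    (0, 0): np.array([0.1, 0.9, 0.3]),
    (0, 1): np.array([0.7, 0.2, 0.8]),
    (1, 0): np.array([0.4, 0.4, 0.5]),
    (1, 1): np.array([0.9, 0.1, 0.6]),
}
CELL_SIZE = 0.5  # metres per grid cell (assumed)

def locate_robot(sensor_features):
    """Return the (x, y) surface location whose stored features best
    match the features sensed under the robot (nearest-neighbour match)."""
    best_cell = min(SURFACE_MAP,
                    key=lambda c: np.linalg.norm(SURFACE_MAP[c] - sensor_features))
    return (best_cell[0] * CELL_SIZE, best_cell[1] * CELL_SIZE)

def tool_location_on_workpiece(robot_xy, tool_offset_xy):
    """The movement surface faces the workpiece, so a surface (x, y)
    maps directly to workpiece coordinates; add the functional
    component's offset on the robot body."""
    return (robot_xy[0] + tool_offset_xy[0], robot_xy[1] + tool_offset_xy[1])

sensed = np.array([0.68, 0.22, 0.79])   # features read while moving
robot_xy = locate_robot(sensed)          # best match is cell (0, 1)
tool_xy = tool_location_on_workpiece(robot_xy, (0.05, -0.02))
```

A real system would use a richer descriptor and interpolation between cells, but the structure (feature match, then coordinate transform) follows the claimed method.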

Semi-Autonomous Multi-Use Robot System and Method of Operation

A semi-autonomous robot system (10) that includes scanning and scanned-data manipulation, used to control remote operation of the robot within an operating environment.
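The scan-then-manipulate control flow can be illustrated as: scan the operating environment, let a remote operator manipulate the scanned data (here, filtering points by label to pick a goal), and derive a motion command from the selection. All point values and labels are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ScanPoint:
    x: float
    y: float
    label: str  # e.g., "free", "obstacle", "target-candidate"

def scan_environment():
    """Stand-in for a real scanner: returns labelled points in the
    operating environment (values are illustrative)."""
    return [ScanPoint(1.0, 0.0, "free"),
            ScanPoint(2.0, 1.0, "obstacle"),
            ScanPoint(3.0, 0.5, "target-candidate")]

def operator_select(scan, label):
    """Scanned-data manipulation step: a remote operator filters the
    scan down to the point the robot should act on."""
    return next(p for p in scan if p.label == label)

def command_toward(robot_xy, goal):
    """Produce a simple displacement command toward the selected point."""
    return {"dx": goal.x - robot_xy[0], "dy": goal.y - robot_xy[1]}

scan = scan_environment()
goal = operator_select(scan, "target-candidate")
cmd = command_toward((0.0, 0.0), goal)
```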

Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation

A controller is provided for performing dynamic grasping of a target object using visual sensory inputs. The controller includes a robotic interface connected to a robotic arm whose links are connected by joints having actuators and encoders; a gripper on the end-effector of the robotic arm, configured to grasp the target object in response to robot control signals; and a vision sensor mounted on a distal end of the robotic arm adjacent to the gripper, configured to continuously provide visual observations for tracking poses of the target object in a workspace and computing grasp poses. The controller trains the Eye-on-Hand reinforcement-learner policy, tracks the poses of the target object, and generates robot control signals to follow the target object while keeping it in the field of view of the vision sensor, and to grasp the target object in the workspace.
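The eye-on-hand control loop can be sketched as a visual-servoing step: the target pose is expressed in the wrist-mounted camera frame, the end-effector is driven to centre and approach the target, and the gripper closes once within reach. The proportional gain, grasp threshold, and conical field-of-view model are illustrative assumptions standing in for the learned policy.

```python
import numpy as np

def visual_servo_step(target_in_cam, gain=0.5, grasp_dist=0.05):
    """One eye-on-hand servo step. `target_in_cam` is the target
    position (x, y, z) in the wrist camera frame; move the end-effector
    toward the target and signal a grasp when within `grasp_dist`."""
    error = np.asarray(target_in_cam, dtype=float)
    if np.linalg.norm(error) < grasp_dist:
        return np.zeros(3), True      # stop and close the gripper
    return gain * error, False        # velocity command toward target

def in_field_of_view(target_in_cam, half_angle_rad=0.5):
    """Check that the target stays inside the camera's (assumed
    conical) field of view while the arm follows it."""
    x, y, z = target_in_cam
    return z > 0 and np.hypot(x, y) <= z * np.tan(half_angle_rad)

# Target 30 cm ahead of the camera, slightly off-centre:
cmd, grasp = visual_servo_step([0.10, -0.04, 0.30])
visible = in_field_of_view([0.10, -0.04, 0.30])
```

In the patented system this policy is trained by reinforcement learning rather than hand-tuned, but the loop structure (track pose, keep target in view, servo, grasp) matches the abstract.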