Robot Gripper, and Method for Operating a Robot Gripper
20220184812 · 2022-06-16 ·

A robot gripper includes: a drive unit to drive a powertrain with active elements, wherein each element has a working region arranged in a body-fixed manner relative to the robot gripper, each element being movable within and capable of reaching its working region; a control unit to control the drive unit; and a sensor system connected to the control unit to ascertain forces and moments applied externally to individual elements. The control unit is configured such that collision monitoring can be carried out for the elements and, when a collision is detected for an element, the drive unit is actuated according to a specified operation, including: assigning a defined region within the working region to the elements, carrying out collision monitoring for the elements only when they are located outside the assigned region, and deactivating collision monitoring when they are located at least partly within the assigned region.
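
The region-gated monitoring described above can be sketched in a minimal one-dimensional form. The force threshold, element representation, and region bounds below are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Element:
    position: float          # current position within the working region
    external_force: float    # force ascertained by the sensor system

FORCE_LIMIT = 5.0            # assumed collision threshold in newtons

def collision_detected(element: Element,
                       assigned_region: tuple[float, float]) -> bool:
    """Return True only if monitoring is active and the force limit is exceeded.

    Monitoring is deactivated while the element is at least partly
    inside the assigned region.
    """
    lo, hi = assigned_region
    inside = lo <= element.position <= hi
    if inside:
        return False                      # monitoring deactivated
    return abs(element.external_force) > FORCE_LIMIT
```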

Manufacturing control in the metal processing industry

An interior localization system for manufacturing control, with multiple fixedly installed transceivers for determining the position of multiple mobile units, the position being determined in particular by evaluating the propagation time of electromagnetic (radio) signals. The interior localization system is used to associate one of the mobile units with a person in an industrial manufacturing plant that processes steel and/or sheet metal, to determine the position of the associated person by localizing the associated mobile unit using the interior localization system, and to integrate the interior localization system into a manufacturing control system of the industrial manufacturing plant.
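
The runtime-based position determination can be illustrated with a toy two-dimensional multilateration: ranges are derived from one-way signal propagation times, and the range equations are linearized against the first transceiver. The anchor layout and the hand-solved 2x2 system are illustrative assumptions only:

```python
C = 299_792_458.0  # speed of light in m/s

def localize_2d(anchors, runtimes):
    """Estimate a mobile unit's (x, y) position from signal runtimes.

    anchors: three (x, y) positions of fixedly installed transceivers
    runtimes: one-way propagation times in seconds to each transceiver
    Linearizes the range equations by subtracting the first one, then
    solves the resulting 2x2 linear system directly.
    """
    r = [C * t for t in runtimes]
    (x0, y0), (x1, y1), (x2, y2) = anchors
    # Row i: 2(xi-x0)x + 2(yi-y0)y = r0^2 - ri^2 + xi^2 - x0^2 + yi^2 - y0^2
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = r[0]**2 - r[1]**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = r[0]**2 - r[2]**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```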

ROBOTIC HAND SYSTEM AND METHOD FOR CONTROLLING ROBOTIC HAND
20230256618 · 2023-08-17 ·

Provided are a robotic hand system and a method for controlling a robotic hand. According to an example embodiment, a robotic hand system operated by a user may include a robotic hand configured to grip a target object; a first sensor unit disposed on the robotic hand and configured to detect a real-time posture of the robotic hand; a second sensor unit disposed on the robotic hand and configured to detect three-dimensional surface information of the target object as observed from the robotic hand; and a processor configured to infer, based on sensing information of the first sensor unit and the second sensor unit, a motion of the robotic hand conforming to an intention of the user, and to operate the robotic hand according to the inferred motion. The robotic hand may include a finger module including a plurality of frames and one or more joint portions connected to the plurality of frames, the one or more joint portions configured to change positions of the plurality of frames.
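
As a rough illustration of the inference step, a rule-based stand-in for the processor's (presumably learned) model might combine the two sensor modalities as follows; the thresholds and motion labels are invented for the sketch:

```python
def infer_motion(posture_closing: bool, surface_distance_mm: float) -> str:
    """Hypothetical rule-based stand-in for the processor's inference.

    posture_closing: whether the first sensor unit reports the fingers flexing
    surface_distance_mm: nearest target-surface distance from the second unit
    """
    if surface_distance_mm > 50.0:
        return "reach"          # object still far: keep approaching
    if posture_closing:
        return "grip"           # user is closing the hand near the object
    return "hold"               # near the object but no closing intent
```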

METHOD AND COMPUTING SYSTEM FOR PERFORMING MOTION PLANNING BASED ON IMAGE INFORMATION GENERATED BY A CAMERA
20210347051 · 2021-11-11 ·

A system and method for motion planning is presented. The system is configured, when an object is or has been in a camera field of view of a camera, to receive first image information that is generated when the camera has a first camera pose. The system is further configured to determine, based on the first image information, a first estimate of the object structure, and to identify, based on the first estimate of the object structure or based on the first image information, an object corner. The system is further configured to cause an end effector apparatus to move the camera to a second camera pose, and to receive second image information for representing the object's structure. The system is configured to determine a second estimate of the object's structure based on the second image information, and to generate a motion plan based on at least the second estimate.
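
The two-view flow can be sketched with point lists standing in for image information: a corner is picked from the first estimate, both views are merged into a second estimate, and a simple approach-then-descend plan is emitted. The heuristics below are illustrative, not the patented method:

```python
def corner_from_estimate(points):
    """Pick an object corner from a coarse point estimate: the highest point
    that is extremal in the horizontal plane (illustrative heuristic)."""
    top_z = max(p[2] for p in points)
    top = [p for p in points if abs(p[2] - top_z) < 1e-9]
    return max(top, key=lambda p: p[0] + p[1])

def bounding_box(points):
    """Axis-aligned box (mins, maxs): a stand-in for a structure estimate."""
    mins = tuple(min(p[i] for p in points) for i in range(3))
    maxs = tuple(max(p[i] for p in points) for i in range(3))
    return mins, maxs

def motion_plan(first_view, second_view, clearance=0.5):
    """Merge both views into a refined (second) estimate and plan a waypoint
    above the identified corner, followed by a descent onto it."""
    corner = corner_from_estimate(first_view)
    refined = bounding_box(first_view + second_view)   # second estimate
    above = (corner[0], corner[1], refined[1][2] + clearance)
    return [above, corner]
```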

Robot grip detection using non-contact sensors

A method is provided that includes controlling a robotic gripping device to cause a plurality of digits of the robotic gripping device to move towards each other in an attempt to grasp an object. The method also includes receiving, from at least one non-contact sensor on the robotic gripping device, first sensor data indicative of a region between the plurality of digits of the robotic gripping device. The method further includes receiving, from the at least one non-contact sensor on the robotic gripping device, second sensor data indicative of the region between the plurality of digits of the robotic gripping device, where the second sensor data is based on a different sensing modality than the first sensor data. The method additionally includes determining, using an object-in-hand classifier that takes as input the first sensor data and the second sensor data, a result of the attempt to grasp the object.
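
A minimal object-in-hand classifier over the two sensing modalities might look like the following; the logistic weights are made up for illustration and are not trained values from the disclosure:

```python
import math

# Illustrative fusion of two non-contact modalities sensed between the digits.
W_TOF, W_IR, BIAS = -8.0, 6.0, 1.0   # invented weights, not trained values

def object_in_hand(tof_distance_m: float, ir_reflectance: float) -> bool:
    """Tiny logistic model over time-of-flight distance and IR reflectance.

    A short measured distance and a bright IR return both suggest that
    something occupies the region between the digits after the grasp attempt.
    """
    score = W_TOF * tof_distance_m + W_IR * ir_reflectance + BIAS
    prob = 1.0 / (1.0 + math.exp(-score))
    return prob > 0.5
```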

Control method and control system of manipulator

A control method of a manipulator is provided. The method includes photographing a target using a camera and detecting the target using the photographed data. A holding motion for the target is set based on the detected target, and a robot is operated to hold the target based on the set holding motion.

Autonomous unknown object pick and place

A set of one or more potentially graspable features for one or more objects present in a workspace area is determined based on visual data received from a plurality of cameras. For each of at least a subset of the one or more potentially graspable features, one or more corresponding grasp strategies are determined to grasp the feature with a robotic arm and end effector. A score associated with a probability of a successful grasp of a corresponding feature is determined with respect to each of at least a subset of said grasp strategies. A first feature of the one or more potentially graspable features is selected to be grasped using a selected grasp strategy based at least in part on a corresponding score associated with the selected grasp strategy with respect to the first feature. The robotic arm and the end effector are controlled to attempt to grasp the first feature using the selected grasp strategy.
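
The scoring-and-selection step reduces to an argmax over (feature, strategy) pairs, as in this sketch; `score_fn` is a hypothetical stand-in for whatever success-probability model the system actually uses:

```python
def select_grasp(features, score_fn):
    """Pick the (feature, strategy) pair with the highest success probability.

    features: mapping of feature name -> list of candidate grasp strategies
    score_fn: callable(feature, strategy) -> probability of a successful grasp
    Returns (feature, strategy, score) for the best-scoring pair.
    """
    best = None
    for feature, strategies in features.items():
        for strategy in strategies:
            score = score_fn(feature, strategy)
            if best is None or score > best[2]:
                best = (feature, strategy, score)
    return best
```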

Sensorized Robotic Gripping Device

A robotic gripping device is provided. The robotic gripping device includes a palm and a plurality of digits coupled to the palm. The robotic gripping device also includes a time-of-flight sensor arranged on the palm such that the time-of-flight sensor is configured to generate time-of-flight distance data in a direction between the plurality of digits. The robotic gripping device additionally includes an infrared camera, including an infrared illumination source, where the infrared camera is arranged on the palm such that the infrared camera is configured to generate grayscale image data in the direction between the plurality of digits.
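
A simple data structure for the palm-mounted sensor pair, with an illustrative two-modality agreement check; the distance and brightness thresholds are invented for the sketch:

```python
from dataclasses import dataclass

# Hypothetical reading from the palm-mounted sensors: the time-of-flight
# sensor yields a distance in the direction between the digits, and the IR
# camera yields a grayscale frame in the same direction.

@dataclass
class PalmSensing:
    tof_distance_m: float
    ir_frame: list          # rows of grayscale pixel intensities (0..255)

    def object_between_digits(self, max_distance_m: float = 0.08,
                              min_brightness: float = 40.0) -> bool:
        """Both modalities must agree: a nearby ToF return and a bright
        IR image (lit by the camera's own IR illumination source)."""
        pixels = [p for row in self.ir_frame for p in row]
        mean = sum(pixels) / len(pixels)
        return self.tof_distance_m < max_distance_m and mean > min_brightness
```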

COMPUTER CONTROLLED POSITIONING OF DELICATE OBJECTS WITH LOW-CONTACT FORCE INTERACTION USING A ROBOT
20220314453 · 2022-10-06 ·

A computer positions an object using a computer-controlled positioning device. The computer is operatively associated with the positioning device via a control interface. The positioning device has a substantially hollow interior chamber. The computer identifies a selected object located at a primary location within the interior chamber and having a primary orientation with respect thereto. The computer identifies a first array of elements constructed and arranged to generate contact-free support forces sufficient to maintain the selected object at the primary location. The computer identifies a second array of elements constructed and arranged to provide contact-free interaction forces sufficient to move the selected object within the interior chamber. The computer interacts with the selected object, using the control interface to adjust at least one of the support forces or the interaction forces, to place the selected object into at least one of a secondary location or a secondary orientation.
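
One way to picture the force adjustment: the support array cancels the object's weight while the interaction array applies a proportional corrective force toward the secondary location. The gains, axes, and interfaces below are assumptions for the sketch, not taken from the patent:

```python
def force_commands(position, target, mass, kp=2.0, g=9.81):
    """Per-axis force set-points for the two element arrays (illustrative).

    position, target: current and secondary (x, y, z) locations in metres
    mass: object mass in kg
    The first (support) array holds the object against gravity; the second
    (interaction) array applies a proportional force toward the target.
    """
    support = (0.0, 0.0, mass * g)    # contact-free support: cancel weight
    interaction = tuple(kp * (t - p) for p, t in zip(position, target))
    return support, interaction
```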