G05B2219/40575

TACTILE AND/OR OPTICAL DISTANCE SENSOR, SYSTEM HAVING SUCH A DISTANCE SENSOR, AND METHOD FOR CALIBRATING SUCH A DISTANCE SENSOR OR SUCH A SYSTEM
20210299891 · 2021-09-30

A tactile and/or optical distance sensor includes a housing, which has at least one elongate portion; a measurement arm, which is arranged in the housing, at least partially extends through the elongate portion, and has a tactile and/or an optical probe element at one end; a transducer, which is configured to capture a position of the tactile probe element or a signal of the optical probe element and to generate associated probe element measurement signals; and an advance unit, with which the housing is linearly displaceable along an advance direction. A strain sensor is located in the region of the measurement arm extending through the elongate portion, or in a region directly adjoining it. In addition, a system for measuring the roughness of a surface of a workpiece and a method for calibrating a distance sensor or such a system are provided.

Tactile Sensor

A tactile sensor including a cap having a top surface and an undersurface. The undersurface includes pins, each of which bears a mark. A portion of the undersurface is attachable to a device. A camera positioned in view of the marks captures images of the marks as they are set in motion by elastic deformation of the top surface of the cap. A processor receives the captured images and determines a set of relative positions of the marks by identifying measured image coordinates of their locations in the captured images. Using a stored machine vision algorithm, the processor determines a net force tensor acting on the top surface by matching the set of relative positions of the marks to a stored set of previously learned relative positions of the marks in motion. A controller controls the device in response to the net force tensor determined in the processor.
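
The matching step described above can be sketched as a nearest-neighbour lookup: observed marker positions are compared against previously learned displacement patterns, each labelled with a net force. This is a minimal illustration, not the patented machine-vision algorithm; the array shapes and toy values are assumptions.

```python
import numpy as np

def estimate_force(observed, learned_patterns, learned_forces):
    """Return the force associated with the closest learned marker pattern.

    observed         -- (N, 2) array of measured marker image coordinates
    learned_patterns -- list of (N, 2) arrays captured during training
    learned_forces   -- list of force vectors, one per learned pattern
    """
    # Frobenius distance between the observed layout and each learned one
    distances = [np.linalg.norm(observed - p) for p in learned_patterns]
    return learned_forces[int(np.argmin(distances))]

# Toy usage: markers at rest vs. uniformly displaced under a normal press.
rest = np.zeros((4, 2))
pressed = np.full((4, 2), 0.5)
force = estimate_force(np.full((4, 2), 0.48),
                       [rest, pressed],
                       [[0.0, 0.0, 0.0], [0.0, 0.0, 1.2]])
```

A real implementation would match per-marker correspondences rather than a whole-pattern distance, but the lookup structure is the same.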

HAPTIC PHOTOGRAMMETRY IN ROBOTS AND METHODS FOR OPERATING THE SAME
20230405835 · 2023-12-21

Robots, robot systems, and methods for operating the same based on environment models including haptic data are described. An environment model which includes representations of objects in an environment is accessed, and a robot system is controlled based on the environment model. The environment model includes haptic data, which provides more effective control of the robot. The environment model is populated based on visual profiles, haptic profiles, and/or other data profiles for objects or features retrieved from respective databases. Identification of objects or features can be based on cross-referencing between visual and haptic profiles, to populate the environment model with data not directly collected by the robot that is populating the model, or data not directly collected from the actual objects or features in the environment.
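
As a minimal sketch of the cross-referencing idea, assume flat dictionaries stand in for the visual- and haptic-profile databases (the names and fields below are invented): an object identified visually is looked up in the haptic database, so the environment model gains haptic data the robot never collected directly.

```python
# Hypothetical profile databases keyed by object identity.
VISUAL_PROFILES = {"mug": {"color": "white", "height_mm": 95}}
HAPTIC_PROFILES = {"mug": {"friction": 0.4, "stiffness": "rigid"}}

def populate_model(detected_objects):
    """Build an environment model entry per visually detected object."""
    model = {}
    for name in detected_objects:
        entry = dict(VISUAL_PROFILES.get(name, {}))
        # Cross-reference: merge in haptic data retrieved by identity,
        # not measured in this environment.
        entry.update(HAPTIC_PROFILES.get(name, {}))
        model[name] = entry
    return model

model = populate_model(["mug"])
```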

Incorporating Vision System and In-Hand Object Location System for Object Manipulation and Training

A system and method of object manipulation and training including providing at least one robotic hand including a plurality of grippers connected to a body and providing a plurality of cameras disposed in a periphery surface of the grippers. The method also includes providing a plurality of tactile sensors disposed in the periphery surface of the grippers and actuating the grippers to grasp an object. The method further includes detecting a position of the object with respect to the robotic hand via a first image feed from the tactile sensors and detecting a position of the object with respect to the robotic hand via a second image feed from the cameras. The method also includes generating instructions to grip and manipulate an orientation of the object based on the first and the second image feeds for a visualization of the object relative to the robotic hand.

Method of Automated Calibration for In-Hand Object Location System

A method of automated in-hand calibration including providing at least one robotic hand including a plurality of grippers connected to a body and providing at least one camera disposed on a periphery surface of the plurality of grippers. The method also includes providing at least one tactile sensor disposed in the periphery surface and actuating the plurality of grippers to grasp an object. The method further includes locating a position of the object with respect to the at least one robotic hand and calibrating a distance parameter via the at least one camera. The method also includes calibrating the at least one tactile sensor against the at least one camera and generating instructions to grip and manipulate an orientation of the object via an image feed from the at least one camera for a visualization of the object.
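
One simple reading of "calibrating a distance parameter via the camera" is a least-squares fit of camera-derived distances against raw tactile readings. The linear model and toy data below are assumptions for illustration, not the patented procedure.

```python
def calibrate(tactile_raw, camera_dist):
    """Fit camera_dist ≈ scale * tactile_raw + offset by least squares."""
    n = len(tactile_raw)
    mean_t = sum(tactile_raw) / n
    mean_c = sum(camera_dist) / n
    cov = sum((t - mean_t) * (c - mean_c)
              for t, c in zip(tactile_raw, camera_dist))
    var = sum((t - mean_t) ** 2 for t in tactile_raw)
    scale = cov / var
    offset = mean_c - scale * mean_t
    return scale, offset

# Toy data: the camera reports twice the tactile reading, shifted by 1 mm.
scale, offset = calibrate([1.0, 2.0, 3.0], [3.0, 5.0, 7.0])
```

Once fitted, `scale` and `offset` would be applied to subsequent tactile readings to express them in the camera's distance units.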

Robot Grip Detection Using Non-Contact Sensors

A method is provided that includes controlling a robotic gripping device to cause a plurality of digits of the robotic gripping device to move towards each other in an attempt to grasp an object. The method also includes receiving, from at least one non-contact sensor on the robotic gripping device, first sensor data indicative of a region between the plurality of digits of the robotic gripping device. The method further includes receiving, from the at least one non-contact sensor on the robotic gripping device, second sensor data indicative of the region between the plurality of digits of the robotic gripping device, where the second sensor data is based on a different sensing modality than the first sensor data. The method additionally includes determining, using an object-in-hand classifier that takes as input the first sensor data and the second sensor data, a result of the attempt to grasp the object.
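
For illustration only, a toy object-in-hand decision fusing two assumed non-contact modalities might look like the following. The real classifier would be learned from data; the modality names (time-of-flight distance, IR reflectance) and thresholds here are invented.

```python
def object_in_hand(tof_distance_mm, ir_reflectance):
    """Classify grasp success from two non-contact sensing modalities.

    A short distance between the digits AND strong reflectance suggest
    an object is held; both thresholds are illustrative assumptions.
    """
    return tof_distance_mm < 30.0 and ir_reflectance > 0.6

grasped = object_in_hand(tof_distance_mm=12.0, ir_reflectance=0.8)
missed = object_in_hand(tof_distance_mm=80.0, ir_reflectance=0.1)
```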

Object Grasp System and Method
20200331709 · 2020-10-22

A grasping system includes a robotic arm having a gripper. A fixed sensor monitors a grasp area, and an onboard sensor that moves with the gripper also monitors the area. A controller receives information indicative of a position of an object to be grasped and operates the robotic arm to bring the gripper into a grasp position adjacent the object based on information provided by the fixed sensor. The controller is also programmed to operate the gripper to grasp the object in response to information provided by the onboard sensor.

TACTILE INFORMATION ESTIMATION APPARATUS, TACTILE INFORMATION ESTIMATION METHOD, AND PROGRAM

According to some embodiments, a tactile information estimation apparatus may include one or more memories and one or more processors. The one or more processors are configured to input at least first visual information of an object acquired by a visual sensor to a model. The model is generated based on visual information and tactile information linked to the visual information. The one or more processors are configured to extract, based on the model, a feature amount relating to tactile information of the object.
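
A minimal stand-in for the described model, assuming a nearest-neighbour lookup over linked visual/tactile feature pairs (the abstract does not specify the model form, so this structure is an assumption):

```python
import numpy as np

class VisuoTactileModel:
    """Toy model built from visual features linked to tactile labels."""

    def __init__(self, visual_feats, tactile_feats):
        self.visual = np.asarray(visual_feats, dtype=float)
        self.tactile = list(tactile_feats)

    def estimate_tactile(self, visual_query):
        """Return the tactile label linked to the nearest visual feature."""
        d = np.linalg.norm(
            self.visual - np.asarray(visual_query, dtype=float), axis=1)
        return self.tactile[int(np.argmin(d))]

# Two training pairs: a glossy-looking object felt smooth, a matte one rough.
model = VisuoTactileModel([[0.9, 0.1], [0.1, 0.8]], ["smooth", "rough"])
feel = model.estimate_tactile([0.85, 0.15])
```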

Systems and methods for visuo-tactile object pose estimation

Systems and methods for visuo-tactile object pose estimation are provided. In one embodiment, a method includes receiving image data about an object and receiving depth data about the object. The method also includes generating a visual estimate of the object based on the image data and the depth data. The method further includes receiving tactile data about the object. The method yet further includes generating a tactile estimate of the object based on the tactile data. The method includes estimating a pose of the object based on the visual estimate and the tactile estimate.
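
One conventional way to combine the two estimates produced by such a method, shown here only as a sketch under the assumption of known per-estimate variances, is an inverse-variance weighted average of the visually and tactilely estimated positions:

```python
def fuse_pose(visual_pose, visual_var, tactile_pose, tactile_var):
    """Inverse-variance weighted fusion of two position estimates."""
    w_v = 1.0 / visual_var
    w_t = 1.0 / tactile_var
    return [(w_v * v + w_t * t) / (w_v + w_t)
            for v, t in zip(visual_pose, tactile_pose)]

# The tactile estimate has the smaller variance, so the fused position
# lies closer to it than to the visual estimate.
pose = fuse_pose([1.0, 0.0, 0.0], 4.0, [2.0, 0.0, 0.0], 1.0)
```

Full 6-DoF pose fusion would treat orientation separately (e.g. on quaternions), but the weighting principle is the same.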