Patent classifications
G05B2219/40625
Interactive Tactile Perception Method for Classification and Recognition of Object Instances
A controller is provided for interactive classification and recognition of an object in a scene using tactile feedback. The controller includes an interface configured to transmit control signals and to receive sensor signals from a robot arm, gripper signals from a gripper attached to the robot arm, tactile signals from sensors attached to the gripper, and signals from at least one vision sensor; a memory module to store robot control programs and a classification and recognition model; and a processor to generate, based on the control program and a grasp pose on the object, control signals configured to control the robot arm to grasp the object with the gripper. Further, the processor is configured to compute a tactile feature representation from the tactile sensor signals and to repeat gripping the object and computing a tactile feature representation over the set of grasp poses, after which the processor processes the ensemble of tactile features to learn a model that is used to classify or recognize the object as known or unknown.
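The grasp-and-sense loop in this abstract can be sketched as follows. The feature extraction (a mean over time steps) and the classifier (nearest centroid with a distance threshold for "unknown") are hypothetical stand-ins, not the patent's actual model.

```python
import numpy as np

def tactile_feature(tactile_signals):
    """Reduce raw tactile readings from one grasp to a feature vector.
    Here: a simple mean over time steps (hypothetical choice)."""
    return np.asarray(tactile_signals, dtype=float).mean(axis=0)

def collect_ensemble(grasp_poses, sense_fn):
    """Repeat gripping at each pose and compute one tactile feature per grasp."""
    return np.stack([tactile_feature(sense_fn(pose)) for pose in grasp_poses])

def classify_known_unknown(ensemble, known_centroids, threshold=1.0):
    """Label the object as a known class or 'unknown' by distance from the
    ensemble mean to per-class centroids of previously learned ensembles."""
    query = ensemble.mean(axis=0)
    label, dist = min(
        ((name, np.linalg.norm(query - c)) for name, c in known_centroids.items()),
        key=lambda pair: pair[1],
    )
    return label if dist <= threshold else "unknown"
```

Anything farther than the threshold from every learned centroid falls back to "unknown", mirroring the abstract's known/unknown distinction.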
Haptic photogrammetry in robots and methods for operating the same
Robots, robot systems, and methods for operating the same based on environment models including haptic data are described. An environment model that includes representations of objects in an environment is accessed, and a robot system is controlled based on the environment model. The environment model includes haptic data, which enables more effective control of the robot. The environment model is populated based on visual profiles, haptic profiles, and/or other data profiles for objects or features retrieved from respective databases. Identification of objects or features can be based on cross-referencing between visual and haptic profiles, to populate the environment model with data not directly collected by the robot that is populating the model, or data not directly collected from the actual objects or features in the environment.
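One way to read the cross-referencing idea is as a pair of profile databases keyed by object identity: a visual match identifies the object, and its haptic profile is then retrieved to populate the environment model with data the robot never directly collected. The database contents and field names below are hypothetical illustrations.

```python
# Hypothetical profile databases keyed by object identity.
VISUAL_PROFILES = {
    "mug": {"color": "white", "silhouette": "cylinder+handle"},
    "sponge": {"color": "yellow", "silhouette": "block"},
}
HAPTIC_PROFILES = {
    "mug": {"stiffness": "rigid", "friction": 0.4},
    "sponge": {"stiffness": "soft", "friction": 0.9},
}

def identify_visually(observation):
    """Match an observed silhouette against the visual profile database."""
    for name, profile in VISUAL_PROFILES.items():
        if profile["silhouette"] == observation["silhouette"]:
            return name
    return None

def populate_environment_model(observation):
    """Cross-reference: a visual identification pulls in haptic data that
    was never directly measured on this object instance."""
    name = identify_visually(observation)
    if name is None:
        return None
    return {"object": name,
            "visual": VISUAL_PROFILES[name],
            "haptic": HAPTIC_PROFILES[name]}
```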
END EFFECTOR DEVICE
The end effector device includes an end effector having a palm and a plurality of fingers, a drive device, a position shift direction determination unit, and a position shift correction unit. Each finger includes a tactile sensor unit capable of detecting external forces in at least three axial directions. When at least one of the external forces detected by the tactile sensor unit is a specified value or more, the position shift direction determination unit determines, based on the detection result, in which direction the grasped object is position-shifted with respect to the fitting recess. The position shift correction unit then moves the palm in a direction opposite to the position shift direction determined by the position shift direction determination unit.
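The trigger-then-correct logic can be sketched as below. The rule for inferring the shift direction (net lateral force summed over fingers) and the force threshold are assumptions for illustration; the patent does not commit to this particular estimate.

```python
def detect_shift_direction(finger_forces, threshold=2.0):
    """If any per-finger force component reaches the specified value,
    estimate the shift direction. Hypothetical rule: the object is taken
    to be shifted along the net force summed over all fingers."""
    triggered = any(abs(f) >= threshold
                    for forces in finger_forces for f in forces)
    if not triggered:
        return None
    return [sum(axis) for axis in zip(*finger_forces)]

def correction_move(shift_direction):
    """Move the palm opposite to the detected position-shift direction."""
    return [-component for component in shift_direction]
```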
Detecting slippage from robotic grasp
A plurality of sensors is configured to provide corresponding outputs that reflect sensed values associated with engagement of a robotic arm end effector with an item. The outputs of one or more of the sensors are used to determine inputs to a multi-modal model configured to provide, based at least in part on those inputs, an output associated with slippage of the item within or from the grasp of the end effector. A determination associated with slippage is made based at least in part on the model's output, and a responsive action is taken based at least in part on that determination.
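The pipeline, from sensor outputs through a multi-modal model to a responsive action, might look like the sketch below. The chosen features (grip-force drop, tactile vibration level), the logistic stand-in for the model, and the weights are all hypothetical.

```python
import math

def multimodal_inputs(readings):
    """Derive model inputs from raw sensor outputs. Hypothetical features:
    grip-force change and tactile micro-vibration level."""
    return [readings["force_now"] - readings["force_prev"],
            readings["vibration"]]

def slip_score(inputs, weights=(-0.5, 2.0), bias=-1.0):
    """A stand-in multi-modal model: a weighted sum of the inputs
    squashed through a sigmoid to a slippage probability in (0, 1)."""
    z = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-z))

def responsive_action(score, threshold=0.5):
    """Take a responsive action when slippage is indicated."""
    return "increase_grip_force" if score > threshold else "continue"
```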
Tactile Sensing System
In a tactile sensing system, a sensor portion of a tactile sensor is provided at a grasping portion of a robot and outputs plural signals respectively corresponding to plural first electrodes that face a second electrode. On the basis of all or some of the plural signals, an output section calculates respective pressure values at plural pressure detecting positions within the contacting surface of the sensor portion that contacts a workpiece, and outputs data of a pressure distribution. Further, on the basis of all or some of the plural signals, the output section calculates one aggregate shearing force value for the entire contacting surface and outputs data of the aggregate shearing force value.
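The output section's two products, a per-position pressure distribution and a single aggregate shear value, can be sketched as below. The linear signal-to-pressure gain and the moment-based shear estimate are hypothetical choices, not the patent's actual calculation.

```python
def pressure_distribution(signals, gain=0.5):
    """Convert per-electrode signals into pressure values at the
    corresponding detecting positions (hypothetical linear gain)."""
    return [gain * s for s in signals]

def aggregate_shear(signals, positions):
    """One aggregate shearing-force value for the whole contacting surface.
    Hypothetical estimate: the signal-weighted first moment of the
    electrode positions along the shear axis."""
    total = sum(signals)
    if total == 0:
        return 0.0
    return sum(s * x for s, x in zip(signals, positions)) / total
```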
Flex-rigid sensor array structure for robotic systems
A flex-rigid sensor apparatus for providing sensor data from sensors disposed on an end-effector/gripper to the control circuit of an arm-type robotic system. The apparatus includes piezo-type pressure sensors sandwiched between lower and upper PCB stack-up structures respectively fabricated using rigid PCB (e.g., FR-4) and flexible PCB (e.g., polyimide) manufacturing processes. Additional (e.g., temperature and proximity) sensors are mounted on the upper/flexible stack-up structure. A spacer structure is disposed between the two stack-up structures and includes an insulating material layer defining openings that accommodate the pressure sensors. Copper film layers are configured to provide Faraday cages around each pressure sensor. The pressure sensors, additional sensors and Faraday cages are connected to sensor data processing and control circuitry (e.g., analog-to-digital converter circuits) by way of signal traces formed in the lower and upper stack-up structures and in the spacer structure. An encapsulation layer is formed on the upper PCB stack-up structure.
Systems and methods for determining pose of objects held by flexible end effectors
Systems and methods for determining a pose of an object held by a flexible end effector of a robot are disclosed. A method of determining a pose of the object includes receiving tactile data from tactile sensors, receiving curvature data from curvature sensors, determining a plurality of segments of the flexible end effector from the curvature data, assigning a frame to each segment, determining a location of each point of contact between the object and the flexible end effector from the tactile data, calculating a set of relative transformations and determining a location of each point relative to one of the frames, generating continuous data from the determined location of each point, and providing the continuous data to a pose determination algorithm that uses the continuous data to determine the pose of the object.
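The segment-frame and relative-transformation steps can be sketched in a planar (2D) simplification: per-segment curvature is integrated to assign each segment a frame, and a tactile contact point expressed in a segment frame is transformed into the world frame. The unit segment length and 2D restriction are assumptions for illustration.

```python
import math

def segment_frames(curvatures, seg_len=1.0):
    """Assign a frame (origin x, origin y, heading) to each segment of a
    flexible finger by integrating per-segment curvature (planar sketch)."""
    frames, x, y, theta = [], 0.0, 0.0, 0.0
    for kappa in curvatures:
        frames.append((x, y, theta))
        theta += kappa * seg_len
        x += seg_len * math.cos(theta)
        y += seg_len * math.sin(theta)
    return frames

def contact_in_world(frame, local_pt):
    """Transform a tactile contact point, expressed in a segment frame,
    into the world frame (the relative-transformation step)."""
    x0, y0, th = frame
    lx, ly = local_pt
    return (x0 + lx * math.cos(th) - ly * math.sin(th),
            y0 + lx * math.sin(th) + ly * math.cos(th))
```

The world-frame contact points produced this way form the continuous data that a downstream pose-determination algorithm would consume.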
SYSTEMS AND METHODS FOR VISUO-TACTILE OBJECT POSE ESTIMATION
Systems and methods for visuo-tactile object pose estimation are provided. In one embodiment, a computer-implemented method includes receiving image data, depth data, and tactile data about the object in the environment. The method also includes generating a visual estimate of the object that includes an object point cloud, and generating a tactile estimate of the object that includes a surface point cloud based on the tactile data. The method further includes estimating a pose of the object, based on the visual estimate and the tactile estimate, by fusing the object point cloud and the surface point cloud in a 3D space. The pose is a six-dimensional pose.
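Fusing the two point clouds and reading off a six-dimensional pose (translation plus orientation) can be sketched geometrically: centroid for translation, principal axes for orientation. This is a simple stand-in under stated assumptions, not the patent's actual estimator.

```python
import numpy as np

def fuse_clouds(object_cloud, surface_cloud):
    """Fuse the visually derived object point cloud with the tactile
    surface point cloud in a common 3D frame."""
    return np.vstack([object_cloud, surface_cloud])

def estimate_pose(fused):
    """Estimate a 6-D pose from the fused cloud: the centroid gives the
    translation; the principal axes (via SVD of the centered points)
    give a rotation matrix for the orientation."""
    centroid = fused.mean(axis=0)
    _, _, vt = np.linalg.svd(fused - centroid)
    rotation = vt.T  # columns are the principal axes
    return centroid, rotation
```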
Robotic touch perception
An apparatus such as a robot capable of performing goal oriented tasks may include one or more touch sensors to receive touch perception feedback on the location of objects and structures within an environment. A fusion engine may be configured to combine touch perception data with other types of sensor data such as data received from an image or distance sensor. The apparatus may combine distance sensor data with touch sensor data using inference models such as Bayesian inference. The touch sensor may be mounted onto an adjustable arm of a robot. The apparatus may use the data it has received from both a touch sensor and distance sensor to build a map of its environment and perform goal oriented tasks such as cleaning or moving objects.
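The Bayesian fusion of distance and touch readings mentioned above can be sketched per map cell: each reading updates the cell's occupancy probability via Bayes' rule. The likelihood values, which treat touch as more reliable than the distance sensor, are hypothetical sensor models.

```python
def bayes_update(prior, likelihood_occupied, likelihood_free):
    """Bayesian update of the probability that a map cell is occupied,
    given a reading's likelihood under each hypothesis."""
    num = likelihood_occupied * prior
    return num / (num + likelihood_free * (1.0 - prior))

def fuse_cell(prior, distance_hit, touch_hit):
    """Fuse one distance-sensor reading and one touch reading for a cell.
    Hypothetical sensor models: touch (0.95/0.05) is treated as far more
    reliable than the distance sensor (0.7/0.3)."""
    p = bayes_update(prior,
                     0.7 if distance_hit else 0.3,
                     0.3 if distance_hit else 0.7)
    p = bayes_update(p,
                     0.95 if touch_hit else 0.05,
                     0.05 if touch_hit else 0.95)
    return p
```

Repeating this update over cells as the arm-mounted touch sensor and the distance sensor report readings builds up the environment map the abstract describes.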