G05B2219/40575

HAND CONTROL APPARATUS AND HAND CONTROL SYSTEM
20190308333 · 2019-10-10

A hand control apparatus including: an extracting unit that extracts, from a storage unit storing shapes of plural types of objects in association with grip patterns, the grip pattern associated with the stored shape closest to the shape of the object acquired by a shape acquiring unit; a position and posture calculating unit that calculates a gripping position and posture of the hand in accordance with the extracted grip pattern; a hand driving unit that causes the hand to grip the object based on the calculated gripping position and posture; a determining unit that determines whether the gripped state of the object is appropriate based on information acquired by at least one of the shape acquiring unit, a force sensor, and a tactile sensor; and a gripped state correcting unit that corrects at least one of the gripping position and the posture when the gripped state of the object is determined to be inappropriate.
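The extraction step described in this abstract — matching an acquired shape against stored shapes and returning the associated grip pattern — can be sketched minimally in Python. The bounding-box shape descriptor, the stored entries, and the grip-pattern names below are illustrative assumptions, not the patent's data structures.

```python
import math

# Illustrative storage unit: shape descriptors (bounding-box width, height,
# depth in metres) associated with grip patterns. All entries are assumptions
# for this sketch, not data from the patent.
STORAGE = {
    (0.06, 0.06, 0.12): "cylinder_side_grip",
    (0.20, 0.10, 0.02): "flat_pinch_grip",
    (0.08, 0.08, 0.08): "sphere_top_grip",
}

def extract_grip_pattern(acquired_shape):
    """Return the grip pattern whose stored shape descriptor is closest
    (Euclidean distance) to the shape acquired by the shape acquiring unit."""
    closest = min(STORAGE, key=lambda stored: math.dist(stored, acquired_shape))
    return STORAGE[closest]
```

A gripped-state correcting loop would then re-invoke the position and posture calculation whenever the determining unit flags the grip as inappropriate.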

Information processing apparatus, information processing method, and program

A position and an orientation of an object are measured with high accuracy. An approximate position-orientation of a target object is obtained; positional information of the target object is obtained by measuring the target object using a noncontact sensor; positional information of contact positions is obtained by bringing a contact sensor into contact with the target object; and a position-orientation of the target object is obtained by associating shape information of the target object with the positional information of the target object and the positional information of the contact positions in accordance with the approximate position-orientation.
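The association step can be illustrated with a deliberately simplified sketch: starting from the approximate position, residuals from noncontact and contact measurements are averaged, with contact measurements weighted more heavily. The residual representation and the weighting scheme are assumptions for illustration; the abstract does not specify the estimator.

```python
def refine_position(approx, noncontact_residuals, contact_residuals, w_contact=10.0):
    """Refine an approximate 3D position from measurement residuals
    (observed point minus predicted point under the approximate pose).
    Contact-sensor residuals are trusted w_contact times more than
    noncontact (e.g. depth camera) residuals -- an assumed weighting."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for r in noncontact_residuals:
        for i in range(3):
            num[i] += r[i]
        den += 1.0
    for r in contact_residuals:
        for i in range(3):
            num[i] += w_contact * r[i]
        den += w_contact
    # Shift the approximate position by the weighted mean residual.
    return tuple(a + n / den for a, n in zip(approx, num))
```

With one noisy noncontact residual and one exact contact residual, the refined position moves only a small fraction of the noncontact error, reflecting the higher trust placed in touch.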

Robot Grip Detection Using Non-Contact Sensors

A method is provided that includes controlling a robotic gripping device to cause a plurality of digits of the robotic gripping device to move towards each other in an attempt to grasp an object. The method also includes receiving, from at least one non-contact sensor on the robotic gripping device, first sensor data indicative of a region between the plurality of digits of the robotic gripping device. The method further includes receiving, from the at least one non-contact sensor on the robotic gripping device, second sensor data indicative of the region between the plurality of digits of the robotic gripping device, where the second sensor data is based on a different sensing modality than the first sensor data. The method additionally includes determining, using an object-in-hand classifier that takes as input the first sensor data and the second sensor data, a result of the attempt to grasp the object.
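The object-in-hand classification would in practice be a learned model over the two sensing modalities; the toy rule-based stand-in below uses hypothetical modality names (time-of-flight distance and active-IR reflectance) and thresholds that are pure assumptions, only to show the shape of the decision.

```python
def classify_grasp(tof_distance_mm, reflectance):
    """Toy object-in-hand classifier over two non-contact sensing modalities.
    Modality names and thresholds are illustrative assumptions, not the
    patent's classifier."""
    object_close = tof_distance_mm < 30.0   # something sits between the digits
    surface_seen = reflectance > 0.4        # ...and reflects the emitted light
    return "object_in_hand" if (object_close and surface_seen) else "grasp_failed"
```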

Tactile sensor system and method for inspecting the condition of a structure

In a computer-implemented method and system for capturing the condition of a structure, the structure is scanned with a three-dimensional (3D) contact scanner. The scanner includes a tactile sensor system having at least one tactile sensor for generating 3D data points based on tactile feedback resulting from physical contact with at least part of the structure. A 3D model is constructed from the 3D data points and is then analyzed to determine the condition of the structure.

Systems and methods for visuo-tactile object pose estimation

Systems and methods for visuo-tactile object pose estimation are provided. In one embodiment, a computer-implemented method includes receiving image data, depth data, and tactile data about an object in an environment. The method also includes generating a visual estimate of the object that includes an object point cloud. The method further includes generating a tactile estimate of the object that includes a surface point cloud based on the tactile data. The method additionally includes estimating a pose of the object based on the visual estimate and the tactile estimate by fusing the object point cloud and the surface point cloud in a 3D space. The pose is a six-dimensional pose.
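Fusing the two point clouds in a shared 3D frame can be sketched as plain concatenation followed by a centroid estimate. A real pipeline would register the clouds and recover the full six-dimensional pose; this sketch only recovers the translational part, under the assumption that both clouds are already expressed in the same frame.

```python
def fuse_point_clouds(object_cloud, surface_cloud):
    """Combine the visual (object) and tactile (surface) point clouds.
    Registration between camera and touch frames is omitted: both clouds
    are assumed to live in the same 3D frame."""
    return object_cloud + surface_cloud

def estimate_position(cloud):
    """Centroid of the fused cloud: the translational part of a pose
    estimate. Orientation (the other three pose dimensions) is omitted."""
    n = len(cloud)
    return tuple(sum(p[i] for p in cloud) / n for i in range(3))
```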

Haptic photogrammetry in robots and methods for operating the same

Robots, robot systems, and methods for operating the same based on environment models including haptic data are described. An environment model which includes representations of objects in an environment is accessed, and a robot system is controlled based on the environment model. The environment model includes haptic data, which provides more effective control of the robot. The environment model is populated based on visual profiles, haptic profiles, and/or other data profiles for objects or features retrieved from respective databases. Objects or features can be identified by cross-referencing between visual and haptic profiles, allowing the environment model to be populated with data not directly collected by the robot building the model, or not directly collected from the actual objects or features in the environment.
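The cross-referencing idea — attaching haptic data the robot never directly measured, by looking up the profile of a visually identified object — can be sketched as follows. The database names and all entries are invented for illustration.

```python
# Hypothetical profile databases; the cross-referencing follows the abstract,
# but the keys and entries are assumptions made up for this sketch.
VISUAL_DB = {"mug": {"color": "white", "silhouette": "cylinder+handle"}}
HAPTIC_DB = {"mug": {"stiffness": "rigid", "texture": "smooth_ceramic"}}

def populate_environment_model(identified_objects):
    """For each visually identified object, merge its visual profile with
    the haptic profile retrieved from the database -- haptic data the robot
    did not directly collect from the object itself."""
    model = {}
    for obj in identified_objects:
        model[obj] = {**VISUAL_DB.get(obj, {}), **HAPTIC_DB.get(obj, {})}
    return model
```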

Autonomous mobile grabbing method for mechanical arm based on visual-haptic fusion under complex illumination condition

The present disclosure provides an autonomous mobile grabbing method for a mechanical arm based on visual-haptic fusion under a complex illumination condition, which mainly comprises approach control toward a target position and feedback control based on environment information. Under the complex illumination condition, weighted fusion is performed on visible-light and depth images of a preselected region, the target object is identified and positioned by a deep neural network, and a mobile mechanical arm is driven to continuously approach the target object. The pose of the mechanical arm is adjusted according to contact force information from a sensor module, the external environment, and the target object; visual information and haptic information of the target object are then fused to select the optimal grabbing pose and an appropriate grabbing force. The method improves object positioning precision and grabbing accuracy, effectively prevents collision damage and instability of the mechanical arm, and reduces harmful deformation of the grabbed object.
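The weighted fusion of visible-light and depth images in the preselected region can be sketched as a pixelwise convex combination. The fixed weight below is an assumption; in the described method the balance would presumably depend on the illumination quality of the region.

```python
def weighted_fusion(visible, depth, w_visible=0.6):
    """Pixelwise weighted fusion of a visible-light image and a depth image,
    both given as 2D lists normalized to [0, 1]. The weight w_visible is an
    illustrative assumption, not a value from the disclosure."""
    w_depth = 1.0 - w_visible
    return [[w_visible * v + w_depth * d for v, d in zip(row_v, row_d)]
            for row_v, row_d in zip(visible, depth)]
```

Under poor illumination, lowering `w_visible` would shift trust toward the depth channel, which is less affected by lighting.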

METHOD FOR UPDATING A SCENE REPRESENTATION MODEL

A computer-implemented method for updating a scene representation model is disclosed. The method comprises obtaining a scene representation model representing a scene having one or more objects, the scene representation model being configured to predict a value of a physical property of one or more of the objects; obtaining a value of the physical property of at least one of the objects, the obtained value being derived from a physical contact of a robot with the at least one object; and updating the scene representation model based on the obtained value. An apparatus is also disclosed.
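The predict-then-update loop in this abstract can be sketched with a toy scene model that predicts one physical property per object (here: mass) and blends in a contact-derived measurement. The blending rule and its factor are assumptions; the abstract leaves the update mechanism open.

```python
class SceneModel:
    """Toy scene representation model predicting one physical property
    (mass in kg) per object. The blending update rule is an assumption
    for this sketch, not the method of the disclosure."""

    def __init__(self):
        self.predicted_mass = {"box": 1.0}  # illustrative prior prediction

    def predict(self, obj):
        return self.predicted_mass[obj]

    def update(self, obj, measured, alpha=0.5):
        # Blend the value derived from the robot's physical contact
        # (e.g. lifting the object) into the model's prediction.
        self.predicted_mass[obj] = (1 - alpha) * self.predicted_mass[obj] + alpha * measured
```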