Patent classifications
G05B2219/40553
AUTONOMOUS MOBILE GRABBING METHOD FOR MECHANICAL ARM BASED ON VISUAL-HAPTIC FUSION UNDER COMPLEX ILLUMINATION CONDITION
The present disclosure discloses an autonomous mobile grabbing method for a mechanical arm based on visual-haptic fusion under a complex illumination condition, which mainly includes approaching control over a target position and feedback control over environment information.
According to the method, under the complex illumination condition, weighted fusion is conducted on visible-light and depth images of a preselected region, identification and positioning of a target object are completed based on a deep neural network, and a mobile mechanical arm is driven to continuously approach the target object. In addition, the pose of the mechanical arm is adjusted according to contact force information from a sensor module, the external environment, and the target object. Meanwhile, the visual and haptic information of the target object are fused, and the optimal grabbing pose and an appropriate grabbing force for the target object are selected.
By adopting the method, object positioning precision and grabbing accuracy are improved, collision damage to and instability of the mechanical arm are effectively prevented, and harmful deformation of the grabbed object is reduced.
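The weighted fusion of visible-light and depth images described above can be sketched roughly as below. The abstract does not specify the weighting rule, so the brightness-driven weight here (depth dominates when illumination is poor) is an illustrative assumption, not the patent's actual scheme:

```python
import numpy as np

def fuse_visible_depth(visible, depth, alpha=None):
    """Weighted fusion of a visible-light image and a depth image.

    Both inputs are 2-D float arrays normalized to [0, 1]. If no weight
    alpha is given, it is derived from the mean brightness of the visible
    image, so the depth channel dominates under poor illumination.
    """
    visible = np.asarray(visible, dtype=float)
    depth = np.asarray(depth, dtype=float)
    if alpha is None:
        alpha = float(visible.mean())  # bright scene -> trust visible more
    return alpha * visible + (1.0 - alpha) * depth

# A dim visible image pushes the fused result toward the depth channel.
vis = np.full((4, 4), 0.1)   # poorly illuminated scene
dep = np.full((4, 4), 0.8)
fused = fuse_visible_depth(vis, dep)
```

The fused image would then feed the deep neural network for target identification and positioning.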
ROBOT SYSTEM AND METHOD OF FORMING THREE-DIMENSIONAL MODEL OF WORKPIECE
A robot system includes a robot installed in a work area and controlled by a second control device; a 3D camera operated by an operator; a sensor that is disposed in a manipulation area, a space separate from the work area, and that wirelessly detects position information and posture information on the 3D camera; a display; and a first control device. The first control device acquires image information on a workpiece imaged by the 3D camera and acquires, from the sensor, the position information and the posture information at the time the workpiece is imaged. It displays the acquired image information on the display, forms a three-dimensional model of the workpiece based on the image information and the acquired position and posture information, displays the formed three-dimensional model on the display, and outputs first data, that is, data of the formed three-dimensional model, to the second control device.
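Forming a model from image data plus the camera's detected position and posture amounts to mapping camera-frame measurements into the work-area frame. The abstract does not state the pose representation, so the rotation-matrix rigid-body transform below is a generic sketch:

```python
import numpy as np

def camera_point_to_world(p_cam, cam_position, cam_rotation):
    """Map a 3-D point measured in the camera frame into the work-area
    (world) frame, using the camera pose reported by the sensor.

    cam_rotation is a 3x3 rotation matrix (camera -> world) and
    cam_position is the camera origin expressed in world coordinates.
    """
    R = np.asarray(cam_rotation, dtype=float)
    t = np.asarray(cam_position, dtype=float)
    return R @ np.asarray(p_cam, dtype=float) + t

# Example: camera at (1, 0, 0.5), rotated 90 degrees about the z-axis.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
p_world = camera_point_to_world([1.0, 0.0, 0.0], [1.0, 0.0, 0.5], Rz)
```

Applying this transform to every measured surface point, across all captured views, accumulates a three-dimensional model of the workpiece in a common frame.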
Interactive Tactile Perception Method for Classification and Recognition of Object Instances
A controller is provided for interactive classification and recognition of an object in a scene using tactile feedback. The controller includes an interface configured to transmit and receive control signals, sensor signals from a robot arm, gripper signals from a gripper attached to the robot arm, tactile signals from sensors attached to the gripper, and signals from at least one vision sensor; a memory module that stores robot control programs and a classification and recognition model; and a processor that generates control signals based on the control program and a grasp pose on the object, configured to control the robot arm to grasp the object with the gripper. Further, the processor is configured to compute a tactile feature representation from the tactile sensor signals and to repeat gripping the object and computing a tactile feature representation over the set of grasp poses, after which the processor processes the ensemble of tactile features to learn a model that is used to classify or recognize the object as known or unknown.
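Classifying an ensemble of per-grasp tactile features as a known class or as unknown can be sketched as below. The nearest-centroid rule with a rejection threshold is an illustrative stand-in for the learned model the abstract describes; the feature dimensions and threshold are assumptions:

```python
import numpy as np

def classify_tactile(ensemble, class_centroids, threshold=1.0):
    """Classify an object from an ensemble of tactile feature vectors.

    The per-grasp features are averaged into one descriptor and matched
    against stored class centroids; if the nearest centroid is farther
    away than `threshold`, the object is reported as unknown.
    """
    descriptor = np.mean(np.asarray(ensemble, dtype=float), axis=0)
    best_label, best_dist = None, float("inf")
    for label, centroid in class_centroids.items():
        d = float(np.linalg.norm(descriptor - np.asarray(centroid, dtype=float)))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else "unknown"

# Two grasps of the same object produce slightly different features.
centroids = {"mug": [1.0, 0.0], "sponge": [0.0, 1.0]}
label = classify_tactile([[0.9, 0.1], [1.1, -0.1]], centroids)
```

Repeating the grasp over several poses, as the abstract describes, makes the averaged descriptor more robust to per-contact noise.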
Haptic photogrammetry in robots and methods for operating the same
Robots, robot systems, and methods for operating the same based on environment models including haptic data are described. An environment model which includes representations of objects in an environment is accessed, and a robot system is controlled based on the environment model. The environment model includes haptic data, which provides more effective control of the robot. The environment model is populated based on visual profiles, haptic profiles, and/or other data profiles for objects or features retrieved from respective databases. Identification of objects or features can be based on cross-referencing between visual and haptic profiles, to populate the environment model with data not directly collected by the robot that is populating the model, or data not directly collected from the actual objects or features in the environment.
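The cross-referencing idea above, attaching database-sourced haptic data to visually detected objects so the model carries information the robot never measured directly, can be sketched as follows. The database layout and field names are illustrative assumptions:

```python
def populate_environment_model(detections, visual_db, haptic_db):
    """Build an environment-model entry per visually detected object,
    attaching haptic data cross-referenced from profile databases.

    `visual_db` maps a visual signature to an object id, and `haptic_db`
    maps that object id to haptic properties (stiffness, friction, ...).
    """
    model = []
    for det in detections:
        object_id = visual_db.get(det["signature"])
        entry = {"pose": det["pose"], "object_id": object_id}
        if object_id in haptic_db:
            # Haptic data retrieved from the database, not collected
            # directly by the robot populating the model.
            entry["haptic"] = haptic_db[object_id]
        model.append(entry)
    return model

visual_db = {"red-cylinder": "soda_can"}
haptic_db = {"soda_can": {"stiffness": "low", "friction": 0.4}}
env = populate_environment_model(
    [{"signature": "red-cylinder", "pose": (0.2, 0.3, 0.0)}],
    visual_db, haptic_db)
```

A controller can then plan grasp forces from the attached haptic entry even before the gripper makes first contact.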