G05B2219/39543

Extracting grasping cues from tool geometry for digital human models

Grasping remains a complex topic for simulation. Embodiments provide a method to automatically determine grasping cues for tools. An example embodiment scans a CAD model representing a real-world tool to generate a series of sections from the CAD model. In turn, properties of each section are extracted and one or more regions of the CAD model are identified based upon the extracted properties and a tool family to which the tool represented by the CAD model belongs. To continue, a respective classification for each of the one or more identified regions is determined and grasping cues for the CAD model are generated based upon the determined respective classification for each of the one or more regions.
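The pipeline described above (section the model, extract per-section properties, group sections into regions, classify per tool family) can be sketched as follows. This is a toy illustration only, not the patented method: a cross-section is reduced to its area, and the family-specific classification rules are invented placeholder heuristics.

```python
# Hypothetical sketch: region identification and classification from
# per-section properties. A real implementation would obtain sections
# from a CAD geometry kernel rather than a list of areas.

def identify_regions(section_areas, tolerance=0.15):
    """Group consecutive sections whose areas differ by less than
    `tolerance` (relative) into candidate regions."""
    regions = []
    start = 0
    for i in range(1, len(section_areas)):
        prev = section_areas[i - 1]
        if abs(section_areas[i] - prev) > tolerance * max(prev, 1e-9):
            regions.append((start, i - 1))
            start = i
    regions.append((start, len(section_areas) - 1))
    return regions

def classify_region(section_areas, region, tool_family):
    """Toy rule set: for a screwdriver-like family, the run of wide
    near-constant sections is the handle; the narrow tail is the shaft."""
    lo, hi = region
    mean_area = sum(section_areas[lo:hi + 1]) / (hi - lo + 1)
    if tool_family == "screwdriver":
        return "handle" if mean_area > 5.0 else "shaft"
    return "unknown"

# Sections scanned along a screwdriver's axis (handle first, then shaft).
areas = [8.0, 8.1, 7.9, 8.0, 1.0, 1.1, 1.0]
regions = identify_regions(areas)
labels = [classify_region(areas, r, "screwdriver") for r in regions]
```

Grasping cues would then be generated per region label, e.g. a cylindrical power grasp along the handle region.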

SENSOR DEVICE FOR A GRIPPING SYSTEM, METHOD FOR GENERATING OPTIMAL GRIPPING POSES FOR CONTROLLING A GRIPPING DEVICE, AND ASSOCIATED GRIPPING SYSTEM

A sensor apparatus for a gripping system, a method for generating optimal gripping poses for controlling a gripping device, and an associated gripping system, wherein the gripping system comprises a robot with a gripping device for handling objects and a robot or machine control for controlling the robot and/or the gripping device.

Robotic system with enhanced scanning mechanism
11638993 · 2023-05-02

A method for operating a robotic system including determining an initial pose of a target object based on imaging data; calculating a confidence measure associated with an accuracy of the initial pose; determining that the confidence measure fails to satisfy a sufficiency condition; and deriving a motion plan accordingly for scanning an object identifier while transferring the target object from a start location to a task location.
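The confidence-gated planning step described above can be sketched as follows. This is a hypothetical illustration, not the claimed method: when the pose estimate's confidence fails a sufficiency threshold, the motion plan is augmented with a pass that presents the object identifier (e.g. a barcode) to a scanner during transfer.

```python
# Hypothetical sketch: augment the transfer plan with a scan step when
# pose confidence is insufficient. The threshold value is an assumption.

def derive_motion_plan(initial_pose, confidence, threshold=0.9):
    plan = ["pick", "transfer", "place"]
    if confidence < threshold:
        # Low confidence: present the object to a scanner mid-transfer.
        plan.insert(2, "present_to_scanner")
    return plan

plan_uncertain = derive_motion_plan((0.1, 0.2, 0.0), confidence=0.4)
plan_confident = derive_motion_plan((0.1, 0.2, 0.0), confidence=0.95)
```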

Systems and methods for determining digital model positioning for grasping

Embodiments determine positioning of a mannequin. One such embodiment begins by determining a frame of a grasping element of a mannequin represented by a computer-aided design (CAD) model and determining a frame of an object to be grasped, where the object is also represented by a CAD model. To continue, degrees of freedom of the mannequin are specified and limits on the specified degrees of freedom are set. In turn, using an inverse kinematic solver, positioning of the mannequin grasping the object is determined based upon: (i) the determined frame of the grasping element, (ii) the determined frame of the object, (iii) the specified degrees of freedom, and (iv) the set limits on the specified degrees of freedom.
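The inverse-kinematics step above can be illustrated with a minimal sketch, assuming a planar two-joint linkage standing in for the mannequin's arm: the object frame is reduced to a 2D target point, the two joint angles are the specified degrees of freedom, and the limits are enforced by clamping. A full digital-human solver would handle many more joints and full 6-DOF frames.

```python
import math

def two_link_ik(target, l1=1.0, l2=1.0,
                limits=((-math.pi, math.pi), (0.0, math.pi))):
    """Closed-form IK for a planar two-link arm with joint limits.
    `target` plays the role of the grasped object's frame; `limits`
    are the set limits on the specified degrees of freedom."""
    x, y = target
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))          # clamp to reachable workspace
    q2 = math.acos(c2)                    # elbow angle
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                        l1 + l2 * math.cos(q2))
    # Enforce the specified joint limits.
    q1 = max(limits[0][0], min(limits[0][1], q1))
    q2 = max(limits[1][0], min(limits[1][1], q2))
    return q1, q2

# Position the "hand" at the object frame (1.0, 1.0).
q1, q2 = two_link_ik((1.0, 1.0))
```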

Interactive tactile perception method for classification and recognition of object instances

A controller is provided for interactive classification and recognition of an object in a scene using tactile feedback. The controller includes an interface configured to transmit control signals and to receive sensor signals from a robot arm, gripper signals from a gripper attached to the robot arm, tactile signals from sensors attached to the gripper, and signals from at least one vision sensor; a memory module to store robot control programs and a classifier and recognition model; and a processor to generate control signals, based on the control program and a grasp pose on the object, configured to control the robot arm to grasp the object with the gripper. Further, the processor is configured to compute a tactile feature representation from the tactile sensor signals and to repeat gripping the object and computing a tactile feature representation over the set of grasp poses, after which the processor processes the ensemble of tactile features to learn a model that is used to classify or recognize the object as known or unknown.
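The final step, classifying an ensemble of tactile features as a known object or unknown, can be sketched as follows. This is a deliberately simplified stand-in for the learned model: the ensemble is averaged and matched to stored class prototypes by Euclidean distance, with a distance threshold deciding "unknown". All feature values and labels are invented for illustration.

```python
# Hypothetical sketch: nearest-prototype classification of tactile
# features gathered over repeated grasps, with open-set rejection.

def classify_tactile(ensemble, prototypes, unknown_threshold=1.0):
    """ensemble: list of tactile feature vectors from repeated grasps.
    prototypes: {label: prototype feature vector} of known objects."""
    n = len(ensemble)
    dim = len(ensemble[0])
    mean = [sum(v[i] for v in ensemble) / n for i in range(dim)]
    best_label, best_dist = None, float("inf")
    for label, proto in prototypes.items():
        dist = sum((a - b) ** 2 for a, b in zip(mean, proto)) ** 0.5
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= unknown_threshold else "unknown"

protos = {"sponge": [0.2, 0.9], "mug": [0.9, 0.1]}
result_known = classify_tactile([[0.25, 0.85], [0.15, 0.95]], protos)
result_unknown = classify_tactile([[5.0, 5.0]], protos)
```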

INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
20220297292 · 2022-09-22

An information processor calculates, for a robot hand including a plurality of fingers, a gripping pose at which the robot hand grips a target object. The information processor includes a candidate single-finger placement position detector that detects, based on three-dimensional measurement data obtained through three-dimensional measurement of the target object and hand shape data about a shape of the robot hand, candidate placement positions for each of the plurality of fingers of the robot hand, a multi-finger combination searcher that searches for, among the candidate placement positions for each of the plurality of fingers, a combination of candidate placement positions to allow gripping of the target object, and a gripping pose calculator that calculates, based on the combination of candidate placement positions for each of the plurality of fingers, a gripping pose at which the robot hand grips the target object.
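The multi-finger combination search described above can be sketched in a reduced form. This is a toy illustration under strong assumptions: a two-finger gripper, candidate contacts on a 1D cross-section represented as (position, surface normal sign) pairs, and a grip test that only requires opposing normals and a minimum finger spread.

```python
# Hypothetical sketch: search combinations of per-finger candidate
# placements for pairs that can grip (antipodal contacts).
from itertools import product

def search_finger_combinations(candidates_per_finger, min_spread=0.5):
    """candidates_per_finger: one list of (position, normal) contacts per
    finger. A combination grips if the normals oppose each other and the
    fingers are spread at least `min_spread` apart."""
    grips = []
    for (p1, n1), (p2, n2) in product(*candidates_per_finger):
        if n1 * n2 < 0 and abs(p1 - p2) >= min_spread:
            grips.append(((p1, n1), (p2, n2)))
    return grips

# Candidate placement positions detected for each of the two fingers.
finger_a = [(0.0, +1), (1.0, +1)]
finger_b = [(0.2, +1), (1.9, -1)]
grips = search_finger_combinations([finger_a, finger_b])
```

A gripping-pose calculator would then derive the hand pose from a selected combination, e.g. centering the palm between the two contacts.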

Grasping robot and control program for grasping robot
11407102 · 2022-08-09

A grasping robot includes: a grasping mechanism configured to grasp a target object; an image-pickup unit configured to shoot a surrounding environment; an extraction unit configured to extract a graspable part that can be grasped by the grasping mechanism in the surrounding environment by using a learned model that uses an image acquired by the image-pickup unit as an input image; a position detection unit configured to detect a position of the graspable part; a recognition unit configured to recognize a state of the graspable part by referring to a lookup table that associates the position of the graspable part with a movable state thereof; and a grasping control unit configured to control the grasping mechanism so as to displace the graspable part in accordance with the state of the graspable part recognized by the recognition unit.
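The recognition step above, matching a detected graspable part's position against a lookup table of movable states, can be sketched as follows. This is a hypothetical illustration: the table entries, states, and actions are invented, and the learned extraction model is assumed to have already produced the detected position.

```python
# Hypothetical sketch: recognize a graspable part's movable state from
# a position-keyed lookup table, then choose a displacement action.

LOOKUP_TABLE = {
    (0.5, 1.0): "hinged_door_closed",
    (2.0, 1.0): "sliding_door_open",
}

ACTIONS = {
    "hinged_door_closed": "pull",
    "sliding_door_open": "slide_shut",
}

def recognize_and_act(detected_position, table=LOOKUP_TABLE, tol=0.1):
    """Match the detected position to a table entry within `tol` and
    return the part's state and the corresponding grasp action."""
    for position, state in table.items():
        if all(abs(a - b) <= tol for a, b in zip(detected_position, position)):
            return state, ACTIONS[state]
    return None, "no_op"

state, action = recognize_and_act((0.52, 0.98))
```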

REMOTE CONTROLLED DEVICE, REMOTE CONTROL SYSTEM AND REMOTE CONTROL DEVICE
20220250247 · 2022-08-11

A remote controlled device comprises one or more memories and one or more processors. The one or more processors are configured to, when an event relating to a task being executed by a remote control object occurs: transmit information on a subtask of the task, receive a command relating to the subtask, and execute the task based on the command.
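The event-driven flow above (report the subtask, receive a command, resume the task) can be sketched minimally. This is a hypothetical illustration; the callback standing in for the remote link and the command vocabulary are assumptions.

```python
# Hypothetical sketch: on an event, transmit subtask info, receive a
# command from the remote operator, and continue based on it.

def handle_event(subtask, event, request_command):
    """`request_command` stands in for the remote link: it receives the
    subtask information and returns the operator's command."""
    command = request_command({"subtask": subtask, "event": event})
    if command == "retry":
        return f"retrying {subtask}"
    if command == "skip":
        return f"skipping {subtask}"
    return "aborted"

# Simulate an operator who answers "retry" to a failed grasp.
outcome = handle_event("grasp_bottle", "grasp_failed", lambda info: "retry")
```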

HANDLING SYSTEM, TRANSPORT SYSTEM, CONTROL DEVICE, STORAGE MEDIUM, AND HANDLING METHOD

According to one embodiment, a handling system includes a movable arm, a holding unit, a sensor, and a controller. The holding unit is attached to the movable arm and capable of holding an object by selecting one or more of a plurality of holding methods. The sensor is capable of detecting a plurality of the objects. The controller controls the movable arm and the holding unit. The controller calculates a score for each object and each holding method on the basis of information acquired from the sensor. The controller selects a next object to be held and a holding method on the basis of the score. The controller calculates a position at which the selected object is held and a posture of the movable arm.
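The score-based selection above can be sketched as follows. This is a toy illustration: the score values are invented, and whatever sensor-derived scoring function the embodiment uses is assumed to have already filled the table.

```python
# Hypothetical sketch: pick the next object and holding method as the
# highest-scoring (object, method) pair.

def select_object_and_method(scores):
    """scores: {(object_id, holding_method): score} computed per object
    and per holding method from sensor information."""
    return max(scores, key=scores.get)

scores = {
    ("box_1", "suction"): 0.8,
    ("box_1", "pinch"): 0.4,
    ("bag_2", "suction"): 0.3,
    ("bag_2", "pinch"): 0.9,
}
next_object, method = select_object_and_method(scores)
```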

System and method for determining grasping positions for two-handed grasps of industrial objects

A system and method are provided for determining grasping positions for two-handed grasps of industrial objects. The system may include a processor configured to determine a three-dimensional (3D) voxel grid for a 3D model of a target object. In addition, the processor may be configured to determine at least one pair of spaced-apart grasping positions on the target object at which the target object is capable of being grasped with two hands at the same time based on processing the 3D voxel grid for the target object with a neural network trained to determine grasping positions for two-handed grasps of target objects using training data. Such training data may include 3D voxel grids of a plurality of 3D models of training objects and grasping data including corresponding pairs of spaced-apart grasping positions for two-handed grasps of the training objects. Also, the processor may be configured to provide output data that specifies the determined grasping positions on the target object for two-handed grasps.
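The voxelization step above can be sketched as follows. Only the grid construction reflects the description; the trained neural network is replaced here by an openly labeled placeholder heuristic that pairs the two most widely separated occupied voxels, purely so the example runs end to end.

```python
# Hypothetical sketch: voxelize a 3D point sampling of the model, then
# (placeholder, NOT the trained network) pick a spaced-apart voxel pair.
from itertools import combinations

def voxelize(points, grid_size=4, bounds=(0.0, 1.0)):
    """Map 3D points into an occupancy set of voxel indices."""
    lo, hi = bounds
    step = (hi - lo) / grid_size
    grid = set()
    for x, y, z in points:
        grid.add((int((x - lo) / step),
                  int((y - lo) / step),
                  int((z - lo) / step)))
    return grid

def predict_two_handed_grasp(grid):
    """Placeholder for the trained network: return the pair of occupied
    voxels with the largest separation."""
    def d2(a, b):
        return sum((i - j) ** 2 for i, j in zip(a, b))
    return max(combinations(grid, 2), key=lambda p: d2(*p))

# Points sampled along an elongated object's surface.
points = [(0.1, 0.1, 0.1), (0.9, 0.1, 0.1), (0.5, 0.1, 0.1)]
grid = voxelize(points)
pair = predict_two_handed_grasp(grid)
```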