Patent classifications
G05B2219/40532
In-Situ Inspection Method Based on Digital Data Model of Weld
A method for inspecting weld quality in-situ. The method obtains a plurality of sequenced images of an in-progress welding process and generates a multi-dimensional data input based on the plurality of sequenced images and/or one or more weld process control parameters. The parameters may include: (i) shield gas flow rate, temperature, and pressure; (ii) voltage, amperage, wire feed rate, and temperature (if applicable); (iii) part preheat/inter-pass temperature; and (iv) part and weld torch relative velocity. The method generates defect probability and analytics information by applying one or more computer vision techniques to the multi-dimensional data input. The analytics information includes predictive insights on quality features of the in-progress welding process. The method then generates a 3-D visualization of one or more as-welded regions based on the analytics information and the plurality of sequenced images. The 3-D visualization displays the quality features for virtual inspection and/or for determining weld quality.
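The fused "multi-dimensional data input" described above can be sketched as follows. This is a minimal illustration, not the patent's actual model: the WeldFrame fields, the normalization constants, and the brightness-deviation scoring stand-in for the learned computer-vision model are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical per-frame record pairing an image with the weld process
# control parameters sampled at the same instant (an assumption).
@dataclass
class WeldFrame:
    pixels: list             # flattened grayscale intensities, 0.0-1.0
    gas_flow_lpm: float      # shield gas flow rate
    voltage_v: float
    wire_feed_mps: float
    travel_speed_mps: float  # part / weld torch relative velocity

def build_input_tensor(frames):
    """Fuse image and parameter channels into one multi-dimensional input.

    Each row is [pixels..., normalized process parameters], so a downstream
    computer-vision model sees imagery and process state jointly.
    """
    rows = []
    for f in frames:
        rows.append(f.pixels + [f.gas_flow_lpm / 30.0,
                                f.voltage_v / 40.0,
                                f.wire_feed_mps / 0.2,
                                f.travel_speed_mps / 0.02])
    return rows

def defect_probability(row):
    """Stand-in for the learned model: flag frames whose mean brightness
    deviates strongly from a nominal 0.5 (e.g. spatter or arc instability)."""
    n_px = len(row) - 4
    mean_intensity = sum(row[:n_px]) / n_px
    return min(1.0, abs(mean_intensity - 0.5) * 2.0)
```

A per-frame probability sequence produced this way could then be mapped back onto the weld bead geometry to drive the 3-D visualization of the as-welded regions.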
ROBOT TEACHING BY HUMAN DEMONSTRATION
A method for teaching a robot to perform an operation based on human demonstration with images from a camera. The method includes a teaching phase where a 2D or 3D camera detects a human hand grasping and moving a workpiece, and images of the hand and workpiece are analyzed to determine a robot gripper pose and positions which equate to the pose and positions of the hand and corresponding pose and positions of the workpiece. Robot programming commands are then generated from the computed gripper pose and position relative to the workpiece pose and position. In a replay phase, the camera identifies workpiece pose and position, and the programming commands cause the robot to move the gripper to pick, move and place the workpiece as demonstrated. A teleoperation mode is also disclosed, where camera images of a human hand are used to control movement of the robot in real time.
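The core of the teach/replay scheme is recording the demonstrated gripper pose *relative to* the workpiece, then re-applying that offset to the workpiece pose observed at replay time. A minimal sketch using planar (x, y, theta) poses; the function names and the 2D simplification are assumptions for illustration, as the patent's cameras may be 2D or 3D.

```python
import math

def compose(a, b):
    """Compose two planar poses (x, y, theta): apply b in a's frame."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Inverse of a planar pose."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) - y * math.cos(t),
            -t)

def teach(hand_pose, workpiece_pose):
    """Teaching phase: record the grasp pose relative to the workpiece,
    both observed in the camera frame."""
    return compose(invert(workpiece_pose), hand_pose)

def replay(workpiece_pose_now, grasp_offset):
    """Replay phase: map the taught offset onto the newly observed
    workpiece pose to get the gripper target."""
    return compose(workpiece_pose_now, grasp_offset)
```

The same relative-pose bookkeeping extends directly to 3D with homogeneous transforms in place of planar triples.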
LEARNING DEVICE, LEARNING METHOD, LEARNING MODEL, DETECTION DEVICE AND GRASPING SYSTEM
An estimation device includes a memory and at least one processor. The at least one processor is configured to acquire information regarding a target object. The at least one processor is configured to estimate information regarding a location and a posture of a gripper relating to where the gripper is able to grasp the target object. The estimation is based on an output of a neural model having as an input the information regarding the target object. The estimated information regarding the posture includes information capable of expressing a rotation angle around a plurality of axes.
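The abstract's "information capable of expressing a rotation angle around a plurality of axes" can be illustrated with a common decoding trick: having the model emit a (sin, cos) pair per axis and recovering each angle with atan2, which avoids wraparound discontinuities. The output layout below is an assumption for this sketch, not the patent's specified encoding.

```python
import math

def decode_grasp_output(raw):
    """Decode a 9-element model output into a gripper location and posture.

    Assumed layout for illustration:
      raw[0:3] -> (x, y, z) grasp location
      raw[3:9] -> (sin, cos) pairs for rotation about the x, y, z axes,
                  decoded with atan2 so angles wrap correctly.
    """
    location = tuple(raw[0:3])
    angles = tuple(math.atan2(raw[3 + 2 * i], raw[4 + 2 * i])
                   for i in range(3))
    return location, angles
```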
TARGET OBJECT RECOGNITION DEVICE, MANIPULATOR, AND MOBILE ROBOT
Provided is a technique capable of recognizing the states of a plurality of target objects arranged in a prescribed space region. This target object recognition device is provided with: a plurality of calculation processing units (21, 22), each of which calculates the attitude state of a target object in the prescribed space region using a different technique; a state recognition unit (23) which recognizes the layout state of all of the plurality of target objects arranged in the space region; a method determination unit (24) which, in accordance with the result of the recognition by the state recognition unit (23), determines a method for using the results of the calculation performed by the calculation processing units (21, 22); and a target object recognition unit (25) which recognizes the attitude states of the target objects by means of the determined method.
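The method determination unit (24) can be read as a policy that maps the recognized layout state to a rule for combining the two calculation units' outputs. The concrete layout labels and combination rules below are illustrative assumptions, not the patent's claimed policies.

```python
def determine_method(layout_state):
    """Pick how to use the two calculation units' results (the role of
    method determination unit 24), based on the layout recognized by the
    state recognition unit (23). Layout names are hypothetical."""
    if layout_state == "isolated":
        return lambda a, b: a            # one unit alone suffices
    if layout_state == "piled":
        return lambda a, b: b            # use the clutter-robust unit
    # ambiguous layout: average the two attitude estimates
    return lambda a, b: tuple((x + y) / 2 for x, y in zip(a, b))
```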
Robot work system and method of controlling robot work system
An information processing apparatus obtains, based on a captured image of an area including a plurality of work target candidates transported by a transport machine, a plurality of combinations of a candidate's position and a transport machine optimum control parameter, i.e. the control parameter of the transport machine that maximizes performance of the work when that candidate is set as the work target. The apparatus determines the work target from among the candidates based on the combinations, controls the transport machine based on the transport machine optimum control parameter of the determined work target, generates a control plan of the robot based on the position of the determined work target and that optimum control parameter, and controls the robot according to the generated control plan.
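The candidate-selection step amounts to a nested optimization: per candidate, find the transport control parameter that maximizes predicted work performance, then pick the candidate/parameter pair with the best score. A minimal sketch under the assumption that the performance predictor and parameter optimizer are supplied as callables:

```python
def plan_work(candidates, predict_performance, optimize_param):
    """For each work target candidate, find the transport-machine control
    parameter maximizing predicted performance (optimize_param is assumed
    to do this per position), then choose the best (target, parameter) pair.
    """
    combos = []
    for pos in candidates:
        best_param = optimize_param(pos)
        combos.append((pos, best_param, predict_performance(pos, best_param)))
    target, param, _ = max(combos, key=lambda c: c[2])
    return target, param
```

In the apparatus described above, `target` would then seed the robot's control plan and `param` would be sent to the transport machine.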
Positioning a Robot Sensor for Object Classification
In one embodiment, a method includes receiving, from a first sensor on a robot, first sensor data indicative of an environment of the robot. The method also includes identifying, based on the first sensor data, an object of an object type in the environment of the robot, where the object type is associated with a classifier that takes sensor data from a predetermined pose relative to the object as input. The method further includes causing the robot to position a second sensor on the robot at the predetermined pose relative to the object. The method additionally includes receiving, from the second sensor, second sensor data indicative of the object while the second sensor is positioned at the predetermined pose relative to the object. The method further includes determining, by inputting the second sensor data into the classifier, a property of the object.
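The key idea is that each classifier is bound to a predetermined viewing pose, so the robot moves its second sensor to that pose before capturing the data it classifies. A sketch with a hypothetical registry; the object type, pose offset, threshold, and property labels are all invented for illustration.

```python
# Hypothetical registry: each object type is paired with the viewing pose
# its classifier expects (here an offset from the object) and the
# classifier itself.
CLASSIFIERS = {
    "barrel": {
        "pose": (0.0, 0.0, 0.5),  # 0.5 m directly above the object
        "classify": lambda data: "full" if sum(data) > 10 else "empty",
    },
}

def inspect(object_type, object_position, move_sensor_to, read_sensor):
    """Position the second sensor at the classifier's predetermined pose
    relative to the identified object, then classify its reading."""
    spec = CLASSIFIERS[object_type]
    target = tuple(o + p for o, p in zip(object_position, spec["pose"]))
    move_sensor_to(target)
    return spec["classify"](read_sensor())
```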
REMOTE CONTROL SYSTEM AND REMOTE CONTROL METHOD
A remote control system includes: an imaging unit that shoots an environment in which a device to be operated, including an end effector, is located; a recognition unit that recognizes objects that can be grasped by the end effector based on a shot image of the environment captured by the imaging unit; an operation terminal that displays the shot image and receives handwritten input information entered on the displayed shot image; and an estimation unit that, based on the graspable objects and the handwritten input information, estimates which of the graspable objects the end effector has been requested to grasp and estimates the way of performing the requested grasping motion on that object.
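One simple way to realize the estimation unit is to match the handwritten stroke, as a set of image-plane points, against the detected graspable objects and pick the object the stroke most overlaps. This is an illustrative heuristic, not the patent's specified estimator, and the box/stroke representations are assumptions.

```python
def estimate_grasp_target(graspable_boxes, stroke):
    """Match a handwritten stroke (a list of (x, y) points drawn on the
    shot image) to the graspable object whose bounding box it most overlaps.

    graspable_boxes: list of (name, (x0, y0, x1, y1)) pairs.
    """
    def hits(entry):
        x0, y0, x1, y1 = entry[1]
        return sum(x0 <= x <= x1 and y0 <= y <= y1 for x, y in stroke)
    return max(graspable_boxes, key=hits)[0]
```

The stroke's shape (e.g. a circle versus an arrow) could analogously be classified to choose how the grasping motion is performed.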
TRAINING SYSTEM AND PROCESSES FOR OBJECTS TO BE CLASSIFIED
The present disclosure relates to a training system and, more particularly, to a method and system for training objects to be classified and related processes. The process includes: extracting, using a computing device, features of a plurality of objects; training, using the computing device, a machine learning model with selected ones of the extracted features; building, using the computing device, a final machine learning model of the selected features after all of the plurality of objects for training are captured; and performing, using the computing device, an action on subsequent objects based on the trained final machine learning model.
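The extract-accumulate-then-finalize flow can be sketched with a toy model. A nearest-centroid classifier stands in for the machine learning model here purely for illustration; the patent does not specify the model family.

```python
class IncrementalTrainer:
    """Accumulate extracted features per label as objects are captured,
    then build a final model (a nearest-centroid classifier as a stand-in)
    once all training objects have been seen."""

    def __init__(self):
        self.samples = {}  # label -> list of feature vectors

    def add(self, label, features):
        self.samples.setdefault(label, []).append(features)

    def build_final_model(self):
        # Per-label centroid over all accumulated feature vectors.
        centroids = {
            label: [sum(col) / len(col) for col in zip(*rows)]
            for label, rows in self.samples.items()
        }
        def classify(features):
            return min(centroids, key=lambda l: sum(
                (a - b) ** 2 for a, b in zip(features, centroids[l])))
        return classify
```

`classify` is then the "trained final model" used to act on subsequent objects.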
Automated manipulation of transparent vessels
An actuator and end effector are controlled according to images from cameras having a surface in their field of view. Vessels (cups, bowls, etc.) and other objects are identified in the images and their configuration is assigned to a finite set of categories by a classifier that does not output a 3D bounding box or determine a 6D pose. For objects assigned to a first subset of categories, grasping parameters for controlling the actuator and end effector are determined using only 2D bounding boxes, such as oriented 2D bounding boxes. For objects not assigned to the first subset, a righting operation may be performed using only 2D bounding boxes. Objects that are still not in the first set may then be grasped by estimating a 3D bounding box and 6D pose.
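The abstract describes an escalating strategy: cheap 2D-bounding-box grasping when the classifier's category allows it, a righting operation otherwise, and full 3D-box/6D-pose estimation only as a last resort. A sketch of that decision cascade; the category names and the "first subset" membership are invented for illustration.

```python
# Illustrative "first subset" of configuration categories for which
# 2D bounding boxes alone suffice for grasping.
UPRIGHT_CATEGORIES = {"cup_upright", "bowl_upright"}

def plan_manipulation(category):
    """Choose a manipulation strategy from the classifier's category,
    escalating from 2D-box grasping to righting to full 6D estimation."""
    if category in UPRIGHT_CATEGORIES:
        return "grasp_with_2d_bbox"
    if category.endswith("_sideways"):
        return "right_with_2d_bbox"   # then re-classify and retry
    return "grasp_with_3d_bbox_6d_pose"
```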