
Methods and systems for registering a three-dimensional pose of an object
11055562 · 2021-07-06

In an example, a system for registering a three-dimensional (3D) pose of a workpiece relative to a robotic device is disclosed. The system comprises the robotic device, where the robotic device comprises one or more mounted lasers. The system also comprises one or more sensors configured to detect laser returns from laser rays projected from the one or more mounted lasers and reflected by the workpiece. The system also comprises a processor configured to: receive a tessellation of the workpiece, wherein the tessellation comprises a 3D representation of the workpiece made up of cells; convert the laser returns into a 3D point cloud in a robot frame; based on the 3D point cloud, filter visible cells of the tessellation of the workpiece to form a tessellation included set; and solve for the 3D pose of the workpiece relative to the robotic device based on the tessellation included set.
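The point-cloud conversion step can be sketched concretely. A minimal sketch, assuming each laser return is a measured range along a known ray direction and the sensor-to-robot pose is known from calibration (the function name, argument layout, and frame conventions below are illustrative, not from the patent):

```python
import numpy as np

def returns_to_robot_frame(ranges, ray_dirs, R_sensor, t_sensor):
    """Convert laser returns into a 3D point cloud in the robot frame.

    ranges:   (N,) measured distances along each laser ray
    ray_dirs: (N, 3) unit direction of each ray in the sensor frame
    R_sensor, t_sensor: calibrated sensor-to-robot rotation and translation
    """
    pts_sensor = ranges[:, None] * ray_dirs       # points in the sensor frame
    return pts_sensor @ R_sensor.T + t_sensor     # rigid transform into robot frame
```

With the cloud expressed in the robot frame, visibility filtering and the pose solve can operate in a single coordinate system.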

ROBOT HAND CONTROLLER, ROBOT SYSTEM, AND ROBOT HAND CONTROL METHOD
20210023701 · 2021-01-28

A robot hand controller includes an air supply unit configured to supply air into fingers of a robot hand and to discharge air from the fingers, and a controller configured to control the air supply unit. The air supply unit includes two or more air passages respectively connected to different fingers, the air passages capable of supplying air into the fingers and discharging air from the fingers independently of each other. The controller controls supply and discharge of the air through each of the two or more air passages in response to a shape of a workpiece and an object in a vicinity of a transport destination of the workpiece.

SYSTEM AND METHOD FOR AUGMENTING A VISUAL OUTPUT FROM A ROBOTIC DEVICE
20210023703 · 2021-01-28

A method for visualizing data generated by a robotic device is presented. The method includes displaying an intended path of the robotic device in an environment. The method also includes displaying a first area in the environment identified as drivable for the robotic device. The method further includes receiving an input to identify a second area in the environment as drivable and transmitting the second area to the robotic device.

VIRTUAL TEACH AND REPEAT MOBILE MANIPULATION SYSTEM

A method for controlling a robotic device is presented. The method includes positioning the robotic device within a task environment. The method also includes mapping descriptors of a task image of a scene in the task environment to a teaching image of a teaching environment. The method further includes defining a relative transform between the task image and the teaching image based on the mapping. Furthermore, the method includes updating parameters of a set of parameterized behaviors based on the relative transform to perform a task corresponding to the teaching image.
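The abstract does not specify how the relative transform is computed. One common closed-form choice, assuming the descriptor mapping yields matched 2D keypoint locations in the teaching and task images, is a least-squares similarity transform (Umeyama-style); the names below are illustrative:

```python
import numpy as np

def relative_transform(teach_pts, task_pts):
    """Estimate a 2D similarity transform (scale s, rotation R, translation t)
    mapping teaching-image keypoints onto task-image keypoints, i.e.
    task ≈ s * R @ teach + t.  Inputs are matched (N, 2) point arrays."""
    mu_a = teach_pts.mean(axis=0)
    mu_b = task_pts.mean(axis=0)
    A = teach_pts - mu_a
    B = task_pts - mu_b
    H = A.T @ B                              # cross-covariance of centered points
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    D = np.diag([1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_b - s * R @ mu_a
    return s, R, t
```

The recovered transform can then parameterize the stored behaviors, shifting taught motions into the task scene.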

TRAINING METHODS FOR DEEP NETWORKS

A method for training a deep neural network of a robotic device is described. The method includes constructing a 3D model using images captured via a 3D camera of the robotic device in a training environment. The method also includes generating pairs of 3D images from the 3D model by artificially adjusting parameters of the training environment to form manipulated images using the deep neural network. The method further includes processing the pairs of 3D images to form a reference image including embedded descriptors of objects common to the pairs of 3D images. The method also includes using the reference image from the training of the deep neural network to determine correlations for identifying detected objects in future images.

AUTONOMOUS TASK PERFORMANCE BASED ON VISUAL EMBEDDINGS

A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
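The keyframe-identification step can be made concrete with a simple retrieval sketch, assuming each image has already been reduced to per-pixel feature descriptors. The scoring rule below (mean best-match cosine similarity) is an illustrative stand-in, not the patented matching method:

```python
import numpy as np

def best_keyframe(current_desc, keyframes):
    """Pick the stored keyframe whose pixel descriptors best match the
    current view.  current_desc: (N, D) descriptors from the current image;
    keyframes: dict mapping a keyframe name to an (M, D) descriptor array."""
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    cur = normalize(current_desc)
    best_name, best_score = None, -np.inf
    for name, desc in keyframes.items():
        sims = cur @ normalize(desc).T      # (N, M) cosine similarities
        score = sims.max(axis=1).mean()     # best match per current pixel
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```

The robot would then execute the task associated with the returned keyframe.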

SYSTEM AND METHOD FOR ROBOTIC BIN PICKING USING ADVANCED SCANNING TECHNIQUES
20210023710 · 2021-01-28

A method and system for programming picking and placing of a workpiece is provided. Embodiments may include associating a workpiece with an end effector that is attached to a robot and scanning the workpiece while the workpiece is associated with the end effector. Embodiments may also include determining a pose of the workpiece relative to the robot based, at least in part, upon the scanning.

Machine Vision-Based Method and System for Measuring 3D Pose of a Part or Subassembly of Parts

A machine vision-based method and system for measuring 3D pose of a part or subassembly of parts having an unknown pose are disclosed. A number of different applications of the method and system are disclosed including applications which utilize a reprogrammable industrial automation machine such as a robot. The method includes providing a reference cloud of 3D voxels which represent a reference surface of a reference part or subassembly having a known reference pose. Using at least one 2D/3D hybrid sensor, a sample cloud of 3D voxels which represent a corresponding surface of a sample part or subassembly of the same type as the reference part or subassembly is acquired. The sample part or subassembly has an actual pose different from the reference pose. The voxels of the sample and reference clouds are processed utilizing a matching algorithm to determine the pose of the sample part or subassembly.
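The matching algorithm is not named in the abstract. A standard choice for aligning a sample voxel cloud to a reference cloud is iterative closest point (ICP), sketched here in a minimal brute-force form (nearest-neighbour correspondences, SVD pose update); this is one plausible instantiation, not necessarily the patented one:

```python
import numpy as np

def icp(sample, reference, iters=20):
    """Minimal ICP: align sample (N, 3) onto reference (M, 3).
    Returns R, t such that reference ≈ R @ sample + t."""
    src = sample.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # 1. nearest-neighbour correspondences (brute force)
        d2 = ((src[:, None, :] - reference[None, :, :]) ** 2).sum(-1)
        matched = reference[d2.argmin(axis=1)]
        # 2. best rigid transform for these correspondences (Kabsch/SVD)
        mu_s, mu_m = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_s
        # 3. apply the increment and accumulate the total transform
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

In the patent's terms, the recovered transform carries the reference pose into the actual pose of the sample part or subassembly.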

METHOD AND PROCESSING SYSTEM FOR UPDATING A FIRST IMAGE GENERATED BY A FIRST CAMERA BASED ON A SECOND IMAGE GENERATED BY A SECOND CAMERA
20200394810 · 2020-12-17

A method and system for processing camera images is presented. The system receives a first depth map generated based on information sensed by a first type of depth-sensing camera, and receives a second depth map generated based on information sensed by a second type of depth-sensing camera. The first depth map includes a first set of pixels that indicate a first set of respective depth values. The second depth map includes a second set of pixels that indicate a second set of respective depth values. The system identifies a third set of pixels of the first depth map that correspond to the second set of pixels of the second depth map, identifies one or more empty pixels from the third set of pixels, and updates the first depth map by assigning to each empty pixel a respective depth value based on the second depth map.
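The update rule lends itself to a compact sketch. A minimal version, assuming the two depth maps are already registered so pixel correspondence is a direct index lookup and empty pixels are marked with 0 (both simplifying assumptions; the patent covers more general correspondences):

```python
import numpy as np

def fill_empty_pixels(first_depth, second_depth, empty_value=0.0):
    """Update the first depth map: wherever it has an empty pixel and the
    registered second depth map has a valid value, copy that value in.
    Depth values already present in the first map are left untouched."""
    updated = first_depth.copy()
    empty = (first_depth == empty_value) & (second_depth != empty_value)
    updated[empty] = second_depth[empty]
    return updated
```

This lets a camera with sparse but accurate returns be densified by a second camera of a different type without overwriting its own measurements.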

SHADING TOPOGRAPHY IMAGING FOR ROBOTIC UNLOADING

Vision systems for robotic assemblies that handle cargo, for example, unloading cargo from a trailer, can determine the position of cargo based on shading topography. Shading topography imaging can be performed using light sources arranged at different positions relative to the image capture device(s).
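Imaging a scene under light sources at different known positions is the classic photometric-stereo setup. A minimal sketch, assuming at least three images under known distant light directions and a Lambertian surface (assumptions not stated in the abstract):

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover per-pixel surface normals from images lit from different
    directions.  intensities: (K, H, W) images; light_dirs: (K, 3) unit
    vectors.  Lambertian model: I_k = albedo * dot(L_k, n)."""
    K, H, W = intensities.shape
    I = intensities.reshape(K, -1)                      # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W): albedo * n
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 0, G / np.maximum(albedo, 1e-12), 0.0)
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```

The recovered normal field exposes the surface relief of stacked cargo, from which box faces and pick points could be inferred.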