G05B2219/39391

Robotic arm processing method and system based on 3D image

Robotic arm processing method and system based on 3D image are provided. The processing method includes: providing robotic arm 3D model data and processing environment 3D model data; obtaining workpiece 3D model data, and generating a processing path consisting of contact points according to the workpiece 3D model data, wherein a free end of a robotic arm moves along the processing path to complete a processing procedure; generating a posture candidate group corresponding to each one of the contact points for the free end of the robotic arm; selecting an actual moving posture from each posture candidate group; moving the free end of the robotic arm to each corresponding one of the contact points according to the selected actual moving posture; and moving the free end of the robotic arm along the processing path according to the selected actual moving postures to perform the processing procedure.
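The candidate-and-select flow above can be sketched as follows; the candidate generator, the smoothness criterion, and every name here are hypothetical stand-ins for the patent's model-based reachability and collision checks.

```python
import math

def candidate_postures(contact_point):
    """Hypothetical candidate generator: sample approach angles (radians)
    for the free end at the contact point. In a real system these would
    come from inverse kinematics plus collision checks against the
    robotic arm and processing environment 3D models."""
    return [k * math.pi / 6 for k in range(-2, 3)]

def select_postures(path):
    """For each contact point on the processing path, pick the candidate
    posture closest to the previously selected one, so the free end
    moves smoothly along the path."""
    selected = []
    prev = 0.0
    for point in path:
        best = min(candidate_postures(point), key=lambda a: abs(a - prev))
        selected.append(best)
        prev = best
    return selected

path = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
print(select_postures(path))  # → [0.0, 0.0, 0.0]
```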

OPERATION SYSTEM

A robot system including a robot, a marker unit, a sensor, a storage device, and a control device. The robot performs an operation with regard to a workpiece. The marker unit is attached to a measurement object and includes a base section and a plurality of markers attached to the base section. The sensor detects identification information and three-dimensional positions of the plurality of markers. The storage device stores teaching data including operation data and attachment position data indicating a correspondence relationship between the identification information of each of the markers and an attachment position of the corresponding marker. The control device calculates a three-dimensional position of the measurement object based on the three-dimensional positions of the plurality of markers and the attachment position data and controls the robot based on the three-dimensional position of the measurement object and the operation data so as to make the robot perform the operation.
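The position calculation can be illustrated with a minimal sketch. The attachment table, the marker IDs, and the simplifying assumption that the measurement object is not rotated relative to the sensor frame are all mine; a real implementation would also estimate orientation.

```python
import numpy as np

# Hypothetical attachment position data: attachment position of each
# marker ID in the coordinate frame of the measurement object.
ATTACHMENT = {
    1: np.array([0.1, 0.0, 0.0]),
    2: np.array([0.0, 0.1, 0.0]),
    3: np.array([0.0, 0.0, 0.1]),
}

def object_position(detections):
    """Estimate the 3D position of the measurement object from detected
    marker positions, assuming the object frame is axis-aligned with the
    sensor frame. detections maps marker ID -> world (x, y, z)."""
    offsets = [np.asarray(p) - ATTACHMENT[i] for i, p in detections.items()]
    return np.mean(offsets, axis=0)

detected = {1: [1.1, 2.0, 3.0], 2: [1.0, 2.1, 3.0], 3: [1.0, 2.0, 3.1]}
print(object_position(detected))  # → [1. 2. 3.]
```

Averaging over all detected markers makes the estimate robust to noise in any single marker detection.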

Method for Producing a Product Comprising at Least Two Components

A method for producing or assembling a product which includes at least two components, for example a motor vehicle or a motor vehicle module, by at least two fixing parts. The first fixing part is formed as a female part and the second fixing part is formed as a male part. The components disposed at a processing station and the first fixing part are measured by a measuring device, for example by a stationary camera, by a camera fastened on a first or a second manipulator, or by a photogrammetry bar having three cameras. A deviation from a target geometry or target position is determined, and a corrected target position of the second fixing part is calculated on the basis of the determined deviation, such that the second fixing part is joined together with the first fixing part by the first manipulator and the product is thus produced.
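The correction step reduces to shifting the second fixing part's nominal target by the measured deviation of the first fixing part. The function below is a deliberately minimal sketch of that arithmetic; real systems correct full 6-DOF poses, not just positions.

```python
def corrected_target(nominal_target, measured, nominal):
    """Shift the second fixing part's nominal target position by the
    deviation of the first fixing part's measured position from its
    nominal (target) position. All inputs are (x, y, z) sequences."""
    deviation = [m - n for m, n in zip(measured, nominal)]
    return [t + d for t, d in zip(nominal_target, deviation)]

# First fixing part measured 0.5 off in x -> second part's target shifts too.
print(corrected_target([5.0, 5.0, 5.0], [1.5, 2.0, 3.0], [1.0, 2.0, 3.0]))
# → [5.5, 5.0, 5.0]
```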

ROBOTIC MANIPULATION USING AN INDEPENDENTLY ACTUATED VISION SYSTEM, AN ADVERSARIAL CONTROL SCHEME, AND A MULTI-TASKING DEEP LEARNING ARCHITECTURE

An automation system includes a manipulation system including a manipulator for moving an object to a target location, a vision system for detecting landmarks on the object and the target location, and a learning and control module. The vision system is movable. The learning and control module is configured to control a movement of the manipulator and change a field of view of the vision system independent of the movement of the manipulator.
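The key structural point, that the camera's field of view is commanded independently of the arm, can be sketched as a module emitting two separate commands per cycle. The class name, the landmark dictionary, and the aim-at-midpoint heuristic are hypothetical; the abstract's module would be learned.

```python
class LearningAndControlModule:
    """Sketch of a controller that issues separate commands to the
    manipulator and to an independently actuated vision system, so the
    camera's field of view can change without moving the arm."""

    def plan(self, landmarks, target):
        # Hypothetical policy: drive the manipulator toward the target,
        # and re-aim the camera at the midpoint of object and target.
        obj = landmarks["object"]
        manip_cmd = tuple(t - o for o, t in zip(obj, target))
        cam_aim = tuple((o + t) / 2 for o, t in zip(obj, target))
        return manip_cmd, cam_aim

module = LearningAndControlModule()
print(module.plan({"object": (0.0, 0.0, 0.0)}, (2.0, 2.0, 2.0)))
# → ((2.0, 2.0, 2.0), (1.0, 1.0, 1.0))
```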

VIEWPOINT INVARIANT VISUAL SERVOING OF ROBOT END EFFECTOR USING RECURRENT NEURAL NETWORK
20240017405 · 2024-01-18 ·

Training and/or using a recurrent neural network model for visual servoing of an end effector of a robot. In visual servoing, the model can be utilized to generate, at each of a plurality of time steps, an action prediction that represents a prediction of how the end effector should be moved to cause the end effector to move toward a target object. The model can be viewpoint invariant in that it can be utilized across a variety of robots having vision components at a variety of viewpoints and/or can be utilized for a single robot even when a viewpoint, of a vision component of the robot, is drastically altered. Moreover, the model can be trained based on a large quantity of simulated data that is based on simulator(s) performing simulated episode(s) in view of the model. One or more portions of the model can be further trained based on a relatively smaller quantity of real training data.
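The per-time-step action prediction can be sketched as a tiny recurrent policy in NumPy. The dimensions and random weights are placeholders; the abstract's model would be trained, largely on simulated episodes, and conditioned on the target object.

```python
import numpy as np

class RecurrentServoPolicy:
    """Minimal sketch of a recurrent visual-servoing policy: each time
    step consumes an image feature vector, updates a hidden state, and
    predicts an end-effector motion (dx, dy, dz)."""

    def __init__(self, feat_dim=8, hidden_dim=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.standard_normal((hidden_dim, feat_dim)) * 0.1
        self.W_h = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
        self.W_out = rng.standard_normal((3, hidden_dim)) * 0.1
        self.h = np.zeros(hidden_dim)

    def step(self, features):
        # The hidden state carries history across time steps, which is
        # what lets the policy adapt when the viewpoint is unfamiliar.
        self.h = np.tanh(self.W_in @ features + self.W_h @ self.h)
        return self.W_out @ self.h

policy = RecurrentServoPolicy()
for _ in range(3):
    action = policy.step(np.ones(8))
print(action.shape)  # → (3,)
```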

CONTROL OF A ROBOT ASSEMBLY
20200114518 · 2020-04-16 ·

A method for the control of a robot assembly having at least one robot. The method includes acquiring pose data of an object arrangement having at least one object, the pose data having a first time interval; determining, on the basis of the acquired pose data, modified pose data of the object arrangement having a second time interval that is larger than, smaller than, or equal to the first time interval; and controlling the robot assembly on the basis of said modified pose data.
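One way to derive pose data at a second time interval from data acquired at a first interval is per-coordinate linear interpolation, sketched below. This choice of resampling is an assumption; a controller might instead filter or predict poses.

```python
import numpy as np

def resample_poses(times, poses, new_interval):
    """Resample pose samples (acquired at the timestamps in `times`)
    onto a regular grid with spacing `new_interval`, interpolating each
    pose coordinate linearly. `poses` is an (N, D) array-like."""
    poses = np.asarray(poses, float)
    new_times = np.arange(times[0], times[-1] + 1e-9, new_interval)
    out = np.stack([np.interp(new_times, times, poses[:, k])
                    for k in range(poses.shape[1])], axis=1)
    return new_times, out

# Poses at 1 s intervals resampled to 0.5 s intervals.
t, p = resample_poses([0.0, 1.0, 2.0], [[0, 0], [2, 2], [4, 4]], 0.5)
print(p[:, 0])  # → [0. 1. 2. 3. 4.]
```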

VIEWPOINT INVARIANT VISUAL SERVOING OF ROBOT END EFFECTOR USING RECURRENT NEURAL NETWORK
20200114506 · 2020-04-16 ·

Training and/or using a recurrent neural network model for visual servoing of an end effector of a robot. In visual servoing, the model can be utilized to generate, at each of a plurality of time steps, an action prediction that represents a prediction of how the end effector should be moved to cause the end effector to move toward a target object. The model can be viewpoint invariant in that it can be utilized across a variety of robots having vision components at a variety of viewpoints and/or can be utilized for a single robot even when a viewpoint, of a vision component of the robot, is drastically altered. Moreover, the model can be trained based on a large quantity of simulated data that is based on simulator(s) performing simulated episode(s) in view of the model. One or more portions of the model can be further trained based on a relatively smaller quantity of real training data.

Automating robot operations

A method to control operation of a robot includes generating at least one virtual image by an optical 3D measurement system and with respect to a 3D measurement coordinate system, the at least one virtual image capturing a surface region of a component. The method further includes converting a plurality of point coordinates of the virtual image into point coordinates with respect to a robot coordinate system by a transformation instruction and controlling a tool element of the robot using the point coordinates with respect to the robot coordinate system so as to implement the operation.
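The conversion step is a standard frame change. A minimal sketch, assuming the transformation instruction is a 4x4 homogeneous matrix from the 3D measurement coordinate system to the robot coordinate system:

```python
import numpy as np

def to_robot_frame(points, T):
    """Apply a 4x4 homogeneous transform T (measurement frame -> robot
    frame) to an (N, 3) array of point coordinates from the virtual
    image, returning the coordinates in the robot coordinate system."""
    pts = np.asarray(points, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to (x, y, z, 1)
    return (T @ homog.T).T[:, :3]

# Example: the robot frame is translated by (1, 2, 3) from the
# measurement frame.
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]
print(to_robot_frame([[0.0, 0.0, 0.0]], T))  # → [[1. 2. 3.]]
```

In practice T would be obtained by calibrating the optical 3D measurement system against the robot (hand-eye calibration), which the abstract leaves unspecified.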

Robotic system with error detection and dynamic packing mechanism
10618172 · 2020-04-14 ·

A method for operating a robotic system includes determining a discretized object model based on source sensor data; comparing the discretized object model to a packing plan or to master data; determining a discretized platform model based on destination sensor data; determining height measures based on the destination sensor data; comparing the discretized platform model and/or the height measures to an expected platform model and/or expected height measures; and determining one or more errors by (i) determining at least one source matching error by identifying one or more disparities between (a) the discretized object model and (b) the packing plan or the master data or (ii) determining at least one destination matching error by identifying one or more disparities between (a) the discretized platform model or the height measures and (b) the expected platform model or the expected height measures, respectively.
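The matching-error checks reduce to comparing discretized models cell by cell. A minimal sketch with boolean occupancy grids (the grid representation is my assumption; the patent's discretized models and height measures carry more detail):

```python
import numpy as np

def matching_errors(observed, expected):
    """Compare a discretized model built from sensor data against the
    expected model (e.g., from the packing plan or master data) and
    return the indices of disparity cells; any disparity indicates a
    source or destination matching error."""
    a = np.asarray(observed, bool)
    b = np.asarray(expected, bool)
    return np.argwhere(a != b)

observed = [[1, 1, 0],
            [0, 1, 0]]
expected = [[1, 1, 0],
            [0, 0, 0]]
print(matching_errors(observed, expected))  # → [[1 1]]
```

The same comparison applies on the source side (discretized object model vs. packing plan) and the destination side (discretized platform model vs. expected platform model).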

Object Pickup Strategies for a Robotic Device

Example embodiments may relate to methods and systems for selecting a grasp point on an object. In particular, a robotic manipulator may identify characteristics of a physical object within a physical environment. Based on the identified characteristics, the robotic manipulator may determine potential grasp points on the physical object corresponding to points at which a gripper attached to the robotic manipulator is operable to grip the physical object. Subsequently, the robotic manipulator may determine a motion path for the gripper to follow in order to move the physical object to a drop-off location for the physical object and then select a grasp point, from the potential grasp points, based on the determined motion path. After selecting the grasp point, the robotic manipulator may grip the physical object at the selected grasp point with the gripper and move the physical object through the determined motion path to the drop-off location.
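The final selection step, choosing among potential grasp points based on the determined motion path, can be sketched with a simple criterion. The shortest-straight-line-path rule below is a hypothetical stand-in; the embodiments would also weigh gripper reachability and collisions along the path.

```python
import math

def select_grasp_point(candidates, drop_off):
    """Pick the grasp point whose straight-line motion path to the
    drop-off location is shortest. Candidates and drop_off are
    (x, y, z) tuples."""
    return min(candidates, key=lambda p: math.dist(p, drop_off))

grasps = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
print(select_grasp_point(grasps, (1.0, 1.0, 2.0)))  # → (1.0, 1.0, 1.0)
```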