Non-contact method and system for controlling an industrial automation machine

11314220 · 2022-04-26

Abstract

A method and system are provided for controlling an industrial automation machine using non-contact (virtual) position encoding. The system and method can be used to determine the position of an object under assembly on a conveyor system without mechanical coupling to a line or drive mechanism of the line.

Claims

1. A non-contact method of controlling an industrial automation machine configured to accept input from a mechanical position encoder to perform an industrial task on an inanimate object moving on a conveyor, the method comprising: providing at least one 3D or depth sensor at a vision station located in an industrial environment, each sensor having a field of view at the vision station to obtain a stream of point cloud data representative of a surface shape of the object moving along or about an axis in the vision station within its field of view wherein the step of providing provides continuous, non-contact, position and velocity measurements of the moving object; real time tracking the motion of the object within the vision station based on each stream of point cloud data utilizing a continuously iterated, cloud-based, pose estimation algorithm to process each stream of point cloud data in real time to obtain at least one continuous stream of estimated poses, wherein the step of tracking includes the step of processing the at least one continuous stream of estimated poses in real time to obtain kinematic state estimates of the object and processing the kinematic state estimates in real time to obtain an evolution of the state of the object; and providing a signal generator configured to be in direct communication with the industrial automation machine to generate a stream of command signals based on the evolution of the state of the object to control the machine via low-latency signaling so that the machine accurately performs the industrial task at at least one accurate location on the moving object wherein the stream of command signals are indicative of the position and velocity of the moving object and are based on the continuous, non-contact position and velocity measurements of the moving object.

2. The method as claimed in claim 1, wherein the industrial automation machine is an industrial robot.

3. The method as claimed in claim 1, wherein at least one of the command signals is a trigger signal.

4. The method as claimed in claim 1, wherein at least one of the command signals is a stream of quadrature signals for each axis.

5. The method as claimed in claim 2, wherein the machine is an inspection machine to inspect the object.

6. The method as claimed in claim 2, wherein the industrial task includes one of an assembly task, an inspection task, a painting task, a scanning task and a coating task.

7. The method as claimed in claim 1, wherein the conveyor is a linear conveyor.

8. The method as claimed in claim 1 further comprising determining a multidimensional offset of the object from a reference pose and generating an offset signal for use by the industrial automation machine based on the offset.

9. The method as claimed in claim 1, wherein the step of processing the kinematic state estimates utilizes a transient model.

10. The method as claimed in claim 1, wherein the step of processing the kinematic state estimates utilizes a steady state model.

11. The method as claimed in claim 1, wherein a plurality of 3D or depth sensors are provided and wherein each of the sensors is mounted in a fixed position within the vision station.

12. A non-contact system for controlling an industrial automation machine configured to accept input from a mechanical position encoder to perform an industrial task on an inanimate object moving on a conveyor, the system comprising: at least one 3D or depth sensor, each sensor having a field of view at a vision station to obtain a stream of point cloud data representative of a surface shape of the object moving along or about an axis in the vision station within its field of view wherein the at least one 3D or depth sensor provides continuous, non-contact, position and velocity measurements of the moving object; a tracker to track the motion of the object within the vision station as a function of time based on each stream of point cloud data utilizing a continuously iterated, cloud-based, pose estimation algorithm to obtain at least one continuous stream of estimated poses, a kinematic estimator to process each stream of estimated poses in real time to obtain kinematic state estimates of the object and a kinematic evolution model to process the kinematic state estimates in real time to obtain an evolution of the state of the object; and a signal generator configured to be in direct communication with the industrial automation machine to generate a stream of command signals based on the evolution of the state of the object to control the machine via low-latency signaling so that the machine accurately performs the industrial task at at least one accurate location on the moving object wherein the stream of command signals are indicative of the position and velocity of the moving object and are based on the continuous, non-contact position and velocity measurements of the moving object.

13. The system as claimed in claim 12, wherein the industrial automation machine is an industrial robot.

14. The system as claimed in claim 12, wherein at least one of the command signals is a trigger signal.

15. The system as claimed in claim 12, wherein at least one of the command signals is a stream of quadrature signals for each axis.

16. The system as claimed in claim 13, wherein the machine is an inspection machine to inspect the object.

17. The system as claimed in claim 13, wherein the industrial task includes one of an assembly task, an inspection task, a painting task, a scanning task and a coating task.

18. The system as claimed in claim 12, wherein the conveyor is a linear conveyor.

19. The system as claimed in claim 12, wherein the tracker includes at least one processor programmed with the pose estimation algorithm, the kinematic estimator and the kinematic evolution model.

20. The system as claimed in claim 19, wherein the programmed processor utilizes a transient model to process the kinematic state estimates.

21. The system as claimed in claim 19, wherein the programmed processor utilizes a steady state model to process the kinematic state estimates.

22. The system as claimed in claim 12, wherein the system includes a plurality of 3D or depth sensors and wherein each of the sensors is mounted in a fixed position within the vision station.

23. The system as claimed in claim 12 further comprising means for determining a multidimensional offset of the object from a reference pose and a signal generator to generate an offset signal for use by the industrial automation machine based on the offset.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a block diagram, schematic view of a system constructed in accordance with at least one embodiment of the present invention;

(2) FIG. 2 is a view similar to the view of FIG. 1 except the signal generator is a quadrature signal generator and not a position trigger signal generator; and

(3) FIGS. 3a and 3b are schematic diagrams illustrating the generation of quadrature signals.

DESCRIPTION OF PREFERRED EMBODIMENTS

(4) As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention that may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention.

Definitions and Mathematical Systems

(5) “Following the Line”—tracking the pose of an object as it moves along a conveyor

(6) “Target Object”—an object moving along a conveyor.

(7) “Effector”—a Tool which changes a Target Object in some way, e.g. a robot.

(8) “Inspector”—a Tool which measures a Target Object, e.g. a Gap and Flush sensor.

(9) “Tool”—an Effector or an Inspector.

(10) The “Virtual Encoder” method and system of the present invention is a method and apparatus to estimate the 6DOF pose of a Target Object as a function of time, then to communicate this information in some usable fashion. The consumer of the information is a Tool, which may be an Effector or an Inspector. The Virtual Encoder is distinguished from a Mechanical Encoder in that the Virtual Encoder is a non-contact device.

(11) The Virtual Encoder method and apparatus is a superior means for determining the position of objects along a conveyor line, compared with present methods using Mechanical Encoders. The Virtual Encoder enables assembly methods which have heretofore not been practical because of the limitations of present methods.

(12) Symbols and Mathematical Definitions

(13) p = (x, y, z, α, β, γ)ᵗ — The pose of a rigid body requires 3 positions + 3 rotations

(14) {p̂_i, ṗ̂_i, p̈̂_i, …} — The predicted pose, velocity, acceleration, … of an object at time t_i

(15) {p̃_i, ṗ̃_i, p̈̃_i, …} — The measured pose, velocity, acceleration, … of an object at time t_i

(16) S_i = {p_i, ṗ_i, p̈_i, …} — The actual dynamic state of an object

(17) Ŝ_i = {p̂_i, ṗ̂_i, p̈̂_i, …} — The predicted dynamic state of an object

(18) S̃_i = {p̃_i, ṗ̃_i, p̈̃_i, …} — The measured dynamic state of an object

(19) S_i* = {p_i*, ṗ_i*, p̈_i*, …} — The commanded dynamic state of an object

(20) P(t_n) = {p_0, …, p_n} — A (historical) trajectory of an object's pose through time

(21) S(t_n) = {S_0, …, S_n} — The (historical) trajectory of an object's state through time

(22) Ŝ(t_n) = {S_0, …, S_n, Ŝ_{n+1}, Ŝ_{n+2}, …} — The (predicted) evolution of an object's state

(23) K(p̃_0, …, p̃_n) = S̃(t_n) — A kinematic estimator K estimates the dynamic state of the object from its measured poses

(24) M(S̃(t_n)) = Ŝ(t_n) — A kinematic model M predicts an object's state's evolution.

(25) The Virtual Encoder method and system typically estimates the pose of a Target Object between ~½ meter and ~5 meters in length moving at speeds less than ~1000 mm/sec along a linear conveyor.

(26) The Virtual Encoder method and system typically estimates the pose of Target Objects in motion along a one-dimensional axis (the axis of travel of the linear conveyor) to enable Assembly Operations on the Target Object in motion:
p(t) = (x(t), y, z, α, β, γ)ᵗ

(27) In at least one embodiment, the five steady-state dimensions {y, z, α, β, γ} may be modeled assuming they are fixed in a 'reference' pose. That is: the position and orientation of any point on the Target Object is determined by x(t) and knowledge of the original 'reference' orientation and position of the object, p_0 = (x_0, y_0, z_0, α_0, β_0, γ_0)ᵗ
p_A1(t) = (x̂(t), 0, 0, 0, 0, 0)ᵗ + p_0

(28) In at least one embodiment, the five steady-state dimensions may be modeled by assuming they have undergone a rigid transform, T, relative to the 'reference' pose. One embodiment requires a measurement of this rigid transform.
p_A2(t) = (x̂(t), 0, 0, 0, 0, 0)ᵗ + T·p_0

(29) The Virtual Encoder method and system is preferably designed for assembly processes requiring position errors less than ˜25 mm: e.g. Paint Application, Badging Inspection, Paint Inspection, Gap and Flush Inspection. This translates to the requirement that
|p̂(t) − p(t)|_max ≤ 25 mm

(30) Communication of the object position as a function of time should occur by mimicking a quadrature position encoder signal to take advantage of the pre-existing, low-latency, encoder inputs which are supported by current-generation robot controllers.

(31) The Virtual Encoder system and method operate in two modes. In the first mode one assumes the Target Object is in motion along an assembly line while the Tool is immobile. In the second mode one assumes the Target Object is in motion and the Tool is tracking.

(32) ‘Trigger’ an operation at a desired position, while the Target Object moves along the assembly line. The Tool is immobile. This case can be thought of as taking a ‘snap shot’ of a Target Object when it reaches a predetermined position. The position (pose) at which an assembly process happens is controlled, but not the velocity.

(33) ‘Track’ the object in motion so that the Target Object is at a relative standstill relative to the Tool. This case can be thought of as working to fool the Tool into thinking the object is stationary. The pose and the velocity at which an assembly process happens are controlled.

(34) Design Specification: Block Diagram of FIG. 1 shows a continuous stream {p̃_0, …, p̃_n} of pose estimates for a Target Object. The Virtual Encoder method and system emits one or more trigger signal(s) when the Target Object reaches one or more predetermined position(s), without physically contacting the Target Object.

(35) Design Specification: Block Diagram of FIG. 2 shows a continuous stream {p̃_0, …, p̃_n} of pose estimates for a Target Object. The Virtual Encoder method and system generates a quadrature signal indicative of the position and velocity of the Target Object, without physically contacting the Target Object.

Design Specification Details

(36) One or more Volumetric Sensors (i.e. 3D sensors) gather 3D data (Point Clouds) from Target Objects in motion through an assembly station. These Volumetric Sensors are mounted in fixed positions at distances between 1 and 4 meters from the Target Object(s). A given assembly station may be monitored by 1 to 4 Volumetric or 3D Sensors.

(37) Calculation of objects' positions as a function of time uses CICPEA technology (Continuously Iterated Cloud-Based Pose Estimation Algorithms). See ‘CICPEA’ below.

(38) Quadrature signal emulation occurs as described in ‘Quadrature Signal Generation’ below.

(39) Low-latency and low-jitter signaling (to trigger a Tool and/or to trigger a pose measurement) is important. For example, on a hypothetical assembly line moving at 200 mm/sec, a 50 millisecond trigger delay corresponds to a position error of 10 millimeters.

(40) Volumetric Sensors: Several volumetric sensors are known to the art and available commercially which are capable of producing streams of point clouds indicative of the surface shape of objects in motion within their fields of view. For example: Microsoft Kinect, PrimeSense Carmine, Orbbec Astra, Intel RealSense, etc.

(41) The sensors used to create streams of point clouds for the Virtual Encoder method and system are chosen according to the specific requirements of an application, and may or may not be modified or improved versions of commercially available sensors.

(42) CICPEA Pose Estimation

(43) CICPEA=Continuously Iterated Cloud-Based Pose Estimation Algorithms

(44) Under the assumption that a Target Object is moving ‘slowly’ compared to the rate of point cloud sampling it is reasonable to estimate that as time evolves, object poses change slowly. Hence, a good estimate for the pose of an object at t.sub.n+1 is the pose of the object at t.sub.n:

(45) p̂_{n+1} ≈ p̃_n, or p̂_{n+1} ≈ p̃_n + ṗ̃_n·dt + p̈̃_n·dt²/2 + …

(46) Such approximations can improve the accuracy and speed of algorithms which operate on Point Clouds of data to produce Pose Estimates. Massively parallel geometric processors (such as NVIDIA computing hardware) enable Continuously Iterated Cloud-Based Pose Estimation Algorithms=CICPEA technology. CICPEA technology is used for pose estimation by the Virtual Encoder method and system.
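The seeding idea above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation; the function name, the 6-tuple pose layout, and the fixed sample interval dt are all assumptions.

```python
# Sketch of pose seeding for an iterated pose-estimation loop: when sampling
# is fast relative to object motion, the previous pose (optionally pushed
# forward by a first-order Taylor term) is a good initial guess for the next.

def seed_pose(poses, dt):
    """Predict the pose at t_{n+1} from a history of measured poses.

    poses: list of 6-tuples (x, y, z, alpha, beta, gamma), oldest first.
    dt: sample interval between consecutive poses (seconds).
    """
    if len(poses) < 2:
        return poses[-1]                 # zeroth-order: p_hat_{n+1} ~ p_n
    prev, last = poses[-2], poses[-1]
    # first-order: p_hat_{n+1} ~ p_n + p_dot_n * dt,
    # with p_dot_n estimated by backward differencing
    return tuple(p + ((p - q) / dt) * dt for p, q in zip(last, prev))

# A target moving at a constant 100 mm/s in x, sampled every 0.05 s:
history = [(0.0, 0, 0, 0, 0, 0), (5.0, 0, 0, 0, 0, 0)]
guess = seed_pose(history, 0.05)
```

The better the seed, the fewer iterations the downstream registration step needs, which is what makes continuously iterated pose estimation fast enough for real-time tracking.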

(47) A variety of CICPEA algorithms are known to the art for continuous pose estimation, a prominent example being the KinFu algorithm.

Kinematics Estimator

(48) Several methods are known to the art for producing kinematic state estimates in one dimension from streams of (potentially noisy) pose estimates. In the context of the Virtual Encoder method and system these estimators are termed Kinematics Estimators.

(49) The Kinematics Estimator block, K, of the Virtual Encoder method and system receives a stream of pose estimates from CICPEA algorithms and produces an estimate of the kinematic state of a Target Object:
K(p̃_0, …, p̃_n) = S̃(t_n)

(50) The kinematics estimator in use for any particular application is configurable by the Virtual Encoder method and system depending on the needs of that installation. Successful Kinematics Estimators used by the Virtual Encoder method and system include: Kalman Filters, a variety of robust estimators for position and velocity, linear least squares fit for position and velocity, and so on. Other appropriate methods will suggest themselves to persons versed in the art.
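One of the estimators named above, a linear least-squares fit for position and velocity, can be sketched as follows. The function name and windowing convention are illustrative assumptions, not the patent's code.

```python
# Sketch of a Kinematics Estimator: fit x(t) = x0 + v*t by least squares over
# a window of (possibly noisy) 1D pose estimates, yielding position and
# velocity estimates for the kinematic state.

def fit_position_velocity(times, xs):
    """Least-squares fit of x(t) = x0 + v*t; returns (x0, v)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_x = sum(xs) / n
    stt = sum((t - mean_t) ** 2 for t in times)
    stx = sum((t - mean_t) * (x - mean_x) for t, x in zip(times, xs))
    v = stx / stt                        # slope = velocity estimate
    x0 = mean_x - v * mean_t             # intercept = position at t = 0
    return x0, v

# Noiseless check: an object at 200 mm/s starting from 50 mm.
ts = [0.0, 0.1, 0.2, 0.3, 0.4]
xs = [50.0 + 200.0 * t for t in ts]
x0, v = fit_position_velocity(ts, xs)
print(round(x0, 6), round(v, 6))  # → 50.0 200.0
```

With noisy data the same fit averages out measurement jitter over the window, at the cost of some lag; a Kalman filter trades these off recursively instead.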

(51) Kinematics Model: Given a historical state estimate from a Kinematics Estimator, the task of the Kinematics Model is to predict the evolution of the state of the Target Object. Any number of Kinematics Models will suggest themselves to persons versed in the art, but a favored method for the Virtual Encoder is the following:

(52) For each new state estimate in the sequence {S̃_0, …, S̃_n}, extract the position and velocity estimates from the final two (most recent) state estimates, {x̃_{n−1}, ẋ̃_{n−1}, x̃_n, ẋ̃_n};

(53) Calculate the sample interval λ = t_n − t_{n−1};

(54) Set the steady-state velocity to the most recent velocity estimate, ν_∞ = ẋ̃_n, and set a 'slow velocity' threshold ν_slow to 6σ_ν, where σ_ν is the conveyor velocity uncertainty;

(55) If ν_∞ ≤ ν_slow, calculate a 1st-order (position/velocity) 'transient' model: M^T = M_1^T(t_n + dt): x(dt) = x + ẋ·dt;

(56) else calculate a 3rd-order (position/velocity/acceleration/jerk) 'transient' model:

(57) M^T = M_3^T(t_n + dt): x(dt) = x + ẋ·dt + ẍ·dt²/2 + x⃛·dt³/6;

(58) Calculate the evolution of the state of the Target Object anew as follows:

(59) For t < t_n + λ, use the transient model: M^T(t): x(t − t_n + λ);

(60) For t ≥ t_n + λ, use the 'steady state' model: M^S(t): x(t − (t_n + λ)) = x̃_n + ν_∞·dt;
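The handoff rule above can be sketched as a small dispatcher. This is an illustrative sketch: the helper name is invented, the stand-in models are a 1st-order transient joined to a constant-velocity steady state, and the convention that transient model time is measured from t_{n−1} is an assumption.

```python
# Sketch of the piecewise state-evolution rule: positions before t_n + lam
# come from the transient model M^T; from t_n + lam onward they come from the
# steady-state model M^S.

def make_evolution(t_n, lam, transient, steady):
    """Return x(t) dispatching between the two models.

    transient(dt): position at model time dt (assumed measured from t_{n-1}).
    steady(dt):    position dt seconds after the handoff at t_n + lam.
    """
    def x(t):
        if t < t_n + lam:
            return transient(t - t_n + lam)    # transient regime
        return steady(t - (t_n + lam))         # steady-state regime
    return x

# Example handoff: both models agree at the switch point t_n + lam.
x_of_t = make_evolution(t_n=1.0, lam=0.1,
                        transient=lambda dt: 100.0 + 200.0 * dt,
                        steady=lambda dt: 120.0 + 200.0 * dt)
```

Recomputing this piecewise function each time a new state estimate arrives is what keeps the emitted signal stream continuous across model updates.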

(61) Transient Models:

(62) If ν_∞ ≤ ν_slow, the 1st-order (position/velocity) transient model is calculated as follows

(63) M^T = M_1^T(t_n + dt): x(dt) = x̃_{n−1} + ((x̃_n − x̃_{n−1})/λ)·dt

(64) Else, if ν_∞ > ν_slow, the 3rd-order (position/velocity/acceleration/jerk) transient kinematic model is calculated by solving the following equations

(65) x(dt) = x + ẋ·dt + ẍ·dt²/2 + x⃛·dt³/6
x(0) = x̃_{n−1};  x(λ) = x̃_n
ẋ(0) = ẋ̃_{n−1};  ẋ(λ) = ẋ̃_n

which yields:

(66) M^T = M_3^T(t_n + dt): x(dt) = x_0 + v_0·dt + a·dt²/2 + j·dt³/6, where
x_0 = x̃_{n−1};  x_1 = x̃_n;  v_0 = ẋ̃_{n−1};  v_1 = ẋ̃_n
j = [(x_1 − x_0 − v_0·λ) − (v_1 − v_0)·λ/2] / (−λ³/12)
a = [(x_1 − x_0 − v_0·λ) − j·λ³/6] / (λ²/2)
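The coefficient formulas above can be checked numerically: solving for jerk j and acceleration a from the four boundary conditions, the cubic must reproduce the endpoint position and velocity. The function name below is illustrative.

```python
# Sketch of the 3rd-order transient model: compute (a, j) so that
# x(dt) = x0 + v0*dt + a*dt**2/2 + j*dt**3/6 satisfies
# x(0) = x0, x(lam) = x1, x'(0) = v0, x'(lam) = v1.

def cubic_transient(x0, x1, v0, v1, lam):
    """Return (a, j) for the 3rd-order transient model."""
    j = ((x1 - x0 - v0 * lam) - (v1 - v0) * lam / 2) / (-lam**3 / 12)
    a = ((x1 - x0 - v0 * lam) - j * lam**3 / 6) / (lam**2 / 2)
    return a, j

# Boundary conditions: 0 -> 25 mm over 0.1 s, velocity 200 -> 300 mm/s.
lam = 0.1
x0, x1, v0, v1 = 0.0, 25.0, 200.0, 300.0
a, j = cubic_transient(x0, x1, v0, v1, lam)
x_end = x0 + v0 * lam + a * lam**2 / 2 + j * lam**3 / 6
v_end = v0 + a * lam + j * lam**2 / 2
print(round(x_end, 6), round(v_end, 6))  # → 25.0 300.0
```

Matching both position and velocity at the endpoints keeps the predicted trajectory free of the jumps that would otherwise appear in the emitted quadrature stream each time the model is refreshed.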

(67) Position Trigger Generator: Given a sequence of trigger positions {q_0, …, q_k} at which an assembly operation for a Target Object should be triggered: each time the Kinematics Model is updated, calculate the predicted 'trigger times' {t_0, …, t_k} for these assembly operations via:

(68) x(t_0 + ϵ) = q_0, …, x(t_k + ϵ) = q_k
where ϵ is the signal latency for the assembly operation signal transmission. When a trigger time is reached, signal the trigger for that event and remove the trigger from the list of trigger positions for that Target Object.
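For a constant-velocity steady-state model, inverting these equations is one line of algebra. The sketch below assumes that model; the function name and argument layout are illustrative, not the patent's code.

```python
# Sketch of the Position Trigger Generator: invert the constant-velocity
# model x(t) = x_now + v*(t - t_now) to predict when each trigger position
# q_k will be reached, firing early by the known signal latency eps so the
# operation lands at the right position.

def trigger_times(q_list, x_now, v, t_now, eps):
    """Solve x(t_k + eps) = q_k for t_k; returns the trigger times."""
    return [t_now + (q - x_now) / v - eps for q in q_list]

# Object at 100 mm moving at 200 mm/s at t = 0, with 5 ms signal latency:
print(trigger_times([300.0, 500.0], 100.0, 200.0, 0.0, 0.005))
# → [0.995, 1.995]
```

Subtracting ϵ up front is what compensates for transmission delay: at a conveyor speed of 200 mm/s, ignoring a 5 ms latency would shift every operation by 1 mm.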

(69) Quadrature Signal Generator: The task of the Quadrature Signal Generator is to create a stream of output quadrature signals to emulate the position evolution of the Target Object. Each time a new state estimate is available the Quadrature Signal Generator must be reinitialized.

(70) The time-resolution of the quadrature signal generating circuitry, ϵ_g, and the time-resolution of the quadrature signal receiving circuitry, ϵ_r, should be predetermined. The rate of production of output signals is limited by dt = 4·max{ϵ_g, ϵ_r}. Pseudo-code for the quadrature signal generator is as follows:

(71)
Begin QuadratureSignalGenerator
    x = x_0
    t = t_n
    dt = 4*max(ϵ_g, ϵ_r)
    λ = t_n − t_{n−1}
    while (t < t_n + λ) {
        x* = integer(M^T(t))
        if (x* ≠ x) {
            SendQuadraturePulse(x* − x)
            x = x*
        }
        t += dt
    }
    while (t ≥ t_n + λ) {
        x* = integer(M^S(t))
        if (x* ≠ x) {
            SendQuadraturePulse(x* − x)
            x = x*
        }
        t += dt
    }
End QuadratureSignalGenerator

(72) Quadrature Signal Generation: The signal generation of a Mechanical Rotary Encoder is driven by the rotation of a wheel. The Virtual Encoder emulates a quadrature signal in response to the stream of Target Object pose measurements ‘as-if’ a mechanical encoder was measuring the position of the Target Object.

(73) A quadrature encoder signal is a two-channel binary signal which indicates both the direction and rate of change of an object's position. The rate of change is indicated by the rate at which HIGH-LOW transitions occur. The direction is indicated by the relative phase of the A and B channels as illustrated in FIGS. 3a and 3b. Pseudo-code for a quadrature pulse is as follows:

(74)
Subroutine SendQuadraturePulse
    Parameter Integer dx*
    if (dx* > 0) {
        digitalOutput(A, true)
        wait(λ)
        digitalOutput(B, true)
        wait(λ)
        digitalOutput(A, false)
        wait(λ)
        digitalOutput(B, false)
        wait(λ)
    }
    else {
        digitalOutput(B, true)
        wait(λ)
        digitalOutput(A, true)
        wait(λ)
        digitalOutput(B, false)
        wait(λ)
        digitalOutput(A, false)
        wait(λ)
    }
End Subroutine SendQuadraturePulse
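The A/B phase relationship can be illustrated as a Gray-code state sequence, which is how standard quadrature decoders interpret the two channels. This sketch is a generic illustration of quadrature signaling, not the patent's circuitry; the names are invented.

```python
# Sketch of the two-channel quadrature pattern: the (A, B) pair steps through
# the Gray-code cycle 00 -> 10 -> 11 -> 01 for forward counts and through the
# reverse cycle for backward counts, so direction is carried by relative phase.

FORWARD = [(0, 0), (1, 0), (1, 1), (0, 1)]   # (A, B) states, one count each

def quadrature_states(counts):
    """Yield the (A, B) state after each of |counts| position counts
    (positive = forward, negative = backward), starting from (0, 0)."""
    phase, out = 0, []
    step = 1 if counts > 0 else -1
    for _ in range(abs(counts)):
        phase = (phase + step) % 4
        out.append(FORWARD[phase])
    return out

print(quadrature_states(3))   # → [(1, 0), (1, 1), (0, 1)]
print(quadrature_states(-1))  # → [(0, 1)]
```

Because only one channel changes per count, a receiver sampling at the resolution ϵ_r discussed above can always recover both the count and its direction without ambiguity.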

(75) The advantages of the above-noted method and system are numerous, including but not limited to:
1. The Virtual Encoder method and system achieve much greater accuracies than the mechanical encoder method.
2. The apparatus of the virtual encoder method (3D sensors and computational hardware necessary to run algorithms for tracking the linear motion of an object) may be installed near the assembly station where the information is needed. Thus, there is no problem with installing multiple such stations along the same segment of an assembly line.
3. The Virtual Encoder system is not mechanically coupled to the conveyor line, so it has none of the robustness problems of the mechanical encoder.
4. Since the 'virtual' encoder method and system are implemented via computer algorithms, the virtual encoder is also a 'smart' encoder, and may implement a variety of advanced signal filtering methods (such as Kalman filtering) which are well known to improve the performance of position prediction methods in the presence of noise, but which are unavailable to purely mechanical encoders.

(76) The pose of an object can be estimated using a sensor capable of measuring range (depth) data. Location of the object relative to the sensor can be determined from one or more range measurements. Orientation of the object can be determined if the sensor provides multiple range measurements for points on the object. Preferably a dense cloud of range measurements is provided by the sensor so that orientation of the object can be determined accurately.

(77) Use of at least one embodiment of the present invention improves the accuracy and robustness of position encoding along a conveyor system and, consequently, enables manufacturing techniques which heretofore have not been possible or have not been economically feasible. Examples of such practices and techniques are:
1. Accurately following the contours of an object being painted, scanned, or otherwise coated when the contours of the object are already known, hence improving the efficiency of painting operations.
2. Allowing inspections to be performed at accurate locations on a vehicle while in motion, for example: measuring body panel gap and flush, or making ultrasonic measurements for leaks around windows.
3. When coupled with a separate system for measuring the contours of an unknown object, allowing the system to follow the contours of an unknown object being painted, scanned, or otherwise coated, hence improving the efficiency of such operations.

(78) In one preferred embodiment, the system includes one or more volumetric or 3D sensors configured to observe an object as it traverses an assembly or inspection station. The point cloud data from these sensors is fed to a computer, which implements algorithms for tracking the 1D motion of an object. The position and velocity estimates from these tracking algorithms may be fed through linear or non-linear filtering means such as Kalman filters, model-predictive algorithms, or other filters known to the art for improving position estimations. The result is translated to a time series of quadrature signals by electronic means. The quadrature signal train is fed to a robot or other device configured to use such a signal train for tracking the linear motion of an object.
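The filtering stage mentioned above can be sketched with a fixed-gain (alpha-beta) tracking filter, a simplified stand-in for the Kalman and model-predictive filters named in the text. All names and gain values here are illustrative assumptions.

```python
# Sketch of a 1D tracking filter for the position/velocity estimates feeding
# the quadrature generator: predict forward one sample, then correct position
# and velocity by fixed gains applied to the innovation (measurement residual).

def alpha_beta_track(zs, dt, alpha=0.5, beta=0.3):
    """Filter noisy 1D position estimates zs sampled every dt seconds.

    Returns the final (position, velocity) estimate."""
    x, v = zs[0], 0.0                # initial state: first sample, at rest
    for z in zs[1:]:
        x_pred = x + v * dt          # predict forward one sample
        r = z - x_pred               # innovation (residual)
        x = x_pred + alpha * r       # correct position
        v = v + (beta / dt) * r      # correct velocity
    return x, v

# Noiseless ramp: an object at 200 mm/s sampled every 50 ms.
dt = 0.05
zs = [100.0 + 200.0 * dt * i for i in range(40)]
x, v = alpha_beta_track(zs, dt)
```

A full Kalman filter replaces the fixed gains with gains computed from the modeled process and measurement noise, but the predict/correct structure is the same, which is why such filters drop naturally into this pipeline.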

(79) While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the invention. Additionally, the features of various implementing embodiments may be combined to form further embodiments of the invention.