System and method for error correction for CNC machines
09645217 · 2017-05-09
Assignee
Inventors
Cpc classification
G01R35/00
PHYSICS
G05B2219/39007
PHYSICS
International classification
G01R35/00
PHYSICS
Abstract
A method of determining a positioning error of a CNC machine, and a system thereof, are provided. A calibration element is placed within the CNC machine. A first sensor reading of the calibration element is taken. A second sensor reading is taken, and the sensor is moved so that the difference between the first and second sensor data decreases until the difference becomes less than or equal to a pre-determined threshold value. A positioning error of the tool head is determined based on the movement of the sensor.
Claims
1. A method for determining a positioning error of a computerized numerical control (CNC) machine, wherein the CNC machine is equipped with a calibration element, the calibration element being in a first position, the method comprising the steps of: reading first sensor data from at least one sensor while the calibration element is at the first position, the sensor data corresponding to a distance associated with the calibration element; operating the CNC machine to perform a calibration movement while the calibration element remains generally at the first position; reading second sensor data from the at least one sensor while the calibration element is at a second position, the second position denoting the actual position of the calibration element after the calibration movement has been performed; causing the at least one sensor to move so that the difference between the first and second sensor data decreases until the difference becomes less than or equal to a pre-determined threshold value, wherein the at least one sensor comprises or is mounted to a movement element for moving the at least one sensor; and determining a positioning error of the CNC machine based on the movement of the at least one sensor.
2. The method according to claim 1 further comprising: determining, from the first and second sensor data, including the difference thereof, a first compensation direction, in such a way that a movement of the at least one sensor in the first compensation direction decreases the difference between the first and second sensor data; and causing the at least one sensor to move in the first compensation direction.
3. The method according to claim 2 further comprising performing a closed loop comprising the steps of: reading current sensor data from at least one sensor; determining, from the first and current sensor data, including from the difference thereof, a current compensation direction, in such a way that a movement of the at least one sensor in the current compensation direction decreases the absolute difference between the first and current sensor data; and causing the at least one sensor to move in the current compensation direction.
4. The method according to claim 2, wherein determining the first compensation direction comprises transforming the sensor data into components with respect to a pre-determined coordinate system, including an orthogonal coordinate system.
5. The method of claim 4, wherein reading at least one of the first, second, and current sensor data comprises reading from at least two sensors, and wherein determining the first compensation direction comprises determining components of a velocity vector with respect to the pre-determined coordinate system, including an orthogonal coordinate system, so that a corresponding movement of the at least one sensor decreases the absolute difference between the first and current sensor data.
6. The method according to claim 1, wherein the threshold value is expressed in terms of the coordinate system, including at least one of i) terms of components with respect to the coordinate system and ii) terms of the sensor data.
7. The method according to claim 1 further comprising outputting data indicating the positioning error of the tool head, including at least one of displaying, printing, transmitting, and saving the data.
8. A method for improving the accuracy of a computerized numerical control (CNC) machine, wherein the CNC machine is equipped with a calibration element, the calibration element being in a first position, the method comprising: reading first sensor data from at least one sensor while the calibration element is at the first position, the sensor data corresponding to a distance associated with the calibration element; operating the CNC machine to perform a calibration movement while the calibration element remains generally at the first position; reading second sensor data from the at least one sensor while the calibration element is at a second position, the second position denoting the actual position of the calibration element after the calibration movement has been performed; causing the at least one sensor to move so that the difference between the first and second sensor data decreases until the difference becomes less than or equal to a pre-determined threshold value, wherein the at least one sensor comprises or is mounted to a movement element for moving the at least one sensor; and determining a positioning error of the CNC machine based on the movement of the at least one sensor; and compensating the positioning error of a tool head.
9. A system for determining a positioning error of a computerized numerical control (CNC) machine, wherein the CNC machine is equipped with a calibration element, the system comprising: at least one sensor configured to output sensor data, the sensor data corresponding to a distance associated with the calibration element; a movement element configured to move the at least one sensor; and a control unit configured to process the sensor data received from the at least one sensor and to control the movement element, wherein the control unit is configured to: receive first and second sensor data; output movement data to the movement element causing the movement element to move the at least one sensor so that the difference between the first and second sensor data decreases until the difference becomes less than or equal to a threshold value, wherein the at least one sensor comprises or is mounted to a movement element for moving the at least one sensor; and determine a positioning error of the tool head based on the movement of the at least one sensor.
10. The system according to claim 9 further comprising an output unit configured to output error data corresponding to the positioning error of the CNC machine, wherein outputting comprises at least one of displaying, printing, transmitting and saving the error data.
11. The system according to claim 9, wherein the at least one sensor is a contact point sensor, a dial indicator, a light sensor, a laser sensor, an ultrasonic sensor, a capacitive sensor, or an inductive sensor.
12. The system according to claim 9, wherein the movement element comprises at least one motor, including at least one of an electric motor and a linear actuator.
13. The system according to claim 9, wherein the movement element moves the at least one sensor by translation along at least one coordinate axis of a coordinate system.
14. The system according to claim 9, wherein the movement element comprises at least two motors, wherein the at least two motors may be controlled separately, wherein the movement element moves the at least one sensor by translation along at least two coordinate axes of the coordinate system separately.
15. The system according to claim 9, wherein the movement element moves at least two sensors together, including sensors fixed to a common support base.
16. The system according to claim 9, wherein three sensors are fixed at the corners of an imaginary triangle formed parallel to a surface of the support base, wherein each of the sensors is directed to the center of the triangle and inclined against the surface of the support base.
17. The system according to claim 9, wherein the calibration element comprises a ball.
18. The system according to claim 9, wherein the control unit is further configured to determine, from the first and second sensor data, including the difference thereof, a first compensation direction, in such a way that a movement of the at least one sensor in the first compensation direction decreases the difference between the first and second sensor data; and cause the at least one sensor to move in the first compensation direction.
19. The system according to claim 18, wherein the control unit determines the first compensation direction by determining components of a velocity vector with respect to the pre-determined coordinate system so that a corresponding movement of the at least one sensor decreases the absolute difference between the first and current sensor data.
20. The system according to claim 9, wherein the control unit is further configured to read current sensor data from at least one sensor; determine, from the first and current sensor data, including from the difference thereof, a current compensation direction, in such a way that a movement of the at least one sensor in the current compensation direction decreases the absolute difference between the first and current sensor data; and cause the at least one sensor to move in the current compensation direction.
21. A computer program product, which is stored on a non-transitory machine-readable medium containing computer instructions stored therein for causing a computer processor to perform determining a positioning error of a computerized numerical control (CNC) machine, wherein the CNC machine is equipped with a calibration element, the calibration element being in a first position, the computer program product comprising: computer code to read first sensor data from at least one sensor while the calibration element is at the first position, the sensor data corresponding to a distance associated with the calibration element; computer code to operate the CNC machine to perform a calibration movement while the calibration element remains generally at the first position; computer code to read second sensor data from the at least one sensor while the calibration element is at a second position, the second position denoting the actual position of the calibration element after the calibration movement has been performed; computer code to cause the at least one sensor to move so that the difference between the first and second sensor data decreases until the difference becomes less than or equal to a pre-determined threshold value, wherein the at least one sensor comprises or is mounted to a movement element for moving the at least one sensor; and computer code to determine a positioning error of the CNC machine based on the movement of the at least one sensor.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The present embodiments will be described by some preferred embodiments, provided as non-limiting examples, with reference to the enclosed drawings, in which:
(2)
(3)
(4)
(5)
(6)
DETAILED DESCRIPTION OF EMBODIMENTS
(7) With reference to
(8) The machine tool head 101 represents an interface between the CNC machine and a tool, wherein the tool may be replaceable. The tool may be a tool for shaping, e.g. cutting, milling, drilling, or for measuring and/or testing.
(9) The calibration element 102 may be an element explicitly used for determining a positioning error and/or otherwise calibrating the CNC tool head, or the calibration element 102 may be the tool itself. The former is preferable because the shape of a tool may make it difficult to determine a reliable positioning error of the tool head 101. The latter may be advantageous if the tool is not removable or is difficult to remove from the tool head 101. In the present example the calibration element 102 has the shape of a ball which is connected to the tool head via a cylindrical element. This ball 102 is preferably formed of a hard material, such as metal. The ball 102 may be solid or hollow.
(10) The number of sensors 103 may be one, two, three, or more than three. In the present example, three sensors 103-1, 103-2 and 103-3 are used. The sensors 103 may be mounted on a support base 104, wherein they may be fixed at the corners of an imaginary triangle, in particular, an equilateral triangle on the surface of the support base 104, or parallel to the surface. The sensors 103 may also be located on sockets, pedestals or the like, which may be fixed to the surface of the support base 104. The sensors 103 may have a cylindrical portion along a geometric sensor axis. In particular, they may include a stationary, in particular, non-deflectable, portion whose position along the geometrical sensor axis is fixed. The sensors may further include a portion that is movable, in particular, deflectable, along the sensor axis, such as a sensor head.
(11) The sensors 103 may, in particular, be contact point sensors, where the sensor head includes a contact element which is in contact with a point on the surface of the ball 102. More specifically, the contact element is in contact with the point on the surface of the ball 102 and on the sensor axis which is closest to the stationary portion of the sensor 103. The sensors 103 may be inclined against the surface of the support base 104 by an inclination angle. The angle may be the same for each sensor 103, or different. The angle may be in the range of 40° to 80°, preferably 50° to 70°, more preferably 55° to 65°. Under a higher inclination angle the ball 102 may be better accessible to the sensors 103. In particular, this permits easy positioning of the ball 102 and a collision-free movement of the tool head 101. The inclination angles of the sensors 103 may be chosen so that the mutual angles between the axes of the sensors 103 are at least 90°. The sensors 103 may be arranged to mutually point at the center of the ball 102. The three sensor axes of the sensors 103 may form mutual angles of at least 90°. The support base 104 may include a cylindrical portion. Moreover, the support base 104 may include sockets, pedestals or the like, for mounting the sensors 103. The support base 104 may also include adjustment means for adjusting the height and/or lateral position of the sensors 103, and/or fixing means for fixing the height and/or lateral position of the sensors.
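The condition that the mutual angles between the sensor axes are at least 90° can be checked directly from the axis direction vectors. The following is a minimal sketch; the function name and the example vectors are illustrative assumptions, since the text does not fix an exact inclination convention:

```python
import math

def mutual_angle_deg(u, v):
    """Angle in degrees between two sensor-axis direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# Example: two axes tilted downward toward a common center from
# opposite sides of the ball.
axis_1 = (0.8, 0.0, -0.6)
axis_2 = (-0.8, 0.0, -0.6)
angle = mutual_angle_deg(axis_1, axis_2)  # about 106 degrees: condition met
```

Checking all three pairwise angles this way verifies the geometric constraint before a measurement run.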
(12) The control unit 105 may include processing means for processing data received from the sensors 103 and/or data received otherwise. The control unit 105 may further include storage means for caching or storing data. The storage means may include volatile memory and/or persistent memory. Information representing the geometrical layout of the sensors 103, for example, the spatial orientation of their sensor axes, may be saved in the memory. The control unit 105 may include an input interface for receiving data, in particular, sensor data from the sensors 103. The input interface may include a plurality of entries. In particular, the sensors 103 may be connected to the input interface separately. The sensors 103 may be connected to the input interface by wired connection and/or wireless connection. The input interface may also serve for inputting instructions into the control unit 105 and/or updating the control unit 105. The control unit 105 may further include an output interface for outputting data. The output interface may be connected to the movement element 106. This connection may be a wired connection and/or a wireless connection. The output interface may be further connected to an output unit 107. This connection may also be a wired connection and/or a wireless connection.
(13) The movement element 106 may include one, two, three, or more than three motors, preferably electro-motors. In particular, the movement element 106 may include three motors which are configured to translate the support base 104 along each of the three coordinate axes x, y, and z of a Cartesian coordinate system. The three different translations may be controlled by addressing the three motors separately. The three motors may be connected to the control unit 105, in particular, to an output interface thereof, separately or collectively. The movement element 106 may be fixed to the sensors 103 directly and/or to the support base 104.
(14) The output unit 107 may include a display, a printer, a transmitter, and/or a storage device, and/or be connected to a display, a printer, a transmitter, and/or a storage device. The output unit 107 may also be connectable to the control of the CNC machine. The output device 107 may be connected to the control unit 105, in particular, the output interface thereof.
(15) In operation of the system, at least one of the sensors 103, preferably each of the sensors 103, outputs sensor data while the ball 102 is in a given position. That is, the sensor data represents the current position of the ball 102, without necessarily determining the actual position of the ball 102. The sensor data is then transferred to the control unit 105, in particular, to an input interface thereof.
(16) The control unit 105 receives the sensor data from the sensors 103, in particular, via an input interface. The control unit 105, in particular, processing means thereof, determines whether the sensor data satisfies certain conditions. In particular, the control unit 105 may examine whether the difference, in particular, the absolute difference, between sensor data taken at two different times falls below a threshold value. The control unit 105, in particular, processing means thereof, may determine movement data from the sensor data. The movement data and/or the sensor data may be cached and/or saved within the control unit 105, in particular, within storage means thereof. The movement data may include three separate commands for the three motors of the movement element 106. The control unit 105 may transmit the movement data to the movement element 106, in particular, via an output interface.
(17) The movement element 106 receives the movement data from the control unit 105, in particular, via an output interface thereof. The movement data may include commands for at least one of the motors. In particular, the movement data may include commands for three motors which are configured to translate the sensors 103 and/or the support base 104 along the Cartesian axes. The commands for a motor may include instructions to translate the sensors 103 and/or the support base 104 along the respective axis in a forward direction, in a backward direction, to reverse the translation, and/or to stop the translation. The commands for a motor may further include instructions to translate the sensor 103 and/or the support base 104 with a certain velocity and/or for a certain distance.
(18) After the movement element 106 has moved the sensors 103 and/or the support base 104 according to the movement data output by the control unit 105, the sensors 103 output new sensor data. The control unit 105, in particular, the input interface thereof, receives the new sensor data from the sensors 103 and compares the difference, in particular, the absolute difference, between the new sensor data and previous sensor data, in particular, sensor data representing an initial position of the ball 102, with a threshold value. If the threshold value is not met, new movement data is determined and output to the movement element 106. If the threshold value is met, the control unit 105, in particular, the processing means thereof, determines a positioning error of the tool head from the cached and/or stored movement data. In a Cartesian coordinate system, the positioning error (Dx, Dy, Dz) may be the sum of the movement data corresponding to the movements that were necessary to move the sensors 103 and/or the support base 104 in order to meet the threshold value.
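The compare-and-move loop of paragraph (18) can be sketched as follows. This is a minimal illustration under assumptions not made in the text: the sensor data is treated as already transformed into Cartesian components, and the callables `read_sensors`, `compute_movement`, and `move_sensors` are hypothetical stand-ins for the sensors, the processing means of the control unit, and the movement element:

```python
def determine_positioning_error(read_sensors, compute_movement, move_sensors,
                                first_data, threshold, max_iterations=10000):
    """Sum the movements needed to restore the first sensor reading; the
    accumulated movement is the positioning error (Dx, Dy, Dz)."""
    error = [0.0, 0.0, 0.0]
    for _ in range(max_iterations):
        current = read_sensors()
        diff = [f - c for f, c in zip(first_data, current)]
        if all(abs(d) <= threshold for d in diff):   # threshold met
            return tuple(error)
        step = compute_movement(diff)                # movement data
        move_sensors(step)                           # movement element acts
        for k in range(3):
            error[k] += step[k]                      # cache/sum the movement
    raise RuntimeError("threshold not reached")

# Toy simulation: the ball sits at (0.3, -0.2, 0.1) after the calibration
# movement; the sensors report the ball's offset from the moved base.
true_displacement = (0.3, -0.2, 0.1)
base = [0.0, 0.0, 0.0]
read = lambda: tuple(true_displacement[k] - base[k] for k in range(3))
compute = lambda d: tuple(-0.5 * di for di in d)     # proportional step
def move(step):
    for k in range(3):
        base[k] += step[k]

err = determine_positioning_error(read, compute, move, (0.0, 0.0, 0.0), 1e-6)
```

In this toy run the accumulated movement converges to the simulated displacement of the ball; the proportional half-step per iteration is just one simple choice of movement data.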
(19) The control unit 105 may output the positioning error to the output unit 107. The output unit 107 may display, print, transmit, and/or save the positioning error. The output unit 107 may also input the positioning error into the control of the CNC machine.
(20) The sensors 103, the support base 104, the control unit 105, the movement element 106, and/or the output unit 107 may be separate units and/or elements, or may be part of the same unit and/or element of the system.
(21) With reference to
(22) In step 210, first sensor data S(t.sub.0), that is, sensor data at a time t.sub.0, is read. The first sensor data represents the first position of the calibration element 102 which is connected to the tool head 101. The first position corresponds to an initial position of the calibration element 102, that is, before the tool head 101 is moved in order to determine a positioning error thereof. The first position is known to the control of the CNC machine but unknown to the control unit 105. The CNC control may operate in Cartesian coordinates and set the first position to (0, 0, 0), wherein the first position may correspond to a pre-determined reference point on or in the calibration element 102, in particular, the center of a ball. It is, however, not necessary for the control unit 105 to determine the first position. In the case of three sensors 103-1 to 103-3, the first sensor data S(t.sub.0) includes first sensor data S.sub.1(t.sub.0), S.sub.2(t.sub.0), and S.sub.3(t.sub.0) of the sensors 103-1, 103-2, and 103-3, respectively. Each of the sensor data S.sub.1(t.sub.0), S.sub.2(t.sub.0), and S.sub.3(t.sub.0) may have components with respect to a pre-determined coordinate system. If Cartesian coordinates are used, the first sensor data S.sub.1(t.sub.0) of the sensor 103-1 may have components S.sub.1,x(t.sub.0), S.sub.1,y(t.sub.0), and S.sub.1,z(t.sub.0) with respect to the Cartesian coordinate axes x, y, and z. Similarly, S.sub.2(t.sub.0) and S.sub.3(t.sub.0) may have components S.sub.2,x(t.sub.0), S.sub.2,y(t.sub.0), S.sub.2,z(t.sub.0), S.sub.3,x(t.sub.0), S.sub.3,y(t.sub.0), and S.sub.3,z(t.sub.0). The Cartesian components of the first sensor data may be determined from the known direction of the sensors, that is, the direction of the geometrical sensor axis, by trigonometric computations, as known in the art. However, in embodiments it may be unnecessary to determine the Cartesian components of the first sensor data.
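The trigonometric decomposition mentioned in step 210 can be illustrated as follows. The inclination/azimuth convention is an assumption for illustration (the text does not fix one): inclination is measured against the support-base surface and azimuth within the base plane.

```python
import math

def axis_unit_vector(inclination_deg, azimuth_deg):
    """Unit direction of a sensor axis: inclination against the base
    surface, azimuth within the base plane."""
    inc, azi = math.radians(inclination_deg), math.radians(azimuth_deg)
    return (math.cos(inc) * math.cos(azi),
            math.cos(inc) * math.sin(azi),
            math.sin(inc))

def cartesian_components(reading, axis):
    """Project a scalar axial reading, e.g. S_1(t_0), onto (S_x, S_y, S_z)."""
    return tuple(reading * a for a in axis)

# Example: a sensor inclined at 60 degrees, azimuth 0, reading 2.0 units.
axis = axis_unit_vector(60.0, 0.0)
s_x, s_y, s_z = cartesian_components(2.0, axis)
```

As the paragraph notes, this decomposition can be skipped entirely when the kinematic factors K are already known.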
(23) In step 220, the CNC is operated to move the tool head 101 so that the calibration element 102 remains in a theoretically fixed position. That is, according to the CNC control this calibration movement does not change the position of the reference point of the calibration element 102. The calibration element 102 itself may however move. In particular, if the calibration element 102 includes a ball whose center is the reference point, the calibration movement leaves the center of the ball at the fixed position, while the ball may still rotate about any axis through its center. In other words, the CNC assumes that after the calibration movement the reference point is still at the first position, e.g. (0, 0, 0). Due to a positioning error of the CNC machine, in particular, of the tool head, the calibration element may however be at a second position which differs from the first position. If Cartesian coordinates x, y and z are used, said second position may be expressed as (Dx, Dy, Dz). This second position is known neither to the control of the CNC machine, which still assumes the position (0, 0, 0) instead of (Dx, Dy, Dz), nor by the control unit 105. It is an object of the present method to determine Dx, Dy, and Dz.
(24) In step 230, current sensor data S(t.sub.i), that is, sensor data at a time t.sub.i>t.sub.0, is read. If the time t.sub.i corresponds to a time t.sub.1>t.sub.0 before the sensors have been moved, the current sensor data S(t.sub.1) is the second sensor data representing the second position of the calibration element 102, that is, the position of the calibration element 102 after the calibration movement. The second position of the calibration element 102 corresponds to the positioning error (Dx, Dy, Dz) of the tool head and is unknown. In the case of three sensors 103-1 to 103-3, the current sensor data S(t.sub.i) includes current sensor data S.sub.1(t.sub.i), S.sub.2(t.sub.i), and S.sub.3(t.sub.i) of the sensors 103-1, 103-2, and 103-3, respectively. Each of the sensor data S.sub.1(t.sub.i), S.sub.2(t.sub.i), and S.sub.3(t.sub.i) may have components with respect to a pre-determined coordinate system. If Cartesian coordinates are used, the current sensor data S.sub.1(t.sub.i) of the sensor 103-1 may have components S.sub.1,x(t.sub.i), S.sub.1,y(t.sub.i), and S.sub.1,z(t.sub.i) with respect to the Cartesian coordinate axes x, y, and z. Similarly, S.sub.2(t.sub.i) and S.sub.3(t.sub.i) may have components S.sub.2,x(t.sub.i), S.sub.2,y(t.sub.i), S.sub.2,z(t.sub.i), S.sub.3,x(t.sub.i), S.sub.3,y(t.sub.i), and S.sub.3,z(t.sub.i). The Cartesian components of the current sensor data may be determined from the known direction of the sensors, that is, the direction of the geometrical sensor axis, by trigonometric computations, as known in the art. However, in embodiments it may be unnecessary to determine the Cartesian components of the current sensor data.
(25) In step 240, the current difference D(t.sub.i)=S(t.sub.0)-S(t.sub.i) between the first and current sensor data is determined. In the case of three sensors 103-1 to 103-3, the current difference D(t.sub.i) may include the three differences D.sub.1(t.sub.i)=S.sub.1(t.sub.0)-S.sub.1(t.sub.i), D.sub.2(t.sub.i)=S.sub.2(t.sub.0)-S.sub.2(t.sub.i), and D.sub.3(t.sub.i)=S.sub.3(t.sub.0)-S.sub.3(t.sub.i). In particular, the difference D(t.sub.i) may have Cartesian components D.sub.1,x(t.sub.i), D.sub.1,y(t.sub.i), D.sub.1,z(t.sub.i), D.sub.2,x(t.sub.i), D.sub.2,y(t.sub.i), D.sub.2,z(t.sub.i), D.sub.3,x(t.sub.i), D.sub.3,y(t.sub.i), and D.sub.3,z(t.sub.i), wherein D.sub.1,x(t.sub.i)=S.sub.1,x(t.sub.0)-S.sub.1,x(t.sub.i) and so forth. The Cartesian components of D(t.sub.i) may be determined directly from the Cartesian components of S(t.sub.i), or alternatively, by transforming the differences D.sub.1(t.sub.i), D.sub.2(t.sub.i), and D.sub.3(t.sub.i) into displacement vectors along the geometrical axes of the sensors 103-1, 103-2, and 103-3, respectively, and then determining the Cartesian components of the respective displacement vectors from the known direction of the sensors, that is, the direction of the geometrical sensor axis, by trigonometric computations, as known in the art. However, in embodiments it may be unnecessary to determine the Cartesian components of the current difference. The signs of D.sub.1(t.sub.i), D.sub.2(t.sub.i), and D.sub.3(t.sub.i) determine if, at the time t.sub.i, the respective sensor is further deflected or less deflected than at the time t.sub.0.
(26) In step 250, the current sensor data is read and the difference D(t.sub.i), in particular, the absolute difference |D(t.sub.i)|, between the current sensor data and the first sensor data is compared to a threshold value T. If the threshold is met, that is, if the difference D(t.sub.i), in particular, the absolute difference |D(t.sub.i)|, is less than or equal to the threshold value T, the positioning error is determined in step 280. If the threshold T is not met, that is, if the difference D(t.sub.i), in particular, the absolute difference |D(t.sub.i)|, is greater than the threshold value T, the method proceeds to step 260. In particular, the threshold value T may have Cartesian components T.sub.x, T.sub.y, and T.sub.z. The threshold condition may then include conditions like |D.sub.1,x(t.sub.i)|≤T.sub.x, and similarly for the other components. It is also possible to require different threshold values T.sub.1, T.sub.2, and T.sub.3 for the three sensors 103-1, 103-2, and 103-3. In this case the threshold condition may include conditions like |D.sub.1,x(t.sub.i)|≤T.sub.1,x, and similarly for the other components. Alternatively, the threshold condition may be checked for the sum of certain components of the difference D(t.sub.i). For example, the threshold condition may be evaluated as the sum of the differences D.sub.1(t.sub.i), D.sub.2(t.sub.i), D.sub.3(t.sub.i) of the sensors 103-1, 103-2, 103-3 with respect to each Cartesian component separately. In this case, the threshold condition may include conditions like |D.sub.1,x(t.sub.i)|+|D.sub.2,x(t.sub.i)|+|D.sub.3,x(t.sub.i)|≤T.sub.x, and similarly for the other components. In another example, the threshold condition may be evaluated as the sum of the Cartesian components D.sub.x(t.sub.i), D.sub.y(t.sub.i), D.sub.z(t.sub.i) of the difference with respect to each sensor 103-1, 103-2, 103-3 separately.
In this case, the threshold condition may include conditions like |D.sub.1,x(t.sub.i)|+|D.sub.1,y(t.sub.i)|+|D.sub.1,z(t.sub.i)|≤T.sub.1, and similarly for the other sensors. Combinations of the above-described examples are also possible. In particular, the threshold condition may include the condition |D.sub.1,x(t.sub.i)|+|D.sub.1,y(t.sub.i)|+|D.sub.1,z(t.sub.i)|+|D.sub.2,x(t.sub.i)|+|D.sub.2,y(t.sub.i)|+|D.sub.2,z(t.sub.i)|+|D.sub.3,x(t.sub.i)|+|D.sub.3,y(t.sub.i)|+|D.sub.3,z(t.sub.i)|≤T.
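As an illustration of the last combined condition, all nine absolute Cartesian difference components can be summed and compared against a single threshold T. The function and variable names below are assumptions for illustration:

```python
def threshold_met(differences, threshold):
    """differences: one (dx, dy, dz) tuple per sensor. True when the sum
    of all absolute Cartesian components is less than or equal to T."""
    return sum(abs(c) for d in differences for c in d) <= threshold

# Three sensors 103-1, 103-2, 103-3 with example residual differences;
# the sum of the absolute components is 0.07.
diffs = [(0.01, -0.02, 0.0), (0.005, 0.0, 0.01), (0.0, 0.015, -0.01)]
met_loose = threshold_met(diffs, 0.1)    # True: 0.07 <= 0.1
met_tight = threshold_met(diffs, 0.05)   # False: 0.07 > 0.05
```

The per-component and per-sensor variants described above follow the same pattern with the summation restricted accordingly.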
(27) In step 260, a compensation direction is determined. This determination may be based on the differences D.sub.1(t.sub.i), D.sub.2(t.sub.i), D.sub.3(t.sub.i), or on their Cartesian components D.sub.1,x(t.sub.i), D.sub.1,y(t.sub.i), D.sub.1,z(t.sub.i), D.sub.2,x(t.sub.i), D.sub.2,y(t.sub.i), D.sub.2,z(t.sub.i), D.sub.3,x(t.sub.i), D.sub.3,y(t.sub.i), and D.sub.3,z(t.sub.i). The compensation direction may be represented by a velocity vector V(t.sub.i)=(V.sub.x(t.sub.i), V.sub.y(t.sub.i), V.sub.z(t.sub.i)), wherein the components V.sub.x(t.sub.i), V.sub.y(t.sub.i), and V.sub.z(t.sub.i) represent the velocities along the x-axis, y-axis, and z-axis, respectively, with which the sensors will be moved in step 270. Here, the signs of V.sub.x(t.sub.i), V.sub.y(t.sub.i), and V.sub.z(t.sub.i) determine the direction of the movement along the respective axis, that is, a forward or backward translation, whereas their absolute value determines the speed of the translation along the respective axis. The velocity components V.sub.x(t.sub.i), V.sub.y(t.sub.i), V.sub.z(t.sub.i) may be determined from the differences D.sub.1(t.sub.i), D.sub.2(t.sub.i), D.sub.3(t.sub.i) as follows:
V.sub.x(t.sub.i)=K.sub.1,x·D.sub.1(t.sub.i)+K.sub.2,x·D.sub.2(t.sub.i)+K.sub.3,x·D.sub.3(t.sub.i),
V.sub.y(t.sub.i)=K.sub.1,y·D.sub.1(t.sub.i)+K.sub.2,y·D.sub.2(t.sub.i)+K.sub.3,y·D.sub.3(t.sub.i),
V.sub.z(t.sub.i)=K.sub.1,z·D.sub.1(t.sub.i)+K.sub.2,z·D.sub.2(t.sub.i)+K.sub.3,z·D.sub.3(t.sub.i).
(28) The kinematic factors K define the relation between the sensor differences and the compensation direction. The factors K may be constant. In particular, the factor K of a sensor is constant when the spatial orientation of its sensor axis is fixed. Moreover, the factors K may be known, or may be determined as follows:
K.sub.1,x=A·D.sub.1,x(t.sub.i)/D.sub.1(t.sub.i); K.sub.1,y=A·D.sub.1,y(t.sub.i)/D.sub.1(t.sub.i); K.sub.1,z=A·D.sub.1,z(t.sub.i)/D.sub.1(t.sub.i);
K.sub.2,x=B·D.sub.2,x(t.sub.i)/D.sub.2(t.sub.i); K.sub.2,y=B·D.sub.2,y(t.sub.i)/D.sub.2(t.sub.i); K.sub.2,z=B·D.sub.2,z(t.sub.i)/D.sub.2(t.sub.i);
K.sub.3,x=C·D.sub.3,x(t.sub.i)/D.sub.3(t.sub.i); K.sub.3,y=C·D.sub.3,y(t.sub.i)/D.sub.3(t.sub.i); K.sub.3,z=C·D.sub.3,z(t.sub.i)/D.sub.3(t.sub.i).
(29) The scale factors A, B, and C are related to the constructive solution of the kinematic system that moves the sensors 103. In particular, A, B and C may represent scale factors which the control unit 105 applies when computing the movement data. This may be advantageous if the sensors 103-1, 103-2, 103-3 have different gains. The scale factors A, B and C may be the same or different. The kinematic factors, for example K.sub.1, may include weight factors D.sub.1,x(t.sub.i)/D.sub.1(t.sub.i), D.sub.1,y(t.sub.i)/D.sub.1(t.sub.i), and D.sub.1,z(t.sub.i)/D.sub.1(t.sub.i), representing the relative contribution of a difference component D.sub.1,x(t.sub.i), D.sub.1,y(t.sub.i), and D.sub.1,z(t.sub.i) to the overall difference D.sub.1(t.sub.i) of the sensor 103-1, and likewise for the other sensors. This ensures that the compensation direction V(t.sub.i), as determined above, points in the direction of the relatively highest overall sensor difference and is thereby closest to the actual displacement of the ball 102 due to a positioning error of the tool head 101. If the K factors are known, a determination of the Cartesian components of the sensor data S(t.sub.0), S(t.sub.i) and/or D(t.sub.i) may be omitted. Alternatively, the K factors may be obtained by employing a reference measurement; for example, the K factors may be obtained from the first sensor data, i.e. K.sub.1,x=A·S.sub.1,x(t.sub.0)/S.sub.1(t.sub.0) etc. A determination of the Cartesian components of S(t.sub.i) and/or D(t.sub.i) may then be omitted.
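The velocity computation of paragraphs (27) to (29) can be sketched as follows. The numeric values are example assumptions, and only one sensor is used for brevity where the text uses three:

```python
def kinematic_factors(components, overall, scale):
    """K_i = scale * (D_i,x / D_i, D_i,y / D_i, D_i,z / D_i) for one sensor."""
    return tuple(scale * c / overall for c in components)

def compensation_velocity(k_factors, diffs):
    """(V_x, V_y, V_z) as weighted sums of the per-sensor differences D_i."""
    return tuple(sum(k[c] * d for k, d in zip(k_factors, diffs))
                 for c in range(3))

# Sensor 103-1 with overall difference D_1 = 0.2 split into Cartesian
# components (0.12, 0.16, 0.0); scale factor A = 1.0.
k1 = kinematic_factors((0.12, 0.16, 0.0), 0.2, 1.0)
v = compensation_velocity([k1], [0.2])
# v points along the measured displacement direction (0.12, 0.16, 0.0).
```

The weight factors make V proportional to the components' relative contributions, so the resulting movement tracks the direction of the largest measured difference, as the paragraph explains.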
(30) In step 270, the sensors 103-1, 103-2, 103-3 and/or the sensor base 104 are moved according to the velocity vector V(t_i) = (V_x(t_i), V_y(t_i), V_z(t_i)). That is, the sensors 103-1, 103-2, 103-3 and/or the sensor base 104 are translated along the x-axis with a velocity V_x(t_i), along the y-axis with a velocity V_y(t_i), and along the z-axis with a velocity V_z(t_i). This results in a movement in the compensation direction determined in step 260 and thereby partially or fully compensates the difference D(t_i). In the case where the movement element 106 includes three motors configured to translate the sensors 103-1, 103-2, 103-3 and/or the sensor base 104 along the three Cartesian axes, respectively, the velocity components V_x(t_i), V_y(t_i), and V_z(t_i) may be transformed into respective control data indicating a forward/backward translation at a respective speed, and input directly into the respective motors.
(31) The method then jumps back to step 230, where new current sensor data S(t_{i+1}) is read at a time t_{i+1} > t_i, and a new difference D(t_{i+1}) = S(t_0) − S(t_{i+1}) is determined in step 240. Because the kinematic factors K in the compensation direction V(t_i) include weight factors, as described in step 260, the new difference D(t_{i+1}) will be smaller than the preceding difference D(t_i), that is, |D(t_{i+1})| < |D(t_i)|. Therefore, the process converges in every loop. In step 250, it is checked whether the difference D(t_{i+1}) meets the threshold value T. If the threshold value is met, the method proceeds to step 280. If the threshold value is not met, a compensation direction V(t_{i+1}) is determined from the difference D(t_{i+1}) in step 260, and the sensors 103 and/or the base 104 are moved accordingly. The difference Δt = t_{i+1} − t_i is called the takt time and may, for example, be 1 ms. The takt time Δt is preferably a constant, in particular a pre-settable constant. However, it is also possible for the takt time Δt to be variable.
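The loop of steps 230 through 270 can be sketched as a simple proportional controller. This is a sketch only: `read_sensors`, `move`, and the scalar `gain` (standing in for the kinematic factors K) are hypothetical interfaces, not the patent's disclosed implementation.

```python
import math

def compensate(read_sensors, move, s0, threshold, gain, dt, max_steps=100000):
    """Move the sensors until |D(t_i)| <= threshold.

    read_sensors()    -- returns the current sensor vector S(t_i) as (x, y, z)
    move(vx, vy, vz)  -- translates the sensors with these velocities for dt
    Returns the accumulated compensation C = (Cx, Cy, Cz), whose components
    equal the positioning error (Dx, Dy, Dz) of step 280.
    """
    c = [0.0, 0.0, 0.0]
    for _ in range(max_steps):
        s = read_sensors()                        # step 230
        d = [a - b for a, b in zip(s0, s)]        # step 240: D = S(t_0) - S(t_i)
        if math.sqrt(sum(x * x for x in d)) <= threshold:
            return tuple(c)                       # step 280: error from movements
        v = [gain * x for x in d]                 # step 260: compensation direction
        move(*v)                                  # step 270: perform the movement
        for k in range(3):
            c[k] += v[k] * dt                     # C(t_i) = C(t_{i-1}) + V(t_i)*dt
    raise RuntimeError("compensation did not converge")
```

With 0 < gain · dt < 1 the difference shrinks geometrically each loop, matching the convergence property |D(t_{i+1})| < |D(t_i)| described above.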
(32) In step 280, the positioning error is determined based on the movements of the sensors 103 that were necessary to meet the threshold in step 250. The positioning error may be determined by superposing all movement data, starting with the sensor movement after reading the second sensor data. The Cartesian components (Dx, Dy, Dz) of the positioning error may be obtained by adding all components of the movement data. As an example, if n steps were necessary to meet the threshold value, that is, |D(t_n)| ≤ T is satisfied, the positioning error may be determined as follows:
Dx = (V_x(t_1) + . . . + V_x(t_{n-1})) · Δt,
Dy = (V_y(t_1) + . . . + V_y(t_{n-1})) · Δt,
Dz = (V_z(t_1) + . . . + V_z(t_{n-1})) · Δt.
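Assuming the per-step velocity vectors have been recorded in a list, the summation above is one line per axis (a sketch; the function and variable names are hypothetical):

```python
def positioning_error(velocities, dt):
    """(Dx, Dy, Dz) = (sum of V_x(t_k), V_y(t_k), V_z(t_k), k = 1..n-1) * dt.

    velocities -- list of (vx, vy, vz) tuples, one per loop iteration
    dt         -- the (constant) takt time
    """
    return tuple(sum(v[axis] for v in velocities) * dt for axis in range(3))
```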
(33) Alternatively, a value C(t_i) = (C_x(t_i), C_y(t_i), C_z(t_i)) = (V_x(t_i) · Δt, V_y(t_i) · Δt, V_z(t_i) · Δt) indicating the compensation movement may be computed and stored in every loop, for example in step 260. Then, for the above-mentioned example, the positioning error may be determined as follows:
Dx = C_x(t_1) + . . . + C_x(t_{n-1}), Dy = C_y(t_1) + . . . + C_y(t_{n-1}), Dz = C_z(t_1) + . . . + C_z(t_{n-1}).
(34) Alternatively, the value C(t_i) indicating the compensation movement may be determined recursively, that is, C(t_i) = C(t_{i-1}) + V(t_i) · Δt, in every loop, for example in step 260. Then, for the above-mentioned example, the positioning error may be determined as
Dx = C_x(t_{n-1}), Dy = C_y(t_{n-1}), Dz = C_z(t_{n-1}).
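The recursive variant keeps only a running total and yields the same result as the explicit sums of paragraph (33). A sketch with hypothetical names:

```python
def accumulate(velocities, dt):
    """Running compensation C(t_i) = C(t_{i-1}) + V(t_i) * dt.

    After the last loop iteration, C holds the positioning error
    components (Dx, Dy, Dz) directly; no history needs to be stored.
    """
    c = (0.0, 0.0, 0.0)
    for v in velocities:
        c = tuple(ci + vi * dt for ci, vi in zip(c, v))
    return c
```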
(35) Since the positioning error is determined from data corresponding to all movements of the sensors 103-1, 103-2, 103-3 which were necessary to meet the threshold condition, the sensor values S_1, S_2, and S_3 need not be very precise. In fact, the method may work as long as, at least from some point t onward, the differences converge, that is, |D(t_{i+1})| < |D(t_i)| holds for all t_i > t until the threshold condition is met. In this way, even detrimental influences such as jolts or vibrations, which may break the convergence temporarily, may not affect the result of the method.
(37) In step 210, the calibration element 102 is in its first position, say (0, 0), and the sensors 103-1 and 103-2 provide first sensor data corresponding to the first position of the calibration element 102.
(38) In step 220, the calibration element 102 is moved as discussed hereinbefore. In particular, according to the CNC control, the calibration element 102 is in the same position as in step 210, that is, at (0, 0). Due to a positioning error of the CNC machine, however, the calibration element is now at a point (Dx, Dy).
(39) In step 230, second sensor data is read from the sensors 103-1 and 103-2 and, in step 240, the differences D_1 and D_2 between the first and second sensor data of the sensors 103-1 and 103-2, respectively, are determined. The differences D_1 and D_2 represent the displacement vectors of the contact elements of the sensors 103-1 and 103-2, respectively. The Cartesian components D_{1,x}, D_{1,y}, D_{2,x}, and D_{2,y} of D_1 and D_2 are depicted in the accompanying figure.
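The two-dimensional example of steps 210 through 240 can be put into numbers. In this sketch the readings, the sign convention, and the geometry (sensor 103-1 measuring along x, sensor 103-2 along y) are all assumed for illustration:

```python
# First sensor data with the calibration element at the commanded position (0, 0).
s1_first, s2_first = 0.0, 0.0
# Second sensor data after the calibration movement: the readings have shifted
# because the element actually sits at (Dx, Dy) = (0.8, -0.3)  (assumed values).
s1_second, s2_second = -0.8, 0.3
# Step 240: differences between first and second sensor data.
d1 = s1_first - s1_second   # 0.8  -> x-displacement seen by sensor 103-1
d2 = s2_first - s2_second   # -0.3 -> y-displacement seen by sensor 103-2
```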
(41) The present embodiments may be embodied in many different forms, including, but in no way limited to, computer program logic for use with a processor (e.g., a microprocessor, microcontroller, digital signal processor, or general purpose computer), programmable logic for use with a programmable logic device (e.g., a Field Programmable Gate Array (FPGA) or other PLD), discrete components, integrated circuitry (e.g., an Application Specific Integrated Circuit (ASIC)), or any other means including any combination thereof. In a typical embodiment of the present invention, predominantly all of the described logic is implemented as a set of computer program instructions that is converted into a computer executable form, stored as such in a computer readable medium, and executed by a microprocessor under the control of an operating system.
(42) Computer program logic implementing all or part of the functionality previously described herein may be embodied in various forms, including, but in no way limited to, a source code form, a computer executable form, and various intermediate forms (e.g., forms generated by an assembler, compiler, linker, or locator). Source code may include a series of computer program instructions implemented in any of various programming languages (e.g., an object code, an assembly language, or a high-level language such as Fortran, C, C++, JAVA, or HTML) for use with various operating systems or operating environments. The source code may define and use various data structures and communication messages. The source code may be in a computer executable form (e.g., via an interpreter), or the source code may be converted (e.g., via a translator, assembler, or compiler) into a computer executable form.
(43) The computer program may be fixed in any form (e.g., source code form, computer executable form, or an intermediate form) either permanently or transitorily in a tangible storage medium, such as a semiconductor memory device (e.g., a RAM, ROM, PROM, EEPROM, or Flash-Programmable RAM), a magnetic memory device (e.g., a diskette or fixed disk), an optical memory device (e.g., a CD-ROM), a PC card (e.g., PCMCIA card), or other memory device. The computer program may be fixed in any form in a signal that is transmittable to a computer using any of various communication technologies, including, but in no way limited to, analog technologies, digital technologies, optical technologies, wireless technologies, networking technologies, and internetworking technologies. The computer program may be distributed in any form as a removable storage medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the Internet or World Wide Web).
(44) Hardware logic (including programmable logic for use with a programmable logic device) implementing all or part of the functionality previously described herein may be designed using traditional manual methods, or may be designed, captured, simulated, or documented electronically using various tools, such as Computer Aided Design (CAD), a hardware description language (e.g., VHDL or AHDL), or a PLD programming language (e.g., PALASM, ABEL, or CUPL).
(45) Programmable logic may be fixed either permanently or transitorily in a tangible storage medium, such as a semiconductor memory device (e.g., a RAM, ROM, PROM, EEPROM, or Flash-Programmable RAM), a magnetic memory device (e.g., a diskette or fixed disk), an optical memory device (e.g., a CD-ROM), or other memory device. The programmable logic may be fixed in a signal that is transmittable to a computer using any of various communication technologies, including, but in no way limited to, analog technologies, digital technologies, optical technologies, wireless technologies (e.g., Bluetooth), networking technologies, and internetworking technologies. The programmable logic may be distributed as a removable storage medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software.