Method for extracting gear tooth profile edge based on engagement-pixel image edge tracking method

11371821 · 2022-06-28

Abstract

A method for extracting a gear tooth profile edge based on an engagement-pixel image edge tracking method includes defining a transmission ratio relationship between a cutter and an envelope tooth profile, setting a cutter profile step size and an envelope step size, acquiring instantaneous contact images at different engaging times, and performing a binarization processing on each curve envelope cluster image; sweeping a boundary of an envelope curve cluster, acquiring pixel points of the edge; preliminarily tracking a tooth profile edge, and then performing a secondary extraction and compensation on the pixel points; calibrating coordinates of a cutter profile; extracting a pixel coordinate of an instantaneous engaging point; converting the pixel points among different instantaneous engaging images; extracting a final tooth profile coordinate of the gear, and performing a tooth shape error analysis and a contact line error analysis.

Claims

1. A method for extracting a gear tooth profile edge based on an engagement-pixel image edge tracking method, comprising the following steps: step (1): defining a transmission ratio relationship between a cutter and an envelope tooth profile, setting a cutter profile step size and an envelope step size, acquiring instantaneous contact images at different engaging times, and performing a binarization processing on each curve envelope cluster image; step (2): sweeping a boundary of an envelope curve cluster, acquiring pixel points of the edge, wherein, in order to meet a stability requirement during a gear transmission process, a gear tooth profile is a smooth tooth profile with a regular curvature variation; preliminarily tracking a tooth profile edge by means of a traditional edge tracking method, and then performing a secondary extraction and compensation on the pixel points to improve an accuracy of the tooth profile in combination with two main features, namely, a feature of a step-shaped tooth profile pixel edge and a feature of a pixel absence in a position with a small curvature variation between adjacent tooth profiles; step (3): calibrating coordinates of a cutter profile; step (4): extracting a pixel coordinate of an instantaneous engaging point; step (5): converting the pixel points among different instantaneous engaging images; step (6): extracting a final tooth profile coordinate and the tooth profile edge of the gear according to a coordinate transformation process of steps (1) to (5), and performing a tooth shape error analysis and a contact line error analysis.

2. The method for extracting the gear tooth profile edge based on the engagement-pixel image edge tracking method according to claim 1, wherein in step (3), the step of calibrating the coordinates of the cutter profile includes: simulating a forming machining process of a gear conjugate surface to obtain an image of the envelope curve cluster; changing coordinate values between the image of the envelope curve cluster and an image obtained by a binarization; and converting a theoretical value of the cutter into a coordinate value of a pixel to meet a requirement of subsequent processing.

3. The method for extracting the gear tooth profile edge based on the engagement-pixel image edge tracking method according to claim 1, wherein in step (4), the step of extracting the pixel coordinate of the instantaneous engaging point includes: taking the pixel coordinate of the cutter as a basis; combining an engaging point between the cutter and the tooth profile during each instantaneous engaging process; and obtaining the pixel coordinate of the instantaneous engaging point.

4. The method for extracting the gear tooth profile edge based on the engagement-pixel image edge tracking method according to claim 1, wherein in step (5), the step of converting the pixel points among different instantaneous engaging images includes: performing a corresponding coordinate transformation on the pixel points among the different instantaneous engaging images, since each instant corresponds to a different instantaneous engaging pixel image.

5. The method for extracting the gear tooth profile edge based on the engagement-pixel image edge tracking method according to claim 1, wherein in step (6), the gear is a face gear.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows the distribution rule of the pixel points of the tooth profile edge of a face gear.

(2) FIG. 2 shows the compensated pixel points of the tooth profile edge of a face gear.

(3) FIG. 3 shows the relationship between the instantaneous engaging points and the pixel points.

(4) FIG. 4 shows the tracking plan in which the pixel points approach horizontally.

(5) FIG. 5 shows the tracking plan in which the pixel points approach along the normal direction.

(6) FIG. 6 shows the tooth surface of the face gear.

(7) FIG. 7 shows the tooth profile error comparison of the face gear.

(8) FIG. 8 shows the contact line comparison of the face gear.

(9) FIG. 9 shows the contact line error comparison of the face gear.

DETAILED DESCRIPTION OF THE EMBODIMENTS

(10) The present invention takes the face gear, which has a complex conjugate curved surface, as an example for verification and analysis, and the specific steps are as follows.

(11) 1. According to the moving track of a hob cutter and the relevant machining parameters, the instantaneous engaging envelope images at different engaging times are obtained, and the binarization is performed, wherein the instantaneous tooth profile equation of the hob cutter is as follows:

(12) r.sub.1(θ.sub.1) = [x.sub.1, y.sub.1, z.sub.1] = [(h − R)sin θ.sub.1 + (R sin θ.sub.1 − w)cos θ.sub.1, (R − h)cos θ.sub.1 + (R sin θ.sub.1 − w)sin θ.sub.1, u.sub.1];  (1)

(13) the instantaneous envelope equation of the face gear is as follows:

(14) r.sub.2(θ.sub.1) = [x.sub.2, y.sub.2, z.sub.2] = [(x.sub.1 cos ϕ.sub.g − z.sub.1 sin ϕ.sub.g)cos θ.sub.2 − y.sub.1 sin θ.sub.2, (x.sub.1 cos ϕ.sub.g − z.sub.1 sin ϕ.sub.g)sin θ.sub.2 + y.sub.1 cos θ.sub.2, −x.sub.1 sin ϕ.sub.g − z.sub.1 cos ϕ.sub.g];  (2)

(15) in the equations, [x.sub.1, y.sub.1, z.sub.1] = [x.sub.1′(ϕ.sub.s), y.sub.1′(ϕ.sub.s), z.sub.1′(ϕ.sub.s)]; θ.sub.1 is the engaging angle at different engaging moments; θ.sub.2 = θ.sub.1/m.sub.gs, where m.sub.gs is the transmission ratio of the face gear pair; ϕ.sub.s is the envelope angle of the hob cutter, and ϕ.sub.g = ϕ.sub.s/m.sub.gs; h and w are the x-coordinate and the y-coordinate of the cutter respectively; and R is the radius of the pitch circle of the cylindrical gear.
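The coordinate relationships of equations (1) and (2) can be sketched in code as follows. This is an illustrative Python rendering only: the function names, argument order, and sample parameter values are assumptions, not part of the patent.

```python
import math

def cutter_profile(theta1, h, w, R, u1):
    """Instantaneous hob-cutter profile point r1, equation (1)."""
    x1 = (h - R) * math.sin(theta1) + (R * math.sin(theta1) - w) * math.cos(theta1)
    y1 = (R - h) * math.cos(theta1) + (R * math.sin(theta1) - w) * math.sin(theta1)
    return x1, y1, u1

def envelope_point(theta1, phi_s, m_gs, h, w, R, u1):
    """Instantaneous face-gear envelope point r2, equation (2).

    theta2 = theta1 / m_gs and phi_g = phi_s / m_gs express the
    transmission-ratio relationship of the face gear pair.
    """
    x1, y1, z1 = cutter_profile(theta1, h, w, R, u1)
    theta2 = theta1 / m_gs
    phi_g = phi_s / m_gs
    a = x1 * math.cos(phi_g) - z1 * math.sin(phi_g)
    x2 = a * math.cos(theta2) - y1 * math.sin(theta2)
    y2 = a * math.sin(theta2) + y1 * math.cos(theta2)
    z2 = -x1 * math.sin(phi_g) - z1 * math.cos(phi_g)
    return x2, y2, z2
```

Sweeping theta1 (and with it phi_s) over the envelope range produces the instantaneous envelope curve cluster that is then rasterized and binarized.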

(16) 2. Regarding the image edge tracking, the gear tooth profiles are all smooth tooth profiles with regular curvature variation. The edge tracking of this type of profile is relatively simple. Consequently, in the present invention, the pixels of the tooth profile edge are preliminarily extracted by the traditional edge tracking method, i.e. the reptile (crawling) method. The pixel points of the gear tooth profile edge have the following two features.

(17) {circle around (1)} Due to the feature of the step-shaped tooth profile pixel edge, the position of the theoretical tooth profile cannot be accurately defined, as shown in FIG. 1. As can be seen from FIG. 1, in the n-th pixel unit (composed of one column of pixel points), the starting pixel point (the smallest coordinate value Y in the pixel unit, Y.sub.min) is closer to the theoretical tooth profile than the terminal pixel point (the largest coordinate value Y in the pixel unit, Y.sub.max), but a certain error still exists.

(18) Assuming that the starting pixel point of the first pixel unit is the starting point, the coordinate of the starting point is set as P.sub.11(x.sub.11,y.sub.11). The coordinate of the starting pixel point of the n-th pixel unit is P.sub.ni(x.sub.ni,y.sub.ni), wherein n=1, 2 . . . ; i is the pixel point, i=1, 2 . . . ; when i=1, it is the starting pixel point of the n-th pixel unit. The slope between the pixel point P.sub.11 and any pixel point in the n-th pixel unit can be defined as below:

(19) k.sub.ni = arctan((y.sub.ni − y.sub.11)/(x.sub.11 − x.sub.ni));  (3)

(20) the position with a maximum slope in each pixel unit is expressed below:
max(k.sub.ni)=[k.sub.21,k.sub.31 . . . k.sub.n1]  (4);

(21) then the pixel point corresponding to the position with the maximum slope is the starting pixel point in each pixel unit:
[P.sub.21,P.sub.31 . . . P.sub.n1]=location(max(k.sub.ni))  (5).
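The selection of equations (3) to (5) amounts to, for each pixel unit (one column of edge pixels), keeping the point with the maximum slope to the global starting point P.sub.11. A minimal Python sketch follows; the list-of-columns data layout is an assumption for illustration:

```python
import math

def starting_pixels(pixel_units):
    """Pick the starting pixel point of each pixel unit, equations (3)-(5).

    pixel_units: list of pixel units, each a list of (x, y) tuples; the
    first point of the first unit is taken as the reference point P11.
    Returns one starting pixel point per unit.
    """
    x11, y11 = pixel_units[0][0]
    starts = [pixel_units[0][0]]
    for unit in pixel_units[1:]:
        # k_ni = arctan((y_ni - y11) / (x11 - x_ni)); math.atan2 avoids a
        # zero-division when x_ni == x11 while preserving the ordering
        k = [math.atan2(y - y11, x11 - x) for (x, y) in unit]
        starts.append(unit[k.index(max(k))])
    return starts
```

For pixel units lying to the right of P.sub.11, the maximum of k.sub.ni is attained at the smallest Y value in the column, which is consistent with the starting pixel point being the Y.sub.min point of each unit.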

(22) {circle around (2)} Regarding some of the positions of the tooth profile edge, since adjacent tooth profiles have relatively small variations in curvature, in the process of image binarization, there will be a distribution rule of pixel points as shown in FIG. 1, namely, each column (Y-axis direction) of the image has a plurality of pixel coordinates. In this case, if merely the starting point of the pixels is taken as the extracted pixel point, some pixel points will be missed and the accuracy of the tooth profile will be adversely affected. In order to increase the number of pixel points in the edge and improve the accuracy of the tooth profile, in the case where the starting pixel point of each pixel unit is determined, a spline interpolation function is established. In the spline interpolation function, the value y of the pixel points is taken as an independent variable, so that different coordinate values x of the pixel points can be obtained and taken as compensated pixel points to increase the number of pixel points.

(23) Assuming that the coordinate of the starting point of an ideal straight line (i.e. the starting pixel point of the n-th pixel unit) is P.sub.n1(x.sub.n1,y.sub.n1), and the coordinate of the terminal point (i.e. the starting pixel point of the (n+1)-th pixel unit) is P.sub.(n+1)1(x.sub.(n+1)1,y.sub.(n+1)1), the equation of an implicit function of the straight line is as follows:

(24) f(x, y) = y − ((y.sub.(n+1)1 − y.sub.n1)/(x.sub.n1 − x.sub.(n+1)1))x − b = 0;  (6)

(25) wherein the value of b can be solved when P.sub.n1(x.sub.n1,y.sub.n1) is substituted into the equation. The compensation principle of the number of compensated pixel points between the two adjacent starting pixel points is as follows. Assuming that
D.sub.n=|y.sub.(n+1)1−y.sub.n1|  (7),

(26) the number of compensated pixel points between the adjacent starting pixel points is D.sub.n−1. For the coordinate of a compensated pixel point, the corresponding x.sub.ni can be obtained by substituting y.sub.n1+1, y.sub.n1+2 . . . , y.sub.n1+D.sub.n−1 into equation (8):

(27) x.sub.ni = ((y.sub.n1 + p) − b)·(x.sub.n1 − x.sub.(n+1)1)/(y.sub.(n+1)1 − y.sub.n1), p = 1, 2 . . . D.sub.n − 1.  (8)
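The compensation of equations (6) to (8) inserts D.sub.n − 1 intermediate points on the ideal straight line between two adjacent starting pixel points, one per intermediate integer y value. The sketch below is illustrative; note that it writes the line slope in the conventional orientation (y.sub.(n+1)1 − y.sub.n1)/(x.sub.(n+1)1 − x.sub.n1), so that the line actually passes through both starting points:

```python
def compensate(p_n, p_np1):
    """Insert D_n - 1 compensated pixel points between two adjacent
    starting pixel points P_n1 and P_(n+1)1 (equations (6)-(8))."""
    (xn, yn), (xm, ym) = p_n, p_np1
    D = abs(ym - yn)               # equation (7)
    m = (ym - yn) / (xm - xn)      # slope of the ideal straight line
    b = yn - m * xn                # intercept, from substituting P_n1
    step = 1 if ym > yn else -1
    # one compensated point per intermediate integer y value
    return [((yn + step * p - b) / m, yn + step * p) for p in range(1, D)]
```

The returned x values are then rounded to pixel coordinates and appended to the edge pixel set.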

(28) 3. The techniques involved in the process of calibrating the coordinate of the cutter profile mainly include:

(29) {circle around (1)} the proportional scaling and translation transformation between the theoretical coordinate and the pixel coordinate; the coordinate transformation matrix is as follows:

(30) M.sub.1 =
[ Dx   0    Δx ]
[ 0    Dy   Δy ]
[ 0    0    1  ];  (9)

(31) wherein Dx and Dy are the magnification ratios of the coordinates; Δx and Δy are the translational displacements;

(32) {circle around (2)} the spline interpolation of the transformed pixel coordinates, namely, the value X of the pixel coordinate of the cutter is taken as an independent variable with a step size having a predetermined interval to perform a spline interpolation on value Y of the pixel coordinate of the cutter profile.
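The scaling-and-translation part of the calibration reduces to one homogeneous transform. A minimal sketch, with the matrix stored as nested lists and the numeric values in the test being placeholders:

```python
def make_M1(Dx, Dy, dx, dy):
    """Homogeneous scaling/translation matrix M1 of equation (9)."""
    return [[Dx, 0.0, dx],
            [0.0, Dy, dy],
            [0.0, 0.0, 1.0]]

def apply_homogeneous(M, x, y):
    """Multiply a 3x3 homogeneous matrix with the column vector [x, y, 1]."""
    X = M[0][0] * x + M[0][1] * y + M[0][2]
    Y = M[1][0] * x + M[1][1] * y + M[1][2]
    return X, Y
```

The spline interpolation of step {circle around (2)} is then applied to the transformed coordinates, with X as the independent variable at a fixed step size.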

(33) 4. Since there is only one engaging point at each engaging moment when the cutter is engaged with the gear tooth profile (the left and right tooth profiles each have one engaging point), the instantaneous engaging point of the cutter and the gear tooth profile is the coordinate coincidence point. In the pixel image, as shown in FIG. 3, the coordinate coincidence point can be determined by the judgment rule that the coordinate values are equal to each other. However, there is an inevitable gap between the obtained coordinates of the edge pixel points and the calibrated coordinates of the cutter, so an approximation is required to eliminate this gap, i.e. the starting pixel point of each pixel unit and the compensated pixel point thereof are translated until they coincide with the coordinate value of the calibrated cutter. In this case, the coincidence point can be determined as the instantaneous engaging point. The basis for the determination is that, as can be seen from FIG. 3, the position of the minimal normal displacement between the pixel point of the image edge and the calibrated cutter is the engaging point. There are two main approximation paths from the edge pixel point to the engaging point.

(34) {circle around (1)} Horizontal approximation along the x-axis. The X-coordinate step size of the calibrated cutter is defined, and the corresponding interpolation Y is obtained. Then, the interpolation Y is rounded to an integer to obtain the coordinate of the cutter in pixel coordinates. Each initial pixel point is horizontally translated through a displacement L until the nearest initial pixel point coincides with the pixel coordinate of the cutter (i.e. the position of the minimal displacement L.sub.min), and this point is the instantaneous engaging point, as shown in FIG. 4.

(35) {circle around (2)} Approximation along the normal direction. The X-coordinate step size of the calibrated cutter is defined, and the corresponding interpolation coordinate Y is calculated to obtain the coordinate of the cutter in pixel coordinates. The image edge pixel points are translated along the normal direction until the nearest pixel point coincides with the pixel coordinate of the cutter, and this point is the instantaneous engaging point, as shown in FIG. 5.
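The horizontal approximation can be sketched as a row-by-row search for the minimal displacement L between the edge pixels and the calibrated cutter pixels. The data layout (lists of (x, y) pixel coordinates) is an assumption for illustration:

```python
def engaging_point(edge_pixels, cutter_pixels):
    """Find the instantaneous engaging point by horizontal approximation:
    for each edge pixel, L is the horizontal displacement to the calibrated
    cutter pixel in the same row; the row with minimal L (L_min) gives the
    engaging point."""
    cutter_by_row = {y: x for (x, y) in cutter_pixels}
    best, best_L = None, float("inf")
    for (x, y) in edge_pixels:
        if y in cutter_by_row:
            L = abs(cutter_by_row[y] - x)
            if L < best_L:
                best_L, best = L, (cutter_by_row[y], y)
    return best, best_L
```

The normal-direction approximation of FIG. 5 is analogous, with L measured along the local normal of the edge instead of along the x-axis.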

(36) 5. In the process of converting the pixel points among different instantaneous engaging images:

(37) {circle around (1)} when the transmission mode is a gear transmission between the intersecting axes, by extracting the displacement variation quantity of the node position in each instantaneous engaging image, the movement displacement of each instantaneous engaging image can be obtained and the transformation coordinate matrix is established; the transformation coordinate matrix is as below:

(38) M.sub.2 =
[ 0   0   Δx ]
[ 0   0   0  ]
[ 0   0   0  ];  (10)

(39) {circle around (2)} when the transmission mode is a gear transmission between the parallel axes, the angular displacement of each instantaneous engaging image can be obtained according to the rotation angle variation of the node position in each instantaneous engaging image, and the transformed coordinate matrix is obtained:

(40) M.sub.2 =
[ cos θ.sub.1   −sin θ.sub.1   0 ]
[ sin θ.sub.1    cos θ.sub.1   0 ]
[ 0              0             1 ].  (11)
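Both inter-image transformations, equations (10) and (11), can be produced by a single helper; the mode names below are illustrative labels, not terminology from the patent:

```python
import math

def make_M2(mode, dx=0.0, theta1=0.0):
    """Inter-image matrix M2: a pure Δx translation for transmission
    between intersecting axes (equation (10)), a rotation by θ1 for
    transmission between parallel axes (equation (11))."""
    if mode == "intersecting":
        return [[0.0, 0.0, dx],
                [0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0]]
    if mode == "parallel":
        c, s = math.cos(theta1), math.sin(theta1)
        return [[c, -s, 0.0],
                [s, c, 0.0],
                [0.0, 0.0, 1.0]]
    raise ValueError(f"unknown transmission mode: {mode}")
```

Here dx is the displacement variation of the node position between two instantaneous engaging images, and theta1 is the corresponding rotation angle variation.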

(41) 6. Assuming the theoretical coordinate of the cutter is P′(x, y) and the pixel coordinate of the engaging point is P(X, Y), the final pixel coordinate of the tooth profile edge can be obtained by the following equation:

(42)
[ X ]   [ Dx   0    Δx ] [ x ]
[ Y ] = [ 0    Dy   Δy ] [ y ] + M.sub.2  (12)
[ 1 ]   [ 0    0    1  ] [ 1 ]

(43) Taking the face gear as an example, the method provided by the present invention can be used to analyze the tooth profile of the face gear (as shown in FIG. 6), the tooth profile error of the face gear (as shown in FIG. 7), and the numerical solution of the contact track (as shown in FIG. 8) and the corresponding error thereof (as shown in FIG. 9). It is proved by the results that the tooth profile of the face gear obtained by the method provided by the present invention has a higher accuracy, so the method can be used for an accurate solution of the tooth profile of the face gear. Accordingly, the present invention can be used as a powerful tool in the simulation and machining of the face gear and the accurate forming of the tooth surface.