MULTISPECTRAL CAMERA EXTERNAL PARAMETER SELF-CALIBRATION ALGORITHM BASED ON EDGE FEATURES

20220036589 · 2022-02-03


    Abstract

    The present invention discloses a multispectral camera external parameter self-calibration algorithm based on edge features, and belongs to the field of image processing and computer vision. Because a visible light camera and an infrared camera image in different modalities, directly extracting and matching feature points yields few satisfactory point pairs. To solve this problem, the method starts from edge features and finds the optimal corresponding position of the infrared image on the visible light image through edge extraction and matching. In this way, the search range is reduced and the number of satisfactory matched point pairs is increased, so that joint self-calibration of the infrared camera and the visible light camera is conducted more effectively. The operation is simple and the results are accurate.

    Claims

    1. A multispectral camera external parameter self-calibration algorithm based on edge features, comprising the following steps: 1) original image correction: conducting de-distortion and binocular correction on an original image according to internal parameters and original external parameters of an infrared camera and a visible light camera; 2) scene edge detection: extracting the edges of an infrared image and a visible light image respectively; 3) judging an optimal corresponding position of the infrared image on the visible light image: matching the edges of the infrared image with the edges of the visible light image, and determining the corresponding position according to the matching result; 4) extracting and selecting optimal matching point pairs: extracting and selecting satisfactory matching point pairs according to the optimal corresponding position of the infrared image on the visible light image; 5) judging the feature point coverage area: dividing the image into m×n grids; if the feature points cover all the grids, executing the next step; otherwise, continuing to shoot images and extract feature points; 6) correcting the calibration result: using the image coordinates of all the feature points to calculate the corrected positional relationship between the two cameras, and then superimposing it with the original external parameters.

    2. The multispectral camera external parameter self-calibration algorithm based on edge features according to claim 1, wherein the specific process of step 1) is as follows: 1-1) calculating the coordinates in a normal coordinate system corresponding to the pixel points of the image, wherein the pixel coordinate system takes the upper left corner of the image as its origin, and its x-axis and y-axis are parallel to the x-axis and y-axis of the image coordinate system, respectively; the unit of the pixel coordinate system is the pixel; the normal coordinate system is the projection of the camera coordinate system onto the plane $Z=1$; the camera coordinate system takes the camera center as its origin, the image directions as the X and Y axis directions, and the direction perpendicular to the image as the Z axis direction; the relationship between pixel coordinates and normal coordinates, $u = KX$, is:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$$

wherein $u = [u, v, 1]^T$ indicates the pixel coordinate on the image; $K$ is the internal parameter matrix of the camera; $f_x$ and $f_y$ respectively indicate the focal lengths of the image in the x and y directions, in pixels; $(c_x, c_y)$ indicates the principal point of the camera, i.e., the position of the camera center on the image; and $X = [X, Y, 1]^T$ is a coordinate in the normal coordinate system; the normal coordinates corresponding to the pixel points are calculated as $X = K^{-1}u$ from the known pixel coordinates of the image and the internal parameters of the camera; 1-2) removing image distortion: the radial distortion of the image is described as:

$x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$
$y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$

wherein $r^2 = x^2 + y^2$, and $k_1$, $k_2$ and $k_3$ are radial distortion parameters; the tangential distortion of the image is described as:

$x_d = x + (2 p_1 x y + p_2 (r^2 + 2x^2))$
$y_d = y + (p_1 (r^2 + 2y^2) + 2 p_2 x y)$

wherein $p_1$ and $p_2$ are tangential distortion coefficients; the combined coordinate relationship before and after distortion is:

$x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (2 p_1 x y + p_2 (r^2 + 2x^2))$
$y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (p_1 (r^2 + 2y^2) + 2 p_2 x y)$

wherein $(x, y)$ is the ideal undistorted normal coordinate and $(x_d, y_d)$ is the actual distorted normal coordinate; 1-3) rotating the two images toward each other according to the original rotation relationship between the two cameras: the original rotation matrix $R$ and translation vector $t$ between the two cameras are known, with

$X_r = R X_l + t$

wherein $X_l$ indicates the normal coordinate of the infrared camera and $X_r$ indicates the normal coordinate of the visible light camera; the infrared image is rotated by half the angle of $R$ in the positive direction, and the visible light image is rotated by half the angle of $R$ in the opposite direction; 1-4) restoring the de-distorted and rotated images to the pixel coordinate system according to the formula $u = KX$.

    3. The multispectral camera external parameter self-calibration algorithm based on edge features according to claim 1, wherein step 3) specifically comprises the following steps: 3-1) calculating the cross-correlation coefficient of the visible light edge image and the infrared edge image by using a normalized cross-correlation matching method:

$$\rho(u, v) = \frac{\sum_{i=0}^{M} \sum_{j=0}^{N} \left( \mathrm{Im}_{Oe}^{u,v}(i, j) - \mathrm{Im}_{IRe}(i, j) \right)}{\sigma_{O}^{u,v} \, \sigma_{IR}}$$

wherein $(u, v)$ indicates the position of the infrared edge image $\mathrm{Im}_{IRe}$ relative to the visible light edge image $\mathrm{Im}_{Oe}$; $\mathrm{Im}_{Oe}^{u,v}$ indicates the part of $\mathrm{Im}_{Oe}$ taking $(u, v)$ as its starting point and having the same size as $\mathrm{Im}_{IRe}$; and $\sigma_{O}^{u,v}$ and $\sigma_{IR}$ indicate the standard deviations of the corresponding images; selecting the group of points $\{(u_k, v_k)\}$ that maximize $\rho(u, v)$ as candidate corresponding positions; 3-2) rotating the image at each candidate position multiple times over an angle range, and selecting the corresponding position and rotation angle that maximize $\rho(u, v)$.

    4. The multispectral camera external parameter self-calibration algorithm based on edge features according to claim 1, wherein step 4) specifically comprises the following steps: 4-1) selecting the optimal corresponding position of the infrared image on the visible light image: translating and rotating the infrared image according to the result of step 3), and then detecting feature points on the visible light image and on the translated and rotated infrared image respectively; 4-2) dividing the infrared image and visible light image areas into m×n blocks at the same time; for each feature point $p_i^l$ of the infrared image, finding the block $b_{x_i, y_i}^l$ containing the feature point in the infrared image; recording the search range in the visible light image corresponding to the block $b_{x_i, y_i}^l$ as $\{P_i^r\}$, as shown in FIG. 3; finding a variable which can describe the similarity of feature points, to assess the similarity between $p_i^l$ and any point in $\{P_i^r\}$; if the maximum similarity is greater than a threshold $t_1$, regarding that point as a rough matching point $p_i^r$; 4-3) if the maximum similarity $s_{first}$ and the second maximum similarity $s_{second}$ between $p_i^l$ and $\{P_i^r\}$ satisfy

$F(s_{first}, s_{second}) \ge t_2$

reserving the match, wherein $t_2$ is a threshold and $F(s_{first}, s_{second})$ describes a relationship between $s_{first}$ and $s_{second}$; after selection according to this rule, matching the feature point $p'_i{}^l$ corresponding to $p_i^r$ back in the infrared image according to steps 4-2) and 4-3), and reserving the match $\langle p_i^l, p_i^r \rangle$ if $p'_i{}^l = p_i^l$ is satisfied; 4-4) based on the infrared image feature point $p_i^l = (x_i^l, y_i^l)$, conducting parabolic fitting to optimize the integer pixel feature point $p_i^r = (x_i^r, y_i^r)$ corresponding to the visible light image, to obtain the sub-pixel feature point $p'_i{}^r = (x_i^r + j_{rx}^*, y_i^r + j_{ry}^*)$ corresponding to the visible light image, wherein $j_{rx}^*$ is the sub-pixel offset in the x direction and $j_{ry}^*$ is the sub-pixel offset in the y direction; 4-5) based on the integer pixel feature point $p_i^r = (x_i^r, y_i^r)$ corresponding to the visible light image, calculating the sub-pixel feature point $p'_i{}^l = (x_i^l + j_{lx}^*, y_i^l + j_{ly}^*)$ corresponding to the infrared image according to the method of step 4-4), wherein $j_{lx}^*$ is the sub-pixel offset in the x direction and $j_{ly}^*$ is the sub-pixel offset in the y direction; 4-6) obtaining the final matching point pair $\langle p'_i{}^l, p'_i{}^r \rangle$, and restoring $p'_i{}^l$ to the coordinates before the rotation and translation of the infrared image according to the inverse process of step 4-1).

    5. The multispectral camera external parameter self-calibration algorithm based on edge features according to claim 3, wherein step 4) specifically comprises the following steps: 4-1) selecting the optimal corresponding position of the infrared image on the visible light image: translating and rotating the infrared image according to the result of step 3), and then detecting feature points on the visible light image and on the translated and rotated infrared image respectively; 4-2) dividing the infrared image and visible light image areas into m×n blocks at the same time; for each feature point $p_i^l$ of the infrared image, finding the block $b_{x_i, y_i}^l$ containing the feature point in the infrared image; recording the search range in the visible light image corresponding to the block $b_{x_i, y_i}^l$ as $\{P_i^r\}$, as shown in FIG. 3; finding a variable which can describe the similarity of feature points, to assess the similarity between $p_i^l$ and any point in $\{P_i^r\}$; if the maximum similarity is greater than a threshold $t_1$, regarding that point as a rough matching point $p_i^r$; 4-3) if the maximum similarity $s_{first}$ and the second maximum similarity $s_{second}$ between $p_i^l$ and $\{P_i^r\}$ satisfy

$F(s_{first}, s_{second}) \ge t_2$

reserving the match, wherein $t_2$ is a threshold and $F(s_{first}, s_{second})$ describes a relationship between $s_{first}$ and $s_{second}$; after selection according to this rule, matching the feature point $p'_i{}^l$ corresponding to $p_i^r$ back in the infrared image according to steps 4-2) and 4-3), and reserving the match $\langle p_i^l, p_i^r \rangle$ if $p'_i{}^l = p_i^l$ is satisfied; 4-4) based on the infrared image feature point $p_i^l = (x_i^l, y_i^l)$, conducting parabolic fitting to optimize the integer pixel feature point $p_i^r = (x_i^r, y_i^r)$ corresponding to the visible light image, to obtain the sub-pixel feature point $p'_i{}^r = (x_i^r + j_{rx}^*, y_i^r + j_{ry}^*)$ corresponding to the visible light image, wherein $j_{rx}^*$ is the sub-pixel offset in the x direction and $j_{ry}^*$ is the sub-pixel offset in the y direction; 4-5) based on the integer pixel feature point $p_i^r = (x_i^r, y_i^r)$ corresponding to the visible light image, calculating the sub-pixel feature point $p'_i{}^l = (x_i^l + j_{lx}^*, y_i^l + j_{ly}^*)$ corresponding to the infrared image according to the method of step 4-4), wherein $j_{lx}^*$ is the sub-pixel offset in the x direction and $j_{ly}^*$ is the sub-pixel offset in the y direction; 4-6) obtaining the final matching point pair $\langle p'_i{}^l, p'_i{}^r \rangle$, and restoring $p'_i{}^l$ to the coordinates before the rotation and translation of the infrared image according to the inverse process of step 4-1).

    6. The multispectral camera external parameter self-calibration algorithm based on edge features according to claim 1, wherein step 6) specifically comprises the following steps: 6-1) further screening the point pairs by using random sample consensus; 6-2) solving the fundamental matrix $F$ and the essential matrix $E$: the relationship between the corresponding infrared and visible light pixel points $u_l$ and $u_r$ and the fundamental matrix $F$ is

$u_r^T F u_l = 0$

and the coordinates of the corresponding points are substituted into the above formula to construct a homogeneous linear equation system from which $F$ is solved; the relationship between the fundamental matrix and the essential matrix is

$E = K_r^T F K_l$

wherein $K_l$ and $K_r$ are respectively the internal parameter matrices of the infrared camera and the visible light camera; 6-3) decomposing the rotation and translation from the essential matrix: the relationship between the essential matrix $E$, the rotation $R$ and the translation $t$ is

$E = [t]_\times R$

wherein $[t]_\times$ indicates the cross-product (skew-symmetric) matrix of $t$; conducting singular value decomposition on $E$ to obtain

$$E = U \Sigma V^T = U \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} V^T$$

and defining two matrices

$$Z = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad W = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad ZW = \Sigma,$$

$E$ is thus written in the following two forms:

$E = U Z U^T U W V^T$  (1)

setting $[t]_\times = U Z U^T$, $R = U W V^T$;

$E = -U Z U^T U W^T V^T$  (2)

setting $[t]_\times = -U Z U^T$, $R = U W^T V^T$; 6-4) superimposing the decomposed rotation and translation onto the original positional relationship between the infrared camera and the visible light camera.

    7. The multispectral camera external parameter self-calibration algorithm based on edge features according to claim 3, wherein step 6) specifically comprises the following steps: 6-1) further screening the point pairs by using random sample consensus; 6-2) solving the fundamental matrix $F$ and the essential matrix $E$: the relationship between the corresponding infrared and visible light pixel points $u_l$ and $u_r$ and the fundamental matrix $F$ is

$u_r^T F u_l = 0$

and the coordinates of the corresponding points are substituted into the above formula to construct a homogeneous linear equation system from which $F$ is solved; the relationship between the fundamental matrix and the essential matrix is

$E = K_r^T F K_l$

wherein $K_l$ and $K_r$ are respectively the internal parameter matrices of the infrared camera and the visible light camera; 6-3) decomposing the rotation and translation from the essential matrix: the relationship between the essential matrix $E$, the rotation $R$ and the translation $t$ is

$E = [t]_\times R$

wherein $[t]_\times$ indicates the cross-product (skew-symmetric) matrix of $t$; conducting singular value decomposition on $E$ to obtain

$$E = U \Sigma V^T = U \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} V^T$$

and defining two matrices

$$Z = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad W = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad ZW = \Sigma,$$

$E$ is thus written in the following two forms:

$E = U Z U^T U W V^T$  (1)

setting $[t]_\times = U Z U^T$, $R = U W V^T$;

$E = -U Z U^T U W^T V^T$  (2)

setting $[t]_\times = -U Z U^T$, $R = U W^T V^T$; 6-4) superimposing the decomposed rotation and translation onto the original positional relationship between the infrared camera and the visible light camera.

    8. The multispectral camera external parameter self-calibration algorithm based on edge features according to claim 4, wherein step 6) specifically comprises the following steps: 6-1) further screening the point pairs by using random sample consensus; 6-2) solving the fundamental matrix $F$ and the essential matrix $E$: the relationship between the corresponding infrared and visible light pixel points $u_l$ and $u_r$ and the fundamental matrix $F$ is

$u_r^T F u_l = 0$

and the coordinates of the corresponding points are substituted into the above formula to construct a homogeneous linear equation system from which $F$ is solved; the relationship between the fundamental matrix and the essential matrix is

$E = K_r^T F K_l$

wherein $K_l$ and $K_r$ are respectively the internal parameter matrices of the infrared camera and the visible light camera; 6-3) decomposing the rotation and translation from the essential matrix: the relationship between the essential matrix $E$, the rotation $R$ and the translation $t$ is

$E = [t]_\times R$

wherein $[t]_\times$ indicates the cross-product (skew-symmetric) matrix of $t$; conducting singular value decomposition on $E$ to obtain

$$E = U \Sigma V^T = U \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} V^T$$

and defining two matrices

$$Z = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad W = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad ZW = \Sigma,$$

$E$ is thus written in the following two forms:

$E = U Z U^T U W V^T$  (1)

setting $[t]_\times = U Z U^T$, $R = U W V^T$;

$E = -U Z U^T U W^T V^T$  (2)

setting $[t]_\times = -U Z U^T$, $R = U W^T V^T$; 6-4) superimposing the decomposed rotation and translation onto the original positional relationship between the infrared camera and the visible light camera.

    Description

    DESCRIPTION OF DRAWINGS

    [0051] FIG. 1 is the overall flow chart.

    [0052] FIG. 2 is a flow chart of binocular correction.

    [0053] FIG. 3(a) is a schematic diagram of an infrared block, and FIG. 3(b) is a schematic diagram of a visible light block.

    DETAILED DESCRIPTION

    [0054] The present invention aims to address the change in the positional relationship between an infrared camera and a visible light camera caused by factors such as temperature, humidity and vibration. The present invention is described in detail below in combination with the drawings and embodiments.

    [0055] 1) Original image correction: conducting de-distortion and binocular correction on an original image according to internal parameters and original external parameters of the infrared camera and the visible light camera. The flow is shown in FIG. 2.

    [0056] 1-1) Calculating the coordinates in a normal coordinate system corresponding to the pixel points of the image, wherein the pixel coordinate system takes the upper left corner of the image as its origin, and its x-axis and y-axis are parallel to the x-axis and y-axis of the image coordinate system, respectively; the unit of the pixel coordinate system is the pixel, the basic and indivisible unit of image display; the normal coordinate system is the projection of the camera coordinate system onto the plane Z=1; the camera coordinate system takes the camera center as its origin, the image directions as the X and Y axis directions, and the direction perpendicular to the image as the Z axis direction. The relationship between pixel coordinates and normal coordinates is as follows:

    [00008] $u = KX$, i.e.,

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$$

    [0057] wherein

    [00009] $u = \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$

    indicates the pixel coordinate of the image;

    [00010] $K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$

    indicates the internal parameter matrix of the camera; $f_x$ and $f_y$ respectively indicate the focal lengths of the image in the x and y directions (in pixels); $(c_x, c_y)$ indicates the principal point of the camera, i.e., the position of the camera center on the image; and

    [00011] $X = \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}$

    is a coordinate in the normal coordinate system. The normal coordinates corresponding to the pixel points can be calculated from the known pixel coordinates of the image and the internal parameters of the camera as

$X = K^{-1} u$
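
    The following minimal sketch (an illustration, not part of the patent; it assumes a NumPy environment, and the intrinsic values shown are placeholders) computes normal coordinates from pixel coordinates via $X = K^{-1}u$:

```python
import numpy as np

def pixel_to_normal(points_uv, K):
    """Map pixel coordinates to the normal (Z=1) coordinate system via X = K^-1 u."""
    pts = np.hstack([points_uv, np.ones((len(points_uv), 1))])  # homogeneous rows [u, v, 1]
    X = (np.linalg.inv(K) @ pts.T).T                            # rows are [X, Y, 1]
    return X[:, :2]

# Placeholder intrinsics: fx = fy = 800 px, principal point (320, 240) -- illustrative only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
print(pixel_to_normal(np.array([[320.0, 240.0], [400.0, 300.0]]), K))
```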

    [0058] 1-2) Removing image distortion: due to limitations of the lens production process, a real lens exhibits distortion, causing nonlinear distortion; a purely linear model therefore cannot accurately describe the imaging geometry. The nonlinear distortion can be roughly classified into radial distortion and tangential distortion.

    [0059] The radial distortion of the image is a positional deviation of the image pixel points along the radial direction, with the distortion center as the center point, which distorts the content of the image. The radial distortion is roughly described as follows:

$x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$

$y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$

    [0060] wherein $r^2 = x^2 + y^2$, and $k_1$, $k_2$ and $k_3$ are radial distortion parameters.

    [0061] The tangential distortion is caused by a manufacturing defect that makes the lens not parallel to the image plane, and can be quantitatively described as:

$x_d = x + (2 p_1 x y + p_2 (r^2 + 2x^2))$

$y_d = y + (p_1 (r^2 + 2y^2) + 2 p_2 x y)$

    [0062] wherein $p_1$ and $p_2$ are tangential distortion coefficients.

    [0063] In conclusion, the coordinate relationship before and after distortion is as follows:

$x_d = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (2 p_1 x y + p_2 (r^2 + 2x^2))$

$y_d = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + (p_1 (r^2 + 2y^2) + 2 p_2 x y)$

    [0064] wherein $(x, y)$ is the ideal undistorted normal coordinate, and $(x_d, y_d)$ is the actual distorted normal coordinate.
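
    As a sketch of how the combined model above maps ideal to distorted normal coordinates (the inverse, de-distortion, is typically done by iterative inversion or a remap, e.g., OpenCV's undistortion functions; the parameter values below are hypothetical):

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Apply the combined radial + tangential model to ideal normal coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + (2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x))
    y_d = y * radial + (p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y)
    return x_d, y_d

print(distort(0.1, -0.05, k1=-0.28, k2=0.07, k3=0.0, p1=1e-4, p2=-2e-4))
```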

    [0065] 1-3) Rotating the two images toward each other according to the original rotation relationship between the two cameras: the original rotation matrix $R$ and translation vector $t$ between the two cameras are known:

$X_r = R X_l + t$

    [0066] wherein $X_l$ indicates the normal coordinate of the infrared camera, and $X_r$ indicates the normal coordinate of the visible light camera. The infrared image is rotated by half the angle of $R$ in the positive direction, and the visible light image is rotated by half the angle of $R$ in the opposite direction.
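
    One way to realize the half-angle rotations (a sketch assuming OpenCV; cv2.Rodrigues converts between a rotation matrix and its axis-angle vector, so halving the vector halves the angle):

```python
import cv2
import numpy as np

def half_rotations(R):
    """Split the inter-camera rotation R into two half-rotations:
    the infrared image gets +half the angle, the visible light image -half."""
    rvec, _ = cv2.Rodrigues(R)              # axis-angle representation of R
    R_ir, _ = cv2.Rodrigues(rvec / 2.0)     # positive half rotation (infrared)
    R_vis, _ = cv2.Rodrigues(-rvec / 2.0)   # opposite half rotation (visible light)
    return R_ir, R_vis
```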

    [0067] 1-4) Restoring the de-distorted and rotated images to the pixel coordinate system according to the formula $u = KX$.

    [0068] 2) Scene edge detection: extracting the edges of an infrared image and a visible light image respectively.

    [0069] 3) Judging an optimal corresponding position of the infrared image on the visible light image: matching the edges of the infrared image with the edges of the visible light image, and determining the corresponding position according to a matching result.

    [0070] 3-1) Calculating the cross-correlation coefficient of the visible light edge image and the infrared edge image by using a normalized cross-correlation matching method:

    [00012] $$\rho(u, v) = \frac{\sum_{i=0}^{M} \sum_{j=0}^{N} \left( \mathrm{Im}_{Oe}^{u,v}(i, j) - \mathrm{Im}_{IRe}(i, j) \right)}{\sigma_{O}^{u,v} \, \sigma_{IR}}$$

    [0071] wherein $(u, v)$ indicates the position of the infrared edge image $\mathrm{Im}_{IRe}$ relative to the visible light edge image $\mathrm{Im}_{Oe}$; $\mathrm{Im}_{Oe}^{u,v}$ indicates the part of $\mathrm{Im}_{Oe}$ taking $(u, v)$ as its starting point and having the same size as $\mathrm{Im}_{IRe}$; and $\sigma_{O}^{u,v}$ and $\sigma_{IR}$ respectively indicate the standard deviations of the corresponding images.

    [0072] A group of points $\{(u_k, v_k)\}$ that maximize $\rho(u, v)$ is selected as candidate corresponding positions.

    [0073] 3-2) Rotating the image at each candidate position multiple times over an angle range: the range of −10° to 10° is divided into 200 steps, i.e., the image is rotated by 0.1° at a time starting from the −10° position, and the corresponding position and rotation angle that maximize $\rho(u, v)$ are selected.
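
    A compact sketch of the search in steps 3-1) and 3-2), assuming OpenCV edge maps (e.g., Canny output) with the infrared map smaller than the visible light map; cv2.matchTemplate with TM_CCOEFF_NORMED stands in for the correlation coefficient $\rho(u, v)$, and the candidate selection and rotation refinement are folded into one loop for brevity:

```python
import cv2
import numpy as np

def best_alignment(vis_edges, ir_edges, angles=np.arange(-10.0, 10.1, 0.1)):
    """Find the translation (u, v) and rotation angle of the infrared edge map
    on the visible light edge map that maximize normalized cross-correlation."""
    h, w = ir_edges.shape
    center = (w / 2.0, h / 2.0)
    best_score, best_loc, best_angle = -1.0, None, 0.0
    for angle in angles:                                  # 200 steps of 0.1 degrees
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(ir_edges, M, (w, h))     # rotate IR edges about center
        res = cv2.matchTemplate(vis_edges, rotated, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)             # best (u, v) at this angle
        if score > best_score:
            best_score, best_loc, best_angle = score, loc, angle
    return best_loc, best_angle, best_score
```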

    [0074] 4) Extracting and selecting an optimal matching point pair: extracting and selecting a satisfactory matching point pair according to the optimal corresponding position of the infrared image on the visible light image.

    [0075] 4-1) Selecting the optimal corresponding position of the infrared image on the visible light image; translating and rotating the infrared image according to the result of step 3); and then detecting the feature points on the visible light image and the translated and rotated infrared image respectively.

    [0076] 4-2) Dividing the infrared image and visible light image areas into m×n blocks at the same time; for each feature point $p_i^l$ of the infrared image, finding the block $b_{x_i, y_i}^l$ containing the feature point in the infrared image; recording the search range in the visible light image corresponding to the block $b_{x_i, y_i}^l$ as $\{P_i^r\}$, as shown in FIGS. 3(a) and 3(b); finding a variable which can describe the similarity of feature points, to assess the similarity between $p_i^l$ and any point in $\{P_i^r\}$; if the maximum similarity is greater than a threshold $t_1$, regarding that point as a rough matching point $p_i^r$.

    [0077] 4-3) If the maximum similarity $s_{first}$ and the second maximum similarity $s_{second}$ between $p_i^l$ and $\{P_i^r\}$ satisfy:

$F(s_{first}, s_{second}) \ge t_2$

    [0078] the match is reserved, wherein $t_2$ is a threshold and $F(s_{first}, s_{second})$ describes a relationship between $s_{first}$ and $s_{second}$.

    [0079] After selection according to this rule, the feature point $p'_i{}^l$ corresponding to $p_i^r$ is matched back in the infrared image according to the above steps, and the match $\langle p_i^l, p_i^r \rangle$ is reserved if $p'_i{}^l = p_i^l$ is satisfied, as illustrated in the sketch below.
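
    The similarity measure is deliberately left open above; the sketch below assumes it has already been evaluated into a score matrix sim[i, j] between each infrared feature point i and the visible light candidates j within its block, and shows the threshold $t_1$, one possible choice of $F$ (a best-to-second-best ratio), and the mutual consistency check. All names and default values are illustrative:

```python
import numpy as np

def mutual_block_match(sim, t1=0.8, t2=1.1):
    """Keep matches (i, j) whose best score exceeds t1, whose ratio
    F(s_first, s_second) = s_first / s_second is at least t2, and whose
    best match is mutual (matching j back yields i again)."""
    best_for_ir = np.argmax(sim, axis=1)    # best visible candidate for each IR point
    best_for_vis = np.argmax(sim, axis=0)   # best IR candidate for each visible point
    matches = []
    for i, j in enumerate(best_for_ir):
        scores = np.sort(sim[i])[::-1]
        s_first = scores[0]
        s_second = scores[1] if scores.size > 1 else 1e-9
        if s_first > t1 and s_first / max(s_second, 1e-9) >= t2 and best_for_vis[j] == i:
            matches.append((i, int(j)))
    return matches
```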

    [0080] 4-4) Based on the infrared image feature point $p_i^l = (x_i^l, y_i^l)$, conducting parabolic fitting to optimize the integer pixel feature point $p_i^r = (x_i^r, y_i^r)$ corresponding to the visible light image, to obtain the sub-pixel feature point $p'_i{}^r = (x_i^r + j_{rx}^*, y_i^r + j_{ry}^*)$ corresponding to the visible light image, wherein $j_{rx}^*$ is the sub-pixel offset in the x direction and $j_{ry}^*$ is the sub-pixel offset in the y direction.

    [0081] 4-5) Based on the integer pixel feature point $p_i^r = (x_i^r, y_i^r)$ corresponding to the visible light image, calculating the sub-pixel feature point $p'_i{}^l = (x_i^l + j_{lx}^*, y_i^l + j_{ly}^*)$ corresponding to the infrared image according to the method of step 4-4), wherein $j_{lx}^*$ is the sub-pixel offset in the x direction and $j_{ly}^*$ is the sub-pixel offset in the y direction.
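
    The patent does not spell out the parabolic fit; a common one-dimensional version, assumed here, fits a parabola through the similarity scores at the best integer offset and its two neighbours and takes the vertex, applied independently in x and y:

```python
def parabolic_offset(s_minus, s_zero, s_plus):
    """Sub-pixel offset j* of the peak of a parabola through the scores at
    offsets -1, 0, +1; the result lies in (-0.5, 0.5)."""
    denom = s_minus - 2.0 * s_zero + s_plus
    if denom == 0.0:            # flat or degenerate neighbourhood: keep integer position
        return 0.0
    return 0.5 * (s_minus - s_plus) / denom
```

    Applied to the score profile around the integer peak along each axis, this yields the offsets $j_x^*$ and $j_y^*$.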

    [0082] 4-6) Obtaining the final matching point pair $\langle p'_i{}^l, p'_i{}^r \rangle$, and restoring $p'_i{}^l$ to the coordinates before the rotation and translation of the infrared image according to the inverse process of step 4-1).

    [0083] 5) Judging the feature point coverage area: dividing the image into m×n grids; if the feature points cover all the grids, executing the next step; otherwise, continuing to shoot images and extract feature points.
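
    A sketch of this coverage test (the grid size m×n is left open by the text; 8×8 below is an arbitrary placeholder):

```python
import numpy as np

def grids_covered(points, width, height, m=8, n=8):
    """Return True when the feature points fall into every one of the m x n grid cells."""
    covered = np.zeros((m, n), dtype=bool)
    for x, y in points:
        row = min(int(y * m / height), m - 1)   # grid row from the y coordinate
        col = min(int(x * n / width), n - 1)    # grid column from the x coordinate
        covered[row, col] = True
    return bool(covered.all())
```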

    [0084] 6) Correcting the calibration result: using the image coordinates of all the feature points to calculate the corrected positional relationship between the two cameras, and then superimposing it with the original external parameters.

    [0085] 6-1) Further screening the point pairs by using random sample consensus (RANSAC).

    [0086] 6-2) Solving the fundamental matrix F and the essential matrix E: the relationship between the corresponding infrared and visible light pixel points $u_l$ and $u_r$ and the fundamental matrix $F$ is:

$u_r^T F u_l = 0$

    [0087] The coordinates of the corresponding points are substituted into the above formula to construct a homogeneous linear equation system, which is solved for $F$.

    [0088] The relationship between the fundamental matrix and the essential matrix is:

$E = K_r^T F K_l$

    [0089] wherein $K_l$ and $K_r$ are respectively the internal parameter matrices of the infrared camera and the visible light camera.
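
    In practice, steps 6-1) and 6-2) can be carried out together; the following sketch (assuming OpenCV, as an illustration rather than the patent's prescribed procedure) screens matches with RANSAC, estimates $F$, and forms $E$. OpenCV's convention is that the second point set satisfies $u_2^T F u_1 = 0$, so the infrared points go first:

```python
import cv2
import numpy as np

def fundamental_and_essential(pts_ir, pts_vis, K_ir, K_vis):
    """Estimate F from matched pixel points satisfying u_r^T F u_l = 0
    (l: infrared, r: visible light), screening outliers with RANSAC,
    then form E = K_r^T F K_l."""
    F, inlier_mask = cv2.findFundamentalMat(pts_ir, pts_vis, cv2.FM_RANSAC, 1.0, 0.999)
    E = K_vis.T @ F @ K_ir
    return F, E, inlier_mask
```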

    [0090] 6-3) Decomposing the rotation and translation from the essential matrix: the relationship between the essential matrix $E$, the rotation $R$ and the translation $t$ is as follows:

$E = [t]_\times R$

    [0091] wherein $[t]_\times$ indicates the cross-product (skew-symmetric) matrix of $t$.
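
    For reference, a minimal sketch of the cross-product matrix used here:

```python
import numpy as np

def cross_matrix(t):
    """Skew-symmetric matrix [t]x with cross_matrix(t) @ v == np.cross(t, v)."""
    return np.array([[ 0.0,  -t[2],  t[1]],
                     [ t[2],   0.0, -t[0]],
                     [-t[1],  t[0],   0.0]])
```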

    [0092] Conducting singular value decomposition on $E$ to obtain:

    [00013] $$E = U \Sigma V^T = U \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} V^T$$

    [0093] Defining two matrices

    [00014] $$Z = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad W = \begin{bmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad ZW = \Sigma$$

    [0094] Thus, $E$ can be written in the following two forms:

$E = U Z U^T U W V^T$  (1)

setting $[t]_\times = U Z U^T$, $R = U W V^T$;

$E = -U Z U^T U W^T V^T$  (2)

setting $[t]_\times = -U Z U^T$, $R = U W^T V^T$.
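
    A sketch of the decomposition above (NumPy; in practice one of the two candidates is selected, e.g., by requiring det(R) = +1 and checking that triangulated points lie in front of both cameras, which the text leaves implicit):

```python
import numpy as np

def decompose_essential(E):
    """Split E = [t]x R into the two ([t]x, R) candidates of forms (1) and (2)."""
    U, S, Vt = np.linalg.svd(E)
    Z = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    cand1 = (U @ Z @ U.T,  U @ W @ Vt)     # form (1): [t]x = U Z U^T,  R = U W V^T
    cand2 = (-U @ Z @ U.T, U @ W.T @ Vt)   # form (2): [t]x = -U Z U^T, R = U W^T V^T
    return cand1, cand2
```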

    [0095] 6-4) Superimposing the decomposed rotation and translation onto the original positional relationship between the infrared camera and the visible light camera.