Method and device for merging images of calibration devices
10516822 · 2019-12-24
Inventors
- Chin-Kuei Chang (New Taipei, TW)
- Shang-Chieh Lu (Tainan, TW)
- Wei-Yao Chiu (Chiayi, TW)
- Bor-Tung Jiang (Hsinchu, TW)
CPC classification
G06T3/4038
Abstract
The present disclosure provides an image merging method that includes the following steps. First, a calibration unit is provided, wherein each calibration device of the calibration unit includes a plurality of known characteristic information. The calibration devices are captured. A conversion relationship is created. A relationship of positions of the images is analyzed according to the conversion relationship. The images are merged. In addition, an image merging device is provided.
Claims
1. An image merging method, comprising the steps of: providing a plurality of calibration devices, arranged corresponding to different positions of an object to be inspected, wherein each of the calibration devices includes a plurality of known characteristic information; obtaining raw images of the calibration devices respectively, wherein each of the raw images corresponds to one of the calibration devices, and each of the raw images includes a plurality of image characteristic information that correspond to the known characteristic information of its corresponding calibration device; establishing a conversion formula according to the known characteristic information of each of the calibration devices and the image characteristic information of the corresponding image; performing an analysis for obtaining position relationships between the raw images according to the conversion formula, comprising: establishing a plurality of virtual point characteristic information, wherein the virtual point characteristic information for each raw image corresponds to a plurality of virtual images outside the raw image, and the virtual images are arranged to correspond to other raw images, wherein two virtual images can be inferred from each raw image; and comparing and obtaining position relationships between raw images that are disposed neighboring to one another according to the conversion formula by the use of the image characteristic information and the virtual point characteristic information; and merging the raw images.
2. The image merging method of claim 1, wherein the imaging of each of the calibration devices further comprises the step of: extracting the image characteristic information from each raw image.
3. The image merging method of claim 2, wherein the extracting of the image characteristic information further comprises the step of: identifying linear features or corner features in the image characteristic information.
4. The image merging method of claim 2, wherein the extracting of the image characteristic information further comprises the step of: establishing position relationships between the image characteristic information.
5. The image merging method of claim 4, wherein the establishing of the position relationships between the image characteristic information further comprises the step of: using a character to determine positions of the image characteristic information.
6. The image merging method of claim 5, wherein the using of the character to determine positions of the image characteristic information further comprises the step of: using markers or character symbols of a scale gage to determine positions of the image characteristic information.
7. The image merging method of claim 4, wherein the establishing of the position relationships between the image characteristic information further comprises the step of: performing a texture feature analysis.
8. The image merging method of claim 7, wherein the texture feature analysis is enabled to obtain the position relationships between the image characteristic information based upon the relationships of texture features found in the image characteristic information.
9. The image merging method of claim 8, wherein the texture feature analysis further comprises the step of: finding occurrence frequencies of a specific texture for the image characteristic information, wherein the texture features found in the image characteristic information comprise the specific texture.
10. The image merging method of claim 1, wherein the conversion formula is established by the use of an algorithm of collinearity condition equations.
11. The image merging method of claim 1, wherein after the merging of the raw images, the method further comprises the step of: inspecting the object to be inspected.
12. An image merging device, comprising: a plurality of calibration devices, arranged at different positions of an object to be inspected, wherein each of the calibration devices includes a plurality of known characteristic information; a plurality of image capturing devices, each being provided for capturing a raw image of its corresponding calibration device; and a processing unit, coupled to the image capturing devices for receiving the raw images captured by the image capturing devices; wherein each of the raw images includes a plurality of image characteristic information that correspond to the known characteristic information of its corresponding calibration device, and the processing unit is enabled to establish a conversion formula according to the known characteristic information of each of the calibration devices and the image characteristic information of the corresponding image; moreover, the processing unit is enabled to perform an analysis for obtaining position relationships between the image capturing devices and the object according to the conversion formula so as to stitch the raw images, the analysis comprising: establishing a plurality of virtual point characteristic information, wherein the virtual point characteristic information for each raw image corresponds to a plurality of virtual images outside the raw image, and the virtual images are arranged to correspond to other raw images, wherein two virtual images can be inferred from each raw image; and comparing and obtaining position relationships between raw images that are disposed neighboring to one another according to the conversion formula by the use of the image characteristic information and the virtual point characteristic information.
13. The image merging device of claim 12, wherein each of the calibration devices further comprises: a scale gage, disposed at the periphery of the known characteristic information.
14. The image merging device of claim 12, wherein the known characteristic information are alternately arranged into an array.
15. The image merging device of claim 12, wherein each of the image capturing devices is a device selected from the group consisting of: a charge-coupled device (CCD), a complementary metal-oxide semiconductor (CMOS), a pan-tilt-zoom camera, and a digital monitor.
16. The image merging device of claim 12, wherein the processing unit uses an image processing algorithm to extract the image characteristic information respectively from each of the raw images.
17. The image merging device of claim 12, wherein the conversion formula is established by the use of an algorithm of collinearity condition equations.
18. The image merging device of claim 12, wherein each of the calibration devices is substantially a textured plane with non-specific iteration structures.
19. The image merging device of claim 12, wherein each of the known characteristic information is a shape selected from the group consisting of: a rectangle, a square, a triangle, and a cross.
20. The image merging device of claim 12, wherein each of the known characteristic information is a feature selected from the group consisting of: a line and a character.
21. The image merging device of claim 13, wherein the scale gage is formed with markers or character symbols, and the markers or the character symbols are provided for a user to determine the coordinates of the known characteristic information and also the relative position relationships between the known characteristic information.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The present disclosure will become more fully understood from the detailed description given herein below and the accompanying drawings which are given by way of illustration only, and thus are not limitative of the present disclosure and wherein:
DETAILED DESCRIPTION
(12) In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawing.
(14) The image merging method S100 starts from step S110. At step S110, a calibration unit 110 is provided.
(15) Please refer to the accompanying drawings.
(16) The calibration unit 110 includes a plurality of calibration devices 112 that are disposed at different positions on an object to be inspected 50.
(17) In addition, each of the plural calibration devices 112 includes a plurality of known characteristic information 114, whereas the known characteristic information 114 in this embodiment refers to the distribution of patterns on the calibration devices 112. Thus, each calibration device 112 can be a textured plane with non-specific iteration structures, whereas the structure can be a pattern of various geometrical shapes.
(19) At step S120, raw images of the calibration devices 112 are obtained respectively by a plurality of image capturing units 120.
(20) It is noted that each of the image capturing units 120 can be a device selected from the group consisting of: a charge-coupled device (CCD), a complementary metal-oxide semiconductor (CMOS), a pan-tilt-zoom camera, and a digital monitor, but it is not limited thereby.
(22) By the aforesaid arrangement, a raw image 12 of the first calibration device 112a is captured by the first image capturing unit 122, a raw image 24 of the second calibration device 112b is captured by the second image capturing unit 124, and a raw image 36 of the third calibration device 112c is captured by the third image capturing unit 126.
(23) Please refer to the accompanying drawings.
(24) At step S122, the plural image characteristic information are extracted from each raw image.
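The extraction of step S122 is not spelled out algorithmically in the text. The following is a minimal sketch of identifying corner features (claim 3) in a binarized raw image, assuming axis-aligned patterns such as the rectangles or squares named in claim 19; all names are illustrative and not from the patent.

```python
# Sketch of corner-feature extraction for step S122 / claim 3. It uses the
# observation that, for axis-aligned binary patterns, a 2x2 window containing
# an odd number (1 or 3) of foreground pixels sits on a corner of the pattern.

def find_corners(img):
    """Return (row, col) of 2x2 windows that sit on a corner of the pattern."""
    rows, cols = len(img), len(img[0])
    # Pad with a one-pixel border of background so boundary corners are found.
    padded = [[0] * (cols + 2)] + [[0] + row + [0] for row in img] + [[0] * (cols + 2)]
    corners = []
    for r in range(rows + 1):
        for c in range(cols + 1):
            window_sum = (padded[r][c] + padded[r][c + 1]
                          + padded[r + 1][c] + padded[r + 1][c + 1])
            if window_sum in (1, 3):  # odd count marks a convex/concave corner
                corners.append((r, c))
    return corners

# Example: one filled rectangle, standing in for one known characteristic
# information pattern on a calibration device.
image = [[0] * 8 for _ in range(6)]
for r in range(1, 4):
    for c in range(2, 7):
        image[r][c] = 1
print(len(find_corners(image)))  # a rectangle yields 4 corner windows
```

Linear features (the other option in claim 3) could be found analogously by scanning for long runs of edge windows; the 2x2-window trick is only one simple choice.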
(26) After step S122, the position relationships between the plural image characteristic information can be established through the following steps.
(27) At step S124a, a character is used to determine positions of the plural image characteristic information 12a-12c, 24a-24c and 36a. Taking the scale gage 116 for example, the markers or character symbols formed thereon can be used to determine the positions of the image characteristic information.
(28) In addition to the step S124a, the step S124b can be performed optionally. At step S124b, a texture feature analysis is enabled for finding occurrence frequencies of specific textures in the plural image characteristic information so as to obtain the position relationships between the plural image characteristic information based upon the relationships of texture features found in the plural image characteristic information 12a-12c, 24a-24c and 36a.
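The texture feature analysis of step S124b can be sketched as counting how often a small texture pattern occurs in a binarized patch, which matches the "occurrence frequencies of a specific texture" wording above; the 2x2 window size and all names below are assumptions, not from the patent.

```python
# Sketch of the texture feature analysis of step S124b: tally the occurrence
# frequency of every 2x2 binary pattern in a patch, so that the frequency of
# a specific texture can be looked up and used to relate patch positions.
from collections import Counter

def texture_frequencies(img, size=2):
    """Histogram of size x size binary patterns appearing in the image."""
    rows, cols = len(img), len(img[0])
    counts = Counter()
    for r in range(rows - size + 1):
        for c in range(cols - size + 1):
            pattern = tuple(tuple(img[r + i][c + j] for j in range(size))
                            for i in range(size))
            counts[pattern] += 1
    return counts

# Occurrence frequency of one specific vertical-stripe texture in a patch:
stripes = [[1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 1, 0]]
specific = ((1, 0), (1, 0))
print(texture_frequencies(stripes)[specific])  # prints 4
```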
(30) After the known characteristic information 114 of the calibration devices 112 is imaged, the method proceeds to step S130.
(31) Taking the image merging device 100 for example, the processing unit 130 is enabled to establish a conversion formula according to the known characteristic information 114 of each of the calibration devices 112 and the image characteristic information of the corresponding raw image.
(32) Specifically, the conversion formula describes the corresponding relationship of an image in a space, or the corresponding relationship between images, i.e. between image capturing units. In this embodiment, the conversion formula is established by the use of an algorithm of collinearity condition equations, which is described as follows.
(34) As shown in the drawings, an object A is disposed in an object coordinate system 60 having an X-axis, a Y-axis and a Z-axis.
(35) The image capturing unit 120, such as a camera, is disposed in the image coordinate system 70, and is enabled to capture an image of the object A. By rotating the imaging of the object A, a rotation image plane G can be obtained, whereas the center L of the camera is located at (x.sub.a, y.sub.a) and the focal length f of the camera is the distance between the center L and the rotation image plane G in the image coordinate system 70, which has an x-axis, a y-axis and a z-axis. In this embodiment, the center L of the camera is not disposed at the origin point of the image coordinate system 70, but it is not limited thereby; that is, the center L of the camera can be disposed at the origin point of the image coordinate system 70.
(36) The relationship between the camera imaging position and the object A can be described by the following formula:
(37)

$$x_a - x_0 = -f\cdot\frac{m_{11}(X_A - X_L) + m_{12}(Y_A - Y_L) + m_{13}(Z_A - Z_L)}{m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}$$

$$y_a - y_0 = -f\cdot\frac{m_{21}(X_A - X_L) + m_{22}(Y_A - Y_L) + m_{23}(Z_A - Z_L)}{m_{31}(X_A - X_L) + m_{32}(Y_A - Y_L) + m_{33}(Z_A - Z_L)}\qquad(1)$$
(38) The formula (1) represents the position relationship between the plural known characteristic information 114 of one calibration device 112 being captured by the image capturing unit 120 and the plural image characteristic information of the corresponding image of that calibration device. Moreover, the base coordinate of the image is (x.sub.0, y.sub.0), which in this embodiment is (0, 0) for simplicity.
(39) In addition, the base coordinate for the object coordinate system 60 is (X.sub.L, Y.sub.L, Z.sub.L), which in this embodiment is (0, 0, 0) for simplicity. Moreover, R.sub.X represents the X-axis rotation matrix, R.sub.Y represents the Y-axis rotation matrix, and R.sub.Z represents the Z-axis rotation matrix, which can be represented by the following formulas:
(40)

$$R_X=\begin{bmatrix}1&0&0\\0&\cos\omega&\sin\omega\\0&-\sin\omega&\cos\omega\end{bmatrix}\qquad(2)$$

$$R_Y=\begin{bmatrix}\cos\varphi&0&-\sin\varphi\\0&1&0\\\sin\varphi&0&\cos\varphi\end{bmatrix}\qquad(3)$$

$$R_Z=\begin{bmatrix}\cos\kappa&\sin\kappa&0\\-\sin\kappa&\cos\kappa&0\\0&0&1\end{bmatrix}\qquad(4)$$
wherein ω, φ and κ represent respectively the rotation angles of the X-axis rotation matrix, the Y-axis rotation matrix and the Z-axis rotation matrix.
(41) After expanding the above formulas (2), (3) and (4), the formula (5) can be obtained as follows:
(42)

$$M=R_Z R_Y R_X=\begin{bmatrix}m_{11}&m_{12}&m_{13}\\m_{21}&m_{22}&m_{23}\\m_{31}&m_{32}&m_{33}\end{bmatrix}\qquad(5)$$
wherein, m.sub.11=cos φ cos κ; m.sub.12=sin ω sin φ cos κ+cos ω sin κ; m.sub.13=−cos ω sin φ cos κ+sin ω sin κ; m.sub.21=−cos φ sin κ; m.sub.22=−sin ω sin φ sin κ+cos ω cos κ; m.sub.23=cos ω sin φ sin κ+sin ω cos κ; m.sub.31=sin φ; m.sub.32=−sin ω cos φ; and m.sub.33=cos ω cos φ.
(43) Dividing the formula (5) by m.sub.33, a formula (6) is obtained as follows:
(44) [formula (6), obtained by dividing formula (5) by m.sub.33, is not reproduced in this text]
(45) As the focal length f is known in this embodiment, and the distance between the object A and the center L of the camera is set to be a constant, the value of Z.sub.A-Z.sub.L is a constant, so that the formula (6) can be converted into a formula (7) as follows:
(46) [formula (7), the conversion formula expressing u and v in terms of the image coordinates and the coefficients a.sub.1 to a.sub.5 and b.sub.1 to b.sub.3, is not reproduced in this text]
wherein the coefficients a.sub.1 to a.sub.5 and b.sub.1 to b.sub.3 are defined in terms of m.sub.11 to m.sub.33, the focal length f and the constant Z.sub.A-Z.sub.L [the formula group (47) defining them is not reproduced in this text].
(48) Formula (7) is the conversion formula needed in the present disclosure. As described in formula (7), u and v are respectively the X-axis coordinate and the Y-axis coordinate of the object A in the object coordinate system 60, so that they can be obtained in advance. Consequently, the eight variables a.sub.1, a.sub.2, a.sub.3, a.sub.4, a.sub.5, b.sub.1, b.sub.2 and b.sub.3 are the coefficients that are to be calculated. Since one formula (7) can be generated for each one of the plural known characteristic information 114, at least four known characteristic information 114 will be required to obtain the eight variables. In this embodiment, there are nine known characteristic information 114, and thus nine corresponding image characteristic information, which is sufficient to obtain a solution for the formula (7). In this embodiment, the solution is obtained using a least square method, but it is not limited thereby.
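The least-square solution of formula (7) described above can be sketched as follows. Since the exact grouping of the patent's coefficients a.sub.1-a.sub.5 and b.sub.1-b.sub.3 is not reproduced in this text, the sketch assumes the conventional eight-parameter projective form u = (a1·x + a2·y + a3)/(b1·x + b2·y + 1), v = (a4·x + a5·y + a6)/(b1·x + b2·y + 1); all function names are illustrative.

```python
# Sketch of fitting the eight-coefficient conversion formula from image/object
# point correspondences. Each correspondence yields two equations linear in
# the unknowns, so at least four points are required, as the text notes.

def solve_linear(A, b):
    """Solve A t = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    t = [0.0] * n
    for i in range(n - 1, -1, -1):
        t[i] = (M[i][n] - sum(M[i][j] * t[j] for j in range(i + 1, n))) / M[i][i]
    return t

def fit_conversion(image_pts, object_pts):
    """Least-squares fit of the eight projective coefficients."""
    A, b = [], []
    for (x, y), (u, v) in zip(image_pts, object_pts):
        # u*(b1*x + b2*y + 1) = a1*x + a2*y + a3, linear in the unknowns:
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    # Normal equations (A^T A) t = A^T b
    AtA = [[sum(row[i] * row[j] for row in A) for j in range(8)] for i in range(8)]
    Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(8)]
    return solve_linear(AtA, Atb)

def apply_conversion(t, x, y):
    a1, a2, a3, a4, a5, a6, b1, b2 = t
    d = b1 * x + b2 * y + 1
    return (a1 * x + a2 * y + a3) / d, (a4 * x + a5 * y + a6) / d

# Nine characteristic points (a 3x3 grid, as in the embodiment) suffice:
pts_img = [(x, y) for x in (0, 10, 20) for y in (0, 10, 20)]
t_true = [1.1, 0.1, 5.0, -0.2, 0.9, 3.0, 0.001, 0.002]
pts_obj = [apply_conversion(t_true, x, y) for x, y in pts_img]
t_fit = fit_conversion(pts_img, pts_obj)
print(max(abs(a - b) for a, b in zip(t_fit, t_true)) < 1e-4)  # True
```

The 3x3 grid mirrors the nine known characteristic information of the embodiment; any four or more points in general position would determine the eight coefficients.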
(49) At step S140, an analysis is performed for obtaining the position relationships between the plural raw images 12, 24, 36 according to the conversion formula. Taking the image merging device 100 for example, the processing unit 130 is used for analyzing the position relationship between the plural image capturing units 120 according to the conversion formula. That is, according to the conversion formula in this embodiment, the position relationship between the plural image capturing units 120 can be established.
(50) Please refer to the accompanying drawings.
(51) At step S142, a plurality of virtual point characteristic information is established outside the raw images. According to this embodiment, there can be two virtual images 14, 16 inferred from the raw image 12 that is captured by the first image capturing unit 122, whereas the two virtual images 14, 16 are arranged corresponding to the raw images 24, 36 of the second image capturing unit 124 and the third image capturing unit 126 respectively.
(52) Similarly, there can be two virtual images 22, 26 inferred from the raw image 24 that is captured by the second image capturing unit 124, whereas the two virtual images 22, 26 are arranged corresponding to the raw images 12, 36 of the first image capturing unit 122 and the third image capturing unit 126 respectively. Further, there can be two virtual images 32, 34 inferred from the raw image 36 that is captured by the third image capturing unit 126, whereas the two virtual images 32, 34 are arranged corresponding to the raw images 12, 24 of the first image capturing unit 122 and the second image capturing unit 124 respectively.
(53) Thus, it is not necessary to enable the image capturing units 120 to capture the complete images of the calibration unit 110; it is sufficient to capture, for instance, only the image 12 of the first calibration device 112a in a specific region, and then, as the virtual images 14, 16 relating to the neighboring region of the specific region of the first calibration device 112a can be inferred and obtained, the complete images of the calibration unit 110 can be simulated and thus obtained. In another condition, where the size of the calibration unit 110 is smaller than the object 50 and cannot cover the object 50 completely, the virtual images can likewise be inferred and obtained.
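The inference of virtual point characteristic information in step S142 can be sketched as inverting the conversion formula: an object-side characteristic point known to lie beside the captured region is mapped back into the extended image plane. The eight-coefficient projective form below is an assumption (the patent's exact grouping of a.sub.1-a.sub.5 and b.sub.1-b.sub.3 is not reproduced in this text), and all names are illustrative.

```python
# Sketch of step S142: project a known object-side point that lies outside
# the captured region back into the (extended) image plane, giving a virtual
# point characteristic information for that raw image.

def image_to_object(t, x, y):
    """Assumed eight-parameter projective conversion, image -> object."""
    a1, a2, a3, a4, a5, a6, b1, b2 = t
    d = b1 * x + b2 * y + 1
    return (a1 * x + a2 * y + a3) / d, (a4 * x + a5 * y + a6) / d

def object_to_image(t, u, v):
    """Invert the conversion by solving a 2x2 linear system for (x, y)."""
    a1, a2, a3, a4, a5, a6, b1, b2 = t
    # u*(b1*x + b2*y + 1) = a1*x + a2*y + a3
    #   => (a1 - u*b1)*x + (a2 - u*b2)*y = u - a3, and likewise for v.
    p, q, r = a1 - u * b1, a2 - u * b2, u - a3
    s, w, z = a4 - v * b1, a5 - v * b2, v - a6
    det = p * w - q * s
    return (r * w - q * z) / det, (p * z - r * s) / det

# A virtual point: an object point beyond a (hypothetical) 100-pixel frame
# round-trips through the conversion and back to the same image position.
t = [1.0, 0.0, 2.0, 0.0, 1.0, 1.0, 0.001, 0.0]
u_virtual, v_virtual = image_to_object(t, 150.0, 40.0)
x, y = object_to_image(t, u_virtual, v_virtual)
print(round(x), round(y))  # prints 150 40
```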
(54) At step S144, a comparison is performed according to the conversion formula by the use of the plural image characteristic information and the plural virtual point characteristic information so as to obtain the position relationships between raw images that are disposed neighboring to one another.
(56) Thus, the relative positioning between the first image capturing unit 122, the second image capturing unit 124 and the third image capturing unit 126 can be acquired accordingly. In an embodiment when the image characteristic information 12a of the raw image 12 is used as a datum point, not only the positions of the other image characteristic information 12b and 12c in the raw image 12 can be inferred and acquired, but also the relative position relationship between the image characteristic information 24a-24c in the raw image 24 and the image characteristic information 12a-12c can be inferred. Similarly, the relative position relationship between the image characteristic information 36a in the raw image 36 and the image characteristic information 12a-12c can also be inferred. Thus, the position relationship between the raw images 12, 24 and 36 can be acquired, that is, the conversion relationship between the plural image capturing units 120 is established.
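The comparison of step S144 above can be sketched as follows: the virtual point characteristic information inferred from one raw image is compared with the image characteristic information actually measured in the neighboring raw image, yielding the relative position between the two images. For illustration a pure translation is assumed; all names and values are hypothetical.

```python
# Sketch of step S144: estimate the relative position of two neighboring raw
# images as the mean displacement between predicted (virtual) points and the
# points measured in the neighboring image's own coordinate system.

def relative_offset(virtual_pts, measured_pts):
    """Mean displacement taking image-12 coordinates to image-24 coordinates."""
    n = len(virtual_pts)
    dx = sum(m[0] - v[0] for v, m in zip(virtual_pts, measured_pts)) / n
    dy = sum(m[1] - v[1] for v, m in zip(virtual_pts, measured_pts)) / n
    return dx, dy

# Characteristic points of the neighboring image 24, as predicted from image
# 12 (virtual) and as actually measured in image 24:
virtual_in_12 = [(110.0, 10.0), (110.0, 50.0), (150.0, 30.0)]
measured_in_24 = [(10.0, 10.0), (10.0, 50.0), (50.0, 30.0)]
print(relative_offset(virtual_in_12, measured_in_24))  # (-100.0, 0.0)
```

Averaging over several characteristic points suppresses per-point measurement noise; with a full projective relation, a second coefficient fit would replace the simple mean.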
(57) Thereafter, the raw images 12, 24 and 36 are merged according to the acquired position relationships so as to form a composite image.
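The final merging can be sketched as pasting each raw image onto one composite canvas at the offset obtained in the preceding analysis. Images are plain 2D lists of pixel values here, and all names are illustrative.

```python
# Sketch of the merging step: place each raw image on a composite canvas at
# its (row, col) offset; non-overlapping images simply tile the canvas.

def merge_images(images_with_offsets, height, width, background=0):
    canvas = [[background] * width for _ in range(height)]
    for img, (r0, c0) in images_with_offsets:
        for r, row in enumerate(img):
            for c, value in enumerate(row):
                canvas[r0 + r][c0 + c] = value
    return canvas

# Two non-overlapping 2x3 raw images stitched side by side:
raw_12 = [[1, 1, 1], [1, 1, 1]]
raw_24 = [[2, 2, 2], [2, 2, 2]]
composite = merge_images([(raw_12, (0, 0)), (raw_24, (0, 3))], 2, 6)
print(composite[0])  # [1, 1, 1, 2, 2, 2]
```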
(59) To sum up, by the use of the aforesaid image merging method and device, the known characteristic information of each calibration device in the calibration unit is imaged for establishing a conversion formula between the calibration devices and the raw images, thereby further establishing the position relationships between the various image capturing units. Thereafter, based upon the position relationships, the plural raw images are stitched together into a composite image. Consequently, the plural non-overlapping raw images are stitched together into a composite image with high precision.
(60) In the present disclosure, it is not necessary to enable the image capturing units 120 to capture the complete images of the calibration unit in advance; it is sufficient to capture only the images of the calibration devices in a specific region, and then the virtual images relating to the neighboring region of the specific region can be inferred and obtained. Therefore, not only are the plural non-overlapping raw images stitched together into a composite image with high precision while achieving a larger field of view relating to the object to be inspected, but the installation cost of the image capturing units can also be reduced.
(61) In addition, there is no need to adjust the relative positioning of the calibration unit 110 and the image capturing units in the present disclosure, so that no parameter adjustment process is required for adjusting the imaging position and horizontal skew of the image capturing units, and no calibration process is required for calibrating the different image capturing units. Consequently, the pre-operation time for preparing the image capturing units can be reduced.
(62) Since there is no need to deliberately adjust and fix the positions of the aforesaid image capturing units, the image capturing units can be arranged into arrays of any shape at will, such as a ring-shape array, an L-shape array or a cross-shape array, according to actual requirements with respect to resolution and field of view. Consequently, image capturing units of various pixel counts, resolutions, mounting angles and sizes can be adapted for the present disclosure. Thus, the flexibility and convenience of the present disclosure are enhanced.
(63) Operationally, since the source or stuffing is placed on a calibration unit to be imaged, and since the calibration unit being imaged is formed with known characteristic information of obvious position relationship, it is easy to extract image characteristic information without being adversely affected by the source or stuffing on the calibration unit. Thereby, the imaging precision is enhanced for achieving an image merging with high precision, and consequently the capability of the resulting visual inspection system is increased.
(64) Moreover, in an embodiment, after applying the image merging method on the imaging of a scale gage by the use of the image merging device, a composite image can be obtained with high precision. Thus, the image merging method and device of the present disclosure can achieve high-precision image merging.
(65) With respect to the above description then, it is to be realized that the optimum dimensional relationships for the parts of the disclosure, to include variations in size, materials, shape, form, function and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by the present disclosure.