Image processing system and method thereof for generating projection images based on inward or outward multiple-lens camera
11595574 · 2023-02-28
CPC classification: H04N5/2624 · H04N23/45 · G06T7/80 · H04N23/667 · H04N23/90 · H04N23/951
Abstract
An image processing system is disclosed, comprising an M-lens camera, a compensation device and a correspondence generator. The M-lens camera generates M lens images. The compensation device generates a projection image according to a first vertex list and the M lens images. The correspondence generator is configured to conduct calibration for vertices to define vertex mappings; horizontally and vertically scan each lens image to determine texture coordinates of its image center; determine texture coordinates of control points according to the vertex mappings and P1 control points in each overlap region in the projection image; and determine two adjacent control points and a coefficient blending weight for each vertex in each lens image, according to the texture coordinates of the control points and the image center in each lens image, to generate the first vertex list, where M>=2.
Claims
1. An image processing system, comprising: an M-lens camera that captures an X-degree horizontal field of view (FOV) and a Y-degree vertical FOV to generate M lens images; one or more first processors configured to perform a set of actions comprising: generating a projection image according to a first vertex list and the M lens images; and a second processor configured to perform a set of operations comprising: conducting calibration for multiple vertices to define first vertex mappings between the M lens images and the projection image; horizontally and vertically scanning each lens image to determine texture coordinates of an image center of each lens image; determining texture coordinates of all control points according to the first vertex mappings and P1 control points in each overlap region in the projection image; and determining two adjacent control points and a coefficient blending weight for each vertex in each lens image according to the texture coordinates of all the control points and the image center in each lens image to generate the first vertex list, wherein X<=360, Y<180, M>=2 and P1>=3.
2. The system according to claim 1, wherein the first vertex list comprises the vertices with first data structures, and the first data structure of each vertex comprises the first vertex mapping between the M lens images and the projection image, two indices of the two adjacent control points and the coefficient blending weight in each lens image.
3. The system according to claim 1, wherein the projection image is a collection of quadrilaterals defined by the vertices and the operation of determining the texture coordinates of the control points comprises: selecting one column of quadrilaterals from each overlap region in the projection image as a predetermined column; when the projection image is a wide-angle image, defining the predetermined column, the leftmost and the rightmost columns of quadrilaterals in the projection image as control columns; when the projection image is a panoramic image, defining each predetermined column as a control column; placing a top control point and a bottom control point at the centers of the top and the bottom quadrilaterals for each control column to determine x coordinates of the P1 control points for each control column in the projection image; dividing a distance between the top control point and the bottom control point by (P1-2) to obtain y coordinates of (P1-2) control points for each control column in the projection image; and performing interpolation to obtain texture coordinates of each control point according to the x and the y coordinates of each control point and the first vertex mappings.
4. The system according to claim 3, wherein the operation of determining the texture coordinates of the control points further comprises: for a quadrilateral i in each predetermined column, searching for a control point j closest to the quadrilateral i; and categorizing the quadrilateral i as a measuring region j; where 1<=i<=n1, 1<=j<=P1 and n1 denotes a number of quadrilaterals in each predetermined column.
5. The system according to claim 1, wherein the projection image is derived from a predefined projection of the M lens images, and is one of a wide-angle image and a panoramic image.
6. The system according to claim 5, wherein the predefined projection is one of equirectangular projection, cylindrical projection, Miller projection, Mercator projection, Lambert cylindrical equal area projection and Pannini projection.
7. The system according to claim 1, wherein the operation of horizontally and vertically scanning comprises: for a target lens image, respectively scanning each row of pixels from left to right and from right to left to obtain a left boundary point and a right boundary point of each row according to a luma threshold value; respectively scanning each column of pixels from top to bottom and from bottom to top to obtain a top boundary point and a bottom boundary point of each column according to the luma threshold value; calculating a row center for each row according to the left boundary point and the right boundary point of each row; calculating a column center for each column according to the top boundary point and the bottom boundary point of each column; averaging the row centers of all rows to obtain x coordinate of the image center in the target lens image; and averaging the column centers of all columns to obtain y coordinate of the image center in the target lens image.
8. The system according to claim 1, wherein the coefficient blending weight for a target vertex in a specified lens image is associated with a first angle and a second angle, wherein the first angle is formed between a first vector from the image center of the specified lens image to a first control point of the two adjacent control points and a second vector from the image center of the specified lens image to the target vertex, and wherein the second angle is formed between a third vector from the image center of the specified lens image to a second control point of the two adjacent control points and the second vector.
9. The system according to claim 1, wherein the action of generating the projection image comprises: an action of determining a set of optimal warping coefficients for the control points according to a 2D error table comprising multiple test warping coefficients and multiple accumulation pixel value differences in measuring regions located in each overlap region and corresponding to the control points in measure mode; an action of modifying texture coordinates in each lens image for all vertices from the first vertex list based on either the multiple test warping coefficients in measure mode or the set of optimal warping coefficients in rendering mode to generate a second vertex list; an action of forming the projection image according to the M lens images and the second vertex list in rendering mode; and an action of measuring the multiple accumulation pixel value differences in the measuring regions in measure mode; wherein values of the multiple test warping coefficients are associated with an offset that a lens center of the M lenses is separated from a camera system center of the M-lens camera; and wherein the second vertex list comprises the vertices with second data structures that define second vertex mappings between the M lens images and the projection image.
10. The system according to claim 9, wherein in rendering mode, the action of modifying texture coordinates comprises: for a target vertex from the first vertex list, (1) for each lens image, retrieving two selected coefficients from the optimal warping coefficients according to two indices of the two adjacent control points in the first data structure of the target vertex; (2) for each lens image, calculating an interpolated warping coefficient according to the two selected coefficients and the coefficient blending weight in the first data structure of the target vertex; (3) for each lens image, calculating modified texture coordinates according to the interpolated warping coefficient and the texture coordinates of the target vertex in the first data structure of the target vertex; and (4) repeating actions (1) to (3) until the modified texture coordinates of all vertices are calculated to generate the second vertex list.
11. The system according to claim 1, wherein the M-lens camera is an inward M-lens camera and an intersection of optical axes of the inward M lenses is formed above the inward M lenses.
12. The system according to claim 1, wherein the M-lens camera is an outward M-lens camera and an intersection of optical axes of the outward M lenses is formed below the outward M lenses.
13. An image processing method, comprising steps of: conducting calibration for vertices to define first vertex mappings between a projection image and M lens images generated by an M-lens camera that captures an X-degree horizontal field of view (FOV) and a Y-degree vertical FOV; horizontally and vertically scanning each lens image to determine texture coordinates of an image center of each lens image; determining texture coordinates of control points according to the first vertex mappings and P1 control points in each overlap region in the projection image; determining two adjacent control points and a coefficient blending weight for each vertex in each lens image according to the texture coordinates of the control points and the image center in each lens image to generate a first vertex list; and generating the projection image according to the first vertex list and the M lens images; wherein X<=360, Y<180, M>=2 and P1>=3.
14. The method according to claim 13, wherein the first vertex list comprises the vertices with their first data structures, and the first data structure of each vertex comprises a first vertex mapping between the M lens images and the projection image, two indices of the two adjacent control points and the coefficient blending weight for a corresponding vertex in each lens image.
15. The method according to claim 13, wherein the projection image is a collection of quadrilaterals defined by the vertices and the step of determining the texture coordinates of the control points comprises: selecting one column of quadrilaterals from each overlap region in the projection image as a predetermined column; when the projection image is a wide-angle image, defining the predetermined column, the leftmost and the rightmost columns of quadrilaterals in the projection image as control columns; when the projection image is a panoramic image, defining each predetermined column as a control column; placing a top control point and a bottom control point at the centers of the top and the bottom quadrilaterals for each control column to determine x coordinates of the P1 control points for each control column in the projection image; dividing a distance between the top control point and the bottom control point by (P1-2) to obtain y coordinates of (P1-2) control points for each control column in the projection image; and performing interpolation to obtain texture coordinates of each control point according to the x and the y coordinates of each control point in the projection image and the first vertex mappings.
16. The method according to claim 15, wherein the step of determining the texture coordinates of the control points further comprises: for a quadrilateral i in each predetermined column, searching for the closest control point j; and categorizing the quadrilateral i as a measuring region j; where 1<=i<=n1, 1<=j<=P1 and n1 denotes a number of quadrilaterals in each predetermined column.
17. The method according to claim 13, wherein the projection image is derived from a predefined projection of the M lens images, and is one of a wide-angle image and a panoramic image.
18. The method according to claim 17, wherein the predefined projection is one of equirectangular projection, cylindrical projection, Miller projection, Mercator projection, Lambert cylindrical equal area projection and Pannini projection.
19. The method according to claim 13, wherein the step of horizontally and vertically scanning comprises: for a target lens image, respectively scanning each row of pixels from left to right and from right to left to obtain a left boundary point and a right boundary point of each row according to a luma threshold value; respectively scanning each column of pixels from top to bottom and from bottom to top to obtain a top boundary point and a bottom boundary point of each column according to the luma threshold value; calculating a row center for each row according to the left boundary point and the right boundary point of each row; calculating a column center for each column according to the top boundary point and the bottom boundary point of each column; averaging the row centers of all rows to obtain x coordinate of the image center of the target lens image; and averaging the column centers of all columns to obtain y coordinate of the image center of the target lens image.
20. The method according to claim 13, wherein the coefficient blending weight for a target vertex in a specified lens image is associated with a first angle and a second angle, wherein the first angle is formed between a first vector from the image center of the specified lens image to a first control point of the two adjacent control points and a second vector from the image center of the specified lens image to the target vertex, and wherein the second angle is formed between a third vector from the image center of the specified lens image to a second control point of the two adjacent control points and the second vector.
21. The method according to claim 13, wherein the step of generating the projection image comprises: determining a set of optimal warping coefficients for the control points according to a 2D error table comprising multiple test warping coefficients and multiple accumulation pixel value differences in measuring regions located in each overlap region and corresponding to the control points; modifying all the texture coordinates in each lens image for all vertices from the first vertex list based on the set of optimal warping coefficients to generate a second vertex list; and forming the projection image according to the M lens images and the second vertex list; wherein values of the multiple test warping coefficients are associated with an offset that a lens center of the M lenses is separated from a camera system center of the M-lens camera; and wherein the second vertex list comprises the vertices with second data structures that define second vertex mappings between the M lens images and the projection image.
22. The method according to claim 21, wherein the step of modifying all the texture coordinates comprises: for a target vertex from the first vertex list, (1) for each lens image, retrieving two selected coefficients from the set of the optimal warping coefficients according to two indices of the two adjacent control points in the first data structure of the target vertex; (2) for each lens image, calculating an interpolated warping coefficient according to the two selected coefficients and the coefficient blending weight in the first data structure of the target vertex; (3) for each lens image, calculating modified texture coordinates according to the interpolated warping coefficient and texture coordinates of the target vertex in the first data structure of the target vertex; and (4) repeating steps (1) to (3) until the modified texture coordinates of all vertices are calculated to generate the second vertex list.
23. The method according to claim 21, wherein the step of determining the set of optimal warping coefficients comprises: (a) setting the values of the test warping coefficients to one of predefined values in a predefined value range according to the offset that the lens center of the M lenses is separated from the camera system center of the M-lens camera; (b) modifying all the texture coordinates in each lens image for all vertices from the first vertex list based on the values of the test warping coefficients; (c) calculating the accumulation pixel value differences in the measuring regions; (d) repeating the steps (a) to (c) until all the predefined values in the predefined value range are processed to form the 2D error table; and (e) determining the optimal warping coefficient of each target control point on a group-by-group basis according to local minimums among the accumulation pixel value differences of each control point in each decision group comprising the target control point and one or two neighboring control points.
24. The method according to claim 13, wherein the M-lens camera is an inward M-lens camera and an intersection of optical axes of the inward M lenses is formed above the inward M lenses.
25. The method according to claim 13, wherein the M-lens camera is an outward M-lens camera and an intersection of optical axes of the outward M lenses is formed below the outward M lenses.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The present invention will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only, and thus are not limitative of the present invention, and wherein:
DETAILED DESCRIPTION OF THE INVENTION
(30) As used herein and in the claims, the term “and/or” includes any and all combinations of one or more of the associated listed items. The use of the terms “a” and “an” and “the” and similar referents in the context of describing the invention are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Throughout the specification, the same components with the same function are designated with the same reference numerals.
(31) A feature of the invention is to use an inward multiple-lens camera to generate multiple lens images and then produce a wide-angle image by a stitching and blending process.
(33) The advantage of the inward multiple-lens cameras is that the lenses are fully wrapped in the housing 32/33 of electronic products, so the electronic products are convenient to carry around and the lenses are free of abrasion from use.
(35) According to the invention, each lens of the inward/outward multiple-lens camera simultaneously captures a view with an x1-degree horizontal field of view (HFOV) and a y1-degree vertical FOV (VFOV) to generate a lens image, and then the multiple lens images from the inward/outward multiple-lens camera form a projection image with an x2-degree HFOV and a y2-degree VFOV, where 0<x1<x2<=360 and 0<y1<y2<180. For example, each lens of the inward three-lens camera in
(37) A wide variety of projections are suitable for use in the projection image processing system 500 of the invention. The term "projection" refers to flattening a globe's surface into a 2D plane, e.g., a projection image. The projection includes, without limitation, equirectangular projection, cylindrical projection and modified cylindrical projection. The modified cylindrical projection includes, without limitation, Miller projection, Mercator projection, Lambert cylindrical equal area projection and Pannini projection. Thus, the projection image includes, without limitation, an equirectangular projection image, a cylindrical projection image and a modified cylindrical projection image.
(38) The image capture module 51 is a multiple-lens camera, such as an outward multiple-lens camera, an inward two-lens camera or an inward three-lens camera, which is capable of simultaneously capturing a view with an X-degree HFOV and a Y-degree VFOV to generate a plurality of lens images, where X<=360 and Y<180. For purposes of clarity and ease of description, the following examples and embodiments are described with reference to equirectangular projection and with the assumption that the image capture module 51 is an inward three-lens camera and the projection image is an equirectangular wide-angle image. The operations of the projection image processing system 500 and the methods described in connection with
(39) Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The term “texture coordinates” refers to a 2D Cartesian coordinate system in a texture space (such as a lens/texture image). The term “rasterization” refers to a process of computing the mapping from scene geometry (or from the projection image) to texture coordinates of each lens image.
(40) The processing pipeline for the projection image processing system 500 is divided into an offline phase and an online phase. In the offline phase, the three inward lenses of the image capture module 51 are calibrated separately. The correspondence generator 53 adopts appropriate image registration techniques to generate an original vertex list, and each vertex in the original vertex list provides the vertex mapping between the equirectangular projection image and lens images (or between the equirectangular coordinates and the texture coordinates). For example, the sphere 62 in
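The vertex mapping produced in the offline phase rests on projecting sphere points onto the equirectangular plane. The following minimal Python sketch illustrates that mapping; the axis conventions and the function name are illustrative assumptions, not taken from the disclosure:

```python
import math

def equirectangular_coords(px, py, pz, width, height):
    """Map a 3D point on (or near) the sphere to equirectangular (x, y) coordinates.

    Assumed conventions: longitude spans [-180, 180] degrees across the image
    width, latitude spans [90, -90] degrees down the image height.
    """
    lon = math.degrees(math.atan2(px, pz))  # longitude from the x/z plane
    lat = math.degrees(math.asin(py / math.sqrt(px * px + py * py + pz * pz)))
    x = (lon + 180.0) / 360.0 * width       # left edge = -180 deg longitude
    y = (90.0 - lat) / 180.0 * height       # top edge = +90 deg latitude
    return x, y
```

A point straight ahead on the optical axis (0, 0, 1), for instance, lands at the center of the equirectangular image under these conventions.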
(42) According to the geometry of the equirectangular projection image and lens images, the correspondence generator 53 in offline phase computes equirectangular coordinates and texture coordinates for each vertex in the polygon mesh in
(43) TABLE 1

Attributes        Descriptions
(x, y)            Equirectangular coordinates
N                 Number of covering/overlapping lens images
ID1               ID of first lens image
(u1, v1)          Texture coordinates in first lens image
(idx10, idx11)    Warping coefficient indices in first lens image
Alpha1            Blending weight for warping coefficients in first lens image
w1                Blending weight for stitching in first lens image
...               ...
IDN               ID of Nth lens image
(uN, vN)          Texture coordinates in Nth lens image
(idxN0, idxN1)    Warping coefficient indices in Nth lens image
AlphaN            Blending weight for warping coefficients in Nth lens image
wN                Blending weight for stitching in Nth lens image
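The per-vertex record of Table 1 might be modeled as follows; this is an illustrative Python sketch, and the class and field names are assumptions rather than terminology from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LensMapping:
    """Per-lens-image portion of a vertex record (one entry per covering lens image)."""
    lens_id: int                   # ID of the lens image
    uv: Tuple[float, float]        # texture coordinates in this lens image
    warp_indices: Tuple[int, int]  # indices of the two adjacent control points
    alpha: float                   # blending weight for warping coefficients
    stitch_weight: float           # blending weight for stitching

@dataclass
class Vertex:
    """One entry of the original vertex list (cf. Table 1)."""
    xy: Tuple[float, float]                                     # equirectangular coordinates
    mappings: List[LensMapping] = field(default_factory=list)   # N covering lens images
```

A vertex covered by N lens images simply carries N `LensMapping` entries, matching the repeated ID1..IDN groups of Table 1.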
(44) In an ideal case, the lens centers 76 of the three lenses in the image capture module 51 are simultaneously located at the camera system center 73 of the framework 110 (not shown), so a single ideal imaging point 70 derived from a far object 75 is located on an image plane 62 with a 2-meter radius (r=2). Take lens-B and lens-C for example. Since the ideal imaging position 70 in the lens-B image matches the ideal imaging position 70 in the lens-C image, a perfect stitching/blending result is produced in the equirectangular wide-angle image after an image stitching/blending process is completed. However, in real cases, the lens centers 76 for lens-B and lens-C are separated from the system center 73 by an offset ofs as shown in left portion of
(45) Due to the optical characteristics of lenses, such as lens shading and luma shading, the image center 74 of a lens image with a size of Wi×Hi is not necessarily located at the middle (Wi/2, Hi/2) of the lens image, where Wi and Hi respectively denote the width and height of the lens image. In the offline phase, the correspondence generator 53 takes the following five steps to determine the texture coordinates of the real image center 74 of each lens image: (i) Determine a luma threshold TH for defining the edge boundary of the lens image. (ii) Scan each row of pixels from left to right and from right to left to determine a left boundary point and a right boundary point of each row. (iii) Scan each column of pixels from top to bottom and from bottom to top to determine a top boundary point and a bottom boundary point of each column. (iv) Calculate a row center for each row according to its left and right boundary points, and a column center for each column according to its top and bottom boundary points. (v) Average the row centers of all rows to obtain the x coordinate, and the column centers of all columns to obtain the y coordinate, of the image center 74.
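The scanning procedure above can be sketched as follows. This is a minimal Python sketch; the list-of-lists luma-image representation and the function name are assumptions for illustration:

```python
def image_center(img, th):
    """Estimate the real image center of a lens image by boundary scanning.

    img: 2D list of luma values; th: luma threshold defining the edge boundary.
    """
    h, w = len(img), len(img[0])
    row_centers, col_centers = [], []
    for y in range(h):
        row = img[y]
        left = next((x for x in range(w) if row[x] >= th), None)                # left-to-right
        right = next((x for x in range(w - 1, -1, -1) if row[x] >= th), None)   # right-to-left
        if left is not None:
            row_centers.append((left + right) / 2.0)    # row center
    for x in range(w):
        top = next((y for y in range(h) if img[y][x] >= th), None)              # top-to-bottom
        bottom = next((y for y in range(h - 1, -1, -1) if img[y][x] >= th), None)  # bottom-to-top
        if top is not None:
            col_centers.append((top + bottom) / 2.0)    # column center
    cx = sum(row_centers) / len(row_centers)    # average of row centers -> x coordinate
    cy = sum(col_centers) / len(col_centers)    # average of column centers -> y coordinate
    return cx, cy
```

For a bright region centered in a dark frame, the averaged row and column centers recover the center of the illuminated area rather than the geometric middle of the frame.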
(47) According to the invention, each overlap region in the projection image contains P1 control points in a column, where P1>=3. The sizes of the overlap regions vary according to the FOVs of the lenses, the resolutions of the lens sensors and the lens angles arranged in the image capture module 51. Normally, the width of each overlap region is greater than or equal to a multiple of the width of one column of quadrilaterals in the projection image. If there are multiple columns of quadrilaterals located inside each overlap region, only one column of quadrilaterals (hereinafter called "predetermined column") inside each overlap region is pre-determined for accommodating the P1 control points. In the example of
(48) In the offline phase, the correspondence generator 53 takes the following four steps (a)~(d) to determine the texture coordinates of control points in the equirectangular wide-angle image: (a) Define the leftmost and the rightmost columns of quadrilaterals and the predetermined column located in each overlap region in the equirectangular wide-angle image as control columns. (b) For each control column, respectively place the top and the bottom control points at the centers of the top and the bottom quadrilaterals. In this step (b), the x coordinates of the control points in one column in the equirectangular domain are determined according to the x coordinates (e.g., x1 and x2) of the right boundary and the left boundary of the quadrilaterals in each control column, e.g., x=(x1+x2)/2. (c) Divide the distance between the top and the bottom control points by (P1-2) to obtain the y coordinates of (P1-2) control points in the equirectangular domain for each control column. In other words, the (P1-2) control points are evenly distributed between the top and the bottom control points for each control column. (d) Perform interpolation to obtain the texture coordinates of each control point according to the x and y coordinates of the control point and the vertex mappings in the original vertex list.
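Steps (a)~(c) can be sketched as follows. This Python sketch is illustrative only; it interprets the even distribution as uniform spacing over (P1-1) intervals so that all P1 points, including the fixed top and bottom points, are evenly spaced, which is one reading of the (P1-2) division described above:

```python
def control_point_positions(x_left, x_right, y_top, y_bottom, p1):
    """Place P1 control points along one control column of the projection image.

    x_left, x_right: x coordinates of the column's quadrilateral boundaries;
    y_top, y_bottom: y centers of the top and bottom quadrilaterals; p1 >= 3.
    """
    x = (x_left + x_right) / 2.0            # shared x coordinate, x = (x1 + x2) / 2
    step = (y_bottom - y_top) / (p1 - 1)    # uniform spacing (assumed interpretation)
    return [(x, y_top + k * step) for k in range(p1)]
```

With p1 = 5 and a column spanning y = 0 to y = 8, the points sit at y = 0, 2, 4, 6, 8, with the three interior points evenly distributed between the top and bottom control points.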
(51) In measure mode, the vertex processing device 510 modifies the texture coordinates in each lens image for each vertex from the original vertex list, so that the image processing apparatus 520 can measure region errors E(1)~E(10) of measuring regions M(1)~M(10). The modification is made according to either two "test" warping coefficients, or one "test" warping coefficient together with one constant warping coefficient, of the two immediately-adjacent control points of a target vertex, and according to the corresponding blending weight (for warping coefficients) in each lens image for the target vertex (Steps S905 & S906 in
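The per-vertex modification can be sketched as follows. The interpolation of the two adjacent control points' coefficients by the blending weight follows the description above; the radial scaling about the image center is an assumption for illustration, since the text states only that the modified coordinates follow from the interpolated warping coefficient and the original texture coordinates:

```python
def modify_uv(u, v, center, coeffs, idx0, idx1, alpha):
    """Warp one vertex's texture coordinates using its two adjacent control points.

    coeffs: warping coefficients (test or optimal) indexed by control point;
    idx0, idx1, alpha: indices and blending weight stored in the vertex record.
    """
    # Interpolated warping coefficient from the two adjacent control points.
    c = alpha * coeffs[idx0] + (1.0 - alpha) * coeffs[idx1]
    uc, vc = center
    # Assumed form: scale the vertex's texture coordinates about the image center.
    return uc + c * (u - uc), vc + c * (v - vc)
```

A coefficient of 1.0 leaves the coordinates unchanged; coefficients below 1.0 pull imaging positions toward the image center, which is the direction of the mismatch correction described later.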
(52) A feature of the invention is to determine optimal warping coefficients for the ten control points R(1)˜R(10) within a predefined number of loops (e.g., max in
(54) Step S902: Respectively set the Q1 number of iterations and test warping coefficients to new values. In one embodiment, set the Q1 number of iterations to 1 in a first round and increment Q1 by 1 in each of the following rounds; since ofs=3 cm, set all the ten test warping coefficients C.sub.t(1)˜C.sub.t(10) to 0.96 in a first round (i.e., C.sub.t(1)= . . . =C.sub.t(10)=0.96), and then set them to 0.97, . . . , 1.04 in order in the following eight rounds.
(55) Step S904: Clear all region errors E(i), where i=1, . . . , 10.
(56) Step S905: Generate a modified vertex list according to the original vertex list and values of the test warping coefficients C.sub.t(1)˜C.sub.t(10). Again, take
(57) TABLE 2

Attributes      Descriptions
(x, y)          Equirectangular coordinates
N               Number of covering/overlapping lens images
ID1             ID of first lens image
(u′1, v′1)      Modified texture coordinates in first lens image
w1              Blending weight for stitching in first lens image
...             ...
IDN             ID of Nth lens image
(u′N, v′N)      Modified texture coordinates in Nth lens image
wN              Blending weight for stitching in Nth lens image
(58) Step S906: Measure/obtain region errors E(1)˜E(10) of ten measuring regions M(1)˜M(10) in the equirectangular wide-angle image by the image processing apparatus 520 (will be described in connection with
(59) Step S908: Store all region errors E(1)˜E(10) and all values of test warping coefficients in a 2D error table. Table 3 shows an exemplary two-dimensional (2D) error table for ofs=3 cm (test warping coefficients ranging from 0.96 to 1.04). In Table 3, there are ten region errors E(1)˜E(10) and nine values of test warping coefficients (0.96-1.04).
(60) TABLE 3

Round                     1st    2nd    3rd    ...    7th    8th    9th
Test warping coefficient  0.96   0.97   0.98   ...    1.02   1.03   1.04
Region errors             E(1), E(2), ..., E(10), recorded for each of the nine coefficient values
(61) Step S910: Determine whether the Q1 number of iterations reaches a max value of 9. If YES, the flow goes to step S912; otherwise, the flow goes to Step S902.
(62) Step S912: Perform coefficient decision according to the 2D error table.
(63) Step S914: Output optimal warping coefficients C(i), where i=1, . . . , 10. In rendering mode, the optimal warping coefficients C(1)˜C(10) are outputted to the vertex processing device 510 for generation of a corresponding modified vertex list, and then the image processing apparatus 520 generates a corresponding wide-angle image (will be described below) based on the modified vertex list and three lens images from the image capture module 51.
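The sweep of steps S902~S910 can be sketched as follows. This is an illustrative Python sketch; the callback `measure_errors` stands in for the vertex processing device and image processing apparatus, which are hardware in the disclosure:

```python
def build_error_table(measure_errors):
    """Sweep the test warping coefficients and record region errors (cf. S902-S910).

    measure_errors(coeff) must return the ten region errors E(1)..E(10) obtained
    with all ten test warping coefficients set to `coeff`.
    """
    # Predefined value range for ofs = 3 cm: 0.96, 0.97, ..., 1.04 (nine rounds).
    values = [round(0.96 + 0.01 * k, 2) for k in range(9)]
    table = {}
    for c in values:                  # one round per predefined value (Q1 = 1..9)
        table[c] = measure_errors(c)  # store E(1)..E(10) for this coefficient
    return table

# Toy sweep: pretend every region error is minimized at coefficient 1.00.
table = build_error_table(lambda c: [abs(c - 1.00)] * 10)
```

The resulting dictionary plays the role of the 2D error table: nine coefficient values against ten region errors each.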
(65) Step S961: Set Q2 to 0 for initialization.
(66) Step S962: Retrieve a selected decision group from the 2D error table. Referring to
(67) Step S964: Determine local minimums among the region errors for each control point in the selected decision group. Table 4 is an example showing the region errors E(6)˜E(8) and the nine values of the test warping coefficients.
(68) TABLE 4

index   test warping coefficient   E(6)      E(7)       E(8)
1       0.96                       1010      2600(*)    820
2       0.97                       1005      2650       750
3       0.98                       1000      2800       700
4       0.99                       900       3000       600(*)
5       1.00                       800(*)    2700       650
6       1.01                       850       2500       580
7       1.02                       950       2400(*)    500(*)
8       1.03                       960       2820       700
9       1.04                       975       2900       800
(69) As shown in Table 4, there is one local minimum among the nine region errors of R(6), and there are two local minimums among the nine region errors of R(7) and R(8), where each local minimum is marked with an asterisk.
(70) Step S966: Choose candidates according to the local minimums. Table 5 shows candidates selected from the local minimums in Table 4, where ID denotes the index, WC denotes the warping coefficient and RE denotes the region error. The number of candidates is equal to the number of the local minimums in Table 4.
(71) TABLE 5

                           R(6)               R(7)               R(8)
Number of local minimums   1                  2                  2
                           ID   WC    RE      ID   WC    RE      ID   WC    RE
Candidate[0]               5    1.00  800     1    0.96  2600    4    0.99  600
Candidate[1]               —    —     —       7    1.02  2400    7    1.02  500
(72) Step S968: Build a link metric according to the candidates in Table 5. As shown in
(73) Step S970: Determine the minimal sum of link metric values among the paths. For the link metric values M_{0,0}^{R7,R8}=0.03 and M_{0,1}^{R7,R8}=0.06, their minimum value is d_0^{R7,R8}=min(M_{0,0}^{R7,R8}, M_{0,1}^{R7,R8})=0.03. For the link metric values M_{1,0}^{R7,R8}=0.03 and M_{1,1}^{R7,R8}=0.00, their minimum value is d_1^{R7,R8}=min(M_{1,0}^{R7,R8}, M_{1,1}^{R7,R8})=0.00. Then, respectively compute the sums of link metric values for path 0-0-0 and path 0-1-1 as follows: S_0^{R7}=d_0^{R6,R7}+d_0^{R7,R8}=0.04+0.03=0.07 and S_1^{R7}=d_1^{R6,R7}+d_1^{R7,R8}=0.02+0.00=0.02. Since S_0^{R7}>S_1^{R7}, it is determined that S_1^{R7} (for path 0-1-1) is the minimal sum of link metric values among the paths, as the solid-line path shown in
(74) Step S972: Determine an optimal warping coefficient for the selected control point. As to the example given in step S970, since S_1^{R7} (for path 0-1-1) is the minimal sum of link metric values among the paths, 1.02 is selected as the optimal warping coefficient of control point R(7). However, if two or more paths have the same sum at the end of the calculation, the warping coefficient of the node with the minimum region error is selected as the optimal warping coefficient of the selected control point. Here, the Q2 number of iterations is incremented by 1.
(75) Step S974: Determine whether the iteration count Q2 reaches the limit value TH1 (=10). If YES, the flow is terminated; otherwise, the flow goes to step S962 for a next control point.
(76) After all the texture coordinates in the three lens images for all vertices from the original vertex list are modified by the vertex processing device 510 according to the ten optimal warping coefficients (C(1)˜C(10)), the mismatch image defects caused by shifted lens centers of the camera 51 (e.g., a lens center 76 is separated from the system center 73 by an offset ofs) would be greatly improved (i.e., the real imaging positions 78 are pulled toward the ideal imaging positions 70) as shown on the right side of
(78) Referring to
(79) For a triangle case, the rasterization engine 521 and the texture mapping engines 52a˜52b perform operations similar to those of the above quadrilateral case for each point/pixel in a triangle formed by each group of three vertices from the modified vertex list to generate two corresponding sample values s1 and s2, except that the rasterization engine 521 computes three spatial weighting values (e, f, g) for the three input vertices (E, F, G) forming a triangle of the polygon mesh in
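The spatial weighting values (e, f, g) for a point inside triangle EFG can be sketched as standard barycentric coordinates; this is an assumption for illustration, as the text does not spell out the rasterization engine's exact formula. The interpolation of texture coordinates from the three vertices follows the same weighted-sum pattern.

```python
def barycentric_weights(p, e, f, g):
    """Spatial weights (we, wf, wg) of point p inside triangle EFG,
    computed as standard barycentric coordinates (an illustrative
    assumption). Each input is an (x, y) pair."""
    (px, py), (ex, ey), (fx, fy), (gx, gy) = p, e, f, g
    denom = (fy - gy) * (ex - gx) + (gx - fx) * (ey - gy)
    we = ((fy - gy) * (px - gx) + (gx - fx) * (py - gy)) / denom
    wf = ((gy - ey) * (px - gx) + (ex - gx) * (py - gy)) / denom
    wg = 1.0 - we - wf
    return we, wf, wg

def interpolate_uv(weights, uv_e, uv_f, uv_g):
    # Texture coordinates of the point as the weighted sum of the
    # three vertices' texture coordinates.
    we, wf, wg = weights
    u = we * uv_e[0] + wf * uv_f[0] + wg * uv_g[0]
    v = we * uv_e[1] + wf * uv_f[1] + wg * uv_g[1]
    return u, v

# For a point inside the triangle the three weights are non-negative
# and sum to 1.
w = barycentric_weights((1.0, 1.0), (0.0, 0.0), (3.0, 0.0), (0.0, 3.0))
print(w)
```

A quadrilateral is handled analogously with four weights (a, b, c, d), one per vertex.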
(80) Next, according to the equirectangular coordinates (x, y) of the point Q, the rasterization engine 521 determines whether the point Q falls in one of the five measuring regions M(1)˜M(5) and then asserts the control signal CS1 to cause the measuring unit 525 to estimate/measure the region error of the measuring region if the point Q falls in one of the five measuring regions. The measuring unit 525 may estimate/measure the region errors of the measuring regions M(1)˜M(5) by using known algorithms, such as SAD (sum of absolute differences), SSD (sum of squared differences), MAD (median absolute deviation), etc. For example, if the point Q is determined to fall in measuring region M(1), the measuring unit 525 may accumulate the absolute value of the sample value difference between each point in the measuring region M(1) of the lens-B image and its corresponding point in the measuring region M(1) of the lens-C image to obtain the SAD value as the region error E(1) for the measuring region M(1), by using the following equations: E=|s1−s2|, E(1)+=E. In this manner, the measuring unit 525 measures five region errors E(1)˜E(5) for the measuring regions M(1)˜M(5) in measure mode. In the same manner, the measuring unit 525 measures region errors E(6)˜E(10) for the five measuring regions M(6)˜M(10) according to the modified vertex list, the lens-C image and the lens-A image in measure mode.
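The SAD accumulation E=|s1−s2|, E(1)+=E described above can be sketched as follows. The 2-D-list image layout and the (x, y) point list standing in for a measuring region are illustrative assumptions; the sample values are arbitrary.

```python
def region_error_sad(img_a, img_b, region):
    """Accumulate the SAD region error over a measuring region:
    for every point, E = |s1 - s2| and the error is summed, per the
    equations E = |s1 - s2|, E(1) += E in the text."""
    err = 0
    for (x, y) in region:
        s1 = img_a[y][x]  # sample from the first lens image
        s2 = img_b[y][x]  # corresponding sample from the second lens image
        err += abs(s1 - s2)
    return err

# Toy 2x2 measuring region M(1) shared by the lens-B and lens-C images.
lens_b = [[10, 12], [14, 16]]
lens_c = [[11, 12], [13, 20]]
region_m1 = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(region_error_sad(lens_b, lens_c, region_m1))  # -> 6
```

SSD would instead accumulate (s1 − s2)**2 per point; the control flow is otherwise identical.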
(81) In rendering mode, the rasterization engine 521 and the texture mapping circuit 522 operate in the same way as in measure mode. Again, take the above case (the point Q has equirectangular coordinates (x, y) within the quadrilateral EFGH, which is overlapped with the lens-B and lens-C images (N=2)) for example. After the texture mapping engines 52a˜52b texture map the texture data from the lens-B and lens-C images to generate two sample values s1 and s2, the blending unit 523 blends the two sample values (s1, s2) together to generate a blended value Vb of point Q using the following equation: Vb=fw1*s1+fw2*s2. Finally, the blending unit 523 stores the blended value Vb of point Q into the destination buffer 524. In this manner, the blending unit 523 sequentially stores all the blended values Vb into the destination buffer 524 until all the points within the quadrilateral EFGH are processed. On the other hand, if a point Q′ has equirectangular coordinates (x′, y′) within a quadrilateral E′F′G′H′ located in a non-overlap region (N=1) of one lens image (e.g., the lens-B image), the rasterization engine 521 only sends one set of texture coordinates (e.g., (u1, v1)) to one texture mapping engine 52a, and one face blending weight fw1(=1) to the blending unit 523. Correspondingly, the texture mapping engine 52a texture maps the texture data from the lens-B image to generate a sample value s1. The blending unit 523 generates and stores a blended value Vb(=fw1*s1) of point Q′ in the destination buffer 524. In this manner, the blending unit 523 sequentially stores all the blended values into the destination buffer 524 until all the points within the quadrilateral E′F′G′H′ are processed. Once all the quadrilaterals/triangles are processed, a projection image is stored in the destination buffer 524.
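The blending performed by the blending unit 523 reduces to a weighted sum of the per-lens samples, Vb = fw1*s1 + fw2*s2, which for a non-overlap point (N=1) degenerates to Vb = s1 with fw1 = 1. A minimal sketch, with arbitrary sample values and face blending weights:

```python
def blend(samples, face_weights):
    """Blended value Vb = fw1*s1 + fw2*s2 (+ ...). For a point in a
    non-overlap region (N = 1) the single face blending weight is 1,
    so Vb reduces to the lone sample value."""
    assert len(samples) == len(face_weights)
    return sum(fw * s for fw, s in zip(face_weights, samples))

# Overlap region (N = 2): samples from the lens-B and lens-C images.
print(blend([100.0, 120.0], [0.7, 0.3]))  # -> 106.0
# Non-overlap region (N = 1): a single sample, fw1 = 1.
print(blend([100.0], [1.0]))              # -> 100.0
```

The face blending weights for a given point are expected to sum to 1, so the blended value stays within the range of the input samples.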
(82) In a case where the image capture module 51 is an outward M-lens camera and the projection image is a panoramic image, in the offline phase, the correspondence generator 53 also takes the above four steps (a)˜(d) to determine the texture coordinates of the control points, except that step (a) is modified as follows: define the predetermined column located in each overlap region in the equirectangular panoramic image as "a control column". Take M=4 (as shown in
(83) The compensation device 52 and the correspondence generator 53 according to the invention may be hardware, software, or a combination of hardware and software (or firmware). An example of a pure hardware solution would be a field programmable gate array (FPGA) design or an application specific integrated circuit (ASIC) design. In a preferred embodiment, the vertex processing device 510 and an image processing apparatus 520 are implemented with a graphics processing unit (GPU) and a first program memory; the stitching decision unit 530 and the correspondence generator 53 are implemented with a first general-purpose processor and a second program memory. The first program memory stores a first processor-executable program and the second program memory stores a second processor-executable program. When the first processor-executable program is executed by the GPU, the GPU is configured to function as: the vertex processing device 510 and the image processing apparatus 520. When the second processor-executable program is executed by the first general-purpose processor, the first general-purpose processor is configured to function as: the stitching decision unit 530 and the correspondence generator 53.
(84) In an alternative embodiment, the compensation device 52 and the correspondence generator 53 are implemented with a second general-purpose processor and a third program memory. The third program memory stores a third processor-executable program. When the third processor-executable program is executed by the second general-purpose processor, the second general-purpose processor is configured to function as: the vertex processing device 510, the image processing apparatus 520, the stitching decision unit 530 and the correspondence generator 53.
(85) While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention should not be limited to the specific construction and arrangement shown and described, since various other modifications may occur to those ordinarily skilled in the art.