Abstract
The present invention relates to a method and system for the optical and geometric calibration of a device configured for reconstructing the three-dimensional shape of an object. The object is reconstructed from a set of images thereof acquired by a plurality of cameras, where said plurality of cameras must necessarily be calibrated so that reconstruction is carried out without errors. In the context of the invention, calibration consists of obtaining the extrinsic parameters of said plurality of cameras; that is, the positions of their optical centers and the spatial orientations of their optical axes.
Claims
1. A computer-implemented method for calibrating an object reconstruction device, wherein the device comprises: a) a plurality of C cameras (Cam) adapted for image acquisition, the j-th camera being identified as Cam.sub.j for all j=1 . . . C, comprising: a sensor with a sensitive area formed by pixels for capturing an image, and an optic, with an optical axis with orientation vector O.sub.J for establishing a focus on the sensor and the portion of the space projected on the sensor, and the plurality of cameras (Cam) being spatially distributed in positions of a closed surface (S) defining an observation region (R) therein, where the cameras (Cam) are configured for having the orientation vectors O.sub.J of their optical axes oriented towards a pre-established point (P) of the observation region (R); characterized in that the device (1) further comprises: b) a calibration solid comprising a number NB of balls (BL), with NB greater than or equal to one, the k-th ball being identified as BL.sub.k for all k=1 . . . NB and wherein if two or more balls are used they have different diameters; c) at least one positioning mechanism of the calibration solid configured for positioning said calibration solid in the observation region; d) computational equipment in communication with the at least one positioning mechanism and with the plurality of cameras (Cam) and configured for: causing in the plurality of cameras (Cam) the capture of images of the observation region (R) when the at least one positioning mechanism positions the calibration solid in said observation region (R), and receiving the images acquired by the plurality of cameras (Cam); wherein the method comprises the following steps: 1. 
a step of data capture of the object reconstruction device comprising the following sub-steps: for each of the cameras Cam.sub.j, providing their intrinsic parameters and estimated values of at least the following extrinsic calibration parameters: orientation O.sub.J in the observation region (R) within the closed surface (S) and position of an optical center c.sub.j being also the point on the optical axis of the optics where all rays passing through it remain unrefracted; establishing, based on the intrinsic and extrinsic parameters, a bijective correspondence f.sub.j between each coordinate of a pixel px of the image captured by the sensor of the camera Cam.sub.j and the set of coordinates of the points of the epipolar line r of the space, which passes through the optical center c.sub.j of the camera Cam.sub.j and are projected by the optic of the camera Cam.sub.j in that same pixel px after the image capture; performing a predetermined number N of experiments (E), the i-th experiment being identified as E.sub.i for all i=1 . . . N, according to the following sub-steps: the positioning mechanism positioning the calibration solid in the observation region (R) at a different position; activating the plurality of cameras (Cam) for each of them (Cam.sub.j) to capture an image of the calibration solid positioned in the observation region (R); receiving the images I.sub.ij acquired by the cameras (Cam), where in each image I.sub.ij each of the balls BL.sub.k captured in the image is shown as connected regions BP.sub.k.sup.ij of pixels, where index i with i=1 . . . N denotes the experiment number, index j denotes the camera which has captured the image I.sub.ij with j=1 . . . C, and k denotes the ball (BL) from among the balls of the calibration solid with k=1 . . . NB, 2. 
a step of post-processing the captured data comprising the following sub-steps: going through the set of captured images I.sub.ij, where for each captured image I.sub.ij the following sub-steps are carried out: identifying the connected regions BP.sub.k.sup.ij of pixels corresponding to a specific ball BL.sub.k of the calibration solid, if it appears in the image I.sub.ij; for each connected region BP.sub.k.sup.ij of a ball BL.sub.k of the calibration solid that does appear in an image I.sub.ij, determining: a pixel px.sub.ijk corresponding to the center of said connected region BP.sub.k.sup.ij in the image I.sub.ij; and in the space within the closed surface, determining a normalized direction vector d.sub.ijk of an epipolar line r.sub.ijk which corresponds to the pixel px.sub.ijk through correspondence f.sub.j; determining the points of the space p.sub.ik′ which minimize the summation of distances to the epipolar lines r.sub.ijk defined by the directions d.sub.ijk of each ball BL.sub.k in each experiment E.sub.i: p.sub.ik′=arg min.sub.p Σ.sub.j=1 . . . C∥(I−d.sub.ijkd.sub.ijk.sup.T)(p−c.sub.j)∥.sup.2, where I is the identity matrix and where T denotes the transposition of the vector d.sub.ijk; for each camera Cam.sub.j, determining the extrinsic calibration parameters such that the correspondence f.sub.j between the epipolar line defined between the center c.sub.j and the point p.sub.ik′, and the pixel px.sub.ijk, for any experiment E.sub.i and for any ball BL.sub.k is satisfied.
2. The method according to claim 1, wherein the number NB of balls (BL) of the calibration solid is greater than or equal to two and wherein the balls (BL) are attached by rods.
3. The method according to claim 2, wherein the step of post-processing the captured data comprises performing, for each camera Cam.sub.j, the following additional steps: determining a set of pairs of points p.sub.k.sub.1′,p.sub.k.sub.2′ of the space, with k.sub.1 different from k.sub.2, which represent centers of balls (BL.sub.k1 and BL.sub.k2) of the calibration solid (3) of one and the same experiment (E); zero-initializing a value of the position error e.sub.j of the optical center c.sub.j of the camera Cam.sub.j; carrying out the following sub-steps for each pair p of points p.sub.k.sub.1′,p.sub.k.sub.2′ with p=1 . . . N.sub.p, N.sub.p being the total number of pairs of points: i. determining p.sub.k.sub.1′p.sub.k.sub.2′ as the segment joining points p.sub.k.sub.1′ and p.sub.k.sub.2′, ii. determining cm′.sub.k.sub.1.sub.k.sub.2 as the midpoint of said segment, iii. determining d.sub.k.sub.1.sub.k.sub.2 as the actual distance between the centers of the balls BL.sub.k1 and BL.sub.k2 of the calibration solid (3), iv. determining r.sub.k.sub.1′ as the epipolar line extending from point p.sub.k.sub.1′ and the optical center of the camera c.sub.j to be corrected, v. determining r.sub.k.sub.2′ as the epipolar line extending from point p.sub.k.sub.2′ and the optical center of the camera c.sub.j to be corrected, vi. determining the points p.sub.k.sub.1, correction of point p.sub.k.sub.1′, and p.sub.k.sub.2, correction of point p.sub.k.sub.2′, of the space within the closed surface (S) as the points which satisfy: being on the line defining the segment p.sub.k.sub.1′p.sub.k.sub.2′; the distance between p.sub.k.sub.1 and p.sub.k.sub.2 being d.sub.k.sub.1.sub.k.sub.2; the midpoint cm.sub.k.sub.1.sub.k.sub.2 between p.sub.k.sub.1 and p.sub.k.sub.2 being cm′.sub.k.sub.1.sub.k.sub.2; vii. determining the epipolar line r.sub.k.sub.1 as the line parallel to the epipolar line r.sub.k.sub.1′ which passes through point p.sub.k.sub.1; viii. 
determining the epipolar line r.sub.k.sub.2 as the line parallel to the epipolar line r.sub.k.sub.2′ which passes through point p.sub.k.sub.2; ix. determining the corrected optical center {tilde over (c)}.sub.j as the intersection between epipolar lines r.sub.k.sub.1 and r.sub.k.sub.2; x. determining the value of the position error for the pair p as
e.sub.j.sup.p=(c.sub.j−{tilde over (c)}.sub.j)/N.sub.p xi. determining e.sub.j as e.sub.j+e.sub.j.sup.p; determining a new optical center of the camera Cam.sub.j by means of the correction of the position error e.sub.j as ĉ.sub.j=c.sub.j+e.sub.j.
4. The method according to claim 2, wherein at least one of: step 2) of calculating points p.sub.ik′ of claim 1, the steps according to claim 3, or step 2) of calculating points p.sub.ik′ of claim 1 and the steps according to claim 3 executed sequentially, are performed iteratively until complying with a stop criterion, preferably a maximum pre-established number of iterations or a pre-selected maximum error value.
5. The method according to claim 1, wherein in the sub-step in which the extrinsic calibration parameters of step 2) are determined, a descent algorithm, a conjugate gradient method, or a GMRES (Generalized Minimal RESidual Method), is applied.
6. The method according to claim 1, wherein the determination of the pixel px.sub.ijk corresponding to the center of the connected region BP.sub.k.sup.ij is based on the segmentation of said connected region BP.sub.k.sup.ij in the images I.sub.ij acquired by the plurality of cameras (Cam), particularly using a thresholding algorithm.
7. The method according to claim 6, wherein the balls (BL) of the calibration solid (3) are spherical and the determination of the pixel px.sub.ijk corresponding to the center of the connected region BP.sub.k.sup.ij is performed by fitting a circumference to the contour of said connected region BP.sub.k.sup.ij in the images I.sub.ij or calculating the center of mass of the connected region BP.sub.k.sup.ij.
8. The method according to claim 1, wherein the number of experiments N must comply with the condition N≥6/NB.
9. The method according to claim 1, wherein the at least one distinctive characteristic of the at least one ball (BL) of the calibration solid (3) which allows establishing the correspondence between each connected region BP.sub.k.sup.ij in the acquired images I.sub.ij and the at least one ball (BL) of the calibration solid (3) captured in the image is one or more of: the color of the ball (BL); the size of the ball (BL); the texture of the ball (BL); and the shape of the ball (BL).
10. The method according to claim 2, wherein the at least one calibration solid (3) comprises two or more substantially spherical balls (BL) of different diameters attached by rods of a pre-established geometry.
11. The method according to claim 1, wherein the observation region (R) has a polyhedral or spheroidal configuration, the center of the polyhedral or spheroidal region corresponding with the pre-established point (P) of the observation region (R) with respect to which the orientation vectors O.sub.J of the optical axes of the plurality of cameras (Cam) are oriented.
12. The method according to claim 1, wherein the at least one positioning mechanism (5) of the calibration solid (3) is configured for positioning the calibration solid at the pre-established point (P) of the observation region (R) with respect to which the orientation vectors O.sub.J of the optical axes of the plurality of cameras (Cam) are oriented.
13. The method according to claim 1, wherein the at least one positioning mechanism (5) of the calibration solid (3) is configured for letting the calibration solid (3) fall due to the action of gravity.
14. The method according to claim 1, wherein the at least one positioning mechanism (5) of the calibration solid (3) is configured for casting the calibration solid (3) from a lower part of the observation region (R).
15. The method according to claim 1, wherein the method further comprises, for each camera Cam.sub.j: its reorientation; its repositioning; or its reorientation and repositioning; depending on the specific extrinsic calibration parameters, such that the new orientation vectors O.sub.J of the optical axes of each camera Cam.sub.j are oriented towards the pre-established point (P) of the observation region (R).
16. A calibration system for calibrating an object reconstruction device, comprising: a) a plurality of C cameras (Cam) adapted for image acquisition, the j-th camera being identified as Cam.sub.j for all j=1 . . . C, comprising: a sensor with a sensitive area formed by pixels for capturing an image, and an optic, with an optical axis with orientation vector O.sub.J for establishing a focus on the sensor and the portion of the space projected on the sensor, and the plurality of cameras (Cam) being spatially distributed in positions of a closed surface (S) defining an observation region (R) therein, where the cameras (Cam) are configured for having the orientation vectors O.sub.J of their optical axes oriented towards a pre-established point (P) of the observation region (R); characterized in that the device (1) further comprises: b) a calibration solid comprising a number NB of balls (BL), with NB greater than or equal to one, the k-th ball being identified as BL.sub.k for all k=1 . . . NB and wherein if two or more balls are used they have different diameters; c) at least one positioning mechanism of the calibration solid configured for positioning said calibration solid in the observation region; d) computational equipment in communication with the at least one positioning mechanism and with the plurality of cameras (Cam) and configured for: causing in the plurality of cameras (Cam) the capture of images of the observation region (R) when the at least one positioning mechanism positions the calibration solid in said observation region (R), and receiving the images acquired by the plurality of cameras (Cam); wherein the computational equipment is further configured for carrying out the following steps: 1. 
a step of data capture of the object reconstruction device comprising the following sub-steps: for each of the cameras Cam.sub.j, providing their intrinsic parameters and estimated values of at least the following extrinsic calibration parameters: orientation O.sub.J in the observation region (R) within the closed surface (S) and position of an optical center c.sub.j being also the point on the optical axis of the optics where all rays passing through it remain unrefracted; establishing, based on the intrinsic and extrinsic parameters, a bijective correspondence f.sub.j between each coordinate of a pixel px of the image captured by the sensor of the camera Cam.sub.j and the set of coordinates of the points of the epipolar line r of the space, which passes through the optical center c.sub.j of the camera Cam.sub.j and are projected by the optic of the camera Cam.sub.j in that same pixel px after the image capture; performing a predetermined number N of experiments (E), the i-th experiment being identified as E.sub.i for all i=1 . . . N, according to the following sub-steps: the positioning mechanism positioning the calibration solid in the observation region (R) at a different position; activating the plurality of cameras (Cam) for each of them (Cam.sub.j) to capture an image of the calibration solid positioned in the observation region (R); receiving the images I.sub.ij acquired by the cameras (Cam), where in each image I.sub.ij each of the balls BL.sub.k captured in the image is shown as connected regions BP.sub.k.sup.ij of pixels, where index i with i=1 . . . N denotes the experiment number, index j denotes the camera which has captured the image I.sub.ij with j=1 . . . C, and k denotes the ball (BL) from among the balls of the calibration solid with k=1 . . . NB, 2. 
a step of post-processing the captured data comprising the following sub-steps: going through the set of captured images I.sub.ij, where for each captured image I.sub.ij the following sub-steps are carried out: identifying the connected regions BP.sub.k.sup.ij of pixels corresponding to a specific ball BL.sub.k of the calibration solid, if it appears in the image I.sub.ij; for each connected region BP.sub.k.sup.ij of a ball BL.sub.k of the calibration solid that does appear in an image I.sub.ij, determining: a pixel px.sub.ijk corresponding to the center of said connected region BP.sub.k.sup.ij in the image I.sub.ij; and in the space within the closed surface, determining a normalized direction vector d.sub.ijk of an epipolar line r.sub.ijk which corresponds to the pixel px.sub.ijk through correspondence f.sub.j; determining the points of the space p.sub.ik′ which minimize the summation of distances to the epipolar lines r.sub.ijk defined by the directions d.sub.ijk of each ball BL.sub.k in each experiment E.sub.i: p.sub.ik′=arg min.sub.p Σ.sub.j=1 . . . C∥(I−d.sub.ijkd.sub.ijk.sup.T)(p−c.sub.j)∥.sup.2, where I is the identity matrix and where T denotes the transposition of the vector d.sub.ijk; for each camera Cam.sub.j, determining the extrinsic calibration parameters such that the correspondence f.sub.j between the epipolar line defined between the center c.sub.j and the point p.sub.ik′, and the pixel px.sub.ijk, for any experiment E.sub.i and for any ball BL.sub.k is satisfied.
17. (canceled)
18. A non-transitory computer-readable medium comprising instructions which, when run by computational equipment, cause the computational equipment to: 1. capture data of the object reconstruction device comprising the following sub-steps: for each of the cameras Cam.sub.j, provide their intrinsic parameters and estimated values of at least the following extrinsic calibration parameters: orientation O.sub.J in the observation region (R) within the closed surface (S) and position of an optical center c.sub.j being also the point on the optical axis of the optics where all rays passing through it remain unrefracted; establish, based on the intrinsic and extrinsic parameters, a bijective correspondence f.sub.j between each coordinate of a pixel px of the image captured by the sensor of the camera Cam.sub.j and the set of coordinates of the points of the epipolar line r of the space, which passes through the optical center c.sub.j of the camera Cam.sub.j and are projected by the optic of the camera Cam.sub.j in that same pixel px after the image capture; perform a predetermined number N of experiments (E), the i-th experiment being identified as E.sub.i for all i=1 . . . N, according to the following sub-steps: the positioning mechanism positioning the calibration solid in the observation region (R) at a different position; activating the plurality of cameras (Cam) for each of them (Cam.sub.j) to capture an image of the calibration solid positioned in the observation region (R); receiving the images I.sub.ij acquired by the cameras (Cam), where in each image I.sub.ij each of the balls BL.sub.k captured in the image is shown as connected regions BP.sub.k.sup.ij of pixels, where index i with i=1 . . . N denotes the experiment number, index j denotes the camera which has captured the image I.sub.ij with j=1 . . . C, and k denotes the ball (BL) from among the balls of the calibration solid with k=1 . . . NB, 2. 
post-process the captured data comprising the following sub-steps: going through the set of captured images I.sub.ij, where for each captured image I.sub.ij the following sub-steps are carried out: identify the connected regions BP.sub.k.sup.ij of pixels corresponding to a specific ball BL.sub.k of the calibration solid, if it appears in the image I.sub.ij; for each connected region BP.sub.k.sup.ij of a ball BL.sub.k of the calibration solid that does appear in an image I.sub.ij, determine: a pixel px.sub.ijk corresponding to the center of said connected region BP.sub.k.sup.ij in the image I.sub.ij; and in the space within the closed surface, determining a normalized direction vector d.sub.ijk of an epipolar line r.sub.ijk which corresponds to the pixel px.sub.ijk through correspondence f.sub.j; determine the points of the space p.sub.ik′ which minimize the summation of distances to the epipolar lines r.sub.ijk defined by the directions d.sub.ijk of each ball BL.sub.k in each experiment E.sub.i: p.sub.ik′=arg min.sub.p Σ.sub.j=1 . . . C∥(I−d.sub.ijkd.sub.ijk.sup.T)(p−c.sub.j)∥.sup.2, where I is the identity matrix and where T denotes the transposition of the vector d.sub.ijk; for each camera Cam.sub.j, determine the extrinsic calibration parameters such that the correspondence f.sub.j between the epipolar line defined between the center c.sub.j and the point p.sub.ik′, and the pixel px.sub.ijk, for any experiment E.sub.i and for any ball BL.sub.k is satisfied.
Description
DESCRIPTION OF THE DRAWINGS
[0161] These and other characteristics and advantages of the invention will become more apparent from the following detailed description of a preferred embodiment given solely by way of illustrative and non-limiting example in reference to the appended drawings.
[0162] FIG. 1 shows an illustrative example of the step of taking data of the method of the invention by using an image capture device and subsequent reconstruction of the captured object.
[0163] FIG. 2A shows two ways of representing the projection within a camera: with the optical center located between the sensor and the optical lens/lenses, or with the sensor located between the optical center and the optical lens/lenses.
[0164] FIG. 2B schematically illustrates the step of post-processing data of the method of the invention.
[0165] FIG. 3 schematically shows an embodiment with the epipolar lines, corresponding to a point of the three-dimensional space, of four of the cameras of the device which are not correctly calibrated.
[0166] FIG. 4 shows a diagram of the steps of the method according to an embodiment in which a scale correction is carried out.
[0167] FIG. 5 shows the calibration solid used as an embodiment in the detailed method based on FIGS. 1-4.
DETAILED DESCRIPTION OF THE INVENTION
[0168] FIG. 1 shows an illustrative example of the step of taking data of the method according to an embodiment of the invention. The data required for carrying out the calibration of the device (1) of this embodiment is acquired by a plurality of C cameras, one of which has been identified in the figure as the j-th camera, Cam.sub.j, where j=1 . . . C. This data capture is performed by means of a predetermined number N of experiments (E), the i-th experiment of which has been identified in the drawing as E.sub.i, for all i=1 . . . N.
[0169] The cameras (Cam) are spatially distributed in positions of a closed surface (S) enclosing an observation region (R) therein. In this example, the surface (S) is spherical and the observation region (R) is an inner space, where the orientation vectors O.sub.J of the optical axes of the cameras (Cam) are oriented towards the central point (P) of the surface (S).
[0170] For carrying out the calibration, the device (1) comprises a calibration solid (3) to be photographed by the plurality of cameras (Cam). When the calibration solid (3) to be reconstructed is positioned in said observation region (R), the cameras (Cam) remain positioned around it, which enables photographing said calibration solid (3) from multiple angles and positions. The calibration solid (3) of FIG. 1 comprises two spheres of different diameter attached by a rod of known geometry. FIG. 1 depicts the calibration solid (3) at several positions so as to schematically show the process of falling from an upper position to a lower position. The images of the calibration solid (3) are captured when said calibration solid (3) passes through the observation region (R).
[0171] The device (1) also comprises a positioning mechanism (5) of the calibration solid (3) configured for positioning said calibration solid (3) in the observation region (R). In this embodiment, the positioning mechanism (5) is a device which lets the solid to be captured fall such that a path passing through the observation region (R) is assured. Finally, the device to be calibrated comprises computational equipment (4) configured for simultaneously communicating with the positioning mechanism (5) and with the plurality of cameras (Cam).
[0172] The following sub-steps must be performed to acquire the data required for carrying out the calibration of the device (1): [0173] Providing the intrinsic parameters of the cameras (Cam).
[0174] These parameters can be found on the technical data sheets thereof or can be provided by their manufacturers or distributors. [0175] Initializing the extrinsic parameters of the cameras (Cam) by establishing an approximate initial value of their optical centers c.sub.j and their orientations in space O.sub.J. [0176] Establishing a bijective correspondence f.sub.j between each coordinate of a pixel px of the image captured by the sensor (2.1) of the camera Cam.sub.j and the set of coordinates of the points of the epipolar line r of the space, which passes through the optical center c.sub.j of the camera Cam.sub.j and are projected by the optic (2.2) of the camera Cam.sub.j in that same pixel px after the image capture; [0177] performing N experiments as follows: [0178] The computational equipment (4) communicates with the positioning mechanism (5) in order to position the calibration solid (3) in the observation region (R). In this particular example, the positioning mechanism (5) lets the calibration solid fall due to the action of gravity from the highest possible position of the surface (S). [0179] Simultaneously, the computational equipment (4) activates the plurality of cameras (Cam) so that each of them (Cam.sub.j) captures an image of the calibration solid (3) positioned in the observation region (R). In particular, the suitable moment for image acquisition is when the calibration solid (3) is positioned in the center (P) of the spherical observation region (R). [0180] Finally, the images acquired by the cameras (Cam) are sent to the computational equipment (4) for post-processing.
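The correspondence f.sub.j of sub-step [0176], mapping a pixel to the epipolar line r through the optical center c.sub.j, can be sketched for an ideal pinhole camera. This is a minimal illustration, not the patent's prescribed implementation; the intrinsic matrix K, rotation matrix R, and center c are hypothetical names for the intrinsic and extrinsic parameters, which the patent leaves model-agnostic.

```python
import numpy as np

def pixel_to_epipolar_ray(px, K, R, c):
    """Sketch of f_j for a pinhole camera: map pixel (u, v) to the
    epipolar line through the optical center c, returned as (c, d)
    so that the line is {c + t*d : t real}.  K, R, c are assumed
    intrinsic matrix, world rotation, and optical-center position."""
    u, v = px
    # Back-project the pixel into a viewing direction in camera coordinates...
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # ...then rotate into world coordinates and normalize.
    d = R @ d_cam
    d = d / np.linalg.norm(d)
    return c, d
```

The correspondence is bijective in the sense of the claims: each pixel yields one line through c.sub.j, and each such line projects back to that pixel.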
[0181] In the embodiment shown in FIG. 1, by letting the calibration solid (3) fall due to the action of gravity, it remains in the air temporarily without being concealed by the positioning mechanism (5). Thus, given that the calibration solid is photographed suspended in the air, the view of the cameras towards said solid, or towards a part of same, is unobstructed, particularly when it is positioned in the center of the observation region (R), which is also the point of convergence (P) of the cameras (Cam), save for the errors leading to the need for calibration.
[0182] Furthermore, in the different experiments the natural variability established by letting the calibration solid (3) fall under different initial conditions can assure that the calibration solid (3) is photographed in different positions and orientations. To enhance said variability, a calibration solid (3) of balls with different diameters is furthermore used in the example of FIG. 1.
[0183] Given that the calibration solid (3) has two balls, and that a system of 6 equations must be solved to satisfy the geometric restrictions, at least three experiments are required. Thus, the condition N≥6/NB, where N is the number of experiments and NB is the number of balls of the calibration solid (3), is complied with.
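The condition N≥6/NB can be checked directly. A small helper, given purely for illustration (the function name is hypothetical):

```python
import math

def min_experiments(nb):
    """Minimum number of experiments N satisfying N >= 6/NB:
    with nb balls observed per experiment, at least ceil(6/nb)
    experiments are needed to obtain the 6 required equations."""
    return math.ceil(6 / nb)
```

For the two-ball solid of FIG. 1 this gives min_experiments(2) == 3, matching the three experiments stated above, while a single-ball solid would require six.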
[0184] FIG. 2A schematically illustrates two different ways of representing the projection within an i-th camera. In the first case, in the upper part of FIG. 2A, its optical center O.sub.i is located between the sensor (2.1) and the optics (2.2) (represented as a trapezium); in this case the image is inverted. In the second case, in the lower part of FIG. 2A, its sensor (2.1) is located between the optical center O.sub.i and the optics (2.2), resulting in an image having the same orientation as the object being captured by the camera. In both cases the line connecting a point of the real object and the optical center O.sub.i impinges on the sensor (2.1). Both cases may be used as specific examples of cameras for carrying out the invention, although the first case is the one commonly used. In both cases the focus (f) is the distance between the optical center O.sub.i and the sensor (2.1).
[0185] FIGS. 2B and 3 schematically illustrate the step of post-processing data of the method of the invention. On the left side of FIG. 2B, the set of previously acquired data according to the example described in FIG. 1 can be distinguished. This data is the set of images I.sub.ij acquired in each i-th experiment, E.sub.i, by each j-th camera, Cam.sub.j.
[0186] FIG. 2B shows in rows the images corresponding to each experiment (identified by index i). The columns identify the camera (index j) which captured the image in a specific experiment.
[0187] In the first row corresponding to the first experiment, the circumference and center thereof corresponding to the capture of the ball is shown schematically in each image.
[0188] First, post-processing requires going through the set of images, where for each captured image I.sub.ij the following sub-steps are carried out: [0189] identifying the connected regions BP.sub.k.sup.ij of pixels corresponding to a specific ball BL.sub.k of the calibration solid (3), if it appears in the image I.sub.ij. [0190] The technique used in the example of FIG. 2B is thresholding, whereby the background of the image and the rod acquire the value zero and the connected regions the value one. For the sake of simplicity, the example only shows the connected regions (BP.sub.k.sup.11, BP.sub.k.sup.12, BP.sub.k.sup.13) identified for the three cameras of experiment 1, E.sub.1, corresponding with the k-th ball, BL.sub.k, of the calibration solid (3). [0191] for each connected region BP.sub.k.sup.ij of a ball BL.sub.k of the calibration solid (3) determining: [0192] the pixel px.sub.ijk corresponding to the center of said connected region BP.sub.k.sup.ij. For the sake of simplicity, this example shows the pixels px.sub.11k, px.sub.12k, px.sub.13k which correspond to the centers of the connected regions BP.sub.k.sup.11, BP.sub.k.sup.12, BP.sub.k.sup.13. The technique used in this example for the determination of the pixels px.sub.ijk is the fitting of the contours of the connected regions to a circumference, the center of which is estimated. In each of the images of the first experiment, the circumference used in the fitting as well as the estimated center is graphically depicted. [0193] the normalized direction vector d.sub.ijk of the epipolar line r.sub.ijk which corresponds to the pixel px.sub.ijk through correspondence f.sub.j. For the sake of simplicity, only vectors d.sub.11k, d.sub.12k and d.sub.13k are shown in the example.
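The segmentation and center estimation just described can be sketched as follows. This is a deliberately simplified version: it assumes a single bright ball per image (no full connected-component labeling), uses a hypothetical threshold parameter, and estimates the center as the center of mass allowed by claim 7 rather than the circumference fit used in the FIG. 2B example.

```python
import numpy as np

def ball_center_pixel(image, threshold):
    """Estimate px_ijk as the center of mass of the thresholded
    connected region BP: pixels above `threshold` become 1 (the ball),
    the rest 0 (background and rod).  Returns (x, y) pixel coordinates."""
    mask = image > threshold      # thresholding step
    ys, xs = np.nonzero(mask)     # coordinates of the region's pixels
    return xs.mean(), ys.mean()   # centroid of the connected region
```

For a well-segmented spherical ball the centroid and the fitted-circumference center coincide up to pixel noise, which is why claim 7 offers either technique.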
[0194] FIG. 3 shows an example of epipolar lines r.sub.ijk of four different cameras to be calibrated. In an ideal scenario, the epipolar lines should converge at a single point of the three-dimensional space. Following the example of FIG. 2B, said point would correspond with the physical center of the ball BL.sub.k. However, in an actual scenario like the one shown in FIGS. 2B and 3, the extrinsic parameters of the cameras are not ideal but rather suffer variations, usually changes over time due to various factors: for example, precision tolerances of the assembly of the device, the aging of its components, or the mechanical stress it withstands.
[0195] Therefore, to continue with the calibration method it must be assumed that 1) the epipolar lines do not have to converge at a single point of the three-dimensional space, and 2) it is possible to determine a point of said three-dimensional space (p.sub.1k′ in FIGS. 2B and 3) which minimizes the distance to all the epipolar lines r.sub.ijk. FIG. 3 illustrates both assumptions. It should be observed that the index i has acquired the value 1 since in FIG. 3 only experiment 1, E.sub.1, is shown.
[0196] Taking said assumptions into account, the method continues in FIG. 2B (right side of the figure) by determining precisely the points of space p.sub.ik′ which minimize the summation of distances to the epipolar lines r.sub.ijk defined by the directions d.sub.ijk of each ball BL.sub.k in each experiment E.sub.i. For the sake of simplicity, only point p.sub.1k′ is shown in FIGS. 2B and 3.
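The point p.sub.ik′ minimizing the sum of squared distances to the epipolar lines has a closed form, obtained by setting the gradient of the summation to zero. The sketch below assumes the standard orthogonal-projector formulation implied by the identity matrix I and the product d.sub.ijkd.sub.ijk.sup.T mentioned in the claims; function and variable names are illustrative only.

```python
import numpy as np

def nearest_point_to_lines(centers, directions):
    """Least-squares point p minimizing sum_j ||(I - d_j d_j^T)(p - c_j)||^2,
    where each epipolar line is given by an optical center c_j and a
    normalized direction d_j.  Solves the normal equations
    [sum_j (I - d_j d_j^T)] p = sum_j (I - d_j d_j^T) c_j."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to d
        A += P
        b += P @ c
    return np.linalg.solve(A, b)
```

With at least two non-parallel lines the matrix A is invertible, so the minimizer is unique; for lines that do intersect, it returns the intersection point exactly.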
[0197] On the right side of FIG. 2B, separated by a vertical line, the three cameras that have captured each of the experiments depicted by rows on the left side of said FIG. 2B are shown. It particularly shows the epipolar lines (r.sub.11k, r.sub.12k and r.sub.13k), determined respectively as the lines which pass through the optical center of the camera (c.sub.1, c.sub.2, c.sub.3) and through the point of space which corresponds to the position of the pixel of the sensor (2.1) where the estimation of the center of the projected ball (px.sub.11k, px.sub.12k, px.sub.13k) has been determined. The pixels located on the sensor (2.1) on the right side of the figure are shown as a bullet point symbol, although in this figure the sensor is shown in section. These lines are what allow, for example, determining the unit direction vectors of the epipolar lines (d.sub.11k, d.sub.12k, d.sub.13k).
[0198] It can be observed that the epipolar lines (r.sub.11k, r.sub.12k and r.sub.13k) which pass through the optical centers of the cameras (c.sub.1, c.sub.2, c.sub.3) do not pass through the point (p.sub.1k′) corresponding to the center of the ball of the solid (3), due to position and orientation errors, assembly failures, etc.
[0199] Finally, the extrinsic calibration parameters are determined for each camera Cam.sub.j such that the correspondence f.sub.j between the epipolar line defined between the center c.sub.j and the point p.sub.ik′, and the pixel px.sub.ijk, for any experiment E.sub.i and for any ball BL.sub.k determined as was just described, is satisfied. For determining these parameters, a conjugate gradient method is used in this particular example.
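As a toy illustration of the conjugate gradient step mentioned above, the sketch below recovers a single camera's orientation from correspondences between camera-frame ray directions and the world-frame directions toward the points p.sub.ik′. The axis-angle parametrization, the squared-residual cost, and the function names are illustrative assumptions; a closed-form solution (e.g. the Kabsch algorithm) also exists, but conjugate gradients mirrors the method named in the text:

```python
import numpy as np
from scipy.optimize import minimize

def rodrigues(r):
    """Rotation matrix from an axis-angle vector r (Rodrigues' formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])   # cross-product matrix [k]x
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def fit_orientation(d_cam, d_world):
    """Find the rotation mapping camera-frame directions onto the
    world-frame directions c_j -> p_ik', using conjugate gradients."""
    cost = lambda r: np.sum((d_cam @ rodrigues(r).T - d_world) ** 2)
    res = minimize(cost, np.zeros(3), method="CG")
    return rodrigues(res.x)
```

In the actual method the optimized variables would include all extrinsic parameters of all cameras, with the cost built from the correspondences f.sub.j over every experiment E.sub.i and ball BL.sub.k.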
[0200] From these extrinsic parameters, it can be assured that the orientation of the optical axis of each camera in space has been estimated with precision, but not the position of its optical center. The images captured by the cameras are enlarged or magnified, i.e., they produce the effect that the cameras are closer to the photographed object than they actually are. This effect is a scale error that must be corrected in order to know exactly the spatial position of the optical center of each of the cameras.
[0201] FIG. 4 illustrates a diagram of this step of the method for correcting the scale. For the sake of simplicity, a diagram of how to correct the optical center of the j-th camera for a pair of analyzed points of the space and for a single experiment is shown. These calculations will be repeated for each pair of points, for each camera, and for each experiment.
[0202] First, the points of the three-dimensional space p.sub.k.sub.1′,p.sub.k.sub.2′ are determined by means of the method described in the preceding figures. This pair of points p corresponds with the centers of two balls BL.sub.1 and BL.sub.2 of the calibration solid (3). The center of the segment joining both points cm′.sub.k.sub.1.sub.k.sub.2 is shown in FIG. 4.
[0203] Next, the epipolar lines, r.sub.k.sub.1′ and r.sub.k.sub.2′, which pass through the estimated points of the space (p.sub.k.sub.1′, p.sub.k.sub.2′) and through the estimated value of the optical center c.sub.j of the j-th camera to be corrected are determined. As mentioned above, this estimated value of the optical center c.sub.j is located closer to the calibration solid (3) than it actually is. In this figure, the camera is schematically depicted twice, each time by means of an ellipse representing the optical unit and a thick vertical line representing the sensor on which the epipolar line is projected. Black circles are also used for identifying intersecting points.
[0204] The method continues by determining the corrected points of the space, p.sub.k.sub.1 and p.sub.k.sub.2. These points satisfy three conditions: they are located on the same line defined by the segment joining the estimated points p.sub.k.sub.1′ and p.sub.k.sub.2′, the distance between them must be equal to the actual distance between the centers of the balls of the calibration solid (3), and the midpoint cm.sub.k.sub.1.sub.k.sub.2 between them must coincide with the estimated midpoint cm′.sub.k.sub.1.sub.k.sub.2.
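The three conditions above amount to rescaling the estimated segment about its midpoint so that its length equals the known ball separation. A minimal sketch (the function name is an illustrative assumption):

```python
import numpy as np

def correct_points(p1_est, p2_est, true_distance):
    """Corrected points: collinear with the estimated segment, separated
    by the known inter-ball distance, sharing the estimated midpoint."""
    p1_est = np.asarray(p1_est, dtype=float)
    p2_est = np.asarray(p2_est, dtype=float)
    cm = 0.5 * (p1_est + p2_est)                      # estimated midpoint cm'
    u = (p2_est - p1_est) / np.linalg.norm(p2_est - p1_est)
    half = 0.5 * true_distance * u                    # half-segment of true length
    return cm - half, cm + half
```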
[0205] Based on corrected points of the space p.sub.k.sub.1 and p.sub.k.sub.2, the corrected epipolar lines r.sub.k.sub.1 and r.sub.k.sub.2 which pass through said points and are parallel to epipolar lines r.sub.k.sub.1′ and r.sub.k.sub.2′ are determined. The intersection of these corrected epipolar lines allows estimating the corrected optical center {tilde over (c)}.sub.j of the j-th camera.
[0206] The position error of the optical center e.sub.j of the j-th camera is calculated as the mean of the differences between the initially estimated optical center c.sub.j and the corrected optical centers {tilde over (c)}.sub.j for each pair of points.
[0207] Finally, the optical center of the j-th camera is estimated as the estimated value of the optical center c.sub.j plus the calculated position error of the optical center e.sub.j.
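Because the corrected epipolar lines are parallel translates of the originals through the rescaled points, their intersection coincides with scaling the estimated optical center about the segment midpoint by the same factor s = (true distance)/(estimated distance). A minimal sketch of the corrected-center computation and the mean-error update described above (function names are illustrative assumptions):

```python
import numpy as np

def corrected_center(c_est, p1_est, p2_est, true_distance):
    """Intersection of the corrected (parallel) epipolar lines, obtained
    by scaling the estimated center about the segment midpoint by s."""
    c_est, p1_est, p2_est = map(lambda a: np.asarray(a, dtype=float),
                                (c_est, p1_est, p2_est))
    cm = 0.5 * (p1_est + p2_est)
    s = true_distance / np.linalg.norm(p2_est - p1_est)
    return cm + s * (c_est - cm)

def refine_center(c_est, point_pairs, true_distances):
    """Position error e_j as the mean difference between corrected and
    estimated centers over all point pairs; final center is c_j + e_j."""
    c_est = np.asarray(c_est, dtype=float)
    errors = [corrected_center(c_est, p1, p2, L) - c_est
              for (p1, p2), L in zip(point_pairs, true_distances)]
    e_j = np.mean(errors, axis=0)
    return c_est + e_j
```

For instance, if the estimated scene is the true scene uniformly shrunk about the midpoint, the corrected center lands exactly on the true optical center.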
[0208] In a preferred embodiment, the steps of the method embodied in FIGS. 1 to 4 are performed sequentially and iteratively. The step of post-processing data of FIGS. 2 and 3 follows the step of acquiring data, mainly the captured images, of FIG. 1. Next, the step of scale correction of FIG. 4 is performed, and upon concluding this scale correction, another iteration is performed with the steps of acquiring and post-processing data. The method concludes when a stop criterion is satisfied after scale correction. Preferably, the stop criterion is established as a threshold on the variation of the scale adjustments e.sub.j between the last iteration and the preceding one. When this variation falls below a specific threshold, it is considered that the calibration has attained the required precision and the method ends definitively.
[0209] After the termination of the method, according to another embodiment it is possible to reorient and/or reposition the cameras (Cam) of the device (1) depending on the estimated extrinsic parameters.
[0210] FIG. 5 shows in detail the calibration solid (3) used in the preceding examples. This calibration solid (3) comprises two balls, BL.sub.1 and BL.sub.2, of different diameter, attached by a rod of known geometry. By arranging two balls, two calibration points per experiment and greater variability of views in the captures are obtained, and a scale correction can be carried out based on the distance between the balls. Even more advantageously, the fact that the diameters of the balls are different enhances the natural variability in positioning (when the solid is left to fall under the action of gravity or is cast from the lower position of the observation region (R)) and allows greater distinction of the connected regions corresponding to the projected balls. The sizes of the balls of the calibration solid (3) of FIG. 5 are sufficiently different that the ranges of apparent size on the sensors (the size of the projected circle), which vary depending on the position of each ball in the observation region (R), do not overlap.