Methods and computer program products for calibrating stereo imaging systems by using a planar mirror
10846885 · 2020-11-24
Assignee
Inventors
CPC classification
H04N13/221
ELECTRICITY
H04N13/282
ELECTRICITY
International classification
G06T7/80
PHYSICS
H04N13/282
ELECTRICITY
Abstract
Production of calibrated stereo images, and more particularly methods of producing calibrated stereo images by using a planar mirror, and computer program products to carry out the methods. By using the mirrored view of at least one camera together with multiple (different) mirrored views of an object in one or more captured images, the 3D coordinates of a point in real space with respect to the mirror's coordinate system can be easily determined even if the mirror's coordinate system is not known in advance. Additionally, the real distance between two selected spatial points appearing in one or more captured images can be determined on the basis of their corresponding image points. The invention includes the steps of finding a reference coordinate system by using the captured images, and then determining the transformations between the reference coordinate system and the camera coordinate system, as described in greater detail herein.
Claims
1. A method for calibrating a stereo imaging system, the method comprising: obtaining at least two images, each of the images being captured from a different camera position and comprising pictures of a mirrored view of at least one camera used to capture the respective image and a mirrored view of an object, thereby obtaining multiple views of said object; finding a respective center of a picture of the mirrored view of the at least one camera in each of the images; obtaining a focal length in pixels of the at least one camera; determining a direction of a normal vector of a mirror from a center of the mirrored view of the at least one camera; determining a distance between the at least one camera and the mirror for each of the images by using a reference point on the at least one camera, said reference point having known coordinates in a camera coordinate system, and using the coordinates of a corresponding point of the mirrored view of the at least one camera; determining a mirror plane equation in the camera coordinate system by using the direction of the normal vector of the mirror, the distance between the at least one camera and the mirror, and the focal length in pixels of the at least one camera; defining an up-vector in the plane of the mirror; selecting a reference point in the mirror's plane; defining a reference coordinate system with said reference point as its origo and said up-vector as its vertical y-axis; for each image, separately determining the coordinate transformation from the coordinate system of the at least one camera into a mirror coordinate system; for each image, determining the transformation from the respective mirror coordinate system into said reference coordinate system; and for any pair of images, determining the coordinate transformation from a camera coordinate system of a first camera position into a camera coordinate system of a second camera position.
2. The method of claim 1, wherein the up-vector is obtained by projecting a gravity vector onto the plane of the mirror.
3. The method of claim 1, wherein the up-vector is obtained by selecting corresponding point pairs in the at least one image.
4. The method of claim 1, wherein the at least one camera is comprised in any one of a mobile phone, a smart phone, a phablet, a tablet computer, a notebook, a digital camera, or the like.
5. A method of measuring a calibrated distance between two points of an object, the method comprising: calibrating said stereo imaging system according to claim 1; selecting an associated point pair of an object in one of the images; and calculating a real distance between the two points of said selected point pair of the object from the corresponding image pixel pair by using epipolar geometry.
6. A method of calibrated depth estimation for an object, the method comprising: calibrating said stereo imaging system according to claim 1; and generating a depth image of an object from the at least one captured image.
7. A method for calibrating a stereo imaging system, the method comprising: obtaining an image, said image comprising a view of an object, a mirrored view of the object, and a mirrored view of at least one camera used to capture the image, thereby obtaining multiple views of the object; finding a center of the mirrored view of the at least one camera in the image; obtaining a focal length in pixels of the at least one camera; determining a direction of a normal vector of the mirror from the center of the mirrored view of the at least one camera; determining a distance between the at least one camera and the mirror for the image by using a reference point on the at least one camera, said reference point having known coordinates in a camera coordinate system, and using coordinates of a corresponding point of the mirrored view of the at least one camera; determining a mirror plane equation in a coordinate system of the at least one camera by using the direction of the normal vector of the mirror, the distance between the at least one camera and the mirror, and the focal length in pixels of the at least one camera; and determining a coordinate transformation from the coordinate system of the at least one camera into an arbitrary mirror coordinate system having an origo in a plane of the mirror and a z-axis parallel to a normal vector of the plane of the mirror.
8. The method of claim 7, wherein the at least one camera is comprised in any one of a mobile phone, a smart phone, a phablet, a tablet computer, a notebook, a digital camera, or the like.
9. A method of calibrated depth estimation for an object, the method comprising: calibrating said stereo imaging system according to claim 7; and generating a depth image of an object from the captured image.
10. A non-transitory memory, which includes computer-readable instructions that, when executed by a computer, cause the computer to: obtain at least two images, each of the images being captured from a different camera position and comprising pictures of a mirrored view of at least one camera used to capture the respective image and a mirrored view of an object, thereby obtaining multiple views of said object; find a respective center of a picture of the mirrored view of the at least one camera in each of the images; obtain a focal length in pixels of the at least one camera; determine a direction of a normal vector of a mirror from a center of the mirrored view of the at least one camera; determine a distance between the at least one camera and the mirror for each of the images by using a reference point on the at least one camera, said reference point having known coordinates in a camera coordinate system, and using the coordinates of a corresponding point of the mirrored view of the at least one camera; determine a mirror plane equation in the camera coordinate system by using the direction of the normal vector of the mirror, the distance between the at least one camera and the mirror, and the focal length in pixels of the at least one camera; define an up-vector in the plane of the mirror; select a reference point in the mirror's plane; define a reference coordinate system with said reference point as its origo and said up-vector as its vertical y-axis; for each image, separately determine the coordinate transformation from the coordinate system of the at least one camera into a mirror coordinate system; for each image, determine the transformation from the respective mirror coordinate system into said reference coordinate system; and for any pair of images, determine the coordinate transformation from a camera coordinate system of a first camera position into a camera coordinate system of a second camera position.
11. The non-transitory memory of claim 10, wherein the up-vector is obtained by projecting a gravity vector onto the plane of the mirror.
12. The non-transitory memory of claim 10, wherein the up-vector is obtained by selecting corresponding point pairs in the at least one image.
13. The non-transitory memory of claim 10, wherein the at least one camera is comprised in any one of a mobile phone, a smart phone, a phablet, a tablet computer, a notebook, a digital camera or the like.
14. The non-transitory memory of claim 10, wherein the computer-readable instructions further cause the computer to: capture an image, the image comprising multiple views of an object; and generate a depth image of the object from the at least one captured image.
15. The non-transitory memory of claim 10, wherein the computer-readable instructions further cause the computer to: capture an image, the image comprising multiple views of an object; select an associated point pair of the object in the captured image; and calculate a real distance between two points of said selected point pair of the object from the corresponding image pixel pair by using epipolar geometry.
16. A non-transitory memory, which includes computer-readable instructions that, when executed by a computer, cause the computer to: obtain an image, said image comprising a view of an object, a mirrored view of the object, and a mirrored view of at least one camera used to capture the image, thereby obtaining multiple views of the object; find a center of the mirrored view of the at least one camera in the image; obtain a focal length in pixels of the at least one camera; determine a direction of a normal vector of the mirror from the center of the mirrored view of the at least one camera; determine a distance between the at least one camera and the mirror for the image by using a reference point on the at least one camera, said reference point having known coordinates in a camera coordinate system, and using coordinates of a corresponding point of the mirrored view of the at least one camera; determine a mirror plane equation in a coordinate system of the at least one camera by using the direction of the normal vector of the mirror, the distance between the at least one camera and the mirror, and the focal length in pixels of the at least one camera; and determine a coordinate transformation from the coordinate system of the at least one camera into an arbitrary mirror coordinate system having an origo in a plane of the mirror and a z-axis parallel to a normal vector of the plane of the mirror.
17. The non-transitory memory of claim 16, wherein the at least one camera is comprised in any one of a mobile phone, a smart phone, a phablet, a tablet computer, a notebook, a digital camera or the like.
18. The non-transitory memory of claim 16, wherein the computer-readable instructions further cause the computer to: capture an image, the image comprising multiple views of an object; and generate a depth image of the object from the at least one captured image.
19. The non-transitory memory of claim 16, wherein the computer-readable instructions further cause the computer to: capture an image, the image comprising multiple views of an object; select an associated point pair of the object in the captured image; and calculate a real distance between two points of said selected point pair of the object from the corresponding image pixel pair by using epipolar geometry.
20. The non-transitory memory of claim 16, wherein the reference point is the epipole of the image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The invention will now be described in detail through preferred embodiments with reference to the accompanying drawings wherein:
DETAILED DESCRIPTION OF THE INVENTION
(11) Within the context of the present description, the term image means the product of image capturing performed by an image recording device, such as an image sensor or a camera, generally referred to as camera hereinafter, and the term picture means a visual representation of an object (or person) within a captured image. An image may be a still image or a frame of a video sequence (also referred to as video image). The picture of an object in an image may represent either a normal view of an object or a mirrored view of an object showing in the mirror.
(13) A planar mirror 140 also has a Cartesian coordinate system K.sub.M having mutually orthogonal axes x.sub.M, y.sub.M and z.sub.M, wherein the axes x.sub.M and y.sub.M, and therefore the origo g of the mirror coordinate system K.sub.M are all in the plane of the mirror 140. The real camera device 100 has a mirrored view, a so-called virtual camera device 101 appearing behind the mirror. The virtual camera device 101 also has a virtual camera 111, which is a mirrored view of the real camera 110.
(14) A vector m is defined to be perpendicular to the mirror 140 and to have a length which is equal to the distance between the mirror 140 and the real camera 110. One can calculate the vector m using the point of the mirror 140 where the virtual camera 111 appears in the image that contains the camera's mirrored view, as will be described later.
(15) According to a first aspect of the present invention, the calibration is based on a camera-mirror setup shown in
(16) It is noted that in
(17) In
(18) The main steps of the calibration method of the present invention according to its first aspect are shown by the flow diagram of
(19) In step S200, at least two images are obtained by using the aforementioned camera-mirror setup shown in
(20) The image processing part of the method has the following four phases: A) Determining a coordinate transformation M*.sub.1 from a first camera coordinate system K.sub.C1 to an arbitrary mirror coordinate system K*.sub.M1 by using a first image, and determining a coordinate transformation M*.sub.2 from a second camera coordinate system K.sub.C2 to another arbitrary mirror coordinate system K*.sub.M2 by using a second image. The mirror coordinate systems K*.sub.M1 and K*.sub.M2 are selected so that their origo resides in the mirror's plane and their z-axis is parallel to a normal vector of the mirror's plane. B) Using a freely selected global vector, a so-called up-vector that can be detected in relation to all of the initial images, one can define a common y-axis of these mirror coordinate systems K*.sub.M1 and K*.sub.M2, and hence the transformations M**.sub.1 and M**.sub.2 from the first and second camera coordinate systems K.sub.C1, K.sub.C2 to the mirror coordinate systems K**.sub.M1 and K**.sub.M2, respectively. C) Determining a global origo g and finding the coordinates of this origo g in the mirror coordinate systems K**.sub.M1 and K**.sub.M2, thereby obtaining a specific mirror coordinate system K.sub.M, and then determining the coordinate transformations M.sub.1 and M.sub.2 from the first and second camera coordinate systems K.sub.C1, K.sub.C2, respectively, to the common mirror coordinate system K.sub.M, which is used as a reference coordinate system. D) Determining a coordinate transformation F from any camera coordinate system K.sub.Ck to the first camera coordinate system K.sub.C1 by determining the coordinate transformation M.sub.k with respect to a further image I.sub.k, wherein F=M.sub.k.sup.-1M.sub.1.
(21) The above phases of the calibration method of the present invention will now be described in detail with reference to the flow diagram shown in
(22) Determination of the Transformation M*
(23) In order to determine a coordinate transformation from a camera coordinate system to an arbitrary mirror coordinate system, the center of the pictures C1, C2 of the mirrored cameras must first be found in each of the images in step S202.
(24) In the calculations we assume that the coordinate transformations have the following general form:
(25) M = [ R t ; 0 1 ]
(26) where M is a complete homogeneous transformation matrix, R is a 3×3 rotation matrix, and t is a 3×1 translation vector.
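The homogeneous form above can be sketched in code. This is an illustrative numpy helper, not part of the patent; the function names are assumptions:

```python
# Illustrative sketch (not from the patent): assembling the homogeneous
# transformation M = [ R t ; 0 1 ] from a 3x3 rotation R and a translation t.
import numpy as np

def make_homogeneous(R, t):
    """Return the 4x4 matrix with R in the upper-left block and t as the
    upper-right column."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

def transform_point(M, x):
    """Apply the homogeneous transform M to a 3D point x."""
    return (M @ np.append(x, 1.0))[:3]
```

With the identity rotation, applying M simply translates the point by t.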
(27) To calculate the rotation matrix R, first the z-axis of the camera is to be transformed to the normal vector n of the mirror plane. The normal vector is
(28) n=m/|m|
wherein m is the vector pointing from the camera to the mirror and orthogonal to the mirror's plane. Consequently, |m| defines the distance between the mirror and the camera.
(29) The rotation matrix R should also transform the y-axis of the camera to the projection of the same global vector onto the mirror plane. Hence, it is necessary to define a vector u that is common to all captured images. Based on said vector u, the rotation matrix may be defined as:
R=(u×n u n)
(30) where u×n stands for the cross product of the vectors u and n, and u×n, u and n are the columns of R. The projection of the global vector u* onto the mirror's plane will result in the up-vector u of the mirror coordinate system K.sub.M.
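The column construction of R can be sketched as follows; an illustrative numpy helper under the assumption that u lies in the mirror plane (i.e. is orthogonal to n):

```python
# Illustrative sketch: the rotation matrix with columns (u x n, u, n),
# where n is the unit mirror normal and u is the up-vector in the
# mirror plane. For orthonormal R, u must be orthogonal to n.
import numpy as np

def mirror_rotation(u, n):
    u = np.asarray(u, float) / np.linalg.norm(u)
    n = np.asarray(n, float) / np.linalg.norm(n)
    return np.column_stack([np.cross(u, n), u, n])
```

For u along the y-axis and n along the z-axis this reduces to the identity, since the camera axes already coincide with the mirror frame.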
(31) In a camera coordinate system, the mirror plane can be formulated as:
n.sup.Tx-λ=0
(32) wherein x is any point of the mirror's plane, n is the unit normal vector of the mirror pointing from the camera toward the mirror, and λ=|m| is the distance between the camera and the mirror.
(33) Note that there exist numerous possible transformations M* from a particular camera coordinate system to an arbitrary mirror coordinate system K*.sub.M, since the mirror coordinate system is not completely specified at this stage. The only restrictions for the mirror coordinate system are that the third column of the rotation matrix R in the coordinate transformation M* should be
(34) n=m/|m|
and the translation vector t of the transformation M* should be a vector pointing from the camera's focal point to any point of the mirror's plane, that is
n.sup.Tt-λ=0
(35) In step S210, the mirror plane equation is determined. To this end the value of the vector m is to be calculated. This can be done in three steps. First, the direction of the vector m is determined using the value of a so-called focal length in pixels acquired in step S204 and then the length of the vector m is determined using a selected point of the camera device, said point having known coordinates in the camera's coordinate system.
(36) The focal length f of the camera may either be a constant value and thus specified by the manufacturer of the camera, or it may be set by the user when capturing the images. In both cases, the focal length f of the camera is therefore assumed to be known. Next, the value of the focal length in pixels H is to be obtained. This may be obtained by the following steps.
(37) Let Q be a point in the (either real or virtual) space and let p denote a respective pixel in the captured image. The pixel coordinates p.sub.x, p.sub.y of the point p in the image may be defined in the camera coordinate system by the equations:
(38) p.sub.x=(f/s)(X/Z), p.sub.y=(f/s)(Y/Z)
(39) where (X,Y,Z) are the coordinates of the point Q in the camera coordinate system, f is the focal length of the capturing camera and s is the pixel size of the camera. Generally, the pixel size s is a camera-specific parameter given by the manufacturer of the camera. Its value is typically about 1 micron.
(40) For making the following calculations easier, the parameter focal length in pixels H is defined as the ratio of the focal length f and the pixel size s of the camera:
(41) H=f/s
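The projection model and the focal length in pixels can be sketched together; an illustrative helper with assumed names and synthetic values:

```python
# Illustrative sketch: H = f/s, and the pinhole projection
# p_x = H*(X/Z), p_y = H*(Y/Z) used throughout the calibration.
def focal_length_in_pixels(f, s):
    """f: focal length (e.g. metres), s: pixel size in the same unit."""
    return f / s

def project(Q, H):
    """Project a camera-frame point Q = (X, Y, Z) to pixel coordinates."""
    X, Y, Z = Q
    return H * X / Z, H * Y / Z
```

For example, a 4 mm lens with 1 micron pixels gives H = 4000 pixels.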
(42) In the next step S206, the direction n of the mirror's normal vector m will be determined. It can be calculated using the fact that the line between the center of the real camera and the center of the mirrored view of the camera is perpendicular to the mirror's plane. Hence the direction n of the mirror's normal vector m can be calculated as follows:
(43) n=a(c.sub.x, c.sub.y, H).sup.T
(44) wherein (c.sub.x,c.sub.y) are the coordinates of the center of the picture C1, C2 of the mirrored camera in the captured image and a is a scalar value that gives a vector of length 1 for n:
(45) a=1/√(c.sub.x.sup.2+c.sub.y.sup.2+H.sup.2)
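This normalization step can be sketched directly; an illustrative numpy helper (the name and test values are assumptions):

```python
# Illustrative sketch: the unit normal direction of the mirror from the
# pixel centre (c_x, c_y) of the mirrored camera and the focal length in
# pixels H, i.e. n = a*(c_x, c_y, H)^T with a = 1/|(c_x, c_y, H)|.
import numpy as np

def mirror_normal(cx, cy, H):
    v = np.array([cx, cy, H], float)
    return v / np.linalg.norm(v)
```

When the mirrored camera appears at the principal point (c_x = c_y = 0), the normal is the camera's own z-axis.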
(46) For determining the mirror vector m it is still necessary to find its length (i.e. the distance between the mirror and the camera), namely the scalar value |m|. This value is called the aspect ratio of the camera in the image.
(47) It is easy to calculate said aspect ratio if the camera's plane is parallel to the mirror's plane (i.e. the camera's z-axis is perpendicular to the mirror's plane). In this case it can be calculated using the ratio of the distance between two real points to the distance, measured in pixels, between the corresponding points shown in the image.
(48) Calculating the distance between the camera and the mirror's plane will be more complicated if the camera is not parallel to the mirror. For doing these calculations it is assumed that there is a point U on the camera device, said point having known coordinates in the camera coordinate system K.sub.C and this point can be detected on the captured image.
(49) Let us define the length of the vector m by λ, that is, m=λn. The coordinates of the mirrored view V of the point U as a function of λ can be calculated as follows:
V=U-2(n.sup.TU-λ)n
(50) It is assumed that a projection of V onto the image has been detected. Let us denote this projected point by v. The coordinates of v can be expressed in the following way:
(51) v.sub.x=H(V.sub.x/V.sub.z), v.sub.y=H(V.sub.y/V.sub.z)
(52) Any of these equations can be solved to find λ, since they are linear in this single variable. As mentioned before, this leads to finding m=λn.
(53) It is noted that one needs to ensure that the selected point U does not reside in the direction of the vector m, since in this case its projection onto the image will always coincide with the projection of the camera's center and the calculations cannot be carried out.
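Assuming the mirror plane is written n.sup.Tx=λ with unit normal n pointing from the camera toward the mirror (the convention of the reconstruction here) and the reflection V=U-2(n.sup.TU-λ)n, solving the v.sub.x-equation for λ can be sketched as follows. This is an illustrative numpy helper with synthetic values, not the patent's code:

```python
# Illustrative sketch: solving the linear x-equation  v_x = H*V_x/V_z
# for the camera-mirror distance lambda, where V = U - 2*(n.U - lam)*n
# is the mirrored view of the known device point U.
import numpy as np

def solve_lambda(U, n, H, vx):
    U = np.asarray(U, float)
    n = np.asarray(n, float)
    # v_x*(U_z - 2*w*n_z) = H*(U_x - 2*w*n_x)  with  w = n.U - lam
    w = (vx * U[2] - H * U[0]) / (2.0 * (vx * n[2] - H * n[0]))
    return (n @ U) - w
```

The degenerate case noted in the text (U along m) makes the denominator vanish, so such a reference point must be avoided.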
(54) As a result, the mirror plane equation can be obtained in step S210 according to the above mentioned formula:
n.sup.Tx-λ=0
(55) Determination of the Transformation M**
(56) Next, in step S212, a specific up-vector u is defined for the mirror coordinate system K.sub.M in the following way.
(57) Let u* be any vector in the space. A possible selection for u* may be the gravity vector, which can be obtained from a gravity sensor of the camera device, for example. Another option may be to select two points in the space with known distance from the mirror's plane. In this latter case one needs to be able to find the corresponding pixels in the captured images. In fact it is not necessary to actually know this vector u*; it is only needed to know (or to calculate) its projection onto the mirror's plane, which is denoted by u. This projected vector u is regarded as the so-called up-vector of the mirror coordinate system. The up-vector makes it possible to define a coordinate transformation M** from the camera coordinate system to the mirror coordinate system in a more determined way, by setting the second column of the rotation matrix R to u. It is noted that at this point the rotation matrix R is entirely defined, since the third column is the mirror's normalized normal vector and the first column can be acquired from the principle of orthonormality.
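The projection of u* onto the mirror plane can be sketched as follows; an illustrative numpy helper (names are assumptions):

```python
# Illustrative sketch: projecting a freely chosen global vector u*
# (e.g. the gravity vector) onto the mirror plane with unit normal n,
# then normalizing, to obtain the up-vector u.
import numpy as np

def up_vector(u_star, n):
    n = np.asarray(n, float) / np.linalg.norm(n)
    u = np.asarray(u_star, float) - (n @ u_star) * n
    return u / np.linalg.norm(u)
```

By construction the result is orthogonal to n, so it is a valid y-axis for the mirror coordinate system.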
(58) Determination of the Transformation M
(59) In step S216, the origo of the mirror coordinate system K**.sub.M is determined. This can be done in several ways, the most preferred ways of which will be introduced hereinafter. In these schemes the mirror coordinate system will provide a reference coordinate system for subsequent coordinate transformations.
(60) In a first preferred way, the origo of the mirror coordinate system is obtained by freely selecting a point in the space in step S214. To this end, it is assumed that there is a point p at a known distance d from the mirror and this point can be seen in each of the at least one captured images. For example, this point may be selected as a visual mark on the mirror itself. The origo of the mirror coordinate system is considered to be the projection of this point p onto the mirror's plane. Let the image pixel coordinates of the selected point p in the k-th image be (p.sub.x.sup.k,p.sub.y.sup.k), and let its distance from the mirror be d. Let g.sup.k be the base vector of the image ray. This means that the point p referring to (p.sub.x.sup.k,p.sub.y.sup.k) can be written as a multiple of g.sup.k, wherein g.sup.k can be written using the pixel coordinates and the focal length in pixels of the camera:
(61) g.sup.k=(p.sub.x.sup.k, p.sub.y.sup.k, H).sup.T
(62) The 3D real coordinates p=μg.sup.k can be easily calculated in the camera coordinate system by noting that p is the cross point of a multiple of the ray vector and the translation of the mirror plane by d (with λ the camera-mirror distance), that is
n.sup.Tx-λ+d=0.
(63) As a result, μ can be calculated by finding the multiplication factor for which:
(64) n.sup.T(μg.sup.k)-λ+d=0
(65) From the above equation the 3D coordinates of point p in the camera coordinate system are:
(66) p=((λ-d)/(n.sup.Tg.sup.k))g.sup.k
(67) The origo of the mirror coordinate system can be obtained by adding a vector of length d in the direction of the mirror plane normal to p, resulting in the following expression:
(68) g=p+dn
(69) A second preferred way of determining the origo of the mirror coordinate system is to select an arbitrary point in the mirror plane in step S214 (e.g. the projection of the focal point of the camera), finding the associated image point in one of the captured images, and then finding a few further corresponding points in at least one other captured image. The origo of the mirror coordinate system can then be calculated by means of an optimization method (e.g. least mean squares or the generalized Hough transform). It is noted that in this scheme, more than one associated point pair is needed for the calculations. The optimization problem follows straightforwardly from the above equations. Let us assume that there are some corresponding pixels in the images (p.sub.x,i.sup.k, p.sub.y,i.sup.k), where the index i denotes the different points, and the index k denotes the different images. Then the base vector of the image ray g.sub.i.sup.k of a pixel point i in an image k is
(70) g.sub.i.sup.k=(p.sub.x,i.sup.k, p.sub.y,i.sup.k, H).sup.T
(71) It is noted that the distances of these points from the mirror's plane are unknown. Let us denote these distances by d.sub.i. This results in the following set of equations:
(72) R.sub.kp.sub.i.sup.k(d.sub.i)+t.sub.k=R.sub.lp.sub.i.sup.l(d.sub.i)+t.sub.l for every image pair (k, l) and every point i, wherein p.sub.i.sup.k(d.sub.i)=((λ.sub.k-d.sub.i)/(n.sub.k.sup.Tg.sub.i.sup.k))g.sub.i.sup.k
(73) where the translations t.sub.k and the distances d.sub.i are unknown. One corresponding point pair comes with one new unknown d.sub.i and gives a two-dimensional constraint, as shown above, for each image pair. As a result, two corresponding point pairs determine the missing translations t.sub.k to the common origo of the mirror coordinate system.
(74) A third preferred way of determining the origo of the mirror coordinate system is, as shown in the example of
(75) Based on the above calculations and considerations, the coordinate transformation from the coordinate system of the image-capturing camera into a mirror coordinate system is determined for each image (step S218), and then the coordinate transformation from a particular mirror coordinate system into a reference coordinate system is determined for each image (step S219).
(76) Hence, in step S220, a coordinate transformation between any two camera coordinate systems, each belonging to a particular spatial image-capturing position, can be carried out by using the above mentioned fundamental matrix:
F.sub.kn=M.sub.k.sup.-1M.sub.n
(77) wherein M.sub.k and M.sub.n are the coordinate transformations from the camera coordinate systems K.sub.Ck and K.sub.Cn, respectively, into the mirror coordinate system K.sub.M. The advantage of the above described calibration method is that the coordinate transformation matrices M can be determined for each captured image separately; thus the calculation of the fundamental matrix F requires less computational effort than in other known methods.
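The composition of the per-image transformations can be sketched as follows; an illustrative numpy helper (matrix names follow the text, everything else is assumed):

```python
# Illustrative sketch: the relative transform F_kn = inv(M_k) @ M_n,
# mapping coordinates of camera position n into those of camera
# position k via the common mirror coordinate system.
import numpy as np

def relative_transform(Mk, Mn):
    return np.linalg.inv(Mk) @ Mn
```

For two pure translations the result is again a pure translation, namely the difference of the two offsets.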
(78) The fundamental matrix F can be visualized by epipolar lines as shown in
(79) In a second aspect of the present invention, multiple views of the object are shown within one image, wherein one of the views of the object is a normal view and the other view of the object is a mirrored view thereof. The image shall also contain the mirrored view of the image-capturing camera itself.
(80) According to the second aspect of the present invention, the calibration is based on the camera-mirror setup shown in
(81) It is noted that in
(82) In
(83) The main steps of the calibration method of the present invention according to its second aspect are shown by the flow diagram of
(84) In step S300 one image is captured using the aforementioned camera-mirror setup as shown in
(85) The image processing part of the method according to the second aspect of the present invention requires the only image processing phase of: A) Determining a coordinate transformation M* from the camera coordinate system K.sub.C to an arbitrary mirror coordinate system K*.sub.M by using the image. The mirror coordinate system K*.sub.M is selected so that its origo resides in the mirror's plane and its z-axis is parallel to a normal vector of the mirror's plane.
(86) It is noted that in this aspect of the present invention, an arbitrary mirror coordinate system is enough for the calibration of the camera-mirror setup shown in
(87) The above phase A) of the image processing is carried out in the same way as in the first aspect, with the difference that only one coordinate transformation is determined between the camera coordinate system and the mirror coordinate system (which may have its origo anywhere in the mirror's plane and its up-vector extending in any direction within the mirror's plane). Accordingly, steps S302 to S310 correspond to steps S202 to S210 of the first method, respectively. In particular, in step S302, the center of the picture C1 of the mirrored camera is found in the image; then in step S304 the focal length in pixels H=f/s of the camera is obtained, followed by determining the direction of the mirror's normal vector in step S306 and determining the distance between the mirror and the camera, i.e. the length λ of the vector m, in step S308. As a result, the mirror plane equation is obtained in step S310 on the basis of the captured image.
(88) In this case the center of the picture C1 of the mirrored camera is an epipole E of the stereo image system defined by the real and mirrored views of the object. Herein the term epipole is used to define the point where the epipolar lines meet. In projective geometry the epipole is the point where the lines that are parallel with the mirror's normal vector meet. This means that a line v that connects the epipole E with any point V1 of the picture O1 of the normal view of the object in the image I3 also contains the corresponding point V2 of the picture O2 of the mirrored object. By finding these points V1, V2 in the image I3, the position of the point in the real three-dimensional space can be determined. In this regard it is assumed that the pixel coordinates of a point and the mirrored view of that point are both known, while only the distance between said point and the mirror is unknown. In this case there are two specific constraints, namely: 1. The distance between the point and the mirror's plane equals the distance between the mirrored point and the mirror. 2. The vector that connects the real point and the mirrored point is perpendicular to the mirror's plane (and therefore it is also parallel to the normal vector of the mirror's plane).
(89) From the above two conditions the distance between the real point and the mirror can be simply calculated as described below.
(90) Let (u.sub.x,u.sub.y) be the coordinates of the picture u of a real point p in a captured image and (v.sub.x,v.sub.y) be the coordinates of the picture v of the mirrored view q of the point p within the same image. Once the distance c between the point p and the mirror is determined, the 3D coordinates of the real point p in the camera coordinate system can be easily calculated using the equations as described above.
(91) Let μ.sub.1 and μ.sub.2 be selected in a way that
(92) p=μ.sub.1(u.sub.x, u.sub.y, H).sup.T and q=μ.sub.2(v.sub.x, v.sub.y, H).sup.T
Clearly, n.sup.Tp=λ-c and n.sup.Tq=λ+c.
(93) Hence,
(94) c=(n.sup.T(q-p))/2
(95) Furthermore, it is known that the differential vector q-p is parallel to the vector m, hence q-p=τn. Substituting μ.sub.1 and μ.sub.2 leads to a simple linear equation system for c and τ. By solving the equation system, the 3D coordinates of the point p can be calculated.
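The resulting linear system can be sketched end to end. An illustrative numpy sketch assuming the mirror plane n.sup.Tx=λ with n pointing from the camera toward the mirror; the least-squares formulation and all names are assumptions:

```python
# Illustrative sketch: given the pixel u of a point p and the pixel v of
# its mirrored view q in the same image, recover p and its distance c
# from the mirror plane n.x = lam. Unknowns mu1, mu2, c satisfy
#   mu2*b - mu1*a = 2*c*n    (q - p is 2c along the mirror normal)
#   n.(mu1*a)     = lam - c  (p lies at distance c in front of the mirror)
import numpy as np

def triangulate_with_mirror(u, v, H, n, lam):
    a = np.array([u[0], u[1], H], float)
    b = np.array([v[0], v[1], H], float)
    A = np.zeros((4, 3))
    rhs = np.zeros(4)
    A[:3, 0] = -a
    A[:3, 1] = b
    A[:3, 2] = -2.0 * np.asarray(n, float)
    A[3] = [np.dot(n, a), 0.0, 1.0]
    rhs[3] = lam
    mu1, mu2, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return mu1 * a, c
```

With exact pixel data the overdetermined 4x3 system is consistent, so the least-squares solution reproduces the point exactly.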
(96) Based on the above calculations and considerations, the coordinate transformation from the coordinate system of the image-capturing camera into an arbitrary mirror coordinate system having an origo in the mirror's plane and a z-axis parallel to a normal vector of the mirror's plane can be determined in step S316.
(97) Upon calculating the positions of further associated point pairs in the image, the distances between these points in the real 3D space can be calculated.
(98) The methods of the invention make it possible to determine the real 3D coordinates of points which appear in any one of the at least one captured image. Thus the methods of the invention can be further used, for example, to measure the distance between two points of an object, which are visible in at least two different views in the at least one captured image. The different views of the object may include, for example, two different mirror views in two captured images, or a normal view and a mirrored view of the object within one image.
(99) Accordingly, in a third aspect of the invention, there is provided a method of measuring a calibrated distance between two points of an object, wherein the method comprises the steps of: capturing at least one image with multiple views of said object by means of a camera-mirror setup including at least one camera and a planar mirror; calibrating said camera-mirror setup through the steps of the method according to the first or second aspect of the invention; selecting an associated point pair of the object in one of the at least one captured image; and calculating the real distance between the two points of said selected point pair of the object from the corresponding image pixel pair by using epipolar geometry.
(100) Once a stereo imaging system described above is calibrated by means of the above steps, a depth estimation for a captured object may be performed to generate a depth image of the object. Furthermore, once the stereo imaging system of the invention is calibrated through the above steps, the measurement of any kind of distances between two points becomes possible by finding associated point pairs in the at least one captured image.
(101) Accordingly, in a fourth aspect of the invention, there is provided a method of calibrated depth estimation for an object, wherein the method comprises the steps of: capturing at least one image with multiple views of said object by means of a camera-mirror setup including at least one camera and a planar mirror; calibrating said camera-mirror setup through the steps of the method of any one of the first or second aspects of the invention; and generating a depth image of the object from the at least one captured image.
(102) In a fifth aspect, the present invention also relates to a computer program product, which includes computer-readable instructions that, when running on a computer, carry out the above steps of the method according to the first aspect of the present invention.
(103) In a sixth aspect, the present invention also relates to a computer program product, which includes computer-readable instructions that, when running on a computer, carry out the above steps of the method according to the second aspect of the present invention.